[time-nuts] Question about frequency counter testing

Magnus Danielson magnus at rubidium.dyndns.org
Sat Jun 23 16:40:04 UTC 2018


Hi Oleg,

On 06/21/2018 03:05 PM, Oleg Skydan wrote:
> Hi!
> 
> From: "Magnus Danielson" <magnus at rubidium.dyndns.org>
> 
>>> I have written a note and attached it. The described modifications to
>>> the original method were successfully tested on my experimental HW.
>>
>> You should add the basic formula
>>
>> x_{N_1+n} = x_{N_1} + x_n^0
>>
>> prior to (5) and explain that the expected phase-ramp within the block
>> will have a common offset x_{N_1} and that the x_n^0 series is the
>> series of values with that offset removed. This is fine, it should
>> just be introduced before it is applied in (5).
> 
> I have corrected the document and put it here:
> http://skydan.in.ua/FC/Efficient_C_and_D_sums.pdf
> 
> It should be clearer now.

It is much better now. You should consider publishing it, with a more
thorough description of the surrounding setup.
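
To sketch how the split carries into the block sums (I am assuming here,
as in the note, that the sums in question are the plain and the
index-weighted sums of the phase samples over the block):

\sum_{n=0}^{N_2-1} x_{N_1+n}   = N_2 x_{N_1} + \sum_{n=0}^{N_2-1} x_n^0
\sum_{n=0}^{N_2-1} n x_{N_1+n} = x_{N_1} \sum_{n=0}^{N_2-1} n
                                 + \sum_{n=0}^{N_2-1} n x_n^0

so only the offset x_{N_1} and the sums over the offset-free series are
needed.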

>> Notice that E as introduced in (8) and (9) is not needed, as you can
>> directly convert it into N(N_2-1)/2.
> 
> Oh! I should have noticed that, thanks for the valuable comment!

Well, it is exactly sums like these that have to be solved for the full
processing trick, so the conversion comes naturally and should be used
even for this application of the basic approach.
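
For reference, the closed form follows from the arithmetic series

\sum_{n=0}^{N_2-1} n = N_2(N_2-1)/2

(assuming E is, as the conversion to N(N_2-1)/2 suggests, an accumulation
over the block's sample indices, possibly scaled), so nothing has to be
accumulated at run time.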

>>> They can be computed with a small memory footprint, but they will be
>>> non-overlapped PDEVs, so the confidence at large tau will be poor
>>> (for practical measurement durations). I have working code that
>>> implements such an algorithm. It uses only 272 bytes of memory per
>>> decade (1-2-5 values).
>>
>> Seems very reasonable. If you are willing to use more memory, you can
>> do overlapping processing once the data has been decimated down to a
>> suitable rate. On the other hand, considering the sample rate, there
>> is a lot of gain already.
> 
> I have optimized the continuous PDEV calculation algorithm, and it now
> uses only 140 bytes per decade.
> 
> I will probably not implement overlapping PDEV calculation, to keep
> things simple (no external memory), and will just do the continuous
> PDEV calculation. The more sophisticated processing can easily be done
> on the PC side.

Notice that tau0, N, C and D should be delivered to the PC, one way or
another. That is what the PC needs in order to continue and extend the
processing, so you do not want to reduce the data to phase or frequency
measures first.
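
A minimal sketch in C of such a per-block record and its accumulation
(the field names and the exact meaning of C and D are my assumptions
here, C as the plain sum and D as the index-weighted sum of the phase
samples, not your actual code):

    #include <stdint.h>

    /* One record per completed block; this is what would be delivered
     * to the PC so it can continue and extend the processing without
     * raw phase or frequency data. */
    struct pdev_block {
        double   tau0; /* basic sampling interval of the decimated stream */
        uint32_t N;    /* number of phase samples in the block            */
        double   C;    /* assumed: sum of x_n over the block              */
        double   D;    /* assumed: sum of n * x_n over the block          */
    };

    /* Accumulate one phase sample into the running block sums. */
    static void pdev_block_push(struct pdev_block *b, double x)
    {
        b->C += x;
        b->D += (double)b->N * x;
        b->N += 1;
    }

Once a block is complete, the four fields are streamed to the PC and the
structure is reset for the next block.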

>>> ... but a 2x or 4x one-shot resolution improvement (down to 1.25 ns
>>> or 625 ps) is relatively simple to implement in HW and should be
>>> worth trying.
> 
> So, I tried it with a "quick and dirty" HW setup. It turned out not to
> be so simple in real life :) There was a problem (probably crosstalk or
> a grounding issue) which led to unstable phase measurements. So I got
> no improvement (the results with 1.25 ns resolution were worse than
> with 2.5 ns resolution). I have to do more experiments with a better
> HW implementation.

Yes, at those time scales you need good separation, and ground-bounce
can be troublesome. It's a future improvement once you learn how to
design that part properly.

Cheers,
Magnus


