[time-nuts] Question about frequency counter testing

Magnus Danielson magnus at rubidium.dyndns.org
Wed Jun 6 21:18:57 UTC 2018


Hi Oleg,

On 06/06/2018 02:53 PM, Oleg Skydan wrote:
> Hi, Magnus!
> 
> Sorry for the late answer, I injured my left eye last Monday, so I had
> very limited ability to use a computer.

Sorry to hear that. Hope you heal up well and quickly.

> From: "Magnus Danielson" <magnus at rubidium.dyndns.org>
>> As long as the sums C and D become correct, your
>> path to them can be whatever.
> 
> Yes. It produces the same sums.
> 
>> Yes, please do, then I can double-check it.
> 
> I have written a note and attached it. The described modifications to the
> original method were successfully tested on my experimental HW.

You should add the basic formula

x_{N_1+n} = x_{N_1} + x_n^0

prior to (5) and explain that the expected phase ramp within the block
will have a common offset x_{N_1}, and that the x_n^0 series is the
series of values with that offset removed. This is fine, it should just
be introduced before it is applied in (5).

Notice that E as introduced in (8) and (9) is not needed, as you can
directly convert it into N(N_2-1)/2.
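
In sketch form (Python; I am assuming here that C and D are the
per-block sums of x_n and n*x_n, your note has the exact definitions),
the offset removal and the restoration of the full sums look like this:

def block_sums_offset_removed(x_block):
    # x_block holds x_{N_1} .. x_{N_1+N_2-1}; the offset-removed series
    # is x_n^0 = x_{N_1+n} - x_{N_1}, which keeps the running sums small.
    offset = x_block[0]           # x_{N_1}, the common offset
    C0 = 0.0
    D0 = 0.0
    for n, x in enumerate(x_block):
        x0 = x - offset           # x_n^0, reduced dynamic range
        C0 += x0
        D0 += n * x0
    N2 = len(x_block)
    # Restore the full-block sums; the closed form
    # sum(n, n=0..N2-1) = N2*(N2-1)/2 is presumably what lets E fold away.
    C = C0 + N2 * offset
    D = D0 + offset * N2 * (N2 - 1) / 2.0
    return C, D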

Anyway, you have clearly understood the toolbox given to you, and your
contribution is to play the same game while reducing the dynamic range
needed within the blocks. Neat. I may include that with due reference.

>> Yeah, now you can move your hardware focus to considering interpolation
>> techniques beyond the processing power of least-squares estimation, which
>> integrates noise way down.
> 
> If you are talking about adding traditional HW interpolation of the
> trigger events, I have no plans to do it. It is not possible while
> keeping the 2.5ns base counter resolution (there is no way to output a
> 400MHz clock signal out of the chip), and I do not want to add extra
> complexity to the HW of this project.
> 
> But the HW I use can simultaneously sample up to 10 timestamps. So I
> can (theoretically) push the one-shot resolution down to 250ps using
> several delay lines. I do not think going down to 250ps makes much
> sense (and I have other plans for that additional HW), but a 2x or 4x
> one-shot resolution improvement (down to 1.25ns or 625ps) is relatively
> simple to implement in HW and should be a good idea to try.

Sounds fun!
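
If I understand the staggered delay-line trick right, it amounts to
something like this hypothetical Python illustration (T0 and the channel
layout are my assumptions, not your design):

T0 = 2.5e-9  # base counter resolution, seconds

def combined_timestamp(raw_counts, K):
    # raw_counts[k] is the integer counter reading from channel k, which
    # sees the input through an extra delay of (k/K)*T0. Averaging the
    # delay-corrected timestamps pushes the quantization step toward T0/K.
    total = 0.0
    for k, c in enumerate(raw_counts):
        t = c * T0          # channel timestamp in seconds
        t -= k * T0 / K     # remove that channel's known extra delay
        total += t
    return total / K

With K = 2 or 4 that is your 1.25ns or 625ps case.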

>>> I will probably throw out the power-hungry and expensive SDRAM chip or
>>> use a much smaller one :).
>>
>> Yeah, it would only be if you build multi-tau PDEV plots that you would
>> need much memory; other than that, it is just buffer memory for the data
>> before it goes to off-board processing, at which point you would need to
>> convey the C, D, N and tau0 values.
> 
> Yes, I want to produce multi-tau PDEV plots :).

Makes good sense. :)

> They can be computed with a small memory footprint, but they will be
> non-overlapped PDEVs, so the confidence level at large taus will be poor
> (with practical measurement durations). I have working code that
> implements such an algorithm. It uses only 272 bytes of memory for each
> decade (1-2-5 values).

Seems very reasonable. If you are willing to use more memory, you can do
overlapping once you have decimated down to a suitable rate. On the other
hand, considering the rate of samples, there is a lot of gain there
already.
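
For reference, here is roughly how I picture the small-footprint
non-overlapped estimator in Python, taking PDEV as half the mean-square
difference of least-squares frequency estimates over adjacent blocks (a
sketch of the idea, surely not identical to your code):

import math

class NonOverlappedPDEV:
    # O(1)-memory running PDEV estimator for one tau = m * tau0, m >= 2.
    def __init__(self, m, tau0):
        self.m, self.tau0 = m, tau0
        self.n = 0            # index within the current block
        self.C = 0.0          # sum of x_n over the block
        self.D = 0.0          # sum of n * x_n over the block
        self.prev_y = None    # LS frequency estimate of the previous block
        self.acc = 0.0        # running sum of (y_k - y_{k-1})^2 / 2
        self.count = 0        # number of accumulated block pairs

    def add(self, x):         # feed one phase sample (seconds)
        self.C += x
        self.D += self.n * x
        self.n += 1
        if self.n == self.m:
            m = self.m
            # least-squares slope over the block:
            # y = 12 * (D - (m-1)/2 * C) / (tau0 * m * (m^2 - 1))
            y = 12.0 * (self.D - 0.5 * (m - 1) * self.C) \
                / (self.tau0 * m * (m * m - 1))
            if self.prev_y is not None:
                self.acc += 0.5 * (y - self.prev_y) ** 2
                self.count += 1
            self.prev_y = y
            self.n, self.C, self.D = 0, 0.0, 0.0

    def pdev(self):
        return math.sqrt(self.acc / self.count) if self.count else None

A handful of these per decade is consistent with your 272-byte figure.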

> I need to think about how to do the overlapping PDEV calculations with
> minimal memory/processing power requirements (I am aware that decimation
> routines should not use the overlapped calculations).

It's fairly simple: as you decimate samples and/or blocks, the produced
blocks overlap one way or another. The multiple overlap variants should
each behave as a complete PDEV stream, and the variances can then be
added safely.
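
In sketch form, reusing the NonOverlappedPDEV class from above: run one
stream per tau0 offset and pool their accumulators, which is the "added
safely" part:

import math

def overlapped_pdev(samples, m, tau0):
    streams = [NonOverlappedPDEV(m, tau0) for _ in range(m)]
    for i, x in enumerate(samples):
        for k, s in enumerate(streams):
            if i >= k:        # stream k starts k samples late
                s.add(x)
    # each stream is a complete non-overlapped PDEV estimate; pooling
    # their accumulators gives the fully overlapped estimate
    acc = sum(s.acc for s in streams)
    cnt = sum(s.count for s in streams)
    return math.sqrt(acc / cnt) if cnt else None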

> BTW, is there any "optimal overlapping"? Or should I just use as much
> data as I can process?

"optimal overlapping" would be when all overlapping variants is used,
that is all with tau0 offsets available. When done for Allan Deviation
some refer to this as OADEV. This is however an misnomer as it is an
ADEV estimator which just has better confidence intervals than the
non-overlapping ADEV estimator. Thus, both estimator algorithms have the
same scale of measure, that of ADEV, but different amount of Equivalent
Degrees of Freedom (EDF) which has direct implications on the confidence
interval bounds. The more EDF, the better confidence interval. The more
overlapping, the more EDF. Further improvements would be TOTAL ADEV and
Theo, which both aim to squeeze out as much EDF as possible from the
dataset, in an attempt of reducing the length of measurement.
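
To make the estimator distinction concrete, the two ADEV estimators from
phase samples x (tau = m * tau0) are the standard forms:

import math

def adev_overlapping(x, m, tau0):
    # second differences at every offset: more terms, more EDF
    # (requires len(x) > 2*m)
    tau = m * tau0
    N = len(x)
    s = sum((x[i + 2 * m] - 2.0 * x[i + m] + x[i]) ** 2
            for i in range(N - 2 * m))
    return math.sqrt(s / (2.0 * tau * tau * (N - 2 * m)))

def adev_nonoverlapping(x, m, tau0):
    # same measure, but offsets step by m: fewer terms, fewer EDF
    tau = m * tau0
    idx = range(0, len(x) - 2 * m, m)
    s = sum((x[i + 2 * m] - 2.0 * x[i + m] + x[i]) ** 2 for i in idx)
    return math.sqrt(s / (2.0 * tau * tau * len(idx)))

Both converge to the same ADEV; only the confidence intervals differ.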

>> Please report on that progress! Sounds fun!
> 
> I will drop a note when I move on to the next step. Things are a bit
> slower now.

Take care. Heal up properly. It's a hobby after all. :)

Good work there.

Cheers,
Magnus

> Thanks!
> Oleg
> 


