[time-nuts] Re: Timestamping counter techniques : phase computation question

Attila Kinali attila at kinali.ch
Mon Jan 31 17:00:57 UTC 2022


On Sun, 30 Jan 2022 12:46:19 +0100
Erik Kaashoek <erik at kaashoek.com> wrote:

> 1: Is using linear regression as described above a good method to 
> calculate the phase relation between events and clock? If not, what 
> method to use?

Yes, it is. You can see each measurement as a statistically independent
sample. Pendulum uses this in their time-stamping counters [1].
One big assumption of this method is that the frequency (i.e. the frequency
of the device measured vs the frequency of the counter's reference) is
constant. This is an ok assumption for short total measurement periods
(up to a few seconds) and stable oscillators. For anything beyond 1-100 s
I would fit at least a linear drift term, or do a drift + temperature regression.
For anything longer than that, I would split it into pieces of at most 1 s,
process each piece as a 1 s measurement sample, and use the usual machinery
(ADEV, MTIE, ...). A minimal sketch of the basic fit follows below.
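
To make the idea concrete, here is a minimal numpy sketch (my own
illustration with made-up numbers, not Pendulum's implementation): fit a
straight line to the timestamp error vs. the expected event time; the
slope is the fractional frequency offset and the intercept is the phase
offset at t=0.

    import numpy as np

    # Made-up example: N timestamps of a nominally 10 kHz signal,
    # captured against the counter's reference time base.
    rng = np.random.default_rng(0)
    N = 100_000
    f_nominal = 10e3                       # assumed nominal event rate in Hz
    t_expected = np.arange(N) / f_nominal  # where the events should land
    # simulate 1e-7 fractional frequency offset, 2 ns phase offset,
    # 50 ps rms timestamp noise
    t_captured = (t_expected * (1 + 1e-7) + 2e-9
                  + rng.normal(0.0, 50e-12, N))

    # Least-squares line fit of the timestamp error vs. expected time.
    # With white timestamp noise the estimates improve roughly as 1/sqrt(N).
    slope, intercept = np.polyfit(t_expected, t_captured - t_expected, 1)
    print("fractional frequency offset ~", slope)
    print("phase offset at t=0 [s]     ~", intercept)

Whether a plain line fit is enough is exactly the constant-frequency
assumption above; with drift you would add a quadratic term to the fit.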

> 2: For highest accuracy of the calculation output, is it best the 
> captures are at (almost) regular intervals (as above) or is some form of 
> dithering of the interval better? And what form of dithering is best?

This is a very good question and the answer is a resounding "It depends."
A lot depends on how your counter behaves. If it is well behaved in terms
of sampling vs internal clock phase, then you don't have to worry. If there
are correlation effects (e.g. due to non-linearity of the fine interpolator),
then you have to account for that and do some analysis of its effect on your
measurement. But be warned, that's higher-level statistics with non-linear
variables. Hic sunt dracones!
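
One simple analysis of that kind, as a rough sketch (the function name and
the 10 MHz reference are assumptions of the example, not your hardware):
bin the residuals of the line fit by the event's fractional position within
one reference clock period and look for structure. A flat profile suggests
the interpolator non-linearity is not biasing the regression; a clear shape
suggests dithering or calibration is needed.

    import numpy as np

    def interpolator_bias_profile(timestamps, residuals, f_ref=10e6, nbins=32):
        # Mean fit residual vs. fractional position within one reference
        # clock period.  A non-flat profile hints that interpolator
        # non-linearity is leaking into the phase/frequency estimate.
        frac = (timestamps * f_ref) % 1.0
        edges = np.linspace(0.0, 1.0, nbins + 1)
        idx = np.clip(np.digitize(frac, edges) - 1, 0, nbins - 1)
        return np.array([residuals[idx == k].mean() if np.any(idx == k)
                         else np.nan for k in range(nbins)])

Feed it the captured timestamps and the residuals left over after the
line fit in the sketch above.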

> 3: Assuming it is possible to have a large amount (1e+5) of captures per 
> measurement interval, are there other or additional methods to further 
> improve the accuracy?

You can model your oscillator and its environmental dependency, estimate
its parameters, and compensate for them. Then you can model the effect
of the environment on your counter, and estimate and compensate for it.
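
As a rough sketch of what "model and compensate" can look like in its
simplest form (all names and coefficients below are invented for the
example): a least-squares fit of phase against offset, rate, drift and a
logged temperature, whose fitted terms are then subtracted before further
analysis.

    import numpy as np

    def compensate(t, phase, temperature):
        # Model x(t) = a + b*t + c*t^2 + d*T(t) and remove the fitted
        # offset/rate/drift/temperature terms from the phase record.
        A = np.column_stack([np.ones_like(t), t, t**2, temperature])
        coeff, *_ = np.linalg.lstsq(A, phase, rcond=None)
        return phase - A @ coeff, coeff

    # Example with made-up data: 1 h of 1 s phase samples with drift and
    # a 0.5 K temperature swing coupling in at 3e-10 s/K.
    t = np.arange(0.0, 3600.0)
    temperature = 23.0 + 0.5 * np.sin(2 * np.pi * t / 1800.0)
    phase = (1e-9 * t + 2e-14 * t**2 + 3e-10 * (temperature - 23.0)
             + np.random.default_rng(1).normal(0.0, 5e-11, t.size))
    residual, coeff = compensate(t, phase, temperature)
    print(coeff)  # estimated offset, rate, drift and temperature coefficients

Whether such a model is justified is exactly the validation problem
discussed next.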

There are many things one can do. What makes sense depends a lot on your
exact setup and on the stabilities of the oscillators involved. And the more
complex your model is, the more complex its validation becomes. In my humble
opinion, it is better to stick to a simpler statistical model and know where
its limits are, than to use a poorly understood and poorly motivated complex
model that might or might not be a better fit. What a lot of people
underestimate is that, when doing measurements like this, we are ultimately
doing complex statistical analysis with lots of unspoken assumptions about
the underlying mechanics. Statistics is a quite unintuitive field even in the
simple cases handled in school/university. When dealing with random variables
that are non-linear and lead to non-convergent moments, things become
quite "interesting". It becomes even more interesting when the standard
model for noise (i.e. an infinite-bandwidth signal with specific characteristics
in the frequency domain) is mathematically nonsense and leads to violations
of the assumptions behind the analytical tools we use.

				Attila Kinali


[1] "New frequency counting principle improves resolution", by Johanson, 2005
https://doi.org/10.1109/FREQ.2005.1574007
(there was a non-paywalled version of this, but the link has gone away)
-- 
In science if you know what you are doing you should not be doing it.
In engineering if you do not know what you are doing you should not be doing it.
        -- Richard W. Hamming, The Art of Doing Science and Engineering



