[time-nuts] Re: Build a 3 hat timestamp counter
Magnus Danielson
magnus at rubidium.se
Tue May 24 23:18:01 UTC 2022
Hi,
The first limit you run into is the 1/tau slope of the measurement
setup. This is often attributed to white phase-modulation noise, but it
is also the effect of the single-shot resolution of the counter, and the
actual slope level depends on the interaction of the two.
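To put a rough number on the single-shot limit, here is a minimal sketch (my own illustrative numbers, using the common white-PM quantization model, not anything specific to one counter): a timestamp quantization of q seconds gives sigma_x = q/sqrt(12), and hence an ADEV floor of sqrt(3)*sigma_x/tau, which works out to exactly q/(2*tau):

```python
import math

def adev_quantization_floor(q, tau):
    """ADEV floor from single-shot quantization q (seconds) at averaging time tau.

    White-PM model: sigma_x = q/sqrt(12), ADEV = sqrt(3)*sigma_x/tau = q/(2*tau).
    """
    sigma_x = q / math.sqrt(12.0)
    return math.sqrt(3.0) * sigma_x / tau

# e.g. a 100 ps single-shot resolution (TDC7200-class) at tau = 1 s:
print(adev_quantization_floor(100e-12, 1.0))   # ≈ 5e-11
```

For the 100 ps TDC resolution mentioned later in the thread, that is a 5e-11 floor at tau = 1 s before any averaging.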
So, you might want to try a simple approach first, just to get started.
Nothing wrong with that. You will end up wanting to do better, so I will
try to provide a few guiding comments on things to think about and improve.
So, in general, try to use as high an input frequency as you can: as you
average down from the oscillator frequency f to the output rate f0, you
combine f/f0 samples per output, and the benefit is a noise reduction of
1/sqrt(f/f0).
As you do ADEV, the f0 frequency will control your bandwidth.
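A quick numerical illustration of the 1/sqrt(f/f0) benefit (a toy sketch with made-up rates, nothing from the thread): averaging n = f/f0 independent white-phase samples shrinks the deviation by sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(42)

f, f0 = 1e6, 1e3           # e.g. 1 MHz timestamp rate averaged down to 1 kHz
n = int(f / f0)            # samples combined per averaged output

x = rng.normal(0.0, 1.0, size=(5000, n))   # white phase noise, unit deviation
averaged = x.mean(axis=1)

print(averaged.std())      # ≈ 1/sqrt(1000) ≈ 0.032
```

The same sqrt(n) gain is why starting from 20 MHz rather than an already-divided 100 kHz leaves more noise suppression on the table.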
The filter effect of the averaging as you reduce and sub-sample will
help to some degree with anti-aliasing, but rather than plain averaging,
consider doing proper anti-aliasing filtering: the effect of aliasing on
these measures is well established, and improvements going into the
upcoming IEEE Std 1139 reflect this. In short, aliasing folds the white
noise back into band, and straight averaging tends to be a poor
suppressor of aliased noise.
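To make the "straight averaging is a poor alias suppressor" point concrete, here is a small numpy sketch (parameters are mine, and the 256-tap Hamming-windowed-sinc filter is just one reasonable choice, not a recommendation from the thread): a tone just above the decimated Nyquist leaks heavily through a boxcar average but is well rejected by a proper low-pass before sub-sampling:

```python
import numpy as np

M = 16                                  # decimation ratio
t = np.arange(65536)
f_alias = 1.5 / (2 * M)                 # tone above the new Nyquist 1/(2M)
x = np.sin(2 * np.pi * f_alias * t)     # this will fold after decimation

# (a) plain boxcar averaging, M samples per output
boxcar_out = x[: len(x) // M * M].reshape(-1, M).mean(axis=1)

# (b) windowed-sinc low-pass with cutoff at the new Nyquist, then every Mth
n = np.arange(256)
h = np.sinc((n - 127.5) / M) * np.hamming(256)
h /= h.sum()                            # unity DC gain
fir_out = np.convolve(x, h, mode="valid")[::M]

print(boxcar_out.std() * np.sqrt(2))    # ≈ 0.3: the tone leaks through
print(fir_out.std() * np.sqrt(2))       # well below 0.05: tone suppressed
```

The boxcar's sinc-shaped response is still nearly wide open just past the new Nyquist, which is exactly where folding begins.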
For white phase modulation (WPM) the expected ADEV response depends
linearly on the bandwidth of the measurement filter. It is often
modelled as a brick-wall filter, which it never is. For classical
counters the input bandwidth is high; the sampling rate then forms a
Nyquist frequency, but wide-band noise simply aliases around it. An
anti-aliasing filter helps to reduce or even remove the effect, and then
the bandwidth of the anti-aliasing filter replaces the physical channel
bandwidth. If the anti-aliasing is done digitally after the counter
front-end, you have already picked up some aliasing wrap-around, but
keeping that rate as high as possible keeps the number of spectral
overlays low, and then reducing it with proper filtering will get you a
better result.
For aliasing effects, see the work of Claudio Calosso of INRIM. Great guy.
This is where the sub-sampling filter approach is nice, since a filter
followed by sub-sampling removes the need to produce all the outputs at
the original sample rate, so the filter processing can operate at the
sub-sampled rate.
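The "operate at the sub-sampled rate" point in code (a toy sketch with arbitrary filter and data): the outputs you would discard never need to be computed, which is the essence of polyphase decimation:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4096)
h = rng.normal(size=64)    # stand-in for any FIR anti-aliasing filter
M = 8                      # sub-sampling ratio

# naive: filter at the full rate, then throw away M-1 of every M outputs
full = np.convolve(x, h, mode="valid")[::M]

# sub-sampled: compute only the outputs we actually keep
kept = np.array([x[k : k + len(h)] @ h[::-1]
                 for k in range(0, len(x) - len(h) + 1, M)])

print(np.allclose(full, kept))   # identical results at ~1/M the multiplies
```
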
As your measurements go to higher taus in ADEV, the significant part of
the ADEV power will be well within the pass-band of the filter, so just
making sure you have a flat top avoids surprises. For shorter taus, the
anti-aliasing filter will dominate, so assume the first decade of tau is
wasted.
I say this to guide you to get the best result with the proposed setup.
The classical three-cornered hat calculation has a limitation in that it
becomes limited by noise and can sometimes produce non-stable (even
negative-variance) results. The Groslambert analysis is more robust,
since it is essentially the same as doing a cross-correlation
measurement. The key is that it averages down before squaring, whereas
the three-cornered hat squares early and cannot suppress the noise of
the other sources with as good quality. For the Groslambert analysis,
see François Vernotte's series of papers and presentations. François is
another great guy. I spent some time discussing the Groslambert analysis
with Demetrios the other week. I think I need to also say that Demetrios
is a great guy too; not to single him out, but he really is.
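Here is a compact sketch of the two estimators side by side (three synthetic white-FM oscillators with made-up noise levels; the pairing and normalization follow the standard definitions, not any code from the thread). The hat combines already-squared pair variances; the Groslambert covariance multiplies the two pair differences that share the common oscillator and then averages, so the other oscillators' noise averages toward zero instead of adding:

```python
import numpy as np

rng = np.random.default_rng(7)
N, tau = 1 << 17, 1.0

# three hypothetical independent oscillators, white FM, known levels
sy = {"a": 1.0, "b": 2.0, "c": 3.0}
x = {k: np.cumsum(rng.normal(0, s, N)) * tau for k, s in sy.items()}

def d2(xi):                       # second differences of phase at tau (m = 1)
    return xi[2:] - 2 * xi[1:-1] + xi[:-2]

D = {p: d2(x[p[0]] - x[p[1]]) for p in ("ab", "ac", "bc")}
avar = {p: np.mean(D[p] ** 2) / (2 * tau ** 2) for p in D}

# classical three-cornered hat: square first, then combine pair AVARs
hat_a = (avar["ab"] + avar["ac"] - avar["bc"]) / 2

# Groslambert covariance: multiply the two differences sharing 'a', then average
gcov_a = np.mean(D["ab"] * D["ac"]) / (2 * tau ** 2)

print(hat_a, gcov_a)   # both ≈ 1.0, the true AVAR of oscillator a alone
```

With finite data the hat estimate can swing negative when the common oscillator is much quieter than its neighbours; the cross-product form degrades more gracefully.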
There is another trick up the sleeve, though. The modified Allan
deviation (MDEV) processing actually integrates the averaging (sqrt)
trick into the measurement, achieving a 1/tau^1.5 slope for WPM. This
will push it down quicker if you feed it a high enough sample rate, so
that you hit the flicker phase-modulation slope (1/tau), the white
frequency-modulation slope (1/tau^0.5) and finally the flicker
frequency-modulation floor (flat) sooner. The reference levels will be
different from ADEV for the various noise types, but those you can look
up in tables and correct for.
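The slope difference is easy to see on synthetic data. A sketch (my own minimal estimators using the textbook overlapping ADEV and MDEV definitions; for real work a library such as allantools is the safer choice): on pure white phase noise, ADEV falls as 1/tau while MDEV falls as 1/tau^1.5:

```python
import numpy as np

rng = np.random.default_rng(3)
N, tau0 = 1 << 15, 1.0
x = rng.normal(size=N)                 # pure white phase noise (WPM)

def adev(x, m, tau0):
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]      # overlapping 2nd differences
    return np.sqrt(np.mean(d ** 2) / (2 * (m * tau0) ** 2))

def mdev(x, m, tau0):
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    s = np.convolve(d, np.ones(m), mode="valid")  # inner m-sample sum
    return np.sqrt(np.mean(s ** 2) / (2 * m ** 4 * tau0 ** 2))

ms = 2 ** np.arange(7)                            # tau = 1 ... 64
sa = np.polyfit(np.log(ms), np.log([adev(x, m, tau0) for m in ms]), 1)[0]
sm = np.polyfit(np.log(ms), np.log([mdev(x, m, tau0) for m in ms]), 1)[0]
print(sa, sm)    # ≈ -1.0 (ADEV) and ≈ -1.5 (MDEV) on log-log slopes
```
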
Cheers,
Magnus
On 2022-05-24 18:37, Hans-Georg Lehnard via time-nuts wrote:
> Hi,
>
> my Name is Hans-Georg Lehnard from Germany and I'm new here, worked as a
> developer for hardware then for software and last as a system developer.
> Now I'm retired and I can play with hardware again ;-).
>
> I have:
>
> 4 x 20MHz Rubium (TEMEX MCFRS-1),
> 2 x 10MHz HP10811-60111
> 1 x Samsung UCCM GPSDO
> 1 x FA2 counter.
> lots of OCXO
>
> and try to build a house standard that I can trust and qualify my
> oscillators.
> Reproducible measurements with the FA2 in 10s precision mode I trust to
> 10E-11.
> The short-term stability of the HP oscillators cannot be measured with
> it, or both are defective.
> The FA2 is not suitable for short-term measurements of 0.01 ... 1s.
>
> For measurements against a reference frequency, the stability of the
> reference must be 5 to 10 times better than the measured frequency, and
> I don't have that. Now there are 2 options DMTD mixer or 3-hat
> measurements.
> Because I'm a digital person I chose the 3-hat method.
>
> The idea is now to divide the 3 measuring frequencies (20 or 10 MHz)
> down to 100Khz and to measure the phases with a TDC against the next
> reference edge. Average the measurement results until I am down to 0.001
> ... 1 s. That should improve the 100ps resolution of a TDC7200 far
> enough and can also be output via RS232.
>
> Are my thoughts correct and could it work ?
>
> Hans-Georg
> _______________________________________________
> time-nuts mailing list -- time-nuts at lists.febo.com
> To unsubscribe send an email to time-nuts-leave at lists.febo.com