[time-nuts] theoretical Allan Variance question

Stewart Cobb stewart.cobb at gmail.com
Sat Oct 29 23:38:20 UTC 2016


What's the expected value of ADEV at tau = 1 s for time-interval
measurements quantized at 1 ns?

This question can probably be answered from pure theory (by someone more
mathematical than me), but it arises from a very practical situation. I
have several HP5334B counters comparing PPS pulses from various devices.
The HP5334B readout is quantized at 1 ns, and the spec sheet (IIRC) also
gives the instrument accuracy as 1 ns.

The devices under test are relatively stable. Their PPS pulses are all
within a few microseconds of each other but uncorrelated.  They are stable
enough that the dominant error source on the ADEV plot out to several
hundred seconds is the 1 ns quantization of the counter. The plots all
start near 1 ns and follow a -1 slope down to the point where the
individual device characteristics start to dominate the counter
quantization error.

One might expect that the actual ADEV value in this situation would be
exactly 1 ns at tau = 1 second.  Values of 0.5 ns or sqrt(2)/2 ns might not
be surprising. My actual measured value is about 0.65 ns, which does not
seem to have an obvious explanation.  This brings to mind various questions:
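
For concreteness, here is how one might check the pure-quantization case
numerically. This is only a sketch under the usual idealization that the
1 ns quantization error behaves like uncorrelated noise, uniform over the
least-significant digit; a real counter (trigger jitter, slowly drifting
inputs that make successive roundings correlate) need not obey it:

    import numpy as np

    def adev_tau0(x, tau0=1.0):
        # Allan deviation at tau = tau0 from time-error samples x taken
        # once per tau0 seconds, via the second-difference estimator:
        # AVAR(tau0) = <(x[i+2] - 2*x[i+1] + x[i])^2> / (2 * tau0^2)
        d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]
        return np.sqrt(np.mean(d2**2) / (2.0 * tau0**2))

    rng = np.random.default_rng(0)
    q = 1e-9           # 1 ns readout quantization
    n = 1_000_000      # one reading per second

    # Idealized counter: the true offset is effectively constant at this
    # level and cancels in the second difference, so only the quantization
    # error is modeled -- uniform on [-q/2, +q/2] and independent from
    # reading to reading (an assumption, not a property of the HP5334B).
    x = q * (rng.random(n) - 0.5)

    print(adev_tau0(x))                       # simulated ADEV at tau = 1 s
    print(np.sqrt(3.0) * q / np.sqrt(12.0))   # white-PM prediction, 0.5 ns

Under that model the RMS quantization error is 1/sqrt(12), about 0.29 ns,
and for white phase noise ADEV(tau) = sqrt(3) * sigma_x / tau, so the
sketch settles near 0.5 ns at tau = 1 s rather than 1 ns. How well the
real counter matches that idealization is part of the question.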

What is the theoretical ADEV value of a perfect time-interval measurement
quantized at 1 ns? What's the effect of an imperfect measurement
(instrument errors)? Can one use this technique in reverse to sort
instruments by their error contributions, or to tune up an instrument
calibration?
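
On the "in reverse" idea, one crude approach (still assuming everything
behaves as white, uncorrelated phase noise) is to turn the measured ADEV
back into an implied total RMS time error and subtract the ideal
quantization contribution in quadrature; the sigma_inst that falls out is
at best a rough estimate of the instrument's own jitter:

    import math

    adev_1s = 0.65e-9                      # measured ADEV at tau = 1 s
    sigma_x = adev_1s / math.sqrt(3.0)     # implied total RMS time error (x tau = 1 s)
    sigma_quant = 1e-9 / math.sqrt(12.0)   # ideal 1 ns quantization alone
    sigma_inst = math.sqrt(max(sigma_x**2 - sigma_quant**2, 0.0))
    print(sigma_x, sigma_quant, sigma_inst)  # roughly 0.38, 0.29, 0.24 ns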

I'd be grateful for answers to any of these questions.

BTW, thanks to whichever time-nut recommended the HP5334B back in the
archives; they're perfect for what I'm doing. And thanks to fellow time-nut
Rick Karlquist for his part in designing them.

Cheers!
--Stu


