[time-nuts] theoretical Allan Variance question

Tom Van Baak tvb at LeapSecond.com
Sun Oct 30 05:14:43 UTC 2016


> One might expect that the actual ADEV value in this situation would be
> exactly 1 ns at tau = 1 second.  Values of 0.5 ns or sqrt(2)/2 ns might not
> be surprising. My actual measured value is about 0.65 ns, which does not
> seem to have an obvious explanation.  This brings to mind various questions:
> 
> What is the theoretical ADEV value of a perfect time-interval measurement
> quantized at 1 ns? What's the effect of an imperfect measurement
> (instrument errors)? Can one use this technique in reverse to sort
> instruments by their error contributions, or to tune up an instrument
> calibration?

Hi Stu,

If you have white phase noise with a standard deviation of 1, then the ADEV at tau0 will be sqrt(3). This is because each term in the ADEV formula is a second difference of 3 phase samples, with coefficients 1, -2, 1, and the variance of a sum of independent random variables is the sum of their variances weighted by the squared coefficients: (1 + 4 + 1) sigma^2 = 6 sigma^2, which the AVAR definition then divides by 2, leaving 3 sigma^2. So if your standard deviation is 0.5 ns, the AVAR should be 3 x (0.5 ns)^2 = 0.75 ns^2 and the ADEV should be 0.87 ns, which is sqrt(3)/2 ns. You can check this with a quick simulation [1].
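In symbols (a sketch, using the standard phase-sample form of the Allan variance and assuming independent samples with variance sigma_x^2):

\[
\sigma_y^2(\tau) \;=\; \frac{\left\langle (x_{i+2} - 2x_{i+1} + x_i)^2 \right\rangle}{2\tau^2}
\;=\; \frac{(1 + 4 + 1)\,\sigma_x^2}{2\tau^2}
\;=\; \frac{3\,\sigma_x^2}{\tau^2},
\qquad
\sigma_y(\tau) \;=\; \sqrt{3}\,\frac{\sigma_x}{\tau}.
\]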

Note this assumes that the 1 ns quantization error has a normal distribution with a standard deviation of 0.5 ns. Someone who's actually measured the hp 5334B quantization noise can correct this assumption.
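The assumed distribution matters, by the way. As an illustrative sketch (not a claim about the 5334B), an error spread uniformly across a full 1 ns bin has sigma = 1/sqrt(12) ns, about 0.29 ns, which would give an ADEV of sqrt(3)/sqrt(12) = 0.5 ns instead of 0.87 ns:

import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def adev_tau0(x, tau=1.0):
    """Allan deviation at tau = tau0 from phase samples x (seconds)."""
    d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]   # second difference, coefficients 1, -2, 1
    return np.sqrt(np.mean(d2**2) / (2.0 * tau**2))

# normal error, sigma = 0.5 ns  ->  ADEV ~ sqrt(3) * 0.5 ns ~ 0.87 ns
print(adev_tau0(rng.normal(0.0, 0.5e-9, N)))

# uniform error over +/- 0.5 ns, sigma ~ 0.29 ns  ->  ADEV ~ 0.5 ns
print(adev_tau0(rng.uniform(-0.5e-9, 0.5e-9, N)))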

/tvb

[1] Simulation:

C:\tvb> rand 100000 0.5e-9 0 | adev4 /at 1
rand 100000(count) 5e-010(sdev) 0(mean)
** tau from 1 to 1 step 1
       1 a 8.676237e-010 99998 t 5.009227e-010 99998

In this 100k sample simulation we see the ADEV is close to sqrt(3)/2 ns. The TDEV is 0.5 ns. This is because TDEV is defined as tau * MDEV / sqrt(3), and at tau = tau0 the MDEV equals the ADEV, so the sqrt(3) is eliminated in the definition of TDEV and you get back the 0.5 ns standard deviation.
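For anyone without the rand/adev4 tools, a minimal Python sketch of the same check (hand-rolled second-difference ADEV with numpy; seed and sample count are arbitrary):

import numpy as np

rng = np.random.default_rng(2016)
N = 100_000
sigma = 0.5e-9            # white phase noise, 0.5 ns standard deviation
tau = 1.0                 # sample interval, seconds

x = rng.normal(0.0, sigma, N)        # simulated phase samples (seconds)

# AVAR at tau0 from the second difference of adjacent phase samples
d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]
adev = np.sqrt(np.mean(d2**2) / (2.0 * tau**2))

# TDEV = tau * MDEV / sqrt(3); at tau = tau0 the MDEV equals the ADEV
tdev = tau * adev / np.sqrt(3.0)

print(f"ADEV(1 s) = {adev:.3e}  (expect sqrt(3) * 0.5 ns = 8.66e-10)")
print(f"TDEV(1 s) = {tdev:.3e}  (expect 0.5 ns = 5.00e-10)")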




