[time-nuts] Allan Variance vs. Least squares

Olaf Bossen obossen at physik.uzh.ch
Mon May 23 10:23:56 UTC 2011


Hi there,

I am looking for some advice on stability metrics for a slow oscillator. 
The oscillator is used to measure another quantity that is related to 
the frequency of the oscillator. I have taken generously oversampled 
data of the oscillator voltage, and now I have two contradictory 
measures:

1) When I record the zero crossings and use them as phase data for the 
Allan variance, the minimum is 10^-4 and it initially decays with a 
slope of t^-0.5 (see the first sketch after this list).

2) On the other hand, if I do least-squares fits of the same data over 
consecutively longer runs, the reported frequency uncertainty goes down 
to sigma_f/f = 10^-7. It also drops much faster, as t^-1.5, which seems 
to be due to the Cramér-Rao lower bound (not that I really understand 
what it means), and it doesn't really go up again (see the second 
sketch below).

It seems to be common lore that the Allan variance minimum is the best 
obtainable frequency accuracy for an oscillator, yet the least-squares 
fits appear to do much better. I have trouble understanding this.

I have a mental picture that might explain this; maybe you can tell me 
whether it seems correct to you: oscillators run at a frequency that is 
far more precise than what our measurement devices are capable of 
resolving. So the initial drop in the Allan variance, gained as the 
measurement gets longer, is really just the reduction of the 
measurement error on this precise frequency. Only when the Allan 
variance goes up again are we on a timescale over which the true 
frequency of the oscillator actually varies (a small simulation 
illustrating this is sketched below).
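
To make that picture concrete, here is a small simulation sketch (all
numbers are made up): two oscillators observed through the same white
measurement noise, one with a perfectly constant frequency and one whose
frequency slowly random-walks. At short averaging times both Allan
deviation curves are essentially identical and set only by the
measurement noise; only at long tau, where the second curve stops
falling, do we actually see the oscillator itself.

# Sketch: measurement-noise-limited vs. oscillator-limited Allan deviation,
# reusing the same little ADEV helper as in the first sketch above.
import numpy as np

def overlapping_adev(x, tau0, m):
    d = x[2*m:] - 2.0*x[m:-m] + x[:-2*m]
    return np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2))

tau0, n = 1.0, 200_000
meas_noise = 1e-4 * np.random.randn(n)        # white timing noise, s

# A: constant true frequency -> time error is measurement noise only
x_a = meas_noise

# B: fractional frequency does a slow random walk
y_b = np.cumsum(1e-7 * np.random.randn(n))
x_b = np.cumsum(y_b) * tau0 + meas_noise

for m in (1, 10, 100, 1000, 10000):
    print(m * tau0,
          overlapping_adev(x_a, tau0, m),
          overlapping_adev(x_b, tau0, m))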

So the Allan variance actually tells us how well we can measure the 
frequency stability, while the true frequency stability, at least in 
the white-noise regime, is much better (by several orders of 
magnitude). What do you think?

Cheers,
obo



