[time-nuts] Changing ADEV, (was Phase, One edge or two?)

Magnus Danielson magnus at rubidium.dyndns.org
Fri Oct 24 22:25:21 UTC 2014


Tom,

On 10/24/2014 11:31 PM, Tom Van Baak wrote:
>>> ADEV most certainly does change with time, even for short tau's.
>>
>> Can you elaborate?
>> Such as when, why, what kind of change, how much change,
>> at how short of tau's, over how long of time,
>> and using what type of Oscillators?
>> Do you know what in the freq or Phase plot is causing the ADEV to change?
>
> I'm happy to let Bob answer his own claim here. I'm curious as well. Unless he's talking about thermal noise, in which case I now believe him 100%.
>
> OTOH, for time intervals of minutes to hours or days, the plotted ADEV can often vary. When in doubt, enable error bars in your ADEV calculations or use DAVAR in Stable32, or use "Trace History" in TimeLab to expose how little or how much the computed ADEV depends on tau and N.
>
> In general, never do an ADEV calculation without visually checking the phase or frequency time series first.

You should make sure that you remove all forms of systematic effects 
before turning the residual random noise over to ADEV.

If you have random noise that is modulated in amplitude, you need to 
measure long enough that the end of the averaging does not have a great 
impact on the result.
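
As a rough numeric sketch of the first point (plain NumPy, with made-up 
noise and drift levels), fitting and subtracting a quadratic from the 
phase before computing ADEV keeps the long-tau end dominated by the 
noise instead of the drift:

import numpy as np

def oadev_from_phase(x, tau0, m):
    # Overlapping Allan deviation at tau = m*tau0 from phase samples x (seconds)
    d2 = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / (m * tau0)

rng = np.random.default_rng(1)
tau0 = 1.0                                  # sample interval, seconds
n = 100_000
t = np.arange(n) * tau0
y = 1e-12 * rng.standard_normal(n)          # white FM noise (illustrative level)
y += 1e-16 * t                              # linear frequency drift (illustrative rate)
x = np.cumsum(y) * tau0                     # phase / time error, seconds

# Remove the systematic part: subtract a fitted quadratic from the phase,
# i.e. remove frequency offset and linear drift, keep the random residue.
x_res = x - np.polyval(np.polyfit(t, x, 2), t)

for m in (1, 10, 100, 1000, 10_000):
    print(f"tau={m * tau0:7.0f} s  raw={oadev_from_phase(x, tau0, m):.2e}"
          f"  detrended={oadev_from_phase(x_res, tau0, m):.2e}")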

>> Of the many OCXO type Oscillators that I've tested (HP10811 & MV89),
>> seldom have I seen any significant change (say greater than 10%),
>> in the short tau (0.01 sec to 1 sec) ADEV values,  after the systematic
>> type errors are removed. (even when starting soon after turn on)
>
> This is not my experience at all. Let's figure out what's happening to you.
>
> If all your standards look sort of the same from tau 0.01 to tau 0.1 to tau 1, then either you need more oscillators to play with or maybe you have a measurement problem. This is especially true if you are doing post-comparator averaging. Averaging, by definition, tends to remove noise, to smooth things out. If your goal is to measure noise, the last thing you want to do is create any electronics or use any analog or digital or numerical filtering that removes or reduces the very thing you're trying to measure.
>
> I remind you of this page http://leapsecond.com/pages/adev-avg/ on the perils of averaging data.
>
> For most of the world, there's signal and noise. Signal good. Noise bad. But for us, measuring precision clocks, the noise is the signal. So don't do anything that removes or reduces noise.

Systematic signals are, however, disturbances for the ADEV.
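
To put a number on Tom's averaging warning, here is a small sketch 
(NumPy only, with a made-up white PM level and averaging factor): 
pre-averaging raw readings inside the instrument hides part of the 
very noise that ADEV is supposed to report.

import numpy as np

def oadev_from_phase(x, tau0, m=1):
    # Overlapping Allan deviation at tau = m*tau0 from phase samples x (seconds)
    d2 = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / (m * tau0)

rng = np.random.default_rng(7)
K = 1000                                          # raw readings per reported point
x_raw = 1e-11 * rng.standard_normal(3600 * K)     # white PM phase noise, seconds

x_single = x_raw[::K]                             # one raw reading per second
x_avg = x_raw.reshape(-1, K).mean(axis=1)         # instrument-style block averaging

print("ADEV(1 s) from single readings  :", oadev_from_phase(x_single, 1.0))
print("ADEV(1 s) from averaged readings:", oadev_from_phase(x_avg, 1.0))

For white PM the averaged series comes out roughly sqrt(K) lower, even 
though the underlying oscillator noise has not changed at all.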

>> ADEV is used to measure random types of noise so there are of
>> course the statistical uncertainty variations that are a function of
>> the number of valid data points. I find that using a minimum of
>> a thousand points at each tau gives good consistent results.
>
> Are you crazy? The minimum is just 3 or 4 or 5 data points. Not 1000! You should not see much difference at 10 or 100 or 1000 points. If so, something is wrong with your measurement model. If ADEV(tau) is *that* dependent on tau, check the frequency time-series. Consider removing drift or using HDEV instead of ADEV. We need to talk. If your logic was true, we'd all have to wait 3 years before we could compute the ADEV of a GPSDO at tau 1 day.

No, he is not crazy on this point. While the algorithm only needs 3 
points to produce a value, that value will have so few degrees of 
freedom that the confidence interval is WAAAY out there.
The reason that we don't need to wait 3 years for a tau of 1 day is 
that we learned to use overlapping spans of time. We have since had 
much more development in the algorithms to improve the degrees of 
freedom for the same N samples, all in an effort to achieve as small a 
confidence interval as possible. Also, the degrees of freedom achieved 
vary with the tau0 multiple m and with the dominant noise type.
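
A small numeric sketch of how much the degrees of freedom matter, using 
the standard chi-squared confidence interval for the Allan variance 
(the edf values below are placeholders; the real edf depends on N, m, 
the estimator and the dominant noise type):

from scipy.stats import chi2

sigma_hat = 1.0e-12     # some measured ADEV value (placeholder)
alpha = 0.32            # roughly a 68 % (1-sigma) confidence interval

for edf in (2, 10, 100, 1000):
    lo = sigma_hat * (edf / chi2.ppf(1.0 - alpha / 2.0, edf)) ** 0.5
    hi = sigma_hat * (edf / chi2.ppf(alpha / 2.0, edf)) ** 0.5
    print(f"edf={edf:5d}:  {lo:.2e} .. {hi:.2e}")

With only a handful of degrees of freedom the interval covers a factor 
of a few around the estimate; with a thousand it is only a few percent 
wide.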

A great way to illustrate the point of degrees of freedom and the 
number of sample points needed to get tight confidence intervals is to 
watch how the high-tau end of a curve updates in TimeLab: it behaves 
like the jiggling end of a long rope, and as more samples come in, the 
jiggling end moves towards higher taus, but for a particular tau the 
amplitude of the jiggling decreases until it almost stops. This is the 
effect of the confidence interval becoming tighter; the range within 
which the real value lies becomes smaller and eventually very tight.
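
The same behaviour can be mimicked numerically (plain NumPy, 
illustrative noise level only): recompute the overlapping ADEV at one 
fixed tau as more and more samples arrive, and watch the scatter around 
the converged value die down.

import numpy as np

def oadev_from_phase(x, tau0, m):
    # Overlapping Allan deviation at tau = m*tau0 from phase samples x (seconds)
    d2 = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / (m * tau0)

rng = np.random.default_rng(42)
tau0, m = 1.0, 100                              # estimate ADEV at tau = 100 s
y = 1e-12 * rng.standard_normal(200_000)        # white FM noise
x = np.cumsum(y) * tau0                         # phase, seconds

for n in (300, 1_000, 3_000, 10_000, 30_000, 100_000, 200_000):
    print(f"N={n:7d}  ADEV(100 s) = {oadev_from_phase(x[:n], tau0, m):.3e}")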

The modern algorithms like TOTAL and Theo can squeeze out impressively 
high degrees of freedom for a particular set of N and m compared to the 
older algorithms, which effectively translates into tighter confidence 
intervals for the same N and m.
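
For anyone who wants to try this at home: assuming the Python 
allantools package (the oadev/totdev names, arguments and the 
(taus, dev, err, n) return tuple are taken from that package and should 
be checked against its documentation), the estimators are easy to 
compare on the same data set.

import numpy as np
import allantools

rng = np.random.default_rng(3)
phase = np.cumsum(1e-12 * rng.standard_normal(10_000))   # white FM test data, tau0 = 1 s

taus = [1, 10, 100, 1000]
t_used, oad, _, _ = allantools.oadev(phase, rate=1.0, data_type="phase", taus=taus)
_,      tot, _, _ = allantools.totdev(phase, rate=1.0, data_type="phase", taus=taus)

for tau, a, b in zip(t_used, oad, tot):
    print(f"tau={tau:5.0f} s  oadev={a:.2e}  totdev={b:.2e}")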

Cheers,
Magnus


