[time-nuts] Frequency standards for different tau in Allan deviation measurement

Magnus Danielson magnus at rubidium.se
Fri Feb 21 19:40:49 UTC 2020


Bert,

I missed it because I do not have one and have never worked with one,
which is not to say it is bad or anything; I just took the examples I
recall because they are familiar to me. There are surely more of these.
I hope it was good enough to illustrate the points with some real-life
examples.

Cheers,
Magnus

On 2020-02-21 13:45, ew via time-nuts wrote:
> You missed my favorite, the HP5345A: direct counting to 500 MHz with an internal 500 MHz clock. I only recently replaced it with a 53132A, and I still use it for 40 GHz work.
> Bert Kehren
>
>
>
> In a message dated 2/21/2020 7:27:32 AM Eastern Standard Time, magnus at rubidium.se writes:
>
> Hi Taka,
>
> On 2020-02-21 04:45, Taka Kamiya via time-nuts wrote:
>> I was into electronics in a big way in the 70s. Then I had a long break and came back to it in the last few years. Back then, if I wanted 1 Hz resolution, the gate time had to be 1 s. So measuring ns and ps was pretty much impossible. As I understand it, the HP53132A (my main counter) takes thousands of samples (I assume t samples) to arrive at the most likely real frequency. That was something I had a hard time wrapping my head around.
> It actually does two things.
>
> First, it interpolates the occurrence of a rising edge (for the start
> and stop channels), in case it does not happen in perfect alignment
> with a rising edge of the reference/coarse clock. Often the
> OCXO/rubidium reference is 10 MHz, but then a 90-500 MHz oscillator is
> locked to the reference, and this higher clock is used instead of the
> 10 MHz for coarse counting. Coarse counting is just the counting of
> cycles, as in the good old days of counters. The resolution is
> increased further not by raising the counting frequency, but by
> measuring the time error of the trigger-channel event in relation to
> the coarse-counter clock edge, thus measuring 0.000-0.999 of a
> coarse-counting cycle. In practice it becomes hard to design for that,
> as at the shorter end the gate delay times are hard to pin down, so
> one adds one or two coarse cycles to measure 1.000-1.999 or
> 2.000-2.999 cycles instead. These extra cycles are only for the
> interpolator design, so once the fractional cycle is known the rest
> can be ignored.
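>
> To make the bookkeeping concrete, here is a minimal Python sketch of
> how such a time-stamp could be reconstructed (my illustration only;
> the names and the 500 MHz default are assumptions, not any particular
> counter's firmware):
>
>     # Hypothetical reconstruction of one interpolated time-stamp.
>     # coarse_count: whole coarse-clock cycles counted to the edge
>     # interp_cycles: interpolator reading, e.g. 1.000-1.999 cycles
>     # extra: the 1 or 2 whole cycles added for the interpolator design
>     def timestamp(coarse_count, interp_cycles, extra, f_coarse=500e6):
>         t_coarse = 1.0 / f_coarse            # one coarse cycle, 2 ns
>         frac = interp_cycles - extra         # keep only the fraction
>         # the event occurred 'frac' of a cycle before the counted edge
>         return (coarse_count - frac) * t_coarse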
>
> Just to give you an idea of what different counters do, here are some
> numbers from the top of my head:
>
> Counter      Ref       Coarse    Interp. gain   Time resolution
> HP5370A      10 MHz    200 MHz   256            < 20 ps
> HP5328A      10 MHz    10 MHz    1              100 ns
> HP5328A/B*   10 MHz    100 MHz   1              10 ns (10 ps TI-avg, claimed)
> HP5335A      10 MHz    10 MHz    200            1 ns
> HP5372A      10 MHz    500 MHz   10             200 ps
> HP53132A     10 MHz    100 MHz   1000           100 ps
> SR620        10 MHz    90 MHz    512?           < 25 ps (don't recall details)
> PM6863       10 MHz    500 MHz   1              2 ns
> CNT-90       10 MHz    100 MHz   512            100 ps (claimed)
> CNT-91       10 MHz    100 MHz   512            50 ps (claimed)
> SIA3000      100 MHz   100 MHz   50000          200 fs
>
> * HP5328A with options 040-042, and HP5328B; TI-average has other
>   interpolation means.
>
> Where I write "claimed" above, the actual performance can be better;
> the spec sheet just does not overstate it further. While all the
> numbers may not be 100% correct, I think they illustrate the
> relationships very well. If you calculate the length of the
> coarse-counter period from its frequency, and then divide by the
> interpolation gain (the number of steps the period is interpolated
> into), the raw time resolution pops out.
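>
> As a quick check, the arithmetic looks like this (a throwaway Python
> sketch, using numbers from the table above):
>
>     # raw single-shot resolution = coarse period / interpolation gain
>     def raw_resolution(f_coarse_hz, interp_gain):
>         return 1.0 / f_coarse_hz / interp_gain
>
>     print(raw_resolution(200e6, 256))   # HP5370A: ~19.5 ps, "< 20 ps"
>     # other counters are often spec'd more conservatively than the
>     # formula alone suggests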
>
> Interpolation methods differ, but typically an error signal is first
> generated and stored on a capacitor, which is then measured with some
> slower technique. The 5335A uses a very simple technique where the
> capacitor is discharged with a much lower current than it was charged
> with, so the discharge time can be measured using the coarse clock.
> This is called pulse stretching. Today the by far most common
> technique is to use an ADC to digitize the voltage.
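>
> The pulse-stretching arithmetic is simple enough to sketch (Python
> again; the names and the 200x stretch factor are assumptions for
> illustration, not the 5335A's actual design values):
>
>     # pulse stretching: charge the capacitor for the unknown interval,
>     # discharge it 'stretch' times slower, and time the discharge with
>     # the coarse clock
>     def interpolated_time(discharge_counts, f_coarse=10e6, stretch=200):
>         t_discharge = discharge_counts / f_coarse   # coarsely timed
>         return t_discharge / stretch                # unknown interval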
>
> The 5328 counters have a unique interpolation technique:
> phase-modulating the reference clock with noise, effectively shifting
> the reference transitions around, and in that way interpolating to a
> higher resolution over time. It works better than claimed.
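>
> A toy simulation of that dithering idea (my own sketch, not HP's
> circuit) shows how averaging dithered quantizations recovers
> sub-cycle resolution:
>
>     import random
>
>     # quantize an event time against a dithered 10 MHz clock; the
>     # average of many dithered readings converges on the true value
>     def dithered_measure(t_true, n=10000, t_clk=100e-9):
>         total = 0.0
>         for _ in range(n):
>             dither = random.uniform(-t_clk / 2, t_clk / 2)
>             edges = round((t_true + dither) / t_clk)  # nearest edge
>             total += edges * t_clk - dither
>         return total / n   # resolves well below the 100 ns cycle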
>
> Remember that this single-shot resolution is degraded by the trigger
> jitter as well as by the instability of the reference oscillator. In
> practice the trigger jitter or the resolution dominates as a 1/tau
> limit as you look at the Allan deviation; to fix that you need to buy
> a better counter or condition the signal for a better trigger.
>
> The second trick used in the 53132 for measuring frequency is
> averaging. It uses an averaging technique, originally from optical
> frequency measurement, to accumulate data into blocks and then
> subtract the time-stamps of two subsequent blocks. This is the same
> as averaging the output of a number of overlapping frequency
> estimations.
>
> This has the advantage that white noise is suppressed with a steeper
> slope, and the associated deviation is the modified Allan deviation,
> MDEV.
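>
> In sketch form, the block-averaged frequency estimate could look like
> this (a minimal Python illustration of the idea, not the 53132A's
> internal algorithm; names are mine):
>
>     # average the time-stamps within two adjacent blocks, then take
>     # the usual frequency estimate between the block averages; this
>     # averages many overlapping frequency estimates in one go
>     def block_frequency(timestamps, events_per_stamp):
>         n = len(timestamps) // 2
>         t0 = sum(timestamps[:n]) / n          # mean of first block
>         t1 = sum(timestamps[n:2 * n]) / n     # mean of second block
>         # each pair (x_i, x_{i+n}) spans n * events_per_stamp events
>         return events_per_stamp * n / (t1 - t0)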
>
>>   
>>
>> I understand most of what you said, but I've never taken statistics, so I am guessing at some parts. I can see how ADEV goes down as tau gets longer; basically, averaging is taking place. But I am still not sure why at some point it goes back up. I understand noise will start to take effect, but the same noise has been there all along while ADEV was going down. So why is there this inflection point where the sign of the slope suddenly changes?
> OK, so the trouble is that rather than only the white noise that
> classical statistics deals with, we have at least 4 noise types, with
> different frequency slopes. If we try to analyze this with the
> standard deviation, the standard-deviation estimator (RMS estimator)
> does not converge; it simply keeps producing noise even as we add more
> values. To put it another way, we do not gain more knowledge by doing
> more measurements. The classical white noise is what is called white
> phase modulation noise; we then have flicker phase noise, white
> frequency noise and flicker frequency noise. All these noise types are
> to be expected according to the David Leeson model, and it is because
> of them that we need the more advanced statistics introduced by David
> Allan.
>
> The White Phase Modulation has a flat response in the phase-noise
> amplitude spectrum, 1/tau in ADEV.
> The Flicker Phase Modulation has a 1/sqrt(f) response, approximately
> 1/tau in ADEV.
> The White Frequency Modulation has a 1/f response, 1/sqrt(tau) in
> ADEV.
> The Flicker Frequency Modulation has a 1/sqrt(f^3) response, flat in
> ADEV.
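>
> If you want to see those slopes fall out, a small simulation works
> (Python with numpy; the estimator is the standard non-overlapping
> Allan deviation, everything else is my own toy example):
>
>     import numpy as np
>
>     # Allan deviation at tau = m * tau0 from phase samples x (seconds)
>     def adev(phase, tau0, m):
>         x = phase[::m]
>         d2 = x[2:] - 2 * x[1:-1] + x[:-2]   # second differences
>         return np.sqrt(np.mean(d2**2) / (2 * (m * tau0)**2))
>
>     # white FM: integrate white frequency noise into phase, tau0 = 1 s
>     y = np.random.randn(1_000_000) * 1e-11
>     x = np.cumsum(y)
>     for m in (1, 10, 100):
>         print(m, adev(x, 1.0, m))           # falls as ~1/sqrt(tau)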
>
> In addition to this, linear frequency drift creates a slope that
> scales with drift and tau, so that forms an upper limit. Thermal
> sensitivity tends to lie on top as well, as do other disturbances.
>
> Depending on the details of the oscillators and their sensitivity to
> thermal noise, their effective minimum shifts around.
>
>>   
>>
>> Also, to reach adev(tau=10), it takes longer than 10 seconds. The manual for TimeLab basically says more samples are taken than just 10, but does not elaborate further. Say it takes 50 seconds to get there, and say that's the lowest point of ADEV; does that mean it is best to set the gate time to 10 seconds or 50 seconds? (Or even, take whatever gate time and repeat the measurement until the accumulated gate time equals tau?)
> The Allan deviation takes a number of estimates to produce values, but
> remember these are stability values of frequency for a certain
> observation time, not the frequency measure itself.
>
> Cheers,
> Magnus
>
>> --------------------------------------- 
>> (Mr.) Taka Kamiya
>> KB4EMF / ex JF2DKG
>>   
>>
>>     On Thursday, February 20, 2020, 7:54:22 PM EST, Magnus Danielson <magnus at rubidium.se> wrote:  
>>   
>>   Hi Taka,
>>
>> On 2020-02-20 19:40, Taka Kamiya via time-nuts wrote:
>>> I have a question concerning frequency standards and their Allan deviation. (To measure Allan deviation in frequency mode using TimeLab.)
>>>
>>> It is commonly said that for shorter-tau measurement, I'd need an OCXO because its short-tau jitter is superior to just about anything else. Also, it is said that for longer-tau measurement, I'd need something like an Rb or Cs standard, which has superior stability over the longer term.
>> Seems reasonably correct.
>>> Here's the question part. A frequency counter that measures the DUT basically puts out a reading every second during the measurement. When TimeLab is well into 1000 s or so, it is still reading every second; it does not change the gate time to, say, 1000 s.
>>> That being the case, why this consensus of what time source to use for what tau?
>>> I recall reading that on the TICC, in time-interval mode, anything that's reasonably good is good enough. I'm aware TI mode and frequency mode are entirely different, but they are the same in that the measurement is made over a very short time span AT A TIME.
>>> I'm still trying to wrap my small head around this.  
>> OK.
>>
>> I can understand that this is confusing. You are not alone in being
>> confused about it, so don't worry.
>>
>> As you measure frequency, you "count" a number of cycles over some
>> time, hence the name frequency counter. The number of periods
>> (sometimes called events) over the observation time (also known as
>> the time-base or tau) can be used to estimate frequency like this:
>>
>> f = events / time
>>
>> while, as a practical matter, the average period time becomes
>>
>> t = time / events
>>
>> In modern counters (that is, starting from the early 70s) we can
>> interpolate time to achieve better time-resolution around the integer
>> number of events.
>>
>> This is all nice and dandy, but now consider that the start and stop
>> events are instead represented by time-stamps against some clock x,
>> such that for the measurements we have
>>
>> time = x_stop - x_start
>>
>> This does not really change anything for the measurements, but it
>> helps to bridge over to the measurement of Allan deviation for
>> multiple tau.
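>>
>> As a toy example of frequency estimation from time-stamps (Python;
>> the numbers are made up):
>>
>>     # frequency estimate from two time-stamps bracketing a known
>>     # number of counted events
>>     def frequency(x_start, x_stop, events):
>>         return events / (x_stop - x_start)
>>
>>     # 10_000_000 cycles counted over a 1.000 s stamp difference
>>     print(frequency(0.0, 1.0, 10_000_000))   # 10000000.0, i.e. 10 MHz
>>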
>> It turns out that trying to build a standard deviation for the
>> estimated frequency becomes hard, which is why a more indirect method
>> had to be applied: the Allan deviation fills the role of the standard
>> deviation for the frequency estimate of two phase samples spaced the
>> time-base tau apart. As we now combine the counter's noise floor with
>> that of the reference, the Allan deviation plot shows slopes of
>> different directions due to the different noises. The lowest point on
>> the curve is where the least deviation of the frequency measurement
>> occurs.
>>
>> Due to the differing characteristics of a crystal oscillator versus a
>> rubidium, cesium or hydrogen maser, the lowest point occurs at
>> different taus and gives different values. Lower is better, so that
>> is where I should place the time-base for my frequency measurement.
>> This may be at 10 s, 100 s or 1000 s, which means that the frequency
>> measurement should use start and stop measurements with that spacing.
>>
>> OK, fine. So what about TimeLab in all this? Well, as we measure with
>> a TIC we collect a bunch of phase samples at some base rate, such as
>> 10 Hz or whatever. TimeLab and other tools can then calculate the
>> Allan deviation for a number of different taus simply by using three
>> samples spaced tau apart, doing that algorithmically for each tau.
>> One then collects a number of such measurements to form an average:
>> the more, the better the confidence interval we can put on the Allan
>> deviation estimate, but it does not improve our frequency estimation,
>> only our estimate of the uncertainty of that frequency estimate at
>> that tau. Once you have that Allan deviation plot, you can establish
>> the lowest point, and then you only need two phase samples to
>> estimate frequency.
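>>
>> A minimal sketch of that three-sample Allan estimate and the final
>> two-sample frequency estimate (Python; a bare illustration of the
>> formulas, not TimeLab's code):
>>
>>     # one Allan variance term from three phase samples spaced tau
>>     # apart: x(t), x(t + tau), x(t + 2*tau)
>>     def allan_term(x0, x1, x2, tau):
>>         return (x2 - 2 * x1 + x0)**2 / (2 * tau**2)
>>
>>     # average many such terms and take the square root for ADEV;
>>     # once tau is chosen, the frequency offset needs two samples:
>>     def frequency_offset(x0, x1, tau):
>>         return (x1 - x0) / tau    # fractional frequency y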
>>
>> So, the measurement-per-second thing is more a collection of data
>> than a frequency estimation in itself.
>>
>> Cheers,
>> Magnus
>>
>>
>
> _______________________________________________
> time-nuts mailing list -- time-nuts at lists.febo.com
> To unsubscribe, go to http://lists.febo.com/mailman/listinfo/time-nuts_lists.febo.com
> and follow the instructions there.



