[time-nuts] Re: Is this the right way to compare the short term accuracy of two frequency counters?

Bob kb8tq kb8tq at n1k.org
Mon Jan 24 14:47:18 UTC 2022


Hi

First off, yes, standard deviation is a pretty good way to look at what a counter
is doing. Reducing the answer to time ( = picoseconds ) is usually the easy way
to look at the data. 
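
As a quick illustration (a Python sketch, using the Counter A numbers from
the quoted post below), the fractional standard deviation times the gate
time gives the equivalent time error:

def frac_stddev_to_ps(frac_stddev, gate_time_s):
    # time error (s) ~= fractional frequency deviation * gate time
    return frac_stddev * gate_time_s * 1e12  # result in picoseconds

print(frac_stddev_to_ps(1.0e-10, 1.0))  # ~100 ps at a 1 s gate
print(frac_stddev_to_ps(1.0e-9, 0.1))   # ~100 ps at a 0.1 s gate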

Next up, counters have a *lot* of things that impact what they do. The slew rate
of the input signal is a big one on most counters ( = square waves and sine 
waves will give you different answers).  It is best to do your testing with the
type of signal you are most likely to use. 

Some counters do very odd things when the input and reference are tightly coupled. 
An SR620 counting its reference output is one good example of this. Best practice
is to run two independent sources. One supplies the reference, a second one 
generates the test signal. 

HP counters (and likely some others) have interesting “dead spots” at 10 MHz, at
10 MHz / N and 10 MHz * N. To get the best performance numbers, test with a signal 
that is not in one of these regions. 
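
A rough way to screen a candidate test frequency (a Python sketch; the
"how close is too close" margin here is an arbitrary placeholder, not a
published spec):

def near_dead_spot(freq_hz, ref_hz=10e6, max_n=100, margin=0.001):
    # Flag frequencies within +/- margin (fractional) of ref_hz * N or ref_hz / N.
    for n in range(1, max_n + 1):
        for spot in (ref_hz * n, ref_hz / n):
            if abs(freq_hz - spot) <= margin * spot:
                return True
    return False

print(near_dead_spot(10.000005e6))  # True: right on top of 10 MHz
print(near_dead_spot(9.7e6))        # False with these placeholder settings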

It is worth looking at the ADEV / phase noise of your test signal. You can indeed 
get into trouble there…..

Fun !!!

Bob

> On Jan 24, 2022, at 8:43 AM, Erik Kaashoek <erik at kaashoek.com> wrote:
> 
> For some project I'm trying to establish the short term accuracy of a
> frequency counter versus the gate time.
> As using the Allan Deviation for this type of measurement led to
> extensive discussion over the validity of using ADEV for measuring the
> short term performance of a counter, I tried to find a different, but still
> relevant, way to establish the performance.
> To exclude external and long term factors as much as possible, I'm using a
> single fairly stable OCXO (short term error below 1e-10) to output 10 MHz.
> This 10 MHz goes into an SI5351 as the reference for its PLL, and the SI5351
> outputs two frequencies from the same VCO: one at 10 MHz into input A of
> the counter and one at 10.00003319 MHz into input B of the counter. The
> counter is set up to measure the ratio A/B and to display the STDDEV of
> the ratio over n=100. The STDDEV of counter B is calculated as the square
> root of ( the sum of the squared differences between the measured ratio
> and the average ratio, divided by the number of measurements ).
> I'm aware the SI5351 uses a fractional divider, but I hope the impact is
> below the measurement accuracy required.
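
A minimal sketch of that STDDEV calculation (Python; the readings below
are hypothetical values, not data from either counter):

import math

def ratio_stddev(ratios):
    mean = sum(ratios) / len(ratios)
    # square root of the mean squared deviation from the average ratio
    return math.sqrt(sum((r - mean) ** 2 for r in ratios) / len(ratios))

readings = [0.999996681 + d * 1e-10 for d in (-1, 0, 2, 0, -1)]  # hypothetical
print(ratio_stddev(readings))
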
> Doing this test with two counters gave these results:
> 
> Counter A
> Gate time :  STDDEV
> 1 s : 1.0e-10
> 0.1 s : 1.0e-9
> 0.02 s : 6.5e-9
> 
> Counter B
> Gate time :  STDDEV
> 1 s : 1.3e-9
> 0.1 s : 1.5e-8
> 0.02 s : 1.4e-7
> 
> The results have been verified by performing multiple measurements. Counters
> A and B are both fractional counters that use interpolation.
> 
> The manual of the Agilent 53132A specifies the worst case RMS error of a
> frequency measurement for different gate times and an input frequency of
> 10 MHz as:
> 
> Agilent 53132A
> Gate time : Max RMS error (estimated)
> 1 s :  2e-10
> 0.1 s : 2e-9
> 0.02 s :  5e-8
> 
> Assuming the RMS error and the STDDEV are the same, the way the values step
> with gate time for the Agilent and Counter A seems comparable, but Counter B
> behaves a bit differently at the 0.02 s gate time.
> 
> This leads me to the following questions:
> Is measuring the STDDEV of the ratio of two input frequencies derived from
> the same timebase a valid way to assess the short term measurement accuracy
> of a frequency counter?
> If not, how should this be done?
> If yes, do the numbers I've listed above make sense?
> _______________________________________________
> time-nuts mailing list -- time-nuts at lists.febo.com -- To unsubscribe send an email to time-nuts-leave at lists.febo.com



