[time-nuts] Is this the right way to compare the short term accuracy of two frequency counters?

Erik Kaashoek erik at kaashoek.com
Mon Jan 24 13:43:40 UTC 2022


 For a project I'm trying to establish the short-term accuracy of a
frequency counter versus its gate time.
Using the Allan deviation for this type of measurement led to extensive
discussion over the validity of ADEV for characterizing the short-term
performance of a counter, so I tried to find a different, but still
relevant, way to establish the performance.
To exclude external and long-term factors as much as possible I'm using a
single fairly stable OCXO (short-term error below 1e-10) to output 10 MHz.
This 10 MHz goes into an SI5351 as the reference for its PLL, and the SI5351
outputs two frequencies from the same VCO: one at 10 MHz into input A of
the counter and one at 10.00003319 MHz into input B of the counter. The
counter is set up to measure the ratio A/B and to display the STDDEV of
the ratio over n=100. The STDDEV of counter B is calculated as the square
root of (the sum of the squared differences between the measured ratio and
the average ratio, divided by the number of measurements); a small sketch
of that calculation follows below.
I'm aware the SI5351 uses a fractional divider, but I hope its impact is
below the required measurement accuracy.
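
To describe the calculation unambiguously, here is a minimal Python sketch
of that STDDEV computation; the list of measured ratios is hypothetical
example data, not actual readings:

import math

# Hypothetical example values for the measured A/B ratio; the nominal ratio
# of 10 MHz / 10.00003319 MHz is about 0.999996681.
ratios = [0.999996681, 0.999996679, 0.999996680, 0.999996682]

n = len(ratios)
average = sum(ratios) / n

# Population standard deviation: sqrt( sum((x - average)^2) / n )
stddev = math.sqrt(sum((r - average) ** 2 for r in ratios) / n)

print("average ratio:", average)
print("STDDEV of ratio:", stddev)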
Doing this test with the two counters gave these results:

Counter A
Gate time :  STDDEV
1 s : 1.0e-10
0.1 s : 1.0e-9
0.02 s : 6.5e-9

Counter B
Gate time :  STDDEV
1 s : 1.3e-9
0.1 s : 1.5e-8
0.02 s : 1.4e-7

The results have been verified by performing multiple measurements. Counters
A and B are both fractional counters that use interpolation.

The manual of the Agilent 53132A specifies the worst-case RMS error of a
frequency measurement for different gate times and an input frequency of
10 MHz as:

Agilent 53132A
Gate time : Max RMS error (estimated)
1 s : 2e-10
0.1 s : 2e-9
0.02 s : 5e-8

Assuming the RMS error and the STDDEV are comparable quantities, the way the
Agilent and Counter A scale with gate time looks similar, but Counter B
behaves somewhat differently at the 0.02 s gate time.
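
In case it helps the comparison, here is a minimal Python sketch that just
computes the step factors between successive gate times from the numbers
listed above:

# How much the STDDEV / RMS error numbers quoted above grow for each step
# in gate time (1 s -> 0.1 s -> 0.02 s).
counters = {
    "Counter A":      [1.0e-10, 1.0e-9, 6.5e-9],
    "Counter B":      [1.3e-9, 1.5e-8, 1.4e-7],
    "Agilent 53132A": [2e-10, 2e-9, 5e-8],
}

for name, (s1, s01, s002) in counters.items():
    print("%-15s 1 s -> 0.1 s: x%.1f   0.1 s -> 0.02 s: x%.1f"
          % (name, s01 / s1, s002 / s01))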

This leads me to the following questions:
Is measuring the STDDEV of the ratio of two input frequencies derived from
the same timebase a valid way to assess the short term measurement accuracy
of a frequency counter?
If not, how should this be done?
If yes, do the numbers I've listed above make sense?



