[time-nuts] Re: Counter internal resolution error

Magnus Danielson magnus at rubidium.se
Sun Mar 19 00:35:16 UTC 2023


Hi Bob,

That's the effect of quantization of time as such. All counters have 
it. The single-shot resolution creates a staircase transfer function 
through which any input phase is represented. In the ideal case that 
staircase is linear, but what we have discussed is an uneven 
distribution of the steps due to leakage, which makes it non-linear. If 
we for a moment consider just the fully linear case, with all steps 
equal, then the actual value and the staircase differ, and the error 
signal forms a sawtooth.
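
To make that concrete, here is a small sketch of my own (made-up step 
size, nothing instrument-specific) of the staircase and the sawtooth 
error it leaves behind:

import numpy as np

q = 25e-12                        # quantization step, e.g. a 25 ps single-shot resolution
t = np.linspace(0, 10 * q, 1000)  # true input time, spanning ten steps
staircase = np.round(t / q) * q   # what the counter reports
error = staircase - t             # sawtooth, bounded by -q/2 and +q/2
print(error.min(), error.max())   # approximately -12.5 ps and +12.5 ps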

Consider now that we aim to measure the frequency of a signal: we get a 
start signal coming in at some true time TA and a stop signal at some 
true time TB. These will be quantized onto the measured scale, and each 
of those time-stamps will then sample the error curve. As you now take 
the estimate f = Events/(TB-TA), you actually do it with the quantized 
versions of TA and TB, so in terms of the error function you will see 
the effect of the sawtooth. Your actual frequency, as you attempt to 
model it as a rational number, will probe the sawtooth errors in 
different ways. Depending on how close you are, you may fit very nicely 
onto the sawtooth, in which case the errors cancel, or, more often, not 
very well, in which case you fare worse. On average you experience that 
1/sqrt(12) number that pops out of the geometry. There is an HP 
app-note showing the error, which is why you say "HP counters".
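
That 1/sqrt(12) is just the RMS value of the sawtooth. A quick 
numerical check, again my own sketch with illustrative numbers:

import numpy as np

rng = np.random.default_rng(42)
q = 25e-12                            # 25 ps step
t = rng.uniform(0, 1e-6, 1_000_000)   # true event times, phase uniform over the step
err = np.round(t / q) * q - t         # quantization error, uniform on [-q/2, q/2]
print(np.std(err))                    # ~7.22e-12 s
print(q / np.sqrt(12))                # ~7.22e-12 s, the 1/sqrt(12) number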

Now, curious as I am, I set about investigating the 1/tau shape, 
knowing that noise and other signals tend to smooth out quantization on 
average. This is done to audio and other processes, where it is called 
dithering; it is done to magnetic tape, where it is called the bias 
tone; but it is also done in counters. If you pick up the good old 
HP5328A with OPTION 040, 041 or 042, or the HP5328B, and turn the 
measurement knob to TI AVG, the front-end digitization card swaps the 
regular 10 MHz reference for a 100 MHz PLL locked to a 10 MHz that has 
been deeply phase-modulated with the white noise of a diode. This 
modulates the phase location of the 100 MHz so that the phase of the 
input signal is sampled at more phase locations, thus probing the error 
function in more places, which smooths out the net effect of the error 
function.
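
The effect is easy to reproduce numerically. A minimal sketch of the 
dithering idea (my own toy numbers, not the HP5328A circuit): average 
many quantized readings of the same fixed interval, with and without 
added noise. Without dither the average is stuck on a staircase step; 
with enough dither it converges on the true value.

import numpy as np

rng = np.random.default_rng(1)
q = 10e-9                  # 10 ns step, i.e. a 100 MHz clock
true_ti = 37.3e-9          # true time interval, deliberately not a multiple of q
N = 100_000

def measure(dither_rms):
    noise = rng.normal(0, dither_rms, N)            # phase dither added before quantization
    readings = np.round((true_ti + noise) / q) * q
    return readings.mean()

print(measure(0.0) - true_ti)    # without dither: stuck ~2.7 ns off, on the nearest step
print(measure(q) - true_ti)      # with dither: residual error on the order of tens of ps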

I wrote a paper on that, which Demetrios has had the mixed fortune of 
reading, as I ended up not doing a good job of explaining it all. 
However, I analyzed what noise does: it starts to round off the sharp 
edge when there is a little noise, and the more noise you apply, the 
more of the large step is averaged out. Plotting the overall error, one 
can see that for very little noise the noise is actually exaggerated by 
quantization, but the more noise you add, the smaller the error. This 
is a form of compression, where low levels of noise experience gain, 
and that gain is reduced the higher the noise is in relation to the 
quantization step. With enough noise the error reaches a minimum, since 
the noise is actually somewhat over-suppressed, until, for noise above 
the quantization noise, the noise is characterized fairly well after 
quantization, so the gain is about 1.
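
The compression is visible in a simple simulation as well. This is my 
own reconstruction of the argument, not the plot from the paper: sweep 
the injected noise level and look at the noise gain through the 
quantizer.

import numpy as np

rng = np.random.default_rng(7)
q = 1.0                            # step normalized to 1
offsets = rng.uniform(0, q, 200)   # true values spread over one step

for sigma in (0.02, 0.1, 0.3, 0.5, 1.0):
    gains = []
    for x in offsets:
        r = np.round((x + rng.normal(0, sigma, 4000)) / q) * q
        gains.append(r.std() / sigma)   # output noise relative to input noise
    print(f"sigma/q = {sigma:4.2f}  mean gain = {np.mean(gains):.2f}")
# Small sigma: the gain sits well above 1, the quantizer exaggerates the noise.
# Sigma around one step and above: the gain settles toward 1, the noise is
# characterized fairly well after quantization.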

So, if averaging your estimates does not improve them, you simply have 
too little white noise for your quantization. You need to add noise to 
improve your measurements.

I did try an HP5328A with GPIB, pulled the data into TimeLab and tried 
different settings. It was doing better than the traditional analysis 
of the HP engineers had anticipated according to the manual. I can try 
to dig that measurement up, or redo it.

That 1/tau slope is not all random noise. It is systematic noise mixed 
with random noise, and they interact to modulate each other.

Another thing: systematic noise rolls off quicker with MDEV, since the 
averaging over the different points of the error function tends to do 
just that, and a delta-counter frequency estimate benefits from the 
same mechanism. The same goes for an omega-counter and PDEV.
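
Here is a minimal sketch of that, using the standard ADEV and MDEV 
estimators on a purely systematic sawtooth in the phase data (amplitude 
and period are arbitrary):

import numpy as np

def adev(x, tau0, m):
    d = x[2*m:] - 2*x[m:-m] + x[:-2*m]           # second differences of phase
    return np.sqrt(0.5 * np.mean(d**2)) / (m * tau0)

def mdev(x, tau0, m):
    d = x[2*m:] - 2*x[m:-m] + x[:-2*m]
    s = np.convolve(d, np.ones(m), 'valid') / m  # the extra phase averaging of MDEV
    return np.sqrt(0.5 * np.mean(s**2)) / (m * tau0)

tau0 = 1.0
n = np.arange(100_000)
x = 1e-9 * ((n * (np.pi / 23)) % 1.0 - 0.5)      # sawtooth systematic, ~1 ns p-p

for m in (1, 10, 100, 1000):
    print(m, adev(x, tau0, m), mdev(x, tau0, m))
# ADEV falls roughly as 1/tau on the sawtooth; MDEV falls faster, since the
# moving average visits many points of the error function and cancels them.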

So, in conclusion, the final resolution of your counter will be limited 
by the single-shot resolution and its systematics, by a lack of white 
noise to average out the quantization steps, and by the lack of an 
averaging phase/frequency estimator. Addressing these is the way to 
improve beyond the single-step resolution for arbitrary frequencies.

The exact same thing happens on the output side of a DDS. It's the same 
thing happening in reverse order, so the error function creates all the 
spurs you see. As you analyze DDSes, you have sawtooth functions with 
various periods overlaid on each other.
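
A toy model of DDS phase truncation shows it directly (my own sketch, 
with an arbitrary tuning word): the bits dropped between the phase 
accumulator and the sine lookup table are exactly a sawtooth error on 
the phase, and in the spectrum that sawtooth turns into discrete spurs.

import numpy as np

acc_bits, lut_bits = 32, 10    # accumulator width vs. phase bits kept for the LUT
ftw = 0x0ABCDE01               # frequency tuning word, arbitrary and spur-rich
n = np.arange(1 << 16)

phase = (n * ftw) % (1 << acc_bits)       # phase accumulator
trunc = phase >> (acc_bits - lut_bits)    # truncated phase fed to an ideal sine LUT
out = np.sin(2 * np.pi * trunc / (1 << lut_bits))

spec = 20 * np.log10(np.abs(np.fft.rfft(out * np.hanning(len(out)))) + 1e-12)
spec -= spec.max()
k0 = spec.argmax()
spec[max(0, k0 - 50):k0 + 51] = -300      # mask the carrier and its window leakage
print("largest spur: %.1f dBc" % spec.max())
# With 10 phase bits, the worst truncation spur lands on the order of
# -6 dB per bit, i.e. around -60 dBc.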

So much fun.

Cheers,
Magnus

On 2023-03-18 13:17, Bob Camp wrote:
> Hi
>
> Some of the HP counters have a leakage path between the reference input and
> the measurement inputs. This shows up as frequency measurement issues at
> the reference frequency and at other frequencies with fractional relations to it.
>
> Bob
>
>> On Mar 17, 2023, at 7:03 PM, Demetrios Matsakis via time-nuts <time-nuts at lists.febo.com> wrote:
>>
>>    In fact, one of our best engineers concluded that there was leakage
>>    across the inputs, as Magnus mentioned.  I thought at the time he had
>>    measured it, but I am not 100% sure of that.
>>
>>      On Mar 17, 2023, at 13:47, Magnus Danielson <magnus at rubidium.se>
>>      wrote:
>>
>>    
>>
>>      I also recall one paper, relating to laser-ranging measurements of
>>      the Moon, which also looked at the temperature dependence of
>>      counters; the SR620 showed more sensitivity than some other
>>      counters. For some measurement purposes the impact is less than for
>>      others.
>>
>>      A fun experiment would be to use a delay-stepper to plot this. I
>>      have accumulated equipment for that over the years, with increasing
>>      resolution and performance, but never got around to it. It would be
>>      a good little practical experiment now that I am able to steer the
>>      Colby DL10 programmable trombone delay.
>>
>>      There are two common causes of non-linearity. One is the
>>      interpolator itself, where the error-pulse shaper as well as the
>>      pulse-to-voltage converter have non-linearities. The other is
>>      leakage, where either the clock or the other input shifts the
>>      trigger point due to lacking isolation. Such non-linearities can be
>>      handled through the measurement setup and at times with averaging.
>>
>>      Some properties can be managed through wise use of the
>>      autocalibration.
>>
>>      Then again, most of the time I do not bother to go the extra
>>      stretch, but it is good to know the effects are there so one can
>>      consider them and, if needed, cope with them.
>>
>>      So, time to shut down the computer, check out and leave Vancouver
>>      after the WSTS conference.
>>
>>      Cheers,
>>      Magnus
>>
>>    On 2023-03-17 16:57, Demetrios Matsakis wrote:
>>
>>      I don’t know how SR counters are today, but when we were upgrading
>>      our infrastructure over a decade ago we found other counters had
>>      better linearity.  Rovera et al.’s open-access article has a good
>>      discussion of these issues, although of course you need to have one
>>      if you are going to experiment.  See G. D. Rovera, M. Siccardi,
>>      S. Römisch, and M. Abgrall, “Time delay measurements: estimation of
>>      the error budget”, Metrologia 56, 035004 (2019).
>>
>>    On Mar 17, 2023, at 9:46 AM, Magnus Danielson via time-nuts
>>    <time-nuts at lists.febo.com> wrote:
>>
>>    Dear Michael,
>>    On 2023-03-16 08:17, Michael Wouters via time-nuts wrote:
>>
>>      Dear time-nuts
>>      Counter specs often include an “internal resolution” error. For
>>      example, the SR620 specs say that it is 25 ps in single-shot, but
>>      this can be reduced to 4 ps with sufficient, repeated measurements.
>>      Can anyone offer any enlightenment as to the origin of this error,
>>      and the statistical distribution it has? I mentioned the SR620, but
>>      information about the 53230A would be interesting too.
>>
>>    First of all, the single-shot resolution is somewhat of a hallmark
>>    measure when it comes to counters.
>>    The interpolator resolution is part of it, but consider that there
>>    exist non-linearities in the interpolator which make the error
>>    larger. I recall there being a plot of the non-linearity in the SR620
>>    manual.
>>    It is not uncommon for the interpolator resolution to be better than
>>    the non-linearities, but the latter may be more subtle to most.
>>    Averaging can help, but depending on specifics, it's hard to give a
>>    number.
>>    Cheers,
>>    Magnus
>>
>>      Cheers
>>      Michael
>>
>> _______________________________________________
>> time-nuts mailing list -- time-nuts at lists.febo.com
>> To unsubscribe send an email to time-nuts-leave at lists.febo.com



