[time-nuts] SR620 binary dump

Magnus Danielson magnus at rubidium.dyndns.org
Sun Mar 9 16:59:30 UTC 2014


Hi Tom,

On 08/03/14 22:12, Tom Van Baak wrote:
>> You make me curious. Any specific issue you're having?
>>
>> I haven't tried doing any programming to the SR620 yet, but I have some
>> plans to do it.
>>
>> Cheers,
>> Magnus
>
> Hi Magnus,
>
> Thanks for asking. Here's an update.
>
> I was curious why time interval or period measurements give slightly
> different results than frequency measurements on a SR620. Maybe the
> 5370 too. We often advise people to use time interval mode and not
> frequency mode. Volker's posting got me to dig further. His
> frequency-data ADEV plots looked too different from his phase-data ADEV plots.



> It's not just that frequency introduces dead time, but frequency also
> gives less precise results. But why is this. Ignoring other sources
> of noise, the 620 interpolator has 2.7 ps numerical granularity
> (1 / 90 MHz / 4096). It seemed to me that regardless of time interval,
> period, or frequency mode selection, the identical quantization and
> noise levels should be evident regardless how one measures an
> external source(s).

Almost. The frequency measurement gate (pages 86-87) creates a different 
insertion delay than any of the TI modes, and therefore requires a 
separate insertion delay calibration, as found in cal byte 50 (pages 75 
and 78). According to the manual no manual calibration is needed, but I 
assume that autocal handles it. Typical time errors will scale with the 
time-base, which is a great way to discover their presence.

It is actually fairly common that counters have separate signal paths for 
TI and frequency modes, and hence different needs in this regard.

However, you are getting at the conversions, and that can indeed be a 
factor too. I've seen this before with one counter.

> So there are two tempting commands to explore. One is BDMP
> (binary dump) which avoids ascii conversion and gives raw 64-bit
> binary values. Some people use it to dramatically increase
> measurement speed over GPIB. The other is EXPD (x1000 expanded
> resolution). My plan was to use two different exceptionally pure
> but drifting 10 MHz sources and see to what extent the 620 could
> measure/compare them.

For this systematic effect you might start by just measuring the 
time-base at various times and with various methods and see where it 
gets you. Make sure to read out byte 50 and see if it makes sense for 
compensation. Also see what Autocal does to it and to the result.

> The bad news is that EXPD is not allowed with frequencies above
> 5 digits (>= 1 MHz) so it doesn't apply to 10 MHz inputs. That
> experiment waits while I make a clean <1 MHz source (e.g.,
> 10 MHz/12 = 833.3333 kHz).

The built-in 1 kHz source is a good start. TADD-2 100 kHz next.

> In a true time-stamping counter one continuously counts events
> and continuously counts time. The 620 isn't continuous but it
> does have two registers, which the manual refers to as the
> event or cycle counter and the time interval counter.

This is typical of the reciprocal counter principle. You clear the 
counters for the next measurement interval. Zero dead-time counters like 
the 5371/5372 have continuously running counters; on an event they are 
sampled and stored while the counters keep ticking away.
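
As a rough sketch of the reciprocal principle (the names and the 90 MHz 
scaling below are my own illustration, not the SR620's actual register 
layout):

    /* Reciprocal counting: frequency is the event ("top") count divided
       by the elapsed time-base ("bottom") count, scaled by the time-base
       frequency. Illustration only. */
    #include <stdint.h>

    #define TIMEBASE_HZ 90.0e6  /* SR620 time-base, per the discussion above */

    double reciprocal_freq(uint64_t event_count, uint64_t timebase_count)
    {
        /* f_in = N_events / (N_clock / f_clock) = N_events * f_clock / N_clock */
        return (double)event_count * TIMEBASE_HZ / (double)timebase_count;
    }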

> Better yet, page 97 also refers to them as the "top" and "bottom" counter. Hint, hint.

Your page numbering does not match my hardcopy of the manual.

> The binary dump data matches my expectations for period and interval. But for frequency, the 620 does not return the binary values of either the top or bottom counters. Instead it does a fixed point division and returns a scaled top/bottom binary quotient.

It gives you the frequency in binary format. It's not the raw format of 
the HP537x counters.

> The net effect is that the ideal BDMP value for a 1 second 10 MHz measurement would be 0x001C71C71C71C71C. (note 0x1C7 / 4095 = 455 / 4095 = 10 / 90, as in 90 MHz). But you never actually get this hex value with a 10 MHz input. With a couple hundred 1-second measurements I saw only one of three values each time:
>
>      0x1c71c71c721bf3 = 8006399337569267 = 10000000.000027126 Hz
>      0x1c71c71c7270c9 = 8006399337590985 = 10000000.000054251 Hz
>      0x1c71c71c72c5a0 = 8006399337612704 = 10000000.000081379 Hz
>
> The delta among these values is 21718 or 21719 counts. So the lower ~16 bits aren't really "noise", they're just deterministic quantized residuals from the long division. And that explains why when the scaled binary values are converted to decimal Hz you get the odd-looking values above. The granularity, or resolution, is 27 uHz / 10 MHz = 2.7e-12 = 2.7 ps. Nice, yes?

To be expected. The bias is there.
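
For what it is worth, your three values are consistent with the quotient 
being a 56-bit fixed-point fraction of the 90 MHz time-base, i.e. 
f = raw * 90 MHz / 2^56. A minimal sketch of that interpretation (my 
guess from your numbers, not something I have from the manual, and 
separate from your 620bdmp.c):

    #include <stdio.h>
    #include <stdint.h>

    /* Interpret one BDMP frequency word as a 56-bit fraction of 90 MHz. */
    static double bdmp_to_hz(uint64_t raw)
    {
        return (double)raw * (90.0e6 / 72057594037927936.0);  /* 90 MHz / 2^56 */
    }

    int main(void)
    {
        /* One of the observed values; prints about 10000000.000027126 */
        printf("%.9f\n", bdmp_to_hz(0x1c71c71c721bf3ULL));
        return 0;
    }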

> Of hundreds of samples, they were all +1, +2, and +3 times 2.7 ps above nominal. A +0 would give 10000000.000000000 Hz.
>
> These BDMP readings have (much) greater resolution than what you get with plain strt;*wai;xall? values. What one normally sees over RS232 or GPIB are ascii readings like:
>
>      1.00000000000E7
>      1.00000000002E7
>      1.00000000001E7
>      1.00000000001E7
>      1.00000000000E7
>
> Note these readings are 0, 100, or 200 uHz from nominal; that is, the granularity is 100 uHz / 10 MHz = 1e-11.

Again with a bias.

> Using XREL exposes an additional digit of resolution. Set XREL to 1E7 and then you get ascii readings like:
>
>      1.E-4
>      -6.E-5
>      2.E-4
>      -1.4E-4
>      .0E-6
>      -1.9E-4
>
> So not only does XREL give you an extra digit of precision but it also transmits fewer bytes. The granularity in this case is 10 uHz / 10 MHz = 1e-12.

I assume you also mean /tau for all these.

> The hope is that XALL, XREL/XALL, and BDMP agree. But each has different truncation, rounding, and granularity rules. My recommendation is that when making high-precision SR620 10 MHz frequency measurements you use XREL to gain back that least significant digit otherwise lost to truncation. Over RS232/GPIB that's "xrel1e7" but you can also do it from the front panel.

Indeed. A good find.
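
For anyone post-processing those strings: assuming XREL just reports the 
offset from the 1E7 Hz reference in Hz, getting back to absolute and 
fractional frequency is a one-liner. A small sketch (the parsing and the 
1 s gate are my assumptions):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const double f_ref = 1.0e7;        /* reference set with "xrel1e7" */
        const char *reading = "-1.9E-4";   /* one of the example readings */

        double offset_hz = atof(reading);  /* XREL value taken as an offset in Hz */
        printf("f = %.6f Hz, y = %.3e\n", f_ref + offset_hz, offset_hz / f_ref);
        return 0;
    }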

> I'll try this again with EXPD turned on to see what effect that has. I'll also adjust the interpolator calibration tables.
>
> The code I used to grab and decode the SR620 binary data is at http://leapsecond.com/tools/620bdmp.c
>
> Let me know if you want to jump into this too, or have any corrections/comments.

You know I have comments. :)

You have touched on a subject that I have been looking at. When dealing 
with the 5370/5372 you can get raw readings out. The 5372 programmer's 
manual is a lovely read once you realize that it teaches you to grab the 
raw data from the hardware, and all the processing available on the 
front panel is more or less completely described. You can play nice 
tricks there without losing precision anywhere. The same goes for the 
5370 if you look closely.

In that regard, you have disclosed some of the nitty-gritty details 
about the SR620 binary modes which I think many miss. Toss in my 
reminder on calibration byte 50.

There are several sources by which jitter and biases may be introduced 
in surprising ways. This is a good illustration.

I have no time to set up an experiment today, as I will be travelling to 
the US next week.

Cheers,
Magnus


