[time-nuts] Re: Timestamping counter techniques : dead zone quantification

Erik@tinySA erik at kaashoek.com
Mon Feb 7 15:57:34 UTC 2022


Tom,
Thanks, the concept of de-trending is understood, and the first 
sub-sample is a very good estimate of the trend, but it's a bit 
frustrating not to understand how to implement the regression sums in 
integer math when the x interval of the sub-samples is not constant: 
the capture moment depends on the incoming signal edges, so one cannot 
predict when the timer capture interrupt arrives.
In a simulation using actual sub-sample data, de-trending y with a 
constant value (i.e. assuming a constant x interval) increases the 
standard error from around 1e-11 to 1e-9, so this variation in x 
interval seems big enough to matter.
Only when doing the sub-sample calculations in float can I see how to 
de-trend, but floats have insufficient accuracy and doubles would be 
far too slow.
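One idea I want to try (an untested sketch, with made-up names): keep 
the expected trend as a rational num/den and de-trend each sample 
against its actual captured x, so only small residuals enter the sums:

    /* Sketch (untested): de-trend each sub-sample against its actual
     * captured x instead of an assumed constant interval, using only
     * integer math.  The expected trend is the rational num/den, so
     * the scaled residual r = y*den - x*num stays exact and small in
     * int64_t, provided x and y are offset to the start of the batch. */
    #include <stdint.h>

    typedef struct {
        int64_t n;
        int64_t sx, sr;    /* sum of x, sum of scaled residual r */
        int64_t sxx, sxr;  /* sum of x*x, x*r; may need a wider type
                            * (e.g. __int128) for long batches */
    } regsums_t;

    static void accumulate(regsums_t *s, int64_t x, int64_t y,
                           int64_t num, int64_t den)
    {
        int64_t r = y * den - x * num;   /* de-trended, stays small */
        s->n++;
        s->sx  += x;
        s->sr  += r;
        s->sxx += x * x;
        s->sxr += x * r;
    }

The residual slope is m_r = (n*sxr - sx*sr) / (n*sxx - sx*sx), which 
can be evaluated once per batch in double without hurting accuracy, 
and the full slope is then num/den + m_r/den.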
So I still need to do a lot of reading....
Erik.

On 7-2-2022 16:05, Tom Van Baak wrote:
> Erik,
>
> The hp 53132A counter was mentioned in an earlier posting. Check the 
> documentation on the frequency command(s) and also the programming 
> examples in the appendix. Look for words like: "pre-measurement", 
> "expected frequency", and "optimizing throughput". Another good source 
> is the SRS FS740 manual, as well as Pendulum CNT-91 documents.
>
> The least squares fit (regression) is ok in textbooks but, especially 
> for large blocks of timestamp data, you run into loss of precision and 
> range problems, as you've seen.
>
> The trick that I use is to roughly detrend the data before you compute 
> the regression. I know that sounds odd, to detrend before you apply a 
> formula to compute the trend, but when you look at your sums you will 
> see why it works so well. We don't have access to hp or srs source 
> code, but perhaps this is why regression-based counters make use of 
> the expected value.
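>
> Roughly, the idea looks like this in C (a simplified sketch, not the
> actual xystats3.c code; here only y is de-trended and m0 stands for
> the expected slope):
>
>     /* Fit y = m*x + b by first removing an expected slope m0,
>      * then fitting the small residuals; the y sums shrink by
>      * orders of magnitude, so the subtractions below no longer
>      * cancel catastrophically. */
>     void fit_detrended(const double *x, const double *y, int n,
>                        double m0, double *m, double *b)
>     {
>         double sx = 0, sy = 0, sxx = 0, sxy = 0;
>         for (int i = 0; i < n; i++) {
>             double yd = y[i] - m0 * x[i];   /* de-trend first */
>             sx  += x[i];  sy  += yd;
>             sxx += x[i] * x[i];
>             sxy += x[i] * yd;
>         }
>         double mr = (n * sxy - sx * sy) / (n * sxx - sx * sx);
>         *m = m0 + mr;                /* add the trend back in  */
>         *b = (sy - mr * sx) / n;     /* intercept is unchanged */
>     }
>
> The closer m0 is to the true slope, the smaller the de-trended sums,
> and the less precision the regression loses.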
>
> Here are two examples based on 10 000 picPET timestamps, with 
> debug mode turned on [1]:
>
> (1) A not-so-pretty least squares fit directly from raw timestamp 
> data. Note r^2 and steyx are suspect:
>
> sums:
>                833333324.999999880000000 Sxx
>                833333316.906344180000000 Syy
>                833333320.953173520000000 Sxy
>       694444423810844930.000000000000000 Sxy*Sxy
>       694444423810842370.000000000000000 Sxx*Syy
>                        1.000000000000004 Sxy*Sxy/Sxx/Syy
>                833333316.906344180000000 Syy
>                833333316.906347160000000 Sxy*Sxy/Sxx
>                       -0.000002980232239 Syy-Sxy*Sxy/Sxx
> stats:
>         10000.000000   1.000000000000000e+004 n
>           499.950000   4.999500000000000e+002 x_mean
>           499.949998   4.999499977851931e+002 y_mean
>           288.689568   2.886895679907167e+002 x_sdev
>           288.689567   2.886895665887843e+002 y_sdev
>             1.000000   9.999999951438083e-001 m
>             0.000000   2.130461211891088e-007 b
>             1.000000   1.000000000000004e+000 r2
>            -1.#IND00  -1.#IND00000000000e+000 steyx
>
> (2) Since this is tau 0.1 s data, I apply a pre-detrend of 0.10001 to 
> the data before the least squares fit:
>
> sums:
>                        8.333333245853750 Sxx
>                        8.334142635551871 Syy
>                        8.333737930750992 Sxy
>                       69.451187898437823 Sxy*Sxy
>                       69.451187900531593 Sxx*Syy
>                        0.999999999969853 Sxy*Sxy/Sxx/Syy
>                        8.334142635551871 Syy
>                        8.334142635300619 Sxy*Sxy/Sxx
>                        0.000000000251251 Syy-Sxy*Sxy/Sxx
> stats:
>         10000.000000   1.000000000000000e+004 n
>            -0.049995  -4.999499998841863e-002 x_mean
>         26719.148510   2.671914850958520e+004 y_mean
>             0.028869   2.886895679188980e-002 x_sdev
>             0.028870   2.887035873203724e-002 y_sdev
>             1.000049   1.000048562188179e+000 m
>         26719.198507   2.671919850701305e+004 b
>             1.000000   9.999999999698527e-001 r2
>             0.000000   1.585249899616256e-007 steyx
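>
> (Note what the pre-detrend buys: in (1) roundoff in the ~7e17
> products makes Sxy*Sxy exceed Sxx*Syy, so r^2 > 1 and
> Syy-Sxy*Sxy/Sxx goes negative, which is why steyx comes out NaN.
> In (2) the sums are ~1e8 smaller and the residual term stays
> positive, well above roundoff.)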
>
> You can play around with this to determine the right approach given 
> the frequency, batch size, quantization, and noise of your counter.
>
> And again, a suggestion to re-read the TimeLab, TimePod, 53132A, and 
> FS740 literature, even once a week. The more you play with your own 
> counter the more you will understand what's in those manuals. Notice 
> also that these regression-based counters don't have to work with 
> fixed gate times either.
>
> /tvb
>
> [1] See xystats3.c / .exe in my leapsecond.com/tools/ directory.
>
>
> On 2/6/2022 4:40 AM, Erik Kaashoek wrote:
>> 4: I've looked into the math producing the steyx and it's clear 
>> there are insufficient digits (16) in my math; only with low input 
>> frequencies, short gate times and low sub-sample rates can it 
>> reliably produce a relevant (non-zero) number. I have no clue how 
>> to reduce the digit count. I tried subtracting an estimated global 
>> trend, but as the x intervals are not constant that does not work 
>> with the integer math. I tried shifting the Y so the sumxy term 
>> gets lower, but that is insufficient as sumy2 is already > 1e+16 
>> with a 10 MHz input and will be even worse with a 100 MHz input; 
>> sumy2 is > 1e+20 with a 0.1 s gate time. So it seems the steyx is 
>> usable to detect when one is measuring noise, but otherwise only 
>> under very specific conditions. 



