[time-nuts] Cheap jitter measurements

Hal Murray hmurray at megapathdsl.net
Wed Apr 11 08:58:10 UTC 2018


gem at rellim.com said:
> It tests the time to do two back to back clock_gettime().

That's the time it takes to read the clock.  That's not what I mean by 
granularity, but I think I see how you might use that word.  The comment at 
the top of the code says "latency".
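
For the record, a minimal sketch of that back-to-back measurement (this is 
not the code under discussion; CLOCK_MONOTONIC is just an assumption):

    /* Sketch of a back-to-back clock read; not the code under
       discussion.  CLOCK_MONOTONIC is an assumption. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        clock_gettime(CLOCK_MONOTONIC, &b);
        printf("back-to-back clock_gettime: %ld ns\n",
               (long)(b.tv_sec - a.tv_sec) * 1000000000L
               + (b.tv_nsec - a.tv_nsec));
        return 0;
    }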

When I hear "granularity", I think of the tick size of the clock.  A big tick size is easy to measure: read the clock, spin reading until it changes, and subtract.
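
A sketch of that spin-until-it-changes approach (CLOCK_MONOTONIC assumed 
again):

    /* Sketch: spin until the clock value changes, twice.  The
       first spin lands on a tick boundary; the second measures
       one full tick.  Assumes the tick is much bigger than the
       read latency. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec prev, cur;

        clock_gettime(CLOCK_MONOTONIC, &prev);
        do {    /* spin to a fresh tick boundary */
            clock_gettime(CLOCK_MONOTONIC, &cur);
        } while (cur.tv_sec == prev.tv_sec && cur.tv_nsec == prev.tv_nsec);

        prev = cur;
        do {    /* spin to the next tick; the delta is one tick */
            clock_gettime(CLOCK_MONOTONIC, &cur);
        } while (cur.tv_sec == prev.tv_sec && cur.tv_nsec == prev.tv_nsec);

        printf("tick size: %ld ns\n",
               (long)(cur.tv_sec - prev.tv_sec) * 1000000000L
               + (cur.tv_nsec - prev.tv_nsec));
        return 0;
    }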

Things get interesting if the tick size is less than the time it takes to read the clock.  In that case, you may be able to see the tick size with a histogram of the times to read the clock.
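
A sketch of the histogram idea (the bin width, sample count, and clock are 
arbitrary choices):

    /* Sketch of the histogram approach: many back-to-back read
       pairs, deltas bucketed into 1 ns bins.  With a tick smaller
       than the read latency, the populated bins should cluster at
       multiples of the tick.  Sample count and bin range are
       arbitrary. */
    #include <stdio.h>
    #include <time.h>

    #define SAMPLES 100000
    #define BINS    200     /* 1 ns bins: 0..199 ns */

    int main(void)
    {
        static long hist[BINS];
        struct timespec a, b;

        for (int i = 0; i < SAMPLES; i++) {
            clock_gettime(CLOCK_MONOTONIC, &a);
            clock_gettime(CLOCK_MONOTONIC, &b);
            long ns = (long)(b.tv_sec - a.tv_sec) * 1000000000L
                    + (b.tv_nsec - a.tv_nsec);
            if (ns >= 0 && ns < BINS)
                hist[ns]++;
        }
        for (int i = 0; i < BINS; i++)
            if (hist[i])
                printf("%3d ns: %ld\n", i, hist[i]);
        return 0;
    }

The 200 ns range would need widening to catch something like the Pi 1's 
1 microsecond tick.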

With calibration, I can correct for the time it takes to read the clock.  I'm not sure how to do the calibration.  In some cases it cancels out.  Cache misses may be more significant.


PS: Try running that code on an ARM.  And consider reading the clock 3 times and collecting 2 sets of data.  The idea is that the first read takes all the cache misses, so the time for the second read should be faster and cleaner.
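
A sketch of the triple-read idea (same assumptions as above):

    /* Sketch of the triple-read idea: the first delta absorbs the
       cache misses, so only the second delta is the clean sample. */
    #include <stdio.h>
    #include <time.h>

    static long delta_ns(struct timespec *a, struct timespec *b)
    {
        return (long)(b->tv_sec - a->tv_sec) * 1000000000L
             + (b->tv_nsec - a->tv_nsec);
    }

    int main(void)
    {
        struct timespec t1, t2, t3;

        clock_gettime(CLOCK_MONOTONIC, &t1);
        clock_gettime(CLOCK_MONOTONIC, &t2);
        clock_gettime(CLOCK_MONOTONIC, &t3);
        printf("cold delta: %ld ns, warm delta: %ld ns\n",
               delta_ns(&t1, &t2), delta_ns(&t2, &t3));
        return 0;
    }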

I see a tick size of 1 microsecond on a Pi 1, and 52 ns on a Pi 2 and Pi 3.

-- 
These are my opinions.  I hate spam.
