[time-nuts] iPhone keeping better time?
hmurray at megapathdsl.net
Wed Nov 16 16:43:47 EST 2011
mail at miguelgoncalves.com said:
> On every sync, the timestamp returned from the NTP server is on the 6 ms
> mark. This means that the local clock of the Arduino drifts a lot. I am
> installing a realtime clock (Chronodot) this weekend that has an accuracy of
> +/- 3.5 ppm from -40C to 85C (I read somewhere that between 0 and 40C it is
> 2 ppm). This RTC can output a square wave signal at 1 Hz and Arduino can
> read that and use it to update the display at the exact second mark.
How does the timekeeping on the Arduino work?
My guess is that there is an interrupt that does something like:
time = time + delta
where delta is calculated from the interrupt frequency.
The first step is to make sure the software is doing the arithmetic
correctly. A common bug in that area is to be using the wrong value of delta.
You may need to fix the code to handle fractional parts of delta. The code
is simple after you see it. It's just standard multi-precision integer
arithmetic with the decimal point in the middle rather than the right end.
The code is roughly:
time = time + delta;
fraction = fraction + delta2;
if (fraction overflowed) time = time + 1;
You may need to handle delta2 being negative.
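That update can be sketched as an interrupt handler. This is a minimal sketch, not Arduino library code: it assumes a nominal 1 kHz tick with time counted in milliseconds, a 32-bit fraction word, and a signed per-tick correction delta2 (all of those are my choices, not anything from the original post). Overflow of the unsigned fraction word supplies the carry; the negative-delta2 case shows up as a borrow.

```c
#include <stdint.h>

/* Assumed layout: 'time_ms' counts milliseconds, the timer interrupt
 * fires nominally once per millisecond (delta = 1), and 'fraction' is
 * a 32-bit fixed-point fraction of a millisecond.  delta2 is the
 * per-tick rate correction; signed, so a fast crystal gets a negative
 * value. */
static uint32_t time_ms;
static uint32_t fraction;
static const uint32_t delta = 1;
static int32_t delta2 = 0;          /* set from calibration */

void tick_isr(void)
{
    uint32_t old = fraction;
    fraction += (uint32_t)delta2;   /* wraps mod 2^32 */
    time_ms += delta;
    if (delta2 >= 0) {
        if (fraction < old)         /* carry out of the fraction word */
            time_ms += 1;
    } else {
        if (fraction > old)         /* borrow: fraction wrapped downward */
            time_ms -= 1;
    }
}
```

On a real AVR you would hang this off a timer-compare interrupt and read time_ms with interrupts briefly disabled, since it is wider than one atomic access.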
You need enough bits in fraction so the bottom bit can represent the accuracy
you want. 1 PPM is good for a second per week. If you are a time-nut, you
probably want better than that. There is no point in going overboard unless
your temperature is very stable. I'd probably shoot for PPB but be happy
with several bits closer to PPM if that was convenient.
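To put numbers on that: with the same assumed layout as above (32-bit fraction word, 1 kHz tick), a one-LSB change in delta2 shifts the rate by about 0.2 ppb, so a single 32-bit word is already well past ppb resolution. A quick check:

```c
/* Fractional-frequency resolution of a 1-LSB change in delta2,
 * assuming a 32-bit fraction word and a 1 kHz tick (time in ms).
 * These parameters are illustrative assumptions, not from the post. */
double rate_resolution(void)
{
    double lsb_ms  = 1.0 / 4294967296.0; /* one LSB of the fraction, in ms */
    double tick_ms = 1.0;                /* one tick advances time by 1 ms */
    return lsb_ms / tick_ms;             /* ~2.3e-10, about 0.23 ppb */
}
```

A faster tick or a narrower fraction word scales this proportionally, which is why the fraction width only matters relative to the tick period.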
The second step is to adjust delta2 to match what your crystal is actually
doing. You could measure it with a counter and calculate, or let it run long
enough so you can see the difference with a scope on the update to the LED
display compared to something known-good like a PPS signal. If it's way out,
you can probably see the drift on a clock display. You just have to wait long
enough.
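Turning an observed drift into a correction is simple arithmetic: seconds gained over the measurement interval gives the fractional frequency error, and that scales into delta2 units. Here is a sketch under the same assumptions as before (1 kHz tick, 32-bit fraction word); the helper name is mine, not from the post.

```c
#include <stdint.h>

/* Hypothetical helper: given an observed drift (seconds gained, fast
 * positive) over an interval, compute the per-tick correction for a
 * 32-bit fraction accumulator.  Assumes a 1 kHz tick, so one tick is
 * 1 ms and the fraction's LSB is 2^-32 ms. */
int32_t delta2_from_drift(double gained_s, double interval_s)
{
    double frac_freq = gained_s / interval_s;   /* e.g. 1e-6 = 1 ppm */
    /* The clock runs fast by frac_freq, so slow it by that much per
     * tick; one tick is 1 ms, expressed in fraction LSBs: */
    return (int32_t)(-frac_freq * 4294967296.0);
}
```

For example, a clock that gains 0.6 s over a week (about 1 ppm) would want delta2 near -4260 in these units.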
I started working for Xerox back in 1976. Shortly after I got there, Ed
Taft fixed a bug in the Alto timekeeping code. The system was designed
around a 170 ns cycle time. Oscillators come in megahertz rather than
nanoseconds, so they ordered 5.88 MHz crystals. The software used 170 ns.
If I did the math right, the difference is 400 ppm. That's enough to be
annoying, but good enough that it won't be noticed during the early stages of
software development. The Altos reset their time from a time server on each
reboot. That happened often enough to hide the problem for a while.
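The arithmetic is easy to check: 5.88 MHz has a true period of about 170.068 ns, and treating it as a flat 170 ns is a 400 ppm error, or roughly 35 seconds per day. A one-liner to verify:

```c
/* Fractional-frequency error, in ppm, of assuming a 170 ns cycle
 * for a 5.88 MHz oscillator. */
double alto_error_ppm(void)
{
    double actual_ns  = 1e9 / 5.88e6;  /* true period: ~170.068 ns */
    double assumed_ns = 170.0;         /* what the software used */
    return (actual_ns - assumed_ns) / actual_ns * 1e6;  /* 400 ppm */
}
```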
Back in those days, we calibrated the crystal on the time servers by hand.
The config file had the correction in seconds per day. (I think.) You could
get the rough calibration in a day and much better if you waited a week.
Mostly, they were in air-conditioned machine rooms. In hindsight, it was
primitive, but it worked well enough.
These are my opinions, not necessarily my employer's. I hate spam.