[time-nuts] Leap second to be introduced at midnight UTC December 31 this year

Gregory Maxwell gmaxwell at gmail.com
Fri Jul 22 00:14:33 UTC 2016


On Thu, Jul 21, 2016 at 5:27 PM, Tom Van Baak <tvb at leapsecond.com> wrote:
> Time to mention this again...
>
> If we adopted the LSEM (Leap Second Every Month) model then none of this would be a problem. The idea is not to decide *if* there will be a leap second, but to force every month to have a leap second. The IERS decision is then what the *sign* of the leap second should be this month.

I think a dithered leap second would be a massive improvement over what
we have now; I'd even pay for travel to send someone to advocate it at
whatever relevant standards meeting was needed.
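
To make the quoted LSEM bookkeeping concrete, here is a minimal sketch
of my own (Python; the table and its signs are made up, standing in
for whatever channel the IERS announcement would arrive over):

    # Sketch of TAI-UTC bookkeeping under LSEM: every month ends with
    # exactly one leap second and the IERS announces only its sign.
    monthly_leap_sign = {
        # (year, month): +1 inserts 23:59:60, -1 deletes 23:59:59.
        (2016, 7): +1,
        (2016, 8): -1,
        (2016, 9): +1,
        # ... exactly one entry per month, never "no leap this month".
    }

    def tai_minus_utc(year, month, base_offset=36):
        """TAI-UTC in whole seconds at the start of (year, month).

        base_offset is the known starting offset (36 s as of mid-2016).
        A missing month raises KeyError -- a detectable communications
        failure rather than a silent "no leap"."""
        offset = base_offset
        y, m = 2016, 7
        while (y, m) < (year, month):
            offset += monthly_leap_sign[(y, m)]
            m += 1
            if m > 12:
                y, m = y + 1, 1
        return offset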

But I still think it is inferior to no leap seconds at all (with
timezones adjusted after the several thousand years it would take for
that to become necessary).  The reasons I hold this view are threefold:

(1) The complexity of dealing with leap seconds remains in LSEM. It
wouldn't be as much of a constant source of failure, but correct
handling is a real cost that goes into the engineering of millions of
products/projects. Some of the current practical methods of handling
leap seconds (like shutting off/rebooting critical systems or
discarding data) would also be less reasonable with LSEM. They might
be less needed with LSEM, but I cannot say that they would be
completely unneeded*.
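
As a toy illustration (mine, not from the post) of where that cost
comes from: even a basic "seconds between two UTC timestamps" routine
has to carry a leap table, because datetime/POSIX arithmetic assumes
every day has 86400 seconds:

    import datetime

    # UTC midnights immediately preceded by an inserted 23:59:60
    # (the 2015-06-30 and 2016-12-31 leap seconds).
    LEAP_BOUNDARIES = [
        datetime.datetime(2015, 7, 1),
        datetime.datetime(2017, 1, 1),
    ]

    def elapsed_si_seconds(start, end):
        """Real (SI) seconds elapsed between two UTC datetimes."""
        naive = (end - start).total_seconds()   # ignores leap seconds
        extra = sum(1 for b in LEAP_BOUNDARIES if start < b <= end)
        return naive + extra                    # add back inserted seconds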

(2) I'm not aware of any application that cares greatly about precise
alignment with the _mean_ solar day yet doesn't also need to go on and
apply location-specific corrections.  Those applications could easily
apply the UTC-to-mean-solar adjustment along with their
location-specific adjustment.
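
For concreteness, a sketch of my own (assuming the application already
pulls the published UT1-UTC value, DUT1, from IERS bulletins) of the
correction stack such an application carries anyway; dropping leap
seconds only means the first term grows beyond today's 0.9 s bound:

    def mean_solar_offset_seconds(ut1_minus_utc, longitude_deg):
        """Seconds to add to a UTC timestamp for local mean solar time."""
        earth_rotation = ut1_minus_utc            # UTC -> UT1 (|DUT1| < 0.9 s
                                                  # while leap seconds exist)
        local_longitude = longitude_deg * 240.0   # 86400 s / 360 deg, east positive
        return earth_rotation + local_longitude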

(3) Obtaining the leap second sign requires a trusted data source.
When UTC has leap seconds, a happily ticking atomic clock cannot keep
second-level alignment with UTC without some trusted source of data.
Existing mechanisms for distributing leap second information have
communicated false information in several well-known incidents in the
past, and those were accidents, not malicious acts. LSEM would improve
this somewhat: since there would always be an update (you just need to
learn the sign), communications failures could be detected and
alarmed. But the need for this trusted input still creates a security
vulnerability window that could be avoided. Systems whose
security/integrity depends on time sync could be pushed 24 seconds out
of sync in one year by an attacker who controls their leap
observations -- creating error greater than their free-running
oscillators might accumulate absent any sync at all.  This is
especially acute because in a decade or two we might have inexpensive
1 ppb solid-state optical clock chips that allow "set once and run
forever without sync" operation for a great many applications --
potentially getting 0.5 ppm of error added on top from missing leap
seconds would basically limit the civil time performance of
unsynchronized clocks to what you can get from a TCXO bought off
Mouser for a couple of bucks today.
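
To put rough numbers on that comparison, a back-of-the-envelope sketch
(the component figures are my illustrative assumptions, not from the
post):

    SECONDS_PER_YEAR = 365.25 * 86400            # ~31.6 million seconds

    # Attacker flips every monthly sign: 2 s/month of error vs. true UTC.
    attack_ppm = 12 * 2 / SECONDS_PER_YEAR * 1e6     # ~0.76 ppm
    # Leap information simply unavailable / never applied.
    ignored_ppm = 12 * 1 / SECONDS_PER_YEAR * 1e6    # ~0.38 ppm

    optical_chip_ppm = 0.001                     # a 1 ppb free-running clock
    cheap_tcxo_ppm = 2.0                         # ballpark for a ~$2 TCXO

    print(f"sign attack: {attack_ppm:.2f} ppm, no leap info: {ignored_ppm:.2f} ppm, "
          f"optical chip: {optical_chip_ppm} ppm, TCXO: ~{cheap_tcxo_ppm} ppm")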

So: it's my experience that the current handling of leap seconds is a
slow-motion disaster. LSEM would be a big improvement -- reducing the
costs to the "intended costs" by eliminating many of the extra costs
created by inadequate testing. But it would not eliminate the base
cost of continuing to have our civil time perturbed by leap seconds in
the first place -- costs that are only increasing as atomic
oscillators come down in price and applications with tight
synchronization requirements become more common.


(*) Once a month is not enough exercise to ensure freedom from fringe
race conditions. Meaning that, at a minimum, large-scale life-critical
or high-value systems that might be impacted would still get the
reboot treatment under LSEM, but now with disruption every month.


