[time-nuts] once again timenuts in the news
jimlux at earthlink.net
Mon Jul 2 14:18:43 EDT 2018
On 7/2/18 10:42 AM, jimlux wrote:
> 100 ns... Doesn't seem particularly challenging if you have something
> like PTP.
> But there's this: "The new synchronization system will make it possible
> for Nasdaq to offer “pop-up” electronic markets on short notice anywhere
> in the world, Mr. Prabhakar said"
> OK, 100 ns worldwide is a trickier proposition.
oooohhh I see what's new and different - the authors of the paper are
using Support Vector Machines and Machine Learning (how sexy!), and
"Because HUYGENS is implemented in software running on standard
hardware, it can be readily deployed in current data centers."
And, more troubling in terms of the desire to do fast transactions:
they do the processing in batches (on page 82 of the paper):
A crucial feature of HUYGENS is that it processes the transmit (Tx)
and receive (Rx) timestamps of probe packets exchanged
by a pair of clocks in bulk: over a 2 second interval
and simultaneously from multiple servers.
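To make the "in bulk" idea concrete: here's a minimal Python sketch of
what processing a 2 second batch of Tx/Rx timestamps can buy you. This
is my own simplification, not the paper's algorithm - Huygens uses an
SVM to reject queue-delayed probes, which I crudely stand in for by
keeping only the fastest probes, and then I fit offset and drift with
ordinary least squares. All names here are hypothetical.

import numpy as np

def estimate_offset_and_drift(tx_times, rx_times):
    # tx_times: probe transmit timestamps on clock A (seconds)
    # rx_times: corresponding receive timestamps on clock B (seconds)
    # The apparent offset rx - tx is (one-way path delay) + (clock
    # offset), and it changes linearly across the batch if the
    # relative drift is constant.
    tx = np.asarray(tx_times, dtype=float)
    apparent = np.asarray(rx_times, dtype=float) - tx

    # Keep the 10% of probes with the smallest apparent delay - the
    # ones least contaminated by queuing (a crude stand-in for the
    # paper's support-vector classification).
    keep = apparent <= np.quantile(apparent, 0.10)

    # Slope = relative drift; intercept = clock offset plus the
    # minimum path delay. (One-way probes can't separate offset from
    # delay - that takes a bidirectional exchange, as in NTP or
    # Huygens itself.)
    drift, offset_plus_delay = np.polyfit(tx[keep], apparent[keep], 1)
    return offset_plus_delay, drift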
So they're actually not really "synchronizing the systems" (in the sense
that I can schedule an event on System A to occur tomorrow at
12:34:56.000,000,000 and an event on System B tomorrow at
12:34:56.000,000,001 and record the events on System C by starting my
recorder at 12:34:55.999,999,999 and stopping the recorder at
12:34:56.000,000,002).
They're "adjusting the time of events recorded by multiple different
clocks to a common time scale, post hoc"
We do this now in spacecraft work - it's generically called "time
correlation": you relate the onboard spacecraft clock (generally
some sort of free-running counter) to externally measured events (e.g.
the Earth Received Time of a message, or GPS 1pps ticks, or ...) and
then build a suitable model of the clock's behavior so that you can
schedule future events (thruster burns, camera actions, etc.).
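The essence of time correlation fits in a few lines. This is a
deliberately simplified sketch - the function names are mine, and real
correlation uses more elaborate clock models than a straight line:

import numpy as np

def fit_clock_model(counter_values, reference_times):
    # Linear clock model: reference_time ~ rate * counter + offset.
    # The pairs come from correlating onboard counter readings with
    # externally measured events (Earth Received Time of a telemetry
    # frame, GPS 1pps edges, ...).
    rate, offset = np.polyfit(counter_values, reference_times, 1)
    return rate, offset

def counter_for_event(rate, offset, event_time):
    # Invert the model: the counter value the onboard sequencer
    # should wait for so that a future event (thruster burn, camera
    # action) fires at event_time on the common time scale.
    return (event_time - offset) / rate

# Toy usage: three correlation pairs, then schedule an event at t=500 s.
counts = [1_000_000, 2_000_000, 3_000_000]
times = [100.0, 200.0001, 300.0002]   # this clock runs slightly slow
rate, offset = fit_clock_model(counts, times)
print(round(counter_for_event(rate, offset, 500.0)))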
Sometimes the results are quite impressive.
But I will readily concede that the current spacecraft approach is not
scalable to dozens, much less thousands, of entities.
So Huygens meets the "reconcile transaction time stamps at the end of
the day" kind of need, but does NOT meet the "schedule future events
precisely" kind of need.
I find that "knowing the time of an event in the past precisely" is
interesting and useful, but "scheduling the time of an event in the
future precisely" is, in the long term, a more useful thing.
Here's a practical example:
You have a constellation of N satellites, all independent (in the sense
that they do not communicate with each other) and observing an
astronomical object for a sporadic phenomenon (a coronal mass ejection,
as it happens). You don't have enough storage or communication
bandwidth to record everything all the time, so you record some at a
duty cycle of 1% - record a millisecond's data every 100 milliseconds.
The phenomenon of interest lasts minutes or hours, so you'll get plenty
of data. However, the 1 millisecond recording windows must be
synchronized across all satellites, because you're going to combine the
data later and it needs to be coherent (the satellites form a radio
interferometer).
So what you need is "good clocks" that are predictable into the future -
if satellite 1 is at 10.000,01 MHz and satellite 2 is at 9.999,997 MHz,
you can adjust each sampling rate generator accordingly (in terms of rate
and offset) to ensure that every 100 ms the sampling gate opens for 1 ms -
and it's the same millisecond on every satellite.
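A sketch of the arithmetic (my own illustration under simplifying
assumptions - the actual flight implementation is surely different):
given a satellite's measured oscillator frequency and time offset,
compute the local cycle count at which the k-th gate should open so
that everyone samples the same millisecond of common time:

def gate_open_cycles(f_actual_hz, time_offset_s, k,
                     gate_period_s=0.100):
    # Local oscillator cycle count at which to open the k-th 1 ms
    # sampling gate so it starts at common time k * 100 ms.
    # time_offset_s is the common time at which the local cycle
    # counter read zero; a clock running at f_actual (not nominal)
    # reaches common time t after f_actual * (t - time_offset) cycles.
    t_open = k * gate_period_s
    return f_actual_hz * (t_open - time_offset_s)

# Satellite 1 at 10.000,01 MHz, satellite 2 at 9.999,997 MHz, both
# with zero time offset: the first gate after t=0 opens at cycle
# 1,000,001.0 on sat 1 and cycle 999,999.7 on sat 2. The fractional
# cycle on sat 2 is why real hardware does this with an NCO-style
# phase accumulator rather than a simple integer divider.
for name, f in (("sat1", 10_000_010.0), ("sat2", 9_999_997.0)):
    print(name, gate_open_cycles(f, 0.0, 1))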
(Full Disclosure - I'm the project manager for SunRISE)