[time-nuts] synchronization for telescopes

Michael Wouters michaeljwouters at gmail.com
Sun May 1 21:52:09 UTC 2016


Attila,

I don't think a cheap receiver like a LEAxxx will quite get you there.

On Mon, May 2, 2016 at 1:10 AM, Attila Kinali <attila at kinali.ch> wrote:
> Hi,
>
> Let's quickly recap what the requirements are and what has been discussed
> so far:
>

> What I think has the best chances of success is to use an Rb frequency
> standard at each site instead. This will give you a stable reference
> frequency which will allow you to average the data from the GPS module
> to find the precise time in the postprocessing.
>
> As a GPS module, I would use either an LEA-M8F or an LTE-Lite. The LEA has
> a frequency/phase input with which an external reference can be measured;
> the LTE-Lite supports using an external oscillator. What you definitely
> need is to get the satellite phase data out of the module, to relate
> the phase differences between the module's local oscillator and the
> satellites, and from there to the other locations.
>

> This should bring you at least down to a 1ns uncertainty level
> (after calibration). Judging from what Michael Wouters said, probably
> close to 200-300ps.
>

The number I quoted is for high-quality geodetic receivers. There are
crucial differences between these and the cheap receivers with regard
to time transfer. The first is how you relate your external clock's
1 pps to GPS time.

For a geodetic receiver, this is 'simple': you give it a 1 pps and a
10 MHz reference; it locks to the 10 MHz and does a one-off sync to
the 1 pps. The code and phase measurements are then reported with
respect to this clock. For some receivers there will be an internal
delay that depends on the phase relationship between the 10 MHz and
the 1 pps, so you have to control that.
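
To make the phase-relationship point concrete, here is a rough Python
sketch (a hypothetical model, not any particular receiver's actual
behaviour) of a receiver that latches the 1 pps on the next rising
edge of its 10 MHz reference; slide the pps against the 10 MHz and the
internal delay moves with it:

    # Hypothetical model: the receiver latches the 1 pps on the next
    # rising edge of the 10 MHz reference, so the internal delay is the
    # time from the pps edge to that 10 MHz edge.

    REF_PERIOD_NS = 100.0  # one cycle of 10 MHz

    def latch_delay_ns(pps_edge_ns, ref_phase_ns):
        """Delay from the 1 pps edge to the next 10 MHz rising edge.
        pps_edge_ns  -- arrival time of the 1 pps edge, in ns
        ref_phase_ns -- offset of the 10 MHz rising edges from t=0, in ns
        """
        into_cycle = (pps_edge_ns - ref_phase_ns) % REF_PERIOD_NS
        return (REF_PERIOD_NS - into_cycle) % REF_PERIOD_NS

    # Sliding the pps 30 ns against the 10 MHz shifts the internal
    # delay by the same 30 ns, which is why the phase relationship has
    # to be fixed (and calibrated) for ns-level time transfer.
    print(latch_delay_ns(0.0, 0.0))   # 0.0
    print(latch_delay_ns(30.0, 0.0))  # 70.0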

For cheap receivers, with no external oscillator, the connection
between your clock and GPS time is more complicated. You normally set
the GPS receiver's reference time to be GPS. Code measurements are
then reported with respect to a software GPS clock based on the
receiver's XO; it's a software clock because the XO isn't steered. The
receiver then outputs a 1 pps, which you can measure your clock
against, but with the limitation that the receiver can only place this
pps modulo the period of its internal clock, resulting in the usual
sawtooth. The receiver outputs a sawtooth correction which allows you
to reduce the sawtooth in post-processing, with varying degrees of
success. Of course you can average, but being confident that you have
eliminated bias at the level of a few hundred ps may be tricky.
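
As a rough illustration of the post-processing step, here is a minimal
Python sketch, assuming a u-blox-style quantization error report (qErr);
the sign convention differs between receivers and firmware versions, so
the '+' below is an assumption to check against the datasheet:

    # Sketch: remove the 1 pps sawtooth in post-processing using the
    # receiver's reported quantization error (u-blox calls it qErr).
    # The '+' sign is an assumption -- conventions differ between
    # receivers and firmware, so verify against the datasheet.

    def desawtooth(counter_ns, qerr_ns):
        """counter_ns -- (local clock - receiver pps) readings, in ns
        qerr_ns    -- matching quantization errors, in ns"""
        return [c + q for c, q in zip(counter_ns, qerr_ns)]

    # Averaging the corrected readings beats down the random part, but
    # any systematic error left in the correction averages to a bias,
    # not to zero -- hence the difficulty at the few-hundred-ps level.
    corrected = desawtooth([12.3, 4.1, -8.7], [-2.0, 5.5, 1.2])
    print(sum(corrected) / len(corrected))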

Some aspects of this, e.g. the sawtooth, may be improved by using an
external oscillator, but I don't have any experience of this.

The other important difference is the resolution of the receiver's
measurements. A cheap receiver reports the code measurements at
relatively coarse resolution, sometimes a few ns, whereas a geodetic
receiver reports at much higher resolution. For a cheap receiver, the
code measurement resolution is seldom specified, so you would have to
test candidate receivers.
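
One crude way to test a candidate: difference successive raw
pseudoranges to knock out most of the geometry and see whether the
result sits on a grid. A hedged Python sketch with synthetic data
(real data would need proper detrending against a model, and the
tolerance and candidate steps here are arbitrary):

    # Sketch: test a candidate receiver's code-measurement resolution
    # by differencing successive pseudoranges (removing most of the
    # geometry) and checking whether the differences sit on a grid.

    C_M_PER_NS = 0.299792458  # metres of range per ns

    def looks_quantized(pseudoranges_m, step_ns, tol=0.05):
        """True if first differences of the pseudoranges, in ns, are
        all within tol*step_ns of a multiple of step_ns."""
        diffs = [(b - a) / C_M_PER_NS
                 for a, b in zip(pseudoranges_m, pseudoranges_m[1:])]
        return all(abs(d / step_ns - round(d / step_ns)) < tol
                   for d in diffs)

    # Fake pseudoranges quantized to 2 ns, growing ~750 m/s.
    q = 2.0 * C_M_PER_NS
    ranges = [round((20.0e6 + 750.0 * t) / q) * q for t in range(10)]

    # Go from coarse to fine; the coarsest step that fits is an
    # estimate of the reporting resolution (finer steps fit trivially).
    for step in (8.0, 4.0, 2.0, 1.0, 0.5):
        if looks_quantized(ranges, step):
            print("consistent with", step, "ns resolution")
            break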

I have many years of raw code measurement data from many identical
receivers operating on baselines of a few km up to 20 km. I will try
to have a look later this week to confirm/deny/make ambiguous what I
said above.

Cheers
Michael


