[time-nuts] LPRO-101 with Brooks Shera's GPS locking circuit

Ulrich Bangert df6jb at ulrich-bangert.de
Thu Dec 14 14:02:05 EST 2006


Hi folks,

> On the subject of Brooks Shera's design, the one thing that 
> troubles me is the use of a 24 MHz oscillator to count the 
> width of the 1PPS signal. This yields a precision of 4.16e-8, 
> but does it really?

> This oscillator is uncontrolled and any drift would exist 
> as noise that would have to be filtered (He uses a software 
> low pass filter).

When I was a newbie in the time-nuts business the Shera design was the
only one available in the amateur radio literature, and I studied it in
detail. It was exactly these two questions that bothered me too. I would
like to explain my current view of these two questions and, along the
way, direct your attention to some subtle details of the Shera design
that not all of you may be aware of:

> This oscillator is uncontrolled and any drift would exist 
> as noise that would have to be filtered (He uses a software 
> low pass filter).

In the Shera design the GPS's 1 pps is compared to a down-divided
version of the OCXO frequency. Shera explains: "This (the comparison is
meant) can be done with less ambiguity if the measurement is done at a
frequency lower than 5 MHz, say 300 kHz". In his circuit the 5 MHz OCXO
signal is divided by 16 to give a frequency of 312.5 kHz. At 312.5 kHz
the ambiguity-free measurement range of the phase comparator is
1/312.5 kHz or 3.2 microseconds.
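The divider arithmetic can be checked in a couple of lines (just a
sketch, using the numbers from the text above):

```python
# Shera phase comparator: divide the 5 MHz OCXO by 16 and compare
# the result against the GPS 1 pps (numbers from the text above).
f_ocxo = 5e6                 # OCXO output in Hz
f_cmp = f_ocxo / 16          # comparison frequency after the divider
amb_range = 1.0 / f_cmp      # one period = ambiguity-free range

print(f_cmp)                 # 312500.0 Hz
print(amb_range * 1e6)       # 3.2 microseconds
```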

This small measurement range of the phase comparator is one of the
reasons for the complex procedure that is necessary to get the OCXO
locked for the first time: its frequency has to be close enough to the
setpoint that the phase difference to the GPS's 1 pps stays within the
3.2 microsecond measuring range long enough to be measured
ambiguity-free and to let the loop lock. Since digital dividers are
cheap you may ask yourself why Shera did not use additional dividers to
get to an even lower frequency and a larger ambiguity-free phase
measurement range, which would have eased the oscillator lock setup.
With one of the signals being a 1 pps, why not divide the second signal
down to 1 pps as well, raising the ambiguity-free measurement range of
the phase comparator to 500 ms (!)?

The answer to this question is directly related to the properties of the
oscillator used for the time interval measurement. If you measure a time
interval of 3.2 microseconds length (or less) with a 24 MHz timebase you
may get 76 or 77 counts (or less) depending on the phase relation. Now
consider what circumstances are necessary to give you a result of 75 or
78 counts: the period length of the 24 MHz oscillator needs to change by
more than +/- 1/77! That is almost 1.3 % or 13000 ppm! Even for a very
simple packaged xtal oscillator with a tempco of some ppm/K it is almost
impossible to change its frequency by 13000 ppm for environmental
reasons. We see: if we measure short time intervals, where the period
length of the timebase is within a percent or so of the interval to be
measured, the drift and noise of the timebase are of no concern because
they do not lead to a different count value. That is the reason why the
Shera design may use a cheap canned xtal oscillator for the time
interval measurement, but it is also the reason why the Shera design
depends heavily on measuring short times.
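The count arithmetic above can be sketched like this (values from the
text, purely illustrative):

```python
# A 3.2 us interval counted with a 24 MHz timebase gives 76 or 77
# counts; a one-count error needs the timebase period to shift by
# roughly 1/77, i.e. about 1.3 % or 13000 ppm.
f_tb = 24e6                  # TIC timebase in Hz
interval = 3.2e-6            # interval to be measured
counts = interval * f_tb     # nominal count

shift_ppm = 1 / 77 * 1e6     # fractional change for a one-count error
print(counts)                # 76.8 -> reads as 76 or 77
print(round(shift_ppm))      # roughly 13000 ppm
```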

> On the subject of Brooks Shera's design, the one thing that 
> troubles me is the use of a 24 MHz oscillator to count the 
> width of the 1PPS signal. This yields a precision of 4.16e-8, 
> but does it really?

Due to what I have read in this newsgroup before I believe that not all
of you are aware of the influence of the measurement apparatus on time
stability measurements. Let me try to explain it with a thought
experiment: 

Consider two perfect oscillators with no frequency and/or phase
fluctuations at all and a perfect time interval counter with infinite
measurement resolution and no measurement errors. I know, stuff like
this does not exist but it is a good idea to start with. With equipment
like this we would measure ALWAYS THE SAME phase delay between the two
oscillators. Let us call this time 't'. If we compute the Allan
Deviation from a number of identical values t we will always get a
result of zero regardless of the observation time Tau. That is
completely correct for the perfect scenario assumed.

Now consider the same situation with only one slight change: the time
interval counter shall not have an infinite resolution but shall be
limited to a certain resolution value. This value is the number that you
may find under 'single shot resolution' in the TIC's specs. Let us
assume a modern design like the Agilent 53131 universal counter. That
one has a single shot resolution of 500 ps. Let this number be
'delta_t'. If we now use this real-world TIC to measure the phase delay
between the otherwise perfect oscillators we will sometimes get the
result 't' but - with some statistical probability - we will sometimes
get the result 't+delta_t' and also 't-delta_t'.

If we feed a number of these values into an Allan Deviation computation
the pure mathematics will of course NOT be aware of the fact that these
slight changes in the values are due to the measurement equipment.
Instead the mathematics will kind of 'believe' that the slight changes
are really due to the oscillators under test and will compute non-zero
Allan Deviations for all observation times Tau. That is why we can say
that the simple fact (and only this!) that the measurement equipment has
a certain limited resolution generates a 'noise floor': we are not able
to measure Allan Deviations smaller than that.
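The thought experiment can be simulated in a few lines. Two perfect
oscillators have a constant phase delay t, but the TIC reports t,
t+delta_t or t-delta_t; the three-way random choice below is only an
assumption for the sketch (the real error statistics depend on the
phase relation), yet it already produces a clearly non-zero Allan
deviation from a perfectly stable signal:

```python
import random

# Perfect oscillators, imperfect TIC: quantize each phase reading to
# delta_t = 500 ps and compute the Allan deviation of the readings.
random.seed(1)
t = 12.34e-9            # true, constant phase delay in seconds
delta_t = 500e-12       # single-shot resolution of the TIC
tau0 = 1.0              # one reading per second

# Readings: t plus a random -delta_t / 0 / +delta_t quantization error
x = [t + random.choice((-1, 0, 1)) * delta_t for _ in range(10000)]

# Allan variance from phase data: second differences of x
d2 = [x[i + 2] - 2 * x[i + 1] + x[i] for i in range(len(x) - 2)]
avar = sum(v * v for v in d2) / (2 * tau0 ** 2 * len(d2))
adev = avar ** 0.5
print(adev)   # some 1e-10 at 1 s -- caused purely by the TIC
```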

The amplitude of this noise floor is directly given by the ratio of the
time resolution to the times measured. If the phase delay between two
1 pps signals is measured with a TIC like this, you get a noise floor of
500 ps/s = 5E-10. You will never be able to measure Allan Deviations
below 5E-10 @ 1 s with a TIC like that! Now consider the case of the
famous HP 5370 with its single shot resolution of 20 ps. A big
improvement, but you will never be able to measure Allan Deviations
below 2E-11 @ 1 s with this TIC! Good (for the amateur achievable) xtal
oscillators feature Allan Deviations below 1E-12 @ 1 s, and (not so
easily achievable) BVA resonator based oscillators may even be better
than that by an order of magnitude. I hope this has shown that most of
us do not own the necessary equipment to measure low Allan Deviations at
short observation times Tau.
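The floor-at-1-s numbers follow directly from resolution divided by
observation time (a sketch; counter resolutions as quoted in the text):

```python
# Quantization noise floor at tau = 1 s: single-shot resolution
# divided by the observation time.
for name, res in (("Agilent 53131", 500e-12),
                  ("HP 5370", 20e-12),
                  ("Shera 24 MHz TIC", 1 / 24e6)):
    print(name, res / 1.0)   # 5e-10, 2e-11, ~4.2e-8
```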

In this situation the usual argument to be heard is: but I can take the
MEAN over some measurements and thereby improve the resolution a lot.
That is what is done in the Shera design. By computing the mean over
30 s the resolution is improved by a factor of 30! Or is it not?

If there is no 'special rule' for when we measure 't' or 't+delta_t' or
't-delta_t' it is not unreasonable to assume that it is completely at
random which value we measure. Perhaps I am simplifying a lot now, but
effects that are purely random indeed tend to cancel out the more
measurement values one has available to compute the mean from. In a
Tau-Sigma diagram a purely random effect displays itself as a straight
line with a slope of -1, with a starting point that is determined by the
'randomness' at the basic measurement interval. For the 53131 the noise
floor would be a straight line starting at 5E-10 @ 1 s and having a
slope of -1.

For the TIC employed in the Shera design the noise floor is 4.2E-8 @ 1 s,
and it has a slope of -1. And yes, you may run down this line as far as
you want. Run to an observation time of 30 s and you get a value that is
a factor of 30 more precise than the 1 second value; run to an
observation time of 1000 s and you get a value that is more precise by a
factor of 1000 than the 1 second value; run to whatever you want and you
will get an improvement of whatever you want.
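Running down the -1 slope line is just a division (sketch, numbers from
the text):

```python
# The -1 slope line: floor(tau) = floor(1 s) / tau
floor_1s = 4.2e-8            # Shera TIC noise floor at 1 s
for tau in (1, 30, 1000):
    print(tau, floor_1s / tau)   # 4.2e-8, 1.4e-9, 4.2e-11
```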

But is it really an improvement that you get out of it? The answer is
NO! Eh, why not? Because you have to PAY for the increase in precision
with the increase in observation time. For every factor of 10 in
precision you need to increase the observation time by a factor of 10!

A real improvement would have been if the starting point of the line had
been lower than the 4.2E-8 @ 1 s of the TIC employed in the Shera
design. If the Shera design made use of an Agilent 53131 as a phase
comparator, its noise floor would start at 5E-10 @ 1 s, which is a REAL
improvement by a factor of almost 100! For any given precision the Shera
TIC will need 100 times the time that the 53131 needs.

A lot of people may perhaps agree to this argument now. However, they
may not immediately see how this effect really improves a frequency
standard. In order to understand this you have to get an idea of what
the Tau-Sigma diagram of an OCXO looks like. The Tau-Sigma diagram of a
(good) OCXO is a banana-like figure that starts at 1E-12 @ 1 s, drops
down to say 3E-13 @ 10-100 s and increases from there. In a GPSDO we
would draw the Tau-Sigma of the OCXO alone into the diagram and the
Tau-Sigma of the GPS receiver into the same diagram. Where the -1 slope
Tau-Sigma line of the receiver meets the banana-like one of the OCXO is
a prominent point: we need to make this the time constant of the loop,
because below this time the OCXO has more stability than the GPS and
above this point the GPS has more stability than the OCXO. Below the
loop's time constant the frequency standard's stability is exclusively
determined by the LO, and above it it is exclusively determined by the
GPS receiver.

Because the OCXO's banana-like figure is already on its ASCENDING branch
where the lines meet each other, it becomes clear immediately that we
want the lines to meet AS EARLY AS POSSIBLE to make the overall
stability at whatever observation time as low as possible. That, in
conclusion, is the reason why we need a TIC measurement resolution that
fits what is available from a good GPS receiver. The 4.2E-8 @ 1 s of the
Shera design surely does not fit the 2E-9 @ 1 s resolution of an M12+
(including sawtooth correction).
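The crossing-point argument can be sketched numerically. The OCXO model
below is an assumption: arbitrary coefficients chosen only to reproduce
the banana shape described above (1E-12 @ 1 s, ~3E-13 floor, rising
afterwards), not measured data. A coarser TIC pushes the crossing, and
hence the required loop time constant, to longer Tau:

```python
import math

def ocxo_adev(tau):
    # Toy "banana": white-FM term falling as 1/sqrt(tau), a flicker
    # floor of 3e-13, and a drift term rising with tau.  Coefficients
    # are illustrative assumptions matching the shape in the text.
    return math.sqrt((1e-12) ** 2 / tau + (3e-13) ** 2
                     + (1e-15 * tau) ** 2)

def crossing(tic_floor_1s):
    # First tau on a geometric grid where the -1 slope TIC/GPS line
    # drops below the OCXO curve.
    tau = 1.0
    while tic_floor_1s / tau >= ocxo_adev(tau):
        tau *= 1.1
        if tau > 1e7:
            return None
    return tau

print(round(crossing(2e-9)))    # M12+-class resolution: early crossing
print(round(crossing(4.2e-8)))  # Shera TIC: crossing comes much later
```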

Everything written in this mail has already been treated absolutely
correctly by Dr. Bruce Griffiths on earlier occasions. I have just tried
to express it in a semi-scientific language.

> Secondly, someone can double check me here -- but
> it seems to me that any GPSDO that uses a built-in TIC
> to monitor the deviation between the GPS 1PPS and
> the OCXO 1PPS is a closed loop system and so the
> actual accuracy of the TIC timebase has no effect on
> the function of the GPSDO. I mean, the 24 MHz clock
> could drift down to 20 MHz or up to 30 MHz and the
> GPSDO would still work fine (hey, maybe even better).

It depends on how fast the TIC's clock frequency changes. If the clock
frequency changes by an amount big enough to be noticed at all (and my
first point has shown that it has to change by more than 13000 ppm for a
change to be noticed), the controller will measure a different time
interval between the LO and the GPS. This will lead the loop to change
the LO frequency until the original time interval is reached again.
After this equilibrium has been achieved the LO will again be on the
correct setpoint. Insofar TVB's argument is completely correct: the
absolute value of the TIC's clock frequency is of no concern at all.
However, changes in the clock frequency lead to time-limited changes in
the LO frequency, and it depends on the loop time constant and some more
parameters whether the LO frequency changes induced this way stay within
allowable bounds or not.

Best regards
Ulrich Bangert, DF6JB

> -----Original Message-----
> From: time-nuts-bounces at febo.com 
> [mailto:time-nuts-bounces at febo.com] On Behalf Of Tom Van Baak
> Sent: Thursday, December 14, 2006 07:47
> To: Discussion of precise time and frequency measurement
> Subject: Re: [time-nuts] LPRO-101 with Brooks Shera's GPS 
> locking circuit
> 
> 
> > On the subject of Brooks Shera's design, the one thing that 
> troubles 
> > me is
> the
> > use of a 24 MHz oscillator to count the width of the 1PPS 
> signal. This 
> > yields a precision of 4.16e-8, but does it really?
> 
> No, with averaging it's much better than that.
> 
> > This oscillator is uncontrolled and any drift would exist as noise 
> > that
> would
> > have to be filtered (He uses a software low pass filter).
> 
> No, when an oscillator is used as a timebase for what
> is essentially a short period time interval counter the
> XO drift rate does not affect the result like you think.
> 
> Suppose you use a cheap XO with a huge drift rate of
> 100 ppm per year or even 1 ppm per day to make TI
> measurements between the OCXO and GPS. So an
> average measurement that is, say 12.34 ns today,
> will be off by 1 ppm tomorrow: it will be 12.34001 ns
> instead. Do you see now why it doesn't matter how
> bad the XO is?
> 
> Secondly, someone can double check me here -- but
> it seems to me that any GPSDO that uses a built-in TIC
> to monitor the deviation between the GPS 1PPS and
> the OCXO 1PPS is a closed loop system and so the
> actual accuracy of the TIC timebase has no effect on
> the function of the GPSDO. I mean, the 24 MHz clock
> could drift down to 20 MHz or up to 30 MHz and the
> GPSDO would still work fine (hey, maybe even better).
> 
> /tvb
> 
> 
> _______________________________________________
> time-nuts mailing list
> time-nuts at febo.com 
> https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> 
