[time-nuts] Characterising frequency standards

Steve Rooke sar10538 at gmail.com
Sat Apr 25 09:40:39 UTC 2009


Hi Magnus,

2009/4/14 Magnus Danielson <magnus at rubidium.dyndns.org>

> > Say I have a 1Hz input source and my counter measures the period of
> > the first cycle and assigns this to A1. At the end of the first cycle
> > the counter is able to be reset and re-triggered to capture the second
> > cycle and assign this to A2. So far 2 sec have passed and I have two
> > readings in data set A.
>


>
> Strange counter. Traditionally counters reset after the stop event has
> occurred, since they cannot know anything else. The gate time gives a hint
> on the first point in time it can trigger; the gate just arms the stop
> event. There is no real end point. It can however reset and retrigger the
> start event ASAP when gate times are sufficiently large. It's just a
> smart rearrangement of what to do when, to achieve zero dead-time for
> period/frequency measurements.


I am making period measurements so the gate time does not come into it. My
counter can be set to continuously take period readings, starting/stopping on
a positive or negative edge. Also, when my counter finishes a reading it can
generate an SRQ, allowing me to transfer the measurement to the PC, and I can
also immediately reset the counter to take another measurement.
Unfortunately it is not possible for the counter to be reset and then trigger
again before the last triggering event has finished, i.e. an individual
trigger event can only be used once per measurement cycle; the same trigger
event cannot stop one period measurement and start a second one. All this
means that there will always be a one-period gap between each period
measurement.
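
To make that sampling pattern concrete, here is a minimal Python sketch (the
period values are made up purely for illustration) of which cycles such a
counter actually captures:

    import numpy as np

    # Hypothetical measured cycle periods of a 1 Hz source, in seconds.
    periods = np.array([1.0000012, 0.9999987, 1.0000003,
                        0.9999995, 1.0000008, 0.9999991])

    # The counter captures cycle 1, skips cycle 2 while it re-arms,
    # captures cycle 3, and so on -- only the odd cycles reach the PC.
    captured = periods[0::2]
    print(captured)        # one reading every 2 s instead of every 1 s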


> You could also use a counter which is pseudo zero dead time in that it
> can time-stamp three values, two differences without deadtime but has
> deadtime after that. Essentially two counters where the stop event of
> the first is the start event of the next.


Yes, I could do that but it is extra expense and complication which I do not
think is necessary.


> > I now repeat the experiment and assign the measurement of the first
> > period to B1. The counter I am using this time is unable to stop at
> > the end of the first measurement and retrigger immediately so I'm
> > unable to measure the second cycle but is left in the armed position.
> > When the third cycle starts, the counter triggers and completes the
> > measurement of the third cycle which is now assigned to B2.
> This is what most normal counters do.


So we can agree on this.

> > For the purposes of my original text, the first data set refers to A1
> > & A2. Similarly the second data set refers to B1 & B2. Reference to
> > pre-processing of the second data set refers to mathematically
> > removing the effects of drift from B1 & B2 to produce a third data set
> > which is used as the data input for an ADEV calculation where tau0 = 1
> > sec with output of tau = 1 sec.
> You would need to use bias adjustments, but the B1 & B2 period/frequency
> samples are badly tainted data and should not be used. Having a deadtime
> at the size of tau0 is serious business. Removing the phase drift over


But for the purposes of how I now think it can be calculated, tau0 will be
set equal to 2 x the actual period of the input source, i.e. if f = 1 Hz,
tau0 = 2 sec.

Let's take a look at what we are saying about "badly tainted data" here. The
whole purpose of this exercise is to predict the effects of noise on a
stable frequency. We have already agreed that a phase/frequency modulation
source at EXACTLY 1/2 the frequency of the input source will be masked by
this method, but we can get round that. So for the rest of the measurement,
we have half the data per tau compared with having no missing data. This will
have some bearing on the accuracy of the result but will only be significant
for the maximum tau, in almost exactly the same way that existing ADEV
measurements have limited accuracy at the maximum tau because there are not
enough measurements to provide statistical confidence over that time, i.e. if
we measure for 100,000 seconds, the calculation for tau = 100,000 will have
only one set of values. Remember we are looking at noise here, and if for the
"missing data" method we take readings for twice the full test time of a
"conventional" test, we will have data with the same amount of statistical
confidence. This "badly tainted data" is just the same unless we have such
periodic effects that over the period of the whole test we will always miss
them. There is no magic here.
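
Rather than arguing it in the abstract, one could check this numerically.
Here is a minimal Python sketch of my own (simulated white-FM noise at an
arbitrary level, and a simple non-overlapping estimator) that computes ADEV
at tau = 2 sec both from a contiguous record and from a record with every
other reading missing; comparing the two numbers for different simulated
noise types gives a feel for how much the missing cycles actually matter:

    import numpy as np

    def adev_nonoverlap(y, m):
        """Non-overlapping Allan deviation at tau = m * tau0, given
        fractional-frequency readings y taken once per tau0."""
        n = len(y) // m
        ybar = y[:n * m].reshape(n, m).mean(axis=1)   # adjacent m-sample averages
        return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

    rng = np.random.default_rng(0)
    y_full = 1e-11 * rng.standard_normal(200_000)     # one reading per cycle, tau0 = 1 s
    y_gapped = y_full[0::2]                           # every other cycle missed, 2 s spacing

    print("contiguous record, tau = 2 s:", adev_nonoverlap(y_full, 2))
    print("gapped record,     tau = 2 s:", adev_nonoverlap(y_gapped, 1))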

> the dead time does not aid you since if you remove the phase ramp of the
> evolving clock, that of f*t or v*t (depending on which normalisation you
> prefer), you have the background phase noise. What we want to do is to
> characterize this phase noise. Taking two samples of it back-to-back and
> taking two samples with an (equivalent-length) gap becomes two
> different filters. Maybe some ascii art may aid:


For a 1 Hz input I would be able to calculate for tau >= 2 with the
unmodified data using tau0 = 2 sec. If I remove the effects of drift, all my
data points are the same as measuring for a "conventional" ADEV test,
provided that I only calculate for tau = 1 with tau0 = 1. Using the data
with the effects of drift removed to calculate for all tau would certainly
give incorrect results as it would not show the effects of drift. In a
"conventional" measurement of ADEV for tau = 1, successive pairs of data
points are used in the calculation and the whole lot is averaged. The effect
of drift (for any reasonable oscillator we are considering) between any two
sequential 1 second period measurements is so small that it does not affect
the ADEV measurement. You would only see an incorrect result if you took
measurement data points with large periods of time between them, i.e. the
first and last data points of a 100,000 second run for instance.
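
To put a number on "so small", a back-of-envelope check (the drift figure is
only an assumption for the sake of argument):

    # A drift of 1e-10 per day, a plausible OCXO aging figure, changes the
    # fractional frequency between two adjacent 1-second readings by:
    drift_per_day = 1e-10
    drift_per_second = drift_per_day / 86400
    print(drift_per_second)   # about 1.2e-15, far below the noise at tau = 1 s for most sources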


>      __
> __   |  |__
>  |__|
>
>   y1 y2 y3
>   A1 A2
>
> A2-A1 = y2-y1
>
> vs.
>         __
> __    __|  |__
>  |__|
>
>      y1  y2   y3
>      B1        B2
>
> B2-B1 = y3-y1


Actually we are considering the period of a waveform, which is the time
between successive instances of the waveform moving through the same point in
the same direction, i.e. it would include the positive and negative half
cycles of the waveform. BTW, ASCII art does not work so well with today's
proportional fonts.


> Consider now the case when frequency samples have twice the tau of the
> above examples
>         _____
> __      |     |__
>  |_____|
>
>     y1    y2
> y2-y1
>
> These examples were all based on sequences of frequency measurements,
> just as you indicate in your case.
>
> As you see from the differences, the nominal frequency cancels and the
> nominal phase error has also cancelled out, so there is nothing to
> compensate there. Drift rate would however not be cancelled, but for most
> of our sources, the noise is higher than the drift rate for shorter taus.


Well, if there is a phase error it would cause the positive and negative
halves of the waveform to differ and therefore not cancel out. But as you so
rightly say, and as I alluded to before, this phase error would be expected
to be considerably smaller than the noise in our tests. Now for my "missing
data" method, there is twice the amount of time between data points, so the
phase errors would be doubled in size. This may or may not affect the
measurement for tau = 1 with tau0 = 1, so I have proposed to remove that
phase error by pre-processing the data but ONLY for this one calculation of
ADEV for tau = 1, NOT for the other taus. It may be that the phase error is
still small compared with the noise and so the data does not need to be
processed to remove the drift. That would be proved by performing the
calculation with and without the drift removed. But this does not mean that
it would then be OK to calculate ADEV for tau >= 1 using tau0 = 1 with the
"missing data" method, as this WOULD give the wrong indication of the
effects of drift.
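
For that pre-processing step, a simple least-squares removal of a linear
frequency drift should be enough. A minimal sketch, assuming the readings are
fractional-frequency values with known timestamps (the data here is
synthetic):

    import numpy as np

    def remove_linear_drift(t, y):
        """Fit a straight line (linear frequency drift) to readings y taken
        at times t and return the residuals for the tau = 1 calculation."""
        slope, intercept = np.polyfit(t, y, 1)
        return y - (slope * t + intercept)

    # Made-up data: white noise plus a small linear drift, readings 2 s apart.
    rng = np.random.default_rng(1)
    t = 2.0 * np.arange(100_000)
    y = 1e-11 * rng.standard_normal(t.size) + 1e-15 * t
    y_detrended = remove_linear_drift(t, y)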


> Time-differences allow us to skip every other cycle though.
>
> > In this case the data set is constructed from the measurement of the
> > cycle periods of a 1Hz input source where even cycles are skipped,
> > hence each data point is a measurement of the period of each odd (1,
> > 3, 5, 7...) cycle of the incoming waveform. In this case the time
> > between each measurement is 2 sec so ADEV is calculated with tau0 = 2
> > sec for tau >= 2 sec. This data set is then mathematically processed
> > to remove the effects of drift, bearing in mind the 2 sec spacing of
> > each data point, and ADEV is then calculated with tau0 = 1 sec for tau
> > = 1 sec.
> How did you establish the effect of drift?


In the case of the data I was using, there actually does not appear to be a
great deal of drift and I have made no adjustments to account for it BUT
theoretically it could make a difference.

> PN - White noise phase WPM, Flicker noise phase FPM, White noise
> > frequency WFM, Flicker noise frequency FFM and Random walk frequency
> > RWFM.
> These are just the names for the various 1/f power noises. They enter
> through a myriad of places, white phase noise and 1/f is common to
> amplifiers, 1/f^5 is thermal noise onto the same amplifiers. 1/f^2 is
> oscillator shaped white phase noise and 1/f^3 is oscillator shaped 1/f
> noise. Rubiola spends quite some time on that subject, both in his
> excellent book and in various papers.


But these are the effects we are measuring here.

> Indeed, and this is an important aspect to consider as we have been
> > discussing the effects of induced jitter/PN to a frequency standard
> > when it is buffered and divided down. Ideally measurements of ADEV
> > would be made on the raw frequency standard source (eg. 10MHz) rather
> > than, say, a divided 1Hz signal.
> Yes and no. There are benefits in dividing it down, you can identify
> cycle slips more easily and adjust for them, whereas one 10 MHz cycle to
> another can be a bit anonymous. To get the best performance for ADEV at
> 1 s using a 1 Hz signal is not optimum though. A slightly higher rate
> will allow for quicker gathering of high statistical freedom and thus
> improved statistical stability as allowed through the overlapping Allan
> Deviation estimator as compared to using the non-overlapping Allan
> long runs, sufficient freedom may be achieved even using the
> non-overlapping estimator.


Agreed, it comes down to the maximum rate at which the measurement system is
able to take readings and record them. 1 second has just been used as an
example in this discussion but it is really not that optimal.
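
For reference, the overlapping estimator Magnus mentions re-uses every
possible starting point rather than stepping block by block. A minimal sketch
of the standard textbook form, working from phase (time-error) samples, with
simulated white-FM noise at an arbitrary level:

    import numpy as np

    def oadev(x, tau0, m):
        """Overlapping Allan deviation at tau = m * tau0, from phase
        (time-error) samples x taken every tau0 seconds."""
        tau = m * tau0
        d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # all overlapping second differences
        return np.sqrt(np.mean(d ** 2) / (2.0 * tau ** 2))

    tau0 = 1.0
    rng = np.random.default_rng(2)
    y = 1e-11 * rng.standard_normal(100_000)            # fractional frequency
    x = tau0 * np.concatenate(([0.0], np.cumsum(y)))    # integrate to phase
    print(oadev(x, tau0, 1), oadev(x, tau0, 10))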


> A divide down does not have to make significant change to phase-noise,
> its effect can be minimized as we have discussed before.


Maybe, but this topic has in itself generated a lot of discussion and the
outcome is that care must be taken with this aspect. This is really the
point I was making: we don't want to be measuring noise induced in the
buffering and division circuits, as this would completely ruin our tests.

> The 1 PPS signal is also quite a historical artifact which is still quite
> handy. It allows direct comparison of non-equivalent frequencies as the
> division ratio is adjusted. It is also what comes out of a majority of
> GPS receivers. Few GPS receivers evaluate their time offset at a faster
> rate than 1 Hz anyway, but 2, 5, 10 and 20 Hz is available. The L1 C/A
> signal would allow for a rate of 1 kHz but it would require really good
> signal conditions.


Indeed, although it has been suggested in the past that 10 MHz OCXOs be
divided down to around 1 Hz so they can be measured with sound cards.


> For high resolution work, the PPS is not that good, since beating two 10
> MHz would give you some 5-7 decades of better resolution if you can
> handle the problems with slow slopes.


Are you proposing measuring phase differences here? For ADEV this would
surely add the noise of both sources into the mix, which would be
undesirable.
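
If I understand the resolution argument correctly, it is just the
magnification at the beat. A worked number (the offset frequency is only an
assumed example):

    f0 = 10e6            # the two 10 MHz sources being compared, Hz
    f_beat = 1.0         # assumed frequency offset between them, Hz
    gain = f0 / f_beat   # time fluctuations are stretched by this factor at the beat
    print(gain)          # 1e7; offsets of 1 Hz to 100 Hz give the 5-7 decades mentioned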

> >> For short measurement times quantisation noise and instrumental noise
> >> may mask the noise from the source but they are still present.
> >
> > Well, these form the noise floor of our measurement system.
> Some of them we can control, through better triggering devices, as
> learned the hard way and investigated by many.


I guess it would be possible to measure the system against itself with
something like a short delay from its own internal timing source.


> Another way to handle it is to use cross-correlation techniques where two
> independent system noises see the same signal, in which case only the
> input source noise correlates and the system noise effect can be
> partially cancelled out.


If you could measure the same signal with multiple systems, it would be
possible to cancel out the noise effects of the measuring system.
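
A toy sketch of why that works (entirely made-up noise levels, just to show
the mechanism): averaging the product of the two channels keeps only what
they have in common.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1_000_000
    source = 1e-12 * rng.standard_normal(n)           # the signal both channels see
    ch_a = source + 1e-11 * rng.standard_normal(n)    # channel A plus its own noise
    ch_b = source + 1e-11 * rng.standard_normal(n)    # channel B plus independent noise

    print(np.mean(ch_a * ch_b))   # tends towards var(source) as n grows
    print(np.var(ch_a))           # a single channel stays dominated by instrument noise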


> There are systematic noise problems also, such as lack of zero dead
> time, resolution, interpolator distortion etc.


Indeed. Thanks for your input on this, it's really enabled me to focus more.

Cheers,
Steve

>
> Cheers,
> Magnus
>
> _______________________________________________
> time-nuts mailing list -- time-nuts at febo.com
> To unsubscribe, go to
> https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.
>



-- 
Steve Rooke - ZL3TUV & G8KVD & JAKDTTNW
A man with one clock knows what time it is;
A man with two clocks is never quite sure.


