[time-nuts] TPLL secret revealed

Steve Rooke sar10538 at gmail.com
Thu Jun 10 13:28:13 UTC 2010


Ulrich,

May I tackle some points here, please? Regarding Warren's
implementation of the TPLL, these points have been covered before,
several times, but there is evidently an inability on the part of
either Warren or myself to communicate effectively on these matters.

On 10 June 2010 22:36, Ulrich Bangert <df6jb at ulrich-bangert.de> wrote:
> Warren,
>
> I know you like my software and therefore please allow me to put my 50 cts.
> into the discussion:
>
>> The reason that the simple TPLL works so good
>> but is hard for some "experts" to accept, seems
>> to come down to the fact that this method uses
>> Frequency and not Phase to make the raw data
>> log used to then calculate ADEV data.
>
> This belief is the biggest misconception of yours. No one has ever denied
> that correct ADEV values can be computed from frequency data and (as far as
> I believe) Allan came out with a formula for phase data and for frequency
> data at the same time. The problem is a bit more subtle, but by no means out
> of the reach of a good technician like you.

The correct calculation of ADEV requires that the average of the
variable be taken over a specific time interval. When you measure
averaged frequency at fixed intervals of Tau0, this requirement
is met. A phase measurement, by contrast, is taken over the current
period of each waveform, so if the waveform contains noise,
successive measurements will not be spaced evenly at Tau0; they will
be spaced at the current length of the period of the waveform, which
varies with its noise component. To make this point clearer: even
though the unknown frequency may be divided down to 1 Hz for Tau0 =
1 s, each successive phase measurement will occur at a time
interval of 1 s +/- the current noise component, and this does not
satisfy Allan's equation for the AVAR/ADEV calculation.
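
To make the frequency-domain calculation concrete, here is a minimal
sketch in Python (my own illustration, not anyone's published code) of
Allan deviation computed from a series of evenly spaced
fractional-frequency averages y[k], each taken over one contiguous
interval of Tau0:

    import numpy as np

    def adev_from_freq(y):
        """Allan deviation at tau = Tau0 from fractional-frequency
        averages y[k], one per contiguous Tau0 interval."""
        dy = np.diff(y)                        # y[k+1] - y[k]
        return np.sqrt(0.5 * np.mean(dy ** 2))

    # Made-up white-FM-like data, e.g. 1 s averages
    rng = np.random.default_rng(0)
    y = 1e-11 * rng.standard_normal(10000)
    print(adev_from_freq(y))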

> I would like to keep the topic of dead time out of the discussion. Therefore
> please consider a situation where two old-fashioned frequency counters (the
> ones that were only counting) are synchronized in such a way that they
> produce frequency data at a Tau0 of 1 second without any dead time: the first
> counter for second n, then the second counter for second n+1, then the first
> counter for second n+2, and so on. If you feed the produced data into Allan's
> frequency formula then you will get a perfect ADEV calculation out of it.
> The only drawback is that it will have a high noise floor, because with the
> counters counting complete periods of the wave, their effective resolution
> may be considered one period length of the wave.

Agreed.
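
The scheme is easy to mock up. A toy sketch, assuming an ideal 10 MHz
source with a little frequency wander (all names and values here are
mine), of the two alternating zero-dead-time counters producing one
gap-free series of 1 s frequency readings:

    import numpy as np

    rng = np.random.default_rng(1)
    f_nom = 10e6
    seconds = 100
    # True average frequency over each 1 s gate (nominal + small wander)
    f_true = f_nom + 0.5 * rng.standard_normal(seconds)

    # Counter A takes the even seconds, counter B the odd seconds; each
    # counts whole periods in its gate, so resolution is one count (1 Hz).
    counts_a = np.floor(f_true[0::2])
    counts_b = np.floor(f_true[1::2])

    # Interleave the readings back into one continuous, dead-time-free
    # series of fractional-frequency values
    y = np.empty(seconds)
    y[0::2], y[1::2] = counts_a, counts_b
    y = (y - f_nom) / f_nom
    print(y[:5])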

> Now let us consider what the old-fashioned counter REALLY does: over a gate
> time of 1 second (identical to Tau0) it COUNTS the number of WHOLE periods.
> Basically, the old-fashioned counter makes an integrating phase
> measurement over the time interval Tau0. The result is not displayed in
> units of the phase domain but in units of the frequency domain, but the key
> point is that the frequency measurement gathered this way contains the same
> information content as if a phase measurement had taken place. Therefore it
> becomes clear immediately why one must use a slightly different formula for
> the frequency values, but why otherwise everything we know from phase data is
> contained in the frequency data as well.

The Allan equation specifically requires integrated values over a
specified time interval. To include values outside of this specific
time interval in the collection of phase data would be incorrect. By
its very nature, the collection of data in the phase domain does not
lend itself to measurements at specified time intervals, as the tail
wags the dog. Frequency data in this case is taken over a specific
gate time, and it is this gate time which is kept constant; therefore
the data is spaced at a specific time interval. Phase-domain data is
controlled by the length of the period of the waveform, and so it is
that which determines the time interval of the measurements. I can see
what people are getting confused about, because for a single cycle of
a waveform it is true that frequency is the reciprocal of the period,
but that is too simple when we come to calculate the Allan variance.
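
To illustrate the single-cycle point: a small sketch, with a made-up
noisy nominal 1 Hz signal, showing that per-cycle reciprocal-period
readings are time-stamped at the zero crossings, which do not fall on
an even 1 s grid (whether that matters for ADEV is exactly the point
under dispute here):

    import numpy as np

    rng = np.random.default_rng(2)
    # Successive periods of a nominal 1 Hz signal with a little jitter
    periods = 1.0 + 1e-3 * rng.standard_normal(20)
    crossings = np.cumsum(periods)     # the zero-crossing instants

    # Per-cycle "frequency" readings, stamped at the crossings --
    # note the sample instants are NOT an even grid of 1 s, 2 s, ...
    f_per_cycle = 1.0 / periods
    print(crossings[:5])
    print(f_per_cycle[:5])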

> Next consider the case that the frequency of the DUT changes linearly with a
> negative slope during the first half of a second to a minimum at the center
> of the second, and then changes with the same but positive slope, so that at
> the end of the second the frequency is the same as at the beginning of the
> second. Clearly a phase measurement will reveal this behaviour, and the
> old-fashioned counter will as well. This is why we say that the phase
> measurement, as well as the frequency measurement gathered this way, is
> characteristic for the WHOLE of the second of Tau0.

Agreed, and this is a valid point which will come up again later.
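
Ulrich's down-and-up example is easy to make concrete. A sketch,
assuming a 10 MHz carrier whose frequency ramps down by a made-up 1 Hz
at the half-second mark and back up; the 1 s average still records the
excursion even though the start and end frequencies match:

    import numpy as np

    f0, dip = 10e6, 1.0                # nominal frequency, peak deviation
    t = np.linspace(0.0, 1.0, 100001)  # one second, finely sampled
    f_inst = f0 - dip * (1.0 - np.abs(2.0 * t - 1.0))   # V-shaped dip

    # Uniform grid, so the mean approximates (1/T) * integral of f dt
    f_avg = f_inst.mean()
    print(f_inst[0], f_inst[-1])       # 10e6 Hz at both ends
    print(f_avg)                       # ~ f0 - dip/2: the dip is captured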

> The next improvement to the old-fashioned pure counter was the invention of
> subclock interpolation schemes. A counter using this works as follows: after
> the beginning of the gate time it waits for the next zero crossing and then
> measures the time up to the last zero crossing within the gate time with a
> fixed resolution of, say, 1 ns (like the well-known Racal Dana
> 1992/1996/1998). The frequency value is then the result of a computation. If
> you consider this working principle you notice that this is even more of a
> phase-meter-like device than the original counter-only scheme. For that
> reason, frequency measurements with a counter like that are equally well
> suited for ADEV calculation.

This is really a simple case of the frequency meter having a slightly
shorter effective gate time and hence giving an accurate reading of
frequency over the specified period, i.e. you select a gate time of 1 s
to obtain a reading every 1 s, but the counter takes the frequency
reading over a span slightly shorter than this gate. It is not
specifically phase data, as the last zero crossing is not measured
directly; it is just determined to lie within a 1 ns window, so this
still makes it a frequency measurement. There are some other issues
with this: what happens to the little bit of the waveform between the
last zero crossing and the end of the selected gate time? Is it
reflected in the subsequent measurement, or is something lost, which
would result in dead time (but you didn't want to go there)?
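
A sketch of that reading, as I understand the Racal-style scheme (the
1 kHz test signal and 1 ns LSB below are made-up values): count whole
cycles between the first and last zero crossings inside the gate,
quantise the crossing-to-crossing span to the interpolator resolution,
and divide:

    import numpy as np

    def interp_counter(crossings, gate_start, gate_len, lsb=1e-9):
        """Interpolating-counter reading (a sketch): whole cycles between
        the first and last zero crossing inside the gate, divided by the
        crossing-to-crossing span quantised to the interpolator LSB."""
        lo, hi = gate_start, gate_start + gate_len
        inside = crossings[(crossings >= lo) & (crossings <= hi)]
        n_cycles = len(inside) - 1
        span = np.round((inside[-1] - inside[0]) / lsb) * lsb
        return n_cycles / span

    # Ideal 1 kHz zero crossings, 1 s gate -> reading of ~1000 Hz
    crossings = np.arange(0.0, 1.1, 1e-3)
    print(interp_counter(crossings, 0.0, 1.0))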

> The next improvement in counter technology is to apply tricks so as not to
> measure a single time interval during the gate time, but instead to make
> thousands of time-delayed measurements and then apply statistics to them.
> The Agilent 53131/2 and the new Pendulum counters belong to this class. They
> deliver even more frequency resolution, but it has been shown and discussed
> in another thread here why frequency measurements with this class of
> counters are NOT WELL suited for ADEV calculation. That is why we leave them
> out.

And who can afford them anyway :)

> Once we have understood these facts, let us return to the tight PLL method.
> Let us consider what would happen in the above case with the frequency
> changing down and up linearly within one second. Well, since the PLL tightly
> tracks the DUT in frequency, the loop voltage will be an exact copy, in the
> voltage domain, of what is happening in the frequency domain. The key point
> is that the integrating process that is inherent in the counter-only
> measurement, and also in the improved counter measurement, does
> NOT take place INSIDE the PLL loop.

Well, this is the function of the oversampling measurement method
which we have been trying to explain all this time. As the voltage of
the EFC goes up and down, or down and up, during the specific Tau0
period, the oversampled measurements are taken and subsequently
averaged to produce an average measurement for that Tau0 interval.
Each of the oversampled measurements is added to the others, and the
sum is divided by the number of measurements taken over the Tau0
period. This is simply performed by a computer. Do we understand this
point yet? I have described it in some detail previously. I guess I
had not said in so many words that this averaging is performed by a
computer, but it has been said that the oversampled measurements are
taken by an ADC DAQ, so presumably that is connected to a computer,
which is obviously performing the calculation.
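
In code, the averaging I am describing is nothing more than this
sketch (a minimal illustration with names and rates of my own
choosing, not Warren's actual software):

    import numpy as np

    def block_average(efc_samples, fs, tau0):
        """Average the oversampled EFC voltage over each Tau0 interval:
        sum the samples in each block and divide by the block length."""
        n = int(round(fs * tau0))               # samples per Tau0
        usable = (len(efc_samples) // n) * n    # drop any ragged tail
        blocks = efc_samples[:usable].reshape(-1, n)
        return blocks.mean(axis=1)              # one value per Tau0

    # e.g. a 1 kHz DAQ and Tau0 = 1 s -> 1000 samples per reading
    rng = np.random.default_rng(3)
    v = rng.standard_normal(10000)
    print(block_average(v, fs=1000.0, tau0=1.0))   # 10 averaged readings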

> Had you looked at the loop voltage at a Tau0 of 1 s you would not have
> noticed ANYTHING of the frequency changes, because the loop-voltage
> measurements deliver instantaneous frequency information and not information
> that is characteristic of your Tau0. Because the loop voltage contains
> INSTANTANEOUS frequency information it is different from counter-originated
> data and needs special treatment: it needs integration afterwards, which in
> the original NIST method is applied by the voltage-to-frequency converter
> and the following impulse counter. The case of the frequency changing down
> and up linearly within one second shows up in the impulse-counter values
> when looked at at a Tau0 of 1 s, but it does not show up in the loop voltage
> when looked at at 1 s. That is the reason why measuring the loop voltage
> with an A/D converter delivers samples of instantaneous frequency data that
> do not compare 1:1 to values measured with conventional counters.

And that is why oversampling is being used, my friend. I have even
spoken about the VFC before, as I was aware of this; its action can
be seen in the NIST documentation. Warren's implementation of the
TPLL takes into account any variations that occur during the whole
period of Tau0 and averages them out, so the value of the EFC voltage
for Tau0 is the average, NOT the instantaneous value. I have laboured
this point as well.
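
For completeness, turning those averaged EFC voltages into the
fractional-frequency series that feeds the ADEV formula is one scale
factor. The tuning slope K_V here is an assumed, separately calibrated
constant, and both numbers below are made up for illustration:

    # Sketch: averaged EFC voltage -> fractional frequency offset.
    # K_V (Hz per volt) is an assumed, separately calibrated tuning
    # slope of the locked reference oscillator; F0 is its nominal
    # frequency. Both values are made up for illustration.
    K_V = 0.5          # Hz/V
    F0 = 10e6          # Hz

    def efc_to_y(v_avg):
        return (K_V * v_avg) / F0

    print(efc_to_y(0.002))     # -> 1e-10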

> Had you included the voltage-to-frequency converter and counted the impulses
> coming from it with a PC and some software, then Bruce would have applauded
> you, because these ingredients would have performed the necessary
> integration of the loop voltage. Since you left out the integration in
> hardware, Bruce has been pointing to the fact that you need integration in
> the software if you want to claim that you have built an implementation of
> NIST's tight PLL method. If you leave out the integration in software, IT IS
> NOT NIST'S TIGHT PLL METHOD with its well-known properties. Instead it is
> WARREN'S TIGHT PLL METHOD with its not-so-well-known properties. WARREN'S
> TIGHT PLL METHOD need not be bad a priori, but since it is different from
> the NIST method you cannot rely on anything that has been said about the
> NIST method. You will have to show in which cases it works well and in
> which cases it does not, completely on your own.

Well, I guess that Bruce did not see that the effect of the VFC was
being exactly duplicated by this oversampling. OK, so what about the
integration in the PLL loop filter? Well, try feeding two
oscillators into a phase comparator and then feeding the output
directly into the EFC of one of the oscillators without any form of
damping. See how much fun you have trying to get that stable. So you
include a simple low-pass filter to stabilise the loop, but you make
sure that the bandwidth of this low-pass filter does not interfere
with the reference oscillator tracking the unknown oscillator and all
its noise (as far as possible). This means that you choose a filter
with a wider bandwidth than the Tau0 you are measuring at. The
instantaneous value of the EFC at any moment will not reflect the
integrated average of the EFC over the Tau0 period, but by sampling
this value many times (oversampling) during the Tau0 period and
averaging those values, a true approximation, limited only by the
oversampling rate, of the average value of the EFC over the Tau0
period can be calculated. So this is why Warren has been claiming his
method is the same as the NIST TPLL method.

There has obviously been a complete inability on the part of Warren
and myself to communicate these points to you. I do not have the
classical university vocabulary that, I'm sure, some of you who have
not understood what I have been saying would expect. I have tried
repeatedly to write the operation of this up in plain English, but
it's blatantly obvious that I am not speaking the correct technical
banter that is required to get these concepts across to some of you.
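
That "limited only by the oversampling rate" claim can at least be
sanity-checked numerically. A sketch using a made-up smooth
loop-voltage waveform (not real EFC data): the sample mean converges
on the true 1 s integral as the sampling rate rises. This is a
plausibility check, not the "strong mathematical treatment" Ulrich
asks for:

    import numpy as np

    def v(t):                              # stand-in loop-voltage waveform
        return np.exp(-3.0 * t)

    true_avg = (1.0 - np.exp(-3.0)) / 3.0  # exact 1 s average of v(t)

    for fs in (10, 100, 1000, 10000):      # oversampling rates, samples/s
        t = np.arange(fs) / fs             # sample instants over 1 s
        err = v(t).mean() - true_avg
        print(fs, err)                     # error shrinks roughly as 1/fs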

> I understand that an important part of your argumentation is the fact that
> you do not look at the loop voltage at a rate of Tau0 (which would be a
> catastrophe for my example) but at a much higher rate, which you call
> oversampling with some justification. Therefore the down and up in frequency
> of my example is indeed contained in your samples of the loop voltage. What
> you have to prove is that the signal processing you apply to your samples
> basically IS EQUIVALENT to the integration of the NIST method. My last
> posting concerning this case already indicated that real-world experiments
> are a limited tool for that purpose. You would need a strong mathematical
> treatment to show this equivalence for ALL practical cases. Otherwise it
> will stay Warren's tight PLL method, and we will need to wait for the years
> to come to see its impact on the world of science.

I believe that my mathematics education, obtained at high school and
college, is sufficient to explain this. Each oversampled value taken
over the period of Tau0 is added to the others, and then that sum of
all the oversampled values is divided by the number of samples. This
seems to be the normal way we determine the average of a group of
numbers; I'm pretty sure of that, or does anyone wish to challenge
it? I'm not sure there is any need for a "strong mathematical
treatment" to show how to average a group of numbers; I seem to
remember doing this sort of thing at the age of 11. Now maybe there
is some fancy-pants signal-processing thingy that Bruce and yourself
think needs to be done here, but I'm buggered if I know why that
would be the case, unless this is the secret key to the time lords'
domain where only true time gods are allowed :) (that word is in
common usage here)
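
For the record, the average I keep describing and the integration
being demanded are the same thing up to a sampling approximation.
With N = f_s * Tau0 samples v_k taken every 1/f_s seconds over one
Tau0:

$$\frac{1}{N}\sum_{k=1}^{N} v_k \;=\; \frac{1}{\tau_0}\sum_{k=1}^{N} v_k \cdot \frac{1}{f_s} \;\approx\; \frac{1}{\tau_0}\int_0^{\tau_0} v(t)\,dt$$

That is just the rectangle-rule approximation of the integral, and
the size of the approximation error is exactly what the oversampling
rate controls.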

> It is by far not as simple as that:

And it's not as complicated as you make out.

>> The reason that the simple TPLL works so good
>> but is hard for some "experts" to accept, seems
>> to come down to the fact that this method uses
>> Frequency and not Phase to make the raw data
>> log used to then calculate ADEV data.
>
> and you should check for yourself whether you really want to stand by
> claims like that. Had you listened a bit more to what Bruce has been saying
> over the last weeks, we would perhaps already have a nice piece of hardware
> (yours!) AND a correct mathematical treatment of the samples (delivered by
> Bruce). This missed opportunity is a real pity.

I think I have already described the differences between data taken in
the frequency and phase domains and how this affects the calculation
of the Allan functions. I think that if Bruce, and perhaps others, had
really taken the time to understand what was actually being proposed,
then this whole affair would not have descended into this mess in the
first place. Another case of communication being the biggest barrier
to communication.

Best regards,
Steve

> Best regards
> Ulrich Bangert

-- 
Steve Rooke - ZL3TUV & G8KVD
The only reason for time is so that everything doesn't happen at once.
- Einstein



