[time-nuts] ADEV slopes and measurement mode

Ole Petter Ronningen opronningen at gmail.com
Thu Apr 19 08:01:38 EDT 2018


I think it will require more than one reading.. :)


On Thu, Apr 19, 2018 at 12:08 AM, Magnus Danielson <
magnus at rubidium.dyndns.org> wrote:

> Hi Ole Petter,
> On 04/16/2018 12:12 PM, Ole Petter Ronningen wrote:
> > Hi, All
> >
> > This will be a bit long, and I apologize for it. Perhaps someone else
> > also struggles with the same.
> >
> > One of the properties of the familiar ADEV plots is the slopes - and
> > how the slope identifies the dominant noise type for the various
> > portions of the plot. My understanding is that the slopes stem from
> > "how the noise behaves under averaging" - white FM follows the usual
> > white noise slope of 1/sqrt(N); to put it another way, the standard
> > deviation of a set of many averages of length N will fall as 1/sqrt(N)
> > as N increases. If the standard deviation of the whole non-averaged
> > list is 1, the standard deviation of a set of 100-point averages will
> > be 1/10. Other noise types do not follow the same 1/sqrt(N) law, and
> > hence give rise to other slopes as the number of points averaged
> > increases as we look further to the right of the plot.
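The 1/sqrt(N) behaviour of white noise under averaging is easy to check numerically; a minimal pure-Python sketch (the sample count and seed are arbitrary):

```python
import random
import statistics

random.seed(1)

# White noise: the standard deviation of N-point averages falls as 1/sqrt(N).
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def std_of_averages(data, n):
    """Standard deviation of consecutive, non-overlapping n-point averages."""
    avgs = [sum(data[i:i + n]) / n for i in range(0, len(data) - n + 1, n)]
    return statistics.stdev(avgs)

s1 = std_of_averages(noise, 1)      # close to 1.0
s100 = std_of_averages(noise, 100)  # close to 0.1 = 1.0 / sqrt(100)
print(round(s1 / s100, 1))          # roughly 10
```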
> >
> > If I have already dropped the ball, I'd appreciate a correction..
> Hrhrm!
> It's not directly averaging but kind of. Ehm. Let me see how to explain
> this...
> When you measure frequency, one way or another, two time-stamps are
> formed. The distance between those two time-stamps will be the
> observation time tau. As you count how many cycles, often called events,
> occurred over that time, you can calculate the frequency as
> events/time. This is the basis of all traditional counters, which we
> these days call Pi-counters, for reasons I will explain separately.
> Now, we could be doing this measurement as N sequential back-to-back
> measurements, where the stop event becomes the start event of the next
> measurement. As we sum these up, the phase of the stop and the start of
> the next cancel, and the full sum will become that of the first
> start-event and the last stop-event. Regardless of how we divide it up,
> it will end up being the same measurement. The averaging thus kind of
> cancels out and, interestingly, does not do anything.
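The cancellation can be illustrated in a few lines of Python (the numbers are arbitrary; the point is only the telescoping sum):

```python
import random

random.seed(2)

# Simulated phase time-stamps of an oscillator, one per second
# (arbitrary units).
phase = [0.0]
for _ in range(10):
    phase.append(phase[-1] + 1.0 + random.gauss(0.0, 0.01))

tau = 1.0  # observation time of each back-to-back reading

# Ten back-to-back frequency readings: the stop event of one reading is
# the start event of the next.
readings = [(phase[i + 1] - phase[i]) / tau for i in range(10)]

# Averaging them telescopes down to one long measurement: all the
# intermediate phase values cancel.
avg = sum(readings) / len(readings)
single = (phase[-1] - phase[0]) / (10 * tau)
print(abs(avg - single))  # zero up to floating-point rounding
```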
> Now, as we attempt to establish the statistical stability of this value
> using the normal standard variance and standard deviation methods, also
> known as the Root Mean Square (RMS) algorithm, we have a bit of a
> problem, because the noises of an oscillator are non-convergent. So, an
> alternative method to handle that was presented in the February 1966
> article by David Allan. It still provides variance and deviation
> measures, but without being caught by the convergence problems.
> The ADEV for a certain observation interval is thus equivalent to a
> standard deviation measure, expressing how good the stability is at
> that observation interval, regardless of how we divided the measurement
> up to form the frequency estimations, as long as they form a continuous
> measurement, thus without dead-time.
> The slopes stem from how the 1/f^n power distributions of the noises
> get inverse Fourier transformed into the time domain, which is the
> basis for David Allan's article; he takes that from M. J. Lighthill's
> "An introduction to Fourier analysis and generalised functions" and
> adapts it to the power distributions of the noises. Because different
> 1/f^n noises are dominant in different parts of the spectrum, their
> slopes in the time domain will also show the same dominant features and
> thus be the limit of precision in our measurement.
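A small simulation makes the slopes concrete; a sketch using a plain non-overlapping Allan deviation estimator (the noise levels and sample counts are arbitrary):

```python
import math
import random

random.seed(3)
N = 200_000

def adev(phase, m):
    """Non-overlapping Allan deviation at tau = m (with tau0 = 1)."""
    x = phase[::m]
    d = [x[i + 2] - 2 * x[i + 1] + x[i] for i in range(0, len(x) - 2, 2)]
    return math.sqrt(sum(v * v for v in d) / (2 * len(d))) / m

# White FM: frequency noise is white, so phase is a random walk.
wfm = [0.0]
for _ in range(N):
    wfm.append(wfm[-1] + random.gauss(0.0, 1.0))

# White PM: the phase samples themselves are white noise.
wpm = [random.gauss(0.0, 1.0) for _ in range(N)]

slope_wfm = math.log(adev(wfm, 100) / adev(wfm, 1)) / math.log(100)
slope_wpm = math.log(adev(wpm, 100) / adev(wpm, 1)) / math.log(100)
print(round(slope_wfm, 2), round(slope_wpm, 2))  # near -0.5 and -1.0
```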
> Now, if there happen to be gaps in the data as gathered, the dead-time
> would cause a bias in the estimated stability, but that bias can be
> predicted and thus compensated for through a separate bias function
> that was also modeled in the same article. That way, the bias of the
> counters they had at the time could be overcome to produce comparable
> measures.
> The obstacle of the averaging over N samples that prohibited the use of
> the normal RMS function could be separated out into another bias
> function that was also modeled in the article. This way a 5-point
> stability measure and a 3-point stability measure could be compared, by
> converting them into the 2-point stability measure. The 2-point
> variance was later coined Allan's variance and later the Allan
> variance. That article forged the ring to control the variances.
> I can explain more about how the averaging does not give you more help,
> but the bias function is good enough. I've done simulations to prove
> this to myself and it is really amazing to see it misbehave as soon as
> there is a slope there. Classical statistics is out the window.
> > Now onto where I confuse myself (assuming I haven't already): measuring
> > phase vs measuring frequency, and white FM versus white PM. Specifically,
> > using a frequency counter to either measure the frequency directly, or
> > setting up a time interval measurement to measure phase.
> Actually, you tap the data from your counter in two different ways:
> either just to get time-stamps one way or another, or to get two
> time-stamps that have been post-processed into a frequency estimate.
> A good crash course on how frequency counters do their dirty work can
> be had by reading the HP 5372A programmer's manual. It actually shows
> how to use the raw digital data format and do all the calculations
> yourself.
> A more gentle approach can be had by reading the HP Application Note 200
> series.
> > Thinking about where the quantization noise/trigger noise/whatever is
> added
> > to the measurements, my initial reasoning was as follows:
> > * When measuring frequency, the quantization noise is added to the
> > frequency estimate. This, when converted to phase in the adev
> calculations,
> > will result in random walk in the phase plot - and hence a slope of -0.5.
> > "Two over, one down".
> > * When measuring the phase directly in a time interval measurement, the
> > noise is added to the phase record, and will not result in random walk in
> > phase - and hence a slope of -1. "One over, one down"
> The way that classical counters do their business - and here I mean any
> Pi-counters - time and frequency measurements are for many purposes
> essentially the same thing. The only real difference is whether you
> time-stamp phase as you do in a frequency measurement, and thus compare
> the reference clock and the measurement channel, or whether you measure
> the time-difference between two channels using the reference channel as
> a "transfer-clock".
> > This is in fact not what happens - or kinda, it depends, see below. As
> > someone on the list helpfully pointed out, as the gate time of your
> > frequency measurements increases, the noise stays the same. So the
> > slope is -1. This makes perfect sense when measuring phase, and is
> > consistent with my reasoning above - the way to "average" a phase
> > record is simply to drop the intermediate datapoints.
> >
> > My confusion concerns measuring frequency. I cannot see why the slope
> > is -1, and I am confused for two reasons:
> >
> > 1. The gate time as such is not increasing - we are averaging multiple
> > frequency readings, all of which have noise in them which, in my mind,
> > when converted into phase should result in random walk in phase and
> > follow the -0.5 slope.[1]
> See my explanation above. The averaging you do fools you into believing
> you got more data than you did.
> phase[3] - phase[2] + phase[2] - phase[1] + phase[1] - phase[0]
> = phase[3] - phase[0].
> That's all there is to it. What you gain by the longer measurement is
> that you get a longer time for the single-shot resolution of phase[3]
> and phase[0] to affect the precision; this is a systematic effect which
> is trivial to understand. We also get into the more subtle point of
> stability slope, and we get a reading with less effective noise because
> the length of the reading got longer. It can get so long that we start
> to lose again.
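The single-shot resolution point can be sketched numerically: with an ideal source, the only error in each Pi-counter reading is the start/stop time-stamp quantization, and its effect on the frequency estimate shrinks as 1/tau. A rough illustration (the resolution and frequency values are made up):

```python
import random
import statistics

random.seed(4)
f0 = 10e6   # ideal source frequency, Hz (illustrative)
q = 1e-9    # single-shot time-stamp resolution, s (illustrative)

def freq_readings(tau, n=2000):
    """Frequency estimates with uniformly quantized start/stop time-stamps."""
    out = []
    for _ in range(n):
        e_start = random.uniform(0.0, q)
        e_stop = random.uniform(0.0, q)
        # A phase error at each time-stamp maps to a frequency error / tau.
        out.append(f0 + f0 * (e_stop - e_start) / tau)
    return out

s_short = statistics.stdev(freq_readings(0.01))
s_long = statistics.stdev(freq_readings(1.0))
print(round(s_short / s_long))  # 100x longer tau -> about 100x less spread
```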
> ADEV and friends are greatly misunderstood; they are a way to get
> stability measures for the type of readout you do with your counters.
> At the time, there was no other useful method for this span than
> counters. Much of the ADEV limits and motivation comes from the limited
> instruments of the times. Once the tool was there and the strength of
> various noises could be determined, it became a handy tool.
> > 2. Trying to get to grips with this, I also did a quick experiment -
> > which only increased my confusion:
> >
> > I set up my 53230A to measure a BVA-8600 against my AHM, in three
> > modes, tau 1 second, each measurement lasting one hour:
> > 1. Blue trace: Time interval mode; the slope on the left falls with a
> > slope of -1, as expected.
> > 2. Pink trace: Frequency mode, "gap free"; the slope on the left also
> > falls with a slope of -1, somewhat confusing given my reasoning about
> > random walk in phase, but could maybe make sense somehow.
> Notice how these two have the same shape up to about 200 s, at which
> time they start to deviate somewhat from each other. What you should
> know is that the confidence interval for the ADEV starts to open up
> considerably at 200 s, and as you sit down and watch TimeLab collect
> the data in real time, you can see that the upper end flaps up and
> down, illustrating that the noise process is yanking it around and you
> don't know where it will end up. Do the same measurement again and they
> could look different; that's the meaning of the confidence interval. It
> just gives you a rough indication that the noise process places the
> true value, with a certain certainty, within those limits, but not
> where. As it opens up, the true value can be in an increasingly wide
> range. This is due to lack of degrees of freedom, and as the DF
> increases, the confidence interval becomes tighter. ADEV thus follows
> chi-square statistics.
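The widening confidence interval is easy to see in simulation: re-running the same white-FM measurement with different seeds, the ADEV estimate at a tau with few degrees of freedom scatters far more than at a tau with many. A rough sketch (record length and taus are arbitrary):

```python
import math
import random

def adev(phase, m):
    """Non-overlapping Allan deviation at tau = m (with tau0 = 1)."""
    x = phase[::m]
    d = [x[i + 2] - 2 * x[i + 1] + x[i] for i in range(0, len(x) - 2, 2)]
    return math.sqrt(sum(v * v for v in d) / (2 * len(d))) / m

def white_fm(seed, n=4000):
    """A white-FM phase record: a plain random walk."""
    random.seed(seed)
    phase = [0.0]
    for _ in range(n):
        phase.append(phase[-1] + random.gauss(0.0, 1.0))
    return phase

runs = [white_fm(seed) for seed in range(20)]
short_tau = [adev(p, 2) for p in runs]    # ~1000 differences each
long_tau = [adev(p, 500) for p in runs]   # only 4 differences each

def rel_spread(vals):
    return (max(vals) - min(vals)) / (sum(vals) / len(vals))

# The long-tau estimates scatter far more between runs.
print(round(rel_spread(short_tau), 2), round(rel_spread(long_tau), 2))
```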
> There is a small offset between the blue and pink traces. It would be
> interesting to find out where that offset comes from, but they should
> track the same until the confidence interval blows up.
> > 3. Green trace: Frequency mode, "RECiprocal" - *not* gap free. The
> > slope on the left falls with a slope of -0.5. Very confusing, given
> > the result in step 2. Random walk is evident in the phase plot.
> Hmm, strange.
> > Since the 53230A has some "peculiarities", I am not ruling out the
> > counter as the source of the difference in slopes for gap free vs not
> > gap free frequency measurement - although I can see how gaps in
> > frequency lead to random walk in the phase record and a slope of -0.5,
> > I just can't see how a gap free frequency record (where every
> > frequency estimate contains an error) does *not* result in RW in
> > phase.. :)
> Yeah. It should not do that.
> > So my questions are:
> > 1. Do gaps in the frequency record always lead to random walk in phase
> > and a slope of -0.5 - is this a "known truth" that I missed? Or is
> > this likely another artifact of the 53230A?
> Gaps would give a bias, and that bias function is known.
> Check if that could be what you are looking for.
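The gapped versus gap-free distinction can be sketched with quantization noise alone: when the stop time-stamp of one reading is reused as the start of the next, the reconstructed phase error is just the stamp error itself (white PM, slope -1); when every reading gets fresh, independent stamps, the unshared errors accumulate in the integrated phase as a random walk (slope -0.5). A rough illustration, not a model of the 53230A:

```python
import math
import random

random.seed(5)
N = 200_000
q = 1.0  # time-stamp quantization, arbitrary units

def adev_slope(phase, m1=1, m2=64):
    """Slope of the non-overlapping ADEV between tau = m1 and tau = m2."""
    def adev(m):
        x = phase[::m]
        d = [x[i + 2] - 2 * x[i + 1] + x[i] for i in range(0, len(x) - 2, 2)]
        return math.sqrt(sum(v * v for v in d) / (2 * len(d))) / m
    return math.log(adev(m2) / adev(m1)) / math.log(m2 / m1)

# Gap-free: shared time-stamps, so the phase record error is just the
# quantization error of each stamp -> white PM.
gapfree = [random.uniform(0.0, q) for _ in range(N)]

# Gapped: independent start/stop errors per reading never cancel, so the
# integrated phase error is a random walk -> looks like white FM.
gapped = [0.0]
for _ in range(N):
    gapped.append(gapped[-1] + random.uniform(0.0, q) - random.uniform(0.0, q))

s_gapfree = adev_slope(gapfree)
s_gapped = adev_slope(gapped)
print(round(s_gapfree, 2), round(s_gapped, 2))  # near -1 and -0.5
```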
> > 2. Is my understanding of how the slopes arise - that they show how
> > the noise types "behave under averaging" - correct?
> No, not really.
> > 3. Depending on the answer to 1: what am I missing in my understanding
> > of white PM vs white FM - why does gap free frequency measurement not
> > lead to a slope of -0.5?
> Because the underlying power distribution slopes do not inverse Fourier
> transform to the time domain like that. However, other mechanisms of
> gapped data interact. I would recommend a careful reading of David
> Allan's paper from February 1966. It takes a few readings to figure out
> what's going on.
> > Thanks for any and all insight!
> > Ole
> I hope I have contributed some to the understanding.
> > ---------
> > [1] Thinking about how frequency measurements are made, it kinda maybe
> > makes sense that the slope is -1; the zero crossings actually
> > *counted* by the frequency counter are not subject to noise - the
> > count is a nice integer. Only the start and stop interpolated cycles
> > are subject to quantization noise, and as more and more frequency
> > estimates are averaged, the "portion" of the frequency estimate that
> > is subject to the noise decreases linearly. I suppose.
> Well, it turns out that the 1/tau slope is not well researched at all.
> I have however done research on it, and well, the way that noise and
> quantization interact is... uhm... interesting. :)
> The non-linearity it represents does interesting things. I have
> promised to expand that article into a more readable one.
> You can however understand how the time quantization part of it works,
> and the single-shot resolution of the start and stop methods is really
> the key there.
> Now, I did talk about Pi-counters. A Pi-counter is a counter which
> applies equal weighting to frequency over the measurement period, which
> is about tau long. The shape of the weighting function looks like the
> Greek letter Pi. As you integrate this, you take the phase at stop
> minus the phase at start; that's just how the math works and is fully
> equivalent, and very practical, as frequency is not directly observable
> but relative phase is, so we work with that.
> Now, what J. J. Snyder presented in papers in 1980 and 1981 was an
> improved measurement method used in laser technology that improved the
> precision. It does this by averaging over the observation interval.
> This then inspired the modified Allan deviation MDEV, which solved the
> problem of separating white and flicker phase noise. For white phase
> noise, the slope now becomes tau^-1.5 rather than tau^-1. This pushes
> the noise down and reveals the other noises more quickly, which makes
> for an improved frequency measure. The separation of noise types is
> helpful for analysis. This was also used in the HP 53131A/53132A
> counters, and also the K+K counters etc., and these have become known
> as Delta-counters, because the shape of the frequency weighting looks
> like the Greek letter Delta. Frequency measures done with such a
> counter will have the deviation of the MDEV and not the ADEV. As you
> measure with such a frequency estimator, you need to treat the data
> correctly to extend it properly to ADEV or MDEV. Also, many counters
> output measurements that are interleaved, and if you do not treat them
> as interleaved you get the hockey-puck response at low tau, which wears
> off at higher taus, at which point you have the ADEV as if you had used
> raw time-samples, and all you got was a bit of useless data at the
> low-tau end.
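The tau^-1.5 slope for white PM can be verified with a small MDEV estimator (a sketch; the sliding-sum form just keeps it fast in pure Python):

```python
import math
import random

random.seed(6)
# White PM: the phase samples are independent white noise.
x = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def mdev(x, m):
    """Modified Allan deviation at tau = m (tau0 = 1), using a sliding sum
    of m consecutive second differences of the phase."""
    d = [x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(len(x) - 2 * m)]
    s = sum(d[:m])          # first window sum
    acc, cnt = s * s, 1
    for j in range(1, len(d) - m + 1):
        s += d[j + m - 1] - d[j - 1]  # slide the window by one sample
        acc += s * s
        cnt += 1
    return math.sqrt(acc / (2.0 * cnt * m ** 4))

slope = math.log(mdev(x, 64) / mdev(x, 4)) / math.log(64 / 4)
print(round(slope, 2))  # near -1.5 for white PM
```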
> Some counters, including the Philips Industrier/Fluke/Pendulum
> PM-6681/CNT-81 and PM-6690/CNT-90, use a linear regression type of
> processing to get even better data. This was presented in papers and
> app-notes by Staffan Johansson of Philips/Fluke/Pendulum. He also
> showed that you could do ADEV using that. Now, Enrico Rubiola got
> curious about that, looked into it, and realized that the linear
> regression forms a parabolic weighting of the frequency samples, and
> that the deviation measure of such pre-filtering into ADEV would not be
> ADEV but a parabolic deviation, PDEV. The trouble with the linear
> regression scheme is that, compared to the others, you cannot decimate
> the data, whereas for the others you can. This meant that for proper
> PDEV calculations you could not use the output of an Omega-counter
> (again named to reflect the shape of the frequency weighting), but
> needed to access the data directly. However, I later showed that the
> equivalent least-squares problem can be decimated in efficient form,
> allowing memory- and CPU-efficient multi-tau processing in arbitrary
> hierarchical form.
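The parabolic weighting can be checked directly: write the least-squares phase slope as a weighted sum of the per-sample frequencies, and the weights come out as an inverted parabola that sums to one. A small sketch with a hypothetical 9-point window:

```python
import random

n = 9  # samples in the regression window (illustrative)
t = list(range(n))
tbar = sum(t) / n
denom = sum((ti - tbar) ** 2 for ti in t)
w = [(ti - tbar) / denom for ti in t]  # least-squares weights on phase x[t]

# Rewrite slope = sum(w[t] * x[t]) as sum(W[i] * f[i]) with per-sample
# frequencies f[i] = x[i+1] - x[i]; W[i] is the tail-sum of w.
W = []
tail = 0.0
for ti in range(n - 1, 0, -1):
    tail += w[ti]
    W.append(tail)
W.reverse()
print([round(v, 3) for v in W])  # rises then falls: a parabola, summing to 1

# Numerical check: both forms give the same slope for a noisy phase ramp.
random.seed(7)
x = [0.3 * ti + random.gauss(0.0, 0.1) for ti in t]
ls_slope = sum(wi * xi for wi, xi in zip(w, x))
f = [x[i + 1] - x[i] for i in range(n - 1)]
freq_avg = sum(Wi * fi for Wi, fi in zip(W, f))
print(abs(ls_slope - freq_avg))  # zero up to floating-point rounding
```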
> The Pi, Delta and Omega shapes are really filtering mechanisms that
> suppress noise, which makes frequency measures more precise as you
> follow the development chain, and their deviations are the ADEV, MDEV
> and PDEV respectively. As you then want to estimate the underlying
> noise powers of the various 1/f^n power distributions, you need to use
> the right form to judge it correctly. If you intend to present it as
> ADEV, you can use MDEV/ADEV or PDEV/ADEV bias functions to correct the
> data so as to understand it in ADEV-equivalent plots, but that would
> not be valid for the Delta or Omega prediction of frequency.
> Phase can be estimated similarly, but again the weighting of frequency,
> or for that matter phase, over the observation time will render
> different precision values. The time deviation TDEV is based on a
> rescaled MDEV, so it is valid only for measures using a Delta-shaped
> frequency weighting. The lack of a time deviation for the Pi and Omega
> shapes remains an annoying detail, but TDEV was not meant for that
> purpose.
> OK, so this is the first lesson of "ADEV greatly misunderstood". :)
> Most people who say they understand ADEV and friends actually don't.
> Cheers,
> Magnus
> _______________________________________________
> time-nuts mailing list -- time-nuts at febo.com
> To unsubscribe, go to
> https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.
> and follow the instructions there.

More information about the time-nuts mailing list