[time-nuts] What is "accuracy"? (newbie timenut, hi folks!)

David mcquate at sonic.net
Fri May 6 03:19:10 UTC 2016


Hi Belinda, 

    When a manufacturer builds the first N oscillators, they are tested,
usually over a range of temperatures, and some of them are characterized
over a long period of time.  They will note the distribution of
oscillator frequency at, say, 25 C, and decide where to make a cut to
accept or reject each unit.  The published frequency accuracy
specification is probably wider than the manufacturing acceptance
limit.  For
example, if the published spec is plus or minus 5 ppm, they might reject
any oscillator whose frequency is more than 3 ppm from nominal.  From
the distribution one can tell how many will be scrapped. They are trying
to prevent a customer from getting a unit that "doesn't meet spec."  As
a customer, if you buy a large quantity and characterize them yourself,
you'll probably see a truncated gaussian distribution. 
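
A rough sketch of that screening process, just to make the idea
concrete (the process spread and the acceptance limit below are
invented numbers, not anything from a real production line):

    import numpy as np

    rng = np.random.default_rng(0)
    sigma_ppm = 2.0          # assumed spread of raw production, ppm (illustrative only)
    accept_limit_ppm = 3.0   # manufacturing cut, tighter than the published +/-5 ppm spec

    # frequency offsets of 100k produced units
    offsets = rng.normal(0.0, sigma_ppm, 100_000)
    accepted = offsets[np.abs(offsets) <= accept_limit_ppm]

    print("fraction scrapped:", 1.0 - accepted.size / offsets.size)
    print("worst accepted unit:", np.abs(accepted).max(), "ppm")

Every unit that ships is inside 3 ppm, so the published +/-5 ppm spec
has guard band, and what a large customer would measure is the
truncated gaussian mentioned above.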

Usually there's also a drift or aging spec that says the oscillator
will remain within X ppm of nominal over a given period.  There may be
specs for more than one time interval, such as one year and ten years.
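
As a purely illustrative back-of-the-envelope (these numbers are
invented, not taken from any datasheet), the initial tolerance and the
aging allowance combine roughly like this:

    # invented numbers, for illustration only
    initial_offset_ppm = 2.0      # frequency error allowed at shipment
    aging_ppm_per_year = 1.0      # worst-case drift allowed by the aging spec
    years = 1.0

    worst_case_ppm = initial_offset_ppm + aging_ppm_per_year * years
    time_error_per_day_s = worst_case_ppm * 1e-6 * 86400

    print(worst_case_ppm, "ppm worst case after", years, "year(s)")
    print(time_error_per_day_s, "s/day if the unit sat at that extreme")  # ~0.26 s/day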

All these specs apply to a given temperature, but there may be other
specs that address how much the oscillator's frequency will change as a
function of ambient temperature, power supply voltage, changes of
loading on its output, etc. 
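
If you wanted a crude worst-case error budget, you could just sum the
individual sensitivities; the coefficients below are invented, purely
to show the bookkeeping:

    # invented sensitivity figures, for illustration only
    tempco_ppm_per_C  = 0.1
    delta_T_C         = 30.0    # ambient swing away from the test temperature
    supply_ppm_per_V  = 0.2
    delta_V           = 0.25    # supply voltage tolerance
    load_pull_ppm     = 0.1     # shift when the output loading changes

    worst_case_ppm = (tempco_ppm_per_C * delta_T_C
                      + supply_ppm_per_V * delta_V
                      + load_pull_ppm)
    print(worst_case_ppm, "ppm worst-case environmental contribution")  # 3.15 ppm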

An oscillator may also have various spectral purity specs, such as
phase noise at various offset frequencies.  These address the
oscillator's behavior over much shorter time scales.

Dave 

On 2016-05-05 18:34, BJ wrote:

> Hi Time Nuts,
> 
> I'm fairly new to the fascinating world of time and frequency, so I
> apologise profusely in advance for my blatant ignorance.
> 
> When I ask "what is accuracy" (in relation to oscillators), I am not asking
> for the textbook definition - I have already done extensive reading on
> accuracy, stability and precision and I think I understand the basics fairly
> well - although, after you read the rest of this, you may well (rightly)
> think  I am deluding myself. It doesn't help matters when some textbooks,
> papers and web articles use the words precision, accuracy and uncertainty
> interchangeably. (Incidentally, examples of my light reading include the
> 'Vig tutorial' on oscillators, HP's Science of Timekeeping Application note,
> various NIST documents including the tutorial introduction on frequency
> standards and clocks, Michael Lombardi's chapter on Time and Frequency in
> the Mechatronics Handbook and many other documents including PTTI and other
> conference proceedings). Anyway, you can safely assume I understand the
> difference between accuracy and precision in the confused musings that
> follow below.
> 
> What I am trying to understand is, what does it REALLY mean when the
> manufacturer's specs for a frequency standard or 'clock' claim a certain
> accuracy. For ease and argument's sake let us assume that the accuracy is
> given as 100 ppm or 1e-4 ....  
> 
> As per the textbook approach, I know I can therefore expect my 'clock' to
> have an error of up to 86400 x 1e-4 = 8.64 s per day.
> 
> But does that mean that, say, after one day I can be certain that my clock
> will be fast/slow by no more than 8.64 seconds or could it potentially be
> greater than that? In other words, is the accuracy a hard limit or is it a
> statistical quantity (so that there is a high probability that my clock will
> function this way, but that there is still a very small chance (say in the
> 3sigma range) that the error may be greater so that the clock may be
> fast/slow by, say, 10 seconds)? Is it something inherent, due to the nature
> of the type of oscillator (e.g. a characteristic of the crystal or atom,
> etc.) or does it vary so that it needs to be measured, and if so, how is
> that measurement made to produce the accuracy figure? Are environmental
> conditions taken into account when making these measurements (I am assuming
> so)? In other words, how is the accuracy of a clock determined? 
> 
> Note that I am conscious of the fact that I am being somewhat ambiguous with
> the definitions myself. It is my understanding that the accuracy (as given
> in an oscillator's specs) relates to frequency - i.e. how close the
> (measured?) frequency of the oscillator is to its nominal frequency - rather
> than time i.e. how well the clock keeps time in comparison to an official
> UTC source.... but I am assuming it is fair to say they are two sides of the
> same coin. 
> 
> Does accuracy also take stability into account (since, clearly, if an
> oscillator experiences drift, that will affect the accuracy - or does it?)
> or do these two 'performance indicators' need to be considered
> independently? 
> 
> I am guessing that the accuracy value is provided as a general indicator of
> oscillator performance (i.e. the accuracy does REALLY just mean one can
> expect an error of up to, or close to?, a certain amount) and that stability
> (as indicated by the ADEV) is probably more significant/relevant.
> 
> It is also entirely possible I am asking all the wrong questions. As you can
> see, confusion reigns. I am hoping things will become clearer to me as I
> start playing around with hardware (fingers and toes crossed on that one).
> 
> In the meantime, if anyone could provide some clarity on this topic or set
> my crooked thinking straight, my gratitude will be bountiful. 
> 
> Thanks.
> 
> Belinda
> 
> _______________________________________________
> time-nuts mailing list -- time-nuts at febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.

