[time-nuts] Oscilloscope-based measurements of frequency stability

Dana Whitlow k8yumdoober at gmail.com
Sun Oct 7 21:03:30 UTC 2018


Hello,

Here is the promised discussion (from about a week ago) of my scheme for
using a DSO to capture the information needed to produce detailed plots of
phase and frequency modulations of a noisy source under test.

Alas, the method is apparently inadequate for characterizing a source as
quiet as I feel I need for some future experiments, but I felt that the
method itself (if implemented with better equipment) could be of interest
as an alternative to TIC-based methods.  The text file also includes some
discussion of a different method that I feel will probably be better
suited to my future needs.

Look for two attached files: a text file bearing the method's description
and discussion, and a pdf file (~102 kB) showing a sample plot of relative
phase and frequency versus time.

Dana Whitlow    K8YUM
10/7/2018

On Mon, Oct 1, 2018 at 7:45 AM Dana Whitlow <k8yumdoober at gmail.com> wrote:

> I cheered when I saw Dave B's "silly question", for
> then I realized that I'm not the only one who likes
> to measure things with an o'scope.
>
> I had purchased a GPSDO a few weeks before and
> had  been observing its behavior relative to a free-
> running Rb by watching 10 MHz sinewaves drift with
> respect to each other as an aid in setting the Rb's
> frequency.  However, I was seeing enough fairly
> rapid random drift to limit the usefulness of this kind
> of observation.   It dawned on me that I was sometimes
> seeing drifts of several ns over the course of just
> several seconds, thus implying that sometimes the
> relative frequency error between the two sources was
> reaching as high as roughly 1E-9.  I wanted to be able
> to capture and plot a somewhat extended run of data
> so I could try to understand this behavior better.
>
> Being TIC-less, I decided to see what I could do with
> my o'scope, which is a Chinese-made 2-channel DSO
> with synchronous sampling by the two channels and
> with a respectable trace memory depth (28 MSa per
> channel).
>
> I began this effort  in earnest a couple of days before I
> saw Dave's question, and have only now brought it to
> a sufficient state of completion to feel justified in reporting
> some results.
>
> I am presently able to record about 45 minutes' worth of
> data as limited by the 'scope's trace memory, but my XP
> computer's RAM space limits me to processing only about
> 35 minutes of that in a seamless run.   Over that time
> span I've seen a peak relative frequency discrepancy of
> about 1.4E-9, with a handful reaching or exceeding 1E-9.
> I've also measured average frequency differences between
> the sources at a few parts in 10^11.
>
> Most of the effort went into developing a C program to do
> the processing and then correctly scaling and displaying
> the results in a form which I considered useful to me.  This
> processing of course had to deal with an off-frequency and
> drifting 'scope timebase, which is *horrible* compared to the
> quantities under measurement (as expected from the outset).
>
> Present indications are that at this level of GPSDO mis-
> behavior, the results I'm viewing are about 20 dB higher
> than the basic floor, which I am still characterizing.  I
> believe that the floor is limited primarily by uncorrelated
> sampling jitter between the two 'scope channels.
>
> If there is interest in this technique, I'll publish a
> detailed description of it and some plots showing results,
> probably in the form of an attachment in pdf format.
>
> Dana
>
>
-------------- next part --------------
Oscilloscope-Based High Resolution Relative Frequency Measurement

I've recently been working out a scheme for using a
2-chan DSO to make recordings from which I derive and
plot the phase difference by subtracting the phase
of one channel from that of the other.  In this way I
sidestep (to the first order at least) drift in the
sampling frequency of the 'scope, which can easily
be in the 0.1 to 1 PPM regime.

My principal objective in the effort was to take a
look at phase variations in a source's output to 
answer the following question: how much would its
phase typically wander around as seen in a specified
band of frequencies at the output of an idealized
phase detector comparing the signal under question
against an ideal source at the "same frequency"?

The catch is that for a reasonable sample size, say a
few megapoints to a few 10's of megapoints, one cannot
afford to use a sample rate of, say, 20+ MSa/s and get
any reasonable measurement duration.  What I've been
doing is seriously undersampling the 10 MHz waveforms
and using "creative aliasing" to effectively heterodyne
the 10 MHz down to a low frequency.  In my case I'm
getting an apparent frequency of around 84 Hz +/- a
couple Hz (due to drift).  This frequency arises because
the sample rate of the 'scope is a few PPM off, which
is a fortunate thing.  It's important to have it be
far enough off that the apparent frequency remains
always on the same side of the frequencies of the
two sources under comparison.  If my 'scope's sample
rate were right on, or too close, this technique would
not work as described.

I'm doing this work with a Rigol DS2202E, which when
warmed up typically displays a sample rate drift of
roughly two Hz (referred to 10 MHz) over a half-hour
or so measurement period.  
I'm using a 'scope sample rate setting of "10 kHz"
and an acquisition memory depth of 28 MSa per
channel.  At this sample rate, the creative aliasing
I referred to above is using the 1000th harmonic of
the sample rate to heterodyne the 10 MHz signals
down to about 84 Hz.
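
To put numbers on that, here's a tiny C check of the alias arithmetic.
This is just a sketch, not part of my processing program, and the
+8.4 ppm clock error is an assumed value picked to reproduce the ~84 Hz
I actually observe; the real offset and its sign will vary from unit to
unit.

  /* alias_demo.c - how undersampling at a not-quite-10-kHz rate
     heterodynes a 10 MHz signal down to ~84 Hz                          */
  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      double f_sig   = 10.0e6;     /* source under test, Hz              */
      double ppm_err = 8.4;        /* assumed 'scope clock error, ppm    */
      double fs      = 10.0e3 * (1.0 + ppm_err * 1e-6);  /* actual Sa/s  */
      /* the 1000th harmonic of fs lands 84 Hz away from 10 MHz, so the
         aliased (apparent) frequency seen in the record is:             */
      double f_alias = fabs(f_sig - 1000.0 * fs);
      printf("apparent frequency = %.1f Hz\n", f_alias);   /* ~84 Hz     */
      return 0;
  }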

With due post-processing I'm presently realizing a
recording duration of ~45 minutes, and am able to use
~35 minutes of that in the processing.  My PC (a
little $300 "NetBook") is severely memory bound,
else I could process the whole 45 minutes' worth of
acquired data.

I've done a series of noise floor measurements on
this technique, and am seeing frequency noise in
the area of 2E-11 rms.  This is with a final sample
rate of about 3.9 Hz after the processing I do.
This test involves splitting a signal from a single
source into both 'scope channels.  I've done this
with a pretty noisy GPSDO signal exhibiting rms
frequency noise of about 4E-10 & peak excursions
in excess of +/- 1E-9, a free-running Rb, and the
OCXO frequency reference in my spectrum analyzer,
which exhibits slow drift but no obvious rapid
misbehavior like the GPSDO.  In all cases the
frequency noise floor has been about the same
2E-11 rms.  The GPSDO under test apparently has a
very tight loop and is literally being jerked
around by GPS "noise".

Normally I will use a free-running Rb standard
(PRS-10) as the reference for my practical
measurements on "test articles".

I attribute the high floor to uncorrelated sampling
time jitter between the two 'scope channels.  Some
is probably due to sampling time jitter that is
common to both channels, and I've been thinking 
about how I might separate and measure this component.
I can't see blaming this on the 'scope's trigger
system jitter, for there is only a single trigger
for the whole 45-minute acquisition; from then on
the 'scope is on its own with no further triggers.

Here's a not-so-brief description of how I post-
process the acquired data once I have it in
arrays in the computer.  

NOTE that the primary object of this processing
is to reduce a total of some 56 megasamples of
data to some pretty plots of phase and frequency
differences as well as rms measures of the phase
and frequency differences (and whatever else I
happen to think of along the way).

For each channel's data, I do the following steps:

> Downsample data by 5X by taking each output
  sample to be the average of a block of five
  input samples.  This is exactly what DSOs
  do for their "high resolution" acquisition
  mode.  It provides a mild degree of suppression
  of aliased-in higher frequency noise components,
  and brings the array size down enough that I can
  have room for several similar sized arrays in
  RAM at once.
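
  A minimal sketch of this step in C (the names raw, dec and
  downsample5 are placeholders, not what's in my actual program):

    /* 5X block-average decimation: each output point is the mean of
       five consecutive input points; leftover samples are dropped.     */
    static long downsample5(const double *raw, long nraw, double *dec)
    {
        long nout = nraw / 5;
        for (long i = 0; i < nout; i++) {
            double sum = 0.0;
            for (int k = 0; k < 5; k++)
                sum += raw[5 * i + k];
            dec[i] = sum / 5.0;
        }
        return nout;          /* 28 MSa in -> 5.6 MSa out at 2 kSa/s    */
    }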
  
> Perform an FFT of size 2^22 (4,194,304) on
  the downsized array to obtain a spectrum.
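
  A sketch of this step, using FFTW as a stand-in for whatever complex
  FFT routine is actually in use; the real samples are packed into a
  complex array, and only the first 2^22 decimated samples (about 35
  minutes) are transformed.  All names here are placeholders.

    #include <fftw3.h>

    #define NFFT 4194304                      /* 2^22 points            */

    /* dec:  decimated real samples (at least NFFT of them)
       spec: NFFT-point complex spectrum, caller-allocated              */
    static void forward_fft(const double *dec, fftw_complex *spec)
    {
        fftw_complex *in = fftw_alloc_complex(NFFT);
        for (long j = 0; j < NFFT; j++) {
            in[j][0] = dec[j];                /* real part              */
            in[j][1] = 0.0;                   /* imaginary part         */
        }
        fftw_plan p = fftw_plan_dft_1d(NFFT, in, spec,
                                       FFTW_FORWARD, FFTW_ESTIMATE);
        fftw_execute(p);
        fftw_destroy_plan(p);
        fftw_free(in);
    }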

> Select a region 7800 bins wide centered
  roughly on the peak signal in the spectrum,
  using interactive graphics routines I've
  developed over the years to make this sort
  of thing easy, and then shift this block
  of spectrum data so that it is centered
  about zero frequency.  I used a complex
  FFT routine, so the retained spectrum data
  contains both I & Q data and I can center
  it about DC with no spectral folding issues.
    
> I then copy the zero-centered block into
  a much shorter array of 8192 samples so
  that all subsequent work (including an
  inverse FFT on each channel's data) is
  done in these much smaller arrays.
  
> Next I do the inverse FFTs, which now
  take only the blink of an eye since the
  array sizes are so small.  Each yields a
  time domain array of 8192 complex samples
  (I & Q), at a rate of 3.90625 Sa/sec,
  yielding a total duration of ~2097 seconds,
  which is just shy of 35 minutes. And the
  time resolution of the phase and frequency
  data is therefore 0.256 seconds.
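
  To make the last three steps concrete, here's a sketch, again leaning
  on FFTW and with an automatic peak search standing in for my
  interactive band selection.  NFFT, NBAND, NSMALL, spectrum_to_baseband
  and the array names are placeholders, not the actual program's.

    #include <fftw3.h>

    #define NFFT   4194304                /* 2^22-point spectrum        */
    #define NBAND  7800                   /* bins kept around the peak  */
    #define NSMALL 8192                   /* size of the baseband arrays*/

    /* Copy NBAND bins centered on the spectral peak into a small array,
       arranged so the band is centered about zero frequency (bin 0),
       ready for an inverse FFT.  The ~84 Hz line sits far from the
       array edges, so no boundary checks are shown.                    */
    static void spectrum_to_baseband(const fftw_complex *spec,
                                     fftw_complex *bb)
    {
        long peak = 1;
        double pmax = 0.0;
        for (long j = 1; j < NFFT / 2; j++) {     /* positive freqs only*/
            double mag2 = spec[j][0]*spec[j][0] + spec[j][1]*spec[j][1];
            if (mag2 > pmax) { pmax = mag2; peak = j; }
        }

        for (long j = 0; j < NSMALL; j++)         /* clear small array  */
            bb[j][0] = bb[j][1] = 0.0;

        for (long off = -NBAND / 2; off < NBAND / 2; off++) {
            long src = peak + off;           /* bin in the big spectrum */
            long dst = (off + NSMALL) % NSMALL; /* negative offsets wrap*/
            bb[dst][0] = spec[src][0];
            bb[dst][1] = spec[src][1];
        }
    }

    /* The inverse FFT is then just:
           fftw_plan p = fftw_plan_dft_1d(NSMALL, bb, iq,
                                          FFTW_BACKWARD, FFTW_ESTIMATE);
           fftw_execute(p);
       yielding 8192 complex (I & Q) samples at 3.90625 Sa/s.           */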
  
> At this point I'll calculate the raw phase
  and envelope amplitude and view the plots
  as a check that nothing has gone wrong so
  far.  True confession: I actually do some
  visual checks at several stages of the 
  process, just to keep that warm fuzzy
  feeling.  To obtain the phase I use the
  ATAN2 function, called for each bin as
  'phase[j]=atan2(data[j].im, data[j].re);'
  Note that phase obtained in this manner
  wraps, often creating something like the
  "hanging bridge" appearance.  In this case,
  this phase plot always looks really ugly
  because of the sloppy time base in the
  'scope I used to capture the data.  And
  I do mean UGLY- it's downright scary!
  But don't panic.
  
> At this point I could try subtracting such
  phase curves obtained from both channels
  to eliminate the effects of the 'scope's
  time base jitter and drift, but the prospect
  of dealing with many phase wraps that don't
  necessarily coincide between the two channels
  is too daunting for me.  
  
  So I adopt a better way: I perform a complex
  division, bin by bin, between the IQ arrays
  from the two 'scope channels.  The resultant
  quotient array is also in complex IQ form, but
  is now free of the severe phase disturbances
  arising from the 'scope's sampling rate mal-
  feasances.
  
  Then I do a 2nd phase calculation, this time
  on the quotient array.  The results are much
  cleaner and simpler, and now represent just
  what I want: the history of the phase
  differences between the two signals recorded
  by the 'scope.  
  
  At this point phase wraps may still occur,
  but now usually only zero or one wrap. 
  Or occasionally more if the two signals are
  quite a bit different in frequency (by
  time-nuts standards).  Either way, I now
  need only one uncomplicated unwrapping
  operation to yield a smooth, continuous,
  phase history curve.  All this accrues
  because I have removed the effects of the
  'scope's sloppy time base by dint of taking
  the complex ratio of the test signal and the
  reference signal.
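
  In code the division-and-unwrap business boils down to something like
  the following sketch.  It assumes the two channels' 8192-point I/Q
  results are held as C99 double complex (the fftw_complex layout is
  compatible); the names are placeholders.

    #include <complex.h>
    #include <math.h>

    #define NSMALL 8192

    /* phase[] receives the unwrapped phase difference, in radians      */
    static void phase_difference(const double complex *iq_test,
                                 const double complex *iq_ref,
                                 double *phase)
    {
        double prev = 0.0, offset = 0.0;
        for (long j = 0; j < NSMALL; j++) {
            /* element-by-element complex division cancels the time-base
               wander common to both channels                           */
            double complex q = iq_test[j] / iq_ref[j];
            double p = atan2(cimag(q), creal(q));  /* wrapped, -pi..pi  */

            /* simple unwrap: bump by 2*pi whenever the wrapped phase
               jumps by more than pi between successive samples         */
            if (j > 0) {
                double d = p - prev;
                if (d >  M_PI) offset -= 2.0 * M_PI;
                if (d < -M_PI) offset += 2.0 * M_PI;
            }
            prev = p;
            phase[j] = p + offset;
        }
    }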
  
> This final phase curve is typically a
  slightly noisy parabola (if there is
  relative frequency drift involved), or a
  slightly noisy sloped line if there is
  just a static frequency difference, or
  a slightly noisy horizontal line if the
  sources are locked to each other.
  
  Now I calculate a linear regression to get
  best fit straight line parameters, and a
  "parabolic regression" to obtain a full
  description of the curve resulting from
  linear frequency drift.  The slope from
  the linear regression tells me the static
  frequency difference right off the bat,
  but this may or may not be of much use
  if there's a lot of relative frequency
  drift involved.
  
  The 2nd order factor from the parabolic
  regression could in principle be used to
  say what the frequency drift rate is, but
  for small drift rates the result may be
  too noisy to make it of much use.  And for
  large drift rates, what time nut would
  use such a drifty source in the first
  place?  So I don't bother to make the
  drift rate calculation at all.
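
  For completeness, here's a sketch of the parabolic (least-squares
  quadratic) fit done via the normal equations; the straight-line fit is
  the same machinery with the t^2 term dropped.  Time is centered on the
  middle of the record for better numerical conditioning.  All names are
  placeholders, not my actual routine.

    #include <math.h>

    #define NSMALL 8192
    #define DT     0.256         /* seconds per sample after processing */

    /* Least-squares fit of phase[j] to c[0] + c[1]*t + c[2]*t*t, with t
       centered on mid-record.  Solved by building the 3x3 normal
       equations and doing a small Gaussian elimination.                */
    static void fit_parabola(const double *phase, double c[3])
    {
        double S[5] = {0}, b[3] = {0};
        for (long j = 0; j < NSMALL; j++) {
            double t = (j - NSMALL / 2) * DT, tk = 1.0;
            for (int k = 0; k < 5; k++) {
                S[k] += tk;                    /* sums of t^k            */
                if (k < 3) b[k] += tk * phase[j];
                tk *= t;
            }
        }

        double A[3][4];                        /* augmented normal matrix*/
        for (int r = 0; r < 3; r++) {
            for (int k = 0; k < 3; k++) A[r][k] = S[r + k];
            A[r][3] = b[r];
        }

        for (int r = 0; r < 3; r++)            /* forward elimination    */
            for (int rr = r + 1; rr < 3; rr++) {
                double f = A[rr][r] / A[r][r];
                for (int k = r; k < 4; k++) A[rr][k] -= f * A[r][k];
            }
        for (int r = 2; r >= 0; r--) {         /* back substitution      */
            double s = A[r][3];
            for (int k = r + 1; k < 3; k++) s -= A[r][k] * c[k];
            c[r] = s / A[r][r];
        }
        /* c[1] is the phase slope in rad/s at mid-record; dividing it by
           2*pi*10 MHz gives the fractional frequency difference there. */
    }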
  
> Now I put the parabolic regression result to
  practical use.  I calculate the smooth curve
  from the regression coefficients, and then
  subtract that curve from the unwrapped phase
  curve.  In one fell swoop I thus remove all
  the gross problems arising from static
  frequency and phase differences and from
  relative linear drift in frequency, leaving
  a nice plot of the noisy phase & frequency
  variations and perhaps other warts on the
  test article's output.  
  And the array data making up this curve is
  just right for calculating rms values of
  frequency and phase warts without being
  thrown off by the coarse stuff.
  
  This is what I had set out to do.  My initial
  intent was to see what the output of a
  suspect GPSDO was doing, and this scheme
  showed me all about it in graphic form.  The
  full-duration plots (35 minutes duration)
  mostly look like so much noise, but a greatly
  expanded plot is more interesting and
  informative.  I will include a pdf of the
  expanded plot with this text.
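
  And the detrend-and-RMS step, sketched with the same placeholder names
  and centered time as in the fit sketch above; the 2*pi*10 MHz factor
  converts a phase increment per sample into fractional frequency.

    #include <math.h>

    #define NSMALL 8192
    #define DT     0.256                 /* s per output sample          */
    #define F0     10.0e6                /* nominal carrier, Hz          */

    /* Subtract the fitted parabola from the unwrapped phase, then form
       the residual phase, the fractional frequency (from the phase
       increment per sample), and their rms values.                     */
    static void detrend_and_rms(const double *phase, const double c[3],
                                double *resid, double *freq,
                                double *rms_phase, double *rms_freq)
    {
        double sp = 0.0, sf = 0.0;
        for (long j = 0; j < NSMALL; j++) {
            double t = (j - NSMALL / 2) * DT;
            resid[j] = phase[j] - (c[0] + c[1] * t + c[2] * t * t);
            sp += resid[j] * resid[j];
        }
        for (long j = 0; j + 1 < NSMALL; j++) {
            freq[j] = (resid[j + 1] - resid[j]) / (2.0 * M_PI * F0 * DT);
            sf += freq[j] * freq[j];
        }
        *rms_phase = sqrt(sp / NSMALL);            /* radians            */
        *rms_freq  = sqrt(sf / (NSMALL - 1));      /* fractional (df/f)  */
    }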
  
  
It's pretty clear that the GPSDO I've been
working with misses my performance goal by
a huge margin.  Expressing the phase wander
as distance, I'm looking to detect phase
variations in the fractional inch regime,
in the frequency range of about 0.2 Hz to
about 5 Hz.  The GPSDO I've been working
with to date is showing wander equivalent
to tens of feet over the frequency range of
interest!  Too bad I can't afford a hydrogen
maser...

And it is also apparent that this measurement
method itself has too high a noise floor
to even adequately measure a source that's
good enough for my purpose. But it's been fun
and educational in the attempt, and I have
few regrets about expending the effort.

My next idea is to build a fair quadrature
demodulator comprising RF components (two
mixers and a quadrature phase splitter) and
capture the I & Q components with a 2-channel
DAQ running at perhaps 20 Sa/s.  There would
be anti-aliasing filters and some low-noise
gain between the demodulator's IQ outputs
and the DAQ.  There will be some challenges
in dealing with inaccuracies in the RF parts,
as well as with the non-simultaneous sampling
characteristic of cheap multichannel DAQs,
but hey- it's more educational opportunity!

Dana Whitlow   10/7/2018
  
   
  
  




-------------- next part --------------
A non-text attachment was scrubbed...
Name: Dana_freq_jumps.pdf
Type: application/pdf
Size: 103649 bytes
Desc: not available
URL: <http://febo.com/pipermail/time-nuts_lists.febo.com/attachments/20181007/8e5e365f/attachment.pdf>

