[time-nuts] Re: Simple simulation model for an OCXO?

Attila Kinali attila at kinali.ch
Wed May 4 16:49:29 UTC 2022


On Tue, 3 May 2022 08:06:22 -0700
"Lux, Jim" <jim at luxfamily.com> wrote:

> There's some papers out there (mentioned on the list in the past) about 
> synthesizing colored noise. Taking "White" noise and running it through 
> a filter is one approach. Another is doing an inverse FFT, but that has 
> the issue of needing to know how many samples you need. Although I 
> suppose one could do some sort of continuous overlapping window scheme 
> (which, when it comes right down to it is just another way of filtering)).
> 
> https://dl.acm.org/doi/abs/10.1145/368273.368574


This one does not work with the noise types relevant to oscillators.
First of all, 1/f^a noise is not stationary: the conditional
expectation E[X(t)|X(t_0)], given the value X(t_0) at time t_0,
is E[X(t)|X(t_0)] = X(t_0). I.e. it is neither the mean of X(t)
nor its value X(0) at some start time. Also, the ensemble
variance grows over time (IIRC with t^(a-1)).
Yes, this also means that 1/f^a noise is not ergodic, and thus
most of what you learned in statistics classes does not apply!
You have been warned! ;-)

Second, 1/f^a noise has singularities in its auto-covariance
for a = 2*n + 1 (n = 0, 1, 2, ...), most notably for a=1 and a=3.
See Mandelbrot and Van Ness' work [1] on this.
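As an aside, the non-stationarity can be seen directly in the
auto-covariance of fractional Brownian motion B_H from [1] (with
Hurst exponent H, related to the spectral exponent by H = (a-1)/2
for 1 < a < 3):

  E[B_H(t) B_H(s)] = (V_H / 2) * (|t|^(2H) + |s|^(2H) - |t-s|^(2H))

This depends on t and s individually, not only on the lag t-s, as
it would for a stationary process.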
 
Over the years, I've tried a few methods of generating noise
for oscillator simulation, but only two had any practical
value (the others had flaws that made them sub-optimal):
1) FFT based systems 
2) Filter based systems

FFT based systems take a white, normally distributed noise source,
Fourier transform it, filter it in the frequency domain, and transform
it back. Runtime is dominated by the FFT and thus O(n*log(n)).
There was a nice paper by either Barnes or Greenhall (or both?)
on this, which I seem currently unable to find. This is also the
method employed by the bruiteur tool from sigma-theta.
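A minimal sketch of the FFT method, assuming NumPy (the function name
and parameters are mine, not bruiteur's):

```python
import numpy as np

def fft_colored_noise(n, alpha, rng=None):
    """Generate n samples of 1/f^alpha noise by filtering white
    Gaussian noise in the frequency domain."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    # Amplitude shaping |H(f)| = f^(-alpha/2) gives a power spectral
    # density proportional to 1/f^alpha.
    h = np.empty_like(f)
    h[0] = 0.0                      # suppress the (divergent) DC bin
    h[1:] = f[1:] ** (-alpha / 2.0)
    return np.fft.irfft(spec * h, n)
```

Note that the whole length-n sample is held in memory and transformed
at once, which is where the scaling problem of this method comes from.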

The biggest disadvantage of this method is that it operates on the
whole sample length multiple times. I.e. it becomes slow very
quickly, especially when the whole sample length is larger
than main memory. But it delivers exact results with exactly
the spectrum / time-correlation you want.

Filter based approaches use an approximation with linear
filters to get the same effect. They can achieve O(n*log(n)) as
well, with O(log(n)) memory consumption if done right [2]
(based on [3], which explains the filter in simpler terms).
Be aware that it's only an approximation and that the spectrum
will have some wiggles, though this shouldn't be a problem
in practice. Speed is, in my experience, slightly better
than with FFT based approaches, as less memory is touched. But
it uses quite a bit more randomness, and thus the random
number generator becomes the bottleneck. I have to admit,
though, that the implementation I had for comparison was far
from well coded, so take this with a grain of salt.
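As an illustration of the filter idea (not the actual multirate
scheme of [3] or the O(log n) filter of [2]; the function name and
parameters are mine), one can sum equal-power first-order
"relaxation" processes with geometrically spaced corner frequencies,
which approximates a 1/f spectrum between the outermost corners:

```python
import numpy as np

def filter_pink_noise(n, n_stages=6, rng=None):
    """Approximate 1/f noise as a sum of AR(1) processes with
    geometrically spaced poles and equal per-stage variance."""
    rng = np.random.default_rng() if rng is None else rng
    lam = 10.0 ** -np.arange(1, n_stages + 1)  # corner rates: 0.1, 0.01, ...
    p = np.exp(-2.0 * np.pi * lam)             # AR(1) pole per stage
    g = np.sqrt(1.0 - p * p)                   # unit variance per stage
    state = np.zeros(n_stages)
    out = np.empty(n)
    for i in range(n):
        # Each output sample consumes n_stages Gaussian variates:
        # this is the extra randomness consumption mentioned above.
        state = p * state + g * rng.standard_normal(n_stages)
        out[i] = state.sum()
    return out
```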

There is also the approach of fractional integration, as
proposed by Barnes and Allan in [4]. Unfortunately, the
naive implementation leads to O(n^2) runtime and O(n)
memory consumption. There are faster algorithms for
fractional integration (e.g. [5]), and I have a few ideas
of my own, but I haven't had time to try any of them yet.
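A sketch of the naive version (assuming NumPy; the function name is
mine): fractional integration of order a/2 amounts to convolving
white noise with the binomial weights of (1 - z^-1)^(-a/2), which
follow a simple recurrence:

```python
import numpy as np

def fractional_integration_noise(n, alpha, rng=None):
    """1/f^alpha noise by fractionally integrating white noise;
    the direct convolution below is the naive O(n^2) version."""
    rng = np.random.default_rng() if rng is None else rng
    d = alpha / 2.0
    # Impulse response of (1 - z^-1)^(-d):
    # h[0] = 1, h[k] = h[k-1] * (k - 1 + d) / k
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + d) / k
    w = rng.standard_normal(n)
    # Causal convolution: y[i] = sum_k h[k] * w[i-k]  (O(n^2) total)
    return np.array([h[: i + 1] @ w[i::-1] for i in range(n)])
```

For alpha = 2 (d = 1) all weights are 1 and this degenerates into a
plain running sum, i.e. a random walk, as expected.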

And while we are at it, another warning: rand(3), and by extension
all random number generators based on it, has a rather small
number of states. IIRC the one implemented in glibc has only 31
bits of state. While two billion states sounds like a lot, it
isn't much for simulating noise in oscillators. Even
algorithms that are efficient in terms of randomness consume
tens of bytes of random numbers per sample produced. If a
normal distribution is approximated by averaging samples, that
quickly goes into the hundreds of bytes. And suddenly, the
random number generator wraps around after just a few million
samples, and even fewer in a filter based approach.

Thus, I would highly recommend using a PRNG with a large state,
such as xoroshiro1024* [6]. That the algorithm does not pass
all randomness tests with a perfect score isn't much of an
issue in this application, as long as the random numbers are
sufficiently uncorrelated (which they are).

I also recommend using an efficient Gaussian random number
generator instead of averaging. Not only does averaging
impose a maximum and minimum value determined by the number
of samples averaged over, it is also very slow because it
uses a lot of random numbers. A better approach is the
Ziggurat algorithm [7], which uses only about 72-80 bits
of entropy per generated sample.
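To illustrate the averaging problem (assuming NumPy, whose
Generator.standard_normal is documented to use a ziggurat-type
method internally):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ziggurat-style generation: one draw per sample, unbounded tails.
z = rng.standard_normal(100_000)

# CLT averaging trick: sum of 12 uniforms minus 6. It can never
# leave [-6, 6] (clipped tails) and burns 12 uniform variates
# per Gaussian sample.
u = rng.random((100_000, 12)).sum(axis=1) - 6.0

print(np.abs(u).max() <= 6.0)   # always True: the tails are hard-clipped
```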

And before you ask: yes, sigma-theta/bruiteur uses xoroshiro1024*
and the Ziggurat algorithm ;-)

			Attila Kinali

[1] Fractional Brownian Motions, Fractional Noises and Applications,
by Mandelbrot and Van Ness, 1968
http://users.math.yale.edu/~bbm3/web_pdfs/052fractionalBrownianMotions.pdf

[2] Efficient Generation of 1/f^a Noise Sequences for Pulsed Radar
Simulation, by Brooker and Inggs, 2010

[3] Efficient modeling of 1/f^a Noise Using Multirate Process,
by Park, Muhammad, and Roy, 2006

[4] A Statistical Model of Flicker Noise, by Barnes and Allan, 1966

[5] A high-speed algorithm for computation of fractional
differentiation and fractional integration, 
by Fukunaga and Shimizu, 2013
http://dx.doi.org/10.1098/rsta.2012.0152

[6] https://prng.di.unimi.it/

[7] The Ziggurat Method for Generating Random Variables,
by Marsaglia and Tsang, 2000
http://dx.doi.org/10.18637/jss.v005.i08
-- 
The driving force behind research is the question: "Why?"
There are things we don't understand and things we always 
wonder about. And that's why we do research.
		-- Kobayashi Makoto



