[time-nuts] Re: Simple simulation model for an OCXO?

Magnus Danielson magnus at rubidium.se
Sun May 15 17:37:36 UTC 2022


Hi Carsten,

On 2022-05-14 11:38, Carsten Andrich wrote:
> Hi Magnus,
>
> On 14.05.22 08:59, Magnus Danielson via time-nuts wrote:
>> Do note that the model of no correlation is not a correct model of 
>> reality. There are several effects which make "white noise" slightly 
>> correlated, even if for most practical uses this correlation is very 
>> small. Not that it significantly changes your conclusions, but you 
>> should remember that the model only goes so far. To avoid aliasing, 
>> you need an anti-aliasing filter, which causes correlation between 
>> samples. Also, the noise has inherent bandwidth limitations and, 
>> further, thermal noise is convergent because of the power distribution 
>> of thermal noise as established by Max Planck, which really reflects 
>> the existence of photons. The physics cannot be fully ignored as one 
>> goes into the math; rather, one should be aware that the simplified 
>> models may fool you in the mathematical exercise.
>
> Thank you for that insight. Duly noted. I'll opt to ignore the 
> residual correlation. As was pointed out here before, the 5 component 
> power law noise model is an oversimplification of oscillators, so the 
> remaining error due to residual correlation is hopefully negligible 
> compared to the general model error.

Indeed. My comment is more to point out details which become relevant 
for those attempting to do math exercises, and to prevent unnecessary 
insanity.
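
As a minimal illustration of the anti-aliasing point (a numpy sketch 
with a crude moving-average stand-in for the filter): band-limiting 
white noise necessarily correlates neighbouring samples.

import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000_000)  # idealized white noise, uncorrelated

# Crude stand-in for an anti-aliasing filter: a length-4 moving average.
# Any band limitation spreads each sample over its neighbours.
x = np.convolve(w, np.ones(4) / 4, mode="valid")

def lag1(v):
    # correlation between a sequence and itself shifted by one sample
    return np.corrcoef(v[:-1], v[1:])[0, 1]

print(lag1(w))  # ~0: unfiltered samples are uncorrelated
print(lag1(x))  # ~0.75: the filtered samples are clearly correlated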

Yes, I keep reminding people that the 5-component power-law noise model 
is just that, only a model, and it does not really respect the "Leeson 
effect" (actually older) of resonator folding of noise, which creates a 
systematic connection between noise of different slopes.
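
For context, that model can itself be sketched in a few lines of numpy 
by shaping white Gaussian noise in the frequency domain so its spectrum 
follows f^alpha for the five canonical slopes (a rough illustration 
only, not Greenhall's algorithm, and without the resonator folding 
mentioned above):

import numpy as np

def powerlaw_noise(n, alpha, rng=None):
    # Rough sketch of one power-law component: shape white Gaussian
    # noise so that S(f) ~ f**alpha. Normalization and end effects are
    # ignored; this is not a replacement for Greenhall's method.
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.rfftfreq(n, d=1.0)
    X = rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size)
    X[1:] *= f[1:] ** (alpha / 2.0)  # amplitude ~ f^(alpha/2)
    X[0] = 0.0                       # drop the DC term
    return np.fft.irfft(X, n)

# The five components, alpha = +2 ... -2 (white PM down to random-walk FM).
components = {a: powerlaw_noise(2**16, a) for a in (2, 1, 0, -1, -2)}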

>
>
>> Here you skipped a few steps compared to your other derivation. You 
>> should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).
> Given the variance of X[k] and E{X[k]} = 0 \forall k, it follows that
>
> X[k] = Var(Re{X[k]})^0.5 * N(0, 1) + 1j * Var(Im{X[k]})^0.5 * N(0, 1)
>
> because the scaling factor that gives a standard Gaussian N(0, 1) 
> distribution a desired variance is the square root of that variance.
Reasonable. I just wanted it to be complete in the thread.
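
In numpy terms, that step is just scaling independent standard normal 
draws by the respective standard deviations (a minimal sketch; var_re 
and var_im are placeholders for whatever Var(Re{X[k]}) and 
Var(Im{X[k]}) the derivation yields):

import numpy as np

rng = np.random.default_rng()
N = 1024

# Placeholders for Var(Re{X[k]}) and Var(Im{X[k]}) from the derivation.
var_re = np.full(N, 0.5)
var_im = np.full(N, 0.5)

# With E{X[k]} = 0, X[k] follows by scaling independent standard normals
# by the standard deviations, i.e. the square roots of the variances.
X = (np.sqrt(var_re) * rng.standard_normal(N)
     + 1j * np.sqrt(var_im) * rng.standard_normal(N))
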
>
>
>> This is a result of using real-only values in the complex Fourier 
>> transform. It creates mirror images. Greenhall uses one method to 
>> circumvent the issue.
> Can't quite follow on that one. What do you mean by "mirror images"? 
> Do you mean that my formula for X[k] is missing the complex conjugates 
> for k = N/2+1 ... N-1? Used with a regular, complex IFFT the 
> previously posted formula for X[k] would obviously generate complex 
> output, which is wrong. I missed that one, because my implementation 
> uses a complex-to-real IFFT, which has the complex conjugate implied. 
> However, for the regular, complex (I)FFT given by my derivation, the 
> correct formula for X[k] should be the following:
>
>        { N^0.5     * \sigma *  N(0, 1)                , k = 0, N/2
> X[k] = { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), k = 1 ... N/2 - 1
>        { conj(X[N-k])                                 , k = N/2 + 1 ... N - 1
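
In numpy terms, that corrected formula can be sketched as below (taking 
\sigma = 1 and N even; variable names only illustrative). The complex 
IFFT of this conjugate-symmetric spectrum comes out real up to 
rounding, and matches a complex-to-real IFFT of the first N/2 + 1 bins:

import numpy as np

N = 2**10
sigma = 1.0
rng = np.random.default_rng(2)

X = np.empty(N, dtype=complex)
# k = 0 and k = N/2: purely real bins
X[0]      = np.sqrt(N) * sigma * rng.standard_normal()
X[N // 2] = np.sqrt(N) * sigma * rng.standard_normal()
# k = 1 ... N/2 - 1: independent real and imaginary parts
k = np.arange(1, N // 2)
X[k] = np.sqrt(N / 2) * sigma * (rng.standard_normal(k.size)
                                 + 1j * rng.standard_normal(k.size))
# k = N/2 + 1 ... N - 1: complex conjugates of the mirrored bins
X[N - k] = np.conj(X[k])

x = np.fft.ifft(X)
print(np.max(np.abs(x.imag)))                                # ~1e-16
print(np.allclose(x.real, np.fft.irfft(X[:N // 2 + 1], N)))  # True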

If you process a real-valued-only sample list with the complex FFT, as 
you did, you will have mirror Fourier frequencies of opposite sign. This 
comes about because e^(i*2*pi*f*t) + e^(-i*2*pi*f*t) is purely real, so 
a real signal needs each frequency paired with its conjugate at the 
opposite sign. Rather than using the optimization that removes the 
unused half of the inputs (imaginary parts) and the unused half of the 
outputs (negative frequencies) with an N/2-size transform, you can use 
the full N-size transform more straightforwardly and accept the 
inefficiency for the sake of clarity. This is why Greenhall only uses 
the upper-half frequencies.
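
A quick numpy check of the mirror-frequency point: the complex FFT of a 
purely real sequence satisfies X[N-k] = conj(X[k]), so the bins 
k = N/2 + 1 ... N - 1 add no information of their own:

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(8)        # any purely real sequence
X = np.fft.fft(x)
k = np.arange(1, 8)
print(np.allclose(X[8 - k], np.conj(X[k])))  # True: mirrored conjugates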

Cheers,
Magnus



