[time-nuts] Continuous timestamping reciprocal counter question

Tijd Dingen tijddingen at yahoo.com
Fri May 13 16:21:39 UTC 2011


Now it is my turn for an "it depends". ;)


If by that you mean that it makes the bookkeeping easier for me, the human, then yes. It certainly is easier for me conceptually when I know it is every Nth, and not just almost-but-not-quite-every-Nth.

But as far as the math is concerned, I do not see a bit of difference. Easy math as in computationally cheap? For an ordinary least squares approach, as far as I can tell the two varieties have the same computational cost. Either that, or I am missing something.
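To make that concrete, here is a minimal sketch (mine, not from Bob's mail) of the ordinary least squares slope done with running sums. The per-sample work is the same handful of multiply-accumulates whether the edge numbers are exactly every Nth or almost-but-not-quite every Nth; the spacing never enters the bookkeeping. In a real FPGA or MCU you would of course have to watch the accumulator word widths.

def ols_slope(x, y):
    # x = edge (cycle) numbers, y = timestamps
    n = sx = sy = sxx = sxy = 0.0
    for xi, yi in zip(x, y):
        n += 1
        sx += xi
        sy += yi
        sxx += xi * xi
        sxy += xi * yi
    # slope = cov(x, y) / var(x); note: no equal-spacing assumption anywhere
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)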

"In an FPGA keep in mind that your PLL may be a significant source of noise."

Heh, that can indeed be a significant source of noise. With the right approach, though, it is not as big a problem as some around here may think. As in, I suspect it is one of the reasons why those "fpga based DIY counter" projects die so horribly around here. The advantage of an fpga is that you can process a fair amount of data, so you can do some averaging to compensate for some of the shortcomings. Certainly for a repeated process (as with frequency measurement), but also for single-shot applications (with only 1 START/STOP).

Case in point: for example, the paper mentioned in this post:

http://www.febo.com/pipermail/time-nuts/2011-March/055240.html

I am doing something similar, and am getting similar results. Of course there is still plenty of work to be done, but that is par for the course in hobby country...

regards,
Fred




----- Original Message -----
From: Bob Camp <lists at rtty.us>
To: 'Tijd Dingen' <tijddingen at yahoo.com>
Cc: 
Sent: Friday, May 13, 2011 6:02 PM
Subject: RE: [time-nuts] Continuous timestamping reciprocal counter question

Hi

The exactly every Nth cycle thing makes the math easy. Easy math means I can
do lots of samples. Lots of samples means better regression. Just how much
better depends on the type of noise.  

As long as you get the math right for your sample spacing, the result will
be ok.

In an FPGA keep in mind that your PLL may be a significant source of noise.

Enjoy!

Bob



-----Original Message-----
From: Tijd Dingen [mailto:tijddingen at yahoo.com] 
Sent: Friday, May 13, 2011 11:40 AM
To: Bob Camp; time-nuts-bounces at febo.com
Subject: Re: [time-nuts] Continuous timestamping reciprocal counter question

Hi Bob,

Well, for the moment it is as simple as calculating the frequency of a
reasonably stable signal. Meaning it is not modulated, but it could very
well be just a cheap XO that is being measured. That sort of "reasonably
stable".

Not trying to recover modulation. If I was, given that I am using an fpga
for this, I'd take a different approach. Get some use out of those SERDES'
after all. :)

So as said, for now it's just calculating the frequency. I'm just trying to
make sure that I am not overlooking something stupid...

regards,
Fred



----- Original Message -----
From: Bob Camp <lists at rtty.us>
To: 'Tijd Dingen' <tijddingen at yahoo.com>
Cc: 
Sent: Friday, May 13, 2011 5:11 PM
Subject: RE: [time-nuts] Continuous timestamping reciprocal counter question

Hi

As always with any real question, the answer is "that depends".

If you are going into a measurement process that wants well-defined bins
for its tau, then it could be a problem.

If all you want is frequency, then the start and end sample *may* have all
the information you need in them. 

If you are trying to recover modulation, then both approaches have issues.
They just have different ones.  

It all depends on what you are trying to do. 

Bob

-----Original Message-----
From: time-nuts-bounces at febo.com [mailto:time-nuts-bounces at febo.com] On
Behalf Of Tijd Dingen
Sent: Friday, May 13, 2011 10:56 AM
To: time-nuts at febo.com
Subject: [time-nuts] Continuous timestamping reciprocal counter question

The counters of the continuous timestamping variety I've read about all
mention taking every Nth edge of the input signal. For example:

http://www.spectracomcorp.com/Support/HowCanWeHelpYou/Library/tabid/59/Default.aspx?EntryId=450&Command=Core_Download

In "Picture 5" on page 5 you see a bunch of data points that roughly
describe a straight line. Cycle number (of the input signal) on the x-axis,
timestamps on the y-axis. Now the question is this:

Will it also work when you get the timestamp of every almost-but-not-quite
Nth edge? I'd say yes, but who knows...

To clarify ... when I say "timestamp of edge N", I mean "the time stamp of
the positive going edge of the Nth cycle of the input signal". But the
former is a bit shorter. ;)

Assume an input signal of 30 MHz. Say you decide to get every 100th edge of
this signal; then you would end up with 300k timestamps every second. These
timestamps will define a straight line with positive slope. Find the slope,
and you have the frequency.
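(To illustrate that "find the slope" step: a minimal sketch with made-up
names and a made-up 100 ps of timestamp jitter, using numpy.polyfit purely
for convenience; none of this is from the Spectracom paper. Edge number on
the x-axis, timestamp on the y-axis, so the fitted slope is seconds per
input cycle and the frequency is its reciprocal.)

import numpy as np

f_in = 30e6                                   # nominal input frequency, Hz
edges = np.arange(300_000) * 100              # every 100th edge, ~1 second worth
t = edges / f_in                              # ideal timestamps in seconds
t = t + np.random.normal(0, 100e-12, t.size)  # pretend 100 ps of timestamping noise

slope, offset = np.polyfit(edges, t, 1)       # slope = seconds per input cycle
print(1.0 / slope)                            # estimated frequency, Hz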

And now for the "what if".... What if the implementation does not always
allow for getting the exact Nth edge? What the implementation allows,
however, is to /aim/ for getting numero 100, 200, 300, 400, while being sure
that you are off by a maximum of 1 in either direction. So aim for 100, and
you could get 99, 100 or 101. And you can also guarantee that you KNOW about
it. So you still know precisely which cycle every time stamp corresponds to.

As far as I can see that places no limit on the accuracy of the calculated
frequency, but I could be wrong... Does anyone know of any limitations in
this regard?

Taking this one step further ... instead of accidentally being one off every
now and then, you could also introduce some randomness on purpose. So you
have these 2 different approaches:

Aim for integer multiple:
- try to get edge 0, 100, 200, 300, 400, 500, 600, ...
- actually get edge 0, 100, 200, 299, 399, 500, 601, ...

Aim for some random distribution on purpose:
- try to get edge 0, 103, 181, 287, 381, 497, 614, ...
- actually get edge 0, 104, 181, 287, 382, 497, 613, ...

To calculate the frequency from these time stamps you have to do some slope
fitting. If you use a least squares matrix approach for that, I could see
how the more random distribution could help prevent singularities.
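(Same sketch as above, but with the off-by-one or deliberately randomized
bookkeeping: fit against the edge numbers you actually got, not the ones
you aimed for. Again only an illustration with assumed noise numbers.)

import numpy as np

f_in = 30e6
aimed = np.arange(300_000) * 100
actual = aimed + np.random.randint(-1, 2, aimed.size)   # off by at most 1, but known
t = actual / f_in + np.random.normal(0, 100e-12, actual.size)

slope, offset = np.polyfit(actual, t, 1)      # fit against the edges you actually got
print(1.0 / slope)                            # still recovers ~30 MHz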

The only reason I can see now to really try harder to always get the exact
Nth edge is for numerical solving. As in, should you choose a solver that
only operates optimally for equidistant samples.

Any thoughts?

regards,
Fred





