[time-nuts] Continuous timestamping reciprocal counter question

Tijd Dingen tijddingen at yahoo.com
Sun May 15 20:55:55 UTC 2011


Hi Magnus,

Magnus Danielson wrote:
>>> There are many things you can get away with; just how much trouble you
>>> want to go through to verify it versus doing the proper thing is another issue.

>> Define "proper thing". ;-) From what I understand taking the exact Nth edge,
>> and then do linear regression is equivalent to taking roughly every Nth
>> edge and then do linear regression.

> Well, you always have the corner-case where numerical precision and
> beating between near-identical frequencies come into play, so what will
> help and what will reduce your precision becomes a little fuzzy to state
> in general terms. That's why I'd be careful to say that "they are
> roughly the same".

Okay, then I understand what you mean. Explaining the fun numerical
intricacies would be a whole other thread. And quite possibly a whole
other forum. ;-)

"They are roughly the same" is something I can work with.

> If you run exact Nth edge you could do some algorithmic steps that
> avoid some rounding errors. Still, N can be allowed to be fairly large
> (say 1 million). Another algorithmic benefit is that you could put your
> pre-processing up front in the FPGA.

Understood. Amusingly enough, by the exact same token some algorithms
can run into singularity trouble precisely because the data is too
regular.

But rest assured, I'll try several ways of doing that linear regression.
Right now this was just a sanity check that I am not overlooking something
stupid.
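
One of those several ways, to give an idea: fit the deviation from the
nominal period instead of the raw time-stamps, so the regression works on
small, well-conditioned numbers. A sketch of what I have in mind (function
and variable names are just illustrative):

import numpy as np

def freq_estimate_conditioned(idx, t, f_nominal):
    # Regress the *residual* against the nominal period rather than the
    # raw time-stamps: t[i] ~= idx[i]/f_nominal + small error, and that
    # small error is what carries the information. This avoids the worst
    # cancellation when t spans many seconds at sub-ns resolution.
    idx = np.asarray(idx, dtype=np.float64)
    resid = np.asarray(t) - idx / f_nominal
    x = idx - idx.mean()               # centering decouples slope/offset
    slope_err = np.dot(x, resid - resid.mean()) / np.dot(x, x)
    return 1.0 / (1.0 / f_nominal + slope_err)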


>> Equivalent in the sense that the frequency estimates of the two will be
>> the same, to within the usual numerical uncertainties. Or to put that another way:
>> The first method of doing things is not inherently better or worse than the second
>> method. After all, that is the whole thing I am trying to be sure of right now.

> They will not be greatly different as far as I can see. Do recall that
> linear regression may need a drift component added to it. I regularly
> see bending curves.

Check. I also plan to include a scatterplot with that line fit so you're
able to get a feeling for the data the frequency estimate is based on.
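
For the drift component I figure a quadratic fit should do as a first
attempt; something along these lines (again just a sketch):

import numpy as np

def fit_with_drift(idx, t):
    # Quadratic fit: t = a*idx**2 + b*idx + c.
    # b is the period at idx = 0, and 2*a is the change of period per
    # edge, i.e. the drift term that makes the curves bend.
    a, b, c = np.polyfit(idx, t, 2)
    return 1.0 / b, 2.0 * a    # (frequency estimate, period drift/edge)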


>> Of course I can make sure that I take exactly every Nth edge. It is just that there
>> are some considerable implementation advantages if that constraint does not have
>> to be so strict.

> You can never be quite sure you see every Nth edge. You can see every
> Nth edge that your digital side detected. You will need to ensure that
> trigger level and signal quality are good to avoid cycle slipping on the
> trigger side. It requires care on the analogue side and adjustment of
> trigger levels to the signal at hand. I've seen lost pulses and double
> or additional triggers too many times.

In which case I think I now know what you meant by cycle-slip in the other
post. The analog front-end is for now largely on the TODO list. The digital
processing part is a bigger bottleneck than the analog front-end, so I am
tackling that part first. If I cannot get the counter core to work, there is
no point in having a fancy analog front-end...


>> One advantage being that if this constraint can be fairly loose, then using the
>> ISERDES2 in the Spartan-6 as part of the coarse counter is fairly simple. I did
>> a couple of tests with that, and all looks good. The main advantage there being
>> that using the serdes translates into a higher input frequency without the need
>> for a prescaler, which translates into better precision.

> You would indeed be able to avoid a hardware pre-scaler, but you would
> need a darn good analogue front-end to make sure the input side has the
> slew-rate needed. Lacking slew-rate can be problematic and can cause you
> to lose cycles or get multiple triggers.

Indeed, which brings all sorts of fun challenges of its own. Which is
why for now I do not use the serdes and keep the input frequency
relatively low.


> Also, you will get a high data-rate out of the SERDES which a FW 
> pre-scaler needs to sort out, but in parallel form rather than serial form.

Yeah, but that is totally easy. I've already written a module that does that.
You only need two stages, each one logic level deep, built from a bunch of
LUT6s.

However to keep things simpler on the coarse counter front, I currently
don't use that.
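
For reference, the behaviour of that module boils down to the following
software model (this is not the HDL; word width and bit order are just
example choices):

def rising_edges_in_word(word, prev_msb, width=8):
    # Count rising edges within one deserialized sample word.
    # word: integer whose bits are consecutive samples, LSB oldest
    #       (example convention; the real bit order depends on the SERDES).
    # prev_msb: newest sample of the previous word, to catch an edge
    #           that straddles the word boundary.
    count = 0
    prev = prev_msb
    for i in range(width):
        bit = (word >> i) & 1
        if prev == 0 and bit == 1:
            count += 1
        prev = bit
    return count, prev    # prev becomes prev_msb for the next word

# Feeding it a stream of words keeps a running event count:
total, carry = 0, 0
for w in (0b10101010, 0b00001111, 0b11110000):
    n, carry = rising_edges_in_word(w, carry)
    total += n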


> The SERDES provides a wonderful digital front-end for high-speed
> signals, but the fixed sampling rate provides little interpolation
> power; a 10 Gb/s SERDES can only sample every 100 ps for you.

Yep, which is why IMO it is better not to use the serdes as an interpolator.
You can use it for your coarse event counter. The main drawback is that your
entire event counter is then by definition sampled, as opposed to a free-running
counter that counts the events themselves and is then sampled.

Another way of saying that: with the serdes as a sampler, the signal from
the DUT is nothing but data. There are no flip-flops that toggle on the clock
of the DUT.

The approach I take now is a free-running bubble counter whose value is then
resynchronized. That is pretty much limited by the flip-flop toggle rate and
the local clock routing.

Anyway, with the current design limit of 400 MHz these last two things are
non-issues.
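
To make the distinction concrete, here is a toy model of the two
coarse-counter styles (pure software illustration with arbitrary units;
nothing to do with the actual HDL):

import numpy as np

rng = np.random.default_rng(1)
edges = np.cumsum(rng.uniform(0.9, 1.1, size=200))  # DUT edge times
samples = np.arange(0.0, edges[-1], 2.5)            # local sample clock

# Style A (serdes): the DUT signal is only ever *data*; the coarse count
# is whatever number of edges the sampled data has shown so far.
count_a = np.searchsorted(edges, samples)

# Style B: a free-running counter clocked *by* the DUT increments on
# every edge, and its value is then sampled/resynchronized.
count_b = np.array([int(np.sum(edges < s)) for s in samples])

assert np.array_equal(count_a, count_b)  # same coarse counts either way

Both give the same numbers; the difference is purely in which side of the
design toggles on the DUT clock.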

>> Hence my current (over)focus on making absolutely sure that all the results are
>> also valid if one takes almost the Nth edge, but not quite the right one every
>> time... However, you still know which edge is which. You just don't know it
>> early enough in the pipeline to use it as the basis for a triggering decision.

> You will have to work with multiple possible trigger-locations, but it
> is possible to post-process them out.

Indeed, it is not impossible; it's a trade-off regarding pipeline complexity.

>>> I have not looked at a detailed performance comparison between these
>>> algorithms lately. However, they should not be used naively together
>>> with AVAR and friends, since they attempt to do the same thing, so the
>>> resulting filtering will become wrong and biased results will be produced.
>
>> Well, for the AVAR calculation I only use the raw time-stamps. So nothing
>> preprocessed. Then I should not have to worry about this sort of bias, right?

> Exactly, if you use raw time-stamps and have decent quality on tau0 
> measures, you have avoided a lot of problems.

This is good to know. Thank you for your detailed posts! :)
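
As a footnote: the AVAR bit on those raw time-stamps is just the standard
overlapping form, along the lines of (tau0 assumed uniform):

import numpy as np

def overlapping_adev(t, tau0, m):
    # Overlapping Allan deviation straight from raw time-stamps.
    # t:    time-stamps of consecutive (decimated) edges, in seconds
    # tau0: nominal spacing between consecutive time-stamps
    # m:    averaging factor, so tau = m * tau0
    # The nominal ramp i*tau0 cancels in the second difference, so the
    # raw time-stamps can go in directly, no preprocessing needed.
    x = np.asarray(t, dtype=np.float64)
    d2 = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]
    return np.sqrt(np.mean(d2**2) / (2.0 * (m * tau0)**2))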

regards,
Fred

