[time-nuts] ADEV

jimlux jimlux at earthlink.net
Sat Nov 13 16:27:37 UTC 2010


Bob Camp wrote:
> Hi
> 
> I don't see anybody arguing that systems work better with a high
> ADEV than with a low ADEV. Most of the papers are heading in the
> direction of "it doesn't catch all of the problems I worry about".
> Based on what systems need and what ADEV measures, I don't find that
> a particularly surprising conclusion. The next step would be for
> people to come up with system related measures that do catch what
> they are after. Some have tried that, many have not. The next link in
> the chain would be getting (paying) vendors to run the tests on
> product. As far as I can tell - that's not happening at all. ADEV had
> the same issues early on. Until it became a mandatory specification,
> not a lot of people paid much attention to it.
> 
> Bob
> 

Yes... and if you read the discussion following that first batch of 
papers in the report Magnus linked, you can see the same sort of thing: 
everyone had some measure they had developed that was important to their 
particular system, but none of them were the same, or even directly 
interconvertible.

The problem they faced (and that we at JPL face now) is that we're a 
very, very low-volume customer (a few units every few years).  We *do* 
actually pay people to make the measurements, but sometimes I think the 
measurements we ask for aren't necessarily appropriate.  It's expensive 
and time-consuming to do the analysis for a new measurement, and 
particularly to validate that it's actually measuring something useful 
and relevant, so there's a very strong tendency to "do what we did 
before"...

Sometimes the previous measurement relied on an artifact of the system 
design: it let you test something easy to measure as a stand-in for 
something that's difficult to measure (e.g., the IP3 vs. P1dB 
relationship presumes a certain shape to the nonlinearity).  But if 
you've changed the underlying design, that artifact may not exist.
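
As a concrete illustration (mine, not part of the original point): the 
textbook ~9.6 dB spacing between IIP3 and input P1dB falls straight out 
of assuming a memoryless cubic nonlinearity y = a1*x + a3*x^3, and it 
evaporates if the device doesn't actually behave that way.  A quick 
Python sanity check, with both amplitudes derived from that assumed 
model:

import math

# Under y = a1*x + a3*x^3 (a3 < 0):
#   A_1dB^2 = (4/3)*(a1/|a3|)*(1 - 10**(-1/20))  # input 1 dB compression
#   A_IP3^2 = (4/3)*(a1/|a3|)                    # input 3rd-order intercept
# The a1/|a3| term cancels, so the spacing is a pure number:
ratio_db = 10 * math.log10(1.0 / (1.0 - 10 ** (-1.0 / 20.0)))
print(f"IIP3 - IP1dB = {ratio_db:.2f} dB")  # ~9.64 dB, cubic model only

A device with a different compression shape (soft limiting, memory 
effects) can sit well away from that 9.6 dB, which is exactly the 
"artifact" risk above.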

This shows up a lot with test procedures/specifications that were 
developed for all-analog designs and then adopted unchanged for systems 
with digital conversions (look at all the various ways of specifying 
the performance of high-speed ADCs).  For example, in my world, a 
common specification is for performance at "best lock frequency" (BLF: 
the frequency at which you can acquire a carrier at the lowest SNR). 
In an analog system, this is often where the input corresponds to the 
rest frequency of the VCO, with everything sitting roughly in the 
middle of its range.  But a lot of modern systems have no BLF: their 
performance is essentially flat over some band, and any small 
variations are not indicative of, e.g., minimum loop stress.  The time 
spent determining a BLF is then wasted, and any performance assumptions 
tied to it aren't necessarily valid.
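
To make that concrete (a hypothetical sketch of the procedure, not any 
actual JPL test): a BLF sweep amounts to stepping the carrier offset, 
backing the test-signal SNR down until acquisition fails, and taking 
the offset with the lowest surviving threshold.  Here acquires() is a 
made-up hook standing in for whatever drives the receiver under test:

def find_blf(offsets_hz, acquires,
             snr_start_db=20.0, snr_step_db=0.5, snr_floor_db=-10.0):
    # acquires(offset_hz, snr_db) -> bool: does the receiver lock?
    thresholds = {}
    for f in offsets_hz:
        snr = snr_start_db
        while snr > snr_floor_db and acquires(f, snr):
            snr -= snr_step_db
        thresholds[f] = snr + snr_step_db  # weakest SNR that still locked
    return min(thresholds, key=thresholds.get), thresholds

If the thresholds come back flat to within measurement repeatability, 
the "BLF" the sweep reports is just noise, which is exactly the 
flat-band case above.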

On the other hand, we don't necessarily engage in a "science project" 
for each project to determine performance requirements (and 
corresponding test methods) unique to that system.  There is a need for 
more generic performance numbers with a moderately universal 
understanding.  If I tell you the P1dB of an amplifier, and I tell you 
that my signals are 10 dB below that, then in one short statement I've 
actually told you a fair amount about how my design works and the range 
over which it's likely to keep working.  That's because you and I have 
a common understanding of what a P1dB spec "means"...

Over the past decades, I think a similar understanding has arisen 
around phase noise specs and, to a lesser extent, Allan deviation: 
given a phase noise plot, a skilled practitioner can tell whether it's 
good or bad in the context of a particular system.
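
Part of why that works is that the two are formally connected: ADEV can 
be computed from a one-sided phase noise plot via the standard relation 
sigma_y^2(tau) = 2 * integral of S_y(f)*sin(pi*tau*f)^4/(pi*tau*f)^2 df, 
with S_phi(f) = 2*10^(L(f)/10) and S_y(f) = (f/nu0)^2 * S_phi(f).  A 
rough numerical sketch in Python (the flat -130 dBc/Hz example is made 
up, just to exercise it):

import numpy as np

def adev_from_phase_noise(f, L_dbc, nu0, tau):
    # L(f) in dBc/Hz -> S_phi in rad^2/Hz -> S_y, then integrate the
    # sin^4 transfer function with a plain trapezoid rule.
    s_phi = 2.0 * 10.0 ** (np.asarray(L_dbc) / 10.0)
    s_y = (np.asarray(f) / nu0) ** 2 * s_phi
    x = np.pi * tau * np.asarray(f)
    integrand = 2.0 * s_y * np.sin(x) ** 4 / x ** 2
    return np.sqrt(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f)))

# Made-up example: flat -130 dBc/Hz (white PM) out to 100 kHz on a
# 10 MHz carrier.  The grid has to resolve the sin() oscillation
# (period 1/tau in f), hence the 0.1 Hz steps.
f = np.arange(1.0, 100e3, 0.1)
print(adev_from_phase_noise(f, np.full_like(f, -130.0), nu0=10e6, tau=1.0))

(This is the textbook conversion, not a substitute for measuring ADEV 
directly; it also quietly assumes you trust the plot down to 1 Hz.)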



