[time-nuts] Re: 20210423: Introduction and Request for Spirent GSS4200 User Manual / Help

Andrew Kalman aekalman at gmail.com
Sun Apr 25 19:02:11 UTC 2021


Just a comment on real-time programming.

Obviously the accuracy of the real-time performance is a function of the
hardware/software system's clock speed -- if you have a microcontroller
(MCU) with attosecond-class instruction cycles, most things are going to
look real-time to most people (except time nuts, of course :-) ) no matter
how they are coded ...

In the real world, where clocks are usually in the MHz range, you can
achieve the real-time performance of your hardware (say, of a 32-bit
output-compare timer peripheral or a 12-bit DAC) as long as the overlaid
software does not interfere with the peripheral's proper functioning. With
a "set-and-forget" peripheral that is easy. With one that needs to be
"fed" by the overlaid software (say, a DAC that is fed by DMA with data
coming from outside), it is a bit more difficult. And if some of that
real-time performance needs to happen in a task, then you really need a
couple of software architectural features to get you there.
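
For the "set-and-forget" case, here is a minimal sketch of what I mean
(the register names and addresses are invented for illustration, not
taken from any particular part):

    #include <stdint.h>

    /* Hypothetical register map -- illustration only, not any vendor's header. */
    #define TIMER_PERIOD   (*(volatile uint32_t *)0x40001000u)
    #define TIMER_COMPARE  (*(volatile uint32_t *)0x40001004u)
    #define TIMER_CTRL     (*(volatile uint32_t *)0x40001008u)
    #define TIMER_CTRL_EN      (1u << 0)   /* start the counter          */
    #define TIMER_CTRL_TOGGLE  (1u << 4)   /* toggle output pin on match */

    /* Configure once; afterwards the pin toggles on every compare match,
     * paced purely by the timer clock. No software runs per edge, so no
     * amount of task or ISR activity adds jitter to this output.        */
    void square_wave_init(uint32_t ticks_per_half_cycle)
    {
        TIMER_PERIOD  = ticks_per_half_cycle - 1u;
        TIMER_COMPARE = 0u;
        TIMER_CTRL    = TIMER_CTRL_TOGGLE | TIMER_CTRL_EN;
    }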

As an example, an MCU-based system that uses queues to manage priority is
not going to achieve real-time task performance, because e.g. the time to
enqueue a task will not be constant, and it will thereby introduce
timing jitter into the execution of all tasks. If, OTOH, in this example
the queueing mechanism were implemented via a hardware timer and an ISR
that handles queueing via an array (so that the queueing algorithm is
constant-time), AND that ISR is elevated to the highest priority, then the
task jitter in that system will be rather minimal, subject only to time
deltas due to the system servicing any other interrupts (regardless of
interrupt preemption, etc.). I.e. if the MCU is servicing, say, a UART
interrupt when the "main" timing interrupt occurs, either the system has
to (HW) context-switch out of the UART interrupt to go service the timer
ISR (followed by a (SW) context switch), OR it has to wait for the UART
ISR to end before servicing the timer interrupt. In the former, jitter
will be a function of the MCU's interrupt handling; in the latter, it
will be a function of your code.
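
A minimal sketch of that constant-time, array-based flavor (the table
size, names and tick rate are made up for illustration; a real RTOS does
considerably more):

    #include <stdint.h>

    #define NUM_TASKS 4u

    /* One fixed slot per task: period, down-counter, ready flag. Marking a
     * task ready is an array index plus a store, so every tick costs the
     * same number of cycles regardless of system load.                    */
    typedef struct {
        uint32_t         period_ticks;
        uint32_t         countdown;
        volatile uint8_t ready;
    } task_slot_t;

    static task_slot_t task_table[NUM_TASKS];

    void task_table_init(void)
    {
        for (uint32_t i = 0u; i < NUM_TASKS; i++) {
            task_table[i].period_ticks = 1000u;   /* e.g. 1 s at a 1 kHz tick */
            task_table[i].countdown    = task_table[i].period_ticks;
            task_table[i].ready        = 0u;
        }
    }

    /* Highest-priority timer ISR: constant-time, no dynamic queue. */
    void timer_tick_isr(void)
    {
        for (uint32_t i = 0u; i < NUM_TASKS; i++) {
            if (--task_table[i].countdown == 0u) {
                task_table[i].countdown = task_table[i].period_ticks;
                task_table[i].ready     = 1u;
            }
        }
    }

The task-level code then just scans task_table[] and runs whatever is
flagged; because the ISR does a fixed amount of work every tick, it adds
a fixed, bounded latency rather than load-dependent jitter.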

While hardware preemption is always your friend, software preemption (in
the general case) is not a panacea. For example, in a preemptive RTOS,
every context switch is going to involve an interrupt. That "extra"
interrupt will affect the responsiveness and jitter of your other
interrupts. A cooperative RTOS does not involve any interrupts when
context-switching, and so (from one of several perspectives) a cooperative
RTOS (which in theory is not as responsive as a preemptive one) may in
fact yield better real-time performance than a preemptive RTOS in certain
respects.
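
For concreteness, the heart of a cooperative, run-to-completion
dispatcher can be as simple as the sketch below (names invented for
illustration). The dispatch itself is an ordinary function call -- no
trap, no software interrupt -- so it never adds an interrupt on top of
the hardware ones:

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_TASKS 4u

    typedef void (*task_fn_t)(void);

    /* Filled in at init; a slot is NULL when unused.                    */
    static task_fn_t        task_fn[NUM_TASKS];
    /* Set from ISRs (e.g. a timer tick like the one sketched earlier).  */
    static volatile uint8_t task_ready[NUM_TASKS];

    /* Cooperative dispatcher: each task runs to completion and returns.
     * The context "switch" is a plain function call and return.         */
    void dispatcher_run(void)
    {
        for (;;) {
            for (uint32_t i = 0u; i < NUM_TASKS; i++) {
                if (task_ready[i] && task_fn[i] != NULL) {
                    task_ready[i] = 0u;
                    task_fn[i]();          /* runs to completion */
                }
            }
        }
    }

The price, of course, is that a long-running task delays everything
behind it -- which is exactly why the hard-timing edges belong in
hardware or in an ISR, not in a task.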

So, bottom-line rule of thumb: if you want to get minimal timing jitter out
of an MCU application, it is possible to run an RTOS (of any sort -- most
are soft real-time, very few are hard real-time) on that MCU, and, with
careful attention to how you split the system between hardware peripherals
and firmware on the MCU, you can get to the exact hardware timing
specifications of the MCU itself. IMO the use of an RTOS makes a ton of
sense here, because you can e.g. implement a complete GUI, comms system,
terminal or other features as part of the MCU application, safe in the
knowledge that all this "RTOS overhead" has _zero_ impact on the real-time
performance you hoped to get out of the MCU. This does require careful
coding in certain areas ...
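
One common way to get that split right for the "fed" peripheral case
mentioned earlier is DMA double-buffering: the DAC is paced by a hardware
timer and serviced by DMA, and the task merely refills whichever half of
the buffer the DMA just finished playing. The names below are invented
for illustration:

    #include <stdint.h>

    #define HALF_LEN 256u

    /* Hypothetical data source, e.g. samples arriving over a comms link. */
    extern uint16_t next_sample_from_outside(void);

    /* One circular buffer split in two halves: the timer-paced DMA plays
     * one half out to the DAC while the task refills the other.          */
    static uint16_t dac_buf[2u * HALF_LEN];

    /* Set by the DMA half-transfer / transfer-complete ISR: which half
     * (0 or 1) has just finished playing and is now safe to refill.      */
    static volatile uint8_t refill_half;
    static volatile uint8_t refill_pending;

    void dma_half_complete_isr(uint8_t finished_half)
    {
        refill_half    = finished_half;
        refill_pending = 1u;
    }

    /* Task-level code: it may jitter by up to a whole half-buffer without
     * the DAC output ever glitching, because the per-sample timing is set
     * entirely by the timer/DMA hardware, not by this code.              */
    void dac_feed_task(void)
    {
        if (refill_pending) {
            refill_pending = 0u;
            uint16_t *dst = &dac_buf[refill_half * HALF_LEN];
            for (uint32_t i = 0u; i < HALF_LEN; i++) {
                dst[i] = next_sample_from_outside();
            }
        }
    }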

I am a huge proponent of loosely-coupled, priority-based, event-driven MCU
programming. Assuming it's coded well, it is not incompatible with
nearly-real-time programming. For high-performance MCU programming, I
generally follow these rules (a small sketch follows the list):

   - For 1-to-100-instruction-cycle timing accuracy, use straight-line
   (uninterrupted) instructions (Assembly, or C if your compiler is good)
   while interrupts are disabled. Here, your jitter is your fundamental clock
   jitter as it passes through the MCU.
   - For 100-to-1000-instruction-cycle timing accuracy, code it using an
   interrupt. Here, your jitter is dependent on the interrupt handling of the
   MCU.
   - For greater than 1000-instruction-cycle timing accuracy, hand it over
   to the (properly configured) RTOS. Here, your jitter is dependent on how
   the foreground (interrupt-level) vs. background (task-level) code operates
   in your application.
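
As a sketch of the first tier (the GPIO register and pin are invented;
the interrupt-mask instructions shown are the ARM Cortex-M ones --
substitute your part's equivalent):

    #include <stdint.h>

    /* Hypothetical GPIO output register, illustration only. */
    #define GPIO_OUT  (*(volatile uint32_t *)0x40002000u)
    #define SYNC_PIN  (1u << 3)

    /* On an ARM Cortex-M these are what the CMSIS __disable_irq() /
     * __enable_irq() intrinsics boil down to; other cores differ.   */
    #define IRQ_DISABLE()  __asm volatile ("cpsid i" ::: "memory")
    #define IRQ_ENABLE()   __asm volatile ("cpsie i" ::: "memory")

    /* Tier 1: a pulse whose width is set only by the instruction
     * stream and the MCU clock. With interrupts masked, no ISR or
     * task can stretch it; jitter is down to the clock itself.      */
    void emit_sync_pulse(void)
    {
        IRQ_DISABLE();
        GPIO_OUT |=  SYNC_PIN;    /* rising edge  */
        GPIO_OUT &= ~SYNC_PIN;    /* falling edge */
        IRQ_ENABLE();
    }

The second and third tiers are essentially the timer-tick ISR and the
dispatcher/RTOS task level sketched earlier.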


--Andrew

--------------------------------
Andrew E. Kalman, Ph.D.


On Sun, Apr 25, 2021 at 7:14 AM Lux, Jim <jim at luxfamily.com> wrote:

> On 4/25/21 6:40 AM, Bob kb8tq wrote:
> > Hi
> >
> >
> >> On Apr 25, 2021, at 9:31 AM, Lux, Jim <jim at luxfamily.com> wrote:
> >>
> >> On 4/25/21 6:02 AM, Bob kb8tq wrote:
> >>> Hi
> >>>
> >>> The thing that I find useful about a GPS simulator is its ability to
> calibrate the
> >>> time delay through a GPS based system. In the case of a GPSDO, there
> may be
> >>> things beyond the simple receiver delay that get into the mix. Getting
> the entire
> >>> offset “picture” all at once is a nice thing. Yes, that’s a Time Nutty
> way to look at it…..
> >>>
> >>> So far, I have not seen anybody extending this sort of calibration to
> the low cost
> >>> SDR based devices. Without digging into the specific device, I’m not
> sure how
> >>> well a “generic” calibration would do. Indeed, it might work quite
> well. Without
> >>> actually doing it … no way to tell.
> >>>
> >>> So if anybody knows of the results of such an effort, I suspect it
> would be of
> >>> interest to folks here on the list.
> >>>
> >>> Bob
> >>
> >> A double difference kind of relative measurement might be useful -
> compare two (or three) GNSS receivers.  Then the absolute timing of the
> test source isn't as important.
> > Well …. it is and it isn’t. If you are trying to get “UTC in the
> basement” (or even
> > GPS time)  to a couple nanoseconds, then you do need to know absolute
> delays
> > of a number of things. Is this a bit crazy? Of course it is :)
> >
> > Bob
> >
> Good point..
>
> For many SDRs, it's tricky to get the output synchronized to anything -
> a lot were designed as an RF ADC/DAC for SDR software (like gnuradio).
> The software SDRs are sort of a pipeline of software, with not much
> attention to absolute timing, just that the samples come out in the same
> order and rate as samples go in, but with a non-deterministic delay.
> Partly a side effect of using things like USB or IP sockets as an
> interface. And, to a certain extent, of running under a non-real-time OS
> (where real-time determinism is "difficult programming" - although
> clearly doable, since playing back a movie requires synchronizing the
> audio and video streams).
>
> If your goal is "write software 802.11" you don't need good timing - the
> protocol is half duplex in any case, and a millisecond here or there
> makes no difference.
>
> An SDR that has an FPGA component to drive the DACs might work pretty
> well, if you can figure a way to get a sync signal into it.  One tricky
> thing is getting the chips lined up with the carrier - most inexpensive
> SDRs use some sort of upconverter from baseband I/Q, and even if the I/Q
> runs off the same clock as the PLL generating the carrier, getting it
> synced is hard.
>
> The best bet might be a "clock the bits out and pick an appropriate
> harmonic with a bandpass filter".   If the FPGA clock is running at a
> suitable multiple of 1.023 MHz, maybe this would work?  Some JPL
> receivers use 38.656 MHz as a sample rate, which puts the GPS signal at
> something like 3/4 of the sample rate.
> I'd have to work it backwards and see if you could generate a harmonic
> that's at 1575...
>
>
> _______________________________________________
> time-nuts mailing list -- time-nuts at lists.febo.com -- To unsubscribe send
> an email to time-nuts-leave at lists.febo.com



