[time-nuts] M12+T ASCII interface - I'm confused?

Hal Murray hmurray at megapathdsl.net
Thu Nov 20 11:19:37 UTC 2008


> My gut tells me that  <Checksum><cr><lf><@><@> would be believable
> more than say 95% (if not 99%) of the time. I've got the following
> observations: 

I suggest that you debug things in software first and then figure out how to 
implement that algorithm in the FPGA.  You can rearrange the code and retest 
until it's in a shape that moves over to the FPGA easily.

Searching for <cr><lf><@><@> means you will wait for the start of the next 
packet before processing the current one.

> the checksum is one of the few things that are easy to do in VHDL.

So what?  What are you going to do with the rest of the data?
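For what it's worth, the arithmetic really is trivial.  Here's a minimal C 
sketch, assuming the usual Motorola binary convention that the checksum is 
the XOR of every byte between (but not including) the leading @@ and the 
checksum byte itself:

#include <stddef.h>

/* XOR checksum over the command and data bytes.  In an FPGA this is
 * one XOR gate per bit plus a register, which is why it's one of the
 * few easy pieces. */
unsigned char m12_checksum(const unsigned char *body, size_t len)
{
    unsigned char sum = 0;
    for (size_t i = 0; i < len; i++)
        sum ^= body[i];
    return sum;
}

The hard part isn't the checksum; it's the framing and length bookkeeping 
around it.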

In the old days (TTL DIPs) we used to build microcoded machines rather than 
real CPUs.  They were tuned to the specific problem.  That was before C took 
over, at least with the people I worked with.  That general style actually 
fits fairly well into an FPGA.  The key idea is that it lets you think of the 
problem as software rather than hardware.  That's usually a lot simpler if 
your state machine has many states that mostly map to something like a 
program counter.  I'll say more if anybody wants.  I think Xilinx had an app 
note.  You have to write the microcode and you probably have to write an 
assembler.  (We just grabbed one from a previous project and hacked it to fit 
our new machine.)  These days it may be better to throw a few more gates at 
the problem and use a real CPU that runs C.
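To make the style concrete, here's a toy C model of the idea.  Everything in 
it (the op names, the fields, the little program) is invented for 
illustration; the point is just that the whole machine is a program counter 
stepping through a table:

/* Hypothetical input stub: next byte from the serial port. */
extern int get_byte(void);

enum uop { WAIT_EQ, WAIT_ANY, XOR_ACC, JMP };

struct uinstr {
    enum uop op;
    unsigned char arg;          /* match byte or jump target */
};

/* Micro-program: sync on "@@", then XOR every following byte into an
 * accumulator forever.  In an FPGA the table becomes a small ROM and
 * the switch below becomes the decode logic. */
static const struct uinstr prog[] = {
    { WAIT_EQ,  '@' },          /* 0: stall until an '@' arrives */
    { WAIT_EQ,  '@' },          /* 1: second '@' */
    { WAIT_ANY, 0   },          /* 2: accept any byte */
    { XOR_ACC,  0   },          /* 3: fold it into the checksum */
    { JMP,      2   },          /* 4: back for the next byte */
};

void run(void)
{
    unsigned pc = 0;            /* the state register IS a program counter */
    unsigned char acc = 0, last = 0;
    for (;;) {
        struct uinstr u = prog[pc];
        switch (u.op) {
        case WAIT_EQ:  if ((last = get_byte()) != u.arg) continue; break;
        case WAIT_ANY: last = get_byte(); break;
        case XOR_ACC:  acc ^= last; break;
        case JMP:      pc = u.arg; continue;
        }
        pc++;
    }
}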




RS-232 shouldn't make any bit errors if you are running at sensible 
speeds and distances.  It shouldn't get any byte errors unless your 
hardware/OS is broken or your user code can't get enough CPU cycles.  So it's 
reasonable to assume that most of the data will be clean.  (Not perfect, but 
good enough that you don't have to even think about complicated error 
recovery.)

If I wanted to process that stuff, I'd have a mode bit: hunt vs in-sync.  
(This is assuming I really do understand the packet format.)

In hunt mode, I'd scan for <cr><lf>@@, assume that was the start of a packet, 
then grab the command bytes, look up the length, and read that much.  If the 
checksum matched and the next two bytes were <cr><lf>, I'd switch to in-sync 
and process the packet.  If not, go back and try again.  If I didn't 
recognize the command bytes, I'd stay in hunt mode and go back to looking 
for <cr><lf>@@.

In in-sync mode, I'd read two bytes, check for @@, grab the command bytes, 
look up the length, read that much, check the checksum, check for 
<cr><lf>...  If anything didn't check, I'd raise a flag and switch back to 
hunt mode.  That flag shouldn't ever go off.

That mode bit is mostly for thinking.  The code may fold together so you 
don't need it.  But something similar may be needed for debugging: you want 
to flag an error if something goes wrong after you have gotten in sync, but 
you don't want to raise an alarm while you are still hunting.
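
Here's roughly what that looks like in C.  get_byte(), lookup_len(), and 
process() are hypothetical stubs for the serial input, the per-command 
length table, and whatever consumes good packets:

#include <stdbool.h>
#include <stdio.h>

extern int get_byte(void);                  /* next byte from the port */
extern int lookup_len(int c1, int c2);      /* body length, or -1 if unknown */
extern void process(const unsigned char *pkt, int len);

/* Read everything after the @@: command bytes, body, checksum, <cr><lf>.
 * Returns true only if the whole packet checked out. */
static bool read_packet(void)
{
    unsigned char buf[256];
    int c1 = get_byte(), c2 = get_byte();
    int len = lookup_len(c1, c2);
    if (len < 0 || len > (int)sizeof buf)
        return false;                       /* unrecognized command */
    unsigned char sum = (unsigned char)(c1 ^ c2);  /* covers cmd + body */
    for (int i = 0; i < len; i++) {
        buf[i] = (unsigned char)get_byte();
        sum ^= buf[i];
    }
    if (get_byte() != sum)  return false;   /* bad checksum */
    if (get_byte() != '\r') return false;   /* missing <cr><lf> trailer */
    if (get_byte() != '\n') return false;
    process(buf, len);
    return true;
}

void parse_forever(void)
{
    bool in_sync = false;                   /* the mode bit */
    for (;;) {
        if (in_sync) {
            /* In-sync: the next bytes must be @@.  Any failure here is
             * the flag that should never go off. */
            if (get_byte() == '@' && get_byte() == '@' && read_packet())
                continue;
            fprintf(stderr, "lost sync\n");
            in_sync = false;
        } else {
            /* Hunt: slide a 4-byte window until we see <cr><lf>@@. */
            int a = 0, b = 0, c = 0, d = 0;
            while (!(a == '\r' && b == '\n' && c == '@' && d == '@')) {
                a = b; b = c; c = d; d = get_byte();
            }
            if (read_packet())
                in_sync = true;
        }
    }
}

Note the asymmetry: a failure in hunt mode is expected and silent, while a 
failure in in-sync mode gets logged.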




-- 
These are my opinions, not necessarily my employer's.  I hate spam.