
Firmware communication protocol : Streaming vs Packets

Posted by BeagleFury 
Firmware communication protocol : Streaming vs Packets
February 04, 2010 08:20AM
Breaking this off from the previous "is firmware necessary" thread, as the topic has diverged.

nophead Wrote:
-------------------------------------------------------
> @BeagleFury,
> I don't have any specific links but when I used
> to follow the RSS feed from the MakerBot Google
> groups many people had lots of pauses and hangs
> that spoilt the build.

I found a blog post on the builders' blog; it describes an Arduino bug that makes the 115K baud rate unusable, and offers a workaround that may improve reliability at that rate. I may experiment with this, since 115K baud has appealing throughput if it can achieve a lower error/data-loss rate.

As I am not keen on buying new hardware and drive capability for Ethernet, etc., the USB packet option you propose might work; or maybe another one using streamed ASCII would work too...

Here are my initial thoughts... any comments? This would satisfy one of my goals of being able to perform simple tests from a terminal application, while still getting the full advantage of a CRC'd, windowed packet system.

Frame mode - full data integrity, guaranteed delivery
Packet framing: SOH swnd rwnd TAB data... TAB crc ETB ->
Active window state request: SYN ->
Window state response: <- ACK swnd rwnd TAB crc ETB
Abort/Reset all framing/windowing: CAN CAN CAN CAN (4 or more)

Line mode - echo-based integrity checking, human-enterable, no CRC, no windows
Change to windowless request: ETX ETX ETX ETX (4 or more)
Windowless packet: data... CR LF
Windowless undo: BS
Windowless ack: "OK "

swnd and rwnd represent windowed positions for send/receive, and take on values from ASCII space to ASCII '~' in cyclic ascending order (a window size of up to ~90 should be more than adequate).
NOTE: a send window of ' ' should probably indicate an out-of-band packet with no delivery guarantee and no ACK response (upper layers would need to handle reliable-delivery handshaking).
Windowed guaranteed-delivery overhead is 9 bytes per packet, plus 7 bytes of ACK overhead, assuming no errors.
The ETX sequence will turn echo on; this allows the user to verify the round-trip integrity check.
The ETX sequence will abort any in-progress packet, whether windowed or windowless.
A windowless ack is sent after the ETX sequence, as well as immediately after any CR LF.
SOH, SYN, and ACK will turn echo off; the window and CRC handle round-trip integrity.
Responses to any windowed command will be windowed.
Responses to any non-windowed command will be non-windowed.
Windowed requests/responses handle piggybacked / inactivity 'active' ACK responses.
Windowed ACK responses can be embedded within data transmissions (might get tricky?)
The windowed CRC uses a common 16-bit CRC, packed into 3 characters.
There is no other difference between windowless and packetized internal data.
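For illustration, the cyclic swnd/rwnd arithmetic could be done with a couple of tiny helpers like these (hypothetical names, a sketch of the idea rather than any existing firmware code):

```c
/* Hypothetical sketch: window positions cycle through printable
   ASCII ' '..'~', as proposed above. */
#define WND_LO ' '   /* 0x20 */
#define WND_HI '~'   /* 0x7E: 95 values, so a window of up to ~90 fits */

/* next position in cyclic ascending order */
static char wnd_next(char w) {
    return (w >= WND_HI) ? WND_LO : (char)(w + 1);
}

/* cyclic distance from a to b, used to decide what fits in the window */
static int wnd_dist(char a, char b) {
    int d = b - a;
    return d >= 0 ? d : d + (WND_HI - WND_LO + 1);
}
```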

I believe this should allow a much more reliable operating basis, while still providing some ability to test/debug firmware without writing code.

Are there any (open source) protocols that already satisfy the basic (implied) requirements outlined above?

Edited 2 time(s). Last edit at 02/04/2010 12:33PM by BeagleFury.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 01:44PM
Hmmmmmm

I think you need a conversation with Mr Triffid_Hunter.

He is on the 5D thread in this same firmware topic.

You could be about to fix something that is not as broken as you think it is.

Triffid-hunter has been rewriting some of the firmware, taking heavy math out of interrupt service routines and dropping unnecessary floating point in favor of scaled integers.

I am willing to bet he has clawed back an interesting amount of processing cycles and made the firmware more responsive by doing this.

Floating point in firmware is god-awfully slow, and if you are making floating-point calls within ISRs you are halting the usual processing in favor of an interrupt that is running god-awfully slow code.

The responsiveness of your serial routines is a very good case in point.

If your processor is interrupted and stuck doing something god-awfully slow for long enough that it doesn't clear the RX buffer on the UART, the next character will overwrite it and you will lose the first. You can't tell it has happened unless you are using an error-checking protocol.

The higher the baud rate you run at, the more acute the problem. (Sound familiar?) Going from a character stream to packets won't help by itself; it will buy you some bandwidth compared to a byte stream, and adding error checking and retransmission to the protocol will help you recover, at the cost of bandwidth.

You are though masking the problem, rather than fixing it.

Simply because the problem may not be with the serial comms.

Do bear in mind, as the conversations with Nophead have identified, that Nophead's firmware is very lightweight and unlikely to have the same processing/timing issues.

Hope this helps

aka47


Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 01:55PM
BeagleFury;

My experience with RS232 and parallel streaming is that the "master" or host controls the flow.

When I send a step or direction pulse, the device responds as fast as the slew rate of the electronics allows. If there is an error condition, the device returns the data immediately.

In the example of the player-piano roll scanner I mentioned in the other thread, the first systems were bit-banged on a parallel port. When I made a USB version, I found it awkward to request data from the serial line. The FTDI chip emulates a modem, so there is always data waiting, and one has to check for null packets. In short, more has to be done than simply banging on the bits; some thought needs to be put into the threads used and how the timing of the polling works.

With the parallel scanner, it was practical to read a bit, then draw something on the DOS screen with a poke to video memory. Delays could simply be stall loops.

With USB it is next to impossible to draw in real time while the USB is returning data into an endpoint. The host does not give this data back to the thread in real time, and so much overhead is used by the graphics draw routines that there is little time to poll the device every 3.8 ms.

The solution would be to increase the buffer size on the device, that way there would be less chance of the buffer overflowing.

In the case of the current USB serial link, the issue is that data is being lost, most likely due to the nature of forcing the stream into a packet system through a modem emulator. The host expects the data to be sent in real time, the device never sees the data, and there is no valid handshake.

At the least, an EOL/ACK could be used. The CR/LF end-of-line varies between operating systems; it is best to make this an either-or (like PostScript) and treat the extra termination as whitespace. The ACK would simply be the echo of the EOL.

Echo is a good indicator of system health; it at least tells you that streaming is working. Echo with a CRC helps determine whether the data is valid, though this does nothing about noise on the line, as most systems just abort on a CRC error.
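The either-or CR/LF handling sheep suggests can be sketched as a small line reader (hypothetical helper, not from any existing firmware; the function name and signature are made up for illustration):

```c
#include <string.h>

/* Hypothetical sketch: accept CR, LF, or CRLF as the line terminator,
   swallowing the surplus half of a CRLF pair as whitespace.
   Copies the next line of `in` into `out` (cap bytes incl. NUL).
   Returns a pointer past the consumed terminator, or NULL at end. */
static const char *next_line(const char *in, char *out, int cap) {
    if (*in == '\0') return NULL;
    int n = 0;
    while (*in && *in != '\r' && *in != '\n') {
        if (n < cap - 1) out[n++] = *in;
        in++;
    }
    out[n] = '\0';
    if (*in == '\r' && in[1] == '\n') in += 2;  /* CRLF: consume both */
    else if (*in) in++;                         /* lone CR or lone LF */
    return in;
}
```

The same stream then parses identically whether the host sends CR, LF, or CRLF.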

Gcode was originally a dumb protocol. It was created on a teletype machine with a paper tape punch attached: pressing a key on the teletype punched holes in the tape. There was no computer involved. This was called NC.

Each "line" of Gcode is considered a "block". The block often ended with a '*' (more so in the case of Gerber).

Programs were written onto a sheet of graph paper, looking much like an accountant's ledger. There were columns for line or block number, Gcode, Mcode, X, Y, Z, I and J, F and S. Some systems tolerated % as a comment; mine did not.

Canned cycles were just that: subroutines implemented in hardware.

I suspect that the early NC machines were programmed as a simple state machine, which is why the codes are modal. Only a few "registers" are required; these were implemented with discrete logic, enough to move the servos when needed.

I suppose one could define a packet that consists of each Gcode block; these could then be assembled on the device in order. This would require some buffering, possibly the download and verification of the whole program before the part is run. That way bad packets could be retried, or packets could be routed through different channels if a protocol like TCP were used.

One of the advantages of CNC over simple NC was the ability to test the toolpaths graphically on the display. This way it can be seen whether the tool or slides are likely to hit a clamp.

My personal preference is to watch a graphic display rather than a DRO [digital readout]. This may reflect that my programs are small: drilling patterns of holes, milling slots, cleaning up castings. Usually the first time I run a program, I run it in step mode.

RepRap implies a different operating model, where one-off parts are made that can take hours or days to manufacture, preferably unattended. This may lend itself better to a buffered packet system than to stream-on-demand.

In some ways it seems that the goal is to create firmware that can read a memory stick containing a part. This would make the comms protocol somewhat redundant.

It might be possible to reverse the data flow.[1] This would be where the device echoes the NC data back to the host as the part is run; the device returns the positions of the axes, the temperature, and other data. This way one could simply set up a basic DRO with status lights that could be monitored, or such a system could feed a host program that does virtual 3D solid modeling of the part being made.

This would also separate the joystick/setup motion from the display. The input would then relate to program downloading/verification to the onboard memory stick.



-julie






*[1] good ghod, I have been watching too much Dr. Who - never knew I would use that phrase in a practical sentence.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 02:21PM
aka47 Wrote:
> I think you need a conversation with Mr
> Triffid_Hunter.

I've looked at his code. I will probably be borrowing much of it.. however...

> Triffid-hunter has been rewriting some of the
> firmware taking heavy math out of interrupt
> service routines and dropping unnecessary Floating
> point in favor of scaled integer.
> ...
> Response of your Serial Routines being a very good
> case in point.

My test programs have nothing in them except the comms test. The *only* point of failure is the standard Arduino implementation of the Serial class. There is no heavy math, no expensive loops; nothing except a very tight loop checking for serial availability, a buffer save, and an echo back upon receipt. There is the possibility that the transmission of data from the motherboard to the host disables interrupts, and that could be where the data loss happens; which may imply that we effectively do not have bidirectional UART comms, but a crippled, mostly unidirectional comms channel.

I did not include any existing firmware because I don't plan on using much of it (yet). I wrote the test from scratch. :)


> The higher the baud rate you run at the more acute
> is the problem. (Sound familiar) Going from char
> stream to packet won't help. It will buy you some
> bandwidth compared to byte stream. And with error
> checking and re-transmit additions to the protocol
> will help you recover at the cost of bandwidth.

I'm certainly open to better processing of serial; however, since a miscalculated bit can be catastrophic, I want a strong guarantee against the most likely errors, and a CRC check should give that confidence. UART comms do not provide any data guarantees, only a channel to send (possibly noisy) transmissions.

> You are though masking the problem, rather than
> fixing it. Simply because the problem may not be with the
> serial comms.

Unfortunately, noise can only be solved by isolated, non-noisy circuits that run error detection / correction / retransmission logic.

Given how I've explained my understanding of the problem, do you still believe it is not comms-related? How would you guarantee that something sent from the host was received by the motherboard, then? (Serial lines do not guarantee data integrity... the whole reason the X.25 level 2, OSI layer 2, and IP protocol wrappers exist is to address the fact that noise happens. Guaranteed. I believe that was Nophead's reason for dinging me when I mentioned using ASCII as a Forth 'protocol'; his assessment and comments accurately described the problem I discovered.)

> Do bear in mind as the conversations with Nophead
> have identitied, Nopheads fimrware is very light
> weight and unlikely to have the same
> processing/timing issues.

Nophead uses IP over Ethernet, which guarantees data integrity. He seems to use UDP (or perhaps his own custom protocol within IP?) to deal with data loss and guarantee data delivery... at least from what I glean from what he has said.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 02:41PM
sheep Wrote:
-------------------------------------------------------
> BeagleFury;
>
> ... lots of great information.. thanks! ...

Hi Julie,

As the requests I plan on sending are effectively packet oriented (sending a single character to the firmware won't accomplish much), streaming doesn't have a lot of advantage... however...

You mentioned that echo verification would probably be an effective solution. What technique would be used to send data from firmware to host without confusing the echo verification? Some form of handshake that lets the firmware respond with anything it might have, where the host echoes it back to the firmware to guarantee delivery, or something like that? Do you have any good illustrative examples of what you might propose?

I based my brainstorming on the idea that, from the host perspective, I probably want to have 5-10 pending requests always waiting to be sent to the firmware. Discarding and retransmitting all of those costs very little on the host. (The buffer size in firmware may be limited... at best it might manage a 2- or 3-packet out-of-order reconstruction, but that makes it more complicated...)

I've also made statements assuming people already know what I'm working on. :) Sorry about that. The system I'm working on implements spline following in the firmware. My firmware will be Gcode-illiterate, because it is too computationally expensive to turn lines into a two-linkage polar arm form. I've got the cubic spline math working at ~15 microseconds per spline step on an Arduino Mega. To drive a 5D robot, I'll need 5 splines per path descriptor (less than 0.1 milliseconds for spline stepping). I believe doing it this way greatly simplifies how the firmware operates, reducing all motion to a 5-dimensional spline curve (2 polar arms, 1 linear arm, the extrusion 'position', and the temperature). My accuracy in transforming linear to polar depends completely on my bandwidth and on the fixed-point accuracy for which I've coded the firmware; it is currently a 16-bit integer plus a 32-bit binary fraction (over 2^32), with ranges limiting some of the components (160 bits total of spline step state; each step requires 16 single-byte ADD or ADC operations on the data structure). Sorry if that is a bit complex; I have difficulty describing it in words. The code will probably be shorter than the paragraph I wrote. :)
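For readers trying to picture the add-only stepping, this is my reading of the approach as forward differencing of a cubic, not BeagleFury's actual code: hold the position and its first three differences in wide fixed-point accumulators, and each step is just three additions (which the AVR compiler lowers to byte-wise ADD/ADC chains). The names and the integer step size are illustrative assumptions.

```c
#include <stdint.h>

/* Hypothetical sketch: forward-difference stepping of a cubic
   f(t) = a*t^3 + b*t^2 + c*t + d. int64_t stands in for the
   16.32 fixed-point format described above. */
typedef struct { int64_t p, d1, d2, d3; } spline_t;

/* seed from integer coefficients with a step size of 1, for clarity */
static spline_t spline_init(int64_t a, int64_t b, int64_t c, int64_t d) {
    spline_t s;
    s.p  = d;               /* f(0) */
    s.d1 = a + b + c;       /* first forward difference at t = 0 */
    s.d2 = 6*a + 2*b;       /* second difference at t = 0 */
    s.d3 = 6*a;             /* third difference (constant for a cubic) */
    return s;
}

/* advance one step: three additions, no multiplies */
static void spline_step(spline_t *s) {
    s->p  += s->d1;
    s->d1 += s->d2;
    s->d2 += s->d3;
}
```

After n calls to `spline_step`, `p` holds f(n) exactly, which is what makes the per-step cost so small.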

Edited 1 time(s). Last edit at 02/04/2010 02:43PM by BeagleFury.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 03:09PM
hi guys, just tried out my serial system at 115200 baud.

At first it didn't work, but I checked the datasheet and the baud rate error is too high (-3.5%) without U2X0 set.

I adjusted my code to set that, gave it a whirl and lost not a single bit over several k of data in both directions. ringbuffers ftw smiling smiley

I think I'll leave it at 115200 so I can get a more thorough test over time.

ps: my tests were run with my full firmware in action stepping motors at 4000mm/min, nothing stripped out winking smiley


-----------------------------------------------
Wooden Mendel
Teacup Firmware
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 03:58PM
If you have no other firmware loaded and no ISRs, then the issue must, as you suggest, be comms.

If you are writing a host end you might consider echoing from the machine end and doing a comparison at the host end.

It will show up errors as you go.

Personally I would avoid being so dependent on the comms; this is one of many reasons why I was headed machine-centric rather than host-centric.

The best thing to do with lumps of data is, as nop was suggesting, write them to an SD card as a buffer. While you could do a file system, you don't actually need one: SD is a block-structured device, and the block sizes are small enough to buffer and write in the available RAM.

The question then is how to get the lump of data onto your SD card. To do this you can run your comms slower, or, as you suggested, use a packet transfer method with error checking and retransmission recovery. There are lots of serial protocols already designed for this (XMODEM, YMODEM, Kermit, etc.) if you are really paranoid about data integrity. Noisy and problematic serial data comms are as old as... serial data comms. :)

Or as you have suggested roll your own, minimalist is good.

I would head this way (without completely abandoning my ASCII interactive mode, because it is very useful).

For your block transfer just do binary; we are using 8N1 for comms anyway. Do command, followed by header, followed by block, followed by optional check. The header is a one- (maybe two-) byte size of the block, including the check bytes; it could even be part of the block-transfer command. The check is whatever you can live with: it could be a checksum, CRC8, or CRC16, whatever you fancy.

Do your usual ASCII interactive mode until you issue a block-transfer command as an ASCII whatever-you-like word, and return to ASCII interactive after the block transfer. The machine tells you if the block in the buffer is good or not (like you didn't know already, since the machine echoed the block and you verified it byte for byte). If the block as transferred is bad, overwrite it with another transfer; if the block arrives short, send padding bytes until the machine's block-transfer count sees enough binary data to complete and return with an error. An easy algorithm is something like: send a byte, look for the return string from the machine, repeat until the machine coughs. (Bear in mind that one byte sent should give one byte received or less; OK is two, and NOK plus an error code is a bunch more.)

Under normal good operation this protocol is pretty light. It shouldn't fail if bytes are lost, and it should recover gracefully. You still get to keep your ASCII interactive mode for standard terminal interaction.
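aka47's minimal binary block transfer might look something like this sketch; the function names and the choice of a one-byte additive checksum are mine, purely illustrative of the [len][data][check] shape he describes:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of a minimal block transfer:
   [len][data...][check], where len counts the data plus the check
   byte, and the check here is a simple 8-bit additive checksum. */

static uint8_t sum8(const uint8_t *p, int n) {
    uint8_t s = 0;
    while (n--) s += *p++;
    return s;
}

/* host side: wrap a payload; returns total bytes written to out */
static int block_pack(uint8_t *out, const uint8_t *data, uint8_t len) {
    out[0] = (uint8_t)(len + 1);          /* data + check byte */
    memcpy(out + 1, data, len);
    out[1 + len] = sum8(data, len);
    return len + 2;
}

/* machine side: 1 if the received block verifies, 0 otherwise */
static int block_ok(const uint8_t *blk) {
    int dlen = blk[0] - 1;
    return sum8(blk + 1, dlen) == blk[1 + dlen];
}
```

A failed `block_ok` would trigger the overwrite-with-another-transfer path described above.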

Other than this, if you want guaranteed good serial comms even for the interactive bits, you are going to need to write a driver for each end that encapsulates a full protocol. You will take a big hit on your bandwidth from the protocol overhead, though.

Oh, a last thing to bear in mind: USB is a packet/block protocol. If you are sending bytes down it, it will send lumps of data as it sees fit. You can get interesting effects driving it as a byte stream, i.e. one half of a lump is thrown at the receiving end at full tilt, followed by a not-insignificant pause, followed by the other half of the lump at full tilt. Put in simple terms, the pacing of the bytes isn't what you think you sent it as.

Some hardware on the receiving end of USB-to-serial (i.e. the serial end) may be able to handle the baud rate but not the bandwidth, i.e. the sustained data rate, bearing in mind the earlier discussion of what USB does. The usual fix is to drop the baud rate so that pacing is enforced. How you write your RX code, and what other ISRs (if any) are running, can have a big impact. I have bumped into this with high-speed serial links before now (transputers with DSP front-end cards).

Put an I/O port bit toggle in your RX ISR (if you're not using an ISR for RX, this could be your problem; see the notes re sustained transfer), hook the port up to an oscilloscope and/or a counter, and do some counting/timing. It can be very revealing. I use a port toggle and an oscilloscope often in real-time embedded development work; it gives debug feedback without overly loading the processor with reporting and ruining the timing. I can also see exactly how much time a lump of code takes to execute, without cycle counting.

I don't know if any of this will help; you may already know about it (sorry in advance), or it may be a waste (even sorrier).

Cheers

aka47


Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 04:00PM
Triffid_Hunter Wrote:
-------------------------------------------------------
> I adjusted my code to set that, gave it a whirl
> and lost not a single bit over several k of data
> in both directions. ringbuffers ftw smiling smiley

BTW, the Arduino Serial instance, derived from HardwareSerial, appears to use a ring buffer (128 bytes).

Triffid, how does your code compare? Did you look at the arduino-0017/hardware/cores/arduino/HardwareSerial .h/.cpp files before writing your own?

There is definitely a lot of room to improve there, though. For example, the available() method uses a % operator, and the compiler doesn't seem smart enough to figure out that %128 is equivalent to &127 for unsigned values; sheesh. Okay, I might be tweaking that library a little then.
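The micro-optimisation being discussed, sketched out (hypothetical names; the point is that a power-of-two size lets an explicit AND replace the modulus the compiler fails to optimise):

```c
#include <stdint.h>

/* Hypothetical sketch: a 128-byte ring where head and tail are kept
   in [0, 127], so masking replaces the modulus. */
#define RING_SIZE 128
#define RING_MASK (RING_SIZE - 1)   /* x % 128 == x & 127 for unsigned x */

typedef struct { uint8_t buf[RING_SIZE]; uint8_t head, tail; } ring_t;

static void ring_put(ring_t *r, uint8_t c) {
    r->buf[r->head] = c;
    r->head = (r->head + 1) & RING_MASK;   /* no division instruction */
}

static uint8_t ring_get(ring_t *r) {
    uint8_t c = r->buf[r->tail];
    r->tail = (r->tail + 1) & RING_MASK;
    return c;
}

static uint8_t ring_available(const ring_t *r) {
    return (uint8_t)(r->head - r->tail) & RING_MASK;
}
```

On an AVR, where there is no hardware divide, avoiding the modulus in `available()` matters far more than it would on a desktop CPU.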
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 04:05PM
Hmm, sounds remarkably like pacing issues and a dodgy library then.

I found your posts when I checked the thread after writing War and Peace; it must have taken me so long that you guys figured it out anyway. ;)

aka47


Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 05:09PM
Quote
BeagleFury Wrote:
> You mentioned that echo verification would
> probably be an effective solution. What technique
> would be used to send data from firmware to host
> without confusing the echo verification? Some
> form of handshake that lets the firmware respond
> with anything it might have, where the host echos
> it back to firmware to guarantee delivery or
> something like that? Do you have any good
> illustrative examples to make sense of what you
> might propose?
>

My suggestion is to simply echo the commands, what used to be called full duplex; the FTDI serial converter emulates a modem. This was how a lot of verification was done back in the BBS days, before the nets united under IP. It is easy to implement with a dedicated stream such as RS232.

I am not quite sure how this would work when passed through the FTDI USB bridge, or what delays are involved. USB wants the data formed into packets, which can be interleaved and time-multiplexed; send-a-byte-and-wait-for-the-return is a bit slow.

One could look at the bootstrap-loader protocols used by the AVR chips; this is the part of the Arduino toolchain represented by AVRDude. Atmel's serial protocol has proven robust: in effect there are some simple commands which allow the bootloader to respond via the serial port.

You do bring up the question of backchannel data, where the host requests data from the device.

I have in front of me the 20-year-old docs for the DOS CNC program I just upgraded. A quick look through the list of G codes shows that the codes starting with G all do something to the machine; none return information.

M codes have some modes for user input: M60 through M63 are input-wait conditions, M70-73 are monitor conditions, and M76-79 are service-routine interrupts. The PC-DOS program waited for a key before continuing. It looks like the service-interrupt conditions could be used for limit activities.

Since tool change was not implemented, the program paused and waited for a new tool. One could set a tool-change height.

The monitor conditions are interesting. These respond to the K column of the ledger sheet used with teletype entry of NC code.

In looking over the manual, the communications protocol is one-way: nothing is ever returned to the host.

This would be the simplest implementation. Treat each NC block as a packet.

The difficulty here is that it is hard to tell what an NC machine uses to end a block. The codes are modal, so a G90 code sets the state machine to input mode. G92 in this implementation preloads the counter registers; this can be used to set things like the start position, if not zero. It is stated that G92 is modal and only affects the current block.

Returning to G00: if this is followed by a character such as X, it will load the X counter register. The PC-DOS program required line numbers. A non-numeric character signals the end of the data entered into the X counter register. If Y follows X, the Y counter is loaded. This process continues through the symbols Z, I, J, K, S, and F, each in turn loading its register as needed.

These letter-number combinations are called words. One of the first programs I maintained in college was called NCWord; it emulated the spreadsheet entry on a minicomputer. When the program was verified it would be punched to paper tape. There was no connection to the tool other than the paper tape.

I made a PostScript simulator for this PC-DOS Gcode, which used N as the terminal character; when N was seen, the data for that block was rendered. In practice any whitespace could signal the end of a block. Gerber uses an '*'.

It is interesting to compare this documentation against the Wikipedia entry for RS274D. There is a link to an EMC2 Gcode tutorial which gives some guidelines as to what makes a block of code.

My experience of different NC and CNC machines is that the EOL character that terminates the block can be anything. I have some 30+ years' worth of NC code scattered about my computers and backups. Each "Gcode" file pretty much only works on the tool it was coded for.

Even the format of the word (the symbol that loads the counter register) differs from machine to machine. Some want fixed-point decimal, others leading zeros. Some programs set this with a %, some with a word that starts with the letter 'O'.

This may be due to the state-machine architecture of the early NC controllers, where any function not supported is a savings in hardware; the machines issue a fault rather than ignore an unsupported word.

To get back to implementing a simple, robust Gcode channel: the echo would happen at the end of the block. Before sending the block, the host would calculate a checksum, which would be sent following the end-of-block character. The device would echo the block and calculate a return checksum. If the two checksums did not match, the host could re-send the block.

The weakness of this is that there could be a delay caused by the retry. Depending on the CRC chosen for the checksum, the device could attempt error correction.

This still does not address the issue of an entire block's [USB] packet being lost. While line numbers in my PC-DOS Gcode are required, in EMC2 they are optional. The use of line numbers could be a way to detect missing packets, or even to route packets so they are assembled in order on the device.

-julie

Edited 1 time(s). Last edit at 02/04/2010 05:13PM by sheep.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 05:35PM
Julie

I think we may be of a similar vintage....

Grin

aka47


Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 06:35PM
BeagleFury Wrote:
> Triffid_Hunter Wrote:
> > in both directions. ringbuffers ftw smiling smiley
>
> BTW, the arduino Serial instance, derived from HardwareSerial, appears to use a ring buffer (128 byte).

Okay, I looked a little deeper. Triffid did it right; the key was the mention of "both directions". HardwareSerial does not appear to use a TX buffer; rather, it waits until it can send the byte. This could be a very long time (CPU-wise), and seems to explain some of the data loss I saw... though it doesn't make sense why it would work better when I slowed the rate down, unless it was a combination of factors.

How soon before you're comfortable about stability, Triffid? I'd like to take your code and plug it in under the Serial class.


Anyway, having thought about this a bit more, I don't believe any kind of echo "error detection" will work properly. Figuring out retransmission and recovery becomes significantly more difficult when the host sends 10 commands and 400-500 bytes get buffered in the bowels of USB drivers and OS streams. If the firmware fails to correctly receive the first command, all the remaining commands become ticking time bombs if the firmware tries to execute them as if nothing were wrong.

However, I also believe the flow back to the host will almost certainly be less critical... the host can ask about what it wants to know, and keep asking until it gets a response it likes. This makes things easier because the firmware doesn't have to worry about retransmission to the host, only the other way around. It also removes the need for buffering and a window field in the firmware-to-host direction.

So..... given that...... I'm going try again... simplify simplify simplify, but still solve the problems and address the needs...

All non-framing characters are ASCII. I prefer stuff I can throw into a log file or send to the screen and still make sense of, without needing ASCII lookup tables or figuring out whether ^X is a caret X or a Ctrl-X. A few control characters here and there are fine, especially TAB; I like TAB. Oh, and I never want to see a NUL; there are too many ways to break stuff with NUL-terminated strings. CR and/or LF also make good terminator characters; those map to a readable format in a relatively straightforward way.

So... host->firmware packet:

SOH(^A) id len len crc crc crc TAB(^I) data... LF(^J) and/or CR(^M)

- id : a strictly cyclically increasing printable character; 0-9A-Za-z would seem to provide a nice big sliding window of up to 62 buffered packets while still being grokable by a human looking at a log file; the host could decide how far ahead to buffer, up to 62 (could be 1... could be 62... configurable, probably).
- len : a hexadecimal encoding of the number of characters in the data field.
- crc : a three-character-encoded 16-bit CRC, using 48 possible characters (0-9A-Za-l), which I believe makes the conversion math on the firmware side bit-shift plus integer addition only (x*48 = x*32 + x*16; 48^3 > 2^16).
- TAB : a nice separator between header and data that makes scanning the commands easy when looking at a log file or a live monitor.
- data... : text based commands; firmware parses these similar to how it already operates.
- LF/CR : a nice packet terminator that makes scanning the commands easy when looking at a log file or a live monitor.
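A host-side builder for this frame layout might look like the following sketch. The function names are hypothetical, and the CRC polynomial is my pick of one common 16-bit CRC (CCITT/XMODEM-style); the proposal above only says "a common 16 bit crc":

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* one common 16-bit CRC (CCITT polynomial 0x1021), bitwise for clarity */
static uint16_t crc16_ccitt(const uint8_t *p, int n) {
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= (uint16_t)*p++ << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

static char b48(int v) {               /* 0-9, A-Z, a-l : 48 symbols */
    return v < 10 ? '0' + v : v < 36 ? 'A' + v - 10 : 'a' + v - 36;
}

/* Hypothetical sketch of the proposed frame:
   SOH id len len crc crc crc TAB data... LF
   Returns the total frame length (9 bytes of overhead plus the data). */
static int frame_build(char *out, char id, const char *data) {
    int len = (int)strlen(data);       /* two hex digits cap this at 255 */
    uint16_t crc = crc16_ccitt((const uint8_t *)data, len);
    int n = sprintf(out, "\001%c%02X", id, (unsigned)len);
    out[n++] = b48(crc / (48 * 48));   /* CRC packed base-48, */
    out[n++] = b48((crc / 48) % 48);   /* high digit first    */
    out[n++] = b48(crc % 48);
    n += sprintf(out + n, "\t%s\n", data);
    return n;
}
```

Note that 48^3 = 110592 comfortably covers all 2^16 CRC values, as the proposal observes.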


Any error at all (for example, an invalid character, an invalid CRC, no LF/CR after the stated number of data characters, etc.) and the firmware begins transmitting "ERR " + id + CR + LF, where id is the packet id it is expecting to see. I think it should send this every few characters it receives until it sees a sequence it recognizes as the start of a new packet or a reset request (e.g. until it sees SOH, CAN, or STX, and the characters following those look proper and valid).

The host can reset the framing by sending a few CAN (Ctrl-X) characters. If the firmware sees three of these in a row, it will discard everything, reset its window back to '0', and start echoing the CAN characters back to the host.
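Detecting the three-in-a-row CAN sequence is a tiny state machine; a hypothetical sketch, fed every received byte:

```c
/* Hypothetical sketch: returns 1 exactly when the third consecutive
   CAN (0x18) arrives, at which point the framing reset would fire. */
#define CAN 0x18

static int can_count = 0;

static int saw_reset(unsigned char c) {
    can_count = (c == CAN) ? can_count + 1 : 0;  /* any other byte restarts */
    if (can_count >= 3) { can_count = 0; return 1; }
    return 0;
}
```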

A human on a dumb terminal can ask the firmware to enter 'dumb human is my master' mode by typing CTRL-B a few times. After the third one, the firmware will echo CR LF "> " as a prompt. From that point, the firmware will echo any printable character or backspace, and accept a command when it sees CR and/or LF followed by CTRL-E (ENQ). I may be overly paranoid here; I just want to avoid a false positive. This also opens up the opportunity to edit the line by entering something else after hitting CR/LF. In any case, the firmware would execute the command and then print another prompt. To get out of this mode, the CAN / CTRL-X sequence would be used. Otherwise there would be no difference between commands typed by a user and commands wrapped in the reliable-delivery packet service sent by a host.

I'm going to see if I can't whip out some working code, both a host API and the firmware logic.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 08:38PM
BeagleFury Wrote:
-------------------------------------------------------
> Okay, looked a little deeper. Triffid did it
> right; the key was the mention of "both
> directions".. HardwareSerial does not appear to
> use a tx buffer, rather it waits until it can send
> the byte. This could be a (cpu time wise) very
> long time, and seems to explain some of the data
> loss I saw.. though, it doesn't make sense why it
> would work better when I slowed the rate down,
> unless it was a combination of factors.
>
> How soon before you're comfortable about
> stability, Triffid? I'd like to take your code
> and plug it in under the Serial class.

While my serial library does change from time to time to suit my current project, it (and the ringbuffer library it leans on) is extremely stable as I've been working on/with it for a long time. It's just the rest of my current project that isn't so stable winking smiley

My ringbuffers are currently 64 bytes in size, with 3 bytes used for head/tail/length so 61 bytes in the ring itself. If you made the size 2^n plus 3, you could indeed use binary AND instead of modulus for many of the operations. You could also remove length entirely if you make all ringbuffers the same size- my code is designed to allow different ringbuffers of different lengths in the same system.
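The power-of-two trick mentioned above can be sketched like this. To be clear, this is not Triffid's library (his supports per-buffer lengths and keeps an explicit length field); it's a minimal illustration of the 2^n variant, where wrap-around becomes a bitwise AND and the length field disappears entirely:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only: a ring buffer whose size is a power of two,
   so wrap-around is a bitwise AND rather than a modulus -- cheap on an
   AVR, which has no hardware divide. */
#define RB_SIZE 64                 /* must be a power of two */
#define RB_MASK (RB_SIZE - 1)

typedef struct {
    volatile uint8_t head;         /* next slot to write */
    volatile uint8_t tail;         /* next slot to read  */
    uint8_t data[RB_SIZE];
} ringbuf_t;

/* Returns 0 if the buffer was full and the byte was dropped. */
int rb_put(ringbuf_t *rb, uint8_t c)
{
    uint8_t next = (uint8_t)((rb->head + 1) & RB_MASK);
    if (next == rb->tail)
        return 0;                  /* full: drop rather than block in an ISR */
    rb->data[rb->head] = c;
    rb->head = next;
    return 1;
}

/* Returns -1 if empty (non-blocking), else the byte. */
int rb_get(ringbuf_t *rb)
{
    if (rb->head == rb->tail)
        return -1;
    uint8_t c = rb->data[rb->tail];
    rb->tail = (rb->tail + 1) & RB_MASK;
    return c;
}
```

One slot is sacrificed to distinguish full from empty, so a 64-byte ring holds 63 bytes; the count of buffered bytes, if you ever need it, is just (head - tail) & RB_MASK.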

I had to make a transmit ringbuffer; otherwise I can't send debug info from interrupt context without making the interrupts take *ages* to run!

My latest change to the library involved dealing with a full transmit buffer while in interrupt context. Before, it would lock up waiting for stuff to be read out of the ringbuffer while the transmit complete interrupt went unserviced. Now it just drops the data (what else can we do?)

Feel free to grab it and shoehorn into your arduino IDE- personally I've never downloaded the arduino software and so have no idea what it looks like or how they've done things.


-----------------------------------------------
Wooden Mendel
Teacup Firmware
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 08:41PM
Quote
BeagleFury Wrote:
Anyway, having thought about this a bit more, I don't believe any kind of echo "error detection" will work properly. Figuring out retransmission and recovery becomes significantly more difficult when the host sends 10 commands and 400-500 bytes get buffered in the bowels of USB drivers and OS streams. If the firmware fails to correctly receive the first command, all the remaining commands become ticking time bombs if the firmware tries to execute them as if nothing was wrong.


I may have been unclear. The most that should ever spin inside the USB buffer is one G-code block. At most, with a circular interpolation in 3 axes, that would be something like 30 or 40 bytes. The most characters that could be entered on a teletype line was 72; this limited the size of the blocks on the old DOS-era CNC machines. Small packets are better for the USB bridge chip and USB in general.

The idea is that the host cannot issue the next block until the device returns the block data. Granted, there is some delay, especially if the device is moving, processing the prior block. I strongly suggest that data pass-through to multiple processors is essential. Perhaps I have been influenced by reading Nop's HydraRaptor blog every day for the last year.

In my abstraction, the main device processor only contains the NC registers; the motor control is done through sub-processors. I use ATtiny25 chips in a player-piano valve system, where one processor controls two valves. The main processor is in effect a latched shift register: at 50 kHz the latch sets and the lines to the tiny25s change state (if needed).

I now think that as part of the ACK, the NC counter registers could be returned. This way a dumb terminal could implement a software DRO. First the state bits would be returned, then the axis registers. This could be a large burst of info if the registers are returned as ASCII-encoded decimal. It might make sense to return the registers in machine units: the actual binary values dumped out of the chip registers. Harder for a human to decode, but a smaller packet.

The worst case would be to echo the block in the old accounting-sheet column format, with line position; that would be 72 characters. These packets would only be echoed when idle. Is that too much to ask of the return channel? The host would have to poll frequently. The way the FTDI bridge chip works, there is always data when the host polls the chip (see the FTDI BSD/Linux driver for details). Low-level returned packets always start with the modem state bits, like DTR and CTS; the high-level driver has to strip these out, then combine the buffers and return the characters through the emulated com port.

The problem with returning Err+id+EOL is: how is the user supposed to know what Err-31 means? At least with a dumb DRO the LED blinks when the limit condition occurs, because there is a dyna label (see Back to the Future part one; the time computer is a DRO) telling the user this is the condition.

I am also unclear as to why 24 bits of CRC? The firmware is loaded with an 8-bit one. This is mostly a sanity check anyway. Again, I am an assembly programmer, so the way I do this with MIDI SYSEX messages is to read a byte, compare, and sum using the same registers. Since my resources inside the 8-bit device are limited, I want to tokenize things as soon as possible, so the CRC is my token.
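The read-compare-and-sum idea can be sketched in C rather than assembler. This is my own illustration, not julie's actual code; the checksum width (8-bit additive) and the convention that the check byte trails the payload are assumptions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One-pass parse-and-check: accumulate a simple 8-bit sum while scanning
   the payload, then compare it against the trailing check byte.  This
   mirrors the MIDI SYSEX habit of reusing the same pass (and, in assembler,
   the same registers) for both parsing and checking. */
int check_block(const uint8_t *buf, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i + 1 < len; i++)
        sum += buf[i];             /* running sum over the payload */
    return sum == buf[len - 1];    /* last byte is the check byte */
}
```

A real SYSEX checksum is usually a 7-bit quantity, but the structure (sum while you parse, compare at the end) is the same.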

The monitor prompt is a good idea; this could put the system into executive mode. I would suggest a flexible EOL protocol, where CR or LF is an EOL and the extra event is ignored.

-julie
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 10:43PM
sheep Wrote:
> I may have been unclear. The most that should
> ever spin inside the usb buffer is one gcode
> block.

And if I do not send G-code? This might be one source of confusion. Let me restate: the firmware I am developing will not accept G-code. My timings for performing inverse kinematics did not give satisfactory results (>1 ms per inverse kinematic computation). I also find unacceptable the number of linear segments needed to keep accuracy to 0.1mm over the large circular print area of my machine. See the wiki, and a few of the videos I've posted to the builders blog.

G-code will feed a host-based application that will compute the spline curves needed to keep that G-code accurate to 0.1mm. The cubic spline segments will then be fed to the machine. No M codes, no G codes, nothing but "set your 5D position to this point, then follow this sequence of spline segments."

> The problem with returning Err+id+EOL is that how
> is the user supposed to know what Err-31 is?

For interactive mode, the user would never see those errors; they are strictly for CRC packets. I would expect interactive-mode users to watch the echo carefully to detect errors.

Also, no sane user would use interactive mode to actually execute a build on my initial setup. The computer will be much more effective at computing the multitude of 160-bit spline segments. I intend to use it only for debugging/diagnosis.

> I am also unclear as to why 24 bits of CRC?

Only 16 bits of CRC (well, technically, since I am using base 48, it's 16.75 bits. smiling smiley) The three CRC character codes must be printable ASCII characters, per my requirement to be able to dump / monitor on screen, and a 16-bit code needs at least three printable characters to carry it. I only introduce control characters to avoid having to introduce escaping logic for framing signals.

> The monitor prompt is a good idea, this could put
> the system into executive mode. I would suggest
> a flexible EOL protocol, where CR OR LF is a an
> eol and the extra event ignored.

Yep. That is the plan.
Re: Firmware communication protocol : Streaming vs Packets
February 04, 2010 10:50PM
Quote
BeagleFury
My test programs have nothing in them except the comms test. The *only* point of failure is the standard Arduino implementation of the Serial class. There is no heavy math, no expensive loops, nothing except a very tight loop checking for serial availability, a buffer save, and an echo back upon receipt. There is the possibility that transmission of data from the motherboard to the host disables interrupts, and that could be where the data loss happens; which may imply that we effectively do not have bidirectional UART comms, but a crippled, mostly unidirectional comms channel.

What happens if you leave the AVR out of the equation, simply connecting RX with TX?
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 02:02AM
OK

Echo based error checking.

The transmitter is responsible for checking that the byte that comes back is the same as the byte that went out (8N1).

The receiver is responsible for echoing each byte it receives.

Each end knows when it is Transmitting and Receiving.

If a byte is lost (whichever way), the transmitter can resend the byte.

No framing overhead needed.

Belief is not necessary; I am not a preaching vicar, although my posts can be as long as sermons. Try it and see. My advice is based on experience. I will, however, accept that you don't feel entirely comfortable with something you haven't tried yet.

If you are byte checking, you are also auto-pacing your characters (baud becomes semi-irrelevant), simply because you don't send the next byte until the last has been echoed (so it must have successfully been received and transmitted at whatever throughput the other end will work at).

Because you are byte checking (XOR and look at the result) it is fairly fast. You are also working with binary, so the method doesn't care what you are sending (binary or ASCII).
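The sender side of this scheme can be sketched as follows. This is my own illustration of the technique described above, not anyone's firmware: put/get are hypothetical hooks onto whatever UART layer is in use, and get() is assumed to return -1 on timeout.

```c
#include <stdint.h>

typedef void (*put_fn)(uint8_t c);
typedef int  (*get_fn)(void);          /* echoed byte, or -1 on timeout */

/* Send one byte, re-sending until the far end echoes it back intact.
   Returns the attempt number that succeeded, or 0 if we gave up.
   Waiting for the echo is what auto-paces the link. */
int send_checked(uint8_t c, put_fn put, get_fn get, int max_tries)
{
    for (int attempt = 1; attempt <= max_tries; attempt++) {
        put(c);
        if (get() == c)
            return attempt;            /* echo matched: byte went through */
        /* mismatch or timeout: resend */
    }
    return 0;
}

/* A perfect-loopback stand-in for a real UART, for illustration only. */
static uint8_t echo_reg;
static void loop_put(uint8_t c) { echo_reg = c; }
static int  loop_get(void)      { return echo_reg; }
```

Note the comparison is against the raw byte value, so the same loop handles binary and ASCII alike.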

When you use an ASCII protocol you have extra problems sending binary. Put simply, you cannot guarantee that a block of binary won't contain a byte that is exactly the same as a control character, or sequence of control characters, that your protocol is looking for.

There are two methods to deal with this, both add significantly to the overhead.

1. You convert binary to an ASCII representation (for example, Intel HEX). Clearly, though, this lands you with more data to send; the higher baud rate is lost to the larger payload.

2. You perform an activity known as bit stuffing, where transmitted binary blocks are stuffed with bits to remove binary bytes that represent control characters. Again, you have overhead to encode and then decode at the other end, and the gains of higher baud are lost to lower throughput.
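A byte-level variant of the second method (escape bytes rather than stuffed bits, in the classic DLE/SLIP style) can be sketched like this. The control-character values here are illustrative only, not taken from any protocol in this thread:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Classic escape-byte stuffing: any payload byte that collides with a
   control character is prefixed with an escape byte and XORed, so the
   framing layer never sees a bare control byte inside the data. */
#define CTRL_SOH 0x01              /* illustrative frame-start byte  */
#define CTRL_ESC 0x1B              /* illustrative escape byte       */
#define ESC_XOR  0x20              /* XOR mask applied after escape  */

/* Returns the stuffed length; out must hold up to 2*len bytes. */
size_t stuff(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t n = 0;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == CTRL_SOH || in[i] == CTRL_ESC) {
            out[n++] = CTRL_ESC;
            out[n++] = in[i] ^ ESC_XOR;   /* no longer a control byte */
        } else {
            out[n++] = in[i];
        }
    }
    return n;
}

size_t unstuff(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t n = 0;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == CTRL_ESC)
            out[n++] = in[++i] ^ ESC_XOR; /* undo the escape */
        else
            out[n++] = in[i];
    }
    return n;
}
```

The worst case doubles the data, which is exactly the throughput cost the post describes.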


On simplicity, apply Occam's razor to the above, give it a close shave, and see what is left.

For me, the simplest scheme that auto-paces and preserves the benefits of a higher baud rate is echo-based byte checking.

Ultimately though you are the guy with your fingers on the keyboard, the choice is yours.

aka47

PS Triffid's advice on speeding up serial comms using binary increments of buffer size and logical math is very sound. It is what I have done for quite a few years. Ring buffering RX & TX is essential too. Get the basics right, then see if you need error detecting and fixing after all. You might be surprised.


Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 03:08AM
Ohhh

As a slightly related aside.

RS485 data comms routinely uses byte echoing to check the quality of the data. The echoing is done at the bus, rather than the far end echoing the byte that was sent.

If it is done right, the RX is permanently enabled and monitors the bus. Every byte sent is compared to what is echoed back at the bus.

If two devices try to transmit at the same time the byte will be mangled and the compare will fail. Similarly corruption due to noise is picked up as well.

Transmission on 485 with multiple stations is routinely packet oriented; the protocol can be as slim as header/packet/check, where the header contains TX address, RX address, size of packet, and check.

If you really want robust serial comms use RS485........


For ultimate speed of serial comms, I usually use ring buffers for RX and TX, sized to a binary increment, with the beginning and end pointers external to the buffer (you don't have to work out where they are to get the info).

TX: beginning = end => buffer empty; stop sending and disable the TX interrupt (a simple, quick test: no math, all logical operations).

TX: beginning <> end => enable the send interrupt and let it get on with sending.

RX interrupt permanently enabled; let it receive something if there is something to receive.

RX: beginning = end => nothing to do; return "nothing to do" from the non-blocking getch() equivalent.

The blocking getch() equivalent busy-waits for the <> case below.

RX: beginning <> end => something to do; return the char from whichever call you used.

A non-blocking getch() equivalent is important. You don't have the luxury of waiting for it.

You actually need the size so rarely that you calculate it from beginning and end only when you want to know what it is, and then do it using logical operators. The number of chars in the buffer is meta-data; it isn't stored anywhere.

For operation, particularly when you are layering on a protocol, implement your protocol as a state machine. Beware the case statement: on some toolchains it compiles to more processing cycles than other constructs. Take a look at your assembler listing if you are not sure (normally an extra command-line parameter will produce an assembler listing from the build process).

Trace your state machine through on paper to eliminate races, and lockups.

For best speed, use in-line assembler for the ISRs and optimize for minimum wastage of processing cycles.
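One case-statement-free way to structure such a protocol state machine is a table of handler functions indexed by state. This is a generic sketch of the idea, not code from anyone in this thread; the ':' / newline framing is a made-up toy protocol:

```c
#include <assert.h>

/* A byte-driven protocol state machine as a table of handlers: dispatch is
   one indexed indirect call, regardless of how many states exist, with no
   switch statement in the hot path. */
enum state { ST_IDLE, ST_DATA, ST_COUNT };

struct ctx { enum state st; int payload; };

static enum state on_idle(struct ctx *c, int ch)
{
    (void)c;
    return (ch == ':') ? ST_DATA : ST_IDLE;    /* ':' opens a frame */
}

static enum state on_data(struct ctx *c, int ch)
{
    if (ch == '\n')
        return ST_IDLE;                        /* newline closes the frame */
    c->payload++;                              /* stand-in for real work */
    return ST_DATA;
}

static enum state (*const handlers[ST_COUNT])(struct ctx *, int) = {
    [ST_IDLE] = on_idle,
    [ST_DATA] = on_data,
};

/* Feed one received byte through the machine. */
void feed(struct ctx *c, int ch)
{
    c->st = handlers[c->st](c, ch);
}
```

Because every handler has the same shape, the transition diagram is easy to trace on paper for races and lockups, as suggested above.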

Again, if you know this already, sorry.

cheers

aka47

Edited 1 time(s). Last edit at 02/05/2010 03:22AM by aka47.

Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 03:40AM
And a last one

The Arduino libraries are mostly written as the Arduino is designed, i.e. to be an easy-in to microcontrollers, targeted at an educational audience.

Assume they were written with a leaning towards educational expediency (as opposed to real-time operational efficiency) and they will cause you to fall over less.

I fell over trying to generate custom waveforms using the PWM library functions and ISRs (a different timer from the one the PWM I chose was using, but it fell in a heap nonetheless, and me with it).

They are improving as more people fall over them and then submit patches and fixes. If you want to be sure, and are counting cycles, DIY.

aka47


Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 06:20AM
Hi aka47,

There are several problems with the echo technique you propose:

- You cannot simply retransmit the byte, because the firmware cannot distinguish two bytes with the same value transmitted in sequence from a single byte whose echo came back corrupted.
- You cannot simply verify that the echo worked, because if data is lost you must decide how long to wait before assuming the loss and retransmitting.
- Even if you could get an echo-correction scheme to work, it does not play well with the non-realtime aspects of the application code; it could be 2 or 3 seconds between characters if the host decides to start up a virus checker, or the user launches a web browser while printing, etc. It only gets worse when you consider that on some operating systems, large portions of the device drivers run in a multitasking user-space process. Theoretically, you can mitigate this by switching to an RTOS, or putting the logic into an interrupt-driven kernel driver.

Windowing and framing have a long, successful history of solving this problem. I don't want to invent something new, but I could not find a simple ASCII-oriented protocol that gave just enough reliability, so I am making up my own. I still welcome references to anything close to what I need (a level-3 X.25 or IP stack seems like overkill, IMHO; level-2 X.25 requires special low-level bit injection to escape long sequences of 1 bits and still does not solve the reliable-delivery constraint; etc.)
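The bookkeeping that windowing buys is mostly cyclic id arithmetic. A minimal sketch over the 62-character id alphabet proposed earlier in the thread (the window size of 8 here is an arbitrary assumption, not part of the proposal):

```c
#include <assert.h>

/* Packet ids cycle through 0-9A-Za-z (62 symbols, as proposed earlier);
   "in window" is a cyclic-distance test against the next expected id. */
#define ID_MOD 62
#define WINDOW 8                   /* assumed window size, for illustration */

static int id_to_num(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'Z') return c - 'A' + 10;
    return c - 'a' + 36;
}

/* Is packet id `got` within WINDOW slots at or ahead of the expected id? */
int in_window(char expected, char got)
{
    int d = (id_to_num(got) - id_to_num(expected) + ID_MOD) % ID_MOD;
    return d < WINDOW;
}
```

Anything outside the window is a duplicate or garbage, and the receiver can NAK with the id it actually expects, which is exactly the resynchronization an echo scheme can't express.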

I will not switch away from USB/UART for now, because I already have the hardware and it works (mostly). Switching to a more reliable medium has advantages, but I do not consider it viable right now.

And yup, to quote Triffid_Hunter... ring buffers FTW. smiling smiley
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 06:22AM
anton Wrote:
> What happens if you leave the AVR out of the
> equation, simply connecting RX with TX?

Seems like a reasonable question. I'll give it a try maybe.
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 08:19AM
I wouldn't implement the protocol twice, once in binary and then in ASCII, just for interactive debugging with a terminal emulator. I just use an interactive Python session if I want to talk to my machine interactively. E.g.

>python
from hydra import *
hydra = Hydra()

hydra.goto_xyz(100, 100, 10)
print hydra

If I want to see the packet contents, I just put some print statements in the comms routines to format the data nicely. Less work, less duplication, and a powerful interactive Turing-complete command-line interface to the machine for free.


[www.hydraraptor.blogspot.com]
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 09:43AM
nophead Wrote:
-------------------------------------------------------
> I wouldn't implement the protocol twice, once in
> binary and then in ascii just for interactive
> debugging with a terminal emulator. I just use an
> interactive Python terminal if I want to talk to
> my machine interactively. E.g.

Yeah, I might take that route eventually, if for nothing else than the extra throughput (8 bits per character vs. ~6 bits per character).

In terms of complexity: so far, using inversion of control to implicitly define the state machine, the code is actually very short and simple for something similar to my last proposal:

#define IOC_STATEVAR relyp_state
IOC_DEFINE( ProcessRelyPChar, int inpc, int )
IOC_LOOP()
    // -----------------------
    // Start by looking for a framing character: SOH, STX, or CAN.
    if( inpc == ASCII_SOH )
    {
        // -----------------------
        // Read in the packet id.
        IOC_YIELD_NEXTEVENT( EVENT_WAITCH );

        // -----------------------
        // Verify the packet id, then read the first length character
        // and initialize the CRC.
        IOC_VERIFY_WAITCH( AsciiToCode64( inpc ) == expectedHeadId );
        pktCrc = GetCRC( 0, inpc );
        pktLen = AsciiToCode64( inpc );

        // -----------------------
        // Read in the second part of the packet length.
        IOC_VERIFY_WAITCH( IsCode64( inpc ) );
        pktCrc = GetCRC( pktCrc, inpc );
        pktLen = ( pktLen << 6 ) + AsciiToCode64( inpc );
        if( !IsCode64( inpc ) )
        {

    ////////////////////
    // .... ~30 more lines for packet mode operations .....
    ////////////////////

    else if( inpc == ASCII_STX )
    {
        // -----------------------
        // STX x3 will enter interactive mode.
        IOC_YIELD_NEXTEVENT( EVENT_WAITCH );
        IOC_VERIFY_WAITCH( inpc == ASCII_STX );
        IOC_VERIFY_WAITCH( inpc == ASCII_STX );

        // Change to interactive non-CRC mode.
        while( 1 )
        {
            while( inpc == ASCII_STX )
            {
                IOC_YIELD_NEXTEVENT( EVENT_PROMPT );
            }

            // Continue reading characters until an LF or CR.
            pktLen = 0;
            pktPtr = pktBuf;
            while( ( inpc >= ' ' && inpc <= '~' ) || inpc == ASCII_BS || inpc == ASCII_ENQ )
            {

    ////////////////////
    // .... ~30 more lines for interactive mode ....
    ////////////////////

IOC_END( 1 )
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 12:59PM
Hey, as I said, you are the man with his fingers on the keyboard. I am not here to argue about what I know works.


Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 01:03PM
I realized last night that I am sitting on a tube of mega168 chips, and all the AVR development tools one could desire. Most of this is wainscoting until I can get an Arduino board made. The epiphany is that in the meantime I can plug a chip into the STK500 or one of my Dragons and test some of what I am suggesting here, at least up to the point of the USB bridge.

Forgive the slight topic bend: what would be the best way to make my code available to others? I have my own web domain. Currently there are some old code distributions up there that are terribly dated (like an X11 GUI framework for Mac OS 7.6).

Most code now seems to be distributed with some sort of CVS package in mind. This may sound dumb or ignorant, but is there a preference as to what to use? A package that I would install on my own domain? Or does one use a hosting service? Any recommendations?

I would like to create a trunk for the pure AVR assembly version of the firmware that I am working on.

Edited 1 time(s). Last edit at 02/05/2010 01:27PM by sheep.
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 01:43PM
GitHub seems neat: free for open source projects. SourceForge offers Subversion for open source, plus a bunch of other services.


-----------------------------------------------
Wooden Mendel
Teacup Firmware
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 05:33PM
Drop Sebastien a personal message from this forum.

He is the guardian of the Wiki, Web and Repository.

Sebastien will set you up with access to the repository; ask him for space under your own area.

Last time I asked for some stuff, he was a touch up to his lugs for approximately a week, but may be able to do something a little sooner. Ask.

Hope this helps

Cheers

aka47


Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 05, 2010 10:07PM
Yeah, me too. I'll add the source I have for... hmmm... I'll call it the OlaOneWay protocol (OOW, pronounced, well, like you're in pain, I suppose), given a place to put it. Are there any special organization rules or tips to follow?

So far it is only tested via host emulation mode, no firmware download yet (I didn't have the board hooked up, and didn't want to go down to the basement to do so). Code size looks to be about 1.5K or so (plus overhead for serial comms).

It uses inversion-of-control concepts, so the structure is pretty easy to follow (IMHO, easier than explicit state machine tables). It should be easy to tear out the interactive parts, or change the CTRL vs. PRINT distinction to an 8-bit packetized scheme with an escape-code sequence for controls. (I originally used a hacked 'switch/case' coroutine pattern; but considering the weirdness, and the fact that some C programmers think it is broken code, I switched to a switch/goto table for reinjecting flow of control back into the yield points.)

aka47, the way to go interactive works quite well: you just hit CTRL-B three times, and it gives you a prompt. Almost brings back the days of "+++" followed by "ATDT", hmm? smiling smiley I even added a backspace and a reprint-line command (CTRL-E). Fancy edit controls could be added pretty easily; we'd just need to define the behavior. To switch back to crc/window mode, CTRL-A three times does the trick, and it responds with a NAK framing error to let you know where to start sending packets in the sliding window. By default it starts out in packet mode, but it might make more sense to start in interactive (a host can simply transmit 3 SOH (CTRL-A) characters, grab the current window frame, and start transmitting packets).

Currently, the packet-mode error diagnostics include "NAK id" for a general idle error, "NAK id : frag" for a fragmented / incomplete packet, "NAK id : crc" for a CRC error, "NAK id : too long" for a packet whose length exceeds the internal maximum packet size, and probably one or two I'm forgetting. Any successfully received packet gets "ACK id". All responses terminate with CR LF, but that can be changed by changing a single constant. On the input side, it works with whatever variation of CR and/or LF you send it, as long as you're consistent.
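On the host side, sorting those responses could be as simple as the sketch below. This is my own hypothetical parser, not part of OOW; it only assumes the "ACK id" / "NAK id : reason" spellings described above:

```c
#include <assert.h>
#include <string.h>

/* Parse one response line of the form "ACK i" or "NAK i : reason".
   Returns 1 for ACK, 0 for NAK, -1 for anything unrecognized, and stores
   the packet id character through *id on success. */
int parse_response(const char *line, char *id)
{
    if (strncmp(line, "ACK ", 4) == 0) { *id = line[4]; return 1; }
    if (strncmp(line, "NAK ", 4) == 0) { *id = line[4]; return 0; }
    return -1;
}
```

Anything after the id (the " : reason" tail) is diagnostic only, so the host can log it verbatim without parsing it.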

I still need to write the host API to interface with the board, but that should be relatively straightforward.
Re: Firmware communication protocol : Streaming vs Packets
February 06, 2010 03:15AM
lol

Just read what I typed last night, I think my fingers were slurring....


Necessity hopefully becomes the absentee parent of successfully invented children.
Re: Firmware communication protocol : Streaming vs Packets
February 18, 2010 03:18AM
As for the Arduino Serial implementation, it has a major flaw: receiving is nicely done with interrupts and buffers, etc., but _sending_ is done in a busy-waiting loop! No buffering, no interrupt, nothing. If this could be fixed, it would be great.

Since I've decided to rewrite the firmware for the Delta RepRap in C, I've looked at Triffid Hunter's serial implementation and decided to merge it with some others into a generic serial routine, but it still hasn't got a serial transmit buffer. Time to dig into the ATmega specs, I guess, and fix this once and for all.

As for the protocol, I'm having good experiences with the MakerBot firmware's simple packet protocol. It features a master/slave packet protocol with a free bidirectional textual channel next to it. Handy for debug output winking smiley

With regards,
Reinoud