Speed and reliability in wireless network

Status
Not open for further replies.

mik3ca

Member
After reading countless articles about internet protocols, I am in the process of rolling my own wireless protocol between microcontrollers, in which one acts as a master and the rest as slaves. The wireless devices I use are HM-TRP modules configured at 38400 baud and connected to the UART of each 8051 micro.

I have a packet definition as follows:

Byte 1:
2 least-significant bits: sequence number (master sets it, slave returns the same number)
6 most-significant bits: recipient address

Byte 2:
2 least-significant bits: ignored
6 most-significant bits: sender address

Bytes 3 through 6:
Data bytes

Byte 7: checksum
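For what it's worth, the frame layout above can be sketched in C like this. The checksum algorithm isn't stated in the post, so I've assumed a simple 8-bit sum of bytes 1 through 6 as a placeholder; swap in whatever you actually use.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the 7-byte frame described above. The checksum is an
   ASSUMED 8-bit sum of bytes 1-6; the post doesn't specify it. */
void pack_frame(uint8_t buf[7], uint8_t dest, uint8_t seq,
                uint8_t src, const uint8_t data[4])
{
    buf[0] = (uint8_t)((dest << 2) | (seq & 0x03)); /* 6-bit addr + 2-bit seq */
    buf[1] = (uint8_t)(src << 2);                   /* 6-bit sender, 2 LSBs ignored */
    for (int i = 0; i < 4; i++)
        buf[2 + i] = data[i];                       /* bytes 3-6: data */
    uint8_t sum = 0;
    for (int i = 0; i < 6; i++)
        sum += buf[i];
    buf[6] = sum;                                   /* byte 7: checksum */
}

/* Returns 1 if the checksum matches, 0 otherwise. */
int frame_ok(const uint8_t buf[7])
{
    uint8_t sum = 0;
    for (int i = 0; i < 6; i++)
        sum += buf[i];
    return sum == buf[6];
}
```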

Now, in my microcontroller code, I check that the data is correct before it is accepted, and only then is the data processed.

I also enabled a timeout feature so that if the sender sends part of the data and then stalls, the receive data pointer resets (and the sender has to resend all 7 bytes again).
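The timeout-reset logic above can be sketched as a tiny receive state machine. Names are mine; on the real chip rx_byte would be called from the UART interrupt and rx_timeout from the timer interrupt, but the logic itself is plain C:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the receive-side logic: bytes accumulate in a buffer,
   and a timeout resets the data pointer so a stalled sender must
   resend the whole 7-byte frame. Function names are illustrative. */
#define FRAME_LEN 7

static uint8_t rx_buf[FRAME_LEN];
static uint8_t rx_idx = 0;

/* Call with each received byte; returns 1 when a full frame is in rx_buf. */
int rx_byte(uint8_t b)
{
    rx_buf[rx_idx++] = b;
    if (rx_idx == FRAME_LEN) {
        rx_idx = 0;        /* ready for the next frame */
        return 1;
    }
    return 0;
}

/* Call when the inter-byte timer expires mid-frame. */
void rx_timeout(void)
{
    rx_idx = 0;            /* discard the partial frame */
}
```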

At first I was using E0h for TH0 and 00h for TL0 (I'm using timer 0 in 16-bit mode and I reload those values on each entry to the timer interrupt), which works out to roughly 2 or 3 ms. Then I went to 50h for TH0, which is probably 20 ms? I only did that because I was making room for other code; if I cram too many clock cycles into a timer interrupt, the timer interrupt will fire endlessly.
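Those reload values can be sanity-checked with the standard 8051 timing rule: in 16-bit mode the timer counts from the reload value up to 65536, one tick per 12 oscillator clocks on a classic 8051. The crystal frequency is my assumption (the post doesn't state it); a 33 MHz crystal would make an E000h reload come out near the quoted "2 or 3 ms":

```c
#include <assert.h>
#include <stdint.h>

/* Timeout in microseconds for 8051 timer 0 in 16-bit mode.
   Assumes the classic 12-clocks-per-tick core; the crystal
   frequency is a parameter because the post doesn't give it. */
double timer0_timeout_us(uint8_t th0, uint8_t tl0, double fosc_hz)
{
    uint32_t ticks = 65536u - (((uint32_t)th0 << 8) | tl0);
    return ticks * 12.0 * 1e6 / fosc_hz;
}
```

With TH0 = E0h, TL0 = 00h and a 33 MHz crystal this gives about 3.0 ms, and TH0 = 50h gives about 16.4 ms (close to the "probably 20 ms" guess).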

Since most of my microcontrollers have no extended memory, I'm limited to 256 bytes of RAM (I'm using the AT89S52).

So I'm curious. With standard TCP/IP, partial packets are allowed, but the way I'm doing it, I require a complete 7-byte packet before anything is processed.

If I were to switch to, say, a 4-byte setup consisting of this packet format:

Receiver address + 3-bit sequence
Sender address + 3-bit data part #
One-byte data
One-byte checksum

Is it going to be more efficient (namely for reliability) than the 7-byte setup I mentioned before?
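One way to weigh the two formats is raw on-air cost. A rough sketch (byte counts only; retransmission behaviour under errors is a separate question):

```c
#include <assert.h>

/* On-air bytes needed to move n payload bytes under each framing. */

/* 7-byte frame: 3 overhead bytes + up to 4 payload bytes. */
unsigned cost_7byte(unsigned n)
{
    return ((n + 3) / 4) * 7;   /* ceil(n/4) frames of 7 bytes */
}

/* 4-byte frame: 3 overhead bytes + 1 payload byte. */
unsigned cost_4byte(unsigned n)
{
    return n * 4;
}
```

For a typical 3-byte message that's 7 vs 12 on-air bytes, so the 4-byte scheme costs more bandwidth; its advantage would have to come from shorter frames being individually less likely to be corrupted, which depends on the raw bit error rate.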

I'm trying to think through all the pros and cons, and I'm debating whether to switch to the 4-byte protocol setup or stick with the 7-byte one. I'm also trying to find an optimal timeout value: if I set it too low, I'll never get data, and if I set it too high, data will take forever to arrive.

The data to be sent at once is usually 2 or 3 bytes (sometimes 4 bytes) in length.
 
The "bare" wireless link devices I have used need a run-in sequence to allow the receiver to adjust to the signal strength and the data slicer to settle.

That means sending eg. 0xAA or 0x55 to give a mean DC level for the data.
Some also require the data itself to be "DC balanced", so no strings of 0x00 or 0xff that can upset the slicer level.
As you don't know when the receiver will start producing data, it needs a framing code byte that can be recognised as such.
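The framing-byte idea above can be sketched as a simple scan: skip the preamble bytes (0xAA/0x55) until a distinctive sync byte appears. The sync value here is my arbitrary choice, not anything from the modules discussed:

```c
#include <assert.h>
#include <stdint.h>

/* Frame-sync sketch: 0x7E is an ILLUSTRATIVE sync byte; any value
   that cannot appear in the 0xAA/0x55 run-in sequence works. */
#define SYNC_BYTE 0x7E

/* Returns the index of the first byte after the sync byte,
   or -1 if no sync byte was seen. */
int find_frame_start(const uint8_t *buf, int len)
{
    for (int i = 0; i < len; i++)
        if (buf[i] == SYNC_BYTE)
            return i + 1;
    return -1;
}
```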

For a "transparent" link, all that is done for you - but still part of each data frame sent on air.


It's better to send one slightly longer frame than several short ones, as each short one has all the overhead anyway.
That's unless the link module buffers and combines them, but even then you are still sending more data overall to get the same result.

Depending how critical data integrity is, it can also be worth buffering the data from each received frame and comparing the next frame to it - and only updating the "real" input values when two consecutive frames match.
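That double-check can be sketched like this: buffer each frame's payload and only commit it once two consecutive frames agree. Sizes and names are illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Commit-on-match sketch: keep the previous frame's payload and
   only update the "real" values when two consecutive frames agree. */
#define PAYLOAD_LEN 4

static uint8_t prev[PAYLOAD_LEN];
static int have_prev = 0;

/* Returns 1 and copies the payload to 'out' only when this frame
   matches the previous one; otherwise buffers it and returns 0. */
int accept_frame(const uint8_t *payload, uint8_t *out)
{
    if (have_prev && memcmp(prev, payload, PAYLOAD_LEN) == 0) {
        memcpy(out, payload, PAYLOAD_LEN);
        have_prev = 0;          /* start fresh for the next update */
        return 1;
    }
    memcpy(prev, payload, PAYLOAD_LEN);
    have_prev = 1;
    return 0;
}
```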

For really critical stuff (large machine control), I've sent each byte both true and inverted then checked they match after reception, as well as the two consecutive matching frames method. That also gets around any possible DC offset caused by data values.
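The true-plus-inverted scheme decodes to something like this: each payload byte arrives as the pair (b, ~b), and any mismatched pair rejects the frame. A minimal sketch, with names of my own choosing:

```c
#include <assert.h>
#include <stdint.h>

/* Decode a true/inverted byte stream: rx holds pairs (b, ~b).
   Returns the number of bytes written to 'out', or -1 if any pair
   fails the check. 'len' is the received byte count (must be even). */
int decode_true_inv(const uint8_t *rx, int len, uint8_t *out)
{
    if (len & 1)
        return -1;               /* incomplete pair */
    int n = 0;
    for (int i = 0; i < len; i += 2) {
        if ((uint8_t)~rx[i] != rx[i + 1])
            return -1;           /* corruption detected */
        out[n++] = rx[i];
    }
    return n;
}
```

A nice side effect, as noted above, is that every pair has exactly four ones and four zeros in total, so the stream stays DC balanced regardless of the data values.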
 
Which part of the spectrum are you using for this? What are you using for radio hardware?
 
If the FSK chips you use receive continuously, you just have mark and space, like RS-232. If they work in bursts, then you probably need an AA/55 header to get the PLL set up. Check your datasheet for details.
 