The design currently consists of three distributed processes, each running on a different node. The processes communicate with each other via Sockets, exchanging JSON messages.
- The Sensor: running on a BeagleBone Black, written in Python. This reads the various sensors and reports any state changes (e.g., from open contact to closed contact) to the Controller. It uses Sockets to communicate with the Controller, the only process it directly communicates with. It is dedicated to this work. Testing is performed via py.test. In a Model View Controller (MVC) paradigm, this is the Model.
- The Controller, written in Kivy for Python: running on a desktop node, currently an Ubuntu 13.04 system. This system stays up all of the time, and all logging will be located here. It communicates with the Sensor, the GUI, and the outside world, all via Sockets. This node is not dedicated to this activity; it is one of many activities taking place on this node. In an MVC paradigm, this is the Controller.
- The GUI: running on multiple nodes on the local network, possibly running on non-local networked devices, written in Python. Uses Kivy for the GUI. Some nodes may be dedicated, while other nodes may have other activities as well. It communicates with the Controller via Sockets.
An Issue with Sockets
Interestingly, even though I have been programming for decades, I was unaware that SOCK_STREAM type Sockets are not message-oriented! While UDP deals in complete messages, TCP deals in byte-streams. The data flowing between the processes is not a sequence of complete messages but a continuous stream of bytes. So while you may think you are sending the whole message you constructed, you are in fact sending a series of byte-fragments of it, and the receiving code has to reassemble those fragments into a single message.
While running tests under py.test, I noticed that the messages between the Sensor module and the Controller module would sometimes fail from one test run to the next, even with no code changes.
Looking into this, I found that data placed onto the network via sock.send is not guaranteed to arrive at sock.recv as a single complete message. Instead, the message may be chopped into as many small pieces as the underlying network components implementing Sockets desire. A message that the Sensor sends, which in its entirety might be 64 bytes long, could be delivered as 64 separate 1-byte chunks, two 32-byte chunks, one single 64-byte chunk, or any other combination. The only guarantee is ordering: the bytes will be received in the order they were sent.
This means the sock.recv call in the Controller code (or anywhere sock.recv is used) must be capable of building up the complete message from multiple partial reads. This is usually implemented with a loop in the code.
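For the case where the expected length is already known, that receive loop can be sketched as below. This is a minimal illustration; `recv_exactly` is a helper name of my own choosing, not part of the socket API:

```python
import socket

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    """Loop on recv() until exactly n bytes have arrived."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)  # may return fewer bytes than requested
        if not chunk:                 # empty result => peer closed mid-message
            raise ConnectionError("socket closed before full message arrived")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```

The empty-chunk check matters: recv returning `b""` means the peer closed the connection, and without the check the loop would spin forever.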
But the problem now becomes: how does the receiver know it has received all of the fragments of one complete message, so that it can hand that message off for processing and start receiving the next one?
In researching this, I have found the following recommended approaches:
- Always use fixed-length messages, then loop until that many bytes have been obtained. This means you either need message content whose length never varies, or you need to pad every message out to some agreed-upon fixed length.
- Include the length of the message as part of the message, preferably at its head. (BTW - in my testing, I found that Python dictionary contents are not kept in any specific order, so placing a 'MSG_LEN' key with the length value may end up as the last element in the stream!) The length does not include the length-part itself, just the message body. I plan on using this method, but an issue cropped up because I'm using JSON for the messages. This approach involves two messages: the first of a known type and length (a struct 'I' 4-byte unsigned integer is frequently recommended) carrying the length of the message that follows, and the second being the actual data.
- A delimiter marking the end of the message. This entails receiving the byte-stream, scanning for the end delimiter, and, once it is found, packaging the accumulated bytes into a final message. It requires setting aside some special byte combination that will never appear inside a normal message. This could work, since I'm in control of the messages that will be sent/received and can dictate their contents. Currently, I'm not using this approach.
- Using UDP instead of TCP SOCK_STREAM. UDP is a true message construct: either the whole message is delivered or it isn't delivered at all; partial messages are never delivered. But UDP does not guarantee delivery, and since this is a system where a missed message may be an alert that the back door was just opened, that isn't an option for me. I'll stay with TCP SOCK_STREAM.
- Using some sort of framework that performs the underlying message framing for me, such as ZeroMQ. I'm hesitant to take this approach, as I'm not certain how much overhead it would introduce, particularly on the BeagleBone Black. More research is needed here, so for right now I'm not using it, but I may come back to it if the message-length approach isn't satisfactory.
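As an illustration of the delimiter option above, here is a minimal sketch assuming a newline byte is reserved as the end-of-message marker. Both the `recv_delimited` helper name and the choice of `b"\n"` are my assumptions for the sketch, not part of the project's code:

```python
import socket

DELIM = b"\n"  # assumed end-of-message marker; must never appear inside a message

def recv_delimited(sock: socket.socket, buffer: bytearray) -> bytes:
    """Accumulate bytes until the delimiter appears; return one complete message.

    `buffer` carries leftover bytes between calls, because a single recv()
    may deliver the tail of one message plus the head of the next.
    """
    while DELIM not in buffer:
        chunk = sock.recv(4096)
        if not chunk:  # peer closed before a full message arrived
            raise ConnectionError("socket closed before delimiter seen")
        buffer.extend(chunk)
    msg, _, rest = bytes(buffer).partition(DELIM)
    buffer[:] = rest  # keep any bytes belonging to the next message
    return msg
```

Note that the caller owns the buffer: whatever arrives after the delimiter stays in it for the next call, which is exactly the bookkeeping the delimiter approach forces on you.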
JSON Message Length Issue
I thought I could just create a new message consisting of the message length (in string format) concatenated with the JSON message already developed. However, the receiving end, where it rebuilds the JSON message into its original construct, objected to this non-JSON length portion. This means the length has to be framed separately from the bytes handed to the JSON decoder.
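The failure can be reproduced in a few lines: the JSON decoder parses the leading digits as a bare number and then rejects the rest of the text as extra data. (Catching `ValueError` here is deliberate, since Python's `json.JSONDecodeError` is a subclass of it.)

```python
import json

body = json.dumps({"sensor": "back_door", "state": "open"})
framed = str(len(body)) + body  # naive: glue the length onto the JSON text

try:
    json.loads(framed)  # decoder sees a bare number followed by extra data
    decoded_ok = True
except ValueError:      # json.JSONDecodeError subclasses ValueError
    decoded_ok = False

print(decoded_ok)  # prints False
```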
Decision: Using embedded message length
For right now, I'm going to use the embedded message length approach, which entails two messages: the first carrying the length of the data content message that follows, and the second carrying the actual data content. If that doesn't pan out, I'll try the end-delimiter approach, and finally ZeroMQ.
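A minimal sketch of this two-part scheme, assuming a 4-byte network-byte-order struct 'I' header as recommended above; the `send_msg`/`recv_msg` helper names are mine, not the project's:

```python
import json
import socket
import struct

HEADER = struct.Struct("!I")  # 4 bytes, network byte order, unsigned int

def send_msg(sock: socket.socket, payload: dict) -> None:
    """Send the fixed-size length header, then the JSON-encoded body."""
    body = json.dumps(payload).encode("utf-8")
    sock.sendall(HEADER.pack(len(body)) + body)

def recv_msg(sock: socket.socket) -> dict:
    """Read exactly the header, learn the body length, then read the body."""
    (length,) = HEADER.unpack(_recv_exactly(sock, HEADER.size))
    return json.loads(_recv_exactly(sock, length).decode("utf-8"))

def _recv_exactly(sock: socket.socket, n: int) -> bytes:
    """Loop until exactly n bytes have arrived (recv may return fewer)."""
    data = bytearray()
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data.extend(chunk)
    return bytes(data)
```

Because the header has a fixed, known size, the receiver never has to guess where one message ends and the next begins: it reads 4 bytes, then exactly the advertised number of body bytes, and only then hands the JSON to the decoder.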