Getting Started with QNX Neutrino

Message passing over a network

Longer delays

Since message passing is now being done over some medium, rather than a direct kernel-controlled memory-to-memory copy, you can expect that the amount of time taken to transfer messages will be significantly higher (100 Mbit Ethernet versus 100 MHz 64-bit-wide DRAM is going to be an order of magnitude or two slower). Plus, on top of this will be protocol overhead (minimal) and retries on lossy networks.
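To see why “an order of magnitude or two” is about right, here’s a rough back-of-the-envelope estimate (assuming full theoretical rates and no overhead): 100 Mbit/s Ethernet moves roughly 12.5 MB/s, while a 100 MHz memory bus that’s 64 bits (8 bytes) wide moves roughly 800 MB/s, a factor of about 64.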

Impact on ConnectAttach()

When you call ConnectAttach(), you’re specifying an ND, a PID, and a CHID. All that happens in Neutrino is that the kernel returns a connection ID to the Qnet “network handler” thread pictured in the diagram above. Since no message has been sent, you’re not informed as to whether the node that you’ve just attached to is still alive or not. In normal use, this isn’t a problem, because most clients won’t be doing their own ConnectAttach() — rather, they’ll be using the services of the library call open(), which does the ConnectAttach() and then almost immediately sends out an “open” message. This has the effect of indicating almost immediately if the remote node is alive or not.
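As a minimal sketch of this (the node name /net/magenta and the PID and CHID values are made up for illustration), note that the attach itself succeeds without any network traffic:

    #include <stdio.h>
    #include <sys/neutrino.h>
    #include <sys/netmgr.h>

    int main(void)
    {
        /* Resolve the node name to a node descriptor (ND). */
        int nd = netmgr_strtond("/net/magenta", NULL);
        if (nd == -1) {
            perror("netmgr_strtond");
            return 1;
        }

        /* Attach to PID 1, channel 1 on that node.  This returns a
           connection ID without sending anything on the wire, so it
           "succeeds" even if the remote node is down. */
        int coid = ConnectAttach(nd, 1, 1, _NTO_SIDE_CHANNEL, 0);
        if (coid == -1) {
            perror("ConnectAttach");
            return 1;
        }

        /* Only a real message (like the "open" message that open()
           sends) tells us whether the node is actually alive. */
        ConnectDetach(coid);
        return 0;
    }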

Impact on MsgDeliverEvent()

When a server calls MsgDeliverEvent() locally, it’s the kernel’s responsibility to deliver the event to the target thread. With the network, the server still calls MsgDeliverEvent(), but the kernel delivers a “proxy” of that event to Qnet, and it’s up to Qnet to deliver the proxy to the other (client-side) Qnet, who’ll then deliver the actual event to the client. Things can get screwed up on the server side, because the MsgDeliverEvent() function call is non-blocking — this means that once the server has called MsgDeliverEvent() it’s running. It’s too late to turn around and say, “I hate to tell you this, but you know that MsgDeliverEvent() that I said succeeded? Well, it didn’t!”
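Here’s a sketch of the server side. The struct pending and notify_client() names are made up; only MsgDeliverEvent() itself is the real kernel call:

    #include <sys/neutrino.h>
    #include <sys/siginfo.h>

    /* The client typically passes a struct sigevent in its message;
       the server stashes it, along with the rcvid, until it has
       something to report. */
    struct pending {
        int             rcvid;   /* from the MsgReceive() of the request */
        struct sigevent event;   /* copied out of the client's message */
    };

    void notify_client(const struct pending *p)
    {
        /* Non-blocking: this returns as soon as the event is accepted.
           Locally the kernel delivers it; over the network the kernel
           hands a proxy to Qnet, so a success return here doesn't
           guarantee that client-side delivery ever happened. */
        if (MsgDeliverEvent(p->rcvid, &p->event) == -1) {
            /* Reliable locally; over Qnet failures may go unreported. */
        }
    }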

Impact on MsgReply(), MsgRead(), and MsgWrite()

To prevent the problem I just mentioned with MsgDeliverEvent() from happening with MsgReply(), MsgRead(), and MsgWrite(), these functions were transformed into blocking calls when used over the network. Locally, they’d simply transfer the data and unblock immediately. On the network, we have to (in the case of MsgReply()) ensure that the data has been delivered to the client or (in the case of the other two) actually transfer the data to or from the client over the network.
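For example (a sketch; reply_with_string() is a made-up helper), the reply code looks the same locally and over the network, but what a success return means differs:

    #include <errno.h>
    #include <string.h>
    #include <sys/neutrino.h>

    /* Locally, MsgReply() copies the data and returns right away.
       Over Qnet, the very same call blocks until the data has been
       delivered to the client's node, so a success return means the
       client really has the reply. */
    int reply_with_string(int rcvid, const char *text)
    {
        return MsgReply(rcvid, EOK, text, strlen(text) + 1);
    }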

Impact on MsgReceive()

Finally, MsgReceive() is affected as well (in the networked case). Not all the client’s data may have been transferred over the network by Qnet when the server’s MsgReceive() unblocks. This is done for performance reasons.

There are two fields in the struct _msg_info that’s passed as the last parameter to MsgReceive() (we’ve seen this structure in detail in “Who sent the message?” above):

msglen
    indicates how much data was actually transferred by the MsgReceive().
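Putting this together, here’s a sketch of a receive path that copes with a partial transfer. The buffer size and the empty reply are made up; the struct _msg_info field is real:

    #include <stdio.h>
    #include <sys/neutrino.h>

    #define BUF_SIZE 4096          /* hypothetical buffer size */

    void serve_one(int chid)
    {
        char             buf[BUF_SIZE];
        struct _msg_info info;

        int rcvid = MsgReceive(chid, buf, sizeof(buf), &info);
        if (rcvid > 0) {
            printf("Qnet transferred %d bytes up front\n", (int) info.msglen);

            /* If the client sent more than was prefetched (a real
               server would usually learn the true length from its
               own message header), MsgRead() pulls the remainder
               over the network, starting at the offset where the
               prefetched data ended. */
            if (info.msglen < BUF_SIZE) {
                MsgRead(rcvid, buf + info.msglen,
                        BUF_SIZE - info.msglen, info.msglen);
            }

            MsgReply(rcvid, 0, NULL, 0);
        }
    }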

