Getting Started with QNX Neutrino - QNX Software Systems
© 2009, QNX Software Systems GmbH & Co. KG.

Pulses
you received a message and set when you replied to a message. Just before you replied to the message, you'd check the flag. If the flag indicates that the message has already been replied to, you'd skip the reply. The mutex would be locked and unlocked around the checking and setting of the flag.

Unfortunately, this won't work because we're not always dealing with two parallel flows of execution — the client won't always get hit with a signal during processing (causing an unblock pulse). Here's the scenario where it breaks:

• The client sends a message to the server; the client is now blocked, the server is now running.

• Since the server received a request from the client, the flag is reset to 0, indicating that we still need to reply to the client.

• The server replies normally to the client (because the flag was set to 0) and sets the flag to 1, indicating that, if an unblock pulse arrives, it should be ignored.

• (Problems begin here.) The client sends a second message to the server, and almost immediately after sending it gets hit with a signal; the kernel sends an unblock pulse to the server.

• The server thread that receives the message was about to acquire the mutex in order to check the flag, but didn't quite get there (it got preempted).

• Another server thread now gets the pulse and, because the flag is still set to 1 from the last time, ignores the pulse.

• Now the server's first thread gets the mutex and clears the flag.

• At this point, the unblock event has been lost.

If you refine the flag to indicate more states (such as pulse received, pulse replied to, message received, message replied to), you'll still run into a synchronization race condition because there's no way for you to create an atomic binding between the flag and the receive and reply function calls. (Fundamentally, that's where the problem lies — the small timing windows after a MsgReceive() and before the flag is adjusted, and after the flag is adjusted just before the MsgReply().) The only way to get around this is to have the kernel keep track of the flag for you.

Using the _NTO_MI_UNBLOCK_REQ
Luckily, the kernel keeps track of the flag for you as a single bit in the message info structure (the struct _msg_info that you pass as the last parameter to MsgReceive(), or that you can fetch later, given the receive ID, by calling MsgInfo()). This flag is called _NTO_MI_UNBLOCK_REQ and is set if the client wishes to unblock (for example, after receiving a signal).
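As a sketch of how a server might consult this kernel-maintained bit (MsgReceive(), MsgError(), MsgReply(), struct _msg_info, and _NTO_MI_UNBLOCK_REQ are the real QNX Neutrino names; the message type and the loop structure are assumptions for illustration, and this compiles only against <sys/neutrino.h> on a Neutrino target):

```c
#include <errno.h>
#include <sys/neutrino.h>

typedef struct {
    int type;               /* hypothetical message layout */
} my_msg_t;

void server_loop(int chid)
{
    my_msg_t msg;
    struct _msg_info info;

    for (;;) {
        int rcvid = MsgReceive(chid, &msg, sizeof(msg), &info);
        if (rcvid == -1)
            continue;       /* error handling elided */

        if (info.flags & _NTO_MI_UNBLOCK_REQ) {
            /* The client wants out (e.g., it got hit with a
             * signal); abandon the work and unblock it by
             * failing the message with an error status. */
            MsgError(rcvid, EINTR);
            continue;
        }

        /* ... perform the client's work ... */
        MsgReply(rcvid, EOK, &msg, sizeof(msg));
    }
}
```

Because the kernel sets the bit atomically with respect to the receive, there's no window in which the flag describes a stale exchange.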

This means that in a multithreaded server, you'd typically have a "worker" thread that's performing the client's work, and another thread that's going to receive the unblock message (or some other message; we'll just focus on the unblock message for

April 30, 2009 Chapter 2 • Message Passing 123
