Getting Started with QNX Neutrino - QNX Software Systems

© 2009, QNX Software Systems GmbH & Co. KG. Writing a resource manager

    }
    sts = fchown (fd, owner, group);
    close (fd);
    return (sts);
}

where fchown() is the file-descriptor-based version of chown(). The problem here is that we are now issuing three function calls (and three separate message-passing transactions), and incurring the overhead of open() and close() on the client side.

With combine messages, under Neutrino a single message that looks like this is constructed directly by the client's chown() library call:

    _IO_CONNECT_COMBINE_CLOSE | _IO_CHOWN

A combine message.

The message has two parts: a connect part (similar to what the client's open() would have generated) and an I/O part (the equivalent of the message generated by the fchown()). There is no equivalent of the close(), because we implied that in our particular choice of connect message. We used the _IO_CONNECT_COMBINE_CLOSE message, which effectively states "Open this pathname, use the file descriptor you got for handling the rest of the message, and when you run off the end or encounter an error, close the file descriptor."

The resource manager that you write doesn't have a clue that the client called chown(), or that the client did a distinct open(), followed by an fchown(), followed by a close(). It's all hidden by the base-layer library.

Combine messages

As it turns out, this concept of combine messages isn't useful just for saving bandwidth (as in the chown() case, above). It's also critical for ensuring atomic completion of operations.

Suppose the client process has two or more threads and one file descriptor. One of the threads in the client does an lseek() followed by a read(). Everything is as we expect it. But if another thread in the client does the same pair of operations on the same file descriptor, we'd run into problems. Since the lseek() and read() functions don't know about each other, it's possible that the first thread would do its lseek(), and then get preempted by the second thread. The second thread gets to do its lseek(), and then its read(), before giving up the CPU. The problem is that since the two threads are sharing the same file descriptor, the first thread's lseek() offset is now in the wrong place: it's at the position left behind by the second thread's read()! This is also a problem with file descriptors that are dup()'d across processes, let alone across the network.

An obvious solution to this is to put the lseek() and read() functions within a mutex: when the first thread obtains the mutex, we know that it has exclusive access to the file descriptor. The second thread has to wait until it can acquire the mutex before it can go and mess around with the position of the file descriptor.

April 30, 2009 Chapter 5 • Resource Managers 217
