More on synchronization


… and the process manager, which worries about memory protection and “processes” (among other things). A mutex is really just a synchronization object used between threads. Since the kernel worries only about threads, it really doesn’t care that the threads are operating in different processes — this is an issue for the process manager. So, if you’ve set up a shared memory area between two processes, and you’ve initialized a mutex in that shared memory, there’s nothing stopping you from synchronizing multiple threads in those two (or more!) processes via the mutex. The same pthread_mutex_lock() and pthread_mutex_unlock() functions will still work.
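Here’s a rough sketch of the idea (the shared-memory name “/mutex_demo” is arbitrary, and error checking is left out to keep it short): one process creates and initializes the mutex, and any other process that maps the same shared-memory object can then lock and unlock it with the usual calls.

#include <fcntl.h>
#include <sys/mman.h>
#include <pthread.h>
#include <unistd.h>

int
main (void)
{
    /* create a shared memory object big enough to hold the mutex */
    int fd = shm_open ("/mutex_demo", O_RDWR | O_CREAT, 0666);
    ftruncate (fd, sizeof (pthread_mutex_t));

    /* map it; a second process doing the same shm_open()/mmap()
       sees the very same mutex */
    pthread_mutex_t *mtx = mmap (0, sizeof (*mtx),
                                 PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);

    /* one process (only) initializes the mutex as process-shared */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init (&attr);
    pthread_mutexattr_setpshared (&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init (mtx, &attr);

    /* from here on, any thread in any process that mapped the
       object synchronizes with the usual calls */
    pthread_mutex_lock (mtx);
    /* ... touch the shared data ... */
    pthread_mutex_unlock (mtx);

    return 0;
}

Note that we set PTHREAD_PROCESS_SHARED explicitly; that’s the portable POSIX way of saying the mutex will be used across processes.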

Pools of threads

Another thing that Neutrino has added is the concept of thread pools. You’ll often notice in your programs that you want to be able to run a certain number of threads, but you also want to be able to control the behavior of those threads within certain limits. For example, in a server you may decide that initially just one thread should be blocked, waiting for a message from a client. When that thread gets a message and is off servicing a request, you may decide that it would be a good idea to create another thread, so that it could be blocked waiting in case another request arrived. This second thread would then be available to handle that request. And so on. After a while, when the requests had been serviced, you would now have a large number of threads sitting around, waiting for further requests. In order to conserve resources, you may decide to kill off some of those “extra” threads.

This is in fact a common operation, and Neutrino provides a library to help with this. We’ll see the thread pool functions again in the Resource Managers chapter.

It’s important for the discussions that follow to realize there are really two distinct operations that threads (that are used in thread pools) perform:

• a blocking (waiting) operation
• a processing operation

The blocking operation doesn’t generally consume CPU. In a typical server, this is where the thread is waiting for a message to arrive. Contrast that with the processing operation, where the thread may or may not be consuming CPU (depending on how the process is structured). In the thread pool functions that we’ll look at later, you’ll see that we have the ability to control the number of threads in the blocking operation as well as the number of threads that are in the processing operations.

Neutrino provides the following functions to deal with thread pools:

#include <sys/dispatch.h>

thread_pool_t *
thread_pool_create (thread_pool_attr_t *attr,
                    unsigned flags);

int
thread_pool_destroy (thread_pool_t *pool);

int
thread_pool_start (void *pool);
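Just to give you a feel for how these functions hang together, here’s a minimal sketch of a server that uses them. The dispatch_* callbacks, the POOL_FLAG_EXIT_SELF flag, and the water-mark values are typical choices from the resource manager framework that we’ll cover later; take them as placeholders rather than requirements.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define THREAD_POOL_PARAM_T dispatch_context_t
#include <sys/iofunc.h>
#include <sys/dispatch.h>

int
main (void)
{
    dispatch_t *dpp;
    thread_pool_attr_t pool_attr;
    thread_pool_t *tpp;

    /* the dispatch handle is what the blocking operation waits on;
       a real server would also attach a pathname (resmgr_attach())
       before starting the pool */
    if ((dpp = dispatch_create ()) == NULL) {
        perror ("dispatch_create");
        return (EXIT_FAILURE);
    }

    memset (&pool_attr, 0, sizeof (pool_attr));
    pool_attr.handle = dpp;

    /* the blocking operation and the processing operation */
    pool_attr.block_func = dispatch_block;
    pool_attr.unblock_func = dispatch_unblock;
    pool_attr.handler_func = dispatch_handler;
    pool_attr.context_alloc = dispatch_context_alloc;
    pool_attr.context_free = dispatch_context_free;

    /* how many threads sit blocked, and how big the pool can get */
    pool_attr.lo_water = 2;     /* keep at least 2 threads blocked, waiting */
    pool_attr.hi_water = 4;     /* never more than 4 threads blocked */
    pool_attr.increment = 1;    /* create 1 new thread at a time */
    pool_attr.maximum = 50;     /* hard cap on the total number of threads */

    if ((tpp = thread_pool_create (&pool_attr, POOL_FLAG_EXIT_SELF)) == NULL) {
        perror ("thread_pool_create");
        return (EXIT_FAILURE);
    }

    /* with POOL_FLAG_EXIT_SELF, this call doesn't return */
    thread_pool_start (tpp);
    return (EXIT_SUCCESS);
}

The lo_water and hi_water members are what give you the control we just talked about: they bound how many threads are allowed to sit in the blocking operation, while maximum caps the pool as a whole.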

