U. Glaeser

In the message passing model, inter-thread communication occurs only through explicit I/O operations called messages. That is, inter-thread communication is integrated at the I/O level rather than at the memory level. Messages are of two kinds, send and receive, together with their variants. The combination of a send and a matching receive accomplishes a pairwise synchronization event, and several variants of this synchronization event are possible. Message passing has long been used as a means of communication and synchronization among cooperating processes; operating system facilities such as sockets serve precisely this purpose.
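The send/receive pairing described above can be sketched with Python's standard library. This is an illustration, not part of the original text: the `channel` queue stands in for the I/O channel, `put` plays the role of send, and the blocking `get` plays the role of the matching receive, which gives the pairwise synchronization.

```python
# A minimal sketch of the message passing model. The queue is the channel;
# queue.put is the "send" and the blocking queue.get is the matching
# "receive": the receiver cannot proceed until a send has occurred.
import queue
import threading

channel = queue.Queue()   # the communication channel (illustrative name)
results = []

def producer():
    channel.put("hello")  # send: make a message available on the channel

def consumer():
    msg = channel.get()   # receive: blocks until a matching send arrives
    results.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start()                # the receiver may start first; it simply waits
t1.start()
t1.join()
t2.join()
print(results)            # -> ['hello']
```

Because `get` blocks, starting the consumer before the producer is safe; the receive completes only once the send has happened, which is exactly the pairwise synchronization event the text describes.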

Inter-Thread Synchronization

Synchronization involves coordinating the results of a set of parallel threads into some merged result. An example is waiting for one thread to finish filling a buffer before another begins using the data. Synchronization is achieved in different ways:

• Control Synchronization: Control synchronization depends only on the threads’ control state and is not affected by the threads’ data state. This synchronization method requires a thread to wait until one or more other threads reach a particular control point. Examples of control synchronization operations are barriers and critical sections. With barrier synchronization, all parallel threads share a common barrier point; each thread is allowed to proceed past the barrier only after all of the spawned threads have reached it. This type of synchronization is typically used when the results generated by the spawned threads need to be merged. With critical section synchronization, only one thread is allowed to enter the critical section code at a time. Thus, when a thread reaches a critical section, it waits if another thread is currently executing the same critical section code.

• Data Synchronization: Data synchronization depends on the threads’ data values. This synchronization method requires a thread to wait at a point until a shared name is updated with a particular value (by another thread). For instance, a thread executing a wait (x == 0) statement is delayed until x becomes zero. Data synchronization operations are typically used to implement locks, monitors, and events, which, in turn, can be used to implement atomic operations and critical sections. When a thread executes a sequence of operations as an atomic operation, other threads cannot access any of the (shared) names updated during the atomic operation until the atomic operation has completed.
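The three mechanisms above can be sketched with Python's threading primitives. This is an illustrative sketch, not from the text: `Barrier` provides control synchronization, `Lock` implements a critical section, and `Condition.wait_for` implements the wait (x == 0) style of data synchronization.

```python
# Control synchronization (Barrier), a critical section (Lock), and data
# synchronization (Condition waiting until a shared name becomes zero).
import threading

N = 3
barrier = threading.Barrier(N)   # all N threads must arrive before any proceeds
lock = threading.Lock()          # guards the critical section
cond = threading.Condition()
shared = {"x": 1, "total": 0, "order": []}

def worker(i):
    # Critical section: only one thread at a time updates "total".
    with lock:
        shared["total"] += i
    # Barrier: wait until every thread has finished its update.
    barrier.wait()
    # Past the barrier, every thread sees the fully merged result.
    shared["order"].append(shared["total"])

def waiter():
    # Data synchronization: the wait (x == 0) statement from the text.
    with cond:
        cond.wait_for(lambda: shared["x"] == 0)
        shared["saw_zero"] = True

w = threading.Thread(target=waiter)
w.start()
workers = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in workers:
    t.start()
for t in workers:
    t.join()

with cond:                       # update the shared name, then notify waiters
    shared["x"] = 0
    cond.notify_all()
w.join()

print(shared["total"], shared["order"])   # -> 3 [3, 3, 3]
```

Because the barrier releases no thread until all three have arrived, every thread observes the merged total (0 + 1 + 2 = 3) after the barrier, and the waiter proceeds only once another thread has set x to zero.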

Coherence and Consistency

The last aspect of the parallel programming model that we will consider is coherence and consistency when threads share a name space. Coherence specifies that the value obtained by a read to a shared location should be the latest value written to that location. Notice that when a read and a write are present in two parallel threads, coherence does not specify any ordering between them. It merely states that if one thread sees an updated value at a particular time, all other threads must also see the updated value from that time onward (until another update happens to the same location).

The consistency model determines the time at which a written value will be made visible to other threads. It specifies constraints on the order in which operations to the shared space must appear to be performed (i.e., become visible to other threads) with respect to one another. This includes operations to the same location or to different locations, and by the same thread or by different threads. Thus, every transaction (or set of parallel transactions) transfers a collection of threads from one consistent state to another; exactly what is consistent depends on the consistency model. Several consistency models have been proposed:

• Sequential Consistency: This is the most intuitive consistency model. Under sequential consistency, the reads and writes to a shared address space from all threads must appear to execute serially in such a manner as to conform to the program order in each individual thread. This implies that the overall order of memory accesses must preserve the order within each thread, regardless of how instructions from different threads are interleaved. A multiprocessor system is sequentially consistent if it always produces results that are the same as what could be obtained when the operations of all threads
© 2002 by CRC Press LLC
