Synergy User Manual and Tutorial. - THE CORE MEMORY
• Processor 1 writes: x = x + 1
• Processor 2 writes: x = x - 1
Depending on the ordering of the reads and writes, the resulting value could be x - 1, x + 1, or x. This is a race condition, and it is extremely undesirable because the result depends on chance. Synchronization primitives, such as semaphores and monitors, aid in resolving conflicts caused by race conditions. The shared memory may reside in a single machine with more than one processor, or it may be a distributed shared memory, where individual computers access the same memory area(s) located on other computer(s) on the network.
Parallel processors must use some means to communicate. In a single computer with multiple processors, this is done over the system bus and through shared memory. When multiple machines are involved, communication can be implemented over a network using either message passing or a distributed shared memory.
Cost is a very important consideration in distributed computing. A parallel system with n processors is cheaper to build than a single processor that is n times faster. For tasks that must be completed quickly and can be divided among multiple threads of execution with minimal interaction, parallel processing is an excellent solution. Many high-performance and supercomputing machines have parallel processing architectures. The parallel implementations discussed in the remainder of this book are based on distributed computing, as opposed to single machines with multiple processors.