
Synergy User Manual and Tutorial

Existing Tools for Parallel Processing

The parallel programming systems discussed, PVM, MPI, and Linda, are implemented as libraries of function calls that are coded directly into C or Fortran source code and compiled. Two primary types of communication are used: message passing (PVM and MPI) and tuple space (Linda and Synergy). In message passing, a participating process may send messages directly to any other process, which is somewhat similar to inter-process communication (IPC) in the Linux/UNIX operating system. In fact, both message passing and tuple space systems are implemented with sockets in the Linux/UNIX environment. A tuple space is a type of distributed shared memory used by participating processes to hold messages; these messages can be posted or obtained by any of the participants. All of these programs rely on “master” and “worker” designations: the master is generally responsible for breaking the task into pieces and for assembling the results, while the workers are responsible for completing their pieces of the task. These systems communicate over computer networks and typically use some type of middleware to facilitate cooperation between machines, such as the cluster discussed below.
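To make the message-passing style concrete, the following C program is a minimal master/worker sketch written against the MPI library. It is an illustration, not code from the Synergy distribution, and the task itself (squaring integers) is invented for this example. Rank 0 plays the master, breaking the task into pieces and assembling the results; every other rank plays a worker.

    /* Minimal MPI master/worker sketch (illustrative only).
     * Build: mpicc example.c -o example
     * Run:   mpirun -np 4 ./example */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                    /* master */
            int i, piece, result, total = 0;
            for (i = 1; i < size; i++) {    /* break the task into pieces */
                piece = i * 10;
                MPI_Send(&piece, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
            }
            for (i = 1; i < size; i++) {    /* assemble the results */
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                total += result;
            }
            printf("total = %d\n", total);
        } else {                            /* worker */
            int piece, result;
            MPI_Recv(&piece, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            result = piece * piece;         /* complete this piece of the task */
            MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Note that the master addresses each worker directly by rank; this direct process-to-process addressing is what distinguishes message passing from the tuple-space style.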
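The tuple-space style can be illustrated with a toy, single-process model of the space itself. The store below is just an in-memory array, and the function names ts_put and ts_get are invented for this sketch; real systems such as Linda (with its out and in operations) and Synergy distribute the store across the network. The sketch captures only the semantics described above: any participant may post a tuple, and any participant may obtain a tuple that matches.

    /* Toy single-process model of a tuple space (names are invented
     * for this sketch; this is not the Linda or Synergy API). */
    #include <stdio.h>
    #include <string.h>

    #define MAX_TUPLES 16

    struct tuple { char tag[32]; int value; int used; };
    static struct tuple space[MAX_TUPLES];

    /* Post a (tag, value) tuple into the space. */
    static int ts_put(const char *tag, int value)
    {
        for (int i = 0; i < MAX_TUPLES; i++)
            if (!space[i].used) {
                strncpy(space[i].tag, tag, sizeof space[i].tag - 1);
                space[i].value = value;
                space[i].used = 1;
                return 0;
            }
        return -1;  /* space is full */
    }

    /* Obtain, and remove, the first tuple whose tag matches. */
    static int ts_get(const char *tag, int *value)
    {
        for (int i = 0; i < MAX_TUPLES; i++)
            if (space[i].used && strcmp(space[i].tag, tag) == 0) {
                *value = space[i].value;
                space[i].used = 0;
                return 0;
            }
        return -1;  /* no matching tuple */
    }

    int main(void)
    {
        int v;
        ts_put("work", 42);           /* a master posts a piece of work */
        if (ts_get("work", &v) == 0)  /* any worker may obtain it */
            printf("got work item: %d\n", v);
        return 0;
    }

Because tuples are matched by content rather than by destination address, the posting process does not need to know which process will obtain the tuple; this anonymity is what lets tuple-space systems spread work across however many workers participate.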

Computer Clusters

Computer clusters, sometimes referred to as server farms, are groups of connected computers that form a parallel computer by working together to complete tasks. Clusters were originally developed in the 1980s by Digital Equipment Corporation (DEC) to facilitate parallel computing and file and peripheral device sharing. An example of a cluster would be a Linux network with middleware software to implement the parallelism. Well-established cluster systems have procedures to eliminate single points of failure, providing some level of fault tolerance. The four major types of clusters are:

• Director-based clusters: one machine directs or controls the behavior of the cluster, an arrangement usually implemented to enhance performance
• Two-node clusters: two nodes perform the same part of the task, or one serves as a backup in case the other fails, to ensure fault tolerance
• Multi-node clusters: may have tens of clustered machines, which are usually on the same network
• Massively parallel clusters: may have hundreds or thousands of machines on many networks

Currently, the fastest supercomputing cluster is the Earth Simulator at 35.86 TFlops, which is 15 TFlops faster than the second-place machine. The main reason for cluster-based

