

all memories (possibly stores each cell of an array or each column of a matrix in a different memory area). These machines are designed to execute arithmetic and data operations on a large number of data elements very quickly. A vector machine can perform operations in constant time if the length of the vectors (arrays) does not exceed the number of data processors. Most supercomputers used for scientific computing in the 1980s and 1990s were based on this architecture.
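
As an illustration (not part of the original text), element-wise vector addition is the kind of data-parallel operation such a machine carries out across its data processors; the sequential C loop below is only a sketch of what the hardware performs in roughly one step when n does not exceed the number of data processors:

    #include <stddef.h>

    /* Element-wise vector addition: on a vector machine each iteration is
     * handled by a separate data processor, so the whole loop can finish
     * in (roughly) constant time when n <= number of data processors.
     * This plain loop is illustrative only. */
    void vector_add(const float *a, const float *b, float *c, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }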

Multiple instruction streams/single data stream (MISD) systems have multiple instruction processors and a single data processor. Few of these machines have been produced, and they have had no commercial success.

Multiple instruction streams/multiple data streams (MIMD) systems have multiple instruction processors and multiple data processors. MIMD systems are diverse, ranging from machines built from inexpensive off-the-shelf components to much more expensive interconnected vector processors, and many other configurations. Computers over a network that cooperate simultaneously to complete a single task are MIMD systems. Computers that have two or more independent processors are another example. A machine with multiple independent processors has the ability to perform more than one task simultaneously. lvii
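
As a small illustration (not from the original manual), the POSIX-threads sketch below runs two different instruction streams, each on its own data, which is the MIMD pattern just described when the threads are scheduled on independent processors:

    #include <pthread.h>
    #include <stdio.h>

    /* Two different instruction streams operating on different data sets:
     * the essence of MIMD execution on a multi-processor machine. */
    static void *sum_task(void *arg)
    {
        int *data = arg, total = 0;
        for (int i = 0; i < 4; i++)
            total += data[i];
        printf("sum = %d\n", total);
        return NULL;
    }

    static void *max_task(void *arg)
    {
        int *data = arg, best = data[0];
        for (int i = 1; i < 4; i++)
            if (data[i] > best)
                best = data[i];
        printf("max = %d\n", best);
        return NULL;
    }

    int main(void)
    {
        int a[4] = {1, 2, 3, 4}, b[4] = {7, 5, 9, 6};
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_task, a);   /* task 1 on data set a */
        pthread_create(&t2, NULL, max_task, b);   /* task 2 on data set b */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }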

There are three types of performance gains that a parallel processing solution using n processors can deliver (the speedup arithmetic is sketched in code after this list):

• Sub-linear speedup is when the increase in speed is less than n
  o e.g. five processors yield only a 3x speedup
• Linear speedup is when the increase is equal to n
  o e.g. five processors yield a 5x speedup
• Super-linear speedup is when the increase is greater than n
  o e.g. five processors yield a 7x speedup
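
A minimal sketch (illustrative only, not from the manual) of the arithmetic behind these categories, where speedup is the one-processor time divided by the n-processor time:

    #include <stdio.h>

    /* speedup(n) = T(1) / T(n); compare it with n to classify the gain. */
    static const char *classify(double t1, double tn, int n)
    {
        double speedup = t1 / tn;
        if (speedup < n) return "sub-linear";
        if (speedup > n) return "super-linear";
        return "linear";
    }

    int main(void)
    {
        /* Five processors, 100 s serial time, 33 s parallel time: ~3x speedup. */
        printf("%s\n", classify(100.0, 33.0, 5));   /* prints "sub-linear" */
        return 0;
    }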

Generally, linear or faster speedup is very hard to achieve because of the sequential nature of most algorithms. Parallel algorithms must be designed to take advantage of parallel hardware. Parallel systems may have one shared memory area to which all processors have access. In shared memory systems, care must be taken to design parallel algorithms that ensure mutual exclusion, which protects data from being corrupted when it is operated on by more than one processor (a short mutex sketch follows the example below). The results of parallel operations should be determinate, meaning they should be the same as if they were produced by a sequential algorithm. As an example, suppose two processors write to the same variable x in memory such that:

• Processor 1 reads: x
• Processor 2 reads: x
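
A minimal POSIX-threads sketch (an illustration, not the Synergy mechanism) of how a mutex enforces mutual exclusion so that two processors updating the same variable x always produce the determinate result:

    #include <pthread.h>
    #include <stdio.h>

    static int x = 0;
    static pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each thread increments x; the mutex makes the read-modify-write
     * atomic, so two increments always yield x + 2, never x + 1. */
    static void *increment(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&x_lock);
        x = x + 1;                    /* read x, add 1, write x back */
        pthread_mutex_unlock(&x_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t p1, p2;
        pthread_create(&p1, NULL, increment, NULL);
        pthread_create(&p2, NULL, increment, NULL);
        pthread_join(p1, NULL);
        pthread_join(p2, NULL);
        printf("x = %d\n", x);        /* always prints x = 2 */
        return 0;
    }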

