COPYRIGHT 2008, PRINCETON UNIVERSITY PRESS
lengths and sending the pieces to nodes. Likewise, MPI_Gather() gathers data from every node (including the root node) and places it in an array, with data from node 0 placed first, followed by node 1, and so on. A similar function, MPI_Allgather(), stores the gathered data on every node rather than just on the root node.

D.7 Bootable Cluster CD ⊙

One of the difficulties in learning parallel computing is the need for a parallel computer. Even though there may be many computers around that you may be able to use, knitting them all together into a parallel machine takes time and effort. However, if your interest is in learning about and experiencing distributed parallel computing, and not in setting up one of the fastest research machines in the world, then there is an easy way. It is called a bootable cluster CD (BCCD) and comes as a file on a CD. When you start your computer with the CD in place, you are given the option of having the computer ignore your regular operating system and instead boot from the CD into a preconfigured distributed-computing environment. The new system does not change your system but rather is a nondestructive overlay on top of the existing hardware that runs a full-fledged parallel computing environment on just about any workstation-class system, including Macs. You boot up every machine you wish to have in your cluster this way and, if needed, set up the domain name system (DNS) and dynamic host configuration protocol (DHCP) servers, which are also included. Did we mention that the system is free? [BCCD]

D.8 Parallel Computing Exercises

1. Bifurcation plot: If you have not yet done so, take the program you wrote to generate the bifurcation plot for bug populations and run different ranges of µ values simultaneously on several CPUs.
2. Processor ring: Write a program in which
   a. a set of processors is arranged in a ring,
   b. each processor stores its rank in MPI_COMM_WORLD in an integer,
   c. each processor passes this rank on to its neighbor on the right,
   d. each processor keeps passing until it receives its rank back.
3. Ping pong: Write a program in which two processors repeatedly pass a message back and forth. Insert timing calls to measure the time taken for one message, and determine how that time varies with the size of the message.
4. Broadcast: Have processor 1 send the same message to all the other processors and then receive messages of the same length from all the other processors. How does the time taken vary with the size of the messages and the number of processors?
