COPYRIGHT 2008, PRINCETON UNIVERSITY PRESS

Appendix D: An MPI Tutorial

In this appendix we present a tutorial on the use of MPI on a small Beowulf cluster composed of Unix or Linux computers.¹ This follows our philosophy of "learning while doing." Our presentation is meant to help the user from the ground up, something that might not be needed if you were working at a central computing center with a reasonable level of support. Although your problem is still to take the program you have written to generate the bifurcation plot for bug populations and run different ranges of µ values simultaneously on several CPUs, in a more immediate sense your task is to gain experience running MPI, to understand some of the MPI commands within the programs, and then to run a timing experiment. In §D.9 at the end of the appendix we give a listing and a brief description of the MPI commands and data types. General information about MPI is given in [MPI], detailed information about the syntax of MPI commands appears in [MPI2], and other useful material can be found in [MPImis]. The standard reference on the C language is [K&R 88], although we prefer [OR]. MPI is very much the standard software protocol for parallel computing and is at a higher level than its predecessor PVM [PVM] (which has its own tutorial on the CD).

While in the past we have run Java programs with a version of MPI, the difference in communication protocols used by MPI and Java has led to poor performance or to additional complications needed to improve performance [Fox 03]. In addition, you usually would not bother parallelizing a program unless it requires very large amounts of computing time, and such programs are usually written in Fortran or C (both for historical reasons and because Java is slower). So it makes sense for us to use Fortran or C for our MPI examples.
We will use C because it is similar to Java.

D.1 Running on a Beowulf

A Beowulf cluster is a collection of independent computers, each with its own memory and operating system, that are connected to each other by a fast communication network over which messages are exchanged among processors. MPI is a library of commands that makes communication between programs running on the different computers possible. The messages are sent as data contained in arrays. Because different processors do not directly access the memory on some other computer, when a variable is changed on one computer, it is not changed automatically on the others.

¹ This material was developed with the help of Kristopher Wieland, Kevin Kyle, Dona Hertel, and Phil Carter. Some of the other materials derive from class notes from the Ohio Supercomputer Center, which were written in part by Steve Gordon.
