Figure D.1 A schematic view of a cluster (cloud) connected to front-end machines (box).

…the other copies of the program running on other processors. This is an example of where MPI comes into play.

In Figure D.1 we show a typical, but not universal, configuration for a Beowulf cluster. Almost all clusters have the common features of using MPI for communication among computers and Unix/Linux for the operating system. The cluster in Figure D.1 is shown within a cloud. The cloud symbolizes the grouping and connection of what are still independent computers communicating via MPI (the lines). The MPI_COMM_WORLD within the cloud is the default MPI communicator, containing all the processes that are allowed to communicate with each other (in this case six); a minimal program that uses this communicator is sketched at the end of this section. The box in Figure D.1 represents the front-end or submit hosts. These are the computers from which users submit their jobs to the Beowulf and on which they later work with the output from the Beowulf. We have placed the front-end computers outside the Beowulf cloud, although they could be within it. This type of configuration frees the Beowulf cluster from administrative chores so that it can concentrate on number crunching, and it is useful when there are multiple users on the Beowulf.

Finally, note that we have placed the letters "Sched" within the front-end machines. This represents a configuration in which these computers also run some type of scheduler, grid engine, or queueing system that oversees the running of the MPI jobs submitted by a number of users. For instance, if we have a cluster of 20 computers and user A requests 10 machines while user B requests 8, then the grid engine will permit both users to run simultaneously and will assign their jobs to different computers. However, if user A requests 16 machines and user B requests 8, then the grid engine will make one of the users wait until the other finishes.

Some setup is required before you can run MPI on several computers at once. If someone has already done this for you, then you may skip the rest of this section and move on to § D.3. Our instructions have been run on a cluster of Sun computers running Solaris Unix (in a later section we discuss how to do this using the Torque scheduler on a Linux system). You will have to change the computer names and such for your purposes, but the steps should remain the same.

• First you need to have an active account on each of the computers in the Beowulf cluster.
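To make the role of MPI_COMM_WORLD in the discussion above concrete, here is a minimal sketch of an MPI program in C. It is not one of the programs from this appendix: the file name hello_mpi.c is only illustrative, and the sketch assumes an installed MPI library with the standard C bindings. Every process asks the communicator for its own rank and for the total number of processes, and process 1 sends a single message to process 0 over "the lines" of Figure D.1.

/* hello_mpi.c: a minimal MPI sketch (illustrative, not from the text).
   Each process reports its rank within MPI_COMM_WORLD, and process 1
   sends one message to process 0. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, msg;

    MPI_Init(&argc, &argv);                  /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes in all? */

    printf("Process %d of %d reporting\n", rank, size);

    if (size > 1) {                          /* one message along "the lines" */
        if (rank == 1) {
            msg = rank;
            MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Process 0 received a message from process %d\n", msg);
        }
    }

    MPI_Finalize();                          /* shut down MPI */
    return 0;
}

On most installations such a program is compiled with the MPI wrapper compiler (for example, mpicc hello_mpi.c -o hello_mpi) and launched with something along the lines of mpirun -np 4 hello_mpi, although the exact commands depend on the MPI implementation and on the scheduler discussed above; each process then prints its rank, and process 0 additionally reports the message received from process 1.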
