Synergy User Manual and Tutorial
MPI_Init(&argc, &argv);
where argc is the number of arguments and argv is the vector of argument strings; both should be taken from the command line, because the same program will be used for both the master and worker processes in the example application. After initialization, a program must call MPI_Comm_rank() to learn its rank (its process number) and thereby determine whether it is the master or a worker process. The master will be process number 0. The function call is:
MPI_Comm_rank(MPI_Comm comm, int* rank);
where comm is a communicator defined in MPI's libraries and rank is a pointer to an integer that receives this process' rank. It may also be necessary for an application to determine the number of currently running processes. The MPI_Comm_size() function returns this number. The function call is:
MPI_Comm_size(MPI_Comm comm, int* size);
where comm is a communicator defined in MPI's libraries and size is a pointer to an integer that receives the number of processes. To send a message to another process, the MPI_Send() function is used as follows:
MPI_Send(void* msg, int count, MPI_Datatype type, int dest, int tag, MPI_Comm comm);
where msg is a message buffer, count is the number of elements being sent (for a string, strlen(msg)+1, which includes the null terminator), type is the data type of the message as defined by MPI's libraries, dest is an integer holding the process number of the destination, tag is an integer holding the message tag, and comm is a communicator defined in MPI's libraries. This is a blocking send: it will not return until the message buffer can safely be reused. To receive a message, the MPI_Recv() function is used as follows:
MPI_Recv(void* msg, int size, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Status* status);
where msg is a message buffer, size is an integer holding the actual size of the receiving buffer, type is the data type of the message as defined by MPI's libraries, source is an integer holding the process number of the source, tag is an integer holding the message tag, comm is a communicator defined in MPI's libraries, and status holds data about the receive operation. To end an MPI application session, the MPI_Finalize() function is called:
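Putting the calls described in this section together, the following is a minimal master/worker sketch. The message text, the buffer size, and the tag value 0 are illustrative assumptions, not part of the example application described above.

```c
/* Minimal master/worker sketch using MPI_Init, MPI_Comm_rank,
   MPI_Comm_size, MPI_Send, MPI_Recv, and MPI_Finalize.
   Message contents, buffer size, and tag 0 are illustrative. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char* argv[]) {
    int rank, size;
    char msg[64];
    MPI_Status status;

    MPI_Init(&argc, &argv);                 /* start the MPI session        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process' rank           */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes    */

    if (rank == 0) {
        /* Master (process 0): receive one message from each worker. */
        int src;
        for (src = 1; src < size; src++) {
            MPI_Recv(msg, sizeof(msg), MPI_CHAR, src, 0,
                     MPI_COMM_WORLD, &status);
            printf("master received: %s\n", msg);
        }
    } else {
        /* Worker: send a null-terminated string to the master. */
        sprintf(msg, "hello from worker %d", rank);
        MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();                         /* end the MPI session          */
    return 0;
}
```

Compiled with an MPI wrapper compiler (e.g. mpicc) and launched with four processes, the master prints one line per worker. Note that the workers use MPI_CHAR with a count of strlen(msg)+1, matching the string-sending convention described above.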