COPYRIGHT 2008, PRINCETON UNIVERSITY PRESS

Appendix D
Basic MPI Commands

• MPI_Send: Sends a message.
• MPI_Recv: Receives a message.
• MPI_Sendrecv: Sends and receives a message.
• MPI_Init: Starts MPI at the beginning of the program.
• MPI_Finalize: Stops MPI at the end of the program.
• MPI_Comm_rank: Determines a node's rank.
• MPI_Comm_size: Determines the number of nodes in the communicator.
• MPI_Get_processor_name: Determines the name of the processor.
• MPI_Wtime: Returns the wall time in seconds since an arbitrary time in the past.
• MPI_Barrier: Blocks until all the nodes have called this function.

Collective Communication

• MPI_Reduce: Performs an operation on all copies of a variable and stores the result on a single node.
• MPI_Allreduce: Like MPI_Reduce, but stores the result on all the nodes.
• MPI_Gather: Gathers data from a group of nodes and stores them on one node.
• MPI_Allgather: Like MPI_Gather, but stores the result on all the nodes.
• MPI_Scatter: Sends different data to all the other nodes (opposite of MPI_Gather).
• MPI_Bcast: Sends the same message to all the other processors.

Nonblocking Communication

• MPI_Isend: Begins a nonblocking send.
• MPI_Irecv: Begins a nonblocking receive.
• MPI_Wait: Waits for an MPI send or receive to be completed.
• MPI_Test: Tests for the completion of a send or receive.
