
4–Running QLogic MPI on QLogic Adapters

Introduction

PSM

The PSM TrueScale Messaging API, or PSM API, is QLogic's low-level user-level communications interface for the TrueScale family of products. Other than using some environment variables with the PSM prefix, MPI users typically need not interact directly with PSM. The PSM environment variables apply to other MPI implementations as long as the environment with the PSM variables is correctly forwarded. See “Environment Variables” on page 4-20 for a summary of the commonly used environment variables.
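As a quick check that the environment is being forwarded, a minimal sketch (not taken from this guide) can print the value of a PSM environment variable on every rank. PSM_DEVICES is used here only as an illustrative variable name; see “Environment Variables” on page 4-20 for the variables that apply to your installation.

    /* Sketch: verify that a PSM environment variable is visible on every rank.
     * PSM_DEVICES is an illustrative name; consult "Environment Variables"
     * on page 4-20 for the variables relevant to your setup. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank;
        const char *val;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        val = getenv("PSM_DEVICES");
        printf("rank %d: PSM_DEVICES=%s\n", rank, val ? val : "(not set)");

        MPI_Finalize();
        return 0;
    }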

For more information on PSM, email QLogic at support@qlogic.com.

Other MPIs

In addition to QLogic MPI, other high-performance MPIs such as HP-MPI version 2.3, Open MPI version 1.4, Ohio State University MVAPICH version 1.2, MVAPICH2 version 1.4, and Scali (Platform) MPI have been ported to the PSM interface.

Open MPI, MVAPICH, HP-MPI, and Scali also run over InfiniBand Verbs (the OpenFabrics Alliance API that provides support for user-level upper-layer protocols like MPI). Intel MPI, although not ported to the PSM interface, is supported over uDAPL, which uses InfiniBand Verbs. For more information, see Section 5, Using Other MPIs.

Linux File I/O in MPI Programs

MPI node programs are Linux programs that can execute file I/O operations on local or remote files in the usual ways, through the APIs of the language in use. Remote files are accessed via a network file system, typically NFS. Parallel programs usually need some data in files to be shared by all of the processes of an MPI job. Node programs can also use non-shared, node-specific files, for example as scratch storage for intermediate results or for a node's share of a distributed database.
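As an illustration of the second case (a sketch with assumed path and file names, not code from this guide), each process can write its intermediate results to a rank-specific scratch file on node-local storage:

    /* Sketch: each MPI process writes intermediate results to its own,
     * non-shared scratch file, named by rank. The /tmp path and file name
     * are illustrative only. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        char path[64];
        FILE *fp;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        snprintf(path, sizeof(path), "/tmp/scratch.%d", rank); /* node-local file */
        fp = fopen(path, "w");
        if (fp != NULL) {
            fprintf(fp, "intermediate results for rank %d\n", rank);
            fclose(fp);
        }

        MPI_Finalize();
        return 0;
    }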

There are different ways of handling file I/O of shared data in parallel programming. One process, typically on the front-end node or on a file server, may be the only process that touches the shared files; it passes data to and from the other processes via MPI messages. Alternatively, the shared data files can be accessed directly by each node program. In this case, the shared files are available through some network file support, such as NFS, and the application programmer is responsible for ensuring file consistency, either through proper use of the file-locking mechanisms offered by the operating system and the programming language, such as fcntl in C, or by using MPI synchronization operations.
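The first approach can be sketched as follows (illustrative C code, assuming a shared input file named input.dat): rank 0 is the only process that reads the shared file, and it distributes the contents to the other ranks with MPI_Bcast.

    /* Sketch: a single process (rank 0) reads the shared input file and
     * distributes its contents to all ranks with MPI_Bcast, so only one
     * process touches the shared file. The file name is illustrative. */
    #include <mpi.h>
    #include <stdio.h>

    #define BUFSIZE 4096

    int main(int argc, char **argv)
    {
        int rank;
        long nbytes = 0;
        char buf[BUFSIZE];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            FILE *fp = fopen("input.dat", "rb"); /* shared file, e.g. on NFS */
            if (fp != NULL) {
                nbytes = (long)fread(buf, 1, BUFSIZE, fp);
                fclose(fp);
            }
        }

        /* Broadcast the length first, then the data itself. */
        MPI_Bcast(&nbytes, 1, MPI_LONG, 0, MPI_COMM_WORLD);
        MPI_Bcast(buf, (int)nbytes, MPI_BYTE, 0, MPI_COMM_WORLD);

        printf("rank %d received %ld bytes\n", rank, nbytes);

        MPI_Finalize();
        return 0;
    }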

