QLogic OFED+ Host Software User Guide, Rev. B
4–Running QLogic MPI on QLogic Adapters
QLogic MPI Details
For example, if you are running two different jobs on nodes using the QLE7140, set PSM_SHAREDCONTEXTS_MAX to 2 instead of the default 4. Each job would then have at most two of the four available hardware contexts. Both of the jobs that want to share a node must set PSM_SHAREDCONTEXTS_MAX=2 on that node before sharing begins.

However, setting PSM_SHAREDCONTEXTS_MAX=2 as a clusterwide default would unnecessarily penalize nodes that are dedicated to running single jobs. A per-node setting, or some level of coordination with the job scheduler in setting the environment variable, is therefore recommended. If some nodes have more cores than others, the setting must be adjusted appropriately for the number of cores on each node.
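The arithmetic above can be scripted per node. A minimal sketch, assuming a site policy of two jobs per node and the QLE7140's four hardware contexts (both values are assumptions to adapt for your nodes and scheduler):

```shell
# Sketch: divide the adapter's hardware contexts evenly among the jobs
# expected to share this node. JOBS_PER_NODE is an assumed site policy;
# HW_CONTEXTS is the QLE7140 default mentioned above.
JOBS_PER_NODE=2
HW_CONTEXTS=4
export PSM_SHAREDCONTEXTS_MAX=$((HW_CONTEXTS / JOBS_PER_NODE))
echo "$PSM_SHAREDCONTEXTS_MAX"
```

Each job sharing the node must export the same value before its processes start, for example from a scheduler prolog script.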
Additionally, you can explicitly configure the number of contexts with the cfgctxts module parameter. This overrides the default settings (on the QLE7240 and QLE7280) based on the number of CPUs present on each node. See "TrueScale Hardware Contexts on the DDR and QDR InfiniBand Adapters" on page 4-12.
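As a sketch, a module parameter of this kind is typically set through a modprobe configuration file. The module name ib_ipath, the file name, and the value 16 below are assumptions; consult the release notes for your driver before changing it:

```
# /etc/modprobe.d/truescale.conf (hypothetical file name)
# Assumed module name; pin the driver to 16 contexts regardless of CPU count.
options ib_ipath cfgctxts=16
```

The new value takes effect the next time the driver module is loaded.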
Context Sharing Error Messages
When the context limit is exceeded, the following error message appears as the application starts:
No free InfiniPath contexts available on /dev/ipath
Error messages related to contexts may also be generated by ipath_checkout or mpirun. For example:
PSM found 0 available contexts on InfiniPath device
The most likely cause is that processes elsewhere in the cluster are using all the available PSM contexts. Clean up these processes before restarting the job.
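To find the processes to clean up, you can look for open handles on the context device. A hedged sketch, assuming a Linux /proc filesystem; the helper name holders is hypothetical, and `fuser -v /dev/ipath` gives similar information where available:

```shell
# Hypothetical helper: list PIDs that hold an open file descriptor on the
# given device node, by scanning each process's fd symlinks in /proc.
# Handles we cannot read (other users' processes) are silently skipped.
holders() {
  dev="$1"
  for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "$dev" ]; then
      pid=${fd#/proc/}
      pid=${pid%%/*}
      echo "$pid"
    fi
  done | sort -u
}

# List processes holding TrueScale contexts open on this node.
holders /dev/ipath
```

Running this on each affected node identifies the stale jobs whose processes should be killed before the new job is restarted.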
Running in Shared Memory Mode
QLogic MPI supports running exclusively in shared memory mode; no QLogic adapter is required for this mode of operation. This mode is used for running applications on a single node rather than on a cluster of nodes.
To enable shared memory mode, use either a single node in the mpihosts file or use these options with mpirun:
$ mpirun -np=N -ppn=N
N must be equal in both cases.
NOTE:
For this release, N must be ≤ 64.
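The two requirements above (equal -np and -ppn values, at most 64 processes) can be enforced by a small launcher. A sketch only; run_shm is a hypothetical helper, not part of QLogic MPI:

```shell
# Hypothetical wrapper: launch a shared-memory-mode job on the local node,
# passing the same N to both -np and -ppn and enforcing the 64-process
# ceiling documented for this release.
run_shm() {
  n="$1"
  shift
  if [ "$n" -gt 64 ]; then
    echo "error: shared memory mode supports at most 64 processes" >&2
    return 1
  fi
  # Equal -np and -ppn place every rank on this node.
  mpirun -np="$n" -ppn="$n" "$@"
}
```

For example, `run_shm 4 ./myapp` runs four ranks of myapp entirely over shared memory on the local node.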