
2 RUNNING MOLPRO

Note that the options --multiple-helper-server, --node-helper-server, --single-helper-server, and --no-helper-server are only effective for MOLPRO built with an MPI-2 library. When one or more helper servers are enabled, the corresponding processes act as data helper servers and the remaining processes are used for computation. Even so, performance is quite competitive when the job is run with a large number of processes. When the helper server is disabled, all processes are used for computation; however, performance may suffer because some existing implementations of the MPI-2 standard handle one-sided operations poorly.
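As an illustration, a minimal sketch of how these options might appear on the command line (the input file name and process count are placeholders, and -n is assumed to be the option selecting the number of parallel processes):

    # MPI-2 build: run 8 processes, one of which acts as a data helper server
    molpro -n 8 --single-helper-server h2o.inp

    # Disable helper servers so all 8 processes compute; performance then
    # depends on the MPI-2 implementation of one-sided operations
    molpro -n 8 --no-helper-server h2o.inp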

In addition, for MOLPRO built with the GA library (TCGMSG-MPI or MPI over InfiniBand), GA data structures cannot be too large (e.g., more than 2 GB per node) when running a MOLPRO job on multiple nodes. In this case, setting the environment variable ARMCI_DEFAULT_SHMMAX may help. The value, given in MB, should correspond to less than 2 GB (e.g., to set 1600 MB in bash: export ARMCI_DEFAULT_SHMMAX=1600). One can also run such jobs on more compute nodes, so that the memory allocated for GA data structures on each node becomes smaller. There is no such restriction for the MPI-2 version of MOLPRO.
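For example, a minimal bash sketch for a GA-based build (the input file name and process count are placeholders):

    # Limit ARMCI shared-memory segments to 1600 MB per node (value in MB, below 2 GB)
    export ARMCI_DEFAULT_SHMMAX=1600

    # Launch the job across multiple nodes with the reduced per-node limit in effect
    molpro -n 16 h2o.inp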
