
2 RUNNING MOLPRO

to the use of OpenMP shared-memory parallelism, and specifies the maximum number of OpenMP threads that will be opened; it defaults to 1. Any of these three components may be omitted, and appropriate combinations allow GA (or MPI-2)-only, OpenMP-only, or mixed parallelism.
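
As an illustration, and on the assumption that the three components of the -n argument are written as tasks/tasks_per_node:threads (the exact separators are not shown in this excerpt), a mixed run might look as follows; the input file name h2o.inp is purely illustrative:

    # 8 GA (or MPI-2) processes, 4 per node, 2 OpenMP threads per process (illustrative)
    molpro -n 8/4:2 h2o.inp

Leaving out the thread component would request GA (or MPI-2)-only parallelism; the -t option described below offers an alternative way to set the thread count.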

-N | --task-specification user1:node1:tasks1,user2:node2:tasks2... node1, node2 etc. specify the host names of the nodes on which to run. On most parallel systems, node1 defaults to the local host name, and there is no default for node2 and higher. On Cray T3E and IBM SP systems, and on systems running under the PBS batch system, if -N is not specified, nodes are obtained from the system in the standard way. tasks1, tasks2 etc. may be used to control the number of tasks on each node as a more flexible alternative to -n / tasks_per_node. If omitted, they are each set equal to -n / tasks_per_node. user1, user2 etc. give the username under which processes are to be created. Most of these parameters may be omitted in favour of the usually sensible default values.
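
A minimal sketch of such a task specification, assuming two nodes named node1 and node2, the username jdoe and an input file h2o.inp (all of these names are illustrative):

    # run 4 tasks on each of the two nodes as user jdoe
    molpro -N jdoe:node1:4,jdoe:node2:4 h2o.inp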

-G | --global-memory memory Some parts of the program make use of Global Arrays for holding and communicating temporary data structures. This option sets the amount of memory to allocate in total across all processors for such activities. Note that this option is no longer active.

-S | --shared-file-implementation method specifies the method by which the shared data are held in parallel. method can be sf or ga, and by default it is set automatically according to the properties of the scratch directories. If method is set manually to sf, please ensure that all the scratch directories are shared by all processes. Note that for the GA version of MOLPRO, if method is set to sf (manually or by default), the scratch directories cannot be located on NFS when running a MOLPRO job on multiple nodes, because the SF facility in Global Arrays does not work well across multiple nodes with NFS. There is no such restriction for the MPI-2 version of MOLPRO.
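
For example, assuming all scratch directories reside on a filesystem visible to every process, the shared-file method could be requested explicitly (input file name illustrative):

    # force the sf shared-file implementation; requires scratch directories shared by all processes
    molpro -n 8 -S sf h2o.inp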

--multiple-helper-server nprocs_per_server enables multiple helper servers, where nprocs_per_server sets how many processes share one helper server. For example, if the total number of processes is 32 and nprocs_per_server is 8, then every 8 processes (including the helper server) share one helper server, giving 4 helper servers in total. Any unreasonable value of nprocs_per_server (i.e. any integer less than 2) is reset automatically to a very large number, which is equivalent to the option --single-helper-server.
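
A hypothetical command line matching the numbers in the example above (input file name illustrative):

    # 32 processes with one helper server per 8 processes, i.e. 4 helper servers in total
    molpro -n 32 --multiple-helper-server 8 h2o.inp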

--node-helper-server places one helper server on every node, provided all the nodes are symmetric and have a reasonable number of processes (i.e. every node runs the same number of processes, and that number is greater than 1); this is the default behaviour. Otherwise, only a single helper server is used for all processes/nodes, which is equivalent to the option --single-helper-server.

--single-helper-server specifies a single helper server for all processes.

--no-helper-server disables the helper server.

-t | --omp-num-threads n Specify the number of OpenMP threads, as if the environment variable OMP_NUM_THREADS were set to n.
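
A short mixed-parallelism sketch combining process and thread counts (input file name illustrative):

    # 4 parallel processes, each running 2 OpenMP threads
    molpro -n 4 -t 2 h2o.inp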
