PNNL-13501 - Pacific Northwest National Laboratory
Study Control Number: PN00021/1428
Code Development for NWGrid/NWPhys
Harold E. Trease
The NWGrid/NWPhys system represents enabling technology for solving dynamic, multiscale, coupled computational physics problems on hybrid grids using massively parallel computers in the following areas: computational biology and bioengineering, atmospheric dynamics and transport, subsurface environmental remediation modeling, engineering, computational and molecular chemistry, and semiconductor design. The NWGrid/NWPhys system provides one of the components for building a new Problem Solving Environment modeled after ECCE/NWChem. NWGrid/NWPhys brings advanced grid and computational simulation technologies to the Laboratory and to the DOE research community.
Project Description
The purpose of this project is to develop NWGrid/NWPhys into production codes that integrate automated grid generation, time-dependent adaptivity, applied mathematics, and numerical analysis for hybrid grids on distributed parallel computing systems. This system transforms complex geometries into computable hybrid grids upon which computational physics problems can then be solved. To run the necessary large-memory problems, NWGrid has been ported to platforms with more memory and central processing unit power, including those in the EMSL Molecular Science Computing Facility (MSCF) and at the National Energy Research Scientific Computing Center (NERSC). Additional Global Array sparse matrix operations needed to parallelize NWPhys have been defined and tested. A first draft of the NWGrid Users Manual is being reviewed, and a graphical user interface is being designed.
Introduction
In the past, NWGrid was run in parallel on the ASCI Silicon Graphics, Inc., system cluster (a predominantly global-memory, 64-bit machine) at Los Alamos National Laboratory with ASCI parallelization software. To run the necessary large-memory problems, NWGrid needed to be ported to other platforms with more memory and central processing unit power.
In the past, NWPhys was run in parallel on CRAYs and Connection Machines at Los Alamos National Laboratory. It is not currently running in a fully assembled parallel configuration anywhere; pieces are running in serial on a Silicon Graphics, Inc., system at our Laboratory. Portions of the code are being rewritten on an as-needed basis for ongoing work. To parallelize the code, functions for sparse matrix operations were needed.
The NWGrid/NWPhys command suite was not well documented, and there were few examples; most experienced users knew only the small portions that applied to their specific applications. The code currently runs with a command-line-driven interface. A users manual and a graphical user interface are needed.
Results and Accomplishments
NWGrid Code Development. To run the necessary large-memory problems, the scalar version of NWGrid has been ported to platforms with more memory and central processing unit power, including those in the EMSL/MSCF and at NERSC. This involved converting it from a 64-bit Silicon Graphics, Inc., code to 32-bit architectures, such as the IBM (MPP1), LINUX INTEL PC, LINUX ALPHA PC, and SUNs, and porting it to each machine's parallel libraries. NWGrid is evolving from FORTRAN 77 and C to FORTRAN 90.
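A 64-bit-to-32-bit conversion of this kind typically turns on implicit assumptions about integer width. The C sketch below is purely illustrative and not taken from the NWGrid source (whose index types are not described here); it shows how an exact-width index type makes such an assumption explicit so mesh-index arithmetic behaves the same on every target.

```c
#include <stdint.h>

/* Hypothetical global index type.  On a 64-bit SGI build, a default
   integer can hold mesh-sized indices; on 32-bit targets it cannot,
   so an exact-width type from <stdint.h> makes the width explicit. */
typedef int64_t nw_index;

/* Map a 0-based global cell index to (block, offset) for a
   block-distributed array -- arithmetic that would silently overflow
   if nw_index were a 32-bit int on a mesh with more than 2^31 cells. */
static void global_to_block(nw_index gid, nw_index block_size,
                            nw_index *block, nw_index *offset)
{
    *block  = gid / block_size;
    *offset = gid % block_size;
}
```

For example, a global index of 3 000 000 000 with a block size of 1 000 000 maps to block 3000, offset 0 on both 32- and 64-bit builds, whereas a 32-bit index type could not even represent that input.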
NWPhys Code Development. To parallelize NWPhys, a number of sparse matrix operations are being added to the EMSL internal software package, Global Arrays, to support unstructured meshes. These new functions are compatible with the directives provided by the Connection Machine, which are already used by NWPhys. They have been defined and tested and are in use in an initial parallel implementation of NWPhys. NWPhys is being rewritten in FORTRAN 90 during this conversion process.
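To illustrate the kind of sparse kernel such operations support, here is a serial compressed-sparse-row (CSR) matrix-vector product in C. This is a sketch only: the actual Global Arrays routines, their names, and their distributed data layout are not given in the text, so everything below is hypothetical.

```c
#include <stddef.h>

/* Serial CSR matrix-vector product, y = A*x, for an nrows-row sparse
   matrix.  On an unstructured mesh, each row's nonzeros correspond to
   a cell's mesh neighbors, which is why such kernels need sparse
   (rather than dense block) support.  NOT the Global Arrays API. */
static void csr_matvec(size_t nrows,
                       const size_t *row_ptr,  /* length nrows + 1 */
                       const size_t *col_idx,  /* column of each nonzero */
                       const double *val,      /* nonzero values */
                       const double *x, double *y)
{
    for (size_t i = 0; i < nrows; ++i) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}
```

A parallel version would partition rows across processors and gather the remote entries of x touched by each row's column indices, which is the communication pattern a distributed-array library has to provide for unstructured meshes.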
Documentation. The first draft of the NWGrid Users Manual has a full description, with format, parameters, examples, and special notes, of the over 120 NWGrid
Computational Science and Engineering 103