PNNL-13501 - Pacific Northwest National Laboratory

tera-scale computing, envisioned as being in the near future for parallel computing. Multi-block grids comprising orthogonal sub-grids will allow subsurface complexities to be incorporated into the computational domain without significantly increasing the total number of required grid cells.

Approach

This project has focused on implementing capabilities for orthogonal curvilinear coordinate grids and multi-block grids in a series of sequential and parallel multifluid subsurface flow and transport simulators. Curvilinear coordinate grid capabilities were implemented in the software using the conventional approach of transforming the governing differential equations that describe subsurface flow and transport with grid metrics (i.e., geometric transformations between the physical domain and the computational domain). For static grids, the grid metrics are computed once, resulting in minimal additional computational effort. For dynamic grids, such as the adaptive grids used to simulate the migration of dense brines from leaking storage tanks into the vadose zone, grid metrics are computed for each grid level.
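The grid-metric idea can be illustrated with a short sketch. The code below is not taken from the simulators described here; it is a minimal, hypothetical Fortran example that evaluates the metric terms (dx/dxi, dy/dxi, dx/deta, dy/deta) and the Jacobian of a two-dimensional curvilinear grid by central differences, the kind of one-time computation a static grid requires.

! Hypothetical illustration only: central-difference grid metrics and
! Jacobian for a static 2-D curvilinear grid (unit computational spacing).
program grid_metrics
  implicit none
  integer, parameter :: ni = 41, nj = 21
  real(8) :: x(ni,nj), y(ni,nj)            ! physical node coordinates
  real(8) :: x_xi, y_xi, x_eta, y_eta, jac
  real(8) :: r, theta, pi, jmin, jmax
  integer :: i, j

  pi = acos(-1.0d0)

  ! Example physical grid: a polar (orthogonal curvilinear) patch.
  do j = 1, nj
    do i = 1, ni
      r      = 1.0d0 + 0.05d0*dble(i-1)
      theta  = 0.5d0*pi*dble(j-1)/dble(nj-1)
      x(i,j) = r*cos(theta)
      y(i,j) = r*sin(theta)
    end do
  end do

  ! Metrics and Jacobian at interior nodes by central differences,
  ! computed once because the grid is static.
  jmin =  huge(1.0d0)
  jmax = -huge(1.0d0)
  do j = 2, nj-1
    do i = 2, ni-1
      x_xi  = 0.5d0*(x(i+1,j) - x(i-1,j))
      y_xi  = 0.5d0*(y(i+1,j) - y(i-1,j))
      x_eta = 0.5d0*(x(i,j+1) - x(i,j-1))
      y_eta = 0.5d0*(y(i,j+1) - y(i,j-1))
      jac   = x_xi*y_eta - x_eta*y_xi     ! local area scale of the mapping
      jmin  = min(jmin, jac)
      jmax  = max(jmax, jac)
    end do
  end do

  write(*,*) 'Jacobian range over interior nodes:', jmin, jmax
end program grid_metrics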

The parallel implementation of the subsurface flow and transport simulator uses a FORTRAN preprocessor, which interprets directives embedded in the source code, to convert the source from sequential FORTRAN to parallel FORTRAN with message-passing protocols.
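The report does not reproduce the directive syntax, so the fragment below is purely illustrative: a sequential loop annotated with invented directive comments (the "!$DP" lines) of the general kind such a preprocessor might interpret. The directive names are assumptions, not the project's actual syntax; because the directives are ordinary comments, the fragment still compiles and runs sequentially as written.

! Illustrative only: the "!$DP" directives below are invented for this
! sketch and are not the project's actual preprocessor syntax.
program saturate
  implicit none
  integer, parameter :: nfld = 100000
  real(8) :: sl(nfld)                 ! aqueous saturation over all field cells
  integer :: n

!$DP DISTRIBUTE sl(BLOCK)             ! preprocessor: split sl across processors
!$DP PARALLEL DO                      ! preprocessor: restrict the loop to local
  do n = 1, nfld                      ! cells and insert message passing for
    sl(n) = 0.5d0                     ! any ghost-cell exchange
  end do
!$DP END PARALLEL DO

  write(*,*) 'mean saturation =', sum(sl)/dble(nfld)
end program saturate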

Multi-block grids require different data structures and distributions than the conventional single-block structured grid. The principal focus for implementing multi-block capabilities in parallel has been the development of directives and preprocessor translators to handle multi-block data forms.

Results and Accomplishments

The principal outcomes of the project have been the development of capabilities for solving multifluid subsurface flow and transport problems on curvilinear coordinates and the development of a FORTRAN preprocessor directive, data structure, and interpreter for multi-block grids. To demonstrate these computational capabilities, an experiment involving multifluid flow in a bedded porous medium with a sloped interface is being simulated using a structured Cartesian grid, a tilted Cartesian grid, and a boundary-fitted orthogonal grid. Simulation results for the different grids will be compared against laboratory data, where the key issue is the behavior of a light nonaqueous-phase liquid near a sloped textural interface.


The ability of the simulator to predict the observed migration of the light nonaqueous-phase liquid depends strongly on the soil-moisture characteristic function and on the geometric representation of the computational grid. For this study, a boundary-fitted curvilinear grid has a significant advantage over the Cartesian and tilted Cartesian grids in that it accurately honors both the geometry of the sloped interface and the container boundaries.
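The report notes the sensitivity to the soil-moisture characteristic function without stating which functional form was used. The sketch below therefore assumes the widely used van Genuchten retention relation purely as an illustration of what such a function computes: aqueous saturation as a function of capillary pressure head. All parameter values are invented.

! Hypothetical example: a van Genuchten water-retention function, shown only
! to illustrate a "soil-moisture characteristic function"; the report does
! not state which relation or parameters the simulations actually used.
program retention
  implicit none
  real(8), parameter :: alpha = 2.0d0    ! 1/m, fitting parameter (assumed)
  real(8), parameter :: vgn   = 2.5d0    ! van Genuchten n (assumed)
  real(8), parameter :: sr    = 0.10d0   ! residual aqueous saturation (assumed)
  real(8) :: head, se, sl
  integer :: k

  do k = 1, 5
    head = 0.2d0*dble(k)                 ! capillary pressure head, m
    se   = (1.0d0 + (alpha*head)**vgn)**(-(1.0d0 - 1.0d0/vgn))
    sl   = sr + (1.0d0 - sr)*se          ! actual aqueous saturation
    write(*,'(a,f6.2,a,f6.3)') ' head = ', head, ' m   saturation = ', sl
  end do
end program retention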

Curvilinear coordinate system capabilities have been incorporated into eleven operational modes of the multifluid subsurface flow and transport simulators (sequential and parallel implementations):

1) water (aqueous system with passive gas)
2) water-air (aqueous-gas system)
3) water-salt (aqueous-brine system with passive gas)
4) water-salt-air (aqueous-brine-gas system)
5) water-air-energy (nonisothermal aqueous-gas system)
6) water-salt-air-energy (nonisothermal aqueous-brine-gas system)
7) water-oil (aqueous-nonaqueous liquid system with passive gas)
8) water-oil-air (aqueous-nonaqueous liquid-gas system)
9) water-oil-dissolved oil (aqueous-nonaqueous liquid system with passive gas and kinetic dissolution)
10) water-oil-surfactant (aqueous-nonaqueous liquid-surfactant system with passive gas)
11) water-oil-alcohol (aqueous-nonaqueous liquid-ternary mixture with passive gas)

Writing long-lasting computer codes that distribute data and computations across an array of parallel processors is difficult because computer hardware and system software continue to advance. The concept chosen for developing enduring scientific software on this project was a programmable parallel FORTRAN preprocessor, which translates sequential FORTRAN code with embedded directives into parallel code. The generated parallel code distributes arrays, supports task and data parallelism, and implements parallel input/output and the other operations required to use distributed-memory, shared-memory symmetric multiprocessor, and clustered parallel computer systems. This approach allows the code developer to focus on the science and applied mathematics of multifluid subsurface flow and transport without having to deal with the details and protocols of parallel processing.
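The report does not show the generated code, so the fragment below is only a rough sketch of the kind of message-passing code such a preprocessor might emit: a one-dimensional field array is block-distributed across MPI processes and a local sum is reduced to a global one. It assumes an MPI library is available; none of the names come from the project's source.

! Hypothetical sketch of preprocessor-emitted parallel code: block
! distribution of a field array plus a global reduction over MPI ranks.
program distributed_field
  use mpi
  implicit none
  integer, parameter :: nfld = 1000       ! global number of field cells
  integer :: ierr, rank, nproc, nloc, i1, i2, i
  real(8), allocatable :: fld(:)
  real(8) :: local_sum, global_sum

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

  ! Block decomposition: each rank owns a contiguous range of cells.
  nloc = nfld/nproc
  i1   = rank*nloc + 1
  i2   = i1 + nloc - 1
  if (rank == nproc-1) i2 = nfld          ! last rank takes the remainder
  allocate(fld(i1:i2))

  do i = i1, i2
    fld(i) = dble(i)                      ! fill only the locally owned cells
  end do

  local_sum = sum(fld)
  call MPI_Allreduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)

  if (rank == 0) write(*,*) 'global sum =', global_sum
  call MPI_Finalize(ierr)
end program distributed_field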

Implementing this approach for multi-block grids required developing preprocessor directives and programming the preprocessor to interpret those directives. The preprocessor has been programmed to handle block structures using the derived-type (structure) capabilities of FORTRAN 90. Preprocessor routines were written to handle defining blocks, allocating variables, creating variables, and distributing blocks across the processors.
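The report describes the multi-block data structure only in general terms, so the derived type below is a guessed illustration of the kind of FORTRAN 90 structure mentioned: it holds a block's identifier, extents, owning processor, and a field array. The type and component names are invented, and a pointer component is used because strict Fortran 90 does not allow allocatable components.

! Illustrative only: a guessed FORTRAN 90 derived type for one grid block
! of a multi-block domain; names and components are not from the project.
module block_types
  implicit none

  type :: grid_block
    integer :: id                      ! block identifier
    integer :: ni, nj, nk              ! block dimensions
    integer :: owner                   ! processor that owns the block
    real(8), pointer :: pl(:,:,:)      ! e.g., aqueous pressure on the block
  end type grid_block

contains

  subroutine allocate_block(blk, id, ni, nj, nk, owner)
    type(grid_block), intent(out) :: blk
    integer, intent(in) :: id, ni, nj, nk, owner
    blk%id    = id
    blk%ni    = ni
    blk%nj    = nj
    blk%nk    = nk
    blk%owner = owner
    allocate(blk%pl(ni,nj,nk))
    blk%pl = 0.0d0
  end subroutine allocate_block

end module block_types

program multiblock_demo
  use block_types
  implicit none
  type(grid_block) :: blocks(2)
  integer :: b

  ! Two sub-grids of different size, assigned to different processors.
  call allocate_block(blocks(1), 1, 20, 20, 10, 0)
  call allocate_block(blocks(2), 2, 40, 10, 10, 1)

  do b = 1, 2
    write(*,*) 'block', blocks(b)%id, 'cells =', size(blocks(b)%pl), &
               'owner =', blocks(b)%owner
  end do
end program multiblock_demo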
