
2.2.18 Physics: The Geophysical High Order Suite for Turbulence (GHOST) (PI: Annick Pouquet, NCAR)

The Geophysical High Order Suite for Turbulence (GHOST) is a framework for studying fundamental turbulence via direct numerical simulation (DNS) and modeling. Running on TeraGrid computers in 2010, GHOST became the first known successful attempt to combine OpenMP and MPI in a large-scale hybrid scheme for pseudo-spectral fluid dynamics. This hybridization enables highly efficient use of computing resources both within each node and between nodes across a network. Using a TeraGrid allocation at NICS, NCAR researchers were able to perform a 3,072³ simulation of a rotating flow—the largest such simulation in the world—on nearly 20,000 processor cores. The efficiency achieved with GHOST allowed them to study flows at higher resolution and with greater fidelity by using all the cores on each node. Such efficiency gains are allowing researchers to tackle problems they could not address before by significantly reducing the processing time on large core-count computer systems. Pablo Mininni, Duane Rosenberg, Raghu Reddy, and Annick Pouquet presented their paper on the topic, "Investigation of Performance of a Hybrid MPI-OpenMP Turbulence Code," at TG'10, and it received the conference award for best paper in the technology track.
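
The hybrid scheme described above combines MPI, which divides the grid across nodes, with OpenMP, which shares each node's portion of the work among its cores. The following C sketch illustrates only that general pattern; the grid size, slab decomposition, and placeholder kernel are assumptions for illustration and are not taken from the GHOST source.

    /*
     * Sketch of the hybrid MPI+OpenMP pattern: MPI ranks own 1-D slabs of a
     * periodic grid, OpenMP threads share the work inside each slab, and an
     * MPI_Alltoall stands in for the global transpose that a distributed
     * pseudo-spectral FFT requires.  Sizes and the kernel are placeholders.
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 256                 /* global grid points per dimension (placeholder) */

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* FUNNELED: only the master thread of each rank makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        int local_nx = N / nranks;            /* assumes N divisible by nranks */
        long slab = (long)local_nx * N * N;   /* grid points owned by this rank */
        double *u = malloc((size_t)slab * sizeof *u);
        double *v = malloc((size_t)slab * sizeof *v);

        /* Intra-node parallelism: OpenMP threads split the local slab. */
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < slab; i++) {
            u[i] = (double)(rank + 1);        /* stand-in for real-space work */
            v[i] = 0.0;
        }

        /* Inter-node parallelism: all-to-all exchange, the communication
           pattern behind the transpose used by distributed FFTs. */
        int block = (int)(slab / nranks);
        MPI_Alltoall(u, block, MPI_DOUBLE, v, block, MPI_DOUBLE, MPI_COMM_WORLD);

        if (rank == 0)
            printf("ranks=%d, threads per rank=%d\n", nranks, omp_get_max_threads());

        free(u);
        free(v);
        MPI_Finalize();
        return 0;
    }

Launched with one MPI rank per node and OMP_NUM_THREADS set to the node's core count, this arrangement keeps the all-to-all traffic between nodes while the shared-memory threads avoid message-passing overhead within a node, which is the efficiency gain the paragraph above describes.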

2.2.19 Student Training: Use of TeraGrid Systems in the Classroom and Research Lab (PI: Andrew Ruether, Swarthmore College)

Swarthmore College uses the TeraGrid for both student coursework and faculty research. When Ruether taught Swarthmore's Distributed and Parallel Computing course, all students were given TeraGrid accounts, and they used TeraGrid systems for class projects. Two of computer science Professor Tia Newhall's students extended the work done in the class to a summer project and presented their results at the TeraGrid 2010 conference. The students developed and tested a novel parallelization technique for solving the K-Nearest Neighbor problem. The algorithm is used to classify objects from a large set. A few examples of its use include discovering medical images that contain tumors in a large medical database, recognizing fingerprints from a national fingerprint database, and finding certain types of astronomical objects in a large astronomical database. The TeraGrid allowed the students to run the large-scale experiments necessary to demonstrate that their solution worked well for real-world problems.

Figure 2.18. These four turbulence visualizations show Taylor-Green flow at increasing resolutions using 64³, 256³, 1,024³, and 2,048³ grid points. This modeling advancement using a hybrid MPI-OpenMP code is only possible on systems with significantly more processor cores than NCAR's most powerful supercomputer. The GHOST code has been run successfully on Kraken and Ranger, and it scales well on these and other platforms for hydrodynamic runs with up to 3,072³ grid points. These results demonstrate that pseudo-spectral codes such as GHOST, using FFT algorithms that are prevalent in many numerical applications, can scale linearly to a large number of processors.
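
The report does not give the details of the students' parallelization technique. As a rough sketch of the computation they were scaling up, the following C program performs brute-force k-nearest-neighbor classification and parallelizes over independent query objects with OpenMP; the sizes, feature layout, and synthetic data are placeholders for illustration and do not represent the students' method.

    /*
     * Illustrative brute-force k-nearest-neighbor classification with OpenMP.
     * Each query object is classified independently, so queries are divided
     * among threads.  Data layout and sizes are placeholders.
     */
    #include <omp.h>
    #include <float.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define DIM 4        /* features per object (placeholder) */

    /* Squared Euclidean distance between two feature vectors. */
    static double dist2(const double *a, const double *b)
    {
        double s = 0.0;
        for (int d = 0; d < DIM; d++) {
            double diff = a[d] - b[d];
            s += diff * diff;
        }
        return s;
    }

    /* Classify one query by majority vote among its k nearest reference
       points (assumes k <= 16 and fewer than 8 classes for simplicity). */
    static int classify(const double *query, const double *ref, const int *label,
                        int nref, int k, int nclass)
    {
        double best_d[16];
        int    best_l[16];
        for (int j = 0; j < k; j++) { best_d[j] = DBL_MAX; best_l[j] = -1; }

        /* Keep the k smallest distances seen so far via insertion. */
        for (int i = 0; i < nref; i++) {
            double d = dist2(query, &ref[i * DIM]);
            if (d < best_d[k - 1]) {
                int j = k - 1;
                while (j > 0 && best_d[j - 1] > d) {
                    best_d[j] = best_d[j - 1];
                    best_l[j] = best_l[j - 1];
                    j--;
                }
                best_d[j] = d;
                best_l[j] = label[i];
            }
        }

        /* Majority vote over the k neighbors. */
        int votes[8] = {0}, winner = 0;
        for (int j = 0; j < k; j++)
            if (best_l[j] >= 0) votes[best_l[j]]++;
        for (int c = 1; c < nclass; c++)
            if (votes[c] > votes[winner]) winner = c;
        return winner;
    }

    int main(void)
    {
        int nref = 100000, nquery = 1000, k = 5, nclass = 3;
        double *ref   = malloc((size_t)nref   * DIM * sizeof *ref);
        double *query = malloc((size_t)nquery * DIM * sizeof *query);
        int    *label = malloc((size_t)nref * sizeof *label);
        int    *out   = malloc((size_t)nquery * sizeof *out);

        /* Synthetic data stands in for images, fingerprints, or catalog objects. */
        for (int i = 0; i < nref; i++) {
            label[i] = i % nclass;
            for (int d = 0; d < DIM; d++)
                ref[i * DIM + d] = rand() / (double)RAND_MAX + label[i];
        }
        for (int q = 0; q < nquery; q++)
            for (int d = 0; d < DIM; d++)
                query[q * DIM + d] = rand() / (double)RAND_MAX;

        /* Queries are independent, so they parallelize trivially across threads. */
        #pragma omp parallel for schedule(dynamic)
        for (int q = 0; q < nquery; q++)
            out[q] = classify(&query[q * DIM], ref, label, nref, k, nclass);

        printf("first query assigned to class %d using %d threads\n",
               out[0], omp_get_max_threads());
        free(ref); free(query); free(label); free(out);
        return 0;
    }

Because each query is classified independently, the distance computations divide naturally among threads, and at larger scale the reference set can likewise be partitioned across nodes; that independence is what makes the problem well suited to the large-scale experiments mentioned above.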

Swarthmore also has three faculty members—in computer science, physics, and chemistry—doing publishable work using TeraGrid resources. Because Swarthmore does not have a high-performance computing system, the TeraGrid has been a key resource for faculty researchers who have reached the limits of desktop computing. In one case, this reduced the run time of a simulation from one week on a desktop computer to one hour using Purdue's Condor pool, which offers nearly 37,000 processors and is available as a high-throughput TeraGrid resource. This allowed the research group to run many

Figure 2.19. The gray lines are magnetic field lines in plasma and the colored dotted lines are charged particle orbits through the magnetic field. TeraGrid resources were used to model many particle orbits to get a sense of the confinement in this configuration. Knowledge gained from the modeling could be useful in designing fusion reactors based on magnetic confinement.

