
The Dark Energy Spectroscopic Instrument (DESI) will measure the effect of dark energy on the expansion of the universe by obtaining optical spectra for tens of millions of galaxies and quasars. DESI will be conducted on the Mayall 4-meter telescope at Kitt Peak National Observatory in Arizona (shown here) starting in 2018. Image: NOAO/AURA/NSF

Extreme-Scale and Data-Intensive Platforms

A growing number of scientific discoveries are based on the analysis of large volumes of data from observational science and large-scale simulation. These analyses are beginning to require resources available only on large-scale computing platforms, and data-intensive workloads are expected to become a significant component of the Office of Science’s workload in the coming years.

For example, scientists are using telescopes on Earth, such as the Dark Energy Spectroscopic Instrument (DESI) managed by LBNL, as well as instruments mounted on satellites, to try to “see” the unseen forces, such as dark matter, that can explain the birth, evolution and fate of our universe. Detailed, large-scale simulations of how structures form in the universe will play a key role in advancing our understanding of cosmology and the origins of the universe.

To ensure that future systems will be able to meet the needs of extreme-scale and data-intensive computing in cosmology, particle physics, climate and other areas of science, NERSC has undertaken an extensive effort to characterize the data demands of the scientific workflows run by its users. In 2015, two important image-analysis workflows, astronomy and microtomography, were characterized to determine the feasibility of adapting them to exascale architectures. Although both workflows were found to have poor thread scalability in general, we also determined that adapting them to manycore and exascale node architectures will not be fundamentally difficult: coarse-grained threading and the use of large private buffers were the principal limits to scalability, and optimization strategies to address these bottlenecks already exist.
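
As a minimal sketch of what such a restructuring can look like, the OpenMP example below contrasts a coarse-grained, one-thread-per-image loop that allocates a large private scratch buffer with a fine-grained, pixel-level loop that needs no private copy. The function names, problem sizes and reduction kernel are illustrative assumptions, not code from the workflows studied.

    /* Hypothetical sketch: why coarse-grained threading with large
     * private buffers limits manycore scalability, and one common fix.
     * Sizes and names are illustrative, not taken from NERSC codes. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <omp.h>

    #define NIMG 8            /* images in a batch (demo size) */
    #define NPIX (1L << 20)   /* pixels per image (demo size)  */

    /* Before: one thread per image.  Parallelism is capped at NIMG,
     * and each thread holds a full-image private scratch buffer, so
     * memory footprint grows with the thread count. */
    static void process_coarse(float **img, float *out)
    {
        #pragma omp parallel for
        for (int i = 0; i < NIMG; i++) {
            float *scratch = malloc(NPIX * sizeof(float)); /* large private buffer */
            memcpy(scratch, img[i], NPIX * sizeof(float));
            float sum = 0.0f;
            for (long p = 0; p < NPIX; p++)
                sum += scratch[p];
            out[i] = sum;
            free(scratch);
        }
    }

    /* After: fine-grained parallelism over pixels with a reduction.
     * The thread count is no longer tied to the image count, and the
     * private scratch buffer disappears entirely. */
    static void process_fine(float **img, float *out)
    {
        for (int i = 0; i < NIMG; i++) {
            float sum = 0.0f;
            #pragma omp parallel for reduction(+:sum)
            for (long p = 0; p < NPIX; p++)
                sum += img[i][p];
            out[i] = sum;
        }
    }

    int main(void)
    {
        float *img[NIMG], out[NIMG];
        for (int i = 0; i < NIMG; i++) {
            img[i] = malloc(NPIX * sizeof(float));
            for (long p = 0; p < NPIX; p++)
                img[i][p] = 1.0f;
        }
        process_coarse(img, out);
        printf("coarse: out[0] = %.0f\n", out[0]);
        process_fine(img, out);
        printf("fine:   out[0] = %.0f\n", out[0]);
        for (int i = 0; i < NIMG; i++)
            free(img[i]);
        return 0;
    }

Compiled with, for example, gcc -fopenmp, the fine-grained version exposes parallelism proportional to the pixel count rather than the image count, which is the property a manycore node needs.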

The I/O requirements of these workflows were also characterized, and in general their I/O patterns are good candidates for acceleration on a flash-based storage subsystem. However, these workflows do not re-read data heavily, and it follows that transparent-caching architectures, which pay off only when the same data is fetched repeatedly, would not deliver the full performance benefit of such a subsystem.
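
The reasoning hinges on the re-read ratio: total bytes read divided by unique bytes touched. A ratio near 1.0 means data is read essentially once, leaving a transparent cache nothing to accelerate. The sketch below computes this ratio from a toy list of (offset, length) read records; the trace format and the byte-level bitmap are illustrative assumptions, not the instrumentation actually used in the NERSC study.

    /* Hypothetical sketch: estimating a workflow's re-read ratio from
     * (offset, length) read records extracted from an I/O trace. */
    #include <stdio.h>
    #include <stdlib.h>

    #define FILE_SIZE (1L << 20)  /* bytes covered by the bitmap (1 MiB demo) */

    typedef struct { long offset; long length; } read_rec;

    int main(void)
    {
        /* Toy trace: three reads, one of which repeats an earlier range. */
        read_rec trace[] = {
            { 0,     65536 },
            { 65536, 65536 },
            { 0,     65536 },  /* re-read of the first range */
        };
        int nrec = sizeof trace / sizeof trace[0];

        unsigned char *touched = calloc(FILE_SIZE, 1);
        long total = 0, unique = 0;

        for (int i = 0; i < nrec; i++) {
            total += trace[i].length;
            for (long b = trace[i].offset; b < trace[i].offset + trace[i].length; b++) {
                if (!touched[b]) { touched[b] = 1; unique++; }
            }
        }

        /* ratio == 1.0 -> read-once; >> 1.0 -> cache-friendly. */
        printf("total bytes read:  %ld\n", total);
        printf("unique bytes read: %ld\n", unique);
        printf("re-read ratio:     %.2f\n", (double)total / (double)unique);
        free(touched);
        return 0;
    }

A real measurement would derive the same ratio from an I/O-tracing tool such as Darshan rather than a hand-written trace; a ratio near 1.0, as reported for these workflows, is what argues for explicitly managed flash rather than a transparent cache.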
