
at a virtual OpenCL (VOCL) framework that could support the transparent utilization of local or remote GPUs. This framework, based on the OpenCL programming model, exposes physical GPUs as decoupled virtual resources that can be transparently managed independent of the application execution. As part of the project, the performance of VOCL was evaluated for four real-world applications spanning a range of computation and memory access intensities. The work showed that compute-intensive applications can execute with relatively small amounts of overhead within the VOCL framework.
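To make the decoupling concrete, the sketch below enumerates OpenCL devices and builds a context using the Python pyopencl bindings. Under a virtualization layer such as VOCL, remote GPUs would simply appear among the enumerated devices, and the application code would not change. The use of pyopencl and the presence of an OpenCL device are assumptions for illustration; VOCL itself targets the C OpenCL API.

```python
import pyopencl as cl

# Enumerate whatever OpenCL platforms and devices are visible.
# Under a GPU-virtualization layer such as VOCL, remote GPUs are
# exposed through this same interface, so the enumeration (and the
# rest of the application) is unchanged.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name}: {device.name}")

# Build a context and command queue on the first available device,
# again without knowing whether the device is local or virtualized.
ctx = cl.create_some_context(interactive=False)
queue = cl.CommandQueue(ctx)
```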

Virtualization Overhead Benchmarking. The benchmarking of virtualization overheads using both Eucalyptus and OpenStack was performed in collaboration with three Argonne groups: the Mathematics and Computer Science Division (MCS), which does algorithm development and software design in core areas such as optimization, explores new technologies such as distributed computing and bioinformatics, and performs numerical simulations in challenging areas such as climate modeling; the Advanced Integration Group at ALCF, which designs, develops, benchmarks, and deploys new technology and tools; and the Performance Engineering Group at ALCF, which works to ensure the effective use of applications on ALCF systems and emerging systems. This work is detailed in Chapter 9.

SuperNova Factory. Magellan project personnel were part of the team of researchers from LBNL who received the Best Paper Award at ScienceCloud 2010. The paper describes the feasibility of porting the Nearby Supernova Factory pipeline to the Amazon Web Services environment and offers detailed performance results and lessons learned from various design options.

MOAB Provisioning. We worked closely with Adaptive Computing's MOAB team to test both bare-metal provisioning and virtual machine provisioning through the MOAB batch queue interface at NERSC. Our early evaluation provides an alternate model for delivering cloud services to HPC center users, allowing them to benefit from customized environments while continuing to leverage many of the services they are already accustomed to, such as high-bandwidth, low-latency interconnects, high-performance file systems, and archival storage.
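As a rough illustration of this model, the sketch below submits a batch job through MOAB's msub command and requests that the allocated nodes be provisioned with a particular OS image. The image name and the os= resource request are illustrative assumptions, not Magellan's actual configuration.

```python
import subprocess

# Hypothetical sketch: submit "job.sh" through MOAB's batch interface,
# asking for four nodes provisioned with a custom OS image. The image
# name "custom-hpc-image" and the site's provisioning setup are
# illustrative assumptions, not Magellan's actual configuration.
result = subprocess.run(
    ["msub", "-l", "nodes=4,walltime=01:00:00,os=custom-hpc-image", "job.sh"],
    capture_output=True,
    text=True,
    check=True,
)

# On success, msub writes the new job's ID to stdout.
print("Submitted job:", result.stdout.strip())
```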

Juniper 10GigE. Recent cloud offerings such as Amazon's Cluster Compute instances are based on 10GigE networking infrastructure. The Magellan team at NERSC worked closely with Juniper to evaluate its 10GigE infrastructure on a subset of the Magellan testbed. A detailed benchmarking evaluation, covering both bare-metal and virtualized configurations, was performed and is presented in Chapter 9.

IBM GPFS-SNC. Hadoop and the Hadoop Distributed File System (HDFS) show the importance of data locality in file systems when handling workloads with large data volumes. However, HDFS does not provide a POSIX interface, which is a significant challenge for legacy scientific applications. Alternate storage architectures such as IBM's General Parallel File System - Shared Nothing Cluster (GPFS-SNC), a distributed shared-nothing file system, provide many of the features of HDFS, such as data locality and data replication, while preserving the POSIX I/O interface. The Magellan team at NERSC worked closely with the IBM Almaden research team to install and test an early version of GPFS-SNC on Magellan hardware. Storage architectures such as GPFS-SNC hold promise for scientific applications, but a more detailed benchmarking effort, which is outside the scope of Magellan, will be needed.
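The POSIX distinction matters because legacy codes routinely rely on semantics HDFS does not offer, such as random-access overwrites of existing files. Below is a minimal sketch (the file path is hypothetical) of the kind of in-place update that works on a POSIX file system like GPFS-SNC but not through HDFS's write-once, append-only interface.

```python
import os

# Hypothetical path on a POSIX-mounted file system such as GPFS-SNC.
path = "/gpfs/scratch/checkpoint.bin"

fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
os.pwrite(fd, b"\x00" * 4096, 0)   # lay down an initial 4 KB block
os.pwrite(fd, b"UPDATED", 1024)    # overwrite bytes mid-file: routine
                                   # under POSIX, unsupported by HDFS's
                                   # write-once model
print(os.pread(fd, 7, 1024))       # b'UPDATED'
os.close(fd)
```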

User Education and Support. User education and support have been critical to the success of the project. Both sites were actively involved in providing user education at workshops and through other forums. Additionally, Magellan project personnel engaged heavily with user groups to help them evaluate the cloud infrastructure. The NERSC project team also conducted an initial requirements-gathering survey. At the end of the project, user experiences from both sites were gathered through a survey and case studies, which are described in Chapter 11.

