TGQR 2010Q2 Report.pdf - Teragridforum.org


Deepwater Horizon oil spill-related research activities at the Center for Space Research (CSR) at The University of Texas at Austin have resulted in a significant increase in the size of the TG data collection managed by CSR. The CSR MAGIC group now manages approximately 23.4 terabytes of data on Corral using the data management system. All CSR MAGIC data is accessible over the web, with over 15,000 web requests for data from the collection during the reporting period (from over 2,500 separate systems).

Researcher Scott Michael was offered compute cycles at Mississippi State University to begin his analysis of the data generated on Pople at PSC and Cobalt at NCSA. To avoid moving 30 TB of simulation data to MSU, IU's Data Capacitor Wide Area File System was used to bridge campuses: the file system was mounted directly on compute nodes at MSU. Michael was thus able to begin his analyses right away, without the time-consuming step of bundling his data and moving it from the TeraGrid to a non-TeraGrid institution. Michael has written a paper about this experience to be presented at the TeraGrid '10 conference.
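On the client side, mounting a Lustre wide-area file system such as DC-WAN on a compute node reduces to a single mount entry. The fragment below is illustrative only; the management-server address and filesystem name are invented, as the report does not name the actual DC-WAN servers:

```
# Hypothetical /etc/fstab entry for a Lustre-WAN client mount
# (MGS address and fsname are placeholders, not the real DC-WAN values)
mgs.dc-wan.example.edu@tcp0:/dcwan  /mnt/dc-wan  lustre  defaults,_netdev  0 0
```

With an entry of this form in place, jobs on the remote campus read and write the wide-area file system through an ordinary POSIX path, which is what let the analysis start without any bulk data staging.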

SDSC continued work on preparing Lustre-WAN for production. DC-WAN (IU Lustre) and GPFS-WAN (SDSC GPFS) have been mounted on our Dash cluster.

In Q1 the Purdue RP migrated its existing TG data collections from the old SRB system to the new iRODS system. In Q2 the Purdue RP modified the Purdue environmental data portal to connect with the new data management system. The new site is currently being tested.

The ORNL RP continued its active interaction with the REDDnet collaboration and its new initiative, the Data Logistics Toolkit (DLT). REDDnet was recognized at the Spring Internet2 meeting (http://events.internet2.edu/2010/spring-mm/) with an IDEA award (http://www.internet2.edu/idea/2010/) for providing distributed working storage in support of projects with large data movement needs.

7.2 Data

7.2.1 Data Movement

Data movement infrastructure based on GridFTP remains stable within the TeraGrid, with monitoring performed by Inca and the Speedpage. Interactions with the REDDnet collaborators are expected to result in the development of additional mechanisms to facilitate data movement into and out of TeraGrid resources.
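As a sketch of what a GridFTP transfer between two resources looks like from the command line: `globus-url-copy` is the standard Globus Toolkit transfer client, but the endpoints and paths below are invented for illustration, and the command is echoed rather than executed since a real transfer requires live servers and grid credentials:

```shell
# Hypothetical source and destination GridFTP endpoints (not real TeraGrid hosts)
SRC="gsiftp://gridftp.site-a.example.org/scratch/run01/out.dat"
DST="gsiftp://gridftp.site-b.example.org/data/run01/out.dat"
# -p 4 requests four parallel TCP streams for wide-area throughput;
# -fast reuses data channels between files.
# Echoed here for illustration; drop the echo to run against live endpoints.
echo "globus-url-copy -p 4 -fast $SRC $DST"
```

Parallel streams are the usual lever for wide-area throughput, which is one reason GridFTP rather than plain scp underpins TeraGrid bulk data movement.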

7.2.2 Wide-Area Filesystems

Work continued to deploy the new distributed Lustre-WAN file system using storage at six sites. Indiana, NCSA, NICS, PSC, and TACC all received new hardware to deploy storage servers, and SDSC began work on repurposing over 200 TB of existing storage for use in the new Lustre-WAN file system. During the quarter the new hardware was deployed and mostly integrated into the network and power infrastructures of the various RPs, and efforts to prepare for software and network testing were underway.

In addition to adding MSU as a non-TeraGrid site mounting DC-WAN, IU began work on the TeraGrid wide-area file system. Working with system administrators at PSC, IU wrote the UID mapping code that will permit a unified UID space for the TeraGrid. By the end of Q2, hardware had been delivered and racked, and the disk array had been zoned for production.
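The UID mapping problem can be illustrated with a minimal sketch: the same researcher typically holds different local UIDs at each site, and a translation table maps each (site, local UID) pair onto one identity in the unified UID space, so files on the shared file system show consistent ownership. The table and function names below are assumptions for illustration, not the actual IU/PSC code:

```python
# Hypothetical per-site UID translation table:
# (site, local_uid) -> UID in the unified TeraGrid space.
SITE_UID_MAP = {
    ("PSC", 5021): 70100,
    ("IU",  1344): 70100,   # same researcher, different local UID at IU
    ("PSC", 5022): 70101,
}

def to_global_uid(site, local_uid):
    """Translate a site-local UID into the unified TeraGrid UID space."""
    try:
        return SITE_UID_MAP[(site, local_uid)]
    except KeyError:
        raise KeyError(f"no mapping for uid {local_uid} at site {site}")

# One person resolves to one global identity regardless of site:
assert to_global_uid("PSC", 5021) == to_global_uid("IU", 1344)
```

In a real deployment the mapping would hook into the file system's identity layer rather than application code, but the table lookup captures the core idea: ownership is translated at the boundary between each site and the wide-area file system.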

PSC continues to operate Lustre 2.0-based test infrastructure to investigate future mechanisms to enhance wide-area file system security and compatibility. Developments in the quarter include
