TGQR 2010Q2 Report.pdf - Teragridforum.org
NSF Extensible Terascale Facility<br />
TeraGrid<br />
Report for<br />
Q2: April 1, 2010 through June 30, 2010<br />
Principal Investigators (GIG and RPs)<br />
Ian Foster (GIG), University of Chicago/Argonne National Laboratory (UC/ANL)<br />
Phil Andrews, University of Tennessee (UT-NICS)<br />
Jay Boisseau, Texas Advanced Computing Center (TACC)<br />
John Cobb, Oak Ridge National Laboratory (ORNL)<br />
Michael Levine, Pittsburgh Supercomputing Center (PSC)<br />
Rich Loft, National Center for Atmospheric Research (NCAR)<br />
Honggao Liu, Louisiana Optical Network Initiative/Louisiana State University (LONI/LSU)<br />
Richard Moore, San Diego Supercomputer Center (SDSC)<br />
Carol Song, Purdue University (PU)<br />
Rick Stevens, University of Chicago/Argonne National Laboratory (UC/ANL)<br />
Craig Stewart, Indiana University (IU)<br />
John Towns, National Center for Supercomputing Applications (NCSA)<br />
TeraGrid Forum (TGF)<br />
John Towns (NCSA)<br />
Matt Heinzel (UC/ANL)<br />
Phil Andrews (UT-NICS)<br />
Jay Boisseau (TACC)<br />
John Cobb (ORNL)<br />
Honggao Liu (LONI/LSU)<br />
Michael Levine (PSC)<br />
Rich Loft (NCAR)<br />
Richard Moore (SDSC)<br />
Mike Papka (UC/ANL)<br />
Carol Song (PU)<br />
Craig Stewart (IU)<br />
Grid Infrastructure Group<br />
Ian Foster (GIG) PI of TeraGrid GIG<br />
Matt Heinzel (UC) Director of the TeraGrid GIG<br />
Tim Cockerill (NCSA) Project Management Working Group<br />
Kelly Gaither (TACC) Visualization<br />
David Hart (SDSC) User Facing Projects<br />
Christopher Jordan (TACC) Data Analysis<br />
Daniel S. Katz (UC/ANL) GIG Director of Science<br />
Scott Lathrop (UC/ANL) Education, Outreach and Training; External Relations<br />
Elizabeth Leake (UC) External Relations<br />
Lee Liming (UC/ANL) Software Integration and Scheduling<br />
Amit Majumdar (SDSC) Advanced User Support<br />
J.P. Navarro (UC/ANL) Software Integration and Scheduling<br />
Mike Northrop (UC) GIG Project Manager<br />
Jeff Koerner (UC/ANL) Networking, Operations and Security<br />
Sergiu Sanielevici (PSC) User Services and Support<br />
Nancy Wilkins-Diehr (SDSC) Science Gateways<br />
Carolyn Peters (ANL) Event Coordinator/Administration<br />
Working Group Leaders<br />
Accounting: David Hart (SDSC)<br />
Advanced User Support: Amit Majumdar (SDSC)<br />
Allocations: Kent Milfeld (TACC)<br />
Common User Environment: Shawn Brown (PSC)<br />
Core Services: Dave Hart (SDSC)<br />
Data: Christopher Jordan (TACC)<br />
Education, Outreach and Training: Scott Lathrop (UC/ANL)<br />
External Relations: Elizabeth Leake (UC)<br />
Extreme Scalability: Nick Nystrom (PSC)<br />
Networking: Linda Winkler (ANL)<br />
Operations: Jeff Koerner (UC/ANL)<br />
Project Management: Tim Cockerill (NCSA)<br />
Quality Assurance: Kate Ericson (SDSC), Shava Smallen (SDSC)<br />
Scheduling: Warren Smith (TACC)<br />
Science Gateways: Nancy Wilkins-Diehr (SDSC)<br />
Security: Jim Marsteller (PSC)<br />
Software: Lee Liming (UC/ANL), JP Navarro (UC/ANL)<br />
User Facing Projects: Dave Hart (SDSC)<br />
User Services: Sergiu Sanielevici (PSC)<br />
Visualization: Kelly Gaither (TACC), Mike Papka (UC/ANL)<br />
TERAGRID QUARTERLY REPORT<br />
Contents<br />
1 Overview ........................................................................................................................... 7<br />
1.1 Scope .............................................................................................................................. 7<br />
1.2 Communities Served ...................................................................................................... 8<br />
1.3 TeraGrid’s Integrated, Distributed Environment ........................................................... 9<br />
1.4 Organizational Architecture ........................................................................................... 9<br />
2 Science and Engineering Highlights ............................................................................. 10<br />
2.1 List of Science and Engineering Highlights ................................................................. 10<br />
2.2 Science and Engineering Highlights ............................................................................ 11<br />
3 Software Integration and Scheduling ........................................................................... 21<br />
3.1 Highlights ..................................................................................................................... 21<br />
3.2 Advanced Scheduling Capabilities .............................................................................. 21<br />
3.3 Capability Development and Expansion ...................................................................... 22<br />
3.4 Operational Issues ........................................................................................................ 22<br />
4 Science Gateways ............................................................................................................ 22<br />
4.1 Highlights ..................................................................................................................... 22<br />
4.2 Targeted Support .......................................................................................................... 23<br />
4.3 Gateway Infrastructure and Services ........................................................................... 28<br />
4.4 RP Operations: SGW Operations ................................................................................. 30<br />
5 Users and User Support ................................................................................................. 31<br />
5.1 Highlights ..................................................................................................................... 31<br />
5.2 User Engagement ......................................................................................................... 32<br />
5.3 Frontline User Support ................................................................................................. 33<br />
5.4 RP Operations: User Services ...................................................................................... 36<br />
5.5 Advanced User Support ............................................................................................... 36<br />
6 User Facing Projects and Core Services ....................................................................... 59<br />
6.1 Highlights ..................................................................................................................... 59<br />
6.2 Enhanced TeraGrid User Access ................................................................................. 59<br />
6.3 Allocations Process and Allocation Management ........................................................ 61<br />
6.4 RP Integration and Information Sharing ...................................................................... 62<br />
6.5 RP Operations: Accounting/Core Services .................................................................. 63<br />
6.6 User Information Presentation ..................................................................................... 64<br />
6.7 RP Operations: User Facing Projects ........................................................................... 65<br />
6.8 Information Production and Quality Assurance ........................................................... 69<br />
7 Data and Visualization ................................................................................................... 69<br />
7.1 Highlights ..................................................................................................................... 69<br />
7.2 Data .............................................................................................................................. 70<br />
7.3 RP Operations: Data..................................................................................................... 71<br />
7.4 Visualization ................................................................................................................ 72<br />
7.5 RP Operations: Visualization ....................................................................................... 73<br />
8 Network Operations and Security ................................................................................ 75<br />
8.1 Highlights ..................................................................................................................... 75<br />
8.2 Networking .................................................................................................................. 75<br />
8.3 RP Operations: Networking ......................................................................................... 77<br />
8.4 Inca ............................................................................................................................... 78<br />
8.5 Grid Services Usage ..................................................................................................... 78<br />
8.6 RP Operations: HPC Operations .................................................................................. 85<br />
8.7 GIG Operations ............................................................................................................ 88<br />
8.8 Security ........................................................................................................................ 88<br />
8.9 RP Operations: Security ............................................................................................... 89<br />
8.10 Common User Environment ........................................................................................ 90<br />
8.11 System Performance Metrics ....................................................................................... 90<br />
9 Evaluation, Monitoring, and Auditing ......................................................................... 98<br />
9.1 Highlights ..................................................................................................................... 98<br />
9.2 XD – TIS: Technology Insertion Services ................................................................... 98<br />
9.3 Quality Assurance ...................................................................................................... 100<br />
9.4 XD – TAS: Technology Audit Services for TeraGrid ............................................... 100<br />
10 Education, Outreach, and Training; and External Relations .................................. 106<br />
10.1 Highlights ................................................................................................................... 106<br />
10.2 Training, Education and Outreach ............................................................................. 111<br />
10.3 External Relations ...................................................................................................... 118<br />
10.4 Enhancement of Diversity .......................................................................................... 122<br />
10.5 International Collaborations ....................................................................................... 122<br />
10.6 Broader Impacts ......................................................................................................... 123<br />
11 Project Management .................................................................................................... 138<br />
11.1 Highlights ................................................................................................................... 138<br />
11.2 Project Management Working Group ........................................................................ 138<br />
12 Science Advisory Board Interactions.......................................................................... 139<br />
13 Collaborations with Non-TeraGrid Institutions ........................................................ 140<br />
13.1 Grid Partnerships/Infrastructures ............................................................................... 140<br />
13.2 Science Projects and Science Gateways .................................................................... 140<br />
13.3 Data Systems .............................................................................................................. 140<br />
13.4 Science Projects ......................................................................................................... 140<br />
A Publications Listing ...................................................................................................... 142<br />
A.1. TeraGrid Staff Publications ............................................................................................. 142<br />
A.2. Publications from TeraGrid Users ................................................................................... 143<br />
B Performance on CY2010 Milestones ........................................................................... 177<br />
1 Overview<br />
The TeraGrid is an open cyberinfrastructure that enables and supports leading-edge scientific<br />
discovery and promotes science and technology education. The TeraGrid comprises<br />
supercomputing and massive storage systems, visualization resources, data collections, and<br />
science gateways, connected by high-bandwidth networks integrated by coordinated policies and<br />
operations, and supported by computational science and technology experts.<br />
TeraGrid’s objectives are accomplished via a three-pronged strategy: to support the most<br />
advanced computational science in multiple domains (deep impact), to empower new<br />
communities of users (wide impact), and to provide resources and services that can be extended<br />
to a broader cyberinfrastructure (open infrastructure). This “deep, wide, and open” strategy<br />
guides the development, deployment, operations, and support activities to ensure maximum<br />
impact on science research and education across communities.<br />
1.1 Scope<br />
TeraGrid is an integrated, national-scale computational science infrastructure operated in a<br />
partnership comprising the Grid Infrastructure Group (GIG), eleven Resource Provider (RP)<br />
institutions, and six Software Integration partners, with funding from the National Science<br />
Foundation’s (NSF) Office of Cyberinfrastructure (OCI). Initially created as the Distributed<br />
Terascale Facility through a Major Research Equipment (MRE) award in 2001, the TeraGrid<br />
began providing production computing, storage, and visualization services to the national<br />
community in October 2004. In August 2005, NSF funded a five-year program to operate,<br />
enhance, and expand the capacity and capabilities of the TeraGrid to meet the growing needs of<br />
the science and engineering community through 2010, and then extended the TeraGrid an<br />
additional year into 2011 to provide an extended planning phase in preparation for TeraGrid<br />
Phase III eXtreme Digital (XD).<br />
Accomplishing this vision is crucial for the advancement of many areas of scientific discovery,<br />
ensuring US scientific leadership, and increasingly, for addressing important societal issues.<br />
TeraGrid achieves its purpose and fulfills its mission through a three-pronged strategy:<br />
Deep: ensure profound impact for the most experienced users, through provision of the<br />
most powerful computational resources and advanced computational expertise;<br />
Wide: enable scientific discovery by broader and more diverse communities of<br />
researchers and educators who can leverage TeraGrid’s high-end resources, portals and<br />
science gateways; and<br />
Open: facilitate simple integration with the broader cyberinfrastructure through the use<br />
of open interfaces, partnerships with other grids, and collaborations with other science<br />
research groups delivering and supporting open cyberinfrastructure facilities.<br />
The TeraGrid’s deep goal is to enable transformational scientific discovery through leadership<br />
in HPC for high-end computational research. The TeraGrid is designed to enable high-end<br />
science utilizing powerful supercomputing systems and high-end resources for the data analysis,<br />
visualization, management, storage, and transfer capabilities required by large-scale simulation<br />
and analysis. All of this requires an increasingly diverse set of leadership-class resources and<br />
services, and deep intellectual expertise in the application of advanced computing technologies.<br />
The TeraGrid’s wide goal is to increase the overall impact of TeraGrid’s advanced<br />
computational resources to larger and more diverse research and education communities<br />
through user interfaces and portals, domain specific gateways, and enhanced support that<br />
facilitate scientific discovery by people without requiring them to become high performance<br />
computing experts. The complexity of using TeraGrid’s high-end resources will continue to grow<br />
as systems increase in scale and evolve with new technologies. TeraGrid broadens the scientific<br />
user base of its resources via the development and support of simpler but powerful interfaces to<br />
resources, ranging from establishing common user environments to developing and hosting<br />
Science Gateways and portals. TeraGrid also provides focused outreach and collaboration with<br />
science domain research groups, and conducts educational and outreach activities that help<br />
inspire and educate the next generation of America’s leading-edge scientists.<br />
TeraGrid’s open goal is twofold: to enable the extensibility and evolution of the TeraGrid by<br />
using open standards and interfaces; and to ensure that the TeraGrid is interoperable with<br />
other, open standards-based cyberinfrastructure facilities. While TeraGrid only provides<br />
(integrated) high-end resources, it must enable its high-end cyberinfrastructure to be more<br />
accessible from, and even federated or integrated with, cyberinfrastructure of all scales. That<br />
includes not just other grids, but also campus cyberinfrastructures and even individual researcher<br />
labs/systems. The TeraGrid leads the community forward by providing an open infrastructure that<br />
enables, simplifies, and even encourages scaling out to its leadership-class resources by<br />
establishing models in which computational resources can be integrated both for current and new<br />
modalities of science. This openness includes open standards and interfaces, but goes further to<br />
include appropriate policies, support, training, and community building.<br />
The TeraGrid’s integrated resource portfolio includes 19 high-performance computing (HPC)<br />
systems, several massive storage systems, remote visualization resources, and a dedicated<br />
interconnection network. This infrastructure is integrated at several levels: policy and planning,<br />
operational and user support, and software and services. Policy and planning integration<br />
facilitates coordinated management and evolution of the TeraGrid environment and allows us to<br />
present—as much as possible—a single cyberinfrastructure to the user community. Operational<br />
and user support integration allows the user community to interact with one or more of the many<br />
distinct resources and HPC centers that comprise TeraGrid, through a common service, training,<br />
and support organization—masking the complexity of a distributed organization. In addition, this<br />
allows the national user community to request allocations through a single national review<br />
process and use the resources of the distributed facility with a single allocation. Software and<br />
services integration creates a user environment with standard service interfaces, lowering barriers<br />
to porting applications, enabling users to readily exploit the many TeraGrid resources to optimize<br />
their workload, and catalyzing a new generation of scientific discovery through distributed<br />
computing modalities. Science Gateways, web-based portals that present TeraGrid to specific<br />
communities, represent a new paradigm in high-performance computing with the potential to<br />
impact a wide audience. Gateways benefit significantly from the integration of software and<br />
services across the TeraGrid.<br />
1.2 Communities Served<br />
The national, and global, user community that relies on TeraGrid has grown tremendously during<br />
the past four years, from fewer than 1,000 users in October 2005 to nearly 5,000 active users and<br />
more than 10,000 total lifetime users at the end of 2009.<br />
In Q2 2010, TeraGrid continued to exceed new thresholds for many user-related metrics. Notably,<br />
in May, TeraGrid surpassed 1,400 individuals charging jobs in a single month. TeraGrid also<br />
surpassed 6,000 active users at the quarter’s end. Finally, 662 new individuals joined the ranks of<br />
TeraGrid users, the second highest quarterly total to date. This elevated workload is made<br />
possible by prior efforts to streamline the new-user process and reduce the central staff effort.<br />
Further details can be found in §6.2.1.<br />
To support the great diversity of research activities and their wide range of resource needs, our<br />
user support and operations teams leverage the expertise across the eleven TeraGrid Resource<br />
Providers (§5, §8). In addition, users benefit from our coordinated education, outreach, and<br />
training activities (§10).<br />
1.3 TeraGrid’s Integrated, Distributed Environment<br />
TeraGrid’s diverse set of HPC resources provides a rich computational science environment.<br />
TeraGrid RPs operate more than 25 highly reliable HPC resources and 10 storage resources that<br />
are available via a central allocations and accounting process for the national academic<br />
community. More detailed information is available on compute resources at<br />
www.teragrid.org/userinfo/hardware/resources.php and on storage resources at<br />
www.teragrid.org/userinfo/hardware/dataresources.php.<br />
In Q2 2010, TeraGrid saw further increases in delivered NUs. TeraGrid compute resources<br />
delivered 12.1 billion NUs to users, 0.3% as TeraGrid Roaming NUs (Figure 8-11). Roaming<br />
NUs continued to decline as that allocation feature is phased out. The 12.1 billion NUs represent<br />
a 20% increase over the NUs delivered in Q1 2010. Nearly 100% of those NUs were delivered to<br />
allocated users. The Q2 2010 NUs delivered are 1.7x the NUs delivered in Q2 2009. Further<br />
details can be found in §8.11.<br />
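For readers tracking these metrics across reports, the growth figures quoted above also pin down the earlier baselines. A minimal sketch (the Q2 2010 value and both ratios are taken from this report; the baseline values are derived from them, not quoted directly):

```python
# NU (normalized unit) figures quoted in this report for Q2 2010.
q2_2010_nus = 12.1e9        # NUs delivered by TeraGrid compute resources in Q2 2010
growth_over_q1 = 0.20       # "a 20% increase over the NUs delivered in Q1 2010"
ratio_over_q2_2009 = 1.7    # "1.7x the NUs delivered in Q2 2009"

# Baselines implied by the stated ratios (derived, not quoted in the report).
implied_q1_2010 = q2_2010_nus / (1 + growth_over_q1)
implied_q2_2009 = q2_2010_nus / ratio_over_q2_2009

print(f"Implied Q1 2010: {implied_q1_2010 / 1e9:.1f} billion NUs")
print(f"Implied Q2 2009: {implied_q2_2009 / 1e9:.1f} billion NUs")
```

This implies roughly 10.1 billion NUs in Q1 2010 and roughly 7.1 billion NUs in Q2 2009, consistent with the year-over-year growth described here.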
1.4 Organizational Architecture<br />
The coordination and management of the TeraGrid partners and resources requires organizational and collaboration mechanisms that are different from a classic organizational structure for single organizations. The existing structure and practice has evolved from many years of collaborative arrangements between the centers, some predating the TeraGrid. As the TeraGrid moves forward, the inter-relationships continue to evolve in the context of a persistent collaborative environment.<br />
Figure 1-1: TeraGrid Facility Partner Institutions<br />
The TeraGrid team (Figure 1-1) is composed of eleven RPs and the GIG, which in turn has subawards to the RPs plus six additional Software Integration partners. The GIG provides coordination, operations, software integration, management and planning. GIG area directors (ADs) direct project activities involving staff from multiple partner sites, coordinating and maintaining TeraGrid central services.<br />
TeraGrid policy and governance rests with the TeraGrid Forum (TG Forum), comprising the<br />
eleven RP principal investigators and the GIG principal investigator. The TG Forum is led by an<br />
elected Chairperson, currently John Towns. This position facilitates the functioning of the TG<br />
Forum on behalf of the overall collaboration.<br />
TeraGrid management and planning is coordinated via a series of regular meetings, including<br />
weekly project-wide Round Table meetings (held via Access Grid), weekly TeraGrid AD and<br />
biweekly TG Forum teleconferences, and quarterly face-to-face internal project meetings.<br />
Coordination of project staff in terms of detailed technical analysis and planning is done through<br />
two types of technical groups: working groups and Requirement Analysis Teams (RATs).<br />
Working groups are persistent coordination teams and in general have participants from all RP<br />
sites; RATs are short-term (6-10 weeks) focused planning teams that are typically small, with<br />
experts from a subset of both RP sites and GIG partner sites. Both types of groups make<br />
recommendations to the TeraGrid Forum or, as appropriate, to the GIG management team.<br />
2 Science and Engineering Highlights<br />
The TeraGrid publishes science impact stories to inform and educate the public about the<br />
importance and impact of high-end computing technologies in advancing scientific research. The<br />
fourth annual TeraGrid Science Highlights document was distributed at the SC09 conference,<br />
providing information about 20 of the many science projects that have achieved significant<br />
impact by using the TeraGrid. These and other science impact stories are available on the<br />
TeraGrid web site.<br />
Below we provide 14 examples of TeraGrid-enabled science: first in a summary list, then as short<br />
summaries of each project. These highlights have been selected from the much larger collection<br />
of important scientific research activity supported on the TeraGrid to demonstrate the breadth of<br />
scope across scientific disciplines, research institutions, and sizes of research groups.<br />
2.1 List of Science and Engineering Highlights<br />
2.1.1 Analytical and Surface Chemistry: Cleaner coal through computation (Jennifer Wilcox, Stanford University)<br />
2.1.2 Bioengineering: The role of anatomical structure in ventricular and atrial arrhythmias (Elizabeth Cherry, Cornell)<br />
2.1.3 Biophysics: Molecular dynamics study of the conformational dynamics of HIV-1 protease subtypes A, B, C, and F (Adrian Roitberg, U. Florida)<br />
2.1.4 Condensed Matter Physics: First-principles investigation of elastic constants of Sn-based intermetallics and thermodynamics of Fe-based multicomponent alloys (Gautam Ghosh, Northwestern)<br />
2.1.5 Chemistry: Mechanisms of bioorganic and organometallic cyclization reactions (Dean Tantillo, U. California, Davis)<br />
2.1.6 Education: ModelThis! Science Olympiad initiative (Edee Wiziecki, NCSA)<br />
2.1.7 Extragalactic Astronomy and Cosmology: The formation and evolution of X-ray clusters, galaxies, and the first stars (Greg Bryan, Columbia)<br />
2.1.8 Materials Research: Structural order and disorder in semiconducting nanocrystals studied by combined DFT calculations and solid-state NMR (Bradley Chmelka, U. California, Santa Barbara)<br />
2.1.9 Meteorology: Center for Analysis and Prediction of Storms (CAPS) (Ming Xue, Oklahoma)<br />
2.1.10 Molecular and Cellular Biosciences: Solution structure of PNA and DNA double-helices with and without metal substitutions; and study of the interactions between mutant RT and novel anti-HIV drugs (Marcela Madrid, Pittsburgh Supercomputing Center)<br />
2.1.11 Physical Chemistry: Actin filament remodeling by actin depolymerization factor/cofilin (Greg Voth, U. Chicago; Jim Pfaendtner, U. Washington)<br />
2.1.12 Physics: The MIMD lattice computation (MILC) collaboration (Bob Sugar, U. California, Santa Barbara)<br />
2.1.13 Molecular and Cellular Biosciences: Blueprint for the affordable genome (Aleksei Aksimentiev, U. Illinois at Urbana-Champaign)<br />
2.1.14 Physics: The Universe’s magnetic personality (Stanislav Boldyrev, U. Wisconsin)<br />
2.2 Science and Engineering Highlights<br />
2.2.1 Analytical and Surface Chemistry: Cleaner coal through computation (Jennifer Wilcox, Stanford University)<br />
Pollution control devices known as scrubbers, installed to restrict the amount of nitrogen oxide and sulfur dioxide released from coal-fired power plants, may have helped to reduce acid rain, but they haven’t made those plants safe. About 5,000 tons of mercury are released worldwide from coal-fired power plants each year, according to the Environmental Protection Agency (EPA). Scientists have determined that this mercury travels long distances and ends up in remote regions, like the Arctic, where it enters the food chain and accumulates within organisms. As a consequence, the people of the Arctic display elevated levels of these toxic metals in their blood, levels that are correlated with birth defects and other ailments.<br />
In order to reduce the release of trace metals, Wilcox<br />
simulates the interactions of these particles using Ranger<br />
at TACC. Her studies are helping to improve on current<br />
technologies and design new ones that can remove heavy<br />
metals from the coal combustion process. Specifically,<br />
Wilcox’s simulations show how the size, pore structure,<br />
and composition of a material affect its success as an<br />
oxidizer of heavy metals. The goal is to create a structure<br />
that will bind trace metal molecules and convert them<br />
into a water-soluble form that can be easily removed.<br />
Wilcox’s solutions can take the form of changes in the spacing of pores in a charcoal filter, or the creation of a never-before-seen alloy of palladium and gold that, Wilcox predicts, will better oxidize heavy metals in gasification.<br />
Figure 2-1. Understanding trace metal adsorption aids in determining the pathway by which the trace metals are emitted into the atmosphere. In this example, elemental selenium is adsorbed onto an iron oxide nanoparticle through a surface oxidation mechanism. [Selenium: blue-colored atom; oxygen: red-colored atoms, with hatched and solid corresponding to surface and bulk, respectively; iron: rust-colored atoms]<br />
In 2011, the EPA is expected to release regulations limiting the release of mercury and other<br />
heavy metals into the atmosphere. Power companies will be forced to adopt more efficient and<br />
effective technologies to protect the environment and avoid fines. By developing the theory and<br />
methods needed to produce optimal capture devices, Wilcox is leading the way and helping to<br />
create a cleaner energy future.<br />
2.2.2 Bioengineering: The role of anatomical structure in ventricular and atrial arrhythmias (Elizabeth Cherry, Cornell)<br />
New technologies for low-energy defibrillation (applying shocks to persons suffering heart attack at much lower voltage than the current standard) hold promise to avoid the undesirable side effects, including tissue damage and pain, of current defibrillation. These emerging technologies, which apply shocks that may be below the pain threshold, also hold promise for “witness defibrillation”: shocks administered by a layperson who witnesses a collapse. Elizabeth Cherry and co-PIs (Flavio Fenton and Jean Bragard) perform whole-organ simulation of cardiac electrical dynamics.<br />
Figure 2-2. Top: Idealized simulated cardiac tissue with conductivity discontinuities created by two inexcitable obstacles. With increasing field strengths, first the tissue edge, then a single conductivity discontinuity, and finally both conductivity discontinuities are excited. Bottom: Realistic simulated cardiac tissue with two inexcitable obstacles. Increasing the field strength recruits more conductivity discontinuities as virtual electrodes.<br />
They have developed parallel codes that implement a three-dimensional model of electrical wave dynamics in atria and ventricles, and models of cellular electrophysiology, in realistic geometries. This is computationally challenging because of the range of space and time scales involved. During the past year, they reported on simulations (using BigBen at PSC) that show the feasibility of a novel method of low-energy defibrillation. This report appeared in Circulation, the leading research journal in cardiology. In this procedure, low-voltage shocks are<br />
applied via far-field pacing to terminate arrhythmias. Modeling of this method requires inclusion<br />
of small-scale discontinuities in conductivity, such as blood vessels and other non-conducting<br />
tissue. The researchers incorporated these effects into their bidomain model of cardiac electrical<br />
activity. A bidomain model, as opposed to monodomain models that treat interstitial space as<br />
grounded, is crucial for accurate simulation of defibrillation. Their simulations are correlated with<br />
experimental studies. Newer publications highlighting these findings, including a book chapter,<br />
are in press.<br />
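For reference, the bidomain formulation contrasted with monodomain models above can be written in its conventional form. The equations and notation below are the standard textbook statement of the bidomain model, not reproduced from the authors’ paper: $V_m = \phi_i - \phi_e$ is the transmembrane potential, $\sigma_i$ and $\sigma_e$ the intra- and extracellular conductivity tensors, $\beta$ the membrane surface-to-volume ratio, $C_m$ the membrane capacitance, and $I_{\mathrm{ion}}$ the ionic current supplied by the cell model:

```latex
\nabla\cdot\left(\sigma_i \nabla V_m\right) + \nabla\cdot\left(\sigma_i \nabla \phi_e\right)
  = \beta\left(C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}\right),
\qquad
\nabla\cdot\left(\sigma_i \nabla V_m\right) + \nabla\cdot\left((\sigma_i + \sigma_e)\,\nabla \phi_e\right) = 0.
```

A monodomain model assumes $\sigma_e$ is proportional to $\sigma_i$ (effectively grounding the interstitial space), which eliminates the second equation; that is the simplification the authors argue is inadequate for defibrillation studies.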
2.2.3 Biophysics: Molecular dynamics study of the conformational dynamics of HIV-1 protease subtypes A, B, C, and F (Adrian Roitberg, U. Florida)<br />
Human Immunodeficiency Virus (HIV) remains one of the most prominent diseases in the world, with infection rates continuously rising in certain areas. HIV is divided into two types, HIV-1 and HIV-2. HIV-1 is more prevalent and accounts for the majority of infections. HIV-1 can be further divided into three subgroups: M, N, and O. Subgroup M is the largest and can itself be divided into several subtypes. The Roitberg group’s work focuses on four subtypes of subgroup M: A, B, C, and F. Subtype A is generally seen in West and Central Africa. Subtype B is predominant in North and South America, Australia, and Western Europe. Subtype C is found in East and Southern Africa and throughout Eastern Asia. Subtype F is predominantly found in South America.<br />
Figure 2-3. Illustration of the three main flap conformations of HIV-1 protease, from left to right: closed, semi-open, and wide-open.<br />
The HIV protease has become an attractive target for drug design due to its role in cleaving the gag and gag-pol polyprotein precursors. Furthermore, inhibiting the protease’s natural biological function prevents the maturation of HIV, hence preventing the infection of neighboring cells. The flap domain is the most mobile of all the domains, largely owing to the number of Gly residues found in this region. Because the flaps must open and close for catalytic activity to occur, previous work has suggested that a new class of protease inhibitors targeting the flap domain or other essential domains might be more effective than the original approach of developing a protease inhibitor that works through competitive inhibition. In the current study, Roitberg and his students investigate the conformational dynamics of the flaps and the size of the active site in order to correlate how the different sequences of subtypes A, B, C, and F allow for different conformations of the protease.<br />
The molecular dynamics simulations were performed on Kraken using a version of the AMBER software optimized specifically for this problem as part of a TeraGrid Advanced User Support project between Prof. Roitberg and SDSC. Each simulation was run for 220 ns, using between 204 and 256 processors per independent run.<br />
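For a sense of scale, the reported 220 ns trajectories imply on the order of a hundred million integration steps each. The 2 fs timestep used below is a common MD choice and is an assumption for illustration; the report does not state the actual timestep:

```python
# Back-of-the-envelope step count for one 220 ns AMBER trajectory.
# The 2 fs timestep is a typical choice (with constrained hydrogens)
# and is assumed here; it is not stated in the report.
SIM_LENGTH_NS = 220
TIMESTEP_FS = 2

FS_PER_NS = 1_000_000
steps = SIM_LENGTH_NS * FS_PER_NS // TIMESTEP_FS
print(f"{steps:,} integration steps per run")  # 110,000,000
```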
The results indicate that the different subtypes have different mobilities, particularly in the protein flaps, which in turn affect substrate and inhibitor access to the active site. Given these differences, one can rationalize, for instance, why certain inhibitors designed to work against subtype B of HIV protease (the subtype prevalent in Europe and North America) do not bind as well to subtypes A and C (prevalent in Africa and responsible for 60% of the overall HIV infections). Using this information, the group will work on designing inhibitors better tailored to the less-studied subtypes.<br />
Figure 2-4. Snapshots from the simulation of subtype C superimposed, illustrating the mobility of the flaps.<br />
2.2.4 Condensed Matter Physics: First-principles investigation of elastic constants of Sn-based intermetallics and thermodynamics of Fe-based multicomponent alloys (Gautam Ghosh, Northwestern)<br />
This team’s general area of interest is the calculation of cohesive properties in solid-state systems using first-principles methods. They have used Cobalt and the now-retired Mercury at NCSA, Pople at PSC, the retired IBM IA-64 Linux cluster at SDSC, and Kraken at NICS in two main areas of computational research: (1) thermodynamic stability, elastic constants and elastic anisotropy, and thermal expansion of intermetallics (binary and ternary) relevant to modern solder interconnects in microelectronic packages, and (2) calculation of bulk thermodynamic properties of multicomponent alloys, precipitate/matrix interfacial energies, and elastic constants of Fe-base solid solutions to facilitate computational design and microstructure simulation in ferritic superalloys. They have collaborated with the member companies of the Semiconductor Research Corporation and provided them with basic materials properties relevant to modeling constitutive relations in solder interconnects. This in turn led to an interdisciplinary, multiscale approach to integrating intermetallic properties into a predictive reliability model for area-array solder interconnects. Their work was published in Acta Materialia.<br />
2.2.5 Chemistry: Mechanisms of bioorganic and organometallic cyclization reactions (Dean Tantillo, U. California, Davis)<br />
Terpenes are a prominent class of natural products whose diverse members include everything from flavor molecules to anti-cancer drugs; a prominent example is taxol, a therapeutic agent derived from tree bark and used especially in the treatment of breast cancer. The mechanisms by which terpenes are produced in nature have been debated for decades but are still far from fully understood. Tantillo has used GAUSSIAN on up to 128 processors of Pople at PSC to explore the potential energy surfaces and map the transition states of these cascade reactions, which are among the most challenging in nature.<br />
Figure 2.n. Orientation dependence of the shear modulus of AuSn4, based on the elastic constants (Cij) calculated from first principles at the LDA and GGA levels, where the shear direction is rotated from [010] to [001] in the shear plane (100) or from [010] to [101] in the shear plane (10-1). The calculated results demonstrate the extent of the anisotropy of the shear modulus.<br />
Figure 2-5. Product distributions for cyclizations of biological cations such as that derived from farnesyl diphosphate.<br />
The key factor here is that one enzyme works on the substrate to produce one terpene, and another enzyme works on the same substrate to produce a different terpene, resulting in about 400 separate terpene products from one reactant. More often in nature, only one enzyme works with any one reactant to make one product. Tantillo’s recent work has<br />
mapped out pathways from the bisabolyl cation to a diverse array of complex sesquiterpenes. His findings, reported in Nature Chemistry (August 2009) and JACS (March 2010), conclude that: (a) correct folding of the substrate, farnesyl diphosphate, alone doesn’t always dictate which products will be formed; (b) secondary carbocations are often avoided as intermediates in these rearrangements; and (c) most of the reactions don’t involve significant conformational changes for the intermediates, which may contribute significantly to product selectivity.<br />
2.2.6 Education: Model This! Science Olympiad initiative (Edee Wiziecki, NCSA)<br />
Recently, the NCSA cybereducation team deployed the first national Science Olympiad event to use computer modeling. The Science Olympiad National Tournament 2010 competition was held on the University of Illinois campus May 21-22 for students from 49 states. Ninety-two middle school and 50 high school students representing the top competitors in their states competed in the Model This! competition, using Vensim and NetLogo to answer questions related to predator-prey interactions and epidemiology, respectively. Two-person teams used laptop computers and modeling software to gain insights into their topic; they interpreted data and used it to answer questions. Previously, in April, dozens of middle- and high-school students from across Illinois competed in the Illinois Science Olympiad State Finals Competition at NCSA. Model This! is the first Olympiad competition to use computer modeling. It was a trial challenge, meaning it is now being evaluated and may become an official Science Olympiad event. Edee Wiziecki, director of NCSA cybereducation programs, is shepherding the introduction of Model This! into the Science Olympiad competition. The initiative, which is supported by NCSA, the TeraGrid, and the Institute for Chemistry Literacy Through Computational Science (ICLCS), integrates computer technology, the Internet, quantitative data analysis, and computer modeling.<br />
Figure 2-6. Students work on their tasks in the Model This! competition at NCSA. Model This! is the first Science Olympiad competition to use computer modeling.<br />
2.2.7 Extragalactic Astronomy and Cosmology: The formation and evolution of X-ray clusters, galaxies, and the first stars (Greg Bryan, Columbia)<br />
It appears that the first generation of stars to form in the universe were probably very massive (about 100 times the mass of the sun) and lived short lives before dying in supernova explosions. However, during the time that they did shine, they were so bright that they ionized the gas for thousands of light-years around. Greg Bryan and his team have carried out adaptive-mesh refinement simulations using Abe at NCSA to investigate the impact of this ionizing radiation on the ability of the gas to form later generations of stars. A typical run on Abe, notes Bryan, uses around 500 processors and runs for approximately 100 hours. They demonstrated that there are two separate significant effects: one that tends to enhance later star formation, and one that suppresses it. They found that these two effects are (coincidentally) nearly balanced. There is generally a slight net suppression, but this disappears after a relatively short period of time (80 million years). The result is that previous simple models that neglected both of these effects are (by chance) reasonably accurate predictors of the star formation rate.<br />
They have also looked at galaxies. Galaxies tend to group together, forming galaxy clusters.<br />
When a galaxy falls into a cluster for the first time, it plows into this hot cluster gas at high<br />
velocity, which can result in cold galaxy gas being stripped off. This is important because the<br />
cold gas is the fuel for new star formation in the galaxy. Bryan’s team has carried out the highest<br />
resolution simulations ever conducted of this process. This results in significantly more realistic<br />
simulations, and they can make extensive comparisons to observations, producing, for the first time, realistic "simulated" maps that can be directly compared to observations. Their work was<br />
published in the Monthly Notices of the Royal Astronomical Society, The Astrophysical Journal,<br />
and Astrophysical Journal Letters.<br />
2.2.8 Materials Research: Structural order and disorder in semiconducting nanocrystals studied by combined DFT calculations and solid-state NMR (Bradley Chmelka, U. California, Santa Barbara)<br />
The research team of Bradley Chmelka is motivated by the need to understand at a molecular<br />
level the fabrication and functions of new catalysts, adsorbents, porous ceramics, and<br />
heterogeneous polymers. These categories of<br />
technologically important materials are linked by their<br />
crucial dependencies on local order/disorder, which often<br />
govern macroscopic process or device performance. The<br />
team is observing many common molecular features<br />
among these diverse systems, which provide new insights<br />
for materials chemistry and engineering.<br />
Using Cobalt at NCSA, two-bond scalar ²J(²⁹Si-O-²⁹Si) spin-spin couplings between ²⁹Si atoms connected via bridging oxygen atoms in complicated nanoporous zeolite frameworks were calculated with high accuracy by density functional theory (DFT) and compared against experimentally measured values obtained from two-dimensional ²⁹Si{²⁹Si} nuclear magnetic resonance (NMR) spectroscopy. Such calculations, based on single-crystal or powder X-ray diffraction (XRD) structures of the siliceous zeolites Sigma-2 and ZSM-12 and a surfactant-templated layered silicate, were determined to be extremely sensitive to local framework structures. The calculations were compared against experimentally measured ²J(²⁹Si-O-²⁹Si) couplings obtained with the highest accuracies yet achieved for zeolitic solids, using state-of-the-art solid-state 2D double-quantum ²⁹Si{²⁹Si} NMR. The use of ²J(²⁹Si-O-²⁹Si) couplings to probe, evaluate, and refine the local structures of zeolites and layered silicates was demonstrated, including for frameworks that had previously been unsolved.<br />
Figure 2-7. Different O-centered clusters extracted from the structure of zeolite Sigma-2 (as determined by single-crystal XRD analyses) and centered on the ²⁹Si(1)-O-²⁹Si(4) fragment, with different bond terminations. Si, O, and H atoms are displayed in blue, red, and white, respectively.<br />
In combination, such 2D NMR, XRD, and DFT analyses open new opportunities to integrate new and sensitive J-coupling constraints into structure-determination and structure-refinement protocols. Such local order (or disorder) has important influences on the macroscopic adsorption, reaction, mechanical, photophysical, and/or stability properties of these diverse classes of materials, which can now be better understood and correlated with their structures at a more detailed molecular level. The team’s work, in collaboration with Sylvian Cadars (now at the Centre National de la Recherche Scientifique in Orléans, France) and Darren Brouwer at the National Research Council of Canada, was presented at several conferences, was published in Physical Chemistry Chemical Physics in 2009, and has been submitted to the Journal of the American Chemical Society in 2010.<br />
2.2.9 Meteorology: Center for Analysis and Prediction of Storms (CAPS) (Ming Xue, Oklahoma)<br />
The Center for Analysis and Prediction of Storms (CAPS) used Athena at NICS in dedicated mode for 6 hours per night, 5 nights a week, to run their 2010 CAPS Spring Experiments from April to June. A combination of jobs using up to 12,804 cores filled most of Athena’s 18,048 cores during these periods. Several computational models were used, including WRF (both the ARW and NMM cores) and ARPS, with 51 vertical levels and varying horizontal grid sizes: there were 4-km grid forecasts and a single 1-km grid forecast, all encompassing the whole continental USA. A total of 4.7 million computational core-hours was used over the duration of this project. The codes were significantly improved from the ones used during last year’s CAPS Spring Experiments on Kraken.<br />
Figure 2-8. Computational domains for the 2010 CAPS Spring Season. The outer thick rectangular box represents the domain for performing 3DVAR (Grid 1 – 1200×780). The red dot area represents the WRF-NMM domain (Grid 2 – 790×999). The inner thick box is the domain for WRF-ARW and ARPS and also for common verification (Grid 3 – 1160×720 at 4 km grid spacing; 4640×2880 at 1 km grid spacing).<br />
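As a rough consistency check on the reported usage, the dedicated-time arithmetic lands near the 4.7 million core-hours quoted above. The 13-week figure below approximates the April-June window and is an assumption, not a number from the report:

```python
# Rough reconstruction of the dedicated-time budget for the 2010 CAPS
# Spring Experiments on Athena.
cores_used = 12804      # peak cores occupied by the combined job mix
hours_per_night = 6     # dedicated window per night
nights_per_week = 5
weeks = 13              # assumed: ~April 1 through June 30

upper_bound = cores_used * hours_per_night * nights_per_week * weeks
print(f"upper bound: {upper_bound:,} core-hours")  # ~5.0M vs. 4.7M reported
```

The bound exceeds the reported 4.7M core-hours, as expected, since not every night used the full peak core count.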
The storm-scale ensemble and 1-km CONUS systems continue to break new ground in advancing high-resolution modeling for operational applications. The Spring Experiment this year included convective aviation-impacts and quantitative precipitation forecast (QPF)/heavy rain components in partnership with the Aviation Weather Center (AWC) and the Hydrometeorological Prediction Center (HPC), respectively, in addition to the usual severe thunderstorm focus. The more wide-ranging convective storm focus allowed others in the operational community to become more familiar with the CAPS models, leading the HPC to call the storm-scale ensemble forecasting system a "transformational" event for warm-season QPF.<br />
Ming Xue, Director of CAPS, said, “Obviously, the success of our forecasting effort this year and last is due in large part to the incredible support that we have received from NICS, University of Tennessee. Without their making available to us a dedicated machine (with >18,000 cores) and their support of a dedicated team that usually stays up late into the night to fix many system-related problems, none of this would have been possible. We deeply appreciate their efforts. As in the past, a lot of new science will result from these efforts, as a stream of abstracts is being submitted to the Severe Local Storms Conference.”<br />
More than 10 abstracts based on analysis of the data sets were submitted to the American Meteorological Society (AMS) Conference on Severe Local Storms (October 2010). More journal articles will follow. An overview article is planned for the Bulletin of the AMS, which has the largest readership among AMS journals.<br />
2.2.10 Molecular and Cellular Biosciences: Solution structure of PNA and DNA double-helices with and without metal substitutions; and Study of the interactions between mutant RT and novel anti-HIV drugs (Marcela Madrid, Pittsburgh Supercomputing Center)<br />
By means of X-ray crystallography and calculations done on Pople at PSC, the team of Joanne I. Yeh, Boris Shivachev, Srinivas Rapireddy, Matthew J. Crawford, Roberto R. Gil, Shoucheng Du, Marcela Madrid, and Danith H. Ly has determined the structure of a PNA-DNA duplex. This structure represents the first high-resolution view of a hybrid duplex containing a contiguous chiral PNA strand with complete γ-backbone modification (“γPNA”). The new structure (see Figure 2-9) illustrates the unique characteristics of this modified PNA, which possesses conformational flexibility while maintaining sufficient structural integrity to ultimately adopt the preferred P-helical conformation upon hybridization with DNA. The unusual structural adaptability found in the γPNA strand is crucial for enabling the accommodation of backbone modifications while constraining conformational states. These results provide unprecedented insights into how this new class of chiral γPNA is preorganized and stabilized, before and after hybridization with a complementary DNA strand. Such knowledge is crucial for the future design and development of PNA for applications in biology, biotechnology, and medicine.<br />
2.2.11 Physical Chemistry: Actin filament remodeling by actin depolymerization factor/cofilin (Greg Voth, U. Chicago; Jim Pfaendtner, U. Washington)<br />
Voth and Pfaendtner have extended prior work on<br />
actin, a ubiquitous protein with a vital role in<br />
cytoskeletal dynamics and structure. The cycle of actin<br />
polymerization into filaments that undergird the<br />
cytoskeleton and depolymerization of these filaments<br />
has been implicated in many cancers, in particular<br />
breast cancer. The researchers used molecular<br />
dynamics simulations on BigBen at PSC and Ranger at<br />
TACC to investigate how the severing protein, actin<br />
depolymerization factor (ADF)/cofilin, modulates the<br />
structure, conformational dynamics, and mechanical<br />
properties of actin filaments. Their findings, reported<br />
in PNAS (April 2010), show that actin and cofilactin<br />
filament bending stiffness and corresponding<br />
persistence lengths obtained from all-atom simulations<br />
are comparable to values obtained from analysis of<br />
thermal fluctuations in filament shape. Filament<br />
flexibility is strongly affected by the nucleotide-linked conformation of the actin subdomain 2 DNase-I binding loop (DB loop) and the filament radial mass density distribution.<br />
Figure 2-9. (a) A view of the γPNA strand (the DNA strand is omitted for clarity). “Water bridges” are present between all the backbone amide NH and nucleobases (purine-N3 or pyrimidine-O2), enhancing the stability of the γPNA helix. (b) γPNA interactions with surrounding solvent molecules (γPNA strand, residues 6 to 9).<br />
Figure 2-10. A representative snapshot of two actin subunits (grey/green) and one bound ADF/cofilin (yellow). The parts that look like a rough surface are the regions of the actin protein that bind the ADF/cofilin protein (the ADF/cofilin binding surface). The red regions are those implicated by experiments as important for ADF/cofilin binding to actin.<br />
ADF/cofilin binding between subdomains 1 and 3 of a filament subunit<br />
triggers reorganization of subdomain 2 of the neighboring subunit such that the DB loop moves<br />
radially away from the filament. Repositioning of the neighboring subunit DB-loop significantly<br />
weakens subunit interactions along the long-pitch helix and lowers the filament bending rigidity.<br />
Lateral filament contacts between the hydrophobic loop and neighboring short-pitch helix<br />
monomers in native filaments are also compromised with cofilin binding. These results provide a<br />
molecular interpretation of biochemical solution studies documenting the disruption of filament<br />
subunit interactions and also reveal the molecular basis of actin filament allostery and its linkage<br />
to ADF/cofilin binding. In sum, their findings, corroborated through collaboration with prominent<br />
actin structural biologists Thomas Pollard and Enrique De La Cruz of Yale, show how slight<br />
structural change in the DB loop translates to altered flexibility of the cytoskeleton, pointing the<br />
way to drug design possibilities for cancer therapies.<br />
2.2.12 Physics: The MIMD lattice computation (MILC) collaboration (Bob Sugar, U. California, Santa Barbara)<br />
The MIMD Lattice Computation (MILC) Collaboration used approximately 8.66M core-hours on<br />
Athena at NICS in May 2010 for their Dedicated Athena Project research in Quantum<br />
Chromodynamics (QCD). This research addressed fundamental questions in high energy and<br />
nuclear physics, and is directly related to major experimental programs in these fields,<br />
particularly as they relate to the strong (nuclear) forces. The objectives of their work are to<br />
calculate some of the least well determined parameters of the standard model, and, in concert<br />
with major experiments in high energy physics, to test the standard model and search for new<br />
physics that goes beyond it.<br />
The particular calculations carried out on Athena represented a milestone because they are the<br />
first simulations which include four quarks, up, down, strange and charm, all with their physical<br />
masses. This had not been possible before because computational resources required to carry out<br />
QCD simulations grow as the masses of the up and down quarks decrease. Furthermore, the<br />
ensemble of gauge configurations worked on with Athena is the most challenging generated to<br />
date with the Highly Improved Staggered Quark (HISQ) formulation of lattice quarks. A total of<br />
81 gauge configurations were generated with Athena resources, using two different algorithms as<br />
a check on the results. The jobs running on Athena ranged from 1536 cores to 16384 cores. The<br />
MILC group also performed measurements of the static quark potential, the masses of the lightest<br />
strongly interacting particles, and the leptonic decay constants of pi, K, D and Ds mesons on over<br />
100 configurations, including ones generated on Athena as well as on other TeraGrid computers.<br />
The resulting gauge configurations will enable major improvements in the precision of a wide<br />
range of physical quantities of importance in high energy and nuclear physics.<br />
The ensemble of gauge configurations generated on Athena must be combined with those from other ensembles to obtain final results before the work can be published. A talk on this work was<br />
presented at the XXVIII International Conference on Lattice Field Theory in Villasimius,<br />
Sardinia in June 2010.<br />
2.2.13 Molecular and Cellular Biosciences: Blueprint for the affordable genome (Aleksei Aksimentiev, U. Illinois at Urbana-Champaign)<br />
As “next-generation” gene sequencers begin to make their mark on the life sciences, teams<br />
around the world are racing to develop new and improved DNA sequencers that can ingest a<br />
strand of nucleotide bases and directly read a person’s genetic code at a cost of less than $1,000.<br />
Aleksei Aksimentiev, a computational physicist at the U.<br />
Illinois at Urbana-Champaign, is one of the creators of a<br />
new kind of sequencer that may make the dream of an<br />
affordable, personal genome a reality. This type of<br />
sequencer uses an electric field to drive a strand of DNA<br />
through a small hole, or “nanopore,” in a membrane, and<br />
reads each base pair in order by measuring the change in<br />
current as the pair moves through the hole in the<br />
membrane.<br />
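The blockade-current readout described above can be caricatured in a few lines of code. The current levels, noise model, and strand below are invented for illustration only; they are not taken from Aksimentiev's models, and real nanopore signals are far noisier and depend on several bases at once:

```python
import random

# Hypothetical mean blockade currents (arbitrary units) for each base.
LEVELS = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}

def call_base(measured_current):
    """Nearest-level classifier: choose the base whose reference
    blockade current is closest to the measurement."""
    return min(LEVELS, key=lambda base: abs(LEVELS[base] - measured_current))

# Simulate a noisy current trace for a known strand, then read it back.
random.seed(0)
strand = "GATTACA"
trace = [LEVELS[base] + random.gauss(0, 0.1) for base in strand]
decoded = "".join(call_base(current) for current in trace)
print(decoded)
```

The engineering challenge the simulations address is precisely that, in a real pore, the per-base levels overlap and fluctuate, so the pore geometry must be tuned until they become distinguishable.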
Since experiments can’t show exactly what’s going on<br />
inside a nanopore, Aksimentiev produces atom-by-atom<br />
models of nanopore designs and runs them on Ranger at<br />
TACC. His simulations revealed the atomic-level<br />
movements of DNA as it wriggles through the nanopore,<br />
leading to insights into how to improve the design of the<br />
system.<br />
Aksimentiev’s nanopore design promises a dramatic reduction in the cost of gene sequencing, and<br />
a commensurate increase in speed. The development of such a sequencer would have important<br />
ramifications for medicine, biology and human health, with the potential to lead to breakthroughs<br />
in the study of disease and evolution, and the development of personalized medicine. The<br />
National Institutes of Health (NIH) has set a goal of producing a $1,000 genome using nanopore sequencers by 2013.<br />
2.2.14 Physics: The Universe’s magnetic personality (Stanislav Boldyrev, U. Wisconsin)<br />
Plasma, or ionized gas, is the most common form of<br />
matter in the universe. Stars are balls of plasma held<br />
together by gravity, and even the space between the stars<br />
is filled with plasma, albeit sparsely. Astrophysicists<br />
have long believed that this plasma is magnetized.<br />
Likewise, they’ve understood that the cosmos is<br />
turbulent, filled with non-linear flows and eddies that<br />
generate motion and heat.<br />
What they have recently discovered is that for many<br />
astrophysical systems, magnetic turbulent energy is<br />
comparable to, and at some scales exceeds, kinetic<br />
energy, making it one of the most powerful forces<br />
shaping the system’s dynamics. In fact, certain<br />
astronomical observations, like the behavior of accretion disks and the signature of radio waves from the<br />
interstellar medium, only make sense after accounting for<br />
magnetic turbulence.<br />
Unlike regular hydrodynamic turbulence, which can be<br />
observed in the natural world and tested in laboratories<br />
or wind tunnels, those who study<br />
magnetohydrodynamics, or MHD, have little tangible<br />
evidence to go on.<br />
Figure 2-11. Large-scale molecular dynamics simulations reveal the microscopic mechanics of DNA transport through solid-state nanopores. [Courtesy of Aleksei Aksimentiev, University of Illinois at Urbana-Champaign]<br />
Figure 2-12. Solar wind turbulence is imbalanced, with more Alfven waves propagating along the magnetic field lines from the Sun than toward the Sun. Such turbulence is reproduced in numerical simulations on Ranger. Red and blue regions correspond to streams of nonlinear Alfven waves propagating along the magnetic field (z-axis) in opposite directions.<br />
Since scientists can’t generate large<br />
magnetized turbulent flows in our laboratories, they have to rely on numerical simulations. Over<br />
the last several years, Boldyrev and his colleague, Jean Carlos Perez (U. Wisconsin), developed<br />
the theory and methodology required to represent the magnetic turbulence of plasma. Using<br />
Ranger at TACC, they simulated the dynamics believed to be characteristic of most plasma flows<br />
with higher resolution than ever before, revealing its fine-scale dynamics.<br />
Through these simulations on Ranger, Boldyrev and his collaborators have shown that magnetism<br />
cannot be ignored when drafting the history of the Universe. In fact, magnetic turbulence may be<br />
the key to explaining some basic astrophysical phenomena, like the clumping of interstellar gas<br />
to form stars, and the scintillation, or twinkling, of pulsars that scientists now recognize as<br />
evidence of the fluctuations produced by a turbulent and magnetized interstellar medium.<br />
Boldyrev says, “Understanding magnetized plasma turbulence is crucial for explaining very<br />
important astrophysical observations.”<br />
3 Software Integration and Scheduling<br />
The software integration area covers the operational tasks associated with keeping TeraGrid’s<br />
common software up-to-date and consistently configured across all HPC resources that support<br />
them, as well as a number of small capability extension projects. In PY5, these projects include<br />
improving our meta-scheduling capabilities, enhancing our information service, supporting NSF’s<br />
software cyber-infrastructure development and maintenance programs (SDCI, STCI), improving<br />
our documentation, and formalizing our application hosting capabilities.<br />
3.1 Highlights<br />
The scheduling working group is continuing to transition the batch queue prediction capability<br />
from QBETS to Karnak in order to ensure long-term production-quality support for the capability.<br />
We also are beginning the process of releasing historical scheduling workload data for public use<br />
in external R&D activities.<br />
An increasing number of services (most recently at NCAR and ORNL) are publishing new kinds<br />
of system data into TeraGrid’s Integrated Information Service (IIS), including directories of<br />
locally available software packages and science gateways.<br />
3.2 Advanced Scheduling Capabilities<br />
TeraGrid’s current automatic resource selection (metascheduling) capabilities are provided by<br />
MCP software (developed at SDSC) and Condor-G with matchmaking software (from Univ. of<br />
Wisconsin-Madison). We made incremental improvements to MCP, including a new portlet that<br />
displays the nodes and CPUs/node for TeraGrid machines. We added documentation on Condor-<br />
G with matchmaking to the TeraGrid documentation website. We continued to assess the<br />
possibility of using Moab as a metascheduling system for TeraGrid. NICS is conducting an<br />
evaluation of Moab’s performance when metascheduling across at least two Moab clusters. NICS<br />
has obtained permission to use the FutureGrid system as a second site (beyond NICS) for the<br />
experiments. We also conducted a study to evaluate the performance of QBETS, Condor-G, and<br />
MCP for automatic resource selection.<br />
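Condor-G matchmaking of this kind pairs each job's requirements with the ClassAds that resources advertise. A minimal, illustrative submit description is sketched below; the gatekeeper substitution attribute and the `TG_QueueLength` ClassAd attribute are hypothetical placeholders, not TeraGrid's actual schema:

```text
# Illustrative Condor-G matchmaking submit description. The attribute
# names GatekeeperUrl and TG_QueueLength are hypothetical examples.
universe      = grid
grid_resource = $$(GatekeeperUrl)        # filled in from the matched resource ad
executable    = my_app
# Match only resources advertising at least 2 GB of memory per node...
requirements  = (TARGET.Memory >= 2048)
# ...and prefer the shortest queue among the matches.
rank          = -TARGET.TG_QueueLength
queue
```

The `$$(...)` syntax lets the matchmaker substitute attributes of the chosen resource into the job at match time, which is how one submit file can target whichever machine currently fits best.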
TeraGrid’s batch queue prediction service is currently provided by UC-Santa Barbara’s QBETS<br />
service. Our quality assurance working group recommended that this service be described in<br />
documentation as an “experimental” service, which we implemented this quarter. We released a<br />
“beta” version of a new batch queue prediction service based on TACC’s Karnak software, which<br />
we believe will eventually replace QBETS and become a production queue prediction service.<br />
SDSC implemented basic Inca tests for the beta Karnak service. We also distributed a new<br />
version of TeraGrid’s GLUE2 capability kit, which provides the data used by our Condor-G<br />
matchmaking and Karnak services.<br />
The scheduling working group also began new plans to release scheduling workload data from<br />
the live TeraGrid systems to the public. TeraGrid operators use this data to evaluate different<br />
cluster and grid scheduling strategies. It will also be useful to researchers and developers to help<br />
them develop algorithms and tools that work on systems like TeraGrid.<br />
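Once released, such workload traces could be mined with only a few lines of code. The sketch below assumes a hypothetical whitespace-separated record layout (job id, submit, start, and end timestamps); the format TeraGrid ultimately releases may differ.

```python
# Sketch: compute the average queue wait from a scheduling workload trace.
# The field layout (job_id, submit_ts, start_ts, end_ts) is a hypothetical
# stand-in for whatever format the released data actually uses.

def average_wait(lines):
    """Return the mean queue wait (start - submit) in seconds over all records."""
    waits = []
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue  # skip blank lines and comments
        job_id, submit_ts, start_ts, end_ts = line.split()[:4]
        waits.append(int(start_ts) - int(submit_ts))
    return sum(waits) / len(waits) if waits else 0.0

trace = [
    "# job_id submit start end",
    "1 100 160 400",
    "2 120 300 900",
]
print(average_wait(trace))  # (60 + 180) / 2 = 120.0
```

Summary statistics like this are exactly what scheduling researchers would compute first when comparing strategies against the live-system baseline.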
3.3 Capability Development and Expansion<br />
On Indiana University’s Sierra cluster, users can run HPC jobs and can utilize the Nimbus cloud.<br />
On IU’s India cluster, users can run HPC jobs and utilize the Eucalyptus cloud.<br />
The McStas simulation package, used by the Neutron Science TeraGrid Gateway (NSTG)<br />
managed at ORNL, is installed on the NSTG cluster and advertised via the IIS. Currently, jobs<br />
dispatched from the Neutron Science Portal (NSP) Application Manager are made via explicit<br />
Globus GRAM calls, but future use is anticipated both from the NSP and from a separate<br />
portal infrastructure, Orbiter, being developed by TechX, Buffalo, under an SBIR grant.<br />
NCAR deployed TeraGrid’s new Local HPC Software Publishing capability and has begun<br />
advertising its local software in TeraGrid’s Integrated Information Service (IIS).<br />
A new version of Globus GRAM software, which provides TeraGrid’s remote job submission<br />
capability, has been tested on TACC and LONI systems in preparation for a TeraGrid-wide<br />
deployment. We identified issues with the software configuration, usage reporting, and our<br />
installation tools that will be corrected before the TeraGrid-wide deployment in Q3.<br />
3.4 Operational Issues<br />
LONI upgraded the MyCluster, Condor, and Condor-G software on the QueenBee system.<br />
The TACC system operators identified issues with TeraGrid’s new security policies related to<br />
TAGPMA and certificate authority (CA) certification. Specifically, the initial policies regarding<br />
user sites were too restrictive, and this issue has been corrected.<br />
TACC also raised a concern arising from non-TeraGrid sites that would like to use TeraGrid’s<br />
CTSS software distributions with non-TeraGrid (and non-TAGPMA) certificate authorities. This<br />
issue is still open, but it is not strictly relevant to TeraGrid operations.<br />
CTSS’s remote login, data movement, application and parallel application development and<br />
runtime support, and TeraGrid core integration capabilities are all used frequently (often heavily)<br />
on all TeraGrid systems. Use of the CTSS remote compute capability (which allows remote job<br />
submission) varies widely with heavy use at three sites, frequent use at four sites, and infrequent<br />
use at two sites. Only two resource providers offer the data visualization capability but both<br />
report frequent use. The data management capability is used relatively infrequently at the three<br />
sites where it is available. The science workflow support capability is available on most resources<br />
and is used relatively infrequently, but each use results in many job submissions, making it<br />
difficult to compare to the other capabilities.<br />
4 Science Gateways<br />
4.1 Highlights<br />
The CIPRES (Cyberinfrastructure for Phylogenetic Research) gateway, with 637 users in this<br />
reporting period, had more users running simulations on the TeraGrid than all gateways combined<br />
in 2009. The group received a 2.7M hour award at the June allocations meeting and was cited as a<br />
model gateway proposal. Demand is expected to far outstrip this award, however, as the award<br />
was limited only by resource availability. The team is looking into having top users make individual<br />
allocation requests.<br />
Presentations at gateway telecons featured Ultrascan (section 4.2), gateway software listing<br />
(section 4.3), the Dark Energy Survey (DES) gateway, TeraGrid’s Common User Environment,<br />
GRAM5 (section 4.3), and the Expressed Sequence Tag (EST) gateway (section 4.3). Unfunded<br />
efforts such as DES highlight projects making effective use of TeraGrid without staff assistance.<br />
4.2 Targeted Support<br />
CCSM/ESG.<br />
Version 1.0 of the NCAR Earth Science Knowledge Environment – Science Gateway Framework<br />
(ESKE-SGF) has been released. ESKE-SGF is a major software development effort aimed at<br />
building shared cyberinfrastructure for supporting several important data and knowledge<br />
management initiatives. The gateway currently supports dissemination of IPCC AR4 data, and in<br />
the near future will support the IPCC Fifth Assessment Report (AR5). Current holdings of the<br />
ESKE-SGF total about 290 TB. Total download activity from the site is about 10 TB per month,<br />
of which about 2 TB per month are transferred from deep storage at partner sites using the<br />
TeraGrid. Holdings for AR5 are anticipated to be about 400 TB of rotating storage at NCAR. The<br />
NCAR gateway is anticipated to contain references to petabyte-scale holdings at collaborating<br />
sites.<br />
NCAR and Purdue are developing an Environmental Science Gateway on the TeraGrid that<br />
integrates elements of the Purdue Climate Data Model Portal, the Earth System Grid (ESG)<br />
infrastructure at NCAR, and the multi-agency Earth System Curator effort. Currently,<br />
infrastructural iRODS elements are being set up, an ESG data node has been set up at Purdue, and<br />
publication of test data is underway to expose the test data in the NCAR ESG gateway. ESMF is<br />
currently being modified to include CIM metadata and to output the CIM XML. CCSM4 will be<br />
modified to include CIM attributes. CIM metadata for simulations has been manually coded in<br />
OWL and has been tested in a Development Portal user interface. A test instance of CCSM4 was set<br />
up on the IBM Power 575 Bluefire computer at NCAR.<br />
Task 1 for this reporting period was to provide interfaces for remote model run invocation.<br />
CCSM4 final release with the ESMF library has been installed on the Steele system at Purdue.<br />
The install is being validated. Basic test runs were successful. Web Services interfaces for<br />
CCSM4 simulations are available including support for the Purdue CCSM modeling portal and<br />
the NOAA Curator workflow client, SOAP and RESTful interfaces, and access through either a<br />
community account or individual’s TeraGrid allocation via MyProxy. The group will maintain<br />
one code base by adapting Purdue’s current CCSM3 implementation and modifying it to meet<br />
ESGC’s needs such as authentication and running on multiple sites. Purdue is working on open<br />
source licensing for the Web Service code. Design is finalized. Implementation is in progress.<br />
Next quarter the team will install the CCSM attribute patch on Steele, finish model validation and<br />
complete the CCSM4 web service implementation.<br />
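As a rough illustration of what such a RESTful model-run invocation might look like from a client's side, the sketch below builds (but does not send) an HTTP request. The endpoint path, the JSON field names, and the credential field are entirely hypothetical; they are not the actual CCSM4 web service interface.

```python
# Sketch of a RESTful model-run request such as a CCSM4 web service might
# accept. The endpoint, payload fields, and credential handling are
# hypothetical placeholders, not the real interface.
import json
import urllib.request

def build_run_request(base_url, case_name, resolution, token):
    """Build (but do not send) an HTTP POST requesting a model run."""
    payload = json.dumps({
        "case": case_name,
        "resolution": resolution,
        "credential": token,   # e.g. a MyProxy-derived credential
    }).encode("utf-8")
    return urllib.request.Request(
        base_url + "/runs",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("https://portal.example.org/ccsm4", "test_case", "T31_gx3v7", "tok")
print(req.get_method(), req.full_url)
```

In practice the same payload shape could back both the SOAP and RESTful front ends, with the credential field carrying either a community-account token or an individual user's MyProxy credential.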
Task 2 is to publish Purdue’s CCSM archive data to ESG. Data publishing has been tested to<br />
PCMDI (Program for Climate Model Diagnosis and Intercomparison) gateway from the Purdue<br />
ESG data node. Data publishing to the NCAR prototype gateway is almost complete (Figure 4-1).<br />
Example metadata has been published to the Curator developmental portal.<br />
Figure 4-1. CCSM output in ESG<br />
Next quarter the team will publish example CCSM output data to NCAR prototype gateway and<br />
investigate two paths for harvesting iRODS datasets: the FUSE file system with the ESG<br />
Publisher, or PyRODS with the same.<br />
Task 3 is to collect and publish CCSM4 model data and metadata back to ESG. The<br />
CCSM4/ESMF installation has been compiled and tested at Purdue. Several build issues were<br />
resolved with help from the ESMF support team. A group demonstration was given of METAFOR CIM<br />
(Common Metadata for Climate Modeling Digital Repositories), for which Curator is building ESG<br />
interfaces and which will be output by CCSM4 later this year. Good progress has been made on<br />
instrumenting CCSM4 such that it can produce a METAFOR-compliant CIM document<br />
(WCRP/CMIP5 metadata).<br />
Plans for next quarter include evaluating the CCSM4/ESMF CIM output capability. Scripts<br />
will be tested for collecting and publishing the metadata for CCSM4 model runs to ESG and<br />
evaluating the ATOM feed of CCSM-CIM into ESG. The design will be refined for an overall<br />
system architecture that supports publishing CCSM4 model output data to ESG.<br />
New features added to the CCSM portal include secure FTP download of CCSM model output.<br />
This is convenient for downloading large volumes of data and is integrated with the Purdue<br />
LDAP server and the portal account database. Online video tutorials have been added. Install of<br />
CCSM on Queenbee is in progress. The goal here is to dynamically submit to a TeraGrid site<br />
based on resource availability.<br />
In addition, Purdue, under RP funding, has contributed to this work by developing new features<br />
for the TeraGrid CCSM modeling portal based on feedback from previous class use.<br />
This team’s overarching goals for next quarter are continued work on software licensing and<br />
publishing the first Purdue CCSM dataset into ESG.<br />
GISolve.<br />
The GISolve team evaluated a hydrological modeling prototype to study the management<br />
strategies of water resources in Illinois and integrated a data-intensive map projection function<br />
from the USGS National Map project. Next quarter the team continues to explore new<br />
applications/communities and will integrate a data transfer service for moving data between the<br />
USGS National Map infrastructure and TeraGrid. They will also study the integration of a land<br />
use model.<br />
RENCI Science Portal.<br />
The RENCI team wrote numerous web service clients that ease the learning curve involved in<br />
writing and using web service clients that call into the RENCI Science Portal (RSP) infrastructure.<br />
Clients include Autodock (NCBR), UrQMD (Duke), BLAST+ (WUSTL), KlustaKwik (Duke),<br />
MEME/MAST (UNC), MAQ (Duke) and ProteinEffects (Duke). Several new applications have<br />
been installed: ProteinEffects, SamTools (pileup) and KlustaKwik.<br />
The team has continued working with various researchers to create large BLAST runs using the<br />
BLASTMaster Desktop and RSP web service infrastructure. The Genome Center at WUSTL<br />
continues to make use of the RSP system via web services. Web service clients have been honed<br />
to work cohesively with Jython based scripts at WUSTL. This enables bridging different<br />
programming expertise (Java at RENCI and Python at WUSTL). The RSP has been used as a<br />
proxy for launching jobs using the RSP web service infrastructure (business logic on third-party<br />
web site calls into RSP web services).<br />
Next quarter work concludes for this team. They will continue to work with researchers to utilize<br />
TeraGrid and maintain account credentials across all configured clusters (TeraGrid, OSG, and<br />
local RENCI clusters). They will refactor the backend software used to submit glidein jobs out to<br />
remote resources using the next generation Portal Glidein Factory. This will be plugin-based<br />
(OSGi). Granularity of pre-staging data and binary building will be improved.<br />
UltraScan.<br />
UltraScan (PI Prof. Borries Demeler, U Texas Health Science Center at San Antonio) is a<br />
gateway supporting the computationally intense analysis of data collected on macromolecules by<br />
an analytical ultracentrifuge. The gateway is used by researchers throughout the world and also<br />
makes use of computational resources throughout the world. While some end users have their<br />
own centrifuges and use the analysis capabilities of the gateway, others send samples to use both<br />
the centrifuge and the analysis software. The IU gateways team completed initial integration of<br />
Open Grid Computing Environment tools (application management, registration, and monitoring<br />
services) with the UltraScan portal to submit jobs on Queenbee and Ranger. The team implemented<br />
job status monitoring and a job cancellation method based on requests from the UltraScan<br />
development team. Future work includes the addition of checkpointing for long-running jobs and the<br />
use of a community account on TeraGrid resources. Resource discovery methods developed<br />
through TeraGrid’s advanced support work with the GridChem project were applied in this<br />
project as well. Final production integration of OGCE tools with UltraScan's production<br />
environment depends on final GRAM5 stability validation on Ranger.<br />
Gateways built with many different technologies have been able to make effective use of OGCE.<br />
The LEAD gateway is portlet-based. GridChem is a Java swing client side application and<br />
Ultrascan is php and perl-based. All have been able to effectively use OGCE. In addition, a large<br />
MPI application that forks off many independent runs will be a focus for improvement through<br />
TeraGrid’s ASTA program.<br />
Finally, the Ultrascan gateway infrastructure has been replicated on IU’s Quarry hosting service<br />
so that improvements can be made without impacting the production environment.<br />
Figure 4-2. UltraScan infrastructure.<br />
Arroyo.<br />
The Arroyo gateway supports the use of the widely used Arroyo adaptive optics code. Adaptive<br />
optics is used to sharpen images and comes into play in the building of actuators and sensing<br />
systems. End user groups consulted in the design of the gateway include astronomers at Caltech<br />
and the Jet Propulsion Lab. Typically a 4 day run on a workstation is needed for 4 seconds of<br />
optical imagery. The code requires a very complicated set of input parameters and one goal of the<br />
gateway is to simplify that with the web interface. Django has been used for gateway<br />
development and MySQL to keep track of users and jobs. A daemon makes HTTP calls to the web server<br />
asking if there are new jobs, then copies local information and uses Java CoG to push jobs to<br />
Queenbee. Users are updated on the progress of jobs continually throughout the process.<br />
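The daemon's poll-and-dispatch loop can be sketched as follows. The JSON layout and the `submit_to_cluster` hook are hypothetical; the real daemon pushes jobs to Queenbee via Java CoG.

```python
# Sketch of the gateway daemon's polling loop: ask the web server for new
# jobs, then hand each one to a submit function. The JSON layout and
# submit_to_cluster are hypothetical placeholders for the real interfaces.
import json

def parse_new_jobs(response_text):
    """Extract pending job records from the server's JSON reply."""
    reply = json.loads(response_text)
    return [j for j in reply.get("jobs", []) if j.get("status") == "new"]

def dispatch(response_text, submit_to_cluster):
    """Copy each new job's parameters and push it to the cluster."""
    submitted = []
    for job in parse_new_jobs(response_text):
        submit_to_cluster(job["id"], job["params"])  # via Java CoG in the real daemon
        submitted.append(job["id"])
    return submitted

reply = '{"jobs": [{"id": 7, "status": "new", "params": "r0=0.1"},' \
        ' {"id": 5, "status": "done", "params": ""}]}'
print(dispatch(reply, lambda jid, params: None))  # [7]
```

Separating the parsing step from the dispatch step keeps the loop testable without a live web server, which matters for a daemon that must run unattended.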
Figure 4-3 Adaptive optics make wavefront corrections.<br />
Figure 4-4 Effect of adaptive optics.<br />
RP Activities<br />
IU: The Expressed Sequence Tag (EST) Pipeline is a bioinformatics portal for processing<br />
expressed sequence tags that has been described in previous reports. The increased availability of<br />
next generation sequencing devices will result in considerably higher volumes of data,<br />
necessitating the use of high performance computing for analysis. During the current period of<br />
performance, the IU gateways team added Cobalt, Ranger, Queenbee and Abe to the EST<br />
pipeline resource pool, significantly reducing turnaround time. Pipeline applications<br />
(RepeatMasker, PACE, CAP3) were installed on the above resources. The team performed initial<br />
evaluations of benchmarked performance of pipeline applications on resources and selected<br />
resources more suitable for applications. The pipeline was tested with 150,000, 200,000, 300,000,<br />
1,000,000 and 2,000,000 sequence inputs to test scaling and validate the underlying OGCE job<br />
management software. The EST Pipeline is based on the SWARM Web Service that provides a<br />
web service interface to clients and also manages the bulk job submission using the Birdbath API<br />
to submit to Condor. Jobs can range in length from milliseconds to days. Short jobs are run<br />
locally, longer jobs are submitted to remote supercomputers using Condor-G. SWARM load<br />
balances based on queues on the remote machines. The workflow that is to be executed by the<br />
pipeline is configured using a PHP based gateway that allows users to upload input data and<br />
select programs to run.<br />
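The routing rule described above — short jobs run locally, longer jobs go to the least-loaded remote machine — can be sketched as a small dispatch function. The 60-second threshold and the queue-depth dictionary are illustrative assumptions, not SWARM's actual parameters.

```python
# Sketch of SWARM-style job routing: run short jobs locally, send longer
# jobs to the remote resource with the shortest queue. The threshold and
# the queue-depth representation are illustrative assumptions.

def choose_resource(est_runtime_s, queue_depths, local_threshold_s=60):
    """Pick 'local' for short jobs, else the remote site with the smallest queue."""
    if est_runtime_s <= local_threshold_s:
        return "local"
    # Load-balance across remote machines by current queue depth.
    return min(queue_depths, key=queue_depths.get)

queues = {"ranger": 40, "queenbee": 12, "abe": 25}
print(choose_resource(0.5, queues))    # local  (milliseconds-scale job)
print(choose_resource(7200, queues))   # queenbee  (shortest queue)
```

With jobs ranging from milliseconds to days, even a simple rule like this avoids paying remote-submission overhead on tiny jobs while spreading the long ones across Condor-G targets.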
OLAS: This is a new effort in collaboration with Prof. Craig Mattocks, University of North<br />
Carolina, to support Gateway-based hurricane and storm surge modeling. Dr. Mattocks's ASTA<br />
request has been approved, and the IU team is awaiting official notification.<br />
BioVLAB: This bioinformatics gateway led by Prof. Sun Kim at IU will rely heavily on OGCE<br />
job management and workflow tools. During the current period, the IU gateways team assisted<br />
Prof. Kim's startup allocation request and assisted his team with pilot TeraGrid jobs. Prof. Kim<br />
plans to submit a larger allocation and advanced support request proposal during Quarter 3.<br />
ORNL: The ORNL Resource Provider (RP) continues to provide specific support to the<br />
Neutron Science TeraGrid Gateway as well as the Earth Systems Grid (ESG) effort. For the<br />
NSTG, the main effort has focused on preparing the NSTG (gateway and cluster) to transition to<br />
long-term, stable, contributory operations within the Spallation Neutron Source (SNS). For the<br />
ESG, the entire collaboration is finalizing preparations for support for the next round of very high<br />
profile IPCC runs. ESG plans to be the principal source for simulation data and collection support<br />
for the IPCC-5 effort.<br />
Purdue: For the Year 5 GIG TeraGrid Climate Science Gateway project, the Purdue/NCAR team is<br />
making progress and is on track according to the project plan. Bi-weekly conference calls were held<br />
between Purdue and NCAR/NOAA collaborators to discuss project design and implementation<br />
details. Purdue RP has successfully installed the CCSM4 final release with ESMF libraries on the<br />
Steele cluster and is validating the installation. A common set of CCSM4 web service interfaces<br />
has been defined and is being implemented to support both the CCSM modeling portal at Purdue<br />
and the Curator workflow client at NOAA. A shared SVN repository has been set up for<br />
collaborative code development of Purdue/NCAR. Purdue and NCAR have investigated iRODS<br />
features in the design of the mechanism for publishing CCSM archive data to the Earth System<br />
Grid gateway. Purdue has also set up an ESG data node on a production server and successfully<br />
published example CCSM model data to the ESG PCMDI gateway.<br />
UC/ANL: The U Chicago/Argonne RP has continued maintenance of the TeraGrid Visualization<br />
Gateway. The team also continued previous work done with the SIDGrid Gateway, which<br />
generalized the application framework to support Swift for the execution and management of<br />
large-scale science and engineering workflows. Using this gadget-workflow framework, the U<br />
Chicago/Argonne team has continued development and support of the Open Protein Simulator<br />
(OOPS) gateway, a gadget-based science portal for a science community that focuses on protein<br />
3D structure simulation and prediction. Scientists in this community have been using this<br />
gateway to participate in the on-going CASP9 protein structure prediction challenge, sponsored<br />
by the US National Library of Medicine (NIH/NLM).<br />
4.3 Gateway Infrastructure and Services<br />
Documentation and Gateway registry.<br />
Most effort from the SDSC team has been on bringing TeraGrid Liferay up to speed. The high<br />
number of bad links has been brought under control. No substantive changes to content have<br />
been implemented in Science Gateways. Science Gateways are represented in the<br />
Comprehensive Software Search, but require further population from gateways. Only the RENCI<br />
science portal software is registered at this time.<br />
The RENCI team has developed Flex-based web applications that generate XML<br />
which can be digested by the TeraGrid integrated information services (IIS) software listings<br />
registry at http://www.renci.org/~jdr0887/kit-registration/index.html. The source is available at<br />
http://www.renci.org/~jdr0887/git-repo/kit-registration.git. Next quarter the team will work with<br />
TeraGrid to resolve any issues arising from the Gateway Application Web Service Registry. They<br />
will continue to support other Gateways in publishing their web service capabilities into<br />
TeraGrid’s IIS and continue to publish RENCI software listing information into TG IIS.<br />
Gateway User Count.<br />
Science Gateway credential management documentation was updated to describe use of short-lived<br />
versus long-lived certificates and MyProxy by gateways<br />
(http://teragridforum.org/mediawiki/index.php?title=Science_Gateway_Credential_Management).<br />
The NCSA user count team worked with Yan Liu (NCSA) on security improvements to<br />
SimpleGrid. “TeraGrid Science Gateway AAAA Model: Implementation and Lessons Learned”<br />
was accepted for TeraGrid 10. GRAM proxy refresh was implemented in the jglobus GramJob<br />
API to enable jglobus-based gateways to renew proxy credentials for running GRAM jobs.<br />
Next quarter the team will present their TeraGrid 10 paper. They will assist with inclusion of<br />
GRAM5 and SSH support for gateway attributes in CTSS and will support gateway delivery of<br />
attributes to RPs. Current status of this work is available at:<br />
http://teragridforum.org/mediawiki/index.php?title=Science_Gateway_Credential_with_Attributes_Status<br />
and http://info.teragrid.org/web-apps/html/kit-reg-v1/science-gateway.teragrid.org-4.2.0/.<br />
The NCSA team will support AMIE integration by RP accounting administrators. This has<br />
been completed by NCSA, NICS, LONI/LSU, and NCAR; integration at other RPs is planned.<br />
The team will also support use of community shell<br />
(commsh, http://teragridforum.org/mediawiki/index.php?title=Community_Shell) by RPs.<br />
At ORNL, the Neutron Science TeraGrid Gateway (NSTG) does support attributes at this point<br />
for jobs dispatched from the Neutron Science portal; however, the NSTG cluster (where most<br />
gateway jobs were queued) did not complete the deployment of the software to support this useful<br />
reporting to the TGCDB until early in the third quarter.<br />
SimpleGrid.<br />
Usability improvements include integrating interactive command access and tutorial instructions<br />
into a single Web interface. The team has developed a generic Web interface generation module<br />
to streamline the process of creating gateway application Web service and Web user interface by<br />
leveraging advanced Web 2.0 technologies (i.e., Yahoo UI and inputEx). They have developed a<br />
configuration tool to automate the management of package installation as well as server and Web<br />
portal account as a set of system command scripts. The team has explored a solution using<br />
TeraGrid information services for pulling usage data from the TeraGrid Central Database<br />
(TGCDB). The enhancement of SimpleGrid Web interface and package management enables<br />
more efficient delivery of the knowledge of gateway application development and the tutorial<br />
package, respectively.<br />
Next quarter the team will continue to improve the usability of SimpleGrid by evaluating the<br />
effectiveness of newly added features from user experience and feedback collected in hands-on tutorials.<br />
They will continue to explore solutions to gateway user-level resource usage reporting<br />
and work with TeraGrid information service group to extract gateway user-level data from<br />
TGCDB.<br />
Helpdesk support.<br />
Staff resolved a ticket (#185005) from the University of Michigan on emulating Windows<br />
applications on TeraGrid using Wine technology. They continued to support CIPRES gateway<br />
development on using local job scheduler features and continued to support NBCR gateway<br />
development on solving a meta-scheduler problem.<br />
Improved standardization of community accounts.<br />
The NICS team has tested the commsh community tool shell on Kraken with GRAM4. They have<br />
completed the commsh configuration and testing for GridAMP gateway. The community account<br />
“gridamp” now has commsh as the production login shell for gateway access. Use of commsh<br />
will provide a reasonably secure “sandbox” for gateway developers by limiting the commands<br />
and scripts that can be run by the community account.<br />
Next quarter the team will investigate the use of chroot with commsh and conduct security testing<br />
of GRAM4 with commsh in an attempt to identify any additional vulnerabilities. A presentation<br />
on gateway security and commsh will be given on a TG Science Gateway call. A paper will be<br />
completed on the installation, configuration, and use of commsh. Securing community accounts is<br />
an important function for TeraGrid and the gateway program. This work furthers the effort of<br />
securing gateways and their use of community accounts.<br />
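The sandboxing idea behind commsh — a login shell that only executes commands a community account is allowed to run — can be illustrated with a toy whitelist check. The allowed set below is a hypothetical example, not commsh's actual configuration.

```python
# Toy illustration of the command-whitelisting idea behind a community
# shell: only commands whose program is on an allowed list may run.
# The ALLOWED set is a hypothetical example, not commsh's real policy.
import shlex

ALLOWED = {"globus-job-run", "globusrun-ws", "ls", "cat"}

def permitted(cmdline):
    """Return True only if the command's program is on the whitelist."""
    try:
        argv = shlex.split(cmdline)
    except ValueError:
        return False  # reject malformed quoting outright
    return bool(argv) and argv[0] in ALLOWED

print(permitted("ls -l /tmp"))   # True
print(permitted("rm -rf /"))     # False
```

A real restricted shell must also guard against shell metacharacters, environment manipulation, and escapes through permitted programs, which is why the team's chroot investigation and security testing matter.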
Summer Interns. IU's Science Gateway Advanced Support Team added two summer interns<br />
during the period of performance: Suresh Kumar Deivasigamani and Patanachai Tangchaisin.<br />
Both are enrolled as IU Computer Science Master's Students. Deivasigamani is developing<br />
JavaScript client libraries, REST services, and iGoogle Gadgets to provide interactive interfaces<br />
to TeraGrid information services. Tangchaisin is assisting with GRAM5 testing (see below),<br />
developing and running realistic test suites and developing iGoogle Gadgets for visualizing<br />
results.<br />
GRAM 5 preparation.<br />
The IU gateways team developed a testing suite and ran tests on the GRAM5 hosts Ranger,<br />
Lonestar, and Queenbee with different batch sizes (5, 10, 20, 50, 100, 500, 1000) to assist with<br />
validation. Initial problems were found (specifically, resources associated with failed jobs were<br />
not being released) with Ranger and Queenbee installations. These problems were resolved with<br />
the Globus team in multiple iterations of testing. Tests of the latest GRAM5 releases on Ranger and<br />
Queenbee are running adequately, but the IU team is continuing 2-3 stability tests per day. The<br />
harness is being used for the GRAM5 testing.<br />
The testing suite is packaged and made available on the Open Grid Computing Environments'<br />
SourceForge site for any gateway's usage. The testing suite is currently being used for scalability<br />
testing but will be expanded to run reliability tests if a gateway chooses to augment Inca testing.<br />
The testing suite is modularized to extend to other job managers, including Condor JDRAMMA<br />
interfaces.<br />
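A batch-submission stress sweep of the kind described can be sketched as below. The `submit` and `released` callables are stubs standing in for real GRAM5 client calls; the real suite submits actual jobs and checks that the resources associated with each one are freed.

```python
# Sketch of a GRAM5-style scalability harness: submit batches of jobs of
# increasing size, then verify that every job's resources were released.
# submit() and released() are stubs standing in for real GRAM5 client calls.

def run_batch(n, submit, released):
    """Submit n jobs and report how many failed to release resources."""
    handles = [submit(i) for i in range(n)]
    return sum(1 for h in handles if not released(h))

def stress(batches, submit, released):
    """Return {batch_size: leaked_count} across the whole sweep."""
    return {n: run_batch(n, submit, released) for n in batches}

# Stub backend: pretend every job releases cleanly.
result = stress([5, 10, 20], submit=lambda i: i, released=lambda h: True)
print(result)  # {5: 0, 10: 0, 20: 0}
```

A nonzero leaked count at larger batch sizes is exactly the failure mode the IU team found in the early Ranger and Queenbee installations, where resources tied to failed jobs were not being released.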
The larger TeraGrid team has made use of the TeraGrid User Portal’s user forum capability for<br />
GRAM5 deployment discussions and has provided feedback to the TGUP team on the utility of<br />
the user forum. GRAM5 experiences have been exchanged with the Open Science Grid as well.<br />
4.4 RP Operations: SGW Operations<br />
During this quarter RPs worked with developers and providers of Scientific Gateways on specific<br />
RP resources in order to ensure that operation of each gateway is secure and that there is a means<br />
for tracking accountability for gateway usage.<br />
IU: IU added two new virtual machine images.<br />
TACC: TACC added support for the ogce community account.<br />
NCAR: While the Asteroseismic Modeling Portal (AMP) is not receiving advanced support in<br />
PY5, NCAR staff have consulted with NICS staff on the implementation of commsh and its use<br />
by gateway developers.<br />
ORNL: During this quarter, the ORNL RP continued its efforts to support the Neutron Science<br />
TeraGrid Gateway. The ORNL RP focused on making dispatching jobs to the TeraGrid a more<br />
routine execution target for the Neutron Science and Orbiter portals. Following a DOE internal<br />
assessment, the SNS (actually the ORNL enterprise) network has instituted default deny for<br />
outgoing network connections. Thus exceptions from SNS host(s) to TeraGrid subnets must now<br />
be requested on a pairwise, case-by-case basis. In the long term this is not tenable. It would be<br />
easier if ORNL did not have this new policy; if the TeraGrid had a common address space (i.e. a<br />
single router rule for a FW exception) or the number of ports required for common TeraGrid<br />
services such as Gram and GridFTP could be reasonably restricted to a smaller number of high<br />
numbered ports. A long-term solution is not clear at this time, but is still being worked.<br />
Purdue: The Purdue CCSM portal has been successfully used in a graduate class (POL 520/EAS<br />
591, Purdue) in the fall semester, 2009. After the class, the Purdue RP staff worked closely with<br />
the students and instructors to collect feedback on how to improve the portal to better meet the<br />
instructional needs. In Q2 2010, several new features were developed and deployed to the portal<br />
including secure FTP download of model output and online video tutorials demonstrating the<br />
capabilities of the portal. In addition, progress has been made in setting up the model at Queenbee<br />
to allow dynamic job submission based on resource availability. These new capabilities will<br />
significantly improve the user experience and ensure better support for another graduate class in<br />
the fall semester 2010.<br />
UC/ANL: The UChicago/Argonne RP hosts and provides operating system, security and other<br />
software support for the two machines that act as the production and development Visualization<br />
Gateway resources, as well as the machine that serves the Open Life Science Gateway. The<br />
UChicago/Argonne team has continued work on their gadget-workflow framework. Initial work<br />
on a package builder for the framework, which enables developers to reuse the framework to plug<br />
in their domain-specific workflows, has been completed. To simplify the handling of input, two<br />
intuitive and easy-to-use GUIs were developed. Users can simply type in command-line examples<br />
or upload their Swift scripts to create XML descriptions for their applications. Also implemented<br />
was support for determining output products from workflow descriptions and representing them<br />
in the portal graphically. The output view also includes an output file browser, execution<br />
parameters view, and workflow progress view. Users can customize the output views when<br />
defining their workflows by selecting the appropriate output files and modifying the general<br />
transform style sheets. Based on their experience developing the gadget-workflow framework, the<br />
UChicago/Argonne RP team wrote the paper "Accelerating Science Gateway Development with<br />
Web 2.0 and Swift" which was submitted, and accepted, to the TeraGrid 2010 Conference.<br />
5 Users and User Support<br />
User support in the TeraGrid comprises interlocking activities in several project areas:<br />
documentation, allocations and accounting services in the User Facing Projects and Core<br />
Services area (Section 6 of this report); the TeraGrid Operations Center in the Networking,<br />
Operations and Security area (Section 8); user outreach and training in the Education, Outreach<br />
and Training/External Relations area (Section 10); and the Frontline User Support and Advanced<br />
User Support services described in the current section. In addition, specialized user support<br />
services are provided by staff from the Science Gateways (Section 4) and Data and Visualization<br />
(Section 7) areas. The synergy of all these aspects of user support is coordinated by the User<br />
Interaction Council which consists of the GIG Director of Science and the directors of the above<br />
mentioned areas.<br />
5.1 Highlights<br />
The User Support (US) and Advanced User Support (AUS) staff continued their highly rated<br />
services for TeraGrid users at every level. As needed, US and AUS staff collaborated with the<br />
Data and Visualization, Science Gateways, EOT and User-Facing Projects and Core Services<br />
(UFC) staff to provide comprehensive support for TeraGrid users.<br />
An important collaboration between UFC and US staff has resulted in the final test version of a<br />
new “User Services Contact Administration and Logging” LifeRay portlet, which will enable US<br />
staff to manage the assignment of personal contacts not only to TRAC grant awardees after each<br />
quarterly meeting, but also to Startup and Education awardees whose usage patterns indicate a<br />
possible need for proactive assistance. Building on an application previously developed by UFC<br />
for AUS project planning and reporting, this web page will also enable consultants to take notes<br />
of their interactions with the users, greatly enhancing the User Engagement function of the User<br />
Champions program. The application is now in final testing, and has been used for assigning<br />
contacts to the awardees of the June 2010 TRAC meeting.<br />
Based on work done on ASTA and other AUS projects, AUS staff continued to present papers at<br />
conferences, and submit and publish journal papers. Multiple AUS and US staff were involved in<br />
organizing the TG10 conference. This included submitting abstracts/papers/tutorials for the<br />
conference, reviewing abstracts/papers for the conference, selecting invited speakers for the<br />
science track etc.<br />
As a product of the Advanced User Support project on Performance and Analysis Tools, Mahin<br />
Mahmoodi of PSC created an online tutorial TAU: Parallel Performance Profiling and Tracing<br />
Tool that was made available on the TeraGrid AUS projects website during the last week of June.<br />
TAU is an advanced open-source performance tool that can be used to profile scientific<br />
applications. The tutorial provides TAU usage information for Kraken and Ranger. It introduces<br />
commonly used features, provides quick references, and helps users successfully start using<br />
TAU on the systems. The tutorial also provides a guide for navigating visualization features of<br />
TAU which are essential in extracting and analyzing performance data. The material is available<br />
at https://www.teragrid.org/web/user-support/tau.<br />
The SDSC User Support team helped move Dash into production by installing software,<br />
developing documentation, and setting up the user environment (using modules). Support has<br />
been provided to users of Dash to help with porting codes and running jobs using Dash’s unique<br />
features (flash memory, vSMP). User support personnel also tested performance of several codes<br />
on Dash (NAMD, GAMESS, VASP, ABAQUS).<br />
The TACC Visualization and Data Analysis (VDA) group has been assisting Dr. Clint Dawson’s<br />
(UT Austin, Center for Subsurface Modeling) visualization efforts related to the Deepwater<br />
Horizon oil spill. Dawson’s team is simulating the path of oil from the BP Deepwater Horizon oil<br />
spill using simulations of different past hurricanes (IKE, GUSTAV, RITA) computed using the<br />
ADCIRC coastal circulation model coupled to the unstructured SWAN wave model. The<br />
visualization effort by TACC staff has been focused on the overlay of particle movement and<br />
satellite or aerial imaging data. The particles in the visualization represent the oil spill; their<br />
positions are either hypothetical or reflect the observed position of the oil on the surface. The data has been<br />
visualized using Longhorn and MINERVA, an open-source geospatial application<br />
primarily developed by TACC members. The data is generated daily and is approximately 100<br />
GB in size.<br />
Members of the TACC VDA group also assisted Dr. Thomas Fogal of SCI (University of Utah),<br />
who recently used Longhorn to perform some of the largest visualizations ever accomplished.<br />
Fogal wrote, “Longhorn is an impressive machine: using only 256 processes, we<br />
performed some of the largest-ever published volume renderings. This was in major part due to<br />
the large number of GPUs available, in addition to helpful staff that aided us in accessing them<br />
for our visualization work.”<br />
5.2 User Engagement<br />
User engagement is done by various means. Under the User Champions program, RP consultants<br />
are assigned to each TRAC award right after the results of an allocations meeting become known.<br />
The assignment takes place by discussion in the user services working group, taking into account<br />
the distribution of an allocation across RP sites and machines, and the affinity between the group<br />
and the consultants based on expertise, previous history, and institutional proximity. The assigned<br />
consultant contacts the user group as their champion within the TeraGrid, and seeks to learn about<br />
their plans and issues. We leverage the EOT area’s Campus Champions program (Section 10) to<br />
fulfill this same contact role with respect to users on their campuses, especially for Startup and<br />
Education grants. Campus Champions are enrolled as members of the user services working<br />
group, and are thus trained to become “on-site consultants” extending the reach of TeraGrid<br />
support. To help users efficiently scale their codes and ancillary data processing flows to the tens<br />
of thousands of processes (on Ranger and Kraken), the Advanced User Support and User Support<br />
areas jointly operate the extreme scalability working group. Led by Nick Nystrom (PSC), this<br />
team brings together staff experts from all RP sites, external partners working to develop<br />
petascale applications and tools, as well as TeraGrid users and awardees of the NSF PetaApps,<br />
SDCI and STCI programs.<br />
In response to an increasing number of tickets requesting assistance with setting<br />
up login access to Steele and Condor, as well as requests from the Campus Champions group for a<br />
“simple” way for new users to log into all resources, Purdue RP staff created three step-by-step tutorials<br />
(including graphics) that show how to download and use the GSI-SSHTerm Java application, how<br />
to customize and save the resource settings, and how to use the built-in sftp file transfer<br />
application. Two of these tutorial documents are finalized and currently located on the Campus<br />
Champions Wiki under the “Helping TeraGrid Users” heading<br />
(http://www.teragridforum.org/mediawiki/index.php?title=Campus_Champions#Helping_TeraGr<br />
id_users). These tutorials will also be made available soon on the local Purdue TeraGrid website.<br />
The third document is currently in review and will be published soon. Purdue RP Staff also<br />
identified a need to assist Campus Champions with identifying and matching their users’ research<br />
requirements to TeraGrid resources. Based on this need, Purdue RP Staff created and presented a<br />
session on “Matching Users to Resources on the TeraGrid” for the Campus Champions.<br />
5.3 Frontline User Support<br />
In this section we describe the frontline user support provided by TeraGrid user support staff. In<br />
addition, TeraGrid provides online documentation, a knowledgebase and training support for<br />
“self-serve” help; these are explained in detail in §6. By means of the User Interaction Council,<br />
which is chaired by the GIG Director of Science, Frontline and Advanced support functions are<br />
operationally integrated with the TeraGrid Operations Center (TOC) and the “self-serve” online<br />
help team.<br />
The TOC help desk is the first tier of user support for the TeraGrid. It is staffed 24X7 at NCSA.<br />
All TeraGrid users are asked to submit problem reports to the TOC via email to<br />
help@teragrid.<strong>org</strong>, by web form from the TeraGrid User Portal, or via phone (866-907-2382).<br />
The TOC creates a trouble ticket for each problem reported, and tracks its resolution until it is<br />
closed. The user is automatically informed that a ticket has been opened and advised of the next<br />
steps, as well as of a ticket number enabling follow-up.<br />
If a ticket cannot be resolved within one hour at the TOC itself, it gets passed on to the second<br />
tier of user support. It is referred to the RP where the problem has been experienced, or, if the<br />
problem appears to be TeraGrid-wide, it is reported to the full user services working group.<br />
There, it is assigned to a user consultant who begins by discussing the matter with the user, in<br />
order to narrow the diagnosis and to understand if the issue lies with the user’s application, with<br />
the RP or TeraGrid-wide infrastructure, or both. Tiger teams are formed by the user services<br />
working group as necessary, to diagnose and solve complex problems that implicate several RP<br />
sites or fields of expertise. The personal contacts assigned to TRAC awarded projects (via the<br />
User Champions program), as well as the Campus Champions, proactively seek to develop<br />
porting and optimization plans with their assigned users. They act as proxies for “their” users<br />
with respect to managers, systems administrators and vendors at their local RP site and (via the<br />
user services working group) at other RP sites, ensuring support for any special arrangements that<br />
may be required for debugging, testing and benchmarking new or upgraded user codes.<br />
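The tiered routing described above can be modeled as a simple function of ticket age and scope; the names below are illustrative, not the actual TOC ticketing system.<br />

```python
# Toy model of the TOC's tiered ticket routing: tickets unresolved at
# the first tier within one hour escalate to the RP site consultant or,
# for TeraGrid-wide problems, to the full user services working group.
# Thresholds mirror the text; the code itself is an illustration.
FIRST_TIER_LIMIT_SECONDS = 3600  # one hour at the TOC


def route_ticket(age_seconds: int, teragrid_wide: bool) -> str:
    """Return which support tier currently owns the ticket."""
    if age_seconds <= FIRST_TIER_LIMIT_SECONDS:
        return "TOC help desk"
    if teragrid_wide:
        return "user services working group"
    return "RP site consultant"
```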
As needed, consultants may request the assistance of advanced user support staff (the third tier of<br />
user support), systems, or vendor experts. The immediate goal is to ensure that the user can<br />
resume his or her scientific work as soon as possible, even if addressing the root cause requires a<br />
longer-term effort. When a proposed root cause solution becomes available, we contact the<br />
affected users again and request their participation in its testing. When a user’s problem requires<br />
active assistance from M.S.- or Ph.D.-level staff exceeding approximately one person-month of<br />
effort, the issue is referred to the third tier of user support, i.e. for Advanced User Support<br />
described in §5.5 following the guidance of the allocations process. In terms of experience and<br />
qualifications, frontline support consultants range from the B.S. level to the Ph.D. level, with<br />
many of the latter dividing their effort between frontline and advanced user support.<br />
The framework for coordinating frontline support across the TeraGrid is the user services<br />
working group which assembles staff from all the RP sites under the leadership of the Area<br />
Director for User Support. It enables each site’s staff to leverage the experience and expertise of<br />
all others, and a concerted approach to solving cross-site user problems based on sharing and<br />
adopting best practices. The working group functions by means of an email list and weekly<br />
conference calls. It coordinates the User Champions and Campus Champions programs that, as<br />
we mentioned above, provide user engagement at the personal level. By involving the Champions<br />
in resolving tickets submitted by “their” users, we can improve the promptness and quality of<br />
ticket resolution, and often prevent problems from occurring in the first place or from<br />
recurring.<br />
5.3.1 Helping users port software into a TeraGrid environment<br />
In Table 5.1 below we list some instances of assistance provided by our<br />
consultants to enable or improve users’ work on TeraGrid resources, which did not rise to the one<br />
FTE-month involvement level that defines “advanced” user support.<br />
Table 5.1. Examples of Frontline User Consulting Porting Software on TeraGrid Resources<br />
PI Name | RP(s) Involved | Name of Software | Nature of Improvement<br />
Steve Rounsley, U. Arizona | PSC | Abyss | Helped a group new to TeraGrid get genome assembly software running on Pople.<br />
Greg Voth, U. Utah | PSC, TACC | Data generated from NAMD runs | Assisted user in transferring large dataset (> 1 TB) from retiring Bigben and PSC archival storage to the archive at TACC.<br />
Mark Ellison, Ursinus College | PSC | Gaussian | Helped user overcome a problem with unexpected Gaussian termination by setting a fault-tolerant Gaussian runtime option.<br />
Doran Naveh, CMU | TACC | Siesta 3.0-b | Parallel versions of Siesta and Transiesta compiled with support for Scalapack, GotoBlas and netCDF. Code tested and made available to users via a module with access control for licensing purposes.<br />
Jonas Baltrusaitis, University of Iowa | Purdue | TURBOMOLE | Installed and modified run script to work in the Steele environment.<br />
Srinivasa Rao Jampani, Drexel University | Purdue | Gromacs | Assisted the user in setting up a script to run Gromacs correctly on Steele.<br />
Peter Peumans, Stanford University | Purdue | Lumerical | Assisted the user in installing and configuring the software to run on Steele using the user’s license.<br />
Alenka Luzar, Virginia Commonwealth University | Purdue | LAMMPS | Assisted the user in installing and customizing a specific version of the software.<br />
Shahriar Afkhami, New Jersey Institute of Technology | Purdue | Custom code | Assisted the user in compiling and running custom 32-bit serial code on Steele.<br />
5.3.2 More Frequent User Support Issues/Questions<br />
Among the system- or site-specific user issues referred to the RPs for resolution, the most frequent<br />
had to do with login/access and account management (e.g. TeraGrid Portal password<br />
versus resource-specific passwords, password resets, security credentials, adding and removing<br />
users, locating environment variables, allocations on machines leaving and joining the TeraGrid).<br />
In decreasing order of frequency, issues with job queuing (e.g. which queues to use for various<br />
types of jobs, throughput issues, scripting and execution issues); software availability and use<br />
(finding, setting up, building, optimizing and running); system availability and performance<br />
problems (e.g. memory limit questions, compute node failures, I/O timeouts, requests to kill<br />
jobs); file systems (e.g. permissions and quotas, corrupted files), and wide area data transfer and<br />
archive systems questions were also encountered.<br />
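The frequency ordering above can be derived mechanically from categorized tickets; the category labels below follow the text, while the example counts are invented for illustration.<br />

```python
# Rank ticket categories by frequency, as in the ordering given above.
# Category labels come from the report; the example data is invented.
from collections import Counter


def rank_categories(ticket_categories):
    """Return category labels ordered from most to least frequent."""
    return [cat for cat, _ in Counter(ticket_categories).most_common()]
```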
5.3.3 Trouble Ticket Statistics<br />
Table 5.2 shows how long tickets remained open during this quarter, that is, how long it took for<br />
TeraGrid user support staff to provide a diagnostic, workaround or solution.<br />
Table 5.2. Number of Q2, 2010 tickets that remained “open” for the indicated time bins.<br />
Open for 1 hour, 1 day, 2 days, 1 week, 2 weeks, 4 weeks 190<br />
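The open-duration bins of Table 5.2 can be reproduced from raw resolution times with a simple binning step; the bin edges follow the table header, while the durations in the usage example are hypothetical.<br />

```python
# Bin ticket resolution times into the open-duration bins used by
# Table 5.2 (1 hour, 1 day, 2 days, 1 week, 2 weeks, 4 weeks, longer).
# Only the bin edges come from the table; example data is made up.
import bisect

HOUR = 3600
DAY = 24 * HOUR
WEEK = 7 * DAY
EDGES = [HOUR, DAY, 2 * DAY, WEEK, 2 * WEEK, 4 * WEEK]
LABELS = ["<=1 hour", "<=1 day", "<=2 days", "<=1 week",
          "<=2 weeks", "<=4 weeks", ">4 weeks"]


def bin_durations(durations_seconds):
    """Count how many tickets fall into each open-duration bin."""
    counts = dict.fromkeys(LABELS, 0)
    for d in durations_seconds:
        # bisect_left makes each bin's upper edge inclusive.
        counts[LABELS[bisect.bisect_left(EDGES, d)]] += 1
    return counts
```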
5.4 RP Operations: User Services<br />
5.4.1 Consulting Services<br />
Ongoing daily operational user support was conducted at the RPs. Consulting is the second tier of<br />
TG’s three-tier user support, with the first tier being the TG Operations Center and the third tier<br />
being the TG Advanced User Support. TG RP Consulting staff provide real-time, frontline<br />
support for their local RP’s resources in response to users’ requests to the TeraGrid Operations<br />
Center and RP consulting systems. This support includes assistance such as allocations issues,<br />
initial login issues for new users, debugging jobs, data transfer, and compiler support. Consulting<br />
support typically engages a user for a relatively short period of time, and may pass a user on to<br />
Advanced Support if longer-term assistance is required. Some examples of site-specific assistance<br />
achievements and their impact on users’ research progress follow.<br />
Under the User Champions program (section 5.2) TACC staff contacted 51 PIs with new or<br />
renewal awards granted for the allocation period beginning April 1, 2010. Responses were<br />
received from 13 PIs and TACC staff members resolved any user issues or referred them to the<br />
appropriate site for resolution. TACC also initiated contact with 151 “startup” principal<br />
investigators whose allocations began in March, April, and May of 2010. Responses were<br />
received from 39 PIs and TACC staff members resolved any user issues or referred them to the<br />
appropriate site for resolution. TACC staff continued to facilitate special user requests, including<br />
resource usage outside normal queue limits, high priority in order to meet deadlines, and special<br />
allocation requests to prevent user research projects from coming to a halt.<br />
SDSC User Support staff tested various research (NAMD, GAMESS, VASP) and commercial<br />
(ABAQUS) codes on the Dash computational resource. Tests included checking performance<br />
benefits of using Dash’s unique features (flash memory and vSMP technology). Preliminary tests<br />
with ABAQUS showed an improvement of up to 50% with use of flash for I/O. Information from<br />
early user experiences and SDSC support staff testing were used to develop an introductory<br />
tutorial which has been accepted for the TeraGrid 2010 conference.<br />
Purdue RP staff member Kim Dillman took a leading role in the testing of the new User Services<br />
Contact Administration and Logging web page (see Section 5.1). She currently serves as a<br />
“volunteer” Startup Reviewer for new TeraGrid startup/renewal/supplemental allocation requests<br />
and has reviewed 63 startup requests during Q2. Leveraging her experience working as the RP<br />
support expert and her contacts with many campus champions, Kim also assisted Campus<br />
Champions with questions and issues in support of their institutions. The issues ranged from managing<br />
their TeraGrid allocations for campus users to identifying appropriate TeraGrid resources for<br />
specific research projects, for campuses including Bingham University, University of<br />
Arkansas, and Pittsburg State University (Kansas).<br />
5.4.2 Helpdesk<br />
TG RPs provided local helpdesk first-tier support that integrates with the TeraGrid Operations<br />
Center. This support provides rapid email and phone responses to questions<br />
and trouble reports from the user community. Helpdesk staff direct users’ queries to Consulting<br />
or Advanced User Support in cases where second-tier or third-tier expertise is required.<br />
5.5 Advanced User Support<br />
Advanced User Support (AUS) staff are located at the RP sites and are coordinated by the AUS<br />
Area Director. The highest level of long-term user support for a TeraGrid user or user group is<br />
provided by the AUS staff. The overall advanced support effort is categorized into three sub-efforts:<br />
Advanced Support for TeraGrid Applications (ASTA), Advanced Support for Projects<br />
(ASP), and Advanced Support for EOT (ASEOT).<br />
Operations of the AUS area are coordinated by the AUS Area Director jointly with the AUS Points<br />
of Contact (POCs), who are in most cases the managers or group leaders of the scientific<br />
computing applications groups at the RP sites. All management and coordination<br />
issues, such as analyzing TRAC ASTA review reports, contacting PIs to better understand<br />
ASTA plans, matching appropriate AUS staff to ASTAs, discussing/initiating new ASPs,<br />
matching AUS staff to ASPs, discussing ASEOT topics, and preparing the program plan and annual<br />
reports, are handled by this group. The group holds biweekly teleconferences for management<br />
and coordination issues and continues discussions via email as needed.<br />
In alternate weeks, a technical tele/web-conference is hosted using ReadyTalk for all the AUS<br />
technical staff as well as the AUS POCs. At this conference, one or two technical<br />
presentations are made by AUS technical staff covering progress of ASTA projects, progress of<br />
ASPs, and any other technical issues of interest to all the AUS technical staff. ReadyTalk<br />
allows all AUS staff attending these biweekly conferences to view the slides and to<br />
listen to and participate in the technical presentations. This fosters an environment for collaboration<br />
among the AUS staff located at the various RP sites and allows sharing of technical insight<br />
that may have impact on multiple ASTAs or ASPs. The presentation slides and audio recordings<br />
are archived and made publicly available for interested staff as well as users at<br />
http://teragridforum.org/mediawiki/index.php?title=Technical_presentations.<br />
5.5.1 Advanced Support for TeraGrid Applications (ASTA)<br />
Advanced Support for TeraGrid Applications (ASTA) efforts allow AUS staff to work with a user<br />
for a period of a few months to a year. Activities include porting applications, implementing<br />
algorithmic enhancements, implementing parallel programming methods, incorporating math<br />
libraries, improving the scalability of codes to higher core counts, optimizing codes to utilize<br />
specific resources, enhancing scientific workflows, providing help for science gateways and<br />
performing visualization and data analysis projects. To receive ASTA support, TeraGrid users<br />
submit a request as part of their annual, Supplemental, or Startup resource allocation proposal.<br />
The recommendation score provided by the reviewers is taken into account, and after discussion<br />
with the user to establish a well-defined ASTA workplan, AUS staff provide ASTA support to the<br />
user. This support often impacts a larger number of users who are part of the PI’s team. Since late<br />
2008, in addition to the quarterly TRAC allocations, ASTA support is also available to those<br />
requesting Supplements or Startup allocations. The TeraGrid-wide AUS effort improves the<br />
ability to optimally match AUS staff to an ASTA project by taking into account the reviewers’<br />
recommendation score, the AUS staff(s) expertise in a domain science/HPC/CI, the ASTA project<br />
workplan, and the RP site where the user has a resource allocation. In addition to providing<br />
long-term benefits to the user or the user team, projects are also beneficial to the TeraGrid as a whole.<br />
Results of ASTA projects provide insights and exemplars for the general TeraGrid user<br />
community, and these are included in documentation, training and outreach activities. In Table<br />
5.3 we list 47 ASTA projects that were active and 6 ASTA projects that were completed during<br />
the quarter. This list also includes Startup and Supplemental ASTA projects. Quarterly updates<br />
for most of these ASTA projects are provided after the table.<br />
Table 5.3. List of continuing and completed ASTA projects for the quarter.<br />
ASTA Project | Site(s) Involved | Status | Notes<br />
The Effect of Gravitational Instabilities on Planet Migration in Protoplanetary Disks, PI Durisen, IU | Indiana | Continuing through 09/10 |<br />
DNS of Spatially Developing Turbulent Boundary Layers, PI Ferrante, U. Washington | SDSC, NCSA, TACC, NICS | Continuing through 03/11 |<br />
Multiscale Analysis of Size Dependence of Deformation and Fracture of Hierarchy Protein Materials, PI Buehler, MIT | SDSC | Continuing through 12/10 |<br />
An Earthquake System Science Approach to Physics-based Seismic Hazard Research, PI Jordan, USC | SDSC, TACC, NCSA, PSC, NICS | Continuing through 09/10 |<br />
Direct Numerical and Large-eddy Simulation of Complex Turbulent Flows, PI Mahesh, U. Minnesota | SDSC | Continuing through 09/10 |<br />
Modeling Studies of Nano and Biomolecular Systems, PI Roitberg, U. Florida | SDSC, NICS, NCSA | Continuing through 09/10 |<br />
Efficient Implementation of Novel MD Simulation Methods in Optimized MD Codes, PI Voth, U. Chicago | PSC, NICS, TACC, NCSA | Continuing through 09/10 |<br />
Flow and Nutrient Transport in 3D Porous Scaffolds Used for Bone Tissue Growth, PI Papavassiliou, U. Oklahoma | PSC | Continuing through 06/11 |<br />
Biomechanics of Stem Cells, PI Finol, CMU | PSC | Continuing through 09/10 |<br />
Global Kinetic Simulations of the Magnetosphere, PI Karimabadi, UCSD | SDSC, NICS | Continuing through 06/11 |<br />
EpiSims, PI Roberts, RTI International; NekTar-G2, PI Karniadakis, Brown U.; GENIUS, PI Coveney, UCL | NIU, NCSA, TACC, Purdue, SDSC, ANL, IU | Continuing through 09/10 |<br />
Enabling Neutron Science Research with the Neutron Science TeraGrid Gateway, PI Cobb, ORNL | ORNL | Continuing through 03/11 |<br />
Simulation of Liquid Fuel Combustors, PI Mashayek, UIC | NCSA | Continuing through 09/10 |<br />
Multichannel Scattering Theory via the Modified Faddeev Equation, PI Hu, CSULB | TACC | Continuing through 09/10 |<br />
Turbulence Simulations Towards the PetaScale: Intermittency, Mixing, Reaction and Stratification, PI Yeung, Georgia Tech | NICS, TACC, SDSC | Continuing through 09/10 |<br />
First-Principles Molecular Dynamics for Petascale Computers, PI Gygi, UCD | NICS, TACC | Continuing through 12/10 |<br />
UNRES Force-field for Simulation of Large Molecular Systems, PI Scheraga, Cornell | PSC | Continuing through 12/10 |<br />
Petascale Adaptive Computational Fluid Dynamics, PI Jansen, RPI | NICS, TACC, PSC | Continuing through 12/10 | This is also part of the Advanced Support Project on MPig and cloud computing (see section 5.5.2)<br />
Insight into Biomolecular Structure, Dynamics, Interactions and Energetics from Simulation, PI Cheatham, U. Utah | SDSC | Continuing through 12/10 |<br />
Large Eddy Simulation of Particle-Turbulence Interactions in Complex Flows, PI Apte, Oregon State | TACC | Continuing through 03/11 |<br />
Nonlinear Evolution of the Universe, PI Cen, Princeton | TACC, NICS, SDSC | Continuing through 09/10 |<br />
Modeling Global Climate Variability with the Multi-scale Modeling Framework: The Boundary-layer Cloud Problem, PI Helly, SIO/UCSD | Purdue, SDSC | Continuing through 06/11 |<br />
Lattice Gauge Calculation of Hadronic Physics, PI Liu, University of Kentucky | PSC, NICS | Continuing through 03/11 |<br />
Leveraging Supercomputing for Large-scale Game-theoretic Analysis, PI Sandholm, CMU | PSC | Continuing through 03/11 |<br />
Astrophysical Applications of Numerical Relativity: Coalescing Binary Systems and Stellar Collapse, PI Schnetter, LSU | PSC, SDSC, NCSA | Continuing through 03/11 |<br />
Hierarchical Bayesian Models for Relational Learning, PI Gordon, CMU | PSC | Completed 04/10 |<br />
Fractals in Elasto-Plasticity, PI Ostoja-Starzewski, UIUC | NCSA | Continuing through 01/11 |<br />
Heat Transfer Simulation of Suspension Micro/nano Flow Using a Parallel Implementation of the Lattice-Boltzmann Method, PI Joshi, Georgia Tech | Purdue, TACC, PSC | Completed 06/10 |<br />
The CIPRES Science Gateway, PI Miller, SDSC/UCSD | SDSC | Continuing through 06/11 |<br />
Coupling of Turbulent Compressible Solar Convection with Rotation, Shear and Magnetic Fields, PI Toomre, U. Colorado | PSC, NICS, TACC | Completed 06/10 |<br />
Effects of Star Formation and Supernova Feedback on Galaxy Formation, PI Choi, U. Nevada, Las Vegas | TACC | Continuing through 06/10 |<br />
High-Resolution Modeling of Hydrodynamic Experiments, PI Demeler, U. Texas Health Science Center | IU, PSC, Purdue | Continuing through 03/11 | Also see the Science Gateways section of the quarterly report (UltraScan)<br />
Imine Formation in an Enzyme Active Site, PI LLoyd, Oregon Health and Science University | SDSC | Continuing through 09/10 |<br />
The Roles of Vortices and Gravity Waves in Planet Formation in Protoplanetary Disks and in Jupiter’s Atmosphere, PI Marcus, U. California Berkeley | TACC, SDSC | Continuing through 09/10 |<br />
Magnetized Astrophysical Plasmas with Anisotropic Conduction, Viscosity, and Energetic Particles, PI Parrish, U. California Berkeley | NCSA, TACC | Continuing through 09/10 |<br />
On the Decay of<br />
Anisotropic<br />
Turbulence, PI<br />
Perot, U. of<br />
Massachusetts,<br />
Amherst<br />
Simulations of<br />
Coarsed Grained<br />
Macromolecular<br />
Fluids, PI Guenza,<br />
U. Oregon<br />
ExtremeOpenMP –<br />
A Programming<br />
Model for Productive<br />
High<br />
End<br />
Computing, PI Tafti,<br />
Virginia Polytechnic<br />
Inst and State U.<br />
High Throughput<br />
Modeling of<br />
Nucleosome<br />
Stability, PI Bishop,<br />
Tulane U.<br />
Center for Integrated<br />
Space Weather<br />
Modeling, PI Quinn,<br />
Boston U.<br />
Parallelization of a<br />
Finite Element Code<br />
for Petascale<br />
Dynamics Modeling,<br />
PI Cui, SDSC<br />
Computational<br />
Modeling of Thermal<br />
Stripping in Nuclear<br />
Reactors, PI Kimber,<br />
U. Pittsburgh<br />
Simulation of<br />
Complex Organic<br />
Mixture<br />
Biotransformations in<br />
Natural Systems, PI<br />
NCSA Continuing through<br />
09/10<br />
SDSC Completed 06/10<br />
NCSA Completed 06/10<br />
SDSC Completed 06/10<br />
NICS, TACC Continuing through<br />
12/10<br />
SDSC Continuing through<br />
08/10<br />
PSC Continuing through<br />
12/10<br />
PSC Continuing through<br />
04/11<br />
42
VanBriesen, CMU<br />
Villous Motility as a<br />
Critical Mechanism<br />
for Efficient Nutrient<br />
Absorption in the<br />
Small Intestine, PI<br />
Brasseur, Penn<br />
State<br />
Systematics of<br />
Nuclear Surface<br />
Vibrations in<br />
Deformed Nuclei, PI<br />
Engel, U. North<br />
Carolina<br />
Simulations of<br />
Molecular Clouds<br />
and Protoplanetary<br />
Disks, PI Mac Low,<br />
American Museum<br />
of Natural History<br />
Structural and<br />
Functional<br />
Characterization of<br />
Glu-Plasminogen<br />
and<br />
Lys-<br />
Plasminogen, PI<br />
Kim, CMU<br />
Design<br />
of<br />
Communicating<br />
Colonies of<br />
Biomimetic<br />
Microcapsules, PI<br />
Kolmakov, Univ. of<br />
Pittsburgh<br />
Computational<br />
Modeling of Ultracold<br />
Molecular Collisions<br />
and Reactions<br />
Involving Clusters<br />
and Nanoparticles,<br />
PI Naduvalath,<br />
UNLV<br />
Development of<br />
Novel HIV Entry<br />
Inhibitors Using the<br />
ROCS Shaped<br />
Based Matching<br />
Algorithm and<br />
Molecular Dynamics<br />
NICS, SDSC Continuing through<br />
03/11<br />
NICS, TACC Continuing through<br />
03/11<br />
PSC Continuing through<br />
03/11<br />
PSC Continuing through<br />
03/11<br />
TACC,<br />
NCSA<br />
SDSC,<br />
Continuing<br />
03/11<br />
through<br />
NICS, PSC Continuing through<br />
03/11<br />
PSC Continuing through<br />
03/11<br />
43
Studies of gp120<br />
Envelop Proteins, PI<br />
LaLonde, Bryn Mawr<br />
College<br />
Simulation and Data<br />
Analysis of Macro-<br />
Molecular Systems,<br />
PI Blaisten-Barojas,<br />
Ge<strong>org</strong>e Mason<br />
University<br />
Applications of Bispeptide<br />
Nanostructures, PI<br />
Schafmeister,<br />
Temple University<br />
Finite-elements<br />
Modeling of Singleand<br />
Poly-Crystalline<br />
Material Removal<br />
Including<br />
Crystallographic<br />
Effects,<br />
PI<br />
Ozdoganlar, CMU<br />
PSC Continuing through<br />
03/11<br />
NICS Continuing through<br />
04/11<br />
PSC Continuing through<br />
03/11<br />
PI: Durisen (Indiana U, Astronomy). The Effect of Gravitational Instabilities on Planet Migration<br />
in Protoplanetary Disks. Continuing through 09/10. The ASTA staff for this project are Don<br />
Berry and Robert Henschel from the High Performance Applications group of Research<br />
Technologies at Indiana University. In this quarter the ASTA staff continued to work with faculty<br />
from the IU astronomy department to refine the requirements for the proposed automated<br />
workflow system. Work continued on all applications of the package and on producing standard
data products. Performance tests of the code were done, especially for the postprocessing
routines, on the clusters.
PI: Ferrante (U Washington, Fluids, Particulate and Hydraulic Systems). DNS of Spatially<br />
Developing Turbulent Boundary Layers. Continuing through 03/11. The ASTA team for this<br />
project consists of Darren Adams (NCSA), David Bock (NCSA), John Peterson (TACC), Lonnie<br />
Crosby (NICS) and Dmitry Pekurovsky (SDSC). The team continued to collaborate with the PI
on the integration of a parallel HDF5 file format with the PI's existing simulation code. TACC
staff continued to provide support for the SVN repository for the project's codes. Further support
activities, such as compiling, porting, scaling, and general help running the code, are also
provided occasionally. A shift to more visualization support is anticipated as new data is
generated using the new HDF5 file format.
Much of the software engineering and other ASTA assistance has been completed, with some
minimal ongoing support, as summarized below:
• Improvements were made to the source code for scalability, speed, and portability
• Source code was split from a single F77 code into many files and some were converted to<br />
F90 with improved interfaces<br />
• Subversion source management has resulted in better collaboration<br />
• ASTA work added runtime parameters that control grid size and process count, as well
as other I/O-related settings
• The SPRNG random number library was added
• A project wiki space was created on the NCSA wiki (wiki.ncsa.uiuc.edu) and is being
used to develop a code developer/user guide. This is very beneficial, as there are always
new students who need to get up to speed with the (rapidly changing) software
• It is anticipated that the wiki will help with documentation and legacy support once the
active development stage of the project collaboration is complete
• 3D transpose routines were integrated with the existing code<br />
Darren’s main focus was to design a new HDF5 data format and optimize the I/O routines. The
new HDF5 code, developed at NCSA for the PI’s DNS data, has these basic properties:
• A lightweight C library (dubbed “h5dns”) with Fortran interface routines. The library is
designed to be used both as a writer (from the DNS code) and as a reader (from the DNS
code and visualization codes)
• The C library requires only HDF5 (usually built with MPI, i.e. “PHDF5”)
• All HDF5 I/O routines will be optimized for parallel I/O when they are compiled with MPI
• Files are intended to be “self-describing”, with attributes and metadata datasets
• The file format will be used for both 2D and 3D rectilinear grid data consisting of both
scalar and 3D vector variables
• Information about the per-MPI-process layout of the data is preserved as supplementary
metadata to the main datasets. The datasets themselves are single arrays spanning the
global computational grid.
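The per-process layout described in the last bullet can be sketched in stdlib Python. This is an illustration only, not the actual h5dns API: the function names, the 2D process grid, and the even-division assumption are all mine.

```python
# Illustrative sketch (not the actual h5dns library): for a 2D process grid,
# compute the hyperslab (offset, count) each MPI rank writes into the single
# global dataset, plus the supplementary per-rank layout metadata described
# above. Assumes the global grid divides evenly among the ranks.
def rank_layout(global_shape, proc_grid, rank):
    """(offset, count) of this rank's block of the global grid."""
    ny, nx = global_shape
    py, px = proc_grid
    ry, rx = divmod(rank, px)        # rank's coordinates in the process grid
    cy, cx = ny // py, nx // px      # block extent per rank
    return (ry * cy, rx * cx), (cy, cx)

def layout_metadata(global_shape, proc_grid):
    """One (offset, count) record per rank, stored alongside the dataset."""
    nranks = proc_grid[0] * proc_grid[1]
    return {r: rank_layout(global_shape, proc_grid, r) for r in range(nranks)}
```

In the real file the datasets span the global grid, so a reader can ignore this metadata entirely; it exists only so the original per-process decomposition can be recovered.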
Initial testing of basic I/O functionality and unit testing of the library are currently under way.
Runs are expected in early to mid August to produce the first visualization dumps via the HDF5
routines, which will allow the data to be explored both in VisIt and with David Bock’s
visualization tools.
The following tasks remain to be completed:
• Fully integrate HDF5 routines with the main code<br />
• Test new data format for correctness<br />
• Create reader routine for VisIt<br />
• Integrate the isotropic turbulence code into the new executable
• Update the code to perform a 2D domain decomposition
• Optimize I/O patterns, both per system and by taking special measures when running at
very large scale
PI: Buehler (MIT, Mechanics and Materials). Multiscale Analysis of Size Dependence of
Deformation and Fracture of Hierarchical Protein Materials. Continuing through 12/10. The
project is led by Ross Walker (SDSC). The PI used the executables, built by Walker previously,
to run calculations. ASTA help was provided to produce some ray-traced pictures for the PI.
PI: Jordan (University of Southern California, Earth Sciences). SCEC PetaScale Research: An<br />
Earthquake System Science Approach to Physics-based Seismic Hazard Research. Continuing<br />
through 09/10. The ASTA team consists of Yifeng Cui (SDSC), John Urbanic (PSC), Byoung-Do<br />
Kim (TACC), Kwai Wong (NICS), Mark Vanmoer (NCSA), and Amit Chourasia (SDSC).<br />
Highlights of ASTA effort for the quarter include:<br />
• Support for the SCEC M8 dynamic rupture simulation on NICS Kraken using the AWM
code; the input for the M8 wave propagation simulation was prepared on the ORNL
Jaguar machine
• Time was spent debugging and testing compiler issues on Kraken for AWM runs at very
large scale
• Publication: Cui, Y., Olsen, K., Jordan, T., Lee, K., Zhou, J., Small, P., Ely, G., Roten,<br />
D., Panda, DK, Levesque, J., Day, S. and Maechling, P.: Scalable Earthquake Simulation<br />
on Petascale Supercomputers, submitted to SC10, New Orleans, Nov, 2010<br />
• Hercules code work: Additional large-scale (100K PE) optimization was done. In
particular, Lustre limitations have required that checkpointing also now be scaled using
techniques related to those used for the I/O of scientific results. This work is in
progress, as full checkpointing at very large scale pushes the envelope of Lustre
capability. The opportunity was also taken to refactor all related parts of the code.
• Visualization work: Support continued by creating visualization of M8 simulations.<br />
PI: Roitberg (U. Florida, Biophysics). Modeling Studies of Nano and Biomolecular Systems.<br />
Continuing through 09/10. The ASTA team consists of Ross Walker (SDSC), Sudhakar<br />
Pamidighantam (NCSA) and Christian Halloy (NICS). During this quarter Walker continued<br />
work with Prof Roitberg to further develop a GPU-accelerated version of AMBER. In April 2010,
version 11 of the AMBER software was released, and it included support for NVIDIA GPU
acceleration by default. Prof Roitberg is now using this for his simulations. ASTA work is now
ongoing with him to develop a version of AMBER that runs on multiple GPUs using MPI. This<br />
will likely take the next 3 months or so and then Prof Roitberg will be renewing his TRAC<br />
proposal and requesting an extension to the AUS project in order to finish this work. During this
quarter a joint NSF SSE proposal was submitted, requesting funds to extend this project further
to support QM/MM simulations and additional, more advanced MD features such as
Thermodynamic Integration and Replica Exchange. A manuscript describing the GPU
implementation is in preparation.<br />
PI: Voth (University of Chicago, Physical Chemistry). Efficient implementation of novel MD<br />
simulation methods in optimized MD codes. Continuing through 09/10. The ASTA team consists<br />
of Phil Blood (PSC), John Peterson (TACC), and Lonnie Crosby (NICS). Recently the Voth<br />
group completed the initial port of their MS-EVB molecular dynamics code to LAMMPS<br />
molecular dynamics simulation package (LAMMPS EVB). The MS-EVB method allows the
calculation of quantum effects such as proton transport within an MD simulation much more
efficiently than traditional quantum methods. However, the algorithm presents extreme
scaling challenges, since the long-range electrostatics (requiring a 3D FFT with all-to-all
communication) must be recalculated many times each time step, as opposed to a single time for a
regular MD simulation. During this quarter, Blood has continued work to find ways to overcome<br />
these severe scaling challenges and completed an initial analysis of the opportunities to save<br />
communication time in the LAMMPS EVB code. Based on this initial analysis, it appears that it<br />
may be necessary to use a different method for the long-range electrostatics calculation that can<br />
avoid the need for an all-to-all communication. Recently, a scientist at PSC devised a
method for performing a 3D FFT without all-to-all communication. The ASTA team is exploring<br />
the possibility of putting this code into LAMMPS to help address the scaling problem. In<br />
addition, a group at Temple has recently developed an experimental version of LAMMPS that<br />
uses hybrid MPI/OpenMP programming to achieve better balance between computation and<br />
communication at high processor counts in the regular version of LAMMPS<br />
(http://sites.google.com/site/akohlmey/software/lammps-icms). During this quarter, Blood<br />
ported the LAMMPS EVB code to work with the new hybrid version of LAMMPS. Initial tests<br />
indicate that this may help alleviate some of the scaling problems, although additional effort, as<br />
described above, will be needed to achieve good scaling.<br />
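The scaling pressure described above, a 3D FFT (and hence an all-to-all exchange) repeated many times per step instead of once, can be made concrete with a rough stdlib-Python count. The function name and the one-exchange-per-FFT simplification are mine; the numbers are purely illustrative.

```python
# Rough sketch of why the all-to-all dominates at scale: message count grows
# quadratically with process count, and MS-EVB repeats the long-range
# electrostatics (hence the FFT) many times per time step, unlike regular MD.
def alltoall_messages(nprocs, ffts_per_step):
    """Messages per time step if each 3D FFT costs one all-to-all exchange."""
    return ffts_per_step * nprocs * (nprocs - 1)
```

Even at a fixed process count, the EVB recalculation multiplies the communication cost relative to a standard MD step, which is why avoiding the all-to-all (or hiding it with hybrid MPI/OpenMP) is the focus of the work.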
PI: Papavassiliou (U. Oklahoma, Chemical, Thermal Systems). Investigation of Flow and<br />
Nutrient Transport in Porous Scaffolds Used for Bone Tissue Growth. Continuing through 06/11.
The ASTA staff for this project is Raghu Reddy (PSC). Interaction is ongoing with the PI’s
student regarding profiling runs of the PI’s codes.
PI: Finol (CMU, Mechanical and Biomedical Engineering). Biomechanics of Stem Cells.<br />
Continuing through 09/10. The ASTA consultant is Anirban Jana (PSC). In this project, the<br />
extruded mesh for the arterial wall using prism elements (code developed inhouse, see last<br />
quarterly report) was tested with ADINA. However, it was discovered that for nonlinear isotropic<br />
and anisotropic materials, the prism element is not a good choice. Quadratic brick elements are<br />
better suited for this. Hence an algorithm that splits each prism element into 3 brick elements was
incorporated into the extrusion code. Another outstanding issue is that an isoparametric element
formulation, which allows an element to have curved edges, may improve the accuracy of the
simulations with fewer elements. Anirban and the PI’s team have been looking into the pros and
cons of incorporating this into the extrusion code. Finally, in the latest ADINA versions (8.6
onwards), the interface for user-defined Mooney-Rivlin type hyperelastic materials has changed.
Effort is underway to update the user-defined subroutines accordingly.
PI: Karimabadi (UCSD, Magnetospheric Physics). Global Kinetic Simulations of the<br />
Magnetosphere. Continuing through 06/11. The ASTA staff for this project are Mahidhar<br />
Tatineni (SDSC) and Glenn Brook (NICS). The ASTA effort focused on performing various tests
of I/O performance on the Lustre filesystem of the Kraken system and on analyzing those results.
Multiple test runs were made to gauge the impact of the filesystem on I/O timing variability. I/O
methods were implemented in which processors are “gated”, i.e., a subset of the processors,
instead of all of them, is allowed to write to the Lustre parallel file system at a time; this yielded
an I/O performance improvement.
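The gating scheme just described can be sketched with stdlib Python. This is my illustration of the idea, not the project's actual Fortran/MPI implementation; the function name and round structure are assumptions.

```python
# Stdlib sketch of the "gating" idea described above: rather than all ranks
# writing at once, ranks are grouped into successive rounds of at most
# gate_size simultaneous writers to the parallel file system.
def write_rounds(nranks, gate_size):
    """Partition ranks 0..nranks-1 into successive write rounds."""
    return [list(range(start, min(start + gate_size, nranks)))
            for start in range(0, nranks, gate_size)]
```

In the real code each rank waits until its round is reached before issuing its write, which caps the number of clients hitting the Lustre servers at any instant.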
PI: Roberts (RTI International, Computer Science) EpiSims; PI: Karniadakis (Brown U., CFD)<br />
NekTar-G2; PI Coveney (University College of London, Chemistry) GENIUS. Continuing<br />
through 09/10. The lead, regarding MPIg research and implementation, for these ASTA projects<br />
is Nick Karonis (NIU, ANL) and he works with user support consultants from NCSA, TACC,<br />
and SDSC, and more recently with Indiana, Purdue, and the University of Chicago. Based on<br />
conversations with the user, ASTA effort is focusing on aspects found in the MPI standard and<br />
other more ubiquitous technologies, namely, multithreading and MPI-2's RMA operations.<br />
Karonis’s group has continued to work with Karniadakis and Grinberg to identify the
computational bottlenecks in the application. The next step is to focus on those sections, with the
goal of introducing multithreading where possible, and then possibly RMA operations. The
primary motivation for adopting this new strategy is to try to find performance gains from
memory hierarchies.
PI: Mashayek (University of Illinois Chicago, CFD). Simulation of Liquid Fuel Combustors.<br />
Continuing through 09/10. The ASTA consultant is Ahmed Taha (NCSA). Work continued on a
scalability study of the parallel Fluent v12.1.4 for the following cases:
i. Comparing the LES simulations without microjets to the experimental data of the cold flow;
ii. Comparing the LES simulations with microjets to the experimental data of the cold flow.
PI: Hu (California State University, Long Beach, Theoretical Physics). Applications of the
Multichannel Scattering Theory via the Modified Faddeev Equation. Continuing through 09/10.
Gabriele Jost and Bill Barth (TACC) are providing ASTA support for this project. The issue
regarding excessive time spent in reading the input and distributing the data has been resolved by
changing the I/O procedure. In the original code the master process read all of the data and then
distributed it to the other processes. This has been changed so that all processes now read the
data they require in parallel. Dr. Hu has tested the modified code and confirmed that it reduces
the run time to an acceptable level. Two tasks are currently ongoing. (i) Work is under way with
Dr. Hu on changing the currently used dense matrix solver, based on ScaLAPACK, to a sparse
direct solver; ASTA staff are investigating the usefulness of the SuperLU and MUMPS packages.
(ii) The second task is setting up the execution of multiple sequential runs using the Ranger
launcher utility; Dr. Hu is currently testing the suggested procedure and will report back soon.
Dr. Hu has also sent a TRAC request to extend the project and has provided more details on the<br />
work that was done in the TRAC allocation proposal.<br />
PI: Yeung (Ge<strong>org</strong>ia Tech, Fluids). Turbulence Simulations Towards the PetaScale: Intermittency,<br />
Mixing, Reaction and Stratification. Continuing through 09/10. The ASTA team consists of Kwai<br />
Wong (NICS), Dmitry Pekurovsky (SDSC), Bill Barth (TACC) and visualization experts from<br />
TACC. In this quarter partial ASTA effort was provided in porting the 3D FFT kernel to IBM<br />
Power7 with OpenMP threads.<br />
PI: Gygi (UCD, Materials Science). First-Principles Molecular Dynamics for Petascale<br />
Computers. Continuing through 12/10. The ASTA staff for this project are BD Kim (TACC) and<br />
Bilel Hadri (NICS). Kim provided help with the Qbox application, specifically on using
GotoBLAS, ScaLAPACK, and MKL. The PI was interested in suggestions regarding better
scalability. The use of ScaLAPACK and GotoBLAS gave a minor performance improvement
compared to the MKL case. While helping the PI on this, some other issues were resolved, which
allowed the project to move on to the next step. The PI was also trying to do a scalability test of
the hybrid version of Qbox on Ranger, and Kim set him up for the large queue and provided
previous test cases with some other hybrid applications. Hadri provided help with optimizing the
Qbox code with OpenMP directives and with multi-threaded linear algebra libraries. Help was
also provided to enable the uuid library on Kraken, by fixing the code and providing directions
on how to link the code dynamically with the uuid library.
PI: Scheraga (Cornell U., Biophysics). UNRES Force-field for Simulation of Large Molecular<br />
Systems. Continuing through 12/10. The ASTA team consists of Phil Blood and Mahin<br />
Mahmoodi (PSC). The ASTA team stayed in touch with the PI’s group, but no major support
issues arose this quarter.
PI: Jansen (RPI, Fluids). Petascale Adaptive Computational Fluid Dynamics. Continuing<br />
through 12/10. ASTA team for this project consists of Glenn Brook (NICS), David O’Neal<br />
(PSC), and John Peterson (TACC). There is no major update for this quarter.<br />
PI: Cheatham (U. Utah, Biochemistry). Insight into Biomolecular Structure, Dynamics,<br />
Interaction and Energetics from Simulation. Continuing through 12/10. The ASTA team consists<br />
of Ross Walker (SDSC). During this quarter work continued to develop the parallel version of<br />
Prof. Cheatham’s ptraj MD analysis program. This has included adding support for additional
parallel ‘actions’ as well as fixing several bugs, related to reading and writing compressed files in
parallel, that were not apparent until the code received widespread usage. The first version of
parallel ptraj was released as open source as part of the AmberTools 1.4 package and is available
for download from http://ambermd.org/.
PI: Apte (Oregon State University, Fluid, Particulate, and Hydraulic Systems). Large Eddy<br />
Simulation of Particle-Turbulence Interactions in Complex Flows. Continuing through 03/11.<br />
The ASTA staff for this project is Carlos Rosales (TACC). In the previous year several
benchmarking runs of the PI’s code were done, and ASTA staff are currently waiting to hear back
from the PI for further interaction.
PI: Liu (University of Kentucky, Elementary Particle Physics). Lattice Gauge Calculation of<br />
Hadronic Physics. Continuing through 03/11. The ASTA team consists of Raghu Reddy (PSC)<br />
and Haihang You (NICS). Raghu continued some of the profiling work initiated by You. IPM
was used to get performance and communication information for the PI’s code. The PI’s group
was interested in more detailed performance information, and TAU is being used for this. ASTA
staff are dealing with issues related to TAU and a potential bug in the code, which is being
investigated. The following two papers, posted by the PI’s group, summarize its scientific work:
http://arxiv.org/abs/1005.5424
http://arxiv.org/abs/1005.4158
PI: Helly (UCSD, SIO, Atmospheric Sciences). Modeling Global Climate Variability with the<br />
Multi-scale Modeling Framework: New Parameterizations of Cloud Micro-physics, and
Developing a Community Accounts Portal for Running MMF. Continuing through
06/11. Phil Cheeseman (Purdue), Adam Jundt (SDSC) and Mahidhar Tatineni (SDSC) are the
ASTA team for this project. There is no major update on this project for this quarter.<br />
PI: Sandholm (CMU, Computer and Computation Research). Leveraging Supercomputing for<br />
Large-scale Game-theoretic Analysis. Continuing through 03/11. Joel Welling and David O’Neal<br />
(PSC) are working with the PI on this project. The PI’s team won a prize in a computer poker<br />
tournament as described in the following news report:<br />
A Carnegie Mellon team headed by Computer Science Professor Tuomas Sandholm won the<br />
heads-up, no-limit bankroll category in the annual Computer Poker Competition at the<br />
Association for the Advancement of Artificial Intelligence 2010 conference in Atlanta. The team<br />
included Sam Ganzfried, a Ph.D. student in computer science, and Andrew Gilpin (CS09), an<br />
adjunct assistant professor of computer science. For more:<br />
http://www.cmu.edu/news/blog/2010/Summer/all-aces.shtml<br />
Prof. Sandholm has a new grad student and is generating a new code; David and Joel as well as<br />
John Urbanic (PSC) have consulted with the student regarding this. The previous program uses a<br />
custom algorithm to solve what is essentially multiplication by a Kronecker product matrix; the<br />
grad student has been studying the literature on Kronecker product algorithms. They are also considering
a new algorithm which is essentially Monte Carlo. Sandholm has applied for a new grant with<br />
the intention of continuing to run the old code while these new codes are developed.<br />
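Multiplication by a Kronecker product matrix, as mentioned above, never requires forming the product explicitly. A minimal stdlib-Python sketch of that structure follows; it is my own illustration, not the group's custom algorithm.

```python
# Hedged sketch: compute (A kron B) @ x without forming the Kronecker product.
# Apply B to each length-q block of x, then combine those results with the
# entries of A. Pure Python; A is m x n, B is p x q, and len(x) == n * q,
# so the work is O(npq + mnp) instead of O(mnpq) storage for the full matrix.
def kron_matvec(A, B, x):
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    assert len(x) == n * q
    # z[j] = B @ (j-th block of x): n small matrix-vector products
    z = [[sum(B[r][s] * x[j * q + s] for s in range(q)) for r in range(p)]
         for j in range(n)]
    # i-th output block = sum over j of A[i][j] * z[j]
    return [sum(A[i][j] * z[j][r] for j in range(n))
            for i in range(m) for r in range(p)]
```

For the huge, structured matrices arising in game abstraction, this kind of matrix-free product is what makes the solve feasible at all; the Monte Carlo alternative mentioned above trades exactness for even lower per-iteration cost.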
PI: Schnetter (LSU, Gravitational Physics). Astrophysical Applications in Numerical Relativity:<br />
Coalescing Binary Systems and Stellar Collapse. Continuing through 03/11. The ASTA team<br />
consists of Mahin Mahmoodi (PSC), Rick Kufrin and Rui Liu (NCSA), and Wayne Pfeiffer<br />
(SDSC). There is no major update on this project for this quarter.<br />
PI: Ostoja-Starzewski (UIUC, Mechanics and Materials) Fractals in Elasto-Plasticity.<br />
Continuing through 01/11. The ASTA staff for this project is Seid Koric (NCSA). Preliminary<br />
studies with the Abaqus Explicit Finite Element solver in 3D have been done, and the team is<br />
now putting together a code to run on the TeraGrid. The code will make it possible to determine
the formation of fractal patterns of plastic grains at elastic-plastic transitions in polycrystals. Given the fact that
(i) one needs very large lattices to reliably assess fractal dimensions, and (ii) large lattices require<br />
very large memory allocations and powerful number crunching, the ASTA support from NCSA is<br />
most helpful. There was a slight delay due to a change of a graduate student working on the<br />
project, but results of publication quality will be produced this summer.<br />
PI: Miller (SDSC/UCSD, System and Population Biology) The CIPRES Science Gateway.<br />
Continuing through 06/11. ASTA support is being provided by Wayne Pfeiffer (SDSC). Wayne<br />
Pfeiffer attended HiCOMB 2010 and IPDPS 2010 in Atlanta and gave a paper at the former. The<br />
paper was titled "Hybrid MPI/Pthreads Parallelization of the RAxML Phylogenetics Code" and<br />
was coauthored with Alexis Stamatakis of the Technical University of Munich.<br />
To help users with special requirements, long phylogenetic analyses were manually run on Dash.<br />
Specifically, long MrBayes jobs were run for:
• Jennifer Thomas, formerly of Rutgers and now at the University of London, and
• Joel Ledford of the California Academy of Sciences and UC Berkeley.
PI Miller has also received a Startup allocation on the SDSC Dash machine and the title of the<br />
project is Improving the Efficiency of Eukaryotic Genome Sequence Assembly Using Dash.<br />
Wayne Pfeiffer performed various tests using BFAST and SHRiMP, two codes for mapping<br />
short DNA sequence reads to a large reference genome. In advance of the tests, both codes were
installed on Dash as well as on the PDAF nodes of Triton (an SDSC machine). BFAST is
parallelized with Pthreads, while SHRiMP is parallelized with OpenMP.
DNA read data were obtained from Sam Levy of the Scripps Translational Science Institute<br />
(STSI), and baseline runs were made for both codes mapping the reads to the reference human<br />
genome. The run time for both codes on 8 cores of Dash is similar, while SHRiMP runs about
2.4x faster when using 32 cores on Triton PDAF. However, the percentage of the reads that
are mapped is higher with BFAST than SHRiMP, so the solution quality may be better with<br />
BFAST. This is being investigated further by Ashley Van Zeeland of STSI.<br />
Both codes do a fair amount of I/O, and hence tests using flash memory instead of disk were done<br />
on a Dash I/O node. The speedup from using flash memory was only a few percent for both<br />
codes, evidently because their I/O is sequential rather than random. Moreover, the compute time<br />
as well as I/O time improved, which suggests that the I/O node is slightly faster than a compute<br />
node, independent of I/O. This was subsequently confirmed by HPCC benchmark tests, though<br />
the reason for the better I/O node performance has not yet been determined.<br />
Tests of the vSMP virtual shared memory software on Dash were also done for SHRiMP. This<br />
code uses a lot of memory, and analyses can exceed the memory available on Dash for some<br />
choices of input parameters. Unfortunately, SHRiMP ran several times slower with vSMP than<br />
without, so vSMP does not appear useful. Analyses with the default parameters will run on Dash<br />
without vSMP, while analyses requiring extra memory can be run on Triton PDAF.<br />
PI: Toomre (U. Colorado, Astronomy) Coupling of Turbulent Compressible Solar Convection
with Rotation, Shear and Magnetic Fields. Completed 06/10. The ASTA team consists of Raghu<br />
Reddy (PSC) and Haihang You (TACC). There has been no interaction from the PI’s team and<br />
the project has ended.<br />
PI: Choi (U. Nevada, Las Vegas, Astronomy and Cosmology) Effects of Star Formation and<br />
Supernova Feedback on Galaxy Formation. Continuing through 06/10. ASTA support is<br />
provided by Lars Koesterke (TACC). There is no update for this quarter.<br />
PI: Demeler (U. Texas Health Science Center, Biochemistry and Molecular Structure and<br />
Function) High-Resolution Modeling of Hydrodynamics Experiments. Continuing through 03/11.<br />
ASTA support is provided by Suresh Marru, Raminder Singh (IU), Roberto Gomez (PSC), and<br />
Phil Cheeseman (Purdue). The ASTA team consists of staff from both the Science Gateway area<br />
and the Advanced User Support area since the PI’s project requires advanced support for<br />
gateways as well as MPI optimization. Further description of activities is provided in the Science<br />
Gateways section.<br />
PI: Lloyd (Oregon Health and Science University, Molecular Bioscience) Imine Formation in an<br />
Enzyme Active Site. Continuing through 09/10. Ross Walker (SDSC) is providing the ASTA<br />
support. Between March 2010 and June 2010 Walker worked with Robert Lloyd and Bud Dodson<br />
to complete their QMMM calculations. This involved fixing several bugs in their code relating to<br />
the constraint of hydrogen atom positions around the edge of the quantum region in the proton<br />
transfer reaction they are attempting to study. These bugs only materialized when running on<br />
more than 16 cores and were due to incorrect reduction of the force array in parallel. These fixes<br />
allowed the PIs to complete their calculations, and they are now writing a manuscript for
Biochemistry entitled "Imine Reactions in a DNA Repair Enzyme Active Site: Carbinolamine
Formation and Dehydration". Prof. Lloyd and Dodson have been grateful for the ASTA
contributions and, acknowledging that they could not have completed this work without Walker's
help, have included his name on the manuscript. Walker will work over the next month or so
to help revise this manuscript.
PI: Marcus (U. California Berkeley, Astronomy) The Roles of Vortices and Gravity Waves in
Planet Formation in Protoplanetary Disks. Continuing through 09/10. The ASTA staff are Carlos
Rosales (TACC) and Dmitry Pekurovsky (SDSC). The PI's interest is in a better-performing 2D
parallel FFT. The ASTA team is awaiting further interaction with the PI's team.
PI: Parrish (U. California Berkeley, Astronomy) Magnetized Astrophysical Plasmas with<br />
Anisotropic Conduction, Viscosity, and Energetic Particles. Continuing through 09/10. The<br />
ASTA staff are Darren Adams (NCSA) and Paul Navratil (TACC). There is no update regarding<br />
I/O work for this quarter.<br />
PI: Perot (U. Massachusetts, Amherst, CFD) On the Decay of Anisotropic Turbulence.<br />
Continuing through 09/10. ASTA staff is Jay Alameda (NCSA). There is no update for this<br />
quarter.<br />
PI: Tafti (Virginia Polytechnic Inst and State University, Advanced Scientific Computing).<br />
ExtremeOpenMP – A Programming Model for Productive High End Computing. Completed<br />
06/10. Rick Kufrin and Rui Liu from NCSA participated in this ASTA effort along with the PI
Danesh Tafti and Amit Amritkar. This ASTA project was completed successfully, with the major<br />
outcome that the performance degradations observed in the code have been located and<br />
addressed. The work involved was documented in a paper that has been submitted to the<br />
International Journal of High Performance Computing ("Extreme OpenMP in CFD", Amritkar,<br />
A., Tafti, D., Liu, R., Kufrin, R., Chapman, B.). The team also summarized their work in a talk<br />
presented to the TeraGrid AUS group on May 6, 2010<br />
(http://www.sdsc.edu/~majumdar/AUS_Recordings/NCSA_RICK_5_6_2010/RickAUSRecordin<br />
g/Recording/lib/playback.html).<br />
PI: Bishop (Tulane University, Biophysics). High Throughput Modeling of Nucleosome Stability.<br />
Completed 06/10. The ASTA staff for this project is Adam Jundt (SDSC, previously at LONI).<br />
This project has a large number of jobs to run (some with dependencies on previous runs) and
multiple machines available. The PI is looking for a way to manage multiple systems and
multiple jobs efficiently, and is currently using a Python package called Many Jobs to help
accomplish this. The PI has been working closely with the developer of these scripts (they are
both at the same university) and does not seem to need any further assistance from AUS staff.
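The core idea behind a dependency-aware job manager like the one described can be sketched in a few lines. This is a minimal illustration, not the actual Many Jobs API; all names are hypothetical.

```python
# Minimal sketch of dependency-aware job ordering, in the spirit of the
# "Many Jobs" package described above. Hypothetical names -- not the real API.

def run_in_order(jobs, dependencies):
    """Return an execution order in which every job runs after its dependencies.

    jobs:         iterable of job names
    dependencies: dict mapping job -> set of jobs it depends on
    """
    remaining = set(jobs)
    order = []
    while remaining:
        # A job is ready once all of its dependencies have already run.
        ready = [j for j in remaining
                 if dependencies.get(j, set()) <= set(order)]
        if not ready:
            raise ValueError("circular dependency among: %s" % sorted(remaining))
        for j in sorted(ready):  # deterministic order for reproducibility
            order.append(j)
            remaining.discard(j)
    return order

# Example: 'analyze' must wait for both trajectory runs to finish.
plan = run_in_order(
    ["traj_a", "traj_b", "analyze"],
    {"analyze": {"traj_a", "traj_b"}},
)
```

A real tool additionally submits each ready job to the batch system of whichever machine has capacity; the ordering logic above is the essential piece.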
PI: Quinn (Boston University, Solar Terrestrial Research). Center for Integrated Space Weather<br />
Modeling. Continuing through 12/10. The ASTA staff for this project are Lonnie Crosby, Glenn<br />
Brook (NICS), and Carlos Rosales-Fernandez (TACC). There is no further update for this quarter.<br />
PI: Cui (SDSC, Computer and Computation Research). Parallelization of a Finite Element Code<br />
for Petascale Dynamics Modeling. Continuing through 08/10. The ASTA staff for this project is<br />
Jun Zhou (SDSC). The MaFE code supported by this ASTA project has been successfully scaled up
to 48,000 cores on both TACC Ranger and NICS Kraken. An MPI/OpenMP hybrid version has
also been implemented to improve parallel efficiency. However, in our experiments the hybrid
solver performs even worse than pure MPI for large-scale benchmark runs, where MPI
communication and synchronization overhead dominate the simulation time. We are currently
redesigning the solver around an asynchronous model, trying to reduce communication time by
overlapping it with computation. We will also further investigate optimization algorithms to
reduce the main computation time and approach higher peak performance.
PI: Kimber (University of Pittsburgh, Thermal Systems). Computational Modeling of Thermal<br />
Striping in Nuclear Reactors. Continuing through 12/10. Anirban Jana from PSC is the ASTA<br />
staff for this project. As the computational lead, Anirban continued to guide the postdoc on<br />
systematically simulating in Fluent a series of cases with growing complexity, on the local<br />
machine at Kimber's lab. The following cases have been simulated in Fluent this quarter (in order
of growing complexity):
a) Single laminar isothermal round jet, 2d axisymmetric<br />
b) Single laminar non-isothermal round jet, 2d axisymmetric, without and with temperature<br />
dependence of fluid properties.<br />
c) Single laminar isothermal round jet, 3d<br />
d) Single turbulent isothermal round jet, 3d, RANS and LES<br />
Anirban and the postdoc completed rigorous mesh independence and domain independence tests<br />
and validation with known (analytical) results for case (a) and will repeat them for the other cases<br />
next. The ultimate goal is to perform fully 3d, non-isothermal, LES simulations in Fluent of<br />
single and multiple jets in order to study their thermal mixing characteristics.<br />
The project has also been expanded to include the investigation of boiling heat transfer in
nuclear reactors. The plan is to use the Lattice Boltzmann Method as the primary tool here, since
the team is interested in mesoscopic effects. They have submitted a request to add Dan Roseum,
a PhD student in the Department of Mechanical Engineering, University of Pittsburgh, to this
TG account; he will be working on this.
PI: VanBriesen (CMU, Chemical and Reaction Processes). Simulation of Complex Organic<br />
Mixture Biotransformations in Natural Systems. Continuing through 04/11. Anirban Jana from<br />
PSC is collaborating on this ASTA project. In this project, the aim is to perform Monte Carlo<br />
simulations to investigate the dechlorination likelihoods of a class of chemicals known as<br />
polychlorinated biphenyls, or PCBs. The Monte Carlo runs themselves are embarrassingly
parallel, but the output of each MC run needs to be stored in n-dimensional arrays so that
relevant statistics can be computed at the end. Both memory requirements and computational
time therefore increase as the number of MC runs increases.
Amanda Hughes, a PhD student of Dr. Van Briesen, had developed a MATLAB code that could,<br />
on her laptop, run 330 MC simulations before running out of memory. Over the last quarter, her<br />
original serial MATLAB code was ported to the MATLAB Star-P platform on Pople and<br />
parallelized. As part of the porting process:<br />
1) The original code structure was thoroughly reviewed.<br />
2) The code was restructured so that the parallelizable parts of the code were consolidated into a<br />
single for loop.<br />
3) During the restructuring, additional vectorization and other code tweaks were performed to<br />
improve the serial performance.<br />
4) Finally, the code was parallelized with Star-P by moving the contents of the parallelizable for
loop into a function and applying ppeval to that function.
This has now enabled simulations with up to 10,000 MC runs. The scaling of computational time
with increasing core counts was also found to be very satisfactory.
Note that because the code was restructured to put everything to be parallelized into a single for
loop, it will be trivial to run it in parallel on other parallel MATLAB platforms. For example,
with the MathWorks Parallel Computing Toolbox, the for loop simply needs to be replaced by a
parfor loop instead of using ppeval.
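The restructuring described above can be sketched as a Python analogue (the actual code is MATLAB, and the model function here is a hypothetical stand-in, not the PCB code): once the per-run work lives in a single function driven by one loop, swapping the serial map for a parallel one is the only change needed, which is exactly what ppeval in Star-P or parfor in the Parallel Computing Toolbox provides.

```python
# Illustrative analogue of the restructuring: consolidate the per-run Monte
# Carlo work into one function so the driving loop becomes a map that can be
# executed serially or in parallel. Hypothetical stand-in model.
import random

def one_mc_run(seed):
    """One independent Monte Carlo realization; returns its summary statistic."""
    rng = random.Random(seed)  # per-run generator keeps runs independent
    return sum(rng.random() for _ in range(1000)) / 1000.0

def monte_carlo(n_runs, mapper=map):
    """Run n_runs realizations through any map-like function.

    Passing a pool's map (e.g. multiprocessing.Pool().map) as `mapper`
    parallelizes the loop without touching the per-run code.
    """
    return list(mapper(one_mc_run, range(n_runs)))

# Serial execution; replace `map` with a parallel map to scale out:
stats = monte_carlo(8)
```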
PI: Brasseur (Penn State, Biophysics). Villous Motility as a Critical Mechanism for Efficient
Nutrient Absorption in the Small Intestine. Continuing through 03/11. Lonnie Crosby (NICS), for
computational work, and Amit Chourasia (SDSC), for visualization work, are involved in this
ASTA project. This quarter, effort focused on the computational work by Lonnie. This group's code,
Intestine3D, initially produced a large amount of metadata traffic on the file system. This large<br />
traffic volume was due to the I/O pattern employed by the application in which files are opened<br />
and closed for each read/write operation. Twelve files were identified which could benefit from<br />
remaining open during the course of the application's runtime. This approach was able to<br />
decrease the application's runtime by 33%.<br />
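The I/O change that produced the 33% runtime reduction can be illustrated with a small sketch. Intestine3D is not Python; this is only a hypothetical illustration of the open-once pattern versus the open-per-write pattern that generates heavy metadata traffic.

```python
# Sketch of the I/O fix described above: keep a file open across many writes
# instead of reopening it for every record. Illustrative only.

def write_reopening(path, records):
    # Anti-pattern: one open/close per write generates a metadata operation
    # pair per record on a parallel file system such as Lustre.
    for rec in records:
        with open(path, "a") as f:
            f.write(rec + "\n")

def write_persistent(path, records):
    # Preferred: open once, write many times, close once.
    with open(path, "a") as f:
        for rec in records:
            f.write(rec + "\n")
```

Both functions produce identical file contents; only the number of metadata operations differs.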
A full profile of application performance was performed and sent to members of the project. This<br />
profile identified an inherent load imbalance in the application. Addressing this load imbalance<br />
could improve performance by an estimated 10%. Additionally, implementing MPI-IO parallel
I/O to produce concatenated binary files, instead of individual (one per process or group) ASCII
files, could improve application performance by about 10%. Suggestions and instructions for
implementing MPI-IO parallel I/O were sent to members of the project.
PI. Engel (U. North Carolina, Physics). Systematics of Nuclear Surface Vibrations in Deformed
Nuclei. Continuing through 03/11. ASTA staff for this project are Meng-Shiou Wu (NICS) and
Victor Eijkhout (TACC). During the first quarter of this ASTA project, the focus was on gathering
detailed specifications and understanding the structure of the project's code. Meng-Shiou has been
working with the group to explore possible approaches to improving the code's efficiency on
Kraken. Jointly with the PI's group, they identified and discussed why the group's approach of
using ScaLAPACK was not working, and what the possible choices are for conducting
diagonalization on a multi-core XT5 node. Both a shared-memory approach (using threaded
libraries) and a distributed-memory approach (using a sub-communicator and redesigning the
memory management in their code) were discussed. Several scientific libraries that support
multi-core architectures were tested (Cray's LibSci, AMD's ACML, and ATLAS), but very
limited or no performance improvement was observed. Current work is on integrating their code
with a code segment, provided by another research team, that uses master-slave style
programming built on an MPI sub-communicator.
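The master-slave scheme being integrated can be sketched as follows. The real code carves worker groups out of MPI_COMM_WORLD with an MPI sub-communicator (e.g. via MPI_Comm_split); this Python sketch uses a thread-backed pool as a stand-in for the worker ranks, and all names are hypothetical.

```python
# Shared-memory analogue of the master-slave scheme described above: a
# master distributes independent diagonalization tasks to workers and
# collects the results. A thread-backed pool stands in for MPI worker
# ranks; illustrative only.
from multiprocessing.dummy import Pool  # thread pool with the Pool API

def diagonalize(matrix_id):
    """Stand-in for one per-node diagonalization task."""
    return matrix_id, matrix_id * matrix_id  # hypothetical "eigenvalue"

def master(task_ids, n_workers=4):
    # The master farms tasks out to the worker pool and gathers results.
    with Pool(n_workers) as pool:
        return dict(pool.map(diagonalize, task_ids))

results = master(range(6))
```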
PI. Mac Low (American Museum of Natural History, Planetary Astronomy). Simulations of
Molecular Clouds and Protoplanetary Disks. Continuing through 03/11. Roberto Gomez (PSC) is
working with the PI's team for this ASTA project. The users have been doing development work on a
code derived from Gadget-2, with a different hydrodynamics and MHD algorithm, which they<br />
think is now "more settled and starting to do useful stuff". They have not pushed the number of<br />
cores much past 128. Recently (end of June) they made contact requesting ASTA help in<br />
profiling/optimizing their code. Roberto received an account on the SVN server at AMNH and<br />
had just pulled the code to test that he could build and run a model that was of interest to the<br />
user, working at first on PSC’s local hardware (Pople) to get a handle on it. After discussion with<br />
the user, a procedure was worked out to run the initialization part, write a restart file, and
resume the run from that file. The intention is to profile after the restart so as to avoid
timing the initialization, which takes several minutes, and concentrate instead on profiling the<br />
time evolution, which is where most of the run time will be spent. This profiling effort is just<br />
beginning.<br />
PI. Kim (CMU, Biochemistry and Molecular Structure and Function). Structural and Functional<br />
Characterization of Glu-Plasminogen and Lys-Plasminogen. Continuing through 03/11. ASTA<br />
effort is provided by Phil Blood (PSC). Phil is in touch with the PI’s group, but no activities to<br />
report yet.<br />
PI. Kolmakov (Univ. of Pittsburgh, Materials Research). Design of Communicating Colonies of
Biomimetic Microcapsules. Continuing through 03/11. Carlos Rosales-Fernandez (TACC), Jay
Alameda (NCSA), and Dongju Choi and Amit Majumdar (SDSC) are providing ASTA support for this
project. Carlos Rosales-Fernandez from TACC worked with user German V. Kolmakov from the
University of Pittsburgh on porting his Lattice Boltzmann Method code to GPUs. Initially a group
of consultants (Jay Alameda, Carlos Rosales-Fernandez, Dongju Choi, Amitava Majumdar)<br />
suggested using accelerator regions defined within the PGI compiler framework to improve<br />
performance, but the limitations of this method prompted the user to try a straight port to CUDA.<br />
An initial version of the code was ported to CUDA by the user, with a performance that was<br />
benchmarked to be around twice as slow as that of the pure CPU code when running on a Tesla<br />
C870 system and to only match the CPU code speed when run on a Quadroplex 2200 S4 card on<br />
the Longhorn system at TACC. Subsequent improvements were made to the code, focusing on<br />
memory access. Improved shared memory and register usage and improved memory access<br />
patterns led to a 16X performance improvement, representing a reduction from 205 ms/iteration<br />
to 13 ms/iteration for the benchmarked example.<br />
Since this optimization of a simplified version of the code has been successful, the AUS<br />
consultant will continue to work with the user in order to complete the port of the production<br />
code. This work will include the parallelization of the production GPU code over multiple GPUs.<br />
PI. Naduvalath (UNLV, Chemistry). Computational Modeling of Ultracold Molecular Collisions
and Reactions Involving Clusters and Nanoparticles. Continuing through 03/11. ASTA effort is
provided by Bilel Hadri (NICS) and Mahin Mahmoodi (PSC). Initial contact has been made with
the PI, and the ASTA team is waiting, as the PI needs a little more time to start the interaction.
PI. LaLonde (Bryn Mawr College, Biochemistry and Molecular Structure and Function).
Development of Novel HIV Entry Inhibitors Using the ROCS Shape-Based Matching Algorithm
and Molecular Dynamics Studies of gp120 Envelope Proteins. Continuing through 03/11. Phil
Blood (PSC) is the ASTA staff for this project and did quite a bit of work with the PI before the
ASTA was officially initiated. There is no major update for this quarter.
PI. Blaisten-Barojas (George Mason University, Physical Chemistry). Simulation and Data
Analysis of Macro-Molecular Systems. Continuing through 03/11. ASTA staff for this project is
Yang Wang (PSC). Initial contact was made with the PI's team; because the PI is busy with other
work, the PI will contact the ASTA staff in August for help.
PI. Schafmeister (Temple University, Organic and Macromolecular Chemistry). Applications of
Bis-peptide Nanostructures. Continuing through 04/11. ASTA staff for this project is Bilel Hadri
(NICS). In this quarter, help was provided with building the Boost library with the PGI compiler
on the Kraken machine; the PI switched to the GNU compiler, and both the GNU and PathScale
compilers worked in this case. The PI was also given guidance on running performance tools
such as PAPI, CrayPat, and IPM, which are expected to help the PI understand performance as
well as memory usage.
PI. Ozdoganlar (CMU, Manufacturing Processes and Equipment). Finite-Element Modeling of
Single- and Poly-Crystalline Material Removal Including Crystallographic Effects. Continuing
through 03/11. ASTA staff for this project is Anirban Jana (PSC). Anirban has worked with the
PI's student, Shin Hyung Song, to run parallel ABAQUS simulations of the plastic deformation
of crystals on Pople, using user-defined subroutines written in Fortran.
5.5.2 Advanced Support for Projects (ASP)<br />
Under the ASP sub-area, AUS staff work on projects that directly benefit a large number of
TeraGrid users. This contrasts with ASTA projects, in which AUS staff work with an individual
user on his/her applications, following the guidance of the TRAC review process, so that the
user can use TeraGrid resources effectively. The different categories of ASPs are:
• Installation and maintenance of domain science codes and software as well as HPC and<br />
CI related software and tools on suitable HPC machines located at the RP sites<br />
• Projects, decided in collaboration with users, that have the potential to impact large<br />
number of users<br />
• Creation of a software and associated database to monitor libraries being used by users<br />
• Implementation and testing of software allowing cross-site MPIg based simulations to<br />
run and associated interaction with cloud computing<br />
Among these four categories, the second one involves more AUS staff effort than the other
three.
5.5.2.1 Installation and maintenance of domain science codes and software as well as HPC<br />
and CI related software and tools on all the HPC machines located at the RP sites<br />
For this category of ASP tasks, AUS staff from all the RP sites continued to participate in this
effort. This is considered foundational work that is performed on an ongoing basis and is
needed by TeraGrid users who routinely use these codes and software for their science
simulations. Many of these codes and software packages are also needed and used by AUS staff
as part of various ASTA projects. Examples include scientific applications in domain sciences
such as chemistry, biochemistry, materials science, engineering, numerical mathematics,
visualization and data, and in HPC/CI. AUS effort continued this quarter on maintenance and
proper installation of these codes and software, optimization of the software for each
architecture, debugging, and interaction with and training for users regarding usage of these
codes and software. This work is carried out by AUS staff who have Ph.D.- and M.S.-level
expertise in specific domain sciences and computer science.
5.5.2.2 Projects, decided in collaboration with users, that have the potential to impact large<br />
number of users<br />
Under the second category of ASP efforts, AUS staff work on projects that have the potential to
benefit a large number of users (at least 10 user groups) within a specific domain science, or
TeraGrid users in general, utilizing the unique expertise and experience that AUS staff hold. The
projects are identified based on the impact they will have on the user community and on input
from users, AUS staff, and other TeraGrid experts. Expertise is available within the AUS staff
pool, across the RP sites, to undertake such projects with broader impact, and AUS staff from
multiple RP sites jointly collaborate on them. AUS staff continued to pursue five such projects.
Two of these were pursued by AUS staff who have domain science as well as HPC expertise
in chemistry/biochemistry and materials science and who are users and, in some cases,
developers of molecular dynamics and materials science codes; large numbers of TeraGrid users
run these codes on various TeraGrid machines. The third project, investigating the hybrid
programming model, was pursued by AUS staff with expertise in HPC programming paradigms.
The fourth project involves investigating PGAS languages and comparing their performance
with MPI-based programming models; initially, kernels for linear system solvers are being
explored, other kernels/codes will be added later, and AUS staff from other sites can join as the
project progresses. The fifth project involves developing tutorials on profiling and tracing tools
for users.
The first project involved porting, optimizing (using the best compiler options, machine specific<br />
environment variables), benchmarking, and profiling widely used molecular dynamics (MD)<br />
codes on various TG machines. Currently AUS staff from PSC, SDSC, and TACC are<br />
participating in this project. AMBER and NAMD were benchmarked on Ranger and Kraken for<br />
various input cases and utilizing various compiler and environment options. Benchmarks were<br />
also done using all the cores within a node and using a partial number of cores per node to see
the impact on performance and scaling. The results were presented on the AUS benchmark
webpage, and feedback was received from some users of these codes. During the quarter, AUS
staff worked with the documentation group to do a final edit of these benchmark pages. All of
the technical information is available to the MD user community via web-based documentation
and can also be used for TeraGrid training and workshops. We believe that making such detailed
MD benchmark data available to TeraGrid MD users will greatly benefit this large
community. In addition to always having the optimally installed MD codes on various TG<br />
machines, this project will enable the MD community to (1) gain knowledge about performance<br />
and scaling characteristics of MD codes on various machines, (2) know about the optimal<br />
compiler and libraries to use on TG machines, (3) install their own versions of MD codes on<br />
TeraGrid machines, and (4) justify resource requests in allocation proposals. This will also help<br />
TRAC reviewers to cross-check users’ requests for resources in their allocation proposals. The<br />
profiling results will be made available to the MD code developers and enable them to tune the<br />
future released versions of the MD codes for TeraGrid machines. Feedback regarding the
usefulness of this project was sought from about ten prominent, long-time TeraGrid MD users,
and strong support for the project was received.
The second project is similar to the MD project described above and targets codes that are used<br />
more for materials science type simulations, such as VASP and CPMD. AUS staff from NCSA,
LONI, PSC, and NICS who have expertise in these codes and in the field of materials science are
carrying out this second project. AUS staff have benchmarked VASP on various TeraGrid
machines for various input cases and parameter settings and worked with the TeraGrid
documentation experts to do final edits of these results. A benchmarking effort for the newer
version of VASP is also underway. The results are available to the materials science user
community. Feedback received from materials science users likewise showed strong support for
this project.
The third project involves evaluation and benchmarking of the hybrid (MPI-OpenMP/pthreads)<br />
programming model. Due to the emergence of multi-core processors, hybrid programming, using<br />
MPI tasks and multiple OpenMP threads or pthreads per task, is of interest to users. This model
may provide better performance than straight MPI programming. In addition, hybrid
programming may allow some simulations that are otherwise not possible due to the limited
memory available per core, and hence provide a science capability not achievable by straight
MPI codes on multi-core architectures. In order to investigate these issues, AUS staff from PSC,
NICS, TACC, and SDSC
are comparing performance and usage of codes written in hybrid mode versus straight MPI mode.<br />
The codes being targeted are an astrophysics code, a hybrid benchmarking code, phylogenetics
codes, FFTW, and a Monte Carlo code; work is ongoing. AUS staff plan to write up the results
comparing straight MPI versus MPI-OpenMP/pthreads performance for the TeraGrid user
community. Raghu Reddy (PSC) continued to work with University of Colorado researchers on
measuring the hybrid performance of their code. One member of the group will present the
results at TG10. This work also resulted in the following submission:
http://arxiv.org/abs/1003.4322.
As mentioned earlier, the fourth project involves investigating PGAS languages. Progress to date
includes the implementation of a 2D finite-difference kernel in UPC and MPI. Testing of this
kernel demonstrated the expected MPI performance but poor UPC performance. Interactions
have begun with the Berkeley UPC group, and the TACC project team is waiting for their
response on a few items before moving on to the implementation of other kernels. A Chapel
implementation of the same kernel is underway. Benchmarks were also run with MUPC, with
similar results.
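For reference, the serial numerical content of a 2D finite-difference kernel of this kind can be sketched as follows; the UPC and MPI versions partition the grid across threads or ranks and exchange halo rows at partition boundaries. This is an illustrative sketch under those assumptions, not the TACC team's code.

```python
# Serial reference for a 2D five-point finite-difference (Jacobi) sweep.
# Parallel versions split the rows across ranks and exchange boundary
# ("halo") rows between neighbors before each sweep.

def jacobi_step(grid):
    """One Jacobi sweep of the 5-point Laplacian stencil.

    Boundary rows/columns are held fixed (Dirichlet conditions); each
    interior point becomes the average of its four neighbors.
    """
    n = len(grid)
    new = [row[:] for row in grid]  # copy, so boundaries are preserved
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    return new

# 4x4 grid with a "hot" top edge; one sweep diffuses heat into the interior.
g = [[1.0, 1.0, 1.0, 1.0]] + [[0.0] * 4 for _ in range(3)]
g = jacobi_step(g)
```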
The fifth project involved developing a TAU tutorial; this was done by AUS staff from PSC. The
tutorial and documentation were linked from the AUS projects page (i.e., the same page where
the benchmark results and hybrid programming results are published). The TAU tutorial gives
users information about the TAU installations on Ranger, Kraken, and other TeraGrid resources
and serves as a practical guide to TAU. It allows users to learn the commonly used features of
the TAU software tool and enables them to successfully start using TAU on TeraGrid resources.
Figure 5.1 shows one of the images from the tutorial.
Figure 5.1 TAU Startup Screen as Shown in the TAU Tutorial.<br />
5.5.2.3 Creation of a software and associated database to monitor libraries being used by<br />
users<br />
For the third category of ASP, AUS staff at NICS are implementing and testing instrumented
“ld” and “aprun” commands as part of the libraries database project on Kraken. This database
will provide user support staff with knowledge about library usage, such as which libraries are
used the most and how frequently. The information obtained will be used to improve software
management, procurement, and maintenance. This information may also prove useful in the
planning and development stages of new resources as they are brought into production. After
testing on Kraken, the software will be made available to other RP sites that may be interested in
using it. During this quarter a beta release of the software was made and shared with the new
NOAA center at ORNL.
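As an illustration of the kind of analysis such a database enables, the sketch below tallies library usage from logged link lines. The log format and all names here are hypothetical, not the NICS implementation.

```python
# Sketch: given link lines captured by an instrumented "ld", count how often
# each library is linked. Hypothetical log format, illustrative only.
from collections import Counter

def count_libraries(link_lines):
    """Tally -lfoo flags across a sequence of logged linker command lines."""
    counts = Counter()
    for line in link_lines:
        for token in line.split():
            if token.startswith("-l") and len(token) > 2:
                counts[token[2:]] += 1  # strip the "-l" prefix
    return counts

log = [
    "ld a.o -lfftw3 -lm",
    "ld b.o -lm",
    "ld c.o -lsci -lm",
]
usage = count_libraries(log)
```

A report like "libm linked 3 times, fftw3 once" is exactly the usage signal that informs software procurement and maintenance decisions.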
5.5.2.4 Implementation and testing of software allowing cross-site MPIg based simulations to<br />
run and associated interaction with cloud computing<br />
Under the fourth category of ASP, the project being carried out involves both ASTA and ASP
work and has been described in §5.5.1 (PI: Roberts (RTI International, Computer Science),
EpiSims; PI: Karniadakis (Brown U., CFD), NekTar-G2; PI: Coveney (University College
London, Chemistry), GENIUS).
5.5.3 Advanced Support for EOT (ASEOT)<br />
AUS staff from all the RP sites contribute to preparing and delivering advanced HPC and CI
technical content for training sessions and workshops offered throughout the year at all the RP
sites. This is done in coordination with the EOT area. The workshops offered are listed in the
EOT section of this report.
Under the ASEOT effort, AUS staff continued to participate in the Extreme Scalability (XS)<br />
working group, which specifically focuses on architecture, performance, tools and software<br />
related to petascale-level simulation. AUS staff are participating in projects identified by the XS<br />
working group, and in many cases these projects allow interaction with researchers/users funded<br />
by NSF PetaApps, SDCI, and STCI grants. Interactions within the extreme scalability working<br />
group also help AUS staff identify ASPs that may impact a large number of users. The hybrid
MPI-OpenMP programming model project and the TAU tools project, as stated in §5.5.2.2, are<br />
joint projects between the AUS area and the XS working group.<br />
AUS experts, from all the RP sites, regularly present technical talks at the bi-weekly AUS<br />
technical tele/web-conferences. These technical talks, based on ASTA work or other AUS work,<br />
are recorded and the slides and the audio recordings are available from:<br />
http://teragridforum.org/mediawiki/index.php?title=Technical_presentations. The EOT area
director and other members of the EOT area are reviewing these talks and plan to utilize them in
developing future HPC/CI technical training materials.
AUS staff are engaged in organizing and planning the TG10 conference, to be held in
Pittsburgh in August 2010. Multiple AUS staff served as reviewers of abstracts/papers and in
selecting and inviting speakers for the science track. Multiple AUS staff are regular members
of the TRAC review committee; in addition, many AUS staff review TRAC proposals that
request smaller amounts of SUs and hence do not require review by regular members of the
TRAC review committee.
6 User Facing Projects and Core Services<br />
The User-Facing Projects and Core Services (UFC) area continued its work of providing the
central access and information dissemination points for TeraGrid users and of presenting a single
interface to the collective efforts of the TeraGrid RPs. The GIG supports and coordinates effort in
this area from staff at IU, NCSA, PSC, SDSC, TACC, and UC/ANL and coordinates the<br />
participation by and integration of all RPs into the UFC environment and processes.<br />
6.1 Highlights<br />
662 new users were added to TeraGrid, making this the second-busiest quarter ever. The June
TRAC meeting reviewed requests for more than 700 million SUs and the TeraGrid team, for the<br />
first time, implemented an automated process to roll back recommended awards to fit within the<br />
available SU capacity.<br />
The ongoing work to permit unvetted users to create their TeraGrid User Portal (TGUP) accounts<br />
led to a completed and approved implementation document; work will proceed in Q3. The TGUP<br />
now allows users to view their submitted tickets and see activity on those tickets, and the POPS<br />
system now has a significantly streamlined resource request page. The accounting system<br />
(TGCDB and AMIE) added two significant features. First, the system allows sites to include user<br />
uid and project gid information, in support of the Lustre wide-area file system effort. Second, the<br />
TGCDB now permits sites to send usage information associated with allocations on storage<br />
systems. The team has also begun collaborating with the XD-TAS team, both to identify the best<br />
way to transfer data from TGCDB into the XD-TAS system and to ensure that the metrics portlet effort
by the Documentation team does not duplicate features of the XD-TAS system.<br />
6.2 Enhanced TeraGrid User Access<br />
This objective supports the creation and distribution of new user logins across TeraGrid RPs for<br />
the various resources, as well as enhancements to this process to integrate with campus<br />
authentication infrastructures. Within the TeraGrid User Portal, users are also provided ways to<br />
maintain their user profiles, customize their interaction with TeraGrid, and access TeraGrid<br />
resources and services remotely.<br />
The TGUP team completed the implementation plan for creating “unvetted” new user accounts<br />
and received sign-off from almost all working groups. Following a discussion with the security-wg<br />
lead in July, implementation activities are getting underway in anticipation of final sign-off<br />
from that working group early in Q3, with completion targeted by the end of the calendar year.<br />
Also in Q2, the NOS team provided the TGUP team with a prototype implementation and<br />
documentation of Shibboleth integration with Liferay, and work is progressing on this integration<br />
effort.<br />
A TGUP (and TG Mobile) portlet to list a user’s help tickets was deployed in production. The<br />
ticket information is provided via a REST service that queries the ticketing system database. A<br />
revised add/remove user form was released in June that supports multiple add/remove requests<br />
in a single submission.<br />
UFC staff have also prototyped a TGUP portlet for accessing the MCP metascheduling capability<br />
on TeraGrid. The developer is continuing to extract information from TG IIS and other sources to<br />
populate MCP submission data. The MCP portlet should be ready for testing in early Q3.<br />
The TGUP team also prepared to release a new file-sharing feature within the portal and TG<br />
Mobile site. The new features offer comprehensive file management via the Files category in the<br />
My TeraGrid section of the Mobile TGUP, and collaborative file sharing within the TGUP File<br />
Manager. The new collaborative file sharing features are<br />
enabled by the TeraGrid Share service. TeraGrid Share provides users with 2GB of secure,<br />
personal space for collaborating with other TeraGrid users and the world. The new mobile file<br />
management feature allows users to manage files across the TeraGrid while away from their<br />
desktop. Users can browse through data, download files, create folders, rename and delete files<br />
and folders, copy files between systems, and publish files to their TeraGrid Share accounts.<br />
6.2.1 Operational Activities<br />
TeraGrid continued to set new highs for many user-related metrics. Notably, in May,<br />
TeraGrid surpassed 1,400 individuals charging jobs in a single month. TeraGrid also surpassed<br />
6,000 TeraGrid users at the quarter’s end. Finally, 662 new individuals joined the ranks of TeraGrid<br />
users, the second-highest quarterly total to date. This elevated workload is made possible by prior<br />
efforts to streamline the new-user process and reduce the central staff effort.<br />
TeraGrid users numbered 6,482 at the end of the quarter, including 5,414 individuals with current<br />
TeraGrid accounts and 1,068 reported gateway users. Of the 5,414 individuals, 2,037 (38%) were<br />
associated with jobs recorded in the TeraGrid central accounting system. Figure 6-1 shows the<br />
growth of the TeraGrid community since 2004. (Active Users = Users Charging Jobs + Gateway<br />
Users; TeraGrid Users = Current Accounts + Gateway Users.)<br />
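The definitions above can be checked directly against the figures quoted in this section; a small sketch, using only numbers reported here:<br />

```python
# Figures quoted in this section (Q2 2010).
current_accounts = 5_414      # individuals with current TeraGrid accounts
gateway_users = 1_068         # reported gateway users
users_charging_jobs = 2_037   # individuals with jobs in the central accounting system

# TeraGrid Users = Current Accounts + Gateway Users
teragrid_users = current_accounts + gateway_users
print(teragrid_users)  # 6482, matching the reported total

# Share of account holders associated with recorded jobs
print(f"{users_charging_jobs / current_accounts:.0%}")  # 38%
```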
The TGUP’s access capabilities continue to be among the most highly used features, with users<br />
accessing the file manager portlet 18,061 times this quarter, and 1,221 users visiting the GSI-SSH<br />
portlet 23,685 times. The new ticket system portlet made the list of most-visited portlets, with 350<br />
visits, even though it only entered production in June. The automated password reset function was<br />
used 292 times. Table 6-1 shows the most highly visited TGUP applications for this quarter.<br />
Figure 6-1. TeraGrid User Trends, Q2 2010<br />
Table 6-1. Q2 2010 Most Visited TGUP Applications<br />
Application # Users Q2 2010 Visits<br />
Knowledge Base 790 45,834<br />
Allocation Usage 2,037 33,197<br />
GSI-SSH 1,221 23,685<br />
System Accounts 1,477 20,239<br />
Data Collections 91 17,873<br />
File Manager 89 18,061<br />
System Monitor 1,267 12,925<br />
Science Gateways 101 5,072<br />
Consulting 675 4,571<br />
Add/Remove User Form 619 3,798<br />
User Profile 787 2,115<br />
Queue Prediction 124 1,327<br />
Wait Time Prediction 124 1,327<br />
DN Listing 513 915<br />
Feedback 33 719<br />
Community Accounts 430 694<br />
*Ticketing System 99 350<br />
* The ticketing system portlet entered production in June.<br />
6.3 Allocations Process and Allocation Management<br />
This objective encompasses the allocations process, both for Startup/Education allocations and<br />
for the merit-review TRAC Research request process; the POPS system for request handling and<br />
management, mechanisms by which allocation PIs manage allocations through transfers,<br />
extensions and so on, and interfaces by which allocation PIs manage the users who are authorized<br />
to use their allocations. Operationally, this objective includes the TRAC review process, the<br />
Startup allocations review and decision process, and the maintenance and operations of the POPS<br />
system.<br />
The most significant effort was completing the definition of the formulas and process for<br />
“Recommended and Available Allocation Reconciliation.” This work was initiated in recognition of<br />
the fact that oversubscription—that is, TRAC recommendations exceeding available RP SUs—<br />
was fast becoming a pressing issue within TeraGrid. The previous methods were too informal to<br />
deal fairly and transparently with a situation in which awards have to be reduced by potentially<br />
hundreds of millions of SUs; in addition, NSF directives required TeraGrid to factor funding<br />
sources into the equation. The resulting calculations, performed within Excel, optimize awards<br />
based on a complex formula incorporating funding source information (with NSF awardees<br />
cut less than non-NSF awardees) and size of award (with smaller awards cut less than the<br />
largest awards).<br />
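The report does not publish the actual reconciliation formula, but the mechanics it describes can be sketched as follows. This is a hypothetical illustration: the protection weights for funding source and award size are invented for the example, not taken from the TeraGrid process.<br />

```python
def reconcile_awards(awards, capacity_sus):
    """Scale recommended awards down so their total fits available SUs.

    awards: list of (recommended_sus, is_nsf_funded) pairs.
    Hypothetical weighting: non-NSF and larger awards absorb
    proportionally more of the cut (no clamping, for brevity).
    """
    total = sum(sus for sus, _ in awards)
    if total <= capacity_sus:
        return [float(sus) for sus, _ in awards]
    largest = max(sus for sus, _ in awards)
    # Cut weight: non-NSF awards weighted 2x, scaled by relative award size.
    weights = [(1.0 if nsf else 2.0) * (sus / largest) * sus
               for sus, nsf in awards]
    overage = total - capacity_sus
    wsum = sum(weights)
    # Distribute the overage in proportion to each award's cut weight.
    return [sus - overage * w / wsum
            for (sus, _), w in zip(awards, weights)]
```

By construction the adjusted awards sum exactly to the available capacity, and a small NSF-funded award retains a larger fraction of its recommendation than a large non-NSF award.<br />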
The POPS team rolled out a new version of POPS with a streamlined Resource Request page.<br />
Rather than requiring users to answer low-level questions associated with each resource requested<br />
(data that were never reviewed or used in the allocations process), users now simply enter request<br />
amounts and optional comments and answer three general questions about their planned allocation<br />
use. The three questions were defined by the TeraGrid Director of Science and added in support<br />
of the usage modalities effort. Work began on efforts to streamline the entry screens for POPS to<br />
reduce the opportunities for confusion and error by submitters.<br />
The team also supported a new process by which Advanced User Support allocations will now be<br />
associated with a project in TGCDB. A process that uses existing POPS functionality (transfer<br />
requests) was defined to allow the AUS Area Director to have ASTA “allocations” associated<br />
with the respective projects in TGCDB. These account for a number of the Transfer requests<br />
made this quarter.<br />
6.3.1 Operational Activities<br />
The allocations staff also managed the June 2010 TRAC meeting, in Arlington, VA, at which 102<br />
requests for TeraGrid resources were reviewed. Of the more than 730 million core-hours requested,<br />
more than 328 million core-hours of compute time were awarded, along with more than 2.4 PB of<br />
data storage. Due to the high request levels, the TRAC recommended awards that in aggregate<br />
exceeded the available computing time. Thus, the allocations team applied, for the first<br />
time, the process to reduce recommended awards to fit within the available SUs.<br />
Preparations to support a number of new resources were completed, including the TeraGrid Data<br />
Replication Service, Advanced User Support Service, TeraGrid Wide-Area File System, Nautilus,<br />
and Wispy.<br />
The allocation staff continues to receive more than two new or renewal Startup requests each day.<br />
TeraGrid received 223 Startup requests (193 approved) and 8 Education requests (8 approved).<br />
Table 6-2 shows the overall allocations management activity handled by POPS and the<br />
allocations staff. Note that for Transfers, the table shows only the positive side of the transaction<br />
to show the total transfer level; there is a corresponding negative amount, adjusted for resource<br />
exchange rates.<br />
Table 6-2. POPS Requests and Awards, Q2 2010<br />
Research Startup Education<br />
# 
(Columns for each of Research, Startup, and Education: # Req, SUs Requested, # Awd, SUs Awarded)<br />
New 49 176,959,526 45 57,229,483 197 28,266,856 171 18,901,423 6 240,165 6 242,006<br />
Prog. Rept 3 2,978,000 3 1,862,005 n/a n/a<br />
Renewal 52 554,374,301 50 247,248,274 26 2,970,341 22 4,815,025 2 307,512 2 307,513<br />
Advance 25 41,777,755 24 23,213,752 n/a n/a<br />
Justification 7 42,643,530 6 11,647,120 0 0 0 0 0 0 0 0<br />
Supplement 32 72,568,631 23 22,512,633 41 4,934,738 33 3,245,508 4 270,012 4 270,012<br />
Transfer 85 23,533,630 84 23,852,996 51 2,249,371 48 1,208,215 1 20,000 1 20,000<br />
Extension 40 n/a 37 n/a 52 n/a 48 n/a 0 n/a 0 n/a<br />
6.4 RP Integration and Information Sharing<br />
This objective ensures that RPs are able to provide information about their resources and services<br />
and that decisions made centrally (such as allocation awards) are communicated to the RP sites.<br />
This objective includes the accounting infrastructure of TeraGrid Central Database (TGCDB) and<br />
AMIE. For operations, the TeraGrid accounting infrastructure requires ongoing operational<br />
support and updates to the TGCDB and AMIE systems to support evolving policies and<br />
requirements for project, user and usage creation and exchange.<br />
Two significant enhancements to the AMIE packet transport software and TGCDB accounting<br />
procedures were implemented and deployed during the quarter.<br />
• Both AMIE and TGCDB were updated to support the optional inclusion of a site’s uid<br />
and gid for a user and project. This feature was added in support of the Lustre global file<br />
system. The uid and gid fields for existing users were back-populated in the TGCDB.<br />
• Storage accounting procedures were implemented and deployed in May,<br />
along with AMIE changes to extend the NPU packet definition to support storage usage.<br />
Rather than an “additive” accounting model (as used for compute accounting, in which<br />
the cost of new jobs is added to the previous job charges), the storage accounting uses a<br />
“replacement” model, in which the charges sent by the site replace the prior amounts for<br />
a given user. NCSA has implemented and tested storage NPUs and soon intends to start<br />
sending them in earnest.<br />
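The difference between the two models can be illustrated with a minimal sketch; the class and field names here are illustrative, not the actual TGCDB schema:<br />

```python
class UsageLedger:
    """Contrast of the two accounting models described above."""

    def __init__(self):
        self.compute_sus = {}    # user -> cumulative SUs (additive model)
        self.storage_used = {}   # user -> latest reported usage (replacement model)

    def record_job(self, user, sus_charged):
        # Additive: each new job's charge is added to the previous charges.
        self.compute_sus[user] = self.compute_sus.get(user, 0) + sus_charged

    def record_storage(self, user, amount):
        # Replacement: the charge sent by the site replaces the prior amount.
        self.storage_used[user] = amount
```

Two jobs of 10 and 5 SUs leave a cumulative charge of 15, while two storage reports of 100 and 80 units leave a current usage of 80.<br />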
The documentation team continued work on Liferay portlets to display current and past<br />
allocations information. The usage map from the TeraGrid usage web site (TGU) has been ported<br />
to both Google Maps and Google Earth forms; the team is working on identifying a production<br />
home for these portlets within the TeraGrid web site.<br />
The TGCDB team is also working with XD-TAS to identify the best way to get TGCDB data into<br />
their system. The XD-TAS effort at the University at Buffalo needs to query the TGCDB in order<br />
to perform its auditing function. Because the XD-TAS database queries could interfere with<br />
normal day-to-day TGCDB operation, Rob Light of PSC has been asked to develop a means for<br />
the TAS team to have a current copy of TGCDB. The model used by the TGCDB failover system<br />
is not appropriate here because it requires the primary and failover systems to be identical.<br />
Light’s plan is to send a copy of the up-to-date backup database that resides on PSC’s TGCDB<br />
failover system to Buffalo nightly, which the TAS team can restore into a running server. This<br />
concept will be discussed further with the TAS team and developed during the next quarter. The<br />
documentation team will also coordinate with XD-TAS on future metrics-related portlets to avoid<br />
duplication of effort.<br />
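The planned nightly copy could be sketched as below. The host names, paths, and the choice of rsync and pg_restore are assumptions for illustration; the report states only that the failover system's backup will be shipped to Buffalo and restored into a running server.<br />

```python
def ship_backup_cmd(dump_path="/backups/tgcdb_latest.dump",
                    dest="tas@tas.buffalo.example:/incoming/"):
    # Hypothetical transfer of the failover system's nightly backup.
    return ["rsync", "-az", dump_path, dest]

def restore_copy_cmd(dump_path="/incoming/tgcdb_latest.dump",
                     dbname="tgcdb_copy"):
    # pg_restore --clean drops existing objects first, so the same
    # running database can be refreshed from each nightly dump.
    return ["pg_restore", "--clean", "--dbname", dbname, dump_path]

# A nightly cron job would run these via subprocess.run(cmd, check=True).
```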
The SDSC team identified student effort available to proceed with the migration of the Resource<br />
Catalog to integrate with the Resource Description Repository (RDR) developed at PSC. Work<br />
will proceed during the summer.<br />
6.4.1 Operational Activities<br />
Operationally, the Accounting Working Group prepared for a coordinated upgrade to a new<br />
AMIE version that supports packet exchange via .tar files. The new version required all sites to be<br />
running the latest AMIE version and a coordinated final step to deploy the .tar version. In June,<br />
all sites deployed the latest pre-tar version of AMIE. The final upgrade step was scheduled for<br />
and completed on July 14.<br />
Quarterly resource utilization metrics appear in Section 8 of this report.<br />
No TGCDB outages occurred in the quarter, and the source of a recurring Inca “false alarm” was<br />
identified and corrected. A query from the TeraGrid portal was causing excessive slowdown,<br />
which triggered the Inca alarm. The portal query has been rewritten, and thus far the problem has<br />
not resurfaced. The failover system continues to sync with the production TGCDB server. The<br />
regular quarterly test of TGCDB restoration procedures was completed successfully in less than<br />
two hours.<br />
The staff at SDSC and NCSA are also preparing for a PostgreSQL software upgrade for TGCDB.<br />
The current TGCDB software version (PostgreSQL 8.1) has been declared end-of-life as of<br />
November 2010. A test server running PostgreSQL 8.4 has been deployed at SDSC to<br />
evaluate and prepare for the upgrade.<br />
6.5 RP Operations: Accounting/Core Services<br />
TG RP staff provided local allocations and account management support for users, including<br />
account creation/deletion, password reset, and local system usage accounting. These staff<br />
interacted with the TG Account Management transaction system (AMIE), a transaction-based<br />
grid account management system that is central to all users having access to the TeraGrid.<br />
Four sites are now supporting gateway user tracking through the GRAM Audit Database<br />
integration: NICS, LONI, NCAR, and NCSA. More than 16,000 jobs have been tagged with<br />
GRAM and gateway user data. TACC accounting staff contributed to the effort to facilitate the<br />
tracking of TeraGrid resource usage through science gateways. This effort will enable the<br />
tracking of individual user activities, increasing security and user accountability.<br />
6.5.1 SDSC<br />
SDSC RP staff integrated the new Dash system with the TeraGrid allocations, account creation,<br />
and usage accounting systems. Dash was made available for Startup requests starting in April<br />
2010, and in Q2, the first Startup allocations were created, and jobs from several projects have<br />
been reported to TGCDB. The Dash system was included on the list of available resources that<br />
can be requested for Research (TRAC) allocations starting October 2010.<br />
6.5.2 TACC<br />
TACC staff members continued to contribute to activities related to User Facing Projects and<br />
Core Services.<br />
Maytal Dahan led TGUP portal development activities including the production release of a<br />
ticketing interface for TGUP and TGUP Mobile. This enables users to manage their tickets online<br />
via an easy-to-use portal interface and view historical tickets. Furthermore, the user portal team<br />
enhanced the add/remove user capability to enable easier user allocation management. Katie<br />
Cohen, Kent Milfeld, and Margaret Murray coordinated the TRAC review process and organized<br />
the allocation review meeting held within the reporting period. In addition, the TACC team<br />
organized and chaired the weekly allocations working group meetings, and processed all advance,<br />
startup (276), and supplement allocation requests during the reporting period.<br />
6.6 User Information Presentation<br />
This objective ensures that users are provided with current, accurate information from across the<br />
TeraGrid in a dynamic environment of resources, software and services. Operationally, this<br />
objective ensures high reliability of the presentation mechanisms, including quarterly operational<br />
support for the TGUP and TeraGrid web site. Support for TeraGrid internal web environments,<br />
notably the Wiki, is included here. The UFC team spent a week working with Liferay consulting<br />
staff to identify, troubleshoot, and resolve ongoing issues. The effort has significantly reduced the<br />
frequency and severity of the problems encountered with Liferay thus far.<br />
During Q2, the User News application was migrated successfully from SDSC’s Oracle database<br />
infrastructure to the TGCDB and postgreSQL. This effort eliminates the application’s dependence<br />
on two separate databases and will allow the User News service to be transitioned as needed to<br />
the XD program in 2011.<br />
6.6.1 Operational Activities<br />
The TGUP and TeraGrid Web presence continued to increase its user base and support large<br />
numbers of visitors and page hits. Table 6-3 shows hits, visitors and logins for the TGUP, Web<br />
site and Wiki. Reported Liferay metrics for May and June were incomplete because the awstats<br />
script stopped running during part of those months; the issue has been resolved. The Liferay<br />
environment experienced one significant unscheduled outage on June 8, when a 30-minute<br />
scheduled downtime to permit a hardware move at TACC turned into a six-hour outage due to a<br />
switch failure. Liferay preventive maintenance and update periods are being scheduled for<br />
off-hours until the fully redundant two-server system is deployed at TACC.<br />
The TGUP responded to 106,097 portlet requests from registered users. The Knowledgebase<br />
(accessible via both the TGUP and Web site) delivered 39,668 documents in the quarter. The<br />
Liferay environment now permits tracking usage of various web site portlets, and Table 6-4<br />
shows the usage and hits for the Resource, Software, Data Resource, and Gateway catalogs.<br />
Table 6-3. TeraGrid Web traffic, Q2 2010.<br />
(Columns: Web Unique Visitors / Hits; Portal Unique Visitors / Unique Logins / Hits; Mobile Logins / Hits; Wiki Unique Visitors / Pages / Edits; KB Docs / Hits)<br />
April: 8,785 / 527,675; 3,987 / 1,250 / 444,103; 47 / 3,078; 2,404 / n/a / n/a; 738 / 20,124<br />
May: n/a / n/a; n/a / 1,122 / n/a; 38 / 1,449; 2,459 / 7,737 / 45,629; 744 / 15,494<br />
June: n/a / n/a; n/a / 1,177 / n/a; 56 / 1,114; 2,526 / n/a / n/a; 749 / 20,857<br />
Table 6-4. User information catalog visits<br />
Catalog Total Visits<br />
Software Search 2,572<br />
Resource Catalog 2,528<br />
Gateway List 989<br />
ASTA List 524<br />
Data Resources 308<br />
CTSS by Resource 121<br />
6.7 RP Operations: User Facing Projects<br />
TG RP staff maintained and updated local documentation for their resources. Some of this<br />
activity resulted from user feedback and interaction with the TeraGrid Operations Center, RP help<br />
desk support personnel, and members of the advanced user support team. Hardware and software<br />
upgrades on HPC, visualization, and storage resources also resulted in local documentation<br />
activities.<br />
6.7.1 Indiana University<br />
Summary statistics for the TeraGrid Knowledge Base (TGKB):<br />
• Total number of TG KB documents available in the TGKB at the end of the 2nd quarter: 749<br />
o The total number of TG KB documents, including those archived, is 788<br />
• New documents created during the 2nd quarter: 22<br />
• Documents modified during the 2nd quarter: 169<br />
• Total documents retrieved: 182,348<br />
• Real hits: 56,475 (documents retrieved, minus bots, crawlers, scrapers, and link checkers). This<br />
number reflects IU server hits only because TeraGrid server data is unavailable at this time.<br />
On average for 2009, the TG server added an additional 16%, so a conservative estimate<br />
of real hits for the 2nd quarter would be around 65,500<br />
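The 65,500 estimate above follows directly from the 16% uplift:<br />

```python
iu_real_hits = 56_475
tg_server_uplift = 0.16  # average additional share contributed by the TG server in 2009

# Scale the IU-only count up by the historical TG-server share.
estimate = iu_real_hits * (1 + tg_server_uplift)
print(round(estimate))  # 65511, reported conservatively as "around 65,500"
```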
Figure 6-2. TeraGrid KB Number of Documents, Q2 2010<br />
Figure 6-3. TeraGrid KB Document Editing, Q2 2010<br />
Most new documentation came as a direct result of conversations within the User Services group.<br />
We are also monitoring the new Forums in Liferay, which have already spawned a document. As a general<br />
practice, the growth of the TG KB is now focused on support-driven documentation and less on<br />
quota-based contributions.<br />
Figure 6-4. TeraGrid KB Hits, Q2 2010<br />
As noted above, only partial data was available from the TeraGrid server, so these numbers do not<br />
include documents opened from the TeraGrid interface. Also note that these counts represent the<br />
number of documents opened excluding, as best we can, hits from bots, crawlers, etc.<br />
Content in the TeraGrid Knowledge Base repository can be accessed in a number of ways. The<br />
major sources for legitimate hits (documents opened excluding bots, scrapers, etc.) are the<br />
search interfaces within the TeraGrid web space (missing from this analysis), the IU KB search<br />
interface, links within TGKB documents, and Google searches. A large number of records in the<br />
weblogs do not provide information in the “referrer” field, which prevents determination of the<br />
originating page. Of those hits for which the weblogs provide referrer information, Google<br />
searches are the predominant source.<br />
Figure 6-5. TeraGrid KB Source of hits, Q2 2010<br />
Looking at the most frequently accessed documents in the second quarter compared to the<br />
first quarter, there was less overlap. In the 1st quarter, 16 documents appeared among the top 20<br />
documents in all 3 months. In the 2nd quarter, there were only 8 in common across the top 25.<br />
April and May were more alike, in that almost 75% of the most frequently accessed documents were<br />
the same. April had 7 unique documents among the most frequently accessed; May had only 4.<br />
June was the most distinctive of the three months, with 15 documents in the top 25 that did not<br />
appear in the lists for April and May.<br />
6.8 Information Production and Quality Assurance<br />
This objective includes efforts to provide documentation for TeraGrid activities, provide a<br />
knowledgebase of answers to user questions, and maintain and update general web site content,<br />
all within the TeraGrid Web environment. This activity requires close interaction with RPs and<br />
other GIG areas as subject-matter experts and sources of information. As part of UFC operations,<br />
this objective focuses on updates and Quality Assurance (QA). These ongoing processes and<br />
efforts ensure that existing user information remains correct, is coordinated across both the GIG<br />
and RP subject-matter experts, is in sync with the evolution of the TG environment (via collaboration<br />
with the CUE working group), and contains minimal errors.<br />
After largely (but not entirely) solving editing and workflow issues with the Liferay environment,<br />
the UFC team began adding more content, particularly new AUS-related pages and updates to the<br />
TG10 web site. A revision to the TeraGrid home page that takes advantage of Liferay capabilities<br />
remains in progress. The Knowledgebase team created 22 new documents and modified another<br />
169. The knowledgebase now holds 749 TeraGrid articles.<br />
6.8.1 Operational Activities<br />
The documentation team continued to work to maintain the correctness of internal and external<br />
links throughout the TeraGrid web site, with a goal of keeping documentation link errors to 3%<br />
or less. Issues with the Web Link Validator returning many false broken links in Liferay were<br />
resolved, and the team has once again begun using WLV for tracking link issues. The number of<br />
broken links has crept above 3%, due in part to Liferay issues but also to significant increases in the<br />
amount of content with the addition of the AUS performance pages, as reflected in the<br />
growth in the number of links on the site. The team continues to bring down the percentage of<br />
broken links.<br />
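The tracking against the 3% goal amounts to a simple ratio; a sketch using the final row of the validator data:<br />

```python
def broken_link_status(total_links, broken_links, goal=0.03):
    """Return the broken-link fraction and whether it exceeds the goal."""
    fraction = broken_links / total_links
    return fraction, fraction > goal

# 28-Jun-10 validator run: 1,279 links, 41 broken.
fraction, over_goal = broken_link_status(1279, 41)
print(f"{fraction:.1%} broken; over 3% goal: {over_goal}")  # 3.2% broken; over 3% goal: True
```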
Table 6-5. Web Link Validator report for Q2 2010.<br />
Date All links Good links Broken % Broken<br />
14-Apr-10 1,153 1,064 65 5.6%<br />
12-May-10 1,187 1,114 47 4.0%<br />
19-May-10 1,200 1,128 49 4.1%<br />
25-May-10 1,180 1,128 29 2.5%<br />
4-Jun-10 1,234 1,170 41 3.3%<br />
7-Jun-10 1,242 1,180 39 3.1%<br />
14-Jun-10 1,309 1,246 40 3.1%<br />
21-Jun-10 1,266 1,202 41 3.2%<br />
28-Jun-10 1,279 1,215 41 3.2%<br />
7 Data and Visualization<br />
7.1 Highlights<br />
Highlights for this quarter include new efforts to support research on the Deepwater Horizon<br />
spill, extension of Lustre-WAN capabilities in production, and continued efforts to integrate with<br />
other large data infrastructures.<br />
Initial hardware setup was completed and configuration and testing began for the new distributed<br />
Lustre-WAN. The production file system will have a petabyte of total capacity, and will be<br />
deployed in production on multiple TeraGrid resources by the end of the program year. Metadata<br />
services for the file system were brought up at PSC, and data servers are under construction at<br />
Indiana, NCSA, NICS, SDSC, and TACC.<br />
Deepwater Horizon oil spill related research activities at the Center for Space Research (CSR) at<br />
The University of Texas at Austin have resulted in a significant increase in the size of the TG data<br />
collection managed by CSR. The CSR MAGIC group now manages approximately 23.4<br />
terabytes of data on Corral using the data management system. All CSR MAGIC data is<br />
accessible over the web, with over 15,000 web requests for data from the collection during the<br />
reporting period (from over 2,500 separate systems).<br />
Researcher Scott Michael was offered compute cycles at Mississippi State University to begin his<br />
analysis of the data generated on Pople at PSC and Cobalt at NCSA. To avoid moving 30 TB of<br />
simulation data to MSU, IU’s Data Capacitor wide-area file system was used to bridge the<br />
campuses; the file system was mounted on compute nodes at MSU. In doing this,<br />
Michael was able to begin his analyses right away without the time-consuming step of bundling<br />
his data and moving it from the TeraGrid to a non-TeraGrid institution. Michael has written a<br />
paper about this experience to be presented at the TeraGrid ’10 conference.<br />
SDSC continued work on preparing Lustre-WAN for production. DC-WAN (IU Lustre) and<br />
GPFS-WAN (SDSC GPFS) have been mounted on the Dash cluster.<br />
In Q1, the Purdue RP migrated its existing TG data collections from the old SRB system to the new<br />
iRODS system. In Q2, the Purdue RP modified the Purdue environmental data portal to connect with<br />
the new data management system. The new site is currently being tested.<br />
The ORNL RP continued its active interaction with the REDDnet collaboration and its new<br />
initiative, the Data Logistics Toolkit (DLT). REDDnet was recognized at the Spring Internet2<br />
meeting (http://events.internet2.edu/2010/spring-mm/) with an IDEA award<br />
(http://www.internet2.edu/idea/2010/) for providing distributed working storage to assist<br />
projects with large data-movement needs.<br />
7.2 Data<br />
7.2.1 Data Movement<br />
Data movement infrastructure based on GridFTP remains stable within the TeraGrid, with<br />
monitoring performed by Inca and the Speedpage. Interactions with the REDDnet collaborators<br />
are expected to result in development of additional mechanisms to facilitate data movement into<br />
and out of TeraGrid resources.<br />
7.2.2 Wide-Area Filesystems<br />
Work continued to deploy the new distributed Lustre-WAN file system using storage at six sites.<br />
Indiana, NCSA, NICS, PSC, and TACC all received new hardware to deploy storage servers, and<br />
SDSC began work on repurposing over 200TB of existing storage for use in the new Lustre-<br />
WAN file system. During the quarter the new hardware was deployed and mostly integrated into<br />
the network and power infrastructures of the various RPs, and efforts to prepare for software and<br />
network testing were underway.<br />
In addition to adding MSU as a non-TeraGrid site mounting DC-WAN, IU began work on the<br />
TeraGrid wide-area file system. IU wrote, and worked with system administrators at PSC to deploy,<br />
the UID mapping code that will permit a unified UID space for the TeraGrid. By the end of Q2,<br />
hardware had been delivered and racked, and the disk array had been zoned for production.<br />
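The UID mapping idea can be sketched as a two-way lookup table; this is a hypothetical illustration of the approach, not IU's actual code:<br />

```python
class UidMap:
    """Map site-local UIDs to a unified TeraGrid-wide UID space and back.

    entries: iterable of (site, local_uid, teragrid_uid) tuples, e.g. as
    distributed from the central database to each Lustre-WAN server.
    """

    def __init__(self, entries):
        self._to_global = {}
        self._to_local = {}
        for site, local_uid, tg_uid in entries:
            self._to_global[(site, local_uid)] = tg_uid
            self._to_local[(site, tg_uid)] = local_uid

    def global_uid(self, site, local_uid):
        # Used when a local client writes: stamp files with the unified UID.
        return self._to_global[(site, local_uid)]

    def local_uid(self, site, tg_uid):
        # Used when serving files back: present the site's local UID.
        return self._to_local[(site, tg_uid)]
```

One TeraGrid user who holds uid 501 at PSC and uid 1203 at IU maps to a single TeraGrid-wide uid, so file ownership is consistent across the shared file system.<br />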
PSC continues to operate Lustre 2.0-based test infrastructure to investigate future mechanisms to<br />
enhance wide-area file system security and compatibility. Developments in the quarter include<br />
initial testing of NFS4 as a gateway mechanism to a Lustre 2.0 file system and examinations of<br />
the use of Kerberos for authentication in this scenario.<br />
7.2.3 Data Collections<br />
Data Collections support was enhanced for allocated users by the addition of the Data Replication<br />
Service, including metadata management capabilities, and initial discussions were held with user<br />
representatives, as well as both of the present DataNet awardees, to examine ways in which<br />
TeraGrid infrastructure can support additional Data Collections activities.<br />
7.2.4 Archival Storage<br />
The Data Replication Service now provides a capability to move and replicate data across five of the<br />
TeraGrid RP archive systems, plus additional disk systems. It is hoped that this capability will<br />
grow to include more archive storage and to play a larger role in user management of archived<br />
data.<br />
7.2.5 Data Architecture<br />
Data Architecture implementation continues with the deployment of the Data Replication Service<br />
and the distributed Lustre-WAN system, both of which will offer persistent storage resources in<br />
support of Data Architecture goals.<br />
7.3 RP Operations: Data<br />
7.3.1 Distributed Data Management<br />
TG RP staff maintained, administered, and operated data resources including archival systems,<br />
GPFS-WAN, Lustre-WAN, database resources, explicit disk/tape allocations, and provision of<br />
scientific databases.<br />
SDSC continued work on preparing Lustre-WAN for production, including purchase orders for<br />
new servers and storage reassignment.<br />
The ORNL RP continued its participation in the REDDnet collaborative as well as hosting a node<br />
of the Earth Systems Grid (ESG) at the ORNL NSTG TeraGrid enclave.<br />
PSC made significant progress on the distributed Lustre-WAN deployment. PSC set up the<br />
metadata server environment for Lustre-WAN 1.8 deployment. This step allows other sites to<br />
participate in the Lustre-WAN by configuring their servers to use the MDS. PSC has worked<br />
closely with IU to incorporate their user ID mapping code into the deployment. PSC recruited a<br />
“friendly user” whose use uncovered a bug in the IU code, which the IU staff promptly fixed.<br />
To support the user ID mapping, it was necessary to make changes to the TGCDB and have a<br />
new AIME release propagated to all partner sites. PSC led a TeraGrid-wide effort to identify the<br />
changes and get them incorporated into TGCDB; this was a great example of TeraGrid teamwork<br />
that accomplished a lot in a short amount of time.<br />
PSC has established a framework for network testing between the Lustre-WAN components at the
partner sites. Preliminary tests have been run between TACC and PSC. The results of the testing<br />
are being posted to the Lustre WAN Deployment TeraGrid wiki page. See<br />
http://teragridforum.org/mediawiki/index.php?title=Distributed_Lustre-
WAN_Deployment#Network_performance_tuning.
Separately, PSC is continuing testing of the Kerberos-enabled security release of Lustre 2.0.<br />
Cross-realm testing is underway. PSC routinely participates in the Lustre-WAN deployment
conference calls to discuss all the Lustre-WAN activities with TeraGrid partners.<br />
The Purdue RP staff has completed the migration of data services which enable programmatic<br />
access to the iRODS data collections from several client systems including the Purdue<br />
Environmental Data Portal and the IndianaView Portal.<br />
Purdue RP staff integrated Purdue storage in the TeraGrid distributed Data Replication Service<br />
and continues to provide direct disk storage to the Data Replication Service. The RP is<br />
investigating the addition of the Hadoop distributed filesystem access to the replication service.<br />
Purdue login nodes continue to mount the Data Capacitor via Lustre-WAN, and continue making<br />
it available to TG users. The RP staff is continuing to investigate vendor tools to also mount DC-
WAN on the worker nodes of Steele, as those nodes are on a private IP network.
Purdue RP staff continues to evaluate Cycle Computing's CloudFS, a version of the Apache<br />
Hadoop distributed filesystem. The current testbed is approximately 30TB, and Purdue staff is<br />
working with users from the CMS (Compact Muon Solenoid) project and a social sciences<br />
application to evaluate the use of CloudFS/Hadoop to support high-throughput computing.<br />
Performance testing was conducted which showed that this deployment over the Purdue radon<br />
cluster had negligible impact on PBS jobs running in the cluster, in comparison to other software<br />
such as dCache. In addition, Purdue RP has deployed a small MapReduce cluster with 16 nodes<br />
and is working with instructors of a class at Purdue interested in using it.<br />
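The MapReduce programming model that Hadoop implements, and that the class cluster above supports, can be illustrated with a minimal single-process word count; this is a sketch of the model only, not of the Purdue deployment, and all function names are illustrative:

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit (word, 1) pairs from each document."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle step: group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the cat sat", "the cat ran"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

In a real Hadoop deployment the map and reduce steps run in parallel across nodes and the shuffle moves data over the network; only the programming model is the same.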
7.3.2 Data Collections<br />
Data collection services at the RP sites included the maintenance, administration, and operation<br />
of resources hosting data collections.<br />
Digital Preservation and Chronopolis at NCAR: SDSC, NCAR, and the University of Maryland<br />
(UMD) continued to advance the Chronopolis Consortium with the goal of pioneering new<br />
capabilities in distributed digital data preservation using TeraGrid cyberinfrastructure. Testing of<br />
the iRODS/iCAT system for data transmission has begun, and it is expected to replace the current<br />
SRB/MCAT system. Chronopolis holdings are currently approximately 22 TB in 10 million files.<br />
In Q2 Purdue RP incorporated a new group of Landsat remote sensing data into the TeraGrid<br />
LARS data collection.<br />
7.4 Visualization<br />
The University of Chicago/Argonne and TACC RP teams co-lead the TeraGrid Visualization<br />
working group, which meets bi-weekly. This group leverages the efforts and expertise of the RPs<br />
to provide users with fundamental support in the area of visualization. Work over the last quarter<br />
in the visualization working group has been focused on finishing the petascale visualization<br />
whitepaper. This paper was distributed to the Science Advisory Board for feedback.<br />
NCAR develops and distributes the visual data analysis package, VAPOR, which is funded in<br />
part through GIG PY5&6 funds. Progress was made on both of the major milestones of the<br />
PY5&6 award: 1) Research code supporting wavelet coefficient prioritization-based progressive<br />
data access was integrated into the VAPOR code base. 2) Evaluation of parallel netCDF APIs<br />
(pnetCDF and netCDF4) was completed in preparation for providing a parallel (MPI-callable)<br />
interface to the VAPOR data model.<br />
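As an illustration of the coefficient-prioritization idea (not VAPOR's actual code), a toy one-dimensional Haar transform shows how keeping only the largest-magnitude wavelet coefficients yields a coarse approximation that refines progressively as more coefficients arrive:

```python
import math

SQRT2 = math.sqrt(2)

def haar_forward(x):
    """Orthonormal Haar transform; len(x) must be a power of two.
    Returns (approximation, detail levels from finest to coarsest)."""
    a, levels = list(x), []
    while len(a) > 1:
        s = [(a[i] + a[i + 1]) / SQRT2 for i in range(0, len(a), 2)]
        d = [(a[i] - a[i + 1]) / SQRT2 for i in range(0, len(a), 2)]
        levels.append(d)
        a = s
    return a, levels

def haar_inverse(a, levels):
    a = list(a)
    for d in reversed(levels):
        nxt = []
        for s, dd in zip(a, d):
            nxt.extend([(s + dd) / SQRT2, (s - dd) / SQRT2])
        a = nxt
    return a

def reconstruct_top_k(x, k):
    """Keep only the k largest-magnitude detail coefficients: a coarse
    approximation that improves progressively as k grows."""
    a, levels = haar_forward(x)
    ranked = sorted(((abs(c), li, i) for li, lev in enumerate(levels)
                     for i, c in enumerate(lev)), reverse=True)
    keep = {(li, i) for _, li, i in ranked[:k]}
    trunc = [[c if (li, i) in keep else 0.0 for i, c in enumerate(lev)]
             for li, lev in enumerate(levels)]
    return haar_inverse(a, trunc)

signal = [4.0, 4.0, 5.0, 5.0, 9.0, 9.0, 1.0, 1.0]
full = reconstruct_top_k(signal, 7)  # all 7 detail coefficients kept: exact
```

Because the basis is orthonormal, the reconstruction error is exactly the energy of the discarded coefficients, so prioritizing by magnitude gives the best approximation for any coefficient budget.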
7.5 RP Operations: Visualization<br />
Visualization services at the TG RPs included the maintenance, administration, and operation of<br />
visualization resources, including visualization-specific systems.<br />
SDSC continued advanced visualization support to SCEC on M8 simulations and continued<br />
research and development work on visualizing vector fields.<br />
NCAR operated the DAV cluster, Twister. NCAR also supported the visual data analysis<br />
package, VAPOR.<br />
The Purdue RP made good progress in developing 3D visualization services for super-resolution
NEXRAD Level II radar data for research and education users. The implementation of a new
hierarchical data structure and GPU CUDA ray-guided Level-of-Detail rendering has been<br />
completed. The code for data processing and rendering has been optimized to meet real-time<br />
performance requirements for rendering large radar volumes over the entire United States at<br />
multiple resolutions interactively. A prototype end-to-end data processing and visualization<br />
pipeline has been implemented which includes (1) data retrieval from iRODS, (2) data<br />
preprocessing to build hierarchical data structure, (3) CUDA rendering on a Tesla server, (4) data<br />
post-processing to generate top image tiles for the entire US and rotation/tilt views for several hot<br />
spots at different resolutions, and (5) a Google gadget flash viewer for real time interactive radar<br />
data visualization. The gadget viewer will be released to the general public in the fall of 2010.<br />
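One core ingredient of ray-guided level-of-detail rendering is choosing a resolution level from viewing distance so that on-screen detail stays roughly constant. A minimal sketch, with illustrative parameters rather than Purdue's actual implementation:

```python
import math

def select_lod(distance, full_res_distance, max_level):
    """Pick a level of detail: level 0 is full resolution and each
    successive level halves the resolution, so a block twice as far
    away can be drawn one level coarser with the same screen footprint."""
    if distance <= full_res_distance:
        return 0
    level = int(math.floor(math.log2(distance / full_res_distance)))
    return min(level, max_level)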
The UChicago/Argonne RP continued its collaboration with PI Michael Norman on his Projects<br />
in Astrophysical and Cosmological Structure Formation. The RP team produced high-resolution<br />
images and animations of large-scale (4096³ and larger) Enzo simulation data using vl3, a
hardware-accelerated volume rendering application. During the past quarter the RP and the Enzo<br />
teams produced animations that were submitted, and accepted, to both the TeraGrid 2010<br />
conference’s Visualization Showcase and to the SciDAC 2010 conference’s Electronic<br />
Visualization Night. The UChicago/Argonne RP continued to work with the Parallel Simulations<br />
of Blood Flow in the Human Cranial Tree project (PI: George Karniadakis), and in collaboration
with a PetaApps award, on the visualization of their data, using a combination of ParaView and<br />
custom-built code. The RP team has extended their previously developed custom ParaView<br />
reader plug-in, which loads data in native NekTar format and pushes it into the ParaView<br />
pipeline. Additionally, the plug-in now also computes derived quantities from the native NekTar<br />
data, as well as curved surfaces at higher resolutions, where appropriate. The NekTar team has<br />
begun integrating LAMMPS code with their simulations, producing multi-scale data sets. The<br />
RP team has been working to visualize the resulting red blood cell data, as well as velocity<br />
continuum and particle data. The UChicago/Argonne RP has been collaborating with the<br />
Southern California Earthquake Center (SCEC) on the visualization of Frechet Kernel data.<br />
Because the SCEC researchers simulate thousands of these kernels, which then need to be<br />
investigated, the RP team has been exploring methods for automating the process for visualizing<br />
and presenting this data to the researchers. With some initial set-up configuration, early efforts<br />
have produced images and animations for dozens of such kernels, organized and presented on a
Web page. In addition, the RP has integrated geophysical data, including Moho discontinuity<br />
surface, into visualizations of the simulation data. The UChicago/Argonne RP has made<br />
numerous enhancements to vl3, their hardware-accelerated volume rendering software, in support<br />
of TeraGrid users. Several customizations to its animation capabilities enabled the production of<br />
several ENZO animations. New methods for data management have led to larger data capacities
and increased resource utilization, and an improved compositing algorithm has resulted in faster<br />
frame rates, particularly at larger scales. The vl3 software has been installed and tested on<br />
Longhorn, the new TeraGrid XD Visualization resource at TACC. Vl3 has been used to<br />
visualize the SCEC kernel data at smaller scales (~100MB), and will be used for larger simulation<br />
data (~100GB+) in the near future. The UChicago/Argonne RP is also actively pursuing<br />
additional TeraGrid users that can benefit from the volume rendering of large-scale datasets.<br />
The NCSA visualization group worked with A. Masud and R. Calderer from the Department of<br />
Civil and Environmental Engineering at the University of Illinois to visualize vortex shedding<br />
created near a cylinder, forced-isotropic turbulence, and turbulent channel flow.
Recently, NCSA visualization has also begun working with E. Mehnert from the Institute of<br />
Natural Resource Sustainability at the University of Illinois to visualize results from CO2 storage<br />
modeling simulations.<br />
The TACC team has been continuing to support both Spur and Longhorn to provide remote<br />
visualization resources and services. In addition, the TACC visualization team has worked to<br />
produce numerous visualizations in collaboration with users, including Clint Dawson who is<br />
using his ADCIRC model to simulate the evolution of the BP oil spill data. Dawson's team used<br />
Spur to create images from the simulations that run on Ranger. With the assistance of TACC’s<br />
visualization specialists, their visualization run-time was reduced from three hours to 15 minutes.<br />
While Dawson's team is using Longhorn to visualize forecast and simulation data such as wind<br />
velocity, water elevation, and oil particle position, TACC’s visualization specialists are focused<br />
on the overlay of oil particle movement and satellite data. The file system accessible from<br />
Ranger, Spur, and Longhorn makes it easy to access and visualize the data. TACC has also
completed year 1 training materials for Longhorn and used them for a training held January 27<br />
and 28 at TACC. TACC held the first of the quarterly trainings on April 12, 2010. The training,<br />
“Introduction to Scientific Visualization on Longhorn,” was held at the TACC training facilities
with 20 attendees. The science disciplines ranged from Computational Plasma Physics to Image
Registration and Computational Fluid Dynamics. The course included an overview of computer<br />
graphics and scientific visualization, and hands-on labs on Longhorn for visualizing with<br />
ParaView and VisIt. The attendees were encouraged to bring their own data to the course to<br />
provide a path to becoming Longhorn users. The second training for Longhorn was held at TACC<br />
on June 3, 2010 with 16 on-site attendees and 4 attendees participating via web-casting. Future<br />
tentative dates are September 3 and December 10, 2010. These trainings will be publicized
through both the TACC and TeraGrid venues. Longhorn training modules are included as part of<br />
the TACC Summer Supercomputing Institute to be held the week of July 19, 2010. John Clyne of
NCAR has provided training materials for VAPOR as a part of the TACC Summer
Supercomputing Institute and will be on site to provide the training in person.
8 Network Operations and Security<br />
8.1 Highlights<br />
A portion of SDSC’s Dash system was made available for production via Startup Allocations on<br />
April 1. The initial deployment is two 16-node partitions, one with vSMP and one without to<br />
allow for comparative benchmarking.<br />
8.2 Networking<br />
The TeraGrid continued operation of the backbone network consisting of core routers in Chicago,<br />
Denver and Los Angeles, connected with dedicated 10 Gigabit/sec links and two levels of<br />
redundancy in the form of alternative paths via NLR. TeraGrid Resource Providers maintain their
respective site border routers and dedicated links to one of the three core routers. The dedicated
links consist of at least one 10 Gb/s link.
The following two graphs show total usage and 20-second average peaks on the TeraGrid<br />
backbone between Chicago and Denver.<br />
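For scale, a fully utilized 10 Gb/s link carries about 108,000 GB per day, consistent with the ~100,000 GB/day theoretical capacity noted under Figure 8-2. A sketch (with synthetic counter values, not actual measurements) of how 20-second peaks and daily capacity can be computed:

```python
def daily_capacity_gb(link_gbps=10.0):
    """GB/day carried by a fully utilized link."""
    return link_gbps * 1e9 / 8 * 86400 / 1e9  # bytes/s * s/day / bytes-per-GB

def peak_gbps(byte_counters, interval_s=20):
    """Peak rate (Gb/s) from cumulative byte counters sampled every interval."""
    rates = [(b - a) * 8 / interval_s / 1e9
             for a, b in zip(byte_counters, byte_counters[1:])]
    return max(rates)

# Synthetic counters: 10 GB then 20 GB transferred in successive 20 s windows.
samples = [0.0, 10e9, 30e9]
```

The measured daily totals in Figure 8-2 sit well below the 108,000 GB ceiling, which is the headroom one would expect on a shared backbone.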
Figure 8-1: Daily peak bandwidth on Chicago-Denver link.<br />
Bandwidth is measured at 20s granularity.<br />
Figure 8-2: Total transfer (GB) per day on Chicago-Denver link.<br />
Total theoretical capacity is ~100,000 GB/day.<br />
8.3 RP Operations: Networking<br />
During this quarter, TG RP staff provided support and operation of the local RPs' network
connectivity to the TeraGrid backbone. Duties include daily monitoring and maintenance of the<br />
network, planning and/or implementing changes to RP connections to the TG backbone, direct<br />
user support for network-specific issues, switch firmware upgrades, and HPC system<br />
interconnects. TG RP staff also kept the user community apprised of interruptions in network<br />
connectivity via local and TeraGrid user news.<br />
During the second quarter of the 2010 calendar year, there were zero (0) unscheduled outages and
zero (0) scheduled maintenance events that directly affected Indiana University's connection to
the TeraGrid. All 131,040 minutes in the quarter were available, yielding 100% uptime. Had there
been outage or maintenance events, Indiana University's TeraGrid connectivity would have failed
over to our Internet2 and NLR connectivity, maintaining an actual site-to-site TeraGrid uptime of
100%.
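The quoted figure is easy to verify: April, May, and June contain 91 days, or 131,040 minutes. A sketch of the arithmetic, with a hypothetical outage for contrast:

```python
def q2_minutes():
    """Minutes in calendar Q2 (April + May + June)."""
    return (30 + 31 + 30) * 24 * 60  # 91 days

def uptime_percent(total_minutes, outage_minutes):
    """Percentage of the period during which the connection was available."""
    return 100.0 * (total_minutes - outage_minutes) / total_minutes
```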
[FutureGrid] The SDSC network connection was upgraded to 10 Gb/s.
The ORNL RP spent a great deal of its effort planning a physical move (scheduled for Q3) of the
NSTG cluster and associated service nodes to the Spallation Neutron Source (SNS) site. This will
allow better integration with SNS’s ongoing operations and enable new network connectivity<br />
with shorter hop counts to SNS data production nodes, pending resolution of ORNL network<br />
policy revisions. The network topology will remain unchanged, the only substantial change
being the replacement of a short-haul link with a long-haul link. The NSTG will continue to connect to the
Oak Ridge border router that serves as the main TeraGrid connection point for NSTG, NICS,<br />
RDAV, and Keeneland.<br />
8.4 Inca<br />
The Inca team continued to support and maintain the Inca deployment on TeraGrid and helped<br />
system administrators to debug failures on their machines. In response to IIS changes and<br />
requests from the gig-pack, security and QA groups, existing Inca test suites were modified.<br />
Three TeraGrid tests were also modified and one new test was written, configured and deployed.<br />
At the time of this report 2,846 pieces of test data are being collected across TeraGrid platforms.<br />
No machines were added or removed from the Inca TeraGrid deployment during this time. A<br />
depot redundancy upgrade to the Inca TeraGrid deployment is still planned, which will allow data<br />
to continue to be collected and queried at another site if the SDSC server fails. The depot<br />
redundancy feature is currently being tested with TeraGrid Inca data.<br />
The Inca team also continued work with a UCSD undergraduate student to develop a
visualization of Inca’s TeraGrid deployment using the Processing Toolkit. During this quarter,
the student added some interactive text interfaces and continued code development and cleanup.
This visualization work is ongoing and will be utilized by the Inca team and for demo purposes.<br />
8.5 Grid Services Usage<br />
The following figures show the usage statistics for the Grid services included in CTSS and<br />
deployed at RP sites. An update of the Globus usage collection tools was performed in<br />
anticipation of receiving new GRAM5 packets in Q3 after the new Globus Toolkit release goes
live. This update was necessary because the current GRAM4 usage collector was not capable of
receiving GRAM5 packets. Both the GRAM4 and GRAM5 usage collectors will be utilized and
maintained going forward.
8.5.1 Data Movement (GridFTP) Statistics<br />
Figure 8-3: Total GridFTP transfers (GB) for the quarter per site.<br />
Figure 8-4: Total GridFTP transfers (GB) per day per site.
Figure 8-5: Total number of GridFTP transfers per site.<br />
Figure 8-6: Total number of GridFTP transfers per day per site.<br />
8.5.2 Job Submission (GRAM) Statistics<br />
Figure 8-7: Number of GRAM submissions per resource per day.<br />
Figure 8-8: Total number of GRAM submissions per resource for the quarter.<br />
Figure 8-9: GRAM-submitted jobs as a fraction of the total jobs on each resource. Note that not all GRAM
job submissions are submitted to the queue systems and end up in the allocations database. Hence, in some
cases the number of GRAM jobs may exceed the total number of reported jobs (the percentages for Frost, Bigred,
and NSTG are certainly exaggerated due to this).
8.6 RP Operations: HPC Operations<br />
TG RPs provided the personnel and infrastructure support for the local TG RP resources that are<br />
made available to the user community. System administration duties include daily monitoring of<br />
the health of the system, 24-hour on-call support where provided, basic filesystem support, machine-specific
parallel filesystem support, and operating system software maintenance and upgrades.
Additionally, this activity provides for CTSS kit (grid middleware) deployment, administration<br />
and support. RP staff also assisted local and TeraGrid user services personnel to provide timely<br />
announcements to the user community related to the status of all TG resources.<br />
8.6.1 IU<br />
[FutureGrid] The Sierra and India clusters passed acceptance testing. An LDAP server for use<br />
across all FutureGrid machines has been implemented.<br />
8.6.2 LONI<br />
SAGA 1.5 is nearing release, and work is progressing with NICS to support remote job<br />
submission to Kraken. A planned storage upgrade for Queen Bee had to be delayed due to<br />
funding cycle issues. 5.6M SUs were delivered to 51.4K TeraGrid jobs serving 129 to 173 unique<br />
users per month. The system had only one planned outage, for a total of 11 hours of downtime for
storage maintenance.
8.6.3 NCSA<br />
NCSA increased the storage capacity of the Abe/Lincoln clusters by over 10% for a current total<br />
of 344 TB of usable scratch and project space.<br />
8.6.4 ORNL<br />
The largest activity for the ORNL-RP’s NSTG node was planning and preparing for the physical<br />
move to the SNS site in Q3. This included some preparatory configuration changes, many<br />
discussions with staff and labor as well as coordination with the Main campus machine room staff<br />
and the SNS machine room staff. In addition, highlights of other activities performed during the
quarter as part of computing operations included: investigating, implementing, and testing
propagation of gateway credentials; coping with the need for increasing numbers of firewall
exceptions between SNS machines and TeraGrid subnets in response to ORNL's default-deny
policy on outbound network sessions; continued effort to understand periodic performance
degradation in the PVFS2 deployment on the NSTG cluster; finalization of the last remaining
items from the previous OS upgrade; ESG operations; replacement of several failing or failed
disks; and preparing for an AMIE upgrade that occurred TG-wide in July.
8.6.5 PSC<br />
The National Institute of General Medical Sciences (NIGMS), part of the National Institutes of<br />
Health (NIH), awarded a two-year, $2.7 million grant to the National Resource for Biomedical<br />
Supercomputing (NRBSC) at PSC to host a specialized supercomputer for biomolecular<br />
simulation designed by D. E. Shaw Research (DESRES). The machine, called Anton, will be<br />
made available without cost by DESRES for non-commercial research use by universities and<br />
other not-for-profit institutions commencing in late 2010. (D. E. Shaw Research is an independent<br />
research laboratory that conducts basic scientific research in the field of computational biology.)<br />
Anton was designed to dramatically increase the speed of molecular dynamics (MD) simulations<br />
compared with the previous state of the art, allowing biomedical researchers to understand the<br />
motions and interactions of proteins and other biologically important molecules over much longer<br />
time periods than have previously been accessible to computational study. The machine and the<br />
novel algorithms it employs were designed by a team of researchers led by David E. Shaw, chief<br />
scientist of DESRES. Anton has run simulations extending for more than a millisecond of<br />
biological time — about 100 times longer than the longest previously published all-atom MD<br />
simulation.<br />
NRBSC has invited U.S. biomedical researchers to submit proposals for allocations of time on the<br />
Anton machine. The deadline for the first round of proposals has already passed. A peer review<br />
committee to be convened by the National Research Council will review proposals.<br />
As part of the NIGMS award, NRBSC also will install a new data storage and analysis subsystem,<br />
including nearly half a petabyte of online disk capacity, to host and make widely available the<br />
tremendous amount of data that will be produced by MD simulations on Anton.<br />
PSC has been actively involved in coordinating implementation of the first phase of the
Common User Environment (CUE) on the AQSL (Abe, Queen Bee, Steele, Lonestar) complex. We
helped define and have monitored the installation and testing of a standardized version of the
modules software across this complex. This software was installed by Resource Provider liaisons
under the coordination of the Common User Environment working group, led by PSC, and contains
commonly defined core modules as well as common environment variables consistent across all
platforms. We are in the process of having this environment expanded to include all TeraGrid
platforms.
The PSC MyProxy Certificate Authority (CA) has received final TAGPMA (The Americas Grid<br />
Policy Management Authority) accreditation, has been added to the collection of TeraGrid-<br />
Accepted CAs and is in full production. It will serve as an alternate/backup MyProxy credential<br />
service for TeraGrid, providing short-lived credentials to users when the default TeraGrid<br />
MyProxy CA at NCSA is unavailable. The fail-over capability has been tested in the production<br />
environment.<br />
PSC Scientific Computing Performance Specialist Mahin Mahmoodi installed TAU-2.19.1 on<br />
Kraken, which was necessary to overcome problems with TAU-2.19 for C codes.<br />
Staff members of the Center for Analysis and Prediction of Storms at the University of Oklahoma<br />
(OU/CAPS) reported that attempts to copy large files from the OU grid node (non-TeraGrid) to<br />
PSC’s archiver via TeraGrid-specific GridFTP addresses were failing. As it turned out, any non-<br />
TeraGrid node attempting to copy large files to these GridFTP addresses was affected. An
alternate address for non-TeraGrid nodes was established. We are incorporating these
changes into our user documentation. This work represented a more general repair for all non-<br />
TeraGrid nodes, not just OU/CAPS.<br />
8.6.6 Purdue<br />
Purdue’s Condor pool continues to support individual users and gateway uses. During Q2 Purdue<br />
RP staff continued to extend the performance of Puffin, a web-services platform developed at
Purdue to widen the Condor user base (e.g., use by gateway applications) and to increase
Condor usage by existing users through automation and simplification of job
submission and control. Puffin development this quarter focused on maximizing job
submission throughput. The code that handles communication between Puffin and the
underlying Condor engine was refined, and in some cases rewritten, for increased stability and
efficiency. Analytical systems were also implemented to allow tracking of performance metrics
throughout these efforts and to aid future improvements of the Puffin platform. Currently the<br />
Puffin web service Condor submission supports up to 5000 simultaneous job submissions. Purdue<br />
RP will work with end users and gateway applications in the next phase.<br />
Purdue continues work on supporting its cloud resource, Wispy. The RP continues to work with
ANL’s science cloud in supporting users.<br />
The RP continues with performance tuning on its central BlueArc filesystems to improve
responsiveness of scratch storage for TeraGrid user jobs.<br />
8.6.7 SDSC<br />
SDSC put its Dash resource into production on April 1 for Startup allocations. Dash is a
precursor experimental system for Gordon, coming in 2011.
8.6.8 UC/ANL<br />
The UChicago/Argonne RP has continued to host repo.teragrid.org, which includes the
TeraGrid-wide CVS repository.
8.7 GIG Operations<br />
The GRAM5 usage collector software was updated on globus-usage.teragrid.org. Ongoing
support continued for the MyProxy and Kerberos servers for TeraGrid at NCSA, the
mail.teragrid.org server, and the MyProxy, GSI-OpenSSH, and GSI-SSHTerm software used by
TeraGrid. This includes regular maintenance as well as addressing incoming support tickets. The
CIC IDM WG TeraGrid Pilot worked with the TeraGrid Security WG to obtain sign-off on the
Federated IDM Security Incident Response Policy. After the policy was updated to address the TG
Security WG's comments, the TG Security WG voted to approve it (8 in favor, 3 abstaining). The
policy was also presented at the InCommon Campus Authentication & Middleware Planning
meeting on June 22, 2010.
8.8 Security<br />
8.8.1 Security Working Group<br />
Summary of Security Incidents: There were five compromised user accounts and two login-node
compromises. A machine used by science gateway developers was compromised, which led to
other account credentials being captured. In some instances the attackers inserted unauthorized
SSH keys into users' directories to allow alternate methods of access. The Incident Response
team disabled all of the accounts involved and audited TeraGrid production machines for the
known fraudulent SSH keys.
Securing Community Accounts Revisited: Victor Hazlewood has been documenting community
account usage on the TG. Currently, documentation of the use cases for 22 science gateways is
provided on the TeraGrid Forum wiki Science Gateway Use Cases page.
A summary of the software used and the security methods employed for the user accounts at the
RPs is available in the Summary Table on that page. We are planning to set up a meeting with
Victor and Nancy in the near future to discuss community account management and
implementation. The goal is to establish standards or commonly accepted positions relating to
community accounts (for example, that a community account cannot have an interactive shell
session on production machines).
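The audit of production machines for known fraudulent SSH keys can be sketched as a scan of authorized_keys entries against a blacklist of key blobs; the key material below is fabricated placeholder text, not real keys:

```python
def find_fraudulent_keys(authorized_keys_text, bad_key_blobs):
    """Return the line numbers whose base64 key blob matches a known-bad key."""
    hits = []
    for lineno, line in enumerate(authorized_keys_text.splitlines(), start=1):
        fields = line.strip().split()
        if not fields or fields[0].startswith("#"):
            continue
        # The blob is the field after the key-type token (options may precede it).
        for i, field in enumerate(fields[:-1]):
            if field.startswith(("ssh-", "ecdsa-")):
                if fields[i + 1] in bad_key_blobs:
                    hits.append(lineno)
                break
    return hits

# Placeholder data: one legitimate key, one matching the blacklist.
keys = "ssh-rsa AAAAgood user@host\nssh-rsa AAAAbad attacker@evil\n"
bad = {"AAAAbad"}
```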
Accessing the community account information page: With the move of the TeraGrid webserver to
Liferay complete, the access control permissions for specific areas of the website needed to be
re-defined. The docs group contacted Jim Marsteller to ask who should have access to the science
gateway info page. Since this page is used to look up contact information for community
accounts, every member of the incident response team requires access to it. Jim supplied the docs
group with the names of everyone on the TG security contact list so they have access.
Vetted/Unvetted Process Review: The Core2 working group has been working on an
implementation document for vetted and unvetted TGUP accounts in order to move forward with
improving the account creation process in the TGUP. The security working group was asked to
review the document and provide comments. Jim sent Maytal the security working group's
comments in May. One of the major questions is whether the POPS reconciliation process
will still have a TG staff member vetting the authenticity of a user-generated account request.
The second issue relates to adding a user to an allocation: although the PI may "check off" on an
addition request electronically, this may create a situation where PIs approve additions
automatically without proper review. One suggestion is to have the PI specify the means of
validating the user's relationship to the project. In this case an error could cost the PI
allocations, so the PI should have an interest in reviewing additions.
8.8.2 Expanded TeraGrid Access<br />
Development to integrate Shibboleth/InCommon authentication with the TeraGrid User Portal<br />
(TGUP) was completed. Documentation and source code were provided to the TGUP team, and
work is proceeding to deploy the new functionality in the production TGUP. The GIG assisted<br />
PSC in successfully completing TAGPMA accreditation for their MyProxy CA, which now<br />
serves as the backup for TGUP authentication in the event the NCSA MyProxy CA is offline. The<br />
GIG also assisted the TGUP team in deploying the fail-over functionality between the NCSA and<br />
PSC MyProxy servers.<br />
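The fail-over between the NCSA and PSC MyProxy servers follows a simple try-in-order pattern; a generic sketch in which the server names are illustrative and `fetch` stands in for the actual MyProxy retrieval call:

```python
def get_credential(servers, fetch):
    """Try each MyProxy server in order; return the first credential
    obtained, raising the last error only if every server fails."""
    last_error = None
    for server in servers:
        try:
            return fetch(server)
        except Exception as exc:
            last_error = exc
    raise last_error

# Stubbed retrieval: the primary is down, the backup answers.
def fake_fetch(server):
    if server == "myproxy.ncsa.example":
        raise ConnectionError("primary offline")
    return "credential-from-" + server

cred = get_credential(["myproxy.ncsa.example", "myproxy.psc.example"], fake_fetch)
```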
8.9 RP Operations: Security<br />
During this quarter, TG RPs provided expert staff in support of local and TG-wide collaborative<br />
security. TG RP local security operations include intrusion detection, examining log data,<br />
investigating unusual use patterns, and incident response for all hardware and networks on a 24-<br />
hour on-call basis. All TG RP partners are required to participate in the TG-wide Security<br />
Working Group which coordinates security operations and incident response across the project.<br />
8.9.1 IU<br />
[FutureGrid] A VM running on the India cluster was broken into due to a weak root password.<br />
The machine was taken down and, because of the stateless nature of the image, it did not need to
be cleaned. The password has since been changed to a stronger one.
8.9.2 LONI<br />
There were no security incidents this quarter.<br />
8.9.3 NCAR<br />
NCAR disabled five accounts that had been implicated in three different incidents at other RPs.<br />
None of these accounts had been used for unauthorized access, and no compromised ssh keys<br />
needed to be removed from our systems.<br />
8.9.4 NCSA<br />
The NCSA security team responded to three incidents involving TeraGrid users or<br />
resources in the second quarter of 2010. All three incidents involved remotely<br />
compromised user accounts, and each was detected by NCSA&#8217;s existing security<br />
monitoring. In two of the intrusions the attackers were able to escalate to root<br />
privileges on the affected machines. The downtime for the machines in each incident was less than an<br />
hour. NCSA worked with the users in each incident to get their remote systems cleaned and access<br />
back to NCSA re-enabled within a day.<br />
8.9.5 ORNL<br />
During the second quarter of calendar 2010, there were no root compromises on the NSTG<br />
computing nodes. There were no users who lost control of their credentials from activities on<br />
NSTG machines. TeraGrid users who were the victims of stolen credentials at other TeraGrid<br />
sites did not use those credentials to log on to NSTG machines. NSTG staff took timely action to<br />
deactivate the accounts of users whose credentials were stolen or may have<br />
been exposed during a known incident. During the second quarter, the Tripwire footprint was<br />
expanded to monitor hashes for a larger set of files on NSTG and ESG (at ORNL) machines.<br />
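File-integrity monitoring of this kind reduces to comparing stored digests against recomputed ones. The following is a minimal, generic sketch of that comparison, not Tripwire's actual mechanism or the NSTG/ESG file set:

```python
import hashlib
from pathlib import Path

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def changed_files(baseline):
    """Given a {path: stored_digest} baseline, list paths whose hash differs."""
    return [p for p, digest in baseline.items() if hash_file(p) != digest]
```

A monitoring run simply rebuilds the digest for each baselined path and reports any mismatch as a possible tampering event.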
8.9.6 PSC<br />
PSC had no security incidents during the quarter.<br />
The PSC website now supports an automated password reset function. It is based on the TeraGrid<br />
password reset model that the Security working group developed. It also enforces new password<br />
“strength” requirements (see http://www.psc.edu/general/policies/passwords/), which are the<br />
same as the ones that Carnegie Mellon University is adopting for InCommon membership.<br />
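A password-strength check of the kind described can be expressed as a list of rules, each paired with a human-readable requirement. The specific rules below are illustrative assumptions; the actual PSC/InCommon requirements are published at the policy URL above and may differ.

```python
import re

# Illustrative strength rules; the real PSC/InCommon requirements may differ.
RULES = [
    (lambda p: len(p) >= 8,                   "at least 8 characters"),
    (lambda p: re.search(r"[a-z]", p),        "a lowercase letter"),
    (lambda p: re.search(r"[A-Z]", p),        "an uppercase letter"),
    (lambda p: re.search(r"\d", p),           "a digit"),
    (lambda p: re.search(r"[^A-Za-z0-9]", p), "a punctuation character"),
]

def check_password(password):
    """Return the list of unmet requirements; an empty list means acceptable."""
    return [msg for ok, msg in RULES if not ok(password)]
```

Returning the full list of unmet requirements, rather than a single pass/fail flag, lets a reset form tell the user exactly what is missing.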
8.9.7 Purdue<br />
During Q2, the Purdue RP disabled 10 TeraGrid accounts on the Purdue TG resources due to<br />
compromises at other RP sites.<br />
8.9.8 SDSC<br />
SDSC has had zero major incidents, zero serious incidents, and one minor incident this quarter.<br />
The minor incident consisted of a user account that was compromised at another site. The account<br />
was suspended at SDSC to prevent possible misuse. No unauthorized activity was observed prior<br />
to locking the account.<br />
8.9.9 TACC<br />
Within the reporting period, four accounts were locked out at TACC due to a compromise on a<br />
non-TACC system that occurred on April 16, 2010, including the second security incident involving a<br />
specific community account.<br />
Also, TACC staff discovered a side effect related to the qb4 compromise on a non-TACC system<br />
that occurred on April 30, 2010. A username and password were harvested from another system<br />
located at the site of the compromised system. The most likely attack vector: two user accounts<br />
known to have been compromised during the qb4 incident had sudo access to the compromised system.<br />
8.10 Common User Environment<br />
The CUE working group has been finalizing the implementation of the Common User Environment<br />
Management System (CUEMS) and the Common User Environment Variable Collection (CUEVC). The<br />
"modules" program has been installed on all machines throughout the TeraGrid, and a common<br />
set of modules has been defined to provide users with an interface that is similar across the<br />
resources. Members of the working group have been coordinating with RP systems personnel on<br />
implementation, which is defined in a document outlining the components necessary<br />
for compliance with the CUE. Mechanisms were put into place to allow<br />
users to opt into the CUE by making modules their default EMS and allowing the CUE modules<br />
to be loaded automatically. Working with the Documentation group, a new document describing<br />
the use of modules has been drafted, with clear instructions on how to activate the CUE.<br />
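A compliance check along these lines could be as simple as verifying that each resource defines the agreed-upon environment variables. The `CUE_*` names below are hypothetical stand-ins; the actual collection is specified in the working group's CUEVC document.

```python
import os

# Hypothetical CUE variable names; the real collection is defined by the
# CUE working group's CUEVC document.
REQUIRED_VARS = ["CUE_HOME", "CUE_SCRATCH", "CUE_MODULEPATH"]

def missing_cue_vars(environ=os.environ):
    """Return the required CUE variables absent from the given environment."""
    return [v for v in REQUIRED_VARS if v not in environ]
```

Run on a login node, an empty result would indicate the resource exposes the common variable set; a non-empty result names what an RP still needs to define.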
8.11 System Performance Metrics<br />
8.11.1 Resource Usage Summary<br />
TeraGrid saw further increases in delivered NUs. TeraGrid compute resources delivered<br />
12.1 billion NUs to users, 0.3% as TeraGrid Roaming NUs (Figure 8-9). Roaming NUs<br />
continued to decline as that allocation feature is phased out. The 12.1 billion NUs represent a<br />
20% increase over the NUs delivered in Q1 2010. Nearly 100% of those NUs were delivered to<br />
allocated users. The Q2 2010 NUs delivered are 1.7x the NUs delivered in Q2 2009.<br />
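The growth factors quoted above imply the following prior-quarter baselines (a simple back-calculation, rounded to one decimal place):

```python
# Back-calculate prior-quarter totals from the growth factors in the text.
q2_2010 = 12.1e9           # NUs delivered in Q2 2010
q1_2010 = q2_2010 / 1.20   # Q2 2010 was a 20% increase over Q1 2010
q2_2009 = q2_2010 / 1.7    # Q2 2010 delivered 1.7x the NUs of Q2 2009

print(round(q1_2010 / 1e9, 1))  # about 10.1 billion NUs in Q1 2010
print(round(q2_2009 / 1e9, 1))  # about 7.1 billion NUs in Q2 2009
```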
Figure 8-9. TeraGrid Computational Resource Usage<br />
8.11.2 Summary of Usage / Users / Allocations by Discipline<br />
Figure 8-10 shows the relative fraction of PIs, current accounts, active users (individuals charging<br />
jobs), allocations, and NUs used according to discipline. The 10 disciplines with greater than 2%<br />
of NUs used are listed in the figure. Staff usage is excluded, and users that appear on more than<br />
one allocated project may be counted more than once, if those projects are associated with<br />
different fields of science. Consistent with past quarters, the same disciplines dominate the NUs<br />
used column, though there have been some changes in the ordering. Astronomical Sciences ranks<br />
second, and Chemical and Thermal Systems ranks 8th. As in past quarters, more than 20% of PIs<br />
and current accounts, as well as 15% of active users, are associated with the 20 “other”<br />
disciplines, collectively representing about 2% of total quarterly usage.<br />
Figure 8-10. TeraGrid Usage Summary by Discipline<br />
8.11.3 Summary of Top PIs / Projects, Overall and by Discipline<br />
882 different projects charged usage on TeraGrid computational resources, an increase of almost<br />
10% over Q1 2010. Figure 8-11 shows the top 20 PIs by total usage in NUs. The top 20 PIs<br />
consumed 52% of the NUs used; the remaining 862 projects consumed the other 48%. Table<br />
8-1 shows the top PIs (>2% of the discipline’s usage) in each discipline with more than 2% of<br />
total usage; the final section shows the top PIs for all other disciplines. The overall rank of each<br />
PI among the 882 projects charged is also listed.<br />
Figure 8-11. Top 20 TeraGrid PIs, Q2 2010.<br />
Table 8-1. Top PIs by Discipline for Q2 2010<br />
Physics - 2,613,313,083<br />
PI, Institution NUs Rank<br />
Colin Morningstar, CMU 842,401,191 2<br />
Robert Sugar, UC Santa Barbara 448,720,301 4<br />
Erik Schnetter, Louisiana State U 420,618,820 6<br />
Klaus Bartschat, Drake U 128,603,597 16<br />
Keh-Fei Liu, U Kentucky 123,841,843 18<br />
Mathieu Luisier, Purdue U 109,675,314 23<br />
Konstantinos Orginos, College of Wm & Mary 90,603,726 28<br />
Stanislav Boldyrev, U Wisconsin-Madison 62,943,681 40<br />
Cary Forest, U Wisconsin-Madison 55,752,242 44<br />
George Fleming, Yale U 52,579,520 49<br />
All 53 Others 459,011,257<br />
Astronomical Sciences - 2,422,240,376<br />
PI, Institution NUs Rank<br />
Juri Toomre, U Colorado 435,129,692 5<br />
Tiziana Di Matteo, CMU 297,016,473 8<br />
Robert Harkness, UC San Diego 214,062,477 11<br />
Roger Blandford, Stanford U 137,160,363 15<br />
Adam Burrows, Princeton U 128,001,206 17<br />
Michael Norman, UC San Diego 115,572,718 22<br />
Thomas Quinn, U Washington 92,977,957 27<br />
Andreas Berlind, Vanderbilt U 68,804,123 34<br />
Renyue Cen, Princeton U 61,833,758 41<br />
Dean Townsley, U Alabama 61,322,982 42<br />
Bronson Messer, U Tennessee 54,935,993 45<br />
Richard I. Klein, UC Berkeley 53,524,110 47<br />
Paul Shapiro, UT Austin 50,376,033 50<br />
All 57 Others 470,118,205<br />
Molecular Biosciences - 2,229,651,467<br />
PI, Institution NUs Rank<br />
Klaus Schulten, UIUC 367,290,273 7<br />
Michael Klein, Temple U 232,303,319 10<br />
Gregg Beckham, NREL 160,429,658 12<br />
Aleksei Aksimentiev, UIUC 149,745,363 14<br />
Emad Tajkhorshid, UIUC 118,522,530 21<br />
Thomas Cheatham, U Utah 90,237,671 29<br />
Carlos Simmerling, SUNY Stony Brook 73,069,966 30<br />
Wonpil Im, U Kansas 71,903,651 31<br />
Cheng Zhu, Georgia Tech 65,427,129 38<br />
Michael Feig, Michigan State U 63,094,958 39<br />
Stephen White, UC Irvine 52,669,275 48<br />
All 148 Others 784,189,969<br />
Earth Sciences - 1,069,997,914<br />
PI, Institution NUs Rank<br />
Thomas Jordan, USC 912,466,883 1<br />
Omar Ghattas, UT Austin 42,853,665 61<br />
Burkhard Militzer, UC Berkeley 30,272,571 70<br />
Magali Billen, UC Davis 23,085,626 88<br />
All 20 Others 61,319,169<br />
Atmospheric Sciences - 1,016,010,959<br />
PI, Institution NUs Rank<br />
Homayoun Karimabadi, UC San Diego 613,303,983 3<br />
Ming Xue, U Oklahoma 157,263,205 13<br />
Fuqing Zhang, Penn State U 67,526,309 37<br />
Nikolai Pogorelov, U Alabama, Huntsville 58,627,158 43<br />
James Kinter, COLA 24,906,577 82<br />
All 39 Others 94,551,366<br />
Materials Research - 725,105,565<br />
PI, Institution NUs Rank<br />
Chris Van de Walle, UC Santa Barbara 70,902,277 33<br />
Stefano Curtarolo, Duke U 48,035,319 52<br />
Peter Cummings, Vanderbilt U 44,419,815 57<br />
Giulia Galli, UC Davis 44,307,422 58<br />
Lubos Mitas, NC State U 44,223,953 59<br />
Julia Medvedeva, Missouri S&T 30,467,783 69<br />
Marvin L. Cohen, UC Berkeley 30,130,898 71<br />
James R. Chelikowsky, UT Austin 28,942,090 74<br />
Dallas Trinkle, UIUC 28,784,357 75<br />
Stephen Paddison, U Tennessee 26,331,592 79<br />
Bhagawan Sahu, UT Austin 24,551,679 84<br />
John D. Joannopoulos, MIT 23,085,547 89<br />
Gautam Ghosh, Northwestern U 21,721,675 96<br />
Ivan Oleynik, U South Florida 18,055,539 110<br />
Sergey Barabash, UCLA 17,023,083 115<br />
Dane Morgan, U Wisconsin-Madison 16,850,475 116<br />
Alexie Kolpak, Yale U 16,290,849 118<br />
Axel Vandewalle, Caltech 14,808,563 123<br />
All 87 Others 178,200,151<br />
Chemistry - 631,769,408<br />
PI, Institution NUs Rank<br />
Gregory A. Voth, U Chicago 289,850,125 9<br />
Guillermina Estiu, U Notre Dame 71,733,514 32<br />
Enrique Batista, LANL 23,771,817 86<br />
Arun Yethiraj, U Wisconsin-Madison 14,459,419 125<br />
Rommie Amaro, UC Irvine 14,348,026 126<br />
All 136 Others 216,423,081<br />
Chemical, Thermal Systems - 627,882,647<br />
PI, Institution NUs Rank<br />
Madhusudan Pai, Stanford U 123,140,649 19<br />
Kenneth Jansen, RPI 93,190,694 26<br />
Daniel Bodony, UIUC 68,332,121 35<br />
Rajat Mittal, Johns Hopkins U 49,668,491 51<br />
Cyrus K. Aidun, Georgia Tech 24,787,473 83<br />
Olivier Desjardins, U Colorado 23,016,499 90<br />
Andreas Heyden, U South Carolina 22,171,412 92<br />
George E. Karniadakis, Brown U 19,384,553 105<br />
Pui-kuen Yeung, Georgia Tech 18,636,476 107<br />
John Kim, UCLA 17,184,641 114<br />
Krishnan Mahesh, U Minn 15,917,735 121<br />
Stephen Pope, Cornell U 14,277,061 127<br />
All 66 Others 138,308,990<br />
Advanced Scientific Computing - 339,387,474<br />
PI, Institution NUs Rank<br />
Martin Berzins, U Utah 107,137,049 24<br />
Ali Uzun, Florida State U 103,536,693 25<br />
Tonny Oyana, SIU Carbondale 67,983,386 36<br />
Xiaowen Wang, UCLA 12,081,320 139<br />
Richard Loft, NCAR 10,287,221 155<br />
David LeBard, Arizona State U 8,958,801 176<br />
Bilge Yildiz, MIT 7,282,956 191<br />
Celso Mendes, UIUC 6,788,128 203<br />
All 34 Others 15,331,920<br />
Mathematical Sciences - 197,054,478<br />
PI, Institution NUs Rank<br />
Clinton N. Dawson, UT Austin 122,336,390 20<br />
Daniel Bernstein, U Illinois, Chicago 54,444,478 46<br />
Nicholas Zabaras, Cornell U 6,021,641 218<br />
All 21 Others 14,251,955<br />
All Others - 207,867,295<br />
PI, Institution (Discipline) NUs Rank<br />
Mark Miller, SDSC (Environmental Biology) 34,817,821 68<br />
Teresa Chereskin, SIO (Ocean Sciences) 22,671,992 91<br />
Tony Keaveny, UC Berkeley (Biological and Critical<br />
Systems) 20,097,602 103<br />
Huajian Gao, Brown U (Mechanical and Structural Systems) 13,443,461 133<br />
Server Yilmaz, U Pittsburgh (Science and Engineering<br />
Education) 9,162,671 172<br />
Shaowen Wang, UIUC (Social and Economic Science) 8,447,801 183<br />
Dan Stanzione, UT Austin (Environmental Biology) 6,994,064 197<br />
Pranav Shrotriya, Iowa State U (Mechanical and Structural<br />
Systems) 6,543,656 210<br />
Christian Lastoskie, U Michigan (Biological and Critical<br />
Systems) 5,490,875 229<br />
James McWilliams, UCLA (Ocean Sciences) 5,324,979 232<br />
Markus Buehler, MIT (Mechanical and Structural Systems) 4,818,131 241<br />
Kim Dillman, Purdue U (Cross-Disciplinary Activities) 4,790,619 244<br />
Jun Song, Brown U (Mechanical and Structural Systems) 4,765,960 246<br />
All 119 Others 60,085,396<br />
8.11.4 Usage by Institution<br />
Users from 316 institutions (up from 305 in Q1 2010) across the U.S. and abroad were associated<br />
with the usage delivered by TeraGrid resources. The top 47 institutions (by NUs consumed) were<br />
responsible for 80% of the total NUs delivered. Table 8-2 lists the 316 institutions, the number of<br />
users responsible for the usage at each site, and the NUs consumed.<br />
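The top-47-institutions-cover-80% figure is a cumulative-share computation of the following kind (the usage numbers in the example are toy values, not the Q2 2010 data):

```python
def institutions_for_share(nus, target=0.80):
    """Smallest count of top institutions (by NUs) whose combined usage
    reaches the target share of the total."""
    total = sum(nus)
    running = 0
    for count, usage in enumerate(sorted(nus, reverse=True), start=1):
        running += usage
        if running / total >= target:
            return count
    return len(nus)

# With toy usage figures, the top 2 of 5 institutions cover 80% of the total:
print(institutions_for_share([100, 500, 40, 300, 60]))  # 2
```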
Table 8-2. TeraGrid Usage by Institution, Q2 2010<br />
Institution Users NUs<br />
UIUC 216 901,704,211<br />
CMU 43 797,379,040<br />
UC San Diego 56 630,643,099<br />
LANL 7 607,263,917<br />
U Utah 34 487,224,754<br />
SDSC 11 471,986,218<br />
Jefferson Lab 2 469,628,380<br />
U Arizona 7 444,016,918<br />
U Colorado 52 338,540,787<br />
NCAR 12 272,519,944<br />
U Wisconsin-Madison 35 197,349,398<br />
Stanford U 36 195,219,737<br />
Princeton U 9 179,483,989<br />
Temple U 9 176,903,419<br />
U Oklahoma 33 165,266,150<br />
U Washington 51 162,434,751<br />
Georgia Tech 68 157,574,223<br />
NREL 9 156,631,958<br />
UC Berkeley 45 143,887,943<br />
UT Austin 41 143,312,473<br />
Purdue U 47 137,721,649<br />
Caltech 18 117,216,799<br />
SUNY Stony Brook 10 114,656,664<br />
Duke U 5 113,885,534<br />
UC Davis 37 113,418,320<br />
U Maryland, College Park 26 110,989,827<br />
MIT 56 110,964,628<br />
U Wyoming 6 109,111,424<br />
UCLA 43 108,339,265<br />
Florida State U 6 107,115,316<br />
College of Wm & Mary 5 105,473,430<br />
U Notre Dame 11 97,444,719<br />
MPI Gravitationsphysik<br />
(Germany) 3 96,237,353<br />
RPI 14 94,953,661<br />
UC Santa Barbara 33 92,852,016<br />
NC State U 24 92,731,546<br />
Drake U 4 90,582,365<br />
NYU 14 89,248,370<br />
Penn State U 16 88,022,414<br />
Vanderbilt U 16 84,995,576<br />
Louisiana State U 26 84,259,899<br />
U Kansas 13 72,766,013<br />
UC Irvine 20 72,403,341<br />
SIU Carbondale 5 68,021,934<br />
Yale U 9 66,690,691<br />
Cornell U 45 65,544,695<br />
Michigan State U 12 63,568,384<br />
U Illinois, Chicago 9 63,453,682<br />
U Chicago 31 61,604,754<br />
RIT 7 59,895,362<br />
U Kentucky 7 58,963,374<br />
Harvard U 15 58,774,850<br />
U Alabama, Huntsville 2 58,627,158<br />
U Rochester 5 58,616,755<br />
U Tennessee 9 56,078,910<br />
Colorado Sch of Mines 3 55,310,241<br />
MPI Plasmaphysik<br />
(Germany) 1 54,675,602<br />
U Virginia 15 52,824,012<br />
U Pennsylvania 10 52,119,075<br />
Johns Hopkins U 7 52,021,344<br />
Albert Einstein Institute<br />
(Germany) 6 50,313,712<br />
Florida Atlantic U 5 48,403,403<br />
Brown U 52 45,822,874<br />
Washington U 12 44,176,412<br />
U Pittsburgh 35 41,713,031<br />
NASA Goddard 2 41,118,432<br />
U Iowa 11 41,106,481<br />
Missouri S&T 10 40,117,869<br />
University of Sussex 2 40,035,946<br />
LLNL 3 33,271,697<br />
Columbia U 23 32,886,615<br />
SIO 3 32,543,567<br />
UNC 8 32,352,522<br />
Arizona State U 11 31,199,428<br />
U Toronto (Canada) 6 30,124,658<br />
Northwestern U 22 30,111,368<br />
U Michigan 20 29,583,351<br />
UC Santa Cruz 5 29,093,294<br />
U Minn 6 28,485,578<br />
Indiana U 10 27,377,290<br />
Texas A&M 4 26,803,770<br />
Drexel U 19 26,796,194<br />
U South Florida 13 26,647,804<br />
Rice U 9 26,071,989<br />
Mt Sinai Sch of Med 10 24,676,235<br />
ANL 4 24,494,718<br />
Weill Cornell Med College 4 24,443,709<br />
U South Carolina 6 23,692,574<br />
Univ of Vienna 1 23,255,638<br />
UNLV 8 23,206,201<br />
UC San Francisco 10 21,938,554<br />
U Heidelberg (Germany) 4 21,771,525<br />
U Alabama 2 19,773,826<br />
Tel Aviv U (Israel) 1 19,559,445<br />
Iowa State U 8 16,937,104<br />
Toyota Tech Inst, Chicago 5 16,271,963<br />
Bioprocessing Tech Inst<br />
(Singapore) 1 15,961,927<br />
U Florida 19 14,551,198<br />
U Richmond 1 14,279,732<br />
Lock Haven U 5 14,178,679<br />
Keldysh Inst of Applied<br />
Math (Russia) 1 14,059,645<br />
Brandeis U 14 13,834,860<br />
Boston U 18 13,569,041<br />
U Mass, Amherst 15 12,567,797<br />
U Maryland, Baltimore Co 14 11,457,514<br />
Duquesne U 5 10,961,027<br />
Wesleyan U 2 10,885,793<br />
USC 4 10,457,138<br />
NC Central U 2 9,953,865<br />
MPI Astrophysik (Germany) 2 9,944,249<br />
Oxford U (United Kingdom) 1 9,507,236<br />
UC Riverside 5 9,277,423<br />
Virginia Tech 7 9,221,329<br />
Georgetown U 10 9,220,039<br />
Auburn U 6 8,847,212<br />
U Jena (Germany) 2 8,845,034<br />
Rockefeller U 1 8,751,384<br />
St Louis U 16 8,249,773<br />
WPI 2 8,064,218<br />
U Delaware 5 7,973,918<br />
Long Island U 3 7,872,687<br />
CIW 5 7,613,867<br />
U Puerto Rico System 7 7,481,881<br />
College of Charleston 2 7,053,134<br />
Montana State U 5 6,929,795<br />
U Maryland, Baltimore 2 6,693,038<br />
U Texas Hlth Sci Ctr<br />
Houston 7 6,654,504<br />
U Connecticut 5 6,491,143<br />
Rutgers U 20 6,471,322<br />
U New Hampshire 7 6,392,655<br />
Chosun U (South Korea) 1 6,359,204<br />
USGS 2 6,355,677<br />
Perimeter Inst (Canada) 2 6,245,100<br />
University College, London<br />
(United Kingdom) 5 6,003,945<br />
Desert Res Inst 3 5,879,367<br />
Corning, Inc. 2 5,866,082<br />
La Jolla Bioeng Inst 3 5,856,424<br />
CENTRA - Inst Sup Tecnico<br />
(Portugal) 1 5,798,510<br />
UT San Antonio 1 5,583,566<br />
CWRU 1 5,553,287<br />
LBNL 2 5,413,040<br />
Indiana U, Kokomo 1 5,174,163<br />
U Tenn, Oak Ridge 1 5,168,864<br />
U Michigan, Flint 1 5,050,633<br />
Tulane University 4 4,949,138<br />
Colorado State U 12 4,853,494<br />
U Illes Balears (Spain) 1 4,513,554<br />
Princeton Plasma Physics<br />
Lab 2 4,339,410<br />
U Puerto Rico, Mayaguez 2 4,297,589<br />
U Houston 2 4,216,479<br />
U Puerto Rico, Rio Piedras 1 4,129,085<br />
NICS 2 4,085,546<br />
Institute for Advanced Study 2 4,083,681<br />
Florida A&M U 4 3,772,159<br />
Predictive Science, Inc. 3 3,674,712<br />
Clark U 3 3,584,226<br />
AMNH 6 3,457,463<br />
U North Texas 10 3,391,689<br />
San Diego State U 8 2,974,537<br />
Connecticut College 1 2,956,926<br />
UT Southwestern Med Ctr 2 2,853,814<br />
ND State U 2 2,818,084<br />
CSU-Long Beach 2 2,752,978<br />
NIST 1 2,723,312<br />
Atmospheric Tech Services<br />
Co. 1 2,723,284<br />
U New Mexico 5 2,702,950<br />
Lehigh U 3 2,500,322<br />
Kingsborough Comm<br />
College 1 2,483,868<br />
UT El Paso 22 2,394,797<br />
U Mass, Boston 1 2,369,654<br />
Truman State U 2 2,364,790<br />
Burnham Inst (a) 1 2,265,038<br />
Armstrong Atlantic State U 4 2,258,348<br />
Southwest Res Inst 1 2,161,587<br />
U Mass, Dartmouth 1 1,989,696<br />
SUNY at Buffalo 2 1,908,922<br />
Albany State U 4 1,901,445<br />
CERN 1 1,893,101<br />
U Sciences, Philadelphia 1 1,865,710<br />
Fisk U 2 1,744,073<br />
Applied Sci Research 1 1,660,617<br />
Hamilton College 4 1,654,928<br />
U Oregon 12 1,635,819<br />
Cold Spring Harbor Lab 2 1,565,451<br />
IIT 6 1,358,508<br />
University of Nebraska<br />
Medical Center 1 1,334,422<br />
PSC 2 1,285,900<br />
UT Med Branch 2 1,284,402<br />
Illinois State U 2 1,245,899<br />
Washington State U 3 1,198,799<br />
Michigan Tech 9 1,173,522<br />
ORNL 12 1,157,036<br />
George Mason U 3 1,117,224<br />
U Arkansas 4 1,110,040<br />
UNC System Office 1 1,013,605<br />
U Zurich (Switzerland) 2 944,599<br />
U Pittsburgh Sch of Med 2 933,782<br />
Natl Hellenic Res Fnd<br />
(Greece) 2 929,414<br />
Baylor College of Med 1 913,976<br />
Eastern Michigan U 1 903,040<br />
University of Wisconsin-Eau<br />
Claire 1 828,329<br />
U Georgia 8 799,927<br />
Texas Tech U 3 796,661<br />
Kansas State U 2 784,519<br />
IIMCB (Poland) 1 737,543<br />
CUNY City College 1 688,370<br />
Queens U, Belfast (United<br />
Kingdom) 1 610,980<br />
Howard U 1 602,172<br />
Ill. State Water Survey 2 592,707<br />
U Central Florida 3 580,310<br />
Natl Space Sci & Tech Ctr 1 579,722<br />
U North Dakota 15 572,679<br />
UC Merced 1 566,090<br />
Vorcat, Inc. 1 561,442<br />
Clark Atlanta University 1 558,075<br />
Ursinus College 2 554,428<br />
U Idaho 2 537,435<br />
University of Montana 1 532,591<br />
CSU-Fullerton 2 528,227<br />
Lamar University-Beaumont 5 490,808<br />
Southern Methodist U 2 476,482<br />
U Pisa (Italy) 1 466,001<br />
Wichita State U 5 437,593<br />
LSST Corp. 2 396,539<br />
Queensborough Comm<br />
College 2 373,340<br />
Ill. Geological Survey 1 368,019<br />
RENCI 3 359,237<br />
NJIT 1 352,720<br />
U Houston-Clear Lake 2 339,052<br />
U Hawaii, Honolulu 1 335,292<br />
Ohio U 2 298,041<br />
Marquette University 3 293,900<br />
UT Arlington 3 293,202<br />
Computational Sci & Eng 2 281,542<br />
Naval Postgrad School 1 272,686<br />
Vienna U of Technology<br />
(Austria) 1 228,740<br />
U Missouri, Columbia 1 215,562<br />
U Hawaii, Manoa 2 214,664<br />
US FDA 1 197,191<br />
Marmara U (Turkey) 1 184,930<br />
Active Site Dynamics, LLC 1 177,320<br />
Old Dominion U 1 173,543<br />
Clarkson U 1 170,212<br />
Jackson State U 1 167,647<br />
Virginia Commonwealth U 4 156,017<br />
Alfred University 1 153,794<br />
Wayne State U 2 146,367<br />
Rosalind Franklin U 1 139,526<br />
Macalester College 6 129,123<br />
U Missouri 2 122,992<br />
New College of Fla. 1 121,998<br />
Monell Chemical Senses Ctr 1 120,948<br />
Inst HEP (China) 1 118,193<br />
U Minn, Twin Cities 1 107,969<br />
Tufts U 7 95,870<br />
Marshall U 5 94,427<br />
UT System 1 86,360<br />
Climate Central 1 77,879<br />
POSTECH (South Korea) 1 64,389<br />
U Detroit Mercy 14 57,801<br />
SD State U 4 39,977<br />
U Hawaii Research Corp 1 39,127<br />
Technische Universitaet<br />
Ilmenau (Germany) 1 35,957<br />
U Canterbury (New Zealand) 1 34,679<br />
Shippensburg U 13 34,369<br />
U Cordoba (Spain) 1 34,199<br />
Marian College 1 32,522<br />
Francis Marion U 1 28,386<br />
Hope College 1 27,570<br />
Wilkes U 3 27,025<br />
CSU-San Bernardino 1 25,770<br />
BYU 1 23,485<br />
SUNY at Binghamton 4 21,359<br />
Westminster College of Salt<br />
Lake City 1 19,228<br />
United Technologies 1 13,800<br />
U New Orleans 1 11,356<br />
Syracuse U 1 10,540<br />
Boise State U 1 10,419<br />
Allegheny Gen Hosp 1 9,034<br />
Richard Stockton College of<br />
New Jersey 1 8,402<br />
Julich Supercomputing<br />
Centre (Germany) 1 7,456<br />
Kent State U 1 6,587<br />
Rose-Hulman Inst of Tech 1 6,303<br />
Widener U 2 6,287<br />
Slovak Acad of Sci<br />
(Slovakia) 2 6,085<br />
UT Permian Basin 1 3,726<br />
Washington U School of<br />
Med 1 3,650<br />
CUNY 1 3,012<br />
Monash U (Australia) 2 2,918<br />
U Houston-Downtown 1 2,596<br />
U Paul Sabatier (France) 1 2,351<br />
San Francisco State U 2 1,638<br />
Fond du Lac Tribal College 2 1,635<br />
Florida Intl U 2 1,056<br />
Central Michigan U 1 1,001<br />
Albany College of Pharmacy<br />
& Health Sci 2 993<br />
U Miami 2 779<br />
SUNY ESF 1 750<br />
Northeastern U 3 621<br />
Oregon State U 2 300<br />
U Wisconsin-Milwaukee 2 290<br />
George Washington U 1 210<br />
KEK Japan (Japan) 1 200<br />
St Vincent College 1 191<br />
National U 1 147<br />
Santa Clara U 2 54<br />
APS 1 38<br />
Centre College 1 29<br />
Wofford College 4 27<br />
Earlham College 1 21<br />
PNL 1 21<br />
U Mass, Lowell 1 20<br />
Indian Institute of Science<br />
(India) 1 19<br />
UNC, Asheville 1 3<br />
IFPRI 1 1<br />
Villanova U 1 0<br />
9 Evaluation, Monitoring, and Auditing<br />
9.1 Highlights<br />
This is a new section of the TG quarterly report, resulting from the first NSF XD awards for TIS<br />
(Technology Insertion Service) and TAS (Technology Audit Service). The section was<br />
developed with a flow in mind: from evaluation and recommendation of technologies<br />
(TIS), to assuring the quality of those technologies via monitoring (QA), and finally to the<br />
oversight and quality control of technologies (TAS).<br />
9.2 XD – TIS: Technology Insertion Services<br />
National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign<br />
John Towns, PI<br />
Program Start Date: 7/1/2010<br />
9.2.1 Technology Insertion Service Program Goals<br />
The purpose of XSEDE TIS (the eXtreme Science and Engineering Discovery Environment<br />
Technology Insertion Service), funded under the NSF TeraGrid Phase III: eXtreme Digital<br />
Resources for Science and Engineering program, is to execute comprehensive, expert<br />
technology evolution of the world’s most advanced collection of integrated digital services in<br />
scale and diversity, helping to maximize its impact and advance it over the project lifetime. The<br />
XSEDE partnership includes NCSA, NICS, PSC, TACC, and the University of Virginia. XSEDE<br />
will ensure that XD’s architecture, resources, and services remain at the forefront of technology<br />
and are reliable, usable, and offer maximum possible performance, thus enabling transformational<br />
science across domains, while advancing integrated distributed environments, high end<br />
computing, data management, and other technologies for computational science.<br />
There are two components to the XSEDE TIS – a Technology Evaluation component and a<br />
Usage, Satisfaction, and Impact Evaluation component.<br />
9.2.1.1 Technology Evaluation<br />
The Technology Evaluation team has the task of identifying hardware and software technologies<br />
that can help XD improve its operations and impact. We anticipate the CMS awardee will have a<br />
periodic deployment schedule in which this service will participate. In Yr1 we will participate<br />
with TeraGrid as appropriate based on knowledge of the technologies being deployed.<br />
The Technology Insertion team will develop and maintain an open, web-front-ended database of<br />
technology projects of potential value to XD sites and users known as XTED (XSEDE<br />
Technology Evaluation Database).<br />
A Technology Evaluation Laboratory will be operated to help ensure that any technology changes<br />
to the infrastructure are thoroughly evaluated and tested before being recommended for inclusion in<br />
the XD production environment. The laboratory will leverage the FutureGrid resources and<br />
environment for this testing. We will design all tests, coordinate their execution, evaluate and<br />
document their outcomes, and iterate until we can provide our recommendation regarding<br />
deployment.<br />
XSEDE will provide documentation of the processes and procedures for the evaluation of<br />
technologies, and will offer code consulting services to software developers to help promote code<br />
quality necessary for production usage. Specifically, proposed deployment objects will first be<br />
evaluated and tested on FutureGrid, in cooperation with their project team. The deployment<br />
objects that pass FutureGrid testing must then be tested by the CMS for interoperability and<br />
appropriateness in the defined environment. Finally, technologies deployed into XD must<br />
undergo deployment and acceptance testing on XD production resources. To enable this final<br />
phase, we will work with XD CMS and Service Providers to configure a suitable set of resources<br />
(node partitions and/or reserved time slots) so that production user impact is minimized.<br />
All technologies recommended for deployment by the Technology Insertion team will be<br />
documented, including fields that describe the roles, rationale, responsibilities, and usage context<br />
of each deployment object, as well as its specific deployment contexts, security requirements,<br />
dependencies, and the tests and measures of correctness and performance used to validate (audit) the<br />
deployments. XSEDE TIS will work closely with the XD management team to coordinate a<br />
deployment strategy team with members including system administrators, operators, site user<br />
consultants, and technical writers from each of the XD resource providers.<br />
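The documentation fields listed above could be captured in a simple record. The field set below mirrors the text; the record type and the concrete example values are hypothetical, not a defined XSEDE schema.

```python
from dataclasses import dataclass

@dataclass
class DeploymentRecord:
    """Documentation record for a recommended deployment object; the field
    set mirrors the text, and the example values below are hypothetical."""
    name: str
    roles: list
    rationale: str
    responsibilities: str
    usage_context: str
    deployment_contexts: list
    security_requirements: list
    dependencies: list
    validation_tests: list  # tests/measures of correctness and performance

rec = DeploymentRecord(
    name="example-transfer-service",   # hypothetical deployment object
    roles=["data transfer"],
    rationale="replaces an older release",
    responsibilities="RP admins deploy; TIS validates",
    usage_context="bulk file movement between sites",
    deployment_contexts=["login nodes"],
    security_requirements=["host certificate"],
    dependencies=["Globus Toolkit"],
    validation_tests=["transfer-rate benchmark", "interoperability test"],
)
```

Keeping each field explicit and required makes an incomplete record fail loudly at creation time, which matches the auditing intent described above.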
9.2.1.2 Usage, Satisfaction, and Impact Evaluation<br />
This activity is the assessment of user productivity by evaluation of the user environment, usage<br />
of code instrumentation and profiling tools, and collecting feedback on user satisfaction and<br />
impact. These data will be evaluated to determine if the vision and goals of XD are being met and<br />
to identify transformational science being accomplished due to technology insertion. Feedback<br />
will be derived from multiple sources including the annual User Survey and user focus groups.<br />
9.2.2 Progress Report<br />
In this first TIS quarterly progress report we discuss project staffing, communications, and initial<br />
progress. The reporting period is the three months prior to the award start date of July 1, so much<br />
of the activity was in preparation for the start of the award.<br />
9.2.3 Project Staffing<br />
The TIS management team consists of John Towns as the project PI with Tim Cockerill (NCSA)<br />
as project manager, Jay Boisseau (TACC) heading up the Technology Evaluation team, and<br />
Sergiu Sanielevici (PSC) leading the Usage, Satisfaction, and Impact Evaluation team. The<br />
XSEDE TIS team is staffed by members from NCSA, NICS, PSC, TACC, and the University of<br />
Virginia. During this reporting period (prior to the July 1 award date), we identified staff<br />
members at each site who will execute the project.<br />
9.2.4 Communications<br />
A meeting schedule was developed by the management team, and initial organizational meetings were<br />
held. Full team meetings are initially weekly, with biweekly management meetings. Email lists<br />
were set up for the different areas of the project. The team also set up a wiki space for internal<br />
project communications.<br />
9.2.5 Initial progress<br />
Early deliverables include the XTED, a technology evaluation process, and user focus group<br />
meetings. The XTED group is in the early design phase – gathering requirements for database<br />
content, expected output, and user interface needs. The technology evaluation process group is in<br />
the early stages of developing a procedure including technology selection, metrics, evaluation,<br />
and reporting. The usage, satisfaction, and impact evaluation group did some preliminary data<br />
mining of the TG ticket system data and user surveys to identify the first user focus groups.<br />
9.3 Quality Assurance<br />
The QA group worked to test and debug the CTSS services that the group considered most<br />
relevant to users. After reviewing CTSS usage and reliability last quarter, QA decided to focus<br />
on GRAM and GridFTP testing and debugging. In particular, evaluating the upcoming GRAM5<br />
release was a high priority. The QA group collaborated with the Gateway working group in<br />
ongoing discussion of GRAM5 debugging experiences and participated in the TeraGrid user<br />
portal forum thread related to GRAM5 debugging. QA also monitored the activities of others<br />
involved in the GRAM5 release – mainly the Software and Gig-pack working groups. The QA<br />
group gathered its own early GRAM5 test results using Inca and compared them to results from<br />
other Gateway testing (the nanoHUB grid probes). While reviewing test errors the QA group<br />
discussed how errors are logged and subsequently submitted requests for improved logging to<br />
Globus developers. The GRAM5 test sources yielded different information – although the Inca<br />
testing was mostly successful because it was fairly lightweight, it did indicate a problem with<br />
lingering zombie processes that has since been addressed. The nanoHUB testing was successful<br />
overall on Lonestar but was mostly failing on Queenbee. The Queenbee issues are still being<br />
debugged and led to the administrator upgrading the GRAM5 software. Some of the differences<br />
between the Inca and nanoHUB testing are that Inca tests are submitted to the local GRAM<br />
service while nanoHUB submits them remotely, and nanoHUB doesn't take downtimes into<br />
account as Inca does. The QA group is currently beginning to leverage some of the previous<br />
Gateway GRAM scalability testing to run tests on a Ranger test node, where a GRAM5<br />
installation was deployed for QA and is also now being used for Gateway testing. As the Ranger<br />
node will soon become a production login node, the QA group is investigating the possibility of<br />
using FutureGrid to continue testing.<br />
In addition to the GRAM5 testing, the QA group created new tests for Condor-G matchmaking
and Karnak. Tests for Condor and the usage reporter for GRAM were also updated. The
group continued to monitor the CUE working group’s progress on its upcoming module release<br />
and will test CUE deployments when available. Also, the QA group recommended that the<br />
QBETS service be labeled as experimental in the user portal and documentation due to<br />
intermittent failures experienced by users and a lack of available tests to detect failures.<br />
Finally, the QA group continued work to create a sustainable process for collecting and analyzing<br />
usage data in order to determine which services are most important to users. A member of the<br />
group wrote a script to automate the reporting of CTSS process accounting data, and the group
created an informational wiki article explaining the process:
http://www.teragridforum.org/mediawiki/index.php?title=QA_Process_Accounting. Efforts to
further automate this process and to generate consistent and meaningful reports are ongoing.<br />
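The accounting workflow described above might be sketched as follows; the record layout and the 'component' field name are illustrative assumptions, not the actual CTSS accounting format.

```python
import csv
from collections import Counter

def summarize_usage(path):
    """Tally invocations per CTSS component from a process-accounting log.

    Assumes a simple CSV layout with a 'component' column; the real CTSS
    accounting records may be structured differently.
    """
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["component"]] += 1
    # Most-used services first, matching the goal of identifying which
    # services matter most to users.
    return counts.most_common()
```

Feeding such a per-component summary into a periodic report generator is the kind of further automation the group describes as ongoing.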
9.4 XD – TAS: Technology Audit Services for TeraGrid<br />
Center for Computational Research, University at Buffalo<br />
Thomas R. Furlani, PI<br />
Program Start Date: 7/1/2010<br />
9.4.1 Technology Audit Services Program Goals<br />
The Technology Audit Service (TAS) will provide both quality assurance and quality control for
TG (XD) to help maximize the positive impact of the infrastructure investment on science and
engineering research. Coupled with a strong user needs and usability analysis program, TAS will<br />
be leveraged to help ensure that users’ needs are met, with particular emphasis on improving the<br />
effectiveness of major users, bringing novice users up to speed rapidly and attracting nontraditional<br />
users. By identifying user needs early and providing the tools and information that users
require, TAS will allow them to utilize TG resources more efficiently and effectively. TAS will
also provide necessary inputs to other stakeholders, namely NSF, resource providers (RPs), and
the TG user community represented by the Science Advisory Board, all of whom have an
interest in the optimal usage of TG (XD) resources. TAS will also strive to rapidly provide
quantitative and qualitative metrics of performance to all of these stakeholders.
9.4.2 Progress Report
In this first TAS quarterly progress report we discuss project staffing, infrastructure put into place<br />
to facilitate communication and documentation within the group, organizational meetings, and
initial technical progress.
9.4.3 Project Staffing<br />
• University at Buffalo: We are pleased to report that four professionals were hired and all<br />
have started work on the program. The new hires include: Mr. Ryan J. Gentner, a scientific
programmer who will work on development of the XDMoD (Extreme Digital Metrics on
Demand) portal; Dr. Charng-Da Lu, a computational scientist who will work on application
kernel development; Mr. Amin Ghadersohi, a scientific programmer who will work on the
XDMoD portal development and the XDMoD Data Warehouse; and Dr. Robert L. DeLeon,
who will act as the TAS Program Manager. In addition to the new hires, existing CCR
staff members, who will have major programmatic responsibilities, have begun work on<br />
the TAS program. These include Dr. Thomas R. Furlani, who will act as the PI and
oversee the entire TAS program; Dr. Matthew D. Jones, who will oversee the technical
aspects of the program as well as the application kernel development; Mr. Steven M.
Gallo, who will lead development and implementation of the XDMoD portal and<br />
backend infrastructure; and Mr. Andrew Bruno, developer of the UB Metrics on Demand<br />
(UBMoD) portal, who will play a central role in development of the XDMoD portal.<br />
• Indiana University: The Indiana University subcontract will be led by Dr. Gregor von
Laszewski and Dr. Lizhe Wang. Initial efforts include design and implementation of the
custom report builder and support for the XDMoD design.<br />
• University of Michigan: The University of Michigan subcontract effort will be led by
Dr. Thomas Finholt. Dr. Finholt will be responsible for determining user assessment of<br />
XDMoD and obtaining critical user feedback to optimize XDMoD to best meet user<br />
needs.<br />
9.4.4 Organizational Meetings and Events<br />
Weekly meetings of the TAS Working Group were held. The TAS Working Group includes:<br />
• University at Buffalo: Dr. Thomas R. Furlani, PI, Dr. Matthew D. Jones (Co-PI and<br />
Technical Project Manager), Mr. Steven Gallo (lead software engineer), Mr. Andrew<br />
Bruno (scientific programmer), Dr. Charng-Da Lu (computational scientist), Mr. Ryan<br />
Gentner (scientific programmer), Mr. Amin Ghadersohi (scientific programmer), and Dr.<br />
Robert L. DeLeon (Project Manager)<br />
• Indiana University: Dr. Gregor von Laszewski and Dr. Lizhe Wang
• University of Michigan: Dr. Thomas Finholt.<br />
9.4.5 Coordination with External Projects<br />
Through Indiana University, we have conducted several initial outreach activities to identify
opportunities for collaboration and leveraging. As part of this effort, we are working closely
with the NSF-funded FutureGrid project to leverage its work on Single Sign-On and
integration with InCommon. We will integrate these efforts with our own requirements and
collaboratively explore this path. In addition, we have contacted the recently funded NSF
OGCE project to evaluate potential collaboration on leveraging gadgets and their containers.
These are ongoing activities and will continue for the foreseeable future.
9.4.6 Technical Progress<br />
• BASECAMP Project Management System and TAS Wiki: In order to facilitate
communication within the TAS team and provide a repository for project documents,
we are leveraging the BASECAMP project management software. BASECAMP is a web-based
project management system that facilitates communication and collaboration to
allow a team to work effectively on multidisciplinary projects. It allows us to post files<br />
for mutual use, send messages, track milestones, assign tasks, and communicate via<br />
write-boards. It has already proven to be a very useful tool in supporting communication<br />
between team members and keeping the TAS program moving forward. All project<br />
personnel, including the sub-contract awardees, are using BASECAMP. An internal TAS<br />
wiki has been set up in addition to BASECAMP to facilitate organization and sharing of
information.<br />
• XDMoD User Interface: A flexible web-application shell for the Extreme Digital Metrics
on Demand (XDMoD) interface is being finalized and will serve as the User Interface
(UI) for the XDMoD prototype. The shell will implement a session-based authentication<br />
mechanism, role-based user authorization, extendable user profiles, and a user<br />
administration interface. A content template system will allow customized content to be<br />
delivered to users based on their role within XDMoD (e.g., Program Officer, Center<br />
Director, PI, User, etc.). We have begun the initial user interface designs for the<br />
prototype. We have finished a sitemap diagram of all the pages within the application and
three wireframe designs, which will constitute the basic look and feel of the application.
These wireframes will help drive both the development and the UI design of the site and
provide a concrete specification document for future changes to the application and user
interface. The wireframe diagrams include layouts for the general TeraGrid User as well
as the Program Officer. The user administration tools for local user accounts (i.e., user<br />
accounts not associated with a TeraGrid account) and the basic functionality pertaining to<br />
user management are complete. Local user creation includes the assignment of one or more
portal roles, including identification of the user’s primary role, which is used in
determining the initial view of the data that will be presented to the user upon login.<br />
Portal accounts are verified via a confirmation email containing an activation URL that is
sent to the user. The portal administrator has the ability to manage role assignments, user<br />
profile information, and enable/disable accounts. In the case of a forgotten password, a
facility is provided to reset a user’s password.<br />
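The role-driven initial view described above could be selected along these lines; the role names come from the text, but the view names and the function itself are hypothetical illustrations, not the actual XDMoD code.

```python
# Roles named in the portal design; the associated default views are
# illustrative placeholders, not actual XDMoD pages.
DEFAULT_VIEWS = {
    "program_officer": "program_summary",
    "center_director": "center_summary",
    "pi": "allocation_summary",
    "user": "my_usage",
}

def initial_view(roles, primary_role):
    """Pick the landing view from the user's primary role, falling back
    to any other assigned role, then to the generic user view."""
    if primary_role in DEFAULT_VIEWS:
        return DEFAULT_VIEWS[primary_role]
    for role in roles:
        if role in DEFAULT_VIEWS:
            return DEFAULT_VIEWS[role]
    return DEFAULT_VIEWS["user"]
```

The fallback chain reflects the design decision that every account carries a primary role but may hold several roles at once.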
An administrative interface to XDMoD has been developed that will provide developers<br />
with the necessary functionality for creating local users and mapping those users to usage<br />
data for actual TeraGrid users. This will simplify development and testing.<br />
Implementation of the portal wireframes is 80% complete, with the remainder to be
completed within the next reporting period. Initial versions of identified charts and<br />
visualizations generated with data from the TGCdb are currently being integrated into the<br />
portal. Figure 1 below is an example of an XDMoD screen displaying summary
information for a particular TG User. We are working with a graphic designer at UB to<br />
develop a look-and-feel for the portal. We presented a prototype of the new
interface at TeraGrid ’10, the annual TG users meeting in Pittsburgh (August 2-5, 2010).
Figure 1. Example XDMoD interface.<br />
• XDMoD Data Warehouse: We have begun exploration of the data contained in the<br />
TeraGrid Central Database (TGCdb). To date, queries have been implemented to explore<br />
user, organization, allocation, resource, and job information. Design work has begun on
an abstraction layer for querying and extracting data from the TGCdb. In order to<br />
provide rapid visualizations of TGCdb data for the XDMoD prototype, the Google Maps<br />
and Chart APIs are being employed to develop a number of initial data views. These will<br />
display aggregate information geographically, with the ability to drill down to more
detailed information, as well as summary views such as “Job Distribution by
Organization.” In our continuing exploration of the data contained in the TeraGrid
Central Database (TGCdb), queries are being developed to feed the initial visualizations<br />
for the User and Program Officer roles. To date, functionality has been created for<br />
converting query results to JSON data structures and using this data to generate charts,<br />
images and maps using the various Google Chart APIs. We are addressing performance<br />
issues when querying the larger TGCdb tables (i.e., acct.jobs) and are examining ways to<br />
mitigate these issues and improve performance in a production system. Several sample
data visualizations have been created, including reports on usage, CPU consumption vs.
job size, total jobs run on each resource, and allocation reports. A set of classes
encapsulating the functionality for dynamically generating the queries is being built. We<br />
have begun to develop an in-house database to hold data ingested from various data<br />
sources (initially TGCdb) and support the construction of visualizations via the portal.<br />
Scripts have been written to ingest data for a targeted subset of TeraGrid users that will<br />
be used for testing purposes. Various methods for drawing charts have been investigated<br />
for maximum cross-browser compatibility. We have made contact with Rob Light (PSC)<br />
regarding mirroring of the TGCdb. Rob will be assisting us in identifying the tools<br />
required to mirror a copy of the TGCdb locally at CCR.<br />
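The query-results-to-JSON step described above can be illustrated as follows; the column names are hypothetical, while the cols/rows structure follows the Google Chart tools' documented DataTable JSON layout.

```python
import json

def rows_to_datatable(columns, rows):
    """Convert database query results into the JSON 'DataTable' structure
    consumed by the Google Chart tools.

    `columns` is a list of (id, type) pairs and `rows` a list of tuples;
    both are illustrative, not the actual XDMoD/TGCdb schema.
    """
    table = {
        "cols": [{"id": cid, "label": cid, "type": ctype}
                 for cid, ctype in columns],
        "rows": [{"c": [{"v": value} for value in row]} for row in rows],
    }
    return json.dumps(table)
```

For example, `rows_to_datatable([("org", "string"), ("jobs", "number")], [("SDSC", 120)])` yields a structure a Google chart can render directly.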
• TeraGrid Central Database (TGCdb) Findings/Limitations: In general, the TGCdb
records are very good; however, in the course of constructing early versions of XDMoD,
we discovered some areas where data is apparently missing and others
where there is room for added or improved record storage. Our initial assessment of the
TGCdb finds some inconsistency in batch job usage reporting. Two notable examples are<br />
that job exit status is not reported and not all computational resources report usage by<br />
processor core as well as by node. Indeed, reporting is not always consistent for jobs on<br />
an individual resource. We anticipate recommending a uniform taxonomy and more<br />
extensive collection of resource utilization statistics.<br />
• Monitoring Framework: The Inca User-Level Grid Monitoring tool has been installed
and configured locally at CCR and is currently deployed on our 2,112-node compute
cluster. Efforts are underway to develop a number of example reporter modules that can
be used as a basis for deploying custom Application Kernels within Inca. A well-documented
process will be provided for other team members. Exploration of the
structure of the Inca Depot is underway with the assistance of Shava Smallen (SDSC).<br />
Queries to extract detailed reporter information from the Depot and display it are being<br />
developed. A web-based exploration tool for the Inca data depot is being built. This will<br />
allow the team to quickly explore the results of the custom application kernels that are<br />
being developed.<br />
• Application Kernel Development: Initial application kernel requirements have been<br />
documented and the kernel classification scheme aligned with the “Colella dwarfs” that
describe application processing and data requirements. Deployment will leverage<br />
existing Inca reporting infrastructure, and the first (dense matrix) kernel and reporter is<br />
ready for testing. Two microbenchmarks are now running within the local Inca testbed:
STREAM, which measures processor-to-memory bandwidth, and BLAS (DGEMM), which
measures single-node floating-point performance in the context of a dense matrix
problem. We will coordinate and collaborate with FutureGrid staff and Inca
developers to see what they have already deployed on FutureGrid and leverage their<br />
existing codebase for early and easily adopted application kernels. Inca’s current cluster<br />
batch wrapper needs to be augmented to facilitate running kernels of varying size<br />
(especially important for lightweight versions of the various application kernels), and we<br />
will need to develop a meta-scheduling scheme to manage the flow of kernel jobs through<br />
the various batch queuing systems in TG/XD (prevent accumulation of jobs, duplication,<br />
etc.).<br />
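A minimal sketch of the DGEMM-style single-node measurement described above; a production kernel would call an optimized BLAS library, and the matrix size and plain-Python loops here are purely illustrative to keep the sketch self-contained.

```python
import time

def dgemm_gflops(n=128, repeats=3):
    """Time a dense double-precision matrix multiply (the DGEMM operation)
    and report the best rate in GFLOP/s.  A general n x n matrix product
    performs roughly 2*n**3 floating-point operations.
    """
    a = [[float(i + j) for j in range(n)] for i in range(n)]
    b = [[float(i - j) for j in range(n)] for i in range(n)]
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        best = min(best, time.perf_counter() - t0)
    assert len(c) == n  # keep a dependence on the result
    return 2.0 * n ** 3 / best / 1e9
```

Reporting the best of several repeats reduces timing noise, the same reason benchmark kernels such as STREAM report best-case rates.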
• Miscellaneous: TeraGrid accounts have been requested and received for two developers<br />
(Charng-Da Lu and Steven Gallo) under the Network/Operations/Security (NOS)<br />
allocation by Shava Smallen and Jeff Koerner. These accounts will be used for Inca<br />
Monitoring and Application Kernel development and testing. Additionally, a Subversion
repository has been set up for TAS software management.
• Hardware Ordered for XDMoD Development: Hardware was purchased to host the
development and production environments for XDMoD, including a mirror of the TGCdb
database. The hardware includes two Dell R710 servers and an EqualLogic PS6010XV
high-performance storage array with 9.6 TB of raw storage.
• Indiana University Subcontract: IU tasks include: (a) Identify the currently available
TG monitoring and resource management services and their access interfaces (API,
command line, web service). (b) Assist in the development of the XDMoD (XD Metrics
on Demand) portal. (c) Collaboratively design and develop a Custom Report Builder that
can be incorporated in the XDMoD portal, allowing users to easily generate custom
reports in several formats, including PDF and Excel. (d) Participate in the development
and deployment of application test kernels. (e) Interface with the FutureGrid and TIS
(Technology Insertion Service) projects.
• University of Michigan Subcontract: The University of Michigan subcontract was<br />
finalized.<br />
• The TAS Technical Advisory Committee: This committee will meet regularly to<br />
provide feedback and guidance on the TAS program. The current committee members<br />
are listed below. We plan on adding a representative from the XD resource providers<br />
once the XD award has been established. The first meeting will tentatively be held in<br />
November 2010.<br />
TAS Technical Advisory Committee:<br />
1. Dr. Jerzy Bernholc (Atomistic QM, Physics, NC State)<br />
Email: bernholc@ncsu.edu; http://chips.ncsu.edu/~bernholc/<br />
2. Dr. P.K. Yeung (CFD, Georgia Tech)
Email: pk.yeung@ae.gatech.edu; http://soliton.ae.gatech.edu/people/pyeung/index.html<br />
3. Dr. Martin Berzins (Chair Computer Science at Utah)<br />
Email: mb@cs.utah.edu; http://www.cs.utah.edu/personnel/mb/<br />
4. Dr. Craig Stewart (Executive Director, Pervasive Technology Institute, Indiana),<br />
Email: stewart@iu.edu; http://www.indiana.edu/~ovpit/bios/cstewart.html<br />
5. Dr. Richard Brower (Lattice QCD, Boston University)<br />
Email: brower@bu.edu; http://physics.bu.edu/~brower
6. Dr. Sara Graves (Director, Information Technology and Systems Center; Univ of<br />
Alabama, Huntsville), Email: agraves@itsc.uah.edu; http://www.itsc.uah.edu
10 Education, Outreach, and Training; and External Relations<br />
The TeraGrid collective efforts have engaged at least 4,205 people through at least 113 events, as
shown in the table below. The number of participants was not recorded at all events. Details on
the events are included in section 10.6.
                          # Participants  # Underrepresented people  # events
Workshops                 988             423                        48
Tutorials                 98                                         9
Async Tutorials           1,750                                      29
Forum and Tours           1,161           471                        7
Conference Presentations  86              27                         9
Courses                   122             21                         10
Total                     4,205           942                        113
10.1 Highlights<br />
Campus Champions<br />
The following e-mail is from Jeff Pummill, a Campus Champion at the University of Arkansas.<br />
A grad student just finished using Ranger for a computer science project involving the following:<br />
Geoscience research has experienced an explosion of data in recent years. Streams of data from<br />
satellites, unmanned aerial vehicles, airplanes, and people need to be accurately georeferenced (i.e.,
placed into a global map). When performed using existing methods, georeferencing is a time-consuming
process that requires significant human interaction. My research is part of a project
working towards using computer vision techniques and algorithms to quickly and autonomously<br />
place and orient new data. I parallelized the Scale Invariant Feature Transform (SIFT), which was
originally ill-equipped to work with terabyte-sized data sets.
Our campus resources were in heavy use and also did not have the required core count for the<br />
project.<br />
The student sent me this note regarding the experience:<br />
Jeff,<br />
The help from teragrid was very prompt, the person understood my problem and quickly<br />
helped me resolve it. TeraGrid CC is an important job. If you were not on campus, I may<br />
never have even thought about getting on TeraGrid. Also, helping me get an account quickly<br />
allowed me to run very large scale tests and publish those results in time for a deadline.<br />
Having lots of SUs is also good. Overall, this has been a good experience, keep up the good<br />
work.<br />
All my scripts used the wall time of 1:30:00. The spreadsheet has the queue times.<br />
I'd consider this yet another success story for the TeraGrid, the Campus Champions, AND the<br />
stellar staff at TACC ;-)<br />
GIG
TeraGrid helped support 3 workshops for faculty to assist them with integrating computational
science tools, resources and methods into their courses. The courses included Introduction to<br />
Computational Thinking, and Parallel and Cluster Computing. These events were jointly<br />
supported by Blue Waters and the National Computational Science Institute<br />
(www.computationalscience.org).
Quote from a participant:<br />
LONI<br />
From: "Heeler,Phil" <br />
To: Henry Neeman <br />
Subject: RE: NCSI Intmd Parallel Programming Workshop<br />
Henry,<br />
Thank you for your diligent efforts in conducting a challenging<br />
workshop. The material was definitely harder but at the same time<br />
very interesting. I did learn a lot. Your team and you are to be<br />
congratulated on doing a fine job. Bob Panoff would be proud.<br />
Hopefully, I will see you at SC10 and maybe at the OU<br />
symposium, if I can twizle my schedule!
Thank you again,<br />
Phil Heeler<br />
LONI/LSU offered 8 tutorials, each 2-4 hours long, on the LSU campus and via Access Grid. A<br />
3-day workshop was also sponsored on campus for advanced users. All tutorials and the<br />
workshop involved hands-on learning opportunities covering development tools in HPC<br />
environments. These included MPI, OpenMP, and hybrid programming methods plus debugging<br />
and profiling for optimization. Other topics included functioning in a UNIX environment, account<br />
and allocation management, and job submission and management.<br />
NCAR<br />
NCAR’s 2010 Summer Internships in Parallel Computational Science (SIParCS) program got<br />
under way in this quarter, and it includes several TeraGrid-related projects. This year, 20 students
from 17 different universities across the country are participating in the program. Some
have signed up for the Virtual School of Computational Science and Engineering (VSCSE)<br />
petascale applications development, to be held online July 6-9, 2010.<br />
NCSA<br />
The following is an e-mail from a user about the CI-Tutor system of tutorials provided by NCSA.<br />
From: "Srirangam V Addepalli" <br />
To: training@ncsa.uiuc.edu<br />
Sent: Friday, May 28, 2010 10:41:03 AM<br />
Subject: NCSA Cyberinfrastructure Tutor<br />
Hello,<br />
I am currently in the process of setting up a training page for our users and came across your<br />
website. This website has ranked the best and comprises of the most comprehensive collection of<br />
training materials and tracking training progress. Would it ok if we redirect our users for training<br />
to your website.<br />
Sincerely<br />
Rangam<br />
Srirangam Addepalli; Programmer Analyst – III; High Performance Computing Center<br />
Texas Tech University<br />
NICS
NICS efforts in EOT were highlighted by two large workshops, the NICS/OLCF XT Hex-Core
Workshop May 10-12, and the Computational Thinking for Educators workshop June 12-14. In<br />
addition to these large workshops, six college classes from Colorado to Massachusetts to South<br />
Carolina were given access and direct support during the Spring 2010 semester. Considerable<br />
work was also completed on a CI-Tutor course in Parallel I/O, and a new course for the catalog<br />
was started.<br />
PSC<br />
PSC led the planning for the Student Program for the TeraGrid’10 Conference. This included<br />
securing funding from NSF for students to participate in the Conference as well as organizing
sessions to engage the students.<br />
Purdue<br />
Greg Howes, University of Iowa professor of physics and astronomy, contacted TeraGrid looking<br />
for a system on which his students participating in the Iowa High Performance Computing<br />
Summer School course could practice parallel computing. The Purdue RP stepped up as a TeraGrid
resource partner to accommodate Professor Howes’s needs by providing access to Moffett,
Purdue’s SiCortex supercomputer. Graduate students from 14 departments in fields as diverse as<br />
biochemistry and physics, economics and astronomy, geography and electrical and computer<br />
engineering participated in the two-day, hands-on session of running and debugging parallel code.<br />
The over 4,500 processors and extraordinarily fast custom interconnect fabric of the SiCortex<br />
system proved to be a perfect fit for what Professor Howes was trying to accomplish. The Iowa<br />
graduate students ran 1,420 jobs and used 16,721 CPU hours.<br />
TACC<br />
TACC’s education and outreach programs include the scientific computing courses (2 during this<br />
reporting period) and tours of TACC’s facility at the J.J. Pickle Research Campus (about 10 miles<br />
north of UT Austin’s main campus) and the ACES Visualization Laboratory at the UT Austin<br />
main campus. The Austin Forum continues to attract a broad audience from the university, the<br />
city, and the wide range of technology industries around Austin. TACC’s Scientific Computing<br />
courses were all fully subscribed this semester, with 35 students in each of the courses. TACC<br />
continues a dialogue with the community using Web 2.0 technologies like Facebook and Twitter.<br />
TACC continues to experiment with webcasting training courses, and will expand the course<br />
availability in the coming months.<br />
High-impact EOT activities that resulted from use of the TeraGrid resources, along with
TeraGrid EOT activities, are described below.
NICS<br />
Training. The NICS/OLCF XT Hex-Core workshop (May 10-12, 2010) was presented at Oak<br />
Ridge National Laboratory and served 65 users of both the NICS Kraken machine and the similar<br />
DOE machine at ORNL. Daniel Lucio and Lonnie Crosby of NICS delivered training material,<br />
in a program that also included DOE presenters and vendor experts who gave insights into their
hardware and tools. The workshop was intermediate to advanced in scope, and NICS staff from<br />
its Scientific Computing and User Support groups provided consulting to the attendees during<br />
hands-on laboratory sessions.<br />
Education. NICS planned and co-hosted a 2.5-day workshop on Computational Thinking for<br />
Educators, held at the Oak Ridge Associated Universities facility in the town of Oak Ridge, TN.<br />
Twenty educators took part in the workshop, ranging from lower middle school teachers through
middle and high school math and science teachers to two faculty members from Tuskegee
University. Attendees hailed from Alabama, Georgia, North Carolina, and Tennessee. Robert
Panoff of Shodor provided one day of excellent instruction on Shodor’s various tools for educators;
three Master Teachers from Tennessee teamed to provide content on the second day; and Jim
Ferguson of NICS taught basic programming techniques through the use of a programmable car,
which the attendees got to take back to their schools with them. Hands-on activities designed for
students (and their teachers’ lessons) were constant throughout the workshop. Initial surveys
after the course were overwhelmingly positive, and a follow-up survey is planned for the next
school semester to see how the instructors are making use of what they learned.
Education/Outreach/Training. Six college classes were supported by NICS during the Spring<br />
semester of 2010. Classes at Colorado (Prof Jessup), Tufts (Prof Grinberg), Wofford (Prof<br />
Shiflet), Brown (Prof Grinberg), and two at Tennessee (Professors Banks and Dongarra) were<br />
given access to Kraken, in-person or online (WebEx) training, and follow-up user support. There
were approximately 105 students who made use of NICS resources, along with their professors<br />
and teaching assistants.<br />
Training. Mikhail Sekachev and Jim Ferguson are collaborating with other NICS staff and with<br />
Sandie Kappes of NCSA to produce a CI-Tutor training module covering Parallel I/O issues on<br />
the latest systems. Drawing on materials and expertise produced locally at NICS and at DOE’s<br />
OLCF, this module is being shaped to give students a solid overview of the issues at hand<br />
regarding parallel I/O, as well as some specific tools to use on Cray XT and other hardware<br />
platforms. Final editing and polishing are underway, and the finished product will be posted
to the CI-Tutor site in 3Q2010.
Training. Jim Ferguson has been heavily involved with locally organizing and recruiting
speakers for the upcoming multi-point summer school in Petascale Computing, being put on by
the Virtual School for Computational Science and Engineering. Ferguson consulted with Scott
Lathrop on the overall agenda for the summer school, and recruited four speaker-trainers at NICS<br />
and OLCF (Lonnie Crosby, Rebecca Hartman-Baker, Sean Ahern, Scott Simmerman) to help fill<br />
out that agenda with top people. The workshop is scheduled for July 6-9, 2010.<br />
**Excerpt from a report written by Professor Angela Shiflet regarding NICS, her Spring class,<br />
and the visit of Bruce Loftis and Jim Ferguson to Wofford to give a seminar.**<br />
“The class and I had wonderful support from NCSA, NICS, and the TeraGrid. The NCSA tutorial
Introduction to MPI was invaluable to me for lecture material and examples. The TeraGrid staff<br />
promptly answered our questions at all hours of the day and night. One of the TeraGrid<br />
computers we used was NICS' Kraken. NICS staff presented an hour-and-a-half online tutorial<br />
on Kraken designed for the Wofford class. Two scientists from NICS, Jim Ferguson and Bruce<br />
Loftis, volunteered to present a talk at Wofford on “Scientific Computing at a National<br />
Supercomputing Center.” Approximately thirty students and faculty members attended the talk,<br />
including computational science students from the Data and Visualization class (COSC 370).<br />
The talk was the only exposure the COSC 370 students had to HPC during the semester. Some of<br />
their comments about the excellent talk follow:<br />
"I thought the talk by Bruce Loftis and Jim Ferguson was extremely interesting. I had never
heard of UT having supercomputers, much less the Kraken. I also did not know that we<br />
had computers capable of operating 100,000 processors at once, or that anyone had<br />
achieved a petaflop yet….The HIV model particularly interested me because we have spent<br />
an entire lecture on that in Immunology, including how it infects T cells, etc. I had heard of<br />
parallel processing before, but never gave much thought to how it worked. The discussion<br />
on how all the processors work at once was the first education I have had on the topic, and I<br />
found it interesting to finally learn a little bit about how multiple 'brains' of the computer<br />
can simultaneously find solutions to the same problem."<br />
"Before this talk I knew next to nothing about the capabilities of such machines, but when I<br />
left I was yearning to learn more. The possibilities for scientific discovery, in most every<br />
field, seem endless with this immense computing power. Impressive images of a supernova<br />
core collapse, detailed earthquake simulations, and immature virion would not be<br />
possible without such technology. Personally, it seems as though we have just scratched the<br />
surface of this quickly advancing field, and I cannot wait to see where the future takes us."<br />
After the talk, members of the HPC class went to lunch with the NICS scientists. The discussions
led to a "dream internship" at NICS for one of the class members, Glenn Hope.”
SDSC<br />
Planning Underway for Grand Challenges in Data Intensive Sciences<br />
Planning is well underway for a conference at UCSD on October 26-29, 2010. The conference on<br />
“Grand Challenges in Data-Intensive Sciences” will include three days of presentations by leaders<br />
in data-intensive sciences in a wide range of fields. The fourth day of the conference will be a<br />
hands-on workshop using Dash for attendees who wish to try their own code on the machine,<br />
working closely with the workshop presenters in small tutorial teams. SDSC’s TeraGrid EOT,<br />
User Services, and AUS staff are all involved in this upcoming event.<br />
The purpose of the conference is to:<br />
• Identify "Grand Challenges" in data-intensive science across a broad range of topics<br />
• Identify applications and disciplines that will benefit from Gordon's unique architecture<br />
and capabilities<br />
• Invite potential users of Gordon to speak and participate<br />
• Make leaders in data-intensive science aware of what SDSC is doing in this space<br />
• Raise awareness among disciplines poorly served by current HPC offerings<br />
• Better understand Gordon's niche in the data-intensive cosmos and potential usage modes<br />
Topic areas are to include astronomy, biology, computer science, earth sciences, engineering,
economics, medicine, neuroscience, tools and technologies, and the arts, humanities, and social
sciences. Each topic area will include a senior plenary speaker laying out the scientific frontiers
and challenges, and an invited speaker who will discuss a current data-intensive project that<br />
ideally could use Gordon. One panel is scheduled for each afternoon on a cross-cutting topic in<br />
data-intensive research (e.g., visualization, data mining, data management).<br />
10.2 Training, Education and Outreach<br />
10.2.1 RP Operations: Training<br />
NCSA<br />
RP-funded training activities included hands-on workshops, training sessions via the Access Grid,
and support of online and in-person tutorials. TeraGrid RP staff participate in both local
site-specific and TG-wide training.
PSC<br />
PSC’s John Urbanic and Tom Maiden gave a presentation at the 5th Annual High Performance<br />
Computing Day at Lehigh on April 23. The 3½-hour presentation included an overview of the<br />
multicore computing hardware landscape and an overview of software computing techniques<br />
including examples of MPI, OpenMP, and GPGPU languages such as CUDA and OpenCL. There
were 20 attendees. For more information, see:<br />
http://www.lehigh.edu/computing/hpc/hpcday/2010/<br />
http://insidehpc.com/2010/05/03/lehigh-hosts-fifth-hpc-day/.<br />
PSC staff conducted the quarterly TeraGrid web conference New User Training Session on<br />
Friday, April 30, using the TeraGrid User Services ReadyTalk line. There were 22 participants<br />
for the 80-minute session. Since audio was not saved for this session, the website notes that audio<br />
and corresponding slides are available for February’s session, and has a link to that session. The
PSC staff involved were Marcela Madrid (presenter), Phil Blood (presenter support), Tom
Maiden (participant troubleshooting), and Laura McGinnis (question routing).
PSC staff members created a workstation cluster installation process to support an introductory<br />
High Performance Computing workshop hosted at Chatham University, a predominantly female<br />
undergraduate school, on May 19-20, 2010. The goal was to use existing computers in a Chatham<br />
student workstation lab to create a cluster. Since these systems were in active use by Chatham<br />
students, the workshop cluster needed to reside next to the existing software on the workstations<br />
in a non-intrusive way.<br />
The PSC people explored several alternatives and developed a virtual machine (VM) model that<br />
could be run from a memory stick on a head node and create a cluster on all the other available<br />
workstations from the head node. They added the capability of “netbooting” the nodes so that
system administrators would have full control over the software load. (“Netboot” is a machine
build and replication service that PSC developed many years ago and is in use for most of PSC’s<br />
in-house clusters.) With this netboot process, they created a cluster with PBS and OpenMP that is<br />
capable of dynamically scaling the number of nodes. These techniques reduced the time needed to
build a simple cluster to a few hours. The cluster was used for a two-day MPI
workshop at Chatham. The working cluster remains at Chatham and is being used as a learning<br />
resource. Dr. Larry Viehland, the Chatham faculty member who hosted the workshop, is running<br />
production codes on the cluster.<br />
Six people from Chatham and one from Pitt attended the workshop. The workshop covered<br />
Introduction to PSC, Introduction to Chatham Cluster, Introduction to Parallel Computing, MPI<br />
Basics, Scalable Coding (Laplace Solver Case Study), OpenMP, and plenty of hands-on<br />
exercises.<br />
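The Laplace solver case study is a common vehicle for teaching scalable coding. As a rough illustration of the kind of serial starting point such a workshop builds on before parallelizing with MPI (the function name and grid parameters here are hypothetical, not the actual workshop materials), a Jacobi iteration looks like:

```python
import numpy as np

def jacobi_laplace(n=32, top=100.0, tol=1e-4, max_iters=20_000):
    """Solve Laplace's equation on an n x n plate with a hot top edge.
    Jacobi iteration: each interior point is repeatedly replaced by the
    average of its four neighbors until updates fall below tol."""
    t = np.zeros((n, n))
    t[0, :] = top  # fixed boundary condition along the top edge
    for i in range(max_iters):
        new = t.copy()
        new[1:-1, 1:-1] = 0.25 * (t[:-2, 1:-1] + t[2:, 1:-1] +
                                  t[1:-1, :-2] + t[1:-1, 2:])
        if np.max(np.abs(new - t)) < tol:
            return new, i
        t = new
    return t, max_iters

grid, iters = jacobi_laplace()
```

An MPI version, as taught in such workshops, would split the grid into strips and exchange boundary rows between neighboring ranks each iteration.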
Mahin Mahmoodi of PSC created an online tutorial, TAU: Parallel Performance Profiling and
Tracing Tool, that was made available to the public on the TeraGrid website during the last week
of June. TAU is an advanced open-source performance tool for profiling scientific applications.
The tutorial provides TAU usage information for Kraken and Ranger, introduces commonly used
features, provides quick references, and helps users get started with TAU on those systems. It
also provides a guide to navigating TAU’s visualization features, which are essential for
extracting and analyzing performance data. The material is available at
https://www.teragrid.org/web/user-support/tau.
Purdue<br />
Purdue RP created a set of step-by-step online video tutorials on how to use the Purdue CCSM
modeling portal to create, configure, and run CCSM simulations. The tutorials also cover
advanced topics such as managing jobs, post-processing model output, and configuring
customized input data, using real-world simulation examples.
SDSC<br />
RP-funded training activities included hands-on workshops, training sessions via the Access Grid,
and support of online and in-person tutorials. TeraGrid RP staff participate in both local
site-specific and TG-wide training.
TACC<br />
Staff members at the Center for Advanced Computing at Cornell University continue to maintain,<br />
support, and develop new modules for the Ranger Virtual Workshop. Available through both the<br />
TeraGrid User Portal and the TACC User Portal, the workshop provides users access to fifteen
training modules, with six more currently under development.
Available Modules<br />
Parallel Programming Concepts and High-Performance Computing<br />
Ranger Environment<br />
Message Passing Interface (MPI)<br />
MPI Point-to-Point Communications<br />
MPI Collective Communications<br />
MPI One-Sided Communication<br />
OpenMP<br />
Hybrid Programming with OpenMP and MPI<br />
Profiling and Debugging<br />
Optimization and Scalability<br />
Computational Steering<br />
Large Data Visualization<br />
ParaView<br />
VisIt<br />
Using Databases<br />
Modules Under Development<br />
Advanced MPI<br />
Data Analysis with Python on Ranger<br />
Distributed Debugging Tool (DDT)<br />
Multi-core<br />
Multi-node Map Reduce<br />
Profiling with mpiP<br />
10.2.2 RP Operations: Education<br />
NCSA<br />
TG RP Education activities communicated the relationships among science, technology, and
society, and the role that scientific computing plays. TG RP staff reached out to the K-20
community to encourage student interest in math and science, to members of diverse,
underrepresented groups to broaden participation in high performance computing, and to the
general public to promote awareness of the role of high performance computing in advancing
scientific discovery. Some RP sites contribute to the scientific computing curriculum at their
institution, creating and maintaining "course packages" that higher education faculty can use to
produce their own courses.
ORNL<br />
Jim Rome at the ORNL RP established a working relationship with the Tennessee Governor’s<br />
Academy for Mathematics and Science (TGA) to teach a weeklong course on programming (in<br />
Java). The TGA (http://tga.tennessee.edu/) is a residential secondary school focusing on advanced<br />
science and engineering located in Knoxville with students drawn from across the state of<br />
Tennessee in a highly selective process. Under the direction of UT’s nationally acclaimed College<br />
of Education, Health and Human Sciences, the Academy provides an integrated curriculum that<br />
embeds key elements of the social sciences, language arts, and the humanities in rich and relevant<br />
problem-based modules. With a strong physics and calculus foundation, the Academy students<br />
are also each paired with a scientist at Oak Ridge National Laboratory to enhance the
experiential nature of their science and math instruction. In discussions with the Academy, it
became clear that its advanced curriculum did not include enough hands-on programming
instruction or experience. Consequently, Dr. Rome, with support from TG-EOT and the ORNL-RP, will be
coordinating a week-long introduction to programming in Java course to be taught before the first<br />
week of school in August. In Q2 of 2010, a considerable amount of planning and preparation<br />
occurred in order to be ready for the course in August.<br />
Henri Monti, a graduate student at Virginia Tech, visited Oak Ridge and collaborated with staff at<br />
the ORNL RP and other areas of ORNL. He conducted research on developing methods to<br />
checkpoint HPC codes to memory instead of disk. On petascale (and later exascale) HPC
platforms, dumping memory to disk for a checkpoint can take minutes, making the act of
checkpointing very expensive in terms of SUs. An active area of research is checkpointing to
unused memory on other cores, either to develop more resilient modes of computing or to offload
the I/O associated with checkpointing and thus hide the latency usually associated with saving
state. The results were promising. The plan is to submit one or more papers from this work for
publication.
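The diskless-checkpointing idea under study can be illustrated with a toy Python sketch (all names here are hypothetical; the actual research targets real petascale platforms and MPI, which are not shown): each rank keeps a copy of a partner rank's state in spare memory, so a failed rank can be restored without a file-system dump.

```python
import copy

class Node:
    """Toy compute node: its own state plus a buffer holding a
    partner's checkpoint in memory rather than on disk."""
    def __init__(self, rank):
        self.rank = rank
        self.state = {"step": 0}
        self.partner_ckpt = None

def checkpoint(nodes):
    # Each rank copies its state to rank+1 (mod n), standing in for
    # an in-memory transfer that avoids the parallel file system.
    n = len(nodes)
    for node in nodes:
        nodes[(node.rank + 1) % n].partner_ckpt = copy.deepcopy(node.state)

def restore(nodes, failed_rank):
    # Rebuild a failed rank's state from its partner's buffer.
    n = len(nodes)
    nodes[failed_rank].state = copy.deepcopy(
        nodes[(failed_rank + 1) % n].partner_ckpt)

nodes = [Node(r) for r in range(4)]
for node in nodes:
    node.state["step"] = 10   # advance the simulated computation
checkpoint(nodes)
nodes[2].state = None         # simulate a failure on rank 2
restore(nodes, 2)
```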
PSC<br />
Cheryl Begandy, Pallavi Ishwad and Robin Flaus of PSC met with the technology coordinators of<br />
the Pennsylvania Cyber Charter School; see http://www.pacyber.org/. They reviewed PSC’s slate
of K-12 programs and there was a good deal of interest. PSC can clearly provide curriculum
content to the Cyber Charter School’s students, who all attend over the internet, and also help<br />
their instructors find additional content on the web. Another meeting is planned which will also<br />
include representatives from the National Network of Digital Schools.<br />
PSC has selected four high school students to work as unpaid interns with NRBSC staff on a
bioinformatics research project. The internship program runs from June 14 to July 23, 2010. In<br />
addition to learning how to conduct a research project in bioinformatics, the interns will develop<br />
career relationships and receive career guidance from the national and international participants<br />
(both professors and university students) of PSC’s MARC Summer Institute in Bioinformatics.<br />
There are two students from two of the three schools that piloted BEST (Better Educators of<br />
Science for Tomorrow), which is a high school bioinformatics course that was developed over the<br />
last three years as a combined effort between PSC scientists and multidisciplinary high school<br />
teachers. Funding support came from the Buhl Foundation. BEST is taught from the overall<br />
viewpoint of training practicing biologists who will use bioinformatics techniques and tools in<br />
their biology careers. The internships will provide experience in bioinformatics from another<br />
viewpoint, that of a computer scientist applying the problem solving and programming skills of<br />
computer science to biological sequence data. The program began with two weeks of intense<br />
instruction in the Python programming language. The interns are now working collaboratively on<br />
applying Python, with guidance from PSC senior scientists, to biological sequence data. The
interns may also have the opportunity to contribute to scientific publications.
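A typical first exercise in such a program applies Python string handling to sequence data; for instance (a generic illustration, not the interns' actual project code):

```python
def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence, a common first
    computation when applying Python to biological sequence data."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def reverse_complement(seq):
    """Reverse complement of a DNA sequence."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(seq.upper()))
```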
Alex Arrico, a Physics undergraduate at Duquesne University, asked PSC’s Yang Wang to<br />
mentor his application for one of the Pennsylvania Space Grant Consortium Research<br />
Scholarships (http://pa.spacegrant.org/psgc-fellowship-program) which are awarded by a program
administered by the Pitt Department of Physics and Astronomy to support undergraduate<br />
research. The goal of this NASA Space Grant program is to encourage students to enter STEM<br />
fields. The proposal they developed, “Explore the use of CUDA GPU for High Performance 3-D<br />
FFT Applications”, was successful, and Mr. Arrico has become a PSC intern under Dr. Wang’s<br />
mentorship. The internship runs from Monday, May 24 to Friday, July 30. They have received a<br />
Startup grant on NCSA Lincoln and will also seek to use the GPGPU cluster at the Center for<br />
Molecular and Materials Simulations at Pitt. The project is to test Fast Fourier Transform<br />
packages targeted at astrophysics applications and compare performance between CPU and
GPGPU. Mr. Arrico has finished the code development for the tests and is now focusing on
collecting the timing data.
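The CPU side of such a comparison can be sketched with NumPy (a sketch only; the function name and problem sizes are hypothetical, and the GPU side would time the equivalent transform, e.g. through a CUDA FFT library, on the same array sizes):

```python
import time
import numpy as np

def time_fft3d(n=64, reps=3):
    """Best wall-clock time (seconds) for a 3-D complex FFT on the CPU.
    A GPU comparison would run the same sizes through a CUDA FFT
    library and also account for host/device transfer costs."""
    a = np.random.rand(n, n, n) + 1j * np.random.rand(n, n, n)
    best = float("inf")
    b = None
    for _ in range(reps):
        t0 = time.perf_counter()
        b = np.fft.fftn(a)
        best = min(best, time.perf_counter() - t0)
    # sanity check: the inverse transform recovers the input
    assert np.allclose(np.fft.ifftn(b), a)
    return best
```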
PSC has two additional interns this summer, Matt Lambert, a sophomore at the College of<br />
Wooster in Ohio, and Amy Pfeifer, an incoming freshman from Carnegie Mellon, who are<br />
working on the HPC University Computational Question of the Week, with a team of developers<br />
at Shodor (in Raleigh, NC). The short-term goal is to get the project back up to date and verify<br />
the problem sets and solutions that have already been entered, in time for the TeraGrid ’10<br />
Student Program. Longer term goals include enhancing the problem sets, adding more detailed<br />
solutions, including discussion forums for each problem, and possibly adding computational<br />
resources (possibly a virtual machine environment) that could be used as a platform for solving<br />
the problems.<br />
Purdue<br />
Purdue RP staff collaborated with faculty at Marquette University on using the Wispy cloud
resource in fall 2010 coursework on cloud computing. We expect students to use Wispy and
Hadoop/MapReduce.
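The MapReduce pattern the students will practice on Hadoop can be previewed in plain Python (names are illustrative; Hadoop distributes the same map, shuffle, and reduce phases across a cluster):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word, as a mapper would.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["cloud computing on wispy", "cloud resources on teragrid"]
word_counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
```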
Additionally, Purdue has continued to integrate student system administrators into RP operations,<br />
providing real-life, hands-on experience with operating a large scale computing infrastructure.<br />
Purdue RP has recruited two undergraduate students, Talia Ho and Evan Albersmeyer, as summer<br />
interns who have worked with the TeraGrid team since May. Talia is working with an RP user
support expert to create step-by-step tutorials on frequently asked topics. She has learned to
access and use various TeraGrid resources and tools, such as the TG user portal and gsissh tools,
has become familiar with common user issues, and has learned how to make effective help
material. Evan is developing a utility tool to help manage
the virtual machines deployed in Purdue’s Condor pool and report usage data to the users of the<br />
desktop computers that contribute cycles to the pool. Both students have benefited from learning
about TeraGrid, high-end computing, and scientific applications in general; from learning to
work with end users and to present advanced technology to beginners; and from working with a
collaborative team.
SDSC<br />
TG RP Education activities communicated the relationships among science, technology, and
society, and the role that scientific computing plays. TG RP staff reached out to the K-20
community to encourage student interest in math and science, to members of diverse,
underrepresented groups to broaden participation in high performance computing, and to the
general public to promote awareness of the role of high performance computing in advancing
scientific discovery. Some RP sites contribute to the scientific computing curriculum at their
institution, creating and maintaining "course packages" that higher education faculty can use to
produce their own courses.
TACC<br />
TACC Scientific Computing Curriculum Courses: Since the fall of 2006, over 400 students have<br />
enrolled in our courses. A total of 68 undergraduate and graduate students enrolled in the two<br />
spring 2010 courses, Introduction to Scientific Programming (focused on FORTRAN and C) and<br />
Parallel Computing for Scientists & Engineers. Four TACC scientists are teaching these courses.<br />
In each course, enrollment reached the maximum. Engineering students continue to make up the
largest share of enrollment (40%), compared to the natural sciences, computer sciences,
geosciences, and liberal arts. The majority of students are involved in research. Introduction to
Scientific Programming continues to attract the greatest percentage of undergraduates per course.
TACC is evaluating the quality and effectiveness of the courses with pre- and post-course surveys<br />
based on the content, learning objectives, and the computational science competencies from the<br />
Ralph Regula School of Computational Science. The Spring 2010 course data indicates<br />
statistically significant gains in students’ knowledge and skills. There are a few interesting<br />
findings: Introduction to Scientific Programming attracts mostly undergraduates, while graduates<br />
are the overwhelming majority in the Parallel Computing course. The majority of students in the<br />
programming class were from engineering and physics, but two liberal arts majors enrolled in the<br />
class as well. Student achievement in the learning objectives for the parallel computing course,<br />
based on the competencies developed by the Ralph Regula School of Computational Science,<br />
increased from little or no experience to proficient/very experienced over the course of the<br />
semester. An encouraging highlight is that 75% of students in the scientific programming course
indicated they are applying their new programming capability in other courses.
TACC Scientific Computing Curriculum Package: Production of the Scientific Computing<br />
Curriculum Package continues. Instructors are now reviewing the revised course materials, with<br />
an eye toward distribution of materials at SC’10. The presentation slides contain the bulk of the
courses’ content that students use on a regular basis, thus the content accuracy and quality of the<br />
slides are critical to maximize impact and sustainability. Students have commented to our<br />
instructors about the high quality of the slides. The package content extends beyond the course<br />
slides to ensure that faculty at other institutions may readily adopt or integrate the materials for<br />
creating their own courses or enriching existing courses. Additional content includes assignments,<br />
tests, projects, references, and instruction guides.<br />
10.2.3 RP Operations: Outreach<br />
NCSA<br />
NCSA provided tours of its facilities and an overview of how the center supports science and<br />
engineering discovery and innovation to more than 2,000 people. More than 1,000 of these<br />
visitors attended our June 17, 2010, Community Day event at the new National Petascale<br />
Computing Facility, the building that will house the Blue Waters sustained-petaflops<br />
supercomputer and other advanced infrastructure.<br />
ORNL<br />
The ORNL-RP continued coordination and outreach to the DataONE DataNet project in many
areas, including attending the DataONE core cyberinfrastructure team meeting in
Albuquerque, NM. The suggestion was made that DataONE should seek a closer partnership with
TeraGrid. TeraGrid and DataONE have complementary services for data with TeraGrid<br />
concentrating on low level protocols and infrastructure while DataONE provides higher-level<br />
software infrastructures. Beyond broad collaboration, specific actions in the DataONE working<br />
group for scientific exploration, visualization, and analysis include deploying new, larger, data<br />
intensive studies correlating and predicting bird migration patterns as a function of observed<br />
environmental, ecological, and climatic data. The TeraGrid is now a prime target computational<br />
platform for these studies.<br />
PSC<br />
Marcela Madrid of PSC gave an Introduction to TeraGrid Resources talk at the fourth PA-OH-<br />
WV Simulators Meeting at CMU, on June 15, 2010. She covered the resources and services<br />
offered by the TeraGrid, and how to apply for an allocation through POPS. There were 48<br />
attendees. Ten people signed up for more information about the TeraGrid, one from Case Western<br />
Reserve University, one from University of Pennsylvania, three from West Virginia University,<br />
three from Carnegie Mellon University and two from University of Pittsburgh. For more<br />
information on the meeting see http://ntpl.me.cmu.edu/ntplwiki/index.php/PA-OH-<br />
WV_Simulators_Meeting.<br />
PSC’s Cheryl Begandy and Lynn Layman presented an overview of opportunities that PSC and<br />
the SuperComputing Science Consortium ((SC)²) provide for companies at an outreach event
organized by the Erie Technology Incubator and eBizITPA. (SC)² is a partnership of the National
Energy Technology Laboratory, the Pittsburgh Supercomputing Center, Carnegie Mellon<br />
University, the University of Pittsburgh, West Virginia University, the West Virginia Office of<br />
Technology, Duquesne University, and Waynesburg University which improves the ability of the<br />
partners to advance energy and environment technologies through the application of high<br />
performance computing and communications. The Erie Technology Incubator<br />
(http://erietech.org/) “provides the resources and mentoring, the nurturing and business acumen to
help technology-based start-up companies take that next step both in business development or<br />
funding.” The mission of eBizITPA (http://www.ebizitpa.org/) is “to assist Pennsylvania
businesses and organizations with understanding, using and developing information technologies
to grow the economy by providing resources to enable company formation and growth,<br />
developing technology talent and expertise, and facilitating collaborative efforts to stimulate<br />
innovation.”<br />
PSC’s Cheryl Begandy and Dave Moses met with Justin Driscoll, Director of STEM activities for
the Pittsburgh Technology Council and coordinator of the Greater Oakland KIZ (Keystone
Innovation Zone). As a result of this meeting, Mr. Driscoll is adding another
break-out session to the regional STEM Summit (August 26) to feature PSC K-12 programs along
with two other developers of curriculum material. Mr. Driscoll also encouraged PSC to both<br />
exhibit and present at the annual TRETC (Three Rivers Educational Technology Conference) in<br />
November. He is involved in organizing both of these events.
PSC co-organized the Symposium on Applications of Graphics Processing Units (GPUs) in
Chemistry and Materials Science (June 28-30, University of Pittsburgh;<br />
http://www.sam.pitt.edu/education/gpu2010.php). Nick Nystrom delivered the welcome on the<br />
morning of June 30, in which he discussed PSC, TeraGrid and allocations, Anton (a specialized<br />
supercomputer for biomolecular simulation designed by D. E. Shaw Research that is available for<br />
non-proprietary research at PSC; see §8.6.7), and valuable applications of large shared memory.<br />
The symposium was sponsored by QChem, Penguin Computing, and PETTT, and it was webcast<br />
to DoD researchers.<br />
Purdue<br />
Purdue RP staff is actively contributing to the TeraGrid Campus Champion program. Kay Hunt<br />
and Scott Lathrop presented a position paper sharing the campus champion program perspective<br />
with the NSF-sponsored Campus Bridging workshop in Indianapolis in April 2010. In her unique
position as both an RP user support specialist and the Purdue campus champion, Kim Dillman
helps support many requests from campus champions at a number of campuses. She has also
created tutorials that give users detailed learning material on frequently asked topics. She is
leading the effort to evaluate technologies for a campus champions’ portal. As the Campus
Champions’ representative on the Roaming RAT group, she contributed to constructing a user
survey, analyzing its results, and developing the proposal for meeting the needs identified in the
survey.
SDSC<br />
TG RP Outreach activities communicated the relationships among science, technology, and
society, and the role that scientific computing plays. TG RP staff also reach out to the scientific
and professional communities to encourage interest in the use of TeraGrid resources and
services. Many RP sites also contribute to broadening participation in the scientific computing
community.
SDSC’s Education Director has served as Co-Chair of the Education Program for the TG’10<br />
conference in Pittsburgh, Pa., and has been responsible for recruiting presenters and reviewers
and for overseeing the review process. SDSC’s Education Program Manager has been responsible for
coordinating the student volunteer program logistics.<br />
SDSC’s Education Programmer/Instructor has been responsible for further development of the<br />
Campus Champions web portal, responding to feedback from Campus Champions and the<br />
TeraGrid CC team.<br />
SDSC’s Education Programmer/Instructor has been responsible for initial development and<br />
testing of the MSI-CIEC web portal, responding to feedback from the MSI-CIEC community.<br />
SDSC’s Education Programmer/Instructor has been responsible for administering and<br />
maintaining the TeacherTECH Community Portal and the Discover Data Education Portal in<br />
support of online instruction.<br />
Dash became a TeraGrid resource on April 1. At SDSC, the Dash/Gordon User Support team has<br />
worked closely with a number of early users from a variety of disciplines. The first table below<br />
summarizes highlights from this early user outreach. The outcomes are current for this quarter,
though in some cases the outreach was launched prior to this quarter.
The second table (below) summarizes the currently active Dash allocations, which reflect<br />
outreach to new communities and research groups that have data-intensive computing challenges<br />
that might be amenable to faster time to solution using the new Dash memory-rich architecture.<br />
TACC<br />
TACC hosted three of its monthly Austin Forum events with invited speakers from areas of<br />
interest focused on science and technology. The goal of “The Austin Forum on Science,<br />
Technology & Society” is to engage and educate the Austin community about the numerous ways<br />
in which science and technology enhance the quality of their everyday life as well as the health,<br />
prosperity, and security of the nation. Two hours are devoted to a networking reception,<br />
presentation, and Q&A discussion among the speaker and guests, which reinforces the broader<br />
impact and intellectual merit of the program. Speakers and topics within this reporting period<br />
focused on:<br />
• Tony Befi and Brad McCredie, IBM: “IBM and Austin: Partners in a Smarter Planet.”
Two of IBM Austin's leaders shared how IBM Austin is developing smarter systems for<br />
meaningful change and described the company's view of how technology is addressing<br />
major infrastructure issues. 89 attendees.<br />
• Dr. Jessica Hanover, Dr. Steven Leslie, Greg W. Hartman, Joe Skraba: “BioTech:<br />
The Next Big Thing.” Home to 3,300 technology companies, nationally recognized medical
facilities, and a top-tier research university, Austin is emerging as a best-in-class life<br />
sciences community. The convergence of biology and technology allows Austin to<br />
capitalize on its global reputation as a technology hub. Collaboration between<br />
universities, hospitals, businesses, and others include curriculum planning, research,<br />
medical training, and support of life science initiatives. This talk highlighted the benefits<br />
and challenges of Austin as a community for biotech. 150 attendees.<br />
• Brewster McCracken, “Creating a Dynamic Clean Energy Economy.” Brewster is<br />
executive director of Pecan Street Project Inc., a clean energy/smart grid R&D<br />
organization headquartered at The University of Texas at Austin. Pecan Street is
developing and implementing smart grid and clean energy technologies and business<br />
models. This talk stressed why Austin is uniquely qualified to lead in the 21st-century
energy industry. 101 attendees.
TACC openly advertises tours of our facilities to the Austin and central Texas area. Tour groups<br />
include visitors in K-12 (mainly high school), higher education (training class participants,<br />
university students, etc.), industry, government, and the general public. A few events and impacts<br />
stand out with regard to TACC’s outreach efforts: 1) TACC hosted its first “Explore Creativity in<br />
Digital Space” event which showcased digital media, photography and video works in TACC's<br />
Visualization Laboratory; and 2) TACC was selected by the UT Central Development office to<br />
participate in Professional Advisors Day, a unique event that hosts professional advisors<br />
(attorneys, CPAs, and others who serve as financial advisors and estate planners). An overview of<br />
TeraGrid and its contributions to STEM research was given at each event.<br />
10.3 External Relations<br />
External Relations Highlights<br />
The TeraGrid External Relations (ER) working group issued 29 stories and press releases, and<br />
provided media relations, event planning, and other related activities in Q210. Both TACC and<br />
LONI issued stories about how TeraGrid resources were used to help mitigate the Gulf of Mexico<br />
oil spill, which began in late April and continued throughout the quarter. The working group also<br />
coordinated with NSF-OLPA on press coverage for all related NSF-funded research. For all<br />
TeraGrid-related news, see www.teragrid.org/news.<br />
This quarter, there were 172 news “events” in which TeraGrid was mentioned in a unique way on<br />
the web, as determined by Google Alerts for the term “TeraGrid.” This number is increasing<br />
steadily each quarter; there were 330 events in all of 2009. There is more ‘buzz’ about TeraGrid<br />
among bloggers and professional organizations, which now choose to feature our news,<br />
especially internationally. TeraGrid is achieving broader exposure as we have increased efforts<br />
to release news items via our outreach list to related professional organizations. TeraGrid’s<br />
Facebook page has become more utilized, and the number of ‘friends’ has increased from 64 in<br />
Q209 to 109 in Q210. See:<br />
http://www.facebook.com/pages/TeraGrid/106634599376129ref=search#!/group.phpgid=2562777259&ref=ts<br />
Plan Year Six (PY6) officially began in April 2010. Within the Integrated Project Plan for the<br />
plan year, Leake identified the need for additional help with science writing and graphic<br />
design. To support this need, 0.20 FTE of additional effort was funded for graphic design<br />
services by Shandra Williams (PSC) and 0.30 FTE was allocated for science writing by Jan<br />
Zverina (SDSC). Each began to provide additional assistance this quarter: Williams helped with web<br />
needs, documentation, data facilitation, and design assistance with TG’10 efforts. Zverina began<br />
to customize stories so that they were less RP-specific, and more relevant to TeraGrid in general.<br />
Zverina also took a leading role with the TG’10 communication committee efforts.<br />
Science Highlights<br />
Sixteen stories were identified for the 2010 Science Highlights book. The book will be printed<br />
and delivered in time for the November Supercomputing conference. Once again, Warren<br />
Froelich (SDSC) will serve as editor, and Carlton Bruett has been contracted to provide design<br />
services. Most of the ER team provide stories and assist with editorial efforts for this<br />
publication, as well as the 2010 Education, Outreach, and Training book, edited by EOT Working<br />
Group Member Ange Mason (SDSC).<br />
Discussion ensued about how to produce the 2011 Science Highlights publication. The incumbent<br />
TG-ER team will continue to identify story prospects for the 2011 edition, but the winning XD<br />
team will assume ownership of the production once the award is announced in April 2011. The<br />
printing and design will be funded with XD funds, and the production schedule will be<br />
determined by the winning XD team. We anticipate it will be delivered in time for SC’11. The<br />
piece will showcase XD science and technology.<br />
TeraGrid’10 Conference<br />
Elizabeth Leake (ER Team Leader, UC-GIG) and Jan Zverina (SDSC) served as co-chairs for the<br />
TeraGrid’10 Communication Committee. Faith Singer-Villalobos (TACC), Bill Bell (NCSA),<br />
Daphne Seifert (IU), Trish Barker (NCSA), Shandra Williams (PSC), and Michael Schneider<br />
(PSC) also contributed. Warren Froelich served as deputy chair for the conference, and most of<br />
the ER team had a role with the communication efforts of the conference.<br />
TeraGrid Resource Allocation Committee ER Team Involvement (TRAC)<br />
Efforts to identify stories via the TeraGrid Resource Allocation Committee (TRAC) meetings<br />
continued. In April, Elizabeth Leake and Dan Katz began to share the responsibility on behalf of<br />
TeraGrid External Relations so that it wasn’t necessary for both to attend all meetings. A printed<br />
letter from Leake and Katz is now included in reviewer packets for each TRAC meeting. The<br />
letter reminds reviewers to identify those proposals that are especially transformational in their<br />
field, or that use technology in new or unique ways. They are also asked to watch for projects that<br />
demonstrate collaborations with other grids, across several domains, or with international<br />
partners.<br />
The TRAC Deadline Announcement is disseminated by Leake each quarter about two weeks<br />
prior to the deadline. The announcement is published in HPCWire, iSGTW, Supercomputing<br />
Online, and other relevant external media, as well as TeraGrid User News and the TG.org news<br />
site. We typically wait until we have the statistics from the previous cycle, and include those in<br />
the announcement. Therefore, it serves a dual purpose—deadline announcement and summary of<br />
the previous cycle’s applications and awards. Although TeraGrid’s allocation process is<br />
becoming progressively more oversubscribed, we have been increasing efforts to push this<br />
announcement in hopes of engaging a broader user base with TeraGrid resources. The increased<br />
competitive element could result in more creative uses of TeraGrid resources, better justification<br />
of resources, and better allocation applications overall.<br />
Misc. External Relations Working Group Activity in Q210<br />
All TG-ER team members conduct external relations efforts for their RP sites: they disseminate<br />
Science Highlights, represent TeraGrid, and write related content for local, regional, and national<br />
sources. Many attend professional conferences on behalf of TeraGrid.<br />
In April, Leake attended the fifth Enabling Grids for E-SciencE (EGEE) user conference in<br />
Uppsala, Sweden, at the invitation of Bob Long, director of EGEE. Since this was EGEE’s<br />
final conference, just prior to its transition to the European Union’s European Grid Initiative<br />
(EGI), the event paralleled the NSF-funded TeraGrid’s transition to the eXtreme Digital (XD)<br />
phase of cyberinfrastructure. Leake discussed EGEE’s external relations experience in navigating<br />
the transition with its administration and with the European editor of International Science<br />
Grid This Week (iSGTW). Leake wrote two stories highlighting topics<br />
relevant to both TeraGrid and EGEE users. One story, about Science Gateways and Portals,<br />
highlighted middleware bridging efforts—what it takes for users to successfully leverage both<br />
American and European CIs. The second story identified the global need for long-term storage<br />
solutions that will serve generations to come.<br />
TeraGrid’s content management system, LifeRay, continued to have issues, which delayed the<br />
wiki transition and the ER training efforts we had hoped to undertake this quarter. Leake was<br />
granted minimal content management rights, but changes were limited, and often stalled, due to<br />
ongoing problems with LifeRay. Williams will be trained in Q310 so that she can take an active<br />
role in a website facelift and the wiki look-and-feel. Dudek will continue to maintain the wiki<br />
content, and he will transition it to the LifeRay environment once Williams designs a new<br />
look-and-feel (skin) and organization of content. Both Dudek and Williams will interface with<br />
the User Facing Projects working group in Q310.<br />
TeraGrid’s collaboration with International Science Grid This Week produced a statement of<br />
work; OSG and iSGTW administration worked to identify funding sources.<br />
In late April, Leake was invited to Marquette University, Milwaukee, Wisconsin, to talk with<br />
their administration and staff about TeraGrid in conjunction with a CI Days event. Faculty from<br />
the University of Wisconsin at Milwaukee also attended. Marquette has a new computational<br />
science program, and two new TeraGrid campus champions. They were looking for ways to<br />
leverage TeraGrid in the classroom. Since their Fall ’10 curriculum included coursework about<br />
cloud computing, Leake suggested they contact Carol Song and others at Purdue University to<br />
explore the use of Wispy, TeraGrid’s first production cloud environment. They did, and the<br />
exchange resulted in a Marquette TeraGrid cloud computing course (utilizing<br />
Hadoop/MapReduce) that is now optimized for the classroom beginning Fall ’10.<br />
ER-RP Reports for Q210:<br />
NCSA<br />
TeraGrid RP local external relations staff developed local site-specific content as well as<br />
contributed to TeraGrid-wide activities and publications. External relations staff provide web and<br />
print media outlets with content detailing the successes of the TeraGrid project and its user<br />
community. They<br />
also develop materials for Supercomputing and TeraGrid conferences.<br />
NICS<br />
NICS developed two feature articles focusing on scientific breakthroughs enabled by computing<br />
on NICS resources. The first of these articles highlights researcher Stephen Paddison and his<br />
team as they uncover the fundamental properties of Proton Exchange Membrane fuel cells. The<br />
other feature focused on researchers Greg Voth and Gary Ayton and their work on understanding<br />
the behavior of HIV, the virus that causes Acquired Immune Deficiency Syndrome. NICS<br />
participated along with other TeraGrid external relations staff in nominating, reviewing, and<br />
selecting science stories to be included in the upcoming TeraGrid Science Highlights booklet, to<br />
be released at SC10 in New Orleans.<br />
PSC<br />
PSC Senior Computer Scientist Art Wetzel worked on a 360° Gigapanorama photo of the<br />
Pittsburgh area from the top of the U.S. Steel Tower. Other team members are David Bear of Carnegie<br />
Mellon University’s Studio for Creative Inquiry, Randy Sargent, Paul Heckbert, Dror Yaron, and<br />
Goutham Mani of the Create Lab at Carnegie Mellon University, Fran Flaherty of CMU Digital<br />
Print Lab, and Ruth Karlin. See http://www.gigapan.<strong>org</strong>/gigapans/47373/ for the view and more<br />
about production of the Gigapanorama.<br />
Rebecca Horne, Photo Editor of Discover, has asked for and received permission to use various<br />
PSC supercomputing visualizations in a “supercomputing portfolio” planned for an issue of<br />
Discover magazine this fall. PSC provided high-resolution versions of Jacobo Bielak’s earthquake<br />
image that was used for PSC’s 2010 calendar and two images of Leopold Grinberg’s brain<br />
arteries simulation data.<br />
SDSC<br />
TeraGrid RP local external relations staff developed local site-specific content as well as<br />
contributed to TeraGrid-wide activities and publications. External relations staff provide web and<br />
print media outlets with content detailing the successes of the TeraGrid project and its user<br />
community. They<br />
also develop materials for Supercomputing and TeraGrid conferences.<br />
SDSC’s Communications Director has served as Deputy Chair for the TG’10 conference in<br />
Pittsburgh, Pa., and has been responsible for preparing committee agendas and notes, while<br />
overseeing specific tasks as assigned by the event’s co-chairs. In addition, SDSC’s Media<br />
Manager has served as co-chair of the TG’10 Communications committee, while SDSC’s graphic<br />
designer has provided graphic support for this meeting, including the development of its graphic<br />
identity (brand) used for the website, electronic and print items.<br />
SDSC’s Communications Director also is serving as the editor of TG’s annual Science<br />
Highlights. During the past quarter, he has led a subcommittee of the TG External Relations team<br />
charged with reviewing and selecting stories for this year’s issue.<br />
SDSC’s communications team also has actively participated in TG ER activities and meetings,<br />
including assistance with press releases, media relations, event planning, and other related<br />
activities.<br />
TACC<br />
Below are five feature articles (with links) published on the TACC website within the reporting<br />
period that highlight TeraGrid resources and projects:<br />
• Cleaner Coal through Computation: Stanford researcher uses Ranger to design better<br />
pollution controls<br />
• The Universe's Magnetic Personality: Simulations by University of Wisconsin<br />
researchers explore turbulence at the cosmic scale<br />
• Blueprint for the Affordable Genome: Ranger helps University of Illinois researchers<br />
simulate nanopore gene sequencer<br />
• Catching Evolution in Action: UT biologist studies coral and worm RNA to learn how<br />
new traits develop<br />
• In Search of Tomorrow's Cures: University of Texas researchers set out to benchmark<br />
and develop computational approaches for accurate, efficient drug discovery<br />
10.4 Enhancement of Diversity<br />
NCAR<br />
After student qualifications and rankings have been established, the SIParCS summer internship<br />
program considers diversity factors when matching students with mentors, projects, and funding.<br />
In this way, two additional students from minority-serving institutions were supported by the<br />
SIParCS program.<br />
TACC<br />
In the spring and summer, TACC accommodates a wide range of tour requests from school<br />
programs and summer camps for under-represented students. Highlights include the FirstByte<br />
summer camp, a one-week program that introduces high school girls to the field of computer<br />
science; the My Introduction to Engineering (MITE) summer camp for high school sophomores<br />
and juniors from under-represented minorities; the final presentations by the Del Valle High<br />
School ROBOTech team (80% under-represented); and the Consider Every Option (CEO)<br />
Program for high school women to explore how engineering benefits society and impacts the<br />
world. Each of these groups was given a presentation about TACC and a tour of the visualization<br />
laboratory, with hands-on activities. TeraGrid was described in each presentation.<br />
10.5 International Collaborations<br />
Indiana<br />
In early June, Alcatel-Lucent announced the first commercial 100 Gb/s link, connecting<br />
Technische Universitaet Dresden (TUD) and Bergakademie Freiberg. Because of Indiana<br />
University's expertise with wide-area file systems and its long-standing collaboration with<br />
TUD, IU will be involved with the testing and deployment of a filesystem bridging both<br />
German campuses.<br />
On April 21, Indiana University and the Center for Information Services and High Performance<br />
Computing (ZIH) at the Technische Universität Dresden co-hosted a hands-on workshop on<br />
Vampir, a tool designed to conduct performance analyses and diagnose problems in serial and<br />
parallel supercomputing applications. The full-day workshop had joint presentations by members<br />
of ZIH and IU's High Performance Applications (HPA) group in the morning, followed by a hands-on tutorial<br />
in the afternoon. Vampir will be a fundamental component of FutureGrid, a collaborative grid and<br />
cloud computing test-bed funded by the NSF and developed under the leadership of the PTI<br />
Digital Science Center.<br />
ORNL<br />
The ORNL RP continues to collaborate with the Swiss NSF-funded Sinergia project, which is<br />
focused on fitting diffuse neutron and x-ray scattering patterns in order to characterize the<br />
molecular structure of materials exhibiting diffuse scattering features such as streaks.<br />
PSC<br />
PSC Senior Scientific Specialist Yang Wang has accepted an invitation from Prof. Jianguo Hou,<br />
Co-Director, International Center for Quantum Design of Functional Materials (ICQD) and<br />
President, University of Science and Technology of China in Hefei, to visit ICQD as a<br />
Distinguished Guest Scientist during July. He will present several lectures and interact with<br />
faculty members and ICQD scientists to promote deep understanding and effective collaborations<br />
in the fields of multiscale materials science modeling and simulation. While there, Dr. Wang will<br />
present an invited talk at the International Workshop on Nanomagnetism and Spintronics:<br />
Current Status and Outlook in Linfen.<br />
TACC<br />
TACC continues to support researchers and students participating in the UT Austin-Portugal<br />
Collaboratory. A member of the TACC Visualization and Data Analysis group presented<br />
hands-on sessions at the Summer School on e-Science with GPUs. Students were provided training<br />
accounts and allocation on the XD-Vis system Longhorn. The hands-on sessions were held on<br />
June 16-17 in Braga, Portugal.<br />
http://utenportugal.<strong>org</strong>/2010/04/ibergrid%E2%80%992010-and-the-summer-school-in-e-sciencewith-many-core-cpugpu-processors-in-braga-this-may-and-june/<br />
10.6 Broader Impacts<br />
NICS<br />
NICS has an ongoing relationship with the University of Tennessee-Knoxville’s chapter of the<br />
Society of Women Engineers. NICS Assistant Director Patricia Kovatch has given talks at<br />
campus meetings of the group, and NICS will be assisting SWE when it hosts its Regional<br />
Conference in the Spring of 2011.<br />
PSC<br />
Kristopher Hupp, a social studies teacher at Cornell High School, in the Coraopolis School<br />
District near Pittsburgh, asked PSC to help the 39 members of the senior class participate in a<br />
county-wide competition to design and code an electronic voting system to be implemented in<br />
high schools across Pennsylvania for a mock election in the fall of 2010. However, since none of<br />
the students had any programming experience and the competition timeline was short, PSC staff<br />
thought developing a voting system would be too ambitious for them. Instead, PSC<br />
offered to talk to the students about the election process with an emphasis on security and how<br />
that could relate to electronic voting systems. As an alternative to a lecture, PSC opted to use a<br />
role-playing activity to illustrate opportunities for fraud and to encourage critical thinking. Mr.<br />
Hupp sent a nice letter of thanks in which he said, “The activities you put together for our<br />
students not only engaged them, but also helped me begin the dialogue with my students on the<br />
importance of voting and election security. The feedback I got from my students and my<br />
colleague, Megan Fuga, was extremely positive.”<br />
TACC<br />
TACC is endeavoring to broaden the scope of the research and education projects that it supports,<br />
including outreach to the Humanities, Arts & Social Sciences. The first Digital Media & Arts<br />
event, held on April 29, 2010, attracted over 200 artists and creative experts from the Austin<br />
community. In the Vislab, attendees were introduced to advanced computing and visualization<br />
techniques and technologies. This event was the first in a planned series which will broaden<br />
TACC’s reach in the Arts & Humanities. In addition, a workshop is being planned for Social<br />
Sciences researchers in the fall.<br />
10.7 Detailed Information on Events and Attendance<br />
Indiana<br />
Type | Title | Location | Date(s) | Hours | Number of Participants | Number of Underrepresented People | Method<br />
Workshop | Performance Analysis Using the Vampir Toolchain | IU Bloomington | April 21, 2010 | 9 | 13 | 1 | Synchronous<br />
LONI<br />
Type | Title | Location | Date(s) | Hours | Number of Participants | Number of Underrepresented People | Method<br />
Workshop | LONI HPC Workshop at LSU | LSU | 26-28 Apr 2010 | 24 | 9 | | S<br />
Tutorial | Introduction to Ruby | LSU | 15 Apr 2010 | 2 | 14 | | S<br />
Tutorial | Introduction to Cactus | LSU | 21 Apr 2010 | 2 | 14 | | S<br />
Tutorial | Introduction to CPMD | LSU | 28 Apr 2010 | 2 | 3 | | S<br />
Tutorial | Profiling with TAU | LSU | 6 May 2010 | 2 | 6 | | S<br />
Tutorial | Intro to Linux and VI | LSU | 3 Jun 2010 | 3 | 15 | | S<br />
Tutorial | Intro to HPC | LSU | 10 Jun 2010 | 3 | 9 | | S<br />
Tutorial | Intro to HPC Environments | LSU | 17 Jun 2010 | 3 | 10 | | S<br />
Tutorial | Intro to HPC Job Mgmt | LSU | 24 Jun 2010 | 3 | 7 | | S<br />
NCAR<br />
Type | Title | Location | Date(s) | Hours | Number of Participants | Number of Underrepresented People | Method<br />
Conference presentation | Campus, Regional, and National HPC: Infrastructure and services for research and education | Colorado School of Mines, Golden, Colo. | May 7, 2010 | 1 | 50 | 5 | S<br />
Conference presentation | Digitizing the Planet: The role of cyberinfrastructure in understanding how our world works | NCAR, Boulder, Colo. | June 16, 2010 | 1 | 10 | 2 | S<br />
Tutorial | The 11th Annual WRF Users’ Event | NCAR, Boulder, Colo. | June 21-25, 2010 | 2 | 20 | 0 | S<br />
NCSA<br />
Type | Title | Location | Date(s) | Hours | Number of Participants | Number of Underrepresented People | Method<br />
Online Tutorial | Access Grid Tutorials | CI-Tutor | Ongoing | N/A | 34 | Unknown | A<br />
Online Tutorial | BigSim: Simulating PetaFLOPS Supercomputers | CI-Tutor | Ongoing | N/A | 10 | Unknown | A<br />
Online Tutorial | Debugging Serial and Parallel Codes | CI-Tutor | Ongoing | N/A | 53 | Unknown | A<br />
Online Tutorial | Getting Started on the TeraGrid | CI-Tutor | Ongoing | N/A | 39 | Unknown | A<br />
Online Tutorial | Intermediate MPI | CI-Tutor | Ongoing | N/A | 104 | Unknown | A<br />
Online Tutorial | Introduction to MPI | CI-Tutor | Ongoing | N/A | 557 | Unknown | A<br />
Online Tutorial | Introduction to Multicore Performance | CI-Tutor | Ongoing | N/A | 70 | Unknown | A<br />
Online Tutorial | Introduction to OpenMP | CI-Tutor | Ongoing | N/A | 245 | Unknown | A<br />
Online Tutorial | Introduction to Visualization | CI-Tutor | Ongoing | N/A | 35 | Unknown | A<br />
Online Tutorial | Multilevel Parallel Programming | CI-Tutor | Ongoing | N/A | 48 | Unknown | A<br />
Online Tutorial | Parallel Computing Explained | CI-Tutor | Ongoing | N/A | 91 | Unknown | A<br />
Online Tutorial | Parallel Numerical Libraries | CI-Tutor | Ongoing | N/A | 36 | Unknown | A<br />
Online Tutorial | Performance Tuning for Clusters | CI-Tutor | Ongoing | N/A | 37 | Unknown | A<br />
Online Tutorial | Tuning Applications for High Performance Networks | CI-Tutor | Ongoing | N/A | 23 | Unknown | A<br />
NICS<br />
Type | Title | Location | Date(s) | Hours | Number of Participants | Number of Underrepresented People | Method<br />
Workshop | NICS/OLCF XT Hex-Core Workshop | Oak Ridge National Lab, TN | May 10-12, 2010 | 24 | 65 | 20 | Live<br />
Workshop | Computational Thinking for Educators | ORAU, Oak Ridge, TN | Jun 12-14, 2010 | 21 | 20 | 17 | Live<br />
College Class | U. Tennessee-Knoxville: Data Structures | Knoxville, TN | 2010 Spring semester | | 45 | 16 | Live<br />
College Class | U. Tennessee-Knoxville: graduate HPC class | Knoxville, TN | 2010 Spring semester | | 18 | 3 | Live<br />
College Class | Wofford College | Spartanburg, SC | 2010 Spring semester | | 6 | 2 | Live<br />
College Class | Tufts University | Boston, MA | 2010 Spring semester | | 12 | Unknown | Live<br />
College Class | Brown University | Providence, RI | 2010 Spring semester | | 11 | Unknown | Live<br />
College Class | University of Colorado | Boulder, CO | 2010 Spring semester | | 14 | Unknown | Live<br />
ORNL<br />
PSC<br />
Type | Title | Location | Date(s) | Hours | Number of Participants | Number of Underrepresented People | Method*<br />
Presentation | Computational Thinking in K-12 Teaching, SITE (Society for Information Technology and Teacher Education) International | San Diego, CA | 4/1/10 | 2 | 30 | 16 | S<br />
Workshop | SafeNet Cyber Safety Train the Trainer Workshop | PSC, Pittsburgh, PA | 5/26/10 | 6 | 17 | 11 | S<br />
Workshop | Computational Biophysics using NAMD and VMD | PSC, Pittsburgh, PA | 5/10-5/14/10 | 45 | 28 | 8 | S<br />
Workshop | Better Educators of Science for Tomorrow (BEST) Teachers Day | PSC, Pittsburgh, PA | 5/19/10 | 8 | 5 | 5 | S<br />
Workshop | Summer Institute in Bioinformatics | PSC, Pittsburgh, PA | 6/14-6/25/10 | 95 | 30 | 23 | S<br />
Workshop | Better Educators of Science for Tomorrow (BEST) | PSC, Pittsburgh, PA | 6/14-6/25/10 | 80 | 5 | 5 | S<br />
Presentation | 5th Annual High Performance Computing Day | Lehigh University, Lehigh, PA | 4/23/10 | 4 | 20 | 5 | S<br />
Workshop | TeraGrid New User Training | TG ReadyTalk line | 4/30/10 | 2 | 22 | | S<br />
Workshop | High Performance Computing Workshop | Chatham University, Pittsburgh, PA | 5/19-5/20/10 | 16 | 7 | 0 | S<br />
Workshop | Introduction to TG Resources at the 4th PA-OH-WV Simulators Meeting | Carnegie Mellon, Pittsburgh, PA | 6/15/10 | 1 | 48 | | S<br />
Symposium/Conference | Applications of GPUs in Chemistry and Material Science | University of Pittsburgh, Pittsburgh, PA | 6/28-6/29/10 | 16 | | | S<br />
Presentation | Overview of HPC for Industry | Erie Technology Incubator, Erie, PA | 5/7/10 | 2 | 11 | 4 | S<br />
* Methodology: synchronous (e.g., face to face in classroom or live instruction over the Internet) or asynchronous (e.g., via the Internet using WebCT or other authoring software). S=synchronous; A=asynchronous.<br />
Purdue<br />
Type | Title | Location | Date(s) | Hours | Number of Participants | Number of Underrepresented People | Method<br />
Graduate Course | Iowa High Performance Computing Summer School, Bill Whitson | The University of Iowa | 25-MAY-2010 and 26-MAY-2010 | 48 | 16 | (unknown) | S<br />
Presentation | Matching Users to Resources on the TeraGrid, Kim Dillman | Purdue | 12-MAY-2010 | 1 | | | ReadyTalk<br />
Paper Presentation | NSF Campus Bridging Technologies conference at IUPUI, Kay Hunt, Scott Lathrop | Indianapolis, IN | April 6-7, 2010 | 0.5 | n/a | n/a | <br />
Presentation | “Intro to TeraGrid and Campus Champions” at the CI Days event at Notre Dame, Kim Dillman and Kay Hunt | Fort Wayne, IN | April 29, 2010 | 1 | | | <br />
Presentation | “Virtual Breast Project for Training and Research”, Purdue Center for Cancer Research, Dave Braun | West Lafayette, IN | May 14, 2010 | 0.5 | 30 | | <br />
SDSC<br />
Type | Title | Location | Date(s) | Hours | Number of Participants | Number of Underrepresented People | Method<br />
W | Exploring the World of Digital Art and Design | SDSC | 04/01-02/10 | 12 | 28 | 13 | S<br />
W | BE SMART: Experience Proteins Through Lab Activities and Visualization | SDSC | 04/03/10 | 3 | 26 | 26 | S<br />
W | Accessible Scientific Programming with Java: Visualizing the Human Genome - Part 3 | SDSC | 04/06/10 | 2 | 23 | 8 | S<br />
W | Water Conservation Lessons for the Classroom | SDSC | 04/13/10 | 2 | 9 | 8 | S<br />
W | Moonwalking: Moon Exploration - Past, Present and Future | SDSC | 04/17/10 | 3 | 44 | 26 | S<br />
W | DNA Analysis 1: DNA Electrophoresis | SDSC | 04/20/10 | 2 | 18 | 16 | S<br />
W | Questioning and Assessment Techniques to Improve Learning | SDSC | 04/20/10 | 2 | 12 | 8 | S<br />
W | SMART Team students present at ASBMB | Anaheim, CA | 04/24/10 | 4 | 18 | 11 | S<br />
W | The Wondrous Ways Google Earth Can be Used in the Classroom | SDSC | 04/27/10 | 2 | 22 | 13 | S<br />
W | Environmental Digital Photography Project: Part 2 | SDSC | 05/07/10 | 2 | 7 | 7 | S<br />
W | HTML Programming for High School Students Part 1 | SDSC | 05/08/10 | 3 | 19 | 9 | S<br />
W | Bridging the Digital Divide: What Teachers Need to Know | SDSC | 05/10/10 | 3 | 35 | 21 | S<br />
W | DNA Analysis 2 | SDSC | 05/13/10 | 2 | 18 | 16 | S<br />
W | A Virtual Tour of Our Own California Parks | SDSC | 05/13/10 | 3 | 10 | 4 | S<br />
W | Beginning SMART Board workshop for Pre-service Educators | SDSC | 05/19/10 | 2 | 17 | 15 | S<br />
W | HTML Programming for High School Students Part 2 | SDSC | 05/22/10 | 3 | 19 | 9 | S<br />
W | Proteins in Action 2 | SDSC | 05/25/10 | 2 | 18 | 16 | S<br />
W | Interactive Digital Books as a Supplement to Classroom Textbooks | SDSC | 05/26/10 | 2 | 14 | 12 | S<br />
W | SpaceTECH: Standards-based Astronomy Activities | SDSC | 05/29/10 | 3 | 28 | 20 | S<br />
W | HTML Programming for High School Students Part 3 | SDSC | 06/05/10 | 3 | 19 | 9 | S<br />
W | Proteins in Action 3 | SDSC | 06/08/10 | 2 | 18 | 16 | S<br />
W | Exploring the World of Digital Art and Design | SDSC | 06/28-30/10 | 18 | 54 | 24 | S<br />
C | Gordon: A New Kind of Supercomputer for Data-Intensive Applications | Princeton Inst. for CS&E (PICSciE) | 3/29/10 | 1 | UNK | UNK | S<br />
C | Data-Intensive Solutions at SDSC | IEEE MSST2010 | 5/3-7/2010 | 1 | UNK | UNK | S<br />
C | Gordon: An Architecture for Data-Intensive Computing | UCSD | 4/12/10 | 1 | UNK | UNK | S<br />
TACC<br />
Type | Title | Location | Date(s) | Hours | Number of Participants | Number of Underrepresented people | Method<br />
Workshop | Profiling and Debugging Serial and Parallel Programs | TACC | 4/8/2010 | 4 | 12 | 1 | S<br />
Workshop | Introduction to Scientific Visualization on Longhorn | TACC | 4/12/2010 | 7 | 15 | 6 | S<br />
Workshop | Fortran90/95/2003 Programming for HPC | TACC | 4/22/2010 | 3 | 13 | 2 | S<br />
Workshop | Introduction to Parallel Programming with OpenMP (PETTT) | TACC | 4/30/2010 | 6 | 15 | 1 | S&A<br />
Workshop | Scientific Software Day 2010 | TACC | 5/10/2010 | 8 | 22 | 5 | S<br />
Workshop | Introduction to Scientific Visualization on Longhorn | Huntsville, AL | 5/11-12/2010 | 16 | 8 | | S<br />
Workshop | Introduction to Parallel Computing on Ranger | Cornell | 5/19-20/2010 | 16 | 35 | | S<br />
Workshop | Introduction to Scientific Visualization on Longhorn | Portugal | 5/28/2010 | 8 | 11 | | S<br />
Workshop | Introduction to Scientific Visualization on Longhorn | TACC | 6/4/2010 | 7 | 22 | 6 | S<br />
Workshop | Introduction to Parallel Computing on Ranger and Lonestar | TACC | 6/10-11/2010 | 17 | 21 | 4 | S<br />
Workshop | Profiling and Optimization of Parallel Programs (PETTT) | TACC | 6/30/2010 | 6 | 9 | 1 | S&A<br />
Table: Ranger Virtual Workshop Usage<br />
Quarter Total Logins Unique Logins<br />
Q1 '08 74 20<br />
Q2 '08 129 26<br />
Q3 '08 198 31<br />
Q4 '08 135 31<br />
Q1 '09 184 29<br />
Q2 '09 638 216<br />
Q3 '09 504 403<br />
Q4 '09 438 289<br />
Q1 '10 425 349<br />
Q2 '10 657 377<br />
Total 4220 1639<br />
Type | Title | Location | Date(s) | Number of Participants | Number of Underrepresented people<br />
The Austin Forum on Science, Technology & Society | Tony Befi and Brad McCredie: “IBM and Austin: Partners in a Smarter Planet” | AT&T Conference Center | 4/6/2010 | 89 | 38<br />
The Austin Forum on Science, Technology & Society | Dr. Jessica Hanover, Dr. Steven Leslie, Greg W. Hartman, Joe Skraba: “BioTech: The Next Big Thing” | AT&T Conference Center | 5/4/2010 | 150 | 75<br />
The Austin Forum on Science, Technology & Society | Brewster McCracken: “Creating a Dynamic Clean Energy Economy” | AT&T Conference Center | 6/1/2010 | 101 | 25<br />
Digital Media event at TACC Visualization Laboratory | “Explore Creativity in Digital Space” | TACC Vislab | 4/29/2010 | 200 | <br />
Ranger Presentation | Sun City Computer Club | Sun City, Texas | 5/17/2010 | 100 | <br />
UT Austin Central Development Event | “Professional Advisor Day” | TACC Vislab | 5/14/2010 | 62 | <br />
Other TACC and TeraGrid Tours | Miscellaneous | TACC Vislab and ROC | April - June | 458 | 333<br />
Totals | | | | 1160 | 471<br />
SDSC<br />
Dash Early Users<br />
Project Title | Principal Investigator | Institution(s) | Project Overview | Status/Outcomes<br />
Protein Databank | Phil Bourne | University of California, San Diego | Protein Data Bank (PDB) is a worldwide repository of information about the 3D structures of large biological molecules, including proteins and nucleic acids. Alignment-DB, from the PDB group, performs predictive science with queries on pair-wise correlations and alignments of protein structures that predict protein fold space and other properties. | Initial tests show speedup of up to 69x on a limited set of queries when using flash drives. Tests are ongoing to evaluate the possibility of running tests for the full PDB. This represents an important use case for Gordon, i.e., a semi-persistent database installed on SSDs. This project is providing a mechanism to assess this architecture.<br />
Biological Networks | Mihail Baitaliuc | University of California, San Diego | Biological Networks integrates over 100 public databases for thousands of eukaryotic, prokaryotic and viral genomes and provides software tools needed to decipher gene regulatory networks, sequence and experimental data, functional annotation, and transcriptional regulatory region analysis. The software platform supports biological pathway analysis, querying and visualization of gene regulation and protein interaction networks, and metabolic and signaling pathways. | Generated a synthetic workload to observe query plans using indexes and sequential scans. Initial tests show speedup of up to 186x for typical queries when the database is installed on SSDs.<br />
Palomar Transient Factory | Peter Nugent | Caltech; UC Berkeley; Berkeley Lab (LBL); Yale University | This project’s goal is to support the PTF Image processing pipeline and enable transient detection. This requires large, random queries across multiple databases. There are approximately 100 new transients every minute. The key is to handle all the I/O issues related to image processing. | PTF transient search tested on a Dash I/O node with SSD storage. Increased query performance of up to 161x was achieved. Discussions are underway for sustained long-term usage of Dash (and Gordon) for research by PTF.<br />
Multiscale Systems Center (MUSYC) | Cathie Olschanowsky | UC San Diego | Study energy consumption of Dash I/O nodes and provide general relevancy of I/O performance and energy consumption. | Studied the software and hardware architecture requirements on Dash needed to perform this study. Set up the initial hardware configuration to carry out the experiments needed for the project.<br />
Scalable Graph Framework | Sandeep Gupta | University of California, San Diego | The project, Scalable Graph Framework (SGF), aims to develop a graph processing engine for Dash. It is designed to address scalability and optimization challenges faced in processing massively large graphs in the domains of bioinformatics, semantic neuroinformatics and graph modeling. | Project will deliver a suite of preprocessing strategies/algorithms for working efficiently on bioinformatics and semantic neuroinformatics workloads. The preprocessing strategy involves partitioning the graph and distributing the workload across the cluster. Algorithms use a hybrid of shared memory and MPI to effectively exploit the Dash architecture.<br />
Architecture-aware compiler and algorithm development | Allan Snavely | U. Tenn.; GATech Res. Inst.; ORNL; U. Colorado; Portland Group | | All groups have accounts on Dash and have been conducting preliminary tests on the machine.<br />
RAXML | Wayne Pfeiffer | University of California, San Diego | Benchmark and port the RAXML code to Dash. | Completed porting and benchmarking of the code on Dash. This includes testing on the vSMP node.<br />
Abaqus | Mahidhar Tatineni | University of California, San Diego | The goal is to evaluate the potential of using flash storage to accelerate commercial finite element codes like Abaqus, NASTRAN, ANSYS, etc., which have large I/O requirements during runs. Wide applicability since there is a large user base for such codes. | Ran ABAQUS standard test cases (S2A1, S4B) using flash storage for I/O. Preliminary results show ~50% improvement for 8-core runs.<br />
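The query-plan comparison described for the Biological Networks entry above can be illustrated in miniature. The following is a hypothetical sketch using Python's built-in SQLite module, not the project's actual database engine or schema; the table, column, and index names are invented:<br />

```python
import sqlite3

# Hypothetical miniature of a synthetic-workload test: ask the planner
# how it will execute the same query with and without an index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interactions (gene TEXT, partner TEXT, score REAL)")
conn.executemany(
    "INSERT INTO interactions VALUES (?, ?, ?)",
    [("g%d" % i, "p%d" % (i % 100), i / 1000.0) for i in range(10000)],
)

query = "SELECT * FROM interactions WHERE gene = 'g42'"

# Without an index, the planner falls back to a sequential scan.
scan_plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

# With an index on the filtered column, it switches to an index search.
conn.execute("CREATE INDEX idx_gene ON interactions (gene)")
index_plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

print(scan_plan)   # e.g. "SCAN interactions"
print(index_plan)  # e.g. "SEARCH interactions USING INDEX idx_gene (gene=?)"
```

The exact wording of the plan strings varies by SQLite version, but the scan-versus-search distinction is the behavior a synthetic workload of this kind is designed to expose before timing the plans on flash versus spinning disk.<br />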
Table: Active Dash TeraGrid Allocations<br />
Project Title | Principal Investigator | Institution(s) | Project Overview | Status/Outcomes<br />
Performance Evaluation Using the TAU Performance System (R) | Sameer Shende | University of Oregon | The project aims to analyze profiling data from runs on Kraken (up to 64k cores). These are very large files that need to be read into memory (merged) so that users can analyze the performance data and visually examine the results. The vSMP node is attractive since it has a lot of aggregated memory. The PI’s group develops the widely used TAU profiling and tracing toolkit for performance analysis of parallel programs written in Fortran, C, C++, Java, and Python. | Application ported to Dash. Preliminary testing conducted on the vSMP node.<br />
Improving the Efficiency of Eukaryotic Genome Sequence Assembly Using DASH | Mark Miller | University of California, San Diego | The proposal aims to address the question of whether the BFAST code can be sped up by taking advantage of the combination of SSDs and high-RAM nodes available on the DASH machine. The request is for a small allocation to benchmark the existing BFAST code on DASH, and to explore how the BFAST code can be modified to take further advantage of the DASH architecture. One specific part of the BFAST code that should improve dramatically on the DASH architecture is the initial file preparation step. In this step, very large data sets (typically 8-85 GB) are read from disk storage and written into a new format (fastq) on disk. This operation is completely I/O bound on conventional machines, and requires 7 hours to complete on the Triton Cluster at SDSC for our current data set. Conducting this operation on SSDs should speed this step significantly and may immediately cut the run time nearly in half. The aim is to create a new body of knowledge on how data-intensive computing can be done on a new architecture and enable new, efficient approaches to Next Generation Sequence assembly. | TG Startup allocation has been approved and preliminary work is underway on Dash.<br />
Large Synoptic Survey Telescope Data Challenge 3b | Tim Axelrod | University of Arizona | The Large Synoptic Survey Telescope (LSST), to be constructed at Cerro Pachon in the Chilean Andes and slated for first light in 2014 and full operation by 2016, will produce over 15 terabytes of raw astronomical data each night, resulting in a database catalog of 22 petabytes and an image archive of 100 petabytes by the end of the 10-year survey. This TG-allocated project addresses the goals of the second half of the third data challenge (DC3b), which focuses on the production of science data products from an archive’s worth of data. | Allocation approved; preliminary discussions completed with SDSC contacts.<br />
Data Transposition Development for Exa-scale Data in Memory | John Helly | University of California, San Diego | Cloud data is voluminous, and this is a pilot study of the utility of Dash for use in CMMAP data analysis activities. Temporal snapshots of data need to be read into memory and merged, and the resulting dataset is then analyzed. | Startup allocation was approved and preliminary discussions completed. An OpenMP proof-of-concept code was developed and tested; it read 256 temporal snapshots with an aggregate size of 128 GB into memory.<br />
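The snapshot-merge pattern in the last entry above (read many temporal snapshots concurrently, then combine them in memory) can be sketched as follows. This is a minimal Python analogue rather than the project's OpenMP proof of concept; the snapshot files and their float64 layout are invented for illustration:<br />

```python
import os
import struct
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_snapshot(path):
    # Unpack one snapshot file into a list of float64 samples.
    with open(path, "rb") as f:
        raw = f.read()
    return list(struct.unpack("%dd" % (len(raw) // 8), raw))

# Tiny stand-in snapshot files (the actual runs read 256 snapshots
# totaling 128 GB; this layout is hypothetical).
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(4):
    path = os.path.join(tmpdir, "snapshot_%03d.bin" % i)
    with open(path, "wb") as f:
        f.write(struct.pack("4d", *([float(i)] * 4)))
    paths.append(path)

# Read all snapshots concurrently, then merge into one in-memory series.
with ThreadPoolExecutor(max_workers=4) as pool:
    merged = [x for snap in pool.map(read_snapshot, paths) for x in snap]

print(len(merged))  # 16 samples held in memory
```

`ThreadPoolExecutor.map` returns results in submission order, so the merged series preserves temporal order even though the reads complete concurrently.<br />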
11 Project Management<br />
11.1 Highlights<br />
A new section of the quarterly report, Section 9 (Evaluation, Monitoring, and Auditing), was added<br />
now that the first XD awards have been made for the XD Technology Audit Service and the XD<br />
Technology Insertion Service.<br />
11.2 Project Management Working Group<br />
11.2.1 Tracking<br />
The IPP/WBS tracking process is continuing, with project managers reviewing status with area<br />
directors. The area directors are finding the tracking process to be a useful exercise, providing<br />
them with a thorough overview of their area’s status against the plan. The update is included as an<br />
appendix to this report.<br />
11.2.2 Reporting<br />
This report begins the second year of TeraGrid providing a single, integrated quarterly report<br />
representing the activity across all NSF TeraGrid awards. The report template has evolved over<br />
the past year through improvements suggested upon the completion of each quarter’s report.<br />
Additionally, a new section of the quarterly report, Section 9 (Evaluation, Monitoring, and<br />
Auditing), was developed now that the first XD awards have been made for the XD Technology<br />
Audit Service and the XD Technology Insertion Service.<br />
12 Science Advisory Board Interactions<br />
In advance of the TeraGrid Science Advisory Board (SAB) meeting in San Diego in Q3, the<br />
following changes have been made to the SAB membership.<br />
Will be added to the SAB in July 2010:<br />
• Amy Apon-Arkansas<br />
• John Connolly-Kentucky<br />
• Kevin Franklin-Illinois<br />
• Jerry Ostriker-Princeton<br />
• Carlos Simmerling-Stony Brook<br />
• Ali Uzun-FSU<br />
• Greg Voth-UChicago<br />
Will be removed from the SAB after July 2010:<br />
• Eric Chassignet-FSU<br />
• Dave Kaeli-Northeastern<br />
• Luis Lehner-U. Guelph<br />
• Michael Macy-Cornell<br />
• Pat Teller-UTEP<br />
• Cathy Wu-University of Delaware and Georgetown<br />
13 Collaborations with Non-TeraGrid Institutions<br />
13.1 Grid Partnerships/Infrastructures<br />
Purdue continued to work with OSG engagement users on using TeraGrid Steele for<br />
“high-throughput HPC” (HTHPC) computation jobs (single node, 8-core). This effort leverages<br />
previous work with enabling easier submission of MPI jobs in the Purdue OSG effort. In this<br />
quarter, a total of 1765 OSG MPI jobs consumed 138,122 wallclock hours on Purdue Steele.<br />
13.2 Science Projects and Science Gateways<br />
Under the TG-DEISA Interoperability Project (funded as part of the LONI HPCOPS award),<br />
LONI expects to demonstrate interoperability of TG Queen Bee resources with DEISA in August,<br />
with the potential of extending support to all of TG soon after. The science component has begun<br />
using the available interoperability support, with results to be presented at TG’10 and to the UK<br />
e-Science Program.<br />
13.3 Data Systems<br />
IU worked closely with Mississippi State University to facilitate Scott Michael's astronomical<br />
research. By mounting IU's Data Capacitor Wide Area Filesystem at MSU, it was possible to<br />
bridge the campuses and prevent Michael from having to move 30 TB of data to MSU for<br />
analysis. This was an example of using a wide area filesystem to allow non-TeraGrid institutions<br />
to move their data onto the TeraGrid with ease by offering the user a standard filesystem<br />
interface.<br />
ORNL continues to work with the REDDnet collaboration. Three different TeraGrid sites<br />
(ORNL, SDSC, TACC) now have REDDnet nodes.<br />
John Cobb at ORNL continues to work with the DataONE DataNet project. TeraGrid as a whole<br />
has reached out to both awarded DataNet projects (DataONE and the Data Conservancy).<br />
13.4 Science Projects<br />
PSC is a member of two collaborations involving the NIH MIDAS Research Network, as the<br />
Computational Core for the U. Pittsburgh National Center of Excellence (COE) and as a research<br />
partner on the RTI Information Technology Resource. MIDAS is a collaborative network of<br />
research scientists funded by the NIH who use computational, statistical and mathematical<br />
models to understand infectious disease dynamics and thereby assist the nation to prepare for,<br />
detect and respond to infectious disease threats.<br />
1. In support of the U. Pittsburgh COE, PSC established infrastructure to facilitate<br />
development of software projects, including setup and management of a CVS server. The<br />
server deploys both ViewVC for viewing the CVS repository as a webpage and Google's<br />
coderev to generate browsable code reviews. These developments have led to better<br />
organization and software design for the COE's software projects. PSC developers have<br />
actively participated in three software projects: FRED, a new agent-based simulator<br />
specifically tailored to support features such as behavioral modeling, viral evolution, and<br />
emergency preparedness modeling; GAIA, a GIS Information-based visualization tool<br />
designed to represent epidemiological data; and HERMES, a discrete event simulation<br />
engine for modeling supply chain dynamics of vaccine cold-chains in developing nations.<br />
2. In support of the RTI Information Technology Resource, PSC Research Fellow Shawn<br />
Brown provided leadership for the MIDAS Software Working Group. PSC developed a<br />
web based portal, entitled MISSION (MIDAS Software Sharing and Information<br />
Outreach Network), with the purpose of facilitating collaboration between the various<br />
MIDAS Network software development efforts. The site provides a collaborative<br />
development environment for over fifty developers in seven different research groups,<br />
and it provides a platform for sharing software and promoting open access to MIDAS<br />
software tools. PSC also hosted the first “MIDAS Software Developers Workshop”<br />
focused on the issues associated with model development and productization of MIDAS<br />
tools, with 16 developers from around the world in attendance.<br />
Pan-STARRS -- the Panoramic Survey Telescope & Rapid Response System -- is an innovative<br />
design for a wide-field imaging facility being developed at the University of Hawaii's Institute for<br />
Astronomy. The immediate goal of Pan-STARRS is to discover and characterize Earth-approaching<br />
objects (both asteroids and comets) that might pose a danger to our planet. For<br />
Pan-STARRS, PSC Senior Scientific Specialist Joel Welling is developing 1) a web utility to look up<br />
available observations by sky coordinates, which he is implementing as a SOAP-based interface<br />
to the Pan-STARRS image database and skycell geometry database; 2) a SOAP-based interface to<br />
the tool which causes the IPP computing infrastructure to generate “postage stamp” images of<br />
particular sky locations; and 3) PHP pages to bridge the two. The utility, running on one of<br />
Michael Wood-Vasey’s machines at the University of Pittsburgh, has already had hits from<br />
Harvard, Queen’s University Belfast, and Denmark. This work develops RP/AUS staff expertise<br />
in large-scale data logistics and analysis.<br />
The Vaccine Modeling Initiative (VMI) is funded by the Bill & Melinda Gates Foundation and<br />
directed by Dr. Donald Burke, Dean of the University of Pittsburgh Graduate School of Public<br />
Health. The VMI is a research consortium between the University of Pittsburgh, Imperial College<br />
London and Princeton University with collaborators in many other institutions including the<br />
Johns Hopkins Bloomberg School of Public Health and the Pittsburgh Supercomputing Center.<br />
For the vaccine modeling initiative (VMI), PSC’s Joel Welling has developed and executed<br />
vaccine distribution simulations for distribution networks in Niger and districts of Thailand. A<br />
short-term goal is reconciling the predictions of HERMES, the code developed at PSC, against<br />
the predictions of the ARENA code developed at Pitt. Dr. Welling has also been working to refactor<br />
HERMES to separate more fully the model from the functional code, which is necessary to<br />
develop models of new distribution networks (e.g., areas other than Niger and Thailand, or<br />
incorporating different aspects) more efficiently and reliably. This work fosters computational<br />
techniques and RP/AUS staff skills in agent based modeling, data analysis and presentation,<br />
knowledge extraction and decision support.<br />
Wayne Pfeiffer of SDSC began a collaboration with Sam Levy and Ashley Van Zeeland of the<br />
Scripps Translational Science Institute (STSI). This is in conjunction with his Advanced User<br />
Support for Mark Miller's new TeraGrid allocation on Dash. He installed and tested on Dash two<br />
codes—BFAST and SHRiMP—that map short DNA sequence reads to a large reference genome.<br />
TACC continues to participate in the World Community Grid (WCG) effort. TACC resources<br />
contributing to the solution of problems addressed by the WCG include the visualization system<br />
Colt (40 compute cores) and the HPC serial cluster Stampede (1737 compute cores). TACC staff<br />
desktops and laptops also provide cycles to the project. Approximately 44 years of run time have<br />
been accumulated (93,616 results) on TACC resources in the reporting period.<br />
ORNL continues to operate an ESG node in its TeraGrid enclave. ESG is gearing up to support<br />
the next round of IPCC simulation runs for the upcoming IPCC report.<br />
A Publications Listing<br />
A.1. TeraGrid Staff Publications<br />
For Q2 2010, the TeraGrid sites reported the following publications authored in whole or in part<br />
by GIG- or RP-funded staff members.<br />
A.1.1 Science and Engineering Highlights<br />
1. D Katz, Scott Callaghan, Robert Harkness, Shantenu Jha, et al., “Science on the TeraGrid”, submitted to<br />
Journal of Computational Methods in Science and Technology [UC/ANL, SDSC, LONI]<br />
A.1.2 Data and Visualization<br />
2. Josephine Palencia, Robert Budden and Kevin Sullivan, “Kerberized Lustre 2.0 over the WAN”,<br />
TeraGrid 2010, Pittsburgh, PA, August 2-5, 2010, accepted for ACM Publication Copyright 2010 ACM<br />
978-1-60558-818-6/10/0. [PSC]<br />
3. C Miceli, M Miceli and S Jha, “Understanding Performance of Distributed Data-Intensive<br />
Applications”, accepted for publication in Philosophical Transactions of the Royal Society of London A<br />
(2010) [LONI]<br />
A.1.3 Users and User Support<br />
4. H. Karimabadi, H. X. Vu, B. Loring, Y. Omelchenko, M. Tatineni, A. Majumdar, and J. Dorelli, "Enabling<br />
Breakthrough Kinetic Simulations of the Magnetosphere Using Petascale Computing," Asia Oceania<br />
Geosciences Society (AOGS) Hyderabad, India, July 5-9, 2010<br />
5. Y Khamra, C White, S Jha, “Modelling Data-Driven CO2 Sequestration Using Distributed HPC<br />
Cyberinfrastructure: A Case Study”, TeraGrid 2010 Conference Proceedings [TACC, LONI]<br />
6. Joanne I. Yeh, Boris Shivachev, Srinivas Rapireddy, Matthew J. Crawford, Roberto R. Gil, Shoucheng<br />
Du, Marcela Madrid, and Danith H. Ly, “Crystal Structure of Chiral γPNA with Complementary DNA<br />
Strand—Insights into the Stability and Specificity of Recognition and Conformational Preorganization”,<br />
Journal of the American Chemical Society, in press. [PSC]<br />
A.1.4 User-Facing Projects and Core Services<br />
7. David Hart, "Measuring TeraGrid: Workload Characterization for an HPC Federation," International<br />
Journal of HPC Applications, 2010, accepted. [SDSC]<br />
A.1.5 Network, Operations, and Security<br />
8. H Kim, Y Khamra, S Jha, M Parashar, “An Autonomic Approach to Integrated HPC Grid and Cloud<br />
Usage”, IEEE E-Science 2009 Conference Proceedings [TACC]<br />
A.1.6 Education, Outreach and Training<br />
9. Richard A. Aló, Diane Baxter, Karl Barnes, Al Kuslikis, Geoffrey Fox, Alex Ramirez, Advancing<br />
Computational Science, Visualization and Homeland Security Research/ Education at Minority Serving<br />
Institutions National Model Promoted/ Implemented by MSI-CIEC (Minority Serving Institutions-<br />
CyberInfrastructure Empowerment Coalition), accepted to the International Conference on<br />
Computational Science, ICCS 2010 for inclusion in the program (May 31 – June 2, 2010, Amsterdam)<br />
and for publication in the new Elsevier, Procedia Computer Science Series. [Indiana, SDSC]<br />
A.1.7 Science Gateways<br />
10. Tang, W., Bennett D. A., and Wang, S. 2010. "A Parallel Agent-based Model of Land Use Opinions."<br />
Journal of Land Use Science, under revision<br />
11. Bennett D. A., Tang, W., and Wang, S. 2010. "Toward an Understanding of Provenance in Complex<br />
Land Use Dynamics." Journal of Land Use Science, accepted<br />
12. Wang, S. 2010. "A Cyber-GIS Framework for the Synthesis of Cyberinfrastructure, GIS, and<br />
Spatial Analysis." Annals of the Association of American Geographers, 100 (3): 1-23<br />
A.2. Publications from TeraGrid Users<br />
The following 731 publications were gathered primarily from renewal research submissions to<br />
the June 2010 TRAC meeting. The extraction process was conducted by a UCSD undergraduate<br />
student at SDSC. The publications are organized by field of science and by the proposal with<br />
which they were associated.<br />
Advanced Scientific Computing<br />
ASC050025<br />
1. A. Bhatele, E. Bohm, and L. V. Kale. Optimizing communication for Charm++ applications by reducing<br />
network contention. Concurrency and Computation: Practice and Experience, 2010.<br />
2. A. Bhatele and L. V. Kale. Quantifying Network Contention on Large Parallel Machines. Parallel<br />
Processing Letters (Special Issue on Large-Scale Parallel Processing), 19(4):553–572, 2009.<br />
3. A. Bhatele, L. Wesolowski, E. Bohm, E. Solomonik, and L. V. Kale. Understanding application<br />
performance via microbenchmarks on three large supercomputers: Intrepid, Ranger and Jaguar.<br />
IJHPCA, 2010.<br />
4. S. Biersdorff, C. W. Lee, A. D. Malony, and L. V. Kale. Integrated Performance Views in Charm ++:<br />
Projections Meets TAU. In Proceedings of The 38th International Conference on Parallel Processing<br />
(ICPP), pages 140–147, Vienna, Austria, September 2009.<br />
5. E. Meneses, C. L. Mendes, and L. V. Kale. Team-based message logging: Preliminary results. In 3rd<br />
Workshop on Resiliency in High Performance Computing (Resilience) in Clusters, Clouds, and Grids<br />
(CCGRID 2010)., May 2010.<br />
6. F. Gioachin, C. W. Lee, and L. V. Kale. Scalable Interaction with Parallel Applications. In Proceedings<br />
of TeraGrid’09, Arlington, VA, USA, June 2009.<br />
7. P. Jetley, L. Wesolowski, and L. V. Kale. Studies on the performance of scalable GPU-based<br />
hierarchical N-body simulations. Submitted for Publication.<br />
8. E. R. Rodrigues, P. O. A. Navaux, J. Panetta, and C. L. Mendes. A new technique for data privatization<br />
in user-level threads and its use in parallel applications. In ACM 25th Symposium on Applied Computing<br />
(SAC), Sierre, Switzerland, 2010.<br />
9. G. Zheng, E. Meneses, A. Bhatele, and L. V. Kale. Hierarchical Load Balancing for Large Scale<br />
Supercomputers. Technical Report 10-08, Parallel Programming Laboratory, March 2010.<br />
MCA08X024<br />
10. Lixia Liu and Zhiyuan Li, “A Compiler-automated Array Compression Scheme for Optimizing Memory<br />
Intensive Programs”, Proceedings of the 24th ACM International Conference on Supercomputing, June<br />
1-4, 2010, to appear.<br />
11. Yingchong Situ, Lixia Liu, Chandra S. Martha, Matthew E. Louis, Zhiyuan Li, Gregory A. Blaisdell and<br />
Anastasios S. Lyrintzis, “Reducing Communication Overhead in Large Eddy Simulation of Jet Engine<br />
Noise”, submitted to IEEE Cluster 2010.<br />
Atmospheric Sciences<br />
ATM090002<br />
12. Pritchard, M. S. and R. C. J. Somerville, 2009. Assessing the Diurnal Cycle of Precipitation in a Multi-<br />
Scale Climate Model. Journal of Advances in Modeling Earth Systems 1, Art. 12.<br />
13. Pritchard, M. S. and R. C. J. Somerville, 2009. Empirical orthogonal function analysis of the diurnal<br />
cycle of precipitation in a multi-scale climate model. Geophysical Research Letters 36 (5), L05812.<br />
ATM090042<br />
14. Sippel, J. A and F. Zhang, 2010: Predictability of Tropical Cyclones -- Understanding the Limits and<br />
Uncertainties in Hurricane Prediction. VDM Verlag, 178pp.<br />
15. Fang, J. and F. Zhang, 2010: Initial development and genesis of Hurricane Dolly (2008). Journal of the<br />
Atmospheric Sciences, 63, 655-672.<br />
16. Zhang, F., Y. Weng, Y.-H. Kuo, and J. S. Whitaker, 2010: Predicting Typhoon Morakot’s Catastrophic<br />
Rainfall and Flooding With a Cloud-Scale Ensemble System. Weather and Forecasting, submitted.<br />
17. Zhang, F., Y. Weng, J. Gamache, and F. Marks, 2010: Future of Hurricane Prediction: Cloud-resolving<br />
Ensemble Analysis and Forecast. Bulletin of the American Meteorological Society, submitted.<br />
18. Weng, Y., M. Zhang, and F. Zhang , 2010: Advanced data assimilation for cloud-resolving hurricane<br />
initialization and prediction. Computing in Science and Engineering, submitted.<br />
19. Kieu, C. Q., F. Zhang, J. S. Gall and W. M. Frank, 2010: Tropical cyclone formation from the equatorial<br />
waves. Journal of the Atmospheric Sciences, submitted.<br />
20. Fang, J and F. Zhang, 2010: Evolution of Multi-scale Vortices in the Development of Hurricane Dolly<br />
(2008). Journal of the Atmospheric Sciences, submitted.<br />
21. Fang, J and F. Zhang, 2010: Beta-effect on the evolution of tropical cyclones. Monthly Weather Review,<br />
in preparation.<br />
22. Green, B., F. Zhang, and P. Markowski, 2010: Supercells in Hurricane Katrina (2005). Monthly Weather<br />
Review, in preparation.<br />
23. Poterjoy, J., and F. Zhang, 2010: Dynamics and structures of error covariance in hurricanes. Monthly<br />
Weather Review, in preparation.<br />
24. Weng, Y., and F. Zhang, 2010: Assimilation of airborne Doppler radar observations with an Ensemble<br />
Kalman filter for cloud-resolving hurricane analysis and forecasting. Monthly Weather Review, in<br />
preparation.<br />
25. Zhang, M. and F. Zhang: Coupling EnKF with 4D-Var in a hybrid data assimilation algorithm (4D-Hybrid)<br />
for cloud-resolving hurricane prediction. Monthly Weather Review, in preparation.<br />
ATM090046<br />
26. Karimabadi, H., J. Scudder, V. Roytershteyn, W. Daughton, Why is reconnection in the<br />
solar wind so different than in other environments, 9th Annual International Astrophysics Conference,<br />
Pickup Ions Throughout the Heliosphere and Beyond, Maui, Hawaii, March 14-19, 2010.<br />
27. Karimabadi, H., W. Daughton, V. Roytershteyn, J. Scudder, Conditions for fast reconnection and its<br />
large-scale structure, EGU conference, Vienna, 2010.<br />
28. Karimabadi, H., H. X. Vu, B. Loring, Y. Omelchenko, M. Tatineni, A. Majumdar, J. Dorelli, Enabling<br />
Breakthrough Kinetic Simulations of the Magnetosphere Using Petascale Computing. AOGS, India,<br />
2010.<br />
29. Karimabadi, H., and J. Dorelli, H. X. Vu, B. Loring, Y. Omelchenko, Is quadrupole structure of out-of-plane<br />
magnetic field evidence of Hall reconnection, to appear in Modern Challenges in Nonlinear<br />
Plasma Physics, editor D. Vassiliadis, AIP conference, 2010.<br />
30. Daughton, W., V. Roytershteyn, H. Karimabadi, L. Yin, B.J. Albright, S.P. Gary and Kevin J. Bowers,<br />
Secondary Island Formation in Collisional and Collisionless Kinetic Simulations of Magnetic<br />
Reconnection, to appear in Modern Challenges in Nonlinear Plasma Physics, editor D. Vassiliadis, AIP<br />
conference, 2010.<br />
31. Karimabadi, H., V. Roytershteyn, C. G. Mouikis, L.M. Kistler, W. Daughton, Flushing Mechanism in<br />
Reconnection: Effects of Minority Species of Oxygen Ions, to appear in Planetary and Space Science,<br />
2010.<br />
ATM100032<br />
32. Jacobsen, D.A., Thibault, J.C., Senocak, I., 2010. An MPI-CUDA implementation for massively parallel<br />
incompressible flow computations on multi-GPU clusters, in: Proceedings of 48th AIAA Aerospace<br />
Sciences Meeting and Exhibit, paper no: 2010-522<br />
MCA95C006<br />
33. Zhao, K. and M. Xue, 2009: Assimilation of coastal Doppler radar data with the ARPS 3DVAR and cloud<br />
analysis for the prediction of Hurricane Ike (2008). Geophy. Res. Letters, 36, L12803,<br />
doi:10.1029/2009GL038658.<br />
34. Potvin, C. K., A. Shapiro, T.-Y. Yu, J. Gao, and M. Xue, 2009: Using a low-order model to characterize<br />
and detect tornadoes in multiple-Doppler radar data. Mon. Wea. Rev., 137, 1230–1249.<br />
35. Dunning, T. H., Jr., K. Schulten, J. Tromp, J. P. Ostriker, K. K. Droegemeier, M. Xue, and P. Fussell,<br />
2009: Science and engineering in the petascale era. Computing Sci. Engineering, 11, 28-36.<br />
36. Xue, M., M. Tong, and G. Zhang, 2009: Simultaneous state estimation and attenuation correction for<br />
thunderstorms with radar data using an ensemble Kalman filter: Tests with simulated data. Quart. J.<br />
Royal Meteor. Soc., 135, 1409-1423.<br />
37. Schwartz, C., J. Kain, S. Weiss, M. Xue, D. Bright, F. Kong, K. Thomas, J. Levit, and M. Coniglio, 2009:<br />
Next-day convection-allowing WRF model guidance: A second look at 2 vs. 4 km grid spacing. Mon.<br />
Wea. Rev., 137, 3351-3372.<br />
38. Clark, A. J., W. A. Gallus, Jr., M. Xue, and F. Kong, 2009: A comparison of precipitation forecast skill<br />
between small convection-permitting and large convection-parameterizing ensembles. Wea. and<br />
Forecasting, 24, 1121-1140.<br />
39. Jung, Y., M. Xue, and G. Zhang, 2010: Simulations of polarimetric radar signatures of a supercell storm<br />
using a two-moment bulk microphysics scheme. J. Appl. Meteor. Climatol., 49, 146-163.<br />
40. Jung, Y., M. Xue, and G. Zhang, 2010: The estimation of microphysical parameters and atmospheric<br />
state using simulated polarimetric radar data and ensemble Kalman filter in the presence of observation<br />
operator error. Mon. Wea. Rev., 138, 539–562.<br />
41. Coniglio, M. C., K. L. Elmore, J. S. Kain, S. Weiss, and M. Xue, 2010: Evaluation of WRF model output<br />
for severe-weather forecasting from the 2008 NOAA Hazardous Weather Testbed Spring Experiment.<br />
Wea. Forecasting, Accepted.<br />
42. Schwartz, C. S., J. S. Kain, S. J. Weiss, M. Xue, D. R. Bright, F. Kong, K. W. Thomas, J. J. Levit, M. C.<br />
Coniglio, and M. S. Wandishin, 2010: Toward improved convection-allowing ensembles: model physics<br />
sensitivities and optimizing probabilistic guidance with small ensemble membership. Wea. Forecasting,<br />
Accepted.<br />
43. Clark, A. J., W. A. Gallus, M. Xue, and F. Kong, 2010: Growth of spread in convection-allowing and<br />
convection-parameterizing ensembles. Wea. Forecasting, Accepted.<br />
44. Dawson, D. T., II, M. Xue, J. A. Milbrandt, and M. K. Yau, 2010: Comparison of evaporation and cold<br />
pool development between single-moment and multi-moment bulk microphysics schemes in idealized<br />
simulations of tornadic thunderstorms. Mon. Wea. Rev., Accepted.<br />
45. Xue, M., Y. Jung, and G. Zhang, 2010: State estimation of convective storms with a two-moment<br />
microphysics scheme and ensemble Kalman filter: Experiments with simulated radar data. Q. J. Roy.<br />
Meteor. Soc., Accepted.<br />
46. Ge, G., J. Gao, K. Brewster, and M. Xue, 2010: Impacts of beam broadening and earth curvature on 3D<br />
variational radar data assimilation with two Doppler radars. J. Atmos. Ocean Tech., 27, 617-636.<br />
47. Clark, A. J., W. A. Gallus, Jr., M. Xue, and F. Kong, 2010: Convection-allowing and convection-parameterizing<br />
ensemble forecasts of a mesoscale convective vortex and associated severe weather.<br />
Wea. Forecasting, Accepted.<br />
48. Schenkman, A., M. Xue, A. Shapiro, K. Brewster, and J. Gao, 2010: Impact of radar data assimilation<br />
on the analysis and prediction of the 8-9 May 2007 Oklahoma tornadic mesoscale convective system,<br />
Part I: Mesoscale features on a 2 km grid. Mon. Wea. Rev., Conditionally accepted.<br />
49. Schenkman, A., M. Xue, A. Shapiro, K. Brewster, and J. Gao, 2010: Impact of radar data assimilation<br />
on the analysis and prediction of the 8-9 May 2007 Oklahoma tornadic mesoscale convective system,<br />
Part II: Sub-storm-scale mesovortices on a 400 m Grid. Mon. Wea. Rev., Conditionally accepted.<br />
50. Kain, J. S., M. Xue, M. C. Coniglio, S. J. Weiss, F. Kong, T. L. Jensen, B. G. Brown, J. Gao, K.<br />
Brewster, K. W. Thomas, Y. Wang, C. S. Schwartz, and J. J. Levit, 2010: Assessing advances in the<br />
assimilation of radar data within a collaborative forecasting-research environment. Wea. Forecasting,<br />
Accepted.<br />
Astronomical Sciences<br />
AST060032<br />
51. * “Cyberinfrastructure to Support Science and Data Management for the Dark Energy Survey”<br />
52. Ngeow, C., Mohr, J.J., Alam, T., Barkhouse, W.A., Beldica, C., Cai, D., Daues, G., Plante, R., Annis, J.,<br />
Lin, H., Tucker, D. and Smith, C. 2006, SPIE, 6270, 68-79.<br />
53. *“The Dark Energy Survey Data Management System”<br />
54. Mohr, J.J., Adams, D., Barkhouse, W., Beldica, C., Bertin, E., Cai, Y.D., da Costa, L.A.N., Darnell, J.A.,<br />
Daues, G.E., Jarvis, M., Gower, M., Lin, H., Martelli, L., Neilsen, E., Ngeow, C., Ogando, R.L.C., Parga,<br />
A., Sheldon, E., Tucker, D., Kuropatkin, N. & Stoughton, C. for the Dark Energy Survey Collaboration<br />
2008, SPIE, 7016, 17.<br />
55. “The Dark Energy Survey Data-Management System: The Processing Framework”<br />
56. Gower, M.; Mohr, J. J.; Adams, D.; Cai, Y. D.; Lin, H.; Neilsen, E. H.; Tucker, D.; Bertin, E.;<br />
da Costa, L. A. N.; Martelli, L.; Ogando, R. L. C.; Jarvis, M.; Sheldon, E.<br />
57. 2009, contribution to the Astronomical Data Analysis Software and Systems XVIII ASP Conference<br />
Series, 411, 14-17<br />
58. “The Dark Energy Survey Data Management System: The Coaddition Pipeline and PSF<br />
Homogenization”<br />
Darnell, T.; Bertin, E.; Gower, M.; Ngeow, C.; Desai, S.; Mohr, J. J.; Adams, D.; Daues, G. E.;<br />
Beldica, C.; Freemon, M.; Lin, H.; Neilsen, E. H.; Tucker, D.;<br />
da Costa, L. A. N.; Martelli, L.; Ogando, R. L. C.; Jarvis, M.; Sheldon, E.<br />
59. 2009, contribution to the Astronomical Data Analysis Software and Systems XVIII ASP Conference<br />
Series, 411, 18-21<br />
60. “Application of the Dark Energy Survey Data Management System to the Blanco Cosmology Survey<br />
Data” Ngeow, C.C., Mohr, J.J., Barkhouse, W., Alam, T., Beldica, C., Cai, D., Daues, G., Duda, P.,<br />
Annis, J., Lin, H., Tucker, D., Rest, A., Smith, C., Lin, Y., High, W., Hansen, S., Brodwin, M., Allam, S.<br />
and BCS Collaboration. 2006 American Astronomical Society Meeting, 209, #22.06<br />
61. “The Blanco Cosmology Survey: Data Reduction, Calibration and Photometric Redshift Estimation to<br />
Four Distant Galaxy Clusters Discovered by the South Pole Telescope”<br />
62. Ngeow, Chow Choong; Mohr, J.; Zenteno, A.; Data Management, DES; BCS; SPT Collaborations<br />
63. 2009 American Astronomical Society Meeting, 213, #448.04<br />
64. “A Data Management System for the Dark Energy Survey”<br />
65. Barkhouse, W., Alam, T., Beldica, C., Cai, D., Daues, G., Mohr, J., Ngeow, C., Plante, R., Annis, J., Lin,<br />
H. and Tucker, D. 2006 American Astronomical Society Meeting, 208, #62.02<br />
AST080007<br />
66. Kowal & Lazarian, 2010, “Turbulence in Collisionless Plasmas: Statistical Analysis from Numerical<br />
Simulations with Pressure Anisotropy”, submitted to ApJ<br />
67. Kowal, Falceta-Gonçalves, D. & Lazarian, 2010, “Velocity Field of Compressible MHD Turbulence:<br />
Wavelet Decomposition and Mode Scalings”, submitted to ApJ<br />
68. Kowal, Lazarian, Vishniac & Otmianowska-Mazur, 2009, “Numerical Tests of Fast Reconnection in<br />
Weakly Stochastic Magnetic Fields”, ApJ, 700, 63<br />
69. Beresnyak & Lazarian, 2010, “Nonlocality of Balanced and Imbalanced Turbulence”, submitted to ApJ,<br />
arXiv:1002.2428<br />
70. Beresnyak, Yan & Lazarian, 2010 “Numerical study of Cosmic Ray Diffusion in MHD turbulence”,<br />
submitted to ApJ, arXiv:1002.2646<br />
71. Beresnyak & Lazarian 2009, “Structure of Stationary Strong Imbalanced Turbulence”, ApJ, 702, 460.<br />
72. Beresnyak & Lazarian 2009, “Comparison of spectral slopes of magnetohydrodynamic and<br />
hydrodynamic turbulence and measurements of alignment effects” ApJ, 702, 1190<br />
AST080028<br />
73. Simulations of Magnetized Disks Around Black Holes: Effects of Black Hole Spin, Disk Thickness, and<br />
Magnetic Field Geometry Authors: Penna, Robert F.; McKinney, Jonathan C.; Narayan, Ramesh;<br />
Tchekhovskoy, Alexander; Shafee, Rebecca; McClintock, Jeffrey E. Journal: Monthly Notices of the<br />
Royal Astronomical Society, 2010, submitted (arXiv:1003.0966) TeraGrid resources used: NICS<br />
Kraken, LONI QueenBee, NCSA Abe Web link: http://www.cfa.harvard.edu/_narayan/TeraGrid<br />
Papers/narayan1.pdf<br />
74. Black Hole Spin and The Radio Loud/Quiet Dichotomy of Active Galactic Nuclei Authors:<br />
Tchekhovskoy, Alexander; Narayan, Ramesh; McKinney, Jonathan C. Journal: The Astrophysical<br />
Journal, 711, 50-63 (2010) TeraGrid resources used: LONI QueenBee Web link:<br />
http://www.cfa.harvard.edu/_narayan/TeraGrid Papers/narayan2.pdf<br />
75. Numerical Studies of Relativistic Jets Authors: Narayan, Ramesh; Tchekhovskoy, Alexander; McKinney,<br />
Jonathan C. Journal: Proceedings of “Accretion and Ejection in AGN: A Global View” (Como, June 22-<br />
26, 2009), ASP Conference Series, Eds: L. Maraschi, G. Ghisellini, R. Della Ceca and F. Tavecchio, in<br />
press, 2010 (arXiv:1001.1355) TeraGrid resources used: LONI QueenBee Web link:<br />
http://www.cfa.harvard.edu/_narayan/TeraGrid Papers/narayan3.pdf<br />
76. Magnetohydrodynamic Simulations of Gamma-Ray Burst Jets: Beyond the Progenitor Star Authors:<br />
Tchekhovskoy, Alexander; Narayan, Ramesh; McKinney, Jonathan C. Journal: New Astronomy, in<br />
press, 2010 (arXiv:0909.0011) TeraGrid resources used: LONI QueenBee Web link:<br />
http://www.cfa.harvard.edu/_narayan/TeraGrid Papers/narayan4.pdf<br />
77. Efficiency of Magnetic to Kinetic Energy Conversion in a Monopole Magnetosphere Authors:<br />
Tchekhovskoy, Alexander; McKinney, Jonathan C.; Narayan, Ramesh Journal: The Astrophysical<br />
Journal, 699, 1789-1808, 2009 TeraGrid resources used: LONI QueenBee Web link:<br />
http://www.cfa.harvard.edu/_narayan/TeraGrid Papers/narayan5.pdf<br />
AST100029<br />
78. Armitage, C., and Wandelt, B. D., “PreBeaM for Planck: a Polarized, Regularized, Beam-Deconvolution<br />
Map-Making Method,” arxiv:0807.4179, Astrophysical Journal Supplement, 181, 533-542 (2009)<br />
79. Fendt, W.A., Chluba, J., Rubino-Martin, J.A., Wandelt, B. D., “Rico: An Accurate Cosmological<br />
Recombination Code,” arXiv:0807.2577, Astrophysical Journal Supplement, 181, 627-638 (2009)<br />
80. Jewell, J. B., Eriksen, H. K., Wandelt, B. D., O'Dwyer, I. J., Huey, G., Gorski, K. M., “A Markov Chain<br />
Monte Carlo Algorithm for analysis of low signal-to-noise CMB data,” arXiv:0807.0624, Astrophysical<br />
Journal 697, 258-268 (2009)<br />
81. Elsner, F. and Wandelt, B. D., “Improved simulation of non-Gaussian temperature and polarization CMB<br />
maps,” arXiv:0909.0009, Astrophysical Journal Supplement, 184, 264-270 (2009)<br />
82. Dickinson, C., Eriksen, H. K., Banday, A. J., Jewell, J. B., Gorski, K.M., Huey, G., Lawrence, C. R.,<br />
O'Dwyer, I. J., Wandelt, B. D., “Bayesian component separation and CMB estimation for the 5-year<br />
WMAP temperature data,” arXiv:0903.4311, Astrophysical Journal, 705, 1607-1623 (2009)<br />
83. Biswas, R. and Wandelt, B. D., “Parameters and pitfalls in dark energy models with time varying<br />
equation of state,” arXiv:0903.4311, Physical Review D, in press.<br />
84. Groeneboom, N. E. , Eriksen, H. K., Gorski, K., Huey, G., Jewell, J., Wandelt, B. D., “Bayesian analysis<br />
of white noise levels in the 5-year WMAP data,” arXiv:0904.2554, Astrophysical Journal 702, L87-L90<br />
(2009)<br />
85. Elsner, F. and Wandelt, B. D., and Schneider, M., “Probing Local Non-Gaussianities Within a Bayesian<br />
Framework”, accepted for publication in Astronomy and Astrophysics.<br />
86. Kitaura, F. S., Jasche, J., Li, C., Ensslin, T. A., Metcalf, R. B., Wandelt, B. D., Lemson, G., White, S. D.<br />
M., “Cosmic Cartography of the Large-Scale Structure with Sloan Digital Sky Survey Data Release 6,”<br />
arXiv:0906.3978, accepted for publication in MNRAS.<br />
87. Lavaux, G. and Wandelt, B. D., “Precision cosmology with voids: definition, methods, dynamics,” 15<br />
pages, arXiv:0906.4101, accepted for publication in MNRAS.<br />
88. Jasche, J., Kitaura, F. S., Wandelt, B. D., and Ensslin, T. A., “Bayesian power-spectrum inference for<br />
Large Scale Structure data,” accepted for publication in Monthly Notices of the Royal Astronomical<br />
Society.<br />
89. Biswas, R., Alizadeh, E., and Wandelt, B. D., “Voids as a precision probe of dark energy,” accepted for<br />
publication in Physical Review D.<br />
90. Rubino-Martin, J.A., Chluba, J., Fendt, W.A., and Wandelt, B.D., “Estimating the impact of<br />
recombination uncertainties on the cosmological parameter constraints from cosmic microwave<br />
background experiments,” arXiv:0910.4383, submitted to Monthly Notices of the Royal Astronomical<br />
Society.<br />
91. Huffenberger, K. and Wandelt, B. D., “Fast and Exact Spin-S Spherical Harmonic Transforms,”<br />
submitted to the Astrophysical Journal Supplement.<br />
92. Lavaux, G. and Wandelt, B. D., “Fast CMB lensing using statistical interpolation on the sphere,” 9<br />
pages, arXiv:1003.4984, submitted to the Astrophysical Journal<br />
MCA04N015<br />
93. Ball, N.M., & Brunner, R.J., “Data Mining and Machine Learning in Astronomy”, International Journal of<br />
Modern Physics D., in press.<br />
94. Ball, N.M., Brunner, R.J., Myers, A.D., Strand, N.E., Alberts, S.L., and Tcheng, D., "Robust Machine<br />
Learning Applied to Astronomical Datasets. III. Probabilistic Photometric Redshifts for Galaxies and<br />
Quasars in the SDSS and GALEX", 2008, Astrophysical Journal, 683, 12.<br />
95. Ball, N.M., Brunner, R.J., Myers, A.D., Strand, N.E., Alberts, S.A., Tcheng, D., and Llora, X., "Robust<br />
Machine Learning Applied to Astronomical Datasets. II. Photometric Redshifts of Quasars in a GALEX-<br />
SDSS Federated Dataset.", 2007, Astrophysical Journal, 663, 774<br />
96. Ball, N.M., Brunner, R.J., Myers, A.D., and Tcheng, D., "Robust Machine Learning Applied to<br />
Astronomical Datasets. I. Star-Galaxy Classification of the Sloan Digital Sky Survey DR3 Using<br />
Decision Trees", 2006, Astrophysical Journal, 650, 497<br />
97. Lundgren Britt, PhD Thesis, University of Illinois at Urbana-Champaign (2009)<br />
98. Lundgren, B., Brunner, R.J., et al. 2009, “Cross-Correlation of MgII quasar absorption systems and LRG<br />
galaxies in SDSS DR5”, Astrophysical Journal, 698, 819.<br />
99. Myers, A.D., White, M., and Ball, N., 2009, “Incorporating Photometric Redshift Probability Density<br />
Information into Real-Space Clustering Measurements”, Monthly Notices of the Royal Astronomical<br />
Society, 399, 2279.<br />
100. Myers, A.D., Richards, G., Brunner, R.J., et al. 2008, “Quasar Clustering at 25 h-1 kpc from a Complete<br />
Sample of Binaries”, Astrophysical Journal, 678, 635.<br />
101. Myers, A.D., Brunner, R.J., and the SDSS collaboration, "First Measurement of the Clustering Evolution<br />
of Photometrically Classified Quasars", 2006 Astrophysical Journal, 638, 622<br />
102. Myers, A.D., Brunner, R.J., and the SDSS collaboration, "The Clustering Evolution of 300,000<br />
Photometrically Classified Quasars--I. Luminosity and Redshift Evolution in Quasar Bias", 2007<br />
Astrophysical Journal, 658, 85<br />
103. Myers, A.D., Brunner, R.J., and the SDSS collaboration, "The Clustering Evolution of 300,000<br />
Photometrically Classified Quasars--II. The Excess on Very Small Scales", 2007 Astrophysical Journal,<br />
658, 99<br />
104. Myers, A.D., Brunner, R.J., and the SDSS collaboration, "Quasar Clustering at 25 h-1 kpc from a<br />
Complete Sample of Binaries", 2008 ApJ, 678, 635<br />
105. Ross, A.J., Percival, W., and Brunner, R.J., 2010, “Evolution of the Clustering of Photometrically<br />
Selected SDSS Galaxies”, Monthly Notices of the Royal Astronomical Society, in press.<br />
106. Ross, Ashley, PhD Thesis, University of Illinois at Urbana-Champaign (2009)<br />
107. Ross, A.J. and Brunner, R.J., 2009, “Halo-Model Analysis of the clustering of Photometrically Selected<br />
Galaxies from the SDSS”, Monthly Notices of the Royal Astronomical Society, 399, 878.<br />
108. Ross, A.J., Brunner, R.J., and Myers, A.D., “Normalization of the Matter Power Spectrum via Higher<br />
Order Angular Correlations of Luminous Red Galaxies”, Astrophysical Journal, 682, 737 (2008).<br />
109. Ross, A.J., Brunner, R.J., and Myers, A.D., “Higher Order Angular Galaxy Correlations in the SDSS:<br />
Redshift and Color Dependence of Nonlinear Bias”, Astrophysical Journal, 665, 67 (2007).<br />
110. Ross A.J., Brunner R.J. & Myers A.D., “Precision Measurements of Higher-Order Angular Galaxy<br />
Correlations Using 11 Million SDSS Galaxies”, Astrophysical Journal, accepted (2006)<br />
111. Scranton R., Ménard B., Richards G.T., Nichol R.C., Myers A.D., Jain B., Gray A., Bartelmann M.,<br />
Brunner R.J., et al., “Detection of Cosmic Magnification with the Sloan Digital Sky Survey”,<br />
Astrophysical Journal (2005), 633, 589<br />
112. Strand, Natalie, PhD Thesis, University of Illinois at Urbana-Champaign (2009)<br />
113. Strand, N.E., Brunner, R.J., and Myers, A.D., 2008, “AGN Environments in the Sloan Digital Sky Survey<br />
I: Dependence on Type, Redshift, and Luminosity”, Astrophysical Journal, 688, 180<br />
MCA93S005<br />
114. Brown, B.P., Talk, “Rotating convection: Flows in plasma experiments”, Center for Magnetic Self-<br />
Organization workshop on ‘Flow Driven Instabilities and Turbulence in High Beta Plasmas’, Univ.<br />
Wisconsin, Madison, WI, Dec 2009.<br />
115. Gordovskyy, M., Jain, R. & Hindman, B.W. 2009, “The role of mode mixing in the absorption of<br />
p-modes”, Astrophys. J., 694, 1602-1609.<br />
116. Gough, D. & Hindman, B.W. 2010, “Helioseismic detection of deep meridional flow”, Astrophys. J., in<br />
press.<br />
117. Miesch, M.S., Invited talk, “Mean flows and turbulent transport in global solar convection simulations”, in<br />
Program on Dynamo Theory, KITP, Univ Calif Santa Barbara, CA, 2 July 08.<br />
118. Miesch, M.S. & Hindman, B.W., 2010, “Gyroscopic pumping in the solar near-surface shear layer”,<br />
Astrophys. J., to be submitted.<br />
119. Miesch, M.S. & Toomre, J., 2009, “Turbulence, magnetism, and shear in stellar interiors”, Ann. Rev.<br />
Fluid Mech., 41, 317–345, and on-line ARFM appendix, “Tachocline confinement”, 6pp.<br />
120. Rast, M.P., Invited talk, “A visual interface for the analysis of turbulent structure statistics”, in Astronum-<br />
2009, Chamonix, France, 3 July 2009.<br />
121. Rast, M.P., Talk, “Precision photometric imaging of the Sun: What have we learned with the PSPT”, in<br />
LASP colloquium, Boulder, Colorado, 10 Sep 2009.<br />
122. Rast, M.P., Talk, “Is there such a thing as quiet-sun”, in SOHO-23, Northeast Harbor, Maine, 22 Sept<br />
2009.<br />
123. Toomre, J., Invited overview talk, “Advanced simulations: Global solar convection and dynamos”,<br />
American Museum of Natural History, New York, NY, 7 Feb 08.<br />
124. Toomre, J., Invited plenary talk, “Joys of highly turbulent solar convection and magnetic dynamos”, at<br />
Intern. Conf. on Turbulent Mixing and Beyond, The Abdus Salam Intern. Center for Theor. Phys.,<br />
Trieste, Italy, 27 July 2009.<br />
125. Toomre, J., Invited talk, “New 3-D helioseismic inversion approaches for ring-diagram data”, HMI<br />
Science Team Meeting, Stanford Univ., Palo Alto, CA, 10 Sep 2009.<br />
126. Brun, A.S., Miesch, M.S. & Toomre, J., 2009, “Evolution of structures in magnetic dynamo action<br />
realized in deep shells of convection”, Astrophys. J., in preparation.<br />
127. Brun, A.S., Miesch, M.S. & Toomre, J., 2010, “Toward a 3-D nonlinear model of the solar interior”,<br />
Astrophys. J., to be submitted.<br />
128. Criscuoli, S., Harder, J.W., Rast, M.P. & Ermolli, I., Poster, “Unresolved magnetic elements and their<br />
implication for cycle variations in the solar spectral irradiance”, in “Recent directions in astrophysical<br />
quantitative spectroscopy and radiation hydrodynamics”, Boulder, Colorado, 30 Mar 2009.<br />
129. DeRosa, M.L., Talk, “Nonlinear force-free magnetic field modeling of AR 10953: A critical assessment”,<br />
given at Amer. Astron. Soc., Solar Phys. Div. meeting, Boulder, CO, June 2009.<br />
130. DeRosa, M.L., Talk, “Aspects of helio- and asterospheric magnetic fields”, in workshop on “Magnetism<br />
in Stars: origins, observations, opportunities”, CITA, Toronto, Oct 2009.<br />
131. DeRosa, M.L., Talk, “Adventures in solar magnetic field modeling”, presentation in the SSL seminar,<br />
Berkeley, CA, Mar 2010.<br />
132. DeRosa, M.L., Talk, “A spherical harmonic analysis of the evolution of the photospheric magnetic field,<br />
and consequences for the solar dynamo”, to appear in the Amer. Astron. Soc., Solar Phys. Div.<br />
meeting, Miami, FL, May 2010.<br />
133. Dikpati, M., Gilman, P.A., Cally, P.S., & Miesch, M.S., 2009, “Axisymmetric MHD instabilities in<br />
solar/stellar tachoclines”, Astrophys. J., 692, 1421–1431.<br />
134. Hindman, B.W., Haber, D.A. & Toomre, J. 2009, “Subsurface circulations within active regions,”<br />
Astrophys. J., 698, 1749- 1760.<br />
135. Hindman, B.W., Invited talk, “Subsurface Circulations within Active Regions”, HAO Seminar, High<br />
Altitude Observatory, Boulder, Colorado, Apr 2009.<br />
136. Hindman, B.W., Talk, “Subsurface Circulations Established by Active Regions”, SHINE meeting,<br />
Wolfville, Nova Scotia, Aug 2009.<br />
137. Hindman, B.W., Talk, “New 3-D inversion modules for ring-diagram data”, HMI Science Team Meeting,<br />
Stanford, Sept 2009.<br />
138. Jain, R., Hindman, B.W., Braun, D.C. & Birch, A.C. 2009, “Absorption of p modes by thin magnetic flux<br />
tubes,” Astrophys. J., 695, 325-335.<br />
139. Hurlburt, N.E., DeRosa, M.L., Augustson, K.C., & Toomre, J., 2010, “Effects of granulation upon larger-scale<br />
convection”, Proc. Third Hinode Science Meeting, ASP Conf. Ser., submitted.<br />
140. Miesch, M.S., Invited seminar, “Magnetism, and the inner turmoil of stars”, Center for Space Physics,<br />
Boston University, Boston, MA, Feb 26 2009.<br />
141. Miesch, M.S., Invited seminar, “Magnetism, and the inner turmoil of stars”, Dept. of Applied<br />
Mathematics, University of Colorado, Boulder, CO, 16 Apr 2009.<br />
142. Miesch, M.S., Invited seminar, “Magnetism, and the inner turmoil of stars”, Plasma Physics Seminar,<br />
University of California San Diego, San Diego, CA, 20 Apr 2009.<br />
143. Miesch, M.S., 2010, “Solar internal flows and dynamo action”, Ch. 5 of “Heliophysics III: evolving solar<br />
activity and the climates of space and earth”, ed. Schrijver, C.J. & Siscoe, G.L., Cambridge Univ. Press,<br />
in press.<br />
144. Miesch, M.S., 2010, “The dynamo dialectic: an inside look at the current solar minimum”, in Proc.<br />
“SOHO-23: understanding a peculiar solar minimum”, ASP Conference Series, in press.<br />
145. Miesch, M.S., Invited talk, “Perspectives on the global solar dynamo; turbulence, convection, and<br />
shear”, in the HELAS workshop: “Synergies between solar and stellar modeling”, Rome, Italy, 25 Jun<br />
2009.<br />
146. Miesch, M.S., Invited lecture, “Solar internal flows and dynamo action”, in “NASA heliophysics summer<br />
school, Year 3: the Earth’s climate system and long-term solar activity”, NCAR, Boulder, CO, 23-24 Ju<br />
2009.<br />
147. Miesch, M.S., Invited talk, “Convection, shear, and magnetism in the sun”, in “Magnetism in stars:<br />
origins, observations, and opportunities”, Canadian Institute for Theoretical Astrophysics, Toronto,<br />
Canada, 28 Oct 2009.<br />
148. Miesch, M.S., Invited talk, “WHI and the solar dynamo”, in “Whole heliosphere interval second<br />
workshop”, NCAR, Boulder, CO, 10 Nov 2009.<br />
149. Miesch, M.S., Invited lecture, “The internal rotation of the sun”, CMPD/CMSO Plasma Physics Winter<br />
School, UCLA, Los Angeles, CA, 7 Jan 2010.<br />
150. Miesch, M.S., Browning, M.K., Brun, A.S., Brown, B.P & Toomre, J., Talk, “Mean-field generation in<br />
turbulent convective dynamos: the role of a tachocline”, in the Amer. Astron. Soc., Solar Phys. Div.<br />
Meeting, Boulder, CO, 15 Jun 2009.<br />
151. Miesch, M.S., Li, L., Sofia, S. & Brown, B.P., 2010, “Dynamo-induced variations in solar structure”,<br />
Astrophys. J., to be submitted.<br />
152. Rast, M.P., Session Leader, in “Active region magnetic fields: from the photosphere to the corona,”<br />
SHINE Workshop, Wolfville, Nova Scotia, 6 Aug 2009.<br />
153. Rempel, M., Talk, “Sunspot structures from radiative MHD simulations”, Max-Planck-Institut für<br />
Sonnensystemforschung, Freiburg, Germany, 27 Feb 2009.<br />
154. Rempel, M., Invited talk, “Radiative MHD simulations of sunspot structure”, in HELAS workshop on<br />
“Sunspots and helioseismology”, Berlin, Germany, 12 May 2009.<br />
155. Rempel, M., Talk, “Numerical simulations of sunspot structure”, Amer. Astron. Soc., Solar Phys. Div.<br />
Meeting, Boulder, CO, 14 Jun 2009<br />
156. Rempel, M., Invited talk, “Radiative MHD simulations of sunspot structure”, in conference on “Natural<br />
Dynamos”, Stara Lesna, Slovakia, 29 Aug 2009.<br />
157. Rempel, M., Invited talk, “Numerical MHD modelling of sunspot structure”, colloquium talk, Instituto<br />
Astrofisica de Canarias, Tenerife, Spain, 22 Oct 2009.<br />
158. Rempel, M., Invited talk, “Numerical sunspot models - subsurface structure and helioseismic forward<br />
modeling”, presented at the AGU Meeting, San Francisco, 14 Dec 2009.<br />
159. Toomre, J., Seminar, “Turbulent dynamo processes in stars”, Astron. Dept., Bologna Univ., Bologna,<br />
Italy, 29 July 2009.<br />
160. Toomre, J., Invited colloquium, “Probing the sources of solar magnetism with helioseismology and<br />
simulations”, Dept. of Astron., Yale Univ., New Haven, CT, 16 Oct 2009.<br />
161. Toomre, J., Invited public presentation, “Touching the heart of magnetism in our nearest star”, Hayden<br />
Planetarium, Amer. Museum of Natural History, New York, NY, 19 Oct 2009.<br />
162. Augustson, K., Brown, B.P., Brun, A.S. & Toomre, J., 2010, “Convection and differential rotation in<br />
F-type stars”, Astrophys. J., to be submitted.<br />
163. Brown, B.P., Thesis talk, “Global-scale stellar dynamos and wreaths of magnetism in rapidly rotating<br />
suns without tachoclines”, Amer. Astron. Soc. Meeting 213, Long Beach, CA, Jan 2009.<br />
164. Brown, B.P., Invited talk, “Mysteries of magnetism in rapidly rotating suns without tachoclines”, Univ<br />
Calif Los Angeles, Los Angeles, CA, Jan 2009.<br />
165. Brown, B.P., Talk, “Dynamo action and wreaths of magnetism in rapidly rotating suns”, colloquium at<br />
High Altitude Observatory, Boulder, CO, May 2009.<br />
166. Brown, B.P., Talk, “Wreaths of magnetism built by dynamos without tachoclines”, in Amer. Astron. Soc.,<br />
Solar Phys. Div. meeting 40, American Astronomical Society, Boulder, CO, May 2009.<br />
167. Brown, B.P., Ph.D. thesis, University of Colorado, Aug 2009, “Convection and Dynamo Action in<br />
Rapidly Rotating Suns”, 220 pp.<br />
168. Brown, B.P., Talk, ”Dynamos in rapidly rotating suns”, in Center for Magnetic Self-Organization general<br />
meeting, Madison, WI, Oct 2009.<br />
169. Brown, B.P., Talk, “Convection and dynamo action in rapidly rotating suns”, in ‘Magnetism in Stars:<br />
Origins, Observations and Opportunities’, Canadian Institute for Theoretical Astrophysics (CITA),<br />
Toronto, Canada, Oct 2009.<br />
170. Brown, B.P., Talk, “Simulations of magnetism in rapidly rotating stars”, 2010 NSF Astronomy &<br />
Astrophysics Postdoctoral Fellows Symposium, Amer. Astron. Soc, meeting 215, Washington, D.C., Jan<br />
2010.<br />
171. Brown, B.P., Talk, “Solar rotation II: from the sun to the stars”, Center for Multiscale Plasma Dynamics<br />
and Center for Magnetic Self-Organization winter school, UCLA, Los Angeles, CA, Jan 2010.<br />
172. Brown, B.P., Talk, “Simulations of the global-dynamo in stars like the sun”, Seminar in Plasma Physics,<br />
Univ. Wisconsin, Madison, WI, Feb 2010.<br />
173. Brown, B.P., Browning, M.K., Brun, A.S., Miesch, M.S. & Toomre, J., 2010, “Persistent wreaths of<br />
magnetism in a rapidly rotating sun”, Astrophys. J., 711, 424–438 [Copy provided as Appendix E]<br />
174. Brown, B.P., Browning, M.K., Brun, A.S., Miesch, M.S. & Toomre, J., 2010, “Magnetic cycles in a<br />
convective dynamo simulation of a young solar-type star”, Astrophys. J., submitted.<br />
175. Brown, B.P., Browning, M.K., Brun, A.S., Miesch, M.S. & Toomre, J., Poster, “Wreath-building dynamos<br />
in rapidly rotating suns”, Amer. Astron. Soc. meeting 215, Washington, D.C., Jan 2010.<br />
176. Browning, M.K., Invited talk, “Convection and magnetism in stars: How magnetic fields are built, and<br />
what they can do”, in McGill / Universite de Montreal Joint Astrophysics Seminar, Montreal, Canada, 3<br />
Feb 09.<br />
177. Browning, M.K., Colloquium, “Dynamo action in low-mass stars”, MIT Astrophysics, Cambridge, MA, 7<br />
Apr 09.<br />
178. Browning, M.K., Invited talk, “Stellar dynamos”, in meeting on ‘Magnetic fields of fully convective<br />
dwarfs’, Toulouse, France, 16 Apr 09.<br />
179. Browning, M.K., Poster, “Rotation and magnetic fields in fully convective stars”, IAU General Assembly,<br />
Meeting, Rio de Janeiro, 3-14 Aug 2009.<br />
180. Browning, M.K., Invited talk, “Simulations of dynamos with and without tachoclines”, in meeting on<br />
“Natural Dynamos”, Stara Lesna, Slovakia, 1 Sep 09.<br />
181. Browning, M.K., Talk, “Global simulations of stellar dynamos”, in AGU Fall meeting, San Francisco, CA,<br />
13 Dec 09.<br />
182. Browning, M.K., Invited talk, “Momentum transport in stars”, Princeton Plasma Physics Laboratory,<br />
Princeton, NJ, 19 Jan 2010.<br />
183. Browning, M.K., Invited talk, “Convection and dynamo action in stars: problems and perspectives”,<br />
Physics and Astronomy Seminar, Univ. Delaware, Newark, DE, 20 Apr 2010.<br />
184. Browning, M.K., 2010, “Differential rotation in simulations of fully convective stars”, Astrophys. J.,<br />
submitted.<br />
185. Browning, M.K., Basri, G., Marcy, G.W., West, A.A. & Zhang, J., 2010, “Rotation and magnetic activity<br />
in a sample of M-dwarfs,” Astron. J., 139, 504–518.<br />
186. Brun, A.S. & Palacios, A, 2009, “Numerical simulations of a rotating red giant star. I. 3–D models of<br />
turbulent convection and associated mean flows”, Astrophys. J., submitted.<br />
187. Featherstone, N.A., Talk, “Fossil fields in the convective cores of A-stars”, in “Magnetism in stars:<br />
origins, observations, and opportunities”, CITA, Toronto, Canada, 28 Oct 2009.<br />
188. Featherstone, N.A., Browning, M.K., Brun, A.S. & Toomre, J., 2009, “Effects of fossil fields on<br />
convective core dynamos in A-type stars”, Astrophys. J., 705, 1000–1018. [Copy provided as Appendix<br />
D]<br />
189. Reiners, A., Basri, G., & Browning, M., 2009, “Saturation of magnetic flux at low Rossby numbers: the M<br />
stars”, in ‘Cool Stars, Stellar Systems, and the Sun: Proc. 15th Cambridge Workshop’, AIP v. 1094, 728.<br />
190. Reiners, A., Scholz, A., Eislöffel, J., Hallinan, G., Berger, E., Browning, M., Irwin, J., Küker, M., & Matt,<br />
S., 2009 “The rotation-magnetic field relation”, in ‘Cool Stars, Stellar Systems, and the Sun: Proc. 15th<br />
Cambridge Workshop’, AIP v. 1094, 250.<br />
191. Toomre, J., Invited colloquium, “Joys of turbulent convection and dynamos in both stellar envelopes and<br />
cores”, Astrophys. Dept., Amer. Museum of Natural History, New York, NY, 19 Oct 2009.<br />
MCA06N030<br />
192. Pereira, M. & Bryan, G. L., 2010, “Tidal Torquing of Elliptical Galaxies in Cluster Environments”, ApJ, in<br />
press<br />
193. Molnar, S. M., Chiu, I.-N., Umetsu, K., Chen, P., Hearn, N., Broadhurst, T., Bryan, G., & Shang, C. 2010,<br />
“Testing Strict Hydrostatic Equilibrium in Abell 1689”, ApJ, submitted (arXiv:1002.4691)<br />
194. Shang, C., Bryan, G. L., & Haiman, Z. 2010, “Supermassive black hole formation by direct collapse:<br />
keeping protogalactic gas H2 free in dark matter haloes with virial temperatures Tvir > 10^4 K”, MNRAS,<br />
402, 1249<br />
195. Tonnesen, S., & Bryan, G. L. 2010, “The Tail of the Stripped Gas that Cooled: H I, Hα, and X-ray<br />
Observational Signatures of Ram Pressure Stripping”, ApJ, 709, 1203<br />
196. Mesinger, A., Bryan, G. L., & Haiman, Z. 2009, “Relic HII regions and radiative feedback at high<br />
redshifts”, MNRAS, 399, 1650<br />
197. Joung, M. R., Mac Low, M.-M., & Bryan, G. L. 2009, “Dependence of Interstellar Turbulent Pressure on<br />
Supernova Rate”, ApJ, 704, 137<br />
198. Molnar, S. M., Hearn, N., Haiman, Z., Bryan, G., Evrard, A. E., & Lake, G. 2009, “Accretion Shocks in<br />
Clusters of Galaxies and Their SZ Signature from Cosmological Simulations”, ApJ, 696, 1640<br />
MCA06N053<br />
199. “The correlation function of quasars: evidence for mergers”, DeGraf, C., Di Matteo, T., & Springel, V.<br />
2010, submitted<br />
200. “Galaxy morphology, kinematics and clustering in a hydrodynamic simulation of a cold dark matter<br />
universe” Croft, R., Di Matteo, T., Springel, V. & Hernquist, L., 2009, Monthly Notices of the Royal<br />
Astronomical Society, Volume 400, Issue 1, pp. 43-67<br />
201. “Hydrogen Self Shielding in a Quasar Proximity Zone”, Altay, G., Croft, R. A. C. & Di Matteo, Tiziana<br />
2009, eprint arXiv:0910.5233<br />
202. “Faint-end Quasar Luminosity Functions from Cosmological Hydrodynamic Simulations” DeGraf, C., Di<br />
Matteo, T., & Springel, V. 2009, eprint arXiv:0910.1843<br />
203. “Simulations of the Sunyaev-Zel’dovich effect from quasars” Chatterjee, S.; Di Matteo, T.; Kosowsky, A.;<br />
Pelupessy, I. 2008, Monthly Notices of the Royal Astronomical Society, Volume 390, Issue 2, pp. 535-<br />
544<br />
Bioengineering and Environmental Systems<br />
BCS090007<br />
204. Matthew S. Starosta and Andrew K. Dunn. Three-Dimensional computation of focused beam<br />
propagation through multiple biological cells. Optics Express, 17(15):12455–12469, July 2009.<br />
MCA03S012<br />
205. Cui, Y., K. B. Olsen, T. H. Jordan, K. Lee, J. Zhou, P. Small, D. Roten, G. Ely, D.K. Panda, A.<br />
Chourasia, J. Levesque, S. M. Day, P. Maechling (2010), Scalable Earthquake Simulation on Petascale<br />
Supercomputers, Proceedings of the SC10 International Conference for HPC, Networking and Analysis.<br />
(Submitted for publication).<br />
206. Taborda, R., J. López, H. Karaoglu, J. Urbanic and J. Bielak (2010), Speeding Up Wave Propagation for<br />
Large-Scale Earthquake Simulations, Proceedings of the SC10 International Conference for HPC,<br />
Networking and Analysis. (Submitted for publication).<br />
207. Scott Callaghan, Ewa Deelman, Dan Gunter, Gideon Juve, Philip Maechling, Christopher Brooks, Karan<br />
Vahi, Kevin Milner, Robert Graves, Edward Field, David Okaya, Thomas Jordan (2010), Scaling up<br />
workflow-based applications, J. Computer System Science, special issue on scientific workflows, in<br />
press.<br />
208. Graves, R., T. Jordan; S. Callaghan; E. Deelman; E. Field; G. Juve; C. Kesselman; P. Maechling; G.<br />
Mehta; K. Milner; D. Okaya; P. Small; and K. Vahi (2010). CyberShake: A Physics-Based Seismic<br />
Hazard Model for Southern California, Pure Applied Geophys., accepted for publication<br />
209. Olsen, K.B., and J.E. Mayhew (2010). Goodness-of-fit Criteria for Broadband Synthetic Seismograms<br />
with Application to the 2008 Mw5.4 Chino Hills, CA, Earthquake, Seism. Res. Lett., accepted for<br />
publication.<br />
210. Cruz-Atienza, V.M. and K.B. Olsen (2010). Supershear Mach-Waves Expose the Fault Breakdown Slip,<br />
Tectonophysics, in revision for the special volume on 'Supershear Earthquakes'.<br />
211. Rojas, O., E. Dunham, S.M. Day, L.A. Dalguer, and J.E. Castillo (2009). Finite difference modeling of<br />
rupture propagation with strong velocity-weakening friction, Geophys J. Int., in review.<br />
212. Bielak, J., Graves, R., Olsen, K., Taborda, R., Ramirez-Guzman, L., Day, S., Ely, G., Roten, D., Jordan,<br />
T., Maechling, P., Urbanic, J., Cui, Y. and Juve, G.: The ShakeOut Earthquake Scenario: Verification of<br />
Three Simulation Sets, submitted to Geophysical Journal International, March, 2009.<br />
213. Callaghan, S., P. Maechling, E. Deelman, K. Vahi, G. Mehta, G. Juve, K. Milner, R. Graves, E. Field, D.<br />
Okaya, D. Gunter, K. Beattie, and T. Jordan (2008), 'Reducing Time-to-Solution Using Distributed High-<br />
Throughput Mega-Workflows -- Experiences from SCEC CyberShake', Fourth IEEE International<br />
Conference on eScience, IEEE, Indianapolis, IN,<br />
214. Cui, Y., Chourasia, A., Moore, R., Olsen, K., Maechling, P., Jordan, T., The TeraShake Computational<br />
Platform, Advances in Geocomputing, Lecture Notes in Earth Sciences 119, DOI 10.1007/978-3-540-<br />
85879-9_7, pp229-278, editor H. Xing, Springer-Verlag Berlin Heidelberg, 2009.<br />
215. Dalguer, L. A., H. Miyake, S. M. Day, and K. Irikura (2008) Calibrated surface and buried dynamic<br />
rupture models constrained with statistical observations of past earthquakes, Bull. Seism. Soc. Am ,<br />
Vol. 98, doi: 10.1785/0120070134, pp. 1147-1161<br />
216. Day, S. M., R. W. Graves, J. Bielak, D. Dreger, S. Larsen, K. B. Olsen, A. Pitarka, and L. Ramirez-<br />
Guzman (2008). Model for basin effects on long-period response spectra in southern California,<br />
Earthquake Spectra (in press).<br />
217. Doser, D.I., K.B. Olsen, F.F. Pollitz, R.S. Stein, and S. Toda (2009). The 1911 M~6.6 Calaveras<br />
earthquake: Source parameters and the role of static, viscoelastic and dynamic Coulomb stress<br />
changes imparted by the 1906 San Francisco earthquake, Bull. Seis. Soc. Am. 99, 1746-1759.<br />
218. Duan, B., and S.M. Day (2008). Inelastic strain distribution and seismic radiation from rupture of a fault<br />
kink J. Geophys. Res. , in review.<br />
219. Ely, G., S.M. Day, and J-B. Minster (2008) Dynamic rupture models for the southern San Andreas fault,<br />
Bull. Seism. Soc. Am., in review.<br />
220. Ely, G. P., S. M. Day, and J. B. Minster (2008) A support-operator method for viscoelastic wave<br />
modeling in 3D heterogeneous media, Geophys. J. Int. , 172, doi: 10.1111/j.1365-246X.2007.03633.x,<br />
331-344<br />
221. Graves, R. W., The seismic response of the San Bernardino Basin region during the 2001 Big Bear<br />
Lake earthquake, Bull. Seismol. Soc. Am., 98, 241–252, doi:10.1785/0120070013, 2008.<br />
222. Graves, R.W., B. Aagaard, K. Hudnut, L. Star, J. Stewart & T. H. Jordan (2008), Broadband simulations<br />
for Mw 7.8 southern San Andreas earthquakes: Ground motion sensitivity to rupture speed, Geophys.<br />
Res. Lett., submitted, Sept. 2008.<br />
223. Graves, R., S. Callaghan, E. Deelman, E. Field, N. Gupta, T. H. Jordan, G. Juve, C. Kesselman, P.<br />
Maechling, G. Mehta, D. Meyers, D. Okaya and K. Vahi (2008) Physics Based Probabilistic Seismic<br />
Hazard Calculations for Southern California, 14th World Conference on Earthquake Engineering,<br />
October 2008, Beijing, China<br />
224. Ma, S., G.A. Prieto, and G. C. Beroza, Testing Community Velocity Models for Southern California<br />
Using the Ambient Seismic Field, Bull. Seismol. Soc. Am., 98, 2694-2714, DOI: 10.1785/0120080947,<br />
2008<br />
225. Maechling, P., E. Deelman, Y. Cui (2009), Implementing Software Acceptance Tests as Scientific<br />
Workflows, Proceedings of the IEEE International Parallel & Distributed Processing Symposium 2009,<br />
Las Vegas Nevada, July, 2009 (in press)<br />
226. Olsen, K. B., Dalguer, L., Day, S., Cui, Y., Zhu, J., Cruz, V.M., Roten, D., Mayhew, J., Maechling, P.,<br />
Jordan, T., Chourasia, A. and Okaya, D. ShakeOut-D: Ground Motion Estimates Using an Ensemble of<br />
Large Earthquakes on the Southern San Andreas Fault With Spontaneous Rupture Propagation,<br />
Geophysical Research Letters, doi:10.1029/2008GL036832, in press, 2009.<br />
227. Olsen, K.B., W.J. Stephenson, and Andreas Geisselmeyer (2008) 3D Crustal Structure and Long-period<br />
Ground Motions From a M9.0 Megathrust Earthquake in the Pacific Northwest Region, Jour. Seismol.<br />
DOI 10.1007/s10950-007-9082-y.<br />
Chemical and Thermal Systems<br />
CTS050016<br />
228. Broadbelt, L.J. and Pfaendtner, J., “Lexicography of Kinetic Modeling of Complex Systems”, AIChE J.,<br />
2005, 51(8), 2112-2121.<br />
229. Pfaendtner, J., Yu, X. and Broadbelt, L.J., “Quantum Chemical Investigation of Low-Temperature<br />
Intramolecular Hydrogen Transfer of Hydrocarbons”, J. Phys. Chem. A, 2006, 110(37), 10863-10871.<br />
230. Pfaendtner, J. and Broadbelt, L.J., “Elucidation of Structure-Reactivity Relationships in Hindered<br />
Phenols Via Quantum Chemistry and Transition State Theory”, Chem. Eng. Sci., 2007, 62(18-20),<br />
5232-5239.<br />
231. Pfaendtner, J., Yu, X. and Broadbelt, L.J., “The 1D Hindered Rotor Approximation”, Theoretical<br />
Chemistry Accounts, 2007, 118(5-6), 881-898.<br />
232. Pfaendtner, J. and Broadbelt, L.J., “Uncovering the Cause of Contrathermodynamic Behavior in<br />
Intermolecular Hydrogen Transfer of Alkylperoxy Radicals, ROO• + R’H”, ChemPhysChem, 2007, 8(13),<br />
1969-1978.<br />
233. Yu, X., Pfaendtner, J. and Broadbelt, L.J., “Ab Initio Study of Acrylate Polymerization Reactions: Methyl<br />
Methacrylate and Methyl Acrylate Propagation”, J. Phys. Chem. A, 2008, 112(29), 6772-6782.<br />
234. Yu, X., Levine, S.E. and Broadbelt, L.J., “Kinetic Study of the Copolymerization of Methyl Methacrylate<br />
and Methyl Acrylate using Quantum Chemistry”, Macromolecules, 2008, 41(21), 8242-8251.<br />
235. Adamczyk, A.J., Reyniers, M.-F., Marin, G.B. and Broadbelt, L.J., “Exploring 1,2-Hydrogen Shift in<br />
Silicon Nanoparticles: Reaction Kinetics from Quantum Chemical Calculations and Derivation of<br />
Transition State Group Additivity (TGSA) Database”, J. Phys. Chem. A, 2010, in press.<br />
236. Adamczyk, A.J., Reyniers, M.-F., Marin, G.B. and Broadbelt, L.J., “Kinetics of Substituted Silylene<br />
Addition and Elimination in Silicon Nanocluster Growth Captured by Group Additivity”, ChemPhysChem,<br />
2010, in press.<br />
237. Adamczyk, A.J., Reyniers, M.-F., Marin, G.B. and Broadbelt, L.J., “Hydrogenated Amorphous Silicon<br />
Nanostructures: Novel Structure-Reactivity Relationships for Cyclization and Ring Opening in the Gas<br />
Phase”, Theoretical Chemistry Accounts, 2010, submitted.<br />
CTS060035<br />
238. Qian, D., Sankaranarayanan, K., Sundaresan, S., Kontomaris, K., and McLaughlin, J.B., “Simulation of<br />
bubble breakup dynamics in homogeneous turbulence,” Chemical Engineering Communications 193,<br />
1038-1063 (2006).<br />
239. Jia, X., McLaughlin, J.B., and Kontomaris, K., “Lattice Boltzmann simulations of contact line motion on<br />
uniform surfaces,” Mathematics and Computers in Simulation 72, 156-159 (2006).<br />
240. Derksen, J., Kontomaris, K., McLaughlin, J.B., and Van den Akker, H.E.A., “Large eddy simulation of<br />
flow dynamics and mixing in an industrial crystallizer geometry,” IChemE Chemical Engineering<br />
Research and Design 85, 169-179 (2007).<br />
241. Jia, X., J.B. McLaughlin, G. Ahmadi, and K. Kontomaris, “Lattice Boltzmann simulations of contact line<br />
pinning,” International Journal of Modern Physics C 18, 595-601 (2007).<br />
242. Jia, X., J.B. McLaughlin, and K. Kontomaris, “Lattice Boltzmann simulations of flows with fluid-fluid<br />
interfaces,” Asia-Pacific Journal of Chemical Engineering 3, 124-143 (2008).<br />
243. Jia, X. J.B. McLaughlin, and K. Kontomaris, “Lattice Boltzmann simulations of drops colliding with solid<br />
surfaces,” European Journal of Physics Special Topics 171, 105-112 (2009).<br />
CTS080034<br />
244. A. D. Naiman, S. K. Lele, F. Ham, J. T. Wilkerson, and M. Z. Jacobson. Large eddy simulation of<br />
persistent contrails in wind shear and atmospheric turbulence. American Physical Society Division of<br />
Fluid Dynamics 62nd Annual Meeting, No. BS-8, Minneapolis, MN, 22-24 November 2009a.<br />
245. A. D. Naiman, S. K. Lele, F. Ham, J. T. Wilkerson, and M. Z. Jacobson. Large eddy simulation of<br />
persistent contrails. 2nd International Conference on Transport, Atmosphere, and Climate, Aachen,<br />
Germany and Maastricht, Netherlands, 22-25 June 2009b.<br />
246. A. D. Naiman, S. K. Lele, J. T. Wilkerson, and M. Z. Jacobson. Modeling persistent contrails in a large<br />
eddy simulation and a global climate model. American Geophysical Union 2009 Fall Meeting, No.<br />
A51C-0119, San Francisco, CA, 14-18 December 2009c.<br />
247. A. D. Naiman, S. K. Lele, F. Ham, J. T. Wilkerson, and M. Z. Jacobson. Large eddy simulation of<br />
persistent contrails. Physics of Fluids, in preparation, 2010a.<br />
CTS080042<br />
248. Voronov, R.S., VanGordon, S., Sikavitsas, V.I., and D.V. Papavassiliou, “Efficient Lagrangian scalar<br />
tracking method for reactive local mass transport simulation through porous media,” submitted to Int. J.<br />
of Numerical Methods in Fluids<br />
249. VanGordon, S., Voronov, R.S., Blue, T.B., Shambaugh, R.L., Papavassiliou, D.V., and V.I. Sikavitsas,<br />
“Effects of scaffold architecture on preosteoblastic cultures under continuous fluid shear,” in press,<br />
Ind. Eng. Chem. Res., 2010.<br />
250. Voronov, R., VanGordon, S., Sikavitsas, V.I., and D.V. Papavassiliou, “Local velocity and stress fields<br />
within 3D porous scaffolds used in perfusion bioreactors for bone tissue growth,” in press, J. of<br />
Biomechanics, 2010, DOI: 10.1016/j.jbiomech.2010.01.007<br />
251. Duong, H.M., Yamamoto, N., Bui, K., Papavassiliou, D.V., Maruyama, S., B.L. Wardle, “Morphology<br />
effects on non-isotropic thermal conduction of aligned single- and multi-walled carbon nanotubes in<br />
polymer nanocomposites,” accepted, Journal of Phys. Chemistry C, 2010.<br />
252. Srinivasan, C., and D.V. Papavassiliou, “Backwards and forwards dispersion of a scalar in turbulent wall<br />
flows,” Int. J. Heat Mass Transf. 53, 1023-1035, 2010, DOI: 10.1016/j.ijheatmasstransfer.2009.11.008<br />
253. Le, P.M., and D.V. Papavassiliou, “A physical picture of the mechanism of turbulent heat transfer from<br />
the wall,” Int. J. Heat Mass Transf., 52(21-22), 4873-4882, 2009.<br />
254. Duong, H.M., Yamamoto, N., Papavassiliou, D.V., Wardle, B.L., and S. Maruyama, “Inter-Carbon<br />
Nanotube contact in thermal transport of controlled-morphology polymer nanocomposites,”<br />
Nanotechnology, 20(15), 155702, 2009.<br />
255. VanGordon, S.B., Voronov, R.S., Blue, T.B., Shambaugh, R.L., Papavassiliou, D.V., and V.I.<br />
Sikavitsas, “Influence of Polymer Scaffold Architecture on the Shear Stress Distribution and Dynamic<br />
Culture of Preosteoblastic Cells,” Society of Biomaterials, 2010 Annual Meeting and Exposition,<br />
Seattle, WA, April 2010.<br />
256. Voronov, R., VanGordon S., Sikavitsas V.I., and D.V. Papavassiliou, “A Lagrangian Particle Method as<br />
an Alternative to Solute Lattice Boltzmann: Investigation of Mass Transfer in Biological Porous Media,” paper<br />
148b, AIChE Annual Conference, Nashville, November, 2009.<br />
257. Voronov, R., VanGordon S., Landy, B., Sikavitsas V.I., and D.V. Papavassiliou, “Understanding the<br />
Tissue Growth Process via Fluid Shear and Nutrient Transport Simulation in 3D Porous Scaffolds Used<br />
in a Perfusion Bioreactor,” paper 55f, AIChE Annual Conference, Nashville, November, 2009.<br />
258. Voronov, R., VanGordon S., Landy, B., Sikavitsas V.I., and D.V. Papavassiliou, “Insight into tissue<br />
growth process via local shear stress and nutrient transport simulation in 3D porous scaffolds cultured<br />
in a perfusion bioreactor,” paper 485u, AIChE Conference, Nashville, November, 2009.<br />
259. Papavassiliou, D.V., Voronov, R., Sikavitsas, V.I., and S. VanGordon, “Flow and nutrient transport<br />
through porous scaffolds used for the culture of bone cells in perfusion bioreactors,” paper P15.00011,<br />
March 2009 meeting of the American Physical Society, Pittsburgh, PA, March, 2009.<br />
260. Duong, H.M., Yamamoto, N., Panzer, M., Marconnet A., Goodson, K., Papavassiliou, D.V., Maruyama,<br />
S., and B.L. Wardle “Thermal Properties of Vertically Aligned Carbon Nanotube-Nanocomposites<br />
Boundary Resistance and Inter- Carbon Nanotube Contact: Experiments and Modeling,” paper<br />
J26.00002, March 2009 meeting of the APS, Pittsburgh, PA, March, 2009.<br />
261. Duong, H.M., Papavassiliou, D.V., Yamamoto, N., and B.L. Wardle “Computational study of the thermal<br />
conductivities of single- and multi-walled carbon nanotube-polymer composites with inter-carbon<br />
nanotube contact,” 2009 MRS Spring Meeting, San Francisco, April 2009.<br />
262. Voronov, R., VanGordon S., Sikavitsas V.I., and D.V. Papavassiliou, “Using a Particle Method to<br />
Investigate Mass Transfer in Biological Porous Media,” paper 679f, AIChE Annual Conference,<br />
Philadelphia, November, 2008.<br />
CTS090004<br />
263. Q. Zhang and D. J. Bodony, “Numerical Simulation of Two-Dimensional Acoustic Liners with High<br />
Speed Grazing Flow,” submitted to the AIAA J.<br />
264. J. B. Freund, “Adjoint-based optimization for understanding and suppressing jet noise,” Proceedings of<br />
the IUTAM Symposium on Computational Aero-Acoustics for Aircraft Noise Prediction, (2010).<br />
265. D. J. Bodony, G. Zagaris, A. Reichter, Q. Zhang, “Aeroacoustic predictions in complex geometries”,<br />
Proceedings of the IUTAM Symposium on Computational Aero-Acoustics for Aircraft Noise Prediction<br />
(2010).<br />
266. G. Zagaris, D. J. Bodony, M. D. Brandyberry, M. T. Campbell, E. Shaffer & J. B. Freund, “A collision<br />
detection approach to chimera grid assembly for high fidelity simulations of turbofan noise,” AIAA Paper<br />
2010- 0836 (2010).<br />
267. J. Kim, D. J. Bodony, and J. B. Freund, “A high-order, overset mesh algorithm for adjoint-based<br />
optimization for aeroacoustics control,” AIAA Paper 2010-3818 (2010).<br />
CTS090079<br />
268. M. C. Hagy, S. J. Fitzwater, T. H. Pierce, M. Schure, A. V. Popov and R. Hernandez, “Effects of interparticle<br />
energetic barriers on the dynamics of colloidal aggregation,” in preparation.<br />
269. M. C. Hagy, S. J. Fitzwater, T. H. Pierce, M. Schure, A. V. Popov and R. Hernandez, “On the<br />
structuring of colloidal suspensions under shear,” in preparation.<br />
270. G. Ozer, E. Valeev, S. Quirk and R. Hernandez, “The unfolding dynamics of Neuropeptide Y:<br />
accelerated molecular dynamics and adaptive steered molecular trajectories,” in preparation.<br />
Chemistry<br />
CHE040030<br />
271. J.W. Medlin, C.M. Horiuchi, M. Rangan, “Effects of ring structure on the reaction pathways of cyclic<br />
esters and ethers on Pd(111)”, Topics in Catalysis, in press (2010).<br />
272. C.M. Horiuchi, M. Rangan, B. Israel, J.W. Medlin, “Adsorption and ring-opening of 2,5(H)-furanone on<br />
the (111) surfaces of Pd and Pt: Implication for selectivity in reactions of unsaturated cyclic<br />
oxygenates”, Journal of Physical Chemistry C, 113 (2009) 14900-14907.<br />
273. M.T. Schaal, M.P. Hyman, M. Rangan, S. Ma, C.T. Williams, J.R. Monnier, and J.W. Medlin,<br />
“Theoretical and experimental studies of Ag-Pt Interactions for supported Ag-Pt bimetallic catalysts”,<br />
Surface Science 603 (2009) 690-696.<br />
274. D.C. Kershner, M.P. Hyman, J.W. Medlin, “DFT study of the oxidation of silicon on Pd(111) and<br />
Pt(111)”, Surface Science 602 (2008) 3603-3610.<br />
275. M.P. Hyman and J.W. Medlin, “Mechanistic Studies of Electrocatalytic Reactions”, in Catalysis (volume<br />
20) pp. 309-337, edited by J.J. Spivey and K.M. Dooley. Royal Society of Chemistry (2007).<br />
276. M.P. Hyman, J.W. Medlin, “The Effects of Electronic Structure Modifications on the Adsorption of<br />
Oxygen Reduction Reaction Intermediates on Model Pt(111)-Alloy Surfaces”, Journal of Physical<br />
Chemistry C 111 (2007) 17052-17060.<br />
277. M.P. Hyman, B.T. Loveless, J.W. Medlin, “A Density Functional Theory Study of H2S Decomposition<br />
on the (111) Surfaces of Model Pd-alloys”, Surface Science 601 (2007) 5383-5394.<br />
278. M.P. Hyman, J.W. Medlin, “A Mechanistic Study of the Electrochemical Oxygen Reduction Reaction on<br />
Pt(111) Using Density Functional Theory”, Journal of Physical Chemistry B 110 (2006) 15338-15344.<br />
CHE080052<br />
279. S.L. Fiedler and A. Violi, “Simulation of Nanoparticle Permeation through a Lipid Membrane,”<br />
Biophysical Journal, 2010, in press.<br />
280. S. Chung and A. Violi, “Peri-condensed aromatics with aliphatic chains as key intermediates for the<br />
nucleation of aromatic hydrocarbons,” Proc. Combust. Inst. 33, 2010, accepted.<br />
281. S.L. Fiedler and A. Violi, “Interactions of carbonaceous nanoparticles with a lipid bilayer membrane: A<br />
molecular study,” Proc. 6th US Nat. Comb. Meeting, Ann Arbor, Michigan, May 2009.<br />
282. S. Chung and A. Violi, “Molecular dynamics study of nanoparticle nucleation at high temperature,” Proc.<br />
6th US Nat. Comb. Meeting, Ann Arbor, Michigan, May 2009.<br />
CHE080060<br />
283. H. Dong, M.R. Nimlos, D.K. Johnson, M.E. Himmel and X. Qian, ‘The Role of Water in β-D-xylose<br />
Condensation Reactions’, J. Phys. Chem. A 113, (2009) 8577-8585.<br />
284. J-L Li and X. Qian, ‘Free Energy Landscape for Glucose Condensation Reactions’, J. Phys. Chem.<br />
(manuscript to be submitted in June 2010).<br />
285. D-J Liu and X. Qian, ‘Mechanisms and Energetics of Glucose Conversion to HMF’, J. Phys. Chem.<br />
(manuscript to be submitted in June 2010).<br />
CHE100070<br />
286. Ellison, M. D.; Morris, S. T.; Sender, M. R.; Padgett, N. E.; Brigham, J. J. Phys. Chem. C 2007, 111,<br />
18127.<br />
CHE100074<br />
287. Sun, X.; Lee, J. K. "The Stability of DNA Duplexes Containing Hypoxanthine (Inosine): Gas versus<br />
Solution Phase and Biological Implications," J. Org. Chem., 2010, ASAP article published on Web<br />
February 25, 2010 (DOI: 10.1021/jo9023683)<br />
288. Zhachkina, A.; Lee, J. K. "Uracil and Thymine Reactivity in the Gas Phase: The SN2 Reaction and<br />
Implications for Electron Delocalization in Leaving Groups," J. Am. Chem. Soc. 2009, 131, 18376-<br />
18385.<br />
289. Zhachkina, A.; Liu, M.; Sun, X.; Amegayibor, F. S.; Lee, J. K. "Gas-Phase Thermochemical Properties<br />
of the Damaged Base O-Methylguanine versus Adenine and Guanine," J. Org. Chem. 2009, 74, 7429-<br />
7440.<br />
290. Tantillo, D. J.; Lee, J. K. "Reaction Mechanisms: Pericyclic Reactions," Annu. Rep. Prog. Chem., Sect.<br />
B, 2009, 105, 285-309.<br />
291. Liu, M.; Li, T.; Amegayibor, F. S.; Cardoso, D. S.; Fu, Y.; Lee, J. K. "Gas-Phase Thermochemical<br />
Properties of Pyrimidine Nucleobases," J. Org. Chem., 2008, 73, 9283-9291.<br />
292. Rozenberg, A; Lee, J. K. "Theoretical Studies of the Quinolinic Acid to Nicotinic Acid Mononucleotide<br />
Transformation," J. Org. Chem., 2008, 73, 9314-9319.<br />
293. Wepukhulu, W. O.; Smiley, V. L.; Vemulapalli, B.; Smiley, J. A.; Phillips, L. M.; Lee, J. K. "Evidence for<br />
Pre-Protonation in the Catalytic Reaction of OMP Decarboxylase: Kinetic Isotope Effects using the<br />
Remote Double Label Method," Organic and Biomolecular Chemistry, 2008, 6, 4533-4541 (ALSO<br />
FEATURED ON COVER).<br />
294. Tantillo, D. J.; Lee, J. K. "Reaction Mechanisms: Pericyclic Reactions," Annu. Rep. Prog. Chem., Sect.<br />
B, 2008, 104, 260-283.<br />
295. Liu, M.; Xu, M.; Lee, J. K. "The Intrinsic Reactivity of Ethenoadenine and Mechanism for Excision from<br />
DNA," J. Org. Chem., 2008, 73, 5907-5914.<br />
CHE100082<br />
296. I. Y. Lyubimov, J. McCarty, A. Clark, and M. G. Guenza “Analytical Rescaling of Polymer Dynamics<br />
from Mesoscale Simulations” (submitted to J. Chem. Phys.)<br />
297. J. McCarty, and M. G. Guenza “Multiscale Modeling of Polymer Mixtures” (in preparation).<br />
298. I. Y. Lyubimov, and M. G. Guenza “Rescaling of Dynamical Properties from Mesoscale Simulations” (in<br />
preparation)<br />
299. J. McCarty, I. Y. Lyubimov, and M. G. Guenza “Effective Soft-Core Potentials and Mesoscopic<br />
Simulations of Binary Polymer Mixtures” Macromolecules, 2010 (accepted).<br />
MCB080004<br />
300. Fatmi, M.Q., Ai, R., and Chang, C-E. A., “Synergistic Regulation and Ligand Induced Conformational<br />
Changes of Tryptophan Synthase,” Biochemistry, 2009, 48, 9921-9931.<br />
301. Fatmi, M.Q., Ai, R., and Chang, C-E. A., Talk 170, “Driving forces behind allosteric and synergistic<br />
regulations in tryptophan synthase,” COMP: Division of Computers in Chemistry, Molecular Mechanics<br />
session, Tuesday, March 23, 2010.<br />
302. Fatmi, M.Q., Thant, W.M., Lee, S.-N., Niks, D., Dunn, M.F., and Chang, C-E. A., Poster 252,<br />
“Discovering drug candidates for the alpha-subunit of tryptophan synthase: From docking, molecular<br />
dynamics simulations, to experiments,” COMP: Division of Computers in Chemistry, Poster Session,<br />
Tuesday, March 23, 2010.<br />
MCA94P017<br />
303. Ilan, B., E. Tajkhorshid, K. Schulten, and G. A. Voth. 2004. The mechanism of proton exclusion in<br />
aquaporin channels. Proteins 55:223-228.<br />
304. Jeon, J. G., and G. A. Voth. 2005. The dynamic stress responses to area change in planar lipid bilayer<br />
membranes. Biophys. J. 88:1104-1119.<br />
305. Izvekov, S., and G. A. Voth. 2005. A multiscale coarse-graining method for biomolecular systems. J.<br />
Phys. Chem. B 109:2469-2473.<br />
306. Izvekov, S., and G. A. Voth. 2005. Effective force field for liquid hydrogen fluoride from ab initio<br />
molecular dynamics simulation using the force-matching method. J. Phys. Chem. B 109:6573-6586.<br />
307. Tepper, H. L., and G. A. Voth. 2005. Protons may leak through pure lipid bilayers via a concerted<br />
mechanism. Biophys. J. 88:3095-3108.<br />
308. Xu, J., and G. A. Voth. 2005. Computer simulation of explicit proton translocation in cytochrome c<br />
oxidase: the D-pathway. Proc. Natl. Acad. Sci. USA. 102:6795-6800.<br />
309. Wu, Y., and G. A. Voth. 2005. A computational study of the closed and open states of the influenza A<br />
M2 proton channel. Biophys. J. 89:2402-2411.<br />
310. Ka, B. J., and G. A. Voth. 2004. Combining the semiclassical initial value representation with centroid<br />
dynamics. J. Phys. Chem. B 108:6883-6892.<br />
311. Yan, T. Y., C. J. Burnham, M. G. Del Popolo, and G. A. Voth. 2004. Molecular dynamics simulation of<br />
ionic liquids: The effect of electronic polarizability. J Phys Chem B 108:11877-11881.<br />
312. Chu, J. W., and G. A. Voth. 2005. Allostery of actin filaments: Molecular dynamics simulations and<br />
coarse-grained analysis. Proc. Natl. Acad. Sci. U. S. A. 102:13111-13116.<br />
313. Izvekov, S., M. Parrinello, C. J. Burnham, and G. A. Voth. 2004. Effective force fields for condensed<br />
phase systems from ab initio molecular dynamics simulation: A new method for force-matching. J.<br />
Chem. Phys. 120:10896-10913.<br />
314. Izvekov, S., and G. A. Voth. 2005. Ab initio molecular-dynamics simulation of aqueous proton solvation<br />
and transport revisited. J. Chem. Phys. 123:044505.<br />
315. Izvekov, S., and G. A. Voth. 2005. Multiscale coarse-graining of liquid state systems. J. Chem. Phys.<br />
123:134105-134117.<br />
316. Blood, P. D., G. S. Ayton, and G. A. Voth. 2005. Probing the molecular-scale lipid bilayer response to<br />
shear flow using nonequilibrium molecular dynamics. J. Phys. Chem. B 109:18673-18679.<br />
317. Iyengar, S. S., M. K. Petersen, T. J. F. Day, C. J. Burnham, V. E. Teige, and G. A. Voth. 2005. The<br />
properties of ion-water clusters. I. The protonated 21-water cluster. J. Chem. Phys. 123:084309.<br />
318. Ayton, G. S., and G. A. Voth. 2004. Mesoscopic lateral diffusion in lipid bilayers. Biophys. J. 87:3299-<br />
3311.<br />
319. Chang, R., G. S. Ayton, and G. A. Voth. 2005. Multiscale coupling of mesoscopic- and atomistic-level<br />
lipid bilayer simulations. J. Chem. Phys. 122:224716.<br />
320. Izvekov, S., and G. A. Voth. 2006. Multiscale coarse-graining of mixed phospholipid/cholesterol<br />
bilayers. J. Chem. Theor. Comp. 2:637-648.<br />
321. Shi, Q., and G. A. Voth. 2005. Multi-scale modeling of phase separation in mixed lipid bilayers. Biophys.<br />
J. 89:2385-2394.<br />
322. Wang, Y. T., and G. A. Voth. 2005. Unique spatial heterogeneity in ionic liquids. J. Am. Chem. Soc.<br />
127:12192-12193.<br />
323. Wang, Y. T., S. Izvekov, T. Y. Yan, and G. A. Voth. 2006. Multiscale coarse-graining of ionic liquids. J.<br />
Phys. Chem. B 110:3564-3575.<br />
324. Chen, H., Y. Wu, and G. A. Voth. 2006. Origins of proton transport behavior from selectivity domain<br />
mutations of the aquaporin-1 channel. Biophys. J. 90:L73-L75.<br />
325. Xu, J., and G. A. Voth. 2006. Free energy profiles for H+ conduction in the D-pathway of cytochrome c<br />
oxidase: a study of the wild type and N98D mutant enzymes. Biochim. Biophys. Acta. 1757:852-859.<br />
326. Shi, Q., S. Izvekov, and G. A. Voth. 2006. Mixed atomistic and coarse-grained molecular dynamics:<br />
simulation of a membrane-bound ion channel. J. Phys. Chem. B 110:15045-15048.<br />
327. Paramore, S., and G. A. Voth. 2006. Examining the influence of linkers and tertiary structure in the<br />
forced unfolding of multiple-repeat spectrin molecules. Biophys. J. 91:3436-3445.<br />
328. Paramore, S., G. S. Ayton, D. T. Mirijanian, and G. A. Voth. 2006. Extending a spectrin repeat unit I.<br />
Linear force-extension response. Biophys. J. 90:92-100.<br />
329. Wang, Y., and G. A. Voth. 2006. Tail aggregation and domain diffusion of ionic liquids. J. Phys. Chem.<br />
B. 110:18601.<br />
330. Blood, P. D., and G. A. Voth. 2006. Direct observation of Bin/amphiphysin/Rvs (BAR) domain-induced<br />
membrane curvature by means of molecular dynamics simulations. Proc. Nat. Acad. Sci. USA<br />
103:15068-15072.<br />
331. Paramore, S., G. S. Ayton, and G. A. Voth. 2006. Extending a spectrin repeat unit II. Rupture behavior.<br />
Biophys. J. 90:101-111.<br />
332. Chen, H., B. Ilan, Y. Wu, F. Zhu, K. Schulten, and G. A. Voth. 2007. Charge delocalization in proton<br />
channels. I. The aquaporin channels and proton blockage. Biophys. J. 92:46-60.<br />
333. Chu, J.-W., S. Izvekov, and G. A. Voth. 2006. The multiscale challenge for biomolecular systems:<br />
coarse-grained modeling. Mol. Sim. 32:211.<br />
334. Jiang, W., Y. Wang, and G. A. Voth. 2007. Molecular dynamics simulation of a nanostructural<br />
organization in ionic liquid/water mixtures. J. Phys. Chem. B. 111:4812-4818.<br />
335. Liu, P., and G. A. Voth. 2007. Smart resolution replica exchange: An efficient algorithm for exploring<br />
complex energy landscapes. J. Chem. Phys. 126:045106.<br />
336. Swanson, J. M. J., C. M. Maupin, H. Chen, M. K. Petersen, J. Xu, Y. Wu, and G. A. Voth. 2007. Proton<br />
solvation and transport in aqueous and biomolecular systems: Insights from computer simulations<br />
(Invited feature article). J. Phys. Chem. B. 111:4300-4314.<br />
337. Wu, Y., B. Ilan, and G. A. Voth. 2007. Charge delocalization in proton channels. II. The synthetic LS2<br />
channel and proton selectivity. Biophys. J. 92:61-69.<br />
338. Xu, J., M. A. Sharpe, L. Qin, S. Ferguson-Miller, and G. A. Voth. 2007. Storage of an excess proton in<br />
the hydrogen-bonded network of the D-pathway of cytochrome c oxidase: Identification of a protonated<br />
water cluster. J. Am. Chem. Soc. 129:2910-2913.<br />
339. Chen, H., Y. Wu, and G. A. Voth. 2007. Proton transport behavior through the influenza A M2 channel:<br />
Insights from molecular simulation. Biophys. J. 93:3470-3479.<br />
340. Chu, J.-W., S. Izvekov, G. S. Ayton, and G. A. Voth. 2007. Emerging methods for multiscale simulation<br />
of biomolecular systems. Mol. Phys. 105:167-175.<br />
341. Paesani, F., S. Iuchi, and G. A. Voth. 2007. Quantum effects in liquid water from an ab initio-based<br />
polarizable force field. J. Chem. Phys.:074506.<br />
342. Zhou, J., I. F. Thorpe, S. Izvekov, and G. A. Voth. 2007. Coarse-grained peptide modeling using a<br />
systematic force-matching approach. Biophys. J. 92:4289-4303.<br />
343. Chu, J.-W., and G. A. Voth. 2007. Coarse-grained free energy functions for studying protein<br />
conformational changes: A double-well network model. Biophys. J. 93:3860-3871.<br />
344. Chu, J. W., and G. A. Voth. 2006. Coarse-Grained Modeling of the Actin Filament Derived from<br />
Atomistic-Scale Simulations. Biophys. J. 90:1572-1582.<br />
345. Liu, P., S. Izvekov, and G. A. Voth. 2007. Multiscale coarse-graining of monosaccharides. J. Phys.<br />
Chem. B. 111:11566-11575.<br />
346. Blood, P. D., R. D. Swenson, and G. A. Voth. 2008. Factors influencing local membrane curvature<br />
induction by N-BAR domains as revealed by molecular dynamics simulations. Biophys. J 95:1866-1876.<br />
347. Noid, W. G., P. Liu, Y. Wang, J.-W. Chu, G. S. Ayton, S. Izvekov, H. C. Andersen, and G. A. Voth.<br />
2008. The Multiscale coarse-graining method. II. Numerical implementation for coarse-grained<br />
molecular models. J. Chem. Phys. 128:244115.<br />
348. Thorpe, I., J. Zhou, and G. A. Voth. 2008. Peptide folding using multi-scale coarse-grained model. J.<br />
Phys. Chem B. 112:13079-13090.<br />
349. Wang, F., S. Izvekov, and G. A. Voth. 2008. Unusual "Amphiphilic" Association of Hydrated Protons in<br />
Strong Acid Solution. J. Am. Chem. Soc. 130:3120-3126.<br />
350. Wu, Y., H. Chen, F. Wang, F. Paesani, and G. A. Voth. 2008. An Improved Multistate Empirical Valence<br />
Bond Model for Aqueous Proton Solvation and Transport. J. Phys. Chem. B 112:467-482.<br />
351. Jeon, J., and G. A. Voth. 2008. Gating of the mechanosensitive channel protein MscL: The interplay of<br />
membrane and protein. Biophys. J. 94:3497-3511.<br />
352. Izvekov, S., J. M. J. Swanson, and G. A. Voth. 2008. Coarse-Graining in Interaction Space: A<br />
Systematic Approach for Replacing Long-Range Electrostatics with Short-Range Potentials. J. Phys.<br />
Chem. B 112:4711-4724.<br />
353. Xu, J., and G. A. Voth. 2008. Redox-coupled proton pumping in cytochrome c oxidase: Further insights<br />
from computer simulation. Biochimica et Biophysica Acta (BBA) - Bioenergetics 1777:196-201.<br />
354. Zhang, Z., L. Lu, W. G. Noid, V. Krishna, J. Pfaendtner, and G. A. Voth. 2008. A Systematic<br />
Methodology for Defining Coarse-grained Sites in Large Biomolecules. Biophys. J. 95:5073-5083.<br />
355. Shi, Q., P. Liu, and G. A. Voth. 2008. Coarse-graining in Interaction Space: An Analytical Approximation<br />
for the Effective Short-ranged Electrostatics. J. Phys. Chem. B 112:16230.<br />
356. Lu, L., and G. A. Voth. 2009. Systematic coarse-graining of a multi-component lipid bilayer. J. Phys.<br />
Chem. B 113:1501-1510.<br />
357. Wang, Y., W. G. Noid, P. Liu, and G. A. Voth. 2009. Effective force coarse-graining. Phys. Chem.<br />
Chem. Phys. 11:2002-2015.<br />
358. Izvekov, S., and G. A. Voth. 2008. A solvent free lipid bilayer model using multiscale coarse-graining. J.<br />
Phys. Chem. B 113:4443-4455.<br />
359. Chen, H., J. Xu, and G. A. Voth. 2009. Unusual Hydrophobic Interactions in Acidic Aqueous Solutions.<br />
J. Phys. Chem. B 113:7291.<br />
360. Li, H., H. Chen, T. Zeuthen, C. Conrad, B. Wu, E. Beitz, and G. A. Voth. 2010. Enhancement of Proton<br />
Conductance by Mutations of the Selectivity Filter of Aquaporin-1. (In preparation).<br />
361. Knight, C., C. M. Maupin, S. Izvekov, and G. A. Voth. 2010. Force matching reactive force fields using<br />
condensed phase ab initio simulations: Excess proton in bulk water. (in preparation).<br />
362. Wang, D., and G. A. Voth. 2009. Proton Transport Pathway in the CLC Cl-/H+ Antiporter. Biophys. J.<br />
97:121-131.<br />
363. Yamashita, T., and G. A. Voth. 2010. Properties of Hydrated Excess Protons near Phospholipid<br />
Bilayers. J. Phys. Chem. B 114:592-603.<br />
364. Ayton, G. S., E. Lyman, V. Krishna, R. Swenson, C. Mim, V. Unger, and G. A. Voth. 2009. New insights<br />
into BAR domain induced membrane remodeling. Biophys. J. 97:1616-1625.<br />
365. Ayton, G. S., E. Lyman, and G. A. Voth. 2010. Hierarchical coarse-graining strategy for protein-membrane<br />
systems to access mesoscopic scales. Faraday Disc. Chem. Soc. 144:347-357.<br />
366. Ayton, G. S., and G. A. Voth. 2009. A hybrid coarse-graining approach for lipid bilayers at large length<br />
and time scales. J. Phys. Chem. B 113:4413-4424.<br />
367. Ayton, G. S., and G. A. Voth. 2010. Multiscale simulation of the HIV-1 virion. Biophys. J. submitted.<br />
368. Krishna, V., G. S. Ayton, and G. A. Voth. 2010. Role of protein interactions in defining HIV-1 viral capsid<br />
shape and stability: A coarse-grained analysis. Biophys. J. 98:18-26.<br />
369. Lyman, E., C. Higgs, B. Kim, D. Lupyan, J. C. Shelley, R. Farid, and G. A. Voth. 2009. A Role for a<br />
Specific Cholesterol Interaction in Stabilizing the Apo Configuration of the Human A2A Adenosine<br />
Receptor. Structure 17:1660-1668.<br />
370. Pfaendtner, J., D. Branduardi, M. Parrinello, T. D. Pollard, and G. A. Voth. 2009. The Nucleotide-<br />
Dependent Conformational States of Actin. Proc. Nat. Acad. Sci. 106:12723-12728.<br />
371. Pfaendtner, J., E. M. De La Cruz, and G. A. Voth. 2010. Actin filament remodeling by actin<br />
depolymerization factor/cofilin. Proc. Natl. Acad. Sci. DOI:10.1073/pnas.0911675107.<br />
372. Pfaendtner, J., E. Lyman, T. D. Pollard, and G. A. Voth. 2010. Structure and dynamics of the actin<br />
filament. J. Mol. Biol. 396:252-263.<br />
373. Cui, H. S., G. S. Ayton, and G. A. Voth. 2009. Membrane Binding by the Endophilin N-BAR Domain.<br />
Biophysical Journal 97:2746-2753.<br />
374. Lyman, E., H. S. Cui, and G. A. Voth. 2010. Water under the BAR. Biophys. J. submitted.<br />
375. Wang, Y., and G. A. Voth. 2010. Molecular dynamics simulations of polyglutamine aggregation using<br />
solvent-free multiscale coarse-grained models. J. Phys. Chem. B submitted.<br />
376. Hills, R., L. Lu, and G. A. Voth. 2010. Multiscale coarse-graining of the folding energy landscape:<br />
Towards a general protein force field. PLoS Computational Biology submitted.<br />
377. Lai, C. L., K. E. Landgraf, G. A. Voth, and J. J. Falke. 2010. EPR-guided molecular dynamics simulation<br />
successfully predicts the PIP2 lipid binding stoichiometry of membrane-bound PKCalpha C2 domain.<br />
Biophys. J. submitted.<br />
378. Jao, C. C., C. L. Lai, G. S. Ayton, J. L. Gallop, B. J. Peter, H. T. McMahon, G. A. Voth, and R. Langen.<br />
2010. Membrane binding and self-association of the epsin N-terminal homology domain. Biophys. J.<br />
submitted.<br />
379. Larini, L., L. Lu, and G. A. Voth. 2010. The multiscale coarse-graining method. VI. Implementation of<br />
three-body coarse-grained potentials. J. Chem. Phys. in press.<br />
380. Lu, L., S. Izvekov, A. Das, H. C. Andersen, and G. A. Voth. 2010. Efficient, Regularized, and Scalable<br />
Algorithms for Multiscale Coarse-Graining. Journal of Chemical Theory and Computation 6:954-965.<br />
Computer and Computation Research<br />
CCR090035<br />
381. Chiu, C.; Udayana, R. J. K.; Flores, D. T.; Perez, V. D.; Moore, P. B.; Shinoda, W.; Nielsen S. O. A<br />
mean field approach for computing solid-liquid surface tension for nanoscale interfaces. J. Chem. Phys.<br />
2010, 132:054706.<br />
382. Gao, J.; Li, Z. Uncover the conserved property underlying sequence-distant and structure-similar<br />
proteins. Biopolymers, 2010, 93(4), 340-347.<br />
383. Chiu, C.; Moore, P. B.; Shinoda, W.; Nielsen, S. O. Size-dependent hydrophobic to hydrophilic transition<br />
for nanoparticles: A molecular dynamics study. J. Chem. Phys. 2009, 131:244706.<br />
384. Vazquez U. OM.; Shinoda, W.; Chiu, C.; Nielsen, S. O.; Moore, P. B. Calculating the surface tension<br />
between a flat solid and a liquid: a theoretical and computer simulation study of three topologically<br />
different methods. J. Math. Chem. 2009, 45:161-174.<br />
385. Kalescky, R. JB.; Shinoda, W.; Chiu, C.; Moore, P. B.; Nielsen, S. O. Area per ligand as a function of<br />
nanoparticle radius: A theoretical and computer simulation approach; Langmuir, 2009, 25:1352-1359.<br />
386. Liu, Z.; Remsing, R. C.; Liu, D.; Moyna, G.; Pophristic, V. Hydrogen bonding in ortho-substituted<br />
arylamides: The influence of protic solvents. J. Phys. Chem. B 2009, 113, 7041-7044.<br />
387. Galan, J. F.; Brown, J.; Wildin, J.; Liu, Z.; Liu, D.; Moyna, G.; Pophristic, V. Intramolecular hydrogen<br />
bonding in ortho-substituted arylamide oligomers: A computational and experimental study of ortho-fluoro<br />
and ortho-chloro N-methylbenzamides. J. Phys. Chem. B 2009, 113, 12809-12815.<br />
388. Petrik, I.; Remsing, R. C.; Liu, Z.; O'Brien, B. B.; Moyna, G. Solvation of carbohydrates in N,N'-<br />
Dialkylimidazolium ionic liquids: Insights from multinuclear NMR spectroscopy and molecular dynamics<br />
simulations. In ACS symposium book series, 2009.<br />
389. Gao, J.; Li, Z. Comparing four different approaches for the determination of inter-residue interactions<br />
provides insight for the structure prediction of helical membrane proteins. Biopolymers, 2009, 91(7), 547-<br />
556.<br />
390. Pabuwal, V.; Li, Z. Comparative analysis of the packing topology of structurally important residues in<br />
helical membrane and soluble proteins. Protein Eng. Des. Sel., 2009, 22(2), 67-73.<br />
391. Nguyen, TH. T.; Rao, N. Z.; Schroeder, W. M.; Moore, P. B. Coarse-grained molecular dynamics of<br />
tetrameric transmembrane peptide bundle within a lipid bilayer. Chem. Phys. of Lipids. (Accepted with<br />
minor revisions)<br />
392. Galan, J. F.; Tang, C. N.; Chakrabarty, S.; Liu, Z.; Moyna, G.; Pophristic, V. Influence of Solvents on<br />
Intramolecular Hydrogen Bonding in Furan and Thiophene Based Amides: A Computational and<br />
Experimental Study. (In preparation)<br />
393. Galan, J. F.; Liu, Z.;Tang, C. N.; Luong, M.; Tooley, C.; Pophristic, V. Relevant structural features that<br />
control the degree of backbone flexibility in aromatic oligoamide foldamers: A Quantum Mechanical<br />
Assessment. (In preparation)<br />
394. Liu, Z.; Ensing, B.; Moore, P. B. Quantitative Assessment of the Performance of Commonly used Force<br />
Fields on both Low Energy and Transition State Regions of the (φ, ψ) Conformational Space: Free<br />
Energy Surfaces of Alanine Dipeptide. (In preparation)<br />
395. Liu, Z.; Pophristic, V. A Detailed ab initio Molecular Orbital Study of Intramolecular Hydrogen Bonding in<br />
ortho-substituted Arylamide: Implications in the Parameterization of Molecular Mechanics Force Fields<br />
and the Design of Arylamide Foldamer. (In preparation)<br />
396. Liu, Z.; Petrik, P.; Remsing, R. C.; O'Brien, B.; Moyna, G. Cation - Carbohydrate Interactions in Ionic<br />
Liquids, Fact or Fiction? A Computer Simulation Study. (In preparation)<br />
397. Coarse-grained molecular dynamics simulations of fullerenes interacting with lipid bilayers, Preston<br />
Moore, Steven Nielsen, Russel DeVane, ACS Mid-Atlantic Regional Meeting (MARM): April 10-13, 2010<br />
Wilmington, Oral Presentation<br />
398. Binding pocket analysis of seven helical transmembrane proteins: Is sequence identity alone suitable<br />
for modeling GPCRs as drug targets? Vagmita Pabuwal and Zhijun Li, ACS Mid-Atlantic Regional<br />
Meeting (MARM): April 10-13, 2010 Wilmington, Oral Presentation<br />
399. Area per ligand as a function of nanoparticle radius, A theoretical and computer simulation approach,<br />
Preston Moore, Robert JB. Kalescky, Wataru Shinoda, and Steven O. Nielsen, ACS 238th National<br />
Meeting: August 16-20, 2009 Washington DC, Oral Presentation<br />
400. Expanding NMR in the undergraduate curriculum using sodium-23 NMR in foods, Donald A Bouchard<br />
and Guillermo Moyna, ACS 238th National Meeting: August 16-20, 2009 Washington DC, Oral<br />
Presentation<br />
401. Aromatic oligoamide foldamers: Conformational insights from molecular dynamics simulation using<br />
reparameterized molecular mechanics force fields regarding aromatic-amide torsions, Zhiwei Liu,<br />
Jhenny Galan, and Vojislava Pophristic, ACS 238th National Meeting: August 16-20, 2009 Washington<br />
DC, Oral Presentation<br />
402. Modeling of transmembrane proteins and peptides: All-atom and coarse grain molecular dynamics<br />
simulations of helical bundles in palmitoyloleoylphosphatidylcholine (POPC) lipid bilayer, Zhiwei Liu,<br />
Thuy Hien Nguyen, Jhenny Galan, Russell DeVane, and Preston B Moore, ACS 238th National<br />
Meeting: August 16-20, 2009 Washington DC, Oral Presentation<br />
403. Intramolecular hydrogen bonding of ortho-substituted arylamide oligomers: Model compound study,<br />
Jhenny Galan, J Brown, Zhiwei Liu, Jayme L. Wildin, Chi Ngong Tang, and Vojislava Pophristic, ACS<br />
238th National Meeting: August 16-20, 2009 Washington DC, Oral Presentation<br />
404. Investigating intramolecular hydrogen bonding in aromatic oligoamide foldamers, Jessica Geer, Jayme<br />
L. Wildin, Guillermo Moyna, Jhenny Galan, Zhiwei Liu, and Vojislava Pophristic, ACS 238th National<br />
Meeting: August 16-20, 2009 Washington DC, Oral Presentation<br />
405. Coarse grain molecular dynamics simulations of membrane proteins, Jhenny Galan, Zhiwei Liu, Russell<br />
DeVane, and Preston B. Moore, ACS 238th National Meeting: August 16-20, 2009 Washington DC,<br />
Oral Presentation<br />
406. Coarse grain parameterization of drugs, proteins, and lipids for use in molecular dynamics simulations,<br />
Preston B. Moore, Russell DeVane, Jhenny Galan, and Zhiwei Liu, ACS 238th National Meeting:<br />
August 16-20, 2009 Washington DC, Oral Presentation<br />
407. Molecular dynamics of amphipathic peptides embedded in a lipid bilayer, Thuy Hien T Nguyen, Zhiwei<br />
Liu, Preston B Moore, ACS Mid-Atlantic Regional Meeting (MARM): April 10-13, 2010 Wilmington,<br />
Poster Presentation<br />
408. Mechanism of anesthetic binding to lipid bilayer, Nicolas Chen, Jhenny Galan, Zhiwei Liu, Preston<br />
Moore, Julian Snow, ACS Mid-Atlantic Regional Meeting (MARM): April 10-13, 2010 Wilmington, Poster<br />
Presentation<br />
409. Development of a coarse-grain intramolecular forcefield for proteins, Kenny Nguyen, Jhenny Galan,<br />
Zhiwei Liu, Russell DeVane, Preston Moore, ACS Mid-Atlantic Regional Meeting (MARM): April 10- 13,<br />
2010 Wilmington, Poster Presentation<br />
410. Targeting the human Androgen Receptor with steroidal CYP17 inhibitors: A Molecular Docking<br />
approach, Eleonora Gianti, Randy J. Zauhar, Puranik Purushottamachar, Vincent C. O. Njar, ACS Mid-<br />
Atlantic Regional Meeting (MARM): April 10-13, 2010 Wilmington, Poster Presentation<br />
411. Computational study of the electronic structure and function of a novel class of cyclic Phosphazenes,<br />
Whelton A. Miller, Vincent S. Pagnotti, Edward R. Birnbaum, Preston B. Moore, ACS Mid-Atlantic<br />
Regional Meeting (MARM): April 10-13, 2010 Wilmington, Poster Presentation<br />
412. Comparative analysis of the packing topology of structurally important residues in helical membrane<br />
and soluble proteins, Vagmita Pabuwal and Zhijun Li, ACS 238th National Meeting: August 16-20, 2009<br />
Washington DC, Poster Presentation<br />
413. Implementation of a novel coarse-grain model using rhodopsin in a lipid bilayer, Kenny Nguyen, Jhenny<br />
Galan, Zhiwei Liu, and Preston Moore, ACS 238th National Meeting: August 16-20, 2009 Washington<br />
DC, Poster Presentation<br />
414. Mechanism of anesthetic binding to lipid bilayer, Nicolas Chen, Thuy Hien T. Nguyen, Julian Snow, and<br />
Preston B. Moore, ACS 238th National Meeting: August 16-20, 2009 Washington DC, Poster<br />
Presentation<br />
415. Molecular dynamics of proteins embedded in a lipid bilayer, Thuy Hien Nguyen and Preston B Moore,<br />
ACS 238th National Meeting: August 16-20, 2009 Washington DC, Poster Presentation<br />
416. Surface tension, contact angle, and line tension in a liquid nanodroplet, Vladimir Perez, Chi-cheng Chiu,<br />
Steven O. Nielsen, and Preston Moore, ACS 238th National Meeting: August 16-20, 2009 Washington<br />
DC, Poster Presentation<br />
417. Molecular dynamics studies of cellobiose in N,N'-dialkylimidazolium ionic liquids, Igor D. Petrik, Richard<br />
C. Remsing, Zhiwei Liu, Christopher Petoukhoff, and Guillermo Moyna, ACS 238th National Meeting:<br />
August 16-20, 2009 Washington DC, Poster Presentation<br />
418. Molecular dynamics studies of N,N'-dialkylimidazolium ionic liquid solvation and aggregation, Igor D.<br />
Petrik, Richard C. Remsing, Brendan B. O'Brien, Zhiwei Liu, and Guillermo Moyna, ACS 238th National<br />
Meeting: August 16-20, 2009 Washington DC, Poster Presentation<br />
Earth Sciences<br />
MCA04N026<br />
419. V. Akcelik, H. Flath, O. Ghattas, J. Hill, B. van Bloemen Waanders, and L. Wilcox, Fast algorithms for<br />
Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial<br />
Hessian approximations. Submitted, 2009.<br />
420. L. Alisic, C. Burstedde, O. Ghattas, M. Gurnis, G. Stadler, E. Tan, L. C. Wilcox, and S. Zhong, Rhea:<br />
Parallel adaptive mantle convection. Manuscript to be submitted to Geophysical Journal International,<br />
http://ccgo.ices.utexas.edu/publications/AlisicBursteddeGhattasEtAl10.pdf, 2010.<br />
421. C. Burstedde, M. Burtscher, O. Ghattas, G. Stadler, T. Tu, and L. C. Wilcox, ALPS: A framework for<br />
parallel adaptive PDE solution, Journal of Physics: Conference Series, 180 (2009), p. 012009.<br />
422. C. Burstedde, O. Ghattas, M. Gurnis, T. Isaac, A. Klockner, G. Stadler, T. Warburton, and L. C.<br />
Wilcox, Extreme-scale AMR. Submitted to the Proceedings of the 2010 ACM/IEEE Conference on<br />
Supercomputing. http://ccgo.ices.utexas.edu/publications/BursteddeGhattasGurnisEtAl10.pdf, 2010.<br />
423. C. Burstedde, O. Ghattas, M. Gurnis, E. Tan, T. Tu, G. Stadler, L. C. Wilcox, and S. Zhong, Scalable<br />
adaptive mantle convection simulation on petascale supercomputers, in SC '08: Proceedings of the<br />
2008 ACM/IEEE Conference on Supercomputing, Piscataway, NJ, USA, 2008, IEEE Press, pp. 1–15.<br />
Gordon Bell Prize finalist.<br />
424. C. Burstedde, O. Ghattas, J. Martin, and L. C. Wilcox, Uncertainty quantification in inverse problems<br />
with stochastic Newton MCMC. In preparation, 2010.<br />
425. C. Burstedde, O. Ghattas, G. Stadler, T. Tu, and L. C. Wilcox, Towards adaptive mesh PDE<br />
simulations on petascale computers, in Proceedings of Teragrid '08, 2008. Winner, NSF TeraGrid<br />
Capability Computing Challenge.<br />
426. ALPS: A framework for parallel adaptive PDE solution, 2009. Best poster at IEEE/ACM SC'09.<br />
427. C. Burstedde, O. Ghattas, G. Stadler, T. Tu, and L. C. Wilcox, Parallel scalable adjoint-based adaptive<br />
solution for variable-viscosity Stokes flows, Computer Methods in Applied Mechanics and Engineering,<br />
198 (2009), pp. 1691–1700.<br />
428. C. Burstedde, L. C. Wilcox, and O. Ghattas, p4est: Scalable algorithms for parallel adaptive mesh<br />
refinement on forests of octrees. Submitted to SIAM Journal on Scientific Computing.<br />
http://ccgo.ices.utexas.edu/publications/BursteddeWilcoxGhattas10.pdf, 2010.<br />
429. T. Isaac, C. Burstedde, L. C. Wilcox, and O. Ghattas, Fast recursive algorithms on a parallel adaptive<br />
forest of octrees. In preparation, 2010.<br />
430. G. Stadler, C. Burstedde, O. Ghattas, T. Tu, and L. C. Wilcox, Towards highly parallel mesh adaptation<br />
for large-scale PDE applications, in Optimal Control of Coupled Systems of PDE, K. Kunisch, G.<br />
Leugering, J. Sprekels, and F. Troltzsch, eds., vol. 13/2008 of Oberwolfach Report, Mathematisches<br />
Forschungsinstitut Oberwolfach, 2008, pp. 645–655.<br />
431. G. Stadler, M. Gurnis, C. Burstedde, L. C. Wilcox, L. Alisic, and O. Ghattas, The dynamics of plate<br />
tectonics and mantle flow: From local to global scales. Submitted to Science,<br />
http://www.gps.caltech.edu/~gurnis/Share/Rhea_papers/Stadler_etal_2010_with_Figures.pdf, 2010.<br />
432. L. C. Wilcox, G. Stadler, C. Burstedde, and O. Ghattas, A high-order discontinuous Galerkin method for<br />
wave propagation through coupled elastic-acoustic media, Journal of Computational Physics, (2010).<br />
Submitted, http://ccgo.ices.utexas.edu/publications/WilcoxStadlerBursteddeEtAl10.pdf.<br />
433. S. Zhong, A. McNamara, E. Tan, L. Moresi, and M. Gurnis, A benchmark study on mantle convection in<br />
a 3-D spherical shell using CitcomS, Geochemistry, Geophysics, Geosystems, 9 (2008), pp. 10017+.<br />
MCA08X018<br />
434. Kollat, J.B. and P. Reed, Visualizing the Time-Evolving Consequences and Tradeoffs for Investments in<br />
Environmental Data. In preparation; expected submission September 2010.<br />
435. Hadka, D. and P.M. Reed, Diagnostic Assessment of Search Controls and Failure Modes in Many-<br />
Objective Evolutionary Optimization. IEEE Transactions on Evolutionary Computation, In-Review.<br />
436. Kollat, J.B., P.M. Reed, and R.M. Maxwell, Many-Objective Groundwater Monitoring Network Design<br />
using Bias-Aware Ensemble Kalman Filtering, Evolutionary Optimization, and Visual Analytics. Water<br />
Resources Research, In-Review.<br />
Electrical and Communications Systems<br />
MCA08X012<br />
437. G. Klimeck, M. Luisier, H. Ryu, S. Lee, R. Rahman, L. Hollenberg, B. Weber and M. Simmons,<br />
“Atomistic Electronic Structure and Transport Modeling of Realistically Extended Nanoelectronics<br />
Devices,” Invited to International Conference On Nanoscience and Nanotechnology (ICONN), Sydney,<br />
Feb. 2010.<br />
438. G. Klimeck, “Conceptual Challenges for Simulation in Nanotechnology,” Invited to NSF Nanoscale<br />
Science and Engineering Grantees Conference, Arlington, Dec. 2009.<br />
439. B. Weber, S. Mahapatra, H. Ryu, S. Lee, G. Klimeck and M. Simmons, “Atomistic-scale Si:P dopant<br />
wires,” presented to International Conference On Nanoscience and Nanotechnology (ICONN), Sydney,<br />
Feb. 2010.<br />
440. W. Pok, G. Scappucci, W. Lee, W. Thompson, H. Buech, B. Weber, S. Mahapatra, L. Hollenberg, M.<br />
Simmons, H. Ryu, S. Lee, G. Klimeck, M. Friesen and M. A. Eriksson, “Precision control of tunneling in<br />
STM-patterned Si:P devices,” Silicon Qubit Workshop, University of California at Berkeley, Aug. 2009.<br />
441. H. Ryu, G. Klimeck, S. Lee, R. Rahman, B. Haley, S. H. Park, N. Kharche, Z. Jiang, T. Boykin, C.<br />
Wellard, J. Cole, L. Hollenberg, G. Lansbergen, S. Rogge, B. Weber and M. Simmons, “Nanoelectronic<br />
Modeling (NEMO) for High Fidelity Simulation of Solid- State Quantum Computing Gates”, Silicon Qubit<br />
Workshop, University of California at Berkeley, Aug. 2009.<br />
442. S. Kim, A. Paul, M. Luisier, T. B. Boykin and G. Klimeck, “Full 3-D Quantum Transport Simulation of<br />
Interface Roughness in Nanowire FETs,” presented to American Physics Society (APS) Meeting, Mar<br />
2010.<br />
443. H. Ryu, S. Lee, B. Weber, S. Mahapatra, M. Y. Simmons, L. C. L. Hollenberg, and G. Klimeck,<br />
“Quantum Transport in Ultra-scaled Phosphorus-doped Silicon Nanowires,” submitted to IEEE Silicon<br />
Nanoelectronics Workshop (SNW), Hawaii, Jun 2010.<br />
444. B. Weber, S. Mahapatra, H. Ryu, S. Lee, G. Klimeck and M. Y. Simmons, “Atomic-scale Si:P dopant<br />
wire,” submitted to IEEE Silicon Nanoelectronics Workshop (SNW), Hawaii, Jun 2010.<br />
445. S. Lee, H. Ryu and G. Klimeck, “Equilibrium Bandstructure of a Phosphorus-doped Layer in Silicon<br />
using a Tight-binding Approach,” submitted to International Conference on Nanotechnology (IEEE<br />
NANO) 2010, Seoul, Aug 2010.<br />
Environmental Biology<br />
DEB090011<br />
446. Pfeiffer, W. and Stamatakis, A. (2010) “Hybrid MPI/Pthreads parallelization of the RAxML<br />
phylogenetics code.” Ninth IEEE International Workshop on High Performance Computational Biology<br />
(HiCOMB 2010) Atlanta, April 19, 2010.<br />
447. Pfeiffer, W. and Stamatakis, A. (2010) “Hybrid parallelization of the MrBayes and RAxML phylogenetics<br />
codes.” http://wwwkramer.in.tum.de/exelixis/Phylo100225.pdf.<br />
448. San Mauro, D., and Agorreta, A. (2010) "Molecular systematics: A synthesis of the common methods<br />
and the state of knowledge." Cell Mol Biol Lett. 15(2):311-41. Epub 2010 Mar 5.<br />
449. Regier, J.C., Shultz, J.W., Zwick, A., Hussey, A., Ball, B., Wetzer, R., Martin, J. W., and Cunningham,<br />
C. W. (2010) "Arthropod relationships revealed by phylogenomic analysis of nuclear protein-coding<br />
sequences." Nature 463, 1079-1083.<br />
450. Meyer, A., and Lieb, B. (2010) "Respiratory proteins in Sipunculus nudus - Implications for phylogeny<br />
and evolution of the hemerythrin family." Comparative Biochemistry and Physiology Part B:<br />
Biochemistry and Molecular Biology 155, 171-177.<br />
451. Fritz-Laylin, L.K., Prochnik, S.E., Ginger, M.L., Dacks, J.B., Carpenter, M.L., Field, M.C., Kuo, A.,<br />
Paredez, A., Chapman, J., Pham, J., Shu, S., Neupane, R., Cipriano, M., Mancuso, J., Tu, H., Salamov,<br />
A., Lindquist, E., Shapiro, H., Lucas, S., Grigoriev, I.V., Cande, W.Z., Fulton, C., Rokhsar, D.S., and<br />
Dawson, S.C. (2010) "The Genome of Naegleria gruberi Illuminates Early Eukaryotic Versatility" Cell,<br />
140(5): 631-642.<br />
452. Pang, P.S., Planet, P.J., and Glenn, J.S. (2009) "The Evolution of the Major Hepatitis C Genotypes<br />
Correlates with Clinical Response to Interferon Therapy" PLoS ONE 4(8): e6579.<br />
453. McKenna, D.D., Sequeira, A.S., Marvaldi, A.E., and Farrell, B.D. (2009). "Temporal lags and overlap in<br />
the diversification of weevils and flowering plants." PNAS. 106(17):7083-7088.<br />
454. Van der Merwe, M.M., Kinnear, M.W., Barrett, L.G., Dodds, P.N., Ericson, L., Thrall, P.H., and Burdon,<br />
J.J., "Positive selection in AvrP4 avirulence gene homologues across the genus Melampsora" Proc. R.<br />
Soc. B 276:2913-2922.<br />
455. Zimkus, B.M. (2009) "Patterns of speciation and diversity among African frogs (genus<br />
Phrynobatrachus)." Ph.D. dissertation, Harvard University.<br />
456. Toon, A., Finley, M., Staples, J., and Crandall, K.A. (2009) "Decapod phylogenetics and molecular<br />
evolution" in Decapod Crustacean Phylogenetics (Crustacean Issues) (J. Martin, D. L. Felder, and K. A.<br />
Crandall, eds.). CRC Press, Boca Raton.<br />
457. Dolapsakis, N. and Economou-Amilli, A. (2009) "A new marine species of Amphidinium (Dinophyceae)<br />
from Thermaikos Gulf, Greece." Acta Protozoologica 48(2):91-190.<br />
458. Klassen, J.L. (2009) "Carotenoid diversity in novel Hymenobacter strains isolated from Victoria Upper<br />
Glacier, Antarctica, and implications for the evolution of microbial carotenoid biosynthesis." PhD thesis,<br />
University of Alberta, Edmonton, Canada.<br />
459. Klassen, J.L. (2009) "Pathway evolution by horizontal transfer and positive selection is accommodated<br />
by relaxed negative selection upon upstream pathway genes in purple bacterial carotenoid<br />
biosynthesis." Journal of Bacteriology, in press.<br />
460. Spinks, P.Q. and Shaffer, H.B. (2009). "Conflicting mitochondrial and nuclear phylogenies for the widely<br />
disjunct Emys (Testudines: Emydidae) species complex, and what they tell us about biogeography and<br />
hybridization." Syst. Biol. 58(1):1–20.<br />
461. Spinks, P.Q., Thomson, R.C., Lovely, G.A. and Shaffer, H.B. (2009) "Assessing what is needed to<br />
resolve a molecular phylogeny: simulations and empirical data from emydid turtles." BMC Evolutionary<br />
Biology 9:56. doi:10.1186/1471-2148-9-56.<br />
462. Penna, A., Fraga, S, Batocchi, C., Casabianca, S., Giacobbe, M.G., Riobò, P. and Vernesi, C. (2009) "A<br />
phylogeographic study of the toxic benthic dinoflagellate genus Ostreopsis Schmidt." Journal of<br />
Biogeography, in press.<br />
463. Hibbett, D.S. and Matheny, P.B. (2009) "The relative ages of ectomycorrhizal mushrooms and their<br />
plant hosts estimated using Bayesian relaxed molecular clock analyses." BMC Biology 7:13<br />
doi:10.1186/1741-7007-7-13.<br />
464. Elias M. and Archibald J. M. (2009) "The RJL family of small GTPases is an ancient eukaryotic<br />
invention probably functionally associated with the flagellar apparatus." Gene 442(1-2):63-72. Epub<br />
2009 Apr 22. doi:10.1016/j.gene.2009.04.011<br />
465. Gile, G.H., Novis, P.M., Cragg D.S., Zuccarello G.C., Keeling P.J. (2009) "The distribution of EF-1alpha,<br />
EFL, and a non-canonical genetic code in the Ulvophyceae: Discrete genetic characters support a<br />
consistent phylogenetic framework." J. Eukaryot. Microbiol. 56(4):367-372.<br />
466. Jabaily, R.S. and Sytsma, K.J. (2009) "Phylogenetics of Puya (Bromeliaceae): Placement, major<br />
lineages and evolution of Chilean species." American Journal of Botany, in the press.<br />
467. Weirauch, C. and Munro, J. (2009) "Molecular phylogeny of the assassin bugs (Hemiptera: Reduviidae),<br />
based on mitochondrial and nuclear ribosomal genes." Molecular Phylogenetics & Evolution 53:<br />
287-299. doi:10.1016/j.ympev.2009.05.039<br />
468. Wilson, N. G., Rouse, G. W., and Giribet, G. (2009) "Assessing the molluscan hypothesis Serialia<br />
(Monoplacophora + Polyplacophora) using novel molecular data." Mol Phylogenet Evol. Jul 30. [Epub<br />
ahead of print]. doi:10.1016/j.ympev.2009.07.028<br />
469. Segade, F. (2009) "Functional evolution of the microfibril-associated glycoproteins." Gene 439:43–54.<br />
470. Welsh, A.K., Dawson, J.O., Gottfried, G.J., and Hahn, D. "Diversity of Frankia in root nodules of<br />
geographically isolated Arizona alder in central Arizona (USA)." Applied and Environmental<br />
Microbiology (in press).<br />
471. Mirza, B.S., Welsh, A.K., and Hahn, D. (2009) "Growth of Frankia strains in leaf litter-amended soil and<br />
the rhizosphere of a nonactinorhizal plant." FEMS Microbiology Ecology 70:132-141 Epub 2009 Jul 21.<br />
472. Welsh, A.K., B.S. Mirza, J.P. Rieder, M.W. Paschke, D. Hahn. (2009) "Diversity of frankiae in nodules of<br />
Morella pensylvanica grown in soils from five continents." Systematic and Applied Microbiology 32: 201-<br />
210. doi:10.1016/j.syapm.2009.01.002<br />
473. Mirza, B.S., A.K. Welsh, G. Rasul, J.P. Rieder, M.W. Paschke, D. Hahn. (2009) "Variation in Frankia<br />
populations of the Elaeagnus host infection group in nodules of six host plant species after inoculation<br />
with soil." Microbial Ecology 58:384-393. doi:10.1007/s00248-009-9513-0<br />
474. Li, J-t., Che, J., Murphy, R.W., Zhao, H., Zhao, E.-m., Rao, D.-q., and Zhang, Y.-p. (2009) "New insights<br />
to the molecular phylogenetics and generic assessment in the Rhacophoridae (Amphibia: Anura) based<br />
on five nuclear and three mitochondrial genes, with comments on the evolution of reproduction."<br />
Molecular Phylogenetics and Evolution 53: 509–522. doi:10.1016/j.ympev.2009.06.023<br />
475. Murienne, J., Karaman, I., and Giribet, G. (2009) "Explosive evolution of an ancient group of<br />
Cyphophthalmi (Arachnida: Opiliones) in the Balkan Peninsula." J. Biogeography Published online, Oct<br />
14, 2009.<br />
476. Schulte II, J.A., and Cartwright, E.M. (2009) Phylogenetic relationships among iguanian lizards using<br />
alternative partitioning methods and TSHZ1: A new phylogenetic marker for reptiles. Molecular<br />
Phylogenetics and Evolution 50:391-396. doi:10.1016/j.ympev.2008.10.018.<br />
477. Armbruster, W.S., Lee, J., and Baldwin, B.G. (2009) "Macroevolutionary patterns of defense and<br />
pollination in Dalechampia vines: Adaptation, exaptation, and evolutionary novelty" Proc. Natl. Acad. Sci<br />
USA. Oct 19 e-pub ahead of publication.<br />
478. Zimkus, B.M., Rödel, M.-O., and Hillers, A. (2009) Complex patterns of continental speciation: molecular<br />
phylogenetics and biogeography of sub-Saharan puddle frogs (Phrynobatrachus). Manuscript submitted.<br />
479. Chen, W.-J., Lheknim, V. and Mayden, R. L. (2009). "Molecular phylogeny of the Cobitoidea (Teleostei:<br />
Cypriniformes) revisited: position of enigmatic loach Ellopostoma Vaillant resolved with six nuclear<br />
genes." Journal of Fish Biology 75, In press.<br />
480. Chen, W.-J. and Mayden, R. L. (2009). Molecular systematics of the Cyprinoidea (Teleostei:<br />
Cypriniformes), the world’s largest clade of freshwater fishes: Further evidence from six nuclear genes.<br />
Molecular Phylogenetics and Evolution 52, 544-549. doi:10.1016/j.ympev.2009.01.006<br />
481. Mayden, R. L., Chen, W.-J., Bart, H. L., Doosey, M. H., Simons, A. M., Tang, K. L., Wood, R. M.,<br />
Agnew, M. K., Yang, L., Hirt, M. V., Clements, M. D., Saitoh, K., Sado, T., Miya, M. and Nishida, M.<br />
(2009). "Reconstructing the phylogenetic relationships of the Earth’s most diverse clade of freshwater<br />
fishes — Order Cypriniformes (Actinopterygii: Ostariophysi): a case study using multiple nuclear loci<br />
and the mitochondrial genome." Molecular Phylogenetics and Evolution 51, 500-514.<br />
doi:10.1016/j.ympev.2008.12.015<br />
482. Spinks, P.Q., Thomson, R.C., Shaffer, H.B. and Barley, A.J. (2009) "Fourteen nuclear genes provide<br />
phylogenetic resolution for difficult nodes in the turtle tree of life." Molecular Phylogenetics and<br />
Evolution in the press.<br />
Materials Research<br />
DMR060061<br />
483. “Tuning of Metal-Metal Bonding by Counterion Size in Hypothetical AeTiO2 Compounds” Wen, X.-D.;<br />
Cahill, T. J.; Hoffmann, R.; Miura, A. J. Am. Chem. Soc. (communication). 2009, 131 (41), 14632-<br />
14633.<br />
484. “Playing the Quantum Chemical Slot Machine: An Exploration of ABX2 Compounds” Wen, X.-D.; Cahill,<br />
T. J.; Gerovac, N.; Bucknum, M. J.; Hoffmann, R. Inorg. Chem. 2010, 49 (1), 249-260.<br />
485. “A Little Bit of Lithium Does a Lot for Hydrogen” Zurek, E.; Hoffmann, R.; Ashcroft, N. W.; Oganov, A.<br />
R.; Lyakhov, A. O. Proc. Nat. Acad. Sci. 2009, 106 (42), 17640-17643.<br />
486. “Large-Scale Soft Colloidal Template Synthesis of 1.4 nm Thick CdSe Nanosheets” Son, J. S.; Wen,<br />
X.-D.; Joo, J. Chae, J.; Baek, S.; Park, K.; Kim, J. H.; An, K.; Yu, J. H.; Kwon, S. G.; Choi, S.-H.; Wang,<br />
C.; Kim, Y. W.; Kuk, Y.; Hoffmann, R.; Hyeon, T. Angew. Chem. Int. Ed. 2009, 48 (37), 6861-6864.<br />
487. “Exploring Group 14 Structures: 1D to 2D to 3D” Wen, X.-D.; Cahill, T. J.; Hoffmann, R. Chem. Eur. J,<br />
2010, in press.<br />
488. “Segregation into Layers: A General Problem for Structures under Pressure, Exemplified by SnH4”<br />
Gonzalez-Morelos, P.; Hoffmann, R. Ashcroft, N. W. In preparation.<br />
489. “Reconstructing a solid-solid phase transformation pathway in CdSe nanosheets with associated soft<br />
ligands” Wang, Z.; Wen, X.-D.; Hoffmann, R.; Son, J. S.; Li, R.; Fang,C-C.; Smilgies, D-M.; Hyeon, T. In<br />
preparation.<br />
490. “Breaking into the C-C Bond of Cyclopropane on Metal Surfaces” Cahill, T. J.; Hoffmann, R. In<br />
preparation.<br />
DMR090098<br />
491. A. Bagri, R. Grantab, N. V. Medhekar and V. B. Shenoy, Stability and formation mechanisms of<br />
carbonyl- and hydroxyl- decorated holes in graphene oxide, Journal of Physical Chemistry (2010), in<br />
press.<br />
492. N. V. Medhekar, A. Ramasubramaniam, R. S. Ruoff and V. B. Shenoy, Hydrogen bond networks in<br />
graphene oxide composites: structure and mechanical properties, ACS Nano, As Soon As Publishable,<br />
April 9, 2010.<br />
493. A. Bagri, C. Mattevi, M. Acik, Y. J. Chabal, M. Chhowalla and V. B. Shenoy, Carbon and Oxygen<br />
interactions during the reduction of graphene oxide, Nature Chemistry, in press (2010).<br />
494. R. Grantab and V. B. Shenoy, The Role of Nanoscale Surface Facets on the Mechanical Strength and<br />
Failure of Wurtzite and Twin-Plane Superlattice Nanowires, Submitted to Physical Review B (2010).<br />
DMR100058<br />
495. Amarasinghe, P. M., Katti, K. S., & Katti, D. R. (2008). Molecular Hydraulic Properties of<br />
Montmorillonite: A Polarized Fourier Transform Infrared Spectroscopic Study. Applied Spectroscopy,<br />
62(12), 1303-1313.<br />
496. Amarasinghe, P. M., Katti, K. S., &amp; Katti, D. R. (2009). Nature of organic fluid-montmorillonite<br />
interactions: An FTIR spectroscopic study. [Article]. Journal of Colloid and Interface Science, 337(1),<br />
97-105. doi: 10.1016/j.jcis.2009.05.011<br />
497. Katti, D. R., Matar, M. I., Katti, K. S., & Amarasinghe, P. M. (2009). Multiscale modeling of swelling<br />
clays: A computational and experimental approach. [Article]. Ksce Journal of Civil Engineering, 13(4),<br />
243-255. doi: 10.1007/s12205-009-0243-0<br />
MCA06N063<br />
498. "Emerging Photoluminescence in Monolayer MoS2", A. Splendiani, L. Sun, Y. Zhang, T. Li, J. Kim, C.-<br />
Y. Chim, G.Galli, and F. Wang, Nano Lett. 2010<br />
499. "First-principles investigations of the dielectric properties of crystalline and amorphous Si3N4 thin films",<br />
T. A. Pham, T. Li, S. Shankar, F. Gygi and G. Galli, Appl. Phys. Lett. , 96, 062902 (2010).<br />
500. "Van der Waals interactions in molecular assemblies from first-principles calculations", Y. Li, D. Lu, H-V<br />
Nguyen and G. Galli, J. Phys. Chem. A, 114, 1944-1952 (2010).<br />
501. "Ab initio calculation of van der Waals bonded molecular crystals", D. Lu, Y. Li, D. Rocca and G. Galli,<br />
Phys. Rev. Lett. 102, 206411 (2009).<br />
502. "Nature and strength of interlayer binding in graphite", L. Spanu, S. Sorella and G. Galli, Phys. Rev.<br />
Lett., 103, 196401 (2009).<br />
503. "High energy excitations in silicon nanoparticles" , A. Gali, M. Vörös, D. Rocca, G. Zimanyi and G. Galli,<br />
Nano Lett. 9, 3780 (2009).<br />
504. "Theoretical investigation of methane under pressure", L. Spanu, D. Donadio, D. Hohl and G. Galli, J.<br />
Chem.Phys. 130, 164520 (2009).<br />
Mathematical Sciences<br />
DMS060014<br />
505. B. Bourdin, D. Bucur, and É. Oudet. Optimal Partitions. SIAM J. Sci. Comp. 31(6), 4100–4114, 2009-<br />
2010.<br />
506. B. Bourdin, G.A. Francfort, and J.-J. Marigo. The Variational Approach to Fracture. Springer, 2008.<br />
507. B. Bourdin, G.A. Francfort, and J.-J. Marigo. The Variational Approach to Fracture. J. Elasticity, 91 (1-<br />
3):1–148, 2008.<br />
508. B. Bourdin, C.J. Larsen, and C. Richardson. A Variational Approach to Dynamic Fracture of Brittle<br />
Materials I: Modeling and Implementation. Submitted, 2009.<br />
509. B. Bourdin. Numerical Implementation of a Variational Formulation of Quasi-Static Brittle Fracture.<br />
Interfaces Free Bound., 9:411–430, 2007.<br />
510. B. Bourdin. The Variational Formulation of Brittle Fracture: Numerical Implementation and Extensions.<br />
In A. Combescure, R. de Borst, and T. Belytschko, editors, IUTAM Symposium on Discretization Methods for<br />
Evolving Discontinuities, pages 381–393. Springer, 2007.<br />
511. B. Bourdin. Numerical implementation of the variational brittle fracture formulation. In G. Dal Maso, G.<br />
Francfort, A. Mielke, and T. Roubicek, editors, Mathematisches Forschungsinstitut Oberwolfach workshop<br />
on Analysis and Numerics for Rate-Independent Processes, 4(1), 2007.<br />
512. May 2010: Mechanical Engineering seminar, California Institute of Technology (Pasadena, CA, USA)<br />
513. Apr. 2010: Chevron Special Projects & Unconventional Resources (Houston, TX, USA).<br />
514. Apr. 2010: Applied Mathematics Seminar, Department of Mathematics, University of Utah (Salt Lake<br />
City, UT, USA).<br />
515. Feb. 2010: Applied Mathematics Colloquium, UCLA (Los Angeles, CA, USA).<br />
516. Dec. 2009: Colloquium, Department of Petroleum Engineering, Louisiana State University (Baton<br />
Rouge, LA, USA).<br />
517. Nov. 2009: Colloquium, Institut Jean le Rond d’Alembert, Université Pierre et Marie Curie (Paris,<br />
France).<br />
518. May 2008: SIAM Student seminar, CCT, Louisiana State University (Baton Rouge, LA, USA).<br />
519. Sep. 2008: Lloyd Roeling Mathematics Conference. University of Louisiana (Lafayette, LA, USA).<br />
520. Sep. 2008: IUTAM Symposium on Variational Concepts with Applications to the Mechanics of<br />
Materials, (Bochum, Germany).<br />
521. May 2008: SIAM Conference on Mathematical Aspects of Materials Science, minisymposium “Damage<br />
and Fracture Evolution”, (Philadelphia, PA, USA).<br />
522. Nov. 2007: Mathematics and Computer Science Division Seminar, Argonne National Laboratories<br />
(Argonne, IL, USA).<br />
523. Jul 2007: 9th U.S. National Congress on Computational Mechanics, minisymposium “Numerical<br />
Techniques for the Modeling of Failure in Solids” (San Francisco, CA, USA)<br />
524. Apr. 2007: Mini-Workshop: “Shape Analysis for Eigenvalues”, Mathematisches Forschungsinstitut<br />
Oberwolfach (Germany).<br />
525. Jan. 2007: Workshop “Free Discontinuity Problems: From Image Processing to Material Science”, LSU<br />
(Baton Rouge, LA, USA) Organizer.<br />
526. Jan 2007: 113th AMS Annual Meeting, minisymposium “Free Discontinuity Problems: From Image<br />
Processing to Material Science” (New Orleans, LA, USA). Co-organizer (with C.J. Larsen, WPI, MA,<br />
USA).<br />
Molecular Biosciences<br />
MCB060011<br />
527. E.I. Goksu, M.I. Hoopes, B.A. Nellis, C. Xing, R. Faller, C.W. Frank, S.H. Risbud, J.H. Satcher, M.L.<br />
Longo, Silica Xerogel/Aerogel - Supported Lipid Bilayers: Consequences of Surface Corrugation,<br />
Biochim Biophys Acta 1798 (2010) 719-729.<br />
528. C. Xing, R. Faller, Density imbalances and free energy of lipid transfer in supported lipid bilayers, The<br />
Journal of Chemical Physics 131 (2009) 175104-175107.<br />
529. C. Xing, R. Faller, Coarse-Grained Simulations of Supported and Unsupported Lipid Monolayers, Soft<br />
Matter 5 (2009) 4526-4530.<br />
530. C. Xing, R. Faller, What is the difference between a supported and a free Bilayer? Insights from<br />
Molecular Modeling on different Scales, in: A. Iglic (Ed.), Advances in Planar Lipid Bilayers and<br />
Liposomes, vol. 12, Academic Press, 2010, in press.<br />
MCB060067<br />
531. G. Interlandi and W. Thomas; The catch bond mechanism between von Willebrand Factor and platelet<br />
surface receptors investigated by molecular dynamics simulations; Proteins: Structure, Function and<br />
Bioinformatics; In press<br />
532. G. Interlandi and W. Thomas; Type 2A mutations in von Willebrand Factor A2 domain lower its force<br />
resistance; (In preparation)<br />
533. P. Aprikian*, G. Interlandi*, B. A. Kidd, I. Le Trong, V. Tchesnokova, O. Yakovenko, R. E. Stenkamp, W.<br />
E. Thomas, E. V. Sokurenko; Mechanically optimized structure of the adhesive fimbrial tip of E. coli;<br />
Under review, attached to the progress report; (*Shared first authorship)<br />
534. I. Le Trong, P. Aprikian, B. A. Kidd, M. Forero-Shelton, V. Tchesnokova, P. Rajagopal, V.<br />
Rodriguez, G. Interlandi, R. Klevit, V. Vogel, R. E. Stenkamp, E. V. Sokurenko, W. E. Thomas;<br />
Mechanical activation of the bacterial adhesive protein FimH involves allostery in a β-sandwich; Cell; In<br />
press<br />
MCB070059<br />
535. Cameron F. Abrams and Eric Vanden-Eijnden, “Large-scale conformational sampling of proteins using<br />
temperature-accelerated molecular dynamics,” Proc Natl Acad Sci USA :4961-4966 (2010).<br />
536. Harish Vashisth and Cameron F. Abrams, “Docking of insulin to a structurally equilibrated insulin<br />
receptor ectodomain,” Proteins :1531-1543 (2010) (Cover).<br />
537. Heather L. Nyce, Spencer Stober, Cameron F. Abrams, and Michael M. White, “Mapping spatial<br />
relationships between residues in the ligand-binding domain of the 5-HT3 receptor using a ‘molecular<br />
ruler’,” Biophys J (accepted, 2010).<br />
538. Harish Vashisth and Cameron F. Abrams, “All-atom structural models for complexes of insulin-like<br />
growth factor hormones IGF1 and IGF2 with their cognate receptor,” J Mol Biol (under review).<br />
539. Ali Emileh and Cameron F. Abrams, “A mechanism by which binding of the broadly neutralizing<br />
antibody b12 unfolds the inner domain alpha-1 helix in an engineered gp120,” Proteins (under review).<br />
540. Harish Vashisth and Cameron F. Abrams, “‘DFG-flip’ in the insulin receptor kinase is facilitated by a<br />
helical intermediate state of the activation loop,” JACS (under review).<br />
MCB080077<br />
541. Provasi, D., Bortolato A., Filizola, M. “Exploring Molecular Mechanisms of Ligand Recognition by Opioid<br />
Receptors with Metadynamics.” Biochemistry (2009) 48 (42): 10020-10029.<br />
542. Provasi, D., Filizola, M. “Putative Active States of a Prototypic G-Protein Coupled Receptor from Biased<br />
Molecular Dynamics.” Biophysical Journal (2010) in press.<br />
543. Zhu, J., Zhu, J., Negri, A., Provasi, D., Filizola, M., Coller, B.S., Springer, T.A. “Structural, Functional,<br />
and Dynamic Characterization of a Novel αIIb-Specific Inhibitor of Integrin αIIbβ3” submitted.<br />
MCB090130<br />
544. Z. Issa, C. Manke, and J. Potoff, “Ca2+ bridging of apposed phospholipid bilayers,” Biophysical J., 2010<br />
(in preparation).<br />
545. Z. Issa, C. Manke, and J. Potoff, “Effect of Ca2+ and Na+ on the interfacial tension of phospholipid<br />
bilayers,” Biochimica et Biophysica Acta, 2010 (in preparation).<br />
546. N. Bhatnagar, C. Manke, and J. Potoff, “Energetics of SYT binding to phospholipid bilayers in the<br />
presence of Ca2+,” J. Phys. Chem. B. 2010 (in preparation).<br />
MCB090132<br />
547. Shan, J., Khelashvili, G., and H. Weinstein. Large-Scale MD Simulations Reveal Structural Elements of<br />
the Activated State in the 5-HT2a Receptor and their Relation to Cholesterol Dynamics. in Biophysical<br />
Society Meeting. 2010. San Francisco, CA, USA.<br />
548. Khelashvili, G., Pabst, G., Weinstein, H., and D. Harries. Cholesterol Orientation and Tilt Modulus in<br />
DMPC Bilayers. in Biophysical Society Meeting. 2010. San Francisco, CA, USA.<br />
549. Ballesteros, J.A. and H. Weinstein, Integrated methods for the construction of three-dimensional models<br />
and computational probing of structure-function relations in G protein-coupled receptors. Methods in<br />
Neurosciences, 1995. 40: p. 366-428.<br />
550. Mehler, E.L., Hassan, S. A., Kortagere, S., and H. Weinstein. Ab initio computational modeling of loops<br />
in G-protein-coupled receptors: lessons from the crystal structure of rhodopsin. Proteins, 2006. 64(3): p.<br />
673-90.<br />
551. Khelashvili, G., Grossfield, A., Feller, S. E., Pitman, M. C., and H. Weinstein. Structural and dynamic<br />
effects of cholesterol at preferred sites of interaction with rhodopsin identified from microsecond length<br />
molecular dynamics simulations. Proteins, 2009. 76(2): p. 403-17.<br />
552. Han, Y., Moreira I.S., Urizar, E., Weinstein, H., and J. Javitch. Allosteric communication between<br />
protomers of dopamine class A GPCR dimers modulates activation. Nat Chem Biol, 2009. 5(9): p. 688-<br />
95.<br />
MCA93S028<br />
553. L. G. Trabuco, E. Villa, E. Schreiner, C. B. Harrison, and K. Schulten. Molecular Dynamics Flexible<br />
Fitting: A practical guide to combine cryo-electron microscopy and X-ray crystallography. Methods,<br />
49:174–180, 2009.<br />
554. E. Villa, J. Sengupta, L. G. Trabuco, J. LeBarron, W. T. Baxter, T. R. Shaikh, R. A. Grassucci, P.<br />
Nissen, M. Ehrenberg, K. Schulten, and J. Frank. Ribosome-induced changes in elongation factor Tu<br />
conformation control GTP hydrolysis. Proceedings of the National Academy of Sciences, USA, 106:1063–<br />
1068, 2009.<br />
555. J. Gumbart, L. G. Trabuco, E. Schreiner, E. Villa, and K. Schulten. Regulation of the protein-conducting<br />
channel by a bound ribosome. Structure, 17:1453–1464, 2009.<br />
556. T. Becker, S. Bhushan, A. Jarasch, J.-P. Armache, S. Funes, F. Jossinet, J. Gumbart, T. Mielke, O.<br />
Berninghausen, K. Schulten, E.Westhof, R. Gilmore, E. C. Mandon, and R. Beckmann. Structure of<br />
monomeric yeast and mammalian Sec61 complexes interacting with the translating ribosome. Science,<br />
326:1369–1373, 2009.<br />
557. B. Seidelt, C. A. Innis, D. N. Wilson, M. Gartmann, J.-P. Armache, E. Villa, L. G. Trabuco, T. Becker, T.<br />
Mielke, K. Schulten, T. A. Steitz, and R. Beckmann. Structural insight into nascent polypeptide chain-mediated<br />
translational stalling. Science, 326:1412–1415, 2009.<br />
558. J. Frauenfeld, J. Gumbart, M. Gartmann, E. O. van der Sluis, T. Mielke, O. Berninghausen, T. Becker,<br />
K. Schulten, and R. Beckmann. Structure of the ribosome-SecYEG complex in the membrane<br />
environment, 2010. Submitted.<br />
559. L. G. Trabuco, C. B. Harrison, E. Schreiner, and K. Schulten. Recognition of the regulatory nascent<br />
chain TnaC by the ribosome. Structure, 2010. In press.<br />
560. L. G. Trabuco, E. Schreiner, J. Eargle, P. Cornish, T. Ha, Z. Luthey-Schulten, and K. Schulten.<br />
Structure and dynamics of ribosomal L1 stalk:tRNA complexes, 2010. To be submitted.<br />
561. P. L. Freddolino, C. B. Harrison, Y. Liu, and K. Schulten. Challenges in protein folding simulations:<br />
Timescale, representation, and analysis, 2010. Submitted.<br />
562. P. L. Freddolino and K. Schulten. Common structural transitions in explicit-solvent simulations of villin<br />
headpiece folding. Biophysical Journal, 97:2338–2347, 2009.<br />
563. A. Rajan, P. L. Freddolino, and K. Schulten. Going beyond clustering in MD trajectory analysis: an<br />
application to villin headpiece folding. PLoS One, 2010. In press.<br />
564. L. Miao and K. Schulten. Probing a structural model of the nuclear pore complex channel through<br />
molecular dynamics. Biophysical Journal, 98:1658–1667, 2010.<br />
565. L. Le, E. H. Lee, K. Schulten, and T. Truong. Molecular modeling of swine influenza A/H1N1, Spanish<br />
H1N1, and avian H5N1 flu N1 neuraminidases bound to Tamiflu and Relenza. PLoS Currents: Influenza,<br />
2009 Aug 27:RRN1015, 2010.<br />
566. L. Le, E. H. Lee, D. J. Hardy, T. N. Truong, and K. Schulten. Electrostatic funnel directs binding of<br />
Tamiflu to H5N1/H1N1pdm neuramindase, 2010. Submitted.<br />
567. D. E. Chandler, J. Gumbart, J. D. Stack, C. Chipot, and K. Schulten. Membrane curvature induced by<br />
aggregates of LH2s and monomeric LH1s. Biophysical Journal, 97:2978–2984, 2009.<br />
568. J. Hsin, D. E. Chandler, J. Gumbart, C. B. Harrison, M. Sener, J. Strumpfer, and K. Schulten. Self-assembly<br />
of photosynthetic membranes. ChemPhysChem, 11:1154–1159, 2010.<br />
569. J. Hsin, C. Chipot, and K. Schulten. A glycophorin A-like framework for the dimerization of<br />
photosynthetic core complexes. Journal of the American Chemical Society, 131:17096–17098, 2009.<br />
570. E. R. Cruz-Chu, T. Ritz, Z. S. Siwy, and K. Schulten. Molecular control of ionic conduction in polymer<br />
nanopores. Faraday Discussion, 143:47–62, 2009.<br />
571. E. R. Cruz-Chu and K. Schulten. The role of protonable surface residues in nanoprecipitation oscillation,<br />
2010. Submitted.<br />
572. A. Arkhipov, W. H. Roos, G. J. L. Wuite, and K. Schulten. Elucidating the mechanism behind irreversible<br />
deformation of viral capsids. Biophysical Journal, 97:2061–2069, 2009.<br />
573. W. H. Roos, M. M. Gibbons, A. Arkhipov, C. Uetrecht, N. Watts, P. Wingfield, A. C. Steven, A. Heck, K.<br />
Schulten, W. S. Klug, and G. J. Wuite. Squeezing protein shells: how continuum elastic models,<br />
molecular dynamics simulations and experiments coalesce at the nanoscale, 2010. Submitted.<br />
574. E. H. Lee, J. Hsin, M. Sotomayor, G. Comellas, and K. Schulten. Discovery through the computational<br />
microscope. Structure, 17:1295–1306, 2009.<br />
575. E. H. Lee, J. Hsin, E. von Castelmur, O. Mayans, and K. Schulten. Tertiary and secondary structure<br />
elasticity of a six-Ig titin chain. Biophysical Journal, 98:1085–1095, 2010.<br />
576. A. Arkhipov, Y. Yin, and K. Schulten. Membrane-bending mechanism of amphiphysin N-BAR domains.<br />
Biophysical Journal, 97:2727–2735, 2009.<br />
577. Y. Yin, A. Arkhipov, and K. Schulten. Multi-scale simulations of membrane sculpting by N-BAR<br />
domains. In P. Biggin and M. Sansom, editors, Molecular Simulations and Biomembranes: From<br />
Biophysics to Function. Royal Society of Chemistry, 2010.<br />
578. R. Gamini, M. Sotomayor, and K. Schulten. Cytoplasmic domain filter function in the mechanosensitive<br />
channel of small conductance, 2010. Submitted.<br />
MCA99S007<br />
579. G.G. Maisuradze, A. Liwo, S. Ołdziej, H.A. Scheraga. Evidence, from simulations, of a single state with<br />
residual native structure at the thermal denaturation midpoint of a small globular protein. J. Am. Chem.<br />
Soc., 2010, 132 (27), pp 9444-9452.<br />
580. U. Kozłowska, A. Liwo, H.A. Scheraga. Determination of side-chain-rotamer and side-chain and<br />
backbone virtual-bond-stretching potentials of mean force from AM1 energy surfaces of terminally-blocked<br />
amino-acid residues, for coarse-grained simulations of protein structure and folding. 1. The<br />
Method. J. Comput. Chem., 2010, 31 (6), pp 1143-1153.<br />
581. U. Kozłowska, G.G. Maisuradze, A. Liwo, H.A. Scheraga. Determination of side-chain-rotamer and side-chain<br />
and backbone virtual-bond-stretching potentials of mean force from AM1 energy surfaces of<br />
terminally-blocked amino-acid residues, for coarse-grained simulations of protein structure and folding.<br />
2. Results, comparison with statistical potentials, and implementation in the UNRES force field. J.<br />
Comput. Chem., 2010, 31 (6), 1154-1167.<br />
582. J.A. Vila, P. Serrano, K. Wüthrich and H.A. Scheraga. Sequential nearest-neighbor effects on computed<br />
13Cα chemical shifts. Journal of Biomolecular NMR, (2010), in press.<br />
Physics<br />
AST030030<br />
583. N. F. Loureiro, A. A. Schekochihin, D. A. Uzdensky, S. C. Cowley and R. Samtaney, “Instability of<br />
current sheets and formation of plasmoid chains. Part II”, Phys. Plasmas, submitted (2010).<br />
584. D. A. Uzdensky, N. F. Loureiro and A. A. Schekochihin, “Plasmoid-dominated magnetic reconnection in<br />
large systems”, Phys. Rev. Lett., in preparation.<br />
585. N. F. Loureiro, D. A. Uzdensky, A. A. Schekochihin and R. Samtaney, “Nonlinear plasmoid dynamics in<br />
magnetic reconnection”, Phys. Rev. Lett., in preparation.<br />
586. R. Samtaney, N. F. Loureiro, D. A. Uzdensky, A. A. Schekochihin and S. C. Cowley, “Formation of<br />
Plasmoid Chains in Magnetic Reconnection”, Phys. Rev. Lett. 103, 105004 (2009), arXiv:0903.0542<br />
587. US-Japan Workshop on Magnetic Reconnection, Madison, Wi., USA, 2009<br />
588. International Conference on the Numerical Simulation of Plasmas, Lisbon, Portugal, 2009<br />
589. Yosemite 2010 Interdisciplinary Workshop on Magnetic Reconnection, Yosemite, Ca., USA, 2010<br />
590. Laboratory of Astrophysics of Toulouse-Tarbes, Toulouse, France, 2010<br />
591. A. B. Iskakov, A. A. Schekochihin, S. C. Cowley, and A. Tsinober, “Small-scale structure of isotropic<br />
magnetohydrodynamic turbulence,” Phys. Rev. Lett., in preparation (2010).<br />
592. Workshop on The Physics of Intracluster Medium: Theory and Computation, Univ. of Michigan, Ann<br />
Arbor, Mi., USA, 2010<br />
593. Workshop on Self-Organization in Turbulent Plasmas and Fluids, Dresden, Germany, 2010<br />
594. Workshop on Large-Scale Magnetic Fields in the Universe, ISSI, Bern, Switzerland, 2010<br />
595. T. Tatsuno, G. G. Plunk, M. Barnes, S. C. Cowley, W. Dorland, G. G. Howes, R. Numata and A. A.<br />
Schekochihin, “Gyrokinetic simulation of entropy cascade in two-dimensional electrostatic turbulence,”<br />
J. Plasma Fusion Res. SERIES, in press (2010); arXiv:1003.3933.<br />
596. M. Barnes, W. Dorland and T. Tatsuno, “Resolving velocity space dynamics in continuum gyrokinetics”,<br />
Phys. Plasmas 17, 032106 (2010)<br />
597. G. G. Plunk, S. C. Cowley, A. A. Schekochihin, and T. Tatsuno, “Two-dimensional gyrokinetic<br />
turbulence,” J. Fluid Mech., submitted (2009), arXiv:0904.0243.<br />
598. T. Tatsuno, W. Dorland, A. A. Schekochihin, G. G. Plunk, M. A. Barnes, S. C. Cowley, and G. G.<br />
Howes, “Nonlinear phase mixing and phase-space cascade of entropy in gyrokinetic plasma<br />
turbulence,” Phys. Rev. Lett. 103, 015003 (2009), arXiv:0811.2538.<br />
599. A. A. Schekochihin, S. C. Cowley, W. Dorland, G. W. Hammett, G. G. Howes, E. Quataert, and T.<br />
Tatsuno, “Astrophysical gyrokinetics: kinetic and fluid turbulent cascades in magnetized weakly<br />
collisional plasmas,” Astrophys. J. Suppl. 182, 310 (2009), arXiv:0704.0044.<br />
600. M. Barnes, I. G. Abel, W. Dorland, D. R. Ernst, G. W. Hammett, P. Ricci, B. N. Rogers, A. A.<br />
Schekochihin, and T. Tatsuno, “Linearized model Fokker-Planck collision operators for gyrokinetic<br />
simulations. II. Numerical implementation and tests”, Phys. Plasmas 16, 072107 (2009),<br />
arXiv:0809.3945<br />
601. I. G. Abel, M. Barnes, S. C. Cowley, W. Dorland, and A. A. Schekochihin, “Linearized model Fokker-<br />
Planck collision operators for gyrokinetic simulations. I. Theory,” Phys. Plasmas 15, 122509 (2008)<br />
602. 12th International Solar Wind Conference, Saint-Malo, France, 2009<br />
603. Laboratoire de Physique Statistique et des Plasmas, Université Libre de Bruxelles, Belgium, 2009<br />
604. 7th General Scientific Assembly of the Asia Plasma and Fusion Association and 9th Asia- Pacific<br />
Plasma Theory Conference, Aomori, Japan, 2009<br />
605. Japan Geoscience Union Meeting, Chiba, Japan, 2010<br />
606. Workshop on Multiscale Physics in Coronal Heating and Solar Wind Acceleration, ISSI, Bern,<br />
Switzerland, 2010<br />
MCA08X008<br />
607. “Toward TeV Conformality." LSD Collaboration, T. Appelquist, A. Avakian, R. Babich, R. C. Brower, M.<br />
Cheng, M. A. Clark, S. D. Cohen, G. T. Fleming, J. Kiskis, E. T. Neil, J. C. Osborn, C. Rebbi, D.<br />
Schaich, and P. Vranas. Phys. Rev. Lett. 104 (2010) 071601, arXiv:0910.2224 [hep-ph].<br />
608. “Lattice Strong Dynamics at the TeV Scale.” LSD Collaboration, T. Appelquist, A. Avakian, R. Babich,<br />
R. C. Brower, M. Cheng, M. A. Clark, S. D. Cohen, G. T. Fleming, J. Kiskis, E. T. Neil, J. C. Osborn, C.<br />
Rebbi, D. Schaich, and P. Vranas. In preparation.<br />
PHY060019<br />
609. Measurement of the Rate of Muon Capture in Hydrogen Gas and Determination of the Proton’s<br />
Pseudoscalar Coupling gP , V. A. Andreev et al. (MuCap Collaboration), Phys. Rev. Lett. 99, 032002<br />
(2007)<br />
610. Nuclear Muon Capture in Hydrogen and its Interplay with Muon Atomic Physics, P. Kammel (MuCap<br />
Collaboration), arXiv:0803.1892 [nucl-ex]<br />
611. Muon Capture in Hydrogen: First MuCap Results and Future Plans, P. Kammel (MuCAP Collaboration),<br />
Few-Body Systems, 44, 333 (2008)<br />
612. Muon Capture Experiments in Hydrogen and Deuterium, C. Petitjean (MuCAP Collaboration), Nucl.<br />
Instr. Meth. A 600, 338 (2009)<br />
613. Muon Lifetime Program at PSI, P. Kammel (MuCAP Collaboration), PoS(Nufact08)107<br />
614. Deuterium Removal Unit for the MuCap Experiment, I. Alekseev et al., NHA Annual Hydrogen<br />
Conference, Sacramento, 2008, http://nha.confex.com/nha/2008/techprogram/MEETING.HTM<br />
615. Muon Capture in Hydrogen and Deuterium, C. Petitjean (MuCAP and MuSun Collaborations), Hyperfine<br />
Interactions, to be published (2009)<br />
616. MuCap: Muon Capture on the Proton to Determine the Pseudoscalar Coupling gP , B. Kiburg (MuCAP<br />
Collaboration), Tenth Conference on the Intersections of Particle and Nuclear Physics, AIP 1182, 686-9<br />
(2009)<br />
617. Muon Capture in Hydrogen, P. Kammel (MuCAP Collaboration), SSP 2009, Taipei, Taiwan (2009) to be<br />
published in Nucl. Phys. A<br />
618. Muon Capture on the Deuteron, The MuSun Experiment, V.A. Andreev et al. [MuSun Collaboration],<br />
PSI Experiment R-08-01, arXiv:1004.1754v1 [nucl-ex] (2010)<br />
PHY080005<br />
619. “The bottomonium spectrum from lattice QCD with 2+1 flavors of domain wall fermions” S. Meinel Phys.<br />
Rev. D 79, 094501 (2009)<br />
620. “Bottom hadrons from lattice QCD with domain wall and NRQCD fermions” S. Meinel, W. Detmold, C. J.<br />
Lin and M. Wingate arXiv:0909.3837 [hep-lat]<br />
621. “Bottom hadron mass splittings in the static limit from 2+1 flavour lattice QCD” W. Detmold, C. J. Lin<br />
and M. Wingate Nucl. Phys. B 818, 17 (2009)<br />
PHY090070<br />
622. Z. Fodor, K. Holland, J. Kuti, D. Nogradi and C. Schroeder, arXiv:0911.2463 [hep-lat].<br />
623. Z. Fodor, K. Holland, J. Kuti, D. Nogradi and C. Schroeder, Phys. Lett. B 681, 353 (2009)<br />
[arXiv:0907.4562 [hep-lat]].<br />
624. Z. Fodor, K. Holland, J. Kuti, D. Nogradi and C. Schroeder, arXiv:0911.2934 [hep-lat].<br />
625. Z. Fodor, K. Holland, J. Kuti, D. Nogradi and C. Schroeder, JHEP 0908, 084 (2009) [arXiv:0905.3586<br />
[hep-lat]].<br />
626. Z. Fodor, K. Holland, J. Kuti, D. Nogradi and C. Schroeder, JHEP 0911, 103 (2009) [arXiv:0908.2466<br />
[hep-lat]].<br />
PHY090083<br />
627. J. A. Ludlow, T. G. Lee, M. S. Pindzola, F. J. Robicheaux, and J. P. Colgan, J. Phys. B 42, 225204<br />
(November 2009).<br />
628. J. P. Colgan, D. C. Griffin, C. P. Ballance, and M. S. Pindzola, Phys. Rev. A 80, 063414 (December<br />
2009).<br />
629. A. Muller, S. Schippers, R. A. Phaneuf, S. W. J. Scully, A. Aguilar, A. M. Covington, I. Alvarez, C.<br />
Cisneros, E. D. Emmons, M. F. Gharaibeh, G. Hinojosa, A. S. Schlachter, and B. M. McLaughlin, J.<br />
Phys. B 42, 235602 (December 2009).<br />
630. J. A. Ludlow, T. G. Lee, and M. S. Pindzola, Phys. Rev. A 81, 023407 (January 2010).<br />
631. J. P. Colgan, O. Al-Hagan, D. H. Madison, A. J. Murray, and M. S. Pindzola, J. Phys. B 42, 171001<br />
(September 2009).<br />
632. M. S. Pindzola, J. A. Ludlow, and J. P. Colgan, Phys. Rev. A 80, 032707 (September 2009).<br />
633. M. S. Pindzola, J. A. Ludlow, F. J. Robicheaux, J. P. Colgan, and C. J. Fontes, Phys. Rev. A 80,<br />
052706 (November 2009).<br />
634. M. S. Pindzola, J. A. Ludlow, F. J. Robicheaux, J. P. Colgan, and D. C. Griffin, J. Phys. B 42, 215204<br />
(November 2009).<br />
635. B. Sun and M. S. Pindzola, Phys. Rev. A 80, 033616 (September 2009).<br />
636. B. Sun and M. S. Pindzola, J. Phys. B 42, 175301 (September 2009).<br />
637. B. Sun and M. S. Pindzola, J. Phys. B 42, 055301 (March 2010).<br />
638. C. P. Ballance, J. A. Ludlow, M. S. Pindzola, and S. D. Loch, J. Phys. B 42, 175202 (September 2009).<br />
639. S. D. Loch, C. P. Ballance, M. S. Pindzola, and D. P. Stotler, Plasma Phys. and Controlled Fusion 51,<br />
105006 (October 2009).<br />
640. M. S. Pindzola, J. A. Ludlow, C. P. Ballance, and D. C. Griffin, J. Phys. B 43, 025201 (January 2010).<br />
641. M. S. Pindzola, C. P. Ballance, J. A. Ludlow, S. D. Loch, and J. P. Colgan, J. Phys. B 43, 065201<br />
(March 2010).<br />
642. J. A. Ludlow, C. P. Ballance, S. D. Loch, and M. S. Pindzola, J. Phys. B 43, 074029 (April 2010).<br />
643. S. Miyake, P. C. Stancil, H. R. Sadeghpour, A. Dalgarno, B. M. McLaughlin, and R. C. Forrey,<br />
Astrophys. Journal 709, L168 (February 2010).<br />
PHY090085<br />
644. S. Agarwal, G. Klimeck, and M. Luisier, “Leakage Reduction Design Concepts for Low Power Vertical<br />
Tunneling Field-Effect Transistors”, accepted for publication in IEEE Elec. Dev. Lett. (2010).<br />
645. M. Luisier and G. Klimeck, “Simulations of Nanowire Tunneling Transistors: from the WKB<br />
Approximation to Full-Band Phonon-Assisted Tunneling” accepted for publication in J. of App. Phys.<br />
(2010).<br />
646. T. B. Boykin, M. Luisier, M. Salmani-Jelodar, and G. Klimeck, “Strain-induced, off-diagonal, same-atom<br />
parameters in empirical tight-binding theory suitable for [110] uniaxial strain applied to a silicon<br />
parameterization”, Phys. Rev. B 81, 125202 (2010).<br />
647. M. Luisier and G. Klimeck, “Numerical Strategies towards Peta-Scale Simulations of Nanoelectronic<br />
Devices”, Parallel Computing 36, 117-128 (2010).<br />
648. G. Klimeck and M. Luisier, “Atomistic Modeling of Realistically Extended Semiconductor Devices with<br />
NEMO /OMEN” (invited), IEEE Computers in Science and Engineering (CISE) 12, 28-35 (2010).<br />
649. M. Luisier and G. Klimeck, “Atomistic full-band simulations of silicon nanowire transistors: Effects of<br />
electron-phonon scattering”, Phys. Rev. B 80, 155430 (2009).<br />
650. M. Luisier, “Atomistic Simulations of Nanoelectronic Devices” (invited), NNIN Workshop on Bridging the<br />
Gap between Theory and Experiment, Stanford, CA, USA, February 2010.<br />
651. M. Luisier and G. Klimeck, "Performance Comparisons of Tunneling Field-Effect Transistors made of<br />
InSb, Carbon, and GaSb-InAs Broken Gap Heterostructures", IEDM 2009, Baltimore MD, USA,<br />
December 2009.<br />
652. N. Kharche, G. Klimeck, D.-H. Kim, J. A. del Alamo, and M. Luisier, "Performance Analysis of Ultra-<br />
Scaled InAs HEMTs", IEDM 2009, Baltimore MD, USA, December 2009.<br />
653. M. Luisier, “OMEN: A quantum transport simulator for nanoelectronic devices” (invited), 33rd Annual<br />
EDS/CAS Activities in Western New York Conference, Rochester Institute of Technology, Rochester<br />
NY, USA, November 2009.<br />
654. M. Luisier and G. Klimeck, "Investigation of InxGa1-xAs Ultra-Thin-Body Tunneling FETs using a Full-<br />
Band and Atomistic Approach", International Conference on Simulation of Semiconductor Processes<br />
and Devices, SISPAD 2009, San Diego CA, USA, September 2009.<br />
655. Cover story on the NICS web page in August-September 2009 entitled “Nano Nano: Researchers<br />
model next generation transistors on Kraken” by Gregory Scott Jones.<br />
PHY090084<br />
656. G. G. Howes, J. M. TenBarge, S. C. Cowley, W. Dorland, E. Quataert, and A. A. Schekochihin. A<br />
Weakened Cascade Model for Turbulence in Astrophysical Plasmas. Phys. Plasmas, 2010. in<br />
preparation.<br />
657. G. G. Howes, J. M. TenBarge, S. C. Cowley, W. Dorland, E. Quataert, A. A. Schekochihin, R. Numata,<br />
and T. Tatsuno. Importance of Nonlocal Interactions in the Dissipation of Plasma Turbulence. Phys.<br />
Rev. Lett., 2010. in preparation.<br />
658. R. Numata, G. G. Howes, T. Tatsuno, M. Barnes, and W. Dorland. AstroGK: Astrophysical Gyrokinetics<br />
Code. ArXiv e-prints, April 2010. submitted to J. Comp. Phys.<br />
PHY100031<br />
659. Invited for Proc. Nat. Acad. Sci., Special Issue on Surface Chemistry (2010).<br />
PHY100033<br />
660. P. Ajith et al., 'Complete' gravitational waveforms for black-hole binaries with non-precessing spins<br />
(2009), 0909.2867.<br />
661. D. Pollney, C. Reisswig, E. Schnetter, N. Dorband, and P. Diener, High accuracy binary black hole<br />
simulations with an extended wave zone (2009), 0910.3803.<br />
662. L. Santamaria, F. Ohme, M. Hannam, S. Husa, B. Krishnan, P. Ajith, B. Brugmann, N. Dorband, D.<br />
Pollney, C. Reisswig, et al., A new phenomenological waveform family for modeling non-precessing<br />
spinning black-hole binaries.<br />
663. O. Korobkin, E. Schnetter, N. Stergioulas, and B. Zink, Multipatch methods in general relativistic<br />
astrophysics: Nonaxisymmetric instabilities in self-gravitating disks around black holes, in preparation<br />
(2009).<br />
664. B. Zink, O. Korobkin, E. Schnetter, N. Stergioulas, and K. Kokkotas, Nonaxisymmetric f-modes in<br />
rapidly rotating general relativistic neutron stars, in preparation (2009).<br />
665. D. Pollney and C. Reisswig, The gravitational memory of binary black hole mergers (2010), in<br />
preparation.<br />
666. C. Reisswig and D. Pollney, Aliasing effects in the integration of numerical waveforms (2010), in<br />
preparation.<br />
667. C. Reisswig, C. Ott, S. Uli, E. Schnetter, and D. Pollney, Comparing wave-extraction methods for<br />
stellar-collapse simulations (2010), in preparation.<br />
668. F. Löffler, Numerical Simulations of Neutron Star-Black Hole Mergers, Ph.D. thesis, Max Planck<br />
Institute for Gravitational Physics (Albert Einstein Institute), Am Muhlenberg 1, D-14476 Potsdam,<br />
Germany (2005).<br />
669. C. D. Ott, Stellar Iron Core Collapse in 3+1 General Relativity and The Gravitational Wave Signature of<br />
Core-Collapse Supernovae, Ph.D. thesis, Universitat Potsdam, Potsdam, Germany (2006).<br />
670. E. N. Dorband, Computing and analyzing gravitational radiation in black hole simulations using a new<br />
multi-block approach to numerical relativity, Ph.D. thesis, Louisiana State University, Baton Rouge, USA<br />
(2007).<br />
671. C. Reisswig, Binary black hole mergers and novel approaches to gravitational wave extraction in<br />
numerical relativity, Ph.D. thesis, Leibniz Universitat Hannover (2010).<br />
672. M. Jasiulek, Spin measures on isolated and dynamical horizons in numerical relativity, Master's thesis,<br />
Humboldt-Universitat, Berlin (2008).<br />
673. P. Mosta, Puncture evolutions within the harmonic framework, Master's thesis, Universitat Kassel,<br />
Kassel (2008).<br />
674. C. Reisswig, N. T. Bishop, D. Pollney, and B. Szilagyi, Characteristic extraction in numerical relativity:<br />
binary black hole merger waveforms at null infinity, Class. Quant. Grav. 27, 075014 (2010), 0912.1285.<br />
675. P. Mosta et al., Vacuum Electromagnetic Counterparts of Binary Black-Hole Mergers, Phys. Rev. D81,<br />
064017 (2010), 0912.2330.<br />
676. E. O'Connor and C. D. Ott, A New Open-Source Code for Spherically-Symmetric Stellar Collapse to<br />
Neutron Stars and Black Holes, Class. Quant. Grav. in press, arXiv:0912.2393 [astro-ph] (2010).<br />
677. E. B. Abdikamalov, C. D. Ott, L. Rezzolla, L. Dessart, H. Dimmelmeier, A. Marek, and H. Janka,<br />
Axisymmetric general relativistic simulations of the accretion-induced collapse of white dwarfs, Phys.<br />
Rev. D. 81, 044012 (2010).<br />
678. C. D. Ott, Probing the core-collapse supernova mechanism with gravitational waves, Class. Quant.<br />
Grav. 26, 204015 (2009).<br />
679. C. D. Ott, E. Schnetter, A. Burrows, E. Livne, E. O'Connor, and F. Löffler, Computational models of<br />
stellar collapse and core-collapse supernovae, J. Phys. Conf. Ser. 180, 012022 (2009).<br />
680. C. D. Ott, TOPICAL REVIEW: The gravitational-wave signature of core-collapse supernovae, Class.<br />
Quant. Grav. 26, 063001 (2009).<br />
681. D. Brown, P. Diener, O. Sarbach, E. Schnetter, and M. Tiglio, Turduckening black holes: an analytical<br />
and computational study, Phys. Rev. D (accepted) (2009), arXiv:0809.3533 [gr-qc].<br />
682. L. Dessart, C. D. Ott, A. Burrows, S. Rosswog, and E. Livne, Neutrino Signatures and the Neutrino-<br />
Driven Wind in Binary Neutron Star Mergers, Astrophys. J. 690, 1681 (2009).<br />
683. C. Reisswig et al., Gravitational-wave detectability of equal-mass black-hole binaries with aligned spins,<br />
Phys. Rev. D80, 124026 (2009), 0907.0462.<br />
684. C. Reisswig, N. T. Bishop, D. Pollney, and B. Szilagyi, Unambiguous determination of gravitational<br />
waveforms from binary black hole mergers, Phys. Rev. Lett. 103, 221101 (2009), 0907.2637.<br />
685. D. Pollney, C. Reisswig, N. Dorband, E. Schnetter, and P. Diener, The Asymptotic Falloff of Local<br />
Waveform Measurements in Numerical Relativity, Phys. Rev. D80, 121502 (2009), 0910.3656.<br />
686. M. Hannam et al., The Samurai Project: verifying the consistency of black- hole-binary waveforms for<br />
gravitational-wave detection, Phys. Rev. D79, 084025 (2009), 0901.2437.<br />
687. B. Aylott et al., Testing gravitational-wave searches with numerical relativity waveforms: Results from<br />
the first Numerical INJection Analysis (NINJA) project, Class. Quant. Grav. 26, 165008 (2009),<br />
0901.4399.<br />
688. B. Aylott et al., Status of NINJA: the Numerical INJection Analysis project, Class. Quant. Grav. 26,<br />
114008 (2009), 0905.4227.<br />
689. P. Ajith et al., A template bank for gravitational waveforms from coalescing binary black holes: I. nonspinning<br />
binaries, Phys. Rev. D77, 104017 (2008), 0710.2335.<br />
690. M. C. Babiuc, S. Husa, I. Hinder, C. Lechner, E. Schnetter, B. Szilágyi, Y. Zlochower, N. Dorband, D.<br />
Pollney, and J. Winicour, Implementation of standard testbeds for numerical relativity, Class. Quantum<br />
Grav. 25, 125012 (2008), arXiv:0709.3559 [gr-qc], URL http://arxiv.org/abs/0709.3559.<br />
691. T. Damour, A. Nagar, E. N. Dorband, D. Pollney, and L. Rezzolla, Faithful Effective-One-Body waveforms<br />
of equal-mass coalescing black-hole binaries, Phys. Rev. D77, 084017 (2008), 0712.3003.<br />
692. L. Dessart, A. Burrows, E. Livne, and C. D. Ott, The Proto-Neutron Star Phase of the Collapsar Model<br />
and the Route to Long-Soft Gamma-Ray Bursts and Hypernovae, Astrophys. J. Lett. 673, L43 (2008).<br />
693. H. Dimmelmeier, C. D. Ott, A. Marek, and H.-T. Janka, Gravitational wave burst signal from core<br />
collapse of rotating stars, Phys. Rev. D. 78, 064056 (2008).<br />
694. C. D. Ott, E. Schnetter, G. Allen, E. Seidel, J. Tao, and B. Zink, A case study for petascale applications<br />
in astrophysics: Simulating Gamma-Ray Bursts, in Proceedings of the 15th ACM Mardi Gras<br />
conference: From lightweight mash-ups to lambda grids: Understanding the spectrum of distributed<br />
computing requirements, applications, tools, infrastructures, interoperability, and the incremental<br />
adoption of key capabilities (ACM, Baton Rouge, Louisiana, 2008), no. 18 in ACM International<br />
Conference Proceeding Series.<br />
695. C. D. Ott, A. Burrows, L. Dessart, and E. Livne, Two-Dimensional Multiangle, Multigroup Neutrino<br />
Radiation-Hydrodynamic Simulations of Postbounce Supernova Cores, Astrophys. J. 685, 1069 (2008).<br />
696. L. Rezzolla, P. Diener, E. N. Dorband, D. Pollney, C. Reisswig, E. Schnetter, and J. Seiler, The final<br />
spin from coalescence of aligned-spin black-hole binaries, Astrophys. J. 674, L29 (2008),<br />
arXiv:0710.3345 [gr-qc].<br />
697. L. Rezzolla et al., On the final spin from the coalescence of two black holes, Phys. Rev. D78, 044002<br />
(2008), 0712.3541.<br />
698. L. Rezzolla et al., Spin Diagrams for Equal-Mass Black-Hole Binaries with Aligned Spins, Astrophys.<br />
J. 679, 1422 (2008), 0708.3999.<br />
699. E. Schnetter, G. Allen, T. Goodale, and M. Tyagi, Alpaca: Cactus tools for application level<br />
performance and correctness analysis, Tech. Rep. CCT-TR-2008-2, Louisiana State University (2008),<br />
URL http://www.cct.lsu.edu/CCT-TR/CCT-TR-2008-2.<br />
700. E. Schnetter, C. D. Ott, P. Diener, and C. Reisswig, Astrophysical applications of numerical relativity -<br />
from Teragrid to Petascale (2008), the 3rd annual TeraGrid Conference, TeraGrid '08.<br />
701. E. Schnetter, Multi-physics coupling of Einstein and hydrodynamics evolution: A case study of the<br />
Einstein Toolkit (2008), CBHPC 2008 (Component-Based High Performance Computing).<br />
702. J. Seiler, B. Szilagyi, D. Pollney, and L. Rezzolla, Constraint-preserving boundary treatment for a<br />
harmonic formulation of the Einstein equations, Class. Quant. Grav. 25, 175020 (2008), 0802.3341.<br />
703. T. Z. Summerscales, A. Burrows, L. S. Finn, and C. D. Ott, Maximum Entropy for Gravitational Wave<br />
Data Analysis: Inferring the Physical Parameters of Core-Collapse Supernovae, Astrophys. J. 678, 1142<br />
(2008).<br />
704. J. Tao, G. Allen, I. Hinder, E. Schnetter, and Y. Zlochower, XiRel: Standard benchmarks for numerical<br />
relativity codes using Cactus and Carpet, Tech. Rep. CCT-TR-2008-5, Louisiana State University<br />
(2008), URL http://www.cct.lsu.edu/CCT-TR/CCT-TR-2008-5.<br />
705. B. Zink, E. Schnetter, and M. Tiglio, Multi-patch methods in general relativistic astrophysics - I.<br />
Hydrodynamical flows on fixed backgrounds, Phys. Rev. D 77, 103015 (2008), arXiv:0712.0353<br />
[astro-ph], URL http://arxiv.org/abs/0712.0353.<br />
706. G. Allen, E. Caraba, T. Goodale, Y. El Khamra, and E. Schnetter, A scientific application benchmark<br />
using the Cactus Framework, Tech. Rep., Center for Computation & Technology (2007), URL<br />
http://www.cct.lsu.edu/.<br />
707. P. Ajith et al., Phenomenological template family for black-hole coalescence waveforms, Class. Quant.<br />
Grav. 24, S689 (2007), arXiv:0704.3764 [gr-qc].<br />
708. D. Brown, O. Sarbach, E. Schnetter, M. Tiglio, P. Diener, I. Hawke, and D. Pollney, Excision without<br />
excision, Phys. Rev. D 76, 081503(R) (2007), arXiv:0707.3101 [gr-qc].<br />
709. P. Diener, E. N. Dorband, E. Schnetter, and M. Tiglio, Optimized high-order derivative and dissipation<br />
operators satisfying summation by parts, and applications in three-dimensional multi-block evolutions, J.<br />
Sci. Comput. 32, 109 (2007), gr-qc/0512001.<br />
710. H. Dimmelmeier, C. D. Ott, H.-T. Janka, A. Marek, and E. Muller, Generic Gravitational-Wave Signals<br />
from the Collapse of Rotating Stellar Cores, Phys. Rev. Lett. 98, 251101 (2007).<br />
711. M. Koppitz, D. Pollney, C. Reisswig, L. Rezzolla, J. Thornburg, P. Diener, and E. Schnetter, Recoil<br />
velocities from equal-mass binary-black-hole mergers, Phys. Rev. Lett. 99, 041102 (2007), gr-qc/0701163.<br />
712. C. D. Ott, H. Dimmelmeier, A. Marek, H.-T. Janka, I. Hawke, B. Zink, and E. Schnetter, 3d collapse of<br />
rotating stellar iron cores in general relativity including deleptonization and a nuclear equation of state,<br />
Phys. Rev. Lett 98, 261101 (2007), astro-ph/0609819.<br />
713. C. D. Ott, H. Dimmelmeier, A. Marek, H.-T. Janka, B. Zink, I. Hawke, and E. Schnetter, Rotating<br />
collapse of stellar iron cores in general relativity, Class. Quantum Grav. 24, S139 (2007),<br />
astro-ph/0612638.<br />
714. D. Pollney, C. Reisswig, L. Rezzolla, B. Szilagyi, M. Ansorg, B. Deris, P. Diener, E. N. Dorband, M.<br />
Koppitz, A. Nagar, et al., Recoil velocities from equal-mass binary black-hole mergers: a systematic<br />
investigation of spin-orbit aligned configurations, Phys. Rev. D 76, 124002 (2007), arXiv:0707.2559 [gr-qc].<br />
715. E. Schnetter, C. D. Ott, G. Allen, P. Diener, T. Goodale, T. Radke, E. Seidel, and J. Shalf, Cactus<br />
Framework: Black holes to gamma ray bursts, in Petascale Computing: Algorithms and Applications,<br />
edited by D. A. Bader (Chapman & Hall/CRC Computational Science Series, 2008), chap. 24,<br />
arXiv:0707.1607 [cs.DC], URL http://arxiv.org/abs/0707.1607.<br />
716. J. Shalf, E. Schnetter, G. Allen, and E. Seidel, Cactus as benchmarking platform, Tech. Rep. CCT-TR-<br />
2006-3, Louisiana State University (2007), URL http://www.cct.lsu.edu/CCT-TR/CCT-TR-2006-3.<br />
717. D. Stark, G. Allen, T. Goodale, T. Radke, and E. Schnetter, An extensible timing infrastructure for<br />
adaptive large-scale applications, in Parallel Processing and Applied Mathematics (PPAM), Lecture<br />
Notes in Computer Science (2007).<br />
718. B. Szilágyi, D. Pollney, L. Rezzolla, J. Thornburg, and J. Winicour, An explicit harmonic code for black-hole<br />
evolution using excision, Class. Quantum Grav. 24, S275 (2007), gr-qc/0612150.<br />
719. J. Thornburg, P. Diener, D. Pollney, L. Rezzolla, E. Schnetter, E. Seidel, and R. Takahashi, Are moving<br />
punctures equivalent to moving black holes?, Class. Quantum Grav. 24, 3911 (2007), gr-qc/0701038.<br />
720. E. Pazos, E. N. Dorband, A. Nagar, C. Palenzuela, E. Schnetter, and M. Tiglio, How far away is far<br />
enough for extracting numerical waveforms, and how much do they depend on the extraction method?,<br />
Class. Quantum Grav. 24, S341 (2007), gr-qc/0612149.<br />
721. B. Zink, N. Stergioulas, I. Hawke, C. D. Ott, E. Schnetter, and E. Muller, Nonaxisymmetric instability<br />
and fragmentation of general relativistic quasitoroidal stars, Phys. Rev. D 76, 024019 (2007),<br />
astro-ph/0611601.<br />
722. P. Diener, F. Herrmann, D. Pollney, E. Schnetter, E. Seidel, R. Takahashi, J. Thornburg, and J.<br />
Ventrella, Accurate evolution of orbiting binary black holes, Phys. Rev. Lett. 96, 121101 (2006),<br />
gr-qc/0512108, URL http://link.aps.org/abstract/PRL/v96/e121101.<br />
723. E. N. Dorband, E. Berti, P. Diener, E. Schnetter, and M. Tiglio, A numerical study of the quasinormal<br />
mode excitation of Kerr black holes, Phys. Rev. D 74, 084028 (2006), gr-qc/0608091.<br />
724. F. Löffler, L. Rezzolla, and M. Ansorg, Numerical evolutions of a black hole-neutron star system in full<br />
general relativity: Head-on collision, Phys. Rev. D 74, 104018 (2006), gr-qc/0606104.<br />
725. E. Schnetter and B. Krishnan, Non-symmetric trapped surfaces in the Schwarzschild and Vaidya spacetimes,<br />
Phys. Rev. D 73, 021502(R) (2006), gr-qc/0511017.<br />
726. E. Schnetter, P. Diener, N. Dorband, and M. Tiglio, A multi-block infrastructure for three-dimensional<br />
time-dependent numerical relativity, Class. Quantum Grav. 23, S553 (2006), gr-qc/0602104, URL<br />
http://stacks.iop.org/CQG/23/S553.<br />
727. E. Schnetter, B. Krishnan, and F. Beyer, Introduction to Dynamical Horizons in numerical relativity,<br />
Phys. Rev. D 74, 024028 (2006), gr-qc/0604015.<br />
728. B. Zink, N. Stergioulas, I. Hawke, C. D. Ott, E. Schnetter, and E. Muller, Fragmentation of general<br />
relativistic quasi-toroidal polytropes, in Proceedings of the 11th Marcel Grossmann Meeting (MG11) in<br />
Berlin, Germany, July 23-29, 2006 (2007, submitted), arXiv:0704.0431 [gr-qc], URL<br />
http://arxiv.org/abs/0704.0431.<br />
729. B. Zink, N. Stergioulas, I. Hawke, C. D. Ott, E. Schnetter, and E. Muller, Rotational instabilities in<br />
supermassive stars: a new way to form supermassive black holes, in International Scientific Workshop<br />
on Cosmology and Gravitational Physics, Thessaloniki, December 15-16, 2005, edited by N. K. Spyrou,<br />
N. Stergioulas, and C. Tsagas (ZITI, Thessaloniki, 2006), pp. 155-160.<br />
730. C. Bona, L. Lehner, and C. Palenzuela-Luque, Geometrically motivated hyperbolic coordinate<br />
conditions for numerical relativity: Analysis, issues and implementations (2005), gr-qc/0509092.<br />
PHY100035<br />
731. P. Zhang, H. R. Sadeghpour and A. Dalgarno, "Structure and spectroscopy of ground and excited<br />
states of LiYb" is now in review in J. Chem. Phys.<br />
B. Performance on CY2010 Milestones<br />
Column headings: Project-ID; WBS; Task type (O/P/M); Task Description; Cross Area;<br />
Work Package Dependencies; Start Date; End Date; Planned Resources, Name [%FTE];<br />
06/30/10 AD Status; 06/30/10 AD Notes; QSR 2010Q1 Report Values (03/31/10 AD Status,<br />
03/31/10 AD Notes); QSR 2009Q4 Report Values (12/31/09 AD Status, 12/31/09 AD Notes)<br />
SI 2 Area: Software Integration<br />
SI 2.0<br />
Objective 2.0 Operations: maintain/sustain<br />
current capabilities<br />
SI.Coord 2.0.1 O SI Area Coordination<br />
SI.Coord 2.0.1.3 O SI Area Coordination PY5 Aug-09 Jul-10<br />
UC Ian Foster[3%], Lee<br />
Liming[35%]; ANL JP<br />
Navarro [35%]<br />
SI.Coord 2.0.1.3.1 M PY5 Quarterly Reporting (9/30, 12/31, 3/31, 6/30) Aug-09 Jul-10 75% 50% 25%<br />
SI.Coord 2.0.1.4 O SI Area Coordination Extension Aug-10 Jul-11<br />
SI.Coord 2.0.1.4.1 M Extension Quarterly Reporting (9/30, 12/31, 3/31, 6/30) Aug-10 Jul-11<br />
SI.Inf 2.0.2 O Operate Infrastructure Services<br />
SI.Inf 2.0.2.3 O Operate Infrastructure Services PY5 Aug-09 Jul-10<br />
UC Ian Foster[3%], Lee<br />
Liming[35%]; ANL JP<br />
Navarro [35%]<br />
ANL JP Navarro [2%];<br />
Eric Blau[2%]; NCSA<br />
Jason Brechin[5%];<br />
TACC Warren Smith<br />
[10%]<br />
SI.Inf 2.0.2.3.1 M PY5 Quarterly Reporting (9/30, 12/31, 3/31, 6/30) Aug-09 Jul-10 75% 50% 25%<br />
SI.Inf 2.0.2.4 O Operate Infrastructure Services Extension Aug-10 Jul-11<br />
ANL JP Navarro [2%];<br />
Eric Blau[2%]; NCSA<br />
Jason Brechin[5%];<br />
TACC Warren Smith<br />
[10%]<br />
SI.Inf 2.0.2.4.1 M Extension Quarterly Reporting (9/30, 12/31, 3/31, 6/30) Aug-10 Jul-11 75% 50% 25%<br />
SI.CTSS 2.0.3 O Maintain Current CTSS Kits<br />
SI.CTSS 2.0.3.3 O Maintain Selected Current CTSS Kits PY5 Aug-09 Jul-10<br />
UC Eric Blau[45%], Lee<br />
Liming[15%], TBD[20%];<br />
NCSA Jason<br />
Brechin[14%]; TACC<br />
Warren Smith[10%];<br />
ANL JP Navarro [10%];<br />
UW TBD[7%], Jaime<br />
Frey[7%], Greg<br />
Thaine[6%]<br />
SI.CTSS 2.0.3.3.1 M PY5 Quarterly Reporting (9/30, 12/31, 3/31, 6/30) Aug-09 Jul-10 75% 50% 25%<br />
SI.CTSS 2.0.3.4 P GT 5 Oct-09 Jul-10 70%<br />
In beta, new release<br />
being prepared for RPs<br />
40% Still on original target 20% New Effort Added 2009Q4<br />
SI.CTSS 2.0.3.5 O Maintain Selected Current CTSS Kits Extension Aug-10 Jul-11<br />
UC Eric Blau[45%], Lee<br />
Liming[15%], TBD[20%];<br />
NCSA Jason<br />
Brechin[14%]; TACC<br />
Warren Smith[10%];<br />
ANL JP Navarro [10%];<br />
UW TBD[7%], Jaime<br />
Frey[7%], Greg<br />
Thaine[6%]<br />
SI.CTSS 2.0.3.5.1 M Extension Quarterly Reporting (9/30, 12/31, 3/31, 6/30) Aug-10 Jul-11<br />
SI.CTSS 2.0.3.4 O GT 5 Oct-10 Jul-11<br />
SI.Pkg 2.0.4 O Package Software<br />
SI.Pkg 2.0.4.3 O Package Software PY5 Aug-09 Jul-10<br />
ANL JP Navarro<br />
[15%];NCSA Jason<br />
Brechin[15%]; UC TBD<br />
[10%], Eric Blau[44%],<br />
Joseph Bester[35%];<br />
UW TBD [10%], Jaime<br />
Frey [10%], Greg Thaine [10%]<br />
SI.Pkg 2.0.4.3.1 M PY5 Quarterly Reporting (9/30, 12/31, 3/31, 6/30) Aug-09 Jul-10 75% 50% 25%<br />
SI.Pkg 2.0.4.4 P Local Compute Oct-09 Mar-10 Completed Completed 50% New Effort Added 2009Q4<br />
SI.Pkg 2.0.4.5 P Distributed Programming Jul-09 Jun-10 Completed On target 75% On target 50% New Effort Added 2009Q4<br />
SI.Pkg 2.0.4.6 O Package Software Extension Aug-10 Jul-11<br />
ANL JP Navarro<br />
[15%];NCSA Jason<br />
Brechin[15%]; UC TBD<br />
[10%], Eric Blau[44%],<br />
Joseph Bester[35%];<br />
UW TBD [10%], Jaime<br />
Frey [10%], Greg Thaine [10%]<br />
SI.Pkg 2.0.4.6.1 M Extension Quarterly Reporting (9/30, 12/31, 3/31, 6/30) Aug-10 Jul-11<br />
SI.Pkg 2.0.4.7 O Continue to package software Aug-10 Jul-11<br />
SI.Pkg 2.0.4.8 O Deliver common TG and OSG packages Aug-10 Jul-11<br />
SI 2.1<br />
Objective 2.1 Projects: Information Services<br />
Enhancements<br />
SI.IS 2.1.4 P Information Services Enhancements PY5 Aug-09 Jul-10<br />
SI.IS 2.1.4.1 P<br />
demonstrate ability for resource operators to register<br />
their own locally supported capabilities<br />
Core2 Aug-09 Jul-10<br />
ANL JP Navarro [30%];<br />
NCSA Jason<br />
Brechin[10%]; SDSC<br />
Ken Yoshimoto[15%];<br />
UC Eric Blau[10%],<br />
TBD[20%]<br />
SI.IS 2.1.4.1.2 P Capability documented, and registration designed Nov-09 Jan-10 50%<br />
SI.IS 2.1.4.1.3 P Prototype registration(s) deployed Feb-10 Apr-10<br />
SI.IS 2.1.4.1.4 P Production registration(s) deployed May-10 Jul-10<br />
SI.IS 2.1.4.2 P<br />
evaluate the feasibility of gateways publishing about<br />
themselves to information services<br />
SGW Aug-09 Jul-10<br />
SI.IS 2.1.4.2.4 P Publishing evaluation and next step decision May-10 Jul-10 80%<br />
SI.IS 2.1.4.3 P<br />
respond to data, gateways, visualization, and<br />
scheduling needs for expanded information service<br />
content<br />
From<br />
SGW, DV<br />
SI.IS 2.1.5 Information Services Enhancements Extension Apr-10 Jul-11<br />
SI.IS 2.1.5.1 P<br />
Track2D, Viz, and storage capability deployment<br />
registration<br />
Apr-10 Jul-11<br />
SI.IS 2.1.5.2 P New TG capabilities are registered Apr-10 Jul-11<br />
SI.IS 2.1.5.3 P Community capabilities are registered Apr-10 Jul-11<br />
SI.IS 2.1.5.4 P IIS transition to XD<br />
SI 2.2<br />
Objective 2.2 Projects: Application Service<br />
Hosting Partnership<br />
SI.Host 2.2.2 P Application Hosting Services Partnership V1 PY4 Aug-08 Jul-09<br />
SI.Host 2.2.2.1 M<br />
Application Hosting Service Description &<br />
Implementation Documents<br />
Docs<br />
Team,<br />
SGW<br />
Low priority because<br />
RPs are not requesting<br />
this capability.<br />
Dependent on gateway<br />
team to complete their<br />
part before this can be<br />
completed<br />
50% 50% Draft doc to be circulated to RPs<br />
On-going<br />
Mar-09 Jul-10 75% 75% 50%<br />
ANL JP Navarro [30%];<br />
NCSA Jason<br />
Brechin[10%]; SDSC<br />
Ken Yoshimoto[15%];<br />
UC Eric Blau[10%],<br />
TBD[20%]<br />
Kate Keahey[20%],<br />
Lee Liming[5%]<br />
Aug-08 Jul-09 75% Lee to talk to Kate 75% Lee to talk to Kate 75%<br />
SI.Host 2.2.2.2 M Coordinate Inca, IS Schema for Hosting Service NOS Jul-09 Oct-09 Lee Liming[6%] 40% Lee to talk to Kate 40% Lee to talk to Kate 40%<br />
8/3 Update: Percentage 75%. The implementation<br />
docs are no longer required. The<br />
description document has mostly been<br />
completed; it still needs a small<br />
amount of work to complete.<br />
IS Schema proto available, no INCA<br />
test yet<br />
SI.Host 2.2.3 P Application Hosting Services Partnership V2 PY5 Aug-09 Jul-10 UC Kate Keahey[25%]<br />
SI.Host 2.2.3.1 M Documentation V2 and transition plan Aug-09 Jul-10<br />
SI.Host 2.2.3.1.1 M Analyze V1 experiences and produce V2 goals, plan Aug-09 Oct-09 0% Low priority 0% Low priority 0% Low priority<br />
SI.Host 2.2.3.1.2 M Develop V1->V2 transition plan Feb-10 Apr-10 0% Low priority 0% Low priority<br />
SI.Host 2.2.3.1.3 M Document V2 May-10 Jul-10 0%<br />
SI.Host 2.2.3.2 M Coordinate changes and deployment To RPs Aug-09 Jul-10<br />
SI.Host 2.2.3.2.1 M Collect V1 RP experiences Aug-09 Oct-09 0% Low priority 0% Low priority 0% Low priority<br />
SI.Host 2.2.3.2.2 M Coordinate V2 testing deployment Feb-10 Apr-10 0%<br />
SI.Host 2.2.3.2.3 M Coordinate V2 production deployment May-10 Jul-10 0%<br />
SI 2.3<br />
Objective 2.3 Projects: Public Build and Test<br />
Service<br />
SI.STCI 2.3.2 P TeraGrid STCI User Support PY5 Aug-09 Jul-10<br />
SI.STCI 2.3.2.1 M Feasibility Evaluation for Community Usage Aug-09 Jan-10<br />
SI.STCI 2.3.2.1.1 M<br />
Analyze early user experiences and summarize, assess<br />
community interest<br />
ANL JP Navarro[8%];<br />
UC Eric Blau[9%]<br />
Aug-09 Oct-09 10%<br />
Due to staff losses and lag in hiring replacements, these tasks are behind schedule.<br />
10%<br />
SI.STCI 2.3.2.1.2 M Produce evaluation report and conclusion Nov-09 Jan-10 0% 0%<br />
SI.STCI 2.3.2.2 M Standalone Build & Test Capability Kit 2.3.2.1 Feb-10 Jul-10<br />
SI.STCI 2.3.2.2.1 M Capability kit beta testing, produce documentation Feb-10 Apr-10<br />
SI.STCI 2.3.2.2.2 M Capability kit production rollout May-10 Jul-10<br />
SI 2.5 Objective 2.5 Projects: Scheduling<br />
SI.SPRC 2.5.2.17 P SPRUCE PY4 To RPs Aug-08 Jul-09<br />
Suman Nadella[100%],<br />
Nicholas Trebon[100%],<br />
Pete Beckman [5%],<br />
Initial discussions have<br />
taken place<br />
10% Initial discussions have taken place
SI.SPRC<br />
2.5.2.17.2<br />
M<br />
On-demand compute capability documented and<br />
integrated into production operations on RP systems<br />
SI.SPRC 2.5.2.18 P SPRUCE PY5 To RPs Aug-09 Jul-10<br />
SI.SPRC 2.5.2.18.3 P<br />
SI.SPRC 2.5.2.18.4 P<br />
Deploy a production service for guaranteed network<br />
bandwidth for urgent computations.<br />
Deploy Spruce for virtualized cloud infrastructure<br />
(collaboration with Eucalyptus).<br />
Work with users from VT to utilize Spruce for H1N1<br />
influenza simulations on cloud infrastructures<br />
Work with Purdue and other cloud providers to support<br />
Spruce.<br />
Maintain Spruce infrastructure (i.e., web service server,<br />
database, user/admin portals).<br />
Update Spruce plug-ins at resource-end as new<br />
schedulers are introduced.<br />
Work to get Spruce supported at new TeraGrid<br />
resources that have replaced retired machines that<br />
supported Spruce (e.g., Frost).<br />
Feb-10<br />
May-10<br />
Apr-10<br />
Jul-10<br />
UC Suman<br />
Nadella[40%], Nicholas<br />
Trebon[35%], Pete<br />
Beckman [5%]<br />
67% 67% 67%<br />
100%<br />
Contingent on Pete's<br />
report<br />
Lee to check with Pete<br />
67%<br />
SI.Skd 2.5.3 O Scheduling Working Group<br />
SI.Skd 2.5.3.3 O Scheduling Working Group Coordination PY5 Aug-09 Jul-10<br />
TACC Warren Smith<br />
[25%]<br />
75% 50%<br />
SI.Skd 2.5.3.4 O Scheduling Working Group Coordination Extension Aug-10 Jul-11<br />
TACC Warren Smith<br />
[25%]<br />
SI.QBETS 2.5.9 P QBETS PY4 moved to PY5 Aug-09 Feb-10 Warren<br />
SI.QBETS 2.5.9.2 P<br />
Queue prediction capability documented and<br />
integrated into TeraGrid's production operations<br />
SI.QBETS 2.5.9.2.1 P Queue Prediction Capability Documented Aug-09 Nov-09 60%<br />
Lee to talk to Warren<br />
about Karnak<br />
deployment<br />
60%<br />
Lee to talk to Warren<br />
about Karnak<br />
deployment<br />
SI.QBETS 2.5.9.2.2 P Queue Prediction Capability integrated into TG operations Nov-09 Feb-10 25% Not formally complete but available for use 0% Not formally complete but available for use<br />
NICS Troy Baer[6%];<br />
SI.MTS 2.5.10 P Metascheduling PY5 Aug-09 Jul-10<br />
SDSC Kenneth<br />
Yoshimoto[35%]; TACC<br />
Warren Smith[55%]<br />
SI.MTS 2.5.10.1 P Detailed PY5 metascheduling plan Aug-09 Oct-09 Completed Completed<br />
8/3 Update: On-demand capability has already been implemented by a number of RPs; documentation and awareness are still needed for this implementation. This will be completed by Mar 10<br />
SI.MTS 2.5.10.2 P Metascheduling capabilities developed Nov-09 Jan-10 Completed Condor-G/ClassAd metascheduling 80% Condor-G/ClassAd metascheduling<br />
SI.MTS 2.5.10.3 P<br />
Testing automated and dynamic (information services<br />
based) metascheduling<br />
Feb-10 Apr-10 Completed Completed<br />
SI.MTS 2.5.10.4 P<br />
Production automated and dynamic (information<br />
services based) metascheduling<br />
May-10 Jul-10 Lee to talk to Warren Lee to talk to Warren<br />
SI.MTS 2.5.11 P Metascheduling Extension Apr-10 Jul-11<br />
NICS Troy Baer[6%];<br />
SDSC Kenneth<br />
Yoshimoto[35%]; TACC<br />
Warren Smith[55%]<br />
SI.MTS 2.5.11.1 O<br />
Advanced scheduling tools ported to new TeraGrid<br />
systems as appropriate per RP wishes 13%<br />
SI.MTS 2.5.11.2 P<br />
Best practice documentation for using advanced<br />
scheduling capabilities in conjunction with non-<br />
TeraGrid systems<br />
0% Not started<br />
Improved Coordinated Capability User<br />
SI.ICUD 2.7 P<br />
Documentation PY5<br />
Aug-09 Jul-10 SDSC Fariba Fana[15%]<br />
SI.ICUD 2.7.1 P High-level documentation layout designed, detailed project tasks identified, scheduled, and assigned Aug-09 Oct-09 0% JP needs to talk to doc team 0% JP needs to talk to doc team<br />
SI.ICUD 2.7.2 P<br />
Capability kit introduction/use case documentation<br />
improvements<br />
Nov-09 Jan-10<br />
0%<br />
SI.ICUD 2.7.3 P<br />
Half of capability kit detailed documentation<br />
improvements including IS integration<br />
Feb-10 Apr-10<br />
0%<br />
SI.ICUD 2.7.4 P<br />
Half of capability kit detailed documentation<br />
improvements including IS integration<br />
May-10 Jul-10<br />
0%<br />
0%<br />
Contingent on, and dependent on, the Liferay implementation<br />
SI<br />
END SI
Work Package Dependencies Planned<br />
QSR <strong>2010Q2</strong> <strong>Report</strong> Values<br />
QSR 2010Q1 <strong>Report</strong> Values<br />
Resources 6/30/10 AD Status 6/30/10 AD Notes 3/31/10 AD Status 3/31/10 AD Notes<br />
Project-ID<br />
WBS<br />
OPMD<br />
Description<br />
Task<br />
Cross<br />
Area<br />
Start<br />
Date<br />
End<br />
Date<br />
Name [%FTE]<br />
SGW 1 Area: Science Gateways<br />
SGW 1.0<br />
Objective 1.0 Operations: maintain/sustain<br />
current capabilities<br />
SGW.Coord 1.0.1.2 O SGW Area Coordination PY4 Aug-08 Jul-09 Nancy Wilkins-Diehr[50%], Ongoing Ongoing<br />
SGW.Coord<br />
D<br />
Support for Gateways requiring urgent computing.<br />
The LEAD gateway will run jobs that pre-empt<br />
others by using SPRUCE.<br />
Standardize security approach to gateways.<br />
SGW.Coord<br />
D Among gateways with roaming allocations, usage<br />
of multiple TeraGrid resources will increase<br />
Gateway Web services registry. It will be possible<br />
SGW.Coord<br />
D to discover services available through Gateways<br />
programmatically<br />
SGW.Coord 1.0.1.3 O SGW Area Coordination PY5 Aug-09 Jul-10 Nancy Wilkins-Diehr[50%],<br />
SGW.Coord 1.0.1.3.1 M Quarterly <strong>Report</strong>ing (9/30, 12/31, 3/31, 6/30) Aug-09 Jul-10 Stuart Martin[50%]<br />
Ongoing Ongoing<br />
SGW.GDoc 1.0.2 O Gateway Documentation May-07 Jul-10<br />
SGW.GDoc 1.0.2.1.10 P Use of GRAM audit Dec-07 Mar-08 Delayed Delayed<br />
IU Canceled Deferred Not done; currently no RPs support on-demand computing. Gateways will pursue an application driver for this in PY6 after the capability is developed by the scheduling-wg<br />
In progress In progress Delayed until PY5. Targeted effort to<br />
standardize treatment of community<br />
accounts funded at NICS in PY5<br />
RENCI In progress In progress Delayed. Progress slower than expected.<br />
Work expected to wrap up in the first half of<br />
PY5<br />
SGW.GDoc 1.0.2.1.12 P<br />
How to incorporate TG resources into an OGCE<br />
framework<br />
Dec-07 Mar-08 Canceled Delayed<br />
SGW.GDoc 1.0.2.1.13 P Condor, Globus use in a gateway Dec-07 Dec-07 Canceled Delayed<br />
SGW.GDoc 1.0.2.2 O Gateway Documentation PY4 Aug-08 Jun-09 Diana Diehl[50%]<br />
Provide content for documentation, formation of<br />
SGW.GDoc 1.0.2.2.2 M ideas and solutions to common gateway problems<br />
Aug-08 Sep-08 Delayed<br />
(RENCI);<br />
SDSC Diana Diehl[20%];<br />
SGW.GDoc 1.0.2.3 O Gateway Documentation PY5 Aug-09 Jul-10 UNC Jason Reilly [40%],<br />
John McGee [5%]<br />
SGW.GDoc 1.0.2.2.1 M Up to date documentation on science gateways. Aug-09 Jul-10 Ongoing Ongoing<br />
SGW.GDoc 1.0.2.2.2 M Anatomy of Gateway tutorial Aug-09 Jul-10 Done Done<br />
SGW 1.1<br />
Objective 1.1 Projects: Gateways Targeted<br />
Support Program<br />
SGW.HDN 1.1.2.4 O Helpdesk PY5 Aug-09 Jul-10 NCSA Yan Liu[20%]<br />
SGW.HDN 1.1.2.5 O Provide helpdesk support for production science gateways by answering user questions, routing user requests to appropriate gateway contacts, and tracking user responses Aug-09 Jul-10 Ongoing Ongoing<br />
SGW.HDN 1.1.2.6 O Provide the helpdesk support of the SimpleGrid online training and prototyping services to gateway developers and communities Aug-09 Jan-10 Ongoing Ongoing<br />
SGW.HDN 1.1.2.7 P Gather requirements for a knowledge base to improve gateway helpdesk services.<br />
SGW.Gen 1.1.5 P Genomics gateway, SidGrid, MIT SEAS PY4 Aug-08 Mar-09 Suresh Marru[50%]<br />
Work with potential new gateway including polar grid (IU); Feb-10 Jul-10 Effort redirected to GridChem AUS support request. This work will not be picked up Effort redirected to GridChem AUS support request<br />
SGW.Gen 1.1.5.1 M Jan-09 Jul-09 Raminder Singh [100%] Completed In progress<br />
CalTech Julian Bunn [10%],<br />
SGW.CIG 1.1.6.3 P CIG support PY5 Aug-09 Jul-10 John McCorquodale [25%],<br />
Roy Wiliams [5%]<br />
SGW.CIG 1.1.6.4 P Work with NOAO to develop a TG portal that allows users to run customized galaxy collision simulations using the power of the TeraGrid Aug-09 Jul-10 Effort redirected to Arroyo gateway, tasks forthcoming Effort redirected to Arroyo gateway, tasks forthcoming<br />
SGW.CIG 1.1.6.5 P A scriptable interface to portal services will be created. Aug-09 Oct-09 Effort redirected to Arroyo gateway, tasks forthcoming Effort redirected to Arroyo gateway, tasks forthcoming<br />
SGW.CIG 1.1.6.6 P Extend the CIG portal with an easy-to-use, language-agnostic RPC-style interface that parallels the functionality of the web-based interactive portal (and extends it where appropriate) Nov-09 Jan-10 Effort redirected to Arroyo gateway, tasks forthcoming Effort redirected to Arroyo gateway, tasks forthcoming<br />
SGW.CIG 1.1.6.7 P Provide bindings from this interface into popular languages (driven by user demand) such as C, Python and Perl<br />
SGW.Bio 1.1.10.3 P Bio web services, registry PY4 Aug-08 Jul-09<br />
SGW.Bio 1.1.10.3.4 M<br />
Work with the various TeraGrid teams to evolve<br />
the registry to meet the broader TeraGrid<br />
requirements (RENCI);<br />
SGW.Bio 1.1.10.3.5 M<br />
Work with other gateways to incorporate their<br />
service offerings into the registry (RENCI);<br />
Feb-10 Apr-10 Effort redirected to Arroyo<br />
gateway, tasks forthcoming<br />
Jason Reilly[50%],<br />
Mats Rynge[25%]<br />
John McGee[5%]<br />
To IS, NOS Aug-08 Jul-09 In progress In progress<br />
Effort redirected to Arroyo<br />
gateway, tasks forthcoming<br />
GRAM5 released Jan, 2010. Waiting for TG<br />
installation from software-wg<br />
Documentation staff picking this back up<br />
after Liferay distractions.<br />
Documentation staff picking this back up<br />
after Liferay distractions.<br />
Documentation staff delayed on this due to Liferay migration<br />
Aug-08 Jul-09 In progress In progress<br />
SGW.GC 1.1.12 P GridChem or Other New PY5 Aug-09 Jul-10 IU Suresh Marru [50%]<br />
SGW.GC 1.1.12.1 P<br />
Complete Gateway clients as determined by the<br />
Targeted Support Program<br />
Jan-10 Jul-10 In progress In progress Ultrascan<br />
SGW.GC 1.1.12.2 P<br />
Ongoing support as required for existing Gateway<br />
projects<br />
Jan-10 Jul-10 In progress In progress GridChem, Ultrascan, LEAD
SGW.GC 1.1.12.3 P<br />
Assist GridChem gateway in the areas of software<br />
access, data management, improved workflows,<br />
Aug-09 Dec-09<br />
visualization and scheduling<br />
Completed<br />
In progress<br />
SGW.PG 1.1.13 P PolarGrid or Other New PY5 Aug-09 Jul-10 IU SGW TBD1 [100%]<br />
SGW.PG 1.1.13.1 P<br />
Complete Gateway clients as determined by the<br />
Targeted Support Program<br />
Jan-10 Jul-10 Completed In progress<br />
SGW.PG 1.1.13.2 P<br />
Ongoing support as required for existing Gateway<br />
projects<br />
SGW.PG 1.1.13.3 P Assist PolarGrid team with TeraGrid integration. May include realtime processing of sensor data, support for parallel simulations, GIS integration and EOT components Aug-09 Jul-10 In progress<br />
SGW.SID 1.1.14 P SIDGrid or Other New PY5 Aug-09 Jul-10 UC Wenjun Wu [100%]<br />
SGW.SID 1.1.14.1 P SIDGrid scheduling will be enhanced by taking advantage of work underway in the Scheduling Working Group to enable monitoring of and submission to multiple TeraGrid resources through a single interface Aug-09 Oct-09<br />
SGW.SID 1.1.14.2 P<br />
Improved security model for community accounts<br />
utilized by the gateway<br />
Nov-09 Jan-10 Completed In progress<br />
SGW.SID 1.1.14.3 P<br />
Increased number of analysis codes available<br />
through the gateway<br />
Feb-10 Apr-10 Completed<br />
SGW.SID 1.1.14.4 P Isolation of execution within virtual machines or other constrained environments, and data sharing with collaborators, while simultaneously protecting the users' data and execution environment from accidental or malicious compromise. Feb-10 Apr-10 Completed In progress<br />
SGW.SID 1.1.14.5 P VDS, the current workflow solution, will be replaced by its successor, Swift. May-10 Jul-10 Completed<br />
Documentation and code that is generally<br />
applicable to gateway efforts will be made<br />
available to TeraGrid gateway partners<br />
Workflow complete, scheduling in<br />
progress<br />
Jan-10 Jul-10 In progress In progress PolarGrid, Ultrascan<br />
Done Using Swift Done Using Swift<br />
SGW.SID 1.1.14.6 P<br />
May-10 Jul-10<br />
In progress<br />
SGW.OSG 1.1.15 P RENCI-OSG PY5 Aug-09 Jul-10<br />
UNC Jason Reilly [15%],<br />
John McGee [5%]<br />
SGW.OSG 1.1.15.1 P<br />
Prototype of OSG jobs running on TeraGrid<br />
resources via NIMBUS<br />
Aug-09 Oct-09 Not completed Deferred<br />
SGW.OSG 1.1.15.2 P<br />
Successful interactions with and assistance for<br />
gateway developers<br />
Nov-09 Apr-10 Not completed In progress<br />
SGW.ES 1.1.16 P Environmental Science Gateway PY5 Aug-09 Jul-10<br />
SGW.ES 1.1.16.1 P<br />
Release of a demonstration version of the ESGC<br />
Portal capable of invoking CCSM runs on the<br />
TeraGrid<br />
SGW.ES 1.1.16.2 P Ability for runs invoked on the Purdue Climate Portal or the ESGC Gateway to be published back to the ESG data holdings<br />
SGW.ES 1.1.16.3 P<br />
Investigate federation of Purdue data holdings<br />
with ESG data archives<br />
SGW 1.2 Objective 1.2 Projects: Gateways User Count<br />
Aug-09<br />
SGW.UCnt 1.2.2 P Gateway User Count PY4 To NOS Aug-08 Jul-09<br />
SGW.UCnt<br />
SGW.UCnt<br />
D<br />
D<br />
End gateway users can be counted<br />
programmatically, as opposed to current manual<br />
aggregation method. By the end of PY5, all jobs<br />
submitted via community accounts will include<br />
attributes identifying the end user of that gateway.<br />
Extend TGCDB to store attributes with job logs.<br />
All jobs submitted via community accounts will be<br />
associated with individual gateway users and<br />
made available to backend processes. Job<br />
information will be stored in the TGCDB, which will<br />
make it possible to produce detailed usage reports<br />
through a TGCDB query<br />
SGW.UCnt 1.2.3 P Gateway User Count PY5 NOS Aug-09 Jul-10<br />
SGW.UCnt 1.2.3.1 M<br />
Develop internal web pages for the security<br />
working group and the science gateway<br />
administration page in support of attribute-based<br />
authentication<br />
Oct-09<br />
NCAR NCAR SGW TBD<br />
[50%] ;Purdue PU SGW<br />
TBD1[50%]<br />
In progress<br />
In progress<br />
Nov-09 Jan-10<br />
In progress<br />
In progress<br />
Feb-10 Jul-10 In progress In progress<br />
Aug-09<br />
Dec-09<br />
Tom Scavo[75%],<br />
Jim Basney[10%],<br />
Nancy Wilkins-Diehr[5%]<br />
Delayed<br />
Pending GRAM5 installs and subsequent<br />
RP work.<br />
In progress<br />
NCSA Done. In progress<br />
NCSA Tom Scavo[50%], Jim Basney[10%], Terry Fleury [25%]; SDSC Nancy Wilkins-Diehr[5%], Michael Dwyer [10%]<br />
Investigating CCSM v4 rather than<br />
v3<br />
Done<br />
Done<br />
SGW.UCnt 1.2.3.2 M Ubiquitous science gateway adoption. Aug-09 Dec-09 Delayed Waiting for GRAM5 install Delayed Dependent on GRAM5 install<br />
SGW.UCnt 1.2.3.3 M Ubiquitous RP adoption. Feb-09 Dec-09 Delayed Waiting for GRAM5 install Delayed<br />
SGW.UCnt 1.2.3.4 M User count INCA tests Jan-10 Feb-10 Done Done
SGW.UCnt 1.2.3.5 M Post-TG architecture documentation. Feb-10 Jul-10 In progress In progress<br />
SGW 1.4<br />
Objective 1.4 Projects: Gateways General<br />
Services Discovery<br />
SGW.SG 1.4.2 P<br />
SimpleGrid, helpdesk, accounting, new<br />
communities PY4<br />
SGW.SG<br />
D<br />
Reusable SimpleGrid modules. The TeraGrid science gateway program will open an online learning service for gateway development by integrating SimpleGrid documentation, progressive online courses, and common components contributed from other existing gateways. All the training, coding, and prototyping exercises will be conducted through a web browser.<br />
To NOS<br />
Aug-08<br />
SGW.SG 1.4.1.1 P<br />
SimpleGrid, helpdesk, accounting, new<br />
communities PY5<br />
Aug-09 Jul-10<br />
SGW.SG 1.4.1.1.1 M Continue to develop and support the SimpleGrid online training service for building new science gateways Aug-09 Jan-10<br />
SGW.SG 1.4.1.1.2 M Develop a SimpleGrid prototyping service to support virtualized access to TeraGrid by efficiently converting a new community application to a science gateway application that provides web access for sharing within the community. Aug-09 Jul-10<br />
SGW.SG 1.4.1.1.3 M Develop a streamlined packaging service for new communities to create their science gateway software based on the use of the SimpleGrid prototyping service Feb-10 Jul-10<br />
SGW.SG 1.4.1.1.4 M Develop a user-level TeraGrid usage service within SimpleGrid based on the community account model and attributes-based security services Feb-10 Jul-10<br />
SGW.SG 1.4.1.1.5 M Work with potential new communities to improve the usability and documentation of the proposed gateway support services Aug-09 Jul-10<br />
SGW.SG 1.4.1.1.6 M Conduct education and outreach work using the SimpleGrid online training service and related science gateway technologies in the contexts of undergraduate, graduate, and K-12 education and training, and document experiences Aug-09 Jul-10<br />
SGW.CA 1.4.5 P Community Accounts PY5 Aug-09 Jul-10<br />
SGW.CA 1.4.5.1 M Review and documentation of requirements Aug-09 Oct-09<br />
Jul-09<br />
Shaowen Wang[8%],<br />
Yan Liu[80%]<br />
NCSA<br />
NCSA Shaowen Wang[10%],<br />
Yan Liu[80%]<br />
NICS Victor<br />
Hazlewood[30%];SDSC<br />
Nancy Wilkins-Diehr[15%]<br />
Delayed<br />
Ongoing<br />
In progress<br />
In progress<br />
Will be using CI Tutor for this in the<br />
extension year.<br />
In progress<br />
Ongoing<br />
Work will complete in the extension year In progress<br />
VM deployment in the extension year In progress<br />
Deferred Waiting for GRAM5 install Deferred<br />
In progress<br />
In progress<br />
Complete SciDAC, TG10 In progress<br />
Complete<br />
In progress<br />
SGW.CA 1.4.5.2 M Communication of requirements Nov-09 Jan-10 Complete In progress<br />
SGW.CA 1.4.5.3 M Specification of a standard deployment model(s) Feb-10 Apr-10 Complete In progress<br />
SGW.CA 1.4.5.4 M Deployment of standard model on one gateway May-10 Jul-10 In progress<br />
SGW.GCD 1.4.6 P Gateways Code Discovery PY5 Aug-09 Jul-10 SDSC Nancy Wilkins-Diehr[5%]; UNC Josh Coyle [30%], John McGee [5%]<br />
SGW.GCD 1.4.6.1 P<br />
RENCI Gateway Services advertised appropriately<br />
in TG Info Services<br />
Aug-09 Aug-09 Done Done<br />
SGW.GCD 1.4.6.2 P Schema for Gateway services and capabilities Aug-09 Sep-09 Done Done<br />
SGW SGW END<br />
SimpleGrid portal and tutorial server have<br />
been deployed on TeraGrid gateway cluster<br />
for two gateway developers and new<br />
gateway tutorials (TeraGrid'09 and<br />
SciDAC'09). As SimpleGrid is released with<br />
new versions, the online tutorial is being<br />
improved to incorporate core features of<br />
Web 2.0 user interface technologies. New<br />
tutorial experiences have been adopted in<br />
the online learning service. The online<br />
tutorial is being modified to incorporate<br />
efficient ways for new user support based on<br />
our gateway helpdesk support experience.<br />
We will release the online learning service<br />
in mid-PY05 for targeted evaluation.<br />
Pending attribute implementation,<br />
which is delayed until GT5 installs<br />
complete.<br />
Installing the Science Gateway kit and the commsh toolset at NICS; began discussions of implementing this with GridAMP as a start. A survey of what is being done and what is needed as requirements is still needed.
TeraGrid US PY5 Project Plan (as of 1/7/09)<br />
Project-ID<br />
WBS<br />
O P M<br />
Description<br />
Work Package Dependencies Planned<br />
Task<br />
Cross<br />
Area<br />
Start<br />
Date<br />
End<br />
Date<br />
Resources<br />
Name [%FTE]<br />
PY5Q3<br />
Status<br />
Update<br />
US 6.0 Operations: Frontline User Support<br />
US.Coord 6.0.1 O US Area Coordination PSC Sergiu Sanielevici (30%)<br />
US.Coord 6.0.1.3 O US Area Coordination PY5<br />
user interface<br />
council<br />
AUS, NOS,<br />
UFP<br />
US.Eng 6.0.2 O User Engagement PY5 Aug-09 Jul-10<br />
Aug-09 Jul-10 ongoing<br />
PSC David O'Neal(25%),<br />
Richard Raymond(20%),<br />
Raghurama Reddy(45%),<br />
Sergiu Sanielevici (30%)<br />
US.Eng 6.0.2.1 O User Champion Coordination Aug-09 Jul-10 ongoing<br />
US.Eng 6.0.2.2 O Campus Champion Coordination Aug-09 Jul-10 ongoing<br />
US.Eng 6.0.2.3 O Startup and education grant support coordination Aug-09 Jul-10 ongoing<br />
US.Eng 6.0.2.4 O Analyze and report usage patterns Aug-09 Jul-10 ongoing<br />
US.Eng 6.0.2.5 P TG User Survey PY5 question input All Aug-09 Jul-10 external contractor DONE<br />
US.Tick 6.0.3 O Share and maintain best practices for ticket resolution across all RPs PY5 Aug-09 Jul-10 PSC David O'Neal(10%), Richard Raymond(30%), Raghurama Reddy(10%), Sergiu Sanielevici (10%) ongoing<br />
US.Tick 6.0.3.1 O<br />
Focus on providing users with a substantive clarification of the nature of the problem and<br />
the way forward<br />
Aug-09 Jul-10 ongoing<br />
US.Tick 6.0.3.2 O Focus on the coordinated resolution of problems spanning RPs and systems Aug-09 Jul-10 ongoing<br />
US.Tick 6.0.3.3 O Stale ticket count reaches zero Aug-09 Jul-10 ongoing<br />
US.Tick 6.0.3.4 O At least 85% survey ratings for promptness and quality Aug-09 Jul-10 ongoing<br />
US.Cnslt.RP.NCSA 6.0.5.5 O NCSA RP User Services - Consulting Operations Apr-10 Mar-11 Estabrook [100%], Jackson[100%], John [100%] ongoing<br />
US.Cnslt.RP.IU 6.0.5.6 O IU RP User Services - Consulting Operations Apr-10 Mar-11 ongoing<br />
US.Cnslt.RP.LONI 6.0.5.7 O LONI RP User Services - Consulting Operations Apr-10 Mar-11 Jundt [50%], Xu [100%] ongoing<br />
US.Cnslt.RP.NICS 6.0.5.8 O NICS RP User Services - Consulting Operations Apr-10 Mar-11 Loftis [50%], Lucio [100%], Sharkey [100%], Wong [50%], Halloy [50%], Crosby [50%], TBD [250%] ongoing<br />
US.Cnslt.RP.PSC 6.0.5.10 O PSC RP User Services - Consulting Operations Apr-10 Mar-11 Blood[25%], Costa[90%], Gomez[45%], Madrid[35%], Maiden[20%], Nigra[70%], Raymond[3%], Sanielevici[2%], Wang[5%] ongoing<br />
US.Cnslt.RP.PU 6.0.5.11 O PU RP User Services - Consulting Operations Apr-10 Mar-11 P. Smith [20%], New person [100%] ongoing<br />
US.Cnslt.RP.SDSC 6.0.5.12 O SDSC RP User Services - Consulting Operations Apr-10 Mar-11<br />
Choi [5%], Greenberg [15%],<br />
Tatineni [55%], Wolter [100%]<br />
ongoing<br />
ongoing<br />
US.Cnslt.RP.TACC 6.0.5.13 O TACC RP User Services - Consulting Operations Apr-10 Mar-11<br />
Turner [40%], Wilson [40%],<br />
Sadahiro [50%]<br />
ongoing
TeraGrid AUS Project Plan<br />
Work Package Dependencies Planned<br />
Resources<br />
6/30/10 AD<br />
Status<br />
QSR <strong>2010Q2</strong> <strong>Report</strong> Values<br />
6/30/10 AD Notes<br />
3/31/10 AD<br />
Status<br />
QSR 2010Q1 <strong>Report</strong> Values<br />
3/31/10 AD Notes<br />
Project-ID<br />
WBS<br />
OPMD<br />
Description<br />
Task<br />
Cross<br />
Area<br />
Start<br />
Date<br />
End<br />
Date<br />
Name [%FTE]<br />
AUS 10 Area: Advanced User Support<br />
10.0 AUS Area Coordination<br />
AUS.Coord 10.0.2 O AUS Area Coordination PY5 Aug-09 Jul-10<br />
Majumdar [75%],Alameda[0%], Sanielevici<br />
[0%],Barth[0%], Cheesman[0%],Jundt[0%],<br />
Crosby[0%]<br />
Total 0.75<br />
10.1 Advanced Support Projects<br />
AUS.LIB 10.1.1 P Library Database PY5 Aug-09 Jul-10<br />
NICS Mark Fahey[10%], Intern[50%], HPC Ops<br />
Staff[10%]<br />
AUS.LIB 10.1.1.1 P implement wrappers for ld and aprun Aug-09 Oct-09 100% done 100% done<br />
AUS.LIB 10.1.1.2 P test wrappers; write mining scripts Nov-09 Jan-10 100% done 100% done<br />
AUS.LIB<br />
AUS.LIB<br />
10.1.1.3 P<br />
10.1.1.4 P<br />
wrappers in production mode; implement internal<br />
web interface; share beta release with other<br />
interested sites Feb-10 Apr-10 100% done<br />
everything in production; begin using as a tool to<br />
help manage software; prepare full release;<br />
share with other interested sites May-10 Jul-10 90% done<br />
AUS.MPI 10.1.2 P MPIg Support PY5 Aug-09 Jul-10<br />
AUS.MPI 10.1.2.1 P<br />
Work with ASTA apps groups to port to MPIg on<br />
TG sites. Continue investigation of running apps<br />
at full scale on Ranger. Aug-09 Oct-09<br />
AUS.MPI 10.1.2.2 P Attempt cross-site runs with MPIg on TG sites. Nov-09 Jan-10<br />
AUS.MPI 10.1.2.3 P<br />
NIU Undergrad[29%], Grad[75%], Nick<br />
Karonis[25%]<br />
100% done<br />
(since we<br />
postponed TG<br />
site runs)<br />
100% done<br />
(since we<br />
postponed TG<br />
site runs)<br />
Work with the largest TG RP providers to enable cross-site runs (postponed because we are unable to use MPIg on TG sites). Instead, adding a new task of integrating UPC and MPI for user apps Feb-10 Apr-10 10% done<br />
AUS.MPI 10.1.2.4 P Investigate integrating clouds, possibly with Globus, to run MPIg jobs. May-10 Jul-10 80% done<br />
10.2 Advanced Support Operations<br />
AUS.ASTA 10.2.1 O Advanced Support TG Applications<br />
AUS.ASTA 10.2.1.2 O Advanced Support TG Applications PY5 Aug-09 Jul-10<br />
NCSA 0.6 FTE<br />
Jay Alameda [20%]<br />
Dodi Heryadi [0%]<br />
Seid Koric [0%]<br />
Rick Kufrin[25%]<br />
Sudhakar Pamidighantam [0%]<br />
Mark Straka[15%]<br />
Ahmed Taha[0%]<br />
Michelle Gower[0%]<br />
David Bock [0%]<br />
Mark Vanmoer [0%]<br />
PSC 3.15 FTE<br />
Mahin Mahmoodi [10%]<br />
The tracking database is in production; the internal web interface was dropped as a deliverable earlier, and instead a suite of scripts provides the same functionality.<br />
95% done (we have not completed the license agreement as we work with the lawyers). We have shared the beta with the new NOAA center coming online at ORNL. We are using the tool to help guide our software maintenance. Full release is not quite done; the licensing agreement and manual need some more time.<br />
Because we are unable to use MPIg on TG sites, we are postponing the task of running at full scale.<br />
Because we are unable to use MPIg on TG sites, cross-site runs on TG sites are postponed for now.<br />
Some of the UPC had been integrated into the app, but based on discussions with the user, focus has now shifted from UPC to the more ubiquitous OpenMP and pthreads. We are now 10% done in that new direction.<br />
100% done (since we postponed TG site runs)<br />
100% done (since we postponed TG site runs)<br />
10% done<br />
Successfully integrated MPIg and Globus with NIMBUS and ran an application on the cloud. RPs have yet to embrace cloud computing, so we are moving on to OpenMP and pthreads. If/when we return to clouds, we will need to add vendor MPI integration.<br />
The tracking software is in<br />
production on both kraken and<br />
athena. Some scripts to mine data<br />
have been developed. The manual<br />
is not quite done for the full release.<br />
A beta release with incomplete docs<br />
could be shared with interested<br />
parties.<br />
Because we are unable to use MPIg on TG sites, we are postponing the task of running at full scale.<br />
Because we are unable to use MPIg on TG sites, cross-site runs on TG sites are postponed for now.<br />
Redirecting this task to integrate<br />
UPC and MPI
Raghu Reddy [30%]<br />
John Urbanic [25%]<br />
Joel Welling [50%]<br />
Philip Blood [35%]<br />
Roberto Gomez [40%]<br />
Marcela Madrid [20%]<br />
David O'Neal[50%]<br />
Yang Wang [30%]<br />
Anirban Jana [25%]<br />
SDSC 1.75 FTE<br />
Natasha Balac[15%]<br />
Dong Ju Choi[15%]<br />
Amit Chourasia[10%]<br />
Yifeng Cui[40%]<br />
Dmitry Pekurovsky [25%]<br />
Wayne Pfeiffer[10%]<br />
Mahidhar Tatineni[20%]<br />
Ross Walker[20%]<br />
TBD/Doc etc.[20%]<br />
Purdue 0.45 FTE<br />
Phil Cheeseman[45%]<br />
LSU 0.1 FTE<br />
Adam Jundt [10%]<br />
Honggao Liu /POC[0%]<br />
ORNL 0.2 FTE<br />
John Cobb, backup POC ORNL[0%]<br />
Mei Li Chen, backup POC[10%]<br />
Vickie Lynch, POC ORNL[10%]<br />
NICS 0.8 FTE<br />
Lonnie Crosby, POC NICS[40%]<br />
Christian Halloy[20%]<br />
Kwai Wong[20%]<br />
Bruce Loftis, backup POC[0%]<br />
TACC 3.85 FTE<br />
Bill Barth, POC TACC<br />
John Cazes<br />
Lars Koesterke<br />
B. D. Kim<br />
Hang Liu<br />
Robert McLay<br />
Kent Milfeld<br />
John Peterson<br />
Karl Schulz (backup POC)<br />
TACC total 3.85<br />
IU 0.0 FTE<br />
Don Berry[0%]<br />
Ray Shepperd POC[0%]<br />
ANL 0.31 FTE<br />
Joe Insley[31%]<br />
AUS.ASP 10.2.2 O Advanced Support Projects<br />
AUS.ASP 10.2.2.2 O Advanced Support Projects PY5 Aug-09 Jul-10<br />
Total 11.06<br />
NCSA 0.4 FTE<br />
Jay Alameda [20%]<br />
Dodi Heryadi [0%]<br />
Seid Koric [0%]<br />
Ahmed Taha[0%]<br />
Michelle Gower[0%]<br />
David Bock [0%]<br />
Mark Vanmoer [0%]<br />
Mark Straka[20%]<br />
PSC 1.65 FTE
Nick Nystrom [15%]<br />
Mahin Mahmoodi [40%]<br />
Raghu Reddy[5%]<br />
Joel Welling [30%]<br />
David O'Neal[15%]<br />
Anirban Jana[10%]<br />
Roberto Gomez[20%]<br />
Yang Wang [15%]<br />
Phil Blood [15%]<br />
SDSC 1.05 FTE<br />
Chourasia [15%]<br />
Pfeiffer[40%]<br />
Dmitry Pekurovsky[10%]<br />
Tatineni[5%]<br />
Balac[10%]<br />
Ross Walker [25%]<br />
Purdue 0.3 FTE<br />
TBD 30%<br />
LSU 0.1 FTE<br />
Adam Jundt [10%]<br />
TACC 1.9 FTE<br />
Bill Barth [X%]<br />
others/TACC<br />
TACC total 1.9<br />
IU 0.0 FTE<br />
Don Berry [0%]<br />
NICS 0.4 FTE<br />
Christian Halloy[20%]<br />
Kwai Wong[20%]<br />
AUS.ASEOT 10.2.3 O Advanced Support EOT<br />
AUS.ASEOT 10.2.3.2 O Advanced Support EOT PY5 Aug-09 Jul-10<br />
Total 5.85<br />
NCSA 0.25 FTE<br />
Jay Alameda [10%]<br />
Sudhakar Pamidighantam [0%]<br />
David Bock [0%]<br />
Mark Vanmoer [0%]<br />
Mark Straka[15%]<br />
Purdue 0.2 FTE<br />
TBD[20%]<br />
PSC 0.7 FTE<br />
Nick Nystrom [10%]<br />
Philip Blood [10%]<br />
Marcela Madrid[30%]<br />
Shawn Brown[10%]<br />
Mahin Mahmoodi [10%]<br />
NICS 0.38 FTE<br />
Lonnie Crosby[38%]<br />
SDSC 0.4 FTE<br />
Yifeng Cui[10%]<br />
Dmitry Pekurovsky[15%]<br />
TBD/doc[15%]<br />
TACC 0.85 FTE<br />
Lars Koesterke<br />
Kent Milfeld [X%]<br />
Bill Barth [X%]<br />
TACC total 0.85<br />
LSU 0.05 FTE
Adam Jundt [5%]<br />
IU 0.0 FTE<br />
Don Berry [0%]<br />
Total 2.88
Work Package Dependencies Planned<br />
Resources<br />
(per GIG budget 7/7/08)<br />
6/30/10 AD<br />
Status<br />
QSR 2010Q2 Report Values<br />
6/30/10 AD Notes<br />
3/31/10 AD<br />
Status<br />
QSR 2010Q1 Report Values<br />
3/31/10 AD Notes<br />
Project-ID<br />
WBS<br />
O P M<br />
Description<br />
Task<br />
Cross<br />
Area<br />
Start<br />
Date<br />
End<br />
Date<br />
Name [%FTE]<br />
UFP 9 Area: User-facing and Core Services Projects<br />
UFP 9<br />
Objective: User-facing Projects:<br />
Maintenance and Development<br />
May-07 Jul-10<br />
UFP.Coord 9.0 O UFP Area Coordination Aug-07 Jul-10<br />
UFP.Coord 9.0.3 O UFC Area Coordination PY5 Aug-09 Jul-10 SDSC David Hart [93%] Ongoing Ongoing<br />
UFP.Coord 9.0.3.2 M Quarterly Reporting (9/30, 12/31, 3/31, 6/30)<br />
UFP.Coord<br />
9.0.3.3<br />
Lead UFC-RP working group for coordination<br />
O of RPs with UFC<br />
UFP.Acct<br />
NCSA Steve Quinn[20%],<br />
Michael Shapiro[20%];<br />
Ongoing<br />
Ongoing<br />
9.6 O Accounting Operations PY5 Aug-09 Jul-10<br />
SDSC Leo Carson[44%],<br />
Tiffany Duffield[17%], Henry<br />
Jaime[7%]<br />
UFP.Acct 9.6.1 O TGCDB and AMIE PY5 Aug-09 Jul-10<br />
UFP.Acct<br />
Ongoing operations and maintenance of the<br />
Ongoing<br />
Ongoing<br />
9.6.1.1<br />
central TGCDB-AMIE environment and<br />
O servers Aug-09 Jul-10<br />
Leo Carson, H. Jaime (SDSC)<br />
(0.36 FTE)<br />
UFP.Acct<br />
Update TGCDB tables and code to support<br />
Ongoing<br />
Ongoing<br />
9.6.1.2<br />
evolving TG requirements, including addition<br />
of new RP resources into TGCDB; changes to<br />
O existing resources. Aug-09 Jul-10<br />
Michael Shapiro (NCSA) (0.10<br />
FTE), Steve Quinn (NCSA)<br />
(0.10 FTE)<br />
UFP.Acct<br />
Michael Shapiro (NCSA) (0.10 Ongoing<br />
Ongoing<br />
9.6.1.3<br />
Update AMIE tables and code to support<br />
O evolving TG requirements Aug-09 Jul-10<br />
FTE), Steve Quinn (NCSA)<br />
(0.10 FTE)<br />
UFP.Acct 9.6.2 O tgusage and gx-map PY5<br />
UFP.Acct<br />
Maintain and perform critical fixes to tgusage<br />
utility as directed by trouble tickets and in<br />
Tiffany Duffield (SDSC)<br />
[replacing former staff] (0.17<br />
Ongoing<br />
9.6.2.1 O support of evolving UFC needs. Aug-09 Jul-10 FTE)<br />
UFP.Acct<br />
Maintain and perform critical fixes to gx-map<br />
Ongoing<br />
utility as directed by trouble tickets and in<br />
Leo Carson (SDSC) (0.15<br />
9.6.2.2 O support of evolving UFC needs Aug-09 Jul-10 FTE)<br />
UFP.Acct 9.6.3 O Metrics PY5<br />
UFP.Acct<br />
Maintain web-based TGU query and reporting<br />
Ongoing<br />
system, in current form and later as integrated<br />
9.6.3.1 O part of TGUP. Aug-09 Jul-10<br />
UFP.Acct<br />
Metrics reporting for QSRs, including updated<br />
Ongoing<br />
metrics as capabilities supported in TGCDB<br />
9.6.3.2 O and POPS (9/30, 12/31, 3/31, 6/30) Aug-09 Jul-10<br />
UFP.UDoc 9.1 R TG Documentation<br />
UFP.UDoc 9.1.1 R TG Docs PY4 Aug-08 Jun-09<br />
Fariba Fana[100%],<br />
Diana Diehl[25%]<br />
UFP.UDoc 9.1.1.3<br />
TG Resource Catalog integrated with MDS +<br />
M<br />
enhancements<br />
Core2 Aug-08 Jul-09 Initiated<br />
UFP.UDoc 9.1.1.4 M PI User Services Log integrated with TGCDB Core2 Aug-08 Jun-09 In progress<br />
UFP.UP 9.3 User Portal<br />
UFP.UP 9.3.2 R TG User Portal PY4<br />
Enhanced user views, broken down by field of<br />
Aug-08 Jun-09<br />
John Boisseau[8%],<br />
Maytal Dahan[18%],<br />
Patrick Hurley[35%],<br />
Praveen Nuthalapati[45%],<br />
Steve Mock[25%],<br />
TACC NOS[15%]<br />
science, across the user portal. Users will be<br />
UFP.UP 9.3.2.3 M able to filter systems, software, gateways, data<br />
collections, training, news, information<br />
services, etc. by scientific domain (e.g.<br />
biology, chemistry, etc.)<br />
Aug-08 Mar-09 Cancelled<br />
Expanded personalization and collaboration<br />
features across the user portal, this includes<br />
UFP.UP 9.3.2.4 M allocation management, customized resource<br />
Aug-08 Jun-09 In progress<br />
listing, collaboration of proposals, file space,<br />
etc<br />
UFP.UR 9.7 O User Requests<br />
UFP.UR 9.7.1 O User Requests PY5 Aug-09 Jul-10 NCSA V. Halberstadt[100%]<br />
Student employee identified at SDSC to<br />
pursue this as summer activity.<br />
Being actively tested by services-wg.<br />
Troubleshooting a few Liferay issues.<br />
Effort consumed by Liferay<br />
troubleshooting. Focusing on other<br />
priorities.<br />
File sharing capability added to file<br />
manager portlet in TGUP and TG Mobile.<br />
Friendly user mode in Q2. Released early<br />
Q3. Ticket system interface deployed in<br />
TGUP<br />
Ongoing<br />
Ongoing<br />
Ongoing<br />
Ongoing<br />
Delayed<br />
In progress<br />
Delayed<br />
In progress<br />
Due to ongoing challenges with Liferay<br />
learning curve<br />
Completion delayed due to Liferay<br />
learning curve and operational<br />
challenges<br />
Will be revisited when Liferay situation<br />
under control<br />
Focusing on New User process first.
UFP.UR<br />
9.7.1.1<br />
Create/establish user accounts in TGCDB,<br />
O mail new user packets, collect RP site logins. Aug-09 Jul-10<br />
UFP.UR<br />
Support PI requests for removal of users from<br />
9.7.1.2 O allocations Aug-09 Jul-10<br />
UFP.UR<br />
Vet/review user information for new PIs and co-<br />
9.7.1.3 O PIs Aug-09 Jul-10<br />
UFP.UR<br />
Serve as first point of contact for general user<br />
9.7.1.4 O inquiries re: UFC procedures Aug-09 Jul-10<br />
UFP.AP 9.8 O Allocations Process<br />
UFP.AP<br />
O Allocations Process PY4<br />
UFP.AP<br />
D POPS interaction with RDR<br />
UFP.AP<br />
** This process may change in<br />
late PY4, early PY5<br />
This process will continue in<br />
PY5<br />
TACC Kent Milfeld[20%],<br />
Marg Murray[20%], Valerie<br />
Alvarez[25%]<br />
9.8.1 O Allocations Process PY5<br />
UFP.AP<br />
Quarterly TRAC Meetings (9/09, 12/09, 3/10,<br />
Ongoing<br />
Ongoing<br />
9.8.1.1 M 6/10) Aug-09 Jul-10<br />
UFP.AP<br />
Ongoing<br />
Ongoing<br />
Processing of intermittent TRAC submissions<br />
9.8.1.2 O (supplements, justifications, etc) Aug-09 Jul-10<br />
UFP.AP<br />
Ongoing review and processing of Startup and<br />
Ongoing<br />
Ongoing<br />
9.8.1.3 O Education requests Aug-09 Jul-10<br />
UFP.AP<br />
Maintenance and updates to Allocations policy<br />
Ongoing<br />
Ongoing<br />
9.8.1.4 O and procedures documentation Aug-09 Jul-10<br />
UFP.AP 9.8.1.5 O Recruiting and supporting TRAC members Ongoing Ongoing<br />
UFP.POP 9.9 O Resource Requests (POPS)<br />
UFP.POP<br />
NCSA Steve Quinn[10%],<br />
9.9.1 O Resource Requests (POPS) PY5<br />
Ester Soriano[20%]<br />
UFP.POP<br />
Support for Quarterly TRAC meetings (9/09,<br />
Ongoing<br />
9.9.1.1 M 12/09, 3/10, 6/10) Aug-09 Jul-10<br />
UFP.POP<br />
Operate, maintain and update POPS system<br />
Ongoing<br />
9.9.1.2 O in support of allocations process Aug-09 Jul-10<br />
UFP.IP 9.10 O Information Presentation/Web Presence<br />
UFP.IP<br />
SDSC Michael Dwyer[10%];<br />
Information Presentation/Web Presence<br />
TACC Patrick Hurley[25%];<br />
9.10.1 O PY5 Aug-09 Jul-10 UC Tim Dudek[50%]<br />
UFP.IP<br />
Maintenance and updates for deployed portal<br />
Ongoing<br />
9.10.1.1 O capabilities (SSO, etc.) Aug-09 Jul-10 P. Hurley (TACC)<br />
UFP.IP<br />
Support for RPs entering information into<br />
Ongoing<br />
9.10.1.2 O catalogs, monitors, and news services. Aug-09 Jul-10 M. Dwyer (SDSC) (0.05 FTE)<br />
UFP.IP<br />
Ongoing maintenance, updates, and<br />
Ongoing<br />
operations for existing catalogs: Compute and<br />
9.10.1.3 O data resources, CTSS, Gateways, etc. Aug-09 Jul-10 M. Dwyer (SDSC) (0.05 FTE)<br />
UFP.IP<br />
UFP.IP<br />
UFP.IP<br />
UFP.IP<br />
Maintenance and updates to TeraGrid's Batch<br />
Queue Prediction Service, including addition<br />
9.10.1.4<br />
of new resources, user support and bug fixes,<br />
O by QBETS team at UCSB. Aug-09 Jul-10<br />
G. Obertelli (UCSB) (0.50<br />
FTE)<br />
Provide statistics and metrics for QSR to<br />
assist in measuring success for TG efforts that<br />
appear in web/portal/KB behavior (9/30,<br />
9.10.1.5 O 12/31, 3/31, 6/30) Aug-09 Jul-10 T. Dudek (ANL) (0.25 FTE)<br />
Administration of WebSphere servers and<br />
9.10.1.6 O infrastructure Aug-09 Jul-10 Subcontract<br />
9.10.1.7 O<br />
Administration of TeraGrid wiki for<br />
problems/spam and fix creation of secure<br />
areas/user access etc. Aug-09 Jul-10 T. Dudek (ANL) (0.25 FTE)<br />
UFP.QA 9.11 O User Information Quality Assurance<br />
UFP.QA<br />
UFP.QA<br />
9.11.1 O User Information Quality Assurance PY5 Aug-09 Jul-10<br />
Run Web Link Validator weekly and correct<br />
9.11.1.1 O broken links Aug-09 Jul-10<br />
IU Andy Orahood[10%], Julie<br />
Thatcher[25%], Paul<br />
Brown[50%], Mary<br />
Hrovat[20%], Jonathon<br />
Bolte[25%]; SDSC Diana<br />
Diehl[55%]; UC Tim<br />
Dudek[50%]<br />
D. Diehl (SDSC) (0.55 across<br />
all items here)<br />
Ongoing<br />
Done<br />
Ongoing<br />
Ongoing<br />
Ongoing<br />
Ongoing<br />
Cancelled/M<br />
oved<br />
Ongoing<br />
Ongoing<br />
User News application ported from<br />
SDSC's Oracle to TeraGrid PostgreSQL.<br />
Ongoing<br />
Done<br />
Ongoing<br />
Ongoing<br />
Delayed<br />
Ongoing<br />
Ongoing<br />
Ongoing<br />
Ongoing<br />
Ongoing<br />
Ongoing<br />
Ongoing<br />
Cancelled/M<br />
oved<br />
Ongoing<br />
Ongoing
UFP.QA<br />
Respond to user documentation correction<br />
Ongoing<br />
Ongoing<br />
requests and initiate corrections to major<br />
9.11.1.2 O errors within 1 business day. Aug-09 Jul-10 D. Diehl (SDSC)<br />
UFP.QA<br />
D. Diehl (SDSC); IU KB team; Ongoing<br />
Ongoing<br />
9.11.1.3<br />
Monitor and compile user feedback to UFC<br />
web presence and procedures Aug-09 Jul-10<br />
T. Dudek (ANL); M. Dahan<br />
(TACC)<br />
UFP.QA<br />
Respond to content update requests for<br />
Ongoing<br />
Ongoing<br />
9.11.1.4 O www.teragrid.org within 1 business day. Aug-09 Jul-10 T. Dudek (ANL) (0.25 FTE)<br />
UFP.QA<br />
Provide central coordination of TG-wide<br />
Ongoing<br />
Ongoing<br />
documentation contributions and updates<br />
submitted from across the GIG and RP sites,<br />
using capabilities within the Liferay<br />
9.11.1.5 O environment. Aug-09 Jul-10 D. Diehl (SDSC)<br />
UFP.QA<br />
Conduct ongoing series of coordinated<br />
reviews to evaluate, synchronize and update<br />
documentation, KB and other web info related<br />
Ongoing Review and release of AUS performance Delayed<br />
pages into Liferay.<br />
9.11.1.6 O to TG-wide user info areas. Aug-09 Jul-10 D. Diehl (SDSC); IU KB team<br />
UFP.QA<br />
Completion of 250 new KB documents, some<br />
Ongoing<br />
Ongoing<br />
in response to US and user inputs through the<br />
9.11.1.7 O ticket system. Aug-09 Jul-10 IU KB team<br />
UFP.QA<br />
Ongoing updates and special requests for<br />
Ongoing<br />
Ongoing<br />
web content including posting information for<br />
EOT and ER areas, and conference/workshop<br />
9.11.1.8 O sites. Aug-09 Jul-10 T. Dudek (ANL) (0.25 FTE)<br />
UFP.EUA 9.12 P Enhanced TeraGrid User Access<br />
UFP.EUA<br />
Enhanced TeraGrid User Access PY4<br />
Linking Shibboleth login to TGUP login in<br />
NOS provided prototype Shibboleth-<br />
UFP.EUA<br />
D TGUP test instance<br />
Liferay integration in April. Work<br />
In progress progressing by TGUP team.<br />
In progress<br />
Complete user account management,<br />
UFP.EUA<br />
D including end-to-end Shibboleth authentication<br />
in production In progress In progress<br />
Addition of customization and personalization<br />
UFP.EUA<br />
D capabilities to TGUP<br />
Cancelled<br />
In progress<br />
Effort consumed by Liferay<br />
troubleshooting. Focusing on other<br />
priorities.<br />
UFP.EUA<br />
Interactive capabilities for job submission in<br />
D<br />
TGUP Delayed Delayed<br />
UFP.EUA D Job reservations form in TGUP Delayed Delayed<br />
UFP.EUA<br />
UFP.EUA<br />
TGUP features combined into a workflow<br />
D including file management, job submission<br />
and remote visualization<br />
Co-scheduling proof of concept portlet<br />
D<br />
Delayed<br />
In progress<br />
File sharing service deployed in TGUP in<br />
July.<br />
Delayed<br />
MCP portlet prototyped. Staff effort<br />
redirected to other efforts during part of<br />
Q2. Populating with user data from<br />
various sources. Anticipating friendly user<br />
release in Q3<br />
In progress<br />
Catching up with content updates delayed<br />
during the Liferay content freeze.<br />
Focusing on New User process first.<br />
MCP portlet in development. Likely for Q2<br />
release.<br />
UFP.EUA<br />
NCSA Ester Soriano[10%],<br />
9.12.1 P Enhanced TeraGrid User Access PY5 Aug-09 Jul-10<br />
Steve Quinn[10%]; TACC<br />
Praveen Nuthulapati[100%],<br />
Steve Mock [50%]<br />
UFP.EUA 9.12.1.1 P Users able to create TG logins at the TGUP Aug-09 Oct-09 Ongoing Implementation plan approved Ongoing Implementation plan in progress.<br />
UFP.EUA 9.12.1.2 P Job submission interface in TGUP Nov-09 Jan-10 Delayed Delayed<br />
UFP.EUA<br />
Users can use campus credentials to<br />
Apr-10 In progress NOS team delivered prototype Liferay- In progress<br />
9.12.1.3 P authenticate to TGUP<br />
Feb-10<br />
Shibboleth integration.<br />
UFP.EUA<br />
Cancelled<br />
9.12.1.4 P<br />
Lustre-WAN access via TGUP with cross-TG<br />
data movement<br />
UFP.RAM 9.13 P Resource Authorization and Management<br />
UFP.RAM<br />
Mar-10<br />
Jul-10 Cancelled New file sharing feature in TGUP file<br />
manager portlet provides partial<br />
capability, including cross-TG data<br />
movement.<br />
NCSA Ester Soriano[60%],<br />
Steve Quinn[25%], Mike<br />
Shapiro[10%]; TACC Maytal<br />
9.13.1 P<br />
Resource Authorization and Management<br />
PY5 Aug-09 Jul-10<br />
Dahan[50%]; PSC Edward<br />
Hanna[10%], Rob Light[10%]<br />
UFP.RAM 9.13.1.1 P Improved user authorization process Aug-09 Oct-09 Ongoing Ongoing<br />
Full completion will not be possible due to<br />
Lustre-side issues. TGUP file manager<br />
now supports Lustre-WAN access via<br />
systems where Lustre-WAN is mounted.
UFP.RAM<br />
UFP.RAM<br />
9.13.1.2 P<br />
Expanding user capabilities for managing<br />
allocations and usage<br />
Nov-09<br />
9.13.1.3 P POPS integrated with TGUP Feb-10<br />
UFP.RAM<br />
Resource Advisor made available to users<br />
9.13.1.4 P within UFP systems<br />
Mar-10<br />
UFP.IIS 9.14 P RP Integration and Information Sharing<br />
UFP.IIS<br />
RP Integration and Information Sharing<br />
P PY4<br />
UFP.IIS<br />
RDR system moved into production<br />
environment<br />
D<br />
UFP.IIS<br />
Storage usage record transport and<br />
D processing mechanisms implemented<br />
UFP.IIS<br />
Multi-audience TGCDB and POPS reporting<br />
capabilities implemented within TGUP as UFC<br />
D development permits<br />
UFP.IIS<br />
Jan-10 Ongoing New resource request interface deployed Ongoing<br />
in POPS. Work beginning on revised<br />
POPS entry screens.<br />
Apr-10 Cancelled Cancelled Finally admitting this is unlikely to happen,<br />
given current staffing levels and other<br />
demands.<br />
Jul-10 On hold On hold Initial prototype may be merged with<br />
existing Resource catalog interface<br />
Delayed Student effort to integrate RDR and<br />
Resource Catalog identified at SDSC for<br />
Q3.<br />
Delayed Due to continued Liferay Challenges<br />
Completed Deployed in production. Initiated Will be completed in Q2<br />
In progress<br />
Allocations query in progress. SU Map In progress<br />
completed. Working to identify production<br />
location for both.<br />
NCSA Ester Soriano[10%],<br />
Mike Shapiro[10%]; PSC<br />
9.14.1 P<br />
RP Integration and Information Sharing<br />
PY5 Aug-09 Jul-10<br />
Edward Hanna[25%], Rob<br />
Light[25%]<br />
UFP.IIS 9.14.1.1 P RDR-based descriptions in POPS Aug-09 Oct-09 Delayed Delayed<br />
UFP.IIS 9.14.1.2 P RDR-based descriptions in TGCDB Nov-09 Jan-10 Delayed Delayed<br />
UFP.IIS<br />
Apr-10 Delayed Likely to pick up in Q3 as student works Delayed<br />
9.14.1.3 P Hardened, updated RDR Feb-10<br />
to integrate Resource Catalog.<br />
UFP.IIS 9.14.1.4 P RDR includes non-compute resources Mar-10 Jul-10 Completed Completed<br />
UFP.UIP 9.15 P User Information Presentation<br />
UFP.UIP P User Information Presentation PY4<br />
UFP.UIP<br />
UFP.UIP<br />
UFP.UIP<br />
UFP.UIP<br />
UFP.UIP<br />
UFP.UIP<br />
UFP.UIP<br />
UFP.UIP<br />
UFP.UIP<br />
UFP<br />
Post-migration alignment of portal and Web<br />
D site look, feel, and organization.<br />
With WebSphere environment, developing<br />
scalable documentation model for TeraGrid<br />
D<br />
Migration to RDR/MDS as definitive source of<br />
D Resource Catalog info<br />
Enhanced TGUP system monitor with<br />
customizable list of resources and expanded<br />
D resource information<br />
Ongoing Will be an ongoing effort for duration of<br />
TeraGrid.<br />
Delayed<br />
In progress External editor/contributor added for Delayed<br />
TG10 content. Working to expand<br />
contributor pool, but proceeding slowly<br />
due to Liferay issues<br />
In progress With summer student at SDSC Delayed<br />
9.15.1 P User Information Presentation PY5 Aug-09 Jul-10<br />
PSC Edward Hanna[20%],<br />
Rob Light[20%]; SDSC<br />
Michael Dwyer[65%]; TACC<br />
Rion Dooley[75%]<br />
Integration of TeraGrid user portal info<br />
Oct-09 Delayed Delayed<br />
9.15.1.1 P services with RDR<br />
Aug-09<br />
Integration of TG catalogs and User News with<br />
Jan-10 Delayed Delayed<br />
9.15.1.2 P RDR<br />
Nov-09<br />
Enhanced system monitors for HPC, storage,<br />
Apr-10 Delayed Delayed<br />
9.15.1.3 P and vis resources<br />
Feb-10<br />
Enhanced integration of user news in TGUP<br />
Jul-10<br />
9.15.1.4 P and user notification<br />
Mar-10<br />
END<br />
Delayed<br />
Delayed<br />
"SU Map" portlet in prototype mode.<br />
Due to Liferay issues
TeraGrid DV PY5 Project Plan (as of 1/11/09)<br />
Work Package Dependencies Planned<br />
Resources<br />
Project-ID<br />
WBS<br />
O P M X<br />
Description<br />
Task<br />
Cross<br />
Area<br />
Start<br />
Date<br />
End<br />
Date<br />
Name [%FTE]<br />
DV 3.0<br />
Objective 3.0 Operations:<br />
maintain/sustain current capabilities<br />
DV.Coord 3.0.1 O DAV Area Coordination<br />
DV.Coord 3.0.1.3 O DV Area Coordination PY5 Aug-09 Jul-10<br />
TACC Kelly Gaither [50%],<br />
Chris Jordan[25%]<br />
ongoing<br />
DV.Coord 3.0.1.3.1<br />
Quarterly Reporting (9/30, 12/31, 3/31,<br />
M Aug-09 Jul-10 ongoing<br />
DV.DDM.RP<br />
DV.DDM.RP.NCSA 3.0.1.4 O<br />
DV.DDM.RP.IU 3.0.1.5 O<br />
DV.DDM.RP.LONI 3.0.1.6 O<br />
DV.DDM.RP.NICS 3.0.1.7 O<br />
DV.DDM.RP.ORNL 3.0.1.8 O<br />
DV.DDM.RP.PSC 3.0.1.9 O<br />
DV.DDM.RP.PU 3.0.1.10 O<br />
DV.DDM.RP.SDSC 3.0.1.11 O<br />
DV.DDM.RP.TACC 3.0.1.12 O<br />
O<br />
6/30)<br />
Aggregate RP Distributed Data<br />
Management Ops<br />
NCSA RP Distributed Data Management<br />
Operations<br />
LONI RP Distributed Data Management<br />
Operations<br />
NICS RP Distributed Data Management<br />
Operations<br />
PSC RP Distributed Data Management<br />
Operations<br />
SDSC RP Distributed Data Management<br />
Operations<br />
TACC RP Distributed Data Management<br />
Operations<br />
Apr-10<br />
Apr-10<br />
Mar-11<br />
Mar-11<br />
Alt [40%], B.Butler [75%],<br />
M.Butler [65%], Cai [40%],<br />
Chen [100%], Cribbs [50%],<br />
Glasgow [90%], Kerner [90%],<br />
Loftus [45%] Long [20%]<br />
ongoing<br />
Apr-10 Mar-11 Martinez [100%] ongoing<br />
Apr-10<br />
Apr-10<br />
Mar-11<br />
Mar-11<br />
Kovatch [50%], Hazlewood<br />
[50%], TBD [100%], TBD<br />
[100%], TBD [50%]<br />
Kar[40%], Litzinger[50%],<br />
Nowoczynski[60%], Sorge[15%],<br />
Stone[80%], Yanovich[80%]<br />
ongoing<br />
ongoing<br />
Bennett [75%], Cai [95%], ongoing<br />
Apr-10 Mar-11<br />
Chen, S. [40%], Chen, L.<br />
[80%], Dinh [50%], Hom [75%],<br />
Rivers [65%]<br />
Apr-10 Mar-11 Jones[25%] ongoing<br />
DV.DDM.RP.UCANL 3.0.1.13 O<br />
DV.DC.RP O Aggregate RP Data Collections Ops Apr-10 Mar-11<br />
DV.DC.RP.NCSA 3.0.1.14 O<br />
DV.DC.RP.IU 3.0.1.15 O<br />
DV.DC.RP.LONI 3.0.1.16 O<br />
DV.DC.RP.NICS 3.0.1.17 O NICS RP Data Collections Operations Apr-10 Mar-11 Hazlewood [50%], TBD [70%] ongoing<br />
DV.DC.RP.ORNL 3.0.1.18 O<br />
DV.DC.RP.PSC 3.0.1.19 O<br />
DV.DC.RP.PU 3.0.1.20 O PU RP Data Collections Operations Apr-10 Mar-11<br />
Zhao[50%], 2 half time Grad<br />
Students [100%]<br />
ongoing<br />
DV.DC.RP.SDSC 3.0.1.21 O SDSC RP Data Collections Operations Apr-10 Mar-11 Nunes [10%], Wong [25%] ongoing<br />
PY5Q3 Status Update
DV.DC.RP.TACC 3.0.1.22 O TACC RP Data Collections Operations Apr-10 Mar-11 Urban[50%] ongoing<br />
DV.DC.RP.UCANL 3.0.1.23 O<br />
DV.Viz.RP O Aggregate Visualization Ops Apr-10 Mar-11<br />
DV.Viz.RP.NCSA 3.0.1.24 O<br />
DV.Viz.RP.IU 3.0.1.25 O<br />
DV.Viz.RP.LONI 3.0.1.26 O<br />
DV.Viz.RP.NICS 3.0.1.27 O NICS RP Visualization Operations Apr-10 Mar-11 TBD [60%], TBD [60%] ongoing<br />
DV.Viz.RP.ORNL 3.0.1.28 O<br />
DV.Viz.RP.PSC 3.0.1.29 O PSC RP Visualization Operations Apr-10 Mar-11 Foss[90%], new[10%] ongoing<br />
DV.Viz.RP.PU 3.0.1.30 O PU RP Visualization Operations Apr-10 Mar-11<br />
New Person [100%], 2 half time ongoing<br />
grad students [100%]<br />
DV.Viz.RP.SDSC 3.0.1.31 O SDSC RP Visualization Operations Apr-10 Mar-11 Chourasia [25%], TBD [150%] ongoing<br />
Gaither[10%], Burns[50%], ongoing<br />
DV.Viz.RP.TACC 3.0.1.32 O TACC RP Visualization Operations Apr-10 Mar-11<br />
GregSJohnson[30%],<br />
Schneider[50%],<br />
GregPJohnson[50%]<br />
DV.Viz.RP.UCANL 3.0.1.33 O UCANL RP Visualization Operations Apr-10 Mar-11 Insley[11%] ongoing<br />
DV.DPI 3.2<br />
Objective 3.2 Projects: Data Movement<br />
Performance<br />
DV.DPI 3.2.5 P Data Movement Performance PY5 Aug-09 Jul-10<br />
PSC Kathy Benninger[20%],<br />
Bob Budden[50%], Derek<br />
Simmel[60%]<br />
DV.DPI 3.2.5.2<br />
Production tools for scheduled data<br />
P Apr-10 Mar-11 TBD ONGOING<br />
DV 3.3<br />
movement<br />
Objective 3.3 Projects: Global Wide<br />
Area File Systems<br />
DV.GFS<br />
DV.GFS<br />
3.3.3<br />
3.3.3.1<br />
P Global Wide Area File Systems PY5<br />
P Lustre-WAN in Production<br />
Aug-09<br />
Apr-10<br />
Jul-10<br />
Mar-11<br />
NICS Phil Andrews[5%],<br />
Patricia Kovatch[15%],<br />
Nathaniel Mendoza[15%],<br />
Victor Hazlewood[15%]; PSC<br />
Josephine Palencia[50%];<br />
SDSC Jeff Bennett[25%],<br />
Thomas Guptill[25%]; TACC<br />
GFS Engineer[20%]<br />
ONGOING<br />
DV.GFS 3.3.3.6 P<br />
DV 3.4<br />
Evaluation of pNFS readiness for<br />
production<br />
Objective 3.4 Projects: TeraGrid Wide<br />
Data Architect Design<br />
NICS Phil Andrews[10%],<br />
DV.DWG 3.4.4 P Data Architecture PY5 Aug-09 Jul-10<br />
Bruce Loftis[20%]; PSC J Ray<br />
Scott[25%]; TACC Chris<br />
Jordan[50%]<br />
DV.DWG 3.4.4.3 P Physical instantiation of the design 3.4.3.3 TBD Apr-10 Mar-11 ONGOING<br />
DV 3.5 Objective 3.5 Projects: Visualization<br />
DV.Viz 3.5.3 P Visualization PY5 Aug-09 Jul-10<br />
DV<br />
END<br />
Apr-10<br />
Jul-10<br />
ANL Joe Insley [50%]; NCAR<br />
Alan Norton[15%], Viz<br />
TBD[50%]<br />
DELAYED(SOFTWARE<br />
AVAILABILITY)
TeraGrid NOS Project Plan<br />
Work Package Dependencies Planned<br />
Resources<br />
Project-ID<br />
WBS<br />
O P M X<br />
Description<br />
Task<br />
Cross<br />
Area<br />
Start<br />
Date<br />
End<br />
Date<br />
Name [%FTE]<br />
Status<br />
NOS 4.0<br />
Objective 4.0 Operations:<br />
maintain/sustain current capabilities<br />
NOS.Coord 4.0.1 O NOS Area Coordination<br />
NOS.Coord 4.0.1.3 O NOS Area Coordination PY5 Aug-09 Jul-10 UCANL Jeff Koerner [25%] ongoing<br />
NOS.Coord 4.0.1.3.1<br />
Quarterly Reporting (9/30, 12/31, 3/31,<br />
M<br />
6/30)<br />
Aug-09 Jul-10 ongoing<br />
NOS.Net 4.0.2 Networking<br />
NOS.Net 4.0.2.3 O Networking Lead PY5 Apr-10 Mar-11<br />
SDSC Tom Hutton [10%];<br />
ANL Linda Winkler [50%]<br />
ongoing<br />
NOS.Net.RP O Aggregate RP Networking Ops Apr-10 Mar-11<br />
NOS.NET.RP.NCSA 4.0.2.4 O NCSA RP Networking Operations Apr-10 Mar-11 Wefel [10%], Shoop [10%] ongoing<br />
NOS.NET.RP.IU 4.0.2.5 Apr-10 Mar-11 ongoing<br />
NOS.NET.RP.LONI 4.0.2.6 O LONI RP Networking Operations Apr-10 Mar-11 ongoing<br />
NOS.NET.RP.NICS 4.0.2.7 O NICS RP Networking Operations Apr-10 Mar-11<br />
Mendoza [50%], TBD [100%], ongoing<br />
Baer [50%]<br />
NOS.NET.RP.ORNL 4.0.2.8 ongoing<br />
Adams[25%],<br />
ongoing<br />
NOS.NET.RP.PSC 4.0.2.9 O PSC RP Networking Operations Apr-10 Mar-11<br />
Benninger[40%],<br />
Huntoon[40%], Lambert[50%],<br />
Lappa[25%], Rapier[25%]<br />
NOS.NET.RP.PU 4.0.2.10 O Purdue RP Networking Operations Apr-10 Mar-11 Lewis [10%] ongoing<br />
Carlson [45%], Dombrowski<br />
NOS.NET.RP.SDSC 4.0.2.11 O SDSC RP Networking Operations Apr-10 Mar-11 [65%], Hutton [20%], Valente<br />
ongoing<br />
[60%]<br />
NOS.NET.RP.TACC 4.0.2.12 O TACC RP Networking Operations Apr-10 Mar-11 Jones [25%] ongoing<br />
NOS.NET.RP.UCANL 4.0.2.13 O UCANL RP Networking Operations Apr-10 Mar-11 Hedden [13%] ongoing<br />
NOS.Sec 4.0.4 O Security Services Apr-10 Mar-11<br />
NOS.Sec 4.0.4.1 O Operational Security Team PY5 Aug-09 Jul-10 PSC James Marsteller [50%] ongoing<br />
NOS.Sec 4.0.4.1.1 O Incident Response activities Aug-09 Jul-10 ongoing<br />
NOS.Sec 4.0.4.1.2<br />
Coordinating operational security issues<br />
O<br />
across the project<br />
Aug-09 Jul-10 ongoing<br />
NOS.Sec 4.0.4.1.3 O Participate in and help lead TAGPMA Aug-09 Jul-10 ongoing<br />
NOS.Sec.RP O Aggregate RP Security Ops Apr-10 Mar-11 ongoing<br />
NOS.Sec.RP.NCSA 4.0.4.2 O NCSA RP Security Operations Apr-10 Mar-11<br />
Barlow [30%], Brooks [15%],<br />
Sharma [60%]<br />
ongoing<br />
NOS.Sec.RP.IU 4.0.4.3 IU RP Security Operations Apr-10 Mar-11 Cornet [75%] ongoing<br />
NOS.Sec.RP.LONI 4.0.4.4 ongoing<br />
NOS.Sec.RP.NICS 4.0.4.5 ongoing<br />
NOS.Sec.RP.ORNL 4.0.4.6 ongoing
Bennett[30%],<br />
ongoing<br />
NOS.Sec.RP.PSC 4.0.4.7 O PSC RP Security Operations Apr-10 Mar-11<br />
Marsteller[40%],<br />
Shelmire[60%], Simmel[10%],<br />
new[10%]<br />
NOS.Sec.RP.PU 4.0.4.8 ongoing<br />
NOS.Sec.RP.SDSC 4.0.4.9 O SDSC RP Security Operations Apr-10 Mar-11 Sakai [30%], Bennett [50%] ongoing<br />
NOS.Sec.RP.TACC 4.0.4.10 O TACC RP Security Operations Apr-10 Mar-11 Murray [25%] ongoing<br />
NOS.Sec.RP.UCANL 4.0.4.11 O UCANL RP Security Operations Apr-10 Mar-11 Leggett [13%] ongoing<br />
NOS.Sec 4.0.4.3<br />
Security Services (kerb, myproxy, CA)<br />
John Quinn [50%]<br />
O Aug-09 Jul-10<br />
PY5<br />
ongoing<br />
Quarterly Reporting (9/30, 12/31, 3/31,<br />
NOS.Sec 4.0.4.3.1 M<br />
6/30)<br />
Aug-09 Jul-10 ongoing<br />
NOS.TOC 4.0.5 O TOC Services Aug-09 Jul-10 ongoing<br />
NOS.TOC 4.0.5.3 O TOC Services PY5 Aug-09 Jul-10 NCSA Ops Staff [200%] ongoing<br />
NOS.TOC 4.0.5.3.1<br />
Quarterly Reporting (9/30, 12/31, 3/31,<br />
M<br />
6/30)<br />
Aug-09 Jul-10 ongoing<br />
NOS.TOC.RP O Aggregate RP Helpdesk Ops Apr-10 Mar-11 ongoing<br />
Cagle [50%], Nickens [20%], ongoing<br />
NOS.TOC.RP.NCSA 4.0.5.4 NCSA RP Helpdesk Operations Apr-10 Mar-11<br />
Pingleton [45%], Roney [75%],<br />
Sellers [75%], Ten Have<br />
[45%], Wells [60%], Wilson<br />
[55%]<br />
NOS.TOC.RP.IU 4.0.5.5 ongoing<br />
NOS.TOC.RP.LONI 4.0.5.6 ongoing<br />
NOS.TOC.RP.NICS 4.0.5.7 ongoing<br />
NOS.TOC.RP.ORNL 4.0.5.8 ongoing<br />
Hackworth[10%], Maiden[5%], ongoing<br />
NOS.TOC.RP.PSC 4.0.5.9 O PSC RP Helpdesk Operations Apr-10 Mar-11 Raymond[2%],<br />
Sanielevici[3%]<br />
NOS.TOC.RP.PU 4.0.5.10 O Purdue RP Helpdesk Operations Apr-10 Mar-11 New person [50%] ongoing<br />
NOS.TOC.RP.SDSC 4.0.5.11 ongoing<br />
NOS.TOC.RP.TACC 4.0.5.12 ongoing<br />
NOS.TOC.RP.UCANL 4.0.5.13 ongoing<br />
NOS.HPCOps 4.0.8 O HPC Operations ongoing<br />
NOS.HPCOps.RP O Aggregate RP HPC Operations Apr-10 Mar-11 ongoing<br />
Bouvet [85%], Fernsler [50%], ongoing<br />
Hoyenga [80%], Khin [80%],<br />
NOS.HPCOps.RP.NCSA 4.0.8.1 O NCSA RP HPC Operations Apr-10 Mar-11<br />
Lapine [75%], Marcusiu<br />
[65%], Parga [75%],<br />
Pflugmacher [10%], J.Quinn<br />
NOS.HPCOps.RP.IU 4.0.8.2 O IU RP HPC Operations Apr-10 Mar-11<br />
[65%] Scharf [50%]<br />
Lowe [100%], Moore[100%],<br />
Miller[100%]<br />
ongoing<br />
Martinez [100%], Leche<br />
NOS.HPCOps.RP.LONI 4.0.8.3 O LONI RP HPC Operations Apr-10 Mar-11 [100%], Giaime [100%],<br />
ongoing<br />
Scheinine [50%]<br />
Walsh [100%], Baer [50%],<br />
NOS.HPCOps.RP.NICS 4.0.8.4 O NICS RP HPC Operations Apr-10 Mar-11 Kovatch [50%], Jones [50%],<br />
ongoing<br />
Ezell [50%], TBD [300%]<br />
NOS.HPCOps.RP.ORNL 4.0.8.5 O ongoing
Albert[85%], Bennett[45%], ongoing<br />
Budden[90%], Flaus[20%],<br />
NOS.HPCOps.RP.PSC 4.0.8.6 O PSC RP HPC Operations Apr-10 Mar-11<br />
Gill[90%], Graham[25%],<br />
Johanson[80%], Kar[40%],<br />
Kochmar[90%],<br />
Palencia[20%] Perrone[50%]<br />
Scott[50%],<br />
ongoing<br />
Sommerfield[80%],<br />
Sorge[75%], Sullivan[50%],<br />
NOS.HPCOps.RP.PU<br />
4.0.8.6<br />
4.0.8.7<br />
O PSC RP HPC Operations Continued<br />
O Purdue RP HPC Operations<br />
Apr-10<br />
Apr-10<br />
Mar-11<br />
Mar-11<br />
Vargo[90%], Vizino[90%],<br />
Webb[50%], Wozniak[80%],<br />
Cubbison[30%], Lappa[25%],<br />
new[55%]<br />
P. Smith [10%], Braun [50%] ongoing<br />
Conway [75%], Diegel [50%], ongoing<br />
NOS.HPCOps.RP.SDSC 4.0.8.8 O SDSC RP HPC Operations Apr-10 Mar-11<br />
Fillez [30%], Furman [60%],<br />
Guptill [20%], Hocks [50%],<br />
Kamrath [65%], Khem [60%],<br />
McNew[25%], Silva [25%],<br />
Smallen [5%], Yoshimoto<br />
[50%]<br />
Timm [50%], Carver [50%],<br />
NOS.HPCOps.RP.TACC 4.0.8.9 O TACC RP HPC Operations Apr-10 Mar-11 Anderson [25%], Walling<br />
ongoing<br />
[25%]<br />
NOS.HPCOps.RP.UCANL 4.0.8.10 O UCANL RP HPC Operations Apr-10 Mar-11<br />
Leggett [16%], Insley [18%],<br />
Hedden [45%], Olson [58%]<br />
ongoing<br />
NOS | 4.2 | P | Objective 4.2 Projects: Expanding Secure TG Access | ongoing
NOS.Acc | 4.2.2 | P | Expanding Secure TG Access PY4 [deps: To WGs, SGW] | Aug-08–Jul-09 | Jim Basney [20%], Terry Fleury [50%] | ongoing
NOS.Acc | 4.2.2.2 | M | Automated DN distribution compliance | Aug-08–Dec-09 | status update pending
NOS.Acc | 4.2.2.4 | M | Requirements/Design for cross-grid Collab. | Aug-08–Jul-10 | ongoing
NOS.Acc | 4.2.3 | P | Expanding Secure TG Access PY5 | Aug-09–Jul-10
NOS.Acc | 4.2.3.1 | P | Ubiquitous Adoption of Gateway accounting | Aug-09–Jul-10 | NCSA Jim Basney [25%], Jon Siwek [50%], Terry Fleury [25%] | pending GRAM5 deployment at all sites
NOS.Acc | 4.2.3.2 | P | INCA tests for Gateway accounting | Aug-09–Jul-10 | started
NOS.Acc | 4.2.3.3 | P | TG/OSG authorization interoperability functional | Aug-09–Jul-10 | not started - depends on 4.2.2.4 above
NOS.Acc | 4.2.3.4 | P | Shibboleth access for campus to TGUP | Aug-09–Jul-10 | done
NOS | 4.3 | | Objective 4.3 Projects: Operational Instrumentation (device tracking)
NOS.Inst | 4.3.3 | P | Operational Instrumentation PY5 | Aug-09–Jul-10 | NCSA Neil Gorsuch [45%]
NOS.Inst | 4.3.3.1 | O | Ongoing Monthly, Quarterly and Annual reporting | Aug-09–Jul-10 | ongoing
NOS.Inst | 4.3.3.3 | P | Upgrade Globus Listener to support GRAM5 | Feb-10–Jul-10 | done
NOS.Inst | 4.3.3.4 | P | Integration plan for TAIS | May-10–Jul-10 | ongoing
NOS.INCA | 4.4.3 | P | INCA Improvements PY5 | Aug-09–Jul-10 | SDSC Kate Ericson [50%], Shava Smallen [50%] | ongoing
NOS.INCA | 4.4.3.1 | P | QA/CUE views | Aug-09–Jul-10 | ongoing
NOS.INCA | 4.4.3.2 | P | Link Tickets to Inca results | Nov-09–Jan-10 | ongoing
NOS.INCA | 4.4.3.3 | P | Knowledgebase linkage | Jan-10–Jun-10 | ongoing
NOS.INCA | 4.4.3.4 | P | Integrate Inca into TGUP | Mar-10–Sep-10 | ongoing
NOS | 4.6 | | Objective 4.6 Projects: Annual Risk Assessment
NOS.Risk | 4.6.1 | P | Risk Assessment | Aug-08–Jul-09 | Jim Rome [10%]
NOS.Risk | 4.6.1.1 | M | ID assessment area and write plan | Aug-08–Sep-09 | Complete
NOS.Risk | 4.6.1.2 | M | Begin assessment of area | Aug-08–Dec-09 | started
NOS.Risk | 4.6.1.3 | M | Draft report of findings | Aug-08–Mar-10 | status update pending
NOS.Risk | 4.6.1.4 | M | Final assessment and recommendations | Aug-08–Jun-10 | status update pending
NOS.QACUE / NOS.QA | | | Objective X.X Projects: QA/CUE
NOS.QA | | | PY4 Availability Improvement | Aug-08–Jul-09 | ANL Joe Insley [17%]; IU Mike Lowe [50%]; LONI Archit Kulshrestha [25%]; NCSA Dan Lapine [25%], Luke Scharf [25%]; NICS Victor Hazlewood [50%]; PSC TBD [100%]; Purdue TBD [50%]; SDSC Thomas Guptill [50%], Jerry Greenberg [50%]; TACC David
NOS.QA | | | Organize and identify priorities and timelines for software testing cycle | Apr-10–Mar-11 | SDSC Kate Ericson [12.5%], Shava Smallen [12.5%] | Ongoing
NOS.QA | | | Identify new and existing tests | Apr-10–Mar-11 | SDSC Kate Ericson [12.5%], Shava Smallen [12.5%] | Ongoing
NOS.QA | | | Determine test baselines | Dec-08–Mar-09 | SDSC Kate Ericson [12.5%], Shava Smallen [12.5%] | Complete
NOS.QA | | | TG grid services administration guide | Aug-09–Jan-10 | SDSC Kate Ericson [12.5%], Shava Smallen [12.5%] | Complete
NOS.QA | | | PY4 Reliability Improvement | Aug-08–Dec-09
NOS.QA | | | Identify common INCA errors and classify | Aug-08–Mar-10 | SDSC Kate Ericson [12.5%], Shava Smallen [12.5%] | Ongoing
NOS.QACUE | | O | PY5 QA/CUE Leadership | Aug-09–Jul-10 | SDSC Kate Ericson [12.5%], Shava Smallen [12.5%]; PSC Shawn Brown [20%]
NOS.QACUE | | P | Common User interface roll out | Aug-10–Jul-10 | Shawn Brown [20%] | Ongoing
NOS.QA | | O | PY5 Availability Improvement | Aug-09–Jul-10
NOS.Email | | P | Maintain TeraGrid.org email services | Apr-11–Mar-11 | Neil Gorsuch [5%] | Complete
NOS END
Project-ID | WBS | Task (O/P/M) | Work Package Description [deps: Cross Area] | Planned Start Date–End Date | Resources: Name [%FTE] | QSR 2010Q2 Report Values (6/30/10 AD Status, Notes) / QSR 2010Q1 Report Values (3/31/10 AD Status, Notes) / QSR 2009Q4 Report Values (12/31/09 AD Status, Notes)
EOT 5 Area: Education, Outreach and Training
EOT | 5.0 | | Objective 5.0 Operations: maintain/sustain current capabilities
EOT.Coord | 5.0.1 | O | EOT Area Coordination
EOT.Coord | 5.0.1.7 | O | EOT Area Coordination PY5 | Aug-09–Jul-10 | Scott Lathrop [50%]
EOT.Coord | 5.0.1.7.1 | M | working group meetings; info sharing among RP EOT staff | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.Coord | 5.0.1.7.2 | M | PY5 Quarterly Reporting (9/30, 12/31, 3/31, 6/30) | Aug-09–Jul-10 | 100% / 50% / 25%
EOT.Coord | 5.0.1.8 | O | Support HPCU portal infrastructure | Aug-09–Jul-10 | Shodor contract | 100% / 75% / 50%
EOT.Coord | 5.0.1.9 | O | EOT Area Coordination Extension | Aug-10–Jul-11 | 1.5 FTE for Extension
EOT.Coord | 5.0.1.9.1 | M | AD and project coordinator activities for Scott and Elizabeth | Aug-10–Jul-11
EOT | 5.1 | | Objective 5.1 Projects: Training (HPC University Workforce Development)
EOT.TS | 5.1.2 | P | Objective 5.1 Projects: Training [deps: US, AUSS, SG, DIV] | Aug-09– | IU Robert Ping [20%], Scott Tiege [20%]; NCSA Sandie Kappes [50%]; NICS James Ferguson [30%], Bruce Loftis [20%]; PSC Laura McGinnis [30%], Robin Flaus [5%], Ryan Omecene [10%], Tom Maiden [3%], John Urbanic [3%], Phil Blood [3%]; SDSC Edward Jeff Sale [33%]; TACC Chris
EOT.TS | 5.1.2.1 | | Training & Seminars PY5 (training offered at RP sites; on-line materials developed) [deps: To RPs] | Aug-09–Jul-10 | Training team | 90% (Developing two new online courses.) / 75% / 65% (Two more in development)
EOT.TS | 5.1.2.5 | P | Development of one new training content for live, synchronous, and/or on-line delivery each quarter [deps: US, AUSS, SG, DIV] | Jul-09–Jul-10
EOT.TS.RP | 5.1.2.7 | O | Aggregate RP Training Operations | Aug-09–Mar-10
EOT.TS.RP.NCSA | 5.1.2.7.1 | O | NCSA RP Training Operations | Aug-09–Mar-10 | Arnold [50%], File [80%], Glick [25%], Kappes [20%] | 100% / 75% / 50%
EOT.TS.RP.IU | 5.1.2.7.2 | O | IU RP Training Operations | Aug-09–Mar-10 | 100% / 75% / 50%
EOT.TS.RP.LONI | 5.1.2.7.3 | O | LONI RP Training Operations | Aug-09–Mar-10 | 100% / 75% / 50%
EOT.TS.RP.NICS | 5.1.2.7.4 | O | NICS RP Training Operations | Aug-09–Mar-10 | Ferguson [50%], TBD [50%] | 100% / 75% / 50%
EOT.TS.RP.ORNL | 5.1.2.7.5 | O | ORNL RP Training Operations | Aug-09–Mar-10 | 100% / 75% / 50%
EOT.TS.RP.PSC | 5.1.2.7.6 | O | PSC RP Training Operations | Aug-09–Mar-10 | McGinnis [4%], Mahmoodi [8%], Urbanic [15%], Brown, S [3%], Madrid [5%], Maiden [35%], Jana [5%] | 100% / 75% / 50%
EOT.TS.RP.PU | 5.1.7.7 | O | PU RP Training Operations | Aug-09–Mar-10
EOT.TS.RP.SDSC | 5.1.7.8 | O | SDSC RP Training Operations | Aug-09–Mar-10 | Sale [50%] | 100% / 75% / 50%
EOT.TS.RP.TACC | 5.1.7.9 | O | TACC RP Training Operations | Apr-09–Mar-10 | Turner [10%], Wilson [10%] | 100% / 75% / 50%
EOT.TS.RP.UCANL | 5.1.7.10 | O | UC/ANL RP Training Operations | Aug-09–Mar-10 | – / 75% / 50%
EOT.TS | 5.1.8.1 | | Training & Seminars Extension (training offered at RP sites; on-line materials developed) [deps: To RPs] | Aug-10–Jul-11
EOT.TS | 5.1.8.1.1 | P | Develop 3 new classes [deps: US, AUSS, SG, DIV] | Aug-10–Jul-11 | Training team
EOT.TS | 5.1.8.1.2 | P | Webcast Parallel Computing and Programming Class 2 times [deps: US, AUSS, SG, DIV] | Aug-10–Jul-11 | Training team
EOT.TS | 5.1.8.1.2 | P | Produce 4 webcasts [deps: US, AUSS, SG, DIV] | Aug-10–Jul-11 | Training team
EOT.TS | 5.1.8.1.2 | P | XD Transition place holder [deps: US, AUSS, SG, DIV] | Training team
EOT.RPTRN | 5.1.9 | O | Extension - Aggregate RP Training Operations | Aug-10–Jul-11
EOT.RPTRN.NCSA | 5.1.9.1 | O | Extension - NCSA RP Training Operations | Aug-10–Jul-11 | Arnold [50%], File [80%], Glick [25%], Kappes [20%]
EOT.RPTRN.IU | 5.1.9.2 | O | Extension - IU RP Training Operations | Aug-10–Jul-11
EOT.RPTRN.LONI | 5.1.9.3 | O | Extension - LONI RP Training Operations | Aug-10–Jul-11
EOT.RPTRN.NICS | 5.1.9.4 | O | Extension - NICS RP Training Operations | Aug-10–Jul-11 | Ferguson [50%], TBD [50%]
EOT.RPTRN.PSC | 5.1.9.5 | O | Extension - PSC RP Training Operations | Aug-10–Jul-11 | McGinnis [4%], Mahmoodi [8%], Urbanic [15%], Brown, S [3%], Madrid [5%], Maiden [35%], Jana [5%]
EOT.RPTRN.PU | 5.1.9.6 | O | Extension - PU RP Training Operations | Aug-10–Jul-11
EOT.RPTRN.SDSC | 5.1.9.7 | O | Extension - SDSC RP Training Operations | Aug-10–Jul-11 | Sale [50%]
EOT.RPTRN.TACC | 5.1.9.8 | O | Extension - TACC RP Training Operations | Aug-10–Jul-11 | Turner [10%], Wilson [10%]
EOT.RPTRN.UCANL | 5.1.9.9 | O | Extension - UC/ANL RP Training Operations | Aug-10–Jul-11
EOT.OTS | 5.1.10 | | Extension - Online Training | Aug-10–Jul-11 | 1.04 FTE
EOT.OTS | 5.1.10.1 | P | Develop 3-4 on-line training courses | Aug-10–Jul-11
EOT.OTS | 5.1.10.2 | P | XD Transition place holder | Aug-10–Jul-11
EOT.TQA | 5.1.11 | | Training and HPC University Quality | Aug-10–Jul-11 | 0.48 FTE
EOT.TQA | 5.1.11.1 | P | QA of training materials | Aug-10–Jul-11
EOT.TQA | 5.1.11.2 | P | XD Transition place holder | Aug-10–Jul-11
EOT.COMM | 5.1.12 | | TG Community | Aug-10–Jul-11 | 1.25 FTE
EOT.COMM | 5.1.12.1 | P | Create a TeraGrid social collaboration environment for learning and community building | Aug-10–Jul-11
EOT.COMM | 5.1.12.2 | P | XD Transition place holder | Aug-10–Jul-11
EOT | 5.2 | | Objective 5.2 Projects: External Relations (Science Highlights)
EOT.SH | 5.2.3 | | Science Highlights 2010 | Jan-10–Nov-10 | UC Elizabeth Leake [10%] | Spans fiscal years
EOT.SH | 5.2.3.1 | P | Science Highlights Production | Apr-10–Nov-10 | 50%
EOT.SH | 5.2.3.2 | P | Review and select science stories to highlight | Froelich, ER team and Bruett Consulting | 100%
EOT.SH | 5.2.3.3 | P | Initiate production for Science Highlights | Apr-10–Jul-10 | ER team | 25%
EOT.SH | 5.2.4 | | Science Highlights 2011 | Apr-10–Nov-11 | Spans fiscal years
EOT.SH | 5.2.4.1 | P | Initiate collection of stories for the winning XD team | Apr-11–Nov-11 | ER team
EOT | 5.3 | | Objective 5.3 Projects: Outreach
EOT.OE | 5.3.2.2.1 | M | Close out books on TG'09 Conference | Jul-09–Dec-09 | On-going (Mike - Need to check with John T.) / On-going (Need to check with Matt.) / On-going (Matt is talking with Towns to get the data.)
EOT.OE | 5.3.5 | | Outreach Events PY5 [deps: ALL TG] | Aug-09–Jul-10 | IU Robert Ping [20%]; PSC Laura McGinnis [10%]; SDSC Ange Mason [20%]
EOT.OE | 5.3.5.1 | M | presentations, exhibits at conferences and outreach events | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.OE | 5.3.5.3 | M | TG10 | Aug-09–Jul-10 | 90% (Event is in August so effort increases as the event approaches.) / 25% / 20%
EOT.OE | 5.3.5.3.1 | M | Close out books on TG'10 Conference | Jul-10–Dec-10
EOT.OE | 5.3.5.5 | M | Campus Champions meeting at TG10 | Jun-10–Jul-10 | Hunt
EOT.OE | 5.3.5.6 | M | Campus Champions meeting Spring 2010 | Jan-10–Apr-10 | Hunt | 100%
EOT.OE | 5.3.5.7 | M | Campus Champion Visits - 2 per quarter throughout the year | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.OE | 5.3.5.8 | O | Campus Champions Consulting | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.OE | 5.3.5.9 | M | Campus Champions monthly training sessions - conference calls and/or sync delivery tools | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.OE | 5.3.5.10 | P | Professional Society Outreach | Aug-09–Jul-10 | Ping [20%] | 100% / 75% / 50%
EOT.OE | 5.3.5.10.1 | P | Collect information on professional society meetings throughout the year | Aug-09–Jul-10 | Ping | 100% / 75% / 50%
EOT.OE | 5.3.5.10.2 | P | Review and select future outreach events quarterly: e.g. AAAS, APS, AGU, ACS, NSTA, Tapia etc | Aug-09–Jul-10 | Ping | 100% / 75% / 50%
EOT.OE | 5.3.5.10.3 | P | PY5 - presentations, exhibits at conferences and outreach events throughout the year | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.OE | 5.3.5.11 | O | TeraGrid Pathways | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.OE | 5.3.5.11.1 | O | Consulting of under-represented populations [deps: AUSS] | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.OE | 5.3.5.11.2 | O | Fellowships for under-represented populations [deps: AUSS] | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.OE | 5.3.5.11.3 | O | Mentoring of under-served populations [deps: AUSS] | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.OE | 5.3.5.12 | P | Develop student competitions and awards in conjunction with National Science Olympiad | Aug-09–Jul-10 | Wiziecki [20%] | 100% / 75% / 50%
EOT.OE | 5.3.5.12.1 | P | Work with Olympiad to establish computational science component | Aug-09–Dec-09 | Wiziecki | 100% / 75% / 50%
EOT.OE | 5.3.5.12.2 | P | Develop competitions | Jan-10–May-10 | Wiziecki | 100% (1 competition completed, one remaining in May) / 50% (1 competition completed, one remaining in May)
EOT.OE | 5.3.5.12.3 | P | Deliver competitions and make awards | Apr-10–Jul-10 | Wiziecki | 100% (1 competition completed, one remaining in May) / 50% (1 competition completed, one remaining in May)
EOT.OE | 5.3.6 | | Outreach Events Extension [deps: ALL TG] | Aug-10–Jul-11 | IU Robert Ping [20%]; PSC Laura McGinnis [10%]; SDSC Ange Mason [20%]; 0.5 FTE for Extension
EOT.OE | 5.3.6.1 | M | Campus Champions monthly training sessions - conference calls and/or sync delivery tools | Aug-10–Jul-11
EOT.OE | 5.3.6.2 | P | Professional Society Outreach | Aug-10–Jul-11 | Ping [20%]
EOT.OE | 5.3.6.3 | P | Collect information on professional society meetings throughout the year | Aug-10–Jul-11 | Ping
EOT.OE | 5.3.6.4 | P | Review and select future outreach events quarterly: e.g. AAAS, APS, AGU, ACS, NSTA, Tapia etc | Aug-10–Jul-11 | Ping
EOT.OE | 5.3.6.5 | P | Extension - presentations, exhibits at conferences and outreach events throughout the year | Aug-10–Jul-11
EOT.OE | 5.3.6.10 | P | Develop student competitions and awards in conjunction with National Science Olympiad | Aug-10–Jul-11 | Wiziecki [20%]
EOT.OE | 5.3.6.11 | P | Work with Olympiad to establish computational science component | Aug-10–Dec-10 | Wiziecki
EOT.OE | 5.3.6.12 | P | Develop competitions | Jan-11–May-11 | Wiziecki
EOT.OE | 5.3.6.13 | P | Deliver competitions and make awards | Apr-11–Jul-11 | Wiziecki
EOT.Edu | 5.4.3 | | Education PY5 [deps: ALL TG] | Aug-09–Jul-10 | NCSA Edee Wiziecki [20%]; PSC Laura McGinnis [10%], Robin Flaus [10%], Phil Blood [5%], Shawn Brown [10%]; Purdue Kay Hunt [50%], TBD [25%]; SDSC Ange Mason [20%], Diane Baxter [20%]
EOT.EDU | 5.4.3.2 | M | Education workshops | May-10–Aug-10 | 90%
EOT.EDU | 5.4.3.3 | M | monthly newsletter of EOT resources | Aug-09–Mar-10 | 100% / 75% / 50%
EOT.EDU | 5.4.3 | P | EOT Highlights 2010 | Apr-10–Dec-10
EOT.EDU | 5.4.3.1 | P | Request stories from EOT team | Apr-10–Jul-10 | Mason [20%] | 100%
EOT.EDU | 5.4.3.2 | P | Select and write stories for inclusion | Jul-10–Oct-10 | Mason, EOT team | 50%
EOT.EDU | 5.4.3.3 | P | Prepare layout and proof | Sep-10–Dec-10 | Mason, EOT team, Bruett Consulting | 25%
EOT.EDU | 5.4.3.4 | P | Print Highlights | Sep-10–Oct-10 | Bruett Consulting
EOT.EDU | 5.4.3.5 | M | PY5 - SC10 Education Program | Jan-10–Nov-10 | 75%
EOT.Edu | 5.4.4 | | Education Extension [deps: ALL TG] | Aug-10–Jul-11 | NCSA Edee Wiziecki [20%]; PSC Laura McGinnis [10%], Robin Flaus [10%], Phil Blood [5%], Shawn Brown [10%]; Purdue Kay Hunt [50%], TBD [25%]; SDSC Ange Mason [20%], Diane Baxter [20%] (Extension Year)
EOT.EDU | 5.4.4.1 | M | Education workshops | May-11–Jul-11
EOT.EDU | 5.4.4.2 | M | monthly newsletter of EOT resources | Aug-10–Mar-11
EOT.EDU | 5.4.4.3 | P | EOT Highlights 2011 | Jan-11–Mar-11
EOT.EDU | 5.4.4.3.1 | P | Request stories from EOT team | Jan-11–Mar-11 | Mason [20%]
EOT.EDU | 5.4.4.8 | M | Extension - Education Program | Jan-11–Jul-11
EOT.EDU.RP | 5.4.7 | O | Aggregate RP Education & Outreach Operations | Aug-09–Mar-10
EOT.EDU.RP.NCSA | 5.4.7.1 | O | NCSA RP Education & Outreach Operations | Aug-09–Mar-10 | 100% / 75% / 50%
EOT.EDU.RP.IU | 5.4.7.2 | O | IU RP Education & Outreach Operations | Aug-09–Mar-10 | 100% / 75% / 50%
EOT.EDU.RP.LONI | 5.4.7.3 | O | LONI RP Education & Outreach Operations | Aug-09–Mar-10 | 100% / 75% / 50%
EOT.EDU.RP.NICS | 5.4.7.4 | O | NICS RP Education & Outreach Operations | Aug-09–Mar-10 | Ferguson [50%], TBD [100%], TBD [50%] | 100% / 75% / 50%
EOT.EDU.RP.ORNL | 5.4.7.5 | O | ORNL RP Education & Outreach Operations | Aug-09–Mar-10
EOT.EDU.RP.PSC | 5.4.7.6 | O | PSC RP Education & Outreach Operations | Aug-09–Mar-10 | Budden [5%], Flaus [15%], Graham [15%], Hanna [5%], Light [5%], McGinnis [25%], Nystrom [10%], Brown, S [2%], Cubbison [10%], Hackworth [15%], Maiden [10%], Domaracki [50%], Czech [50%], Ishwad [25%] | 100% / 75% / 50%
EOT.EDU.RP.PU | 5.4.7.7 | O | PU RP Education & Outreach Operations | Aug-09–Mar-10 | 100% / 75% / 50%
EOT.EDU.RP.SDSC | 5.4.7.8 | O | SDSC RP Education & Outreach Operations | Aug-09–Mar-10 | Baxter [37%], Mason [20%] | 100% / 75% / 50%
EOT.EDU.RP.TACC | 5.4.7.9 | O | TACC RP Education & Outreach | Apr-09–Mar-10 | Armosky [25%] | 100% / 75% / 50%
EOT.EDU.RP.UCANL | 5.4.7.10 | O | UCANL RP Education & Outreach Operations | Apr-09–Mar-10 | 100% / 75% / 50%
EOT.RPOE | 5.4.8 | O | Education & Outreach | Aug-10–Mar-11 | 0.78 FTE for Extension Year
EOT.RPOE.NCSA | 5.4.8.1 | O | NCSA RP Education & Outreach Operations | Aug-10–Mar-11
EOT.RPOE.IU | 5.4.8.2 | O | IU RP Education & Outreach Operations | Aug-10–Mar-11
EOT.RPOE.LONI | 5.4.8.3 | O | LONI RP Education & Outreach Operations | Aug-10–Mar-11
EOT.RPOE.NICS | 5.4.8.4 | O | NICS RP Education & Outreach Operations | Aug-10–Mar-11 | Ferguson [50%], TBD [100%], TBD [50%]
EOT.RPOE.PSC | 5.4.8.6 | O | PSC RP Education & Outreach Operations | Aug-10–Mar-11 | Budden [5%], Flaus [15%], Graham [15%], Hanna [5%], Light [5%], McGinnis [25%], Nystrom [10%], Brown, S [2%], Cubbison [10%], Hackworth [15%], Maiden [10%], Domaracki [50%], Czech [50%], Ishwad [25%]
EOT.RPOE.PU | 5.4.8.7 | O | PU RP Education & Outreach Operations | Aug-10–Mar-11
EOT.RPOE.SDSC | 5.4.8.8 | O | SDSC RP Education & Outreach Operations | Aug-10–Mar-11 | Baxter [37%], Mason [20%]
EOT.RPOE.TACC | 5.4.8.9 | O | TACC RP Education & Outreach | Aug-10–Mar-11 | Armosky [25%]
EOT.RPOE.UCANL | 5.4.8.10 | O | UCANL RP Education & Outreach Operations | Aug-10–Mar-11
EOT.OCCP | 5.4.9 | O | Outreach: Campus Champions Program | Aug-10–Mar-11 | 1.95 FTE
EOT | 5.5 | | Objective 5.5 Projects: Evaluation
EOT.EVAL | 5.5.2 | | PY5 Evaluation [deps: To SI, RPs, NSF] | Aug-09–Jul-10 | Abbott - external evaluator on consulting agreement
EOT.EVAL | 5.5.2.1 | O | All RPs collect data at all events throughout year and share with external evaluator | Aug-09–Jul-10 | all RPs | 100% / 75% / 50%
EOT.EVAL | 5.5.2.2 | M | Distribute surveys to people attending events held 6 months ago | Feb-10–Jul-10 | Abbott | 100% / 10%
EOT.EVAL | 5.5.2.2 | M | Quarterly summary of data | Dec-09–Jul-10 | Abbott | 75% / 50%
EOT.EVAL | 5.5.2.3 | M | Mid-year analysis of data | Jan-10–Feb-10 | Abbott | Completed / Completed
EOT.EVAL | 5.5.2.4 | M | Full analysis report annually | Jul-10–Jul-10 | Abbott
EOT.EVAL.RP | 5.5.3 | | PY5 - Evaluations offered by RPs | Aug-09–Jul-10
EOT.EVAL.RP.IU | 5.5.3.1 | O | Collect data throughout year | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.ER | 5.6 | P | Objective 5.6 Projects: External Relations | UC Elizabeth Leake [100%]
EOT.ER | 5.6.2 | O | PY5 - working group meetings; info sharing among RP ER staff | Aug-09–Jul-10 | 100% / 75% / 50%
EOT.ER | 5.6.4 | O | PY5 - Quarterly Reporting (9/30, 12/31, 3/31, 6/30) | Aug-09–Jul-10 | 100% / 50% / 25%
EOT.ER.RP | 5.6.5 | O | Aggregate RP External Relations Operations | Aug-09–Mar-10
EOT.ER.RP.NCSA | 5.6.5.1 | O | NCSA RP External Relations Operations | Aug-09–Mar-10 | Barker [40%], Bell [25%] | 100% / 75% / 50%
EOT.ER.RP.IU | 5.6.5.2 | O | IU RP External Relations Operations | Apr-09–Mar-10 | 100% / 75% / 50%
EOT.ER.RP.LONI | 5.6.5.3 | O | LONI RP External Relations Operations | Apr-09–Mar-10 | 100% / 75% / 50%
EOT.ER.RP.NICS | 5.6.5.4 | O | NICS RP External Relations Operations | Apr-09–Mar-10 | 100% / 75% / 50%
EOT.ER.RP.ORNL | 5.6.5.5 | O | ORNL RP External Relations Operations | Apr-09–Mar-10
EOT.ER.RP.PSC | 5.6.5.6 | O | PSC RP External Relations Operations | Aug-09–Mar-10 | Schneider [50%], Williams [50%] | 100% / 75% / 50%
EOT.ER.RP.PU | 5.6.5.7 | O | PU RP External Relations Operations | Apr-09–Mar-10 | 100% / 75% / 50%
EOT.ER.RP.SDSC | 5.6.5.8 | O | SDSC RP External Relations Operations | Aug-09–Mar-10 | Zverina [50%] | 100% / 75% / 50%
EOT.ER.RP.TACC | 5.6.5.9 | O | TACC RP External Relations | Apr-09–Mar-10 | Singer [25%] | 100% / 75% / 50%
EOT.ER.RP.UCANL | 5.6.5.10 | O | UC/ANL RP External Relations Operations | Apr-09–Mar-10 | 100% / 75% / 50%
EOT.ER | 5.6.7 | M | TG10 Communications and PR; Booth planning [deps: all RPs] | Aug-09–Jul-10 | Leake [10%], ER team members | 90% / 25% / 20%
EOT.ER | 5.6.9 | M | SC10 [deps: ADs] | May-10–Dec-10 | Coordinated by Leake [10%], ER team | 50%
EOT.ER | 5.6.10 | P | Identify new venues for disseminating information about TeraGrid among science and engineering communities to attract new users to TeraGrid resources and services | Aug-09–Jul-10 | Leake [10%] | 100% / 75% / 50%
EOT.ER | 5.6.10.1 | P | Work with ER team, AUS, ADs, and TG Forum to identify venues | Aug-09–Jul-10 | Leake | 100% / 75% / 50%
EOT.RPERC | 5.6.5 | O | Aggregate RP External Relations Operations | Aug-10–Mar-11 | 0.75 FTE for Extension
EOT.RPERC.NCSA | 5.6.5.1 | O | NCSA RP External Relations Operations | Aug-10–Mar-11 | Barker [40%], Bell [25%]
EOT.RPERC.IU | 5.6.5.2 | O | IU RP External Relations Operations | Aug-10–Mar-11
EOT.RPERC.LONI | 5.6.5.3 | O | LONI RP External Relations Operations | Aug-10–Mar-11
EOT.RPERC.NICS | 5.6.5.4 | O | NICS RP External Relations Operations | Aug-10–Mar-11
EOT.RPERC.ORNL | 5.6.5.5 | O | ORNL RP External Relations Operations | Aug-10–Mar-11
EOT.RPERC.PSC | 5.6.5.6 | O | PSC RP External Relations Operations | Aug-10–Mar-11 | Schneider [50%], Williams [50%]
EOT.RPERC.PU | 5.6.5.7 | O | PU RP External Relations Operations | Aug-10–Mar-11
EOT.RPERC.SDSC | 5.6.5.8 | O | SDSC RP External Relations Operations | Aug-10–Mar-11 | Zverina [50%]
EOT.RPERC.TACC | 5.6.5.9 | O | TACC RP External Relations | Aug-10–Mar-11 | Singer [25%]
EOT.RPERC.UCANL | 5.6.5.10 | O | UC/ANL RP External Relations Operations | Aug-10–Mar-11
EOT.ERComm | 5.6.7 | M | Write stories about TeraGrid (non RP-specific) resources and services. Customize existing stories for NSF-OCI, OLPA, and other purposes | Aug-10–Mar-11 | Leake [10%], ER team members, Grad Student; 0.87 FTE for Extension
EOT.ERComm | 5.6.9 | M | Graphic Design assistance with the facilitation and updating of TeraGrid's web content, specifically images and graphic files | Aug-10–Mar-11
EOT.ERComm | 5.6.10 | P | Graduate assistant to help with research, list maintenance, and ER XD transition documentation | Aug-10–Mar-11
EOT.ERComm | 5.6.10.1 | P | Internal relations activities. International collaboration. Training in support of international communication | Aug-10–Mar-11
EOT.SW | 5.7 | | Science Writing (stories written and distributed) | Aug-09–
EOT.SW | 5.7.3 | O | Science Writing PY5 | Aug-09–Jul-10 | PSC Michael Schneider [20%], Shandra Williams [20%] | 100% / 75% / 50%
EOT.SW | 5.7.3.1 | O | Review and select science stories to highlight, write, and disseminate throughout year [deps: RPs, GIG] | Aug-09–Jul-10 | ER team; coordinated by Leake | 100% / 75% / 50%
EOT.SW | 5.7.3.2 | O | Select and write stories each quarter [deps: RPs, GIG] | Aug-09–Jul-10 | ER team | 100% / 75% / 50%
EOT.SW | 5.7.3.3 | O | Publish stories quarterly on web; disseminate through media - iSGTW, HPCWire, etc. [deps: GIG, UFP] | Aug-09–Jul-10 | ER team | 100% / 75% / 50%
EOT.SW | 5.7.4 | O | Science Writing Extension | Aug-10–Jul-11 | PSC Michael Schneider [20%], Shandra Williams [20%]
EOT.SW | 5.7.4.1 | O | Review and select science stories to highlight, write, and disseminate throughout year [deps: RPs, GIG] | Aug-10–Jul-11 | ER team; coordinated by Leake
EOT.SW | 5.7.4.2 | O | Select and write stories each quarter [deps: RPs, GIG] | Aug-10–Jul-11 | ER team
EOT.SW | 5.7.4.3 | O | Publish stories quarterly on web; disseminate through media - iSGTW, HPCWire, etc. [deps: GIG, UFP] | Aug-10–Jul-11 | ER team
EOT.SW | 5.7.4.4 | O | XD Transition place holder [deps: GIG, UFP] | ER team
EOT.CSPW | 5.8 | | Computational Science Problem of the Week (EOT.CSPW) | Aug-10–Jul-11
EOT.CSPW | 5.8.1 | O | Computational Science Problem of the Week (EOT.CSPW) | Aug-10–Jul-11 | 0.68 FTE
EOT END