
community. User feedback received from materials science users also showed strong support for this project.

The third project involves evaluation and benchmarking of the hybrid (MPI-OpenMP/pthreads) programming model. With the emergence of multi-core processors, hybrid programming, using MPI tasks and multiple OpenMP threads or pthreads per task, is of interest to users. This model may provide better performance than straight MPI programming. In addition, hybrid programming may allow some simulations that are otherwise not possible due to the limited memory available per core, and hence provide a science capability not achievable by straight MPI codes on multi-core architectures. To investigate these issues, AUS staff from PSC, NICS, TACC, and SDSC are comparing the performance and usage of codes written in hybrid mode versus straight MPI mode. The codes being targeted are an astrophysics code, a hybrid benchmarking code, phylogenetics codes, FFTW, and a Monte Carlo code; this work is ongoing. AUS staff are planning to write up the results comparing straight MPI versus MPI-OpenMP/pthreads performance for the TeraGrid user community. Raghu Reddy (PSC) continued to work with University of Colorado researchers on measuring the hybrid performance of their code. One of the members of the group will be presenting the results at TG10. This work also resulted in the following submission: http://arxiv.org/abs/1003.4322 .
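To make the model being evaluated concrete, the following minimal sketch (illustrative only, not drawn from any of the benchmarked codes; the problem size is arbitrary) shows the basic MPI-OpenMP structure: each MPI task runs a multi-threaded loop over its share of the work, and the partial results are then combined across tasks.

    /* Minimal hybrid MPI + OpenMP sketch.  Each MPI task computes a
     * partial sum with an OpenMP-threaded loop; MPI_Reduce combines
     * the per-task results.  Illustrative only. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        /* Request thread support suitable for threaded regions. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        const long n = 1000000;            /* work items per task */
        double local = 0.0, global = 0.0;

        /* Threads share this loop within a task, instead of each core
         * holding a full MPI rank with its own copy of the data; this
         * is the per-core memory advantage described above. */
        #pragma omp parallel for reduction(+:local)
        for (long i = 0; i < n; i++)
            local += 1.0 / (double)(rank * n + i + 1);

        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %f (%d tasks x %d threads)\n",
                   global, nranks, omp_get_max_threads());
        MPI_Finalize();
        return 0;
    }

With common MPI compiler wrappers such a program would typically be built with something like mpicc -fopenmp; the mix of MPI tasks and threads per node is exactly the parameter the AUS performance comparisons vary.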

As mentioned earlier, the fourth project involves investigating PGAS languages. Progress to date includes the implementation of a 2D finite difference kernel in UPC and MPI. Testing of this kernel demonstrated the expected MPI performance but poor UPC performance. Interactions have begun with the Berkeley UPC group, and the TACC project team is waiting on a response from them on a few items before moving on to the implementation of other kernels. A Chapel implementation of the same kernel is underway. Benchmarks were also run with MUPC, with similar results.
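The report does not reproduce the kernel itself; the sketch below shows the kind of five-point Jacobi stencil a 2D finite difference kernel of this sort typically computes (grid size, iteration count, and boundary condition are illustrative). In the UPC version the two grids would be shared arrays with each thread updating its own block of rows; in MPI, each rank would hold a block of rows and exchange halo rows with its neighbors.

    /* Five-point Jacobi stencil: a plausible serial core for the 2D
     * finite difference kernel.  Boundary rows/columns are held fixed;
     * the two grids alternate roles each sweep via a pointer swap. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 512                    /* grid dimension (illustrative) */

    static void jacobi(double (*u)[N], double (*unew)[N], int iters)
    {
        for (int it = 0; it < iters; it++) {
            /* Update interior points from the four nearest neighbors. */
            for (int i = 1; i < N - 1; i++)
                for (int j = 1; j < N - 1; j++)
                    unew[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                       + u[i][j-1] + u[i][j+1]);
            double (*tmp)[N] = u; u = unew; unew = tmp;  /* swap grids */
        }
    }

    int main(void)
    {
        /* Both grids need identical boundary values, since they trade
         * places across iterations. */
        double (*u)[N]    = calloc(N, sizeof *u);
        double (*unew)[N] = calloc(N, sizeof *unew);
        for (int j = 0; j < N; j++)
            u[0][j] = unew[0][j] = 1.0;   /* fixed hot edge at row 0 */
        jacobi(u, unew, 100);
        printf("u[1][N/2] = %f\n", u[1][N / 2]);
        free(u);
        free(unew);
        return 0;
    }

A common cause of poor performance in naive UPC versions of such stencils is fine-grained shared-array indexing, where each element reference can incur an affinity check or a remote access; whether that is the cause here is not stated in the report.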

The fifth project involves the development of a TAU tutorial by AUS staff from PSC. The tutorial and documentation were linked from the AUS projects page (i.e., the same page where the benchmark results and hybrid programming results from the other projects are published). The TAU tutorial gives users information about the TAU installations on Ranger, Kraken, and other TeraGrid resources, and serves as a practical guide to TAU. It allows users to learn about the commonly used features of the TAU software tool and enables them to successfully start using TAU on TeraGrid resources. Figure 5.1 shows one of the images from the tutorial.

Figure 5.1 TAU Startup Screen as Shown in the TAU Tutorial.
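As a sketch of the kind of "getting started" material such a tutorial covers, the following manually instrumented C program uses the standard TAU profiling macros; it is not an excerpt from the tutorial, and the exact installation paths and modules on Ranger and Kraken are documented in the tutorial itself.

    /* Manual TAU instrumentation of a C program: a timer placed
     * around a routine of interest.  A sketch of standard TAU API
     * usage, not taken from the tutorial. */
    #include <TAU.h>
    #include <stdio.h>

    static void work(void)
    {
        /* Declare and start a TAU timer scoped to this routine. */
        TAU_PROFILE_TIMER(t, "work", "", TAU_USER);
        TAU_PROFILE_START(t);
        double s = 0.0;
        for (int i = 1; i <= 10000000; i++)
            s += 1.0 / i;
        printf("s = %f\n", s);
        TAU_PROFILE_STOP(t);
    }

    int main(int argc, char **argv)
    {
        TAU_PROFILE_INIT(argc, argv);
        TAU_PROFILE_SET_NODE(0);     /* single-process run */
        work();
        return 0;
    }

Compiled with a TAU compiler wrapper such as tau_cc.sh (with the TAU_MAKEFILE environment variable selecting an installed TAU configuration), a run of the program writes profile files that can be examined with pprof or paraprof.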

