Magellan Final Report - Office of Science - U.S. Department of Energy
[67] M. Palankar, A. Iamnitchi, M. Ripeanu, and S. Garfinkel. Amazon S3 for science grids: a viable solution? In Proceedings of the 2008 International Workshop on Data-Aware Distributed Computing, pages 55–64. ACM, 2008.
[68] Phloem website. https://asc.llnl.gov/sequoia/benchmarks/PhloemMPIBenchmarks_summary_v1.0.pdf.
[69] X. Qiu, J. Ekanayake, S. Beason, T. Gunarathne, G. Fox, R. Barga, and D. Gannon. Cloud technologies for bioinformatics applications. In Proceedings of the 2nd Workshop on Many-Task Computing on Grids and Supercomputers, pages 1–10. ACM, 2009.
[70] L. Ramakrishnan, R. S. Canon, K. Muriki, I. Sakrejda, and N. J. Wright. Evaluating interconnect and virtualization performance for high performance computing. In Proceedings of the 2nd International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems (PMBS11), 2011.
[71] L. Ramakrishnan et al. VGrADS: Enabling e-Science Workflows on Grids and Clouds with Fault Tolerance. In Proceedings of the ACM/IEEE SC2009 Conference on High Performance Computing, Networking, Storage and Analysis, Portland, Oregon, November 2009.
[72] L. Ramakrishnan, P. T. Zbiegel, S. Campbell, R. Bradshaw, R. S. Canon, S. Coghlan, I. Sakrejda, N. Desai, T. Declerck, and A. Liu. Magellan: experiences from a science cloud. In Proceedings of the 2nd International Workshop on Scientific Cloud Computing, ScienceCloud '11, pages 49–58, New York, NY, USA, 2011. ACM.
[73] C. Ranger, R. Raghuraman, A. Penmetsa, G. Bradski, and C. Kozyrakis. Evaluating MapReduce for multi-core and multiprocessor systems. In Proceedings of the 2007 IEEE 13th International Symposium on High Performance Computer Architecture, pages 13–24, Washington, DC, USA, 2007. IEEE Computer Society.
[74] J. Rehr, F. Vila, J. Gardner, L. Svec, and M. Prange. Scientific computing in the cloud. Computing in Science and Engineering, 99(PrePrints), 2010.
[75] M. C. Schatz. CloudBurst: highly sensitive read mapping with MapReduce. Bioinformatics, pages 1363–1369, June 2009.
[76] K. Shvachko, H. Kuang, S. Radia, and R. Chansler. The Hadoop distributed file system. In Mass Storage Systems and Technologies (MSST), 2010 IEEE 26th Symposium on, pages 1–10, May 2010.
[77] D. Skinner. Integrated Performance Monitoring: A portable profiling infrastructure for parallel applications. In Proc. ISC2005: International Supercomputing Conference, Heidelberg, Germany, 2005.
[78] SPEC website. https://portal.futuregrid.org/.
[79] J. Thatcher et al. NAND Flash Solid State Storage for the Enterprise - An In-depth Look at Reliability. Technical report, SNIA, 2009.
[80] Top500 Supercomputer Sites. http://www.top500.org.
[81] E. Walker. The Real Cost of a CPU Hour. IEEE Computer, 42(4), 2009.
[82] G. Wang and T. E. Ng. The impact of virtualization on network performance of Amazon EC2 data center. In Proceedings of IEEE INFOCOM, 2010.
[83] Windows Azure. http://www.microsoft.com/windowsazure.