How can we improve the laohu supercomputer with cell phone processors?
(Report and Future Plans with laohu)
Rainer Spurzem*, Peter Berczik, Silk Road Team...
National Astronomical Observatories (NAOC), Chinese Academy of Sciences
Kavli Institute for Astronomy and Astrophysics (KIAA), Peking University
Astronomisches Rechen-Institut, ZAH, University of Heidelberg, Germany
spurzem@nao.cas.cn
http://silkroad.bao.ac.cn
and the Computer Network and Information Center at NAOC
(Cui Chenzhou, Li Changhua)
*Special State Foreign Expert in the Thousand Talents Plan in China
[Photo: Tianshan Mountains near Almaty, Kazakhstan]
General Purpose GPU Supercomputing (GPGPU)
http://www.nvidia.com
http://www.astrogpu.org
http://gpgpu.org
[Photo: NVIDIA Tesla C1060 in the kolob cluster, Heidelberg University]
PRACE Award 2011
Edited volume, November 2011, with the paper:
Spurzem et al., "Accelerated Many-Core GPU Computing for Physics and Astrophysics on Three Continents"
Astrophysical Particle Simulations with Large Custom GPU Clusters on Three Continents
Rainer Spurzem et al., Chinese Academy of Sciences & University of Heidelberg
Presently used GPU (GRAPE) N-body code
Harfst, Berczik, Merritt, Spurzem et al., NewA, 12, 357 (2007)
Spurzem et al., Comp. Science Res. & Dev. 23, 231 (2009)
Hierarchical individual block time steps
4th-order Hermite scheme:
    d²r⃗_i/dt² = a⃗_i
ftp://ftp.ari.uni-heidelberg.de/pub/staff/berczik/phi-GRAPE/
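The idea of hierarchical block time steps can be sketched as follows (an illustrative Python sketch, not the actual phi-GRAPE source; `DT_MAX`, `MAX_LEVEL`, and the function names are my assumptions):

```python
# Hierarchical block time steps: each particle's desired step is snapped
# down to a power-of-two fraction of a maximum step, so all particles
# sharing a level can be advanced together as one "block".
import math

DT_MAX = 1.0       # assumed maximum (level-0) time step
MAX_LEVEL = 30     # assumed deepest allowed level

def block_step(dt_desired):
    """Largest power-of-two step DT_MAX / 2**k not exceeding dt_desired."""
    level = max(0, math.ceil(math.log2(DT_MAX / dt_desired)))
    return DT_MAX / 2 ** min(level, MAX_LEVEL)

def active_particles(times, t_now):
    """Indices of particles scheduled to be advanced at system time t_now."""
    return [i for i, t in enumerate(times) if t == t_now]
```

Because the steps are commensurate powers of two, particle times line up at regular synchronization points, which is what makes block-wise (rather than one-by-one) force evaluation on the GPU efficient.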
Our own φGRAPE/GPU N-body code
The sums over all particles (~N active particles, ~N² force evaluations) are computed on the GPU:
    a⃗_i = Σ_{j=1, j≠i}^{N} f⃗_ij
    f⃗_ij = −G·m_j·r⃗_ij / (r_ij² + ε²)^(3/2)
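A minimal O(N²) direct-summation sketch of the softened force above (plain Python for clarity; in the production code this kernel runs on the GPU, and the function name and defaults are my assumptions):

```python
# Direct summation with Plummer softening eps:
#   a_i = sum_{j != i} -G * m_j * r_ij / (r_ij^2 + eps^2)^(3/2),  r_ij = r_i - r_j
def accelerations(pos, mass, G=1.0, eps=1e-4):
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = [pos[i][k] - pos[j][k] for k in range(3)]  # r_i - r_j
            r2 = sum(c * c for c in rij) + eps * eps          # softened r^2
            f = -G * mass[j] / r2 ** 1.5
            for k in range(3):
                acc[i][k] += f * rij[k]
    return acc
```

The softening ε prevents the force from diverging at close encounters; for two unit masses one length unit apart, each feels an acceleration of magnitude ≈ G toward the other.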
Basic idea of the parallel N-body code
Each of N_proc processes holds N_loc = N / N_proc particles (both i- and j-particles), connected by some communication scheme, so the local number of active particles is N_act,loc = N_act / N_p.
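The load split described above can be sketched as (an illustrative sketch; the function name is my assumption, and only the evenly divisible case is handled):

```python
# Each of n_proc processes owns a contiguous slice of the N particles,
# N_loc = N / n_proc of them; active particles split the same way,
# N_act_loc = N_act / N_p, before results are exchanged between processes.
def local_slice(N, n_proc, rank):
    """Contiguous index range owned by process `rank` (N divisible by n_proc)."""
    n_loc = N // n_proc
    return range(rank * n_loc, (rank + 1) * n_loc)
```

In the real parallel code each process computes partial forces for its local slice and the results are combined with a collective communication step (e.g. an all-gather over the active particles).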
[Plot: parallel code performance on the cluster; N_opt]
Green Grid of GPU Clusters
Sites: Berkeley, Heidelberg/Jülich, Heidelberg, Kiev, Almaty, Lahore, Beijing, Nagasaki
On the path to Exascale?
[Map legend] Black: ICCS nodes, probably not Green Grid. Green: confirmed partners with GPU clusters in the Green Grid. Red: clusters under construction or planned.
* (III) Take part in the Technical Working Groups
Beijing, NAOC, 2012, March 26-30
http://ilibrary.las.ac.cn/web/silkroad/3rd-iccs-workshop/school