
1 Multi-Scale Parallel Computing in the Laboratory of Computational Geodynamics, Chinese Academy of Sciences
Yaolin Shi 1, Mian Liu 2,1, Huai Zhang 1, David A. Yuen 3, Hui Wang 1, Shi Chen 1, Shaolin Chen 1, Zhenzhen Yan 1
Laboratory of Computational Geodynamics, Graduate University of Chinese Academy of Sciences
July 22nd, 2006
1 Graduate University of Chinese Academy of Sciences
2 University of Missouri-Columbia
3 University of Minnesota, Twin Cities

2 Computational Geodynamics
Huge amounts of data from GIS, GPS, and other observations;
Large-scale parallel machines;
Rapid development of high-speed network interconnections between HPCCs and between institutes;
Middleware for grid computing;
Advances in computational mathematics for large-scale linear systems and nonlinear algorithms in parallel computing;
Problems are becoming more and more complex.

3 There is more than one way to do parallel and grid computing
We are now considering several ways to do parallel computing:
Developing state-of-the-art source code packages: www.geodynamics.org;
Plugging specific types of models (wave, fluid, structure, etc.) into a general supporting system: GeoFEM;
Developing a platform that generates parallel and grid computing source code from the user's model description: a modeling-language-based computing environment.

4 Automatic source code generator
FEM Modeling Language (weak-form expressions of the PDEs):
func
funa=+[u/x]
………
funf=+[u/y]+[v/x]
………
dist =+[funa;funa]*d(1,1)+[funa;funb]*d(1,2)+[funa;func]*d(1,3)
     +[funb;funa]*d(2,1)+[funb;funb]*d(2,2)+[funb;func]*d(2,3)
     +[func;funa]*d(3,1)+[func;funb]*d(3,2)+[func;func]*d(3,3)
     +[fund;fund]*d(4,4)+[fune;fune]*d(5,5)+[funf;funf]*d(6,6)
load = +[u]*fu+[v]*fv+[w]*fw-[funa]*f(1)-[funb]*f(2)-[func]*f(3)
     -[fund]*f(4)-[fune]*f(5)-[funf]*f(6)
[Workflow diagram: Physical model → PDEs → FEM Modeling Language → Complete source code → HPCC → Model results, with a Data Grid (GEON and others) supplying data; Data =>??? SWF]
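If the bracket pairing [a;b] stands for the Galerkin integral of a times b over an element (our reading, not stated on the slide but standard for this kind of FEM modeling language), the dist and load expressions above are the weak-form element stiffness and load terms for 3-D elasticity:

\[
a_e(\mathbf{u},\mathbf{v}) \;=\; \int_{\Omega_e} \boldsymbol{\varepsilon}(\mathbf{v})^{T}\,\mathbf{D}\,\boldsymbol{\varepsilon}(\mathbf{u})\,d\Omega,
\qquad
F_e(\mathbf{v}) \;=\; \int_{\Omega_e} \mathbf{v}\cdot\mathbf{f}_b\,d\Omega \;-\; \int_{\Omega_e} \boldsymbol{\varepsilon}(\mathbf{v})^{T}\,\mathbf{f}\,d\Omega,
\]

where ε(·) is the vector of the six strain expressions (funa, ..., funf), D = (d(i,j)) is the 6x6 elastic matrix, f_b = (fu, fv, fw) is the body force, and f = (f(1), ..., f(6)) is the initial stress/strain term.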

5 Why high-performance computing
[Figure: the same stress field computed at three resolutions: 200, 5,000, and 20,000]
Not all numerical results are reliable. Even the first-order stress pattern needs high-resolution, high-precision numerical simulation.

6 Automated Code Generator, Step 1: From PDE expression to Fortran segments
PDE expression (contains the information of the physical model, such as variables and equations, for generating the element stiffness matrix):
disp u v
coor x y
func funa funb func
shap %1 %2
gaus %3
mass %1
load = fu fv
$c6 pe = prmt(1)
$c6 pv = prmt(2)
$c6 fu = prmt(3)
$c6 fv = prmt(4)
$c6 fact = pe/(1.+pv)/(1.-2.*pv)
func
funa=+[u/x]
funb=+[v/y]
func=+[u/y]+[v/x]
stif
dist = +[funa;funa]*fact*(1.-pv) +[funa;funb]*fact*(pv)
     +[funb;funa]*fact*(pv) +[funb;funb]*fact*(1.-pv)
     +[func;func]*fact*(0.5-pv)
Generated Fortran segments (code that realizes the physical model at the element level; Segment 1 to Segment 4), e.g.:
*es,em,ef,Estifn,Estifv,
*es(k,k),em(k),ef(k),Estifn(k,k),Estifv(kk),
goto (1,2), ityp
1 call seuq4g2(r,coef,prmt,es,em,ec,ef,ne)
goto 3
2 call seugl2g2(r,coef,prmt,es,em,ec,ef,ne)
goto 3
3 continue
DO J=1,NMATE
PRMT(J) = EMATE((IMATE-1)*NMATE+J)
End do
PRMT(NMATE+1)=TIME
PRMT(NMATE+2)=DT
prmt(nmate+3)=imate
prmt(nmate+4)=num
(plus other element matrix computing subroutines)
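A plausible reading of the coefficients (our assumption, consistent with fact = pe/(1.+pv)/(1.-2.*pv)): pe is Young's modulus E and pv is Poisson's ratio ν, so the stif block reproduces the plane-strain elastic matrix

\[
\mathbf{D} \;=\; \frac{E}{(1+\nu)(1-2\nu)}
\begin{pmatrix}
1-\nu & \nu & 0\\
\nu & 1-\nu & 0\\
0 & 0 & \tfrac{1-2\nu}{2}
\end{pmatrix},
\]

and in particular the [func;func] coefficient fact*(0.5-pv) reduces to E/(2(1+ν)), the shear modulus.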

7 Step 2: From algorithm expression to Fortran segments
Algorithm expression (contains the information for forming the global stiffness matrix of the model):
defi
stif S
mass M
load F
type e
mdty l
step 0
equation
matrix = [S]
FORC=[F]
SOLUTION U
write(s,unod) U
end
Generated Fortran segments (code that realizes the physical model at the global level; Segment 5 and Segment 6), e.g. assembling element matrices into the global stiffness matrix and updating the solution vector:
do i=1,k
do j=1,k
estifn(i,j)=0.0
end do
do i=1,k
estifn(i,i)=estifn(i,i)
do j=1,k
estifn(i,j)=estifn(i,j)+es(i,j)
end do
U(IDGF,NODI)=U(IDGF,NODI)
*+ef(i)
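To show what the generated global-level segments amount to, here is a minimal scatter-add assembly sketch in Python (an illustration only; the generator itself emits Fortran like the segments above, and all names below are ours):

import numpy as np

def assemble_global_stiffness(n_dofs, elements, element_matrix):
    # elements       : list of tuples of global DOF indices, one tuple per element
    # element_matrix : function taking an element and returning its k x k matrix
    K = np.zeros((n_dofs, n_dofs))
    for elem in elements:
        es = element_matrix(elem)            # element matrix, like 'es' in the generated code
        for a, i in enumerate(elem):         # local row a -> global row i
            for b, j in enumerate(elem):     # local col b -> global col j
                K[i, j] += es[a, b]          # scatter-add: estifn(i,j) = estifn(i,j) + es(i,j)
    return K

# Tiny example: two 1-D bar elements of unit stiffness sharing node 1.
bar = lambda e: np.array([[1.0, -1.0], [-1.0, 1.0]])
K = assemble_global_stiffness(3, [(0, 1), (1, 2)], bar)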

8 Step 3: Plug Fortran segments into a stencil, forming the final FE program
Program stencil (the generated Fortran segments, Segment 1 to Segment 6, ..., are plugged in where the #NAME.sub placeholders appear):
SUBROUTINE ETSUB(KNODE,KDGOF,IT,KCOOR,KELEM,K,KK,
*NUMEL,ITYP,NCOOR,NUM,TIME,DT,NODVAR,COOR,NODE,
#SUBET.sub
*U)
implicit double precision (a-h,o-z)
DIMENSION NODVAR(KDGOF,KNODE),COOR(KCOOR,KNODE),
*U(KDGOF,KNODE),EMATE(300),
#SUBDIM.sub
*R(500),PRMT(500),COEF(500),LM(500)
#SUBFORT.sub
#ELEM.sub
C WRITE(*,*) 'ES EM EF ='
C WRITE(*,18) (EF(I),I=1,K)
#MATRIX.sub
L=0
M=0
I=0
DO 700 INOD=1,NNE
………
U(IDGF,NODI)=U(IDGF,NODI)
#LVL.sub
DO 500 JNOD=1,NNE
………
500 CONTINUE
700 CONTINUE
………
return
end
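Step 3 is essentially a template substitution. A minimal sketch of such a step in Python (our illustration, not the actual generator; the file and placeholder names simply follow the #NAME.sub convention visible in the stencil above):

import re
from pathlib import Path

def expand_stencil(stencil_text, segment_dir):
    # Replace lines like '#SUBDIM.sub' with the contents of segment_dir/SUBDIM.sub.
    out_lines = []
    for line in stencil_text.splitlines():
        m = re.match(r"\s*#(\w+)\.sub\s*$", line)
        if m:
            segment = Path(segment_dir, m.group(1) + ".sub").read_text()
            out_lines.extend(segment.splitlines())   # splice in the generated Fortran segment
        else:
            out_lines.append(line)                   # keep the stencil line unchanged
    return "\n".join(out_lines)

# Hypothetical usage: build the final element subroutine from a stencil file.
# final_code = expand_stencil(Path("etsub.stencil.f").read_text(), "generated_segments")
# Path("etsub.f").write_text(final_code)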

9 Grid computing profile
[Diagram: geoscience models running on computing grids, data grids, and clusters, connected by high-speed interconnections and middleware for grid computing]
Is there one computing environment that can use these facilities as a WHOLE?

10 Asian tectonics: from theoretical sample models to large-scale simulation (the Asian tectonics problem)

11 Parallel investigation of Asian plate deformation

12 Investigating Asian plate deformation

13 Developing a full 3-D model of tsunamis

14 Tsunami Modeling 2

15 Tsunami modeling: details of the finite element mesh. Element data come from GTOPO30 topography; the generated finite element meshes contain more than 2 million nodes for the parallel version. [Figure: local zoom-in of the mesh]

16 Tsunami modeling: three-dimensional simulation of tsunami generation and uplift formation around islands

17 Full 3D simulation of tsunami propagation
Our formulation allows tracking and simulation of the three principal stages of a tsunami: formation, propagation, and run-up as waves come ashore. The sequential version of the code runs on a workstation with 4 GB of memory in less than 2 minutes per time step for one million grid points. The code has also been parallelized with MPI-2 and scales linearly. We have used actual ocean seafloor topographic data to construct the oceanic volume and to represent the coastline as realistically as possible, using 11 levels of structured meshes in the radial direction of the Earth. To understand the intricate dynamics of the wave interactions, we have implemented a visualization overlay based on Amira (www.amiravis.com), a 3-D volume-rendering visualization tool for massive data post-processing, and used it to visualize tsunami wave propagation.
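For readers unfamiliar with how such a code is typically parallelized, below is a minimal 1-D domain-decomposition sketch with halo exchange, written with Python/mpi4py purely for illustration (the actual tsunami code and its data structures are not shown in the slides; all names and the update rule here are ours):

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 1_000_000                       # total grid points (one million, as on the slide)
n_local = n_global // size                 # points owned by this rank (assume exact division)
u = np.zeros(n_local + 2)                  # local field with one ghost cell on each side

left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):                     # a few illustrative time steps
    # Exchange ghost cells with neighboring ranks before updating the interior.
    comm.Sendrecv(u[1:2],   dest=left,  recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Placeholder interior update (a simple averaging stencil, standing in for the real solver).
    u[1:-1] = 0.5 * (u[:-2] + u[2:])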

