Multi-Scale Parallel Computing in the Laboratory of Computational Geodynamics, Chinese Academy of Sciences

Yaolin Shi 1, Mian Liu 2,1, Huai Zhang 1, David A. Yuen 3, Hui Wang 1, Shi Chen 1, Shaolin Chen 1, Zhenzhen Yan 1

1 Laboratory of Computational Geodynamics, Graduate University of Chinese Academy of Sciences
2 University of Missouri-Columbia
3 University of Minnesota, Twin Cities

July 22nd

Computational Geodynamics

- Huge amounts of data from GIS, GPS, and other observations
- Large-scale parallel machines
- Rapid development of networks, including high-speed interconnections between HPC centers and between institutes
- Middleware for grid computing
- Advances in computational mathematics for large-scale linear systems and in nonlinear algorithms for parallel computing
- Problems are becoming more and more complex

There is more than one way to do parallel and grid computing

We are now considering two ways to do parallel computing:
- Developing state-of-the-art source-code packages, where specific types of models can be plugged into a general supporting system (wave, fluid, structure, etc.), as in GeoFEM
- Developing a platform that generates parallel and grid computing source code from the user's model description: a modeling-language-based computing environment

Automatic source code generator

FEM modeling language input (the weak form of the PDEs):

func
funa=+[u/x]
.........
funf=+[u/y]+[v/x]
.........
dist =+[funa;funa]*d(1,1)+[funa;funb]*d(1,2)+[funa;func]*d(1,3)
      +[funb;funa]*d(2,1)+[funb;funb]*d(2,2)+[funb;func]*d(2,3)
      +[func;funa]*d(3,1)+[func;funb]*d(3,2)+[func;func]*d(3,3)
      +[fund;fund]*d(4,4)+[fune;fune]*d(5,5)+[funf;funf]*d(6,6)
load = +[u]*fu+[v]*fv+[w]*fw-[funa]*f(1)-[funb]*f(2)-[func]*f(3)
       -[fund]*f(4)-[fune]*f(5)-[funf]*f(6)

Workflow: the physical model and its PDEs, written in the FEM modeling language, are turned into complete source code that runs on the HPCC and produces model results; input data come from the data grid (GEON and others).
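Read as a weak form, the dist and load lines above correspond to the elastic bilinear form and load functional below (our transcription, not from the slides; we take funa through funf to be the six engineering strain components, as the visible definitions [u/x] and [u/y]+[v/x] suggest, and f(1) through f(6) to enter as an initial-stress-type term):

$$
a(\mathbf{u},\delta\mathbf{u}) = \int_{\Omega} \delta\boldsymbol{\varepsilon}^{T} D \,\boldsymbol{\varepsilon}\, d\Omega ,
\qquad
F(\delta\mathbf{u}) = \int_{\Omega} \left( f_u\,\delta u + f_v\,\delta v + f_w\,\delta w \right) d\Omega
- \int_{\Omega} \delta\boldsymbol{\varepsilon}^{T}\,\mathbf{f}\, d\Omega ,
$$

where each [funi;funj]*d(i,j) term contributes $\int_{\Omega}\delta\varepsilon_i\, d_{ij}\, \varepsilon_j\, d\Omega$ to the element stiffness matrix.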

Why high-performance computing? Not all numerical results are reliable; even the first-order stress pattern requires high-precision numerical simulation.

Automated code generator, Step 1: from PDE expression to Fortran segments

PDE expression (contains the information of the physical model, such as the variables and equations, from which the element stiffness matrix is generated):

disp u v
coor x y
func funa funb func
shap %1 %2
gaus %3
mass %1
load = fu fv
$c6 pe = prmt(1)
$c6 pv = prmt(2)
$c6 fu = prmt(3)
$c6 fv = prmt(4)
$c6 fact = pe/(1.+pv)/(1.-2.*pv)
func
funa=+[u/x]
funb=+[v/y]
func=+[u/y]+[v/x]
stif
dist = +[funa;funa]*fact*(1.-pv) +[funa;funb]*fact*(pv)
       +[funb;funa]*fact*(pv) +[funb;funb]*fact*(1.-pv)
       +[func;func]*fact*(0.5-pv)

Generated Fortran segments (code that realizes the physical model at the element level; Segments 1-4 on the slide):

     *es,em,ef,Estifn,Estifv,
     *es(k,k),em(k),ef(k),Estifn(k,k),Estifv(kk),

      goto (1,2), ityp
    1 call seuq4g2(r,coef,prmt,es,em,ec,ef,ne)
      goto 3
    2 call seugl2g2(r,coef,prmt,es,em,ec,ef,ne)
      goto 3
    3 continue

      DO J=1,NMATE
        PRMT(J) = EMATE((IMATE-1)*NMATE+J)
      END DO
      PRMT(NMATE+1)=TIME
      PRMT(NMATE+2)=DT
      prmt(nmate+3)=imate
      prmt(nmate+4)=num

plus other element-matrix computing subroutines.
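For reference, the fact and dist lines above spell out the standard plane-strain constitutive matrix (our transcription, with E = pe and ν = pv):

$$
D = \frac{E}{(1+\nu)(1-2\nu)}
\begin{pmatrix}
1-\nu & \nu & 0 \\
\nu & 1-\nu & 0 \\
0 & 0 & \tfrac{1}{2}-\nu
\end{pmatrix},
\qquad
K_e = \int_{\Omega_e} B^{T} D\, B \, d\Omega ,
$$

whose shear entry reduces to the shear modulus $E/(2(1+\nu))$. Judging by its name, the seuq4g2 routine presumably evaluates $K_e$ on 4-node quadrilaterals with 2-point Gauss quadrature, but that reading is ours.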

Step 2: from algorithm expression to Fortran segments

Algorithm expression (contains the information for forming the global stiffness matrix of the model):

defi
stif S
mass M
load F
type e
mdty l
step 0
equation
matrix = [S]
FORC=[F]
SOLUTION U
write(s,unod) U
end

Generated Fortran segments (code that realizes the physical model at the global level; Segments 5 and 6 on the slide):

      do i=1,k
        do j=1,k
          estifn(i,j)=0.0
        end do
      end do
      do i=1,k
        estifn(i,i)=estifn(i,i)
        do j=1,k
          estifn(i,j)=estifn(i,j)+es(i,j)
        end do
      end do

      U(IDGF,NODI)=U(IDGF,NODI)
     *+ef(i)
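For readers unfamiliar with this step, here is a minimal standalone sketch of what such a global-assembly segment does (hypothetical Fortran; the names assmbl, lm, and astif and the dense-matrix storage are our assumptions for illustration, not the generator's actual output):

      subroutine assmbl(k, nglob, lm, es, astif)
c     Scatter one element stiffness matrix es(k,k) into a dense
c     global stiffness matrix astif(nglob,nglob).  lm(i) holds the
c     global equation number of local DOF i (0 = constrained DOF).
      implicit double precision (a-h,o-z)
      dimension es(k,k), lm(k), astif(nglob,nglob)
      do 20 i = 1, k
        ig = lm(i)
        if (ig .eq. 0) goto 20
        do 10 j = 1, k
          jg = lm(j)
          if (jg .eq. 0) goto 10
          astif(ig,jg) = astif(ig,jg) + es(i,j)
   10   continue
   20 continue
      return
      end

In a production code the global matrix would be stored in a sparse format, but the scatter-add pattern is the same.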

Step 3: plug the Fortran segments into a program stencil, forming the final FE program

Program stencil (the generated Segments 1-6 are spliced in at the #*.sub anchors):

      SUBROUTINE ETSUB(KNODE,KDGOF,IT,KCOOR,KELEM,K,KK,
     *NUMEL,ITYP,NCOOR,NUM,TIME,DT,NODVAR,COOR,NODE,
#SUBET.sub
     *U)
      implicit double precision (a-h,o-z)
      DIMENSION NODVAR(KDGOF,KNODE),COOR(KCOOR,KNODE),
     *U(KDGOF,KNODE),EMATE(300),
#SUBDIM.sub
     *R(500),PRMT(500),COEF(500),LM(500)
#SUBFORT.sub
#ELEM.sub
C     WRITE(*,*) 'ES EM EF ='
C     WRITE(*,18) (EF(I),I=1,K)
#MATRIX.sub
      L=0
      M=0
      I=0
      DO 700 INOD=1,NNE
      .........
      U(IDGF,NODI)=U(IDGF,NODI)
#LVL.sub
      DO 500 JNOD=1,NNE
      .........
  500 CONTINUE
  700 CONTINUE
      .........
      return
      end
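To make the splice concrete, here is how a single anchor resolves, reusing the element-dispatch segment shown in Step 1 (our illustration of the mechanism only). The stencil line

#ELEM.sub

becomes, after substitution,

      goto (1,2), ityp
    1 call seuq4g2(r,coef,prmt,es,em,ec,ef,ne)
      goto 3
    2 call seugl2g2(r,coef,prmt,es,em,ec,ef,ne)
      goto 3
    3 continue

so the stencil fixes the control flow and bookkeeping of the finite element program once, while each anchor receives model-specific code generated from the PDE and algorithm expressions.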

Grid computing profile

- Geo models
- Computing grids
- Data grids
- Clusters
- High-speed interconnections and middleware for grid computing

Is there one computing environment that can use all of these facilities as a WHOLE?

The Asian tectonics problem: from theoretical examples to large-scale simulation

Parallel investigation of Asian plate deformation

Investigating Asian plate deformation

Developing a full 3-D model of tsunamis

Tsunami modeling (figure slide)

Tsunami modeling: details of the finite element mesh. Element data come from GTOPO30, and the generated finite element meshes have more than 2 million nodes for the parallel version. Local zoom-in shown.

Tsunami modeling: three-dimensional simulation of tsunami generation and of uplift formation around islands.

Full 3-D simulation of tsunami propagation

Our formulation allows the tracking and simulation of the three principal stages of a tsunami: formation, propagation, and run-up as the waves come ashore. The sequential version of this code runs on a workstation with 4 GB of memory at less than 2 minutes per time step for one million grid points. The code has also been parallelized with MPI-2 and scales linearly. We have employed actual ocean-seafloor topographic data to construct the oceanic volume, attempting to reproduce the coastline as realistically as possible, using 11 levels of structured meshes in the radial direction of the Earth. To understand the intricate dynamics of the wave interactions, we have implemented a visualization overlay based on Amira (www.amiravis.com), a 3-D volume-rendering visualization tool for massive data post-processing. (Figure: visualization of tsunami wave propagation.)
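The slides do not show the parallel code itself; below is a hedged sketch of the kind of MPI domain decomposition such an explicit propagation step typically uses, a 1-D strip partition of the wave-height field with halo exchange (all names, sizes, and the layout are our assumptions, not the actual tsunami code):

program halo_sketch
  ! Hedged illustration: each rank owns nyloc rows of a 2-D
  ! wave-height field plus two ghost rows (index 0 and nyloc+1).
  use mpi
  implicit none
  integer, parameter :: nx = 512, nyloc = 128
  double precision :: h(nx, 0:nyloc+1)
  integer :: rank, nprocs, left, right, ierr
  integer :: status(MPI_STATUS_SIZE)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  left  = merge(MPI_PROC_NULL, rank - 1, rank == 0)
  right = merge(MPI_PROC_NULL, rank + 1, rank == nprocs - 1)

  h = 0.0d0   ! ... set the initial sea-surface displacement here ...

  ! Exchange ghost rows with both neighbors once per time step.
  call MPI_Sendrecv(h(:, nyloc), nx, MPI_DOUBLE_PRECISION, right, 0, &
                    h(:, 0),     nx, MPI_DOUBLE_PRECISION, left,  0, &
                    MPI_COMM_WORLD, status, ierr)
  call MPI_Sendrecv(h(:, 1),       nx, MPI_DOUBLE_PRECISION, left,  1, &
                    h(:, nyloc+1), nx, MPI_DOUBLE_PRECISION, right, 1, &
                    MPI_COMM_WORLD, status, ierr)

  ! ... update the interior rows 1..nyloc of h from the stencil ...

  call MPI_Finalize(ierr)
end program halo_sketch

Because only the two ghost rows move between ranks each step, communication stays proportional to the strip boundary while computation grows with the strip volume, which is consistent with the near-linear scaling reported above.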