Cross-Site Simulations on the TeraGrid
George Em Karniadakis, Division of Applied Mathematics, Brown University
The CRUNCH group: www.cfm.brown.edu/crunch
Spectral elements, micro/nano-fluidics, parallel computing

Grand-Challenge Problem 1: Turbulence – Drag Crisis (Tightly Coupled Problem)
Turbulence – the last frontier in classical physics
Climate, environment, transport, energy, …
Re = 300,000 (CPU cost ~ Re^3) requires 20 billion DOFs
Memory: 4 TBytes
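The CPU ~ Re^3 scaling quoted above can be illustrated with a back-of-the-envelope calculation; a minimal sketch in which only the scaling exponent comes from the slide and the sample Reynolds numbers are made up:

```python
def dns_cost_ratio(re_new: float, re_old: float, exponent: float = 3.0) -> float:
    """Relative CPU cost of a DNS when the Reynolds number grows,
    assuming cost ~ Re^exponent (the slide quotes exponent = 3)."""
    return (re_new / re_old) ** exponent

# Doubling Re multiplies the CPU cost by 2^3 = 8.
ratio = dns_cost_ratio(2.0e5, 1.0e5)
print(ratio)  # 8.0
```

This is why cross-site runs matter here: a modest increase in Re quickly exceeds any single machine's capacity.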

Grand-Challenge Problem 2: Human Arterial Tree (Loosely Coupled Problem)
Wave propagation in a model of the arterial circulation (data for the 55 main arteries from J.J. Wang and K. Parker, 1997)

First Parallel TeraGrid Paradigm
Whole flow domain split across TG sites: NCSA IA-64 and SDSC IA-64
In-site communication within each cluster; cross-site (all-to-all) communication between sites

DNS versus Experiments: max Re = 10,000
Energy spectrum with the -5/3 slope; black: simulation, blue: experiment (RMS velocity)
Experiments: Rockwell, 2004

Turbulence: Single-Site Performance
Fixed problem size and fixed workload
PSC: Compaq Alpha EV68, 1 GHz; 300 million DOFs, 2-level MPI
MPICH-G2 and native MPI perform similarly (SDSC IA-64)
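The two panels, fixed problem size (strong scaling) and fixed workload per processor (weak scaling), are usually summarized as parallel efficiency; a minimal sketch of the two definitions, with made-up timings rather than the PSC data:

```python
def strong_efficiency(t1: float, tp: float, p: int) -> float:
    """Strong scaling: fixed total problem size; ideal is tp = t1 / p."""
    return t1 / (p * tp)

def weak_efficiency(t1: float, tp: float) -> float:
    """Weak scaling: fixed work per processor; ideal is tp = t1."""
    return t1 / tp

# Hypothetical timings: 100 s on 1 proc, 13 s on 8 procs (strong);
# 110 s on 8 procs with 8x the total work (weak).
print(strong_efficiency(100.0, 13.0, 8))  # ~0.96
print(weak_efficiency(100.0, 110.0))      # ~0.91
```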

Turbulence: Cross-Site Performance
Fixed problem size and fixed workload
Half the processors from NCSA, half from SDSC; Intel IA-64 (Itanium-2, 1.5 GHz)
Slow-down factor: 1.5 (SDSC TG / NCSA TG)
Cost breakdown shown: FFT, matrix transposition
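The matrix transposition listed above is the cross-site bottleneck of a parallel FFT: each site transforms its local rows, then the array is transposed (an all-to-all) so the other direction becomes local. A serial numpy sketch of that row-FFT / transpose / row-FFT decomposition, without MPI, just the algebra a distributed FFT parallelizes:

```python
import numpy as np

def fft2_by_transpose(a: np.ndarray) -> np.ndarray:
    """2-D FFT computed as: FFT along rows, transpose, FFT along rows,
    transpose back. In a distributed code the transpose becomes an
    all-to-all exchange between processors (or sites)."""
    step1 = np.fft.fft(a, axis=1)        # row FFTs on local data
    step2 = np.fft.fft(step1.T, axis=1)  # transpose, then row FFTs again
    return step2.T                       # transpose back

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8))
assert np.allclose(fft2_by_transpose(a), np.fft.fft2(a))
```

Because every element crosses the network in the transpose, its cost is set by the slowest link, which is why the cross-site run pays a 1.5x penalty.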

1D Model – Sherwin et al. / Imperial College
Inflow conditions: U(t) prescribed at the ascending aorta
Outflow conditions: peripheral resistance at the terminal vessels (e.g., tibial)
P(t), U(t) and the characteristic waves W1, W2 shown at the ascending aorta, thoracic aorta, femoral and tibial arteries
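A peripheral-resistance outflow condition of this kind can be sketched as a lumped two-element Windkessel; the code below is a generic forward-Euler integration of C dP/dt = Q - P/R with hypothetical R and C values, not the parameters of Sherwin et al.:

```python
def windkessel(q_in, r: float, c: float, dt: float, p0: float = 0.0):
    """Two-element Windkessel outflow model: C dP/dt = Q_in - P/R.
    Returns the pressure history for a given inflow sequence q_in."""
    p, hist = p0, []
    for q in q_in:
        p += dt * (q - p / r) / c  # forward Euler step
        hist.append(p)
    return hist

# Constant inflow: pressure relaxes toward the resistive level Q*R.
r, c, q = 2.0, 0.5, 1.0
p_hist = windkessel([q] * 5000, r, c, dt=0.01)
print(round(p_hist[-1], 3))  # approaches q*r = 2.0
```

The resistance sets the steady pressure level and the RC product sets how fast the terminal vessel responds, which is the behavior the "peripheral resistance" boundary condition imposes on the 1D tree.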

Platelet Aggregation in Arterioles and Venules
Parameters: vessel diameter 50 µm; vessel length (µm); blood velocity (µm/s); platelet diameter 3 µm; platelet concentration (per mm³); platelet density ≈ fluid density
Simulation time: 28 s
(Figure: flow direction, venules, platelet aggregate)

Growth Rate vs. Blood Velocity
Experiments: Begent and Born, Nature, Vol. 227, No. 5261, 1970

Second Parallel TeraGrid Paradigm
Multiscale Simulation of the Arterial Tree

Arterial-Tree: Cross-Site Performance (Homogeneous Network)
Fixed problem size and fixed workload
Three arteries; 4 million DOFs per artery
1 CPU/node on ANL; 2 CPUs/node on NCSA/SDSC
No slow-down, full scalability (SDSC TG, ANL TG, NCSA TG)

Arterial-Tree: Cross-Site Performance (Heterogeneous Network)
PSC connects to the TG via an application gateway (qsockets)
Two arteries per site
PSC processor: 2 GF vs. 6 GF for IA-64 (SDSC TG, NCSA TG, PSC TG)

New Unique Capability
Potentially unlimited scalability; enabling technology
–Integrate "real and virtual" in projects like:
–Digital human, digital ocean, digital space, …
Predictability and Uncertainty
–Stochastic simulations
–Prediction vs. postdiction
–Risk-based/reliability-based design
–Sensitivity analysis – steering of experiments (e.g., the DDDAS concept)
Inverse Problems
–Engineering design
–Biomedical sciences
–Geological/climate modeling
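The stochastic-simulation bullet above, in its simplest form, is Monte Carlo propagation of input uncertainty through a model; a toy sketch in which the model and the input distribution are invented purely for illustration:

```python
import random

def propagate(model, sample_input, n: int = 20000, seed: int = 42):
    """Monte Carlo uncertainty propagation: push n random inputs
    through the model and report the output mean and std deviation."""
    rng = random.Random(seed)
    out = [model(sample_input(rng)) for _ in range(n)]
    mean = sum(out) / n
    var = sum((x - mean) ** 2 for x in out) / (n - 1)
    return mean, var ** 0.5

# Toy model: a quantity ~ v^2 with v uncertain (Gaussian, mean 10, sd 1).
mean, sd = propagate(lambda v: v * v, lambda rng: rng.gauss(10.0, 1.0))
print(round(mean, 1))  # close to E[v^2] = 10^2 + 1^2 = 101
```

Each sample is an independent simulation, which is exactly the loosely coupled, embarrassingly parallel workload a cross-site grid handles well.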

What Users Need
Debuggers for the TG (a la TotalView)
New topology-aware parallel algorithms
Sustained network/cluster performance
TG visualization capability
Middleware
–Robust MPICH-G2
–Co-scheduling
–Network & Globus diagnostics
–Authentication/security – often in conflict
Consultants/referees with TG expertise