SI2 Project Highlights
SI2-SSI: Community Software for Extreme-Scale Computing in Earthquake System Science
PIs: Thomas H. Jordan, Yifeng Cui, Kim B. Olsen, Ricardo Taborda
Project Dates: September 1, 2015 through August 31, 2019
Award Number: ACI-1450451

SCEC Extreme Scale Earthquake Simulation Software: Recent Accomplishments
19 July 2017

Thomas H. Jordan (tjordan@usc.edu)
Yifeng Cui (yfcui@sdsc.edu)
Kim Bak Olsen (kbolsen@mail.sdsu.edu)
Ricardo Taborda (ricardo.taborda@memphis.edu)

AWP-ODC on NVIDIA GPUs
- First 4-Hz nonlinear M7.7 earthquake simulation on the southern San Andreas Fault, conducted using 4,200 Blue Waters GPUs
- 100% parallel efficiency achieved for both the linear and nonlinear versions of AWP-ODC on up to 8,192 GPUs
- Time-to-solution accelerated from 0.68 s to 0.29 s per iteration for the nonlinear version (arithmetic sketched below)
- 6.5x speedup of the CyberShake SGT version on Cray XK7 nodes compared to XE6 nodes, at node-to-node level (Roten et al., SC'16)
- The Blue Waters PAID project provided additional support

Figure: Snapshots from the 4-Hz San Andreas simulation. Panels (a-c) and (d-f) show fault-parallel velocity for the linear and nonlinear cases, respectively; panels (g-i) depict the evolution of permanent plastic strain at the surface from the nonlinear simulation. The dashed line marks the fault trace.

(Roten, D., Y. Cui, K. Olsen, S. Day, K. Withers, W. Savran, P. Wang and D. Mu, High-frequency nonlinear earthquake simulations on petascale heterogeneous supercomputers, SC'16, 1-10, Nov 13-18, Salt Lake City, 2016)
https://blogs.nvidia.com/blog/2015/08/31/gpu-quake-hazard
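As a rough illustration of the performance figures above (a sketch of the arithmetic only, not code from the SC'16 paper), the per-iteration speedup and the weak-scaling efficiency reduce to simple ratios; the timings below are the ones reported on this slide:

    # Per-iteration speedup of the optimized nonlinear AWP-ODC GPU code.
    t_original = 0.68   # seconds per iteration, original nonlinear version
    t_optimized = 0.29  # seconds per iteration, optimized version
    print(f"speedup: {t_original / t_optimized:.2f}x")  # ~2.34x

    # Weak scaling: per-iteration time should stay flat as GPU count and
    # problem size grow together; equal times means 100% parallel efficiency.
    def weak_scaling_efficiency(t_base, t_scaled):
        return t_base / t_scaled

    print(weak_scaling_efficiency(0.29, 0.29))  # 1.0, i.e. 100% at 8,192 GPUs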

NVIDIA Share Your Science Video, SC'16
https://www.youtube.com/watch?v=fYV_DawfuCg&list=PL5B692fm6--tF7nnrkGv02V9lrKVqoCpl&index=59
http://on-demand.gputechconf.com/supercomputing/2016/video/sc6106-phil-maechling-earthquake-simulations-extreme-scaling.html
(Roten et al., SC'16)

AWP-ODC on Intel Xeon Phi
- Stencil generation and vector folding via the YASK tool: https://github.com/01org/yask
- Hybrid placement of grids in DDR and MCDRAM
- Normalized cross-architecture evaluation in Mega Lattice Updates per Second (MLUPS; see the sketch below): the Xeon Phi KNL 7290 achieves a 2x speedup over the NVIDIA K20X and 97% of NVIDIA Tesla P100 performance
- Performance on 9,000 nodes of Cori-II is equivalent to that of over 20,000 K20X GPUs at 100% scaling
- Open source: https://github.com/HPGeoC/awp-odc-os

Figures: Single-node performance comparison of AWP-ODC-OS across architectures, alongside each architecture's bandwidth as measured by STREAM and HPCG-SpMV. AWP-ODC-OS weak scaling on Cori Phase II and TACC Stampede KNL: 91% scaling from 1 to 9,000 nodes, with a problem size requiring 14 GB per node.

(Tobin, J., Breuer, A., Heinecke, A., Yount, C. and Cui, Y., Accelerating Seismic Simulations using the Intel Xeon Phi Knights Landing Processor, ISC High Performance'17, Frankfurt, June 18-22, 2017)
https://www.hpcwire.com/off-the-wire/sdsc-achieves-record-performance-seismic-simulations-intel/
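For readers unfamiliar with the MLUPS metric used in this comparison, a minimal sketch of how it is computed; the grid dimensions and timings here are hypothetical, not the benchmark configuration from the ISC'17 paper:

    # Mega Lattice Updates per Second: grid-point updates per wall-clock
    # second, divided by 1e6. Higher is better; it normalizes throughput
    # across architectures independently of the problem size.
    def mlups(nx, ny, nz, timesteps, wall_seconds):
        return nx * ny * nz * timesteps / wall_seconds / 1.0e6

    # Hypothetical example: a 512^3 grid advanced 1,000 timesteps in 60 s.
    print(f"{mlups(512, 512, 512, 1000, 60.0):,.0f} MLUPS")  # ~2,237 MLUPS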

AWP-ODC on Sunway TaihuLight
- Tsinghua University/Wuxi Supercomputing Center ported the open-source AWP-ODC using Sunway OpenACC and fully optimized the code on TaihuLight
- Sustained 15 Pflop/s, or 12.5% of peak; an ACM Gordon Bell finalist in 2017
- High-fidelity simulation of a 1976 Mw 7.2 Tangshan earthquake scenario using 10 million cores
- Spatial resolution of 25 m over a 320 km x 320 km x 60 km domain (grid-size estimate below)
- Frequencies up to 10 Hz, including nonlinear near-fault physics

(Haohuan Fu, Conghui He, Bingwei Chen, Zekun Yin, Zhenguo Zhang, Wenqiang Zhang, Tingjian Zhang, Wei Xue, Weiguo Liu, Wanwang Yin, Guangwen Yang and Xiaofei Chen, 15-Pflops Nonlinear Earthquake Simulation on Sunway TaihuLight: Enabling Depiction of Realistic 10 Hz Scenarios, SC'17, Nov 13-16, Denver, 2017)

Photo: Dr. Haohuan Fu of Tsinghua University/NSCW presenting the Gordon Bell weather and earthquake work in his PASC'17 keynote.
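The scale of the Tangshan run can be sanity-checked from the numbers above: a 320 km x 320 km x 60 km volume discretized at 25 m spacing contains on the order of 4 x 10^11 grid points. This is a back-of-the-envelope estimate that ignores any padding or absorbing boundary layers:

    # Rough grid-point count for the quoted domain and resolution.
    dx = 25.0                      # grid spacing in meters
    nx = int(320_000 / dx)         # 12,800 points
    ny = int(320_000 / dx)         # 12,800 points
    nz = int(60_000 / dx)          # 2,400 points
    print(f"{nx * ny * nz:.2e} grid points")  # ~3.93e+11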

AWP-DM: Wavefield Estimation using a Discontinuous Mesh Interface (WEDMI)
- Motivation: Uniform-grid methods are inefficient for large contrasts in seismic wave speeds, such as basin models.
- Challenges: Stability is inherently difficult to obtain in the overlap between fine and coarse meshes.
- Approach: Factor-of-three contrast in grid spacing along all three dimensions, on a 4th-order staggered grid (cost sketch below).
- Status: Stable to 1M+ timesteps for a factor-of-three velocity contrast inside the overlap zone; accurate in realistic basin velocity models using finite-fault sources in the overlap zone; scalable to 1,024+ GPUs; manuscript in press (Nie et al., 2017); one student (Nie, MS, graduated) and one researcher (Roten) supported by SI2.

Figures: Schematic of the fine mesh, overlap zone, and coarse mesh (panels 1-7); scaling comparison of the uniform mesh and DM against ideal scaling.

(Nie, S., Wang, Y., Olsen, K.B. and Day, S.M., 4th-order Staggered-grid Finite Difference Seismic Wavefield Estimation using a Discontinuous Mesh Interface (WEDMI), Bull. Seism. Soc. Am., 2017, in press)
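The payoff of the factor-of-three discontinuous mesh is easy to quantify with a simple cost model. This is an illustrative sketch under the assumption that only a fraction of the domain (the low-velocity basin) needs the fine spacing; it is not a calculation from Nie et al. (2017):

    # Grid points of a discontinuous mesh (DM) relative to a uniform fine mesh.
    # Coarsening the spacing by 3x in all three dimensions reduces the point
    # count in the coarse region by a factor of 3^3 = 27.
    def relative_dm_cost(fine_fraction):
        return fine_fraction + (1.0 - fine_fraction) / 27.0

    # Hypothetical example: if the basin occupies 20% of the domain volume,
    # the DM needs only about 23% of the uniform mesh's grid points.
    print(f"{relative_dm_cost(0.2):.1%}")  # 23.0%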

References

In the News
- 15-Pflop/s Nonlinear Earthquake Simulation on Sunway TaihuLight (using AWP-ODC), Gordon Bell Finalist in 2017, Tsinghua University/Wuxi Supercomputing Center; covered by Tsinghua News, an ISC'17 invited talk, and the PASC'17 keynote
- SDSC Achieved Record Seismic Simulation Performance with Intel; covered by HPCWire, insideHPC, NERSC News, Phys.Org News, UCSD News, the SDSC Facebook page, and Diane Bryant on Twitter
- Yifeng Cui Named SDSC Pi Person of 2016; SDSC Magazine and SCEC Twitter

Publications
- Tobin, J., A. Breuer, C. Yount, A. Heinecke and Y. Cui, Accelerating Seismic Simulations Using the Intel Xeon Phi Knights Landing Processor, Proceedings of International Supercomputing ISC'17, June 18-22, Frankfurt, 2017
- Nie, S., Y. Wang, K. Olsen and S. Day, 4th-order Staggered-grid Finite Difference Seismic Wavefield Estimation using a Discontinuous Mesh Interface (WEDMI), Bull. Seism. Soc. Am., accepted, 2017
- Roten, D., Y. Cui, K. Olsen, S. Day, K. Withers, W. Savran, P. Wang and D. Mu, High-frequency nonlinear earthquake simulations on petascale heterogeneous supercomputers, SC'16, Nov 13-18, Salt Lake City, 2016
- Roten, D., K.B. Olsen, S.M. Day and Y. Cui, Quantification of Fault-Zone Plasticity Effects with Spontaneous Rupture Simulations, Pure Appl. Geophys., pp. 1-23, doi:10.1007/s00024-017-1466-5, 2017

Invited Talks
- Breuer, A., Intel Booth at ISC'17, June 18-22, Frankfurt, 2017
- Cui, Y., High-frequency nonlinear earthquake simulations on Titan and Blue Waters, GTC'17, May 8-10, San Jose, 2017
- Maechling, P., Earthquake Simulations at Extreme Scales, NVIDIA Technology Theater at SC'16, Nov 13-18, Salt Lake City, 2016
- Jordan, T., Earthquake Simulations at Extreme Scales, GTC-DC, Oct 26-28, Washington DC, 2016
- Cui, Y., Regional scale earthquake simulations on OLCF Titan and NCSA Blue Waters, Perspectives of GPU Computing in Science 2016, Sept 26-28, Rome, 2016 (Keynote)
- Tobin, J., A. Breuer, C. Yount, A. Heinecke and Y. Cui, Accelerating AWP-ODC-OS using Intel Xeon Phi Processors, Intel IXPUG Workshop, Sept 19-22, Chicago, 2016 (Keynote)