PEER 2003 Meeting 03/08/03. Interdisciplinary Framework. Major focus areas: Structural Representation, Fault Systems, Earthquake Source Physics, Ground Motions.

Presentation transcript:

PEER 2003 Meeting 03/08/03
Interdisciplinary Framework
Major focus areas: Structural Representation, Fault Systems, Earthquake Source Physics, Ground Motions, Seismic Hazard Analysis.
Implementation interface: Risk Assessment & Mitigation (e.g., PEER).

When Producing Broad-Impact Computational Data Products, Forecast Testing Should Increase Along with Forecast Impact
Forecast user categories (from figure): Public and Governmental Forecasts; Engineering and Interdisciplinary Research; Collaborative Research Project; Individual Research Project.
Forecast testing requirements (from figure):
– Computational codes, structural models, and simulation results versioned with associated tests.
– Development of new computational, data, and physical models.
– Automated retrospective testing of forecast models using community-defined validation problems.
– Automated prospective testing of forecast models over time within a collaborative forecast testing center.
Figure axes: SCEC Computational Forecast Users vs. Scientific and Engineering Requirements for Forecast Modeling Systems.
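
The automated retrospective testing described above can be thought of as a small regression harness that reruns a forecast model on community-defined validation problems and checks the results against accepted reference values. The sketch below is a minimal illustration of that idea; the validation entries and the run_forecast_model() stand-in are placeholders, not any actual SCEC testing tool.

```python
# Minimal sketch of automated retrospective testing of a forecast model against
# community-defined validation problems. The validation entries and the
# run_forecast_model() stand-in are illustrative placeholders; in practice the
# problems would come from community-maintained files and the model from the
# versioned forecast code.

VALIDATION_PROBLEMS = [
    {"name": "layer_over_halfspace", "inputs": {"trial_value": 0.52}, "reference_value": 0.50},
    {"name": "point_source_1hz",     "inputs": {"trial_value": 1.30}, "reference_value": 1.25},
]

def run_forecast_model(inputs):
    """Stand-in for the forecast code; replace with the real model invocation."""
    return inputs["trial_value"]

def retrospective_test(problems, tolerance=0.10):
    """Compare model output to reference values; return a pass/fail report."""
    report = []
    for prob in problems:
        predicted = run_forecast_model(prob["inputs"])
        reference = prob["reference_value"]
        rel_error = abs(predicted - reference) / abs(reference)
        report.append((prob["name"], rel_error, rel_error <= tolerance))
    return report

if __name__ == "__main__":
    for name, err, passed in retrospective_test(VALIDATION_PROBLEMS):
        print(f"{name}: relative error {err:.3f} -> {'PASS' if passed else 'FAIL'}")
```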

Productive Open-Science Research
SCEC bridges research (NSF) and operational (USGS, DOE) organizations.
– SCEC's mission is to translate the latest research results into use with public impact, applying the best techniques to established seismic hazard calculations.
The CME bridges the scientists and computer scientists within SCEC.
– The CME provides an organization that integrates existing SCEC scientific codes for research purposes. The best and most qualified codes within SCEC are adopted and highly optimized, and CME simulations run at scales beyond individual investigators. The CME develops techniques for improving the accuracy and precision of existing, widely used, high-impact calculations, including PSHA and seismograms.

Computational Science Broader Impact
Seismology has several existing information interfaces to broad-impact users. Each interface represents a transfer of specific computational data products between groups, and each type of computation requires specialized techniques.
– Probabilistic Seismic Hazard Curves: forecast of peak ground motion at a site over a period of years (building code development).
– Scenario Earthquake Seismograms: forecast of ground motions up to 10 Hz at any site for an arbitrary earthquake (building engineers).
– Earthquake Early Warning: ground motion alerts (public and workplace).
– Earthquake ShakeMaps: geographic distribution and levels of peak shaking (public and press).
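
As a concrete illustration of the first data product above, a probabilistic seismic hazard curve combines earthquake rupture rates with ground-motion exceedance probabilities: the annual rate of exceeding a ground-motion level x is the sum over ruptures of rate_i times P(IM > x | rupture_i). The sketch below assumes a lognormal ground-motion model and uses made-up rupture rates and medians rather than any real forecast or attenuation relation.

```python
# Minimal sketch of a probabilistic seismic hazard curve:
# annual exceedance rate lambda(IM > x) = sum_i rate_i * P(IM > x | rupture_i),
# assuming a lognormal ground-motion model. Rupture rates, medians, and sigmas
# below are illustrative placeholders, not a real rupture forecast.
import math

def lognormal_exceedance(x, median, sigma_ln):
    """P(IM > x) for a lognormally distributed IM with given median and ln-sigma."""
    z = (math.log(x) - math.log(median)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Each rupture: annual rate, median peak ground acceleration (g), sigma of ln(PGA).
ruptures = [
    {"rate": 0.01,   "median_pga": 0.10, "sigma_ln": 0.6},
    {"rate": 0.002,  "median_pga": 0.35, "sigma_ln": 0.6},
    {"rate": 0.0005, "median_pga": 0.60, "sigma_ln": 0.6},
]

def hazard_curve(pga_levels):
    """Annual exceedance rate at each ground-motion level."""
    return [
        sum(r["rate"] * lognormal_exceedance(x, r["median_pga"], r["sigma_ln"])
            for r in ruptures)
        for x in pga_levels
    ]

if __name__ == "__main__":
    levels = [0.05, 0.1, 0.2, 0.4, 0.8]
    for x, lam in zip(levels, hazard_curve(levels)):
        print(f"PGA > {x:.2f} g: annual rate {lam:.2e}")
```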

Need for a Balanced Software System
HPC systems seek a balance between compute, communications, and input/output to maximize performance and effectiveness; performance improvements in one area move the bottleneck elsewhere in the system. Similarly, research results are obtained using simulation software systems (computational platforms), well-integrated suites of codes. As simulations scale up, all elements of the simulation software system must advance together.
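
To make the balance argument concrete, one can write a back-of-the-envelope time model for a single simulation step and see where the time goes. The sketch below uses made-up hardware and workload numbers; it simply shows that a 10x compute speedup alone moves the bottleneck to I/O.

```python
# Back-of-the-envelope time model for one simulation step: compute,
# communication, and I/O terms, assuming no overlap between them.
# All workload and hardware numbers are illustrative placeholders.

def step_time(flops, flop_rate, comm_bytes, net_bw, io_bytes, io_bw):
    """Return (compute_s, comm_s, io_s) for one simulation step."""
    return flops / flop_rate, comm_bytes / net_bw, io_bytes / io_bw

workload = dict(flops=5e15, comm_bytes=2e11, io_bytes=1e12)
baseline = dict(flop_rate=1e14, net_bw=5e10, io_bw=1e11)   # FLOP/s, B/s, B/s
faster_compute = dict(baseline, flop_rate=1e15)            # 10x compute only

for name, hw in [("baseline", baseline), ("10x compute", faster_compute)]:
    c, m, i = step_time(workload["flops"], hw["flop_rate"],
                        workload["comm_bytes"], hw["net_bw"],
                        workload["io_bytes"], hw["io_bw"])
    parts = {"compute": c, "comm": m, "I/O": i}
    bottleneck = max(parts, key=parts.get)
    print(f"{name}: compute {c:.1f}s, comm {m:.1f}s, I/O {i:.1f}s -> bottleneck: {bottleneck}")
```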

Translating Performance into Production
At milestone scale, every simulation is custom, hand-crafted. This is unavoidable at the start, but problematic in the long run. Once a milestone simulation is achieved, we drive its performance improvements into routine, automated, production computing.
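
One common way to move from hand-crafted milestone runs to routine production computing is to generate batch job scripts from a parameter list instead of editing each one by hand. The sketch below assumes a SLURM-style scheduler; the executable name, partition, and run parameters are hypothetical placeholders, not the actual SCEC production setup.

```python
# Minimal sketch of automating production runs: generate batch scripts from a
# parameter list instead of hand-editing each one. The wave_sim executable,
# partition name, and run parameters are hypothetical placeholders.
from pathlib import Path

SLURM_TEMPLATE = """#!/bin/bash
#SBATCH --job-name={name}
#SBATCH --nodes={nodes}
#SBATCH --time={walltime}
#SBATCH --partition=compute

srun ./wave_sim --site {site} --frequency {freq_hz}
"""

runs = [
    {"name": "site_A_1hz", "site": "A", "freq_hz": 1.0, "nodes": 64,  "walltime": "02:00:00"},
    {"name": "site_B_2hz", "site": "B", "freq_hz": 2.0, "nodes": 256, "walltime": "06:00:00"},
]

def write_job_scripts(out_dir="jobs"):
    """Write one batch script per planned run; submit later with sbatch."""
    Path(out_dir).mkdir(exist_ok=True)
    for run in runs:
        script = Path(out_dir) / f"{run['name']}.slurm"
        script.write_text(SLURM_TEMPLATE.format(**run))
        print(f"wrote {script}")

if __name__ == "__main__":
    write_job_scripts()
```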

Productive Open-Science Research
In our experience, when working with highly talented and highly motivated basic research groups, computational grand challenges can be highly productive open-science basic research. Challenges are designed to produce:
– A significant new scientific result.
– Significant new cyberinfrastructure that enables the calculation to be repeated.
The key to this approach is selecting the right grand challenge, one that drives both broad-impact science and the computational capabilities most needed. This talk will describe why M8 was the right grand challenge and offer suggestions on what the next right grand challenge should be.

SCEC/CME Annual Allocation Targets
1. USC HPCC: Linux cluster (some large-memory nodes) and 6 GPU nodes.
2. NSF XSEDE: Linux clusters, shared-memory computers, GPU nodes.
3. DOE INCITE: leadership-class Cray supercomputers, some with GPU nodes.
4. Blue Waters: NSF Track 1 Cray system with GPU nodes, accessible in Jan 2013.

Typical Elements in an Allocation Request
1. PI and co-PIs: can add users, transfer units, etc.
2. Research Objectives and Potential Impact: goals of the research for non-specialists and its potential impact.
3. Supporting Grants: list of grants funding the proposed research.
4. Status of Current Allocation and Current Results: hours used and results obtained under the current allocation.
5. Selection of Appropriate Computer Resources: specify the computing resources you wish to use.
6. List of Software to Be Used, with Performance Information: describe the software implementation and performance benchmarks.
7. List of Simulations with CPU Usage and Data Storage: the total over all simulations determines the request on each machine (see the sketch below).
8. Project Milestones and Schedule: progress reports must describe the status of milestones.
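
For item 7, the per-machine totals are straightforward arithmetic over the planned simulation list. The sketch below assembles a request total from hypothetical simulation names, node-hour costs, and output sizes.

```python
# Minimal sketch of totaling an allocation request (item 7): sum node-hours and
# output storage over planned simulations, per target machine. Simulation names,
# costs, and sizes are hypothetical placeholders.
from collections import defaultdict

planned_simulations = [
    {"name": "wave_prop_1hz", "machine": "XSEDE",    "runs": 20,  "node_hours_per_run": 500,  "tb_per_run": 2.0},
    {"name": "wave_prop_2hz", "machine": "INCITE",   "runs": 4,   "node_hours_per_run": 8000, "tb_per_run": 15.0},
    {"name": "hazard_curves", "machine": "USC HPCC", "runs": 100, "node_hours_per_run": 50,   "tb_per_run": 0.1},
]

def allocation_request(simulations):
    """Aggregate node-hours and storage (TB) per target machine."""
    totals = defaultdict(lambda: {"node_hours": 0.0, "storage_tb": 0.0})
    for sim in simulations:
        totals[sim["machine"]]["node_hours"] += sim["runs"] * sim["node_hours_per_run"]
        totals[sim["machine"]]["storage_tb"] += sim["runs"] * sim["tb_per_run"]
    return dict(totals)

if __name__ == "__main__":
    for machine, req in allocation_request(planned_simulations).items():
        print(f"{machine}: {req['node_hours']:,.0f} node-hours, {req['storage_tb']:.1f} TB")
```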

Allocation Responsibilities
1. Register the PI, co-PIs, and users: some systems, such as DOE's, have a stricter registration process.
2. Learn and abide by each supercomputer's terms of use: don't share accounts, don't run jobs on the head node, etc.
3. Receive and use a physical crypto-key: larger computing systems require physical password devices.
4. Monitor allocation usage and burn rate: monitoring reports and tools differ by system (a burn-rate sketch follows below).
5. Keep project data storage within quotas: large data requires close coordination with resource providers.
6. Submit quarterly progress reports: most programs require quarterly reports tracking progress.
7. Provide science results for non-scientists, with images: resource providers want to show results from their systems.
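
Responsibility 4 usually comes down to comparing hours used so far against a linear spend target for the allocation period. The sketch below illustrates that check with made-up dates and hour totals; it is not the reporting tool of any particular center.

```python
# Minimal sketch of monitoring allocation burn rate (responsibility 4):
# compare hours used so far against a linear spend target for the allocation
# period. Dates and hour totals are illustrative placeholders.
from datetime import date

def burn_rate_report(start, end, today, awarded_hours, used_hours):
    """Report fraction of period elapsed vs. fraction of allocation used."""
    elapsed = (today - start).days / (end - start).days
    used = used_hours / awarded_hours
    status = "over-spending" if used > elapsed else "under-spending"
    return (f"{elapsed:.0%} of period elapsed, {used:.0%} of hours used "
            f"({status} relative to a linear target)")

if __name__ == "__main__":
    print(burn_rate_report(start=date(2013, 1, 1), end=date(2013, 12, 31),
                           today=date(2013, 7, 1),
                           awarded_hours=2_000_000, used_hours=1_400_000))
```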

Conclusions
Numerical simulations of large earthquakes have now advanced to the point where they can usefully predict the strong ground motions from anticipated earthquake sources.
– The M8 simulation demonstrates the capability for full-scale simulations.
– Computational tools now facilitate verification, validation, and data assimilation.
Simulations can enhance the technologies for time-dependent seismic hazard analysis:
– Long-term probabilistic seismic hazard analysis.
– Operational earthquake forecasting.
– Earthquake and tsunami early warning.
– Post-earthquake information.
These applications pose new (and urgent) computational challenges:
– Exascale problems.
– Data-intensive computing.
– Rapid access to very large data sets.
– Robust on-demand computing.