Deploying a Petascale-Capable Visualization and Analysis Tool
April 15, 2010


Purpose of the next three talks

Detail the VACET activities to deliver a petascale-capable tool to the Office of Science community (and others):
– Ensuring the software is capable of processing tomorrow's data
– Ensuring that the software scales (Joule)
– Software engineering and deployment
– Providing infrastructure to support the community
– Outreach to the community

VisIt: delivering a petascale-capable visualization and analysis tool to the Office of Science, the DOE, and beyond

Problem: Office of Science application scientists need tools for visualization and analysis (exploration, confirmation, and communication).

Solution: VACET has extended VisIt to address problems unique to the Office of Science, including data size, and has deployed it to the community. This includes data with trillions of cells processed on tens of thousands of cores.

Impact: Many Office of Science simulation codes now use VisIt. Eleven letters of support from SciDAC-funded groups were submitted for the VACET review. A large capability has been delivered in a cost-effective manner.

Timeline:
– Fall 2006: Project started.
– 2007: VACET enables multi-institution development. APDEC retires ChomboVis in favor of VisIt and repurposes the savings for math.
– 2008: VACET enables VisIt to run on trillions of cells and tens of thousands of cores. Both NSF XD centers commit to supporting VisIt.
– Summer 2009: VisIt becomes the first ever non-simulation Joule code.
– Fall 2009: GNEP/NEAMS choose VisIt due to VACET leadership.

The VisIt software repository has ~30 developers from more than 10 institutions.
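For a concrete sense of how scientists typically drive VisIt in batch, the sketch below uses VisIt's Python command-line interface. This is an illustration, not code from the talk: the dataset path and the variable name "density" are hypothetical placeholders, while the functions shown (OpenDatabase, AddPlot, DrawPlots, SaveWindow) are part of VisIt's standard scripting API.

```python
# Minimal sketch of a batch VisIt session, run with: visit -cli -nowin -s script.py
# The VisIt CLI injects its API into the global namespace, so no import is needed.
# The dataset path and the "density" variable are hypothetical placeholders.

OpenDatabase("localhost:/path/to/simulation.silo")

# A pseudocolor plot of one scalar field -- the basic exploration plot.
AddPlot("Pseudocolor", "density")
DrawPlots()

# Write the rendered frame to a PNG file.
swa = SaveWindowAttributes()
swa.fileName = "density_view"
swa.format = swa.PNG
SetSaveWindowAttributes(swa)
SaveWindow()
```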

We studied isocontouring and volume rendering, looking at up to 4 trillion cells.

[Figures: visualization of 2 trillion cells with VisIt on JaguarPF using 32,000 cores; visualization of 1 trillion cells with VisIt on Franklin using 16,000 cores.]
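As a rough illustration of what these two operations look like when scripted, here is a sketch using VisIt's Python interface. "Contour" and "Volume" are VisIt's standard plot types; the variable name "density" is again a hypothetical placeholder, and the isovalue count is arbitrary.

```python
# Sketch of the two operations studied, scripted through VisIt's Python CLI.
# Assumes a database is already open; "density" is a hypothetical variable.

# Isocontouring: extract ten evenly spaced isosurfaces of the field.
AddPlot("Contour", "density")
c = ContourAttributes()
c.contourNLevels = 10
SetPlotOptions(c)

# Volume rendering of the same field as a second plot in the window.
AddPlot("Volume", "density")

DrawPlots()
```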

We demonstrated that VisIt performs well on tens of thousands of cores with trillions of cells. The goal was to uncover bottlenecks on tomorrow's data. Experiments varied over the supercomputing environment, data generation patterns, and I/O pattern.
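To give a sense of how such runs are launched, VisIt's scripting interface can start a parallel compute engine explicitly before any data is opened, so that subsequent plots execute across many cores. The sketch below is an assumption-laden illustration: the host name, file path, and processor/node counts are placeholders, and real sites typically supply batch-system launch arguments through per-machine host profiles.

```python
# Sketch: explicitly launching a parallel VisIt compute engine before opening
# data. The host name and the "-np"/"-nn" (processor/node) counts shown here
# are hypothetical; actual values and launch arguments vary by site.
OpenComputeEngine("jaguarpf.ccs.ornl.gov", ("-np", "32000", "-nn", "2000"))

# Data opened after the engine is up is read and processed in parallel.
OpenDatabase("jaguarpf.ccs.ornl.gov:/scratch/run/data.silo")
```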

Outreach

We have worked hard to deploy VisIt through tutorials, user support, documentation, etc.

Tutorials:

Event         Location            Date            Attendance
SC09          Portland, OR        November 2009   ~50
Vis09         Atlantic City, NJ   October 2009    ~75
NUG 2009      Boulder, CO         October 2009    ~30
ACTS          Berkeley, CA        August 2009     ~35
Princeton     Princeton, NJ       July 2009       ~30
SciDAC 2009   San Diego, CA       June 2009       ~40
CScADS        Snowbird, UT        July 2008       ~30
SciDAC 2008   Seattle, WA         July 2008       ~40
SciDAC 2007   Cambridge, MA       June 2007       ~20