
Visualization Support for XSEDE and Blue Waters
National Center for Supercomputing Applications, University of Illinois at Urbana–Champaign
DOE Graphics Forum 2014

Organization: Blue Waters, XSEDE, Advanced Digital Services

Common Blue Waters and XSEDE functions:
- User support (including visualization)
- Network infrastructure
- Storage
- Operations
- Support for future NCSA efforts
- Resources managed by service level agreements

Advanced Digital Services Visualization Group: support for data analysis and visualization.
- XSEDE: Dave Bock, Mark VanMoer
- Blue Waters: Dave Semeraro, Rob Sisneros

Visualization Support
- Software: ParaView, VisIt, yt, IDL (coming soon), plus ncview, matplotlib, ffmpeg, ...
- OpenGL driver on Blue Waters XK nodes
- Direct user support
- Data analysis / custom rendering
- Scaling analysis tools / parallel I/O
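Not from the slides: a minimal sketch of the kind of yt batch script this support covers, assuming a hypothetical dataset path and the standard gas density field.

```python
# Hedged sketch: a typical yt rendering script of the kind supported on
# Blue Waters and XSEDE resources. The dataset path is a placeholder.
import yt

ds = yt.load("simulation_output/data_0010")        # hypothetical dataset
slc = yt.SlicePlot(ds, "z", ("gas", "density"))    # axis-aligned density slice
slc.annotate_timestamp()                           # stamp the simulation time
slc.save("density_slice.png")                      # write a PNG for inspection
```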

XSEDE

Blue Waters visualization examples: stellar magnetic and temperature field; supernova ignition bubble; atmospheric downburst.

Blue Waters Compute System
- Total peak performance: PF
- Total system memory: PB
- XE Bulldozer cores*: 362,240
- XK Bulldozer cores* (CPU): 33,792
- XK Kepler accelerators (GPU): 4,224
- Interconnect architecture: 3D torus
- Topology: 24x24x24
- Compute nodes per Gemini: 2
- Storage: 26.4 PB
- Bandwidth: > 1 TB/sec

Blue Waters Visualization System: same system specifications as the compute system above.

Blue Waters Allocations
- GLCPC: 2%
- PRAC: over 80%
- Illinois: 7%
- Education: 1%
- Project Innovation and Exploration
- Industry

GLCPC: Great Lakes Consortium for Petascale Computing
GLCPC mission: "…facilitate and coordinate multi-institutional efforts to advance computational and computer science engineering, and technology research and education and their applications to outstanding problems of regional or national importance…"
- 2% allocation
- 501(c)(3) organization*
- 28 charter members**
- Executive Committee, Allocations Committee, Education Committee
* State 501(c)(3) filing complete, federal filing in progress
** The 28 charter members represent over 80 universities, national laboratories, and other education agencies

Industry S&E Teams
High interest shared by partner companies in the following:
- Scaling capability of a well-known and validated CFD code
- Temporal and transient modeling techniques and understanding
Two example cases under discussion:
- NASA OVERFLOW at scale for CFD flows
- Temporal modeling techniques using the freezing of H2O molecules as a use case, both as a reason to conduct large-scale single runs and to gain significant insight by reducing uncertainty
Industry can participate in the NSF PRAC process, and 5+% of the allocation can be dedicated to industrial use.
Specialized support is provided by the NCSA Private Sector Program (PSP) staff; Blue Waters staff will support the PSP staff as needed.
There is potential to provide specialized services within service level agreement parameters, e.g. throughput, extra expertise, specialized storage provisions, etc.

Impact of OpenGL on the XK nodes
NCSA, following S&E team suggestions, convinced Cray and NVIDIA to run a Kepler driver that enables OpenGL applications. This allows visualization tools to run directly on the XK nodes (possibly the first Cray system to do this).
Two early impacts:
- Schulten group (NAMD): 10x to 50x rendering speedup in VMD. OpenGL rendering serves as a backup to ray tracing and is used to fill in failed ray-traced frames. Potential for interactive remote display.
- Woodward group: eliminates the need to move data; created movies of a large simulation in days rather than a year.
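Not from the slides: a minimal sketch of the kind of batch rendering job an OpenGL-capable driver on the XK nodes makes possible, using ParaView's Python interface. The script file name and launch line are hypothetical.

```python
# Hedged sketch (not from the presentation): offscreen OpenGL rendering with
# ParaView's Python API, run directly on a GPU compute node, e.g.:
#   aprun -n 1 pvbatch render_sphere.py
from paraview.simple import Sphere, Show, Render, SaveScreenshot

source = Sphere(ThetaResolution=64, PhiResolution=64)  # simple test geometry
Show(source)                   # add the geometry to the active render view
Render()                       # draw the scene offscreen via OpenGL
SaveScreenshot("sphere.png")   # write the rendered frame to disk
```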

10,560^3 grid inertial confinement fusion (ICF) calculation with multifluid PPM. Rendered 13,688 frames at 2048x1080 pixels; 4 panels per view and 2 views per stereo give 4096x2160 pixels. The stereo movie is 1,711 frames.

Real improvement in time to solution:

Step                           | Local (Minnesota)                        | Remote (NCSA)
Raw data transfer              | 20 MB/s = 15 days                        | 0 sec
Rendering time (13,688 frames) | estimated 33 days (6 nodes, 1 GPU/node)  | 24 hours (128 GPUs)
Visualized data transfer       | 0                                        | 20 MB/s = 32 minutes
Total time                     | min 33, max 48 days                      | 24.5 hours

About 40x speedup, plus better analysis.
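Not from the slides: a back-of-the-envelope check of these numbers. The raw data volume is inferred from the quoted transfer rate and duration; it is not stated in the presentation.

```python
# Hedged sketch: sanity-check the time-to-solution comparison on the slide.
rate_mb_s = 20                    # quoted WAN transfer rate, MB/s
raw_transfer_days = 15            # quoted local-workflow raw data transfer time
raw_data_tb = rate_mb_s * 86400 * raw_transfer_days / 1e6
print(f"implied raw data volume: ~{raw_data_tb:.0f} TB")   # ~26 TB

local_total_days = (33, 48)       # local workflow total (transfer + rendering)
remote_total_hours = 24.5         # remote workflow total at NCSA
for days in local_total_days:
    print(f"speedup vs {days} days: {days * 24 / remote_total_hours:.0f}x")
# roughly 32x to 47x, consistent with the "about 40x" claim on the slide
```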