Scientific Discovery via Visualization Using Accelerated Computing

Presentation transcript:

Scientific Discovery via Visualization Using Accelerated Computing

Scientific Achievement
VTK-m updates scientific visualization software to run on the accelerated processors (such as GPUs and the Intel Xeon Phi) that drive the most powerful HPC systems.

Significance and Impact
If we fail to update our scientific visualization software to run efficiently on modern HPC systems, we will lose the tools required to make scientific discoveries.

Research Details
- Initial research was fragmented across three separate projects. SDAV brought the PIs together to coordinate and to transition to the unified software used today.
- ParaView 5.3 and VisIt 2.11 offer VTK-m acceleration. The upcoming VTK 8.0 will offer VTK-m accelerated filters.
- In situ tools have been demonstrated on Titan with thousands of GPUs.

[Figure: ParaView using VTK-m acceleration to visualize data from a supernova simulation.]
[Figure: VTK-m in situ with Catalyst/PyFR in batch mode on Titan with 5,000 GPUs.]

References
Sewell et al., Proceedings of the Ultrascale Visualization Workshop, November 2012, DOI 10.1109/SC.Companion.2012.36.
Moreland et al., IEEE Computer Graphics and Applications, volume 36, number 3, May/June 2016, DOI 10.1109/MCG.2016.48.

We are updating our HPC scientific visualization tools to enable scientists to make discoveries from large-scale simulation data. DOE HPC system design is undergoing a revolution that incorporates new "accelerator" processor technologies, such as the NVIDIA GPU and the Intel Xeon Phi, which require a major redesign of software. Visualization software is particularly difficult to port because it involves complex data structures with many data dependencies that must be coordinated across computing threads. Early work beginning around 2010 started independently in three projects, named EAVL, Dax, and Piston, with no coordination among them. The SDAV SciDAC institute brought the PIs together to enable this coordination. The eventual result was a plan and agreement to replace these unconnected products with a single software framework, named VTK-m. This plan was proposed, accepted, and implemented under ASCR research funding in coordination with SDAV.

The roadmap for VTK-m includes integration with the existing scientific visualization software so that we can continue to leverage the hundreds of thousands of person-hours that have gone into those projects over many years. Today, the initial functionality of VTK-m is available in the latest versions of ParaView and VisIt, and VTK-m will be available in the next major release of VTK. VTK-m has also been demonstrated at scale, performing in situ visualization on the Titan supercomputer with 5,000 GPUs. The project page is at http://m.vtk.org. (A minimal sketch of VTK-m's programming model appears at the end of this transcript.)

[Timeline figure, 2012-2017. Milestones: EAVL, Dax, and Piston being developed; VTK-m started by merging EAVL, Dax, and Piston; VTK-m 1.0.0 released; VTK-m added to VTK and ParaView; VTK-m added to VisIt; VTK-m live in situ with Catalyst/PyFR.]
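To make the programming model concrete, below is a minimal sketch of a VTK-m "worklet". This example is illustrative rather than taken from the highlight, and it assumes VTK-m 1.6 or later; the API of the early releases discussed above differed. A worklet describes the work for a single data element, and VTK-m schedules it across the threads of whatever device adapter is available (for example CUDA, TBB, OpenMP, or serial), which is how the framework hides the thread coordination described in the narrative.

// Minimal VTK-m worklet sketch (illustrative; assumes VTK-m 1.6 or later).
#include <vtkm/cont/ArrayHandle.h>
#include <vtkm/cont/Initialize.h>
#include <vtkm/cont/Invoker.h>
#include <vtkm/worklet/WorkletMapField.h>

#include <iostream>
#include <vector>

// A worklet defines per-element work; VTK-m maps it onto the threads of
// the selected device (GPU via CUDA, multi-core CPU via TBB/OpenMP, etc.).
struct Square : vtkm::worklet::WorkletMapField
{
  using ControlSignature = void(FieldIn, FieldOut);
  using ExecutionSignature = void(_1, _2);

  template <typename T>
  VTKM_EXEC void operator()(const T& in, T& out) const
  {
    out = in * in;
  }
};

int main(int argc, char* argv[])
{
  // Parses VTK-m's device-selection command-line options and sets up the library.
  vtkm::cont::Initialize(argc, argv);

  std::vector<vtkm::Float32> values{ 1.0f, 2.0f, 3.0f, 4.0f };
  auto input = vtkm::cont::make_ArrayHandle(values, vtkm::CopyFlag::On);
  vtkm::cont::ArrayHandle<vtkm::Float32> output;

  // The Invoker launches the worklet once per input value on the device.
  vtkm::cont::Invoker invoke;
  invoke(Square{}, input, output);

  auto portal = output.ReadPortal(); // brings results back to the control side
  for (vtkm::Id i = 0; i < portal.GetNumberOfValues(); ++i)
  {
    std::cout << portal.Get(i) << "\n"; // prints 1 4 9 16
  }
  return 0;
}

The same worklet source compiles unchanged for every supported backend; porting to a new accelerator means adding a device adapter to the framework rather than rewriting each visualization algorithm, which is the design choice that let VTK-m replace the three separate EAVL, Dax, and Piston code bases.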