Large Scale Visualization on the Cray XT3 Using ParaView
Cray User's Group 2008, May 8, 2008
Kenneth Moreland, David Rogers, John Greenfield (Sandia National Laboratories); Alexander Neundorf (Technical University of Kaiserslautern); Berk Geveci, Patrick Marion (Kitware, Inc.); Kent Eschenberg (Pittsburgh Supercomputing Center)
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL.

Motivation We’ve spent 20 years developing specialized hardware for graphics/visualization applications. Originally, it was customary to move data from the simulation platform to the visualization platform. [Diagram: data moves from the simulation platform to the visualization platform.]

Motivation Moving data from large simulations is prohibitive. –It can take from hours to weeks depending on the data and the connection. [Diagram: data moves from the simulation platform to the visualization platform.]

Motivation Moving data from large simulations is prohibitive. –It can take from hours to weeks depending on the data and the connection. We’ve learned to co-locate the visualization platform with the data. –Or at least have a dedicated high-speed network. [Diagram: simulation and visualization platforms share a common data store.]

Motivation Bottlenecks are (in order): –File data management (moving/reading). –Data processing (isosurfaces, mathematics, computational geometry, etc.). –Rendering. In fact, we’ve had great success deploying on clusters with no graphics hardware.

Ideal Visualization Computer Cloud 9: A large parallel computer with direct access to data files and dedicated graphics hardware.

Ideal Visualization Computer Cloud 9: A large parallel computer with direct access to data files and dedicated graphics hardware. Cloud 8.5: A large parallel computer with direct access to data files. –Hey, that could be the Cray thingy we ran the simulation on.

Problem with Interactive Visualization For real-time interaction, you need a remote connection to a GUI (usually through a socket). [Diagram: a client running the GUI and scripting layer connects to the parallel server through a socket.]

Problem with Interactive Visualization For real-time interaction, you need a remote connection to a GUI (usually through a socket). [Diagram: the same client/server connection, with a note that Catamount provides no sockets.]

Problem with Interactive Visualization For real-time interaction, you need a remote connection to a GUI (usually through a socket). We get around this problem by removing the client altogether and moving the scripting over to the parallel “server,” which is doing the heavy lifting, as sketched below. [Diagram: a scripted parallel “server” with no remote client.]
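
What the scripted approach looks like in practice: a minimal sketch using today's paraview.simple batch API. These interface names are the modern ones and may differ from the scripting layer that was actually available on Catamount in 2008.

    # Minimal ParaView batch script: there is no remote client, and the
    # script runs directly in the parallel job doing the heavy lifting.
    from paraview.simple import *

    # Build a trivial pipeline in place of an interactive session.
    sphere = Sphere(ThetaResolution=64, PhiResolution=64)
    Show(sphere)
    Render()

    # Save the rendered image instead of displaying it in a GUI.
    SaveScreenshot('sphere.png')

A script like this is launched through ParaView's parallel batch executable under the site's job launcher rather than from a connected GUI.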

Ideal Visualization Computer Cloud 9: A large parallel computer with direct access to data files and dedicated graphics hardware and on-demand interactive visualization. Cloud 8.5: A large parallel computer with direct access to data files and on-demand interactive visualization. Cloud 6.2: A large parallel computer with direct access to data files and scriptable visualization.

Why Not Portals? Previous work has solved the same problem using VisIt and portals. Even if we implemented this, we might not be able to use it. –Using portals this way concerns administrators: extra network traffic and security issues (communication into and out of the compute nodes is limited). –Letting allocated nodes sit idle waiting for user input (common during interactive use) is frowned upon. Compute time is expensive. –Job queues cannot quickly start interactive jobs. Compute time is expensive, and nodes are constantly busy. We could pursue this, but we are unmotivated.

Implementation Details Python for Catamount. OpenGL for Catamount with no rendering hardware. Cross Compiling ParaView source.

Python for Catamount Initial port completed last year. –Reported implementation at CUG. No dynamic libraries: compile modules statically. –Must know the modules a priori. Cannot directly execute cross-compiled executables. –Used yod to get return codes for the configure/build process. Other Catamount-specific build configuration.
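
Since every required module has to be linked into the interpreter up front, it helps to confirm what actually made it into a static build. A small, generic sketch (nothing here is specific to the Catamount port):

    import sys

    # Modules compiled directly into the interpreter. With no dynamic
    # loading available, any extension module a visualization script
    # imports must appear in this list.
    print(sorted(sys.builtin_module_names))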

Python for Catamount Improvement for 2008: CMake build scripts. Created CMake build scripts to use in place of Python’s autoconf scripts. –Leverages the cross-compile support implemented for the ParaView source (discussed later). “Toolchain” files (small system-specific scripts) set up the cross-compile build parameters. –Set up Catamount-specific configuration. Can pass configure/build runtime checks. –No need to use yod during the build. Makes it easier to specify which modules to link statically.

OpenGL for Catamount Use Mesa 3D: the de facto software implementation of OpenGL. –Also contains code for using OpenGL without an X11 host. The Mesa 3D build comes with cross-compile support. –We added a cross-compile configuration for Catamount. –Our configuration is now included in the Mesa 3D distribution.
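
With VTK built against Mesa, rendering runs entirely off-screen with no X11 server. A minimal VTK Python sketch of that mode; the class names are standard VTK, but whether a given build actually renders without a display depends on how Mesa was configured:

    import vtk

    # Simple geometry with a mapper/actor pair.
    sphere = vtk.vtkSphereSource()
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(sphere.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    # Off-screen render window: no X display is required.
    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)
    window = vtk.vtkRenderWindow()
    window.SetOffScreenRendering(1)
    window.AddRenderer(renderer)
    window.Render()

    # Capture the frame buffer and write it to disk.
    grabber = vtk.vtkWindowToImageFilter()
    grabber.SetInput(window)
    writer = vtk.vtkPNGWriter()
    writer.SetInputConnection(grabber.GetOutputPort())
    writer.SetFileName('offscreen.png')
    writer.Write()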

Cross Compiling ParaView ParaView uses the CMake build tool. –Twelve months ago CMake had no special support for cross-compilation, which made it difficult. Added explicit cross-compilation controls. –Toolchain files make specifying target-system parameters simple. –“Try run” queries are skipped; CMake variables simplify hand-coding the information. CMake generates a script that can be edited to match the target system’s behavior, automating the filling of these variables. –A completed script is packaged with the ParaView source code.

Cross Compiling ParaView The ParaView build creates programs that generate new source code to compile. –Example: ParaView builds a program that parses VTK header files and generates C++ code for the Python bindings. –These programs cannot be run locally when cross-compiling. Solution: build ParaView twice, once for the local machine and once for the target machine. –The target build uses the programs from the local build. –CMake options to build only these intermediate applications keep the local build time trivial.

Example Usage ParaView is deployed on Bigben, the Cray XT3 at the Pittsburgh Supercomputing Center. Bigben is used by institutions around the country. –Many users have no direct access to visualization resources local to PSC. Moving simulation results is time consuming. Instead, perform the visualization on Bigben. Visualization results are typically much smaller than simulation results.

Hydrodynamic Turbulence and Star Formation in Molecular Clouds Alexei Kritsuk, University of California, San Diego. Density on grid.
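
A sketch of the kind of batch script used for a dataset like this: read the data, extract density isosurfaces, and save an image. The file name, array name, and isovalues below are placeholders, not the actual values from this run.

    from paraview.simple import *

    # Hypothetical input file and field name; substitute the real dataset.
    data = OpenDataFile('turbulence_density.vtk')

    # Extract a few density isosurfaces (placeholder values).
    contours = Contour(Input=data, ContourBy=['POINTS', 'Density'],
                       Isosurfaces=[0.5, 1.0, 2.0])

    Show(contours)
    Render()
    SaveScreenshot('density_contours.png')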

Scaling [Chart: time to render and save a frame with respect to job size.]
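
The per-frame numbers behind a plot like this can be gathered from inside the batch script itself. A sketch of one way to time the render-and-save step (illustrative only, not the measurement code used for these results):

    import time
    from paraview.simple import *

    # Stand-in pipeline; in practice this is the full visualization.
    Show(Sphere())

    # Time one render-and-save cycle.
    start = time.time()
    Render()
    SaveScreenshot('frame.png')
    print('render + save: %.2f s' % (time.time() - start))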

Conclusions The port of ParaView to the Cray XT3 is ready for anyone wishing to use it. Scripted visualization is straightforward and scalable. Interactive visualization on the Cray XT3 is unavailable due to the lack of sockets. –Anyone interested could implement communication in and out of the compute nodes if they have a use case (any takers?). –As Cray is moving to Compute Node Linux, which supports sockets, the point is probably moot.