
DOECGF 2011: LLNL Site Report
Integrated Computing & Communications Department
Livermore Computing, Information Management & Graphics
Richard Cook
April 26, 2011

Where is Graphics Expertise at LLNL?
- At the High-Performance Computing Center, in the Information Management and Graphics (IMG) Group
- In the Applications, Simulations, and Quality Division, in the Data Group (under Eric Brugger)
- At the Center for Applied Scientific Computing, in the Data Analysis Group (under Daniel Laney)

Who are our users and what are their requirements?
- Who?
  - Physicists, chemists, biologists
  - Computer scientists
  - HPC users, from novice to expert
  - Major science applications: ALE3D, ddcMD, pf3d, Miranda, CPMD, Qbox, MDCask, ParaDiS, climate, bio, ...
- What?
  - Need to analyze data, often interactively.
  - Need to visualize data for scientific insight, publication, and presentation, sometimes collaborating with vis specialists.
  - Need to interact with all or part of the data. For the largest data sets, zero-copy access is a must and data management is key.

Information Management & Graphics Group
- Data exploration of distributed, complex, unique data sets
- Develops and supports tools for managing, visualizing, analyzing, and presenting scientific data
  - Multi-TB datasets with 10s of billions of zones
  - 1000s of files per timestep and 100s of timesteps
  - Using vis servers with high I/O rates
- Graphics consulting and video production
- Presentation support for PowerWalls
- Visualization hardware procurement and support
- Data management with the Hopper file manager

LC Graphics Consulting
- Support and maintain graphics packages
  - Tools and libraries: VisIt, EnSight, AVS/Express, Tecplot, ...
  - Everyday utilities: ImageMagick, xv, xmgrace, gnuplot, ...
- Custom development and consulting
  - Custom scripts and compiled code to automate tool use (see the scripting sketch after this list)
  - Data conversion
  - Tool support in parallel environments
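
Much of this automation amounts to driving a tool such as VisIt in batch from its Python command-line interface. The following is a minimal sketch only, not an LC-supported script: the database pattern, variable name, and image size are hypothetical, and the calls assume a standard VisIt CLI session (for example one started with "visit -cli -nowin -s script.py").

    # Minimal sketch: render one image per timestep with VisIt's Python CLI.
    # The functions below (OpenDatabase, AddPlot, DrawPlots, SaveWindow, ...)
    # are provided by the VisIt CLI environment; names in quotes are hypothetical.
    OpenDatabase("run*.silo database")      # hypothetical time-varying database
    AddPlot("Pseudocolor", "pressure")      # hypothetical variable name
    DrawPlots()

    s = SaveWindowAttributes()
    s.format = s.PNG
    s.fileName = "frame"
    s.width, s.height = 1920, 1080
    SetSaveWindowAttributes(s)

    for state in range(TimeSliderGetNStates()):   # step through every timestep
        SetTimeSliderState(state)
        SaveWindow()                              # writes frame0000.png, frame0001.png, ...

A script along these lines can then be wrapped in a batch job so frame generation runs unattended on a vis cluster.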

Visualization Theater Software Development
- Blockbuster Movie Player
  - Distributed parallel design with a streaming I/O system
  - Effective cache and I/O utilization for high frame rates (see the sketch after this list)
  - Sidecar provides "movie cues" and remote control
  - Cross platform (Linux, Windows*, Mac OS); works on vis clusters and desktops with the same copy of the movie
  - Technologies: C++, Qt, OpenGL, MPI, pthreads
  - Blockbuster is open source:
- Telepath Session Manager
  - Simple interface that hides the complexity of an environment comprising vis servers, displays, switches, and software layers such as resource managers and X servers
  - Orchestrates vis sessions: allocates nodes, configures services, sets up environments, and manages the session
  - Technologies: Python, Tkinter
  - Interfaces to DMX (Distributed Multihead X, an "X server of X servers") and SLURM
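
The caching idea behind a streaming movie player can be shown with a small, self-contained sketch: a reader thread prefetches frames from disk into a bounded cache while the display loop consumes them, so I/O overlaps playback. This is only an illustration of the pattern, not Blockbuster code; the class, file pattern, and display() call are hypothetical.

    import threading
    import queue

    class FramePrefetcher:
        # Prefetch movie frames into a bounded cache ahead of the display loop.

        def __init__(self, frame_paths, cache_size=32):
            self.frame_paths = frame_paths
            # Bounded queue: caps memory use and applies back-pressure to the reader.
            self.cache = queue.Queue(maxsize=cache_size)
            self.reader = threading.Thread(target=self._read_loop, daemon=True)

        def _read_loop(self):
            for path in self.frame_paths:
                with open(path, "rb") as f:     # stand-in for a real frame read/decode
                    data = f.read()
                self.cache.put((path, data))    # blocks while the cache is full
            self.cache.put(None)                # end-of-movie sentinel

        def frames(self):
            self.reader.start()
            while True:
                item = self.cache.get()         # blocks only if the cache runs dry
                if item is None:
                    return
                yield item

    # Hypothetical usage: display() stands in for the OpenGL draw call.
    # for name, data in FramePrefetcher(sorted(glob.glob("movie/frame*.raw"))).frames():
    #     display(data)

Blockbuster's actual design applies this kind of streaming and caching in a parallel, distributed player (C++, MPI, pthreads) rather than a single Python thread.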

Visualization Hardware Usage Model
- When possible or necessary, users run vis tools on the HPC platforms where the data was generated, using "vis nodes".
- When advantageous or necessary, users run vis tools on interactive vis servers that share a file system with the compute platforms (a hypothetical launch sketch follows this list).
- Small display clusters drive the PowerWalls, removing the need for large vis servers to drive displays.
- Some applications require graphics cards in the vis servers; others benefit from high bandwidth to the file system and a large memory footprint without needing graphics cards.
- Vis clusters are typically a fraction of the size of the other clusters; see the next slide for the LC configuration.
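
As a concrete, hypothetical illustration of the first two bullets, a wrapper script can request vis nodes through SLURM and launch a parallel tool on them. The partition name, node count, and tool arguments below are assumptions rather than LC documentation; sites expose their own partitions and launch wrappers.

    # Hedged sketch: allocate vis nodes via SLURM and run a vis tool across them.
    # "pvis" is a hypothetical partition name; real site configurations differ.
    import subprocess

    def run_on_vis_nodes(nodes, command):
        # salloc obtains the allocation, then runs srun inside it.
        return subprocess.run(
            ["salloc", "-N", str(nodes), "--partition", "pvis",
             "srun", "-N", str(nodes)] + list(command),
            check=True)

    # e.g. run_on_vis_nodes(2, ["visit", "-cli", "-nowin", "-s", "make_frames.py"])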

Current LLNL Visualization Hardware
- Two large servers and several small display clusters, all running Linux with the same admin support as the compute clusters. Four machine rooms.
- Users access the clusters over the network, using diskless workstations on the SCF and various workstations on the OCF. No RGB to offices.
- [Diagram labels: Graph, Edge, Lustre, PW 1, PW 2, TSF PowerWall, 451 PowerWall, Open Computing Facility, Secure Computing Facility, NAS, Grant, Boole, Moebius, Stagg, Thriller]

Specs for two production vis servers and five wall drivers
- PowerWall clusters are 6-10 nodes, all with Opteron CPUs and an IB interconnect.
- Walls with stereo are driven by Quadro FX 5600 cards; the two walls without stereo use FX 4600s.
- The newest clusters have 8 GB of RAM per node; the older ones have 4 GB.

HPC at LLNL - Livermore Computing
- Production vis systems: Edge and Graph

Our petascale driver - Sequoia
- We have a multi-petaflop machine going into production in 2012
- Furthers our ability to simulate complex phenomena "just like God does it – one atom at a time"
- Uncertainty quantification
- 3D confirmations of 2D discoveries for more predictive models
- The success of Sequoia will depend on an enormous off-machine petascale storage infrastructure

More Information / Contacts
- General LLNL Computing Information
- DNT - B Division's Data and Vis Group
  - Eric Brugger:
- Information Management and Graphics Group
  - Becky Springmeyer:
  - Rich Cook:
- CASC Data Analysis Group
  - Daniel Laney:
- Scientific Data Management Project
  - Jeff Long: