1 DOECGF 2011: LLNL Site Report
Integrated Computing & Communications Department
Livermore Computing, Information Management & Graphics
Richard Cook
April 26, 2011

2 Where is Graphics Expertise at LLNL?
 At the High-Performance Computing Center, in the Information Management and Graphics (IMG) Group
 In the Applications, Simulations, and Quality Division, in the Data Group (under Eric Brugger)
 At the Center for Applied Scientific Computing, in the Data Analysis Group (under Daniel Laney)

3 Who are our users and what are their requirements?
 Who?
 Physicists, chemists, biologists
 Computer scientists
 HPC users, from novice to expert
 Major science applications: ALE3d, ddcMD, pf3d, Miranda, CPMD, Qbox, MDCask, ParaDiS, climate, bio, …
 What?
 Need to analyze data, often interactively.
 Need to visualize data for scientific insight, publication, and presentation, sometimes collaborating with vis specialists.
 Need to interact with all or part of the data. For the largest data sets, zero-copy access is a must and data management is key.
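The "zero-copy access" requirement above can be illustrated with memory mapping: slices of a huge file are read in place rather than loading the whole file into memory. This is only an illustrative sketch; the file name and the flat array-of-doubles layout are assumptions for the example, not an LC data format.

```python
import mmap
import os
import struct
import tempfile

# Write a small stand-in data file: 1,000,000 doubles (values 0..999999).
# A hypothetical layout chosen for illustration only.
path = os.path.join(tempfile.mkdtemp(), "zones.bin")
with open(path, "wb") as f:
    f.write(struct.pack("1000000d", *range(1_000_000)))

# Memory-map the file and pull out five values from the middle
# without reading the rest of the file into a buffer.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        offset = 500_000 * 8                      # 8 bytes per double
        values = struct.unpack("5d", m[offset:offset + 5 * 8])

print(values)  # (500000.0, 500001.0, 500002.0, 500003.0, 500004.0)
```

The same idea scales to multi-TB files: the OS pages in only the regions actually touched.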

4 Information Management & Graphics Group
 Data exploration of distributed, complex, unique data sets
 Develops and supports tools for managing, visualizing, analyzing, and presenting scientific data
 Multi-TB datasets with 10s of billions of zones
 1000s of files per timestep and 100s of timesteps
 Using vis servers with high I/O rates
 Graphics consulting and video production
 Presentation support for PowerWalls
 Visualization hardware procurement and support
 Data management with the Hopper file manager
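A quick back-of-envelope calculation shows why datasets at the scale described above force high-I/O vis servers and dedicated data management. The specific numbers below are illustrative assumptions picked from the ranges on the slide ("10s of billions of zones", "1000s of files/timestep", "100s of timesteps"), not measured LC figures.

```python
# Illustrative sizing assumptions, not measured values.
zones = 20_000_000_000       # "10s of billions of zones"
bytes_per_zone = 5 * 8       # assume 5 double-precision fields per zone
files_per_step = 2_000       # "1000s of files/timestep"
timesteps = 300              # "100s of timesteps"

step_bytes = zones * bytes_per_zone
total_bytes = step_bytes * timesteps
total_files = files_per_step * timesteps

def tib(n):
    """Convert bytes to tebibytes."""
    return n / 2**40

print(f"per timestep: {tib(step_bytes):.2f} TiB")
print(f"whole run:    {tib(total_bytes):.1f} TiB across {total_files:,} files")
```

Even with these modest per-zone assumptions, a single run lands in the hundreds of TiB and hundreds of thousands of files.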

5 LC Graphics Consulting
 Support and maintain graphics packages
 Tools and libraries: VisIt, EnSight, AVS/Express, Tecplot, …
 Everyday utilities: ImageMagick, xv, xmgrace, gnuplot, …
 Custom development and consulting
 Custom scripts and compiled code to automate tool use
 Data conversion
 Tool support in parallel environments
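The "data conversion" consulting work above often amounts to small glue scripts. As a toy stand-in, here is a sketch that converts whitespace-separated columns (the format gnuplot and xmgrace typically consume) into CSV; the sample data, header names, and comment convention are assumptions for illustration, not a real LC workflow.

```python
import csv
import io

def columns_to_csv(text, header):
    """Convert whitespace-separated data columns to CSV.

    Lines starting with '#' are treated as comments and skipped
    (a common convention in gnuplot-style data files).
    """
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(header)
    for line in text.splitlines():
        if line.strip() and not line.startswith("#"):
            writer.writerow(line.split())
    return out.getvalue()

# Hypothetical two-column sample in gnuplot-style layout.
sample = "# time pressure\n0.0 1.00\n0.1 1.25\n"
result = columns_to_csv(sample, ["time", "pressure"])
print(result)
```

In practice such scripts get wrapped in batch loops over the thousands of files a simulation run produces.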

6 Visualization Theater Software Development
 Blockbuster Movie Player
 Distributed parallel design with a streaming I/O system
 Effective cache and I/O utilization for high frame rates
 Sidecar provides “movie cues” and remote control
 Cross-platform (Linux, Windows*, Mac OS): works on vis clusters and desktops with the same copy of a movie
 Technologies: C++, Qt, OpenGL, MPI, pthreads
 Blockbuster is open source: http://www.sourceforge.net/projects/blockbuster
 Telepath Session Manager
 Simple interface that hides the complexity of an environment including vis servers, displays, switches, and software layers such as resource managers and X servers
 Orchestrates vis sessions: allocates nodes, configures services, sets up environments, and manages the session
 Technologies: Python, Tkinter
 Interfaces to DMX (Distributed Multihead X, an X server of servers) and SLURM
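The frame-cache idea behind Blockbuster's high frame rates can be sketched as a small LRU cache: frames already decoded stay resident, so replaying a loop only pays the I/O cost once. This is an illustrative sketch of the general technique, not Blockbuster's actual C++/pthreads implementation; the `loader` callback and capacity are assumptions.

```python
from collections import OrderedDict

class FrameCache:
    """Tiny LRU frame cache, in the spirit of a movie player's
    read-ahead cache (illustrative only)."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader          # callback: frame number -> frame data
        self.frames = OrderedDict()   # insertion order tracks recency
        self.misses = 0

    def get(self, n):
        if n in self.frames:
            self.frames.move_to_end(n)           # mark most recently used
        else:
            self.misses += 1
            self.frames[n] = self.loader(n)      # "I/O" happens only on miss
            if len(self.frames) > self.capacity:
                self.frames.popitem(last=False)  # evict least recently used
        return self.frames[n]

# Replaying a 4-frame loop three times: only the first pass touches the loader.
cache = FrameCache(4, loader=lambda n: f"frame-{n}")
for _ in range(3):
    for n in range(4):
        cache.get(n)
print(cache.misses)  # 4
```

The real player adds prefetch threads and striped parallel reads on top of this caching core.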

7 Visualization Hardware Usage Model
 When possible or necessary, users run vis tools on the HPC platforms where the data was generated, using “vis nodes”.
 When advantageous or necessary, users run vis tools on interactive vis servers that share a file system with the compute platforms.
 Small display clusters drive the PowerWalls, removing the need for large vis servers to drive displays.
 Some applications require graphics cards in vis servers; others benefit from high bandwidth to the file system and a large memory footprint without needing graphics cards.
 Vis clusters are typically a fraction of the size of the other clusters (see the next slide for the LC hardware).

8 Current LLNL Visualization Hardware
 Two large servers and several small display clusters, all running Linux with the same admin support as the compute clusters. Four machine rooms.
 Users access the clusters over the network, using diskless workstations on the SCF and various workstations on the OCF. No RGB to offices.
[Diagram: vis servers Graph and Edge; Lustre and NAS file systems; PowerWalls 1 and 2, TSF PowerWall, 451 PowerWall; display clusters Grant, Boole, Moebius, Stagg, Thriller; spanning the Open Computing Facility and Secure Computing Facility]

9 Specs for two production vis servers and five wall drivers
 PowerWall clusters are 6-10 nodes, all with Opteron CPUs and an InfiniBand interconnect; Quadro FX 5600 cards drive the walls with stereo, and FX 4600 cards drive the two walls without stereo.
 The newest nodes have 8 GB RAM per node; the older ones have 4 GB.

10 HPC at LLNL - Livermore Computing
 Production vis systems: Edge and Graph

11 Our petascale driver: Sequoia
 A multi-petaflop machine going into production in 2012
 Furthers our ability to simulate complex phenomena “just like God does it – one atom at a time”
 Uncertainty quantification
 3D confirmation of 2D discoveries for more predictive models
 The success of Sequoia will depend on an enormous off-machine petascale storage infrastructure

12 More Information/Contacts
 General LLNL computing information
 http://computing.llnl.gov
 DNT - B Division’s Data and Vis Group
 Eric Brugger: brugger1@llnl.gov
 Information Management and Graphics Group
 Becky Springmeyer: springme@llnl.gov
 Rich Cook: rcook@llnl.gov
 https://computing.llnl.gov/vis
 CASC Data Analysis Group
 Daniel Laney: laney1@llnl.gov
 https://computation.llnl.gov/casc/
 Scientific Data Management Project
 Jeff Long: jwlong@llnl.gov
 https://computing.llnl.gov/resources/hopper/

