BlueGene/L Facts

Presentation transcript:

Platform Characteristics     512-node prototype          64-rack BlueGene/L machine
Peak Performance             1.0 / 2.0 TFlop/s           180 / 360 TFlop/s
Total Memory Size            128 GByte                   16 / 32 TByte
Footprint                    9 sq ft                     2,500 sq ft
Total Power                  9 kW                        1.5 MW
Compute Nodes                512 (dual-processor)        65,536 (dual-processor)
Clock Frequency              500 MHz                     700 MHz
Networks                     Torus, Tree, Barrier (both systems)
Torus Bandwidth              3 bytes/cycle (both systems)

S/W Component                Key Feature
Compute Node Kernel          Scalability via simplicity and determinism
Linux                        More complete range of OS services
Compilers: XLF, XLC/C++      Industry standard; automatic support for the SIMD FPU
MPI library                  Based on MPICH2, highly tuned to BG/L
Control system               Novel, database-centric design

Slide photos: the 512-node prototype and the BlueGene/L compute ASIC.
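
The peak-performance row follows directly from the clock frequency, the node count, and the per-node floating-point width (two processors per node, each with a two-pipe FPU, as noted in the transcript below). A minimal sketch of that arithmetic, assuming each FPU pipe retires one fused multiply-add (two flops) per cycle; the "1.0 / 2.0" and "180 / 360" pairs correspond to using one or both processors per node for computation:

# Sketch of the peak-FLOP/s arithmetic behind the table above.
# Assumption: each FPU pipe does one fused multiply-add (2 flops) per cycle,
# so one processor delivers 4 flops/cycle and a dual-processor node 8 flops/cycle.
FLOPS_PER_PROC_PER_CYCLE = 4

def peak_tflops(nodes, clock_ghz, procs_used=2):
    """Peak performance in TFlop/s for a given node count and clock frequency."""
    per_node_gflops = procs_used * FLOPS_PER_PROC_PER_CYCLE * clock_ghz
    return nodes * per_node_gflops / 1000.0

# 512-node prototype at 500 MHz: ~1.0 TFlop/s (one proc) / ~2.0 TFlop/s (both)
print(peak_tflops(512, 0.5, procs_used=1), peak_tflops(512, 0.5))
# 64-rack machine, 65,536 nodes at 700 MHz: ~183 / ~367 TFlop/s (quoted as 180 / 360)
print(peak_tflops(65536, 0.7, procs_used=1), peak_tflops(65536, 0.7))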

BlueGene Molecular Dynamics Demo: The Blue Matter molecular dynamics framework is running on a 32-node BlueGene/L system, and the IBM T221 display is updated in real time by the simulation. Blue Matter is a modular system developed for the BlueGene science program. The simulations run at constant particle number, volume, and energy (NVE).

The first simulation is a small patch of lipid bilayer in water, representing a cell membrane. A large number of potential drug targets are membrane-bound proteins, making lipid bilayers, and the proteins embedded in them, important subjects of molecular dynamics research. Simulations of membrane systems can involve tens to hundreds of thousands of atoms, making them ideal candidates for a supercomputer of BlueGene/L's scale.

The second simulation is of a small peptide in water: a beta-hairpin with 16 amino acid residues taken from the C-terminus of Protein G. The folding of this hairpin shares many characteristics of folding in larger proteins and has been studied extensively. The folded system is held at 500 K and unfolds as the simulation proceeds. Each molecular dynamics time-step represents one femtosecond (10⁻¹⁵ sec), while the folding time of the peptide chain at room temperature is measured in microseconds (10⁻⁶ sec), so billions of time-steps are required to simulate folding even in this small system.

The Prototype Protein Viewer (PPV), a component of Blue Matter, is fed atom coordinates as they are generated by the molecular dynamics code. The viewer shows hydrogen bonds within the peptide and lipid and also displays derived information. Since hydrogen bonds are essential in determining the shape and function of proteins, watching the bonds form and break provides a microscopic view of the folding process in action.

The BlueGene/L Supercomputer: IBM, collaborating with Lawrence Livermore National Laboratory, other DOE/NNSA labs, and other partners, has developed a 512-node prototype of BlueGene/L at IBM Research in Yorktown. The prototype is ranked number 73 in the world, with a peak speed of 2 TF/s and a sustained 1.4 TF/s on the LINPACK benchmark. The full BlueGene/L supercomputer, to be completed in early 2005, will have 65,536 compute nodes and 1,024 I/O nodes; each I/O node, running Linux, will manage a group of 64 compute nodes. BlueGene/L is expected to top the list of the world's supercomputers, with a peak processing speed of 180/360 TF/s and 16/32 TB of memory, and it will process data at a rate of one terabit per second. Each BlueGene/L node has two processors, each with two floating-point pipes, and is capable of a peak speed of 2.8/5.6 GF/s. Two nodes are mounted on a module, 16 modules fit into a chassis, and 32 chassis fill a rack; a total of 64 racks will be installed at Lawrence Livermore National Laboratory by early 2005. In contrast to the ASCI White computer, which has a footprint of 10,000 square feet, BlueGene/L will fit into about 2,500 square feet and draw approximately 1.5 MW.
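
As a back-of-the-envelope check, the node counts and time-step counts quoted above can be multiplied out directly; the constants below are taken from the transcript itself, not from any additional source:

# Packaging arithmetic: nodes per module, chassis, and rack, as described above.
nodes_per_module = 2
modules_per_chassis = 16
chassis_per_rack = 32
racks = 64

compute_nodes = nodes_per_module * modules_per_chassis * chassis_per_rack * racks
io_nodes = compute_nodes // 64            # one Linux I/O node per 64 compute nodes
print(compute_nodes, io_nodes)            # 65536 compute nodes, 1024 I/O nodes

# Time-step arithmetic: a 1-femtosecond step versus a microsecond-scale folding time
# implies on the order of a billion molecular dynamics steps.
steps_to_fold = 1e-6 / 1e-15              # folding time / time-step length
print(f"{steps_to_fold:.0e} time-steps")  # 1e+09 time-steps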