A look at computing performance and usage

• 3.6 GHz Pentium 4: 1 GFLOPS
• 1.8 GHz Opteron: 3 GFLOPS (2003)
• 3.2 GHz Xeon X5460, quad-core: 82 GFLOPS
• IBM Roadrunner: 1.1 PFLOPS
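To put these numbers in perspective: peak floating-point throughput is conventionally estimated as sockets × cores per socket × clock rate × FLOPs per cycle. A minimal sketch of that formula, with assumed FLOPs-per-cycle values (the figures above are likely measured results, so they sit below these theoretical peaks):

```python
# Theoretical peak FLOPS = sockets x cores/socket x clock x FLOPs/cycle.
# The 4 FLOPs/cycle below is an assumption for illustration: a
# Core-microarchitecture Xeon can issue a 128-bit SSE add and a 128-bit
# SSE multiply each cycle (2 + 2 double-precision FLOPs).

def peak_gflops(sockets, cores_per_socket, clock_ghz, flops_per_cycle):
    """Theoretical peak performance in GFLOPS."""
    return sockets * cores_per_socket * clock_ghz * flops_per_cycle

print(peak_gflops(1, 4, 3.2, 4))  # single Xeon X5460 socket: ~51 GFLOPS
print(peak_gflops(2, 4, 3.2, 4))  # dual-socket X5460 node:  ~102 GFLOPS
```

The 82 GFLOPS quoted above is plausibly a measured result on a dual-socket X5460 node, i.e. roughly 80% of the ~102 GFLOPS theoretical peak.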

#1: IBM Roadrunner
• 1,105 TFLOPS
• 120,000 cores
• AMD Opteron (1.8 GHz)
• PowerXCell 8i (3.2 GHz)
• Los Alamos National Laboratory, New Mexico
• Cost: $133M
• Manages the US nuclear weapons stockpile

#2: Cray Jaguar XT5
• 1,059 TFLOPS
• 150,000 cores: Opteron (2.3 GHz)
• Oak Ridge National Laboratory, Tennessee
• Usage awarded through the INCITE program
  – "Computational Protein Structure Prediction and Protein Design"
  – "Interaction of Turbulence and Chemistry in Lean Premixed Laboratory Flames"
  – Climate research, combustion, nuclear physics, fusion energy, space physics, and fluid turbulence

#3: SGI Pleiades
• 487 TFLOPS
• 51,000 cores: Xeon (3.0 GHz)
• NASA Ames Research Center, California

#4: IBM Blue Gene/L
• 478 TFLOPS
• 213,000 cores: PowerPC (700 MHz)
• Lawrence Livermore National Laboratory, California
• Reached its current size in 2007, when it was the world’s fastest computer
• Manages the US nuclear weapons stockpile (as Roadrunner also does)

#5: IBM Blue Gene/P (Intrepid)
• 450 TFLOPS
• 164,000 cores: PowerPC (850 MHz)
• Argonne National Laboratory, Illinois
• Uses a newer architecture than Blue Gene/L; can be expanded to 3 PFLOPS
• Usage granted through the U.S. Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program
  – Physics of star explosions
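One way to see the architectural trade-offs in the five systems above is to divide each machine’s benchmark figure by its core count. A minimal sketch using the rounded numbers from these slides (system names as identified above):

```python
# GFLOPS per core for the top five systems, from the rounded figures
# on the preceding slides. Order-of-magnitude comparison only.

systems = {
    "Roadrunner (Opteron + PowerXCell 8i)": (1105, 120_000),
    "Jaguar (Opteron, 2.3 GHz)":            (1059, 150_000),
    "Pleiades (Xeon, 3.0 GHz)":             (487,   51_000),
    "Blue Gene/L (PowerPC, 700 MHz)":       (478,  213_000),
    "Blue Gene/P (PowerPC, 850 MHz)":       (450,  164_000),
}

for name, (tflops, cores) in systems.items():
    print(f"{name:38s} {tflops * 1000 / cores:4.1f} GFLOPS/core")
```

The contrast is the point: the Blue Gene machines reach the top five with very large numbers of slow, low-power cores (roughly 2–3 GFLOPS each), while Roadrunner’s Cell accelerators and Pleiades’ 3 GHz Xeons deliver three to four times more throughput per core.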

The rest of the top 10:
• #6: Sun Ranger, nano-scale technology, Opteron
• #7: Cray Franklin XT4, simulation and modeling
• #8: Cray Jaguar XT4, Department of Energy projects
• #9: Cray Red Storm XT3, nuclear stockpile testing
• #10: Dawning 5000A, Opteron, China’s fastest
  – weather forecasting
  – oil exploration
  – genetic research
  – aviation and aeronautics

Sources: top500.org, networkworld.com, nytimes.com, cpu-world.com, intel.com, amd.com