Nicole Ondrus Top 500 Parallel System Presentation

Presentation transcript:

#12 Encanto: Nicole Ondrus, Top 500 Parallel System Presentation, 5/06/09

Basic Facts
Current Rank: 12
System Name: Encanto
Location: Intel's New Mexico headquarters in Rio Rancho
Vendor: SGI
URL: http://www.newmexicosupercomputer.com/encanto.html
Application Area: Research
Installation Year: 2007
Operating System: SLES10 + SGI ProPack 5


More Facts about Encanto
Interconnect: InfiniBand DDR
Processor: SGI Altix ICE 8200 cluster
Total Number of Processors: 14,336 compute cores across 1,792 nodes in 28 racks
Total Memory: 28,000 GB
Benchmark Name and Peak Performance Measurement (FLOPS): TACC – 500 TFLOPS
Additional Fun Fact: As of February 19, 2009, DreamWorks made a deal with the New Mexico Computing Applications Center that allows DreamWorks to use Encanto to help render 3D films.
http://www.ukfast.co.uk/green-daily-news/dreamworks-to-use-efficient-desert-cloud.html
http://news.cnet.com/newsblog/?keyword=supercomputers
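The per-node and per-rack figures implied by these totals are easy to verify. A minimal sketch in Python, using only the numbers quoted on the slide (the derived per-node values are ours, and the slide's totals may be rounded):

```python
# Sanity-check Encanto's configuration from the slide's totals.
cores = 14_336
nodes = 1_792
racks = 28
memory_gb = 28_000

print(cores / nodes)      # 8.0   -> 8 cores per node (consistent with two quad-core sockets)
print(nodes / racks)      # 64.0  -> 64 nodes per rack
print(memory_gb / nodes)  # 15.625 GB of memory per node
```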

#11 JUGENE: Micheal Perolis, Top 500 Parallel System Presentation

Basic Facts
Current Rank: 11
System Name: JUGENE
Location: Forschungszentrum Juelich, Juelich, Germany
Vendor: IBM
URL: http://www.fz-juelich.de/portal/
Application Area: Research
Installation Year: 2007
Operating System: CNK/SLES 9


More Facts about JUGENE
Interconnect: Proprietary
Processor: PowerPC 450, 850 MHz (3.4 GFlops)
Total Number of Processors: 16 cabinets with 65,536 total cores
Total Memory: 64-128 gigabytes per node, for a total of 512-1024 terabytes of memory
Benchmark Name and Peak Performance Measurement (FLOPS): Rpeak – 222.8 TFLOPS
Additional Fun Fact: "In February 2009 it was announced that JUGENE would be upgraded to reach petaflops performance by June 2009, making it the first petascale supercomputer in Europe." It is also an order of magnitude more energy-efficient than a comparable x86-based supercomputer.
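The "(3.4 GFlops)" per-core figure follows directly from the clock rate: each PowerPC 450 core in a Blue Gene/P has a dual-pipeline FPU whose fused multiply-adds retire 4 floating-point operations per cycle. A minimal sketch of that arithmetic:

```python
# Per-core theoretical peak = clock rate x floating-point ops per cycle.
clock_hz = 850e6        # PowerPC 450 at 850 MHz
flops_per_cycle = 4     # 2 FPU pipes x 2 ops each (fused multiply-add)

peak_gflops = clock_hz * flops_per_cycle / 1e9
print(peak_gflops)      # 3.4 GFLOPS, matching the slide
```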


#10 Dawning 5000A: Matthew Variola, Top 500 Parallel System Presentation

Basic Facts
Current Rank: 10
System Name: Dawning 5000A
Location: Shanghai Supercomputer Center, Shanghai, China
Vendor: Dawning
URL: http://www.dawning.com.cn/en/index.asp
Application Area: genome mapping, earthquake assessment, precision weather forecasting, mining surveys, and stock exchange data
Installation Year: 2008
Operating System: Windows HPC 2008


More Facts about Dawning 5000A
Interconnect: InfiniBand DDR
Processor: AMD x86_64 Opteron Quad Core, 1900 MHz (7.6 GFlops)
Total Number of Processors: 7,680 processors totaling 30,720 cores
Total Memory: 122,880 GB
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark – 180.60 TFLOPS
Additional Fun Fact: "The Dawning 5000A can process a 36-hour weather forecast for Beijing in 3 minutes."
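Multiplying the per-core figure by the core count gives the machine's theoretical peak (Rpeak), and dividing the measured Linpack result (Rmax) by that peak gives the HPL efficiency. A sketch using only the slide's numbers:

```python
# Theoretical peak vs. measured Linpack performance for Dawning 5000A.
cores = 30_720
gflops_per_core = 7.6              # quad-core Opteron at 1.9 GHz, 4 flops/cycle
rmax_tflops = 180.60               # measured Linpack result from the slide

rpeak_tflops = cores * gflops_per_core / 1000
print(rpeak_tflops)                # 233.472 TFLOPS theoretical peak
print(rmax_tflops / rpeak_tflops)  # ~0.77 -> roughly 77% HPL efficiency
```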

#9 Red Storm, NNSA/Sandia National Laboratories: Mathew Skelly, Top 500 Parallel System Presentation

Basic Facts
Current Rank: 9
System Name: Red Storm
Location: Albuquerque, New Mexico (USA)
Vendor: Cray Inc.
URL: http://www.sandia.gov/
Application Area: Security Research
Installation Year: 2008
Operating System: UNICOS/SUSE Linux


More Facts about Red Storm
Interconnect: XT3 internal interconnect
Processor: AMD Opteron 2.4 GHz dual-core and 2.2 GHz quad-core processors
Total Number of Processors: 12,960 compute node processors, plus 320 + 320 service and I/O node processors
Total Memory: 75 terabytes of DDR memory, with 1,753 terabytes of disk storage; the system draws 2.5 megawatts of power
Benchmark Name and Peak Performance Measurement (FLOPS): 284.16 teraOPS theoretical peak performance

More Facts about Red Storm
Additional Fun Fact: "Currently, with existing models and existing hardware, the climate modeling community can run simulations on the order of one hundred years on grids with resolution of roughly 150 km. Sandia is working on advancing high-performance climate modeling to the 10 km resolution and developing agent-based societal and economic models capable of coupling to a climate model."

#8 Jaguar: XiongZhneg Guo, Top 500 Parallel System Presentation

Basic Facts
Current Rank: 8
System Name: Jaguar
Location: Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
Vendor: Cray Inc.
URL: http://www.nccs.gov/jaguar
Application Area: Research
Installation Year: 2008
Operating System: CNL

More Facts about Jaguar
Interconnect: XT4 internal interconnect
Processor: AMD x86_64 Opteron Quad Core, 2100 MHz (8.4 GFlops)
Total Number of Processors: 640 nodes with 8 vector processors each, for a total of 7,832 processors
Total Memory: 2 gigabytes per node, for a total of 362 terabytes of memory
Benchmark Name and Peak Performance Measurement (FLOPS): dual-socket benchmark – 73.6 gigaflops
Additional Fun Fact: Jaguar could not have been built without some form of liquid cooling to prevent hot spots. Occupying 4,400 square feet, it uses Cray's new ECOphlex cooling technology, which uses R-134a refrigerant, the same as in automobile air conditioners, to remove the heat as the air enters and exits each cabinet.

#7 Franklin Supercomputer: Caleb Colvin, Top 500 Parallel System Presentation

Basic Facts
Current Rank: 7
System Name: Franklin
Location: National Energy Research Scientific Computing Center in Berkeley, California
Vendor: Cray Inc.
URL: http://www.nersc.gov/
Application Area: Not Specified
Installation Year: 2008
Operating System: CNL

More Facts about Franklin
Interconnect: XT4 internal interconnect
Processor: Cray XT4 quad-core, 2.3 GHz (9.2 GFlops)
Total Number of Processors: 9,532 nodes with four processor cores each, for a total of 38,138 processor cores
Total Memory: 77,280 GB (77.28 TB) of memory
Benchmark Name and Peak Performance Measurement (FLOPS): HPL Benchmark – 356 TFLOPS
Additional Fun Fact: "The supercomputer was named after Benjamin Franklin in honor of his being one of the first great scientists in America."

#4 BlueGene/L: Anthony Young, Top 500 Parallel System Presentation

Basic Facts
Current Rank: 4
System Name: BlueGene/L
Location: Terascale Simulation Facility at Lawrence Livermore National Laboratory
Vendor: IBM
URL: https://asc.llnl.gov/computing_resources/bluegenel/
Application Area: Not Specified
Installation Year: 2007
Operating System: CNK/SLES 9


More Facts about BlueGene/L
Interconnect: Proprietary
Processor: PowerPC 440, 700 MHz (2.8 GFlops)
Total Number of Processors: originally 65,536 dual-processor compute nodes, now 106,496; the 40,960 new nodes have doubled the memory of the original machine
Total Memory: originally 73,728 GB, now doubled
Benchmark Name and Peak Performance Measurement (FLOPS): HPL Benchmark – originally 478.2 teraFLOPS, now a peak speed of 596 teraFLOPS
Additional Fun Fact: the machine was expanded "to accommodate growing demand for high-performance systems able to run the most complex nuclear weapons science calculations."
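The expansion figures above are self-consistent. A quick check of the arithmetic, using only the numbers quoted on the slide:

```python
# Verify the BlueGene/L expansion figures quoted above.
original_nodes = 65_536
added_nodes = 40_960
original_memory_gb = 73_728

print(original_nodes + added_nodes)  # 106,496 nodes, as stated
print(original_memory_gb * 2)        # 147,456 GB once the memory doubled
```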

#3 Pleiades: Kyle Greenfield, Top 500 Parallel System Presentation

Basic Facts
Current Rank: 3
System Name: Pleiades
Location: NASA/Ames Research Center/NAS, Mountain View, California
Vendor: SGI
URL: http://www.nas.nasa.gov/Resources/Systems/pleiades.html
Application Area: Conduct simulation and modeling for agency missions
Installation Year: 2008
Operating System: SLES10 + SGI ProPack 5


More Facts about Pleiades
Interconnect: InfiniBand
Processor: Intel EM64T Xeon E54xx (Harpertown), 3000 MHz (12 GFlops)
Total Number of Processors: 6,400 nodes with two quad-core processors per node, for a total of 51,200 processor cores
Total Memory: 8 gigabytes per node, for a total of 51.2 terabytes of memory
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark – 487 teraflop/s
Additional Fun Fact: "Pleiades ranks high on the Green500 supercomputer list at #22, based on computational efficiency."
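Both totals on this slide follow from the per-node configuration. A minimal sketch of the derivation:

```python
# Derive Pleiades' totals from its per-node configuration.
nodes = 6_400
sockets_per_node = 2
cores_per_socket = 4              # quad-core Harpertown Xeons
memory_gb_per_node = 8

print(nodes * sockets_per_node * cores_per_socket)  # 51,200 cores
print(nodes * memory_gb_per_node / 1000)            # 51.2 TB of memory
```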

#2 Jaguar: Effie Kisgeropoulos, Top 500 Parallel System Presentation

Basic Facts
Current Rank: 2
System Name: Jaguar
Location: Oak Ridge National Laboratory, U.S.
Vendor: Cray Inc.
URL: http://www.nccs.gov/jaguar/
Application Area: Not Specified
Installation Year: 2008
Operating System: CNL

More Facts about Jaguar
Interconnect: XT4 internal interconnect
Processor: AMD x86_64 Opteron Quad Core, 2300 MHz (9.2 GFlops)
Total Number of Processors: 7,832 nodes with a total of 150,152 processor cores
Total Memory: 62 terabytes
Benchmark Name and Peak Performance Measurement (FLOPS): HPL Benchmark – 1,059.00 TFLOPS
Additional Fun Fact: Jaguar uses a consistent programming model; this allows users to continue to evolve their existing codes rather than write new ones for every updated version.

#1 Roadrunner: George Abuzakhm, Top 500 Parallel System Presentation

Basic Facts
Current Rank: 1
System Name: Roadrunner
Location: Los Alamos, New Mexico
Vendor: IBM
URL: http://www.lanl.gov/
Application Area: Los Alamos National Laboratory
Installation Year: 2008
Operating System: Linux


More Facts about Roadrunner
Interconnect: InfiniBand
Processor: PowerXCell 8i, 3200 MHz (12.8 GFlops)
Total Number of Processors: 116,640 cores
Total Memory: 103.6 terabytes
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark – 1.026 PFLOPS, with an energy efficiency of 376 MFLOPS/Watt
Additional Fun Fact: "Roadrunner will primarily be used to ensure the safety and reliability of the nation's nuclear weapons stockpile. It will also be used for research into astronomy, energy, human genome science and climate change."
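MFLOPS/Watt is an energy-efficiency metric: sustained floating-point rate divided by power draw. A minimal sketch of the calculation; the slide gives no power figure, so the last lines merely back out the draw implied by its two numbers:

```python
# Energy efficiency: MFLOPS per watt = (FLOPS / 1e6) / watts.
def mflops_per_watt(flops: float, watts: float) -> float:
    return flops / 1e6 / watts

rmax_flops = 1.026e15                   # 1.026 PFLOPS Linpack result
implied_watts = rmax_flops / 1e6 / 376  # from the slide's 376 MFLOPS/Watt
print(implied_watts / 1e6)              # ~2.73 megawatts implied
```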