#12 Encanto
Nicole Ondrus, Top 500 Parallel System Presentation, 5/06/09
Basic Facts
Current Rank: 12
System Name: Encanto
Location: Intel’s New Mexico headquarters in Rio Rancho
Vendor: SGI
URL:
Application Area: Research
Installation Year: 2007
Operating System: SLES10 + SGI ProPack 5
More Facts about Encanto
Interconnect: InfiniBand DDR
Processor: Altix ICE 8200 Cluster
Total Number of Processors: 14,336 compute cores, 1,792 nodes, 28 racks
Total Memory: GB
Benchmark Name and Peak Performance Measurement (FLOPS): TACC - 500 TFLOPS
Additional Fun Fact: As of February 19, 2009, DreamWorks made a deal with the New Mexico Computing Applications Center that will allow DreamWorks to use Encanto to help render 3D films.
#11 JUGENE
Micheal Perolis, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 11
System Name: JUGENE
Location: Forschungszentrum Juelich, Juelich, Germany
Vendor: IBM
URL:
Application Area: Research
Installation Year: 2007
Operating System: CNK/SLES 9
More Facts about JUGENE
Interconnect: Proprietary
Processor: PowerPC MHz (3.4 GFlops)
Total Number of Processors: 16 cabinets with 65,536 total cores
Total Memory: gigabytes per node for a total of terabytes of memory
Benchmark Name and Peak Performance Measurement (FLOPS): Rpeak - TFLOPS
Additional Fun Fact: “In February 2009 it was announced that JUGENE would be upgraded to reach petaflops performance by June 2009, making it the first petascale supercomputer in Europe.” Additionally, it is an order of magnitude more energy-efficient than a common x86-based supercomputer.
#10 Dawning 5000A
Matthew Variola, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 10
System Name: Dawning 5000A
Location: Shanghai Supercomputer Center, Shanghai, China
Vendor: Dawning
URL:
Application Area: genome mapping, earthquake assessment, precision weather forecasting, mining surveys, and stock-exchange data
Installation Year: 2008
Operating System: Windows HPC 2008
More Facts about Dawning 5000A
Interconnect: InfiniBand DDR
Processor: AMD x86_64 Opteron Quad Core 1900 MHz (7.6 GFlops)
Total Number of Processors: 7,680 processors totaling 30,720 cores
Total Memory: 122,880 GB
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark - TFLOPS
Additional Fun Fact: “The Dawning 5000A can process a 36-hour weather forecast for Beijing in 3 minutes.”
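The per-core GFlops and peak figures quoted on these slides follow from a simple formula: peak = cores x clock x floating-point operations per cycle. A minimal sketch checking the Dawning 5000A figures above (assuming 4 double-precision flops per cycle, as for these quad-core Opterons):

```python
# Theoretical peak (Rpeak) from the slide's figures:
# Rpeak = cores x clock (GHz) x flops per cycle.
cores = 30_720            # total cores quoted on the slide
clock_ghz = 1.9           # 1900 MHz
flops_per_cycle = 4       # assumption: 4 double-precision flops/cycle

gflops_per_core = clock_ghz * flops_per_cycle   # 7.6 GFlops, as quoted
rpeak_tflops = cores * gflops_per_core / 1000   # GFlops -> TFlops

print(f"{gflops_per_core} GFlops per core, {rpeak_tflops:.1f} TFLOPS peak")
# -> 7.6 GFlops per core, 233.5 TFLOPS peak
```

The 233.5 TFLOPS result is the theoretical ceiling; the measured Linpack number (the elided value on the slide) is always somewhat lower.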
#9 NNSA/Sandia National Laboratories
Mathew Skelly, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 9
System Name: Red Storm
Location: Albuquerque, New Mexico (USA)
Vendor: Cray Inc.
URL:
Application Area: Security Research
Installation Year: 2008
Operating System: UNICOS/SUSE Linux
More Facts about Red Storm
Interconnect: XT3 Internal Interconnect
Processor: AMD Opteron 2.4 GHz dual-core and 2.2 GHz quad-core processors
Total Number of Processors: 12,960 compute node processors, plus service and I/O node processors
Total Memory: 75 terabytes of DDR memory, with 1,753 terabytes of disk storage; the system draws 2.5 megawatts of power
Benchmark Name and Peak Performance Measurement (FLOPS): teraOPS theoretical peak performance
More Facts about Red Storm
Additional Fun Fact: “Currently, with existing models and existing hardware, the climate modeling community can run simulations on the order of one hundred years on grids with resolution of roughly 150km. Sandia is working on advancing high-performance climate modeling to the 10km resolution and developing agent-based societal and economic models capable of coupling to a climate model.”
#8 Jaguar
XiongZhneg Guo, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 8
System Name: Jaguar
Location: Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
Vendor: Cray Inc.
URL: http://www.nccs.gov/jaguar
Application Area: Research
Installation Year: 2008
Operating System: CNL
More Facts about Jaguar
Interconnect: XT4 Internal Interconnect
Processor: AMD x86_64 Opteron Quad Core 2100 MHz (8.4 GFlops)
Total Number of Processors: 640 nodes with 8 vector processors each for a total of processors
Total Memory: 2 gigabytes per node for a total of 362 terabytes of memory
Benchmark Name and Peak Performance Measurement (FLOPS): dual-socket Benchmark gigaflops
Additional Fun Fact: Jaguar could not have been built without some form of liquid cooling to prevent hot spots. At 4,400 square feet, the system uses Cray's new ECOphlex cooling technology, which circulates R-134a refrigerant, the same as in automobile air conditioners, to remove heat as the air enters and exits each cabinet.
#7 Franklin Supercomputer
Caleb Colvin, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 7
System Name: Franklin
Location: National Energy Research Scientific Computing Center in Berkeley, California
Vendor: Cray Inc.
URL:
Application Area: Not Specified
Installation Year: 2008
Operating System: CNL
More Facts about Franklin
Interconnect: XT4 Internal Interconnect
Processor: Cray XT4 quad-core 2.3 GHz (9.2 GFlops)
Total Number of Processors: 9,532 nodes with quad-core processors for a total of 38,138 processor cores
Total Memory: GB, or 77.2 TB, of memory
Benchmark Name and Peak Performance Measurement (FLOPS): HPL Benchmark - TFLOPS
Additional Fun Fact: “The Supercomputer System was named after Benjamin Franklin in honor of him being one of the first great scientists in America.”
#4 BlueGene/L
Anthony Young, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 4
System Name: BlueGene/L
Location: Terascale Simulation Facility at Lawrence Livermore National Laboratory
Vendor: IBM
URL:
Application Area: Not Specified
Installation Year: 2007
Operating System: CNK/SLES 9
More Facts about BlueGene/L
Interconnect: Proprietary
Processor: PowerPC MHz (2.8 GFlops)
Total Number of Processors: originally 65,536 nodes, now 106,496 dual-processor compute nodes; the 40,960 new nodes have doubled the memory of the original machine
Total Memory: original memory GB, now doubled
Benchmark Name and Peak Performance Measurement (FLOPS): HPL Benchmark - originally 478.2 teraFLOPS, now a peak speed of 596 teraFLOPS
Additional Fun Fact: The machine was expanded “to accommodate growing demand for high-performance systems able to run the most complex nuclear weapons science calculations.”
#3 Pleiades
Kyle Greenfield, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 3
System Name: Pleiades
Location: NASA/Ames Research Center/NAS, Mountain View, California
Vendor: SGI
URL:
Application Area: Conduct simulation and modeling for agency missions
Installation Year: 2008
Operating System: SLES10 + SGI ProPack 5
More Facts about Pleiades
Interconnect: InfiniBand
Processor: Intel EM64T Xeon E54xx (Harpertown) 3000 MHz (12 GFlops)
Total Number of Processors: 6,400 nodes with 2 quad-core processors per node for a total of processors
Total Memory: 8 gigabytes per node for a total of 51.2 terabytes of memory
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark - 487 teraflop/s
Additional Fun Fact: “Pleiades ranks high on the Green500 supercomputer list at #22, based on computational efficiency.”
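The memory total above is just the per-node figure multiplied by the node count, and the same per-node figures also imply the total core count (elided on the slide). A quick sketch using only numbers the slide states:

```python
# Deriving Pleiades totals from the per-node figures on this slide.
nodes = 6_400
gb_per_node = 8           # 8 gigabytes of memory per node
cores_per_node = 2 * 4    # 2 quad-core processors per node

total_memory_tb = nodes * gb_per_node / 1000   # GB -> TB
total_cores = nodes * cores_per_node

print(f"{total_memory_tb} TB of memory, {total_cores:,} cores")
# -> 51.2 TB of memory, 51,200 cores
```

The 51.2 TB result matches the memory total quoted on the slide.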
#2 Jaguar
Effie Kisgeropoulos, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 2
System Name: Jaguar
Location: Oak Ridge National Laboratory, U.S.
Vendor: Cray Inc.
URL:
Application Area: Not Specified
Installation Year: 2008
Operating System: CNL
More Facts about Jaguar
Interconnect: XT4 Internal Interconnect
Processor: AMD x86_64 Opteron Quad Core 2300 MHz (9.2 GFlops)
Total Number of Processors: 7,832 nodes with a total of 150,152 processors
Total Memory: 62 terabytes
Benchmark Name and Peak Performance Measurement (FLOPS): HPL Benchmark - TFLOPS
Additional Fun Fact: Jaguar uses a consistent programming model; this allows users to continue to evolve their existing codes rather than write new ones for every updated version.
#1 Roadrunner
George Abuzakhm, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 1
System Name: Roadrunner
Location: Los Alamos, New Mexico
Vendor: IBM
URL:
Application Area: Los Alamos National Laboratory
Installation Year: 2008
Operating System: Linux
More Facts about Roadrunner
Interconnect: InfiniBand
Processor: PowerXCell 8i 3200 MHz (12.8 GFlops)
Total Number of Processors: 116,640 cores
Total Memory:
Benchmark Name and Peak Performance Measurement (FLOPS): petaflops; energy efficiency of 376 MFLOPS/Watt
Additional Fun Fact: “Roadrunner will primarily be used to ensure the safety and reliability of the nation’s nuclear weapons stockpile. It will also be used for research into astronomy, energy, human genome science and climate change.”