#12 Encanto
Nicole Ondrus, Top 500 Parallel System Presentation
5/06/09
Basic Facts
Current Rank: 12
System Name: Encanto
Location: Intel’s Rio Rancho, New Mexico campus
Vendor: SGI
URL: http://www.newmexicosupercomputer.com/encanto.html
Application Area: Research
Installation Year: 2007
Operating System: SLES10 + SGI ProPack 5
More Facts about Encanto
Interconnect: InfiniBand DDR
System Type: SGI Altix ICE 8200 cluster
Total Number of Processors: 14,336 compute cores across 1,792 nodes in 28 racks
Total Memory: 28,000 GB
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark – 133.2 TFLOPS
Additional Fun Fact: As of February 19, 2009, DreamWorks made a deal with the New Mexico Computing Applications Center that allows DreamWorks to use Encanto to help render 3D films.
http://www.ukfast.co.uk/green-daily-news/dreamworks-to-use-efficient-desert-cloud.html
http://news.cnet.com/newsblog/?keyword=supercomputers
#11 JUGENE
Micheal Perolis, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 11
System Name: JUGENE
Location: Forschungszentrum Juelich, Juelich, Germany
Vendor: IBM
URL: http://www.fz-juelich.de/portal/
Application Area: Research
Installation Year: 2007
Operating System: CNK/SLES 9
More Facts about JUGENE
Interconnect: Proprietary
Processor: PowerPC 450, 850 MHz (3.4 GFlops)
Total Number of Processors: 16 cabinets with 65,536 total cores
Total Memory: 2 gigabytes per node, for a total of 32 terabytes
Benchmark Name and Peak Performance Measurement (FLOPS): Rpeak – 222.8 TFLOPS
Additional Fun Fact: “In February 2009 it was announced that JUGENE would be upgraded to reach petaflops performance by June 2009, making it the first petascale supercomputer in Europe.” It is also roughly an order of magnitude more energy-efficient than a comparable x86-based supercomputer.
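The quoted Rpeak follows directly from the slide's own numbers: cores times per-core rate. A quick sanity-check sketch (the 3.4 GFlops/core figure corresponds to 850 MHz times 4 floating-point operations per cycle on the PowerPC 450):

```python
# Sanity-check JUGENE's quoted Rpeak from the figures on this slide.
cores = 65_536            # 16 cabinets of Blue Gene/P
gflops_per_core = 3.4     # 850 MHz x 4 flops/cycle (PowerPC 450)

rpeak_tflops = cores * gflops_per_core / 1_000
print(f"Rpeak = {rpeak_tflops:.1f} TFLOPS")  # matches the 222.8 TFLOPS above
```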
#10 Dawning 5000A
Matthew Variola, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 10
System Name: Dawning 5000A
Location: Shanghai Supercomputer Center, Shanghai, China
Vendor: Dawning
URL: http://www.dawning.com.cn/en/index.asp
Application Area: genome mapping, earthquake assessment, weather forecasting, mining surveys, and stock-exchange data
Installation Year: 2008
Operating System: Windows HPC Server 2008
More Facts about Dawning 5000A
Interconnect: InfiniBand DDR
Processor: AMD x86_64 Opteron Quad Core, 1900 MHz (7.6 GFlops)
Total Number of Processors: 7,680 processors totaling 30,720 cores
Total Memory: 122,880 GB
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark – 180.60 TFLOPS
Additional Fun Fact: “The Dawning 5000A can process a 36-hour weather forecast for Beijing in 3 minutes.”
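The measured Linpack figure can be compared against the theoretical peak implied by this slide's own numbers (30,720 cores at 7.6 GFlops each). The efficiency ratio below is a back-of-the-envelope sketch, not an official figure:

```python
# Estimate Dawning 5000A's theoretical peak and Linpack efficiency
# from the core count and per-core rate quoted on the slide.
cores = 30_720           # 7,680 quad-core Opterons
gflops_per_core = 7.6    # 1.9 GHz x 4 flops/cycle
rmax_tflops = 180.60     # measured Linpack result

rpeak_tflops = cores * gflops_per_core / 1_000
efficiency = rmax_tflops / rpeak_tflops
print(f"Rpeak ~ {rpeak_tflops:.1f} TFLOPS, Linpack efficiency ~ {efficiency:.0%}")
```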
#9 NNSA/Sandia National Laboratories
Mathew Skelly, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 9
System Name: Red Storm
Location: Albuquerque, New Mexico (USA)
Vendor: Cray Inc.
URL: http://www.sandia.gov/
Application Area: Security Research
Installation Year: 2008
Operating System: UNICOS/SUSE Linux
More Facts about Red Storm
Interconnect: XT3 internal interconnect
Processor: AMD Opteron™ 2.4 GHz dual-core and 2.2 GHz quad-core processors
Total Number of Processors: 12,960 compute-node processors, plus 320 + 320 service and I/O node processors
Total Memory: 75 terabytes of DDR memory, alongside 1,753 terabytes of disk storage; the system draws 2.5 megawatts of power
Benchmark Name and Peak Performance Measurement (FLOPS): 284.16 teraFLOPS theoretical peak performance
More Facts about Red Storm
Additional Fun Fact: “Currently, with existing models and existing hardware, the climate modeling community can run simulations on the order of one hundred years on grids with resolution of roughly 150km. Sandia is working on advancing high-performance climate modeling to the 10km resolution and developing agent-based societal and economic models capable of coupling to a climate model.”
#8 Jaguar
XiongZhneg Guo, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 8
System Name: Jaguar
Location: Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
Vendor: Cray Inc.
URL: http://www.nccs.gov/jaguar
Application Area: Research
Installation Year: 2008
Operating System: CNL
More Facts about Jaguar
Interconnect: XT4 internal interconnect
Processor: AMD x86_64 Opteron Quad Core, 2100 MHz (8.4 GFlops)
Total Number of Processors: 7,832 compute nodes with one quad-core processor each, for 31,328 processor cores
Total Memory: 2 gigabytes per core, roughly 62 terabytes in total
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark – 205.0 TFLOPS
Additional Fun Fact: Jaguar could not have been built without some form of liquid cooling to prevent hot spots. Occupying 4,400 square feet, the system uses Cray’s ECOphlex cooling technology, which circulates R-134a refrigerant, the same used in automobile air conditioners, to remove heat as air enters and exits each cabinet.
#7 Franklin Supercomputer
Caleb Colvin, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 7
System Name: Franklin
Location: National Energy Research Scientific Computing Center in Berkeley, California
Vendor: Cray Inc.
URL: http://www.nersc.gov/
Application Area: Not Specified
Installation Year: 2008
Operating System: CNL
More Facts about Franklin
Interconnect: XT4 internal interconnect
Processor: Cray XT4 quad-core, 2.3 GHz (9.2 GFlops)
Total Number of Processors: 9,532 nodes with one quad-core processor each, for a total of 38,128 processor cores
Total Memory: 77,280 GB (77.2 TB)
Benchmark Name and Peak Performance Measurement (FLOPS): HPL Benchmark – 356 TFLOPS
Additional Fun Fact: “The Supercomputer System was named after Benjamin Franklin in honor of him being one of the first great scientists in America.”
#4 BlueGene/L
Anthony Young, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 4
System Name: BlueGene/L
Location: Terascale Simulation Facility at Lawrence Livermore National Laboratory
Vendor: IBM
URL: https://asc.llnl.gov/computing_resources/bluegenel/
Application Area: Not Specified
Installation Year: 2007
Operating System: CNK/SLES 9
More Facts about BlueGene/L
Interconnect: Proprietary
Processor: PowerPC 440, 700 MHz (2.8 GFlops)
Total Number of Processors: originally 65,536 dual-processor compute nodes, now 106,496; the 40,960 new nodes doubled the memory of the original machine
Total Memory: originally 73,728 GB, now doubled
Benchmark Name and Peak Performance Measurement (FLOPS): HPL Benchmark – 478.2 teraFLOPS measured on the upgraded machine, which has a peak speed of 596 teraFLOPS
Additional Fun Fact: the machine was expanded “to accommodate growing demand for high-performance systems able to run the most complex nuclear weapons science calculations.”
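The quoted peak speed scales with the node count: each compute node holds two PowerPC 440 processors at 2.8 GFlops apiece. A sketch checking the upgraded machine's peak against those per-node numbers:

```python
# Check BlueGene/L's post-upgrade peak from the node count on this slide.
nodes = 106_496             # after the 40,960-node expansion
gflops_per_node = 2 * 2.8   # two 700 MHz PowerPC 440 cores per node

rpeak_tflops = nodes * gflops_per_node / 1_000
print(f"Rpeak ~ {rpeak_tflops:.1f} TFLOPS")  # ~596.4, matching the ~596 teraFLOPS quoted
```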
#3 Pleiades
Kyle Greenfield, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 3
System Name: Pleiades
Location: NASA Ames Research Center/NAS, Mountain View, California
Vendor: SGI
URL: http://www.nas.nasa.gov/Resources/Systems/pleiades.html
Application Area: Simulation and modeling for agency missions
Installation Year: 2008
Operating System: SLES10 + SGI ProPack 5
More Facts about Pleiades
Interconnect: InfiniBand
Processor: Intel EM64T Xeon E54xx (Harpertown), 3000 MHz (12 GFlops)
Total Number of Processors: 6,400 nodes with two quad-core processors per node, for a total of 51,200 processor cores
Total Memory: 8 gigabytes per node, for a total of 51.2 terabytes
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark – 487 teraflop/s
Additional Fun Fact: “Pleiades ranks high on the Green500 supercomputer list at #22, based on computational efficiency.”
#2 Jaguar
Effie Kisgeropoulos, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 2
System Name: Jaguar
Location: Oak Ridge National Laboratory, U.S.
Vendor: Cray Inc.
URL: http://www.nccs.gov/jaguar/
Application Area: Not Specified
Installation Year: 2008
Operating System: CNL
More Facts about Jaguar
Interconnect: XT5 internal interconnect
Processor: AMD x86_64 Opteron Quad Core, 2300 MHz (9.2 GFlops)
Total Number of Processors: 150,152 processor cores
Total Memory: 62 terabytes
Benchmark Name and Peak Performance Measurement (FLOPS): HPL Benchmark – 1,059.00 TFLOPS
Additional Fun Fact: Jaguar uses a consistent programming model; this allows users to continue evolving their existing codes rather than writing new ones for every updated version.
#1 Roadrunner
George Abuzakhm, Top 500 Parallel System Presentation
Basic Facts
Current Rank: 1
System Name: Roadrunner
Location: Los Alamos, New Mexico
Vendor: IBM
URL: http://www.lanl.gov/
Application Area: Los Alamos National Laboratory
Installation Year: 2008
Operating System: Linux
More Facts about Roadrunner
Interconnect: InfiniBand
Processor: PowerXCell 8i, 3200 MHz (12.8 GFlops)
Total Number of Processors: 116,640 cores
Benchmark Name and Peak Performance Measurement (FLOPS): Linpack Benchmark – 1.026 petaflops
Energy Efficiency: 376 MFLOPS/Watt
Additional Fun Fact: “Roadrunner will primarily be used to ensure the safety and reliability of the nation’s nuclear weapons stockpile. It will also be used for research into astronomy, energy, human genome science and climate change.”
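The 376 MFLOPS/Watt efficiency figure, combined with Roadrunner's 1.026 petaflop/s Linpack result, implies the machine's power draw while running the benchmark. The conversion below is a rough sketch using only those two numbers:

```python
# Infer Roadrunner's approximate power draw from its Linpack result
# and its quoted MFLOPS-per-watt efficiency.
rmax_mflops = 1.026e9      # 1.026 petaflops expressed in MFLOPS
mflops_per_watt = 376      # Green500-style efficiency figure

watts = rmax_mflops / mflops_per_watt
print(f"Implied power draw ~ {watts / 1e6:.2f} MW")  # roughly 2.7 megawatts
```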