Supermicro © 2009 Confidential HPC Case Study & References

Supermicro © 2009 Confidential Installation Example – CERN
Super Micro Computer, Inc. (NASDAQ: SMCI), a leader in application-optimized, high-performance server solutions, participated in the inaugural ceremony for CERN’s LHC (Large Hadron Collider) Project in Geneva. Supermicro’s SuperBlade® servers, housed at CERN (one of the world’s largest research labs), enabled the LHC Project with superior computational performance, scalability, and energy efficiency.
“We are honored to have Supermicro’s industry-leading blade server technology installed at the foundation of this monumental scientific research project,” said Charles Liang, CEO and president of Supermicro. “Our SuperBlade® platforms deliver unsurpassed performance, computing density and energy efficiency, making them ideal for HPC clusters and data centers.”
The LHC Project deploys, among others, Supermicro’s award-winning SuperBlade servers. These optimized solutions empower Supermicro customers with the most advanced green server technology available, including 93%* peak power supply efficiency, innovative and highly efficient thermal and cooling system designs, and industry-leading performance-per-watt (290+ GFLOPS/kW*).
Installation: 100+ nodes of SuperBlades® along with rack-mount servers (14-blade and 10-blade enclosures).

Supermicro © 2009 Confidential CERN LHC (Large Hadron Collider) Project
CERN: the world’s largest particle physics research center and home of the Large Hadron Collider (14-blade and 10-blade SuperBlade enclosures pictured).
Source: Dr. Helge Meinhard, CERN

Supermicro © 2009 Confidential CERN LHC (Large Hadron Collider) Project
Tunnel of 27 km circumference, 4 m diameter, 50…150 m below ground; detectors at four collision points
15 petabytes of data per year from the four experiments
Source: Dr. Helge Meinhard, CERN
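For context, the 15 PB/year figure implies an average sustained data rate of roughly half a gigabyte per second. A minimal back-of-the-envelope sketch (assuming continuous, year-round recording, a simplification not stated on the slide):

```python
# Average data rate implied by 15 PB/year (illustrative; assumes continuous recording).
petabyte = 1e15                      # bytes
seconds_per_year = 365 * 24 * 3600   # ~3.15e7 seconds

rate_bytes_per_s = 15 * petabyte / seconds_per_year
print(f"~{rate_bytes_per_s / 1e6:.0f} MB/s sustained")   # -> ~476 MB/s
```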

Supermicro © 2009 Confidential Installation Example – Research Labs
• Total 4,000 nodes / 4-way AMD Barcelona quad-core, with InfiniBand connection
• 300 nodes / 2 DP nodes per 1U, 1U Twin™ / Intel Harpertown, with InfiniBand (onboard) connection, Shanghai-ready

Supermicro © 2009 Confidential LLNL Hyperion Petascale Cluster

Supermicro © 2009 Confidential LLNL Hyperion Compute Node
• Supermicro A+ Server 2041M-32R+B
• H8QM3-2 motherboard, 2U 4-socket AMD quad-core
• One with 64 GB RAM & one with 256 GB RAM
• Two x16, two x4 PCIe 1.0 slots
• One Mellanox ConnectX IB DDR 2-port PCIe CX4 HCA
• One LSI SAS controller with 8 external ports

Supermicro © 2009 Confidential Speeding up science: CSIRO’s CPU-GPU supercomputer
CSIRO GPU supercomputer configuration: the new CSIRO high performance computing cluster will deliver more than 200 teraflops of computing performance and will consist of the following components:
• 100 Supermicro dual Xeon E5462 compute nodes (a total of 800 compute cores at 2.8 GHz) with 16 GB of RAM, 500 GB SATA storage and DDR InfiniBand interconnect
• 50 Tesla S1070 units (200 GPUs with a total of 48,000 streaming processor cores)
• 96-port DDR InfiniBand switch
On TOP500 / Green500 (June 2009):
• Delivered by NEC (hybrid cluster): Supermicro Twin + NVIDIA GPUs (50 TFLOPS)
• TOP500: ranked 77
• Green500: ranked 20 – the most efficient cluster system in the x86 space
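As a sanity check on the headline numbers above, the core counts and the 200+ teraflops figure can be reproduced from the per-part specifications. The sketch below is a back-of-the-envelope estimate, not a benchmark; the per-part figures it assumes (8 single-precision FLOPs per cycle per Harpertown core, 240 streaming processors and roughly 0.93 TFLOPS single precision per Tesla 10-series GPU) come from published specs, not from this slide.

```python
# Back-of-the-envelope check of the CSIRO cluster's headline figures.
cpu_nodes      = 100                 # dual-socket Xeon E5462 compute nodes
cores_per_node = 2 * 4               # 2 sockets x 4 cores (Harpertown quad-core)
cpu_cores      = cpu_nodes * cores_per_node           # -> 800 cores
cpu_tflops     = cpu_cores * 2.8e9 * 8 / 1e12         # ~17.9 TFLOPS single precision

s1070_units = 50
gpus        = s1070_units * 4                         # -> 200 GPUs
gpu_cores   = gpus * 240                              # -> 48,000 streaming processors
gpu_tflops  = gpus * 0.933                            # ~187 TFLOPS single precision

print(f"CPU cores: {cpu_cores}, GPUs: {gpus}, streaming cores: {gpu_cores}")
print(f"Approx. aggregate peak: {cpu_tflops + gpu_tflops:.0f} TFLOPS (single precision)")
```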

Supermicro © 2009 Confidential Oil & Gas Application
Oil & gas exploration – seismic data analysis
Supermicro 2U Twin² system – 512 nodes

Supermicro © 2009 Confidential HPC Implementation with PRACE
Challenge
Customer: Swedish National Infrastructure for Computing (SNIC), Royal Institute of Technology (KTH), Sweden, jointly with the Partnership for Advanced Computing in Europe (PRACE)
Need: a general-purpose HPC cluster for assessment of the energy efficiency and compute density achievable using standard components
Buying criteria:
• Superior performance/watt/sq.ft.
• Ability to control energy consumption based on workload
• End-to-end non-blocking, high-throughput IO connectivity
• Latest server chipset supporting PCI-E Gen2
• Non-proprietary x86 architecture
Solution
Collaborative effort between Supermicro and AMD. Infrastructure (SuperBlade®):
• 18 7U SuperBlade® enclosures
• 10 4-way blades with 240 processor cores per 7U
• QDR InfiniBand switch (40 Gb/s quad data rate)
• High-efficiency, N+1 redundant power supplies (93% efficiency)
• Blade enclosure management solution including KVM/IP on each node
Processor: Six-Core AMD Opteron™ HE
Chipset: AMD SR5670 supporting HyperTransport™ 3 interface, PCI-E Gen2 IO connectivity, APML power management
Computing capacity: 180 4-way systems with a total of 720 processors – 4,320 cores
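The computing-capacity figures above follow directly from the enclosure and blade counts. A minimal sketch of that arithmetic, assuming six cores per Opteron HE socket as stated on the slide:

```python
# Derive the PRACE/KTH SuperBlade capacity figures from the per-enclosure layout.
enclosures        = 18    # 7U SuperBlade enclosures
blades_per_encl   = 10    # 4-way (4-socket) blades per enclosure
sockets_per_blade = 4
cores_per_socket  = 6     # Six-Core AMD Opteron HE

blades       = enclosures * blades_per_encl                           # -> 180 four-way systems
processors   = blades * sockets_per_blade                             # -> 720 processors
cores        = processors * cores_per_socket                          # -> 4,320 cores
cores_per_7u = blades_per_encl * sockets_per_blade * cores_per_socket # -> 240 cores per 7U

print(f"{blades} systems, {processors} processors, {cores} cores ({cores_per_7u} cores per 7U)")
```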

Supermicro © 2009 Confidential References
LLNL: Lawrence Livermore National Laboratory
• Dr. Mark Seager, Director of the Advanced Simulation and Computing Program
• P.O. Box 808, L-554, East Ave., Livermore, CA
CERN: European Organization for Nuclear Research
• Dr. Helge Meinhard, in charge of Server & Storage, IT Department
• 1211 Geneva 23, Switzerland
PRACE
• Prof. Lennart Johnsson, PDC Director
• Teknikringen 14, Royal Institute of Technology, SE Stockholm, Sweden