Lenovo - Energy Efficiency in Supercomputing Systems. Miguel Terol Palencia, HPC Architect, LENOVO.

Enterprise is Key to Lenovo's "Triple Plus" Strategy

Lenovo and High Performance Computing: 77 supercomputers in the Top500 list are powered by Lenovo or IBM/Lenovo systems (IBM System x is now part of Lenovo).

Lenovo CAE solutions

Energy efficiency of HPC systems is one of the main goals of the HPC community. The world's most powerful HPC systems have been outperforming Moore's law for years, and the power consumption of leading-edge supercomputers has already passed 10 megawatts (MW) and continues to grow. Due to rising energy prices, climate protection policies, and technical challenges and limitations, it is commonly accepted that the power consumption of sustainable many-petascale to exascale computing needs to stay in the 1 to 20 MW range.

Lenovo focuses on three main areas:
- Efficient hardware design: more efficient power supplies, more efficient fans
- Power and cooling: reduce PUE (see the worked example below), reduce the use of chillers, reduce power consumption
- Energy Aware Scheduling: monitor power, control power and energy
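For reference, PUE (Power Usage Effectiveness) is the ratio of total facility power to the power delivered to the IT equipment; the numbers below are purely illustrative, not Lenovo or LRZ figures.

\[
\mathrm{PUE} = \frac{P_{\text{facility}}}{P_{\text{IT}}}, \qquad
\text{e.g.}\quad \frac{6\ \text{MW}}{5\ \text{MW}} = 1.2
\]

More efficient cooling (rear-door heat exchangers, direct water cooling, as on the following slides) shrinks the non-IT share of facility power and pushes PUE toward its ideal value of 1.0.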

Efficient Hardware Design
- Use the latest semiconductor technology
- Use energy-saving processor and memory technologies
- Consider using special hardware or accelerators designed for specific scientific problems or numerical algorithms

Traditional Air Cooling

Rear Door Heat Exchangers

Direct Water Cooling

Leibniz Supercomputing Centre (LRZ): SuperMUC supercomputer
- More than 10,000 compute nodes
- InfiniBand FDR10 interconnect
- 10 PB storage (200 GB/s bandwidth)
- 5 MW power consumption (max. 10 MW)

Energy Aware Scheduling (EAS): Optimize Power Consumption of Active Nodes
- Set a default CPU frequency on the nodes (see the sketch below)
- Ability to set a specified frequency at core/node level for a given job, application, or queue
- Ability to use energy policies to automatically select the optimal CPU frequency based on power and performance prediction
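A minimal sketch of the underlying mechanism, assuming the standard Linux cpufreq sysfs interface (frequencies in kHz, root privileges required). The real EAS logic lives inside the batch scheduler and is not shown on the slides; the helper names below are made up for illustration.

```python
# Sketch only: cap CPU frequency per core via the Linux cpufreq sysfs interface.
from pathlib import Path

def set_max_freq_khz(cpu: int, freq_khz: int) -> None:
    """Cap the given logical CPU at freq_khz using cpufreq's scaling_max_freq."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_max_freq")
    path.write_text(str(freq_khz))

def set_default_frequency(num_cpus: int, freq_khz: int) -> None:
    """Apply one default frequency cap to every core of a node
    (slide: 'Set a default cpu frequency on nodes')."""
    for cpu in range(num_cpus):
        set_max_freq_khz(cpu, freq_khz)

if __name__ == "__main__":
    # Example: cap 16 cores at 2.3 GHz before launching a job.
    set_default_frequency(num_cpus=16, freq_khz=2_300_000)
```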

Energy Aware Scheduling: Predicting Power Consumption at CPU Frequency f_n
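The slides do not spell out the power model, so the following is only a plausible sketch: fit a low-order polynomial to node-power measurements taken at a few calibration frequencies for an application, then evaluate it at the target frequency f_n. The data points and the quadratic form are assumptions for illustration, not EAS internals.

```python
import numpy as np

# Calibration data (illustrative numbers, not SuperMUC measurements):
# node power in watts measured while running the application at a few frequencies (GHz).
freqs_ghz = np.array([1.6, 2.0, 2.3, 2.7])
power_w   = np.array([210.0, 245.0, 280.0, 330.0])

# Fit P(f) ~ c2*f^2 + c1*f + c0 per application; dynamic CPU power grows
# with frequency (and with voltage when it scales too), so a low-order
# polynomial is a common approximation.
coeffs = np.polyfit(freqs_ghz, power_w, deg=2)

def predict_power(f_ghz: float) -> float:
    """Predicted node power (W) at CPU frequency f_ghz."""
    return float(np.polyval(coeffs, f_ghz))

print(predict_power(2.5))  # estimate power at a frequency that was not measured
```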

Energy Aware Scheduling: Predicting Runtime at CPU Frequency f_n
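Likewise a hedged sketch rather than the EAS formula: a common runtime model splits execution time into a frequency-insensitive part (memory/I/O) and a compute part that scales inversely with clock frequency, T(f) = t_mem + c/f. Two timed calibration runs at different frequencies suffice to determine both coefficients; the numbers below are invented for illustration.

```python
def fit_runtime_model(f1, t1, f2, t2):
    """Fit T(f) = t_mem + c / f from two (frequency, runtime) measurements."""
    # Two equations, two unknowns:
    #   t1 = t_mem + c/f1
    #   t2 = t_mem + c/f2
    c = (t1 - t2) / (1.0 / f1 - 1.0 / f2)
    t_mem = t1 - c / f1
    return t_mem, c

def predict_runtime(f, t_mem, c):
    """Predicted runtime at CPU frequency f."""
    return t_mem + c / f

# Illustrative numbers: 120 s at 2.7 GHz, 150 s at 2.0 GHz.
t_mem, c = fit_runtime_model(2.7, 120.0, 2.0, 150.0)
print(predict_runtime(2.3, t_mem, c))  # estimate runtime at 2.3 GHz
```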

Scheduler EAS Implementation

Results Overview (Worst-Case Prediction Error, WPE)

Application       Nodes  Parallelization                           WPE Power  WPE Runtime
Quantum Espresso  16     Hybrid (4 MPI tasks, 4 OpenMP threads)    1.4%       4.6%
Gadget            8      Hybrid (4 MPI tasks, 4 OpenMP threads)    2.7%       0.7%
SeisSol           16     Hybrid (1 MPI task, 16 OpenMP threads)    2.6%       2.6%
waLBerla          64     MPI only (1024 MPI tasks)                 2.4%       1.8%
PMatMul           64     MPI only (1024 MPI tasks)                 0.9%       6.7%
STREAM            1      OpenMP only (16 OpenMP threads)           4.9%       6.3%
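For clarity on the WPE columns above: worst-case prediction error can be read as the largest relative deviation between predicted and measured values over the evaluated runs. The helper below is an assumed formalisation for illustration, not code from the EAS work.

```python
def worst_case_prediction_error(predicted, measured):
    """Largest relative error |pred - meas| / meas over all runs, as a percentage."""
    return 100.0 * max(abs(p - m) / m for p, m in zip(predicted, measured))

# Illustrative usage with made-up power readings (watts):
print(worst_case_prediction_error([250.0, 281.0], [248.0, 285.0]))
```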

Result: Energy Savings. LRZ presented this work at ISC14 and SC14, showing that EAS saved 6% of electricity without performance degradation.

Thank you!!!