Driving HPC Energy Efficiency: Energy Efficient HPC Working Group

Presentation transcript:

Driving HPC energy efficiency
 Energy Efficient HPC Working Group: a forum for sharing information
  - Peer-to-peer exchange
  - Best practices and case studies
  - Collective action
  - Open to all interested parties
 EE HPC WG website and Energy Efficient HPC LinkedIn group
 With a lot of support from Lawrence Berkeley National Laboratory

Power Measurement Methodology: Complexities and Issues
 Fuzzy lines between the computer system and the data center, e.g., fans and cooling systems
 Shared resources, e.g., storage and networking
 Data centers are typically not instrumented for computer-system-level measurement
 Measurement tool limitations, e.g., sampling frequency and power versus energy
 DC system-level measurements do not include power supply losses (see the sketch below)
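To make the power-versus-energy and DC-versus-AC points concrete, here is a minimal Python sketch; the sample values and the 92% power-supply efficiency are illustrative assumptions, not figures from the presentation. It integrates 1 Hz DC power samples into energy, then estimates the AC-side energy the facility actually draws:

```python
# Illustrative sketch only: sample values and PSU efficiency are assumptions.

def integrate_energy(samples):
    """Trapezoidal integration of (time_s, power_w) samples into joules."""
    return sum(0.5 * (p0 + p1) * (t1 - t0)
               for (t0, p0), (t1, p1) in zip(samples, samples[1:]))

dc_samples = [(0, 950.0), (1, 1010.0), (2, 980.0), (3, 1005.0)]  # 1 Hz, watts
dc_energy_j = integrate_energy(dc_samples)

# DC system-level meters sit downstream of the power supplies, so the
# measurement misses conversion losses; dividing by an assumed efficiency
# estimates the AC draw seen at the facility side.
PSU_EFFICIENCY = 0.92  # assumption, not a value from the slides
ac_energy_j = dc_energy_j / PSU_EFFICIENCY

print(f"DC energy: {dc_energy_j:.0f} J, estimated AC energy: {ac_energy_j:.0f} J")
```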

Proposed Methodology Improvements
 The current power measurement methodology is very flexible, but that flexibility compromises consistency between submissions
 The proposal keeps the flexibility but records the rules used and the quality of each power measurement
 Three levels of power measurement quality:
  - Sampling rate: more measurements mean higher quality
  - Completeness of what is measured: covering more of the system means higher quality
  - Common rules for start/stop times (illustrated below)
 The vision is to continuously 'raise the bar' with higher levels
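As one way to picture why common start/stop rules matter, the hedged sketch below (the helper name and all values are hypothetical, not rules from the methodology) clips a sample stream to an agreed benchmark window before averaging, so two submissions for the same run cannot diverge simply by averaging over different intervals:

```python
# Hypothetical helper: names and values are illustrative, not from the methodology.

def window_average(samples, start_s, stop_s):
    """Arithmetic mean of power samples inside the agreed [start, stop] window."""
    in_window = [p for t, p in samples if start_s <= t <= stop_s]
    if not in_window:
        raise ValueError("no samples fall inside the benchmark window")
    return sum(in_window) / len(in_window)

samples = [(-10, 400.0), (0, 950.0), (30, 1100.0), (60, 1050.0), (70, 500.0)]
# Including the idle samples outside the run would understate average power.
print(f"{window_average(samples, start_s=0, stop_s=60):.0f} W")  # -> 1033 W
```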

Power/Energy Measurement Methodology (Current Proposal)

Level 1
 Aspect 1, Time Fraction & Granularity: 20% of run; power measured more than once per second; report more than one average power measurement
 Aspect 2, Machine Fraction: the larger of 1/64 of the machine or 1 kW

Level 2
 Aspect 1, Time Fraction & Granularity: 100% of run; power measured more than once per second; report more than ten average power measurements
 Aspect 2, Machine Fraction: the larger of 1/8 of the machine or 10 kW

Level 3
 Aspect 1, Time Fraction & Granularity: 100% of run; total integrated energy measurement; report more than ten running-total energy measurements
 Aspect 2, Machine Fraction: whole machine

Aspect 3, Subsystems Measured (same checklist for all levels): [Y] Compute nodes, [ ] Interconnect network, [ ] Storage, [ ] Storage network, [ ] Login/head nodes
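For illustration, the level definitions above could be encoded as data and checked mechanically. This is a minimal sketch covering only the timing aspect; the dictionary layout and function names are this sketch's assumptions, not an official tool:

```python
# Hypothetical encoding of the three quality levels; thresholds follow the
# table above, but the structure and names are assumptions.

LEVELS = {
    1: {"run_fraction": 0.20, "min_reports": 1,
        "machine_rule": "larger of 1/64 of machine or 1 kW"},
    2: {"run_fraction": 1.00, "min_reports": 10,
        "machine_rule": "larger of 1/8 of machine or 10 kW"},
    3: {"run_fraction": 1.00, "min_reports": 10,
        "machine_rule": "whole machine"},
}

def meets_timing(level, measured_fraction, reports):
    """Check a submission's time fraction and report count against a level."""
    req = LEVELS[level]
    return measured_fraction >= req["run_fraction"] and reports >= req["min_reports"]

# A full-run measurement with 12 reported averages satisfies Level 2 timing.
print(meets_timing(2, measured_fraction=1.0, reports=12))  # True
```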

Testing proposed improvements: early adopter phase
 Five early adopters will use the methodology for June 2012 submissions to the Top500 and Green500:
  - Herbert Huber, Leibniz Supercomputing Centre
  - Buddy Bland, Oak Ridge National Laboratory
  - Kim Cupps, Lawrence Livermore National Laboratory
  - Nic Dube, Université Laval, Calcul Québec, Compute Canada
  - Susan Coghlan, Argonne National Laboratory
 Seeking feedback from the early adopters and at the ISC Birds of a Feather session
 Next revision expected by September 2012
