Status of Benchmarking
Helge Meinhard, CERN-IT
WLCG Management Board, 14-Jul-2015
Helge Meinhard (at) CERN.ch



Outline
- Purposes of CPU benchmarks
- HEP-SPEC06
- New benchmark: motivation, status
- Requirement for a fast benchmark
- HEPiX benchmarking plans

Purpose
- Describe pledges and installed capacity
- Provide usage information (accounting; a normalisation sketch follows below)
- Serve as a metric for procurements
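For the accounting use case, raw CPU time is weighted by the benchmark rating of the node the work ran on. Below is a minimal sketch of that normalisation in Python, with hypothetical numbers and function names (not the actual WLCG accounting code):

    def normalised_work_hs06_hours(cpu_time_seconds: float,
                                   hs06_per_core: float) -> float:
        """Convert raw CPU time into benchmark-weighted work (HS06-hours).

        cpu_time_seconds: CPU time consumed by the job on one core.
        hs06_per_core:    HS06 rating of the worker node divided by its core count.
        """
        return (cpu_time_seconds / 3600.0) * hs06_per_core

    # Example: a 10-hour single-core job on a node rated at 11.5 HS06 per core
    # contributes 115 HS06-hours to the site's accounting records.
    print(normalised_work_hs06_hours(36000, 11.5))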

HEP-SPEC06 (1)
- Abbreviated HS06; current work-horse for CPU benchmarking
- Established in 2008/2009: proposal by a HEPiX working group, adopted by WLCG
- Aim: reflect the behaviour of typical applications on WLCG compute farms to some 10% accuracy
- Choice:
  - Application: subset of the SPEC CPU2006 integer and floating-point test suites written in C++
  - To be run on an OS typical for WLCG compute farms: currently RHEL 6 x86_64 compatible, gcc
  - Strict choice of compiler flags: -O2 -pthread -fPIC -m32, following advice by the Architects' Forum at the time
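As background on how such a score is assembled: SPEC-style suites combine per-benchmark runtime ratios against a reference machine with a geometric mean, running one copy per core. The sketch below illustrates only that generic aggregation scheme; the exact HS06 recipe, reference times and benchmark list are defined by the HEPiX kit, and all numbers here are made up.

    from math import prod

    def spec_rate_style_score(ref_times: dict, measured_times: dict, copies: int) -> float:
        """Generic SPEC-rate-style aggregation (illustration only, not the HS06 recipe).

        For each benchmark the ratio rewards finishing `copies` parallel copies
        faster than the single-copy reference time; the machine score is the
        geometric mean of the per-benchmark ratios.
        """
        ratios = [copies * ref_times[b] / measured_times[b] for b in ref_times]
        return prod(ratios) ** (1.0 / len(ratios))

    # Made-up reference and measured runtimes (seconds) for two C++ benchmarks,
    # eight copies run in parallel (one per core).
    ref = {"bench_a": 8000.0, "bench_b": 5300.0}
    measured = {"bench_a": 2300.0, "bench_b": 1500.0}
    print(round(spec_rate_style_score(ref, measured, copies=8), 1))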

HEP-SPEC06 (2)
- Reason for the choice of the C++ subset of SPEC CPU2006: good agreement of relevant low-level CPU counters between these applications and compute farms under LHC load (counter collection is sketched below):
  - FP over integer instructions
  - Wrong branch predictions
  - L3 cache misses
  - ...
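The comparison behind this choice can be reproduced in spirit by collecting the same hardware counters for a candidate benchmark and for a representative production job, for example with Linux perf. The sketch below is an illustration only; the event names are common perf aliases whose availability depends on CPU and kernel, and the two command lines are placeholders.

    import subprocess

    # Hardware events roughly matching the quantities listed above; run
    # `perf list` to see which events exist on a given CPU and kernel.
    EVENTS = "instructions,branch-misses,LLC-load-misses"

    def perf_counters(cmd: list) -> str:
        """Run a command under `perf stat` and return its counter report."""
        result = subprocess.run(
            ["perf", "stat", "-x", ",", "-e", EVENTS, "--"] + cmd,
            capture_output=True, text=True, check=True,
        )
        # perf stat writes the (CSV, because of -x) counter report to stderr.
        return result.stderr

    # Placeholder command lines: a benchmark binary and a representative job.
    print(perf_counters(["./benchmark_binary"]))
    print(perf_counters(["./experiment_job.sh"]))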

HEP-SPEC06 (3)
- Single LHC applications may differ significantly from HEP-SPEC06 behaviour
  - LHCb called for offers for HLT farm nodes using their own trigger application rather than HS06
- The typical job mix, as seen on lxbatch, still scales with HS06 at the ~10% level (a way to check this is sketched below)
- No immediate problem, but...
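That ~10% scaling statement can be spot-checked by fitting measured per-node throughput for the typical job mix against the nodes' HS06 scores and looking at the residuals. A sketch with made-up numbers:

    # Hypothetical measurements: (HS06 score of a node type, events/s delivered
    # by the typical lxbatch job mix on that node type).
    samples = [(180.0, 41.0), (250.0, 58.0), (320.0, 71.5), (410.0, 95.0)]

    # Least-squares fit of throughput = k * HS06 through the origin,
    # then the per-node deviation from that linear scaling.
    k = sum(h * t for h, t in samples) / sum(h * h for h, _ in samples)
    for hs06, throughput in samples:
        deviation = (throughput - k * hs06) / (k * hs06)
        print(f"HS06={hs06:6.1f}  measured={throughput:5.1f}  deviation={deviation:+.1%}")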

New Benchmark: Why?
- SPEC CPU2006 support will run out soon after the release of the new SPEC CPU suite
- A benchmark based on 32-bit applications with low optimisation is no longer fully adequate
- HS06 appears rather more sensitive to memory speed than typical applications
  - Possibly an issue with the interpretation of perfmon counters at the time?

New Benchmark: Status
- SPEC is known to be working on a new release of SPEC CPU
  - Initially expected for the end of 2014, now rather for 2016
- Potential alternatives:
  - An application commonly used by all LHC experiments: a Geant4 application is being tested
  - Advantage over SPEC: licence-free
- Industry standards maintained by third parties remain much preferred over HEP-specific solutions

Fast Benchmark
- HEP-SPEC06 typically runs for 5-6 hours
- A fast(er) benchmark is required for quick-and-dirty normalisation of resources (e.g. when a VM is created)
  - Not suitable for capacity management and procurement
- Several candidates (the general idea is illustrated by the sketch below):
  - Python script provided by LHCb
  - HammerCloud application developed within ATLAS
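A fast benchmark in this spirit is typically a short, fixed CPU-bound loop whose runtime is turned into a per-core score. The sketch below only illustrates the idea; it is neither the LHCb script nor the ATLAS HammerCloud probe.

    import random
    import time

    def fast_cpu_score(iterations: int = 2_000_000) -> float:
        """Tiny CPU-bound loop; a higher score means a faster core.

        Intended only for rough per-core normalisation (e.g. of a freshly
        created VM), not for capacity management or procurement.
        """
        rng = random.Random(42)            # fixed seed: identical work every run
        start = time.perf_counter()
        acc = 0.0
        for _ in range(iterations):
            acc += rng.random() ** 0.5     # cheap floating-point work
        elapsed = time.perf_counter() - start
        return iterations / elapsed / 1e6  # millions of iterations per second

    if __name__ == "__main__":
        print(f"fast benchmark score: {fast_cpu_score():.2f} (arbitrary units)")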

HEPiX Benchmarking Plans
- Benchmarking working group being revived
  - Co-chairs: Manfred Alef (KIT), Michele Michelotto (INFN)
- Potential contributors contacted; more are welcome (feel free to contact the co-chairs)
- Studies started around fast benchmarks and the Geant4-based application
- Group waiting for the next SPEC release

Summary
- HEP-SPEC06 has served WLCG very well over 6-7 years
- No immediate significant problem
- Need to move forward in line with the community outside HEP
- A fast, less precise benchmark is required as well
- HEPiX benchmarking working group revived and has started work

Questions?