Fast Benchmark Michele Michelotto – INFN Padova, Manfred Alef – GridKa Karlsruhe



Fast Benchmark 2
 Request mainly from the WLCG community, via the machine/job task force, to recommend a fast benchmark for estimating the performance of the provided job slots, since some sites don't disclose performance scores or hardware details
 Clear requirements:
  Open source
  Easy to run
  Fast (a few minutes)
  Small, no download (apart from the first download)
 Unclear requirements:
  Reproducible? Reliable?
  Single core or multicore?
 Use cases:
  Run every time we land on a queue/VM/cloud machine?
  Run to sample the available resources?
  Run to cross-check whether the declared HS06 scores are reliable?

An example with Geant4 3
 Thanks to G. Cosmo and A. Dotti
 Based on Geant4
 Runs on Linux x86-64 and ARM
 Realistic description of the detector geometry
 Memory footprint 1/3 to 1/4 of a real experiment
 No digitization, no analysis
 CPU bound, no I/O
 Download a bootstrap.sh script from CERN
 Running the script downloads the rest of the program and compiles it (5–10 minutes), then ./run.sh
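Since the benchmark is CPU bound with no I/O, the user time of the child process should track the wall-clock time closely. The external timing the later slides rely on can be sketched as below; the Geant4 script names (bootstrap.sh, run.sh) are taken from the slide, and a stand-in CPU-bound command is used here so the sketch is self-contained (Unix-only, because of the resource module):

```python
import resource
import subprocess
import time

def time_child(cmd):
    """Run cmd in a subprocess; return (wall, user, sys) in seconds.

    getrusage(RUSAGE_CHILDREN) accumulates CPU time over all waited-for
    children, so take the difference around the subprocess call.
    """
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True)
    wall = time.perf_counter() - t0
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return (wall,
            after.ru_utime - before.ru_utime,
            after.ru_stime - before.ru_stime)

if __name__ == "__main__":
    # Stand-in CPU-bound workload; for the Geant4 benchmark this would be
    # ["bash", "run.sh"] after bootstrap.sh has downloaded and compiled it.
    burn = ["python3", "-c", "sum(i * i for i in range(2_000_000))"]
    wall, user, sys_t = time_child(burn)
    print(f"wall={wall:.2f}s user={user:.2f}s sys={sys_t:.2f}s")
```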

Single core 4

Multicore 5
 We have 32 logical CPUs
 I'm forced to use the wall-clock time from the shell instead of the computed real time
 Now it takes more time to reach a steady number

Variance 6
 Xeon E C / 32 Lcpu
 16 threads in parallel, 10k events, about 20 minutes
 Average wall-clock time 1077, Stdev.S = 58
 Average user time seconds, Stdev.S = 976
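Stdev.S on the slide is the sample standard deviation (n−1 denominator), as in a spreadsheet's STDEV.S; a 58 s spread on a 1077 s mean is roughly a 5% run-to-run variation. A small helper, using illustrative measurements rather than the slide's raw data (which the transcript does not include):

```python
import statistics

def summarize(times):
    """Mean and sample standard deviation (spreadsheet STDEV.S) of run times."""
    return statistics.mean(times), statistics.stdev(times)

# Illustrative wall-clock measurements in seconds (not the slide's raw data).
runs = [1010.0, 1050.0, 1077.0, 1120.0, 1160.0]
mean, sdev = summarize(runs)
print(f"mean={mean:.0f}s stdev={sdev:.0f}s ({100 * sdev / mean:.1f}% spread)")

# The slide's summary numbers imply a ~5% relative spread:
print(f"slide: {100 * 58 / 1077:.1f}%")
```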

User time / Wall Clock time < 16 7
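With 16 CPU-bound threads, the user time should approach 16× the wall-clock time; a ratio below 16 means the job slots are not fully loaded (serial phases, scheduling overhead, hyper-threading contention). A quick efficiency check, with a hypothetical user time since the transcript omits the measured value:

```python
def cpu_efficiency(user_s, wall_s, n_threads):
    """Fraction of the theoretical n_threads * wall CPU budget actually used."""
    return user_s / (wall_s * n_threads)

# Hypothetical numbers: 1077 s wall clock (slide 6), assumed 15000 s user time.
ratio = 15000 / 1077                      # user/wall, below the 16-thread ideal
eff = cpu_efficiency(15000, 1077, 16)
print(f"user/wall = {ratio:.1f}, efficiency = {eff:.0%}")
```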

LHCb fast benchmark 8
 New contact with P. Charpentier (LHCb) provided by Manfred Alef
 Manfred is investigating this tool

HS14 update from Manfred Alef 9
 HS06 is based on the widely used industry standard SPEC CPU 2006
 SPEC ships well-tested, professionally maintained tools for several architectures
 Very stable: 3 minor versions in 8 years
 Hardware vendors and the technical press are familiar with it
 Widely adopted in Grid/WLCG and also in other scientific communities

Next version coming soon 10
 Benchmark tests need to be revised to reflect hardware improvements
 SPEC is working on the next revision of the CPU-intensive benchmark suite, currently designated CPUv6
  After the original SPECmark, SPEC92, SPEC CPU95, SPEC CPU2000, and SPEC CPU2006, this will be the 6th version
 KIT is a SPEC OSG associate and has the CPUv6 beta (closed source, no permission to redistribute)
 GridKa will provide a config file to run the benchmark on SL with the GNU compilers
 CPUv6 runs with the SL6 default gcc, but not all the tests compile
  However, SL7 comes with a newer gcc
  Using that gcc, all the tests compile