Transition to a new CPU benchmark, on behalf of the "GDB benchmarking WG".
HEPiX: Manfred Alef, Helge Meinhard, Michele Michelotto
Experiments: Peter Hristov, Alessandro De Salvo and Franco Brasolin, Marie-Christine Sawley, Hubert Degaudenzi
Chair: Gonzalo Merino

2 Background
July 2008: the HEPiX CPU WG recommended that the MB adopt SPECall_cpp2006 as the new benchmark.
– A document describing the chosen benchmark and the conditions under which it has to be run is being prepared by the HEPiX WG.
Mandate of our working group:
– Publish the recipe for running the benchmark (see above).
– Agree on conversion factors to translate the experiment requirement and site pledge tables from SI2K units to the new unit.
The aim of this talk is not to present a formal proposal, but to report on the group's discussions so far and to get reactions/feedback.

3 LXBENCH cluster at CERN
Cluster of 8 reference machines for benchmarking at CERN:
– lxbench01, lxbench02, … lxbench08 (see
Both the old (SPECint2000) and the new (SPECall_cpp2006) benchmarks have been run on every machine.
The experiments also ran their applications on every machine (May 2008, HEPiX CPU WG).
– Conclusion: all 4 experiments see a good correlation between their applications and both benchmarks.
– NOTE: not all the experiments ran on lxbench08:
  Only ATLAS managed to run and provide results for lxbench08.
  LHCb says it did run, but the results have not yet been provided to the HEPiX group.
  CMS and ALICE did not manage to run on lxbench08 in May.

4 SPECs on LXBENCH
Machine                         SPECall_cpp2006 (new)   KSI2K-LCG (old)   Ratio new/old
lxbench01  Intel Xeon 2.8 GHz           10.24                  2.25            4.55
lxbench02  Intel Xeon 2.8 GHz            9.63                  2.24            4.29
lxbench03  AMD Opteron 275              28.03                  6.20            4.52
lxbench04  Intel Xeon 5150              35.58                  8.51            4.18
lxbench05  Intel Xeon 5160              38.21                  9.27            4.12
lxbench06  AMD Opteron 2218             31.67                  6.85            4.62
lxbench07  Intel Xeon E5345             57.52                 14.19            4.05
lxbench08  Intel Xeon E5410             60.76                 15.83            3.84
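Purely as an illustration (not part of the original slides), the ratio column above can be recomputed from the two benchmark scores. A minimal Python sketch using the tabulated values:

# Recompute the new/old ratio for each lxbench machine from the table above.
# Values are the whole-machine SPECall_cpp2006 and KSI2K-LCG scores.
lxbench = {
    "lxbench01": (10.24, 2.25),   # Intel Xeon 2.8 GHz
    "lxbench02": (9.63, 2.24),    # Intel Xeon 2.8 GHz
    "lxbench03": (28.03, 6.20),   # AMD Opteron 275
    "lxbench04": (35.58, 8.51),   # Intel Xeon 5150
    "lxbench05": (38.21, 9.27),   # Intel Xeon 5160
    "lxbench06": (31.67, 6.85),   # AMD Opteron 2218
    "lxbench07": (57.52, 14.19),  # Intel Xeon E5345
    "lxbench08": (60.76, 15.83),  # Intel Xeon E5410
}
for host, (spec2006, ksi2k) in lxbench.items():
    print(f"{host}: new/old = {spec2006 / ksi2k:.2f}")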

5 Current processors at the T0/T1s
Result of a poll of the Tier-0 and Tier-1s showing a snapshot of the number of cores currently installed for each processor type.
[Chart in original slide: installed cores per processor type, with labels Xeon E5345 (quad-core), Xeon E5430 (quad-core), AMD, Intel, dual-core, quad-core.]

6 Transition to a new CPU unit
The most representative platform, providing the majority of the capacity at the T0/T1s today, is the Intel quad-core processor.
We propose to focus on the conversion-factor results obtained on the newest (Intel quad-core) machines, lxbench07 and lxbench08:
– lxbench07: 4.05
– lxbench08: 3.84
Manfred Alef did the exercise of computing the CPU performance delivered by the whole GridKa farm with both the old and the new SPEC benchmarks.
– A large site with quite a lot of different processor types, so a realistic "mixture".
– Result: new/old conversion factor = 4.19.
A precision of O(5%) is intrinsic to these measurements, so differences at this level are not significant.
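The GridKa exercise amounts to a capacity-weighted average: sum the farm's new-benchmark capacity, sum its old-benchmark capacity, and take the ratio. A hedged sketch of that calculation; the node counts below are hypothetical, and the per-node scores simply reuse the lxbench figures above:

# Farm-wide new/old conversion factor: total SPECall_cpp2006 capacity
# divided by total KSI2K-LCG capacity across all worker-node types.
# Node counts are hypothetical; per-node scores are taken from the lxbench table.
farm = [
    # (number of nodes, SPECall_cpp2006 per node, KSI2K-LCG per node)
    (100, 35.58, 8.51),   # Xeon 5150 nodes
    (200, 57.52, 14.19),  # Xeon E5345 nodes
    (150, 31.67, 6.85),   # Opteron 2218 nodes
]
total_new = sum(n * new for n, new, old in farm)
total_old = sum(n * old for n, new, old in farm)
print(f"farm-wide conversion factor new/old = {total_new / total_old:.2f}")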

7 Transition to a new CPU unit
Proposal: take a simple approach and adopt 4.00 as the conversion factor.
– Give more importance to having a simple rule for the transition period than to arguing over decimals within the 5% precision.
– Caveat: in the coming days, ALICE and CMS will complete their benchmarking runs up to lxbench08 to confirm the good correlation (no surprises expected).
Transition period: before the end of April (Spring C-RRB meeting).
– The experiments will re-compute their requirement tables given the new LHC schedule: the new numbers should already be expressed in the new unit.
– Sites should buy SPECcpu2006 and calibrate their farms in order to report their current CPU power in the new unit.
Pledges for the Spring C-RRB should be expressed in the new unit.
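With an agreed factor, converting existing requirement and pledge tables is a single multiplication. A trivial sketch (the pledge value is hypothetical):

# Convert a CPU pledge from kSI2K-LCG to the new SPECall_cpp2006-based unit
# using the proposed conversion factor.
CONVERSION_FACTOR = 4.00  # proposed new/old factor

def to_new_unit(pledge_ksi2k: float) -> float:
    return pledge_ksi2k * CONVERSION_FACTOR

print(to_new_unit(1500.0))  # hypothetical 1500 kSI2K pledge -> 6000.0 in the new unit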