CERN IT Department, CH-1211 Genève 23, Switzerland, www.cern.ch/it
IHEPCCC/HEPiX benchmarking WG
Helge Meinhard / CERN-IT
Grid Deployment Board, 09 January 2008

History (1)
– Autumn 2006: IHEPCCC chair contacts HEPiX conveners about help on two technical topics
  – File systems
  – (CPU) Benchmarking
– HEPiX meeting at Jefferson Lab (Oct 2006) reacted favourably
– Participation sorted out
  – Chairs: Maslennikov (CASPUR): file systems, HM (CERN): benchmarking
– File system WG started immediately… not so the Benchmarking WG
  – Mea culpa… and my apologies to everyone

History (2)
– Ad-hoc discussion at HEPiX at DESY (April 2007)
– HEPiX in St Louis (November 2007)
  – Benchmarking track:
    – INFN move to SI2006 (Michele Michelotto)
    – Update on GridKa benchmarking (Manfred Alef)
    – Topics around CERN benchmarking (HM)
    – AMD and Intel performance for lattice QCD codes (Don Holmgren)
  – Two more formal face-to-face meetings of people interested in benchmarking; formal start of the WG
– Two phone conferences since (30 Nov, 13 Dec); next one scheduled for this afternoon

Motivation
– Used so far (at least in LCG): int_base part of SPEC CPU 2000
  – Inconsistent usage across institutions: measurements under real conditions vs. looking up values
  – Experiment code shown not to scale well with SI2k, in particular on multi-core processors; SI2k memory footprint too small
  – SPEC CPU 2000 phased out in 2007: no more support by SPEC, no new entries in the table of results maintained by SPEC
– New benchmark required
  – Consistent description of experiment requirements, lab commitments and existing resources
  – Tool for procurements: buy performance rather than boxes
– “Natural” candidate: SPEC CPU 2006 (int_base or int_rate part)

WG composition
– Currently on the mailing list:
  – Ian Gable (U Victoria)
  – Alex Iribarren (CERN)
  – Helge Meinhard (CERN)
  – Michael Barnes (Jefferson Lab)
  – Sandy Philpott (Jefferson Lab)
  – Manfred Alef (GridKa)
  – Michele Michelotto (INFN Padova)
  – Martin Bly (RAL)
  – Peter Wegner (DESY)
  – Alberto Aimar (CERN)
  – Sverre Jarp, Andreas Hirstius, Andrzej Nowak (all CERN)
– Actively looking for participation from FNAL and SLAC
– More expressions of interest to collaborate are very welcome

WG status – Agreements
– Focus initially on benchmarking of processing power for worker nodes
– Representative sample of machines needed; centres who can spare a machine (at least temporarily) will announce this to the list
– Environment to be fixed
– Standard set of benchmarks to be run
– Experiments to be invited (i.e. pushed) to run their code
  – Check how well the experiments’ code scales with industry-standard benchmarks

Machines
– A number of sites announced that they could make machines available, at least temporarily
  – Aim for interactive logins for experiments; at CERN, environment much like lxplus/lxbatch
  – Keep track of number, type and speed of CPUs, chipset, and number, configuration, CAS latency and clock speed of memory modules; potentially BIOS settings (and even temperatures) to be added (see the sketch after this slide)
– At CERN: lxbench cluster with currently 7 machines
  – Nocona 2.8 GHz, Irwindale 2.8 GHz, Opteron 2 x 2.2 GHz, Woodcrest 2.66 GHz, Woodcrest 3 GHz, Opteron Rev. F 2 x 2.6 GHz, Clovertown 2.33 GHz
  – Harpertown 2.33 GHz to be added soon
  – Some more machines (similar to one type or another) added temporarily for consistency checks
  – Standard benchmarks run; perfmon to be installed
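To make the recorded hardware details comparable across sites, the collection can be scripted. Below is a minimal, hypothetical sketch (not part of the WG's tools) that reads /proc/cpuinfo and, when run as root, dmidecode output; the parsed field names are assumptions about typical Linux output.

```python
#!/usr/bin/env python3
# Sketch: collect CPU and memory-module metadata for a benchmark machine.
# Assumes a Linux host; dmidecode (run as root) is needed for DIMM details.
import re
import subprocess

def cpu_info():
    """Return the set of CPU model names and the logical CPU count."""
    models, count = set(), 0
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("processor"):
                count += 1
            elif line.startswith("model name"):
                models.add(line.split(":", 1)[1].strip())
    return models, count

def memory_modules():
    """Return (size, speed) pairs for populated DIMMs, via dmidecode."""
    out = subprocess.run(["dmidecode", "--type", "memory"],
                         capture_output=True, text=True).stdout
    modules = []
    for block in out.split("\n\n"):
        size = re.search(r"^\s*Size:\s*(.+)$", block, re.M)
        speed = re.search(r"^\s*Speed:\s*(.+)$", block, re.M)
        if size and "No Module" not in size.group(1):
            modules.append((size.group(1), speed.group(1) if speed else "unknown"))
    return modules

if __name__ == "__main__":
    models, ncpu = cpu_info()
    print("CPUs: %d x %s" % (ncpu, ", ".join(sorted(models))))
    for size, speed in memory_modules():
        print("DIMM: %s @ %s" % (size, speed))
```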

Environment
– SL 4 x86_64, 32-bit applications
  – Some cross-checks with SL 5
– gcc 3.4.x (system compiler)
  – Cross-checks with gcc 4; will need gcc 4 for SPEC FP 2006
– Compilation options as agreed by the LHC Architects’ Forum (-O2 -fPIC -pthread -m32); see the sketch after this slide
  – Check whether other communities (CDF/D0, BaBar, …) prefer different options
– Multiple independent concurrent runs
  – Multi-threaded benchmarks (SPECrate) checked as well: although the normalisation is the same, values are too low by ~5% on fast machines
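For illustration only, a short sketch of applying the agreed compilation options to a source file; the file name example.c and the plain gcc call are placeholders, not the WG's actual build setup.

```python
#!/usr/bin/env python3
# Sketch: build a source file with the compilation options agreed by the
# LHC Architects' Forum (32-bit objects, -O2, position-independent code).
import subprocess

AF_FLAGS = ["-O2", "-fPIC", "-pthread", "-m32"]   # agreed options

def compile_object(source, obj):
    """Compile 'source' into the object file 'obj' with the agreed flags."""
    cmd = ["gcc"] + AF_FLAGS + ["-c", source, "-o", obj]
    print("running:", " ".join(cmd))
    subprocess.check_call(cmd)

if __name__ == "__main__":
    compile_object("example.c", "example.o")   # hypothetical source file
```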

Standard benchmarks
– Compulsory:
  – SPECint_base 2000 (concurrent, transition only)
  – SPECint_base 2006 (concurrent)
  – SPECfp_base 2006 (concurrent, gcc 4)
– Optional, cross-checks only:
  – SPECint_rate 2006
  – All of the above as 64-bit applications
  – SPECint compiled with gcc 4
  – All of the above under SL 5
– Notes:
  – “concurrent”: as many instances as there are cores are run in parallel, and the results are added over all cores (see the sketch after this slide)
  – Applications are 32-bit unless noted otherwise
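As an illustration of the “concurrent” mode, the sketch below starts one benchmark instance per core and adds the per-instance results; run_one_instance is a placeholder for the actual SPEC invocation and result parsing, which are not shown here.

```python
#!/usr/bin/env python3
# Sketch of the "concurrent" running mode: one benchmark instance per core,
# all started in parallel, with the final figure being the sum of the
# per-instance results.  The actual SPEC invocation is abstracted away.
import multiprocessing
import concurrent.futures

def run_one_instance(instance_id):
    """Placeholder: run one SPEC instance in its own directory and return
    its score.  In reality this would call the SPEC tools and parse the
    resulting report."""
    return 0.0   # dummy score

def concurrent_score():
    ncores = multiprocessing.cpu_count()
    with concurrent.futures.ProcessPoolExecutor(max_workers=ncores) as pool:
        scores = list(pool.map(run_one_instance, range(ncores)))
    return sum(scores)          # results added over all cores

if __name__ == "__main__":
    print("concurrent score:", concurrent_score())
```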

Experiment participation
– Important in order to check how well real-life applications scale with industry-standard benchmarks
– Discussed in the LCG Management Board in December 2007: Alberto Aimar contacted the experiment representatives, warned them that benchmarking would come up, and asked for people to contact
– Replies received from ATLAS, ALICE and LHCb
– Detailed information about the CERN cluster to be distributed this week

Conclusions
– The WG has taken long to get started, but is now up and running
– Momentum is there; interest and (some) person power as well
– Needs a bit of focussing in order to ensure results are comparable
– Needs active participation from the experiments
– Interim report expected at HEPiX at CERN in May 2008
– Some interest in looking at power consumption as well