HEPiX Spring 2009 Highlights


HEPiX Spring 2009 Highlights
Michel Jouvin, LAL, Orsay (jouvin@lal.in2p3.fr, http://grif.fr)
June 10, 2009, GDB, CERN
Stolen for use at HEPSYSMAN by Pete Gronbech, 30th June 2009

HEPiX at a Glance
- 15+ year-old informal forum of people involved in "fabric management" in HEP
  - No mandate from any other body
  - More sysadmins than IT managers
  - Open to anybody interested
  - http://www.hepix.org: archive of all meetings
- Main activity is a one-week meeting twice a year
  - 60-100 attendees per meeting, with a "stable" core
  - Focus: exchange of experience and technology reviews
  - Mix of large and small sites: better understanding of each other, each benefiting from the others
  - Most of the sites are involved in grid computing
- "On-demand" working groups
  - Currently: distributed file systems

Main Topics
- Main topics are always the same, but the main focus changes at each meeting
  - HEPiX's value is the periodic reporting on work in progress
  - Umeå: focus on virtualization
- Main tracks
  - Site reports: a very valuable part; updates on changes give a "picture" of what is happening around the world
  - Scientific Linux: status and future directions
  - Data centres: cooling, power consumption, ...; less active, projects in the building phase
  - Storage: convened by the File System WG; LUSTRE growing; evolving into a storage-focused forum for sharing experience
  - Virtualization
  - Benchmarking: still active, with new activities related to virtualization
  - Security and networking

Virtualization
- A track at each meeting for at least two years...
- Initial focus was mainly on service consolidation
- Coverage has been extended to virtualized environments for applications
  - Virtualized WNs: integration with batch schedulers
  - Image management and resource allocation (OpenNebula)
  - Grids and clouds (e.g. StratusLab)
  - CernVM: a very minimal and generic VM approach
- Position of sites on VO-maintained images
  - A first discussion rather than a decision: sites are not comfortable, but there was no outright "no"
  - Feasibility of a limited number of images per VO? Part of the SW area?
  - The GDB is the place for further discussions
- Benchmarking: CPU, I/O, Xen vs. KVM...

File Systems WG
- Set up two years ago with a mandate to review distributed file system technologies
  - Initial mandate from the former IHEPCCC
- Continued since on a voluntary basis, with two objectives
  - Benchmarking activities with more realistic and diverse use cases: LUSTRE has outperformed (2x) all other solutions in every case so far
  - Sharing experience and expertise with new technologies, mainly LUSTRE in fact, as it is now used in several places
- New members joined: CERN (LUSTRE evaluation), FNAL, Lisbon
- Produces a report at each HEPiX meeting
  - New topic at this meeting: a potential AFS + LUSTRE combination

Miscellaneous
- iSCSI evaluation at CERN: an alternative to FC?
- SLURM, an alternative to Torque (MAUI?): command-compatible with Torque (see the sketch below)
- First benchmarks of Nehalem-based machines: ~50% improvement in power efficiency compared to Harpertown
- CERN R&D on network anomaly detection (CINBAD)
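To illustrate the "command-compatible" point, a minimal sketch, assuming SLURM's optional Torque wrapper commands are installed and a batch script job.sh exists (both are assumptions, not from the slides):

```python
# Minimal sketch: SLURM ships optional Torque-style wrapper commands
# (qsub, qstat, ...), so existing Torque-based submission tooling can
# keep working unchanged. "job.sh" is a hypothetical batch script.
import subprocess

JOB_SCRIPT = "job.sh"

# Native SLURM submission
subprocess.run(["sbatch", JOB_SCRIPT], check=True)

# Torque-style submission routed to the same SLURM cluster through the
# compatibility wrapper
subprocess.run(["qsub", JOB_SCRIPT], check=True)
```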

PDG Notes
- Troy Dawson gave an update on what SL6 will be like; also Fermi STS, based on Fedora (good for desktops/laptops)
  http://indico.cern.ch/contributionDisplay.py?contribId=16&sessionId=15&confId=45282
- CERN: moving from CVS to SVN (long overlap); the new computer centre will be green (its heat to be used for heating other buildings); continuous rounds of procurement, aiming for 50,000 HEP-SPEC06 (~500 systems) in October 2009; Skype is now tolerated if correctly configured
  http://indico.cern.ch/contributionDisplay.py?contribId=28&sessionId=19&confId=45282
- Umeå: file servers have mirrored system disks on USB sticks, one internal and one external

PDG Notes 2
- Many sites reported problems with A/C "cooking" computers, with surprisingly few resulting failures
- A few sites, e.g. LAL and Oxford, have installed SL5 WNs; LAL reported problems with PBSPro
- TRIUMF uses Bacula for backups
- One observation: Intel servers lasted longer than AMD over a four-year period
- The CERN security talk was good
  http://indico.cern.ch/contributionDisplay.py?contribId=40&sessionId=5&confId=45282

PDG Notes 3
- Benchmarking from CERN
  http://indico.cern.ch/contributionDisplay.py?contribId=29&sessionId=23&confId=45282

  Platform   Processor            HEP-SPEC06
  Baseline   2 x L5420 2.5 GHz     67.75
  1          2 x E5530 2.4 GHz    100.17
  2          2 x E5540 2.53 GHz   103.22
  3          2 x 2376HE 2.3 GHz    70.47

  Platform   OS     Compiler    HEP-SPEC06
  1          SLC5   gcc 4.1.2   100.17
  1          SLC4   gcc 3.4.6    84.88
  2          SLC5   gcc 4.1.2   103.22
  2          SLC5   gcc 4.3     106.45
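To put the tables in perspective, a minimal sketch in Python using only the HEP-SPEC06 figures above (the slides quote no power draw, so this shows raw throughput ratios, not the HS06-per-watt behind the ~50% efficiency claim):

```python
# HEP-SPEC06 scores copied from the CERN benchmarking tables above.
scores = {
    "Baseline: 2 x L5420 2.5 GHz (Harpertown)": 67.75,
    "Platform 1: 2 x E5530 2.4 GHz (Nehalem)": 100.17,
    "Platform 2: 2 x E5540 2.53 GHz (Nehalem)": 103.22,
    "Platform 3: 2 x 2376HE 2.3 GHz": 70.47,
}

baseline = scores["Baseline: 2 x L5420 2.5 GHz (Harpertown)"]
for name, hs06 in scores.items():
    print(f"{name}: {hs06:6.2f} HS06 ({hs06 / baseline:.2f}x baseline)")

# Compiler/OS effect on platform 1 (second table): about +18%
print(f"SLC5/gcc 4.1.2 vs SLC4/gcc 3.4.6: {100.17 / 84.88 - 1:+.0%}")
```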

PDG Notes 4
- 10G card tests (they require some work)
  http://indico.cern.ch/contributionDisplay.py?contribId=24&sessionId=9&confId=45282
- The CERN LUSTRE evaluation talk was good
  http://indico.cern.ch/contributionDisplay.py?contribId=14&sessionId=10&confId=45282
- Benchmarking of VMs: talks from INFN and Victoria; excellent CPU performance, but the interesting finding is slow I/O
  http://indico.cern.ch/contributionDisplay.py?contribId=5&sessionId=7&confId=45282

Conclusions
- HEPiX is a very useful forum, open to any interested site
  - Complementary to the GDB: focused on fabric management rather than grid services
  - No formal membership: just register for a meeting...
- Next meeting in Berkeley, October 26-30
  - Look for the announcement soon at http://www.hepix.org
  - Ask to be registered on the HEPiX mailing list (low volume...)
  - A new track on monitoring tools and practices?
- The material produced is mainly the presentations given during the workshops
  - Look at the agendas if interested in a presentation; start at http://www.hepix.org