HEPiX report
Helge Meinhard, Harry Renshall / CERN-IT
Computing Seminar / After-C5, 27 May 2005
Helge Meinhard (at) cern.ch / HEPiX report

Outline
- Site reports, other topics (Helge Meinhard)
- Storage topics, Large Cluster SIG workshop (Harry Renshall)
- LCSIG subject: batch schedulers locally and for the Grid
HEPiX
- Global organisation of service managers and support staff providing computing facilities for HEP
- Covering all platforms of interest (Unix/Linux, Windows, Grid, ...)
- Aim: present recent work and future plans, share experience
- Meetings ~2 per year (spring in Europe, autumn in North America)
HEPiX Spring 2005 (1)
- Held 09-13 May 2005 at Forschungszentrum Karlsruhe (FZK), Germany
  - Broad multidisciplinary research centre, 3500 employees
  - Home of the German Tier 1 centre for LCG
- Format:
  - Mon-Wed: site reports, HEPiX talks
  - Thu-Fri: Large Cluster SIG on batch schedulers locally and for the Grid
- Well organised by Jos van Wezel and helpers
- Full details incl. slides: http://www.fzk.de/hepix
HEPiX Spring 2005 (2)
- 101 participants, of which 12 from CERN-IT: Baud, Bell, Cass, Christaller, Field, Iven, Lemaitre, Meinhard, Pace, Polok, Renshall, Silverman
- Other sites: FZK, DESY Hamburg, SLAC, LAL, FNAL, INFN, JLAB, DAPNIA, RAL, Prague, TRIUMF, NIKHEF, GSI, IN2P3, CNRS, DESY Zeuthen, Caspur, Aachen, Braunschweig, PIC, PSI, London eSc centre, Wisconsin
- Vendors: DataDirectNet, IBM, Platform, Altair, Sun
- 60 talks, of which 11 from CERN
  - Cass organised and chaired the LCSIG workshop
  - Silverman chaired two discussion sessions and provided a fine trip report
Next meetings
- SLAC, 10-14 October 2005
- Rome, 03-07 April 2006
- European meetings after Rome, tentatively:
  - Spring 2007: DESY Hamburg
  - Spring 2008: CERN
- Further application: GSI
Politics
- Budget cuts in all major North American labs (≥ 5%)
- SLAC: accelerator restarted after a six-month stop due to an electrical accident. BaBar ends data taking in 2008 (rather than 2010); focus shifting towards the Linac Coherent Light Source, i.e. away from HEP
- IHEPCCC and HEPiX
  - Guy Wormser explained the desired role of HEPiX: forming ad-hoc task forces to give technical advice to IHEPCCC
  - Broad agreement, with a few caveats
  - Suggested topics: HEP VO, Linux, software life-cycle, storage, collaborative tools, security
  - Work on Linux and storage has already started
Linux
- Scientific Linux is one year old, now also on x86_64
- 4.0 now in beta; 3.0.5 after the next RH quarterly update (i.e. imminent)
- Proposals:
  - No one is going for ia64: drop support
  - Split the system and experiment compilers
  - LHC startup: certify, but don't deploy, SL4; delay the decision between SL4 and SL5 until late 2006
- Most labs seem prepared to skip SL4, but some uncertainty remains; feedback requested
- Is the 2.6 kernel needed for the LHC experiments' online systems and for laptops?
Hardware and OS (1)
- Opterons more and more used
  - In production at FZK, GSI, LAL, INFN, ...
  - Considered/tendered by SLAC, FNAL, ...
- FNAL: a task force concluded that Opteron-based machines are mature for production farms, both under i386 and x86_64
  - 64/64 offers up to 40% more performance than 32/64 or 32/32
  - Up to 30% less power consumption than Xeons of comparable performance
Hardware and OS (2)
- Blade systems not taking off in HEP; (almost) everyone is buying 1U systems
- LAN: FNAL moving to GigE as the standard
- Sites with high-end tape drives are rare
  - Most sites are using LTO-2 or LTO-1
  - Nobody mentioned high-end evaluations
Hardware and OS (3)
- Storage: still no clear trend
  - CERN price of 1.4 EUR/GB usable remains unbeaten
  - Some specialised solutions (FNAL: Ibrix; BNL: Panasas); Panasas perhaps interesting for AFS services
- File systems
  - XFS found to be the only reliable, well-scaling solution for large non-parallel applications
  - Parallel file systems being investigated (mostly in non-HEP contexts); not a high-priority item
Windows, mail, authentication
- Very few Windows talks; are all Windows problems solved?
- SMS used at CERN (talk) and FNAL; too expensive for smaller sites
- Spam fighting is still an issue; greylisting (delayed delivery)?
- X.509 certificates mentioned a few times
- Talk by Pace on Web authentication and cross-authentication with Kerberos
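The greylisting idea raised on this slide can be sketched in a few lines: the mail server temporarily rejects (4xx) the first delivery attempt from an unseen (client IP, sender, recipient) triplet and accepts a retry after a delay, since legitimate MTAs retry but most spam engines do not. A minimal illustrative sketch (not a real MTA policy daemon; class name and delay value are assumptions for illustration):

```python
import time

GREYLIST_DELAY = 300  # seconds a new triplet must wait before acceptance (illustrative value)

class Greylist:
    """Toy in-memory greylist keyed on (client IP, sender, recipient)."""

    def __init__(self, delay=GREYLIST_DELAY):
        self.delay = delay
        self.first_seen = {}  # triplet -> timestamp of first delivery attempt

    def check(self, client_ip, sender, recipient, now=None):
        """Return an SMTP-style verdict for one delivery attempt."""
        now = time.time() if now is None else now
        triplet = (client_ip, sender, recipient)
        first = self.first_seen.setdefault(triplet, now)
        if now - first < self.delay:
            # First contact (or retry too soon): temporary rejection
            return "450 4.2.0 Greylisted, please retry later"
        return "250 OK"

gl = Greylist()
print(gl.check("192.0.2.1", "a@example.org", "b@example.net", now=1000.0))
# first attempt: temporarily rejected with 450
print(gl.check("192.0.2.1", "a@example.org", "b@example.net", now=1400.0))
# retry after the delay has elapsed: accepted with 250
```

This is why greylisting amounts to "delayed delivery": mail from well-behaved senders arrives, but only after the retry interval. A production implementation would also expire old triplets and whitelist known-good relays.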
Collaborative tools
- Videoconferencing: tried twice during the conference; not brilliant
- Wiki: TWiki used at GSI
- CMS: Plone used at FNAL
- Both with good success, taking off beyond the initial target users
Service challenges
- SC2: throughput goal reached, but operations and monitoring far from adequate for a service
- Setting up for (staged) SC3, including a few selected Tier 2 centres
  - Role of the Tier 2s starting to be clarified
- Tools for SC3: gLite file transfer software, LCG file catalogue, lightweight DPM (disk pool manager)
Collaboration and sharing
- Scientific Linux is a big success
- Areas to work on:
  - Monitoring: CERN uses Lemon (talk well received); FNAL and IN2P3 use NGOP. Everyone else (including SLAC!) tries tools like Nagios or Ganglia, which have documented scalability limits
  - Installation and configuration: some (increasing) use of Quattor; many other sites mentioned Rocks
  - DPM: only a few sites use Castor; more interest in dCache