HEPiX report
Helge Meinhard, Harry Renshall / CERN-IT
Computing Seminar / After-C5, 27 May 2005

Slide 2: Outline
- Site reports, other topics (Helge Meinhard)
- Storage topics, Large Cluster SIG workshop (Harry Renshall)
- LCSIG subject: batch schedulers locally and for the Grid

Slide 3: HEPiX
- Global organisation of service managers and support staff providing computing facilities for HEP
- Covering all platforms of interest (Unix/Linux, Windows, Grid, ...)
- Aim: present recent work and future plans, share experience
- Meetings ~2 per year (spring in Europe, autumn in North America)

Slide 4: HEPiX Spring 2005 (1)
- Held 09–13 May 2005 at Forschungszentrum Karlsruhe (FZK), Germany
  - Broad multidisciplinary research centre, 3500 employees
  - Home of the German Tier 1 centre for LCG
- Format:
  - Mon–Wed: site reports, HEPiX talks
  - Thu–Fri: Large Cluster SIG on batch schedulers locally and for the Grid
- Well organised by Jos van Wezel and helpers
- Full details incl. slides:

Slide 5: HEPiX Spring 2005 (2)
- 101 participants, of which 12 from CERN-IT
  - Baud, Bell, Cass, Christaller, Field, Iven, Lemaitre, Meinhard, Pace, Polok, Renshall, Silverman
- Other sites: FZK, DESY Hamburg, SLAC, LAL, FNAL, INFN, JLAB, DAPNIA, RAL, Prague, TRIUMF, NIKHEF, GSI, IN2P3, CNRS, DESY Zeuthen, Caspur, Aachen, Braunschweig, PIC, PSI, London e-Science Centre, Wisconsin
- Vendors: DataDirect Networks, IBM, Platform, Altair, Sun
- 60 talks, of which 11 from CERN
  - Cass: organised and chaired the LCSIG workshop
  - Silverman: chaired two discussion sessions, provided a fine trip report

Slide 6: Next meetings
- SLAC, 10–14 October 2005
- Rome, 03–07 April 2006
- European meetings after Rome, tentatively:
  - Spring 2007: DESY Hamburg
  - Spring 2008: CERN
  - Further application (to host): GSI

Slide 7: Politics
- Budget cuts in all major North American labs (≥ 5%)
- SLAC:
  - Accelerator restarted after a six-month stop due to an electrical accident
  - BaBar ends data taking in 2008 (rather than 2010)
  - Changing focus towards the Linac Coherent Light Source, i.e. away from HEP
- IHEPCCC and HEPiX:
  - Guy Wormser explained the desired role of HEPiX (forming ad-hoc task forces for technical advice to IHEPCCC)
  - Broad agreement, with a few caveats
  - Suggested topics: HEP VO, Linux, software life-cycle, storage, collaborative tools, security
  - Work has already started on Linux and storage

Slide 8: Linux
- Scientific Linux is 1 year old, now also on x86_64; version 4 in beta now, final after the next RH quarterly update (i.e. imminent)
- Proposals:
  - Nobody going for ia64 – drop support
  - Split system and experiment compilers
- LHC startup: certify, but don't deploy, SL4; delay the decision between SL4 and SL5 until late 2006
  - Most labs seem prepared to skip SL4, but some uncertainty is left – feedback requested
- 2.6 kernel needed for LHC experiments online and for laptops?

Slide 9: Hardware and OS (1)
- Opterons more and more used
  - In production at FZK, GSI, LAL, INFN, ...
  - Considered/tendered by SLAC, FNAL, ...
- FNAL: task force concluded that Opteron-based machines are mature for production farms, both under i386 and x86_64
  - 64/64 (64-bit OS, 64-bit binaries) offers up to 40% more performance than 32/64 or 32/32
  - Up to 30% less power consumption than Xeons of comparable performance

Slide 10: Hardware and OS (2)
- Blade systems not taking off in HEP
  - (Almost) everyone buying 1U systems
- LAN: FNAL moving to GigE as standard
- Sites with high-end tape drives rare
  - Most sites are using LTO-2 or LTO-1
  - Nobody mentioned high-end evaluations

Slide 11: Hardware and OS (3)
- Storage: still no clear trend
  - CERN price of 1.4 EUR/GB usable unbeaten
  - Some specialised solutions (FNAL: Ibrix; BNL: Panasas)
  - Panasas perhaps interesting for AFS services
- File systems:
  - XFS found to be the only reliable, well-scaling solution for large non-parallel applications
  - Parallel file systems being investigated (mostly in non-HEP contexts), no high-priority item

Slide 12: Windows, mail, authentication
- Very few Windows talks
  - Are all Windows problems solved?
- SMS used at CERN (talk) and FNAL
  - Too expensive for smaller sites
- Spam fighting is still an issue
  - Greylisting = delayed delivery? (see the sketch after this slide)
- X.509 certificates mentioned a few times
- Talk by Pace on Web authentication and cross-authentication with Kerberos
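A quick note on greylisting, since the slide poses it as a question: a greylisting mail server temporarily rejects the first delivery attempt from an unknown (client IP, sender, recipient) triple and accepts mail only once the sender retries after a minimum delay. Legitimate MTAs queue and retry; most spam engines do not. Below is a minimal Python sketch of the decision logic, with hypothetical names, not taken from any particular mail server:

```python
import time

# Greylisting sketch (hypothetical, illustrative only): temporarily reject
# the first delivery attempt from an unseen (client IP, sender, recipient)
# triple, and accept retries that arrive after a minimum delay.
GREYLIST_DELAY = 300  # seconds a triple must age before mail is accepted

_first_seen: dict[tuple[str, str, str], float] = {}  # triple -> first attempt

def check_delivery(client_ip: str, sender: str, recipient: str) -> str:
    """Return an SMTP-style verdict for one delivery attempt."""
    triple = (client_ip, sender, recipient)
    now = time.time()
    first = _first_seen.setdefault(triple, now)
    if now - first < GREYLIST_DELAY:
        # Temporary failure: a well-behaved MTA will queue and retry later
        return "451 4.7.1 Greylisted, please try again later"
    return "250 OK"  # the triple has aged past the delay: accept
```

The "delayed delivery" in the slide's question is exactly this retry interval: mail from first-time senders arrives late by at least the configured delay.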

Slide 13: Collaborative tools
- Videoconferencing: tried twice during the conference – not brilliant
- Wikis: TWiki used at GSI and CMS; Plone used at FNAL
  - Both with good success, taking off beyond the initial target users

Slide 14: Service challenges
- SC2: throughput goal reached, but operations and monitoring far from adequate for a service
- Setting up for (staged) SC3, including a few selected Tier 2 centres
  - Role of Tier 2s starting to be clarified
- Tools for SC3: gLite file transfer software, LCG file catalog, lightweight DPM (disk pool manager)

Slide 15: Collaboration and sharing
- Scientific Linux is a big success
- Areas to work on:
  - Monitoring: CERN uses Lemon (talk well received), FNAL and IN2P3 use NGOP; everyone else (including SLAC!) tries tools like Nagios or Ganglia, which have documented scalability limits
  - Installation and configuration: some (increasing) use of Quattor; many other sites mentioned Rocks
  - Disk pool managers: only a few sites use Castor, more interest in dCache