HEPiX/HEPNT report
Helge Meinhard, Alberto Pace, Denise Heagerty / CERN-IT
Computing Seminar, 05 November 2003

HEPiX/HEPNT Autumn 2003 (1)
- Held 20 – 24 October at TRIUMF, Vancouver
- Format:
  - Mon – Wed: Site reports, HEPiX and HEPNT talks
  - Thu: Large Cluster SIG on security issues
  - Fri am: Parallel sessions on storage, security and Windows issues
- Excellent organisation by Corrie Kost / TRIUMF
- Weather not too tempting to skip sessions
- Full details:

HEPiX/HEPNT Autumn 2003 (2)
- 76 participants, of which 11 from CERN: Barring, Durand, Heagerty, Iven, Kleinwort, Lopienski, Meinhard, Neilson, Pace, Silverman, D Smith
- 59 talks, of which 19 from CERN
- Vendor presence (Ibrix, Panasas, RedHat, Microsoft)
- Friday pm: WestGrid
- Next meetings:
  - Spring: May 24th to 28th in Edinburgh
  - Autumn: BNL expressing interest

Highlights
- Unix-related (me)
- Windows-related (Alberto Pace)
- Security-related (Denise Heagerty)

Site reports: Hardware (1)
- Major investments: Xeons, Solaris, IBM SP, Athlon MP
- Disappointing experience with HT (Hyper-Threading)
- Increasing interest:
  - Blades (e.g. WestGrid – 14 blades, each with 2 x Xeon 3.06 GHz, in a 7U chassis)
  - AMD Opteron
- US sites require cluster mgmt software with HW acquisitions

Site reports: Hardware (2)
- Physical limits becoming ever more important:
  - Floor space
  - UPS
  - Cooling power
  - Weight capacity per unit of floor space
- Disk storage:
  - Some reports of bad experience with IDE-based file servers
  - No clear tendency

Site reports: Software (1)
- RedHat 6.x diminishing, but still in production use at many sites
- Solaris 9 being rolled out
- Multiple compilers needed on Linux (IN2P3: 6), but not considered a big problem
- SLAC looking at Solaris/x86
- AFS not considered a problem at all
- SLAC organising a ‘best practices’ workshop (complementing LISA and USENIX workshops) – see f)

Site reports: Software (2)
- NFS in use at large scale
- Kerberos 5: no clear preference for MIT vs. Heimdal vs. Microsoft; lots of home-grown glue around to keep them synchronised
- Reports about migrating out of Remedy
- DESY and GSI happy with SuSE and Debian (except for laptops)
- Condor getting more popular, considered as an LSF replacement; Sun GridEngine mentioned as well

CERN talks
- Castor evolution (Durand)
- Fabric mgmt tools (Kleinwort)
- CVS status and tools (Lopienski)
- Solaris service update (Lopienski)
- Console management (Meinhard)
- ADC tests and benchmarks (Iven)
- New HEPiX scripts (Iven)
- LCG deployment status and issues (Neilson)
- LCG scalability issues (D Smith)
- … plus Windows- and/or security-related talks

RedHat support (1)
- Tue: talk by Don Langley / RedHat
  - Described the new model and technical features of RHEL 3, released the day after
  - RHEL releases every 12…18 months, with guaranteed support for 5 years
  - Yearly subscriptions (per machine) grant access to sources, binaries, updates, and support (different levels)
  - Said that RedHat would be able to find the right model for HEP
- Reactions: not everyone was convinced; no clear commitment to react to our needs; not the right level

RedHat support (2)
- Wed: interactive discussion
  - Labs currently using RH wish to stay with it and go for RHEL; a HEP-wide agreement is preferred
  - The high level of HEP-internal Linux support must be taken into account by RH
  - HEP- or site-wide licences much preferred over per-node ones
  - SLAC, FNAL and CERN will jointly contact RedHat to negotiate on behalf of HEP
  - Other HEP sites should be able to join if they so wish

Other highlights (1)
- PDSF Host Database project (Shane Canon)
  - Inventory mgmt, purchase information, technical details, connectivity, … (see the sketch below)
  - Similar objectives to some combination of BIS/CDB, HMS, LanDB, …
- Unix and AFS backup at FNAL (Jack Schmidt)
  - Investigated TSM, Veritas, Amanda, some smaller vendors
  - Decided to go for TiBS (True incremental Backup System, a Carnegie Mellon offspring) – 1.6 TB in 5 hours
  - Large disk cache of backup data on server
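To make the scope of such a host database concrete, here is a minimal sketch assuming an SQLite backend; every table and column name is invented for illustration and is not the actual PDSF (or BIS/CDB, HMS, LanDB) schema.

```python
# Illustrative only: a toy inventory table covering the kinds of data listed
# above (purchase information, technical details, connectivity). All names
# are hypothetical; sqlite3 is used just to keep the sketch self-contained.
import sqlite3

conn = sqlite3.connect("hostdb.sqlite")
conn.executescript("""
CREATE TABLE IF NOT EXISTS host (
    hostname      TEXT PRIMARY KEY,
    vendor        TEXT,      -- purchase information
    purchase_date TEXT,
    cpu_model     TEXT,      -- technical details
    memory_mb     INTEGER,
    rack          TEXT,      -- physical location
    switch_port   TEXT       -- network connectivity
);
""")

# Register one fictional node
conn.execute(
    "INSERT OR REPLACE INTO host VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("pdsf-n001", "SomeVendor", "2003-06-15", "Xeon 3.06 GHz", 2048, "R12", "sw3/17"),
)
conn.commit()
conn.close()
```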

Other highlights (2)
- Mosix and PBS clustering at TRIUMF (Steve McDonald)
  - Challenge: provide interactive and batch services on a small budget
  - 7 dual-processor systems: three run OpenMosix all the time (one of them serving as head node); the rest run OpenPBS when jobs are queued and migrate to the Mosix pool when idle (see the sketch below)
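A rough sketch of that idle/busy switching policy; it assumes the standard qstat command for querying the local PBS queue, while the openMosix toggle is a hypothetical placeholder for whatever site-specific mechanism TRIUMF actually used.

```python
# Not TRIUMF's actual script: a minimal cron-style policy that keeps a node
# on OpenPBS while jobs are queued and hands it to openMosix when idle.
import subprocess

def pbs_queue_empty() -> bool:
    """True if `qstat` reports no jobs (empty output)."""
    result = subprocess.run(["qstat"], capture_output=True, text=True)
    return result.stdout.strip() == ""

def set_mosix_migration(enabled: bool) -> None:
    """Hypothetical placeholder: enable/disable openMosix process migration,
    e.g. via a local admin script or the openMosix /proc interface."""
    print("openMosix migration", "on" if enabled else "off")

if __name__ == "__main__":
    # Run periodically: prefer local PBS work, otherwise donate idle cycles
    # to the Mosix pool.
    set_mosix_migration(pbs_queue_empty())
```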

Mass storage workshop
- A meta-meeting… discussed what and how to discuss
- Joining in by VRVS: FNAL, RAL, IN2P3, DESY, FZK, …
- Launched a forum for MSS and their interoperability; list:
- Each site to report (to Don Petravick / FNAL) about capabilities and needs concerning WAN interfaces, security, monitoring and protocols, file transfer protocols, mgmt protocols, replica system
- Next meeting: VRVS conference in December
- Next HEPiX: LCSIG will be on storage

My personal comments
- Excellent means of building relationships of trust between centres
  - No impression of cheating by anybody
- Clear, concrete steps towards sharing tools…
  - LAL using CERN printer wizard
  - CERN using SLAC console software
  - A lot of interest in ELFms
- … and even when not sharing implementations, sharing ideas and information is very beneficial