LCG Fabric Status
Bernd Panzer-Steindel, CERN/IT
12 March 2003

Slide 2: Major achievements during the last 3 months (1)

- In December 2002 the ALICE-IT Computing Data Challenge reached a sustained (7-day) dataflow of … MB/s from an emulated DAQ system into the HSM system CASTOR (disk and tape), with peak values of 350 MB/s. The goal for MDC4 in 2002 was 200 MB/s. (A back-of-envelope estimate of the implied data volume follows below.)
- In January ATLAS successfully used 230 testbed nodes for an Online Computing Data Challenge (postponed from October 2002) to test event building and run control issues.
- In January and February Lxplus and Lxbatch were moved to RH 7.3 at the >75% level (very high support load, problems in reaching all users).
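As a quick sanity check on the week-long sustained run quoted above, the sketch below converts the 200 MB/s goal and the 350 MB/s peak into total data volumes over 7 days. This is a minimal illustration only: the sustained rate actually achieved is not reproduced in this transcript, and the two rates are used purely as lower and upper bounds.

```python
# Back-of-envelope estimate of the data volume moved in a week-long
# sustained run into CASTOR. The achieved sustained rate is missing from
# the transcript, so the 200 MB/s goal and the 350 MB/s peak are used as
# illustrative lower and upper bounds.

SECONDS_PER_DAY = 24 * 60 * 60
DAYS = 7

def volume_tb(rate_mb_per_s: float, days: int = DAYS) -> float:
    """Total data volume in TB for a given sustained rate in MB/s."""
    return rate_mb_per_s * SECONDS_PER_DAY * days / 1e6  # 1 TB = 1e6 MB

for rate in (200, 350):  # MB/s: MDC4 goal and observed peak
    print(f"{rate} MB/s sustained for {DAYS} days -> ~{volume_tb(rate):.0f} TB")
# 200 MB/s -> ~121 TB, 350 MB/s -> ~212 TB
```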

Slide 3: Major achievements during the last 3 months (2)

- In February the migration of the COMPASS data from Objectivity to Oracle finished (270 TB processed).
- LCG/EDG consolidation: hardware resource allocation and planning improved.
- HEP-wide availability of Oracle licenses.
- (More details and a summary of the major IT activities over the last 6 months can be found in the last IT FOCUS presentation.)

Slide 4: Important decisions during the last 3 months

- No more investment into tape infrastructure between 2003 and 2005; the necessary Computing Data Challenges need to take equipment from the production systems.
- Fixed budget lines for extra resources (CPU, disks): COCOTIME takes care of the non-LHC experiments, the PEB of the LHC experiments.
- Back to 'mainframe'-type resource sharing within a fixed envelope: an agreed budget profile for the prototype and an agreed budget profile for the physics part (Lxbatch, Lxplus), ~1.2 million per year.

Slide 5: Manpower changes (I)

- In January/February, reorganization in IT: better mapping of the group structure onto the LCG project, merge of three groups (ADC, DS, FIO) into two (ADC, FIO), and clear separation of the Grid deployment activities into one newly created group (GD).
  1. GD, Grid Deployment. Group Leader: Ian Bird. Testing, certification and integration of Grid middleware, LCG-1 preparation, management of the CERN Grid infrastructure, first-level support.
  2. FIO, Fabric Infrastructure and Operations. Group Leader: Tony Cass. Lxbatch, Lxplus, system administration, computer centre infrastructure, fabric automation, CASTOR service.
  3. ADC, Architecture and Data Challenges. Group Leader: Bernd Panzer-Steindel. Linux expertise, AFS service, CASTOR development, Data Challenge organization, openlab.
- More details:

Slide 6: Manpower changes (II)

- Olof Bärring will replace Jean-Philippe Baud as the project leader of the CASTOR project. There is now an open post for an additional CASTOR developer.
- Two LCG persons have left the Fabric area; one will be replaced in April.

Slide 7: Milestones (I)

- L2M Production Pilot I starts, 1/15/03: hardware was made available, but not heavily used due to the late LCG-1 software definition (milestone okay).
- L3M Deployment of large monitoring prototype, 1/6/03: disk server performance monitoring and CPU server exception monitoring have been running since last year; since end of February both metrics are collected on all systems, with an Oracle database used as repository. Very good consolidation between WP4 (DataGrid) and PVSS (commercial); the IT reorganization streamlined the activities, with installation+configuration and monitoring now in one group (FIO) (milestone okay, small delay). (A sketch of such a metric repository follows below.)
- L3M Basic infrastructure for the Prototype in production, 3/10/03: 40 nodes done since January, 150 expected in mid-March (milestone okay).
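To make the monitoring milestone concrete, here is a minimal sketch of the kind of per-node metric repository described above: performance and exception measurements from many servers collected into one central database. The real prototype uses an Oracle repository fed by the WP4/PVSS tool chain; the table layout, column names and the sqlite3 stand-in below are assumptions for illustration only.

```python
# Illustrative sketch of a central metric repository (NOT the actual
# WP4/PVSS/Oracle schema): per-node performance and exception metrics
# inserted into a single table and queried back.
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # stand-in for the Oracle repository
conn.execute("""
    CREATE TABLE node_metrics (
        node      TEXT,   -- e.g. a disk or CPU server host name (hypothetical)
        metric    TEXT,   -- e.g. 'disk_read_mb_s' or 'cpu_exception' (hypothetical)
        value     REAL,
        timestamp REAL
    )
""")

def record_metric(node: str, metric: str, value: float) -> None:
    """Store one measurement; a real collector would batch and time-stamp centrally."""
    conn.execute(
        "INSERT INTO node_metrics VALUES (?, ?, ?, ?)",
        (node, metric, value, time.time()),
    )

# Hypothetical samples: one disk performance metric and one CPU exception count.
record_metric("diskserver042", "disk_read_mb_s", 48.5)
record_metric("cpuserver117", "cpu_exception", 1)

for row in conn.execute(
    "SELECT node, metric, value FROM node_metrics ORDER BY node"
).fetchall():
    print(row)
```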

Slide 8: Milestones (II)

- L3M CPU and disk capacity upgrade of Lxbatch, 2/24/03: delayed until end of March (delays in the acquisition process) (milestone late).
- L3M Node capacity upgrade of the Prototype, 2/24/03: delayed until end of April (delays in the acquisition process) (milestone late).
- L3M Lxbatch job scheduler pilot, 2/3/03: postponed to April due to the late definition of the LCG-1 software; intensive collaboration between the GD and FIO groups has started (Grid ⇔ Fabric) (milestone late).

Slide 9: Personnel in the Fabric area

Table of personnel in the Fabric area (FTE figures not reproduced in this transcript), by funding source LCG (Q4 2002), LCG (Q1 2003), EDG and IT, for the activities:
- System Management and Operation
- Development (management automation)
- Data Storage Management
- Fabric + Grid Security
- Grid-Fabric Interface

The focus of the IT personnel is on service. After the IT reorganization this table is currently under review and will change.

Slide 10: Outlook for the next 3 months (I)

- Move of a major part of the equipment into the vault in Building 513, especially the move of the STK tape silos.
- Delivery of the 28 new 9940B tape drives and removal of the old 9940A tape drives.
- 1 GByte/s IT Computing Data Challenge in April. The overlap period of old and new tape drives offers an opportunity to test a 'CDR' system at large scale: 50 CPU servers + 50 disk servers + 50 tape drives coupled through mock data challenge programs and CASTOR, giving 1 GByte/s into CASTOR onto tape. (See the throughput sketch below.)
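A rough check of what the 1 GByte/s target asks of each component of the challenge setup is sketched below, assuming the aggregate stream is spread evenly over the 50 tape drives and 50 disk servers named on the slide. The per-drive native rate quoted in the comment is an assumption, not taken from the slide.

```python
# What the 1 GByte/s data challenge requires per component if the load is
# spread evenly. The ~30 MB/s native rate assumed for a 9940B drive is an
# outside assumption used only to show that the target leaves some headroom.

TARGET_MB_S = 1000   # 1 GByte/s aggregate into CASTOR onto tape
TAPE_DRIVES = 50
DISK_SERVERS = 50

per_drive = TARGET_MB_S / TAPE_DRIVES
per_disk_server = TARGET_MB_S / DISK_SERVERS

print(f"Required per tape drive:  ~{per_drive:.0f} MB/s")   # ~20 MB/s
print(f"Required per disk server: ~{per_disk_server:.0f} MB/s")
```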

Slide 11: Outlook for the next 3 months (II)

- Delivery and installation of this year's new resources (CPU and disk servers) in April.
- New cost calculations for the CERN T0/T1 centre until the beginning of April.
- Integration of Lxbatch into the LCG-1 environment.
- Multiple smaller IT and ALICE-IT Computing Data Challenges.