Presentation transcript:

23. March 2004 / Bernd Panzer-Steindel, CERN/IT / slide 1

LCG Workshop Computing Fabric

23. March 2004 / Bernd Panzer-Steindel, CERN/IT / slide 2

The preparations for the MoU papers with the different Tiers have started; these will define the responsibilities and functionalities of the Tiers.

CERN is a Tier0 and also has a Tier1, which in the current planning is nothing 'special', i.e. the same as any other Tier1. For example:
- T0: stores the raw data on tape, does the first-pass reconstruction, ...
- T1: stores part of a raw-data copy on tape, holds x % of the ESD on disk, produces AODs, supports Tier2s, provides access to repositories (software, data, calibration, etc.), ...
- T2: AOD analysis, ...
(just examples)

To do this, some assumptions about the computing model have to be made, especially about the data flow, e.g.:
- one copy of the raw data per experiment is kept at CERN; a second one is distributed online to all Tier1s associated with an experiment
- the first-pass reconstruction is done online at CERN, the ESD is stored, and 2(?) copies are sent to the Tier1s
- re-processings are done at the Tier1s and a copy of each ESD version is sent back to CERN
- frequent AOD productions are done from the ESDs at the Tier1 centers and distributed to the T2 centers
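To make the data-flow assumptions above concrete, here is a minimal sketch in Python that models the raw/ESD/AOD movements between T0, Tier1 and Tier2 as described on the slide. All class, function and dataset names are illustrative assumptions, not part of any LCG software.

```python
# Illustrative sketch only: names and copy counts follow the slide's bullets;
# this is not an LCG API.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    tape: set = field(default_factory=set)   # datasets held on tape
    disk: set = field(default_factory=set)   # datasets held on disk

def first_pass(t0, tier1s, run):
    """T0: keep one raw copy on tape, do the first-pass reconstruction;
    the second raw copy is split across the Tier1s, ESD copies are shipped out."""
    t0.tape.add(f"{run}/RAW")
    esd = f"{run}/ESD-v1"
    t0.tape.add(esd)
    for i, t1 in enumerate(tier1s):
        t1.tape.add(f"{run}/RAW-part{i}")   # each T1 stores part of a raw copy
        t1.disk.add(esd)                    # 2(?) ESD copies go to the Tier1s

def reprocess(t1, t0, run, version):
    """Tier1: re-process; a copy of each new ESD version is sent back to CERN."""
    esd = f"{run}/ESD-v{version}"
    t1.disk.add(esd)
    t0.tape.add(esd)

def produce_aod(t1, tier2s, run, version):
    """Tier1: frequent AOD production from the ESD, distributed to the Tier2s."""
    aod = f"{run}/AOD-v{version}"
    t1.disk.add(aod)
    for t2 in tier2s:
        t2.disk.add(aod)                    # T2: AOD analysis

t0, t1s, t2s = Tier("CERN-T0"), [Tier("T1-A"), Tier("T1-B")], [Tier("T2-X")]
first_pass(t0, t1s, "run0001")
reprocess(t1s[0], t0, "run0001", 2)
produce_aod(t1s[0], t2s, "run0001", 2)
print(t0.tape, t1s[0].disk, t2s[0].disk, sep="\n")
```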

23. March 2004 / Bernd Panzer-Steindel, CERN/IT / slide 3

'Technical' coupling of the Tier 0 / Tier 1 centers

[Diagram: coupling areas placed along a "dependency level" axis running from "independent developments" through "sharing experience" and "common activities" to "synchronization". The areas shown are:]
- Online raw data and ESD copy, WAN
- Operating system (Linux version x)
- Cluster management
- Mass storage
- Local security
- Basic infrastructure (box size, electricity, cooling)
- Large disk pools
- Equipment quality, stability
- Mass storage interfaces
- Filesystems, repositories (software, calibration, metadata, etc.)
- Batch systems
- Grid middleware
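The "dependency level" scale can be expressed as an ordered classification of the coupling areas listed above. The scale itself is taken from the slide; the placement of each area on it is a hypothetical example, since the positions in the original diagram are not recoverable from this transcript.

```python
# Illustrative only: the ordering of the scale is from the slide, the
# placement of each area on it is a hypothetical example.
from enum import IntEnum

class Coupling(IntEnum):            # increasing dependency level
    INDEPENDENT_DEVELOPMENTS = 0
    SHARING_EXPERIENCE = 1
    COMMON_ACTIVITIES = 2
    SYNCHRONIZATION = 3

coupling_t0_t1 = {
    "Basic infrastructure (box size, electricity, cooling)": Coupling.INDEPENDENT_DEVELOPMENTS,
    "Local security": Coupling.INDEPENDENT_DEVELOPMENTS,
    "Operating system (Linux version x)": Coupling.SHARING_EXPERIENCE,
    "Cluster management": Coupling.SHARING_EXPERIENCE,
    "Equipment quality, stability": Coupling.SHARING_EXPERIENCE,
    "Large disk pools": Coupling.COMMON_ACTIVITIES,
    "Mass storage": Coupling.COMMON_ACTIVITIES,
    "Batch systems": Coupling.COMMON_ACTIVITIES,
    "Mass storage interfaces": Coupling.SYNCHRONIZATION,
    "Filesystems, repositories (software, calibration, metadata, etc.)": Coupling.SYNCHRONIZATION,
    "Grid middleware": Coupling.SYNCHRONIZATION,
    "Online raw data and ESD copy, WAN": Coupling.SYNCHRONIZATION,
}

for area, level in sorted(coupling_t0_t1.items(), key=lambda kv: kv[1]):
    print(f"{level.name:26s} {area}")
```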

23. March 2004 / Bernd Panzer-Steindel, CERN/IT / slide 4

Some 'random' questions...
- Where does one have to put more collaboration effort?
- Do we need dedicated WAN 'data challenges' between T0 and Tier1 centers and among the Tier1 centers? Within the GRID middleware framework, or separately for bulk synchronous data transfers?
- GRID middleware ↔ fabric interface: how much does the middleware affect the choices made in the fabric (OS, security, node monitoring, etc.)?
- Is HEPiX frequent enough to share experience?
- RedHat commercial support versus HEP support, MySQL versus Oracle
- Mixture of support for LCG, other experiments/sciences, and LHC experiment specific software (e.g. AliEn), ...
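For the question on bulk synchronous data transfers outside the Grid middleware, a minimal sketch of what a T0 to Tier1 WAN 'data challenge' driver could look like. It assumes only that the standard GridFTP client globus-url-copy is installed on the node; the endpoint, paths and file sizes are placeholders.

```python
# Illustrative sketch: drives a set of bulk T0 -> Tier1 file transfers in
# parallel and reports the aggregate rate. Hostname and paths are placeholders;
# it assumes only that the standard GridFTP client 'globus-url-copy' exists.
import subprocess, time
from concurrent.futures import ThreadPoolExecutor

TIER1 = "gsiftp://tier1.example.org/data/challenge"       # placeholder endpoint
FILES = [f"/data/export/challenge/file{i:03d}" for i in range(8)]
SIZE_GB = 1.0                                              # assumed file size

def transfer(path: str) -> bool:
    """Copy one file from the local (T0) store to the Tier1 endpoint."""
    src = f"file://{path}"
    dst = f"{TIER1}/{path.rsplit('/', 1)[-1]}"
    return subprocess.run(["globus-url-copy", src, dst]).returncode == 0

start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:            # 4 concurrent streams
    results = list(pool.map(transfer, FILES))
elapsed = time.time() - start
ok = sum(results)
print(f"{ok}/{len(FILES)} transfers succeeded, "
      f"aggregate rate ~{ok * SIZE_GB * 8 / elapsed:.1f} Gbit/s")
```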