LCG Service Challenge Phase 4: Activity plan and impact on the network infrastructure

Presentation transcript:

Slide 1: Service Challenge Phase 4: Activity plan and impact on the network infrastructure
Tiziana Ferrari, INFN CNAF
CCR, Roma, October 2005

Slide 2: Service Challenge 3
– Now in the SC3 service phase (Sep – Dec 2005)
– ALICE, CMS and LHCb have all started their production
– ATLAS is preparing for a November 1st start
– Service Challenge 4: May 2006 – Sep 2006

Slide 3: SC3: INFN Sites (1/2)
CNAF:
– LCG File Catalogue (LFC)
– File Transfer Service (FTS)
– MyProxy server and BDII
– Storage: CASTOR with SRM interface
– Interim installations of software components from some of the experiments (not currently available from LCG) – VObox

Slide 4: SC3: INFN sites (2/2)
Torino (ALICE):
– FTS, LFC, dCache (LCG 2.6.0)
– Storage space: 2 TB
Milano (ATLAS):
– FTS, LFC, DPM
– Storage space: 5.29 TB
Pisa (ATLAS/CMS):
– FTS, PhEDEx, POOL file catalogue, PubDB, LFC, DPM
– Storage space: 5 TB available, 5 TB expected
Legnaro (CMS):
– FTS, PhEDEx, POOL file catalogue, PubDB, DPM (1 pool, 80 GB)
– Storage space: 4 TB
Bari (ATLAS/CMS):
– FTS, PhEDEx, POOL file catalogue, PubDB, LFC, dCache, DPM
– Storage space: 5 TB available
LHCb:
– CNAF

Slide 5: INFN Tier-1: SC4 short-term plans (1/2)
SC4: storage and computing resources will be shared with production.
Storage:
– Oct 2005: data disk 50 TB (CASTOR front-end); WAN → disk performance: 125 MB/s (demonstrated in SC3)
– Oct 2005: tape 200 TB (4 x 9940B + 6 x LTO-2 drives), drives shared with production; WAN → tape performance: mean sustained ~50 MB/s (SC3 throughput phase, July 2005)
Computing:
– Oct 2005: min 1200 kSI2K, max 1550 kSI2K (as the farm is shared)
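As a back-of-the-envelope illustration (mine, not part of the original slides), the sketch below turns the SC3-demonstrated rates into transfer times for the capacities quoted above; only the capacities and rates come from the slide, the rest is assumption.

    # Back-of-the-envelope sketch: how long the rates demonstrated in SC3 would take
    # to move the SC4 capacities quoted above (decimal units, no overhead assumed).
    DISK_CAPACITY_TB = 50    # data disk behind the CASTOR front-end
    TAPE_CAPACITY_TB = 200   # tape capacity (4 x 9940B + 6 x LTO-2 drives)
    WAN_TO_DISK_MB_S = 125   # MB/s, demonstrated in SC3
    WAN_TO_TAPE_MB_S = 50    # MB/s, mean sustained in the SC3 throughput phase

    def fill_time_days(capacity_tb: float, rate_mb_s: float) -> float:
        """Days needed to move capacity_tb terabytes at rate_mb_s megabytes per second."""
        seconds = capacity_tb * 1e6 / rate_mb_s   # 1 TB = 1e6 MB
        return seconds / 86400

    print(f"Disk: {fill_time_days(DISK_CAPACITY_TB, WAN_TO_DISK_MB_S):.1f} days")  # ~4.6 days
    print(f"Tape: {fill_time_days(TAPE_CAPACITY_TB, WAN_TO_TAPE_MB_S):.1f} days")  # ~46.3 days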

Slide 6: INFN Tier-1: SC4 short-term plans (2/2)
Network:
– Oct 2005: 2 x 1 Gigabit Ethernet links CNAF ↔ GARR, dedicated to SC traffic to/from CERN
– Future: ongoing upgrade to 10 Gigabit Ethernet, CNAF ↔ GARR, dedicated to SC; usage of policy routing at the GARR access point; type of connectivity to the INFN Tier-2s under discussion; backup link Tier-1 ↔ Tier-1 (Karlsruhe) under discussion
Software:
– Oct 2005: SRM/CASTOR and FTS; farm middleware: LCG 2.6
– Future: dCache and StoRM under evaluation (for disk-only SRMs); possible upgrade to CASTOR v2 under evaluation (end of 2005)
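A small sanity check of mine (not from the slides), comparing the nominal capacity of the dedicated CNAF ↔ GARR links with the WAN → disk rate demonstrated in SC3; decimal units and no protocol overhead are assumed.

    # Nominal capacity of the dedicated CNAF <-> GARR links versus the SC3 disk rate.
    GBIT_TO_MB_S = 1000 / 8                     # 1 Gb/s ~= 125 MB/s (decimal units)

    current_links_mb_s = 2 * 1 * GBIT_TO_MB_S   # 2 x 1 GbE  -> ~250 MB/s
    future_link_mb_s = 10 * GBIT_TO_MB_S        # 10 GbE     -> ~1250 MB/s
    sc3_disk_mb_s = 125                         # WAN -> disk, demonstrated in SC3

    print(f"2 x 1 GbE capacity: {current_links_mb_s:.0f} MB/s")
    print(f"10 GbE capacity   : {future_link_mb_s:.0f} MB/s")
    print(f"SC3 disk rate uses {sc3_disk_mb_s / current_links_mb_s:.0%} of the current links")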

Slide 7: What target rates for Tier-2s?
LHC Computing Grid: Technical Design Report
– Access link at 1 Gb/s by the time LHC starts
– Traffic between one Tier-1 and one Tier-2 ≈ 10% of the Tier-0 ↔ Tier-1 traffic
– Estimates of Tier-1 ↔ Tier-2 traffic represent an upper limit
– Tier-2 ↔ Tier-2 replication will lower the load on the Tier-1
*WSF: with safety factor
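To make the 10% rule of thumb concrete, here is a rough worked example (mine, not in the slides): it applies the rule to CNAF's nominal Tier-0 → Tier-1 rate of 200 MB/s, quoted later in the deck, and compares the result with the recommended 1 Gb/s access link; treat it as an order-of-magnitude illustration only.

    # TDR rule of thumb quoted above: per-Tier-2 traffic with one Tier-1 ~= 10%
    # of the Tier-0 <-> Tier-1 traffic. The 200 MB/s figure is CNAF's nominal
    # rate from a later slide; everything else is an assumption of this sketch.
    TIER0_TIER1_MB_S = 200    # nominal Tier-0 -> Tier-1 rate for CNAF
    TIER2_FRACTION = 0.10     # TDR rule of thumb
    ACCESS_LINK_MBIT = 1000   # recommended Tier-2 access link, Mb/s

    tier2_mb_s = TIER0_TIER1_MB_S * TIER2_FRACTION   # ~20 MB/s
    tier2_mbit = tier2_mb_s * 8                      # ~160 Mb/s

    print(f"Estimated Tier-1 <-> Tier-2 traffic: ~{tier2_mbit:.0f} Mb/s")
    print(f"Fraction of a 1 Gb/s access link   : ~{tier2_mbit / ACCESS_LINK_MBIT:.0%}")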

Slide 8: TDR bandwidth estimation
LHC Computing Grid Technical Design Report (June 2005):

Slide 9: Expected rates per Tier-2 (rough / with safety factor)
Experiment | IN (real data, Mb/s) | IN (MC, Mb/s) | Total IN (Mb/s) | OUT (MC, Mb/s)
ALICE      | 31.9 / …             | … / …         | … / …           | … / 11.2
ATLAS      | 32.9 / …             | … / …         | … / …           | … / 10.2
CMS        | 68.5 / …             | … / …         | … / …           | … / …
LHCb       | 0 / 0                | 0 / …         | … / …           | … / 15.3

Slide 10: Requirements vs. Tier-2 specifications
Site     | Experiment(s) | 2006 BGA/BEA in (Mb/s)               | 2006 BGA/BEA out (Mb/s) | Total needs IN / OUT (Mb/s)
Bari     | ALICE, CMS    | 500 / …                              | … / …                   | … / …
Catania  | ALICE         | 0 / 1000                             | … / …                   | 118.3 / 11.2
Frascati | ATLAS         | 100 / 200                            | … / …                   | 113.3 / 10.2
Legnaro  | ALICE, CMS    | 450 / …                              | … / …                   | … / …
Milano   | ATLAS         | 100 / 1000                           | … / …                   | 113.3 / 10.2
Napoli   | ATLAS         | 1000 / 1000                          | … / …                   | 113.3 / 10.2
Pisa     | CMS           | not specified                        | … / …                   | … / …
Roma1    | ATLAS, CMS    | 100 / 2500 (ATLAS), 200 / 1000 (CMS) | … / …                   | 318.8 / …
Torino   | ALICE         | 70 / 1000                            | … / …                   | … / 11.2
With GigE (Oct 2005): No / Yes / No / Yes / No / Yes

Slide 11: The Worldwide LHC Computing Grid – Level 1 Service Milestones
– 31 Dec 05: Tier-0/1 high-performance network operational at CERN and 3 Tier-1s.
– 31 Dec 05: 750 MB/s data recording demonstration at CERN: data generator → disk → tape, sustaining 750 MB/s for one week using the CASTOR 2 mass storage system.
– Jan 06 – Feb 06: throughput tests (SC4a).
– 28 Feb 06: all required software for baseline services deployed and operational at all Tier-1s and at least 20 Tier-2 sites.
– Mar 06: Tier-0/1 high-performance network operational at CERN and 6 Tier-1s, at least 3 via GEANT (SC4b).
– 30 Apr 06: Service Challenge 4 set-up: set-up complete and basic service demonstrated, capable of running experiment-supplied packaged test jobs; data distribution tested.
– 30 Apr 06: 1.0 GB/s data recording demonstration at CERN: data generator → disk → tape, sustaining 1.0 GB/s for one week using the CASTOR 2 mass storage system and the new tape equipment.
– 31 May 06 (SC4): Service Challenge 4: start of stable service phase, including all Tier-1s and 40 Tier-2 sites.

Slide 12: The Worldwide LHC Computing Grid – Level 1 Service Milestones
The service must be able to support the full computing model of each experiment, including simulation and end-user batch analysis at Tier-2 sites.
Criteria for successful completion of SC4, by the end of the service phase (end of September 2006):
1. 8 Tier-1s and 20 Tier-2s must have demonstrated availability better than 90% of the levels specified in the WLCG MoU [adjusted for sites that do not provide a 24-hour service].
2. Success rate of standard application test jobs greater than 90% (excluding failures due to the applications environment and non-availability of sites).
3. Performance and throughput tests complete. The performance goal for each Tier-1 is the nominal data rate that the centre must sustain during LHC operation (200 MB/s for CNAF): CERN disk → network → Tier-1 tape. The throughput test goal is to maintain for one week an average throughput of 1.6 GB/s from disk at CERN to tape at the Tier-1 sites. All Tier-1 sites must participate.
– 30 Sep 06: 1.6 GB/s data recording demonstration at CERN: data generator → disk → tape, sustaining 1.6 GB/s for one week using the CASTOR mass storage system.
– 30 Sep 06: initial LHC service in operation, capable of handling the full nominal data rate between CERN and Tier-1s. The service will be used for extended testing of the computing systems of the four experiments, for simulation and for processing of cosmic-ray data. During the following six months each site will build up to the full throughput needed for LHC operation, which is twice the nominal data rate.
24-hour operational coverage is required at all Tier-1 centres from January 2007.
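A rough worked calculation (mine, not from the slides) of what the SC4 throughput criteria above imply for CNAF, using only the figures quoted on this slide: the 1.6 GB/s aggregate target, CNAF's 200 MB/s nominal rate, and the subsequent ramp-up to twice the nominal rate.

    # Rough arithmetic on the SC4 throughput criteria, using the figures quoted above.
    AGGREGATE_GB_S = 1.6       # CERN disk -> Tier-1 tape, summed over all Tier-1s
    CNAF_NOMINAL_MB_S = 200    # nominal rate CNAF must sustain during LHC operation
    WEEK_SECONDS = 7 * 24 * 3600

    cnaf_share = CNAF_NOMINAL_MB_S / (AGGREGATE_GB_S * 1000)   # fraction of the aggregate
    weekly_pb = AGGREGATE_GB_S * WEEK_SECONDS / 1e6             # PB moved in the one-week test
    cnaf_ramp_mb_s = 2 * CNAF_NOMINAL_MB_S                      # "twice the nominal data rate"

    print(f"CNAF share of the 1.6 GB/s aggregate : {cnaf_share:.1%}")       # ~12.5%
    print(f"Data moved in the one-week test      : ~{weekly_pb:.2f} PB")    # ~0.97 PB
    print(f"CNAF target after the six-month ramp : {cnaf_ramp_mb_s} MB/s")  # 400 MB/s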

Slide 13: Milestones in brief
– Jan 2006: throughput tests (rate targets not specified)
– May 2006: high-speed network infrastructure operational
– By Sep 2006: average throughput of 1.6 GB/s from disk at CERN to tape at the Tier-1 sites (nominal rate for LHC operation; 200 MB/s for CNAF)
– Oct 2006 – Mar 2007: average throughput up to twice the nominal rate

Slide 14: Target (nominal) data rates for Tier-1 sites and CERN in SC4