Data Challenge with the Grid in ATLAS


Data Challenge with the Grid in ATLAS
First Chinese-French Workshop on LHC Physics and Associated Grid Computing
IHEP, Beijing, 14 December 2006
Ghita Rahal, CC-IN2P3

Outline
- Presentation of the Lyon Tier-1
- Tests and data challenges
- ATLAS data management and data model
- Job processing and priorities
- Some statistics on job production at CC-IN2P3

Lyon Tier-1 Situation (LCG-France sites)
- Tier-1: CC-IN2P3 (Lyon), with an Analysis Facility
- Tier-2: GRIF (Paris-IdF: CEA/DAPNIA, LAL, LLR, LPNHE, IPNO), Subatech (Nantes), LPC (Clermont-Ferrand)
- Tier-3: IPHC (Strasbourg), LAPP (Annecy), CPPM (Marseille)

Lyon Tier-1: LCG-France
- Set up, develop and maintain an LCG Tier-1 and an Analysis Facility at CC-IN2P3 (Lyon)
- Promote and coordinate the integration of French (and non-French) Tier-2/Tier-3 sites into the LCG collaboration
Schedule:
- Started in July 2004
- Two phases: 2004-2008, development and ramp-up; from 2009, cruise phase
- Equipment budget for the Tier-1 and Analysis Facility, 2005-2008: 16.4 M€
- Asset: previous experience with large collaborations (BaBar, D0, ...)

Lyon Tier-1 Contribution
Lyon contributes to all four LHC experiments (ALICE, ATLAS, CMS, LHCb).

Tests and Data Challenges

Tier-1 Tests at Lyon in 2006
Lyon took part in all test challenges to gain experience.
Performance tests (June-July 2006 and September-October 2006):
- Goal: transfers from the Tier-0 to the 10 Tier-1s at 800 MB/s in total, of which 120 MB/s to Lyon
- Goal: transfers from the Tier-1 (Lyon) to the Tier-2s at 75 MB/s
ATLAS functional test (October 22-30):
- Goal: check system functionality during data transfer from CERN to the Tier-1s and within the Tier-1 clouds

Performance Tests T0 => T1, July 2006
- Almost reached the goal for a few hours
- Problems on various sides (availability of the sites, of the services, ...; see summary)

Performance Tests T0 => T1, October 2006
- Overall weaker throughput, with several experiments testing simultaneously; it was decided to run multi-VO tests in December and January

ATLAS Performance Tests T1 => T2, July 2006
Continuous transfers from the Tier-1 to the Tier-2 sites, initiated by the Tier-1.

ATLAS Performance Tests T1 => T2, July 2006
- Transfers to 7 sites, Tier-2 and non-Tier-2, simultaneously
- Some bandwidth limitations for simultaneous transfers

Multi-VO Tests
- Two-day tests involving multiple VOs
- Generate data at the Tier-0 according to each experiment's transfer rate
- Transfer to all sites

Multi-VO Tests: ALICE-ATLAS-CMS transfers to the Lyon Tier-1
- Reached nominal transfer rates after a few improvements

Conclusions from the ATLAS Tests
Achievements:
- Global bandwidth from the Tier-0 reached 600 MB/s (700 MB/s peak) for a few hours; transfers included most of the Tier-1 sites
- Lyon was one of the very few sites exercising the whole chain, down to the distribution to the Tier-2 and Tier-3 sites
Problems (concerning all sites, Lyon included):
- The overall system is still unstable, with instabilities coming from the different components (DQ2, FTS, dCache, ...)
- Lack of monitoring => develop monitoring of FTS channels and dCache disks
Next: focus on Tier-1 to Tier-2/Tier-3 transfers.

ATLAS Data Management and Data Model

ATLAS Data Management
- ATLAS uses 3 grids (LCG, OSG and NorduGrid), each with its own services, which requires an ATLAS layer on top of the Grid middleware.
- ATLAS model of computing and data distribution:
  - Storage capacity spread over the Tier-1 sites
  - Data stored in different storage systems with different access technologies
  - Computing power distributed over all Tiers (1, 2, 3) to produce Monte-Carlo and to (re)process data
- The tool to distribute the data must:
  - Allow high-performance and reliable data movement
  - Include information about data location and replication
  - Support multiple grid flavours

ATLAS Tier-1 Data Flow (2008)
Real data storage, reprocessing and distribution (plus the simulation and analysis data flows). Key rates from the flattened diagram:
- RAW (from the Tier-0 disk buffer, to tape): 1.6 GB/file, 0.02 Hz, 1.7K files/day, 32 MB/s, 2.7 TB/day
- ESD1 (from Tier-0): 0.5 GB/file, 0.02 Hz, 1.7K files/day, 10 MB/s, 0.8 TB/day
- AODm1 (from Tier-0): 500 MB/file, 0.04 Hz, 3.4K files/day, 20 MB/s, 1.6 TB/day
- ESD2 (reprocessed on the CPU farm; exchanged with the other Tier-1s): 0.5 GB/file, 0.02 Hz, 1.7K files/day, 10 MB/s, 0.8 TB/day
- AOD2: 10 MB/file, 0.2 Hz, 17K files/day, 2 MB/s, 0.16 TB/day
- AODm2: 500 MB/file, 0.04 Hz, 3.4K files/day, 20 MB/s, 1.6 TB/day; 0.036 Hz (18 MB/s, 1.44 TB/day) to the associated Tier-2s and 0.004 Hz (2 MB/s, 0.16 TB/day) to the other Tier-1s and to tape
- Tape writing (RAW + ESD2 + AODm2): 0.044 Hz, 3.74K files/day, 44 MB/s, 3.66 TB/day
- Tier-2 data access for analysis: ESDm, AODm
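The rates above are internally consistent: bandwidth = file size x rate, daily volume = bandwidth x 86400 s. A throwaway sketch to verify, with the values copied from the slide (names are illustrative):

```python
DAY = 86400  # seconds per day

def flow(size_mb, rate_hz):
    """Return (MB/s, files/day, TB/day) for a given file size and rate."""
    mb_s = size_mb * rate_hz
    files_day = rate_hz * DAY
    tb_day = mb_s * DAY / 1e6
    return mb_s, files_day, tb_day

raw = flow(1600, 0.02)   # RAW: 1.6 GB/file at 0.02 Hz
esd = flow(500, 0.02)    # ESD: 0.5 GB/file at 0.02 Hz
print(raw)               # -> 32 MB/s, ~1.7K files/day, ~2.7 TB/day

# Tape stream = RAW + reprocessed ESD2 + AODm2 (500 MB at 0.004 Hz):
tape_mb_s = raw[0] + esd[0] + 500 * 0.004   # 32 + 10 + 2 = 44 MB/s
```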

Bandwidth Requirements (MB/s)
Input:
- From Tier-0: RAW1 43.20, ESD1 51.00, AODm1 20.00 (total from T0: 114.20)
- From BNL: 26.40
- From the other Tier-1s: 14.90
- From the associated Tier-2s (simulated MC): 17.82
- Total input: 173.32
Output:
- (row label lost in extraction): 29.70
- To the other Tier-1s, AODm2: 21.60
- Total to Tier-2s: 74.93
- Total output: 126.23
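As a quick consistency check, the totals in the table are the sums of their rows. A small sketch (row labels are shorthand, not official names):

```python
# Input side of the bandwidth table, in MB/s (values from the slide).
input_rows = {
    "Tier-0 RAW1": 43.20,
    "Tier-0 ESD1": 51.00,
    "Tier-0 AODm1": 20.00,
    "BNL": 26.40,
    "Other Tier-1s": 14.90,
    "Tier-2 SIM MC": 17.82,
}
total_input = sum(input_rows.values())
assert abs(total_input - 173.32) < 0.01   # matches the quoted total

# Tier-0 subtotal:
t0_subtotal = 43.20 + 51.00 + 20.00
assert abs(t0_subtotal - 114.20) < 0.01

# Output side: the three rows sum to the quoted 126.23 MB/s,
# confirming that 29.70 and 21.60 are separate rows.
assert abs(29.70 + 21.60 + 74.93 - 126.23) < 0.01
```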

ATLAS Tool: DDM
- Tool to distribute the data among ATLAS sites, based on the DQ2 package.
- Functionality: transfer data between sites, catalogue the data at each site, remove old data at each site. All data movement is automated using a subscription system.
- Implementation, based on external services: a set of DQ2 services running at each site (T0, T1), gLite FTS, and LFC as the local replica catalogue.
- Tests of the tool: the performance and functional tests shown earlier.
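The subscription idea can be sketched as follows. This is a minimal illustration, not the real DQ2 API: the catalogue contents, site names and function names are all invented. A site subscribes to a dataset; a periodic agent compares the dataset's file list with the site's local replica catalogue and queues FTS-style transfers for whatever is missing.

```python
# Hypothetical central dataset catalogue: dataset -> logical file names.
central_catalogue = {
    "mc12.005001.AOD": ["f1", "f2", "f3"],
}
# Local replica catalogues: files already present at each site.
site_replicas = {"LYON": {"f1"}}
# Subscriptions: (site, dataset) pairs the site wants kept complete.
subscriptions = [("LYON", "mc12.005001.AOD")]

def resolve_missing(site, dataset):
    """Files the site still needs for this dataset."""
    return [f for f in central_catalogue[dataset]
            if f not in site_replicas[site]]

def run_agent():
    """One pass of the transfer agent: queue missing files per subscription."""
    queue = []
    for site, dataset in subscriptions:
        for lfn in resolve_missing(site, dataset):
            queue.append((lfn, "CERN", site))  # source chosen naively here
    return queue

print(run_agent())   # [('f2', 'CERN', 'LYON'), ('f3', 'CERN', 'LYON')]
```

In the real system the queued entries would become FTS transfer jobs, and the local replica catalogue role is played by LFC.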

Job Processing and Priorities

Priorities at the Sites
- Sites deal with different types of jobs: user analysis jobs, production, ...
- Priorities must be established to favour processing/reprocessing of data or Monte-Carlo production.
- Roles are defined at the level of the grid certificates.
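One way to picture certificate-role priorities is a share-based scheduler that favours whichever role is furthest below its target share of running jobs. This is an illustrative sketch, not the actual batch-system mechanism; the 80% production share is the target quoted in the talk, the running-job counts are invented:

```python
# Target shares per role and a snapshot of currently running jobs.
shares = {"production": 0.80, "analysis": 0.20}
running = {"production": 60, "analysis": 40}

def next_role():
    """Pick the role with the largest deficit against its target share."""
    total = sum(running.values())
    deficit = {r: shares[r] - running[r] / total for r in shares}
    return max(deficit, key=deficit.get)

print(next_role())   # 'production' (60% running vs the 80% target)
```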

ATLAS Production Jobs versus All ATLAS Jobs
Goal: 80% of the jobs running at the site are production jobs. (Plots of all ATLAS jobs and of production jobs; the observed production share is ~93%.)

ATLAS Production Jobs: Submission
This only works if enough production jobs are submitted by ATLAS. Example: the number of running production jobs peaks at ~60%, because there are not enough jobs in the queue and no steady stream of jobs arriving; this needs better tuning at the Grid level. (Plots: production jobs in the queue, production jobs running.)

Some Statistics on Job Production at CC-IN2P3

Production Jobs at the French Sites
22% of LCG production (15% for July-September).

Global Contribution

Summary
- The tools to transfer, process and analyse the data on the grid are identified and have been extensively tested. Reliability, stability and monitoring still need to improve.
- Communication between the computing-expert and physicist communities is better established, but needs more sustained effort from both sides.