CC IN2P3 - T1 for CMS: CSA07: production and transfer


CC IN2P3 - T1 for CMS: CSA07: production and transfer Nelli PUKHAEVA 30 November 2007

CC-IN2P3 in the CMS computing model CC-IN2P3 is one of the CMS T1 sites; the target is to provide around 10% of the overall T1 resources. Several national and foreign T2s are attached to the CC T1: the CC-IN2P3 T2, the GRIF federated T2, the Belgium T2 (IIHE, UCL), and the Beijing T2. Nelli Pukhaeva / CMS at CC

CC in the CMS computing model At CC-IN2P3 the T1, T2 and T3 functions share the same infrastructure. ~230 TB is fully deployed and available; the current split is: T1 - 160 TB, T2 - 50 TB, T3 - 20 TB. The split between T1 and T2 is done through dedicated CEs: T1 - cclcgceli01 (SL4); T2 - cclcgceli02 (SL4), cclcgceli05 (SL3). The split of compute resources is done by job priority, according to the CE and the job's user (production vs. others).
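The ~230 TB disk split above can be sanity-checked in a couple of lines; the percentages below are simply each tier's share of the total as given on the slide:

```python
# Disk split at CC-IN2P3 as stated on the slide, in TB.
split = {"T1": 160, "T2": 50, "T3": 20}

total = sum(split.values())  # 230 TB deployed in all
shares = {tier: round(100 * tb / total) for tier, tb in split.items()}

print(total)   # 230
print(shares)  # {'T1': 70, 'T2': 22, 'T3': 9}
```

So roughly 70% of the disk serves the T1 role, with the remainder split between T2 and T3.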

CSA07 general objectives for the T1 site CSA07 ran from 1 Oct. to 10 Nov. It was an important event to assess CC-IN2P3's capabilities for transfers, CMS production jobs, etc. Tasks to be completed for physics: AlCaReco, background data sets, 'enhanced lepton' reco.

Daily CMS operations at CC A number of issues with storage and transfers: HPSS, dCache, SRM, AFS… The merging-jobs problem has been solved. Cleaning of unmerged files. What can we do about the 2 GB job problem? FTS parameter tuning for the transfer matrix. Can we avoid accessing the HF libraries on /afs and instead copy them to the WN?
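The last point, copying the HF libraries to the worker node instead of reading them over AFS, could be done with a small stage-in step at job start. This is only a sketch: the helper name `stage_in` and the example AFS path are hypothetical, not from the slide.

```python
import glob
import os
import shutil
import tempfile

def stage_in(src_dir, patterns=("*.so",)):
    """Copy matching files from a shared directory (e.g. on AFS) to
    node-local scratch, so the job reads them from local disk instead
    of hitting /afs from every worker node."""
    scratch = tempfile.mkdtemp(prefix="staged_libs_")
    for pattern in patterns:
        for src in glob.glob(os.path.join(src_dir, pattern)):
            shutil.copy2(src, scratch)
    return scratch

# Usage (hypothetical AFS path):
# local_dir = stage_in("/afs/in2p3.fr/path/to/hf_libs")
# then point LD_LIBRARY_PATH at local_dir before launching the job.
```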

CMS jobs monitoring One good day for us

Monitoring jobs at CC: CMS jobs at CC-IN2P3
cmsf running jobs - max: 1070, average: 691, current: 538
All running jobs - max: 5002, average: 4146, current: 4068
Percentage - max: 23.0%, average: 17.0%, current: 13.0%
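The percentages on this slide are the CMS (cmsf) share of all running jobs. For the average and current figures they can be reproduced directly (the two maxima need not occur at the same moment, so their ratio is not comparable this way):

```python
def cms_share(cmsf_jobs, all_jobs):
    """CMS (cmsf) fraction of all running jobs, in percent."""
    return round(100.0 * cmsf_jobs / all_jobs, 1)

print(cms_share(538, 4068))  # current: 13.2, quoted as 13.0% on the slide
print(cms_share(691, 4146))  # average: 16.7, quoted as 17.0% on the slide
```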

CMS jobs monitoring at CC (plots: T1 jobs, T2 jobs)

Monitoring of jobs at CC: Grid CMS jobs at CC-IN2P3, Analysis

T1-T1 Transfer

CMS data transfer by PhEDEx

DDT It should start from good links to all T1s. Things have improved, BUT only after one month of hard effort. A reasonable strategy: commission the T1-T1 links first, then the T1-associated-T2 links, then enable any T1-T2 connection through other T1s. We don't see a need for continuous commissioning; rather, recommission after upgrades or after several shutdown periods.

Data transfer at CC: FTS Dedicated channels for each T1. Dedicated channels for each associated T2 - GRIF, Belgium, Beijing. A STAR channel with 10 streams. During CSA07 we tuned the FTS parameters for each channel. https://cctools.in2p3.fr/dcache/monitoring/ftsmonitor.php?vo=cms https://cctools.in2p3.fr/dcache/monitoring/ftschannel.php?channel=STAR-IN2P3&vo=cms
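The channel layout described above can be sketched as a small data structure. Only the three associated T2s and the 10 streams on the STAR channel come from the slide; the channel naming scheme and the `dedicated` flag are assumptions for illustration.

```python
# Sketch of the FTS channel layout at CC-IN2P3 during CSA07.
# Channel names here are an assumed convention, not from the slide.
associated_t2s = ["GRIF", "Belgium", "Beijing"]

channels = {f"{t2}-IN2P3": {"dedicated": True} for t2 in associated_t2s}
# Catch-all STAR channel; the 10 streams figure is from the slide.
channels["STAR-IN2P3"] = {"dedicated": False, "streams": 10}

for name, cfg in sorted(channels.items()):
    print(name, cfg)
```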

CSA07 at CC-IN2P3 CC-IN2P3 has done WELL. There is room for improvement in both transfers and production. Next step: make more efficient use of these resources.

Support Ready for the New Year? Number of jobs: today - 900, of which 250 production jobs. Transfers: why can't we keep transfers stable? We believe a LOT can be done by improving the CMS applications and tools - examples: merging jobs with connection problems, reading data files on /afs (to be avoided), manual cleaning of files (to be avoided), a 64-bit CMSSW build…