
T1 activity report
Farida Fassi, Damien Mercie

Outline
- Current disk allocation
- Disk deployment
- dCache setup
- CMS computing visit and actions triggered
- Brief review of the CMS activities

dCache: current disk allocation
- Import pools: 161TB, of which 120TB is allocated to the CMSCUSTODIAL space token. These pools are the buffer for all incoming data: from T0, T1s, T2s and T3s.
- Export pools: ~27TB, the buffer that holds outgoing data, for both T1 and T2.
- Reprocessing pools: ~327TB. New configuration: data flagged at the import pools will be kept on this buffer (more details later on).
- Unmerged pools (T0D1): 228TB for the T2 and 30TB for the unmerged activity. Due to a technical limitation the T2 disk pools and the unmerged pools have to be the same pools. CMS support has to monitor the status of these pools permanently, since they hold the output of a chaotic activity plus reprocessing; a big pool reduces the impact in case of a space issue.
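As a quick cross-check of the figures on this slide, a minimal bookkeeping sketch in Python; the keys are just labels for the pool groups quoted above, not actual dCache pool group names.

# Bookkeeping sketch only: labels and TB values taken from this slide.
pools_tb = {
    "import": 161,                 # includes 120 TB for the CMSCUSTODIAL space token
    "export": 27,
    "reprocessing": 327,
    "t2_and_unmerged": 228 + 30,   # shared T2 pools (228 TB) + unmerged activity (30 TB)
}
print("Total CMS disk quoted here: ~%d TB" % sum(pools_tb.values()))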

Disk allocation for T2 and T1
T2: prod, to be increased by 100TB (to be checked)
T1 prod pools:
- unmerged area was increased by 32 TB
- import pools were increased by 62 TB
- Debug pools were increased by ~2TB
- 200 TB need to be installed in the reprocessing pools

dCache pools setup (schematic, January 09): HPSS tape back end; import pools, export pools, reprocessing pools, T2 pools and unmerged pools. Data arrives via PhEDEx transfers from T0/T1/T2 and via rfcp; the pools serve production, prod-merging, reprocessing and analysis jobs.

Reprocessing and Prestaging
CMS requests that real and Gen datasets be kept online for one year, to guarantee fast access to the data for reprocessing. For that purpose an implementation was done to tag this kind of data once it lands on the import pools. The data flagged as such is moved to the reprocessing pools and kept there, while a copy of the same data is migrated to HPSS.
Keeping the data on the reprocessing pools was chosen because it is safer and more transparent than keeping it on the import pools.
Prestaging: a prestaging test using the “central srm-bring-online” script was done with TReqS. The test was successful, and the average transfer rate was 106 MB/s. The SRM script will be used for automatic prestaging at CCIN2P3 (see the sketch after this slide).
An automatic mechanism was put in place to clean up the unmerged area.
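For illustration, a minimal sketch of what an automatic prestaging loop around an srm-bring-online client could look like. The client name, its single-SURL invocation and the SURL list handling are assumptions; the actual CCIN2P3 script and its options may differ.

#!/usr/bin/env python
"""Minimal prestaging sketch (assumptions noted in the comments)."""
import subprocess
import sys


def bring_online(surls):
    """Issue one bring-online request per file and return the failures.

    A production script would rather keep the request tokens returned by
    the SRM and poll them until the files are actually staged on disk.
    """
    failed = []
    for surl in surls:
        # Assumes an 'srm-bring-online' client in the PATH taking one SURL.
        rc = subprocess.call(["srm-bring-online", surl])
        if rc != 0:
            failed.append(surl)
    return failed


if __name__ == "__main__":
    # Expects a text file with one SURL per line, e.g.
    # srm://ccsrm.in2p3.fr/pnfs/in2p3.fr/... (path purely illustrative)
    with open(sys.argv[1]) as f:
        surls = [line.strip() for line in f if line.strip()]
    failures = bring_online(surls)
    if failures:
        sys.exit("failed to prestage %d of %d files" % (len(failures), len(surls)))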

CMS computing visit: actions on CCIN2P3 (1)
Since the CMS visit to CCIN2P3 on 23rd October, several actions were triggered. Three meetings were organized (attended by Fabio and Farida) to establish the list of actions and follow up on the progress. The goal is to bring CCIN2P3 to a stable and competitive T1 site.
17 actions were considered, split into three categories depending on the urgency of the issue: high, medium and low. The actions cover mainly the following:
- Transfers in both instances, Prod and Debug
- Specific services deployed at CCIN2P3, such as PhEDEx and Squid
- Storage configuration, in particular dCache
- Reprocessing and prestaging
More details: http://cctools.in2p3.fr/elog/support-cms/111?mail0=Farida+Fassi++%26lt;ffassi@in2p3.fr%26gt;

CMS computing visit: actions on CCIN2P3 (2)
The monitoring of the VOBoxes was improved. This includes the CMS-specific services (all PhEDEx agents, Squids) and the machine-level parameters. More details: Nadia's talk.
Transfers in the Debug instance were improved considerably by the following interventions:
- Re-configuration of the dCache pools, including increasing the disk used for this activity up to 5 pools of 5TB total size
- Re-configuration of the TFC for the new dCache space name
- The configuration of each pool was reviewed to support 30 concurrent transfers
- The FTS configuration was reviewed to handle more than 60 links for this instance; the effort went into the IN2P3<->STAR channels, since diagnostics and issue detection are not easy in this case
- An automatic clean-up of the Debug import area was set up (see the sketch below)
Transfers in the Prod instance: the agents' configuration was reviewed and updated; the prestage agent for transfers was deployed and configured.
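For reference, a minimal sketch of what the automatic clean-up of the Debug import area could look like. The pnfs path and the retention period are illustrative assumptions, not the actual CCIN2P3 cron job.

#!/usr/bin/env python
"""Sketch of an automatic clean-up of the Debug import area."""
import os
import time

DEBUG_IMPORT_AREA = "/pnfs/in2p3.fr/data/cms/debug"  # illustrative path
MAX_AGE_DAYS = 7                                     # illustrative retention


def cleanup(root, max_age_days):
    """Remove debug/LoadTest files older than the retention period."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    os.remove(path)  # debug transfer files are disposable
                    removed += 1
            except OSError:
                pass  # the file may have vanished under a concurrent transfer
    return removed


if __name__ == "__main__":
    print("Removed %d stale debug files" % cleanup(DEBUG_IMPORT_AREA, MAX_AGE_DAYS))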

Brief activity report

Transfer Prod: * -> IN2P3

Transfer rates Prod: * -> IN2P3
- FNAL import, main issue: timeouts; the timeout was increased up to 10000s, giving a stable situation. File sizes up to 40GB.
- Other issues that impacted the import:
  - Scheduled downtime on 4th and 25-26th January
  - SRM issues on 11th, 12th and 17th January
  - Damien's proxy issues

Transfer rates: IN2P3->* (Prod)
- SRM issues on 11th-12th and 17th January
- Little activity, plus scheduled downtime on 4th January 2010
- Issues at the importing sites; scheduled downtime on 25th-26th

Transfer rates: * -> IN2P3 (Debug)
- Proxy issue

Transfer rates: IN2P3->* (Debug)
- More links were added

Reprocessing
Little activity since the end of 2009

Comments/questions are welcome