Status and Plans of the Spanish ATLAS TIER-2 (ES-ATLAS-T2)

Status and Plans of the Spanish ATLAS TIER-2 (ES-ATLAS-T2)
WORKSHOP on 'GRID Computing and e-Science: Data Analysis of ATLAS Experiment and Medical Physics'
Tier-2 Team: G. Amorós, A. Fernández, S. González de la Hoz, A. Lamas, E. Oliver, J. Salt, J. Sánchez, M. Villaplana
Presented by: José F. Salt Cairols, 4-6 October 2010

Overview
1.- The Spanish ATLAS Tier-2
2.- The Role of the ES-ATLAS-T2 within the ATLAS Computing Model
3.- Response of the ES-ATLAS-T2 at the start of LHC
4.- The ES-ATLAS-T2 and the Integration of Morocco's Tier-3
5.- Conclusions and Perspectives

1.- The Spanish ATLAS TIER-2

The ATLAS distributed TIER-2 is conceived as a computing project for the ATLAS experiment. Its main goals are:
- Enable physics analysis for all ATLAS users and, in particular, for the Spanish ATLAS users
- Continuous production of ATLAS MC events
- Contribute to ATLAS and LHC computing common tasks
- Sustainable growth of the infrastructure according to the scheduled ATLAS ramp-up, with stable operation
- Provide user support for the Spanish ATLAS physicists

List of Tier-2 related projects in chronological order:
- 'Acción Especial' (2000-2001)
- LCG-oriented project (2002-2004)
- ATLAS Tier-2 project (development & construction): 2005-2007 (2 years)
- ATLAS Tier-2 project (phase I): 2008-2010 (3 years)
- ATLAS Tier-2 project (phase II): 2011-2013 (3 years), recently awarded (R&D + Service + Infrastructure)

The Tier-2 is distributed over three sites:
- IFAE: Instituto de Física de Altas Energías (Barcelona)
- UAM: Universidad Autónoma de Madrid (Madrid)
- IFIC: Instituto de Física Corpuscular (Valencia)

Deployed equipment: CPU = 10,400 HS06; Disk = 1,045 TB
Human resources: 14 FTE (as of Sept. 2010)

Update of the CPD

2009 was a tough year: IFIC carried out a big upgrade of its CPD (data centre).
The storage is based on SUN X4500 + X4540 servers running Lustre v1.8.
[Figures: present view of the IFIC CPD; details of the renovation work of the IFIC CPD (June 2009); view of the storage resources]

Table: list of the active Tier-2 personnel and their commitments (as of 31 May 2010).
- Underlined names are paid from the project funds.
- Names with asterisk(s) are contracted through the 'Técnico de Apoyo' (Support Technician) programme.
- A. Lamas is not included among the scientific members of the project.

Framing the Tier-2 project in ATLAS

UAM:
- Construction and commissioning of the ATLAS electromagnetic end-cap calorimeter
- Calibration of the ATLAS electromagnetic calorimeter
- ATLAS physics: Higgs search through the 4-lepton decay mode

IFIC:
- Construction and commissioning of the ATLAS hadronic calorimeter (TileCal)
- Construction and commissioning of the ATLAS SCT-Fwd
- Alignment of the Inner Detector
- ATLAS physics: b-tagging algorithms for event selection; MC studies of different processes beyond the SM (Exotics, SUSY, Little Higgs and extra-dimension models); top physics; Higgs searches

IFAE:
- Construction and commissioning of the ATLAS hadronic calorimeter (TileCal)
- Development and deployment of the ATLAS High Level Trigger: second-level tau trigger software, operation (Event Filter Farm); commissioning of event selection software, tau & jet trigger
- ATLAS physics: TileCal calibration; reconstruction and calibration of jet/tau/missing transverse energy; SUSY searches; Standard Model processes; charged Higgs

2.- The role of the ES-ATLAS-T2 within the ATLAS Computing Model

Site quality targets (> 90%):
- Reliability = Uptime / (Total time - Scheduled Downtime - Time_status_was_UNKNOWN)
- Availability = Uptime / (Total time - Time_status_was_UNKNOWN)

With three sites, the distributed Tier-2 is robust against single-site failures: if Prob(1 site down) ≈ 0.05, then Prob(all 3 sites down) ≈ 0.05³ ≈ 10⁻⁴.
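As a quick illustration of these metrics, here is a minimal sketch that computes reliability, availability, and the all-sites-down probability; the hour counts are hypothetical (not taken from the slide), and site failures are assumed independent:

```python
# Minimal sketch of the WLCG-style site metrics quoted above.
# The hour counts below are hypothetical, for illustration only.
total_hours = 720.0          # one 30-day month
scheduled_downtime = 24.0    # planned maintenance
unknown_hours = 10.0         # monitoring status was UNKNOWN
uptime = 660.0

reliability = uptime / (total_hours - scheduled_downtime - unknown_hours)
availability = uptime / (total_hours - unknown_hours)
print(f"reliability  = {reliability:.3f}")   # ~0.962, above the 90% target
print(f"availability = {availability:.3f}")  # ~0.930

# With independent sites, a distributed Tier-2 rarely loses all capacity:
p_site_down = 0.05
print(f"P(all 3 sites down) = {p_site_down**3:.6f}")  # 0.000125, i.e. ~1e-4
```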

Evolution of the T2 resources (April 2010)

Evolution of ALL ATLAS Tier-2s, including the pledges already delivered plus the estimated resources from 2010 onwards (numbers are cumulative):

Year         2010     2011     2012     2013
CPU (HS06)   240000   278000   295000   342200
Disk (TB)    20900    37600    44000    66000

Spanish ATLAS Tier-2, assuming a contribution of 5% of the whole effort (new project pledges 2011-12):

Year         2010    2011    2012    2013
CPU (HS06)   12000   13900   14750   17110
Disk (TB)    1045    1880    2200    3300

Notes: 2010 and 2011 are official requirements; 2012 is a preliminary estimate provided by ATLAS; 2013 is a preliminary estimate made by us.
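Since the Spanish numbers are defined as a flat 5% share of the ATLAS-wide totals, the pledges in the second table can be derived (and checked) directly, as in this small sketch:

```python
# Derive the Spanish Tier-2 pledges as a 5% share of the ATLAS-wide totals.
atlas_totals = {
    # year: (CPU in HS06, disk in TB), from the first table above
    2010: (240000, 20900),
    2011: (278000, 37600),
    2012: (295000, 44000),
    2013: (342200, 66000),
}
share = 0.05
for year, (cpu, disk) in sorted(atlas_totals.items()):
    print(f"{year}: CPU = {cpu * share:,.0f} HS06, disk = {disk * share:,.0f} TB")
# 2010: CPU = 12,000 HS06, disk = 1,045 TB  -- matches the pledged table above
```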

ES-ATLAS-T2 activity groups:
- SA: System Administrators
- CO: Cloud and T2 Operation
- T3: Support to T3
- US: User Support
- AT: ATLAS Computing Tasks
- PM: Operation and Project Management

User Support (US):
- The number of ATLAS end-users is ramping up fast, and physicists have different profiles.
- The T2 project provides the support to the Spanish ATLAS end-users.
- The US activity includes: certificates; job submission; file transfer; answering questions about the ATLAS software; solving problems in the use of the T2 and T3 infrastructure; organising tutorials.

Support to T3
- The T2 sites also host the ATLAS physicists, who need an infrastructure to perform the last stages of physics analysis.
- The experience of the T2 staff is being used to build and maintain such an infrastructure (a Tier-3) so that it properly fulfils those requirements.
- The technical work includes the design and maintenance of a storage system, of a Grid-type computing facility, and of a system for parallel processing (such as PROOF).
- All these systems must be compatible with the T2.
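For the parallel-processing piece, a minimal PyROOT sketch of what such a PROOF-based workflow might look like; the tree name, Lustre path and selector are hypothetical, and PROOF-Lite stands in here for a full Tier-3 PROOF cluster:

```python
# Hedged sketch: parallel n-tuple processing with PROOF from PyROOT.
# Tree name, file path and selector are illustrative assumptions.
import ROOT

proof = ROOT.TProof.Open("lite://")    # PROOF-Lite: all cores of one T3 node
chain = ROOT.TChain("CollectionTree")  # hypothetical tree name
chain.Add("/lustre/ific.uv.es/grid/atlas/user/ntuples/*.root")  # hypothetical path
chain.SetProof()                       # route Process() through the PROOF session
chain.Process("MySelector.C+")         # user-supplied TSelector (hypothetical)
```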

3.- The Response of the ES-ATLAS-T2 at the start of LHC

Distributed Analysis

Analyses of collision events performed on the ES-T2 resources fall into two categories:
- Detector and combined performance
- Physics analysis

Typical analysis:
- The first stage is to submit a job to the Grid (distributed analysis).
- The output files are stored in the corresponding Tier-2 storage.
- Later, the output files can be retrieved for further analysis on the local facilities of the institute.

ATLAS distributed analysis in the ES-T2: two front-ends were used, Ganga and PanDA. They differ in the way they interact with the user and with the Grid resources.
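As an illustration of the Ganga front-end, a minimal job sketch as it might have looked in a Ganga session of that era; the options file and dataset names are hypothetical, and the exact GangaAtlas plugin names are assumptions:

```python
# Hedged Ganga sketch: an Athena analysis job over a DQ2 dataset on the LCG
# backend. Runs inside a `ganga` session, where these classes are predefined.
# Dataset, options file and splitter choice are illustrative only.
j = Job()
j.application = Athena()
j.application.option_file = 'AnalysisSkeleton_topOptions.py'   # hypothetical
j.inputdata = DQ2Dataset()
j.inputdata.dataset = 'data10_7TeV.00152166.physics_MinBias.AOD.f123/'  # hypothetical
j.outputdata = DQ2OutputDataset()     # output registered back in DDM
j.splitter = DQ2JobSplitter()         # one sub-job per group of input files
j.backend = LCG()                     # submit through the EGEE/LCG Grid
j.submit()
```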

Data Management

Data distribution across the tiers (as sketched on the slide):
- Tier-0 (CERN): RAW data and first-pass reconstruction to ESD.
- Tier-1 (10 centres): RAW (3 replicas), ESD (10 replicas), AOD.
- Tier-2 (several centres): dESD, AOD (13 replicas), dAOD.

Access to the data
- Jobs run where the data are located.
- A user can ask for a replica at another site through DaTRI; jobs can then also run at the new location.
[Diagram: a dataset at Tier-2 'A' is replicated via DaTRI to Tier-2 'B'; jobs run at both sites]

ATLAS analysis types (both run at Tier-1/2, with different inputs; output goes from the storage site to the user's local facility via DQ2 or DaTRI):
- Detector and combined performance: input RAW / ESD / dESD → job → n-tuples.
- Physics analysis: input AOD / dAOD → job → DPDs.
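For the retrieval step, the DQ2 end-user tools were the usual route; a minimal sketch of driving them from Python follows (the dataset name is hypothetical):

```python
# Hedged sketch: fetch an analysis output dataset to the local facility by
# calling the dq2-get end-user tool. The dataset name is illustrative only.
import subprocess

dataset = "user10.JoseSalt.MyAnalysis.Ntuple.v1/"  # hypothetical user dataset
subprocess.check_call(["dq2-get", dataset])        # downloads into the cwd
```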

4.- The ES-ATLAS-T2 and the integration of Morocco's T3 (Mo-T3)

From the point of view of the infrastructure:
- Establish and maintain a Tier-3 in Morocco, supported by the CNRST.
- The NREN (MARWAN) is also maintained by the CNRST.
- Period of task identification: February-June 2010.
- It is a stand-alone T3, with important differences with respect to the IFIC Tier-3.

Steps in the integration of Morocco's T3:
1. Evaluation of the usage of Morocco's T3.
2. Memorandum of Understanding for the collaboration between the ES-ATLAS-T2 and the Mo-T3.
3. Request the inclusion of the Mo-T3 in the ToA.
4. Check the advantages obtained.

[Diagram: the hierarchical computing model, from Tier-1 centres (RAL, IN2P3, FNAL, CNAF, FZK, PIC, ICEPP, BNL, TRIUMF, Taipei, NIKHEF) through Tier-2 centres (IFAE, IFIC, UAM, UB, IFCA, CIEMAT, USC, MSU, Prague, Budapest, Cambridge, Krakow, Legnaro) down to Tier-3s, small centres, desktops and portables; the Tier-3s UAM-T3, IFIC-T3, IFAE-T3 and Morocco-T3 hang off the Spanish distributed Tier-2]

Main objective: include the Morocco T3 in the 'Tiers of ATLAS' (ToA).

'Tiers of ATLAS': a joint effort for managing the ATLAS DDM (Distributed Data Management) across the ATLAS tiers. The ToA describes all storage used for reading and writing ATLAS official data, its properties, transfer tools and local catalogues, as well as the topology and associations between sites.

Benefits: data transfer and data access, taking advantage of DDM.

Procedure to finish the process:
- Memorandum of Understanding (MoU).
- Green light from the Spanish authorities.
- Inform PIC (the Spanish Tier-1).
- Evaluation and inclusion in the 'Tiers of ATLAS' list.
- Requirements for ATLAS computing: space tokens (MCDISK, DATADISK); see the example of Valparaíso.
- Add the Mo-T3 to the ToA list.

5.- Conclusions and Perspectives

From the Tier-2 project point of view:
- This project falls entirely within the period of ATLAS as a running experiment, and changes to a more 'transversal' operational point of view.
- The transition from the operational mode with simulated data to the real-data operational mode has been completed; it works without major problems.
- Strong improvement of reliability/availability.
- Infrastructure: strengthen monitoring; optimise the access to Lustre and dCache.
- Computing shifts and core computing tasks are becoming a non-negligible load (an activity group is devoted to this).
- User Support: more effort is needed to get a more efficient system.
- Distributed Analysis: the present status is not 'the end of the story'; DA is evolving.
- Interactive Analysis: provide a stable PROOF facility.

Perspectives:
- Follow-up of the points expressed in the conclusions.
- Sustainability of the infrastructure: resource renewal, personnel, ...
- WLCG interaction/framework within the NGI and EGI (?)

From the Rabat-Valencia collaboration point of view:

Conclusions:
- We are delayed with respect to the previous plans on the ATLAS Grid collaboration-management side.
- The emphasis has been on the HEP and Medical Physics interaction between our two groups: a very positive synergy! Both HEP and Medical Physics have had very good scientific outcomes.
- From the e-Science point of view, a large scope has been achieved.

Perspectives:
- We need to speed up the process of including Morocco's T3 in the ToA list; this achievement will give us a better interplay between our institutions.
- Non-presential meetings (video or audio conference) will be needed to discuss the evolution of the T3-T2 issues.

Backup Slides