Distributed analysis for the ATLAS Experiment in the S.Co.P.E Project


Distributed analysis for the ATLAS Experiment in the S.Co.P.E Project
Gianpaolo Carlino, INFN Napoli
Workshop finale dei Progetti Grid del PON Ricerca 2000-2006 - Avviso 1575, Catania, 10-12 February 2009
Outline: The LHC Physics Motivations and the ATLAS Experiment; The ATLAS Computing Model; The Napoli ATLAS Tier2 & SCoPE

The LHC Physics Motivations
The study of elementary particles and fields and their interactions. The main goal of the LHC is the Higgs discovery; the direct search at LEP set MH > 114.4 GeV. The Standard Model is incomplete, and the LHC addresses several leading questions in particle physics:
- Source of mass (Mg = 0 vs. MW ≈ 80.4 GeV): a Higgs particle? Supersymmetry?
- Are quarks fundamental? Are there more quarks?
- Source of Dark Matter

The LHC Physics Motivations: Supersymmetry (SUSY)
SUSY establishes a symmetry between fermions (matter) and bosons (forces): each particle p with spin s has a SUSY partner p̃ with spin s - 1/2. Examples: q (s = 1/2) → q̃ (s = 0), the squark; g (s = 1) → g̃ (s = 1/2), the gluino. Our known world, and maybe a new world?
Why is SUSY so interesting?
- Unification
- It solves some deep problems of the Standard Model
- If the Lightest SUSY Particle is stable, we have a "natural" explanation of Dark Matter

Physics at LHC: the challenge
Rate (events/s) = σ × L, the cross section of the process times the instantaneous luminosity.

The LHC machine
[Aerial view of the ring near the Lake of Geneva and the airport, with the four experiments ALICE, ATLAS, CMS and LHCb.]
The Large Hadron Collider is a 27 km long collider ring housed in a tunnel about 100 m underground near Geneva. The machine is fully installed and started operation with single beams on 10th September 2008, but after an incident on 19th September it is now delayed for several months, until next spring.

Collisions at LHC
Bunch crossings every 25 ns. Event rate: N = L × σ(pp) ≈ 10^9 interactions/s ⇒ very powerful detectors needed.
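As a quick cross-check of the quoted ~10^9 interactions/s, the sketch below plugs nominal LHC design values into N = L × σ(pp); the numbers used (design luminosity of 10^34 cm^-2 s^-1 and an inelastic pp cross section of ~70 mb) are standard design figures assumed here, not taken from the slide.

```python
# Minimal sketch: interaction rate N = L * sigma(pp) with nominal LHC design values.
# The inputs (L = 1e34 cm^-2 s^-1, sigma_pp ~ 70 mb) are assumed standard figures.

MB_TO_CM2 = 1e-27            # 1 millibarn = 1e-27 cm^2

luminosity = 1e34            # cm^-2 s^-1, nominal LHC design luminosity
sigma_pp_inelastic = 70.0    # mb, approximate inelastic pp cross section

rate = luminosity * sigma_pp_inelastic * MB_TO_CM2   # interactions per second
print(f"Interaction rate ~ {rate:.1e} /s")           # ~7e8, i.e. of order 1e9
```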

ATLAS Collaboration
A Toroidal LHC ApparatuS: 37 countries, 167 institutions, ~2000 scientific authors in total. Italy: 13 institutions, ~200 members.

The ATLAS Experiment
Length: ~46 m; diameter: ~24 m; weight: 7000 tons; ~10^8 electronic channels; ~3000 km of cables.

First LHC Events
The very first beam-splash event from the LHC in ATLAS, recorded at 10:19 on 10th September 2008.

The ATLAS events
ATLAS events are heavy and come in large quantity. Example: an H → bb event at high luminosity.
High-granularity detectors are needed (~10^8 electronic channels) ⇒ very large event size = 1.6 MB.
Event rate = 1 GHz; trigger rate = 200 Hz; event data flow from online to offline = 320 MB/s ⇒ ~2 PB/year of RAW data.
Processing, reprocessing and user analysis need O(10^5) CPUs full time and produce O(PB/year) of derived data.
There is no way to concentrate all the needed computing power and storage capacity at CERN ⇒ necessity of distributed computing and the Grid. The LHC Computing Grid (LCG) provides the world-wide infrastructure used by ATLAS to distribute, process and analyse the data.
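The ~2 PB/year of RAW data follows directly from the trigger rate and the event size; the sketch below redoes that arithmetic, assuming ~10^7 seconds of effective data taking per year, a rule of thumb not stated on the slide.

```python
# Back-of-the-envelope check of the RAW data volume quoted on the slide.
# The effective data-taking time (~1e7 s/year) is an assumed rule of thumb;
# with a lower live time the result matches the quoted ~2 PB/year.

trigger_rate_hz = 200          # events/s written offline
event_size_mb = 1.6            # MB per RAW event
seconds_per_year = 1e7         # assumed effective data-taking time per year

throughput_mb_s = trigger_rate_hz * event_size_mb          # 320 MB/s
raw_volume_pb = throughput_mb_s * seconds_per_year / 1e9   # MB -> PB (decimal)
print(f"Offline data flow: {throughput_mb_s:.0f} MB/s")
print(f"RAW data per year: ~{raw_volume_pb:.1f} PB")        # a few PB/year
```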

The ATLAS Computing Model
Hierarchical computing model optimising the use of the resources: the need is to distribute RAW data for storage and reprocessing, and to distribute data for analysis in various formats, in order to make access for analysis as easy as possible for all members of the collaboration (the "Tier Cloud Model").
Tier-0 at CERN: immediate data processing and detector data quality monitoring; stores all ATLAS data on tape.
Tier-1: data storage and reprocessing of data with better calibration or alignment constants.
Tier-2: simulation and user analysis.
All Tier-1s have predefined (software) channels with CERN and with each other. Tier-2s are associated with one Tier-1 and form a cloud; Tier-2s have a predefined channel with the parent Tier-1 only.
[Diagram: the Tier Cloud Model, with CERN as Tier-0; Tier-1s such as BNL, FZK, RAL, ASGC, PIC, TRIUMF, SARA, NG, Lyon and CNAF; example clouds such as the BNL cloud (SWT2, GLT2, NET2, WT2, MWT2), the CNAF cloud (NA, MI, RM1, LNF) and the Lyon cloud (LAPP, LPC, Tokyo, Beijing, Romania, GRIF); plus Tier-3s.]
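To make the cloud association concrete, here is a small illustrative sketch (not an ATLAS tool) of the Tier-1/Tier-2 relationships described above; the site lists are partial examples taken from the diagram, not an official topology.

```python
# Illustrative sketch of the "Tier Cloud Model": each Tier-2 is attached to
# exactly one Tier-1, and the Tier-1s exchange data with CERN (Tier-0) and
# with each other. Site lists are partial examples, not an official topology.

TIER0 = "CERN"

CLOUDS = {
    # Tier-1: associated Tier-2s
    "CNAF": ["Napoli", "Milano", "Roma1", "Frascati"],
    "BNL":  ["SWT2", "GLT2", "NET2", "WT2", "MWT2"],
    "LYON": ["LAPP", "LPC", "Tokyo", "Beijing"],
}

def parent_tier1(tier2: str) -> str:
    """Return the Tier-1 a given Tier-2 is associated with."""
    for tier1, tier2s in CLOUDS.items():
        if tier2 in tier2s:
            return tier1
    raise KeyError(f"{tier2} is not in any cloud")

print(parent_tier1("Napoli"))   # -> CNAF
```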

The ATLAS Computing Model
Three Grid middleware infrastructures are used by the ATLAS distributed computing project:
- EGEE (Enabling Grids for E-sciencE) in most of Europe and in the rest of the world;
- NorduGrid in the Scandinavian countries;
- OSG (Open Science Grid) in the USA.
The ATLAS Grid tools interface to all middleware types and provide uniform access to the Grid environment:
- the VOMS database contains the computing information and the privileges of all ATLAS members; it is used to allow ATLAS jobs to run on ATLAS resources and to store their output files on ATLAS disks;
- the DDM (Distributed Data Management) system catalogues all ATLAS data and manages the data transfers;
- the ProdSys/Panda production system schedules all organised data processing and simulation activities;
- the Ganga and pathena interfaces allow analysis job submission: jobs go to the site(s) holding the input data, and the output can be stored locally or sent back to the submitting site.
[Diagram: data flow from the Event Builder and Event Filter through Tier-0 to the ~10 Tier-1s, the Tier-2s (~3-4 per Tier-1) and the Tier-3s, with rates ranging from ~PB/s at the Event Builder and 320 MB/s into Tier-0 down to ~10 GB/s, ~150 MB/s and ~50 Mb/s at the lower tiers.]

The Distributed Analysis
GANGA: a single tool to submit, manage and monitor the user jobs on the Grid.
Functional blocks: a job is configured by a set of functional blocks, each one with a well defined purpose.
Grid and local execution are allowed; the Grid backends include all three middleware flavours.
Interface via a command line or a GUI.
Users do not need to know where the data are located or where the jobs run.
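To make this concrete, a GangaAtlas analysis job of that era looked roughly like the sketch below. It is a sketch under assumptions: the class names (Athena, DQ2Dataset, DQ2JobSplitter, LCG) follow the GangaAtlas plugins as commonly used at the time, and the job options file and dataset name are hypothetical placeholders, not taken from this presentation.

```python
# Rough sketch of a GangaAtlas analysis job, to be typed inside a ganga
# session (Ganga is itself a Python tool). Class and attribute names follow
# the GangaAtlas plugins of that period (an assumption); the job options
# file and the dataset name are hypothetical placeholders.

j = Job()
j.application = Athena()                                  # run an Athena analysis
j.application.option_file = ['MyAnalysis_jobOptions.py']  # hypothetical job options
j.inputdata = DQ2Dataset()
j.inputdata.dataset = 'mc08.example.AOD.dataset'          # hypothetical dataset name
j.splitter = DQ2JobSplitter(numsubjobs=20)                # split over the input files
j.backend = LCG()                                         # EGEE/gLite Grid backend
j.submit()                                                # jobs run where the data are
```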

Napoli ATLAS Tier2
Napoli is one of the four Italian ATLAS Tier2s, with Milano, Roma1 and Frascati (sj). It provides pledged resources to the whole ATLAS collaboration for Monte Carlo simulation and analysis, and in particular to Italian users.
Computing resources: 50 WN, 260 cores, 580 kSI2k.
Storage resources: SAN architecture, 260 TB, 12 disk servers.
Local services: CE, SE, DPM, HLR, monitoring.
Two-site structure, linked by a 10 Gbps fibre connection: the INFN site in the Physics Department (3 ATLAS racks and 1 SCoPE prototype rack) and the SCoPE "Federico II" site.

Napoli ATLAS Tier2
The Napoli ATLAS farm started operating as an official Tier2 in the first half of 2006 and has been running continuously since then.
[Plot: wall time usage from October 2006 to February 2009; the increase in the available computing resources is evident.]
Main activities in the Tier2 by the Napoli and other ATLAS groups: Muon Trigger (LVL1 and EF), muon reconstruction, RPC calibration and analysis, Supersymmetry searches.
While waiting for data to produce physics results, we have participated in every kind of test organised by ATLAS or WLCG: Functional Tests, Throughput Tests, Distributed Analysis Tests (Hammer Tests), and CCRC08 (Combined Computing Readiness Challenge 2008) with all the LHC experiments.

Napoli ATLAS Tier2
Network connection between SCoPE, INFN and the GARR PoP: 10 single-mode fibre pairs connect the 10 ATLAS racks in the SCoPE site to the INFN site at 10 Gbps.
[Diagram: the SCoPE control room, the Campus Grid room, the Physics Department network hub and the ATLAS Tier-2 in the Physics building, the SCoPE hall with the 10 Tier-2 racks, and the GARR PoP at M.S. Angelo, with links of 1 Gbit, 2x1 Gbit, 10 Gbit and 2x10 Gbit between INFN Napoli, UNINA and SCoPE.]

Napoli ATLAS Tier2
SCoPE site: 10 racks for ATLAS. The new ATLAS resources (WN and storage) have been installed in the SCoPE site, and the SCoPE resources will be used by ATLAS according to the fair-share policies among the supported VOs.
So far ATLAS job submission on the SCoPE prototype has been successfully tested: the Ganga configuration has been modified in order to use the SCoPE Grid services (CE and Resource Broker), along the lines of the sketch below, and the ATLAS central software installation procedure includes the SCoPE CE.
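As an illustration of the kind of change involved, steering a Ganga job towards a specific computing element instead of leaving the choice to the Resource Broker might look like the lines below; the CE attribute of the LCG backend is assumed from the Ganga releases of that period, and the hostname is a fictitious placeholder, not the real SCoPE endpoint.

```python
# Hedged sketch: point a Ganga LCG job at a specific CE rather than letting
# the Resource Broker match it. The 'CE' attribute is assumed from the Ganga
# LCG backend of that era; the hostname below is a fictitious placeholder.

j = Job()
j.application = Athena()
j.inputdata = DQ2Dataset()
j.backend = LCG()
j.backend.CE = 'ce.scope.example.it:2119/jobmanager-lcgpbs-atlas'  # placeholder endpoint
j.submit()
```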

Conclusions
The LHC physics programme has breakthrough discovery potential and will produce an enormous number of precision measurements; it is therefore the leading component of the world High Energy Physics programme for the next ~20 years. We are really excited about the physics starting soon: the new LHC schedule was defined last week, and we will be taking data from next autumn and throughout the following year.
The ATLAS computing model has been defined in order to exploit the advantages of the Grid and the large amount of resources spread over the world.
We have tested the functionality of the ATLAS software on SCoPE and, thanks to interoperability, will be able to use these and the other PON facilities.