Studies for setting up an Analysis Facility for ATLAS Data Analysis at CNRST and its application to Top Physics. CSIC-CNRST Bilateral Agreement (Acuerdo Bilateral): 2007MA0057.

Presentation transcript:

1 Studies for setting up an Analysis Facility for ATLAS Data Analysis at CNRST and its application to Top Physics. CSIC-CNRST Bilateral Agreement (Acuerdo Bilateral): 2007MA0057. José Salt. Overview of the Spanish ATLAS Distributed TIER-2 / IFIC.

2 Overview
1.- Description of the Spanish ATLAS Tier-2
2.- The ATLAS Computing Model and our Tier-2
3.- Project Activities
4.- Ramp-up of resources
5.- Role of TIER-2 in the MA-SP Agreement

3 1.- Description of the Spanish ATLAS TIER-2
–The ATLAS distributed TIER-2 is conceived as a computing infrastructure for the ATLAS experiment. Its main goals are:
Enable physics analysis by Spanish ATLAS users
Continuous production of ATLAS MC events
Contribute to ATLAS + LCG computing common tasks
Sustainable growth of the infrastructure according to the scheduled ATLAS ramp-up, and stable operation
–Sequence of projects:
‘Acción Especial’ ( )
LCG-oriented project ( )
ATLAS Tier-2 project (phase I): (2 years)
ATLAS Tier-2 project (phase II): (3 years)

4 The sites: IFIC, IFAE, UAM … and the resources (as of October 2007):
Equipment: CPU = 608 kSI2k, Disk = 244 TB
Human Resources: 14 FTE

5 IFIC equipment (photos): new worker nodes and disk servers, the former PC farm, and the tape storage robot.

6 UAM equipment (photos): storage devices and worker nodes.

7 IFAE equipment (photos): top view of the PIC computer room, where the TIER-2 equipment is placed, and the worker nodes.

8 Human Resources of the Spanish ATLAS TIER-2 (MA_ES Agreement)
UAM:
–José del Peso (PL)
–Juanjo Pardo (*)
–Luís Muñoz
–Pablo Fernández (*T.MEC)
IFAE:
–Andreu Pacheco (PL)
–Jordi Nadal (*)
–Carlos Borrego (*) (<-)
–Marc Campos
–Hegoi Garaitogandia (NIKHEF)
IFIC:
–José Salt (PL and Coordinator)
–Javier Sánchez (Tier-2 Operation Coordination)
–Santiago González (Tier-3)
–Alvaro Fernández (EGEE)
–Mohammed Kaci (*T.MEC)
–Gabriel Amorós (EGEE)
–Alejandro Lamas
–Elena Oliver (Ph.D. student)
–Miguel Villaplana (Ph.D. student)
–Luis March (postdoc, CERN)
–Farida Fassi (FR Tier-1)
Total FTE = 14

9 Involvement of Spain in ATLAS: detectors and physics (2008)
UAM:
–Construction and commissioning of the ATLAS Electromagnetic End-Cap Calorimeter
–ATLAS Physics: Higgs search through the 4-lepton decay mode
IFIC:
–Construction and commissioning of the ATLAS Hadronic Calorimeter (TileCal)
–Construction and commissioning of the ATLAS SCT-Fwd
–Alignment of the Inner Detector
–ATLAS Physics: b-tagging algorithms for event selection; MC studies of different processes beyond the SM (Exotics, Little Higgs and Extra Dimensions models); Top Physics
IFAE:
–Construction and commissioning of the ATLAS Hadronic Calorimeter (TileCal)
–Development and deployment of the ATLAS High Level Trigger: third-level trigger software, infrastructure (Event Filter Farm), on-line commissioning of the event selection software, tau trigger
–ATLAS Physics: TileCal calibration, reconstruction and calibration of Jet/Tau/Missing Transverse Energy, SUSY searches, Standard Model processes, Charged Higgs

10 2.- The ATLAS Computing Model and our TIER-2
Data will undergo successive transformations to reduce their size and extract the relevant information.
Commitment of the participating ATLAS institutes to contribute with their resources.
Tiered structure: starting from a simple hierarchical organization of geographically scattered centers.

11 TIER-2 Functionalities
[Diagram: the tiered structure (Tier-1 / Tier-2 / Tier-3) with centres such as RAL, IN2P3, BNL, FZK, CNAF, PIC, ICEPP, FNAL, TRIUMF, Taipei, USC, NIKHEF, Krakow, Legnaro, IFAE, IFIC, UAM, UB, IFCA, CIEMAT, MSU, Prague, Budapest and Cambridge, plus small centres, desktops and portables.]
Services of disk storage for data files and databases
To provide analysis capacity for the physics working groups
To operate an ‘end-user’ data analysis system serving at least 20 physics topics running in parallel
To provide simulation according to the requirements of the experiments
To provide network services for the interchange of data with TIER-1 centers
To ensure a sustainable growth of the TIER-2 infrastructure distributed between these centers, and its stable operation

12 Recent grid usage [plot shown at the WLCG Workshop].

13 3.- Spanish ATLAS Tier-2 Project Activities
SAT-2 activity groups:
SA: System Administrators
MCP: Monte Carlo Production
DAS: Distributed Analysis System
US: User Support
DMM: Data Movement Manager
OM: Operation Manager
PM: Project Management
Contribution to CHEP’07: ‘Experience Running a Distributed Tier-2 for the ATLAS Experiment’

14 Infrastructure: Hardware & Operations
Operation Manager (OM):
Responsible for the overall coordination of the TIER-2 (J. Sánchez)
Design and development of the technical specifications and policies that ensure the distributed TIER-2 is seen as a single virtual Tier-2
Technical link with the TIER-1s’ constellation
Coordination of:
–Processing and storage resources, in order to achieve an efficient and optimal operation of the individual centers (fault tolerance, response speed, etc.)
–Global security and access policies
–Establishment of global monitoring tools and policies to obtain usage metrics and the stability of the TIER-2
–Usage statistics and QoS reported to the project leaders
System Administrators (SA):
Manage the local cluster: hardware installation and maintenance, installation and configuration of OS releases, Grid middleware updates, monitoring of the cluster, solving problems detected by the local monitoring or by the Global Grid Operations Center, etc.
Service requirements: 12 h/day, 5 days/week
Now: 1.5 FTE/site needed; for standard service during ATLAS data taking, 6 FTE will be required

15 User Support (US)
Site user support started at the end of 2006: the first feedback from ‘end-users’ was studied at the 3 sites.
In the coming months we need to move towards a ‘coordinated user support’ for our distributed Tier-2:
On-line/direct support
Off-line support using a simple ticketing system, providing an internal tree structure to solve different classes of problems (T-2 Helpdesk)
Periodic tutorials/hands-on sessions on software tools for analysis

16 Production of Simulated Data
[Diagram: database (DB), executors (EX), Computing Elements (CE) and Storage Elements (SE).]
Production System for Simulated Data (MC): the ATLAS Production System manages the official massive productions of simulated data. The system consists of:
- a database (definition of jobs)
- supervisors/executors (they take the jobs from the DB and submit them to the ATLAS computing resources); since January 2008 the ATLAS collaboration has migrated to a single executor: PanDA
- management of distributed data (the produced data are recorded and stored at different centers)
A minimal sketch of this database-plus-executor pattern follows.
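To make the “database of job definitions plus executors” structure concrete, here is a minimal, hypothetical Python sketch. It is not the ATLAS Production System or PanDA; the table layout and every function name (fetch_pending_job, submit_to_grid, executor_loop) are invented for illustration only.

```python
import sqlite3
import time

# Sketch of the "jobs database + executor" pattern described above.
# Assumes a table: jobs(id, transformation, input_dataset, status).

def fetch_pending_job(conn):
    """Claim one job whose status is 'pending' and mark it 'running'."""
    row = conn.execute(
        "SELECT id, transformation, input_dataset FROM jobs "
        "WHERE status = 'pending' LIMIT 1").fetchone()
    if row is None:
        return None
    conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
    conn.commit()
    return {"id": row[0], "transformation": row[1], "input_dataset": row[2]}

def submit_to_grid(job):
    """Placeholder for dispatching the job to a computing resource (e.g. a Grid CE)."""
    print(f"Submitting job {job['id']}: {job['transformation']} on {job['input_dataset']}")
    return True  # pretend the submission succeeded

def executor_loop(conn, poll_seconds=10):
    """A single executor: repeatedly take jobs from the DB and run them."""
    while True:
        job = fetch_pending_job(conn)
        if job is None:
            time.sleep(poll_seconds)   # nothing pending, poll again later
            continue
        ok = submit_to_grid(job)
        conn.execute("UPDATE jobs SET status = ? WHERE id = ?",
                     ("done" if ok else "failed", job["id"]))
        conn.commit()
```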

17 Distributed Analysis System (DAS)
Spanish physicists are doing physics analysis with GANGA on data distributed around the world; the tool has been installed in our Tier-2 infrastructure.
GANGA is an easy-to-use front-end for job definition, management and submission. Users interact via:
Its own Python shell (command line), as in the sketch below
A Graphical User Interface (GUI)
In our case the jobs are sent to the LCG/EGEE Grid flavour. We are doing performance tests with:
the LCG Resource Broker (RB)
the gLite Workload Management System (WMS), the new RB from EGEE
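For the command-line usage mentioned above, a GANGA session is a Python shell in which jobs are objects. The sketch below shows the generic pattern (a trivial executable sent to the LCG backend); a real ATLAS analysis would instead use GANGA's Athena application and DQ2 dataset plugins. Treat this as an illustrative sketch for the GANGA releases of that period, not a verified recipe; the executable and its arguments are placeholders.

```python
# Sketch of an interactive GANGA session (typed inside the 'ganga' Python shell,
# where Job, Executable and LCG are already in scope).

j = Job(name='hello-grid')
j.application = Executable(exe='/bin/echo',
                           args=['Hello from the LCG/EGEE Grid'])
j.backend = LCG()          # send the job through the LCG/EGEE workload management
j.submit()

jobs                        # list all known jobs and their current status
print(j.status)             # e.g. 'submitted', 'running', 'completed'

# For ATLAS analysis one would typically replace Executable() by the Athena
# application plugin, attach a DQ2 dataset as input, and split the job into
# many Grid subjobs; the exact attribute names depend on the GANGA release.
```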

18 Data Movement Management (DMM)
Data management is one of the main activities inside the Spanish ATLAS federated Tier-2: AOD physics data are going to be stored at these geographically distributed sites.
Main data management activities:
–Data monitoring at the federated Tier-2
–Cleaning of old and unnecessary data
–Inconsistency checks and clean-up (catalogue entries and/or file size mismatches)
–Data replication to the Tier-2 for Tier-2 users (using the DQ2 client and subscriptions), as sketched below
Coordination with the Tier-1:
–Bi-weekly Tier-1/Tier-2 coordination meetings where Tier-1/Tier-2 data management issues are discussed
–A collaboration project between Tier-1 and Tier-2 to monitor some ATLAS/Grid services at both sites
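The replication mentioned above relies on the DQ2 end-user client. The sketch below wraps the usual dq2-ls / dq2-get commands from Python; the command names are the standard DQ2 client tools, but the available options vary with the client version, and the dataset pattern is a placeholder.

```python
import subprocess

# Illustrative wrapper around the DQ2 command-line client.

def list_datasets(pattern):
    """List ATLAS datasets whose name matches the given pattern (via dq2-ls)."""
    out = subprocess.run(["dq2-ls", pattern],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def fetch_dataset(dataset_name, destination_dir="."):
    """Copy the files of a dataset to local storage (via dq2-get)."""
    subprocess.run(["dq2-get", dataset_name], cwd=destination_dir, check=True)

# Example usage (placeholder dataset pattern):
# for ds in list_datasets("user.analysis.AOD.*"):
#     print(ds)
```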

19 Project Management (PM)
Main objective: to achieve a more federative structure.
Organization and management of the project is the responsibility of the subproject leaders (SL):
–The SLs constitute the Project Management Board (PMB): A. Pacheco (IFAE), J. del Peso (UAM) and J. Salt (IFIC)
–The PMB chairman is the coordinator of the project: J. Salt (Tier-2 Coordinator)
PMB meeting every 4-5 weeks
Tier-2 operation meetings: virtual, biweekly
Face-to-face Tier-2 meetings: every 6 months
–February’06: Valencia
–October’06: Madrid
–May’07: Barcelona
–October’07: foreseen in Valencia
–May’08: Madrid
–November’08: Barcelona

20 4.- Ramp-up of the TIER-2 Resources
[Table: ramp-up of Tier-2 resources after the LHC rescheduling; numbers are cumulative.]
Evolution of ALL ATLAS Tier-2 resources according to the estimates made by the ATLAS CB (October 2006); the Spanish ATLAS Tier-2 assumes a contribution of 5% to the whole effort.
Strong increase of resources.
Spanish Tier-2 size (October 2007): CPU = 608 kSI2k, Disk = 244 TB

21 Networking
IFIC:
–Connection at 1 Gbps to the University backbone (10 Gbps); Universitat de València hosts the RedIRIS PoP in the Comunitat Valenciana
–New equipment bought (Cisco Catalyst 3506): will connect at 10 Gbps to the University backbone and aggregate the WNs and disk servers
–Public IP addresses in a /24 subnet (to be increased to 23 bits) reserved for the Tier-2
UAM:
–Connection at 1 Gbps between UAM and RedIRIS
–A new switch is needed at the UAM site to connect all servers at 1 Gb/s
IFAE:
–Direct gigabit connection to the PIC backbone (Cisco 6509E, 100x1Gb, 8x10Gb)
–PIC hosts the 10 Gbps link to the LHCOPN (CERN) and 2 Gbps to the Tier-2 / Internet

22 Upgrade of the IFIC Computing Room (new area)
–Increase the surface from 90 m² to 150 m²
–Upgrade the UPS from 50 kVA to 250 kVA
–Install 70 lines of 16 A (3 lines/rack)
–Increase the power for the building (20 kV electrical transformer, diesel generator, low-voltage distribution, …)
–Change the air conditioning (impulsion through the technical floor)
–New racks
–Redistribution of all machines located at the Computer Center
Execution: in progress

23 5.- Role of the Spanish ATLAS-T2 in the MA_ES Agreement
–ATLAS Collaboration: infrastructure (Grid framework) in which to plug the Analysis Facilities (TIER-3)
–Support in terms of expertise, such as installing, configuring, tuning and troubleshooting a Grid infrastructure
–Project activities: to enable the work of the physics groups of both countries
–Analysis use cases: collaboration in (several) physics analyses (Top Physics)
–Interface to the EGEE/EGI (European Grid Infrastructure): South West Federation ROC

24 End Users Interaction Scheme
[Diagram: how end users interact with the TIER-2 and TIER-3: infrastructure maintenance, production of simulated data (public and private), user support, data management, distributed analysis, interactive analysis, local batch facility.]

25 BACKUP SLIDES

26 Service Level Agreement
To ensure the service availability of our TIER-2: 12 hours/day, 5 days/week
To ensure a maximum delay time in responding to operational problems
Contribution to the M&O A and B (1 or 2 people)

Service                     | Max delay (prime time) | Max delay (other periods) | Average availability (measured on an annual basis)
End-user analysis facility  | 2 hours                | 72 hours                  | 95%
Other services              | 12 hours               | 72 hours                  | 95%

Fellowships: our groups have a high-level training ability
TTP: Technical Training Program