Offline Software for the ATLAS Combined Test Beam – Ada Farilla (I.N.F.N. Roma3) – Computing in High Energy Physics, Interlaken, September 2004

Ada Farilla (I.N.F.N. Roma3), on behalf of the ATLAS Combined Test Beam Offline Team: M. Biglietti, M. Costa, B. Di Girolamo, M. Gallas, R. Hawkings, R. McPherson, C. Padilla, R. Petti, S. Rosati, A. Solodkov

ATLAS Combined Test Beam (CTB)
• A full slice of the ATLAS detector (barrel) has been installed on the H8 beam line of the SPS in the CERN North Area; beams of electrons, pions and muons are delivered over a wide range of energies.
• Data taking started on May 17th, 2004 and will continue until mid-November 2004: ATLAS is one of the main users of the SPS beam in 2004.
• The setup spans more than 50 meters and is enabling the ATLAS team to test the detector's characteristics long before the LHC starts.
• Among the most important goals of this project:
1. Study the detector performance of an ATLAS barrel slice
2. Calibrate the calorimeters over a wide range of energies
3. Gain experience with the latest offline reconstruction and simulation packages
4. Collect data for a detailed data-Monte Carlo comparison
5. Gain experience with the latest Trigger and Data Acquisition packages
6. Test the ATLAS Computing Model with real data
7. Study commissioning aspects (e.g. integration of many sub-detectors, testing the online and offline software with real data, management of conditions data)

ATLAS CTB: Layout
• The setup includes elements of all the ATLAS sub-detectors:
1. Inner Detector: Pixels, Semi-Conductor Tracker (SCT), Transition Radiation Tracker (TRT)
2. Electromagnetic Liquid Argon Calorimeter
3. Hadronic Tile Calorimeter
4. Muon System: Monitored Drift Tubes (MDT), Cathode Strip Chambers (CSC), Resistive Plate Chambers (RPC), Thin Gap Chambers (TGC)
• The DAQ includes elements of the Muon and Calorimeter Trigger (Level 1 and Level 2 Triggers) as well as Level 3 farms.
• A dedicated internal Fast Ethernet network handles controls; a dedicated internal Gigabit Ethernet network handles the data, which are stored on the CERN Advanced Storage Manager (CASTOR).
• About 5000 runs (50 million events) have already been collected, with more than 1 TB of data stored.

ATLAS CTB: Layout
[Figures: the CTB layout as seen by the G4 simulation, the ATLAS detector, and a partial view of the setup in the H8 area with the incoming beam]
About two hundred people (physicists, engineers and technicians) are involved in the effort, with about 40 people in four support groups ensuring the smooth running of the detectors and the DAQ.

ATLAS CTB: Offline Software
• The CTB offline software is based on the standard ATLAS software, with adaptations due to the different geometry of the CTB setup with respect to the ATLAS geometry. Very little CTB-specific code was developed, mostly for monitoring and for the integration of sub-detector code.
• About 900 packages are now in the ATLAS offline software suite, residing in a single CVS repository at CERN. Management of package dependencies and the building of libraries and executables are performed with CMT.
• The ATLAS software is based on the Athena/Gaudi framework, which embodies a separation between data and algorithms. Data are handled through an Event Store for event information and a Detector Store for conditions information. Algorithms are driven through a flexible event loop. Common utilities (e.g. the magnetic field map) are provided through services. Python JobOptions scripts allow one to specify algorithm and service parameters and their sequencing, as sketched below.
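As a self-contained toy sketch of the pattern named above (algorithms configured by a JobOptions-style section and driven by an event loop, exchanging data only through an Event Store), the following Python snippet may help; all class, algorithm and property names here are invented for illustration and are not the real Gaudi/Athena API.

```python
# Toy illustration of the Athena/Gaudi pattern: algorithms are configured by a
# JobOptions-style section and driven by an event loop; data are exchanged only
# through an event store. All names are invented; this is not the real API.

class Algorithm:
    """Base class: an algorithm reads from / writes to the event store."""
    def __init__(self, name, **properties):
        self.name = name
        self.properties = properties          # set from the "JobOptions" section

    def execute(self, event_store):
        raise NotImplementedError

class RawDataConverter(Algorithm):
    def execute(self, event_store):
        # pretend to convert online raw data into an object representation
        event_store["objects"] = ["hit%d" % i for i in range(event_store["n_raw"])]

class SimpleMonitor(Algorithm):
    def execute(self, event_store):
        threshold = self.properties.get("MinHits", 0)
        event_store["accepted"] = len(event_store["objects"]) >= threshold

def event_loop(algorithms, events):
    """Run every configured algorithm on every event, in the configured order."""
    for event_store in events:
        for alg in algorithms:
            alg.execute(event_store)
    return events

# --- "JobOptions": choose the algorithms, their order and their parameters ---
top_algs = [
    RawDataConverter("RawDataConverter"),
    SimpleMonitor("SimpleMonitor", MinHits=3),
]

if __name__ == "__main__":
    events = [{"n_raw": 2}, {"n_raw": 5}]
    for ev in event_loop(top_algs, events):
        print(ev["accepted"])
```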

ATLAS CTB: Software Release Strategy
• New ATLAS releases are built approximately every three weeks, following a predefined plan for software development.
• New CTB releases are built every week from a CVS branch of the standard release. During test beam operation, a recurring concern has been finding a compromise between two conflicting needs: the need for new functionality and the need for stability. The strategy to achieve this was discussed during daily/weekly meetings: a good level of communication between all the involved parties helped in solving problems and taking good decisions.

ATLAS CTB: Simulation
• Simulation of the CTB layout is performed in a flexible way by a package (CTB_G4Sim) similar to the ATLAS G4 simulation package (the same code as for the full ATLAS detector).
• Python JobOptions scripts and macro files allow one to select among different configurations (e.g. with and without magnetic field, different positions of the sub-detectors on the beam line, different rotations of the calorimeters with respect to the y axis, etc.), as sketched below.
• We are in pre-production mode: a first sample of events has already been produced for validation studies.
[Figure: the CTB layout in H8 (2004), showing the Inner, Calorimeter and Muon regions along the beam axis]
• The detailed data-Monte Carlo comparison is expected to be one of the major outcomes of the physics analysis.
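A minimal sketch of how such configuration switches might be exposed in a Python script is given below; the flag names and the "/ctb/..." Geant4 macro commands are hypothetical placeholders, not the actual CTB_G4Sim options.

```python
# Hypothetical configuration flags for a CTB simulation job (illustrative only;
# the real CTB_G4Sim JobOptions and macro commands use different names).

class CTBSimConfig:
    """Collects the layout choices that a JobOptions script would expose."""
    def __init__(self, magnetic_field=True, calo_rotation_deg=0.0,
                 table_position_cm=0.0):
        self.magnetic_field = magnetic_field        # field on/off
        self.calo_rotation_deg = calo_rotation_deg  # calorimeter rotation w.r.t. the y axis
        self.table_position_cm = table_position_cm  # sub-detector position along the beam line

    def geant4_macro_lines(self):
        """Translate the choices into (hypothetical) Geant4 macro commands."""
        lines = []
        lines.append("/ctb/field %s" % ("on" if self.magnetic_field else "off"))
        lines.append("/ctb/calo/rotation %.2f deg" % self.calo_rotation_deg)
        lines.append("/ctb/table/position %.1f cm" % self.table_position_cm)
        return lines

if __name__ == "__main__":
    config = CTBSimConfig(magnetic_field=False, calo_rotation_deg=11.25)
    for line in config.geant4_macro_lines():
        print(line)
```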

ATLAS CTB: Reconstruction
• Full reconstruction is performed in a package (RecExTB) that integrates all the sub-detectors' reconstruction algorithms, through the following steps (a schematic sketch of the sequence follows below):
1. Conversion of raw data from the online format to their object representation
2. Access to conditions data
3. Access to ancillary information (e.g. scintillators, beam chambers, trigger patterns)
4. Execution of sub-detector reconstruction
5. Combined reconstruction across sub-detectors
6. Production of Event Summary Data and ROOT ntuples
7. Production of XML files for event visualization with the Atlantis event display
• From the full reconstruction of real data we are learning a lot about calibration and alignment procedures and about the performance of the new reconstruction algorithms.
• Moreover, we expect to gain experience in exercising the available tools for physics analysis, aiming at their widespread use in a community much bigger than the one restricted to software developers: a non-negligible by-product from an educational point of view.
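The following schematic Python sketch mirrors the seven-step chain listed above; the step functions are placeholders standing in for the real Athena algorithms scheduled by RecExTB.

```python
# Schematic sketch of a RecExTB-like reconstruction chain (placeholder steps,
# not the actual Athena algorithms).

def convert_raw_to_objects(event):   event["objects"] = True
def load_conditions(event):          event["conditions"] = True
def load_ancillary(event):           event["ancillary"] = True   # scintillators, beam chambers, trigger patterns
def run_subdetector_reco(event):     event["subdet_reco"] = True
def run_combined_reco(event):        event["combined_reco"] = True
def write_esd_and_ntuple(event):     event["output"] = ["ESD", "ntuple"]
def write_atlantis_xml(event):       event["output"].append("atlantis.xml")

RECO_SEQUENCE = [
    convert_raw_to_objects,
    load_conditions,
    load_ancillary,
    run_subdetector_reco,
    run_combined_reco,
    write_esd_and_ntuple,
    write_atlantis_xml,
]

def reconstruct(events):
    """Run every step of the chain on each event, in order."""
    for event in events:
        for step in RECO_SEQUENCE:
            step(event)
    return events

if __name__ == "__main__":
    print(reconstruct([{}, {}]))
```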

ATLAS CTB: (very) preliminary results
[Figures: energy in the Tile Calorimeter for events with and without a track in the Muon system; the Z0 track parameter for tracks in the Inner Detector (Pixels+TRT) versus the Muon system, in mm; energy in the electromagnetic LAr calorimeter for 250 GeV electrons]

ATLAS CTB: Detector Description
• The detailed description of the detectors and of the experimental setup is a key issue for the CTB software, along with the possibility of handling different experimental layouts in a coherent way (versioning); a schematic example of such a versioned lookup is sketched below.
• The NOVA MySQL database is presently used for storing detector description information; in a few weeks this information will also be fully available in ORACLE.
• The new ATLAS detector description package (GeoModel) has also been used extensively, both for simulation and reconstruction.
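A minimal sketch of reading one versioned set of detector-description parameters from a MySQL database is shown below; the host, schema, table and tag names are hypothetical and do not reflect the real NOVA schema.

```python
# Illustrative sketch of reading a versioned detector-description parameter set
# from MySQL. Host, schema and table names are hypothetical; the real NOVA
# schema is different.
import MySQLdb

def fetch_layout(version_tag):
    """Return all parameters belonging to one layout version."""
    conn = MySQLdb.connect(host="dbserver.example.cern.ch",
                           user="reader", passwd="secret", db="detdescr")
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT subdetector, parameter, value "
            "FROM layout_parameters WHERE version_tag = %s",
            (version_tag,))
        return {(sub, par): val for sub, par, val in cur.fetchall()}
    finally:
        conn.close()

if __name__ == "__main__":
    params = fetch_layout("CTB-H8-2004")   # hypothetical version tag
    print(len(params), "parameters loaded")
```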

ATLAS CTB: Conditions Database
The management of large volumes of conditions data (e.g. slow-control data, calibration and alignment parameters, run parameters) is a key issue for the CTB. We are now using an infrastructure that will help in defining the long-term ATLAS solution for the conditions database. It has three components (an interval-of-validity lookup is sketched below):
• The Lisbon Interval-Of-Validity database (IoVDB, MySQL implementation); data are stored internally as blobs (XML strings) or tables, or as external references to the POOL or NOVA databases
• POOL, for large calibration objects
• NOVA, for Liquid Argon information
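The sketch below illustrates the interval-of-validity idea in generic Python (it is not the Lisbon IoVDB API): each payload is valid for a [since, until) range, and a lookup returns the payload whose interval contains the requested time.

```python
# Generic interval-of-validity store (illustrative; not the Lisbon IoVDB API).
import bisect

class IovFolder:
    """Payloads keyed by non-overlapping [since, until) validity intervals."""
    def __init__(self):
        self._since = []     # sorted 'since' boundaries
        self._entries = []   # (since, until, payload), parallel to _since

    def store(self, since, until, payload):
        i = bisect.bisect_right(self._since, since)
        self._since.insert(i, since)
        self._entries.insert(i, (since, until, payload))

    def retrieve(self, time):
        """Return the payload valid at 'time', or None if no interval covers it."""
        i = bisect.bisect_right(self._since, time) - 1
        if i < 0:
            return None
        since, until, payload = self._entries[i]
        return payload if since <= time < until else None

if __name__ == "__main__":
    folder = IovFolder()
    folder.store(1000, 2000, {"hv_setpoint": 1530.0})   # hypothetical calibration payload
    folder.store(2000, 3000, {"hv_setpoint": 1525.0})
    print(folder.retrieve(2500))   # -> {'hv_setpoint': 1525.0}
```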

ATLAS CTB: Conditions Database
[Diagram: data acquisition programs write to the online server (atlobk01), which hosts the Offline Bookkeeping (OBK), CondDB, CTB and test databases; these are replicated to the offline server (atlobk02), which also hosts the NOVA database and the POOL catalogue, and is accessed by browsing tools and Athena applications]
Replication uses the MySQL binary log and is unidirectional, online to offline only.

ATLAS CTB: Production Infrastructures
• GOAL: we have to deal with a large data set (more than 5000 runs, 10^7 events) and be able to perform many reprocessings in the coming months. A production structure is in place to achieve this quickly and in a centralized way.
• A subset of the information from the Conditions Database is copied to the bookkeeping database AMI (ATLAS Metadata Interface): run number, file name, beam energy, etc. Other run information is entered by the shifter.
• AMI (a Java application) provides a generic read-only web interface for searching. It is directly interfaced to AtCom (ATLAS Commander), a tool for automated job definition, submission and monitoring, developed for the CERN LSF batch system and widely used during ATLAS Data Challenge 1 in 2003. A toy sketch of this per-run job definition is given below.
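The snippet below is a toy sketch of the kind of per-run job definition and LSF submission that such a tool automates; the queue name, wrapper script and metadata fields are hypothetical, and AtCom itself is a Java application with its own interfaces.

```python
# Toy per-run job definition and LSF submission (illustrative only; AtCom's
# actual job definitions and the CTB metadata schema differ).
import subprocess

# A (hypothetical) subset of bookkeeping metadata, one entry per run.
runs = [
    {"run": 2100, "file": "daq_ctb_0002100.data", "beam_energy_gev": 180},
    {"run": 2101, "file": "daq_ctb_0002101.data", "beam_energy_gev": 250},
]

def submit_reconstruction(run):
    """Build a bsub command for one run and submit it to the LSF batch system."""
    jobname = "recextb_%06d" % run["run"]
    cmd = [
        "bsub",
        "-q", "8nh",                          # hypothetical queue name
        "-J", jobname,
        "-o", jobname + ".log",
        "run_recextb.sh",                     # hypothetical wrapper script
        run["file"],
        str(run["beam_energy_gev"]),
    ]
    return subprocess.call(cmd)

if __name__ == "__main__":
    for run in runs:
        submit_reconstruction(run)
```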

ATLAS CTB: Production Infrastructures
• AMI, AtCom and a set of scripts form the production infrastructure presently used both for the reconstruction of real data and for the Monte Carlo pre-production done at CERN with Data Challenge 1 tools.
• The bulk of the Monte Carlo production will be performed on the Grid ("continuous production"), using the infrastructure presently employed for running ATLAS Data Challenge 2.
• Both production infrastructures will be used in the coming months, operated by two different production groups.

ATLAS CTB: High Level Trigger (HLT)
• HLT = the combination of the Level 2 and Level 3 Triggers.
• HLT at the CTB as a service: it provides the possibility to run high-level monitoring algorithms in the Athena processes running on the Level 3 farms. It is the first place in the data flow where correlations between objects reconstructed in the separate sub-detectors can be established.
• HLT at the CTB as a client: complex selection algorithms are under test, with the aim of running two full slices (one for electrons and photons, one for muons) through LVL1 -> LVL2 -> LVL3 and producing separate outputs in the Sub-Farm Output for the different selected particles. A toy sketch of such stream routing is given below.
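The following toy sketch illustrates the stream-routing idea in generic Python (not the actual HLT or Sub-Farm Output code): each accepted event is written to the output stream of the slice that selected it, and the selection criteria shown are hypothetical.

```python
# Toy event-stream routing by trigger slice (illustrative; not the real HLT code).

def egamma_slice(event):
    # Hypothetical selection: enough energy in the electromagnetic calorimeter
    return event.get("em_energy_gev", 0.0) > 20.0

def muon_slice(event):
    # Hypothetical selection: a reconstructed track in the muon system
    return event.get("muon_track", False)

SLICES = {"egamma_stream": egamma_slice, "muon_stream": muon_slice}

def route(events):
    """Assign each event to the output stream(s) of the slices that accept it."""
    streams = {name: [] for name in SLICES}
    for event in events:
        for name, selects in SLICES.items():
            if selects(event):
                streams[name].append(event)
    return streams

if __name__ == "__main__":
    sample = [{"em_energy_gev": 250.0}, {"muon_track": True}, {}]
    for name, accepted in route(sample).items():
        print(name, len(accepted), "events")
```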

Conclusions
• The Combined Test Beam is the first opportunity to exercise the "semi-final" ATLAS software with real data in a complex setup.
• All the software has been integrated and is running successfully, including the HLT: centralized reconstruction has already started on limited data samples, and preliminary results of good quality are already available.
• Calibration, alignment and reconstruction algorithms will continue to improve and to be tested with the largest data sample ever collected in a test beam environment (O(10^7) events).
• This has been a first, important step towards the integration of different sub-detectors and of people from different communities (software developers and sub-detector experts).