The Muon Conditions Data Management: Database Architecture and Software Infrastructure
Monica Verducci, University of Wuerzburg & CERN, 5-9 October 2009


The Muon Conditions Data Management: Database Architecture and Software Infrastructure
Monica Verducci, University of Wuerzburg & CERN
5-9 October 2009, ICATPP Conference, Villa Olmo, Como (Italy)

Outline
- The ATLAS Muon Spectrometer
- Muon Spectrometer Data Flow: Trigger, Streams and Conditions Data
- Muon Conditions Database: Storage and Architecture
- Software Infrastructure
- Applications and Commissioning Tests

The ATLAS Muon Spectrometer

The ATLAS Detector

The Muon Spectrometer
Three toroidal magnets create a magnetic field with:
- Barrel: ∫B dl = 2-6 T·m
- Endcaps: ∫B dl = 4-8 T·m
RPC & TGC: trigger the detector and measure the muons in the xy and Rz planes with an accuracy of several mm.
CSC: measure the muons in Rz with ~80 μm accuracy and in xy with several mm; cover 2 < |η| < 2.7.
MDT: measure the muons in Rz with ~80 μm accuracy; cover |η| < 2.

The Muon Spectrometer II
Precision chambers:
- Monitored Drift Tubes (MDT): 1108 chambers, 339k channels
- Cathode Strip Chambers (CSC): 32 chambers, 31k channels
Trigger chambers:
- Resistive Plate Chambers (RPC): 560 chambers, 359k channels
- Thin Gap Chambers (TGC): 3588 chambers, 359k channels
Good resolution in timing, pT and position measurement is needed to achieve the physics goals! This requires extremely fine checks of all the parts of each subdetector, and hence a huge amount of information...

Data Flow and Conditions Data

ATLAS Event Flow
[Diagram: the hierarchical trigger system reduces ~10^9 events/s (1 GHz; 1 event ~ 1 MB, i.e. ~PB/s of raw data) down to an output of ~MB/s, about a PB/year of raw data. The trigger and the ATHENA offline reconstruction read detector parameters from the Configuration DB and non-event data from the Conditions DB. Event selection produces the output streams: Primary, Express, Calibration & Alignment, and Pathological.]

The MUON "Non-Event Data"
Typical ATLAS "non-event data" are:
- Calibration and alignment data (from the express and calibration streams, for a total data rate of about 32 MB/s, dominated by the inclusive high-pT leptons: 13% of EF bandwidth = 20 Hz of 1.6 MB events; RAW data → 450 TB/year; more streams are now subsumed into the express stream)
- The PVSS Oracle archive, i.e. the archive for the DCS ("slow control") data, and DAQ data via the OKS DB
- Detector configuration and connectivity data, plus subdetector-specific data
Mainly used for:
- Diagnostics by detector experts
- Geometry and DCS
- Data defining the configuration of the TDAQ/DCS/subdetector hardware and software to be used for the following run
- Calibration and alignment
- Event reconstruction and analysis

Muon Conditions Data from Trigger Streams
Some muon conditions data will be produced by detector analyses performed on:
- the Calibration and Alignment Stream: muons are extracted from the second-level trigger (LVL2) at a rate of ~1 kHz; data are streamed to 3 Calibration Centres (Ann Arbor, Munich, Rome, plus Naples for the RPCs), with ~100 CPUs each and ~1 day latency for the full chain;
- the Express Stream.
The ATLAS trigger will produce 4 streams (200 Hz, 320 MB/s):
- Primary stream (5 streams based on trigger info: e, mu, jet)
- Calibration and Alignment stream (10%)
- Express stream (rapid processing of events also included in the primary stream; 30 MB/s, 10%)
- Pathological events (events not accepted by the EF)
[Diagram: trigger hierarchy from the detectors at 40 MHz (25 ns, ~PB/s), through front-end pipelines (LVL1, 10^5 Hz, microseconds), readout buffers (LVL2, 10^3 Hz, ms) and processor farms with a switching network (LVL3, 10^2 Hz, seconds), down to ~MB/s written out.]

Muon Conditions Data
- Calibration for the precision chambers
- Alignment from sensors and from tracks
- Efficiency flags for the trigger chambers
- Data quality flags (dead/noisy channels) and final status for the monitoring
- Temperature map, B-field map
- DCS information (HV, LV, gas, ...)
- DAQ run information (chamber initialized)
- Subdetector configuration parameters (cabling map, commissioning flags, ...)
Sources: the calibration stream and offline algorithms (express stream), analysis algorithms, hardware sensors, the OKS2COOL and PVSS2COOL transfers, and constructor parameters.

Storage of the "Non-Event" Data
Different database storage solutions deal with the different hardware and software working points of the subdetectors:
1. Hardware Configuration DB: a private Oracle DB; architecture and maintenance are under the detector experts' responsibility.
2. Calibration & Alignment DB: private Oracle DBs, one for the MDT calibration (replicated at the three calibration centres) and one for the alignment sensors.
3. Conditions DB: contains a subset of the information, with reduced granularity; the COOL production DB.

Conditions Database
Conditions data are non-event data that:
- vary with time
- may exist in different versions
- come from both online and offline sources
The Conditions DB is mainly accessed by the ATLAS offline reconstruction framework (ATHENA). Conditions databases are distributed world-wide (for scalability), accessed by an "unlimited" number of computers on the Grid: simulation jobs, reconstruction jobs, analysis jobs, ... Within ATLAS, the master conditions database is at CERN; through the Oracle replication mechanism it is made available at all Tier-1 centres. The technology used for the Conditions DB is an LCG product: COOL (COnditions Objects for LHC), implemented on top of CORAL.

COOL Interface for the Conditions Database
The interface is provided by COOL:
- It builds on the LCG RelationalAccessLayer (CORAL) software, which allows database applications to be written independently of the underlying database technology (Oracle, MySQL or SQLite). COOL provides a C++ API and an underlying database schema to support the data model.
- Once a COOL database has been created and populated, users can also interact with the database directly, using lower-level database tools.
COOL implements an interval-of-validity (IOV) database:
- The database schema is optimized for IOV retrieval and look-up.
- Objects stored or referenced in COOL have an associated start and end time between which they are valid.
- Times are specified either as run/event numbers or as absolute timestamps, in agreement with the stored meta-data.
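As a concrete illustration, reading objects by interval of validity through the python binding (PyCool) might look like the minimal sketch below; the connection string and folder path are hypothetical examples, not production names.

```python
# Minimal PyCool sketch: open a COOL database read-only and browse objects
# by interval of validity. Connection string and folder path are examples.
from PyCool import cool

dbSvc = cool.DatabaseSvcFactory.databaseService()
db = dbSvc.openDatabase('sqlite://;schema=mycond.db;dbname=CONDDB', True)

folder = db.getFolder('/MUON/DCS/HV')               # hypothetical folder
objs = folder.browseObjects(cool.ValidityKeyMin,    # start of IOV range
                            cool.ValidityKeyMax,    # end of IOV range
                            cool.ChannelSelection.all())
while objs.goToNext():
    obj = objs.currentRef()
    print(obj.since(), obj.until(), obj.channelId(), obj.payloadValue('voltage'))
objs.close()
db.closeDatabase()
```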

COOL data are stored in folders (tables); a database is a set of folders. Within each folder, several objects of the same type are stored, each with its own interval-of-validity range.
Folder schema: Since (Time) | Until (Time) | ChannelId (Integer) | Payload (Data) | Tag (String)
COOL folders can be:
- SingleVersion: only one object can be valid at any given time value. Example: DCS data, where the folder simply records the values as they change with time.
- MultiVersion: several objects can be valid for the same time, distinguished by different tags. Example: calibration data, where several valid calibration sets may exist for the same range of runs (different processing passes or calibration algorithms).
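A hedged PyCool sketch of the two folder types; the folder paths and payload fields are invented for illustration, and the FolderSpecification-based calls assume the COOL 2.x API.

```python
from PyCool import cool

dbSvc = cool.DatabaseSvcFactory.databaseService()
db = dbSvc.createDatabase('sqlite://;schema=mycond.db;dbname=CONDDB')

# SingleVersion folder with a small inline payload (DCS-like data).
hvSpec = cool.RecordSpecification()
hvSpec.extend('voltage', cool.StorageType.Float)
db.createFolder(
    '/MUON/DCS/HV',
    cool.FolderSpecification(cool.FolderVersioning.SINGLE_VERSION, hvSpec),
    'example DCS folder', True)

# MultiVersion folder with a large string payload (calibration-like data),
# where several tagged versions may cover the same run range.
calSpec = cool.RecordSpecification()
calSpec.extend('data', cool.StorageType.String16M)
db.createFolder(
    '/MUON/CALIB/T0',
    cool.FolderSpecification(cool.FolderVersioning.MULTI_VERSION, calSpec),
    'example calibration folder', True)
```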

Muon Spectrometer Examples
- DCS: temperature or HV values depend only on the IOV and are relatively simple and small → inline payload.
- Calibration data and alignment: parameters with a high granularity, where several parts can share the same IOV → CLOB payload.
Example rows:
- DCS (SingleVersion, inline payload): Since = Run1/Evt10, Until = Run10/Evt20, ChannelId = 1, Payload = HV (or LV) value, no tag.
- Calibration (MultiVersion, CLOB payload): Since = Run1/Evt10, Until = Run10/Evt20, ChannelId = 1, Payload = CLOB, Tags = T0, M4, Cosmics.
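Continuing the creation sketch above (same hypothetical database handle db), the two payload styles would be filled roughly as follows; the tag name and CLOB content are invented.

```python
# Inline payload (SingleVersion): one small value per channel and IOV.
hv = db.getFolder('/MUON/DCS/HV')
rec = cool.Record(hv.payloadSpecification())
rec['voltage'] = 3400.0
hv.storeObject(0, cool.ValidityKeyMax, rec, 1)       # channel 1, open-ended IOV

# CLOB payload (MultiVersion): a whole calibration set in one string, then
# tagged so alternative sets (e.g. T0, M4, Cosmics) coexist for the same IOV.
cal = db.getFolder('/MUON/CALIB/T0')
crec = cool.Record(cal.payloadSpecification())
crec['data'] = '<t0 calibration constants ...>'
cal.storeObject(0, cool.ValidityKeyMax, crec, 1)
cal.tagCurrentHead('MuonCalib-T0-Cosmics-example', 'illustrative tag')
```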

Conditions Servers and Schemas
Each subdetector (MDT, RPC, CSC, TGC) has its own set of schemas; this is necessary because of the options introduced by the Oracle Streams architecture used to replicate data from the ATLAS online server (ATONR) to the ATLAS offline server in the CERN computer centre (ATLR), and on to the servers at each of the Tier-1 sites. The schemas are as follows:
- ATLAS_COOLONL_XYZ: can be written on the online server; replicated to the offline server and to the Tier-1s.
- ATLAS_COOLOFL_XYZ: can be written on the offline server; replicated to the Tier-1s.
Each schema is associated with several accounts:
- a schema owner account;
- a writer account ATLAS_COOLONL_XYZ_W, used for insert and tag operations;
- a generic reader account ATLAS_COOL_READER, used for read-only access to all the COOL schemas; it can be used on the online, offline and Tier-1 servers.
[Diagram: replication chain, Pit (P1) → online COOL on ATONR → Oracle Streams → offline COOL on ATLR (Tier-0) → Oracle Streams → Tier-1 COOL servers.]
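For illustration, reader connection strings following this naming convention could look like the sketch below; the server aliases are taken from the slide, while the database instance name and the choice of the MDT schema are assumed example values.

```python
# Illustrative COOL connection strings following the schema naming scheme
# above. ATONR/ATLR are the server aliases from the slide; the instance
# name 'COMP200' and the MDT schema are example values, not guaranteed to
# match the actual production configuration.
online_conditions  = 'oracle://ATONR;schema=ATLAS_COOLONL_MDT;dbname=COMP200'
offline_conditions = 'oracle://ATLR;schema=ATLAS_COOLOFL_MDT;dbname=COMP200'

from PyCool import cool
dbSvc = cool.DatabaseSvcFactory.databaseService()
db = dbSvc.openDatabase(offline_conditions, True)   # True = read-only access
```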

Writing in the Conditions DB
We have several sources of conditions data (online pit, offline, calibration stream). The analysis software and the publishing interface are in general different, depending on the framework in which the analysis code runs. The final mechanism, however, is unique: every insertion passes via an SQLite file and, after checking, is stored in the official production DB using python scripts. The service names are defined in the Oracle tnsnames.ora file, and the connection/password names are handled automatically in the authentication.xml file.
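A hedged PyCool sketch of that staging step: the update is first written to a private SQLite file, which after validation is merged into the production database by the official python tools. The file, folder and payload names here are invented.

```python
from PyCool import cool

dbSvc = cool.DatabaseSvcFactory.databaseService()
# Step 1: write the update into a private SQLite file, not the production DB.
db = dbSvc.createDatabase('sqlite://;schema=muon_update.db;dbname=CONDDB')

spec = cool.RecordSpecification()
spec.extend('data', cool.StorageType.String16M)
folder = db.createFolder(
    '/MUON/CALIB/T0',
    cool.FolderSpecification(cool.FolderVersioning.MULTI_VERSION, spec),
    'staged calibration update', True)

rec = cool.Record(spec)
rec['data'] = '<new calibration constants>'
folder.storeObject(0, cool.ValidityKeyMax, rec, 1)   # channel 1, open-ended IOV
db.closeDatabase()

# Step 2 (after validation, outside this script): the file content is copied
# into the official production schema, e.g. with the AtlCoolCopy tool; the
# exact invocation and options are not shown here.
```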

Muon Software Interface
A unique interface inside the reconstruction package: MuonConditionsSummarySvc. Every subdetector has its own code to handle its conditions data (DCS, status flags, ...) through an XYZConditionsSummarySvc, which initializes several tools: DCSConditionsTool, DQConditionsTool, ... Using the IOVService, the information used in the reconstruction algorithms is always "on time" with respect to the event currently being processed. A sketch of this delegation pattern follows below.
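Plain-python sketch of the pattern just described. The real implementation consists of C++ Athena services and tools; the service and tool names follow the slide, while the method names and channel bookkeeping are invented purely for illustration.

```python
# Illustrative delegation pattern: a per-technology summary service asks
# each configured conditions tool whether a channel is usable.
class DCSConditionsTool:
    def __init__(self, dead_channels):
        self.dead = set(dead_channels)

    def is_good(self, channel_id):
        return channel_id not in self.dead


class DQConditionsTool:
    def __init__(self, noisy_channels):
        self.noisy = set(noisy_channels)

    def is_good(self, channel_id):
        return channel_id not in self.noisy


class MDTConditionsSummarySvc:
    """Summary for one technology: a channel is usable only if every tool agrees."""

    def __init__(self, tools):
        self.tools = tools

    def is_good(self, channel_id):
        return all(tool.is_good(channel_id) for tool in self.tools)


svc = MDTConditionsSummarySvc([DCSConditionsTool({42}), DQConditionsTool({7})])
print(svc.is_good(5))   # True: channel passes both the DCS and DQ checks
print(svc.is_good(42))  # False: channel is dead according to DCS
```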

Access by the Reconstruction
Access to COOL from Athena is done via the IOVDbSvc, which provides an interface between conditions data objects in the Athena transient detector store (TDS) and the conditions database itself. While event data are being read, the IOVDbSvc ensures that the correct conditions data objects are always loaded into the TDS for the event currently being analyzed.
[Diagram: 1) begin run/event; 2) the IOVSvc deletes expired objects from the transient detector store; 3) callback on the CondObj; 4) the CondObjColl is retrieved; 5) the IOVDbSvc updates the address; 6) the payload is fetched from the IOV DB; 7) the IOV is set. The algorithm or AlgTool then reads the conditions object collection from the detector store (payload, time, address, CondObjColl reference).]
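On the configuration side, making a folder visible to the IOVDbSvc is typically a one-liner in the Athena job options; a hedged sketch, where the schema alias and folder paths are illustrative rather than the actual muon production folders:

```python
# Athena job-options sketch: register conditions folders with the IOVDbSvc
# so that the IOVSvc keeps the corresponding objects in the transient
# detector store up to date for each event. The conddb helper is the
# standard ATLAS configuration shortcut; the alias and paths are examples.
from IOVDbSvc.CondDB import conddb

conddb.addFolder('DCS_OFL', '/MDT/DCS/HV')   # hypothetical DCS folder
conddb.addFolder('MDT', '/MDT/T0BLOB')       # hypothetical calibration folder
```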

Commissioning and Tests
Tests of the full chain, including transfer of data (streaming) and access to the data in reconstruction jobs, have been carried out. The cosmics data have been stored successfully (in particular the alignment and calibration information). The muon data replication and access have been tested as part of the overall ATLAS test with some dummy data:
- The production schemas for ATLAS have been replicated from the online RAC (ATONR) to ATLR and then on to the active Tier-1 sites.
- Tests of access by ATHENA and of data replication/transfer between Tier-0 and the Tier-1s have been done, with good Tier-1 performance (~200 jobs in parallel; tests done in 2006).
Tests of muon data access by the reconstruction have been partially completed, without any problems.

Conclusions
The Muon database layout has been defined and tested. The access mechanisms and the architecture for most of the Muon Conditions Data have been extensively tested. The muon data replication and access during the ATLAS tests have given positive results.

Backup

Muon Calibration Stream
Muons in the relevant trigger regions are extracted from the second-level trigger (LVL2) at a rate of ~1 kHz. Data are streamed to 4 Calibration Centres:
- Ann Arbor, Munich, Rome, and Naples for the RPCs
- ~100 CPUs each
The stream is also useful for data quality assessment, alignment with tracks, and trigger efficiency studies. There is ~1 day latency for the full chain:
- from data extraction, to calibration computation at the Centres, to writing the calibration constants into the Conditions DB at CERN.
The data flow and the DB architecture therefore need to be designed carefully.

Calibration and Alignment Stream
Production of NON-EVENT DATA happens in different steps; it is used for the event reconstruction. Input raw data can come from the event stream or be processed by the subdetector read-out system:
- at the ROD level (not read out by the standard DAQ path; not physics events: pulse signals);
- at the event filter level (standard physics events: Z→ee, muon samples, etc.);
- after the event filter but before the "prompt reconstruction";
- offline, after the "prompt reconstruction" (Z→ee, Z+jet, ℓℓ).

Calibration Streams (technical/detector streams):
- an Inner Detector alignment stream (100 Hz of reconstructed-track info, 4 kB);
- a LAr electromagnetic calorimeter stream (50 Hz of inclusive electrons with pT > 20 GeV, up to 50 kB);
- a muon calibration stream (Level-1 trigger regions, ~10 kHz of 6 kB);
- isolated hadrons (5 Hz of 400 kB).
Express Stream (processed promptly, i.e. within < 8 hours):
- contains all calibration samples needed to extract calibration constants before the first-pass reconstruction of the rest of the data: Z→ℓℓ, pre-scaled W→ℓν, tt, etc.;
- inclusive high-pT electrons and muons (20 Hz with full event read-out, 1.6 MB).
These streams sum to a total data rate of about 32 MB/s, dominated by the inclusive high-pT leptons (13% of EF bandwidth = 20 Hz of 1.6 MB events). RAW data → 450 TB/year. More streams are now subsumed into the express stream.

Some numbers: Conditions DB design
- ATLAS daily reconstruction and/or analysis job rates will be in the range of 100k to 1M jobs/day.
- For each of the ten Tier-1 centres, that corresponds to Conditions DB access rates of about 400 to 4,000 jobs/hour (a quick cross-check follows below).
- Each reconstruction job will read MB-scale volumes of data.
- ATLAS requests from the Tier-1s a 3-node Oracle RAC cluster dedicated to the experiment.
- The expected rate of data flow to the Tier-1s is between 1 and 2 GB/day.
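A back-of-the-envelope check of the per-Tier-1 rate quoted above (pure arithmetic, no ATLAS software involved):

```python
# Translate the global job rate into a per-Tier-1 Conditions DB access rate,
# using the numbers from the slide above (ten Tier-1 centres assumed).
for jobs_per_day in (100000, 1000000):
    per_tier1_per_hour = jobs_per_day / 10.0 / 24.0
    print("%7d jobs/day -> about %4.0f jobs/hour per Tier-1"
          % (jobs_per_day, per_tier1_per_hour))
# Output: about 417 and 4167 jobs/hour, i.e. the 400 to 4,000 range quoted.
```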

CHEP Workload Scalability Results (2006)
In ATLAS we expect 400 to 4,000 jobs/hour for each Tier-1. For 1/10th of the Tier-1 capacities, that corresponds to rates of 200 to 2,000 jobs/hour. Good results! For the access by Athena we obtained about 1000 seconds per job at ATLR, dominated by the DCS and TDAQ schema access!

DCS and Databases
[Diagram: data flow from the ATLAS detector through the DCS (detector control system: HV, LV, temperature, alignment), the front-end electronics, the trigger levels (Level-1, Level-2, Event Filter), ROSs, RODs and the online calibration farm, into the Configuration DB (geometry, setup, calibration) and the Conditions DB (calibration, monitor data, DCS), which are read by the ATHENA code, monitoring queries, the reconstruction farms and the offline analysis; ByteStream files and manual input also feed the databases.]
Databases are organized collections of data, organized according to a certain data model. The data model defines not only the structure of the data but also which operations can be performed on them.

Muon Spectrometer Strategy for Muon PID
- Eμ ~ 1 TeV → sagitta Δ ~ 500 μm; Δp/p ~ 10% → δΔ ~ 50 μm
- Required alignment accuracy ~30 μm (test-beam results)
- Required B-field accuracy |ΔB| ~ 1-2 mT
Dilepton resonances (mostly Z) are sensitive to:
- tracker-spectrometer misalignment;
- uncertainties on the magnetic field;
- the detector momentum scale.
The width is sensitive to the muon momentum resolution.
Calibration samples in 100 pb⁻¹: J/ψ ~1600k (+ ~10% ψ′), Υ ~300k (+ ~40% Υ′/Υ′′), Z ~60k.