Software Solutions for Variable ATLAS Detector Description
J. Boudreau, V. Tsulaia (University of Pittsburgh); R. Hawkings, A. Valassi (CERN); A. Schaffer (LAL, Orsay)
CHEP 2006, TIFR, Mumbai

Contents
- GeoModel: toolkit for building the transient representation of detector geometries
  - The ATLAS Detector Description is implemented entirely in GeoModel
- Configuring various geometry layouts
  - ATLAS Geometry Database and Hierarchical Versioning System (HVS)
  - Manual and automatic geometry configuration
- Incorporating time-dependent alignment information into the detector model
  - The ATLAS Conditions database as storage for alignment information
  - GeoModel mechanisms for volume alignment
  - Passing alignment parameters from the Conditions database to the geometry model

GeoModel, general notes
- The transient description of the complete ATLAS detector geometry is implemented with the GeoModel toolkit
  - See the presentation "The GeoModel Toolkit for Detector Description" at CHEP'04
- Key features of the GeoModel-based description:
  - Two layers of geometry description: 'raw' (material) geometry, and readout geometry synchronized to the raw geometry layer
  - The same description serves all clients (simulation, reconstruction, ...); the GeoModel description of the material geometry is translated to Geant4 by the automatic Geo2G4 translator
  - Several optimization techniques allow complicated geometries to be described with minimal memory consumption
  - Mechanisms for applying alignments on top of the regular geometry

GeoModel description basics
- The GeoModel description tree is assembled from physical volumes (see the construction sketch below)
  - Full physical volumes cache their absolute transformation
  - The substructure of a physical volume is a logical volume (shape and material) plus a transformation in the parent's coordinate system
  - Modifying a transform moves the entire sub-tree
  - Alignable transformations can be adjusted later, after the entire geometry has been built
- The GeoModel tree has a single top: the World volume
  - ATLAS sub-detectors describe their geometry sub-trees independently
  - Each system can have more than one Tree Top volume, attached to the World volume
  - Detector Manager objects cache pointers to the Tree Tops and provide system-specific interfaces to Detector Description client applications
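To make the volume hierarchy concrete, here is a minimal sketch of building a tiny GeoModel sub-tree with one alignable volume. The class and method names follow the GeoModel kernel (GeoLogVol, GeoPhysVol, GeoFullPhysVol, GeoAlignableTransform, setDelta), but the materials, dimensions and the toy "Envelope"/"Module" names are purely illustrative.

```cpp
// Minimal sketch: a toy sub-tree with one alignable module (illustrative values).
#include "GeoModelKernel/GeoBox.h"
#include "GeoModelKernel/GeoLogVol.h"
#include "GeoModelKernel/GeoPhysVol.h"
#include "GeoModelKernel/GeoFullPhysVol.h"
#include "GeoModelKernel/GeoAlignableTransform.h"
#include "GeoModelKernel/GeoMaterial.h"
#include "CLHEP/Geometry/Transform3D.h"

GeoPhysVol* buildToySubTree(const GeoMaterial* air, const GeoMaterial* silicon)
{
  // Logical volumes: shape + material
  const GeoBox*    envShape = new GeoBox(100.0, 100.0, 300.0);   // half-lengths, mm
  const GeoLogVol* envLog   = new GeoLogVol("Envelope", envShape, air);
  GeoPhysVol*      envelope = new GeoPhysVol(envLog);            // tree-top candidate

  const GeoBox*    modShape = new GeoBox(50.0, 50.0, 5.0);
  const GeoLogVol* modLog   = new GeoLogVol("Module", modShape, silicon);

  // A full physical volume caches its absolute transformation,
  // so it can later be queried for its (possibly misaligned) position.
  GeoFullPhysVol* module = new GeoFullPhysVol(modLog);

  // The alignable transform positions the module inside the envelope;
  // its delta can be changed after the geometry has been built.
  GeoAlignableTransform* xf =
      new GeoAlignableTransform(HepGeom::TranslateZ3D(100.0));

  envelope->add(xf);       // transformation in the parent's coordinate system
  envelope->add(module);   // the volume the transform applies to

  // Later, e.g. from an alignment callback:
  // xf->setDelta(HepGeom::TranslateX3D(0.05));  // 50-micron shift
  return envelope;
}
```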

GeoModel description diagram: the World volume with MUON, SILICON and CALOR sub-trees assembled from physical volumes (PV) and full physical volumes (FPV).

Various layouts of ATLAS detector geometry
- Basic idea: any single version of the ATLAS software should be able to build various ATLAS geometry layouts
- The ATLAS geometry versioning system is based on hierarchical versioning of the detector description primary numbers stored in the ATLAS Geometry Database
- Switching between geometry layouts requires changing a single parameter: the ATLAS top-level geometry tag
- ATLAS geometry tags can be passed across job boundaries
  - We use persistent TagInfo objects
  - Subsequent jobs can pick up the correct geometry configuration from the input file, bypassing manual configuration through job options

ATLAS Geometry DB – general notes
- The main purpose of the Geometry DB is to store in one common place all primary numbers for the geometry description of the ATLAS subsystems, together with configuration information (tags, switches, ...)
- The master copy of the database resides on the ATLAS RAC Oracle cluster, supported by the CERN IT/DB group
- The relational schema of the Geometry DB follows the design of the Hierarchical Versioning System (sketched below)
  - The primary numbers are kept in Data Tables
  - Each Data Table corresponds to a Leaf Node in the logical HVS tree
  - Branch Nodes in the HVS tree are purely logical entities that group child nodes and build the tree hierarchy
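The tree-of-tags idea behind HVS can be pictured with a few toy data structures. This is purely an illustration, not the actual relational schema (which lives in Oracle tables); all type and field names below are invented.

```cpp
// Toy illustration of HVS: a tag on a branch node pins one tag on each of its
// children, so a single top-level tag selects a full, consistent set of
// data-table versions. All names are invented for this sketch.
#include <map>
#include <string>
#include <vector>

struct HvsLeafNode {
  std::string dataTable;                   // the Data Table holding primary numbers
  std::vector<std::string> availableTags;  // versions of that table's rows
};

struct HvsBranchNode {
  std::string name;                        // e.g. a subsystem grouping node
  std::vector<HvsBranchNode> branches;     // child branch nodes
  std::vector<HvsLeafNode>   leaves;       // child leaf nodes
  // A tag defined on this node maps each child node name to the child tag it pins.
  std::map<std::string, std::map<std::string, std::string>> tags;
};

// Resolving a top-level tag walks the tree, following the pinned child tags
// until leaf-node tags (i.e. concrete data-table versions) are reached.
```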

ATLAS Geometry DB Browser (screenshot): tag hierarchy of the branch HVS nodes, and a leaf node tag with its contents.

ATLAS Geometry DB – tag locking
- ATLAS subsystems manage their primary numbers in the Geometry DB independently
  - New primary numbers are loaded into the database using SQL scripts
  - They are validated by building the subsystem geometries, checking for volume clashes, and using them in simulation/reconstruction
  - Eventually a new subsystem-specific top-level tag is created
- Every new ATLAS geometry tag is assembled from subsystem tags; once validation of the new tags is complete, the ATLAS tag is locked
  - All child tags are locked recursively
  - A locked tag cannot be renamed or deleted
  - Data-table records referenced by a locked tag cannot be updated or deleted
- A locked tag is therefore safe to use in production

ATLAS Geometry DB – data access
- ATLAS Detector Description applications retrieve tagged primary numbers from the Geometry DB through the RAL interface
  - We are now moving to CORAL; see the presentation by I. Papadopoulos at CHEP'06
- RAL provides a uniform access interface to data residing in different RDBMS back-ends (illustrated below)
  - The appropriate client library is chosen at run time via a plug-in mechanism
- Thanks to this feature of RAL we can provide three equivalent sources of Geometry DB data
  - The Oracle master copy is replicated to MySQL and SQLite
- The concrete replica is selected either explicitly, by setting job option parameters, or automatically through the DB Connection Service
  - See the presentation "ATLAS Distributed Database Services Client Library" at CHEP'06
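As an illustration of vendor-neutral access, the sketch below opens a read-only session through the CORAL connection service (the interface the project is migrating to) and scans one table. The connection alias, table name and column name are placeholders, and the exact header locations should be checked against the installed CORAL release.

```cpp
// Bare CORAL read, for illustration only (placeholder names).
#include "RelationalAccess/ConnectionService.h"
#include "RelationalAccess/ISessionProxy.h"
#include "RelationalAccess/ITransaction.h"
#include "RelationalAccess/ISchema.h"
#include "RelationalAccess/ITable.h"
#include "RelationalAccess/IQuery.h"
#include "RelationalAccess/ICursor.h"
#include "RelationalAccess/AccessMode.h"
#include "CoralBase/AttributeList.h"
#include "CoralBase/Attribute.h"

void scanOneTable()
{
  coral::ConnectionService connSvc;

  // The connection alias is resolved to Oracle, MySQL or SQLite replicas by
  // the plug-in / lookup machinery; "ATLASDD" and "TOY_TABLE" are placeholders.
  coral::ISessionProxy* session = connSvc.connect("ATLASDD", coral::ReadOnly);
  session->transaction().start(true /*read-only*/);

  coral::ITable& table   = session->nominalSchema().tableHandle("TOY_TABLE");
  coral::IQuery* query   = table.newQuery();
  coral::ICursor& cursor = query->execute();
  while (cursor.next()) {
    const coral::AttributeList& row = cursor.currentRow();
    double value = row["TOY_VALUE"].data<double>();   // placeholder column
    (void)value;                                      // ... use the primary number ...
  }
  delete query;

  session->transaction().commit();
  delete session;
}
```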

Different ways of ATLAS geometry configuration
- We have developed a dedicated ATHENA service (RDBAccessSvc) that retrieves HVS leaf-node data from the geometry database given the tag of one of its parent branch nodes (see the sketch below)
  - Usually it is enough to provide a single ATLAS tag to the whole application to retrieve all the necessary leaf nodes correctly
- Geometry configuration tags can be passed across job boundaries using persistent TagInfo objects
- This gives us two options for ATLAS geometry configuration
  - Manual: the geometry tags are provided to the application through job option parameters (simulation)
  - Automatic: the geometry tags are retrieved from TagInfo objects recorded by previous jobs; this guarantees consistent geometry layouts throughout a job chain (simulation-digitization-reconstruction)
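A GeoModel detector factory typically pulls its primary numbers through RDBAccessSvc roughly as sketched below. The node, tag and field names are invented for illustration, connection handling is omitted, and the exact interface signatures should be checked against the RDBAccessSvc headers in the release.

```cpp
// Sketch: a detector factory fetching tagged primary numbers (illustrative names).
#include "RDBAccessSvc/IRDBAccessSvc.h"
#include "RDBAccessSvc/IRDBRecordset.h"
#include "RDBAccessSvc/IRDBRecord.h"

void readToyDimensions(IRDBAccessSvc* rdbAccess)
{
  // Ask for the leaf node "ToyModule", resolved under a parent branch-node tag;
  // here the ATLAS top-level tag does the resolution ("ATLAS-XX-00-00" is made up).
  const IRDBRecordset* modules =
      rdbAccess->getRecordset("ToyModule",       // HVS leaf node / data table
                              "ATLAS-XX-00-00",  // parent (top-level) tag
                              "ATLAS");          // branch node the tag belongs to

  for (unsigned int i = 0; i < modules->size(); ++i) {
    const IRDBRecord* row = (*modules)[i];
    double halfLength = row->getDouble("ZHALFLENGTH");  // illustrative column
    int    moduleId   = row->getInt("MODULE_ID");       // illustrative column
    // ... use the numbers to build GeoModel volumes ...
    (void)halfLength; (void)moduleId;
  }
}
```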

Incorporating time-dependent alignments
- Two sources of non-event data for Detector Description: Geometry DB versus Conditions DB
  - Geometry DB: values that are constant over a simulation cycle or a data-taking period
  - Conditions DB: values that change within a cycle, or that are determined more precisely between reconstruction passes
- The data needed to apply alignment corrections on top of the regular geometry is kept in the ATLAS Conditions DB
  - The Conditions DB is organized primarily by Interval Of Validity (IOV)
- ATLAS applications access Conditions DB data through the COOL interface developed within the LCG project (see the sketch below)
  - For more information see http://lcgapp.cern.ch/project/CondDB and the two presentations by A. Valassi at this conference
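Underneath the ATHENA layer described on the next slide, conditions data is addressed by folder, channel and validity key through the COOL API. The sketch below shows a bare read; the connection string, folder path and channel number are placeholders, and the header locations follow the COOL packages of the time.

```cpp
// Bare COOL read, outside ATHENA, for illustration only (placeholder names).
#include "CoolApplication/DatabaseSvcFactory.h"
#include "CoolKernel/IDatabaseSvc.h"
#include "CoolKernel/IDatabase.h"
#include "CoolKernel/IFolder.h"
#include "CoolKernel/IObject.h"
#include "CoolKernel/ValidityKey.h"

void readOneAlignmentObject(cool::ValidityKey when)
{
  cool::IDatabaseSvc& dbSvc = cool::DatabaseSvcFactory::databaseService();

  // Placeholder connection string and folder path.
  cool::IDatabasePtr db =
      dbSvc.openDatabase("sqlite://;schema=toycond.db;dbname=TOYCOND", true /*read-only*/);
  cool::IFolderPtr folder = db->getFolder("/Toy/Align");

  // Fetch the object whose interval of validity contains 'when', in channel 0.
  cool::IObjectPtr obj = folder->findObject(when, 0);

  // obj->since() and obj->until() give the IOV boundaries; the payload holds
  // the alignment parameters for this channel (access details omitted here).
}
```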

Accessing COOL data from ATHENA
- Access to COOL from ATHENA goes through the dedicated IOV DB Service (IOVDbSvc)
  - IOVDbSvc provides the interface between conditions data objects and the ATHENA Transient Detector Store (TDS)
  - IOVDbSvc takes care of the low-level interactions with COOL
- IOVDbSvc ensures that the correct conditions objects are loaded into the TDS for the event currently being analyzed
  - To read data from the Conditions DB, a client application just requests the conditions objects from the TDS
- Conditions data clients can also register callbacks on conditions objects, so that they are notified whenever the objects change (sketched below)
  - i.e. when new constants are retrieved at the start of an event for which the previous constants are no longer valid
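A hedged sketch of the callback registration follows. The class is a toy (not derived from any Gaudi base class), the folder key and container type are examples only, and the header paths should be taken as approximate for the offline release in use; the regFcn call is the StoreGate registration mechanism referred to above.

```cpp
// Sketch: an ATHENA component registering an alignment callback (illustrative names).
#include "GaudiKernel/StatusCode.h"
#include "StoreGate/StoreGateSvc.h"
#include "StoreGate/DataHandle.h"
#include "AthenaKernel/IOVSvcDefs.h"                      // IOVSVC_CALLBACK_ARGS
#include "DetDescrConditions/AlignableTransformContainer.h"

class ToyAlignTool {
public:
  // Called once at initialization: hook the callback to the conditions folder.
  StatusCode registerCallback(StoreGateSvc* detStore)
  {
    // When the objects under this key get a new IOV, IOVDbSvc reloads them
    // into the TDS and StoreGate invokes align() with the changed keys.
    return detStore->regFcn(&ToyAlignTool::align, this,
                            m_aligns, "/Toy/Align");
  }

  // Callback: apply the new constants to the geometry (see the next slides).
  StatusCode align(IOVSVC_CALLBACK_ARGS)
  {
    // m_aligns now points at the freshly loaded container in the TDS.
    return StatusCode::SUCCESS;
  }

private:
  const DataHandle<AlignableTransformContainer> m_aligns;
};
```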

GeoModel mechanisms for volume alignment
- GeoModel comes with intrinsic mechanisms for putting alignments into the geometry and getting them back out
  - To put an alignment in, one alters one or more GeoAlignableTransform objects by calling setDelta(HepTransform3D)
  - To get alignments out, one simply queries a (full) physical volume for its transformation
- What has to be done to make this work in GeoModel-based DD applications? (see the sketch below)
  1. Identify the alignable volumes in the system and attach a GeoAlignableTransform to each of them
  2. Develop a routine that retrieves the Conditions objects with the alignment constants from the TDS and alters the appropriate GeoAlignableTransforms
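Step 2 typically ends in code like the following sketch, where a detector manager keeps a map from identifiers to the GeoAlignableTransforms attached in step 1. Apart from setDelta, the class, method and container names are invented; real subsystems use their own identifier schemes and conditions containers.

```cpp
// Sketch of step 2: pushing new constants into GeoModel (illustrative names).
#include <map>
#include "GeoModelKernel/GeoAlignableTransform.h"
#include "CLHEP/Geometry/Transform3D.h"

class ToyDetectorManager {
public:
  // Step 1 fills this map while the geometry is being built:
  // one alignable transform per alignable volume, keyed by an identifier.
  void registerAlignable(int id, GeoAlignableTransform* xf) { m_alignables[id] = xf; }

  // Step 2: apply the constants retrieved from the TDS, e.g. inside the
  // IOVDbSvc callback shown on the previous slide.
  void applyAlignments(const std::map<int, HepGeom::Transform3D>& deltas)
  {
    for (const auto& d : deltas) {
      auto it = m_alignables.find(d.first);
      if (it != m_alignables.end())
        it->second->setDelta(d.second);   // full physical volumes downstream
                                          // now report the misaligned position
    }
  }

private:
  std::map<int, GeoAlignableTransform*> m_alignables;
};
```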

Misaligned geometry in Simulation and Reconstruction jobs
- Simulation and Reconstruction place different requirements on the misaligned geometry
  - Simulation needs a static geometry snapshot for the entire job; alignments can be applied only once, at initialization
  - Reconstruction must be able to deal with alignment changes event by event
- To satisfy these requirements:
  - In Simulation: IOVDbSvc loads the Conditions objects into the TDS at initialization
  - In Reconstruction: DD applications register callback routines, which are invoked for every event for which the previous constants are no longer valid
- ... and in both cases DD clients get a properly misaligned geometry

Conclusion – Status & Outlook
- Using the ATLAS geometry versioning system we have already implemented many different layouts of the ATLAS geometry
  - Geometry of the entire detector at different levels of realism
  - Various configurations for commissioning
  - Test-beam setup configurations
- Implementation of misaligned detector geometries has started recently
  - The complete infrastructure is in place, along with simple prototype examples
  - The implementation of realistic misaligned subsystem geometries is yet to come...