NATIONAL ENERGY RESEARCH SCIENTIFIC COMPUTING CENTER
The Athena Control Framework in Production, New Developments and Lessons Learned
Charles Leggett, September 27, 2004
C. Leggett, P. Calafiura, W. Lavrijsen, M. Marino, D. Quarrie
Athena and Gaudi
- The Gaudi framework is shared by LHCb, ATLAS, GLAST, HARP, and OPERA
- Based on a modular component architecture, structured around dynamically loadable libraries
- Maintains a separation between transient and persistent layers, allowing new technologies to replace aging ones with minimal impact on the end user
- Athena comprises the ATLAS-specific extensions to Gaudi, most notably:
  - StoreGate – the data store
  - Interval of Validity Service – managing time-dependent data
  - Pileup – combining multiple events
  - HistorySvc – maintaining a multi-level record of data provenance
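As a rough illustration of this component model, the sketch below shows an algorithm with the usual initialize/execute/finalize lifecycle exchanging data objects with a transient store. The class names and the toy store are invented stand-ins for illustration, not the actual Gaudi/Athena interfaces.

```python
# Minimal sketch of a Gaudi/Athena-style algorithm component and transient store.
# All names here are illustrative stand-ins, not the real C++ interfaces.

class TransientStore:
    """Toy stand-in for StoreGate: a keyed, per-event transient data store."""
    def __init__(self):
        self._objects = {}

    def record(self, key, obj):
        self._objects[key] = obj

    def retrieve(self, key):
        return self._objects[key]

    def clear(self):
        self._objects.clear()       # wiped between events


class Algorithm:
    """Base class mimicking the initialize/execute/finalize lifecycle."""
    def __init__(self, name, store):
        self.name, self.store = name, store

    def initialize(self): pass
    def execute(self): raise NotImplementedError
    def finalize(self): pass


class TrackMaker(Algorithm):
    def execute(self):
        hits = self.store.retrieve("Hits")                        # input from the store
        self.store.record("Tracks", [h.upper() for h in hits])    # toy "reconstruction"


# A tiny event loop; in the framework the application manager drives this.
store = TransientStore()
algs = [TrackMaker("TrackMaker", store)]
for alg in algs:
    alg.initialize()
for event in (["h1", "h2"], ["h3"]):
    store.clear()
    store.record("Hits", event)
    for alg in algs:
        alg.execute()
    print(store.retrieve("Tracks"))
for alg in algs:
    alg.finalize()
```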
Athena and Gaudi
[Component diagram: Algorithms and Converters exchange data objects with the Event Store and Detector Store through StoreGateSvc, backed by the Persistency Service and data files; the Application Manager, Event LoopMgr, Sequencer, Auditors, and services (Message, JobOptions, Particle Properties, Histogram, Scripting, and others) make up the framework core.]
Athena in Production
- Data Challenge II:
  - phase 1:
    - ~30 physics channels, tens of millions of events
    - several million calibration events
    - currently producing raw data in a worldwide distributed environment using the Grid
  - phase 2: reconstruction and real-time distribution of data to Tier-1 institutes
  - phase 3: worldwide distributed analysis on the Grid
- Combined Test Beam:
  - taking data since July in various configurations
  - ~5000 runs, > 1 TB of data written
  - G4 simulation and reconstruction of the CTB setup proceeding in parallel
  - conditions databases in production, both read and write
  - preparing for phase 2: massive reconstruction of all real data and production of MC data
Combined Test Beam Setup
Interval of Validity Service
- Makes associations between user data and time-dependent data that resides in specialized conditions databases
- Transparent to the user
- Data are only read from the persistent layer when they are used; validity-interval information is kept separate from the data
- Hierarchical callback functions can be associated with time-dependent data so that they are triggered when the data enter a new interval of validity (see the sketch below)
- Validity-interval information and time-dependent data can be preloaded on a job or run basis for trigger or test-beam situations where database access is unwanted
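The following is a minimal, self-contained sketch of the callback idea described above: a store that caches conditions data together with its validity interval and fires registered callbacks only when a requested event time falls outside the cached interval. All names (IOVStore, register_callback, the folder key) are hypothetical, not the real Athena IOVSvc API.

```python
# Illustrative sketch of an IOV-style callback mechanism.
# Names (IOVStore, register_callback, ...) are hypothetical, not the Athena API.

class IOVStore:
    def __init__(self):
        self._data = {}        # key -> (payload, (start, stop))
        self._callbacks = {}   # key -> list of callback functions

    def record(self, key, payload, start, stop):
        """Cache a payload valid for event times in [start, stop)."""
        self._data[key] = (payload, (start, stop))

    def register_callback(self, key, fn):
        """Call fn(key, payload) whenever key enters a new validity interval."""
        self._callbacks.setdefault(key, []).append(fn)

    def retrieve(self, key, event_time, load_fn):
        """Return the payload valid at event_time, reloading (and firing
        callbacks) only if the cached interval no longer covers it."""
        cached = self._data.get(key)
        if cached is None or not (cached[1][0] <= event_time < cached[1][1]):
            payload, start, stop = load_fn(event_time)   # read from the "conditions DB"
            self.record(key, payload, start, stop)
            for fn in self._callbacks.get(key, []):
                fn(key, payload)                          # dependent updates go here
        return self._data[key][0]


# Example: a calibration constant valid for 100-event blocks.
def load_calib(t):
    start = (t // 100) * 100
    return {"gain": 1.0 + start / 1000.0}, start, start + 100

store = IOVStore()
store.register_callback("/Calib/Gain", lambda k, p: print(f"new IOV for {k}: {p}"))
for evt in (5, 42, 150):          # the loader and callback fire at evt=5 and evt=150 only
    print(evt, store.retrieve("/Calib/Gain", evt, load_calib))
```

Here the "database read" is a toy loader; the point from the slide is that the read and the callbacks happen only on a validity-interval boundary, not on every event.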
Access to Time Varying Data
- Maintains the separation between transient and persistent layers
- The test-beam environment is making good use of the IOVService for:
  - slow control
  - calibration
Detector Pileup in DC2
- Overlay ~1000 minimum-bias events onto the original physics stream
  - Requirement: digitization algorithms should run unchanged
- Tuple event iterator: manages multiple input streams
  - selects random permutations from a circular buffer of minimum-bias events (see the sketch below)
  - memory optimization: requirement that total job size stay below 1 GB, met with 2-dimensional (detector and time-dependent) event caching
- A stress test of the architecture's flexibility
- An excellent tool for exposing memory leaks (they become ~1000 times bigger)
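To make the circular-buffer idea concrete, here is a self-contained sketch (toy data structures only, not the actual pileup machinery) that caches minimum-bias events in a fixed-size buffer, draws a random selection from it for each physics event, and overlays the hits.

```python
# Illustrative sketch of pileup overlay using a circular buffer of minimum-bias
# events; the structure and names are invented for illustration, not Athena code.
import random
from collections import deque

class MinBiasBuffer:
    """Circular buffer that hands out random selections of cached min-bias events."""
    def __init__(self, source, size):
        self._source = source                 # iterator over the min-bias stream
        self._buffer = deque(maxlen=size)     # old events are evicted automatically
        for _ in range(size):
            self._buffer.append(next(source))

    def sample(self, n):
        """Pick n cached events in a random permutation, then refresh the cache a little."""
        picked = random.sample(list(self._buffer), n)
        self._buffer.append(next(self._source))   # keep the buffer cycling
        return picked

def overlay(physics_event, minbias_events):
    """Merge min-bias hits into the physics event (trivial stand-in for digitization)."""
    merged = dict(physics_event)
    merged["hits"] = physics_event["hits"] + [h for e in minbias_events for h in e["hits"]]
    return merged

# Toy streams: each "event" is just a dict with a list of hits.
minbias_stream = ({"hits": [f"mb{i}"]} for i in range(10**6))
buffer = MinBiasBuffer(minbias_stream, size=5000)

physics = {"hits": ["signal"]}
piled_up = overlay(physics, buffer.sample(1000))
print(len(piled_up["hits"]))   # 1001: the signal hit plus 1000 overlaid min-bias hits
```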
History
- The provenance of data must be assured
- User selection of data based on its history
  - the full history of generation and processing is recorded and associated with all data
- Important in analysis to know the complete source of the data and all cuts applied
- The History Service keeps track of:
  - Environment
  - Job configuration
  - Services
  - Algorithms, AlgTools, sub-algorithms
  - DataObjects
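A hedged sketch of what such a provenance record might contain, using invented class names rather than the actual HistorySvc objects: a job-level snapshot of the environment and configuration, plus a per-object record linking each data object to the algorithm that produced it.

```python
# Illustrative sketch of recording job provenance alongside data objects;
# class and field names are invented, not the actual HistorySvc interface.
import os, sys, time
from dataclasses import dataclass, field

@dataclass
class JobHistory:
    """Environment and configuration snapshot taken once per job."""
    environment: dict = field(default_factory=lambda: dict(os.environ))
    command_line: list = field(default_factory=lambda: list(sys.argv))
    start_time: float = field(default_factory=time.time)
    job_options: dict = field(default_factory=dict)    # properties of all components
    algorithms: list = field(default_factory=list)     # algorithms/tools that ran

@dataclass
class DataHistory:
    """Provenance attached to a single data object."""
    key: str
    producer: str            # algorithm that created the object
    job: JobHistory          # link back to the job-level record

job = JobHistory(job_options={"EvtMax": 100, "OutputLevel": "INFO"})
job.algorithms.append("CaloCellMaker")
cells_history = DataHistory(key="CaloCells", producer="CaloCellMaker", job=job)
print(cells_history.producer, "->", cells_history.key)
```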
Python-Based Scripting Interface
- Python is woven into the framework, replacing flat text configuration files
- Dynamic job configuration
  - conditional branching
  - DetFlags
- Interactive analysis
- Data object access and manipulation
- Connection to ROOT histogramming facilities
- Object type information for dictionaries and persistency
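To illustrate what replacing flat text configuration with Python buys, here is a self-contained sketch in the spirit of job options with conditional branching; the classes, flags, and algorithm names are stand-ins, not the real AthenaCommon configuration modules.

```python
# Illustrative job-options-style configuration with conditional branching.
# The classes and flags below are stand-ins, not the real AthenaCommon modules.

class Algorithm:
    def __init__(self, name, **properties):
        self.name = name
        self.properties = properties

class JobConfig:
    def __init__(self):
        self.algorithms = []
        self.properties = {"EvtMax": -1, "OutputLevel": "INFO"}

    def add(self, alg):
        self.algorithms.append(alg)

# Flags steering the configuration; in Athena these roles are played by DetFlags etc.
doPixel = True
doTestBeam = False

job = JobConfig()
job.properties["EvtMax"] = 1000

# Conditional branching: include components only when the flags ask for them.
if doPixel:
    job.add(Algorithm("PixelClusterization", DiscardBadClusters=True))
if doTestBeam:
    job.add(Algorithm("TBGeometrySetup"))        # test-beam geometry and conditions
else:
    job.add(Algorithm("FullDetectorGeometrySetup"))

for alg in job.algorithms:
    print(alg.name, alg.properties)
```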
Detector Configuration Matrix

Output of DetFlags.Print():

                pixel   SCT   TRT    em   HEC  FCal  Tile   MDT   CSC   TGC   RPC Truth  LVL1
detdescr     :     ON    ON    ON    --    --    --    ON    --    --    --    --    ON    --
digitize     :     ON    ON    ON    --    --    --    ON    --    --    --    --    ON    --
geometry     :     ON    ON    ON    --    --    --    ON    --    --    --    --    ON    --
haveRIO      :     --    --    --    --    --    --    --    --    --    --    --    --    --
makeRIO      :     --    --    --    --    --    --    --    --    --    --    --    --    --
pileup       :     ON    ON    ON    --    --    --    --    --    --    --    --    ON    --
readRDOBS    :     --    --    --    --    --    --    --    --    --    --    --    --    --
readRDOPool  :     ON    ON    ON    --    --    --    ON    --    --    --    --    ON    --
readRIOBS    :     --    --    --    --    --    --    --    --    --    --    --    --    --
readRIOPool  :     --    --    --    --    --    --    --    --    --    --    --    --    --
simulate     :     --    --    --    --    --    --    --    --    --    --    --    --    --
writeBS      :     --    --    --    --    --    --    --    --    --    --    --    --    --
writeRDOPool :     ON    ON    --    --    --    --    ON    --    --    --    --    ON    --
writeRIOPool :     --    --    --    --    --    --    --    --    --    --    --    --    --
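As a rough sketch of how a task-by-detector flag matrix like the one above could be represented and printed (the real DetFlags implementation differs; everything below is illustrative):

```python
# Illustrative mock of a DetFlags-style task-by-detector flag matrix;
# the real AthenaCommon DetFlags interface differs in detail.

DETECTORS = ["pixel", "SCT", "TRT", "em", "HEC", "FCal", "Tile",
             "MDT", "CSC", "TGC", "RPC", "Truth", "LVL1"]
TASKS = ["detdescr", "digitize", "geometry", "pileup", "readRDOPool", "writeRDOPool"]

class FlagMatrix:
    def __init__(self):
        # flags[task][detector] -> bool; everything starts OFF
        self.flags = {t: {d: False for d in DETECTORS} for t in TASKS}

    def set_on(self, task, *detectors):
        for d in detectors:
            self.flags[task][d] = True

    def print_matrix(self):
        print(" " * 14 + " ".join(f"{d:>5}" for d in DETECTORS))
        for t in TASKS:
            row = " ".join(f"{'ON' if self.flags[t][d] else '--':>5}" for d in DETECTORS)
            print(f"{t:<12}: {row}")

flags = FlagMatrix()
for task in ("detdescr", "digitize", "geometry", "readRDOPool"):
    flags.set_on(task, "pixel", "SCT", "TRT", "Tile", "Truth")
flags.set_on("pileup", "pixel", "SCT", "TRT", "Truth")
flags.set_on("writeRDOPool", "pixel", "SCT", "Tile", "Truth")
flags.print_matrix()
```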
Lessons Learned
- Design for performance
  - pileup has been an excellent testbed
  - database access can be problematic
- Design for persistency
  - various container classes were rewritten with persistency in mind
  - a ClassID Service to globally monitor objects
- Better support for real-time monitoring is needed
  - essential for proper test-beam studies