1
ATLAS, U.S. ATLAS, and Databases
David Malon, Argonne National Laboratory
DOE/NSF Review of U.S. ATLAS and CMS Computing Projects
Brookhaven National Laboratory, November 14-17, 2000
2
Outline
- Requirements
- Technical Choices
- Approach
- Organization (U.S. and International ATLAS)
- Resource Requirements
- Schedule
- Fallback Issues
3
Requirements
- Efficient storage and retrieval of several petabytes of data annually
- Global access to data; data replication and distribution to ATLAS institutions worldwide
- Event databases (raw, simulation, reconstruction)
- Geometry databases
- Conditions databases (calibrations, alignment, run conditions); see the sketch after this list
- Statistics and analysis stores
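The conditions-database requirement follows a pattern common across HEP experiments: constants are keyed by an interval of validity (IOV), so clients ask for the calibration valid for a given run rather than fetching a fixed record. The sketch below illustrates that access pattern; the class and method names are hypothetical, not an ATLAS interface.

```cpp
// Hedged sketch of interval-of-validity conditions access (names hypothetical).
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Interval { long firstRun; long lastRun; };  // interval of validity

class ConditionsDB {
public:
    // Register constants valid for a run range under a named folder.
    void store(const std::string& folder, Interval iov, std::vector<double> constants) {
        data_[folder].push_back({iov, std::move(constants)});
    }

    // Retrieve the constants whose interval covers the given run.
    const std::vector<double>* retrieve(const std::string& folder, long run) const {
        auto it = data_.find(folder);
        if (it == data_.end()) return nullptr;
        for (const auto& entry : it->second)
            if (run >= entry.first.firstRun && run <= entry.first.lastRun)
                return &entry.second;
        return nullptr;  // no constants valid for this run
    }

private:
    std::map<std::string, std::vector<std::pair<Interval, std::vector<double>>>> data_;
};
```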
4
Requirements
- Access to and storage of physics data through the ATLAS control framework
- Metadata databases and query mechanisms for event and data selection (see the sketch after this list)
- Schema evolution
- Database support for testbeams
- Support for physical data clustering and storage optimization
- Tertiary storage access and management
- Interfaces to fabrication databases, to online data sources, …
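The metadata-query requirement is typically met with compact per-event "tags": a selection runs over small summary records, and only the chosen events are then fetched from the full event store. A minimal sketch, with hypothetical names and summary quantities:

```cpp
// Hedged sketch of tag-based event selection (names and fields hypothetical).
#include <functional>
#include <vector>

struct EventTag {
    long run;
    long event;
    int nMuons;        // summary quantities recorded once per event
    double missingEt;  // missing transverse energy (GeV)
};

// Scan the tag collection; the survivors identify events to read back
// from the full event store.
std::vector<EventTag> select(const std::vector<EventTag>& tags,
                             const std::function<bool(const EventTag&)>& cut) {
    std::vector<EventTag> out;
    for (const auto& t : tags)
        if (cut(t)) out.push_back(t);
    return out;
}

// Usage: pick events with at least two muons and large missing energy.
// auto picked = select(tags, [](const EventTag& t) {
//     return t.nMuons >= 2 && t.missingEt > 50.0; });
```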
5
Technical Choices
- Objectivity/DB is the ATLAS baseline datastore technology
- Enforce transient/persistent separation to keep physics codes “independent of database supplier” (sketched below)
- Use LHC-wide and/or IT-provided technologies wherever possible
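Transient/persistent separation is what delivers supplier independence: physics code manipulates transient classes that include no database headers, while the datastore sits behind an abstract service. A minimal sketch of the pattern, with hypothetical names (in the baseline, the Objectivity-specific code would live behind the interface):

```cpp
// Minimal sketch of transient/persistent separation (names hypothetical).
#include <vector>

// Transient side: what physics code sees. No persistence technology leaks in.
struct TransientTrack {
    double pt;                // transverse momentum
    std::vector<int> hitIds;  // detector hits on the track
};

// Abstract persistency interface; an Objectivity-based service would be one
// implementation, an alternative supplier another.
class IPersistencySvc {
public:
    virtual ~IPersistencySvc() = default;
    virtual void store(const TransientTrack& t) = 0;
    virtual TransientTrack retrieve(long id) = 0;
};

// Physics code depends only on the interface, so swapping the database
// supplier never touches it.
void saveTracks(IPersistencySvc& svc, const std::vector<TransientTrack>& tracks) {
    for (const auto& t : tracks) svc.store(t);
}
```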
6
Approach
- Build a nontrivial Objectivity-based event store from TDR data
- Provide a rudimentary generic Objectivity persistency service for storage and access of user-defined data through the control framework; evolve it as the ATLAS event model evolves (sketched below)
- Use testbeams as testbeds:
  - for IT-supported calibration databases
  - for evaluating HepODBMS and approaches to naming and user areas
  - for evaluating alternative transient/persistent separation models
- Rely on subsystems for subsystem-specific database content
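A "generic" persistency service must store user-defined types it was not compiled to know about. One common way to get that property is a type-keyed registry of writers; the sketch below shows the idea with hypothetical names, not the actual ATLAS service. In the baseline, the registered writers would delegate to Objectivity/DB.

```cpp
// Hedged sketch of a generic persistency service (names hypothetical).
#include <functional>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

class GenericPersistencySvc {
public:
    // A subsystem registers how instances of T are written to the datastore.
    template <typename T>
    void registerWriter(std::function<void(const T&)> writer) {
        writers_[std::type_index(typeid(T))] =
            [writer](const void* obj) { writer(*static_cast<const T*>(obj)); };
    }

    // The control framework calls this for any object handed to it; the
    // service needs no compile-time knowledge of the concrete type.
    template <typename T>
    void store(const T& obj) {
        writers_.at(std::type_index(typeid(T)))(&obj);
    }

private:
    std::unordered_map<std::type_index, std::function<void(const void*)>> writers_;
};
```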
7
Approach
- Build data distribution infrastructure to be grid-aware from the outset; use common grid tools (initially GDMP from CMS); see the sketch after this list
- Participate in the definition of LHC-wide solutions for data and storage management infrastructure
- Evaluate and compare technologies, and gain widespread user exposure to the baseline technology, prior to the first mock data challenge
- Understand and address scalability issues in a series of mock data challenges
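"Grid-aware from the outset" means, in practice, that application code resolves site-independent logical file names through a replica catalog rather than assuming local paths, while a replication tool such as GDMP keeps copies synchronized across sites. The interface below is purely illustrative (hypothetical names, not the GDMP API):

```cpp
// Illustrative replica-catalog sketch (hypothetical, not the GDMP API).
#include <map>
#include <string>
#include <vector>

class ReplicaCatalog {
public:
    // Map a site-independent logical file name to its physical replicas.
    std::vector<std::string> lookup(const std::string& lfn) const {
        auto it = replicas_.find(lfn);
        return it != replicas_.end() ? it->second : std::vector<std::string>{};
    }

    // Called by the replication machinery when a new copy lands at a site.
    void addReplica(const std::string& lfn, const std::string& pfn) {
        replicas_[lfn].push_back(pfn);
    }

private:
    std::map<std::string, std::vector<std::string>> replicas_;
};
```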
8
Organization
- Shared database coordination responsibility: David Malon (Argonne) and RD Schaffer (Orsay)
- Database task leaders from each subsystem:
  - Inner Detector: Stan Bentvelsen
  - Liquid Argon: Stefan Simion (Nevis), Randy Sobie
  - Muon Spectrometer: Steve Goldfarb (Michigan)
  - Tile Calorimeter: Tom LeCompte (Argonne)
  - Trigger/DAQ: Hans Peter Beck
- U.S. organization: David Malon is Level 3 Manager for databases
9
Resource Requirements
- WBS estimates (see document from Torre Wenaus) show 14.6 FTEs in 2001, 15.8 in 2002, with a maximum of 18.5 in 2005
- Proposed U.S. share is 3.5 FTEs in 2001 (database coordinator and 2.5 developers), 4.5 in 2002, ramping up to a maximum of 6.5
10
Schedule
- Framework milestones determine database content support milestones; see David Quarrie’s talk for details
- Database content support for simulation and reconstruction needed, e.g., by end of 2001 for Mock Data Challenge 0
- December 2000 release:
  - Access to the Objectivity-based event store (TDR data) through the control framework
  - Rudimentary generic Objectivity persistency service
11
Domain-Specific Schedule
- LHC-wide requirements reassessment and a common project for data and storage management will almost certainly commence in 2001
- Database evaluation, scalability assessment, and technology comparisons to support a 2001 datastore technology decision
- Series-production detectors enter the testbeam in May 2001
12
Fallback Issues
- Expect a shortfall of core database effort in the near term:
  - Argonne: 3.0
  - Chicago: 0.5
  - Orsay: 3.0
  - CERN: 1.0-2.0
  - Milan: 0.5-1.0
  - Lisbon: 0.5-1.0
  - U.S. at CERN (pending NSF request): 2.0
- If all of these materialize, the shortfall in rate would not be so large, but the actual count in October 2000 is only 3.0
- Currently addressing this by blurring the lines between core and subsystem work
13
Fallback Issues
- Cannot compromise milestones related to integration into the control framework, or support for storage and retrieval of simulation and reconstruction data, in the timeframe of the mock data challenges
- Have functional flexibility in certain areas (e.g., WBS items relating to security and administration tools); could also delay support for analysis beyond generic services
14
Fallback Issues
- The database technology decision is a “floating” milestone: independence of database supplier gives us some temporal flexibility
- Could reduce scope, if absolutely necessary, by limiting support for testbeam to a consulting role