ATLAS, U.S. ATLAS, and Databases
David Malon, Argonne National Laboratory
DOE/NSF Review of U.S. ATLAS and CMS Computing Projects
Brookhaven National Laboratory, November 14-17, 2000

Outline
- Requirements
- Technical Choices
- Approach
- Organization (U.S. and International ATLAS)
- Resource Requirements
- Schedule
- Fallback Issues

Requirements
- Efficient storage and retrieval of several petabytes of data annually
- Global access to data; data replication and distribution to ATLAS institutions worldwide
- Event databases (raw, simulation, reconstruction)
- Geometry databases
- Conditions databases (calibrations, alignment, run conditions)
- Statistics and analysis stores
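
To make the conditions-database requirement more concrete: conditions data (calibrations, alignment, run conditions) are time-varying, so each stored payload carries an interval of validity and lookups are keyed by run number or timestamp. The following minimal C++ sketch illustrates the idea only; the names (IntervalOfValidity, CondFolder, findValid) are hypothetical and are not the ATLAS conditions-database API.

#include <iostream>
#include <vector>

// Hypothetical interval of validity: a payload is valid for
// run numbers (or timestamps) t with since <= t < until.
struct IntervalOfValidity {
    long since;
    long until;
    bool contains(long t) const { return t >= since && t < until; }
};

// One versioned calibration payload (illustrative only).
struct CalibPayload {
    IntervalOfValidity iov;
    std::vector<double> constants;  // e.g. per-channel calibration constants
};

// Minimal in-memory stand-in for one conditions "folder": a named
// quantity with many payloads, each carrying its own validity interval.
class CondFolder {
public:
    void add(const CalibPayload& p) { payloads_.push_back(p); }

    // Return the payload valid at the given run/time, or nullptr if none.
    const CalibPayload* findValid(long time) const {
        for (const auto& p : payloads_)
            if (p.iov.contains(time)) return &p;
        return nullptr;
    }

private:
    std::vector<CalibPayload> payloads_;
};

int main() {
    CondFolder pedestals;
    pedestals.add({{0, 1000},    {1.02, 0.98, 1.01}});   // valid for runs 0-999
    pedestals.add({{1000, 2000}, {1.05, 0.97, 1.00}});   // valid for runs 1000-1999

    if (const CalibPayload* p = pedestals.findValid(1500))
        std::cout << "pedestal[0] at run 1500: " << p->constants[0] << "\n";
    return 0;
}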

Requirements (continued)
- Access to and storage of physics data through the ATLAS control framework
- Metadata databases, and query mechanisms for event and data selection
- Schema evolution
- Database support for testbeams
- Support for physical data clustering and storage optimization
- Tertiary storage access and management
- Interfaces to fabrication databases, to online data sources, …
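
As an illustration of the metadata and event-selection requirement: a small per-event tag record with a few summary attributes can be queried to select events before touching the full event store. The sketch below is purely illustrative; the EventTag attributes and the selection interface are assumptions, not the ATLAS event metadata schema.

#include <iostream>
#include <vector>

// Hypothetical per-event tag record: a few summary attributes plus an
// opaque reference back to the full event in the event store.
struct EventTag {
    unsigned run;
    unsigned event;
    int      nJets;
    double   missingEt;  // GeV
    long     eventRef;   // stand-in for a persistent event reference
};

// Select event references satisfying a user-supplied predicate; only the
// selected events would then be fetched from the (much larger) event store.
template <typename Pred>
std::vector<long> selectEvents(const std::vector<EventTag>& tags, Pred pred) {
    std::vector<long> refs;
    for (const auto& t : tags)
        if (pred(t)) refs.push_back(t.eventRef);
    return refs;
}

int main() {
    std::vector<EventTag> tags = {
        {1, 1, 2, 35.0, 101}, {1, 2, 4, 80.5, 102}, {1, 3, 3, 60.0, 103}};

    // Example query: at least 3 jets and missing Et above 50 GeV.
    auto refs = selectEvents(tags, [](const EventTag& t) {
        return t.nJets >= 3 && t.missingEt > 50.0;
    });
    std::cout << "selected " << refs.size() << " of " << tags.size() << " events\n";
    return 0;
}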

Technical Choices
- Objectivity/DB is the ATLAS baseline datastore technology
- Enforce transient/persistent separation to keep physics codes “independent of database supplier”
- Use LHC-wide and/or IT-provided technologies wherever possible
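
A minimal sketch of what transient/persistent separation can mean in practice: physics code manipulates only a transient class, while a converter maps it to and from a technology-specific persistent shape, so the datastore supplier can change without touching physics code. The class names below (TrackT, TrackP, TrackConverter) are hypothetical, not actual ATLAS classes.

#include <iostream>

// Transient class: what physics code uses. No persistence base class,
// no Objectivity (or other vendor) types, no database headers.
struct TrackT {
    double pt, eta, phi;
};

// Persistent shape: may differ in layout, packing, or precision, and is
// the only place a storage technology would ever be referenced.
struct TrackP {
    float packed[3];
};

// Converter: the single point of contact between the two representations.
// Changing the datastore supplier means reimplementing converters,
// not touching physics code.
struct TrackConverter {
    static TrackP toPersistent(const TrackT& t) {
        return {{static_cast<float>(t.pt),
                 static_cast<float>(t.eta),
                 static_cast<float>(t.phi)}};
    }
    static TrackT toTransient(const TrackP& p) {
        return {p.packed[0], p.packed[1], p.packed[2]};
    }
};

int main() {
    TrackT t{42.5, -1.2, 0.7};
    TrackP p = TrackConverter::toPersistent(t);    // what a "write" would store
    TrackT back = TrackConverter::toTransient(p);  // what a "read" would return
    std::cout << back.pt << " " << back.eta << " " << back.phi << "\n";
    return 0;
}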

Approach
- Build a nontrivial Objectivity-based event store from TDR data
- Provide a rudimentary generic Objectivity persistency service for storage and access of user-defined data through the control framework; evolve this as the ATLAS event model evolves
- Use testbeams as testbeds:
  - For IT-supported calibration databases
  - For evaluating HepODBMS and approaches to naming and user areas
  - For evaluating alternative transient/persistent separation models
- Rely on subsystems for subsystem-specific database content
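
The rudimentary generic persistency service can be pictured as a keyed record/retrieve interface exposed to algorithms through the control framework, with the storage technology hidden behind it. The sketch below uses an in-memory stand-in for the backing store; the names (IPersistencySvc, record, retrieve) are assumptions for illustration, not the Athena or Objectivity API.

#include <any>
#include <iostream>
#include <map>
#include <string>
#include <utility>

// Hypothetical framework-facing interface: algorithms record and retrieve
// user-defined objects by key, with no knowledge of the backing store.
class IPersistencySvc {
public:
    virtual ~IPersistencySvc() = default;

    template <typename T>
    void record(const std::string& key, T obj) {
        store(key, std::any(std::move(obj)));
    }

    template <typename T>
    const T* retrieve(const std::string& key) const {
        const std::any* a = fetch(key);
        return a ? std::any_cast<T>(a) : nullptr;  // nullptr if missing or wrong type
    }

protected:
    virtual void store(const std::string& key, std::any obj) = 0;
    virtual const std::any* fetch(const std::string& key) const = 0;
};

// In-memory stand-in; an Objectivity-backed service would implement the
// same two virtuals against the federated database instead.
class MemoryPersistencySvc : public IPersistencySvc {
protected:
    void store(const std::string& key, std::any obj) override {
        db_[key] = std::move(obj);
    }
    const std::any* fetch(const std::string& key) const override {
        auto it = db_.find(key);
        return it == db_.end() ? nullptr : &it->second;
    }

private:
    std::map<std::string, std::any> db_;
};

struct CaloCellCollection { int nCells = 0; };  // placeholder user-defined type

int main() {
    MemoryPersistencySvc svc;
    svc.record("LArCells", CaloCellCollection{187652});
    if (const auto* cells = svc.retrieve<CaloCellCollection>("LArCells"))
        std::cout << "cells: " << cells->nCells << "\n";
    return 0;
}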

Approach (continued)
- Build data distribution infrastructure to be grid-aware from the outset; use common grid tools (initially GDMP from CMS)
- Participate in the definition of LHC-wide solutions for data and storage management infrastructure
- Evaluate and compare technologies, and gain widespread user exposure to the baseline technology, prior to the first mock data challenge
- Understand and address scalability issues in a series of mock data challenges
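
One way to picture the grid-aware distribution infrastructure is a replica catalog that maps a logical file name to its physical copies at ATLAS sites, from which a transfer tool such as GDMP would choose a source. The sketch below is an illustrative data structure only, not the GDMP interface or any actual grid middleware API.

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical replica catalog: one logical file name (LFN) maps to the
// physical copies (PFNs) currently registered at participating sites.
class ReplicaCatalog {
public:
    void registerReplica(const std::string& lfn, const std::string& pfn) {
        replicas_[lfn].push_back(pfn);
    }

    // Deliberately naive "site selection": return the first known copy,
    // or an empty string if the file has no registered replica.
    std::string locate(const std::string& lfn) const {
        auto it = replicas_.find(lfn);
        return (it == replicas_.end() || it->second.empty()) ? std::string()
                                                             : it->second.front();
    }

private:
    std::map<std::string, std::vector<std::string>> replicas_;
};

int main() {
    ReplicaCatalog cat;
    cat.registerReplica("tdr.events.0001", "cern.ch:/atlas/objy/tdr.events.0001");
    cat.registerReplica("tdr.events.0001", "bnl.gov:/atlas/objy/tdr.events.0001");
    std::cout << cat.locate("tdr.events.0001") << "\n";
    return 0;
}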

Organization
- Shared database coordination responsibility: David Malon (Argonne) and RD Schaffer (Orsay)
- Database task leaders from each subsystem:
  - Inner Detector: Stan Bentvelsen
  - Liquid Argon: Stefan Simion (Nevis), Randy Sobie
  - Muon Spectrometer: Steve Goldfarb (Michigan)
  - Tile Calorimeter: Tom LeCompte (Argonne)
  - Trigger/DAQ: Hans Peter Beck
- U.S. organization: David Malon is Level 3 Manager for databases

Resource Requirements
- WBS estimates (see document from Torre Wenaus) show 14.6 FTEs in 2001 and 15.8 in 2002, with a maximum of 18.5 in 2005
- Proposed U.S. share is 3.5 FTEs in 2001 (database coordinator and 2.5 developers) and 4.5 in 2002, ramping up to a maximum of 6.5

Schedule
- Framework milestones determine database content support milestones; see David Quarrie’s talk for details
- Database content support for simulation and reconstruction needed, e.g., by end of 2001 for Mock Data Challenge 0
- December 2000 release:
  - Access to Objectivity-based event store (TDR data) through the control framework
  - Rudimentary generic Objectivity persistency service

Domain-Specific Schedule
- LHC-wide requirements reassessment and a common project for data and storage management will almost certainly commence in 2001
- Database evaluation, scalability assessment, and technology comparisons to support a 2001 datastore technology decision
- Series production detectors enter testbeam in May 2001

Fallback Issues
- Expect a shortfall of core database effort in the near term:
  - 3.0 Argonne
  - 0.5 Chicago
  - 3.0 Orsay
  - CERN
  - Milan
  - Lisbon
  - 2.0 U.S. at CERN (pending NSF request)
- If all of these materialize, the shortfall in rate would not be so large, but the count as of October 2000 is only 3.0
- Currently addressing this by blurring the lines between core and subsystem work

Fallback Issues (continued)
- Cannot compromise milestones related to integration into the control framework, or support for storage and retrieval of simulation and reconstruction data, in the timeframe of the mock data challenges
- Have functional flexibility in certain areas (e.g., WBS items relating to security and administration tools); could also delay support for analysis beyond generic services

Fallback Issues (continued)
- The database technology decision is a “floating” milestone: independence of database supplier gives us some temporal flexibility
- Could reduce scope, if absolutely necessary, by limiting support for testbeam to a consulting role