Online Access to the ATLAS Conditions Databases
L. Lopes, A. Amorim, J. Simões, P. Pereira (Faculty of Sciences of the University of Lisbon, Lisbon, Portugal); I. Soloviev (Petersburg Nuclear Physics Institute, PNPI); S. Kolos (University of California, Irvine, UCI); M. Caprini (IFIN/HH Bucharest, Magurele, Romania); Lourenço Vaz (FCUL - University of Lisbon, Lisbon, Portugal)
15th IEEE NPSS Real Time Conference 2007 (RT07), Fermilab, Batavia IL, Apr 29 - May 5, 2007
A Toroidal LHC ApparatuS (ATLAS/CERN)

OKS2COOL
OKS2COOL is an interface that stores the Configurations databases (OKS) into COOL each time the TDAQ is booted. Together with OKS2CORAL, it forms the interface that stores TDAQ setups with time of validity and persistency, so that they can be browsed back in the future. Its main features are:
- it uses a simple API to load OKS objects into COOL (TIDB2OKSPlugin);
- it is the core module for ONASIC;
- it uses TIDB2 (Temporal Instrumentation Database 2);
- it handles schema changes by also storing the OKS binary object in COOL.
OKS2COOL2 context diagram: [Figure: OKS data in the file system is loaded by OKS2COOL / OKS2COOL2 through TIDB2 into COOL, with MySQL, PostgreSQL, Oracle and SQLite backends; KTIDBExplorer is used to browse the stored data.]
OKS2COOL data mapping: [Figure: OKS class as a COOL folder. Each OKS class (Name, Attr_name_1, Rel_name_1, ...) maps to a COOL folder with the same layout; each OKS object (Id, Attr_value_1, Rel_value_1, ...) maps to a COOL row carrying Since/Till validity, and similar OKS objects are grouped in the same folder.]
- The schema can change at any time without disturbing the storing procedure.
- The historical evolution stays easy to track, because data with different schemas can be stored in the same folder.
- Only the relevant attributes need to be shown; more attributes can be exposed later.
- OKS2COOL2 can be made bidirectional at low cost.
(A minimal code sketch of this kind of store into COOL is shown after the ONASIC section below.)

ATLAS T/DAQ Conditions Databases
The LCG/COOL Conditions Database is used to store TDAQ Monitoring information and to supply Configuration setups to the High Level Trigger (HLT) algorithms, to the ATLAS subdetectors, and also to offline event reconstruction jobs. LCG/COOL is a persistent, time-managed database, and both the Monitoring and the Configurations data may grow to a large number of objects per run because they change constantly in time; this imposes high reliability requirements on the tools that interface these services with COOL. The Configurations setup (Configurations database) is managed in the Online framework by the Online Kernel System (OKS), an in-memory database whose content can be dumped to an XML file, while Monitoring data is provided by the Information Service (IS). A set of packages has been developed to transfer Configuration setups and Monitoring data into COOL: OKS2COOL and ONASIC. On the other hand, the HLT during event processing and the subdetectors during their configuration access COOL through a very large number of clients; this imposes high performance requirements on COOL and on its backend RDBMS. To evaluate this performance, the DBStressor tool was developed and used extensively during the Large Scale Tests (CERN, 2007), where a computing farm of up to 1000 nodes simulated HLT and subdetector read accesses to COOL. This set of tools has been integrated into the TDAQ software as CVS packages since October 2006 and is presently fully functional.

ONASIC: Online Asynchronous Interface to COOL
ONASIC is a two-tier interface from IS to COOL. Its reliability comes from avoiding backpressure from the database servers, which it achieves by using OKS XML files as a local, temporary buffer. It includes the SetConditions tool, which allows the user to configure ONASIC on the fly and so improves its integration with the Monitoring Working Group. Its architecture uses two interfaces: ONASIC_IS2OKS (which interfaces to the Online Services) and ONASIC_OKS2COOL (which does not depend on the TDAQ infrastructure). [Figure: ONASIC architecture. IS data is converted by IS2OKS into OKS objects and stored through OKS2COOL / OKS2COOL2 (OKS2TIDB2) and TIDB2 into COOL/COOL2 on Oracle, MySQL or PostgreSQL; ONASIC/ONASIC2 are configured through SetConditions.]
Main features:
- very flexible configuration;
- the integrity of the monitoring data from the run is preserved.
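To make the storage path concrete, the following is a minimal sketch, using the COOL Python bindings (PyCool), of the kind of time-validity write that OKS2COOL (through TIDB2) and ONASIC_OKS2COOL ultimately perform. It is not the actual OKS2COOL/TIDB2 code: the connection string, folder path, payload fields and channel number are hypothetical placeholders chosen only for illustration.

# A minimal, self-contained sketch (not the actual OKS2COOL/TIDB2 code) of a
# time-validity write into COOL.  Connection string, folder path, payload
# fields and channel number are hypothetical placeholders.
from PyCool import cool

dbSvc = cool.DatabaseSvcFactory.databaseService()
# Create a local SQLite COOL database just for the illustration.
db = dbSvc.createDatabase('sqlite://;schema=tdaq_demo.db;dbname=TDAQDEMO')

# Payload layout: one column per OKS attribute/relationship we choose to expose.
spec = cool.RecordSpecification()
spec.extend('Attr_value_1', cool.StorageType.String4k)
spec.extend('Rel_value_1', cool.StorageType.String4k)

# One COOL folder per OKS class, single-version, as in the mapping above.
folderSpec = cool.FolderSpecification(cool.FolderVersioning.SINGLE_VERSION, spec)
folder = db.createFolder('/TDAQ/Demo/OKS_Class', folderSpec,
                         'illustrative folder: one per OKS class', True)

# Store one object for the validity interval [since, until) in channel 0.
payload = cool.Record(spec)
payload['Attr_value_1'] = 'value taken from the OKS object'
payload['Rel_value_1'] = 'id of the related OKS object'
since = 1000
until = cool.ValidityKeyMax
folder.storeObject(since, until, payload, 0)

db.closeDatabase()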
DBStressor / LST07 tests
DBStressor is a set of tools to test COOL access on a wide networking scale. Its core is an Online Controller that accesses the database during the TDAQ ConfigureAction. It uses IS both to rapidly configure all the controllers and to collect all the individual results, and it includes a versatile database population tool. Tests during LST07:
- measure the Oracle/COOL and LXSHARE network bandwidth;
- test the indexing capability of Oracle/COOL;
- measure the time needed to build tables in COOL;
- test the performance of COOL queries by ChannelID.
(A minimal sketch of such a ChannelID read is given after the results below.)
Overall results for each scalability step: [Figure: COOL bandwidth vs. scalability.]
Real data vs. simulation: [Figures: histogram of the real case, 1000 clients each fetching a complete 289 MB COOL table, compared with a histogram simulating 100 threads processing a total of 1000 clients with a 300 s processing time (20% spread).]
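As an illustration of the read pattern DBStressor exercises, the following is a minimal PyCool sketch of a client that queries a COOL folder by ChannelID over a validity interval, the kind of access the HLT and subdetector clients generate. It is not the actual DBStressor code: the connection string, folder path and channel number are hypothetical placeholders.

# A minimal sketch (not the actual DBStressor code) of the read pattern the
# tool exercises: query one COOL folder by ChannelID over a validity interval.
from PyCool import cool

dbSvc = cool.DatabaseSvcFactory.databaseService()
# Open the database read-only, as the HLT/subdetector clients do.
db = dbSvc.openDatabase('sqlite://;schema=tdaq_demo.db;dbname=TDAQDEMO', True)

folder = db.getFolder('/TDAQ/Demo/OKS_Class')
since = 0
until = cool.ValidityKeyMax
channel = cool.ChannelSelection(0)   # restrict the query to a single ChannelID

# Iterate over all objects valid in [since, until) for the selected channel.
nobjects = 0
objects = folder.browseObjects(since, until, channel)
while objects.goToNext():
    obj = objects.currentRef()
    payload = obj.payload()          # in a stress test only count and time matter
    nobjects += 1
objects.close()

print('objects read:', nobjects)
db.closeDatabase()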