Online DBs in Run 3 (2016), Frank Glege on behalf of several contributors from the LHC experiments.



Scope
- Configuration DBs (DAQ and DCS)
- Online conditions DBs (DAQ and DCS)
- Not covered: offline conditions DBs
- Not covered: auxiliary, technical DBs such as asset management and tracking, geometry, materials, etc.


Databases at the LHC experiments
- The usage of relational DBs, at least to the current extent, is new.
- DBs have become a vital part of the complete experiment system.
- DBs are often used as a data dump rather than as an intelligent data store.
- Not all DB developers know the concept of normalization or the differences between read- and write-optimized structures.
- DBAs at CERN play a very important role by monitoring performance, consulting and proposing improvements.

Data flow (diagram): equipment configuration, conditions and calibration data are written to the Online Master Data Storage; a reconstruction conditions data set is created and fed to the Offline Reconstruction Conditions DB online subset and then to the offline subset, with the master copy hosted in Bat. 513.

Database technology
- Online DBs fit well into the relational scheme.
- Conditions are written sequentially but read randomly; configuration is highly relational and requires integrity to be ensured (a minimal sketch of the two access patterns follows below).
- Oracle RAC is a good solution for online DBs; no need to change.
- Current sizes: ALICE 7 TB, ATLAS 12.5 TB, CMS 11.5 TB, LHCb 15 TB.
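The two access patterns can be made concrete with a minimal sketch, assuming nothing about the experiments' actual schemas: sqlite3 stands in for Oracle, and all table and column names are invented for illustration.

```python
# Minimal sketch (not any experiment's real schema) of the two access patterns:
# conditions appended sequentially and read at random points in time, versus a
# normalized configuration schema with enforced integrity.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")

# Conditions: appended sequentially, each payload valid from 'since' onwards.
db.execute("""CREATE TABLE conditions (
    tag     TEXT NOT NULL,
    since   INTEGER NOT NULL,          -- start of the interval of validity
    payload BLOB NOT NULL,
    PRIMARY KEY (tag, since))""")

# Configuration: highly relational, integrity enforced by constraints.
db.execute("""CREATE TABLE device (
    device_id INTEGER PRIMARY KEY,
    name      TEXT UNIQUE NOT NULL)""")
db.execute("""CREATE TABLE device_setting (
    device_id INTEGER NOT NULL REFERENCES device(device_id),
    parameter TEXT NOT NULL,
    value     TEXT NOT NULL,
    PRIMARY KEY (device_id, parameter))""")

# Sequential writes of conditions ...
for t, v in [(100, b"hv=1490V"), (200, b"hv=1500V"), (350, b"hv=1510V")]:
    db.execute("INSERT INTO conditions VALUES ('DCS_HV', ?, ?)", (t, v))

# ... random read: the last payload valid at an arbitrary time.
row = db.execute("""SELECT payload FROM conditions
                    WHERE tag = 'DCS_HV' AND since <= ?
                    ORDER BY since DESC LIMIT 1""", (275,)).fetchone()
print(row[0])   # b'hv=1500V'
```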

Oracle evolution
- Current versions: Oracle 11.2 and 12.1, supported until 2020 and 2021 respectively.
- Upgrade to Oracle 12.2 (release date 2016) recommended during LS2.
- Main new feature: In-Memory database bringing real-time responsiveness; plus bug fixes.
- Replication scheme unchanged.

Additional DB services
- DB on demand: MySQL CE 5.7 (Q1/Q), to stay for the next 3 years; PostgreSQL; Oracle application server for General Services.
- Hadoop service in IT: set up and run the infrastructure, provide consultancy, build the community.

DCS conditions DB
- Holds mainly the status of the detector parts at any time; most of the conditions data is produced by the DCS.
- All LHC experiments use Oracle DBs with WinCC OA schemas for primary storage of DCS conditions data (pure relational).
- Transfer/conversion to offline conditions:
  - ALICE: WinCC OA -> ADAPOS (ALICE datapoint server) -> data stream
  - ATLAS: WinCC OA -> COOL
  - CMS: WinCC OA -> O2O -> CMS offline conditions format (ConDB v2) / CORAL
  - LHCb: WinCC OA -> DB Writer -> COOL

ALICE DB infrastructure layout (diagram): DCS cluster (~100 DB clients), Run 2 DB service with primary DB and standby DB, and a DB replica in IT; DCS and GPN networks.

DAQ conditions DB
- Holds mainly the condition of the DAQ system during data taking.
- ALICE: using the O2 infrastructure to add conditions to the data stream.

ATLAS DAQ conditions DB
- ATLAS currently uses the LCG COOL database; this might be replaced by a new system.
- The conditions DB server provides only a REST API; a Squid cache is used for scalability; no relational DB libraries are needed on the clients (a hedged client sketch follows below).
- Simplified database schema based on the architecture of the CMS conditions DB: a few tables instead of O(1K).
- The payload (a BLOB) is opaque to the conditions server, so payload queries cannot be performed.
- Open-ended IoVs (no "until"), hence no holes in intervals, in contrast to COOL.
- No channels; they are handled inside the payload.
- Timeline: 2016 prototype, 2017 development and migration, 2018 tests and validation.
- Architecture (diagram): a DB client with a REST HTTP client talks to the CondDB server (web interface, persistency module, security module) through a Squid cache; the server talks to an RDBMS or other back-ends; scalable read-only access via the Squid cache.
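A hedged sketch of what such a REST-only client could look like, assuming a Squid-fronted server: the base URL, endpoint paths, parameters and JSON fields below are invented for illustration and are not the actual ATLAS conditions API.

```python
# Hypothetical REST conditions client: no relational DB libraries, only HTTP.
import json
import urllib.request

BASE_URL = "http://conddb-cache.example.cern.ch"   # hypothetical Squid front-end

def get_payload(tag: str, run: int, lumiblock: int) -> bytes:
    """Fetch the payload valid for (run, lumiblock) under the given tag."""
    # 1) resolve the open-ended IoV covering the requested point in time
    since = (run << 32) | lumiblock
    url = f"{BASE_URL}/iovs?tag={tag}&since={since}"
    with urllib.request.urlopen(url) as resp:
        iov = json.load(resp)[0]        # latest IoV with since <= requested point
    # 2) fetch the opaque BLOB; the server never looks inside it
    with urllib.request.urlopen(f"{BASE_URL}/payloads/{iov['payload_hash']}") as resp:
        return resp.read()

# Interpretation of the payload (JSON, ROOT, custom format, ...) is entirely up
# to the client, which is what "opaque to the conditions server" implies.
```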

DAQ conditions DB: CMS
- Run control (Java based) writes directly to Oracle, partly using name-value pairs (a sketch of this pattern follows below).
- The core DAQ software (XDAQ) provides a DB abstraction layer for writing to the DB, enforcing a normalized relational structure.
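A minimal sketch of the name-value-pair approach, with invented table and column names and sqlite3 standing in for Oracle: one generic table holds heterogeneous settings, trading schema-level integrity for flexibility.

```python
# Name-value-pair pattern: generic storage, integrity left to the application.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE runcontrol_config (
    run_number INTEGER NOT NULL,
    component  TEXT NOT NULL,
    name       TEXT NOT NULL,
    value      TEXT NOT NULL,             -- everything stored as a string
    PRIMARY KEY (run_number, component, name))""")

settings = [
    (273158, "EventBuilder", "numberOfBuilders",  "8"),
    (273158, "EventBuilder", "fragmentTimeoutMs", "500"),
    (273158, "HLT",          "menuKey",           "2016_v4"),
]
db.executemany("INSERT INTO runcontrol_config VALUES (?, ?, ?, ?)", settings)

# Readback is generic, but type information and referential integrity have to
# be enforced by the application; an abstraction layer enforcing a normalized
# structure (as XDAQ does) avoids exactly that burden.
for name, value in db.execute(
        "SELECT name, value FROM runcontrol_config WHERE run_number=? AND component=?",
        (273158, "EventBuilder")):
    print(name, value)
```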

DAQ conditions DB: LHCb
- The ECS (WinCC OA) writes to Oracle via a DB Writer process.

DCS configurations DB
- Configuration data read and interfaced by the DCS has different scopes in the different experiments.
- ALICE: configuration of classical slow control and parts of the detector front-ends; stored in Oracle, read by proprietary mechanisms.
- ATLAS: uses the JCOP FW tools for configuration.
- CMS: only configuration of classical slow control; uses the JCOP FW confDB component.
- LHCb: the Experiment Control System (ECS) manages the whole experiment (DAQ+DCS); uses the JCOP FW confDB component.

ATLAS DAQ configurations DB
- Used by TDAQ and the sub-detectors (100 MB of data per online configuration).
- Based on OKS (Object Kernel System), supporting an object data model: classes and named objects, attributes of primitive or class types, polymorphic multiple inheritance.
- OKS stores data in XML files supporting inclusion. Data integrity is protected by the OKS server, which keeps the full history of DB commits and restricts write access using a role-based access manager policy.
- Used configurations are archived to Oracle via CORAL for offline use.
- A tree of CORBA servers (RDB) is used for remote access: the root server distributes and controls the current configuration, and the multi-level hierarchy of servers addresses the performance and scalability requirements of O(10^4) clients.
- Several generic schema and data editors exist (Motif and Qt based).

ATLAS DAQ configurations DB (access)
- Programming access is implemented via an abstract config interface with oks-xml, rdb and Oracle-archive implementations.
- On top of the config interface, a number of DALs (data access libraries) are automatically generated from concrete DB schemas (sets of OKS classes) for C++ and Java; the Python binding creates the classes on the fly by reading the DB schema (a hedged sketch of this idea follows below).
- Template algorithms are used for similar objects (DQM, applications/segments) to reduce DB size and improve maintenance.
- The DALs and algorithms are mostly used by user applications accessing the configuration description.
- Architecture (diagram): applications use a DAL on top of the config interface, which talks to the RDB servers, OKS files or the CORAL/Oracle archive.
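A hedged sketch of the idea behind a schema-driven Python binding, where classes are not hand-written but created at runtime from the DB schema. The schema content, class names and attributes below are invented for illustration and are not the actual ATLAS TDAQ config package API.

```python
# Illustrative only: build DAL-like classes on the fly from a schema description.
SCHEMA = {
    "Application": {"Name": "", "BinaryName": "", "Parameters": ""},
    "Segment":     {"Name": "", "Applications": []},
}

def make_dal_class(class_name, attributes):
    """Create a DAL-like class from the schema: one attribute per schema field."""
    def __init__(self, **kwargs):
        for attr, default in attributes.items():
            setattr(self, attr, kwargs.get(attr, default))
    return type(class_name, (object,), {"__init__": __init__})

# Build the classes once the schema is known ...
dal = {name: make_dal_class(name, attrs) for name, attrs in SCHEMA.items()}

# ... and use them as if they had been generated, as for the C++ and Java DALs.
hlt_app = dal["Application"](Name="HLTSV", BinaryName="hltsv_main")
segment = dal["Segment"](Name="HLT", Applications=[hlt_app])
print(segment.Name, [a.Name for a in segment.Applications])
```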

CMS DAQ configurations DB
- Run-control configuration is read by run control (Java) directly.
- Front-end configuration is read from Oracle using the XDAQ DB abstraction mechanism XDATA, a library for reflecting C++ data structures in various data serialization formats.

LHCb DAQ configuration DB
- LHCb's Experiment Control System (ECS), based on the WinCC OA SCADA, controls the whole experiment: trigger, front-end hardware control, readout supervisors, event-builder control, HLT, monitoring, storage.
- Configuration of all the sub-systems is treated in a similar manner.

LHCb DAQ configuration DB (recipes)
- DAQ configuration is done using the standard JCOP tools (fwConfDB); configurations are stored in an Oracle DB (the configuration DB).
- Configurations are stored as named recipes: recipe types define, for each configurable device type, which settings are to be stored in a recipe; recipes implement the recipe types with the valued parameters.
- Recipe names follow a convention: a hierarchy of activity types (e.g. "PHYSICS|pA|VdM").
- Recipes are applied automatically according to the currently set activity in run control: recipes to be loaded are matched by name to the set activity and follow the activity-type hierarchy. For example, if the set activity is "PHYSICS|pA|VdM", the devices to be configured check whether a recipe matching that name exists and apply it; if not, they check for "PHYSICS|pA", and if that does not exist either, they check for one named "PHYSICS" (a sketch of this fallback follows below).
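A short sketch of that fallback: walk the activity hierarchy from the most specific name to the most general until a recipe exists. The activity names come from the example above; the lookup table is invented for illustration.

```python
# Hierarchical recipe lookup along the "|"-separated activity name.
def find_recipe(activity, available):
    """Return the recipe matching the activity, falling back along the hierarchy."""
    parts = activity.split("|")
    while parts:
        name = "|".join(parts)
        if name in available:
            return available[name]
        parts.pop()          # drop the most specific level and retry
    return None              # no recipe at any level of the hierarchy

recipes = {
    "PHYSICS":    "default physics settings",
    "PHYSICS|pA": "proton-lead settings",
}

# "PHYSICS|pA|VdM" has no dedicated recipe, so the "PHYSICS|pA" one is applied.
print(find_recipe("PHYSICS|pA|VdM", recipes))   # -> proton-lead settings
```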

Summary
- Databases are a vital part of the LHC experiments.
- The chosen technology (Oracle RAC) fits the online DBs well and will continue to be used.
- The current DB implementations have proven to work well.
- The service will continue as-is in Run 3, with minor changes.