European Organization for Nuclear Research
Future Database Requirements in the Accelerator Sector
Ronny Billen, Beams Department, Controls Group, Data Management Section
Database Futures Workshop, 6-7 June 2011

Outline
- Data Managed in the Accelerator Sector
- Past Accelerator Logging Experience
- Current Accelerator Logging Experience
- Future Logging Requirements
- How Realistic is this Future?

Data Managed in the Accelerator Sector

Aspect                 | Configuration Data                             | Logging Data
Purpose                | Reality modeling for exploitation              | Live tracking of data over time
Data model             | Complex to very complex: >100 objects          | Simple: time series; values may be complex (events)
Data interdependency   | High: many relations and constraints           | Low: few relations and constraints
Data evolution         | Quite static; history of changes               | Very dynamic; continuous growth
Data volumes           | Small: <10 GB                                  | Huge: >100 TB
Topics                 | Hardware installations; controls &             | Hardware/beam commissioning; equipment
                       | communications; operational parameters;        | monitoring; beam measurements;
                       | alarm definitions                              | Post-Mortem events
Data criticality       | Low to very high; data integrity is essential  | High; data correctness not guaranteed
Current implementation | Oracle RDBMS: continue to implement this way   | Oracle RDBMS: the main question and
                       |                                                | worry for the future
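To make the contrast concrete, here is a minimal sketch of the two shapes of data side by side: a relational configuration model held together by keys and constraints, next to a narrow, append-only time-series table. The schemas, table names and columns are hypothetical illustrations (not the actual CERN designs), and Python's built-in sqlite3 merely stands in for the Oracle RDBMS.

```python
import sqlite3

# Hypothetical schemas for illustration only; not the actual CERN designs.
db = sqlite3.connect(":memory:")
db.executescript("""
-- Configuration data: relational, constrained, slowly changing.
CREATE TABLE equipment (
    equipment_id INTEGER PRIMARY KEY,
    name         TEXT NOT NULL UNIQUE,
    location     TEXT NOT NULL
);
CREATE TABLE parameter (
    parameter_id INTEGER PRIMARY KEY,
    equipment_id INTEGER NOT NULL REFERENCES equipment(equipment_id),
    name         TEXT NOT NULL,
    unit         TEXT NOT NULL,
    UNIQUE (equipment_id, name)
);

-- Logging data: a narrow, append-only time series with a huge row
-- count and few constraints, accessed by (parameter, time window).
CREATE TABLE logging (
    parameter_id INTEGER NOT NULL,
    stamp_utc    REAL NOT NULL,
    value        REAL               -- correctness not guaranteed at source
);
CREATE INDEX ix_logging ON logging (parameter_id, stamp_utc);
""")
```

The point of the contrast: integrity constraints dominate the configuration side, while sheer row count and the (parameter, time) access pattern dominate the logging side.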

R. Billen  LEP Logging  Purpose: Centralized storage and accessibility of data acquisitions of interest over time (beam and equipment)  Initial idea to keep one year up to a few years of useful data  Initial estimation of the very large database: 8GB/year  Implementation started with Oracle 6  Provided a generic GUI to visualize and extract data  Pushed by the end-users, this evolved into:  Short-term Measurement DB  Long-term Logging database  Spinoff LEP RF Measurement database  Data exploited several years after the LEP stop  The grand total of 266GB of data still available Past Accelerator Logging Experience 6-Jun-2011Database Futures Workshop4

R. Billen  LHC Logging  Project start in 2001  Design based on experience and technical progress  Initial idea to keep useful data on-line for the lifetime of LHC  Provided a generic GUI to visualize and extract data  Provided Java API, used extensively for data analysis  Evolution of estimations for LHC steady-state  Today’s outstanding request: Store more, analyze faster Current Accelerator Logging Experience 6-Jun-2011Database Futures Workshop5 Year of Estimation Estimated volume TB/year TB/year TB/year TB/year

R. Billen  The 3 TeV CLIC example  50km installation, ~ data acquisition channels  50Hz synchronous data (20ms cycle base)  Raw data +100GB/s i.e. per year in the Exabyte range  Estimated yearly storage requirement in the Petabyte range  Hypothesis  200 days/year  Standard reduction by removing redundancy and zero suppression  Average 100 byte/channel  Several logging policies  Sample size is equipment dependent (RF equipment, Beam Instrumentation,…) Future Logging Requirements 6-Jun-2011Database Futures Workshop6 Logging Policy Retention Time Filtering function Samples Very Short Term 7000s All pulses 350’000 Short Term4 days 1s averages 350’000 Medium Term40 days 10s averages 350’000 Long Term2 years 1.5 min averages 350’000 Very Long Term 20 years 15 min averages 350’000 Post-Interlock Snapshots Short Term 4 days All interlock events 700’000 Post-Interlock Snapshots Long Term 20 years 1/100 interlock events 700’000 Courtesy M. Jonker Can be challenged!!!

R. Billen  Storage requirements in the range Petabyte-Exabyte  Extrapolating from the past   Will we be able to handle this?  Data storage  Data retrieval  Data Backup  Technical Solutions  Database powered  Experiment-type file-based solutions  The issue is not to be addressed in 10 years…but right now…  …which brings me to the important part of this presentation: The Discussion How realistic is this future? 6-Jun-20117Database Futures Workshop Year Accelerator Operation Required Storage 2000LEP250 GB 2010LHC50 TB 2020HL-LHC10 PB 2030 CLIC HE-LHC 2 EB x 200