LHCb Conditions Database
TEG Workshop, 7 November 2011
Marco Clemencic (marco.clemencic@cern.ch)

Overview
- LHCb Conditions Database
- Deployment Model: planned, actual, future
- Considerations
- Conclusions

CondDB Deployment Model (plan)
- Oracle servers at T0, the PIT, and the T1s
- Oracle Streams replication: CERN IT ⇄ PIT (cross replication), CERN IT → T1s

The plan was based on the original computing model (reconstruction at T0/T1s, simulation at T2s), using COOL as the CondDB library, with CORAL underneath to connect to the databases.
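
As an illustration of the access layer, here is a minimal sketch of reading one condition through the COOL Python bindings (PyCool). The connection string, folder path and payload key are hypothetical, not LHCb's real ones:

```python
# Minimal PyCool read sketch; folder path and payload key are made up.
from PyCool import cool

dbSvc = cool.DatabaseSvcFactory.databaseService()
# Hypothetical connection string; the real ones point to the Oracle
# servers at T0/T1s or to an SQLite snapshot shipped with the job.
db = dbSvc.openDatabase('sqlite://;schema=LHCBCOND.db;dbname=LHCBCOND', True)

folder = db.getFolder('/Conditions/Ecal/Calibration')  # hypothetical folder
when = 1320624000000000000        # event time in ns, used as ValidityKey
obj = folder.findObject(when, 0)  # object valid at 'when' in channel 0
print(obj.payload()['data'])      # hypothetical payload field
```

The same client code runs against Oracle, SQLite or Frontier backends; only the connection string differs, which is what makes the deployment-model changes below transparent to the applications.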

CondDB Deployment Model (real)
- Oracle for first-pass reconstruction (required for the Online conditions)
- SQLite for analysis and reprocessing
- Tier-2s joining first-pass reconstruction

Oracle is currently used only during data taking, for first-pass reconstruction. SQLite is used whenever possible, including on the Filter Farm at the PIT. We are investigating the use of Tier-2 CPU resources for reconstruction, which would require direct access to the Oracle servers at the T1s. The backend split is visible only in the connection strings (see the sketch below).
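
A hedged illustration of how the different backends appear to CORAL/COOL; the server, schema and database names are invented, and the exact string syntax may differ from LHCb's production setup:

```python
# Illustrative COOL/CORAL connection strings (all names hypothetical).

# Oracle at T0/T1, used during data taking for first-pass reconstruction:
oracle_conn = 'oracle://lhcb-cond-server;schema=LHCB_COND;dbname=LHCBCOND'

# Local SQLite snapshot, used for analysis, reprocessing and the PIT farm:
sqlite_conn = 'sqlite://;schema=LHCBCOND.db;dbname=LHCBCOND'

# Frontier, reading the same Oracle data through an HTTP cache hierarchy:
frontier_conn = 'frontier://lhcb-frontier.example.cern.ch;schema=LHCB_COND;dbname=LHCBCOND'
```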

Why the change? Problems with Oracle:
- Authentication: no X509 proxy certificates
- Replication: lack of feedback, no control
- Support: licensing, heavy infrastructure, maintenance

Authentication: the missing support for X509 proxies forced us either to drop authentication or to use alternative channels, such as the LFC. Unfortunately the LFC could not cope, so we now use the DIRAC Configuration System (sketched below). Replication: several times we hit problems with replication that had not yet completed; we need to know when the conditions are available everywhere, possibly integrating that information with job scheduling. Support: T2s contributing CPUs cannot afford Oracle licenses, administrators, servers, and so on, and would instead need direct access to the Oracle servers at the T1s.
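
A minimal sketch of the credential-lookup idea, assuming the credentials are published in the DIRAC Configuration System; the CS path and option names are hypothetical:

```python
# Hedged sketch: fetch CondDB credentials from the DIRAC Configuration
# System instead of authenticating with an X509 proxy. The CS path and
# option names below are illustrative, not LHCb's real layout.
from DIRAC import gConfig

def conddb_credentials(site: str) -> dict:
    base = f'/Resources/CondDB/{site}'  # hypothetical CS path
    return {
        'server':   gConfig.getValue(f'{base}/Server', ''),
        'user':     gConfig.getValue(f'{base}/User', ''),
        'password': gConfig.getValue(f'{base}/Password', ''),
    }
```

Reading the values through DIRAC means access can still be gated by the user's grid identity at the DIRAC level, even though the database itself no longer sees a proxy certificate.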

Alternatives to Oracle
- SQLite: originally meant for disconnected use; deployment model not dynamic; relatively little work needed to improve it; cannot scale forever (though the limit is still far)
- Frontier: better scalability (just add a proxy); requires tuning work

SQLite has always been used by LHCb, both before the Oracle setup existed and for disconnected analysis. Its deployment model was tuned for the originally foreseen use cases and needs a review to make it more dynamic; unfortunately, distributing very large database files is problematic. Frontier performs better than direct Oracle access (a CORAL plugin problem?) and is much easier to scale up, since its effectiveness rests on HTTP caching (see the sketch below).
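
Frontier caches query results in web proxies keyed on the exact query text, which is why the tuning work mentioned above matters: two jobs share a cached answer only if they send identical queries. A hedged pseudo-example of the principle (COOL generates the real SQL itself; table and column names here are invented):

```python
# Why Frontier needs "cache-friendly" queries: results are cached per
# distinct query string, so per-job values in the query defeat the cache.

# BAD: embedding each job's own event time makes every query unique,
# so nothing is ever served from the proxy cache.
def iov_query_uncacheable(event_time_ns: int) -> str:
    return ('SELECT since, until, payload FROM CONDDB_IOVS '
            f'WHERE since <= {event_time_ns} AND until > {event_time_ns}')

# BETTER: quantise the lookup window (granularity here is arbitrary) so
# that all jobs in the same window issue one identical, cacheable query.
HOUR_NS = 3_600 * 10**9

def iov_query_cacheable(event_time_ns: int) -> str:
    bucket = (event_time_ns // HOUR_NS) * HOUR_NS
    # Fetch every IOV overlapping the whole window; the client then
    # selects the one covering its exact event time locally.
    return ('SELECT since, until, payload FROM CONDDB_IOVS '
            f'WHERE until > {bucket} AND since < {bucket + HOUR_NS}')
```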

Plans (near future)
- Change the SQLite deployment model: pull instead of push; local caches; automatic update of the Online conditions
- Prepare the adoption of Frontier: use cache-friendly queries; analyze the access pattern; tune the cache parameters

The updated SQLite deployment model is already available and will be in production within the next few weeks (a sketch of the pull idea follows). The adoption of Frontier was planned for the summer but has been delayed for lack of manpower. SQLite will cover the short-term issues, Frontier the long-term ones.
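
A minimal sketch of the pull model for SQLite snapshots, assuming each site polls a small version file and downloads a new snapshot only when it changes; the URL, file names and version scheme are hypothetical:

```python
# Hedged sketch of a pull-based local cache for SQLite CondDB snapshots.
# URL, file names and the version-file scheme are invented for illustration.
import shutil
import urllib.request
from pathlib import Path

BASE_URL = 'http://conddb.example.cern.ch/snapshots'  # hypothetical
CACHE = Path('/opt/conddb-cache')

def remote_version() -> str:
    # Polling a tiny version file is cheap, unlike fetching the snapshot.
    with urllib.request.urlopen(f'{BASE_URL}/LHCBCOND.version') as resp:
        return resp.read().decode().strip()

def update_snapshot() -> Path:
    CACHE.mkdir(parents=True, exist_ok=True)
    version_file = CACHE / 'LHCBCOND.version'
    local = version_file.read_text().strip() if version_file.exists() else ''
    remote = remote_version()
    if remote != local:
        # Download to a temporary name, then rename, so that running jobs
        # never open a half-written database file.
        tmp = CACHE / 'LHCBCOND.db.part'
        with urllib.request.urlopen(f'{BASE_URL}/LHCBCOND.db') as resp, \
                open(tmp, 'wb') as out:
            shutil.copyfileobj(resp, out)
        tmp.rename(CACHE / 'LHCBCOND.db')
        version_file.write_text(remote)
    return CACHE / 'LHCBCOND.db'
```

Pulling on a schedule at each site, instead of pushing from the centre, also gives the "eventual consistency" discussed later: every cache converges to the central copy without central coordination.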

CondDB Deployment Model (next)
- Oracle to write the Online conditions at the PIT
- SQLite, with the new distribution model, for everything else

We will still use Oracle to write conditions at the PIT and replicate them to the T0 database.

Considerations
- Is Oracle really needed? MySQL/PostgreSQL solutions could be adopted more easily (at T1s, T2s, …), and open-source tools tend to use open-source databases.
- We need better control over the replication process: pull instead of push (as Frontier does). Is "eventual consistency" better than "hoped consistency"?

Conclusions
- Oracle: more a problem than a solution
- High availability: local replicas
- Synchronization: "pull" rather than "push"
- Migration? No, thanks (not yet, at least)