CERN IT Department, CH-1211 Genève 23, Switzerland – www.cern.ch/it – Future plans for CORAL and COOL – Andrea Valassi (IT-ES), for the Persistency Framework team.

Presentation transcript:

Future plans for CORAL and COOL – Andrea Valassi (IT-ES), for the Persistency Framework team – Database Futures Workshop, 7th June 2011

Outline
– Overview
– COOL
– CORAL
– CORAL Server
– Oracle client issues
– Conclusions

Overview of components

CORAL
– Abstraction of access to relational databases: support for Oracle, MySQL, SQLite, Frontier
– Generic interfaces (no schema/query design)
– Used directly, or indirectly via COOL/POOL (specific applications, with schema/query design)

COOL
– Conditions database (relational)
– Conditions object metadata (interval of validity, version)
– Conditions object data payload (user-defined attributes)

POOL
– Object streaming and metadata catalogs (ROOT/relational)
– Event collections: tags database (ATLAS)
– Object-relational mapping (CMS, no longer used)
– Event storage to ROOT files (not relevant for this workshop)
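The core CORAL idea above (one technology-independent API, with back-end plugins selected by the connection string) can be sketched as follows. This is a minimal illustrative sketch, not the real CORAL C++ API: the class names, the registry, and the scheme names are assumptions made for this example only.

```python
# Sketch of CORAL-style technology dispatch: the scheme of the connection
# string selects a back-end plugin. All names here are illustrative.

class OracleAccess:
    """Stand-in for a plugin wrapping the Oracle (OCI) client."""
    def __init__(self, target):
        self.target = target

class SQLiteAccess:
    """Stand-in for a plugin wrapping the SQLite client."""
    def __init__(self, target):
        self.target = target

# Registry of available back-end plugins, keyed by connection-string scheme.
PLUGINS = {"oracle": OracleAccess, "sqlite_file": SQLiteAccess}

def connect(connection_string):
    """Resolve 'technology://target' to an instance of the matching plugin."""
    technology, sep, target = connection_string.partition("://")
    if not sep or technology not in PLUGINS:
        raise ValueError("no plugin for connection string: " + connection_string)
    return PLUGINS[technology](target)

session = connect("sqlite_file://conditions.db")
print(type(session).__name__, session.target)  # SQLiteAccess conditions.db
```

The point of the design is that experiment code written against the generic `connect()` interface is unchanged when the back-end technology is swapped, which is exactly why the ATLAS migration to Frontier (mentioned in the conclusions) was painless at the software level.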

Overview of usage and architecture

CORAL is used in ATLAS, CMS and LHCb by many of the client applications that access physics data stored in Oracle (e.g. conditions data). Oracle data are accessed directly on the master DBs at CERN (or their Streams replicas at Tier1 sites), or via Frontier/Squid or CoralServer.

[Architecture diagram: experiment C++ code (independent of the DB choice) uses the COOL and POOL C++ APIs, or the technology-independent CORAL C++ API directly. CORAL plugins connect to the back-ends: OracleAccess (OCI C API) to the Oracle DB; SQLiteAccess (SQLite C API) to an SQLite DB file; MySQLAccess (MySQL C API) to a MySQL DB; FrontierAccess (Frontier API, http via a Squid web cache) to a Frontier web server, which reaches Oracle via JDBC; CoralAccess (coral protocol, via a CORAL proxy cache) to the CORAL server. The XMLLookupSvc/XMLAuthSvc plugins read DB lookup and authentication XML files; the LFCReplicaSvc plugin (DB lookup and authentication via an LFC server) is no longer used.]

CORAL is now the most active of the three Persistency packages: it is closer to the lower-level services used by COOL and POOL.

Details of CORAL, COOL, POOL usage

Persistency Framework usage in the LHC experiments:

ATLAS
– Data: conditions data (COOL), geometry data (detector description), trigger configuration data, event collections/tags (POOL)
– CORAL (Oracle, SQLite, XML authentication and lookup); COOL; POOL (collections – ROOT/relational)
– CORAL + Frontier (Frontier/Squid): conditions data (R/O access in Grid)
– CORAL Server (CoralServer/CoralServerProxy): conditions, geometry, trigger (R/O access in HLT)
– CORAL + LFC (LFC authentication and lookup): conditions data (authentication/lookup in Grid, only until 2010)

CMS
– Data: conditions data, geometry data (detector description), trigger configuration data
– CORAL (Oracle, SQLite, XML authentication and lookup)
– CORAL + Frontier (Frontier/Squid): conditions, geometry, trigger (R/O access in Grid, HLT, Tier0)
– POOL (relational storage service): conditions, geometry, trigger (only until 2010)

LHCb
– Data: conditions data (COOL), event collections/tags
– CORAL (Oracle, SQLite, XML authentication and lookup); COOL

Notes:
– CORAL, COOL and POOL are a joint development of IT-ES, ATLAS, CMS and LHCb
– For POOL, only the components using relational databases, relevant for this workshop, are included
– CORAL and COOL are also used by non-LHC experiments (Minerva at FNAL, NA62 at CERN)

Overview of future plans

Software maintenance
– Regular software releases of the stack (~one per month)
– New platforms/compilers, new external software versions, including selection/installation/patching of the Oracle (OCI) client
– Infrastructure: repository, build, test (e.g. SVN, CMake, Valgrind…)

Operation and support
– User support and bug fixing
– Debugging of complex service issues (~from a client perspective)

A few enhancements and new features
– See the next slides for details on CORAL and COOL
– About POOL: support may be transferred to ATLAS if LHCb drops it

Plans for COOL

Performance optimizations and related new features
– e.g. speed up BLOB access through the Python API (PyCool)
– e.g. a new API (and SQL queries) for fetching the first/last IOV in a tag

All access (read/write) to the COOL database goes via the API
– No plans for significant SQL/database performance optimizations: major optimizations were done a few years ago (queries rewritten, indexes…), and hints were added (in dynamic query creation) to stabilize execution plans

New software features and database schema extensions
– e.g. COOL user control over CORAL sessions and transactions
– e.g. better storage of 'vector' payload for IOVs and of DATE types
– Some of these are already coded but need to be tested/released

In summary: fewer tasks are planned for COOL than for CORAL.
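The "first/last IOV in a tag" feature above refers to COOL's interval-of-validity data model, where each payload is valid in a half-open time range [since, until). The toy sketch below illustrates that model only; the helper names are hypothetical, and the real COOL API is C++ with PyCool bindings on top.

```python
# Toy sketch of COOL-style IOV lookup; names are illustrative, not COOL's API.
from bisect import bisect_right

# IOVs in one tag, ordered by 'since': (since, until, payload).
iovs = [(0, 10, "align-v1"), (10, 25, "align-v2"), (25, 40, "align-v3")]

def find_iov(iovs, t):
    """Return the IOV covering time t, or None if t falls in a gap."""
    starts = [since for since, _, _ in iovs]
    i = bisect_right(starts, t) - 1
    if i >= 0 and t < iovs[i][1]:
        return iovs[i]
    return None

def first_iov(iovs):
    """Earliest IOV in the tag (the planned API would fetch this directly)."""
    return iovs[0] if iovs else None

def last_iov(iovs):
    """Latest IOV in the tag (the planned API would fetch this directly)."""
    return iovs[-1] if iovs else None

print(find_iov(iovs, 12))  # (10, 25, 'align-v2')
print(last_iov(iovs))      # (25, 40, 'align-v3')
```

In a relational back-end the first/last lookups would map to dedicated SQL (e.g. a MIN/MAX over the 'since' column) rather than a scan, which is presumably why new SQL queries are mentioned alongside the new API.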

Plans for CORAL

Reconnection after network/database glitches
– Details on the next slide

More robust tests (spot bugs before users report them)
– Until now, testing has relied heavily on the COOL test suite

Performance studies and optimizations
– e.g. reduce data dictionary queries in the Oracle plugin; compare Oracle, Frontier and CoralServer to find any other such issues
– Also related to CORAL's handling of (serializable) read-only transactions

Later: enhance/redesign the monitoring functionality
– Interest from ATLAS online (CORAL server) and from CMS too
– In practice: better code instrumentation for DB queries is needed (see Cary Millsap's recommendations at his seminar last week!)

In parallel: a few minor feature enhancements
– Support for sequences, FKs in SQLite, multi-schema queries…
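The "better code instrumentation for DB queries" point can be sketched as a thin wrapper that records each statement and its elapsed time, so that slow or redundant queries (such as repeated data dictionary lookups) show up in the collected records. This is an illustrative sketch under assumed names, not the actual CORAL monitoring interface.

```python
# Sketch of per-query instrumentation: record (statement, elapsed seconds)
# for every executed query. Names are illustrative only.
import time

class QueryMonitor:
    def __init__(self):
        self.records = []  # one (statement, seconds) entry per executed query

    def execute(self, run_query, statement):
        """Run a query through the monitor, timing it even if it raises."""
        t0 = time.perf_counter()
        try:
            return run_query(statement)
        finally:
            self.records.append((statement, time.perf_counter() - t0))

monitor = QueryMonitor()
result = monitor.execute(lambda sql: sql.upper(), "select count(*) from user_tables")
print(result)
print(len(monitor.records))  # 1
```

Aggregating such records by statement text is enough to spot a plugin issuing the same dictionary query hundreds of times, which is the kind of analysis the comparison between Oracle, Frontier and CoralServer back-ends would rely on.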

CORAL "network glitch" issues

Different issues reported by all experiments
– e.g. ORA "need explicit attach" in ATLAS/CMS (bug #58522): fixed with a workaround in CORAL (released in LCG 59b)
– e.g. OracleAccess crash after losing the session in LHCb (bug #73334): fixed in the current CORAL candidate (see below); similar crashes can also be reproduced with all the other plugins

Work in progress for a few months (A. Kalkhof, R. Trentadue, A.V.)
– Catalogued the different scenarios and prepared tests for each of them
– Prototyped implementation changes in ConnectionSvc and the plugins

Current priority: fix crashes when using a stale session
– May be caused both by a network glitch and by user code (bug #73834)!
– A major internal reengineering of all plugins is needed (replace references to SessionProperties by shared pointers); done for OracleAccess (single-threaded) in the candidate, pending for the other plugins. The patch fixes single-thread issues; MT issues are still being analyzed

Next: address actual reconnection after network glitches
– e.g. a non-serializable R/O transaction: should reconnect and restart it
– e.g. DDL not committed in an update transaction: nothing can be done
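The distinction in the last two bullets is the key design point: a non-serializable read-only query can safely be restarted on a fresh session after the old one went stale, whereas an update transaction with uncommitted changes cannot be transparently replayed. A minimal sketch of that retry policy, with all names hypothetical:

```python
# Sketch of reconnect-and-restart for read-only work after a stale session.
# Illustrative only; not the CORAL ConnectionSvc implementation.

class StaleSession(Exception):
    """Raised when the underlying connection was lost (network glitch)."""

def run_readonly(connect, query, retries=1):
    """Run a read-only query, reconnecting and restarting once on a glitch."""
    for attempt in range(retries + 1):
        session = connect()  # open (or re-open) a session
        try:
            return query(session)
        except StaleSession:
            if attempt == retries:
                raise  # still failing on a fresh session: give up

# Simulated back-end: the first session is stale, the second one works.
sessions = iter([StaleSession(), "payload"])
def fake_connect():
    return next(sessions)
def fake_query(session):
    if isinstance(session, Exception):
        raise session
    return session

print(run_readonly(fake_connect, fake_query))  # payload
```

An update path would instead have to surface the error to the caller, since the server has already rolled back whatever the lost session had not committed.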

CoralServer in ATLAS online

CoralServer has been deployed for the HLT since October 2009
– Smooth integration; used for LHC data taking since then
– No problems except for the handling of network glitches

Plans for CORAL server

Support usage in the ATLAS online system
– Requests and plans are in line with the more general CORAL needs: a fix for the network glitch issue; CORAL monitoring of DB queries (both in the server and in the proxies); more detailed performance analysis and optimizations

Work on further extensions (e.g. for offline) is now frozen
– Interest from the offline communities is limited or nonexistent: Frontier is used for read-only use cases by both ATLAS and CMS, and there is a likely future synergy with CVMFS for Squid deployment (http)
– CoralServer possibly has a larger potential for extension than Frontier in other use cases, but there is no real need/request for these extended features
– Authentication and authorization via X509 Grid proxy certificates: already in CVS; will be released after cleanup of the Globus integration
– Database update functionality (DDL/DML): won't do
– Disk-resident cache (à la Squid): won't do

Oracle client libraries for CORAL

The Oracle client for CORAL is maintained by the CORAL team
– Separate installation (consistent with the AA 'externals' on AFS)
– Tailor-made contents: e.g. patches to fix the SELinux and AMD quad-core bugs; e.g. sqlnet.ora customization for 11g ADR diagnostics
– Close collaboration with the Physics DB team in IT-DB on these issues

Two open issues in the Oracle client, plus a similar one in Globus
– All three are conflicts with the default Linux system libraries: they should either use the system libraries or use 'versioned symbols'
– Globus redefines gssapi symbols (bug #70641): suggested to use versioned symbols (will be in the 2011 EMI release); workaround: gssapi was disabled in Xerces (used by CORAL)
– The Oracle client redefines gssapi symbols (bug #71416, Oracle SR filed): gssapi symbols in libclntsh.so conflict with libgssapi_krb5.so; suggested to use versioned symbols (Oracle bug opened); no workaround needed so far (the problem has not yet been seen in production…)
– The Oracle client redefines kerberos symbols (bug #76988, Oracle SR filed): krb5 symbols in libclntsh.so conflict with libkrb5.so; suggested to use versioned symbols (Oracle bug opened); workaround: kerberos parameters will be customized in sqlnet.ora
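The 'versioned symbols' fix suggested above works by tagging a library's exported symbols with a version node, so that identically named symbols coming from another library no longer collide at dynamic link time. A minimal GNU ld version script might look like the following sketch (the version node, file and symbol names are illustrative assumptions, not taken from Globus or the Oracle client):

```
/* gss.map -- illustrative GNU ld version script */
MYGSS_1.0 {
    global:
        gss_init_sec_context;   /* exported under version node MYGSS_1.0 */
    local:
        *;                      /* everything else stays library-private */
};
```

The shared library would then be linked with something like `gcc -shared -Wl,--version-script=gss.map -o libmygss.so ...`, after which its gssapi symbols are resolved by version node and no longer shadow those of the system libgssapi_krb5.so.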

Conclusions

ATLAS, CMS and LHCb access Oracle via CORAL
– For conditions data (e.g. COOL in ATLAS/LHCb) and more

Support for many back-ends allows many options
– The ATLAS switch to Frontier was painless from a software point of view

The highest load comes from software and service support
– Releases, Oracle client issues, debugging of complex service issues
– A few new developments too, in parallel

Thanks to the Physics DB team in IT-DB and to the experiment users and DBAs for their help and very good collaboration!

Reserve slides

CoralServer secure access scenario

For comparison, if authentication uses the LFC replica service:
– Credentials are stored in the LFC server (here: in the CORAL server)
– Credentials are retrieved onto the client by the LFC plugin (here: they stay in the CORAL server)
– Credentials are sent directly by the client to Oracle (here: they are sent by the CORAL server)
– In both cases the credentials for Oracle authentication are a username and password: the Oracle server has no support for X509 proxy certificates. Could Kerberos authentication on the Oracle server be tried otherwise?