CORAL Server & CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

Presentation transcript:

CORAL Server & CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications
A. Valassi, A. Kalkhof (CERN IT-ES); M. Wache (University of Mainz / ATLAS); A. Salnikov, R. Bartoldus (SLAC / ATLAS)
CHEP 2010 (Taiwan), 19th October 2010

Outline
Introduction
– ATLAS High Level Trigger specific requirements
– Broader motivation for a general-purpose CORAL middle tier
Development and deployment status
– Successful experience with the ATLAS HLT
– Development outlook
Conclusions

Introduction
CORAL is used by most applications accessing LHC physics data stored in relational databases
– Important example: the conditions data of ATLAS, LHCb and CMS
– Oracle is the main deployment technology at Tier-0 and Tier-1
Limitations of the classic client/server architecture
– Security, performance, software distribution
– Several issues may be addressed by adding a middle tier
Collaboration of two teams (ATLAS and CERN IT) with two sets of use cases from the start of the project
– Read-only access with caching and multiplexing for the ATLAS HLT
– Secure access with read-write capabilities for generic offline users
– Converged on an open design that may cover both

ATLAS DAQ/HLT architecture
(Architecture diagram.) All HLT nodes (L2 and EF) need to read configuration data from the database (trigger configuration, detector geometry, conditions data) to be able to process events.

HLT requirements for DB access
Every HLT process (up to 8 on each of ~2000 nodes) must read megabytes of data from Oracle
– Too many simultaneous clients for the DB servers
– Too much data to fetch in a short time (hundreds of GB in total)
→ Must reduce both the data volume read from the DB and the number of clients
Positive point: every L2 or EF client needs to retrieve the same data from the database
– The database only needs to send the L2 and EF data once
→ Add an intermediate 'proxy' layer
– Cache the data retrieved from the DB
– Multiplex client connections
– Chain proxies for more scalability

DbProxy and M2O for ATLAS HLT
DbProxy: a proxy implementation using the MySQL protocol
– MySQL was used during HLT commissioning
– Included some useful tools for database client monitoring
– An essential part of TDAQ from 2007 until CoralServer was deployed
M2O: a short-term MySQL-to-Oracle bridge
– Oracle is used by the HLT in production during data taking
– The Oracle protocol is closed and proprietary
– A short-term workaround until CoralServer was fully functional
– An essential part of TDAQ from 2008 until CoralServer was deployed

CoralServer broader motivation
Efficient and scalable use of DB server resources
– Multiplex clients using fewer physical connections to the DB
– Optional caching tier for read-only access (CORAL server proxy), also useful for further multiplexing of logical client sessions
Secure access (R/O and R/W) to DB servers
– Authentication via Grid certificates (Oracle does not support X.509 proxy certificates)
– Hide database ports within the firewall (reduce vulnerabilities)
– Authorization via VOMS groups in Grid certificates
Client software deployment
– CoralAccess client plugin using a custom network protocol
– No need for an Oracle/MySQL/SQLite client installation (see the sketch below)
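
From the client's point of view, only the connection string changes when moving from direct Oracle access to CoralServer. A minimal sketch, assuming the standard CORAL ConnectionService API; the coral:// connection string, host and port shown are illustrative placeholders, not actual deployment values:

#include "RelationalAccess/AccessMode.h"
#include "RelationalAccess/ConnectionService.h"
#include "RelationalAccess/ISessionProxy.h"

int main()
{
  coral::ConnectionService connSvc;
  // Direct access would need the Oracle client installed on this node:
  //   connSvc.connect( "oracle://dbhost/service", coral::ReadOnly );
  // Via CoralServer the CoralAccess plugin is selected instead, and no
  // Oracle/MySQL/SQLite client software is required locally:
  coral::ISessionProxy* session =
    connSvc.connect( "coral://coralserver.cern.ch:40007/service", coral::ReadOnly );
  // ... queries via session->nominalSchema() ...
  delete session;
  return 0;
}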

High-level architecture (e.g. COOL)
(Architecture diagram.) Without the middle tier, user code calls the COOL API and the CORAL API (ConnectionSvc with the OracleAccess plugin), and Oracle OCI traffic crosses the firewall to the Oracle DB server, requiring open ports. With the middle tier, user code and the CoralAccess plugin speak the CORAL protocol to a CoralServer Proxy, which relays the CORAL protocol to the CoralServer; only the CoralServer talks Oracle OCI to the DB server, entirely behind the firewall, with no open Oracle ports.

CoralServer development in 2009
The development of the current code base started in January 2009
– New team and new design, which benefitted from the requirement gathering and earlier developments and prototypes in 2008
– Joint architecture design for server and client; modular components decoupled using abstract interfaces
The priority was to solve the HLT requirements first
– But both offline and online needs were included in the design
– Weekly meetings to keep track of the progress
– Network protocol agreed with the HLT proxy developers
– A few features are specific to the HLT (e.g. transactions)

CoralServer deployment for ATLAS
CoralServer was deployed at Point 1 in October 2009
– Deployment was very smooth; it worked almost immediately, largely thanks to systematic testing during the development process (unit tests, standalone HLT tests, TDAQ farm tests...)
– It provides full read-only functionality
– It simplified authentication handling (single credential store)
– Performance is adequate for the current purposes
It has been used successfully for data taking ever since
– Now an essential part of TDAQ, replacing M2O/DbProxy
– Very stable: only one issue, cured by a restart, on a DB failure (a general problem with CORAL reconnections after DB/network glitches, not specific to CoralServer, which is being worked on)
– Monitoring features are still limited and are being extended
– Adopted by other online systems; interest in R/W features

Deployment model for ATLAS HLT
(Deployment diagram.) A single CoralServer for the ATLAS HLT system
– Two chains of CoralServerProxy instances for the L2 and EF subsystems

Work in progress (at low priority)
Monitoring enhancements for ATLAS HLT
– SQLite prototype, with features similar to the M2O monitoring
Complete secure authentication/authorization
– SSL sockets and VOMS proxy certificates (dependencies on SSL/VOMS versions were only recently sorted out)
– A tool to load Oracle passwords into the server
Further performance tests and optimizations
– Compare to direct Oracle access and Frontier; add a proxy disk cache
Deploy a general-purpose server/proxy at CERN
– A test CoralServer is already deployed for the nightlies
Full read-write functionality
– DML (e.g. insert table rows) and DDL (e.g. create tables)

Conclusions
CoralServer has been successfully used by the ATLAS HLT during data taking since October 2009
– Smoothly deployed and stable during production operation
– Full R/O functionality with data caching and multiplexing
The production software used by the ATLAS HLT was essentially developed in only 9 months
– Modular design and a strong emphasis on testing
– Excellent cooperation between the teams involved
Work in other areas is (slowly) progressing
– Enhanced monitoring, secure access, R/W functionality...
– The current priority during data taking is experiment support and service operation for CORAL, COOL and POOL

Reserve slides

CORAL DB access plugins
(Architecture diagram.) The C++ code of the LHC experiments (independent of the DB choice) uses the COOL C++ API, the POOL C++ API, or CORAL directly, all on top of the technology-independent CORAL C++ API. The CORAL plugins then provide the concrete access paths:
– OracleAccess: OCI C API to the Oracle DB
– SQLiteAccess: SQLite C API to a SQLite DB (file)
– MySQLAccess: MySQL C API (mysql protocol) to a MySQL DB (no longer used)
– FrontierAccess: Frontier API, http via a Squid (web cache) to a Frontier Server (web server), which reaches the Oracle DB via JDBC
– CoralAccess: coral protocol via a Coral Proxy (cache) to the Coral Server, which reaches the Oracle DB via OCI

S/w architecture components
(Component diagram.) A user application drives CORAL application code (with plugins for Oracle, MySQL...) through the RelationalAccess interfaces. On the client side, the client bridge classes (CoralAccess) invoke the remote call via the ICoralFacade interface; the ClientStub (CoralStubs) marshals the arguments and unmarshals the results; the ClientSocketMgr (CoralSockets) sends the request and receives the reply through the IRequestHandler interface. On the server side, the ServerSocketMgr receives the request and sends the reply; the ServerStub unmarshals the arguments and marshals the results; the ServerFacade (CoralServer) invokes the local call on the RelationalAccess interfaces and returns the local results. The diagram groups the components into packages: the façade and bridge (package 1 a/b), the stubs (package 2) and the socket managers (package 3), with the common abstract interfaces in the thin CoralServerBase package (a sketch of these interfaces follows).
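
To make the decoupling concrete, here is a minimal sketch of what the two abstract interfaces could look like; the signatures and the ByteBuffer type are illustrative assumptions, not the actual CoralServerBase headers:

#include <string>
#include <vector>

typedef std::vector<unsigned char> ByteBuffer;  // opaque serialized message (illustrative)

// Transport-level interface: implemented both by the ServerStub and by the
// ClientSocketMgr, so the stubs never see sockets and vice versa.
class IRequestHandler
{
public:
  virtual ~IRequestHandler() {}
  virtual ByteBuffer replyToRequest( const ByteBuffer& request ) = 0;
};

// Application-level interface: called by the CoralAccess bridge classes and
// implemented both by the ClientStub and by the ServerFacade.
class ICoralFacade
{
public:
  virtual ~ICoralFacade() {}
  virtual int connect( const std::string& dbUrl, bool readOnly ) = 0;  // returns a session id
  virtual ByteBuffer fetchAllRows( int sessionId, const std::string& query ) = 0;
  virtual void disconnect( int sessionId ) = 0;
};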

Incremental tests of applications
(Diagram of four incremental test configurations.)
1. Traditional CORAL 'direct' (OracleAccess): user application → RelationalAccess interfaces → OracleAccess (OCI implementation).
2. Add package 1 (a/b), 'façade only': user application → RelationalAccess interfaces → CoralAccess (bridge from RelationalAccess) → ICoralFacade → ServerFacade (façade to RelationalAccess) → RelationalAccess interfaces → OracleAccess (OCI implementation).
3. Add package 2, 'stub + façade': as in 2, but with the ClientStub (marshal/unmarshal) and ServerStub (unmarshal/marshal) inserted between the bridge and the façade, connected via ICoralFacade and IRequestHandler.
4. Add package 3, 'server' (full chain): as in 3, but with the ClientSocket (send/receive) and ServerSocket (receive/send) inserted between the stubs, connected via IRequestHandler over TCP/IP.

Secure access scenario
(Architecture diagram.) The user code (COOL API, CORAL API, ConnectionSvc, CoralAccess) holds only a DB connection string and an X509 proxy certificate with VO attributes; no DB username or password crosses the firewall. The SSL implementation of the CoralSockets library decodes the proxy certificates using VOMS; inside the CoralServer, a CoralAuthentication service maps the DB connection string and VO attributes to the DB username and password, which OracleAccess/OCI then uses to connect to the Oracle DB server. Experiment admins also need a tool to load into the CORAL server the DB username and password available with given VO attributes (a sketch of this lookup follows).
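
A minimal sketch of the server-side credential lookup described above; the class, names and data structures are hypothetical illustrations, not the actual CoralAuthentication service:

#include <map>
#include <stdexcept>
#include <string>
#include <utility>

struct DbCredentials { std::string username; std::string password; };

// Hypothetical credential store: (connection string, VO attribute) -> credentials.
// Experiment admins would populate it with the loading tool mentioned above.
class CredentialStore
{
public:
  void add( const std::string& dbUrl, const std::string& voAttribute,
            const DbCredentials& creds )
  {
    m_store[ std::make_pair( dbUrl, voAttribute ) ] = creds;
  }
  const DbCredentials& lookup( const std::string& dbUrl,
                               const std::string& voAttribute ) const
  {
    std::map< std::pair<std::string,std::string>, DbCredentials >::const_iterator
      it = m_store.find( std::make_pair( dbUrl, voAttribute ) );
    if ( it == m_store.end() )
      throw std::runtime_error( "no credentials for these VO attributes" );
    return it->second;
  }
private:
  std::map< std::pair<std::string,std::string>, DbCredentials > m_store;
};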

Guidelines of the present design
Joint design of the server and client components
– The system is split into packages 'horizontally' (each package includes both the server-side and the client-side components); the proxy is currently standalone but could be reengineered with this design
– RPC architecture based on an earlier December python/C++ prototype
Different people work in parallel on different packages
– Minimize software dependencies and couplings
– Upgrades in one package should not impact the others
Strong emphasis on functional tests
– Standalone package tests are included in the design, aiming to intercept issues before they show up in system tests
– System tests may be split back into incremental subtests
Components are decoupled using abstract interfaces
– Modular architecture based on object-oriented design
– Thin base package with common abstract interfaces
– Implementations encapsulated in concrete derived classes

A few implementation details
The server is multi-threaded (see the sketch below)
– Threads are managed by the SocketServer component
– One listener thread (to accept new client connections)
– One socket thread per client connection
– A pool of handler threads (many per client connection if needed)
The network protocol was agreed with the proxy developers
– Weekly meetings (ongoing, for regular progress review)
– Most application-level content is opaque to the proxy: the proxy understands transport-level metadata and a few special application-level messages (connect, start transaction...)
– Most requests are flagged as cacheable: they only hit the DB once
– The server may identify (via a packet flag) client connections coming from a proxy and establish a special 'stateless' mode: session multiplexing (cache connect requests, drop disconnect requests), one R/O transaction per session (drop transaction requests), and a 'push all rows' model for queries (no open cursors in the DB)
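
A minimal, self-contained sketch of the threading model described above (illustrative C++11, not the actual CoralSockets/SocketServer code): a shared pool of handler threads consumes the requests that the per-connection socket threads would enqueue.

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Pool of handler threads; socket threads call enqueue() per request.
class HandlerPool
{
public:
  explicit HandlerPool( size_t nThreads )
  {
    for ( size_t i = 0; i < nThreads; ++i )
      m_threads.emplace_back( [this] { this->run(); } );
  }
  ~HandlerPool()  // drain the queue, then stop and join all handlers
  {
    { std::lock_guard<std::mutex> lock( m_mutex ); m_stop = true; }
    m_cv.notify_all();
    for ( auto& t : m_threads ) t.join();
  }
  void enqueue( std::function<void()> request )
  {
    { std::lock_guard<std::mutex> lock( m_mutex ); m_queue.push( std::move( request ) ); }
    m_cv.notify_one();
  }
private:
  void run()
  {
    for ( ;; )
    {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lock( m_mutex );
        m_cv.wait( lock, [this] { return m_stop || !m_queue.empty(); } );
        if ( m_stop && m_queue.empty() ) return;
        task = std::move( m_queue.front() );
        m_queue.pop();
      }
      task();  // handle one client request (e.g. run a query, send the reply)
    }
  }
  std::mutex m_mutex;
  std::condition_variable m_cv;
  std::queue< std::function<void()> > m_queue;
  std::vector<std::thread> m_threads;
  bool m_stop = false;
};

int main()
{
  HandlerPool pool( 4 );  // handler threads, shared across client connections
  // A socket thread per client would enqueue decoded requests like this:
  for ( int i = 0; i < 8; ++i )
    pool.enqueue( [i] { std::cout << "handling request " << i << "\n"; } );
}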

ATLAS HLT proxy specificities
Development has largely focused on the HLT so far
– Very complex or very simple, depending on the point of view
– Some choices specific to the HLT (mainly in the proxy layer) may need to be changed for a general-purpose offline service
Oracle connection sharing was initially disabled
– Not needed for the HLT, thanks to session multiplexing in the proxy, but needed for connection multiplexing in a generic server
– A hang had been observed with connection sharing; the problem was identified as an Oracle multi-threading bug and solved in the 11g client
The HLT needs non-serializable R/O transactions
– Requirement: see data added after the start of a R/O connection, as connections are potentially long
– Presently handled by a hidden environment variable in CORAL; a cleaner way out may be needed (an API extension with 3 transaction modes; see the sketch below)
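
For illustration, the transaction-mode issue can be sketched with the standard CORAL transaction API, under the assumption that start(true) opens a serializable read-only transaction that does not see rows inserted after it started; the hidden environment variable and the proposed third mode are not shown:

#include "RelationalAccess/ISessionProxy.h"
#include "RelationalAccess/ITransaction.h"

// A long-lived HLT reader: with a serializable R/O transaction it would keep
// seeing the database as it was at start(); the HLT instead needs a
// non-serializable R/O mode that also sees later inserts.
void readConditions( coral::ISessionProxy& session )
{
  session.transaction().start( true );  // true = read-only
  // ... queries against session.nominalSchema() ...
  session.transaction().commit();
}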

CORAL proxy application (possible future design)
(Component diagram.) The client side is unchanged: a user application drives the CORAL application (plugins for Oracle, MySQL...) through the RelationalAccess interfaces; the client bridge classes invoke remote calls via ICoralFacade, the ClientStub marshals arguments and unmarshals results, and the ClientSocketMgr sends requests and receives replies via IRequestHandler; on the server side the ServerSocketMgr, ServerStub and ServerFacade return the results as before. The proxy sits in between as another IRequestHandler: its ServerSocketMgr receives requests from the clients and sends back the replies, while a Cache component either replies to a request from the cache or forwards it through the proxy's ClientSocketMgr to the server, caching the forwarded reply (see the sketch below).
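
This design makes the caching behaviour easy to sketch: the proxy cache is just a decorating IRequestHandler. The code below is illustrative only, reusing the hypothetical interface sketched earlier; the real proxy would additionally honour the 'cacheable' packet flag and manage cache eviction:

#include <map>
#include <string>

typedef std::string ByteBuffer;  // stand-in for a serialized message (illustrative)

class IRequestHandler
{
public:
  virtual ~IRequestHandler() {}
  virtual ByteBuffer replyToRequest( const ByteBuffer& request ) = 0;
};

// Decorator: reply from the cache when possible, otherwise forward the
// request upstream (towards the CoralServer) and cache the reply.
class CachingProxyHandler : public IRequestHandler
{
public:
  explicit CachingProxyHandler( IRequestHandler& upstream )
    : m_upstream( upstream ) {}
  virtual ByteBuffer replyToRequest( const ByteBuffer& request )
  {
    std::map<ByteBuffer,ByteBuffer>::const_iterator hit = m_cache.find( request );
    if ( hit != m_cache.end() ) return hit->second;        // reply from cache
    ByteBuffer reply = m_upstream.replyToRequest( request );  // forward request
    m_cache[ request ] = reply;                            // cache forwarded reply
    return reply;
  }
private:
  IRequestHandler& m_upstream;
  std::map<ByteBuffer,ByteBuffer> m_cache;
};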