Database Replication and Monitoring

Presentation transcript:

Database Replication and Monitoring in ATLAS Computing Operations
Suijian Zhou, LCG Database Readiness Workshop, Rutherford, UK, March 23, 2006

The ATLAS Tiers and their roles
Tier-0:
1) Calibration and alignment
2) First-pass ESD, AOD and TAG production
3) Archiving and distribution of RAW, ESD, AOD and TAG data
Tier-1:
1) Storage of RAW, ESD, calibration data, meta-data, analysis data, simulation data and databases
2) Reprocessing of RAW → ESD
Tier-2:
1) Data processing for calibration and alignment tasks
2) Monte Carlo simulation and end-user analysis, both batch and interactive

The ATLAS Databases
- Detector production, detector installation
- Survey data
- Detector geometry
- Online configuration, online run book-keeping, run conditions (DCS and others)
- Online and offline calibrations and alignments
- Offline processing configuration and book-keeping
- Event tag data

Conditions Database of ATLAS
The conditions data comprise nearly all the non-event data produced during the operation of the ATLAS detector, together with the data required to perform reconstruction and analysis. They vary with time and are characterized by an "interval of validity" (IOV). They include:
1) data archived from the ATLAS detector control system (DCS)
2) online book-keeping data, online and offline calibration and alignment data
3) monitoring data characterizing the performance of the detector

ATLAS DB Replication Task
- The conditions DB should be distributed worldwide to support the data processing tasks at Tier-1s and Tier-2s.
- Conditions DB updates (e.g. improved calibration constants) generated worldwide should be brought back to the central CERN-based DB servers for subsequent distribution to all sites that require them.
- To avoid overloading the central Tier-0 server at CERN (thousands of jobs requiring the database at the same time may exhaust the resources of a single DB server or even crash it), slave DB servers need to be deployed at a minimum of 10 Tier-1 sites; see the sketch below.
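As a rough illustration of the intended load distribution, the following sketch (Python, with hypothetical host names) shows a client preferring its local Tier-1 replica and falling back to the central CERN server only if the replica is unreachable. The real ATLAS clients resolve their replicas through their own configuration, so this is only a conceptual sketch, not the actual access logic.

```python
import socket

# Hypothetical Oracle listener endpoints: local Tier-1 replica first,
# central CERN master only as a last resort (names are invented).
CANDIDATE_SERVERS = [
    ("conddb-replica.tier1.example.org", 1521),
    ("conddb-master.cern.example.ch", 1521),
]

def pick_server(candidates, timeout=2.0):
    """Return the first endpoint that accepts a TCP connection."""
    for host, port in candidates:
        try:
            sock = socket.create_connection((host, port), timeout)
            sock.close()
            return host, port
        except OSError:
            continue  # replica down or unreachable, try the next one
    raise RuntimeError("no conditions DB server reachable")

if __name__ == "__main__":
    host, port = pick_server(CANDIDATE_SERVERS)
    print("would open conditions DB connection to %s:%d" % (host, port))
```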

The Conditions DB: COOL
- Interval-of-validity (IOV) based storage and retrieval, with the IOV expressed as a range of absolute times or of run and event numbers.
- Data are stored in folders, which are arranged in a hierarchical structure of foldersets.
- Implemented using the Relational Access Layer (RAL), which makes it possible for a COOL database to be stored in Oracle, MySQL or SQLite.
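The IOV idea can be illustrated without COOL itself. The sketch below uses Python's built-in sqlite3 module to store payloads keyed by a validity range and to look up the payload valid for a given run; the folder name, table layout and payloads are invented for illustration and are not the COOL schema or API.

```python
import sqlite3

# Minimal stand-in for an IOV table: each payload is valid for [since, until).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE iov (
                  folder TEXT, since INTEGER, until INTEGER, payload TEXT)""")

# Two calibration versions for the same folder, covering different run ranges.
conn.executemany("INSERT INTO iov VALUES (?, ?, ?, ?)", [
    ("/Calib/PixelGain", 0,   100, "gain set A"),
    ("/Calib/PixelGain", 100, 200, "gain set B"),
])

def lookup(folder, run):
    """Return the payload whose interval of validity contains 'run'."""
    row = conn.execute(
        "SELECT payload FROM iov WHERE folder=? AND since<=? AND until>?",
        (folder, run, run)).fetchone()
    return row[0] if row else None

print(lookup("/Calib/PixelGain", 42))   # -> gain set A
print(lookup("/Calib/PixelGain", 150))  # -> gain set B
```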

ATLAS DB Replication Strategies (1)
- Conditions data in POOL ROOT format can be replicated using the standard tools of the ATLAS Distributed Data Management (DDM) system, DQ2.
- Small databases such as the Geometry DB can be distributed as MySQL and SQLite replicas.
- Native Oracle Streams replication from Tier-0 → Tier-1s, where data are replicated in 'real time' from the master to the slave databases (applicable to any Oracle data, including event TAG data).

ATLAS DB Replication Strategies (2)
- COOL API-level replication from Oracle → SQLite. The PyCoolCopy tool in PyCoolUtilities (Python-based COOL utilities) allows subsets of COOL folder trees to be copied from one database to another. This is currently 'static'; it will become 'dynamic' in the future.
- CORAL Frontier-based replication. The client translates SQL database requests into HTTP requests; a Tomcat web server interacting with an Oracle database backend returns the query results to the client in the HTTP response. Squid web-proxy cache servers are set up at Tier-0 and the Tier-1s.
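The Frontier idea of turning a database query into a cacheable HTTP request can be sketched as below. The servlet URL, parameter names and encoding are purely illustrative, not the real Frontier protocol; the point is that identical queries map to identical URLs, which is what lets the squid proxies serve them from cache.

```python
import base64
import urllib.parse
import urllib.request

# Hypothetical Frontier servlet URL (real deployment details differ).
FRONTIER_URL = "http://frontier.example.org:8080/Frontier/query"

def frontier_get(sql):
    """Encode an SQL query into an HTTP GET; the servlet runs it against
    Oracle and returns the result set in the HTTP response body."""
    encoded = base64.urlsafe_b64encode(sql.encode()).decode()
    url = FRONTIER_URL + "?" + urllib.parse.urlencode(
        {"encoding": "base64", "p1": encoded})
    with urllib.request.urlopen(url) as response:
        return response.read()

# Repeated calls with the same SQL produce byte-identical URLs, so an
# intermediate squid cache can answer them without hitting Oracle again.
# result = frontier_get("SELECT * FROM calib WHERE run = 1234")
```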

The Octopus Replicator for Database Replication (1)
- It can work between different database backends as long as they contain equivalent schemas (e.g. the ATLAS Geometry Database, the TAG database, etc.).
- It is configured to replicate between Oracle, MySQL and SQLite. It also works with other databases and file formats: Access, MSQL, CJDBC, Excel, Informix, PostgreSQL, XML, etc.
- Other functions include database backup/restore and database synchronization.

The Octopus Replicator for Database Replication (2)
The Octopus Replicator works in two steps:
1) generation of the database schema description and conversion scripts ('generate');
2) the actual database replication itself ('load').
Typical configurations considered for ATLAS tasks (see the sketch below):
- Geometry Database: Oracle → MySQL, Oracle → SQLite
- TAG Database: MySQL → MySQL, MySQL → Oracle, Oracle → MySQL
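The two-step generate/load pattern can be illustrated with a hedged Python sketch that copies one table between two SQLite files. Octopus itself is driven by its own job descriptions and supports many more backends; the file and table names below are invented, and only the principle (extract the schema, then bulk-load the rows) is shown.

```python
import sqlite3

def replicate_table(src_path, dst_path, table):
    """Copy one table between two SQLite files, mimicking the two-step
    workflow: 'generate' the schema description, then 'load' the rows."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)

    # Step 1 ('generate'): extract the CREATE TABLE statement from the source.
    (create_sql,) = src.execute(
        "SELECT sql FROM sqlite_master WHERE type='table' AND name=?",
        (table,)).fetchone()
    dst.execute(create_sql)

    # Step 2 ('load'): stream every row from the source into the destination.
    cur = src.execute("SELECT * FROM %s" % table)
    placeholders = ",".join("?" * len(cur.description))
    dst.executemany("INSERT INTO %s VALUES (%s)" % (table, placeholders), cur)
    dst.commit()

# Hypothetical usage (file and table names are invented for illustration):
# replicate_table("geometry_master.db", "geometry_copy.db", "volumes")
```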

Database replication monitoring (1)
A dedicated machine, "atlmysql04", is being set up for database replication monitoring and tests. Currently installed on this server:
- mysql-standard-4.0.26
- MonALISA v1.4.14
- MonAMI v0.4
The server is registered in MonALISA under the farm_name "atlasdbs".

Database replication monitoring (2)
- MonALISA and MonAMI are used to monitor the DB replication activities (e.g. from the Tier-0 to the Tier-1 DB servers).
- System information of the DB servers (load, free memory, etc.).
- Network information (traffic, flows, connectivity, topology, etc.).
- The MonAMI monitoring daemon (by Paul Millar et al.) uses a plugin architecture to mediate between "monitoring targets" (a MySQL database, an Apache web server, etc.) and "reporting targets" (MonALISA, Ganglia, etc.), as sketched below.
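The plugin idea of decoupling what is monitored from where the numbers are reported can be sketched in Python as below. The class names, metric names and reporting logic are invented for illustration and are not the MonAMI API; a real MySQL plugin would query the server, and a real reporter would ship the values to MonALISA or Ganglia over the network.

```python
import time

class MySQLTarget:
    """Hypothetical 'monitoring target': something that can be sampled."""
    def sample(self):
        # A real plugin would query SHOW STATUS on the MySQL server;
        # fixed numbers keep the sketch self-contained.
        return {"threads_connected": 7, "slave_seconds_behind": 0}

class PrintReporter:
    """Hypothetical 'reporting target': somewhere the metrics are delivered."""
    def report(self, metrics):
        print(time.strftime("%H:%M:%S"), metrics)

def monitor(target, reporters, period=60, iterations=1):
    """Periodically sample the monitoring target and fan the result out
    to every configured reporting target."""
    for _ in range(iterations):
        metrics = target.sample()
        for reporter in reporters:
            reporter.report(metrics)
        time.sleep(period)

monitor(MySQLTarget(), [PrintReporter()], period=0, iterations=1)
```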

The MonALISA monitoring system

Next tasks:
- Support from MonAMI for plugins to monitor Oracle databases.
- Deploy and test the monitoring as soon as possible.