LCG 3D Distributed Deployment of Databases


LCG 3D Distributed Deployment of Databases Barbara Martelli

LCG 3D Project Goals
(INFN-GRID Technical Board, 16/02/2005)

Define an LCG distributed database service that allows LCG applications to:
- find the relevant database back-ends
- authenticate
- use the provided data in a consistent, location-independent way

Avoid the costly parallel development of data distribution, backup procedures, and high-availability mechanisms at each grid site, in order to limit support costs.

Why LCG 3D

- LCG provides an infrastructure for distributed access to file-based data and for data replication.
- Physics applications (and Grid services) require a similar infrastructure for data stored in relational databases.
- Some standardisation is needed as part of LCG.
- Service providers (site technology experts) need to be brought closer to database users/developers to define an LCG database service for the upcoming data challenges in 2005.

Database Service Structure

Different requirements and service capabilities at the different Tiers:

Database Backbone (Tier1 <-> Tier1)
- High volume, often complete replication of RDBMS data
- Can expect a good, but not continuous, network connection to other T1 sites
- Symmetric, possibly multi-master replication
- Large-scale central database service with a local DBA team

Tier1 <-> Tier2
- Medium volume, sliced extraction of data
- Asymmetric, possibly only uni-directional replication
- Part-time administration (shared with fabric administration)

Tier1/2 <-> Tier4 (laptop extraction)
- Support for fully disconnected operation
- Low volume, sliced extraction from T1/T2

A catalogue of implementation/distribution technologies is needed, each addressing part of the problem, but all together forming a consistent distribution model.

[Diagram: the distribution model. T0 and the T1 sites form a "db backbone" of Oracle masters (all data replicated, reliable service) connected via Oracle Streams; T2 sites hold local database caches (subset of the data, local service only), fed by cross-vendor extraction to MySQL files; T3/4 sites (e.g. laptops) are served through a proxy cache.]
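The tiered model above can be viewed as a catalogue of tier-to-tier links, each served by one distribution technology. The following is a purely illustrative sketch (not part of the 3D project code; all names are hypothetical) of such a catalogue, with a lookup helper:

```python
# Toy catalogue of the distribution technologies in the tiered model above.
# Purely illustrative; the Link fields and entries are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    source: str      # producing tier, e.g. "T1"
    target: str      # consuming tier, e.g. "T2"
    technology: str  # mechanism covering this link
    symmetric: bool  # multi-master vs. uni-directional
    volume: str      # "high" | "medium" | "low"

CATALOGUE = [
    Link("T0", "T1", "Oracle Streams", symmetric=True, volume="high"),
    Link("T1", "T1", "Oracle Streams", symmetric=True, volume="high"),
    Link("T1", "T2", "cross-vendor extract (MySQL files)",
         symmetric=False, volume="medium"),
    Link("T2", "T3/4", "proxy cache / sliced extraction",
         symmetric=False, volume="low"),
]

def technologies_for(source: str, target: str) -> list[str]:
    """Return the technologies that cover a given tier-to-tier link."""
    return [l.technology for l in CATALOGUE
            if l.source == source and l.target == target]
```

For example, `technologies_for("T1", "T2")` returns the single cross-vendor extraction mechanism, reflecting the asymmetric, uni-directional nature of that link in the model.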

Project Structure

WP1 - Data Inventory and Distribution Requirements
- Define a complete inventory of the RDBMS data and applications foreseen by the experiment computing models
- Feed the proposed distribution model back into the software development process

WP2 - Database Service Definition and Implementation
- Represent the service infrastructure of each particular site
- Contribute to the definition of a common database and distribution service provided to worker-node and grid-service applications

WP3 - Evaluation Tasks
- Define an evaluation plan for each technology used to implement part of the 3D distribution infrastructure
- Set up a prototype system for that technology and compare its performance with the requirements delivered by WP1
- Summarise the outcome of each evaluation in a short technology evaluation document

CNAF Contribution to 3D

Personnel
- Barbara Martelli
- Elisabetta Vilucchi
- Simone Dalla Fina (INFN - PD)

HW resources
- 4 machines: 2 dual PIII 2.4 GHz (512 MB RAM, two 60 GB disks each) and 2 dual Intel Xeon 2.4 GHz with HyperThreading (512 MB RAM, two 60 GB disks each)
- 1 TB of storage accessible via SAN
- One machine at INFN-Padua (dual PIII 1.3 GHz, 1 GB RAM, two 60 GB disks)

CNAF Planned Activities Q1
- CNAF is in the first set of sites that will join the 3D testbed (before the end of March)
- Install and test Oracle 10g
- Initial setup and distribution tests using Oracle Streams technology
- Set up Oracle Enterprise Manager for the testbed and evaluate its suitability for shared administration over the WAN
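Oracle Streams, the technology to be tested above, replicates data by capturing changes at the source, propagating them as logical change records (LCRs) through queues, and applying them at each destination. The following is a toy in-memory sketch of that capture/propagate/apply pattern, with plain Python dicts standing in for tables and no Oracle involved; all class and method names are invented for illustration:

```python
# Toy illustration of the capture -> propagate -> apply pattern that
# Oracle Streams implements; dicts stand in for tables, a deque for the
# Streams queue. No real database is involved.
from collections import deque

class ToyReplica:
    def __init__(self):
        self.table: dict[str, str] = {}  # replicated "table"
        self.queue: deque = deque()      # stands in for the Streams queue

    def apply_pending(self):
        """Apply all queued logical change records (LCRs)."""
        while self.queue:
            op, key, value = self.queue.popleft()
            if op in ("INSERT", "UPDATE"):
                self.table[key] = value
            elif op == "DELETE":
                self.table.pop(key, None)

class ToyMaster:
    def __init__(self, replicas):
        self.table: dict[str, str] = {}
        self.replicas = replicas

    def change(self, op, key, value=None):
        """Perform a change locally and 'capture' it as an LCR."""
        if op in ("INSERT", "UPDATE"):
            self.table[key] = value
        elif op == "DELETE":
            self.table.pop(key, None)
        for r in self.replicas:  # 'propagate' to each destination queue
            r.queue.append((op, key, value))
```

Because changes are queued rather than applied synchronously, a replica can fall behind during a network outage and catch up later by draining its queue, which mirrors the "good, but not continuous" T1 connectivity assumed in the service structure.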

CNAF Planned Activities Q2
- Introduce new applications into the 3D testbed
- Distribute applications and workload as defined by the experiments
- Run distributed tests and test data distribution as required by the experiments
- Test the RAC setup

CNAF Planned Activities Q3
- Final version of the 3D service definition and service implementation documents
- Installation of the T1 production setup
- The testbed needs to stay available for ongoing integration

CNAF Planned Activities Q4
- Start production deployment of the distributed service, according to the experiments' deployment plans

Contacts and Links
- Barbara Martelli, barbara.martelli@cnaf.infn.it
- Elisabetta Vilucchi, elisabetta.vilucchi@cnaf.infn.it
- Simone Dalla Fina, dfina@pd.infn.it
- http://lcg3d.cern.ch