3D Workshop Outline & Goals
Dirk Düllmann, CERN IT
More details at

D.Duellmann, CERN

Why an LCG Database Deployment Project?

LCG today provides an infrastructure for distributed access to file-based data and for file replication. Physics applications (and grid services) require similar services for data stored in relational databases:
– Several applications and services already use an RDBMS
– Several sites already have experience in providing RDBMS services

Goals for a common project as part of LCG:
– Increase the availability and scalability of LCG and experiment components
– Allow applications to access data in a consistent, location-independent way
– Allow existing database services to be connected via data replication mechanisms
– Simplify shared deployment and administration of this infrastructure during 24x7 operation

We need to bring service providers (site technology experts) closer to database users/developers to define an LCG database service.
– Time frame: first deployment in the 2005 data challenges (autumn '05)

Workshop Logistics

The workshop will take place at two locations:
– Mornings: Bat
– Mo & Tu afternoons: (VRVS available)

Wireless network:
– MAC registration is required
– Follow the procedure given on the web page displayed in your web browser

Workshop dinner on Tuesday?
– If sufficient people are interested: Tue evening, 19:30
– Please send an email by lunchtime today if you plan to join

LHC experiment visit?
– Trying to arrange a visit to an LHC experiment site for Wed afternoon
– Please let me know if you'd be interested in joining

Workshop Outline: Monday Morning – Experiment Requirements

Goals:
– Clarify which database applications exist and need to be deployed in the LCG by autumn 2005
– Main volume and functionality requirements: check the service architecture; input to resource planning at T0/1/2
– Which distribution mechanism is used (if any)?
– Status of application development: input to a realistic test and deployment plan

Result:
– Adapt the service definition; collect a list of key technologies to be tested against applications
– Schedule for a joint test of applications in the 3D testbed

Workshop Outline: Monday Afternoon – Site Services

Goals:
– Which services and policies are in place today and can be relied on for 2005?
– Site production experience with physics applications
– Available deployment and software development services
– Overview of available resources for LCG work (the real resource discussion will be done via LCG channels)
– Short-term plans until autumn 2005

Result:
– Check against the deployment plan to identify missing resources

Workshop Outline: Tuesday Morning – Application–Service Coupling

Goals:
– Identify the main areas of s/w development required to couple applications to LCG database services
– Overview of ongoing development activities in experiments and projects

Result:
– Check project development plans against the provided services
– Identify activities/applications which do not fit into a standard architecture and therefore require a separate deployment plan and/or dedicated deployment resources

Workshop Outline: Tuesday Afternoon – 3D Distribution Technologies

Goals:
– Describe the two main technologies proposed for a standardized 3D service: Oracle Streams replication and FroNtier-based database access
– Overview of experience with both
– Identify the main uncertainties which will need to be resolved in tests in the 3D testbed

Result:
– Proposed items for a 3D test plan
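As a rough illustration of the second technology mentioned above: FroNtier serves read-only database queries over HTTP so that ordinary web proxies can cache the results close to the client. The toy in-process sketch below mimics only that caching idea; the `ProxyCache` class and `backend_query` function are hypothetical stand-ins, not FroNtier's actual API.

```python
# Illustrative sketch only: memoize read-only query results, the way a
# web proxy caches FroNtier HTTP responses near the client.

class ProxyCache:
    """Caches results of read-only queries, keyed by the query text."""

    def __init__(self, backend):
        self.backend = backend          # callable: query string -> rows
        self.cache = {}                 # query -> cached rows
        self.hits = 0
        self.misses = 0

    def query(self, sql):
        if sql in self.cache:           # served locally, no round trip to T0
            self.hits += 1
            return self.cache[sql]
        self.misses += 1
        rows = self.backend(sql)        # forwarded to the central database
        self.cache[sql] = rows
        return rows


def backend_query(sql):
    # Hypothetical stand-in for the central (Tier 0) database server.
    return [("calib", 42)]


proxy = ProxyCache(backend_query)
proxy.query("SELECT * FROM conditions")   # first call reaches the backend
proxy.query("SELECT * FROM conditions")   # repeat call is a cache hit
```

The key property this models is that repeated identical queries never touch the central database, which is what makes the approach attractive for read-mostly Tier 2 access.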

Workshop Outline: Wednesday Morning – Persistency Framework

Goal:
– Overview of developments in POOL/CondDB which impact distributed database services
– Describe the development plan for the other remaining features of LCG-PF

Result:
– Items for the 2005 Persistency Framework work plan

Workshop Outline: Wednesday – Workshop Summary

Goal:
– Figure out if all (or any) of that really happened :-)
– Summarize the work items (together with available/required resources) for the 3D work plan

Starting Point for a Service Architecture?

[Diagram: proposed multi-tier database service architecture]
– T0: autonomous, reliable service (Oracle)
– T1: database backbone – all data replicated, reliable service; fed from T0 via Oracle Streams
– T2: local database cache – subset of the data, local service only; fed via cross-vendor extract (MySQL files, proxy cache)
– T3/4: MySQL / proxy cache

Site Contacts

Established contact with several Tier 1 and Tier 2 sites:
– Tier 1: ASCC, BNL, CERN, CNAF, FNAL, GridKa, IN2P3, RAL
– Tier 2: ANL, U Chicago

Visited FNAL and BNL around HEPiX:
– Very useful discussions with the experiment developers and database service teams there

Regular meetings have started:
– (Almost) weekly phone meetings, back-to-back for the Requirement WG and the Service Definition WG
– The current time (Thursday 16-18) is inconvenient for Taiwan

All of the above sites have expressed interest in participating in the project:
– BNL has started its Oracle setup
– Most sites have allocated and installed h/w for participation in the 3D testbed
– U Chicago has agreed to act as a Tier 2 in the testbed

Requirement Gathering

All participating experiments have signed up their representatives:
– Rather 3 than 2 names
– A tough job, as the survey inside an experiment crosses many boundaries!

Started with a simple spreadsheet capturing the various applications:
– Grouped by application: one line per replication and tier (distribution step)
– Two main patterns: experiment data is fanned out (T0->Tn); grid service data is consolidated (Tn->T0)
– Exceptions need to be documented

Aim to collect the complete set of requirements for database services:
– Including data which is stored locally and never leaves a site; this is needed to properly size the h/w at each site

Most applications don't require multi-master replication:
– Good news; document any exceptions
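The one-line-per-distribution-step spreadsheet described above can be sketched as follows; the field names and example applications are invented for illustration, not the project's actual schema.

```python
# Sketch of the requirement spreadsheet: one row per replication step,
# classified into the two main patterns named on the slide.

def classify(step):
    """Label a (source, destination) tier pair as fan-out or consolidation."""
    src, dst = step
    if src == "T0" and dst != "T0":
        return "fan-out"        # experiment data: T0 -> Tn
    if dst == "T0" and src != "T0":
        return "consolidation"  # grid service data: Tn -> T0
    return "exception"          # outside both patterns: document separately

rows = [
    # (application, (source tier, destination tier)) -- illustrative only
    ("conditions data", ("T0", "T1")),
    ("file catalog", ("T1", "T0")),
    ("local bookkeeping", ("T2", "T2")),  # never leaves the site
]

for app, step in rows:
    print(app, classify(step))
```

Note that the local-only row still appears in the sheet even though no replication happens: as the slide says, such data is needed to size the hardware at each site.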