
INFN Tier1/Tier2 Cloud Workshop, CNAF, 22 November 2006
Conditions Database Services: how to implement the local replicas at Tier1 and Tier2 sites
Andrea Valassi (CERN IT-PSS)

Conditions DB in the 4 experiments
ALICE
– AliROOT (ALICE-specific software) for time/version handling
– ROOT files with the AliEn file catalog
CMS
– CMSSW (CMS-specific software) for time/version handling
– Oracle (via the POOL-ORA C++ API) with a Frontier web cache
ATLAS and LHCb
– COOL (LCG AA common software) for time/version handling
– Oracle, MySQL, SQLite, Frontier (via the COOL C++ API)

ALICE – use AliEn/ROOT (no DBs)
ALICE has no special needs for database deployment and replication at Tier1 and Tier2 sites for managing its conditions data.

CMS – use Oracle/Frontier
Oracle+Tomcat at Tier0 and Squid caches at Tier1 and Tier2 sites were already set up to access CMS conditions data during CSA06.
NB cache consistency control: the Squid caches must be refreshed periodically by re-querying the Tier0 database (a minimal sketch of this idea follows below).
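The following is a minimal, hypothetical sketch (not CMS production code) of the cache-consistency idea above: a client reads conditions through a local Squid proxy, but periodically forces revalidation against the Tier0 Frontier servlet by sending an HTTP no-cache directive. The endpoint URLs, the query payload and the refresh period are placeholder assumptions.

```python
import time
import urllib.request

# Placeholder endpoints: a local Tier1/Tier2 Squid proxy in front of the
# Tier0 Frontier/Tomcat servlet (both URLs are illustrative assumptions).
SQUID_PROXY = "http://squid.example-tier2.infn.it:3128"
FRONTIER_URL = "http://frontier.example-tier0.cern.ch:8000/Frontier/query?encoded_conditions_query"

REFRESH_PERIOD = 600  # seconds between forced cache refreshes (assumption)
_last_refresh = 0.0

def fetch_conditions(force_refresh=False):
    """Fetch a conditions payload through the Squid cache.

    When force_refresh is True, a 'Cache-Control: max-age=0' header asks the
    cache hierarchy to revalidate the entry against the Tier0 origin server,
    which is how stale cached conditions are periodically flushed.
    """
    proxy = urllib.request.ProxyHandler({"http": SQUID_PROXY})
    opener = urllib.request.build_opener(proxy)
    request = urllib.request.Request(FRONTIER_URL)
    if force_refresh:
        request.add_header("Cache-Control", "max-age=0")
    with opener.open(request, timeout=30) as response:
        return response.read()

def get_conditions():
    """Return conditions data, forcing a cache refresh every REFRESH_PERIOD."""
    global _last_refresh
    now = time.time()
    refresh = (now - _last_refresh) > REFRESH_PERIOD
    if refresh:
        _last_refresh = now
    return fetch_conditions(force_refresh=refresh)
```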

ATLAS and LHCb – use COOL
COOL: the LCG Conditions Database
– Common development of IT-PSS, LHCb and ATLAS
– Handles the time variation and versioning of conditions data
Four supported relational technologies (via CORAL), as illustrated in the sketch below
– Oracle and MySQL database servers
– SQLite files
– Frontier (read-only): Squid + Tomcat + Oracle server
COOL service deployment model
– Based on the generic 3D distributed database deployment model
  - Oracle at Tier0 and Tier1 (with distribution via Oracle Streams)
  - Other technologies elsewhere, if needed at all
– Details depend on each experiment's computing model
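As an illustration of the backend list above, here is a minimal sketch using the PyCool Python bindings distributed with COOL. The connection strings, folder path and payload column name are placeholder assumptions; the point is that only the connection string changes when switching between the CORAL backends, while the client code stays the same.

```python
# Minimal PyCool sketch (placeholder connection strings, folder path and
# payload column). The same read code works against any CORAL backend.
from PyCool import cool

# Illustrative COOL connection strings, one per supported backend:
CONNECT_STRINGS = {
    "oracle":   "oracle://T1ReplicaServer;schema=COOL_OWNER;dbname=CONDDB",
    "mysql":    "mysql://t2host.example.infn.it;schema=cooldb;dbname=CONDDB",
    "sqlite":   "sqlite://;schema=conditions.db;dbname=CONDDB",
    "frontier": "frontier://FrontierServer;schema=COOL_OWNER;dbname=CONDDB",
}

def read_conditions(backend, folder_path="/Example/Folder", when=0, channel=0):
    """Open a COOL database read-only and fetch the object valid at 'when'."""
    dbSvc = cool.DatabaseSvcFactory.databaseService()
    db = dbSvc.openDatabase(CONNECT_STRINGS[backend], True)  # True = read-only
    try:
        folder = db.getFolder(folder_path)
        obj = folder.findObject(cool.ValidityKey(when), channel)
        return obj.payload()["value"]  # 'value' is a hypothetical payload column
    finally:
        db.closeDatabase()
```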

LHCb computing model
COOL only stores the conditions data needed for event reconstruction
– Oracle at Tier0
– Oracle at the Tier1s (six sites)
– Streams replication T0-T1
– COOL is not needed at the Tier2s (only MC production there)
– SQLite files may in any case be used for any special needs
(Marco Clemencic, 3D workshop, 13 September 2006)

LHCb – COOL service model
Two servers at CERN, essentially one for online and one for offline
– Replication to the Tier1s from the online database is a two-step replication (see the latency-check sketch below)
– The server at the pit is managed by LHCb; the offline server is managed by IT-PSS
[Diagram: COOL (Oracle) at CERN replicated via Streams to the six Tier1 sites: FZK, RAL, IN2P3, CNAF, SARA, PIC]
(Marco Clemencic, 3D workshop, 13 September 2006)
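Below is a minimal monitoring sketch, not part of the LHCb or 3D tooling, of how the latency of the two-step replication chain could be checked: a timestamped marker row is read back from the online, offline and one Tier1 database and the timestamps are compared. The connection descriptors and the HEARTBEAT table are assumptions made purely for illustration.

```python
# Hypothetical latency check for the two-step Streams chain
# (online at the pit -> CERN offline -> Tier1 replica).
# Connection strings and the heartbeat table are illustrative assumptions.
import cx_Oracle

DATABASES = {
    "online (pit)":   "cool_reader/secret@LHCB_ONLINE",
    "offline (CERN)": "cool_reader/secret@LHCB_OFFLINE",
    "Tier1 (CNAF)":   "cool_reader/secret@LHCB_CNAF",
}

QUERY = "SELECT MAX(update_time) FROM cool_owner.heartbeat"

def replication_lag():
    """Return the latest heartbeat timestamp seen at each database."""
    timestamps = {}
    for name, connect in DATABASES.items():
        conn = cx_Oracle.connect(connect)
        try:
            cur = conn.cursor()
            cur.execute(QUERY)
            timestamps[name] = cur.fetchone()[0]
        finally:
            conn.close()
    return timestamps

if __name__ == "__main__":
    for name, ts in replication_lag().items():
        print(f"{name:15s} last heartbeat: {ts}")
```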

LHCb – status and plans
Oracle and Streams replication setup and tests
– Oracle servers at the pit (single-instance prototype), CERN offline (test RAC) and three Tier1 sites (FZK/GridKa, RAL, IN2P3)
– Tested two-step Streams replication between pit, CERN and Tier1
– Tested replication throughputs are much higher than expected
  - All OK with 100 IOVs/sec for one hour and 1 IOV/sec for 24 hours (see the throughput-test sketch below)
– In progress: stress tests of Tier1 read access; tests of tag replication latency
Future milestones
– By March '07: add CNAF (Dec), NIKHEF/SARA (Jan), PIC (Mar)
– By March '07: integration with the CORAL LFC replica service and the latest COOL
  - Database lookup and secure user authentication using grid certificates (also needs secure SSH data transfer in LFC – deployment expected Jan-Feb '07)
– April '07: production using the CERN offline server and the Tier1s (the pit server still in test mode)
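The kind of throughput test quoted above (e.g. 100 IOVs/sec sustained for one hour) could be reproduced with a small PyCool script along the lines of the sketch below. The connection string, folder path and payload column are placeholder assumptions, the folder is assumed to already exist with a single payload field, and exact API names may differ across COOL releases.

```python
# Hypothetical IOV insertion-rate test against a COOL test folder.
# Connection string, folder path and payload column are assumptions;
# the target rate mirrors the 100 IOVs/sec figure quoted on the slide.
import time
from PyCool import cool

CONNECT = "oracle://LHCB_ONLINE_TEST;schema=COOL_TEST;dbname=STREAMS_TEST"
FOLDER = "/Test/ThroughputFolder"
TARGET_RATE = 100      # IOVs per second
DURATION = 3600        # seconds (one hour)

def run_throughput_test():
    dbSvc = cool.DatabaseSvcFactory.databaseService()
    db = dbSvc.openDatabase(CONNECT, False)        # False = open for writing
    folder = db.getFolder(FOLDER)
    payload = cool.Record(folder.payloadSpecification())
    start = time.time()
    stored = 0
    while time.time() - start < DURATION:
        since = cool.ValidityKey(stored)
        until = cool.ValidityKey(stored + 1)
        payload["value"] = stored                  # 'value' is a placeholder column
        folder.storeObject(since, until, payload, 0)  # channel 0
        stored += 1
        # simple pacing towards the target insertion rate
        expected = start + stored / float(TARGET_RATE)
        if expected > time.time():
            time.sleep(expected - time.time())
    db.closeDatabase()
    print("stored %d IOVs in %.0f s (%.1f IOVs/sec)"
          % (stored, time.time() - start, stored / (time.time() - start)))
```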

ATLAS – COOL service model
COOL Oracle services at Tier0 and at the ten Tier1s
– Two COOL servers at CERN for online and offline (similar to LHCb)
  - The online database is inside the ATLAS pit network (ATCN), but physically located in the computer centre
– In addition: Oracle (without COOL) at the three 'muon calibration centre' Tier2s
[Diagram: calibration updates flow from the online Oracle DB (online/PVSS/HLT farm in the ATLAS pit network, connected through a gateway and a dedicated 10 Gbit link) to the offline master conditions DB in the computer centre, which serves the Tier0 reconstruction farm and is replicated via Streams over the CERN public network to the Tier1 replicas: FZK, ASGC, BNL, RAL, IN2P3, CNAF, TRIUMF, Nordugrid, PIC, SARA]
(Sasha Vaniachine and Richard Hawkings, 3D workshop, 14 September 2006)

ATLAS – muon calibration centres
Muon calibration centres: Roma, Michigan, (Munich)
– Streams replication set up between Michigan (source) and CERN (target)
(Sasha Vaniachine and Joe Rothberg, 3D workshop, 14 September 2006)

ATLAS – status and plans
Oracle and Streams replication setup and tests
– Oracle servers at CERN online and offline (test RACs) and at five Phase 1 Tier1 sites (FZK/GridKa, BNL, ASGC/Taiwan, IN2P3, RAL), plus TRIUMF
  - TCP and protocol tuning for BNL and ASGC: from 400 to 3200 LCR/sec
– Tested two-step Streams replication between online, offline and Tier1
Future milestones
– By December '06: add CNAF
– February '07: "CDC" production with all six Tier1s
– Three "Phase 2" Tier1 sites to join later (Nordugrid, SARA, PIC)
Open issues for ATLAS
– Can the achievable replication throughput meet the ATLAS requirements?
  - Intrinsic Streams limitation: single-row updates in the apply step
  - Detailed throughput requirements are needed – or take as much as Streams can deliver?
  - Considering transportable tablespaces for TAGs (not possible for COOL?)
– Replication to Tier2s
  - COOL 'dynamic replication', e.g. to MySQL – under development (see the sketch below)
  - Evaluating the COOL Frontier backend (performance, cache consistency, ...)
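To illustrate the 'dynamic replication' idea mentioned above, here is a rough sketch (not the actual COOL development, which was still in progress at the time) of copying the IOVs of one folder from an Oracle master to a MySQL replica with PyCool. The connection strings and folder path are placeholders, the target folder is assumed to exist with the same schema, and the exact iterator idiom depends on the COOL version.

```python
# Rough sketch of COOL folder-level replication from an Oracle master
# to a MySQL replica (illustrative only; connection strings, folder path
# and the iterator idiom are assumptions, and exact API names vary
# between COOL releases).
from PyCool import cool

SOURCE = "oracle://AtlasMasterServer;schema=ATLAS_COOL;dbname=COMP200"
TARGET = "mysql://t2host.example.infn.it;schema=atlas_cool;dbname=COMP200"
FOLDER = "/Example/Folder"

def replicate_folder(since=cool.ValidityKeyMin, until=cool.ValidityKeyMax):
    dbSvc = cool.DatabaseSvcFactory.databaseService()
    src = dbSvc.openDatabase(SOURCE, True)     # read-only master
    dst = dbSvc.openDatabase(TARGET, False)    # writable replica
    src_folder = src.getFolder(FOLDER)
    dst_folder = dst.getFolder(FOLDER)         # assumed to exist with the same schema
    objects = src_folder.browseObjects(since, until, cool.ChannelSelection.all())
    copied = 0
    for obj in objects:                        # iterator idiom depends on COOL version
        dst_folder.storeObject(obj.since(), obj.until(),
                               obj.payload(), obj.channelId())
        copied += 1
    src.closeDatabase()
    dst.closeDatabase()
    print("copied %d IOVs from the Oracle master to the MySQL replica" % copied)
```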

Streams downstream capture
Objective
– Source database (CERN) isolation against network or database problems at the replicas
Status and plans
– Blocking Oracle bug solved two weeks ago
– Four nodes allocated
– Production setup in December, with two servers (ATLAS/LHCb) and two spares
(Eva Dafonte Perez, IT-PSS)

Summary
ALICE: ROOT and AliEn (no database services outside CERN)
CMS: Oracle and Frontier at Tier0, Squid web caches at Tier1/Tier2
– Already set up and successfully tested during CSA06
– No database service is required outside CERN for CMS
LHCb (COOL): Oracle at Tier0 and Tier1 with Streams replication
– Two servers at CERN: one at the pit (online) and one in the computer centre (offline)
– No database service is required at the Tier2s for LHCb
– Production with all six Tier1 sites in April 2007
ATLAS (COOL): Oracle at Tier0 and Tier1 with Streams replication
– Two servers at CERN, both in the computer centre (the online server is inside the online network)
– Production with the six 'Phase 1' Tier1 sites in February 2007
– Potential problem with Streams throughput: detailed requirements are needed
– No decision yet about Tier2 deployment: evaluating MySQL and Frontier