A quick summary and some ideas for the 2005 work plan
Dirk Düllmann, CERN IT
More details at

Experiment Requests
Not a complete list
– Rather an addendum to the experiment presentations
LHCb
– PVSS: no request for > change 2005 spreadsheet
– Bookkeeping: T0 service sufficient; application ready
– CondDB: T0/1/2 required; T2 slicing (slice definition?) or caching

Experiment Requests
ALICE
– Need to obtain the T0 / online request: volumes, applications and deployment dates
– No request for database services beyond T0
– FC (gLite): no data distribution required
ATLAS
– GeometryDB: s/w ready; data copy via Octopus (no slicing) or a special HVS app (slicing); replication would be possible as well
– Collections: becoming available from POOL; integration and reference load needed

Experiment Requests
CMS
– Conditions: read access via FroNtier; alternative RAL-based prototype; from T0 or also T1?
– FC: no replication; distribution via CMS peer-to-peer

Database Services for EGEE/ARDA/GDA
EGEE and ARDA
– Still in the s/w development phase
– First deployment ideas, but no concrete plans for 2005 yet
– Development effort goes into a specialised replication implementation; somewhat disconnected from experiment requirements and the existing deployment infrastructure?
– Expect a service request for deployment once a deployment plan is agreed with the experiments; in other words - late as well..
Grid Deployment Services
– Monitoring does use specialised replication; currently based on MySQL - plans for Oracle so far without a date?
Need to assure consistent deployment planning
– Or (temporarily?) accept a larger diversity of special services

Grid Security
Some ideas for integration with grid certificates
No support of certificate identity by DB backends - and unlikely on the 2005 timescale
– Is an individual grid identity needed as the DB identity? More for diagnostics than for authorisation
– Certificate attributes could map directly to a database role set (see the sketch below)
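A minimal sketch of the last two points, assuming Python with cx_Oracle: the certificate subject is recorded only for diagnostics via the standard Oracle procedure DBMS_SESSION.SET_IDENTIFIER, and hypothetical VO attributes are mapped onto database roles enabled with SET ROLE. The attribute names, role names and connection handling are invented for illustration; they are not part of the proposal.

```python
import cx_Oracle

# Hypothetical mapping from certificate / VO attributes to database roles.
ATTRIBUTE_TO_ROLES = {
    "/atlas/Role=production": ["COND_WRITER"],
    "/atlas":                 ["COND_READER"],
}

def roles_for(attributes):
    """Collect the database roles implied by the certificate attributes."""
    roles = set()
    for attr in attributes:
        roles.update(ATTRIBUTE_TO_ROLES.get(attr, []))
    return roles

def open_session(dsn, user, password, subject, attributes):
    """Connect with a shared application account, then restrict by grid identity."""
    conn = cx_Oracle.connect(user, password, dsn)
    cur = conn.cursor()
    # Record the individual grid identity for diagnostics only (visible in session views).
    cur.callproc("DBMS_SESSION.SET_IDENTIFIER", [subject])
    roles = roles_for(attributes)
    if roles:
        # Enable only the (already granted) roles implied by the attributes.
        cur.execute("SET ROLE " + ", ".join(sorted(roles)))
    return conn
```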

3D Requests
Data volume is determined only up to a factor of ten - still not (yet) a problem
Other requirements (CPU, connections and I/O) are often not determined
– Need a reference workload - and leave headroom! (a minimal sketch follows this list)
Generic distribution (a la Streams) is used - but is not the main focus
– Development of several copy mechanisms or specialised replication tools
– Need to understand their deployment impact
Several key apps are still under development
– ConditionsDB: RAL based (March), FroNtier based (March?)
– File Catalog: which one? By when? Will use the POOL RFC until a decision is made
– Bookkeeping: ATLAS (failover at T0?), LHCb (T0)
Don't want to be bureaucratic - but do want to avoid late surprises and finger pointing
– Need agreement on defined deployment services and plans
– A proposal will be written up, discussed and go to the PEB
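As a rough illustration of what the simplest "reference workload" could look like, the sketch below (Python with cx_Oracle; the DSN, credentials and query are placeholders, not agreed values) runs a fixed read query in a loop and reports throughput and latency, numbers that could then be compared against the headroom available on a service.

```python
import time
import cx_Oracle

def run_reference_load(dsn, user, password, query, duration_s=60):
    """Run a fixed read-only query in a loop and report queries/second and latency."""
    conn = cx_Oracle.connect(user, password, dsn)
    cur = conn.cursor()
    latencies = []
    end = time.time() + duration_s
    while time.time() < end:
        t0 = time.time()
        cur.execute(query)
        cur.fetchall()
        latencies.append(time.time() - t0)
    conn.close()
    rate = len(latencies) / float(duration_s)
    avg_ms = 1000.0 * sum(latencies) / len(latencies)
    print("%.1f queries/s, average latency %.1f ms" % (rate, avg_ms))

# Example with placeholder values only:
# run_reference_load("testbed-db:1521/orcl", "reader", "secret",
#                    "SELECT COUNT(*) FROM conditions_iov")
```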

Main Problems 2005
Significant additional workload on many parts of the database area
Database application developers
– Several new DB apps to complete and integrate on production scale
Service providers
– Ongoing R&D for consolidation & scalability (CERN)
– Coordination of the distributed service (CERN and FNAL)
– Integration and optimisation of new applications
– Implementation of the LCG service at T1
Likely contention at CERN and FNAL
– Due to late application arrival
– Unexpected access patterns
– Insufficient manpower on the service side

Main Problems
Should expect (and plan for) an integration bottleneck
– Outsourcing or experiment-run databases are unlikely to be more efficient solutions
Need an agreed deployment priority for applications
– And maybe shift development effort to the key applications
T0 database services need to focus on application integration
– Outsource as much as possible: database server s/w installation and patching, storage subsystem configuration and deployment

Test bed Setup
Oracle 10g server
– Install kits and documentation are provided for testbed sites; CERN cannot offer offsite support though
– At least 100 GB of storage
– Application and reference load packaged by CERN / the experiments
FroNtier installation (a simple cache check sketch follows this slide)
– Just one server plus squid installations at the other sites?
– Need a squid package and install instructions (FNAL?)
– Need a test server at CERN or FNAL
Worker nodes to run the reference load
OEM installation for testbed administration and diagnostics
– Propose to prepare the setup between FNAL and CERN
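Once a squid is in place in front of a FroNtier server, one simple sanity check is to fetch the same URL twice through the proxy and inspect squid's X-Cache response header (MISS on the first request, HIT on the second if caching works). A minimal sketch, assuming Python's standard library, the default squid port and a placeholder endpoint:

```python
import urllib.request

def check_cache(url, proxy="http://localhost:3128"):
    """Issue the same request twice through the squid proxy and report cache status."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy}))
    for attempt in ("first", "second"):
        with opener.open(url) as resp:
            # squid reports HIT/MISS for the request in the X-Cache header.
            print(attempt, "request:", resp.headers.get("X-Cache", "no X-Cache header"))

# check_cache("http://frontier.example.org:8000/Frontier/query")  # placeholder URL
```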

Workplan Q1
Connect the first set of sites to the testbed
– CNAF, FNAL, ASCC, BNL
– Install and test the Oracle 10g and FroNtier setup
– Use the POOL RFC for initial setup and distribution tests
Setup Oracle Enterprise Manager for the testbed
– Evaluate its suitability for shared administration in a WAN
Setup server- and client-side diagnostics
– Add a client performance summary to RAL (a toy sketch follows this slide)
– Evaluate FNAL monitoring and integrate it with RAL
CERN T0
– Agree and implement the integration policy
– Implement a stop-gap solution with dedicated resources for key apps
– RAC functionality test and configuration optimisation
– Coordinate testbed installation and documentation
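Purely to illustrate the kind of "client performance summary" meant above (the real one would sit inside RAL, which is C++), a toy accumulator of per-query call counts and wall-clock time might look like the following; all names are invented for the example.

```python
import time
from collections import defaultdict

class QueryStats:
    """Accumulate per-query call counts and total wall-clock time on the client side."""

    def __init__(self):
        self._stats = defaultdict(lambda: [0, 0.0])  # query text -> [calls, seconds]

    def timed(self, query, execute):
        """Run execute() for the given query text and record its duration."""
        t0 = time.time()
        result = execute()
        entry = self._stats[query]
        entry[0] += 1
        entry[1] += time.time() - t0
        return result

    def summary(self):
        """Print the queries ordered by total time spent, most expensive first."""
        for query, (calls, seconds) in sorted(self._stats.items(),
                                              key=lambda kv: -kv[1][1]):
            print("%6d calls %8.2f s  %s" % (calls, seconds, query))
```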

Workplan Q2
Add the second batch of sites
– GridKa, IN2P3, more?
Introduce new applications into the 3D testbed
– Distribute application and workload packages as defined by the experiments
– Run distributed tests and test data distribution as required by the experiments
The sequence follows the experiments' priority and availability list, e.g.
– March: ATLAS GeomDB
– April: LHCb ConditionsDB
– June: CMS Conditions (via FroNtier)
CERN T0
– Test of the RAC setup with real applications (R&D); define disk and RAC configuration
– T0 integration of new applications (integration service) and definition of a performance metric
– Similar sequence as 3D, but including T0-only apps

Workplan Q3
Final version of the 3D service definition and service implementation documents
– And presentation to the PEB/GDB
Installation of the T1 production setup
– Need the testbed to stay available for ongoing integration
CERN T0
– Start moving current and newly certified apps onto the new RAC infrastructure
– Finish new application integration
– All applications in T0 production

Workplan Q4
Start production deployment of the distributed service
– According to the experiment deployment plans
CERN T0
– All production applications migrated from the Solaris cluster to the RAC/Linux setup
– Together with the new applications

Thanks to all! And a Merry Christmas…