Introduction: Distributed POOL File Access
Elizabeth Gallas, Oxford – September 16, 2009 – Offline Database Meeting

Slide 2: Overview (Elizabeth Gallas, 04-Sept-2009)
- ATLAS relies on the Grid for processing many types of jobs. Jobs need Conditions data from Oracle plus referenced POOL files.
- ATLAS has decided to deploy an array of Frontier/Squid servers to negotiate transactions between grid jobs and the Oracle DB, in order to:
  - reduce the load on Oracle
  - reduce the latency observed when connecting to Oracle over the WAN
- With Frontier, inline Conditions are read via Squid cache -> Frontier server -> Oracle.
- Referenced Conditions data is in POOL files (always < 2 GB), which are manageable on all systems.
- FOCUS TODAY: how GRID jobs find the POOL files.
- All sites accepting jobs on the grid must have:
  - all the POOL files, and
  - a PFC (POOL File Catalog): an XML file with the POOL file locations at the site.
- Job success on the GRID requires that the GRID submission system know how sites are configured: GRID sites configured with the site-appropriate environment and Squid failover*.

Slide 3: DB Access Software Components (Elizabeth Gallas, 04-Sept-2009)

Slide 4: Where are the POOL files? (Elizabeth Gallas, 04-Sept-2009)
- DQ2 (DDM) distributes event data files and Conditions POOL files. TWiki: "StorageSetUp for T0, T1's and T2's".
- ADC/DDM maintains the ToA (Tiers of ATLAS) sites:
  - ToA sites are subscribed to receive DQ2 POOL files.
  - ToA sites have "space tokens" (areas for file destinations) such as:
    - "DATADISK" for real event data
    - "MCDISK" for simulated event data
    - ...
    - "HOTDISK" for holding POOL files needed by many jobs -> has more robust hardware for more intense access
- Some sites also use Charles Waldman's "pcache": it duplicates files to a scratch disk accessible to local jobs, avoiding network access to "hotdisk"; magic in pcache tells the job to look in the scratch disk first (see the sketch below).
- Are POOL files deployed to all ToA sites on the GRID? Tier-1? Tier-2? Bigger Tier-3s? Any other sites that want to use them? Are these sites in ToA?
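A rough illustration of the pcache idea described above (not the actual pcache code): a job-side lookup prefers a copy in the local scratch cache and only falls back to, and populates from, the site's HOTDISK copy. Both paths below are hypothetical examples of site-specific locations.

```python
import os
import shutil

# Hypothetical site-specific locations.
SCRATCH_CACHE = "/scratch/pcache"
HOTDISK_ROOT = "/storage/atlashotdisk/cond"

def locate_pool_file(filename):
    """Return a locally readable path for a conditions POOL file.

    Sketch of the behaviour described on the slide: look in the scratch
    disk first; only if the file is not cached yet, copy it once from
    the HOTDISK area and use the cached copy from then on.
    """
    cached = os.path.join(SCRATCH_CACHE, filename)
    if os.path.exists(cached):
        return cached  # cache hit: no access to HOTDISK needed

    if not os.path.isdir(SCRATCH_CACHE):
        os.makedirs(SCRATCH_CACHE)
    shutil.copy(os.path.join(HOTDISK_ROOT, filename), cached)
    return cached
```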

Slide 5: from Stephane Jezequel (Sept 15) (Elizabeth Gallas, 04-Sept-2009)
"Could you please forward this request to all ATLAS Grid sites which are included in DDM: As discussed during the ATLAS software week, sites are requested to implement the space token ATLASHOTDISK. More information: ATLASHOTDISK_space_token. Sites should assign at least 1 TB to this space token (and should foresee 5 TB). In case of a storage crisis at the site, the 1 TB can be reduced to 0.5 TB. Because of the special usage of these files, sites should decide whether or not to assign a specific pool. When it is done, please report to DDM Ops (a Savannah ticket is a good solution) to create the new DDM site."

Slide 6: Where are the PFCs (POOL File Catalogs)? (Elizabeth Gallas, 04-Sept-2009)
- Mario Lassnig has modified the DQ2 client dq2-ls:
  - it can create, 'on the fly', the PFC for the POOL files on a system;
  - it is written to work for SRM systems (generally Tier-1s);
  - on non-SRM systems (generally Tier-2s/3s) this PFC file must be modified: the SRM-specific descriptors have to be replaced.
- We need to collectively agree on the best method and designate who will follow it up:
  - a scriptable way to remove the SRM descriptors from the PFC for use on non-SRM systems (see the sketch below);
  - a cron (?) that detects the arrival of new POOL files, generates an updated PFC, and runs the above script to prepare the file for local use.
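One way the "scriptable" SRM-descriptor replacement could look; this is only a sketch, not an agreed tool. It assumes a dq2-ls-produced PoolFileCatalog.xml whose <pfn> entries carry SRM URLs, and a site-specific mapping from that SRM prefix to a locally readable path (both the prefix and the local root below are hypothetical). A cron job could run it whenever new POOL files are detected.

```python
import xml.etree.ElementTree as ET

# Hypothetical site-specific settings.
SRM_PREFIX = "srm://se.example.org/srm/managerv2?SFN=/pnfs/example.org/atlas"
LOCAL_ROOT = "/storage/atlas"   # same files, as seen by local (non-SRM) jobs

def localize_pfc(infile, outfile):
    """Rewrite SRM-specific <pfn> entries of a POOL File Catalog to local paths."""
    tree = ET.parse(infile)
    for pfn in tree.getroot().iter("pfn"):
        name = pfn.get("name", "")
        if name.startswith(SRM_PREFIX):
            pfn.set("name", LOCAL_ROOT + name[len(SRM_PREFIX):])
    tree.write(outfile)

if __name__ == "__main__":
    localize_pfc("PoolFileCatalog.xml", "PoolFileCatalog.local.xml")
```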

Slide 7: Configuring jobs on the GRID (Elizabeth Gallas, 04-Sept-2009)
- Item 5 from Dario's TOB action items: the DB and ADC groups should discuss and implement a way to set the environment on each site so that it points to the nearest Squid and the local POOL File Catalog (a minimal sketch of such an environment follows below).
- The GRID submission system must know which sites have Squid access to Conditions data. Site specific? Failover? -> Experience at Michigan with muon calibration: Frontier/Squid access to multiple Squid servers.
- Subscriptions in place to ensure the POOL files are in place, and the PFC location (?). Site specific – continuous updates to the local PFC.
- Manual setup for now in Ganga/Panda; this will move to AGIS with a configuration file on each site. Link to AGIS Technical Design Proposal: =7&confId=50976
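A minimal sketch of what such a per-site environment might contain, written in Python for consistency with the other sketches. FRONTIER_SERVER follows the frontier_client convention of a server URL plus one or more Squid proxy URLs (later proxies act as failover); the host names, the second Squid, and the catalog variable name are illustrative assumptions rather than an agreed convention.

```python
import os

# Nearest site Squid first, second Squid as failover (hypothetical hosts).
os.environ["FRONTIER_SERVER"] = (
    "(serverurl=http://atlasfrontier.example.org:8000/atlr)"
    "(proxyurl=http://squid1.site.example.org:3128)"
    "(proxyurl=http://squid2.site.example.org:3128)"
)

# Point jobs at the site's local POOL File Catalog.  The variable name is a
# placeholder; agreeing on the real mechanism is exactly the action item above.
os.environ["LOCAL_POOL_FILE_CATALOG"] = "xmlcatalog_file:/storage/atlas/PoolFileCatalog.local.xml"
```

In practice something equivalent would be set by the site before Athena starts: manually in Ganga/Panda for now and, per the slide, from an AGIS-provided configuration file later.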

Slide 8: BACKUP (Elizabeth Gallas, 04-Sept-2009)

Slide 9: Features of Athena (Elizabeth Gallas, 04-Sept-2009)
- Prior to Release 15.4: Athena (RH) looks at the IP address the job is running at and uses dblookup.xml in the release to decide the order of database connections to try to get the Conditions data.
- Release 15.4: Athena looks for the Frontier environment variable; if it is found, it ignores dblookup.xml -> using another environment variable instead (see the sketch below).
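The selection logic described above, expressed as a small sketch (not Athena code): if the Frontier environment variable is set, the Frontier/Squid route is taken and dblookup.xml is ignored; otherwise the ordered connection list is read from dblookup.xml. The <logicalservice>/<service> layout assumed for dblookup.xml is a simplification of the real file.

```python
import os
import xml.etree.ElementTree as ET

def dblookup_order(dblookup_path, logical_name):
    """Pre-15.4 style: return the ordered connection strings that dblookup.xml
    lists for a logical service (layout assumed here; details may differ)."""
    root = ET.parse(dblookup_path).getroot()
    for logical in root.iter("logicalservice"):
        if logical.get("name") == logical_name:
            return [svc.get("name") for svc in logical.iter("service")]
    return []

def conditions_connections(dblookup_path, logical_name):
    """15.4 style, as described on the slide: a set Frontier environment
    variable wins; dblookup.xml is only consulted when it is absent."""
    frontier = os.environ.get("FRONTIER_SERVER")
    if frontier:
        return [frontier]  # take the Frontier/Squid route configured in the environment
    return dblookup_order(dblookup_path, logical_name)
```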