Opportunistic Storage on OSG

Concept: Well-managed provisioning of storage space on OSG sites owned by large communities, for use by other science communities in OSG.

Examples:
– Providers: CMS, ATLAS.
– Consumers: D0, CDF, …, DES, SBGrid.

Procedure

A provisioning site implements the model, makes space allocations when needed, and advertises the ‘token’ to the consumer VO.

The technological model leverages:
– Space reservation functions in the SRM v2.2 specification.
– Where applicable at a site, dCache filesystem internals and disk partitioning.

Space allocation at a storage site:
– Based on a formal understanding between provider and consumer.
– Allocation made with a well-defined size and lifetime, e.g., 1 TB for 1 year.
– Space is expected to expire after the agreed lifetime.
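A minimal Python sketch (not an OSG or SRM interface) of the allocation terms described above: a space "token" with a well-defined size and lifetime, and a check for the expected expiration. All names here (SpaceAllocation, the token string, the VO name) are illustrative assumptions.

```python
# Sketch of the provider/consumer allocation terms: size, lifetime, expiry.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional


@dataclass
class SpaceAllocation:
    token: str           # token identifier advertised to the consumer VO
    owner_vo: str        # consumer VO the space was allocated to
    size_bytes: int      # agreed size, e.g. 1 TB
    lifetime: timedelta  # agreed lifetime, e.g. 1 year
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def expires(self) -> datetime:
        return self.created + self.lifetime

    def is_expired(self, now: Optional[datetime] = None) -> bool:
        """Space is expected to expire after the agreed lifetime."""
        return (now or datetime.now(timezone.utc)) >= self.expires


# Example: the "1 TB for 1 year" allocation mentioned above (hypothetical token name).
alloc = SpaceAllocation(
    token="OSG_OPPORTUNISTIC_D0",
    owner_vo="dzero",
    size_bytes=10**12,
    lifetime=timedelta(days=365),
)
print(alloc.expires, alloc.is_expired())
```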

Technology Areas in Need of Improvement

Token lifetime flexibility: In current implementations, altering the lifetime of a token is not possible. This can lead to unintended expiration of tokens and loss of data in the expired space. This flexibility will be required for re-negotiation of space allocations.

Token access control consistency: If a token identifier is widely known, there is a potential for a VO to write into space allocated to another VO simply by using that VO’s token. Strict access control over space allocations will be required for wider usage of opportunistic storage.

Token advertisement: Within the limitations of token access control, dynamic mechanisms to advertise tokens using the Generic Information Provider (GIP) will be useful for wider deployment of opportunistic storage.

Pure opportunistic throttles: If a site does not physically partition separate subsets of disks, there is a risk that opportunistic load (disk I/O, CPU load, and network I/O) will interfere with the main provider’s own transfers. Overall, this is not a major problem in the short term. In the long term, however, new internal mechanisms for separating data-mover queues on a per-VO or per-token basis will be required on the disk systems.
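To make the access-control concern concrete, here is an illustrative Python sketch, not an existing SRM or dCache interface, of the kind of per-token check that "strict access control over space allocations" implies: a write is authorized only if the requesting VO owns the token, so merely knowing another VO's token identifier is not enough. The registry and function names are assumptions.

```python
# Hypothetical site-side registry mapping space tokens to the VO that owns them.
token_owner = {
    "OSG_OPPORTUNISTIC_D0": "dzero",
    "ATLAS_DATADISK": "atlas",
}


def authorize_write(requesting_vo: str, token: str) -> bool:
    """Allow a write only into space allocated to the requesting VO."""
    return token_owner.get(token) == requesting_vo


assert authorize_write("dzero", "OSG_OPPORTUNISTIC_D0")    # own allocation: allowed
assert not authorize_write("cdf", "OSG_OPPORTUNISTIC_D0")  # another VO's token: denied
```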

D0’s Needs

D0 typically submits 60,000 or more jobs per week at sites on OSG. The experiment’s workflows make multiple requests for input data in quick succession. In the past, due to the lack of storage local to the processing sites, D0 input/output data had to be transferred in real time over the wide area network. This led to high latencies, job timeouts, job failures, and excessively low overall efficiency.

D0’s Solution

D0 started using opportunistic storage in Summer 2008. D0 and OSG worked together to change D0’s workflows to adapt to SRM client-side usage.

Main providers:
– CMS: Tier-2s at UCSD, UNL, Purdue.
– ATLAS: Midwest Tier-2 at IU, Great Lakes Tier-2 at MSU.

Results:
– Ready availability of space for data movement and storage.
– Increase in D0’s workflow success rate.
– Increase in D0’s efficiency of OSG wall-hour utilization.
– Increase in D0 event production.
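A hedged sketch of the client-side pattern D0’s workflows were adapted to: staging data into opportunistically allocated space by passing the space token to an SRM copy client. The srmcp -space_token option shown here is the one commonly documented for the dCache SRM client, but the exact option name, the endpoint URL, and the helper function are assumptions to verify against your client version.

```python
# Sketch only: wrap an SRM copy so output lands in the allocated space token.
import subprocess


def stage_out(local_path: str, dest_surl: str, space_token: str) -> None:
    """Copy a local file (absolute path) to an SRM endpoint, writing into the given space token."""
    subprocess.run(
        [
            "srmcp",
            f"-space_token={space_token}",   # option name assumed from dCache srmcp docs
            f"file://{local_path}",
            dest_surl,                       # e.g. srm://<se.example.edu>:8443/srm/v2/server?SFN=/path (placeholder)
        ],
        check=True,
    )


# Example (placeholder endpoint and token):
# stage_out("/scratch/job123/output.root",
#           "srm://se.example.edu:8443/srm/v2/server?SFN=/opportunistic/d0/output.root",
#           "OSG_OPPORTUNISTIC_D0")
```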