
Storage Development at Fermilab. WLCG Collaboration Meeting, Data Management II BOF. Gene Oleynik, Head of Data Storage and Caching, Fermi National Accelerator Laboratory

Fermilab Mission in Storage
- Provide high-performance, state-of-the-art storage, data movement and caching systems for all Fermilab users and collaborations.
- Provide systems with common interfaces across HEP implementations.
- Contribute to US CMS, CMS and OSG storage and data management solutions.
- Participate in community-wide projects and initiatives; at present these include GridFTP, dCache, and SRM.
- Integrate storage and data movement with managed high-throughput network technologies.
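As an illustration of the "common interfaces" point, the sketch below shows how a client might copy a file out of a dCache storage element through its SRM interface. This is a minimal sketch, not part of the talk: it assumes the dCache SRM client tools (srmcp) are installed and a valid grid proxy exists, and the endpoint and paths are hypothetical.

```python
import subprocess

# Hypothetical SRM endpoint and file path, for illustration only.
source = "srm://dcache.example.org:8443/pnfs/example.org/data/file.dat"
destination = "file:///tmp/file.dat"

# Invoke the SRM copy client; check=True raises if the transfer fails.
subprocess.run(["srmcp", source, destination], check=True)
```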

Constraints
- Finite development resources at Fermilab imply that effort must fit within the lab mission.
- We expect that successful developments will emerge from the community and attract developer contributions and funding.
- The complexity and diversity of storage and data management systems in HEP mean that expertise and support for storage installations, configurations and software are provided by the deployment community.

Fermilab Development Efforts for the WLCG
- Contributions related to dCache.org and dCache software. I coordinate all Fermilab contributions to the dCache collaboration and support of dCache users. We develop and support the following dCache software components:
  - SRM
  - Resilient (replica) Manager
  - gPlazma
- GridFTP contributions (in the Globus and dCache code bases):
  - Checksumming (Andrew Baranovski, SciDAC CEDPS)
  - Extensions to the standard and code enhancements
- Activities funded by the Open Science Grid for storage and data movement support for OSG sites and stakeholders:
  - Packaging, deployment and support through the Virtual Data Toolkit
  - Support for WLCG US Tier-2s and Tier-3s (and collaboration with the US Tier-1s)
  - Extensions to the dCache and SRM code needed by OSG stakeholders
- To come: contributions to the GSSD.
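The checksumming work mentioned above lets a client verify an end-to-end transfer against the checksum recorded by the storage system. The following is a minimal sketch of that kind of client-side verification, assuming an Adler-32 checksum (the algorithm dCache commonly records) has been obtained out of band; the file path and expected value are hypothetical.

```python
import zlib

def adler32_of_file(path, chunk_size=1 << 20):
    """Compute the Adler-32 checksum of a file, reading it in 1 MiB chunks."""
    value = 1  # Adler-32 is defined to start from 1
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            value = zlib.adler32(chunk, value)
    return value & 0xFFFFFFFF

def transfer_is_intact(local_path, expected_hex):
    """Compare the local file's checksum with the value reported by the storage element."""
    return adler32_of_file(local_path) == int(expected_hex, 16)

if __name__ == "__main__":
    # Hypothetical file and checksum, for illustration only.
    if transfer_is_intact("/tmp/transferred.dat", "3e1f00a2"):
        print("checksum OK")
    else:
        print("checksum mismatch - retransfer the file")
```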

Funding for storage and data movement support for WLCG
- dCache contributions at Fermilab are funded by the Fermilab Computing Division, US CMS and OSG.
- The SciDAC CEDPS project includes extensions to data movement (GridFTP) and enhancements to the integration of GridFTP and dCache.
- The DOE-funded LambdaStation project provides enhanced features for integrating dCache with managed wide area networks.

Organization
- Gene Oleynik - Storage Section Head, overall leader of Fermilab dCache contributions
- Timur Perelmutov - dCache/SRM technical project lead at Fermilab
- Timur Perelmutov - SRM
- Dmitry Litvintsev (50%) - SRM
- Alex Kulyavtsev - Resilient Manager
- Vladimir Podstavkov - Resilient Manager
- We expect to hire another developer this coming year.
- Ted Hesselroth - OSG Storage Middleware

Support: WLCG support is expected to follow this model
We operate within a three-tiered support model:
- Level 1 - local site expertise: install hardware, configure and maintain systems, first level of troubleshooting
- Level 2 - more expertise: advise, assist and troubleshoot installations, configuration issues and transfer problems
- Level 3 - developer expertise: identify and fix bugs
Fermilab developers provide all three levels of support (with help from operational groups) for the Fermilab dCache systems at CDF, CMS, and our public dCache system (MINOS, MiniBooNE, etc.). Our expectations are:
- L1 expertise is available at and provided by the deployment sites
- L2 support is funded by the external stakeholder organizations (e.g. OSG storage activities staff, WLCG/EGEE-funded support staff)
- Developers provide L3 support in a steady-state situation (i.e., effort is limited)

FTEs in Support
- Fermilab core developers (3.5 FTEs total) spend 1.5 FTEs on local and global support.
- The US CMS Tier-1 Facility provides 2 FTEs of Level 1 and Level 2 support.
- US CMS Tier-2 sites provide local and community support at 1 FTE per site.
- OSG Level 2 and Extensions staff is currently 2.75 FTEs (reevaluated annually).

Current Issues and Action Items in Fermilab dCache Contributions and Delivery for the WLCG
- We recognize that the software schedule has slipped. For Fermilab the reasons are various and mainly related to the large installed dCache base for our running experiments. A major cause has been continuing instability in the CDF dCache 1.7 system since its upgrade in March '07, which is critical for CDF data analysis. The good news is that this is quickly winding down.
- We are adding to our dCache contributions for WLCG delivery and schedule:
  - Dmitry will be 100% on SRM starting next week (when he is back from vacation).
  - Vladimir has joined the resilient dCache effort.
  - US CMS is increasing its contributions through the hire of another developer in the next few months.
- We are working with OSG to re-align priorities and increase contributions through the GSSD.

Next Steps
- Fermilab is committed to its contributions to and support of dCache for the WLCG, US CMS and the LHC experiments.
- We continue to believe that a key to a successful ramp-up is for stakeholders and development managers to work together to set expectations and prioritize work.