Presentation transcript:

OSG Today
- Contributing >30% of the throughput to ATLAS and CMS in the Worldwide LHC Computing Grid.
- Reliant on production and advanced networking from ESnet, LHCNet and Internet2.
- Virtual Data Toolkit: common software developed jointly by computer science and application groups, used by OSG and others.

2 OSG Job Throughput
- 29 VOs
- ~75 sites (19 SE & 82 CE)
- ~400,000 wall-clock hours per day (peaks over 500,000)
- 25-30% opportunistic use
- ~15% is non-physics use
- >20,000 cores used per day
- >43,000 cores accessible
US-CMS, US-ATLAS and OSG are ready for LHC startup. (A rough consistency check of these figures is sketched below.)
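To see how these numbers hang together, here is a minimal back-of-the-envelope check. The arithmetic is mine, under the assumption that "wall-clock hours per day" means core wall-clock hours summed over all running jobs:

```python
# Rough consistency check of the job-throughput figures quoted on this slide.
# Assumption (mine): "wall-clock hours per day" = core-hours summed over running jobs.

wall_hours_per_day = 400_000
hours_per_day = 24

avg_busy_cores = wall_hours_per_day / hours_per_day
print(f"average busy cores ~ {avg_busy_cores:,.0f}")   # ~16,667

# Compare with the quoted core counts: the 24-hour average occupancy sits below
# the >20,000 distinct cores used per day and well below the >43,000 accessible.
for total_cores in (20_000, 43_000):
    print(f"occupancy vs {total_cores:,} cores: {avg_busy_cores / total_cores:.0%}")
# ~83% of 20,000 cores, ~39% of 43,000 cores
```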

3 OSG Data Throughput
Petabytes per month distributed from CERN to the Tier-1s, between Tier-1s, and to/from the Tier-2s. Transfer bursts of >10 Gb/s. Relies on ESnet, LHCNet and Internet2 in the US.

Are These Estimates Realistic? Yes.
Slide courtesy of ESnet: FNAL outbound CMS traffic for 4 months, to Sept. 1, 2007.
Max = 8.9 Gb/s (1064 MBy/s of data), Average = 4.1 Gb/s (493 MBy/s of data).
[Chart: network traffic in gigabits/sec and data traffic in megabytes/sec, broken down by destination.]
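The Gb/s and MBy/s figures above are unit conversions of the same measurement; here is a minimal sketch of that arithmetic (assuming decimal gigabits and binary megabytes, with the small residual differences due to rounding in the quoted rates):

```python
# Unit-conversion check of the quoted FNAL traffic figures.
# Assumptions (mine): "Gb/s" = 10^9 bits/s, "MBy/s" = 2^20 bytes/s,
# and the quoted Gb/s values are rounded to one decimal place.

def gbps_to_mbytes_per_s(gbps: float) -> float:
    """Convert gigabits/second of network traffic to mebibytes/second of data."""
    bytes_per_s = gbps * 1e9 / 8      # 8 bits per byte
    return bytes_per_s / 2**20        # 2^20 bytes per MiB

for label, rate in [("Max", 8.9), ("Average", 4.1)]:
    print(f"{label}: {rate} Gb/s ~ {gbps_to_mbytes_per_s(rate):.0f} MBy/s")
# Max: 8.9 Gb/s ~ 1061 MBy/s      (slide quotes 1064)
# Average: 4.1 Gb/s ~ 489 MBy/s   (slide quotes 493)
```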

Known LHC Tier 2+3 Sites Drive Many of the ESnet Peering Point Location and Design Decisions
Slide courtesy of Internet2.

Background slides

7 OSG Platform for the US-LHC Collaborations

Software/Middleware
a) Support the movement, storage and management of the petabyte-scale LHC data sets.
b) Support job workflow, scheduling and execution at the Tier-1, Tier-2 and Tier-3 sites, with transparent access across the European and US grids.

Services
a) Information, accounting and monitoring services publishing to the WLCG.
b) Reliability and availability monitoring, used by the experiments to determine site availability and by the WLCG to check performance against the MOU (illustrated in the sketch after this list).

Support
a) Security monitoring, incident response, notification and mitigation.
b) Operational support, including centralized ticket handling with automated bi-directional communication between the systems in Europe and the USA.
c) Collaboration with the ESnet and Internet2 network projects for the integration and monitoring of the underlying network fabric.
d) Site coordination and common support for Tier-3 sites (>8 now on OSG).
e) End-to-end support for simulation, production, analysis and focused data challenges, enabling US-LHC readiness for real data taking.
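For readers unfamiliar with availability/reliability reporting, the toy sketch below shows the kind of computation it involves. It is an illustration, not the actual RSV/SAM implementation: the data layout and function names are mine, and it assumes only the usual WLCG convention that availability counts all time while reliability excludes scheduled downtime from the denominator.

```python
# Illustrative only: a toy availability/reliability calculation in the spirit of
# WLCG-style site monitoring. The probe-result layout and names are hypothetical.

from dataclasses import dataclass

@dataclass
class Interval:
    hours: float
    status: str            # "OK", "DOWN", or "SCHEDULED_DOWNTIME"

def availability_and_reliability(intervals: list[Interval]) -> tuple[float, float]:
    total = sum(i.hours for i in intervals)
    ok = sum(i.hours for i in intervals if i.status == "OK")
    scheduled = sum(i.hours for i in intervals if i.status == "SCHEDULED_DOWNTIME")
    availability = ok / total
    # Reliability excludes scheduled downtime from the denominator.
    reliability = ok / (total - scheduled) if total > scheduled else 1.0
    return availability, reliability

# Example: one month of probe results for a single site (hypothetical numbers).
month = [Interval(650, "OK"), Interval(40, "DOWN"), Interval(30, "SCHEDULED_DOWNTIME")]
a, r = availability_and_reliability(month)
print(f"availability={a:.1%} reliability={r:.1%}")   # availability=90.3% reliability=94.2%
```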

8 OSG Reporting to WLCG on behalf of US-LHC (Example)
US-LHC Tier-2 activity for September 2008.
It has been a long path to success, and some fragility remains in the end-to-end process.

9 US-ATLAS Production on OSG

10 US-CMS Production on OSG

11 US-LHC Benefits from OSG

Common to US-ATLAS and US-CMS
1. Serves as the integration and delivery point for core middleware components, including compute and storage elements (VDT).
2. Cyber-security operations support within OSG and across grids (e.g. WLCG) in case of security incidents.
3. Cyber-security infrastructure, including the site-level authorization service and an operational service for updating certificates and revocation lists.
4. Service availability monitoring of critical site infrastructure services, i.e. compute and storage elements (RSV).
5. Service availability monitoring and forwarding of results to the WLCG.
6. Site-level accounting services and forwarding of accumulated results to the WLCG (a toy aggregation is sketched after this list).
7. Consolidation of grid client utilities, including incorporation of the LCG client suite and resolution of Globus library inconsistencies.
8. dCache packaging through the VDT and support through OSG-Storage.
9. Integration testbed for new releases of the OSG software and pre-production deployment testing.
10. Continuous support of the distributed computing facility and production services through the weekly OSG facility phone meetings.
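Items 4-6 concern monitoring and accounting whose results are forwarded to the WLCG; OSG's accounting system for this is Gratia. Purely as an illustration of the kind of per-VO aggregation such a service performs, here is a hypothetical sketch: the record fields, site names and output format are made up and are not the Gratia schema or the actual OSG-to-WLCG exchange format.

```python
# Illustrative aggregation of per-VO wall-clock usage from job accounting records.
# Record layout, site names and report format are hypothetical.

from collections import defaultdict

# (vo, site, wall_hours) tuples standing in for individual job usage records.
records = [
    ("atlas", "Tier2-A", 1200.0),
    ("cms",   "Tier2-B",  950.0),
    ("atlas", "Tier2-A",  300.0),
    ("ligo",  "Tier2-B",   80.0),   # opportunistic, non-physics use is accounted too
]

summary: dict[tuple[str, str], float] = defaultdict(float)
for vo, site, hours in records:
    summary[(site, vo)] += hours

for (site, vo), hours in sorted(summary.items()):
    print(f"site={site:8s} vo={vo:6s} wall_hours={hours:8.1f}")
```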

12 US-LHC Benefits from OSG (continued)

Specific to US-ATLAS
1. LCG File Catalog (LFC) server and client packaging, needed in support of the ATLAS global Distributed Data Management system (DDM).
2. BeStMan and xrootd: SRM and file-system support for Tier-2 and Tier-3 facilities.
3. Support for integration and extension of security services in the PanDA workload management system and the GUMS grid identity mapping service, for compliance with OSG security policies and requirements (a toy identity-mapping example follows after this list).

Specific to US-CMS
1. BeStMan: SRM support for Tier-3 facilities.
2. lcg-utils tools for data management.
3. Scalability testing of OSG services, including BDII, CE and SE, and work with developers to improve the underlying middleware.
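To make the identity-mapping item above concrete, the following toy sketch shows the kind of decision a service like GUMS takes: mapping a certificate DN plus VO and role to a local account. The mapping table, account names and function are invented for illustration and are not GUMS itself or its configuration format.

```python
# Toy illustration of grid identity mapping (certificate DN + VO role -> local
# Unix account). The mapping table and account names below are made up.

MAPPINGS = [
    # (vo, role, local_account) -- role None acts as the VO's default mapping
    ("atlas", "production", "usatlas1"),
    ("atlas", None,         "usatlas3"),
    ("cms",   "production", "cmsprod"),
    ("cms",   None,         "uscms01"),
]

def map_identity(dn: str, vo: str, role: str | None) -> str:
    """Return the local account a grid credential should be mapped to."""
    for m_vo, m_role, account in MAPPINGS:
        if m_vo == vo and (m_role is None or m_role == role):
            return account
    raise PermissionError(f"No mapping for {dn} (vo={vo}, role={role})")

print(map_identity("/DC=org/DC=doegrids/OU=People/CN=Example User", "atlas", "production"))
# -> usatlas1
```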