
1 An update on the Open Science Grid for IHEPCCC Ruth Pordes, Fermilab

2 OSG -- a reminder... The OSG Distributed Facility is a US grid computing infrastructure that supports scientific computing via an open collaboration of science researchers, software developers, and providers of computing, storage and networking. OSG provides access to existing computing and storage resources contributed by members of the OSG Consortium. The Consortium's policy is to be open to participation by all researchers. The OSG Project is co-funded by DOE and NSF for 5 years at $6M/year starting in Sept '06; it currently includes deliverables for US LHC, LIGO and STAR, use by CDF and D0, and potentially deliverables for other experiments in the future. The OSG Project's responsibility is to operate, protect, extend and support the Distributed Facility for the Consortium.

3 OSG is part of the WLCG. The US LHC experiments rely on OSG as their distributed facility in the US: resources accessible through the OSG infrastructure deliver accountable cycles for them. OSG interoperates with many other infrastructures in managerial, operational and technical activities, and cooperates specifically with EGEE to ensure an effective and transparent distributed system for the experiments. OSG supplies the Virtual Data Toolkit (VDT) to OSG, EGEE and WLCG: a packaged, integrated and distributable set of middleware including Condor, Globus, MyProxy and other components needed by the scientific community.

4 What software is in the VDT?
Security: VOMS (VO membership), GUMS (local authorization), mkgridmap (local authorization), MyProxy (proxy management), GSI SSH, CA CRL updater.
Monitoring: MonALISA.
Accounting: OSG Gratia.
Support: Apache, Tomcat, MySQL (with MyODBC), non-standard Perl modules, Wget, Squid, Logrotate, configuration scripts.
Job Management: Condor (including Condor-G and Condor-C), Globus GRAM.
Data Management: GridFTP (data transfer), RLS (replica location), DRM (storage management), Globus RFT.
Information Services: Globus MDS, GLUE schema and providers, gLite CEMon.
Client Tools: Virtual Data System, SRM clients (v1 and v2), UberFTP (GridFTP client).
Developer Tools: PyGlobus, PyGridWare.
Testing: NMI Build & Test, VDT tests.
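To make the job-management layer concrete, here is a minimal sketch (not from the talk) of submitting a job through the VDT's Condor-G to a Globus GRAM gatekeeper. The gatekeeper contact string and file names are hypothetical; it assumes a VDT client installation with condor_submit on the PATH and a valid grid proxy.

```python
#!/usr/bin/env python
"""Minimal sketch: submit a job through Condor-G to a Globus GRAM
gatekeeper, as shipped in the VDT.  The gatekeeper hostname below is
hypothetical; a real site's contact string would come from the
information service.  A valid grid proxy is assumed."""
import subprocess
import tempfile

# Hypothetical OSG compute element; 'gt2' selects the pre-WS GRAM protocol.
GATEKEEPER = "osg-ce.example.edu/jobmanager-condor"

SUBMIT_DESCRIPTION = """\
universe      = grid
grid_resource = gt2 {gatekeeper}
executable    = /bin/hostname
output        = job.out
error         = job.err
log           = job.log
queue
""".format(gatekeeper=GATEKEEPER)

def submit():
    # condor_submit reads the description file; Condor-G then forwards
    # the job to the remote gatekeeper and tracks it in the local queue.
    with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
        f.write(SUBMIT_DESCRIPTION)
        path = f.name
    subprocess.check_call(["condor_submit", path])

if __name__ == "__main__":
    submit()
```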

5 Current OSG deployment: 96 resources across the production and integration infrastructures; 27 Virtual Organizations, including operations and monitoring groups; >15,000 CPUs; ~6 PB of tape (MSS) and ~4 PB of disk.

6 OSG core competencies: Integration (software, systems, Virtual Organizations); Operations (common support and grid services); Inter-operation (bridging administrative and technical boundaries); with validation, verification and diagnosis at each step, and with integrated security operations and management.

7 OSG support for non-physics communities. Frank told you about our non-physics community activities last time. Since then: Alan Blatecky's group at RENCI is porting the award-winning Bioportal to OSG. More than 100 nanotechnology jobs, each running for days, are being executed on LIGO, ATLAS and CMS sites. We are discussing partnership with the Northwest Indiana Computing Grid (NWICG), which brings me to Gaussian: when we start talking to computational chemistry we quickly run into licensing issues. Yes, we say licensing is the responsibility of the project/VO, but there are 50 sites on OSG. The P-GRADE portal has been interfaced to a version of the CHARMM molecular dynamics simulation package; some versions of CHARMM also have licensing issues. Work on campus grids is enabling partnerships with Crimson Grid, NWICG, the New York State Grid (NYSG) and GPN (the Nebraska education/training grid). (Note: partners do not have to contribute resources; collaboration can equally be in software, procedures, education, training, security, etc.)

8 OSG is in a flat world. OSG is one of many grids: VOs interface to more than one grid, and computing and storage resources are accessible to more than one grid. Any piece of work might be done using multiple grids. For example: a workflow is submitted using the CMS analysis grid interface; dispatched using the EGEE Resource Broker; its data is transferred from an OSG site; the job is scheduled through the FermiGrid campus grid gateway; and it executes on the local CDF grid site.

9 OSG Interoperation - Security. OSG security is based on managerial, operational and technical controls that manage risk. OSG regards site and VO security responsibilities as equivalent. VOs cross grid boundaries, so coordination is essential. The OSG and EGEE security groups are joint. This does not mean all policies and documents are the same, but that we work together to be consistent and in common wherever sensible. We are also working together on security middleware extensions (e.g. to allow a "pull" architecture for job scheduling).
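As an illustration of the VO-centred security model (again a sketch, not from the talk): the VDT's VOMS client tools obtain a proxy carrying VO membership attributes, which sites then authorize against (e.g. via GUMS) rather than against individual user identities. The VO name below is hypothetical.

```python
#!/usr/bin/env python
"""Minimal sketch of the VO-based authentication step used on both
OSG and EGEE.  Assumes the VDT VOMS clients (voms-proxy-init,
voms-proxy-info) are on the PATH; the VO name 'myvo' is illustrative."""
import subprocess

def create_voms_proxy(vo="myvo", hours=12):
    # Ask the VO's VOMS server for an attribute certificate and embed
    # it in a short-lived proxy certificate.
    subprocess.check_call([
        "voms-proxy-init",
        "-voms", vo,
        "-valid", "%d:00" % hours,
    ])
    # Show the proxy's remaining lifetime and the VOMS attributes it carries.
    subprocess.check_call(["voms-proxy-info", "-all"])

if __name__ == "__main__":
    create_voms_proxy()
```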

10 OSG Interoperation - Software. From now on, the software release process includes tests for interoperability. VO end-to-end systems are part of the OSG "concern": VOs make their priorities clear for the common middleware; VOs contribute to the testing of new OSG releases; VOs consider commonality and are prepared to contribute middleware that they develop or adopt; and VOs often use (and therefore harden) new components in the VO environment before they become part of the common middleware. The TeraGrid CTSS and OSG VDT software stacks are being aligned, e.g. to carry the same Globus patches.

11 OSG Interoperation - Jobs. OSG publishes information to the WLCG information service through a web interface. Laurence Field combines this information with that from EGEE to publish it to the WLCG. In practice we use the same infrastructure as EGEE, with our own information gatherers. [Figure: how CMS dispatches jobs (courtesy O. Gutsche).]
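For flavor, a hedged sketch of how one might query a GLUE-schema information service of the kind both OSG and EGEE publish into. The BDII endpoint below is hypothetical (2170 is the conventional BDII port), and the sketch assumes the OpenLDAP ldapsearch client is installed.

```python
#!/usr/bin/env python
"""Minimal sketch: list compute elements from a GLUE-schema
information service (BDII).  The endpoint is hypothetical; the
attribute names come from the GLUE 1.x schema mentioned on the
VDT slide."""
import subprocess

BDII = "ldap://is.example.org:2170"   # hypothetical endpoint

def list_compute_elements():
    # Ask for all compute elements and their free-slot counts.
    out = subprocess.check_output([
        "ldapsearch", "-x", "-LLL",
        "-H", BDII,
        "-b", "o=grid",
        "(objectClass=GlueCE)",
        "GlueCEUniqueID", "GlueCEStateFreeCPUs",
    ])
    print(out.decode())

if __name__ == "__main__":
    list_compute_elements()
```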

12 OSG Interoperation - Data. OSG middleware supports the GridFTP and SRM storage interfaces; catalogs, replication, etc. are in the scope of the VOs. There are three implementations of SRM on OSG: SRM/dCache, Jasmine (JLab), and SRM/DRM (LBNL); a fourth, SRM/L-Store, is in test. Storage management is a major focus of the next year.
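A minimal sketch of the two data interfaces named above, assuming the VDT clients (globus-url-copy and the dCache srmcp) are installed and a valid proxy exists; all hostnames, ports and paths are hypothetical.

```python
#!/usr/bin/env python
"""Minimal sketch of the two data interfaces on OSG: plain GridFTP
via globus-url-copy, and SRM via srmcp.  Hostnames and paths are
hypothetical; a valid grid proxy is assumed."""
import subprocess

def gridftp_fetch():
    # Direct GridFTP copy of a remote file to local disk.
    subprocess.check_call([
        "globus-url-copy",
        "gsiftp://se.example.edu/data/input.root",
        "file:///tmp/input.root",
    ])

def srm_fetch():
    # The same file through the SRM interface; the server negotiates
    # the underlying transfer protocol (typically GridFTP).
    subprocess.check_call([
        "srmcp", "-2",   # '-2' selects the SRM v2 protocol
        "srm://se.example.edu:8443/srm/managerv2?SFN=/data/input.root",
        "file:////tmp/input.root",
    ])

if __name__ == "__main__":
    gridftp_fetch()
```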

13 OSG Interoperation - Operations. Ticket exchange among the distributed support centers within OSG is automated, and we are working on automated ticket exchange with EGEE. Manual procedures of course work first!

14 OSG Interoperation - Education. We will continue the successful iVDGL grid summer workshop (a week of hands-on training). We will work more closely with the UK e-Science programme on the International Summer School on Grid Computing (ISSGC). OSG also works with TeraGrid on education and training activities.

15 Interoperation Concerns. How do we communicate and test the interoperability of changes (interfaces and capabilities) before they reach production? How do we communicate about new software developments in time to take common approaches and avoid duplication and divergence? How do we manage ourselves to plan ahead rather than doing "just in time" development? And, for OSG, how do we prioritize support for our non-WLCG stakeholders during LHC data taking?

16 GIN - Grid Interoperability Now. OSG is part of the nine-grid partnership for Grid Interoperability Now (GIN). The tests have evolved since the "WorldGrid" demonstrations between DataTAG/EDG and Grid3 a few years ago. OSG-specific contributions: a storage-interface tester run across 6 implementations; sites for executing jobs from the application tests (e.g. Ninf-G, CHARMM); and bilateral interoperation with EGEE for information services. We are looking to work on service discovery with EGEE and NDGF next.

17 Summary. The Open Science Grid is doing a lot of work, and has a lot more to do!