Open Science Grid
Frank Würthwein, UCSD

2/13/2006 GGF 2 "Airplane view" of the OSG
* High Throughput Computing
  - Opportunistic scavenging on cheap hardware.
  - Owner-controlled policies.
* "Linux rules": mostly RHEL3 on Intel/AMD.
* Heterogeneous middleware stack
  - Minimal site requirements & optional services.
  - The production grid allows coexistence of multiple OSG releases.
* "Open consortium"
  - Stakeholder projects & an OSG project to provide cohesion and sustainability.
* Grid of sites
  - Compute & storage (mostly) on private Gb/s LANs.
  - Some sites with (multiple) 10 Gb/s WAN uplinks.

2/13/2006 GGF 3 OSG by numbers
* 53 Compute Elements
* 9 Storage Elements (8 SRM/dCache & 1 SRM/DRM)
* 23 active Virtual Organizations
  - 4 VOs with >750 jobs max.
  - 4 VOs with … jobs max.

2/13/2006 GGF 4 Official Opening of OSG July 22nd 2005

2/13/2006 GGF 5 [Chart: running jobs by community (HEP, Bio/Eng/Med, non-HEP physics), with reference levels at 100, 600, and 1500 jobs]

2/13/2006 GGF 6 OSG Organization

2/13/2006 GGF 7 OSG organization (explained)
* OSG Consortium
  - Stakeholder organization with representative governance by the OSG Council.
* OSG project
  - (To be) funded project to provide cohesion & sustainability.
  - OSG Facility: "keep the OSG running"; "engagement of new communities".
  - OSG Applications Group: "keep existing user communities happy"; work with middleware groups on extensions of the software stack.
  - Education & Outreach.

2/13/2006 GGF 8 OSG Management
* Executive Director: Ruth Pordes
* Facility Coordinator: Miron Livny
* Application Coordinators: Torre Wenaus & Frank Würthwein (fkw)
* Resource Managers: P. Avery & A. Lazzarini
* Education Coordinator: Mike Wilde
* Council Chair: Bill Kramer

2/13/2006 GGF 9 The Grid "Scalability Challenge"
* Minimize the entry threshold for resource owners
  - Minimize the software stack.
  - Minimize the support load.
* Minimize the entry threshold for users
  - Feature-rich software stack.
  - Excellent user support.
* Resolve the contradiction via a "thick" Virtual Organization layer of services between users and the grid.

2/13/2006 GGF 10 Me -- My friends -- The grid
* Me: thin user layer.
* My friends: VO services, VO infrastructure, VO admins.
* The Grid: anonymous sites & admins, common to all.
* "Me" & "my friends" are domain-science specific.

2/13/2006 GGF 12 User Management
* User registers with a VO and is added to the VO's VOMS.
  - The VO is responsible for registering with the OSG GOC.
  - The VO is responsible for having its users sign the AUP.
  - The VO is responsible for VOMS operations.
  - Some VOs share one VOMS for operations on both EGEE & OSG.
  - A default OSG VO exists for new communities.
* Sites decide which VOs to support (striving for default admit).
  - The site populates GUMS from the VOMSes of all supported VOs.
  - The site chooses a uid policy for each VO & role: dynamic vs. static vs. group accounts.
* The user uses whatever services the VO provides in support of its users.
  - The VO may hide the grid behind a portal.
* Any and all user support is the responsibility of the VO:
  - helping its users;
  - responding to complaints from grid sites about its users.
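On the user side, most of this machinery boils down to obtaining a VOMS proxy before contacting any site; the site then maps the attributes carried by that proxy to a local account through GUMS. The following Python sketch is a minimal illustration, assuming the standard voms-proxy-init and voms-proxy-info command-line clients are installed and a grid certificate is already in place; the VO name "myvo" and the Role attribute are hypothetical placeholders, not real OSG VOs.

```python
import subprocess

# Request a proxy carrying a VOMS attribute certificate for a hypothetical VO
# and role; sites use these attributes (via GUMS) to pick the local account.
# "myvo" and the Role value below are placeholders, not real OSG VO names.
subprocess.run(
    ["voms-proxy-init", "-voms", "myvo:/myvo/Role=production", "-valid", "12:00"],
    check=True,
)

# Inspect the resulting proxy: subject, VO, FQANs, remaining lifetime.
info = subprocess.run(
    ["voms-proxy-info", "-all"],
    check=True, capture_output=True, text=True,
)
print(info.stdout)
```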

2/13/2006 GGF 14 Compute & Storage Elements
* Compute Element
  - GRAM interface to the local batch system.
* Storage Element
  - SRM interface to a distributed storage system.
  - Continued legacy support: gsiftp to a shared filesystem.
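To make the Compute Element side concrete, the sketch below shows one plausible client workflow of that era: submitting through Condor-G's grid universe to a pre-web-services (GT2) GRAM gatekeeper, which in turn hands the job to the site's local batch system. It assumes condor_submit is available on the submit host and a valid grid proxy already exists; the gatekeeper contact string is a hypothetical placeholder.

```python
import pathlib
import subprocess

# Hypothetical GRAM contact string: gatekeeper host plus the jobmanager
# that fronts the site's local batch system.
GATEKEEPER = "osg-ce.example.edu/jobmanager-condor"

submit = f"""\
universe      = grid
grid_resource = gt2 {GATEKEEPER}
executable    = /bin/hostname
output        = hostname.out
error         = hostname.err
log           = hostname.log
queue
"""

# Write the Condor-G submit description and hand the job to Condor-G,
# which forwards it to the site's GRAM service and on to the batch queue.
pathlib.Path("hostname.sub").write_text(submit)
subprocess.run(["condor_submit", "hostname.sub"], check=True)
```

In practice most VOs wrapped this kind of submission inside their own job-management services (the "my friends" layer of slide 10) rather than exposing raw GRAM contact strings to end users.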

2/13/2006 GGF 15 Disk areas in more detail
* Shared filesystem as applications area.
  - Read-only from the compute cluster.
  - Role-based installation via GRAM.
* Batch-slot-specific local work space.
  - No persistency beyond the batch slot lease.
  - Not shared across batch slots.
  - Read & write access (of course).
* SRM-controlled data area.
  - Job-related stage-in/out.
  - "Persistent" data store beyond job boundaries.
  - SRM v1.1 today.
  - SRM v2 expected in the next major release (summer 2006).
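For the SRM-controlled data area, stage-in and stage-out are plain SRM copies. The sketch below is a minimal example assuming the dCache srmcp client (an SRM v1.1 copy tool of that period) is installed and a valid proxy exists; the storage element host and pnfs path are hypothetical placeholders.

```python
import subprocess

# Hypothetical SRM v1.1 endpoint and path on a site's storage element.
SRM_DEST = "srm://se.example.edu:8443/pnfs/example.edu/data/myvo/output.root"

# Stage a local output file out to the SRM-controlled data area.
subprocess.run(
    ["srmcp", "file:////tmp/output.root", SRM_DEST],
    check=True,
)

# Stage the same file back in, e.g. as input for a later job.
subprocess.run(
    ["srmcp", SRM_DEST, "file:////tmp/output_copy.root"],
    check=True,
)
```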

2/13/2006 GGF 16 Middleware lifecycle
* Domain science requirements.
* Joint projects between the OSG Applications Group & middleware developers to develop & test on "parochial testbeds".
* Integrate into the VDT and deploy on the OSG integration testbed (OSG-ITB).
* Inclusion into an OSG release & deployment on (part of) the production grid.
* EGEE et al.

2/13/2006 GGF 17 Challenges Today
* Metrics & policies
  - How many resources are available?
  - Which of these are available to me?
* Reliability
  - Understanding of failures.
  - Recording of failure rates.
  - Understanding the relationship between failure and use.

2/13/2006 GGF 18 Release Schedule

  Release    Planned        Actual
  OSG 0.2    Spring 2005    July 2005
  OSG 0.4.0  December 2005  January 2006
  OSG 0.4.1  April 2006
  OSG 0.6.0  July 2006

Dates here mean "ready for deployment". Actual deployment schedules are chosen by each site, resulting in a heterogeneous grid at all times.

2/13/2006 GGF 19 Summary
* The OSG facility is under steady use
  - ~20 VOs, ~ jobs at all times.
  - Mostly HEP, but large Bio/Eng/Med use occasionally.
  - Moderate other physics (astro/nuclear).
* OSG project
  - 5-year proposal to DOE & NSF.
  - Facility & extensions & E&O.
* Aggressive release schedule for 2006
  - January 2006: 0.4.0
  - April 2006: 0.4.1
  - July 2006: 0.6.0