Open Science Grid (OSG) Introduction for the Ohio Supercomputer Center


Open Science Grid (OSG) Introduction for the Ohio Supercomputer Center
February 23, 2012
Gabriele Garzoglio, Marco Mambelli, Marko Slyz

Open Science Grid Ecosystem
The OSG ecosystem spans the Consortium, Infrastructures, the Project, and Satellites, with services for Consulting, Production, and Software.
Mission: The Open Science Grid aims to promote discovery and collaboration in data-intensive research by providing a computing facility and services that integrate distributed, reliable and shared resources to support computation at all scales.

Introduction to OSG
- More than 30 research communities
- More than 100 sites (https://oim.grid.iu.edu/oim/home)
- More than 70,000 cores accessible
User communities (aka VOs) and Campus Infrastructures bring: 1) users, and/or 2) resources.
Resources accessible through the OSG are contributed by the community, and their autonomy is retained. Resources can be distributed locally as a campus infrastructure.

Basic Architecture
- Virtual Organizations
- Common Services & Software
- Resources: Sites, Campuses, Clouds

OSG Architecture
The architecture is driven by the principles of Distributed High Throughput Computing (DHTC) captured in our Blueprint.

Some Contacts
OSG staff and responsibilities:
- Chander Sehgal (Fermilab): User Support Lead and Project Manager
- Gabriele Garzoglio (Fermilab): User Support, new communities integration
- Marko Slyz: User Support
- Marco Mambelli (U of Chicago): Site Coordination
- Rob Quick (Indiana): Operations Lead
- Dan Fraser (ANL/U of Chicago): Production and Campus Infrastructure Lead
- Ruth Pordes (Fermilab): Executive Director
- Miron Livny (U Wisconsin): PI and Technical Director
Other resources: the introductory and general OSG user documentation, the central ticketing system at the Operations Center, and mailing lists for community discussions.

Software: the Virtual Data Toolkit
OSG software is packaged, tested, and distributed as RPMs through the Virtual Data Toolkit (VDT).
- The VDT has more than 40 components.
- It also provides installation, configuration, and administration tools for resources, VOs, and Campus Infrastructures.
- Alain Roy at the University of Wisconsin-Madison is the Software Team Lead.
- Software is added in response to stakeholder needs.
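As a rough illustration of the RPM-based distribution, the sketch below scripts a worker-node-client installation with yum. This is a minimal sketch, not taken from the slides: the package name osg-wn-client comes from the OSG 3.0 repositories, but the repository setup and any local details are assumptions that should be checked against the OSG release documentation.

```python
# Minimal sketch: install the OSG Worker Node Client from the OSG 3.0 yum
# repositories on a Scientific Linux host. Assumes the OSG yum repository has
# already been configured per the OSG release documentation; run as root.
import subprocess

def install_osg_wn_client():
    # "osg-wn-client" is the OSG 3.0 meta-package for the Worker Node Client;
    # yum resolves and installs the required VDT components as dependencies.
    subprocess.run(["yum", "install", "-y", "osg-wn-client"], check=True)

if __name__ == "__main__":
    install_osg_wn_client()
```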

Virtual Organizations (VOs)
The OSG environment is VO based.
- VOs can be science communities (e.g. Belle, CMS) or multi-disciplinary and campus based (e.g. U-Nebraska (HCC), U-Wisconsin (GLOW)).
- OSG assumes VOs will register with, support, and use multiple grids as needed (e.g. EGI, UK National Grid Service (NGS)).
- Users can be members of multiple VOs.
- Site resources are owned by one or more VOs.
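Because a user may belong to several VOs, each grid action is performed with a proxy carrying the attributes of one chosen VO. The sketch below is a non-authoritative illustration that shells out to the standard voms-proxy-init client; the VO name "osg" and the validity period are placeholders.

```python
# Minimal sketch: create an X.509 proxy with VOMS attributes for one of the
# user's VOs. Assumes the VOMS clients and the user's grid certificate are
# already installed and configured; "osg" is a placeholder VO name.
import subprocess

def make_voms_proxy(vo="osg", hours=12):
    # voms-proxy-init contacts the VO's VOMS server and embeds the VO,
    # group, and role attributes into the generated proxy certificate.
    subprocess.run(
        ["voms-proxy-init", "-voms", vo, "-valid", f"{hours}:00"],
        check=True,
    )

if __name__ == "__main__":
    make_voms_proxy()
```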

Using the OSG
The prevalent OSG job submission mechanism is glide-ins: pilot jobs are submitted to GRAM gatekeepers, and the pilots then create a Condor overlay environment across the sites and manage the user job queues and execution.
Sites also accept many other job submission mechanisms: gLite WMS*, Globus, PanDA.
Security is based on X.509 PKI certificates, using the gLite VO Management System (VOMS) and extended attributes (including groups and roles).
* European Grid Infrastructure (EGI) Lightweight Middleware for Grid Computing
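To show what a user-level submission can look like on top of this infrastructure, here is a hedged sketch that writes a vanilla-universe HTCondor submit description and hands it to condor_submit. It assumes a glideinWMS frontend and schedd are already configured for the user's VO; the executable and file names are placeholders.

```python
# Minimal sketch: submit one job to a glideinWMS-backed HTCondor pool.
# Assumes an HTCondor schedd configured by the VO frontend runs on this host
# and that a valid VOMS proxy exists. File names are placeholders.
import subprocess
import textwrap

SUBMIT_FILE = "myjob.submit"

submit_description = textwrap.dedent("""\
    universe   = vanilla
    executable = myjob.sh
    arguments  = input.dat
    output     = myjob.out
    error      = myjob.err
    log        = myjob.log
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    queue 1
""")

with open(SUBMIT_FILE, "w") as f:
    f.write(submit_description)

# condor_submit queues the job; glideinWMS pilots running at OSG sites will
# pick it up once matching glidein slots join the Condor overlay pool.
subprocess.run(["condor_submit", SUBMIT_FILE], check=True)
```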

OSG Site
An OSG Site (one or more resources) provides one or more of the following:
- Access to local computational resources using a batch queue
- Local storage of and access to data using a distributed file system
- Remote access to data through the Storage Resource Management interface (SRM)
- Access for users to external computing resources on the Grid
- The ability to transfer large datasets to and from remote sites
An OSG site offers access to these resources to grid users.
- The policies and priority of use are determined locally by the site owners.
- Typically the owner VO has the highest priority, other “like” VOs have middle priority, and the rest of the community has lower, “opportunistic” access.
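As one concrete way to exercise the "transfer large datasets" capability, the sketch below wraps globus-url-copy to move a file to a site's GridFTP door. This is a hedged example, not from the slides; the host gridftp.example.edu and the paths are hypothetical, and a valid grid proxy is assumed.

```python
# Minimal sketch: copy a local file to a site's GridFTP server.
# gridftp.example.edu and the paths are hypothetical placeholders; a valid
# (VOMS) proxy is assumed to be in place.
import subprocess

def stage_out(local_path, remote_host, remote_path):
    src = f"file://{local_path}"                  # local file URL
    dst = f"gsiftp://{remote_host}{remote_path}"  # GridFTP destination URL
    # -p 4 opens 4 parallel TCP streams, which helps on wide-area links.
    subprocess.run(["globus-url-copy", "-p", "4", src, dst], check=True)

if __name__ == "__main__":
    stage_out("/tmp/dataset.tar", "gridftp.example.edu", "/store/user/dataset.tar")
```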

What does a Site Owner Agree to?
- Offer services to at least one VO within OSG. While fair access is encouraged, you need not offer services to all communities within OSG.
- Advertise accessible services accurately.
- Sign the “OSG Appropriate Usage Policy”:
  - Do not knowingly interfere with the operation of other resources.
  - Do not breach trust chains; be responsible and responsive on security issues.

Requirements: System and Software
There are no specific hardware requirements, but there are recommendations for the amount of memory, local disk, and server nodes depending on the expected usage.
The VDT supports several Linux flavors:
- Scientific Linux 5 is currently the most used.
- Some packages are also supported on CentOS and Debian.
- Scientific Linux 6 is on the horizon (already in use by LIGO, expected to be needed by US LHC soon).
OSG software stack:
- The current software release is RPM based (OSG 3.0).
- Previous software releases used Pacman for distribution and installation.
Worker nodes:
- No specific requirements.
- Sites are expected to install the OSG Worker Node Client, which in practice is needed for most VO applications.

Requirements: To Support the Users
Mapping from grid credentials to local Unix accounts is done using GUMS or a grid-mapfile:
- Some VOs prefer group accounts, some require pool accounts.
- None are privileged.
Some sites use glexec to map the pilot user to the actual user credentials.
User and/or group IDs should be mapped so as to allow consistent file access.
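To make the credential-to-account mapping concrete, here is a toy sketch, not OSG code, that parses grid-mapfile-style lines in which a quoted certificate DN is paired with a local Unix account. The DN and account names are made-up examples; sites that use GUMS delegate this mapping to that service instead of a static file.

```python
# Toy sketch: parse grid-mapfile-style lines mapping a certificate DN to a
# local Unix account. The DN and account below are made-up examples.
import re

MAP_LINE = re.compile(r'^"(?P<dn>[^"]+)"\s+(?P<account>\S+)\s*$')

def parse_gridmap(lines):
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        m = MAP_LINE.match(line)
        if m:
            mapping[m.group("dn")] = m.group("account")
    return mapping

example = ['"/DC=org/DC=example/OU=People/CN=Jane Doe 12345" osgpool01']
print(parse_gridmap(example))
# {'/DC=org/DC=example/OU=People/CN=Jane Doe 12345': 'osgpool01'}
```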

Requirements: Storage
These requirements are revisited and adjusted periodically by agreement to match the current needs of the VOs. Common locations are provided using environment variables.
- Provide persistent storage space for VO applications: $OSG_APP.
- Provide space for user/VO data:
  - At least 10 GB per worker node
  - One of: a shared file system ($OSG_DATA), special file systems ($OSG_READ, $OSG_WRITE), or a convenient/local Storage Element ($OSG_DEFAULT_SE)
  - Consistent data access across the resource (gatekeeper and all nodes)
- Provide local scratch space: $OSG_WN_TMP.
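These environment variables are what a VO job actually sees on a worker node. The sketch below is a minimal, hedged illustration of a job-side script that locates its application area, data area, and scratch space through them; the fallback value is a placeholder.

```python
# Minimal sketch: a VO job resolving the OSG storage locations advertised by
# the site through environment variables. The fallback value is a placeholder.
import os
import tempfile

osg_app    = os.environ.get("OSG_APP")     # persistent VO application area
osg_data   = os.environ.get("OSG_DATA")    # shared data area (if provided)
osg_wn_tmp = os.environ.get("OSG_WN_TMP", tempfile.gettempdir())  # local scratch

print("Application area:", osg_app)
print("Data area:       ", osg_data)
print("Scratch space:   ", osg_wn_tmp)

# A job would typically read pre-installed software from $OSG_APP, stage
# intermediate files in $OSG_WN_TMP, and write shared results to $OSG_DATA
# (or to a Storage Element such as the one named by $OSG_DEFAULT_SE).
```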

Requirements: Network Interfaces
Grid-accessible interfaces and services:
- Public IP, name in DNS.
- Connection requirements are specified in the installation and firewall documents.
Worker nodes:
- No specific requirement, but they must be able to access updates to the Certificate Revocation Lists.
- VOs are encouraged not to require persistent services on worker nodes.
- Most VOs use some outbound connectivity, but OSG supports sites that do not.
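Since worker nodes must be able to reach Certificate Revocation List updates, a quick outbound-connectivity check such as the hedged sketch below can be useful; the URL is a placeholder, not the actual CRL distribution point.

```python
# Minimal sketch: verify that a worker node has the outbound HTTP access
# needed to retrieve Certificate Revocation Lists. The URL is a placeholder.
import urllib.request

def can_reach(url="http://example.org/", timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("outbound HTTP OK" if can_reach() else "no outbound HTTP access")
```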

Additional Slides

- $OSG_APP can be mounted read-only on the worker nodes in the cluster.
- Clusters can allow installation jobs on all nodes, only on the gatekeeper, in a special queue, or not at all.
- Only users with software installation privileges in their VO (if supported by the VO) should have write privileges to these directories.
- At least 10 GB of space should be allocated per VO.
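A rough way for a site to sanity-check the per-VO allocation described above is sketched below. It is a hypothetical helper: the per-VO subdirectory layout under $OSG_APP is an assumption, and free space on the underlying file system is only a loose proxy for an allocation or quota.

```python
# Toy sketch: report free space for each VO directory under $OSG_APP and
# flag file systems with less than 10 GB free. The layout is an assumption.
import os
import shutil

MIN_FREE_BYTES = 10 * 1024**3  # 10 GB

def check_vo_areas(osg_app):
    for vo in sorted(os.listdir(osg_app)):
        path = os.path.join(osg_app, vo)
        if not os.path.isdir(path):
            continue
        free = shutil.disk_usage(path).free
        status = "OK" if free >= MIN_FREE_BYTES else "LOW"
        print(f"{path}: {free / 1024**3:.1f} GB free [{status}]")

if __name__ == "__main__":
    check_vo_areas(os.environ.get("OSG_APP", "/osg/app"))  # placeholder default
```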

How a typical GlideinWMS-based job submission works
- For every user job slot, a pilot job process starts up.
- The pilot job sends outbound TCP traffic over HTTP to a Factory service (at UCSD or IU) and via HTTP to a host at the VO Frontend (the submit site; typically each VO installs its own Frontend).
- The pilot job spawns a condor_startd, which spawns a condor_starter.
- The startd sends outbound UDP traffic to a single port on the Frontend, chosen randomly (per pilot) from a range of 200 ports. This can be changed to TCP if necessary.
- The starter sends outbound TCP traffic to a port on the Frontend, chosen randomly (per pilot) from the Frontend's ephemeral ports.
Example hosts and ports:
- VO Frontend: glidein.unl.edu. Ports: 80, ...
- OSG Factory: glidein-1.t2.ucsd.edu. Port: 8319.
Source: Derek Weitzel et al., HCC VO requirements for ATLAS sites.
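To make the Frontend port arithmetic concrete, here is a toy sketch with hypothetical numbers showing how each pilot's startd ends up on one UDP port drawn from a 200-port range, all of which the Frontend's firewall must allow inbound.

```python
# Toy sketch: per-pilot selection of the Frontend UDP port from a 200-port
# range, as described above. The base port 9620 is a hypothetical example;
# the Frontend firewall must allow inbound UDP on the whole range.
import random

BASE_PORT  = 9620   # hypothetical start of the Frontend's UDP port range
RANGE_SIZE = 200    # the slides describe a range of 200 ports

def pick_pilot_udp_port():
    # Each pilot's startd picks one port at random from the allowed range.
    return BASE_PORT + random.randrange(RANGE_SIZE)

ports = [pick_pilot_udp_port() for _ in range(5)]
print("example per-pilot UDP ports:", ports)
print(f"firewall must allow inbound UDP {BASE_PORT}-{BASE_PORT + RANGE_SIZE - 1}")
```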