OSG Fundamentals Adapted from: Alain Roy, Terrence Martin, Suchandra Thapa.


November 2008 OSG Site Admin Meeting What is OSG? OSG provides high-throughput computing across the United States, at more than 70 sites. For 11-Nov-2008: 282,912 jobs for 433,051 hours, run at 75 sites by jobs from ~20 different virtual organizations; 92% of jobs succeeded. These numbers are an underestimate: 4 sites didn't report anything.

November 2008 OSG Site Admin Meeting Who uses OSG? About 20 virtual organizations. High-energy physics uses a large chunk of OSG, but several other sciences are actively using it as well: nanoHUB (nanotechnology simulations), LIGO (detecting gravitational waves), CHARMM (molecular dynamics). More at:

November 2008 OSG Site Admin Meeting OSG: Project & Consortium Consortium members make significant contributions. The most active are the physics collaborations (HEP, NP, LIGO), who are prepared to collaborate with and support other programs and disciplines. There are active partnerships with European projects, ESNET, Internet2, Condor, Globus, and others. We have worked carefully on Consortium governance: we have a Council and an executive board, and we report to stakeholders and national funding agencies.

November 2008 OSG Site Admin Meeting Virtual Organizations OSG works with Virtual Organizations, or communities. There are 30 VOs in OSG spanning scientific, regional, campus, international, and education communities. There are specific "OSG-owned VOs" to accommodate individual users.

November 2008 OSG Site Admin Meeting OSG Resource Maps (maps of OSG sites)

November 2008 OSG Site Admin Meeting OSG Testbeds OSG runs a Validation Testbed (VTB) for small, quick pre-release testing, and an Integration Testbed (ITB) covering multiple platforms and operating systems and used for VO application validation.

November 2008 OSG Site Admin Meeting Example partnership GridUNESP campus grid initiative: a central cluster (2K cores) plus 7 clusters in the state of Sao Paulo. SPRACE: an HEP analysis facility running OSG middleware. OSG participated in a GridUNESP workshop: orkshops/IIBLHCCW/

INTEGRATION & VALIDATION

November 2008 OSG Site Admin Meeting OSG WLCG Service Reliability Our HEP sites are monitored closely as part of WLCG. Metrics: availability, reliability, MOU %. Other OSG sites use the same infrastructure.

November 2008 OSG Site Admin Meeting Getting applications on OSG OSG decided early on NOT to develop its own workflow management system: too much diversity among the disciplines, too much at stake within the VOs, and a desire to focus on the core infrastructure (what the VO was not). So there is no central resource broker, data management system, etc. (though OSG is interoperable with EGEE resource brokers). There are a number of systems developed by VOs: pilot-based systems, Condor "glide-in", and client-side workflow planners (roughly, functional programming).

November 2008 OSG Site Admin Meeting Glide-ins at RENCI (North Carolina) (diagram: TeraGrid, NIH, OSG, and RENCI resources temporarily joined into a local Condor pool)

November 2008 OSG Site Admin Meeting Pilot abstraction above the middleware (diagram)

November 2008 OSG Site Admin Meeting So all complexity and middleware differences are hidden: a lightweight command-line tool submits to all grids.

November 2008 OSG Site Admin Meeting Cumulative CPU hours delivered (chart)

November 2008 OSG Site Admin Meeting OSG Production by VO (chart)

November 2008 OSG Site Admin Meeting ATLAS Production on Grids (chart)

November 2008 OSG Site Admin Meeting Principle: Autonomy Sites and VOs are autonomous. You make decisions about your site; we provide software. You decide when to install and upgrade, and you make operational decisions. We help out, but you are responsible for your site.

November 2008 OSG Site Admin Meeting What is the role of an OSG site admin? An OSG site administrator should: keep in touch with OSG about site contacts (administrative and security), problems you are encountering, and downtime of your site; plan how your site works; attempt to keep up to date with software; and be part of the OSG community.

November 2008 OSG Site Admin Meeting What does OSG do for site admins? We should provide: up-to-date grid software; an easy installation and upgrade process; assistance in times of need; a community of site administrators to share experiences with; users who want to use your site; and an exciting, cutting-edge, 21st-century collaborative distributed computing grid cloud buzzword-compliant environment.

November 2008 OSG Site Admin Meeting A few definitions: VDT, OSG Software Stack, Computing Element (CE), Storage Element (SE), Worker Node.

November 2008 OSG Site Admin Meeting Definition: VDT The Virtual Data Toolkit: a large set of software you can mix and match, used to install a grid site or a client. It attempts to be grid-generic.

November 2008 OSG Site Admin Meeting VDT Example: GUMS GUMS authorizes users at a site and maps a global user name to a local UID, e.g. /DC=org/DC=doegrids/OU=People/CN=Alain Roy → roy. The VDT includes dependencies; for example, GUMS needs Apache, Tomcat, MySQL, CA certificates, configuration utilities, and infrastructure.

November 2008 OSG Site Admin Meeting Definition: OSG Software Stack The OSG Software Stack is a subset of the VDT plus OSG-specific bits. Example, the OSG CE: a VDT subset (Globus, RSV, PRIMA, and another dozen components) plus OSG bits (information about OSG VOs and the OSG configuration script, configure_osg.py).

November 2008 OSG Site Admin Meeting Definition: CE, SE, Worker Node CE (Computing Element): the head node of your site; users submit jobs to the CE; a well-defined set of software. SE (Storage Element): manages large sets of data at your site; multiple implementations exist. WN (Worker Node): runs jobs; some software is installed here too.

November 2008 OSG Site Admin Meeting Bias towards the CE A lot of discussion in OSG is biased towards the CE. It's unfair: storage is important too! As an organization, we have more experience with and understanding of the CE and running jobs, and the CE is better developed than the SE. This talk will mostly cover the CE, with some discussion of SEs.

November 2008 OSG Site Admin Meeting The CE software "big picture" GRAM: allows job submissions. GridFTP: allows file transfers. CEMon/GIP: publish site information. Gratia: job accounting. Some authorization mechanism: either a grid-mapfile (a file listing authorized users) or GUMS (a service that maps users). RSV: monitors the health of the CE. And a few other things…

November 2008 OSG Site Admin Meeting A Basic CE (diagram: GRAM, GridFTP, Authorization, RSV, CEMon/GIP, and Gratia, with job submission, test, and query arrows)

November 2008 OSG Site Admin Meeting GRAM GRAM comes in two flavors; you'll get both on your CE, we support both, and the implementations are totally different. GRAM 2 (a.k.a. pre-web-services GRAM, or "old GRAM") is what most VOs currently use and what we want to move away from. GRAM 4 (a.k.a. web-services GRAM, or "new GRAM") is what we want to move to.
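To make the two flavors concrete, here is a minimal sketch of submitting a test job through each one from a machine with the grid client tools installed. The gatekeeper host is a placeholder, and option details can differ between Globus versions, so treat this as illustrative rather than exact.

    # GRAM 2 (pre-web-services): contact string is host/jobmanager-<batch system>
    globus-job-run ce.example.edu/jobmanager-condor /bin/hostname

    # GRAM 4 (web services): factory host and type are given explicitly
    globusrun-ws -submit -s -F ce.example.edu -Ft Condor -c /bin/hostname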

November 2008 OSG Site Admin Meeting Gratia Collects information about jobs run on your site. It hooks into GRAM, and a cron job also collects data. Stats are sent to a central OSG service. Optionally, you can collect information locally as well.

November 2008 OSG Site Admin Meeting CEMon/GIP These work together and are essential for accurate information about your site; end users see this information. The Generic Information Provider (GIP) is a set of scripts that scrape information about your site; some information is dynamic (queue length), some is static (site name). CEMon reports the information to the OSG GOC's BDII and to the OSG Resource Selector (ReSS).

November 2008 OSG Site Admin Meeting RSV A system for running tests. Goal: you should be the first to know when your site has grid problems. It doesn't have to run from the CE; large sites may prefer to use a separate computer. A variety of tests are run periodically.

November 2008 OSG Site Admin Meeting Planning a CE Now: bureaucratic advance work, what software goes where, how many computers, disk layout, worker node software, and the authorization mechanism.

November 2008 OSG Site Admin Meeting Bureaucratic advance work You'll need a site name: you pick it and tell the GOC; it's used all over, so keep it consistent. You need site contacts, an administrative contact and a security contact; these are important, and OSG will contact you sometimes. You also need a URL describing your site and its policies.

November 2008 OSG Site Admin Meeting What software goes where? Simple case: everything goes on the CE; worker node software lives on an NFS volume; GRAM, GridFTP, etc. run on the CE.

November 2008 OSG Site Admin Meeting More advanced site (diagram: GRAM, GridFTP, CEMon/GIP, and Gratia on the CE, with GUMS as a separate authorization service, RSV on a separate host for testing, and an NFS server)

November 2008 OSG Site Admin Meeting OSG Disk Layout for a CE: required directories OSG_APP stores VO applications; it must be shared (usually NFS), writeable from the CE and readable from the WNs, and usable by the whole cluster. OSG_GRID stores the WN client software; it may be shared or installed on each WN, may be read-only (no need for users to write), and holds a copy of the CA certs and CRLs, which must be kept up to date. OSG_WN_TMP is a temporary directory on the worker node; it may be static or dynamic, must exist at the start of a job, and is not guaranteed to be cleaned by the batch system.
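As an illustration of how these directories are seen from a job, here is a hypothetical worker-node job wrapper; the VO name, application, and file names are all made up.

    #!/bin/sh
    # Hypothetical job wrapper illustrating the OSG directory variables.
    APP="$OSG_APP/myvo/myapp-1.2"          # shared, read-only application area
    SCRATCH="$OSG_WN_TMP/myjob.$$"         # local per-job scratch on the worker node
    mkdir -p "$SCRATCH" && cd "$SCRATCH"
    cp "$OSG_DATA/myvo/input.dat" .        # optional shared data area, if the site provides one
    "$APP/bin/analyze" input.dat result.out
    cp result.out "$OSG_DATA/myvo/results/"
    cd / && rm -rf "$SCRATCH"              # the batch system is not guaranteed to clean up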

November 2008 OSG Site Admin Meeting OSG Disk Layout for a CE: optional directories OSG_DATA holds data shared between jobs; it must be writable from the worker nodes and has potentially massive performance requirements (a cluster file system can mitigate its limitations); performance and support vary widely among sites; give OSG_DATA 1777 permissions (world-writable with the sticky bit, like /tmp). A Squid server (HTTP proxy) can assist many VOs and sites in reducing load: it reduces VO web server load, is efficient and reliable for the site, is fairly low maintenance, and can help with CRL maintenance on worker nodes.

November 2008 OSG Site Admin Meeting Disk Usage Usage varies between VOs: some VOs download all data and code per job (possibly Squid-assisted) and return data to the VO per job; other VOs use hybrids of OSG_APP and/or OSG_DATA. OSG_APP is used by several VOs, but not all; 1 TB of storage is reasonable, served from a separate computer so heavy use won't affect other site services. OSG_DATA sees moderate usage; 1 TB of storage is reasonable, again served from a separate computer so heavy use of OSG_DATA doesn't affect other site services. OSG_WN_TMP is not well managed by VOs, so be aware of it: plan for ~100 GB of total local WN space, ~10 GB per job slot.

November 2008 OSG Site Admin Meeting Authorization Two mechanisms for authorization. (1) A file with a list of mappings (global user DN → local user); the list can be generated from VO membership with edg-mkgridmap, but this is too simplistic and doesn't deal with users in multiple VOs. (2) A service with the list of mappings (GUMS): one service for multiple computers, deals correctly with complex cases, the preferred solution, and best placed on a separate computer.
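To make the first option concrete, a grid-mapfile is just a text file of DN-to-account mappings like the fragment below; the first entry echoes the earlier GUMS example, the second is invented.

    # /etc/grid-security/grid-mapfile (illustrative entries)
    "/DC=org/DC=doegrids/OU=People/CN=Alain Roy" roy
    "/DC=org/DC=doegrids/OU=People/CN=Jane Example 123456" osgusers01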

November 2008 OSG Site Admin Meeting Installing a CE TOMORROW: a session on OSG Fundamentals will guide you through CE installations. Act now! Special offer! Limited supplies! Hands on! Go home with a working CE! Impress your co-workers and lovers! Now we'll walk through the basic process.

November 2008 OSG Site Admin Meeting Certificates Your site needs PKI certificates. It's beyond this talk to discuss PKI; I assume you understand the basics: you need a public cert and a private key, and the pair is often referred to informally (and incorrectly) as a "certificate". Your site needs two certificates, a host certificate and an HTTP certificate; it's best to get these in advance. There is online documentation on getting them: n/GetGridCertificates

November 2008 OSG Site Admin Meeting Users You need a user account for RSV. Some people like a separate user for Globus. A daemon user is used by many components.

November 2008 OSG Site Admin Meeting Pacman The OSG software stack is installed with Pacman: no, not RPM or deb; yes, custom installation software. Why? Mostly historical reasons, and it makes multiple installations and non-root installations easy. Why not? It's different from what you're used to, and it sometimes breaks in strange ways. Will we always use Pacman? Probably, but I expect work to support RPM/deb in the future.

November 2008 OSG Site Admin Meeting More on Pacman Easy installation: download, untar, no root needed. Non-standard usage: Pacman installs into the current directory (unlike RPM/deb).

November 2008 OSG Site Admin Meeting Online Documentation Twiki: OSG collaborative documentation, used throughout OSG. Installation documentation: mentation/

November 2008 OSG Site Admin Meeting Basic process for a CE Install Pacman: download, untar (keep it in its own directory), source its setup script. Make an OSG directory, for example /opt/osg as a symlink to /opt/osg-1.0. Run the pacman commands: get the CE, then get the job manager interface. Configure: edit configure_osg.ini and run configure_osg.py.

November 2008 OSG Site Admin Meeting Run Pacman commands Install the CE: pacman -get OSG:ce. Get the environment: source setup.sh. Install the job manager: pacman -get OSG:Globus-Condor-Setup (substitute PBS, LSF, or SGE).
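Putting the last two slides together, the sequence on the CE looks roughly like this; the Pacman version number and the /opt/osg prefix are just examples.

    # Pacman itself: download the tarball, untar it into its own directory,
    # then source its setup script (version number illustrative)
    cd /opt/pacman-3.28 && . setup.sh

    # Make the OSG directory and install the CE into it
    mkdir /opt/osg-1.0 && ln -s /opt/osg-1.0 /opt/osg
    cd /opt/osg
    pacman -get OSG:ce
    . setup.sh

    # Job manager interface for your batch system (Condor shown; substitute PBS, LSF, or SGE)
    pacman -get OSG:Globus-Condor-Setup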

November 2008 OSG Site Admin Meeting Some Initial Configuration You need to run gums-host-cron (if you use GUMS). It sets up the $OSG_LOCATION/monitoring/osg-user-vo-map.txt file, which is needed by the GIP service.
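A sketch of that step, assuming the CE environment has been sourced so the script is on the PATH (the exact location of gums-host-cron can vary between releases):

    . /opt/osg/setup.sh
    gums-host-cron                                       # regenerate the map file
    less $OSG_LOCATION/monitoring/osg-user-vo-map.txt    # check the result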

November 2008 OSG Site Admin Meeting Configuring the site Configuration is primarily done using the configure-osg.py script. For basic sites, $OSG_LOCATION/monitoring/simple-config.ini provides a skeleton that can be used; /monitoring/full-config.ini has the complete configuration for more complex installations.

November 2008 OSG Site Admin Meeting Configuration File Format Similar to a Windows ini file, broken up into sections. Each section starts with a [Section Name] header (e.g. [Site Information]). Each section has variables set using the variable = value format. Variable substitution is supported. Lines starting with ; are considered comments.

November 2008 OSG Site Admin Meeting Example configure_osg.ini fragment
    [GIP]
    enable = True
    home = /opt/osg
    ; this is used for something
    my_dir = %(home)s    ; <- variable substitution

November 2008 OSG Site Admin Meeting Variable Substitution Variable substitution is done by referring to other variables using %(variable_name)s. Substitutions are recursive, but there are limits to the recursion depth. A special section called [Default] contains variables used in other sections for substitution.
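A small hypothetical fragment showing a [Default] variable reused in another section (the variable names here are invented for illustration, not taken from the stock configuration file):

    [Default]
    osg_home = /opt/osg

    [GIP]
    enable = True
    ; expands to /opt/osg/tmp via %(...)s substitution
    temp_dir = %(osg_home)s/tmp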

November 2008 OSG Site Admin Meeting Using configure-osg.py There are two important modes for new site admins. Verification mode is set with the -v flag (e.g. configure-osg.py -v); it verifies settings and values but does not change or set anything. Configuration mode is set with the -c flag; it makes changes and alters the system.
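In practice the workflow is usually: copy the skeleton, edit it, verify, then configure. A sketch (check the installation documentation for the exact file name your release expects):

    cd $OSG_LOCATION/monitoring
    cp simple-config.ini configure_osg.ini   # start from the basic-site skeleton
    vi configure_osg.ini                     # fill in site name, contacts, directories, ...
    configure-osg.py -v                      # verify settings; changes nothing
    configure-osg.py -c                      # apply the configuration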

November 2008 OSG Site Admin Meeting Troubleshooting Logging is your friend. All actions, errors, and warnings are logged to the $OSG_LOCATION/vdt-install.log file. You can give the -d flag to log debugging information to this file.
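For example, to capture extra detail and then look at the most recent entries (whether -d is combined with -v or -c is a detail to check against the documentation):

    configure-osg.py -d -v                       # -d adds debugging output to the log
    tail -n 100 $OSG_LOCATION/vdt-install.log    # recent actions, errors, and warnings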

November 2008 OSG Site Admin Meeting Other configuration steps For WS-GRAM, cut and paste entries from $OSG_LOCATION/monitoring/sudo-example.txt or $OSG_LOCATION/post-install/README.

November 2008 OSG Site Admin Meeting CA Certificates What are they? The public certificates of certificate authorities, used to verify the authenticity of user certificates. Why do you care? If you don't have them, users can't access your site.

November 2008 OSG Site Admin Meeting Installing CA Certificates The OSG installation will not install CA certificates by default, so users will not be able to access your site! To install CA certificates: edit a configuration file (vdt-update-certs.conf) to select which CA distribution you want, then run a script (vdt-setup-ca-certificates).
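A sketch of those two steps; the location of vdt-update-certs.conf differs between VDT releases, so the path below is only illustrative.

    # 1. Choose the CA distribution (OSG or VDT) in the config file
    vi $OSG_LOCATION/vdt/etc/vdt-update-certs.conf   # path illustrative
    # 2. Install the chosen CA certificates
    vdt-setup-ca-certificates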

November 2008 OSG Site Admin Meeting Choices for CA certificates You have two choices. Recommended: the OSG CA distribution (IGTF plus TeraGrid-only CAs). Optional: the VDT CA distribution (IGTF only, eventually; today it is the same as the OSG CA distribution). IGTF is the policy organization that makes sure CAs are trustworthy. You can also make your own CA distribution, and you can add or remove CAs.

November 2008 OSG Site Admin Meeting Why all this effort for CAs? Certificate authentication is the first hurdle for a user to jump through. Do you trust all CAs to certify users? Does your site have a policy about user access? Do you only trust US CAs? European CAs? Do you trust the IGTF-accredited Iranian CA? Does the head of your institution?

November 2008 OSG Site Admin Meeting Updating CAs CAs are regularly updated: new CAs are added, old CAs are removed, and existing CAs are tweaked. If you don't keep up to date, you may be unable to authenticate some users, or may incorrectly accept some users. It's easy to keep up to date: vdt-update-certs runs once a day and gets the latest CA certs.

November 2008 OSG Site Admin Meeting CA Certificate RPM There is an alternative for CA certificate installation: RPM. We have an RPM for each CA cert distribution (no deb package yet); install and keep up to date with yum. Some details are not discussed here: read the docs.

November 2008 OSG Site Admin Meeting Certificate Revocation Lists (CRLs) It's not enough to have the CAs. CAs publish CRLs: lists of certificates that have been revoked, sometimes for administrative reasons, sometimes for security reasons. You really want up-to-date CRLs. The CE provides a periodic update of CRLs via a program called fetch-crl, which runs once a day (today) and will run four times a day (soon).
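Normally the cron job installed with the CE takes care of this, but if you ever need to refresh CRLs by hand, the call is simply the same program (a sketch, assuming the OSG environment has been sourced):

    . /opt/osg/setup.sh
    fetch-crl            # download current CRLs for the installed CAs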

November 2008 OSG Site Admin Meeting Updates We periodically release updates to the OSG software stack. They are announced by the VDT team on the vdt-discuss mailing list (not an OSG-specific announcement or update procedure) and by the GOC (with OSG-specific instructions).

November 2008 OSG Site Admin Meeting Two kinds of updates Incremental updates are frequent (every 1-4 weeks) and can be done within a single installation; the process is: turn off services, back up the installation directory, perform the update, re-enable services. Major updates are irregular (every 6-12 months) and must be a new installation, though you can copy configuration from the old installation; the process is: point to the old install, perform the new install, turn off the old services, turn on the new services.
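The incremental-update process sketched above looks roughly like this in practice; vdt-control is the VDT's service manager, and the actual update command comes from the release announcement.

    vdt-control --off                        # stop the grid services
    cp -a /opt/osg-1.0 /opt/osg-1.0.bak      # back up the installation directory
    # ...perform the update as described in the release notes...
    vdt-control --on                         # re-enable the services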

November 2008 OSG Site Admin Meeting A few words about Storage Elements A bit about SRM, a bit about dCache, and a bit about Bestman/Xrootd.

November 2008 OSG Site Admin Meeting A few words about Storage Elements OSG relies on SRM, a well-defined storage management interface. It manages storage: who can store data? how much data can be stored? does the permission expire?
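For orientation, client-side SRM access looks something like the sketch below, here using the srmcp client; the endpoint, port, and paths are invented, and every site publishes its own SRM URL.

    srmcp file:////home/alice/data.root \
          "srm://se.example.edu:8443/srm/managerv2?SFN=/data/myvo/data.root"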

November 2008 OSG Site Admin Meeting Multiple types of SEs Unlike job submission (which uses Globus GRAM everywhere), there are two commonly used, very different SEs in OSG. dCache scales very well but has a moderately complex installation. Bestman is lighter weight than dCache; by itself it doesn't scale as far, but it may scale well with XRootd.

November 2008 OSG Site Admin Meeting dCache dCache is widely used by CMS. It scales well, has a fairly complex installation, and requires multiple computers to install. It is part of the VDT, but installed with RPMs rather than Pacman. It is well supported by OSG's VDT Storage Group.

November 2008 OSG Site Admin Meeting Bestman (with optional XRootd) Not yet widely used in OSG, but may become heavily used in ATLAS. Relatively simple to install, and packaged with the VDT using Pacman. It may scale very well with XRootd, but then it is no longer as simple to install.