
OSG Site
Provide one or more of the following capabilities:
– access to local computational resources using a batch queue
– interactive access to local computational resources
– storage of large amounts of data using a distributed file system
– access to external computing resources on the Grid
– the ability to transfer large datasets to and from the Grid
Offer computing resources and data to fellow grid users

Agreements
Offer services to at least one organization within OSG. While fair access is encouraged, you need not offer services to all organizations within OSG.
You agree to:
– Advertise your services accurately, represent your capabilities fairly, and make your requirements and limitations known via OSG standard mechanisms
– Not attempt to circumvent OSG processes or controls by falsely representing your status or capabilities
– Not knowingly interfere with the operation of other resources
– Not breach trust chains, and be responsible and responsive on security issues

Requirements: System/SW
No hardware requirements
Supported OS for services:
– SL5 is the main testing platform
– OSG 3.0 (RPM): RHEL 5 based OS (including SL and CentOS)
– OSG 1.2 (Pacman): any VDT 2.0 supported platform
Worker nodes:
– No specific requirement
– If possible, compatible with the OSG WN client (same supported OS list as for services); this is needed by most VOs
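
To make the OS requirement concrete, here is a minimal sketch (not part of the OSG software) that checks whether a host looks like an RHEL 5 based system, as expected for OSG 3.0 services. It assumes a Red Hat-family /etc/redhat-release file; adjust for other layouts.

#!/usr/bin/env python
# Illustrative sketch only: check whether this host looks like an RHEL 5
# based OS (RHEL, Scientific Linux, CentOS), as expected for OSG 3.0 services.
import re

def rhel5_compatible(release_file="/etc/redhat-release"):
    try:
        with open(release_file) as f:
            release = f.read().strip()
    except IOError:
        return False, "no %s found (not a Red Hat-family OS?)" % release_file
    # Look for the major version in strings such as
    # "Scientific Linux release 5.5 (Boron)" or "CentOS release 5.8 (Final)".
    match = re.search(r"release\s+(\d+)", release)
    if match and match.group(1) == "5":
        return True, release
    return False, release

if __name__ == "__main__":
    ok, detail = rhel5_compatible()
    print("RHEL 5 compatible: %s (%s)" % (ok, detail))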

Requirements: Users
Root access to install and update the OSG software (not required for most of OSG 1.2)
Users to run the services specified in the installation documents
Some VOs prefer a dedicated VO user, some require pool accounts:
– None of these accounts are privileged
– glexec
– GUMS (see the InstallConfigureAndManageGUMS document)
Users or groups should allow consistent file access
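
As an illustration of the account requirements above, the following sketch checks that a few service and pool accounts exist and are unprivileged. The account names are placeholders, not OSG-mandated names; substitute your site's users.

#!/usr/bin/env python
# Illustrative sketch, not part of OSG tooling: verify that the local service
# and pool accounts expected by the installation documents exist and are not
# privileged (uid 0).
import pwd

SERVICE_USERS = ["rsv", "condor", "globus"]              # example service accounts
POOL_ACCOUNTS = ["uscms%02d" % i for i in range(1, 6)]   # example pool accounts

def check_account(name):
    try:
        entry = pwd.getpwnam(name)
    except KeyError:
        return "%s: MISSING" % name
    if entry.pw_uid == 0:
        return "%s: PRIVILEGED (uid 0) - not allowed" % name
    return "%s: ok (uid %d)" % (name, entry.pw_uid)

if __name__ == "__main__":
    for user in SERVICE_USERS + POOL_ACCOUNTS:
        print(check_account(user))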

Requirements: Storage
Storage requirements are agreed periodically with the VOs
Space for VO applications: $OSG_APP
Space for VO data:
– At least 10 GB per worker node
– Different options: a shared file system ($OSG_DATA), special file systems ($OSG_SITE_READ, $OSG_SITE_WRITE), or a convenient Storage Element ($OSG_DEFAULT_SE)
– Consistent access across the resource (gatekeeper and all nodes)
– A CE can provide $OSG_DATA, both $OSG_SITE_READ and $OSG_SITE_WRITE, or none of them if it has a local SE specified in $OSG_DEFAULT_SE
Scratch space: $OSG_WN_TMP
See the StorageConfiguration document
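
A hedged example of how a site might verify these storage settings: the sketch below checks that the standard $OSG_* locations are defined and that the data area offers the agreed 10 GB per worker node. The 10 GB threshold is the figure from this slide; everything else is illustrative.

#!/usr/bin/env python
# Illustrative sketch: report the standard OSG storage locations and the free
# space in the data area.
import os

REQUIRED = ["OSG_APP", "OSG_DATA", "OSG_WN_TMP"]
OPTIONAL = ["OSG_SITE_READ", "OSG_SITE_WRITE", "OSG_DEFAULT_SE"]
MIN_DATA_GB = 10

def free_gb(path):
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize / float(1024 ** 3)

if __name__ == "__main__":
    for var in REQUIRED + OPTIONAL:
        value = os.environ.get(var)
        tag = "required" if var in REQUIRED else "optional"
        print("%-16s (%s) = %s" % (var, tag, value or "NOT SET"))
    data = os.environ.get("OSG_DATA")
    if data and os.path.isdir(data):
        gb = free_gb(data)
        status = "ok" if gb >= MIN_DATA_GB else "below %d GB" % MIN_DATA_GB
        print("Free space in $OSG_DATA: %.1f GB (%s)" % (gb, status))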

$OSG_APP can be mounted read-only on the worker nodes in the cluster
Clusters can allow installation jobs on all nodes, only on the gatekeeper, in a special queue, or not at all
Only users with software installation privileges in their VO should have write access to these directories
At least 10 GB of space should be allocated per VO
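
The following sketch illustrates one way to spot-check these permissions from a worker node; the VO directory names are hypothetical examples.

#!/usr/bin/env python
# Illustrative sketch: confirm that $OSG_APP is visible on this node and that
# the per-VO directories are not world-writable, so only users with software
# installation privileges in their VO can write to them.
import os
import stat

EXAMPLE_VOS = ["cms", "atlas", "osg"]  # hypothetical VO directory names

def world_writable(path):
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

if __name__ == "__main__":
    app = os.environ.get("OSG_APP")
    if not app or not os.path.isdir(app):
        raise SystemExit("$OSG_APP is not set or not mounted on this node")
    for vo in EXAMPLE_VOS:
        vo_dir = os.path.join(app, vo)
        if not os.path.isdir(vo_dir):
            print("%s: no directory (VO not supported here?)" % vo_dir)
        elif world_writable(vo_dir):
            print("%s: WARNING, world-writable" % vo_dir)
        else:
            print("%s: permissions look restricted" % vo_dir)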

Requirements: Networking
Services:
– Public IP address, name in DNS
– Variable connection requirements, specified in the installation and firewall documents
Worker nodes:
– No specific requirement (up-to-date CRLs)
– VOs are encouraged not to require persistent services on the WNs
– Most VOs use some outbound connectivity
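
As a rough illustration of these networking checks, the sketch below verifies forward and reverse DNS for the local host and tests one outbound TCP connection. The test endpoint is an arbitrary example, not an OSG requirement.

#!/usr/bin/env python
# Illustrative sketch: check that a service host has a DNS name that resolves
# both ways, and that the node has basic outbound connectivity.
import socket

def check_dns(hostname):
    ip = socket.gethostbyname(hostname)        # forward lookup
    name, _, _ = socket.gethostbyaddr(ip)      # reverse lookup
    return ip, name

def check_outbound(host="repo.opensciencegrid.org", port=80, timeout=5):
    sock = socket.create_connection((host, port), timeout)
    sock.close()
    return True

if __name__ == "__main__":
    ce = socket.getfqdn()                      # assume we run on the service host
    try:
        ip, reverse = check_dns(ce)
        print("%s -> %s -> %s" % (ce, ip, reverse))
    except socket.error as err:
        print("DNS check failed for %s: %s" % (ce, err))
    try:
        check_outbound()
        print("Outbound connectivity: ok")
    except socket.error as err:
        print("Outbound connectivity: failed (%s)" % err)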

Typical VO requirements for OSG Sites*
These requirements are common to OSG VOs from multiple disciplines (HEP, NP, Astro, Bio, Civil Engineering, etc.):
– Worker nodes require outbound internet access (nodes can be behind NAT)
– Preferred worker node OS: RHEL 5, CentOS 5 or SL5
– Worker node memory: 1 GB / slot minimum, assume GB
– Worker node scratch: assume 10 GB per job slot (not stringent)
– The worker nodes require the OSG CA certificates, installed as part of the OSG Worker Node Client; host certificates on the worker nodes are not required
– The OSG wn-client package is required; lcg-utils is heavily used for SRM
– Availability of a site squid is preferred (for data from 10 MB to 200 MB)
– Jobs can generally deal well with preemption
– Jobs can generally interact with glexec, if present
*Source: VO requirements for ATLAS sites
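
A sketch of how a worker node could be spot-checked against these typical requirements. The certificate paths and the squid-related environment variables are assumptions that may differ per site.

#!/usr/bin/env python
# Illustrative sketch: spot-check a worker node for the OSG CA certificates
# from the WN client, roughly 10 GB of scratch per job slot, and a site squid.
import os

CA_CERT_DIRS = ["/etc/grid-security/certificates",
                os.path.join(os.environ.get("OSG_GRID", ""), "globus/TRUSTED_CA")]
SCRATCH = os.environ.get("OSG_WN_TMP", "/tmp")
MIN_SCRATCH_GB = 10

def free_gb(path):
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize / float(1024 ** 3)

if __name__ == "__main__":
    if any(d and os.path.isdir(d) for d in CA_CERT_DIRS):
        print("OSG CA certificates: found")
    else:
        print("OSG CA certificates: NOT found (install the OSG Worker Node Client)")
    print("Scratch %s: %.1f GB free (want >= %d GB per slot)"
          % (SCRATCH, free_gb(SCRATCH), MIN_SCRATCH_GB))
    squid = os.environ.get("http_proxy") or os.environ.get("OSG_SQUID_LOCATION")
    print("Site squid: %s" % (squid or "not advertised"))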

Typical GlideinWMS network requirements*
For every user job slot, a pilot job process starts up:
– The pilot job sends outbound TCP traffic over HTTP to a host at UCSD or IU (the "factory") and via HTTP to a host at the VO submit site (the "frontend")
– The pilot job spawns a condor_startd, which spawns a condor_starter
– The startd sends outbound UDP traffic to a single port on the frontend, chosen randomly (per pilot) from a range of 200 ports; this can be changed to TCP if necessary
– The starter sends outbound TCP traffic to a port on the frontend, chosen randomly (per pilot) from the frontend's ephemeral ports
Example hosts and ports:
– The HCC frontend is glidein.unl.edu. Ports: 80,
– The UCSD factory is glidein-1.t2.ucsd.edu. Port: 8319
*Source: Derek Weitzel et al., HCC VO requirements for ATLAS sites
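
To illustrate the connectivity described above, this sketch tests outbound TCP reachability from a worker node to the example frontend and factory endpoints listed on the slide. The UDP port range used by the startd is site-specific and is not tested here.

#!/usr/bin/env python
# Illustrative sketch: verify outbound TCP reachability to the example
# GlideinWMS frontend and factory hosts/ports from this slide.
import socket

ENDPOINTS = [
    ("glidein.unl.edu", 80),          # HCC frontend (HTTP)
    ("glidein-1.t2.ucsd.edu", 8319),  # UCSD factory
]

def tcp_reachable(host, port, timeout=5):
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except socket.error:
        return False

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        status = "reachable" if tcp_reachable(host, port) else "NOT reachable"
        print("%s:%d %s" % (host, port, status))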