Open Science Grid: An Introduction. Ruth Pordes, Fermilab.

1 Open Science Grid: An introduction. Ruth Pordes, Fermilab

Open Science Maryland 2 Why am I at Maryland today? The US LHC experiments, which will take data at CERN starting next year, have a global data processing and distribution model based on Grids. US CMS (and US ATLAS) are relying on and contributing to the Open Science Grid as their distributed facility in the US. Maryland is a US CMS facility. Maryland physics also includes: D0, already using Grids for data distribution and processing; IceCube, already collaborating with Condor in Wisconsin. I am attending the Open Grid Forum standards meeting in Washington - *and yes, I met David McNabb*.

Open Science Maryland 3 CMS Experiment: LHC Global Data Grid (2007+). [Diagram: the online system feeds the CERN computer center (Tier 0), which distributes data over >10 Gb/s links to Tier 1 centers in the USA, Korea, Russia and the UK; Tier 2 sites such as Maryland, Iowa, UCSD, Caltech, U Florida and FIU sit below the Tier 1s, with Tier 3 physics caches and Tier 4 PCs at the edge.] 5000 physicists, 60 countries. 10s of Petabytes/yr by 2008. 1000 Petabytes in < 10 yrs?

Open Science Maryland 4 Computing Facility Tiers in CMS. Tier-1: one center in each geographical region of the world, well provisioned with petabytes of archival storage to accept data from and provide data to Tier-2s/Tier-Ns. Tier-2s: several (seven in the US) university facilities that accept responsibility to deliver Monte Carlo simulations and analysis to the CMS experiment, provide 200 TB of online data caches for analysis, and mentor the Tier-3s. Tier-3s: university facilities providing processing and online storage for local physics and research groups, provisioned to get data from and send data to Tier-2s.

Open Science Maryland 5 What is Open Science Grid? It is a supported US distributed facility; the OSG Consortium builds and operates the OSG. The farms and storage are not owned by OSG; we integrate existing resources. Five years of funding at $6M/year (~35 FTEs) starting 9/2006 from DOE SciDAC-II and NSF MPS and OCI. Cooperation with other distributed facilities/Grids gives our research groups transparent access to resources they are offered worldwide.

Open Science Maryland 6 The OSG Map Aug-2006 Some OSG sites are also on TeraGrid or EGEE. 10 SEs & 50 CEs (about 30 very active)

Open Science Maryland 7 Open Science Grid in a nutshell. A set of collaborating computing farms (Compute Elements): commodity Linux farms plus disk, with optional mass storage (MSS), ranging from ~20 CPUs in department computers to a 10,000-CPU supercomputer. [Diagram: any university's local batch system connects through an OSG CE gateway to OSG and the wide area network.]
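
To make the Compute Element idea concrete, the sketch below shows a Condor-G style submission through a CE gateway, driven from Python. It is a minimal sketch only: the gatekeeper host, jobmanager name and job script are hypothetical, and a real OSG site would supply its own values and require a valid grid proxy.

    import subprocess
    import textwrap

    # Minimal sketch: submit a job to an OSG Compute Element via Condor-G.
    # The gatekeeper host and jobmanager below are placeholders, not a real site.
    submit_description = textwrap.dedent("""\
        universe      = grid
        grid_resource = gt2 gatekeeper.example.edu/jobmanager-condor
        executable    = analyze.sh
        output        = analyze.out
        error         = analyze.err
        log           = analyze.log
        queue
    """)

    with open("grid_job.sub", "w") as f:
        f.write(submit_description)

    # Requires a Condor-G client installation and a valid grid proxy.
    subprocess.run(["condor_submit", "grid_job.sub"], check=True)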

Open Science Maryland 8 A set of collaborating storage sites (Storage Elements): mass storage systems and disk caches, from a 20 GByte disk cache to 4 Petabyte robotic tape systems. [Diagram: any shared storage connects through an OSG SE gateway to OSG and the wide area network.]
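
Similarly, the Storage Element interface can be pictured with the minimal sketch below, which copies a file off an SE using an SRM client such as srmcp (one of the SRM clients shipped in the VDT, see slide 25). The SE hostname, port and paths are hypothetical, and a valid grid proxy is assumed.

    import subprocess

    # Minimal sketch: pull a file from an OSG Storage Element using an SRM client.
    # Hostname, port and paths are placeholders; a valid grid proxy is assumed.
    source = "srm://se.example.edu:8443/pnfs/example.edu/data/run42/events.root"
    destination = "file:////tmp/events.root"

    subprocess.run(["srmcp", source, destination], check=True)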

Open Science Maryland 9 OSG Services. Grid-wide monitoring, information and accounting services in support of the overall system. Based on: X509 GSI security, the Globus Toolkit, Condor products, the Storage Resource Management (SRM) storage interface, VOMS/Prima/GUMS role-based attribute and access control, and software to make OSG interoperable with Europe and TeraGrid.

Open Science Maryland 10 Supported Software Stacks. Integrated, supported reference software services. Most services run on Linux PC "gateways", with minimal impact on compute nodes. Loose coupling between services allows heterogeneity in releases and functionality. Independent collections for client, server and administrator.

Open Science Maryland 11 Middleware & service principles: Me, My friends, The Grid. Me: a thin user layer. My friends: VO services, VO infrastructure, VO admins. The Grid: anonymous sites & admins, common to all. Me and My friends are usually domain-science specific.

Open Science Maryland 12 Grid-of-Grids. Inter-operating and co-operating grids: campus, regional, community, national, international. An open Consortium of Virtual Organizations doing research & education.

Open Science Maryland 13 Data transfer for the LHC is crucial. A core goal is to deliver US LHC & LIGO scale, 1 GigaByte/sec, within the next 2 years.
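
To put that target in context, a quick back-of-the-envelope calculation (assuming a sustained rate, which real networks rarely deliver) connects it to the petabytes-per-year figures quoted on the earlier CMS slide:

    # Back-of-the-envelope: what does a sustained 1 GByte/sec amount to?
    rate_gb_per_s = 1.0                          # GigaBytes per second
    per_day_tb = rate_gb_per_s * 86400 / 1000    # seconds per day -> ~86 TB/day
    per_year_pb = per_day_tb * 365 / 1000        # -> ~32 PB/year sustained

    print(f"{per_day_tb:.0f} TB/day, {per_year_pb:.1f} PB/year")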

Open Science Maryland 14 Who is OSG? Large global physics collaborations: US ATLAS, US CMS, LIGO, CDF, D0, STAR. Education projects, e.g. Mariachi and I2U2. Grid technology groups: Condor, Globus, SRM, NMI. Many DOE labs and DOE/NSF-sponsored university IT facilities. Partnerships, e.g. TeraGrid, European Grids, and regional/campus grids such as Texas and Wisconsin.

Open Science Maryland 15 The OSG Consortium. [Diagram: the Consortium comprises the funded Project, Contributors and Partners.]

Open Science Maryland 16 Current OSG deployment across the integration and production grids: 96 resources, 27 Virtual Organizations.

Open Science Maryland 17 Smaller VOs

Open Science Maryland 18 Large VOs: CMS, CDF, ATLAS.

Open Science Maryland 19 Running a Production Grid

Open Science Maryland 20 Running a Production Grid

Open Science Maryland 21 OSG Core Program of Work Integration: software and systems. Operations: common support and procedures. Inter-Operation: across administrative and technical boundaries.

Open Science Maryland 22 Release Process ("Subway Map"). [Diagram: over time, requirements are gathered, software is built and tested on a validation test bed, a release candidate becomes a VDT release, and the VDT release is exercised on the Integration Test Bed (ITB) before becoming an OSG release.]

Open Science Maryland 23 What is the VDT? A collection of software: Grid software (Condor, Globus and lots more), the Virtual Data System (the origin of the name "VDT"), and utilities, built for >10 flavors/versions of Linux. An easy installation: the goal is push a button and everything just works. Two methods: Pacman, which installs and configures it all (a sketch follows below), and RPM, which installs some of the software with no configuration. A support infrastructure. Build software.
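
As a rough idea of the Pacman method, here is a minimal sketch driving a Pacman install from Python. The cache URL and package label are illustrative only; actual VDT cache addresses and package names depend on the release.

    import os
    import subprocess

    # Minimal sketch of a Pacman-based VDT install.
    # The cache URL and package name are illustrative, not a real release.
    install_dir = "/opt/vdt"
    os.makedirs(install_dir, exist_ok=True)
    os.chdir(install_dir)

    # Pacman fetches, installs and configures the named package from a cache.
    subprocess.run(
        ["pacman", "-get", "http://vdt.example.edu/vdt_cache:Condor"],
        check=True,
    )

    # Afterwards the tools are picked up by sourcing setup.sh (or setup.csh)
    # from the install directory.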

Open Science Maryland 24 What software is in the VDT? Security: VOMS (VO membership), GUMS (local authorization), mkgridmap (local authorization), MyProxy (proxy management), GSI SSH, CA CRL updater. Monitoring: MonALISA, gLite CEMon. Accounting: OSG Gratia. Job management: Condor (including Condor-G & Condor-C), Globus GRAM. Data management: GridFTP (data transfer), RLS (replica location), DRM (storage management), Globus RFT. Information services: Globus MDS, GLUE schema & providers.

Open Science Maryland 25 What software is in the VDT? (continued) Client tools: Virtual Data System, SRM clients (V1 and V2), UberFTP (GridFTP client). Developer tools: PyGlobus, PyGridWare. Testing: NMI Build & Test, VDT tests. Support: Apache, Tomcat, MySQL (with MyODBC), non-standard Perl modules, Wget, Squid, Logrotate, configuration scripts. And more!

Open Science Maryland 26 VOs register with the Operations Center. Users register with their VO. Sites register with the Operations Center. VOs and sites provide a Support Center contact and join operations groups. We're all fun people!

Open Science Maryland 27 The OSG VO A VO for individual researchers and users. Managed by the OSG itself. Where one can learn how to use the Grid!

Open Science Maryland 28 Due diligence on security: risk assessment, planning, service auditing and checking, incident response, awareness and training, configuration management, user access authentication and revocation, auditing and analysis. End-to-end trust in the quality of code executed on remote CPUs - signatures? Identity and authorization: extended X509 certificates. OSG is a founding member of the US TAGPMA. DOEGrids provides script utilities for bulk requests of host certs, CRL checking, etc. VOMS extended attributes and infrastructure for role-based access controls.
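
For the identity and authorization piece, a minimal sketch of obtaining a VOMS proxy that carries a role attribute is shown below; the VO name and role are hypothetical examples of the extended X509 attributes mentioned above.

    import subprocess

    # Minimal sketch: create a VOMS proxy with a role attribute, then inspect it.
    # The VO ("myvo") and role ("production") are hypothetical examples.
    subprocess.run(
        ["voms-proxy-init", "-voms", "myvo:/myvo/Role=production"],
        check=True,
    )

    # Show the proxy's attributes (FQANs) that sites use for authorization.
    subprocess.run(["voms-proxy-info", "-all"], check=True)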

Open Science Maryland 29 Role Based Access Control
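
The role-based access control of this slide can be pictured as mapping a member's VO and VOMS role (FQAN) onto a local account and its privileges, which is roughly what GUMS does at OSG sites. The toy sketch below uses made-up VO names, roles and accounts and is not GUMS' actual configuration or API.

    # Toy illustration of role-based access control: map (VO, role) to a local
    # account. GUMS performs the real mapping at OSG sites; names are made up.
    ROLE_MAP = {
        ("myvo", "/myvo/Role=production"): "myvoprod",
        ("myvo", "/myvo/Role=NULL"):       "myvouser",
    }

    def map_user(vo: str, fqan: str) -> str:
        """Return the local account for a VO member's role, or refuse access."""
        try:
            return ROLE_MAP[(vo, fqan)]
        except KeyError:
            raise PermissionError(f"no mapping for {vo} with role {fqan}")

    # Example: a production-role proxy maps to the shared production account.
    print(map_user("myvo", "/myvo/Role=production"))  # -> myvoprod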

Open Science Maryland 30 Training - e.g. the Grid Summer Workshop, year 4: hands on, with technical trainers and a nice setting (Padre Island). Students got their own applications to run on OSG!

Open Science Maryland 31 Network Connectivity. OSG uses commodity networks: ESnet, campus LANs. Sites range from well provisioned for networking (e.g. connected to StarLight) to low-bandwidth connections (e.g. Taiwan). Connectivity ranges from full duplex, to outgoing only, to fully behind firewalls.

Open Science Maryland 32 Bridging Campus Grid Jobs - GLOW. Jobs are dispatched from the local security, job and storage infrastructure and "uploaded" to the wide-area infrastructure.

Open Science Maryland 33 FermiGrid? Interfacing all Fermilab resources to a common campus infrastructure and acting as a gateway to the Open Science Grid. A unified and reliable common interface and services through one FermiGrid gateway: security, job scheduling, user management and storage. Sharing resources: policies and agreements enable a fast response to changes in resource needs by Fermilab users. More information is available at

Open Science Maryland 34 Access to FermiGrid. [Diagram: OSG general users, OSG "agreed" users and Fermilab users reach the CDF farm, D0 farm, CMS farms and common farms through the FermiGrid gateway.]

Open Science Maryland 35 Web sites