Open Science Grid at Condor Week


Ruth Pordes, Fermilab, April 25, 2006

Outline
- OSG goals and organization
- Drivers and use today
- Middleware
- Focus and roadmap

What and Who is Open Science Grid?
- High Throughput Distributed Facility: shared opportunistic access to existing clusters, storage and networks; owner-controlled resources and usage policies.
- Supports Science: funded by NSF and DOE projects; common technologies and cyber-infrastructure.
- Open and Inclusive: a collaboration of users, developers, grid technologists and facility administrators; training and help for existing and new administrators and users.
- Heterogeneous.

OSG Organization
- Executive Director: Ruth Pordes
- Facility Coordinator: Miron Livny
- Application Coordinators: Torre Wenaus and Frank Würthwein
- Resource Managers: P. Avery and A. Lazzarini
- Education Coordinator: Mike Wilde
- Engagement Coordinator: Alan Blatecky
- Middleware Coordinator: Alain Roy
- Operations Coordinator: Leigh Grundhoefer
- Security Officer: Don Petravick
- Liaison to EGEE: John Huth
- Liaison to TeraGrid: Mark Green
- Council Chair: Bill Kramer
Condor project personnel hold several of these coordination roles.

OSG Drivers
- Research groups transitioning from and extending (legacy) systems to Grids: LIGO (gravitational wave physics), STAR (nuclear physics), CDF and D0 (high energy physics), SDSS (astrophysics), GADU (bioinformatics), nanoHUB, etc. Evolution of iVDGL, GriPhyN, PPDG.
- US LHC Collaborations: contribute to and depend on the milestones, functionality and capacity of OSG; commitment to general solutions and to sharing resources and technologies.
- Application Computer Scientists (NMI, Condor, Globus, SRM): contribute to technology, integration, operation; excited to see technology put to production use!
- Federations with Campus Grids (GLOW, FermiGrid, GROW, Crimson, TIGRE): bridge and interface local and wide-area grids.
- Interoperation and partnerships with national/international infrastructures (EGEE, TeraGrid, INFNGrid): ensure transparent and ubiquitous access; work towards standards.

LHC Physics gives schedule and performance stress
- Beam starts in 2008: the distributed system must serve 20 PB of data, held on 30 PB of disk distributed across 100 sites worldwide, to be analyzed by 100 MSpecInt2000 of CPU (see the back-of-envelope rate estimate below).
- Data flows of order 1 GigaByte/sec.
- Service Challenges give steps towards the full system.
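As a rough consistency check (not on the original slide), serving 20 PB over a year corresponds to an average rate of a little over half a gigabyte per second, so transfer capacity of order 1 GigaByte/sec is the right scale. The byte conversion and the one-year averaging window are my assumptions:

```
\frac{20\ \mathrm{PB}}{1\ \mathrm{yr}}
  \approx \frac{2\times 10^{16}\ \mathrm{bytes}}{3.15\times 10^{7}\ \mathrm{s}}
  \approx 6\times 10^{8}\ \mathrm{bytes/s}
  \approx 0.6\ \mathrm{GB/s}
```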

Priority to many other stakeholders
- New science enabled by opportunistic use of resources. E.g., from the OSG Proposal: with an annual science run collecting roughly a terabyte of raw data per day, LIGO regards it as critical to transparently carry out its data analysis on the opportunistic cycles available on other VOs' hardware.
- Opportunity to share use of a "standing army" of resources. E.g., from OSG news: the Genome Analysis and Database Update (GADU) system uses OSG resources to run BLAST, Blocks and Chisel on all publicly available genome sequence data, serving over 2,400 researchers worldwide. GADU processes 3.1 million protein sequences in about 500 batch jobs per day over several weeks, repeated monthly (a rough per-job scale estimate follows below).
- Interface existing computing and storage facilities and Campus Grids to a common infrastructure. E.g., the FermiGrid strategy: allow opportunistic use of otherwise dedicated resources; save effort by implementing shared services; work coherently to move all applications and services to run on the Grid.
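For a sense of scale (not stated on the slide), taking "several weeks" as roughly four weeks, which is my assumption, the quoted numbers imply a few hundred sequences per batch job:

```
\frac{3.1\times 10^{6}\ \text{sequences}}{500\ \text{jobs/day}\times 28\ \text{days}}
  \approx 220\ \text{sequences per job}
```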

OSG Use - the Production Grid

OSG Sites Today
- 23 VOs; more than 18,000 batch slots registered.
- ... but only about 15% of that capacity is used via the grid interfaces that are monitored (see the estimate below); a large fraction of use is local rather than through the grid.
- Caveats: not all registered slots are available to grid users; not all available slots are available to every grid user; not all slots used are monitored.
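As a quick back-of-envelope (not on the slide), 15% of the registered slots is a few thousand slots, the same order of magnitude as the roughly 2,000 running jobs in the daily-monitoring snapshot on the next slide:

```
0.15 \times 18{,}000 \approx 2{,}700\ \text{slots used via monitored grid interfaces}
```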

Use - Daily Monitoring (04/23/2006)
Monitoring snapshot: roughly 2,000 running jobs and 500 waiting jobs, including "bridged" GLOW jobs and genome analysis (GADU).

Use - 1 Year View
One-year usage plot, annotated with the OSG "launch", the deployment of OSG Release 0.4.0, and LHC use.

Common Middleware provided through the Virtual Data Toolkit (VDT)
Domain science requirements, together with technologies from Globus, Condor, EGEE, etc. (the Condor project is one of the contributing providers), feed joint projects between OSG stakeholders and middleware developers. Components are tested on a "VO specific grid", integrated into a VDT release, deployed on the OSG integration grid, and finally included in an OSG release and deployed to OSG production.

Middleware Services and Grid Site Interfaces
- Grid Access: X.509 user certificate extended using EGEE-VOMS; Condor-G job submission (a submit-file sketch follows below); jobs may place initial data.
- Middleware Services - VO Management: EGEE-VOMS extends the certificate with attributes based on the user's role.
- Grid Site Interfaces:
  - Monitoring and Information.
  - Processing Service - Job Submission: GRAM (pre-WS or WS); overhead mitigations: Condor GridMonitor and a Condor-managed fork queue; site- and VO-based authorization and account mapping.
  - Data Movement: GridFTP.
  - Identity and Authorization: GUMS certificate-to-account mapping.
  - Data Storage and Management: Storage Resource Management (SRM) or local environment variables ($APP, $GRID_WRITE, etc.); shared file systems; site- and VO-based authorization and access mapping.
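To make the access path concrete, here is a minimal Condor-G submit description sketch for the kind of submission the slide describes (grid universe through a pre-WS GRAM gatekeeper). The gatekeeper host, jobmanager name, proxy path and file names are placeholders of my own, and the available attributes vary with the Condor version, so treat this as an illustrative sketch rather than an OSG-prescribed template.

```
# Hypothetical Condor-G submit file: run one job at an OSG site via pre-WS GRAM.
# Assumes a VOMS-extended X.509 proxy was created beforehand (e.g. with voms-proxy-init).

universe       = grid
# gt2 selects pre-WS GRAM; the gatekeeper host and jobmanager are placeholders.
grid_resource  = gt2 gatekeeper.example.edu/jobmanager-condor
x509userproxy  = /tmp/x509up_u1234

executable = analyze.sh
arguments  = input.dat
# "Jobs may place initial data": stage the input file along with the job.
transfer_input_files = input.dat

output = job.out
error  = job.err
log    = job.log

queue
```

On the site side, Condor-G's GridMonitor reduces per-job polling load on the gatekeeper, and routing fork jobs through a Condor-managed queue keeps the head node from being overwhelmed; these are the overhead mitigations the slide lists.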

Providing a Distributed Facility
OSG allows you to "SURF" the Grid: Secure, Usable, Reliable, Flexible.

Security - Management, Operational and Technical Controls
- Management: risk assessment and security planning; service auditing and checking.
- Operational: incident response; awareness and training; configuration management.
- Technical: authentication and revocation; auditing and analysis.
- End-to-end trust in the quality of code executed on remote CPUs.

Usability - Throughput, Scaling, Fault Diagnosis
- OSG users need high throughput; the focus is on effective utilization.
- VO control of use, for example: batch system priority based on VO activity; write-authorization and quotas controlled by VO role; data transfer prioritized by the role of the user (see the configuration sketch after this list).
- Usability goals include:
  - Minimize the entry threshold for resource owners: minimal software stack, minimal support load.
  - Minimize the entry threshold for users: feature-rich software stack, excellent user support.
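One common way a site expresses this kind of VO-based control in Condor is through accounting groups with quotas and priority factors. The group names and numbers below are placeholders of my own, and the exact knobs and their behavior vary across Condor versions and site setups; this is a sketch of the mechanism, not an OSG-prescribed configuration.

```
# Hypothetical condor_config snippet: per-VO shares via accounting groups.
# Assumes the site maps each VO's grid users to an accounting group
# (for example in the jobmanager or a submit wrapper).

GROUP_NAMES = group_cms, group_atlas, group_ligo, group_other

# Static slot quotas per VO (placeholder numbers).
GROUP_QUOTA_group_cms   = 400
GROUP_QUOTA_group_atlas = 400
GROUP_QUOTA_group_ligo  = 100
GROUP_QUOTA_group_other = 100

# Allow groups to compete for slots beyond their quota when others are idle
# (opportunistic use of unused capacity).
GROUP_AUTOREGROUP = True

# Relative fair-share priority factors (lower value means better priority).
GROUP_PRIO_FACTOR_group_cms   = 1.0
GROUP_PRIO_FACTOR_group_other = 10.0
```

Write-authorization, quotas and transfer priority by VOMS role sit on the storage and data-movement side (for example in GUMS mappings and the SRM/GridFTP services) rather than in the batch system, so this sketch covers only the compute-priority part of the bullet.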

Reliability - Central Operations Activities
- Daily Grid Exerciser: automated validation of basic services and site configuration.
- Configuration of the head node and storage to reduce errors: remove dependence on a shared file system; Condor-managed GRAM fork queue.
- Scaling tests of WS-GRAM and GridFTP.

Flexibility
- Multiple user interfaces.
- Support for multiple versions of the middleware releases.
- Heterogeneous resources.
- Sites retain local control over the use of their resources, including storage and space (no global file system).
- Distributed support model through VO and Facility Support Centers, which contribute to global operations.

Finally - the OSG Roadmap
- Sustain the users, the facility, and the experts. Continually improve effectiveness and throughput.
- Capabilities and schedule driven by the science stakeholders: maintain the production system while integrating new features.
- Enable new sites, campuses and grids to contribute while maintaining self-control.

Release     Planned       Actual         Content
OSG 0.2     Spring 2005   July 2005
OSG 0.4.0   Dec 2005      January 2006
OSG 0.4.1   April 2006    May 2006       WS-GRAM; Condor-based fork queue; accounting V0; interoperability with LCG using the Resource Broker (RB)
OSG 0.6.0   July 2006                    Edge services; Condor-C; test of the gLite WMS
OSG 0.8.0   End of 2006
OSG 1.0     2007