What is Open Science Grid? (R. Pordes, I Brazilian LHC Computing Workshop)

Presentation transcript:

Slide 1: What is Open Science Grid?
- High Throughput Distributed Facility
  - Shared opportunistic access to existing clusters, storage and networks.
  - Owner-controlled resources and usage policies.
- Supports Science
  - Funded by NSF and DOE projects.
  - Common technologies & cyber-infrastructure.
- Open and Heterogeneous
  - Research groups transitioning from & extending (legacy) systems to grids; experiments developing new systems.
  - Application computer scientists: real-life use of technology, integration, operation.

Slide 2: Who is OSG: a Consortium
- US DOE HENP laboratory facilities + universities
- (US) LHC collaborations + offshore sites
- LIGO
- Condor Project
- Running HENP experiments: CDF, D0, STAR…
- Globus/CDIGS
- LBNL SDM
A collaboration of users, developers, grid technologists, and facility administrators, with training & help for administrators and users.

Slide 3: OSG, one day last week
- 50 clusters, used locally as well as through the grid
- 5 large disk or tape stores
- 23 VOs
- >2000 jobs running through the grid
[Job-monitoring snapshot labels from the slide: Bioinformatics; LHC; Run II; routed from the local UWisconsin campus grid (2000 running jobs, 500 waiting jobs).]

Slide 4: Broad Engagement

Slide 5: The OSG World: Partnerships
Campus grids:
- GRid Of IoWa (GROW)
- Grid Laboratory Of Wisconsin (GLOW)
- Crimson Grid
- Texas Advanced Computing Center
- Center for Computational Research, Buffalo
- TIGRE
- FermiGrid
Grid projects: DISUN, CDIGS
National grids: TeraGrid, HEP-Brazil
International grids: EGEE

Slide 6: What is an OSG Job?
"Work done" accomplished by, and delivered as, "benefit received"; accountable to multiple organizations.
From the diagram: a job submitted through Condor-G (using MyApplication, the EGEE RB, VDS, or OSG ReSS for job submission and resource selection) does work benefiting WLCG and is counted on the campus grid, on OSG, and on EGEE (a submission sketch follows below).
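To make the Condor-G submission path concrete, here is a minimal sketch, not taken from the slides: the gatekeeper host, executable, and arguments are hypothetical placeholders, and a valid grid (VOMS) proxy is assumed to exist before submission.

```python
import subprocess
import textwrap

# Minimal Condor-G submit description for a grid-universe job.
# The gatekeeper contact string and executable are hypothetical placeholders.
submit_description = textwrap.dedent("""\
    universe      = grid
    grid_resource = gt2 gatekeeper.example.edu/jobmanager-condor
    executable    = my_analysis.sh
    arguments     = --dataset run123
    output        = job.$(Cluster).out
    error         = job.$(Cluster).err
    log           = job.$(Cluster).log
    queue
""")

with open("osg_job.sub", "w") as f:
    f.write(submit_description)

# Hand the description to Condor-G; this assumes a valid grid proxy is in place.
subprocess.run(["condor_submit", "osg_job.sub"], check=True)
```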

Slide 7: Common Middleware provided through the Virtual Data Toolkit
Stages shown in the original flow diagram:
- Domain science requirements
- OSG stakeholder and middleware developer (joint) projects
- Integrate into a VDT release (Globus, Condor, EGEE, etc.)
- Test on a "VO specific grid"
- Deploy on the OSG integration grid
- Include in the OSG release & deploy to OSG production

Slide 8: Reliable: Central Operations Activities
- Automated validation of basic services and site configuration (a probe sketch follows below)
- Configuration of the head node and storage to reduce errors:
  - Remove dependence on a shared file system
  - Condor-managed GRAM fork queue
- Scaling tests of WS-GRAM and GridFTP
- Daily Grid Exerciser
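As a hedged illustration of what an automated site-validation probe might do (this is not the actual OSG validation tooling; the gatekeeper hostname and destination path are hypothetical, and Globus clients plus a valid proxy are assumed):

```python
import subprocess

# Hypothetical gatekeeper host; a real probe would iterate over a site list.
SITE = "gatekeeper.example.edu"

def passes(cmd):
    """Run a command and report True if it exits with status 0."""
    return subprocess.run(cmd, capture_output=True, text=True).returncode == 0

checks = {
    # The GRAM fork jobmanager answers and runs a trivial job.
    "gram_fork": ["globus-job-run", f"{SITE}/jobmanager-fork", "/bin/hostname"],
    # GridFTP accepts a small transfer (destination path is a placeholder).
    "gridftp":   ["globus-url-copy", "file:///etc/hostname",
                  f"gsiftp://{SITE}:2811/tmp/osg-probe-test"],
}

for name, cmd in checks.items():
    print(f"{name}: {'OK' if passes(cmd) else 'FAILED'}")
```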

Slide 9: OSG Drivers
- Research groups transitioning from & extending (legacy) systems to grids (LIGO, gravitational-wave physics; STAR, nuclear physics; CDF and D0, high-energy physics; SDSS, astrophysics; GADU, bioinformatics; nanoHUB)
  - US LHC collaborations: contribute to & depend on the milestones, functionality, and capacity of OSG; commitment to general solutions, sharing resources & technologies.
- Application computer scientists (NMI, Condor, Globus, SRM): real-life use of technology, integration, operation.
- Federations with campus grids (GLOW, FermiGrid, GROW, Crimson Grid, TIGRE): bridge & interface local & wide-area grids.
- Interoperation & partnerships with national/international infrastructures (EGEE, TeraGrid, INFNGrid): ensure transparent and ubiquitous access; work towards standards.

Slide 10: LHC physics drives the schedule and performance envelope
- Beam starts in 2008.
- The distributed system must serve 20 PB of data, spread across 30 PB of disk distributed over 100 sites worldwide, to be analyzed by 100 MSpecInt2000 of CPU.
- Service Challenges give the steps to the full system.
(Figure annotation from the slide: 1 GigaByte/sec; a back-of-the-envelope rate check follows below.)
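A quick back-of-the-envelope check, not from the slide, relates the quoted rate to the yearly data volume: sustaining 1 GigaByte/sec for a full year moves roughly

\[
1~\mathrm{GB/s} \times 3.15\times10^{7}~\mathrm{s/yr} \approx 3.15\times10^{7}~\mathrm{GB/yr} \approx 31.5~\mathrm{PB/yr},
\]

which is the same order of magnitude as the 20-30 PB the slide asks the distributed system to serve and store.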

Slide 11: Bridging Campus Grid Jobs: GLOW
- Dispatch jobs from the local security, job, and storage infrastructure, "uploading" them to the wide-area infrastructure.
- Fast ramp-up in the last week.
- Currently running the football pool problem, which has applications in data compression, coding theory, and statistical designs.

Slide 12: GADU: Genome Analysis and Database Update system
- Request: 1000 CPUs for 1-2 weeks, once a month.
- 3 different applications: BLAST, Blocks, Chisel.
- Currently ramping up on OSG, receiving 600 CPUs and running 17,000 jobs a week (a rough per-job estimate follows below).
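A rough consistency check, not from the slide and assuming the 600 CPUs stay busy around the clock, puts the average job length at about

\[
\frac{600~\mathrm{CPUs} \times 168~\mathrm{h/week}}{17{,}000~\mathrm{jobs/week}} \approx 5.9~\mathrm{CPU\text{-}hours~per~job}.
\]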

Slide 13: Common Middleware provided through the Virtual Data Toolkit
The same flow diagram as slide 7 (domain science requirements; stakeholder and developer joint projects; integrate into a VDT release from Globus, Condor, EGEE, etc.; test on a "VO specific grid"; deploy on the OSG integration grid; include in the OSG release & deploy to production), here highlighting the Condor project.

Slide 14: Of course a special grid… it's the people… (some of them at the consortium meeting in Jan 06)

Slide 15: TeraGrid
- Through high-performance network connections, TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the (US) country.
- CDF Monte Carlo jobs run on the Purdue TeraGrid resource; they are able to access OSG data areas and be accounted to both grids.

Slide 16: OSG: More than a US Grid
- Taiwan (CDF, LHC)
- Brazil (D0, STAR, LHC)
- Korea

Slide 17: OSG: Where to find information
- OSG web site
- Work in progress: viewGuide
- Virtual Data Toolkit
- News about grids in science in "Science Grid This Week"
- OSG Consortium meeting, Seattle, Aug 21st
Thank you!

Slide 18: OSG - EGEE Interoperation for WLCG Jobs
[Architecture diagram, picture thanks to I. Fisk: a VO UI and VO RB (resource broker) consult a BDII via LDAP URLs; Tier-2 sites are reached through GRAM compute and SRM storage interfaces, with site SRMs in front of the data stores. An example BDII query follows below.]
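The BDII in the diagram is an LDAP server publishing resources in the Glue schema. As a hedged sketch (the BDII hostname is a hypothetical placeholder), a broker or user could list the advertised compute elements with an anonymous query like this:

```python
import subprocess

# Hypothetical top-level BDII host; a real VO would point at its infrastructure's BDII.
BDII = "bdii.example.org"

# Anonymous LDAP query for compute elements published under the Glue 1.x schema.
cmd = [
    "ldapsearch", "-x", "-LLL",
    "-H", f"ldap://{BDII}:2170",
    "-b", "o=grid",
    "(objectClass=GlueCE)",
    "GlueCEUniqueID",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)

# Print one compute-element contact string per line.
for line in result.stdout.splitlines():
    if line.startswith("GlueCEUniqueID:"):
        print(line.split(":", 1)[1].strip())
```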

Slide 19: Open Science Grid in 1 minute
- OSG Resources: use and policy under owner control; clusters and storage shared across local, campus intra-grid, regional grid, and large federated inter-grids.
- OSG Software Stack: based on the Virtual Data Toolkit. Interfaces: Condor-G job submission; GridFTP data movement; SRM storage management; Glue Schema v1.2 with easy-to-configure GIPs; CEMon coming in 3 months.
- OSG Use: register the VO with the Operations Center; provide the URL for the VOMS service (this must be propagated to sites) and a contact for the Support Center; join the operations groups.
- OSG Job Brokering and Site Selection: no central or unique service. LIGO uses Pegasus; SDSS uses VDS; STAR uses the STAR scheduler; CMS uses the EGEE RB; ATLAS uses Panda; CDF uses the CDF GlideCAF; D0 uses SAM-JIM; GLOW uses a "condor-schedd on the side"; nanoHUB uses an application portal.
- OSG Storage & Space Management: shared file systems; persistent VO application areas; SRM interfaces (a data-movement sketch follows below).
- OSG Operations: distributed, including each VO and campus grid; Operations is also a WLCG ROC.
- OSG Accounting & Monitoring: MonALISA; can support R-GMA; OSG meters/probes for Condor being released soon; US Tier-1s report monthly to WLCG APEL.
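As a hedged illustration of the VOMS and SRM interfaces listed above (the VO name, storage element hostname, and paths are hypothetical, and the exact SRM URL form depends on the site's SRM version), fetching a file from a site SRM might look like this sketch:

```python
import subprocess

VO = "myvo"                                             # hypothetical VO name
SRC = "srm://se.example.edu:8443/data/myvo/file.root"   # hypothetical SRM URL
DST = "file:///tmp/file.root"

# Obtain a VOMS proxy carrying the VO attributes (prompts for the grid passphrase),
# then copy the file through the site's SRM interface.
subprocess.run(["voms-proxy-init", "-voms", VO], check=True)
subprocess.run(["srmcp", SRC, DST], check=True)
```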

Slide 20: Services to the US Tier-1 Sites
LHCOPN, April 4th, 2006. Joe Metzger, ESnet Engineering Group, Lawrence Berkeley National Laboratory.

Slide 21: ESnet Target Architecture: High-reliability IP Core
[Network map: IP core connecting Seattle, Sunnyvale, LA, San Diego, Denver, Albuquerque, Chicago, Cleveland, Atlanta, New York, and Washington DC; legend marks primary DOE labs, possible hubs, SDN hubs, and IP core hubs.]

Slide 22: ESnet Target Architecture: Science Data Network
[Network map: Science Data Network core connecting Seattle, Sunnyvale, LA, San Diego, Denver, Albuquerque, Chicago, Cleveland, Atlanta, New York, and Washington DC; legend marks primary DOE labs, possible hubs, SDN hubs, and IP core hubs.]

Slide 23: ESnet Target Architecture: IP Core + Science Data Network Core + Metro Area Rings
[Network map: IP core and SDN core with metropolitan area rings and loops off the backbone, connecting Seattle, Sunnyvale, LA, San Diego, Denver, Albuquerque, Chicago, Cleveland, Atlanta, New York, and Washington DC; legend marks Gbps circuits, production IP core, Science Data Network core, metropolitan area networks/rings, international connections, primary DOE labs, possible hubs, SDN hubs, and IP core hubs.]