Slide 1: TeraGrid and Open Science Grid
Ruth Pordes, Fermilab, representing the Open Science Grid Consortium
TeraGrid GIG Site Review, 17-19 November 2004

Slide 2: Open Science Grid Consortium
Building a community-driven, engineered Grid infrastructure for large-scale science:
- Building from Grid3, a loosely coherent grid infrastructure of 25 sites and more than 3,000 CPUs, through partnering of NSF iVDGL, GriPhyN, DOE PPDG, and US LHC software and computing.
- Commitment from US LHC Tier-1 and Tier-2 centers and DOE laboratories to make resources accessible to the consortium and to contribute to it, especially as the LHC ramps up over the next 4+ years.
- Peering with other Grids, especially EGEE/LCG in Europe.
- Partnering with Grid infrastructures in the US, especially TeraGrid, as well as university campus Grids.
- Reaching out to scientific communities beyond the original physics drivers: e.g., biology, chemistry, astronomy.
- Outreach and education to enable remote and small-community participation.

Slide 3: (no transcript text)

Slide 4: Character of Open Science Grid
- Distributed ownership of resources.
- Local facility policies, priorities, and capabilities need to be supported.
- DOE- and NSF-funded involvement.
- A mix of agreed-upon performance expectations and opportunistic resource use (a minimal sketch of the idea follows this list).
- Infrastructure based on the Virtual Data Toolkit (VDT).
- The infrastructure will be scaled incrementally, with milestones, to support stable running of a mix of increasingly complex jobs and data management.
- Peer collaboration of computer and application scientists, facility, technology, and resource providers: an "end to end" approach.
- Support for many VOs, from the large (thousands) to the very small and dynamic (down to a single researcher or a high school class).
- Loosely coupled, consistent infrastructure: a "Grid of Grids".
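The mix of agreed-upon shares and opportunistic use can be pictured as a simple site-level allocation rule. The Python below is a purely conceptual sketch, not OSG or VDT software; the VO names, slot counts, and share fractions are hypothetical.

```python
# Conceptual sketch only: a site honors guaranteed shares for its owner VOs
# and offers whatever the owners leave idle to other VOs opportunistically.
# VO names, slot counts, and share fractions are hypothetical.

GUARANTEED_SHARES = {"uslhc-tier2": 0.60, "local-campus": 0.20}  # fraction of slots
TOTAL_SLOTS = 1000

def slots_for(vo: str, demand: dict[str, int]) -> int:
    """Return how many batch slots a VO may use at this site right now."""
    if vo in GUARANTEED_SHARES:
        return int(GUARANTEED_SHARES[vo] * TOTAL_SLOTS)
    # Opportunistic VOs split whatever the owner VOs are not currently using.
    used_by_owners = sum(min(demand.get(v, 0), int(share * TOTAL_SLOTS))
                         for v, share in GUARANTEED_SHARES.items())
    idle = TOTAL_SLOTS - used_by_owners
    opportunistic = [v for v in demand if v not in GUARANTEED_SHARES]
    return idle // max(len(opportunistic), 1)

# Example: the owners are busy, so a visiting VO gets the leftover cycles.
demand = {"uslhc-tier2": 700, "local-campus": 50, "sdss": 400}
print(slots_for("sdss", demand))  # prints 350 under these hypothetical numbers
```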

Slide 5: Communities Span Grids
- Users need to run their jobs, transparently and seamlessly, and access their data across many grid infrastructures.
- Facilities need to make their resources accessible to multiple grid environments.
- Grid infrastructures need to interoperate and federate through standard interfaces and secure, accessible services.
- OSG users need dynamically installed, VO-specific environments (see the sketch after this list).
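Running "transparently" across grids ultimately means translating one abstract job into a grid-specific submission together with its VO-specific environment. The sketch below is a hypothetical Python illustration: the gatekeeper hostnames, jobmanager types, and VO application paths are placeholders, and the rendered text only follows the general shape of a Condor-G (globus universe) submit description rather than any site's real configuration.

```python
# Hypothetical illustration: render one abstract job into a Condor-G style
# submit description for whichever site (OSG or TeraGrid) was chosen.
# Hostnames, jobmanager types, and VO software paths are placeholders.

SITES = {
    "osg-site":      {"gatekeeper": "gk.osg-site.example.edu/jobmanager-condor",
                      "vo_app_dir": "/storage/app"},
    "teragrid-site": {"gatekeeper": "gk.tg-site.example.edu/jobmanager-pbs",
                      "vo_app_dir": "/apps/community"},
}

def submit_description(site: str, vo: str, executable: str) -> str:
    """Build a Condor-G style job description targeting one site's gatekeeper."""
    s = SITES[site]
    return "\n".join([
        "universe = globus",
        f"globusscheduler = {s['gatekeeper']}",
        f"executable = {executable}",
        # The dynamically installed, VO-specific environment is handed to the job.
        f"environment = VO_NAME={vo};VO_APP_DIR={s['vo_app_dir']}/{vo}",
        "output = job.out",
        "error = job.err",
        "log = job.log",
        "queue",
    ])

# The same job can be expressed for either infrastructure without the user
# knowing which gatekeeper or local batch system sits behind it.
print(submit_description("teragrid-site", "sdss", "analyze.sh"))
```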

Slide 6: OSG and TeraGrid: Goals
- Sustain a production-quality, scalable, performant common infrastructure for applications over the Grid.
- Deploy, integrate, and support higher-level, more complex services for policy, data management, workflow, system performance, and optimization.
- Address the needs of diverse applications, especially distributed ad hoc analyses, with many different users, including dynamic and evolving groups of researchers.
- Share the common goal of virtualizing resources in support of broad science applications.

Slide 7: Opportunities for Partnership
- The TeraGrid 5-year plan matches the OSG roadmap of 5-7 years, which is needed to meet the life cycles of the initial scientific participants.
- Increase effectiveness by configuring access to a joint set of resources (which needs to expand by orders of magnitude in the next few years).
- Increase efficiency by enabling use of the best architecture for a particular problem, scheduling jobs across TeraGrid and OSG resources (see the sketch after this list).
- Increase efficiency through joint activities such as testing and deployment of new services.
- Provide seamless access to storage and data across the two infrastructures.
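"Best architecture for a particular problem" can be read as a routing decision between capability-class TeraGrid systems and high-throughput OSG cycles. The Python below is a conceptual sketch under assumed job attributes (core count, coupling, task count); it does not describe any real metascheduler.

```python
# Conceptual sketch: route a job to the infrastructure whose architecture fits
# it best. The attribute names and thresholds are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int
    tightly_coupled: bool   # e.g. an MPI job needing a low-latency interconnect
    task_count: int = 1     # number of independent tasks (ensembles, analyses)

def route(job: Job) -> str:
    if job.tightly_coupled and job.cores > 64:
        return "TeraGrid"   # capability-class systems with fast interconnects
    if job.task_count > 100 and not job.tightly_coupled:
        return "OSG"        # high-throughput, opportunistically scavenged cycles
    return "either"         # small jobs run wherever a slot frees up first

for job in (Job("cfd-run", cores=512, tightly_coupled=True),
            Job("event-simulation", cores=1, tightly_coupled=False, task_count=5000)):
    print(job.name, "->", route(job))
```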

Slide 8: Initial Goals
- Access to TeraGrid resources for jobs submitted by OSG users, subject to agreements and policies.
- Access to OSG resources for jobs submitted by TeraGrid users, subject to agreements and policies.
- Understand and address commonality and interoperability in the two middleware software stacks.
- Define policies and agreements in support of interfacing and integration.

Slide 9: Proposed Roadmap (timeline; gateway capability/service; gateway users)
- 6 mo: VDT software installed on TeraGrid systems. OSG-TeraGrid policy and mechanism planning document.
- ... mo: Two OSG applications ported to TeraGrid systems. First applications operational on OSG and TeraGrid simultaneously.
- ... mo: TeraGrid supports the manual configuration of "virtual clusters" that can be used to expand the size of OSG while maintaining OSG services. Further OSG applications operational on TeraGrid. Community allocations, accounting, authorization.
- ... mo: OSG incorporates TeraGrid-based virtual clusters. Several OSG applications operational on the expanded system.
- ... mo: OSG incorporates virtual storage resources located at TeraGrid sites. Several OSG applications make use of TeraGrid virtual storage resources. Simple coordinated operations.
- ... mo: TeraGrid supports on-demand allocation and configuration of virtual clusters with negotiated service level agreements. TeraGrid used to support other OSG virtual services, such as data distribution.
- ... mo: OSG applications make routine use of dynamically allocated virtual clusters. Transfer of loosely coupled tasks from TeraGrid to OSG resources implemented and incorporated into the TeraGrid metascheduler.
- ... mo: OSG resources used to support data distribution services for TeraGrid users. Coordinated incident response.
- ... mo: End-to-end monitoring and problem determination. Fault-tolerant services with dynamic failover from OSG to TeraGrid (see the sketch after this list).
- ... mo: TeraGrid resources a major source of resources for LHC and other major national data-intensive science communities. Gateway users: 3000.
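One of the later roadmap items is fault-tolerant services with dynamic failover from OSG to TeraGrid. The sketch below shows the general idea in Python; submit_to(), the site names, and the retry policy are hypothetical stand-ins, not part of either project's software.

```python
# Hypothetical sketch of dynamic failover: try the preferred OSG sites first,
# then fall back to a TeraGrid virtual cluster if every OSG attempt fails.
# submit_to() is a stand-in for a real submission call (e.g. via Condor-G).

import random

OSG_SITES = ["osg-site-a", "osg-site-b"]
TERAGRID_FALLBACK = "teragrid-virtual-cluster"

def submit_to(site: str, job: str) -> bool:
    """Stand-in for a real grid submission; fails randomly to exercise failover."""
    return random.random() > 0.5

def submit_with_failover(job: str) -> str:
    """Return the site that accepted the job, preferring OSG over TeraGrid."""
    for site in OSG_SITES:
        if submit_to(site, job):
            return site
    # Dynamic failover: the job lands on a TeraGrid-hosted virtual cluster.
    if submit_to(TERAGRID_FALLBACK, job):
        return TERAGRID_FALLBACK
    raise RuntimeError(f"all submission attempts failed for {job}")

print("job accepted at:", submit_with_failover("lhc-reconstruction"))
```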