GRIDS Center: Grid Research Integration Development & Support
Chicago – NCSA – SDSC – USC/ISI – Wisconsin

GRIDS Part of the NSF Middleware Initiative (NMI)
GRIDS, part of the NSF Middleware Initiative (NMI), comprises:
- The Information Sciences Institute (ISI) at the University of Southern California (Carl Kesselman)
- The University of Chicago (Ian Foster)
- The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (Randy Butler)
- The University of California at San Diego (Phil Papadopoulos)
- The University of Wisconsin at Madison (Miron Livny)

GRIDS Part of the NSF Middleware Initiative (NMI)
Enabling Seamless Collaboration
GRIDS will help distributed communities pursue common goals:
- Scientific research
- Engineering design
- Education
- Artistic creation
Focus is on the enabling mechanisms required for collaboration:
- Resource sharing as a fundamental concept

GRIDS Part of the NSF Middleware Initiative (NMI)
Grid Computing Rationale
The need for flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources.
See "The Anatomy of the Grid: Enabling Scalable Virtual Organizations" by Foster, Kesselman, and Tuecke (in the "Publications" section).
The need for communities ("virtual organizations") to share geographically distributed resources as they pursue common goals, while assuming the absence of:
- central location
- central control
- omniscience
- existing trust relationships

GRIDS Part of the NSF Middleware Initiative (NMI)
Elements of Grid Computing
Resource sharing
- Computers, storage, sensors, networks
- Sharing is always conditional, based on issues of trust, policy, negotiation, payment, etc.
Coordinated problem solving
- Beyond client-server: distributed data analysis, computation, collaboration, etc.
Dynamic, multi-institutional virtual organizations
- Community overlays on classic org structures
- Large or small, static or dynamic

GRIDS Part of the NSF Middleware Initiative (NMI)
Resource-Sharing Mechanisms
- Should address security and policy concerns of resource owners and users
- Should be flexible and interoperable enough to deal with many resource types and sharing modes
- Should scale to large numbers of resources, participants, and/or program components
- Should operate efficiently when dealing with large amounts of data & computational power

GRIDS Part of the NSF Middleware Initiative (NMI)
Grid Applications
Science portals
- Help scientists overcome steep learning curves of installing and using new software
- Solve advanced problems by invoking sophisticated packages remotely from Web browsers or "thin clients"
- Portals are currently being developed in biology, fusion, computational chemistry, and other disciplines
Distributed computing
- High-speed workstations and networks can yoke together an organization's PCs to form a substantial computational resource
- E.g., U.S. and Italian mathematicians pooled resources for one week, aggregating 42,000 CPU-days to solve "Nug30"

Grid Portals

Distributed Computing to Evaluate AIDS Drugs
Community =
- 1000s of home computer users
- Philanthropic computing vendor (Entropia)
- Research group (Scripps)
Common goal = advance AIDS research

GRIDS Part of the NSF Middleware Initiative (NMI)
More Grid Applications
Large-scale data analysis
- Science increasingly relies on large datasets that benefit from distributed computing and storage
- E.g., the Large Hadron Collider at CERN will generate many petabytes of data from high-energy physics experiments, with single-site storage impractical for technical and political reasons
Computer-in-the-loop instrumentation
- Data from telescopes, synchrotrons, and electron microscopes are traditionally archived for batch processing
- Grids are permitting quasi-real-time analysis that enhances the instruments' capabilities
- E.g., with sophisticated "on-demand" software, astronomers may be able to use automated detection techniques to zoom in on solar flares as they occur

Large-scale Data Analysis

GRIDS Part of the NSF Middleware Initiative (NMI)
Still More Grid Applications
Collaborative work
- Researchers often want to aggregate not only data and computing power, but also human expertise
- Grids enable collaborative problem formulation and data analysis
- E.g., an astrophysicist who has performed a large, multi-terabyte simulation could let colleagues around the world simultaneously visualize the results, permitting real-time group discussion
- E.g., civil engineers collaborate to design, execute, & analyze shake-table experiments

Collaboration via Online Access to Scientific Instruments

GRIDS Part of the NSF Middleware Initiative (NMI)
Grids and Industry
Grid computing has much in common with major industrial thrusts
- Business-to-business, peer-to-peer, application service providers, storage service providers, distributed computing, Internet computing, etc.
- Outsourcing increases decentralization of resources
Sharing issues are not adequately addressed by existing technologies
- Complicated requirements: "run program X at site Y subject to community policy P, providing access to data at Z according to policy Q"
Companies like IBM, Platform Computing, and Microsoft are getting substantively involved with the open-source Grid community (e.g., Web services and Grid services)

GRIDS Part of the NSF Middleware Initiative (NMI)
eBusiness Grids
- Engineers at a multinational company collaborate on the design of a new product
- A multidisciplinary analysis in aerospace couples code and data in four companies
- An insurance company mines data from partner hospitals for fraud detection
- An application service provider offloads excess load to a compute cycle provider
- An enterprise configures internal & external resources to support eBusiness workload

GRIDS Part of the NSF Middleware Initiative (NMI)
GRIDS and the NSF Middleware Initiative
GRIDS is one of two NMI teams; the other is EDIT
NMI seeks standard components and mechanisms:
- Authentication, authorization, policy
- Resource discovery and directory
- Remote access to computers, data, instruments
NMI also seeks:
- Integration with end-user tools (conferencing, data analysis, data sharing, distributed computing, etc.)
- Integration with campus infrastructures
- Integration with commercial technologies

GRIDS Part of the NSF Middleware Initiative (NMI)
GRIDS Deliverables for NMI Release 1.0
On May 7, NMI Release 1.0 was issued, including deliverables from the GRIDS and EDIT teams
GRIDS software in NMI-R1 will include new versions of:
- Globus Toolkit™
- Condor-G
- Network Weather Service
The package also includes KX.509

GRIDS Part of the NSF Middleware Initiative (NMI)
The Globus Toolkit™
The de facto standard for Grid computing:
- A modular "bag of technologies" addressing key technical problems facing Grid tools, services, and applications
- Made available under a liberal open-source license
- Simplifies collaboration across virtual organizations
Components by function:
- Authentication: Grid Security Infrastructure (GSI)
- Scheduling: Globus Resource Allocation Manager (GRAM), Dynamically Updated Request Online Coallocator (DUROC)
- File transfer: Global Access to Secondary Storage (GASS), GridFTP
- Resource description: Monitoring and Discovery Service (MDS)
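To show how these pieces fit together, here is a minimal sketch of what a scripted GRAM job submission might look like in a Globus Toolkit 2-era environment. It assumes the toolkit's command-line tools (grid-proxy-init, globusrun) are installed; the gatekeeper address and RSL string are hypothetical examples, and exact flags may vary by release.

```python
# Hedged sketch: submitting a simple job through GRAM, with GSI for
# authentication. Hostnames and the RSL job description are made up.
import subprocess

GATEKEEPER = "gatekeeper.example.edu/jobmanager-fork"  # hypothetical resource

# RSL (Resource Specification Language) string describing the job.
rsl = "&(executable=/bin/hostname)(count=1)"

# 1. Obtain a short-lived GSI proxy credential (prompts for a passphrase).
subprocess.run(["grid-proxy-init"], check=True)

# 2. Submit the job to the remote GRAM gatekeeper; -o redirects the
#    job's output back to us via GASS.
result = subprocess.run(
    ["globusrun", "-o", "-r", GATEKEEPER, rsl],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```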

GRIDS Part of the NSF Middleware Initiative (NMI)
Condor-G
- High-performance computing (HPC) is often measured in operations per second; with high-throughput computing (HTC), Condor permits increased processing capacity over longer periods of time
  - CPU cycles/day (week, month, year?) under non-ideal circumstances
  - "How many times can I run simulation X in a month using all available machines?"
- The Condor Project develops, deploys, and evaluates mechanisms and policies for HTC on large collections of distributed systems
- NMI-R1 will include Condor-G, an enhanced version of the core Condor software optimized to work with the Globus Toolkit™ for managing Grid jobs
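As a hedged illustration of how Condor-G hands jobs to Globus-managed resources, the sketch below writes a Condor-G-style submit description and passes it to condor_submit. The gatekeeper hostname and file names are hypothetical, and later Condor releases use "universe = grid" with a grid_resource line instead of the globus universe shown here.

```python
# Hedged sketch: a Condor-G job submission, assuming condor_submit is on
# the PATH. Gatekeeper and file names are hypothetical.
import subprocess

submit_description = """\
universe        = globus
globusscheduler = gatekeeper.example.edu/jobmanager-fork
executable      = /bin/hostname
output          = job.out
error           = job.err
log             = job.log
queue
"""

with open("myjob.submit", "w") as f:
    f.write(submit_description)

# Hand the job to Condor-G, which manages it through Globus GRAM.
subprocess.run(["condor_submit", "myjob.submit"], check=True)
```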

GRIDS Part of the NSF Middleware Initiative (NMI)
Network Weather Service
- From UC Santa Barbara, NWS monitors and dynamically forecasts the performance of network and computational resources
- Uses a distributed set of performance sensors (network monitors, CPU monitors, etc.) for instantaneous readings
- The numerical models' ability to predict conditions is analogous to weather forecasting, hence the name
  - For use with the Globus Toolkit and Condor, allowing dynamic schedulers to provide statistical quality-of-service readings
  - NWS forecasts end-to-end TCP/IP performance (bandwidth and latency), available CPU percentage, and available non-paged memory
  - NWS automatically identifies the best forecasting technique for any given resource
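To make the last point concrete, here is a minimal, hypothetical sketch of the adaptive-forecasting idea (not actual NWS code): several simple predictors compete on the measurement history, and whichever has been most accurate so far supplies the forecast.

```python
# Minimal sketch of NWS-style adaptive forecaster selection: track each
# predictor's cumulative error on past measurements and report the
# forecast from the historically best one.
from statistics import mean, median

PREDICTORS = {
    "last_value": lambda hist: hist[-1],
    "running_mean": lambda hist: mean(hist),
    "running_median": lambda hist: median(hist),
}

def best_forecast(history):
    """Return (name, forecast) from the predictor with the lowest past error."""
    errors = {name: 0.0 for name in PREDICTORS}
    # Replay the series: at each step, score how well each predictor
    # would have forecast the next observation.
    for i in range(1, len(history)):
        for name, predict in PREDICTORS.items():
            errors[name] += abs(predict(history[:i]) - history[i])
    name = min(errors, key=errors.get)
    return name, PREDICTORS[name](history)

# Example: hypothetical end-to-end bandwidth measurements in Mbit/s.
bandwidth = [92.1, 88.4, 90.3, 61.2, 89.7, 91.0, 90.2]
print(best_forecast(bandwidth))
```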

GRIDS Part of the NSF Middleware Initiative (NMI)
KX.509 for Converting Kerberos Certificates to PKI
A stand-alone client program from the University of Michigan
- For a Kerberos-authenticated user, KX.509 acquires a short-term X.509 certificate that can be used by PKI applications
- Stores the certificate in the local user's Kerberos ticket file
- Systems that already have a mechanism for removing unused Kerberos credentials may also automatically remove the X.509 credentials
- The Web browser may then load a library (PKCS11) to use these credentials for HTTPS
- The client reads X.509 credentials from the user's Kerberos cache and converts them to PEM, the format used by the Globus Toolkit
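The sketch below shows what this workflow might look like from a script. It assumes the University of Michigan kx509 client tools are installed; the tool names and flags (kinit, kx509, kxlist -p) and the Kerberos principal are assumptions based on that distribution and may differ by version.

```python
# Hedged sketch of the KX.509 workflow; tool names and flags are
# assumptions about the U-M kx509 distribution, not verified here.
import subprocess

# 1. Authenticate to Kerberos (prompts for a password and populates the
#    ticket cache, e.g. the file named by $KRB5CCNAME).
subprocess.run(["kinit", "user@EXAMPLE.EDU"], check=True)  # hypothetical principal

# 2. Request a short-term X.509 certificate; kx509 stores it alongside
#    the Kerberos tickets in the same cache.
subprocess.run(["kx509"], check=True)

# 3. Extract the certificate as PEM, the format Globus tools consume.
pem = subprocess.run(["kxlist", "-p"], capture_output=True, text=True,
                     check=True).stdout
print(pem)
```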

GRIDS Part of the NSF Middleware Initiative (NMI)
GRIDS Integration Issues
Ten NMI testbed sites will be early adopters, seeking integration of enterprise and Grid computing
- Eight sites to be announced soon by SURA
- Two further sites: Caltech and USC
Via NMI partnerships, GRIDS will help identify points of intersection and divergence between Grid and enterprise computing
- Directory services
- Authorization, authentication, and security
- Emphasis is on open standards and architectures as the route to successful collaboration