1
GRIDS Center: Grid Research Integration Development & Support
http://www.grids-center.org
Chicago - NCSA - SDSC - USC/ISI - Wisconsin
Copyright Thomas Garritano, 2002. This work is the intellectual property of the author. Permission is granted for this material to be shared for non-commercial, educational purposes, provided that this copyright appears on the reproduced materials and notice is given that the copying is by permission of the author. To disseminate otherwise or to republish requires written permission from the author.
2
GRIDS, Part of the NSF Middleware Initiative (NMI)
- The Information Sciences Institute (ISI) at the University of Southern California (Carl Kesselman)
- The University of Chicago (Ian Foster)
- The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (Randy Butler)
- The University of California at San Diego (Phil Papadopoulos)
- The University of Wisconsin at Madison (Miron Livny)
3
Enabling Seamless Collaboration
GRIDS will help distributed communities pursue common goals:
- Scientific research
- Engineering design
- Education
- Artistic creation
The focus is on the enabling mechanisms required for collaboration:
- Resource sharing as a fundamental concept
4
Grid Computing Rationale
The need for flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources. See "The Anatomy of the Grid: Enabling Scalable Virtual Organizations" by Foster, Kesselman, and Tuecke at http://www.globus.org (in the "Publications" section).
The need for communities ("virtual organizations") to share geographically distributed resources as they pursue common goals, while assuming the absence of:
- central location
- central control
- omniscience
- existing trust relationships
5
Elements of Grid Computing
Resource sharing
- Computers, storage, sensors, networks
- Sharing is always conditional, based on issues of trust, policy, negotiation, payment, etc.
Coordinated problem solving
- Beyond client-server: distributed data analysis, computation, collaboration, etc.
Dynamic, multi-institutional virtual organizations
- Community overlays on classic org structures
- Large or small, static or dynamic
6
Resource-Sharing Mechanisms
- Should address the security and policy concerns of resource owners and users
- Should be flexible and interoperable enough to deal with many resource types and sharing modes
- Should scale to large numbers of resources, participants, and/or program components
- Should operate efficiently when dealing with large amounts of data and computational power
7
Grid Applications
Science portals
- Help scientists overcome the steep learning curves of installing and using new software
- Solve advanced problems by invoking sophisticated packages remotely from Web browsers or "thin clients"
- Portals are currently being developed in biology, fusion, computational chemistry, and other disciplines
Distributed computing
- High-speed workstations and networks can yoke together an organization's PCs to form a substantial computational resource
- E.g., U.S. and Italian mathematicians pooled resources for one week, aggregating 42,000 CPU-days to solve "NUG30"
8
Grid Portals
9
Mathematicians Solve NUG30
- Looking for the solution to the NUG30 quadratic assignment problem
- An informal collaboration of mathematicians and computer scientists
- Condor-G delivered 3.46E8 CPU-seconds in 7 days (peak 1009 processors) at 8 sites in the U.S. and Italy
- Solution: 14,5,28,24,1,3,16,15,10,9,21,2,4,29,25,22,13,26,17,30,6,20,19,8,18,7,27,12,11,23
- MetaNEOS: Argonne, Iowa, Northwestern, Wisconsin
A back-of-envelope check of these figures appears below.
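As a rough sanity check (ours, not the slide's), the quoted CPU time converts to about 4,000 CPU-days, roughly 11 CPU-years, which over the 7-day run corresponds to an average of roughly 570 processors busy around the clock:

    # Back-of-envelope conversion of the NUG30 figures quoted above.
    cpu_seconds = 3.46e8        # total CPU time delivered by Condor-G
    wall_days = 7               # duration of the run

    cpu_days = cpu_seconds / 86400         # 86400 seconds per day
    cpu_years = cpu_days / 365
    avg_processors = cpu_days / wall_days  # average CPUs busy 24/7

    print(f"{cpu_days:.0f} CPU-days, {cpu_years:.1f} CPU-years, "
          f"~{avg_processors:.0f} processors on average")
    # -> 4005 CPU-days, 11.0 CPU-years, ~572 processors on average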
10
Home Computers Evaluate AIDS Drugs
Community:
- 1000s of home computer users
- Philanthropic computing vendor (Entropia)
- Research group (Scripps)
Common goal: advance AIDS research
11
More Grid Applications
Large-scale data analysis
- Science increasingly relies on large datasets that benefit from distributed computing and storage
- E.g., the Large Hadron Collider at CERN will generate many petabytes of data from high-energy physics experiments, with single-site storage impractical for technical and political reasons
Computer-in-the-loop instrumentation
- Data from telescopes, synchrotrons, and electron microscopes are traditionally archived for batch processing
- Grids are permitting quasi-real-time analysis that enhances the instruments' capabilities
- E.g., with sophisticated "on-demand" software, astronomers may be able to use automated detection techniques to zoom in on solar flares as they occur
12
Data Grids for High Energy Physics (image courtesy Harvey Newman, Caltech)
[Tiered-architecture diagram: the online system feeds the CERN computer centre (Tier 0, with a ~20 TIPS offline processor farm) at ~100 MBytes/sec; ~622 Mbits/sec links (or air freight, deprecated) connect Tier 0 to Tier 1 regional centres (Fermilab ~4 TIPS; France, Italy, and Germany regional centres); further ~622 Mbits/sec links reach Tier 2 centres (~1 TIPS each, e.g. Caltech); below them sit institute servers (~0.25 TIPS) and, at Tier 4, physicist workstations at ~1 MBytes/sec; the physics data cache handles ~PBytes/sec. 1 TIPS is approximately 25,000 SpecInt95 equivalents.]
- There is a "bunch crossing" every 25 nsecs and 100 "triggers" per second; each triggered event is ~1 MByte in size
- Physicists work on analysis "channels"; each institute will have ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server
A back-of-envelope check of these rates appears below.
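Connecting the event numbers to the bandwidth figures (our arithmetic, not the slide's): 100 triggered events per second at ~1 MByte each is ~100 MBytes/sec into Tier 0, and over a running year that adds up to roughly a petabyte of triggered data, consistent with the "many petabytes" claim on the previous slide. The 1e7 seconds of beam time per year is our assumption, a common accelerator rule of thumb:

    # Back-of-envelope LHC data rates from the figures on this slide.
    triggers_per_sec = 100        # triggered events per second
    event_size_mb = 1.0           # ~1 MByte per triggered event
    running_sec_per_year = 1e7    # assumed beam time (rule of thumb)

    rate_mb_per_sec = triggers_per_sec * event_size_mb
    yearly_pb = rate_mb_per_sec * running_sec_per_year / 1e9  # MB -> PB

    print(f"{rate_mb_per_sec:.0f} MBytes/sec into Tier 0, "
          f"~{yearly_pb:.1f} PB of triggered events per year")
    # -> 100 MBytes/sec into Tier 0, ~1.0 PB of triggered events per year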
13
Online Access to Scientific Instruments
DOE X-ray grand challenge: ANL, USC/ISI, NIST, U.Chicago
[Diagram: data from the Advanced Photon Source flows through real-time collection and tomographic reconstruction to archival storage and wide-area dissemination, ending at desktop and VR clients with shared controls.]
14
Still More Grid Applications
Collaborative work
- Researchers often want to aggregate not only data and computing power, but also human expertise
- Grids enable collaborative problem formulation and data analysis
- E.g., an astrophysicist who has performed a large, multi-terabyte simulation could let colleagues around the world simultaneously visualize the results, permitting real-time group discussion
- E.g., civil engineers collaborate to design, execute, and analyze shake-table experiments
15
iVDGL: International Virtual Data Grid Laboratory (www.ivdgl.org)
U.S. PIs: Avery, Foster, Gardner, Newman, Szalay
[World map of iVDGL sites, with a legend distinguishing Tier0/1, Tier2, and Tier3 facilities and links at 10 Gbps, 2.5 Gbps, 622 Mbps, and other speeds.]
16
Network for Earthquake Engineering Simulation
- NEESgrid: U.S. national infrastructure to couple earthquake engineers with experimental facilities, databases, computers, and each other
- On-demand access to experiments, data streams, computing, archives, and collaboration
- NEESgrid partners: Argonne, Michigan, NCSA, UIUC, USC
17
The 13.6 TF TeraGrid: Computing at 40 Gb/s
TeraGrid/DTF: NCSA, SDSC, Caltech, Argonne (www.teragrid.org)
[Diagram of the four sites, each showing its external networks and site resources: NCSA/PACI with 8 TF of compute and 240 TB of storage (UniTree), SDSC with 4.1 TF and 225 TB (HPSS), plus Caltech and Argonne.]
18
Grids and Industry
Grid computing has much in common with major industrial thrusts
- Business-to-business, peer-to-peer, application service providers, storage service providers, distributed computing, Internet computing, etc.
- Outsourcing increases decentralization of resources
Sharing issues are not adequately addressed by existing technologies
- Complicated requirements: "run program X at site Y subject to community policy P, providing access to data at Z according to policy Q"
Companies like IBM, Platform Computing, and Microsoft are getting substantively involved with the open-source Grid community (e.g., Web services and Grid services)
19
eBusiness Grids
- Engineers at a multinational company collaborate on the design of a new product
- A multidisciplinary analysis in aerospace couples code and data in four companies
- An insurance company mines data from partner hospitals for fraud detection
- An application service provider offloads excess load to a compute cycle provider
- An enterprise configures internal and external resources to support its eBusiness workload
20
Grid Computing: Why Now?
- Moore's Law improvements in computing produce highly functional end systems
- The Internet and burgeoning wired and wireless networks provide universal connectivity
- Changing modes of problem solving emphasize teamwork and computation
- Network exponentials produce dramatic changes in geometry and geography
21
Network Exponentials
Network vs. computer performance
- Computer speed doubles every 18 months
- Network speed doubles every 9 months
- Difference: an order of magnitude every 5 years (see the arithmetic below)
1986 to 2000
- Computers: 500x
- Networks: 340,000x
2001 to 2010 (projected)
- Computers: 60x
- Networks: 4,000x
Moore's Law vs. storage improvements vs. optical improvements; graph from Scientific American (Jan. 2001) by Cleo Vilett, source Vinod Khosla, Kleiner Perkins Caufield & Byers.
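The doubling-time claim is easy to verify (our arithmetic): over a 60-month window, computers improve by 2^(60/18) ≈ 10x while networks improve by 2^(60/9) ≈ 100x, so networks pull ahead by roughly an order of magnitude every five years:

    # Verify "order of magnitude per 5 years" from the doubling times above.
    months = 60                           # a 5-year window
    computer_gain = 2 ** (months / 18)    # doubles every 18 months
    network_gain = 2 ** (months / 9)      # doubles every 9 months

    print(f"computers: {computer_gain:.0f}x, networks: {network_gain:.0f}x, "
          f"gap: {network_gain / computer_gain:.0f}x")
    # -> computers: 10x, networks: 102x, gap: 10x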
22
GRIDS and the NSF Middleware Initiative
GRIDS is one of two NMI teams; the other is EDIT.
NMI seeks standard components and mechanisms for:
- Authentication, authorization, policy
- Resource discovery and directory
- Remote access to computers, data, instruments
It also seeks:
- Integration with end-user tools (conferencing, data analysis, data sharing, distributed computing, etc.)
- Integration with campus infrastructures
- Integration with commercial technologies
23
GRIDS Deliverables for NMI Release 1.0
On May 7, NMI Release 1.0 will be issued (see www.nsf-middleware.org), including deliverables from the GRIDS and EDIT teams.
GRIDS software in NMI-R1 will include new versions of:
- Globus Toolkit™
- Condor-G
- Network Weather Service
The package also includes KX.509.
24
The Globus Toolkit™
The de facto standard for Grid computing
- A modular "bag of technologies" addressing key technical problems facing Grid tools, services, and applications
- Made available under a liberal open-source license
- Simplifies collaboration across virtual organizations
Components:
- Authentication: Grid Security Infrastructure (GSI)
- Scheduling: Globus Resource Allocation Manager (GRAM) and Dynamically Updated Request Online Coallocator (DUROC)
- File transfer: Global Access to Secondary Storage (GASS) and GridFTP
- Resource description: Monitoring and Discovery Service (MDS)
A sketch of these components in use follows below.
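To make this concrete, here is a minimal Python sketch of a user driving GSI, GRAM, and GridFTP through the Toolkit's standard command-line clients. It assumes a Globus Toolkit 2-era installation with grid-proxy-init, globus-job-run, and globus-url-copy on the PATH; the host and file names are hypothetical:

    # Sketch: GSI + GRAM + GridFTP via the GT2 command-line tools.
    # Hypothetical host/file names; assumes the Toolkit is installed.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # GSI: create a short-lived proxy credential from the user's
    # long-term certificate (prompts for a passphrase).
    run(["grid-proxy-init"])

    # GRAM: run a job on a remote gatekeeper, authenticated via GSI.
    run(["globus-job-run", "compute.example.edu", "/bin/hostname"])

    # GridFTP: pull a result file back to the local machine.
    run(["globus-url-copy",
         "gsiftp://compute.example.edu/tmp/result.dat",
         "file:///tmp/result.dat"])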
25
Condor-G
- High-performance computing (HPC) is often measured in operations per second; high-throughput computing (HTC), Condor's focus, instead emphasizes sustained processing capacity over longer periods of time:
  - CPU cycles per day (week, month, year?) under non-ideal circumstances
  - "How many times can I run simulation X in a month using all available machines?"
- The Condor Project develops, deploys, and evaluates mechanisms and policies for HTC on large collections of distributed systems
- NMI-R1 will include Condor-G, an enhanced version of the core Condor software optimized to work with the Globus Toolkit™ for managing Grid jobs (a sample job description appears below)
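For flavor, a Condor-G job is described in an ordinary Condor submit file whose universe is set to globus. The sketch below generates such a file from Python and hands it to condor_submit (a real Condor command); the gatekeeper host is hypothetical, and the keywords follow Condor-G conventions of that era, so check your installed version's manual:

    # Sketch: generate and submit a Condor-G job description.
    # Hypothetical gatekeeper host; keywords per Condor-G-era manuals.
    import subprocess
    import textwrap

    submit = textwrap.dedent("""\
        universe        = globus
        globusscheduler = gatekeeper.example.edu/jobmanager
        executable      = my_simulation
        output          = sim.$(Process).out
        error           = sim.$(Process).err
        log             = sim.log
        queue 100
        """)

    with open("sim.submit", "w") as f:
        f.write(submit)

    # Condor queues 100 instances and manages them as Grid jobs.
    subprocess.run(["condor_submit", "sim.submit"], check=True)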
26
Network Weather Service
- From UC Santa Barbara, NWS monitors and dynamically forecasts the performance of network and computational resources
- Uses a distributed set of performance sensors (network monitors, CPU monitors, etc.) for instantaneous readings
- Its numerical models' ability to predict conditions is analogous to weather forecasting, hence the name
- For use with the Globus Toolkit and Condor, allowing dynamic schedulers to provide statistical quality-of-service readings
- NWS forecasts end-to-end TCP/IP performance (bandwidth and latency), available CPU percentage, and available non-paged memory
- NWS automatically identifies the best forecasting technique for any given resource (illustrated below)
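The "best forecasting technique" idea can be illustrated in a few lines. This is a toy sketch of the general approach, competing simple predictors scored by their past error, not the actual NWS code or API:

    # Toy illustration of NWS-style forecaster selection: score several
    # simple predictors on the measurement history and use whichever
    # has the lowest cumulative absolute error. Not the real NWS code.
    from statistics import mean, median

    PREDICTORS = {
        "last_value":   lambda hist: hist[-1],
        "running_mean": lambda hist: mean(hist),
        "median":       lambda hist: median(hist),
    }

    def best_forecast(history):
        errors = {name: 0.0 for name in PREDICTORS}
        for t in range(1, len(history)):
            for name, predict in PREDICTORS.items():
                errors[name] += abs(predict(history[:t]) - history[t])
        winner = min(errors, key=errors.get)
        return winner, PREDICTORS[winner](history)

    bandwidth = [88.1, 91.5, 87.9, 90.2, 89.7, 90.8]  # Mbit/s samples
    name, value = best_forecast(bandwidth)
    print(f"{name} predicts {value:.1f} Mbit/s next")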
27
KX.509 for Converting Kerberos Credentials to PKI
A stand-alone client program from the University of Michigan
- For a Kerberos-authenticated user, KX.509 acquires a short-term X.509 certificate that can be used by PKI applications
- Stores the certificate in the local user's Kerberos ticket file
- Systems that already have a mechanism for removing unused Kerberos credentials may also automatically remove the X.509 credentials
- A Web browser may then load a library (PKCS#11) to use these credentials for HTTPS
- The client reads the X.509 credentials from the user's Kerberos cache and converts them to PEM, the format used by the Globus Toolkit
A sketch of this flow appears below.
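To make the flow concrete, a login script might chain these tools roughly as follows. This is a hedged sketch: kinit and kx509 are the real client names, but the principal, the PEM path, and the openssl inspection step are our illustration, so consult your site's KX.509 setup:

    # Sketch of the KX.509 flow (principal and PEM path hypothetical).
    import subprocess

    # 1. Kerberos login: obtain a ticket-granting ticket.
    subprocess.run(["kinit", "user@EXAMPLE.EDU"], check=True)

    # 2. kx509: acquire a short-term X.509 certificate and store it
    #    alongside the Kerberos credentials.
    subprocess.run(["kx509"], check=True)

    # 3. Inspect the PEM certificate before handing it to Globus
    #    tools (path is illustrative).
    subprocess.run(["openssl", "x509", "-in", "/tmp/x509_cert.pem",
                    "-noout", "-subject", "-enddate"], check=True)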
28
GRIDS Integration Issues
Ten NMI testbed sites will be early adopters, seeking integration of enterprise and Grid computing:
- Eight sites to be announced soon by SURA
- Two further sites: Caltech and USC
Via NMI partnerships, GRIDS will help identify points of intersection and divergence between Grid and enterprise computing:
- Directory services
- Authorization, authentication, and security
- Emphasis on open standards and architectures as the route to successful collaboration