1-1.1 Introduction to Grid Computing. Slides for Grid Computing: Techniques and Applications by Barry Wilkinson, Chapman & Hall/CRC, © 2009. Chapter 1.


1-1.1 Introduction to Grid Computing. Slides for Grid Computing: Techniques and Applications by Barry Wilkinson, Chapman & Hall/CRC, © 2009. Chapter 1. For educational use only. All rights reserved. Aug 24, 2009

1-1.2 Grid Computing: using geographically distributed and interconnected computers together for computing and for resource sharing. "The grid virtualizes heterogeneous geographically disperse resources" - from "Introduction to Grid Computing with Globus," IBM Redbooks.

1-1.3 "Grid" - It is common practice to use the word Grid as a proper noun (i.e. with G capitalized), although it does not refer to one universal Grid. There are many Grid infrastructures. We have set up one for this course; you will learn how that was done, and the technicalities, during the course.

1-1.4 Need to harness computers Original driving force behind Grid computing same as behind the early development of networks that became the Internet: – Connecting computers at distributed sites for high performance computing.

1-1.5 However, Grid computing is about collaborating and resource sharing as much as it is about high performance computing.

1-1.6 Virtual Organizations Grid computing offers potential of virtual organizations: –groups of people, both geographically and organizationally distributed, working together on a problem, sharing computers AND other resources such as databases and experimental equipment.

Different organizations can supply resources and personnel. The concept has many benefits, including: Problems of importance to humanity that previously could not be tackled because of limited computing resources can now be addressed. Examples: understanding the human genome; searching for new drugs; … Continued

Users can have access to far greater computing resources and expertise than available locally. Inter-disciplinary teams can be formed across different institutions and organizations to tackle problems that require expertise of multiple disciplines. Specialized localized experimental equipment can be accessed remotely and collectively. Continued

Large collective databases can be created to hold vast amounts of data. Unused compute cycles can be harnessed at remote sites, achieving more efficient use of computers. Business processes can be re-implemented using Grid technology for dramatic cost savings.

Crossing multiple administrative domains is another hallmark of larger Grid computing projects. The resources being shared are owned either by members of the virtual organization or donated by others. This introduces difficult technical and socio-political challenges, and requires true collaboration.

Some key features we regard as indicative of Grid computing: –Shared multi-owner computing resources –Uses Grid computing software, with security and cross-management mechanisms in place –Tools to bring together geographically distributed computers owned by others

Shared Resources Can share much more than just computers: Storage Sensors for experiments at particular sites Application Software Databases Network capacity, …

Interconnections and Protocols - The focus now is on using standard Internet protocols and technologies, e.g. HTTP, SOAP, and web services.

History of distributed computing - Certainly one can go back a long way to trace the history of distributed computing; forms of distributed computing existed in the 1960s, and many people were interested in connecting computers together for high performance computing. From connecting processors/computers together locally, which began in earnest in the 1960s and 1970s, distributed computing now extends to connecting computers that are geographically distant: Grid computing.

Distributed computing technologies that underpin Grid computing developed concurrently and rely upon each other. Three concurrent interrelated paths: Networks Computing platforms Software techniques

Networks: 1960s - development of packet-switched networks. 1969 - the ARPANET became operational with 4 nodes: Univ. of California at Los Angeles, Stanford Research Institute, Univ. of California at Santa Barbara, and Univ. of Utah. Design speed of 50 Kbits/sec. 1970s - TCP (Transmission Control Protocol), later TCP/IP (Transmission Control Protocol/Internet Protocol): TCP is a protocol for reliable communication, IP for network routing. IP addresses identify hosts on the Internet; ports identify end points (processes) for communication purposes. Early 1970s - Ethernet for interconnecting computers on local networks. Early 1980s - the Internet, using the TCP/IP protocols. 1990s - the Internet developed into the World-Wide Web; browsers and the HTML markup language introduced.
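The host/port addressing just described can be made concrete with a short sketch using Python's standard socket module (an illustrative example, not from the slides): an IP address identifies the machine, a port number identifies the communicating process, and TCP provides the reliable byte stream.

```python
# Two TCP endpoints on the loopback interface: each endpoint is an
# (IP address, port) pair. The server echoes back what it receives.
import socket
import threading

def server(listener):
    conn, _addr = listener.accept()   # wait for one TCP connection
    data = conn.recv(1024)            # reliable, ordered bytes (TCP)
    conn.sendall(b"echo:" + data)
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
listener.listen(1)
host, port = listener.getsockname()   # this (IP, port) pair is the endpoint

t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))          # address the host, then the process
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
listener.close()
print(reply.decode())                 # -> echo:hello
```

The same host/port scheme underlies all the higher-level protocols mentioned later (HTTP, SOAP, web services).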

Computing Platforms: 1960 onwards - it was recognized that increased speed could potentially be obtained by having more than one processor inside a single computer system; the term parallel computer was coined to describe such systems. 1970s and 1980s - many parallel computer projects, especially with the advent of low-cost microprocessors. 1990s - cluster computing: a group of computers interconnected through a network switch to form a computing platform. Commodity computers (PCs) provided a cost-effective solution.

Typical cluster computing configuration Fig. 1.1

Programming clusters - Message-passing programming: messages between processes are specified by the programmer using message-passing routines. Late 1980s to early 1990s - PVM (Parallel Virtual Machine). Late 1990s - MPI (Message Passing Interface). Late 1980s onwards - Condor, to harness "unused" cycles of networked computers for high performance computing: a collection of computers could be given over to remote access automatically when not being used locally. Condor is now widely used as a job scheduler for clusters, in addition to its original purpose of using laboratory computers collectively. We will consider Condor in the light of Grid computing later.
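As an illustrative aside (not from the slides), the message-passing model that PVM and MPI provide can be sketched with Python's standard library. Real MPI programs run as separate processes, often on different machines; here threads and queues stand in so the sketch is self-contained, but the shape is the same: workers share nothing and cooperate only by sending and receiving messages.

```python
# Master/worker message passing: the master sends each worker a number,
# each worker computes its square and sends a result message back.
import threading
import queue

def worker(rank, inbox, outbox):
    n = inbox.get()               # blocking receive, like MPI_Recv
    outbox.put((rank, n * n))     # send result to the master, like MPI_Send

outbox = queue.Queue()
threads = []
for rank in range(4):
    inbox = queue.Queue()
    t = threading.Thread(target=worker, args=(rank, inbox, outbox))
    t.start()
    inbox.put(rank + 1)           # master distributes the work
    threads.append(t)

results = {}
for t in threads:
    rank, value = outbox.get()    # gather results in completion order
    results[rank] = value
    t.join()

print(sorted(results.items()))    # [(0, 1), (1, 4), (2, 9), (3, 16)]
```

In MPI the send and receive calls would be `MPI_Send`/`MPI_Recv` (or their mpi4py equivalents) and the "rank" would identify a process rather than a thread.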

Software Techniques: Mid 1980s - remote procedure call (RPC), for invoking a procedure on a remote computer; a service registry was introduced with RPC to locate remote services. 1990s - object-oriented versions of RPC: CORBA (Common Object Request Broker Architecture) and Java Remote Method Invocation (RMI). Web services provide remote actions as in RPC, but are invoked through standard protocols and Internet addressing, and use XML (eXtensible Markup Language), also introduced in the 1990s. Web services and XML were adopted into Grid computing soon after their introduction.
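The RPC idea above can be demonstrated with Python's standard xmlrpc modules, which carry remote calls over a standard protocol (XML over HTTP), much as the slide describes for web services. This is an illustrative sketch, not code from the book; the server runs in a background thread so the example is self-contained.

```python
# RPC sketch: the client invokes a procedure that actually executes
# on a server, addressed by host and port, via XML-RPC (XML over HTTP).
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")   # the remote procedure
port = server.server_address[1]                       # OS-assigned port

t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()

proxy = ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.add(2, 3)     # looks like a local call, runs on the server
server.shutdown()
print(result)                # 5
```

SOAP-based web services, as used in Grid middleware, follow the same pattern with a richer XML message format and service descriptions (WSDL).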

Grid Computing History - Began in the mid 1990s with experiments using computers at geographically dispersed sites. Seminal experiment: the "I-Way" experiment at the 1995 Supercomputing conference (SC'95), using 17 sites across the US running: 60+ applications, over existing networks (10 networks).

Globus Project - Led by Ian Foster, a co-developer of the I-Way demonstration and a founder of the Grid computing concept. Globus is middleware software: a Grid computing toolkit. It evolved through several implementation versions, although the basic structural components remained essentially the same: security, data management, execution management, information services, and run-time environment. We will describe Globus in detail later.

Other grid computing middleware software - Although Globus is widely adopted and is the basis of this course, there are other software infrastructure projects. Legion project: software development started in 1996; used an object-based approach to Grid computing; first public release at Supercomputing '97 in Nov 1997; led to the Avaki company/software, taken over by Sybase Inc. 1990s - UNICORE (UNiform Interface to COmputing REsources), a European grid computing project, initially funded by the German Ministry for Education and Research and continued with other European funding; basis of several European efforts in Grid computing and elsewhere; many similarities to Globus.

Key concepts in the history of Grid computing Fig. 1.2

Applications - Originally e-Science applications: Computationally intensive - traditional high performance computing addressing large problems; not necessarily one big problem, but a problem that has to be solved repeatedly with different parameters. Data intensive - computational, but with the emphasis on large amounts of data to store and process. Experimental collaborative projects.

Now also e-Business applications –To improve business models and practices. –Sharing corporate computing resources and databases –On-demand Grid computing … indirectly led to cloud computing.

Grid Computing versus Cluster Computing - It is important not to think of Grid computing simply as a large cluster, because the potential and the challenges are different. Courses on Grid computing and on cluster computing are quite different.

Cluster computing course - One learns about: message-passing programming using tools such as MPI; shared-memory programming using threads and OpenMP, given that most computers in a cluster today are multi-core shared-memory systems; and parallel algorithms (lots). Network security is not a big issue: usually an ssh connection to the front node of the cluster is sufficient, with the user logging onto a single compute resource. The computers are connected together locally under one administrative domain.

Grid computing course - Learn about running jobs on remote machines, scheduling jobs, and distributed workflow. Learn in detail the underlying Grid infrastructure: how Internet technologies are applied to Grid computing, and Grid computing software and standards. Security is an issue.

Grid Computing versus Cluster Computing - Of course, there are things in common: both courses are hands-on with programming experiences, both use multiple computers, and both require a job scheduler to place jobs.

Cloud computing - There is a lot of hype around cloud computing at the moment. It is a business model in which services are provided on servers that can be accessed through the Internet. The lineage of cloud computing can be traced back to on-demand Grid computing in the early 2000s.

Cloud computing using virtualized resources Fig. 1.3

The common thread between Grid computing and cloud computing is the use of the Internet to access resources. Cloud computing is driven by the widespread access that the Internet provides, and by Internet technologies. However, cloud computing is quite distinct from the original purpose of Grid computing.

Grid Computing versus Cloud Computing - Whereas Grid computing focuses on collaborative and distributed shared resources, cloud computing concentrates on placing services for users to pay to use. Technology for cloud computing emphasizes: the use of software as a service (SaaS), and virtualization (the process of separating a particular user's software environment from the underlying hardware).

Ian Foster's checklist - Ian Foster is credited with the development of Grid computing, and is sometimes called the father of Grid computing. He proposed a simple checklist of aspects that are common to most true Grids: no centralized control; standard, open protocols; non-trivial quality of service (QoS).

Computational Grid Applications Biomedical research Industrial research Engineering research Studies in Physics and Chemistry …

Sample Grid Computing Projects

Enterprise Grids - A Grid formed within an organization for collaboration. It still might cross the administrative domains of departments, and requires departments to share their resources. Example: campus Grids.

1a.39 Example: University of Virginia Campus Grid

Partner Grids - Grids between collaborating organizations. This makes the most of the potential of Grid computing and collaboration.

Environment/Earth - NSF Network for Earthquake Engineering Simulation (NEES): "Transform our ability to carry out research vital to reducing vulnerability to catastrophic earthquakes" (from I. Foster).

SCOOP Project - Southeastern Coastal Ocean Observing and Prediction Program. Integrating data from regional observing systems for real-time coastal forecasts in the southeastern US. Brings together coastal modelers with computer scientists to couple models, provide data solutions, deploy ensembles of models on the Grid, and assemble real-time results with GIS technologies. From: "Urgent Computing for Hurricane Forecasts," Gabrielle Allen, Urgent Computing Workshop, Argonne National Laboratory, April 25th to 26th.

SCOOP Prototype Distributed Laboratory - Funded by ONR & NOAA. Partners: Bedford Institute of Oceanography; Virginia Institute of Marine Science; University of Alabama, Huntsville; Texas A&M; Renaissance Computing Institute. 2005/2006 SCOOP Implementation Team: University of North Carolina; University of Florida; Louisiana State University; Gulf of Maine Ocean Observing System; MCNC; Southeastern Universities Research Association. External resources: e.g. SURAgrid regional grid infrastructure. From: "Designing a Collaborative Cyberinfrastructure for Event-Driven Coastal Modeling," Philip Bogden, Supercomputing 2006, Nov 2006, Tampa, FL.

DOE Earth System Grid - Goal: address the technical obstacles to sharing and analyzing high-volume data from advanced earth system models.

1.45 Earth System Grid II

Physics - CERN LHC Computing Grid (LCG). The Large Hadron Collider is an experimental facility for complex particle experiments at CERN (European Center for Nuclear Research, near Geneva, Switzerland). Expected to be operational in 2008.

1a.50 CERN LHC Computing Grid (LCG)

LCG depends on two major science grid infrastructures: EGEE (Enabling Grids for E-sciencE) and OSG (the US Open Science Grid). From: "LCG Overview," Les Robertson.

Grid computing infrastructure projects Not tied to one specific application

Grid Networks - Grids for collaborative Grid computing projects have been set up at the local level, national level, and international level throughout the world, to promote Grid computing.

TeraGrid - Funded by NSF in 2001, initially to link five supercomputer centers. Hubs were established at Chicago and Los Angeles, with each center connected to one hub: Argonne National Laboratory (ANL) (Chicago hub), National Center for Supercomputing Applications (NCSA) (Chicago hub), Pittsburgh Supercomputing Center (PSC) (Chicago hub), San Diego Supercomputer Center (SDSC) (LA hub), and Caltech (LA hub).

The hubs at Chicago and Los Angeles were interconnected using a 40 Gigabit/sec optical backplane network, and the five centers were connected to their hub using 30 Gigabit/sec connections. State-of-the-art optical lines could reach 10 Gigabit/sec in the early 2000s: four lines were used to achieve 40 Gigabit/sec, and three lines to achieve 30 Gigabit/sec.
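The line-bundling arithmetic above, spelled out as a quick illustrative check:

```python
# Aggregate bandwidth obtained by bundling 10 Gbit/s optical lines.
line_rate_gbps = 10                   # per-line rate, early 2000s
hub_backplane = 4 * line_rate_gbps    # four lines between the two hubs
center_link = 3 * line_rate_gbps      # three lines from a center to its hub
print(hub_backplane, center_link)     # 40 30
```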

TeraGrid circa 2004

TeraGrid was further funded by NSF and has developed into a platform for a wide range of Grid applications. It is described as: "the world's largest, most comprehensive distributed cyberinfrastructure for open scientific research."

TeraGrid as of 2008

Open Science Grid (OSG) - Started around 2005; received $30 million funding from NSF and DOE in 2006. Members: Boston University, Brookhaven National Laboratory, California Institute of Technology, Columbia University, Cornell University, Fermi National Accelerator Laboratory, Indiana University, Lawrence Berkeley National Laboratory, Stanford Linear Accelerator Center, University of California San Diego, University of Chicago, University of Florida, University of Iowa, University of North Carolina/RENCI, University of Wisconsin-Madison.

1a.60 Current status July 2008

SURAGrid as of 2009 Southeastern Universities Research Association Fig. 1.4

National Grids - Many countries have embraced Grid computing and set up Grid computing infrastructure: UK e-Science grid, Grid-Ireland, NorduGrid, DutchGrid, PIONIER grid (Poland), ACI grid (France), Japanese grid, etc.

UK e-Science Grid (early 2000s)

UK National Grid Service - A follow-on from the UK e-Science Grid. Founded in 2004 to provide distributed access to computational and database resources, with four core sites: the Universities of Manchester, Oxford and Leeds, and the Rutherford Appleton Laboratory. By 2008 it had grown to 16 sites. Access is free to any academic with a legitimate need.

Multi-national Grids - Several efforts to create Grids that span many countries.

Multi-national Grid example: ApGrid, a partnership in the Asia-Pacific region involving Australia, Canada, China, Hong Kong, India, Japan, Malaysia, New Zealand, Philippines, Singapore, South Korea, Taiwan, Thailand, USA, and Vietnam.

European-centered multi-national Grids - Several initiatives for European countries to collaborate in forming Grid-like infrastructures to share compute resources, funded by European programs.

European-centered multi-national Grid example: DEISA (Distributed European Infrastructure for Supercomputing Applications). The DEISA-1 project came first; DEISA-2 started in 2008 to extend the work.

DEISA (Distributed European Infrastructure for Supercomputing Applications)

DEISA-2 partners: Barcelona Supercomputing Centre, Spain (BSC); Consorzio Interuniversitario per il Calcolo Automatico, Italy (CINECA); Finnish Information Technology Centre for Science, Finland (CSC); University of Edinburgh and CCLRC, UK (EPCC); European Centre for Medium-Range Weather Forecasts, UK (ECMWF); Research Centre Juelich, Germany (FZJ); High Performance Computing Centre Stuttgart, Germany (HLRS); Institut du Développement et des Ressources en Informatique Scientifique - CNRS, France (IDRIS); Leibniz Rechenzentrum Munich, Germany (LRZ); Rechenzentrum Garching of the Max Planck Society, Germany (RZG); Dutch National High Performance Computing, Netherlands (SARA); Kungliga Tekniska Högskolan, Sweden (KTH); Swiss National Supercomputing Centre, Switzerland (CSCS); Joint Supercomputer Center of the Russian Academy of Sciences, Russia (JSCC).

A vision of a single universal international Grid, such as the Internet/World Wide Web, may never be achieved. More likely, Grids will connect to other Grids but will maintain their own identity.

UNC-Charlotte's Grid computing course - Uses the teleconferencing facilities of NCREN and clusters at various sites across North Carolina.

Our Grid Computing Course - Uses the teleconferencing facilities of NCREN; broadcast on the NCREN network across North Carolina. Uses clusters at various participating sites and relies heavily on faculty at those sites. First offered in 2004 (8 sites), then again in Fall 2005 (12 sites), Spring 2007 (3 sites), and Fall 2008 (5 sites). WCU teleclassroom.

Participating sites: 15 in total.

1a.75 Close to home: every state has its own network structure for the Internet. NCREN is the basis of our course.

Fall 2005 course Grid structure - Sites: MCNC, UNC-W, UNC-A, NCSU, WCU, UNC-C, ASU; CA backup facility, not actually used.

Questions

Quiz Question: What is a virtual organization? (a) An imaginary company. (b) A web-based organization. (c) A group of people geographically distributed that come together from different organizations to work on a Grid project. (d) A group of people that come together to work on a virtual reality Grid project.

Question: What is meant by the term cloud computing? (a) Atmospheric Computing (b) Computing using geographically distributed computers (c) A facility providing services and software applications (d) A secure CIA computing facility

Question: In addition to computers, which of the following resources can be shared on a Grid? (a) Storage (b) Application Software (c) Specialized equipment (such as sensors) (d) Databases (e) All of the above