History of Distributed Systems
Joseph Cordina

Motivation
Computation demands have always been higher than the technological status quo. The obvious answer: several computing elements working in harmony to solve a single problem, which in turn creates the need for shared resources between the computing elements. Simplifying computing as 'an instruction operating on data', all computing architectures fall under one of the following titles (Flynn's taxonomy):
- SISD (Single Instruction, Single Data)
- SIMD (Single Instruction, Multiple Data)
- MISD (Multiple Instruction, Single Data)
- MIMD (Multiple Instruction, Multiple Data)

Interconnection Networks
The computing elements need to communicate using interconnection networks. Criteria for such networks are:
- Latency
- Bandwidth
- Connectivity
- Hardware cost
- Reliability
- Functionality
The end result is speedup. Topologies used include: linear, ring, star, tree, nearest-neighbor mesh, and 3D structures. (A rough comparison of two of these appears below.)
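As a rough illustration (my own sketch, not from the slides), the C program below compares the link count (hardware cost) and diameter (worst-case latency in hops) of a ring against a square 2-D mesh, using the standard formulas for those topologies; it assumes the node count p is a perfect square so the mesh is s x s with s = sqrt(p).

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Ring with p nodes: p links, diameter p/2 (p even).
       s x s mesh without wraparound: 2*s*(s-1) links, diameter 2*(s-1).
       These are the textbook formulas, assumed rather than quoted
       from the original talk. */
    for (int p = 16; p <= 1024; p *= 4) {     /* 16, 64, 256, 1024 */
        int s = (int)sqrt((double)p);         /* mesh side length */
        printf("p=%4d  ring: links=%4d diameter=%3d   "
               "mesh: links=%4d diameter=%3d\n",
               p, p, p / 2, 2 * s * (s - 1), 2 * (s - 1));
    }
    return 0;
}

The mesh trades a few extra links for a diameter that grows as sqrt(p) rather than linearly in p, which is exactly the kind of latency-versus-cost trade-off the criteria above are weighing.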

Static vs. Dynamic Networks
Static networks came first and are fixed in topology. Dynamic networks contain switches (switching networks).
[Slide diagram: the evolution from static to dynamic networks, from processors (P) and memory modules (M) on a shared bus, to processor-memory pairs connected through switches (S), to processor-memory pairs connected through a general network.]

SIMD Architectures
Originally it was observed that in a loop such as
for I := 1 to 100 do A(I) := B(I) * C(I);
we are merely repeating the same instruction on different data. Thus vector processors were created: a vector processor takes one single vector instruction that can simultaneously operate on a series of data arranged in array format. The first successful such computer was the CRAY-1 (1976), which used 11-stage pipelines that could execute concurrently on data. Other follow-ups include the CDC Cyber 205 (with 4 identical floating-point units), the IBM 3090, the ILLIAC IV, the GF11 and the Connection Machine.
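For illustration (my sketch, not from the slides), here is the same loop written in C with x86 SSE intrinsics, where a single _mm_mul_ps instruction multiplies four floats at once; it assumes an x86 CPU with SSE support.

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics: assumes an x86 CPU */

int main(void) {
    float a[100], b[100], c[100];
    for (int i = 0; i < 100; i++) { b[i] = (float)i; c[i] = 2.0f; }

    /* One vector instruction operates on four floats at a time,
       mirroring A(I) := B(I) * C(I) above. 100 is divisible by 4. */
    for (int i = 0; i < 100; i += 4) {
        __m128 vb = _mm_loadu_ps(&b[i]);            /* load 4 floats */
        __m128 vc = _mm_loadu_ps(&c[i]);
        _mm_storeu_ps(&a[i], _mm_mul_ps(vb, vc));   /* multiply, store */
    }
    printf("a[10] = %.1f\n", a[10]);   /* expect 20.0 */
    return 0;
}

A vectorising compiler performs essentially this transformation automatically on the plain scalar loop, which is why the special compilers mentioned on the next slide matter.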

Pros and Cons of SIMD
Cons:
- The cost is prohibitive.
- Special compilers need to be used.
- Programming has to be done using specific languages.
- Speedup is effective only for specific kinds of problems.
- It is very difficult to make use of the total computing power all the time.
Pros:
- For specific problems, results can be obtained very quickly.

MIMD Architectures
To reduce the price, one could connect multiple processors, each marching to its own drum beat. Most real-life problems can be split up into a large number of small problems that can be solved individually; the solution can then be gathered through communication between the different computing elements. The connection topology now has a strong effect on speedup. Specific connection protocols were used: the system bus, Myrinet, etc. For most problems, some form of shared resource is still required.

Private Memory
This is a model where different computing elements, each with private memory, communicate through some form of connection network. Communication occurs through message passing (a minimal sketch follows below). The earliest examples were clusters of workstations (farms) communicating through a LAN:
- Communication is slow, so infrequent communication leads to good speedup
- Scalability is inherent in the LAN architecture
- Built from very cheap commodities
The first real private-memory distributed computer was the Cosmic Cube, with 64 computing nodes, each node having a direct, point-to-point connection to six others like it. Later developments were further hypercubes, meshes and data-flow machines.
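As a minimal sketch (an illustration, not material from the original slides), here are two private-memory processes exchanging a message with MPI, one of the APIs named later in this talk: rank 0 sends an integer, rank 1 receives it.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */

    if (rank == 0) {
        int value = 42;                     /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}

Build and run with, e.g., mpicc msg.c -o msg && mpirun -np 2 ./msg; each rank runs with its own private memory, on its own node or core, exactly the model described above.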

Shared Memory
Whenever some data needs to be accessed very frequently, a common alternative is the shared-memory architecture. These are still MIMD architectures, yet the processing elements share the same memory bus; such machines are known as Symmetric Multi-Processor (SMP) computers. Nowadays these are readily available at a fraction of the cost of other parallel architectures.
[Slide diagram: several CPUs attached to a single shared Memory.]
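As a short illustration (again my own sketch), the earlier multiply loop on an SMP using OpenMP threads, which all read and write the same arrays in shared memory; compile with a flag such as gcc -fopenmp.

#include <stdio.h>
#include <omp.h>

#define N 100

int main(void) {
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { b[i] = (float)i; c[i] = 2.0f; }

    /* Each thread computes its own slice of i; a, b and c all live
       in the one memory that every processor on the bus can see. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = b[i] * c[i];

    printf("a[10] = %.1f, up to %d threads\n",
           a[10], omp_get_max_threads());
    return 0;
}

No explicit messages are needed here: the shared memory itself is the communication medium, in contrast with the private-memory MPI sketch above.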

MIMD Characteristics
Pros:
- Shareability
- Expandability
- Local autonomy
- Improved performance
- Improved reliability and availability
- Potential cost reductions
Cons:
- Network reliance
- Complexity
- Security
- Specific OS features required

Current MIMD
Beowulf clusters started in 1993, using commodity platforms and traditional Ethernet. LANs run at Gigabit speeds (10 Gb expected soon), and SMP machines are relatively cheap. Millions of MIMD elements exist, connected together for shared resources. Internet computing can also be performed on Internet-connected elements; an example story is SETI@home. There are two main types of distributed systems:
- Client-server (farming)
- Peer-to-peer networks
A large number of APIs are available (PVM, MPI, RMI, …).

Future
- High-speed networking with external motherboard-to-motherboard buses (NGIO)
- Super-broadband connectivity on the Internet
- Huge farms of devices (the GRID) offering resources, including computing resources
- High integration of devices at varying levels of computing power, with high-bandwidth cross-talk
- Massively parallel computing elements through the advent of quantum computing
- Multi-processor machines becoming the norm
- The development of new computing and programming tools to allow parallel and distributed computation