Evolution of Distributed Computing


Evolution of Distributed Computing: Scalable Computing over the Internet

Scalable Computing over the Internet

The Age of Internet Computing: platform evolution
- 1950-70: mainframes such as the IBM 360
- 1960-80: low-cost minicomputers such as the DEC VAX
- 1970-90: personal computers built with VLSI microprocessors
- 1980-2000: portable computers and pervasive devices
- 1990-2010: shared web resources over the Internet
The use of HPC and HTC in grid/cloud environments is elaborated in the figure.

Degrees of parallelism
- Bit level: 4-, 8-, up to 64-bit CPUs; from serial to word-parallel operation
- Instruction level: multiple simultaneous instructions; pipelining, branch prediction, dynamic scheduling, speculation
- Data level: SIMD, vector machines
- Task level: fine-grained; difficult to program; parallel computing
- Job level: coarse-grained; distributed computing
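The data-level versus task/job-level distinction above can be sketched in code. The snippet below is an illustrative Python sketch (the names `square_chunk` and `parallel_squares` are ours, not from the text): the same operation is applied uniformly to every data item (data-level parallelism), while independent chunks of the input are farmed out to separate worker processes (task/job-level parallelism).

```python
# Illustrative sketch: data-level vs. task-level parallelism on a toy workload.
from concurrent.futures import ProcessPoolExecutor

def square_chunk(chunk):
    # Data-level parallelism: one operation applied to many data items.
    return [x * x for x in chunk]

def parallel_squares(data, workers=4):
    # Task/job-level parallelism: independent chunks dispatched to workers.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(square_chunk, chunks)   # order is preserved
    return [x for chunk in results for x in chunk]

if __name__ == "__main__":
    print(parallel_squares(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Real systems exploit all five degrees at once: the CPU pipelines instructions and may use SIMD inside each worker, while the process pool provides the coarse-grained layer.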

Web 2.0, Clouds, and the Internet of Things
- HPC: High-Performance Computing
- HTC: High-Throughput Computing
- P2P: Peer-to-Peer
- MPP: Massively Parallel Processors
Source: K. Hwang, G. Fox, and J. Dongarra, Distributed and Cloud Computing.

Top 10 Technologies for 2010

Computing paradigms at a glance

P2P: mainly for file sharing; geographically dispersed, autonomous nodes; decentralised.

Clusters: resource sharing; nodes close to each other, usually homogeneous; centralised control, cooperative working.

Shared-Memory Computing: parallel systems, multicore; divide and conquer with synchronization; tightly coupled.

High-Throughput Computing: distributed computing, loosely coupled; disparate, autonomous, heterogeneous systems; computation-intensive work over long periods; resource sharing under a single administration.

High-Performance Computing: tightly coupled, fine-grained parallelism; homogeneous systems; high computing power over short periods; low-latency communication.

Grid: heterogeneous systems, HTC; virtual organisations (trust groups) that are dynamic and cross-organisational; geographically dispersed; resource sharing; scientific workloads distributed among all resources.

Cloud: heterogeneous systems, HPC; on-demand resource provisioning over the Internet; data-centric with a grid backbone and utility value; elastic, business-oriented, full utilization of resources.

Web Services: application integration; separation of concerns; data integration and interoperability.

Virtualisation: system integration; multi-tenancy (sharing a resource among multiple clients); viewing a single system as multiple resources.

Centralised Computing
All computer resources are centralized in one physical system, fully shared and tightly coupled within one integrated OS. Many data centers and supercomputers are centralized systems, but they are used in parallel, distributed, and cloud computing applications.

High-Performance Computing (HPC): parallel computing, supercomputing, cloud computing
High-Throughput Computing (HTC): distributed computing, grid computing, cluster computing, utility computing, Many-Task Computing (MTC)

High-Performance Computing
HPC comprises high-cost, tightly coupled systems interconnected by low-latency communication channels, executing parallel tasks in a particular domain that need a large amount of computing power over a short period of time.
- Metrics: FLOP/s, MB/s
- Nature of job: fine-grained parallel
- Examples: parallel computing, supercomputing, cloud computing
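The FLOP/s metric mentioned above can be illustrated with a toy measurement. This is only a sketch of how the metric is computed (operations divided by elapsed time); pure Python is orders of magnitude slower than real HPC codes, so the number it reports says nothing about the hardware's peak rate. The function name `estimate_flops` is ours.

```python
# Toy illustration of the FLOP/s metric: count floating-point
# operations and divide by wall-clock time.
import time

def estimate_flops(n=1_000_000):
    a, b = 1.000001, 0.999999
    start = time.perf_counter()
    acc = 0.0
    for _ in range(n):
        acc += a * b              # two floating-point ops per iteration
    elapsed = max(time.perf_counter() - start, 1e-12)
    return (2 * n) / elapsed      # FLOP/s = operations / seconds

if __name__ == "__main__":
    print(f"~{estimate_flops():.2e} FLOP/s (interpreter overhead included)")
```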

Parallel Computing
Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel").
- Programming models: OpenMP (shared memory), message passing
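The shared-memory model that OpenMP targets can be sketched in plain Python threads (a hedged illustration, not OpenMP itself; the name `shared_memory_sum` is ours): a large sum is divided into smaller subproblems, each thread computes a partial result, and the shared total is updated under a lock, since shared memory requires explicit synchronization.

```python
# Minimal shared-memory sketch: threads share one address space and
# must synchronize their updates to the common result.
import threading

def shared_memory_sum(data, workers=4):
    total = 0
    lock = threading.Lock()

    def worker(chunk):
        nonlocal total
        partial = sum(chunk)   # each thread solves a smaller subproblem
        with lock:             # critical section protects the shared total
            total += partial

    size = max(1, len(data) // workers)
    threads = [threading.Thread(target=worker, args=(data[i:i + size],))
               for i in range(0, len(data), size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

This is the "divide and conquer with synchronization" pattern from the paradigm list above, in its simplest form.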

High-Throughput Computing
HTC consists of loosely coupled, independent systems that execute sequential jobs requiring a large amount of compute power over a long period of time. Communication is through the network. HTC is also called high-flux computing.
- Metric: operations per month (throughput)
- Nature of job: coarse-grained parallel, embarrassingly parallel
- Examples: distributed computing, grid computing, cluster computing, utility computing
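The HTC pattern above can be sketched as a batch of embarrassingly parallel jobs: each job is independent and never communicates with the others, and the interesting metric is jobs completed per unit time rather than FLOP/s. The names `independent_job` and `run_batch` are ours, and the workload is a stand-in for a long sequential job.

```python
# HTC sketch: many independent jobs, measured by throughput.
from concurrent.futures import ProcessPoolExecutor
import time

def independent_job(seed):
    # Stand-in for a long sequential job; jobs never talk to each other.
    return sum(i * seed for i in range(1000))

def run_batch(seeds):
    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(independent_job, seeds))
    elapsed = max(time.perf_counter() - start, 1e-12)
    throughput = len(results) / elapsed   # jobs per second
    return results, throughput
```

A real HTC system (e.g., Condor-style scheduling) adds queueing, matchmaking, and fault tolerance on top of this basic scatter of independent jobs.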

Distributed Computing
A distributed system consists of multiple autonomous computers that communicate through a computer network. There are several autonomous processes, each of which has its own local memory; the processes communicate with each other by message passing.
- Issues: no common clock, load balancing, latency
- Programming interface: MPI
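The message-passing model can be sketched with separate processes whose only coordination channel is an explicit message, standing in for the network communication that MPI would provide (this is a local illustration, not MPI; the name `scatter_gather` is ours). Each worker has its own private memory, receives its share of the data as a message, and replies with a partial result.

```python
# Distributed-computing sketch: private local memory per process,
# communication only by message passing over a channel.
from multiprocessing import Process, Pipe

def worker(conn):
    data = conn.recv()     # receive work as a message; no shared memory
    conn.send(sum(data))   # reply with a result message
    conn.close()

def scatter_gather(data, workers=2):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    pipes, procs = [], []
    for chunk in chunks:
        parent, child = Pipe()
        p = Process(target=worker, args=(child,))
        p.start()
        parent.send(chunk)          # scatter: one chunk per process
        pipes.append(parent)
        procs.append(p)
    partials = [conn.recv() for conn in pipes]   # gather partial sums
    for p in procs:
        p.join()
    return sum(partials)
```

The scatter/gather shape here mirrors MPI's `MPI_Scatter`/`MPI_Reduce` collectives, but without any of MPI's process management or network transport.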

Many-Task Computing
MTC is similar to HTC, but differs in its emphasis on using many computing resources over short periods of time to accomplish many computational tasks (both dependent and independent), where the primary metrics are measured in seconds (e.g., FLOPS, tasks/s, MB/s I/O rates) rather than operations (e.g., jobs) per month. MTC denotes high-performance computations comprising multiple distinct activities coupled via file-system operations. Tasks may be small or large, uniprocessor or multiprocessor, compute-intensive or data-intensive. The set of tasks may be static or dynamic, homogeneous or heterogeneous, loosely coupled or tightly coupled.

Trend towards Utility Computing
When the Internet was introduced in 1969, Leonard Kleinrock of UCLA declared: "As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of computer utilities, which like present electric and telephone utilities, will service individual homes and offices across the country." Utility computing focuses on a business model in which customers receive computing resources from a paid service provider. All grid/cloud platforms are regarded as utility service providers.

Vision of utility computing

Hype Cycle of New Technologies
Expectations rise sharply from the technology trigger to a peak of inflated expectations. Through a short period of disillusionment, expectations may drop into a trough and then climb steadily over a long slope of enlightenment to a plateau of productivity. (Figure: Gartner hype cycle for 2010; the legend classifies technologies by time to mainstream adoption: 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before the plateau.)

Applications of HTC and HPC

Cyber-Physical Systems
A cyber-physical system (CPS) is the result of interaction between computational processes and the physical world. A CPS integrates "cyber" (heterogeneous, asynchronous) objects with "physical" (concurrent and information-dense) objects, merging the "3C" technologies of computation, communication, and control into an intelligent closed feedback loop between the physical world and the information world. The IoT emphasizes various networking connections among physical objects, while the CPS emphasizes exploration of virtual reality (VR) applications in the physical world. CPSs may transform how we interact with the physical world just as the Internet transformed how we interact with the virtual world.