Distributed Computing
Prepared 7/28/2011 by T. O’Neil for 3460:677, Fall 2011, The University of Akron.

Distributed Computing Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime. In distributed computing, a program is split into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but "parallel computing" most commonly describes program parts running simultaneously on multiple processors within the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs must often cope with heterogeneous environments, network links of varying latencies, and unpredictable failures of the network or the computers.
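The "divide a program into parts that run simultaneously" idea can be sketched in a few lines. This is a hypothetical example (not from the slides), using Python's standard multiprocessing module to split one large sum across several worker processes on the same machine, i.e. the parallel-computing case:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- one independent piece of the problem."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    # Split one large problem (the sum of 0..999999) into four parts
    # that run simultaneously on multiple processors.
    chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(range(1_000_000))
```

In the distributed case the workers would be separate computers and the `pool.map` call would be replaced by network communication, which is where the heterogeneity, latency, and failure concerns above come in.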

Distributed Computing : Goals and advantages There are many different types of distributed computing systems and many challenges to overcome in successfully designing one. The main goal of a distributed computing system is to connect users and resources in a transparent, open, and scalable way. Ideally this arrangement is drastically more fault tolerant and more powerful than many combinations of stand-alone computer systems.

Form of Distributed Computing : Grid Computing Grid computing (or the use of a computational grid) is the application of several computers to a single problem at the same time, usually a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data. Grid computing depends on software to divide and apportion pieces of a program among several computers, sometimes up to many thousands. Grid computing can also be thought of as distributed, large-scale cluster computing, as well as a form of network-distributed parallel processing. It can be small, confined to a network of computer workstations within a corporation, for example, or it can be a large, public collaboration across many companies or networks.

More on Grid Computing It is a form of distributed computing whereby a "super and virtual computer" is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks. This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-office data processing in support of e-commerce and Web services. What distinguishes grid computing from conventional cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Also, while a computing grid may be dedicated to a specialized application, it is often constructed with the aid of general-purpose grid software libraries and middleware.

Parallel Computing Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing.
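The last two forms can be contrasted in a small hypothetical sketch: data parallelism applies the same operation to different pieces of data, while task parallelism runs different operations concurrently. The function names here are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor

data = [1, 2, 3, 4]

def square(x):
    return x * x

with ThreadPoolExecutor() as pool:
    # Data parallelism: the same function applied to different data elements.
    squares = list(pool.map(square, data))

    # Task parallelism: different operations submitted as separate concurrent tasks.
    f_sum = pool.submit(sum, data)
    f_max = pool.submit(max, data)

print(squares)                          # [1, 4, 9, 16]
print(f_sum.result(), f_max.result())   # 10 4
```

Bit-level and instruction-level parallelism, by contrast, live in the hardware (wider words, pipelined and superscalar execution) and are not visible at this level of code.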

Parallel Computing Parallel computers can be roughly classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks. Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically among the greatest barriers to good parallel program performance. The speed-up of a program as a result of parallelization is given by Amdahl's law.
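Amdahl's law can be stated concretely: if a fraction p of a program's running time can be parallelized across n processors, the overall speed-up is 1 / ((1 - p) + p / n). A minimal sketch:

```python
def amdahl_speedup(p, n):
    """Speed-up predicted by Amdahl's law for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# With a 10% serial portion (p = 0.9), adding processors helps less and less:
# even with unlimited processors the speed-up can never reach 10x.
for n in (2, 4, 16, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

This is why the serial portion of a program, not the processor count, usually dominates achievable parallel performance.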

Fine-grained, coarse-grained, and embarrassing parallelism Applications are often classified according to how often their subtasks need to synchronize or communicate with each other. An application exhibits fine-grained parallelism if its subtasks must communicate many times per second; it exhibits coarse-grained parallelism if they do not communicate many times per second; and it is embarrassingly parallel if they rarely or never have to communicate. Embarrassingly parallel applications are considered the easiest to parallelize.

Hardware and Software for Distributed or Parallel Computing Hardware: high-performance computers (Beowulf cluster computers, proprietary cluster computers), workstations on a LAN, personal computers. Software: Globus for grid middleware; MPI, a library for parallel programming on distributed memory; the Java and C languages; LINDA, a parallel processing model.

Additional Notes

Pipeline In computing, a pipeline is a set of data processing elements connected in series, so that the output of one element is the input of the next. The elements of a pipeline are often executed in parallel or in time-sliced fashion; in that case, some amount of buffer storage is often inserted between elements. Computer-related pipelines include: Instruction pipelines, such as the classic RISC pipeline, which are used in processors to allow overlapping execution of multiple instructions with the same circuitry; the circuitry is usually divided into stages, including instruction decoding, arithmetic, and register fetching stages, wherein each stage processes one instruction at a time. Graphics pipelines, found in most graphics cards, which consist of multiple arithmetic units, or complete CPUs, that implement the various stages of common rendering operations (perspective projection, window clipping, color and light calculation, rendering, etc.). Software pipelines, consisting of multiple processes arranged so that the output stream of one process is automatically and promptly fed as the input stream of the next; Unix pipelines are the classic implementation of this concept. Another example is instruction scheduling on the Intel Pentium 4.
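A software pipeline of the kind described above can be sketched with Python generators (a hypothetical example; the stage names are invented). Each stage consumes the output stream of the previous one, element by element, just as processes do in a Unix pipeline:

```python
def numbers(n):
    """Stage 1: produce a stream of integers."""
    yield from range(n)

def square(stream):
    """Stage 2: transform each element as it arrives."""
    for x in stream:
        yield x * x

def keep_even(stream):
    """Stage 3: filter the stream, passing only even values downstream."""
    for x in stream:
        if x % 2 == 0:
            yield x

# Connect the stages in series: the output of one is the input of the next.
pipeline = keep_even(square(numbers(10)))
print(list(pipeline))  # [0, 4, 16, 36, 64]
```

Because each stage only holds one element at a time, the stages could in principle run concurrently with buffering between them, which is exactly the arrangement the slide describes.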