Slide 1 COMP 308 Parallel Efficient Algorithms
Lecturer: Dr. Igor Potapov, Ashton Building, room 3.15
COMP 308 web-page:
Introduction to Parallel Computation

Slide 2 Course Description and Objectives: The aim of the module is to introduce techniques for the design of efficient parallel algorithms and their implementation.

Slide 3 Learning Outcomes: At the end of the course you will be:
 familiar with the wide applicability of graph theory and tree algorithms as an abstraction for the analysis of many practical problems,
 familiar with efficient parallel algorithms for many areas of computer science: expression evaluation, sorting, graph-theoretic problems, computational geometry, algorithmics of texts, etc.,
 familiar with the basic issues of implementing parallel algorithms.
You will also acquire knowledge of those problems which have been perceived as intractable for parallelization.

Slide 4 Teaching method
Series of 30 lectures (3 hrs per week):
Lecture: Monday
Lecture: Tuesday
Lecture: Friday
Course Assessment:
A two-hour examination: 80%
Continuous assessment (written class test + home assignment): 20%

Slide 5 Recommended Course Textbooks
Introduction to Algorithms, Cormen et al.
Introduction to Parallel Computing: Design and Analysis of Algorithms, Vipin Kumar, Ananth Grama, Anshul Gupta, and George Karypis, Benjamin Cummings, 2nd ed.
Efficient Parallel Algorithms, A. Gibbons, W. Rytter, Cambridge University Press
Research papers (will be announced later)

Slide 6 What is Parallel Computing? (basic idea) Consider the problem of stacking (reshelving) a set of library books. – A single worker trying to stack all the books in their proper places cannot accomplish the task faster than a certain rate. – We can speed up this process, however, by employing more than one worker.

Slide 7 Solution 1 Assume that books are organized into shelves and that the shelves are grouped into bays. One simple way to assign the task to the workers is: – to divide the books equally among them. – Each worker stacks the books one at a time. This division of work may not be the most efficient way to accomplish the task, since – the workers must walk all over the library to stack books.

Slide 8 Solution 2 An alternative way to divide the work is to assign a fixed and disjoint set of bays to each worker. As before, each worker is assigned an equal number of books arbitrarily. – If the worker finds a book that belongs to a bay assigned to him or her, he or she places that book in its assigned spot. – Otherwise, he or she passes it on to the worker responsible for the bay it belongs to. The second approach requires less effort from individual workers. Dividing the books is an instance of task partitioning; passing a book on is an instance of a communication task.
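To make the scheme concrete, here is a minimal Python sketch of Solution 2 (the round-robin ownership rule, the queue-based passing, and all names are illustrative assumptions, not details fixed by the slide); each worker shelves the books it owns and forwards the rest:

from collections import deque

def stack_books(books, num_workers):
    # Assumed ownership rule for this sketch: bay b belongs to worker b % num_workers.
    queues = [deque() for _ in range(num_workers)]
    for i, bay in enumerate(books):              # arbitrary initial division
        queues[i % num_workers].append(bay)
    shelved = [[] for _ in range(num_workers)]
    passes = 0
    while any(queues):
        for w in range(num_workers):
            if queues[w]:
                bay = queues[w].popleft()
                if bay % num_workers == w:
                    shelved[w].append(bay)                 # task executed locally
                else:
                    queues[bay % num_workers].append(bay)  # communication step
                    passes += 1
    return shelved, passes

books = [3, 7, 1, 4, 0, 2, 6, 5] * 3     # each book labelled by its target bay
print(stack_books(books, num_workers=4)[1], "books passed between workers")

Each book is passed at most once, so the cheap forwarding step is exactly the communication cost that Solution 2 trades against the library-wide walking of Solution 1.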

Slide 9 Problems are parallelizable to different degrees For some problems, assigning partitions to other processors might be more time-consuming than performing the processing locally. Other problems may be completely serial. – For example, consider the task of digging a post hole: although one person can dig the hole in a certain amount of time, employing more people does not reduce this time.

Slide 10 Power of parallel solutions Pile collection – Ants/robots with very limited abilities (each sees only its neighbourhood) – Grid environment (sticks and robots)
Move():
  move randomly
  until the robot sees a stick in its neighbourhood
Collect():
  Move(); pick up a stick;
  Move(); put it down;
  Collect();
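A minimal single-robot Python simulation of pile collection (the grid size, step count, torus wrap-around, and the drop-beside-another-stick rule are assumptions of this sketch, not details given on the slide); several such robots running concurrently only speed up the emergence of piles:

import random

def collect_sticks(grid_size=20, sticks=40, steps=10000):
    # Wander randomly; pick up a stick when one is found; put it down
    # when another stick is encountered, so piles gradually emerge.
    cells = [[0] * grid_size for _ in range(grid_size)]
    for _ in range(sticks):
        cells[random.randrange(grid_size)][random.randrange(grid_size)] += 1
    x, y, carrying = 0, 0, False
    for _ in range(steps):
        x = (x + random.choice((-1, 0, 1))) % grid_size   # random move on a torus
        y = (y + random.choice((-1, 0, 1))) % grid_size
        if not carrying and cells[x][y] > 0:
            cells[x][y] -= 1                              # pick up a stick
            carrying = True
        elif carrying and cells[x][y] > 0:
            cells[x][y] += 1                              # put it down by another stick
            carrying = False
    return cells

print(max(max(row) for row in collect_sticks()), "sticks in the largest pile")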

Slide 11 Sorting in nature

Slide 12 Parallel Processing (Several processing elements working to solve a single problem) Primary consideration: elapsed time – NOT: throughput, sharing resources, etc. Downside: complexity – system, algorithm design Elapsed Time = computation time + communication time + synchronization time

Slide 13 Design of efficient algorithms A parallel computer is of little use unless efficient parallel algorithms are available. – The issues in designing parallel algorithms are very different from those in designing their sequential counterparts. – A significant amount of work is being done to develop efficient parallel algorithms for a variety of parallel architectures.

Slide 14 Processor Trends Moore’s Law – performance doubles every 18 months Parallelization within processors – pipelining – multiple pipelines

Slide 15 Why Parallel Computing Practical: – Moore’s Law cannot hold forever – Problems must be solved immediately – Cost-effectiveness – Scalability Theoretical: – challenging problems

Slide 16 Some Complex Problems N-body simulation Atmospheric simulation Image generation Oil exploration Financial processing Computational biology

Slide 17 Some Complex Problems N-body simulation – O(n log n) time per iteration – for a galaxy's worth of stars, approx. one year / iteration Atmospheric simulation – 3D grid, each element interacts with its neighbors – 1x1x1 mile elements → 5 × 10^8 elements – a 10-day simulation requires approx. 100 days of computation

Slide 18 Some Complex Problems Image generation – animation, special effects – several minutes of video → 50 days of rendering Oil exploration – large amounts of seismic data to be processed – months of sequential exploration

Slide 19 Some Complex Problems Financial processing – market prediction, investing – Cornell Theory Center, Renaissance Tech. Computational biology – drug design – gene sequencing (Celera) – structure prediction (Proteomics)

Slide 20 Fundamental Issues Is the problem amenable to parallelization? How to decompose the problem to exploit parallelism? What machine architecture should be used? What parallel resources are available? What kind of speedup is desired?

Slide 21 Two Kinds of Parallelism Pragmatic – goal is to speed up a given computation as much as possible – problem-specific – techniques include: overlapping instructions (multiple pipelines) overlapping I/O operations (RAID systems) “traditional” (asymptotic) parallelism techniques

Slide 22 Two Kinds of Parallelism Asymptotic – studies: architectures for general parallel computation parallel algorithms for fundamental problems limits of parallelization – can be subdivided into three main areas

Slide 23 Asymptotic Parallelism Models – comparing/evaluating different architectures Algorithm Design – utilizing a given architecture to solve a given problem Computational Complexity – classifying problems according to their difficulty

Slide 24 Architecture Single processor: – single instruction stream – single data stream – von Neumann model Multiple processors: – Flynn’s taxonomy

Slide 25 MISD SISD MIMD SIMD 1 Many 1 Data Streams Instruction Streams Flynn’s Taxonomy

Slide 26

Slide 27 Parallel Architectures Multiple processing elements Memory: – shared – distributed – hybrid Control: – centralized – distributed

Slide 28 Parallel vs Distributed Computing Parallel: – several processing elements concurrently solving a single problem Distributed: – processing elements do not share memory or system clock Which is the subset of which? – distributed is a subset of parallel

Slide 29 Efficient and optimal parallel algorithms A parallel algorithm is efficient iff – it is fast (e.g. polynomial time) and – the product of the parallel time and the number of processors is close to the time of the best known sequential algorithm: T_sequential ≈ T_parallel × N_processors. A parallel algorithm is optimal iff this product is of the same order as the best known sequential time.
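A standard worked example (textbook material, not taken from the slide): summing n numbers by a binary-tree reduction is optimal in this sense, because the time-processor product matches the sequential time:

\[
T_{\text{seq}} = O(n), \qquad
T_{\text{par}} = O(\log n) \ \text{using}\ p = n/\log n \ \text{processors},
\]
\[
T_{\text{par}} \cdot p \;=\; O(\log n) \cdot \frac{n}{\log n} \;=\; O(n) \;=\; T_{\text{seq}}.
\]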

Slide 30 Metrics
A measure of relative performance between a multiprocessor system and a single-processor system is the speed-up S(p), defined as follows:
S(p) = (execution time using a single processor) / (execution time using a multiprocessor with p processors) = T_1 / T_p
Efficiency: E_p = S(p) / p
Cost: C_p = p × T_p
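A tiny Python helper for these three metrics (the function name and the sample timings are made up for illustration):

def metrics(t1, tp, p):
    # Speed-up S(p) = T1/Tp, efficiency E_p = S(p)/p, cost C_p = p * Tp.
    speedup = t1 / tp
    return speedup, speedup / p, p * tp

# e.g. T1 = 100 s and Tp = 16 s on p = 8 processors:
# speed-up 6.25, efficiency ~0.78, cost 128 processor-seconds
print(metrics(100.0, 16.0, 8))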

Slide 31 Metrics
A parallel algorithm is cost-optimal when parallel cost equals sequential time: C_p = T_1, i.e. E_p = 100%.
This is critical when down-scaling: a parallel implementation may become slower than the sequential one.
Example: T_1 = n^3 and T_p = n^2.5 when p = n^2, so C_p = p × T_p = n^4.5 > T_1.

Slide 32 Amdahl's Law
f = fraction of the problem that is inherently sequential
(1 – f) = fraction that is parallelizable
Parallel time with p processors: T_p = f × T_1 + (1 – f) × T_1 / p
Speedup with p processors: S(p) = T_1 / T_p = 1 / (f + (1 – f) / p)

Slide 33 What kind of speed-up may be achieved? Part f is computed by a single processor. Part (1 – f) is computed by p processors, p > 1. Basic observation: increasing p cannot speed up part f.

Slide 34 Amdahl's Law Upper bound on speedup (p = ∞): the term (1 – f)/p converges to 0 as p grows, so S(p) → 1/f. Example: f = 2% gives S = 1 / 0.02 = 50.
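A short Python check (the function name and the sampled processor counts are illustrative choices) showing the speedup climbing toward the 1/f bound:

def amdahl_speedup(f, p):
    # Amdahl's law: f = inherently sequential fraction, p = processor count.
    return 1.0 / (f + (1.0 - f) / p)

# With the slide's f = 2%, the speedup approaches the upper bound of 50:
for p in (10, 100, 1000, 10**6):
    print(p, round(amdahl_speedup(0.02, p), 2))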

Slide 35 The main open question The basic parallel complexity class is NC. NC is the class of problems computable in poly-logarithmic time (O(log^c n) for a constant c) using a polynomial number of processors. P is the class of problems computable sequentially in polynomial time. The main open question in parallel computation: is NC = P?