Parallel Computing and Parallel Computers


Parallel Computing and Parallel Computers ITCS 4/5145 Parallel Programming UNC-Charlotte, B. Wilkinson, 2012. Jan 12, 2012

Parallel Computing. Using more than one computer, or a computer with more than one processor, to solve a problem. Motives: usually faster computation. The very simple idea is that n computers operating simultaneously can achieve the result faster, although it will not be n times faster for various reasons. Other motives include fault tolerance, a larger amount of memory available, ...

Demand for Computational Speed. There is a continual demand for greater computational speed from a computer system than is currently possible. Areas requiring great computational speed include numerical modeling and simulation of scientific and engineering problems. Computations need to be completed within a “reasonable” time period.

“Grand Challenge” Problems. Ones that cannot be solved in a reasonable amount of time with today’s computers. Obviously, an execution time of 10 years is always unreasonable. Examples: modeling large DNA structures, global weather forecasting, and modeling the motion of astronomical bodies.

Weather Forecasting. The atmosphere is modeled by dividing it into 3-dimensional cells. The calculation for each cell (temperature, pressure, humidity, etc.) is repeated many times to model the passage of time.

Global Weather Forecasting Example. Suppose the whole global atmosphere is divided into cells of size 1 mile × 1 mile × 1 mile to a height of 10 miles (10 cells high), giving about 5 × 10^8 cells. Suppose each calculation requires 200 floating point operations. In one time step, 10^11 floating point operations are necessary. To forecast the weather over 7 days using 1-minute intervals, a computer operating at 1 Gflops (10^9 floating point operations/s) takes 10^6 seconds, or over 10 days. To perform the calculation in 5 minutes requires a computer operating at 3.4 Tflops (3.4 × 10^12 floating point operations/s), i.e. about 3,400 times faster than the 1 Gflops machine.
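The arithmetic above can be checked with a short script; a minimal sketch in Python, using only the cell count, operation count, and time-step figures quoted on this slide:

```python
# Back-of-the-envelope check of the global weather forecasting example.
cells = 5e8               # ~1 mile x 1 mile x 1 mile cells, 10 cells high
flops_per_cell = 200      # floating point operations per cell per time step
ops_per_step = cells * flops_per_cell        # ~10^11 operations per step

steps = 7 * 24 * 60       # 7 days at 1-minute intervals
total_ops = ops_per_step * steps             # ~10^15 operations

machine = 1e9             # a 1 Gflops computer
print(total_ops / machine / 86400, "days at 1 Gflops")   # over 10 days

target = 5 * 60           # desired completion time: 5 minutes
print(total_ops / target / 1e12, "Tflops required")      # ~3.4 Tflops
```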

Modeling Motion of Astronomical Bodies Each body attracted to each other body by gravitational forces. Movement of each body predicted by calculating total force on each body.

Modeling Motion of Astronomical Bodies. With N bodies, there are N - 1 forces to calculate for each body, or approximately N^2 calculations, i.e. O(N^2). After determining the new positions of the bodies, the calculations are repeated, i.e. N^2 × T calculations, where T is the number of time steps. There is an O(N log2 N) algorithm, which we will cover in the course.
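A direct force-summation step makes the O(N^2) count concrete; a minimal sketch in Python (the function name and the use of plain Python lists are illustrative, not the course's implementation):

```python
import math

def step_forces(pos, mass, G=6.674e-11):
    """Total gravitational force on each of N bodies: ~N^2 pair calculations.
    Assumes no two bodies share the same position."""
    n = len(pos)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]   # separation vector
            r = math.sqrt(sum(c * c for c in d))            # distance
            f = G * mass[i] * mass[j] / (r * r)             # force magnitude
            for k in range(3):
                forces[i][k] += f * d[k] / r                # accumulate vector
    return forces
```

Repeating such a step for T time steps gives the N^2 × T total mentioned above.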

A galaxy might have, say, 10^11 stars. Even if each calculation is done in 1 ms (an extremely optimistic figure), it takes 10^9 years for one iteration using the N^2 algorithm, or almost a year for one iteration using the N log2 N algorithm, assuming the calculations take the same time (which may not be true).

Astrophysical N-body simulation by Scott Linssen (undergraduate UNC-Charlotte student).

Parallel Programming: Programming Parallel Computers. Programming parallel computers has been around for more than 50 years.

Gill writes in 1958: “... There is therefore nothing new in the idea of parallel programming, but its application to computers. The author cannot believe that there will be any insuperable difficulty in extending it to computers. It is not to be expected that the necessary programming techniques will be worked out overnight. Much experimenting remains to be done. After all, the techniques that are commonly used in programming today were only won at the cost of considerable toil several years ago. In fact the advent of parallel programming may do something to revive the pioneering spirit in programming which seems at the present to be degenerating into a rather dull and routine occupation ...” Gill, S. (1958), “Parallel Programming,” The Computer Journal, vol. 1, April, pp. 2-10.

Potential for parallel computers/parallel programming

Speedup Factor

S(p) = ts / tp = (execution time using one processor, best sequential algorithm) / (execution time using a multiprocessor with p processors)

where ts is the execution time on a single processor and tp is the execution time on the multiprocessor. S(p) gives the increase in speed from using the multiprocessor. Typically the best sequential algorithm is used for the single-processor time; the underlying algorithm for the parallel implementation might be (and usually is) different.

Speedup factor can also be cast in terms of computational steps:

S(p) = (number of computational steps using one processor) / (number of parallel computational steps with p processors)

Time complexity can also be extended to parallel computations.
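As a concrete illustration of the definition, a minimal sketch in Python (the function name and the sample timings are hypothetical):

```python
def speedup(t_seq, t_par):
    """Speedup factor S(p) = ts / tp, with ts from the best sequential algorithm."""
    return t_seq / t_par

# Hypothetical measurements: 120 s on one processor, 18 s on 8 processors.
print(speedup(120.0, 18.0))   # ~6.7, i.e. less than the 8x linear speedup
```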

Maximum Speedup. The maximum speedup is usually p with p processors (linear speedup). It is possible to get superlinear speedup (greater than p), but there is usually a specific reason, such as extra memory in the multiprocessor system or a nondeterministic algorithm.

Maximum Speedup: Amdahl’s law. [Figure: (a) with one processor, the execution time ts is split into a serial section, f·ts, and parallelizable sections, (1 - f)·ts; (b) with p processors, the parallelizable part takes (1 - f)·ts/p, giving the parallel execution time tp.]

Speedup factor is then given by:

S(p) = ts / (f ts + (1 - f) ts / p) = p / (1 + (p - 1) f)

This equation is known as Amdahl’s law.

[Figure: speedup S(p) plotted against the number of processors p (up to 20), for serial fractions f = 20%, 10%, 5%, and 0%.]

Even with infinite number of processors, maximum speedup limited to 1/f. Example With only 5% of computation being serial, maximum speedup is 20, irrespective of number of processors. This is a very discouraging result. Amdahl used this argument to support the design of ultra-high speed single processor systems in the 1960s.
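The 1/f limit is easy to see numerically; a minimal sketch in Python using the formula above (the chosen processor counts are arbitrary):

```python
def amdahl_speedup(f, p):
    """Amdahl's law: S(p) = p / (1 + (p - 1) * f) for serial fraction f."""
    return p / (1 + (p - 1) * f)

f = 0.05   # 5% of the computation is serial
for p in (4, 16, 256, 4096):
    print(p, round(amdahl_speedup(f, p), 2))
# Speedup approaches, but never exceeds, 1/f = 20 as p grows.
```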

Gustafson’s law Later, Gustafson (1988) described how the conclusion of Amdahl’s law might be overcome by considering the effect of increasing the problem size. He argued that when a problem is ported onto a multiprocessor system, larger problem sizes can be considered, that is, the same problem but with a larger number of data values.

Gustafson’s law. The starting point for Gustafson’s law is the computation on the multiprocessor rather than on a single computer. In Gustafson’s analysis, the parallel execution time is kept constant, which is assumed to be some acceptable time to wait for the solution.

Gustafson’s law. The parallel computation is composed of a fraction computed sequentially, say f', and a fraction containing parallel parts, 1 - f'. Gustafson’s so-called scaled speedup factor is given by:

S'(p) = (f' tp + (1 - f') p tp) / tp = p + (1 - p) f'

Here f' is the fraction of the computation on the multiprocessor that cannot be parallelized, and tp is regarded as constant. Note that f' is different from f previously, which is the fraction of the computation on a single computer that cannot be parallelized. Conclusion: an almost linear increase in speedup with increasing number of processors, but the fractional sequential part f' needs to diminish with increasing problem size.

Gustafson’s law. For example, if f' is 5%, the scaled speedup computes to 19.05 with 20 processors, whereas with Amdahl’s law and f = 5% the speedup computes to 10.26. Gustafson quotes results obtained in practice of very high speedup, close to linear, on a 1024-processor hypercube.
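Both figures quoted above follow directly from the two formulas; a minimal sketch in Python comparing them (function names are illustrative):

```python
def amdahl_speedup(f, p):
    """Amdahl's law: S(p) = p / (1 + (p - 1) * f)."""
    return p / (1 + (p - 1) * f)

def gustafson_speedup(f_prime, p):
    """Gustafson's scaled speedup: S'(p) = p + (1 - p) * f_prime."""
    return p + (1 - p) * f_prime

p = 20
print(round(gustafson_speedup(0.05, p), 2))   # 19.05
print(round(amdahl_speedup(0.05, p), 2))      # 10.26
```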

Superlinear Speedup example - Searching. [Figure (a): searching each sub-space sequentially. The search space is divided into p sub-spaces, each taking ts/p to search; the solution is found a time Δt into one sub-space, after x complete sub-space searches (x indeterminate).]

[Figure (b): searching each sub-space in parallel; the solution is found after time Δt.]

The speed-up is then given by:

S(p) = (x × (ts / p) + Δt) / Δt

where x is the number of sub-spaces searched before the one holding the solution.

The worst case for the sequential search is when the solution is found in the last sub-space searched. Then the parallel version offers the greatest benefit, i.e.

S(p) = (((p - 1) / p) × ts + Δt) / Δt → ∞ as Δt tends to zero.

The least advantage for the parallel version is when the solution is found in the first sub-space of the sequential search, i.e.

S(p) = Δt / Δt = 1

The actual speed-up depends upon which sub-space holds the solution, but it could be extremely large.
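To make the range concrete, a minimal sketch in Python evaluating the two extreme cases above (the values of ts, p, and Δt are hypothetical):

```python
ts = 100.0   # hypothetical sequential time to search the whole space
p = 8        # number of sub-spaces searched in parallel
dt = 0.01    # time into a sub-space at which the solution is found

# Worst case for the sequential search: solution in the last sub-space.
s_max = (((p - 1) / p) * ts + dt) / dt
# Best case for the sequential search: solution in the first sub-space.
s_min = dt / dt

print(s_max)   # 8751.0: far greater than p, i.e. superlinear
print(s_min)   # 1.0: no advantage for the parallel version
```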

The next question to answer is: how does one construct a computer system with multiple processors to achieve this speed-up?