Amdahl's Law: Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities. Presented by: Mohinderpartap Salooja.



History Gene Amdahl, chief architect of IBM's first mainframe series and founder of Amdahl Corporation and other companies, observed that there were fairly stringent limits on how much speed-up one could obtain for a given parallelized task. These observations were formalized as Amdahl's Law.

Introduction If F is the fraction of a calculation that is sequential, and (1-F) is the fraction that can be parallelized, then the maximum speed-up that can be achieved by using P processors is 1/(F+(1-F)/P).
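The formula above translates directly into code. The sketch below is a minimal illustration (the function name `amdahl_speedup` is my own, not from the slides):

```python
def amdahl_speedup(f_serial, p):
    """Maximum theoretical speed-up on p processors when a fraction
    f_serial of the calculation is inherently sequential."""
    return 1.0 / (f_serial + (1.0 - f_serial) / p)

# 10% sequential work, 5 processors -> roughly 3.6x
print(round(amdahl_speedup(0.1, 5), 1))
```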

Examples If 90% of a calculation can be parallelized (i.e. 10% is sequential), then the maximum speed-up that can be achieved on 5 processors is 1/(0.1+(1-0.1)/5), or roughly 3.6 (i.e. the program can theoretically run 3.6 times faster on five processors than on one).

Examples If 90% of a calculation can be parallelized, then the maximum speed-up on 10 processors is 1/(0.1+(1-0.1)/10), or 5.3 (i.e. investing twice as much hardware speeds the calculation up by only about 50%). If 90% of a calculation can be parallelized, then the maximum speed-up on 20 processors is 1/(0.1+(1-0.1)/20), or 6.9 (i.e. doubling the hardware again speeds up the calculation by only 30%).

Examples If 90% of a calculation can be parallelized, then the maximum speed-up on 1000 processors is 1/(0.1+(1-0.1)/1000), or 9.9 (i.e. throwing an enormous amount of hardware at the calculation yields a maximum theoretical speed-up of only 9.9 versus a single processor, and actual results will be worse).
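The diminishing returns in these examples can be reproduced in a few lines. Note the ceiling: as the processor count grows without bound, the speed-up approaches 1/F, here 10. This is a small demonstration script, not part of the original slides:

```python
def amdahl_speedup(f_serial, p):
    """Maximum theoretical speed-up on p processors."""
    return 1.0 / (f_serial + (1.0 - f_serial) / p)

f = 0.1  # 10% of the calculation is sequential
for p in (5, 10, 20, 1000):
    print(p, round(amdahl_speedup(f, p), 1))
# prints 3.6, 5.3, 6.9, 9.9 -- matching the slide's examples

print("ceiling:", 1.0 / f)  # limit as p -> infinity: 10.0
```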

Essence The point Amdahl was trying to make was that using lots of parallel processors was not a viable way of achieving the sort of speed-ups people were looking for; in other words, it was essentially an argument for investing effort in making single-processor systems run faster.

Generalizations The performance of any system is constrained by the speed or capacity of its slowest component. The impact of an effort to improve the performance of a program is limited by the fraction of time the program spends in the parts not targeted by that effort.

Maximum Theoretical Speed-up Amdahl's Law is a statement of the maximum theoretical speed-up you can ever hope to achieve. The actual speed-ups obtained in practice are always less than those predicted by Amdahl's Law.

Why are actual speed-ups always less? Distributing work to the parallel processors and collecting the results back together is extra work required in the parallel version that is not required in the serial version. The straggler problem: when the program is executing the parallel parts of the code, it is unlikely that all of the processors will be computing all of the time, since some of them will likely run out of work before others have finished their share.
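One way to see the effect of this extra coordination work is to extend the formula with an overhead term. The linear per-processor overhead model below is a hypothetical sketch of my own for illustration, not something from the slides; real overhead behavior varies by system:

```python
def amdahl_speedup(f_serial, p):
    """Ideal Amdahl speed-up, ignoring all overhead."""
    return 1.0 / (f_serial + (1.0 - f_serial) / p)

def overhead_speedup(f_serial, p, c):
    """Hypothetical model: each processor adds a fixed coordination
    cost c (distribution, collection, stragglers) to the runtime."""
    return 1.0 / (f_serial + (1.0 - f_serial) / p + c * p)

# With 10% sequential work and c = 0.001, the speed-up peaks around
# 30 processors and then *falls* as coordination dominates; at 1000
# processors the "parallel" program is slower than the serial one.
for p in (10, 30, 1000):
    print(p, round(overhead_speedup(0.1, p, 0.001), 2))
```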

Bottom Line Anyone intending to write parallel software simply must have a deep understanding of Amdahl's Law, both to avoid unrealistic expectations of what parallelizing a program or algorithm can achieve and to avoid underestimating the effort required to meet their performance goals.

Questions ?