Amdahl's law.


Example from Wikipedia: Suppose a task is split into four consecutive parts: P1 (taking 11% of the total time), P2 (18%), P3 (23%), and P4 (48%). Suppose further that P1 is not sped up at all, while P2 is sped up 5×, P3 is sped up 20×, and P4 is sped up 1.6×.

Improvement in speed: The 11% still takes the same amount of time, while the 18% is divided by 5 (i.e., it runs 5 times faster), and so on. If the original time was 1, the new time is 0.11 + 0.18/5 + 0.23/20 + 0.48/1.6 = 0.4575, so the task runs a little more than twice as fast. The overall speedup is 1/0.4575 ≈ 2.19.
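This arithmetic is easy to check in a few lines of Python; the fractions and per-part speedup factors below are the ones from the example above:

```python
# Each part: (fraction of the original run time, speedup factor for that part)
parts = [(0.11, 1.0), (0.18, 5.0), (0.23, 20.0), (0.48, 1.6)]

# New total time: each part's share of the old time, divided by its speedup.
new_time = sum(fraction / speedup for fraction, speedup in parts)

print(f"new time = {new_time:.4f}")       # 0.4575
print(f"speedup  = {1 / new_time:.2f}x")  # ~2.19x
```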

Amdahl's law: In Amdahl's law we break the process into two pieces: a parallelizable fraction P and a non-parallelizable fraction (1 - P). If we assume there are N processors (or pipelines, or whatever), the parallelizable part is sped up by a factor of N, so the overall speedup is S(N) = 1 / ((1 - P) + P/N).
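As a minimal sketch, assuming the two-piece model above, the law translates directly into a small function (the name amdahl_speedup is chosen here just for illustration):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup when a fraction p of the work is
    parallelized across n processors (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# 90% parallelizable on 8 processors: about 4.71, well short of the ideal 8x.
print(amdahl_speedup(0.9, 8))
```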

Limit as N → ∞: S(N) → 1/(1 - P). If a process is 90% parallelizable, then the maximum possible speedup is 1/(1 - 0.9) = 10: no matter how many processors are used, it can be made at most 10 times faster. "For this reason, parallel computing is only useful for either small numbers of processors, or problems with very high values of P: so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce the component (1 - P) to the smallest possible value."
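A short sweep with the amdahl_speedup sketch above shows the speedup for P = 0.9 flattening toward the 1/(1 - P) = 10 ceiling as N grows:

```python
for n in (1, 2, 4, 16, 64, 256, 1024):
    print(f"N = {n:4d}: speedup = {amdahl_speedup(0.9, n):.2f}")
# Prints 1.00, 1.82, 3.08, 6.40, 8.77, 9.66, 9.91 -- approaching 10.
```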

Reference: http://en.wikipedia.org/wiki/Amdahl's_law