CS 615: Design & Analysis of Algorithms Chapter 2: Efficiency of Algorithms

Course Content
1. Introduction, Algorithmic Notation and Flowcharts (Brassard & Bratley, Chap. 3)
2. Efficiency of Algorithms (Brassard & Bratley, Chap. 2)
3. Basic Data Structures (Brassard & Bratley, Chap. 5)
4. Sorting (Weiss, Chap. 7)
5. Searching (Brassard & Bratley, Chap. 9)
6. Graph Algorithms (Weiss, Chap. 9)
7. Randomized Algorithms (Weiss, Chap. 10)
8. String Searching (Sedgewick, Chap. 19)
9. NP-Completeness (Sedgewick, Chap. 40)

Definitions
- Problem: a situation to be solved by an algorithm
  - Example: multiply two integers
- Instance: a special case of the problem
  - Example: Multiply(981, 1234)
- An algorithm must work correctly on every instance of the problem it claims to solve
- To prove an algorithm is not correct, find an instance that the algorithm cannot solve correctly

Efficiency of Algorithms
To decide which algorithm to choose: the empirical approach
- Program the competing algorithms
- Try them on different instances with the help of the computer(s)
- Resources:
  - Computing time
  - Storage space
  - Number of processors (for parallel algorithms)

Efficiency of Algorithms
To decide which algorithm to choose: the theoretical approach
- Use formal methods to analyze the efficiency
- Does not depend on the computer
- No programming is needed

Efficiency of Algorithms
To decide which algorithm to choose: the hybrid approach
- Describe the algorithm's efficiency function theoretically
- Empirically determine the numerical parameters for a particular machine
- Predict the time an actual implementation will take to solve an instance

Principle of Invariance
- Two different implementations of the same algorithm will not differ in efficiency by more than some multiplicative constant
- Example, if the constant is 5:
  - if the first implementation takes 1 second to solve an instance,
  - then a second implementation (possibly on a different machine) will not take more than 5 seconds

Principle of Invariance
For an instance of size n:
- Implementation 1 takes time t1(n)
- Implementation 2 takes time t2(n)
There always exist positive constants c and d such that
  t1(n) ≤ c * t2(n)   and   t2(n) ≤ d * t1(n)
whenever n is sufficiently large.
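
A rough, machine-dependent illustration of the principle (this example and its two functions are an assumption for illustration, not part of the course material): two implementations of the same linear-time summation algorithm differ only by a roughly constant factor, whatever the instance size.

```python
import time

def sum_loop(values):
    # Implementation 1: explicit loop over the values.
    total = 0
    for v in values:
        total += v
    return total

def sum_builtin(values):
    # Implementation 2: Python's built-in sum (same algorithm, smaller constant).
    return sum(values)

if __name__ == "__main__":
    for n in (100_000, 1_000_000, 10_000_000):
        data = list(range(n))
        t0 = time.perf_counter(); sum_loop(data);    t1 = time.perf_counter()
        t2 = time.perf_counter(); sum_builtin(data); t3 = time.perf_counter()
        # Both times grow linearly with n; their ratio stays roughly constant,
        # which is exactly what the principle of invariance predicts.
        print(n, round((t1 - t0) / (t3 - t2), 1))
```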

Results of the Principle of Invariance
1. A change in the implementation of the same algorithm can only change the efficiency by a constant factor
2. The principle does not depend on:
   - the computer we use
   - the compiler we use
   - the abilities of the person doing the coding
3. If we want a radical change in efficiency, we need to change the algorithm itself

Theoretical Efficiency
- For a given function t, an algorithm for some problem takes a time in the order of t(n) if there exists a positive constant c such that the algorithm is capable of solving every instance of size n in not more than c*t(n) seconds (or hours, or years, ...)
- For numerical problems, n may sometimes be the value rather than the size of the instance
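
Written out as the standard textbook definition that this slide paraphrases (the threshold n0 below corresponds to the "sufficiently large n" mentioned on the invariance slide; T(n) denotes the algorithm's running time on the worst instance of size n):

```latex
% The algorithm takes a time in the order of t(n) when:
\exists\, c > 0,\ \exists\, n_0 \ge 0 \ \text{such that}\ \forall n \ge n_0:\quad T(n) \le c\, t(n)
```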

Algorithm Types
The time taken to solve an instance of size n by a
- linear algorithm is never greater than c*n
- quadratic algorithm is never greater than c*n^2
- cubic algorithm is never greater than c*n^3
- polynomial algorithm is never greater than n^k
- exponential algorithm is never greater than c^n
where c and k are appropriate constants.
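
A small sketch (the constants c = 1, k = 4 and the base 2 for the exponential column are arbitrary choices for illustration, not from the slides) that tabulates these bounds for a few values of n and makes the gap between polynomial and exponential growth concrete:

```python
def growth_table(sizes, c=1, k=4):
    # Tabulate the bounds from the slide for a few instance sizes.
    print(f"{'n':>6} {'c*n':>10} {'c*n^2':>12} {'c*n^3':>14} {'n^k':>16} {'2^n':>20}")
    for n in sizes:
        print(f"{n:>6} {c*n:>10} {c*n**2:>12} {c*n**3:>14} {n**k:>16} {2**n:>20}")

growth_table([10, 20, 30, 40, 50])
```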

Worst-Case Analysis
- The algorithm is analysed by considering only the instances that take the maximum amount of time
- If the algorithm solves instances of size n in time t(n), then the worst case is never greater than c*t(n)
- Useful if the algorithm is to be applied in settings where an upper bound on the running time must be known
  - Example: the response time of a nuclear power plant

Average-Case Analysis
- If the algorithm is going to be used many times, it is useful to know the average execution time on instances of size n
- It is harder to analyse the average case: the distribution of the input data must be known
- Example: insertion sort's average time is in the order of n^2

Elementary Operation
- An elementary operation is one whose execution time is bounded above by a constant
- The constant does not depend on the size or the other parameters of the instance considered
- Example: is x = y + w*z an elementary operation? Suppose
  - t_a: time to execute an addition (constant)
  - t_m: time to execute a multiplication (constant)
  - t_s: time to execute an assignment (constant)
  The time t required to execute a additions, m multiplications and s assignments satisfies
    t ≤ a*t_a + m*t_m + s*t_s ≤ max(t_a, t_m, t_s) * (a + m + s)
  where a, m and s are constants, so the line counts as an elementary operation.

Elementary Operation
- A single line of a program may correspond to a variable number of elementary operations
- Example: x = min{T[i] | 1 ≤ i ≤ n}
  - The time required to compute the minimum increases with n
  - so min() is not an elementary operation!
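
A minimal sketch of what that single line hides (the array T is an assumed input): computing the minimum expands into a loop that performs roughly n comparisons, so the amount of work is not bounded by a constant.

```python
def array_min(T):
    # The single assignment x = min{T[i] | 1 <= i <= n} expands into a loop:
    # one comparison (and possibly one assignment) per element, about n
    # elementary operations in total -- not a constant amount of work.
    x = T[0]
    for value in T[1:]:
        if value < x:
            x = value
    return x

x = array_min([7, 3, 9, 1, 4])   # 1
```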

Elementary Operation
- Normally, the time required to compute an addition or a multiplication depends on the length of the operands
- But it is reasonable to treat addition and multiplication as elementary operations when the operands have fixed length

Some Algorithm Examples
- Calculating Determinants
- Sorting
- Multiplication of Large Integers
- Calculating the Greatest Common Divisor
- Calculating Fibonacci Sequences
- Fourier Transforms

Calculating Determinants
- Recursive definition of the determinant
  - Computing the determinant of an n x n matrix takes time proportional to n!
  - Worse than exponential time
  - Experiments: 5 x 5 matrix: 20 sec.; 10 x 10 matrix: 10 min.
  - Estimation: 20 x 20 matrix: 10 million years
- Gauss-Jordan elimination
  - Computing the determinant of an n x n matrix takes time proportional to n^3
  - Experiments: 10 x 10 matrix: 0.01 sec.; 20 x 20 matrix: 0.05 sec.; 100 x 100 matrix: 5.5 sec.
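
A sketch of the two approaches in Python (an illustration under assumptions, not the course's reference code; the second function uses plain Gaussian elimination, which has the same n^3 order as the Gauss-Jordan variant named on the slide):

```python
def det_recursive(M):
    """Cofactor expansion along the first row: about n! scalar multiplications."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: remove row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_recursive(minor)
    return total

def det_elimination(M):
    """Gaussian elimination with partial pivoting: about n^3 operations."""
    A = [row[:] for row in M]          # work on a copy
    n = len(A)
    det = 1.0
    for k in range(n):
        # Pick the pivot with the largest absolute value in column k.
        pivot = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[pivot][k] == 0.0:
            return 0.0                 # singular matrix
        if pivot != k:
            A[k], A[pivot] = A[pivot], A[k]
            det = -det                 # a row swap flips the sign
        det *= A[k][k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
    return det

M = [[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 6.0]]
print(det_recursive(M), det_elimination(M))   # both -11 (up to rounding)
```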

Sorting
- Arranging n objects according to an "ordering function" defined on these objects
- No comparison-based sorting algorithm is faster than the order of n*log n
- Insertion sort
  - takes time proportional to n^2
  - Experiment: sorting 1 000 elements takes 3 sec.
  - Estimation: sorting 100 000 elements would take 9.5 hrs.
- Selection sort
  - takes time proportional to n^2
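
Straightforward list-based sketches of the two quadratic sorts mentioned above (illustrative versions, not the course's reference implementations):

```python
def insertion_sort(a):
    # Insert each element into place within the already-sorted prefix:
    # roughly n^2/4 comparisons on average, n^2/2 in the worst case.
    a = a[:]
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def selection_sort(a):
    # Repeatedly select the smallest remaining element: always about n^2/2 comparisons.
    a = a[:]
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]
print(selection_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]
```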

Sorting
- Heapsort: takes time proportional to n*log n, even in the worst case
- Mergesort: takes time proportional to n*log n, even in the worst case
- Quicksort: takes time proportional to n*log n (on average)
  - Experiment: sorting 1 000 elements takes 0.2 sec.; sorting 100 000 elements takes 30 sec.
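
As one example of the n*log n sorts, a minimal mergesort sketch (a plain recursive version for illustration):

```python
def merge_sort(a):
    # Split in half, sort each half recursively, then merge:
    # n*log n comparisons even in the worst case.
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([9, 4, 7, 1, 8, 2]))   # [1, 2, 4, 7, 8, 9]
```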

Multiplication of Large Integers
- When multiplying large integers, the operands may become too large to hold in a single word
- Assume two large integers of sizes m and n are to be multiplied
- Classic algorithm: multiply each digit of one integer by each digit of the other
  - takes time proportional to m*n
- More efficient algorithms exist, e.g. divide-and-conquer:
  - takes time proportional to n*m^lg(3/2) ≈ n*m^0.59, where m is the size of the smaller integer
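
A sketch of the divide-and-conquer idea (Karatsuba's trick of doing three half-size multiplications instead of four, which is where the m^0.59-style exponent comes from); it works on Python integers split by decimal digits and is an illustration rather than the book's exact algorithm:

```python
def karatsuba(x, y):
    # Three recursive half-size multiplications instead of four.
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a, b = divmod(x, p)        # x = a*p + b
    c, d = divmod(y, p)        # y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    middle = karatsuba(a + b, c + d) - ac - bd   # = a*d + b*c
    return ac * p * p + middle * p + bd

print(karatsuba(981, 1234), 981 * 1234)   # both 1210554
```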

Calculating the Greatest Common Divisor
- Denoted gcd(m,n): the largest integer that divides both m and n exactly
  - gcd(6,15) = 3, gcd(10,21) = 1
- The naive gcd algorithm takes time in the order of n; Euclid's algorithm takes time in the order of log n

  function gcd(m, n)
      i = min(m, n) + 1
      repeat i = i - 1 until i divides both m and n exactly
      return i

  function Euclid(m, n)
      while m > 0 do
          t = m
          m = n mod m
          n = t
      return n
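
A direct, runnable translation of both pseudocode functions (a sketch; the naive version is only there to make the contrast in running time visible):

```python
def gcd_naive(m, n):
    # Count down from min(m, n): up to n trial divisions, so time in the order of n.
    # Assumes m, n >= 1.
    i = min(m, n)
    while m % i != 0 or n % i != 0:
        i -= 1
    return i

def gcd_euclid(m, n):
    # Euclid's algorithm: the operands shrink geometrically,
    # so the number of iterations is in the order of log n.
    while m > 0:
        m, n = n % m, m
    return n

print(gcd_naive(6, 15), gcd_euclid(6, 15))    # 3 3
print(gcd_naive(10, 21), gcd_euclid(10, 21))  # 1 1
```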

Calculating Fibonacci Sequences
- Fibonacci sequence: f_0 = 0; f_1 = 1; f_n = f_(n-1) + f_(n-2)
- The time of Fibrec is in the order of f_n, which grows exponentially with n; the time of Fibiter is in the order of n

  function Fibrec(n)
      if n < 2 then return n
      else return Fibrec(n-1) + Fibrec(n-2)

  function Fibiter(n)
      i = 1; j = 0
      for k = 1 to n do
          j = i + j
          i = j - i
      return j

  n        10        20        30       50        100
  Fibrec   8 ms      1 sec     2 min    21 days   10^9 years
  Fibiter  0.17 ms   0.33 ms   0.5 ms   0.75 ms   1.5 ms
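
Runnable Python versions of the two pseudocode functions (a direct transcription; the dramatic gap in the table comes from Fibrec recomputing the same subproblems exponentially many times):

```python
def fibrec(n):
    # The recursive definition taken literally: time grows like f_n itself.
    if n < 2:
        return n
    return fibrec(n - 1) + fibrec(n - 2)

def fibiter(n):
    # Iterative version: a single pass, time in the order of n.
    i, j = 1, 0
    for _ in range(n):
        j = i + j
        i = j - i
    return j

print([fibrec(k) for k in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fibiter(30), fibrec(30))          # both 832040
```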

Fourier Transforms
- One of the most useful algorithms in history
- Used in:
  - Optics
  - Acoustics
  - Quantum physics
  - Telecommunications
  - Systems theory
  - Signal processing
  - Speech processing
- Example: analysing data from the 1964 earthquake in Alaska
  - The classic algorithm takes 26 minutes of computation
  - A newer algorithm needs less than 2.5 seconds
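
The slide does not name the faster algorithm, but this kind of speed-up is what the fast Fourier transform (FFT) gives over the direct n^2 transform; a minimal recursive Cooley-Tukey sketch for inputs whose length is a power of two:

```python
import cmath

def fft(a):
    # Recursive Cooley-Tukey FFT: n*log n operations instead of the n^2
    # needed by the direct ("classic") discrete Fourier transform.
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

# Tiny check: the transform of a constant signal is an impulse.
print([round(abs(v), 6) for v in fft([1, 1, 1, 1])])   # [4.0, 0.0, 0.0, 0.0]
```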

End of Chapter 2: Efficiency of Algorithms