1
Introduction to Analysis of Algorithms CS342 S2004
2
How to assess an algorithm
– Correctness
– Ease of use
– Efficiency as a function of n, the number of data elements
  Time efficiency
  Memory or space (RAM) efficiency: less important today
3
Running time analysis
– Measure actual execution time: depends on the machine architecture (processor) and the operating system (hardware and software dependencies); not very useful.
– Tally the number of key computations (operations), such as comparisons or assignments, as a function of n, the number of data elements processed by the algorithm; call this count T(n). T(n) is independent of hardware and software.
– From T(n), one can easily obtain the Big-O estimate, O(g(n)).
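As a small illustration (a sketch, not from the slides; the function name countComparisons is hypothetical), the code below tallies the key operation of a sequential scan instead of timing it, producing a machine-independent count:

#include <cstddef>

// Hypothetical sketch: count the == comparisons performed by a sequential
// scan, so the result is a hardware-independent tally T(n) rather than a
// wall-clock time.
template <typename T>
std::size_t countComparisons(const T arr[], std::size_t n, const T& target)
{
    std::size_t comparisons = 0;
    for (std::size_t i = 0; i < n; i++)
    {
        comparisons++;                // one key operation about to execute
        if (arr[i] == target)
            break;                    // best case: 1; worst case: n
    }
    return comparisons;               // this tally plays the role of T(n)
}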
4
Examples
– Selection sort algorithm
  Key operations: comparison operations
  T(n) = (n−1) + (n−2) + … + 3 + 2 + 1 = n(n−1)/2 = n²/2 − n/2
– Sequential search algorithm
  Key operations: comparison operations
  T(n) = 1 (best case, not useful)
  T(n) = n/2 (average case)
  T(n) = n (worst case)
– Binary search algorithm
  Key operations: comparison operations
  T(n) = 2(1 + log₂n), since each iteration requires 2 comparisons (== and <)
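For reference, the closed form used for selection sort comes from the standard arithmetic-series identity (a worked step, not on the original slide):
(n−1) + (n−2) + … + 2 + 1 = n(n−1)/2,
since pairing the smallest and largest remaining terms gives (n−1)/2 pairs, each summing to n; expanding the product gives n²/2 − n/2.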
5
How to obtain Big-O from T(n) intuitively
– The Big-O notation describes the asymptotic behavior of T(n):
  drop insignificant terms as n becomes very large.
– Obtain the Big-O notation from T(n) by inspection:
  look for the dominant term in T(n) and drop the constant associated with it.
6
Examples
– Selection sort: T(n) = n²/2 − n/2; the dominant term is n²/2; dropping the 1/2 gives O(n²), so the algorithm is quadratic ("Big-O of n squared").
– Linear search: T(n) = n; the dominant term is n, so the algorithm has an efficiency of O(n) ("Big-O of n"); the algorithm is linear.
– Binary search: T(n) = 2(1 + log₂n); the dominant term is 2log₂n; dropping the 2 gives O(log₂n), so the algorithm is logarithmic ("Big-O of log₂n").
7
Some observations
– As n increases, so does execution time.
– Running times of practical algorithms typically grow with n as one of the following (from best to worst):
  Constant (does not depend on n)
  log₂n (logarithmic)
  n (linear)
  nlog₂n
  n² (quadratic)
  n³ (cubic)
  2ⁿ (exponential)
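A small program (a sketch, not part of the original slides) makes this ordering concrete by printing each growth function for a few values of n:

#include <cmath>
#include <cstdio>

// Hypothetical illustration: print the common growth functions side by side;
// the exponential column quickly dwarfs all the others.
int main()
{
    for (int n = 8; n <= 64; n *= 2)
    {
        double lg = std::log2(static_cast<double>(n));
        std::printf("n=%3d  log2 n=%4.1f  n log2 n=%7.1f  n^2=%6d  n^3=%8d  2^n=%.3e\n",
                    n, lg, n * lg, n * n, n * n * n, std::pow(2.0, n));
    }
    return 0;
}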
8
Formal definition of the Big-O notation
Given functions T(n) and g(n), we say that T(n) is O(g(n)) if there are positive constants c and n₀ such that
T(n) ≤ c·g(n) for n ≥ n₀
9
Example
T(n) = 2n + 10 is O(n)
2n + 10 ≤ cn
(c − 2)n ≥ 10
n ≥ 10/(c − 2)
Pick c = 3 and n₀ = 10 (there are infinitely many valid choices)
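The constants above can also be spot-checked mechanically; the short program below (not from the slides) verifies the inequality over a finite range, which is a sanity check rather than a proof:

#include <cassert>

// Spot check: with c = 3 and n0 = 10, T(n) = 2n + 10 satisfies
// T(n) <= c*n for every n tested.
int main()
{
    const long c = 3, n0 = 10;
    for (long n = n0; n <= 100000; n++)
        assert(2 * n + 10 <= c * n);
    return 0;
}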
10
Example
T(n) = 3n³ + 20n² + 5 is O(n³)
Need c > 0 and n₀ ≥ 1 such that 3n³ + 20n² + 5 ≤ cn³ for n ≥ n₀
This is true for c = 4 and n₀ = 21
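To see why these constants work (a worked step, not on the original slide): with c = 4 the requirement becomes
3n³ + 20n² + 5 ≤ 4n³, i.e., 20n² + 5 ≤ n³.
At n = 21, the left side is 20·441 + 5 = 8825 and the right side is 21³ = 9261, and the gap only widens as n grows, so n₀ = 21 suffices.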
11
Example
T(n) = 3log₂n + log₂log₂n is O(log₂n)
Need c > 0 and n₀ ≥ 1 such that 3log₂n + log₂log₂n ≤ c·log₂n for n ≥ n₀
This is true for c = 4 and n₀ = 2
12
Observations
– The Big-O notation gives an upper bound on the growth rate of a function.
– The statement "T(n) is O(g(n))" means that the growth rate of T(n) is no more than the growth rate of g(n).
– We can use the Big-O notation to rank functions according to their growth rates.
13
Big-O rules
– If T(n) is a polynomial of degree d, then T(n) is O(n^d), i.e.,
  drop lower-order terms
  drop constant factors
– Use the smallest possible class of functions:
  say "2n is O(n)" instead of "2n is O(n²)"
– Use the simplest expression of the class:
  say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"
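A short justification of the polynomial rule (a worked step, not on the original slide): for n ≥ 1 every lower-order term is dominated by n^d, so
T(n) = a_d·n^d + a_(d−1)·n^(d−1) + … + a_0 ≤ (|a_d| + |a_(d−1)| + … + |a_0|)·n^d for n ≥ 1,
which meets the formal definition with c = |a_d| + … + |a_0| and n₀ = 1.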
14
Math you need to know
– Summations
– Logarithms and exponents
  Properties of logarithms:
    log_b(xy) = log_b x + log_b y
    log_b(x/y) = log_b x − log_b y
    log_b(x^a) = a·log_b x
    log_b a = log_x a / log_x b
  Properties of exponentials:
    a^(b+c) = a^b · a^c
    a^(bc) = (a^b)^c
    a^b / a^c = a^(b−c)
    b = a^(log_a b)
    b^c = a^(c·log_a b)
– Proof techniques
– Basic probability
15
Relatives of Big-O
– big-Omega
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀
– big-Theta
  f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0 and an integer constant n₀ ≥ 1 such that c′·g(n) ≤ f(n) ≤ c″·g(n) for n ≥ n₀
– little-oh
  f(n) is o(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≤ c·g(n) for n ≥ n₀
– little-omega
  f(n) is ω(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n₀
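A small example of the big-Theta definition (not on the original slide): f(n) = 3n² + n is Θ(n²), because 3n² ≤ 3n² + n ≤ 4n² for all n ≥ 1, so the definition is satisfied with c′ = 3, c″ = 4, and n₀ = 1.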
16
Intuition for asymptotic notations
– Big-O: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
– big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
– big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
– little-oh: f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n)
– little-omega: f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n)
17
Selection sort: the C++ code

template <typename T>
void selectionSort(T arr[], int n)
{
    int smallIndex;   // index of smallest element in the sublist
    int pass, j;
    T temp;
    // pass has the range 0 to n-2
18
Selection sort: the C++ code, continued

    for (pass = 0; pass < n-1; pass++)      // loop n-1 times; =: 1 time; <: n-1 times; ++: n-2 times
    {
        smallIndex = pass;                  // scan the sublist starting at index pass; =: n-1 times

        // j traverses the sublist arr[pass+1] to arr[n-1]
        for (j = pass+1; j < n; j++)
            if (arr[j] < arr[smallIndex])   // update if smaller element found
                                            // < operation: T(n) = n(n-1)/2 times
                smallIndex = j;             // worst case: = operation T(n) = n(n-1)/2 times

        // if smallIndex and pass are not the same location,
        // exchange the smallest item in the sublist with arr[pass]
        if (smallIndex != pass)             // != operation: n-1 times
        {
            temp = arr[pass];               // worst case: = operation: n-1 times
            arr[pass] = arr[smallIndex];    // = operation: n-1 times
            arr[smallIndex] = temp;         // = operation: n-1 times
        }
    }
}
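A minimal driver (not part of the original slides) showing how the selectionSort template above might be called; the array contents are arbitrary:

#include <iostream>

// Hypothetical usage of the selectionSort template defined above.
int main()
{
    int data[] = { 29, 5, 17, 3, 42, 11 };
    const int n = 6;

    selectionSort(data, n);            // n-1 = 5 passes over the array
    for (int i = 0; i < n; i++)
        std::cout << data[i] << ' ';   // expected output: 3 5 11 17 29 42
    std::cout << '\n';
    return 0;
}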
19
Total instruction count and Big-O
– Key operations (in the doubly nested loop):
  < operations: T(n) = (n−1) + (n−2) + … + 3 + 2 + 1 = n(n−1)/2 = n²/2 − n/2
  Same for = operations (worst case): T(n) = n(n−1)/2
– Other operations:
  = operations: 1 + (n−1) + 3(n−1)
  < operations: n−1
  ++ operations: n−2
  != operations: n−1
  Total: 7(n−1)
– Assuming all instructions take the same time to execute, the total instruction count is
  T(n) = 2·n(n−1)/2 + 7(n−1) = n² + 6n − 7
– T(n) ≤ c·g(n) for n ≥ n₀; a possible solution: g(n) = n², c = 2, n₀ = 6; therefore T(n) is O(n²).
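A quick check of these constants (a worked step, not on the original slide): n² + 6n − 7 ≤ 2n² is equivalent to 6n − 7 ≤ n², which holds for every n ≥ 6 (at n = 6, 29 ≤ 36, and n² outgrows 6n − 7 from there on). Note that c must be greater than 1, since n² + 6n − 7 > n² for all n ≥ 2.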
20
C++ code for sequential (linear) search

template <typename T>
int seqSearch(const T arr[], int first, int last, const T& target)
{
    int i;

    // scan indices in the range first <= i < last
    for (i = first; i < last; i++)
        if (arr[i] == target)   // assume T has the "==" operator
            return i;           // immediately return on a match

    return last;                // return last if target is not found
}
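A brief usage sketch (not part of the original slides), assuming the seqSearch template above is in scope; the values are arbitrary, since a linear scan does not require sorted input:

#include <iostream>

// Hypothetical usage of seqSearch on an unsorted array.
int main()
{
    int data[] = { 7, 3, 9, 1, 5 };
    int pos = seqSearch(data, 0, 5, 9);                   // scans indices 0..4
    if (pos != 5)
        std::cout << "found at index " << pos << '\n';    // prints: found at index 2
    else
        std::cout << "not found\n";
    return 0;
}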
21
C++ code for binary search

template <typename T>
int binSearch(const T arr[], int first, int last, const T& target)
{
    int mid;                    // index of the midpoint
    T midValue;                 // object that is assigned arr[mid]
    int origLast = last;        // save original value of last; =: 1 time

    while (first < last)        // test for nonempty sublist; runs about m times, where 2^m = n, i.e. m = log2 n
    {
        mid = (first + last) / 2;
        midValue = arr[mid];
        if (target == midValue)
            return mid;         // have a match
        // determine which sublist to search
        else if (target < midValue)
            last = mid;         // search lower sublist: reset last
        else
            first = mid + 1;    // search upper sublist: reset first
    }

    return origLast;            // target not found
}
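A brief usage sketch (not part of the original slides), assuming the binSearch template above is in scope; the array must already be sorted in ascending order:

#include <iostream>

// Hypothetical usage of binSearch on a sorted array.
int main()
{
    int data[] = { 1, 3, 5, 7, 9, 11, 13, 15 };
    int pos = binSearch(data, 0, 8, 11);                  // about log2(8) = 3 iterations
    if (pos != 8)
        std::cout << "found at index " << pos << '\n';    // prints: found at index 5
    else
        std::cout << "not found\n";
    return 0;
}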