Chapter 3
Chapter Summary
Algorithms
o Example algorithms: searching for an element in a list; sorting a list so its elements are in some prescribed order
Growth of Functions
o Big-O and other notation
Complexity of Algorithms
Section 3.1
Section Summary
Properties of Algorithms
Searching Algorithms
Sorting Algorithms
Algorithms
Definition 1: An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.
Algorithms can be specified in different ways. Their steps can be described in English or in pseudocode.
o Pseudocode is an intermediate step between an English-language description of the steps and a coding of these steps using a programming language.
Algorithms
Example: Describe an algorithm for finding the maximum value in a finite sequence of integers.
Solution: Perform the following steps:
1. Set the temporary maximum equal to the first integer in the sequence.
2. Compare the next integer in the sequence to the temporary maximum. If it is larger than the temporary maximum, set the temporary maximum equal to this integer.
3. Repeat the previous step if there are more integers. If not, stop.
4. When the algorithm terminates, the temporary maximum is the largest integer in the sequence.
Finding the Maximum Element in a Finite Sequence
The algorithm in pseudocode:

procedure max(a₁, a₂, …, aₙ: integers)
max := a₁
for i := 2 to n
    if max < aᵢ then max := aᵢ
return max {max is the largest element}
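A direct Python translation of this pseudocode may make it concrete (a sketch; the function name `find_max` is ours):

```python
def find_max(seq):
    """Return the largest element of a non-empty sequence of integers."""
    maximum = seq[0]          # temporary maximum := first element
    for x in seq[1:]:         # compare each remaining element
        if maximum < x:
            maximum = x       # found a larger element; update the maximum
    return maximum
```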
Some Example Algorithm Problems
Two classes of problems will be studied in this section:
o Searching problems: finding the position of a particular element in a list.
o Sorting problems: putting the elements of a list into increasing order.
Searching Problems
The general searching problem is to locate an element x in a list of distinct elements a₁, a₂, …, aₙ, or determine that it is not in the list.
o The solution to a searching problem is the location of the term in the list that equals x (that is, i is the solution if x = aᵢ), or 0 if x is not in the list.
o For example, a library might want to check whether a patron is on a list of those with overdue books before allowing him or her to check out another book.
o We will study two searching algorithms: linear search and binary search.
Linear Search Algorithm
The linear search algorithm locates an item in a list by examining elements in the sequence one at a time, starting at the beginning.

procedure linear search(x: integer, a₁, a₂, …, aₙ: distinct integers)
i := 1
while (i ≤ n and x ≠ aᵢ)
    i := i + 1
if i ≤ n then location := i
else location := 0
return location {location is the subscript of the term that equals x, or is 0 if x is not found}
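As a sketch, the pseudocode translates to Python like this (0-based indexing internally, with a 1-based location returned to match the pseudocode's convention):

```python
def linear_search(x, a):
    """Return the 1-based position of x in list a, or 0 if x is absent."""
    i = 0
    while i < len(a) and x != a[i]:   # two comparisons per step, as in the pseudocode
        i += 1
    return i + 1 if i < len(a) else 0
```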
Binary Search
The binary search algorithm in pseudocode:

procedure binary search(x: integer, a₁, a₂, …, aₙ: increasing integers)
i := 1 {i is the left endpoint of the search interval}
j := n {j is the right endpoint of the search interval}
while i < j
    m := ⌊(i + j)/2⌋
    if x > aₘ then i := m + 1
    else j := m
if x = aᵢ then location := i
else location := 0
return location {location is the subscript i of the term aᵢ equal to x, or 0 if x is not found}
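A Python rendering of the same algorithm (a sketch; the 1-based interval endpoints mirror the pseudocode):

```python
def binary_search(x, a):
    """Return the 1-based position of x in the sorted list a, or 0 if absent."""
    i, j = 1, len(a)              # 1-based endpoints of the search interval
    while i < j:
        m = (i + j) // 2          # midpoint, rounded down
        if x > a[m - 1]:
            i = m + 1             # x must lie in the upper half
        else:
            j = m                 # x lies at position m or in the lower half
    return i if a and a[i - 1] == x else 0
```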
Search Algorithm #2: Binary Search
Basic idea: on each step, look at the middle element of the remaining list to eliminate half of it, and quickly zero in on the desired element.
Binary Search Example
Example: The steps taken by a binary search for 19 in the list
1 2 3 5 6 7 8 10 12 13 15 16 18 19 20 22
1. The list has 16 elements, so the midpoint is 8. The value in the 8th position is 10. Since 19 > 10, further search is restricted to positions 9 through 16.
2. The midpoint of the list (positions 9 through 16) is now the 12th position, with a value of 16. Since 19 > 16, further search is restricted to positions 13 through 16.
3. The midpoint of the current list is now the 14th position, with a value of 19. Since 19 ≯ 19, further search is restricted to the portion from the 13th through the 14th positions.
4. The midpoint of the current list is now the 13th position, with a value of 18. Since 19 > 18, search is restricted to the 14th position alone.
5. Now the list has a single element and the loop ends. Since 19 = 19, the location 14 is returned.
Sorting
To sort the elements of a list is to put them in increasing order (numerical order, alphabetical order, and so on). Sorting is an important problem because:
o A nontrivial percentage of all computing resources is devoted to sorting different kinds of lists, especially in applications involving large databases of information that must be presented in a particular order (e.g., by customer, part number, etc.).
o An amazing number of fundamentally different algorithms have been invented for sorting. Their relative advantages and disadvantages have been studied extensively.
A variety of sorting algorithms are studied in this book: insertion, binary insertion, bubble, selection, merge, quick, and tournament sort.
Bubble Sort
Bubble sort makes multiple passes through a list. Every pair of adjacent elements found to be out of order is interchanged.
http://en.wikipedia.org/wiki/Bubble_sort#mediaviewer/File:Bubble-sort-example-300px.gif
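A minimal Python sketch of bubble sort as described (the in-place, adjacent-swap version):

```python
def bubble_sort(a):
    """Sort list a in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(a)
    for i in range(n - 1):                  # after pass i, the i+1 largest are in place
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:             # adjacent pair out of order: swap
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```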
Insertion Sort
Insertion sort begins with the 2nd element. It compares the 2nd element with the 1st and puts it before the first if it is not larger. It then inserts each subsequent element into its correct position among the elements already sorted.
{3, 7, 4, 9, 5, 2, 6, 1}
1. 3 7 4 9 5 2 6 1
2. 3 7 4 9 5 2 6 1
3. 3 4 7 9 5 2 6 1
4. 3 4 7 9 5 2 6 1
5. 3 4 5 7 9 2 6 1
6. 2 3 4 5 7 9 6 1
7. 2 3 4 5 6 7 9 1
8. 1 2 3 4 5 6 7 9
http://en.wikipedia.org/wiki/Insertion_sort#mediaviewer/File:Insertion-sort-example-300px.gif
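A Python sketch of insertion sort matching the trace above:

```python
def insertion_sort(a):
    """Sort list a in place; element i is inserted into the sorted prefix a[0:i]."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:     # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                   # drop key into its correct position
    return a
```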
Section 3.2
Section Summary
Big-O Notation
Big-O Estimates for Important Functions
Big-Omega and Big-Theta Notation
How Do We Analyze Algorithms?
We need to define a number of objective measures.
(1) Compare execution times? Not good: times are specific to a particular computer.
(2) Count the number of statements executed? Not good: the number of statements varies with the programming language as well as the style of the individual programmer.
Example (number of statements)
Algorithm 1:
arr[0] = 0;
arr[1] = 0;
arr[2] = 0;
...
arr[N-1] = 0;

Algorithm 2:
for (i = 0; i < N; i++)
    arr[i] = 0;
How Do We Analyze Algorithms?
(3) Express running time as a function of the input size n (i.e., f(n)).
o To compare two algorithms with running times f(n) and g(n), we need a rough measure of how fast a function grows.
o Such an analysis is independent of machine time, programming style, etc.
Comparing Functions Using Rate of Growth
Consider the example of buying elephants and goldfish:
Cost: cost_of_elephants + cost_of_goldfish
Cost ~ cost_of_elephants (approximation)
The low-order terms in a function are relatively insignificant for large n:
n⁴ + 100n² + 10n + 50 ~ n⁴
i.e., n⁴ + 100n² + 10n + 50 and n⁴ have the same rate of growth.
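A quick numeric check of this claim, using the polynomial above: the ratio of the full function to its dominant term tends to 1 as n grows.

```python
def ratio(n):
    """Ratio of n**4 + 100*n**2 + 10*n + 50 to its dominant term n**4."""
    return (n**4 + 100 * n**2 + 10 * n + 50) / n**4

# For small n the low-order terms matter; for large n the ratio approaches 1.
```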
Rate of Growth ≡ Asymptotic Analysis
Using rate of growth as a measure to compare different functions implies comparing them asymptotically.
If f(x) is faster growing than g(x), then f(x) eventually becomes larger than g(x) for large enough values of x.
Example
Suppose you are designing a web site to process user data (e.g., financial records). Suppose program A takes f_A(n) = 30n + 8 microseconds to process any n records, while program B takes f_B(n) = n² + 1 microseconds to process the n records.
Which program would you choose, knowing you'll want to support millions of users?
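A small sketch comparing the two running-time functions from the slide:

```python
def f_A(n):
    return 30 * n + 8      # program A: linear running time

def f_B(n):
    return n * n + 1       # program B: quadratic running time

# For small n, program B is faster; past the crossover (around n = 30),
# program A wins, and for millions of records the gap is enormous.
```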
Visualizing Orders of Growth
On a graph, as you go to the right (increasing n), a faster-growing function eventually becomes larger: f_B(n) = n² + 1 eventually exceeds f_A(n) = 30n + 8.
Big-O Notation
Definition 1: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k.
o This is read as "f(x) is big-O of g(x)" or "g asymptotically dominates f."
o The constants C and k are called witnesses to the relationship f(x) is O(g(x)). Only one pair of witnesses is needed.
Using the Definition of Big-O Notation
Example: Show that f(x) = x² + 2x + 1 is O(x²).
Solution: Since x < x² and 1 < x² when x > 1, we have x² + 2x + 1 ≤ x² + 2x² + x² = 4x² when x > 1.
o We can take C = 4 and k = 1 as witnesses to show that f(x) is O(x²).
Alternatively, when x > 2, we have 2x ≤ x² and 1 < x², so x² + 2x + 1 ≤ 3x² when x > 2.
o We can take C = 3 and k = 2 as witnesses instead.
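For the classic example f(x) = x² + 2x + 1 is O(x²), a finite numeric spot check of big-O witnesses can build intuition (a check over sample points, not a proof; the helper `bounded_by` is ours):

```python
def f(x):
    """The example function f(x) = x**2 + 2*x + 1."""
    return x**2 + 2 * x + 1

def bounded_by(C, k, xs):
    """Spot-check that f(x) <= C*x**2 holds at every sample point x > k."""
    return all(f(x) <= C * x * x for x in xs if x > k)
```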
Using the Definition of Big-O Notation
Example: Show that 7x² is O(x³).
Solution:
o When x > 7, 7x² < x³. Take C = 1 and k = 7 as witnesses to establish that 7x² is O(x³).
o (Would C = 7 and k = 1 work?)
Big-O Notation
If f(x) is O(g(x)) and g(x) is O(f(x)), we say that the two functions are of the same order.
If f(x) is O(g(x)) and |h(x)| > |g(x)| for all x, then f(x) is O(h(x)). Note that if |f(x)| ≤ C|g(x)| for x > k and |h(x)| > |g(x)| for all x, then |f(x)| ≤ C|h(x)| if x > k. Hence, f(x) is O(h(x)).
For many applications, the goal is to select the function g(x) in O(g(x)) as small as possible (up to multiplication by a constant, of course).
Display of Growth of Functions
Note the difference in behavior of functions as n gets larger.
Big-Omega Notation
Definition 2: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is Ω(g(x)) if there are constants C and k such that |f(x)| ≥ C|g(x)| when x > k. We say that "f(x) is big-Omega of g(x)."
Big-O gives an upper bound on the growth of a function, while big-Omega gives a lower bound. Big-Omega tells us that a function grows at least as fast as another.
f(x) is Ω(g(x)) if and only if g(x) is O(f(x)). This follows from the definitions.
Big-Omega Notation
Example: Show that f(x) = 8x³ + 5x² + 7 is Ω(g(x)), where g(x) = x³.
Solution: 8x³ + 5x² + 7 ≥ 8x³ ≥ x³ for all positive real numbers x, so we can take C = 1 and k = 0 as witnesses.
Is it also the case that g(x) = x³ is Ω(8x³ + 5x² + 7)?
Big-Theta Notation
Definition 3: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. The function f(x) is Θ(g(x)) if f(x) is O(g(x)) and f(x) is Ω(g(x)).
We say that "f is big-Theta of g(x)," that "f(x) is of order g(x)," and that "f(x) and g(x) are of the same order."
f(x) is Θ(g(x)) if and only if there exist constants C₁, C₂, and k such that C₁|g(x)| ≤ |f(x)| ≤ C₂|g(x)| whenever x > k. This follows from the definitions of big-O and big-Omega.
Big-Theta Notation
Example: Show that f(x) = 3x² + 8x log x is Θ(x²).
Solution: Since 0 ≤ 8x log x ≤ 8x² for x > 1, we have 3x² + 8x log x ≤ 11x² for x > 1. Hence, 3x² + 8x log x is O(x²).
Clearly, x² is O(3x² + 8x log x).
Hence, 3x² + 8x log x is Θ(x²).
Section 3.3
The Complexity of Algorithms
Time complexity: the time the algorithm uses to solve the problem given input of a particular size.
Space complexity: the computer memory the algorithm uses to solve the problem given input of a particular size.
We measure time complexity in terms of the number of operations an algorithm uses, and we use this measure to
o estimate the time complexity with big-O and big-Theta notation,
o decide whether it is practical to use the algorithm to solve problems with input of a particular size, and
o compare the efficiency of different algorithms for solving the same problem.
The Complexity of Algorithms
To analyze the time complexity of algorithms, we determine the number of operations, such as comparisons and arithmetic operations (addition, multiplication, etc.).
We focus on the worst-case time complexity of an algorithm, which gives an upper bound on the number of operations needed for any input of a given size.
It is usually much more difficult to determine the average-case time complexity of an algorithm.
Complexity Analysis of Algorithms
Example: Describe the time complexity of the algorithm for finding the maximum element in a finite sequence.

procedure max(a₁, a₂, …, aₙ: integers)
max := a₁
for i := 2 to n
    if max < aᵢ then max := aᵢ
return max {max is the largest element}

Solution: Count the number of comparisons.
o The comparison max < aᵢ is made n − 1 times.
o Each time i is incremented, a test is made to see if i ≤ n.
o One last comparison determines that i > n.
o Exactly 2(n − 1) + 1 = 2n − 1 comparisons are made. Hence, the time complexity of the algorithm is Θ(n).
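The comparison count can be mirrored in Python (a sketch that instruments the loop; the 2n − 1 total counts the n − 1 value comparisons, the n − 1 successful loop tests, and the one failing loop test):

```python
def max_with_count(a):
    """Find the maximum of a non-empty list while counting comparisons."""
    n = len(a)
    comparisons = 0
    maximum = a[0]
    i = 2
    while True:
        comparisons += 1             # the loop test i <= n
        if i > n:
            break                    # the one failing test ends the loop
        comparisons += 1             # the comparison max < a_i
        if maximum < a[i - 1]:
            maximum = a[i - 1]
        i += 1
    return maximum, comparisons      # comparisons == 2*n - 1
```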
Worst-Case Complexity of Linear Search
Example: Determine the worst-case time complexity of the linear search algorithm.

procedure linear search(x: integer, a₁, a₂, …, aₙ: distinct integers)
i := 1
while (i ≤ n and x ≠ aᵢ)
    i := i + 1
if i ≤ n then location := i
else location := 0
return location {location is the subscript of the term that equals x, or is 0 if x is not found}

Solution: Count the number of comparisons.
o At each step of the loop, two comparisons are made: i ≤ n and x ≠ aᵢ.
o In the worst case, x is not in the list: 2n comparisons are made inside the loop, and one additional comparison i ≤ n is used to exit the loop, for 2n + 1 comparisons.
o After the loop, one more comparison i ≤ n is made, so in the worst case 2n + 2 comparisons are made.
Hence, the worst-case complexity is Θ(n).
Understanding the Complexity of Algorithms
(In the accompanying table of computation times, not reproduced here, times of more than 10¹⁰⁰ years are indicated with an *.)