DISCRETE MATHEMATICS I: CHAPTER 11
Dr. Adam Anthony, Spring 2011
Some material adapted from lecture notes provided by Dr. Chungsim Han and Dr. Sam Lomonaco
Algorithms
What is an algorithm? An algorithm is a finite set of precise instructions for performing a computation or for solving a problem. This is a rather vague definition; you will see a more precise and mathematically useful one when you take CS 430. But this one is good enough for now.
Algorithms
Properties of algorithms:
- Input from a specified set
- Output from a specified set (the solution)
- Definiteness of every step in the computation
- Correctness of the output for every possible input
- Finiteness of the number of calculation steps
- Effectiveness of each calculation step
- Generality for a class of problems
Algorithm Examples
We will use pseudocode to specify algorithms; it is loosely reminiscent of Basic and Pascal.
Example: an algorithm that finds the maximum element in a finite sequence.

procedure max(a1, a2, …, an: integers)
  max := a1
  for i := 2 to n
    if max < ai then max := ai
{max is the largest element}
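For readers who want to experiment, here is a minimal Python translation of the pseudocode above (the function name find_max and the list-based interface are my own choices, not part of the slides):

def find_max(values):
    """Return the largest element of a non-empty list, scanning left to right."""
    largest = values[0]          # corresponds to max := a1
    for v in values[1:]:         # corresponds to for i := 2 to n
        if largest < v:          # if max < ai then max := ai
            largest = v
    return largest

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 9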
Algorithm Examples
Another example: a linear search algorithm, that is, an algorithm that linearly searches a sequence for a particular element.

procedure linear_search(x: integer; a1, a2, …, an: integers)
  i := 1
  while (i ≤ n and x ≠ ai)
    i := i + 1
  if i ≤ n then location := i
  else location := 0
{location is the subscript of the term that equals x, or is zero if x is not found}
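A Python sketch of the same idea, keeping the slide's convention of a 1-based location and 0 for "not found" (the function name linear_search and list interface are my own):

def linear_search(x, values):
    """Return the 1-based position of x in values, or 0 if x is not present."""
    i = 1
    while i <= len(values) and x != values[i - 1]:
        i += 1
    return i if i <= len(values) else 0

print(linear_search(33, [1, 25, 33, 47]))   # prints 3
print(linear_search(99, [1, 25, 33, 47]))   # prints 0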
Algorithm Examples
If the terms in a sequence are ordered, a binary search algorithm is more efficient than linear search. The binary search algorithm iteratively restricts the relevant search interval until it closes in on the position of the element to be located.
Algorithm Examples
[Illustration: binary search for the letter 'j' in the ordered sequence a c d f g h j l m o p r s u v x z. At each step the center element of the current search interval is compared with 'j' and the interval is halved, until 'j' is found.]
Algorithm Examples
procedure binary_search(x: integer; a1, a2, …, an: integers)
  i := 1   {i is the left endpoint of the search interval}
  j := n   {j is the right endpoint of the search interval}
  while (i < j)
  begin
    m := ⌊(i + j)/2⌋
    if x > am then i := m + 1
    else j := m
  end
  if x = ai then location := i
  else location := 0
{location is the subscript of the term that equals x, or is zero if x is not found}
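A Python sketch that mirrors the pseudocode (names and the list interface are my own; the search runs on the letter sequence from the illustration above):

def binary_search(x, values):
    """Return the 1-based position of x in a sorted list, or 0 if x is absent."""
    i, j = 1, len(values)            # left and right endpoints of the search interval
    while i < j:
        m = (i + j) // 2             # center element (integer division = floor)
        if x > values[m - 1]:
            i = m + 1
        else:
            j = m
    return i if values and x == values[i - 1] else 0

letters = list("acdfghjlmoprsuvxz")
print(binary_search("j", letters))   # prints 7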
Complexity
In general, we are not so much interested in the time and space complexity for small inputs. For example, while the difference in time complexity between linear and binary search is meaningless for a sequence with n = 10, it is gigantic for n = 2^30.
Measuring Complexity
An algorithm's cost can be measured in terms of the number of 'steps' it takes to complete the work. A step is anything that takes a fixed amount of time, regardless of the input size:
- Arithmetic operations
- Comparisons
- If/then statements
- Print statements
- Etc.
Some steps in reality take longer than others, but for our analysis we will see that this doesn't matter that much.
Complexity Functions
The number of 'steps' needed to execute an algorithm often depends on the size of the input:
- Linear_Search([1,25,33,47], 33) ~ 3 'steps'
- Linear_Search([1,16,19,21,28,33,64,72,81], 33) ~ 6 'steps'
We'll instead express the complexity of an algorithm as a function of the size of the input N:
- T(N) = N (for linear search)
- T(N) = N^2 (for other algorithms we'll see)
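A quick way to see T(N) = N for linear search is to count comparisons directly. This instrumented version (my own sketch, not from the slides) reports how many element comparisons were made:

def linear_search_counting(x, values):
    """Linear search that also returns the number of element comparisons made."""
    comparisons = 0
    for i, v in enumerate(values, start=1):
        comparisons += 1
        if v == x:
            return i, comparisons
    return 0, comparisons

print(linear_search_counting(33, [1, 25, 33, 47]))                      # (3, 3)
print(linear_search_counting(33, [1, 16, 19, 21, 28, 33, 64, 72, 81]))  # (6, 6)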
Example 1
Suppose an algorithm A requires 10^3 n^2 steps to process an input of size n. By what factor will the number of steps increase if the input size is doubled to 2n? Answer the same question for algorithm B, which requires 2n^3 steps for input size n. Answer the same question for both, but now increase the input size to 10n.
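One way to build intuition for this exercise is to compute the ratios numerically. The sketch below is my own illustration (only the step counts 10^3 n^2 and 2n^3 come from the slide); it prints the growth factor T(k*n)/T(n), which does not depend on n:

def steps_A(n):
    return 10**3 * n**2     # algorithm A: 10^3 * n^2 steps

def steps_B(n):
    return 2 * n**3         # algorithm B: 2 * n^3 steps

n = 1000
for k in (2, 10):
    print(f"A: T({k}n)/T(n) = {steps_A(k * n) / steps_A(n):.0f}")
    print(f"B: T({k}n)/T(n) = {steps_B(k * n) / steps_B(n):.0f}")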
Complexity
For example, let us assume two algorithms A and B that solve the same class of problems, both of which have an input size of n:
- The time complexity of A is T(A) = 5,000n
- The time complexity of B is T(B) = 1.1^n
Complexity
Comparison: time complexity of algorithms A and B

  Input size n | Algorithm A: 5,000n | Algorithm B: 1.1^n
  10           | 50,000              | 3
  100          | 500,000             | 13,781
  1,000        | 5,000,000           | 2.5 * 10^41
  1,000,000    | 5 * 10^9            | 4.8 * 10^41,392
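The table can be reproduced with a few lines of Python (my own sketch). Because 1.1^1,000,000 overflows ordinary floating point, the exponential column is computed via base-10 logarithms:

import math

for n in (10, 100, 1_000, 1_000_000):
    a = 5_000 * n                      # algorithm A: 5,000n steps
    log10_b = n * math.log10(1.1)      # log10 of algorithm B's 1.1^n steps
    exponent = math.floor(log10_b)
    mantissa = 10 ** (log10_b - exponent)
    print(f"n = {n:>9,}: A = {a:,}   B ~ {mantissa:.1f} * 10^{exponent:,}")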
Complexity
This means that algorithm B cannot be used for large inputs, while running algorithm A is still feasible. So what is important is the growth of the complexity functions. The growth of time and space complexity with increasing input size n is a suitable measure for the comparison of algorithms.
But wait, there's more!
Consider the following functions:

  n      | f(n) = n^2 + 4n + 20 | g(n) = n^2
  10     | 160                  | 100
  50     | 2,720                | 2,500
  100    | 10,420               | 10,000
  1,000  | 1,004,020            | 1,000,000
  10,000 | 100,040,020          | 100,000,000

The n^2 term dominates the rest of f(n)! Algorithm analysis typically involves finding out which term 'dominates' the algorithm's complexity.
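To see the dominance numerically, note that the ratio f(n)/g(n) tends to 1 as n grows. A small sketch of my own, using the functions from the table:

def f(n):
    return n**2 + 4 * n + 20

def g(n):
    return n**2

for n in (10, 100, 1_000, 10_000, 1_000_000):
    print(f"n = {n:>9,}: f(n)/g(n) = {f(n) / g(n):.6f}")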
Big-Oh
Definition: Let f and g be functions from the integers or the real numbers to the real numbers. We say that f(x) is O(g(x)) if there are constants B and b such that:
  |f(x)| ≤ B|g(x)| whenever x > b
In a sense, we are saying that f(x) is never greater than g(x), barring a constant factor.
Graphical Example (figure)
The Growth of Functions
When we analyze the growth of complexity functions, f(x) and g(x) are always positive. Therefore, we can simplify the big-O requirement to:
  f(x) ≤ B g(x) whenever x > b
If we want to show that f(x) is O(g(x)), we only need to find one such pair (B, b) (which is never unique).
The Growth of Functions
The idea behind the big-O notation is to establish an upper boundary for the growth of a function f(x) for large x. This boundary is specified by a function g(x) that is usually much simpler than f(x). We accept the constant B in the requirement f(x) ≤ B g(x) whenever x > b, because B does not grow with x. We are only interested in large x, so it is OK if f(x) > B g(x) for x ≤ b.
The Growth of Functions
Example: Show that f(x) = x^2 + 2x + 1 is O(x^2).
For x > 1 we have:
  x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2
  x^2 + 2x + 1 ≤ 4x^2
Therefore, for B = 4 and b = 1:
  f(x) ≤ Bx^2 whenever x > b
Hence f(x) is O(x^2).
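A quick numeric sanity check of the witnesses B = 4 and b = 1 (my own sketch; checking a few sample points is of course not a proof):

def f(x):
    return x**2 + 2 * x + 1

B, b = 4, 1
samples = [1.5, 2, 5, 10, 100, 10_000]
assert all(f(x) <= B * x**2 for x in samples if x > b)
print("f(x) <= 4*x^2 holds at all sampled x > 1")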
The Growth of Functions
Question: If f(x) is O(x^2), is it also O(x^3)?
Yes. x^3 grows faster than x^2, so x^3 also grows faster than f(x). Therefore, in practice we always try to find the smallest simple function g(x) for which f(x) is O(g(x)).
A Tighter Bound
Big-O complexity is important because it tells us about the worst case. But there's a best case too! Incorporating a lower bound, we get big-Theta notation: f(x) is Θ(g(x)) if there exist constants A, B, b such that
  A|g(x)| ≤ f(x) ≤ B|g(x)| for all real numbers x > b
In a sense, we are saying that f and g grow at exactly the same pace, barring constant factors.
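As an illustration (my own example, not from the slides): f(x) = 3x^2 + x is Θ(x^2) with witnesses A = 3, B = 4, b = 1, since 3x^2 ≤ 3x^2 + x ≤ 4x^2 for x > 1. A quick numeric check:

def f(x):
    return 3 * x**2 + x

A, B, b = 3, 4, 1
samples = [2, 5, 10, 1_000]
assert all(A * x**2 <= f(x) <= B * x**2 for x in samples if x > b)
print("3*x^2 <= f(x) <= 4*x^2 holds at all sampled x > 1")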
Graphical Example (figure)
A useful observation
Given two functions f(x) and g(x), let f(x) be O(g(x)) and g(x) be O(f(x)). Then f(x) is Θ(g(x)) and g(x) is Θ(f(x)). Work out on board.
Exercise 1
Let f(x) = 2x + 1 and g(x) = x. Show that f(x) is Θ(g(x)).
Useful Rules for Big-O
- If x > 1, then 1 < x < x^2 < x^3 < x^4 < …
- If f1(x) is O(g1(x)) and f2(x) is O(g2(x)), then (f1 + f2)(x) is O(max(g1(x), g2(x))).
- If f1(x) is O(g(x)) and f2(x) is O(g(x)), then (f1 + f2)(x) is O(g(x)).
- If f1(x) is O(g1(x)) and f2(x) is O(g2(x)), then (f1 f2)(x) is O(g1(x) g2(x)).
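As a small illustration of the sum and product rules (my own example: f1(x) = 3x^2 is O(x^2) and f2(x) = 5x log x is O(x log x), so the sum is O(x^2) and the product is O(x^3 log x)), the sketch below spot-checks assumed witnesses C = 16, b = 2 at a few points:

import math

def f1(x): return 3 * x**2              # O(x^2)
def f2(x): return 5 * x * math.log(x)   # O(x log x), natural log

C, b = 16, 2   # assumed witnesses for both checks below
samples = [3, 10, 100, 10_000]
assert all(f1(x) + f2(x) <= C * x**2 for x in samples if x > b)                 # sum:     O(x^2)
assert all(f1(x) * f2(x) <= C * x**3 * math.log(x) for x in samples if x > b)   # product: O(x^3 log x)
print("sum and product bounds hold at all sampled points")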
Exercise 2
Let f(x) = 7x^4 + 3x^3 + 5 and g(x) = x^4 for x ≥ 0. Prove that f(x) is O(g(x)).
Exercise 3
Prove that 2n^3 - 6n is O(n^3).
Prove that 3n^4 - 4n^2 + 7n - 2 is O(n^4).
Exercise 4
Let f(n) = n(n-1)/2 and g(n) = n^2 for n ≥ 0. Prove that f(n) is O(g(n)).
Polynomial Order Theorem
For any polynomial f(x) = a_n x^n + a_(n-1) x^(n-1) + … + a_0, where a_0, a_1, …, a_n are real numbers and a_n ≠ 0:
- f(x) is O(x^n)
- f(x) is Θ(x^n)
Some useful summation formulas
- 1 + 2 + 3 + … + n = n(n+1)/2
- 1^2 + 2^2 + … + n^2 = n(n+1)(2n+1)/6
- 1 + r + r^2 + … + r^n = (r^(n+1) - 1)/(r - 1), for r ≠ 1
Exercise 5
Use the polynomial order theorem to show that 1 + 2 + 3 + … + n is Θ(n^2).
The Growth of Functions
In practice, f(n) is the function we are analyzing and g(n) is the function that summarizes the growth of f(n).
'Popular' functions g(n) are: n log n, 1, 2^n, n^2, n!, n, n^3, log n
Listed from slowest to fastest growth:
  1, log n, n, n log n, n^2, n^3, 2^n, n!
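To get a feel for this ordering, the sketch below (my own illustration) evaluates each of the popular functions at a few values of n using the standard library:

import math

functions = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: math.log(n, 2)),
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log(n, 2)),
    ("n^2",     lambda n: n**2),
    ("n^3",     lambda n: n**3),
    ("2^n",     lambda n: 2**n),
    ("n!",      lambda n: math.factorial(n)),
]

for n in (10, 20, 30):
    print(f"n = {n}")
    for name, fn in functions:
        print(f"  {name:<8} {fn(n):,.0f}")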
The Growth of Functions
A problem that can be solved with polynomial worst-case complexity is called tractable. Problems of higher complexity are called intractable. Problems that no algorithm can solve are called unsolvable. You will find out more about this in CSC 430.
Analyzing a real algorithm
What does the following algorithm compute?

procedure who_knows(a1, a2, …, an: integers)
  m := 0
  for i := 1 to n-1
    for j := i + 1 to n
      if |ai - aj| > m then m := |ai - aj|
{m is the maximum difference between any two numbers in the input sequence}

Comparisons: (n-1) + (n-2) + (n-3) + … + 1 = (n-1)n/2 = 0.5n^2 - 0.5n
Time complexity is O(n^2).
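A direct Python translation of the nested-loop version (my own sketch; the name max_pairwise_diff is not from the slides), with a counter to confirm the (n-1)n/2 comparison count:

def max_pairwise_diff(values):
    """Return the maximum |ai - aj| over all pairs, plus the number of comparisons made."""
    m, comparisons = 0, 0
    n = len(values)
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1
            if abs(values[i] - values[j]) > m:
                m = abs(values[i] - values[j])
    return m, comparisons

print(max_pairwise_diff([7, 2, 9, 4]))   # (7, 6): 9 - 2 = 7, and (4-1)*4/2 = 6 comparisons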
Complexity Examples
Another algorithm solving the same problem:

procedure max_diff(a1, a2, …, an: integers)
  min := a1
  max := a1
  for i := 2 to n
    if ai < min then min := ai
    else if ai > max then max := ai
  m := max - min

Comparisons: 2n - 2
Time complexity is O(n).
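The same O(n) idea in Python (again my own sketch): track the running minimum and maximum in a single pass and subtract at the end:

def max_diff(values):
    """Return max(values) - min(values) using one pass over a non-empty list."""
    lo = hi = values[0]
    for v in values[1:]:
        if v < lo:
            lo = v
        elif v > hi:
            hi = v
    return hi - lo

print(max_diff([7, 2, 9, 4]))   # prints 7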
Logarithmic Orders
Logarithmic algorithms are highly desirable because logarithms grow slowly with respect to n. There are two classes that come up frequently:
- log n
- n log n
The base is usually unimportant (why?), but if you must have one, 2 is a safe bet.
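One way to see why the base does not matter (a standard fact, not spelled out on the slide): by the change-of-base formula, log_a(n) = log_b(n) / log_b(a), so logarithms in different bases differ only by a constant factor, and constant factors are absorbed by big-O. A quick numeric check:

import math

# log2(n) / log10(n) is the constant log2(10) ~ 3.3219 for every n
for n in (10, 1_000, 1_000_000):
    ratio = math.log(n, 2) / math.log(n, 10)
    print(f"n = {n:>9,}: log2(n)/log10(n) = {ratio:.4f}")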
Exercise 6
Prove that log(x) is O(x).
Prove that x is O(x log(x)).
Prove that x log(x) is O(x^2).
Exercise 7
Prove that 10x + 5x log(x) is Θ(x log(x)).
Exercise 8
Prove that ⌊log(n)⌋ is Θ(log(n)).
Binary Representation of Integers
Let f(n) = the number of binary digits needed to represent n, for a positive integer n. Find f(n) and its order. How many binary digits are needed to represent 325,561?
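A small experiment to support this exercise (my own sketch): count binary digits by repeatedly halving n, and compare against Python's built-in int.bit_length:

def binary_digits(n):
    """Count the binary digits of a positive integer by repeated halving."""
    count = 0
    while n > 0:
        n //= 2
        count += 1
    return count

print(binary_digits(1000), (1000).bit_length())   # both print 10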