
Slide 1: COSC 3101A - Design and Analysis of Algorithms
Lecture 2: Asymptotic Notations Continued; Proof of Correctness: Loop Invariant; Designing Algorithms: Divide and Conquer

Slide 2: Typical Running Time Functions
– 1 (constant running time): instructions are executed once or a few times
– log N (logarithmic): a big problem is solved by cutting the original problem down by a constant fraction at each step
– N (linear): a small amount of processing is done on each input element
– N log N: a problem is solved by dividing it into smaller problems, solving them independently, and combining the solutions

Slide 3: Typical Running Time Functions (cont.)
– N² (quadratic): typical for algorithms that process all pairs of data items (doubly nested loops)
– N³ (cubic): processing of triples of data (triply nested loops)
– N^k (polynomial)
– 2^N (exponential): few exponential algorithms are appropriate for practical use
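
As an illustration (not part of the original slides), a minimal Python sketch of the loop structures behind these growth rates; the function names are hypothetical and the work inside each loop is assumed to take constant time:

def constant(a):          # O(1): a fixed number of operations
    return a[0] if a else None

def linear(a):            # O(n): touch each element once
    total = 0
    for x in a:
        total += x
    return total

def quadratic(a):         # O(n^2): all pairs (doubly nested loop)
    pairs = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            pairs += 1
    return pairs

def logarithmic(n):       # O(log n): halve the problem size each step
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps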

Slide 4: Logarithms
In algorithm analysis we often use the notation "log n" without specifying the base:
– Binary logarithm: lg n = log₂ n
– Natural logarithm: ln n = logₑ n
Any two logarithm bases differ only by a constant factor: log_a n = log_b n / log_b a, which is why the base can usually be left unspecified in asymptotic analysis.

Slide 5: Review: Asymptotic Notations (1)
– O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
– Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }
– Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }

Slide 6: Review: Asymptotic Notations (2)
f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n))

Slide 7: Review: Asymptotic Notations (3)
A way to describe the behavior of functions in the limit:
– How we indicate running times of algorithms
– Describe the running time of an algorithm as n grows to ∞
– O notation: asymptotic "less than": f(n) "≤" g(n)
– Ω notation: asymptotic "greater than": f(n) "≥" g(n)
– Θ notation: asymptotic "equality": f(n) "=" g(n)

Slide 8: Big-O Examples (1)
– 2n² = O(n³): 2n² ≤ cn³ ⇔ 2 ≤ cn; take c = 1 and n₀ = 2
– n² = O(n²): n² ≤ cn² ⇔ c ≥ 1; take c = 1 and n₀ = 1
– 1000n² + 1000n = O(n²): 1000n² + 1000n ≤ cn² ⇔ 1000n + 1000 ≤ cn; take c = 2000 and n₀ = 1
– n = O(n²): n ≤ cn² ⇔ cn ≥ 1; take c = 1 and n₀ = 1

Slide 9: Big-O Examples (2)
E.g.: prove that n² ≠ O(n)
– Assume there exist c and n₀ such that for all n ≥ n₀: n² ≤ cn
– Choose n = max(n₀, c) + 1
– Then n² = n·n > n·c, i.e., n² > cn, contradicting n² ≤ cn!

Slide 10: More on Asymptotic Notations
There is no unique set of values for n₀ and c in proving the asymptotic bounds.
Prove that 100n + 5 = O(n²):
– 100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5: n₀ = 5 and c = 101 is a solution
– 100n + 5 ≤ 100n + 5n = 105n ≤ 105n² for all n ≥ 1: n₀ = 1 and c = 105 is also a solution
We must find SOME constants c and n₀ that satisfy the asymptotic notation relation.
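
An aside not from the slides: constants found this way can be sanity-checked numerically. A minimal Python sketch, assuming we only want to spot-check the inequality over a finite range (a passing check is evidence, not a proof):

def check_big_o(f, g, c, n0, n_max=10_000):
    """Spot-check f(n) <= c*g(n) for all n0 <= n <= n_max."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# Both witness pairs from the slide satisfy 100n + 5 <= c * n^2:
print(check_big_o(lambda n: 100*n + 5, lambda n: n*n, c=101, n0=5))  # True
print(check_big_o(lambda n: 100*n + 5, lambda n: n*n, c=105, n0=1))  # True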

Slide 11: Big-Ω Examples
– 5n² = Ω(n): ∃ c, n₀ such that 0 ≤ cn ≤ 5n²; cn ≤ 5n² holds for c = 1 and n₀ = 1
– 100n + 5 ≠ Ω(n²): assume ∃ c, n₀ such that 0 ≤ cn² ≤ 100n + 5; since 100n + 5 ≤ 100n + 5n = 105n (∀ n ≥ 1), we get cn² ≤ 105n ⇒ n(cn − 105) ≤ 0; since n is positive, cn − 105 ≤ 0 ⇒ n ≤ 105/c, a contradiction: n cannot be smaller than a constant
– n = Ω(2n), n³ = Ω(n²), n = Ω(log n)

Slide 12: Θ Examples
– n²/2 − n/2 = Θ(n²):
  ½n² − ½n ≤ ½n² for all n ≥ 0 ⇒ c₂ = ½
  ½n² − ½n ≥ ½n² − (½n)(½n) = ¼n² for all n ≥ 2 (since ½n ≤ ¼n² when n ≥ 2) ⇒ c₁ = ¼
– n ≠ Θ(n²): c₁n² ≤ n ≤ c₂n² only holds for n ≤ 1/c₁
– 6n³ ≠ Θ(n²): c₁n² ≤ 6n³ ≤ c₂n² only holds for n ≤ c₂/6
– n ≠ Θ(log n): c₁ log n ≤ n ≤ c₂ log n would require c₂ ≥ n/log n for all n ≥ n₀, which is impossible

Slide 13: Comparisons of Functions
Theorem: f(n) = Θ(g(n)) ⇔ f(n) = O(g(n)) and f(n) = Ω(g(n))
Transitivity:
– f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))
– Same for O and Ω
Reflexivity:
– f(n) = Θ(f(n))
– Same for O and Ω
Symmetry:
– f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
Transpose symmetry:
– f(n) = O(g(n)) if and only if g(n) = Ω(f(n))

Slide 14: More Examples (1)
For each of the following pairs of functions, either f(n) is O(g(n)), f(n) is Ω(g(n)), or f(n) = Θ(g(n)). Determine which relationship is correct.
– f(n) = log n²; g(n) = log n + 5 → f(n) = Θ(g(n))
– f(n) = n; g(n) = log n² → f(n) = Ω(g(n))
– f(n) = log log n; g(n) = log n → f(n) = O(g(n))
– f(n) = n; g(n) = log² n → f(n) = Ω(g(n))
– f(n) = n log n + n; g(n) = log n → f(n) = Ω(g(n))
– f(n) = 10; g(n) = log 10 → f(n) = Θ(g(n))
– f(n) = 2ⁿ; g(n) = 10n² → f(n) = Ω(g(n))
– f(n) = 2ⁿ; g(n) = 3ⁿ → f(n) = O(g(n))

Slide 15: More Examples (2)
Θ notation:
– n²/2 − n/2 = Θ(n²)
– (6n³ + 1) lg n / (n + 1) = Θ(n² lg n)
– n vs. n²: n ≠ Θ(n²)
Ω notation:
– n vs. 2n: n = Ω(2n)
– n³ vs. n²: n³ = Ω(n²)
– n vs. log n: n = Ω(log n)
– n vs. n²: n ≠ Ω(n²)
O notation:
– 2n² vs. n³: 2n² = O(n³)
– n² vs. n²: n² = O(n²)
– n³ vs. n lg n: n³ ≠ O(n lg n)

Slide 16: Asymptotic Notations in Equations
On the right-hand side:
– Θ(n²) stands for some anonymous function in the set Θ(n²)
– 2n² + 3n + 1 = 2n² + Θ(n) means: there exists a function f(n) ∈ Θ(n) such that 2n² + 3n + 1 = 2n² + f(n)
On the left-hand side:
– 2n² + Θ(n) = Θ(n²)
– No matter how the anonymous function is chosen on the left-hand side, there is a way to choose the anonymous function on the right-hand side to make the equation valid.

Slide 17: Limits and Comparisons of Functions
Using limits for comparing orders of growth: compare ½n(n − 1) and n²
lim(n→∞) [½n(n − 1)] / n² = ½ lim(n→∞) (n² − n)/n² = ½ lim(n→∞) (1 − 1/n) = ½
The limit is a positive constant, so the two functions have the same order of growth: ½n(n − 1) = Θ(n²).

Slide 18: Limits and Comparisons of Functions
L'Hôpital's rule: if lim(n→∞) f(n) = lim(n→∞) g(n) = ∞ and the derivatives exist, then
lim(n→∞) f(n)/g(n) = lim(n→∞) f′(n)/g′(n)
This lets us compare functions whose ratio gives an ∞/∞ form.
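
As an illustration (the slide's own example functions were lost in transcription, so lg n vs. √n is an assumed pair), a small Python check of a limit of ratios using SymPy:

# Compare growth of an assumed pair of functions, lg n and sqrt(n),
# by taking the limit of their ratio as n -> infinity.
from sympy import symbols, log, sqrt, limit, oo

n = symbols('n', positive=True)
ratio = log(n, 2) / sqrt(n)
print(limit(ratio, n, oo))  # 0  =>  lg n grows strictly slower than sqrt(n)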

Slide 19: Loop Invariant
A loop invariant is a relation among program variables that
– is true when control enters a loop,
– remains true each time the program executes the body of the loop,
– and is still true when control exits the loop.
Understanding loop invariants can help us
– analyze algorithms,
– check for errors,
– and derive algorithms from specifications.

Slide 20: Proving Loop Invariants
Proving loop invariants works like induction:
Initialization (base case):
– The invariant is true prior to the first iteration of the loop
Maintenance (inductive step):
– If it is true before an iteration of the loop, it remains true before the next iteration
Termination:
– When the loop terminates, the invariant (usually along with the reason that the loop terminated) gives us a useful property that helps show that the algorithm is correct
– Unlike ordinary induction, we stop the induction when the loop terminates
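
To make this concrete (a sketch that is not from the original slides), here is a trivial summation loop with its invariant, "total equals the sum of the first i elements", checked via assertions at each of the three stages:

def running_sum(a):
    """Sum a list while asserting the loop invariant at each step."""
    total = 0
    i = 0
    # Initialization: total == sum(a[:0]) == 0 holds before the loop.
    assert total == sum(a[:i])
    while i < len(a):
        total += a[i]
        i += 1
        # Maintenance: the invariant is restored after each iteration.
        assert total == sum(a[:i])
    # Termination: i == len(a), so the invariant gives total == sum(a).
    return total

print(running_sum([3, 1, 4, 1, 5]))  # 14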

Slide 21: Loop Invariant for Insertion Sort (1)
Alg.: INSERTION-SORT(A)
  for j ← 2 to n
    do key ← A[j]
       ▹ Insert A[j] into the sorted sequence A[1..j−1]
       i ← j − 1
       while i > 0 and A[i] > key
         do A[i+1] ← A[i]
            i ← i − 1
       A[i+1] ← key
Invariant: at the start of each iteration of the for loop, the elements in A[1..j−1] are in sorted order.
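
A runnable Python version of the pseudocode above (0-indexed, so the invariant covers a[:j]; the assert line is an addition for checking the invariant, not part of the slide's algorithm):

def insertion_sort(a):
    """In-place insertion sort; mirrors the INSERTION-SORT pseudocode."""
    for j in range(1, len(a)):
        # Invariant: a[:j] is sorted at the start of each iteration.
        assert all(a[k] <= a[k + 1] for k in range(j - 1))
        key = a[j]
        i = j - 1
        # Shift larger elements one position right until key's spot is found.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]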

Slide 22: Loop Invariant for Insertion Sort (2)
Initialization:
– Just before the first iteration, j = 2: the subarray A[1..j−1] = A[1] (the single element originally in A[1]) is trivially sorted

Slide 23: Loop Invariant for Insertion Sort (3)
Maintenance:
– The inner while loop moves A[j−1], A[j−2], A[j−3], and so on, one position to the right until the proper position for key (which holds the value that started out in A[j]) is found
– At that point, the value of key is placed into this position

Slide 24: Loop Invariant for Insertion Sort (4)
Termination:
– The outer for loop ends when j > n, i.e., j = n + 1, so j − 1 = n
– Replacing j − 1 with n in the loop invariant: the subarray A[1..n] consists of the elements originally in A[1..n], but in sorted order
– The entire array is sorted!

Slide 25: Steps in Designing Algorithms (1)
1. Understand the problem
– Specify the range of inputs the algorithm should handle
2. Learn about the model of the implementation technology
– RAM (random-access machine), sequential execution
3. Choose between an exact and an approximate solution
– Some problems cannot be solved exactly: nonlinear equations, evaluating definite integrals
– Exact solutions may be unacceptably slow
4. Choose the appropriate data structures

Slide 26: Steps in Designing Algorithms (2)
5. Choose an algorithm design technique
– A general approach to solving problems algorithmically that is applicable to a variety of computational problems
– Provides guidance for developing solutions to new problems
6. Specify the algorithm
– Pseudocode: a mixture of natural and programming language
7. Prove the algorithm's correctness
– The algorithm yields the correct result for any legitimate input, in a finite amount of time
– Tools: mathematical induction, loop invariants

Slide 27: Steps in Designing Algorithms (3)
8. Analyze the algorithm
– Predict the amount of resources required: memory (how much space is needed?) and computational time (how fast does the algorithm run?)
– FACT: running time grows with the size of the input
– Input size (number of elements in the input): size of an array, polynomial degree, number of elements in a matrix, number of bits in the binary representation of the input, vertices and edges in a graph
– Def: running time = the number of primitive operations (steps) executed before termination: arithmetic operations (+, −, *), data movement, control, decision making (if, while), comparison

Slide 28: Steps in Designing Algorithms (4)
9. Code the algorithm
– Verify the ranges of the input
– Watch for efficient vs. inefficient implementation
– It is hard to prove the correctness of a program (typically done by testing)

Slide 29: Classification of Algorithms
By problem type:
– Sorting
– Searching
– String processing
– Graph problems
– Combinatorial problems
– Geometric problems
– Numerical problems
By design paradigm:
– Divide-and-conquer
– Incremental
– Dynamic programming
– Greedy algorithms
– Randomized/probabilistic

Slide 30: Divide-and-Conquer
Divide the problem into a number of subproblems
– Similar subproblems of smaller size
Conquer the subproblems
– Solve the subproblems recursively
– If a subproblem is small enough, solve it in a straightforward manner
Combine the solutions to the subproblems
– Obtain the solution for the original problem
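
A minimal generic skeleton of this paradigm in Python (an illustrative sketch, not from the slides; the is_base, solve_base, divide, and combine parameters are hypothetical names):

def divide_and_conquer(problem, is_base, solve_base, divide, combine):
    """Generic divide-and-conquer driver.

    is_base(p)    -> True if p is small enough to solve directly
    solve_base(p) -> solution for a base-case problem
    divide(p)     -> list of smaller subproblems
    combine(sols) -> solution assembled from the subproblem solutions
    """
    if is_base(problem):
        return solve_base(problem)
    subproblems = divide(problem)
    solutions = [divide_and_conquer(p, is_base, solve_base, divide, combine)
                 for p in subproblems]
    return combine(solutions)

# Example: summing a list by halving it.
print(divide_and_conquer(
    [3, 1, 4, 1, 5, 9],
    is_base=lambda p: len(p) <= 1,
    solve_base=lambda p: p[0] if p else 0,
    divide=lambda p: [p[:len(p)//2], p[len(p)//2:]],
    combine=sum))  # 23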

Slide 31: Merge Sort Approach
To sort an array A[p..r]:
Divide:
– Divide the n-element sequence to be sorted into two subsequences of n/2 elements each
Conquer:
– Sort the subsequences recursively using merge sort
– When the size of a sequence is 1, there is nothing more to do
Combine:
– Merge the two sorted subsequences

Slide 32: Merge Sort
Alg.: MERGE-SORT(A, p, r)
  if p < r                        ▹ check for base case
    then q ← ⌊(p + r)/2⌋          ▹ divide
         MERGE-SORT(A, p, q)      ▹ conquer
         MERGE-SORT(A, q + 1, r)  ▹ conquer
         MERGE(A, p, q, r)        ▹ combine
Initial call: MERGE-SORT(A, 1, n)
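
For reference, a self-contained Python sketch of the same algorithm (this version returns a new list and inlines the merge step; a MERGE matching the slide's sentinel pseudocode appears further below):

def merge_sort(a):
    """Sort a list with top-down merge sort; returns a new sorted list."""
    if len(a) <= 1:                       # base case
        return a
    q = len(a) // 2                       # divide
    left = merge_sort(a[:q])              # conquer
    right = merge_sort(a[q:])             # conquer
    out, i, j = [], 0, 0                  # combine: merge two sorted lists
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

print(merge_sort([6, 2, 3, 1, 7, 4, 2, 5]))  # [1, 2, 2, 3, 4, 5, 6, 7]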

Slide 33: Example – n a Power of 2
[Figure: divide phase of MERGE-SORT on the 8-element array A = (5, 2, 4, 7, 1, 3, 2, 6), with q = 4 at the top level, splitting recursively down to single elements.]

Slide 34: Example – n a Power of 2 (cont.)
[Figure: combine phase, merging the single elements back up to the fully sorted array (1, 2, 2, 3, 4, 5, 6, 7).]

Slide 35: Example – n Not a Power of 2
[Figure: divide phase of MERGE-SORT on the 11-element array A = (4, 7, 2, 6, 1, 4, 7, 3, 5, 2, 6), with q = 6 at the top level, then q = 3 and q = 9 below.]

Slide 36: Example – n Not a Power of 2 (cont.)
[Figure: combine phase, merging back up to the fully sorted 11-element array (1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 7).]

Slide 37: Merging
Input: array A and indices p, q, r such that p ≤ q < r
– Subarrays A[p..q] and A[q+1..r] are sorted
Output: one single sorted subarray A[p..r]

Slide 38: Merging
Idea for merging:
– Two piles of sorted cards
– Choose the smaller of the two top cards
– Remove it and place it in the output pile
– Repeat the process until one pile is empty
– Take the remaining input pile and place it face-down onto the output pile

Slide 39: Merge – Pseudocode
Alg.: MERGE(A, p, q, r)
  1. Compute n₁ = q − p + 1 and n₂ = r − q
  2. Copy the first n₁ elements into L[1..n₁ + 1] and the next n₂ elements into R[1..n₂ + 1]
  3. L[n₁ + 1] ← ∞; R[n₂ + 1] ← ∞
  4. i ← 1; j ← 1
  5. for k ← p to r
  6.   do if L[i] ≤ R[j]
  7.     then A[k] ← L[i]
  8.          i ← i + 1
  9.     else A[k] ← R[j]
 10.          j ← j + 1
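
The same procedure as runnable Python (a sketch using 0-based, inclusive indices; float('inf') plays the role of the ∞ sentinels):

def merge(a, p, q, r):
    """Merge sorted subarrays a[p..q] and a[q+1..r] (inclusive) in place,
    using the sentinel technique from the MERGE pseudocode."""
    inf = float('inf')                 # sentinel: larger than any key
    left = a[p:q + 1] + [inf]
    right = a[q + 1:r + 1] + [inf]
    i = j = 0
    for k in range(p, r + 1):
        # Copy the smaller of the two front elements back into a[k].
        if left[i] <= right[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1

a = [2, 4, 5, 7, 1, 2, 3, 6]
merge(a, 0, 3, 7)
print(a)  # [1, 2, 2, 3, 4, 5, 6, 7]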

Slides 40–44: Example: MERGE(A, 9, 12, 16)
[Figures: step-by-step trace of MERGE on the subarray A[9..16] (p = 9, q = 12, r = 16), copying the two halves into L and R with sentinels and merging them back into A until the merge is done.]

Slide 45: Running Time of Merge
Initialization (copying into temporary arrays):
– Θ(n₁ + n₂) = Θ(n)
Adding the elements to the final array (the last for loop):
– n iterations, each taking constant time ⇒ Θ(n)
Total time for Merge:
– Θ(n)

Slide 46: Analyzing Divide-and-Conquer Algorithms
The recurrence is based on the three steps of the paradigm:
– T(n): running time on a problem of size n
– Divide the problem into a subproblems, each of size n/b: takes D(n)
– Conquer (solve) the subproblems: takes a·T(n/b)
– Combine the solutions: takes C(n)

T(n) = Θ(1)                      if n ≤ c
T(n) = aT(n/b) + D(n) + C(n)     otherwise

Slide 47: MERGE-SORT Running Time
Divide:
– Compute q as the average of p and r: D(n) = Θ(1)
Conquer:
– Recursively solve 2 subproblems, each of size n/2: 2T(n/2)
Combine:
– MERGE on an n-element subarray takes Θ(n) time: C(n) = Θ(n)

T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1

Slide 48: Correctness of Merge Sort
Loop invariant (at the start of each iteration of the for loop):
– A[p..k−1] contains the k − p smallest elements of L[1..n₁ + 1] and R[1..n₂ + 1], in sorted order
– L[i] and R[j] are the smallest elements of their arrays not yet copied back into A

Slide 49: Proof of the Loop Invariant
Initialization:
– Prior to the first iteration: k = p, so the subarray A[p..k−1] is empty
– A[p..k−1] contains the k − p = 0 smallest elements of L and R
– L and R are sorted arrays (i = j = 1), so L[1] and R[1] are the smallest elements in L and R

Slide 50: Proof of the Loop Invariant
Maintenance:
– Assume L[i] ≤ R[j]; then L[i] is the smallest element not yet copied to A
– After copying L[i] into A[k], A[p..k] contains the k − p + 1 smallest elements of L and R
– Incrementing k (for loop) and i reestablishes the loop invariant

Slide 51: Proof of the Loop Invariant
Termination:
– At termination, k = r + 1
– By the loop invariant: A[p..k−1] = A[p..r] contains the k − p = r − p + 1 smallest elements of L and R, in sorted order
– This is exactly the number of elements to be sorted ⇒ MERGE(A, p, q, r) is correct

Slide 52: Readings
– Chapters 2.2 and 3
– Appendix A

