1
Chapter 4: Divide-and-Conquer
A. Levitin, "Introduction to the Design & Analysis of Algorithms," 2nd ed.
Copyright © 2007 Pearson Addison-Wesley. All rights reserved.
2
Divide-and-Conquer
The most well-known algorithm design strategy:
1. Divide an instance of the problem into two or more smaller instances.
2. Solve the smaller instances recursively.
3. Obtain a solution to the original (larger) instance by combining these solutions.
3
Divide-and-Conquer Technique (cont.)
[Diagram: a problem of size n is divided into subproblem 1 and subproblem 2, each of size n/2; a solution is obtained for each subproblem, and the two solutions are combined into a solution to the original problem.]
4
Example
Consider the problem of sorting n numbers a_0, …, a_(n-1). Divide the problem into two subproblems:
1. Sort the first n/2 elements.
2. Sort the second n/2 elements.
Once we have two sorted lists, how do we combine them? What is the complexity of combining them? What have we gained? Are the subproblems solvable? Note: a list with a single element is always sorted.
5
When Divide and Conquer?
In general, we don't have to limit ourselves to dividing by 2; we can have as many subproblems per problem as we want, but generally we are satisfied with 2.
DnC is ideal for problems where each subproblem can be solved simultaneously by its own processor, i.e., parallel computing.
6
Divide-and-Conquer Examples
Sorting: mergesort and quicksort
Binary tree traversals
Binary search (?)
Multiplication of large integers
Matrix multiplication: Strassen's algorithm
Closest-pair and convex-hull algorithms
7
General Divide-and-Conquer Recurrence
n – instance size; b – number of subproblems the instance is divided into; n/b – size of each subproblem; a – number of subproblems that actually need to be solved.
T(n) = a T(n/b) + f(n), where f(n) ∈ Θ(n^d), d ≥ 0
Master Theorem:
If a < b^d, T(n) ∈ Θ(n^d)
If a = b^d, T(n) ∈ Θ(n^d log n)
If a > b^d, T(n) ∈ Θ(n^(log_b a))
Note: The same results hold with O instead of Θ.
Examples:
T(n) = 4T(n/2) + n ⇒ T(n) ∈ ?
T(n) = 4T(n/2) + n^2 ⇒ T(n) ∈ ?
T(n) = 4T(n/2) + n^3 ⇒ T(n) ∈ ?
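For reference (not part of the slide), working the three examples through the theorem, each with a = 4 and b = 2:

```latex
\begin{gather*}
T(n) = 4T(n/2) + n:\quad   a = 4 > b^d = 2^1 \;\Rightarrow\; T(n) \in \Theta(n^{\log_2 4}) = \Theta(n^2),\\
T(n) = 4T(n/2) + n^2:\quad a = 4 = b^d = 2^2 \;\Rightarrow\; T(n) \in \Theta(n^2 \log n),\\
T(n) = 4T(n/2) + n^3:\quad a = 4 < b^d = 2^3 \;\Rightarrow\; T(n) \in \Theta(n^3).
\end{gather*}
```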
8
Additional Example
DnC technique for adding n numbers: A(n) = 2A(n/2) + 1.
Here a = 2, b = 2, d = 0; hence a > b^d, so A(n) ∈ Θ(n^(log_b a)) = Θ(n^(log_2 2)) = Θ(n).
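As an illustration (not from the slides), a minimal Python sketch of this divide-and-conquer summation; the function name dc_sum is my own and the list is assumed non-empty:

```python
def dc_sum(a, lo=0, hi=None):
    """Sum a[lo:hi] by splitting the range in half and adding the two partial sums.
    The number of additions satisfies A(n) = 2A(n/2) + 1."""
    if hi is None:
        hi = len(a)
    if hi - lo == 1:          # base case: a single element, no additions needed
        return a[lo]
    mid = (lo + hi) // 2      # divide into two halves
    return dc_sum(a, lo, mid) + dc_sum(a, mid, hi)   # conquer, then combine with one addition

print(dc_sum([3, 1, 4, 1, 5, 9, 2, 6]))   # 31
```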
9
Mergesort
Split array A[0..n-1] into two about equal halves and make copies of each half in arrays B and C.
Sort arrays B and C recursively.
Merge the sorted arrays B and C into array A as follows:
– Repeat the following until no elements remain in one of the arrays:
  – compare the first elements in the remaining unprocessed portions of the arrays
  – copy the smaller of the two into A, while incrementing the index indicating the unprocessed portion of that array
– Once all elements in one of the arrays are processed, copy the remaining unprocessed elements from the other array into A.
10
Pseudocode of Mergesort
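The pseudocode figure from this slide is not reproduced in the transcript. A Python sketch of the routine described above (it relies on the merge helper sketched under the next slide):

```python
def mergesort(a):
    """Sort list a in place: copy its halves into B and C, sort them
    recursively, then merge the sorted halves back into a."""
    if len(a) <= 1:
        return
    mid = len(a) // 2
    b, c = a[:mid], a[mid:]     # copies of the two halves (arrays B and C)
    mergesort(b)
    mergesort(c)
    merge(b, c, a)              # merge the sorted halves back into A
```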
11
Pseudocode of Merge
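Again the original figure is missing; a Python sketch of the merging step described on the Mergesort slide, followed by a usage example:

```python
def merge(b, c, a):
    """Merge sorted lists b and c into a, where len(a) == len(b) + len(c)."""
    i = j = k = 0
    while i < len(b) and j < len(c):
        # copy the smaller front element and advance that list's index
        if b[i] <= c[j]:
            a[k] = b[i]
            i += 1
        else:
            a[k] = c[j]
            j += 1
        k += 1
    # one list is exhausted; copy the remaining elements of the other
    a[k:] = b[i:] if i < len(b) else c[j:]

data = [8, 3, 2, 9, 7, 1, 5, 4]
mergesort(data)
print(data)   # [1, 2, 3, 4, 5, 7, 8, 9]
```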
12
Mergesort Example
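The example figure is not reproduced here. As an illustration, sorting 8 3 2 9 7 1 5 4: the array splits into 8 3 2 9 and 7 1 5 4, then into 8 3 | 2 9 | 7 1 | 5 4, and finally into single elements; merging back up gives 3 8 | 2 9 | 1 7 | 4 5, then 2 3 8 9 | 1 4 5 7, and at the top level 1 2 3 4 5 7 8 9.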
13
Analysis of Mergesort
All cases have the same efficiency: Θ(n log n).
The number of comparisons in the worst case is close to the theoretical minimum for comparison-based sorting: ⌈log_2 n!⌉ ≈ n log_2 n − 1.44n.
Space requirement: Θ(n) (not in-place).
Can be implemented without recursion (bottom-up).
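For reference, the key-comparison recurrence behind the Θ(n log n) bound, with a = b = 2 and d = 1 in the Master Theorem:

```latex
C(n) = 2\,C(n/2) + C_{merge}(n), \qquad C_{merge}(n) \le n - 1, \qquad
a = 2 = b^d \;\Rightarrow\; C(n) \in \Theta(n^d \log n) = \Theta(n \log n).
```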
14
Quicksort
Select a pivot (partitioning element) – here, the first element.
Rearrange the list so that all the elements in the first s positions are smaller than or equal to the pivot and all the elements in the remaining n−s positions are larger than or equal to the pivot (see next slide for an algorithm):
A[i] ≤ p (first s positions) | p | A[i] ≥ p (remaining positions)
Exchange the pivot with the last element in the first (i.e., ≤) subarray – the pivot is now in its final position.
Sort the two subarrays recursively.
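A Python sketch of the overall routine (not the book's pseudocode); it uses the hoare_partition helper sketched under the next slide:

```python
def quicksort(a, l=0, r=None):
    """Sort a[l..r] in place: partition around a pivot, then sort the two subarrays."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = hoare_partition(a, l, r)   # pivot ends up in its final position s
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)
```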
15
Partitioning Algorithm
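The partitioning figure is missing from the transcript; a Python sketch of Hoare-style partitioning with the first element as pivot, as described on the previous slide, plus a usage example:

```python
def hoare_partition(a, l, r):
    """Partition a[l..r] around the pivot a[l]; return the pivot's final index."""
    p = a[l]
    i, j = l + 1, r
    while True:
        while i < r and a[i] < p:        # scan right for an element >= pivot
            i += 1
        while a[j] > p:                  # scan left for an element <= pivot
            j -= 1
        if i >= j:                       # scans crossed: partitioning is done
            break
        a[i], a[j] = a[j], a[i]          # exchange the out-of-place pair
        i += 1
        j -= 1
    a[l], a[j] = a[j], a[l]              # move the pivot into its final position
    return j

data = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(data)
print(data)   # [1, 2, 3, 4, 5, 7, 8, 9]
```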
16
Quicksort Example
5 3 1 9 8 2 4 7
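The worked figure for this example is not reproduced here. As an illustration, with the partitioning sketched above and 5 as the pivot, the first partition rearranges the array into 2 3 1 4 | 5 | 8 9 7 (the pivot lands in its final position, index 4); the subarrays 2 3 1 4 and 8 9 7 are then sorted recursively, giving 1 2 3 4 5 7 8 9.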
17
Analysis of Quicksort
Best case: split in the middle – Θ(n log n)
Worst case: already-sorted array – Θ(n^2)
Average case: random arrays – Θ(n log n)
Improvements:
– better pivot selection: median-of-three partitioning
– switch to insertion sort on small subfiles
– elimination of recursion
These combine to a 20–25% improvement.
Considered the method of choice for internal sorting of large files (n ≥ 10000).
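The recurrences behind the best- and worst-case bounds (the worst-case count follows the usual accounting of n + 1 comparisons on the first pass of an already-sorted array, then n, and so on down to 3):

```latex
C_{best}(n) = 2\,C_{best}(n/2) + n \;\Rightarrow\; C_{best}(n) \in \Theta(n \log n), \qquad
C_{worst}(n) = (n+1) + n + \cdots + 3 = \frac{(n+1)(n+2)}{2} - 3 \in \Theta(n^2).
```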
18
Binary Search
A very efficient algorithm for searching in a sorted array: compare K with the middle element A[m] of A[0..n-1].
If K = A[m], stop (successful search); otherwise, continue searching by the same method in A[0..m-1] if K < A[m] and in A[m+1..n-1] if K > A[m].
l ← 0; r ← n−1
while l ≤ r do
  m ← ⌊(l+r)/2⌋
  if K = A[m] return m
  else if K < A[m] r ← m−1
  else l ← m+1
return −1
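A direct Python rendering of the pseudocode above:

```python
def binary_search(a, k):
    """Return the index of k in sorted list a, or -1 if k is not present."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2           # middle index
        if k == a[m]:
            return m
        elif k < a[m]:
            r = m - 1              # continue in the left half
        else:
            l = m + 1              # continue in the right half
    return -1
```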
19
Example
Given the array 3, 14, 27, 31, 39, 42, 55, 70, 74, 81, 85, 93, 98, search for 70 with binary search.
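Working this through (0-based indices): the search first inspects m = 6 (A[6] = 55 < 70, so l = 7), then m = 9 (A[9] = 81 > 70, so r = 8), then m = 7 (A[7] = 70), returning index 7 after three three-way comparisons.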
20
Analysis of Binary Search
Time efficiency:
– Basic operation: comparison (a three-way comparison counted as one)
– Worst-case recurrence: C_w(n) = 1 + C_w(⌊n/2⌋), C_w(1) = 1; solution: C_w(n) = ⌈log_2(n+1)⌉
  This is VERY fast: e.g., C_w(10^6) = 20
Optimal for searching a sorted array
– Caveat: optimal for searching in general only among algorithms based on key comparisons
Limitations: requires a sorted array (not a linked list)
A bad (degenerate) example of divide-and-conquer: the subproblems for this algorithm don't ALL need to be solved (only one of them is).
Has a continuous counterpart, the bisection method, for solving equations in one unknown f(x) = 0 (see Sec. 12.4).
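Checking the quoted value:

```latex
C_w(10^6) = \lceil \log_2(10^6 + 1) \rceil = 20,
\quad\text{since}\quad 2^{19} = 524288 < 10^6 + 1 \le 2^{20} = 1048576.
```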