Chapter 12 Coping with the Limitations of Algorithm Power Copyright © 2007 Pearson Addison-Wesley. All rights reserved.

Tackling Difficult Combinatorial Problems
There are two principal approaches to tackling difficult combinatorial (NP-hard) problems:
- Use a strategy that guarantees solving the problem exactly but doesn't guarantee a solution in polynomial time
- Use an approximation algorithm that can find an approximate (suboptimal) solution in polynomial time

Exact Solution Strategies
- Exhaustive search (brute force): useful only for small instances
- Dynamic programming: applicable to some problems (e.g., the knapsack problem)
- Backtracking: eliminates some unnecessary cases from consideration; yields solutions in reasonable time for many instances, but the worst case is still exponential
- Branch-and-bound: further refines the backtracking idea for optimization problems

Backtracking
- Construct the state-space tree:
  - nodes: partial solutions
  - edges: choices in extending partial solutions
- Explore the state-space tree using depth-first search
- "Prune" nonpromising nodes: DFS stops exploring subtrees rooted at nodes that cannot lead to a solution and backtracks to such a node's parent to continue the search

Example: n-Queens Problem
Place n queens on an n-by-n chessboard so that no two of them are in the same row, column, or diagonal.

State-Space Tree of the 4-Queens Problem
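The backtracking scheme described above can be sketched in Python for the n-Queens problem (a minimal illustration, not from the slides): a column choice in the current row is promising only if it conflicts with no queen placed in an earlier row.

```python
def solve_n_queens(n):
    """Return one placement as a list: queens[i] = column of the queen in row i,
    or None if no placement exists."""
    queens = []  # partial solution: one column index per filled row

    def promising(col):
        row = len(queens)
        for r, c in enumerate(queens):
            # conflict: same column or same diagonal
            if c == col or abs(c - col) == row - r:
                return False
        return True

    def backtrack():
        if len(queens) == n:
            return True
        for col in range(n):
            if promising(col):        # prune nonpromising extensions
                queens.append(col)
                if backtrack():
                    return True
                queens.pop()          # backtrack to the parent node
        return False

    return queens if backtrack() else None
```

For n = 4 this finds the placement [1, 3, 0, 2], matching the state-space tree on this slide; for n = 2 and n = 3 it correctly reports that no solution exists.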

Example: Hamiltonian Circuit Problem

Branch-and-Bound
- An enhancement of backtracking
- Applicable to optimization problems
- For each node (partial solution) of a state-space tree, computes a bound on the value of the objective function for all descendants of the node (extensions of the partial solution)
- Uses the bound for:
  - ruling out certain nodes as "nonpromising" to prune the tree, if a node's bound is not better than the best solution seen so far
  - guiding the search through the state space

Example: Assignment Problem
Select one element in each row of the cost matrix C (rows: persons a-d; columns: jobs 1-4) so that:
- no two selected elements are in the same column
- the sum is minimized
Lower bound: any solution to this problem has total cost at least the sum of the smallest elements in each row (or, alternatively, the sum of the smallest elements in each column).

Example: First two levels of the state-space tree

Example (cont.)

Example: Complete state-space tree
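The branch-and-bound search over such a state-space tree can be sketched in Python (an illustrative best-first version, not from the slides): each node's lower bound is its cost so far plus the smallest available entry in every unassigned row, and nodes whose bound is not better than the best complete solution found so far are pruned.

```python
import heapq

def assign_min_cost(C):
    """Best-first branch-and-bound for the assignment problem.
    C[i][j] = cost of assigning person i to job j.
    Returns (minimum cost, list of job indices, one per person)."""
    n = len(C)

    def bound(level, used, cost):
        # lower bound: cost so far + smallest available entry in each remaining row
        b = cost
        for i in range(level, n):
            b += min(C[i][j] for j in range(n) if j not in used)
        return b

    best_cost, best_sol = float('inf'), None
    # priority queue of (bound, level, cost, partial assignment of columns)
    pq = [(bound(0, (), 0), 0, 0, ())]
    while pq:
        b, level, cost, partial = heapq.heappop(pq)
        if b >= best_cost:        # prune: bound not better than best solution so far
            continue
        if level == n:
            best_cost, best_sol = cost, list(partial)
            continue
        for j in range(n):
            if j not in partial:  # column j still free
                new_cost = cost + C[level][j]
                new_bound = bound(level + 1, partial + (j,), new_cost)
                if new_bound < best_cost:
                    heapq.heappush(pq, (new_bound, level + 1, new_cost, partial + (j,)))
    return best_cost, best_sol
```

On the hypothetical cost matrix [[9, 2, 7, 8], [6, 4, 3, 7], [5, 8, 1, 8], [7, 6, 9, 4]] it assigns a→2, b→1, c→3, d→4 for total cost 13.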

Example: Traveling Salesman Problem

Approximation Approach
Apply a fast (i.e., polynomial-time) approximation algorithm to get a solution that is not necessarily optimal but hopefully close to it.

Accuracy measures: the accuracy ratio of an approximate solution s_a is
- r(s_a) = f(s_a) / f(s*) for minimization problems
- r(s_a) = f(s*) / f(s_a) for maximization problems
where f(s_a) and f(s*) are the values of the objective function f for the approximate solution s_a and the actual optimal solution s*.

Performance ratio of the algorithm A: the lowest upper bound of r(s_a) over all instances.

Nearest-Neighbor Algorithm for TSP
Starting at some city, always go to the nearest unvisited city; after visiting all the cities, return to the starting one.

Example (four cities A, B, C, D):
s_a: A - B - C - D - A of length 10
s*: A - B - D - C - A of length 8

Note: the nearest-neighbor tour may depend on the starting city.
Accuracy: R_A = ∞ (unbounded above); make the length of AD arbitrarily large in the above example.
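The greedy rule above is short to code. A minimal Python sketch (not from the slides), using a distance matrix:

```python
def nearest_neighbor_tour(dist, start=0):
    """dist: symmetric matrix of pairwise distances.
    Returns (closed tour as a list of city indices, tour length)."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        # always go to the nearest unvisited city
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1)) + dist[tour[-1]][start]
    return tour + [start], length
```

With hypothetical distances AB = 1, BC = 2, CD = 1, AC = 3, BD = 3, AD = 6 (chosen to reproduce the slide's lengths), starting at A the algorithm returns the tour A-B-C-D-A of length 10, while the optimal tour A-B-D-C-A has length 8.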

Multifragment-Heuristic Algorithm
Stage 1: Sort the edges in nondecreasing order of weights. Initialize the set of tour edges to be constructed to the empty set.
Stage 2: Add the next edge on the sorted list to the tour, skipping those whose addition would create a vertex of degree 3 or a cycle of length less than n. Repeat this step until a tour of length n is obtained.

Note: R_A = ∞, but this algorithm tends to produce better tours than the nearest-neighbor algorithm.

Twice-Around-the-Tree Algorithm
Stage 1: Construct a minimum spanning tree of the graph (e.g., by Prim's or Kruskal's algorithm).
Stage 2: Starting at an arbitrary vertex, create a path that goes twice around the tree and returns to the same vertex.
Stage 3: Create a tour from the circuit constructed in Stage 2 by making shortcuts to avoid visiting intermediate vertices more than once.

Note: R_A = ∞ for general instances, but this algorithm tends to produce better tours than the nearest-neighbor algorithm.

Example
Walk: a - b - c - b - d - e - d - b - a
Tour: a - b - c - d - e - a
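The three stages above can be sketched in Python (illustrative only, not from the slides). A preorder walk of the MST visits the vertices in the same order as the "twice around the tree" walk with all shortcuts already taken, so the sketch builds the MST with Prim's algorithm and then returns the preorder as the tour.

```python
import heapq

def twice_around_the_tree(dist):
    """MST-based approximation for metric TSP: Prim's MST + preorder shortcut.
    Returns (closed tour as a list of vertex indices, tour length)."""
    n = len(dist)
    # Stage 1: Prim's algorithm for the minimum spanning tree, rooted at vertex 0
    in_tree = [False] * n
    children = {i: [] for i in range(n)}
    heap = [(0, 0, -1)]                  # (edge weight, vertex, parent; -1 = root)
    while heap:
        w, u, p = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        if p >= 0:
            children[p].append(u)
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(heap, (dist[u][v], v, u))
    # Stages 2-3: preorder walk of the tree = walk twice around it with shortcuts
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n - 1)) + dist[tour[-1]][tour[0]]
    return tour + [tour[0]], length
```

For a Euclidean instance the resulting tour is at most twice the optimal length (the bound quoted later on the Euclidean-instances slide).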

Christofides Algorithm
Stage 1: Construct a minimum spanning tree of the graph.
Stage 2: Add the edges of a minimum-weight matching of all the odd-degree vertices in the minimum spanning tree.
Stage 3: Find an Eulerian circuit of the multigraph obtained in Stage 2.
Stage 4: Create a tour from the circuit constructed in Stage 3 by making shortcuts to avoid visiting intermediate vertices more than once.

R_A = ∞ for general instances, but it tends to produce better tours than the twice-around-the-tree algorithm.

Example: Christofides Algorithm

Euclidean Instances
Theorem: If P ≠ NP, there exists no approximation algorithm for TSP with a finite performance ratio.

Definition: An instance of TSP is called Euclidean if its distances satisfy two conditions:
1. symmetry: d[i, j] = d[j, i] for any pair of cities i and j
2. triangle inequality: d[i, j] ≤ d[i, k] + d[k, j] for any cities i, j, k

For Euclidean instances, the ratio (approx. tour length / optimal tour length) is at most
- 0.5(⌈log2 n⌉ + 1) for the nearest-neighbor and multifragment heuristics
- 2 for twice-around-the-tree
- 1.5 for Christofides

Local Search Heuristics for TSP
Start with some initial tour (e.g., nearest neighbor). On each iteration, explore the current tour's neighborhood by exchanging a few edges in it. If the new tour is shorter, make it the current tour; otherwise consider another edge change. If no change yields a shorter tour, the current tour is returned as the output.

Example of a 2-change
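The 2-change neighborhood can be sketched as the classic 2-opt loop in Python (illustrative, not from the slides): removing edges (a, b) and (c, d) and adding (a, c) and (b, d) is equivalent to reversing the tour segment between b and c, and the move is kept only if it shortens the tour.

```python
def two_opt(tour, dist):
    """Improve an open list of cities (closing edge back to tour[0] is implicit)
    by repeated improving 2-changes until a local optimum is reached."""
    tour = tour[:]
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[(i + 1) % n]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:        # the two edges share a vertex: not a 2-change
                    continue
                # replace edges (a, b) and (c, d) by (a, c) and (b, d)?
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On the four-city instance used for the nearest-neighbor example (AB = 1, BC = 2, CD = 1, AC = 3, BD = 3, AD = 6), one 2-change turns the nearest-neighbor tour A-B-C-D-A (length 10) into the optimal tour A-B-D-C-A (length 8).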

Example of a 3-change

Empirical Data for Euclidean Instances

Greedy Algorithm for the Knapsack Problem
Step 1: Order the items in decreasing order of relative values: v1/w1 ≥ … ≥ vn/wn
Step 2: Select the items in this order, skipping those that don't fit into the knapsack.

Example: the knapsack's capacity is 16 (the table of item weights, values, and v/w ratios is omitted).

Accuracy:
- R_A is unbounded (e.g., n = 2, C = m, w1 = 1, v1 = 2, w2 = m, v2 = m)
- yields exact solutions for the continuous version
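The two steps above can be sketched in Python (a minimal illustration, not from the slides):

```python
def greedy_knapsack(items, capacity):
    """items: list of (value, weight) pairs.
    Step 1: order by value/weight ratio; Step 2: take items that still fit."""
    order = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value, total_weight, taken = 0, 0, []
    for value, weight in order:
        if total_weight + weight <= capacity:   # skip items that don't fit
            taken.append((value, weight))
            total_value += value
            total_weight += weight
    return total_value, taken
```

The slide's unboundedness example can be checked directly: with m = 100, items (v1 = 2, w1 = 1) and (v2 = 100, w2 = 100) and capacity C = 100, the greedy algorithm takes only the high-ratio item for value 2, while the optimal value is 100.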

Approximation Scheme for the Knapsack Problem
Step 1: Order the items in decreasing order of relative values: v1/w1 ≥ … ≥ vn/wn
Step 2: For a given integer parameter k, 0 ≤ k ≤ n, generate all subsets of k items or fewer, and for each of those that fit the knapsack, add the remaining items in decreasing order of their value-to-weight ratios.
Step 3: Find the most valuable subset among the subsets generated in Step 2 and return it as the algorithm's output.

- Accuracy: f(s*) / f(s_a) ≤ 1 + 1/k for any instance of size n
- Time efficiency: O(k n^(k+1))
- There are fully polynomial schemes: algorithms with polynomial running time as functions of both n and k

Bin-Packing Problem: First-Fit Algorithm
First-Fit (FF) Algorithm: Consider the items in the order given and place each item in the first available bin with enough room for it; if there are no such bins, start a new one.

Example: n = 4, s1 = 0.4, s2 = 0.2, s3 = 0.6, s4 = 0.7

Accuracy:
- The number of extra bins never exceeds the optimal by more than 70% (i.e., R_A ≤ 1.7)
- Empirical average-case behavior is much better. (In one experiment with 128,000 bins, the relative error was found to be no more than 2%.)

Bin Packing: First-Fit Decreasing Algorithm
First-Fit Decreasing (FFD) Algorithm: Sort the items in decreasing order (i.e., from largest to smallest), then proceed as above, placing each item in the first bin in which it fits and starting a new bin if there is none.

Example: n = 4, s1 = 0.4, s2 = 0.2, s3 = 0.6, s4 = 0.7

Accuracy:
- The number of extra bins never exceeds the optimal by more than 50% (i.e., R_A ≤ 1.5)
- Empirical average-case behavior is much better, too
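Both heuristics share the same placement loop, so one Python sketch covers FF and FFD (illustrative, not from the slides; bin capacity is taken to be 1.0 as in the slides' examples):

```python
def first_fit(sizes, decreasing=False):
    """Place each item in the first bin with room; open a new bin if none fits.
    With decreasing=True this is First-Fit Decreasing (FFD)."""
    if decreasing:
        sizes = sorted(sizes, reverse=True)
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= 1.0:
                b.append(s)
                break
        else:
            bins.append([s])   # no existing bin has room: start a new one
    return bins
```

On the slides' example (0.4, 0.2, 0.6, 0.7), FF uses three bins ([0.4, 0.2], [0.6], [0.7]) while FFD uses the optimal two ([0.7, 0.2], [0.6, 0.4]).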

Numerical Algorithms
Numerical algorithms are concerned with solving mathematical problems such as
- evaluating functions (e.g., √x, e^x, ln x, sin x)
- solving nonlinear equations
- finding extrema of functions
- computing definite integrals
Most such problems are of a "continuous" nature and can be solved only approximately.

Principal Accuracy Metrics
- Absolute error of approximation (of α* by α): |α − α*|
- Relative error of approximation (of α* by α): |α − α*| / |α*|
  - undefined for α* = 0
  - often quoted in %

Two Types of Errors
- Truncation errors
  - Taylor polynomial approximation: e^x ≈ 1 + x + x^2/2! + … + x^n/n!, with absolute error ≤ M |x|^(n+1) / (n+1)!, where M = max e^t for 0 ≤ t ≤ x
  - Composite trapezoidal rule: ∫[a,b] f(x) dx ≈ (h/2) [f(a) + 2 Σ(i=1..n−1) f(x_i) + f(b)], h = (b − a)/n, with absolute error ≤ (b − a) h^2 M2 / 12, where M2 = max |f″(x)| for a ≤ x ≤ b
- Round-off errors
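The composite trapezoidal rule above translates directly into code (a minimal Python sketch, not from the slides):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width h = (b - a)/n:
    (h/2) * [f(a) + 2*sum of interior values + f(b)]."""
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return h / 2 * s
```

For example, approximating ∫[0,1] x^2 dx = 1/3 with n = 1000 gives an error within the bound (b − a) h^2 M2 / 12 = 1 · (0.001)^2 · 2 / 12 ≈ 1.7 × 10^−7.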

Solving Quadratic Equations
Quadratic equation ax^2 + bx + c = 0 (a ≠ 0):
x_1,2 = (−b ± √D) / (2a), where D = b^2 − 4ac

Problems:
- computing the square root: use Newton's method x_(n+1) = 0.5 (x_n + D/x_n)
- subtractive cancellation: use alternative formulas (see p. 411); use double precision for D = b^2 − 4ac
- other problems (overflow, etc.)
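One standard alternative formula for avoiding subtractive cancellation (an illustrative Python sketch; the slides only point to it, the concrete form here is the usual textbook remedy): compute the larger-magnitude root without subtracting nearly equal quantities, then obtain the other root from the product x1 · x2 = c/a.

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, avoiding subtractive cancellation:
    q = -(b + sign(b)*sqrt(D))/2, roots q/a and c/q."""
    d = b * b - 4 * a * c
    if d < 0:
        return None                      # no real roots
    q = -(b + math.copysign(math.sqrt(d), b)) / 2
    x1 = q / a
    x2 = c / q if q != 0 else -b / (2 * a)   # q == 0 only for the double root 0
    return (x1, x2)
```

For x^2 − 3x + 2 = 0 this returns the roots 1 and 2; the benefit over the naive formula appears when b^2 ≫ 4ac and one root would otherwise lose most of its significant digits.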

Notes on Solving Nonlinear Equations
- There exist no formulas using arithmetic operations and root extractions for the roots of polynomials a_n x^n + a_(n−1) x^(n−1) + … + a_0 = 0 of degree n ≥ 5
- Although there exist special methods for approximating roots of polynomials, one can also use general methods for f(x) = 0
- A nonlinear equation f(x) = 0 can have one, many, infinitely many, or no roots at all
- Useful: sketch the graph of f(x); separate the roots

Three Classic Methods
Three classic methods for solving the nonlinear equation f(x) = 0 in one unknown:
- bisection method
- method of false position (regula falsi)
- Newton's method

Bisection Method
Based on:
- Theorem: If f(x) is continuous on a ≤ x ≤ b and f(a) and f(b) have opposite signs, then f(x) = 0 has a root on a < x < b
- the binary search idea
Approximations x_n are the middle points of shrinking segments:
- |x_n − x*| ≤ (b − a) / 2^n
- x_n always converges to the root x*, but more slowly than the other methods

Example of Bisection Method Application
Find the root of x^3 − x − 1 = 0 with absolute error not larger than 0.01. (The slide's iteration table of a_n, b_n, x_n, and f(x_n) is omitted.)
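The method can be sketched in Python and run on this very example (illustrative, not from the slides):

```python
def bisection(f, a, b, eps=0.01):
    """Find a root of f on [a, b], where f(a) and f(b) have opposite signs,
    with absolute error at most eps."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > eps:
        x = (a + b) / 2
        if f(a) * f(x) <= 0:
            b = x          # root lies in [a, x]
        else:
            a = x          # root lies in [x, b]
    return (a + b) / 2
```

For f(x) = x^3 − x − 1 on [1, 2], the returned approximation is within 0.01 of the true root x* ≈ 1.3247, as guaranteed by the bound |x_n − x*| ≤ (b − a)/2^n.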

Method of False Position
Similar to the bisection method, but uses the x-intercept of the line through (a, f(a)) and (b, f(b)) instead of the middle point of [a, b].
Approximations x_n are computed by the formula
x_n = [a_n f(b_n) − b_n f(a_n)] / [f(b_n) − f(a_n)]
Normally, x_n converges faster than the bisection-method sequence but more slowly than the Newton's-method sequence.

Newton's Method
A very fast method in which the x_n's are x-intercepts of tangent lines to the graph of f(x).
Approximations x_n are computed by the formula
x_(n+1) = x_n − f(x_n) / f′(x_n)

Notes on Newton's Method
- Normally, the approximations x_n converge to the root very fast, but they can diverge with a bad choice of the initial approximation x_0
- Yields a very fast method for computing square roots: x_(n+1) = 0.5 (x_n + D/x_n)
- Can be generalized to much more general equations
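Both points can be seen in a short Python sketch (illustrative, not from the slides): a generic Newton iteration, and the square-root special case, where applying it to f(x) = x^2 − D simplifies to exactly the update x_(n+1) = 0.5 (x_n + D/x_n) quoted above.

```python
def newton(f, df, x0, eps=1e-10, max_iter=100):
    """Newton's method: x_(n+1) = x_n - f(x_n)/f'(x_n), stopping when
    successive approximations differ by less than eps."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / df(x)
        if abs(x_next - x) < eps:
            return x_next
        x = x_next
    return x

def newton_sqrt(d, x0=1.0):
    """Square root via Newton's method on f(x) = x^2 - d; the update
    x - (x^2 - d)/(2x) simplifies to 0.5*(x + d/x)."""
    return newton(lambda x: x * x - d, lambda x: 2 * x, x0)
```

Starting from x_0 = 1, newton_sqrt(2) converges to √2 in a handful of iterations; a poor x_0 (e.g., near a point where f′(x_0) ≈ 0) illustrates the divergence caveat above.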