Analysis & Design of Algorithms (CSCE 321)


Analysis & Design of Algorithms (CSCE 321) Prof. Amr Goneid, Department of Computer Science, AUC. Part 10. Dynamic Programming

Dynamic Programming

Dynamic Programming
1. Introduction
2. What is Dynamic Programming?
3. How To Devise a Dynamic Programming Approach
4. The Sum of Subset Problem
5. The Knapsack Problem
6. Minimum Cost Path
7. Coin Change Problem
8. Optimal BST
9. DP Algorithms in Graph Problems
10. Comparison with Greedy and D&Q Methods

1. Introduction We have seen that, sometimes, the divide and conquer approach seems appropriate but fails to produce an efficient algorithm. One of the reasons is that D&Q produces overlapping subproblems.

Introduction Solution: buy speed using space; store previous instances to compute the current instance. Instead of dividing the large problem into two (or more) smaller problems and solving those (as we did in the divide and conquer approach), we start with the simplest possible problems, solve them (usually trivially), and save the results. These results are then used to solve slightly larger problems, which are in turn saved and used to solve still larger problems. This method is called Dynamic Programming.

2. What is Dynamic Programming? An algorithm design method used when the solution is the result of a sequence of decisions (e.g. Knapsack, Optimal Search Trees, Shortest Path, etc.). It makes decisions one at a time, never makes an erroneous decision, and solves a sub-problem by making use of previously stored solutions to other sub-problems.

Dynamic Programming Invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems. “Programming” here means “planning”.

When is Dynamic Programming Applicable? Two main properties of a problem suggest that it can be solved using dynamic programming: Overlapping Subproblems and Optimal Substructure.

Overlapping Subproblems Like Divide and Conquer, Dynamic Programming combines solutions to subproblems. Dynamic Programming is mainly used when solutions of the same subproblems are needed again and again. Examples are computing the Fibonacci Sequence, Binomial Coefficients, etc.

Optimal Substructure: Principle of Optimality Dynamic programming uses the Principle of Optimality to avoid non-optimal decision sequences. For an optimal sequence of decisions, the remaining decisions must constitute an optimal sequence. Example: Shortest Path. Find the shortest path from vertex (i) to vertex (j).

Principle of Optimality Let k be an intermediate vertex on a shortest i-to-j path i, a, b, …, k, l, m, …, j. Then the sub-path i, a, b, …, k must be a shortest i-to-k path, and the sub-path k, l, m, …, j must be a shortest k-to-j path. (Figure: a shortest i-to-j path passing through the intermediate vertex k.)

3. How To Devise a Dynamic Programming Approach Given a problem that is solvable by a Divide & Conquer method:
Prepare a table to store results of sub-problems.
Replace the base case by filling the start of the table.
Replace recursive calls by table lookups.
Devise for-loops to fill the table with sub-problem solutions instead of returning values.
The solution is at the end of the table.
Notice that previous table locations also contain valid (optimal) sub-problem solutions.

Example(1): Fibonacci Sequence The recursion call graph for Fibonacci is not a tree, indicating overlapping subproblems. Optimal Substructure: if F(n-2) and F(n-1) are optimal, then F(n) = F(n-2) + F(n-1) is optimal.

Fibonacci Sequence Dynamic Programming Solution: buy speed with space, using a table F[n]. Store previous instances to compute the current instance.

Fibonacci Sequence
Divide & Conquer (recursive):
Fib(n): if (n < 2) return 1; else return Fib(n-1) + Fib(n-2)
Dynamic Programming (table F[n]):
F[0] = F[1] = 1;
if (n >= 2)
  for i = 2 to n
    F[i] = F[i-1] + F[i-2];
return F[n];
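As a concrete illustration, here is a minimal compilable C++ sketch of the bottom-up table fill described above (function and variable names are ours, not from the slides):
#include <vector>
// Bottom-up Fibonacci with the slide's convention F[0] = F[1] = 1
long long fib(int n) {
    std::vector<long long> F(n + 2, 1);        // table F[0..n], pre-filled with the base case 1
    for (int i = 2; i <= n; i++)
        F[i] = F[i - 1] + F[i - 2];            // table lookups replace the recursive calls
    return F[n];
}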

Fibonacci Sequence Dynamic Programming Solution: space complexity is O(n); time complexity is T(n) = O(n).

Example(2): Counting Combinations Overlapping Subproblems: the recursion tree for Comb(5,3) repeats sub-instances such as Comb(3,2), Comb(2,1) and Comb(2,2).

Counting Combinations Optimal Substructure The value of Comb(n, m) can be recursively calculated using the following standard formula for Binomial Coefficients: Comb(n, m) = Comb(n-1, m-1) + Comb(n-1, m), with Comb(n, 0) = Comb(n, n) = 1.

Counting Combinations Dynamic Programming Solution: buy speed with space, using Pascal’s Triangle. Use a table T[0..n, 0..m]. Store previous instances to compute the current instance.

Counting Combinations
Divide & Conquer (recursive):
comb(n, m): if ((m == 0) || (m == n)) return 1; else return comb(n − 1, m − 1) + comb(n − 1, m);
Dynamic Programming (table T[n, m]):
for (i = 0 to n − m) T[i, 0] = 1;
for (i = 0 to m) T[i, i] = 1;
for (j = 1 to m)
  for (i = j + 1 to n − m + j)
    T[i, j] = T[i − 1, j − 1] + T[i − 1, j];
return T[n, m];
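For reference, a straightforward C++ sketch of the tabular idea; it fills the full (n+1)×(m+1) Pascal-triangle table rather than the slide's narrower diagonal band, and the identifiers are ours:
#include <algorithm>
#include <vector>
// comb(n, m) via Pascal's triangle: T[i][j] = T[i-1][j-1] + T[i-1][j]
long long comb(int n, int m) {
    std::vector<std::vector<long long>> T(n + 1, std::vector<long long>(m + 1, 0));
    for (int i = 0; i <= n; i++)
        for (int j = 0; j <= std::min(i, m); j++) {
            if (j == 0 || j == i) T[i][j] = 1;             // base cases Comb(i,0) = Comb(i,i) = 1
            else T[i][j] = T[i - 1][j - 1] + T[i - 1][j];  // recurrence, read from the table
        }
    return T[n][m];                                        // e.g. comb(5, 3) == 10
}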

Counting Combinations Dynamic Programming Solution: space complexity is O(nm); time complexity is T(n) = O(nm).

Exercise Consider the following function: Let the number of arithmetic operations used be T(n). Show that a direct recursive algorithm would give exponential complexity. Explain how, by not re-computing the same F(i) value twice, one can obtain an algorithm with T(n) = O(n^2). Give an algorithm for this problem that uses only O(n) arithmetic operations.

4. The Sum of Subset Problem Given a set of positive integers W = {w1, w2, ..., wn}. The problem: is there a subset of W that sums exactly to m? i.e., is SumSub(w, n, m) true? Example: W = {11, 13, 27, 7}, m = 31. A possible subset that sums exactly to 31 is {11, 13, 7}. Hence, SumSub(w, 4, 31) is true.

The Sum of Subset Problem Consider the partial problem SumSub(w, i, j). SumSub(w, i, j) is true if: wi is not needed and {w1,..,wi-1} has a subset that sums to j, i.e., SumSub(w, i-1, j) is true, OR wi is needed to fill the rest of j, i.e., {w1,..,wi-1} has a subset that sums to (j - wi). If there are no elements, i.e. i = 0, then SumSub(w, 0, j) is true if j = 0 and false otherwise.

Divide & Conquer Approach Algorithm:
bool SumSub (w, i, j)
{
  if (i == 0) return (j == 0);
  else if (SumSub (w, i-1, j)) return true;
  else if ((j - wi) >= 0) return SumSub (w, i-1, j - wi);
  else return false;
}

Dynamic Programming Approach Use a table t[i,j], i = 0..n, j = 0..m. Base case: set t[0,0] = true and t[0,j] = false for j != 0. Recursive calls are replaced as follows: loop on i = 1 to n, loop on j = 0 to m; the test on SumSub(w, i-1, j) is replaced by t[i,j] = t[i-1,j]; the call return SumSub(w, i-1, j - wi) is replaced by t[i,j] = t[i-1,j] OR t[i-1, j - wi].

Dynamic Programming Algorithm
bool SumSub (w, n, m)
{
  t[0,0] = true;
  for (j = 1 to m) t[0,j] = false;
  for (i = 1 to n)
    for (j = 0 to m) {
      t[i,j] = t[i-1,j];
      if ((j - wi) >= 0) t[i,j] = t[i-1,j] || t[i-1, j - wi];
    }
  return t[n,m];
}
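A runnable C++ version of this table-filling algorithm, applied to the earlier example W = {11, 13, 27, 7}, m = 31 (identifiers are ours):
#include <iostream>
#include <vector>
// Sum of Subset: t[i][j] is true if some subset of {w[0..i-1]} sums to exactly j
bool sumSub(const std::vector<int>& w, int m) {
    int n = w.size();
    std::vector<std::vector<bool>> t(n + 1, std::vector<bool>(m + 1, false));
    t[0][0] = true;                                  // the empty set sums to 0 only
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            t[i][j] = t[i - 1][j];                   // w[i-1] not used
            if (j - w[i - 1] >= 0)
                t[i][j] = t[i][j] || t[i - 1][j - w[i - 1]];   // w[i-1] used
        }
    return t[n][m];
}

int main() {
    std::cout << std::boolalpha << sumSub({11, 13, 27, 7}, 31) << "\n";   // true (11 + 13 + 7 = 31)
}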

Dynamic Programming Algorithm Analysis: the initialization costs O(1) + O(m); the outer loop runs O(n) times, the inner loop O(m) times, and each loop body costs O(1). Hence, space complexity is O(nm) and time complexity is T(n) = O(m) + O(nm) = O(nm).

5. The (0/1) Knapsack Problem Given n indivisible objects with positive integer weights W = {w1, w2, ..., wn} and positive integer values V = {v1, v2, ..., vn}, and a knapsack of size m, find the highest-valued subset of objects with total weight at most m.

The Decision Instance Assume that we have tried objects of type (1, 2, ..., i-1) to fill the sack up to a capacity (j) with a maximum profit of P(i-1, j). If j ≥ wi then P(i-1, j - wi) is the maximum profit if we remove the equivalent weight wi of an object of type (i). By trying to add object (i), we expect the maximum profit to change to P(i-1, j - wi) + vi.

The Decision Instance If this change is better, we do it; otherwise we leave things as they were, i.e., P(i, j) = max { P(i-1, j), P(i-1, j - wi) + vi } for j ≥ wi, and P(i, j) = P(i-1, j) for j < wi. The above instance can be solved for P(n, m) by initializing P(0, j) = 0 and successively computing P(1, j), P(2, j), ..., P(n, j) for all 0 ≤ j ≤ m.

Divide & Conquer Approach Algorithm:
int Knapsackr (int w[ ], int v[ ], int i, int j)
{
  if (i == 0) return 0;
  else {
    int a = Knapsackr (w, v, i-1, j);
    if ((j - w[i]) >= 0) {
      int b = Knapsackr (w, v, i-1, j - w[i]) + v[i];
      return (b > a ? b : a);
    }
    else return a;
  }
}

Divide & Conquer Approach Analysis: T(n) = number of calls to Knapsackr(w, v, n, m). For n = 0, one main call, so T(0) = 1. For n > 0, one main call plus two calls each with n-1. The recurrence relation is: T(n) = 2T(n-1) + 1 for n > 0 with T(0) = 1. Unrolling gives T(n) = 2^n T(0) + (2^(n-1) + ... + 2 + 1) = 2^(n+1) - 1 = O(2^n), i.e., exponential time.

Dynamic Programming Approach The following approach gives the maximum profit, but not the collection of objects that produced it:
Initialize P(0, j) = 0 for 0 ≤ j ≤ m
Initialize P(i, 0) = 0 for 0 ≤ i ≤ n
for each object i from 1 to n do
  for capacity j from 0 to m do
    P(i, j) = P(i-1, j)
    if (j >= wi)
      if (P(i-1, j) < P(i-1, j - wi) + vi)
        P(i, j) = P(i-1, j - wi) + vi

DP Algorithm
int Knapsackdp (int w[ ], int v[ ], int n, int m)
{
  int p[N][M];
  for (int j = 0; j <= m; j++) p[0][j] = 0;
  for (int i = 0; i <= n; i++) p[i][0] = 0;
  for (int i = 1; i <= n; i++)
    for (int j = 0; j <= m; j++) {
      int a = p[i-1][j];
      p[i][j] = a;
      if ((j - w[i]) >= 0) {
        int b = p[i-1][j - w[i]] + v[i];
        if (b > a) p[i][j] = b;
      }
    }
  return p[n][m];
}
Hence, space complexity is O(nm) and time complexity is T(n) = O(n) + O(m) + O(nm) = O(nm).

Example: Knapsack capacity m = 5
item   weight   value ($)
1      2        12
2      1        10
3      3        20
4      2        15

Example (Figure: the DP table P(i, j) for 0 ≤ i ≤ n and 0 ≤ j ≤ m; entry P(i, j) is computed from P(i-1, j) and P(i-1, j - wi), and the goal is P(n, m).)

Example: filling the table for this instance gives a maximum profit P(4, 5) = 37.
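For reference, a compact C++ sketch of the same DP applied to this example (capacity m = 5, items with (weight, value) = (2,12), (1,10), (3,20), (2,15)); it reports the maximum profit of 37, obtained with items 1, 2 and 4. Identifiers are ours:
#include <algorithm>
#include <iostream>
#include <vector>
// 0/1 Knapsack: P[i][j] = best value using the first i items with capacity j
int knapsack(const std::vector<int>& w, const std::vector<int>& v, int m) {
    int n = w.size();
    std::vector<std::vector<int>> P(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= m; j++) {
            P[i][j] = P[i - 1][j];                                               // leave item i out
            if (j >= w[i - 1])
                P[i][j] = std::max(P[i][j], P[i - 1][j - w[i - 1]] + v[i - 1]);  // take item i
        }
    return P[n][m];
}

int main() {
    std::cout << knapsack({2, 1, 3, 2}, {12, 10, 20, 15}, 5) << "\n";   // prints 37
}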

Exercises Modify the previous Knapsack algorithm so that it also lists the objects contributing to the maximum profit. Explain how to reduce the space complexity of the Knapsack problem to only O(m); you need only find the maximum profit, not the actual collection of objects.

Exercise: Longest Common Subsequence Problem Given two sequences A = {a1, . . . , an} and B = {b1, . . . , bm}, find the longest sequence that is a subsequence of both A and B. For example, if A = {aaadebcbac} and B = {abcadebcbec}, then {adebcb} is a common subsequence of length 6 of both sequences. Give the recursive Divide & Conquer algorithm and the Dynamic Programming algorithm together with their analyses. Hint: Let L(i, j) be the length of the longest common subsequence of {a1, . . . , ai} and {b1, . . . , bj}. If ai = bj then L(i, j) = L(i−1, j−1) + 1. Otherwise, one can see that L(i, j) = max (L(i, j−1), L(i−1, j)). A possible sketch follows.
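One possible DP realization of the hint, offered only as a reference outline (not the required exercise answer); names and layout are ours:
#include <algorithm>
#include <string>
#include <vector>
// Length of the longest common subsequence of A and B, O(nm) time and space
int lcsLength(const std::string& A, const std::string& B) {
    int n = A.size(), m = B.size();
    std::vector<std::vector<int>> L(n + 1, std::vector<int>(m + 1, 0));   // L[i][0] = L[0][j] = 0
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            L[i][j] = (A[i - 1] == B[j - 1]) ? L[i - 1][j - 1] + 1                 // matching characters
                                             : std::max(L[i][j - 1], L[i - 1][j]); // drop one character
    return L[n][m];
}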

6. Minimum Cost Path Given a cost matrix C[ ][ ] and a position (n, m) in C[ ][ ], find the cost of the minimum cost path to reach (n, m) from (0, 0). Each cell of the matrix represents a cost to traverse through that cell. The total cost of a path to reach (n, m) is the sum of all the costs on that path (including both source and destination). From a given cell (i, j), you can only traverse down to cell (i+1, j), right to cell (i, j+1), or diagonally to cell (i+1, j+1). Assume that all costs are positive integers.

Minimum Cost Path Example: what is the minimum cost path to (2, 2)? The path is (0, 0) –> (0, 1) –> (1, 2) –> (2, 2), and its cost is 8 (1 + 2 + 2 + 3). Optimal Substructure: the minimum cost to reach (n, m) is the minimum over the 3 predecessor cells plus C[n][m], i.e., minCost(n, m) = min (minCost(n-1, m-1), minCost(n-1, m), minCost(n, m-1)) + C[n][m].

Minimum Cost Path (D&Q) Overlapping Subproblems: the recursive definition suggests a D&Q approach with overlapping subproblems:
int MinCost(int C[ ][M], int n, int m)
{
  if (n < 0 || m < 0) return ∞;
  else if (n == 0 && m == 0) return C[n][m];
  else return C[n][m] + min( MinCost(C, n-1, m-1), MinCost(C, n-1, m), MinCost(C, n, m-1) );
}
Analysis: for m = n, T(n) = 3T(n-1) + 3 for n > 0 with T(0) = 0. Hence T(n) = O(3^n), exponential complexity.

Dynamic Programming Algorithm In the Dynamic Programming (DP) algorithm, recomputations of the same subproblems can be avoided by constructing a temporary array T[ ][ ] in a bottom-up manner.
int minCost(int C[ ][M], int n, int m)
{
  int i, j;
  int T[N][M];
  T[0][0] = C[0][0];
  /* Initialize first column */
  for (i = 1; i <= n; i++) T[i][0] = T[i-1][0] + C[i][0];
  /* Initialize first row */
  for (j = 1; j <= m; j++) T[0][j] = T[0][j-1] + C[0][j];
  /* Construct rest of the array */
  for (i = 1; i <= n; i++)
    for (j = 1; j <= m; j++)
      T[i][j] = min(T[i-1][j-1], T[i-1][j], T[i][j-1]) + C[i][j];
  return T[n][m];
}
Space complexity is O(nm); time complexity is T(n) = O(n) + O(m) + O(nm) = O(nm).
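A self-contained C++ usage sketch of the same bottom-up table; the 3×3 cost matrix below is an assumption chosen to be consistent with the example above (path (0,0) –> (0,1) –> (1,2) –> (2,2) with cost 1 + 2 + 2 + 3 = 8):
#include <algorithm>
#include <iostream>

const int N = 3, M = 3;   // assumed grid size for the example

int minCost(int C[N][M], int n, int m) {
    int T[N][M];
    T[0][0] = C[0][0];
    for (int i = 1; i <= n; i++) T[i][0] = T[i - 1][0] + C[i][0];   // first column
    for (int j = 1; j <= m; j++) T[0][j] = T[0][j - 1] + C[0][j];   // first row
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            T[i][j] = std::min({T[i - 1][j - 1], T[i - 1][j], T[i][j - 1]}) + C[i][j];
    return T[n][m];
}

int main() {
    int C[N][M] = {{1, 2, 3}, {4, 8, 2}, {1, 5, 3}};   // assumed costs, consistent with the example
    std::cout << minCost(C, 2, 2) << "\n";             // prints 8
}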

7. Coin Change Problem We want to make change for N cents, and we have an infinite supply of each of the S = {S1, S2, …, Sm} valued coins; how many ways can we make the change? (For simplicity's sake, the order does not matter.) Mathematically, how many ways can we express N as N = x1*S1 + x2*S2 + … + xm*Sm, with each xi a non-negative integer? For example, for N = 4, S = {1,2,3}, there are four solutions: {1,1,1,1}, {1,1,2}, {2,2}, {1,3}. We are trying to count the number of distinct sets. Since order does not matter, we will impose that our solutions (sets) are all sorted in non-decreasing order (thus, we are looking at sorted-set solutions: collections).

Coin Change Problem With S1 < S2 < … < Sm, the number of possible sets C(N, m) is composed of: those sets that contain at least one Sm, i.e. C(N-Sm, m), and those sets that do not contain any Sm, i.e. C(N, m-1). Hence, the solution can be represented by the recurrence relation: C(N, m) = C(N, m-1) + C(N-Sm, m), with the base cases: C(N, m) = 1 for N = 0; C(N, m) = 0 for N < 0; C(N, m) = 0 for N ≥ 1, m ≤ 0. Therefore, the problem has the optimal substructure property, as it can be solved using solutions to subproblems. It also has the property of overlapping subproblems.

D&Q Algorithm
int count( int S[ ], int m, int n )
{
  // If n is 0 then there is 1 solution (do not include any coin)
  if (n == 0) return 1;
  // If n is less than 0 then no solution exists
  if (n < 0) return 0;
  // If there are no coins and n is greater than 0, then no solution
  if (m <= 0 && n >= 1) return 0;
  // count is the sum of solutions (i) including S[m-1] (ii) excluding S[m-1]
  return count( S, m - 1, n ) + count( S, m, n - S[m-1] );
}
The algorithm has exponential complexity.

DP Algorithm
int count( int S[ ], int m, int n )
{
  int i, j, x, y;
  int table[n+1][m];   // n+1 rows to include the case (n = 0)
  for (i = 0; i < m; i++) table[0][i] = 1;   // fill for the case (n = 0)
  // Fill the rest of the table entries bottom up
  for (i = 1; i < n+1; i++)
    for (j = 0; j < m; j++) {
      x = (i - S[j] >= 0) ? table[i - S[j]][j] : 0;   // solutions including S[j]
      y = (j >= 1) ? table[i][j-1] : 0;               // solutions excluding S[j]
      table[i][j] = x + y;                            // total count
    }
  return table[n][m-1];
}
Space complexity is O(nm); time complexity is O(m) + O(nm) = O(nm).
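Running the same DP on the earlier example confirms the four solutions; a small C++ sketch using vectors instead of variable-length arrays (identifiers are ours):
#include <iostream>
#include <vector>
// Number of ways to make change for n using coin values S (order does not matter)
int countWays(const std::vector<int>& S, int n) {
    int m = S.size();
    std::vector<std::vector<int>> table(n + 1, std::vector<int>(m, 0));
    for (int j = 0; j < m; j++) table[0][j] = 1;                          // one way to make 0: no coins
    for (int i = 1; i <= n; i++)
        for (int j = 0; j < m; j++) {
            int withCoin    = (i - S[j] >= 0) ? table[i - S[j]][j] : 0;   // include coin S[j]
            int withoutCoin = (j >= 1) ? table[i][j - 1] : 0;             // exclude coin S[j]
            table[i][j] = withCoin + withoutCoin;
        }
    return table[n][m - 1];
}

int main() {
    std::cout << countWays({1, 2, 3}, 4) << "\n";   // prints 4: {1,1,1,1}, {1,1,2}, {2,2}, {1,3}
}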

8. Optimal Binary Search Trees Problem: Given a set of keys K1, K2, …, Kn and their corresponding search frequencies P1, P2, …, Pn, find a binary search tree for the keys such that the total search cost is minimum. Remark: The problem is similar to that of the optimal merge trees (Huffman Coding) but more difficult because now: - Keys can exist in internal nodes - The binary search tree condition (Left < Parent < Right) is imposed

(a) Example A Binary Search Tree of 5 words. A Greedy Algorithm: insert words in the tree in order of decreasing frequency of search.
Word:  a    am   and  if   two
Freq:  22   18   20   30   10   (total 100)

A Greedy BST Insert (if), (a), (and), (am), (two). The resulting tree has (if) at level 1, (a) and (two) at level 2, (and) at level 3, and (am) at level 4. Total search cost (100 searches) = 1*30 + 2*22 + 2*10 + 3*20 + 4*18 = 226.

Another Way to Compute Cost For a tree containing n keys (K1, K2, …, Kn) with root Ki, left subtree over K1..Ki-1 and right subtree over Ki+1..Kn, the total cost C(tree) is: C(tree) = (Σ P over all keys) + C(left subtree) + C(right subtree). For the previous example: C(tree) = 100 + {60 + [0] + [38 + (18) + (0)]} + {10} = 226.

An Optimal BST A Dynamic Programming algorithm leads to the following BST: (and) at the root, (a) and (if) at level 2, (am) and (two) at level 3. Cost = 1*20 + 2*22 + 2*30 + 3*18 + 3*10 = 208 = 100 + {40 + [0] + [18]} + {40 + [0] + [10]} = 208.

(b) Dynamic Programming Method For n keys, we perform n-1 iterations, 1 ≤ j ≤ n-1. In each iteration (j), we compute the best way to build a sub-tree containing j+1 keys (Ki, Ki+1, …, Ki+j) for all possible BST combinations of such j+1 keys (i.e. for 1 ≤ i ≤ n-j). Each sub-tree is tried with one of the keys as the root and a minimum cost sub-tree is stored. For a given iteration (j), we use previously stored values to determine the current best sub-tree.

Simple Example For 3 keys (A < B < C), we perform 2 iterations j = 1, j = 2. For j = 1, we build sub-trees using 2 keys. These come from i = 1, (A-B), and i = 2, (B-C). For each of these combinations, we compute the least cost sub-tree, i.e., the least cost of the two sub-trees (A*,B) and (A, B*) and the least cost of the two sub-trees (B*,C) and (B, C*), where (*) denotes the parent.

Simple Example (Cont.) For j = 2, we build trees using 3 keys. These come from i = 1, (A-C). For this combination, we compute the least cost tree of the trees (A*,(B-C)), (A, B*, C), ((A-B), C*). This is done using previously computed least cost sub-trees.

Simple Example (continued) (Figure: for j = 1, the candidate sub-trees for i = 1 (A-B) with roots k = 1, 2 and for i = 2 (B-C) with roots k = 2, 3; for j = 2, the candidate trees for i = 1 (A-C) with roots k = 1, 2, 3, each built from the stored minimum-cost sub-trees.)

(c) Example Revisited: Optimal BST for 5 keys, Iterations 1..2

Optimal BST for 5 keys, Iterations 3..n-1

Construction of The Tree Min BST at j = 4, i = 1, k = 3: cost = 100 + 58 + 50 = 208. (a .. am)min at j = 1, i = 1, k = 1: cost = 58. (if .. two)min at j = 1, i = 4, k = 1: cost = 50. The root is (and), with the minimum-cost sub-tree over (a, am) on the left and over (if, two) on the right. Final min BST cost = 1*20 + 2*22 + 2*30 + 3*18 + 3*10 = 208.

(d) Complexity Analysis Skeletal Algorithm:
for j = 1 to n-1 do {            // sub-tree has j+1 nodes
  for i = 1 to n-j do {          // for each of the n-j sub-tree combinations
    for k = i to i+j do {
      find the cost of each of the j+1 configurations and determine the minimum cost
    }
  }
}
T(n) = Σ_{j=1..n-1} (j + 1)(n − j) = O(n^3), S(n) = O(n^2)
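A C++ sketch of this O(n^3) computation (cost only, without recording the tree structure); on the five-word example (frequencies a=22, am=18, and=20, if=30, two=10) it reproduces the minimum cost of 208. Identifiers are ours:
#include <algorithm>
#include <climits>
#include <iostream>
#include <numeric>
#include <vector>
// Minimum total search cost of a BST over keys 1..n with frequencies p[0..n-1]
int optimalBST(const std::vector<int>& p) {
    int n = p.size();
    // C[i][j] = min cost of a BST containing keys i..j (1-based); empty ranges cost 0
    std::vector<std::vector<int>> C(n + 2, std::vector<int>(n + 1, 0));
    for (int i = 1; i <= n; i++) C[i][i] = p[i - 1];
    for (int len = 1; len <= n - 1; len++)          // iteration j of the skeleton: sub-trees of len+1 keys
        for (int i = 1; i <= n - len; i++) {
            int j = i + len;
            int freqSum = std::accumulate(p.begin() + i - 1, p.begin() + j, 0);
            int best = INT_MAX;
            for (int k = i; k <= j; k++)            // try key k as the root
                best = std::min(best, C[i][k - 1] + C[k + 1][j]);
            C[i][j] = freqSum + best;
        }
    return C[1][n];
}

int main() {
    std::cout << optimalBST({22, 18, 20, 30, 10}) << "\n";   // a, am, and, if, two -> prints 208
}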

Exercise Find the optimal binary search tree for the following words with the associated frequencies: a (18), and (22), I (19), it (20), or (21). Answer: (it) at the root, (and) and (or) at level 2, (a) and (I) at level 3; min cost = 20 + 2*43 + 3*37 = 217.

9. Dynamic Programming Algorithms for Graph Problems Various optimization problems on graphs have been solved using Dynamic Programming algorithms. Examples are:
Dijkstra's algorithm, which solves the single-source shortest path problem for a graph with non-negative edge costs (it is also commonly classified as a greedy algorithm).
The Floyd–Warshall algorithm, for finding all-pairs shortest paths in a weighted graph (with positive or negative edge weights) and also for finding the transitive closure.
The Bellman–Ford algorithm, which computes single-source shortest paths in a weighted digraph even when some edge weights are negative.
These will be discussed later under “Graph Algorithms”.

10. Comparison with Greedy and Divide & Conquer Methods Greedy vs. DP: Both are optimization techniques, building solutions from a collection of choices of individual elements. The greedy method computes its solution by making its choices in a serial forward fashion, never looking back or revising previous choices. DP computes its solution bottom up, synthesizing it from smaller sub-solutions and trying many possibilities and choices before it arrives at the optimal set of choices. There is no a priori test by which one can tell if the greedy method will lead to an optimal solution. By contrast, there is a test for DP, called the Principle of Optimality.

Comparison with Greedy and Divide & Conquer Methods D&Q vs. DP: Both techniques split their input into parts, find sub-solutions to the parts, and synthesize larger solutions from smaller ones. D&Q splits its input at pre-specified deterministic points (e.g., always in the middle). DP splits its input at every possible split point rather than at pre-specified points; after trying all split points, it determines which split point is optimal.