ALGORITHM TYPES: Divide and Conquer, Dynamic Programming, Greedy, and Backtracking. Note the general strategy from the examples. The classification is neither exhaustive (there may be other "types") nor mutually exclusive (one algorithm may combine strategies).

PROBLEM 1: DYNAMIC PROGRAMMING STRATEGY

When the divide and conquer strategy divides a problem down to very small subproblems, and some calculations are repeated across components, one can apply a bottom-up approach instead: calculate the smaller components first, then keep combining them until the highest level of the problem is solved. Draw the recursion tree of the Fibonacci-series calculation and you will see such repetitive calculations: f(n) = f(n-1) + f(n-2) for n > 1, and f(n) = 1 otherwise.

fib(n) calculation:   n  = 1, 2, 3, 4, 5
                     fib = 1, 2, 3, 5, 8

DYNAMIC PROGRAMMING STRATEGY (Continued)

Recursive fib(n):
  if (n <= 1) return 1;
  else return (fib(n-1) + fib(n-2)).
Time complexity: exponential, O(k^n) for some k > 1.0.

Iterative fibonacci(n):
  fib(0) = fib(1) = 1;
  for i = 2 through n do
    fib(i) = fib(i-1) + fib(i-2);
  end for;
  return fib(n).
Time complexity: O(n). Space complexity: O(n).

DYNAMIC PROGRAMMING STRATEGY (Continued)

SpaceSaving-fibonacci(n):
  if (n <= 1) return 1;
  int last = 1, last2last = 1, result = 1;
  for i = 2 through n do
    result = last + last2last;
    last2last = last;
    last = result;
  end for;
  return result.
Time complexity: O(n). Space complexity: O(1).
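A minimal Python sketch of the three variants above (function names are mine), matching the slide's convention fib(0) = fib(1) = 1:

def fib_recursive(n):
    # exponential time: recomputes the same subproblems over and over
    if n <= 1:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_table(n):
    # bottom-up table: O(n) time, O(n) space
    fib = [1] * (max(n, 1) + 1)
    for i in range(2, n + 1):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n]

def fib_constant_space(n):
    # keep only the last two values: O(n) time, O(1) space
    last, last2last = 1, 1
    for _ in range(2, n + 1):
        last, last2last = last + last2last, last
    return last

print(fib_table(5), fib_constant_space(5))   # 8 8, as in the sequence above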

DYNAMIC PROGRAMMING STRATEGY (Continued)

In the fib(5) calculation, the recursive version recalculates fib(1) 5 times, fib(2) 3 times, and fib(3) 2 times; the complexity is exponential. The iterative calculation avoids this repetition by storing the needed values in variables, giving complexity of order n. The Dynamic Programming approach consumes more memory: it stores the results of lower-level calculations for use in computing the next higher level. They are typically stored in a table.

Problem 1: 0-1 Knapsack Problem (not in the book)

Given a set of objects, each with a (Weight, Profit) pair, and a knapsack of limited weight capacity M, find a subset of objects for the knapsack that maximizes the total profit P.

Sample problem: Objects (wt, p) = {(2, 1), (3, 2), (9, 20), (5, 3)}, M = 8.

Exhaustive algorithm: try all subsets of objects. How many?
  {}: total = (0 lbs, $0)
  {(2,1)}: total = (2 lbs, $1)
  ...
  {(9,20)}: illegal, total wt > 8
  ...
  {(2,1), (3,2)}: total = (5 lbs, $3)
  ...
Total possibilities = 2^n: each object is either present or absent (1 or 0). An n-bit string is a good representation for a subset, e.g. 01101....

0-1 Knapsack Problem

DP uses a representation for the optimal profit P(j, k):
- for the first j objects considered (in any arbitrary but fixed pre-ordering of the objects),
- with a variable knapsack limit k.

Develop the table for P(j, k):
- with rows j = 1..n (number of objects), and
- for each row, k going from 0..M (the knapsack limit).

Finally, P(n, M) holds the result.

[Table: rows { }, {O1}, {O1,O2}, {O1,O2,O3}; columns k = 0..M; e.g. row {O1} reads 0 0 1 1 1 ... => 1, and row {O1,O2} ends with => 3.]

0-1 Knapsack: Recurrence Formula for computing P

The recurrence, for all k = 0..M, j = 0..N:
  (case 1) P(j, k) = P(j-1, k), if w_j > k, where w_j is the weight of the j-th object; else
  (case 2) P(j, k) = max{ P(j-1, k-w_j) + p_j, P(j-1, k) }.

The explanation for the formula is quite intuitive: either the j-th object does not fit (case 1), or we take the better of including it (its profit p_j plus the optimum over the remaining capacity) and leaving it out (case 2).

Recurrence termination: P(0, k) = 0 and P(j, 0) = 0, for all k's and j's.

Example: Objects (wt, p) = {(2, 1), (3, 2), (5, 3)}, M = 9: P(3, 7) = 4.

0-1 Knapsack: Recurrence -> recursive algorithm

Input: set of n objects with (w_i, p_i) pairs, and knapsack limit M
Output: maximum profit P for a subset of objects with total wt <= M

Function P(j, k)
  1. if j <= 0 or k <= 0 then return 0;  // recursion termination
  2. else
  3.   if w_j > k then
  4.     return P(j-1, k)
  5.   else
  6.     return max{ P(j-1, k-w_j) + p_j, P(j-1, k) }
End algorithm.

Driver: call P(n, M), for the given n objects and knapsack limit M.

This is not table building: the P's are recursive function calls. Complexity: ? Exponential, O(2^n). Why? The same reason as in the Fibonacci-number calculation: it repeats computing P's.

0-1 Knapsack: Recursive call tree

Objects (wt, p) = {(2, 1), (3, 2), (5, 3)}, M = 8.

[Figure: the call tree rooted at P(3,8), branching into P(2,8-5) and P(2,8), then into P(1,·) calls such as P(1,8), P(1,8-3), P(1,3), and P(0,·) leaves such as P(0,8-2), P(0,8), P(0,5), P(0,3), P(0,0). The same subproblems appear in several branches; a branch is skipped when the object's weight exceeds the remaining capacity. Each leaf P(0,*) returns the value 0 to the caller above.]

0-1 Knapsack: recurrence -> DP algorithm

Input: set of n objects with (w_i, p_i) pairs, and knapsack limit M
Output: maximum profit P for a subset of objects with total wt <= M

Algorithm DPks(j, k)
  1. For all k, P(0, k) = 0; for all j, P(j, 0) = 0;  // initialize
  2. For j = 1 to n do
  3.   For k = 1 to M do
  4.     if w_j > k then
  5.       P(j, k) = P(j-1, k)
  6.     else
  7.       P(j, k) = max{ P(j-1, k-w_j) + p_j, P(j-1, k) }
End loops and algorithm.

Do not repeat the same computation: store the results in a table and reuse them. Complexity: O(nM). This is pseudo-polynomial, because M is an input value rather than the input's size: if the weights carry one decimal place (say M = 30.5), scaling by 10 makes the table size O(10nM); with two decimal places (M = 30.54) it becomes O(100nM).
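Below is a short Python rendering of DPks (the list layout and names are mine), run on the example used in the next slide: objects (wt, p) = {(2, 1), (3, 2), (5, 3)}, M = 8.

def knapsack_dp(items, M):
    n = len(items)
    # P[j][k]: best profit using the first j objects with capacity k
    P = [[0] * (M + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        w, p = items[j - 1]
        for k in range(1, M + 1):
            if w > k:
                P[j][k] = P[j - 1][k]                 # object j cannot fit
            else:
                P[j][k] = max(P[j - 1][k - w] + p,    # take object j
                              P[j - 1][k])            # leave object j
    return P

P = knapsack_dp([(2, 1), (3, 2), (5, 3)], 8)
print(P[3][8])   # 5, matching the table in the next slide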

0-1 Knapsack Problem (Example)

Objects (wt, p) = {(2, 1), (3, 2), (5, 3)}, M = 8.

  k ->          0  1  2  3  4  5  6  7  8
  { }           0  0  0  0  0  0  0  0  0
  {O1}          0  0  1  1  1  1  1  1  1
  {O1,O2}       0  0  1  2  2  3  3  3  3
  {O1,O2,O3}    0  0  1  2  2  3  3  4  5

For example, P(3, 8) = max{ P(2, 8) + $0, P(2, 8 - 5 lbs) + $3 } = max{ 3, 2+3 } = $5: take O3 (subtract its 5 lbs) and add its $3 to the optimum for the remaining 3 lbs.

What if M = 9?

Objects (wt, p) = {(2, 1), (3, 2), (5, 3)}, M = 9.

  k ->          0  1  2  3  4  5  6  7  8  9
  { }           0  0  0  0  0  0  0  0  0  0
  {O1}          0  0  1  1  1  1  1  1  1  1
  {O1,O2}       0  0  1  2  2  3  3  3  3  3
  {O1,O2,O3}    0  0  1  2  2  3  3  4  5  5

P(3, 9) = P(2, 9 - 5 lbs) + $3 = (2+3)$ = 5.

HOW TO FIND THE KNAPSACK CONTENT FROM THE TABLE? SPACE COMPLEXITY?
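To answer the content question, here is a small traceback sketch (the helper name is mine): walk back from P(n, M); whenever P(j, k) differs from P(j-1, k), object j must have been taken.

def knapsack_items(P, items, M):
    chosen, k = [], M
    for j in range(len(items), 0, -1):
        if P[j][k] != P[j - 1][k]:     # profit changed, so object j was taken
            chosen.append(j)
            k -= items[j - 1][0]       # give back its weight
    return chosen[::-1]

print(knapsack_items(P, [(2, 1), (3, 2), (5, 3)], 8))   # [2, 3]: O2 and O3, 8 lbs, $5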

Memoisation algorithm: 0-1 knapsack

Algorithm P(j, k)
  if j <= 0 or k <= 0 then return 0;  // recursion termination
  else
    if w_j > k then
      y = A(j-1, k);
      if y < 0 { y = P(j-1, k); A(j-1, k) = y };  // P() is a recursive call, A() is a matrix
      return y
    else
      x = A(j-1, k-w_j);
      if x < 0 { x = P(j-1, k-w_j); A(j-1, k-w_j) = x };
      y = A(j-1, k);
      if y < 0 { y = P(j-1, k); A(j-1, k) = y };
      A(j, k) = max{ x + p_j, y };
      return max{ x + p_j, y }
End algorithm.

Driver: initialize a global matrix A(0..n, 0..M) with -1; call P(n, M).
Complexity: ?
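A hedged Python rendering of the memoisation idea (the structure and names are mine): the matrix A caches P(j, k), with -1 meaning "not yet computed".

def knapsack_memo(items, M):
    n = len(items)
    A = [[-1] * (M + 1) for _ in range(n + 1)]

    def P(j, k):
        if j <= 0 or k <= 0:
            return 0                       # recursion termination
        if A[j][k] >= 0:
            return A[j][k]                 # reuse the stored result
        w, p = items[j - 1]
        if w > k:
            result = P(j - 1, k)
        else:
            result = max(P(j - 1, k - w) + p, P(j - 1, k))
        A[j][k] = result                   # store before returning
        return result

    return P(n, M)

print(knapsack_memo([(2, 1), (3, 2), (5, 3)], 8))   # 5 again, but only the needed cells are filled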

Problem 2: Ordering of Matrix-chain Multiplications

A chain of matrices to be multiplied: ABCD,
- dimensions: A (5x1), B (1x4), C (4x3), and D (3x6);
- the resulting matrix will be of size (5x6).
The number of scalar (or integer) multiplications for (BC) is 1.4.3 = 12,
- and the resulting matrix's dimension is (1x3): 1 row and 3 columns.

Ordering of Matrix-chain Multiplications

A (5x1), B (1x4), C (4x3), and D (3x6). Multiple ways to multiply:
- (A(BC))D,
- ((AB)C)D,
- (AB)(CD),
- A(B(CD)),
- A((BC)D).
Note: the resulting matrix is the same in every case, but the computation time may vary drastically.

The time depends on the number of scalar multiplications:
- in the case of (A(BC))D, it is 1.4.3 + 5.1.3 + 5.3.6 = 12 + 15 + 90 = 117;
- in the case of A(B(CD)), it is 4.3.6 + 1.4.6 + 5.1.6 = 72 + 24 + 30 = 126.

Our problem here is to find the best such ordering. An exhaustive search over all orderings is too expensive: the number of orderings is the Catalan number, which grows exponentially with n.

Recurrence for Ordering of Matrix-chain Multiplications

For a sequence A_1 ... (A_left ... A_right) ... A_n, we want to find the optimal break point i for the parenthesized sequence: calculate each of the (r - l) cases and take the minimum,
  min{ (A_l ... A_i)(A_{i+1} ... A_r), for l <= i < r }.

Recurrence for the optimum number of scalar multiplications:
  M(l, r) = min{ M(l, i) + M(i+1, r) + row_l . col_i . col_r, for l <= i < r }.
Termination: M(l, l) = 0.

[A sample M(i, j) table for a four-matrix chain appears in the example below.]

Matrix-chain Recursive algorithm

Recursive Algorithm M(l, r)
  1. if r <= l then return 0;  // recurrence termination: no scalar multiplication
  2. else
  3.   return min{ M(l, i) + M(i+1, r) + row_l . col_i . col_r,
  4.               for l <= i < r };
  5. end algorithm.

Driver: call M(1, n) for the final answer.

Recurrence for Ordering of Matrix-chain Multiplications to Bottom-up Computation

Recurrence for the optimum scalar multiplications:
  M(l, r) = min{ M(l, i) + M(i+1, r) + row_l . col_i . col_r, for l <= i < r }.

To compute M(l, r), you need M(l, i) and M(i+1, r) available for ALL i's. E.g., for M(3, 9) you need M(3,3), M(4,9), and M(3,4), M(5,9), ...
- Need to compute the smaller-size M's first.
- Gradually increase size from 1, 2, 3, ..., n.

Strategy for Ordering of Bottom-up Computation

Recurrence for the optimum number of scalar multiplications:
  M(l, r) = min{ M(l, i) + M(i+1, r) + row_l . col_i . col_r, for l <= i < r }.

Compute by increasing size: r-l+1 = 1, r-l+1 = 2, r-l+1 = 3, ..., r-l+1 = n.
Start the calculation at the lowest level, with two matrices: AB, BC, CD, etc.
- Really?
- Where does the recurrence terminate? M(l, r) = 0 for l == r.
Then calculate for the triplets: ABC, BCD, etc. And so on ...

Ordering of Matrix-chain Multiplications (Example)

  M(l, r)   r=1   r=2   r=3     r=4
  l=1        0    15    35(2)   69(2)
  l=2              0    12      42(2)
  l=3                    0      24
  l=4                            0

Singlets (the diagonal) are 0; then the pairs are filled in, then the triplets, and so on.

Matrix-chain DP-algorithm

Input: list of pairwise dimensions of the matrices
Output: optimum number of scalar multiplications

  1. for all 1 <= i <= n do M(i, i) = 0;  // diagonal elements 0
  2. for size = 2 to n do                 // size of subsequence
  3.   for l = 1 to n-size+1 do
  4.     r = l+size-1;                    // move along the diagonal
  5.     M(l, r) = infinity;              // minimizer
  6.     for i = l to r-1 do
  7.       x = M(l, i) + M(i+1, r)
  8.           + row_l . col_i . col_r;
  9.       if x < M(l, r) then M(l, r) = x;
End.  // Complexities?
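A compact Python sketch of the algorithm above (names are mine): dims is the list of pairwise dimensions, so matrix A_i is dims[i-1] x dims[i] and the example used in the next slides is dims = [5, 3, 1, 4, 6].

def matrix_chain(dims):
    n = len(dims) - 1                                # number of matrices
    M = [[0] * (n + 1) for _ in range(n + 1)]        # M[l][r]: optimal cost
    I = [[0] * (n + 1) for _ in range(n + 1)]        # I[l][r]: optimal break point
    for size in range(2, n + 1):                     # size of subsequence
        for l in range(1, n - size + 2):
            r = l + size - 1                         # move along the diagonal
            M[l][r] = float('inf')                   # minimizer
            for i in range(l, r):                    # try every break point
                x = M[l][i] + M[i + 1][r] + dims[l - 1] * dims[i] * dims[r]
                if x < M[l][r]:
                    M[l][r], I[l][r] = x, i
    return M, I

M, I = matrix_chain([5, 3, 1, 4, 6])
print(M[1][4], I[1][4])   # 69 2: cost 69, best break after A_2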

Ordering of Matrix-chain Multiplications (Example)

A_1 (5x3), A_2 (3x1), A_3 (1x4), A_4 (4x6).

  M(l, r)   r=1   r=2   r=3     r=4
  l=1        0    15    35(2)   69(2)
  l=2              0    12      42(?)
  l=3                    0      24
  l=4                            0

The calculation goes diagonally: singlets, then pairs, then triplets. COMPLEXITY? How do you find out the actual matrix ordering?

DP Ordering of Matrix-chain Multiplications (Example)

A_1 (5x3), A_2 (3x1), A_3 (1x4), A_4 (4x6).

M(1,1) = M(2,2) = M(3,3) = M(4,4) = 0
M(1,2) = M(1,1) + M(2,2) + 5.3.1 = 15
M(1,3) = min{ i=1: M(1,1)+M(2,3)+5.3.4,  i=2: M(1,2)+M(3,3)+5.1.4 }
       = min{ 72, 35 } = 35 (i=2)
M(1,4) = min{ i=1: M(1,1)+M(2,4)+5.3.6,  i=2: M(1,2)+M(3,4)+5.1.6,  i=3: M(1,3)+M(4,4)+5.4.6 }
       = min{ 132, 69, 155 } = 69 (i=2)

69 comes from the break point i=2: (A_1.A_2)(A_3.A_4). Recursively break the sub-parts if necessary; e.g., for (A_1 A_2 A_3) the optimum is at i=2: (A_1.A_2)A_3.

Ordering of Matrix-chain Multiplications (Example)

A_1 (5x3), A_2 (3x1), A_3 (1x4), A_4 (4x6).

  M(l, r)   r=1   r=2   r=3     r=4
  l=1        0    15    35(2)   69(2)
  l=2              0    12      42(2)
  l=3                    0      24
  l=4                            0

For a chain of n matrices: table size = O(n^2), computing each entry = O(n), so COMPLEXITY = O(n^2 * n) = O(n^3). A separate matrix I(i, j) keeps track of the optimum i, for recovering the actual matrix ordering.

Computing the actual break points: the table I(i, j)

  I(l, r)   r=3   r=4   r=5   r=6
  l=1       (2)   (2)   (3)   (3)
  l=2             (3)   (3)   (4)
  l=3                   (4)   (4)
  l=4                         (4)

Backtrack on this table: I(1,6) = 3 gives (A_1..A_3)(A_4..A_6), i.e., ABCDEF -> (ABC)(DEF).
Then: (A_1..A_3) -> (A_1 A_2)(A_3), and (A_4..A_6) -> A_4 (A_5 A_6),
so ABCDEF -> (ABC)(DEF) -> ((AB)C)(D(EF)).
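A small recursive helper (mine) that backtracks on the I table produced by the matrix_chain sketch above and prints the optimal parenthesization:

def parenthesize(I, l, r):
    if l == r:
        return "A%d" % l
    i = I[l][r]                                  # optimal break point for (l..r)
    return "(" + parenthesize(I, l, i) + parenthesize(I, i + 1, r) + ")"

print(parenthesize(I, 1, 4))   # ((A1A2)(A3A4)) for the four-matrix example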

Inductive Proof of the Matrix-chain Recurrence

Induction base:
  1. If r < l: an absurd case, there is no matrix to multiply: return 0.
  2. If r == l: only one matrix, no multiplication: return 0.

Inductive hypothesis:
  1. For all size < k, with k >= 1, assume M(l, r) returns the correct optimum. Note: size = r-l+1.

Inductive step, for size = k:
  1. Consider all possible ways to break up the (l..r) chain, for l <= i < r.
  2. Make sure to compute and add the cost of multiplying the resulting pair of matrices: row_l . col_i . col_r.
  3. Since M(l, i) and M(i+1, r) are correct smaller-size optimums, as per the hypothesis,
     min{ M(l, i) + M(i+1, r) + row_l . col_i . col_r, for l <= i < r }
     is the correct return value for M(l, r).

Be careful with the correctness of the Recurrence behind Dynamic Programming

Inductive step:
  1. Consider all possible ways to break up the (l..r) chain, for l <= i < r.
  2. Make sure to compute the cost of multiplying the resulting pair of matrices: row_l . col_i . col_r.
  3. Since M(l, i) and M(i+1, r) are correct smaller-size optimums, per the hypothesis, min{ M(l, i) + M(i+1, r) + row_l . col_i . col_r, for l <= i < r } is the correct return value for M(l, r).

It is NOT always possible to combine smaller steps into the larger one. Addition and multiplication are associative: 4+3+1+2+9 = (((4+3)+(1+2))+9), but average(4, 3, 1, 2, 9) = av(av(av(4,3), av(1,2)), 9) is NOT true: average(4,3,1,2,9) = 3.8, while the nested version gives av(av(3.5, 1.5), 9) = av(2.5, 9) = 5.75.

DP needs the correct formulation of a Recurrence first; then a bottom-up combination such that the smaller problems contribute to the larger ones.

Problem 3: Optimal Binary Search Tree

Binary Search Problem:
  Input: sorted objects, and a key.
  Output: the key's index in the list, or 'not found'.

Binary Search on a Tree: the sorted list is organized as a binary tree.
  Recursively: for each root t, l <= t <= r, where l is any left descendant and r is any right descendant.

Example sorted list of objects: A_2, A_3, A_4, A_7, A_9, A_10, A_13, A_15.

[Figure: two sample correct binary-search trees over this list, one balanced with A_7 at the root, one a skewed chain.] There are many such correct trees.

Problem 3: Optimal Binary Search Tree

Problem: Optimal Binary-search Tree Organization.
  Input: a sorted list, with each element's access frequency (how many times it will be accessed/searched as a key over a period of time).
  Output: the optimal binary tree organization, so that the total cost is minimal.

Cost for accessing each object = frequency * access steps to the object in the tree. The number of access steps for each node is its distance from the root, plus one.

Example list, sorted by object order (index of A): A_1(7), A_2(10), A_3(5), A_4(8), and A_5(4).

[Figure: tree with root A_3(5); A_1(7) and A_4(8) at depth 2; A_2(10) and A_5(4) at depth 3]:
  Cost = 5*1 + 7*2 + 10*3 + 8*2 + 4*3 = 77.
[Figure: right-skewed chain A_1 -> A_2 -> A_3 -> A_4 -> A_5]:
  Cost = 7*1 + 10*2 + 5*3 + 8*4 + 4*5 = 94.

Problem 3: Optimal Binary Search Tree

Input: a sorted list, with each element's access frequency (how many times it will be accessed/searched as a key over a period of time).
Output: the optimal binary tree organization, with minimal total cost.
- Every optimization problem optimizes an objective function;
- here, it is the total access cost.
Different tree organizations have different aggregate costs, because the depths of the nodes differ from tree to tree.
Problem: we want to find the optimal aggregate cost, and a corresponding binary-search tree.

Problem 3: Optimal Binary Search Tree

Step 1 of DP: formulate an objective function. For our example list A_1(7), A_2(10), A_3(5), A_4(8), and A_5(4), the objective function is C(1, 5): the cost of the optimal tree for the list above.

How many ways can it be broken into sub-trees? Choosing A_i as the root splits the cost into C(1, i-1) and C(i+1, 5). E.g., for i = 3: root A_3(5), left subtree over (A_1(7), A_2(10)), right subtree over (A_4(8), A_5(4)).

A_1(7), A_2(10), A_3(5), A_4(8), A_5(4)

For i = 1: a null left tree, A_1(7) at the root, and (A_2(10), A_3(5), A_4(8), A_5(4)) on the right: C(1, 1-1) and C(1+1, 5).
For i = 2: (A_1(7)) as the left tree, A_2(10) at the root, and (A_3(5), A_4(8), A_5(4)) on the right: C(1, 2-1) and C(2+1, 5).

How many such splits are needed?

First, i = 1:  A_1(7), ( ) ( A_2(10), A_3(5), A_4(8), A_5(4) )
Next, i = 2:   A_2(10), ( A_1(7) ) ( A_3(5), A_4(8), A_5(4) )
Next, i = 3:   A_3(5), ( A_1(7), A_2(10) ) ( A_4(8), A_5(4) )
Next, i = 4:   A_4(8), ( A_1(7), A_2(10), A_3(5) ) ( A_5(4) )
Last, i = 5:   A_5(4), ( A_1(7), A_2(10), A_3(5), A_4(8) ) ( )

All values of i from 1 to 5 are to be tried, to find the MINIMUM COST: C(1, 5).

A_1 A_2 ... A_left ... A_i ... A_right ... A_n

Generalize the recurrence formulation for varying left and right pointers: choosing A_i(f_i) as the root splits C(l, r) into C(l, i-1) and C(i+1, r). For a choice of A_i:
  C(l, r) <- C(l, i-1) + C(i+1, r) + ?

A_1 A_2 ... A_left ... A_i ... A_right ... A_n

For a choice of A_i we would like to write:
  C(l, r) <- C(l, i-1) + C(i+1, r) + f_i * 1, for l <= i <= r.
BUT,

But observe:

[Figure: A_2(10) with left child A_1(7) as a stand-alone tree, versus the same pair as the left subtree inside the full tree rooted at A_3(5), with A_4(8) and A_5(4) on the right.]

C(stand-alone left sub-tree) = 10*1 + 7*2 = 24.
Cost of the full tree = 5*1 + 10*2 + 7*3 + 8*2 + 4*3 = 74.

A_1 A_2 ... A_left ... A_i ... A_right ... A_n

Now Dynamic Programming does not work! C(l, i-1) and C(i+1, r) are no longer directly useful for computing C(l, r), unless ...

Observe:

C(stand-alone left sub-tree) = 10*1 + 7*2 = 24.
C(inside the full tree) = 10*(1+1) + 7*(2+1), for 1 extra step per node
                        = 24 + (10*1 + 7*1) = 41.
Cost of the full tree = 5*1 + 10*2 + 7*3 + 8*2 + 4*3 = 74.

Generalize:

C(stand-alone left sub-tree) = 10*1 + 7*2 = 24.
C(inside the full tree) = 10*(1+1) + 7*(2+1)
                        = 24 + (10 + 7)
                        = C(stand-alone) + (sum of the subtree's node frequencies).

We can make DP work by reformulating the recurrence!

Recurrence for Optimal Binary Search Tree

If the i-th node is chosen as the root for this sub-tree, then

C(l, r) = min over l <= i <= r of { f(i) + C(l, i-1) + C(i+1, r)
          + sum_{j=l..i-1} f(j)     [additional cost for the left sub-tree]
          + sum_{j=i+1..r} f(j) }   [additional cost for the right sub-tree]
        = min over l <= i <= r of { sum_{j=l..r} f(j) + C(l, i-1) + C(i+1, r) }.

Recurrence for Optimal Binary Search Tree

If the i-th node is chosen as the root for this sub-tree, then

C(l, r) = min over l <= i <= r of { f(i) + C(l, i-1) + C(i+1, r)
          + sum_{j=l..i-1} f(j)     [additional cost for the left sub-tree]
          + sum_{j=i+1..r} f(j) }   [additional cost for the right sub-tree]
        = min over l <= i <= r of { sum_{j=l..r} f(j) + C(l, i-1) + C(i+1, r) }.

Recurrence termination? Observe in the formula which boundary values you will need. The final result is in C(1, n). Start from the zero-element sub-tree (size = 0) and gradually increase the size; finish when size = n, for the full tree.

Optimal Binary Search Tree (Continued)

As with matrix-chain multiplication ordering, we will develop the triangular part of the cost matrix (r >= l), and we will develop it diagonally (r = l + size), with varying size. Note our boundary condition: C(l, r) = 0 if l > r (a meaningless cost); this is the recurrence termination. We start from l = r, the single-node trees (not from l = r-1, pairs of matrices, as in the matrix-chain case). Also, i now goes from 'left' through 'right', and i is excluded from both subtrees' C's.

Optimal Binary-search Tree Organization problem (Example)

Keys: A_1(7), A_2(10), A_3(5), A_4(8), and A_5(4). Initialize the boundary, C(l, r) = 0 for l > r:

  C(l, r)   r=1   r=2   r=3   r=4   r=5
  l=1
  l=2        0
  l=3        0     0
  l=4        0     0     0
  l=5        0     0     0     0

Optimal Binary-search Tree Organization problem (Example)

Keys: A_1(7), A_2(10), A_3(5), A_4(8), and A_5(4).
Singlets on the diagonal: C(1,1) = 7, C(2,2) = 10, ....

  C(l, r)   r=1   r=2   r=3   r=4   r=5
  l=1        7
  l=2        0     10
  l=3        0     0     5
  l=4        0     0     0     8
  l=5        0     0     0     0     4

Optimal Binary-search Tree Organization problem (Example)

Keys: A_1(7), A_2(10), A_3(5), A_4(8), and A_5(4).
Diagonals: C(1,1) = 7, C(2,2) = 10, ....

  C(l, r)   r=1   r=2   r=3   r=4   r=5
  l=1        7*
  l=2        0     10
  l=3        0     0     5
  l=4        0     0     0     8
  l=5        0     0     0     0     4

C(1,2) = min{ i=1: C(1,0) + C(2,2) + f_1 + f_2 = 0 + 10 + 17 = 27,

Optimal Binary-search Tree Organization problem (Example)

Keys: A_1(7), A_2(10), A_3(5), A_4(8), and A_5(4).
Diagonals: C(1,1) = 7, C(2,2) = 10, ....

C(1,2) = min{ i=1: C(1,0) + C(2,2) + f_1 + f_2 = 0 + 10 + 17 = 27,
              i=2: C(1,1) + C(3,2) + f_1 + f_2 = 7 + 0 + 17 = 24 }

Write the DP algorithm for the Optimal Binary-search Tree Organization problem

Keys: A_1(7), A_2(10), A_3(5), A_4(8), and A_5(4).
Diagonals: C(1,1) = 7, C(2,2) = 10, ....

C(1,2) = min{ i=1: C(1,0) + C(2,2) + f_1 + f_2 = 0 + 10 + 17 = 27,
              i=2: C(1,1) + C(3,2) + f_1 + f_2 = 7 + 0 + 17 = 24 }
       = min{ 27, 24 } = 24 (i=2)

  C(l, r)   r=1   r=2     r=3   r=4   r=5
  l=1        7    24(2)
  l=2        0     10
  l=3        0     0       5
  l=4        0     0       0     8
  l=5        0     0       0     0     4

Write the DP algorithm for the Optimal Binary-search Tree Organization problem

Keys: A_1(7), A_2(10), A_3(5), A_4(8), and A_5(4).
Diagonals: C(1,1) = 7, C(2,2) = 10, ....

C(1,2) = min{ i=1: C(1,0) + C(2,2) + f_1 + f_2 = 0 + 10 + 17 = 27,
              i=2: C(1,1) + C(3,2) + f_1 + f_2 = 7 + 0 + 17 = 24 }
       = min{ 27, 24 } = 24 (i=2)

  C(l, r)   r=1   r=2     r=3   r=4   r=5
  l=1        7    24(2)   ...   ...   ...
  l=2        0     10     ...   ...   ...
  l=3        0     0       5    ...   ...
  l=4        0     0       0     8    ...
  l=5        0     0       0     0     4

Usual questions: how to keep track, for finding the optimum tree? Full algorithm?
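A hedged Python sketch of the full algorithm the slide asks for (names are mine): C[l][r] is the optimal cost for keys l..r, and root[l][r] records the chosen root, for recovering the optimum tree.

def optimal_bst(freq):
    n = len(freq)
    f = [0] + freq                                  # 1-indexed frequencies
    C = [[0] * (n + 2) for _ in range(n + 2)]       # C[l][r] = 0 whenever l > r
    root = [[0] * (n + 2) for _ in range(n + 2)]
    for size in range(1, n + 1):                    # develop diagonally, by size
        for l in range(1, n - size + 2):
            r = l + size - 1
            total = sum(f[l:r + 1])                 # every key costs one extra step
            best = float('inf')
            for i in range(l, r + 1):               # try each key as the root
                cost = total + C[l][i - 1] + C[i + 1][r]
                if cost < best:
                    best, root[l][r] = cost, i
            C[l][r] = best
    return C[1][n], root

cost, root = optimal_bst([7, 10, 5, 8, 4])
print(cost)   # 67 for this example, with A_2 as the overall root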

Problem 4: All-Pairs Shortest Paths

A variation of Dijkstra's algorithm, called the Floyd-Warshall algorithm. Good for dense graphs.

Algorithm Floyd   // NOT A GOOD STYLE - WHY?
  Copy the distance matrix into d[1..n][1..n];
  for k = 1 through n do    // consider each vertex as an updating candidate
    for i = 1 through n do
      for j = 1 through n do
        if (d[i][k] + d[k][j] < d[i][j]) then
          d[i][j] = d[i][k] + d[k][j];
          path[i][j] = k;   // last updated via k
End algorithm.

Time: O(n^3), for the 3 nested loops. Space: ?

Example: Cs.fit.edu/~dmitra/Algorithms/lectures/FloydExampleKormenPg696.pdf
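A runnable Python rendering of the pseudocode above; the graph is given as an n x n distance matrix with float('inf') for missing edges (the representation is my choice).

def floyd_warshall(dist):
    n = len(dist)
    d = [row[:] for row in dist]          # copy the distance matrix
    path = [[None] * n for _ in range(n)]
    for k in range(n):                    # each vertex as an updating candidate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    path[i][j] = k        # last updated via k
    return d, path

INF = float('inf')
d, path = floyd_warshall([[0, 3, INF],
                          [INF, 0, 1],
                          [2, INF, 0]])
print(d[0][2])   # 4, via the intermediate vertex 1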

Problem 5: DNA Sequence Alignment (Approximate string matching)

Find a best-score alignment.
Example score: 9 matches times (+1), 1 mismatch times (-1), 1 gap times (-2): 9 - 1 - 2 = 6.

Source: Setubal & Meidanis, Bioinformatics

Problem 5: DNA Sequence Alignment

Align sequence s = s_1 s_2 s_3 ... s_n against sequence t = t_1 t_2 t_3 ... t_m, over an (n x m) score table.

Problem 5: DNA Sequence Alignment

With the scores above, the table entry V(i, j) (the best score for aligning the prefixes s_1..s_i and t_1..t_j) follows the recurrence
  V(i, j) = max{ V(i-1, j-1) + (+1 if s_i = t_j, else -1), V(i-1, j) - 2, V(i, j-1) - 2 }.
Note the initialization, or recurrence termination: V(0, 0) = 0, V(i, 0) = -2i, V(0, j) = -2j (the all-gap prefixes). The alignment traceback starts from the (n, m) corner and stops at the (0, 0) corner.
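A minimal Python sketch of this score table, using the slide's scores (+1 match, -1 mismatch, -2 gap); the function and variable names are mine.

def global_alignment_score(s, t):
    n, m = len(s), len(t)
    V = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        V[i][0] = -2 * i                         # initialization: all-gap prefixes
    for j in range(1, m + 1):
        V[0][j] = -2 * j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 1 if s[i - 1] == t[j - 1] else -1
            V[i][j] = max(V[i - 1][j - 1] + match,   # align s_i with t_j
                          V[i - 1][j] - 2,           # gap in t
                          V[i][j - 1] - 2)           # gap in s
    return V[n][m]                                   # traceback would start here, at (n, m)

print(global_alignment_score("ACGT", "ACT"))   # 1: three matches, one gap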

Problem 5: Gene Alignment

[Figure: the filled score table for the example alignment, with the traceback path from the (n, m) corner to (0, 0).]

Problem 5-1: Gene Alignment, local

The above is global alignment: the whole of both sequences must be aligned. We may instead want a local alignment: the best-scoring alignment between substrings of the two sequences.

Problem 5-1: Gene Alignment, local (Continued)

For local alignment: initialize the table with 0's, and allow no negative scores (clamp each cell at 0). The alignment traceback starts from the highest value in the table and stops at a zero.
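The same sketch adjusted for local alignment (a hedged Smith-Waterman-style variant; names are mine): cells are clamped at 0, and the answer is the table's maximum.

def local_alignment_score(s, t):
    n, m = len(s), len(t)
    V = [[0] * (m + 1) for _ in range(n + 1)]    # initialize with 0's
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 1 if s[i - 1] == t[j - 1] else -1
            V[i][j] = max(0,                     # no negative score
                          V[i - 1][j - 1] + match,
                          V[i - 1][j] - 2,
                          V[i][j - 1] - 2)
            best = max(best, V[i][j])            # traceback starts at the highest cell
    return best

print(local_alignment_score("TTACGT", "GGACGAA"))   # 3: the common stretch ACG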