ICS 353: Design and Analysis of Algorithms


King Fahd University of Petroleum & Minerals, Information & Computer Science Department
ICS 353: Design and Analysis of Algorithms
Dynamic Programming

Reading Assignment M. Alsuwaiyel, Introduction to Algorithms: Design Techniques and Analysis, World Scientific Publishing Co., Inc., 1999, Chapter 7.

Dynamic Programming Dynamic Programming algorithms address problems whose solutions are recursive in nature but have the following property: the direct implementation of the recursive solution results in identical recursive calls that are executed more than once. Dynamic programming evaluates such recurrences in a bottom-up manner, saving intermediate results that are later used in computing the desired solution.

Fibonacci Numbers What is the recursive algorithm that computes Fibonacci numbers? What is its time complexity? Note that it can be shown that the number of calls made by the direct recursive algorithm is exponential in n, even though only n distinct values are ever needed.
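As a concrete sketch (not part of the original slides), the bottom-up evaluation that dynamic programming suggests can be written in Python; each value is computed exactly once:

```python
def fib(n):
    """Compute the nth Fibonacci number bottom-up, with F(1) = F(2) = 1.

    Each value is computed once, so the running time is linear in n,
    unlike the direct recursion, which repeats identical calls.
    """
    if n <= 2:
        return 1
    a, b = 1, 1  # F(1), F(2)
    for _ in range(n - 2):
        a, b = b, a + b  # slide the window of saved results forward
    return b
```

Only the two most recent values need to be saved, so the space used is constant.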

Computing the Binomial Coefficient Recursive Definition: C(n, k) = C(n-1, k-1) + C(n-1, k) for 0 < k < n, with C(n, 0) = C(n, n) = 1. Actual Value: C(n, k) = n! / (k! (n-k)!)

Computing the Binomial Coefficient What is the direct recursive algorithm for computing the binomial coefficient? How much does it cost? Note that the direct recursion recomputes the same coefficients over and over, so its cost is proportional to C(n, k) itself, which can be exponential in n.
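The bottom-up alternative fills a table of the recurrence instead, as in Pascal's triangle. A minimal Python sketch (illustrative, not from the slides):

```python
def binomial(n, k):
    """Evaluate C(n, k) bottom-up from the recurrence
    C(n, k) = C(n-1, k-1) + C(n-1, k), avoiding repeated recursive calls."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = 1  # C(i, 0) = 1 for every i
        for j in range(1, min(i, k) + 1):
            C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```

The table has (n+1)(k+1) entries and each is filled in constant time, so the cost drops to O(nk).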

Optimization Problems and Dynamic Programming Optimization problems with certain properties form another class of problems that can be solved more efficiently using dynamic programming. Development of a dynamic programming solution to an optimization problem involves four steps:
1. Characterize the structure of an optimal solution:
   - Optimal substructure, where an optimal solution consists of sub-solutions that are themselves optimal.
   - Overlapping sub-problems, where the space of sub-problems is small in the sense that the algorithm solves the same sub-problems over and over rather than generating new sub-problems.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up manner.
4. Construct an optimal solution from the computed information.

Longest Common Subsequence Problem
Problem Definition: Given two strings A and B over an alphabet Σ, determine the length of the longest subsequence that is common to A and B. A subsequence of A = a1 a2 … an is a string of the form a_i1 a_i2 … a_ik where 1 ≤ i1 < i2 < … < ik ≤ n.
Example: Let Σ = {x, y, z}, A = xyxyxxzy, B = yxyyzxy, and C = zzyyxyz.
LCS(A, B) = yxyzy. Hence the length = 5.
LCS(B, C) = Hence the length =
LCS(A, C) = Hence the length =

Straight-Forward Solution Brute-force search:
How many subsequences exist in a string of length n? 2^n
How much time is needed to check whether a string is a subsequence of another string of length m? Θ(m)
What is the time complexity of the brute-force search algorithm for finding the length of the longest common subsequence of two strings of sizes n and m? O(m 2^n), since each of the 2^n subsequences of the first string may be checked against the second.

Dynamic Programming Solution Let L[i,j] denote the length of the longest common subsequence of a1 a2 … ai and b1 b2 … bj, which are prefixes of A and B of lengths n and m, respectively. Then
L[i,j] = 0 when i = 0 or j = 0
L[i,j] = L[i-1,j-1] + 1 when i > 0, j > 0, and ai = bj
L[i,j] = max(L[i-1,j], L[i,j-1]) when i > 0, j > 0, and ai ≠ bj

LCS Algorithm
Algorithm LCS(A,B)
Input: A and B, strings of length n and m respectively
Output: Length of a longest common subsequence of A and B
  Initialize L[i,0] and L[0,j] to zero for all i and j;
  for i ← 1 to n do
    for j ← 1 to m do
      if ai = bj then
        L[i,j] ← 1 + L[i-1,j-1]
      else
        L[i,j] ← max(L[i-1,j], L[i,j-1])
      end if
    end for
  end for
  return L[n,m];
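The pseudocode translates directly into Python; this is a sketch for illustration, using 0-based string indices in place of the slides' 1-based ai and bj:

```python
def lcs_length(A, B):
    """Length of the longest common subsequence of strings A and B,
    filling the (n+1) x (m+1) table L bottom-up."""
    n, m = len(A), len(B)
    L = [[0] * (m + 1) for _ in range(n + 1)]  # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:           # ai = bj
                L[i][j] = 1 + L[i - 1][j - 1]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]
```

For the slide's example, lcs_length("xyxyxxzy", "yxyyzxy") returns 5, matching LCS(A, B) = yxyzy.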

Example (Q7.5 pp. 220) Find the length of the longest common subsequence of A=xzyzzyx and B=zxyyzxz

Example (Cont.) The table L[i,j] for A = xzyzzyx (rows) and B = zxyyzxz (columns):

        z   x   y   y   z   x   z
    0   0   0   0   0   0   0   0
x   0   0   1   1   1   1   1   1
z   0   1   1   1   1   2   2   2
y   0   1   1   2   2   2   2   2
z   0   1   1   2   2   3   3   3
z   0   1   1   2   2   3   3   4
y   0   1   1   2   3   3   3   4
x   0   1   2   2   3   3   4   4

The length of the longest common subsequence is L[7,7] = 4.

Complexity Analysis of LCS Algorithm What are the time and space complexities of the algorithm? Both are Θ(nm), since the algorithm fills an (n+1) × (m+1) table with constant work per entry.

Matrix Chain Multiplication Assume matrices A, B, and C have dimensions 2×10, 10×2, and 2×10, respectively. The number of scalar multiplications using the standard matrix multiplication algorithm for
(A B) C is 2·10·2 + 2·2·10 = 80
A (B C) is 10·2·10 + 2·10·10 = 400
Problem Statement: Find the order of multiplying n matrices in which the number of scalar multiplications is minimum.

Straight-Forward Solution Again, let us consider the brute-force method. We need to compute the number of different ways that we can parenthesize the product of n matrices, e.g., how many different orderings do we have for the product of four matrices? Let f(n) denote the number of ways to parenthesize the product M1 M2 … Mn. Splitting at the outermost multiplication, (M1 M2 … Mk)(Mk+1 Mk+2 … Mn), gives the recurrence f(n) = Σ_{k=1}^{n-1} f(k) f(n-k), with f(1) = 1. Thus f(2) = 1, f(3) = 2, and f(4) = 5.

Catalan Numbers Cn = f(n+1), the nth Catalan number. Using Stirling's formula, it can be shown that f(n) is approximately 4^(n-1) / (√π · n^(3/2)), i.e., exponential in n.
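The recurrence obtained from the outermost split, f(n) = Σ_{k=1}^{n-1} f(k) f(n-k), can itself be evaluated bottom-up; a short Python sketch (not from the slides) that reproduces the Catalan numbers:

```python
def num_parenthesizations(n):
    """f(n): number of ways to fully parenthesize a product of n matrices,
    computed bottom-up from f(n) = sum_{k=1}^{n-1} f(k) * f(n-k), f(1) = 1."""
    f = [0] * (n + 1)
    f[1] = 1
    for m in range(2, n + 1):
        # sum over the position k of the outermost split
        f[m] = sum(f[k] * f[m - k] for k in range(1, m))
    return f[n]
```

The sequence 1, 1, 2, 5, 14, 42, … grows like 4^n / n^(3/2), confirming that enumerating all parenthesizations is hopeless for large n.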

Cost of Brute Force Method
How many possibilities do we have for parenthesizing n matrices? f(n) ≈ 4^(n-1) / (√π · n^(3/2))
How much does it cost to find the number of scalar multiplications for one parenthesized expression? Θ(n)
Therefore, the total cost is n f(n) ≈ 4^(n-1) / (√π · n^(1/2)), which is exponential in n.

The Recursive Solution Since the number of columns of each matrix Mi is equal to the number of rows of Mi+1, we only need to specify the number of rows of all the matrices, plus the number of columns of the last matrix: r1, r2, …, rn+1, where Mi has dimensions ri × ri+1. Let the cost of multiplying the chain Mi…Mj (denoted by Mi,j) be C[i,j]. If k is an index between i+1 and j, the cost of computing Mi,j by multiplying Mi,k-1 with Mk,j is C[i,k-1] + C[k,j] + ri rk rj+1. Therefore, C[i,i] = 0 and
C[1,n] = min_{2 ≤ k ≤ n} { C[1,k-1] + C[k,n] + r1 rk rn+1 }

The Dynamic Programming Algorithm

Example (Q7.11 pp. 221-222) Given as input 2 , 3 , 6 , 4 , 2 , 7 compute the minimum number of scalar multiplications:

Example (Q7.11 pp. 221-222) The matrix dimensions are M1: 2×3, M2: 3×6, M3: 6×4, M4: 4×2, M5: 2×7.

Example (Q7.11 pp. 221-222) The table C[i,j], filled diagonal by diagonal, with the best parenthesization of each subchain:

M1   0   36 (M1 M2)   84 (M1 M2) M3   96 M1 (M2..M4)   124 (M1..M4) M5
M2       0            72 (M2 M3)      84 M2 (M3 M4)    126 (M2..M4) M5
M3                    0               48 (M3 M4)       132 (M3 M4) M5
M4                                    0                56 (M4 M5)
M5                                                     0

The minimum number of scalar multiplications is C[1,5] = 124.

Another Example (Activity Sheet) Given as input 5 , 2 , 3 , 6 , 4 , 2, 4 Compute the minimum number of scalar multiplications. Find the optimal parenthesization

MatChain Algorithm
Algorithm MatChain
Input: r[1..n+1] of positive integers corresponding to the dimensions of a chain of n matrices
Output: Least number of scalar multiplications required to multiply the n matrices
  for i ← 1 to n do
    C[i,i] ← 0;  // diagonal d0
  for d ← 1 to n-1 do  // for diagonals d1 to dn-1
    for i ← 1 to n-d do
      j ← i + d;
      C[i,j] ← ∞;
      for k ← i+1 to j do
        C[i,j] ← min{C[i,j], C[i,k-1] + C[k,j] + r[i] r[k] r[j+1]};
      end for
    end for
  end for
  return C[1,n];
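A Python sketch of the same diagonal-by-diagonal computation, with the dimension array shifted to 0-based indexing (so matrix i has dimensions r[i-1] × r[i]):

```python
def mat_chain(r):
    """Least number of scalar multiplications to multiply a chain of
    n = len(r) - 1 matrices, where matrix i is r[i-1] x r[i].
    r corresponds to the slides' r[1..n+1], shifted to 0-based."""
    n = len(r) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]  # C[i][i] = 0 (diagonal d0)
    for d in range(1, n):                      # diagonals d1 .. d(n-1)
        for i in range(1, n - d + 1):
            j = i + d
            C[i][j] = float('inf')
            for k in range(i + 1, j + 1):
                # split the chain as (M_i .. M_{k-1}) (M_k .. M_j)
                C[i][j] = min(C[i][j],
                              C[i][k - 1] + C[k][j] + r[i - 1] * r[k - 1] * r[j])
    return C[1][n]
```

For the slide's Q7.11 input, mat_chain([2, 3, 6, 4, 2, 7]) returns 124, and for the 2×10, 10×2, 2×10 example it returns 80.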

Time and Space Complexity of MatChain Algorithm
Time Complexity: Θ(n³), since there are Θ(n²) entries C[i,j] and each takes O(n) time to compute.
Space Complexity: Θ(n²) for the table C.

All-Pairs Shortest Paths Problem: For every pair of vertices u, v ∈ V, calculate δ(u, v).
Possible Solution 1: Run a single-source shortest path algorithm (e.g., Dijkstra's algorithm) from every vertex.
Cost of Possible Solution 1: n times the cost of the single-source algorithm, e.g., O(n m log n) with Dijkstra's algorithm on a graph with nonnegative edge weights.

Dynamic Programming Solution Define a k-path from u to v, where u, v ∈ {1, 2, …, n}, to be any path whose intermediate vertices all have indices less than or equal to k.
What is a 0-path? A path with no intermediate vertices, i.e., a single edge (or an empty path).
What is a 1-path? A path whose only possible intermediate vertex is vertex 1. …
What is an n-path? Any path, since every intermediate vertex has index at most n.

Floyd’s Algorithm
Algorithm Floyd
Input: An n × n matrix length[1..n, 1..n] such that length[i,j] is the weight of the edge (i,j) in a directed graph G = ({1,2,…,n}, E)
Output: A matrix D with D[i,j] = δ[i,j]
1 D = length; // copy the input matrix length into D
2 for k = 1 to n do
3   for i = 1 to n do
4     for j = 1 to n do
5       D[i,j] = min{D[i,j], D[i,k] + D[k,j]}
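The algorithm is short enough to run directly; a Python sketch with a small illustrative graph of our own (not the four-vertex example from the slides), using 0-based vertex indices and INF for missing edges:

```python
INF = float('inf')

def floyd(length):
    """All-pairs shortest path distances by Floyd's algorithm.
    length is an n x n matrix (0-based here); INF marks a missing edge."""
    n = len(length)
    D = [row[:] for row in length]  # copy the input matrix length into D
    for k in range(n):              # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# A small 3-vertex example: edges 0->1 (3), 1->2 (2), 2->0 (1).
length = [[0,   3,   INF],
          [INF, 0,   2],
          [1,   INF, 0]]
D = floyd(length)
```

After the pass for k, D[i][j] holds the length of the shortest k-path from i to j, which is exactly the invariant behind the dynamic programming solution.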

Example 2 11 5 12 15 4 8 2 4 1 11 1 3 5

Example (Cont.) [Tables: the distance matrix D over vertices 1–4 after considering 0-paths, 1-paths, 2-paths, 3-paths, and 4-paths in turn. Each pass admits one more vertex as an intermediate vertex and can only decrease entries; the 4-path matrix holds the final shortest distances.]

Time and Space Complexity
Time Complexity: Θ(n³), due to the three nested loops.
Space Complexity: Θ(n²) for the matrix D.

The Knapsack Problem Let U = {u1, u2, …, un} be a set of n items to be packed in a knapsack of size C. Let sj and vj be the size and value of the jth item, where sj and vj are positive integers, 1 ≤ j ≤ n. The objective is to fill the knapsack with some items from U whose total size does not exceed C and whose total value is maximum. Assume that the size of each item does not exceed C.

The Knapsack Problem Formulation Given the n items in U, we want to find a subset S ⊆ U such that Σ_{uj ∈ S} vj is maximized subject to the constraint Σ_{uj ∈ S} sj ≤ C. This is sometimes referred to as the 0-1 knapsack problem, since no more than one item is allowed from each type.

Inductive Solution Let V[i,j] denote the value obtained by filling a knapsack of size j with items taken from the first i items {u1, u2, …, ui} in an optimal way:
The range of i is 0 ≤ i ≤ n. The range of j is 0 ≤ j ≤ C.
The objective is to find V[n, C].
V[i,0] = 0 and V[0,j] = 0.
V[i,j] = V[i-1,j] if si > j
V[i,j] = max{V[i-1,j], V[i-1, j-si] + vi} if si ≤ j

Example (pp. 223 Question 7.22) There are five items of sizes 3, 5, 7, 8, and 9 with values 4, 6, 7, 9, and 10, respectively. The size of the knapsack is 22.
[Table: V[i,j] for 0 ≤ i ≤ 5 and 0 ≤ j ≤ 22, filled row by row using the recurrence. The maximum value is V[5,22] = 25, obtained by packing the items of sizes 5, 8, and 9.]

Algorithm Knapsack
Algorithm Knapsack
Input: A set of items U = {u1, u2, …, un} with sizes s1, s2, …, sn and values v1, v2, …, vn, respectively, and knapsack capacity C
Output: The maximum value of Σ_{uj ∈ S} vj subject to Σ_{uj ∈ S} sj ≤ C
  for i ← 0 to n do V[i,0] ← 0;
  for j ← 0 to C do V[0,j] ← 0;
  for i ← 1 to n do
    for j ← 1 to C do
      V[i,j] ← V[i-1,j];
      if si ≤ j then
        V[i,j] ← max{V[i,j], V[i-1, j-si] + vi}
    end for
  end for
  return V[n,C];
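The table-filling above can be sketched in Python with 0-based item indices (illustrative, not the book's code):

```python
def knapsack(sizes, values, C):
    """0-1 knapsack: maximum total value of a subset of items whose
    total size does not exceed C, via the table V[i][j]."""
    n = len(sizes)
    V = [[0] * (C + 1) for _ in range(n + 1)]  # V[i][0] = V[0][j] = 0
    for i in range(1, n + 1):
        for j in range(1, C + 1):
            V[i][j] = V[i - 1][j]              # item i not taken
            if sizes[i - 1] <= j:              # item i fits: consider taking it
                V[i][j] = max(V[i][j],
                              V[i - 1][j - sizes[i - 1]] + values[i - 1])
    return V[n][C]
```

For the Q7.22 example, knapsack([3, 5, 7, 8, 9], [4, 6, 7, 9, 10], 22) returns 25.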

Time and Space Complexity of the Knapsack Algorithm Both the time and space complexity are Θ(nC), the size of the table V.