DYNAMIC PROGRAMMING: Matrix Chain Multiplication & Optimal Triangulation
CSC 252 Algorithms
Haniya Aslam

Presentation Overview
Understanding dynamic programming
Dynamic programming vs. recursion and divide & conquer
Matrix chain multiplication
Optimal polygon triangulation
Acknowledgements

Dynamic Programming
A generalization of iteration and recursion. “Dynamic Programming is recursion’s somewhat neglected cousin. … (It) is the basis of comparison and alignment routines.”
Bottom-up design:
Start at the bottom: solve small sub-problems
Store the solutions
Reuse previous results for solving larger sub-problems

Dynamic Programming cont.
Fibonacci, computed bottom-up:

function Fibonacci(n: integer): integer;
var
  i: integer;
  sum, interm1, interm2: integer;
begin
  interm1 := 0;   {F0}
  interm2 := 1;   {F1}
  sum := n;       {covers the base cases n = 0 and n = 1, when the loop never runs}
  for i := 2 to n do
  begin
    sum := interm1 + interm2;   {Fi, built from the two stored previous values}
    interm1 := interm2;
    interm2 := sum
  end; {for}
  Fibonacci := sum
end; {Fibonacci}

Dynamic Programming vs. Recursion and Divide & Conquer
In a recursive program, a problem of size n is solved by first solving a sub-problem of size n-1.
In a divide & conquer program, a problem of size n is solved by first solving a sub-problem of size k and another of size n-k, where 1 ≤ k < n.
In dynamic programming, a problem of size n is solved by first solving all sub-problems of all sizes k, where k < n.
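For contrast, here is a plain recursive Fibonacci (a minimal sketch in the same Pascal style as the earlier slide; it is not part of the original slides). It solves the size-n problem via the size n-1 and n-2 sub-problems but recomputes the same small sub-problems exponentially many times, which is exactly what the bottom-up table version avoids by storing each result once.

function FibonacciRec(n: integer): integer;
begin
  { naive recursion: the same small sub-problems are recomputed over and over }
  if n <= 1 then
    FibonacciRec := n                 {F0 = 0, F1 = 1}
  else
    FibonacciRec := FibonacciRec(n - 1) + FibonacciRec(n - 2)
end;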

Matrix Chain Multiplication
Given: a chain of matrices A1, A2, …, An.
Once the chain has been fully parenthesized, the matrices can be multiplied two at a time using the standard matrix multiplication algorithm as a sub-routine.
A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses.
[Note: since matrix multiplication is associative, all parenthesizations yield the same product.]

Matrix Chain Multiplication cont.
For example, if the chain of matrices is {A, B, C, D}, the product ABCD can be fully parenthesized in 5 distinct ways:
(A (B (C D))), (A ((B C) D)), ((A B) (C D)), ((A (B C)) D), (((A B) C) D).
The way the chain is parenthesized can have a dramatic impact on the cost of evaluating the product.
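A standard counting argument (not on the slides, but consistent with the brute-force bound quoted on the next slide): the number P(n) of distinct parenthesizations of a chain of n matrices satisfies

P(1) = 1
P(n) = sum over k = 1..n-1 of P(k) * P(n-k),  for n >= 2

whose solution is the (n-1)-st Catalan number and grows as Ω(4^n / n^(3/2)). For the 4-matrix chain above, P(4) = 5, matching the five forms listed, so checking every parenthesization quickly becomes infeasible.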

Matrix Chain Multiplication
Optimal Parenthesization
Example: A[30][35], B[35][15], C[15][5]; which order of computing A*B*C uses the fewest scalar multiplications?
A*(B*C) = 30*35*5 + 35*15*5 = 7,875
(A*B)*C = 30*35*15 + 30*15*5 = 18,000
How to optimize:
Brute force – examine every possible parenthesization: Ω(4^n / n^(3/2))
Dynamic programming – Θ(n^3) time and Θ(n^2) space.

Matrix Chain Multiplication
Structure of an Optimal Parenthesization
For n matrices, let Ai..j denote the result of AiAi+1…Aj.
An optimal parenthesization of A1A2…An splits the product between Ak and Ak+1 for some k, where 1 ≤ k < n. Example, k = 4: (A1A2A3A4)(A5A6).
Total cost of A1..6 = cost of A1..4 plus cost of A5..6 plus the cost of multiplying the two resulting matrices together.
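This substructure gives the standard recurrence that the tables on the next slide compute; it is not written out on the slides, and the notation here assumes (as the later slides do) that Ai has dimensions p[i-1] x p[i] and that m[i, j] is the minimum number of scalar multiplications needed to compute Ai..j:

m[i, i] = 0
m[i, j] = min over i <= k < j of ( m[i, k] + m[k+1, j] + p[i-1] * p[k] * p[j] ),  for i < j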

Matrix Chain Multiplication
Overlapping Sub-Problems
Exploiting the overlapping sub-problems reduces the running time considerably:
Create a table M of minimum costs.
Create a table S that records the index k chosen for each optimal sub-problem.
Fill table M in a manner that corresponds to solving the parenthesization problem on matrix chains of increasing length:
Compute the cost for chains of length 1 (this is 0).
Compute the costs for chains of length 2: A1..2, A2..3, A3..4, …, An-1..n
…
Compute the cost for the chain of length n: A1..n
Each level relies on the results for smaller sub-chains. A sketch of this table fill follows below.
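Below is a minimal Pascal sketch of the table fill just described, in the style of the standard matrix-chain-order algorithm. The program, its array names m, s and p, and the hard-coded dimensions are this sketch's own choices rather than anything on the slides; run on the three-matrix example from the earlier slide it reports the minimum cost 7,875.

program MatrixChainOrder;

const
  n = 3;                             { number of matrices in the example }

var
  p: array[0..n] of longint;         { Ai has dimensions p[i-1] x p[i] }
  m: array[1..n, 1..n] of longint;   { m[i,j] = min cost of computing Ai..j }
  s: array[1..n, 1..n] of integer;   { s[i,j] = split index k chosen for Ai..j }
  i, j, k, len: integer;
  q: longint;

begin
  { the three-matrix example from the earlier slide: 30x35, 35x15, 15x5 }
  p[0] := 30; p[1] := 35; p[2] := 15; p[3] := 5;

  for i := 1 to n do
    m[i, i] := 0;                    { chains of length 1 cost nothing }

  for len := 2 to n do               { chains of increasing length }
    for i := 1 to n - len + 1 do
    begin
      j := i + len - 1;
      m[i, j] := High(longint);
      for k := i to j - 1 do         { try every split point }
      begin
        q := m[i, k] + m[k + 1, j] + p[i - 1] * p[k] * p[j];
        if q < m[i, j] then
        begin
          m[i, j] := q;
          s[i, j] := k
        end
      end
    end;

  writeln('minimum scalar multiplications for A1..', n, ': ', m[1, n])   { 7875 here }
end.

The split indices stored in s[i, j] are what a separate routine would use to print the optimal parenthesization itself.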

Optimal Polygon Triangulation
A triangulation of a polygon is a set T of chords of the polygon that divide the polygon into disjoint triangles.
In a triangulation, no chords intersect (except at end-points) and the set T of chords is maximal: every chord not in T intersects some chord in T.
The sides of the triangles produced by the triangulation are either chords of the triangulation or sides of the polygon.
Every triangulation of an n-vertex convex polygon has n-3 chords and divides the polygon into n-2 triangles.

Optimal Polygon Triangulation cont.
In the optimal polygon triangulation problem, we are given a polygon P = {v0, v1, v2, …, vn-1} and a weight function w defined on the triangles formed by the sides and chords of P.
The problem is to find a triangulation that minimizes the sum of the weights of the triangles in the triangulation.
This problem, like matrix chain multiplication, involves parenthesization. A full parenthesization corresponds to a full binary tree, also called a parse tree.

Parenthesization in Triangulation
The slide shows the parse tree for the triangulation of a polygon. The internal nodes of the parse tree are the chords of the triangulation plus the side v0v6, which is the root.

Triangulation and Matrix Chain Multiplication
A fully parenthesized product of n matrices corresponds to a parse tree with n leaves, and therefore also corresponds to a triangulation of an (n+1)-vertex polygon.
Each matrix Ai in the product A1A2…An corresponds to side vi-1vi of the (n+1)-vertex polygon.
Matrix chain multiplication is in fact a special case of the optimal triangulation problem.

Triangulation and Matrix Chain Multiplication cont.
Given a matrix chain product A1A2…An, we define an (n+1)-vertex convex polygon P = {v0, v1, …, vn}.
If matrix Ai has dimensions pi-1 x pi, for i = 1, 2, …, n, the weight function for the triangulation is defined as w(Δvivjvk) = pipjpk.
An optimal triangulation of P with respect to this weight function gives the parse tree for an optimal parenthesization of A1A2…An.

Substructure of an Optimal Triangulation
Given: an optimal triangulation T of an (n+1)-vertex polygon P that includes the triangle Δv0vkvn, where 1 ≤ k ≤ n-1.
The weight of T is the sum of the weight of Δv0vkvn and the weights of the triangles in the triangulations of the two sub-polygons {v0, v1, …, vk} and {vk, vk+1, …, vn}.
The triangulations of the sub-polygons determined by T must therefore be optimal, since a lesser-weight triangulation of either sub-polygon would contradict the minimality of the weight of T.

A Recursive Solution
Let t[i, j], for 1 ≤ i ≤ j ≤ n, be the weight of an optimal triangulation of the polygon {vi-1, vi, …, vj}. (This parallels m[i, j], the minimum cost of computing the matrix chain sub-product AiAi+1…Aj.)
For a degenerate 2-vertex polygon: t[i, i] = 0 for i = 1, 2, …, n.
For i < j, we minimize over all vertices vk, k = i, i+1, …, j-1, the weight of Δvi-1vkvj plus the weights of the optimal triangulations of the polygons {vi-1, vi, …, vk} and {vk, vk+1, …, vj}. The recursive solution then is:
t[i, j] = min over i ≤ k ≤ j-1 of { t[i, k] + t[k+1, j] + w(Δvi-1vkvj) },  if i < j.
A bottom-up sketch of this recurrence follows below.
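Here is a minimal Pascal sketch of this recurrence, filled bottom-up exactly like the matrix-chain table earlier. The names t, TriWeight and MaxN, and the use of the matrix-chain weights p[i-1]*p[k]*p[j] as the example weight function, are this sketch's own assumptions; any other triangle weight w(Δvi-1vkvj) could be substituted in TriWeight without changing the rest of the fill.

program OptimalTriangulation;

const
  MaxN = 50;

var
  p: array[0..MaxN] of longint;           { vertex weights; here the matrix dimensions }
  t: array[1..MaxN, 1..MaxN] of longint;  { t[i,j] = weight of an optimal triangulation of {v(i-1), ..., v(j)} }
  n, i, j, k, len: integer;
  q: longint;

{ weight of triangle v(i-1) v(k) v(j); here the matrix-chain special case }
function TriWeight(i, k, j: integer): longint;
begin
  TriWeight := p[i - 1] * p[k] * p[j]
end;

begin
  n := 3;                                          { 3 matrices -> 4-vertex polygon }
  p[0] := 30; p[1] := 35; p[2] := 15; p[3] := 5;   { dimensions from the earlier example }

  for i := 1 to n do
    t[i, i] := 0;                                  { degenerate 2-vertex "polygons" }

  for len := 2 to n do
    for i := 1 to n - len + 1 do
    begin
      j := i + len - 1;
      t[i, j] := High(longint);
      for k := i to j - 1 do                       { choose the triangle v(i-1) v(k) v(j) }
      begin
        q := t[i, k] + t[k + 1, j] + TriWeight(i, k, j);
        if q < t[i, j] then
          t[i, j] := q
      end
    end;

  writeln('optimal triangulation weight: ', t[1, n])
end.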

Acknowledgments
http://www-cse.uta.edu/~holder/courses/cse5311/lectures/18/node18.html
http://www.middlebury.edu/~dickerso/ccsc/ugcg.html
http://www.eecs.harvard.edu/~nr/cs152/readings/dynamic.html
http://www.catalase.com/dprog.htm
http://mail.informs.org/classes/dynamic/node1.html
http://cse.hanyang.ac.kr/~jmchoi/c…6-2/algorithm/classnote/node6.html
http://people.bu.edu/rlynch/cs566/sld002.htm