Introduction to Algorithms


Introduction to Algorithms Chapter 15: Dynamic Programming

Dynamic Programming Well-known algorithm design techniques: brute-force (iterative) algorithms, and divide-and-conquer algorithms. Another strategy for designing algorithms is dynamic programming, used when the problem breaks down into recurring small subproblems. Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions; each solution has a value, and we wish to find a solution with the optimal value.

Divide-and-conquer The divide-and-conquer method for algorithm design: Divide: if the input size is too large to deal with in a straightforward manner, divide the problem into two or more disjoint subproblems. Conquer: solve the subproblems recursively. Combine: take the solutions to the subproblems and “merge” them into a solution for the original problem.

Divide-and-conquer - Example

Dynamic programming Dynamic programming is a way of improving on inefficient divide-and-conquer algorithms. By “inefficient”, we mean that the same recursive call is made over and over. If the same subproblem is solved several times, we can use a table to store the result of a subproblem the first time it is computed, and thus never have to recompute it. Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. “Programming” here refers to a tabular method, not to writing computer code.

Difference between DP and Divide-and-Conquer Using divide-and-conquer to solve these problems is inefficient because the same common subproblems have to be solved many times. DP solves each subproblem only once and stores its answer in a table for future use.

Elements of Dynamic Programming (DP) DP is used to solve problems with the following characteristics: Simple subproblems: we should be able to break the original problem into smaller subproblems that have the same structure. Optimal substructure: the optimal solution to the problem contains within it optimal solutions to its subproblems. Overlapping subproblems: the same subproblem is solved more than once.

Steps to Designing a Dynamic Programming Algorithm 1. Characterize the structure of an optimal solution. 2. Recursively define the value of an optimal solution. 3. Compute the value bottom-up. 4. (If needed) Construct an optimal solution.

Fibonacci Numbers F(n) = F(n-1) + F(n-2) for n ≥ 2, with F(0) = 0 and F(1) = 1: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, … The straightforward recursive procedure is slow! Let’s draw the recursion tree.
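The slowness is easy to observe in code. Below is a minimal sketch of the straightforward recursive procedure; the global call counter is added here purely for illustration and is not part of the algorithm:

```python
calls = 0

def fib(n):
    """Naive recursion: the same subproblems are recomputed over and over."""
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55
print(calls)     # 177 calls just for n = 10; the count grows exponentially in n
```

Each subtree of the recursion tree recomputes values its sibling has already computed, which is exactly the redundancy dynamic programming removes.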

Fibonacci Numbers

Fibonacci Numbers We can calculate F(n) in linear time by remembering solutions to already-solved subproblems (dynamic programming). Compute the solution in a bottom-up fashion. In this case, only two values need to be remembered at any time.
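A minimal bottom-up sketch of this idea: each Fibonacci value is computed exactly once, and only the last two values are kept at any time:

```python
def fib(n):
    """Bottom-up DP: linear time, constant extra space."""
    if n < 2:
        return n
    prev, curr = 0, 1              # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print([fib(i) for i in range(11)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Because no subproblem is ever revisited, even large inputs such as fib(50) finish instantly, in contrast to the naive recursion.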

Ex1: Assembly-line scheduling An automobile factory has two assembly lines. Each line has the same number n of stations, numbered j = 1, 2, ..., n. We denote the jth station on line i (where i is 1 or 2) by Si,j. The jth station on line 1 (S1,j) performs the same function as the jth station on line 2 (S2,j). The time required at each station varies, even between stations at the same position on the two different lines, because each assembly line uses different technology. The time required at station Si,j is ai,j. There is also an entry time ei for the chassis to enter assembly line i and an exit time xi for the completed auto to exit assembly line i.

Ex1: Assembly-line scheduling After going through station Si,j, the chassis can either stay on the same line, so the next station is Si,j+1 and there is no transfer cost, or transfer to the other line, so the next station is S3-i,j+1 and the transfer cost from Si,j to S3-i,j+1 is ti,j (j = 1, ..., n-1). There is no ti,n, because assembly is complete after Si,n.

Ex1: Assembly-line scheduling (The time between adjacent stations on a line is nearly 0.)

Problem Definition Problem: given all these costs, which stations should be chosen from line 1 and from line 2 to minimize the total time for car assembly? “Brute force” is to try all possibilities, which requires examining Ω(2^n) of them; trying all 2^n subsets of stations is infeasible when n is large. Simple example: 2 stations gives 2^n = 4 possibilities.

Step 1: Optimal Solution Structure Optimal substructure: choosing the best path to Si,j. The structure of the fastest way through the factory (from the starting point): the fastest possible way to get through Si,1 (i = 1, 2) is the only way, from the starting point to Si,1, and the time taken is the entry time ei plus the station time ai,1.

Step 1: Optimal Solution Structure The fastest possible way to get through Si,j (i = 1, 2; j = 2, 3, ..., n) involves two choices: Stay on the same line: Si,j-1 → Si,j; the time is Ti,j-1 + ai,j, where Ti,j-1 denotes the fastest time through Si,j-1. If the fastest way through Si,j is through Si,j-1, it must have used a fastest way through Si,j-1. Transfer from the other line: S3-i,j-1 → Si,j; the time is T3-i,j-1 + t3-i,j-1 + ai,j, and the same argument applies.

Step 1: Optimal Solution Structure An optimal solution to the problem (finding the fastest way to get through Si,j) contains within it an optimal solution to subproblems (finding the fastest way to get through either Si,j-1 or S3-i,j-1). The fastest way from the starting point to Si,j is either: the fastest way from the starting point to Si,j-1 and then directly from Si,j-1 to Si,j, or the fastest way from the starting point to S3-i,j-1, then a transfer from line 3-i to line i, and finally to Si,j. This is optimal substructure.

Example

Step 2: Recursive Solution Define the value of an optimal solution recursively in terms of the optimal solutions to subproblems. The subproblem here is finding the fastest way through station j on both lines (i = 1, 2). Let fi[j] be the fastest possible time to go from the starting point through Si,j. The fastest time to go all the way through the factory is then f* = min(f1[n] + x1, f2[n] + x2), where x1 and x2 are the exit times from lines 1 and 2, respectively.

Step 2: Recursive Solution The fastest time to go through Si,j: for the first station, f1[1] = e1 + a1,1 and f2[1] = e2 + a2,1, where e1 and e2 are the entry times for lines 1 and 2. For j ≥ 2, f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j) and f2[j] = min(f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j).

Step 2: Recursive Solution To help us keep track of how to construct an optimal solution, define li[j]: the line number whose station j-1 is used in a fastest way through Si,j (i = 1, 2; j = 2, 3, ..., n). We avoid defining li[1] because no station precedes station 1 on either line. We also define l*: the line whose station n is used in a fastest way through the entire factory.

Step 2: Recursive Solution Using the values of l* and li[j] shown in Figure (b) on the next slide, we can trace a fastest way through the factory shown in part (a) as follows. The fastest total time comes from choosing stations: Line 1: 1, 3, and 6; Line 2: 2, 4, and 5.

Step 2: Recursive Solution

Step 3: Optimal Solution Value


Step 3: Optimal Solution Value Solving the recurrence by straightforward top-down recursion takes Ω(2^n) time (exponential), as the substitution method shows. Computing the fi[j] values bottom-up, in order of increasing station number j, takes only linear time.


Step 4: Optimal Solution Constructing the fastest way through the factory: starting from l*, follow the li[j] values backwards to recover the chosen station at each position.
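The four steps can be combined into one sketch of the assembly-line algorithm. The arrays are 0-indexed; the sample costs at the bottom are assumed for illustration (they reproduce the trace described in the slides, with line 1 using stations 1, 3, 6 and line 2 using stations 2, 4, 5):

```python
def fastest_way(a, t, e, x):
    """Assembly-line scheduling DP.
    a[i][j]: time at station j on line i; t[i][j]: transfer cost after
    station j when leaving line i; e, x: entry and exit times per line.
    Returns (fastest total time, chosen stations as 1-based (line, station))."""
    n = len(a[0])
    f = [[0] * n for _ in range(2)]          # f[i][j]: fastest time through S(i,j)
    l = [[0] * n for _ in range(2)]          # l[i][j]: line used at station j-1

    for i in range(2):                       # base case: first station
        f[i][0] = e[i] + a[i][0]

    for j in range(1, n):                    # bottom-up over stations
        for i in range(2):
            stay = f[i][j - 1] + a[i][j]
            switch = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]
            if stay <= switch:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = switch, 1 - i

    # f* = min over exit lines
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        fstar, line = f[0][n - 1] + x[0], 0
    else:
        fstar, line = f[1][n - 1] + x[1], 1

    # Step 4: trace the l values backwards from l*
    path = [(line + 1, n)]
    for j in range(n - 1, 0, -1):
        line = l[line][j]
        path.append((line + 1, j))
    return fstar, path[::-1]

# Sample instance (costs assumed for illustration)
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
e, x = [2, 4], [3, 2]
print(fastest_way(a, t, e, x))
# (38, [(1, 1), (2, 2), (1, 3), (2, 4), (2, 5), (1, 6)])
```

The double loop runs in Θ(n) time and the trace-back in Θ(n), in contrast to the Ω(2^n) brute-force enumeration of paths.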