Richard Anderson Lecture 17 Dynamic Programming


CSE 421 Algorithms Richard Anderson Lecture 17 Dynamic Programming

Optimal linear interpolation
Error = Σ_i (y_i - a·x_i - b)^2

Determine set of K lines to minimize error

Opt_k[j]: minimum error approximating p_1…p_j with k segments
Express Opt_k[j] in terms of Opt_{k-1}[1], …, Opt_{k-1}[j]:
Opt_k[j] = min_i { Opt_{k-1}[i] + E_{i,j} }

Optimal sub-solution property
An optimal solution with k segments extends an optimal solution with k-1 segments on a smaller problem

Optimal multi-segment interpolation
Compute Opt[k, j] for 0 < k < j < n:

for j := 1 to n
    Opt[1, j] := E_{1,j}
for k := 2 to n-1
    for j := 2 to n
        t := E_{1,j}
        for i := 1 to j-1
            t := min(t, Opt[k-1, i] + E_{i,j})
        Opt[k, j] := t
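To make the pseudocode concrete, here is a minimal Python sketch, a rendering of the recurrence rather than the slides' own code. The helper name `segment_error`, the representation of `points` as a list of (x, y) pairs, and the 1-indexed point ranges are assumptions made here; `segment_error(points, i, j)` plays the role of E_{i,j}.

```python
def segment_error(points, i, j):
    """E_{i,j}: least-squares error of one line fit to points p_i..p_j (1-indexed)."""
    xs = [p[0] for p in points[i - 1:j]]
    ys = [p[1] for p in points[i - 1:j]]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    denom = n * sxx - sx * sx
    a = 0.0 if denom == 0 else (n * sxy - sx * sy) / denom  # slope (flat if x's coincide)
    b = (sy - a * sx) / n                                   # intercept
    return sum((y - a * x - b) ** 2 for x, y in zip(xs, ys))

def multi_segment(points, k_max):
    """Opt[k][j]: minimum error approximating p_1..p_j with k segments."""
    n = len(points)
    # Precompute E_{i,j} for all 1 <= i <= j <= n.
    E = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            E[i][j] = segment_error(points, i, j)
    opt = [[float("inf")] * (n + 1) for _ in range(k_max + 1)]
    for j in range(1, n + 1):
        opt[1][j] = E[1][j]
    for k in range(2, k_max + 1):
        for j in range(1, n + 1):
            # Either one segment covers p_1..p_j, or there is a break after some point i.
            opt[k][j] = min([E[1][j]] +
                            [opt[k - 1][i] + E[i][j] for i in range(1, j)])
    return opt[k_max][n]
```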

Determining the solution
When Opt[k, j] is computed, record the value of i that minimized the sum
Store this value in an auxiliary array and use it to reconstruct the solution

Variable number of segments
Segments not specified in advance
Penalty function associated with segments
Cost = Interpolation error + C × (# segments)

Penalty cost measure
Opt[j] = min(E_{1,j}, min_i(Opt[i] + E_{i,j})) + P
where P is the per-segment penalty (the constant C above)
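Under the same assumptions, and reusing the hypothetical `segment_error` helper from the sketch above, the penalized variant needs only a one-dimensional table, because the number of segments is no longer fixed in advance:

```python
def penalized_segmentation(points, P):
    """Opt[j] = min(E_{1,j}, min_i(Opt[i] + E_{i,j})) + P, with per-segment penalty P."""
    n = len(points)
    E = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            E[i][j] = segment_error(points, i, j)  # from the sketch above
    opt = [0.0] * (n + 1)
    for j in range(1, n + 1):
        # Last segment covers p_1..p_j outright, or starts after some earlier point i.
        opt[j] = min([E[1][j]] + [opt[i] + E[i][j] for i in range(1, j)]) + P
    return opt[n]  # minimum total cost: interpolation error plus P per segment
```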

Subset Sum Problem
Let w_1, …, w_n = {6, 8, 9, 11, 13, 16, 18, 24}
Find a subset that has as large a sum as possible, without exceeding 50

Adding a variable for weight
Opt[j, K]: the largest sum of a subset of {w_1, …, w_j} that is at most K
With weights {2, 4, 7, 10}:
Opt[2, 7] = 6    Opt[3, 7] = 7    Opt[3, 12] = 11    Opt[4, 12] = 12

Subset Sum Recurrence
Opt[j, K]: the largest sum of a subset of {w_1, …, w_j} that is at most K
Opt[j, K] = max(Opt[j - 1, K], Opt[j - 1, K - w_j] + w_j)
(the second case applies only when w_j ≤ K)

Subset Sum Grid
Opt[j, K] = max(Opt[j - 1, K], Opt[j - 1, K - w_j] + w_j)
[Grid: rows j = 1…4 for weights {2, 4, 7, 10}; columns K, filled row by row with the recurrence]

Subset Sum Code
Opt[j, K] = max(Opt[j - 1, K], Opt[j - 1, K - w_j] + w_j)
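The slide's code itself did not survive in this transcript, so the following Python sketch is a plausible reconstruction from the recurrence, not the original. The guard `w <= k` keeps the second case from indexing below zero:

```python
def subset_sum(weights, K):
    """opt[j][k]: largest sum of a subset of the first j weights that is <= k."""
    n = len(weights)
    opt = [[0] * (K + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        w = weights[j - 1]
        for k in range(K + 1):
            opt[j][k] = opt[j - 1][k]              # case 1: skip item j
            if w <= k:                             # case 2: take item j if it fits
                opt[j][k] = max(opt[j][k], opt[j - 1][k - w] + w)
    return opt[n][K]

# Instance from the earlier slide:
# subset_sum([6, 8, 9, 11, 13, 16, 18, 24], 50) == 50  (e.g., 8 + 18 + 24)
```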

Knapsack Problem
Items have weights and values; the problem is to maximize total value subject to a bound on weight
Items {I_1, I_2, …, I_n}
Weights {w_1, w_2, …, w_n}
Values {v_1, v_2, …, v_n}
Bound K
Find a set S of indices to maximize Σ_{i∈S} v_i such that Σ_{i∈S} w_i ≤ K

Knapsack Recurrence
Subset Sum recurrence: Opt[j, K] = max(Opt[j - 1, K], Opt[j - 1, K - w_j] + w_j)
Knapsack recurrence: Opt[j, K] = max(Opt[j - 1, K], Opt[j - 1, K - w_j] + v_j)

Knapsack Grid
Opt[j, K] = max(Opt[j - 1, K], Opt[j - 1, K - w_j] + v_j)
Weights {2, 4, 7, 10}; Values {3, 5, 9, 16}
[Grid: rows j = 1…4; columns K, filled row by row with the recurrence]
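A matching Python sketch of the knapsack recurrence, again a reconstruction rather than the slides' code; it differs from subset sum only in adding the value v_j instead of the weight w_j. The bound K = 12 in the usage comment is an assumed value for illustration:

```python
def knapsack(weights, values, K):
    """opt[j][k]: maximum value of a subset of the first j items with weight <= k."""
    n = len(weights)
    opt = [[0] * (K + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        w, v = weights[j - 1], values[j - 1]
        for k in range(K + 1):
            opt[j][k] = opt[j - 1][k]              # skip item j
            if w <= k:                             # take item j: gain v, spend w
                opt[j][k] = max(opt[j][k], opt[j - 1][k - w] + v)
    return opt[n][K]

# Grid instance: weights {2, 4, 7, 10}, values {3, 5, 9, 16}
# knapsack([2, 4, 7, 10], [3, 5, 9, 16], 12) == 19  (take the items of weight 2 and 10)
```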