Unit 4: Dynamic Programming


Unit 4: Dynamic Programming chandakmb@gmail.com, hodcs@rknec.edu

Characteristics
Developed by Richard Bellman in the 1950s.
Provides an exact solution with the help of a series of decisions.
Define a small part of the problem and find the optimal solution to that small part.
Enlarge the small part to the next version and again find the optimal solution.
Continue until the whole problem is solved.
Obtain the complete solution by collecting the solutions of the optimal subproblems in a bottom-up manner.

Characteristics
Types of problems and solution strategies:
1. The problem can be solved in stages.
2. Each stage has a number of states.
3. The decision made at a stage transforms the state at that stage into the state for the next stage.
4. Given the current state, the optimal decision for the remaining stages is independent of the decisions made in previous stages (the principle of optimality).
5. There is a recursive relationship between the value of the decision at a stage and the value of the optimal decisions at previous stages.

Characteristics
How the memory requirement is reduced, and how recursion overheads are turned into an advantage: the results of the previous computations (stages n-2 and n-1) are stored and reused in the computation of stage n.
Not very fast, but very accurate.
It belongs to the class of "smart recursion", in which the results and the decisions of earlier recursive computations are used for the next level of computation; hence not only results but also decisions are generated.
Final collection of results: bottom-up approach.
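To make this concrete, here is a minimal bottom-up sketch (illustrative, not from the slides) that computes Fibonacci numbers by storing only the results of the previous two stages instead of recomputing them recursively:

def fib_bottom_up(n):
    # Compute F(n) bottom-up, keeping only the results of stages n-2 and n-1.
    if n <= 1:
        return n
    prev2, prev1 = 0, 1                        # F(0) and F(1)
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev2 + prev1    # reuse the stored results for the next stage
    return prev1

print(fib_bottom_up(10))   # 55; the naive recursion would recompute subproblems exponentially often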

MULTI-STAGE GRAPH

Definition
A multistage graph is a directed, weighted graph whose vertices are partitioned into k >= 2 disjoint stages, such that every edge goes from a vertex in stage i to a vertex in stage i + 1. Stage 1 contains a single source vertex s and stage k contains a single sink vertex t. The multistage graph problem is to find a minimum-cost path from s to t.

Example of Multi-stage Graph

Approaches
Backward approach
Forward approach
The basic logic in both is the direction of movement while building the path: the minimum costs can be accumulated either from the source toward the sink or from the sink back toward the source.

Backward Algorithm
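The details of this slide are not reproduced in the transcript. As a rough sketch only, the following assumes the common convention in which the backward approach accumulates, stage by stage from the source side, the minimum cost of reaching each vertex from the source; the graph representation (lists of vertices per stage and a dictionary of edge costs) and the small example at the end are assumptions for illustration:

def multistage_backward(stages, cost):
    # stages: list of lists of vertices, stages[0] = [source], stages[-1] = [sink].
    # cost[(u, v)]: cost of the edge from u (stage i) to v (stage i + 1).
    source = stages[0][0]
    bcost = {source: 0}      # minimum cost from the source to each vertex
    pred = {}                # predecessor on a best path, kept for path recovery
    for i in range(1, len(stages)):
        for v in stages[i]:
            best = None
            for u in stages[i - 1]:
                if (u, v) in cost and bcost.get(u) is not None:
                    c = bcost[u] + cost[(u, v)]
                    if best is None or c < best:
                        best, pred[v] = c, u
            bcost[v] = best
    path, v = [], stages[-1][0]              # walk the stored decisions back to the source
    while v != source:
        path.append(v)
        v = pred[v]
    path.append(source)
    return bcost[stages[-1][0]], path[::-1]

# Hypothetical 4-stage example (values assumed, not from the slides):
stages = [["s"], ["a", "b"], ["c", "d"], ["t"]]
cost = {("s", "a"): 1, ("s", "b"): 2, ("a", "c"): 2, ("a", "d"): 4,
        ("b", "c"): 6, ("b", "d"): 3, ("c", "t"): 5, ("d", "t"): 2}
print(multistage_backward(stages, cost))     # (7, ['s', 'a', 'd', 't'])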

Sample Calculation

Sample Calculation (continued)

Applications
Shortest path in a sensor network from a source node to a destination node.
Information retrieval systems.
Data acquisition systems.
Railway reservation systems.

Algorithm:

Forward Algorithm
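Again, the slide's details are not in the transcript; as a complementary sketch under the usual convention, the forward approach computes, for each vertex, the minimum cost of going forward from that vertex to the sink, working from the last stage back toward the source (same assumed graph representation as in the backward sketch above):

def multistage_forward(stages, cost):
    # stages: list of lists of vertices per stage; cost[(u, v)]: edge cost from u to v.
    # fcost[v] = minimum cost from v forward to the sink.
    sink = stages[-1][0]
    fcost = {sink: 0}
    nxt = {}                                   # successor on a best path
    for i in range(len(stages) - 2, -1, -1):   # from the second-to-last stage down to the first
        for u in stages[i]:
            best = None
            for v in stages[i + 1]:
                if (u, v) in cost and fcost.get(v) is not None:
                    c = cost[(u, v)] + fcost[v]
                    if best is None or c < best:
                        best, nxt[u] = c, v
            fcost[u] = best
    path, u = [stages[0][0]], stages[0][0]     # follow the stored decisions to the sink
    while u != sink:
        u = nxt[u]
        path.append(u)
    return fcost[stages[0][0]], path

# On the same hypothetical example, both sketches report the same optimum cost of 7.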

Example of Multi-Stage Graph

Travelling Salesman Problem
Principle: find a cycle in a given graph that visits all the vertices exactly once and returns to the source vertex.
Constraints: a vertex or an edge can be visited only once, and the total length of the cycle should be minimum.
Applied to a complete graph (a cost matrix in which only the diagonal elements are zero).

Formulation
Formula used:
g(i, S) = min over j in S of { Cij + g(j, S - {j}) }
In this formula:
"i" represents the starting vertex of the sub-path (initially the source vertex).
"j" represents the next vertex used as an intermediate.
"S" represents the set of vertices other than "i"; if vertex "j" from this set is used as the next intermediate, the vertices that remain are S - {j}.
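A minimal code sketch of this recurrence (assumptions: vertices are 0-indexed with vertex 0 as the source, the cost matrix is a list of lists, and the set S is encoded as a bitmask; the memoized helper g mirrors the formula above):

from functools import lru_cache

def tsp(C):
    # Held-Karp style dynamic program for g(i, S) = min over j in S of Cij + g(j, S - {j}),
    # with the base case g(i, empty set) = Ci0 (return to the source). Vertex 0 is the source.
    n = len(C)

    @lru_cache(maxsize=None)
    def g(i, S):
        if S == 0:                       # no intermediates left: go back to the source
            return C[i][0]
        best = None
        for j in range(n):
            if S & (1 << j):             # vertex j is still available in S
                c = C[i][j] + g(j, S & ~(1 << j))
                if best is None or c < best:
                    best = c
        return best

    full = (1 << n) - 2                  # every vertex except the source (bit 0)
    return g(0, full)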

Example
Cost matrix Cij (only the diagonal elements are zero):
        1    2    3    4
  1     0   10   15   20
  2     5    0    9   10
  3     6   13    0   12
  4     8    8    9    0

Example
The tour will consist of FIVE vertices, in which the FIRST and the LAST vertex are the SAME (the SOURCE, vertex 1).
Step 1: perform the computation with zero intermediates and return to the source, i.e., for the position one place before the SOURCE. The possible vertices at this position are 2, 3 and 4 (here ∅ denotes the empty set of intermediates):
g(2, ∅) = C21 = 5
g(3, ∅) = C31 = 6
g(4, ∅) = C41 = 8

Example: Step 2: one intermediate allowed
Vertex 2 at the position two places before the final vertex:
g(2, {3}) = C23 + g(3, ∅) = 9 + 6 = 15
g(2, {4}) = C24 + g(4, ∅) = 10 + 8 = 18
Vertex 3 at the position two places before the final vertex:
g(3, {2}) = C32 + g(2, ∅) = 13 + 5 = 18
g(3, {4}) = C34 + g(4, ∅) = 12 + 8 = 20
Vertex 4 at the position two places before the final vertex:
g(4, {2}) = C42 + g(2, ∅) = 8 + 5 = 13
g(4, {3}) = C43 + g(3, ∅) = 9 + 6 = 15

Example: Step 3: This step allows two intermediates.
g(2, {3,4}) = min{ C23 + g(3, {4}), C24 + g(4, {3}) } = min{ 9 + 20, 10 + 15 } = 25
g(3, {2,4}) = min{ C32 + g(2, {4}), C34 + g(4, {2}) } = min{ 13 + 18, 12 + 13 } = 25
g(4, {2,3}) = min{ C42 + g(2, {3}), C43 + g(3, {2}) } = min{ 8 + 15, 9 + 18 } = 23
Step 4: The complete tour from the source vertex:
g(1, {2,3,4}) = min{ C12 + g(2, {3,4}), C13 + g(3, {2,4}), C14 + g(4, {2,3}) }
             = min{ 10 + 25, 15 + 25, 20 + 23 } = 35
Final path: 1 → 2 → 4 → 3 → 1
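As a quick cross-check, the tsp sketch given with the formulation reproduces this result on the example cost matrix:

C = [[0, 10, 15, 20],
     [5,  0,  9, 10],
     [6, 13,  0, 12],
     [8,  8,  9,  0]]
print(tsp(C))   # 35, matching Step 4 (tour 1 -> 2 -> 4 -> 3 -> 1)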