1 Programming for Engineers in Python Autumn 2011-12 Lecture 12: Dynamic Programming.


1 Programming for Engineers in Python Autumn 2011-12 Lecture 12: Dynamic Programming

2 Lecture 11: Highlights GUI (Based on slides from the course Software1, CS, TAU) GUI in Python (Based on Chapter 19 from the book “Think Python”) Swampy Widgets Callbacks Event-driven programming Display an image in a GUI Sorting: Merge sort Bucket sort

3 Plan Fibonacci (overlapping subproblems) Evaluating the performance of stock market traders (optimal substructure) Dynamic programming basics Maximizing profits of a shipping company (Knapsack problem) A little on the exams and course’s grade (if time allows)

4 Remember Fibonacci Series? Fibonacci series 0, 1, 1, 2, 3, 5, 8, 13, 21, 34 Definition: fib(0) = 0 fib(1) = 1 fib(n) = fib(n-1) + fib(n-2) en.wikipedia.org/wiki/Fibonacci_number [Image caption: “Fibonacci salad”]

5 Recursive Fibonacci Series Every call with n > 1 invokes 2 function calls, and so on…
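The code on this slide is not preserved in the transcript; a minimal sketch of the naive recursive version described here (the name `fib` is illustrative):

```python
def fib(n):
    """Naive recursive Fibonacci: every call with n > 1 spawns two more calls."""
    if n < 2:  # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)
```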

6 Redundant Calls [Call tree for Fib(5): Fib(5) → Fib(4), Fib(3); Fib(4) → Fib(3), Fib(2); the subtrees for Fib(3), Fib(2), Fib(1), and Fib(0) appear several times, i.e., the same values are computed repeatedly]

7 Redundant Calls Iterative vs. recursive Fibonacci
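For comparison with the recursive version, a minimal sketch of the iterative Fibonacci discussed on this slide (the name `fib_iter` is illustrative):

```python
def fib_iter(n):
    """Iterative Fibonacci: O(n) time, each value computed exactly once."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # slide the window: (fib(k), fib(k+1)) -> (fib(k+1), fib(k+2))
    return a
```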

8 Number of Calls to Fibonacci [Table with columns: n | value | number of calls; the table's entries are not preserved in the transcript]

9 Demonstration: Iterative Versus Recursive Fibonacci

10 Demonstration: Iterative Versus Recursive Fibonacci (cont.) Output (shell):

11 Memoization: Enhancing the Efficiency of Recursive Fibonacci The problem: solving the same subproblems many times The idea: avoid repeating the calculations for previously processed inputs by solving each subproblem once and reusing the solution the next time it is encountered How? Store each subproblem's solution in a list This technique is called memoization

12 Fibonacci with Memoization
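The memoized code is not preserved in the transcript; a sketch using a dictionary as the store (the name `fib_memo` is illustrative):

```python
def fib_memo(n, memo=None):
    """Recursive Fibonacci with memoization: each subproblem is solved once."""
    if memo is None:
        memo = {0: 0, 1: 1}  # base cases pre-stored
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```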

13 Timeit Output (shell):
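The shell output on this slide is not preserved; a hedged sketch of how such a comparison can be run with the standard `timeit` module (absolute numbers vary by machine):

```python
import timeit

def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_memo(n, memo=None):
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

# Time 1000 runs of each on n = 20; exact timings depend on the machine
t_iter = timeit.timeit(lambda: fib_iter(20), number=1000)
t_memo = timeit.timeit(lambda: fib_memo(20), number=1000)
print(t_iter, t_memo)
```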

14 Fibonacci: Memoization vs. Iterative Same time complexity, O(n) The iterative version is ~5 times faster than memoization due to the overhead of the recursive calls So why do we need memoization? We shall discuss that later

15 Overlapping Subproblems A problem has overlapping subproblems if it can be broken down into subproblems which are reused multiple times If divide-and-conquer is applicable, each subproblem solved is brand new A naive recursive algorithm may be invoked an exponential number of times because it solves the same subproblems repeatedly

16 Evaluating Traders’ Performance How to evaluate a trader’s performance on a given stock (e.g., Teva)? The trader earns $X on that stock; is it good? Mediocre? Bad? Define a measure of success: Maximal possible profit M$ Trader’s performance: X/M (%) Define M: the maximal profit in a given time range How can it be calculated?

17 Evaluating Traders’ Performance Consider the changes the stock undergoes in the given time range M is defined as the profit over a contiguous sub-range of time in which the profit is maximal Examples (all numbers are percentages): [1,2,-5,4,7,-2] → [4,7] → M = 11% If X = 6% → trader’s performance is ~54% [1,5,-3,4,-2,1] → [1,5,-3,4] → M = 7% If X = 5% → trader’s performance is ~71% Let’s make it a little more formal…

18 Maximum Subarray Sum Input: an array of numbers Output: which (contiguous) subarray has the largest sum? Naïve solution (“brute force”): How many subarrays exist for an array of size n? n + (n-1) + (n-2) + … → O(n²) The plan: check each (with a running sum) and report the maximal Time complexity: O(n²) We will return both the sum and the corresponding subarray

19 Naïve Solution (“Brute Force”)

20 Naïve Solution (shorter code)
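Neither version of the naïve code survives in the transcript; a brute-force sketch that keeps a running sum per starting index, O(n²) in total (the name `msum_naive` is illustrative):

```python
def msum_naive(a):
    """Brute force: try every contiguous subarray, return (best sum, subarray)."""
    best, best_range = 0, (0, 0)  # the empty subarray has sum 0
    for i in range(len(a)):
        s = 0
        for j in range(i, len(a)):
            s += a[j]  # running sum of a[i:j+1]
            if s > best:
                best, best_range = s, (i, j + 1)
    return best, a[best_range[0]:best_range[1]]
```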

21 Efficient Solution The solution for a[i:i] for all i is 0 (Python notation) Let’s assume we know the subarray of a[0:i] with the largest sum Can we use this information to find the subarray of a[0:i+1] with the largest sum? A problem is said to have optimal substructure if the globally optimal solution can be constructed from locally optimal solutions to subproblems

22 Optimal Substructure [Diagram: indices j ≤ k < i-1 < i along the array] s = a[j:k+1] is the optimal subarray of a[0:i] t = sum of a[j:i] >= 0 (why?) What is the optimal solution’s structure for a[0:i+1]? Can it start before j? No! Can it start in the range j+1 … k? No! Can it start in the range k+1 … i-1? No! Otherwise t would have been negative at an earlier stage

23 Optimal Substructure [Diagram: indices j ≤ k < i-1 < i along the array] s = a[j:k+1] is the optimal subarray of a[0:i] t = sum of a[j:i] >= 0 (why?) What is the optimal solution’s structure for a[0:i+1]? Set the new t = t + a[i] If t > s then s = t and the solution becomes (j, i+1) Otherwise the solution does not change If t < 0 then j is updated to i+1 and t = 0 (for the next iteration) Otherwise (0 <= t <= s) change nothing

24 Example

25 The Code
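The slide's code is not in the transcript; a sketch that follows the s/t/j update rules from the previous slides (this is Kadane's algorithm; the name `msum` is illustrative):

```python
def msum(a):
    """O(n) maximum subarray sum, returning (best sum, subarray)."""
    s, t = 0, 0      # s: best sum so far, t: sum of the current candidate a[j:i]
    j = 0            # start index of the current candidate subarray
    best = (0, 0)    # (start, end) of the best subarray found so far
    for i in range(len(a)):
        t += a[i]
        if t > s:                 # the candidate beats the best so far
            s, best = t, (j, i + 1)
        if t < 0:                 # a negative prefix can never help: restart
            t, j = 0, i + 1
    return s, a[best[0]:best[1]]
```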

26 Efficiency – O(n) [Annotated code slide: each operation inside the loop takes constant time]

27 Efficiency – O(n) The "globally optimal" solution corresponds to a subarray with a globally maximal sum, but at each step we only make a decision relative to what we have already seen. At each step we know the best solution thus far, but might change our decision later based on our previous information and the current information. In this sense the problem has optimal substructure. Because we can make decisions locally we only need to traverse the list once.

28 O(n) Versus O(n²) Output (shell):

Dynamic Programming (DP) Dynamic programming is an algorithm design technique for optimization problems Like divide and conquer, DP solves problems by combining solutions to subproblems Unlike divide and conquer, the subproblems are not independent: Subproblems may share subsubproblems (overlapping subproblems) The solution to one subproblem does not affect the solutions to other subproblems of the same problem (optimal substructure)

Dynamic Programming (cont.) DP reduces computation by Solving subproblems in a bottom-up fashion Storing the solution to a subproblem the first time it is solved Looking up the solution when the subproblem is encountered again Key: determine the structure of optimal solutions

31 Dynamic Programming Characteristics Overlapping subproblems Can be broken down into subproblems which are reused multiple times Examples: Factorial does not exhibit overlapping subproblems Fibonacci does Optimal substructure Globally optimal solution can be constructed from locally optimal solutions to subproblems Examples: Fibonacci, msum, Knapsack (coming next)

32 Optimizing Shipping Cargo (Knapsack) A shipping company is trying to sell a residual capacity of 1000 metric tons in a cargo ship to different shippers by an auction The company received 100 different offers from potential shippers, each characterized by tonnage and offered reward The company wishes to select a subset of the offers that fits into its residual capacity so as to maximize the total reward

33 Optimizing Shipping Cargo (Knapsack) The company wishes to select a subset of the offers that fits into its residual capacity so as to maximize the total reward

34 Formalizing Shipping capacity W = 1000 Offers from potential shippers n = 100 Each offer i has a weight w_i and an offered reward v_i Maximize the reward given the W tonnage limit A(n,W) – the maximum value that can be attained from considering the first n items weighing at most W tons

35 First Try – Greedy Sort offers i by the v_i/w_i ratio Select offers until the ship is full Counterexample: W = 10, {(v_i,w_i)} = {(7,7),(4,5),(4,5)} – greedy takes the (7,7) offer for a reward of 7, while taking the two (4,5) offers yields 8

36 Solution A(i,j) – the maximum value that can be attained from considering the first i items with a weight limit of j: A(0,j) = A(i,0) = 0 for all i ≤ n and j ≤ W If w_i > j then A(i,j) = A(i-1,j) If w_i ≤ j we have two options: Do not include it, so the value will be A(i-1,j) If we do include it, the value will be v_i + A(i-1,j-w_i) Which choice should we make? Whichever is larger! the maximum of the two Formally: A(i,j) = max(A(i-1,j), v_i + A(i-1,j-w_i))

37 Optimal Substructure and Overlapping Subproblems Overlapping subproblems: at any stage (i,j) we might need to calculate A(k,l) for several k < i and l < j. Optimal substructure: at any point we only need information about the choices we have already made.

38 Solution (Recursive)
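The recursive code is not preserved in the transcript; a sketch of the A(i,j) recurrence from the previous slide (the name `knap_rec` and the argument order are illustrative):

```python
def knap_rec(i, j, w, v):
    """A(i, j): max reward from the first i offers with weight limit j."""
    if i == 0 or j == 0:
        return 0
    if w[i - 1] > j:                         # offer i does not fit
        return knap_rec(i - 1, j, w, v)
    return max(knap_rec(i - 1, j, w, v),                            # skip offer i
               v[i - 1] + knap_rec(i - 1, j - w[i - 1], w, v))      # take offer i
```

On the greedy counterexample (W = 10, values [7,4,4], weights [7,5,5]) this returns 8, the true optimum.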

39 Solution (Memoization) – The Idea [Diagram: an N × W table M(N,W), one cell per subproblem]

40 Solution (Memoization) - Code
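The memoized code is likewise missing from the transcript; a sketch that caches A(i,j) in a dictionary the first time it is computed (the name `knap_memo` is illustrative):

```python
def knap_memo(i, j, w, v, memo=None):
    """Memoized knapsack: each A(i, j) is computed at most once."""
    if memo is None:
        memo = {}
    if i == 0 or j == 0:
        return 0
    if (i, j) not in memo:
        if w[i - 1] > j:                     # offer i does not fit
            memo[(i, j)] = knap_memo(i - 1, j, w, v, memo)
        else:                                # max of skipping / taking offer i
            memo[(i, j)] = max(
                knap_memo(i - 1, j, w, v, memo),
                v[i - 1] + knap_memo(i - 1, j - w[i - 1], w, v, memo))
    return memo[(i, j)]
```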

41 Solution (Iterative) – The Idea (In Class) [Diagram: an N × W table M(N,W), filled cell by cell] “Bottom-up”: start with solving small problems and gradually grow
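The iterative code was developed in class and is not in the transcript; a possible bottom-up sketch that fills the table row by row (the name `knap_iter` is illustrative):

```python
def knap_iter(W, w, v):
    """Bottom-up knapsack: fill an (n+1) x (W+1) table of A(i, j) values."""
    n = len(w)
    A = [[0] * (W + 1) for _ in range(n + 1)]  # row 0 / column 0 stay 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if w[i - 1] > j:                   # offer i does not fit
                A[i][j] = A[i - 1][j]
            else:                              # max of skipping / taking offer i
                A[i][j] = max(A[i - 1][j],
                              v[i - 1] + A[i - 1][j - w[i - 1]])
    return A[n][W]
```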

42 DP vs. Memoization Same big-O computational complexity If all subproblems must be solved at least once, bottom-up DP is better by a constant factor, since there is no recursion overhead If some subproblems may not need to be solved, the memoized algorithm may be more efficient, since it only solves the subproblems that are actually required

Steps in Dynamic Programming 1. Characterize structure of an optimal solution 2. Define value of optimal solution recursively 3. Compute optimal solution values either top-down (memoization) or bottom-up (in a table) 4. Construct an optimal solution from computed values

44 Why “Knapsack”? In Hebrew the problem is known as “the thief’s problem”

45 Extensions NP-completeness Pseudo-polynomial running time (the O(nW) table size depends on the magnitude of W, not its length in bits)

46 References Intro to DP: Practice problems: