
1 Programming for Engineers in Python, Autumn 2011-12, Lecture 12: Dynamic Programming

2 Lecture 11: Highlights
- GUI (based on slides from the course Software1, CS, TAU)
- GUI in Python (based on Chapter 19 of the book "Think Python")
- Swampy, widgets, callbacks
- Event-driven programming
- Displaying an image in a GUI
- Sorting: merge sort, bucket sort

3 Plan
- Fibonacci (overlapping subproblems)
- Evaluating the performance of stock-market traders (optimal substructure)
- Dynamic programming basics
- Maximizing the profits of a shipping company (the knapsack problem)
- A little on the exams and the course grade (if time allows)

4 Remember the Fibonacci Series?
Fibonacci series: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
Definition:
- fib(0) = 0
- fib(1) = 1
- fib(n) = fib(n-1) + fib(n-2)
en.wikipedia.org/wiki/Fibonacci_number
http://www.dorbanot.com/3757 (Fibonacci salad)

5 Recursive Fibonacci Series
Every call with n > 1 invokes two further function calls, and so on...
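The recursive definition translates directly into code. A minimal sketch (the slide's original code is not shown in this transcript, so the function name is mine):

```python
def fib(n):
    # base cases: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    # every call with n > 1 invokes two further calls
    return fib(n - 1) + fib(n - 2)
```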

6 Redundant Calls
The call tree for Fib(5) recomputes the same values: Fib(3) is computed twice, Fib(2) three times.
Fib(5)
├── Fib(4)
│   ├── Fib(3)
│   │   ├── Fib(2)
│   │   │   ├── Fib(1)
│   │   │   └── Fib(0)
│   │   └── Fib(1)
│   └── Fib(2)
│       ├── Fib(1)
│       └── Fib(0)
└── Fib(3)
    ├── Fib(2)
    │   ├── Fib(1)
    │   └── Fib(0)
    └── Fib(1)

7 Redundant Calls: Iterative vs. Recursive Fibonacci
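The iterative version avoids the redundant calls entirely by keeping only the last two values, giving O(n) time and O(1) space. A sketch (the name is mine):

```python
def fib_iter(n):
    # a, b hold fib(i), fib(i+1); slide the window forward n times
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```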

8 Number of Calls to Fibonacci
n    value    number of calls
1    1        1
2    1        3
3    2        5
23   28657    92735
24   46368    150049

9 Demonstration: Iterative vs. Recursive Fibonacci

10 Demonstration: Iterative vs. Recursive Fibonacci (cont.) [shell output omitted]

11 Memoization: Enhancing the Efficiency of Recursive Fibonacci
The problem: the same subproblems are solved many times.
The idea: avoid repeating calculations for previously processed inputs by solving each subproblem once and reusing its solution the next time it is encountered.
How? Store subproblem solutions in a list (or dictionary).
This technique is called memoization: http://en.wikipedia.org/wiki/Memoization

12 Fibonacci with Memoization
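The slide's own code is not preserved in this transcript; a sketch of memoized Fibonacci following the idea above (this version stores solutions in a dictionary rather than a list):

```python
def fib_memo(n, memo=None):
    # memo maps n -> fib(n); each subproblem is solved at most once
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```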

13 Timeit [shell output omitted]
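A sketch of how such a timing comparison can be run with the standard timeit module (the function names and parameters here are mine, not the slide's):

```python
import timeit

def fib_rec(n):
    # plain recursion: exponential number of calls
    if n < 2:
        return n
    return fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n):
    # linear-time iteration
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# time 100 evaluations of each version at n = 20
t_rec = timeit.timeit(lambda: fib_rec(20), number=100)
t_iter = timeit.timeit(lambda: fib_iter(20), number=100)
print(f"recursive: {t_rec:.4f}s, iterative: {t_iter:.4f}s")
```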

14 Fibonacci: Memoization vs. Iterative
Same time complexity, O(n).
The iterative version is about 5 times faster than the memoized one, due to the overhead of the recursive calls.
So why do we need memoization? We shall discuss that later.

15 Overlapping Subproblems
A problem has overlapping subproblems if it can be broken down into subproblems that are reused multiple times.
If divide-and-conquer is applicable, each subproblem solved is brand new.
A naive recursive algorithm may be called an exponential number of times because it solves the same subproblems repeatedly.

16 Evaluating Traders' Performance
How do we evaluate a trader's performance on a given stock (e.g., Teva)? The trader earns $X on that stock: is that good? Mediocre? Bad?
Define a measure of success:
- Maximal possible profit: M$
- Trader's performance: X/M (%)
Defining M: the maximal profit in a given time range. How can it be calculated?

17 Evaluating Traders' Performance
Consider the changes the stock undergoes in the given time range.
M is defined as the profit over the contiguous sub-range of time in which the profit is maximal.
Examples (all numbers are percentages):
- [1, 2, -5, 4, 7, -2] -> [4, 7] -> M = 11%. If X = 6%, the trader's performance is ~54%.
- [1, 5, -3, 4, -2, 1] -> [1, 5, -3, 4] -> M = 7%. If X = 5%, the trader's performance is ~71%.
Let's make it a little more formal...

18 Maximum Subarray Sum
http://en.wikipedia.org/wiki/Maximum_subarray_problem
Input: an array of numbers.
Output: which (contiguous) subarray has the largest sum?
Naive solution ("brute force"): how many subarrays exist for an array of size n?
n + (n-1) + (n-2) + ... + 1 = n(n+1)/2, i.e., O(n^2).
The plan: check each subarray and report the maximal one, giving a time complexity of O(n^2).
We will return both the sum and the corresponding subarray.

19 Naïve Solution ("Brute Force")
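The brute-force code itself is not preserved in this transcript; a sketch of one reasonable version, which builds each subarray sum incrementally so the total work stays O(n^2):

```python
def msum_naive(a):
    # examine every contiguous subarray a[j:k+1]
    best, bounds = 0, (0, 0)   # the empty subarray has sum 0
    for j in range(len(a)):
        s = 0
        for k in range(j, len(a)):
            s += a[k]          # sum of a[j:k+1], extended one element at a time
            if s > best:
                best, bounds = s, (j, k + 1)
    return best, bounds
```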

20 Naïve Solution (shorter code)
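A shorter version in the same spirit (a sketch; the name is mine). Note that sum(a[j:k]) itself takes O(n), so this compact form actually costs O(n^3):

```python
def msum_short(a):
    # max over all (sum, bounds) pairs for contiguous slices,
    # including the empty slice with sum 0
    return max((sum(a[j:k]), (j, k))
               for j in range(len(a) + 1)
               for k in range(j, len(a) + 1))
```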

21 Efficient Solution
The solution for a[i:i] (the empty slice, in Python notation) is 0 for all i.
Assume we know the subarray of a[0:i] with the largest sum.
Can we use this information to find the subarray of a[0:i+1] with the largest sum?
A problem is said to have optimal substructure if the globally optimal solution can be constructed from locally optimal solutions to subproblems.

22 Optimal Substructure
Indices: j <= k < i-1 < i.
s = a[j:k+1] is the optimal subarray of a[0:i], and t = a[j:i] >= 0 (why?).
What is the optimal solution's structure for a[0:i+1]?
- Can it start before j? No!
- Can it start in the range j+1 .. k? No!
- Can it start in the range k+1 .. i-1? No! Otherwise t would have been negative at an earlier stage.

23 Optimal Substructure
Indices: j <= k < i-1 < i; s = a[j:k+1] is the optimal subarray of a[0:i], and t = a[j:i] >= 0 (why?).
What is the optimal solution's structure for a[0:i+1]? Set the new t = t + a[i]:
- If t > s then s = t, and the solution is (j, i+1).
- Otherwise the solution does not change.
- If t < 0 then j is updated to i+1 and t = 0 (for the next iteration).
- Otherwise (0 <= t <= s) change nothing.

24 Example

25 The Code
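The slide's code is not preserved in this transcript; a sketch that follows the update rules of slides 22-23 (s, t, and j are named as in those slides, the rest is mine):

```python
def msum(a):
    s = 0            # best sum found so far
    t = 0            # sum of the current candidate a[j:i]
    j = 0            # start index of the current candidate
    bounds = (0, 0)  # the empty subarray has sum 0
    for i, x in enumerate(a):
        t += x
        if t > s:                 # new best: the solution becomes (j, i+1)
            s, bounds = t, (j, i + 1)
        if t < 0:                 # a negative prefix never starts an optimum
            t, j = 0, i + 1
    return s, bounds
```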

26 Efficiency: O(n)
Each line of the loop body takes constant time, and the loop runs once per element.

27 27 Efficiency – O(n) The "globally optimal" solution corresponds to a subarray with a globally maximal sum, but at each step we only make a decision relative to what we have already seen. At each step we know the best solution thus far, but might change our decision later based on our previous information and the current information. In this sense the problem has optimal substructure. Because we can make decisions locally we only need to traverse the list once.

28 O(n) Versus O(n^2) [shell output omitted]

29 Dynamic Programming (DP)
Dynamic programming is an algorithm design technique for optimization problems.
Like divide and conquer, DP solves problems by combining solutions to subproblems.
Unlike divide and conquer, the subproblems are not independent:
- subproblems may share subsubproblems (overlapping subproblems)
- the solution to one subproblem does not affect the solutions to other subproblems of the same problem (optimal substructure)

30 Dynamic Programming (cont.)
DP reduces computation by:
- solving subproblems in a bottom-up fashion
- storing the solution to a subproblem the first time it is solved
- looking up the solution when the subproblem is encountered again
Key: determine the structure of optimal solutions.

31 Dynamic Programming Characteristics
Overlapping subproblems: the problem can be broken down into subproblems which are reused multiple times.
Examples: factorial does not exhibit overlapping subproblems; Fibonacci does.
Optimal substructure: the globally optimal solution can be constructed from locally optimal solutions to subproblems.
Examples: Fibonacci, msum, knapsack (coming next).

32 Optimizing Shipping Cargo (Knapsack)
A shipping company is trying to sell a residual capacity of 1000 metric tons in a cargo ship to different shippers through an auction.
The company received 100 different offers from potential shippers, each characterized by a tonnage and an offered reward.
The company wishes to select a subset of the offers that fits into its residual capacity so as to maximize the total reward.

34 Formalizing
- Shipping capacity: W = 1000
- Offers from potential shippers: n = 100
- Each offer i has a weight w_i and an offered reward v_i
- Maximize the reward given the W-ton limit
A(n, W) is the maximum value that can be attained from considering the first n items while weighing at most W tons.

35 First Try: Greedy
Sort offers i by the ratio v_i/w_i, then select offers until the ship is full.
Counterexample: W = 10, {(v_i, w_i)} = {(7,7), (4,5), (4,5)}: greedy picks the first offer for a reward of 7, while the two other offers together yield 8.
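The greedy rule and its failure on the counterexample can be checked directly (a sketch; the function name is mine):

```python
def greedy_load(offers, W):
    # offers: list of (value, weight); pick greedily by value/weight ratio
    total_v = total_w = 0
    for v, w in sorted(offers, key=lambda o: o[0] / o[1], reverse=True):
        if total_w + w <= W:
            total_v += v
            total_w += w
    return total_v

# counterexample from the slide: greedy takes (7,7) and stops at reward 7,
# while taking the two (4,5) offers would give 8
print(greedy_load([(7, 7), (4, 5), (4, 5)], 10))
```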

36 Solution
A(i, j) is the maximum value that can be attained from considering the first i items with a weight limit j:
- A(0, j) = A(i, 0) = 0 for all i <= n and j <= W
- If w_i > j then A(i, j) = A(i-1, j)
- If w_i <= j we have two options:
  - do not include item i, so the value will be A(i-1, j)
  - include it, so the value will be v_i + A(i-1, j - w_i)
Which choice should we make? Whichever is larger! The maximum of the two.
Formally: A(i, j) = max(A(i-1, j), v_i + A(i-1, j - w_i)).

37 Optimal Substructure and Overlapping Subproblems
Overlapping subproblems: at any stage (i, j) we might need to calculate A(k, l) for several k < i and l < j.
Optimal substructure: at any point we only need the optimal values of smaller subproblems, not the choices that produced them.

38 Solution (Recursive)
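The recursive solution follows the recurrence of slide 36 directly. A sketch (the slide's own code is not preserved; the 1-based offer i corresponds to w[i-1], v[i-1]):

```python
def knapsack_rec(w, v, i, j):
    # A(i, j): max value from the first i offers with weight limit j
    if i == 0 or j == 0:
        return 0
    if w[i - 1] > j:                       # offer i does not fit
        return knapsack_rec(w, v, i - 1, j)
    return max(knapsack_rec(w, v, i - 1, j),                        # skip it
               v[i - 1] + knapsack_rec(w, v, i - 1, j - w[i - 1]))  # take it
```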

39 Solution (Memoization): The Idea
[an N-by-W table M(N, W) of subproblem solutions, filled in on demand]

40 Solution (Memoization): Code
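The memoized version caches each A(i, j) the first time it is computed. A sketch (the slide's code is not preserved; this version uses a dictionary keyed by (i, j)):

```python
def knapsack_memo(w, v, i, j, memo=None):
    # memo maps (i, j) -> A(i, j); each subproblem is solved at most once
    if memo is None:
        memo = {}
    if i == 0 or j == 0:
        return 0
    if (i, j) not in memo:
        if w[i - 1] > j:                   # offer i does not fit
            memo[(i, j)] = knapsack_memo(w, v, i - 1, j, memo)
        else:
            memo[(i, j)] = max(
                knapsack_memo(w, v, i - 1, j, memo),
                v[i - 1] + knapsack_memo(w, v, i - 1, j - w[i - 1], memo))
    return memo[(i, j)]
```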

41 Solution (Iterative): The Idea (in class)
"Bottom-up": start with solving small subproblems and gradually grow.
[the N-by-W table M(N, W) is filled row by row]
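A bottom-up sketch of the same recurrence (names are mine): the table is filled row by row, so every A(i-1, ·) value already exists when A(i, ·) is computed.

```python
def knapsack_dp(w, v, W):
    n = len(w)
    # A[i][j] = best value from the first i offers with weight limit j
    A = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if w[i - 1] > j:               # offer i does not fit
                A[i][j] = A[i - 1][j]
            else:                          # skip it or take it
                A[i][j] = max(A[i - 1][j],
                              v[i - 1] + A[i - 1][j - w[i - 1]])
    return A[n][W]
```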

42 DP vs. Memoization
Same big-O computational complexity.
If all subproblems must be solved at least once, bottom-up DP is better by a constant factor, since there is no recursion overhead.
If some subproblems may never need to be solved, the memoized algorithm can be more efficient, since it solves only the subproblems that are actually required.

43 Steps in Dynamic Programming
1. Characterize the structure of an optimal solution
2. Define the value of an optimal solution recursively
3. Compute optimal solution values either top-down (memoization) or bottom-up (in a table)
4. Construct an optimal solution from the computed values

44 Why "Knapsack"? Also known as the thief's problem.
http://en.wikipedia.org/wiki/Knapsack_problem

45 Extensions
- NP-completeness: http://en.wikipedia.org/wiki/NP-complete
- Pseudo-polynomial time: http://en.wikipedia.org/wiki/Pseudo-polynomial_time

46 References
- Intro to DP: http://20bits.com/articles/introduction-to-dynamic-programming/
- Practice problems: http://people.csail.mit.edu/bdean/6.046/dp/

