CSE 960 Spring 2005
Outline
- Logistical details
- Presentation evaluations
- Scheduling
- Approximation Algorithms
- Search space view of mathematical concepts
- Dynamic programming

Course Overview
We will look at a collection of mathematical ideas and how they influence current research in algorithms:
- Mathematical programming
- Game theory
- Applications to scheduling

Other Points of Emphasis
- Presentation skills: we will develop metrics for evaluating the presentations to be given later
- Group work: I hope to make class periods interactive, with you working in informal groups to make sure everyone understands the material being presented

Caveats
- We will not do an in-depth examination of any one idea; instead, we will get enough exposure to the concepts that you are prepared to learn more on your own
- I am NOT an expert in these areas and am teaching them because I need to learn them better
- The emphasis on scheduling applications reflects where my research interests lie

Grading
- Presentations: 50%
  - 1 group presentation of a textbook chapter
  - 1 individual presentation of a scientific paper
- Homework: 25%
  - Short assignments, each question worth 1 point
- Class participation/group work: 25%

How should we evaluate a presentation?
Group discussion of presentation evaluations:
- Create a "safe" environment for all to participate
- Stay on task
- Assign a recorder role
- Present results to the class

Scheduling
The problem of assigning jobs to machines to minimize some objective function.
- Three-parameter notation: machine environment | job characteristics | objective
- Machine environments: 1, P, Q, R
- Job characteristics: preemption, release dates, weights, values, deadlines
- Objectives: makespan, average completion time, average flow time, etc.

Example Problem
P2 | | C_max (minimize the maximum completion time max_j C_j)
- 2 identical machines, no preemption, no release dates; the goal is to minimize the maximum completion time of any job
- Example input: jobs with lengths 1, 1, 2
- What might be an obvious greedy algorithm for this problem?
- Argue why this problem is NP-hard.

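One natural answer to the greedy question above can be sketched in Python (function name is my own): assign each job, in the order given, to the currently least-loaded machine. On the example input 1, 1, 2 this yields makespan 3, while sorting jobs longest-first before assigning recovers the optimum of 2 — a useful discussion point.

```python
def greedy_makespan(jobs, m=2):
    """Assign each job, in order, to the currently least-loaded machine.

    Returns the resulting makespan (maximum machine load)."""
    loads = [0] * m
    for p in jobs:
        i = loads.index(min(loads))  # least-loaded machine
        loads[i] += p
    return max(loads)

# In the given order: makespan 3 (jobs 1 and 2 share a machine).
print(greedy_makespan([1, 1, 2]))                        # 3
# Longest-processing-time-first order: makespan 2, the optimum here.
print(greedy_makespan(sorted([1, 1, 2], reverse=True)))  # 2
```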
Approximation Algorithms
Many problems we study will be NP-hard. In such cases, we want polynomial-time approximation algorithms:
- A(I)/OPT(I) <= c for some constant c (minimization objective)
- The algorithm runs in time polynomial in n, the problem size
Approximation algorithm for makespan scheduling?

PTAS
Even better, we would like polynomial-time approximation schemes (PTAS):
- A(I, ε)/OPT(I) <= 1 + ε for any ε > 0 (minimization objective)
- The running time is polynomial in n, the problem size, but may be exponential in 1/ε
Better still is a running time that is also polynomial in 1/ε:
- Such schemes are often not very practical, but they are theoretically desirable
PTAS for makespan scheduling?

Searching for an Optimal Solution
All of our topics may be viewed as searching a space of solutions for an optimal solution.
- Dynamic programming: discrete search space; recursive solution structure
- Mathematical programming: discrete/continuous search space; constraint-based solution structure
- Game theory: discrete/mixed search space; multiple selfish players

Dynamic Programming

Mathematical Programming
[figure: search space plotted on X and Y axes]

Game Theory
[figure: game tree — Player A moves, then Player B moves]

Overview
The key idea behind dynamic programming is that it is a divide-and-conquer technique at heart: we solve larger problems by patching together solutions to smaller problems. However, dynamic programming is typically faster because we compute these solutions in a bottom-up fashion.

Fibonacci Numbers
F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1
- Top-down recursive computation is very inefficient: many F(i) values are computed multiple times
- Bottom-up computation is much more efficient
  - Compute F(2), then F(3), then F(4), etc., using the stored values of smaller F(i) to compute the next value
  - Each F(i) value is computed just once

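The recomputation problem can also be fixed top-down by caching results (memoization). A minimal Python sketch, using the standard library cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down Fibonacci; the cache ensures each F(i) is computed once."""
    if n < 2:
        return n  # F(0) = 0, F(1) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(6))   # 8
print(fib(10))  # 55
```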
Recursive Computation
F(n) = F(n-1) + F(n-2); F(0) = 0, F(1) = 1
Recursive solution: F(6) = 8
[figure: recursion tree for F(6), showing subproblems such as F(2), F(3), and F(4) recomputed many times]

Bottom-up Computation
We can calculate F(n) in linear time by storing the smaller values:

  F[0] = 0
  F[1] = 1
  for i = 2 to n
      F[i] = F[i-1] + F[i-2]
  return F[n]

Moral: we can sometimes trade space for time.

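The bottom-up loop above can be tightened further: each step needs only the previous two values, so the whole table can be replaced by two variables, cutting the space from O(n) to O(1). A Python sketch:

```python
def fib_iter(n):
    """Bottom-up Fibonacci keeping only the last two values."""
    a, b = 0, 1  # a = F(0), b = F(1)
    for _ in range(n):
        a, b = b, a + b  # slide the window forward one step
    return a

print(fib_iter(6))  # 8
```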
Key Implementation Steps
- Identify subsolutions that may be useful in computing the whole solution
  - Often we need to introduce parameters
- Develop a recurrence relation (recursive solution)
- Set up the table of values/costs to be computed
  - The dimensionality is typically determined by the number of parameters
  - The number of values should be polynomial
- Determine the order in which to compute the values
- Backtrack through the table to obtain the complete solution (not just the solution's value)

Example: Matrix Multiplication
Input:
- A list of n matrices to be multiplied together using traditional matrix multiplication
- The dimensions of the matrices are sufficient as input
Task:
- Compute the optimal ordering of the multiplications to minimize the total number of scalar multiplications performed
Observations:
- Multiplying an X x Y matrix by a Y x Z matrix takes X*Y*Z scalar multiplications
- Matrix multiplication is associative but not commutative

Example Input
Input: M1, M2, M3, M4
- M1: 13 x 5
- M2: 5 x 89
- M3: 89 x 3
- M4: 3 x 34
Feasible solutions and their values:
- ((M1 M2) M3) M4: 10,582 scalar multiplications
- (M1 M2) (M3 M4): 54,201 scalar multiplications
- (M1 (M2 M3)) M4: 2,856 scalar multiplications
- M1 ((M2 M3) M4): 4,055 scalar multiplications
- M1 (M2 (M3 M4)): 26,418 scalar multiplications

Identify Subsolutions
Often we need to introduce parameters:
- Define the dimensions to be (d_0, d_1, ..., d_n), where matrix M_i has dimensions d_{i-1} x d_i
- Let M(i,j) be the matrix formed by multiplying matrices M_i through M_j
- Define C(i,j) to be the minimum cost of computing M(i,j)

Develop a Recurrence Relation
Definitions:
- M(i,j): the product of matrices M_i through M_j
- C(i,j): the minimum cost of computing M(i,j)
Recurrence relation for C(i,j):
- C(i,i) = ???
- C(i,j) = ???
We want to express C(i,j) in terms of "smaller" C terms.

Set Up the Table of Values
- The dimensionality is typically determined by the number of parameters
- The number of values should be polynomial

  C | 1  2  3  4
  --+-----------
  1 | 0
  2 |    0
  3 |       0
  4 |          0

Order of Computation of Values
Many orders are typically OK; we just need to obey some dependency constraints.
What are valid orders for this table? (The numbers 1-6 below show one candidate order.)

  C | 1  2  3  4
  --+-----------
  1 | 0  1  2  3
  2 |    0  4  5
  3 |       0  6
  4 |          0

Representing the Optimal Solution
P(i,j) records the intermediate multiplication k used to compute M(i,j); that is, P(i,j) = k if the last multiplication was M(i,k) M(k+1,j).

  C | 1     2     3     4
  --+---------------------
  1 | 0  5785  1530  2856
  2 |     0   1335  1845
  3 |           0   9078
  4 |                 0

  P | 1  2  3  4
  --+-----------
  1 | 0  1  1  3
  2 |    0  2  3
  3 |       0  3
  4 |          0

Pseudocode

  int MatrixOrder()
      for all i, j: C[i,j] = 0
      for j = 2 to n
          for i = j-1 downto 1
              C[i,j] = min over i <= k <= j-1 of (C[i,k] + C[k+1,j] + d[i-1]*d[k]*d[j])
              P[i,j] = the k achieving this minimum
      return C[1,n]

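A runnable Python version of this pseudocode (function names `matrix_chain` and `show_order` are my own), including the backtracking step through the P table. On the example dimensions 13x5, 5x89, 89x3, 3x34 it returns the cost 2856 and the ordering (M1 (M2 M3)) M4 from the earlier slide.

```python
def matrix_chain(dims):
    """dims[i-1] x dims[i] are the dimensions of matrix i (1-indexed).

    Returns tables C (minimum scalar-multiplication costs) and
    P (optimal split points), both 1-indexed."""
    n = len(dims) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]
    P = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            C[i][j] = float('inf')
            for k in range(i, j):             # last multiplication splits at k
                cost = C[i][k] + C[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < C[i][j]:
                    C[i][j] = cost
                    P[i][j] = k
    return C, P

def show_order(P, i, j):
    """Backtrack through P to print the optimal parenthesization."""
    if i == j:
        return f"M{i}"
    k = P[i][j]
    return "(" + show_order(P, i, k) + " " + show_order(P, k + 1, j) + ")"

C, P = matrix_chain([13, 5, 89, 3, 34])
print(C[1][4])             # 2856
print(show_order(P, 1, 4)) # ((M1 (M2 M3)) M4)
```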
Backtracking

  Procedure ShowOrder(i, j)
      if i = j
          write("M_i")
      else
          k = P[i, j]
          write("(")
          ShowOrder(i, k)
          write(" ")
          ShowOrder(k+1, j)
          write(")")

Principle of Optimality
In the book, this is termed "optimal substructure": an optimal solution contains within it optimal solutions to subproblems.
In more detail:
- Suppose solution S is optimal for problem P.
- Suppose we decompose P into subproblems P_1 through P_k, and that S can be decomposed into pieces S_1 through S_k corresponding to those subproblems.
- Then each solution S_i is an optimal solution for subproblem P_i.

Outline
- Logistical details
- Presentation evaluations
- Scheduling
- Approximation Algorithms
- Search space view of mathematical concepts
- Dynamic programming
  - Extra notes on dynamic programming

Example 1
Matrix multiplication:
- In our solution for computing matrix M(1,n), the final step multiplies matrices M(1,k) and M(k+1,n).
- Our subproblems are then to compute M(1,k) and M(k+1,n).
- Our solution uses optimal solutions for computing M(1,k) and M(k+1,n) as part of the overall solution.

Example 2
Shortest path problem:
- Suppose a shortest path from s to t visits u.
- We can decompose the path into s-u and u-t pieces.
- The s-u piece must be a shortest path from s to u, and the u-t piece must be a shortest path from u to t.
Conclusion: dynamic programming can be used for computing shortest paths.

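This substructure is exactly what classical shortest-path algorithms exploit. As one concrete instance, the Floyd-Warshall algorithm computes all-pairs shortest paths by dynamic programming over the set of intermediate vertices allowed on a path. A Python sketch:

```python
def floyd_warshall(dist):
    """All-pairs shortest paths by DP.

    dist: adjacency matrix; dist[i][j] is the edge weight i -> j,
    float('inf') if there is no edge, 0 on the diagonal."""
    n = len(dist)
    d = [row[:] for row in dist]  # copy; d[i][j] evolves as k grows
    for k in range(n):            # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float('inf')
d = floyd_warshall([[0, 1, 5],
                    [INF, 0, 2],
                    [INF, INF, 0]])
print(d[0][2])  # 3: the path 0 -> 1 -> 2 beats the direct edge of cost 5
```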
Example 3
Longest path problem:
- Suppose a longest path from s to t visits u.
- We can decompose the path into s-u and u-t pieces.
- Is it true that the s-u piece must be a longest path from s to u?
Conclusion?

Example 4: The Traveling Salesman Problem
- What recurrence relation will return the optimal solution to the traveling salesman problem?
- If T(i) is the optimal tour on the first i points, will this help us solve larger instances of the problem?
- Can we set T(i+1) to be T(i) with the additional point inserted in the position that results in the shortest path?

No!
[figure: a point set where inserting the new point into T(4) does not produce the shortest tour T(5)]

Summary of Bad Examples
- There is almost always a way to obtain optimal substructure if you expand your subproblems enough.
- For longest path and TSP, the number of subproblems grows to exponential size.
- This is not useful, as we do not want to compute an exponential number of solutions.

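For TSP, expanding the subproblems to (set of visited cities, last city) pairs gives the Held-Karp dynamic program: O(2^n * n^2) time, which is exponential, exactly as warned above, but still far better than brute-forcing all n! tours. A Python sketch:

```python
from itertools import combinations

def held_karp(dist):
    """Exact minimum TSP tour cost via DP on (visited-set, last-city) pairs.

    dist: full distance matrix; the tour starts and ends at city 0."""
    n = len(dist)
    # best[(S, j)]: cheapest path from 0 visiting exactly the set S
    # (a bitmask over cities 1..n-1) and ending at city j.
    best = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)  # S without the final city j
                best[(S, j)] = min(best[(prev, k)] + dist[k][j]
                                   for k in subset if k != j)
    full = (1 << n) - 2  # every city except 0
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# 4 cities at the corners of a unit square (diagonals cost 2):
# the optimal tour walks the perimeter for total cost 4.
print(held_karp([[0, 1, 2, 1],
                 [1, 0, 1, 2],
                 [2, 1, 0, 1],
                 [1, 2, 1, 0]]))  # 4
```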
When is dynamic programming effective?
Dynamic programming works best on objects that are linearly ordered and cannot be rearranged:
- characters in a string
- files in a filing cabinet
- points around the boundary of a polygon
- the left-to-right order of leaves in a search tree
Whenever your objects are ordered in a left-to-right way, dynamic programming should be considered.

Efficient Top-Down Implementation
We can implement any dynamic programming solution top-down by storing computed values in the table (memoization):
- If all values need to be computed anyway, bottom-up is more efficient
- If some values do not need to be computed, top-down may be faster

Trading Post Problem
Input:
- n trading posts on a river
- R(i,j) is the cost for renting at post i and returning at post j, for i < j (you cannot paddle upstream, so i < j)
Task:
- Output the minimum-cost route from trading post 1 to trading post n

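One standard DP formulation of this problem (a sketch; the function name and the dict-of-dicts representation of R are my assumptions): let cost(j) be the cheapest way to reach post j; then cost(1) = 0 and cost(j) = min over i < j of cost(i) + R(i,j).

```python
def min_rental_cost(R, n):
    """Minimum cost to travel from post 1 to post n.

    R[i][j]: cost for renting at post i and returning at post j (i < j)."""
    cost = [float('inf')] * (n + 1)  # 1-indexed; cost[j] = cheapest way to j
    cost[1] = 0
    for j in range(2, n + 1):
        cost[j] = min(cost[i] + R[i][j] for i in range(1, j))
    return cost[n]

# Hypothetical rental costs for 3 posts: going 1 -> 2 -> 3 (3 + 3 = 6)
# beats renting straight through from 1 to 3 (cost 7).
R = {1: {2: 3, 3: 7}, 2: {3: 3}}
print(min_rental_cost(R, 3))  # 6
```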
Longest Common Subsequence Problem
Given two strings S and T, a common subsequence is a subsequence that appears in both S and T. The longest common subsequence problem is to find a longest common subsequence (LCS) of S and T.
- Subsequence: characters need not be contiguous
- This is different from a substring
Can you use dynamic programming to solve the longest common subsequence problem?

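The answer is yes: the classic recurrence compares the last characters of the two prefixes, filling an O(mn) table. A Python sketch (function name is my own):

```python
def lcs_length(S, T):
    """Length of the longest common subsequence of strings S and T.

    L[i][j] holds the LCS length of the prefixes S[:i] and T[:j]."""
    m, n = len(S), len(T)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if S[i - 1] == T[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1   # matching last characters
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])  # drop one character
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (one LCS is "BCBA")
```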
Longest Increasing Subsequence Problem
Input: a sequence of n numbers x_1, x_2, ..., x_n.
Task: find the longest increasing subsequence of the numbers.
- Subsequence: the numbers need not be contiguous
Can you use dynamic programming to solve the longest increasing subsequence problem?

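Yes: let best(i) be the length of the longest increasing subsequence ending at x_i; then best(i) = 1 + the maximum best(j) over j < i with x_j < x_i. An O(n^2) Python sketch (function name is my own):

```python
def lis_length(xs):
    """Length of the longest (strictly) increasing subsequence of xs."""
    if not xs:
        return 0
    best = [1] * len(xs)  # best[i]: longest increasing subsequence ending at xs[i]
    for i in range(len(xs)):
        for j in range(i):
            if xs[j] < xs[i]:  # xs[i] can extend a subsequence ending at xs[j]
                best[i] = max(best[i], best[j] + 1)
    return max(best)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4 (e.g. 1, 4, 5, 9)
```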
Book Stacking Problem
Input:
- n books with heights h_i and thicknesses t_i
- shelf length L
Task:
- Find an assignment of books to shelves minimizing the sum of the heights of the tallest book on each shelf
- Books must be stored in order to conform to the catalog system (i.e., books on the first shelf are 1 through i, books on the second shelf are i+1 through k, etc.)

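Because the books must stay in catalog order, this is a linear partition problem, and a DP over prefixes works (a sketch; the formulation and names are my own): let cost(j) be the minimum total height for shelving books 1..j, and try every feasible choice of last shelf holding books i..j.

```python
def min_shelf_height(heights, thick, L):
    """Minimum sum of tallest-book heights over shelves.

    Books (1-indexed) must be placed in order; a shelf holding books
    i..j must satisfy t_i + ... + t_j <= L."""
    n = len(heights)
    cost = [float('inf')] * (n + 1)  # cost[j]: best for books 1..j
    cost[0] = 0
    for j in range(1, n + 1):
        width = 0
        tallest = 0
        for i in range(j, 0, -1):    # try books i..j as the last shelf
            width += thick[i - 1]
            if width > L:
                break                # shelf overflows; no wider last shelf fits
            tallest = max(tallest, heights[i - 1])
            cost[j] = min(cost[j], cost[i - 1] + tallest)
    return cost[n]

# Hypothetical instance: shelf holds width 2, so the three unit-width
# books split across two shelves; either split costs 10 + 20 = 30.
print(min_shelf_height([10, 20, 10], [1, 1, 1], 2))  # 30
```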