2 Graphs - Shortest Paths 15-211 Fundamental Data Structures and Algorithms Margaret Reid-Miller 24 March 2005

3 Announcements
Today:
- DAG single-source shortest paths
- All-pairs shortest paths
- Network flow
- Greedy algorithms
Reading: Sedgewick, Chapter 21

4 Single Source Shortest Path on a DAG

5 Single Source Shortest Paths
Dijkstra: digraphs with non-negative edge costs; O(n log n + e).
Bellman-Ford: digraphs that may have negative edge costs; detects negative-cost cycles; O(ne).
If a digraph is a directed acyclic graph (DAG), can we do better than Dijkstra?

6 DAG SS Shortest Paths
Use topological sort to order the vertices from left to right.
If we examine edges in topological order, then we relax the edges on each path in forward order; a vertex's outgoing edges are never relaxed before all edges from its ancestors have been relaxed.
In particular, when we relax an edge (u,v), we already have the shortest path to u.

7 DAG SS Shortest Paths
The algorithm is remarkably simple.

    topologically sort the vertices of G
    initialize dist[]
    foreach vertex u in topological order
        for all (u,v) in E
            relax(u,v)

It runs in O(n + e). Why?
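A minimal Python sketch of this algorithm, not from the slides; it assumes the digraph is given as adjacency lists graph[u] = [(v, cost), ...] with every vertex present as a key:

    from math import inf

    def dag_shortest_paths(graph, source):
        """Single-source shortest paths on a DAG: topologically sort,
        then relax every outgoing edge of each vertex in that order."""
        # Topological sort via DFS post-order.
        order, seen = [], set()
        def dfs(u):
            seen.add(u)
            for v, _ in graph[u]:
                if v not in seen:
                    dfs(v)
            order.append(u)
        for u in graph:
            if u not in seen:
                dfs(u)
        order.reverse()

        dist = {u: inf for u in graph}
        dist[source] = 0
        for u in order:                  # vertices in topological order
            for v, cost in graph[u]:     # relax each edge (u, v)
                if dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
        return dist

Each vertex and each edge is handled a constant number of times, which is where the O(n + e) bound comes from.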

8 The All Pairs Shortest Path Algorithm (Floyd’s Algorithm)

9 Finding all pairs shortest paths
- Given: a digraph G = (V,E) with cost(x,y) for each edge.
- Problem: for every pair of vertices (v,w), find the distance of the shortest path from v to w.
- Since the output is n^2 in size, we can use an adjacency-matrix representation of the graph.
- That is, find the table of shortest-path distances, like the ones you sometimes see on maps.

10 All-pairs Shortest Paths
A[i][j] = dist(i,j) if there is an edge (i,j)
A[i][j] = infinity (inf) if there is no edge (i,j)
[Figure: a six-vertex digraph (vertices 0-5) with edge weights, shown beside its adjacency matrix.]

11 All Pair Shortest Paths
- There are two good ways to compute all-pairs shortest paths using dynamic programming; each is based on a different way of breaking the problem into subproblems.
- Approach 1: as with Bellman-Ford, compute the shortest path distances that use i or fewer edges.
- Approach 2: compute the shortest path distances that use only the vertices numbered {0,…,k}.

12 1. Algorithm
- Subproblem: use paths of at most k edges.
- Let A[i,j] = dist[i,j] for all i,j with i ≠ j; if (i,j) is not an edge, set A[i,j] = infinity. Set A[i,i] = 0 for all i.
- Base case: the initial matrix gives the distance of the shortest path that uses at most one edge (or infinity if there is no such path).

13 1. Matrix Multiply Algorithm
- Next compute the distances of the shortest paths that use at most 2 edges.
- It turns out we want to find A^2, where multiplication is defined as min of sums instead of sum of products.
- That is, (A^2)[i,j] = min{ A[i,k] + A[k,j] | k = 1,…,n }.
- Why is this the shortest path distance from i to j using <= 2 edges?
- Exercise: how would you change A so that A^2 is the shortest path distance from i to j of length exactly two?
- What is the running time to find A^2?

14 1. Matrix Multiply Algorithm
- Using A^2 you can find A^4, then A^8, and so on.
- By combining these matrices with min-plus matrix multiplication, we can find A^n.
- Therefore, to find A^n we need log n matrix multiplications, each taking O(n^3).
- So this algorithm is O(n^3 log n). We will consider another algorithm next. (A Python sketch of this approach follows.)
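As a sketch (assuming, as above, an n x n matrix with 0 on the diagonal and inf for missing edges), the min-plus product and the repeated squaring it enables might look like this:

    from math import inf

    def min_plus(A, B):
        """(A*B)[i][j] = min over k of A[i][k] + B[k][j]."""
        n = len(A)
        return [[min(A[i][k] + B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    def all_pairs_by_squaring(A):
        """Repeatedly square: after t squarings the matrix holds the
        shortest distances using at most 2^t edges, so 2^t >= n-1
        suffices. The 0 diagonal makes 'at most' work out."""
        n = len(A)
        D, length = A, 1
        while length < n - 1:
            D = min_plus(D, D)
            length *= 2
        return D

Each call to min_plus is the O(n^3) "multiplication", and the loop runs O(log n) times, matching the O(n^3 log n) bound above.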

15 2. Floyd-Warshall Algorithm
- Subproblem: define the matrix A_k[i,j] to hold the distances of paths between i and j that use only vertices in {1,2,…,k}.
- Then A_0[i,j] = A[i,j]
- A_k[i,j] = min( A_{k-1}[i,j], A_{k-1}[i,k] + A_{k-1}[k,j] )
- Why?

16 Floyd-Warshall Algorithm
- Let p be the shortest path from i to j using only vertices in {1, …, k}.
- If vertex k is not on p, then the shortest path using vertices {1, …, k-1} is also the shortest path using vertices {1, …, k}.
- If vertex k is on p, then break p into p1 (i to k) and p2 (k to j), where p1 and p2 are shortest paths using only vertices {1, …, k-1}.
- That is, either we use k or we don't:
  A_k[i,j] = min( A_{k-1}[i,j], A_{k-1}[i,k] + A_{k-1}[k,j] )

17 Floyd-Warshall Implementation
- Can be implemented using only one matrix.

    initialize all A[i,j] = dist[i,j];
    initialize all A[i,i] = 0;
    for( k = 0; k < n; k++ )
        for( i = 0; i < n; i++ )
            for( j = 0; j < n; j++ )
                if ( A[i,j] > A[i,k] + A[k,j] )
                    A[i,j] = A[i,k] + A[k,j];

- The complexity of this algorithm is O(n^3).
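A runnable Python version of the same triple loop, as a sketch; the only addition is a guard that skips rows with no path through k, which in a fixed-width language also keeps an "infinity" sentinel from overflowing:

    from math import inf

    def floyd_warshall(dist):
        """All-pairs shortest paths, in place. `dist` is an n x n
        list-of-lists: edge costs, inf where there is no edge,
        and 0 on the diagonal."""
        n = len(dist)
        for k in range(n):
            for i in range(n):
                if dist[i][k] == inf:   # nothing routes from i through k
                    continue
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist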

18 Questions
- To find all-pairs shortest paths, would you ever use Dijkstra? If so, when?
- So far we have found the distances of the shortest paths. How do we represent the paths themselves?

19 Representing Shortest Paths?
- To recover the actual shortest paths, we compute a predecessor matrix P.
- Initialization:
    P[i,j] = nil, if i = j or dist[i,j] = inf
    P[i,j] = i, otherwise
- Update:
    if ( A[i,j] > A[i,k] + A[k,j] ) {
        A[i,j] = A[i,k] + A[k,j];
        P[i,j] = P[k,j];
    }
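A small sketch of how the path itself can be read back out of P afterwards (assuming the representation above, with None standing in for nil):

    def extract_path(P, i, j):
        """Walk predecessors back from j to i; [] if j is unreachable."""
        if i == j:
            return [i]
        if P[i][j] is None:
            return []
        out = [j]
        while j != i:
            j = P[i][j]     # predecessor of j on the i-to-j path
            out.append(j)
        return out[::-1]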

20 Negative cycles?
- How do we detect whether the graph has negative-cost cycles?
- After the loops complete, check whether A[i,i] < 0 for any i.

    for( k = 0; k < n; k++ )
        for( i = 0; i < n; i++ )
            for( j = 0; j < n; j++ )
                if ( A[i,j] > A[i,k] + A[k,j] ) {
                    A[i,j] = A[i,k] + A[k,j];
                    P[i,j] = P[k,j];
                }
    for( i = 0; i < n; i++ )
        if ( A[i,i] < 0 )
            return false;

21 Application: Closure
Given a binary relation S on {1,2,...,n}, the transitive reflexive closure trc(S) is the least relation R such that
- x S y implies x R y
- x R x for all x
- x R y and y R z implies x R z
If we model the relation by a graph, trc(S) has an edge (x,y) if the graph has a path from x to y.
We can compute trc(S) by repeated calls to DFS (or BFS), one from each vertex. This is a good solution if the graph is sparse.

22 Transitive Reflexive Closure
But when S is dense, one might as well bite the bullet and use a cubic (in n) algorithm with good constants.
Run Floyd-Warshall with cost
    A[i,j] = 1, if (i,j) in E
    A[i,j] = 0, otherwise.
Return
    T[i,j] = 1 if A[i,j] > 0
    T[i,j] = 0 otherwise.
It is more efficient to use Boolean operators, as on the next slide.

23 Warshall's Algorithm

    for( k = 0; k < n; k++ )
        for( i = 0; i < n; i++ )
            for( j = 0; j < n; j++ )
                A[i,j] = A[i,j] || ( A[i,k] && A[k,j] );

Upon completion, A is the adjacency matrix for the transitive reflexive closure of S (first set A[i,i] = 1 for all i to get reflexivity).
What is the space complexity of this method?
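One possible reading of "use Boolean operators", as a sketch (the bitset representation is an assumption, not from the slides): store each row of A as a Python integer used as a bitset, so an entire row is combined with a single OR instead of an inner j loop:

    def warshall_bitset(adj_rows):
        """adj_rows[i] is an int whose j-th bit is 1 iff edge (i, j).
        Returns the rows of the transitive reflexive closure."""
        n = len(adj_rows)
        rows = list(adj_rows)
        for i in range(n):
            rows[i] |= 1 << i           # reflexive closure
        for k in range(n):
            for i in range(n):
                if rows[i] >> k & 1:    # i reaches k ...
                    rows[i] |= rows[k]  # ... so i reaches all k reaches
        return rows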

24 Network Flow Problems

25 Network flow problems
- An important application of graphs is to model flows (of items or information) from one point to another.
- A directed graph G = (V,E), with each edge e ∈ E having a capacity C_e.
- The problem: compute the maximum possible flow from a given source vertex to a given target vertex.
- Edge capacities must not be violated.

26 A simple example
[Figure: a flow network with source s, sink t, and internal vertices a, b, c, d; each edge is labeled with a small integer capacity.]

27 Network flow applications
Network flow problems have many applications:
- bandwidth in large networks
- traffic capacities in road networks
- flow in electrical circuits or water pipes

28 A possible greedy algorithm
- Start with the initial graph G.
- We incrementally build the flow graph G_f; G_f shows the maximum flow attained so far on each edge.
- At the same time, build up the residual graph G_r; G_r shows, for each edge, the remaining capacity.

29 Greedy algorithm, cont'd
- Initialize:
  - G_f: all edges have 0 flow
  - G_r: initialize to G
- At each stage, find a path in G_r from s to t.
- The minimum edge capacity on that path is the amount of flow that can be added to every edge on the path.
  - Adjust G_f and G_r accordingly.
- Remove saturated edges from G_r.

30 Greedy example, step 0
[Figure: G unchanged; G_f with 0 flow on every edge; G_r equal to G.]

31 Greedy example
At each stage, we will choose a path from s to t arbitrarily.

32 Greedy example, step 1
[Figure: the first chosen path carries 2 units; G_f records the flow and G_r shows the reduced capacities.]

33 Greedy example, step 2
[Figure: a second path carries 2 more units; G_f and G_r updated accordingly.]

34 Greedy example, step 3
[Figure: a third path carries 1 more unit; G_f and G_r updated accordingly.]
Maximum flow of 5 units into t.

35 Greedy doesn't work!
- Although we managed to compute the correct maximum flow, in general this greedy algorithm does not work.
- Suppose, for example, we had started out differently…

36 Greedy failure
[Figure: after pushing 3 units along a single s-t path, the residual graph has no remaining s-t path.]
The algorithm terminates with maxflow = 3.

37 Fixing the algorithm
- To fix this problem, we need a way to "undo" greedy choices.
- How: whenever we add flow x to an edge (v,w) in G_f, we also add a new edge (w,v) with capacity x to G_r.
- This allows the flow along that path to be undone later, if necessary.

38 Correct algorithm, step 1
[Figure: 3 units pushed as before, but G_r now also contains reverse edges of capacity 3 that can undo this flow.]

39 Correct algorithm, step 2
[Figure: a second augmenting path uses a reverse edge to reroute part of the earlier flow; G_f and G_r updated.]
Maximum flow of 5 units into t.
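Putting the pieces together, here is a Python sketch of the corrected algorithm (Ford-Fulkerson; using BFS to find augmenting paths makes it Edmonds-Karp). The input format cap[u][v] and all names are illustrative assumptions, not from the slides:

    from collections import defaultdict, deque
    from math import inf

    def max_flow(cap, s, t):
        """`cap[u][v]` is the capacity of edge (u, v). Residual
        capacities live in one map: pushing x along (u, v) adds
        x units of 'undo' capacity to (v, u)."""
        residual = defaultdict(lambda: defaultdict(int))
        for u in cap:
            for v, c in cap[u].items():
                residual[u][v] += c
        flow = 0
        while True:
            # BFS for an s-t path with leftover capacity.
            parent = {s: None}
            queue = deque([s])
            while queue and t not in parent:
                u = queue.popleft()
                for v, c in residual[u].items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if t not in parent:
                return flow            # no augmenting path remains
            # Bottleneck along the path, then update residuals.
            x, v = inf, t
            while parent[v] is not None:
                u = parent[v]
                x = min(x, residual[u][v])
                v = u
            v = t
            while parent[v] is not None:
                u = parent[v]
                residual[u][v] -= x
                residual[v][u] += x    # reverse edge allows undoing
                v = u
            flow += x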

40 Running time
- If all capacities are integers and the max flow is f, then at most f stages are needed to compute the max flow.
- Each stage requires finding an augmenting path, which takes O(|E|).
- So the total running time is O(f|E|).

41 The worst case
[Figure: source s, sink t, and vertices a, b; edges s→a, s→b, a→t, b→t each have capacity n, and an edge between a and b has capacity 1.]
What could happen when we run the algorithm? How might this be avoided?

42 Greed is Good

43 Example 1: Counting change
Suppose we want to give out change using the minimal number of bills and coins.

44 A change-counting algorithm
An easy algorithm for giving out N cents in change:
- Choose the largest bill or coin that is <= N.
- Subtract the value of the chosen bill/coin from N, to get a new value of N.
- Repeat until a total of N cents has been counted.
Does this work? I.e., does this really give out the minimal number of coins and bills? (A sketch of the rule appears below.)
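A sketch of this greedy rule in Python; the denomination list is an assumption standing in for US coins:

    def greedy_change(n_cents, denoms=(25, 10, 5, 1)):
        """Repeatedly take the largest denomination <= what remains.
        `denoms` is assumed sorted largest-first."""
        used = []
        for d in denoms:
            while n_cents >= d:
                n_cents -= d
                used.append(d)
        return used

For example, greedy_change(63) returns [25, 25, 10, 1, 1, 1].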

45 Our simple algorithm
- For US currency, this simple algorithm actually works.
- Why do we call this a greedy algorithm?

46 Greedy algorithms
At every step, a greedy algorithm:
- makes a locally optimal decision,
- with the idea that in the end it all adds up to a globally optimal solution.
Being optimistic like this usually leads to very simple algorithms (i.e., easy to code).

47 Change counting is greedy
- Makes a locally optimal decision: uses the next-largest bill or coin. Once a coin is accepted, it is permanently included in the solution; once a coin is rejected, it is permanently excluded.
- Aims to reach a globally optimal solution. Can you prove it does for US currency?

48 But…
What happens if we have a 12-cent coin? For example, to make 15 cents, greedy picks 12+1+1+1 (four coins), while 10+5 uses only two.


51 Example 2: Fractional knapsack problem (FKP)
You rob a store and find n kinds of items: gold dust, wheat, beer.
The total inventory of the i-th kind of item:
- Weight: w_i pounds
- Value: v_i dollars
The knapsack can hold a maximum of W pounds.
Q: How much of each kind of item should you take? (You can take fractional weights.)

54 FKP: solution
Greedy solution 1:
- Get a bigger knapsack!
- Build up extra muscles if necessary.
But seriously folks…
Greedy solution 2:
- Fill the knapsack with the "most valuable" item until all of it is taken.
- Most valuable = v_i / w_i (dollars per pound).
- Then the next "most valuable" item, etc.
- Until the knapsack is full.
(A sketch of solution 2 appears below.)
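A sketch of greedy solution 2 in Python, assuming the inventory is given as (value, weight) totals per kind of item:

    def fractional_knapsack(items, W):
        """items: list of (total value, total weight) per kind.
        Take items in decreasing $/lb order, whole while they fit,
        then a fraction of the next item to fill remaining capacity."""
        total = 0.0
        for v, w in sorted(items, key=lambda vw: vw[0] / vw[1],
                           reverse=True):
            if W <= 0:
                break
            take = min(w, W)          # all of it, or what still fits
            total += v * (take / w)   # value proportional to weight
            W -= take
        return total

For instance, fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) returns 240.0: all of the first two items plus 20 of the 30 pounds of the third.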

55 Ingredients of a greedy algorithm
1. An optimization problem: of the many feasible solutions, find the minimum or maximum solution.
2. Can only proceed in stages: no direct solution is available.
3. Greedy-choice property: a locally optimal (greedy) choice will lead to a globally optimal solution.
4. Optimal substructure: an optimal solution contains within it optimal solutions to subproblems.
We show the proofs for FKP next.

56 FKP is greedy
- An optimization problem: maximize the value of the loot, subject to maximum weight W (constrained optimization).
- Proceeds in stages: the knapsack is filled with one item at a time.

57 FKP is greedy
Greedy-choice property: a locally greedy choice will lead to a globally optimal solution.
Proof:
- Step 1: prove that an optimal solution contains the greedy choice.
- Step 2: prove that the greedy choice can always be made first.

58 FKP: Greedy-choice proof: step 1
We want to show that an optimal solution always contains the greedy choice.
- Consider the total value, V, of the knapsack.
- Suppose item h is the item with the highest $/lb.
- The knapsack must contain item h: if h were not included, we could replace some other item in the knapsack with an equivalent weight of h and increase V.
- This can continue until the knapsack is full or all of h is taken.
Therefore any optimal solution must include the greedy choice.

59 More rigorously…
For item i, let w_i be the total inventory, v_i the total value, and k_i the weight in the knapsack. Let item h be the item with the highest $/lb.
Assume the total optimal value is V = sum over i of k_i (v_i / w_i).
If k_h < w_h and k_j > 0 for some j ≠ h, then replace j with an equal weight of h. Let the new total value be V'. The difference in total value is
    V' - V = k_j (v_h / w_h - v_j / w_j) >= 0,
since, by the definition of h, v_h / w_h >= v_j / w_j.
Therefore all of item h should be taken.

60 FKP: Greedy-choice proof: step 2
Next, we want to show that we can always make the greedy choice first.
- If there is more of item h than the knapsack can hold, then fill the knapsack completely with h. No other item gives a higher total value.
- Otherwise, the knapsack contains all of h and some other item(s). We can always make h the first choice, without changing the total value V.
In either case the greedy choice can always be made FIRST.

61 More rigorously…
- Case I: w_h >= W. Fill the knapsack completely with h; no other item gives a higher total value.
- Case II: w_h < W. Let the 1st choice be item i and the kth choice be h; then we can always swap the 1st and kth choices, and the total value V remains unchanged.
- Therefore the greedy choice can always be made first.

62 FKP: Optimal substructure
- The optimal-substructure property: an optimal solution contains within it optimal solutions to subproblems.
- If we remove weight w of one item i from the optimal load, then the remaining load must be an optimal solution using the remaining items.
- The subproblem is the most valuable load of maximum weight W - w, chosen from n-1 items plus w_i - w pounds of item i.

63 FKP: Optimal substructure proof
We want to show that an optimal solution contains within it optimal solutions to subproblems.
- Consider the most valuable load L, weighing W lbs.
- Remove w pounds of some item i.
- The remaining load L' must be the most valuable load for a smaller fractional knapsack problem:
  - maximum weight W - w lbs,
  - only n-1 items plus w_i - w lbs of item i.
- Why? Because otherwise we could find a load L'' more valuable than L', add w pounds of item i, and obtain a load more valuable than L. (Contradiction!)

64 Example 3: Binary knapsack problem (BKP)
- A variation on FKP: the "Supermarket Shopping Spree"!
- Suppose, instead, that you can only take an item wholly or not at all (no fractions allowed): diamond rings, laptops, watches.
- Q: How many of each item do you take?
- Will the greedy approach still work? Surprisingly, no.

65 The Binary Knapsack Problem
- You win the Supermarket Shopping Spree contest.
- You are given a shopping cart with capacity C.
- You are allowed to fill it with any items you want from Giant Eagle.
- Giant Eagle has items 1, 2, …, n, with values v_1, v_2, …, v_n and sizes s_1, s_2, …, s_n.
- How do you (efficiently) maximize the value of the items in your cart?

67 BKP is not greedy
- The obvious greedy strategy, taking the maximum-value item that still fits in the cart, does not work.
- Consider: suppose item i has size s_i = C and value v_i. It can happen that there are items j and k with combined size s_j + s_k <= C but v_j + v_k > v_i.

68 BKP: Greedy approach fails
[Figure: a knapsack of maximum weight 50 lbs and three items: item 1 ($60, 10 lbs, $6/lb), item 2 ($100, 20 lbs, $5/lb), item 3 ($120, 30 lbs, $4/lb). Greedy by $/lb takes items 1 and 2 for $160; items 1 and 3 give $180; items 2 and 3 give $220, the optimum.]
BKP has optimal substructure, but not the greedy-choice property: the optimal solution does not contain the greedy choice.

69 Exercise
- How can we (efficiently) solve the binary knapsack problem?
- Hint: dynamic programming. (One standard sketch follows.)
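One standard dynamic-programming sketch in Python, assuming integer sizes (the names are illustrative):

    def binary_knapsack(values, sizes, C):
        """best[c] = max value achievable within capacity c.
        Scan capacities downward so each item is used at most once."""
        best = [0] * (C + 1)
        for v, s in zip(values, sizes):
            for c in range(C, s - 1, -1):
                best[c] = max(best[c], best[c - s] + v)
        return best[C]

On the previous slide's example, binary_knapsack([60, 100, 120], [10, 20, 30], 50) returns 220, matching the optimum that greedy misses. The running time is O(nC), which is pseudo-polynomial in the capacity.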

70 Succeeding with greed
Four ingredients are needed:
1. An optimization problem.
2. Can only proceed in stages.
3. Greedy-choice property: a greedy choice will lead to a globally optimal solution.
4. Optimal substructure: an optimal solution contains within it optimal solutions to subproblems.

