Discrete Mathematics: DISCRETE and COMBINATORIAL MATHEMATICS
Chapter 13: Optimization and Matching
Instructor: 鄧有光
Key Sections
13-1 Dijkstra's Shortest-Path Algorithm
13-2 Minimal Spanning Trees: The Algorithms of Kruskal and Prim
13-3 Transport Networks: The Max-Flow Min-Cut Theorem
13-1 Dijkstra's Shortest-Path Algorithm
Step 1: Set the counter i = 0 and S0 = {v0}. Label v0 with (0, -) and each v ≠ v0 with (∞, -). If n = 1, then V = {v0} and the problem is solved. If n > 1, continue to step (2).
Step 2: For each v in S̄i (the vertices not yet in Si), replace, when possible, the label on v by the new label (L(v), y), where
L(v) = min{ L(v), L(u) + wt(u, v) }, the minimum taken over all u in Si,
and y is a vertex in Si that produces this minimum L(v). [When a replacement does take place, it is due to the fact that we can go from v0 to v and travel a shorter distance by going along a path that includes the edge (y, v).]
Step 3: If every vertex in S̄i (for some 0 ≦ i ≦ n-2) has the label (∞, -), then the labeled graph contains the information we are seeking. If not, then there is at least one vertex v in S̄i that is not labeled (∞, -), and we perform the following tasks:
1) Select a vertex vi+1 for which L(vi+1) is a minimum (over all such v). There may be more than one such vertex, in which case we are free to choose among the possible candidates. The vertex vi+1 is an element of S̄i that is closest to v0.
2) Assign Si ∪ {vi+1} to Si+1.
3) Increase the counter i by 1. If i = n-1, the labeled graph contains the information we want. If i < n-1, return to step (2).
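In code, the labeling process can be sketched as follows. This is an illustrative Python sketch, not the textbook's pseudocode: it keeps the same (distance, predecessor) labels but uses a priority queue instead of scanning S̄i directly, and the function name and edge-list format are our own.

```python
import heapq

def dijkstra(edges, v0):
    """Shortest distances from v0 in a weighted graph.

    `edges` is a list of (u, v, weight); labels are (L(v), predecessor)
    pairs, as in the algorithm above.
    """
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))   # drop this line for directed graphs

    label = {x: (float('inf'), None) for x in adj}   # initial labels (∞, -)
    label[v0] = (0, None)                            # label (0, -) on v0
    done = set()                                     # plays the role of Si
    heap = [(0, v0)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)                                  # u is the next vertex v_{i+1}
        for v, w in adj.get(u, []):
            if d + w < label[v][0]:                  # L(v) = min{L(v), L(u) + wt(u, v)}
                label[v] = (d + w, u)
                heapq.heappush(heap, (d + w, v))
    return label
```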
EX: Apply Dijkstra's algorithm to the weighted graph G = (V, E) shown in the figure in order to find the shortest distance from vertex c (= v0) to each of the other five vertices in G.
<Sol>
Initialization: (i = 0). Set S0 = {c}. Label c with (0, -) and all other vertices in G with (∞, -).
First Iteration: (S̄0 = {a, b, f, g, h}). Here i = 0 in step (2), and we find, for example, that
L(a) = min{L(a), L(c) + wt(c, a)} = min{∞, 0 + ∞} = ∞,
whereas
L(f) = min{L(f), L(c) + wt(c, f)} = min{∞, 0 + 6} = 6.
Similar calculations yield L(b) = L(g) = ∞ and L(h) = 11. So we label the vertex f with (6, c) and the vertex h with (11, c). The other vertices in S̄0 remain labeled (∞, -). [See Fig. 13.2(a).]
In step (3) we see that f is the vertex v1 in S̄0 closest to v0, so we assign to S1 the set S0 ∪ {f} = {c, f} and increase the counter i to 1. Since i = 1 < 5 (= 6 - 1), we return to step (2).
Second Iteration: (S̄1 = {a, b, g, h}). Now i = 1 in step (2), and for each v in S̄1 we set L(v) = min{L(v), L(u) + wt(u, v)}, the minimum taken over all u in S1. This yields
L(a) = min{L(a), L(c) + wt(c, a), L(f) + wt(f, a)} = min{∞, 0 + ∞, 6 + 11} = 17,
so vertex a is labeled (17, f). In a similar manner, we find
L(b) = min{∞, 0 + ∞, 6 + ∞} = ∞,
L(g) = min{∞, 0 + ∞, 6 + 9} = 15,
L(h) = min{11, 0 + 11, 6 + 4} = 10.
[These results provide the labeling in Fig. 13.2(b).]
In step (3) we find that the vertex v2 is h, because h is in S̄1 and L(h) is a minimum. Then S2 is assigned S1 ∪ {h} = {c, f, h}, the counter is increased to 2, and since 2 < 5, the algorithm directs us back to step (2).
Third Iteration: (S̄2 = {a, b, g}). With i = 2 in step (2), the following are now computed (the minimum taken over all u in S2):
L(a) = min{L(a), L(u) + wt(u, a)} = min{17, 0 + ∞, 6 + 11, 10 + wt(h, a)} = 17
(so the label on a is not changed);
L(b) = min{∞, 0 + ∞, 6 + ∞, 10 + ∞} = ∞
(so the label on b remains ∞); and
L(g) = min{15, 0 + ∞, 6 + 9, 10 + 4} = 14 < 15,
so the label on g is changed to (14, h) because 14 = L(h) + wt(h, g). Among the vertices in S̄2, g is the closest to v0 since L(g) is a minimum. In step (3), vertex v3 is defined as g and S3 = S2 ∪ {g} = {c, f, h, g}. Then the counter i is increased to 3 (< 5), and we return to step (2).
Fourth Iteration: (S̄3 = {a, b}). With i = 3, the following are determined in step (2): L(a) = 17; L(b) = ∞. (Thus no labels are changed during this iteration.)
We set v4 = a and S4 = S3 ∪ {a} = {c, f, h, g, a} in step (3). Then the counter i is increased to 4 (< 5), and we return to step (2).
Fifth Iteration: (S̄4 = {b}). Here i = 4 in step (2), and we find L(b) = L(a) + wt(a, b) = 17 + 5 = 22. Now the label on b is changed to (22, a). Then v5 = b in step (3), S5 is set to {c, f, h, g, a, b}, and i is incremented to 5. But now that i = 5 = |V| - 1, the process terminates. We reach the labeled graph shown in the figure.
From the figure we have the following shortest distances from c to the other five vertices in G:
<Ans>:
1) d(c, f) = L(f) = 6.
2) d(c, h) = L(h) = 10.
3) d(c, g) = L(g) = 14.
4) d(c, a) = L(a) = 17.
5) d(c, b) = L(b) = 22.
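As a quick arithmetic check, the edge weights that appear in the worked example above reproduce these five distances when summed along the paths recovered from the labels. This is a partial reconstruction: edges of G not mentioned on these slides are omitted, and the graph is assumed undirected.

```python
# Edge weights read off the worked example (assumed undirected);
# edges of G never mentioned on the slides are omitted here.
wt = {frozenset(e): w for e, w in [
    (('c', 'f'), 6), (('c', 'h'), 11), (('f', 'a'), 11),
    (('f', 'g'), 9), (('f', 'h'), 4), (('h', 'g'), 4), (('a', 'b'), 5)]}

# Paths obtained by backtracking the final labels, e.g. b -> (22, a) -> (17, f) -> (6, c).
paths = {'f': ['c', 'f'], 'h': ['c', 'f', 'h'], 'g': ['c', 'f', 'h', 'g'],
         'a': ['c', 'f', 'a'], 'b': ['c', 'f', 'a', 'b']}

for v, p in paths.items():
    d = sum(wt[frozenset((p[i], p[i + 1]))] for i in range(len(p) - 1))
    print(f"d(c, {v}) = {d}")   # 6, 10, 14, 17, 22
```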
13-2 Minimal Spanning Trees: The Algorithms of Kruskal and Prim
Kruskal's Algorithm
Step 1: Set the counter i = 1 and select an edge e1 in G, where wt(e1) is as small as possible.
Step 2: For 1 ≦ i ≦ n-2, if edges e1, e2, …, ei have been selected, then select edge ei+1 from the remaining edges in G so that (a) wt(ei+1) is as small as possible and (b) the subgraph of G determined by the edges e1, e2, …, ei, ei+1 (and the vertices they are incident with) contains no cycles.
Step 3: Replace i by i + 1. If i = n - 1, the subgraph of G determined by the edges e1, e2, …, en-1 is connected with n vertices and n - 1 edges, and is an optimal spanning tree for G. If i < n - 1, return to step (2).
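A minimal Python sketch of Kruskal's algorithm, assuming the graph is given as a list of weighted edges; the union-find structure stands in for the "contains no cycles" test, and the function name and input format are ours rather than the textbook's.

```python
def kruskal(vertices, edges):
    """Return a minimal (optimal) spanning tree as a list of edges.

    `edges` is a list of (weight, u, v) tuples.  Cycle detection uses a
    simple union-find (disjoint-set) structure.
    """
    parent = {v: v for v in vertices}

    def find(x):                        # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):       # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # adding {u, v} creates no cycle
            parent[ru] = rv
            tree.append((u, v, w))
        if len(tree) == len(vertices) - 1:
            break
    return tree
```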
EX: Apply Kruskal's algorithm to the graph shown in the figure.
<Sol>:
Initialization: (i = 1). Select an edge of weight 1 (the smallest): start with T = {{e, g}}.
First Iteration: Select an edge of weight 2 (the next smallest): {d, f}. Now T = {{e, g}, {d, f}}. i = 2 < 6.
Second Iteration: Select another edge of weight 2 (still the smallest available): {d, e}. Now T = {{e, g}, {d, f}, {d, e}}. i = 3 < 6.
Third Iteration: Select an edge of weight 3 (the next smallest): {c, e}. Now T = {{e, g}, {d, f}, {d, e}, {c, e}}. i = 4 < 6.
Fourth Iteration: Select an edge of weight 4 (the next smallest): {b, e}. Now T = {{e, g}, {d, f}, {d, e}, {c, e}, {b, e}}. i = 5 < 6.
Fifth Iteration: Select an edge of weight 5 (the next smallest): {a, b}. Now T = {{e, g}, {d, f}, {d, e}, {c, e}, {b, e}, {a, b}}. i = 6, so the algorithm terminates.
Thus T is an optimal (minimal) spanning tree for the graph G, with weight 1 + 2 + 2 + 3 + 4 + 5 = 17.
Prim's Algorithm
Step 1: Set the counter i = 1 and place an arbitrary vertex v1 of V into the set P. Define N = V - {v1} and T = empty.
Step 2: For 1 ≦ i ≦ n - 1, where |V| = n, let P = {v1, v2, …, vi}, T = {e1, e2, …, ei-1}, and N = V - P. Add to T a shortest edge (an edge of minimal weight) in G that connects a vertex x in P with a vertex y (= vi+1) in N. Place y in P and delete it from N.
Step 3: Increase the counter i by 1. If i = n, the subgraph of G determined by the edges e1, e2, …, en-1 is connected with n vertices and n - 1 edges and is an optimal tree for G. If i < n, return to step (2).
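A matching Python sketch of Prim's algorithm under the same assumed edge-list input as the Kruskal sketch; again this illustrates the steps above and is not the book's pseudocode.

```python
import heapq

def prim(vertices, edges, start):
    """Grow an optimal (minimal spanning) tree from `start`.

    `edges` is a list of (weight, u, v) tuples, as in the Kruskal sketch.
    """
    adj = {v: [] for v in vertices}
    for w, u, v in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))

    P = {start}                          # the processed-vertex set P above
    tree = []                            # the edge set T
    heap = list(adj[start])              # candidate edges leaving P
    heapq.heapify(heap)
    while heap and len(P) < len(vertices):
        w, x, y = heapq.heappop(heap)    # a shortest edge {x, y} with x in P
        if y in P:
            continue                     # both endpoints already in P; skip
        P.add(y)                         # y plays the role of v_{i+1}
        tree.append((x, y, w))
        for e in adj[y]:
            if e[2] not in P:
                heapq.heappush(heap, e)
    return tree
```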
EX: Prim's algorithm generates an optimal tree as follows.
Initialization: i = 1; P = {a}; N = {b, c, d, e, f, g}; T = empty.
First Iteration: T = {{a, b}}; P = {a, b}; N = {c, d, e, f, g}; i = 2.
Second Iteration: T = {{a, b}, {b, e}}; P = {a, b, e}; N = {c, d, f, g}; i = 3.
Third Iteration: T = {{a, b}, {b, e}, {e, g}}; P = {a, b, e, g}; N = {c, d, f}; i = 4.
Fourth Iteration: T = {{a, b}, {b, e}, {e, g}, {d, e}}; P = {a, b, e, g, d}; N = {c, f}; i = 5.
Fifth Iteration: T = {{a, b}, {b, e}, {e, g}, {d, e}, {f, g}}; P = {a, b, e, g, d, f}; N = {c}; i = 6.
Sixth Iteration: T = {{a, b}, {b, e}, {e, g}, {d, e}, {f, g}, {c, g}}; P = {a, b, e, g, d, f, c} = V; N = empty; i = 7 = |V|.
Hence T is an optimal spanning tree of weight 17 for G, as seen in the figure.
Homework: Apply Kruskal's and Prim's algorithms to determine minimal spanning trees for the graph shown in the figure.
13-3 Transport Networks: The Max-Flow Min-Cut Theorem
Definition: Let N = (V, E) be a loop-free connected directed graph. Then N is called a network, or transport network, if the following conditions are satisfied:
a) There exists a unique vertex a in V with id(a), the in-degree of a, equal to 0. This vertex a is called the source.
b) There is a unique vertex z in V, called the sink, where od(z), the out-degree of z, equals 0.
c) The graph N is weighted, so there is a function from E to the set of nonnegative integers that assigns to each edge e = (v, w) in E a capacity, denoted by c(e) = c(v, w).
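As an illustration of the definition, a transport network can be stored as a capacity map. The sketch below uses a made-up set of vertices and capacities (not one of the text's figures) and checks conditions (a) and (b):

```python
# Hypothetical capacities c(v, w); the keys are the directed edges of N.
cap = {('a', 'b'): 5, ('a', 'd'): 4, ('b', 'z'): 3, ('d', 'z'): 6}

vertices = {v for e in cap for v in e}
indeg  = {v: sum(1 for (_, w) in cap if w == v) for v in vertices}
outdeg = {v: sum(1 for (u, _) in cap if u == v) for v in vertices}

sources = [v for v in vertices if indeg[v] == 0]   # should be exactly ['a']
sinks   = [v for v in vertices if outdeg[v] == 0]  # should be exactly ['z']
print(sources, sinks)
```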
EX (illustration): id(a) = 0, od(z) = 0; id ≧ od. The maximum flow of this graph is 11.
EX: (a) = (b). [Figure omitted.]
EX (illustrative example): [figure omitted]
The Edmonds-Karp Algorithm
Step 1: Place the source a into the set P (thus initializing the set of processed vertices). Assign the label (-, 1) to a and set the counter i = 2.
Step 2: While the sink z is not in P:
  If there is a usable edge in N:
    Let e = {v, w} be usable with its labeled vertex v having the smallest counter assignment.
    If w is unlabeled:
      Label w with (v, i).
      Place w in P.
      Increase the counter i by 1.
  Else:
    Return the minimum cut (P, P̄).
Step 3: If z is in P, start with z and backtrack to a using the first component of the vertex labels. (This provides an f-augmenting path p with the smallest number of edges.)
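A compact Python sketch of this labeling and backtracking step, assuming the network is stored as a capacity map `cap` and the current flow as a map `flow` (this representation and the function name are ours). The label here records the predecessor and whether the usable edge was traversed forward (f(e) < c(e)) or backward (f(e) > 0), rather than the counter:

```python
from collections import deque

def augmenting_path(cap, flow, a, z):
    """Return a shortest f-augmenting path from a to z, or None.

    Each path entry is (v, w, direction): 'F' for a forward edge (v, w)
    with spare capacity, 'B' for a backward edge (w, v) carrying flow.
    """
    label = {a: None}                     # predecessor labels, as in step 1
    queue = deque([a])                    # process vertices in label order
    while queue:
        v = queue.popleft()
        for (x, y), c in cap.items():
            if x == v and y not in label and flow[(x, y)] < c:
                label[y] = (v, 'F')       # usable forward edge
                queue.append(y)
            elif y == v and x not in label and flow[(x, y)] > 0:
                label[x] = (v, 'B')       # usable backward edge
                queue.append(x)
    if z not in label:
        return None                       # no augmenting path: (P, P-bar) is a minimum cut
    path, w = [], z                       # step 3: backtrack from z to a
    while label[w] is not None:
        v, d = label[w]
        path.append((v, w, d))
        w = v
    return list(reversed(path))
```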
The Ford-Fulkerson Algorithm
Step 1: Define the initial flow f on the edges of N by f(e) = 0 for each e in E.
Step 2: Repeat:
  Apply the Edmonds-Karp algorithm to determine an f-augmenting path p.
  Let △p = min{ △e : e in p } (where △e = c(e) - f(e) for a forward edge and △e = f(e) for a backward edge).
  For each e in p:
    If e is a forward edge: f(e) := f(e) + △p.
    Else (e is a backward edge): f(e) := f(e) - △p.
  Until no f-augmenting path p can be found in N.
  Return the maximum flow f.
Step 3: Return the minimum cut (P, P̄) (from the last application of the Edmonds-Karp algorithm, where no further f-augmenting path could be constructed).
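Putting the pieces together, a sketch of the full max-flow loop; it assumes the `augmenting_path` function from the previous sketch is in scope, and the example capacity map is the same made-up one used earlier, not one of the text's figures.

```python
def max_flow(cap, a, z):
    """Ford-Fulkerson with shortest (Edmonds-Karp) augmenting paths."""
    flow = {e: 0 for e in cap}                     # step 1: f(e) = 0 for each e
    while True:
        p = augmenting_path(cap, flow, a, z)       # defined in the sketch above
        if p is None:                              # no f-augmenting path remains
            break
        # delta_p = minimum spare capacity along p (forward: c(e) - f(e); backward: f(e))
        delta_p = min(cap[(v, w)] - flow[(v, w)] if d == 'F' else flow[(w, v)]
                      for v, w, d in p)
        for v, w, d in p:
            if d == 'F':
                flow[(v, w)] += delta_p            # forward edge: add delta_p
            else:
                flow[(w, v)] -= delta_p            # backward edge: subtract delta_p
    value = sum(f for (u, _), f in flow.items() if u == a)   # flow leaving the source
    return flow, value

# Example: the small hypothetical network from the earlier capacity-map sketch.
cap = {('a', 'b'): 5, ('a', 'd'): 4, ('b', 'z'): 3, ('d', 'z'): 6}
print(max_flow(cap, 'a', 'z')[1])                  # 7 for these made-up capacities
```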
EX: Use the Ford-Fulkerson and Edmonds-Karp algorithms to find a maximum flow for the transport network in Fig (i).
Homework: Apply the Edmonds-Karp and Ford-Fulkerson algorithms to find a maximum flow in Examples 13.12, 13.13, and …