
1 Spanning tree Lecture 4

2 Minimum Spanning Tree Problem
Instance: An undirected graph G, weights c: E(G) → R . Task: Find a spanning tree in G of minimum weight or decide that G is not connected.

3 Maximum Weight Forest Problem
Instance: An undirected graph G, weights c: E(G) → R . Task: Find a forest in G of maximum weight.

4 Equivalent Problems We say that a problem P linearly reduces to a problem Q if there are functions f and g, each computable in linear time, such that f transforms an instance x of P into an instance f(x) of Q, and g transforms a solution of f(x) into a solution of x. If P linearly reduces to Q and Q linearly reduces to P, then both problems are called equivalent.

5 MWF ⇔ MST Proposition 4.1. The Maximum Weight Forest Problem and the Minimum Spanning Tree Problem are equivalent.

6 Proof (1) Given an instance (G, c) of the Maximum Weight Forest Problem. Delete all edges of negative weight, set c′(e) := −c(e) for the remaining edges e, and add a minimal set F of edges (of arbitrary weight) to make the graph connected. Let us call the resulting graph G′. Then the instance (G′, c′) of the Minimum Spanning Tree Problem is equivalent in the following sense: deleting the edges of F from a minimum weight spanning tree in (G′, c′) yields a maximum weight forest in (G, c).

7 Proof (2) Given an instance (G, c) of the Minimum Spanning Tree Problem. Set c′(e) := K − c(e) for all e ∈ E(G), where K := 1 + max{c(e) : e ∈ E(G)}. All edges in the new instance have a positive weight. Then the instance (G, c′) of the Maximum Weight Forest Problem is equivalent, since all spanning trees have the same number of edges. From now on we will consider the Minimum Spanning Tree Problem.
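A minimal sketch of this weight transformation in Python, assuming edges are given as (u, v, weight) triples (an illustrative format, not from the slides):

def mst_to_max_forest_weights(edges):
    # K exceeds every edge weight, so every transformed weight K - c(e) is positive;
    # minimizing c over spanning trees then equals maximizing c' over forests.
    K = 1 + max(c for _, _, c in edges)
    return [(u, v, K - c) for u, v, c in edges]

# Example: a triangle with weights 1, 2, 3 becomes weights 3, 2, 1 (here K = 4).
print(mst_to_max_forest_weights([("a", "b", 1), ("b", "c", 2), ("a", "c", 3)]))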

8 Optimality conditions of MST
Theorem 4.2. Let (G, c) be an instance of the Minimum Spanning Tree Problem, and let T be a spanning tree in G. Then the following statements are equivalent:
(a) T is optimum.
(b) For every e = {x, y} ∈ E(G) \ E(T), no edge on the x-y-path in T has higher cost than e.
(c) For every e ∈ E(T), e is a minimum cost edge of δ(V(C)), where C is a connected component of T – e.
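As an illustration of condition (b), here is a small sketch (not from the lecture; function names and data formats are assumptions): edges and tree_edges are sets of frozensets {x, y}, and cost maps each such edge to its weight.

from collections import defaultdict

def satisfies_condition_b(edges, tree_edges, cost):
    # Adjacency lists of the tree T.
    adj = defaultdict(list)
    for e in tree_edges:
        x, y = tuple(e)
        adj[x].append(y)
        adj[y].append(x)

    def max_cost_on_tree_path(x, y):
        # The x-y-path in a tree is unique; DFS while tracking the most
        # expensive edge on the branch followed so far.
        stack = [(x, None, float("-inf"))]
        while stack:
            v, parent, best = stack.pop()
            if v == y:
                return best
            for w in adj[v]:
                if w != parent:
                    stack.append((w, v, max(best, cost[frozenset((v, w))])))
        return float("inf")          # y not reachable: T is not spanning

    # (b): every non-tree edge costs at least as much as every edge on the
    # tree path between its endpoints.
    return all(max_cost_on_tree_path(*tuple(e)) <= cost[e]
               for e in edges - tree_edges)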

9 Exercise 4.1 Proof of (a) ⇒ (b).
(a) T is optimum.
(b) For every e = {x, y} ∈ E(G) \ E(T), no edge on the x-y-path in T has higher cost than e.

10 Proof of (a) ⇒ (b) Suppose (b) is violated.
Let e = {x, y} ∈ E(G) \ E(T) and let f be an edge on the x-y-path in T with c(f) > c(e). Then (T – f) + e is a spanning tree with lower cost, contradicting (a).

11 Exercise 4.2 Proof of (b) ⇒ (c).
(b) For every e = {x, y} ∈ E(G) \ E(T), no edge on the x-y-path in T has higher cost than e.
(c) For every e ∈ E(T), e is a minimum cost edge of δ(V(C)), where C is a connected component of T – e.

12 Proof of (b) ⇒ (c) Suppose (c) is violated.
Let e ∈ E(T), let C be a connected component of T – e, and let f = {x, y} ∈ δ(V(C)) with c(f) < c(e). Observe that the x-y-path in T must contain an edge of δ(V(C)), but the only such edge is e. So (b) is violated.

13 Proof of (c) ⇒ (a) Suppose T satisfies (c), and let T* be an optimum spanning tree with E(T) ∩ E(T*) as large as possible. We show that T = T*. Suppose there is an edge e = {x, y} ∈ E(T) \ E(T*). Let C be a connected component of T – e. T* + e contains a circuit D. Since e ∈ E(D) ∩ δ(V(C)), at least one more edge f (f ≠ e) of D must belong to δ(V(C)). Observe that (T* + e) – f is a spanning tree. Since T* is optimum, c(e) ≥ c(f). But since (c) holds for T, we also have c(f) ≥ c(e). So c(f) = c(e), and (T* + e) – f is another optimum spanning tree. This is a contradiction, because the new tree has one more edge in common with T than T* does.

14 Kruskal’s Algorithm Input: A connected undirected graph G, weights c: E(G) → R. Output: A spanning tree T of minimum weight.
1. Sort the edges such that c(e1) ≤ c(e2) ≤ … ≤ c(em).
2. Set T ← (V(G), ∅).
3. For i ← 1 to m do: If T + ei contains no circuit, then set T ← T + ei.
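A minimal sketch of the algorithm in Python, assuming edges are given as (u, v, weight) triples over hashable vertex names (function and variable names are illustrative). A union-find structure stands in for the test "T + ei contains no circuit":

def kruskal(vertices, edges):
    # parent[] implements a union-find structure over the components of T.
    parent = {v: v for v in vertices}

    def find(v):
        # Find the representative of v's component, with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for u, v, c in sorted(edges, key=lambda e: e[2]):   # c(e1) <= ... <= c(em)
        ru, rv = find(u), find(v)
        if ru != rv:                 # adding {u, v} creates no circuit
            parent[ru] = rv          # merge the two components
            tree.append((u, v, c))
    return tree

# Example: the triangle a-b-c with weights 1, 2, 3.
print(kruskal({"a", "b", "c"}, [("a", "b", 1), ("b", "c", 2), ("a", "c", 3)]))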

15 Kruskal’s Algorithm (2)
Theorem 4.3. Kruskal’s Algorithm works correctly. It is clear that the algorithm constructs a spanning tree T. It also guarantees condition (b) of the previous theorem. So T is optimum.

16 Prim’s Algorithm Input: A connected undirected graph G, weights c: E(G) → R. Output: A spanning tree T of minimum weight.
1. Choose v ∈ V(G). Set T ← ({v}, ∅).
2. While V(T) ≠ V(G) do: Choose an edge e ∈ δG(V(T)) of minimum cost. Set T ← T + e.
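A minimal sketch of Prim’s Algorithm in Python, assuming the graph is given as an adjacency dictionary mapping each vertex to a list of (neighbour, weight) pairs (an illustrative format). It uses a binary heap to pick the cheapest edge leaving the tree, so it runs in O(m log n) rather than the O(n²) of the array implementation in Theorem 4.5:

import heapq

def prim(adj, start):
    in_tree = {start}                                 # V(T)
    tree = []                                         # E(T)
    heap = [(c, start, w) for w, c in adj[start]]     # edges leaving {start}
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        c, u, v = heapq.heappop(heap)                 # cheapest edge in delta(V(T))
        if v in in_tree:
            continue                                  # both ends already in T
        in_tree.add(v)
        tree.append((u, v, c))
        for w, cw in adj[v]:                          # new edges leaving V(T)
            if w not in in_tree:
                heapq.heappush(heap, (cw, v, w))
    return tree

# Example usage:
adj = {"a": [("b", 1), ("c", 3)], "b": [("a", 1), ("c", 2)], "c": [("a", 3), ("b", 2)]}
print(prim(adj, "a"))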

17 Prim’s Algorithm (2) Theorem 4.5.
Prim’s Algorithm works correctly. Its running time is O(n²).

18 Maximum Weight Branching Problem
Instance: A digraph G, weights c: E(G) → R . Task: Find a maximum weight branching in G.

19 Minimum Weight Arborescence Problem
Instance: A digraph G, weights c: E(G) → R . Task: Find a minimum weight spanning arborescence in G or decide that none exists.

20 Minimum Weight Rooted Arborescence Problem
Instance: A digraph G, a vertex r ∊V(G), weights c: E(G) → R . Task: Find a minimum weight spanning arborescence rooted at r in G or decide that none exists.

21 MWB ⇔ MWA ⇔ MWRA Proposition 4.6.
The Maximum Weight Branching Problem, the Minimum Weight Arborescence Problem, and the Minimum Weight Rooted Arborescence Problem are all equivalent.

22 Homework Prove the following proposition. Proposition 4.6.
The Maximum Weight Branching Problem, the Minimum Weight Arborescence Problem, and the Minimum Weight Rooted Arborescence Problem are all equivalent.

23 Maximum Weight Branching Problem
Instance: A digraph G, weights c: E(G) → R. Task: Find a maximum weight branching in G. In the rest of the lecture we will consider this problem. Let us first recall what a branching is.

24 Branching A digraph B is a branching if the underlying undirected graph is a forest and each vertex v has at most one entering edge. Equivalently, a branching is an acyclic digraph B with |δ−(x)| ≤ 1 for all x ∈ V(B). Proposition 4.7. Let B be a digraph with |δ−(x)| ≤ 1 for all x ∈ V(B). Then B contains a circuit if and only if the underlying undirected graph contains a circuit.
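A small sketch (the function name and edge format are illustrative) that tests the definition for a digraph given as a vertex set and a list of (tail, head) edges: every vertex may have at most one entering edge, and, by Proposition 4.7, acyclicity can then be checked on the underlying undirected graph, here with a union-find structure:

def is_branching(vertices, edges):
    # Each vertex may have at most one entering edge.
    indeg = {v: 0 for v in vertices}
    for _, head in edges:
        indeg[head] += 1
    if any(d > 1 for d in indeg.values()):
        return False
    # By Proposition 4.7 the digraph now has a circuit iff the underlying
    # undirected graph has one, which union-find detects.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:                 # undirected circuit closed
            return False
        parent[ru] = rv
    return True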

25 How to solve MWB Let G be a directed graph and c: E(G) → R.
We can ignore edges of negative weight, since they never appear in an optimum branching. A first idea is to take the best entering edge for each vertex. Of course, the resulting graph may contain circuits. We must delete at least one edge of each circuit. The following lemma says that deleting one edge per circuit is enough.
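A short sketch of this first idea, assuming edges are (tail, head, weight) triples (an illustrative format): keep, for every vertex, its most expensive entering edge of positive weight. The result plays the role of the subgraph B0 in the next lemma and may still contain circuits:

def best_entering_edges(edges):
    best = {}                                    # head -> best entering edge so far
    for tail, head, c in edges:
        if c > 0 and (head not in best or c > best[head][2]):
            best[head] = (tail, head, c)
    return list(best.values())                   # may still contain circuits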

26 Branching and circuits
Lemma 4.8. (Karp [1972]) Let B0 be a maximum weight subgraph of G with |δ−B0(v)| ≤ 1 for all v ∈ V(B0). Then there exists an optimum branching B of G such that |E(C) \ E(B)| = 1 for each circuit C in B0.

27 Proof of Lemma 4.8
[Figure: a circuit C ⊆ B0 with vertices a1, b1, a2, b2, a3, b3 in cyclic order.]
Let B be an optimum branching of G containing as many edges of B0 as possible. Let C be some circuit in B0 and let E(C) \ E(B) = {(a1, b1), …, (ak, bk)} with k > 1. We claim that B contains a path from bi to bi–1 for each i = 1, …, k (with b0 = bk). This yields a contradiction, because the union of these paths is a closed walk in B, and a branching cannot contain a closed walk.

28 Show that B contains a path from bi to bi–1 for each i.
[Figure: the circuit C with vertices bi–1, ai, bi, ai+1, bi+1, the segment of C from bi–1 to ai contained in B, the edge e ∈ E(B) entering bi, and the path P ⊆ B.]
Consider B′ with V(B′) := V(G) and E(B′) := (E(B) \ {e}) ∪ {(ai, bi)}, where e is the edge of B entering bi (if any). B′ would be optimum and would contain more edges of B0 than B, so B′ is not a branching. Hence B′ contains a circuit, i.e., B contains a path P from bi to ai. Since k > 1, P is not completely on C. Consider the last edge of P not belonging to C: it must enter C at bi–1, so the initial part of P is the required path from bi to bi–1.

29 Main Idea Find B0 as above, and then contract every circuit of B0 in G. If we choose the weights of the resulting graph G1 correctly, any optimum branching in G1 will correspond to an optimum branching in G.

30 Edmonds’ Branching Algorithm
Input: A digraph G, weights c: E(G) → R+. Output: A maximum weight branching B of G.
1. Set i ← 0, G0 ← G, and c0 ← c.
2. Let Bi be a maximum weight subgraph of Gi with |δ−Bi(v)| ≤ 1 for all v ∈ V(Bi).
3. If Bi contains no circuit, then set B ← Bi and go to (5).
4. Construct (Gi+1, ci+1) from (Gi, ci) by doing the following for each circuit C of Bi: Contract C to a single vertex vC in Gi+1. For each edge e = (z, y) ∈ E(Gi) with z ∉ V(C), y ∈ V(C) do: Set ci+1(e′) ← ci(e) − ci(α(e, C)) + ci(eC) and Φ(e′) ← e, where e′ = (z, vC), α(e, C) = (x, y) ∈ E(C), and eC is some cheapest edge of C. Set i ← i + 1 and go to (2).
5. If i = 0 then stop.
6. For each circuit C of Bi–1 do: If there is an edge e′ = (z, vC) ∈ E(B), then set E(B) ← (E(B) \ {e′}) ∪ {Φ(e′)} ∪ (E(C) \ {α(Φ(e′), C)}); else set E(B) ← E(B) ∪ (E(C) \ {eC}). Set V(B) ← V(Gi–1), i ← i – 1 and go to (5).

31 Step 4
[Figure: an edge e = (z, y) entering the circuit C ⊆ Bi at y, the edge α(e, C) = (x, y) of C, a cheapest edge eC of C, and the new edge e′ = (z, vC) after contraction.]
For each edge e = (z, y) ∈ E(Gi) with z ∉ V(C), y ∈ V(C) do: Set ci+1(e′) ← ci(e) − ci(α(e, C)) + ci(eC) and Φ(e′) ← e, where e′ = (z, vC), α(e, C) = (x, y) ∈ E(C), and eC is some cheapest edge of C.
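A hedged sketch of this weight update for a single circuit C, assuming edges are (tail, head, weight) triples and vC is just a label for the contracted vertex (all names are illustrative). Each returned pair records the new edge (z, vC) together with its original edge, i.e. the map Φ:

def contract_circuit(circuit_edges, entering_edges, vC="vC"):
    # For each vertex y on C, the unique edge of C entering y, i.e. alpha(., C).
    enters_at = {head: (tail, head, c) for tail, head, c in circuit_edges}
    c_min = min(c for _, _, c in circuit_edges)      # weight of a cheapest edge eC
    result = []
    for z, y, c in entering_edges:                   # z outside C, y on C
        alpha = enters_at[y]
        new_edge = (z, vC, c - alpha[2] + c_min)     # c(e) - c(alpha(e, C)) + c(eC)
        result.append((new_edge, (z, y, c)))         # Phi(new edge) = original edge
    return result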

32 Edmonds’ Branching Algorithm
(The algorithm listing is repeated here from slide 30 for reference during the proof.)

33 Step 6
[Figure: expanding the contracted vertex vC back into the circuit C ⊆ Bi: the edge e′ = (z, vC) ∈ E(B) is replaced by Φ(e′) = (z, y), and all edges of C except α(Φ(e′), C) = (x, y) are added to B.]

34 Edmonds’ Branching Algorithm (2)
Theorem 4.9. Edmonds’ Branching Algorithm works correctly.

35 Proof Applying step 4 of the algorithm, we obtain a sequence (Gi, ci), i = 0, …, k. We show that each time just before the execution of step 5, B is an optimum branching of Gi. This is trivial the first time we reach step 5. So we have to show that step 6 transforms an optimum branching B of Gi into an optimum branching B* of Gi–1.

36 Edmonds’ Branching Algorithm
(The algorithm listing is repeated here from slide 30 for reference during the proof.)

37 Proof (2) Let B′i–1 be any branching of Gi–1 such that |E(C) \ E(B′i–1)| = 1 for each circuit C of Bi–1. Let B′i result from B′i–1 by contracting the circuits of Bi–1. Then B′i is a branching of Gi. Furthermore we have ci–1(B′i–1) ≤ ci(B′i) + ΣC (ci–1(E(C)) – ci–1(eC)), where the sum runs over the circuits C of Bi–1.

38 Step 4
[Figure repeated from slide 31: the edge e = (z, y) entering C at y, α(e, C), eC, and e′ = (z, vC).]
Set ci+1(e′) ← ci(e) − ci(α(e, C)) + ci(eC).

39 Induction By the induction hypothesis, B is an optimum branching of Gi. So we have ci(B) ≥ ci(B′i). We conclude that ci–1(B′i–1) ≤ ci(B′i) + ΣC (ci–1(E(C)) – ci–1(eC)) ≤ ci(B) + ΣC (ci–1(E(C)) – ci–1(eC)) = ci–1(B*).

40 Step 6
[Figure repeated from slide 33: expanding vC back into the circuit C.]

41 Induction By the induction hypothesis, B is an optimum branching of Gi, so ci(B) ≥ ci(B′i), and hence ci–1(B′i–1) ≤ ci–1(B*) for every branching B′i–1 of Gi–1 with |E(C) \ E(B′i–1)| = 1 for each circuit C of Bi–1. This, together with Lemma 4.8, implies that B* is an optimum branching of Gi–1.

42 Exercise 4.3 Given an undirected graph G with arbitrary weights c: E(G) → R, we ask for a minimum weight connected spanning subgraph. Can you solve this problem efficiently?

43 Exercise 4.4 Given an undirected graph G with weights c: E(G) → R and a vertex v ∈ V(G), we ask for a minimum weight spanning tree in G in which v is not a leaf. Can you solve this problem in polynomial time?

