Minimum Spanning Trees


Minimum Spanning Trees

A spanning tree of a graph is a tree (i.e., a connected, acyclic subgraph) that contains all the vertices of the graph. A Minimum Spanning Tree (MST) is a spanning tree with the minimum sum of edge weights.

[Figure: the running example, a 9-vertex graph on a, b, c, d, e, f, g, h, i with edge weights (g, h) = 1, (c, i) = 2, (f, g) = 2, (a, b) = 4, (c, f) = 4, (g, i) = 6, (c, d) = 7, (h, i) = 7, (a, h) = 8, (b, c) = 8, (d, e) = 9, (e, f) = 10, (b, h) = 11, (d, f) = 14]
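For concreteness, the example graph can be written down as an edge list. The names EDGES and adjacency below are illustrative choices of this sketch, not the slides' own; the vertex names and weights come from the figure.

```python
# Edge list for the 9-vertex example graph used throughout these slides.
EDGES = [
    ("a", "b", 4), ("a", "h", 8), ("b", "h", 11), ("b", "c", 8),
    ("c", "d", 7), ("c", "f", 4), ("c", "i", 2), ("d", "e", 9),
    ("d", "f", 14), ("e", "f", 10), ("f", "g", 2), ("g", "h", 1),
    ("g", "i", 6), ("h", "i", 7),
]

def adjacency(edges):
    """Build an undirected adjacency map {u: {v: w, ...}, ...}."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    return adj
```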

Prim’s Algorithm

Starts from an arbitrary “root”: VA = {a}. At each step:
- Find a light edge crossing the cut (VA, V - VA)
- Add this edge to the set A (the edges in A always form a single tree)
- Repeat until the tree spans all vertices

[Figure: the example graph, with the light edge crossing the current cut highlighted]

Example (Prim’s algorithm on the graph above, starting from a)

Initialization: key[v] = ∞ for all v; key[a] = 0.
Q = {a, b, c, d, e, f, g, h, i}, VA = ∅
Extract-MIN(Q) → a

Q = {b, c, d, e, f, g, h, i}, VA = {a}
key[b] = 4, π[b] = a;  key[h] = 8, π[h] = a
Extract-MIN(Q) → b

Q = {c, d, e, f, g, h, i}, VA = {a, b}
key[c] = 8, π[c] = b;  key[h] = 8, π[h] = a (unchanged)
Extract-MIN(Q) → c

Q = {d, e, f, g, h, i}, VA = {a, b, c}
key[d] = 7, π[d] = c;  key[f] = 4, π[f] = c;  key[i] = 2, π[i] = c
Extract-MIN(Q) → i

Q = {d, e, f, g, h}, VA = {a, b, c, i}
key[h] = 7, π[h] = i;  key[g] = 6, π[g] = i
Extract-MIN(Q) → f

Q = {d, e, g, h}, VA = {a, b, c, i, f}
key[g] = 2, π[g] = f;  key[e] = 10, π[e] = f;  key[d] = 7, π[d] = c (unchanged)
Extract-MIN(Q) → g

Q = {d, e, h}, VA = {a, b, c, i, f, g}
key[h] = 1, π[h] = g
Extract-MIN(Q) → h

Q = {d, e}, VA = {a, b, c, i, f, g, h}
Extract-MIN(Q) → d

Q = {e}, VA = {a, b, c, i, f, g, h, d}
key[e] = 9, π[e] = d
Extract-MIN(Q) → e

Q = ∅, VA = {a, b, c, i, f, g, h, d, e}

PRIM(V, E, w, r)            ▷ r: starting vertex
  Q ← ∅
  for each u ∈ V
    do key[u] ← ∞
       π[u] ← NIL
       INSERT(Q, u)
  DECREASE-KEY(Q, r, 0)     ▷ key[r] ← 0
  while Q ≠ ∅
    do u ← EXTRACT-MIN(Q)
       for each v ∈ Adj[u]
         do if v ∈ Q and w(u, v) < key[v]
              then π[v] ← u
                   DECREASE-KEY(Q, v, w(u, v))

Running time, with Q implemented as a min-heap:
- Building the initial heap of |V| elements: O(V)
- The while loop executes |V| times; each EXTRACT-MIN takes O(lg V): O(V lg V)
- The inner for loop executes O(E) times in total; each DECREASE-KEY takes O(lg V): O(E lg V)

Total time: O(V lg V + E lg V) = O(E lg V)

Prim’s Algorithm: total time O(E lg V). Prim’s algorithm is a “greedy” algorithm: greedy algorithms build a solution through a sequence of choices, each of which is “locally” optimal at the time it is made. Nevertheless, Prim’s greedy strategy produces a globally optimal solution!
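The pseudocode maps onto Python with one caveat: the standard-library heapq module has no DECREASE-KEY, so this sketch pushes duplicate entries and skips stale ones on extraction, a standard substitute rather than what the pseudocode does literally. The function name prim_mst and the (total, parent) return shape are choices of this sketch.

```python
import heapq

def prim_mst(adj, root):
    """Prim's algorithm with a binary heap: O(E lg V).

    adj maps each vertex to {neighbor: weight}.  Returns (total_weight,
    parent), where parent[v] plays the role of pi[v] in the slides.
    """
    key = {u: float("inf") for u in adj}
    parent = {u: None for u in adj}
    key[root] = 0
    pq = [(0, root)]
    in_tree = set()
    total = 0
    while pq:
        k, u = heapq.heappop(pq)
        if u in in_tree:
            continue                  # stale entry: a smaller key was popped earlier
        in_tree.add(u)
        total += k
        for v, w in adj[u].items():
            if v not in in_tree and w < key[v]:
                key[v] = w            # DECREASE-KEY, emulated by pushing a new entry
                parent[v] = u
                heapq.heappush(pq, (w, v))
    return total, parent
```

On the example graph, the tree edges agree with the hand trace above (π[b] = a, π[h] = g, π[e] = d, and so on), with total weight 37.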

Kruskal’s Algorithm

Start with each vertex being its own component. Repeatedly merge two components into one by choosing the lightest edge that connects them. Which edge to consider at each iteration? Scan the set of edges in monotonically increasing order by weight and choose the smallest edge that connects two different components.

[Figure: the example graph part-way through the algorithm; at this point the lightest edge connecting two different components is (c, f), so we would add edge (c, f)]

Example (Kruskal’s algorithm on the same graph)

Edges sorted by weight:
1: (h, g)
2: (c, i), (g, f)
4: (a, b), (c, f)
6: (i, g)
7: (c, d), (i, h)
8: (a, h), (b, c)
9: (d, e)
10: (e, f)
11: (b, h)
14: (d, f)

Initial components: {a}, {b}, {c}, {d}, {e}, {f}, {g}, {h}, {i}

Add (h, g)    → {g, h}, {a}, {b}, {c}, {d}, {e}, {f}, {i}
Add (c, i)    → {g, h}, {c, i}, {a}, {b}, {d}, {e}, {f}
Add (g, f)    → {g, h, f}, {c, i}, {a}, {b}, {d}, {e}
Add (a, b)    → {g, h, f}, {c, i}, {a, b}, {d}, {e}
Add (c, f)    → {g, h, f, c, i}, {a, b}, {d}, {e}
Ignore (i, g) — same component
Add (c, d)    → {g, h, f, c, i, d}, {a, b}, {e}
Ignore (i, h) — same component
Add (a, h)    → {g, h, f, c, i, d, a, b}, {e}
Ignore (b, c) — same component
Add (d, e)    → {g, h, f, c, i, d, a, b, e}
Ignore (e, f), (b, h), (d, f) — same component

Operations on Disjoint Data Sets

Kruskal’s algorithm uses disjoint data sets (UNION-FIND: Chapter 21) to determine whether an edge connects vertices in different components:
- MAKE-SET(u): creates a new set whose only member is u
- FIND-SET(u): returns a representative element from the set that contains u; it returns the same value for any element in the set
- UNION(u, v): unites the sets that contain u and v, say Su and Sv
  E.g.: Su = {r, s, t, u}, Sv = {v, x, y}; UNION(u, v) = {r, s, t, u, v, x, y}

We had seen earlier that FIND-SET can be done in O(lg n) or O(1) time and the UNION operation can be done in O(1) time (see Chapter 21).

KRUSKAL(V, E, w)
  A ← ∅
  for each vertex v ∈ V
    do MAKE-SET(v)
  sort E into non-decreasing order by weight w
  for each (u, v) taken from the sorted list
    do if FIND-SET(u) ≠ FIND-SET(v)
         then A ← A ∪ {(u, v)}
              UNION(u, v)
  return A

Running time, using the disjoint-set data structure (UNION-FIND):
- Initialization (|V| MAKE-SET calls): O(V)
- Sorting the edges: O(E lg E)
- O(E) FIND-SET calls at O(lg V) each, plus the UNION operations: O(E lg V)

Running time: O(V + E lg E + E lg V) = O(E lg E)

Kruskal’s algorithm is “greedy”, yet it produces a globally optimal solution.
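A compact Python rendering of KRUSKAL, with a minimal inlined union-find (path halving only, to stay short). The function name and the returned edge list are illustrative choices, not the slides' own.

```python
def kruskal_mst(vertices, edges):
    """Kruskal's algorithm: sort edges, grow a forest with union-find.

    edges is a list of (u, v, w) triples.  O(E lg E) overall, dominated
    by the sort.  Returns the list of tree edges A.
    """
    parent = {v: v for v in vertices}            # MAKE-SET for every vertex

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x

    mst = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                             # edge joins two components
            parent[ru] = rv                      # UNION
            mst.append((u, v, w))
    return mst
```

On the example graph this yields 8 tree edges of total weight 37; with ties at weight 8, Kruskal may pick (a, h) where Prim picked (b, c), but both spanning trees have the same minimum weight.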

Another Example for Prim’s Method (adjacency-matrix version)

Graph on vertices a–f, with adjacency matrix W (∞ = no edge):

      a    b    c    d    e    f
 a    0    2    7    ∞    ∞    1
 b    2    0   10    1    ∞    ∞
 c    7   10    0    8    2    ∞
 d    ∞    1    8    0    9    ∞
 e    ∞    ∞    2    9    0    3
 f    1    ∞    ∞    ∞    3    0

D[i] holds the weight of the lightest edge from i to the partially built tree. After selecting a node k, update:

new D[i] = Min{ D[i], w(k, i) }

where k is the newly-selected node and w(k, i) is the distance between k and i.

Trace (starting from b; D listed for a, b, c, d, e, f):

Select b:             D = [2, 0, 10, 1, ∞, ∞]
Select d (D[d] = 1):  new D = [2, 0, 8, 1, 9, ∞]
Select a (D[a] = 2):  new D = [2, 0, 7, 1, 9, 1]
Select f (D[f] = 1):  new D = [2, 0, 7, 1, 3, 1]
Select e (D[e] = 3):  new D = [2, 0, 2, 1, 3, 1]
Select c (D[c] = 2):  done.

Running time: Θ(V²) (array representation) or Θ(E lg V) (min-heap + adjacency list). Which one is better?
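The matrix trace can be replayed with a short array-based Prim, the Θ(V²) variant. The function name prim_matrix and the returned history of D arrays are conveniences of this sketch so the hand trace can be checked.

```python
def prim_matrix(W, root):
    """Array-based Prim on an adjacency matrix: Theta(V^2).

    W[i][j] is the edge weight (float('inf') if absent).  D[i] tracks
    the lightest edge from i into the partial tree, exactly as in the
    slides' D[.] rows.  Returns (total_weight, history of D snapshots).
    """
    n = len(W)
    INF = float("inf")
    D = [INF] * n
    D[root] = 0
    in_tree = [False] * n
    total = 0
    history = []
    for _ in range(n):
        # pick the unselected vertex with minimum D (linear scan)
        k = min((i for i in range(n) if not in_tree[i]), key=lambda i: D[i])
        in_tree[k] = True
        total += D[k]
        for i in range(n):
            if not in_tree[i] and W[k][i] < D[i]:
                D[i] = W[k][i]        # new D[i] = min(D[i], w(k, i))
        history.append(D[:])
    return total, history
```

Starting from b (index 1), the snapshots match the trace, ending at D = [2, 0, 2, 1, 3, 1] with MST weight 9.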

Greedy MST Methods

Prim’s method is fastest:
- O(n²) worst case with the array representation
- O(E log n) if a min-heap is used to keep track of distances of vertices to the partially built tree
- If E = Θ(n²) (a dense graph), the min-heap version costs O(n² log n), so the min-heap is not a good idea!

Kruskal’s method uses union-find trees to run in O(E log n) time.

Parallel MST Algorithm (Prim’s)

- p processors, n = |V| vertices
- Each processor is assigned n/p vertices (Pi gets the set Vi)
- Each PE holds the corresponding n/p columns of the adjacency matrix A and n/p elements of the d[] array

[Figure: the d[] array and the columns of A partitioned into p blocks of n/p columns, one block per processor P0, P1, …, Pi, …, Pp-1]

Parallel MST Algorithm (Prim’s)

1. Initialize: Vt := {r}; d[k] = ∞ for all k (except d[r] = 0)
2. P0 broadcasts selectedV = r using one-to-all broadcast.
3. The PE responsible for selectedV marks it as belonging to set Vt.
4. For v = 2 to n = |V| do
5.    Each Pi updates d[k] = Min{ d[k], w(selectedV, k) } for all k ∈ Vi
6.    Each Pi computes MIN-di = minimum d[] value among its unselected elements
7.    PEs perform a global minimum over the MIN-di values and store the result in P0. Call the winning vertex selectedV.
8.    P0 broadcasts selectedV using one-to-all broadcast.
9.    The PE responsible for selectedV marks it as belonging to set Vt.
10. EndFor
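The loop above can be simulated sequentially in Python to make the data partitioning concrete: each "processor" updates d[] only for its own block of vertices (step 5) and reports a local minimum (step 6); the global minimum (step 7) is just a reduction over the p local results. There is no real message passing here, and all names are illustrative.

```python
def parallel_prim_sim(W, root, p):
    """Sequential simulation of the partitioned Prim loop (steps 5-7).

    W is an adjacency matrix with float('inf') for absent edges.  The n
    vertices are split into p contiguous blocks, one per simulated PE.
    Returns the MST weight.
    """
    n = len(W)
    INF = float("inf")
    d = [INF] * n
    d[root] = 0
    in_tree = [False] * n
    blocks = [range(i * n // p, (i + 1) * n // p) for i in range(p)]
    selected = root                 # "broadcast" of the first selected vertex
    in_tree[root] = True
    total = 0
    for _ in range(n - 1):
        # step 5: each block owner updates d[k] for its own vertices
        for block in blocks:
            for k in block:
                if not in_tree[k] and W[selected][k] < d[k]:
                    d[k] = W[selected][k]
        # step 6: per-block local minima over unselected vertices
        local = [min(((d[k], k) for k in block if not in_tree[k]),
                     default=(INF, -1)) for block in blocks]
        # step 7: global-minimum reduction; the winner is "broadcast"
        key, selected = min(local)
        in_tree[selected] = True
        total += key
    return total
```

With any p that divides the work, the simulation returns the same MST weight as the sequential algorithm (9 on the 6-vertex matrix example).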

Parallel MST Algorithm (Prim’s): Time Complexity Analysis

If E = O(n²), then Tseq = Θ(n²).

(Hypercube) Tpar = n·(n/p) + n·log p    (computation + communication)
(Mesh)      Tpar = n·(n/p) + n·√p

The algorithm is cost-optimal on a hypercube if (p log p)/n = O(1).

Dijkstra’s SSSP Algorithm (adjacency matrix)

Same graph and adjacency matrix W as in the Prim example above. L[i] now holds the length of the shortest path found so far from the source to i. After selecting a node k, update:

new L[i] = Min{ L[i], L[k] + W[k, i] }

where k is the newly-selected intermediate node and W[k, i] is the distance between k and i. Note the only difference from Prim’s rule: Prim compares against w(k, i) alone, Dijkstra against L[k] + W[k, i].

Trace (source b; L listed for a, b, c, d, e, f):

Select b:             L = [2, 0, 10, 1, ∞, ∞]
Select d (L[d] = 1):  new L = [2, 0, 9, 1, 10, ∞]
Select a (L[a] = 2):  new L = [2, 0, 9, 1, 10, 3]
Select f (L[f] = 3):  new L = [2, 0, 9, 1, 6, 3]
Select e (L[e] = 6):  new L = [2, 0, 8, 1, 6, 3]
Select c (L[c] = 8):  done.

Running time: Θ(V²) (array representation) or Θ(E lg V) (min-heap + adjacency list). Which one is better?
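The SSSP trace can be checked with an array-based Dijkstra. The scaffolding is identical to the array-based Prim; only the update rule changes. The function name is this sketch's choice.

```python
def dijkstra_matrix(W, src):
    """Array-based Dijkstra on an adjacency matrix: Theta(V^2).

    W[i][j] is the edge weight (float('inf') if absent).  Returns the
    final shortest-path distance array L.
    """
    n = len(W)
    INF = float("inf")
    L = [INF] * n
    L[src] = 0
    done = [False] * n
    for _ in range(n):
        # pick the unselected vertex with minimum L (linear scan)
        k = min((i for i in range(n) if not done[i]), key=lambda i: L[i])
        done[k] = True
        for i in range(n):
            if not done[i] and L[k] + W[k][i] < L[i]:
                L[i] = L[k] + W[k][i]   # new L[i] = min(L[i], L[k] + W[k][i])
    return L
```

From source b (index 1) on the 6-vertex matrix, this ends at L = [2, 0, 8, 1, 6, 3], matching the trace.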

Task Partitioning for the Parallel SSSP Algorithm

- p processors, n = |V| vertices
- Each processor is assigned n/p vertices (Pi gets the set Vi)
- Each PE holds the corresponding n/p columns of A and n/p elements of the L[] array

[Figure: the L[] array and the columns of A partitioned into p blocks of n/p columns, one block per processor P0, P1, …, Pi, …, Pp-1]

Parallel SSSP Algorithm (Dijkstra’s)

1. Initialize: Vt := {r}; L[k] = ∞ for all k except L[r] = 0
2. P0 broadcasts selectedV = r using one-to-all broadcast.
3. The PE responsible for selectedV marks it as belonging to set Vt.
4. For v = 2 to n = |V| do
5.    Each Pi updates L[k] = Min{ L[k], L[selectedV] + W(selectedV, k) } for all k ∈ Vi
6.    Each Pi computes MIN-Li = minimum L[] value among its unselected elements
7.    PEs perform a global minimum using the MIN-Li values; the result is stored in P0. Call the winning vertex selectedV.
8.    P0 broadcasts selectedV and L[selectedV] using one-to-all broadcast.
9.    The PE responsible for selectedV marks it as belonging to set Vt.
10. EndFor

Parallel SSSP Algorithm (Dijkstra’s): Time Complexity Analysis

In the worst case, Tseq = Θ(n²).

(Hypercube) Tpar = n·(n/p) + n·log p    (computation + communication)
(Mesh)      Tpar = n·(n/p) + n·√p

The algorithm is cost-optimal on a hypercube if (p log p)/n = O(1).