Minimum Spanning Trees


1 Minimum Spanning Trees
CSCI 2720, Spring 2005

2 Extended Dijkstra

// Extending Dijkstra's algorithm to compute the edges of the shortest paths.
// Same implementation choices (heap or not) and same performance as for Dijkstra;
// same code except for the two parent-tracking lines (shown in red on the slide).

Algorithm Dijkstra(G, s)
    // initialize
    for all v ∈ G.vertices()
        if v = s
            setDistance(v, 0)
        else
            setDistance(v, ∞)
        v is not in cloud
        setParent(v, NULL)
    while there are nodes not in cloud
        u ← node not in cloud with min. distance
        add u to cloud
        // update neighbors of u
        for all e ∈ G.adjacentEdges(u)
            z ← G.getOpposite(e, u)
            d_new ← getDistance(u) + weight(e)
            if d_new < getDistance(z)
                updateDistance(z, d_new)
                updateParent(z, u)
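A minimal runnable sketch of this extended Dijkstra in Python, assuming the graph is a dict mapping each vertex to a list of (neighbor, weight) pairs; the function name and representation are illustrative, not taken from the slides.

    import heapq

    def dijkstra_with_parents(graph, s):
        # distance and parent labels, as in the pseudocode above
        dist = {v: float('inf') for v in graph}
        parent = {v: None for v in graph}
        dist[s] = 0
        heap = [(0, s)]                      # (distance, vertex) priority queue
        in_cloud = set()
        while heap:
            d_u, u = heapq.heappop(heap)
            if u in in_cloud:                # stale heap entry, skip it
                continue
            in_cloud.add(u)
            for z, w in graph[u]:            # update neighbors of u
                d_new = d_u + w
                if d_new < dist[z]:
                    dist[z] = d_new
                    parent[z] = u            # record the shortest-path tree edge
                    heapq.heappush(heap, (d_new, z))
        return dist, parent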

3 Example (same graph as before)
[Figure: the example weighted graph, with each vertex's parent pointer after running the extended Dijkstra]
A.parent = NULL, B.parent = E, C.parent = A, D.parent = C, E.parent = C, F.parent = D

4 All-Pairs Shortest Paths
Find the distance between every pair of vertices in a weighted directed graph G.
Create a matrix whose i-th row holds the distances from vertex v_i (and a second matrix if we also want to store parent values).
If there are no negative edges, we can make n calls to Dijkstra's algorithm, one for each vertex as source; each call fills in one row of the matrix.
This takes O(nm log n) time with the heap implementation (for a sparse graph with adjacency lists), and O(n^3) otherwise.
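A minimal sketch of this repeated-Dijkstra approach, reusing the illustrative dijkstra_with_parents function from the earlier sketch; each source vertex produces one row of the distance and parent matrices (stored here as dicts of dicts).

    def all_pairs_shortest_paths(graph):
        # one Dijkstra call per source vertex; assumes no negative edge weights
        dist_matrix, parent_matrix = {}, {}
        for s in graph:
            dist, parent = dijkstra_with_parents(graph, s)
            dist_matrix[s] = dist            # "row s" of the distance matrix
            parent_matrix[s] = parent
        return dist_matrix, parent_matrix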

5 DAGs and Topological Ordering
A directed acyclic graph (DAG) is a digraph that has no directed cycles.
DAGs are halfway between trees and general digraphs.
Used instead of expression trees, to "fold together" common subexpressions.
Often used to represent dependencies:
  tasks/jobs: (x, y) means x must be done before y
  course requirements: (x, y) means x must be taken before y
  preferences: (x, y) means that x is preferable to y
In each case, acyclicity is important.
[Figure: an example DAG G with vertices A, B, C, D, E]

6 Topological Sorting

Number the vertices so that (u, v) in E implies u < v.
Often needed in DAG applications:
  tasks/jobs: the order in which to do the jobs
  course requirements: the order in which to take the courses
  preferences: the overall "rating"
Theorem: A digraph admits a topological ordering if and only if it is a DAG. (Why?)
[Figure: the DAG G with topological numbers 1-5 on its vertices]
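A minimal check of the definition in Python: a numbering is a topological order exactly when every edge goes from a smaller number to a larger one. The DAG representation (a dict mapping each vertex to its successors) and the example edges are illustrative, not taken from the slide's figure.

    def is_topological_order(dag, number):
        # every edge (u, v) must satisfy number[u] < number[v]
        return all(number[u] < number[v] for u in dag for v in dag[u])

    # hypothetical DAG and numbering, just to exercise the check
    example_dag = {"A": ["C"], "B": ["C"], "C": ["D", "E"], "D": ["E"], "E": []}
    print(is_topological_order(example_dag, {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}))  # True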

7 Algorithm for Topological Sorting
Algorithm TopologicalSort(G)
    H ← G                                    // temporary copy of G
    n ← G.numVertices()
    while H is not empty do
        v ← a vertex of H with no outgoing edges
        label v ← n
        n ← n - 1
        remove v from H

Whenever there is a choice between multiple vertices with no outgoing edges, the output is not unique.
The naïve implementation is O(n^2). Why?
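A minimal sketch of this naive algorithm in Python, using the illustrative successor-dict representation from the earlier sketch; the rescan for a vertex with no outgoing edges on every iteration is what makes it quadratic.

    def topological_sort_naive(dag):
        # assumes dag really is acyclic; labels run from n down to 1
        remaining = {u: set(dag[u]) for u in dag}      # mutable copy of H = G
        n = len(remaining)
        label = {}
        while remaining:
            v = next(u for u, succs in remaining.items() if not succs)  # no outgoing edges
            label[v] = n
            n -= 1
            del remaining[v]
            for succs in remaining.values():           # remove edges into v
                succs.discard(v)
        return label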

8 Topological Sorting using DFS
Topological sort can be implemented in O(n + m) time. (Is that always faster?)
This is done as a variant of depth-first search: we label every node at the end of visiting it (just before returning).
The node labels are assigned in descending order (from the highest number down).
The result is a topological ordering, just as with the naïve version before.

9 Topological Sorting using DFS
Algorithm topologicalDFS(G, v)
    Input: graph G and a start vertex v of G
    Output: labeling of the vertices of G in the connected component of v
    setLabel(v, VISITED)
    for all e ∈ G.incidentEdges(v)
        if getLabel(e) = UNEXPLORED
            w ← opposite(v, e)
            if getLabel(w) = UNVISITED
                setLabel(e, DISCOVERY)
                topologicalDFS(G, w)
            else
                {e is a forward or cross edge}
    // end of for-all loop
    set the topological order of v to n
    n ← n - 1

Algorithm topologicalDFS(G)
    Input: dag G
    Output: topological ordering of G
    n ← G.numVertices()
    for all u ∈ G.vertices()
        setLabel(u, UNVISITED)
    for all e ∈ G.edges()
        setLabel(e, UNEXPLORED)
    for all v ∈ G.vertices()
        if getLabel(v) = UNVISITED
            topologicalDFS(G, v)

Except for the lines that assign the topological numbers (shown in blue on the slide), this is the same DFS code as before.
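A minimal runnable sketch of the DFS-based version, using the illustrative successor-dict representation; each vertex receives its number when its visit finishes, counting down from n as in the pseudocode (assumes the input really is a DAG).

    def topological_sort_dfs(dag):
        number = {}
        visited = set()
        next_label = [len(dag)]                  # counts down: n, n-1, ...

        def visit(v):
            visited.add(v)
            for w in dag[v]:                     # explore v's outgoing edges
                if w not in visited:
                    visit(w)
            number[v] = next_label[0]            # label v at the end of its visit
            next_label[0] -= 1

        for v in dag:
            if v not in visited:
                visit(v)
        return number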

10-19 Topological Sorting Example

[Figure sequence: the DFS-based algorithm is stepped through on an example DAG with nine vertices; each vertex receives its number when its visit finishes, so the labels 9, 8, 7, 6, 5, 4, 3, 2, 1 are assigned in that order.]

20 Minimum Spanning Tree

Spanning subgraph: a subgraph of a graph G containing all the vertices of G
Spanning tree: a spanning subgraph that is itself a (free) tree
Minimum spanning tree (MST): a spanning tree of a weighted graph with minimum total edge weight
Applications:
  Communications networks: connect all servers at least cost
  Transportation networks: connect all cities at least cost
[Figure: a weighted graph on the airports ORD, PIT, DEN, DCA, STL, DFW, ATL]

21 Partition Property

Partition Property: Consider a partition of the vertices of G into subsets U and V, and let e be an edge of minimum weight across the partition. Then there is a minimum spanning tree of G containing edge e.
Proof (by contradiction):
  Suppose no MST of G contains e. Let T be an MST of G; T does not contain e.
  Consider the cycle C formed by adding e to T, and let f be an edge of C across the partition.
  By the cycle property, weight(f) ≤ weight(e); since e has minimum weight across the partition, also weight(e) ≤ weight(f), so weight(f) = weight(e).
  Replacing f with e therefore yields another spanning tree of the same total weight, i.e. another MST, and this one contains e: a contradiction.
[Figure: a partition U, V of the example graph; replacing the crossing edge f with the minimum-weight crossing edge e yields another MST]

22 MST algorithms

Two algorithms to compute the MST of a given weighted graph:
  Prim-Jarnik (similar to Dijkstra's)
  Kruskal (uses a new ADT, "union-find")
Both are greedy. A greedy algorithm always takes the best immediate, or local, choice while constructing an answer. Greedy algorithms find the globally optimal solution for some optimization problems, but may find less-than-optimal solutions for some instances of other problems.
Prim-Jarnik greedily chooses vertices for the MST; Kruskal's algorithm greedily chooses edges for the MST.
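Since Kruskal's algorithm is only named here, the following is a minimal illustrative sketch (not pseudocode from these slides) of what "greedily chooses edges" and "union-find" mean: edges are considered in increasing weight order and accepted whenever they join two different trees.

    def kruskal(vertices, edges):
        # 'edges' is a list of (weight, u, v) tuples; union-find tracks the trees
        parent = {v: v for v in vertices}

        def find(v):                             # representative of v's tree
            while parent[v] != v:
                parent[v] = parent[parent[v]]    # path halving
                v = parent[v]
            return v

        mst = []
        for w, u, v in sorted(edges):            # greedy: cheapest edges first
            ru, rv = find(u), find(v)
            if ru != rv:                         # u, v in different trees: accept edge
                parent[ru] = rv                  # union the two trees
                mst.append((u, v, w))
        return mst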

23 Prim-Jarnik’s Algorithm
Similar to Dijkstra’s algorithm; the only difference is what distance means (now it’s just the edge weight). We pick an arbitrary vertex s and we grow the MST as a cloud of vertices, starting from s We store with each vertex v a label d(v) = the smallest weight of an edge connecting v to a vertex in the cloud At each step: We add to the cloud the vertex u outside the cloud with the smallest distance label We update the labels of the vertices adjacent to u B D C A F E 7 4 2 8 5 3 9 MSTs

24 Example

[Figure: the first steps of Prim-Jarnik on the example graph, showing the distance labels as the cloud grows from the start vertex]

25 Example (contd.)

[Figure: the remaining steps of Prim-Jarnik on the example graph, ending with the completed minimum spanning tree]

26 Pseudocode

// Same implementation choices (heap or not) and same performance as for Dijkstra;
// same code as the extended Dijkstra above except for the two changed lines:
// the start vertex is arbitrary, and d_new is just weight(e).

Algorithm Prim-Jarnik(G)
    // initialize
    s ← a vertex of G
    for all v ∈ G.vertices()
        if v = s
            setDistance(v, 0)
        else
            setDistance(v, ∞)
        v is not in cloud
        setParent(v, NULL)
    while there are nodes not in cloud
        u ← node not in cloud with min. distance
        add u to cloud
        // update neighbors of u
        for all e ∈ G.adjacentEdges(u)
            z ← G.getOpposite(e, u)
            d_new ← weight(e)
            if d_new < getDistance(z)
                updateDistance(z, d_new)
                updateParent(z, u)
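A minimal runnable sketch of Prim-Jarnik in Python, using the same illustrative weighted adjacency dict as the Dijkstra sketch (each vertex maps to a list of (neighbor, weight) pairs); the parent pointers collected at the end are the MST edges.

    import heapq

    def prim_jarnik(graph):
        s = next(iter(graph))                    # arbitrary start vertex
        dist = {v: float('inf') for v in graph}
        parent = {v: None for v in graph}
        dist[s] = 0
        in_cloud = set()
        heap = [(0, s)]
        while heap:
            _, u = heapq.heappop(heap)
            if u in in_cloud:                    # stale heap entry, skip it
                continue
            in_cloud.add(u)
            for z, w in graph[u]:
                if z not in in_cloud and w < dist[z]:
                    dist[z] = w                  # distance is just the edge weight
                    parent[z] = u
                    heapq.heappush(heap, (w, z))
        return [(parent[v], v, dist[v]) for v in graph if parent[v] is not None]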

