
1 CS583 Lecture 11
Jana Kosecka. Topics: review of shortest paths, amortized/accounting analysis, dynamic programming. Many slides here are based on D. Luebke's slides.

2 Review: Shortest-Path Algorithms
How does the Bellman-Ford algorithm work? How can we do better for DAGs? Under what conditions can we use Dijkstra’s algorithm?

3 Review: Single-Source Shortest Path
Problem: given a weighted directed graph G, find the minimum-weight path from a given source vertex s to another vertex v. "Shortest path" = path of minimum weight, where the weight of a path is the sum of its edge weights. E.g., on a road map: what is the shortest path from Fairfax to Washington DC?

4 Review: Shortest Path Properties
Optimal substructure: a shortest path consists of shortest subpaths. Let δ(u,v) be the weight of the shortest path from u to v. Shortest paths satisfy the triangle inequality: δ(u,v) ≤ δ(u,x) + δ(x,v). In graphs with negative-weight cycles, some shortest paths will not exist.

5 Review: Relaxation
A key technique in shortest-path algorithms is relaxation. Idea: for all v, maintain an upper bound d[v] on δ(s,v).
Relax(u,v,w) { if (d[v] > d[u]+w) then d[v] = d[u]+w; }

6 Review: Bellman-Ford Algorithm
BellmanFord()
   // initialize d[], which will converge to the shortest-path values δ(s,v)
   for each v ∈ V
      d[v] = ∞;
   d[s] = 0;
   // relaxation: make |V|-1 passes, relaxing each edge
   for i = 1 to |V|-1
      for each edge (u,v) ∈ E
         Relax(u, v, w(u,v));
   // test for solution: have we converged yet? I.e., is there a negative cycle?
   for each edge (u,v) ∈ E
      if (d[v] > d[u] + w(u,v))
         return "no solution";

Relax(u,v,w): if (d[v] > d[u]+w) then d[v] = d[u]+w
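A minimal runnable sketch of this pseudocode in Python (the edge-list representation and function name are my own choices, not from the slides):

```python
import math

def bellman_ford(vertices, edges, s):
    # Initialize d[], which converges to the shortest-path values from s.
    d = {v: math.inf for v in vertices}
    d[s] = 0
    # Relaxation: make |V|-1 passes, relaxing each edge.
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if d[v] > d[u] + w:
                d[v] = d[u] + w
    # Convergence test: a still-relaxable edge means a negative-weight cycle.
    for u, v, w in edges:
        if d[v] > d[u] + w:
            return None  # "no solution"
    return d

# Hypothetical example graph:
print(bellman_ford({'s', 'a', 'b'}, [('s', 'a', 2), ('a', 'b', -1), ('s', 'b', 4)], 's'))
```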

7 Review: Bellman-Ford Algorithm
(Same pseudocode as on the previous slide. Figure: a six-vertex example graph with source s and vertices A–E, including negative edge weights -1 and -3.) Ex: work on board.

8 Review: Bellman-Ford Algorithm
(Same pseudocode as on the previous slide.) What will be the running time?

9 Review: Bellman-Ford Running time: O(VE)
Not so good for large dense graphs, but a very practical algorithm in many ways. Note that the order in which edges are processed affects how quickly it converges.

10 Review: Bellman-Ford
Note that the order in which edges are processed affects how quickly the algorithm converges. Correctness: show that d[v] = δ(s,v) after |V|-1 passes.
Lemma: d[v] ≥ δ(s,v) always.
Proof: initially true. Suppose not, and let v be the first vertex for which d[v] < δ(s,v). Let u be the vertex that caused d[v] to change, so d[v] = d[u] + w(u,v). Then:
   δ(s,v) ≤ δ(s,u) + w(u,v)   (triangle inequality)
   δ(s,u) + w(u,v) ≤ d[u] + w(u,v)   (v was the first violator, so d[u] ≥ δ(s,u))
So δ(s,v) ≤ d[u] + w(u,v) = d[v], contradicting d[v] < δ(s,v).

11 Review: DAG Shortest Paths
Problem: finding shortest paths in a DAG. Bellman-Ford takes O(VE) time; how can we do better? Idea: use topological sort. If we were lucky and processed the vertices on each shortest path from left to right, we would be done in one pass. Every path in a DAG is a subsequence of the topologically sorted vertex order, so if we process vertices in that order we handle each path in forward order (we never relax an edge out of a vertex before relaxing all edges into it). Thus: just one pass suffices. What will be the running time? See the sketch below.
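A hedged Python sketch of one-pass DAG shortest paths (the adjacency-dict format and Kahn's algorithm for the topological order are my choices; the slides only name the idea):

```python
import math
from collections import deque

def dag_shortest_paths(adj, s):
    # adj: {u: [(v, w), ...]}; every vertex must appear as a key (sinks map to []).
    # Kahn's algorithm computes a topological order.
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v, _ in adj[u]:
            indeg[v] += 1
    order, q = [], deque(u for u in adj if indeg[u] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    # One relaxation pass in topological order: O(V + E) total.
    d = {u: math.inf for u in adj}
    d[s] = 0
    for u in order:
        for v, w in adj[u]:
            if d[v] > d[u] + w:
                d[v] = d[u] + w
    return d
```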

12 Review: Dijkstra’s Algorithm
If there are no negative edge weights, we can beat Bellman-Ford. The idea is similar to breadth-first search: grow a tree gradually, advancing from vertices taken from a queue. It is also similar to Prim's algorithm for MST: use a priority queue keyed on d[v].

13 Review: Dijkstra’s Algorithm
Dijkstra(G)
   for each v ∈ V
      d[v] = ∞;
   d[s] = 0; S = ∅; Q = V;
   while (Q ≠ ∅)
      u = ExtractMin(Q);
      S = S ∪ {u};
      for each v ∈ u->Adj[]
         if (d[v] > d[u] + w(u,v))      // relaxation step; note: this is
            d[v] = d[u] + w(u,v);       // really a call to Q->DecreaseKey()
Ex: run the algorithm on a small example graph.
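A runnable sketch using Python's heapq as the priority queue (heapq has no DecreaseKey, so this version pushes duplicate entries and skips stale ones; the function name and graph format are my own):

```python
import heapq
import math

def dijkstra(adj, s):
    # adj: {u: [(v, w), ...]} with nonnegative weights w.
    d = {v: math.inf for v in adj}
    d[s] = 0
    pq = [(0, s)]
    done = set()                      # the set S of finished vertices
    while pq:
        du, u = heapq.heappop(pq)     # ExtractMin(Q)
        if u in done:
            continue                  # stale duplicate entry; skip it
        done.add(u)                   # S = S U {u}
        for v, w in adj[u]:
            if d[v] > du + w:         # relaxation step
                d[v] = du + w         # (in place of Q->DecreaseKey())
                heapq.heappush(pq, (d[v], v))
    return d
```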

14 Correctness Of Dijkstra's Algorithm
Want to show that when a vertex u is added to the set S, d[u] = δ(s,u); throughout, note that d[u] ≥ δ(s,u).
Proof by contradiction: suppose d[u] ≠ δ(s,u) when u is added. Consider a shortest path from s to u; before u is added, some vertex y on that path, with predecessor x already in S, must still be outside S. Claim: d[y] = δ(s,y) at that moment, because d[x] = δ(s,x) and the edge (x,y) has already been relaxed. Since weights are nonnegative, δ(s,y) ≤ δ(s,u), and d[y] = δ(s,y), so d[y] ≤ d[u]. But both y and u are outside S when u is chosen, so d[u] ≤ d[y]. Hence d[y] = d[u] = δ(s,y) = δ(s,u), contradicting the assumption.

15 Review: Kruskal’s Algorithm
Kruskal()
{
   T = ∅;
   for each v ∈ V
      MakeSet(v);
   sort E by increasing edge weight w
   for each (u,v) ∈ E (in sorted order)
      if FindSet(u) ≠ FindSet(v)
         T = T ∪ {{u,v}};
         Union(FindSet(u), FindSet(v));
}
What will affect the running time? 1 sort, O(V) MakeSet() calls, O(E) FindSet() calls, O(V) Union() calls (exactly how many Union()s? |V|-1 on a connected graph, one per tree edge).
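A compact Python sketch of Kruskal with a simple union-find (path compression plus union by rank; the representation choices are mine):

```python
def kruskal(vertices, edges):
    # edges: iterable of (u, v, w) tuples.
    parent = {v: v for v in vertices}   # MakeSet(v) for each vertex
    rank = {v: 0 for v in vertices}

    def find(x):                        # FindSet with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                    # union by rank of two roots
        if rank[a] < rank[b]:
            a, b = b, a
        parent[b] = a
        if rank[a] == rank[b]:
            rank[a] += 1

    T = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):   # sort by weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # edge connects two different trees
            T.append((u, v, w))
            union(ru, rv)
    return T                            # |V|-1 edges on a connected graph
```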

16 Review: Correctness Of Kruskal’s Algorithm
Sketch of a proof that this algorithm produces an MST T: assume the algorithm is wrong, i.e., the result is not an MST. Then it adds a wrong edge at some point. If it adds a wrong edge, there must be a lower-weight edge that could be used instead (cut-and-paste argument). But the algorithm chooses the lowest-weight edge at each step. Contradiction. Again, it is important to be comfortable with cut-and-paste arguments.

17 Review: Amortized Analysis
Amortized analysis computes average running times without using probabilities: it analyzes the total cost over all operations performed. Worst case: each time an element is copied, the elements in the smaller set must have their pointers updated.
The 1st time an element is copied, the resulting set has ≥ 2 members; the 2nd time, ≥ 4; ...; the (lg n)-th time, ≥ n.
With our new Union(), any individual element is copied at most lg n times when forming the complete set from 1-element sets, since after lg n copies the resulting set already has n members. For n Union() operations, the time spent updating object pointers is O(n lg n).

18 Review: Amortized Analysis of Disjoint Sets
Since we have n elements, each copied at most lg n times, n Union()'s take O(n lg n) time. Therefore we say the amortized cost of a Union() operation is O(lg n). This is the aggregate method of amortized analysis: n operations take total time T(n), so the average cost of an operation is T(n)/n. In this style of analysis the same amortized cost is assigned to each operation, even though different operations may have different actual costs.

19 Accounting Analysis
Another method for analyzing the time to perform a sequence of operations. If we have more than one type of operation, each operation can have a different amortized cost. Example: dynamic tables (adjust the size of the table on the fly). Charge each operation a $3 amortized cost: use $1 to perform the immediate Insert() and store $2. When the table doubles, $1 of stored credit reinserts the old item and $1 reinserts another old item. The point is, we've already paid these costs. Upshot: constant (amortized) cost per operation.

20 Amortized Analysis: Accounting Method
Charge each operation an amortized cost. The amount not used immediately is stored in a "bank"; later operations can use the stored money. The balance must never go negative. The book also discusses the potential method, but we will not cover it here.

21 Accounting Method Example: Dynamic Tables
Implementing a table (e.g., a hash table) for dynamic data, we want to make it as small as possible. Problem: if too many items are inserted, the table may be too small. Idea: allocate more memory as needed.

22 Dynamic Tables
1. Init table size m = 1
2. Insert elements until the number of elements n > m
3. Generate a new table of size 2m
4. Reinsert the old elements into the new table (the table must occupy a contiguous block of memory)
5. (back to step 2)
What is the worst-case cost of an insert? One insert can be costly, but what is the total? Analyze the cost of n Insert()'s into an initially empty table, as simulated in the sketch below.
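A small Python simulation of this doubling scheme (function name and printed check are mine; the slides give only the scheme), confirming the per-insert costs tabulated on the next slides:

```python
def dynamic_table_costs(n):
    m, size, costs = 1, 0, []
    for i in range(1, n + 1):
        if size == m:                # table full: double it and reinsert old items
            costs.append(1 + size)   # 1 for the new item + `size` reinsertions
            m *= 2
        else:
            costs.append(1)          # room available: plain insert
        size += 1
    return costs

costs = dynamic_table_costs(100)
print(costs[:9])                     # [1, 2, 3, 1, 5, 1, 1, 1, 9]
print(sum(costs), "< 3n =", 300)     # total stays under the 3n bound
```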

23–31 Analysis of Dynamic Tables
Let ci = cost of the i-th insert: ci = i if i-1 is an exact power of 2, and 1 otherwise. Example (the original deck builds this table one insert per slide):

Operation    Table Size   Cost
Insert(1)        1         1
Insert(2)        2         1 + 1
Insert(3)        4         1 + 2
Insert(4)        4         1
Insert(5)        8         1 + 4
Insert(6)        8         1
Insert(7)        8         1
Insert(8)        8         1
Insert(9)       16         1 + 8

32 Aggregate Analysis
Cost of n Insert() operations: at most n operations of cost 1, plus the costs of expansions. An expansion happens only when i-1 is a power of 2, so the total expansion cost is 1 + 2 + 4 + ... + 2^⌊lg(n-1)⌋ < 2n. Total cost < n + 2n = 3n, so the average cost of an operation = (total cost)/(# operations) < 3. Asymptotically, then, a dynamic table costs the same as a fixed-size table: both O(1) per Insert() operation.

33 Accounting Analysis
We have shown that the amortized cost is $3 per Insert(): each Insert() 'pays' $3. By the time the table is full, each item has paid in, and there is enough credit to expand the table and reinsert every element.

34 Dynamic Programming Chap 15.

35 Dynamic Programming
Another strategy for designing algorithms is dynamic programming: a metatechnique, not an algorithm (like divide & conquer). The word "programming" is historical and predates computer programming. Use it when a problem breaks down into recurring small subproblems.

36 Dynamic Programming Examples
Matrix chain multiplication, longest common subsequence, optimal triangulation, robot motion planning, stereo matching, computational biology, variable elimination in probabilistic inference algorithms.

37 Dynamic Programming
A problem-solving methodology (like divide and conquer). Idea: divide into subproblems and solve the subproblems. Applicable to optimization problems. Ingredients:
1. Characterize the optimal solution
2. Recursively define the value of an optimal solution
3. Compute the values of optimal solutions bottom-up
4. Construct an optimal solution from the computed information

38 Dynamic programming
It is used when the solution can be recursively described in terms of solutions to subproblems (optimal substructure). The algorithm finds solutions to subproblems and stores them in memory for later use. This is more efficient than brute-force methods, which solve the same subproblems over and over again.

39 Dynamic Programming Example: Longest Common Subsequence
Longest common subsequence (LCS) problem: given two sequences x[1..m] and y[1..n], find the longest subsequence that occurs in both. Ex: x = {A B C B D A B}, y = {B D C A B A}; {B C} and {A A} are both subsequences of both. What is the LCS? Brute-force algorithm: for every subsequence of x, check whether it is a subsequence of y. How many subsequences of x are there? What will be the running time of the brute-force algorithm?

40 Longest Common Subsequence (LCS)
Application: comparison of two DNA strings. Ex: X = {A B C B D A B}, Y = {B D C A B A}. A longest common subsequence of these is B C B A (the matching symbols were highlighted in the original slide). A brute-force algorithm would compare each subsequence of X with the symbols in Y.

41 LCS Algorithm
Brute-force algorithm: there are 2^m subsequences of x to check against the n elements of y: O(n·2^m). We can do better. For now, let's only worry about finding the length of the LCS; when finished, we will see how to backtrack from this solution to the actual LCS. Notice that the LCS problem has optimal substructure: the subproblems are LCSs of pairs of prefixes of x and y.

42 LCS Algorithm
First we'll find the length of the LCS; later we'll modify the algorithm to find the LCS itself. Define Xi and Yj to be the prefixes of X and Y of length i and j respectively, and define c[i,j] to be the length of the LCS of Xi and Yj. Then the length of the LCS of X and Y is c[m,n].

43 LCS recursive solution
We start with i = j = 0 (empty prefixes of x and y). Since X0 and Y0 are empty strings, their LCS is always empty (i.e., c[0,0] = 0). The LCS of an empty string and any other string is empty, so for every i and j: c[0,j] = c[i,0] = 0.

44 LCS recursive solution
When we calculate c[i,j], we consider two cases. First case: x[i] = y[j]. One more symbol in strings X and Y matches, so the length of the LCS of Xi and Yj equals the length of the LCS of the smaller prefixes Xi-1 and Yj-1, plus 1: c[i,j] = c[i-1,j-1] + 1.

45 LCS recursive solution
Second case: x[i] ≠ y[j]. Since the symbols don't match, our solution is not improved, and the length of LCS(Xi, Yj) is the same as before, i.e., the maximum of LCS(Xi, Yj-1) and LCS(Xi-1, Yj): c[i,j] = max(c[i-1,j], c[i,j-1]). Why not just take the length of LCS(Xi-1, Yj-1)?

46 LCS Length Algorithm
LCS-Length(X, Y)
   m = length(X)             // get the number of symbols in X
   n = length(Y)             // get the number of symbols in Y
   for i = 1 to m
      c[i,0] = 0             // special case: Y0 (empty prefix)
   for j = 1 to n
      c[0,j] = 0             // special case: X0 (empty prefix)
   for i = 1 to m            // for all Xi
      for j = 1 to n         // for all Yj
         if (X[i] == Y[j])
            c[i,j] = c[i-1,j-1] + 1
         else
            c[i,j] = max(c[i-1,j], c[i,j-1])
   return c
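A direct Python transcription of LCS-Length (a sketch; Python's 0-based strings shift the character accesses by one):

```python
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] = length of LCS of the prefixes X[:i] and Y[:j];
    # row 0 and column 0 stay 0 (empty-prefix special cases).
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:               # symbols match
                c[i][j] = c[i - 1][j - 1] + 1
            else:                                  # take the better prefix result
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

c = lcs_length("ABCB", "BDCAB")
print(c[4][5])   # 3: the length of the LCS "BCB" from the example below
```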

47 LCS Example
We'll see how the LCS algorithm works on the following example:
X = ABCB
Y = BDCAB
What is the longest common subsequence of X and Y? LCS(X, Y) = BCB.

48–62 LCS Example (steps)
X = ABCB, m = |X| = 4; Y = BDCAB, n = |Y| = 5. Allocate the array c[0..4, 0..5] and initialize row 0 and column 0 to zero (for i = 1 to m: c[i,0] = 0; for j = 1 to n: c[0,j] = 0). Then fill in one cell at a time using:
   if (X[i] == Y[j]) c[i,j] = c[i-1,j-1] + 1
   else c[i,j] = max(c[i-1,j], c[i,j-1])
The original deck fills the table one cell per slide; the completed table is:

 j        0   1   2   3   4   5
 i   Yj       B   D   C   A   B
 0   Xi   0   0   0   0   0   0
 1   A    0   0   0   0   1   1
 2   B    0   1   1   1   1   2
 3   C    0   1   1   2   2   2
 4   B    0   1   1   2   2   3

63 LCS Algorithm Running Time
LCS algorithm calculates the values of each entry of the array c[m,n] So what is the running time? O(m*n) since each c[i,j] is calculated in constant time, and there are m*n elements in the array

64 How to find actual LCS
So far we have just found the length of the LCS, not the LCS itself. We want to modify the algorithm to output the longest common subsequence of X and Y. Each c[i,j] depends on c[i-1,j] and c[i,j-1], or on c[i-1,j-1]. For each c[i,j] we can say how it was obtained; for example, the bottom-right cell of the table above was obtained as c[i,j] = c[i-1,j-1] + 1 = 2 + 1 = 3.

65 How to find actual LCS - continued
Remember the recurrence for c[i,j]. We can start from c[m,n] and go backwards: whenever c[i,j] = c[i-1,j-1] + 1, remember x[i] (because x[i] is part of the LCS). When i = 0 or j = 0 (i.e., we have reached the beginning), output the remembered letters in reverse order. A sketch of this backtracking step follows.
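A Python sketch of the backtracking step (it reuses lcs_length from the earlier sketch; the tie-breaking rule when the two neighboring cells are equal is an arbitrary choice of mine):

```python
def lcs_backtrack(X, Y, c):
    i, j, out = len(X), len(Y), []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:          # c[i][j] = c[i-1][j-1] + 1: remember x[i]
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:  # move toward the cell the max came from
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))         # letters were collected in reverse order

print(lcs_backtrack("ABCB", "BDCAB", lcs_length("ABCB", "BDCAB")))  # BCB
```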

66–67 Finding LCS
Start at c[4,5] = 3 in the completed table and walk backwards, remembering x[i] whenever the cell was obtained as c[i-1,j-1] + 1. Remembered letters: B, C, B. LCS (reversed order): B C B. LCS (straight order): B C B (a palindrome, so the same either way).

68 Review: Dynamic Programming
Summary of the basic idea: Optimal substructure: optimal solution to problem consists of optimal solutions to subproblems Overlapping subproblems: few subproblems in total, many recurring instances of each Solve bottom-up, building a table of solved subproblems that are used to solve larger ones Variations: “Table” could be 3-dimensional, triangular, a tree, etc.

69 Matrix Chain Multiplication
Given a sequence of matrices A1, A2, ..., An, where Ai has dimensions p(i-1) × p(i), what is the optimal order of multiplication? Example of why two different orders matter: for A1 (10×100), A2 (100×5), A3 (5×50), the order (A1 A2) A3 costs 10·100·5 + 10·5·50 = 7500 scalar multiplications, while A1 (A2 A3) costs 100·5·50 + 10·100·50 = 75000. Brute-force strategy: examine all possible parenthesizations. Their number satisfies the recurrence P(n) = Σ(k=1..n-1) P(k)·P(n-k), whose solution grows as the Catalan numbers, Ω(4^n / n^(3/2)), so brute force is exponential.

70 Matrix Chain Multiplication
Substructure property: if we split the product Ai..Aj at position k, the total cost is the cost of solving the two subproblems (Ai..Ak and Ak+1..Aj) plus the cost of multiplying the two resulting matrices. Optimal substructure: find the split which yields the minimal total cost. Idea: try to define the cost recursively.

71 Matrix Chain Multiplication
Define the cost recursively: let m[i,j] be the minimum cost of multiplying Ai..Aj. Then m[i,i] = 0, and for i < j:
   m[i,j] = min over i ≤ k < j of ( m[i,k] + m[k+1,j] + p(i-1)·p(k)·p(j) )

72 Matrix Chain Multiplication
Option 1: compute the cost recursively, remembering the good splits. Draw the recursion tree for a small chain (e.g., four matrices) and notice how many subproblems repeat.

73 Matrix Chain Multiplication
Look up the pseudocode in the textbook. Its core is the recursive call: cost = RecursiveMatrixChain(p, i, k) + RecursiveMatrixChain(p, k+1, j) + p(i-1)·p(k)·p(j). One can prove by substitution that this recursive solution would still take exponential time.

74 Matrix Chain Multiplication
Idea: memoization. Look at the recursion tree: many of the subproblems repeat. Remember them and reuse the stored answers in later calls. How many subproblems do we have? Θ(n²): one for each pair (i,j) with 1 ≤ i ≤ j ≤ n. Compute the solutions to all subproblems bottom-up, memoizing each intermediate cost m[i,j] in a table.

75 Matrix Chain Multiplication
MatrixChainOrder(p)
   n = length(p) - 1
   for i = 1 to n
      m[i,i] = 0;                        // initialize: chains of length 1 cost nothing
   for l = 2 to n                        // l is the chain length
      for i = 1 to n-l+1                 // first compute all m[i,i+1], then all m[i,i+2], ...
         j = i+l-1
         m[i,j] = ∞
         for k = i to j-1
            q = m[i,k] + m[k+1,j] + p(i-1)·p(k)·p(j)
            if q < m[i,j]
               m[i,j] = q; s[i,j] = k;   // remember the k with minimum cost
   return m and s
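A runnable Python version of this bottom-up procedure (p is the dimension list, so matrix Ai is p[i-1] × p[i]; the tables are padded so the 1-based indices of the pseudocode carry over):

```python
import math

def matrix_chain_order(p):
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0 by initialization
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                   # l is the chain length
        for i in range(1, n - l + 2):           # all chains A_i..A_j of length l
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):               # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k                 # remember the k with minimum cost
    return m, s

m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3], s[1][3])   # 7500 2: split as (A1 A2) A3, matching slide 69
```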

76 Matrix Chain Multiplication
Example

77–78 Dynamic Programming: What Is the Structure of the Subproblem?
Common pattern: an optimal solution requires making a choice, and that choice leads to an optimal solution of a subproblem. The hard part is identifying the optimal subproblem structure: how many subproblems are there, and how many choices do we have for which subproblem to use?
Matrix chain multiplication: 2 subproblems per split, j-i choices of split point.
LCS: 3 subproblems, 3 choices.
Subtleties (graph examples): shortest paths have optimal substructure, but longest simple paths do not.

79 Review: Strongly Connected components
The SCC algorithm induces a component graph, which is a DAG. (The slide's figure shows an 8-vertex graph A–H annotated with DFS discovery|finish times, e.g., 1|10 and 11|16, and its component graph with SCCs ABD, EF, CH, and G.)

80 Review: Strongly Connected component
How to use DFS to find strongly connected components: when DFS_visit runs recursively, it stops once all nodes reachable from the start are visited; as a result, you get one DFS tree. Observation: a DFS tree may contain one or more strongly connected components. How can we run DFS so that DFS_visit finishes right after completing exactly one SCC? We would need to start at a node that belongs to a sink of the DAG representing the SCC graph, but we do not know that DAG. How do we find the source?

81 Review: Strongly connected components
The vertex with the highest finishing time in DFS belongs to a source SCC of the component graph. Why? Property: suppose you have two SCCs C and C'. If there is an edge from C to C', then the first-visited vertex of C has a higher finishing time than any vertex of C': if DFS starts in C, it visits all vertices in C and C' before it gets "stuck".

82 Review: Strongly Connected Components
1. Call DFS on G to compute the finishing times f[u] of each vertex.
2. Create the transpose graph (directions of all edges reversed).
3. Call DFS on the transpose graph, but in the main loop of DFS consider the vertices in decreasing order of f[u].
4. Output the vertices of each tree in the depth-first forest formed in step 3 as a separate strongly connected component.
Example: see the sketch below and the figure on the next slide.
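A Python sketch of these four steps (this is Kosaraju's two-pass algorithm; the recursive DFS limits it to graphs shallower than Python's recursion limit):

```python
def strongly_connected_components(adj):
    # adj: {u: [v, ...]}; every vertex must appear as a key.
    # Pass 1: DFS on G, recording vertices in order of increasing finish time.
    visited, finish_order = set(), []
    def dfs1(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                dfs1(v)
        finish_order.append(u)
    for u in adj:
        if u not in visited:
            dfs1(u)
    # Step 2: build the transpose graph (all edges reversed).
    radj = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            radj[v].append(u)
    # Pass 2: DFS on the transpose, taking roots in decreasing finish time;
    # each tree of this forest is one strongly connected component.
    visited.clear()
    sccs = []
    def dfs2(u, comp):
        visited.add(u)
        comp.append(u)
        for v in radj[u]:
            if v not in visited:
                dfs2(v, comp)
    for u in reversed(finish_order):
        if u not in visited:
            comp = []
            dfs2(u, comp)
            sccs.append(comp)
    return sccs
```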

83 13|14 11|16 1|10 2|7 3 | 4 12|15 8|9 5|6 D A C B E F G H ABD EF CH G A B E F 2|5 1|6 7|10 8|9 D C H G 3|4 12|13 14|15 16|17

