All-Pairs Shortest Paths and Transitive Closure


Dynamic Programming: All-Pairs Shortest Paths and Transitive Closure (also Longest Common Subsequence)

Dynamic Programming
Divide-and-Conquer: a divisive (top-down) method for breaking a complex problem into sub-problems of fractional size. E.g. Fibonacci numbers: Fn = Fn−1 + Fn−2. Naïve recursion explodes: computing Fn takes on the order of φⁿ recursive calls (draw the recursion tree), yet a simple loop computes it in n steps.
Dynamic Programming: agglomerative, i.e. it solves problems bottom-up by storing the results of sub-problems rather than re-computing them. Solutions to sub-problems are combined to solve the main problem. It applies when there is a polynomial number of overlapping sub-problems, where naïve recursion would produce excessive recomputation. CLR 16.2
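
To make the contrast concrete, here is a minimal Python sketch (function names are illustrative, not from the slides) of the two approaches to Fn:

def fib_naive(n):
    # Top-down recursion without caching: roughly phi^n recursive calls.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_bottom_up(n):
    # Dynamic programming: n steps, each sub-problem solved once and stored.
    f = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]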

Memory functions
[Figure: bottom-up evaluation order F0 → F1 → F2 → F3 → F4 → …]
Bottom-up evaluation can be simulated in a top-down recursive algorithm by a technique called memoization: the values of recursive calls are saved in a table for subsequent lookup. Because the table's structure is generally not known ahead of time (except for the number of entries), the call's arguments are usually hashed. Hence, if most or all sub-problems are needed, bottom-up is faster (by a constant factor). On the other hand, if many sub-problems are never encountered in the recursion, memoization can be faster. 8.4
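
A minimal sketch of memoization in Python (illustrative only); here functools.lru_cache plays the role of the hashed table of saved call values:

from functools import lru_cache

@lru_cache(maxsize=None)        # table of saved results, keyed by the call's argument
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)   # each sub-problem is computed only once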

Problem
Give change for the amount n using the minimum number of coins valued at 1 = d1 < d2 < … < dm, assuming unlimited quantities of coins for each of the m denominations. I.e. find the smallest F(n) = q1 + q2 + … + qm such that each qi ≥ 0 and n = q1 d1 + q2 d2 + … + qm dm.

Solution
Let F(n) be the minimum number of coins whose values add up to n. Clearly F(0) = 0. The amount n can only be obtained by adding one coin of denomination dj ≤ n to the amount n − dj. Therefore, we can consider all such denominations for j = 1, 2, …, m and select the one minimizing F(n − dj) + 1. So we have the following recurrence:

F(n) = min{ F(n − dj) : dj ≤ n } + 1   if n > 0
F(0) = 0

Algorithm
Input: n ≥ 0 and an array D[1 … m] of denominations (D[1] = 1).
Output: the minimum number of coins adding up to n.

F[0] ← 0
for i ← 1 to n do
    temp ← ∞; j ← 1
    while j ≤ m and i ≥ D[j] do
        temp ← min(F[i − D[j]], temp)
        j ← j + 1
    F[i] ← temp + 1
return F[n]
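
The same pseudocode translated into a runnable Python sketch (the function and variable names are mine, not the slides'):

import math

def min_coins(n, denominations):
    # denominations must include 1 so every amount is reachable.
    f = [0] * (n + 1)                 # f[i] = F(i) from the recurrence
    for i in range(1, n + 1):
        best = math.inf
        for d in denominations:
            if d <= i:
                best = min(best, f[i - d])
        f[i] = best + 1               # add the one coin of denomination d
    return f[n]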

Example
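As an illustration (my numbers, not the original slide's): with denominations D = [1, 3, 4] and n = 6, the recurrence fills F = [0, 1, 2, 1, 1, 2, 2], so F(6) = 2 (e.g. 6 = 3 + 3); min_coins(6, [1, 3, 4]) from the sketch above returns 2.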

Floyd’s algorithm for All-Pairs Shortest Distances
Given a directed graph G = ⟨{1, …, n}, E⟩ in adjacency-matrix format, where

W[i, j] = 0                                      if i = j
          the weight of directed edge (i, j)     if i ≠ j and (i, j) ∈ E
          ∞                                      if i ≠ j and (i, j) ∉ E

(assume there are no negative cycles).
Idea: let dk[i, j] = the length of a shortest path from vertex i to vertex j that does not go through any vertex numbered higher than k. Define

dk[i, j] = W[i, j]                                         if k = 0
           min(dk−1[i, j], dk−1[i, k] + dk−1[k, j])        if k > 0

[Figure illustrating the recurrence: a shortest path from i to j with intermediate vertices ≤ k; k appears at most once on it, splitting it into i → k and k → j pieces whose intermediates are ≤ k − 1. Include figure 8.6 and the example in figure 8.7 from the book.]

Floyd-Warshall Algorithm
Using dynamic programming, computing these values takes Θ(n³):

for i, j = 1 to n                 ► all pairs of indices
    d0[i, j] ← W[i, j]            ► k = 0
for k = 1 to n
    for i, j = 1 to n             ► all pairs again
        dk[i, j] ← min(dk−1[i, j], dk−1[i, k] + dk−1[k, j])
return D = n × n matrix dn        ► answer

It is possible to execute the algorithm in place, so it is not necessary to distinguish between the old and new values of d. [Exercise]
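
A runnable Python sketch (my names), using the in-place variant mentioned above, so a single matrix d is updated for each k:

import math

def floyd_warshall(w):
    # w: n x n weight matrix with w[i][i] == 0 and math.inf for missing edges.
    # Returns the matrix of shortest-path distances (no negative cycles assumed).
    n = len(w)
    d = [row[:] for row in w]             # d starts as d0 = W
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

Updating in place is safe because d[k][k] = 0, so the entries d[i][k] and d[k][j] do not change during pass k.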

Warshall’s algorithm for Transitive Closure
Given: G = ⟨V, E⟩ where V = {1, 2, …, n}.
Compute: G* = ⟨V, E*⟩ where E* = {(i, j) : i → … → j}.
Use the Floyd-Warshall algorithm with Boolean weights:

W[i, j] = True    if (i, j) ∈ E
          False   if (i, j) ∉ E

So the recurrence for computing E* is

t0[i, j] = T   if i = j or (i, j) ∈ E
           F   if i ≠ j and (i, j) ∉ E
tk[i, j] = tk−1[i, j] ∨ (tk−1[i, k] ∧ tk−1[k, j])

Compare with Boolean matrix multiplication: D* = ⋁i Dⁱ, with AND for × and OR for +.
[Include figure 8.3 and the example in figure 8.4 from the book.]
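
The Boolean specialization as a Python sketch (illustrative names):

def warshall(adj):
    # adj: n x n Boolean adjacency matrix. Returns the reachability matrix t,
    # where t[i][j] is True iff i == j or there is a path from i to j.
    n = len(adj)
    t = [[i == j or adj[i][j] for j in range(n)] for i in range(n)]   # t0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t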

Transitive Closure in Logic
Logical definition: Ei(x, y) holds iff there is a path of length at most i from x to y. So for i ≥ n this is the reflexive transitive closure.
[Slide notes: take out of math; transitive only; speed-up.]
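
Presumably the "speed-up" note refers to doubling the path length at each step, E2i(x, y) = ∃z (Ei(x, z) ∧ Ei(z, y)), i.e. repeated squaring of the Boolean matrix; this is my reading, not stated on the slide. A Python sketch of that idea (names are mine):

def closure_by_squaring(adj):
    # Reflexive transitive closure via repeated squaring of the Boolean
    # adjacency matrix: O(log n) Boolean matrix products. A guess at the
    # "speed-up" the slide alludes to, not the slide's own algorithm.
    n = len(adj)
    e = [[i == j or adj[i][j] for j in range(n)] for i in range(n)]   # paths of length <= 1
    steps = 1
    while steps < n:
        e = [[any(e[i][k] and e[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]                                       # E_{2i} from E_i
        steps *= 2
    return e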

Transitive closure of a simple graph in quadratic time
A symmetric Boolean matrix A is used as both input and output.

for i = 1 to n                                      (from top row to bottom row)
    if there is a least j > i such that A[i, j] then
        let A[j] = A[j] or A[i]                     (pointwise, as rows)
    else                                            (i is the greatest index of its component)
        for j < i: if A[i, j] then                  (same component)
            let A[j] = A[i]                         (backfill previous rows)
    let A[i, i] = false                             (keep the graph simple)

See handout for analysis. Isn’t it much easier to just do a DFS and number the components?
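
A direct Python transcription of the pseudocode above (a sketch following the slide as written, not an independently verified implementation):

def transitive_closure_simple(adj):
    # adj: symmetric Boolean matrix of a simple undirected graph, modified in
    # place; row i ends up listing the vertices in i's connected component
    # (with A[i][i] cleared each iteration, per the "keep it simple" step).
    n = len(adj)
    for i in range(n):
        # least j > i currently marked adjacent to i, if any
        j = next((j for j in range(i + 1, n) if adj[i][j]), None)
        if j is not None:
            # push everything known about i's component onto row j
            adj[j] = [a or b for a, b in zip(adj[j], adj[i])]
        else:
            # i is the highest-numbered vertex of its component:
            # backfill the completed row into all earlier members
            for j in range(i):
                if adj[i][j]:
                    adj[j] = list(adj[i])
        adj[i][i] = False
    return adj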

Example
Note: read row A[i] as the set {j : A[i, j]}. We set out to prove that all and only paths are found. Also, the algorithm is monotone. (Need a better example.)

Soundness
Theorem: If j is put in A[i], then i ~ j.
Proof: By induction on the statements executed; the idea is that the algorithm only ORs rows that have a known path between them.
Base case: Initially, j in A[i] means A[i, j], i.e. the edge i − j, so i ~ j.
Induction step: In line 3, if k is put into A[j], then either it was there before, and k ~ j holds trivially by the IH, or else it is in A[i], which means k ~ i by the IH; together with i ~ j (by the IH on the test in line 2) this yields k ~ j. In line 6, if k is put into A[j], then it was in A[i], which implies i ~ k by the IH; combining that with i ~ j (by the IH on the test in line 5) yields k ~ j. (Note that we do not use monotonicity here, even though an element of A[j] is never removed, except possibly j itself to maintain simplicity.)

Completeness
Claim: At iteration m, A[m] contains Cm, the set of all nodes with a path to m via nodes numbered less than m.
Proof: Let [m] = {k : k ~ m}. Proceed by induction on m.
Basis: If m is the first (smallest) node in [m], then Cm = {i ≥ m : i − m} is a star around m, which is clearly contained in A[m] from the beginning, since these are just edges.
Induction: Let m' be the largest node < m in [m]. By the IH, at iteration m', A[m'] contains Cm', of which m is a member. So m is in A[m'], and it is the smallest such node above m'. Hence, by line 2, line 3 puts all of Cm' into A[m] at iteration m'. And if a node k in Cm uses m' to reach m, then k must already be in Cm'. (Note: the second part of the proof isn't quite right because it ignores nodes directly attached to m.)