Fibonacci Numbers F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1 – 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, … The straightforward recursive procedure is slow! Why? How slow? Let's draw the recursion tree

Fibonacci Numbers (2) We keep calculating the same value over and over! [Recursion tree for F(6) = 8: the root F(6) branches into F(5) and F(4); in the full tree F(4) is computed 2 times, F(3) 3 times, F(2) 5 times, F(1) 8 times, and F(0) 5 times.]
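To make the waste concrete, here is a minimal Python sketch of the naive recursion (the function name is my own, not from the slides):

def fib_naive(n):
    """Direct translation of F(n) = F(n-1) + F(n-2); exponential time."""
    if n < 2:
        return n  # base cases: F(0) = 0, F(1) = 1
    # The two recursive calls recompute overlapping subproblems:
    # fib_naive(n - 2) is evaluated again inside fib_naive(n - 1).
    return fib_naive(n - 1) + fib_naive(n - 2)

print(fib_naive(6))  # 8 – but this already takes 25 calls

Counting the invocations reproduces the tree on the slide: 25 calls just for F(6).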

Fibonacci Numbers (3) How many summations are there? The golden ratio φ = (1 + √5)/2 ≈ 1.618 governs the growth: F(n) ≈ φ^n/√5, i.e., F(n) grows roughly like 1.6^n. Our recursion tree has only 0s and 1s as leaves, thus we have about 1.6^n summations. Running time is exponential!

Fibonacci Numbers (4) We can calculate F(n) in linear time by remembering solutions to the solved subproblems – dynamic programming. Compute the solution in a bottom-up fashion. Trade space for time! – In this case, only two values need to be remembered at any time (far less than the depth of the recursion stack!)

Fibonacci(n)
  F(0) ← 0
  F(1) ← 1
  for i ← 2 to n do
    F(i) ← F(i-1) + F(i-2)
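A bottom-up Python sketch of the same idea, keeping only the last two values (again an illustrative name):

def fib_dp(n):
    """Bottom-up Fibonacci: O(n) time, O(1) space."""
    if n < 2:
        return n
    prev, curr = 0, 1  # F(0) and F(1)
    for _ in range(2, n + 1):
        # Only the two most recent values are ever needed.
        prev, curr = curr, prev + curr
    return curr

assert fib_dp(10) == 55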

Optimization Problems We have to choose one solution out of many – one with the optimal (minimum or maximum) value. A solution exhibits structure – it consists of a sequence of choices that were made. What choices have to be made to arrive at an optimal solution? The algorithm computes the optimal value plus, if needed, the optimal solution itself

Multiplying Matrices Two matrices, A – an n×m matrix – and B – an m×k matrix, can be multiplied to get C with dimensions n×k, using nmk scalar multiplications. Problem: compute a product of many matrices efficiently. Matrix multiplication is associative – (AB)C = A(BC)

Multiplying Matrices (2) The parenthesization matters! Consider A·B·C·D, where – A is 30×1, B is 1×40, C is 40×10, D is 10×25. Costs: – ((AB)C)D = 1200 + 12000 + 7500 = 20700 – (AB)(CD) = 1200 + 10000 + 30000 = 41200 – A((BC)D) = 400 + 250 + 750 = 1400 We need to parenthesize optimally

Multiplying Matrices (3) Let M(i,j) be the minimum number of scalar multiplications necessary to compute A_i · A_{i+1} ⋯ A_j. Key observations – The outermost parenthesization partitions the chain of matrices (i,j) at some k, i ≤ k < j: (A_i … A_k)(A_{k+1} … A_j) – The optimal parenthesization of matrices (i,j) has optimal parenthesizations on either side of k: for matrices (i,k) and (k+1,j)

Multiplying Matrices (4) We try out all possible k. Recurrence (with matrix A_i of dimensions d_{i-1}×d_i): M(i,i) = 0, and M(i,j) = min over i ≤ k < j of { M(i,k) + M(k+1,j) + d_{i-1}·d_k·d_j } for i < j. A direct recursive implementation is exponential – there is a lot of duplicated work (why?) But there are only Θ(n²) different subproblems (i,j), where 1 ≤ i ≤ j ≤ n

Multiplying Matrices (5) Thus, it requires only Θ(n²) space to store the optimal cost M(i,j) for each of the subproblems: half of a 2d array M[1..n,1..n]

Matrix-Chain-Order(d_0…d_n)
 1 for i ← 1 to n do
 2   M[i,i] ← 0
 3 for l ← 2 to n do
 4   for i ← 1 to n-l+1 do
 5     j ← i+l-1
 6     M[i,j] ← ∞
 7     for k ← i to j-1 do
 8       q ← M[i,k] + M[k+1,j] + d_{i-1}·d_k·d_j
 9       if q < M[i,j] then
10         M[i,j] ← q
11         c[i,j] ← k
12 return M, c

Multiplying Matrices (6) After execution: M[1,n] contains the value of the optimal solution, and c contains the optimal subdivisions (choices of k) of every subproblem into two subsubproblems. A simple recursive procedure Print-Optimal-Parens(c, i, j) can be used to reconstruct an optimal parenthesization. Let us run the algorithm on d = [10, 20, 3, 5, 30]
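A direct Python transcription of Matrix-Chain-Order, kept close to the pseudocode (a sketch; the 1-based tables are padded with an unused row and column):

def matrix_chain_order(d):
    """d[i-1] x d[i] are the dimensions of matrix A_i, for i = 1..n."""
    n = len(d) - 1
    # M[i][j]: min scalar multiplications for A_i..A_j; c[i][j]: best split k.
    M = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # l = length of the chain
        for i in range(1, n - l + 2):      # i = start of the chain
            j = i + l - 1                  # j = end of the chain
            M[i][j] = float("inf")
            for k in range(i, j):          # try every split point
                q = M[i][k] + M[k + 1][j] + d[i - 1] * d[k] * d[j]
                if q < M[i][j]:
                    M[i][j], c[i][j] = q, k
    return M, c

M, c = matrix_chain_order([10, 20, 3, 5, 30])
print(M[1][4], c[1][4])  # 1950 2, i.e., the optimal split is (A1 A2)(A3 A4)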

Multiplying Matrices (7) Running time – It is easy to see that it is O(n³) – It turns out it is also Ω(n³), hence Θ(n³) From exponential time to polynomial!

Memoization If we still like recursion very much, we can structure our algorithm as a recursive algorithm: – Initialize all M elements to ∞ and call Lookup-Chain(d, i, j)

Lookup-Chain(d,i,j)
 1 if M[i,j] < ∞ then
 2   return M[i,j]
 3 if i = j then
 4   M[i,j] ← 0
 5 else for k ← i to j-1 do
 6   q ← Lookup-Chain(d,i,k) + Lookup-Chain(d,k+1,j) + d_{i-1}·d_k·d_j
 7   if q < M[i,j] then
 8     M[i,j] ← q
 9 return M[i,j]
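The same top-down idea in Python, using functools.lru_cache as the memo table instead of an explicit M array (an illustrative sketch, not the slides' code):

from functools import lru_cache

def lookup_chain(d):
    n = len(d) - 1

    @lru_cache(maxsize=None)   # caches each (i, j) result: the memoization
    def best(i, j):
        if i == j:
            return 0
        # Same recurrence as the bottom-up version, computed on demand.
        return min(best(i, k) + best(k + 1, j) + d[i - 1] * d[k] * d[j]
                   for k in range(i, j))

    return best(1, n)

print(lookup_chain([10, 20, 3, 5, 30]))  # 1950, matching the table version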

Dynamic Programming In general, to apply dynamic programming, we have to address a number of issues: – 1. Show optimal substructure – an optimal solution to the problem contains within it optimal solutions to sub-problems. A solution to a problem consists of: making a choice out of a number of possibilities (look at what the possible choices can be), and solving one or more sub-problems that are the result of that choice (characterize the space of sub-problems). Show that solutions to sub-problems must themselves be optimal for the whole solution to be optimal (use a cut-and-paste argument)

Dynamic Programming (2) – 2. Write a recurrence for the value of an optimal solution: M_opt = min over all choices k of { (sum of M_opt of all sub-problems resulting from choice k) + (the cost associated with making choice k) } – Show that the number of different instances of sub-problems is bounded by a polynomial

Dynamic Programming (3) – 3. Compute the value of an optimal solution in a bottom-up fashion, so that you always have the necessary sub-results pre-computed (or use memoization). See if it is possible to reduce the space requirements by "forgetting" solutions to sub-problems that will not be used any more – 4. Construct an optimal solution from the computed information (which records the sequence of choices made that leads to an optimal solution)

Longest Common Subsequence Two text strings are given: X and Y. There is a need to quantify how similar they are, e.g. for: – comparing DNA sequences in studies of the evolution of different species – spell checkers One of the measures of similarity is the length of a Longest Common Subsequence (LCS)

LCS: Definition Z is a subsequence of X if it is possible to generate Z by skipping some (possibly no) characters of X. For example: X = ACGGTTA, Y = CGTAT, LCS(X,Y) = CGTA or CGTT. To solve the LCS problem we have to find the skips that generate LCS(X,Y) from X, and the skips that generate LCS(X,Y) from Y

LCS: Optimal Substructure We make Z empty and proceed from the ends of X_m = x_1 x_2 … x_m and Y_n = y_1 y_2 … y_n – If x_m = y_n, append this symbol to the beginning of Z, and find optimally LCS(X_{m-1}, Y_{n-1}) – If x_m ≠ y_n, skip either a letter from X or a letter from Y; decide which by comparing LCS(X_m, Y_{n-1}) and LCS(X_{m-1}, Y_n) – Cut-and-paste argument

LCS: Recurrence Let c[i,j] = |LCS(X_i, Y_j)|. Then: c[i,j] = 0 if i = 0 or j = 0; c[i,j] = c[i-1,j-1] + 1 if i,j > 0 and x_i = y_j; c[i,j] = max(c[i-1,j], c[i,j-1]) otherwise. The algorithm could easily be extended by allowing more editing operations in addition to copying and skipping (e.g., changing a letter). Observe: conditions in the problem restrict sub-problems (What is the total number of sub-problems?)

LCS: Compute the Optimum

LCS-Length(X, Y, m, n)
 1 for i ← 1 to m do
 2   c[i,0] ← 0
 3 for j ← 0 to n do
 4   c[0,j] ← 0
 5 for i ← 1 to m do
 6   for j ← 1 to n do
 7     if x_i = y_j then
 8       c[i,j] ← c[i-1,j-1] + 1
 9       b[i,j] ← "copy"
10     else if c[i-1,j] ≥ c[i,j-1] then
11       c[i,j] ← c[i-1,j]
12       b[i,j] ← "skipX"
13     else
14       c[i,j] ← c[i,j-1]
15       b[i,j] ← "skipY"
16 return c, b
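A Python sketch of LCS-Length; it fills the same c and b tables bottom-up (strings are 0-indexed here, so x[i-1] plays the role of x_i):

def lcs_length(x, y):
    """Returns c (lengths) and b (choices); c[i][j] = |LCS(x[:i], y[:j])|."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 and column 0 stay 0
    b = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:             # x_i = y_j: extend the LCS
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "copy"
            elif c[i - 1][j] >= c[i][j - 1]:     # skip a letter of x
                c[i][j] = c[i - 1][j]
                b[i][j] = "skipX"
            else:                                # skip a letter of y
                c[i][j] = c[i][j - 1]
                b[i][j] = "skipY"
    return c, b

c, b = lcs_length("ACGGTTA", "CGTAT")
print(c[7][5])  # 4, e.g. CGTA or CGTT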

LCS: Example Let's run: X = ACGGTTA, Y = CGTAT. How much can we reduce our space requirements if we do not need to reconstruct the LCS?
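One possible answer, as a sketch: row i of the c table depends only on row i-1, so two rows suffice when only the length is needed – O(min(m,n)) extra space if the shorter string indexes the rows:

def lcs_length_two_rows(x, y):
    """Length-only LCS: O(m*n) time, O(n) extra space."""
    prev = [0] * (len(y) + 1)          # row i-1 of the c table
    for xi in x:
        curr = [0] * (len(y) + 1)      # row i, rebuilt left to right
        for j, yj in enumerate(y, start=1):
            if xi == yj:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[-1]

assert lcs_length_two_rows("ACGGTTA", "CGTAT") == 4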

Shortest Path Generalize distance to the weighted setting. Digraph G = (V,E) with weight function w: E → R (assigning real values to edges). Weight of path p = v_1 → v_2 → … → v_k is w(p) = Σ_{i=1}^{k-1} w(v_i, v_{i+1}). Shortest path = a path of the minimum weight. Applications – static/dynamic network routing – robot motion planning – map/route generation in traffic

Shortest-Path Problems – Single-source (single-destination). Find a shortest path from a given source (vertex s) to each of the vertices. The topic of this lecture. – Single-pair. Given two vertices, find a shortest path between them. A solution to the single-source problem solves this problem efficiently, too. – All-pairs. Find shortest paths for every pair of vertices. Dynamic programming algorithm. – Unweighted shortest paths – BFS

Optimal Substructure Theorem: subpaths of shortest paths are shortest paths Proof (cut and paste) – if some subpath were not the shortest path, one could substitute the shorter subpath and create a shorter total path

Triangle Inequality Definition – δ(u,v) = weight of a shortest path from u to v Theorem – δ(u,v) ≤ δ(u,x) + δ(x,v) for any x Proof – a shortest path u ⇝ v is no longer than any other path u ⇝ v – in particular, the path concatenating the shortest path u ⇝ x with the shortest path x ⇝ v

Negative Weights and Cycles? Negative edges are OK, as long as there are no negative-weight cycles (otherwise paths with arbitrarily small weights would be possible). Shortest paths can have no cycles (otherwise we could improve them by removing cycles) – Any shortest path in graph G can be no longer than n − 1 edges, where n is the number of vertices

Relaxation For each vertex v in the graph, we maintain d[v], the estimate of the shortest path from s, initialized to ∞ at the start. Relaxing an edge (u,v) means testing whether we can improve the shortest path to v found so far by going through u. [Figure: two before/after examples of relaxing an edge (u,v) of weight 2.]

Relax(u,v,w)
  if d[v] > d[u] + w(u,v) then
    d[v] ← d[u] + w(u,v)
    π[v] ← u

Dijkstra's Algorithm Non-negative edge weights. Greedy, similar to Prim's algorithm for MST. Like breadth-first search (if all weights = 1, one can simply use BFS). Use Q, a priority queue keyed by d[v] (BFS used a FIFO queue; here we use a PQ, which is re-organized whenever some d decreases). Basic idea – maintain a set S of solved vertices – at each step select the "closest" vertex u, add it to S, and relax all edges from u

Dijkstra's Pseudocode Graph G, weight function w, root s

Dijkstra(G,w,s)
01 for each v ∈ V[G] do
02   d[v] ← ∞
03   π[v] ← NIL
04 d[s] ← 0
05 S ← ∅
06 Q ← V[G]
07 while Q ≠ ∅ do
08   u ← Extract-Min(Q)
09   S ← S ∪ {u}
10   for each v ∈ Adj[u] do
11     Relax(u,v,w)        ⇐ relaxing edges
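A runnable Python sketch of Dijkstra built on the standard-library heapq. heapq has no Decrease-Key, so this version pushes duplicate queue entries and skips stale ones – a common workaround, not literally the pseudocode above (the example graph is illustrative):

import heapq

def dijkstra(adj, s):
    """adj: {u: [(v, weight), ...]} with non-negative weights; s: source.
    Returns shortest-path estimates d and predecessors pi."""
    d = {v: float("inf") for v in adj}
    pi = {v: None for v in adj}
    d[s] = 0
    pq = [(0, s)]                        # priority queue keyed by d[v]
    while pq:
        dist_u, u = heapq.heappop(pq)    # Extract-Min
        if dist_u > d[u]:
            continue                     # stale entry: u already solved
        for v, w in adj[u]:
            if d[v] > d[u] + w:          # Relax(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
                heapq.heappush(pq, (d[v], v))  # stands in for Decrease-Key
    return d, pi

graph = {"s": [("u", 10), ("x", 5)], "u": [("v", 1), ("x", 2)],
         "v": [("y", 4)], "x": [("u", 3), ("v", 9), ("y", 2)],
         "y": [("s", 7), ("v", 6)]}
print(dijkstra(graph, "s")[0])  # {'s': 0, 'u': 8, 'v': 9, 'x': 5, 'y': 7}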

Dijkstra's Example [Figure: four snapshots of Dijkstra's algorithm running on a five-vertex graph with vertices s, u, v, x, y; at each step the closest unsolved vertex is extracted and its outgoing edges are relaxed.]

Dijkstra's Example (2) [Figure: the final two snapshots of the same run.] Observe – relaxation step (lines 10-11) – setting d[v] updates Q (needs Decrease-Key) – similar to Prim's MST algorithm

Dijkstra's Correctness We will prove that whenever u is added to S, d[u] = δ(s,u), i.e., that d is minimum, and that equality is maintained thereafter. Proof – Note that ∀v, d[v] ≥ δ(s,v) – Let u be the first vertex picked such that there is a shorter path than d[u], i.e., such that d[u] > δ(s,u) – We will show that this assumption leads to a contradiction

Dijkstra's Correctness (2) Let y be the first vertex in V − S on the actual shortest path from s to u. Then it must be that d[y] = δ(s,y), because – d[x] is set correctly for y's predecessor x ∈ S on the shortest path (by the choice of u as the first vertex for which d is set incorrectly) – when the algorithm inserted x into S, it relaxed the edge (x,y), assigning d[y] the correct value

Dijkstra's Correctness (3) But then d[u] > δ(s,u) ≥ δ(s,y) = d[y] (weights are non-negative), so the algorithm would have chosen y (from the PQ) to process next, not u. Contradiction. Thus d[u] = δ(s,u) at the time of insertion of u into S, and Dijkstra's algorithm is correct

Dijkstra's Running Time Extract-Min is executed |V| times; Decrease-Key is executed |E| times. Time = |V|·T(Extract-Min) + |E|·T(Decrease-Key). T depends on the implementation of Q:

Q               | T(Extract-Min)   | T(Decrease-Key) | Total
array           | Θ(V)             | Θ(1)            | Θ(V²)
binary heap     | Θ(lg V)          | Θ(lg V)         | Θ(E lg V)
Fibonacci heap  | Θ(lg V) (amort.) | Θ(1) (amort.)   | Θ(V lg V + E)

Bellman-Ford Algorithm Dijkstra's doesn't work when there are negative edges: – Intuition – we cannot be greedy any more on the assumption that the lengths of paths will only increase in the future. The Bellman-Ford algorithm detects negative cycles (returns false) or returns the shortest-path tree

Bellman-Ford Algorithm

Bellman-Ford(G,w,s)
01 for each v ∈ V[G] do
02   d[v] ← ∞
03 d[s] ← 0
04 π[s] ← NIL
05 for i ← 1 to |V[G]|-1 do
06   for each edge (u,v) ∈ E[G] do
07     Relax(u,v,w)
08 for each edge (u,v) ∈ E[G] do
09   if d[v] > d[u] + w(u,v) then return false
10 return true
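A Python sketch of Bellman-Ford over an explicit edge list (illustrative; it also returns the distance and predecessor maps along with the cycle-detection flag):

def bellman_ford(vertices, edges, s):
    """vertices: iterable of nodes; edges: [(u, v, weight), ...]; s: source.
    Returns (ok, d, pi); ok is False iff a negative cycle is reachable."""
    d = {v: float("inf") for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(d) - 1):        # |V| - 1 passes over every edge
        for u, v, w in edges:
            if d[u] + w < d[v]:        # Relax(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
    for u, v, w in edges:              # an edge still relaxable => neg. cycle
        if d[u] + w < d[v]:
            return False, d, pi
    return True, d, pi

edges = [("s", "t", 6), ("s", "y", 7), ("t", "x", 5), ("t", "y", 8),
         ("t", "z", -4), ("x", "t", -2), ("y", "x", -3), ("y", "z", 9),
         ("z", "s", 2), ("z", "x", 7)]
ok, d, _ = bellman_ford("stxyz", edges, "s")
print(ok, d)  # True {'s': 0, 't': 2, 'x': 4, 'y': 7, 'z': -2}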

Bellman-Ford Example [Figure: four snapshots of Bellman-Ford on a five-vertex graph with vertices s, t, x, y, z and edge weights including 5 and −4; each pass relaxes every edge of the graph.]

Bellman-Ford Example (2) [Figure: the final snapshot of the run.] Bellman-Ford running time: – (|V|−1)·|E| + |E| = Θ(VE)

Correctness of Bellman-Ford Let δ_i(s,u) denote the length of a path from s to u that is shortest among all paths that contain at most i edges. Prove by induction that d[u] = δ_i(s,u) after the i-th iteration of Bellman-Ford – Base case (i = 0): trivial – Inductive step (say d[u] = δ_{i-1}(s,u)): Either δ_i(s,u) = δ_{i-1}(s,u), or δ_i(s,u) = δ_{i-1}(s,z) + w(z,u) for some z. In an iteration we try to relax each edge ((z,u) included), so we will catch both cases; thus d[u] = δ_i(s,u)

Correctness of Bellman-Ford (2) After n−1 iterations, d[u] = δ_{n-1}(s,u) for each vertex u. If there is still some edge to relax in the graph, then there is a vertex u such that δ_n(s,u) < δ_{n-1}(s,u). But there are only n vertices in G – we have a cycle, and it must be negative. Otherwise, d[u] = δ_{n-1}(s,u) = δ(s,u) for all u, since any shortest path has at most n−1 edges

Shortest Paths in DAGs Finding shortest paths in DAGs is much easier, because it is easy to find an order in which to do the relaxations – topological sorting!

DAG-Shortest-Paths(G,w,s)
01 for each v ∈ V[G] do
02   d[v] ← ∞
03 d[s] ← 0
04 topologically sort V[G]
05 for each vertex u, taken in topological order, do
06   for each vertex v ∈ Adj[u] do
07     Relax(u,v,w)
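A Python sketch combining a topological sort (Kahn's algorithm) with a single relaxation pass; the whole thing runs in Θ(V + E), and the example DAG is illustrative:

from collections import deque

def dag_shortest_paths(adj, s):
    """adj: {u: [(v, weight), ...]} describing a DAG; s: source vertex."""
    # Kahn's algorithm: repeatedly remove a vertex of in-degree 0.
    indeg = {v: 0 for v in adj}
    for u in adj:
        for v, _ in adj[u]:
            indeg[v] += 1
    queue = deque(v for v in adj if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # One relaxation pass in topological order suffices;
    # negative edge weights are fine in a DAG.
    d = {v: float("inf") for v in adj}
    d[s] = 0
    for u in order:
        for v, w in adj[u]:
            if d[u] + w < d[v]:        # Relax(u, v, w)
                d[v] = d[u] + w
    return d

dag = {"r": [("s", 5), ("t", 3)], "s": [("t", 2), ("x", 6)],
       "t": [("x", 7), ("y", 4), ("z", 2)], "x": [("y", -1), ("z", 1)],
       "y": [("z", -2)], "z": []}
print(dag_shortest_paths(dag, "s"))
# {'r': inf, 's': 0, 't': 2, 'x': 6, 'y': 5, 'z': 3}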