Presentation on theme: "Analysis and design of algorithm"— Presentation transcript:

1 Analysis and design of algorithm
Subject code: 10CS43, Unit 4
Engineered for Tomorrow
Presented by Shruthi N, Asst. Professor, CSE, MVJCE
12/8/13

2 Dynamic Programming Definition: Dynamic programming (DP) is a general algorithm design technique for solving problems with overlapping sub-problems. The technique was invented by the American mathematician Richard Bellman in the 1950s.
Dynamic Programming Properties
• An instance is solved using the solutions to smaller instances.
• The solution to a smaller instance might be needed multiple times, so store its result in a table.
• Thus each smaller instance is solved only once.
• Additional space is used to save time.
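To make these properties concrete, here is a minimal illustrative sketch in Python (not taken from the slides): computing Fibonacci numbers by storing the results of smaller instances in a table, so that each smaller instance is solved only once.

    def fib(n, table=None):
        # table stores the solutions to smaller instances (memoization),
        # so each sub-problem is solved only once
        if table is None:
            table = {0: 0, 1: 1}
        if n not in table:
            table[n] = fib(n - 1, table) + fib(n - 2, table)
        return table[n]

    print(fib(40))   # 102334155; the naive recursion would take exponential time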

3 Dynamic Programming vs. Divide & Conquer
LIKE divide & conquer, dynamic programming solves problems by combining solutions to sub-problems. UNLIKE divide & conquer, sub-problems are NOT independent in dynamic programming.

4 Warshall’s Algorithm
Directed Graph: A graph in which every edge is directed is called a directed graph, or digraph.
• Adjacency matrix: The adjacency matrix A = {aij} of a directed graph is the boolean matrix that has 1 if there is a directed edge from the ith vertex to the jth vertex, and 0 otherwise.
• Transitive Closure: The transitive closure of a directed graph with n vertices is the n-by-n matrix T = {tij} in which the element in the ith row (1 ≤ i ≤ n) and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path (i.e., a directed path of positive length) from the ith vertex to the jth vertex, and 0 otherwise. The transitive closure provides reachability information about a digraph.

5 Algorithm: Algorithm Warshall(A[1..n, 1..n])
// Computes the transitive closure of a digraph
// Input: Adjacency matrix A
// Output: Transitive closure matrix R(n)
R(0) ← A
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            R(k)[i, j] ← R(k-1)[i, j] OR (R(k-1)[i, k] AND R(k-1)[k, j])
return R(n)
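A direct Python translation of this pseudocode (an illustrative sketch of my own, not from the slides). It updates a single boolean matrix in place rather than keeping all the intermediate matrices R(0), ..., R(n), which gives the same result:

    def warshall(adjacency):
        # adjacency: n x n matrix of 0/1 entries (the adjacency matrix A)
        n = len(adjacency)
        r = [row[:] for row in adjacency]     # start from R(0) = A
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    # i can reach j if it already could, or if it can go through k
                    r[i][j] = r[i][j] or (r[i][k] and r[k][j])
        return r

    # Example: edges 1->2 and 2->3, so the closure also contains 1->3
    A = [[0, 1, 0],
         [0, 0, 1],
         [0, 0, 0]]
    print(warshall(A))   # [[0, 1, 1], [0, 0, 1], [0, 0, 0]]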

6 Example:

7 Contd.. k = 4: vertices {1, 2, 3, 4} can be intermediate

8 Contd.. Efficiency:
• Time efficiency is Θ(n³)
• Space efficiency: requires extra space for the separate matrices that record intermediate results of the algorithm.

9 Single-Source Shortest Path Problem
Single-Source Shortest Path Problem - The problem of finding shortest paths from a source vertex v to all other vertices in the graph.

10 Dijkstra's algorithm Dijkstra's algorithm is a solution to the single-source shortest path problem in graph theory. It works on both directed and undirected graphs; however, all edges must have nonnegative weights.
Approach: Greedy
Input: Weighted graph G = (V, E) and source vertex v ∈ V, such that all edge weights are nonnegative
Output: Lengths of the shortest paths (or the shortest paths themselves) from the given source vertex v ∈ V to all other vertices

11 Dijkstra's algorithm - Pseudocode
dist[s] ← 0                        (distance to the source vertex is zero)
for all v ∈ V – {s}
    do dist[v] ← ∞                 (set all other distances to infinity)
S ← ∅                              (S, the set of visited vertices, is initially empty)
Q ← V                              (Q, the queue, initially contains all vertices)
while Q ≠ ∅                        (while the queue is not empty)
    do u ← mindistance(Q, dist)    (select the element of Q with the minimum distance and remove it from Q)
       S ← S ∪ {u}                 (add u to the list of visited vertices)
       for all v ∈ neighbors[u]
           do if dist[v] > dist[u] + w(u, v)         (if a new shortest path is found)
                  then dist[v] ← dist[u] + w(u, v)   (set the new value of the shortest path)
                       (if desired, add traceback code here)
return dist
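A runnable Python sketch of this pseudocode (my own translation for illustration, not from the slides), using a simple linear scan over Q to find the minimum-distance vertex:

    import math

    def dijkstra(graph, s):
        # graph: dict mapping each vertex to a dict {neighbor: edge weight}
        dist = {v: math.inf for v in graph}
        dist[s] = 0                       # distance to the source vertex is zero
        visited = set()                   # S, the set of visited vertices
        queue = set(graph)                # Q initially contains all vertices
        while queue:
            u = min(queue, key=lambda v: dist[v])   # element of Q with minimum distance
            queue.remove(u)
            visited.add(u)
            for v, w in graph[u].items():
                if dist[v] > dist[u] + w:           # a new shorter path to v was found
                    dist[v] = dist[u] + w
        return dist

    g = {'a': {'b': 4, 'c': 1}, 'b': {'d': 1}, 'c': {'b': 2, 'd': 5}, 'd': {}}
    print(dijkstra(g, 'a'))   # {'a': 0, 'b': 3, 'c': 1, 'd': 4}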

12-21 Dijkstra Animated Example
(Slides 12 through 21 step through Dijkstra's algorithm on an example graph; the animation frames are images.)

22 Implementations and Running Times
The simplest implementation is to store the vertices in an array or linked list. This produces a running time of O(|V|² + |E|).
For sparse graphs, i.e., graphs with very few edges and many vertices, the algorithm can be implemented more efficiently by storing the graph in an adjacency list and using a binary heap or priority queue. This produces a running time of O((|E| + |V|) log |V|).
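For comparison with the linear-scan sketch shown earlier, here is a sketch (again my own, not from the slides) of the adjacency-list implementation with a binary heap via Python's heapq, which corresponds to the O((|E| + |V|) log |V|) bound mentioned above:

    import heapq, math

    def dijkstra_heap(graph, s):
        # graph: dict mapping each vertex to a dict {neighbor: edge weight}
        dist = {v: math.inf for v in graph}
        dist[s] = 0
        heap = [(0, s)]                        # priority queue of (distance, vertex) pairs
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                       # stale entry: u was already finalized
            for v, w in graph[u].items():
                if dist[v] > d + w:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist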

23 Dijkstra's Algorithm - Why It Works
As with all greedy algorithms, we need to make sure that it is a correct algorithm (i.e., that it always returns the right solution when given correct input). A formal proof would take longer than this presentation, but we can understand intuitively how the argument works.

24 Dijkstra's Algorithm - Why It Works
To understand how it works, we’ll go over the previous example again. However, we need two mathematical results first:
Lemma 1 (Triangle inequality): If δ(u,v) is the shortest-path length between u and v, then δ(u,v) ≤ δ(u,x) + δ(x,v).
Lemma 2: Any subpath of a shortest path is itself a shortest path.
The key is to understand why we can claim that any time we put a new vertex in S, we already know the shortest path to it.

25 Dijkstra's Algorithm - Why use it?
As mentioned, Dijkstra’s algorithm calculates the shortest path to every vertex. However, it is about as computationally expensive to calculate the shortest paths from vertex u to every vertex with Dijkstra’s algorithm as it is to calculate the shortest path from u to some particular vertex v. Therefore, any time we want to know the optimal path from a given origin to some other vertex, we can use Dijkstra’s algorithm.

26 Applications of Dijkstra's Algorithm
- Traffic information systems are the most prominent use
- Mapping (MapQuest, Google Maps)
- Routing systems

27 Floyd’s Algorithm to find ALL PAIRS SHORTEST PATHS
Weighted Graph: Each edge has a weight (an associated numerical value). Edge weights may represent costs, distances/lengths, capacities, etc., depending on the problem.
Weight matrix: W(i, j) is
  0 if i = j,
  ∞ if there is no edge between i and j,
  the weight of the edge if there is an edge between i and j.

28 Algorithm: Algorithm Floyd(W[1..n, 1..n])
// Implements Floyd's algorithm
// Input: Weight matrix W
// Output: Distance matrix D of the shortest paths' lengths
D ← W
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min { D[i, j], D[i, k] + D[k, j] }
return D
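An illustrative Python translation of this pseudocode (my own sketch, not from the slides), using math.inf for missing edges:

    import math

    def floyd(weight):
        # weight: n x n matrix; 0 on the diagonal, math.inf where there is no edge
        n = len(weight)
        d = [row[:] for row in weight]         # D <- W
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    d[i][j] = min(d[i][j], d[i][k] + d[k][j])
        return d

    INF = math.inf
    W = [[0,   3, INF],
         [2,   0, INF],
         [INF, 7,   0]]
    print(floyd(W))   # [[0, 3, inf], [2, 0, inf], [9, 7, 0]]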

29 Example:

30 0/1 Knapsack Problem Memory function
Definition: Given a set of n items of known weights w1, …, wn and values v1, …, vn, and a knapsack of capacity W, the problem is to find the most valuable subset of the items that fits into the knapsack. The knapsack problem is an OPTIMIZATION PROBLEM.

31 Dynamic programming approach to solve knapsack problem
Step 1: Identify the smaller sub-problems. If the items are labeled 1..n, then a sub-problem is to find an optimal solution for Sk = {items labeled 1, 2, .., k} and a knapsack capacity j ≤ W.
Step 2: Recursively define the value of an optimal solution in terms of solutions to smaller problems.
Initial conditions:
  V[0, j] = 0 for j ≥ 0
  V[i, 0] = 0 for i ≥ 0
Recursive step:
  V[i, j] = max { V[i-1, j], vi + V[i-1, j - wi] }   if j - wi ≥ 0
  V[i, j] = V[i-1, j]                                if j - wi < 0
Step 3: Bottom-up computation using iteration.
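A bottom-up Python sketch of this recurrence (my own illustration, not from the slides), applied to the instance from assignment question 8 later in the deck:

    def knapsack(weights, values, W):
        # weights[i-1], values[i-1] describe item i; W is the knapsack capacity
        n = len(weights)
        V = [[0] * (W + 1) for _ in range(n + 1)]   # V[0][j] = 0 and V[i][0] = 0
        for i in range(1, n + 1):
            wi, vi = weights[i - 1], values[i - 1]
            for j in range(1, W + 1):
                if j - wi >= 0:
                    V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
                else:
                    V[i][j] = V[i - 1][j]
        return V[n][W]

    # Instance from assignment question 8: weights [1, 2, 2], profits [18, 16, 6], capacity M = 4
    print(knapsack([1, 2, 2], [18, 16, 6], 4))   # 34 (take items 1 and 2)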

32 Question: Apply the bottom-up dynamic programming algorithm to the following instance of the knapsack problem. Capacity W = 5.

33 Solution:

34 Contd..

35 Contd..

36 Travelling salesman problem
Let G = (V, E) be a directed graph with edge costs cij, defined such that cij > 0 for all i and j and cij = ∞ if <i, j> ∉ E. Let |V| = n and assume n > 1. The travelling salesman problem is to find a tour of minimum cost. A tour of G is a directed cycle that includes every vertex in V. The cost of the tour is the sum of the costs of the edges on the tour; the optimal tour is thus the shortest cycle that starts and ends at the same vertex and visits every vertex.

37 Application: Suppose we have to route a postal van to pick up mail from mailboxes located at ‘n’ different sites. An (n+1)-vertex graph can be used to represent the situation. One vertex represents the post office, from which the postal van starts and to which it returns. Edge <i,j> is assigned a cost equal to the distance from site ‘i’ to site ‘j’. The route taken by the postal van is a tour, and we want to find a tour of minimum length.

38 Example
Here g(i, S) denotes the length of a shortest path that starts at vertex i, goes through every vertex in S exactly once, and ends at vertex 1.
g(1, {2,3,4}) = c12 + g(2, {3,4})  =>  1 -> 2
g(2, {3,4}) = c24 + g(4, {3})  =>  1 -> 2 -> 4
g(4, {3}) = c43 + g(3, {})  =>  1 -> 2 -> 4 -> 3 -> 1
So the optimal tour is 1 -> 2 -> 4 -> 3 -> 1, and the optimal cost is 35.
(The example graph with its edge weights is shown as a figure on the slide.)
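A small Python sketch of this g(i, S) recurrence (Held-Karp style dynamic programming). The code is my own illustration; the cost matrix below is an assumed instance, chosen because its optimal tour 1 -> 2 -> 4 -> 3 -> 1 has cost 35, matching the result stated on the slide (the slide's own figure is not reproduced here).

    from functools import lru_cache

    # Assumed cost matrix: vertex 1 of the slide corresponds to index 0 here, and so on.
    c = [[0, 10, 15, 20],
         [5,  0,  9, 10],
         [6, 13,  0, 12],
         [8,  8,  9,  0]]

    @lru_cache(maxsize=None)
    def g(i, S):
        # g(i, S): length of a shortest path starting at i, going through every
        # vertex in S exactly once, and ending at vertex 0 (vertex 1 on the slide)
        if not S:
            return c[i][0]
        return min(c[i][j] + g(j, tuple(v for v in S if v != j)) for j in S)

    print(g(0, (1, 2, 3)))   # 35, i.e., the optimal tour 1 -> 2 -> 4 -> 3 -> 1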

39 Engineered for Tomorrow
Assignment questions
1. What is dynamic programming?
2. Write an algorithm for the all-pairs shortest path problem.
3. Write Warshall's algorithm.
4. What are memory functions? Explain how they are used to solve the knapsack problem.
5. Solve the instance of the knapsack problem below. Capacity W = 5.
   Item   Weight   Value
                   $12
                   $10
                   $20
                   $15

40 Assignment questions
Engineered for Tomorrow
6. Using Warshall's algorithm, obtain the transitive closure of the matrix R given below.
   R = (boolean matrix shown on the original slide; not fully reproduced in this transcript)

41 Engineered for Tomorrow
Assignment questions
8. Using dynamic programming, solve the following knapsack instance: n = 3, [w1, w2, w3] = [1, 2, 2], [p1, p2, p3] = [18, 16, 6] and M = 4.
9. Write the algorithm to find the shortest paths using Floyd's approach.
10. Explain dynamic programming. List the differences between divide and conquer and dynamic programming.

42 Thank You..

