
1 Paths in a Graph: A Brief Tutorial. Krishna V. Palem, Kenneth and Audrey Kennedy Professor of Computing, Department of Computer Science, Rice University

2 Introduction. Many problems can be modeled using graphs with weights assigned to their edges: airline flight times, telephone communication costs, computer network response times. [Figure: Tokyo subway map]

3 Weighted Graphs. In a weighted graph, each edge has an associated numerical value, called the weight of the edge. Edge weights may represent distances, costs, etc. Example: in a flight route graph, the weight of an edge represents the distance in miles between the endpoint airports. [Figure: flight route graph over ORD, PVD, MIA, DFW, SFO, LAX, LGA, HNL with mileage-weighted edges]

4 Shortest Path Problem. Given a weighted graph and two vertices u and v, we want to find a path of minimum total weight between u and v. The length of a path is the sum of the weights of its edges. Example: shortest path between Providence and Honolulu. Applications: Internet packet routing, flight reservations, driving directions. [Figure: the flight route graph from slide 3]

5 Dijkstra’s Algorithm to Compute the Shortest Path. G is a simple connected graph: a simple graph G = (V, E) consists of V, a nonempty set of vertices, and E, a set of unordered pairs of distinct elements of V called edges. Each edge has an associated weight. Dijkstra’s is a greedy algorithm: a greedy algorithm is any algorithm that follows the problem-solving metaheuristic of making the locally optimal choice at each stage in the hope of finding the global optimum.

6 Dijkstra’s Algorithm. The distance of a vertex v from a vertex s is the length of a shortest path between s and v. Dijkstra’s algorithm computes the distances of all the vertices from a given start vertex s. Assumptions: the graph is connected, the edges are undirected, and the edge weights are nonnegative. We grow a “cloud” of vertices, beginning with s and eventually covering all the vertices. We store with each vertex v a label d(v) representing the distance of v from s in the subgraph consisting of the cloud and its adjacent vertices. At each step, we add to the cloud the vertex u outside the cloud with the smallest distance label d(u), and we update the labels of the vertices adjacent to u.
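The cloud-growing procedure above can be sketched in ordinary Python (the deck itself contains no code; `heapq` stands in for the priority queue, and the lazy-deletion trick replaces an explicit decrease-key operation):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` to every reachable vertex.

    `graph` maps each vertex to a dict {neighbor: edge weight};
    all weights must be nonnegative, per the slide's assumptions.
    """
    dist = {source: 0}
    pq = [(0, source)]     # frontier, keyed by distance label
    done = set()           # vertices already inside the cloud
    while pq:
        d, u = heapq.heappop(pq)   # delmin: closest vertex outside the cloud
        if u in done:
            continue               # stale entry left by a lazy decrease-key
        done.add(u)
        for v, w in graph.get(u, {}).items():
            if v not in dist or d + w < dist[v]:
                dist[v] = d + w            # decrease-key via re-insertion
                heapq.heappush(pq, (d + w, v))
    return dist

# Tiny usage example on a three-vertex graph:
g = {'a': {'b': 1, 'c': 4}, 'b': {'c': 2}, 'c': {}}
distances = dijkstra(g, 'a')   # {'a': 0, 'b': 1, 'c': 3}
```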

7 Application of Dijkstra’s Algorithm. Find the shortest path from s to t using Dijkstra’s algorithm. [Figure: directed graph with source s, target t, intermediate vertices 2-7, and weighted edges]

8-26 Dijkstra's Shortest Path Algorithm: worked trace on the graph of slide 7. (The original slides animate the run; only the distance labels and the sets S, the vertices already in the cloud, and P, the pending vertices, survive extraction.)
Initialization: d(s) = 0, every other label ∞; S = { }, P = { s, 2, 3, 4, 5, 6, 7, t }.
delmin s (d = 0): decrease key of its neighbors to d(2) = 9, d(6) = 14, d(7) = 15; S = { s }.
delmin 2 (d = 9): d(3) = 33; S = { s, 2 }.
delmin 6 (d = 14): d(3) = 32, d(5) = 44; S = { s, 2, 6 }.
delmin 7 (d = 15): d(5) = 35, d(t) = 59; S = { s, 2, 6, 7 }.
delmin 3 (d = 32): d(5) = 34, d(t) = 51; S = { s, 2, 3, 6, 7 }.
delmin 5 (d = 34): d(4) = 45, d(t) = 50; S = { s, 2, 3, 5, 6, 7 }.
delmin 4 (d = 45): no labels improve; S = { s, 2, 3, 4, 5, 6, 7 }.
delmin t (d = 50): S now contains every vertex and P = { }; the shortest s-t distance is 50.
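Reading the figure's surviving labels as the directed edge list below (an assumption; the residue admits this reconstruction), a compact, self-contained re-run reproduces the trace's final distance labels, including d(t) = 50:

```python
import heapq

# One plausible reading of the slide figure's edge list (treat as an assumption):
GRAPH = {
    's': {'2': 9, '6': 14, '7': 15},
    '2': {'3': 24},
    '3': {'5': 2, 't': 19},
    '4': {'t': 6},
    '5': {'4': 11, 't': 16},
    '6': {'3': 18, '5': 30, '7': 5},
    '7': {'5': 20, 't': 44},
    't': {},
}

def dijkstra(graph, source):
    dist, pq = {source: 0}, [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist
```

Running `dijkstra(GRAPH, 's')` yields the same final labels as the trace: d(2) = 9, d(6) = 14, d(7) = 15, d(3) = 32, d(5) = 34, d(4) = 45, d(t) = 50.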

27 In-Class Exercise. Find the shortest route to reach Honolulu (HNL) from Providence (PVD). Use Dijkstra's algorithm. [Figure: the flight route graph from slide 3]

28 Why It Doesn’t Work for Negative-Weight Edges. Dijkstra’s algorithm is based on the greedy method: it adds vertices in order of increasing distance. If a vertex with a negative incident edge were added to the cloud late, it could invalidate the distances of vertices already in the cloud. [Figure: example graph on vertices A-F; C’s true distance is 1, but it is already in the cloud with d(C) = 5.]
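A minimal hypothetical example in the spirit of this slide (the graph below is illustrative, not the slide's figure): vertex C is finalized at distance 5 before the negative edge E → C is ever examined, so the greedy "cloud" version reports 5 even though the true distance is 1.

```python
import heapq

# Hypothetical graph: true d(C) = 0 + 6 + (-5) = 1 via E,
# but the greedy order finalizes C at 5 first.
graph = {'s': {'C': 5, 'E': 6}, 'E': {'C': -5}, 'C': {}}

def dijkstra_finalizing(g, src):
    """Dijkstra with strict 'cloud' semantics: once a vertex is added,
    its label is never revisited -- which is exactly what breaks here."""
    dist, done, pq = {src: 0}, set(), [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        for v, w in g[u].items():
            if v not in done and d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

wrong = dijkstra_finalizing(graph, 's')   # reports d(C) = 5, not the true 1
```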

29 Remarks on Dijkstra’s Shortest Path Algorithm. Dijkstra’s algorithm does not handle graphs whose edges may have negative weights; the Bellman-Ford algorithm accounts for negative-weight edges. Dijkstra’s algorithm solves the problem for a single source; Floyd’s algorithm solves for the shortest paths among all pairs of vertices. The shortest path problem can be solved in polynomial time in graphs without negative-weight cycles. Dijkstra’s algorithm takes O(E + V log V) time (with a Fibonacci-heap priority queue).

30 Bellman-Ford Algorithm. The Bellman-Ford algorithm takes O(V · E) time.
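A sketch of the O(V · E) relaxation scheme (standard Bellman-Ford, not code from the deck): every edge is relaxed |V| - 1 times, and one extra pass detects negative-weight cycles reachable from the source.

```python
def bellman_ford(graph, source):
    """Single-source shortest paths allowing negative edge weights.

    `graph` maps every vertex to a dict {neighbor: weight}.
    Runs |V|-1 relaxation passes over all edges: O(V * E).
    """
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    edges = [(u, v, w) for u in graph for v, w in graph[u].items()]
    for _ in range(len(graph) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement implies a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from source")
    return dist

# On the negative-edge example from slide 28's discussion, Bellman-Ford
# finds the true distance d(C) = 1 that greedy Dijkstra misses.
g = {'s': {'C': 5, 'E': 6}, 'E': {'C': -5}, 'C': {}}
d = bellman_ford(g, 's')
```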

31-39 Example of Bellman-Ford Algorithm (Courtesy: Eric Demaine, MIT). We fix the source node as A. [The nine animation frames, showing successive passes of edge relaxation, are figures that did not survive extraction.]

40 Tractability. Some problems are undecidable: no computer can solve them, e.g., Turing’s “Halting Problem”. Other problems are decidable but intractable: as they grow large, we are unable to solve them in reasonable time. What constitutes “reasonable time”? Some problems are provably decidable in polynomial time on an ordinary computer (technically, a computer with unlimited memory); we say such problems belong to the set P.

41 NP. Some problems are provably decidable in polynomial time on a nondeterministic computer; we say such problems belong to the set NP. One can think of a nondeterministic computer as a parallel machine that can freely spawn an infinite number of processes. P = the set of problems that can be solved in polynomial time. NP = the set of problems for which a solution can be verified in polynomial time. P ⊆ NP. The big question: does P = NP?

42 NP-Completeness. How would you define NP-Complete? They are the “hardest” problems in NP. [Venn diagram: NP-Complete shown as a subset of NP, alongside P]

43 NP-Completeness. The NP-Complete problems are an interesting class of problems whose status is unknown: no polynomial-time algorithm has been discovered for an NP-Complete problem, but no superpolynomial lower bound has been proved for any NP-Complete problem either. The two chief characteristics of NP-complete problems are: NP-complete is a subset of NP, the set of all decision problems whose solutions can be verified in polynomial time; and a problem s in NP is also NP-complete if and only if every other problem in NP can be transformed into s in polynomial time. Unless there is a dramatic change in the current thinking, there is no “efficient time” algorithm which can solve NP-complete problems; a brute-force search is often required.

44 Longest Path Algorithm. The longest path problem is the problem of finding a simple path of maximum length in a given graph. Unlike the shortest path problem, the longest path problem is NP-complete: the optimal solution cannot be found in polynomial time unless P = NP. The NP-completeness of the decision problem can be shown using a reduction from the Hamiltonian path problem: if a graph has a Hamiltonian path, this Hamiltonian path is the longest simple path possible, as it traverses all vertices.
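The "brute-force search" flavor of the problem can be illustrated by exhaustively checking every vertex ordering, which is exponential in the number of vertices (a toy sketch for small graphs, not a practical algorithm):

```python
from itertools import permutations

def longest_simple_path_length(graph):
    """Brute force over all vertex orderings -- exponential time,
    as expected for an NP-complete problem.
    `graph` maps vertex -> {neighbor: weight}."""
    vertices = list(graph)
    best = 0
    for r in range(2, len(vertices) + 1):
        for order in permutations(vertices, r):
            # A permutation is a simple path iff every consecutive
            # pair is joined by an edge.
            if all(b in graph[a] for a, b in zip(order, order[1:])):
                best = max(best, sum(graph[a][b]
                                     for a, b in zip(order, order[1:])))
    return best

# Undirected triangle a-b (1), b-c (2), a-c (5): longest simple path
# is a -> c -> b with total weight 7.
tri = {'a': {'b': 1, 'c': 5}, 'b': {'a': 1, 'c': 2}, 'c': {'a': 5, 'b': 2}}
```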

45 Longest Path Algorithm. A Hamiltonian path (or traceable path) is a path in an undirected graph which visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a cycle in an undirected graph which visits each vertex exactly once and also returns to the starting vertex. Determining whether such paths and cycles exist in graphs is the Hamiltonian path problem, which is NP-complete.
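By contrast, verifying a claimed Hamiltonian path is cheap, which is exactly what places the problem in NP. A sketch, assuming the graph is given as adjacency sets (a representation chosen here for illustration):

```python
def verify_hamiltonian_path(graph, path):
    """Polynomial-time certificate check: the path must visit every
    vertex exactly once, and each consecutive pair must be an edge."""
    if sorted(path) != sorted(graph):
        return False  # wrong vertex set, or a repeated/missing vertex
    return all(v in graph[u] for u, v in zip(path, path[1:]))

# 4-cycle a-b-c-d-a: a,b,c,d is a Hamiltonian path; a,c,b,d is not.
square = {'a': {'b', 'd'}, 'b': {'a', 'c'},
          'c': {'b', 'd'}, 'd': {'a', 'c'}}
```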

46 Statistics. Krishna V. Palem, Kenneth and Audrey Kennedy Professor of Computing, Department of Computer Science, Rice University

47 Contents History of Statistics Basic terms involved in Statistics Sampling Examples & In-Class Exercise Estimation theory

48 History of Statistics [The slide's content is a figure that did not survive extraction.]

49 Contents History of Statistics Basic terms involved in Statistics Sampling Examples & In-Class Exercise Estimation theory

50 Basic Terms Involved in Statistics. To understand sampling, you need to first understand a few basic definitions. The total set of observations that can be made is called the population. A sample is a collection of observations made; it is usually a much smaller subset of the population. A parameter is a measurable characteristic of a population, such as a mean or standard deviation. A statistic is a measurable characteristic of a sample, such as a mean or standard deviation.

51 Terminology Used. The notation used to describe these measures appears below: N: number of observations in the population. n: number of observations in the sample. μ: the population mean. x̄: the sample mean. σ²: the variance of the population. σ: the standard deviation of the population. s²: the variance of the sample. s: the standard deviation of the sample.

52 Contents History of Statistics Basic terms involved in Statistics Sampling Examples & In-Class Exercise Estimation theory

53 Sampling Theory. Simple random sampling refers to a sampling method that has the following properties: the population consists of N objects; the sample consists of n objects; and all possible samples of n objects are equally likely to occur. The main benefit of simple random sampling is that the sample is chosen without bias toward any part of the population, which helps ensure that the statistical conclusions will be valid.
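Simple random sampling is available directly in Python's standard library: `random.sample` draws n objects without replacement so that every size-n subset is equally likely (a sketch; the seed is added here only for reproducibility).

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw n objects such that every size-n subset is equally likely,
    matching the slide's definition of simple random sampling."""
    rng = random.Random(seed)
    return rng.sample(list(population), n)

sample = simple_random_sample(range(20), 5, seed=1)
# `sample` holds 5 distinct values drawn uniformly from 0..19.
```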

54 Mean & Variance of the Sample. The sample mean is the arithmetic average of the values in a random sample. It is denoted by: x̄ = (x₁ + x₂ + … + xₙ)/n = (1/n) Σ xᵢ. Since it is taken from a random sample, x̄ is a random variable. The variance of a sample is the average squared deviation from the sample mean. It is denoted by: s² = Σ (xᵢ - x̄)² / (n - 1), where s² is the sample variance, x̄ is the sample mean, xᵢ is the i-th element from the sample, and n is the number of elements in the sample. Note: each xᵢ can be defined as a random variable.
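The two formulas translate directly into Python (x̄ written as `sample_mean`; note the n - 1 divisor in s²):

```python
def sample_mean(xs):
    # x-bar = (1/n) * sum(x_i)
    return sum(xs) / len(xs)

def sample_variance(xs):
    # s^2 = sum((x_i - x-bar)^2) / (n - 1)
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

xs = [2, 4, 4, 4, 5, 5, 7, 9]
m = sample_mean(xs)       # 5.0
v = sample_variance(xs)   # 32/7, i.e. about 4.571
```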

55 Computing the Mean of the Population. We have a sample with mean x̄ and variance s². We know that μ is the population mean and x̄ is the sample mean. How do we compute the mean of the population using this? E(x̄) = E((1/n) Σ xᵢ) = (1/n) Σ E(xᵢ) (since E(cX) = c E(X) and E(A + B) = E(A) + E(B)) = (1/n) · nμ = μ (since E(x₁) = E(x₂) = … = μ). Hence, the population mean can be estimated by computing the expectation of the sample mean.

56 Computing the Variance of the Population. We have a sample with mean x̄ and variance s². We know that μ is the population mean and x̄ is the sample mean. We know that Var(x̄) = σ²/n (from slide 6 of lecture 11). How do we compute the variance of the population using this? In the next slide, we have a skeleton for the proof; use it to derive the proof yourself as an in-class exercise.

57 In-Class Exercise: Derivation of the Variance of the Population. Hints: start from the definition of s²; expand the squared expression on the RHS; use E(A + B) = E(A) + E(B); use that x̄ is constant within each term and x̄ = (1/n) Σ xᵢ; use E(x₁) = E(x₂) = … = E(xₙ); finally substitute Var(X) = σ² = E(X²) - μ² and Var(x̄) = σ²/n = E(x̄²) - μ².

58 Complete Solution. (The worked equations on this slide are images that did not survive extraction; they apply the previous slide's hints in order: the definition of s², linearity E(A + B) = E(A) + E(B), x̄ = (1/n) Σ xᵢ, E(x₁) = E(x₂) = … = E(xₙ), and the substitutions σ² = E(X²) - μ² and σ²/n = E(x̄²) - μ².)
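For reference, the elided derivation can be reconstructed in LaTeX from the stated hints (a standard argument, not recovered from the slide images):

```latex
\begin{aligned}
E(s^2) &= E\!\left[\frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^2\right]
        &&\text{(definition of } s^2\text{)}\\
       &= \frac{1}{n-1}\,E\!\left[\sum_{i=1}^{n}x_i^2 - n\bar{x}^2\right]
        &&\text{(expand the square; } \bar{x}=\tfrac{1}{n}\textstyle\sum x_i\text{)}\\
       &= \frac{1}{n-1}\left(\sum_{i=1}^{n}E(x_i^2) - n\,E(\bar{x}^2)\right)
        &&\text{(}E(A+B)=E(A)+E(B)\text{)}\\
       &= \frac{1}{n-1}\Big(n(\sigma^2+\mu^2) - n\big(\tfrac{\sigma^2}{n}+\mu^2\big)\Big)
        &&\text{(}E(X^2)=\sigma^2+\mu^2,\ \operatorname{Var}(\bar{x})=\tfrac{\sigma^2}{n}\text{)}\\
       &= \frac{(n-1)\,\sigma^2}{n-1} \;=\; \sigma^2.
\end{aligned}
```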

59 Computing the Variance of the Population. We have a sample with mean x̄ and variance s². We know that μ is the population mean, x̄ is the sample mean, and Var(x̄) = σ²/n. How do we compute the variance of the population using this? From the derivation, we obtained E(s²) = σ². Hence, the population variance can be estimated by computing the expectation of the sample variance.
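Because the populations involved are finite, E(s²) = σ² can be checked exactly by enumerating every ordered sample drawn with replacement (a small illustrative computation, not from the deck):

```python
from itertools import product

def pop_variance(xs):
    # sigma^2: divide by N (population variance)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def sample_var(xs):
    # s^2: divide by n - 1 (sample variance)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def expected_sample_variance(population, n):
    """Average s^2 over every ordered size-n sample drawn with
    replacement -- an exact, finite computation of E(s^2)."""
    samples = list(product(population, repeat=n))
    return sum(sample_var(s) for s in samples) / len(samples)

# For population [1, 2, 3] and n = 2, both quantities equal 2/3 exactly,
# confirming that s^2 is an unbiased estimator of sigma^2.
pop = [1, 2, 3]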

