Lecture 5: Mathematics of Networks (Cont) CS 790g: Complex Networks Slides are modified from Networks: Theory and Application by Lada Adamic.


1 Lecture 5: Mathematics of Networks (Cont) CS 790g: Complex Networks Slides are modified from Networks: Theory and Application by Lada Adamic

2 Characterizing networks: How far apart are things?

3 Network metrics: paths. A path is any sequence of vertices such that every consecutive pair of vertices in the sequence is connected by an edge in the network. In a directed network, each edge must be traversed in its correct direction. A path may visit a vertex or edge more than once; self-avoiding paths do not intersect themselves. The length r of a path is the number of edges it contains, also called hops.

4 Network metrics: paths (diagram)

5 Network metrics: shortest paths. (Diagram: a five-vertex example, A through E, with shortest-path distances 1, 2, 2, 3, 3 marked.)

6 Structural metrics: average path length. The average shortest-path length L, the diameter D, and the number of vertices N satisfy 1 ≤ L ≤ D ≤ N-1.
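The quantities above can be computed concretely with breadth-first search. A minimal Python sketch on a small made-up undirected graph (the slide's own five-vertex example is not fully reproduced in this transcript, so the adjacency list here is illustrative):

```python
from collections import deque

def shortest_path_lengths(adj, source):
    """BFS from `source`; returns hop counts to every reachable vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1   # one more hop than the parent
                queue.append(v)
    return dist

# Hypothetical connected undirected graph on vertices A..E
adj = {
    "A": ["B"],
    "B": ["A", "C", "D"],
    "C": ["B", "E"],
    "D": ["B", "E"],
    "E": ["C", "D"],
}
dist = shortest_path_lengths(adj, "A")
n = len(adj)
# average shortest-path length from A to the other N-1 vertices
avg_path_length = sum(dist.values()) / (n - 1)
```

With this graph the distances from A are 1, 2, 2, 3 and the average is 2.0, which indeed sits between 1 and the diameter.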

7 Eulerian path. Euler's Seven Bridges of Königsberg was one of the first problems in graph theory: is there a route that crosses each bridge only once and returns to the starting point? Source: http://en.wikipedia.org/wiki/Seven_Bridges_of_Königsberg. Images: GNU FDL v1.2, Wikipedia (Bogdan, Booyabazooka, Riojajar).

8 Eulerian and Hamiltonian paths. An Eulerian path traverses each edge exactly once; a Hamiltonian path visits each vertex exactly once, and is therefore self-avoiding. If the starting point and end point of an Eulerian path are the same, it is only possible if no vertex has odd degree, since the path must both enter and leave each vertex (each shore, in the bridges problem) on every visit. If the path need not return to its starting point, the graph may have 0 or 2 vertices of odd degree.
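Euler's degree criterion can be checked directly from a degree sequence. A small Python sketch (assuming the graph is connected, which the criterion requires):

```python
def eulerian_path_type(degrees):
    """Classify a connected graph by Euler's degree criterion.

    Returns "circuit" if a closed Eulerian path exists (all degrees even),
    "path" if only an open Eulerian path exists (exactly two odd degrees),
    and None if no Eulerian path exists at all.
    """
    odd = sum(1 for d in degrees if d % 2 == 1)
    if odd == 0:
        return "circuit"
    if odd == 2:
        return "path"
    return None

# Königsberg: four land masses with multigraph degrees 3, 3, 3, 5 --
# four odd-degree vertices, so the walk Euler was asked about is impossible.
print(eulerian_path_type([3, 3, 3, 5]))
```

The same check immediately shows that removing any one bridge (leaving exactly two odd-degree land masses) would have made an open walk possible.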

9 Characterizing networks: Is everything connected?

10 Network metrics: components. If there is a path from every vertex in a network to every other, the network is connected; otherwise, it is disconnected. A component is a maximal subset of vertices such that there is at least one path from each member of the subset to every other member, and no vertex outside the subset is connected to any vertex inside it. A singleton vertex that is not connected to any other forms a component of size one. Every vertex belongs to exactly one component.
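The components of an undirected graph can be found with a standard traversal: repeatedly pick an unvisited vertex and flood-fill from it. A minimal Python sketch over a hypothetical adjacency list:

```python
def connected_components(adj):
    """Return the components of an undirected graph as a list of vertex sets."""
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]       # iterative DFS from `start`
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        components.append(comp)
    return components

# Hypothetical graph: a path 1-2-3, a pair 4-5, and the singleton 6
adj = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4], 6: []}
comps = connected_components(adj)
# three components: {1, 2, 3}, {4, 5}, and the size-one component {6}
```

Note that the singleton vertex 6 comes out as its own component, and each vertex appears in exactly one of the returned sets, matching the definitions above.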

11 Components in directed networks. Strongly connected component: each node within the component can be reached from every other node in the component by following directed links. Weakly connected component: every node can be reached from every other node by following links in either direction. (Diagram: an eight-vertex example, A through H, partitioned into its strongly and weakly connected components.)

12 Components in directed networks. Every strongly connected component of more than one vertex contains at least one cycle. Out-component: the set of all vertices that are reachable via directed paths starting at a specific vertex v; the out-components of all members of a strongly connected component are identical. In-component: the set of all vertices from which there is a directed path to vertex v; the in-components of all members of a strongly connected component are identical.
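Strongly connected components can be computed with Kosaraju's two-pass algorithm (not named on the slide, but a standard method): one DFS to record finishing order, then a second DFS on the reversed graph in reverse finishing order. A Python sketch on a hypothetical directed graph:

```python
def strongly_connected_components(adj):
    """Kosaraju's algorithm: DFS finishing order, then DFS on the reverse graph."""
    order, seen = [], set()

    def dfs(u):                       # first pass: record finishing times
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs(v)
        order.append(u)

    for u in adj:
        if u not in seen:
            dfs(u)

    rev = {u: [] for u in adj}        # reverse every edge
    for u in adj:
        for v in adj[u]:
            rev[v].append(u)

    sccs, assigned = [], set()
    for u in reversed(order):         # second pass, reverse finishing order
        if u in assigned:
            continue
        comp, stack = set(), [u]
        while stack:
            x = stack.pop()
            if x in assigned:
                continue
            assigned.add(x)
            comp.add(x)
            stack.extend(rev[x])
        sccs.append(comp)
    return sccs

# Hypothetical graph: a 3-cycle A->B->C->A feeding a 2-cycle D<->E, plus isolated F
adj = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": ["E"], "E": ["D"], "F": []}
sccs = strongly_connected_components(adj)
# components: {A, B, C}, {D, E}, {F}
```

Consistent with the slide, the two nontrivial components each contain a cycle, and the edge C->D connects two components without merging them (it only goes one way).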

13 Network metrics: size of the giant component. If the largest component encompasses a significant fraction of the graph, it is called the giant component.

14 Bowtie model of the Web. The Web is a directed graph: webpages link to other webpages. The connected components tell us which sets of pages can be reached from others just by surfing, with no "jumping" around by typing in a URL or using a search engine. Broder et al. 1999 crawled over 200 million pages and 1.5 billion links: SCC 27.5%; IN and OUT 21.5%; tendrils and tubes 21.5%; disconnected 8%.

15 Independent paths. Edge-independent paths share no common edge; vertex-independent paths share no common vertex except the start and end vertices. Vertex-independence implies edge-independence. Independent paths are also called disjoint paths, and these sets of paths are not necessarily unique. The connectivity of a pair of vertices is the maximum number of independent paths between them; it is used to identify bottlenecks and resilience to failures.

16 Cut sets and maximum flow. A minimum cut set is the smallest cut set that disconnects a specified pair of vertices; it need not be unique. Menger's theorem: if there is no cut set of size less than n between a pair of vertices, then there are at least n independent paths between those vertices. This implies that the size of the minimum cut set equals the maximum number of independent paths (for both edge and vertex independence). The maximum flow between a pair of vertices is the number of edge-independent paths times the edge capacity (assuming all edges have the same capacity).
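Menger's theorem is what makes max-flow algorithms useful here: with unit edge capacities, the maximum flow equals both the number of edge-independent paths and the size of the minimum edge cut set. A minimal Edmonds-Karp sketch in Python, on a hypothetical unit-capacity graph:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.

    With unit capacities the result is the number of edge-independent
    s-t paths, which by Menger's theorem equals the minimum edge cut.
    """
    # residual graph: forward capacities plus zero-capacity reverse edges
    residual = {u: dict(capacity[u]) for u in capacity}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)

    total = 0
    while True:
        parent = {s: None}              # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total                # no augmenting path left
        path, v = [], t                 # reconstruct s -> t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:               # push flow, update residual graph
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        total += bottleneck

# Hypothetical unit-capacity graph with two edge-independent s-t paths
cap = {"s": {"a": 1, "b": 1}, "a": {"t": 1}, "b": {"t": 1}, "t": {}}
print(max_flow(cap, "s", "t"))  # 2 = number of independent paths = min cut size
```

Deleting fewer than two edges cannot disconnect s from t in this example, exactly as Menger's theorem predicts.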

17 Graph Laplacian

18 Eigenvalues and eigenvectors. Eigenvalues and eigenvectors have their origins in physics, in particular in problems where motion is involved, although their uses extend from stress and strain problems to differential equations and quantum mechanics. Eigenvectors are vectors that point in directions in which a transformation involves no rotation; eigenvalues give the change in length of the eigenvector relative to its original length. The basic equation in eigenvalue problems is Ax = λx. Slides from Fred K. Duennebier.

19 Eigenvalues and eigenvectors. In words, this deceptively simple equation says that for the square matrix A there is a vector x such that the product Ax equals a scalar λ multiplied by that same x: Ax = λx. (E.01) Multiplying the vector x by a scalar constant is the same as stretching or shrinking its coordinates by a constant factor.

20 The vector x is called an eigenvector and the scalar λ is called an eigenvalue. Do all matrices have real eigenvalues? No: the matrix must be square, and the determinant of A − λI must equal zero. This is easy to show: Ax = λx can be rewritten as (A − λI)x = 0, (E.02) which has a nonzero solution x only if det(A − λI) = |A − λI| = 0. (E.03) Are eigenvectors unique? No: if x is an eigenvector, then αx is also an eigenvector with the same eigenvalue λ, since A(αx) = αAx = αλx = λ(αx). (E.04)
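These defining relations are easy to verify numerically. A sketch using NumPy's eig on an arbitrary illustrative 2x2 matrix (not one from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns eigenvalues and a matrix whose COLUMNS are eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    x = eigenvectors[:, i]
    lam = eigenvalues[i]
    assert np.allclose(A @ x, lam * x)          # the defining equation Ax = lam x

# Eigenvectors are not unique: any scalar multiple is also an eigenvector
# with the same eigenvalue, A(alpha x) = lam (alpha x).
x0, lam0 = eigenvectors[:, 0], eigenvalues[0]
assert np.allclose(A @ (3 * x0), lam0 * (3 * x0))
```

For this symmetric matrix the two eigenvalues come out as 3 and 1, both real, consistent with the claim that real symmetric matrices have real eigenvalues.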

21 How do you calculate eigenvectors and eigenvalues? Expand equation (E.03), det(A − λI) = |A − λI| = 0, for a 2×2 matrix: (a11 − λ)(a22 − λ) − a12 a21 = 0. (E.05) For a 2-dimensional problem such as this, the equation above is a simple quadratic with two solutions for λ. In general there is one eigenvalue for each dimension, but some may be zero and some complex.

22 The solution to E.05 is the quadratic formula λ = [(a11 + a22) ± sqrt((a11 + a22)² − 4(a11 a22 − a12 a21))] / 2. (E.06) This "characteristic equation" does not involve x, and the resulting values of λ can be used to solve for x. Consider the following example. (E.07) Eqn. E.07 does not apply here because a11 a22 − a12 a21 = 0, so we use E.06:

23 We see that one solution to this equation is λ = 0, and dividing both sides of the above equation by λ yields λ = 5. Thus we have our two eigenvalues. The eigenvector equations for the first eigenvalue, λ = 0, are multiples of x = −2y, so the smallest whole-number values that fit are x = 2, y = −1.

24 For the other eigenvalue, λ = 5: This example is rather special; A⁻¹ does not exist, the two rows of A − λI are dependent, and thus one of the eigenvalues is zero. (Zero is a legitimate eigenvalue!) EXAMPLE: A more common case is A = [1.05 0.05; 0.05 1], used in the strain exercise. Find the eigenvectors and eigenvalues for this A, and then calculate [V,D] = eig(A). The procedure is: 1) compute the determinant of A − λI; 2) find the roots of the polynomial given by |A − λI| = 0; 3) solve the system of equations (A − λI)x = 0.
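The worked example's matrix is not reproduced in this transcript, but a matrix consistent with its numbers (determinant zero, eigenvalues 0 and 5, eigenvector (2, −1) for λ = 0) is A = [1 2; 2 4] — an assumption, flagged in the comments. A NumPy check of the worked values:

```python
import numpy as np

# HYPOTHETICAL matrix consistent with the slide's worked numbers:
# det(A) = 1*4 - 2*2 = 0  ->  one eigenvalue is 0
# trace(A) = 5            ->  the other eigenvalue is 5
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

vals, vecs = np.linalg.eig(A)

# The eigenvector for lambda = 0 satisfies x + 2y = 0, i.e. x = -2y,
# so the smallest whole-number choice is (2, -1): A @ (2, -1) = 0.
x_zero = np.array([2.0, -1.0])
assert np.allclose(A @ x_zero, 0.0)

# The eigenvector for lambda = 5 satisfies -4x + 2y = 0, e.g. (1, 2).
x_five = np.array([1.0, 2.0])
assert np.allclose(A @ x_five, 5.0 * x_five)
```

This mirrors steps 1-3 of the procedure: the characteristic polynomial λ² − 5λ = 0 gives the roots, and each root is substituted back into (A − λI)x = 0.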

25 What good are such things? Consider the matrix: What is A^100? We could get A^100 by multiplying matrices many, many times, or we could find the eigenvalues of A and obtain A^100 very quickly using them.

26 For now, I'll just tell you that there are two eigenvectors for A: x1 = [0.6; 0.4] and x2 = [1; −1], with eigenvalues λ1 = 1 and λ2 = 0.5. Note that if we multiply x1 by A, we get x1. If we multiply x1 by A again, we STILL get x1. Thus x1 does not change as we multiply it by A^n.

27 What about x2? When we multiply A by x2, we get x2/2, and if we multiply x2 by A², we get x2/4. This number gets very small fast. Note that when A is squared the eigenvectors stay the same, but the eigenvalues are squared. Back to our original problem: for A^100 the eigenvectors will be the same, and the eigenvalues are λ1 = 1 and λ2 = (0.5)^100, which is effectively zero. Each eigenvector is multiplied by its eigenvalue whenever A is applied.
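The matrix A itself is not reproduced in the transcript; A = [0.8 0.3; 0.2 0.7] is consistent with the stated eigenvectors [0.6; 0.4] and [1; −1] and eigenvalues 1 and 0.5 (an assumption, flagged in the comments). Computing A^100 through the eigendecomposition A = X D X⁻¹, so that A^100 = X D^100 X⁻¹:

```python
import numpy as np

# HYPOTHETICAL A, reconstructed from the stated eigen-pairs on the slide
A = np.array([[0.8, 0.3],
              [0.2, 0.7]])
X = np.array([[0.6,  1.0],
              [0.4, -1.0]])           # eigenvectors x1, x2 as columns
D = np.diag([1.0, 0.5])               # eigenvalues lambda1, lambda2

# Sanity check: A really does equal X D X^{-1}
assert np.allclose(A, X @ D @ np.linalg.inv(X))

# Powering A just powers the diagonal of D: A^100 = X D^100 X^{-1}
A100 = X @ np.diag([1.0 ** 100, 0.5 ** 100]) @ np.linalg.inv(X)

# Same answer as brute-force repeated multiplication, but in O(1) multiplies
assert np.allclose(A100, np.linalg.matrix_power(A, 100))

# (0.5)^100 is effectively zero, so every column of A^100 collapses onto x1
print(np.round(A100, 6))  # each column is approximately [0.6, 0.4]
```

This is exactly the slide's point: under repeated application of A, the λ = 0.5 direction dies away and only the λ = 1 eigenvector survives.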

