Degree and Eigenvector Centrality


The "Centralities": Degree and Eigenvector Centrality

The "Centralities": Degree Centrality

Degree Centrality

Quality (what makes a node important/central): lots of one-hop connections from v
Mathematical description: the number of vertices that v influences directly
Appropriate usage: local influence matters; small diameter
Identification: degree centrality, deg(v)

Quality: lots of one-hop connections from v, relative to the size of the graph
Mathematical description: the proportion of the vertices that v influences directly
Appropriate usage: local influence matters; small diameter
Identification: normalized degree centrality, deg(v)/|V(G)|

Degree Centrality

Degree Centrality per vertex: the sequence gives {vertex label: degree, ...}:

{0: 4, 1: 7, 2: 2, 3: 4, 4: 2, 5: 1, 6: 1, 7: 1, 8: 3, 9: 1, 10: 1, 11: 3, 12: 1, 13: 1, 14: 1, 15: 1, 16: 1, 17: 1, 18: 1, 19: 1}
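A minimal sketch of how such a dictionary can be produced with networkx; the slide's actual 20-vertex graph is not reproduced here, so the graph G below is a hypothetical stand-in:

import networkx as nx

# Hypothetical stand-in graph: any 20-node nx.Graph can be substituted for G.
G = nx.barabasi_albert_graph(20, 1, seed=42)

degree_per_vertex = dict(G.degree())  # {vertex label: degree, ...}
print(degree_per_vertex)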

Degree Centrality per vertex, as a vector:

(4, 7, 2, 4, 2, 1, 1, 1, 3, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1)

Degree centrality (distribution)

Normalized Degree Centrality per vertex:

(4/20, 7/20, 2/20, 4/20, 2/20, 1/20, 1/20, 1/20, 3/20, 1/20, 1/20, 3/20, 1/20, 1/20, 1/20, 1/20, 1/20, 1/20, 1/20, 1/20)

Alternatively, divide by n-1 (not counting the vertex at which centrality is computed), or divide by Δ(G), since we only care about relative centrality.
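The three normalizations just mentioned, sketched for the same hypothetical stand-in graph G (note that networkx's own degree_centrality uses the n-1 convention):

import networkx as nx

G = nx.barabasi_albert_graph(20, 1, seed=42)  # hypothetical stand-in graph
n = G.number_of_nodes()
delta = max(d for _, d in G.degree())         # maximum degree, Δ(G)

by_n     = {v: d / n       for v, d in G.degree()}  # divide by |V(G)|
by_n1    = {v: d / (n - 1) for v, d in G.degree()}  # divide by n-1 (networkx's convention)
by_delta = {v: d / delta   for v, d in G.degree()}  # divide by Δ(G): relative centrality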

The "Centralities": Eigenvector Centrality

Recall:

Quality (what makes a node important/central): lots of one-hop connections from v
Mathematical description: the number of vertices that v influences directly
Appropriate usage: local influence matters; small diameter
Identification: degree centrality, C_i = deg(i)

Quality: lots of one-hop connections from v, relative to the size of the graph
Mathematical description: the proportion of the vertices that v influences directly
Appropriate usage: local influence matters; small diameter
Identification: normalized degree centrality, C_i = deg(i)/|V(G)|

Quality: lots of one-hop connections to high-centrality vertices
Mathematical description: a weighted degree centrality, based on the weights of the neighbors (instead of a weight of 1 per neighbor, as in degree centrality)
Appropriate usage: for example, when the people you are connected to matter
Identification: HOW? Eigenvector centrality (recursive formula): C_i ∝ Σ_{j∈N(i)} C_j

Eigenvector Centrality

A generalization of degree centrality: a weighted degree in which each neighbor contributes its own centrality (rather than every neighbor contributing 1). How do we find it? By finding the largest eigenvalue of the adjacency matrix and its associated eigenvector (the leading eigenvector). We will get to the "why eigenvectors work" part.
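A sketch of both routes to that eigenvector: the leading eigenvector computed directly with numpy, and networkx's built-in; the karate-club graph is a stand-in for any connected undirected graph:

import networkx as nx
import numpy as np

G = nx.karate_club_graph()            # stand-in: any connected undirected graph

# Route 1: leading eigenvector of the adjacency matrix, via numpy.
A = nx.to_numpy_array(G)
eigvals, eigvecs = np.linalg.eigh(A)  # A is symmetric, so eigh applies
v1 = np.abs(eigvecs[:, -1])           # eigh sorts eigenvalues ascending;
                                      # Perron-Frobenius lets us fix the sign
centrality = v1 / np.linalg.norm(v1)

# Route 2: networkx's built-in (the same vector, up to numerical error).
builtin = nx.eigenvector_centrality_numpy(G)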

Example 1 (Eigenvector centrality)

Node i: eigenvector centrality C_i
0: 0, 1: 0.5298987782873977, 2: 0.3577513877490464, 3: 0.5298987782873977, 4: 0.3577513877490464, 5: 0.4271328349194304

Notice that deg(5) < deg(4). Why is C_5 > C_4?

Example 2 (Eigenvector centrality)

Node i: eigenvector centrality C_i
0: 0.5842167062067959, 1: 0.4569862980311799, 2: 0.18307279191182163, 3: 0.4569862980311799, 4: 0.41711700125458717, 5: 0.18307279191182163

Here deg(4) > deg(3), yet C_4 is slightly lower than C_3; in a large graph the difference can be bigger.

Example 3 (Adjacency matrix)

[[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [ 0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [ 1. 0. 0. 0. 1. 0. 0. 1. 1. 1. 0. 0. 0. 0. 0.]
 [ 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
 [ 0. 1. 1. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 1. 0.]
 [ 0. 0. 0. 1. 0. 0. 1. 0. 0. 0. 0. 0. 1. 0. 1.]
 [ 0. 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 1.]
 [ 0. 0. 1. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0.]
 [ 0. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 1.]
 [ 0. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
 [ 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0.]
 [ 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 0.]
 [ 0. 0. 0. 1. 1. 1. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
 [ 0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 1. 0. 0. 0. 0.]
 [ 0. 0. 0. 0. 0. 1. 1. 0. 1. 0. 0. 0. 0. 0. 0.]]
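Since this slide gives the full adjacency matrix, the centralities on the next slide can be reproduced directly; a sketch with numpy, the matrix copied from the printout above:

import numpy as np

A = np.array([
    [0,0,1,0,0,0,0,0,0,0,0,0,0,0,0],
    [0,0,0,0,1,0,1,0,0,0,0,0,0,0,0],
    [1,0,0,0,1,0,0,1,1,1,0,0,0,0,0],
    [0,0,0,0,0,1,0,0,0,0,0,0,1,0,0],
    [0,1,1,0,0,0,0,0,0,0,1,0,1,1,0],
    [0,0,0,1,0,0,1,0,0,0,0,0,1,0,1],
    [0,1,0,0,0,1,0,0,0,0,0,0,0,1,1],
    [0,0,1,0,0,0,0,0,1,1,0,0,0,0,0],
    [0,0,1,0,0,0,0,1,0,0,0,1,0,0,1],
    [0,0,1,0,0,0,0,1,0,0,0,0,0,0,0],
    [0,0,0,0,1,0,0,0,0,0,0,1,1,1,0],
    [0,0,0,0,0,0,0,0,1,0,1,0,0,0,0],
    [0,0,0,1,1,1,0,0,0,0,1,0,0,0,0],
    [0,0,0,0,1,0,1,0,0,0,1,0,0,0,0],
    [0,0,0,0,0,1,1,0,1,0,0,0,0,0,0],
], dtype=float)

eigvals, eigvecs = np.linalg.eigh(A)  # A is symmetric, so eigh applies
v1 = np.abs(eigvecs[:, -1])           # leading eigenvector, made nonnegative
print(v1 / np.linalg.norm(v1))        # ≈ the centralities on the next slide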

Example 3 (Eigenvector centrality)

0: 0.08448651593556764, 1: 0.1928608426462633, 2: 0.3011603786470362, 3: 0.17530527234882679, 4: 0.40835121533077895, 5: 0.2865100597893966, 6: 0.2791290343288953, 7: 0.1931920790704947, 8: 0.24881035953707603, 9: 0.13868390351302598, 10: 0.336067959653752, 11: 0.16407815738375933, 12: 0.33838887484747293, 13: 0.2871391639624871, 14: 0.22848023925633135

Example 3 (Eigenvector centrality)

2: 0.3011603786470362, 4: 0.40835121533077895, 5: 0.2865100597893966, 6: 0.2791290343288953, 8: 0.24881035953707603, 10: 0.336067959653752, 12: 0.33838887484747293, 13: 0.2871391639624871, 14: 0.22848023925633135

Also, deg(4) = deg(2) = Δ(G), yet C_4 > C_2.

Example 3 (Eigenvector centrality)

Why? Vertex 2 is adjacent to vertices of small degree, while vertex 4 is adjacent to vertices of large degree.

Example 3 (Eigenvector centrality)

{0: 0.08448651593556764, 1: 0.1928608426462633, 2: 0.3011603786470362, 3: 0.17530527234882679, 4: 0.40835121533077895, 5: 0.2865100597893966, 6: 0.2791290343288953, 7: 0.1931920790704947, 8: 0.24881035953707603, 9: 0.13868390351302598, 10: 0.336067959653752, 11: 0.16407815738375933, 12: 0.33838887484747293, 13: 0.2871391639624871, 14: 0.22848023925633135}

Similarly, deg(13) < deg(8), yet C_13 > C_8.

Eigenvector Centrality

Define the centrality x'_i of vertex i recursively in terms of the centralities of its neighbors:

x'_i = Σ_{k∈N(i)} x_k, or equivalently x'_i = Σ_j A_ij x_j

with initial vertex centrality x_j = 1 for all j (including i); we'll see why on the next slide. That is equivalent to:

x_i(t) = Σ_j A_ij x_j(t-1), with the centrality at time t = 0 being x_j(0) = 1 for all j,

where x_i(t) and x_j(t-1) are the centralities of vertices i and j at times t and t-1, respectively.
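A direct transcription of this update rule into code; a minimal sketch, assuming A is an adjacency matrix given as a numpy array:

import numpy as np

def iterate_centrality(A, t):
    """Apply x(t) = A x(t-1), starting from x(0) = all-ones."""
    x = np.ones(A.shape[0])
    for _ in range(t):
        x = A @ x  # each entry becomes the sum of its neighbors' current values
    return x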

In class: Eigenvector Centrality

Adjacency matrix A for the graph to the right (vertices 0 through 5; vertex 3 is isolated):

A = [0 1 1 0 1 0
     1 0 1 0 1 0
     1 1 0 0 0 1
     0 0 0 0 0 0
     1 1 0 0 0 1
     0 0 1 0 1 0]

Then the vector x(t) = (x_1, x_2, ..., x_n)^T gives a random surfer's behavior. Answer the following questions based on the information above.

In class: Eigenvector Centrality

Q1: Find x(1). What does it represent?

Answer: x(1) = A·(1,1,1,1,1,1)^T = (3,3,3,0,3,2)^T, the degree centrality vector.

In class: Eigenvector Centrality

Q2: Find x(2). What does it represent?

Answer: x(2) = A·x(1) = A·(3,3,3,0,3,2)^T = (9,9,8,0,8,6)^T, a weighted degree centrality vector (reflecting vertices at distance 2 or less).

In class: Eigenvector Centrality

Q3: Find x(3). What does it represent?

Answer: x(3) = A·x(2) = A·(9,9,8,0,8,6)^T = (25,25,24,0,24,16)^T, another weighted degree centrality vector (distance 3 or less).

In class: Eigenvector Centrality

For comparison, the eigenvector centralities of this graph are: 0: 0.49122209552166, 1: 0.49122209552166, 2: 0.4557991200411896, 3: 0, 4: 0.4557991200411896, 5: 0.31921157573304415
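The whole in-class example can be checked in a few lines of numpy, using the 6×6 matrix above; normalizing each iterate (the power method) reproduces the eigenvector centralities on this slide:

import numpy as np

A = np.array([
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],  # vertex 3 is isolated
    [1, 1, 0, 0, 0, 1],
    [0, 0, 1, 0, 1, 0],
], dtype=float)

# The three in-class answers:
x = np.ones(6)
for t in range(1, 4):
    x = A @ x
    print(f"x({t}) =", x)  # (3,3,3,0,3,2), (9,9,8,0,8,6), (25,25,24,0,24,16)

# Keep going, normalizing at every step (the power method):
x = np.ones(6)
for _ in range(100):
    x = A @ x
    x /= np.linalg.norm(x)
print(x)  # ≈ [0.4912, 0.4912, 0.4558, 0, 0.4558, 0.3192]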

Discussion: What did you notice? What is x(3)?

Answer: x(3) = A(A(A·1)) = A^3·x(0). So x_i(3) depends on the centralities of the vertices within distance 3 of vertex i.

Discussion: Can you generalize it? What is x(t)?

Answer: x(t) = A(A(...(A·1)...)) = A^t·x(0), for t > 0. So x_i(t) depends on the centralities of the vertices within distance t of vertex i.

Eigenvector Centrality Derivation

We can consolidate the eigenvector centralities of all the nodes in a recursive formula with vectors:

x(t) = A·x(t-1), with the centrality at time t = 0 being x(0) = 1 (the all-ones vector).

Then we solve x(t) = A^t·x(0), with x(0) = 1.

Let v_k be the eigenvectors of the adjacency matrix A, and let λ_1 be the largest eigenvalue. Write x(0) = Σ_k c_k v_k, a linear combination of the v_k (the eigenvectors can be chosen orthogonal, since A is real and symmetric).

Eigenvector Centrality Derivation

Facts from the previous slide: x(t) = A^t·x(0) with x(0) = 1; the v_k are the eigenvectors of the adjacency matrix A; x(0) = Σ_k c_k v_k is a linear combination of the v_k; λ_1 is the largest eigenvalue. Then

x(t) = A^t·x(0) = A^t Σ_k c_k v_k = Σ_k c_k λ_k^t v_k = λ_1^t Σ_k c_k (λ_k/λ_1)^t v_k
     = λ_1^t (c_1 v_1 + c_2 (λ_2/λ_1)^t v_2 + c_3 (λ_3/λ_1)^t v_3 + ...)

so x(t) → λ_1^t c_1 v_1, since (λ_k/λ_1)^t → 0 as t → ∞ for every k ≠ 1 (as you repeat the process).

Eigenvector Centrality

Thus the eigenvector centrality is x(t) ≈ λ_1^t c_1 v_1, where v_1 is the eigenvector corresponding to the largest eigenvalue λ_1. So the eigenvector centrality (as a vector) x(t) is a multiple of v_1, i.e., x(t) is an eigenvector of A:

A·x(t) = λ_1·x(t)

meaning that the eigenvector centrality of each node is given by the corresponding entry of the leading eigenvector (the one corresponding to the largest eigenvalue λ_1).

Is it well defined? That is: Is the vector guaranteed to exist? Is it unique? Is the eigenvalue unique? Can we have negative entries in the eigenvector?

We say that a matrix/vector is positive if all of its entries are positive.

Perron-Frobenius theorem: a real square matrix with positive entries has a unique largest real eigenvalue, and the corresponding eigenvector can be chosen to have strictly positive components. The theorem as stated applies to positive matrices, but it gives similar information for nonnegative ones.

Perron-Frobenius theorem for nonnegative symmetric (0,1)-matrices

Let A ∈ R^(n×n) be a symmetric nonnegative (0,1)-matrix, the adjacency matrix of a connected graph. Then there is a unique maximal eigenvalue λ_1 of A: for any other eigenvalue λ we have λ < λ_1 (with the possibility of |λ| = λ_1 for nonnegative matrices in general). λ_1 is real, simple (i.e., it has multiplicity one), and positive (the trace is zero, so some eigenvalues are positive and some are negative), and the associated eigenvector is nonnegative (and there are no other nonnegative eigenvectors, since the eigenvectors are pairwise orthogonal). If you have not seen this result and its proof in linear algebra, see the proof on pages 346-347 of Newman's textbook.

In class problem: Let G be an r-regular graph. Find the eigenvector centrality and the leading eigenvalue.

Solution: use the matrix A and the vector 1 to begin with, and follow an argument similar to the earlier in-class problem: A·1 = r·1, so 1 is already an eigenvector, the iteration never changes direction, the leading eigenvalue is r, and every vertex gets the same centrality (see the sketch below).

Take away: even this centrality may not differentiate vertices that it should, e.g., any two vertices of the same regular graph.
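A quick check of the take-away, using a 6-cycle as the r-regular graph (r = 2); any r-regular graph behaves the same way:

import networkx as nx
import numpy as np

C6 = nx.cycle_graph(6)                      # a 2-regular graph (r = 2)
print(nx.eigenvector_centrality_numpy(C6))  # every vertex gets the same score

A = nx.to_numpy_array(C6)
print(max(np.linalg.eigvalsh(A)))           # leading eigenvalue = r = 2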

In class problem (2)

Note that x_i(2)/x_i(1) ≠ x_i(3)/x_i(2), where x_i is the i-th entry. For example:

(1,1,1,1,1,1), (3,3,3,0,3,2), (9,9,8,0,8,6), (25,25,24,0,24,16)

(In finding the eigenvectors, these vectors get normalized as they are computed, using the power method from linear algebra; see page 248 in Newman's book.) However, the ratios will converge to λ_1.
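The ratio convergence can be seen numerically; a sketch reusing the 6×6 matrix from the in-class example (entries for the isolated vertex 3 stay zero, so they are skipped):

import numpy as np

A = np.array([
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],  # vertex 3 is isolated, so its entries stay 0
    [1, 1, 0, 0, 0, 1],
    [0, 0, 1, 0, 1, 0],
], dtype=float)

x = np.ones(6)
for t in range(1, 11):
    x_new = A @ x
    ratios = [x_new[i] / x[i] for i in range(6) if x[i] != 0]
    print(f"t = {t}:", np.round(ratios, 4))  # ratios x_i(t)/x_i(t-1)
    x = x_new
# For every non-isolated vertex the ratio approaches λ_1 ≈ 2.856.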

Eigenvector Centrality

Therefore: it is a generalized degree centrality (it takes the global network into consideration). It is extremely useful, and one of the most commonly used centralities for undirected networks:

C_i ∝ Σ_{j∈N(i)} C_j, that is, C_i = λ_1^(-1) Σ_j A_ij C_j = λ_1^(-1) Σ_{ij∈E(G)} C_j

Why is eigenvector centrality not often used for directed graphs? The adjacency matrix is asymmetric: do we use the left or the right leading eigenvector? Choosing the right leading eigenvector means importance is bestowed by the vertices pointing toward you (the left eigenvector has the same problem, with the directions reversed). But then any vertex with in-degree zero has centrality value zero, and it "passes" that zero value to all vertices to which it points. The fix: Katz centrality (sketched below).
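A hedged sketch of the fix named above, using networkx's Katz centrality on a toy directed graph; the graph and the alpha value are illustrative choices only:

import networkx as nx

# Toy directed graph: vertex 0 has in-degree zero.
D = nx.DiGraph([(0, 1), (0, 2), (1, 2)])

# Eigenvector centrality would assign vertex 0 a zero score and starve the
# vertices it points to; Katz centrality adds a constant base score beta
# to every vertex, so all scores come out nonzero.
print(nx.katz_centrality(D, alpha=0.1, beta=1.0))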