1
Computing Network Centrality Measures using Neural Networks
Dvir Cohen, Matan Hugi, Zion Sofer Advisors: Prof. Aryeh Kontorovich, Dr. Rami Puzis, Prof. Lior Rokach
2
Introduction
Our goal is to compute node centrality in graphs using neural networks. Our approach was to build a neural network that computes node centrality using vertex-embedding methods. The neural network computed the centrality without any external supervision or direction.
3
Motivation
In graph theory and network analysis, indicators of centrality identify the most important vertices within a graph. Applications include:
- Finding the most influential person(s) in a social network.
- PageRank, Google's method for deciding which page is displayed first.
- Key infrastructure nodes in the internet or urban networks.
- Super-spreaders of disease.
- Predicting signaling behaviors in bottlenose dolphin groups.
4
Centrality Measures Which one is the most important one?
Centrality measures are a way to determine which nodes in a graph are the most "important". There are several deterministic centrality measures, including Degree Centrality, Eigenvector Centrality, Closeness Centrality and Betweenness Centrality (see the sketch below). Which one is the most important one?
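A minimal sketch of how these four measures can be computed; the use of networkx and the example graph are assumptions, since the slides do not name a library:

```python
# Compute the four deterministic centrality measures named above.
# networkx and the karate-club example graph are illustrative choices.
import networkx as nx

G = nx.karate_club_graph()

centralities = {
    "degree":      nx.degree_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G),
    "closeness":   nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
}

# Each measure can rank the nodes differently, which is exactly why
# "which one is the most important?" has no single answer.
for name, scores in centralities.items():
    top = max(scores, key=scores.get)
    print(f"{name:12s} top node: {top}  score: {scores[top]:.3f}")
```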
5
Centrality Measures limitations
One of the reasons so many centrality measures have been defined is that each of them has limitations. For example, in a simple analysis of who is most popular in a social network, prioritizing nodes by betweenness centrality could be misleading. Meanwhile, many immunization strategies in analyses of epidemic spreading rely on immunizing nodes with high betweenness; in that setting, immunizing people based on their degree centrality could be sub-optimal.
6
Random Walks W = ('5', '2', '1', '5', '4')
A random walk is a finite sequence of connected vertices in a graph, chosen randomly at each step of the walk. To generate a random walk from a given vertex, we simply choose one of its neighbors at random (if there is one) and continue from the chosen vertex in the same manner, as in the sketch below. For example, a random walk of length 4 from vertex '5' in the following graph could be W = ('5', '2', '1', '5', '4').
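A minimal sketch of the walk-generation procedure just described; the use of networkx and the toy edge list are assumptions, not taken from the slide's figure:

```python
import random
import networkx as nx

def random_walk(G, start, length):
    """Return a random walk of `length` steps starting at `start`:
    at each step pick a uniformly random neighbor (if there is one)."""
    walk = [start]
    for _ in range(length):
        neighbors = list(G.neighbors(walk[-1]))
        if not neighbors:          # dead end: stop early
            break
        walk.append(random.choice(neighbors))
    return walk

# A hypothetical toy graph; the actual graph shown on the slide is not reproduced here.
G = nx.Graph([('1', '2'), ('2', '5'), ('5', '4'), ('1', '5'), ('2', '3')])
print(random_walk(G, '5', 4))   # could print, e.g., ['5', '2', '1', '5', '4']
```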
7
word2vec
Word2vec is a family of related models used to produce word embeddings. These models are shallow, two-layer neural networks trained to reconstruct the linguistic contexts of words. As a result, the linear combination [king] - [man] + [woman] will, with high probability, be closest to [queen].
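A hypothetical illustration of that analogy using gensim and publicly available GloVe vectors; both the library and the pretrained vectors are assumptions, and the slides do not say how the analogy was evaluated:

```python
# Check the king - man + woman ≈ queen analogy on pretrained word vectors.
# gensim and the "glove-wiki-gigaword-100" vectors are illustrative choices.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# With high probability the nearest word is 'queen'.
```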
8
Algorithm Flow: Graph → Random walks → word2vec → Centrality
9
Training the word2vec network
['Node0 Node8 Node4 Node8 Node0 Node8']
['Node0 Node8 Node9 Node6 Node5 Node9']
['Node3 Node7 Node3 Node7 Node9 Node5']
['Node6 Node9 Node8 Node9 Node7 Node5']
['Node9 Node7 Node5 Node6 Node5 Node7']
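A sketch of this training step, assuming gensim's Word2Vec implementation and treating each random walk as a "sentence" of node tokens; the hyperparameters are illustrative:

```python
from gensim.models import Word2Vec

# The random walks from the slide above, tokenized into node "words".
walks = [
    ['Node0', 'Node8', 'Node4', 'Node8', 'Node0', 'Node8'],
    ['Node0', 'Node8', 'Node9', 'Node6', 'Node5', 'Node9'],
    ['Node3', 'Node7', 'Node3', 'Node7', 'Node9', 'Node5'],
    ['Node6', 'Node9', 'Node8', 'Node9', 'Node7', 'Node5'],
    ['Node9', 'Node7', 'Node5', 'Node6', 'Node5', 'Node7'],
]

# Train the shallow two-layer network exactly as for text, with walks as sentences.
model = Word2Vec(sentences=walks, vector_size=32, window=3, min_count=1, sg=1)

embedding = model.wv['Node8']   # the learned vertex embedding for Node8
print(embedding.shape)          # (32,)
```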
10
Learning the centrality of the nodes
Example output, one centrality score per node: 0.106, 0.025, 0.026, 0.076, 0.083, 0.090, 0.080, 0.154, 0.164, 0.192
11
Results
We computed the correlation between the output vector generated by our algorithm and popular centrality measures computed on the same graph.

Name                     Pearson            Spearman           linregress
                         corr.    p-value   corr.    p-value   corr.    p-value
DMZ centrality           1
closeness centrality     0.8798   0.0007    0.9224   0.0001    0.774    0.3082
betweenness centrality   0.8443   0.0021    0.8739   0.0009    0.7129   0.0943
eigenvector centrality   0.715    0.02      0.8292   0.003     0.5113   0.2527
degree centrality        0.7072   0.0221    0.7236   0.0179    0.5001   0.3195
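A sketch of how such correlations can be computed with scipy.stats; the vectors below are placeholders (the DMZ scores reuse the example output shown earlier, the closeness values are made up for illustration):

```python
# Compute Pearson, Spearman and linear-regression correlations between our
# algorithm's per-node scores and a standard centrality measure.
# Both input vectors here are placeholders for illustration only.
from scipy.stats import pearsonr, spearmanr, linregress

dmz_scores = [0.106, 0.025, 0.026, 0.076, 0.083, 0.090, 0.080, 0.154, 0.164, 0.192]
closeness  = [0.45, 0.30, 0.31, 0.40, 0.42, 0.44, 0.41, 0.52, 0.55, 0.60]  # made-up values

r, p_r = pearsonr(dmz_scores, closeness)
rho, p_s = spearmanr(dmz_scores, closeness)
reg = linregress(dmz_scores, closeness)

print(f"Pearson    {r:.4f}  (p={p_r:.4f})")
print(f"Spearman   {rho:.4f}  (p={p_s:.4f})")
print(f"linregress r={reg.rvalue:.4f}  (p={reg.pvalue:.4f})")
```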
12
Conclusion
The data show that the results are consistent and reasonable throughout the experiment, across different types of graphs. The field of working with graphs and neural networks is still relatively in its infancy, and we hope our project can help advance it. (Perhaps mention whether a paper comes out of this; to be discussed with Rami first.) As evidence of the field's momentum, while we were in the middle of the project a paper named node2vec was published, about 3 months ago, which uses a methodology similar to ours, although for an entirely different purpose.