ONR MURI: NexGeNetSci. From Consensus to Social Learning in Complex Networks. Ali Jadbabaie, Skirkanich Associate Professor of Innovation, Electrical & Systems Engineering.


ONR MURI: NexGeNetSci. From Consensus to Social Learning in Complex Networks. Ali Jadbabaie, Skirkanich Associate Professor of Innovation, Electrical & Systems Engineering and GRASP Laboratory, University of Pennsylvania. First Year Review, August 27, 2009. With Alireza Tahbaz-Salehi and Victor Preciado.

The research spectrum, from theory to real-world operations:
–Theory: first principles, rigorous math, algorithms, proofs
–Data analysis: correct statistics; only as good as the underlying data
–Numerical experiments: simulation; synthetic, clean data
–Lab experiments: stylized, controlled; clean, real-world data
–Field exercises: semi-controlled; messy, real-world data
–Real-world operations: unpredictable; after-action reports in lieu of data
This project (Jadbabaie): collective behavior, social aggregation.

Good news: spectacular progress
–Consensus and information aggregation
–Random spectral graph theory: synchronization, virus spreading
–New abstractions beyond graphs: understanding network topology via simplicial homology, computing homology groups

Consensus, Flocking and Synchronization Opinion dynamics, crowd control, synchronization and flocking

Flocking and opinion dynamics
Bounded confidence opinion model (Krause, 2000):
–Nodes update their opinions as a weighted average of the opinions of their friends
–Friends are those whose opinions are already close
–When will opinions fragment, and when will they converge?
–The dynamics changes the topology
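The bounded confidence update described above can be sketched in a few lines. This is an illustrative simulation, not the authors' code; the confidence radius `eps`, the synchronous update, and the example opinion vectors are all assumptions chosen to show the fragmentation/consensus dichotomy.

```python
import numpy as np

def hk_step(x, eps):
    """One synchronous bounded-confidence update: each agent averages
    the opinions of all agents within distance eps (itself included)."""
    x = np.asarray(x, dtype=float)
    new = np.empty_like(x)
    for i in range(len(x)):
        neighbors = np.abs(x - x[i]) <= eps   # confidence set of agent i
        new[i] = x[neighbors].mean()
    return new

def hk_run(x0, eps, steps=100):
    """Iterate until the opinion profile stops changing (or steps run out)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        nxt = hk_step(x, eps)
        if np.allclose(nxt, x):
            break
        x = nxt
    return x

# Large eps: everyone hears everyone, opinions merge into consensus.
consensus = hk_run([0.0, 0.2, 0.4, 0.6, 0.8, 1.0], eps=1.0)
# Small eps: two clusters form and never interact again (fragmentation).
fragmented = hk_run([0.0, 0.1, 0.9, 1.0], eps=0.2)
```

Note how the topology (who counts as a "friend") is recomputed from the current opinions at every step, which is exactly the "dynamics changes topology" point on the slide.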

Consensus in random networks
Consider a network with n nodes and a vector of initial values, x(0). Consensus is run over a switching, directed graph G_n(t). In each time step, G_n(t) is a realization of a random graph in which each edge appears with probability Pr(a_ij = 1) = p, independently of the other edges. Despite its simple formulation, very little is known about the stationary behavior: the consensus value x* and the eigenvector v.
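A minimal simulation of this setup, assuming the standard consensus dynamics x(t+1) = W(t) x(t) with W(t) a row-stochastic matrix built from each realized directed Erdős–Rényi graph (the self-loop and row-normalization choices below are illustrative assumptions, not the slide's exact weights):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_consensus_matrix(n, p, rng):
    """Row-stochastic weight matrix from one realization of a directed
    Erdos-Renyi graph: a_ij = 1 with probability p, independently; each
    node keeps a self-loop so every row sums to 1 after normalization."""
    A = (rng.random((n, n)) < p).astype(float)
    np.fill_diagonal(A, 1.0)                      # keep own value in the average
    return A / A.sum(axis=1, keepdims=True)

def run_consensus(x0, p, steps=200, rng=rng):
    """Iterate x(t+1) = W(t) x(t) with i.i.d. random weight matrices."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = random_consensus_matrix(len(x), p, rng) @ x
    return x

x_star = run_consensus([0.0, 1.0, 2.0, 3.0], p=0.5)
# All entries agree up to numerical tolerance; the common value x* is a
# random variable that depends on the realized graph sequence.
```

Rerunning with a different seed gives a different x*, which is precisely why the distribution of x* is the interesting object here.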

Random networks: the graphs may be correlated across time, so long as the sequence is stationary and ergodic.

What about the consensus value?
A random graph sequence means that the consensus value x* is a random variable. Question: what is its distribution?
A relatively easy case:
–The distribution is degenerate (a Dirac) if and only if all matrices have the same left eigenvector with probability 1.
In general, x* is determined by v, the left eigenvector associated with the largest eigenvalue (the Perron vector). Can we say more?
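For the easy fixed-matrix case mentioned above, the consensus value is v^T x(0), where v is the left Perron eigenvector of the row-stochastic weight matrix. A sketch verifying this numerically (the matrix W and the power-iteration routine are illustrative assumptions):

```python
import numpy as np

def left_perron(W, iters=10_000):
    """Left eigenvector of a row-stochastic matrix W for eigenvalue 1,
    normalized to sum to 1, computed by power iteration (v <- v W)."""
    v = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iters):
        v = v @ W
        v = v / v.sum()
    return v

# A fixed row-stochastic weight matrix on a 3-node path graph.
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
x0 = np.array([1.0, 2.0, 4.0])

v = left_perron(W)        # here v = (0.25, 0.5, 0.25): the middle node counts double
x = x0.copy()
for _ in range(200):      # repeated averaging: x(t+1) = W x(t)
    x = W @ x
# every entry of x converges to the same value, v @ x0
```

The degeneracy condition on the slide is the natural extension: if every matrix in the random sequence shares this left eigenvector, v^T x(t) is conserved at every step, so x* is deterministic given x(0).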

E[W_k ⊗ W_k] for Erdős–Rényi graphs. Define:

Random consensus
For simplicity of exposition, we illustrate the structure of E[W_k ⊗ W_k] using the case n = 4. The entries have expressions in terms of q = 1 − p and H(p,n), a special function that can be written in terms of a hypergeometric function (the detailed expression is not relevant to our exposition).

Variance of the consensus value for Erdős–Rényi graphs
Defining a suitable parameter, we can finally write the left eigenvector of the expected Kronecker product in closed form. Substituting this eigenvector into our original expression for the variance (after simple algebraic simplifications), we obtain a final expression for var(x*) as a function of p, n, and x(0).

Random consensus (plots)
[Plot: var(x*) versus p ∈ (0, 1] for initial conditions uniformly distributed in [0, 1] and n ∈ {3, 6, 9, 12, 15}.]
What about other random graphs?
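The qualitative behavior in that plot (variance of x* shrinking as the graph gets denser) can be checked by brute-force Monte Carlo rather than via the closed-form expression. This is a sketch under assumed dynamics (self-loops plus row normalization of a directed Erdős–Rényi adjacency matrix); the trial counts and horizon are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def consensus_value(n, p, x0, steps=300, rng=rng):
    """Approximate limiting consensus value x* for one random graph sequence."""
    x = x0.copy()
    for _ in range(steps):
        A = (rng.random((n, n)) < p).astype(float)   # directed ER realization
        np.fill_diagonal(A, 1.0)                     # self-loops
        x = (A / A.sum(axis=1, keepdims=True)) @ x   # row-stochastic update
    return x.mean()                                  # entries have all converged

def mc_variance(n, p, trials=600, rng=rng):
    """Monte Carlo estimate of var(x*) with x(0) ~ Uniform[0,1]^n."""
    vals = [consensus_value(n, p, rng.random(n), rng=rng) for _ in range(trials)]
    return float(np.var(vals))

v_dense = mc_variance(6, 0.90)    # dense graphs: x* close to the plain average
v_sparse = mc_variance(6, 0.15)   # sparse graphs: x* weights nodes more unevenly
```

Since x* is a convex combination of the entries of x(0), var(x*) is bounded by 1/12 (the variance of a single uniform initial value); the sparse-graph estimate should sit visibly above the dense-graph one, matching the shape of the plotted curves.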

Static model with prescribed expected degree distribution
Generalized static models [Chung and Lu, 2003]:
–Random graph with a prescribed expected degree sequence
–We can impose an expected degree w_i on the i-th node
Degree distributions are useful to the extent that they tell us something about spectral properties (at least for distributed computation/optimization).

Eigenvalues of Chung–Lu graphs
What is the eigenvalue distribution of the adjacency matrix for very large Chung–Lu random networks?
Numerical experiment: plot the histogram of eigenvalues for several realizations of this random graph (shown for 100, 500, and 1000 nodes).
Limiting spectral density: an analytical expression is possible only in very particular cases.
Contribution: estimation of the shape of the bulk for a given expected degree sequence, (w_1, …, w_n).
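The numerical experiment is easy to reproduce at small scale. In the standard Chung–Lu construction, edge (i, j) appears independently with probability w_i w_j / Σw (capped at 1), so node i's expected degree is approximately w_i. The size, degree sequence, and number of realizations below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
w = rng.uniform(5, 15, size=n)          # a prescribed expected-degree sequence
s = w.sum()

eigs = []
for _ in range(10):                     # several realizations, as on the slide
    P = np.minimum(np.outer(w, w) / s, 1.0)   # Chung-Lu edge probabilities
    U = rng.random((n, n))
    A = np.triu(U < P, k=1)             # sample each pair once (upper triangle)
    A = (A | A.T).astype(float)         # symmetrize; no self-loops
    eigs.append(np.linalg.eigvalsh(A))
eigs = np.concatenate(eigs)
# Histogramming `eigs` shows a bulk roughly centered at zero, plus a
# largest eigenvalue that separates from the bulk (near sum(w**2)/sum(w)).
```

Pooling eigenvalues across realizations, as done here, is what makes the histogram of the bulk smooth enough to compare against a candidate limiting spectral density.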

Spectral moments of random graphs and degree distributions
Degree distributions can reveal the moments of the spectra of graph Laplacians, which:
–determine synchronizability
–determine the speed of convergence of distributed algorithms
Lower-order moments do not necessarily fix the support of the spectrum, but they do fix its shape.
Applications:
–Analysis of virus spreading (depends on the spectral radius of the adjacency matrix)
–Non-conservative synchronization conditions for graphs with prescribed degree distributions
–Analytic expressions for spectral moments of random geometric graphs
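The link between degrees and spectral moments can be made concrete with the adjacency matrix: the k-th spectral moment (1/n) tr(A^k) counts closed walks of length k per node, and in particular (1/n) tr(A^2) equals the average degree exactly. A small check on an Erdős–Rényi sample (graph parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample an undirected Erdos-Renyi graph G(n, p), no self-loops.
n, p = 200, 0.05
A = np.triu(rng.random((n, n)) < p, k=1)
A = (A | A.T).astype(float)

# Second spectral moment of the adjacency matrix: (1/n) * sum(lambda_i^2).
eigs = np.linalg.eigvalsh(A)
m2 = float(np.mean(eigs ** 2))

# Since tr(A^2) = sum_ij A_ij^2 = sum of degrees, m2 equals the mean degree.
avg_degree = float(A.sum(axis=1).mean())
```

Higher moments work the same way: (1/n) tr(A^3) counts triangles, and so on, which is how prescribed degree (and motif) statistics constrain the shape of the bulk without fixing its support.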

Consensus and naïve social learning
When is consensus a good thing? We need to make sure the update converges to the correct value.

Naïve vs. Bayesian learning
–Naïve learning: just average with neighbors
–Bayesian learning: fuse information with Bayes' rule

Social learning
There is a true state of the world, among countably many. We start from a prior distribution and would like to update the distribution (our belief about the true state) as more observations arrive. Ideally we would use Bayes' rule to do the information aggregation. This works well when there is one agent (Blackwell and Dubins, 1962), but becomes impossible with more than two agents!
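For the single-agent case, the Bayesian benchmark is simple: multiply the current belief by the likelihood of each observed signal and renormalize. A minimal sketch with two world states and a binary signal (the signal model and biases are assumptions for illustration, not from the slides):

```python
import numpy as np

def bayes_update(belief, likelihoods):
    """One Bayesian update of a belief over finitely many world states,
    given the likelihood of the observed signal under each state."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# P(signal | state) for two candidate states of the world.
signal_model = {
    "heads": np.array([0.7, 0.4]),
    "tails": np.array([0.3, 0.6]),
}

rng = np.random.default_rng(0)
belief = np.array([0.5, 0.5])   # uniform prior over the two states
true_state = 0
for _ in range(200):            # observe i.i.d. signals drawn from the true state
    s = "heads" if rng.random() < signal_model["heads"][true_state] else "tails"
    belief = bayes_update(belief, signal_model[s])
# With informative signals, the belief concentrates on the true state.
```

The difficulty the slide points to is that with multiple agents, a fully Bayesian update would require each agent to reason about what its neighbors' reported beliefs reveal about signals it never saw; the naïve alternative just averages the belief vectors instead.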

Locally Rational, Globally Naïve: Bayesian learning under peer pressure

Model Description

Belief Update Rule

Why this update?

Eventually correct forecasts: eventually-correct estimation of the output!

Why strong connectivity? There is no convergence if different people interpret signals differently: agent N is misled by listening to the less informed agent B.

Example One can actually learn from others

Learning from others Information in i’th signal only good for distinguishing

Convergence of beliefs and consensus on correct value!

Learning from others

Summary Only one agent needs a positive prior on the true state!