From Consensus to Social Learning Ali Jadbabaie Department of Electrical and Systems Engineering and GRASP Laboratory Alvaro Sandroni Penn Econ. and Kellogg School of Management, Northwestern University Block Island Workshop on Swarming, June 2009 With Alireza Tahbaz-Salehi and Victor Preciado
Emergence of Consensus, Synchronization, Flocking
Opinion dynamics, crowd control, synchronization and flocking
Flocking and opinion dynamics
Bounded-confidence opinion model (Krause, 2000): nodes update their opinions as a weighted average of their friends' opinions, where friends are those whose opinions are already close. When will there be fragmentation, and when will opinions converge? The dynamics change the topology.
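The bounded-confidence update can be simulated directly. The step function, the confidence radius `eps`, and the uniformly spaced initial opinions below are illustrative choices, not taken from the slides:

```python
import numpy as np

def hk_step(x, eps):
    """One bounded-confidence update: each agent moves to the average
    of all opinions within eps of its own (including itself)."""
    x = np.asarray(x, dtype=float)
    new = np.empty_like(x)
    for i, xi in enumerate(x):
        close = np.abs(x - xi) <= eps
        new[i] = x[close].mean()
    return new

def simulate(x0, eps, steps=100):
    """Iterate until a fixed point (fragmented clusters or consensus)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        nxt = hk_step(x, eps)
        if np.allclose(nxt, x):
            break
        x = nxt
    return x

# Large confidence radius -> consensus; small -> fragmentation.
x0 = np.linspace(0.0, 1.0, 11)
print(simulate(x0, eps=0.6))
print(simulate(x0, eps=0.05))
```

With a large radius every agent eventually hears everyone and opinions merge; with a radius smaller than the initial spacing (0.1 here) no agent sees a neighbor and the profile freezes, the extreme case of fragmentation.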
Conditions for reaching consensus
Theorem (Jadbabaie et al. 2003, Tsitsiklis '84): if there is a sequence of bounded, non-overlapping time intervals T_k such that over each such interval the network of agents is "jointly connected", then all agents reach consensus on their velocity vectors.
Convergence time (Olshevsky, Tsitsiklis): T(ε) = O(n^3 log n / ε).
A similar result holds when the network changes randomly.
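The joint-connectivity condition can be illustrated with two fixed averaging matrices, each corresponding to a disconnected graph whose union is connected; the specific weights and the alternating schedule below are illustrative assumptions:

```python
import numpy as np

# Row-stochastic averaging matrices for two graphs on 3 agents.
# Each graph alone is disconnected, but their union is connected,
# so alternating between them is "jointly connected" over every
# window of two steps, and consensus is still reached.
A1 = np.array([[0.5, 0.5, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 0.0, 1.0]])   # only the link 1-2
A2 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.5, 0.5]])   # only the link 2-3

x = np.array([0.0, 0.5, 1.0])
for t in range(200):
    x = (A1 if t % 2 == 0 else A2) @ x
print(x, np.ptp(x))  # spread shrinks toward 0: consensus
```

Note that neither A1 nor A2 alone would drive all three agents together; agent 3 is frozen under A1 and agent 1 under A2.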
Random Networks
The graphs could be correlated so long as they are stationary-ergodic.
Variance of consensus value for ER graphs
New results for finite random graphs: an explicit expression for the variance of x*. The variance is a function of c, n, and the initial conditions x(0) (although the explicit expression is messy). The average weight matrix is symmetric!
[Figure: plots of Var(x*) vs. p for n = 3, 6, 9, 12, 15, with initial conditions uniformly distributed in [0,1].]
Here r(p,n) is a non-trivial (although closed-form) function that goes to 1 as n goes to infinity.
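Since the closed-form expression is messy, Var(x*) is easy to estimate by Monte Carlo instead. The per-step weight rule below (each agent averages itself with its current neighbors in a fresh G(n, p) draw) is an illustrative assumption, not the slides' exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

def er_average_step(x, p, rng):
    """One averaging step over a fresh Erdos-Renyi graph G(n, p):
    each agent moves to the mean of itself and its neighbors."""
    n = len(x)
    U = rng.random((n, n)) < p
    A = np.triu(U, 1)
    A = A | A.T                      # symmetric adjacency, no self-loops
    W = A.astype(float) + np.eye(n)  # include self in the average
    W /= W.sum(axis=1, keepdims=True)
    return W @ x

def consensus_value(n, p, rng, tol=1e-10):
    """Run to (numerical) consensus and return the agreed value x*."""
    x = rng.random(n)                # x(0) ~ Uniform[0,1]
    while np.ptp(x) > tol:
        x = er_average_step(x, p, rng)
    return x.mean()

samples = [consensus_value(10, 0.3, rng) for _ in range(2000)]
print(np.var(samples))  # Monte Carlo estimate of Var(x*)
```

Because each W is row-stochastic but not doubly stochastic, x* is a random weighted average of x(0), so its variance depends on p, n, and the initial-condition distribution, consistent with the slide.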
Consensus and Naïve Social Learning
When is consensus a good thing? We need to make sure the update converges to the correct value.
Naïve vs. Rational Decision Making
Naïve learning: just average! Rational: fuse information with Bayes' rule.
Social learning
There is a true state of the world, among countably many. We start from a prior distribution and would like to update the belief over the true state with more observations. Ideally we would use Bayes' rule to do the information aggregation. This works well when there is one agent (Blackwell and Dubins, 1962), but becomes intractable with more than two!
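For the single-agent case, Bayes' rule over a finite set of states is a one-liner; the two-state likelihoods and binary signal model below are illustrative assumptions:

```python
import numpy as np

def bayes_update(prior, likelihoods, signal):
    """Bayes rule: posterior(theta) ∝ prior(theta) * P(signal | theta)."""
    post = prior * likelihoods[:, signal]
    return post / post.sum()

# Two states, binary signal; state 0 makes signal 0 more likely.
prior = np.array([0.5, 0.5])
L = np.array([[0.8, 0.2],    # P(signal | theta = 0)
              [0.3, 0.7]])   # P(signal | theta = 1)

rng = np.random.default_rng(1)
true_theta = 0
belief = prior.copy()
for _ in range(50):
    s = int(rng.random() > L[true_theta, 0])  # signal 0 w.p. 0.8
    belief = bayes_update(belief, L, s)
print(belief)  # belief concentrates on the true state
```

With repeated informative signals, the posterior concentrates on the true state almost surely; this is the benchmark the naïve averaging rules are measured against.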
Locally Rational, Globally Naïve: Bayesian learning under peer pressure
Model Description
Belief Update Rule
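The slide itself carries no formula, so the sketch below assumes the style of rule studied in the authors' related work on non-Bayesian social learning: each agent forms a convex combination of its own Bayesian posterior (given its private signal) and its neighbors' current beliefs. The weights A, likelihoods, and two-agent example are illustrative assumptions:

```python
import numpy as np

def bayes(mu, lik, signal):
    """Bayes update of belief mu given a private signal."""
    post = mu * lik[:, signal]
    return post / post.sum()

def social_update(mus, A, liks, signals):
    """One step of the assumed rule:
        mu_i <- A[i,i] * Bayes(mu_i; s_i) + sum_{j != i} A[i,j] * mu_j
    i.e. locally rational (Bayes on own signal), globally naïve
    (linear averaging of neighbors' beliefs)."""
    n = len(mus)
    new = np.empty_like(mus)
    for i in range(n):
        new[i] = A[i, i] * bayes(mus[i], liks[i], signals[i])
        for j in range(n):
            if j != i:
                new[i] += A[i, j] * mus[j]
    return new

# Two states; agent 0 observes informative signals, agent 1's signals
# are pure noise. A is row-stochastic and strongly connected.
A = np.array([[0.7, 0.3],
              [0.3, 0.7]])
liks = np.array([[[0.8, 0.2], [0.3, 0.7]],    # agent 0: informative
                 [[0.5, 0.5], [0.5, 0.5]]])   # agent 1: uninformative
true_theta = 0
rng = np.random.default_rng(2)
mus = np.full((2, 2), 0.5)
for _ in range(300):
    signals = [int(rng.random() > liks[i][true_theta][0]) for i in range(2)]
    mus = social_update(mus, A, liks, signals)
print(mus)
```

Note that agent 1's signals are uninformative, yet under strong connectivity its belief is pulled toward the truth by agent 0, which is the sense in which one can learn from others.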
Why this update?
Eventually correct forecasts
Eventually-correct estimation of the output!
Why strong connectivity?
No convergence if different people interpret signals differently: agent N is misled by listening to the less-informed agent B.
Example One can actually learn from others
Convergence of beliefs and consensus on the correct value!
Learning from others
Summary