1
From Consensus to Social Learning. Ali Jadbabaie, Department of Electrical and Systems Engineering and GRASP Laboratory; Alvaro Sandroni, Penn Econ. and Kellogg School of Management, Northwestern University. Block Island Workshop on Swarming, June 2009. With Alireza Tahbaz-Salehi and Victor Preciado.
2
Emergence of consensus, synchronization, and flocking: opinion dynamics and crowd control.
3
Flocking and opinion dynamics. Bounded-confidence opinion model (Krause, 2000): nodes update their opinions as a weighted average of the opinions of their friends, where friends are those whose opinions are already close. When will there be fragmentation, and when will opinions converge? The dynamics change the topology.
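The bounded-confidence dynamics can be sketched in a few lines. This is a minimal illustration, not the slides' own code; the function name `hk_step` and the choice of equal (unweighted) averaging over friends are assumptions for the sketch.

```python
import random

def hk_step(opinions, eps):
    # Each agent averages the opinions within distance eps of its own
    # (its "friends", including itself), so the interaction graph
    # changes with the state of the system.
    new = []
    for x in opinions:
        friends = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(friends) / len(friends))
    return new

random.seed(0)
opinions = [random.random() for _ in range(50)]
for _ in range(30):
    opinions = hk_step(opinions, eps=0.25)

# Opinions typically collapse into a few clusters separated by more
# than eps: fragmentation or consensus, depending on eps.
clusters = sorted(set(round(x, 3) for x in opinions))
print(clusters)
```

With a large confidence radius everyone interacts and the population reaches consensus in one step; with a small radius, distant agents never interact and opinions fragment.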
4
Conditions for reaching consensus. Theorem (Jadbabaie et al. 2003; Tsitsiklis 1984): if there is a sequence of bounded, non-overlapping time intervals T_k such that over each interval the network of agents is "jointly connected," then all agents reach consensus on their velocity vectors. Convergence time (Olshevsky and Tsitsiklis): T(eps) = O(n^3 log(n)/eps). A similar result holds when the network changes randomly.
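The joint-connectivity condition can be seen in a toy example: two graphs that are each disconnected on their own, but whose union is connected, still drive the agents to agreement when alternated. This is an illustrative sketch (equal-weight neighborhood averaging is an assumption; the theorem allows more general weights).

```python
def consensus_step(x, edges):
    # One step of equal-weight averaging over the current undirected
    # graph: each agent averages itself with its current neighbors.
    n = len(x)
    nbrs = {i: {i} for i in range(n)}
    for i, j in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    return [sum(x[k] for k in nbrs[i]) / len(nbrs[i]) for i in range(n)]

# G1 links only 0-1, G2 links only 1-2.  Each graph alone is
# disconnected, but alternating them makes the network "jointly
# connected" over every interval of length 2.
x = [0.0, 3.0, 9.0]
for t in range(200):
    x = consensus_step(x, [(0, 1)] if t % 2 == 0 else [(1, 2)])
print(x)  # all three entries approach a common value
```

Note that because the averaging matrices here are not doubly stochastic, the consensus value need not be the initial average.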
5
Random networks. The graphs may be correlated, as long as the sequence is stationary and ergodic.
6
Variance of the consensus value for ER graphs. New results for finite random graphs: an explicit expression for the variance of x*. The variance is a function of c, n, and the initial conditions x(0), although the explicit expression is messy. [Figure: plots of Var(x*) versus p for initial conditions uniformly distributed in [0,1], for n = 3, 6, 9, 12, 15.] The average weight matrix is symmetric! Here r(p,n) is a non-trivial (although closed-form) function that goes to 1 as n goes to infinity.
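Lacking the slides' closed-form expression, the randomness of x* can at least be illustrated by Monte Carlo: run consensus over an i.i.d. sequence of Erdős–Rényi graphs from fixed initial conditions and look at the spread of the limit across realizations. Everything here (function name, equal-weight averaging, sample sizes) is an assumption of the sketch, not the paper's construction.

```python
import random

def er_consensus_value(x0, p, steps=200):
    # One realization of consensus over i.i.d. Erdos-Renyi graphs:
    # at each step a fresh G(n, p) is drawn and every agent averages
    # itself with its current neighbors.  The (random) limit is x*.
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        nbrs = {i: {i} for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p:
                    nbrs[i].add(j)
                    nbrs[j].add(i)
        x = [sum(x[k] for k in nbrs[i]) / len(nbrs[i]) for i in range(n)]
    return x[0]

random.seed(1)
x0 = [random.random() for _ in range(6)]
samples = [er_consensus_value(x0, p=0.5) for _ in range(500)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # x* varies across graph realizations
```

Each step is a convex combination, so every realization of x* stays within the convex hull of the initial conditions; the estimated variance depends on p, n, and x(0), consistent with the slide's claim.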
7
Consensus and naïve social learning. When is consensus a good thing? We need to make sure the update converges to the correct value.
8
Naïve vs. rational decision making. Naïve learning: just average! Rational learning: fuse information with Bayes' rule.
9
Social learning. There is a true state of the world, among countably many. We start from a prior distribution and would like to update the belief on the true state as more observations arrive. Ideally, we would use Bayes' rule to do the information aggregation. This works well when there is one agent (Blackwell and Dubins, 1962), but becomes intractable with more than two!
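The single-agent Bayesian benchmark is simple to state in code: the posterior weight on each state is the prior weight times the likelihood of the observed signal, renormalized. A minimal sketch with a finite state set (the states "A"/"B", signal biases, and the name `bayes_update` are illustrative choices):

```python
def bayes_update(prior, likelihood, signal):
    # One Bayesian update of a belief over a finite set of states:
    # posterior(theta) is proportional to prior(theta) * P(signal | theta).
    post = {th: prior[th] * likelihood[th][signal] for th in prior}
    z = sum(post.values())
    return {th: p / z for th, p in post.items()}

# Two states; signals "H"/"T" have different probabilities under each.
likelihood = {"A": {"H": 0.8, "T": 0.2}, "B": {"H": 0.3, "T": 0.7}}
belief = {"A": 0.5, "B": 0.5}
for s in ["H", "H", "T", "H"]:
    belief = bayes_update(belief, likelihood, s)
print(belief)  # mostly-heads evidence shifts belief toward state "A"
```

With one agent this recursion is all there is; the difficulty the slide points to is that with multiple agents, each would have to reason about what everyone else's actions reveal, which is where the fully Bayesian approach breaks down.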
10
Locally Rational, Globally Naïve: Bayesian learning under peer pressure
11
Model Description
13
Belief Update Rule
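The transcript does not reproduce the rule itself, so the following is a hedged sketch in the spirit of the "locally rational, globally naïve" idea from this line of work: each agent forms a Bayesian posterior from its own private signal, then linearly mixes it with its neighbors' current beliefs. The function name `social_update`, the weight matrix, and the signal structure are all assumptions of the sketch.

```python
import random

def social_update(beliefs, likelihood, signals, weights):
    # Agent i: Bayesian update on its private signal, then average:
    # new belief_i = w_ii * posterior_i + sum_{j != i} w_ij * belief_j.
    n = len(beliefs)
    states = list(beliefs[0])
    new = []
    for i in range(n):
        # Locally rational step: Bayes' rule on agent i's own signal.
        post = {th: beliefs[i][th] * likelihood[th][signals[i]]
                for th in states}
        z = sum(post.values())
        post = {th: p / z for th, p in post.items()}
        # Globally naive step: linear mixing with neighbors' beliefs.
        new.append({th: weights[i][i] * post[th]
                    + sum(weights[i][j] * beliefs[j][th]
                          for j in range(n) if j != i)
                    for th in states})
    return new

# True state "A": signals are heads with probability 0.8; under "B"
# the signal would be uninformative (heads with probability 0.5).
random.seed(2)
likelihood = {"A": {"H": 0.8, "T": 0.2}, "B": {"H": 0.5, "T": 0.5}}
beliefs = [{"A": 0.5, "B": 0.5}, {"A": 0.5, "B": 0.5}]
weights = [[0.5, 0.5], [0.5, 0.5]]
for _ in range(200):
    signals = ["H" if random.random() < 0.8 else "T" for _ in beliefs]
    beliefs = social_update(beliefs, likelihood, signals, weights)
print(beliefs)  # both agents' beliefs concentrate on the true state "A"
```

The point of the mixing step is that it keeps each agent's computation local and cheap, while the network (when strongly connected) spreads everyone's private information.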
14
Why this update?
15
Eventually correct forecasts: eventually-correct estimation of the output!
16
Why strong connectivity? There is no convergence if different people interpret signals differently: agent N is misled by listening to the less-informed agent B.
17
Example: one can actually learn from others.
18
Convergence of beliefs and consensus on the correct value!
19
Learning from others
20
Summary