From Consensus to Social Learning. Ali Jadbabaie, Department of Electrical and Systems Engineering and GRASP Laboratory. Alvaro Sandroni, Penn Economics and Kellogg School of Management.


From Consensus to Social Learning. Ali Jadbabaie, Department of Electrical and Systems Engineering and GRASP Laboratory. Alvaro Sandroni, Penn Economics and Kellogg School of Management, Northwestern University. Block Island Workshop on Swarming, June 2009. With Alireza Tahbaz-Salehi and Victor Preciado.

Emergence of Consensus, Synchronization, Flocking: opinion dynamics, crowd control, synchronization, and flocking.

Flocking and opinion dynamics. Bounded confidence opinion model (Krause, 2000): nodes update their opinions as a weighted average of the opinions of their friends, where friends are those whose opinions are already close. When does the population fragment into separate opinion clusters, and when do opinions converge? Note that the dynamics change the topology.
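The bounded-confidence update can be sketched in a few lines. Below is a minimal NumPy simulation (the function names and the confidence parameter `eps` are my own, not from the slides): with a large confidence radius all opinions merge into one cluster, and with a small radius they fragment.

```python
import numpy as np

def hk_step(x, eps):
    """One bounded-confidence update: each agent averages the
    opinions of all agents within distance eps (itself included)."""
    x = np.asarray(x, dtype=float)
    new = np.empty_like(x)
    for i in range(len(x)):
        neighbors = np.abs(x - x[i]) <= eps  # topology depends on opinions
        new[i] = x[neighbors].mean()
    return new

def hk_run(x0, eps, steps=100):
    """Iterate until the opinion profile stops changing."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        nxt = hk_step(x, eps)
        if np.allclose(nxt, x):
            break
        x = nxt
    return x
```

For example, `hk_run([0.0, 0.1, 0.2, 0.8, 0.9, 1.0], eps=2.0)` reaches a single consensus value, while `eps=0.15` leaves two well-separated clusters.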

Conditions for reaching consensus. Theorem (Jadbabaie et al. 2003, Tsitsiklis '84): if there is a sequence of bounded, non-overlapping time intervals T_k such that over each interval the network of agents is "jointly connected," then all agents reach consensus on their velocity vectors. Convergence time (Olshevsky, Tsitsiklis): T(ε) = O(n³ log(n/ε)). A similar result holds when the network changes randomly.
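A toy illustration of the joint-connectivity condition (the specific weight matrices are my own choice, for illustration): each averaging matrix below corresponds to a graph with a single edge, so neither is connected on its own, but over every window of two steps the union of the two graphs is connected, and consensus is still reached.

```python
import numpy as np

# Each matrix averages one pair of agents; neither graph alone is
# connected, but their union over any two consecutive steps is.
W1 = np.array([[0.5, 0.5, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 0.0, 1.0]])
W2 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.5, 0.5]])

x = np.array([0.0, 3.0, 9.0])
for t in range(200):
    W = W1 if t % 2 == 0 else W2   # switching, jointly connected topology
    x = W @ x
# x is now (numerically) a consensus vector; since both matrices are
# doubly stochastic, the consensus value is the initial average, 4.0.
```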

Random Networks. The graphs may be correlated, so long as the sequence is stationary and ergodic.

Variance of consensus value for Erdős–Rényi graphs. New results for finite random graphs: an explicit expression for the variance of x*. The variance is a function of c, n, and the initial conditions x(0), although the explicit expression is messy. The average weight matrix is symmetric. [Figure: Var(x*) vs. p for n = 3, 6, 9, 12, 15, with initial conditions uniformly distributed on [0, 1].] The expression involves r(p, n), a non-trivial (although closed-form) function that tends to 1 as n goes to infinity.
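The variance of x* can also be estimated by Monte Carlo. The sketch below assumes one particular weight construction (row-normalized adjacency with self-loops), which is not necessarily the one analyzed in the slides; the consensus value of each realization is π'x(0), where π is the stationary distribution of the weight matrix.

```python
import numpy as np

def er_consensus_value(n, p, x0, rng):
    """Sample an ER graph, form the row-stochastic weight matrix
    W = (I + A) normalized by row sums, and return the consensus
    value pi' x0, where pi is the left eigenvector of W."""
    while True:
        A = np.triu(rng.random((n, n)) < p, 1)
        A = A + A.T
        M = np.eye(n) + A                      # self-loops: aperiodicity
        W = M / M.sum(axis=1, keepdims=True)
        pi = np.ones(n) / n                    # power iteration for pi
        for _ in range(500):
            pi = pi @ W
        if np.allclose(pi, pi @ W, atol=1e-10):
            return pi @ np.asarray(x0)

rng = np.random.default_rng(0)
n, p = 8, 0.5
x0 = rng.random(n)
samples = [er_consensus_value(n, p, x0, rng) for _ in range(300)]
var_hat = np.var(samples)   # Monte Carlo estimate of Var(x*)
```

Each sample is a convex combination of the entries of x(0), so the estimated variance is far smaller than the variance of the initial conditions themselves.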

Consensus and naïve social learning. When is consensus a good thing? We need to make sure the update converges to the correct value.

Naïve vs. rational decision making. Naïve learning: just average. Rational learning: fuse information with Bayes' rule.

Social learning. There is a true state of the world, among countably many possibilities. We start from a prior distribution and would like to update the belief over the true state as observations accumulate. Ideally, Bayes' rule performs the information aggregation. This works well when there is one agent (Blackwell and Dubins, 1962), but becomes intractable with two or more agents.
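For concreteness, here is the single-agent Bayesian update over a finite state space (the two-state, binary-signal likelihoods are illustrative, not from the slides):

```python
import numpy as np

def bayes_update(prior, likelihoods, obs):
    """Posterior over states after observing obs.
    prior: shape (S,); likelihoods[s, o] = P(obs = o | state s)."""
    post = prior * likelihoods[:, obs]
    return post / post.sum()

# Two states, binary signal: state 0 emits '1' w.p. 0.2, state 1 w.p. 0.8.
L = np.array([[0.8, 0.2],
              [0.2, 0.8]])
belief = np.array([0.5, 0.5])          # uniform prior
for obs in [1, 1, 0, 1]:               # a mostly-"1" signal sequence
    belief = bayes_update(belief, L, obs)
# The belief concentrates on state 1 (posterior odds 4^(3-1) = 16 : 1).
```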

Locally Rational, Globally Naïve: Bayesian learning under peer pressure

Model Description

Belief Update Rule
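The transcript does not reproduce the update rule itself. A common form in this line of work mixes an agent's own Bayesian update on its private signal with a weighted average of its neighbors' current beliefs; the sketch below assumes that form and is illustrative, not necessarily the exact rule from the talk.

```python
import numpy as np

def bayes(belief, L, obs):
    post = belief * L[:, obs]
    return post / post.sum()

def social_step(beliefs, A, L, obs):
    """One round: agent i Bayes-updates on its private signal obs[i],
    then mixes with neighbors' beliefs using the row-stochastic
    weights A[i] (A[i, i] is the self-weight)."""
    new = np.zeros_like(beliefs)
    for i in range(beliefs.shape[0]):
        new[i] = A[i, i] * bayes(beliefs[i], L[i], obs[i])
        for j in range(beliefs.shape[0]):
            if j != i:
                new[i] += A[i, j] * beliefs[j]
    return new

# Two agents, two states; the true state is 1. Agent 0's signal is
# fully revealing; agent 1's signal carries no information.
L = np.array([[[1.0, 0.0], [0.0, 1.0]],      # agent 0
              [[0.5, 0.5], [0.5, 0.5]]])     # agent 1
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])
beliefs = np.full((2, 2), 0.5)
for _ in range(100):
    beliefs = social_step(beliefs, A, L, obs=[1, 0])
# Both beliefs concentrate on the true state: the uninformed agent
# learns it entirely through the mixing term.
```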

Why this update?

Eventually correct forecasts: eventually-correct estimation of the output!

Why strong connectivity? There is no convergence if different people interpret signals differently: agent N is misled by listening to the less-informed agent B.

Example: one can actually learn from others.

Convergence of beliefs and consensus on the correct value!

Learning from others

Summary