15.05.2007 Observational Learning in Random Networks. Julian Lorenz, Martin Marciniszyn, Angelika Steger, Institute of Theoretical Computer Science, ETH Zürich.


Observational Learning in Random Networks Julian Lorenz, Martin Marciniszyn, Angelika Steger Institute of Theoretical Computer Science, ETH Zürich

Julian Lorenz, 2 Observational Learning. When people make a decision, they typically look around to see how others have decided: a decision process of a group in which each individual combines its own opinion with observations of the others (word-of-mouth learning, social learning). Examples: brand choice, fashion, bestseller lists; stock market bubbles; (animal) mating: females choose males they observed being selected by other females (Gibson/Hoglund '92, Copying and Sexual Selection).

Julian Lorenz, 3 Model of Sequential Observational Learning (Bikhchandani, Hirshleifer, Welch 1998). A population of n agents makes a one-time decision between two alternatives, a and b, sequentially. Either a or b is the a posteriori superior choice for all agents (unknown during the decision process). Agents are Bayes-rational and decide using: a stochastic private signal (correct with probability q > 0.5), and observations of the other agents' actions. What is the macro-behavior of such learning processes? How well does the population do as a whole?

Julian Lorenz, 4 Model of Sequential Observational Learning. Agents can observe the actions of all predecessors. Let n_a be the number of predecessors that chose option a, and n_b the number that chose option b. Bayes-optimal local decision rule in [BHW98]: majority vote over the observed actions and the private signal (n_a + n_b + 1 votes in total); if there is a tie, follow the private signal. One can show that this is the Bayes-optimal strategy for each agent (it maximizes the probability of a correct choice). Information externality: imitation is rational.
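The majority rule above can be sketched in code; a minimal sketch under the slide's assumptions, where the function name, the string encoding of the actions, and the argument names are my own illustration:

```python
def bhw_decision(n_a: int, n_b: int, signal: str) -> str:
    """Bayes-optimal local rule of [BHW98] as stated on this slide:
    majority vote over the n_a + n_b observed actions plus the private
    signal; a tie is broken by following the private signal."""
    votes_a = n_a + (1 if signal == "a" else 0)
    votes_b = n_b + (1 if signal == "b" else 0)
    if votes_a != votes_b:
        return "a" if votes_a > votes_b else "b"
    return signal  # tie among the n_a + n_b + 1 votes

# An agent two votes ahead for a ignores a contrary signal ...
print(bhw_decision(3, 1, "b"))  # a
# ... while at a one-vote lead the private signal decides.
print(bhw_decision(2, 1, "b"))  # b
```

Note that the rule only imitates blindly once the observed lead is at least two votes; otherwise the private signal still matters, which is exactly the information externality mentioned above.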

Julian Lorenz, 5 Sequential Observational Learning [BHW98]. Example: a, b, b, a, a, …

Julian Lorenz, 6 Informational Cascades in [BHW98]. Equivalent version of the decision rule: let d = n_a − n_b. An agent chooses a if d ≥ 2, chooses b if d ≤ −2, and follows its private signal if −1 ≤ d ≤ +1. Obviously, the key variable is d. Eventually d hits an absorbing state, +2 or −2: in the long run, almost all agents make the same decision. Incorrect informational cascades are quite likely! Globally inefficient use of information.

Julian Lorenz, 7 Informational Cascades in [BHW98]. While no cascade has started, d performs a simple random walk (step +1 with probability q, the confidence of the private signal) absorbed at ±2, so by gambler's ruin P[correct cascade] = q² / (q² + (1−q)²) and P[incorrect cascade] = (1−q)² / (q² + (1−q)²); the probability of a correct cascade grows with the confidence q of the private signal. Remark: even in a cascade, imitation is rational. Locally rational vs. globally beneficial.
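The absorbing random walk on d is easy to check numerically; a sketch under the reduction above (the function names and the Monte-Carlo setup are my own, not the authors' code):

```python
import random

def correct_cascade_formula(q: float) -> float:
    """Gambler's ruin: probability that the walk on d, with step +1
    w.p. q, hits +2 (correct cascade) before -2 (incorrect cascade)."""
    return q * q / (q * q + (1 - q) ** 2)

def correct_cascade_empirical(q: float, trials: int = 100_000) -> float:
    """Monte-Carlo estimate of the same probability."""
    rng = random.Random(0)
    hits = 0
    for _ in range(trials):
        d = 0
        while abs(d) < 2:          # -1 <= d <= +1: agents follow signals
            d += 1 if rng.random() < q else -1
        hits += d == 2             # absorbed at +2: correct cascade
    return hits / trials

q = 0.75
print(correct_cascade_formula(q))    # 0.9
print(correct_cascade_empirical(q))  # close to 0.9
```

For q = 0.75 an incorrect cascade thus still occurs with probability 0.1, matching the slide's point that incorrect cascades are quite likely.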

Julian Lorenz, 8 Wisdom of Crowds (Surowiecki 2004): Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations … vs. incorrect informational cascades? What would improve the global behavior? Idea: actions are observable for a subset of the agents only.

Julian Lorenz, 9 Learning in Random Networks. Agents can only observe the actions of their acquaintances, modeled by a random graph G(n,p): a random graph on n vertices in which each edge is present independently with probability p. Agents' local decision rule: same as in [BHW98]. Let n_a and n_b be the numbers of acquaintances that chose option a and b, respectively; the agent chooses a if n_a − n_b ≥ 2, chooses b if n_b − n_a ≥ 2, and follows its private signal otherwise. For p = 1 we recover [BHW98].
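The network model can be simulated directly; a sketch under the slide's assumptions (w.l.o.g. a is the superior alternative, each private signal is correct with probability q; all names are illustrative, not the authors' implementation):

```python
import random

def simulate(n: int, p: float, q: float, seed: int = 0) -> float:
    """Sequential observational learning on G(n,p): returns the
    fraction of agents that chose the superior alternative "a"."""
    rng = random.Random(seed)
    choices = []
    for i in range(n):
        # Acquaintances among the predecessors: each earlier agent is
        # a neighbor independently with probability p (G(n,p)).
        n_a = n_b = 0
        for j in range(i):
            if rng.random() < p:
                if choices[j] == "a":
                    n_a += 1
                else:
                    n_b += 1
        signal = "a" if rng.random() < q else "b"
        if n_a - n_b >= 2:
            choices.append("a")
        elif n_b - n_a >= 2:
            choices.append("b")
        else:
            choices.append(signal)  # |n_a - n_b| <= 1: follow signal
    return choices.count("a") / n

# p = 1 recovers the full-observation model of [BHW98]
print(simulate(n=1000, p=1.0, q=0.75))
```

A single run either herds on a (fraction near 1) or on b (fraction near 0), which is the dichotomy the theorem on the next slide quantifies.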

Julian Lorenz, 10 Result: the macro-behavior of the process depends on p = p(n). Theorem (L., Marciniszyn, Steger '07). Let the network of agents be a random graph G(n,p) with signal confidence q > 0.5, and let X be the number of correct agents. (1) For moderately linked networks, a.a.s. almost all agents are correct. (2) For constant p, with constant probability almost all agents are incorrect.

Julian Lorenz, 11 Remark: Sparse networks. (3) For very sparse networks, there is no significant herding towards a or b. Why? A sparse random graph contains, with high probability, isolated vertices, which make independent decisions.

Julian Lorenz, 12 Discussion. A generalization of [BHW98]. Case 1 (a.a.s. almost all agents correct): each individual has less information, yet the entire population is better off; the whole population benefits from learning and imitation. Intuition: agents make independent decisions in the beginning, so information accumulates locally first. Case 2 (p constant): with constant probability almost all agents are incorrect.

Julian Lorenz, 13 Idea of Proof (I): case 1 (a.a.s. almost all agents correct). Suppose there is a correct bias among the first k₁ ≫ 1/p agents. Then whp the next agent has pk₁ ≫ 1 neighbors among them with a correct majority, and hence whp makes the correct decision. However, there are technical difficulties: a correct critical mass needs to be established, almost all subsequent agents must be correct, and everything must hold with high probability. The proof uses Chernoff-type bounds and techniques from random graph theory.
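The "whp correct decision" step can be quantified with a standard Chernoff–Hoeffding bound; a sketch under the stated assumptions, where δ > 0 denotes the correct bias among the first k₁ agents and m the number of observed neighbors:

```latex
% Among the first k_1 \gg 1/p agents, a fraction 1/2 + \delta is correct.
% The next agent observes m \sim \mathrm{Bin}(k_1, p) of them, so
% m \approx p k_1 \gg 1, and by Hoeffding's inequality
\[
  \Pr\Bigl[\#\{\text{correct among the } m \text{ observed}\}
      \le \tfrac{m}{2}\Bigr] \;\le\; e^{-2\delta^2 m}
  \;\longrightarrow\; 0 \qquad \text{as } p\,k_1 \to \infty .
\]
```

This is only the clean core of the argument; the dependencies between agents are what cause the technical difficulties mentioned above.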

Julian Lorenz, 14 Idea of Proof (II): case 2 (constant p: with constant probability almost all agents incorrect). Because of the high density of the network, there is no local accumulation of information. With constant probability an incorrect critical mass emerges, and then herding occurs as in case 1.

Julian Lorenz, 15 Proof of case 1 (a.a.s. almost all agents correct). We show three phases: Phase I (early adopters): whp a positive fraction of the agents is correct. Phase II (critical phase): whp the fraction of correct agents increases. Phase III (herding): whp almost all agents are correct. Choosing the phase boundaries suitably, case 1 follows.

Julian Lorenz, 16 Proof, Phase II (critical phase). Lemma: during Phase II, the fraction of correct agents increases; more and more agents disregard their private signal. But: conditional probabilities and dependencies between the agents arise in Phase II. Idea: consider groups of agents that are almost independent.

Julian Lorenz, 17 Proof, Phase II (cont.). Partition the agents of Phase II into groups W_i. Whp there is an edge into each W_i, and the number of correct agents is sharply concentrated. Iteratively, whp the fraction of correct agents increases throughout Phase II.

Julian Lorenz, 18 Proof, Phase III (herding): whp almost all agents are correct in Phase III. Whp the next agent has ≫ 1 neighbors and follows their majority. Again there are technical difficulties (consider groups of agents), but finally the result follows.

Julian Lorenz, 19 Numerical Experiments. Signal confidence q = 0.75. Plot: relative frequency of cascades vs. population size n, for p = 1/log n (correct cascade), p = 0.5 (correct cascade), p = 0.5 (incorrect cascade), and one further value of p (correct cascade).

Julian Lorenz, 20 Conclusion. The macro-behavior of observational learning depends on the density of the random network: moderately linked networks whp produce a correct informational cascade, while in dense networks incorrect informational cascades are possible. Intuition: a critical mass of independent decisions in the beginning lets information accumulate, followed by correct herding of almost all subsequent agents. Future work: other types of random networks (scale-free networks etc.).

Julian Lorenz, 21 Thank you very much for your attention! Questions?