Exploring subjective probability distributions using Bayesian statistics Tom Griffiths Department of Psychology Cognitive Science Program University of California, Berkeley

Perception is optimal Körding & Wolpert (2004)

Cognition is not

Bayesian models of cognition
What would an "ideal cognizer" do?
–"rational analysis" (Anderson, 1990)
How can structured representations be combined with statistical inference?
–graphical models, probabilistic grammars, etc.
What knowledge guides human inferences?
–questions about priors and likelihoods

Exploring subjective distributions Natural statistics in cognition (joint work with Josh Tenenbaum) Markov chain Monte Carlo with people (joint work with Adam Sanborn)

Natural statistics Images of natural scenes Prior distributions p(x)

Predicting the future How often is Google News updated? t = time since last update, t_total = time between updates. What should we guess for t_total given t?

Bayesian inference
p(t_total | t) ∝ p(t | t_total) p(t_total)
(posterior probability ∝ likelihood × prior)

Bayesian inference
p(t_total | t) ∝ p(t | t_total) p(t_total)
p(t_total | t) ∝ (1 / t_total) p(t_total), assuming t is a random sample from the interval (0 < t < t_total)
(posterior probability ∝ likelihood × prior)
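As a concrete illustration of this update (a sketch, not material from the talk), the following computes the posterior over t_total on a grid for an arbitrary prior and returns the posterior-median prediction; the grid range and the power-law exponent in the example are assumed values.

```python
import numpy as np

def posterior_median_prediction(t, prior_pdf, grid):
    """Posterior-median prediction of t_total given an observed t.

    Implements the update on the slide: p(t_total | t) is proportional to
    prior(t_total) / t_total for t_total >= t (the 1/t_total factor is the
    likelihood of a t drawn uniformly from [0, t_total]). `prior_pdf` can be
    any unnormalized density; `grid` is a dense grid of candidate t_total values.
    """
    post = np.where(grid >= t, prior_pdf(grid) / grid, 0.0)
    post /= post.sum()
    cdf = np.cumsum(post)
    # The posterior median is the smallest t_total whose CDF reaches 0.5.
    return grid[np.searchsorted(cdf, 0.5)]

# Example with an assumed power-law prior (the exponent is illustrative).
grid = np.linspace(0.1, 1000.0, 200_000)
print(posterior_median_prediction(10.0, lambda x: x ** -1.5, grid))
# For a power-law prior the optimal prediction is a constant multiple of t.
```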

The effects of priors

Evaluating human predictions
Different domains with different priors:
–a movie has made $60 million [power-law]
–your friend quotes from line 17 of a poem [power-law]
–you meet a 78 year old man [Gaussian]
–a movie has been running for 55 minutes [Gaussian]
–a U.S. congressman has served for 11 years [Erlang]
Prior distributions derived from actual data
Use 5 values of t for each
People predict t_total
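Continuing the sketch above, the same posterior-median predictor can be run under the three prior families listed on this slide; the particular parameter values below are illustrative placeholders, not the distributions estimated from real data in the study.

```python
# Illustrative priors for the three families on the slide; the parameters
# are placeholders, not the empirically derived distributions.
priors = {
    "power-law (movie grosses)":  lambda x: x ** -1.5,
    "Gaussian (life spans)":      lambda x: np.exp(-0.5 * ((x - 75.0) / 15.0) ** 2),
    "Erlang (terms in congress)": lambda x: x * np.exp(-x / 5.0),  # shape-2 Erlang
}

for name, pdf in priors.items():
    predictions = [posterior_median_prediction(t, pdf, grid)
                   for t in (1.0, 5.0, 20.0, 50.0)]
    print(name, [round(p, 1) for p in predictions])
```

Qualitatively, the power-law prior yields multiplicative predictions, the Gaussian prior pulls small t toward the mean, and the Erlang prior adds a roughly constant amount to t.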

[Figure: people's predictions compared against the parametric prior, the empirical prior, and Gott's rule]

Exploring subjective distributions Natural statistics in cognition (joint work with Josh Tenenbaum) Markov chain Monte Carlo with people (joint work with Adam Sanborn)

Markov chain Monte Carlo

Metropolis-Hastings algorithm (Metropolis et al., 1953; Hastings, 1970)
Step 1: propose a state (we assume a symmetric proposal), Q(x^(t+1) | x^(t)) = Q(x^(t) | x^(t+1))
Step 2: decide whether to accept, with probability
A = min(1, p(x^(t+1)) / p(x^(t)))  (Metropolis acceptance function)
A = p(x^(t+1)) / (p(x^(t+1)) + p(x^(t)))  (Barker acceptance function)
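To make the two steps concrete, here is a minimal random-walk sampler, a sketch rather than code from the talk: the Gaussian proposal, the standard-normal example target, and all parameter values are assumptions chosen only for illustration.

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_steps, proposal_sd=1.0, rule="metropolis"):
    """Minimal random-walk sampler for a target density p, given via log p(x).

    The Gaussian proposal is symmetric, so Q(x(t+1) | x(t)) = Q(x(t) | x(t+1))
    as assumed on the slide. `rule` selects the Metropolis or Barker
    acceptance function; both leave the target distribution invariant.
    """
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + np.random.normal(scale=proposal_sd)   # Step 1: propose a state
        ratio = np.exp(log_p(proposal) - log_p(x))           # p(proposal) / p(current)
        if rule == "metropolis":
            accept_prob = min(1.0, ratio)                     # A = min(1, ratio)
        else:
            accept_prob = ratio / (1.0 + ratio)               # Barker: ratio / (1 + ratio)
        if np.random.rand() < accept_prob:                    # Step 2: accept or reject
            x = proposal
        samples.append(x)
    return np.array(samples)

# Example: draw from a standard normal target.
draws = metropolis_hastings(lambda x: -0.5 * x ** 2, x0=0.0, n_steps=5000)
```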

A task Ask subjects which of two alternatives comes from a target category Which animal is a frog?

A Bayesian analysis of the task Assume: the subject evaluates each alternative under the category distribution p(x|c), and the two alternatives are a priori equally likely to be the category member

Response probabilities If people probability match to the posterior, the response probability is equivalent to the Barker acceptance function for the target distribution p(x|c)
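Written out, under the assumption stated above that the two alternatives are a priori equally likely to be the category member, the choice probability is exactly Barker's acceptance rule; this is a one-line sketch, not a formula copied from the slides.

```latex
\[
  p(\text{choose } x^{*} \mid x^{*}, x)
  \;=\; \frac{p(x^{*} \mid c)}{p(x^{*} \mid c) + p(x \mid c)}
  \;=\; A_{\mathrm{Barker}}(x^{*}; x)
  \qquad \text{for the target distribution } p(x \mid c).
\]
```

Because the comparison stimulus is proposed symmetrically, a sequence of such choices forms a Metropolis-Hastings chain whose stationary distribution is the category distribution p(x|c).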

Collecting the samples: Which is the frog? (Trial 1, Trial 2, Trial 3, …)
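The logic of the trial sequence can be simulated with a hypothetical ideal subject (an illustrative sketch, not the experimental code): the chosen animal on each trial becomes the current state, a new comparison animal is proposed, and probability matching to the posterior yields samples from the category distribution. The one-dimensional "category" and its parameters below are made up for the example.

```python
import numpy as np

def simulate_choice_chain(log_p_category, x0, n_trials, proposal_sd=1.0):
    """Simulate the 'which is the frog?' chain with a hypothetical ideal subject.

    On each trial the current stimulus x is paired with a proposed stimulus x*.
    The simulated subject probability-matches to the posterior, picking x* with
    probability p(x*|c) / (p(x*|c) + p(x|c)) (the Barker rule), and the chosen
    stimulus becomes the current state for the next trial.
    """
    x = x0
    chain = [x]
    for _ in range(n_trials):
        proposal = x + np.random.normal(scale=proposal_sd)
        ratio = np.exp(log_p_category(proposal) - log_p_category(x))
        if np.random.rand() < ratio / (1.0 + ratio):   # subject picks the proposal
            x = proposal
        chain.append(x)                                # record the chosen stimulus
    return np.array(chain)

# A one-dimensional stand-in for a category dimension (e.g. 'neck length');
# the target mean and spread are invented for illustration.
chain = simulate_choice_chain(lambda x: -0.5 * ((x - 3.0) / 0.5) ** 2,
                              x0=0.0, n_trials=2000)
# After burn-in, a histogram of `chain` approximates the category distribution.
```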

Sampling from natural categories Examined distributions for four natural categories: giraffes, horses, cats, and dogs. Stimuli were nine-parameter stick figures (Olman & Kersten, 2004)

Choice task

Samples from Subject 3 (projected onto a plane)

Mean animals by subject [Figure: mean stick figures for each subject (S1–S8) and category (giraffe, horse, cat, dog)]

Marginal densities (aggregated across subjects): Giraffes are distinguished by neck length, body height, and body tilt. Horses are like giraffes, but with shorter bodies and nearly uniform necks. Cats have longer tails than dogs.

Markov chain Monte Carlo with people
Probabilistic models can guide the design of experiments to measure psychological variables
Markov chain Monte Carlo can be used to sample from subjective probability distributions
–category distributions (Metropolis-Hastings)
–prior distributions (Gibbs sampling)
Effective for exploring large stimulus spaces in which the distribution is concentrated on a small part of the space

Conclusion
Probabilistic models give us a way to explore the knowledge that guides people's inferences
A basic problem for both cognition and perception: identifying subjective probability distributions
Two strategies:
–natural statistics
–Markov chain Monte Carlo with people