1
Priors and predictions in everyday cognition
Tom Griffiths, Cognitive and Linguistic Sciences
2
[Diagram: data → behavior]
What computational problem is the brain solving? Do optimal solutions to that problem help to explain human behavior?
3
Inductive problems: inferring structure from data
Perception – e.g. the structure of the 3D world from 2D visual data
[Diagram: data (a 2D image) and hypotheses (a cube vs. a shaded hexagon)]
4
Inductive problems: inferring structure from data
Perception – e.g. the structure of the 3D world from 2D data
Cognition – e.g. the form of a causal relationship from samples
[Diagram: data and hypotheses]
5
Reverend Thomas Bayes
6
Bayes’ theorem
p(h | d) = p(d | h) p(h) / Σ_h′ p(d | h′) p(h′)
posterior probability ∝ likelihood × prior probability, normalized by a sum over the space of hypotheses
(h: hypothesis, d: data)
7
Bayes’ theorem (h: hypothesis, d: data)
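As a minimal numerical sketch of the theorem over a discrete hypothesis space (the hypotheses and probabilities below are invented for illustration, not taken from the talk):

```python
# Bayes' theorem over a discrete hypothesis space.
# Hypotheses and numbers here are illustrative, not from the talk.

prior = {"h1": 0.7, "h2": 0.3}          # p(h)
likelihood = {"h1": 0.1, "h2": 0.6}     # p(d | h) for one observed datum d

# Unnormalized posterior: p(d | h) * p(h)
unnormalized = {h: likelihood[h] * prior[h] for h in prior}

# Normalize by the sum over the hypothesis space
evidence = sum(unnormalized.values())   # p(d)
posterior = {h: v / evidence for h, v in unnormalized.items()}

print(posterior)  # {'h1': 0.28, 'h2': 0.72}
```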
8
Perception is optimal Körding & Wolpert (2004)
9
Cognition is not
10
Do people use priors? Standard answer: no (Tversky & Kahneman, 1974)
11
Explaining inductive leaps
How do people
– infer causal relationships
– identify the work of chance
– predict the future
– assess similarity and make generalizations
– learn functions, languages, and concepts
…from such limited data?
12
Explaining inductive leaps
How do people
– infer causal relationships
– identify the work of chance
– predict the future
– assess similarity and make generalizations
– learn functions, languages, and concepts
…from such limited data?
What knowledge guides human inferences?
13
Prior knowledge matters when…
…using a single datapoint
– predicting the future
…using secondhand data
– effects of priors on cultural transmission
14
Outline
…using a single datapoint
– predicting the future
– joint work with Josh Tenenbaum (MIT)
…using secondhand data
– effects of priors on cultural transmission
– joint work with Mike Kalish (Louisiana)
Conclusions
16
Predicting the future
How often is Google News updated?
t = time since the last update
t_total = time between updates
What should we guess for t_total given t?
17
Everyday prediction problems
– You read about a movie that has made $60 million to date. How much money will it make in total?
– You see that something has been baking in the oven for 34 minutes. How long until it’s ready?
– You meet someone who is 78 years old. How long will they live?
– Your friend quotes to you from line 17 of his favorite poem. How long is the poem?
– You see taxicab #107 pull up to the curb in front of the train station. How many cabs in this city?
18
Making predictions
You encounter a phenomenon that has existed for t units of time. How long will it continue into the future? (i.e. what is t_total?)
We could replace “time” with any other variable that ranges from 0 to some unknown upper limit.
19
Bayesian inference
p(t_total | t) ∝ p(t | t_total) p(t_total)
posterior probability ∝ likelihood × prior
20
Bayesian inference
p(t_total | t) ∝ p(t | t_total) p(t_total)
Likelihood: p(t | t_total) = 1/t_total, assuming t is a random sample from the interval (0 < t < t_total).
21
Bayesian inference
p(t_total | t) ∝ p(t | t_total) p(t_total) ∝ (1/t_total) × (1/t_total)
Likelihood: random sampling, p(t | t_total) = 1/t_total
Prior: “uninformative”, p(t_total) ∝ 1/t_total (Gott, 1993)
22
Bayesian inference
What is the best guess for t_total? How about the maximal value of p(t_total | t)?
Posterior: p(t_total | t) ∝ (1/t_total) × (1/t_total) (random sampling, “uninformative” prior).
The posterior decreases with t_total, so it is maximized at t_total = t, the smallest value consistent with the data.
23
Bayesian inference
What is the best guess for t_total? Instead, compute t* such that p(t_total > t* | t) = 0.5.
Posterior: p(t_total | t) ∝ (1/t_total) × (1/t_total) (random sampling, “uninformative” prior).
24
Bayesian inference
What is the best guess for t_total? Compute t* such that p(t_total > t* | t) = 0.5.
This yields Gott’s Rule: p(t_total > t* | t) = 0.5 when t* = 2t, i.e. the best guess for t_total is 2t.
Posterior: p(t_total | t) ∝ (1/t_total) × (1/t_total) (random sampling, “uninformative” prior).
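For completeness, a short worked step (added here, not on the original slide) showing where the factor of 2 comes from, using the posterior p(t_total | t) ∝ 1/t_total² for t_total > t:

```latex
% Posterior under random sampling and the 1/t_total prior, for t_total > t:
%   p(t_total | t) \propto 1 / t_total^2
\[
  p(t_{\mathrm{total}} > t^{*} \mid t)
    = \frac{\int_{t^{*}}^{\infty} u^{-2}\,du}{\int_{t}^{\infty} u^{-2}\,du}
    = \frac{1/t^{*}}{1/t}
    = \frac{t}{t^{*}},
  \qquad
  \frac{t}{t^{*}} = \frac{1}{2} \;\Rightarrow\; t^{*} = 2t .
\]
```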
25
Applying Gott’s rule: t ≈ 4000 years, so t* ≈ 8000 years
26
Applying Gott’s rule: t ≈ 130,000 years, so t* ≈ 260,000 years
27
Predicting everyday events
– You read about a movie that has made $78 million to date. How much money will it make in total? “$156 million” seems reasonable.
– You meet someone who is 35 years old. How long will they live? “70 years” seems reasonable.
Not so simple:
– You meet someone who is 78 years old. How long will they live?
– You meet someone who is 6 years old. How long will they live?
28
The effects of priors
Different kinds of priors p(t_total) are appropriate in different domains.
Gott: p(t_total) ∝ t_total^(-1)
29
The effects of priors
Different kinds of priors p(t_total) are appropriate in different domains.
[Figure: two example prior distributions, a heavy-tailed power-law prior (e.g., wealth, contacts) and a roughly bell-shaped prior (e.g., height, lifespan)]
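To make the contrast concrete, a small numerical sketch (mine, not from the talk): it computes the posterior median t* under a power-law prior and under a bell-shaped prior, using the same random-sampling likelihood; the grid and parameter values are invented for illustration.

```python
import numpy as np

# Sketch (not from the talk): posterior-median predictions under two priors,
# both with the random-sampling likelihood p(t | t_total) = 1/t_total for t < t_total.
# Parameter values below are invented for illustration.

def posterior_median(t, prior, grid):
    """Posterior median of t_total given observation t, on a discrete grid."""
    posterior = np.where(grid > t, prior(grid) / grid, 0.0)   # p(t_total) * 1/t_total
    posterior /= posterior.sum()
    cdf = np.cumsum(posterior)
    return grid[np.searchsorted(cdf, 0.5)]

grid = np.linspace(1, 10000, 200000)

power_law = lambda x: x ** -1.0                                # Gott-style prior
bell      = lambda x: np.exp(-0.5 * ((x - 75) / 15) ** 2)      # e.g. lifespans (invented)

for t in [10, 40, 70]:
    print(t,
          posterior_median(t, power_law, grid),   # roughly 2 * t
          posterior_median(t, bell, grid))        # dominated by the prior (near 75), not 2 * t
```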
30
The effects of priors
31
Evaluating human predictions
Different domains with different priors:
– A movie has made $60 million
– Your friend quotes from line 17 of a poem
– You meet a 78 year old man
– A movie has been running for 55 minutes
– A U.S. congressman has served for 11 years
– A cake has been in the oven for 34 minutes
Use 5 values of t for each; people predict t_total.
32
[Results figure: people’s predictions compared with predictions from a parametric prior, an empirical prior, and Gott’s rule]
33
You learn that in ancient Egypt, there was a great flood in the 11th year of a pharaoh’s reign. How long did he reign?
34
How long did the typical pharaoh reign in ancient Egypt?
35
…using a single datapoint
People produce accurate predictions for the duration and extent of everyday events.
Strong prior knowledge:
– form of the prior (power-law or exponential)
– distribution given that form (parameters)
– non-parametric distribution when necessary
Reveals a surprising correspondence between probabilities in the mind and in the world.
36
Outline
…using a single datapoint
– predicting the future
– joint work with Josh Tenenbaum (MIT)
…using secondhand data
– effects of priors on cultural transmission
– joint work with Mike Kalish (Louisiana)
Conclusions
37
Cultural transmission
Most knowledge is based on secondhand data.
Some things can only be learned from others:
– language
– religious concepts
How do priors affect cultural transmission?
38
Iterated learning (Briscoe, 1998; Kirby, 2001)
[Diagram: data → learning → hypothesis → production → data → learning → hypothesis → …]
Each learner sees data, forms a hypothesis, and produces the data given to the next learner.
c.f. the playground game “telephone”
39
Explaining linguistic universals
Human languages are a subset of all logically possible communication schemes:
– universal properties common to all languages (Comrie, 1981; Greenberg, 1963; Hawkins, 1988)
Two questions:
– why do linguistic universals exist?
– why are particular properties universal?
40
Explaining linguistic universals
Traditional answer:
– linguistic universals reflect innate constraints specific to a system for acquiring language
Alternative answer:
– iterated learning imposes an “information bottleneck”
– universal properties survive this bottleneck (Briscoe, 1998; Kirby, 2001)
41
Analyzing iterated learning
What are the consequences of iterated learning?
[Diagram: prior work placed along two axes, simulations vs. analytic results and complex vs. simple algorithms; Kirby (2001), Brighton (2002), and Smith, Kirby, & Brighton (2003) fall on the simulation side, Komarova, Niyogi, & Nowak (2002) on the analytic side with simple algorithms, and a “?” marks analytic results for complex algorithms]
42
Iterated Bayesian learning
Learners are rational Bayesian agents
– covers a wide range of learning algorithms
Defines a Markov chain on (h, d) pairs:
d_0 → h_1 → d_1 → h_2 → …
where each hypothesis h_n is sampled from p(h | d_{n-1}) (inference) and each dataset d_n is sampled from p(d | h_n) (production).
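A structural sketch of this chain as code; sample_posterior and sample_data are hypothetical placeholders for any learner that can sample h from p(h | d) and d from p(d | h) (nothing here is from the talk itself):

```python
# Sketch of iterated Bayesian learning as a Markov chain on (hypothesis, data) pairs.
# `sample_posterior` and `sample_data` are hypothetical placeholders: any model that
# can sample h ~ p(h | d) and d ~ p(d | h) can be plugged in.

def iterated_learning(d0, sample_posterior, sample_data, n_generations=50):
    """Run a chain d0 -> h1 -> d1 -> h2 -> ..., returning the hypotheses."""
    d, hypotheses = d0, []
    for _ in range(n_generations):
        h = sample_posterior(d)   # inference: h ~ p(h | d)
        d = sample_data(h)        # production: d ~ p(d | h)
        hypotheses.append(h)
    return hypotheses
```

Plugging in conjugate Gaussian samplers gives the example on the later slides.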
43
Analytic results
The stationary distribution of the Markov chain is the joint distribution p(h, d) = p(d | h) p(h).
Convergence holds under easily checked conditions.
The rate of convergence is geometric:
– iterated learning is a Gibbs sampler on p(d, h)
– a Gibbs sampler converges geometrically (Liu, Wong, & Kong, 1995)
44
Analytic results
The stationary distribution of the Markov chain is the joint distribution p(h, d).
Corollaries:
– the distribution over hypotheses converges to the prior p(h)
– the distribution over data converges to p(d)
– the proportion of a population of iterated learners with hypothesis h converges to p(h)
45
An example: Gaussians
If we assume…
– data, d, is a single real number, x
– hypotheses, h, are means of a Gaussian, μ
– the prior, p(μ), is Gaussian(μ_0, σ_0²)
…then p(x_{n+1} | x_n) is Gaussian(μ_n, σ_x² + σ_n²).
46
μ_0 = 0, σ_0² = 1, x_0 = 20
Iterated learning results in rapid convergence to the prior.
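A runnable sketch of this Gaussian chain using the parameters above; the data variance σ_x² = 1 is an assumption of mine, since it is not stated on the slide:

```python
import numpy as np

# Iterated learning for the Gaussian example: hypotheses are means mu of a
# Gaussian with known data variance. var_x = 1 is an assumption (the slide
# gives only mu_0 = 0, sigma_0**2 = 1, x_0 = 20).

rng = np.random.default_rng(0)
mu_0, var_0, var_x = 0.0, 1.0, 1.0
x = 20.0                                    # initial data, far from the prior mean

for generation in range(10):
    # Inference: posterior over mu given a single datum x (conjugate update)
    post_var = 1.0 / (1.0 / var_0 + 1.0 / var_x)
    post_mean = post_var * (mu_0 / var_0 + x / var_x)
    mu = rng.normal(post_mean, np.sqrt(post_var))
    # Production: the next learner's datum
    x = rng.normal(mu, np.sqrt(var_x))
    print(generation, round(x, 2))          # x drifts rapidly toward the prior mean 0
```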
47
Implications for linguistic universals
Two questions:
– why do linguistic universals exist?
– why are particular properties universal?
Different answers:
– existence explained through iterated learning
– universal properties depend on the prior
Focuses inquiry on the priors of the learners.
48
A method for discovering priors
Iterated learning converges to the prior…
…so the prior can be evaluated by reproducing iterated learning in the lab.
49
Iterated function learning
Assume:
– data, d, are pairs of real numbers (x, y)
– hypotheses, h, are functions
An example: linear regression
– hypotheses have slope θ and pass through the origin
– p(θ) is Gaussian(θ_0, σ_0²)
[Diagram: a line through the origin, with the value y marked at x = 1]
50
θ_0 = 1, σ_0² = 0.1, y_0 = -1
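Because the function is probed at x = 1, observing y is just observing the slope plus noise, so the chain mirrors the Gaussian example. A sketch with the parameters above; the output noise variance of 0.1 is an assumption, not given on the slide:

```python
import numpy as np

# Iterated learning of a slope through the origin, probed at x = 1 (so the
# observed y is the slope plus noise). var_y = 0.1 is an assumption; the slide
# gives theta_0 = 1, sigma_0**2 = 0.1, y_0 = -1.

rng = np.random.default_rng(0)
theta_0, var_0, var_y = 1.0, 0.1, 0.1
y = -1.0

for generation in range(10):
    post_var = 1.0 / (1.0 / var_0 + 1.0 / var_y)
    post_mean = post_var * (theta_0 / var_0 + y / var_y)
    theta = rng.normal(post_mean, np.sqrt(post_var))   # inference: theta ~ p(theta | y)
    y = rng.normal(theta, np.sqrt(var_y))              # production: y ~ p(y | theta)
    print(generation, round(theta, 2))                 # theta moves toward the prior mean 1
```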
51
Function learning in the lab
[Experiment interface: stimulus, response slider, feedback]
Examine iterated learning with different initial data.
52
[Results figure: functions produced at iterations 1–9, for different initial data]
53
…using secondhand data
Iterated Bayesian learning converges to the prior.
Constrains explanations of linguistic universals.
Provides a method for evaluating priors:
– concepts, causal relationships, languages, …
Open questions in Bayesian language evolution:
– variation in priors
– other selective pressures
54
Outline
…using a single datapoint
– predicting the future
…using secondhand data
– effects of priors on cultural transmission
Conclusions
55
Bayes’ theorem
A unifying principle for explaining inductive inferences
56
Bayes’ theorem
inference = f(data, knowledge)
57
Bayes’ theorem
inference = f(data, knowledge)
A means of evaluating the priors that inform those inferences
58
Explaining inductive leaps
How do people
– infer causal relationships
– identify the work of chance
– predict the future
– assess similarity and make generalizations
– learn functions, languages, and concepts
…from such limited data?
What knowledge guides human inferences?
60
Markov chain Monte Carlo
– Sample from a Markov chain which converges to the target distribution
– Allows sampling from an unnormalized posterior distribution
– Can compute approximate statistics from intractable distributions (MacKay, 2002)
61
Markov chain Monte Carlo
States of the chain are the variables of interest.
The transition matrix is chosen to give the target distribution as its stationary distribution.
[Diagram: a sequence of states x connected by transitions]
Transition matrix: P(x^(t+1) | x^(t)) = T(x^(t), x^(t+1))
62
Gibbs sampling
A particular choice of proposal distribution (for single-component Metropolis-Hastings).
For variables x = (x_1, x_2, …, x_n):
draw x_i^(t+1) from P(x_i | x_{-i}), where x_{-i} = (x_1^(t+1), x_2^(t+1), …, x_{i-1}^(t+1), x_{i+1}^(t), …, x_n^(t))
(a.k.a. the heat bath algorithm in statistical physics)
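As a concrete illustration (mine, not from the slides), here is a Gibbs sampler for a correlated bivariate Gaussian, where each conditional P(x_i | x_{-i}) is a univariate Gaussian:

```python
import numpy as np

# Gibbs sampling sketch (not from the slides): sample from a bivariate Gaussian
# with zero means, unit variances, and correlation rho, by alternately drawing
# each coordinate from its conditional distribution given the other.

rng = np.random.default_rng(0)
rho = 0.8
x1, x2 = 0.0, 0.0
samples = []

for t in range(10000):
    # P(x1 | x2) is Gaussian with mean rho * x2 and variance 1 - rho**2
    x1 = rng.normal(rho * x2, np.sqrt(1 - rho**2))
    # P(x2 | x1) is Gaussian with mean rho * x1 and variance 1 - rho**2
    x2 = rng.normal(rho * x1, np.sqrt(1 - rho**2))
    samples.append((x1, x2))

samples = np.array(samples)
print(np.corrcoef(samples.T)[0, 1])   # approaches rho as the chain mixes
```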
63
Gibbs sampling (MacKay, 2002)