1
9.94 The cognitive science of intuitive theories J. Tenenbaum, T. Lombrozo, L. Schulz, R. Saxe
2
Plan for the class Goal: –Introduce you to an exciting field of research unfolding at MIT and in the broader world.
3
Plan for the class An integrated series of talks and discussions –Today: Josh Tenenbaum (MIT), “Computational models of theory-based inductive inference” –Tuesday: Tania Lombrozo (Harvard/Berkeley) “Explanation in intuitive theories” –Wednesday: Rebecca Saxe (Harvard/MIT), “Understanding other minds” –Thursday: Laura Schulz (MIT), “Theories and evidence” –Friday: Special Mystery Guest, “When theories fail”
4
Plan for the class Requirements for credit (pass/fail, 3 units) –Attend the classes –Participate in discussions –Take-home quiz: Emailed to you this weekend (after the class) Due back to me by email on Wednesday, Feb. 1 If you are not registered on the list below, make sure to register and send me an email message at: jbt@mit.edu
5
Class list Azeez,Zainab O Belova,Nadezhda Brenman,Stephanie V Cherian,Tharian S Clark,Abigail M Clark,Samuel D Curry,Justin M Dai,Jizi Dean,Clare Liu,Yicong McGuire,Salafa'anius Ovadya,Aviv Poon,Samuel H Pradhan,Nikhil T Ren,Danan T Rotter,Juliana C Slagle,Amy M Tang,Di Ferranti,Darlene E Frazier,Jonathan J Gordon,Matthew A Green,Delbert A Huhn,Anika M Hunt,Beatrice P. Kamrowski,Kaitlin M Kanaga,Noelle J Kwon,Jennifer Taub,Daniel M Tung,Roland Voelbel,Kathleen Vosoughi,Soroush Willmer,Anjuli J Ye,Diana F Yuen,Grace J Zhao,Bo
6
Scheduling Today: –Go to 4:30 (with a break in the middle)? Friday: –An hour earlier: 1:00 – 3:00?
7
The big problem of intelligence How does the mind get so much out of so little?
8
The big problem of intelligence
How does the mind get so much out of so little?
[Figure: three-dimensional interpretations vs. the two-dimensional image]
9
The big problem of intelligence How can we generalize new concepts reliably from just one or a few examples? –Learning word meanings “horse”
10
The objects of planet Gazoob “tufa”
11
The big problem of intelligence How can we generalize new concepts reliably from just one or a few examples? –Learning about new properties of categories Cows have T4 hormones. Bees have T4 hormones. Salmon have T4 hormones. Humans have T4 hormones. Cows have T4 hormones. Goats have T4 hormones. Sheep have T4 hormones. Humans have T4 hormones.
12
The big problem of intelligence How do we use concepts in ways that go beyond our experience? “dog” Is it still a dog if you… –Put a penguin costume on it? –Surgically alter it until it looks just like a penguin? –Pre-natally inject a substance that causes it to look just like a penguin? … and it can mate with penguins and produce penguin offspring?
13
The big problem of intelligence How do we use concepts in ways that go beyond our experience? “alive” –Yes: people, dogs, bees, worms, trees, flowers, grass, coral, moss –No: chairs, cars, tricycles, computers, the sun, Roomba, clocks, rocks Only makes sense in the context of a relational network of concepts –Alive, dead, animate, energy, growth, … –Children learn these concepts together, as a system.
14
The big problem of intelligence How do we use concepts in ways that go beyond our experience? Two cars were reported stolen by the Groveton police yesterday. The judge sentenced the killer to die in the electric chair for the second time. No one was injured in the blast, which was attributed to a buildup of gas by one town official. One witness told the commissioners that she had seen sexual intercourse taking place between two parked cars in front of her house.
15
The big problem of intelligence How do we use concepts in ways that go beyond our experience? Given: Mary’s home is in Chicopee and her son is named Fred. Millie’s home is in Chicopee and her son is named Fred. Likely inferences: Mary and Millie live in the same town, but they don’t have the same son.
16
The big problem of intelligence How do we use concepts in ways that go beyond our experience? Consider a man named Boris. –Is the mother of Boris’s father his grandmother? –Is the mother of Boris’s sister his mother? –Is the son of Boris’s sister his son? (Note: Boris and his family were stranded on a desert island when he was a young boy.)
17
What makes us so smart? Memory? Logical inference?
18
What makes us so smart? Memory? No. –The difference between a test that you can pass on rote memory and a test that shows whether you “actually learned something”. Logical inference? No. –The difference between deductive inference and inductive inference.
19
Modes of inference
Deductive inference: All mammals have biotinic acid in their blood. Horses are mammals. Therefore, horses have biotinic acid in their blood.
Inductive inference: Horses have biotinic acid in their blood. Horses are mammals. Therefore, all mammals have biotinic acid in their blood.
20
What makes us so smart? Intuitive theories –Systems of concepts that are in some important respects like scientific theories. –Abstract knowledge that supports prediction, explanation, exploration, and decision-making for an infinite range of situations that we have not previously encountered.
21
Some questions about intuitive theories What is their content? How are they represented in the mind or brain? How are they used to generalize to new situations? How are they acquired?
22
Some questions about intuitive theories What is their content? How are they represented in the mind or brain? How are they used to generalize to new situations? How are they acquired? Can they be described in computational terms? In what essential ways are they similar to or different from scientific theories? How good (accurate, comprehensive, rich) are they, under what circumstances? What can we learn from their failures?
23
What can we learn from perceptual or cognitive illusions? Goal of visual perception is to recover world structure from visual images. Why the problem is hard: many world structures can produce the same visual input. [Figure: several scene hypotheses all producing the same image data]
24
What can we learn from perceptual or cognitive illusions? Goal of visual perception is to recover world structure from visual images. Why the problem is hard: many world structures can produce the same visual input. Illusions reveal the visual system’s implicit theories of the physical world and the process of image formation.
25
The study of “cognitive illusions” Analogy with optical illusions: –Any rational system for inductive inference must make some assumptions about how the world typically works. –Those assumptions will sometimes be wrong, leading to illusions. –Studying illusions can show us how inference works successfully under normal conditions.
26
Computational models of theory-based inductive inference Josh Tenenbaum Department of Brain and Cognitive Sciences Computer Science and Artificial Intelligence Laboratory MIT
27
Plan for today A general framework for solving under-constrained inference problems –Bayesian inference Applications in perception and cognition –lightness perception –predicting the future (with Tom Griffiths) –learning about properties of natural species (with Charles Kemp)
28
Modes of inference
Deductive inference (logic): All mammals have biotinic acid in their blood. Horses are mammals. Therefore, horses have biotinic acid in their blood.
Inductive inference (probability): Horses have biotinic acid in their blood. Horses are mammals. Therefore, all mammals have biotinic acid in their blood.
29
Bayesian inference
Definition of conditional probability: P(h|d) = P(h,d) / P(d)
Bayes' rule: P(h|d) = P(d|h) P(h) / P(d)
"Posterior probability": P(h|d). "Prior probability": P(h). "Likelihood": P(d|h).
30
Bayesian inference Bayes’ rule: An example –Data: John is coughing –Some hypotheses: 1. John has a cold 2. John has emphysema 3. John has a stomach flu –Prior favors 1 and 3 over 2 –Likelihood P(d|h) favors 1 and 2 over 3 –Posterior P(d|h) favors 1 over 2 and 3
31
Bayesian inference Bayes’ rule: What makes a good scientific argument? P(h|d) is high if: –Hypothesis is plausible: P(h) is high –Hypothesis strongly predicts the observed data: P(d|h) is high –Data are surprising: is low
32
Coin flipping HHTHT HHHHH What process produced these sequences?
33
Comparing two simple hypotheses
Contrast simple hypotheses:
–H1: "fair coin", P( H ) = 0.5
–H2: "always heads", P( H ) = 1.0
Bayes' rule: with two hypotheses, use the odds form:
P(H1|D) / P(H2|D) = [P(D|H1) / P(D|H2)] x [P(H1) / P(H2)]
34
Comparing two simple hypotheses
D: HHTHT
H1, H2: "fair coin", "always heads"
P(D|H1) = 1/2^5, P(H1) = ?
P(D|H2) = 0, P(H2) = 1 − ?
35
Comparing two simple hypotheses
D: HHTHT
H1, H2: "fair coin", "always heads"
P(D|H1) = 1/2^5, P(H1) = 999/1000
P(D|H2) = 0, P(H2) = 1/1000
36
Comparing two simple hypotheses
D: HHHHH
H1, H2: "fair coin", "always heads"
P(D|H1) = 1/2^5, P(H1) = 999/1000
P(D|H2) = 1, P(H2) = 1/1000
37
Comparing two simple hypotheses
D: HHHHHHHHHH
H1, H2: "fair coin", "always heads"
P(D|H1) = 1/2^10, P(H1) = 999/1000
P(D|H2) = 1, P(H2) = 1/1000
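A small sketch of the comparison above, taking the prior P(H1) = 999/1000 from the slides; it computes the posterior odds in favor of the fair coin for each sequence:

```python
from math import inf

def posterior_odds(n_heads, n_tails, prior_fair=999/1000):
    """Posterior odds P(H1 = 'fair coin' | D) / P(H2 = 'always heads' | D)."""
    like_fair  = 0.5 ** (n_heads + n_tails)    # P(D | fair coin)
    like_trick = 1.0 if n_tails == 0 else 0.0  # P(D | always heads)
    if like_trick == 0:
        return inf                             # a single tail rules out "always heads"
    prior_odds = prior_fair / (1 - prior_fair)  # 999 : 1
    return (like_fair / like_trick) * prior_odds

print(posterior_odds(3, 2))    # HHTHT: infinite odds in favor of the fair coin
print(posterior_odds(5, 0))    # HHHHH: 999/32, about 31 : 1, still favors the fair coin
print(posterior_odds(10, 0))   # HHHHHHHHHH: 999/1024, roughly even odds
```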
38
The role of intuitive theories The fact that HHTHT looks representative of a fair coin and HHHHH does not reflects our implicit theories of how the world works. –Easy to imagine how a trick all-heads coin could work: high prior probability. –Hard to imagine how a trick “ HHTHT ” coin could work: low prior probability.
39
Plan for today A general framework for solving under-constrained inference problems –Bayesian inference Applications in perception and cognition –lightness perception –predicting the future (with Tom Griffiths) –learning about properties of natural species (with Charles Kemp)
40
Gelb / Gilchrist demo
41
Explaining the illusion The problem of lightness constancy –Separating the intrinsic reflectance (“color”) of a surface from the intensity of the illumination. Anchoring heuristic: –Assume that the brightest patch in each scene is white. Questions: –Is this really right? –Why (and when) is it a good solution to the problem of lightness constancy?
42
Why is lightness constancy hard? The physics of light reflection: L = I x R L: luminance (light emitted from surface) I: intensity of illumination in the world R: reflectance of surface in the world The problem: Given L, solve for I and R.
43
Why is lightness constancy hard?
The physics of light reflection:
L1 = I x R1
L2 = I x R2
...
Ln = I x Rn
The problem: Given L1, …, Ln, solve for I and R1, …, Rn.
44
Why is lightness constancy hard?
Image data: L = {2, 4, 5, 9}, with L = I x R.
Consistent scene hypotheses:
–I = 10, R = {0.2, 0.4, 0.5, 0.9}
–I = 100, R = {0.02, 0.04, 0.05, 0.09}
–I = 15, R = {0.13, 0.26, 0.33, 0.60}
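A tiny sketch of the ambiguity: any sufficiently bright candidate illuminant yields a set of reflectances that reproduces the data exactly.

```python
L = [2, 4, 5, 9]                  # observed luminances
for I in (10, 15, 100):           # candidate illuminant intensities
    R = [l / I for l in L]        # implied reflectances, since L = I x R
    print(I, [round(r, 2) for r in R])   # each candidate fits the image data perfectly
```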
45
A simplified theory of the visual world
Really bright illuminants are rare.
[Figure: prior P(I) over illumination intensity I, with little probability on very bright illuminants]
46
A simplified theory of the visual world
Really bright illuminants are rare.
Any surface color is equally likely.
[Figures: prior P(I) over illumination intensity; P(Ri) uniform over reflectances Ri from 0 (black) to 1 (white)]
47
A simplified theory of the visual world
Really bright illuminants are rare.
Observed luminances, Li = I x Ri, are a random sample from 0 to I.
[Figures: prior P(I) over illumination intensity; likelihood P(Li | I) uniform on [0, I]]
48
A simplified theory of the visual world
Really bright illuminants are rare.
Observed luminances, Li = I x Ri, are a random sample from 0 to I.
[Figures: prior P(I) over illumination intensity; likelihood P(Li | I) uniform on [0, I], shown for two illuminant values I and I']
49
Image data d: L = {9}
Scene hypotheses h:
–h1: I = 10, P(h1): high
–h2: I = 15, P(h2): med
–h3: I = 100, P(h3): low
50
Image data d: L = {2, 4, 5, 9}
Scene hypotheses h:
–h1: I = 10, P(h1): high
–h2: I = 15, P(h2): med
–h3: I = 100, P(h3): low
Prior probability alone can't explain how inference changes with more data.
51
Image data d: L = {9}
Scene hypotheses h:
–h1: I = 10, P(h1): high
–h2: I = 15, P(h2): med
–h3: I = 100, P(h3): low
52
Image data d: L = {2, 4, 5, 9}
Scene hypotheses h:
–h1: I = 10, P(h1): high
–h2: I = 15, P(h2): med
–h3: I = 100, P(h3): low
53
Graphing the likelihood
[Figure: p(L = l | I) for I = 10 and I = 15; for a single observation l1 = 9, comparing p({l1} | I=10) with p({l1} | I=15)]
54
[Figure: p(L = l | I) for I = 10 and I = 15, with the four observations L = {2, 4, 5, 9}]
p({l1, l2, l3, l4} | I=10) >> p({l1, l2, l3, l4} | I=15)
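A minimal sketch of this inference, assuming luminances are independent uniform samples on [0, I]; the prior weights over the three illuminant hypotheses are illustrative stand-ins for "bright illuminants are rare":

```python
def likelihood(luminances, I):
    """P(data | illuminant I), with each L_i an independent uniform sample on [0, I]."""
    if any(l > I for l in luminances):
        return 0.0
    return (1.0 / I) ** len(luminances)

prior = {10: 0.6, 15: 0.3, 100: 0.1}   # hypothetical prior: dimmer illuminants more probable

def posterior(luminances):
    scores = {I: prior[I] * likelihood(luminances, I) for I in prior}
    z = sum(scores.values())
    return {I: s / z for I, s in scores.items()}

print(posterior([9]))            # one sample: the prior still carries most of the weight
print(posterior([2, 4, 5, 9]))   # four samples: the likelihood strongly favors I = 10,
                                 # i.e. the brightest patch (9/10) is close to white
```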
55
Explaining lightness constancy Anchoring heuristic: Assume that the brightest patch in each scene is white. –Is this really right? –Why (and when) is it a good solution to the problem? Bayesian analysis –Explains the computational basis for inference. –Explains why confidence in "brightest = white" increases as more samples are observed.
56
Applications to cognition Predicting the future (with Tom Griffiths) Learning about properties of natural species (with Charles Kemp)
57
Everyday prediction problems You read about a movie that has made $60 million to date. How much money will it make in total? You see that something has been baking in the oven for 34 minutes. How long until it’s ready? You meet someone who is 78 years old. How long will they live? Your friend quotes to you from line 17 of his favorite poem. How long is the poem? You see taxicab #107 pull up to the curb in front of the train station. How many cabs in this city?
58
Making predictions You encounter a phenomenon that has existed for t_past units of time. How long will it continue into the future? (i.e. what's t_total?) We could replace "time" with any other variable that ranges from 0 to some unknown upper limit (cf. lightness).
59
Bayesian inference
P(t_total | t_past) ∝ P(t_past | t_total) P(t_total)
posterior probability ∝ likelihood × prior
60
Bayesian inference
P(t_total | t_past) ∝ P(t_past | t_total) P(t_total)   (posterior ∝ likelihood × prior)
Assume the observation is a random sample (0 < t_past < t_total), so the likelihood is P(t_past | t_total) = 1/t_total:
P(t_total | t_past) ∝ (1/t_total) P(t_total)
61
Bayesian inference
P(t_total | t_past) ∝ P(t_past | t_total) P(t_total)
Assume a random sample (0 < t_past < t_total) and an "uninformative" prior P(t_total) ∝ 1/t_total:
P(t_total | t_past) ∝ 1/t_total × 1/t_total
62
Bayesian inference
What is the best guess for t_total? How about the maximal value of P(t_total | t_past)?
With random sampling and the "uninformative" prior, P(t_total | t_past) ∝ 1/t_total × 1/t_total, which is maximized at t_total = t_past.
[Figure: posterior P(t_total | t_past) as a function of t_total]
63
Bayesian inference
What is the best guess for t_total? Instead, compute t such that P(t_total > t | t_past) = 0.5 (the posterior median).
Posterior: P(t_total | t_past) ∝ 1/t_total × 1/t_total (random sampling, "uninformative" prior).
64
Bayesian inference Yields Gott’s Rule: P(t total > t|t past ) = 0.5 when t = 2t past i.e., best guess for t total = 2t past. P(t total |t past ) 1/t total 1/t total posterior probability Random sampling “Uninformative” prior What is the best guess for t total ? Instead, compute t such that P(t total > t|t past ) = 0.5.
65
Evaluating Gott’s Rule You read about a movie that has made $78 million to date. How much money will it make in total? –“$156 million” seems reasonable. You meet someone who is 35 years old. How long will they live? –“70 years” seems reasonable. Not so simple: –You meet someone who is 78 years old. How long will they live? –You meet someone who is 6 years old. How long will they live?
66
The effects of priors Different kinds of priors P(t_total) are appropriate in different domains. Gott: P(t_total) ∝ t_total^(-1)
67
The effects of priors Different kinds of priors P(t_total) are appropriate in different domains. [Figure: heavy-tailed, power-law-like priors (e.g., wealth, contacts) vs. tightly peaked priors (e.g., height, lifespan)]
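A sketch of how the prior changes the prediction, contrasting a power-law prior with a tightly peaked one; the particular parameter values (and the lifespan-like mean of 75) are made up for illustration:

```python
import numpy as np

def predicted_total(t_past, prior, t_grid):
    """Posterior median of t_total, with likelihood P(t_past | t_total) = 1/t_total for t_total >= t_past."""
    post = np.where(t_grid >= t_past, prior(t_grid) / t_grid, 0.0)
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return t_grid[np.searchsorted(cdf, 0.5)]

t = np.linspace(1, 1000, 200_000)
power_law = lambda x: x ** -1.0                             # e.g. movie grosses, wealth
peaked    = lambda x: np.exp(-0.5 * ((x - 75) / 15) ** 2)   # e.g. lifespans (hypothetical mean/sd)

for t_past in (6, 35, 78):
    print(t_past,
          round(float(predicted_total(t_past, power_law, t))),  # roughly doubles t_past
          round(float(predicted_total(t_past, peaked, t))))     # stays near the prior's typical value (~75-85)
```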
68
The effects of priors
69
Evaluating human predictions Different domains with different priors: –A movie has made $60 million –Your friend quotes from line 17 of a poem –You meet a 78 year old man –A movie has been running for 55 minutes –A U.S. congressman has served for 11 years –A cake has been in the oven for 34 minutes Use 5 values of t_past for each. People predict t_total.
71
You learn that in ancient Egypt, there was a great flood in the 11th year of a pharaoh’s reign. How long did he reign?
72
You learn that in ancient Egypt, there was a great flood in the 11th year of a pharaoh’s reign. How long did he reign? How long did the typical pharaoh reign in ancient Egypt?
73
Assumptions guiding inference Random sampling Strong prior knowledge –Form of the prior (power-law or exponential) –Specific distribution given that form (parameters) –Non-parametric distribution when necessary. With these assumptions, strong predictions can be made from a single observation.
74
Applications to cognition Predicting the future (with Tom Griffiths) Learning about properties of natural species (with Charles Kemp)
75
Which argument is stronger? Cows have biotinic acid in their blood Horses have biotinic acid in their blood Rhinos have biotinic acid in their blood All mammals have biotinic acid in their blood Cows have biotinic acid in their blood Dolphins have biotinic acid in their blood Squirrels have biotinic acid in their blood All mammals have biotinic acid in their blood “Diversity phenomenon”
76
Osherson, Smith, Wilkie, Lopez, Shafir (1990): 20 subjects rated the strength of 45 arguments: X1 have property P. X2 have property P. X3 have property P. Therefore, all mammals have property P. 40 different subjects rated the similarity of all pairs of 10 mammals.
77
Traditional psychological models Osherson et al. consider two similarity-based models: Sum-Similarity: Max-Similarity:
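The model equations did not survive extraction; in the standard formulation (a reconstruction, so treat the exact form as hedged), the strength of an argument is obtained by scoring each conclusion category by its summed, or its maximum, similarity to the premise categories. A minimal sketch with a hypothetical similarity function:

```python
def sum_similarity(premises, conclusion_categories, sim):
    """Sum-Similarity: each conclusion category's total similarity to all premise categories."""
    return sum(sum(sim(c, x) for x in premises) for c in conclusion_categories)

def max_similarity(premises, conclusion_categories, sim):
    """Max-Similarity: each conclusion category's similarity to its closest premise."""
    return sum(max(sim(c, x) for x in premises) for c in conclusion_categories)

# Hypothetical similarity values, just to make the functions runnable.
sim_table = {("rhino", "cow"): 0.6, ("rhino", "horse"): 0.6,
             ("rhino", "dolphin"): 0.2, ("rhino", "squirrel"): 0.2}
sim = lambda a, b: sim_table.get((a, b), 0.1)

print(max_similarity(["cow", "horse"], ["rhino"], sim))   # 0.6
print(sum_similarity(["cow", "horse"], ["rhino"], sim))   # 1.2
```

For the "all mammals" arguments, the conclusion categories would range over all ten mammal categories.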
78
Data vs. models
[Figure: human argument-strength ratings plotted against model predictions; each point represents one argument: X1 have property P. X2 have property P. X3 have property P. Therefore, all mammals have property P.]
79
Three data sets
[Figure: Max-sim and Sum-sim fits for three data sets, varying the conclusion kind ("all mammals" or "horses") and the number of premise examples (3, 2, or 1, 2, or 3)]
80
Open questions Explaining similarity: –Why does Max-sim fit so well? When worse? –Why does Sum-sim fit so poorly? When better? Explaining Max-sim: –Is there some rational computation that Max-sim implements or approximates? –What theory about this task and domain is implicit in Max-sim? (cf. the analysis of lightness constancy)
81
A simplified theory of biology
Species generated by an evolutionary branching process. –A tree-structured taxonomy of species. Taxonomy also central in folkbiology (Atran).
82
Theory-based Bayesian model Begin by reconstructing the intuitive taxonomy from similarity judgments: [Figure: hierarchical clustering over chimp, gorilla, horse, cow, elephant, rhino, mouse, squirrel, dolphin, seal]
83
Theory-based Bayesian model
Hypothesis space H: each taxonomic cluster is a possible hypothesis for the extension of the novel property.
[Figure: the taxonomy over the ten mammals, with clusters h1, h3, h6, h17, …, and h0: "all mammals"]
84
[Figure: taxonomy over chimp, gorilla, horse, cow, elephant, rhino, mouse, squirrel, dolphin, seal]
h0: "all mammals"; p(h): uniform over taxonomic clusters
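A minimal sketch of Bayesian hypothesis averaging over taxonomic clusters with a uniform prior. The cluster list below is a hypothetical stand-in for the reconstructed taxonomy, and the 0/1 consistency likelihood is an assumption of this sketch rather than the exact model from the talk:

```python
ALL = {"chimp", "gorilla", "horse", "cow", "elephant", "rhino",
       "mouse", "squirrel", "dolphin", "seal"}
clusters = [
    {"chimp", "gorilla"},
    {"horse", "cow"},
    {"horse", "cow", "elephant", "rhino"},   # a "large herbivores" cluster
    {"mouse", "squirrel"},
    {"dolphin", "seal"},
    ALL,                                     # h0: "all mammals"
]
prior = {frozenset(h): 1.0 / len(clusters) for h in clusters}   # p(h): uniform

def generalization(premises, conclusion):
    """P(conclusion species have the property | premise species have it), averaging over hypotheses."""
    consistent = [h for h in prior if premises <= h]    # keep hypotheses containing all premise species
    num = sum(prior[h] for h in consistent if conclusion <= h)
    den = sum(prior[h] for h in consistent)
    return num / den

print(generalization({"cow", "dolphin", "squirrel"}, ALL))   # only "all mammals" covers the premises -> 1.0
print(generalization({"cow", "horse"}, ALL))                 # smaller clusters also cover them -> lower
```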
85
How taxonomy constrains induction Atran (1998): “Fundamental principle of systematic induction” (Warburton 1967, Bock 1973) –Given a property found among members of any two species, the best initial hypothesis is that the property is also present among all species that are included in the smallest higher-order taxon containing the original pair of species.
86
Cows have property P. Dolphins have property P. Squirrels have property P. → All mammals have property P. Strong: 0.76 [max = 0.82]
[Figure: the premise species marked on the taxonomy; the smallest taxon containing them is "all mammals"]
87
Cows have property P. Dolphins have property P. Squirrels have property P. → All mammals have property P. Strong: 0.76 [max = 0.82]
Cows have property P. Horses have property P. Rhinos have property P. → All mammals have property P. Weak: 0.17 [min = 0.14]
[Figure: cows, horses, and rhinos marked on the taxonomy; their smallest containing taxon is "large herbivores", not "all mammals"]
88
Seals have property P. Dolphins have property P. Squirrels have property P. → All mammals have property P. Weak: 0.30 [min = 0.14]
Cows have property P. Dolphins have property P. Squirrels have property P. → All mammals have property P. Strong: 0.76 [max = 0.82]
[Figure: both premise sets span the taxonomy up to "all mammals"]
89
[Figure: Max-sim, Sum-sim, and Bayes (taxonomic) model fits for the three data sets: conclusion kind "all mammals" or "horses", with 3, 2, or 1, 2, or 3 examples]
90
[Figure: Max-sim, Sum-sim, and Bayes (taxonomic) fits for the "all mammals" conclusions with 3 examples, highlighting the arguments: Seals / Dolphins / Squirrels have property P → All mammals have property P, and Cows / Dolphins / Squirrels have property P → All mammals have property P]
91
A simplified theory of biology Species generated by an evolutionary branching process. –A tree-structured taxonomy of species. Features generated by stochastic mutation process and passed on to descendants. –Novel features can appear anywhere in tree, but some distributions are more likely than others.
92
Theory-based Bayesian model
Hypothesis space H: each taxonomic cluster is a possible hypothesis for the extension of a novel feature.
[Figure: the taxonomy over the ten mammals, with clusters h1, h3, h6, h17, …, and h0: "all mammals"]
93
Theory-based Bayesian model
Generate hypotheses for the novel feature F via a (Poisson arrival) mutation process over branches b.
[Figures, repeated over the next several slides: sampled mutation points on the taxonomy over the ten mammals, each sample inducing a candidate extension for F]
99
Theory-based Bayesian model
Generate hypotheses for the novel feature F via a (Poisson arrival) mutation process over branches b.
Induced prior p(h): every subset of objects is a possible hypothesis; p(h) depends on the number and length of branches needed to span h.
100
Samples from the prior
Labelings that cut the data along longer branches are more probable.
[Figure: two labelings of the ten-mammal taxonomy; the one cut along a longer branch has higher prior probability]
101
Samples from the prior
Labelings that cut the data along fewer branches are more probable: "monophyletic" > "polyphyletic".
[Figure: a labeling spanned by a single branch vs. one requiring several separate branches]
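A Monte Carlo sketch of the kind of prior this describes: a feature arises and switches along branches with probability increasing in branch length, so extensions spanned by fewer, longer branches come out more probable. The tiny four-leaf tree, its branch lengths, and the rate constant are all hypothetical; the real model runs over the reconstructed ten-species taxonomy.

```python
import math
import random
from collections import Counter

# Hypothetical tree: (name, branch_length, children)
tree = ("root", 0.0, [
    ("AB", 2.0, [("A", 0.5, []), ("B", 0.5, [])]),
    ("CD", 0.5, [("C", 0.5, []), ("D", 0.5, [])]),
])
RATE = 0.3   # mutation rate per unit branch length (made-up value)

def sample_extension(node, state=0):
    """Propagate a binary feature down the tree; it flips on a branch with prob. 1 - exp(-RATE * length)."""
    name, length, children = node
    if random.random() < 1 - math.exp(-RATE * length):
        state = 1 - state
    if not children:
        return {name} if state == 1 else set()
    leaves = set()
    for child in children:
        leaves |= sample_extension(child, state)
    return leaves

counts = Counter(frozenset(sample_extension(tree)) for _ in range(50_000))
for extension, n in counts.most_common(6):
    print(sorted(extension), n / 50_000)
# Extensions cut off by one long branch (e.g. {A, B}) show up far more often
# than scattered ones (e.g. {A, C}): fewer and longer branches -> higher prior.
```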
102
[Figure: taxonomy over the ten mammals]
h0: "all mammals"; p(h): "evolutionary" process (mutation + inheritance)
103
[Figure: Max-sim, Sum-sim, and Bayes (taxonomic) model fits for the three data sets: conclusion kind "all mammals" or "horses", with 3, 2, or 1, 2, or 3 examples]
104
[Figure: Max-sim, Sum-sim, and Bayes (taxonomy + mutation) model fits for the three data sets: conclusion kind "all mammals" or "horses", with 3, 2, or 1, 2, or 3 examples]
105
Explaining similarity
Why does Max-sim fit so well? –It is an efficient and accurate approximation to the Bayesian (evolutionary) model.
[Figure: histogram of correlations (r) with Bayes on three-premise general arguments, over 100 simulated tree structures; mean r = 0.94]
There's also a theorem.
106
Biology: Summary Theory-based statistical inference explains inductive reasoning in folk biology. Mathematical modeling reveals people’s implicit theories about the world. –Category structure: taxonomic tree. –Feature distribution: stochastic mutation process + inheritance. Clarifies traditional psychological models. –Why Max-sim over Sum-sim?
107
Beyond taxonomic similarity
Generalization based on known dimensions (Smith et al., 1993; Blok et al., 2002):
–Poodles can bite through wire. → German shepherds can bite through wire.
–Dobermans can bite through wire. → German shepherds can bite through wire.
108
Beyond taxonomic similarity
Generalization based on known dimensions (Smith et al., 1993; Blok et al., 2002):
–Poodles can bite through wire. → German shepherds can bite through wire.
–Dobermans can bite through wire. → German shepherds can bite through wire.
Generalization based on causal relations (Medin et al., 2004; Shafto & Coley, 2003):
–Salmon carry E. Spirus bacteria. → Grizzly bears carry E. Spirus bacteria.
–Grizzly bears carry E. Spirus bacteria. → Salmon carry E. Spirus bacteria.
109
Predicate type “has T4 hormones” “can bite through wire” “carries E. Spirus bacteria” Generative theory taxonomic tree directed chain directed network + mutation + unknown threshold + noisy transmission Class C Class A Class D Class E Class G Class F Class B Class C Class A Class D Class E Class G Class F Class B Class A Class B Class C Class D Class E Class F Class G... Class C Class G Class F Class E Class D Class B Class A Hypotheses
110
Island ecosystem (Shafto, Kemp, Baraff, Coley, Tenenbaum)
[Figure: the same seven species (kelp, human, dolphin, sand shark, mako shark, tuna, herring) organized in two ways: a taxonomy and a food web]
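A heavily simplified Monte Carlo sketch of a "directed network + noisy transmission" theory over a food web. The predator-prey links and the transmission probability below are hypothetical, not the actual stimuli; the point is only that such a prior produces the causal asymmetries that taxonomic similarity misses:

```python
import random

eats = {  # predator: list of prey (illustrative food web)
    "kelp": [],
    "herring": ["kelp"],
    "tuna": ["herring"],
    "sand shark": ["herring"],
    "mako shark": ["tuna"],
    "dolphin": ["tuna", "herring"],
    "human": ["tuna", "herring"],
}
TRANSMIT = 0.7   # probability a disease crosses a prey -> predator link (made-up value)

def sample_infected():
    """Seed a disease in one random species; each prey -> predator link transmits independently."""
    active = {(prey, pred) for pred, prey_list in eats.items()
              for prey in prey_list if random.random() < TRANSMIT}
    infected = {random.choice(list(eats))}
    changed = True
    while changed:
        changed = False
        for prey, pred in active:
            if prey in infected and pred not in infected:
                infected.add(pred)
                changed = True
    return infected

def generalization(conclusion, premise, n=20_000):
    """Estimate P(conclusion species has the disease | premise species has it)."""
    samples = [sample_infected() for _ in range(n)]
    with_premise = [s for s in samples if premise in s]
    return sum(conclusion in s for s in with_premise) / len(with_premise)

print(generalization("mako shark", premise="herring"))   # relatively high: disease flows up the web
print(generalization("herring", premise="mako shark"))   # lower: it does not flow back down
```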
111
Datasets vs. models (correlation r):
                                      Bayes (food web)   Bayes (tree)   Max-Sim
Mammal ecosystem – disease:                0.75              -0.15        0.07
Mammal ecosystem – genetic property:       0.25               0.92        0.87
Island ecosystem – disease:                0.79               0.01        0.17
Island ecosystem – genetic property:       0.31               0.89        0.86
112
Assumptions guiding inferences Qualitatively different priors are appropriate for different domains of inductive generalization. In each domain, a prior that matches the world's structure fits people's inductive judgments better than alternative priors. A common framework for representing people's domain models: a graph structure defined over entities or classes, and a probability distribution for predicates over that graph.
113
Conclusion The hard problem of intelligence: how do we "go beyond the information given"? The solution: –Bayesian statistical inference –Implicit theories about the structure of the world, generating P(h) and P(d | h). Example: Cows have property P. Dolphins have property P. Squirrels have property P. → All mammals have property P.
114
Discussion How is this intuitive theory of biology like or not like a scientific theory? In what sense does the visual system have a theory of the world? How is it like or not like a cognitive theory of biology, or a scientific theory?