1
Bayesian models of human inductive learning. Josh Tenenbaum, MIT Department of Brain and Cognitive Sciences, Computer Science and AI Lab (CSAIL)
2
Collaborators: Charles Kemp, Pat Shafto, Lauren Schmidt, Chris Baker, Vikash Mansinghka, Tom Griffiths, Takeshi Yamada, Naonori Ueda. Funding: US NSF, AFOSR, ONR, DARPA, NTT Communication Sciences Laboratories, Schlumberger, Eli Lilly & Co., James S. McDonnell Foundation
3
The probabilistic revolution in AI Principled and effective solutions for inductive inference from ambiguous data: –Vision –Robotics –Machine learning –Expert systems / reasoning –Natural language processing Standard view: no necessary connection to how the human brain solves these problems.
4
Bayesian models of cognition:
–Visual perception [Weiss, Simoncelli, Adelson, Richards, Freeman, Feldman, Kersten, Knill, Maloney, Olshausen, Jacobs, Pouget, ...]
–Language acquisition and processing [Brent, de Marcken, Niyogi, Klein, Manning, Jurafsky, Keller, Levy, Hale, Johnson, Griffiths, Perfors, Tenenbaum, …]
–Motor learning and motor control [Ghahramani, Jordan, Wolpert, Kording, Kawato, Doya, Todorov, Shadmehr, …]
–Associative learning [Dayan, Daw, Kakade, Courville, Touretzky, Kruschke, …]
–Memory [Anderson, Schooler, Shiffrin, Steyvers, Griffiths, McClelland, …]
–Attention [Mozer, Huber, Torralba, Oliva, Geisler, Yu, Itti, Baldi, …]
–Categorization and concept learning [Anderson, Nosofsky, Rehder, Navarro, Griffiths, Feldman, Tenenbaum, Rosseel, Goodman, Kemp, Mansinghka, …]
–Reasoning [Chater, Oaksford, Sloman, McKenzie, Heit, Tenenbaum, Kemp, …]
–Causal inference [Waldmann, Sloman, Steyvers, Griffiths, Tenenbaum, Yuille, …]
–Decision making and theory of mind [Lee, Stankiewicz, Rao, Baker, Goodman, Tenenbaum, …]
5
Everyday inductive leaps How can people learn so much about the world from such limited evidence? –Learning concepts from examples “horse”
6
Learning concepts from examples “tufa”
7
Everyday inductive leaps How can people learn so much about the world from such limited evidence? –Kinds of objects and their properties –The meanings of words, phrases, and sentences –Cause-effect relations –The beliefs, goals and plans of other people –Social structures, conventions, and rules
8
The solution Strong prior knowledge (inductive bias).
9
What is the relation between y and x?
15
The solution Strong prior knowledge (inductive bias). –How does background knowledge guide learning from sparsely observed data? –What form does the knowledge take, across different domains and tasks? –How is that knowledge itself learned? Our goal: Computational models that answer these questions, with strong quantitative fits to human behavioral data and a bridge to state-of-the-art AI and machine learning.
16
The approach: from statistics to intelligence
1. How does background knowledge guide learning from sparsely observed data? Bayesian inference.
2. What form does background knowledge take, across different domains and tasks? Probabilities defined over structured representations: graphs, grammars, predicate logic, schemas, theories.
3. How is background knowledge itself acquired, constraining learning while maintaining flexibility? Hierarchical probabilistic models, with inference at multiple levels of abstraction; nonparametric models in which complexity grows automatically as the data require.
17
Basics of Bayesian inference Bayes’ rule: An example –Data: John is coughing –Some hypotheses: 1. John has a cold 2. John has lung cancer 3. John has a stomach flu –Likelihood P(d|h) favors 1 and 2 over 3 –Prior probability P(h) favors 1 and 3 over 2 –Posterior probability P(h|d) favors 1 over 2 and 3
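The arithmetic in this example can be made concrete. A minimal Python sketch, with illustrative priors and likelihoods (the numbers below are assumptions, not values from the talk):

```python
import numpy as np

# Illustrative numbers only: relative prior weights P(h) and likelihoods P(d|h)
hypotheses = ["cold", "lung cancer", "stomach flu"]
prior      = np.array([0.60, 0.01, 0.39])   # colds and stomach flu are common
likelihood = np.array([0.80, 0.70, 0.10])   # P(coughing | h): flu rarely causes coughing

unnormalized = likelihood * prior           # P(d | h) P(h)
posterior = unnormalized / unnormalized.sum()

for h, p in zip(hypotheses, posterior):
    print(f"P({h} | coughing) = {p:.3f}")
# The posterior favors "cold": it scores well on both the prior and the likelihood.
```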
18
Property induction: how likely is the conclusion, given the premises? Effects of “similarity”, “typicality”, “diversity”. Example arguments: Gorillas, seals, and squirrels have T9 hormones; therefore horses have T9 hormones. Gorillas, chimps, monkeys, and baboons have T9 hormones; therefore horses have T9 hormones. Gorillas, seals, and squirrels have T9 hormones; therefore flies have T9 hormones.
19
The computational problem: a species-by-feature matrix (Horse, Cow, Chimp, Gorilla, Mouse, Squirrel, Dolphin, Seal, Rhino, Elephant, …) with a new property column whose values are unknown. 85 features for 50 animals (Osherson et al.): e.g., for Elephant: ‘gray’, ‘hairless’, ‘toughskin’, ‘big’, ‘bulbous’, ‘longleg’, ‘tail’, ‘chewteeth’, ‘tusks’, ‘smelly’, ‘walks’, ‘slow’, ‘strong’, ‘muscle’, ‘fourlegs’, … Cf. “Transfer Learning”, “Semi-Supervised Learning”.
20
[Figure: hypotheses h are candidate extensions of the new property over the species (Horse, Cow, Chimp, Gorilla, Mouse, Squirrel, Dolphin, Seal, Rhino, Elephant, …), e.g., “Horses, Rhinos, and Cows have T9 hormones”; each hypothesis carries a prior P(h). X marks the observed examples and Y the species whose property value is queried.]
21
[Figure: as above, with the prediction P(Y | X) obtained by summing over the hypotheses h consistent with the observed examples X, weighted by their prior P(h).]
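A toy version of that computation is sketched below: hypotheses are candidate extensions of the property, and the prediction averages over hypotheses consistent with the observed examples. The uniform prior here is only a placeholder for the structured prior P(h) developed in the following slides; the species list is illustrative.

```python
import itertools

species = ["horse", "cow", "chimp", "gorilla", "mouse"]

# Hypotheses h: every nonempty subset of species is a candidate extension.
hypotheses = []
for r in range(1, len(species) + 1):
    hypotheses.extend(itertools.combinations(species, r))

prior = {h: 1.0 / len(hypotheses) for h in hypotheses}   # placeholder for a structured P(h)

def predict(observed, query):
    """P(query has the property | the observed species have it)."""
    consistent = [h for h in hypotheses if all(x in h for x in observed)]
    total = sum(prior[h] for h in consistent)
    hit = sum(prior[h] for h in consistent if query in h)
    return hit / total

print(predict(observed=["horse", "cow"], query="gorilla"))   # 0.5 under the flat prior
```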
22
Hierarchical Bayesian Framework. F: form (e.g., a tree with species at leaf nodes), S: structure over mouse, squirrel, chimp, gorilla, D: data (observed features F1–F4, plus “Has T9 hormones” with unknown values). Generative model: P(form), P(structure | form), P(data | structure).
23
The value of structural form knowledge: a more abstract level of inductive bias
24
Hierarchical Bayesian Framework applied to property induction. F: form (a tree with species at leaf nodes), S: structure over mouse, squirrel, chimp, gorilla, D: data (features F1–F4, plus “Has T9 hormones” with unknown values).
25
P(D|S): How the structure constrains the data of experience. Define a stochastic process over structure S that generates candidate property extensions h. Intuition: properties should vary smoothly over structure. Smooth extensions get high P(h); non-smooth extensions get low P(h).
26
P(D|S): How the structure constrains the data of experience. A Gaussian process over the structure S (~ random walk, diffusion) generates real values y, which are thresholded to give the binary property extension h [Zhu, Lafferty & Ghahramani 2003].
28
A graph-based prior (Zhu, Lafferty & Ghahramani, 2003). Let d_ij be the length of the edge between i and j (d_ij = ∞ if i and j are not connected), and let w_ij = 1/d_ij. A Gaussian prior y ~ N(0, K), with covariance K = (Δ + σ⁻² I)⁻¹, where Δ is the graph Laplacian built from the weights w_ij.
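A small numpy sketch of this construction; the chain graph, edge lengths, and σ below are illustrative choices, not the structures fitted in the talk.

```python
import numpy as np

n, sigma = 5, 1.0
d = np.full((n, n), np.inf)          # edge lengths; inf = not connected
for i in range(n - 1):               # a simple chain graph: i -- i+1
    d[i, i + 1] = d[i + 1, i] = 1.0

W = 1.0 / d                                   # edge weights w_ij = 1/d_ij (0 where unconnected)
L = np.diag(W.sum(axis=1)) - W                # graph Laplacian
K = np.linalg.inv(L + np.eye(n) / sigma**2)   # covariance of the Gaussian prior

rng = np.random.default_rng(0)
y = rng.multivariate_normal(np.zeros(n), K)   # smooth real values over the graph
h = (y > 0).astype(int)                       # thresholded binary property extension
print(y.round(2), h)
```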
29
[Figure: a structure S over Species 1–10 generates the Data D, a matrix of observed features. 85 features for 50 animals (Osherson et al.): e.g., for Elephant: ‘gray’, ‘hairless’, ‘toughskin’, ‘big’, ‘bulbous’, ‘longleg’, ‘tail’, ‘chewteeth’, ‘tusks’, ‘smelly’, ‘walks’, ‘slow’, ‘strong’, ‘muscle’, ‘fourlegs’, …]
31
[c.f., Lawrence, 2004; Smola & Kondor 2003]
32
[Figure: a structure S over Species 1–10 generates the Data D, the observed features plus a new property whose values are unknown. 85 features for 50 animals (Osherson et al.): e.g., for Elephant: ‘gray’, ‘hairless’, ‘toughskin’, ‘big’, ‘bulbous’, ‘longleg’, ‘tail’, ‘chewteeth’, ‘tusks’, ‘smelly’, ‘walks’, ‘slow’, ‘strong’, ‘muscle’, ‘fourlegs’, …]
33
[Figure: example arguments (e.g., “Gorillas, mice, and seals have property P; therefore all mammals have property P”; “Cows, elephants, and horses have property P; …”) evaluated under a Tree prior and a 2D prior.]
34
Testing different priors (inductive bias): correct bias, wrong bias, too weak a bias, too strong a bias.
35
A connectionist alternative (Rogers and McClelland, 2004) Features Species Emergent structure: clustering on hidden unit activation vectors
36
Learning about spatial properties. Geographic inference task: “Given that a certain kind of Native American artifact has been found in sites near city X, how likely is the same artifact to be found near city Y?” (Model comparison: Tree vs. 2D.)
37
Beyond similarity-based induction: “Given that A has property P, how likely is it that B does?” Biological properties (e.g., P = “has X cells”) suggest a tree over Kelp, Human, Dolphin, Sand shark, Mako shark, Tuna, Herring; disease properties (e.g., P = “has X disease”) suggest a food web over the same species.
38
Summary so far A framework for modeling human inductive reasoning as rational statistical inference over structured knowledge representations –Qualitatively different priors are appropriate for different domains of property induction. –In each domain, a prior that matches the world’s structure fits people’s judgments well, and better than alternative priors. –A language for representing different theories: graph structure defined over objects + probabilistic model for the distribution of properties over that graph. Remaining question: How can we learn appropriate structures for different domains?
39
Hierarchical Bayesian Framework. F: form (Tree, Space, Chain), S: structure over the species (mouse, squirrel, chimp, gorilla), D: data (feature matrix F1–F4).
40
Discovering structural forms: different candidate ways of organizing the same animals (Ostrich, Robin, Crocodile, Snake, Bat, Orangutan, Turtle).
41
Discovering structural forms: a Linnaean tree over the animals (Ostrich, Robin, Crocodile, Snake, Bat, Orangutan, Turtle) versus the “Great chain of being”, a linear order that also includes Plant, Rock, Angel, and God.
42
People can discover structural forms. Scientific discoveries: tree structure for biological species (1837; Linnaeus' Systema Naturae, 1735: Kingdom Animalia, Phylum Chordata, Class Mammalia, Order Primates, Family Hominidae, Genus Homo, Species Homo sapiens), the “great chain of being” (1579), periodic structure for chemical elements. Children's cognitive development: hierarchical structure of category labels, clique structure of social groups, cyclical structure of seasons or days of the week, transitive structure for value.
43
Typical structure learning algorithms assume a fixed structural form:
–Flat clusters: K-Means, mixture models, competitive learning
–Line: Guttman scaling, ideal point models
–Tree: hierarchical clustering, Bayesian phylogenetics
–Circle: circumplex models
–Euclidean space: MDS, PCA, factor analysis
–Grid: self-organizing maps, generative topographic mapping
44
The ultimate goal: a “Universal Structure Learner” that maps Data to Representation, subsuming K-Means, hierarchical clustering, factor analysis, Guttman scaling, circumplex models, self-organizing maps, ···
45
Hypothesis space of structural forms: order, chain, ring, partition, hierarchy, tree, grid, cylinder.
46
A “universal grammar” for structural forms: each form paired with its generative process.
47
F: form (Tree, Clusters, Linear), S: structure over mouse, squirrel, chimp, gorilla, D: data (features F1–F4). P(S | F) favors simplicity; P(D | S) favors smoothness [Zhu et al., 2003].
50
Structural forms from relational data:
–Primate troop, “x beats y”: dominance hierarchy
–Bush administration, “x told y”: tree
–Prison inmates, “x likes y”: cliques
–Kula islands, “x trades with y”: ring
51
Using structural forms: Inductive bias for learning about new objects
52
Lab studies of learning structural forms. Training: observe messages passed between employees (a, b, c, …) in a company. Transfer test: predict messages sent to and from new employees x and y. (Legend: links observed in training vs. links observed in the transfer test.)
53
Development of structural forms as more data are observed “blessing of abstraction”
54
Summary so far Bayesian inference over hierarchies of structured representations provides a framework to understand core questions of human cognition: –What is the content and form of human knowledge, at multiple levels of abstraction? –How does abstract domain knowledge guide learning of new concepts? –How is abstract domain knowledge learned? What must be built in? (F: form, S: structure, D: data.)
55
Other questions How can we learn domain structures if we do not already know in advance which features are relevant? How can we discover richer models of a domain, with multiple ways of structuring objects? How can we learn models for more complex domains, with not just a single object-property matrix but multiple different types of objects, their properties and relations to each other? How do these ideas & tools apply to other aspects of cognition, beyond categorizing and predicting the properties of objects?
56
Raw data matrix: A single way of structuring a domain rarely describes all its features…
57
Conventional clustering (CRP mixture): A single way of structuring a domain rarely describes all its features…
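The CRP mixture named above builds on the Chinese restaurant process prior over partitions of the objects; a minimal sampling sketch (the concentration parameter α and the seed are arbitrary):

```python
import numpy as np

def crp_sample(n_objects, alpha=1.0, seed=0):
    """Draw one partition of n_objects from the Chinese restaurant process."""
    rng = np.random.default_rng(seed)
    assignments = []
    for _ in range(n_objects):
        counts = np.bincount(assignments) if assignments else np.array([])
        probs = np.append(counts, alpha).astype(float)   # existing clusters + a new one
        probs /= probs.sum()
        assignments.append(int(rng.choice(len(probs), p=probs)))
    return assignments

print(crp_sample(10))   # e.g. [0, 0, 1, 0, 1, 2, ...]: one clustering of 10 objects
```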
58
CrossCat: learning multiple structures (System 1, System 2, System 3) to explain different feature subsets (Shafto et al.; Shafto, Mansinghka, Tenenbaum, Yamada & Ueda, 2007).
59
Discovering structure in relational data. Input: a binary relation TalksTo(person, person) over 15 people, shown as an unsorted matrix. Output: the same matrix with the people reordered into the discovered groups.
60
Infinite Relational Model (IRM) (Kemp, Tenenbaum, Griffiths, Yamada & Ueda, AAAI 06): a class assignment z partitions the objects O, and each pair of classes has its own link probability (e.g., 0.9 within a block, 0.1 between blocks).
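Part of the IRM can be sketched directly: the collapsed Beta-Bernoulli score it assigns to one class assignment z for a binary relation. This omits the CRP prior over z and the search over assignments; the relation, hyperparameters, and candidate assignment below are illustrative.

```python
import numpy as np
from scipy.special import betaln

def block_log_likelihood(R, z, a=1.0, b=1.0):
    """Marginal likelihood of relation R given class assignment z,
    with a Beta(a, b) prior on each block's link probability."""
    z = np.asarray(z)
    logp = 0.0
    for c1 in np.unique(z):
        for c2 in np.unique(z):
            block = R[np.ix_(z == c1, z == c2)]
            ones = block.sum()
            zeros = block.size - ones
            logp += betaln(a + ones, b + zeros) - betaln(a, b)
    return logp

rng = np.random.default_rng(0)
R = rng.integers(0, 2, size=(6, 6))                 # toy TalksTo-style relation
print(block_log_likelihood(R, [0, 0, 0, 1, 1, 1]))  # score of one candidate z
```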
61
Model fitting
62
Infinite Relational Model (IRM) (Kemp, Tenenbaum, Griffiths, Yamada & Ueda, AAAI 06). Biomedical concept-predicate data from UMLS (McCray et al.): –134 concepts: enzyme, hormone, organ, disease, cell function... –49 predicates: affects(hormone, organ), complicates(enzyme, cell function), treats(drug, disease), diagnoses(procedure, disease) …
63
Learning a medical ontology, e.g.: Diseases affect Organisms; Chemicals interact with Chemicals; Chemicals cause Diseases.
64
Infinite Relational Model (IRM) (Kemp, Tenenbaum, Griffiths, Yamada & Ueda, AAAI 06). International relations circa 1965 (Rummel): –14 countries: UK, USA, USSR, China, …. –54 binary relations representing interactions between countries: exports to(USA, UK), protests(USA, USSR), …. –90 (dynamic) country features: purges, protests, unemployment, communists, # languages, assassinations, ….
66
Learning causal models: observed events has(patient, condition) over patients and conditions, modeled with a Bayesian network.
67
Abstract causal theories (→ : possible causal link): Classes = { R, D, S }, Laws = { R → D, D → S }; Classes = { R, D, S }, Laws = { S → D }; Classes = { C }, Laws = { C → C }.
68
Learning causal theories. Abstract theory: Classes = { R, D, S }, Laws = { R → D, D → S }, where R: working in factory, smoking, stress, high fat diet, …; D: flu, bronchitis, lung cancer, heart disease, …; S: headache, fever, coughing, chest pain, …. The theory constrains the Bayesian network over the observed events has(patient, condition).
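A tiny sketch of how such an abstract theory induces a prior over causal-network structures: the probability of a link between two variables depends only on their classes. The class assignments and probabilities below are illustrative, not fitted values.

```python
# Illustrative class assignments for a few variables
classes = {"smoking": "R", "stress": "R",
           "flu": "D", "bronchitis": "D",
           "coughing": "S", "fever": "S"}

# Laws R -> D and D -> S: links between these class pairs are plausible, others rare
law_prob = {("R", "D"): 0.5, ("D", "S"): 0.5}

def edge_prior(cause, effect, default=0.01):
    """Prior probability of a causal edge cause -> effect under the theory."""
    return law_prob.get((classes[cause], classes[effect]), default)

print(edge_prior("smoking", "bronchitis"))   # 0.5: allowed by the law R -> D
print(edge_prior("coughing", "smoking"))     # 0.01: no law S -> R
```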
69
[Figure: an IRM over the variables of a causal graphical model. A class assignment z groups variables 1–12 into classes ‘B’, ‘D’, ‘S’, and a matrix of class-pair link probabilities (e.g., 0.3, 0.25, 0.01, 0.0) defines the prior on causal edges in the Bayesian network over observed events has(patient, condition).]
70
[Figure: the same model with class-pair link probabilities 0.8, 0.75, 0.01, and 0.0.]
71
Learning with a uniform prior on network structures: the true network over attributes 1–12, versus the network learned from a sample of 75 patient observations.
72
Learning a block-structured prior on network structures (Mansinghka et al., UAI 06): the true network over attributes 1–12, the network learned from a sample of 75 patient observations, and the learned class assignment z with class-pair link probabilities (e.g., 0.8, 0.75, 0.01, 0.0).
73
[Figure: “blessing of abstraction” (Mansinghka, Kemp, Tenenbaum, Griffiths, UAI 06). For a true Bayesian network over variables 1–16 with two classes c1 and c2, recovery of the network N and of the abstract theory (classes Z and class-pair edge probabilities, e.g., 0.4, 0.0) is shown after 20, 80, and 1000 samples of data D; the abstract theory is recovered from less data than the specific network.]
74
[Figure: the flexibility of a nonparametric prior. For a true Bayesian network over variables 1–12 with no block structure (a single class c1, edge probability around 0.1), the model recovers both the network N and the one-class theory (classes Z), shown after 40, 100, and 1000 samples of data D.]
75
A hierarchy for language (cf. Chater and Manning, 2006): “Universal Grammar” → Grammar (hierarchical phrase structure grammars, e.g., CFG, HPSG, TAG) → Phrase structure → Utterance → Speech signal, with P(grammar | UG), P(phrase structure | grammar), P(utterance | phrase structure), P(speech | utterance).
76
Vision as probabilistic parsing (Han & Zhu, 2006; cf. Zhu, Yuanhao & Yuille, NIPS 06).
77
Goal-directed action (production and comprehension) (Wolpert, Doya and Kawato, 2003)
78
Goal inference as inverse probabilistic planning (Baker, Tenenbaum & Saxe): constraints and goals feed into rational planning ((PO)MDP) to produce actions; inverting this model yields predictions that match human judgments.
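A toy sketch of goal inference by inverting a noisily rational agent, far simpler than the (PO)MDP planners in the talk: a 1-D corridor, two candidate goals, and a softmax action model. The corridor, goals, and rationality parameter β are assumptions for illustration.

```python
import numpy as np

goals = {"left": 0, "right": 9}    # goal locations in a corridor of positions 0..9
beta = 2.0                          # rationality: higher = closer to optimal planning

def action_likelihood(pos, action, goal):
    """P(action | position, goal) for actions -1 / +1, softmax in negative distance."""
    q = {a: -abs((pos + a) - goals[goal]) for a in (-1, +1)}
    expq = {a: np.exp(beta * v) for a, v in q.items()}
    return expq[action] / sum(expq.values())

def goal_posterior(start, actions):
    post = {g: 0.5 for g in goals}          # uniform prior over goals
    pos = start
    for a in actions:
        for g in goals:
            post[g] *= action_likelihood(pos, a, g)
        pos += a
    total = sum(post.values())
    return {g: p / total for g, p in post.items()}

print(goal_posterior(start=5, actions=[+1, +1, +1]))   # strongly favors "right"
```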
79
Conclusions The big questions: How does the mind build rich models of the world from sparse data? What is the form and function of abstract knowledge, and how can abstractions be learned? –These questions are central in vision, language, categorization, causal reasoning, planning, social understanding… perhaps all of cognition? Some powerful tools for making progress on these questions: –Bayesian inference in probabilistic generative models –Hierarchical models, with inference at all levels of abstraction –Structured representations: graphs, grammars, logic –Flexible representations, growing in response to observed data New ways to think about development of cognitive systems. –Domain-specific representations can be learned by domain-general mechanisms. –Structured symbolic knowledge can support and even be acquired via statistical learning. –Powerful abstractions can be learned “from the top down”, together with or prior to learning more concrete knowledge.
81
Extra slides
82
Summary: modeling human inductive learning as Bayesian inference over hierarchies of flexibly structured representations (abstract knowledge → structure → data). Three case studies: property induction (a tree over mouse, squirrel, chimp, gorilla generating features F1–F4); causal learning (classes of variables B, D, S with causal laws B → D, D → S); and word learning (“dax”, “zav”, “fep”: shape varies across categories but not within categories, while texture, color, and size vary within categories).
83
Conclusions Learning algorithms for discovering domain structure, given feature or relational data. Broader themes –Combining structured representations with statistical inference yields powerful knowledge discovery tools. –Hierarchical Bayesian modeling allows us to learn domain structure at multiple levels of abstraction. –Nonparametric Bayesian formulations allow the complexity of representations to be determined automatically and on the fly, growing as the data require.
84
Beyond similarity-based induction. Reasoning based on dimensional thresholds (Smith et al., 1993): “Poodles can bite through wire; therefore German shepherds can bite through wire” vs. “Dobermans can bite through wire; therefore German shepherds can bite through wire.” Reasoning based on causal relations (Medin et al., 2004; Coley & Shafto, 2003): “Salmon carry E. Spirus bacteria; therefore grizzly bears carry E. Spirus bacteria” versus the reverse argument.
85
Different sources for priors: “Chimps have T9 hormones; therefore gorillas have T9 hormones” (taxonomic similarity); “Poodles can bite through wire; therefore Dobermans can bite through wire” (jaw strength); “Salmon carry E. Spirus bacteria; therefore grizzly bears carry E. Spirus bacteria” (food web relations).
86
Theory (structure + stochastic process) by property type:
–“has T9 hormones”: taxonomic tree + diffusion process
–“can bite through wire”: directed chain + drift process
–“carry E. Spirus bacteria”: directed network + noisy transmission
Each theory generates a hypothesis space of property extensions over the classes A–G.
87
Reasoning with two property types (Shafto, Kemp, Bonawitz, Coley & Tenenbaum): “Given that X has property P, how likely is it that Y does?” Biological properties: a tree over Kelp, Human, Dolphin, Sand shark, Mako shark, Tuna, Herring. Disease properties: a food web over the same species.
88
Summary so far A framework for modeling human inductive reasoning as rational statistical inference over structured knowledge representations –Qualitatively different priors are appropriate for different domains of property induction. –In each domain, a prior that matches the world’s structure fits people’s judgments well, and better than alternative priors. –A language for representing different theories: graph structure defined over objects + probabilistic model for the distribution of properties over that graph. Remaining question: How can we learn appropriate theories for different domains?
89
Learning word meanings: principles (whole-object principle, shape bias, taxonomic principle, contrast principle, basic-level bias) → structure → data.
90
Word learning (“tufa”): Bayesian inference over a tree-structured hypothesis space (Xu & Tenenbaum; Schmidt & Tenenbaum).
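One standard ingredient in these models is a size principle: a hypothesis covering fewer objects gives higher likelihood to examples sampled from it, so a few consistent examples quickly favor the most specific category. A toy sketch below, with a flat prior and made-up category sizes standing in for the tree-structured hypothesis space.

```python
hypotheses = {                       # nested categories with illustrative sizes
    "dalmatians": {"size": 5,   "members": {"dalmatian"}},
    "dogs":       {"size": 50,  "members": {"dalmatian", "poodle"}},
    "animals":    {"size": 500, "members": {"dalmatian", "poodle", "cat"}},
}
prior = {name: 1.0 / len(hypotheses) for name in hypotheses}

def posterior(examples):
    """P(hypothesis | examples), assuming examples are sampled from the category."""
    scores = {}
    for name, h in hypotheses.items():
        consistent = all(x in h["members"] for x in examples)
        scores[name] = prior[name] * (1.0 / h["size"]) ** len(examples) if consistent else 0.0
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

print(posterior(["dalmatian"]))                             # belief spread across levels
print(posterior(["dalmatian", "dalmatian", "dalmatian"]))   # sharply favors "dalmatians"
```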
91
Causal learning with prior knowledge (Griffiths, Sobel, Tenenbaum & Gopnik). The “backwards blocking” paradigm: an initial belief, then an AB trial (blocks A and B together activate the detector), then an A trial (A alone activates it).
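A sketch of Bayesian updating in this paradigm, under two assumptions flagged here: a near-deterministic detector (it activates when at least one blicket is on it) and an illustrative prior probability that each block is a blicket. The belief that B is a blicket rises after the AB trial and falls back after the A-alone trial, which is the backwards-blocking effect.

```python
from itertools import product

prior_blicket = 0.3   # illustrative prior that a given block is a blicket
eps = 0.01            # small chance the detector activates with no blicket on it

hypotheses = list(product([0, 1], repeat=2))          # (A is blicket, B is blicket)
prior = {h: (prior_blicket if h[0] else 1 - prior_blicket) *
            (prior_blicket if h[1] else 1 - prior_blicket) for h in hypotheses}

def likelihood(h, on_detector, activated):
    p_act = 1 - eps if any(h[i] for i in on_detector) else eps
    return p_act if activated else 1 - p_act

def update(belief, on_detector, activated):
    post = {h: belief[h] * likelihood(h, on_detector, activated) for h in belief}
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

p_b = lambda belief: sum(p for h, p in belief.items() if h[1])

belief = update(prior, on_detector=[0, 1], activated=True)   # AB trial
print("P(B is a blicket) after AB trial:", round(p_b(belief), 3))
belief = update(belief, on_detector=[0], activated=True)     # A alone
print("P(B is a blicket) after A trial: ", round(p_b(belief), 3))
```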
92
Learning grounded causal models (Goodman, Mansinghka & Tenenbaum). A child learns that petting the cat leads to purring, while pounding leads to growling. But how to learn these symbolic event concepts over which causal links are defined?
93
The big picture What we need to understand: the mind’s ability to build rich models of the world from sparse data. –Learning about objects, categories, and their properties. –Causal inference –Understanding other people’s actions, plans, thoughts, goals –Language comprehension and production –Scene understanding What do we need to understand these abilities? –Bayesian inference in probabilistic generative models –Hierarchical models, with inference at all levels of abstraction –Structured representations: graphs, grammars, logic –Flexible representations, growing in response to observed data
94
Overhypotheses:
–Syntax: Universal Grammar (Chomsky)
–Phonology: faithfulness constraints, markedness constraints (Prince, Smolensky)
–Word learning: shape bias, principle of contrast, whole-object bias (Markman)
–Folk physics: objects are unified, bounded and persistent bodies (Spelke)
–Predicability: M-constraint (Keil)
–Folk biology: taxonomic principle (Atran)
...
95
Beyond similarity-based induction. Inference based on dimensional thresholds (Smith et al., 1993): “Poodles can bite through wire; therefore German shepherds can bite through wire” vs. “Dobermans can bite through wire; therefore German shepherds can bite through wire.” Inference based on causal relations (Medin et al., 2004; Coley & Shafto, 2003): “Salmon carry E. Spirus bacteria; therefore grizzly bears carry E. Spirus bacteria” versus the reverse argument.
96
Form of background knowledge (structure + stochastic process) by property type:
–“has T9 hormones”: taxonomic tree + diffusion process
–“can bite through wire”: directed chain + drift process
–“carry E. Spirus bacteria”: directed network + noisy transmission
Each generates a hypothesis space of property extensions over the classes A–G.
97
Beyond similarity-based induction (Shafto, Kemp, Bonawitz, Coley & Tenenbaum): “Given that X has property P, how likely is it that Y does?” Biological properties: a tree over Kelp, Human, Dolphin, Sand shark, Mako shark, Tuna, Herring. Disease properties: a food web over the same species.
98
Node-replacement graph grammars Production (Line) Derivation
101
Model fitting. Evaluate each form in parallel; for each form, heuristic search over structures based on greedy growth from a one-node seed.
102
Synthetic 2D data. Data: continuous features drawn from a Gaussian field over these points. Model selection results: log posterior probabilities for each form (Flat, Line, Ring, Tree, Grid).
103
[Table: model selection scores for each candidate form (Flat, Line, Ring, Tree, Grid) against the true form.]
104
Clustering models for relational data Social networks: block models Does person x respect person y? Does prisoner x like prisoner y?
105
Learning a hierarchical ontology
106
Relational data in the hierarchical framework. F: form (people cluster into cliques), S: structure over people 1–8, D: data (the relation “x likes y”).
107
More abstract relational structure: edges → class assignments (z) → class graph → graph type (dominance, cliques, ring, tree hierarchy).
109
Concept learning (“tufa”): Bayesian inference over a tree-structured hypothesis space (Xu & Tenenbaum; Schmidt & Tenenbaum).
110
Experiments on property induction (Osherson, Smith, Wilkie, Lopez, Shafir, 1990). General arguments: 20 subjects rated the strength of 45 arguments of the form “X1 have property P. X2 have property P. X3 have property P. Therefore, all mammals have property P.” (e.g., Cows have T4 hormones.) Specific arguments: 20 subjects rated the strength of 36 arguments of the form “X1 have property P. X2 have property P. Therefore, horses have property P.”
112
Conclusions Computational tools for studying core questions of human learning (and building more human-like ML?) –What is the content and form of human knowledge, at multiple levels of abstraction? –How does abstract domain knowledge guide new learning? –How can abstract domain knowledge itself be learned? –How can inductive biases be so strong yet so flexible? Go beyond the traditional dichotomies of cog sci (and AI). –Instead of “nature vs. nurture”: Powerful abstractions can be learned “from the top down”, together with or prior to learning more concrete knowledge. –Instead of “domain-general” vs. “domain-specific”: Domain-general learning mechanisms acquire domain-specific knowledge representations? –Instead of “statistics” vs. “structure”: How can structured symbolic representations be acquired by statistical learning?
113
The solution Strong prior knowledge (inductive bias). –How does background knowledge guide learning from sparsely observed data? –What form does the knowledge take, across different domains and tasks? –How is that knowledge itself learned? –How can inductive biases be so strong yet so flexible?
114
Notes on slide before. Highlight the issue of inductive bias, balancing flexibility and constraint. Put this text after the third question on the previous slide, setting up the fourth. In principle, inductive biases don't have to be learned: in ML, they are often thought of as hard-wired, engineered complements to the data-driven component; in cog sci, as innate knowledge. But some have to be learned. Some of the important ones aren't present in the youngest children, but appear later, and are clearly influenced by experience. We are also ready to give them up and adopt new biases seemingly very quickly, e.g., prism adaptation, physics adaptation. The third and fourth questions – the problem of learning good inductive biases, and exploiting strong biases while maintaining flexibility – are key ones for ML, and they may be key to distinctively human learning: the cognitive niche. As best as we can tell, other animals can be very smart, and often have very clever inductive biases, but more or less these are hard-wired through evolution; they think about the same things they have always thought about. Exceptions to this trend are the most human-like ways that animals act. Continue to consider the order of the three questions; the issue is flow, both in the intro and in the talk transitions (from the shape bias to the rest). Also, the tradeoff in representational richness vs. learnability (BUT PROBABLY KEEP THIS ONLY IMPLICIT, NOT EXPLICIT).
115
The approach: from statistics to intelligence
1. How does background knowledge guide learning from sparsely observed data? Bayesian inference, with priors based on background knowledge.
2. What form does background knowledge take, across different domains and tasks? Probabilities defined over structured representations: graphs, grammars, predicate logic, relational schemas, theories.
3. How is background knowledge itself learned? Hierarchical Bayesian models, with inference at multiple levels of abstraction.
4. How can inductive biases be so strong yet so flexible? Nonparametric models, growing in complexity as the data require.