Context Model, Bayesian Exemplar Models, Neural Networks.

Medin and Schaffer's 'Context Model'
No category-level information is stored -- only specific items (exemplars). Evidence for category A given probe p:

$E_{A,p} = \frac{\sum_{i \in A} S(p,i)}{\sum_{i \in A} S(p,i) + \sum_{i \in B} S(p,i)}$

where the similarity of probe p to stored item i is a product over the stimulus dimensions:

$S(p,i) = \prod_j \begin{cases} 1 & \text{if } P_j = I_{ij} \\ s_j & \text{otherwise} \end{cases}$

with one mismatch parameter $s_j$ per dimension, $j \in \{c, f, s, p\}$.

Probability of choosing category A given probe p: $P_{A,p} = E_{A,p}$.
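A minimal runnable sketch of the evidence computation just defined (the function names and the representation of items as feature tuples are my own; the math follows the slide):

```python
# Minimal sketch of Medin & Schaffer's context model. Items and probes are
# tuples of discrete feature values; s[j] is the mismatch parameter for
# dimension j, with 0 <= s[j] <= 1.

def similarity(probe, item, s):
    """Multiplicative similarity: 1 for each matching dimension, s[j] otherwise."""
    sim = 1.0
    for j, (p_j, i_j) in enumerate(zip(probe, item)):
        sim *= 1.0 if p_j == i_j else s[j]
    return sim

def evidence_A(probe, exemplars_A, exemplars_B, s):
    """E_{A,p}: summed similarity to A's exemplars, normalized over both categories."""
    sum_A = sum(similarity(probe, i, s) for i in exemplars_A)
    sum_B = sum(similarity(probe, i, s) for i in exemplars_B)
    return sum_A / (sum_A + sum_B)

# The model's choice rule: P(A | probe) = E_{A,p}.
```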

Some things about the model
- Good matches count more than weak matches; an exact match counts a lot.
- But many weak matches can work together to make a (non-presented) prototype come out better than any stored exemplar -- see the sketch below.
- Dimension weights act like an 'effective distance' (or perhaps like a 'log of effective distance'?).
- If a dimension's weight is 0, we get a categorical effect: because similarity is multiplicative, any mismatch on that dimension zeroes out an exemplar's contribution.
- Dimension weights are important -- how are they determined? Best fit to data? Best choice to categorize examples correctly?
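To see the prototype effect concretely, here is a toy demonstration that reuses similarity() and evidence_A() from the sketch above. The stimulus set is hypothetical, chosen only for illustration, not Medin and Schaffer's actual design:

```python
# Hypothetical 4-dimensional binary stimuli: each A item differs from the
# prototype (1,1,1,1) on exactly one dimension; B items are its near-complements.
A = [(1, 1, 1, 0), (1, 1, 0, 1), (1, 0, 1, 1), (0, 1, 1, 1)]
B = [(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0)]
s = [0.3, 0.3, 0.3, 0.3]                    # equal mismatch parameters

prototype = (1, 1, 1, 1)                    # never presented during training
print(evidence_A(prototype, A, B, s))       # ~0.92 for the unseen prototype
for item in A:
    print(item, evidence_A(item, A, B, s))  # ~0.82 for each trained exemplar
```

Many weak matches (0.3 to every A item) beat one perfect match plus several weaker ones, so the unseen prototype scores higher than any item actually studied.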

Independent cue models
[The slide works through evidence computations for items 1, 2, 3 and 7; the worked example is an image and is not preserved in this transcript.]
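In contrast to the multiplicative context model, an independent (additive) cue model sums evidence dimension by dimension, so a single mismatch cannot discount the contribution of the other dimensions. A minimal sketch (the weight vector w is an illustrative assumption, not a value from the slide):

```python
# Sketch of an independent (additive) cue model, for contrast with the
# multiplicative context model above.
def additive_similarity(probe, item, w):
    """Sum the weight w[j] for every dimension j on which probe and item match."""
    return sum(w_j for w_j, p_j, i_j in zip(w, probe, item) if p_j == i_j)

# With w = [0.25] * 4, an item matching on 3 of 4 dimensions scores 0.75
# no matter which dimension mismatches; the context model instead multiplies
# in s_j for the mismatch, letting one bad dimension veto a strong item.
```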

Neural Network Model
Similar to the context model. Within each pool, units inhibit each other; between pools, they are mutually excitatory. The choice rule applies one update when net_i(t) > 0 and another otherwise (the equation itself is an image in the transcript; see the sketch below).
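The update equation is not preserved, but its two-branch "if net_i(t) > 0 ... else ..." shape matches the standard interactive-activation rule (McClelland & Rumelhart, 1981), sketched here with illustrative parameter values of my own choosing:

```python
# Assumed standard interactive-activation update for unit i; the slide's own
# equation is an image and may differ in detail. Activation stays in
# [a_min, a_max]: positive net input drives a unit toward a_max, negative
# net input toward a_min, and decay pulls it back toward its resting level.

def update_activation(a_i, net_i, a_max=1.0, a_min=-0.2,
                      decay=0.1, rest=0.0, dt=0.1):
    """One Euler step of interactive-activation dynamics for a single unit."""
    if net_i > 0:
        da = net_i * (a_max - a_i) - decay * (a_i - rest)
    else:
        da = net_i * (a_i - a_min) - decay * (a_i - rest)
    return a_i + dt * da

# e.g., update_activation(0.0, 0.5) nudges a resting unit toward a_max.
```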

What REMERGE Adds to Exemplar Models
Recurrence allows similarity between stored items to influence performance, independent of direct activation by the probe.

Bayes/Exemplar-like Version of the REMERGE Model
Each exemplar unit i receives a net input inp_i. The slide names three ingredients -- a choice rule, a 'hedged' softmax function, and a logistic function -- whose equations are images in the transcript; plausible standard forms are sketched below.
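A sketch of those three pieces. The defining feature of the hedged softmax is an extra constant in the denominator, so activations need not sum to 1 when every net input is weak; the particular temperature and hedge values here are assumptions, not values from the slide:

```python
import math

def logistic(x):
    """Logistic function: squashes a net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def hedged_softmax(nets, temperature=1.0, hedge=1.0):
    """Softmax with a hedge constant added to the denominator, so weakly
    driven units stay near zero instead of being forced to sum to 1."""
    exps = [math.exp(temperature * n) for n in nets]
    denom = hedge + sum(exps)
    return [e / denom for e in exps]

def softmax_choice(nets, temperature=1.0):
    """Choice rule: ordinary softmax over the response alternatives."""
    exps = [math.exp(temperature * n) for n in nets]
    total = sum(exps)
    return [e / total for e in exps]
```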

Acquired Equivalence (Shohamy & Wagner, 2008)
Study pairs: F1-S1; F3-S3; F2-S1; F2-S2; F4-S3; F4-S4
Test:
- Premise (studied pairing): F1: S1 or S3?
- Inference (elements never studied together): F1: S2 or S4?
[A sequence of build slides draws the associative network over faces F1-F4 and scenes S1-S4, highlighting that F1 is connected to S2 only indirectly, through the shared associate F2.]
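As a toy illustration of how the recurrence described earlier supports this inference, the following sketch (illustrative linear dynamics and parameters of my own choosing, not the published REMERGE equations) clamps the probe F1 and lets activation cycle between feature units and exemplar units for the six study pairs:

```python
# Toy recurrent spreading-activation demo of acquired equivalence.
# Exemplar units encode the six study pairs; feature units encode the faces
# and scenes. Activation cycles probe -> exemplars -> features -> exemplars,
# so F1 can reach S2 only through the shared associate F2 (F1-S1, F2-S1, F2-S2).

pairs = [("F1", "S1"), ("F3", "S3"), ("F2", "S1"),
         ("F2", "S2"), ("F4", "S3"), ("F4", "S4")]
features = {f: 0.0 for f in
            ["F1", "F2", "F3", "F4", "S1", "S2", "S3", "S4"]}
features["F1"] = 1.0          # probe: face F1, clamped on throughout
damping = 0.4                 # gain below 0.5 keeps the linear loop convergent

for _ in range(30):           # let activation settle
    # exemplar units: damped sum of their two features' activations
    ex = [damping * (features[f] + features[s]) for f, s in pairs]
    # feature units: damped return activation from the exemplar layer
    for f in features:
        if f == "F1":
            continue          # the probe stays clamped
        features[f] = damping * sum(a for a, p in zip(ex, pairs) if f in p)

print("S2:", round(features["S2"], 4), "S4:", round(features["S4"], 4))
```

Because the only route from F1 to S2 runs through the stored pairs sharing S1 and F2, S2 ends with positive activation while S4 stays at zero, mirroring participants' preference for S2 on the inference test even though F1 and S2 were never studied together.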