What can computational models tell us about face processing? Garrison W. Cottrell Gary's Unbelievable Research Unit (GURU) Computer Science and Engineering Department Institute for Neural Computation UCSD Collaborators, Past, Present and Future: Ralph Adolphs, Luke Barrington, Serge Belongie, Kristin Branson, Tom Busey, Andy Calder, Matthew Dailey, Piotr Dollar, Michael Fleming, AfmZakaria Haque, Janet Hsiao, Carrie Joyce, Brenden Lake, Tim Marks, Janet Metcalfe, Jonathan Nelson, Nam Nguyen, Curt Padgett, Honghao Shan, Maki Sugimoto, Matt Tong, Brian Tran, Keiji Yamada, Lingyun Zhang

Why model? Models rush in where theories fear to tread. Models can be manipulated in ways people cannot. Models can be analyzed in ways people cannot.

Models rush in where theories fear to tread. Theories are high-level descriptions of the processes underlying behavior. They are often not explicit about the processes involved, and they are difficult to reason about when no mechanisms are explicit -- they may be too high level to make explicit predictions. Theory formation itself is difficult. Using machine learning techniques, one can often build a working model of a task for which we have no theories or algorithms (e.g., expression recognition). A working model provides an "intuition pump" for how things might work, especially if it is "neurally plausible" (e.g., the development of face processing; Dailey and Cottrell). A working model may also make unexpected predictions (e.g., the Interactive Activation Model and SLNT).

Models can be manipulated in ways people cannot. We can see the effects of variations in cortical architecture (e.g., split (hemispheric) vs. non-split models, as in Shillcock and Monaghan's word perception model). We can see the effects of variations in processing resources (e.g., variations in the number of hidden units in Plaut et al.'s models). We can see the effects of variations in environment (e.g., what if our parents were cans, cups, or books instead of humans? That is, is there something special about face expertise versus visual expertise in general? (Sugimoto and Cottrell; Joyce and Cottrell; Tong and Cottrell)). We can see variations in behavior due to different kinds of brain damage within a single "brain" (e.g., Juola and Plunkett; Hinton and Shallice).

Models can be analyzed in ways people cannot. In the following, I specifically refer to neural network models. We can do single-unit recordings. We can selectively ablate and restore parts of the network, even down to the single-unit level, to assess their contribution to processing. We can measure the individual connections -- e.g., the receptive and projective fields of a unit. We can measure responses at different layers of processing (e.g., which level accounts for a particular judgment: perceptual, object, or categorization? Dailey et al., J. Cognitive Neuroscience, 2002).

How (I like) to build Cognitive Models. I like to be able to relate them to the brain, so "neurally plausible" models are preferred -- neural nets. The model should be a working model of the actual task, rather than a cartoon version of it. Of course, the model should nevertheless be simplifying (i.e., it should be constrained to the essential features of the problem at hand): do we really need to model the (supposed) translation invariance and size invariance of biological perception? As far as I can tell, NO! Then, take the model "as is" and fit the experimental data: 0 fitting parameters is preferred over 1, 2, or 3.

The other way (I like) to build Cognitive Models. Same as above, except: use them as exploratory models in domains where there is little direct data (e.g., no single-cell recordings in infants or undergraduates), to suggest what we might find if we could get the data. These can then serve as "intuition pumps." Examples: why we might get specialized face processors, and why those face processors get recruited for other tasks.

Today: Talk #1: The Face of Fear: A Neural Network Model of Human Facial Expression Recognition Joint work with Matt Dailey

A Good Cognitive Model Should: Be psychologically relevant (i.e. it should be in an area with a lot of real, interesting psychological data). Actually be implemented. If possible, perform the actual task of interest rather than a cartoon version of it. Be simplifying (i.e. it should be constrained to the essential features of the problem at hand). Fit the experimental data. Make new predictions to guide psychological research.

The Issue: Are Similarity and Categorization Two Sides of the Same Coin? Some researchers believe perception of facial expressions is a new example of categorical perception: Like the colors of a rainbow, the brain separates expressions into discrete categories, with: Sharp boundaries between expressions, and… Higher discrimination of faces near those boundaries.

[Figure: percent of subjects who pressed each response button.]

The Issue: Are Similarity and Categorization Two Sides of the Same Coin? Some researchers believe the underlying representation of facial expressions is NOT discrete: there are two (or three) underlying dimensions, e.g., intensity and valence (found by MDS). Our perception of expressive faces induces a similarity structure that results in a circle in this space.

The Face Processing System: Pixel (Retina) Level → Gabor Filtering (Perceptual/V1 Level) → PCA (Object/IT Level) → Neural Net → Category Level (Happy, Sad, Afraid, Angry, Surprised, Disgusted).

The Face Processing System: Pixel (Retina) Level → Gabor Filtering (Perceptual/V1 Level) → PCA (Object/IT Level) → Neural Net → Category Level (Bob, Carol, Ted, Alice).

The Face Processing System: Pixel (Retina) Level → Gabor Filtering (Perceptual/V1 Level) → PCA (Object/IT Level) → Neural Net → Category Level (Bob, Carol, Ted, Cup, Can, Book). (The diagram also marks a feature level.)

The Face Processing System: Pixel (Retina) Level → Gabor Filtering (Perceptual/V1 Level) → separate LSF PCA and HSF PCA streams (Object/IT Level) → Neural Net → Category Level (Bob, Carol, Ted, Cup, Can, Book).

The Gabor Filter Layer. Basic feature: the 2-D Gabor wavelet filter (Daugman, 1985). These model the processing in early visual areas. The image is convolved with the filters, the magnitudes of the responses are taken, and the result is subsampled in a 29x36 grid.
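For concreteness, here is a minimal Python/NumPy sketch of this stage: convolve an image with a small bank of complex Gabor filters, keep the response magnitudes, and subsample them on a 29x36 grid. The wavelengths, orientations, and kernel size below are illustrative assumptions, not the parameters of the actual model.

```python
# Hedged sketch of the Gabor filter layer (illustrative parameters, not the model's).
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(wavelength, theta, size=31, sigma=None):
    """Complex 2-D Gabor wavelet: a sinusoid under a Gaussian envelope."""
    sigma = sigma if sigma is not None else 0.5 * wavelength
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)          # carrier direction
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * 2 * np.pi * x_rot / wavelength)
    return envelope * carrier

def gabor_magnitudes(image, wavelengths=(4, 8, 16), n_orientations=4, grid=(29, 36)):
    """Convolve, take magnitudes, subsample on a coarse grid, and flatten."""
    rows = np.linspace(0, image.shape[0] - 1, grid[0]).astype(int)
    cols = np.linspace(0, image.shape[1] - 1, grid[1]).astype(int)
    features = []
    for lam in wavelengths:
        for k in range(n_orientations):
            response = fftconvolve(image, gabor_kernel(lam, k * np.pi / n_orientations), mode="same")
            magnitude = np.abs(response)                   # complex magnitude = local contrast energy
            features.append(magnitude[np.ix_(rows, cols)].ravel())
    return np.concatenate(features)

# Toy example: a random 64x64 "image", just to show the shapes involved.
vector = gabor_magnitudes(np.random.rand(64, 64))
print(vector.shape)   # (29 * 36 * 12,) with the illustrative bank above
```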

Principal Components Analysis. The Gabor filters give us 40,600 numbers. We use PCA to reduce this to 50 numbers. PCA is like Factor Analysis: it finds the underlying directions of maximum variance. PCA can be computed in a neural network through a competitive Hebbian learning mechanism, so this is also a biologically plausible processing step. We suggest this leads to representations similar to those in Inferior Temporal cortex.
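A minimal sketch of the reduction step, assuming a generic SVD-based PCA; the random data matrix below is a small stand-in for the real ~40,600-dimensional Gabor vectors, and only the 50-component target comes from the slide.

```python
# Hedged sketch: project high-dimensional Gabor-magnitude vectors onto 50 principal components.
import numpy as np

def pca_reduce(X, n_components=50):
    """Return the reduced codes plus the basis and mean used to compute them."""
    mean = X.mean(axis=0)
    Xc = X - mean                                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are directions of maximum variance
    components = Vt[:n_components]
    return Xc @ components.T, components, mean

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4060))        # stand-in data; the real vectors are ~40,600-dimensional
codes, components, mean = pca_reduce(X)
print(codes.shape)                      # (500, 50): one 50-number code per image
```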

How to do PCA with a neural network (Cottrell, Munro & Zipser, 1987; Cottrell & Fleming, 1990; Cottrell & Metcalfe, 1990; O'Toole et al., 1991). A self-organizing network that learns whole-object representations (features, principal components, holons, eigenfaces). [Diagram: a layer of holons (the Gestalt layer) receiving input from the Perceptual Layer.]


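The slide above says this layer is self-organizing. As one concrete illustration of how a Hebbian network can extract principal components, here is a sketch of Sanger's generalized Hebbian algorithm; it is offered as an example of the idea, not as the exact network used in the cited papers.

```python
# Illustrative sketch: learning principal components with a Hebbian rule
# (Sanger's generalized Hebbian algorithm), as a stand-in for the holon layer.
import numpy as np

def sanger_pca(X, n_components=3, lr=1e-3, epochs=50, seed=0):
    """Each output unit converges toward one principal component, in order of variance."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(n_components, X.shape[1]))
    Xc = X - X.mean(axis=0)
    for _ in range(epochs):
        for x in Xc:
            y = W @ x                                            # unit activations
            # Hebbian growth (outer(y, x)) minus a term that decorrelates the units:
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20)) @ rng.normal(size=(20, 20))      # correlated toy data
W = sanger_pca(X)
print(np.round(W @ W.T, 2))   # rows become roughly orthonormal principal directions
```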
Holons. They act like face cells (Desimone, 1991): the response of single units is strong despite, e.g., occluding the eyes; the response drops off with rotation; some fire to my dog's face. They are a novel representation: distributed templates -- each unit's optimal stimulus is a ghostly looking face (template-like), but many units participate in the representation of a single face (distributed). For this audience: neither exemplars nor prototypes! They explain holistic processing. Why? If stimulated with a partial match, the firing represents votes for this template: units "downstream" don't know what caused this unit to fire. (More on this later…)

The Final Layer: Classification (Cottrell & Fleming, 1990; Cottrell & Metcalfe, 1990; Padgett & Cottrell, 1996; Dailey & Cottrell, 1999; Dailey et al., 2002). The holistic representation is then used as input to a categorization network trained by supervised learning. Excellent generalization performance demonstrates the sufficiency of the holistic representation for recognition. [Diagram: holons (fed from the Perceptual Layer) project to category units; outputs include Cup, Can, Book, Greeble, Face, Bob, Carol, Ted, Happy, Sad, Afraid, etc.]

The Final Layer: Classification. Categories can be at different levels: basic and subordinate. The learning rule is simple (~delta rule). It says (a mild lie here): add the inputs to your weights (synaptic strengths) when you are supposed to be on, and subtract them when you are supposed to be off. This makes your weights "look like" your favorite patterns -- the ones that turn you on. With no hidden units, there is no backpropagation of error. With hidden units, we get task-specific features (most interesting when we use the basic/subordinate distinction).
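A minimal sketch of a delta-rule-style output layer on top of the holon codes. The softmax output, learning rate, and toy labels are my assumptions for illustration; only the "error-scaled input" form of the weight update reflects the rule described above.

```python
# Hedged sketch of the classification layer: a one-layer network trained with a delta-style rule.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def train_delta(codes, labels, n_classes, lr=0.1, epochs=200, seed=0):
    """Holon codes in, category activations out; weights move toward inputs that should turn a unit on."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(codes.shape[1], n_classes))
    b = np.zeros(n_classes)
    targets = np.eye(n_classes)[labels]                  # one-hot "supposed to be on" signal
    for _ in range(epochs):
        outputs = softmax(codes @ W + b)
        error = targets - outputs                        # positive where the unit should be on
        W += lr * codes.T @ error / len(codes)           # add (or subtract) the inputs, scaled by error
        b += lr * error.mean(axis=0)
    return W, b

# Toy example: six expression categories over 50-dimensional holon codes.
rng = np.random.default_rng(2)
codes = rng.normal(size=(120, 50))
labels = rng.integers(0, 6, size=120)
W, b = train_delta(codes, labels, n_classes=6)
```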

Outline Visual Expertise: What is it? How do you measure it? The Visual Expertise Mystery Discussion

Are you a perceptual expert? Take the expertise test!!!** “Identify this object with the first name that comes to mind.” **Courtesy of Jim Tanaka, University of Victoria

“Car” or “Chevy” - Not an expert “2013 Chevy Volt” - Expert!

“Bird” or “Blue Bird” - Not an expert “Indigo Bunting” - Expert!

“Face” or “Man” - Not an expert “ George Dubya”- Expert! “Jerk” or “Megalomaniac” - Democrat!

How is an object to be named? Superordinate level: Animal. Basic level (Rosch et al., 1971): Bird. Subordinate (species) level: Indigo Bunting.

Entry Point Recognition. [Diagram: visual analysis → entry point "Bird"; fine-grain visual analysis → "Indigo Bunting"; semantic analysis → "Animal".] The Downward Shift Hypothesis.

Dog and Bird Expert Study (Tanaka & Taylor, 1991). Each expert had at least 10 years of experience in their respective domain of expertise. None of the participants were experts in both dogs and birds. Participants provided their own controls.

Object Verification Task. [Diagram: category labels at the superordinate (Animal, Plant), basic (Bird, Dog), and subordinate (Robin, Sparrow) levels, with YES/NO responses.]

[Figure: mean reaction time (msec) at the superordinate (Animal), basic (Bird/Dog), and subordinate (Robin/Beagle) levels for the expert vs. novice domain, showing a downward shift.] Dog and bird experts recognize objects in their domain of expertise at subordinate levels.

Is face recognition a general form of perceptual expertise? George W. Bush Indigo Bunting 2002 Series 7 BMW

[Figure (Tanaka, 2001): mean reaction time (msec) at the superordinate, basic, and subordinate levels for faces vs. objects, showing a downward shift.] Face experts recognize faces at the individual level of unique identity.

Event-related Potentials and Expertise. [Figures: the N170 component for faces (Bentin, Allison, Puce, Perez & McCarthy, 1996), and for object experts in their novice vs. expert domains (Tanaka & Curran, 2001; see also Gauthier, Curran, Curby & Collins, 2003, Nature Neuroscience).]

Neuroimaging of face, bird, and car experts (Gauthier et al., 2000). [Figure: fusiform gyrus activation for "face experts," car experts (Cars - Objects), and bird experts (Birds - Objects).]

How to identify an expert? Behavioral benchmarks of expertise: a downward shift in entry-point recognition, and improved discrimination of novel exemplars from learned and related categories. Neurological benchmarks of expertise: enhancement of the N170 ERP component, and increased activation of the fusiform gyrus.

End of Tanaka Slides

Nancy Kanwisher showed the FFA is specialized for faces by comparing the activation of the FFA when viewing faces to the activation when viewing other objects or houses. …but she didn’t control for what? What if she had used real estate agents?

Greeble Experts (Gauthier et al. 1999) Subjects trained over many hours to recognize individual Greebles. Activation of the FFA increased for Greebles as the training proceeded.

The “visual expertise mystery” If the so-called “Fusiform Face Area” (FFA) is specialized for face processing, then why would it also be used for cars, birds, dogs, or Greebles? Our view: the FFA is an area associated with a process: fine level discrimination of homogeneous categories. But the question remains: why would an area that presumably starts as a face area get recruited for these other visual tasks? Surely, they don’t share features, do they? Sugimoto & Cottrell (2001), Proceedings of the Cognitive Science Society

Solving the mystery with models. Main idea: there are multiple visual areas that could compete to be the Greeble expert -- "basic" level areas and the "expert" (FFA) area. The expert area must use features that distinguish similar-looking inputs -- that's what makes it an expert. Perhaps these features will be useful for other fine-level discrimination tasks. We will create basic-level models (trained to identify an object's class) and expert-level models (trained to identify individual objects). Then we will put them in a race to become Greeble experts, and deconstruct the winner to see why it won. Sugimoto & Cottrell (2001), Proceedings of the Cognitive Science Society.

Model Database. A network that can differentiate faces, books, cups, and cans is a "basic level network." A network that can also differentiate individuals within ONE class (faces, cups, cans, OR books) is an "expert."

Model. Pretrain two groups of neural networks on different tasks, then compare their abilities to learn a new individual Greeble classification task. [Diagram: non-expert ("LOC") networks output basic-level labels (can, cup, book, face); expert ("FFA") networks output individual labels (Bob, Carol, Ted, Alice) alongside basic-level labels (cup, can, book); both are then given Greeble1, Greeble2, and Greeble3 as new outputs.]
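Below is a hedged sketch of the simulation logic as described on this slide: pretrain a small network on either a basic-level task or an individuation ("expert") task, then continue training on a new Greeble individuation task and count epochs to criterion. The toy data, layer sizes, and accuracy criterion are assumptions; only the pretrain-then-compare protocol comes from the slides, and a toy run like this will not necessarily reproduce the reported result.

```python
# Hedged sketch of the pretrain-then-transfer protocol (toy data and sizes, not the real model).
import numpy as np

rng = np.random.default_rng(0)

def make_task(n_classes, n_per_class, dim=50, noise=0.3):
    """Toy stimuli: class prototypes plus noise, standing in for holon codes."""
    prototypes = rng.normal(size=(n_classes, dim))
    X = np.repeat(prototypes, n_per_class, axis=0) + noise * rng.normal(size=(n_classes * n_per_class, dim))
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

def train(X, y, W1=None, hidden=20, lr=0.05, epochs=500, criterion=0.95):
    """One-hidden-layer net; returns the hidden weights and the epochs needed to reach criterion."""
    n_out = int(y.max()) + 1
    targets = np.eye(n_out)[y]
    if W1 is None:
        W1 = 0.1 * rng.normal(size=(X.shape[1], hidden))
    W2 = 0.1 * rng.normal(size=(hidden, n_out))
    for epoch in range(1, epochs + 1):
        H = np.tanh(X @ W1)
        Z = H @ W2
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        if (P.argmax(axis=1) == y).mean() >= criterion:
            return W1, epoch
        dZ = (P - targets) / len(X)                      # softmax + cross-entropy gradient
        gW2 = H.T @ dZ
        gW1 = X.T @ ((dZ @ W2.T) * (1 - H ** 2))
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, epochs

# Pretraining: a basic-level task (few classes) vs. an "expert" individuation task (many classes).
Xb, yb = make_task(n_classes=4, n_per_class=30)
Xe, ye = make_task(n_classes=12, n_per_class=10)
W_basic, _ = train(Xb, yb)
W_expert, _ = train(Xe, ye)

# New Greeble individuation task, learned on top of each pretrained hidden layer.
Xg, yg = make_task(n_classes=6, n_per_class=15)
_, epochs_after_basic = train(Xg, yg, W1=W_basic.copy())
_, epochs_after_expert = train(Xg, yg, W1=W_expert.copy())
print(epochs_after_basic, epochs_after_expert)   # the slides' claim: the expert-pretrained net needs fewer
```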

Expertise begets expertise. Learning to individuate cups, cans, books, or faces first leads to faster learning of Greebles (can't try this with kids!). The more expertise, the faster the learning of the new task! Hence, in a competition with the object area, the FFA would win. If our parents were cans, the FCA (Fusiform Can Area) would win. [Figure: amount of training required to become a Greeble expert vs. training time on the first task.]

Entry-Level Shift: subordinate RT decreases with training (RT = uncertainty of response = 1.0 - max(output)). [Figures: human data and network data, plotting basic and subordinate RT against the number of training sessions.]
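For concreteness, the slide's reaction-time proxy as code; `outputs` is assumed to be the vector of category activations the network produces for one stimulus.

```python
# The model's "reaction time": uncertainty of the response, 1.0 minus the strongest output.
import numpy as np

def rt_proxy(outputs):
    return 1.0 - np.max(outputs)

print(rt_proxy(np.array([0.05, 0.85, 0.10])))   # 0.15: a confident response maps to a fast RT
```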

How do experts learn the task? Expert-level networks must be sensitive to within-class variation: their representations must amplify small differences. Basic-level networks must ignore within-class variation: their representations should reduce differences.

Observing hidden layer representations. Principal components analysis on hidden unit activations: PCA of the hidden unit activations allows us to reduce the dimensionality (to 2) and plot the representations. We can then observe how tightly clustered the stimuli are in a low-dimensional subspace. We expect basic-level networks to separate classes, but not individuals. We expect expert networks to separate both classes and individuals.

Subordinate-level training magnifies small differences within object representations. [Figure: face, basic, and Greeble representations after 1, 80, and 1280 epochs of training.]

Greeble representations are spread out prior to Greeble training. [Figure: face, basic, and Greeble representations.]

Another way to look at the data: "single unit recordings." [Figure: responses of hidden units 1-5 in the FACE NETWORK and the BASIC NETWORK; each row corresponds to a different hidden unit.] Note that the response to every object category is more variable in the face network -- a neural "zip code" for every face!

Variability over the unknown category decreases learning time on it. [Figure: Greeble learning time vs. Greeble variance prior to learning Greebles (r = ).]

Examining the Net’s Representations We want to visualize “receptive fields” in the network. But the Gabor magnitude representation is noninvertible. We can learn an approximate inverse mapping, however. We used linear regression to find the best linear combination of Gabor magnitude principal components for each image pixel. Then projecting each hidden unit’s weight vector into image space with the same mapping visualizes its “receptive field.”
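A minimal sketch of this visualization trick, assuming that ordinary least squares over the training images is an acceptable stand-in for the regression used in the original work; the image size and the random data below are placeholders.

```python
# Hedged sketch of the approximate inverse mapping used to visualize "receptive fields".
import numpy as np

rng = np.random.default_rng(0)
n_images, n_codes, img_shape = 200, 50, (29, 36)
codes = rng.normal(size=(n_images, n_codes))                         # PCA codes of the training images
pixels = rng.normal(size=(n_images, img_shape[0] * img_shape[1]))    # the same images as pixel vectors

# Best linear combination of code dimensions for each pixel: solve codes @ M ≈ pixels.
M, *_ = np.linalg.lstsq(codes, pixels, rcond=None)                   # shape (n_codes, n_pixels)

# A hidden unit's incoming weight vector lives in code space; map it back to image space.
hidden_weights = rng.normal(size=n_codes)                            # stand-in for a trained unit's weights
receptive_field = (hidden_weights @ M).reshape(img_shape)
print(receptive_field.shape)                                         # (29, 36) image, ready to display
```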

Two hidden unit receptive fields. [Figure: receptive fields of hidden units 16 and 36, after training as a face expert and after further training on Greebles.] NOTE: these are not face-specific!

Controlling for the number of classes. We obtained 13 classes from hemera.com: 10 of these are learned at the basic level, and 3 (lamps, ships, swords) are used for the novel expertise task. 10 faces, each with 8 expressions, make up the expert task.

Results: Pre-training. The new initial tasks are of similar difficulty; in previous work, the basic-level task was much easier. [Figure: learning curves for the 10 object classes and the 10 faces.]

Results. As before, experts still learned new expert-level tasks faster. [Figure: number of epochs to learn swords (after learning faces or objects) vs. the number of training epochs on faces or objects.]

Summary. Networks can become experts, by the behavioral definition of the entry-level shift in reaction times. Expert networks learn the Greeble expertise task faster than basic-level categorizers, suggesting this is why the FFA is recruited for these other tasks. This can be attributed to the spread of representations in expert networks: Greebles are more separated by these features than by the basic-network features. This feature variability in response to the Greeble category, prior to training on it, is predictive of the ease with which the category will be learned.

Conclusions. There is nothing special about faces! Any kind of expertise is sufficient to create a fine-level discrimination area! We predict that if the activations of neurons in the FFA could be measured at a fine grain, we should see high variance in response to different faces.

END