Supervised and Unsupervised Learning and Applications to Neuroscience (Course CA6b-4)
A Generic System
Input variables: x
Hidden variables: h
Output variables: y
Training examples: t
Parameters: w
Different types of learning
Supervised learning: 1. Classification (discrete y), 2. Regression (continuous y).
Unsupervised learning (no target y): 1. Clustering (h = different groups or types of data), 2. Density estimation (h = parameters of a probability distribution), 3. Dimensionality reduction (h = a few latent variables describing high-dimensional data).
Reinforcement learning (y = actions).
Digit recognition (supervised): Handwritten Digit Recognition
x: pixelized or pre-processed image.
t: class of pre-classified digits (training example).
y: digit class (computed by the ML algorithm).
h: contours, left/right-handedness, …
Regression (supervised). Target output: t. Parameters: w.
Linear classifier? Training examples [figure]
Linear classifier. Decision boundary; the output is passed through the Heaviside step function (0 or 1).
Assumptions: multivariate Gaussian class distributions, same covariance for both classes, two equiprobable classes.
How do we compute the output? Project the input onto w, which is orthogonal to the decision boundary: positive → class 1, negative → class 0.
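A minimal sketch of this output computation, assuming the class is read off the sign of the projection onto a weight vector; the names `w`, `b`, `x` are illustrative:

```python
import numpy as np

def linear_classifier_output(x, w, b=0.0):
    """Heaviside output of a linear classifier.

    x : input vector (e.g., pixel values or neural responses)
    w : weight vector, orthogonal to the decision boundary
    b : bias (offset of the boundary from the origin)
    Returns 1 if the projection is positive (class 1), else 0 (class 0).
    """
    a = np.dot(w, x) + b          # signed projection onto w, relative to the boundary
    return 1 if a > 0 else 0      # Heaviside step function
```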
How do we learn the parameters? Two options: (1) linear discriminant analysis = direct parameter estimation; (2) minimize the mean-squared error.
How do we learn the parameters? Minimize the mean-squared error E(w) = ½ Σ_n (y(x_n) − t_n)², either by gradient descent on the whole training set or by stochastic gradient descent (one training example at a time).
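A sketch of the two optimization schemes, assuming for now a differentiable linear output y = w·x (the non-differentiable Heaviside case is the problem raised on the next slide); the learning rate and array names are illustrative:

```python
import numpy as np

def batch_gradient_descent(X, t, eta=0.01, n_steps=1000):
    """Minimize E(w) = 1/2 * sum_n (w.x_n - t_n)^2 with the gradient over the whole set."""
    w = np.zeros(X.shape[1])
    for _ in range(n_steps):
        y = X @ w                     # outputs for all training examples
        grad = X.T @ (y - t)          # dE/dw summed over examples
        w -= eta * grad
    return w

def stochastic_gradient_descent(X, t, eta=0.01, n_epochs=10):
    """Same objective, but update after each training example."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x_n, t_n in zip(X, t):
            y_n = w @ x_n
            w -= eta * (y_n - t_n) * x_n   # gradient of the per-example error
    return w
```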
Stochastic gradient descent. Problem: the Heaviside output is not differentiable.
3. How do we learn the parameters? Solution: change y to the expected class, using the logistic function: y = σ(w·x), with σ(a) = 1/(1 + e^(−a)). The output is now the expected class.
Stochastic gradient descent on this output gives an update proportional to (t − y); the derivative σ′(a) is always positive, so it can be absorbed into the learning rate.
Learning based on the expected class then takes the same form as the perceptron learning rule, Δw ∝ (t − y) x.
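A sketch of these two update rules, with the expected-class (logistic) output and the hard-threshold perceptron variant side by side; function names and the learning rate are illustrative:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def delta_rule_step(w, x, t, eta=0.1):
    """Stochastic gradient step on the squared error with a logistic output.
    y = sigma(w.x) is the expected class; the factor sigma'(a) is always positive."""
    a = w @ x
    y = sigmoid(a)
    return w - eta * (y - t) * y * (1 - y) * x   # full gradient, includes sigma'(a)

def perceptron_rule_step(w, x, t, eta=0.1):
    """Perceptron learning rule: same (t - y) * x structure, with a hard 0/1 output
    and the always-positive derivative term absorbed into the learning rate."""
    y = 1 if w @ x > 0 else 0
    return w + eta * (t - y) * x
```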
Application 1: Neural population decoding
How to find w?
Linear Discriminant Analysis (LDA): w = Σ⁻¹ (μ_right − μ_left), where Σ is the covariance matrix of the neural responses (Σ⁻¹ its inverse) and μ_right, μ_left are the average neural responses when motion is to the right or to the left.
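A minimal sketch of this LDA computation for the population-decoding example, assuming one matrix of trial-by-trial neural responses per motion direction; array names are illustrative, and a small ridge term is added in case the covariance estimate is singular:

```python
import numpy as np

def lda_weights(R_right, R_left, ridge=1e-6):
    """LDA for two equiprobable classes with a shared covariance.

    R_right, R_left : (trials x neurons) response matrices for right / left motion.
    Returns w = Sigma^{-1} (mu_right - mu_left) and a bias placing the boundary
    halfway between the two mean responses.
    """
    mu_r = R_right.mean(axis=0)           # average responses, motion right
    mu_l = R_left.mean(axis=0)            # average responses, motion left
    R = np.vstack([R_right - mu_r, R_left - mu_l])
    Sigma = R.T @ R / (R.shape[0] - 1)    # pooled covariance matrix
    Sigma += ridge * np.eye(Sigma.shape[0])
    w = np.linalg.solve(Sigma, mu_r - mu_l)
    b = -0.5 * w @ (mu_r + mu_l)          # equiprobable classes: boundary at the midpoint
    return w, b
```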
Neural network interpretation: learning the connections with the "delta rule". Each neuron is a classifier.
Limitation of a 1-layer perceptron. Linearly separable: AND. Not linearly separable: XOR.
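Why XOR is not linearly separable, written out as a short argument (a sketch; the inequalities simply restate the four input/output cases a single linear unit would have to satisfy):

```latex
\begin{align}
w_1\cdot 0 + w_2\cdot 0 + b &< 0, &
w_1\cdot 1 + w_2\cdot 1 + b &< 0, \\
w_1\cdot 1 + w_2\cdot 0 + b &> 0, &
w_1\cdot 0 + w_2\cdot 1 + b &> 0.
\end{align}
```

Summing the two "< 0" conditions gives w₁ + w₂ + 2b < 0, while summing the two "> 0" conditions gives w₁ + w₂ + 2b > 0: a contradiction, so no weight vector exists.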
Extension: multilayer perceptron Towards a universal computer
Learning a multi-layer neural network with backprop Towards a universal computer
Extension: multilayer perceptron. Towards a universal computer.
Compute the initial error at the output layer, backpropagate the errors through the hidden layers, then apply the delta rule to each layer's weights.
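A compact sketch of that sequence (initial output error, backpropagated errors, delta rule at each layer) for a single hidden layer of logistic units; all names and the learning rate are illustrative:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backprop_step(x, t, W1, W2, eta=0.1):
    """One stochastic-gradient step for a 2-layer perceptron with squared error."""
    # Forward pass
    h = sigmoid(W1 @ x)                        # hidden layer activity
    y = sigmoid(W2 @ h)                        # output layer activity
    # Initial error at the output layer
    delta_out = (y - t) * y * (1 - y)
    # Backpropagate the errors through the hidden layer (before updating W2)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    # Apply the delta rule to each layer's weights
    W2 -= eta * np.outer(delta_out, h)
    W1 -= eta * np.outer(delta_hid, x)
    return W1, W2
```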
Big problem: overfitting… Backprop was abandoned in the late eighties…
Compensate with very large datasets. [Figure: 9th-order polynomial fit] Resurgence of backprop with big data.
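A small illustration of this point, assuming the classic setup of a 9th-order polynomial fit to noisy samples of a sine wave: with few points the fit overfits, with many points it does not. The sine target, noise level, and sample sizes here are illustrative assumptions, not the slide's actual figure:

```python
import numpy as np
from numpy.polynomial import Polynomial

def fit_poly(n_points, order=9, noise=0.3, seed=0):
    """Fit a degree-`order` polynomial to noisy sin(2*pi*x) samples.
    Returns the RMS error on a dense, noise-free test grid."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, n_points)
    t = np.sin(2 * np.pi * x) + noise * rng.standard_normal(n_points)
    p = Polynomial.fit(x, t, order)            # least-squares polynomial fit
    x_test = np.linspace(0, 1, 200)
    return np.sqrt(np.mean((p(x_test) - np.sin(2 * np.pi * x_test)) ** 2))

print(fit_poly(10))      # few points: large test error (overfitting)
print(fit_poly(10000))   # many points: small test error
```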
Deep convolutional networks. Google: image recognition, speech recognition. Trained on billions of examples…
Single neurons as a 2-layer perceptron (Poirazi and Mel, 2001, 2003).
Regression (supervised). Target output: t; parameters: w.
Regression in general: the output is a weighted sum of basis functions, y(x, w) = Σ_j w_j φ_j(x).
Gaussian noise assumption
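The Gaussian noise assumption written out, since it is what connects maximum likelihood to the squared-error objective used above (a standard derivation, stated here as a sketch):

```latex
t_n = y(x_n, \mathbf{w}) + \epsilon_n, \qquad \epsilon_n \sim \mathcal{N}(0,\sigma^2)
\quad\Longrightarrow\quad
-\ln p(\mathbf{t}\mid \mathbf{w})
  = \frac{1}{2\sigma^2}\sum_n \bigl(t_n - y(x_n,\mathbf{w})\bigr)^2 + \mathrm{const.}
```

So maximizing the likelihood of the training targets under Gaussian noise is equivalent to minimizing the sum-of-squares error.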
How to learn the parameters? Gradient descent on the error: w ← w − η ∇_w E(w).
But: overfitting…
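A sketch of fitting the basis-function regression above by gradient descent; the choice of Gaussian basis functions, their width, and all parameter values are illustrative assumptions:

```python
import numpy as np

def design_matrix(x, centers, width=0.1):
    """Gaussian basis functions phi_j(x) evaluated at every input."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

def fit_regression(x, t, centers, eta=0.05, n_steps=5000):
    """Gradient descent on E(w) = 1/2 * sum_n (sum_j w_j phi_j(x_n) - t_n)^2."""
    Phi = design_matrix(x, centers)
    w = np.zeros(len(centers))
    for _ in range(n_steps):
        y = Phi @ w
        w -= eta * Phi.T @ (y - t) / len(x)   # averaged gradient for a stable step size
    return w
```

With many basis functions and few data points this, too, will overfit, which is the problem flagged on the slide above.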
Application 3: Neural coding: function approximation with tuning curves
“Classical view”: multiple spatial maps
Application 3: function approximation in sensorimotor areas. In parietal cortex, retinotopic cells are gain-modulated by eye position, and also by head position, arm position, … (Snyder and Pouget, 2000).
Multisensory integration = multidirectional coordinate transform. Model prediction: partially shifting tuning curves (Pouget, Duhamel and Deneve, 2004). Experimental validation: Avillac et al., 2005.
Unsupervised learning… a first example of many.
Principal component analysis (unsupervised learning): find an orthogonal basis in which the components are uncorrelated. Note: uncorrelated is not the same as independent.
Principal component analysis and dimensionality reduction: keep only K ≪ N components and treat the remainder as “noise” (e.g., N = 2 reduced to K = 1).
One solution: eigenvalue decomposition of the covariance matrix.
How do we “learn” the parameters (K ≪ N)? Standard iterative method: find the first component, then the other components one at a time.
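A minimal sketch of the eigendecomposition solution mentioned above; the data-matrix layout (samples × variables) and names are illustrative:

```python
import numpy as np

def pca(X, K):
    """PCA of a (samples x variables) data matrix X, keeping K << N components.

    Returns the K leading eigenvectors (orthogonal basis), the projected data
    (uncorrelated components), and the reconstruction from only K components.
    """
    Xc = X - X.mean(axis=0)                      # center the data
    C = Xc.T @ Xc / (Xc.shape[0] - 1)            # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)         # eigendecomposition (ascending order)
    order = np.argsort(eigvals)[::-1][:K]        # keep the K largest eigenvalues
    W = eigvecs[:, order]                        # orthogonal basis (N x K)
    Z = Xc @ W                                   # components, uncorrelated by construction
    X_hat = Z @ W.T + X.mean(axis=0)             # low-dimensional reconstruction
    return W, Z, X_hat
```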
PCA: gradient descent (“maximization” / “expectation” steps); generalized Oja rule.
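A sketch of the single-unit Oja rule, which learns the leading principal component online by a Hebbian update with a normalizing decay term; the generalized, multi-unit version on the slide extends this with competition between units. Learning rate, epoch count, and names are illustrative:

```python
import numpy as np

def oja_rule(X, eta=0.01, n_epochs=50, seed=0):
    """Single-unit Oja rule: w converges to the first principal component of X."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    Xc = X - X.mean(axis=0)                 # work with centered data
    for _ in range(n_epochs):
        for x in Xc:
            y = w @ x                       # project the input onto the current weights
            w += eta * y * (x - y * w)      # Hebbian term plus normalizing decay
    return w
```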
Natural images: Weights learnt by PCA
Application of PCA: analysis of large neural datasets (Machens, Brody and Romo, 2010). [Figure: components related to time and to frequency]