
Lecture 2: Learning without Over-learning Isabelle Guyon

1 Lecture 2: Learning without Over-learning Isabelle Guyon isabelle@clopinet.com

2 Machine Learning Learning machines include: – Linear discriminant (including Naïve Bayes) – Kernel methods – Neural networks – Decision trees. Learning is tuning: – Parameters (weights w or α, threshold b) – Hyperparameters (basis functions, kernels, number of units).

3 Conventions Data matrix X = {x_ij} with m examples (rows x_i) and n features; target vector y = {y_j}; parameters α or w. (Figure: layout of the data matrix.)

4 What is a Risk Functional? A function of the parameters of the learning machine, assessing how much it is expected to fail on a given task. (Figure: R[f(x,w)] plotted over the parameter space (w), with its minimum at w*.)

5 Examples of risk functionals Classification: – Error rate: (1/m) Σ_{i=1:m} 1(F(x_i) ≠ y_i) – 1 − AUC. Regression: – Mean square error: (1/m) Σ_{i=1:m} (f(x_i) − y_i)^2.
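A minimal numpy sketch of these two risk estimates (not from the original slides; the toy labels and predictions are made up for illustration):

    import numpy as np

    def error_rate(y_true, y_pred):
        # (1/m) * sum_i 1(F(x_i) != y_i): fraction of misclassified examples
        return np.mean(y_true != y_pred)

    def mean_square_error(y_true, y_pred):
        # (1/m) * sum_i (f(x_i) - y_i)^2
        return np.mean((y_pred - y_true) ** 2)

    # toy usage with made-up labels and predictions
    y_true = np.array([1, -1, 1, 1])
    y_pred = np.array([1, 1, 1, -1])
    print(error_rate(y_true, y_pred))         # 0.5
    print(mean_square_error(y_true, y_pred))  # 2.0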

6 How to Train? Define a risk functional R[f(x,w)]. Find a method to optimize it, typically "gradient descent": w_j ← w_j − η ∂R/∂w_j, or any other optimization method (mathematical programming, simulated annealing, genetic algorithms, etc.)
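A generic gradient-descent loop of this form, sketched in numpy; the learning rate η (eta), the number of steps, and the toy data are placeholders chosen for illustration:

    import numpy as np

    def gradient_descent(grad_R, w0, eta=0.01, n_steps=1000):
        # minimize a risk R[f(x,w)] given a function grad_R(w) returning dR/dw
        w = np.array(w0, dtype=float)
        for _ in range(n_steps):
            w -= eta * grad_R(w)   # w_j <- w_j - eta * dR/dw_j
        return w

    # toy usage: mean square error of a linear model f(x) = w.x on data (X, y)
    X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    y = np.array([1.0, 2.0, 3.0])
    grad_mse = lambda w: 2.0 / len(y) * X.T @ (X @ w - y)
    print(gradient_descent(grad_mse, w0=[0.0, 0.0]))  # approaches [1, 2]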

7 Fit / Robustness Tradeoff (Figure: two plots in the (x_1, x_2) plane.)

8 Overfitting Example: polynomial regression. Learning machine: y = w_0 + w_1 x + w_2 x^2 + … + w_10 x^10. Target: a 10th degree polynomial + noise. (Figure: fitted curve vs. target in the (x, y) plane, with the error marked.)

9 Underfitting Example: polynomial regression. Linear model: y = w_0 + w_1 x. Target: a 10th degree polynomial + noise. (Figure: linear fit vs. target in the (x, y) plane, with the error marked.)
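Both polynomial examples (slides 8 and 9) can be reproduced with a few lines of numpy; the target coefficients, noise level, and sample size below are made up, and x is rescaled to [-1, 1] for numerical stability:

    import numpy as np

    rng = np.random.default_rng(0)

    # target: a 10th degree polynomial + noise (coefficients made up for illustration)
    target = np.poly1d(rng.standard_normal(11))
    x_train = np.linspace(-1, 1, 20)
    y_train = target(x_train) + 0.3 * rng.standard_normal(x_train.size)

    w_over = np.polyfit(x_train, y_train, deg=10)  # 11 parameters: can chase the noise
    w_under = np.polyfit(x_train, y_train, deg=1)  # linear model: too rigid for the target

    x_test = np.linspace(-1, 1, 200)
    y_test = target(x_test)
    for name, w in [("degree 10", w_over), ("degree 1 ", w_under)]:
        print(name, "test MSE:", np.mean((np.polyval(w, x_test) - y_test) ** 2))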

10 Variance (Figure: x–y plot of fits illustrating variance.)

11 Bias (Figure: x–y plot of a fit illustrating bias.)

12 Ockham’s Razor Principle proposed by William of Ockham in the fourteenth century: “Pluralitas non est ponenda sine necessitate.” Of two theories providing similarly good predictions, prefer the simplest one. Shave off unnecessary parameters of your models.

13 The Power of Amnesia The human brain is made of billions of cells, or neurons, which are highly interconnected by synapses. Exposure to enriched environments with extra sensory and social stimulation enhances the connectivity of the synapses, but children and adolescents can lose up to 20 million synapses per day.

14 Artificial Neurons f(x) = w · x + b (McCulloch and Pitts, 1943). (Figure: inputs x_1, x_2, …, x_n and a constant 1, weighted by w_1, w_2, …, w_n and b; biological analogy: dendrites, synapses, cell potential, activation function, axon, activation of other neurons.)
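As an aside (not in the original slide), a one-line numpy version of such a unit; tanh is chosen arbitrarily as the activation function:

    import numpy as np

    def neuron(x, w, b, activation=np.tanh):
        # cell potential w.x + b passed through an activation function
        return activation(np.dot(w, x) + b)

    # toy usage with made-up weights
    print(neuron(np.array([0.5, -1.0]), w=np.array([2.0, 1.0]), b=0.1))  # tanh(0.1)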

15 Hebb’s Rule w_j ← w_j + y_i x_ij (Figure: synapse of weight w_j between an axon carrying the activation y and a dendrite carrying x_j; activation of another neuron.)

16 Weight Decay Hebb’s rule: w_j ← w_j + y_i x_ij. Weight decay: w_j ← (1 − γ) w_j + y_i x_ij, with γ ∈ [0, 1] the decay parameter.
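A small numpy sketch of the two update rules (the inputs and the value of the decay parameter are made up):

    import numpy as np

    def hebb_update(w, x_i, y_i):
        # Hebb's rule: w_j <- w_j + y_i * x_ij
        return w + y_i * x_i

    def hebb_with_decay(w, x_i, y_i, gamma=0.01):
        # weight decay: w_j <- (1 - gamma) * w_j + y_i * x_ij, gamma in [0, 1]
        return (1.0 - gamma) * w + y_i * x_i

    # toy usage on two made-up examples
    w = np.zeros(3)
    for x_i, y_i in [(np.array([1.0, 0.0, 1.0]), +1), (np.array([0.0, 1.0, 1.0]), -1)]:
        w = hebb_with_decay(w, x_i, y_i)
    print(w)  # [0.99, -1.0, -0.01]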

17 Overfitting Avoidance Example: polynomial regression. Target: a 10th degree polynomial + noise. Learning machine: y = w_0 + w_1 x + w_2 x^2 + … + w_10 x^10.

18 Weight Decay for MLP Replace: w_j ← w_j + back_prop(j) by: w_j ← (1 − γ) w_j + back_prop(j). (Figure: multi-layer perceptron with input x_j.)

19 Theoretical Foundations Structural Risk Minimization, Bayesian priors, Minimum Description Length, Bias/variance tradeoff.

20 Risk Minimization Examples are given: (x_1, y_1), (x_2, y_2), …, (x_m, y_m), drawn from an unknown distribution P(x, y). Learning problem: find the best function f(x; w) minimizing the risk functional R[f] = ∫ L(f(x; w), y) dP(x, y), where L is the loss function.

21 Approximations of R[f] Empirical risk: R_train[f] = (1/m) Σ_{i=1:m} L(f(x_i; w), y_i). – 0/1 loss 1(F(x_i) ≠ y_i): R_train[f] = error rate. – Square loss (f(x_i) − y_i)^2: R_train[f] = mean square error. Guaranteed risk: with high probability (1 − δ), R[f] ≤ R_gua[f], with R_gua[f] = R_train[f] + ε(C).

22 Structural Risk Minimization (Vapnik, 1974) Nested subsets of models of increasing complexity/capacity: S_1 ⊂ S_2 ⊂ … ⊂ S_N. (Figure: nested sets S_1, S_2, S_3 of increasing complexity; curves of the training error Tr and the guaranteed risk Ga = Tr + ε(C), where ε is a function of the model complexity/capacity C.)

23 SRM Example (linear model) Rank with ||w||^2 = Σ_i w_i^2: S_k = { w : ||w||^2 < ω_k^2 }, with ω_1 < ω_2 < … < ω_n. Minimization under constraint: min R_train[f] s.t. ||w||^2 < ω_k^2. Lagrangian: R_reg[f, γ] = R_train[f] + γ ||w||^2. (Figure: nested sets S_1 ⊂ S_2 ⊂ … ⊂ S_N of increasing capacity.)
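With the square loss, the Lagrangian above is ridge regression and can be minimized in closed form. A numpy sketch (the data and the γ values are made up; the penalty scaling follows the (1/m) convention used for R_train):

    import numpy as np

    def ridge_fit(X, y, gamma):
        # minimize (1/m) * sum_i (x_i.w - y_i)^2 + gamma * ||w||^2
        # setting the gradient to zero gives (X^T X + gamma*m*I) w = X^T y
        m, n = X.shape
        return np.linalg.solve(X.T @ X + gamma * m * np.eye(n), X.T @ y)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 5))
    y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(50)
    for gamma in [0.0, 0.1, 10.0]:
        w = ridge_fit(X, y, gamma)
        print(gamma, np.linalg.norm(w))  # larger gamma -> smaller ||w|| (smaller S_k)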

24 Gradient Descent SRM/regularization: R_reg[f] = R_emp[f] + γ ||w||^2. Gradient step: w_j ← w_j − η ∂R_reg/∂w_j = w_j − η ∂R_emp/∂w_j − 2 η γ w_j, i.e. w_j ← (1 − 2 η γ) w_j − η ∂R_emp/∂w_j: weight decay.
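A quick numerical check (not from the slides) that one gradient step on the regularized risk coincides with the weight-decay step, for the square loss on made-up data:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 3))
    y = rng.standard_normal(20)
    w = rng.standard_normal(3)
    eta, gamma = 0.05, 0.1

    grad_emp = 2.0 / len(y) * X.T @ (X @ w - y)   # dR_emp/dw for the square loss

    # one step on R_reg = R_emp + gamma * ||w||^2 ...
    step_reg = w - eta * (grad_emp + 2.0 * gamma * w)
    # ... equals shrinking w before the empirical-risk step (weight decay)
    step_decay = (1.0 - 2.0 * eta * gamma) * w - eta * grad_emp

    print(np.allclose(step_reg, step_decay))  # True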

25 Multiple Structures Shrinkage (weight decay, ridge regression, SVM): S_k = { w : ||w||^2 < ω_k }, ω_1 < ω_2 < … < ω_k; equivalently γ_1 > γ_2 > γ_3 > … > γ_k (γ is the ridge). Feature selection: S_k = { w : ||w||_0 < σ_k }, σ_1 < σ_2 < … < σ_k (σ is the number of features). Data compression: κ_1 < κ_2 < … < κ_k (κ may be the number of clusters).

26 Hyper-parameter Selection Learning = adjusting: parameters (the vector w) and hyper-parameters (e.g. γ). Cross-validation with K folds, for various values of γ: – Adjust w on a fraction (K−1)/K of the training examples, e.g. 9/10th. – Test on the remaining 1/K examples, e.g. 1/10th. – Rotate the folds and average the test results (CV error). – Select γ to minimize the CV error. – Re-compute w on all training examples using the optimal γ. (Figure: training data X, y split into K folds; held-out test data for a prospective / “real” validation.)
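A sketch of this procedure in numpy, with a ridge estimator as the learning machine and γ as the hyper-parameter; the data, the candidate γ values, and K = 10 are made up for illustration:

    import numpy as np

    def ridge_fit(X, y, gamma):
        m, n = X.shape
        return np.linalg.solve(X.T @ X + gamma * m * np.eye(n), X.T @ y)

    def cv_error(X, y, gamma, K=10):
        # average validation MSE over K folds for one value of gamma
        folds = np.array_split(np.arange(len(y)), K)
        errors = []
        for k in range(K):
            val = folds[k]
            tr = np.concatenate([folds[j] for j in range(K) if j != k])
            w = ridge_fit(X[tr], y[tr], gamma)                  # adjust w on (K-1)/K of the data
            errors.append(np.mean((X[val] @ w - y[val]) ** 2))  # test on the remaining 1/K
        return np.mean(errors)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    y = X @ rng.standard_normal(10) + 0.5 * rng.standard_normal(100)
    gammas = [1e-3, 1e-2, 1e-1, 1.0]
    best = min(gammas, key=lambda g: cv_error(X, y, g))  # select gamma minimizing CV error
    w_final = ridge_fit(X, y, best)                      # re-compute w on all training data
    print("selected gamma:", best)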

27 Summary High complexity models may “overfit”: they fit the training examples perfectly but generalize poorly to new cases. SRM solution: organize the models into nested subsets such that, within each element of the structure, complexity is below a threshold. Regularization: formalize learning as a constrained optimization problem; minimize the regularized risk = training error + penalty.

28 Bayesian MAP ⇔ SRM Maximum A Posteriori (MAP): f = argmax P(f|D) = argmax P(D|f) P(f) = argmin −log P(D|f) − log P(f). Structural Risk Minimization (SRM): f = argmin R_emp[f] + Ω[f]. Negative log likelihood = empirical risk R_emp[f]; negative log prior = regularizer Ω[f].

29 Example: Gaussian Prior Linear model: f(x) = w · x. Gaussian prior: P(f) = exp(−||w||^2 / σ^2). Regularizer: Ω[f] = −log P(f) = γ ||w||^2. (Figure: prior contours in the (w_1, w_2) plane.)

30 Minimum Description Length MDL: minimize the length of the “message”. Two-part code: transmit the model and the residual. f = argmin −log_2 P(D|f) − log_2 P(f). The second term is the length of the shortest code to encode the model (model complexity); the first term is the residual: the length of the shortest code to encode the data given the model.

31 Bias-variance tradeoff f is trained on a training set D of size m (m fixed). For the square loss: E_D[f(x) − y]^2 = [E_D f(x) − y]^2 + E_D[f(x) − E_D f(x)]^2 = bias^2 + variance, where E_D denotes the expected value of the loss over training sets D of the same size and y is the target. (Figure: target y, prediction f(x), average prediction E_D f(x), with bias^2 and variance indicated.)
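The decomposition can be checked empirically by training the same machine on many training sets of the same size m; a numpy sketch with a made-up target function and a linear (hence biased) learning machine:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n_datasets = 30, 500
    x0 = 0.5                               # estimate bias and variance at one test point
    f_target = lambda x: np.sin(3 * x)     # made-up noise-free target

    preds = []
    for _ in range(n_datasets):            # many training sets D of the same size m
        x = rng.uniform(-1, 1, m)
        y = f_target(x) + 0.3 * rng.standard_normal(m)
        w = np.polyfit(x, y, deg=1)        # linear learning machine
        preds.append(np.polyval(w, x0))
    preds = np.array(preds)

    bias2 = (preds.mean() - f_target(x0)) ** 2   # [E_D f(x) - y]^2
    variance = preds.var()                       # E_D [f(x) - E_D f(x)]^2
    print("bias^2:", bias2, "variance:", variance)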

32 Bias (Figure: x–y plot of a fit illustrating bias.)

33 Variance (Figure: x–y plot of fits illustrating variance.)

34 The Effect of SRM Reduces the variance… …at the expense of introducing some bias.

35 Ensemble Methods E_D[f(x) − y]^2 = [E_D f(x) − y]^2 + E_D[f(x) − E_D f(x)]^2. Variance can also be reduced with committee machines: the committee members “vote” to make the final decision. Committee members are built e.g. with data subsamples. Each committee member should have a low bias (no use of ridge/weight decay).
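A minimal bagging-style committee in numpy (the data, the polynomial degree, and the committee size are made up): each member is an unregularized high-degree fit on a bootstrap subsample, and the committee averages the members' predictions:

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = rng.uniform(-1, 1, 40)
    y_train = np.sin(3 * x_train) + 0.3 * rng.standard_normal(40)  # made-up data
    x_test = np.linspace(-1, 1, 100)
    y_true = np.sin(3 * x_test)

    # committee members: low-bias (high-degree, unregularized) fits on data subsamples
    members = []
    for _ in range(25):
        idx = rng.integers(0, len(x_train), len(x_train))   # bootstrap subsample
        members.append(np.polyfit(x_train[idx], y_train[idx], deg=8))

    single = np.polyval(members[0], x_test)                                # one member alone
    committee = np.mean([np.polyval(w, x_test) for w in members], axis=0)  # averaged "vote"

    print("single member test MSE:", np.mean((single - y_true) ** 2))
    print("committee test MSE:   ", np.mean((committee - y_true) ** 2))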

36 Overall summary Weight decay is a powerful means of overfitting avoidance (||w||^2 regularizer). It has several theoretical justifications: SRM, Bayesian prior, MDL. It controls variance in the learning machine family, but introduces bias. Variance can also be controlled with ensemble methods.

37 Want to Learn More? – Statistical Learning Theory, V. Vapnik. Theoretical reference book on generalization, VC dimension, Structural Risk Minimization, and SVMs. ISBN: 0471030031. – Structural risk minimization for character recognition, I. Guyon, V. Vapnik, B. Boser, L. Bottou, and S. A. Solla. In J. E. Moody et al., editors, Advances in Neural Information Processing Systems 4 (NIPS 1991), pages 471-479, Morgan Kaufmann, San Mateo, CA, 1992. http://clopinet.com/isabelle/Papers/srm.ps.Z – Kernel Ridge Regression Tutorial, I. Guyon. http://clopinet.com/isabelle/Projects/ETH/KernelRidge.pdf – Feature Extraction: Foundations and Applications, I. Guyon et al., Eds. Book for practitioners, with the datasets of the NIPS 2003 challenge, tutorials, the best performing methods, Matlab code, and teaching material. http://clopinet.com/fextract-book

