The Perceptron Algorithm (Primal Form)
Given a linearly separable training set $S = \{(x_1, y_1), \dots, (x_\ell, y_\ell)\}$ and learning rate $\eta > 0$:
$w_0 \leftarrow 0$; $b_0 \leftarrow 0$; $k \leftarrow 0$; $R \leftarrow \max_{1 \le i \le \ell} \|x_i\|$
Repeat:
  for $i = 1$ to $\ell$:
    if $y_i(\langle w_k, x_i \rangle + b_k) \le 0$ then
      $w_{k+1} \leftarrow w_k + \eta y_i x_i$, $b_{k+1} \leftarrow b_k + \eta y_i R^2$, $k \leftarrow k + 1$
until no mistakes made within the for loop
return $(w_k, b_k)$. What is $k$? It is the total number of updates, i.e. the number of mistakes made during training.
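A minimal NumPy sketch of the primal form, following the updates above; the function name and the max_epochs safety cap are my additions, not from the slides:

import numpy as np

def perceptron_primal(X, y, eta=1.0, max_epochs=1000):
    """Primal perceptron. X: (l, n) inputs, y: (l,) labels in {-1, +1}.
    Returns (w, b, k), where k counts the updates (mistakes)."""
    l, n = X.shape
    w = np.zeros(n)
    b = 0.0
    k = 0                                   # k = number of mistakes/updates
    R = np.max(np.linalg.norm(X, axis=1))   # R = max_i ||x_i||
    for _ in range(max_epochs):
        mistakes = False
        for i in range(l):
            if y[i] * (w @ X[i] + b) <= 0:  # functional margin non-positive
                w = w + eta * y[i] * X[i]
                b = b + eta * y[i] * R**2
                k += 1
                mistakes = True
        if not mistakes:                    # no mistakes within the for loop
            break
    return w, b, k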
The Perceptron Algorithm (STOP in Finite Steps)
Theorem 2.3 (Novikoff) Let $S$ be a non-trivial training set, and let $R = \max_{1 \le i \le \ell} \|x_i\|$. Suppose that there exists a vector $w_{\mathrm{opt}}$ with $\|w_{\mathrm{opt}}\| = 1$ such that $y_i(\langle w_{\mathrm{opt}}, x_i \rangle + b_{\mathrm{opt}}) \ge \gamma$ for $1 \le i \le \ell$. Then the number of mistakes made by the on-line perceptron algorithm on $S$ is at most $\left(\frac{2R}{\gamma}\right)^2$.
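As a worked instance of the bound (the numbers are illustrative, not from the slides): if $R = 10$ and the separating margin is $\gamma = 1$, then
\[ \left(\frac{2R}{\gamma}\right)^2 = \left(\frac{2 \cdot 10}{1}\right)^2 = 400, \]
so the on-line perceptron makes at most 400 updates on $S$, regardless of the order in which the examples are presented.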
The Perceptron Algorithm (Dual Form)
Given a linearly separable training set $S$:
$\alpha \leftarrow 0$; $b \leftarrow 0$; $R \leftarrow \max_{1 \le i \le \ell} \|x_i\|$
Repeat:
  for $i = 1$ to $\ell$:
    if $y_i\big(\sum_{j=1}^{\ell} \alpha_j y_j \langle x_j, x_i \rangle + b\big) \le 0$ then
      $\alpha_i \leftarrow \alpha_i + 1$, $b \leftarrow b + y_i R^2$
until no mistakes made within the for loop
return $(\alpha, b)$
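A minimal Python sketch of the dual form, assuming labels in $\{-1, +1\}$; the Gram matrix is precomputed once, and the names are illustrative:

import numpy as np

def perceptron_dual(X, y, max_epochs=1000):
    """Dual perceptron. Returns (alpha, b); alpha[i] counts updates on point i."""
    l = X.shape[0]
    G = X @ X.T                       # Gram matrix: G[i, j] = <x_i, x_j>
    alpha = np.zeros(l)
    b = 0.0
    R = np.max(np.linalg.norm(X, axis=1))
    for _ in range(max_epochs):
        mistakes = False
        for i in range(l):
            # decision value touches the data only through G
            if y[i] * (np.sum(alpha * y * G[:, i]) + b) <= 0:
                alpha[i] += 1
                b += y[i] * R**2
                mistakes = True
        if not mistakes:
            break
    return alpha, b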
What Do We Get from the Dual Form Perceptron Algorithm?
The number of updates equals $\sum_{i=1}^{\ell} \alpha_i = \|\alpha\|_1 \le \left(\frac{2R}{\gamma}\right)^2$.
$\alpha_i > 0$ implies that the training point $(x_i, y_i)$ has been misclassified in the training process at least once.
$\alpha_i = 0$ implies that removing the training point $(x_i, y_i)$ will not affect the final result.
The training data only appear in the algorithm through the entries of the Gram matrix $G \in \mathbb{R}^{\ell \times \ell}$, which is defined below:
\[ G_{ij} = \langle x_i, x_j \rangle \]
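A small illustration of the Gram matrix computation (the data values are chosen arbitrarily):

import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
G = X @ X.T    # G[i, j] = <x_i, x_j>; symmetric and positive semidefinite
print(G)
# [[ 5. 11. 17.]
#  [11. 25. 39.]
#  [17. 39. 61.]]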
The Margin Slack Variable of $(x_i, y_i)$ with respect to $(w, b)$ and $\gamma$
For a fixed value $\gamma > 0$ called the target margin, we define the margin slack variable of training point $(x_i, y_i)$ with respect to the hyperplane $(w, b)$ and $\gamma$ as
\[ \xi_i = \max\big(0,\; \gamma - y_i(\langle w, x_i \rangle + b)\big). \]
If $\xi_i > \gamma$, then $(x_i, y_i)$ is misclassified by the hyperplane $(w, b)$.
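A direct transcription of this definition as a Python function (a sketch; the hyperplane and target margin are passed in as arguments):

import numpy as np

def margin_slack(x, y, w, b, gamma):
    """xi = max(0, gamma - y * (<w, x> + b)); xi > gamma means (x, y) is misclassified."""
    return max(0.0, gamma - y * (np.dot(w, x) + b))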
Bound on Mistakes of a for Loop for the Perceptron Algorithm
Theorem 2.7 (Freund & Schapire) Let $S$ be a non-trivial training set with no duplicate examples, with $\|x_i\| \le R$. Let $(w, b)$ be any hyperplane with $\|w\| = 1$, let $\gamma > 0$, and define
\[ D = \sqrt{\sum_{i=1}^{\ell} \xi_i^2}, \]
where $\xi_i$ is the margin slack variable of $(x_i, y_i)$ with respect to $(w, b)$ and $\gamma$. Then the number of mistakes in the first execution of the for loop of the Perceptron Algorithm on $S$ is bounded by
\[ \left(\frac{2(R + D)}{\gamma}\right)^2. \]
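Plugging in illustrative numbers (not from the slides): with $R = 10$, $\gamma = 1$, and slack norm $D = \sqrt{\sum_i \xi_i^2} = 5$,
\[ \left(\frac{2(R + D)}{\gamma}\right)^2 = \left(\frac{2(10 + 5)}{1}\right)^2 = 900, \]
so even when $S$ is not linearly separable, one pass of the for loop makes at most 900 mistakes against this reference hyperplane.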
Fisher's Linear Discriminant
Finding the hyperplane $(w, b)$ on which the projection of the data is maximally separated. We maximize
\[ F(w, b) = \frac{(\mu_{+} - \mu_{-})^2}{\sigma_{+}^2 + \sigma_{-}^2}, \]
where $\mu_{\pm}$ and $\sigma_{\pm}$ are respectively the mean and standard deviation of the function output values $f(x_i) = \langle w, x_i \rangle + b$ on the positive and negative examples.
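A sketch of one standard way to maximize this criterion: the closed-form direction $w \propto S_W^{-1}(\mu_{+} - \mu_{-})$ built from the within-class scatter matrix $S_W$. This derivation is not spelled out on the slide, and the code assumes $S_W$ is nonsingular:

import numpy as np

def fisher_direction(X_pos, X_neg):
    """Closed-form Fisher direction: w = S_W^{-1} (mu_+ - mu_-)."""
    mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
    # within-class scatter matrix (sum of the two class scatters)
    Sw = (X_pos - mu_p).T @ (X_pos - mu_p) + (X_neg - mu_n).T @ (X_neg - mu_n)
    return np.linalg.solve(Sw, mu_p - mu_n)

def fisher_criterion(w, X_pos, X_neg):
    """(gap between projected means)^2 / (sum of projected variances)."""
    f_p, f_n = X_pos @ w, X_neg @ w
    return (f_p.mean() - f_n.mean())**2 / (f_p.var() + f_n.var())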
Multi-class Classification
Extension of binary classification.
Equivalent to our previous rule: the binary classifier $h(x) = \operatorname{sgn}(\langle w, x \rangle + b)$ can be written as $h(x) = \arg\max_{c \in \{-1, +1\}} c\,(\langle w, x \rangle + b)$.
For multi-class (one against rest): train one hyperplane $(w_c, b_c)$ per class and predict $h(x) = \arg\max_{c} \big(\langle w_c, x \rangle + b_c\big)$.
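A sketch of one-against-rest built on the perceptron_primal function sketched earlier; the relabeling scheme and helper names are my assumptions:

import numpy as np

def train_one_vs_rest(X, y, classes):
    """Train one hyperplane (w_c, b_c) per class: class c vs. the rest."""
    models = {}
    for c in classes:
        y_bin = np.where(y == c, 1, -1)     # relabel: c -> +1, rest -> -1
        w, b, _ = perceptron_primal(X, y_bin)
        models[c] = (w, b)
    return models

def predict(models, x):
    """Predict the class whose hyperplane gives the largest output."""
    return max(models, key=lambda c: models[c][0] @ x + models[c][1])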