
1 Linear Learning Machines  Simplest case: the decision function is a hyperplane in input space.  The Perceptron Algorithm (Rosenblatt, 1958): an on-line and mistake-driven procedure.  Update the weight vector and bias whenever a point is misclassified.  Converges when the problem is linearly separable.

2 Basic Notations  Input space: $x \in X \subseteq \mathbb{R}^n$.  Output space: $y \in Y = \{-1, +1\}$.  Hypothesis: $h \in H$.  Real-valued function: $f : X \to \mathbb{R}$.  Training set: $S = \{(x_1, y_1), \ldots, (x_\ell, y_\ell)\}$.

3 Basic Notations  Inner product: $\langle w, x \rangle = \sum_{i=1}^{n} w_i x_i$.  Norms: 1-norm $\|x\|_1 = \sum_{i=1}^{n} |x_i|$; 2-norm $\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$; $\infty$-norm $\|x\|_\infty = \max_{1 \le i \le n} |x_i|$.
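A quick numeric check of the inner product and the three norms defined above; the vectors used here are illustrative, not from the slides.

```python
# Inner product and 1-, 2-, infinity-norms from slide 3, stdlib only.
x = [3.0, -4.0]
z = [1.0, 2.0]  # a second vector for the inner product (assumed)

one_norm = sum(abs(xi) for xi in x)            # ||x||_1 = |3| + |-4| = 7
two_norm = sum(xi * xi for xi in x) ** 0.5     # ||x||_2 = sqrt(9 + 16) = 5
inf_norm = max(abs(xi) for xi in x)            # ||x||_inf = max(3, 4) = 4
inner = sum(a * b for a, b in zip(x, z))       # <x, z> = 3*1 + (-4)*2 = -5
```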

4 Definition of Margin  The (functional) margin of a training point $(x_i, y_i)$ with respect to a hyperplane $(w, b)$ is the quantity $\gamma_i = y_i(\langle w, x_i \rangle + b)$, where $w$ is called the weight vector and $b$ is called the bias.  Note: $\gamma_i > 0$ implies that $(w, b)$ classifies the point correctly.
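The margin definition can be sketched as a one-line function; the hyperplane and the two test points below are made-up illustrations, not from the slides.

```python
def functional_margin(w, b, x, y):
    """Return y * (<w, x> + b); positive means (w, b) classifies (x, y) correctly."""
    return y * (sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy hyperplane x1 + x2 - 1 = 0 (assumed for illustration).
w, b = [1.0, 1.0], -1.0
m_good = functional_margin(w, b, [2.0, 2.0], +1)  # 1 * (2 + 2 - 1) = 3, correct
m_bad = functional_margin(w, b, [0.0, 0.0], +1)   # 1 * (0 + 0 - 1) = -1, misclassified
```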

5 The Perceptron Algorithm  Given a linearly separable training set $S$ and learning rate $\eta \in \mathbb{R}^+$, set the initial weight vector, bias, and counter to $w_0 = \mathbf{0}$, $b_0 = 0$, $k = 0$, and let $R = \max_{1 \le i \le \ell} \|x_i\|$.

6 The Perceptron Algorithm  Repeat: for $i = 1$ to $\ell$: if $y_i(\langle w_k, x_i \rangle + b_k) \le 0$ then $w_{k+1} = w_k + \eta y_i x_i$, $b_{k+1} = b_k + \eta y_i R^2$, $k = k + 1$; until no mistakes are made within the for loop.  Return $k, (w_k, b_k)$.  What is $k$?  $k$ is the number of updates, i.e. the number of mistakes made before convergence.
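A minimal sketch of the update loop above, assuming $\eta = 1$ and a tiny hand-made linearly separable data set; the data and variable names are illustrative assumptions, not from the slides.

```python
def perceptron(S, eta=1.0):
    """On-line perceptron: update (w, b) on each mistake until a clean pass."""
    n = len(S[0][0])
    w, b, k = [0.0] * n, 0.0, 0
    R = max(sum(xi * xi for xi in x) ** 0.5 for x, _ in S)  # R = max ||x_i||
    mistake = True
    while mistake:                       # repeat ...
        mistake = False
        for x, y in S:                   # ... for i = 1 to l
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y * R * R     # bias update uses R^2
                k += 1                   # k counts the mistakes
                mistake = True
    return w, b, k

# Toy separable set (assumed): positives in the upper-right, negatives opposite.
S = [([2.0, 2.0], +1), ([1.0, 3.0], +1), ([-1.0, -2.0], -1), ([-2.0, -1.0], -1)]
w, b, k = perceptron(S)
```

On this set the loop stops after two updates, and every point ends up on the correct side of the returned hyperplane.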

7 The Perceptron Algorithm (STOP in Finite Steps)  Theorem 2.3 (Novikoff): Let $S$ be a non-trivial training set, and let $R = \max_{1 \le i \le \ell} \|x_i\|$. Suppose that there exists a vector $w_{\mathrm{opt}}$ with $\|w_{\mathrm{opt}}\| = 1$ and a margin $\gamma > 0$ such that $y_i(\langle w_{\mathrm{opt}}, x_i \rangle + b_{\mathrm{opt}}) \ge \gamma$ for $1 \le i \le \ell$. Then the number of mistakes made by the on-line perceptron algorithm on $S$ is at most $\left(\frac{2R}{\gamma}\right)^2$.
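Novikoff's bound $(2R/\gamma)^2$ can be evaluated numerically; the toy data set, the unit-norm separator $w_{\mathrm{opt}}$ (picked by inspection, bias $0$), and the resulting $\gamma$ below are all assumptions for illustration, not from the slides.

```python
# Toy separable set (assumed, same shape as a small 2-D example).
S = [([2.0, 2.0], +1), ([1.0, 3.0], +1), ([-1.0, -2.0], -1), ([-2.0, -1.0], -1)]

R = max(sum(xi * xi for xi in x) ** 0.5 for x, _ in S)  # R = max ||x_i||

# A unit-norm separating vector, chosen by inspection (bias 0).
norm = (1.0 + 1.0) ** 0.5
w_opt = [1.0 / norm, 1.0 / norm]

# gamma = smallest functional margin of w_opt over the training set.
gamma = min(y * sum(wi * xi for wi, xi in zip(w_opt, x)) for x, y in S)

bound = (2 * R / gamma) ** 2  # Novikoff: at most this many mistakes
```

Here $\gamma = 3/\sqrt{2}$ and $R = \sqrt{10}$, so the bound is $80/9 \approx 8.9$ mistakes, comfortably above the handful of updates the perceptron actually needs on such an easy set.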
