Perceptron Learning Demonstration

Perceptron Learning Demonstration Langston, Cognitive Psychology

Perceptron Learning How does a perceptron acquire its knowledge? The question really is: How does a perceptron learn the appropriate weights?

Perceptron Learning Remember our features: Taste (Sweet = 1, Not_Sweet = 0), Seeds (Edible = 1, Not_Edible = 0), and Skin (Edible = 1, Not_Edible = 0). For output: Good_Fruit = 1, Not_Good_Fruit = 0.

Perceptron Learning Let’s start with no knowledge: the weights are all empty. [Diagram: the inputs Taste, Seeds, and Skin each connect to the output node with weight 0.0; the output node fires if ∑ > 0.4.]

Perceptron Learning To train the perceptron, we will show it each example and have it categorize each one. Since it’s starting with no knowledge, it is going to make mistakes. When it makes a mistake, we are going to adjust the weights to make that mistake less likely in the future.

Perceptron Learning When we adjust the weights, we’re going to take relatively small steps to be sure we don’t over-correct and create new problems.

Perceptron Learning I’m going to teach it the category “good fruit,” defined as anything that is sweet. Good fruit = 1. Not good fruit = 0.

Perceptron Learning Problem space:

Fruit         Taste   Seeds   Skin   Good fruit?
Banana          1       1      0         1
Pear            1       0      1         1
Lemon           0       0      0         0
Strawberry      1       1      1         1
Green apple     0       0      1         0
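
(Not from the slides, but for readers who want to follow along in code: here is the same problem space as a small Python data set. The name `examples` and the tuple layout are my own illustration.)

```python
# Problem space from the table above: (name, (taste, seeds, skin), good_fruit)
examples = [
    ("banana",      (1, 1, 0), 1),
    ("pear",        (1, 0, 1), 1),
    ("lemon",       (0, 0, 0), 0),
    ("strawberry",  (1, 1, 1), 1),
    ("green apple", (0, 0, 1), 0),
]
```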

Perceptron Learning Show it a banana: Taste = 1, Seeds = 1, Skin = 0. All three weights are still 0.0, and the output node fires if ∑ > 0.4. The teacher says the correct answer is 1.

Perceptron Learning In this case we have: (1 × 0) + (1 × 0) + (0 × 0) = 0. It adds up to 0.0. Since that is less than the threshold (0.40), we responded “no.” Is that correct? No.
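
(As a sketch, the same computation in Python, assuming the zero weights and the 0.4 threshold from the slides:)

```python
# Feed-forward step for the banana, with all weights still at 0.0.
weights = [0.0, 0.0, 0.0]
banana = (1, 1, 0)                                   # taste, seeds, skin
total = sum(w * x for w, x in zip(weights, banana))  # (1*0) + (1*0) + (0*0) = 0.0
output = 1 if total > 0.4 else 0                     # 0.0 is below the threshold, so "no"
```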

Perceptron Learning Since we got it wrong, we know we need to change the weights. We’ll do that using the delta rule (delta for change): ∆w = learning rate × (overall teacher − overall output) × node output.

Perceptron Learning The three parts of that are: Learning rate: We set that ourselves. I want it to be large enough that learning happens in a reasonable amount of time, but small enough that we don’t overshoot and create new problems. I’m picking 0.25. (Overall teacher − overall output): The teacher knows the correct answer (e.g., that a banana should be a good fruit). In this case, the teacher says 1, the output is 0, so (1 − 0) = 1. Node output: That’s what came out of the node whose weight we’re adjusting. For the first node, that’s 1.

Perceptron Learning To pull it together: Learning rate: 0.25. (Overall teacher − overall output): 1. Node output: 1. ∆w = 0.25 × 1 × 1 = 0.25. Since it’s a ∆w, it’s telling us how much to change the first weight. In this case, we’re adding 0.25 to it.
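
(The delta rule is one line of code. This sketch, with a hypothetical helper name `delta_w`, just restates the formula from the slides:)

```python
def delta_w(learning_rate, teacher, output, node_output):
    """Delta rule: how much to change one weight after one example."""
    return learning_rate * (teacher - output) * node_output

delta_w(0.25, 1, 0, 1)  # banana, first weight: returns +0.25
```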

Perceptron Learning Let’s think about the delta rule: (overall teacher − overall output): If we get the categorization right, (overall teacher − overall output) will be zero (the right answer minus itself). In other words, if we get it right, we won’t change any of the weights. As far as we know we have a good solution, so why would we change it?

Perceptron Learning Let’s think about the delta rule: (overall teacher − overall output): If we get the categorization wrong, (overall teacher − overall output) will be either −1 or +1. If we said “yes” when the answer was “no,” the weights are too high, and we will get a (teacher − output) of −1, which will reduce the weights. If we said “no” when the answer was “yes,” the weights are too low, and this will increase them.
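
(For example, using the `delta_w` sketch above: if the teacher says 0 but we output 1, `delta_w(0.25, 0, 1, 1)` returns −0.25, reducing that weight; if the teacher says 1 and we output 0, it returns +0.25, increasing it.)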

Perceptron Learning Let’s think about the delta rule: Node output: If the node whose weight we’re adjusting sent in a 0, then it didn’t participate in making the decision. In that case, it shouldn’t be adjusted. Multiplying by zero will make that happen. If the node whose weight we’re adjusting sent in a 1, then it did participate and we should change the weight (up or down as needed).

Perceptron Learning How do we change the weights for banana?

Feature   Learning rate   (teacher − output)   Node output    ∆w
taste          0.25               1                 1        +0.25
seeds          0.25               1                 1        +0.25
skin           0.25               1                 0         0.00
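
(Using the `delta_w` sketch from earlier, the whole row of changes for the banana is:)

```python
# Banana: teacher = 1, output = 0, node outputs (taste, seeds, skin) = (1, 1, 0)
[delta_w(0.25, 1, 0, x) for x in (1, 1, 0)]  # [0.25, 0.25, 0.0]
```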

Perceptron Learning Adjusting weight 1 (taste): 0.25 × (1 − 0) × 1 = +0.25.

Perceptron Learning Corrected weight 1: the taste weight is now 0.25 (seeds 0.0, skin 0.0; fire if ∑ > 0.4).

Perceptron Learning Adjusting weight 2 (seeds): 0.25 × (1 − 0) × 1 = +0.25.

Perceptron Learning Corrected weight 2: the seeds weight is now 0.25 (taste 0.25, skin 0.0).

Perceptron Learning Adjusting weight 3 (skin): 0.25 × (1 − 0) × 0 = 0.00.

Perceptron Learning Corrected weight 3: the skin weight stays 0.0 (taste 0.25, seeds 0.25).

Perceptron Learning To continue training, we show it the next example, adjust the weights… We will keep cycling through the examples until we go all the way through one time without making any changes to the weights. At that point, the concept is learned.
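
(Here is one way that whole procedure might look in Python; a minimal sketch assuming the `examples` list and delta rule from earlier, not the original demonstration code:)

```python
def train(examples, learning_rate=0.25, threshold=0.4):
    """Cycle through the examples, adjusting weights with the delta rule,
    until one full pass makes no changes."""
    weights = [0.0, 0.0, 0.0]          # start with no knowledge
    changed = True
    while changed:
        changed = False
        for name, inputs, teacher in examples:
            total = sum(w * x for w, x in zip(weights, inputs))
            output = 1 if total > threshold else 0
            if output != teacher:      # a mistake: apply the delta rule to each weight
                for i, x in enumerate(inputs):
                    weights[i] += learning_rate * (teacher - output) * x
                changed = True
    return weights
```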

Perceptron Learning Show it a pear: Taste = 1, Seeds = 0, Skin = 1. With weights taste 0.25, seeds 0.25, skin 0.0, the sum is 0.25, below the threshold, so the output is 0. The teacher says 1, so we got it wrong again.

Perceptron Learning How do we change the weights for pear?

Feature   Learning rate   (teacher − output)   Node output    ∆w
taste          0.25               1                 1        +0.25
seeds          0.25               1                 0         0.00
skin           0.25               1                 1        +0.25

Perceptron Learning Adjusting weight 1 (taste): 0.25 × (1 − 0) × 1 = +0.25.

Perceptron Learning Corrected weight 1: the taste weight is now 0.50 (seeds 0.25, skin 0.0).

Perceptron Learning Adjusting weight 2 (seeds): 0.25 × (1 − 0) × 0 = 0.00.

Perceptron Learning Corrected weight 2: the seeds weight stays 0.25 (taste 0.50, skin 0.0).

Perceptron Learning Adjusting weight 3 (skin): 0.25 × (1 − 0) × 1 = +0.25.

Perceptron Learning Corrected weight 3: the skin weight is now 0.25 (taste 0.50, seeds 0.25).

Perceptron Learning Here it is with the adjusted weights: taste 0.50, seeds 0.25, skin 0.25; the output node fires if ∑ > 0.4.

Perceptron Learning Show it a lemon: Taste = 0, Seeds = 0, Skin = 0. The sum is 0.0, below the threshold, so the output is 0. The teacher also says 0, so the answer is correct.

Perceptron Learning How do we change the weights for lemon? We don’t: (teacher − output) is 0, so every ∆w is 0.

Feature   Learning rate   (teacher − output)   Node output    ∆w
taste          0.25               0                 0         0.00
seeds          0.25               0                 0         0.00
skin           0.25               0                 0         0.00

Perceptron Learning Here it is with the (unchanged) weights: taste 0.50, seeds 0.25, skin 0.25.

Perceptron Learning Show it a strawberry: Taste = 1, Seeds = 1, Skin = 1. The sum is 0.50 + 0.25 + 0.25 = 1.0, above the threshold, so the output is 1. The teacher says 1, so the answer is correct.

Perceptron Learning How do we change the weights for strawberry? Again, we don’t: (teacher − output) is 0.

Feature   Learning rate   (teacher − output)   Node output    ∆w
taste          0.25               0                 1         0.00
seeds          0.25               0                 1         0.00
skin           0.25               0                 1         0.00

Perceptron Learning Here it is with the (still unchanged) weights: taste 0.50, seeds 0.25, skin 0.25.

Perceptron Learning Show it a green apple: Taste = 0, Seeds = 0, Skin = 1. The sum is 0.25, below the threshold, so the output is 0. The teacher says 0, so the answer is correct.

Perceptron Learning If you keep going, you will see that a full pass through the examples now produces no weight changes: this perceptron correctly classifies every example we have, so the concept is learned.
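
(Running the training sketch from earlier reproduces this walkthrough: only the banana and pear trials change any weights, one more pass through all five examples changes nothing, and training stops at taste = 0.50, seeds = 0.25, skin = 0.25.)

```python
print(train(examples))  # [0.5, 0.25, 0.25]
```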

End Perceptron Learning Demonstration