Learning: Neural Networks
Artificial Intelligence, CMSC 25000
February 3, 2005

Roadmap
Neural Networks
– Motivation: Overcoming perceptron limitations
– Motivation: ALVINN
– Heuristic Training
    Backpropagation; Gradient descent
    Avoiding overfitting
    Avoiding local minima
– Conclusion: Teaching a Net to talk

Perceptron Summary
Motivated by neuron activation
Simple training procedure
Guaranteed to converge
– IF linearly separable

Neural Nets
Multi-layer perceptrons
– Inputs: real-valued
– Intermediate "hidden" nodes
– Output(s): one (or more) discrete-valued
[Figure: network with inputs X1, X2, X3, X4, a layer of hidden nodes, and outputs Y1, Y2]

Neural Nets
Pro: More general than perceptrons
– Not restricted to linear discriminants
– Multiple outputs: one classification each
Con: No simple, guaranteed training procedure
– Use greedy, hill-climbing procedure to train
– "Gradient descent", "Backpropagation"

Solving the XOR Problem
[Figure: two-layer network with inputs x1, x2, hidden nodes o1, o2, and output y; weights w11, w21, w12, w22 into the hidden nodes, w13, w23 into the output, and thresholds w01, w02, w03]
Network Topology: 2 hidden nodes, 1 output
Desired behavior:
    x1 x2 o1 o2 y
    0  0  0  0  0
    0  1  0  1  1
    1  0  0  1  1
    1  1  1  1  0
Weights:
    w11 = w12 = 1; w21 = w22 = 1
    w01 = 3/2; w02 = 1/2; w03 = 1/2
    w13 = -1; w23 = 1
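A quick way to check the behavior in the table is to run the network with a hard threshold. The sketch below is not from the lecture; it assumes the thresholds w01, w02, w03 are subtracted from the weighted sums, which is the reading under which the weights above reproduce XOR.

```python
# Verify the 2-2-1 XOR network from the slide with a hard (step) threshold.
# Assumption: thresholds w01, w02, w03 are subtracted from each node's weighted sum.

def step(a):
    return 1 if a >= 0 else 0

def xor_net(x1, x2):
    o1 = step(1 * x1 + 1 * x2 - 1.5)   # hidden node o1: acts as AND (w11 = w21 = 1, threshold w01 = 3/2)
    o2 = step(1 * x1 + 1 * x2 - 0.5)   # hidden node o2: acts as OR  (w12 = w22 = 1, threshold w02 = 1/2)
    y  = step(-1 * o1 + 1 * o2 - 0.5)  # output: OR and not AND (w13 = -1, w23 = 1, threshold w03 = 1/2)
    return o1, o2, y

for x1 in (0, 1):
    for x2 in (0, 1):
        o1, o2, y = xor_net(x1, x2)
        print(x1, x2, o1, o2, y)       # the y column reproduces XOR
```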

Neural Net Applications
Speech recognition
Handwriting recognition
NETtalk: Letter-to-sound rules
ALVINN: Autonomous driving

ALVINN
Driving as a neural network
Inputs:
– Image pixel intensities (i.e., lane lines)
5 hidden nodes
Outputs:
– Steering actions (e.g., turn left/right; how far)
Training:
– Observe human behavior: sample images, steering

Backpropagation
Greedy, hill-climbing procedure
– Weights are parameters to change
– Original hill-climb changes one parameter per step
    Slow
– If smooth function, change all parameters per step
    Gradient descent
– Backpropagation: Computes current output, works backward to correct error

Producing a Smooth Function
Key problem:
– Pure step threshold is discontinuous
    Not differentiable
Solution:
– Sigmoid (squashed 's' function): logistic function
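The logistic function itself appeared only as a figure on this slide; its standard form, and the derivative that the later backpropagation slides rely on, are:

```latex
s(z) = \frac{1}{1 + e^{-z}},
\qquad
\frac{ds(z)}{dz} = s(z)\bigl(1 - s(z)\bigr)
```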

Neural Net Training
Goal:
– Determine how to change weights to get correct output
    Large change in weight to produce large reduction in error
Approach:
– Compute actual output: o
– Compare to desired output: d
– Determine effect of each weight w on error = d - o
– Adjust weights

Neural Net Example
[Figure: 2-2-1 network with inputs x1, x2; hidden nodes z1, z2 with outputs y1, y2; output node z3 with output y3; weights w11, w21, w12, w22, w13, w23 and bias/threshold weights w01, w02, w03]
xi: ith sample input vector
w: weight vector
yi*: desired output for ith sample
Sum-of-squares error over training samples
Full expression of output in terms of input and weights
(From notes by Lozano-Perez)
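The error and output formulas on this slide were images. A plausible reconstruction for the 2-2-1 network shown, assuming each node applies the sigmoid s to a weighted sum plus a bias term (the sign convention for w01, w02, w03 is an assumption):

```latex
E(w) = \sum_i \bigl( y_i^{*} - y_3(x_i, w) \bigr)^2,
\qquad
y_3 = s(z_3),\quad
z_3 = w_{03} + w_{13}\, s(z_1) + w_{23}\, s(z_2),
\]
\[
z_1 = w_{01} + w_{11} x_1 + w_{21} x_2,
\qquad
z_2 = w_{02} + w_{12} x_1 + w_{22} x_2
```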

Gradient Descent
Error: Sum-of-squares error of inputs with current weights
Compute rate of change of error with respect to each weight
– Which weights have greatest effect on error?
– Effectively, partial derivatives of error with respect to weights
    In turn, depend on other weights => chain rule
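As one concrete instance of the chain rule (a reconstruction using the network and notation of the previous slide, for a single training sample), the gradient with respect to the output weight w13 is:

```latex
\frac{\partial E}{\partial w_{13}}
= \frac{\partial E}{\partial y_3}\,
  \frac{\partial y_3}{\partial z_3}\,
  \frac{\partial z_3}{\partial w_{13}}
= -2\,(y^{*} - y_3)\; y_3 (1 - y_3)\; y_1
```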

Gradient Descent
E = G(w)
– Error as a function of weights
Find rate of change of error
– Follow steepest rate of change
– Change weights so that error is minimized
[Figure: error curve G(w) plotted over weight w, showing the slope dG/dw, local minima, and points w0, w1]
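In symbols (the slide showed only the plot), each gradient descent step moves every weight against its error gradient, scaled by the rate parameter r:

```latex
w_{ij} \leftarrow w_{ij} - r\, \frac{\partial E}{\partial w_{ij}}
```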

Gradient of Error
[Figure: the same 2-2-1 network, with weighted sums z1, z2, z3 and sigmoid outputs y1, y2, y3]
Note: Derivative of the sigmoid: ds(z1)/dz1 = s(z1)(1 - s(z1))
(MIT AI lecture notes, Lozano-Perez 2000)

From Effect to Update
Gradient computation:
– How each weight contributes to performance
To train:
– Need to determine how to CHANGE a weight based on its contribution to performance
– Need to determine how MUCH change to make per iteration
    Rate parameter 'r'
    – Large enough to learn quickly
    – Small enough to reach, but not overshoot, target values

Backpropagation Procedure
Pick rate parameter 'r'
Until performance is good enough:
– Do forward computation to calculate output
– Compute Beta in the output node
– Compute Beta in all other nodes
– Compute the change for all weights
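The three update formulas on this slide were images. In one standard formulation of these Beta updates (a reconstruction, so the notation may not match the original slide exactly), with o_j the output of node j, d_z the desired output of output node z, and r the rate parameter:

```latex
\beta_z = d_z - o_z \quad \text{(output node } z\text{)},
\qquad
\beta_j = \sum_{k} w_{j \to k}\, o_k (1 - o_k)\, \beta_k,
\qquad
\Delta w_{i \to j} = r\, o_i\, o_j (1 - o_j)\, \beta_j
```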

Backprop Example
[Figure: the 2-2-1 network again, with inputs x1, x2, hidden nodes z1, z2 (outputs y1, y2), output node z3 (output y3), weights w11, w21, w12, w22, w13, w23 and biases w01, w02, w03]
Forward prop: Compute zi and yi given xk, wl
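A minimal runnable sketch of this example (not the lecture's code): one forward pass and one backpropagation update on the 2-2-1 sigmoid network, using the Beta formulation above. The starting weights, the rate r, and the training sample (x1, x2, d) are illustrative assumptions.

```python
import math

def s(z):                               # logistic sigmoid
    return 1.0 / (1.0 + math.exp(-z))

# Weights, named as on the slide: biases w0j plus connection weights (assumed values)
w01, w11, w21 = 0.1, 0.2, -0.1          # into hidden node 1
w02, w12, w22 = -0.2, 0.1, 0.3          # into hidden node 2
w03, w13, w23 = 0.05, 0.4, -0.3         # into output node 3
r = 0.5                                 # rate parameter (assumed)

x1, x2, d = 1.0, 0.0, 1.0               # one training sample with desired output d (assumed)

# Forward computation: z_i are weighted sums, y_i = s(z_i)
y1 = s(w01 + w11 * x1 + w21 * x2)
y2 = s(w02 + w12 * x1 + w22 * x2)
y3 = s(w03 + w13 * y1 + w23 * y2)

# Betas (error terms), following the formulation above
beta3 = d - y3
beta1 = w13 * y3 * (1 - y3) * beta3
beta2 = w23 * y3 * (1 - y3) * beta3

# Weight changes: delta w_{i->j} = r * o_i * o_j * (1 - o_j) * beta_j
# (the bias of each node is treated as a weight from a constant input of 1)
w13 += r * y1 * y3 * (1 - y3) * beta3
w23 += r * y2 * y3 * (1 - y3) * beta3
w03 += r * 1.0 * y3 * (1 - y3) * beta3
w11 += r * x1 * y1 * (1 - y1) * beta1
w21 += r * x2 * y1 * (1 - y1) * beta1
w01 += r * 1.0 * y1 * (1 - y1) * beta1
w12 += r * x1 * y2 * (1 - y2) * beta2
w22 += r * x2 * y2 * (1 - y2) * beta2
w02 += r * 1.0 * y2 * (1 - y2) * beta2

print("output before update:", y3)
```

Repeating the forward pass and update over many epochs is what drives y3 toward the desired outputs in the training set.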

Backpropagation Observations
Procedure is (relatively) efficient
– All computations are local
    Use inputs and outputs of current node
What is "good enough"?
– Rarely reach target (0 or 1) outputs
    Typically, train until within 0.1 of target

Neural Net Summary
Training:
– Backpropagation procedure
    Gradient descent strategy (usual problems)
Prediction:
– Compute outputs based on input vector & weights
Pros: Very general; fast prediction
Cons: Training can be VERY slow (1000s of epochs); overfitting

Training Strategies
Online training:
– Update weights after each sample
Offline (batch) training:
– Compute error over all samples
    Then update weights
Online training is "noisy"
– Sensitive to individual instances
– However, may escape local minima
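A minimal sketch contrasting the two schedules on a single sigmoid unit (not the lecture's code; the toy samples, starting weights, and rate r are illustrative assumptions):

```python
import math

def s(z):
    return 1.0 / (1.0 + math.exp(-z))

samples = [((0.0, 1.0), 1.0), ((1.0, 0.0), 0.0), ((1.0, 1.0), 1.0)]  # assumed toy data
r = 0.5                                                              # assumed rate

def error_term(w, x, d):
    """Per-weight update direction for one sample: (d - o) * o * (1 - o) * input."""
    o = s(w[0] + w[1] * x[0] + w[2] * x[1])
    b = (d - o) * o * (1 - o)
    return [b * 1.0, b * x[0], b * x[1]]

# Online training: update the weights after every sample ("noisy")
w = [0.0, 0.0, 0.0]
for x, d in samples:
    g = error_term(w, x, d)
    w = [wi + r * gi for wi, gi in zip(w, g)]

# Offline (batch) training: accumulate over all samples, then update once per pass
w = [0.0, 0.0, 0.0]
total = [0.0, 0.0, 0.0]
for x, d in samples:
    g = error_term(w, x, d)
    total = [ti + gi for ti, gi in zip(total, g)]
w = [wi + r * ti for wi, ti in zip(w, total)]

print("weights after one batch pass:", w)
```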

Training Strategy
To avoid overfitting:
– Split data into: training, validation, & test
    Also, avoid excess weights (fewer weights than samples)
Initialize with small random weights
– Small changes have noticeable effect
Use offline training
– Until validation set error reaches its minimum
Evaluate on test set
– No more weight changes

Classification
Neural networks are best for classification tasks
– Single output -> binary classifier
– Multiple outputs -> multiway classification
Applied successfully to learning pronunciation
– Sigmoid pushes outputs toward binary classification
Not good for regression

Neural Net Example
NETtalk: Letter-to-sound by net
Inputs:
– Need context to pronounce
    7-letter window: predict sound of middle letter
    29 possible characters (alphabet + space + comma + period)
– 7 * 29 = 203 inputs
80 hidden nodes
Output: Generate 60 phones
– Nodes map to 26 units: 21 articulatory, 5 stress/syllable boundary
    Vector quantization of acoustic space
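A sketch of the input encoding described above: a 7-letter window, each position one-hot over the 29 characters, giving 7 * 29 = 203 inputs. The particular character ordering is an assumption, not taken from NETtalk.

```python
# One-hot encoding of a 7-character window into a 203-element input vector.
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."      # 26 letters + space + comma + period = 29
assert len(ALPHABET) == 29

def encode_window(window):
    """Encode a 7-character window; the network predicts the sound of the middle letter."""
    assert len(window) == 7
    vec = []
    for ch in window:
        one_hot = [0] * len(ALPHABET)
        one_hot[ALPHABET.index(ch)] = 1
        vec.extend(one_hot)
    return vec

x = encode_window(" hello ")                    # middle letter is 'l'
print(len(x))                                   # 203
```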

Neural Net Example: NETtalk
Learning to talk:
– 5 iterations / 1024 training words: boundaries/stress
– 10 iterations: intelligible
– 400 new test words: 80% correct
Not as good as DecTalk, but automatic

Neural Net Conclusions
Simulation based on neurons in the brain
Perceptrons (single neuron)
– Guaranteed to find a linear discriminant IF one exists -> problem: XOR
Neural nets (multi-layer perceptrons)
– Very general
– Backpropagation training procedure
    Gradient descent: local minima, overfitting issues