Seminar on Machine Learning Rada Mihalcea


Seminar on Machine Learning, Rada Mihalcea. Artificial Neural Networks. On-Line Handwriting Recognition: The NPen++ Recognizer. Feb 17, 2004

Last time: instance-based learning (also called memory-based or case-based learning); lazy vs. eager learners.

Outline Perceptrons Gradient descent Multi-layer networks Backpropagation

Biological Neural Systems Neuron switching time: on the order of $10^{-3}$ seconds. Number of neurons in the human brain: ~$10^{10}$. Connections (synapses) per neuron: ~$10^4$ to $10^5$. Face recognition: ~0.1 seconds. High degree of parallel computation. Distributed representations.

Properties of Artificial Neural Nets (ANNs) Many simple neuron-like threshold switching units. Many weighted interconnections among units. Highly parallel, distributed processing. Learning by tuning the connection weights. ANNs are motivated by biological neural systems, but are not as complex as biological systems.

Appropriate Problem Domains for Neural Network Learning Input is high-dimensional, discrete or real-valued. Output is discrete or real-valued, possibly a vector of values. The form of the target function is unknown. Humans do not need to interpret the results (black-box model). Training examples may contain errors (ANNs are robust to errors).

ALVINN Drives at 70 mph on a public highway. Inputs: a 30x32-pixel camera image. Architecture: 30x32 weights into each of 4 hidden units, and 30 output units for steering.

Perceptron A perceptron is a linear threshold unit (LTU). With a constant input $x_0 = 1$, it computes $net = \sum_{i=0}^{n} w_i x_i$ and outputs

$o(x_1, \ldots, x_n) = \begin{cases} 1 & \text{if } \sum_{i=0}^{n} w_i x_i > 0 \\ -1 & \text{otherwise} \end{cases}$
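As a concrete illustration (not from the original slides), here is a minimal Python sketch of the LTU's forward computation; the function name and data layout are assumptions for this example:

```python
# Minimal sketch of a linear threshold unit (LTU); illustrative only.

def ltu_output(weights, x):
    """Perceptron output: 1 if sum_{i=0..n} w_i x_i > 0, else -1.

    weights has n+1 entries (weights[0] is w0, the bias weight);
    x has n entries, and the constant input x0 = 1 is prepended here.
    """
    net = sum(w * xi for w, xi in zip(weights, [1.0] + list(x)))
    return 1 if net > 0 else -1
```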

Decision Surface of a Perceptron [Figure: in the $(x_1, x_2)$ plane, a line separates the + examples from the - examples; for XOR, the + and - examples cannot be separated by any single line.] A perceptron can represent some useful functions: AND($x_1$, $x_2$) is obtained by choosing weights $w_0 = -1.5$, $w_1 = 1$, $w_2 = 1$. But functions that are not linearly separable (e.g. XOR) are not representable.
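The AND weights above can be checked directly; a tiny sketch, assuming inputs in {0, 1} and the output encoded as +1/-1:

```python
# Check that w0 = -1.5, w1 = 1, w2 = 1 computes AND for inputs in {0, 1}.
w = [-1.5, 1.0, 1.0]

for x1 in (0, 1):
    for x2 in (0, 1):
        net = w[0] * 1 + w[1] * x1 + w[2] * x2   # x0 = 1 is the constant input
        o = 1 if net > 0 else -1
        print((x1, x2), "->", o)                 # +1 only for (1, 1)
```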

Perceptron Learning Rule $w_i \leftarrow w_i + \Delta w_i$, where $\Delta w_i = \eta (t - o) x_i$; here $t = c(x)$ is the target value, $o$ is the perceptron output, and $\eta$ is a small constant (e.g. 0.1) called the learning rate. Start with some random weights (usually small values). If the output is correct ($t = o$), the weights $w_i$ are not changed. If the output is incorrect ($t \neq o$), the weights $w_i$ are changed so that the perceptron's output on this example moves closer to $t$. The algorithm converges to a correct classification if the training data is linearly separable and $\eta$ is sufficiently small.
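A minimal sketch of this rule as a training loop; the data format, epoch limit, and initialization range are illustrative assumptions:

```python
# Sketch of the perceptron learning rule; illustrative, not a reference implementation.
import random

def train_perceptron(examples, eta=0.1, max_epochs=100):
    """examples: list of (x, t) pairs, x a tuple of n inputs, t in {-1, +1}."""
    n = len(examples[0][0])
    w = [random.uniform(-0.05, 0.05) for _ in range(n + 1)]  # small random init
    for _ in range(max_epochs):
        converged = True
        for x, t in examples:
            xs = [1.0] + list(x)                             # prepend x0 = 1
            o = 1 if sum(wi * xi for wi, xi in zip(w, xs)) > 0 else -1
            if o != t:                                       # (t - o) = 0 when correct
                converged = False
                for i in range(n + 1):
                    w[i] += eta * (t - o) * xs[i]            # w_i <- w_i + eta (t - o) x_i
        if converged:                                        # every example classified correctly
            break
    return w
```

For instance, `train_perceptron([((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)])` learns a weight vector implementing AND.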

Gradient Descent Learning Rule The perceptron learning rule fails to converge if the examples are not linearly separable. Gradient descent: consider a linear unit without a threshold and with continuous output $o$ (not just -1, 1): $o = w_0 + w_1 x_1 + \cdots + w_n x_n$. Train the $w_i$ to minimize the squared error $E[\vec{w}] = \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2$, where $D$ is the set of training examples.

Gradient Descent Example training set: $D = \{\langle(1,1),1\rangle, \langle(-1,-1),1\rangle, \langle(1,-1),-1\rangle, \langle(-1,1),-1\rangle\}$. [Figure: the error surface over weight space; each gradient step moves the weights from $(w_1, w_2)$ to $(w_1 + \Delta w_1, w_2 + \Delta w_2)$, downhill on the surface.]

Gradient Descent The gradient is $\nabla E[\vec{w}] = [\partial E/\partial w_0, \ldots, \partial E/\partial w_n]$, and the training rule is $\Delta \vec{w} = -\eta \nabla E[\vec{w}]$, i.e. $\Delta w_i = -\eta\, \partial E/\partial w_i$. Deriving the component form:

$\frac{\partial E}{\partial w_i} = \frac{\partial}{\partial w_i} \frac{1}{2} \sum_d (t_d - o_d)^2 = \frac{\partial}{\partial w_i} \frac{1}{2} \sum_d \Big(t_d - \sum_i w_i x_{i,d}\Big)^2 = \sum_d (t_d - o_d)(-x_{i,d})$

so $\Delta w_i = \eta \sum_d (t_d - o_d)\, x_{i,d}$.

Gradient Descent Gradient-Descent(training_examples, $\eta$): each training example is a pair $\langle (x_1, \ldots, x_n), t \rangle$, where $(x_1, \ldots, x_n)$ is the vector of input values, $t$ is the target output value, and $\eta$ is the learning rate (e.g. 0.1).
- Initialize each $w_i$ to some small random value.
- Until the termination condition is met (e.g. the error falls below a given threshold), do:
  - Initialize each $\Delta w_i$ to zero.
  - For each $\langle (x_1, \ldots, x_n), t \rangle$ in training_examples: input the instance to the linear unit, compute the output $o$, and for each weight set $\Delta w_i \leftarrow \Delta w_i + \eta (t - o) x_i$.
  - For each weight, set $w_i \leftarrow w_i + \Delta w_i$.
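A runnable sketch of this pseudocode; the learning rate, error threshold, and data format are illustrative assumptions:

```python
# Batch gradient descent for a linear unit (the delta rule), following the
# pseudocode above; illustrative sketch only.
import random

def gradient_descent(examples, eta=0.05, threshold=0.01, max_epochs=10000):
    """examples: list of (x, t); returns weights [w0, ..., wn] trained to
    reduce E = 1/2 * sum_d (t_d - o_d)^2 for the unthresholded linear unit."""
    n = len(examples[0][0])
    w = [random.uniform(-0.05, 0.05) for _ in range(n + 1)]
    for _ in range(max_epochs):
        delta = [0.0] * (n + 1)                        # initialize each dw_i to zero
        error = 0.0
        for x, t in examples:
            xs = [1.0] + list(x)                       # x0 = 1
            o = sum(wi * xi for wi, xi in zip(w, xs))  # linear output, no threshold
            error += 0.5 * (t - o) ** 2
            for i in range(n + 1):
                delta[i] += eta * (t - o) * xs[i]      # accumulate dw_i over the pass
        w = [wi + dwi for wi, dwi in zip(w, delta)]    # apply the batch update once
        if error < threshold:                          # termination condition
            break
    return w
```

Note that the updates are accumulated over the whole pass and applied once per epoch; that is exactly what distinguishes batch mode from the incremental mode on the next slide.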

Incremental Stochastic Gradient Descent Batch mode: gradient descent over the entire data $D$: $\vec{w} \leftarrow \vec{w} - \eta \nabla E_D[\vec{w}]$, with $E_D[\vec{w}] = \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2$. Incremental mode: gradient descent over individual training examples $d$: $\vec{w} \leftarrow \vec{w} - \eta \nabla E_d[\vec{w}]$, with $E_d[\vec{w}] = \frac{1}{2} (t_d - o_d)^2$. Incremental gradient descent can approximate batch gradient descent arbitrarily closely if $\eta$ is small enough.
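The only change for incremental (stochastic) mode is that each weight update is applied immediately after the example is processed instead of being accumulated over the whole pass; a sketch of one such pass, under the same assumed data format as above:

```python
# Incremental (stochastic) mode: one pass that updates w after every example.
def sgd_epoch(w, examples, eta=0.05):
    """w: weights [w0, ..., wn]; examples: list of (x, t) with len(x) == n."""
    for x, t in examples:
        xs = [1.0] + list(x)                           # x0 = 1
        o = sum(wi * xi for wi, xi in zip(w, xs))
        for i in range(len(w)):
            w[i] += eta * (t - o) * xs[i]              # immediate per-example update
    return w
```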

Perceptron Learning Rule vs. Gradient Descent Rule The perceptron learning rule is guaranteed to succeed if the training examples are linearly separable and the learning rate $\eta$ is sufficiently small. The linear unit training rule uses gradient descent: it is guaranteed to converge to the hypothesis with minimum squared error, given a sufficiently small learning rate $\eta$, even when the training data contains noise and even when the training data is not separable by $H$.

Multi-Layer Networks [Figure: a feed-forward network with an input layer, a hidden layer, and an output layer.]

Sigmoid Unit A sigmoid unit computes $net = \sum_{i=0}^{n} w_i x_i$ (with $x_0 = 1$) and outputs $o = \sigma(net) = 1/(1 + e^{-net})$. Here $\sigma(x) = 1/(1 + e^{-x})$ is the sigmoid function, with the convenient derivative $d\sigma(x)/dx = \sigma(x)(1 - \sigma(x))$. We can derive gradient descent rules to train: a single sigmoid unit, using $\partial E/\partial w_i = -\sum_d (t_d - o_d)\, o_d (1 - o_d)\, x_{i,d}$; and multilayer networks of sigmoid units, using backpropagation.
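In Python, the function and its derivative are a direct transcription (function names assumed):

```python
# Sigmoid and its derivative, as used in the gradient derivation above.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1.0 - s)        # d(sigma)/dx = sigma(x) * (1 - sigma(x))
```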

Backpropagation Algorithm Initialize all network weights to small random values. Until the termination condition is met, do: for each training example $\langle (x_1, \ldots, x_n), \vec{t} \rangle$:
- Input the instance $(x_1, \ldots, x_n)$ to the network and compute the network outputs $o_k$.
- For each output unit $k$: $\delta_k = o_k (1 - o_k)(t_k - o_k)$.
- For each hidden unit $h$: $\delta_h = o_h (1 - o_h) \sum_k w_{h,k}\, \delta_k$.
- Update each network weight: $w_{i,j} \leftarrow w_{i,j} + \Delta w_{i,j}$, where $\Delta w_{i,j} = \eta\, \delta_j\, x_{i,j}$.
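A sketch of this algorithm for a single hidden layer of sigmoid units; the per-unit loops mirror the update rules above, while the layer sizes, learning rate, and epoch count are illustrative assumptions:

```python
# Sketch of backpropagation for one hidden layer of sigmoid units.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_backprop(examples, n_in, n_hidden, n_out, eta=0.3, epochs=5000):
    """examples: list of (x, t) with len(x) == n_in, len(t) == n_out,
    targets in (0, 1). Weight matrices include a bias row for x0 = 1."""
    rnd = lambda: random.uniform(-0.05, 0.05)
    w_ih = [[rnd() for _ in range(n_hidden)] for _ in range(n_in + 1)]
    w_ho = [[rnd() for _ in range(n_out)] for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for x, t in examples:
            xs = [1.0] + list(x)
            # forward pass: hidden then output activations
            h = [sigmoid(sum(xs[i] * w_ih[i][j] for i in range(n_in + 1)))
                 for j in range(n_hidden)]
            hs = [1.0] + h
            o = [sigmoid(sum(hs[j] * w_ho[j][k] for j in range(n_hidden + 1)))
                 for k in range(n_out)]
            # output-unit errors: delta_k = o_k (1 - o_k)(t_k - o_k)
            d_out = [o[k] * (1 - o[k]) * (t[k] - o[k]) for k in range(n_out)]
            # hidden-unit errors: delta_h = o_h (1 - o_h) sum_k w_hk delta_k
            d_hid = [h[j] * (1 - h[j]) *
                     sum(w_ho[j + 1][k] * d_out[k] for k in range(n_out))
                     for j in range(n_hidden)]
            # weight updates: dw_ij = eta * delta_j * x_ij
            for j in range(n_hidden + 1):
                for k in range(n_out):
                    w_ho[j][k] += eta * d_out[k] * hs[j]
            for i in range(n_in + 1):
                for j in range(n_hidden):
                    w_ih[i][j] += eta * d_hid[j] * xs[i]
    return w_ih, w_ho
```

Since sigmoid outputs never reach 0 or 1 exactly, targets are usually encoded as values like 0.1 and 0.9 rather than 0 and 1.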

Backpropagation Performs gradient descent over the entire network weight vector; easily generalized to arbitrary directed graphs. It will find a local, not necessarily global, error minimum; in practice it often works well (it can be invoked multiple times with different initial weights). A weight momentum term is often included: $\Delta w_{i,j}(n) = \eta\, \delta_j\, x_{i,j} + \alpha\, \Delta w_{i,j}(n-1)$. Backpropagation minimizes error over the training examples; will it generalize well to unseen instances (the over-fitting question)? Training can be slow, typically 1,000 to 10,000 iterations, but using the network after training is fast.
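A sketch of the momentum update in isolation; the function name and the value of $\alpha$ are assumptions:

```python
# Momentum sketch: dw(n) = eta * delta * x + alpha * dw(n-1).
def momentum_update(w, grad_step, prev_delta, alpha=0.9):
    """w: weights; grad_step: the eta * delta_j * x_ij terms for this step;
    prev_delta: the previous step's weight changes (same length as w)."""
    delta = [g + alpha * p for g, p in zip(grad_step, prev_delta)]
    new_w = [wi + d for wi, d in zip(w, delta)]
    return new_w, delta        # keep delta around for the next iteration
```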

8-3-8 Binary Encoder-Decoder A network with 8 inputs, 3 hidden units, and 8 outputs, trained to reproduce each one-hot input at the output. Learned hidden values for the eight inputs:
.89 .04 .08
.01 .11 .88
.01 .97 .27
.99 .97 .71
.03 .05 .02
.22 .99 .99
.80 .01 .98
.60 .94 .01
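A compact, vectorized sketch of training this 8-3-8 network with numpy; the learning rate, epoch count, and random seed are assumptions, so the learned hidden codes will resemble, not reproduce, the table above:

```python
# Sketch: learning the 8-3-8 identity map with one 3-unit hidden layer.
import numpy as np

rng = np.random.default_rng(0)
X = np.eye(8)                                 # eight one-hot inputs; targets = inputs
Xb = np.hstack([np.ones((8, 1)), X])          # prepend the constant input x0 = 1
W1 = rng.uniform(-0.05, 0.05, (9, 3))         # input (+bias) -> hidden weights
W2 = rng.uniform(-0.05, 0.05, (4, 8))         # hidden (+bias) -> output weights
sigma = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    H = sigma(Xb @ W1)                        # hidden activations, shape (8, 3)
    Hb = np.hstack([np.ones((8, 1)), H])
    O = sigma(Hb @ W2)                        # outputs, shape (8, 8)
    d_out = O * (1 - O) * (X - O)             # delta_k = o_k (1 - o_k)(t_k - o_k)
    d_hid = H * (1 - H) * (d_out @ W2[1:].T)  # delta_h = o_h (1 - o_h) sum_k w_hk delta_k
    W2 += 0.3 * (Hb.T @ d_out)                # batched delta-rule updates
    W1 += 0.3 * (Xb.T @ d_hid)

print(np.round(H, 2))                         # the learned 3-valued hidden codes
```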

Convergence of Backpropagation Gradient descent converges to some local minimum, perhaps not the global minimum. Remedies: add momentum; use stochastic gradient descent; train multiple nets with different initial weights. Nature of convergence: weights are initialized near zero, so the initial network is near-linear; increasingly non-linear functions become possible as training progresses.

Avoiding ANN Overfitting 1. Weight decay: decrease each weight by a small factor during each iteration; this plays the role of a penalty term that keeps weight values small. 2. Early stopping with a validation set: use the number of training iterations that yields the lowest error on a held-out validation set.
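A sketch of the weight-decay step; the decay factor is an assumed value:

```python
# Weight decay sketch: shrink each weight by a small factor every iteration,
# acting like an L2 penalty that keeps weight values small.
def apply_weight_decay(weights, decay=1e-4):
    return [w * (1.0 - decay) for w in weights]
```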

Expressive Capabilities of ANNs Boolean functions: every boolean function can be represented by a network with a single hidden layer, but this may require a number of hidden units exponential in the number of inputs. Continuous functions: every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer [Cybenko 1989, Hornik 1989]; any function can be approximated to arbitrary accuracy by a network with two hidden layers [Cybenko 1988].