1 Machine Learning Neural Networks Tomorrow no lesson! (will recover on Wednesday 11, WEKA!)


2 Neural Networks Analogy to biological neural systems, the most robust learning systems we know. Attempt to understand natural biological systems through computational modeling. Massive parallelism allows for computational efficiency. Helps us understand the “distributed” nature of neural representations (rather than “localist” representations), which allows robustness and graceful degradation. Intelligent behavior is an “emergent” property of a large number of simple units rather than arising from explicitly encoded symbolic rules and algorithms.

3 Neural Speed Constraints Neurons have a “switching time” on the order of a few milliseconds, compared to nanoseconds for current computing hardware. However, neural systems can perform complex cognitive tasks (vision, speech understanding) in tenths of a second. There is only time for about 100 serial steps in this time frame, compared to orders of magnitude more for current computers. They must be exploiting “massive parallelism.” The human brain has about 10^11 neurons with an average of 10^4 connections each.

4 Neural Network Learning Learning approach based on modeling adaptation in biological neural systems. Perceptron: initial algorithm for learning simple neural networks (single layer), developed in the 1950s. Backpropagation: more complex algorithm for learning multi-layer neural networks, developed in the 1980s.

5 Real Neurons Cell structures:
–Cell body
–Dendrites
–Axon
–Synaptic terminals

6 Neural Communication Electrical potential across the cell membrane exhibits spikes called action potentials. A spike originates in the cell body, travels down the axon, and causes synaptic terminals to release neurotransmitters. The chemical diffuses across the synapse to the dendrites of other neurons. Neurotransmitters can be excitatory or inhibitory. If the net input of neurotransmitters to a neuron from other neurons is excitatory and exceeds some threshold, it fires an action potential.

7 Real Neural Learning Synapses change size and strength with experience. Hebbian learning: when two connected neurons are firing at the same time, the strength of the synapse between them increases. “Neurons that fire together, wire together.”
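The slide states the Hebbian principle only qualitatively. A common quantitative formalization (the multiplicative form and the rate eta are illustrative assumptions, not from the slide) is:

```python
def hebbian_update(w, pre, post, eta=0.1):
    """Strengthen the synapse in proportion to the product of
    pre- and post-synaptic activity ("fire together, wire together").
    The learning rate eta and the product form are illustrative."""
    return w + eta * pre * post
```

With both neurons active (pre = post = 1) the weight grows; if either is silent, it is unchanged.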

8 Artificial Neuron Model Model the network as a graph with cells as nodes and synaptic connections as weighted edges w_ji from node i to node j. Model the net input to cell j as net_j = Σ_i w_ji o_i. The cell output is o_j = 1 if net_j > T_j, else 0 (T_j is the threshold for unit j).
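A minimal sketch of this unit in Python (function and variable names are illustrative):

```python
def threshold_unit(inputs, weights, threshold):
    """Linear threshold unit: net_j = sum_i w_ji * o_i;
    output 1 if the net input exceeds the threshold T_j, else 0."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net > threshold else 0
```

For example, with weights (0.6, 0.6) and threshold 1.0 the unit computes the logical AND of two binary inputs.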

9 Perceptron Schema Inputs o_1 … o_n feed weighted edges w_1j … w_nj into a summation unit computing net_j, followed by a threshold T_j that produces the output o_j. The perceptron training phase consists of estimating the edge weights and the threshold.

10 Perceptron Training Assume supervised training examples giving the desired output for a unit given a set of known input activations. Learn synaptic weights so that unit produces the correct output for each example. Perceptron uses iterative update algorithm to learn a correct set of weights.

11 Perceptron Learning Rule Update weights by w_ji = w_ji + η(t_j − o_j)o_i, where η is a constant called the “learning rate”, t_j is the teacher-specified output for unit j, and (t_j − o_j) is the output error. Equivalent to the rules:
–If the output is correct, do nothing.
–If the output is “high”, lower the weights on active inputs.
–If the output is “low”, increase the weights on active inputs.
Also adjust the threshold to compensate: T_j = T_j − η(t_j − o_j).

12 Perceptron Learning Algorithm Iteratively update weights until convergence. Each execution of the outer loop is typically called an epoch.
Initialize weights to random values
Until the outputs of all training examples are correct:
For each training pair E_j do:
Compute the current output o_j for E_j given its inputs
Compare the current output to the target value t_j for E_j
Update the synaptic weights and threshold using the learning rule
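The loop above can be sketched as follows, using the learning rule w_ji ← w_ji + η(t_j − o_j)o_i (function names, the initialization range, and the epoch cap are illustrative assumptions):

```python
import random

def train_perceptron(examples, n_inputs, eta=0.1, max_epochs=100):
    """Iterate the perceptron learning rule until every training
    example is classified correctly, or max_epochs is reached."""
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    threshold = random.uniform(-0.5, 0.5)
    for _ in range(max_epochs):                       # one pass = one epoch
        all_correct = True
        for inputs, target in examples:
            net = sum(w * x for w, x in zip(weights, inputs))
            output = 1 if net > threshold else 0
            error = target - output                   # t_j - o_j
            if error != 0:
                all_correct = False
                for i, x in enumerate(inputs):
                    weights[i] += eta * error * x     # adjust active inputs
                threshold -= eta * error              # adjust threshold too
        if all_correct:
            break
    return weights, threshold
```

On a linearly separable concept such as AND, this converges within a handful of epochs.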

13 Perceptron as a Linear Separator Since the perceptron uses a linear threshold function, it is searching for a linear separator that discriminates the classes: a line in two dimensions, or a hyperplane in n-dimensional space in general.

14 Concept Perceptron Cannot Learn Cannot learn exclusive-or, or the parity function in general, because these are not linearly separable!

15 Perceptron Limits The system obviously cannot learn concepts it cannot represent. Minsky and Papert (1969) wrote a book analyzing the perceptron and demonstrating many functions it could not learn. These results discouraged further research on neural nets, and symbolic AI became the dominant paradigm.

16 Perceptron Convergence and Cycling Theorems Perceptron convergence theorem: if the data is linearly separable, so that a set of weights consistent with the data exists, then the Perceptron algorithm will eventually converge to a consistent set of weights. Perceptron cycling theorem: if the data is not linearly separable, the Perceptron algorithm will eventually repeat a set of weights and threshold at the end of some epoch, and therefore enter an infinite loop.
–By checking for repeated weights and threshold, one can guarantee termination with either a positive or a negative result.

17 Perceptron as Hill Climbing The hypothesis space being searched is the set of weights and a threshold. The objective is to minimize classification error on the training set. The perceptron effectively performs hill-climbing (gradient descent) in this space, changing the weights by a small amount at each step to decrease training-set error. For a single model neuron, the space is well behaved, with a single minimum.

18–24 Example: Learn a NOR function (a sequence of figure slides stepping through the perceptron weight updates, one iteration per slide; the final slide shows the result after 20 iterations)

25 Perceptron Performance In practice, converges fairly quickly for linearly separable data. Can effectively use even incompletely converged results when only a few outliers are misclassified. Experimentally, Perceptron does quite well on many benchmark data sets.

26 Multi-Layer Networks Multi-layer networks can represent arbitrary functions. A typical multi-layer network consists of an input, a hidden, and an output layer, each fully connected to the next, with activation feeding forward. The weights determine the function computed. Given a sufficient number of hidden units, any Boolean function can be computed with a single hidden layer.

27 Hill-Climbing in Multi-Layer Nets Since “greed is good,” perhaps hill-climbing can be used to learn multi-layer networks in practice, although its theoretical limits are clear. However, to do gradient descent with multiple neurons, we need the output of a unit to be a differentiable function of its inputs and weights. The standard linear threshold function is not differentiable at the threshold.

28 Differentiable Output Function Need a non-linear output function to move beyond linear functions.
–The standard solution is the non-linear, differentiable sigmoidal “logistic” function: o_j = 1 / (1 + e^−(net_j − T_j))
–Can also use tanh or a Gaussian output function
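The logistic function, written out as a sketch (the threshold enters as a shift of the net input; the default of 0 is an assumption for convenience):

```python
import math

def sigmoid(net, threshold=0.0):
    """Logistic output o_j = 1 / (1 + exp(-(net_j - T_j))):
    smooth, differentiable, and squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-(net - threshold)))
```

At net = T_j the output is exactly 0.5, and it approaches 0 and 1 asymptotically on either side.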

29 Structure of a node The thresholding function limits the node output (T_j = 0 in the example).

30 Feeding data through the net net = (1 × 0.25) + (0.5 × (−1.5)) = −0.5; thresholding: 1 / (1 + e^0.5) = 0.3775
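The arithmetic on this slide can be checked directly (a minimal sketch using the logistic function with threshold 0):

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

net = (1 * 0.25) + (0.5 * -1.5)   # weighted sum of the two inputs = -0.5
out = sigmoid(net)                # 1 / (1 + e^0.5), about 0.3775
```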

31 Gradient Descent The objective is to minimize the squared error E = ½ Σ_{d∈D} Σ_{k∈K} (t_kd − o_kd)², where D is the set of training examples, K is the set of output units, and t_kd and o_kd are, respectively, the teacher and current output for unit k on example d. The derivative of a sigmoid unit with respect to its net input is ∂o_j/∂net_j = o_j(1 − o_j). The learning rule to change weights so as to minimize error is Δw_ji = −η ∂E/∂w_ji.
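The derivative identity ∂o/∂net = o(1 − o) can be verified numerically against a finite-difference estimate (a sketch; the probe point 0.3 and step size are arbitrary choices):

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

net = 0.3
o = sigmoid(net)
analytic = o * (1.0 - o)          # the o(1 - o) form from the slide
h = 1e-6
numeric = (sigmoid(net + h) - sigmoid(net - h)) / (2 * h)  # central difference
```

The two values agree to many decimal places, confirming the closed form.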

32 Backpropagation Learning Rule Each weight is changed by Δw_ji = η δ_j o_i, where η is a constant called the learning rate and o_i is the output of unit i. The error measure δ_j for unit j is δ_j = o_j(1 − o_j)(t_j − o_j) if j is an output unit (t_j is the correct “teacher” output for unit j), and δ_j = o_j(1 − o_j) Σ_k δ_k w_kj if j is a hidden unit.
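The two cases of δ_j can be written as small helpers (a sketch; function names are illustrative):

```python
def output_delta(o, t):
    """delta_j = o_j (1 - o_j) (t_j - o_j) for an output unit."""
    return o * (1.0 - o) * (t - o)

def hidden_delta(o, downstream):
    """delta_j = o_j (1 - o_j) * sum_k delta_k w_kj for a hidden unit;
    `downstream` holds (delta_k, w_kj) pairs for the units j feeds into."""
    return o * (1.0 - o) * sum(d * w for d, w in downstream)
```

For the worked example on slide 43 (o_j = 0.2, t_j = 1.0) the output case gives 0.2 × 0.8 × 0.8 = 0.128.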

33 Example For net_1 = w_1 x_1 + w_2 x_2: if x_1 is the output of the previous node, i.e. o_i, then the update of w_1 is Δw_1 = η δ x_1 = η δ o_i.

Example Step 1: feed the net with the first example x_1 = (x_11, x_12, …, x_1N) and compute all the node outputs o_j (the figure shows hidden nodes h_1, h_2 feeding the output node via weights w_j1, w_j2).

Example (cont’d) Compute the error on the output node, and backpropagate the computation to the hidden nodes.

Example (cont’d) …and then to the input nodes (via the weights w_h1n1, w_h2n1, …).

Example (cont’d)
Update all weights
Consider the second example
Compute input and output in all nodes
Compute errors on outputs and on all nodes
Re-update weights
Until all examples have been considered (an Epoch)
Repeat for n Epochs, until convergence

Another example (weight updates) Training set:
–((0.1, 0.1), 0.1)
–((0.1, 0.9), 0.9)
–((0.9, 0.1), 0.9)
–((0.9, 0.9), 0.1)
Initial weights (±0.1) are shown in the figure.

1. Compute output and error on the output node First example: ((0.1, 0.1), 0.1)

2. Backpropagate error The errors backpropagated to the hidden nodes (±0.025) are shown in the figure.

3. Update weights

Read the second input from D: ((0.1, 0.9), 0.9), and repeat the computation with the updated weights (0.1005, …) etc.!

43 Yet another example First calculate the error of the output units and use it to change the top layer of weights. Current output: o_j = 0.2. Correct output: t_j = 1.0. Error: δ_j = o_j(1 − o_j)(t_j − o_j) = 0.2 × (1 − 0.2) × (1 − 0.2) = 0.128. Update the weights into j: Δw_ji = η δ_j o_i.

44 Error Backpropagation Next calculate the error for the hidden units based on the errors of the output units they feed into: δ_j = o_j(1 − o_j) Σ_k δ_k w_kj.

45 Error Backpropagation Finally update the bottom layer of weights based on the errors calculated for the hidden units. Update the weights into j: Δw_ji = η δ_j o_i.

Example: Voice Recognition Task: learn to discriminate between two different voices saying “Hello”
Data
–Sources: Steve Simpson, David Raubenheimer
–Format: frequency distribution (60 bins); analogy: the cochlea

Network architecture
–Feed-forward network: 60 inputs (one for each frequency bin), 6 hidden units, 2 outputs (0-1 for “Steve”, 1-0 for “David”)

Presenting the data Steve David

Presenting the data (untrained network) Steve David

Calculate error (Steve / David) Each output unit’s error is its actual output minus its target, e.g. 0.43 − 0 = 0.43 and 0.55 − 0 = 0.55 in the figure.

Backprop error and adjust weights Backpropagate the per-output errors from the previous slide and adjust the weights accordingly.

Repeat process (a “sweep”) for all training pairs:
–Present data
–Calculate error
–Backpropagate error
–Adjust weights
Repeat the process multiple times

Presenting the data (trained network) Steve David

54 Backpropagation Training Algorithm Create the 3-layer network with H hidden units and full connectivity between layers. Set weights to small random real values. Until all training examples produce the correct value (within ε), or mean squared error ceases to decrease, or other termination criteria are met:
Begin epoch
For each training example d, do:
Calculate the network output for d’s input values
Compute the error between the current output and the correct output for d
Update the weights by backpropagating the error and using the learning rule
End epoch
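A compact, self-contained sketch of this procedure (sigmoid units, one hidden layer, online weight updates; class and function names, the network sizes, and the OR target used below are illustrative assumptions, not from the slides):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BackpropNet:
    """3-layer network: input -> hidden -> output, fully connected,
    with a bias weight per unit playing the role of the threshold."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rnd = random.Random(seed)   # small random initial weights
        self.w1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
                   for _ in range(n_out)]

    def forward(self, inputs):
        self.x = list(inputs) + [1.0]                       # bias input
        self.h = [sigmoid(sum(w * v for w, v in zip(ws, self.x)))
                  for ws in self.w1]
        self.hb = self.h + [1.0]                            # bias input
        self.o = [sigmoid(sum(w * v for w, v in zip(ws, self.hb)))
                  for ws in self.w2]
        return self.o

    def backward(self, targets, eta=0.5):
        # Output deltas o(1-o)(t-o); hidden deltas backpropagate them.
        d_out = [o * (1 - o) * (t - o) for o, t in zip(self.o, targets)]
        d_hid = [h * (1 - h) * sum(d * self.w2[k][j]
                                   for k, d in enumerate(d_out))
                 for j, h in enumerate(self.h)]
        for k, d in enumerate(d_out):                       # top layer
            for j, v in enumerate(self.hb):
                self.w2[k][j] += eta * d * v
        for j, d in enumerate(d_hid):                       # bottom layer
            for i, v in enumerate(self.x):
                self.w1[j][i] += eta * d * v

def mean_squared_error(net, data):
    return sum((t - net.forward(x)[0]) ** 2 for x, (t,) in data) / len(data)
```

Training it on logical OR (four examples, a couple of thousand epochs) drives the error close to zero; XOR works too, though it may need more hidden units and epochs.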

55 Comments on Training Algorithm Not guaranteed to converge to zero training error; may converge to local optima or oscillate indefinitely. However, in practice it does converge to low error for many large networks on real data. Many epochs (thousands) may be required, meaning hours or days of training for large networks. To avoid local-minima problems, run several trials starting with different random weights (random restarts):
–Take the result of the trial with the lowest training-set error, or
–Build a committee of results from multiple trials (possibly weighting votes by training-set accuracy).

56 Representational Power Boolean functions: any Boolean function can be represented by a two-layer network with sufficient hidden units. Continuous functions: any bounded continuous function can be approximated with arbitrarily small error by a two-layer network.
–Sigmoid functions can act as a set of basis functions for composing more complex functions, like sine waves in Fourier analysis.
Arbitrary functions: any function can be approximated to arbitrary accuracy by a three-layer network.

57 Over-Training Prevention Running too many epochs can result in over-fitting. Keep a hold-out validation set and test accuracy on it after every epoch. Stop training when additional epochs actually increase validation error. To avoid losing training data to validation:
–Use internal 10-fold CV on the training set to compute the average number of epochs that maximizes generalization accuracy.
–Train the final network on the complete training set for this many epochs.
(Figure: error on the training data keeps decreasing with the number of training epochs, while error on the test data eventually rises.)
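The stopping criterion above can be sketched generically (a sketch; the `patience` tolerance and the callback names are illustrative assumptions, not from the slides):

```python
def train_with_early_stopping(state, run_epoch, validation_error,
                              max_epochs=1000, patience=5):
    """Train one epoch at a time; stop once validation error has
    failed to improve for `patience` consecutive epochs, and report
    the best epoch seen (the slide's stop-when-error-rises rule,
    made slightly more tolerant of noisy validation curves)."""
    best_err = float("inf")
    best_epoch = -1
    waited = 0
    for epoch in range(max_epochs):
        run_epoch(state)
        err = validation_error(state)
        if err < best_err:
            best_err, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch, best_err
```

The internal-CV variant on the slide would call this on each fold to find the best epoch count, then retrain on the full training set for that many epochs.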

58 Determining the Best Number of Hidden Units Too few hidden units prevent the network from adequately fitting the data. Too many hidden units can result in over-fitting. Use internal cross-validation to empirically determine an optimal number of hidden units. (Figure: training error decreases as hidden units are added, while test error eventually rises.)

59 Successful Applications Text to Speech (NetTalk) Fraud detection Financial Applications –HNC (eventually bought by Fair Isaac) Chemical Plant Control –Pavillion Technologies Automated Vehicles Game Playing –Neurogammon Handwriting recognition

60 Issues in Neural Nets More efficient training methods:
–Quickprop
–Conjugate gradient (exploits the 2nd derivative)
Learning the proper network architecture:
–Grow the network until it is able to fit the data (Cascade Correlation, Upstart)
–Shrink a large network until it is unable to fit the data (Optimal Brain Damage)
Recurrent networks that use feedback and can learn finite state machines with “backpropagation through time.”

61 Issues in Neural Nets (cont.) More biologically plausible learning algorithms based on Hebbian learning. Unsupervised Learning –Self-Organizing Feature Maps (SOMs) Reinforcement Learning –Frequently used as function approximators for learning value functions. Neuroevolution