
Artificial Neural Networks Chapter 4 Perceptron Gradient Descent Multilayer Networks Backpropagation Algorithm 1

Amazing numbers 2

Properties of Neural Networks 3

Neural Networks An analogy to biological neural systems, the most robust learning systems we know. They are an attempt to understand natural biological systems through computational modeling. Massive parallelism allows for computational efficiency. They help us understand the “distributed” nature of neural representations (rather than “localist” representations), which allows robustness and graceful degradation. Intelligent behavior arises as an “emergent” property of a large number of simple units rather than from explicitly encoded symbolic rules and algorithms. 4

Neural Speed Constraints Neurons have a “switching time” on the order of a few milliseconds, compared to nanoseconds for current computing hardware. However, neural systems can perform complex cognitive tasks (vision, speech understanding) in tenths of a second. There is only time for about 100 serial steps in this time frame, compared to orders of magnitude more for current computers. The brain must be exploiting “massive parallelism.” The human brain has about 10^11 neurons with an average of 10^4 connections each. 5

Real Neurons
Cell structures:
– Cell body
– Dendrites
– Axon
– Synaptic terminals
6

Neural Communication The electrical potential across the cell membrane exhibits spikes called action potentials. A spike originates in the cell body, travels down the axon, and causes synaptic terminals to release neurotransmitters. The chemicals diffuse across the synapse to the dendrites of other neurons. Neurotransmitters can be excitatory or inhibitory. If the net input of neurotransmitters to a neuron from other neurons is excitatory and exceeds some threshold, the neuron fires an action potential. 7

Real Neural Learning Synapses change size and strength with experience. Hebbian learning (1949): When two connected neurons are firing at the same time, the strength of the synapse between them increases. “Neurons that fire together, wire together.” 8

A Neuron model 9

Artificial Neuron Model
Model the network as a graph with cells as nodes and synaptic connections as weighted edges w_i from node i.
Model the net input to a cell as: net = Σ_i w_i x_i, with a constant input x_0 = 1 so that w_0 acts as the bias or threshold.
Cell output is a threshold function of the net input: o = sign(net), i.e., o = 1 if net > 0, and o = 0 otherwise.
[Figure: a unit with inputs x_1…x_4 and weights w_0…w_4, computing net and o = sign(net)]
10
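A minimal Python sketch of this unit (function and variable names are my own, not from the slides): it computes net = w_0 + Σ_i w_i x_i and outputs 1 when net is positive, 0 otherwise. With the example weights it happens to implement AND.

def unit_output(weights, inputs):
    """Compute o = sign(w . x) for one threshold unit.

    weights: [w_0, w_1, ..., w_n], where w_0 is the bias/threshold weight
    inputs:  [x_1, ..., x_n], with x_0 = 1 implicit
    """
    net = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return 1 if net > 0 else 0

print(unit_output([-1.5, 1.0, 1.0], [1, 1]))  # 1: net = 0.5 > 0
print(unit_output([-1.5, 1.0, 1.0], [1, 0]))  # 0: net = -0.5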

Artificial Neural Network
A learning approach based on modeling adaptation in biological neural systems:
– Perceptron: the initial algorithm for learning simple (single-layer) neural networks (Rosenblatt, Frank (1957). The Perceptron: A Perceiving and Recognizing Automaton. Report 85-460-1, Cornell Aeronautical Laboratory).
– Backpropagation: a more complex algorithm for learning feed-forward multi-layer neural networks (Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (8 October 1986). "Learning representations by back-propagating errors". Nature 323 (6088): 533–536).
11

A Perceptron 12

Neural Computation
McCulloch and Pitts (1943) showed how such model neurons could compute logical functions and be used to construct finite-state machines.
They can be used to simulate logic gates (for a unit with threshold T_j and n inputs):
– AND: let all w_ji be T_j / n.
– OR: let all w_ji be T_j.
– NOT: let the threshold be 0, with a single input with a negative weight.
One can build arbitrary logic circuits, sequential machines, and computers with such gates. Given negated inputs, a two-layer network can compute any Boolean function using a two-level AND-OR network.
13
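A hedged sketch of these gate constructions, using a McCulloch-Pitts style unit that fires when its net input reaches the threshold T (the names and the "fires at or above T" convention are assumptions of mine). The weight choices follow the slide: T/n per input for AND, T per input for OR, and a single negative weight with threshold 0 for NOT.

def mp_unit(weights, threshold, inputs):
    """Fire (return 1) iff sum_i w_i * x_i >= threshold."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

T = 1.0
AND = lambda a, b: mp_unit([T / 2, T / 2], T, [a, b])  # w = T/n with n = 2
OR  = lambda a, b: mp_unit([T, T], T, [a, b])          # w = T
NOT = lambda a:    mp_unit([-T], 0.0, [a])             # threshold 0, w < 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), " NOT 1:", NOT(1))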

Perceptron Training Assume supervised training examples giving the desired output for a unit, given a set of known input activations. Learn synaptic weights so that the unit produces the correct output for each example. The perceptron uses an iterative update algorithm to learn a correct set of weights. 14

Perceptron Learning Rule
Update each weight by: w_i ← w_i + η (t − o) x_i
where
– η is the “learning rate”
– t is the teacher-specified output (target value)
– o is the current output of the unit
This is equivalent to the rules:
– If the output is correct, do nothing.
– If the output is high, lower the weights on active inputs.
– If the output is low, increase the weights on active inputs.
15
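As a small worked sketch of one application of the rule (names are my own): here the output was 0 but the target is 1, so the weights on active inputs increase by η.

def update(weights, inputs, target, output, eta=0.1):
    """Apply w_i <- w_i + eta * (t - o) * x_i; inputs[0] = 1 for the bias."""
    return [w + eta * (target - output) * x for w, x in zip(weights, inputs)]

print(update([0.0, 0.5, -0.5], [1, 1, 0], target=1, output=0))
# -> [0.1, 0.6, -0.5]: only the weights on active inputs changed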

Perceptron Learning Algorithm
Iteratively update weights until convergence. Each execution of the outer loop is typically called an epoch.

Initialize weights to random values
Until the outputs of all training examples are correct:
   For each training pair, E, do:
      Compute the current output o for E given its inputs
      Compare the current output to the target value, t, for E
      Update the synaptic weights and threshold using the learning rule
16
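A compact Python sketch of this loop, under one convention that is my assumption rather than the slides': a constant leading input of 1 stands in for the threshold, so the bias is learned as weight w_0. Trained on AND, which is linearly separable, it converges.

import random

def predict(w, x):  # x includes the leading 1 for the bias weight
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def train_perceptron(examples, eta=0.1, max_epochs=100, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(len(examples[0][0]))]
    for epoch in range(max_epochs):
        errors = 0
        for x, t in examples:  # one pass over the data = one epoch
            o = predict(w, x)
            if o != t:
                errors += 1
                w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
        if errors == 0:  # every training example is correct: converged
            return w, epoch
    return w, max_epochs

AND = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
w, epochs = train_perceptron(AND)
print("weights:", [round(wi, 2) for wi in w], "- converged by epoch", epochs)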

Perceptron training rule 17

Perceptron as Hill Climbing
The perceptron searches the space of weights (and threshold) to minimize error on the training set. It effectively does hill-climbing (gradient descent) in this space, changing the weights a small amount at each point to decrease training-set error. For a single model neuron, the space is well behaved, with a single minimum.
[Figure: training error as a function of the weights, showing a single minimum]
18
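To make the gradient-descent picture concrete, here is a sketch in the spirit of the delta rule from Mitchell's Chapter 4 (an assumption: this code and the linear-unit setup are not given on the slide). It descends the squared training error E(w) = ½ Σ_d (t_d − o_d)² of an unthresholded linear unit o = w · x, for which ∂E/∂w_i = −Σ_d (t_d − o_d) x_i.

def gradient_step(w, data, eta=0.05):
    grad = [0.0] * len(w)
    for x, t in data:
        o = sum(wi * xi for wi, xi in zip(w, x))  # linear (unthresholded) output
        for i, xi in enumerate(x):
            grad[i] += -(t - o) * xi              # accumulate dE/dw_i
    return [wi - eta * g for wi, g in zip(w, grad)]  # step downhill in E

data = [([1, 0], 0.0), ([1, 1], 1.0)]  # each x includes a bias input of 1
w = [0.0, 0.0]
for _ in range(500):
    w = gradient_step(w, data)
print([round(wi, 3) for wi in w])  # should approach [0.0, 1.0]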

Geometric Interpretation: Perceptron as a Linear Separator
Since the perceptron uses a linear threshold function, it is searching for a linear separator that discriminates the classes: the decision boundary Σ_i w_i x_i = 0 is a line in two dimensions, or a hyperplane in n-dimensional space.
[Figure: points of two classes in the (x_1, x_2) plane, separated by a line]
19

Perceptron Performance Linear threshold functions are restrictive. In practice, the perceptron converges fairly quickly for linearly separable data. Even incompletely converged results can be used effectively when only a few outliers are misclassified. Experimentally, the perceptron does quite well on many benchmark data sets. 20

A Concept the Perceptron Cannot Learn: XOR
The perceptron cannot learn exclusive-or, or the parity function in general.
[Figure: the four XOR points in the plane, labeled + and –; no single line separates the classes]
21

Perceptron Limits If the data are not linearly separable, convergence is not assured. Minsky and Papert (1969) wrote a book analyzing the perceptron and demonstrating many functions it could not learn (M.L. Minsky and S. Papert. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press, 1969). These results discouraged further research on neural nets. 22

Perceptron Convergence and Cycling Theorems
Perceptron convergence theorem: If the data is linearly separable, and therefore a set of weights exists that is consistent with the data, then the perceptron algorithm will eventually converge to a consistent set of weights.
Perceptron cycling theorem: If the data is not linearly separable, the perceptron algorithm will eventually repeat a set of weights and threshold at the end of some epoch and therefore enter an infinite loop.
– By checking for repeated weights and threshold, one can guarantee termination with either a positive or negative result, as in the sketch below. (H.D. Block and S.A. Levin. On the boundedness of an iterative procedure for solving a system of linear inequalities. Proceedings of the American Mathematical Society, 26(2):229–235, 1970)
23
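A sketch of that termination check (my own code, not from the slides): record the weight vector at the end of every epoch and stop when one repeats. Integer weights with η = 1 keep the equality test exact. Run on XOR, it detects a cycle and reports non-separability.

def train_or_detect_cycle(examples, eta=1):
    w = [0] * len(examples[0][0])
    seen = set()
    while True:
        errors = 0
        for x, t in examples:
            o = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if o != t:
                errors += 1
                w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
        if errors == 0:
            return "separable", w
        key = tuple(w)
        if key in seen:  # epoch-end weights repeated: we are in a cycle
            return "not separable", w
        seen.add(key)

XOR = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 0)]
print(train_or_detect_cycle(XOR)[0])  # -> not separable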

Hidden Units: A Solution for Non-Linearly-Separable Data
A network with a hidden layer of threshold units can represent XOR; such networks show the potential of hidden neurons.
[Figure: a two-layer network for XOR, with inputs x_1 and x_2, hidden units, and threshold (bias) weights]
24
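A hand-wired sketch of this idea (the weights here are chosen by hand for illustration, not learned, and are my own choice): with one hidden layer of threshold units, XOR is computable as (x1 OR x2) AND NOT (x1 AND x2).

def step(net):
    return 1 if net > 0 else 0

def xor_net(x1, x2):
    h_or  = step(-0.5 + x1 + x2)      # hidden unit 1: OR  (bias -0.5)
    h_and = step(-1.5 + x1 + x2)      # hidden unit 2: AND (bias -1.5)
    return step(-0.5 + h_or - h_and)  # output unit: h_or AND NOT h_and

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_net(x1, x2))  # prints 0, 1, 1, 0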