George F. Luger, ARTIFICIAL INTELLIGENCE, 6th edition: Structures and Strategies for Complex Problem Solving. Machine Learning: Connectionist.

Presentation transcript:

Machine Learning: Connectionist. © Pearson Education Limited, 2009.

Introduction
11.1 Foundations for Connectionist Networks
11.2 Perceptron Learning
11.3 Backpropagation Learning
11.4 Competitive Learning
11.5 Hebbian Coincidence Learning
11.6 Attractor Networks or "Memories"
11.7 Epilogue and References
11.8 Exercises

Fig 11.1 An artificial neuron, with input vector x_i, weights on each input line, and a thresholding function f that determines the neuron's output value. Compare with the actual neuron in Fig 1.2.
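In code, the neuron of Fig 11.1 is just a weighted sum passed through f. A minimal Python sketch (the function names are illustrative, not from the text):

```python
# A generic artificial neuron: weighted sum of the inputs passed
# through a thresholding function f, as in Fig 11.1.
def neuron(x, weights, f):
    net = sum(xi * wi for xi, wi in zip(x, weights))
    return f(net)

# Example: a hard-limiting f that fires when activation reaches 1.
out = neuron([1, 0, 1], [0.5, 0.3, 0.6], lambda net: 1 if net >= 1 else 0)
print(out)  # 1, since 0.5 + 0.6 = 1.1 >= 1
```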

Fig 11.2 McCulloch-Pitts neurons to calculate the logic functions AND and OR.
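Fixing the weights and threshold by hand yields the two gates. A sketch assuming weights of 1 with thresholds 2 (AND) and 1 (OR), one standard choice; the figure's exact values may differ:

```python
# McCulloch-Pitts unit: fires (1) when the weighted input sum reaches
# the threshold, otherwise stays silent (0).
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# AND needs both inputs active (threshold 2); OR needs either (threshold 1).
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "AND:", mp_neuron(x, (1, 1), 2), "OR:", mp_neuron(x, (1, 1), 1))
```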

Table 11.1 The McCulloch-Pitts model for logical AND.

Table 11.2 The truth table for exclusive-or.

Fig 11.3 The exclusive-or problem. No straight line in two dimensions can separate the (0, 1) and (1, 0) data points from (0, 0) and (1, 1).
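The claim can be checked mechanically: a single threshold unit computes x1*w1 + x2*w2 >= t, and no choice of weights and threshold reproduces exclusive-or. A brute-force sketch over a coarse grid of candidates (illustrative, not a proof):

```python
import itertools

# Scan a grid of weights w1, w2 and thresholds t: no linear threshold
# unit x1*w1 + x2*w2 >= t matches the XOR truth table, illustrating
# the geometric argument of Fig 11.3.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
vals = [v / 2 for v in range(-8, 9)]        # -4.0 .. 4.0 in steps of 0.5
found = any(
    all((x1 * w1 + x2 * w2 >= t) == bool(out) for (x1, x2), out in xor.items())
    for w1, w2, t in itertools.product(vals, vals, vals)
)
print("threshold unit computing XOR found:", found)   # False
```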

Fig 11.4 A full classification system.

Table 11.3 A data set for perceptron classification.

Fig 11.5 A two-dimensional plot of the data points in Table 11.3. The perceptron of Section 11.2 provides a linear separation of the data sets.

Fig 11.6 The perceptron net for the example data of Table 11.3. The node computes the weighted sum Σ x_i w_i of its inputs; the thresholding function is linear and bipolar (see Fig 11.7a).
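Perceptron training adjusts weights only on misclassification: Δw_i = c(d - sign(Σ x_i w_i)) x_i, where d is the desired bipolar output and c the learning constant. A sketch with hypothetical stand-in data, since Table 11.3's values are not reproduced in this transcript:

```python
import random

random.seed(0)

def sign(net):                       # the bipolar threshold of Fig 11.7a
    return 1 if net >= 0 else -1

def train_perceptron(data, c=0.2, epochs=50):
    # data: (input_vector, desired_output) pairs with bipolar targets;
    # a trailing input fixed at 1 serves as the bias/threshold weight.
    w = [random.uniform(-0.5, 0.5) for _ in range(len(data[0][0]))]
    for _ in range(epochs):
        for x, d in data:
            out = sign(sum(wi * xi for wi, xi in zip(w, x)))
            if out != d:             # adjust only on misclassification
                w = [wi + c * (d - out) * xi for wi, xi in zip(w, x)]
    return w

# Hypothetical stand-in for Table 11.3: two 2-D clusters, each point
# given as (x1, x2, bias) with a bipolar class label.
data = [((1.0, 1.0, 1), 1), ((9.4, 6.4, 1), -1), ((2.5, 2.1, 1), 1),
        ((8.0, 7.7, 1), -1), ((0.5, 2.2, 1), 1), ((7.9, 8.4, 1), -1)]
print("learned weights:", train_perceptron(data))
```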

Fig 11.7 Thresholding functions.
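A quick sketch comparing three standard thresholding functions: a hard limiter, its bipolar variant, and the continuous sigmoid used later by backpropagation:

```python
import math

# Hard limiter, bipolar limiter, and sigmoid thresholding functions.
step = lambda z: 1 if z >= 0 else 0
bipolar = lambda z: 1 if z >= 0 else -1
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

for z in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"{z:5.1f}  step={step(z)}  bipolar={bipolar(z):2d}  sigmoid={sigmoid(z):.3f}")
```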

Fig 11.8 An error surface in two dimensions. The constant c dictates the size of the learning step.
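The figure depicts gradient descent: each update moves the weights down the error surface by a step scaled by c. For a single linear unit minimizing squared error this is the delta rule, sketched below:

```python
# Delta rule for one linear unit: step down the squared-error surface
# by c * error * input; c sets the step size of Fig 11.8.
def delta_rule_step(w, x, d, c=0.1):
    err = d - sum(wi * xi for wi, xi in zip(w, x))
    return [wi + c * err * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(100):                         # descend toward target d = 1
    w = delta_rule_step(w, [1.0, 0.5], 1.0)
print(w, "->", w[0] * 1.0 + w[1] * 0.5)      # output approaches 1.0
```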

Fig 11.9 Backpropagation in a connectionist network having a hidden layer.

Fig 11.10.

Fig 11.11 The network topology of NETtalk.

Fig 11.12 A backpropagation net to solve the exclusive-or problem. The W_ij are the weights and H is the hidden node.
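A minimal backpropagation sketch for exclusive-or. Fig 11.12's net uses a single hidden node plus direct input-to-output links; the version below uses the more common two-hidden-node layout, with sigmoid units and deltas scaled by the derivative o(1 - o). Learning rate, epoch count, and seed are arbitrary choices:

```python
import math, random

random.seed(0)

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

# Weights for a 2-2-1 network; each row carries a trailing bias weight.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # input -> hidden
W2 = [random.uniform(-1, 1) for _ in range(3)]                      # hidden -> output
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Training occasionally stalls in a local minimum; re-seeding helps.
for _ in range(20000):
    for (x1, x2), d in data:
        x = (x1, x2, 1.0)                                   # append bias input
        h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        hb = h + [1.0]                                      # hidden outputs + bias
        o = sig(sum(w * hi for w, hi in zip(W2, hb)))
        do = (d - o) * o * (1 - o)                          # output delta
        dh = [do * W2[j] * h[j] * (1 - h[j]) for j in range(2)]  # hidden deltas
        W2 = [w + 0.5 * do * hi for w, hi in zip(W2, hb)]
        for j in range(2):
            W1[j] = [w + 0.5 * dh[j] * xi for w, xi in zip(W1[j], x)]

for (x1, x2), d in data:
    hb = [sig(sum(w * xi for w, xi in zip(row, (x1, x2, 1.0)))) for row in W1] + [1.0]
    print((x1, x2), "->", round(sig(sum(w * hi for w, hi in zip(W2, hb))), 2))
```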

Fig 11.13 A layer of nodes for application of a winner-take-all algorithm. The bold input vectors support the winning node.
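Winner-take-all in code: the node whose weight vector lies closest to the input wins the competition, and only the winner responds. A sketch:

```python
# Winner-take-all: select the node whose weight vector is nearest
# (smallest squared distance) to the input vector.
def winner(nodes, x):
    dist = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    return min(range(len(nodes)), key=lambda j: dist(nodes[j]))

nodes = [[0.0, 0.0], [1.0, 1.0]]     # two prototype weight vectors
print(winner(nodes, [0.2, 0.1]))     # 0
print(winner(nodes, [0.9, 0.8]))     # 1
```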

Fig 11.14 The use of a Kohonen layer, unsupervised, to generate a sequence of prototypes to represent the classes of Table 11.3.
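Kohonen learning then moves only the winner's weight vector a fraction c toward each input, so each node drifts toward a cluster prototype. A sketch with hypothetical two-cluster data:

```python
# Kohonen-style unsupervised learning: only the winning node's weight
# vector moves a fraction c toward the input.
def kohonen_step(nodes, x, c=0.5):
    dist = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    j = min(range(len(nodes)), key=lambda k: dist(nodes[k]))
    nodes[j] = [w + c * (xi - w) for w, xi in zip(nodes[j], x)]

# Hypothetical samples from two clusters; the two nodes end up near the
# cluster centres, playing the role of Fig 11.14's prototypes.
samples = [[0.1, 0.2], [0.9, 1.1], [0.0, 0.1], [1.0, 0.9]] * 10
nodes = [[0.5, 0.4], [0.4, 0.5]]
for x in samples:
    kohonen_step(nodes, x)
print(nodes)
```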

Fig 11.15 The architecture of the Kohonen-based learning network for the data of Table 11.3 and the classification of Fig 11.5.

Fig 11.16 The "outstar" of node J, the "winner" in a winner-take-all network. The Y vector supervises the response on the output layer in Grossberg training. The outstar is shown in bold, with all its weights 1; all other weights are 0.

Fig 11.17 A counterpropagation network to recognize the classes in Table 11.3. We train the outstar weights of node A, w_sa and w_da.
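A counterpropagation step combines both rules: an unsupervised Kohonen layer picks the winner, whose outstar weights are then trained toward the supervising vector Y. A sketch (function and variable names are illustrative):

```python
# Counterpropagation sketch: a winner-take-all (Kohonen) layer picks a
# prototype, then the winner's "outstar" weights are trained toward the
# supervising output vector y (Grossberg learning).
def counterprop_step(protos, outstars, x, y, c=0.3):
    dist = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    j = min(range(len(protos)), key=lambda k: dist(protos[k]))
    protos[j] = [w + c * (xi - w) for w, xi in zip(protos[j], x)]      # Kohonen layer
    outstars[j] = [w + c * (yi - w) for w, yi in zip(outstars[j], y)]  # outstar layer
    return j

# Hypothetical two-class data: each input x arrives with its class vector y.
protos = [[0.2, 0.2], [0.8, 0.8]]
outstars = [[0.0, 0.0], [0.0, 0.0]]
for x, y in [([0.1, 0.3], [1, 0]), ([0.9, 0.7], [0, 1])] * 20:
    counterprop_step(protos, outstars, x, y)
print(outstars)   # each winner's outstar weights approach its class vector
```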

Fig 11.18 An SVM learning the boundaries of a chessboard from points generated according to the uniform distribution, using Gaussian kernels. The dots are the data points, with the larger dots comprising the set of support vectors; the darker areas indicate confidence in the classification. Adapted from Cristianini and Shawe-Taylor (2000).
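The experiment can be approximated with an off-the-shelf SVM. A sketch using scikit-learn (an assumed tool, not the authors'), with a Gaussian (RBF) kernel and a small 2 x 2 board standing in for the full chessboard:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(400, 2))           # uniform points on a 2x2 board
y = X.astype(int).sum(axis=1) % 2              # checkerboard class labels

clf = SVC(kernel="rbf", gamma=2.0).fit(X, y)   # Gaussian kernel, as in Fig 11.18
print("training accuracy:", clf.score(X, y))
print("support vectors:", len(clf.support_))   # the "larger dots" of the figure
```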

Table 11.4 The signs and products of signs of node output values.

Fig 11.19 An example neuron for application of a hybrid Hebbian node where learning is supervised.
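The supervised Hebbian rule strengthens each weight in proportion to the product of its input signal and the desired output, Δw_i = c * d * x_i. A sketch with bipolar patterns:

```python
# Supervised Hebbian update: each weight grows with the product of its
# input signal and the desired (supervising) output.
def hebb_step(w, x, d, c=0.1):
    return [wi + c * d * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]
for x, d in [((1, -1, 1), 1), ((-1, 1, -1), -1)] * 5:   # bipolar pairs
    w = hebb_step(w, x, d)
print(w)   # weights align with the input/response correlation
```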

Fig 11.20 A supervised Hebbian network for learning pattern association.

Fig 11.21 The linear association network. The vector X_i is entered as input and the associated vector Y is produced as output. Each y_i is a linear combination of the x inputs; in training, each y_i is supplied with its correct output signal.

Fig 11.22 A linear associator network for the example in Section 11.5. The weight matrix is calculated using the formula presented in the previous section.
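The formula referred to is the sum of outer products of the associated pairs, W = Σ y·xᵀ, with recall y = W·x; when the input vectors are orthonormal, every stored association is recovered exactly. A sketch:

```python
import numpy as np

# Linear associator: W sums the outer products y * x^T of the stored
# pairs; recall is y = W @ x. Orthonormal inputs give exact recall.
x1 = np.array([1, -1, 1, -1]) / 2.0
x2 = np.array([1, 1, -1, -1]) / 2.0          # orthonormal to x1
y1, y2 = np.array([1, 0]), np.array([0, 1])

W = np.outer(y1, x1) + np.outer(y2, x2)
print(W @ x1)   # [1. 0.] -> recalls y1
print(W @ x2)   # [0. 1.] -> recalls y2
```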

Fig 11.23 A BAM network for the examples of Section 11.6. Each node may also be connected to itself.
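BAM recall in code: the weight matrix sums outer products of the bipolar pattern pairs, and recall bounces between the two layers through the sign threshold until the (X, Y) pair stabilizes. A sketch with made-up patterns:

```python
import numpy as np

# BAM sketch: W = sum of outer products y * x^T over stored bipolar
# pairs; recall alternates y = sign(W x), x = sign(W^T y) until stable.
pairs = [(np.array([1,  1,  1, -1, -1]), np.array([ 1, 1, -1])),
         (np.array([1, -1, -1,  1, -1]), np.array([-1, 1,  1]))]
W = sum(np.outer(y, x) for x, y in pairs)

x = np.array([1, -1, 1, -1, -1])       # first X pattern with one bit flipped
for _ in range(5):                     # a few passes settle on a stored pair
    y = np.sign(W @ x)
    x = np.sign(W.T @ y)
print(x, y)                            # recovers the first stored (X, Y) pair
```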

Fig 11.24 An autoassociative network with an input vector I_i. We assume single links between nodes with unique indices, thus w_ij = w_ji and the weight matrix is symmetric.
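The same outer-product construction gives an autoassociative memory: the symmetric weight matrix pulls a corrupted input back toward the nearest stored pattern. A sketch:

```python
import numpy as np

# Autoassociative sketch: a symmetric weight matrix (w_ij = w_ji) built
# from outer products of stored bipolar patterns; repeated thresholded
# updates restore a corrupted input to the nearest stored pattern.
patterns = [np.array([1, -1, 1, -1, 1, -1]),
            np.array([1, 1, -1, -1, 1, 1])]    # orthogonal bipolar patterns
W = sum(np.outer(p, p) for p in patterns)      # symmetric by construction

x = np.array([1, -1, 1, -1, -1, -1])           # first pattern, one bit flipped
for _ in range(3):
    x = np.sign(W @ x)
print(x)                                       # recovers the first pattern
```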