Slides from: Doug Gray, David Poole

CONTENTS:  Introduction  What is neural network?  Models of neural networks  Applications  Phases in the neural network  Perceptron  Model of fire fighting robot  Encoding of target actions  Training of network  Experiments  Conclusion

Introduction:  The goal of this paper is to present the design of a robot that uses a neural network technique, built to fulfill the requirements for a Robotics course.  The robot has to move forward from a starting position, navigate around obstacles, detect any fires in its path, and then extinguish them. Next, the neural network control algorithm is discussed, together with the training procedure.  Finally, we summarize the results of several experiments that measured the robot's fire-fighting performance.

What is a Neural Network?  A neural network is a set of nodes, called units, which are connected by links.  A numeric value, called a weight, is associated with each link. All weights in a network are generally initialized to random values between zero and one. Learning in a neural network is typically performed by adjusting the weights.  Neural networks are software or hardware models inspired by the structure and behavior of biological neurons and the nervous system.

Models of neural networks

APPLICATIONS :  Robotics (control, navigation, coordination, object recognition, decision making).  Signal processing, speech and word recognition.  Financial forecasting (Interest rates, stock indices and currencies).  Weather forecasting.  Diagnostics (e.g. Medicine or Engineering).

Phases in the neural network  There are two phases in using a neural network for robotics: first the network must be trained, then it can be tested. During training, the weights associated with the links are adjusted so that each input pattern is mapped to the desired output pattern for that input; i.e., a function that performs this mapping must be learned.  The function is represented by the weights associated with the links. Once the network has been trained, the function is evaluated by processing the inputs and taking the action corresponding to the output unit with the largest value.
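The evaluation phase described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual code; the weight layout and the `evaluate` helper are assumptions made for the example.

```python
# Illustrative sketch of the testing phase: the trained weights map
# sensor inputs to one of the four robot actions.
ACTIONS = ["FORWARD", "TURN_LEFT", "TURN_RIGHT", "BLOW"]

def evaluate(weights, inputs):
    """weights[j][i] is the weight of the link from input unit i
    to output unit j. Returns the action whose output unit has
    the largest value."""
    outputs = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
    best = outputs.index(max(outputs))  # output unit with the largest value
    return ACTIONS[best]
```

For example, with two inputs and four output units, `evaluate` computes four weighted sums and picks the action of the winning unit.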

Perceptron network:  A single perceptron unit j is shown in Figure 1. The unit computes a weighted sum of its inputs, and then transforms the weighted sum into the output value for the unit, o_j, by passing it through an activation function.  Many types of activation functions can be used; the most common are the step function, the sign function, and the sigmoid function.  The sigmoid function takes values from a possibly large input domain and maps them to values between zero and one. For this reason, the sigmoid function is often called a squashing function.
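A single perceptron unit with a sigmoid activation can be sketched as follows. The function names are illustrative, not taken from the paper.

```python
import math

def sigmoid(s):
    """Sigmoid activation: squashes any weighted sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

def perceptron_output(weights_j, inputs):
    """Output o_j of a single unit j: the sigmoid of the
    weighted sum of its inputs."""
    s = sum(w * x for w, x in zip(weights_j, inputs))
    return sigmoid(s)
```

Note how even a very large or very negative weighted sum is squashed into the open interval (0, 1), which is what makes the sigmoid convenient for the one-hot target encodings used later.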

Model of a fire fighting Robot:

Neural Network of a Robot:

Encoding of Target Actions  FORWARD: both sensors covered, {1, 0, 0, 0}  TURN_LEFT: left uncovered, right covered, {0, 1, 0, 0}  TURN_RIGHT: left covered, right uncovered, {0, 0, 1, 0}  BLOW: both sensors uncovered, {0, 0, 0, 1}
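The encoding above amounts to a lookup from the sensor state to a one-hot target vector. A minimal sketch, with the `(left_covered, right_covered)` key convention being an assumption of this example:

```python
# One-hot target vectors for each sensor state, keyed by
# (left_covered, right_covered), following the table above.
TARGET = {
    (True,  True):  [1, 0, 0, 0],  # FORWARD
    (False, True):  [0, 1, 0, 0],  # TURN_LEFT
    (True,  False): [0, 0, 1, 0],  # TURN_RIGHT
    (False, False): [0, 0, 0, 1],  # BLOW
}
```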

Training of Network  The goal of training the network is to adjust the weights so that the input patterns (sensor inputs) are mapped to an output pattern that corresponds to the “correct” action the robot should take.  The robot is trained by simulating the situations that it can expect to encounter.  The trainer places her hand in front of an infrared sensor, say on the left side, and then gives the target action TURN_RIGHT, and vice versa. When none of the sensors detects an obstacle, the target action FORWARD is given.

Continued..  When a candle is placed within sensor range, the robot is given the feedback BLOW. Initially, all weights in the network are assigned random values (between zero and one). During training, when an incorrect action is taken by the robot, the correct (target) action is indicated by covering, or uncovering, the light sensors. Weights are updated during training using simple back-propagation of errors.  The formula for updating w_ij, the weight of the link between input unit i and output unit j, at time t+1 is:  w_ij(t+1) = w_ij(t) + η × (t_j(t) - o_j(t)) × i_i(t) + α × Δw_ij(t-1),  where η is the learning rate (set to 0.3), t_j(t) and o_j(t) are the target output and actual output from unit j, respectively, at time t, i_i(t) is the input at unit i at time t, α is the learning momentum (also set to 0.3), and Δw_ij(t-1) is the weight update increment on the link from unit i to unit j in the previous iteration.
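The update rule above can be written out directly. This is a sketch of a single-weight update under the stated η = α = 0.3 defaults; the function name and signature are assumptions of this example.

```python
def update_weight(w_ij, target_j, output_j, input_i, prev_delta,
                  lr=0.3, momentum=0.3):
    """One step of the update rule
        w_ij(t+1) = w_ij(t) + lr*(t_j - o_j)*i_i + momentum*dw_ij(t-1).
    Returns the new weight and the increment to carry into the
    next iteration as prev_delta."""
    delta = lr * (target_j - output_j) * input_i + momentum * prev_delta
    return w_ij + delta, delta
```

With a target of 1, an actual output of 0, an input of 1, and no previous increment, the weight moves up by exactly the learning rate, 0.3.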

Graphical representation of training

Explanation of Graph:  The weights in the network are shown at two intermediate checkpoints and at their final values. At Checkpoint 1, the robot had learned to navigate and blow, but not to detect fire.  By Checkpoint 2, the robot had also learned to detect a fire on the left-hand side. In this case, the weights on the links associated with the first two sensors (LED1 and LED2) converged quickly.  The next weights to converge corresponded to OPT1 and OPT2, the two opto sensors nearest the front of the robot. Finally, the weights of the links associated with sensors OPT3 and OPT4 converged. Thus, in this training session, fire detection in the periphery was the final task learned. Once the neural net had been trained, no additional training was required.

Experiment1 :

Experiment2:

Conclusion:  From the above discussion it is understood that neural networks have the property of learning from data and mimicking the human learning ability.  They can be easily implemented in hardware and do not require a deep understanding of the problem or the process.

References