
Artificial Neural Networks (1) Dr. Hala Farouk

Biological Neurons of the Brain

History of Neural Networks In 1943, Warren McCulloch and Walter Pitts introduced one of the first artificial neurons. The main feature of their neuron model is that a weighted sum of input signals is compared to a threshold to determine the neuron output: when the sum is greater than or equal to the threshold, the output is 1; otherwise it is 0. They went on to show that networks of these neurons could, in principle, compute any arithmetic or logical function. Unlike biological networks, the parameters of their networks had to be designed by hand, as no training method was available. In the late 1950s, Frank Rosenblatt introduced a learning rule for training perceptron networks to solve pattern recognition problems: perceptrons learn from their mistakes, even when initialized with random weights. In the 1980s, multilayer perceptron networks and improved methods for training them were introduced.

Analogy

Single-Input Neuron A single-input neuron has a weight w and a bias b. It forms the net input n = w p + b and passes it through a transfer function to produce the output a = f(w p + b). The output depends on the transfer function f chosen by the designer; w and b are parameters adjustable by some learning rule.

Learning Rules By learning we mean a procedure for modifying the weights and biases of a network. Training algorithms fall into three broad categories. Supervised learning: we need a set of examples of correct behavior for the NN, {p1, t1}, {p2, t2}, …, {pQ, tQ}, where pq is an input and tq is the corresponding correct (target) output. Reinforcement learning: similar to supervised learning, but instead of the correct output, only a grade (score) is given; this is currently much less common than supervised learning. Unsupervised learning: the weights and biases are modified in response to network inputs only; most of these algorithms perform clustering operations.

Typical Transfer Functions

Typical Transfer Functions The log-sigmoid transfer function is commonly used in multilayer networks that are trained using the back-propagation algorithm, in part because this function is differentiable.
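These functions are easy to state directly in code. A minimal sketch in Python, using the hardlim/purelin/logsig naming convention of these slides (the exact set of functions shown on the slide is assumed):

```python
import math

def hardlim(n):   # hard limit: 1 if n >= 0, else 0
    return 1.0 if n >= 0 else 0.0

def hardlims(n):  # symmetrical hard limit: +1 or -1
    return 1.0 if n >= 0 else -1.0

def purelin(n):   # linear: output equals net input
    return n

def satlin(n):    # saturating linear: output clipped to [0, 1]
    return min(max(n, 0.0), 1.0)

def logsig(n):    # log-sigmoid: squashes to (0, 1), differentiable
    return 1.0 / (1.0 + math.exp(-n))

def tansig(n):    # hyperbolic tangent sigmoid: squashes to (-1, 1)
    return math.tanh(n)
```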

Example on Single-Input Neuron The input to a single-input neuron is 2.0, its weight is 2.3, and its bias is -3. What is the net input to the transfer function? n = w p + b = (2.3)(2) + (-3) = 1.6. What is the neuron output for the following transfer functions? Hard limit: a = hardlim(1.6) = 1.0. Linear: a = purelin(1.6) = 1.6. Log-sigmoid: a = logsig(1.6) = 1/(1 + e^-1.6) = 0.8320.
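A quick check of this arithmetic (a sketch; only the standard library is used):

```python
import math

w, b, p = 2.3, -3.0, 2.0
n = w * p + b                                # net input: (2.3)(2) + (-3)
print(round(n, 4))                           # 1.6
print(1.0 if n >= 0 else 0.0)                # hardlim(1.6) -> 1.0
print(n)                                     # purelin(1.6) -> 1.6
print(round(1.0 / (1.0 + math.exp(-n)), 4))  # logsig(1.6)  -> 0.832
```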

Multi-Input Neuron In the weight w_i,j, the first index refers to the neuron and the second index to the input. The net input is n = w1,1 p1 + w1,2 p2 + w1,3 p3 + … + w1,R pR + b, or in matrix form n = W p + b.

Multi-Input Neuron The number of inputs is set by the external specifications of the problem. If you want to design a neural network to predict kite-flying conditions and the inputs are air temperature, wind velocity, and humidity, then there would be three inputs to the network (R = 3).

Example on Multi-Input Neuron Given a two-input neuron with the following parameters: b = 1.2, W = [3 2], and p = [-5 6]T, calculate the neuron output for the following transfer functions: a symmetrical hard limit transfer function, a saturating linear transfer function, and a hyperbolic tangent sigmoid transfer function. n = W p + b = [3 2] [-5 6]T + 1.2 = -1.8. a = hardlims(-1.8) = -1. a = satlin(-1.8) = 0. a = tansig(-1.8) = -0.9468.
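The same calculation in vector form, as a sketch with NumPy:

```python
import numpy as np

W = np.array([[3.0, 2.0]])          # one neuron, two inputs
p = np.array([-5.0, 6.0])
b = 1.2

n = float(W @ p + b)                # -15 + 12 + 1.2 = -1.8
print(round(n, 4))                  # -1.8
print(1.0 if n >= 0 else -1.0)      # hardlims(-1.8) -> -1.0
print(min(max(n, 0.0), 1.0))        # satlin(-1.8)   -> 0.0
print(round(float(np.tanh(n)), 4))  # tansig(-1.8)   -> -0.9468
```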

S Neurons with R Inputs each

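The slide's layer diagram reduces to a = f(W p + b), where W is an S×R weight matrix whose element w_i,j connects input j to neuron i, and b and a are vectors of length S. A minimal sketch (the sizes and values below are illustrative assumptions):

```python
import numpy as np

def layer(W, b, p, f=np.tanh):
    """One layer of S neurons with R inputs: a = f(W p + b).
    W has shape (S, R); b and the output a have shape (S,)."""
    return f(W @ p + b)

# Example: S = 3 neurons, R = 2 inputs (values chosen arbitrarily)
W = np.array([[ 1.0, -0.5],
              [ 0.2,  0.8],
              [-1.0,  1.0]])
b = np.array([0.1, -0.3, 0.0])
p = np.array([2.0, 1.0])
print(layer(W, b, p))   # vector of S = 3 outputs
```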

Example on Single-Neuron Perceptron a = hardlim(n) = hardlim(1wT p + b). Given w1,1 = 1, w1,2 = 1, b = -1, the decision boundary (the line on which n = 0) is n = 1wT p + b = w1,1 p1 + w1,2 p2 + b = p1 + p2 - 1 = 0. On one side of this boundary the output will be 0; on the line and on the other side, the output is 1. Test with any point to determine which side is which. The boundary is always orthogonal to 1w.
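A quick way to test points against this boundary (a sketch, using the weights and bias given above):

```python
import numpy as np

w = np.array([1.0, 1.0])   # 1w = [w11, w12]
b = -1.0

def perceptron(p):
    return 1.0 if w @ p + b >= 0 else 0.0

print(perceptron(np.array([2.0, 2.0])))   # above the line p1 + p2 = 1 -> 1.0
print(perceptron(np.array([0.0, 0.0])))   # below the line            -> 0.0
print(perceptron(np.array([0.5, 0.5])))   # on the boundary (n = 0)   -> 1.0
```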

Adjusting the Bias If the decision boundary line is given, then we can adjust the bias by picking any point p on the line and solving the following equation for b: 1wT p + b = 0.

Design Problem Let's design an AND gate using a NN. Draw the input space: black dots --> output = 1, white dots --> output = 0.

Design Problem cont. Select a decision boundary: there are an infinite number of lines that separate the black dots from the white dots, BUT it is reasonable to choose a line halfway between the two classes. Choose a weight vector that is orthogonal to the decision boundary: the weight vector can be of any length, so there are infinite possibilities. One choice is 1wT = [2 2].

Design Problem cont. Find the bias: pick any point on the decision boundary line and substitute it into 1wT p + b = 0. For example, with p = [1.5 0]T: 1wT p + b = [2 2] [1.5 0]T + b = 3 + b = 0, so b = -3. Finally, test the network.
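Putting the design together, a sketch that verifies the resulting AND perceptron over all four input patterns:

```python
import numpy as np

w = np.array([2.0, 2.0])   # the chosen weight vector 1w
b = -3.0                   # bias found from the boundary point [1.5, 0]

def hardlim(n):
    return 1.0 if n >= 0 else 0.0

for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    a = hardlim(w @ np.array(p, dtype=float) + b)
    print(p, "->", a)      # only (1, 1) gives 1.0, i.e. AND
```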

Multiple-Neuron Perceptron There will be one decision boundary for EACH neuron. The decision boundaries are defined by iwT p + bi = 0, where iw is the i-th row of the weight matrix W.

Example on Supervised Learning The problem has two inputs and one output, so the network has two inputs and one neuron.

Example on Supervised Learning cont. Let's start without any bias, so there are only two parameters, w1,1 and w1,2. Removing the bias means the decision boundary must pass through the origin.

Example on Supervised Learning cont. We want a learning rule that will find a weight vector that points in one of the directions that classifies the training set correctly (the length of the vector is not important). So let's start with random weights 1wT = [1.0 -0.8] and present the network with p1 = [1 2]T: a = hardlim([1.0 -0.8] [1 2]T) = hardlim(-0.6) = 0. The NN has made a mistake!

Example on Supervised Learning cont. Why did it make a mistake? If we adjust the weights by adding p1 to 1w, this makes 1w point more in the direction of p1, so that p1 will hopefully no longer be classified into the wrong zone. SO: if t = 1 and a = 0, then 1wnew = 1wold + p.

Example on Supervised Learning cont. 1wnew = 1wold + p = [1.0 -0.8]T + [1 2]T = [2.0 1.2]T. Test the next input, p2 = [-1 2]T, with the new weights: a = hardlim([2.0 1.2] [-1 2]T) = hardlim(0.4) = 1. Again a mistake; this time we want 1w to move away from p2. SO: if t = 0 and a = 1, then 1wnew = 1wold - p.

Example on Supervised Learning cont. 1wnew = 1wold - p = [2.0 1.2]T - [-1 2]T = [3.0 -0.8]T. Test the next input, p3 = [0 -1]T, with the new weights: a = hardlim([3.0 -0.8] [0 -1]T) = hardlim(0.8) = 1. Again a mistake; we want 1w to move away from p3. SO: if t = 0 and a = 1, then 1wnew = 1wold - p.

Example on Supervised Learning cont. 1wnew = 1wold - p = [3.0 -0.8]T - [0 -1]T = [3.0 0.2]T. Test the first input, p1, again with the new weights: a = hardlim([3.0 0.2] [1 2]T) = hardlim(3.4) = 1. Now correct. Repeat for all other inputs until none is misclassified. SO: if t = a, then 1wnew = 1wold (no change).
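The three cases combine into the single update 1wnew = 1wold + (t - a) p. A sketch reproducing this example (the targets t1 = 1, t2 = 0, t3 = 0 are taken from the corrections above):

```python
import numpy as np

# Training set from the example: p1 is class 1, p2 and p3 are class 0
P = [np.array([1.0, 2.0]), np.array([-1.0, 2.0]), np.array([0.0, -1.0])]
T = [1.0, 0.0, 0.0]

w = np.array([1.0, -0.8])          # the random starting weights above

def hardlim(n):
    return 1.0 if n >= 0 else 0.0

for epoch in range(10):            # cycle until no mistakes are made
    mistakes = 0
    for p, t in zip(P, T):
        a = hardlim(w @ p)         # no bias in this example
        if a != t:
            w = w + (t - a) * p    # +p when t=1, a=0; -p when t=0, a=1
            mistakes += 1
    if mistakes == 0:
        break

print(w)   # [3.0, 0.2], matching the hand iterations above
```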

Multilayer Network A network with two hidden layers and one output layer.
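A forward pass through such a network, as a sketch; the layer sizes, random weights, and choice of logsig transfer function here are illustrative assumptions:

```python
import numpy as np

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

def forward(p, layers):
    """layers is a list of (W, b, f) triples; each layer computes f(W a + b)."""
    a = p
    for W, b, f in layers:
        a = f(W @ a + b)
    return a

rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 2)), np.zeros(4), logsig),   # hidden layer 1
    (rng.standard_normal((3, 4)), np.zeros(3), logsig),   # hidden layer 2
    (rng.standard_normal((1, 3)), np.zeros(1), logsig),   # output layer
]
print(forward(np.array([0.5, -1.0]), layers))
```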

Recurrent Network A recurrent network is a network with feedback: some of its outputs are connected back to its inputs. a(1) = satlins(W a(0) + b), a(2) = satlins(W a(1) + b), …
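A sketch of this feedback iteration; W, b, and the initial condition a(0) are arbitrary illustrative values:

```python
import numpy as np

def satlins(n):                     # symmetric saturating linear: clip to [-1, 1]
    return np.clip(n, -1.0, 1.0)

# Feedback iteration a(k+1) = satlins(W a(k) + b)
W = np.array([[0.5, -0.2],
              [0.1,  0.4]])
b = np.array([0.1, -0.1])
a = np.array([1.0, -1.0])           # initial condition a(0)

for k in range(5):
    a = satlins(W @ a + b)
    print(k + 1, a)                 # the state evolves step by step
```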