Chapter 3. Artificial Neural Networks - Introduction -

Overview
1. Biological inspiration
2. Artificial neurons and neural networks
3. Applications

Biological Neuron. Animals react adaptively to changes in their external and internal environment, using their nervous system to produce these behaviours. An appropriate model or simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems.

Biological Neuron. Information transmission happens at the synapses.

Artificial neurons (figure: from the biological neuron to its artificial model)

Artificial neurons. One possible model: inputs x1, x2, ..., xn enter through weighted connections w1, w2, ..., wn, and the neuron combines them into a single output y (figure: weighted-sum unit with n inputs and one output).

Artificial neurons. Nonlinear generalization of the neuron: y = f(x, w), where y is the neuron's output, x is the vector of inputs, and w is the vector of synaptic weights. Examples: the sigmoidal neuron, e.g. y = 1 / (1 + e^(-w·x)), and the Gaussian neuron, e.g. y = exp(-||x - w||^2 / a^2).
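
To make these models concrete, here is a minimal Python sketch of the three neuron types; the threshold value, the parameter-free sigmoid, and the Gaussian width a = 1 are illustrative assumptions, not values from the slides.

```python
import math

def threshold_neuron(x, w, theta):
    """Threshold unit: fires iff the weighted input sum reaches theta."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= theta else 0

def sigmoidal_neuron(x, w):
    """Sigmoidal neuron: squashes the weighted sum into (0, 1)."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def gaussian_neuron(x, w, a=1.0):
    """Gaussian neuron: responds most strongly when x is closest to w."""
    d2 = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-d2 / a ** 2)

print(threshold_neuron([1, 1], [0.6, 0.6], theta=1.0))    # 1
print(round(sigmoidal_neuron([1, 1], [0.6, 0.6]), 3))     # 0.769
print(round(gaussian_neuron([1.0, 1.0], [1.0, 1.0]), 3))  # 1.0
```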

Other models: Hopfield networks and backpropagation networks.

Artificial neural networks (figure: network of connected neurons with inputs and outputs). An artificial neural network is composed of many artificial neurons linked together according to a specific network architecture. The objective of the neural network is to transform the inputs into meaningful outputs.

Artificial neural networks. Tasks to be solved by artificial neural networks:
- controlling the movements of a robot based on self-perception and other information (e.g., visual information);
- deciding the category of potential food items (e.g., edible or non-edible) in an artificial world;
- recognizing a visual object (e.g., a familiar face);
- predicting where a moving object will go when a robot wants to catch it.

Neural network mathematics (figure: layered network drawn with its inputs and outputs)

Neural network mathematics. The neural network is an input/output transformation: y = F(x, W), where x is the vector of inputs, y the vector of outputs, and W is the matrix of all weight vectors.
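
A small sketch of this transformation, assuming sigmoidal neurons and a two-layer shape chosen purely for illustration: each layer applies its weight vectors and the nonlinearity, and F(x, W) is the composition of the layers.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(x, W):
    """One layer: W holds one weight vector per neuron in the layer."""
    return [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in W]

def network(x, Ws):
    """F(x, W): feed x through the layers' weight matrices in turn."""
    for W in Ws:
        x = layer(x, W)
    return x

# Two inputs -> two hidden neurons -> one output neuron (weights invented).
Ws = [
    [[0.5, -0.3], [0.8, 0.2]],  # hidden-layer weight vectors
    [[1.0, -1.0]],              # output-layer weight vector
]
print(network([1.0, 0.5], Ws))  # a single-element output vector
```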

Learning principle for artificial neural networks: ENERGY MINIMIZATION. We need an appropriate definition of energy for artificial neural networks; having that, we can use mathematical optimisation techniques to find how to change the weights of the synaptic connections between neurons. ENERGY = measure of task performance error.
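
A toy sketch of this principle for a single sigmoidal neuron, with energy defined as the summed squared error and the weights changed by plain gradient descent; the training pairs and learning rate are invented for illustration.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Tiny training set (input vector, target output), invented for illustration.
data = [([0.0, 1.0], 0.0), ([1.0, 0.0], 1.0)]
w = [0.1, 0.1]  # initial synaptic weights
eta = 0.5       # learning rate

def energy(w):
    """ENERGY = measure of task performance error (here: summed squared error)."""
    return sum((t - sigmoid(sum(wi * xi for wi, xi in zip(w, x)))) ** 2
               for x, t in data)

before = energy(w)
for _ in range(1000):
    for x, t in data:
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        # Move each weight down the gradient of the squared error.
        for i in range(len(w)):
            w[i] += eta * (t - y) * y * (1 - y) * x[i]

print(round(before, 4), "->", round(energy(w), 4))  # energy decreases toward 0
```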

Perceptrons
- First studied in the late 1950s.
- Also known as layered feed-forward networks.
- The only efficient learning element at that time was for single-layered networks.
- Today, "perceptron" is used as a synonym for a single-layer, feed-forward network.

Perceptrons (figure: perceptron unit)

Sigmoid Perceptron (figure: perceptron with sigmoid activation)

Perceptron learning rule. The teacher specifies the desired output for a given input. The network calculates what it thinks the output should be, then changes its weights in proportion to the error between the desired and calculated results:
Δwi,j = η · (teacheri − outputi) · inputj
where η is the learning rate, (teacheri − outputi) is the error term, and inputj is the input activation. The weight update is wi,j = wi,j + Δwi,j: the delta rule.
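
A minimal sketch of the delta rule for one threshold output unit, trained here on the OR function; the learning rate, fixed threshold, and zero initial weights are arbitrary choices for illustration.

```python
# Delta rule for one threshold unit, trained on OR.
eta = 0.1     # learning rate
theta = 0.5   # fixed threshold; the bias slides below show how to learn it
w = [0.0, 0.0]

examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR

def output(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

for epoch in range(20):
    for x, teacher in examples:
        err = teacher - output(x)     # error term (teacher - output)
        for j in range(len(w)):
            w[j] += eta * err * x[j]  # delta rule weight change

print(w)                                 # converges to roughly [0.5, 0.5]
print([output(x) for x, _ in examples])  # [0, 1, 1, 1]
```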

Adjusting perceptron weights. Δwi,j = η · (teacheri − outputi) · inputj, where missi = (teacheri − outputi). Adjust each wi,j based on inputj and missi. Weights are adjusted after every example: incremental learning. (The slide's worked adaptation table is not preserved in the transcript.)

Node biases. A node's output is a weighted function of its inputs. What is a bias, and how can we learn the bias value? Answer: treat it like just another weight.

Training biases (θ). A node's output is 1 if w1x1 + w2x2 + … + wnxn ≥ θ, and 0 otherwise. Rewrite this as w1x1 + w2x2 + … + wnxn − θ ≥ 0, i.e. w1x1 + w2x2 + … + wnxn + θ·(−1) ≥ 0. Hence the bias θ is just another weight whose activation is always −1: just add one more (bias) input unit to the network topology.
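
A short sketch of the trick: append a constant −1 input so the threshold θ is learned as an ordinary weight. The AND weights below are one illustrative choice.

```python
def augmented(x):
    """Append the constant bias activation -1 so theta is just another weight."""
    return list(x) + [-1]

def output(x, w):
    # w[-1] plays the role of theta: w1x1 + ... + wnxn + theta*(-1) >= 0
    return 1 if sum(wi * xi for wi, xi in zip(w, augmented(x))) >= 0 else 0

# Weights [1, 1] with "threshold weight" 1.5 implement AND (illustrative values).
w = [1.0, 1.0, 1.5]
print([output(x, w) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```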

Perceptron convergence theorem. If a set of <input, output> pairs is learnable (representable), the delta rule will find the necessary weights in a finite number of steps, independent of the initial weights. However, a single-layer perceptron can only learn linearly separable concepts (it works iff gradient descent works).

Linear separability. Consider a perceptron: its output is 1 if W1X1 + W2X2 > θ, and 0 otherwise. In terms of feature space, this means it can only classify examples if a line (more generally, a hyperplane) can separate the positive examples from the negative examples.

What can perceptrons represent? Some complex Boolean functions can be represented, for example the majority function, covered in this lecture (see the sketch below). However, perceptrons are limited in the Boolean functions they can represent.
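
A sketch of the majority-function example: with all weights equal to 1 and threshold n/2, one threshold unit computes majority over n Boolean inputs; the check below is exhaustive for n = 5.

```python
from itertools import product

n = 5
w = [1] * n    # all weights equal to 1
theta = n / 2  # fire when more than half of the inputs are on

def majority_unit(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

# Exhaustive check against the true majority function over all 2^n inputs.
assert all(majority_unit(x) == (1 if sum(x) > n // 2 else 0)
           for x in product([0, 1], repeat=n))
print("a single threshold unit computes majority over", n, "inputs")
```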

The separability problem and the XOR trouble (figure: linear separability in perceptrons)

AND and OR linear separators (figure: separating lines for AND and OR in the input plane); a concrete choice of weights is sketched below.
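
Concrete separating weights, one of many valid choices, sketched in Python:

```python
def unit(x, w, theta):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
# AND: the line x1 + x2 = 1.5 puts only (1, 1) on the positive side.
print([unit(x, [1, 1], 1.5) for x in inputs])  # [0, 0, 0, 1]
# OR: the line x1 + x2 = 0.5 puts everything except (0, 0) on the positive side.
print([unit(x, [1, 1], 0.5) for x in inputs])  # [0, 1, 1, 1]
```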

Separation in n−1 dimensions: the majority function (figure: a separating plane in 3-dimensional input space)

Perceptrons & XOR. For the XOR function there is no way to draw a line that separates the positive from the negative examples.

How do we compute XOR?
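
One standard construction (an assumption here, since the slide's own figure is not in the transcript): add a hidden layer and compute XOR(x1, x2) as OR(x1, x2) AND NOT AND(x1, x2), using three threshold units.

```python
def unit(x, w, theta):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def xor(x1, x2):
    h_or = unit([x1, x2], [1, 1], 0.5)        # hidden unit computing OR
    h_and = unit([x1, x2], [1, 1], 1.5)       # hidden unit computing AND
    return unit([h_or, h_and], [1, -1], 0.5)  # output: OR and not AND

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```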

Perceptron application (figure: cloud of '+' examples in the plane classified by a perceptron)

Multi-Layer Perceptron. One or more hidden layers; sigmoid activation functions (figure: input data → 1st hidden layer → 2nd hidden layer → output layer).

Multi-Layer Perceptron. Types of decision regions by network structure (figure: example two-class regions A and B for each case):
- Single-layer: half plane bounded by a hyperplane
- Two-layer: convex open or closed regions
- Three-layer: arbitrary (complexity limited by the number of nodes)

Conclusion. Neural networks have some disadvantages, such as: the preprocessing they require; the difficulty of interpreting results in high dimensions; and the need to design the learning phase (supervised or unsupervised).