Neurons, Neural Networks, and Learning

Brain Computer: What is it?

The human brain contains a massively interconnected network of 10^10 to 10^11 (on the order of 10 billion) neurons (cortical cells). The biological neuron is a simple "arithmetic computing" element.

Biological Neurons

The schematic model of a biological neuron: dendrites receive signals from the axons of other neurons through synapses, the soma integrates them, and the axon carries the output away.

1. Soma (body cell): a large, round central body in which almost all the logical functions of the neuron are realized.
2. Axon (output): a nerve fibre attached to the soma which serves as the final output channel of the neuron. An axon is usually highly branched.
3. Dendrites (inputs): a highly branching tree of fibres. These long, irregularly shaped nerve fibres (processes) are attached to the soma.
4. Synapses: specialized contacts on a neuron which are the termination points for the axons of other neurons.

Artificial Neuron

 A neuron has a set of n synapses associated with its inputs. Each of them is characterized by a weight w_i.
 A signal x_i at the i-th input is multiplied (weighted) by the weight w_i.
 The weighted input signals are summed; thus, a linear combination of the input signals is obtained. A "free weight" (or bias) w_0, which does not correspond to any input, is added to this linear combination, and this forms the weighted sum z = w_0 + w_1 x_1 + ... + w_n x_n.
 A nonlinear activation function φ is applied to the weighted sum. The value y = φ(z) of the activation function is the neuron's output.
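The computation just described can be sketched in a few lines; the example weights, bias, and the choice of tanh as the activation φ are illustrative assumptions, not values from the lecture.

```python
import math

def neuron_output(inputs, weights, bias, phi=math.tanh):
    """Compute phi(w0 + w1*x1 + ... + wn*xn) for a single artificial neuron."""
    z = bias + sum(w * x for w, x in zip(weights, inputs))
    return phi(z)

# Example: a two-input neuron with assumed weights and bias.
y = neuron_output([1.0, -1.0], weights=[0.5, 0.25], bias=0.1)
```

Swapping φ changes the neuron's behaviour without touching the summation, which is why the activation function is treated as a separate design choice on the following slides.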

A Neuron

f is the function to be learned; x_1, ..., x_n are the inputs; φ is the activation function; z = w_0 + w_1 x_1 + ... + w_n x_n is the weighted sum, so the neuron's output is y = φ(z).

A Neuron

A neuron's functionality is determined by the nature of its activation function: its main properties, its plasticity and flexibility, and its ability to approximate a function to be learned.

Artificial Neuron: Most Popular Activation Functions

 Linear activation: φ(z) = z
 Threshold activation: φ(z) = sign(z)
 Hyperbolic tangent activation: φ(z) = tanh(z)
 Logistic activation: φ(z) = 1 / (1 + e^(-z))
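Written out as code, the four activation functions above look as follows (treating the threshold activation as the sign function with the convention sign(0) = 1, which is an assumption of this sketch):

```python
import math

def linear(z):
    """Linear activation: the identity."""
    return z

def threshold(z):
    """Threshold (sign) activation; sign(0) = 1 by convention here."""
    return 1 if z >= 0 else -1

def tanh_act(z):
    """Hyperbolic tangent activation."""
    return math.tanh(z)

def logistic(z):
    """Logistic (sigmoid) activation, mapping z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))
```

The threshold activation is the one used by the perceptron on the following slides.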

Threshold Neuron (Perceptron)

The output of a threshold neuron is binary, while its inputs may be either binary or continuous. If the inputs are binary, a threshold neuron implements a Boolean function. The Boolean alphabet {1, -1} is usually used in neural network theory instead of {0, 1}. The correspondence with the classical Boolean alphabet {0, 1} is established as follows: y = 1 - 2x, so that 0 corresponds to 1 and 1 corresponds to -1.

Threshold Boolean Functions

The Boolean function f(x_1, ..., x_n) is called a threshold (linearly separable) function if it is possible to find a real-valued weighting vector (w_0, w_1, ..., w_n) such that the equation f(x_1, ..., x_n) = sign(w_0 + w_1 x_1 + ... + w_n x_n) holds for all values of the variables x from the domain of the function f. Any threshold Boolean function may be learned by a single neuron with the threshold activation function.

Threshold Boolean Functions: Geometrical Interpretation

"OR" (disjunction) is an example of a threshold (linearly separable) Boolean function: in the plane of the two inputs, the points with value -1 can be separated from the points with value 1 by a single line. XOR is an example of a non-threshold (not linearly separable) Boolean function: it is impossible to separate its "1s" from its "-1s" by any single line.
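This geometric picture can be checked mechanically: a brute-force search over small integer weights (the search range {-3, ..., 3} is an arbitrary assumption of this sketch) finds a separating neuron for OR and, consistent with the claim, none for XOR.

```python
from itertools import product

INPUTS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
OR_OUT = (1, 1, 1, -1)     # disjunction in the {1, -1} alphabet (1 = true)
XOR_OUT = (-1, 1, 1, -1)

def realizable(target, weight_range=range(-3, 4)):
    """Is there a threshold neuron sign(w0 + w1*x1 + w2*x2) computing target?"""
    for w0, w1, w2 in product(weight_range, repeat=3):
        zs = [w0 + w1 * x1 + w2 * x2 for x1, x2 in INPUTS]
        if any(z == 0 for z in zs):
            continue  # avoid the ambiguous sign(0) case
        if tuple(1 if z > 0 else -1 for z in zs) == target:
            return True
    return False
```

For OR the search succeeds (e.g. w0 = w1 = w2 = 1 works); for XOR it fails for every weight combination in the range, in line with the fact that no real-valued weights exist at all.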

Threshold Boolean Functions and Threshold Neurons

Threshold (linearly separable) functions can be learned by a single threshold neuron. Non-threshold (nonlinearly separable) functions cannot be learned by a single neuron; to learn such functions, a neural network built from threshold neurons is required (Minsky and Papert, 1969). The number of all Boolean functions of n variables is 2^(2^n), but the number of threshold functions is substantially smaller. Indeed, for n = 2, fourteen of the sixteen functions (all except XOR and not XOR) are threshold functions; for n = 3 there are 104 threshold functions out of 256; and for n = 4 there are only about 2000 threshold functions out of 65,536. For larger n, the number T of threshold functions of n variables grows far more slowly than 2^(2^n); it is bounded above by 2^(n^2).
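The n = 2 count can be verified by brute force: enumerating every function a single threshold neuron realizes on two {1, -1}-valued inputs yields exactly 14 of the 16 Boolean functions, with XOR and its negation missing.

```python
from itertools import product

INPUTS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

# Enumerate every function a single threshold neuron can compute on two
# {1, -1} inputs. Small integer weights in {-1, 0, 1} turn out to suffice,
# as confirmed by the count coming out to 14.
functions = set()
for w0, w1, w2 in product((-1, 0, 1), repeat=3):
    zs = [w0 + w1 * x1 + w2 * x2 for x1, x2 in INPUTS]
    if any(z == 0 for z in zs):
        continue  # skip weightings that hit the ambiguous sign(0)
    functions.add(tuple(1 if z > 0 else -1 for z in zs))

# 14 of the 16 Boolean functions of two variables are threshold functions;
# the two missing ones are XOR and its negation.
assert len(functions) == 14
assert (-1, 1, 1, -1) not in functions   # XOR
assert (1, -1, -1, 1) not in functions   # not XOR
```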

Threshold Neuron: Learning

The main property of a neuron, and of a neural network, is the ability to learn from the environment and to improve performance through learning. A neuron (a neural network) learns about its environment through an iterative process of adjustments applied to its synaptic weights. Ideally, a network (a single neuron) becomes more knowledgeable about its environment after each iteration of the learning process.

Threshold Neuron: Learning

Suppose we have a finite set of n-dimensional vectors that describe objects belonging to certain classes (assume for simplicity, but without loss of generality, that there are just two classes and that the vectors are binary). This set is called a learning set: {(X_1, t_1), ..., (X_N, t_N)}, where X_j = (x_1^j, ..., x_n^j) is an input vector and t_j is its class label.

Threshold Neuron: Learning

Learning of a neuron (of a network) is the process of its adaptation to automatically identifying the class membership of all vectors from the learning set, based on the analysis of these vectors: their components form the set of neuron (network) inputs. This process is implemented through a learning algorithm.

Threshold Neuron: Learning

Let T be the desired output of a neuron (of a network) for a certain input vector, and let Y be its actual output. If T = Y, there is nothing to learn. If T ≠ Y, then the neuron has to learn, in order to ensure that after adjustment of the weights its actual output coincides with the desired output.

Error-Correction Learning

If T ≠ Y, then δ = T - Y is the error. The goal of learning is to adjust the weights in such a way that the new actual output Y_new satisfies Y_new = T. That is, the updated actual output must coincide with the desired output.

Error-Correction Learning

The error-correction learning rule determines how the weights must be adjusted to ensure that the updated actual output coincides with the desired output: w_i ← w_i + αδx_i (with the bias updated as w_0 ← w_0 + αδ), where α is the learning rate (taken equal to 1 for the threshold neuron).
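A single application of the rule in its standard perceptron form, w_i ← w_i + αδx_i with δ = T - Y and the bias treated as a weight on a constant input of 1, might be sketched as follows (the numbers in the example are illustrative):

```python
def update_weights(weights, bias, x, target, alpha=1):
    """One error-correction step for a threshold neuron (alpha = 1 case)."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    y = 1 if z >= 0 else -1          # actual output Y
    delta = target - y               # error: T - Y
    new_weights = [w + alpha * delta * xi for w, xi in zip(weights, x)]
    new_bias = bias + alpha * delta  # bias as a weight on constant input 1
    return new_weights, new_bias

# Example: the zero neuron misclassifies x = (1, 1) with desired output -1.
w, b = update_weights([0, 0], 0, (1, 1), target=-1)
```

After this single step the neuron outputs -1 on (1, 1), as desired; in general several passes over the learning set may be needed.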

Learning Algorithm

The learning algorithm sequentially checks, for all vectors from the learning set, whether their class membership is recognized correctly. If so, no action is required; if not, the learning rule is applied to adjust the weights. This iterative process continues either until the membership of every vector from the learning set is recognized correctly, or until membership is misrecognized for only some acceptably small number of vectors (samples from the learning set).
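Combining the error-correction rule with this checking loop gives a minimal trainer. Here it is run on the OR function, a linearly separable example chosen for illustration; the stopping criterion of zero errors is the simple case of the algorithm above.

```python
def train_perceptron(samples, n_inputs, alpha=1, max_epochs=100):
    """samples: list of (input_vector, desired_output) pairs in {1, -1}."""
    w = [0] * n_inputs
    b = 0
    for _ in range(max_epochs):
        errors = 0
        for x, t in samples:
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            y = 1 if z >= 0 else -1
            if y != t:
                errors += 1
                delta = t - y
                w = [wi + alpha * delta * xi for wi, xi in zip(w, x)]
                b += alpha * delta
        if errors == 0:   # every sample recognized correctly: learning is done
            return w, b
    return w, b

# OR in the {1, -1} alphabet (1 = true): true unless both inputs are -1.
OR_SAMPLES = [((1, 1), 1), ((1, -1), 1), ((-1, 1), 1), ((-1, -1), -1)]
w, b = train_perceptron(OR_SAMPLES, n_inputs=2)
```

Because OR is a threshold function, the loop converges after a few epochs; on XOR it would instead run until max_epochs without ever reaching zero errors.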

When we need a network

The functionality of a single neuron is limited. For example, the threshold neuron (the perceptron) cannot learn non-linearly separable functions. To learn those functions (mappings between inputs and outputs) that cannot be learned by a single neuron, a neural network should be used.

The simplest network

Solving XOR problem using the simplest network

Solving XOR problem using the simplest network

(Table: for each of the four input pairs, the weighted sum Z and the output of neuron 1, of neuron 2, and of the output neuron 3, whose output is the resulting XOR value.)
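Such a three-neuron network can be written out directly. The weights below are one standard choice and an assumption of this sketch (neuron 1 computes OR, neuron 2 computes NOT-AND, and neuron 3 ANDs their outputs); the slide's own numeric entries may differ.

```python
def sign(z):
    return 1 if z >= 0 else -1   # none of the sums below is ever 0

def xor_net(x1, x2):
    """XOR in the {1, -1} alphabet via two hidden threshold neurons."""
    h1 = sign(1 + x1 + x2)       # neuron 1: x1 OR x2
    h2 = sign(1 - x1 - x2)       # neuron 2: NOT (x1 AND x2)
    return sign(-1 + h1 + h2)    # neuron 3: h1 AND h2  ->  XOR(x1, x2)
```

The output neuron fires only when the inputs differ: then OR is true and AND is false, so both hidden neurons output 1.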