1 Mehran University of Engineering and Technology, Jamshoro Department of Electronic, Telecommunication and Bio-Medical Engineering 8th Term Neural Networks and Fuzzy Logic By Dr. Mukhtiar Ali Unar Telephone:

2 Books: 1. Haykin, S., Neural Networks – A Comprehensive Foundation, Second edition or latest, Macmillan. 2. Hagan, M.T., Demuth, H.B., and Beale, M., Neural Network Design, PWS Publishing Company.

3 Main Features of Neural Networks: Artificial Neural Networks (ANNs) learn by experience rather than by modelling or programming. ANN architectures are distributed, inherently parallel and potentially real-time. They have the ability to generalize. They do not require prior understanding of the process or phenomenon being studied. They can form arbitrary continuous non-linear mappings. They are robust to noisy data. VLSI implementation is easy.

4 Some Applications Pattern recognition, including character recognition, speech recognition, face recognition, on-line signature recognition, colour recognition, etc. Weather forecasting, load forecasting. Intelligent routers, intelligent traffic monitoring, intelligent filter design. Intelligent controller design. Intelligent modelling. And many more.

5 Limitations Conventional neural networks are black box models. Tools for analysis and model validation are not well established. An intelligent machine can only solve the specific problems for which it has been trained. The human brain is very complex and cannot be fully simulated with present computing power; an artificial neural network does not have the capability of the human brain. Issue: What is the difference between human brain neurons and the brain neurons of other animals?

6 Definition: A universal definition of ANNs is not available; however, the following definition summarizes the basic features of an ANN: Artificial Neural Networks, also called Neurocomputing, Parallel Distributed Processing (PDP), connectionist networks, or simply neural networks, are interconnected assemblies of simple processing elements, called neurons, units or nodes, whose functionality is loosely based on the biological neuron. The processing ability of the network is stored in the inter-unit connection strengths, or weights, obtained by a process of adaptation to, or learning from, a set of training patterns.

7 Biological Neuron: A simplified view of a biological (real) neuron

8 Examples of some real neurons

9 Image of the vertical organization of neurons in the primary visual cortex

10 Soma or Cell Body The central part of a neuron is called the soma or cell body which contains the nucleus and the protein synthesis machinery. The size of the soma of a typical neuron is about 10 to 80 μm. Almost all the logical functions are realized in this part of a neuron.

11 The Dendrites The dendrites represent a highly branching tree of fibers and are attached to the soma. The word dendrite has been taken from the Greek word dendron, which means tree. Dendrites connect the neuron to a set of other neurons. Dendrites either receive inputs from other neurons or connect other dendrites to the synaptic outputs.

12 The Axon It is a long tubular fiber which divides itself into a number of branches towards its end. Its length can be from 100 μm to 1 m. The function of an axon is to transmit the generated neural activity to other neurons or to muscle fibers. In other words, it is the output channel of the neuron. The point where the axon is connected to its cell body is called the axon hillock.

13 Synapses The junction at which a signal is passed from one neuron to the next is called a synapse (from the Greek verb to join). It has a button-like shape with a diameter around 1 μm. Usually a synapse is not a physical connection (the axon and the dendrite do not touch); there is a gap called the synaptic cleft that is normally between 200 Å and 500 Å (1 Å = 10⁻¹⁰ m). The strength of the synaptic connection between neurons can be chemically altered by the brain in response to favorable and unfavorable stimuli, in such a way as to adapt the organism to function optimally within its environment.

14 “The nerve fibre is clearly a signalling mechanism of limited scope. It can only transmit a succession of brief explosive waves, and the message can only be varied by changes in the frequency and in the total number of these waves. … But this limitation is really a small matter, for in the body the nervous units do not act in isolation as they do in our experiments. A sensory stimulus will usually affect a number of receptor organs, and its result will depend on the composite message in many nerve fibres.” Lord Adrian, Nobel Acceptance Speech, 1932.

15 We now know it's not quite that simple: Single neurons are highly complex electrochemical devices. Synaptically connected networks are only part of the story. Many forms of interneuron communication are now known, acting over many different spatial and temporal scales.

16 Single neuron activity Membrane potential is the voltage difference between a neuron and its surroundings (taken as 0 mV). [Figure: a cell with its membrane potential indicated.]

17 Single neuron activity If you measure the membrane potential of a neuron and plot it on the screen, it looks like the trace below. [Figure: membrane potential trace with a spike labelled.]

18 Single neuron activity A spike is generated when the membrane potential exceeds its threshold.

19 Abstraction So we can forget all sub-threshold activity and concentrate on spikes (action potentials), which are the signals sent to other neurons. [Figure: spike train.]

20 Only spikes are important, since other neurons receive them (as signals). Neurons communicate with spikes. Information is coded by spikes. So if we can manage to measure the spiking times, we can decipher how the brain works ….

21 Again it's not quite that simple: spiking times in the cortex are random.

22 With identical input to the identical neuron, spike patterns are similar, but not identical.

23 Recording from a real neuron: membrane potential

24 A single spike time is meaningless. To extract useful information, we average over a group of neurons in a local circuit that code the same information, over a time window, to obtain the firing rate r (spikes per second, in Hz). [Figure: a local circuit of neurons and a time window of 1 sec over which r is measured.]
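As an illustrative sketch (assuming the usual definition of firing rate as spike count per neuron per second over the window; the spike-time data below are hypothetical), the averaging described above can be written as:

# Estimate the firing rate r of a local circuit of neurons over a time window.
# Assumption: r = total spike count / (number of neurons * window length),
# i.e. the average spikes per second per neuron in the circuit.
def firing_rate(spike_times_per_neuron, window_start, window_end):
    """spike_times_per_neuron: list of lists of spike times (seconds), one list per neuron."""
    window_length = window_end - window_start
    total_spikes = sum(
        sum(1 for t in spikes if window_start <= t < window_end)
        for spikes in spike_times_per_neuron
    )
    return total_spikes / (len(spike_times_per_neuron) * window_length)  # in Hz

# Hypothetical example: 3 neurons observed over a 1-second window.
spikes = [[0.05, 0.30, 0.72], [0.10, 0.55], [0.20, 0.40, 0.80, 0.95]]
print(firing_rate(spikes, 0.0, 1.0))  # 9 spikes / (3 neurons * 1 s) = 3.0 Hz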

25 So we can have a network of these local groups. [Figure: input local circuits with firing rates r1, …, rn connected through synaptic strengths w1, …, wn to an output local circuit.] Hence each node represents the firing rate of a group of neurons.

26 ri is the firing rate of the i-th input local circuit. The neurons at the output local circuit receive signals in the form u = Σi wi ri. The output firing rate of the output local circuit is then given by R = f(u) = f(Σi wi ri), where f is the activation function, generally a sigmoidal function of some sort, and wi is the weight (synaptic strength) measuring the strength of the interaction between neurons.

27 Artificial Neural Networks [Figure: from single neurons (which send out spikes), to local circuits (averaged to get firing rates), to artificial neural networks.]

28 Is it possible to simulate the human brain? No; with present computing power it is almost impossible to simulate the full behavior of the human brain. Reasons: It is estimated that the human cerebral cortex contains 100 billion neurons. Each neuron has as many as 1000 dendrites and, hence, within the cerebral cortex there are approximately 100,000 billion synapses. The behavior of the real nervous system is very complex and not yet fully known.
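A quick back-of-the-envelope check of the synapse estimate above (a sketch that simply multiplies the two figures quoted on this slide):

# Rough synapse count for the human cerebral cortex, as estimated on the slide.
neurons = 100e9             # ~100 billion neurons
synapses_per_neuron = 1000  # ~1000 dendrites/synapses per neuron (the slide's assumption)
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e} synapses")  # 1e+14, i.e. 100,000 billion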

29 Model of a single artificial neuron [Figure: inputs x1, x2, …, xp are multiplied by weights w1, w2, …, wp and summed together with a bias b; the net input u = Σi wi xi + b is passed through an activation function f to give the output y = f(u).]

30 Model of a single artificial neuron For simplicity, the threshold contribution b may be treated as an extra input to the neuron, as shown in the figure, where x0 = 1 and w0 = b. In this case u = Σ(i=0…p) wi xi and y = f(u). [Figure: the same neuron redrawn with the extra input x0 = 1 weighted by w0 = b.]
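A minimal sketch of this single-neuron model in Python (the function names, the choice of the hard-limit activation, and the example numbers are illustrative assumptions):

# Single artificial neuron: y = f(sum_i(w_i * x_i) + b)
def hard_limit(u):
    """Hard-limit activation: 1 if u >= 0, else 0."""
    return 1 if u >= 0 else 0

def neuron_output(x, w, b, f=hard_limit):
    """x: inputs, w: weights (same length as x), b: bias, f: activation function."""
    u = sum(wi * xi for wi, xi in zip(w, x)) + b  # net input
    return f(u)

# Equivalent "augmented input" form: treat b as an extra weight w0 on a constant input x0 = 1.
def neuron_output_augmented(x, w, f=hard_limit):
    """w = [b, w1, ..., wp]; x is automatically prefixed with x0 = 1."""
    x_aug = [1] + list(x)
    u = sum(wi * xi for wi, xi in zip(w, x_aug))
    return f(u)

print(neuron_output([3], [1.8], -2))            # e.g. u = 5.4 - 2 = 3.4 -> output 1
print(neuron_output_augmented([3], [-2, 1.8]))  # same computation in the augmented form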

31 Activation Functions: The activation function of a neuron, also known as transfer function, may be linear or non-linear. A particular activation function is chosen to satisfy some specification of the problem that the neuron is attempting to solve. A variety of activation functions have been proposed. Some of these are discussed below:

32 (a) The hard limit activation function: This activation function sets the output of the neuron to 0 if the function argument is less than 0, or to 1 if its argument is greater than or equal to 0 (see figure below). This function is generally used to create neurons that classify inputs into two distinct classes. [Figure: step from 0 to 1 at u = 0.]

33 Example 1: The input to a neuron is 3, and its weight is 1.8. (a) What is the net input to the activation function? (b) What is the output of the neuron if it has the hard limit transfer function? Solution: (a) Net input = u = 1.8 × 3 = 5.4. (b) Neuron output = f(u) = 1.

34 Example 2: Repeat example 1 if the neuron's bias is (i) –2 (ii) –6. Solution: (i) net input = u = (1.8 × 3) + (–2) = 5.4 – 2 = 3.4, output = f(u) = 1.0. (ii) net input = u = (1.8 × 3) + (–6) = 5.4 – 6 = –0.6, output = 0.

35 Example 3: Given a two-input neuron with the following parameters: b = 1.4, W = [ ] and P = [-3 5], calculate the neuron output for the hard limit transfer function. Solution: net input to the activation function = u; Neuron output = f(u) = 1.0.

36 (b) Symmetrical hard limit: This activation function is defined as follows: y = –1 for u < 0; y = +1 for u ≥ 0, where u is the net input and y is the output of the function (see figure below). [Figure: step from –1 to +1 at u = 0.]

37 Example 4: Given a two-input neuron with the following parameters: b = 1.2, W = [3 2], and p = [-5 6], calculate the neuron output for the symmetrical hard limit activation function. Solution: net input = u = (3)(–5) + (2)(6) + 1.2 = –1.8; Output = y = f(u) = –1.
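A quick check of this example in code (a sketch; sym_hard_limit is an assumed helper name):

# Verify Example 4: two-input neuron with the symmetrical hard-limit activation.
def sym_hard_limit(u):
    return 1 if u >= 0 else -1

w = [3, 2]     # weights
p = [-5, 6]    # inputs
b = 1.2        # bias
u = sum(wi * pi for wi, pi in zip(w, p)) + b   # -15 + 12 + 1.2 = -1.8
print(round(u, 2), sym_hard_limit(u))          # -1.8 -1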

38 (c) Linear activation function: The output of a linear activation function is equal to its input: y = u, as shown in the following figure. [Figure: straight line y = u through the origin.]

39 (d) The log-sigmoid activation function: This activation function takes the input (which may have any value between plus and minus infinity) and squashes the output into the range 0 to 1, according to the expression y = 1/(1 + e^(–u)). A plot of this function is given below. [Figure: sigmoid curve rising from 0 to 1.]

40 Example 5: Repeat example 2 for a log-sigmoid activation function. Solution: (i) net input = u = 3.4, output = f(u) = 1/(1 + e^(–3.4)) ≈ 0.968. (ii) net input = u = –0.6, output = f(u) = 1/(1 + e^(0.6)) ≈ 0.354. Example 6: Repeat example 3 for a log-sigmoid activation function. Solution: output ≈ 1.
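A quick numerical check of Example 5 (a sketch; log_sigmoid is an assumed helper name):

import math

# Log-sigmoid activation: squashes any real input into the range (0, 1).
def log_sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

print(round(log_sigmoid(3.4), 3))   # Example 5(i):  ~0.968
print(round(log_sigmoid(-0.6), 3))  # Example 5(ii): ~0.354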

41 (e) Hyperbolic Tangent Sigmoid: This activation function is shown in the following figure and is defined mathematically as y = (e^u – e^(–u)) / (e^u + e^(–u)) = tanh(u); it squashes the input into the range –1 to +1. [Figure: tanh curve rising from –1 to +1.]

42 Other choices of activation functions include the saturation limiter, the Gaussian function, the Schmitt trigger, and many more. [Figures: plots of these functions.]

43 Artificial Neural Network (ANN) architectures: Connecting several neurons in some specific manner yields an artificial neural network. The architecture of an ANN defines the network structure, that is, the number of neurons in the network and their interconnectivity. In a typical ANN architecture, the artificial neurons are connected in layers and they operate in parallel. The weights, or strengths of connection between the neurons, are adapted during use to yield good performance. Each ANN architecture has its own learning rule.

44 Classes of ANN architectures: (a) Single-layer feedforward networks [Figure: input layer connected directly to output layer.]

45 (b) Multilayer Feedforward Networks [Figure: fully connected feedforward network with one hidden layer — input layer, hidden layer, output layer.]
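A minimal sketch of a forward pass through a fully connected network with one hidden layer (the sizes, weights, and the use of the log-sigmoid activation are illustrative assumptions, not taken from the figure):

import math

def log_sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def layer(inputs, weights, biases, f):
    """One fully connected layer: each neuron computes f(sum_i w_i * x_i + b)."""
    return [f(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

# Hypothetical network: 2 inputs -> 3 hidden neurons -> 1 output neuron.
W_hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]   # one weight row per hidden neuron
b_hidden = [0.1, -0.1, 0.05]
W_output = [[0.7, -0.5, 0.2]]                       # one weight row per output neuron
b_output = [0.0]

x = [1.0, 2.0]
h = layer(x, W_hidden, b_hidden, log_sigmoid)   # hidden layer activations
y = layer(h, W_output, b_output, log_sigmoid)   # network output
print(y)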

46 (b) Multilayer Feedforward Networks [Figure: partially connected feedforward network with one hidden layer — input signal, input layer, hidden layer, output layer.]

47 Recurrent or Feedback Networks: Recurrent network with no hidden neurons

48 Recurrent Networks: example 2
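The figures themselves are not reproduced here, so the following is only a generic sketch of what makes a network recurrent: the outputs at one time step are fed back as inputs at the next. All names and numbers are illustrative assumptions:

import math

def tanh_neuron(inputs, weights, bias):
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Recurrent network with no hidden neurons: two output units whose previous
# outputs are fed back, together with the external input, at every time step.
W_in = [[0.6], [-0.4]]            # weights on the external input, one row per unit
W_fb = [[0.3, -0.2], [0.5, 0.1]]  # feedback weights on the previous outputs
b = [0.0, 0.1]

y = [0.0, 0.0]                    # initial outputs
for t, u in enumerate([1.0, 0.5, -1.0]):   # external input sequence
    y = [tanh_neuron([u] + y, W_in[i] + W_fb[i], b[i]) for i in range(2)]
    print(t, y)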

49 Lattice Structures: