Neural Networks. Kasin Prakobwaitayakit, Department of Electrical Engineering, Chiangmai University. EE459 Neural Networks: The Structure.

Neural Networks The Structure of Neurones A neurone has a cell body, a branching input structure (the dendrites) and a branching output structure (the axon). –Axons connect to dendrites via synapses. –Electro-chemical signals are propagated from the dendritic input, through the cell body, and down the axon to other neurones.

Neural Networks The Structure of Neurones A neurone only fires if its input signal exceeds a certain amount (the threshold) within a short time period. Synapses vary in strength –Good connections allow a large signal –Weak connections allow only a weak signal –Synapses can be either excitatory or inhibitory.

Neural Networks A Classic Artificial Neuron (1) [Diagram: inputs a_0, a_1, …, a_n plus a bias input +1 are multiplied by weights w_j0, w_j1, …, w_jn and summed to give S_j; the activation function f(S_j) produces the output X_j.]

Neural Networks A Classic Artificial Neuron (2) All neurons contain an activation function, which determines whether the incoming signal is strong enough to produce an output. The slide shows several functions that could be used as the activation function.

Neural Networks Learning Once the output is calculated, the desired output is given to the program so that it can modify the weights. After the modifications are done, the same inputs will produce outputs closer to those desired. Formula: Weight_N = Weight_N + learning rate × (Desired Output − Actual Output) × Input_N
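This update rule (the delta rule) can be sketched in a few lines of Python; the function and variable names below are illustrative, not from the slides:

```python
def update_weights(weights, inputs, desired, actual, learning_rate=0.1):
    """Delta rule: adjust each weight in proportion to the error and its input."""
    error = desired - actual
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

# One update step: the output was 0 but 1 was desired,
# so the weights on active inputs are nudged upward.
new_weights = update_weights([0.2, -0.4], [1.0, 1.0], desired=1, actual=0)
print(new_weights)
```

Note that when the actual output already equals the desired output, the error term is zero and the weights are left unchanged.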

Neural Networks Tractable Architectures Feedforward Neural Networks –Connections in one direction only –Partial biological justification Complex models with constraints (Hopfield and ART). –Feedback loops included –Complex behaviour, limited by constraining architecture

Neural Networks Fig. 1: Multilayer Perceptron [Diagram: input signals (external stimuli) enter at the input layer, pass through adjustable weights, and emerge from the output layer as output values.]

Neural Networks Types of Layer The input layer. –Introduces input values into the network. –No activation function or other processing. The hidden layer(s). –Perform classification of features. –In principle, two hidden layers are sufficient to solve any problem. –In practice, the features of a problem may mean that more layers work better.

Neural Networks Types of Layer (continued) The output layer. –Functionally just like the hidden layers –Outputs are passed on to the world outside the neural network.

Neural Networks A Simple Model of a Neuron Each neuron has a threshold value Each neuron has weighted inputs from other neurons The input signals form a weighted sum If the activation level exceeds the threshold, the neuron “fires” [Diagram: inputs y_1, y_2, y_3, …, y_i with weights w_1j, w_2j, w_3j, …, w_ij feeding a unit with output O.]
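This simple threshold model can be written out directly (an illustrative sketch assuming a hard threshold; the names are hypothetical):

```python
def threshold_neuron(inputs, weights, threshold):
    """Output 1 ("fire") if the weighted sum of the inputs exceeds the threshold."""
    activation = sum(w * y for w, y in zip(weights, inputs))
    return 1 if activation > threshold else 0

# With these weights and threshold, the neuron behaves like an AND gate:
print(threshold_neuron([1, 1], [0.6, 0.6], threshold=1.0))  # 1: fires
print(threshold_neuron([1, 0], [0.6, 0.6], threshold=1.0))  # 0: stays silent
```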

Neural Networks An Artificial Neuron Each hidden or output neuron has weighted input connections from each of the units in the preceding layer. The unit performs a weighted sum of its inputs, and subtracts its threshold value, to give its activation level. The activation level is passed through a sigmoid activation function f(x) to determine the output. [Diagram: inputs y_1, …, y_i, weights w_1j, …, w_ij, activation function f(x), output O.]

Neural Networks Mathematical Definition Number all the neurons from 1 up to N. The output of the j'th neuron is o_j. The threshold of the j'th neuron is θ_j. The weight of the connection from unit i to unit j is w_ij. The activation of the j'th unit is a_j. The activation function is written as σ(x).

Neural Networks Mathematical Definition Since the activation a_j is given by the sum of the weighted inputs minus the threshold, we can write: a_j = Σ_i (w_ij o_i) − θ_j and o_j = σ(a_j)
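These two equations translate directly into code (a sketch assuming the logistic function for σ; the names are illustrative):

```python
import math

def sigma(x):
    """Logistic activation function: sigma(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(outputs_in, weights, theta):
    """o_j = sigma(a_j), where a_j = sum_i(w_ij * o_i) - theta_j."""
    a_j = sum(w * o for w, o in zip(weights, outputs_in)) - theta
    return sigma(a_j)

# Here the weighted sum exactly cancels the threshold, so a_j = 0 and o_j = 0.5.
print(neuron_output([1.0, 0.5], [0.4, 0.8], theta=0.8))  # 0.5
```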

Neural Networks Activation functions Transform a neuron's input into its output. Features of activation functions: –A squashing effect is required Prevents accelerating growth of activation levels through the network. –Simple and easy to calculate –Monotonically non-decreasing (order-preserving)

Neural Networks Standard activation functions The hard-limiting threshold function –Corresponds to the biological paradigm: the neuron either fires or not Sigmoid functions ('S'-shaped curves) –The logistic function: σ(x) = 1 / (1 + e^(−ax)) –The hyperbolic tangent (symmetrical) –Both functions have a simple differential –Only the shape is important
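Both sigmoids and the simple form of their derivatives can be checked numerically (an illustrative sketch; the identity σ′(x) = a · σ(x) · (1 − σ(x)) is what makes the differential "simple"):

```python
import math

def logistic(x, a=1.0):
    """'S'-shaped logistic function: squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a * x))

def logistic_derivative(x, a=1.0):
    """The derivative can be written using the output itself: a * s * (1 - s)."""
    s = logistic(x, a)
    return a * s * (1.0 - s)

print(logistic(0.0))             # 0.5 at the midpoint
print(logistic_derivative(0.0))  # 0.25, the maximum slope
print(math.tanh(0.0))            # tanh: the symmetrical variant, into (-1, 1)
```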

Neural Networks Training Algorithms Adjust neural network weights to map inputs to outputs. Use a set of sample patterns where the desired output (given the inputs presented) is known. The purpose is to learn to generalize –Recognize features which are common to good and bad exemplars

Neural Networks Back-Propagation A training procedure which allows multi-layer feedforward Neural Networks to be trained; Can theoretically perform “any” input-output mapping; Can learn to solve linearly inseparable problems.

Neural Networks Activation functions and training For feedforward networks: –A continuous function can be differentiated, allowing gradient-descent training. –Back-propagation is an example of a gradient-descent technique. –This is a key reason for the prevalence of sigmoid activation functions.
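As an illustrative sketch of why differentiability matters, here is gradient descent on a single sigmoid unit (the squared-error loss and all names below are assumptions, not from the slides):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(w, x, target, lr=1.0):
    """One gradient-descent step on the squared error E = 0.5*(out - target)^2."""
    out = sigmoid(w * x)
    # dE/dw uses the simple derivative sigmoid'(a) = out * (1 - out)
    grad = (out - target) * out * (1.0 - out) * x
    return w - lr * grad

w = 0.0
for _ in range(500):
    w = train_step(w, x=1.0, target=0.9)
print(sigmoid(w))  # the output climbs toward the target 0.9
```

A hard-limiting threshold function would give a zero gradient almost everywhere, so this kind of update would never move the weights; that is exactly the point made in the slide above.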

Neural Networks Training versus Analysis –Understanding how the network is doing what it does –Predicting behaviour under novel conditions