Neural Networks. Presented by M. Abbasi. Course lecturer: Dr. Tohidkhah.

Neural networks are adaptive statistical models based on an analogy with the structure of the brain.

Basically, neural networks are built from simple units, sometimes called neurons or cells by analogy with their biological counterparts. These units are linked by a set of weighted connections. Learning is usually accomplished by modifying the connection weights. Each unit codes for, or corresponds to, a feature or characteristic of a pattern that we want to analyze or use as a predictor.

Biological Analogy

Computational structure of a neuron: the inputs x1, x2, ..., xN are multiplied by the weights w1, w2, ..., wN, summed, and passed through a transfer function to produce the output y.
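
As a minimal sketch of this computation (Python with NumPy; the logistic transfer function and the input values here are illustrative choices, not from the slides):

```python
import numpy as np

def neuron(x, w, f):
    """Single neuron: weighted sum of the inputs passed through a transfer function f."""
    return f(np.dot(w, x))

# Example: three inputs x1..x3, weights w1..w3, logistic transfer function
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
y = neuron(x, w, lambda s: 1.0 / (1.0 + np.exp(-s)))
print(y)
```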

Multi-Layer Neural Network

The goal of the network is to learn or discover some association between input and output patterns, or to analyze or find the structure of the input patterns.

The learning process is achieved through the modification of the connection weights between units. In statistical terms, this is equivalent to interpreting the value of the connections between units as parameters (e.g., like the values of a and b in the regression equation y = a + bx) to be estimated.
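
To make the regression analogy concrete, a single linear unit with one input computes exactly the equation y = a + bx; a hypothetical sketch:

```python
# A single linear unit with one input x, bias a, and weight b computes
# exactly the regression equation y = a + b * x; "learning" means
# estimating a and b from data.
def linear_unit(x, a, b):
    return a + b * x

print(linear_unit(2.0, a=1.0, b=0.5))  # -> 2.0
```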

Any function whose domain is the real numbers can be used as a transfer function.

The most popular ones are:
· The linear function
· The step function (activation values below a given threshold are set to 0 or to -1, and the other values are set to +1)
· The logistic function f(x) = 1/(1 + exp{-x}), which maps the real numbers into the interval (0, 1) and whose derivative, needed for learning, is easily computed: f'(x) = f(x)[1 - f(x)]
· The normal or Gaussian function
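
A sketch of these transfer functions in Python (the threshold and the Gaussian parameters are illustrative defaults):

```python
import numpy as np

def linear(x):
    return x

def step(x, threshold=0.0):
    # Values below the threshold map to 0 (or -1), the rest to +1
    return np.where(x < threshold, 0.0, 1.0)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def logistic_deriv(x):
    fx = logistic(x)
    return fx * (1.0 - fx)  # f'(x) = f(x) * (1 - f(x)), needed for learning

def gaussian(x, mu=0.0, sigma=1.0):
    return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
```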

The architecture (i.e., the pattern of connectivity) of the network, together with the transfer functions used by the neurons and the synaptic weights, completely specifies the behavior of the network.

Neural networks are adaptive statistical devices: they can iteratively change the values of their parameters (i.e., the synaptic weights) as a function of their performance.

These changes are made according to learning rules which can be characterized as supervised (when a desired output is known and used to compute an error signal) or unsupervised (when no such error signal is used).

The Widrow-Hoff rule (also known as gradient descent or the delta rule) is the most widely known supervised learning rule. It uses the difference between the actual output of the cell and the desired output as an error signal for units in the output layer.
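
A minimal sketch of one Widrow-Hoff (delta rule) update for a single linear output unit, assuming NumPy and an illustrative learning rate:

```python
import numpy as np

def delta_rule_update(w, x, target, lr=0.1):
    """One Widrow-Hoff step: adjust the weights in proportion to the error."""
    y = np.dot(w, x)           # actual output of the unit
    error = target - y         # error signal: desired minus actual output
    return w + lr * error * x  # gradient-descent step on the squared error

w = np.zeros(3)
for _ in range(100):
    w = delta_rule_update(w, np.array([1.0, 2.0, -1.0]), target=0.5)
print(w)  # the output w @ x converges toward the target
```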

Units in the hidden layers cannot compute their error signal directly, but estimate it as a function (e.g., a weighted average) of the errors of the units in the following layer. This adaptation of the Widrow-Hoff learning rule is known as error backpropagation.

Error backpropagation:
· Minimizes the mean squared error using a gradient-descent method.
· The error is backpropagated into previous layers one layer at a time.
· Does not guarantee an optimal solution, as it might converge onto a local minimum.
· Takes a long time to train and requires a large amount of training data.
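
A compact sketch of backpropagation on a two-layer network (the XOR task, the layer sizes, the learning rate, and the iteration count are all illustrative assumptions, not from the slides):

```python
import numpy as np

# Inputs with a constant 1 appended, so one weight row acts as a bias
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))                     # input -> hidden weights
W2 = rng.normal(size=(4, 1))                     # hidden -> output weights
f = lambda s: 1.0 / (1.0 + np.exp(-s))           # logistic transfer function

for _ in range(10000):
    H = f(X @ W1)                                # forward pass
    Y = f(H @ W2)
    dY = (Y - T) * Y * (1 - Y)                   # output-layer error signal
    dH = (dY @ W2.T) * H * (1 - H)               # estimated hidden-layer error
    W2 -= 0.5 * H.T @ dY                         # gradient-descent updates,
    W1 -= 0.5 * X.T @ dH                         # one layer at a time

print(np.round(Y, 2))  # typically approaches [0, 1, 1, 0]
```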

Alternative activation functions: radial basis functions, such as the square, the triangle, and the Gaussian. For the Gaussian, (μ, σ) can be varied at each hidden node to guide training.
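
A sketch of these radial basis functions (the exact shapes below are common conventions; the parameter defaults are illustrative):

```python
import numpy as np

def square_rbf(x, mu=0.0, width=1.0):
    # 1 inside a window around mu, 0 outside
    return np.where(np.abs(x - mu) <= width, 1.0, 0.0)

def triangle_rbf(x, mu=0.0, width=1.0):
    # Peaks at mu and falls linearly to 0 at mu +/- width
    return np.maximum(0.0, 1.0 - np.abs(x - mu) / width)

def gaussian_rbf(x, mu=0.0, sigma=1.0):
    # mu and sigma can be varied at each hidden node during training
    return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
```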

Typical activation functions: F(x) = 1/(1 + exp{-k Σ wi xi}), typically plotted for k = 0.5, 1, and 10. Using a nonlinear function that approximates a linear threshold allows a network to approximate nonlinear functions.
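
A small sketch of how the steepness parameter k sharpens the logistic toward a threshold (the sample points are illustrative):

```python
import numpy as np

def F(net, k):
    """Logistic activation with steepness k applied to the net input sum(w_i * x_i)."""
    return 1.0 / (1.0 + np.exp(-k * net))

net = np.linspace(-4, 4, 9)
for k in (0.5, 1, 10):
    print(k, np.round(F(net, k), 2))  # larger k -> closer to a step function
```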

The Hebbian rule is the most widely known unsupervised learning rule. It is based on work by the Canadian neuropsychologist Donald Hebb, who theorized that neuronal learning (i.e., synaptic change) is a local phenomenon expressible in terms of the temporal correlation between the activation values of neurons.

Specifically, the rule states that the change in a synaptic weight depends on both presynaptic and postsynaptic activities: it is a function of the temporal correlation between them.

In the simplest case, the value of the synaptic weight between two neurons increases whenever they are in the same state and decreases when they are in different states.
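
A minimal sketch of a Hebbian update (the learning rate and the bipolar +1/-1 states are illustrative choices):

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Increase a weight when pre- and postsynaptic activities agree, decrease it otherwise."""
    return w + lr * pre * post

# With bipolar states, agreement gives pre*post = +1 (weight grows),
# disagreement gives pre*post = -1 (weight shrinks).
w = 0.0
for pre, post in [(1, 1), (1, -1), (-1, -1)]:
    w = hebbian_update(w, pre, post)
print(w)  # 0.1 - 0.1 + 0.1 = 0.1
```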

What can a neural net do?
· Compute a known function
· Approximate an unknown function
· Pattern recognition
· Signal processing
· Learn to do any of the above

The areas where neural nets may be useful:
· pattern association
· pattern classification
· regularity detection
· image processing
· speech analysis
· optimization problems
· robot steering
· processing of inaccurate or incomplete inputs
· quality assurance
· stock market forecasting
· simulation
· ...

One of the most popular architectures in neural networks is the multi-layer perceptron.

Hopfield Net structure

Recap: Neural Networks
Components (biological plausibility):
· Neurone / node
· Synapse / weight
Feed-forward networks:
· Unidirectional flow of information
· Good at extracting patterns, generalisation and prediction
· Distributed representation of data
· Parallel processing of data
· Training: backpropagation
· Not exact models, but good at demonstrating principles
Recurrent networks:
· Multidirectional flow of information
· Memory / sense of time
· Complex temporal dynamics (e.g. CPGs)
· Various training methods (Hebbian, evolution)
· Often better biological models than FFNs