Presentation on Neural Networks.

Basics Of Neural Networks A neural network is a connectionist model that simulates the biophysical information processing occurring in the nervous system. It can also be defined as an interconnected assembly of simple processing elements, units, or nodes whose functionality is loosely based on the animal neuron, or as a cognitive information processing structure based on models of brain function. In a more formal engineering context, it is a highly parallel dynamical system with the topology of a directed graph that can carry out information processing by means of its state response to initial or continuous input.

Facts
1. Knowledge is acquired by the network from its environment through a learning process.
2. Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

Three components of a neural network
A. A set of nodes connected together via links.
B. An activation rule that each node follows in updating its activation level.
C. An activation function for limiting the amplitude of the output of a neuron.

Three basic elements of a neuronal model
a. A set of synapses or connecting links, each of which is characterized by a weight.
b. An adder for summing the input signals.
c. An activation function for limiting the amplitude of the output of a neuron.
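
As a rough illustration of how these three elements combine, here is a minimal sketch in Python (not from the slides; the weights, bias, and tanh activation are illustrative choices):

import numpy as np

def neuron(x, w, b, phi=np.tanh):
    """Three elements of a neuronal model: synaptic weights w,
    an adder (weighted sum), and an activation function phi
    that limits the amplitude of the output."""
    v = np.dot(w, x) + b      # adder: weighted sum of the input signals
    return phi(v)             # activation function bounds the output

print(neuron(np.array([0.5, -1.0]), np.array([0.8, 0.2]), 0.1))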

Classification of neural networks Neural networks can be classified into those with binary-valued inputs and those with continuous-valued inputs, those trained with and without supervision, and those with and without adaptive training.

Supervised learning The adjustment of weights is done according to the desired or correct output that is available for each specific input pattern. Error correction is the most common form of supervised learning; the error is defined as the difference between the desired response and the actual response of the network.

Unsupervised learning In unsupervised learning, or self-organized learning, the network is not given any external indication as to what the correct responses should be; it simply learns from the environment. Unsupervised learning aims at finding a certain kind of regularity in the data represented by the exemplars. In unsupervised learning a correlation (Hebbian) rule may be applied to calculate the weight changes.
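
A minimal sketch of the two kinds of weight update (Python; the variable names and learning rate are illustrative, not part of the original slides):

import numpy as np

def error_correction_update(w, x, desired, eta=0.1):
    """Supervised (error-correction) learning: adjust weights in
    proportion to the error between desired and actual response."""
    actual = np.dot(w, x)            # actual response of the network
    error = desired - actual         # error signal
    return w + eta * error * x

def hebbian_update(w, x, y, eta=0.1):
    """Unsupervised (correlation/Hebbian) learning: strengthen the
    weights between units that are active together."""
    return w + eta * y * x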

Single layer perceptrons The single layer perceptron was among the first and simplest trainable learning machines.
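
For instance, a single threshold unit can be trained with the error-correction rule on a toy problem; this sketch and the OR data are illustrative, not from the slides:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # toy inputs (OR function)
t = np.array([0, 1, 1, 1])                       # target outputs

w, b, eta = np.zeros(2), 0.0, 0.5
for epoch in range(10):
    for x, target in zip(X, t):
        y = 1 if np.dot(w, x) + b > 0 else 0     # threshold unit
        w += eta * (target - y) * x              # error-correction update
        b += eta * (target - y)

print(w, b)   # weights and bias that separate the two classes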

Multilayer perceptrons
1. The model of each neuron in the network includes a nonlinear activation function.
2. The network contains one or more layers of hidden neurons that are not part of the input or output of the network.
3. The network exhibits a high degree of connectivity, determined by the synapses of the network.
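
A small forward-pass sketch showing a hidden layer with a nonlinear activation (Python; the layer sizes and random weights are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))    # nonlinear activation function

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a two-layer perceptron: input -> hidden -> output."""
    h = sigmoid(W1 @ x + b1)           # hidden layer, not visible at input or output
    y = sigmoid(W2 @ h + b2)           # output layer
    return y

rng = np.random.default_rng(0)         # 3 inputs, 4 hidden units, 1 output
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
print(mlp_forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2))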

Recurrent Networks Neural networks with one or more feedback loops are referred to as recurrent networks. There are two forms of feedback:
a. Local feedback at the level of a single neuron inside the network.
b. Global feedback encompassing the whole network.
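
A single step of a unit with global feedback might look like the following sketch (Python; the tanh nonlinearity and weight values are illustrative assumptions):

import numpy as np

def recurrent_step(x_t, y_prev, W_in, W_fb):
    """One update of a unit whose previous output is fed back
    and combined with the current input."""
    return np.tanh(W_in @ x_t + W_fb @ y_prev)

W_in, W_fb = np.array([[0.5, -0.3]]), np.array([[0.8]])
y = np.zeros(1)
for x_t in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    y = recurrent_step(x_t, y, W_in, W_fb)
    print(y)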

Another way of classifying neural networks
1. Multilayer feed-forward networks.
2. Kohonen self-organizing feature maps.
3. Hopfield networks.

Multilayer perceptron networks These are feed-forward nets with one or more layers of nodes between the input and output nodes.

Kohonen networks and learning vector quantization A simple Kohonen net architecture consists of two layers: an input layer and a Kohonen output layer.

A Kohonen network operates in two steps. First, it selects the unit whose connection weight vector is closest to the current input vector as the winning unit. Then, after a winning neighborhood is selected, the connection weight vectors of the units in that neighborhood are rotated toward the input vector.
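
These two steps can be sketched as follows (Python; the one-dimensional lattice, learning rate, and neighborhood radius are illustrative assumptions):

import numpy as np

def som_step(weights, x, eta=0.5, radius=1):
    """One Kohonen update on a 1-D output lattice.
    weights: (n_units, n_inputs) array of connection weight vectors."""
    # Step 1: select the winning unit (weight vector closest to the input).
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Step 2: rotate the weight vectors in the winning neighborhood toward the input.
    for j in range(len(weights)):
        if abs(j - winner) <= radius:
            weights[j] += eta * (x - weights[j])
    return weights

W = np.random.rand(5, 2)
W = som_step(W, np.array([0.2, 0.9]))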

SOFM and competitive learning The goal of the SOFM is to map an n-dimensional input space onto a one- or two-dimensional lattice, which comprises the output space, such that a meaningful topological ordering exists within the output space. The input layer is connected to the output layer through feed-forward connections.

Hopfield network It is a network in which every unit is connected to every other unit and the connections are symmetric. A Hopfield network runs through the following steps:
1. Assigning synaptic weights.
2. Initializing with the search items.
3. Activation weight computation and iteration.
4. Convergence.
A Hopfield network follows a gradient descent rule. Once it reaches a local minimum it is stuck there until some randomness is introduced to let it escape. Simulated annealing is a method that introduces such randomness to allow the system to jump out of local minima and move toward the global minimum.
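
A compact sketch of those steps (Python; the stored patterns and the noisy probe are made up for illustration):

import numpy as np

patterns = np.array([[1, -1, 1, -1],
                     [-1, 1, -1, 1]])

# 1. Assign synaptic weights (Hebbian outer-product rule, symmetric, zero diagonal).
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# 2. Initialize with a noisy probe pattern.
state = np.array([1, -1, -1, -1])

# 3. Iterate: asynchronous activation updates perform gradient descent
#    on the network energy until 4. convergence.
for _ in range(10):
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)   # settles to the stored pattern nearest the probe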

Semantic networks Semantic networks have nodes that represent concepts and connections that represent associations between them. There are inheritance links between objects, and these links are called "IS A" links.
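
A toy illustration of inheritance over "IS A" links (Python; the concepts and properties are invented for the example):

# Nodes are concepts; "IS A" links let a concept inherit properties
# from the more general concepts above it.
is_a = {"canary": "bird", "bird": "animal"}
properties = {"animal": {"breathes"}, "bird": {"flies"}, "canary": {"sings"}}

def inherited_properties(concept):
    """Collect properties by following IS A links up the hierarchy."""
    props = set()
    while concept is not None:
        props |= properties.get(concept, set())
        concept = is_a.get(concept)
    return props

print(inherited_properties("canary"))   # {'sings', 'flies', 'breathes'}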