Authored By :- Rachit Kr. Rastogi Computer Sc. & Engineering Deptt., College Of Technology, G.B.P.U.A.T. Pantnagar, India


Artificial Neural Network Simulation

Key Points:
 Computer expert systems aim to move from crisp, binary conventional control towards the fuzzier way in which humans think.
 An Artificial Neural Network is a system loosely modeled on the human brain.
 The first attempt to build an operational model of the neuron used a simple binary comparator (known as a binary decision neuron).
 One major disadvantage is that training is required, and the amount of training data can be large.
 Since our aim is to mimic the operation of the human brain to some extent, this implies building Artificial Intelligence.

 The field goes by many names, such as connectionism, parallel distributed processing, neuro-computing, natural intelligent systems, machine learning algorithms, and artificial neural networks.

Artificial Neural Networks:
 An artificial neural network (ANN) is an information-processing system that is based on generalizations of human cognition or neural biology.
 It consists of multiple layers of simple processing elements called neurons.
 Signals are passed between neurons over connection links.
 Each connection link has an associated weight, which, in a typical neural net, multiplies the signal transmitted.
 Learning is accomplished by adjusting the strengths of the connections between neighboring neurons so that the overall network outputs appropriate results.

A Neural Network (NN) is characterized by its particular:
 Architecture: its pattern of connections between the neurons.
 Learning algorithm: its method of determining the weights on the connections.
 Activation function: which determines its output.

 Each neuron has an internal state, called its activation or activity level, which is a function of the inputs it has received.
 A neuron sends its activation as a signal to several other neurons.
 A neuron can send only one signal at a time, although that signal may be broadcast to several other neurons.

Analogy to the Brain:
 Neural networks have a strong similarity to the biological brain, and therefore a great deal of the terminology is borrowed from neuroscience.

The Biological Neuron:
 The most basic element of the human brain is a specific type of cell that provides us with the abilities to remember, think, and apply previous experiences to our every action. These cells are known as neurons; each neuron can connect with a large number of other neurons.
 The power of the brain comes from the sheer number of these cells and the multiple connections between them.
 All natural neurons have four basic components: dendrites, soma, axon, and synapses.

A Biological Neuron

 An artificial neuron simulates the four basic functions of a natural neuron.
 Artificial neurons are much simpler than biological neurons.
 The various inputs to the network are represented by the mathematical symbol x(n).
 Each of these inputs is multiplied by a connection weight; these weights are represented by w(n).
 In the simplest case, these products are simply summed, fed through a transfer function to generate a result, and then output.
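The weighted-sum-and-transfer operation described above can be sketched in a few lines of Python. The weights, threshold value, and function names here are illustrative assumptions, not from the slides:

```python
def step(v, threshold=1.0):
    """Binary-comparator transfer function (a 'binary decision neuron')."""
    return 1 if v >= threshold else 0

def neuron_output(x, w, transfer=step):
    """Multiply each input x(n) by its weight w(n), sum, apply the transfer function."""
    total = sum(xi * wi for xi, wi in zip(x, w))
    return transfer(total)

# With these illustrative weights the neuron behaves like a logical AND gate.
print(neuron_output([1, 1], [0.6, 0.6]))  # 1.2 >= 1.0, so the neuron fires: 1
print(neuron_output([1, 0], [0.6, 0.6]))  # 0.6 < 1.0, so it stays quiet: 0
```

Swapping in a different transfer function (e.g. a sigmoid) changes the neuron from a binary comparator to a graded-output unit without touching the summing logic.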

Basic Block of an Artificial Neuron.

The complex design issues consist of:

Layers:
 Arranging neurons in various layers.
 Deciding the type of connections among neurons in different layers, as well as among the neurons within a layer.
 Deciding the way a neuron receives input and produces output.
 Determining the strength of connections within the network by allowing the network to learn the appropriate values of the connection weights from a training data set.
 A layer of “input” units is connected to a layer of “hidden” units, which is connected to a layer of “output” units.
 The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and hidden units.
 The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
 Biologically, neural networks are constructed in a 3D way from microscopic components.

 The input layer consists of neurons that receive input from the external environment.
 The output layer consists of neurons that communicate the output of the system to the user or external environment.
 There are usually a number of hidden layers between these two layers. The input layer receives the input; its neurons produce output, which becomes input to the other layers of the system. The process continues until a certain condition is satisfied. For determining the number of hidden neurons, one is often left to trial and error.

Neural Network Architecture:

Feed-forward networks: Feed-forward ANNs allow signals to travel one way only, from input to output.

Feedback networks: Feedback networks can have signals traveling in both directions by introducing loops into the network.
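A feed-forward pass through the input-hidden-output layering described above can be sketched as follows. The layer sizes, sigmoid activation, and weight values are illustrative assumptions:

```python
import math

def sigmoid(v):
    """Smooth activation function mapping any sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def layer_forward(inputs, weights):
    """Each row of `weights` holds one neuron's connection weights."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row))) for row in weights]

def feed_forward(inputs, layers):
    """Propagate signals one way only, through successive layers."""
    signal = inputs
    for weights in layers:
        signal = layer_forward(signal, weights)
    return signal

# 2 inputs -> 2 hidden neurons -> 1 output neuron.
hidden_w = [[0.5, -0.4], [0.3, 0.8]]
output_w = [[1.0, -1.0]]
print(feed_forward([1.0, 0.0], [hidden_w, output_w]))
```

A feedback network would differ only in that some outputs are routed back as inputs on the next step, creating the loops mentioned above.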

Communication and types of connections:
 Neurons are connected via a network of paths carrying the output of one neuron as input to another neuron.
 Paths are unidirectional.
 A neuron receives input from many neurons but produces a single output, which is communicated to other neurons.
 Neurons in a layer may communicate with each other, or they may not have any connections.
 The neurons of one layer are always connected to the neurons of at least one other layer.

Inter-layer connections: There are different types of connections used between layers; these connections between layers are called inter-layer connections. They are of the following types:
 Fully connected
 Partially connected
 Bi-directional
 Resonance
 Feed forward
 Hierarchical

Intra-layer connections:
 In more complex structures the neurons communicate among themselves within a layer; these are known as intra-layer connections. There are two types:
 Recurrent: The neurons within a layer are fully or partially connected to one another. They communicate their outputs with one another a number of times before they are allowed to send their outputs to another layer.
 On-center/off-surround: A neuron within a layer has excitatory connections to itself and its immediate neighbors, and inhibitory connections to the other neurons. Each gang excites itself and its gang members and inhibits all members of other gangs. After a few rounds of signal interchange, the neuron with an active output value wins and is allowed to update its weights and those of its gang members.

There are two types of connections between two neurons: excitatory and inhibitory. In an excitatory connection, the output of one neuron increases the action potential of the neuron to which it is connected. In an inhibitory connection, the output of the sending neuron reduces the activity or action potential of the receiving neuron. Excitatory connections cause the summing mechanism of the next neuron to add, while inhibitory connections cause it to subtract.
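The on-center/off-surround competition above can be sketched as a small winner-take-all loop: each unit excites itself and inhibits the others over a few rounds of signal interchange. The coefficients and round count are illustrative assumptions:

```python
def winner_take_all(activities, self_excite=1.2, inhibit=0.2, rounds=5):
    """Return the index of the neuron left most active after competition."""
    a = list(activities)
    for _ in range(rounds):
        total = sum(a)
        # Excitatory self-connection adds; inhibitory connections from
        # all other neurons subtract; activity cannot go below zero.
        a = [max(0.0, self_excite * ai - inhibit * (total - ai)) for ai in a]
    return a.index(max(a))

print(winner_take_all([0.3, 0.9, 0.5]))  # neuron 1 starts strongest and wins
```

After the loop, only the winner would be allowed to update its weights (and, in the gang formulation above, those of its neighbors).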

Learning:
 Neural networks are sometimes called machine-learning algorithms.
 The strength of the connection between two neurons is stored as a weight value for that specific connection.
 The system learns new knowledge by adjusting these connection weights.

Supervised learning: Incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. During the learning process, global information may be required.

Unsupervised learning: Uses no external teacher and is based only upon local information. It is also referred to as self-organization, in the sense that the network self-organizes the data presented to it and detects their emergent collective properties.

Back-propagation: A form of supervised learning, this method has proven highly successful in training multilayered neural nets. The network is not just given reinforcement for how it is doing on a task; information about errors is also filtered back through the system and used to adjust the connections between the layers, thus improving performance.

Reinforcement learning: This method works on reinforcement from the outside. The connections among the neurons in the hidden layer are randomly arranged, then reshuffled as the network is told how close it is to solving the problem.

Learning Methods:
 Off-line: In off-line learning methods, once the system enters its operation mode, its weights are fixed and do not change any more. Most networks are of the off-line learning type.
 On-line: In on-line or real-time learning, when the system is in operating mode (recall), it continues to learn while being used as a decision tool. This type of learning has a more complex design structure.

Learning Laws: These laws are mathematical algorithms used to update the connection weights.
 Hebb’s Rule: If a neuron receives an input from another neuron, and if both are highly active (mathematically, have the same sign), the weight between the neurons should be strengthened.
 Hopfield Law: It specifies the magnitude of the strengthening or weakening. It states, "if the desired output and the input are both active or both inactive, increment the connection weight by the learning rate, otherwise decrement the weight by the learning rate."

 The Delta Rule: The Delta Rule is a further variation of Hebb’s Rule, and it is one of the most commonly used. This rule is based on the idea of continuously modifying the strengths of the input connections to reduce the difference (the delta) between the desired output value and the actual output of a neuron.
 Activation functions: Various algorithms can be tried with several choices of activation function. Some examples of activation functions are: sine, cosine, linear, and hyperbolic tangent.
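A single Delta-Rule step can be sketched as below: each weight moves in proportion to the delta between desired and actual output, scaled by its input. The learning rate and numbers are illustrative assumptions:

```python
def delta_rule_step(weights, inputs, desired, actual, learning_rate=0.1):
    """Adjust each weight toward reducing the output error (the delta)."""
    delta = desired - actual
    return [w + learning_rate * delta * x for w, x in zip(weights, inputs)]

weights = [0.2, -0.1]
# Actual output was 0.4 but 1.0 was desired, so weights on active inputs grow;
# the weight on the inactive (zero) input is left unchanged.
weights = delta_rule_step(weights, inputs=[1.0, 0.0], desired=1.0, actual=0.4)
print(weights)  # [0.26, -0.1]
```

Repeating this step over many examples drives the neuron's output toward the desired values, which is exactly the continuous error-reduction idea stated above.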

Problem analysis in Neural Networks: XOR/Parity bit problem: This is a standard problem. The network used had a architecture.

Architecture for the XOR problem
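The slides do not state the layer sizes used; a common choice for XOR is a 2-2-1 layout (2 inputs, 2 hidden neurons, 1 output). The sketch below trains such a net with plain back-propagation; the class name, learning rate, and epoch count are assumptions:

```python
import math
import random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

class TinyXorNet:
    """Assumed 2-2-1 multilayer net trained with back-propagation."""
    def __init__(self):
        r = lambda: random.uniform(-1.0, 1.0)
        self.hidden = [[r(), r(), r()] for _ in range(2)]  # [w1, w2, bias]
        self.output = [r(), r(), r()]                      # [w1, w2, bias]

    def forward(self, x):
        h = [sigmoid(w[0]*x[0] + w[1]*x[1] + w[2]) for w in self.hidden]
        o = sigmoid(self.output[0]*h[0] + self.output[1]*h[1] + self.output[2])
        return h, o

    def train(self, data, epochs=20000, lr=0.5):
        for _ in range(epochs):
            for x, t in data:
                h, o = self.forward(x)
                # Error terms are filtered back from output to hidden layer.
                d_o = (t - o) * o * (1 - o)
                d_h = [d_o * self.output[i] * h[i] * (1 - h[i]) for i in range(2)]
                for i in range(2):
                    self.output[i] += lr * d_o * h[i]
                self.output[2] += lr * d_o
                for i in range(2):
                    for j in range(2):
                        self.hidden[i][j] += lr * d_h[i] * x[j]
                    self.hidden[i][2] += lr * d_h[i]

xor_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
net = TinyXorNet()
net.train(xor_data)
for x, t in xor_data:
    print(x, round(net.forward(x)[1]))  # typically converges to the XOR table
```

Because XOR is not linearly separable, the hidden layer is essential here: a single-layer perceptron cannot solve it.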

The encoder-decoder problem: This architecture contains 10 inputs, 5 neurons in the hidden layer, and 10 outputs.

Architecture for the encoder-decoder problem

 8-input, 3-output classification problem: The 8 inputs are 8 different symptoms of diseases. The network was trained to diagnose the disease, which was coded as one of the 8 possible binary combinations. The network used had an architecture.

Architecture for the 8-input, 3-output problem

Where are Neural Networks being used?
 Pattern recognition: Automated recognition of handwritten text, spoken words, facial/fingerprint identification, and moving targets on a static background have all been successfully implemented.
 Speech production: This involves a neural network connected to a speech synthesizer. ANN-based algorithms are used to discover rules for themselves. A most remarkable example of this is the program NETtalk.
 Image processing and pattern recognition form an important area of neural networks.
 Character recognition and handwriting recognition.
 AI expert systems are today used in applications where the underlying knowledge base does not significantly change with time (e.g. medical diagnostic systems).
 ANNs are more suitable when the input dataset can evolve with time (e.g. real-time control systems).

Conclusion:
 Artificial neural networks offer an ability to perform tasks outside the scope of traditional processors.
 Neural networks learn; they are not programmed. It is for that reason that neural networks are finding themselves in applications where humans are also unable to always be right.
 Neural networks need faster hardware. It is then that these systems will be able to hear speech, read handwriting, and formulate actions. They will be able to become the intelligence behind robots that never tire nor become distracted. It is then that they will become the leading edge in an age of "intelligent machines."

Thank You.