
Artificial Neural Networks Group Members: Aqsa Ijaz, Sehrish Iqbal, Zunaira Munir

What is an ANN? Dr. Robert Hecht-Nielsen, the inventor of the first neurocomputer, defines a neural network as a human-brain-like system consisting of a large number of interconnected processing units.

About Human Brain The human brain is composed of 100 billion nerve cells called neurons. Each neuron is connected to thousands of other cells by axons. Inputs from the sensory organs are accepted by dendrites. These inputs create electric impulses, which quickly travel through the neural network. A neuron can then either send the message on to other neurons to handle the issue or not forward it.

About Human Brain… Each neuron can connect with up to 200,000 other neurons. Neurons enable us to remember, recall, think, and apply previous experience to our every action. The power (outcome) of the human mind comes from these networks of neurons and from learning.

Artificial Neural Networks There are two artificial neural network topologies: FeedForward and FeedBack.

FeedForward ANN The information flow is unidirectional. A unit sends information to other units from which it receives no information; there are no feedback loops. They are used in pattern generation, recognition, and classification. They have fixed inputs and outputs.

FeedBack ANN Here, feedback loops are allowed. They are used in content-addressable memories.

Working of ANNs

Applications of Neural Networks They can perform tasks that are easy for a human but difficult for a machine.
Speech: speech recognition, speech classification, text-to-speech conversion.
Telecommunications: image and data compression, automated information services, real-time spoken-language translation.
Software: pattern recognition in facial recognition, optical character recognition, etc.
Industrial: manufacturing process control, product design and analysis.

Main Properties of an ANN
Parallelism
Learning
Storing
Recalling
Decision making

Main Properties of ANNs
Parallelism: the capability of processing information across many simple units at the same time.
Learning: the capability of acquiring knowledge from the environment, i.e. learning from examples and experience, with or without a teacher.
Storing: the capability of storing the learnt knowledge.

Main Properties of ANNs…
Recalling: the capability of recalling the learnt knowledge.
Decision making: the capability of making particular decisions based upon the acquired knowledge.

Learning paradigms There are three major learning paradigms, each corresponding to a particular abstract learning task. These are: Supervised Learning Unsupervised Learning Reinforcement Learning

Supervised Learning: Learn by example what a face is in terms of structure, color, etc., so that after several iterations the network learns to define a face.

It involves a teacher that is more knowledgeable than the ANN itself. For example, the teacher feeds in some example data for which the teacher already knows the answers. The ANN makes guesses while recognizing; then the teacher provides the ANN with the answers. The network compares its guesses with the teacher's correct answers and makes adjustments according to the errors.

SUPERVISED LEARNING [Diagram: input → Supervised Learning System → output; training info = desired (target) outputs]

Unsupervised Learning It is required when there is no example data set with known answers.

Unsupervised Learning Application


Reinforcement Learning This strategy is built on observation. The ANN makes decisions by observing its environment. Reinforcement learning allows the machine or software agent to learn its behaviour based on feedback from the environment.

Reinforcement Learning [Diagram: input → RL System → output; training info = evaluations ("rewards / penalties")]

Supervised Learning vs. Reinforcement Learning

1.3 Basics of a Neuron Topology of a Neuron

Topology of a Neuron Neuron: A neuron (a perceptron) is a basic processing unit that performs a small part of the overall computational problem of a neural network.

ANN ANNs are composed of multiple nodes, which imitate the biological neurons of the human brain. The neurons are connected by links and interact with each other. The nodes can take input data and perform simple operations on it; the result of these operations is passed to other neurons. The output at each node is called its activation or node value. Each link is associated with a weight. ANNs are capable of learning, which takes place by altering the weight values. The following illustration shows a simple ANN.
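The node behaviour just described can be sketched in a few lines: inputs arrive over weighted links and the node's activation (node value) is their weighted sum. The particular inputs and weights below are illustrative assumptions, not values from the slides.

```python
# A single ANN node: each input is scaled by the weight of the link
# it arrives on, and the node value is the sum of the scaled inputs.

def node_value(inputs, weights):
    """Scale each input by its link weight and sum the results."""
    return sum(x * w for x, w in zip(inputs, weights))

# Three inputs arriving over links with weights 0.2, 0.8 and -0.5:
v = node_value([1, 0, 1], [0.2, 0.8, -0.5])   # 0.2 + 0 - 0.5 = -0.3
```

Learning then consists of nothing more than changing the numbers in the `weights` list.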

Modeling Artificial Neurons

Example

Topology of a Neuron… Basic model of a neuron (Figure 1.8): the inputs x0, x1, …, xn form the input layer; each input xi is multiplied by its weight wi, and the output neuron computes

v = φ( w0·x0 + w1·x1 + … + wn·xn ) = φ( Σ wi·xi )

Topology of a Neuron… There are four components of a neuron:
Connections
Memory buffers (registers)
An adder (a computing unit)
An activation function

Components of a neuron… Connections are directed links (shown by arrows) through which the neuron receives inputs from other neurons. Each input is scaled (up or down) by multiplying it by a number called the weight (the connection weight). The value of a weight indicates the strength or degree of influence of that input on the neuron. Weights are computed through a process called training.

Components of a neuron… An adder: computes the weighted sum of the inputs (also known as the net input of the activation function). An activation function: transforms the output of the adder; the resulting value is the output of the neuron.
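The two computing components above can be sketched as a pair of functions: the adder forms the net input, and an activation function transforms it into the neuron's output. The sigmoid used here is one common choice of activation; the slides do not fix a particular one, and the example inputs and weights are assumptions.

```python
import math

def adder(inputs, weights):
    """The adder: computes the net input (weighted sum of inputs)."""
    return sum(x * w for x, w in zip(inputs, weights))

def sigmoid(v):
    """A common activation function: squashes v into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

net = adder([1.0, 0.5], [0.8, -0.4])   # net input = 0.8 - 0.2 = 0.6
out = sigmoid(net)                      # the neuron's output
```

Swapping `sigmoid` for another function (a step, a sign, a tanh) changes the neuron's behaviour without touching the adder.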

The Perceptron Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory, the perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output.

Continued A perceptron follows the “feed-forward” model, meaning inputs are sent into the neuron, are processed, and result in an output. In the diagram above, this means the network (one neuron) reads from left to right: inputs come in, output goes out.

Continued Step 1: Receive inputs. Say we have a perceptron with two inputs, x1 and x2.
Input 0: x1 = 12
Input 1: x2 = 4

Continued Step 2: Weight inputs. Each input that is sent into the neuron must first be weighted, i.e. multiplied by some value (often a number between -1 and 1). When creating a perceptron, we’ll typically begin by assigning random weights. Here, let’s give the inputs the following weights: Weight 0: 0.5 Weight 1: -1

Continued We take each input and multiply it by its weight. Input 0 * Weight 0 ⇒ 12 * 0.5 = 6 Input 1 * Weight 1 ⇒ 4 * -1 = -4

Continued Step 3: Sum inputs. The weighted inputs are then summed: 6 + (-4) = 2.

Continued Output = sign(sum) ⇒ sign(2) ⇒ +1
The Perceptron Algorithm:
For every input, multiply that input by its weight.
Sum all of the weighted inputs.
Compute the output of the perceptron by passing that sum through an activation function (here, the sign of the sum).
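The three steps of the algorithm can be sketched directly, using the slide's own numbers (inputs 12 and 4, weights 0.5 and -1):

```python
def sign(v):
    """Activation function: +1 for a non-negative sum, -1 otherwise."""
    return 1 if v >= 0 else -1

def perceptron_output(inputs, weights):
    # 1-2. For every input, multiply it by its weight, then sum.
    total = sum(x * w for x, w in zip(inputs, weights))
    # 3. Pass the sum through the activation function.
    return sign(total)

result = perceptron_output([12, 4], [0.5, -1])   # sign(2) => +1
```

Because the weights here were picked at random, the +1 answer is just a guess; training (below) is what makes such guesses meaningful.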

Training phase To train a neural network to answer correctly, we’re going to employ the method of supervised learning. With this method, the network is provided with inputs for which there is a known answer. This way the network can find out if it has made a correct guess. If it’s incorrect, the network can learn from its mistake and adjust its weights. The process is as follows:

Steps
1. Provide the perceptron with inputs for which there is a known answer.
2. Ask the perceptron to guess an answer.
3. Compute the error. (Did it get the answer right or wrong?)
4. Adjust all the weights according to the error.
5. Return to Step 1 and repeat!

Continued The perceptron’s error can be defined as the difference between the desired answer and its guess. ERROR = DESIRED OUTPUT - GUESS OUTPUT


Continued The error is the determining factor in how the perceptron's weights should be adjusted. For any given weight, what we want to calculate is the change in weight, often called Δweight ("delta" weight, delta being the Greek letter Δ).
NEW WEIGHT = WEIGHT + ΔWEIGHT
Δweight is calculated as the error multiplied by the input:
ΔWEIGHT = ERROR * INPUT
Therefore: NEW WEIGHT = WEIGHT + ERROR * INPUT
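The update rule and the five training steps fit together into a small training loop. The sketch below uses the slide's exact rule (NEW WEIGHT = WEIGHT + ERROR * INPUT, with no learning rate); the data set, which asks the perceptron to learn whether x1 > x2, and the epoch count are illustrative assumptions.

```python
import random

def sign(v):
    return 1 if v >= 0 else -1

def guess(inputs, weights):
    """The perceptron's guess: sign of the weighted sum."""
    return sign(sum(x * w for x, w in zip(inputs, weights)))

def train(data, epochs=100):
    # Start from random weights, as the slides suggest.
    weights = [random.uniform(-1, 1) for _ in range(len(data[0][0]))]
    for _ in range(epochs):
        for inputs, desired in data:              # 1. known inputs/answers
            error = desired - guess(inputs, weights)  # 2-3. guess and error
            for i, x in enumerate(inputs):
                weights[i] += error * x           # 4. NEW = OLD + ERROR * INPUT
    return weights                                # 5. repeat handled by the loops

random.seed(0)  # fixed seed so the run is reproducible
data = [([3, 1], 1), ([1, 3], -1), ([2, 0], 1),
        ([0, 2], -1), ([5, 4], 1), ([4, 5], -1)]
trained = train(data)
```

After training, `guess` answers every example correctly: whenever a guess is wrong, the error (±2) nudges each weight toward the desired answer, and once every guess is right the error is zero and the weights stop changing.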