Ahmad Aljebaly Artificial Neural Networks

Agenda
History of Artificial Neural Networks
What is an Artificial Neural Network?
How does it work?
Learning
Learning paradigms: supervised learning, unsupervised learning, reinforcement learning
Application areas
Advantages and disadvantages

History of the Artificial Neural Networks
The history of ANNs stems from the 1940s, the decade of the first electronic computers. However, the first important step took place in 1957, when Rosenblatt introduced the first concrete neural model, the perceptron. Rosenblatt also took part in constructing the first successful neurocomputer, the Mark I Perceptron. After this, the development of ANNs proceeded in several stages, described below.

History of the Artificial Neural Networks
Rosenblatt's original perceptron model contained only one layer. From this, a multi-layered model was later derived. At first, the use of the multi-layer perceptron (MLP) was complicated by the lack of an appropriate learning algorithm. In 1974, Werbos introduced a so-called backpropagation algorithm for the three-layered perceptron network.

History of the Artificial Neural Networks
The application area of MLP networks remained rather limited until the breakthrough in 1986, when a general backpropagation algorithm for a multi-layered perceptron was introduced by Rumelhart and McClelland. In 1982, Hopfield brought out his idea of a neural network. Unlike the neurons in an MLP, the Hopfield network consists of only one layer, whose neurons are fully connected with each other.

History of the Artificial Neural Networks Since then, new versions of the Hopfield network have been developed. The Boltzmann machine has been influenced by both the Hopfield network and the MLP.

History of the Artificial Neural Networks
In 1988, Radial Basis Function (RBF) networks were first introduced by Broomhead and Lowe. Although the basic idea of RBF had been developed 30 years earlier under the name "method of potential functions", the work by Broomhead and Lowe opened a new frontier in the neural network community.

History of the Artificial Neural Networks
In 1982, a quite different kind of network model, the Self-Organizing Map (SOM), was introduced by Kohonen. The SOM is a kind of topological map which organizes itself based on the input patterns it is trained with. The SOM originated from the Learning Vector Quantization (LVQ) network, the underlying idea of which was also Kohonen's, in 1972.

History of Artificial Neural Networks Since then, research on artificial neural networks has remained active, leading to many new network types, as well as hybrid algorithms and hardware for neural information processing.

Artificial Neural Network An artificial neural network consists of a pool of simple processing units which communicate by sending signals to each other over a large number of weighted connections.

Artificial Neural Network
The major aspects of a parallel distributed model include:
 a set of processing units (cells);
 a state of activation for every unit, which is equivalent to the output of the unit;
 connections between the units (generally, each connection is defined by a weight);
 a propagation rule, which determines the effective input of a unit from its external inputs;
 an activation function, which determines the new level of activation based on the effective input and the current activation;
 an external input for each unit;
 a method for information gathering (the learning rule);
 an environment within which the system must operate, providing input signals and, if necessary, error signals.

Computers vs. Neural Networks
"Standard" computers          Neural networks
one CPU                       highly parallel processing
fast processing units         slow processing units
reliable units                unreliable units
static infrastructure         dynamic infrastructure

Why Artificial Neural Networks? There are two basic reasons why we are interested in building artificial neural networks (ANNs): Technical viewpoint: Some problems such as character recognition or the prediction of future states of a system require massively parallel and adaptive processing. Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.

Artificial Neural Networks
The "building blocks" of neural networks are the neurons; in technical systems, we also refer to them as units or nodes. Basically, each neuron:
 receives input from many other neurons;
 changes its internal state (activation) based on the current input;
 sends one output signal to many other neurons, possibly including its input neurons (recurrent network).

Artificial Neural Networks Information is transmitted as a series of electric impulses, so-called spikes. The frequency and phase of these spikes encodes the information. In biological systems, one neuron can be connected to as many as 10,000 other neurons. Usually, a neuron receives its information from other neurons in a confined area, its so-called receptive field.

How do ANNs work? An artificial neural network (ANN) is either a hardware implementation or a computer program which strives to simulate the information processing capabilities of its biological exemplar. ANNs are typically composed of a great number of interconnected artificial neurons. The artificial neurons are simplified models of their biological counterparts. ANN is a technique for solving problems by constructing software that works like our brains.

How do our brains work?  The Brain is A massively parallel information processing system.  Our brains are a huge network of processing elements. A typical brain contains a network of 10 billion neurons.

How do our brains work?  A processing element Dendrites: Input Cell body: Processor Synaptic: Link Axon: Output

How do our brains work?  A processing element A neuron is connected to other neurons through about 10,000 synapses

How do our brains work?  A processing element A neuron receives input from other neurons. Inputs are combined.

How do our brains work?  A processing element Once input exceeds a critical level, the neuron discharges a spike ‐ an electrical pulse that travels from the body, down the axon, to the next neuron(s)

How do our brains work?  A processing element The axon endings almost touch the dendrites or cell body of the next neuron.

How do our brains work?  A processing element Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters.

How do our brains work?  A processing element Neurotransmitters are chemicals which are released from the first neuron and which bind to the Second.

How do our brains work?  A processing element This link is called a synapse. The strength of the signal that reaches the next neuron depends on factors such as the amount of neurotransmitter available.

How do ANNs work? An artificial neuron is an imitation of a human neuron

How do ANNs work? Now, let us have a look at the model of an artificial neuron.

How do ANNs work?
In the simplest model, the neuron sums its inputs to produce the output:
y = x1 + x2 + … + xm

How do ANNs work?
Not all inputs are equal: each input is scaled by a weight before summation:
y = x1*w1 + x2*w2 + … + xm*wm

How do ANNs work?
The signal is not passed down to the next neuron verbatim; it first goes through a transfer function (activation function):
y = f(vk), where vk = x1*w1 + x2*w2 + … + xm*wm

The output is a function of the input, that is affected by the weights, and the transfer functions
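The neuron model above can be sketched in a few lines of Python (the function and variable names here are illustrative, not from the slides):

```python
# Minimal sketch of an artificial neuron: the output is the transfer
# function applied to the weighted sum of the inputs.

def neuron_output(inputs, weights, transfer):
    # v = x1*w1 + x2*w2 + ... + xm*wm
    v = sum(x * w for x, w in zip(inputs, weights))
    return transfer(v)

# A simple threshold transfer function:
def step(v):
    return 1 if v > 0 else 0

# Example: weighted sum is 1*0.5 + 0*(-0.4) + 1*0.3 = 0.8, so the
# threshold unit fires.
y = neuron_output([1, 0, 1], [0.5, -0.4, 0.3], step)
```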

Three types of layers: Input, Hidden, and Output

Artificial Neural Networks
An ANN can:
1. compute any computable function, by the appropriate selection of the network topology and weight values;
2. learn from experience, specifically by trial-and-error.

Learning by trial-and-error
A continuous process of:
 Trial: processing an input to produce an output (in ANN terms: compute the output for a given input).
 Evaluate: evaluating this output by comparing the actual output with the expected output.
 Adjust: adjusting the weights.

Example: XOR. The network has an input layer with two neurons, a hidden layer with three neurons, and an output layer with one neuron.

How does it work? (The same XOR network: input layer with two neurons, hidden layer with three neurons, output layer with one neuron.)

How does it work?
Set initial values of the weights randomly.
Input: the truth table of XOR.
Do:
 read an input (e.g. 0 and 0);
 compute an output;
 compare it to the expected output and compute the difference;
 modify the weights accordingly.
Loop until a condition is met:
 condition: a certain number of iterations, or
 condition: the error falls below a threshold.
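The trial/evaluate/adjust loop above can be sketched for the 2-3-1 XOR network; the layer sizes match the slides, while the learning rate, random seed and stopping condition are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # expected XOR outputs

# Set initial values of the weights randomly.
W1 = rng.uniform(-1, 1, (2, 3))   # input -> hidden weights
W2 = rng.uniform(-1, 1, (3, 1))   # hidden -> output weights

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def forward(X):
    H = sigmoid(X @ W1)           # hidden activations
    Y = sigmoid(H @ W2)           # network output
    return H, Y

lr = 0.5
_, Y = forward(X)
initial_error = np.mean((T - Y) ** 2)

for _ in range(20000):            # loop until a condition is met
    H, Y = forward(X)             # trial: compute outputs
    err = T - Y                   # evaluate: compare with expected
    # adjust: backpropagation-style weight updates
    d_out = err * Y * (1 - Y)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 += lr * H.T @ d_out
    W1 += lr * X.T @ d_hid

_, Y = forward(X)
final_error = np.mean((T - Y) ** 2)
```

After training, the mean squared error over the four XOR patterns should be lower than it was at initialization.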

Design Issues
 Initial weights (small random values in [-1, 1])
 Transfer function (how the inputs and the weights are combined to produce the output)
 Error estimation
 Weight adjusting
 Number of neurons
 Data representation
 Size of the training set

Transfer Functions
 Linear: the output is proportional to the total weighted input.
 Threshold: the output is set to one of two values, depending on whether the total weighted input is greater or less than some threshold value.
 Non-linear: the output varies continuously but not linearly as the input changes.
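The three transfer-function families above can be sketched as follows (the function names and parameters are illustrative):

```python
import math

def linear(v, a=1.0):
    # Linear: output proportional to the total weighted input.
    return a * v

def threshold(v, theta=0.0):
    # Threshold: one of two values, depending on whether the weighted
    # input reaches the threshold theta.
    return 1 if v >= theta else 0

def sigmoid(v):
    # Non-linear: varies continuously but not linearly with the input.
    return 1.0 / (1.0 + math.exp(-v))
```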

Error Estimation
The root mean square error (RMSE) is a frequently used measure of the differences between the values predicted by a model or an estimator and the values actually observed from the thing being modeled or estimated.
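RMSE as described above is the square root of the mean squared difference between predicted and observed values; a minimal sketch (the function name is illustrative):

```python
import math

def rmse(predicted, observed):
    # Mean of squared differences, then square root.
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

# Perfect predictions give an RMSE of zero; any deviation increases it.
rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
```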

Weights Adjusting
After each iteration, the weights should be adjusted to minimize the error. Approaches:
 trying all possible weights
 backpropagation

Back Propagation
Backpropagation is an example of supervised learning: it is used at each layer to minimize the error between the layer's response and the actual data. The error at each hidden layer is an average of the evaluated error. Hidden-layer networks are trained this way.

Back Propagation
N is a neuron; N_w is one of N's input weights; N_out is N's output.
N_w = N_w + ΔN_w
ΔN_w = N_out * (1 - N_out) * N_ErrorFactor
N_ErrorFactor = N_ExpectedOutput - N_ActualOutput
This works only for the last layer, as only there can we know both the actual output and the expected output.
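The slide's update rule for the last layer can be written directly in Python (variable names mirror the slide's notation; the concrete numbers are illustrative):

```python
# Output-layer weight update from the slide:
#   N_w          = N_w + delta_N_w
#   delta_N_w    = N_out * (1 - N_out) * error_factor
#   error_factor = expected_output - actual_output

def weight_delta(n_out, expected_output):
    error_factor = expected_output - n_out   # the actual output is n_out
    return n_out * (1 - n_out) * error_factor

n_w = 0.4          # one of the neuron's input weights
n_out = 0.8        # the neuron's output for some input
n_w = n_w + weight_delta(n_out, expected_output=1.0)
# delta = 0.8 * 0.2 * 0.2 = 0.032, so n_w moves from 0.4 to 0.432
```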

Number of neurons
Too many neurons:
 higher accuracy
 slower training
 risk of over-fitting: memorizing rather than understanding, so the network will be useless on new problems
Too few neurons:
 lower accuracy
 inability to learn at all
Aim for an optimal number.

Data representation
Usually input/output data needs pre-processing:
 pictures: pixel intensity
 text: a pattern

Size of training set
There is no one-fits-all formula. Over-fitting can occur if a "good" training set is not chosen. What constitutes a "good" training set?
 Samples must represent the general population.
 Samples must contain members of each class.
 Samples in each class must contain a wide range of variation or noise effects.
The size of the training set is related to the number of hidden neurons.

Learning Paradigms Supervised learning Unsupervised learning Reinforcement learning

Supervised learning This is what we have seen so far! A network is fed with a set of training samples (inputs and corresponding output), and it uses these samples to learn the general relationship between the inputs and the outputs. This relationship is represented by the values of the weights of the trained network.

Unsupervised learning
No desired output is associated with the training data! Faster than supervised learning. Used to find structure within data:
 clustering
 compression

Reinforcement learning
Like supervised learning, but weight adjustment is not directly related to the error value; the error value is used to randomly shuffle the weights. Learning is relatively slow due to this randomness.

Application Areas
 Function approximation, including time-series prediction and modeling.
 Classification, including pattern and sequence recognition, novelty detection and sequential decision making (radar systems, face identification, handwritten-text recognition).
 Data processing, including filtering, clustering, blind source separation and compression (data mining, spam filtering).

Advantages / Disadvantages
Advantages:
 adapt to unknown situations
 powerful: can model complex functions
 ease of use: learns by example, and very little user domain-specific expertise is needed
Disadvantages:
 forgets
 not exact
 large complexity of the network structure

Conclusion
Artificial neural networks are an imitation of biological neural networks, but much simpler ones. Computing has a lot to gain from neural networks: their ability to learn by example makes them very flexible and powerful; furthermore, there is no need to devise an algorithm to perform a specific task.

Conclusion
Neural networks also contribute to areas of research such as neurology and psychology: they are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain. Many factors affect the performance of ANNs, such as the transfer functions, the size of the training sample, the network topology and the weight-adjusting algorithm.

References
Craig Heller and David Sadava, Life: The Science of Biology, fifth edition, Sinauer Associates, USA.
Nicolas Galoppo von Borries, Introduction to Artificial Neural Networks.
Tom M. Mitchell, Machine Learning, WCB McGraw-Hill, Boston, 1997.

Thank You

Q. How does each neuron work in an ANN? What is backpropagation?
A neuron: receives input from many other neurons; changes its internal state (activation) based on the current input; sends one output signal to many other neurons, possibly including its input neurons (a recurrent network).
Backpropagation is a type of supervised learning, used at each layer to minimize the error between the layer's response and the actual data.