Artificial Intelligence Techniques. Aims: this section covers the fundamental theory and practical applications of artificial neural networks.

Session aim: an introduction to the biological background of neural networks and the implementation issues relevant to developing practical systems.

Biological neuron (image taken from ets/neuralNetIntro.html)

The human brain consists of approximately 10 billion neurons interconnected by about 10 trillion synapses.

 A neuron: a specialised cell for receiving, processing and transmitting information.

 Electrical charge from neighbouring neurons reaches the neuron, and these incoming signals add together.

 The summed signal is passed to the soma, which processes this information.

 A signal threshold is applied.

If the summed signal > threshold, the neuron fires

Constant output signal is transmitted to other neurons.

 The strength and polarity of the output depend on the features of each synapse

 Varying these features adapts the network.

 Varying each input's contribution varies the system!

Simplified neuron (image taken from ojects/medalus3/Task1.htm)

Exercise 1  In groups of 2-3, as a group:  Write down one question about this topic.

McCulloch-Pitts model (inputs X1, X2, X3; weights W1, W2, W3; threshold T; output Y) Y=1 if W1X1+W2X2+W3X3 ≥ T Y=0 if W1X1+W2X2+W3X3 < T

McCulloch-Pitts model Y=1 if W1X1+W2X2+W3X3 ≥ T Y=0 if W1X1+W2X2+W3X3 < T
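As a sketch, the firing rule above can be written in a few lines of Python (the function name and example values are illustrative, not from the slides):

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fire (output 1) if the weighted sum
    of the inputs reaches the threshold T, otherwise output 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# Example: three inputs with weights W1 = W2 = W3 = 1 and threshold T = 2
print(mcp_neuron([1, 0, 1], [1, 1, 1], 2))  # 1 + 0 + 1 = 2 >= 2, so it fires: 1
```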

Logic functions - OR
X1 X2 Y
0  0  0
0  1  1
1  0  1
1  1  1
Y = X1 OR X2

Logic functions - AND
X1 X2 Y
0  0  0
0  1  0
1  0  0
1  1  1
Y = X1 AND X2

Logic functions - NOT
X Y
0 1
1 0
Y = NOT X
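The three logic functions above can be reproduced with the McCulloch-Pitts firing rule and suitably chosen weights and thresholds. The particular values below (T = 1 for OR, T = 2 for AND, a negative weight for NOT) are one standard choice, assumed here since the slides give only the truth tables:

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def OR(x1, x2):  return mcp_neuron([x1, x2], [1, 1], 1)   # any active input fires it
def AND(x1, x2): return mcp_neuron([x1, x2], [1, 1], 2)   # both inputs must be active
def NOT(x):      return mcp_neuron([x], [-1], 0)          # fires only when x = 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, OR(a, b), AND(a, b))  # reproduces the OR and AND truth tables
```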

McCulloch-Pitts model (inputs X1, X2, X3; weights W1, W2, W3; threshold T; output Y) Y=1 if W1X1+W2X2+W3X3 ≥ T Y=0 if W1X1+W2X2+W3X3 < T

Introduce the bias Take the threshold over to the other side of the equation and replace it with a weight W0 which equals -T, and include a constant input X0 which equals 1.

Introduce the bias Y=1 if W1X1+W2X2+W3X3 - T ≥ 0 Y=0 if W1X1+W2X2+W3X3 - T < 0

Introduce the bias  Lets just use weights – replace T with a ‘fake’ input  ‘fake’ is always 1.

Introduce the bias Y=1 if W1X1+W2X2+W3X3+W0X0 ≥ 0 Y=0 if W1X1+W2X2+W3X3+W0X0 < 0

Short-hand notation Instead of writing all the terms in the summation, replace them with a Greek sigma Σ: Y=1 if W1X1+W2X2+W3X3+W0X0 ≥ 0, Y=0 if W1X1+W2X2+W3X3+W0X0 < 0 becomes Y=1 if Σ WiXi ≥ 0, Y=0 if Σ WiXi < 0 (summing over i = 0 to 3)
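The bias form can be sketched as follows: with X0 = 1 and W0 = -T, the threshold test becomes a comparison of the net sum against zero (the example values are assumed for illustration):

```python
def neuron(x, w):
    """Bias form: x[0] is the constant 'fake' input X0 = 1 and w[0] = W0 = -T.
    Fire if net = sum over i of Wi*Xi >= 0."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if net >= 0 else 0

# Threshold form with T = 2, inputs X1..X3 = 1, 0, 1 and weights W1..W3 = 1, 1, 1
x = [1, 1, 0, 1]       # X0 = 1, then X1, X2, X3
w = [-2, 1, 1, 1]      # W0 = -T = -2, then W1, W2, W3
print(neuron(x, w))    # net = -2 + 1 + 0 + 1 = 0 >= 0, so Y = 1
```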

Logic functions - OR (diagram): inputs X1 and X2 each with weight 1, plus the bias input X0 Y = X1 OR X2

Logic functions - AND (diagram): inputs X1 and X2 each with weight 1, plus the bias input X0 with weight -2 Y = X1 AND X2

Logic functions - NOT (diagram): input X1, plus the bias input X0 with weight 0 Y = NOT X1
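A sketch of the three gates in bias form. The AND bias (W0 = -2) and NOT bias (W0 = 0) match the slides; the OR bias of -1 and the NOT input weight of -1 are assumed, since they are not shown on the diagrams:

```python
def neuron(x, w):
    """Fire if net = sum over i of Wi*Xi >= 0; x[0] is the bias input X0 = 1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

OR  = lambda a, b: neuron([1, a, b], [-1, 1, 1])  # W0 = -1 (assumed)
AND = lambda a, b: neuron([1, a, b], [-2, 1, 1])  # W0 = -2, as on the slide
NOT = lambda a:    neuron([1, a], [0, -1])        # W0 = 0; W1 = -1 (assumed)

print([OR(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([NOT(a) for a in (0, 1)])                     # [1, 0]
```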

The weighted sum  The weighted sum, Σ WiXi, is called the "net" sum.  Net = Σ WiXi  y=1 if net ≥ 0  y=0 if net < 0

Multi-layered perceptron  Feedforward network  Trained by passing errors backwards  Input-hidden-output layers  Most common

Multi-layered perceptron (Taken from Picton 2004) Input layer Hidden layer Output layer

Hopfield network  Feedback network  Easy to train  Single layer of neurons  Neurons fire in a random sequence
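A minimal sketch of the "random sequence" firing described above, with one pattern stored via Hebbian weights; the variable names and pattern are illustrative:

```python
import random

def hopfield_sweep(state, W):
    """Update every neuron once, in random order (asynchronous firing):
    each neuron is set to +1/-1 by thresholding its weighted input."""
    n = len(state)
    for i in random.sample(range(n), n):
        net = sum(W[i][j] * state[j] for j in range(n) if j != i)
        state[i] = 1 if net >= 0 else -1
    return state

# Hebbian weights storing the pattern p = [1, -1, 1] (zero diagonal)
p = [1, -1, 1]
W = [[0 if i == j else p[i] * p[j] for j in range(3)] for i in range(3)]
state = [1, 1, 1]                  # start one bit away from the stored pattern
for _ in range(3):
    hopfield_sweep(state, W)
print(state)  # settles to the stored pattern [1, -1, 1]
```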

Hopfield network (diagram: interconnected neurons x1, x2, x3)

Radial basis function network  Feedforward network  Has 3 layers  Hidden layer uses statistical clustering techniques to train  Good at pattern recognition

Radial basis function networks Input layer Hidden layer Output layer

Kohonen network  All neurons are connected to the inputs but not to each other  Often uses an MLP as an output layer  Neurons are self-organising  Trained using "winner-takes-all"
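The "winner-takes-all" training step can be sketched as: find the neuron whose weight vector is closest to the input, then move only that winner's weights towards the input. The two-neuron setup and learning rate below are assumed for illustration:

```python
import math

def winner(weights, x):
    """Winner-takes-all: the neuron whose weight vector lies closest
    (Euclidean distance) to the input wins the competition."""
    dists = [math.dist(w, x) for w in weights]
    return dists.index(min(dists))

def train_step(weights, x, lr=0.5):
    """Self-organisation: move only the winner's weights towards the input x."""
    i = winner(weights, x)
    weights[i] = [wi + lr * (xi - wi) for wi, xi in zip(weights[i], x)]
    return i

weights = [[0.0, 0.0], [1.0, 1.0]]      # two competing neurons
print(train_step(weights, [0.9, 0.8]))  # neuron 1 is closest, so it wins: 1
```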

What can they do?  Perform tasks that conventional software cannot do  For example, reading text, understanding speech, recognising faces

Neural network approach  Set up examples of numerals  Train a network  Done, in a matter of seconds

Learning and generalising  Neural networks can do this easily because they have the ability to learn and to generalise from examples

Learning and generalising  Learning is achieved by adjusting the weights  Generalisation is achieved because similar patterns will produce an output

Summary  Neural networks have a long history but are now a major part of computer systems

Summary  They can perform tasks (not perfectly) that conventional software finds difficult

 Introduced  McCulloch-Pitts model and logic  Multi-layer perceptrons  Hopfield network  Kohonen network

 Neural networks can  Classify  Learn and generalise.