Nicolas Galoppo von Borries, COMP290-058 Motion Planning: Introduction to Artificial Neural Networks.


Nicolas Galoppo von Borries, COMP290-058 Motion Planning: Introduction to Artificial Neural Networks

A new sort of computer. What are (everyday) computer systems good at... and not so good at?
Good at: rule-based systems, doing what the programmer wants them to do.
Not so good at: dealing with noisy data, dealing with unknown environment data, massive parallelism, fault tolerance, adapting to circumstances.

Neural networks to the rescue. A neural network is an information processing paradigm inspired by biological nervous systems, such as our brain. Structure: a large number of highly interconnected processing elements (neurons) working together. Like people, they learn from experience (by example).

Neural networks to the rescue. Neural networks are configured for a specific application, such as pattern recognition or data classification, through a learning process. In a biological system, learning involves adjustments to the synaptic connections between neurons; the same holds for artificial neural networks (ANNs).

Where can neural network systems help? When we can't formulate an algorithmic solution; when we can get lots of examples of the behavior we require ('learning from experience'); when we need to pick out the structure from existing data.

Inspiration from Neurobiology. A neuron is a many-input / one-output unit; its output can be excited or not excited. Incoming signals from other neurons determine whether the neuron will excite ("fire"). The output is subject to attenuation in the synapses, which are the junction parts of the neuron.

Synapse concept. The synapse's resistance to the incoming signal can be changed during a "learning" process. Hebb's Rule (1949): if an input of a neuron repeatedly and persistently causes the neuron to fire, a metabolic change happens in the synapse of that particular input to reduce its resistance.
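
A minimal sketch of this idea as a Hebbian weight update, assuming the usual rate-based formulation (the function and variable names are my own, not from the slides): connections whose inputs are active when the neuron fires are strengthened.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """Hebbian learning sketch: delta w_i = lr * y * x_i.
    Inputs x_i that coincide with the neuron firing (y = 1) have their
    connection strengthened, i.e. their 'resistance' reduced."""
    return w + lr * y * np.asarray(x, dtype=float)
```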

Mathematical representation. The neuron calculates a weighted sum of its inputs and compares it to a threshold: if the sum is higher than the threshold, the output is set to 1, otherwise to -1. This thresholding is the non-linearity.
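
A small sketch of such a threshold unit (a hypothetical helper, not code from the slides):

```python
import numpy as np

def threshold_neuron(weights, inputs, threshold):
    """Weighted sum of inputs compared against a threshold: output 1 or -1."""
    s = np.dot(weights, inputs)
    return 1 if s > threshold else -1

# Example: two inputs, both weighted 0.6, threshold 1.0
print(threshold_neuron([0.6, 0.6], [1, 1], 1.0))   # 1  (sum 1.2 > 1.0)
print(threshold_neuron([0.6, 0.6], [1, 0], 1.0))   # -1 (sum 0.6 <= 1.0)
```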

A simple perceptron. It's a single-unit network. Change each weight by an amount proportional to the difference between the desired output and the actual output (the perceptron learning rule): ΔW_i = η (D - Y) I_i, where η is the learning rate, D the desired output, Y the actual output, and I_i the i-th input.
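
One update step of this rule, as a hedged sketch (the function name and the 0/1 output convention are my own assumptions):

```python
import numpy as np

def perceptron_step(w, x, desired, lr=0.1):
    """Apply the perceptron learning rule once: w_i += lr * (D - Y) * I_i."""
    actual = 1 if np.dot(w, x) > 0 else 0        # threshold unit output Y
    return w + lr * (desired - actual) * np.asarray(x, dtype=float)
```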

Example: a simple single-unit adaptive network. The network has 2 inputs and one output; all are binary. The output is 1 if W0 I0 + W1 I1 + Wb > 0, and 0 if W0 I0 + W1 I1 + Wb ≤ 0. We want it to learn simple OR: output a 1 if either I0 or I1 is 1. (Demo)
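
A sketch of training this unit on OR with the rule above (the learning rate, epoch count, and variable names are my own choices):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs (I0, I1)
D = np.array([0, 1, 1, 1])                        # desired outputs: OR
w = np.zeros(2)                                   # weights W0, W1
wb = 0.0                                          # bias weight Wb
lr = 0.1

for epoch in range(20):
    for x, d in zip(X, D):
        y = 1 if np.dot(w, x) + wb > 0 else 0     # current output
        w = w + lr * (d - y) * x                  # perceptron rule on W0, W1
        wb = wb + lr * (d - y)                    # bias input is fixed at 1

print(w, wb)   # weights that implement OR after a few epochs
```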

Learning. From experience: examples / training data. The strength of the connection between two neurons is stored as a weight value for that specific connection. Learning the solution to a problem = changing the connection weights.

Operation mode. Weights are fixed (unless in online learning). Network simulation = input signals flow through the network to the outputs; the output is often a binary decision. Inherently parallel; simple operations and thresholds give fast decisions and real-time response.

Artificial Neural Networks. Adaptive interaction between individual neurons; the power lies in the collective behavior of the interconnected neurons. The hidden layer learns to recode (or to provide a representation of) the inputs: an associative mapping.

Evolving networks. A continuous process of: evaluate output, adapt weights, take new inputs. As the ANN evolves, the weights settle into a stable state, but the neurons keep working: the network has 'learned' to deal with the problem.

Learning performance depends on the network architecture and the learning method: unsupervised, reinforcement learning, or backpropagation.

Unsupervised learning. No help from the outside: no training data, no information available on the desired output. Learning by doing. Used to pick out structure in the input: clustering, reduction of dimensionality (compression). Example: Kohonen's Learning Law.

Competitive learning: example. Example: the Kohonen network. Winner takes all: only the weights of the winning neuron are updated. Ingredients: network topology, training patterns, activation rule, neighbourhood, learning.
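
A rough winner-take-all sketch in that spirit (a plain competitive-learning update without the neighbourhood function; the sizes and names are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((4, 2))     # 4 competing units, 2-dimensional inputs
lr = 0.2

def competitive_step(W, x, lr):
    """Winner takes all: the unit closest to the input wins, and only its
    weights are moved toward the input pattern."""
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    W[winner] += lr * (x - W[winner])
    return W

for x in rng.random((200, 2)):   # unlabelled training patterns
    W = competitive_step(W, x, lr)
```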

Competitive learning: example. (Demo)

Reinforcement learning. Teacher: training data. The teacher scores the performance of the network on the training examples; this performance score is used to shuffle the weights 'randomly'. Learning is relatively slow due to this 'randomness'.
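
As a loose illustration of "score the performance, then shuffle the weights randomly", here is a score-driven random-perturbation sketch (not a standard reinforcement learning algorithm; the data, scoring function, and names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def score(w, X, D):
    """Teacher's score: fraction of training examples the network gets right."""
    Y = (X @ w > 0).astype(int)
    return (Y == D).mean()

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])  # last column: bias input
D = np.array([0, 1, 1, 1])                                   # OR targets
w = np.zeros(3)

for _ in range(500):
    candidate = w + 0.1 * rng.standard_normal(3)   # shuffle weights 'randomly'
    if score(candidate, X, D) >= score(w, X, D):   # keep the change if the score improves
        w = candidate
```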

Back propagation. The desired output of each training example is known; the error is the difference between the actual and desired output. Change each weight relative to the error size: calculate the output layer error, then propagate it back to the previous layer. Improved performance, very common!
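
A minimal backpropagation sketch for one hidden layer of sigmoid units (the 2-4-1 architecture, the XOR task, the bias handling, and all names are my own assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
add_bias = lambda a: np.hstack([a, np.ones((a.shape[0], 1))])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: needs the hidden layer

W1 = rng.standard_normal((3, 4))   # 2 inputs + bias -> 4 hidden units
W2 = rng.standard_normal((5, 1))   # 4 hidden + bias -> 1 output
lr = 1.0

for _ in range(10000):
    H = sigmoid(add_bias(X) @ W1)                   # forward pass: hidden layer
    Y = sigmoid(add_bias(H) @ W2)                   # forward pass: output layer
    err_out = (D - Y) * Y * (1 - Y)                 # output-layer error
    err_hid = (err_out @ W2[:-1].T) * H * (1 - H)   # propagated back (bias row dropped)
    W2 += lr * add_bias(H).T @ err_out              # weight change relative to error
    W1 += lr * add_bias(X).T @ err_hid

print(np.round(Y, 2))   # should approach the XOR targets for most initialisations
```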

Hopfield law demo. If the desired and actual output are both active or both inactive, increment the connection weight by the learning rate; otherwise decrement the weight by the learning rate. Matlab demo of a simple linear separation: one perceptron can only separate linearly!
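
The stated rule as a one-line sketch (the function name is my own):

```python
def hopfield_law_update(w, desired_active, actual_active, lr=0.1):
    """Increment the connection weight when the desired and actual output agree
    (both active or both inactive), otherwise decrement it."""
    return w + lr if desired_active == actual_active else w - lr
```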

Online / Offline. Offline: weights are fixed in operation mode; most common. Online: the system learns while in operation mode; requires a more complex network architecture.

Generalization vs. specialization: the optimal number of hidden neurons. Too many hidden neurons: you get an overfit, the training set is memorized, making the network useless on new data sets. Not enough hidden neurons: the network is unable to learn the problem concept; conceptually, the network's 'language' isn't able to express the problem solution.

Generalization vs. specialization. Overtraining: with too many examples, the ANN memorizes the examples instead of the general idea. The generalization vs. specialization trade-off is governed by the number of hidden nodes and the number of training samples. (MATLAB demo)

Where are NN used? Recognizing and matching complicated, vague, or incomplete patterns; problems where the data is unreliable or noisy. Typical tasks: prediction, classification, data association, data conceptualization, filtering, planning.

Applications. Prediction (learning from past experience): pick the best stocks in the market, predict weather, identify people with cancer risk. Classification: image processing, predicting bankruptcy for credit card companies, risk assessment.

Applications. Recognition / pattern recognition: SNOOPE (bomb detector in U.S. airports); character recognition; handwriting (processing checks). Data association: not only identify the characters that were scanned, but also identify when the scanner is not working properly.

Applications. Data conceptualization: infer grouping relationships, e.g. extract from a database the names of those most likely to buy a particular product. Data filtering: e.g. take the noise out of a telephone signal, signal smoothing. Planning: unknown environments, noisy sensor data; a fairly new approach to planning.

Strengths of a Neural Network. Power: models complex functions, with nonlinearity built into the network. Ease of use: learns by example; very little user domain-specific expertise is needed. Intuitively appealing: based on a model of biology; will it lead to genuinely intelligent computers/robots? Neural networks cannot do anything that cannot be done using traditional computing techniques, BUT they can do some things which would otherwise be very difficult.

General Advantages. Advantages: adapt to unknown situations; robustness (fault tolerance due to network redundancy); autonomous learning and generalization. Disadvantages: not exact; large complexity of the network structure. For motion planning?

Status of Neural Networks. Most of the reported applications are still in the research stage. There are no formal proofs, but they seem to have useful applications that work.