COSC 4426 AJ Boulay Julia Johnson Artificial Neural Networks: Introduction to Soft Computing (Textbook)

Chapter 4 Neural Computing I'm AJ Boulay, a Master's student in Computational Science here at LU. Dr. Johnson asked me to present on ANNs. Does anyone know something about ANNs? BRAIN – you can ask me about neural computing in the brain; if I know the answer, I will tell you. We will be talking about Artificial Neural Networks (ANNs) as massively parallel connectionist networks, inspired by the biology of cognition.

Chapter 4 Neural Computing What will we cover in this presentation? The sections of chapter four of the text, and a few questions that may appear on the exam. A review of a computational experiment with a neural network. If we have time, a chance to run the training and testing of a neural network.

Chapter 4 Neural Computing What do I need to remember about how Neural Nets work? “Each neuron, as the main computational unit, performs only a very simple operation: it sums its weighted inputs and applies a certain activation function on the sum. Such a value then represents the output of the neuron.” p. 92.
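That one-line description maps directly onto code. Here is a minimal Python sketch (not from the textbook; the names are illustrative) of a single neuron doing exactly that: a weighted sum of the inputs, followed by an activation function.

```python
import numpy as np

def neuron_output(inputs, weights, activation):
    """Sum the weighted inputs, then apply the activation function."""
    s = np.dot(weights, inputs)   # weighted sum of the inputs
    return activation(s)          # the neuron's output

# Example: a step-activated neuron with two inputs.
step = lambda s: 1 if s >= 0 else 0
print(neuron_output(np.array([1, 0]), np.array([0.5, -0.3]), step))  # -> 1
```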

Chapter 4 Neural Computing See the system: Here is an example of a network and how it works. When we have finished reviewing the material, we can look at the network again and you can see what you learned.

Autoassociator Network Architecture

Activation Functions – p. 93 Binary Step Function, Linear Function, Sigmoid Function. The activation function determines whether and how strongly the unit fires – it acts like a threshold on the weighted sum.
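A quick sketch of the three activation functions named on the slide, assuming their usual textbook definitions:

```python
import numpy as np

def binary_step(s, threshold=0.0):
    """Fires (1) only when the sum reaches the threshold."""
    return np.where(s >= threshold, 1.0, 0.0)

def linear(s):
    """Output is simply the weighted sum itself."""
    return s

def sigmoid(s):
    """Smooth, S-shaped squashing of the sum into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-s))

s = np.array([-2.0, 0.0, 2.0])
print(binary_step(s))  # [0. 1. 1.]
print(linear(s))       # [-2.  0.  2.]
print(sigmoid(s))      # approx [0.119 0.5 0.881]
```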

Neural Nets are Trained – p. 97 Training involves the modification of weights between units. Weights can be set a priori, learned through training, or both. Two kinds of training: Supervised – a teacher provides the desired outputs that guide how the network learns (e.g., the autoassociator). Unsupervised – the system learns structure on its own (we will see this in SOMs, clustering, etc.).

Hebb Rule – p. 97 "Fire together – wire together": the strength of a connection (its weight) increases when both units it joins fire together. Donald Hebb was a Canadian neuropsychologist at McGill. The Hebb rule computes the weight change by multiplying the activations of the two connected units and scaling the product by the learning rate alpha: Δw = α · x · y. If the training patterns are mutually orthogonal, then it can be proved that they will all be learned.
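A hedged sketch of that update, Δw = α · x · y, with the learning rate chosen arbitrarily for illustration. Storing two mutually orthogonal patterns shows why orthogonality matters: neither pattern's update disturbs the other's.

```python
import numpy as np

alpha = 0.1  # learning rate (assumed value, for illustration only)

def hebb_update(w, x, y):
    """Hebb rule: strengthen each weight in proportion to the product of
    the pre-synaptic activation x and the post-synaptic activation y."""
    return w + alpha * np.outer(y, x)

# Two orthogonal input patterns, each associated with its own output.
w = np.zeros((2, 2))
patterns = [(np.array([1, 0]), np.array([1, 0])),
            (np.array([0, 1]), np.array([0, 1]))]
for x, y in patterns:
    w = hebb_update(w, x, y)
print(w)  # each pattern is stored without interfering with the other
```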

Delta Rule – p. 98 Another common rule is the Delta rule – aka the LMS rule. Usually used in networks trained by gradient descent on an error surface. For a given input, the output is compared to the correct answer. Would learning take place if the weights are zero? The change in a weight is Δw = α · e · x, where alpha is the learning rate, x is the input on that connection, and e is the difference between the expected output and the actual output y. If the inputs are linearly independent, then learning can take place.
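A minimal sketch of the delta rule for a single linear unit (the variable names and target function are mine, not the textbook's). It also answers the slide's question: learning does take place from zero weights, because the update depends on the input and the error, not on the current weight values.

```python
import numpy as np

alpha = 0.05  # learning rate (assumed value)

def delta_update(w, x, d):
    """Delta (LMS) rule: dw = alpha * e * x, where e = desired - actual."""
    y = np.dot(w, x)   # actual output of a linear unit
    e = d - y          # error against the desired output d
    return w + alpha * e * x

# Learn the mapping d = 2*x1 - x2 from random examples.
rng = np.random.default_rng(0)
w = np.zeros(2)        # zero initial weights still learn
for _ in range(2000):
    x = rng.uniform(-1, 1, size=2)
    w = delta_update(w, x, 2 * x[0] - x[1])
print(w)  # converges near [2, -1]
```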

Network Topologies – p. Feed-forward networks – can have multiple layers; often use the Hebb rule alone, with a binary activation function. Vs. recurrent nets – often use a sigmoid activation function and backpropagation learning algorithms.

Perceptron – p. Can only classify linearly separable cases – what does this mean? No a priori knowledge – initialized with random weights. Predicted vs. desired – if they match, then there is no change in the weights. XOR – the famous counterexample of Minsky and Papert: XOR is not linearly separable, so a single perceptron cannot learn it.
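A small demonstration of the perceptron learning rule under its usual formulation (threshold unit, bias folded into the weights as an extra always-on input). It learns AND, which is linearly separable, but can never get all four XOR cases right, since no single line separates them.

```python
import numpy as np

def train_perceptron(X, targets, epochs=50, alpha=0.1):
    """Perceptron rule: change weights only when prediction != target."""
    rng = np.random.default_rng(1)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # fold bias into inputs
    w = rng.uniform(-0.5, 0.5, Xb.shape[1])     # random initial weights
    for _ in range(epochs):
        for x, t in zip(Xb, targets):
            y = 1 if np.dot(w, x) >= 0 else 0
            w += alpha * (t - y) * x            # zero change on a match
    return [1 if np.dot(w, x) >= 0 else 0 for x in Xb]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(train_perceptron(X, [0, 0, 0, 1]))  # AND, separable -> [0, 0, 0, 1]
print(train_perceptron(X, [0, 1, 1, 0]))  # XOR: not separable, so the
                                          # outputs can never all be right
```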

Multilayer Nets – p. Multilayer feed-forward. Usually fully connected between layers – e.g., Kohonen nets and the autoassociator. No connections between neurons of the same layer – their states are fixed by the problem. Gradient descent methods – the algorithm searches for a global minimum of the weight landscape. No a priori knowledge at initialization.
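To show why the extra layer matters, here is a compact sketch of a multilayer feed-forward net trained by gradient descent (backpropagation) on the XOR problem the single perceptron failed. The layer sizes and learning rate are arbitrary choices of mine, not values from the text.

```python
import numpy as np

sigmoid = lambda s: 1 / (1 + np.exp(-s))

# XOR: the problem a single perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

rng = np.random.default_rng(0)
W1 = rng.uniform(-1, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.uniform(-1, 1, (4, 1)); b2 = np.zeros(1)   # output layer
alpha = 0.5                                          # learning rate

for _ in range(10000):
    # forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # backward pass: gradient of squared error, layer by layer
    dy = (y - T) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= alpha * h.T @ dy; b2 -= alpha * dy.sum(axis=0)
    W1 -= alpha * X.T @ dh; b1 -= alpha * dh.sum(axis=0)

print(y.round(2).ravel())  # should approach [0, 1, 1, 0]
```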

Kohonen SOMs – Self-Organizing Maps (SOMs). Full connectivity. Hebb rule – Best Matching Unit (BMU) – the unit whose weights are closest to the input when compared. What does this mean? Learning? A priori? Supervised? Clustering – you get 'nearest neighbour' searches, with features clustered. In this kind of network there is competitive learning: inhibition occurs between the BMU and the other units that didn't match the input as well. The winning weights/activations go on to the next round of learning.
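A rough sketch of one competitive-learning step in a SOM, assuming the common formulation: find the BMU, then pull it and its grid neighbours toward the input, while distant units are effectively inhibited (they learn almost nothing). The map size, learning rate, and radius here are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0, 1, (5, 5, 2))   # a 5x5 map of 2-D weight vectors

def som_step(x, weights, alpha=0.3, radius=1.0):
    """One competitive-learning step: find the Best Matching Unit,
    then pull it (and nearby units) toward the input x."""
    # BMU = the unit whose weight vector is closest to the input
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(dists.argmin(), dists.shape)
    # neighbourhood function: closer grid units learn more; distant
    # units are effectively inhibited
    ii, jj = np.indices(dists.shape)
    grid_d2 = (ii - bi) ** 2 + (jj - bj) ** 2
    h = np.exp(-grid_d2 / (2 * radius ** 2))
    weights += alpha * h[:, :, None] * (x - weights)
    return weights

for _ in range(500):                      # inputs drawn from two clusters
    centre = rng.choice([0.2, 0.8])
    weights = som_step(centre + rng.normal(0, 0.05, 2), weights)
```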

Hopfield Networks – p. Hopfield was a physicist – he treated networks as if their gradient landscapes followed energy functions. His famous 1982 paper built on the McCulloch and Pitts (1943) neuron model in constructing a computational model of human memory. Information is content-addressable in this net – it is an associative data structure. You can 'retrieve' information by priming or cueing the network with an input. Notice some common features of nets already mentioned in the text – gradient descent and a recursive or recurrent architecture.
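A minimal Hopfield-style sketch, assuming the standard Hebbian storage rule and threshold updates: patterns are stored as an outer-product weight matrix, and a stored pattern is retrieved by cueing the network with a corrupted version of it.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian storage: W = sum of outer products of the +/-1 patterns,
    with a zero diagonal (no self-connections)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)
    return W

def hopfield_recall(W, x, steps=10):
    """Recurrent retrieval: repeatedly update the states until they
    settle, descending the network's energy function."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

stored = np.array([[1, -1, 1, -1, 1, -1],
                   [1, 1, 1, -1, -1, -1]])
W = hopfield_store(stored)
cue = np.array([1, -1, 1, -1, 1, 1])   # first pattern, one bit flipped
print(hopfield_recall(W, cue))          # -> [ 1 -1  1 -1  1 -1]
```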

Conclusion Questions?