Neural Networks: An Introduction and Overview

Neural Networks: An Introduction and Overview. Jim Ries, NLM Predoctoral Fellow, JimR@acm.org, 6/13/2000.

Introduction: The goal is to provide an intuitive feel for what NNs are and for the problems for which they are an appropriate tool, NOT to overwhelm you with mathematics. Caveat: I'm not an NN researcher; just an interested outsider (like most of you).

Topics of Discussion: What are Neural Networks? Training; History; Alternative Methods; Applications; Conclusions; Questions.

What are Neural Nets? A mechanism for approximating a function, given some sample or "training" data. A mechanism for classifying, clustering, or recognizing patterns in data. These two broad applications are essentially the same (e.g., imagine a function that outputs a discrete number indicating a cluster).

What are Neural Nets? (cont.) Rosenblatt's Perceptron: a network of processing elements (PEs). [Diagram: inputs x1, x2, x3, …, xn are connected by weighted links to units a1, …, am, which produce outputs Y1, …, Yp.] Notice that the weights determine how strongly each input x affects each unit a. The weights are updated via a learning rule at each iteration of input. Notice that networks need NOT be fully connected as shown here. A forward-pass sketch follows below.
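
As a rough illustration (not from the original slides), here is a minimal forward pass for a single-layer perceptron in Python/NumPy; the step activation, weight values, and variable names are my own assumptions.

    import numpy as np

    def perceptron_forward(x, W, b):
        """Single-layer perceptron: each output is a weighted sum of the
        inputs passed through a step (threshold) activation."""
        return (W @ x + b > 0).astype(float)

    # Example: 3 inputs, 2 output units with hand-picked weights.
    W = np.array([[0.5, -0.2, 0.1],
                  [0.3,  0.8, -0.5]])
    b = np.array([0.0, -0.1])
    x = np.array([1.0, 0.0, 1.0])
    print(perceptron_forward(x, W, b))  # -> [1. 0.]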

What are Neural Nets? (cont.) Additional layer(s) can be added. [Diagram: inputs x1, x2, x3, …, xn feed hidden units h1, …, hm, which feed units a1, …, am and then outputs Y1, …, Yp.] We can add an arbitrary number of hidden layers. Additional hidden layers tend to increase the network's ability to learn complex functions, but also increase the training time required. A sketch of a forward pass with one hidden layer follows below.
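
A minimal sketch of a forward pass with one hidden layer, again purely illustrative; the sigmoid activation, random weights, and layer sizes are my assumptions, not the original slide's.

    import numpy as np

    def mlp_forward(x, W1, b1, W2, b2):
        """Two-layer network: a sigmoid hidden layer feeding a sigmoid output
        layer. Deeper networks just apply further (W, b) pairs the same way."""
        h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))     # hidden activations
        return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # output activations

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)
    W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)  # 4 inputs -> 5 hidden units
    W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)  # 5 hidden -> 2 outputs
    print(mlp_forward(x, W1, b1, W2, b2))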

What are Neural Nets? (cont.) [Figure-only slide; no text.]

What are Neural Nets? (cont.) A "node" (PE) is typically represented as a function. Simple functions can quickly be "trained" or updated to fit a curve to data, but are unable to fit well to complex data (e.g., linear functions can never approximate quadratics). With suitable node functions the network becomes a universal approximator (typically using Radial Basis Functions). An illustrative RBF node is sketched below.
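
A minimal sketch of a Gaussian radial basis function node (my own illustration; the Gaussian form and parameter names are assumptions):

    import numpy as np

    def rbf_node(x, center, width):
        """Gaussian RBF node: responds most strongly when the input is near
        the node's center and fades with distance."""
        return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))

    x = np.array([0.9, 1.1])
    print(rbf_node(x, center=np.array([1.0, 1.0]), width=0.5))  # near 1
    print(rbf_node(x, center=np.array([3.0, 3.0]), width=0.5))  # near 0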

Training: With the simple perceptron model, we can train by adjusting the weights on the inputs whenever the output does not match the target output. The size of the adjustment made at each training iteration is set by the "learning rate". A sketch of this update rule follows below.
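
A minimal sketch of the perceptron weight update (illustrative; the AND-gate example, learning-rate value, and names are my assumptions):

    import numpy as np

    def perceptron_update(w, b, x, target, lr=0.1):
        """One training step: nudge the weights in proportion to the error
        between the target and the unit's current output; lr is the
        learning rate mentioned on the slide."""
        y = float(w @ x + b > 0)
        error = target - y
        return w + lr * error * x, b + lr * error

    # Learn a logical AND of two inputs.
    w, b = np.zeros(2), 0.0
    data = [(np.array([0., 0.]), 0), (np.array([0., 1.]), 0),
            (np.array([1., 0.]), 0), (np.array([1., 1.]), 1)]
    for _ in range(20):
        for x, t in data:
            w, b = perceptron_update(w, b, x, t)
    print(w, b)  # weights and bias that separate AND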

Training (cont.) With one or more hidden layers, training requires some sort of "propagation algorithm". Backpropagation is commonly used and is an extension of the "Minimum Disturbance Algorithm":

Training (cont.) Minimum Disturbance Algorithm:
1) Apply an example and propagate the inputs through to the output.
2) Count the number of incorrect output units.
3) For the output units, repeat a number of times: select previously unselected units whose activation is closest to zero and change their weights; if this yields fewer errors, keep the new weights, otherwise keep the old ones.
4) Repeat step 3 for all layers.
See also the handout on the backpropagation algorithm.
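
The backpropagation handout itself is not reproduced in this transcript. As a stand-in, here is a minimal sketch of one backpropagation step for a two-layer sigmoid network with squared-error loss (my own illustration; the layer sizes, learning rate, and names are assumptions):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def backprop_step(x, target, W1, b1, W2, b2, lr=0.5):
        """One backpropagation step: run the input forward, then push the
        output error backward through the layers to get weight updates."""
        h = sigmoid(W1 @ x + b1)                       # hidden activations
        y = sigmoid(W2 @ h + b2)                       # outputs
        delta_out = (y - target) * y * (1 - y)         # output-layer error term
        delta_hid = (W2.T @ delta_out) * h * (1 - h)   # error pushed back a layer
        W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
        W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid
        return W1, b1, W2, b2

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
    W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
    x, t = np.array([0.5, -1.0]), np.array([1.0])
    W1, b1, W2, b2 = backprop_step(x, t, W1, b1, W2, b2)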

Training (cont.) Overfitting: the network fits a function to the training data but does not approximate the real-world function. Ways to avoid overfitting: regularization (assumes the real function is "smooth"), early stopping, and curvature-driven methods. A sketch of early stopping follows below.
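
A minimal sketch of early stopping (illustrative only; the callback-based structure, patience parameter, and toy error sequence are my assumptions):

    def train_with_early_stopping(fit_one_epoch, validation_error, patience=5):
        """Keep training while the held-out (validation) error improves;
        stop once it has failed to improve for `patience` epochs."""
        best, stale = float("inf"), 0
        while stale < patience:
            fit_one_epoch()
            err = validation_error()
            if err < best:
                best, stale = err, 0
            else:
                stale += 1
        return best

    # Toy demo with a pre-scripted validation-error sequence.
    errors = iter([1.0, 0.8, 0.7, 0.75, 0.74, 0.76, 0.9, 0.9])
    print(train_with_early_stopping(lambda: None, lambda: next(errors), patience=3))  # -> 0.7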

History: Early 1960s - Rosenblatt's Perceptron (Rosenblatt, F., Principles of Neurodynamics, New York: Spartan Books, 1962). Late 1960s - Minsky and Papert (Minsky, M. and Papert, S., Perceptrons, MIT Press, Cambridge, 1969). 1970s & early 1980s - largely empty of NN activity, in part because of Minsky and Papert's critique.

History (cont.) Late 1980s - NNs re-emerge with Rumelhart and McClelland (Rumelhart, D., McClelland, J., Parallel and Distributed Processing, MIT Press, Cambridge, 1988). Since PDP there has been an explosion of NN literature.

Alternative Methods: Classical statistical methods - fail in on-line scenarios, are not universal approximators (e.g., linear regression), and assume a normal distribution. Symbolic approaches - expert systems, mathematical logic (e.g., Prolog), and schemas, frames, or scripts.

Alternative Methods (cont.) NNs are the "connectionist" approach. Encoding the data can be a "creative" endeavor. Ensemble approaches, Bayesian networks, fuzzy NNs.

Applications: Control; Forecasting; Faster approximations compared to exact algorithms (e.g., NeuroBlast); Compression; Cognitive modeling.

Conclusions: NNs are useful for a wide variety of tasks, but care must be taken to choose the correct algorithms for a given problem domain. NNs are not a panacea, and other approaches may be more appropriate for a given problem.

Questions?