Artificial Neural Networks


Artificial Neural Networks Artificial neural networks are another technique for supervised machine learning, alongside k-Nearest Neighbor, decision trees, and logic statements: a network is fit to training data and then used for classification of test data.

Human neuron Dendrites pick up signals from other neurons. When the combined signal from the dendrites reaches a threshold, a signal is sent down the axon to the synapse.

Connection with AI Most modern AI: “systems that act rationally.” Implementing neurons in a computer: “systems that think like humans.” Why artificial neural networks, then? They are a “universal” function fitter, offer potential for massive parallelism and some amount of fault-tolerance, and are trainable by inductive learning, like other supervised learning techniques.

Perceptron Example A perceptron for tumor diagnosis: three input units (# of tumors, avg area, avg density) with weights w1 = -0.1, w2 = 0.9, w3 = 0.1 feeding a single output unit, whose output is 1 = malignant or 0 = benign.

The Perceptron: Input Units Input units are the features in the original problem. If numeric, they are often scaled between -1 and 1. If discrete, one input node is often created for each category. Values can also be assigned to a single node, but this imposes an ordering on the categories. Both encodings are sketched below.
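A minimal sketch of the two encodings in Python; the feature values and category names are hypothetical, chosen only for illustration:

```python
def scale_numeric(x, lo, hi):
    """Scale a numeric feature from its original range [lo, hi] into [-1, 1]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def one_hot(category, categories):
    """One input node per category: 1 for the matching node, 0 elsewhere."""
    return [1 if c == category else 0 for c in categories]

# Hypothetical tumor features: a numeric average area and a discrete texture.
print(scale_numeric(7.5, 0.0, 10.0))                    # 0.5
print(one_hot("smooth", ["smooth", "rough", "mixed"]))  # [1, 0, 0]
```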

The Perceptron: Weights Weights represent the importance of each input unit and are combined with the input units to feed the output unit. The output unit receives as input the weighted sum Σi wi xi.

The Perceptron: Output Unit The output unit uses an activation function to decide what the correct output is. Sample activation function, a step with threshold t: output 1 if Σi wi xi > t, and 0 otherwise.
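A minimal sketch of this computation, using the slide's weights; the input values and the threshold 0.5 are made-up assumptions:

```python
def perceptron_output(weights, inputs, threshold):
    """Weighted sum of the inputs, passed through a step activation."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Slide's weights over (# of tumors, avg area, avg density); inputs and
# threshold are made-up values: the sum is 0.57 > 0.5, so the unit fires.
print(perceptron_output([-0.1, 0.9, 0.1], [2.0, 0.8, 0.5], threshold=0.5))  # 1
```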

Simplifying the threshold Managing the threshold is cumbersome. Incorporate it as a “virtual” weight: add a constant input x0 = 1 with weight w0 = -t, so the test becomes Σi wi xi > 0 with the sum running from i = 0.
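The same perceptron with the threshold folded in as a virtual weight, a sketch reusing the assumed threshold t = 0.5 from above:

```python
def augment(inputs):
    """Prepend a constant x0 = 1 whose weight w0 = -t absorbs the threshold."""
    return [1.0] + list(inputs)

def perceptron_output(weights, inputs):
    """With the virtual weight in place, the test is simply: sum > 0."""
    total = sum(w * x for w, x in zip(weights, augment(inputs)))
    return 1 if total > 0 else 0

# Same example as before: threshold t = 0.5 becomes w0 = -0.5.
print(perceptron_output([-0.5, -0.1, 0.9, 0.1], [2.0, 0.8, 0.5]))  # 1
```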

How do we compute weights? Initialize all weights randomly, usually in [-0.5, 0.5]. Put the first point through the network to get the actual output O = step(Σi wi xi), and define Error = Correct Output - Actual Output.

Perceptron Example With weights w1 = -0.3, w2 = -0.2, w3 = 0.4 on the same input units (# of tumors, avg area, avg density), a malignant point gives Actual Output = 0 while Correct Output = 1, so Error = 1 - 0 = 1. If an input is positive, we want its weight to become more positive; if an input is negative, we want its weight to become more negative.

The Perceptron Learning Rule: How do we compute weights? Put the first point through the network, compute the actual output, and define Error = Correct Output - Actual Output. Update all weights by wi ← wi + a · Error · xi, where a is the learning rate. Repeat with all points, then all points again, and again, until all are correct or a stopping criterion is reached.
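A sketch of the full training loop under these rules; the toy AND dataset, learning rate, and epoch cap are assumptions for illustration:

```python
import random

def train_perceptron(points, targets, a=0.1, epochs=100):
    """Perceptron learning rule: w_i <- w_i + a * Error * x_i."""
    n = len(points[0]) + 1                      # +1 for the virtual bias weight
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    for _ in range(epochs):
        mistakes = 0
        for p, t in zip(points, targets):
            x = [1.0] + list(p)                 # bias input x0 = 1
            o = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            error = t - o                       # Correct Output - Actual Output
            if error != 0:
                mistakes += 1
                w = [wi + a * error * xi for wi, xi in zip(w, x)]
        if mistakes == 0:                       # all points correct: stop
            break
    return w

# Logical AND is linearly separable, so this converges.
weights = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
```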

Can appropriate weights always be found? ONLY IF data is linearly separable

What if data is not linearly separable? Use a neural network: each hidden unit is a perceptron (write Vj for the output of hidden unit j), and the output unit is another perceptron with the hidden units as its inputs.

How to compute weights for a multilayer neural network? We need to redefine the perceptron: the step function is no good, since we need something differentiable. Replace it with the sigmoid approximation.

Sigmoid function σ(x) = 1 / (1 + e^(-bx)) is a good approximation to the step function. As b → ∞, the sigmoid approaches the step. We'll just take b = 1 for simplicity.
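A quick numerical check of this claim; the particular b values probed are arbitrary:

```python
import math

def step(x):
    return 1 if x > 0 else 0

def sigmoid(x, b=1.0):
    """1 / (1 + e^(-b x)): a differentiable stand-in for the step."""
    return 1.0 / (1.0 + math.exp(-b * x))

# As b grows, the sigmoid's values approach the step function's 0s and 1s.
for b in (1, 5, 50):
    print(b, [round(sigmoid(x, b), 3) for x in (-1.0, -0.1, 0.1, 1.0)])
print([step(x) for x in (-1.0, -0.1, 0.1, 1.0)])  # [0, 0, 1, 1]
```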

Computing weights: backpropagation Think of it as a gradient descent method, where the weights are the variables and we are trying to minimize the error E defined on the next slide.

Minimize squared errors For all training points p, let Tp = correct output and Op = actual output. We want to minimize the error E = ½ Σp (Tp - Op)². Work with one point at a time, and move the weights in the direction that reduces the error the most.

Expand (dropping the p for simplicity): E = ½ (T - O)². The direction of the most rapid positive rate of change (the gradient) is given by the partial derivative ∂E/∂wj, and moving against it gives the update rule for the hidden-to-output weights, worked out below.
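The chain-rule step worked out, assuming the usual convention that the output unit computes O = σ(net) with net = Σj wj Vj, and using σ'(x) = σ(x)(1 - σ(x)):

```latex
\frac{\partial E}{\partial w_j}
  = \frac{\partial E}{\partial O}
    \cdot \frac{\partial O}{\partial \mathrm{net}}
    \cdot \frac{\partial \mathrm{net}}{\partial w_j}
  = -(T - O)\, O(1 - O)\, V_j,
\qquad
w_j \leftarrow w_j - a \frac{\partial E}{\partial w_j}
            = w_j + a\,(T - O)\, O(1 - O)\, V_j .
```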

Simplify as wj ← wj + a δ Vj, where δ = (T - O) O (1 - O). Backpropagation for the input-layer weights is similar, but requires some more chain-rule partial derivatives.
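A compact sketch putting the whole derivation together: a 2-2-1 sigmoid network trained by backpropagation on XOR. The architecture, learning rate, seed, and epoch count are illustrative assumptions, and convergence can vary with the seed:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# 2 inputs -> 2 hidden units -> 1 output; index 0 of each weight list is a bias.
W_hid = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
W_out = [random.uniform(-0.5, 0.5) for _ in range(3)]
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
a = 0.5                                                      # learning rate

for _ in range(20000):
    for (x1, x2), T in data:
        x = [1.0, x1, x2]                       # bias + inputs
        V = [sigmoid(sum(w * xi for w, xi in zip(Wj, x))) for Wj in W_hid]
        h = [1.0] + V                           # bias + hidden outputs
        O = sigmoid(sum(w * hi for w, hi in zip(W_out, h)))
        delta_out = (T - O) * O * (1 - O)       # delta = (T - O) O (1 - O)
        # Hidden deltas: push delta_out back through the output weights.
        delta_hid = [delta_out * W_out[j + 1] * V[j] * (1 - V[j]) for j in range(2)]
        W_out = [w + a * delta_out * hi for w, hi in zip(W_out, h)]
        for j in range(2):
            W_hid[j] = [w + a * delta_hid[j] * xi for w, xi in zip(W_hid[j], x)]

# After training, the outputs should sit near the XOR targets.
for (x1, x2), T in data:
    x = [1.0, x1, x2]
    V = [sigmoid(sum(w * xi for w, xi in zip(Wj, x))) for Wj in W_hid]
    O = sigmoid(sum(w * hi for w, hi in zip(W_out, [1.0] + V)))
    print((x1, x2), T, round(O, 3))
```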

Neural Networks and machine learning issues Neural networks can represent any training set if enough hidden units are used, but: How long do they take to train? How much memory do they need? Does backprop find the best set of weights? How do we deal with overfitting? How do we interpret the results?