http://xkcd.com/894/

Neural Networks
David Kauchak CS51A Spring 2019

Neural Networks
Neural networks try to mimic the structure and function of our nervous system. People like biologically motivated approaches.

Our Nervous System
The neuron: what do you know?

Our nervous system: the computer science view
- the human brain is a large collection of interconnected neurons
- a NEURON is a brain cell
- neurons collect, process, and disseminate electrical signals
- they are connected via synapses
- they FIRE depending on the conditions of the neighboring neurons

Our nervous system
- The human brain contains ~10^11 (100 billion) neurons
- Each neuron is connected to ~10^4 (10,000) other neurons
- Neurons can fire as fast as every 10^-3 seconds
How does this compare to a computer?

Man vs. Machine
Brain: ~10^11 neurons, ~10^14 synapses, ~10^-3 s "cycle" time
Computer: ~10^10 transistors, ~10^11 bits of RAM/memory, ~10^13 bits on disk, ~10^-9 s cycle time

Brains are still pretty fast Who is this?

Brains are still pretty fast
If you were me, you'd be able to identify this person in 10^-1 (1/10) s! Given a neuron firing time of 10^-3 s, how many neurons in sequence could fire in this time? A few hundred at most (10^-1 / 10^-3 = 100).
What are the possible explanations?
- either neurons are performing some very complicated computations
- or the brain is taking advantage of massive parallelization (remember, each neuron is connected to ~10,000 other neurons)

Artificial Neural Networks: our approximation
Node = neuron; edge = synapse.

Node A (neuron) --- weight w ---> Node B (neuron)
w is the strength of the signal sent between A and B.
If A fires and w is positive, then A stimulates B.
If A fires and w is negative, then A inhibits B.

A given neuron has many, many connecting input neurons. If a neuron is stimulated enough, then it also fires. How much stimulation is required is determined by its threshold.

A Single Neuron/Perceptron
Inputs x1, x2, x3, x4 arrive with weights w1, w2, w3, w4. Each input contributes xi * wi, and the weighted sum is passed through a threshold function to produce the output y.

Possible threshold functions
- hard threshold: output 1 if the weighted sum reaches the threshold, 0 otherwise
- sigmoid: a smooth, S-shaped alternative, g(x) = 1 / (1 + e^-x)
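The slides don't give code for these; below is a minimal Python sketch of a single hard-threshold unit and a sigmoid. The >= convention is an assumption, chosen so that the gate examples later in the lecture work out:

```python
import math

def perceptron(inputs, weights, threshold):
    """Hard-threshold unit: fire (output 1) if the weighted sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def sigmoid(x):
    """Smooth alternative to the hard threshold: maps any sum into (0, 1)."""
    return 1 / (1 + math.exp(-x))
```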

A Single Neuron/Perceptron
Weights: 1, -1, 1, 0.5. Threshold of 1. Inputs: 1, 1, 0, 1. What is the output?

A Single Neuron/Perceptron
Weighted sum: 1*1 + 1*(-1) + 0*1 + 1*0.5 = 0.5

A Single Neuron/Perceptron
The weighted sum is 0.5, which is not larger than the threshold of 1, so the neuron does not fire (output 0).

A Single Neuron/Perceptron
New inputs: 1, 0, 0, 1. Weighted sum: 1*1 + 0*(-1) + 0*1 + 1*0.5 = 1.5

A Single Neuron/Perceptron
The weighted sum is 1.5, which is larger than the threshold of 1, so the neuron fires (output 1).
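Using the perceptron sketch above, both worked examples can be checked directly:

```python
weights = [1, -1, 1, 0.5]
print(perceptron([1, 1, 0, 1], weights, threshold=1))  # sum 0.5, below threshold -> 0
print(perceptron([1, 0, 0, 1], weights, threshold=1))  # sum 1.5, above threshold -> 1
```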

Neural network
A network is built from individual perceptrons/neurons (often we view a slice of them).

Neural network
Some inputs are provided/entered.

Neural network
Each perceptron computes an answer from its inputs.

Neural network
Those answers become inputs for the next level.

Neural network
Finally, we get the answer after all levels compute.
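A minimal sketch of this level-by-level computation, reusing the perceptron function above (the layer sizes, weights, and thresholds here are made up for illustration):

```python
def layer(inputs, weight_matrix, thresholds):
    """One level: every unit sees all of the previous level's outputs."""
    return [perceptron(inputs, w, t) for w, t in zip(weight_matrix, thresholds)]

# Hypothetical 3-input network: one hidden level of 2 units, then 1 output unit.
hidden = layer([1, 0, 1], [[1, -1, 0.5], [0.5, 1, 1]], [1, 1])
answer = layer(hidden, [[1, 1]], [2])
```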

Neural networks
Different kinds/characteristics of networks. How are these different?

Neural networks
Feed-forward networks: inputs flow through hidden units/layers to the output, with no cycles.

Neural networks
Recurrent networks: the output is fed back to the input. This can support memory! How?

History of Neural Networks
- McCulloch and Pitts (1943): introduced a model of artificial neurons and suggested they could learn
- Hebb (1949): simple updating rule for learning
- Rosenblatt (1962): the perceptron model
- Minsky and Papert (1969): wrote Perceptrons
- Bryson and Ho (1969, but largely ignored until the 1980s): invented back-propagation learning for multilayer networks

Training the perceptron
- First wave of neural networks, in the 1960s
- Single neuron
- Trainable: its threshold and input weights can be modified
- If the neuron doesn't give the desired output, then it has made a mistake
- Input weights and threshold can be changed according to a learning algorithm

Examples - Logical operators
- AND: if all inputs are 1, return 1; otherwise return 0
- OR: if at least one input is 1, return 1; otherwise return 0
- NOT: return the opposite of the input
- XOR: if exactly one input is 1, return 1; otherwise return 0

AND
We have the truth table for AND:
x1 x2 | x1 and x2
 0  0 | 0
 0  1 | 0
 1  0 | 0
 1  1 | 1

AND
And here we have the node for AND: inputs x1 and x2 feed a single unit through weights w1 = ? and w2 = ?, and the unit has threshold T = ?. Assume that all inputs are either 1 or 0, "on" or "off"; when they are activated, they become "on". So what weights do we need to assign to these two links, and what threshold do we need to put in this unit, in order to represent an AND gate?

AND
Solution: w1 = 1, w2 = 1, T = 2. Inputs are either 0 or 1, and the output is 1 only if all inputs are 1.

AND
We can extend this to take more inputs (x1, x2, x3). What are the new weights and thresholds?

AND
With four inputs: w1 = w2 = w3 = w4 = 1 and a threshold T = 4. Inputs are either 0 or 1, and the output is 1 only if all inputs are 1.
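With the perceptron sketch from earlier, the two-input AND unit can be checked against its truth table:

```python
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron([x1, x2], [1, 1], threshold=2))
# only (1, 1) produces 1
```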

OR
The truth table for OR:
x1 x2 | x1 or x2
 0  0 | 0
 0  1 | 1
 1  0 | 1
 1  1 | 1

OR
What about an OR gate? What weights and what threshold would serve to create an OR gate?

OR
Solution: w1 = 1, w2 = 1, T = 1. Inputs are either 0 or 1, and the output is 1 if at least one input is 1.

OR
Similarly, we can expand it to take more inputs.

OR
With four inputs: w1 = w2 = w3 = w4 = 1 and T = 1. Inputs are either 0 or 1, and the output is 1 if at least one input is 1.
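The same check works for OR; only the threshold changes:

```python
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron([x1, x2], [1, 1], threshold=1))
# only (0, 0) produces 0
```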

NOT
The truth table for NOT:
x1 | not x1
 0 | 1
 1 | 0

NOT
Now it gets a little bit trickier: what weight w1 and what threshold T might work to create a NOT gate?

NOT
Solution: w1 = -1, T = 0. The input is either 0 or 1. If the input is 1, the output is 0; if the input is 0, the output is 1.
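NOT is the one place where the >= convention assumed in the perceptron sketch matters: an input of 0 gives a weighted sum of exactly 0, which must still clear the threshold of 0:

```python
print(perceptron([0], [-1], threshold=0))  # 0 >= 0  -> 1
print(perceptron([1], [-1], threshold=0))  # -1 >= 0 -> 0
```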

How about…
Given three inputs x1, x2, x3, what weights (w1, w2, w3) and threshold T would compute x1 and x2, ignoring x3?

Training neural networks
- Learn individual node parameters (e.g. threshold)
- Learn the individual weights between nodes

Positive or negative? NEGATIVE

Positive or negative? NEGATIVE

Positive or negative? POSITIVE

Positive or negative? NEGATIVE

Positive or negative? POSITIVE

Positive or negative? POSITIVE

Positive or negative? NEGATIVE

Positive or negative? POSITIVE

A method to the madness
blue = positive; yellow triangles = positive; all others negative. How did you figure this out (or some of it)?

Training neural networks
- start with some initial weights and thresholds
- show examples repeatedly to the NN
- update the weights/thresholds by comparing the NN output to the actual output
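The slides don't spell out the update rule; below is a minimal sketch of the classic perceptron update, built on the perceptron function from earlier. The learning rate, epoch count, and the name train_perceptron are all assumptions for illustration:

```python
def train_perceptron(examples, n_inputs, rate=0.1, epochs=20):
    """Nudge weights and threshold after each mistake (classic perceptron rule)."""
    weights = [0.0] * n_inputs
    threshold = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            prediction = perceptron(inputs, weights, threshold)
            error = target - prediction  # 0 if correct, +1 or -1 if wrong
            for i in range(n_inputs):
                weights[i] += rate * error * inputs[i]
            threshold -= rate * error  # firing more often means a lower threshold
    return weights, threshold

# Example: learn AND from its truth table.
and_examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold = train_perceptron(and_examples, n_inputs=2)
```

For linearly separable targets like AND, this loop settles on weights and a threshold that classify every example correctly.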