CS 182 Sections 101 - 102 Leon Barrett (http://www2.hi.net/s4/strangebreed.htm)


Announcements
a3 part 1 is due on Thursday (submit as a3-1)

Where we stand
Last Week: learning, backprop
This Week: color, cognitive linguistics

Back-Propagation Algorithm
We define the error term for a single node to be t_i – y_i.
[Figure: a single node with inputs y_j, weights w_ij, and output y_i = f(x_i)]
x_i = ∑_j w_ij y_j
y_i = f(x_i)
t_i: target
Sigmoid: f(x) = 1 / (1 + e^(-x))
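A minimal sketch of these definitions in Python (the function and variable names mirror the slide; nothing here is course-provided code):

```python
import math

def sigmoid(x):
    # the squashing function f from the slide: f(x) = 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))

def node_output(weights, inputs):
    # x_i = sum_j w_ij * y_j, then y_i = f(x_i)
    x_i = sum(w * y for w, y in zip(weights, inputs))
    return sigmoid(x_i)

def error_term(target, output):
    # the slide's error term for a single node: t_i - y_i
    return target - output
```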

Gradient Descent
[Figure: error surface over two weights i_1 and i_2]
global minimum: this is your goal
(it should be 4-D, with 3 weights, but you get the idea)
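As a rough illustration of walking downhill toward that minimum, here is a tiny one-variable gradient descent loop; the example function and step size are made up for illustration only:

```python
def gradient_descent(grad, w0, learning_rate=0.1, steps=100):
    # repeatedly step downhill: w <- w - learning_rate * dE/dw
    w = w0
    for _ in range(steps):
        w -= learning_rate * grad(w)
    return w

# example: E(w) = (w - 3)^2 has its global minimum at w = 3
print(gradient_descent(lambda w: 2 * (w - 3), w0=0.0))  # converges toward 3.0
```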

Equations of Backprop
Weight update shown on the following slides; important equations highlighted in green.
Note the momentum equation:
– dW(t) = change in weight at time t
– dW(t-1) = change in weight at time t-1
– so, using momentum: dW(t) = -learning_rate * -input * delta(i) + momentum * dW(t-1)
– the first part of that comes from the slides below (see the sketch after this list)
– the second part is the momentum term
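A sketch of that momentum update in Python, keeping the slide's sign convention (the two minus signs cancel); the learning_rate and momentum values are placeholders:

```python
def weight_changes(prev_dW, inputs, delta, learning_rate=0.1, momentum=0.9):
    # dW(t) = -learning_rate * -input * delta(i) + momentum * dW(t-1)
    return [-learning_rate * -y * delta + momentum * prev
            for y, prev in zip(inputs, prev_dW)]

dW = [0.0, 0.0]                                          # dW(t-1): no change yet
dW = weight_changes(dW, inputs=[0.3, 0.7], delta=0.05)   # dW(t), fed back in on the next step
print(dW)
```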

The output layer
[Figure: layers k → j → i with weights w_jk and w_ij; output y_i, target t_i; learning rate labels the update]
E = Error = ½ ∑_i (t_i – y_i)²
The derivative of the sigmoid is just f'(x) = f(x)(1 – f(x)).
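A hedged sketch of the output-layer update implied here; the slide's own equation was in the figure, so this is the textbook delta rule rather than a verbatim copy:

```python
def output_delta(t_i, y_i):
    # delta_i = (t_i - y_i) * f'(x_i), using f'(x) = y * (1 - y) for the sigmoid
    return (t_i - y_i) * y_i * (1 - y_i)

def output_weight_change(delta_i, y_j, learning_rate=0.1):
    # dw_ij = learning_rate * delta_i * y_j
    return learning_rate * delta_i * y_j
```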

The hidden layer
[Figure: layers k → j → i with weights w_jk and w_ij; output y_i, target t_i]
E = Error = ½ ∑_i (t_i – y_i)²
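For the hidden layer, the usual form pushes the output deltas back through the weights w_ij; again this is the textbook version, since the slide's equation lived in the figure:

```python
def hidden_delta(y_j, w_ij_to_outputs, output_deltas):
    # delta_j = f'(x_j) * sum_i w_ij * delta_i, with f'(x_j) = y_j * (1 - y_j)
    back_error = sum(w * d for w, d in zip(w_ij_to_outputs, output_deltas))
    return y_j * (1 - y_j) * back_error

def hidden_weight_change(delta_j, y_k, learning_rate=0.1):
    # dw_jk = learning_rate * delta_j * y_k
    return learning_rate * delta_j * y_k
```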

Let’s just do an example
E = Error = ½ ∑_i (t_i – y_i)²
[Figure: one output node y_0 = f(x_0) with inputs i_1, i_2, bias b = 1, and weights w_01, w_02, w_0b]
E = ½ (t_0 – y_0)²
y_0 = 1 / (1 + e^(-0.5)) ≈ 0.6225
E = ½ (0 – 0.6225)² ≈ 0.1937
suppose the learning rate η = …
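A short script reproducing this slide's numbers; x_0 = 0.5 and t_0 = 0 come from the slide, the rest follows by arithmetic (the delta uses the standard sigmoid-derivative form from the earlier slides):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x0 = 0.5                            # net input to the output node (from the slide)
t0 = 0.0                            # target (from the slide)
y0 = sigmoid(x0)                    # 1 / (1 + e^-0.5) ~= 0.6225
E = 0.5 * (t0 - y0) ** 2            # ~= 0.1937
delta0 = (t0 - y0) * y0 * (1 - y0)  # output delta that the weight update would use
print(y0, E, delta0)
```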

Biological learning
1. What is Hebbian learning?
2. Where has it been observed?
3. What is wrong with Hebbian learning as a story of how animals learn?
   (hint: it's the opposite of what's wrong with backprop)

Hebb’s Rule: neurons that fire together wire together.
Long Term Potentiation (LTP) is the biological basis of Hebb’s Rule.
Calcium channels are the key mechanism.
[Figure: LTP and Hebb’s Rule, synapses strengthen or weaken]

Why is Hebb’s rule incomplete?
Here’s a contrived example: should you “punish” all the connections?
[Figure: nodes for taste bud, tastes rotten, eats food, gets sick, drinks water]

During normal low-frequency transmission, glutamate interacts with NMDA, non-NMDA (AMPA), and metabotropic receptors. With high-frequency stimulation, calcium comes in.

Recruitment learning
What is recruitment learning?
Why do we need it in our story?
How does it relate to triangle nodes?

Models of Learning
Hebbian ~ coincidence
Recruitment ~ one trial
Supervised ~ correction (backprop)
Reinforcement ~ delayed reward
Unsupervised ~ similarity