CS 182 Sections 101 - 102 Eva Mok Feb 11, 2004 (http://www2.hi.net/s4/strangebreed.htm) bad puns alert!



Announcements
– a3 part 1 is due tonight (submit as a3-1)
– The second tester file is up, so please start part 2.
– The quiz is graded (get it after class).

Where we stand
Last Week
– Backprop
This Week
– Recruitment learning
– Color
Coming up
– Imaging techniques (e.g. fMRI)

The Big (and complicated) Picture
[Overview diagram: the course's levels of analysis, from Biology and Computational Neurobiology up through Structured Connectionism and Computation to Cognition and Language, annotated with the topics and models covered (neural development, triangle nodes, neural nets & learning, spatial relations, motor control, metaphor, SHRUTI, grammar, abstraction, the Regier, Bailey, Narayanan, and Chang models, the visual system, psycholinguistics experiments) and with markers for the midterm, quiz, and finals.]

Quiz
1. What is a localist representation? What is a distributed representation? Why are they both bad?
2. What is coarse-fine encoding? Where is it used in our brain?
3. What can Back-Propagation do that Hebb’s Rule can’t?
4. Derive the Back-Propagation Algorithm.
5. What (intuitively) does the learning rate do? How about the momentum term?

Distributed vs Localist Rep’n
Distributed:          Localist:
John    1 1 0 0       John    1 0 0 0
Paul    0 1 1 0       Paul    0 1 0 0
George  0 0 1 1       George  0 0 1 0
Ringo   1 0 0 1       Ringo   0 0 0 1
What are the drawbacks of each representation?

Distributed vs Localist Rep’n
Distributed (John 1100, Paul 0110, George 0011, Ringo 1001):
– What happens if you want to represent a group?
– How many persons can you represent with n bits? 2^n
Localist (John 1000, Paul 0100, George 0010, Ringo 0001):
– What happens if one neuron dies?
– How many persons can you represent with n bits? n
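
To make the capacity trade-off concrete, here is a minimal Python sketch (not from the slides; names like localist_codes are ours) contrasting one-hot, localist codes with dense binary, distributed codes over the same n units:

```python
import itertools

n = 4  # number of units ("bits")

# Localist: one unit per entity -> at most n entities, but each unit is interpretable
localist_codes = [tuple(1 if i == j else 0 for i in range(n)) for j in range(n)]

# Distributed: any binary pattern is a code -> up to 2^n entities,
# but patterns overlap, so a single dead unit corrupts many codes
distributed_codes = list(itertools.product([0, 1], repeat=n))

print("localist capacity:   ", len(localist_codes))      # n = 4
print("distributed capacity:", len(distributed_codes))   # 2^n = 16

# Drawback of localist: kill unit 0 and "John" (1,0,0,0) becomes all zeros
def kill_unit_0(code):
    return tuple(0 if i == 0 else b for i, b in enumerate(code))
print("John after unit 0 dies:", kill_unit_0(localist_codes[0]))

# Drawback of distributed: superposing two patterns (a "group") can collide
# with other codes, so group membership becomes ambiguous
john, paul = (1, 1, 0, 0), (0, 1, 1, 0)
group = tuple(max(a, b) for a, b in zip(john, paul))
print("John+Paul superposed:", group)  # (1,1,1,0) -- no longer a unique code
```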

Visual System
1000 x 1000 visual map
For each location, encode:
– orientation
– direction of motion
– speed
– size
– color
– depth
Blows up combinatorially!
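
A rough back-of-the-envelope calculation makes the blow-up visible. The value counts per feature below are illustrative assumptions, not numbers from the slide:

```python
# Hypothetical value counts per feature -- illustrative only
features = {
    "orientation": 18,
    "direction of motion": 8,
    "speed": 4,
    "size": 4,
    "color": 8,
    "depth": 4,
}

locations = 1000 * 1000

conjunctive_units = locations
for n_values in features.values():
    conjunctive_units *= n_values   # one "fine" unit per full combination of values

per_feature_units = locations * sum(features.values())  # one small pool per feature

print(f"conjunctive (fine) units: {conjunctive_units:.2e}")   # ~7e10
print(f"per-feature units:        {per_feature_units:.2e}")   # ~4.6e7
```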

Coarse Coding
The info you can encode with one fine-resolution unit = the info you can encode with a few coarse-resolution units.
Now, as long as we need fewer coarse units in total, we’re good.
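
As a sketch of the idea (the receptive-field sizes and unit counts are our own choices), a handful of wide, overlapping coarse units over a 100-location “retina” localizes a stimulus nearly as well as 100 fine units would:

```python
import numpy as np

positions = np.arange(100)          # a 1-D "retina" of 100 fine locations

# Coarse units: 12 overlapping receptive fields, each 25 locations wide
centers = np.linspace(0, 99, 12)
width = 25

def coarse_code(x):
    """Which coarse units fire for a stimulus at position x."""
    return np.abs(centers - x) <= width / 2

# The intersection of the active units' fields pins down the position
# far more precisely than any single wide field.
x_true = 42
active = coarse_code(x_true)
consistent = [p for p in positions if np.array_equal(coarse_code(p), active)]
print("active coarse units:", np.flatnonzero(active))
print("positions consistent with that pattern:", consistent)
```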

Coarse-Fine Coding
But we can run into ghost “images.”
[Figure: two feature dimensions, Feature 1 (e.g. Orientation) and Feature 2 (e.g. Direction of Motion). One cell population is coarse in F2 and fine in F1; the other is coarse in F1 and fine in F2. With two stimuli X and Y present, the X-Orientation/Y-Dir and Y-Orientation/X-Dir combinations also appear active, marked G (ghosts).]
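
A minimal sketch of how the ghosts arise, with hypothetical feature values: each population reports only the values of the feature it codes finely, so the binding between features is lost and spurious conjunctions survive:

```python
from itertools import product

# Two real stimuli, each an (orientation, direction) pair -- hypothetical values
stimuli = {("vertical", "leftward"), ("horizontal", "rightward")}   # X and Y

# Population A is fine in orientation but coarse in direction:
# it only reports which orientations are present.
orientations_seen = {o for o, d in stimuli}

# Population B is fine in direction but coarse in orientation:
# it only reports which directions are present.
directions_seen = {d for o, d in stimuli}

# Downstream, every consistent pairing looks equally plausible...
candidates = set(product(orientations_seen, directions_seen))
ghosts = candidates - stimuli
print("candidate conjunctions:", candidates)
print("ghost 'images':        ", ghosts)
# ghosts = {('vertical', 'rightward'), ('horizontal', 'leftward')}
```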

Back-Propagation Algorithm
We define the error term for a single node to be t_i − y_i.
Net input: x_i = ∑_j w_ij y_j
Output: y_i = f(x_i),  t_i : target
Sigmoid: f(x) = 1 / (1 + e^(−x))
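
A minimal Python version of these definitions (function names and the example numbers are ours):

```python
import math

def sigmoid(x):
    """f(x) = 1 / (1 + e^(-x))"""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(y):
    """Derivative expressed via the output: f'(x) = y * (1 - y)."""
    return y * (1.0 - y)

def net_input(weights, inputs):
    """x_i = sum_j w_ij * y_j"""
    return sum(w * y for w, y in zip(weights, inputs))

y = sigmoid(net_input([0.5, 0.5], [1.0, 0.0]))   # example numbers are ours
print(y, sigmoid_deriv(y))
```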

Gradient Descent
[Figure: the error surface plotted over two axes (labeled i1 and i2), with the global minimum marked: this is your goal. It should really be 4-D (3 weights), but you get the idea.]
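
A minimal gradient-descent loop on a toy bowl-shaped error surface; the surface E(w1, w2) = w1² + 2·w2² and the learning rate are illustrative choices, not from the slides:

```python
# Gradient descent on a toy error surface E(w1, w2) = w1^2 + 2*w2^2.

def grad(w1, w2):
    return 2 * w1, 4 * w2            # partial derivatives of E

w1, w2 = 3.0, -2.0                   # arbitrary starting point
eta = 0.1                            # learning rate

for step in range(50):
    g1, g2 = grad(w1, w2)
    w1 -= eta * g1                   # move downhill along the gradient
    w2 -= eta * g2

print(w1, w2)                        # approaches the global minimum at (0, 0)
```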

The output layer
Network: unit k → hidden unit j → output unit i, with weights w_jk and w_ij.
E = Error = ½ ∑_i (t_i − y_i)²,  t_i : target
The derivative of the sigmoid is just f′(x_i) = y_i (1 − y_i).
For an output weight: ∂E/∂w_ij = −(t_i − y_i) · y_i (1 − y_i) · y_j,
so the update is Δw_ij = η (t_i − y_i) y_i (1 − y_i) y_j, where η is the learning rate.

The hidden layer
Same network (k → j → i), same error E = ½ ∑_i (t_i − y_i)²,  t_i : target.
For a hidden weight, the error signal is summed back over the output units:
∂E/∂w_jk = −[∑_i (t_i − y_i) y_i (1 − y_i) w_ij] · y_j (1 − y_j) · y_k,
so the update is Δw_jk = η δ_j y_k, where δ_j = y_j (1 − y_j) ∑_i δ_i w_ij and δ_i = (t_i − y_i) y_i (1 − y_i).
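
Putting the output-layer and hidden-layer rules together, here is a sketch of one backprop update for a tiny 2-2-1 network. The architecture, initial weights, and learning rate are our illustrative choices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 network; each weight row includes a bias weight as its last entry.
# All specific numbers here are illustrative, not taken from the slides.
w_hidden = [[0.1, 0.2, 0.0],    # weights into hidden unit j=0 (i1, i2, bias)
            [0.3, -0.1, 0.0]]   # weights into hidden unit j=1
w_output = [0.2, -0.3, 0.0]     # weights from the two hidden units + bias into y0
eta = 0.5

def backprop_step(i1, i2, target):
    global w_hidden, w_output
    inputs = [i1, i2, 1.0]                                   # bias input = 1
    y_hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
                for row in w_hidden]
    h = y_hidden + [1.0]                                     # hidden acts + bias
    y0 = sigmoid(sum(w * x for w, x in zip(w_output, h)))

    # Output layer: delta_i = (t - y) * y * (1 - y)
    delta_out = (target - y0) * y0 * (1.0 - y0)
    # Hidden layer: delta_j = y_j * (1 - y_j) * sum_i delta_i * w_ij
    delta_hid = [yj * (1.0 - yj) * delta_out * w_output[j]
                 for j, yj in enumerate(y_hidden)]

    # Weight updates: delta_w = eta * delta * (presynaptic activity)
    w_output = [w + eta * delta_out * x for w, x in zip(w_output, h)]
    w_hidden = [[w + eta * dj * x for w, x in zip(row, inputs)]
                for row, dj in zip(w_hidden, delta_hid)]
    return 0.5 * (target - y0) ** 2                          # E = ½ (t - y)²

for _ in range(1000):
    err = backprop_step(1.0, 0.0, 0.0)
print("error after training:", err)
```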

Let’s just do an example
E = Error = ½ ∑_i (t_i − y_i)²
Network: a single sigmoid output unit y_0 = f(x_0), with inputs i_1 and i_2, a bias input b = 1, and weights w_01, w_02, w_0b.
For one output unit, E = ½ (t_0 − y_0)².
With the slide’s inputs and weights, the net input is x_0 = 0.5, so y_0 = 1/(1 + e^(−0.5)) ≈ 0.6225.
With target t_0 = 0:  E = ½ (0 − 0.6225)² ≈ 0.1937
Learning rate: suppose η = …
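
A sketch of the forward pass and error above in Python. The individual input and weight values are assumptions chosen so that the net input comes out to 0.5, matching the slide’s 1/(1 + e^(−0.5)); the learning rate is also an assumption:

```python
import math

# Forward pass and error for the single-unit example.
# The input/weight values below are assumptions chosen so the net input is 0.5.
i1, i2, bias = 1.0, 0.0, 1.0
w01, w02, w0b = 0.25, 0.5, 0.25
t0 = 0.0                                  # target from the slide

x0 = w01 * i1 + w02 * i2 + w0b * bias     # net input: 0.25*1 + 0.5*0 + 0.25*1 = 0.5
y0 = 1.0 / (1.0 + math.exp(-x0))          # sigmoid -> ~0.6225
E = 0.5 * (t0 - y0) ** 2                  # ~0.1937
print(x0, y0, E)

# One backprop step on each weight (the learning rate eta is also an assumption):
eta = 0.5
delta0 = (t0 - y0) * y0 * (1.0 - y0)
w01 += eta * delta0 * i1
w02 += eta * delta0 * i2
w0b += eta * delta0 * bias
print(w01, w02, w0b)
```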