
Introduction to Computational Natural Language Learning
Linguistics 79400 (Under: Topics in Natural Language Processing)
Computer Science 83000 (Under: Topics in Artificial Intelligence)
The Graduate School of the City University of New York, Fall 2001
William Gregory Sakas
Hunter College, Department of Computer Science
Graduate Center, PhD Programs in Computer Science and Linguistics
The City University of New York

Meeting 4: Notes: My Web page was a little messed up. Sorry about that! It should be OK now. There is a link to this course, but we will probably move to the new Blackboard system soon. I got some questions about the details of how ANNs work. Yes, working out the math for a simple perceptron is fair game for a midterm question. A good link to check out: pris.comp.nus.edu.sg/ArtificialNeuralNetworks/perceptrons.html And I will be happy to arrange to meet with people to go over the math (as I will today at the beginning of class).

Now we have to talk about learning. Training simply means the process by which the weights of the ANN are calculated by exposure to training data. Supervised learning: the training data is paired with the supervisor's answers, presented one datum at a time. This is a bit simplified; in the general case it is possible to feed the learner batch data, but in the models we will look at in this course, data is fed one datum at a time.

[Figure: one training step. An input pattern comes from the training file; the ANN's prediction, based on the current weights (which haven't converged yet), disagrees with the answer from the supervisor's file.] Oops! Gotta go back and increase the weights so that the output unit fires.

[Figure: a one-unit network for Boolean OR, with input activations a7 and a8, a bias unit a0 whose activation is fixed at 1, and weights w79 and w89 (plus the bias weight) feeding the output unit a9 = f(net9).] Let's look at how we might train an OR unit. First: set the weights to values picked out of a hat, and set the bias activation to 1. Then: feed in 1,1. What does the network predict? The prediction is fine (f(.3) = 1), so do nothing.

[Figure: the same OR network, now being fed 0,1.] Now: feed in 0,1. What does the network predict? The ANN's prediction = 0 = f(-.3), but the supervisor's answer = 1 (remember we're doing Boolean OR), so now we've got to adjust the weights. But how much to adjust? The modeler picks a value: η = the learning rate (let's pick .1 for this example).
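To make the arithmetic on the last two slides concrete, here is a minimal sketch in Python. The starting weights (-.5 for the bias, .6 and .2 for the two inputs) are hypothetical "picked out of a hat" values chosen so the sums match the slides: net = .3 for input 1,1 and net = -.3 for input 0,1.

```python
def f(net):
    """Threshold activation: the output unit fires (1) when net >= 0, else 0."""
    return 1 if net >= 0 else 0

def predict(inputs, weights, bias_weight):
    """Weighted sum of the inputs plus the bias unit (activation fixed at 1), then threshold."""
    net = bias_weight * 1 + sum(w * a for w, a in zip(weights, inputs))
    return f(net), net

# Hypothetical starting weights "picked out of a hat"
weights = [0.6, 0.2]    # weights from the two input units
bias_weight = -0.5      # weight from the bias unit a0 (activation always 1)

print(predict([1, 1], weights, bias_weight))   # prediction 1, net ≈ .3  -> fine, do nothing
print(predict([0, 1], weights, bias_weight))   # prediction 0, net ≈ -.3 -> wrong, must adjust
```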

The weights are adjusted to minimize the error rate of the ANN. Perceptron Learning Procedure: new w_ij = old w_ij + η × (supervisor's answer − ANN's prediction). So, for example, if the ANN predicts 0 and the supervisor says 1: new w_ij = old w_ij + .1 × (1 − 0), i.e. all weights are increased by .1.
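Continuing the sketch above (it reuses the predict function from there), here is a training loop that applies exactly this update. Note this is the slide's simplified rule, which nudges every weight by the same amount η × error; the standard perceptron rule also multiplies by the incoming activation. With the hypothetical starting weights it converges to Boolean OR after a few passes.

```python
def train_or(weights, bias_weight, eta=0.1, epochs=20):
    """Slide's rule: new w_ij = old w_ij + eta * (supervisor's answer - prediction)."""
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # Boolean OR
    for _ in range(epochs):
        for inputs, answer in data:
            prediction, _ = predict(inputs, weights, bias_weight)  # from the sketch above
            error = answer - prediction                # +1, 0, or -1
            bias_weight += eta * error                 # every weight gets the same nudge
            weights = [w + eta * error for w in weights]
    return weights, bias_weight

weights, bias_weight = train_or([0.6, 0.2], -0.5)
print([predict(x, weights, bias_weight)[0] for x in ([0, 0], [0, 1], [1, 0], [1, 1])])
# [0, 1, 1, 1]  -- the OR truth table
```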

For multilayer ANNs, the error adjustment is backpropagated through the hidden layer. For one unit: suppose hidden units y and x connect to the output unit with weights w3 and w4, and a unit z one layer further back connects to y and x with weights w1 and w2. Then e_y ≈ w3 × (supervisor's answer − ANN's prediction), e_x ≈ w4 × (supervisor's answer − ANN's prediction), and e_z = w1·e_y + w2·e_x. That is the backpropagated adjustment for one unit; of course the error is calculated for ALL units.
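A minimal runnable version of that bookkeeping, with made-up weights and one made-up training example. (The full backpropagation algorithm also scales each error by the derivative of the unit's activation function and by the incoming activation before changing a weight; this sketch keeps only the weight-times-error step shown on the slide.)

```python
# Hypothetical weights and a hypothetical training example, just to make
# the slide's error-assignment arithmetic runnable.
supervisor_answer, prediction = 1, 0
w1, w2, w3, w4 = 0.5, -0.3, 0.8, 0.2

output_error = supervisor_answer - prediction   # error at the output unit
e_y = w3 * output_error                         # blame passed back to hidden unit y
e_x = w4 * output_error                         # blame passed back to hidden unit x
e_z = w1 * e_y + w2 * e_x                       # unit z sums the errors of the units it feeds into

print(e_y, e_x, e_z)   # approximately 0.8, 0.2, 0.34
```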

In summary: 1) Multilayer ANNs are universal function approximators: they can approximate any function a modern computer can represent. 2) They learn without explicitly being told any "rules"; they simply cut up the hypothesis space by inducing boundaries. Importantly, they are "non-symbolic" computational devices. That is, they simply multiply activations by weights.

So, what does all of this have to do with linguistics and language? Some assumptions of "classical" language processing (roughly from Elman (1995)): 1) symbols and rules that operate over symbols (S, VP, IP, etc.); 2) static structures of competence (e.g. parse trees). More or less, the classical viewpoint is language as algebra. ANNs make none of these assumptions, so if an ANN can learn language, then perhaps language as algebra is wrong. We're going to discuss the pros and cons of Elman's viewpoint in some depth next week, but for now, let's go over his variation of the basic feedforward ANN that we've been talking about.

[Figure: a standard feedforward ANN with a layer of input nodes, a layer of hidden nodes, and a layer of output nodes; each input node and each output node is labeled with a single word (boy, dog, run, book, rock, see, eat).] Localist representation in a standard feedforward ANN. Localist = each node represents a single item. If more than one output node fires, then a group of items can be considered activated. The basic idea is to activate a single input node (representing a word) and see which group of output nodes (words) are activated.
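In code, a localist encoding is just a one-hot vector over the vocabulary. A minimal sketch, assuming the small illustrative vocabulary from the figure:

```python
vocab = ["boy", "dog", "run", "book", "rock", "see", "eat"]

def one_hot(word):
    """Localist encoding: exactly one input node is active for each word."""
    return [1 if w == word else 0 for w in vocab]

print(one_hot("dog"))   # [0, 1, 0, 0, 0, 0, 0]
```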

[Figure: Elman's Simple Recurrent Network: the same word-labeled input and output layers as above, plus a context layer that receives a 1-to-1 exact copy of the hidden activations; all other connections are "regular" trainable weights.] Elman's Simple Recurrent Network: 1) activate from input to output as usual (one input word at a time), but copy the hidden activations to the context layer; 2) repeat 1 over and over, but now the hidden layer is activated from the input AND the context (copy) layer, and the output is activated from the hidden layer as before.
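A minimal sketch of one SRN time step in Python. All of the sizes, weights, and the 0.5 starting value for the context units are made-up illustrations, not Elman's actual parameters; the point is just the flow: input plus context feed the hidden layer, the hidden layer feeds the output, and the hidden activations are then copied verbatim into the context layer for the next word.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    """W is a list of rows; returns the matrix-vector product W @ v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def srn_step(word_vec, context, W_ih, W_ch, W_ho):
    """One time step of a simple recurrent network."""
    hidden = [sigmoid(a + b)
              for a, b in zip(matvec(W_ih, word_vec), matvec(W_ch, context))]
    output = [sigmoid(o) for o in matvec(W_ho, hidden)]
    return output, hidden          # 'hidden' becomes the next step's context

# Tiny made-up dimensions: 7 words, 3 hidden units.
random.seed(0)
W_ih = [[random.uniform(-1, 1) for _ in range(7)] for _ in range(3)]  # input -> hidden
W_ch = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]  # context -> hidden
W_ho = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(7)]  # hidden -> output

context = [0.5, 0.5, 0.5]            # arbitrary starting context
dog = [0, 1, 0, 0, 0, 0, 0]          # localist vector for "dog"
output, context = srn_step(dog, context, W_ih, W_ch, W_ho)
print(output)                        # activations over the 7 output (next-word) nodes
```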

From Elman (1990). Templates were set up and lexical items were chosen at random from "reasonable" categories.

Templates for sentence generator:
NOUN-HUM VERB-EAT NOUN-FOOD
NOUN-HUM VERB-PERCEPT NOUN-INANIM
NOUN-HUM VERB-DESTROY NOUN-FRAG
NOUN-HUM VERB-INTRAN
NOUN-HUM VERB-TRAN NOUN-HUM
NOUN-HUM VERB-AGPAT NOUN-INANIM
NOUN-HUM VERB-AGPAT
NOUN-ANIM VERB-EAT NOUN-FOOD
NOUN-ANIM VERB-TRAN NOUN-ANIM
NOUN-ANIM VERB-AGPAT NOUN-INANIM
NOUN-ANIM VERB-AGPAT
NOUN-INANIM VERB-AGPAT
NOUN-AGRESS VERB-DESTROY NOUN-FRAG
NOUN-AGRESS VERB-EAT NOUN-HUM
NOUN-AGRESS VERB-EAT NOUN-ANIM
NOUN-AGRESS VERB-EAT NOUN-FOOD

Categories of lexical items:
NOUN-HUM: man, woman
NOUN-ANIM: cat, mouse
NOUN-INANIM: book, rock
NOUN-AGRESS: dragon, monster
NOUN-FRAG: glass, plate
NOUN-FOOD: cookie, sandwich
VERB-INTRAN: think, sleep
VERB-TRAN: see, chase
VERB-AGPAT: move, break
VERB-PERCEPT: smell, see
VERB-DESTROY: break, smash
VERB-EAT: eat
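A sketch of a template-based sentence generator in this spirit. The function name and corpus size are my own; only a few templates are listed inline, with the rest as in the table above.

```python
import random

categories = {
    "NOUN-HUM": ["man", "woman"],
    "NOUN-ANIM": ["cat", "mouse"],
    "NOUN-INANIM": ["book", "rock"],
    "NOUN-AGRESS": ["dragon", "monster"],
    "NOUN-FRAG": ["glass", "plate"],
    "NOUN-FOOD": ["cookie", "sandwich"],
    "VERB-INTRAN": ["think", "sleep"],
    "VERB-TRAN": ["see", "chase"],
    "VERB-AGPAT": ["move", "break"],
    "VERB-PERCEPT": ["smell", "see"],
    "VERB-DESTROY": ["break", "smash"],
    "VERB-EAT": ["eat"],
}

templates = [
    ["NOUN-HUM", "VERB-EAT", "NOUN-FOOD"],
    ["NOUN-HUM", "VERB-INTRAN"],
    ["NOUN-ANIM", "VERB-AGPAT", "NOUN-INANIM"],
    ["NOUN-AGRESS", "VERB-DESTROY", "NOUN-FRAG"],
    # ...plus the remaining templates from the table above
]

def generate_sentence():
    """Pick a template, then fill each slot with a random word from its category."""
    return [random.choice(categories[slot]) for slot in random.choice(templates)]

corpus = [generate_sentence() for _ in range(10)]
print(corpus[0])   # e.g. ['woman', 'eat', 'sandwich']
```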

Resulting training and supervisor files. Notice that the supervisor's file is simply the training file shifted forward by one word; the target for each input word is the word that follows it.

Training data: woman smash plate cat move man break car boy move girl eat bread dog ...
Supervisor's answers: smash plate cat move man break car boy move girl eat bread dog move ...

Files were 27,354 words long, made up of 10,000 two- and three-word "sentences."
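A sketch of how the two files line up, assuming the corpus is one long concatenated stream of words; the variable names are mine.

```python
stream = ("woman smash plate cat move man break car "
          "boy move girl eat bread dog move").split()

training_data = stream[:-1]       # each word in turn...
supervisor_answers = stream[1:]   # ...is paired with the word that follows it

for word, target in zip(training_data[:5], supervisor_answers[:5]):
    print(word, "->", target)
# woman -> smash
# smash -> plate
# plate -> cat
# cat -> move
# move -> man
```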

Cluster (similarity) analysis. After the SRN was trained, the file was run through the network and the activations at the hidden nodes were recorded. For simplicity assume only 3 hidden nodes (in fact there were 150). Then the hidden activations recorded for each word were averaged together, giving one average vector per word. [Table: made-up hidden-activation vectors (I made up these numbers for the example) recorded while processing sentences such as "boy smash plate", "dragon eat boy", "boy eat cookie", together with the resulting averages for boy, smash, plate, dragon, eat, cookie.]
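A minimal sketch of the averaging step. The 3-number vectors are made up (just as on the slide; the real network had 150 hidden units), and each entry pairs a word token with the hidden activations recorded when the network processed it.

```python
from collections import defaultdict

recorded = [          # (word token, hidden activations when it was processed)
    ("boy",    [0.2, 0.9, 0.1]),
    ("smash",  [0.7, 0.1, 0.8]),
    ("plate",  [0.6, 0.2, 0.3]),
    ("boy",    [0.3, 0.8, 0.2]),
    ("eat",    [0.9, 0.1, 0.7]),
    ("cookie", [0.5, 0.3, 0.2]),
]

sums = defaultdict(lambda: [0.0, 0.0, 0.0])
counts = defaultdict(int)
for word, vec in recorded:
    counts[word] += 1
    sums[word] = [s + v for s, v in zip(sums[word], vec)]

averages = {w: [s / counts[w] for s in sums[w]] for w in sums}
print(averages["boy"])   # approx [0.25, 0.85, 0.15] -- one point per word in hidden-unit space
```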

Each of these average vectors represents a point in 3-D space, and some points are near to each other and form "clusters". Hierarchical clustering:
1. calculate the distance between all possible pairs of points in the space;
2. find the closest two points;
3. make them a single cluster, i.e. treat them as a single point*;
4. recalculate all pairs of points (you will have one less point to deal with the first time you hit this step);
5. go to step 2.
* Note there are many ways to treat clusters as single points. One could make a single point in the middle, one could calculate medians, etc. For Elman's study, I don't think it matters which he used; all would yield similar results, although this is just a guess on my part. A sketch of the procedure is below.
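A minimal sketch of that procedure in Python, assuming Euclidean distance and "centroid" merging (one of the several ways, noted in the footnote, of treating a merged cluster as a single point); the word vectors are the made-up averages from before.

```python
import math

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def centroid(points):
    return [sum(dim) / len(points) for dim in zip(*points)]

def hierarchical_cluster(points):
    """points: dict mapping a label to a vector. Repeatedly merge the two
    closest clusters, recording each merge and the distance it happened at."""
    clusters = {label: [vec] for label, vec in points.items()}
    merges = []
    while len(clusters) > 1:
        labels = list(clusters)
        # steps 1-2: distances between all pairs of cluster "points" (centroids)
        pairs = [(distance(centroid(clusters[a]), centroid(clusters[b])), a, b)
                 for i, a in enumerate(labels) for b in labels[i + 1:]]
        d, a, b = min(pairs)
        # step 3: treat the two closest clusters as a single point from now on
        clusters["(" + a + " " + b + ")"] = clusters.pop(a) + clusters.pop(b)
        merges.append((a, b, d))
        # steps 4-5: loop again with one fewer point
    return merges

words = {                      # made-up averaged hidden vectors (3 hidden units)
    "boy":    [0.25, 0.85, 0.15],
    "dragon": [0.30, 0.75, 0.20],
    "smash":  [0.70, 0.10, 0.80],
    "eat":    [0.90, 0.10, 0.70],
    "plate":  [0.60, 0.20, 0.30],
    "cookie": [0.50, 0.30, 0.20],
}
for a, b, d in hierarchical_cluster(words):
    print("merge", a, "and", b, "at distance", round(d, 2))
```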

[Figure: Elman's hierarchical cluster diagram (dendrogram) of the averaged hidden-unit vectors.] Each of these words represents a point in 150-dimensional space, averaged from all activations generated by the network when processing that word. Each joint (where there is a connection) represents the distance between clusters. So, for example, the distance between animals and humans is approx. 0.85, and the distance between ANIMATES and INANIMATES is approx. 1.5.