Applications for Aphasiology


Neural Networks: Applications for Aphasiology. W. Katz, COMD 6305, UTD (with materials from the UC San Diego PDP group, the Universities of Maastricht and Amsterdam, and the University of Bath – Ian Walker).

Connectionism = Parallel Distributed Processing (PDP): a style of modeling based on networks of interconnected simple processing devices.

Basic components include: a set of processing units, modifiable connections between the units, and a learning procedure for adjusting those connections.

Associative learning – mind (James) and brain (Hebb). "When two elementary brain processes have been active together or in immediate succession, one of them, on reoccurring, tends to propagate its excitement into the other" (William James, 1890). "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased" (Donald Hebb, 1949).

Localist versus distributed? Localist models: one unit represents each concept, e.g., a person's name or the meaning of the word "cat"; units are usually grouped, with inhibition within groups and excitation between groups. Distributed models: each concept is represented by more than one unit – the meaning of the word "cat" is represented by a number of units ("mammal", "purrs", "has fur", etc.). Distributed models are better at coping with incomplete data and are probably closer to how our minds work.
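To make the contrast concrete, here is a minimal Python sketch; the feature names and vectors are invented for illustration, not taken from the slides:

```python
# Hypothetical illustration: localist vs. distributed representations.

# Localist: one dedicated unit per concept.
localist = {
    "cat": [1, 0],   # unit 0 means "cat"
    "dog": [0, 1],   # unit 1 means "dog"
}

# Distributed: each concept is a pattern over shared feature units.
features = ["mammal", "purrs", "has_fur", "barks"]
distributed = {
    "cat": [1, 1, 1, 0],
    "dog": [1, 0, 1, 1],
}

# Overlapping patterns let incomplete input still point to the right concept.
def best_match(pattern, table):
    return max(table, key=lambda c: sum(p * u for p, u in zip(pattern, table[c])))

print(best_match([1, 1, 0, 0], distributed))  # partial "cat" evidence -> 'cat'
```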

Some history… A major approach in cognitive science First wave in 1950s and 1960s Culminated with the publication of Perceptrons by Minsky & Papert (1969) Second wave began in 1980s, still going strong! McClelland & Rumelhart, Hinton and others

Logical problems to be solved. Separable – solvable via a general learning algorithm: Exists, Doesn't exist, Not, And, Or (inclusive), If-then. Non-separable – requires back-propagation: Symmetry, Parity (e.g., odd versus even), Exclusive OR (XOR).

OR – true if A is true OR B is true, or if both are true ("both"). XOR – true whenever the inputs differ.
OR truth table:
A B | Output
0 0 | F
0 1 | T
1 0 | T
1 1 | T
XOR truth table:
A B | Output
0 0 | F
0 1 | T
1 0 | T
1 1 | F

Two-Layer Nets are OK for linearly separable problems. * See next slide…

Multi-Layer Nets… required for non-separable problems (e.g., XOR).

A basic connectionist unit. 1. Think of a unit ("neurone") as a light bulb with a light sensor attached. 2. If some light falls on the sensor, the bulb lights up by the same amount: an input of 10 gives an output of 10, an input of 15 an output of 15.

A simple network: a decision maker. Problem: "you must only leave when both Sarah and Steven have arrived." The network has two inputs – a Sarah detector (0 or 1) and a Steven detector (0 or 1) – each connected with weight 1 to a unit whose resting level is -1; the rule is "go when the sum > 0".
Nobody present: (0×1) + (0×1) + (-1) = -1 → STAY
Only Sarah present: (1×1) + (0×1) + (-1) = 0 → STAY
Only Steven present: (0×1) + (1×1) + (-1) = 0 → STAY
Both people present: (1×1) + (1×1) + (-1) = 1 → GO
In technical terms, this is a logical AND gate.
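A minimal sketch of this unit in Python (the function and variable names are my own, not from the slides):

```python
def and_gate(sarah_present, steven_present):
    """Threshold unit from the slide: two inputs with weight 1, resting level -1."""
    net = 1 * sarah_present + 1 * steven_present + (-1)  # weighted sum plus bias
    return "GO" if net > 0 else "STAY"

# Reproduces the four cases worked through above.
for sarah in (0, 1):
    for steven in (0, 1):
        print(sarah, steven, and_gate(sarah, steven))
```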

The Perceptron – Frank Rosenblatt (1958, 1962). Two layers; binary nodes that take values 0 or 1; continuous weights, initially chosen randomly.

Simple example: with weights 0.4 and -0.1 and inputs 0 and 1, the net input = 0.4 × 0 + (-0.1) × 1 = -0.1.
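As a sketch of how such a perceptron can be trained, the following uses the classic error-correction update rule; the rule, threshold, learning rate, and training data here are standard illustrations rather than values from the slides:

```python
import random

def step(x):
    # Threshold activation: fire (1) when the net input is above 0.
    return 1 if x > 0 else 0

def train_perceptron(data, n_inputs, lr=0.1, epochs=50):
    """data: list of (inputs, target) pairs with binary targets."""
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]  # random initial weights
    b = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, target in data:
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            error = target - y
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]  # error-correction update
            b += lr * error
    return w, b

# Learns OR (linearly separable); a perceptron cannot learn XOR.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data, n_inputs=2)
for x, target in or_data:
    print(x, "->", step(sum(wi * xi for wi, xi in zip(w, x)) + b), "(target", target, ")")
```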

The Perceptron was a big hit… It spawned the first wave of 'connectionism' and great interest and optimism about the future of neural networks. The first neural network hardware was built in the late 1950s and early 1960s.

Perceptron – Limitations. Only binary input-output values (later fixed via the delta rule…). Only two layers – this prevented solving nonlinearly separable problems, e.g., XOR.

Exclusive OR (XOR) truth table:
In (A B) | Out
0 1 | 1
1 0 | 1
1 1 | 0
0 0 | 0

An extra layer is necessary to represent XOR. No solid training procedure existed in 1969 to accomplish this. Thus began the search for the third (or "hidden") layer.

Error-backpropagation – Rumelhart, Hinton, and Williams (1986). Meet the hidden layer.

The backprop trick. To find the error value δh for a given node h in a hidden layer, take the weighted sum of the errors of all the nodes that h connects to (the "to-nodes" of h, numbered 1 … n), passing each error δi back along its connection weight wi. Backpropagation of errors:
δh = w1·δ1 + w2·δ2 + w3·δ3 + … + wn·δn
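A sketch of that step in Python (the names are mine; note that the full backprop rule also multiplies this weighted sum by the derivative of node h's activation, which the "differentiable activation rule" on the next slide supplies):

```python
def hidden_delta(weights_to_next, deltas_next, activation_derivative=1.0):
    """Error for a hidden node h: the weighted sum of the errors of the nodes it feeds,
    optionally scaled by the derivative of h's activation (full backprop rule)."""
    weighted_sum = sum(w * d for w, d in zip(weights_to_next, deltas_next))
    return weighted_sum * activation_derivative

# Example: node h feeds three output nodes with weights 0.2, -0.5, 0.1.
print(hidden_delta([0.2, -0.5, 0.1], [0.05, 0.30, -0.10]))
```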

Characteristics of back-propagation: works for any number of layers; uses continuous nodes; must have a differentiable activation rule – typically the logistic function, an S-shaped curve between 0 and 1; initial weights are random; performs gradient descent in error space.
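Putting the pieces together, here is a minimal illustrative sketch – the network size, learning rate, and epoch count are my own choices, not values from the slides – of a 2-input, 2-hidden-unit, 1-output network with logistic activations trained by backpropagation to solve XOR:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))  # logistic activation, S-shaped between 0 and 1

# XOR training data: ((input1, input2), target)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Weights: 2 inputs -> 2 hidden units -> 1 output, each unit with a bias term.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # [w1, w2, bias]
w_out = [random.uniform(-1, 1) for _ in range(3)]                          # [v1, v2, bias]
lr = 0.5

for epoch in range(10000):
    for (x1, x2), target in data:
        # Forward pass.
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_hidden]
        o = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
        # Backward pass: output error, then hidden errors via the weighted-sum trick.
        delta_o = (target - o) * o * (1 - o)
        delta_h = [delta_o * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        # Gradient-descent weight updates.
        w_out = [w_out[0] + lr * delta_o * h[0],
                 w_out[1] + lr * delta_o * h[1],
                 w_out[2] + lr * delta_o]
        for i in range(2):
            w_hidden[i] = [w_hidden[i][0] + lr * delta_h[i] * x1,
                           w_hidden[i][1] + lr * delta_h[i] * x2,
                           w_hidden[i][2] + lr * delta_h[i]]

# A 2-2-1 net can occasionally stall in a local minimum; a different seed fixes that.
for (x1, x2), target in data:
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_hidden]
    o = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    print((x1, x2), "->", round(o, 2), "(target", target, ")")
```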

NetTalk: Backprop’s ‘killer-app’ AI project by Sejnowski and Rosenberg (1986) Text as input, phonetic transcription for comparison Learns to pronounce text based on associations between letters and sounds Trained, not programmed http://www.youtube.com/watch?v=gakJlr3GecE

Backprop: Pro / Con.
Con: learning is slow. New learning will rapidly overwrite old representations unless these are repeated together with the new patterns; this makes it hard to keep networks up to date with new information, and also makes backprop very implausible as a psychological model of human memory.
Pro: easy to use – few parameters to set and the algorithm is easy to implement; can be applied to a wide range of data; very popular.

Benefits of connectionism? Models are analogous to how the brain works. They can be used to test theories about the mind's organization concerning generalization, coping with incomplete/noisy data, and graceful degradation. One can look for emergent properties. Old-style information-processing box-and-arrow diagrams can be tested more explicitly.

A bright future (a neural modeler, Dijon, France): "Gradually, neural network models and the computers they run on will become good enough to give us a deep understanding of neurophysiological processes and their behavioral counterparts and to make precise predictions about them." "They will be used to study epilepsy, Alzheimer's disease, and the effects of various kinds of stroke, without requiring the presence of human patients." "They will be, in short, like the models used in all of the other hard sciences. Neural modeling and neurobiology will then have achieved a truly symbiotic relationship."

?

The debates. Fodor & Pylyshyn (1988) argue that such models hold no real information about the relationships between nodes – i.e., a link between two nodes does not capture the real essence of the relationship between the two concepts, and all links are the same… But how do we know that links in the mind/brain are any different? Others argue the models are too low-level: how do we account for things like "concepts", "planning", etc.? Biological plausibility? What about consciousness?

Summary. Connectionism is a mathematical modelling technique which uses simple interconnected units to replicate complex behaviours. Models are exposed to various situations and from this learn general rules. The distinction between their processing of information and their storage of it is very blurred. Connectionist models can simulate many aspects of human cognition, most notably generalization, processing of incomplete information, and graceful degradation. They have provided a new metaphor for psychologists to think about the mind. Overall value – still controversial.