Information Processing


Information Processing
CLPS0020: Introduction to Cognitive Science
Professor Dave Sobel, Fall 2016

What is information processing? Recall Skinner’s behaviorism: we act based on responses to stimuli, so cognition is just propensities to behave. Information processing is a reaction to that theory: we process and respond to information, rather than reacting to it like a reflex. Automatic (reflexive) vs. mindful (controlled) processes.

MindBrain::SoftwareHardware Metaphor is of human beings as computers Information = data Input through perception Output through action Stored in memory (ROM) Processed by a CPU Working Memory (RAM) Decision/Inference Processes Other systems (attention/imagery) and methods (cognitive neuroscience) kludged onto this account.

The Magic Number 7 Example: How much information can you hold in short term memory? Whoa! What’s information? Let’s define it as a piece of meaningful knowledge that can be independent of other pieces of knowledge. (Notice: context matters.) So, in a simple experiment like digit span, it’s a number. What’s short term memory? The Modal Model.

The Modal Model Developed by Atkinson & Shiffrin (1968). Long Term Memory (LTM) is a warehouse for information. Short Term Memory (STM) is a “loading dock” through which information goes into LTM (encoding) or comes out of LTM (retrieval). Have you already noticed that this metaphor is about processing information?

The Magic Number 7 Example: How much information can you hold in short term memory?
6 4 8 2 4 9
4 2 9 8 6 1 7
5 3 8 6 7 4 6 2 9
Answer: 7 +/- 2

Miller 1956 Miller observed that in any task in which information was manipulated (usually categories in a discrimination task), human participants could track 7 +/- 2 items: digit span, words in free recall, associations of stimuli with responses, artificial category labels, and many other examples.

MindBrain::SoftwareHardware Metaphor is of human beings as computers Information = data Input through perception Output through action Stored in memory (ROM) Processed by a CPU Working Memory (RAM) Decision/Inference Processes Can we build software that runs on hardware that isn’t our brain? Recall: Turing Machines Contemporary Idea: Computational Models

Back to Turing Machines Remember: symbol processors take input, look up rules, generate output. Modern computers are built on this principle (symbolic AI). It offers a “computational-level” explanation of the mind. Example: How do you learn the phonetics of plurals (Berko, 1958)? A wug, two ___? A dax, two ___? A blicket, two ___? Rules: S1 → /s/, S2 → /z/, S3 → /es/.
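A minimal sketch of what such a symbolic rule lookup might look like in code. The letter sets standing in for sound classes are crude, spelling-based assumptions (not Berko’s actual phonological rules), and the function name is hypothetical:

```python
# Crude spelling-based stand-ins for sound classes (assumptions for illustration).
SIBILANTS = set("szxjc")   # endings that take the /es/ allomorph
VOICELESS = set("ptkf")    # voiceless endings that take /s/

def plural_suffix(word):
    """Return the plural allomorph a rule-based (symbolic) system would pick."""
    last = word[-1].lower()
    if last in SIBILANTS:
        return "/es/"   # "dax" -> "daxes"
    if last in VOICELESS:
        return "/s/"    # "blicket" -> "blickets"
    return "/z/"        # "wug" -> voiced ending, so /z/

print(plural_suffix("wug"))      # /z/
print(plural_suffix("blicket"))  # /s/
print(plural_suffix("dax"))      # /es/
```

The point is the shape of the computation: look at the input symbol, match it against a rule, emit the output symbol.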

Why are Turing Machines appealing? Turing Machines manipulate symbols and have clear, analyzable rules. Serial processing: one thing after another. Discrete representation: we know what is represented (i.e., semantics), and those representations combine in only particular ways (i.e., syntax). Syntax can relate to semantics in some cases, but can also be independent: “Mary kissed John” vs. “John kissed Mary” mean different things; “Colorless green ideas sleep furiously” has proper syntax, but no semantics.

What if this metaphor is completely wrong? Where do the mappings between symbol and meaning come from? Perhaps they do not exist. Alternative view: cognition is an emergent process of many dumb processors working together. This view uses the brain as a metaphor: many neurons working continuously, in parallel, and redundantly. There is no syntax or semantics (they’re illusions based on the algorithms settling into a stable state).

Initial Idea: Perceptrons (Rosenblatt, 1958; famously critiqued by Minsky & Papert, 1969) Perceptrons were a model of categorization. How do you get a child to learn what a chair is? Well, you could show it stuff. Some of the stuff is chairs, some of the stuff is not chairs. You label the chairs “chair” and you label the non-chairs “not a chair.” [Diagram: Input 1 and Input 2, two ways in which the input is represented (could be n), feed a unit that returns 1 if a chair, 0 if not.]

How does it work? Each node has an activation (a) that propagates through the system according to its connections. Simplest version: a = 1 or 0. Each connection has a weight (w). Activation is combined with the weight (e.g., a·w, with w in [0, 1]). The output node has a threshold level (t): if the sum of the activations given the connection weights is above threshold, it fires (i.e., returns 1); otherwise it does not (i.e., returns 0). [Diagram: inputs with activations a1, a2 feed the “Chair?” node through weights w1, w2; the threshold t determines the 1-or-0 output.]
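The forward pass just described can be sketched in a few lines; the particular weights and threshold below are illustrative values, not taken from the slides:

```python
def perceptron_output(activations, weights, threshold):
    """Fire (return 1) iff the weighted sum of input activations exceeds the threshold."""
    total = sum(a * w for a, w in zip(activations, weights))
    return 1 if total > threshold else 0

# Two binary inputs; with these (assumed) weights and threshold the unit
# fires only when both inputs are active.
print(perceptron_output([1, 1], [0.6, 0.6], 1.0))  # 1 (0.6 + 0.6 = 1.2 > 1.0)
print(perceptron_output([1, 0], [0.6, 0.6], 1.0))  # 0 (0.6 is below threshold)
```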

How does it learn? Standard training: error correction. As in the chicken-sexing example, the network gets feedback and changes its weights to try to minimize the error (the delta rule, a precursor of backpropagation). If a solution exists, this algorithm is guaranteed to find it (the perceptron convergence theorem).
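A self-contained sketch of error-correction training, shown here on logical OR, which is linearly separable, so convergence is guaranteed. The learning rate and epoch count are arbitrary choices:

```python
def train_perceptron(samples, lr=0.1, epochs=50):
    """Error-correction (delta-rule) learning: nudge each weight in
    proportion to the error on every example."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0  # bias plays the role of a (negative) threshold
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out                      # +1, 0, or -1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical OR from labeled examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x, _ in data]
print(preds)  # [0, 1, 1, 1]
```

Notice there is no programmed rule for OR anywhere: the behavior emerges entirely from weight adjustments driven by labeled examples.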

The Trouble with Perceptrons Consider blickets, which are white or square, but not white and square. The two inputs represent shape (square or not) and color (white or not). There is no combination of weights and threshold such that the model can learn that white squares aren’t blickets while lone white things and lone squares are. Perceptrons can’t model XOR, or any category that is not linearly separable.
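One way to see this concretely: a brute-force search over a grid of candidate weights and thresholds finds no single threshold unit that computes XOR. The grid range and step size are arbitrary assumptions, but since XOR is not linearly separable at all, no choice of grid would succeed:

```python
import itertools

def classifies_xor(w1, w2, t):
    """True iff a single threshold unit with these weights gets all four XOR cases right."""
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    return all((w1 * x1 + w2 * x2 > t) == bool(y) for (x1, x2), y in cases)

grid = [i / 10 for i in range(-20, 21)]  # weights and thresholds in [-2, 2], step 0.1
found = any(classifies_xor(w1, w2, t)
            for w1, w2, t in itertools.product(grid, repeat=3))
print(found)  # False: no single-layer solution exists
```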

Historically Historically, people put this idea on a shelf for about 15 years. Perceptrons were considered nonstarters, and people focused on symbolic models, with moderate but limited success. Until Rumelhart & McClelland (1986): multi-layer perceptrons (neural networks).

Example [Diagram: a two-layer network for “Blicket?”: the White? and Square? inputs feed hidden units, and the hidden units feed the output through a mix of positive and negative weights (w).]
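A hand-wired sketch of how a hidden layer solves the blicket (XOR) problem. All weights and thresholds here are illustrative values chosen by hand, not learned:

```python
def step(x, threshold):
    """Threshold activation: fire (1) iff input exceeds the threshold."""
    return 1 if x > threshold else 0

def xor_net(white, square):
    """Two-layer network: 'white or square, but not white and square'."""
    h1 = step(1.0 * white + 1.0 * square, 0.5)  # fires for at least one feature
    h2 = step(1.0 * white + 1.0 * square, 1.5)  # fires only when both features present
    # Positive weight from h1, negative weight from h2: h2 vetoes "both".
    return step(1.0 * h1 - 2.0 * h2, 0.5)

for white in (0, 1):
    for square in (0, 1):
        print(white, square, "->", xor_net(white, square))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

The hidden units re-represent the input (roughly "at least one" and "both"), and in that new space the category becomes linearly separable, which is exactly what the single-layer perceptron could not do.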

And here’s the funny thing… Neural networks provide good models of memory, attention, categorization, and language processing (particularly things like pluralization and past tense): pretty much all of human cognition. No symbol manipulation or processing. Brain-like in architecture: lots of dumb processes talking to each other, with graceful degradation. So why do we hold on to the mind-as-computer metaphor? There are limitations on what networks can learn and do. The explanation is not satisfactory (perhaps it is merely algorithmic?). And how do you build a neural network? On a computer, which is a symbolic processor. Do we really know what the brain is doing?