Explanation and Simulation in Cognitive Science
–Simulation and computational modeling
–Symbolic models
–Connectionist models
–Comparing symbolism and connectionism
–Hybrid architectures
–Cognitive architectures

Simulation and Computational Modeling
With detailed and explicit cognitive theories, we can implement the theory as a computational model, and then execute the model to:
–Simulate the cognitive capacity
–Derive predictions from the theory
The predictions can then be compared to empirical data.

Questions
–What kinds of theories are amenable to simulation?
–What techniques work for simulation?
–Is simulating the mind different from simulating the weather?

The Mind & the Weather
The mind may just be a complex dynamic system, but it isn’t amenable to generic simulation techniques:
–The relation between theory and implementation is indirect: theories tend to be rather abstract
–The relation between simulation results and empirical data is indirect: simulations tend to be incomplete
The need to simulate helps make theories more concrete, but “improvement” of the simulation must be theory-driven, not just an attempt to capture the data.

Symbolic Models
High-level functions (e.g., problem solving, reasoning, language) appear to involve explicit symbol manipulation.
Example: chess and shopping seem to involve representation of aspects of the world and systematic manipulation of those representations.

Central Assumptions
–Mental representations exist
–Representations are structured
–Representations are semantically interpretable

What’s in a representation?
A representation must consist of symbols
Symbols must have parts
Parts must have independent meanings
Those meanings must contribute to the meanings of the symbols which contain them
–e.g., “34” contains “3” and “4”, parts which have independent meanings
–the meaning of “34” is a function of the meaning of “3” in the tens position and “4” in the units position
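As a tiny worked illustration of this point, the sketch below computes a numeral’s meaning from the meanings of its parts plus their positions (the function name is mine, invented for illustration only):

```python
# Tiny illustration of the compositionality point above: the meaning of
# a numeral string is built from the meanings of its digit parts and
# the positions they occupy, exactly as for "34" = 3 tens + 4 units.

def numeral_meaning(symbol):
    value = 0
    for digit in symbol:                 # each part has an independent meaning
        value = value * 10 + int(digit)  # its position determines its contribution
    return value

print(numeral_meaning("34"))  # 34
```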

In favor of structured mental representations (Fodor & Pylyshyn, 1988)
Productivity
–It is through structuring that thought is productive (a finite number of elements, an infinite number of possible combinations)
Systematicity
–If you can think “John loves Mary”, you can think “Mary loves John”
Compositionality
–The meaning of “John loves Mary” is a function of its parts, and their modes of combination
Rationality
–If you know “A and B” is true, then you can infer that A is true

What do you do with them?
Suppose we accept that there are symbolic representations. How can they be manipulated? …by a computing machine.
Any such approach has three components:
–A representational system
–A processing strategy
–A set of predefined machine operations

Automata Theory
Identifies a family of increasingly powerful computing machines:
–Finite state automata
–Push-down automata
–Turing machines

Automata, in brief (Figure 2.2 in Green et al., Chapter 2)
This FSA takes as input a sequence of on and off messages, and accepts any sequence ending with an “on”.
A PDA adds a stack: an infinite-capacity, limited-access memory, so that what the machine does depends on the input, the current state, plus the memory.
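A minimal sketch of such an FSA in Python. Figure 2.2 itself is not reproduced here, so the two states and transitions below are one plausible reading of the description, not a copy of the figure:

```python
# Two states: "off" (start, rejecting) and "on" (accepting). Each input
# message simply moves the machine to the matching state, so a sequence
# is accepted exactly when its final message is "on".

TRANSITIONS = {
    ("off", "on"): "on",
    ("off", "off"): "off",
    ("on", "on"): "on",
    ("on", "off"): "off",
}

def accepts(messages):
    state = "off"  # start state
    for msg in messages:
        state = TRANSITIONS[(state, msg)]
    return state == "on"  # accept iff we end in the "on" state

print(accepts(["off", "on", "on"]))  # True: ends with "on"
print(accepts(["on", "off"]))        # False: ends with "off"
```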

A Turing machine changes this memory to allow any location to be accessed at any time. The state transition function specifies read/write instructions, as well as which state to move to next.
Any effective procedure can be implemented on an appropriately programmed Turing machine, and Universal Turing machines can emulate any Turing machine, via a description on the tape of the machine and its inputs.
Hence, philosophical disputes:
–Is the brain Turing powerful?
–Does machine design matter or not?
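To make the state-transition idea concrete, here is a toy Turing machine in the same style. The particular machine, which just flips bits until it hits a blank, is an invented illustration, not one from the text:

```python
# The table maps (state, symbol read) to (symbol to write, head move,
# next state), i.e. the state transition function described above.

from collections import defaultdict

TABLE = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", " "): (" ", 0, "halt"),   # blank: stop
}

def run(tape_string):
    # The tape is unbounded; unwritten cells read as blank.
    tape = defaultdict(lambda: " ", enumerate(tape_string))
    state, head = "flip", 0
    while state != "halt":
        write, move, state = TABLE[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(len(tape_string)))

print(run("0110"))  # -> "1001"
```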

More practical architectures
Von Neumann machines:
–Strictly less powerful than Turing machines (finite memory)
–Have a distinguished area of memory for stored programs, which makes them conceptually easier to use than TMs
–A special memory location points to the next instruction; on each processing cycle: fetch the instruction, move the pointer to the next instruction, execute the current instruction
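A hedged sketch of that fetch-execute cycle, using an invented three-instruction machine rather than any real instruction set. Note that program and data share one memory, as the stored-program idea requires:

```python
memory = [
    ("LOAD", 4),    # put memory[4] in the accumulator
    ("ADD", 5),     # add memory[5] to the accumulator
    ("HALT", None),
    None,           # padding
    10,             # data
    32,             # data
]

pc, acc = 0, 0
while True:
    op, arg = memory[pc]   # fetch the instruction the pointer names
    pc += 1                # move the pointer to the next instruction
    if op == "HALT":       # execute the current instruction
        break
    elif op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]

print(acc)  # 42
```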

Production Systems
Introduced by Newell & Simon (1972): a cyclic processor with two main memory structures
–Long-term memory with rules (~productions)
–Working memory with a symbolic representation of the current system state
Example: IF goal(sweeten(X)) AND available(sugar) THEN action(add(sugar, X)) AND retract(goal(sweeten(X)))

Recognize phase (pattern matching)
–Find all rules in LTM that match elements in WM
Act phase (conflict resolution)
–Choose one matching rule, execute it, update WM, and (possibly) perform an action
Complex sequences of behavior can thus result. The power of the pattern matcher can be varied, allowing different uses of WM; the power of conflict resolution will influence behavior given multiple matches
–Most specific? This works well for problem-solving. Would it work for pole-balancing?
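A minimal recognize-act loop in this spirit, using the sweetening rule from the previous slide. The rule encoding and first-match conflict resolution are simplifications for illustration:

```python
# Working memory: a set of symbolic facts describing the current state.
working_memory = {("goal", "sweeten", "tea"), ("available", "sugar")}

def sweeten_rule(wm):
    """IF goal(sweeten(X)) AND available(sugar) THEN add sugar, retract goal."""
    for fact in list(wm):
        if fact[0] == "goal" and fact[1] == "sweeten" and ("available", "sugar") in wm:
            x = fact[2]
            print(f"action: add(sugar, {x})")
            wm.discard(fact)   # retract the satisfied goal
            return True
    return False

rules = [sweeten_rule]   # long-term memory: the set of productions

# Cycle: match rules against WM, fire one, repeat until nothing matches.
while any(rule(working_memory) for rule in rules):
    pass
print(working_memory)  # goal retracted; only ('available', 'sugar') remains
```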

Connectionist Models
The basic assumption:
–There are many processors connected together, operating simultaneously
–Processors: units, nodes, artificial neurons

A connectionist network is…
–A set of nodes, connected in some fashion
–Nodes have varying activation levels
–Nodes interact via the flow of activation along the connections
–Connections are usually directed (one-way flow) and weighted (strength and nature of interaction; positive weight = excitatory, negative = inhibitory)
–A node’s activation is computed from the weighted sum of its inputs
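The activation rule in the last bullet can be stated in a few lines. The logistic squashing function below is one common choice, not the only one:

```python
import math

def node_activation(inputs, weights):
    net = sum(a * w for a, w in zip(inputs, weights))  # weighted sum of inputs
    return 1.0 / (1.0 + math.exp(-net))               # squash to (0, 1)

# Positive weights are excitatory, negative inhibitory:
print(node_activation([1.0, 1.0, 0.5], [0.8, -0.3, 0.5]))  # ~0.68
```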

Local vs. Distributed Representation
Parallel Distributed Processing is a (the?) major branch of connectionism.
In principle, a connectionist node could have an interpretable meaning
–E.g., active when ‘red’ is input, or ‘grandmother’, or whatever
However, an individual PDP node will not have such an interpretable meaning
–Activation over the whole set of nodes corresponds to ‘red’
–An individual node participates in many such representations

PDP
PDP systems lack systematicity and compositionality.
Three main types of networks:
–Associative
–Feed-forward
–Recurrent

Associative
Used to recognize and reconstruct patterns
–Present an activation pattern to a subset of units
–Let the network ‘settle’ into a stable activation pattern (a reconstruction of a previously learned state)
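A hedged sketch of settling, using a Hopfield-style network (one standard kind of associative net; the text does not commit to this particular model):

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1])          # a previously learned state
W = np.outer(pattern, pattern).astype(float)   # Hebbian-style storage
np.fill_diagonal(W, 0)                         # no self-connections

state = np.array([1, -1, -1, -1, 1])           # corrupted version of the pattern
for _ in range(10):                            # let the network settle
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)                           # the stored pattern is reconstructed
print(np.array_equal(state, pattern))  # True
```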

Feedforward
Not for reconstruction, but for mapping from one domain to another
–Nodes are organized into layers
–Activation spreads through the layers in sequence
–A given layer can be thought of as an “activation vector”
Simplest case:
–Input layer (stimulus)
–Output layer (response)
Two-layer networks are very restricted in power; intermediate (hidden) layers provide most of the additional computational power needed.
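A minimal forward pass with one hidden layer; the layer sizes and random weights are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))   # 3 inputs -> 4 hidden units
W_output = rng.normal(size=(4, 2))   # 4 hidden units -> 2 outputs

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    hidden = sigmoid(x @ W_hidden)   # intermediate layer: the extra power
    return sigmoid(hidden @ W_output)

# Activation spreads through the layers in sequence:
print(forward(np.array([1.0, 0.0, 0.5])))  # the network's "response" vector
```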

Recurrent
Feedforward nets compute mappings given the current input only; recurrent networks allow the mapping to take previous input into account.
Jordan (1986) and Elman (1990) introduced networks with:
–Feedback links from the output or hidden layers to context units, and
–Feedforward links from the context units to the hidden units
A Jordan network’s output depends on the current input and the previous output; an Elman network’s output depends on the current input and the whole of the previous input history.
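A hedged sketch of an Elman-style step: the hidden layer sees the current input plus a copy of its own previous state via the context units. Sizes and weights are again arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 2, 3, 2
W_in = rng.normal(size=(n_in, n_hid))
W_context = rng.normal(size=(n_hid, n_hid))  # context -> hidden links
W_out = rng.normal(size=(n_hid, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

context = np.zeros(n_hid)  # context units start empty
for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    hidden = sigmoid(x @ W_in + context @ W_context)
    context = hidden.copy()          # feedback: copy hidden into context
    print(sigmoid(hidden @ W_out))   # output now reflects the input history
```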

Key Points about PDP
–It’s not just that a net can recognize a pattern or perform a mapping
–It’s the fact that it can learn to do so, on the basis of limited data
–And the way that networks respond to damage is crucial

Learning
Present the network with a series of training patterns
–Adjust the weights on the connections so that the patterns are encoded in the weights
Most training algorithms perform small adjustments to the weights per trial, but require many presentations of the training set to reach a reasonable degree of performance.
There are many different learning algorithms.

Learning (contd.)
Associative nets support the Hebbian learning rule:
–Adjust the weight of a connection by an amount proportional to the correlation in activity of the corresponding nodes
–So if both are active, increase the weight; if both are inactive, increase the weight; if they differ, decrease the weight
Important because this is biologically plausible… and very effective
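The rule can be written directly. With units coded +1 (active) / -1 (inactive), the product of the two activities is positive exactly when they agree, which gives the behavior described above:

```python
def hebbian_update(w, a_i, a_j, learning_rate=0.1):
    # a_i * a_j is +1 when the nodes agree (both active or both inactive)
    # and -1 when they differ, so the weight tracks their correlation.
    return w + learning_rate * a_i * a_j

w = 0.0
for a_i, a_j in [(1, 1), (-1, -1), (1, -1), (1, 1)]:
    w = hebbian_update(w, a_i, a_j)
    print(round(w, 2))  # up, up, down, up: 0.1, 0.2, 0.1, 0.2
```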

Learning (contd.)
Feedforward and recurrent nets often exploit the backpropagation-of-error rule
–The actual output is compared to the expected output
–The difference is computed and propagated back toward the input, layer by layer, driving weight adjustments
Note: unlike Hebbian learning, this is supervised learning
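A hedged sketch of backpropagation for a tiny two-layer net; the squared-error and logistic-unit details are illustrative choices, not the only possible ones:

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 1))
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([[1.0, 0.0]])
target = np.array([[1.0]])          # the "expected output" (supervision)

for step in range(200):             # many presentations, small adjustments
    hidden = sigmoid(x @ W1)        # forward pass
    out = sigmoid(hidden @ W2)
    delta_out = (out - target) * out * (1 - out)          # output error
    delta_hid = delta_out @ W2.T * hidden * (1 - hidden)  # error pushed back a layer
    W2 -= lr * hidden.T @ delta_out                       # weight adjustments
    W1 -= lr * x.T @ delta_hid

print(out)  # approaches the target over many presentations
```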

Psychological Relevance
Given a network of fixed size, if there are too few units to encode the training set, then interference occurs.
This is suboptimal, but better than nothing, since at least approximate answers are provided.
And this is the flipside of generalization, which provides output for unseen input
–E.g., weep → wept; bid → bid

Damage
–Either remove a proportion of the connections
–Or introduce random noise into activation propagation
The resulting behavior can simulate that of people with various forms of neurological damage.
“Graceful degradation”: impairment, but residual function
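Both lesioning methods are easy to sketch; the network, lesion proportion, and noise level below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 4))

# Method 1: remove a proportion of the connections.
mask = rng.random(W.shape) > 0.3     # zero out ~30% of the weights
W_lesioned = W * mask

# Method 2: inject random noise into activation propagation.
def noisy_propagate(x, W, noise_sd=0.1):
    return np.tanh(x @ W + rng.normal(scale=noise_sd, size=W.shape[1]))

x = np.array([1.0, -1.0, 0.5, 0.0])
print(noisy_propagate(x, W_lesioned))  # degraded but still functional output
```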

Example of Damage
Hinton & Shallice (1991) and Plaut & Shallice (1993) on deep dyslexia:
–Visual errors (‘cat’ read as ‘cot’)
–Semantic errors (‘cat’ read as ‘dog’)
Networks were constructed for the orthography-to-phonology mapping, then lesioned in various ways, producing behavior similar to that of human subjects.

Symbolic Networks
Though distributed representations have proved very important, some researchers prefer localist approaches.
Semantic networks:
–Frequently used in AI-based approaches, and in cognitive approaches which focus on conceptual knowledge
–One node per concept; typed links between concepts
–Inference: link-following
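A minimal localist semantic network with link-following inference. The canary/bird concepts are a standard textbook illustration, not taken from the slides:

```python
# One node per concept; typed links ("isa", "has", "can") between concepts.
network = {
    "canary": {"isa": "bird", "can": "sing"},
    "bird":   {"isa": "animal", "has": "wings"},
    "animal": {"can": "move"},
}

def lookup(concept, link):
    """Inference by link-following: climb 'isa' links until the link is found."""
    while concept is not None:
        node = network.get(concept, {})
        if link in node:
            return node[link]
        concept = node.get("isa")
    return None

print(lookup("canary", "has"))  # "wings", inherited from bird via the isa link
```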

Production systems with spreading activation
Anderson’s work (ACT, ACT*, ACT-R)
–Symbolic networks with continuous activation values
–ACT-R never removes working memory elements; their activation instead decays over time
–Productions are chosen on the basis of (co-)activation

Interactive Activation Networks
Essentially, localist connectionist networks featuring self-excitatory and lateral inhibitory links, which ensure that there’s always a winner in a competition (e.g., McClelland & Rumelhart’s model of letter perception).
Appropriate combinations of levels, with feedback loops in them, allow modeling of complex data-driven and expectation-driven behavior.
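A hedged sketch of such a competition: each node excites itself and inhibits its rivals, and a single winner emerges. The weights and update rule are illustrative, not McClelland & Rumelhart’s actual parameters:

```python
import numpy as np

n = 3
W = np.full((n, n), -0.2)      # lateral inhibitory links
np.fill_diagonal(W, 0.3)       # self-excitatory links

a = np.array([0.50, 0.45, 0.40])       # similar initial evidence for 3 candidates
for _ in range(30):
    a = np.clip(a + a @ W, 0.0, 1.0)   # update; keep activations in [0, 1]

print(a)  # roughly [1, 0, 0]: there's always a winner in the competition
```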

Comparing Symbolism & Connectionism
As is so often the case in science, the two approaches were initially presented as exclusive alternatives.

Connectionist:
–Interference
–Generalization
–Graceful degradation
Symbolists complain:
–Connectionists don’t capture structured information
–Network computation is opaque
–Networks are “merely” implementation-level

Symbolic:
–Productive
–Systematic
–Compositional
Connectionists complain:
–Symbolists don’t relate the assumed structures to the brain
–They relate them to von Neumann machines

Connectionists can claim:
–Complex rule-oriented behavior *emerges* from the interaction of subsymbolic processes
–So symbolic models describe, but do not explain

Symbolists can claim:
–Though PDP models can learn implicit rules, the learning mechanisms are usually not neurally plausible after all
–Performance is highly dependent on the exact choice of architecture

Hybrid Architectures
But really, the truth is that different tasks demand different technologies.
Hybrid approaches explicitly assume:
–Neither the connectionist nor the symbolic approach is flawed
–Their techniques are compatible

Two main hybrid options:
Physically hybrid models:
–Contain subsystems of both types
–Issues: interfacing, modularity (e.g., use an Interactive Activation Network to integrate results)
Non-physically hybrid models:
–Subsystems of only one type, but described in two ways
–Issue: levels of description (e.g., connectionist production systems)

Cognitive Architectures
Most modeling is aimed at specific processes or tasks, but it has been argued that:
–Most real tasks involve many cognitive processes
–Most cognitive processes are used in many tasks
Hence, we need unified theories of cognition.

Examples
–ACT-R (Anderson)
–Soar (Newell)
Both based on production system technology:
–Task-specific knowledge coded into the productions
–Single processing mechanism, single learning mechanism

Like computer architectures, cognitive architectures tend to make some tasks easy, at the price of making others hard.
Unlike computer architectures, cognitive architectures must include learning mechanisms.
But note that the unified approaches sacrifice genuine task-appropriateness, and perhaps also biological plausibility.

A Cognitive Architecture is:
–A fixed arrangement of particular functional components
–A processing strategy