
1 Computational Cognitive Modelling
COGS 511 – Lecture 3: SOAR and ACT-R

2 Related Readings Course Pack:
Anderson et al. (2004), An Integrated Theory of the Mind; Lehman et al. (2006), A Gentle Introduction to Soar. See also (optional): Anderson and Lebiere (2003), The Newell Test for a Theory of Cognition, Behavioral and Brain Sciences 26; Anderson, J.R. (2007), How Can the Human Mind Occur in the Physical Universe?, OUP (the latest book on ACT-R as a unified theory); Chapter 2 of Polk and Seifert; Anderson and Lebiere (1998), The Atomic Components of Thought, Lawrence Erlbaum (the previous one). Some slides are adapted from Lebiere's introductory tutorial. Thanks to Evgueni Stepanov for letting me use his drawings; see the Masters theses by Stepanov and Özyörük. COGS 511

3 SOAR States, Operators and Reasoning
Descendant of the General Problem Solver (1963). The software is now at version 9, with 9.1 and 9.2 betas (as of March 2010). COGS 511

4 SOAR’s Cognitive Design Principles
Be goal oriented. Use symbols and abstractions. Be flexible and exhibit adaptive behaviour. Learn from experience and the environment. What aspects of cognitive behaviour are missing here? Unrealistic aspects: e.g. forgetting in SOAR results when new associations prevent old ones from firing. SOAR is not self-aware or realizable as a neural system; it is non-evolutionary, non-social, limited in perceptual-motor capacity, etc. COGS 511

5 [Figure from A Gentle Introduction to Soar, 1999] COGS 511

6 Soar as a Production System
All long-term memory used to be composed of productions. Now semantic and episodic memories also exist, although there is a heavy weighting towards procedural knowledge. Conflict resolution: all satisfied productions put their contents into working memory, so rules are allowed to fire in parallel, but only at the level of operator proposals. Rules do not take actions by themselves; they propose actions (i.e. operators). COGS 511

7 [Figure from A Gentle Introduction to Soar, 1999] COGS 511

8 Elements of SOAR: Problem spaces – restricting the arena of action to the relevant domain knowledge. Goals – knowledge of objectives. Operators – knowledge about actions. States – an internal representation of a situation. Working memory elements – features and values, where the values of features may themselves be features. Objects exist only in a distributed sense, as sets of features related by a single identifier. COGS 511

9 [Figure: a problem space – an initial state S0, intermediate states and goal states, each a set of feature-value pairs, with operators producing the transitions between states (Lehman et al., 2006)] COGS 511

10 SOAR's Decision Cycle. Perception is asynchronous with respect to the decision cycle.
Recognize/Elaborate – match working memory against the "if" sides of rules in long-term memory; parallel firings end with quiescence, when all the knowledge that can be elicited in the current context is in working memory.
Decide – evaluate preferences, which are symbolic (better, best, acceptable, worst, prohibit) or numeric, and select an operator.
Act – apply the chosen operator; a single operator per decision cycle.
A typical decision cycle is ~100 msec; comparison with behavioural data is made in terms of decision cycles (a rough sketch of the cycle in code follows). COGS 511
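To make the control flow concrete, here is a minimal, hypothetical Python sketch of one decision cycle; the function and data-structure names are assumptions for illustration, not the actual Soar implementation.

# Hypothetical sketch of one Soar decision cycle.  Working memory is a set of
# tuples; each rule is a callable that returns the WM elements it would add.
def decision_cycle(working_memory, rules, apply_operator):
    # 1. Recognize/Elaborate: fire all matching rules in parallel until
    #    quiescence (no rule adds anything new to working memory).
    while True:
        proposed = set()
        for rule in rules:
            proposed |= rule(working_memory)
        if proposed <= working_memory:
            break                                   # quiescence reached
        working_memory |= proposed
    # 2. Decide: evaluate preferences over the proposed operators.
    candidates = [w for w in working_memory if w[0] == "acceptable-operator"]
    if not candidates:
        raise RuntimeError("impasse")               # resolved by subgoaling
    chosen = max(candidates, key=lambda c: c[2])    # c[2] = numeric preference
    # 3. Act: apply the single chosen operator (one per ~100 msec cycle).
    apply_operator(chosen, working_memory)
    return working_memory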

11 [Figure from A Gentle Introduction to Soar, 1999] COGS 511

12 Impasses. If a successful decision cannot be made, an impasse arises; resolving the impasse becomes a subgoal of the original goal. There is a predetermined set of impasses: Operator tie impasse – more than one acceptable preference. No-change impasse – a new operator cannot be selected. Conflict impasse – conflicting preferences. COGS 511

13 Chunking: learning associations by examining the pre-impasse environment and the solution to the impasse (a rough sketch follows). Other learning styles used to be built on chunking. Now there is also reinforcement learning, which operates on operator-selection rules: it learns rules that test features and learns expected rewards. COGS 511
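As a minimal, hypothetical illustration (the names and data structures below are assumptions, not Soar's internals), the learned chunk's conditions are the pre-impasse working-memory elements that were tested while resolving the impasse, and its action is the result returned to the superstate.

# Hypothetical sketch of Soar-style chunking, not the real implementation.
def build_chunk(tested_superstate_wmes, subgoal_result):
    """Return (conditions, actions) for the rule learned from an impasse."""
    conditions = frozenset(tested_superstate_wmes)   # backtraced WM elements
    actions = (subgoal_result,)                      # result of the subgoal
    return conditions, actions

# Example: a tie impasse between two serve operators is resolved in a subgoal;
# the chunk prefers the winning operator the next time the same features hold.
chunk = build_chunk(
    tested_superstate_wmes=[("opponent", "position", "back-court"),
                            ("operator", "name", "curved-serve")],
    subgoal_result=("curved-serve", "preference", "best"))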

14 Perception/Motor Interface
[Figure: working memory containing competing serve operators (fast serve, curved serve), a resolve-tie subgoal that selects the curved serve, and long-term memory rules a1…a5, a7…a9; adapted from Lehman et al., 2006]

15 Soar Syntax If I exist, then write “Hello World” and halt.
sp {hello-world
   (state <s> ^type state)
-->
   (write |Hello World|)
   (halt)}
COGS 511

16 SOAR Content Theories and Applications
R1-Soar – an expert system that configures computer systems. Designer-Soar – algorithm generation from specifications. NL-Soar and LG-Soar – natural language comprehension and production. NTD-Soar – the NASA Test Director. TacAir-Soar and Soar MOUTBOT – military behaviour models (TacAir-Soar has over 8000 rules). Hybrids: EASE – Elements of Soar, ACT-R and EPIC. COGS 511

17 Advantages and Disadvantages of SOAR
Parsimony – a single long-term memory and a single learning rule (also a disadvantage?). This has changed in Soar 9 with the addition of episodic and semantic memory plus reinforcement and concept learning. Symbolic (also a disadvantage? – utility, frequency, etc. are not accounted for; this has also been revised recently). COGS 511

18 Some of the Recent Advances to SOAR
Soar Technology – the software itself is still freely available. A new interface, better editing and debugging facilities, and a kernel allowing better interaction with other agents and applications. Development of experimental simulation agents. Implementation of reinforcement learning and preliminary coverage of emotions. COGS 511

19 Some Further Developments
Port of NL-Soar to version 9, with minimalist syntax and access to WordNet and corpora... Soar in Java. iSoar on the iPhone. Soar and robotics. Bayesian-causal hybrids in Soar. COGS 511

20 History of the ACT-R framework
Adaptive Control of Thought–Rational.
Predecessor: HAM – declarative memory only.
Theory versions: ACT-E – added productions; ACT* – learning and the subsymbolic part; ACT-R – further development, 1993+.
Implementations: ACT-R (1993); ACT-R 3.0; ACT-R 4.0; ACT-RN – a neural-network implementation; ACT-R/PM – EPIC's perceptual-motor modules added; ACT-R 5.0 – PM integration and theory updates; ACT-R 6.0 – the current version. COGS 511

21 ACT-R Models by Topic Area
I. Perception & Attention: 1. Psychophysical Judgements 2. Visual Search 3. Eye Movements 4. Psychological Refractory Period 5. Task Switching 6. Subitizing 7. Stroop 8. Driving Behavior 9. Situational Awareness 10. Graphical User Interfaces
II. Learning & Memory: 1. List Memory 2. Fan Effect 3. Implicit Learning 4. Skill Acquisition 5. Cognitive Arithmetic 6. Category Learning 7. Learning by Exploration and Demonstration 8. Updating Memory & Prospective Memory 9. Causal Learning
III. Problem Solving & Decision Making: 1. Tower of Hanoi 2. Choice & Strategy Selection 3. Mathematical Problem Solving 4. Spatial Reasoning 5. Dynamic Systems 6. Use and Design of Artifacts 7. Game Playing 8. Insight and Scientific Discovery
IV. Language Processing: 1. Parsing 2. Analogy & Metaphor 3. Learning 4. Sentence Memory
V. Other: 1. Cognitive Development 2. Individual Differences 3. Emotion 4. Cognitive Workload 5. Computer Generated Forces 6. fMRI 7. Communication, Negotiation, Group Decision Making
Visit link.

22 ACT-R Architecture
[Figure: the core production system (matching, selection, execution), the intentional module with its goal buffer, the declarative module with its retrieval buffer, and the perceptual-motor modules with their buffers, connected to the external world (Anderson et al., 2004)]
ACT-R consists of a set of modules devoted to processing different kinds of information. They can be divided into two parts: the core production system, together with the declarative and intentional modules and their associated retrieval and goal buffers, illustrated in the upper part of the figure; and the perceptual-motor system, consisting of visual, auditory, motor and other modules that provide the basic means of communication with the environment. The number of modules is not fixed, but several have been implemented. The central production system communicates with the modules through buffers: it can harvest the information there and make requests to perform some action. The architecture is a combination of serial and parallel processing. Modules operate in parallel, but buffers can contain only one piece of information at a time, and only one production is executed at a time; this serial processing keeps the system in control of the computation. COGS 511

23 ACT-R Buffers
1. Goal buffer – represents where one is in the task; preserves information across production cycles.
2. Retrieval buffer – holds information retrieved from declarative memory; the seat of activation computations.
3. Visual buffers – location; visual objects.
4. Auditory, vocal, and manual buffers.
Modules and the core production system communicate via buffers. A buffer can hold only one unit of information at a time. COGS 511

24 Basic Elements of ACT-R
Chunks are schema-like units of declarative knowledge. They have types and slots, and the slot values may themselves be chunks. Chunks can be created by productions or by encodings in buffers. Productions are the basic units of procedural knowledge. Any module can create and use chunks. Not all chunks are in declarative memory, and declarative-memory chunks cannot be changed from within a production; chunks merge into declarative memory from buffers (ACT-R 6.0). A rough sketch of a chunk as a data structure follows. COGS 511
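As a rough illustration only (the class and slot names below are assumptions, not ACT-R's API), a chunk can be pictured as a typed bundle of slot-value pairs whose values may themselves be chunks.

from dataclasses import dataclass, field

@dataclass
class Chunk:
    """Hypothetical stand-in for an ACT-R chunk: a name, a type, and named slots."""
    name: str
    chunk_type: str
    slots: dict = field(default_factory=dict)   # slot values may be other Chunks

dog = Chunk("dog", "animal")
fact1 = Chunk("fact1", "category-fact", {"animal": dog, "class": "mammal"})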

25 Productions. Modules operate in parallel and asynchronously, but one production fires at each cycle. The production cycle is approximated at 50 ms. Productions can recognize information in buffers, make requests to buffers, and update them. COGS 511

26 ACT-R: Knowledge Representation
[Figure: the chunk fact1 (isa category-fact, animal DOG, class MAMMAL) and a production that retrieves such a fact. Courtesy of E. Stepanov]
The examples represent a chunk encoding the fact that "dog is a mammal" and a production that checks whether such a fact exists in declarative memory. COGS 511

27 Attending to a Word in Two Productions
(P find-next-word
   =goal>
      ISA   comprehend-sentence
      word  nil                      ; no word currently being processed
==>
   +visual-location>
      ISA       visual-location      ; find the left-most unattended location
      screen-x  lowest
      attended  nil
   =goal>
      word  looking                  ; update state
)

(P attend-next-word
   =goal>
      word  looking                  ; looking for a word
   =visual-location>                 ; a visual location has been identified
==>
   =goal>
      word  attending                ; update state
   +visual>
      ISA         visual-object      ; attend to the object at that location
      screen-pos  =visual-location
)

=<name of the buffer> refers to the current contents of that buffer; +<name of the buffer> requests a retrieval to that buffer from its source of input. The visual-location buffer determines the location of an object that is not yet attended to; in the second production that object is attended to and identified. See the ACT-R Tutorial for a full description. COGS 511

28 Subsymbolic ACT-R Chunk retrieval depends on
Base-level activation, which rises and falls according to practice and delay; contextual activation – the strength of association with the slot values of the current goal, with attentional weighting (depends on fan); (partial) matching to the retrieval specification; and noise. A chunk will be retrieved only if its activation is over a threshold. The lower the activation of a chunk, the longer it takes to retrieve it (latency). The standard equations are given below. COGS 511
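For reference, the standard ACT-R activation, base-level learning and retrieval-latency equations (as in Anderson et al., 2004) have the following form, where $B_i$ is the base-level activation of chunk $i$, $W_j$ the attentional weights of the goal-slot sources, $S_{ji}$ the associative strengths, $P_k$ the match scales and $M_{ki}$ the similarity scores used in partial matching, $t_j$ the time since the $j$-th use of the chunk, $d$ the decay (default 0.5), $F$ a latency factor and $\epsilon$ the noise:

$$A_i = B_i + \sum_j W_j S_{ji} + \sum_k P_k M_{ki} + \epsilon, \qquad
B_i = \ln\Big(\sum_{j=1}^{n} t_j^{-d}\Big), \qquad
T_i = F e^{-A_i}$$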

29 Subsymbolic ACT-R. Which production is selected to fire depends on its utility: the past successes and failures of that specific production for achieving the current goal, the current goal's importance, and an estimate of the cost of the production given in seconds (see the equation below). COGS 511
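In the ACT-R 4/5 formulation described here (later replaced in ACT-R 6.0 by a reinforcement-learning mechanism, see slide 34), the utility of production $i$ is

$$U_i = P_i G - C_i + \epsilon$$

where $P_i$ is the learned probability that the production leads to the goal, $G$ the value of the current goal, $C_i$ the estimated cost in seconds, and $\epsilon$ noise; the matching production with the highest utility is selected.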

30 ACT-R: Subsymbolic
[Figure: the chunk fact1 (isa category-fact, animal DOG, class MAMMAL) receives base-level activation Bi, spreading activation Sji weighted by the attentional weights Wj of the goal's slot values (DOG, MAMMAL), and partial-matching scores Mki weighted by Pk from the retrieval request; the Production Utility Equation and the Activation Equation are shown alongside. Courtesy of E. Stepanov]
The subsymbolic level of the architecture controls the overall behaviour of the system. It determines which production rules are selected, according to their utilities, calculated by the Production Utility Equation. In that equation P is the probability of achieving the goal if the production is selected, G is the value of the goal, reflecting its internal importance, and C is the cost of the production, i.e. how long it will take to execute. Both the probability of success and the cost are learned with experience, and since production rules are noisy, a noise parameter is added to make the selection of productions variable. The Activation Equation controls the retrieval of chunks from declarative memory based on their activation. The activation of a chunk is the sum of its base-level activation, which reflects the general usefulness of the chunk in the past, and its associative activation, which reflects its relevance to the current context: chunks that are slot values in the goal chunk activate chunks in declarative memory according to the attentional weighting given to the slot they occupy and the strength of association between them and the chunk in declarative memory. The matching score gives each slot in the retrieval request a certain importance, Pk, multiplied by the degree of similarity between the slot value in the retrieval request and the corresponding slot of the chunk in declarative memory. Two noise parameters reflect the variability of encoding and retrieval (one added when the chunk was created, the other when it is retrieved). Together with the retrieval threshold, which represents the minimum activation necessary for a chunk to be retrieved, they account for errors of omission (failure to retrieve a chunk) and errors of commission (retrieval of the wrong chunk). The retrieval time is also based on the activation of the chunk: the more active it is, the faster it is retrieved. COGS 511

31 Parameters Default values of parameters are either taken from empirical data or are working approximations induced from a number of models (e.g. decay 0.5) Issue: All parameters can be set, but should you? COGS 511

32 Learning in ACT-R. Production compilation: successive productions are combined into a single production that has the effect of both, except when perceptual-motor dependencies intervene. Subsymbolic values are updated accordingly. For example, a production that requests retrieval of an addition fact and a second that harvests the result can compile into one production that produces the answer directly (a rough sketch follows). COGS 511
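A deliberately simplified, hypothetical Python sketch of the compilation step (the representation is an assumption for illustration only): when one production issues a retrieval request and the next harvests the retrieved chunk, the compiled rule substitutes the retrieved values directly and skips the retrieval.

# Hypothetical illustration of production compilation, not ACT-R's own code.
def compile_pair(p1_conditions, retrieved_chunk, p2_actions):
    """Collapse 'request retrieval' + 'harvest retrieval' into one rule."""
    conditions = dict(p1_conditions)                  # test what p1 tested
    actions = {slot: retrieved_chunk.get(val, val)    # inline the retrieved values
               for slot, val in p2_actions.items()}
    return conditions, actions

# e.g. p1 requests the fact 7 + 3 = 10 and p2 types the retrieved sum;
# the compiled production types 10 directly whenever the goal is 7 + 3.
rule = compile_pair({"arg1": 7, "arg2": 3}, {"sum": 10}, {"press-key": "sum"})
# rule == ({'arg1': 7, 'arg2': 3}, {'press-key': 10})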

33 Recent Developments. Emphasis on finding neural anchors for the concepts in the ACT-R model, in order to acquire new sources of data to guide the development of the theory, e.g. via predictions of BOLD functions in fMRI. Integration of ACT-R with a computational neuroscience toolkit (Nengo). Decision-tree learning for production matching. Threaded cognition – managing a set of goals. COGS 511

34 ACT-R 6.0 Mostly a reimplementation but the same theory.
Uniform and better module-buffer structure; a better vision module. Support for multiple models; faster than ACT-R 5.0, etc. A new utility-learning mechanism based on temporal-difference reinforcement learning. COGS 511

35 Other Recent Applications
ACT-R and Semantic Web integration. ACT-R on a robot. A module for competitive and sequential tasks: Stroop, picture-word interference... Other modules in development: temporal, metacognitive, spatial... COGS 511

36 Newell Test for a Theory of Cognition
Flexible behaviour. Real-time performance. Adaptive behaviour. Vast knowledge base. Dynamic behaviour. Knowledge integration. Natural language. Learning. Development. Evolution. Consciousness. Brain realization. Italics in the original slide mark self-perceived areas needing improvement (see Anderson and Lebiere, 2003). COGS 511

37 Some Further General Developments
A volunteer computing project; 3-D Brain (ACT-R); situated language and active vision (ACT-R); fatigue research (ACT-R). COGS 511

38 Lecture 4 Connectionist and Dynamic Modelling Paradigms
Readings: McLeod et al., Chapters 1, 5 and 7; Eliasmith, The Third Contender, in Thagard, Chapter 13. The homework is to be posted next week – do the tutorials (especially 1, 2 and 3) during this week. See the forum activity on project readings. COGS 511

