Introduction to ACT-R 5.0 Tutorial, 24th Annual Conference of the Cognitive Science Society. Christian Lebiere, Human-Computer Interaction Institute, Carnegie Mellon University.

Presentation transcript:

Introduction to ACT-R 5.0 Tutorial 24th Annual Conference Cognitive Science Society ACT-R Home Page: Christian Lebiere Human Computer Interaction Institute Carnegie Mellon University Pittsburgh, PA

Tutorial Overview
1. Introduction
2. Symbolic ACT-R
   - Declarative Representation: Chunks
   - Procedural Representation: Productions
   - ACT-R 5.0 Buffers: A Complete Model for Sentence Memory
3. Chunk Activation in ACT-R
   - Activation Calculations
   - Spreading Activation: The Fan Effect
   - Partial Matching: Cognitive Arithmetic
   - Noise: Paper Rocks Scissors
   - Base-Level Learning: Paired Associate
4. Production Utility in ACT-R
   - Principles and Building Sticks Example
5. Production Compilation
   - Principles and Successes
6. Predicting fMRI BOLD Response
   - Principles and Algebra Example

Motivations for a Cognitive Architecture
1. Philosophy: Provide a unified understanding of the mind.
2. Psychology: Account for experimental data.
3. Education: Provide cognitive models for intelligent tutoring systems and other learning environments.
4. Human-Computer Interaction: Evaluate artifacts and help in their design.
5. Computer-Generated Forces: Provide cognitive agents to inhabit training environments and games.
6. Neuroscience: Provide a framework for interpreting data from brain imaging.

Approach: Integrated Cognitive Models
- Cognitive model = computational process that thinks/acts like a person
- Integrated cognitive models span cognition, perception, and action (e.g., a user model, a driver model)

Study 1: Dialing Times (figure): model predictions vs. human data for total time to complete dialing.

Study 1: Lateral Deviation (figure): model predictions vs. human data for deviation from lane center (RMSE).

These Goals for Cognitive Architectures Require
1. Integration, not just of different aspects of higher-level cognition, but of cognition, perception, and action.
2. Systems that run in real time.
3. Robust behavior in the face of error, the unexpected, and the unknown.
4. Parameter-free predictions of behavior.
5. Learning.

History of the ACT Framework

Predecessor:
   HAM (Anderson & Bower, 1973)

Theory versions:
   ACT-E (Anderson, 1976)
   ACT* (Anderson, 1983)
   ACT-R (Anderson, 1993)
   ACT-R 4.0 (Anderson & Lebiere, 1998)
   ACT-R 5.0 (Anderson & Lebiere, 2001)

Implementations:
   GRAPES (Sauers & Farrell, 1982)
   PUPS (Anderson & Thompson, 1989)
   ACT-R 2.0 (Lebiere & Kushmerick, 1993)
   ACT-R 3.0 (Lebiere, 1995)
   ACT-R 4.0 (Lebiere, 1998)
   ACT-R/PM (Byrne, 1998)
   ACT-R 5.0 (Lebiere, 2001)
   Windows Environment (Bothell, 2001)
   Macintosh Environment (Fincham, 2001)

~100 Published Models in ACT-R

I. Perception & Attention
1. Psychophysical Judgements
2. Visual Search
3. Eye Movements
4. Psychological Refractory Period
5. Task Switching
6. Subitizing
7. Stroop
8. Driving Behavior
9. Situational Awareness
10. Graphical User Interfaces

II. Learning & Memory
1. List Memory
2. Fan Effect
3. Implicit Learning
4. Skill Acquisition
5. Cognitive Arithmetic
6. Category Learning
7. Learning by Exploration and Demonstration
8. Updating Memory & Prospective Memory
9. Causal Learning

III. Problem Solving & Decision Making
1. Tower of Hanoi
2. Choice & Strategy Selection
3. Mathematical Problem Solving
4. Spatial Reasoning
5. Dynamic Systems
6. Use and Design of Artifacts
7. Game Playing
8. Insight and Scientific Discovery

IV. Language Processing
1. Parsing
2. Analogy & Metaphor
3. Learning
4. Sentence Memory

V. Other
1. Cognitive Development
2. Individual Differences
3. Emotion
4. Cognitive Workload
5. Computer Generated Forces
6. fMRI
7. Communication, Negotiation, Group Decision Making

ACT-R 5.0 (architecture diagram)
- Intentional Module (not identified) → Goal Buffer (DLPFC)
- Declarative Module (Temporal/Hippocampus) → Retrieval Buffer (VLPFC)
- Visual Module (Occipital etc.) → Visual Buffer (Parietal)
- Manual Module (Motor/Cerebellum) → Manual Buffer (Motor)
- Productions (Basal Ganglia): Matching (Striatum), Selection (Pallidum), Execution (Thalamus)

ACT-R: Knowledge Representation (diagram of the goal, visual, and retrieval buffers)

ACT-R: Assumption Space

Chunks: Example

(CHUNK-TYPE name slot1 slot2 ... slotN)

FACT3+4
   isa ADDITION-FACT
   ADDEND1 THREE
   ADDEND2 FOUR
   SUM SEVEN

Chunks: Example

(CLEAR-ALL)

(CHUNK-TYPE addition-fact addend1 addend2 sum)
(CHUNK-TYPE integer value)

(ADD-DM
   (fact3+4 isa addition-fact addend1 three addend2 four sum seven)
   (three isa integer value 3)
   (four isa integer value 4)
   (seven isa integer value 7))

Chunks: Example (network diagram)

fact3+4: isa ADDITION-FACT; ADDEND1 → THREE (isa INTEGER, VALUE 3); ADDEND2 → FOUR (isa INTEGER, VALUE 4); SUM → SEVEN (isa INTEGER, VALUE 7)

Chunks: Exercise I

Fact: The cat sits on the mat.

Encoding:
(Chunk-Type proposition agent action object)
(Add-DM
   (fact007 isa proposition agent cat007 action sits_on object mat))

Chunks: Exercise II

Fact: The black cat with 5 legs sits on the mat.

Chunks:
(Chunk-Type proposition agent action object)
(Chunk-Type cat legs color)
(Add-DM
   (fact007 isa proposition agent cat007 action sits_on object mat)
   (cat007 isa cat legs 5 color black))

Chunks: Exercise III

Fact: The rich young professor buys a beautiful and expensive city house.

Chunks:
(Chunk-Type proposition agent action object)
(Chunk-Type prof money-status age)
(Chunk-Type house kind price status)
(Add-DM
   (fact008 isa proposition agent prof08 action buys object obj1001)
   (prof08 isa prof money-status rich age young)
   (obj1001 isa house kind city-house price expensive status beautiful))

A Production is
1. The greatest idea in cognitive science.
2. The least appreciated construct in cognitive science.
3. A 50-millisecond step of cognition.
4. The source of the serial bottleneck in an otherwise parallel system.
5. A condition-action data structure with "variables".
6. A formal specification of the flow of information from cortex to basal ganglia and back again.

Productions

Structure of a production:
(p name
   condition part       ; specification of buffer tests
 ==>                    ; delimiter
   action part)         ; specification of buffer transformations

Key properties: modularity, abstraction, goal/buffer factoring, conditional asymmetry

ACT-R 5.0 Buffers
1. Goal Buffer (=goal, +goal)
   - represents where one is in the task
   - preserves information across production cycles
2. Retrieval Buffer (=retrieval, +retrieval)
   - holds information retrieved from declarative memory
   - seat of activation computations
3. Visual Buffers
   - location (=visual-location, +visual-location)
   - visual objects (=visual, +visual)
   - attention switch corresponds to buffer transformation
4. Auditory Buffers (=aural, +aural)
   - analogous to visual
5. Manual Buffers (=manual, +manual)
   - elaborate theory of manual movement, including feature preparation, Fitts's law, and device properties
6. Vocal Buffers (=vocal, +vocal)
   - analogous to manual buffers but less well developed

Model for Anderson (1974)
- Participants read a story consisting of active and passive sentences.
- Subjects are asked to verify either active or passive sentences.
- All foils are subject-object reversals.
- Predictions of the ACT-R model are "almost" parameter-free.
- Data and predictions cover each studied-form/test-form condition (active-active, active-passive, passive-active, passive-passive), for both targets and foils; mean deviation: 0.072.

250 msec in the Life of ACT-R: Reading the Word “The”

Identifying left-most location:
Time : Find-Next-Word Selected
Time : Find-Next-Word Fired
Time : Module :VISION running command FIND-LOCATION

Attending to word:
Time : Attend-Next-Word Selected
Time : Attend-Next-Word Fired
Time : Module :VISION running command MOVE-ATTENTION
Time : Module :VISION running command FOCUS-ON

Encoding word:
Time : Read-Word Selected
Time : Read-Word Fired
Time : Failure Retrieved

Skipping “The”:
Time : Skip-The Selected
Time : Skip-The Fired

Attending to a Word in Two Productions

(P find-next-word
   =goal>
      ISA comprehend-sentence
      word nil                  ; no word currently being processed
==>
   +visual-location>
      ISA visual-location
      screen-x lowest
      attended nil              ; find left-most unattended location
   =goal>
      word looking)             ; update state

(P attend-next-word
   =goal>
      ISA comprehend-sentence
      word looking              ; looking for a word
   =visual-location>
      ISA visual-location       ; visual location has been identified
==>
   =goal>
      word attending            ; update state
   +visual>
      ISA visual-object
      screen-pos =visual-location)  ; attend to object in that location

Processing “The” in Two Productions

(P read-word
   =goal>
      ISA comprehend-sentence
      word attending            ; attending to a word
   =visual>
      ISA text
      value =word
      status nil                ; word has been identified
==>
   =goal>
      word =word                ; hold word in goal buffer
   +retrieval>
      ISA meaning
      word =word)               ; retrieve word’s meaning

(P skip-the
   =goal>
      ISA comprehend-sentence
      word "the"                ; the word is “the”
==>
   =goal>
      word nil)                 ; set to process next word

Processing “missionary” in 450 msec

Identifying left-most unattended location:
Time : Find-Next-Word Selected
Time : Find-Next-Word Fired
Time : Module :VISION running command FIND-LOCATION

Attending to word:
Time : Attend-Next-Word Selected
Time : Attend-Next-Word Fired
Time : Module :VISION running command MOVE-ATTENTION
Time : Module :VISION running command FOCUS-ON

Encoding word:
Time : Read-Word Selected
Time : Read-Word Fired
Time : Missionary Retrieved

Processing the first noun:
Time : Process-First-Noun Selected
Time : Process-First-Noun Fired

Processing the Word “missionary”

Missionary
   isa MEANING
   word "missionary"

(P process-first-noun
   =goal>
      ISA comprehend-sentence
      agent nil
      action nil                ; neither agent nor action has been assigned
      word =y
   =retrieval>
      ISA meaning
      word =y                   ; word meaning has been retrieved
==>
   =goal>
      agent =retrieval          ; assign meaning to agent
      word nil)                 ; set to process next word

Three More Words in the Life of ACT-R: 950 msec

Processing “was”:
Time : Find-Next-Word Selected
Time : Find-Next-Word Fired
Time : Module :VISION running command FIND-LOCATION
Time : Attend-Next-Word Selected
Time : Attend-Next-Word Fired
Time : Module :VISION running command MOVE-ATTENTION
Time : Module :VISION running command FOCUS-ON
Time : Read-Word Selected
Time : Read-Word Fired
Time : Failure Retrieved
Time : Skip-Was Selected
Time : Skip-Was Fired

Processing “feared”:
Time : Find-Next-Word Selected
Time : Find-Next-Word Fired
Time : Module :VISION running command FIND-LOCATION
Time : Attend-Next-Word Selected
Time : Attend-Next-Word Fired
Time : Module :VISION running command MOVE-ATTENTION
Time : Module :VISION running command FOCUS-ON
Time : Read-Word Selected
Time : Read-Word Fired
Time : Fear Retrieved
Time : Process-Verb Selected
Time : Process-Verb Fired

Processing “by”:
Time : Find-Next-Word Selected
Time : Find-Next-Word Fired
Time : Module :VISION running command FIND-LOCATION
Time : Attend-Next-Word Selected
Time : Attend-Next-Word Fired
Time : Module :VISION running command MOVE-ATTENTION
Time : Module :VISION running command FOCUS-ON
Time : Read-Word Selected
Time : Read-Word Fired
Time : Failure Retrieved
Time : Skip-By Selected
Time : Skip-By Fired

Reinterpreting the Passive

(P skip-by
   =goal>
      ISA comprehend-sentence
      word "by"
      agent =per
==>
   =goal>
      word nil
      object =per               ; reassign the provisional agent to object
      agent nil)

Two More Words in the Life of ACT-R: 700 msec

Processing “the”:
Time : Find-Next-Word Selected
Time : Find-Next-Word Fired
Time : Module :VISION running command FIND-LOCATION
Time : Attend-Next-Word Selected
Time : Attend-Next-Word Fired
Time : Module :VISION running command MOVE-ATTENTION
Time : Module :VISION running command FOCUS-ON
Time : Read-Word Selected
Time : Read-Word Fired
Time : Failure Retrieved
Time : Skip-The Selected
Time : Skip-The Fired

Processing “cannibal”:
Time : Find-Next-Word Selected
Time : Find-Next-Word Fired
Time : Module :VISION running command FIND-LOCATION
Time : Attend-Next-Word Selected
Time : Attend-Next-Word Fired
Time : Module :VISION running command MOVE-ATTENTION
Time : Module :VISION running command FOCUS-ON
Time : Read-Word Selected
Time : Read-Word Fired
Time : Cannibal Retrieved
Time : Process-Last-Word-Agent Selected
Time : Process-Last-Word-Agent Fired

Retrieving a Memory: 250 msec

(P retrieve-answer
   =goal>
      ISA comprehend-sentence
      agent =agent
      action =verb
      object =object
      purpose test              ; sentence processing complete
==>
   =goal>
      purpose retrieve-test     ; update state
   +retrieval>
      ISA comprehend-sentence
      action =verb
      purpose study)            ; retrieve sentence involving verb

Time : Retrieve-Answer Selected
Time : Retrieve-Answer Fired
Time : Goal Retrieved

Generating a Response: 410 msec

(P answer-no
   =goal>
      ISA comprehend-sentence
      agent =agent
      action =verb
      object =object
      purpose retrieve-test     ; ready to test
   =retrieval>
      ISA comprehend-sentence
    - agent =agent
      action =verb
    - object =object            ; retrieved sentence does not match agent or object
      purpose study
==>
   =goal>
      purpose done              ; update state
   +manual>
      ISA press-key
      key "d")                  ; indicate no

Time : Answer-No Selected
Time : Answer-No Fired
Time : Module :MOTOR running command PRESS-KEY
Time : Module :MOTOR running command PREPARATION-COMPLETE
Time : Device running command OUTPUT-KEY

Subsymbolic Level
1. Production utilities are responsible for determining which productions get selected when there is a conflict.
2. Production utilities have been considerably simplified in ACT-R 5.0 over ACT-R 4.0.
3. Chunk activations are responsible for determining which chunks (if any) get retrieved and how long it takes to retrieve them.
4. Chunk activations have been simplified in ACT-R 5.0, and a major step has been taken towards the goal of parameter-free predictions by fixing a number of the parameters.

As with the symbolic level, the subsymbolic level is not static but changes in the light of experience. Subsymbolic learning allows the system to adapt to the statistical structure of the environment. The subsymbolic level reflects an analytic characterization of connectionist computations. These computations have been implemented in ACT-RN (Lebiere & Anderson, 1993), but that is not a practical modeling system.

Activation (diagram): a chunk (the addition fact with slots Addend1 Three, Addend2 Four, Sum Seven) receives base-level activation B_i, spreading activation via strengths S_ji from the sources named in the goal (=goal> isa write, relation sum, arg1 Three, arg2 Four), and similarity-based credit Sim_kl against the retrieval request (+retrieval> isa addition-fact, addend1 Three, addend2 Four).

Chunk Activation

A_i = B_i + Σ_j W_j · S_ji + MP · Σ_k Sim_kl + ε
(base activation + source activation × associative strength + mismatch penalty × similarity values + noise)

Activation makes chunks available to the degree that past experiences indicate that they will be useful at the particular moment:
- Base-level: general past usefulness
- Associative activation: relevance to the general context
- Matching penalty: relevance to the specific match required
- Noise: stochasticity is useful to avoid getting stuck in local minima
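The chunk-activation computation just described can be sketched numerically. This is an illustrative sketch, not the ACT-R implementation; the function name, argument layout, and all numeric values are my own assumptions.

```python
import math

def chunk_activation(B, W, S, sims, mp, noise=0.0):
    """A_i = B_i + sum_j W_j*S_ji + MP * sum_k Sim_kl + noise.

    B: base-level activation; W, S: per-source activations and strengths;
    sims: similarity values for the tested slots (0 = perfect match,
    negative = mismatch); mp: mismatch penalty scale (hypothetical values).
    """
    spreading = sum(w * s for w, s in zip(W, S))   # source activation term
    mismatch = mp * sum(sims)                      # partial-matching term
    return B + spreading + mismatch + noise

# A perfectly matching fact with two goal sources contributing activation:
a = chunk_activation(B=1.0, W=[0.5, 0.5], S=[2.0, 2.0], sims=[0.0, 0.0], mp=1.0)
```

A mismatching slot (negative similarity) lowers the total, which is how partial matching penalizes near-miss chunks.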

Activation, Latency and Probability

Retrieval time for a chunk is a negative exponential function of its activation:

   Time_i = F · e^(−A_i)

Probability of retrieval of a chunk follows the Boltzmann (softmax) distribution:

   P_i = e^(A_i / t) / Σ_j e^(A_j / t)

The chunk with the highest activation is retrieved, provided that it reaches the retrieval threshold τ. For purposes of latency and probability, the threshold can be considered as a virtual chunk.
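These latency and probability relations can be sketched directly. A minimal sketch with hypothetical function names and parameter values; the logistic form of the recall probability follows from treating the threshold as a virtual chunk.

```python
import math

def retrieval_time(A, F=1.0):
    # Latency is a negative exponential in activation: Time_i = F * exp(-A_i)
    return F * math.exp(-A)

def retrieval_probabilities(activations, t):
    # Boltzmann (softmax) distribution over competing chunks;
    # t controls the noise level
    exps = [math.exp(a / t) for a in activations]
    total = sum(exps)
    return [e / total for e in exps]

def recall_probability(A, tau, s):
    # Competing against the threshold tau as a "virtual chunk" gives a
    # logistic probability of successful retrieval
    return 1.0 / (1.0 + math.exp(-(A - tau) / s))
```

Higher activation means faster and more probable retrieval; at A = τ, recall probability is exactly 0.5.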

Base-Level Activation

The base-level activation B_i of chunk C_i reflects a context-independent estimation of how likely C_i is to match a production, i.e. B_i is an estimate of the log odds that C_i will be used:

   B_i = ln( P(C_i) / (1 − P(C_i)) )

Two factors determine B_i:
- the frequency of using C_i
- the recency with which C_i was used

Source Activation

The source activations W_j reflect the amount of attention given to the elements (i.e., fillers) of the current goal, entering the activation equation through the spreading term Σ_j W_j · S_ji. ACT-R assumes a fixed capacity for source activation, W = Σ_j W_j, and W reflects an individual-difference parameter.

Associative Strengths

The association strength S_ji between chunks C_j and C_i is a measure of how often C_i was needed (retrieved) when C_j was an element of the goal, i.e. S_ji estimates the log likelihood ratio of C_j being a source of activation if C_i was retrieved:

   S_ji = ln( P(N_i | C_j) / P(N_i) )

In practice ACT-R commonly estimates this as S_ji = S − ln(fan_j), where fan_j is the number of chunks associated with C_j.

Application: Fan Effect
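The fan effect falls out of the associative-strength estimate S_ji = S − ln(fan_j): concepts that appear in more facts spread less activation to each of them, so high-fan facts are retrieved more slowly. A sketch under that standard simplification; the function names and the values of W and S are illustrative assumptions.

```python
import math

def associative_strength(fan, S=2.0):
    # Standard ACT-R simplification: S_ji = S - ln(fan_j), where fan_j is
    # the number of facts the source concept participates in
    return S - math.log(fan)

def fact_activation(B, fans, W=0.5, S=2.0):
    # Spreading activation summed over the fact's source concepts;
    # higher fan -> weaker strengths -> lower total activation
    return B + sum(W * associative_strength(f, S) for f in fans)

low_fan = fact_activation(B=0.0, fans=[1, 1])    # each concept in one fact
high_fan = fact_activation(B=0.0, fans=[3, 3])   # each concept in three facts
# Lower activation for high-fan facts means longer retrieval times:
# the fan effect
```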

Partial Matching

The mismatch penalty is a measure of the amount of control over memory retrieval: MP = 0 gives free association; a very large MP means perfect matching; intermediate values allow some mismatching in search of a memory match. The matching term in the activation equation is MP · Σ_k Sim_kl, summing similarity values between the desired value k specified by the production and the actual value l present in the retrieved chunk. This provides generalization properties similar to those in neural networks; the similarity value is essentially equivalent to the dot product between distributed representations.

Application: Cognitive Arithmetic
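Partial matching in cognitive arithmetic can be sketched as a competition between the correct fact and a similar neighbor. The base activations and similarity value below are assumptions chosen for illustration, not fitted parameters.

```python
def match_activation(base, mp, sims):
    # Mismatching slots contribute mp * Sim_kl; similarities are <= 0,
    # with 0 meaning a perfect slot match
    return base + mp * sum(sims)

# Retrieving 3 + 4: the correct fact (3 + 4 = 7) matches perfectly, while
# the neighbor 3 + 5 = 8 mismatches one addend with similarity -0.5
# (assumed value) but has a higher base activation here
correct = match_activation(base=1.0, mp=1.5, sims=[0.0, 0.0])
confusion = match_activation(base=1.2, mp=1.5, sims=[0.0, -0.5])
```

With MP = 0 (free association) the stronger neighbor wins outright; with an intermediate MP it can still occasionally win once noise is added, producing errors like 3 + 4 = 8 that are observed in cognitive arithmetic.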

Noise
- Noise provides the essential stochasticity of human behavior.
- Noise also provides a powerful way of exploring the world.
- Activation noise is composed of two components: a permanent noise accounting for encoding variability, and a transient noise for moment-to-moment variation.

Application: Paper Rocks Scissors Too little noise makes the system too deterministic. Too much noise makes the system too random. This is not limited to game-playing situations! (Lebiere & West, 1999)

Base-Level Learning

Based on the Rational Analysis of the environment (Schooler & Anderson, 1997):
- Base-level activation reflects the log odds that a chunk will be needed.
- In the environment, the odds that a fact will be needed decay as a power function of how long it has been since it was last used.
- The effects of multiple uses sum in determining the odds of being used.

Base-Level Learning Equation:

   B_i = ln( Σ_{j=1..n} t_j^(−d) ) ≈ ln( n / (1 − d) ) − d · ln(L)

where t_j is the time since the jth use, n is the number of uses, and L is the lifetime of the chunk. Note: the decay parameter d has been set to 0.5 in most ACT-R models.
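The base-level learning equation can be sketched in both its exact and approximate forms. Function names and time units are my own; d defaults to the conventional 0.5.

```python
import math

def base_level(use_times, now, d=0.5):
    # Exact form: B_i = ln( sum_j (now - t_j)^(-d) ), summing power-law
    # decayed traces over every past use of the chunk
    return math.log(sum((now - t) ** (-d) for t in use_times))

def base_level_approx(n, L, d=0.5):
    # Uniform-use approximation: B_i ~= ln(n / (1 - d)) - d * ln(L),
    # for n uses over a lifetime of L
    return math.log(n / (1.0 - d)) - d * math.log(L)
```

Both recency (smaller ages) and frequency (more uses) raise B_i, reproducing the classic practice and forgetting curves.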

Paired Associate: Study

Time 5.000: Find Selected
Time 5.050: Module :VISION running command FIND-LOCATION
Time 5.050: Find Fired
Time 5.050: Attend Selected
Time 5.100: Module :VISION running command MOVE-ATTENTION
Time 5.100: Attend Fired
Time 5.150: Module :VISION running command FOCUS-ON
Time 5.150: Associate Selected
Time 5.200: Associate Fired

(p associate
   =goal>
      isa goal
      arg1 =stimulus            ; attending word during study
      step attending
      state study
   =visual>
      isa text
      value =response           ; visual buffer holds response
      status nil
==>
   =goal>
      isa goal
      arg2 =response            ; store response in goal with stimulus
      step done
   +goal>
      isa goal
      state test
      step waiting)             ; prepare for next trial

Paired Associate: Successful Recall

Time : Find Selected
Time : Module :VISION running command FIND-LOCATION
Time : Find Fired
Time : Attend Selected
Time : Module :VISION running command MOVE-ATTENTION
Time : Attend Fired
Time : Module :VISION running command FOCUS-ON
Time : Read-Stimulus Selected
Time : Read-Stimulus Fired
Time : Goal Retrieved
Time : Recall Selected
Time : Module :MOTOR running command PRESS-KEY
Time : Recall Fired
Time : Module :MOTOR running command PREPARATION-COMPLETE
Time : Device running command OUTPUT-KEY

Paired Associate: Successful Recall (cont.)

(p read-stimulus
   =goal>
      isa goal
      step attending
      state test
   =visual>
      isa text
      value =val
==>
   +retrieval>
      isa goal
      relation associate
      arg1 =val
   =goal>
      isa goal
      relation associate
      arg1 =val
      step testing)

(p recall
   =goal>
      isa goal
      relation associate
      arg1 =val
      step testing
   =retrieval>
      isa goal
      relation associate
      arg1 =val
      arg2 =ans
==>
   +manual>
      isa press-key
      key =ans
   =goal>
      step waiting)

Paired Associate Example

Running (collect-data 10) produces a trial-by-trial table of accuracy and latency for the data and for the model's predictions, along with the correlation and mean deviation for each measure. Note that simulated runs show random fluctuation.

Production Utility

   E = P · G − C, with P = (α + m) / (α + m + β + n)

- P is the expected probability of success
- G is the value of the goal
- C is the expected cost
- t reflects noise in evaluation and is like temperature in the Boltzmann equation
- α is prior successes; m is experienced successes
- β is prior failures; n is experienced failures
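The utility computation and noisy conflict resolution can be sketched as follows. Function names are my own; the Boltzmann form stands in for the noisy evaluation the slide describes, and all parameter values are illustrative.

```python
import math

def expected_success(m, n, alpha=1.0, beta=1.0):
    # P = (alpha + m) / (alpha + m + beta + n): prior successes/failures
    # (alpha, beta) pooled with experienced ones (m, n)
    return (alpha + m) / (alpha + m + beta + n)

def expected_gain(P, G, C):
    # E = P*G - C: probability of success times goal value, minus cost
    return P * G - C

def selection_probabilities(gains, t):
    # Conflict resolution: noisy evaluation approximated by a Boltzmann
    # distribution with temperature t over the competing productions
    exps = [math.exp(g / t) for g in gains]
    total = sum(exps)
    return [e / total for e in exps]
```

With no experience, P starts at the prior α / (α + β); successes shift it up and failures shift it down, so utilities track the statistics of outcomes.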

Building Sticks Task (Lovett)

Building Sticks results (figure): Lovett & Anderson, 1996.

Building Sticks Demo
Web address: ACT-R Home Page → Published ACT-R Models → Atomic Components of Thought, Chapter 4 → Building Sticks Model

Decay of Experience Note: Such temporal weighting is critical in the real world.

Production Compilation: The Basic Idea

(p read-stimulus
   =goal>
      isa goal
      step attending
      state test
   =visual>
      isa text
      value =val
==>
   +retrieval>
      isa goal
      relation associate
      arg1 =val
   =goal>
      relation associate
      arg1 =val
      step testing)

(p recall
   =goal>
      isa goal
      relation associate
      arg1 =val
      step testing
   =retrieval>
      isa goal
      relation associate
      arg1 =val
      arg2 =ans
==>
   +manual>
      isa press-key
      key =ans
   =goal>
      step waiting)

These two compile into a single specialized production:

(p recall-vanilla
   =goal>
      isa goal
      step attending
      state test
   =visual>
      isa text
      value "vanilla"
==>
   +manual>
      isa press-key
      key "7"
   =goal>
      relation associate
      arg1 "vanilla"
      step waiting)

Production Compilation: The Principles
1. Perceptual-motor buffers: avoid compositions that would result in jamming when one tries to build two operations on the same buffer into the same production.
2. Retrieval buffer: except for failure tests, proceduralize out the retrieval and build more specific productions.
3. Goal buffers: complex rules describing merging.
4. Safe productions: a compiled production will not produce any result that the original productions did not produce.
5. Parameter setting:
   Successes = P * *initial-experience*
   Failures = (1 − P) * *initial-experience*
   Efforts = (Successes + Failures) * (C + *cost-penalty*)

Production Compilation: The Successes
1. Taatgen: Learning of inflection (English past tense and German plural). Shows that production compilation can come up with generalizations.
2. Taatgen: Learning of an air-traffic control task. Shows that production compilation can deal with a complex perceptual-motor skill.
3. Anderson: Learning of productions for performing a paired-associate task from instructions. Solves the mystery of where the productions for doing an experiment come from.
4. Anderson: Learning to perform an anti-air warfare coordinator task from instructions. Shows the same as 2 & 3.
5. Anderson: Learning in the fan effect that produces the interaction between fan and practice. Justifies a major simplification in the parameterization of productions: no strength separate from utility.

Note: all of these examples involve all forms of learning in ACT-R occurring simultaneously: acquiring new chunks, acquiring new productions, activation learning, and utility learning.

Predicting fMRI Bold Response from Buffer Activity Example: Retrieval buffer during equation-solving predicts activity in left dorsolateral prefrontal cortex. where D i is the duration of the ith retrieval and t i is the time of initiation of the retrieval.

Structure of fMRI Trial
Load an equation of the form c·x + 3 = a (with a = 18, b = 6, c = 5), followed by a blank period; 1.5-second scans span the 21-second trial.

Solving 5x + 3 = 18

Time 3.000: Find-Right-Term Selected
Time 3.050: Find-Right-Term Fired
Time 3.050: Module :VISION running command FIND-LOCATION
Time 3.050: Attend-Next-Term-Equation Selected
Time 3.100: Attend-Next-Term-Equation Fired
Time 3.100: Module :VISION running command MOVE-ATTENTION
Time 3.150: Module :VISION running command FOCUS-ON
Time 3.150: Encode Selected
Time 3.200: Encode Fired
Time 3.281: 18 Retrieved
Time 3.281: Process-Value-Integer Selected
Time 3.331: Process-Value-Integer Fired
Time 3.331: Module :VISION running command FIND-LOCATION
Time 3.331: Attend-Next-Term-Equation Selected
Time 3.381: Attend-Next-Term-Equation Fired
Time 3.381: Module :VISION running command MOVE-ATTENTION
Time 3.431: Module :VISION running command FOCUS-ON
Time 3.431: Encode Selected
Time 3.481: Encode Fired
Time 3.562: 3 Retrieved
Time 3.562: Process-Op1-Integer Selected
Time 3.612: Process-Op1-Integer Fired
Time 3.612: Module :VISION running command FIND-LOCATION
Time 3.612: Attend-Next-Term-Equation Selected
Time 3.662: Attend-Next-Term-Equation Fired
Time 3.662: Module :VISION running command MOVE-ATTENTION
Time 3.712: Module :VISION running command FOCUS-ON
Time 3.712: Encode Selected
Time 3.762: Encode Fired
Time 4.362: Inverse-of-+ Retrieved
Time 4.362: Process-Operator Selected

Solving 5x + 3 = 18 (cont.)

Time 4.412: Process-Operator Fired
Time 5.012: F318 Retrieved
Time 5.012: Finish-Operation1 Selected
Time 5.062: Finish-Operation1 Fired
Time 5.062: Module :VISION running command FIND-LOCATION
Time 5.062: Attend-Next-Term-Equation Selected
Time 5.112: Attend-Next-Term-Equation Fired
Time 5.112: Module :VISION running command MOVE-ATTENTION
Time 5.162: Module :VISION running command FOCUS-ON
Time 5.162: Encode Selected
Time 5.212: Encode Fired
Time 5.293: 5 Retrieved
Time 5.293: Process-Op2-Integer Selected
Time 5.343: Process-Op2-Integer Fired
Time 5.943: F315 Retrieved
Time 5.943: Finish-Operation2 Selected
Time 5.993: Finish-Operation2 Fired
Time 5.993: Retrieve-Key Selected
Time 6.043: Retrieve-Key Fired
Time 6.124: 3 Retrieved
Time 6.124: Generate-Answer Selected
Time 6.174: Generate-Answer Fired
Time 6.174: Module :MOTOR running command PRESS-KEY
Time 6.424: Module :MOTOR running command PREPARATION-COMPLETE
Time 6.574: Device running command OUTPUT-KEY ("3" 3.574)

Left Dorsolateral Prefrontal Cortex

Figure: percent activation change per scan (1.5 sec each) for cx + 3 = a and 5x + 3 = 18.