
1 Introduction to ACT-R 5.0
Tutorial, 24th Annual Conference of the Cognitive Science Society
ACT-R Home Page: http://act.psy.cmu.edu
Christian Lebiere
Human Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA 15213
cl@cmu.edu

2 Tutorial Overview
1. Introduction
2. Symbolic ACT-R
   Declarative Representation: Chunks
   Procedural Representation: Productions
   ACT-R 5.0 Buffers: A Complete Model for Sentence Memory
3. Chunk Activation in ACT-R
   Activation Calculations
   Spreading Activation: The Fan Effect
   Partial Matching: Cognitive Arithmetic
   Noise: Paper Rocks Scissors
   Base-Level Learning: Paired Associate
4. Production Utility in ACT-R
   Principles and Building Sticks Example
5. Production Compilation
   Principles and Successes
6. Predicting fMRI BOLD Response
   Principles and Algebra Example

3 Motivations for a Cognitive Architecture
1. Philosophy: Provide a unified understanding of the mind.
2. Psychology: Account for experimental data.
3. Education: Provide cognitive models for intelligent tutoring systems and other learning environments.
4. Human Computer Interaction: Evaluate artifacts and help in their design.
5. Computer Generated Forces: Provide cognitive agents to inhabit training environments and games.
6. Neuroscience: Provide a framework for interpreting data from brain imaging.

4 Approach: Integrated Cognitive Models
- Cognitive model = a computational process that thinks/acts like a person
- Integrated cognitive models, e.g. a user model, a driver model, ...

5 Study 1: Dialing Times (figure: model predictions vs. human data; total time to complete dialing)

6 Study 1: Lateral Deviation (figure: model predictions vs. human data; deviation from lane center, RMSE)

7 These Goals for Cognitive Architectures Require
1. Integration, not just of different aspects of higher-level cognition, but of cognition, perception, and action.
2. Systems that run in real time.
3. Robust behavior in the face of error, the unexpected, and the unknown.
4. Parameter-free predictions of behavior.
5. Learning.

8 History of the ACT Framework
Predecessor: HAM (Anderson & Bower, 1973)
Theory versions:
  ACT-E (Anderson, 1976)
  ACT* (Anderson, 1983)
  ACT-R (Anderson, 1993)
  ACT-R 4.0 (Anderson & Lebiere, 1998)
  ACT-R 5.0 (Anderson & Lebiere, 2001)
Implementations:
  GRAPES (Sauers & Farrell, 1982)
  PUPS (Anderson & Thompson, 1989)
  ACT-R 2.0 (Lebiere & Kushmerick, 1993)
  ACT-R 3.0 (Lebiere, 1995)
  ACT-R 4.0 (Lebiere, 1998)
  ACT-R/PM (Byrne, 1998)
  ACT-R 5.0 (Lebiere, 2001)
  Windows Environment (Bothell, 2001)
  Macintosh Environment (Fincham, 2001)

9 ~100 Published Models in ACT-R, 1997-2002
I. Perception & Attention: 1. Psychophysical Judgements 2. Visual Search 3. Eye Movements 4. Psychological Refractory Period 5. Task Switching 6. Subitizing 7. Stroop 8. Driving Behavior 9. Situational Awareness 10. Graphical User Interfaces
II. Learning & Memory: 1. List Memory 2. Fan Effect 3. Implicit Learning 4. Skill Acquisition 5. Cognitive Arithmetic 6. Category Learning 7. Learning by Exploration and Demonstration 8. Updating Memory & Prospective Memory 9. Causal Learning
III. Problem Solving & Decision Making: 1. Tower of Hanoi 2. Choice & Strategy Selection 3. Mathematical Problem Solving 4. Spatial Reasoning 5. Dynamic Systems 6. Use and Design of Artifacts 7. Game Playing 8. Insight and Scientific Discovery
IV. Language Processing: 1. Parsing 2. Analogy & Metaphor 3. Learning 4. Sentence Memory
V. Other: 1. Cognitive Development 2. Individual Differences 3. Emotion 4. Cognitive Workload 5. Computer Generated Forces 6. fMRI 7. Communication, Negotiation, Group Decision Making
See http://act.psy.cmu.edu/papers/ACT-R_Models.htm for the full list.

10 ACT-R 5.0 (architecture diagram, with proposed brain correlates)
- Goal Buffer (DLPFC), served by the Intentional Module (not identified)
- Retrieval Buffer (VLPFC), served by the Declarative Module (Temporal/Hippocampus)
- Visual Buffer (Parietal), served by the Visual Module (Occipital/etc.)
- Manual Buffer (Motor), served by the Manual Module (Motor/Cerebellum)
- Productions (Basal Ganglia): Matching (Striatum), Selection (Pallidum), Execution (Thalamus)
- The visual and manual modules interact with the external environment.

11 ACT-R: Knowledge Representation (diagram): goal buffer, visual buffer, retrieval buffer

12 ACT-R: Assumption Space

13 Chunks: Example
A chunk type defines a name and a set of slots: (CHUNK-TYPE NAME SLOT1 SLOT2 ... SLOTN)
Example chunk: FACT3+4 isa ADDITION-FACT, with ADDEND1 THREE, ADDEND2 FOUR, SUM SEVEN.

14 Chunks: Example

(CLEAR-ALL)

(CHUNK-TYPE addition-fact addend1 addend2 sum)
(CHUNK-TYPE integer value)

(ADD-DM
 (fact3+4 isa addition-fact addend1 three addend2 four sum seven)
 (three isa integer value 3)
 (four isa integer value 4)
 (seven isa integer value 7))

15 Chunks: Example (as a network)
FACT3+4 isa ADDITION-FACT; its ADDEND1 slot points to THREE, ADDEND2 to FOUR, and SUM to SEVEN.
THREE, FOUR, and SEVEN are each isa INTEGER, with VALUE 3, 4, and 7 respectively.

16 Chunks: Exercise I
Fact: The cat sits on the mat.
Encoding:

(Chunk-Type proposition agent action object)

(Add-DM
 (fact007 isa proposition agent cat007 action sits_on object mat))

17 Chunks: Exercise II
Fact: The black cat with 5 legs sits on the mat.
Chunks:

(Chunk-Type proposition agent action object)
(Chunk-Type cat legs color)

(Add-DM
 (fact007 isa proposition agent cat007 action sits_on object mat)
 (cat007 isa cat legs 5 color black))

18 Chunks: Exercise III
Fact: The rich young professor buys a beautiful and expensive city house.
Chunks:

(Chunk-Type proposition agent action object)
(Chunk-Type prof money-status age)
(Chunk-Type house kind price status)

(Add-DM
 (fact008 isa proposition agent prof08 action buys object obj1001)
 (prof08 isa prof money-status rich age young)
 (obj1001 isa house kind city-house price expensive status beautiful))

19 A Production is
1. The greatest idea in cognitive science.
2. The least appreciated construct in cognitive science.
3. A 50-millisecond step of cognition.
4. The source of the serial bottleneck in an otherwise parallel system.
5. A condition-action data structure with "variables".
6. A formal specification of the flow of information from cortex to basal ganglia and back again.

20 Productions
Structure of a production:

  (p name  condition-part  ==>  action-part)

The condition part is a specification of buffer tests, "==>" is the delimiter, and the action part is a specification of buffer transformations.
Key properties: modularity, abstraction, goal/buffer factoring, conditional asymmetry.

21 ACT-R 5.0 Buffers
1. Goal Buffer (=goal, +goal)
   - represents where one is in the task
   - preserves information across production cycles
2. Retrieval Buffer (=retrieval, +retrieval)
   - holds information retrieved from declarative memory
   - seat of activation computations
3. Visual Buffers
   - location (=visual-location, +visual-location)
   - visual objects (=visual, +visual)
   - an attention switch corresponds to a buffer transformation
4. Auditory Buffers (=aural, +aural)
   - analogous to visual
5. Manual Buffers (=manual, +manual)
   - elaborate theory of manual movement, including feature preparation, Fitts's law, and device properties
6. Vocal Buffers (=vocal, +vocal)
   - analogous to manual buffers but less well developed

22 Model for Anderson (1974)
Participants read a story consisting of active and passive sentences. They are then asked to verify either active or passive sentences. All foils are subject-object reversals. The predictions of the ACT-R model are "almost" parameter-free.

Studied form / test form (times in seconds):

                       Active-active  Active-passive  Passive-active  Passive-passive
  Data        Targets:     2.25           2.80            2.30            2.75
              Foils:       2.55           2.95            2.55            2.95
  Predictions Targets:     2.36           2.86            2.36            2.86
              Foils:       2.51           3.01            2.51            3.01

  CORRELATION: 0.978    MEAN DEVIATION: 0.072
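The fit statistics reported on this and later slides are ordinary correlations and mean absolute deviations between model predictions and data. A minimal Common Lisp sketch of those two measures (illustrative, not part of the tutorial model; because the predictions above are shown rounded to two decimals, it reproduces the reported 0.978 and 0.072 only approximately):

  (defun mean (xs) (/ (reduce #'+ xs) (length xs)))

  (defun correlation (xs ys)
    (let* ((mx (mean xs)) (my (mean ys))
           (dx (mapcar (lambda (x) (- x mx)) xs))
           (dy (mapcar (lambda (y) (- y my)) ys)))
      (/ (reduce #'+ (mapcar #'* dx dy))
         (sqrt (* (reduce #'+ (mapcar #'* dx dx))
                  (reduce #'+ (mapcar #'* dy dy)))))))

  (defun mean-deviation (xs ys)
    (mean (mapcar (lambda (x y) (abs (- x y))) xs ys)))

  ;; Targets then foils, in the order active-active, active-passive,
  ;; passive-active, passive-passive (values from the table above):
  (defparameter *data*        '(2.25 2.80 2.30 2.75 2.55 2.95 2.55 2.95))
  (defparameter *predictions* '(2.36 2.86 2.36 2.86 2.51 3.01 2.51 3.01))

  (correlation *data* *predictions*)     ; approximately 0.98
  (mean-deviation *data* *predictions*)  ; approximately 0.07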

23 250 msec in the Life of ACT-R: Reading the Word "The"

Identifying left-most location
  Time 63.900: Find-Next-Word Selected
  Time 63.950: Find-Next-Word Fired
  Time 63.950: Module :VISION running command FIND-LOCATION
Attending to word
  Time 63.950: Attend-Next-Word Selected
  Time 64.000: Attend-Next-Word Fired
  Time 64.000: Module :VISION running command MOVE-ATTENTION
  Time 64.050: Module :VISION running command FOCUS-ON
Encoding word
  Time 64.050: Read-Word Selected
  Time 64.100: Read-Word Fired
  Time 64.100: Failure Retrieved
Skipping "The"
  Time 64.100: Skip-The Selected
  Time 64.150: Skip-The Fired

24 Attending to a Word in Two Productions

(P find-next-word
   =goal>
      ISA         comprehend-sentence
      word        nil               ; no word currently being processed
==>
   +visual-location>
      ISA         visual-location
      screen-x    lowest
      attended    nil               ; find left-most unattended location
   =goal>
      word        looking           ; update state
)

(P attend-next-word
   =goal>
      ISA         comprehend-sentence
      word        looking           ; looking for a word
   =visual-location>
      ISA         visual-location   ; a visual location has been identified
==>
   =goal>
      word        attending         ; update state
   +visual>
      ISA         visual-object
      screen-pos  =visual-location  ; attend to the object in that location
)

25 Processing "The" in Two Productions

(P read-word
   =goal>
      ISA      comprehend-sentence
      word     attending          ; attending to a word
   =visual>
      ISA      text
      value    =word
      status   nil                ; the word has been identified
==>
   =goal>
      word     =word              ; hold the word in the goal buffer
   +retrieval>
      ISA      meaning
      word     =word              ; retrieve the word's meaning
)

(P skip-the
   =goal>
      ISA      comprehend-sentence
      word     "the"              ; the word is "the"
==>
   =goal>
      word     nil                ; set to process the next word
)

26 Processing "missionary" in 450 msec

Identifying left-most unattended location
  Time 64.150: Find-Next-Word Selected
  Time 64.200: Find-Next-Word Fired
  Time 64.200: Module :VISION running command FIND-LOCATION
Attending to word
  Time 64.200: Attend-Next-Word Selected
  Time 64.250: Attend-Next-Word Fired
  Time 64.250: Module :VISION running command MOVE-ATTENTION
  Time 64.300: Module :VISION running command FOCUS-ON
Encoding word
  Time 64.300: Read-Word Selected
  Time 64.350: Read-Word Fired
  Time 64.550: Missionary Retrieved
Processing the first noun
  Time 64.550: Process-First-Noun Selected
  Time 64.600: Process-First-Noun Fired

27 Processing the Word "missionary"

Retrieved chunk:
  Missionary 0.000
     isa  MEANING
     word "missionary"

(P process-first-noun
   =goal>
      ISA      comprehend-sentence
      agent    nil
      action   nil                ; neither agent nor action has been assigned
      word     =y
   =retrieval>
      ISA      meaning
      word     =y                 ; the word's meaning has been retrieved
==>
   =goal>
      agent    =retrieval         ; assign the meaning to agent
      word     nil                ; and set to process the next word
)

28 Three More Words in the Life of ACT-R: 950 msec

Processing "was"
  Time 64.600: Find-Next-Word Selected
  Time 64.650: Find-Next-Word Fired
  Time 64.650: Module :VISION running command FIND-LOCATION
  Time 64.650: Attend-Next-Word Selected
  Time 64.700: Attend-Next-Word Fired
  Time 64.700: Module :VISION running command MOVE-ATTENTION
  Time 64.750: Module :VISION running command FOCUS-ON
  Time 64.750: Read-Word Selected
  Time 64.800: Read-Word Fired
  Time 64.800: Failure Retrieved
  Time 64.800: Skip-Was Selected
  Time 64.850: Skip-Was Fired

Processing "feared"
  Time 64.850: Find-Next-Word Selected
  Time 64.900: Find-Next-Word Fired
  Time 64.900: Module :VISION running command FIND-LOCATION
  Time 64.900: Attend-Next-Word Selected
  Time 64.950: Attend-Next-Word Fired
  Time 64.950: Module :VISION running command MOVE-ATTENTION
  Time 65.000: Module :VISION running command FOCUS-ON
  Time 65.000: Read-Word Selected
  Time 65.050: Read-Word Fired
  Time 65.250: Fear Retrieved
  Time 65.250: Process-Verb Selected
  Time 65.300: Process-Verb Fired

Processing "by"
  Time 65.300: Find-Next-Word Selected
  Time 65.350: Find-Next-Word Fired
  Time 65.350: Module :VISION running command FIND-LOCATION
  Time 65.350: Attend-Next-Word Selected
  Time 65.400: Attend-Next-Word Fired
  Time 65.400: Module :VISION running command MOVE-ATTENTION
  Time 65.450: Module :VISION running command FOCUS-ON
  Time 65.450: Read-Word Selected
  Time 65.500: Read-Word Fired
  Time 65.500: Failure Retrieved
  Time 65.500: Skip-By Selected
  Time 65.550: Skip-By Fired

29 Reinterpreting the Passive

(P skip-by
   =goal>
      ISA      comprehend-sentence
      word     "by"
      agent    =per
==>
   =goal>
      word     nil
      object   =per      ; what was provisionally read as the agent is really the object
      agent    nil
)

30 Two More Words in the Life of ACT-R: 700 msec

Processing "the"
  Time 65.550: Find-Next-Word Selected
  Time 65.600: Find-Next-Word Fired
  Time 65.600: Module :VISION running command FIND-LOCATION
  Time 65.600: Attend-Next-Word Selected
  Time 65.650: Attend-Next-Word Fired
  Time 65.650: Module :VISION running command MOVE-ATTENTION
  Time 65.700: Module :VISION running command FOCUS-ON
  Time 65.700: Read-Word Selected
  Time 65.750: Read-Word Fired
  Time 65.750: Failure Retrieved
  Time 65.750: Skip-The Selected
  Time 65.800: Skip-The Fired

Processing "cannibal"
  Time 65.800: Find-Next-Word Selected
  Time 65.850: Find-Next-Word Fired
  Time 65.850: Module :VISION running command FIND-LOCATION
  Time 65.850: Attend-Next-Word Selected
  Time 65.900: Attend-Next-Word Fired
  Time 65.900: Module :VISION running command MOVE-ATTENTION
  Time 65.950: Module :VISION running command FOCUS-ON
  Time 65.950: Read-Word Selected
  Time 66.000: Read-Word Fired
  Time 66.200: Cannibal Retrieved
  Time 66.200: Process-Last-Word-Agent Selected
  Time 66.250: Process-Last-Word-Agent Fired

31 Retrieving a Memory: 250 msec

(P retrieve-answer
   =goal>
      ISA      comprehend-sentence
      agent    =agent
      action   =verb
      object   =object
      purpose  test               ; sentence processing is complete
==>
   =goal>
      purpose  retrieve-test      ; update state
   +retrieval>
      ISA      comprehend-sentence
      action   =verb              ; retrieve a sentence involving the verb
      purpose  study
)

Time 66.250: Retrieve-Answer Selected
Time 66.300: Retrieve-Answer Fired
Time 66.500: Goal123032 Retrieved

32 Generating a Response: 410 ms

(P answer-no
   =goal>
      ISA      comprehend-sentence
      agent    =agent
      action   =verb
      object   =object
      purpose  retrieve-test      ; ready to test
   =retrieval>
      ISA      comprehend-sentence
    - agent    =agent
      action   =verb
    - object   =object            ; retrieved sentence does not match agent or object
      purpose  study
==>
   =goal>
      purpose  done               ; update state
   +manual>
      ISA      press-key
      key      "d"                ; indicate "no"
)

Time 66.500: Answer-No Selected
Time 66.700: Answer-No Fired
Time 66.700: Module :MOTOR running command PRESS-KEY
Time 66.850: Module :MOTOR running command PREPARATION-COMPLETE
Time 66.910: Device running command OUTPUT-KEY

33 Subsymbolic Level
1. Production utilities are responsible for determining which production gets selected when there is a conflict.
2. Production utilities have been considerably simplified in ACT-R 5.0 relative to ACT-R 4.0.
3. Chunk activations are responsible for determining which chunks (if any) get retrieved and how long it takes to retrieve them.
4. Chunk activations have been simplified in ACT-R 5.0, and a major step has been taken toward the goal of parameter-free predictions by fixing a number of the parameters.
As with the symbolic level, the subsymbolic level is not static but changes in the light of experience. Subsymbolic learning allows the system to adapt to the statistical structure of the environment. The subsymbolic level reflects an analytic characterization of connectionist computations. These computations were implemented in ACT-RN (Lebiere & Anderson, 1993), but that is not a practical modeling system.

34 Activation (diagram)
Chunk i is the addition fact (Addend1 Three, Addend2 Four, Sum Seven). Its activation combines three kinds of quantities:
- B_i: the chunk's base-level activation
- S_ji: associative strengths from the source chunks (Three, Four) held in the goal buffer conditions, e.g.
    =goal> isa write relation sum arg1 Three arg2 Four
- Sim_kl: similarities to the values requested in the retrieval actions, e.g.
    +retrieval> isa addition-fact addend1 Three addend2 Four

35 Chunk Activation

  A_i = B_i + \sum_j W_j S_{ji} + \sum_k MP \cdot Sim_{kl} + \epsilon

  activation = base activation + (source activation * associative strength) + (mismatch penalty * similarity value) + noise

Activation makes chunks available to the degree that past experiences indicate that they will be useful at the particular moment:
- Base-level: general past usefulness
- Associative activation: relevance to the general context
- Matching penalty: relevance to the specific match required
- Noise: stochasticity is useful to avoid getting stuck in local minima
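For concreteness, a minimal Common Lisp sketch of how the four terms combine (illustrative only, not ACT-R source code; similarities are taken to be 0 for a perfect match and negative otherwise):

  (defun chunk-activation (base sources-with-strengths similarities
                           &key (mp 1.0) (noise 0.0))
    "BASE is B_i; SOURCES-WITH-STRENGTHS is a list of (W_j . S_ji) pairs;
  SIMILARITIES is a list of Sim_kl values for the slots of the retrieval request."
    (+ base
       (reduce #'+ (mapcar (lambda (ws) (* (car ws) (cdr ws)))
                           sources-with-strengths))
       (* mp (reduce #'+ similarities :initial-value 0.0))
       noise))

  ;; Example: two goal sources with W_j = 0.5, strengths 2.0 and 1.6,
  ;; one slightly mismatched slot (similarity -0.1), base level 0.3:
  (chunk-activation 0.3 '((0.5 . 2.0) (0.5 . 1.6)) '(-0.1))
  ;; => 0.3 + 1.8 - 0.1 + 0.0 = 2.0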

36 Activation, Latency and Probability
- Retrieval time for a chunk is a negative exponential function of its activation.
- Probability of retrieval of a chunk follows the Boltzmann (softmax) distribution.
- The chunk with the highest activation is retrieved, provided that it reaches the retrieval threshold τ.
- For purposes of latency and probability, the threshold can be treated as a virtual chunk.
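The equations on this slide are images and do not survive in the transcript. For reference, the standard ACT-R forms (F is the latency factor, τ the retrieval threshold, s the activation noise parameter, t the Boltzmann temperature) are:

  \text{Time}_i = F e^{-A_i}

  P(\text{chunk } i \text{ is retrieved at all}) = \frac{1}{1 + e^{-(A_i - \tau)/s}}

  P(\text{chunk } i \text{ wins the competition}) = \frac{e^{A_i/t}}{\sum_j e^{A_j/t}}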

37 Base-Level Activation
The base-level activation B_i of chunk C_i reflects a context-independent estimate of how likely C_i is to match a production, i.e. B_i is an estimate of the log odds that C_i will be used:

  A_i = B_i   (base activation alone)

  B_i = \ln \frac{P(C_i)}{1 - P(C_i)}

Two factors determine B_i: the frequency with which C_i is used, and the recency with which C_i was used.

38 Source Activation
The source activations W_j reflect the amount of attention given to the elements (i.e., fillers) of the current goal. ACT-R assumes a fixed capacity for source activation, W = \sum_j W_j; W reflects an individual-difference parameter.
The corresponding term in the activation equation is the sum over sources of (source activation * associative strength), \sum_j W_j S_{ji}.

39 Associative Strengths
The association strength S_ji between chunks C_j and C_i is a measure of how often C_i was needed (retrieved) when C_j was an element of the goal; that is, S_ji estimates the log likelihood ratio of C_j being a source of activation if C_i was retrieved:

  S_{ji} = \ln \frac{P(N_i \mid C_j)}{P(N_i)}

which is commonly simplified to S_{ji} = S - \ln(fan_j), where fan_j is the number of facts associated with C_j.
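A minimal sketch of that fan-effect simplification, in which P(N_i | C_j) is taken to be 1/fan_j so that the strength a source spreads drops logarithmically with its fan (S and the example values are illustrative, not the tutorial's):

  (defun assoc-strength (fan &key (s 2.0))
    "Approximate S_ji = S - ln(fan_j): the more facts a source concept
  participates in, the less activation it spreads to each one."
    (- s (log fan)))

  (assoc-strength 1)   ; => 2.0
  (assoc-strength 3)   ; => 2.0 - ln 3, approximately 0.90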

40 Application: Fan Effect

41 Partial Matching
- The mismatch penalty MP is a measure of the amount of control over memory retrieval: MP = 0 is free association; MP very large means perfect matching; intermediate values allow some mismatching in the search for a memory match.
- Similarity values Sim_kl are defined between the desired value k specified by the production and the actual value l present in the retrieved chunk.
- This provides generalization properties similar to those in neural networks; the similarity value is essentially equivalent to the dot product between distributed representations.
- The corresponding term in the activation equation is the sum of (mismatch penalty * similarity value), \sum_k MP \cdot Sim_{kl}.
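As an illustration (the similarity function below is invented for this sketch, not taken from the tutorial), numbers that are close can be given similarities near zero and distant numbers more negative values, so that near-miss arithmetic facts are occasionally retrieved:

  (defun number-similarity (a b)
    "Toy similarity: 0 for identical values, increasingly negative with distance."
    (if (= a b) 0.0 (- (* 0.1 (abs (- a b))))))

  (defun mismatch-term (requested actual &key (mp 1.5))
    "Sum MP * Sim_kl over the requested slot values."
    (* mp (reduce #'+ (mapcar #'number-similarity requested actual))))

  ;; Retrieval request addend1=3 addend2=4 matched against the chunk 3+4=7:
  (mismatch-term '(3 4) '(3 4))  ; => 0.0 (perfect match, no penalty)
  ;; ... and against the neighboring fact 3+5=8:
  (mismatch-term '(3 4) '(3 5))  ; => 1.5 * -0.1 = -0.15 (activation lowered)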

42 Application: Cognitive Arithmetic

43 Noise
- Noise provides the essential stochasticity of human behavior.
- Noise also provides a powerful way of exploring the world.
- Activation noise (the final term of the activation equation) is composed of two components: a permanent noise accounting for encoding variability, and a transient noise for moment-to-moment variation.
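ACT-R's activation noise is conventionally drawn from a logistic distribution with scale parameter s. A minimal sampling sketch (function names and the default s are illustrative):

  (defun logistic-noise (s)
    "Sample from a zero-mean logistic distribution with scale S
  (variance = pi^2 * s^2 / 3), the conventional ACT-R noise distribution."
    (let ((u (random 1.0)))
      ;; guard against u = 0, which would make the log undefined
      (when (zerop u) (setf u least-positive-single-float))
      (* s (log (/ u (- 1.0 u))))))

  ;; Total activation noise: a permanent component (fixed per chunk)
  ;; plus a transient component (resampled on every retrieval attempt).
  (defun activation-noise (permanent &key (s 0.25))
    (+ permanent (logistic-noise s)))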

44 Application: Paper Rocks Scissors
Too little noise makes the system too deterministic. Too much noise makes the system too random. This is not limited to game-playing situations! (Lebiere & West, 1999)

45 Base-Level Learning
Based on the rational analysis of the environment (Schooler & Anderson, 1997):
- Base-level activation reflects the log odds that a chunk will be needed.
- In the environment, the odds that a fact will be needed decay as a power function of how long it has been since it was last used.
- The effects of multiple uses sum in determining the odds of being used.

Base-level learning equation:

  B_i = \ln \left( \sum_{j=1}^{n} t_j^{-d} \right) \approx \ln \frac{n}{1-d} - d \ln L

where t_j is the time since the jth use, n is the number of uses, and L is the lifetime of the chunk.
Note: the decay parameter d has been set to 0.5 in most ACT-R models.
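A small Common Lisp sketch of both forms, the exact sum and the approximation (times in seconds, d = 0.5; the example values are illustrative):

  (defun base-level (use-times now &key (d 0.5))
    "Exact base-level activation: B = ln( sum over uses of (now - t_j)^(-d) )."
    (log (reduce #'+ (mapcar (lambda (tj) (expt (- now tj) (- d)))
                             use-times))))

  (defun base-level-approx (n lifetime &key (d 0.5))
    "Optimized approximation: B = ln(n / (1 - d)) - d * ln(L)."
    (- (log (/ n (- 1 d))) (* d (log lifetime))))

  ;; A chunk used at t = 10, 60, and 110 seconds, evaluated at t = 120:
  (base-level '(10 60 110) 120)   ; => ln(110^-.5 + 60^-.5 + 10^-.5), approx. -0.61
  (base-level-approx 3 110)       ; => ln(3/0.5) - 0.5*ln(110), approx. -0.56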

46 Paired Associate: Study

Time 5.000: Find Selected
Time 5.050: Module :VISION running command FIND-LOCATION
Time 5.050: Find Fired
Time 5.050: Attend Selected
Time 5.100: Module :VISION running command MOVE-ATTENTION
Time 5.100: Attend Fired
Time 5.150: Module :VISION running command FOCUS-ON
Time 5.150: Associate Selected
Time 5.200: Associate Fired

(p associate
   =goal>
      isa      goal
      arg1     =stimulus
      step     attending          ; attending to the word during study
      state    study
   =visual>
      isa      text
      value    =response
      status   nil                ; the visual buffer holds the response
==>
   =goal>
      isa      goal
      arg2     =response          ; store the response in the goal with the stimulus
      step     done
   +goal>
      isa      goal
      state    test
      step     waiting)           ; prepare for the next trial

47 Paired Associate: Successful Recall

Time 10.000: Find Selected
Time 10.050: Module :VISION running command FIND-LOCATION
Time 10.050: Find Fired
Time 10.050: Attend Selected
Time 10.100: Module :VISION running command MOVE-ATTENTION
Time 10.100: Attend Fired
Time 10.150: Module :VISION running command FOCUS-ON
Time 10.150: Read-Stimulus Selected
Time 10.200: Read-Stimulus Fired
Time 10.462: Goal Retrieved
Time 10.462: Recall Selected
Time 10.512: Module :MOTOR running command PRESS-KEY
Time 10.512: Recall Fired
Time 10.762: Module :MOTOR running command PREPARATION-COMPLETE
Time 10.912: Device running command OUTPUT-KEY

48 Paired Associate: Successful Recall (cont.)

(p read-stimulus
   =goal>
      isa       goal
      step      attending
      state     test
   =visual>
      isa       text
      value     =val
==>
   +retrieval>
      isa       goal
      relation  associate
      arg1      =val
   =goal>
      isa       goal
      relation  associate
      arg1      =val
      step      testing)

(p recall
   =goal>
      isa       goal
      relation  associate
      arg1      =val
      step      testing
   =retrieval>
      isa       goal
      relation  associate
      arg1      =val
      arg2      =ans
==>
   +manual>
      isa       press-key
      key       =ans
   =goal>
      step      waiting)

49 Paired Associate Example

  Data                           Predictions
  Trial  Accuracy  Latency       Trial  Accuracy  Latency
    1     .000      0.000          1     .000      0.000
    2     .526      2.156          2     .515      2.102
    3     .667      1.967          3     .570      1.730
    4     .798      1.762          4     .740      1.623
    5     .887      1.680          5     .850      1.584
    6     .924      1.552          6     .865      1.508
    7     .958      1.467          7     .895      1.552
    8     .954      1.402          8     .930      1.462

Note: simulated runs show random fluctuation.

  ? (collect-data 10)
  ACCURACY (0.0 0.515 0.570 0.740 0.850 0.865 0.895 0.930)
  CORRELATION: 0.996
  MEAN DEVIATION: 0.053
  LATENCY (0 2.102 1.730 1.623 1.589 1.508 1.552 1.462)
  CORRELATION: 0.988
  MEAN DEVIATION: 0.112
  NIL

50 Production Utility

  U = P G - C,  with  P = Successes / (Successes + Failures)
  Successes = α + m,  Failures = β + n
  P(select production i) = e^{U_i/t} / \sum_j e^{U_j/t}

where:
- P is the expected probability of success
- G is the value of the goal
- C is the expected cost
- t reflects noise in evaluation and is like temperature in the Boltzmann equation
- α is the prior successes, m is the experienced successes
- β is the prior failures, n is the experienced failures
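A small sketch of these relations, assuming the standard ACT-R 5.0 utility equations (function names, priors, and the temperature default are illustrative, not the tutorial's):

  (defun expected-p (m n &key (alpha 1.0) (beta 1.0))
    "Expected probability of success from priors (alpha, beta) and
  experienced successes M and failures N."
    (/ (+ alpha m) (+ alpha m beta n)))

  (defun utility (p g c)
    "U = P*G - C."
    (- (* p g) c))

  (defun selection-probabilities (utilities &key (temp 1.0))
    "Boltzmann/softmax choice among conflicting productions."
    (let* ((exps (mapcar (lambda (u) (exp (/ u temp))) utilities))
           (z (reduce #'+ exps)))
      (mapcar (lambda (e) (/ e z)) exps)))

  ;; Two competing productions with G = 20:
  (let ((u1 (utility (expected-p 8 2) 20 4))    ; mostly successful, costlier
        (u2 (utility (expected-p 3 7) 20 1)))   ; mostly failing, cheaper
    (selection-probabilities (list u1 u2) :temp 2.0))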

51 Building Sticks Task (Lovett)

52 Building Sticks Task data (figure; Lovett & Anderson, 1996)

53 Building Sticks Demo
Web address: ACT-R Home Page > Published ACT-R Models > Atomic Components of Thought, Chapter 4 > Building Sticks Model

54

55 Decay of Experience
Note: such temporal weighting is critical in the real world.
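The figure for this slide does not survive in the transcript; the point is that the success and failure counts feeding the utility equations can be recency-weighted rather than raw tallies. A hedged sketch of that idea, mirroring the base-level decay (the exact form shown on the slide is assumed, and d is a placeholder):

  (defun decayed-count (event-times now &key (d 0.5))
    "Weight each past success or failure by how long ago it occurred:
  count = sum over events of (now - t_j)^(-d)."
    (reduce #'+ (mapcar (lambda (tj) (expt (- now tj) (- d))) event-times)))

  ;; Recent experience counts for more than old experience:
  (decayed-count '(10 20 30) 40)   ; old events,    approx. 0.72
  (decayed-count '(37 38 39) 40)   ; recent events, approx. 2.28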

56 Production Compilation: The Basic Idea

(p read-stimulus
   =goal>
      isa       goal
      step      attending
      state     test
   =visual>
      isa       text
      value     =val
==>
   +retrieval>
      isa       goal
      relation  associate
      arg1      =val
   =goal>
      relation  associate
      arg1      =val
      step      testing)

(p recall
   =goal>
      isa       goal
      relation  associate
      arg1      =val
      step      testing
   =retrieval>
      isa       goal
      relation  associate
      arg1      =val
      arg2      =ans
==>
   +manual>
      isa       press-key
      key       =ans
   =goal>
      step      waiting)

These two productions, together with the specific chunk that was retrieved between them, compile into one:

(p recall-vanilla
   =goal>
      isa       goal
      step      attending
      state     test
   =visual>
      isa       text
      value     "vanilla"
==>
   +manual>
      isa       press-key
      key       "7"
   =goal>
      relation  associate
      arg1      "vanilla"
      step      waiting)

57 Production Compilation: The Principles
1. Perceptual-motor buffers: avoid compositions that would result in jamming when one tries to build two operations on the same buffer into the same production.
2. Retrieval buffer: except for failure tests, proceduralize out the retrieval and build more specific productions.
3. Goal buffers: complex rules describing merging.
4. Safe productions: a compiled production will not produce any result that the original productions did not produce.
5. Parameter setting for the new production:
   Successes = P * *initial-experience*
   Failures = (1 - P) * *initial-experience*
   Efforts = (Successes + Failures) * (C + *cost-penalty*)
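A minimal sketch of the parameter-setting rule in item 5, assuming P and C are the parent productions' probability and cost and that the Efforts line sums over successes and failures; *initial-experience* and *cost-penalty* are architectural parameters, and the values used here are placeholders, not the tutorial's:

  (defparameter *initial-experience* 10.0)  ; placeholder value
  (defparameter *cost-penalty* 1.0)         ; placeholder value

  (defun initial-parameters (p c)
    "Return (successes failures efforts) for a newly compiled production."
    (let* ((successes (* p *initial-experience*))
           (failures  (* (- 1.0 p) *initial-experience*))
           (efforts   (* (+ successes failures) (+ c *cost-penalty*))))
      (list successes failures efforts)))

  (initial-parameters 0.8 0.05)   ; => (8.0 2.0 10.5)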

58 Production Compilation: The Successes
1. Taatgen: learning of inflection (English past tense and German plural). Shows that production compilation can come up with generalizations.
2. Taatgen: learning of an air-traffic control task. Shows that production compilation can deal with a complex perceptual-motor skill.
3. Anderson: learning of productions for performing the paired associate task from instructions. Solves the mystery of where the productions for doing an experiment come from.
4. Anderson: learning to perform an anti-air warfare coordinator task from instructions. Shows the same as 2 & 3.
5. Anderson: learning in the fan effect that produces the interaction between fan and practice. Justifies a major simplification in the parameterization of productions: no strength separate from utility.
Note: all of these examples involve all forms of learning in ACT-R occurring simultaneously: acquiring new chunks, acquiring new productions, activation learning, and utility learning.

59 Predicting fMRI BOLD Response from Buffer Activity
Example: activity of the retrieval buffer during equation solving predicts activity in left dorsolateral prefrontal cortex. The predicted response is computed from D_i, the duration of the ith retrieval, and t_i, the time of initiation of that retrieval.
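The equation itself is an image on the original slide and does not survive in this transcript. As an assumption about its content, the form used in related ACT-R fMRI work sums a gamma-shaped hemodynamic response over the periods in which the buffer is active:

  B(t) = M \sum_i \int_{t_i}^{t_i + D_i} H(t - x)\, dx, \qquad H(\tau) = \left(\frac{\tau}{s}\right)^{a} e^{-\tau/s}

where M scales the magnitude and s and a set the time scale and shape of the hemodynamic response.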

60 Structure of an fMRI Trial (figure)
An equation of the form c x + 3 = a is presented (with a = 18, b = 6, c = 5); the trial consists of a load-equation period followed by a blank period, 21 seconds in total, collected in 1.5-second scans.

61 Solving 5x + 3 = 18

Time 3.000: Find-Right-Term Selected
Time 3.050: Find-Right-Term Fired
Time 3.050: Module :VISION running command FIND-LOCATION
Time 3.050: Attend-Next-Term-Equation Selected
Time 3.100: Attend-Next-Term-Equation Fired
Time 3.100: Module :VISION running command MOVE-ATTENTION
Time 3.150: Module :VISION running command FOCUS-ON
Time 3.150: Encode Selected
Time 3.200: Encode Fired
Time 3.281: 18 Retrieved
Time 3.281: Process-Value-Integer Selected
Time 3.331: Process-Value-Integer Fired
Time 3.331: Module :VISION running command FIND-LOCATION
Time 3.331: Attend-Next-Term-Equation Selected
Time 3.381: Attend-Next-Term-Equation Fired
Time 3.381: Module :VISION running command MOVE-ATTENTION
Time 3.431: Module :VISION running command FOCUS-ON
Time 3.431: Encode Selected
Time 3.481: Encode Fired
Time 3.562: 3 Retrieved
Time 3.562: Process-Op1-Integer Selected
Time 3.612: Process-Op1-Integer Fired
Time 3.612: Module :VISION running command FIND-LOCATION
Time 3.612: Attend-Next-Term-Equation Selected
Time 3.662: Attend-Next-Term-Equation Fired
Time 3.662: Module :VISION running command MOVE-ATTENTION
Time 3.712: Module :VISION running command FOCUS-ON
Time 3.712: Encode Selected
Time 3.762: Encode Fired
Time 4.362: Inverse-of-+ Retrieved
Time 4.362: Process-Operator Selected

62 Solving 5x + 3 = 18 (cont.)

Time 4.412: Process-Operator Fired
Time 5.012: F318 Retrieved
Time 5.012: Finish-Operation1 Selected
Time 5.062: Finish-Operation1 Fired
Time 5.062: Module :VISION running command FIND-LOCATION
Time 5.062: Attend-Next-Term-Equation Selected
Time 5.112: Attend-Next-Term-Equation Fired
Time 5.112: Module :VISION running command MOVE-ATTENTION
Time 5.162: Module :VISION running command FOCUS-ON
Time 5.162: Encode Selected
Time 5.212: Encode Fired
Time 5.293: 5 Retrieved
Time 5.293: Process-Op2-Integer Selected
Time 5.343: Process-Op2-Integer Fired
Time 5.943: F315 Retrieved
Time 5.943: Finish-Operation2 Selected
Time 5.993: Finish-Operation2 Fired
Time 5.993: Retrieve-Key Selected
Time 6.043: Retrieve-Key Fired
Time 6.124: 3 Retrieved
Time 6.124: Generate-Answer Selected
Time 6.174: Generate-Answer Fired
Time 6.174: Module :MOTOR running command PRESS-KEY
Time 6.424: Module :MOTOR running command PREPARATION-COMPLETE
Time 6.574: Device running command OUTPUT-KEY ("3" 3.574)

63 Left Dorsolateral Prefrontal Cortex

64 (Figure) Percent activation change by scan (1.5 sec each), scans 9 through 14, for the conditions c x + 3 = a and 5x + 3 = 18.

