Varieties of Problem Solving in a Unified Cognitive Architecture
Pat Langley, School of Computing and Informatics, Arizona State University, Tempe, Arizona USA

Varieties of Problem Solving in a Unified Cognitive Architecture
Pat Langley, School of Computing and Informatics, Arizona State University, Tempe, Arizona USA
Thanks to D. Choi, T. Konik, U. Kuter, D. Nau, S. Rogers, and D. Shapiro for their many contributions. This talk reports research funded by grants from DARPA IPTO, which is not responsible for its contents.

The ICARUS Architecture
ICARUS is a theory of the human cognitive architecture that posits:
1. Short-term memories are distinct from long-term stores
2. Memories contain modular elements cast as symbolic structures
3. Long-term structures are accessed through pattern matching
4. Cognition occurs in retrieval/selection/action cycles
5. Learning involves monotonic addition of elements to memory
6. Learning is incremental and interleaved with performance
It shares these assumptions with other cognitive architectures like Soar (Laird et al., 1987) and ACT-R (Anderson, 1993).

Distinctive Features of ICARUS
However, ICARUS also makes assumptions that distinguish it from these architectures:
1. Cognition is grounded in perception and action
2. Categories and skills are separate cognitive entities
3. Short-term elements are instances of long-term structures
4. Inference and execution are more basic than problem solving
5. Skill/concept hierarchies are learned in a cumulative manner
Some of these tenets also appear in Bonasso et al.'s (2003) 3T, Freed's APEX, and Sun et al.'s (2001) CLARION.

Cascaded Integration in ICARUS
Like other unified cognitive architectures, ICARUS incorporates a number of distinct modules: conceptual inference, skill execution, problem solving, and learning.
ICARUS adopts a cascaded approach to integration in which lower-level modules produce results for higher-level ones.

ICARUS Memories and Processes
[Architecture diagram showing ICARUS's memories (Long-Term Conceptual Memory, Long-Term Skill Memory, Short-Term Belief Memory, Short-Term Goal Memory, Perceptual Buffer, Motor Buffer) and the processes that connect them to the Environment: Perception, Conceptual Inference, Skill Retrieval and Selection, Problem Solving, Skill Learning, and Skill Execution.]

Hierarchical Structure of Memory
ICARUS interleaves its long-term memories for concepts and skills.
- Each concept is defined in terms of other concepts and/or percepts.
- Each skill is defined in terms of other skills, concepts, and percepts.
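To make this hierarchical organization concrete, here is a minimal sketch, not taken from the slides, of how interleaved concept and skill hierarchies might be represented; the class and field names (Concept, Skill, subconcepts, subskills) are illustrative assumptions rather than ICARUS's actual implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Concept:
    """A conceptual category defined over percepts and/or lower-level concepts."""
    name: str
    percepts: List[str] = field(default_factory=list)           # primitive perceptual tests
    subconcepts: List["Concept"] = field(default_factory=list)  # lower-level concepts it builds on

@dataclass
class Skill:
    """A skill defined over lower-level skills, concepts, and percepts."""
    name: str
    start: List[Concept] = field(default_factory=list)      # concepts that must hold to begin
    subskills: List["Skill"] = field(default_factory=list)  # decomposition into lower-level skills
    actions: List[str] = field(default_factory=list)        # primitive actions, if the skill is primitive

# Example: a nonprimitive skill refers to a concept and to a lower-level skill.
clear = Concept("clear", percepts=["block ?x", "nothing on ?x"])
unstack = Skill("unstack", start=[clear], actions=["grasp", "lift"])
clear_block = Skill("clear-block", subskills=[unstack])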

Basic ICARUS Processes
ICARUS matches patterns to recognize concepts and select skills.
- Concepts are matched bottom up, starting from percepts.
- Skill paths are matched top down, starting from intentions.
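As a rough illustration of the bottom-up direction, here is a minimal sketch, assuming a simple propositional encoding, of inferring beliefs from percepts by repeatedly matching concept definitions until no new instances fire; this deliberately simplifies ICARUS's relational pattern matcher, and the names used are hypothetical.

# Minimal bottom-up concept inference: a concept is recognized when all of its
# defining elements (percepts or lower-level concepts) are already believed.
def infer_beliefs(percepts, concept_defs):
    """percepts: set of strings; concept_defs: dict mapping name -> set of required elements."""
    beliefs = set(percepts)
    changed = True
    while changed:                      # iterate until a fixpoint is reached
        changed = False
        for name, required in concept_defs.items():
            if name not in beliefs and required <= beliefs:
                beliefs.add(name)       # recognize the higher-level concept
                changed = True
    return beliefs - set(percepts)      # return only the inferred beliefs

# Example: a two-level hierarchy grounded in percepts.
defs = {
    "clear-C": {"block-C", "nothing-on-C"},
    "unstackable-C-B": {"clear-C", "on-C-B", "hand-empty"},
}
print(infer_beliefs({"block-C", "nothing-on-C", "on-C-B", "hand-empty"}, defs))
# inferred beliefs: clear-C and unstackable-C-B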

ICARUS Interleaves Execution and Problem Solving
[Diagram: a problem is first handled by Reactive Execution over the Skill Hierarchy and its Primitive Skills; if an impasse arises, control passes to Problem Solving, whose result extends the executed plan.]
This organization reflects the psychological distinction between automatized and controlled behavior.
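The following sketch, again an illustrative assumption rather than the architecture's actual code, shows the shape of this impasse-driven control cycle: execute a stored skill when one applies, and fall back on the problem solver only when none does.

from collections import namedtuple

# Illustrative skill record: the goal it achieves and the beliefs needed to start it.
Skill = namedtuple("Skill", ["name", "achieves", "start"])

def cognitive_cycle(goal, beliefs, skills, problem_solver, execute):
    """One cycle: execute an applicable stored skill, or fall back on problem solving."""
    applicable = [s for s in skills if s.achieves == goal and s.start <= beliefs]
    if applicable:                      # automatized behavior: run the stored skill
        execute(applicable[0])
    else:                               # impasse: no relevant skill, so deliberate
        problem_solver(goal, beliefs)

# Example usage with stub procedures.
skills = [Skill("unstack-C-B", achieves="clear-B", start={"on-C-B", "hand-empty"})]
cognitive_cycle("clear-B", {"on-C-B", "hand-empty"}, skills,
                problem_solver=lambda g, b: print("solve", g),
                execute=lambda s: print("execute", s.name))
# prints: execute unstack-C-B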

Means-Ends Problem Solving in ICARUS

Solve(G):
  Push the goal literal G onto the empty goal stack GS.
  On each cycle:
    If the top goal G of the goal stack GS is satisfied,
      then pop GS.
    Else, if the goal stack GS does not exceed the depth limit,
      let S be the skill instances whose heads unify with G.
      If any applicable skill paths start from an instance in S,
        then select one of these paths and execute it.
      Else, let M be the set of primitive skill instances that have not already failed and in which G is an effect.
        If the set M is nonempty,
          then select a skill instance Q from M and push the start condition C of Q onto the goal stack GS.
        Else, if G is a complex concept with unsatisfied subconcepts H and satisfied subconcepts F,
          then, if there is a subconcept I in H that has not yet failed,
            push I onto the goal stack GS;
          else pop G from the goal stack GS and store information about the failure with G's parent.
        Else, pop G from the goal stack GS and store information about the failure with G's parent.

Previous versions of ICARUS have used means-ends analysis, which has been observed repeatedly in humans, but ICARUS's variant differs from most in that it interleaves backward chaining with execution.
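To clarify the control structure, here is a heavily simplified sketch in Python of means-ends analysis interleaved with execution; the operator format (start condition, effects, deletes), skill names, and blocks-world facts are assumptions for illustration, and the depth limit behavior and failure bookkeeping of the full procedure are only approximated.

# Simplified means-ends loop: execute a skill when its start condition holds,
# otherwise chain backward by pushing unmet start conditions as subgoals.
def means_ends(goal, state, skills, depth_limit=10):
    stack = [goal]
    while stack:
        g = stack[-1]
        if g in state:                              # top goal already satisfied
            stack.pop()
            continue
        if len(stack) > depth_limit:
            return False                            # give up rather than search deeper
        options = [s for s in skills if g in s["effects"]]
        if not options:
            return False                            # no skill achieves this goal
        skill = options[0]
        if skill["start"] <= state:                 # applicable: execute it in the world
            state |= skill["effects"]
            state -= skill["deletes"]
        else:                                       # not applicable: chain backward
            stack.extend(c for c in skill["start"] if c not in state)
    return True

# Blocks-world example: clear block A by unstacking C, putting it down, then unstacking B.
skills = [
    {"name": "unstack-B-A", "start": {"clear-B", "hand-empty"},
     "effects": {"clear-A", "holding-B"}, "deletes": {"hand-empty", "on-B-A"}},
    {"name": "unstack-C-B", "start": {"clear-C", "hand-empty"},
     "effects": {"clear-B", "holding-C"}, "deletes": {"hand-empty", "on-C-B"}},
    {"name": "putdown-C", "start": {"holding-C"},
     "effects": {"hand-empty", "ontable-C", "clear-C"}, "deletes": {"holding-C"}},
]
state = {"ontable-A", "on-B-A", "on-C-B", "clear-C", "hand-empty"}
print(means_ends("clear-A", state, skills))   # prints: True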

A Successful Means-Ends Trace
[Goal-stack trace for a blocks-world problem with initial state (ontable A T), (on B A), (on C B), (hand-empty). The trace chains through (clear C), (unst. C B), (unstack C B), (holding C), (putdown C T), (hand-empty), (clear B), (unst. B A), (unstack B A), (clear A), and (holding B) on the way to the goal configuration.]

Problem Solving as Iterative Sampling
However, in some domains humans carry out forward-chaining search with methods like progressive deepening (de Groot, 1978). In response, we have added a new module to ICARUS that:
- performs mental simulation of a single trajectory consistent with its stored hierarchical skills;
- repeats this process to find a number of alternative paths from the same initial state;
- selects the path that produces the best outcome to determine the next primitive skill to execute.
We refer to this memory-limited search method as hierarchical iterative sampling (Langley, 1992).
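A minimal sketch of the idea, under the assumption that applicable skills can be sampled as actions and that a numeric score rates each resulting state; the sampling policy, scoring function, and parameter names here are illustrative, not those of the actual module.

import random

# Hierarchical iterative sampling (illustrative): simulate several random
# trajectories forward from the current state, score where each one ends,
# and commit only to the first action of the best trajectory.
def iterative_sampling(state, applicable, simulate, score, n_paths=5, depth=4):
    best_first_action, best_score = None, float("-inf")
    for _ in range(n_paths):                      # one mental simulation per iteration
        s, first_action = state, None
        for _step in range(depth):                # bounded-depth lookahead
            actions = applicable(s)
            if not actions:
                break
            a = random.choice(actions)            # stochastic path selection
            if first_action is None:
                first_action = a
            s = simulate(s, a)
        if score(s) > best_score:
            best_score, best_first_action = score(s), first_action
    return best_first_action                      # next primitive skill to execute

# Toy usage: states are integers, actions add or subtract one, higher is better.
pick = iterative_sampling(
    0,
    applicable=lambda s: [+1, -1],
    simulate=lambda s, a: s + a,
    score=lambda s: s,
)
print(pick)   # usually +1, since upward trajectories score higher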

More on Iterative Sampling
Our initial version of forward search makes a few psychologically implausible assumptions:
- stochastic path selection and final choices are based on reachability heuristics from the AI planning literature;
- parameters determine the depth of search and number of iterations, rather than memory capacity and time available;
- no progressive deepening occurs when two alternatives produce similar scores.
Nevertheless, it seems a promising first step toward modeling heuristic search in domains like chess.

Unifying Forward and Backward Search
A key question concerns when humans carry out means-ends analysis vs. forward search; some candidate hypotheses are:
- they use backward chaining except when the branching factor from the goal becomes too large, as in most games;
- they favor backward chaining when goals are very specific and forward search for less constrained goals;
- they prefer backward chaining but fall back on forward search when they retrieve no relevant skills (Jones & Langley, 2006).
We need detailed psychological studies to select among these alternatives or replace them with better ones. Once the question is answered, we can incorporate the results into ICARUS to offer a unified theory of human problem solving.

Contributions of the Research
ICARUS is a unified theory of the cognitive architecture that:
- includes hierarchical memories for concepts and skills;
- interleaves conceptual inference with reactive execution;
- resorts to problem solving when it lacks relevant skills;
- carries out both means-ends analysis and forward search.
The latter two methods each account for some aspects of human problem solving, but not for when to invoke each method. Explaining this choice should be a high priority for future work.
For more information about the ICARUS architecture, see:

End of Presentation

ICARUS Concepts for In-City Driving

((in-rightmost-lane ?self ?clane)
 :percepts  ((self ?self) (segment ?seg)
             (line ?clane segment ?seg))
 :relations ((driving-well-in-segment ?self ?seg ?clane)
             (last-lane ?clane)
             (not (lane-to-right ?clane ?anylane))))

((driving-well-in-segment ?self ?seg ?lane)
 :percepts  ((self ?self) (segment ?seg) (line ?lane segment ?seg))
 :relations ((in-segment ?self ?seg)
             (in-lane ?self ?lane)
             (aligned-with-lane-in-segment ?self ?seg ?lane)
             (centered-in-lane ?self ?seg ?lane)
             (steering-wheel-straight ?self)))

((in-lane ?self ?lane)
 :percepts ((self ?self segment ?seg)
            (line ?lane segment ?seg dist ?dist))
 :tests    ((> ?dist -10)
            (<= ?dist 0)))

Representing Short-Term Beliefs/Goals

(current-street me A)              (current-segment me g550)
(lane-to-right g599 g601)          (first-lane g599)
(last-lane g599)                   (last-lane g601)
(at-speed-for-u-turn me)           (slow-for-right-turn me)
(steering-wheel-not-straight me)   (centered-in-lane me g550 g599)
(in-lane me g599)                  (in-segment me g550)
(on-right-side-in-segment me)      (intersection-behind g550 g522)
(building-on-left g288)            (building-on-left g425)
(building-on-left g427)            (building-on-left g429)
(building-on-left g431)            (building-on-left g433)
(building-on-right g287)           (building-on-right g279)
(increasing-direction me)          (buildings-on-right g287 g279)

ICARUS Skills for In-City Driving

((in-rightmost-lane ?self ?line)
 :percepts ((self ?self) (line ?line))
 :start    ((last-lane ?line))
 :subgoals ((driving-well-in-segment ?self ?seg ?line)))

((driving-well-in-segment ?self ?seg ?line)
 :percepts ((segment ?seg) (line ?line) (self ?self))
 :start    ((steering-wheel-straight ?self))
 :subgoals ((in-segment ?self ?seg)
            (centered-in-lane ?self ?seg ?line)
            (aligned-with-lane-in-segment ?self ?seg ?line)
            (steering-wheel-straight ?self)))

((in-segment ?self ?endsg)
 :percepts ((self ?self speed ?speed)
            (intersection ?int cross ?cross)
            (segment ?endsg street ?cross angle ?angle))
 :start    ((in-intersection-for-right-turn ?self ?int))
 :actions  ((steer 1)))

Directions for Future Research
Future work on ICARUS should incorporate other ideas about:
- progressive deepening in forward-chaining search
- the graded nature of categories and category learning
- the model-based character of human reasoning
- the persistent but limited nature of short-term memories
- creating perceptual chunks to reduce these limitations
- storing and retrieving episodic memory traces
These additions will further increase ICARUS's debt to psychology.
For more details, see: