An Extended Theory of Human Problem Solving
Pat Langley and Seth Rogers
Computational Learning Laboratory, Center for the Study of Language and Information, Stanford University, Stanford, California USA
Thanks to D. Choi, K. Cummings, N. Nejati, S. Sage, D. Shapiro, and J. Xuan for their contributions. This talk reports research funded by grants from DARPA IPTO and the US National Science Foundation, which are not responsible for its contents.

The Standard Theory of Problem Solving
Traditional theories claim that human problem solving occurs in response to unfamiliar tasks and involves:
- the mental inspection and manipulation of list structures;
- search through a space of states generated by operators;
- backward chaining from goals through means-ends analysis;
- a shift from backward to forward chaining with experience.
These claims characterize problem solving accurately, but this does not mean they are complete.

Further Claims about Problem Solving
We maintain that the standard theory is incomplete and that:
- Human problem solving occurs in a physical context.
- Problem solving abstracts away from physical details, yet must return to them for implementing solutions.
- Problem solving interleaves reasoning with execution.
- Eager execution can lead to situations that require restarts.
- Learning from solutions transforms backward chaining into informed forward execution.
These claims are not entirely new, but they have received little attention in previous computational models.

The ICARUS Architecture
We have embedded these extensions in ICARUS, a cognitive architecture that builds on five principles:
1. Cognition grounded in perception and action
2. Cognitive separation of categories and skills
3. Hierarchical organization of long-term memory
4. Cumulative learning of skill/concept hierarchies
5. Correspondence of long-term/short-term structures
These ideas distinguish ICARUS from other architectures like ACT-R, Soar, and EPIC.

Hierarchical Structure of Long-Term Memory
[Figure: hierarchies of concepts and skills]
ICARUS organizes both concepts and skills in a hierarchical manner. Each concept is defined in terms of other concepts and/or percepts. Each skill is defined in terms of other skills, concepts, and percepts.

ICARUS Memories and Processes
[Architecture diagram showing the memories (long-term conceptual memory, long-term skill memory, short-term conceptual memory, short-term goal/skill memory, perceptual buffer, motor buffer) and the processes (perception, categorization and inference, skill retrieval, skill execution, problem solving, skill learning) that connect them to the environment.]

The Physical Context of Problem Solving
ICARUS is a cognitive architecture for physical, embodied agents. On each successive perception-execution cycle, the architecture:
1. places descriptions of sensed objects in the perceptual buffer;
2. infers instances of concepts implied by the current situation;
3. finds paths through the skill hierarchy from top-level goals;
4. selects one or more applicable skill paths for execution;
5. invokes the actions associated with each selected path.
Problem solving in ICARUS builds upon this basic ability to recognize physical situations and execute skills therein.
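A rough Python sketch of this cycle follows; the env, concepts, skills, and goals objects, and every method called on them, are hypothetical stand-ins for the architecture's memories and modules, not ICARUS's actual interfaces.

```python
# Rough sketch of one perception-execution cycle. All objects and methods here
# are hypothetical stand-ins for the architecture's memories and modules.

def perception_execution_cycle(env, concepts, skills, goals):
    percepts = env.sense()                              # 1. fill the perceptual buffer
    beliefs = concepts.infer(percepts)                  # 2. infer implied concept instances
    paths = [skill.path_for(goal, beliefs)              # 3. find skill paths from top-level goals
             for goal in goals
             for skill in skills.indexed_by(goal)]
    applicable = [p for p in paths
                  if p is not None and p.applicable(beliefs)]  # 4. keep the applicable paths
    for path in applicable:                             # 5. invoke the associated actions
        env.execute(path.actions(beliefs))
```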

Basic ICARUS Processes
ICARUS matches patterns to recognize concepts and select skills. Concepts are matched bottom up, starting from percepts. Skill paths are matched top down, starting from intentions.
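A simplified sketch of these two matching directions appears below; the Concept and Skill classes are illustrative assumptions, not the architecture's representation.

```python
# Hypothetical, simplified representations of concepts and skills, used only to
# illustrate bottom-up concept matching and top-down skill-path selection.

class Concept:
    def __init__(self, name, instances_fn):
        self.name = name
        self.instances = instances_fn      # beliefs -> set of implied concept instances

class Skill:
    def __init__(self, head, subskills=(), applicable_fn=lambda beliefs: True):
        self.head = head
        self.subskills = list(subskills)   # empty for primitive skills
        self.applicable = applicable_fn    # beliefs -> bool

def infer_concepts(percepts, concepts):
    """Bottom-up matching: keep adding implied concept instances until a fixpoint."""
    beliefs = set(percepts)
    changed = True
    while changed:
        changed = False
        for concept in concepts:
            new = concept.instances(beliefs) - beliefs
            if new:
                beliefs |= new
                changed = True
    return beliefs

def skill_path(skill, beliefs):
    """Top-down matching: descend from an intended skill to an applicable primitive."""
    if not skill.applicable(beliefs):
        return None
    if not skill.subskills:                # primitive skill: the path bottoms out here
        return [skill]
    for sub in skill.subskills:            # try subskills in order
        path = skill_path(sub, beliefs)
        if path is not None:
            return [skill] + path
    return None
```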

Abstraction from Physical Details
ICARUS typically pursues problem solving at an abstract level:
- conceptual inference augments perceptions using high-level concepts that provide abstract state descriptions;
- execution operates over high-level durative skills that serve as abstract problem-space operators;
- both inference and execution occur in an automated manner that demands few attentional resources.
However, concepts are always grounded in primitive percepts and skills always terminate in executable actions. ICARUS holds that cognition relies on a symbolic physical system which utilizes mental models of the environment.

Interleaved Problem Solving and Execution
ICARUS includes a module for means-ends problem solving that:
- chains backward off skills that would produce the goal;
- chains backward off concepts if no skills are available;
- creates subgoals based on skill or concept conditions;
- pushes these subgoals onto a goal stack and recurses;
- executes any selected skill as soon as it is applicable.
Embedding execution within problem solving reduces memory load and uses the environment as an external store.

Restarting on Problems
Even when combined with backtracking, eager execution can lead problem solving to unrecoverable states. The ICARUS problem solver handles such untenable situations by:
- detecting when action has made backtracking impossible;
- storing the goal context to avoid repeating the error;
- physically restarting the problem in the initial situation;
- repeating this process until succeeding or giving up.
This strategy produces quite different behavior from the purely mental systematic search assumed by most models.
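One way to picture this restart loop is the sketch below, where solve_once, unrecoverable, and env.reset are assumed helpers rather than parts of ICARUS itself.

```python
# Hypothetical sketch of the restart strategy. solve_once, unrecoverable, and
# env.reset are assumed helpers, not part of the ICARUS architecture.

def solve_with_restarts(env, goal, solve_once, unrecoverable, max_attempts=10):
    failed_contexts = set()                           # goal contexts known to end badly
    for _ in range(max_attempts):
        result = solve_once(env, goal, avoid=failed_contexts)
        if result.solved:
            return result                             # succeeded
        if unrecoverable(env, result):
            failed_contexts.add(result.goal_context)  # remember the error context
            env.reset()                               # physically restart in the initial situation
        else:
            break                                     # recoverable failure: leave it to backtracking
    return None                                       # gave up
```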

Interleaved Problem Solving and Execution
Solve(G):
  Push the goal literal G onto the empty goal stack GS.
  On each cycle,
    If the top goal G of the goal stack GS is satisfied,
      Then pop GS.
    Else if the goal stack GS does not exceed the depth limit,
      Let S be the skill instances whose heads unify with G.
      If any applicable skill paths start from an instance in S,
        Then select one of these paths and execute it.
      Else let M be the set of primitive skill instances that have not already failed in which G is an effect.
        If the set M is nonempty,
          Then select a skill instance Q from M and push the start condition C of Q onto the goal stack GS.
        Else if G is a complex concept with unsatisfied subconcepts H and satisfied subconcepts F,
          Then if there is a subconcept I in H that has not yet failed,
            Then push I onto the goal stack GS.
          Else pop G from the goal stack GS and store information about the failure with G's parent.
        Else pop G from the goal stack GS and store information about the failure with G's parent.
This is traditional means-ends analysis, with three exceptions: (1) conjunctive goals must be defined concepts; (2) backward chaining occurs over both skills and concepts; and (3) selected skills are executed whenever applicable.
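The procedure can be approximated by the Python sketch below; the env, skills, and concepts interfaces are assumptions, and the failure bookkeeping is coarser than the slide's "store with G's parent".

```python
# Approximate rendering of Solve(G). All objects and methods are assumed
# interfaces; failure handling is simplified relative to the pseudocode above.

def solve(goal, env, skills, concepts, depth_limit=20, max_cycles=1000):
    stack = [goal]                      # goal stack GS, current goal G on top
    failed = set()                      # goals already found to fail
    for _ in range(max_cycles):
        if not stack:
            return True                 # top-level goal achieved
        g = stack[-1]
        beliefs = env.current_beliefs()
        if beliefs.satisfies(g):
            stack.pop()                 # G already holds: pop it
            continue
        if len(stack) > depth_limit:    # depth limit exceeded (left open in the slide)
            failed.add(g)
            stack.pop()
            continue
        # (1) Skill chaining: skills whose heads unify with G.
        paths = [s.path_for(g, beliefs) for s in skills if s.achieves(g)]
        applicable = [p for p in paths if p is not None and p.applicable(beliefs)]
        if applicable:
            env.execute(applicable[0])  # execute a selected path as soon as applicable
            continue
        # (2) Otherwise chain backward off a primitive skill with G as an effect.
        candidates = [s for s in skills
                      if s.primitive and s.has_effect(g)
                      and s.start_condition not in failed]
        if candidates:
            stack.append(candidates[0].start_condition)
            continue
        # (3) Otherwise chain backward off G's concept definition.
        unsat = [h for h in concepts.unsatisfied_subconcepts(g, beliefs)
                 if h not in failed]
        if unsat:
            stack.append(unsat[0])
        else:
            failed.add(g)               # no alternatives left: record the failure
            stack.pop()
    return False
```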

Learning from Problem Solutions
ICARUS incorporates a mechanism for learning new skills that:
- operates whenever problem solving overcomes an impasse;
- incorporates only information available from the goal stack;
- generalizes beyond the specific objects concerned;
- depends on whether chaining involved skills or concepts;
- supports cumulative learning and within-problem transfer.
This skill creation process is fully interleaved with means-ends analysis and execution. Learned skills carry out forward execution in the environment rather than backward chaining in the mind.

Execution, Problem Solving, and Learning
[Flowchart relating a Problem, the Skill Hierarchy and Primitive Skills, Skill Execution, an impasse? decision, Problem Solving, Skill Learning, and the Executed plan.]

Constructing Skills from a Trace
[Figure: a problem-solving trace in the blocks world, starting from a state with (ontable A T), (on B A), (on C B), and (hand-empty). The trace chains through goals such as (clear C), (clear B), (clear A), (unstackable C B), (unstackable B A), (holding C), (holding B), and (hand-empty), using the skills (unstack C B), (putdown C T), and (unstack B A).]

Learned Skills in the Blocks World
(clear (?C)
  :percepts ((block ?D) (block ?C))
  :start    (unstackable ?D ?C)
  :skills   ((unstack ?D ?C)))
(clear (?B)
  :percepts ((block ?C) (block ?B))
  :start    [(on ?C ?B) (hand-empty)]
  :skills   ((unstackable ?C ?B) (unstack ?C ?B)))
(unstackable (?C ?B)
  :percepts ((block ?B) (block ?C))
  :start    [(on ?C ?B) (hand-empty)]
  :skills   ((clear ?C) (hand-empty)))
(hand-empty ( )
  :percepts ((block ?D) (table ?T1))
  :start    (putdownable ?D ?T1)
  :skills   ((putdown ?D ?T1)))

Three Questions about Skill Learning
- What is the hierarchical structure of the skill network? The structure is determined by the subproblems that arise in problem solving, which, because operator conditions and goals are single literals, form a semilattice.
- What are the heads of the learned clauses/methods? The head of a learned clause is the goal literal that the planner achieved for the subproblem that produced it.
- What are the conditions on the learned clauses/methods? If the subproblem involved skill chaining, they are the conditions of the first subskill clause. If the subproblem involved concept chaining, they are the subconcepts that held at the outset of the subproblem.
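These rules suggest a small sketch of how a learned clause might be assembled from a solved subproblem; the SubProblem and SkillClause structures below are hypothetical simplifications, not ICARUS's internal representation.

```python
# Hypothetical data structures sketching how a learned clause could be
# assembled from a solved subproblem, following the rules above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SkillClause:
    head: str                 # generalized goal literal the clause achieves
    conditions: List[str]     # start conditions of the clause
    subskills: List[str]      # ordered heads of the subskill clauses

@dataclass
class SubProblem:
    goal: str                 # goal literal achieved for this subproblem
    chaining: str             # "skill" or "concept"
    subskill_clauses: List[SkillClause] = field(default_factory=list)
    satisfied_subconcepts: List[str] = field(default_factory=list)  # held at the outset

def learn_clause(sub: SubProblem) -> SkillClause:
    if sub.chaining == "skill":
        # Skill chaining: conditions come from the first subskill clause.
        conditions = sub.subskill_clauses[0].conditions
    else:
        # Concept chaining: conditions are the subconcepts satisfied at the outset.
        conditions = sub.satisfied_subconcepts
    return SkillClause(head=sub.goal,
                       conditions=conditions,
                       subskills=[c.head for c in sub.subskill_clauses])
```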

Related Theoretical Extensions
We are not the first to propose revisions to the standard theory:
- Zhang and Norman have noted the role of external memory;
- Gunzelmann has modeled interleaved planning and execution;
- Jones and Langley modeled restarts on unsolved problems;
- Soar and Prodigy learn production rules from impasses.
However, ICARUS is the first cognitive architecture that includes these extensions in a unified way.

Directions for Future Research
Many questions about human problem solving still remain open:
- How much do humans abstract away from physical details?
- How often do they return to this setting during their search?
- How tightly do they interleave cognition with execution?
- Under what conditions do they start over on a problem?
- How rapidly do they acquire automatized strategies?
We should address these and related issues in future extensions to the standard theory of problem solving.