Presentation transcript:

Extending the ICARUS Cognitive Architecture

Pat Langley
School of Computing and Informatics
Arizona State University, Tempe, Arizona USA

Thanks to D. Choi, T. Konik, U. Kuter, D. Nau, S. Ohlsson, S. Rogers, and D. Shapiro for their many contributions. This talk reports research partly funded by grants from DARPA IPTO, which is not responsible for its contents.

The ICARUS Architecture

ICARUS is a theory of the human cognitive architecture that posits:

1. Short-term memories are distinct from long-term stores
2. Memories contain modular elements cast as symbolic structures
3. Long-term structures are accessed through pattern matching
4. Cognition occurs in retrieval/selection/action cycles
5. Learning involves monotonic addition of elements to memory
6. Learning is incremental and interleaved with performance

It shares these assumptions with other cognitive architectures like Soar (Laird et al., 1987) and ACT-R (Anderson, 1993).
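The cycle in assumption 4 can be made concrete in code. Below is a minimal sketch in Python of one retrieval/selection/action cycle; every name here (sense, match_concepts, select_skill, act, learn) is a hypothetical placeholder for illustration, not part of ICARUS itself.

from typing import Callable, List, Optional

def cognitive_cycle(sense: Callable[[], List[str]],
                    match_concepts: Callable[[List[str]], List[str]],
                    select_skill: Callable[[List[str]], Optional[str]],
                    act: Callable[[str], None],
                    learn: Callable[[List[str], Optional[str]], None]) -> None:
    """One retrieval/selection/action cycle (assumption 4 above)."""
    percepts = sense()                   # perceive the environment
    beliefs = match_concepts(percepts)   # retrieve long-term structures by pattern matching
    skill = select_skill(beliefs)        # select a skill relevant to current goals
    if skill is not None:
        act(skill)                       # execute its action in the environment
    learn(beliefs, skill)                # learning interleaved with performance (assumption 6)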

Distinctive Features of ICARUS

However, ICARUS also makes assumptions that distinguish it from these architectures:

1. Cognition is grounded in perception and action
2. Categories and skills are separate cognitive entities
3. Short-term elements are instances of long-term structures
4. Inference and execution are more basic than problem solving
5. Skill hierarchies are learned in a cumulative manner

Some of these tenets also appear in Bonasso et al.'s (2003) 3T, Freed's APEX, and Sun et al.'s (2001) CLARION.

Cascaded Integration in ICARUS

Like other unified cognitive architectures, ICARUS incorporates a number of distinct modules. It adopts a cascaded approach to integration in which lower-level modules produce results for higher-level ones:

conceptual inference → skill execution → problem solving → learning

ICARUS Memories and Processes

[Architecture diagram showing the memories and processes and their connections: Long-Term Conceptual Memory, Short-Term Belief Memory, Short-Term Goal Memory, and Long-Term Skill Memory; a Perceptual Buffer and Motor Buffer linking the agent to the Environment; and the processes of Conceptual Inference, Skill Retrieval and Selection, Skill Execution, Problem Solving, and Skill Learning.]

Hierarchical Structure of Memory

ICARUS interleaves its long-term memories for concepts and skills:

Each concept is defined in terms of other concepts and/or percepts.
Each skill is defined in terms of other skills, concepts, and percepts.

[Diagram: an interleaved hierarchy of concepts and skills grounded in percepts.]
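This organization can be pictured with simple data structures. The following is a minimal sketch, in Python, of how such interleaved definitions might be represented; the class and field names are assumptions for illustration, not ICARUS's actual notation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Concept:
    head: str
    subconcepts: List["Concept"] = field(default_factory=list)  # other concepts...
    percepts: List[str] = field(default_factory=list)           # ...and/or percepts

@dataclass
class Skill:
    head: str
    subskills: List["Skill"] = field(default_factory=list)      # other skills,
    conditions: List[Concept] = field(default_factory=list)     # concepts,
    percepts: List[str] = field(default_factory=list)           # and percepts

# A leaf concept defined only in terms of percepts, and a skill that
# references it as a condition:
in_lane = Concept(head="in-lane", percepts=["(self ?self)", "(line ?lane)"])
keep_lane = Skill(head="keep-lane", conditions=[in_lane])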

Basic ICARUS Processes

ICARUS matches patterns to recognize concepts and select skills:

Concepts are matched bottom up, starting from percepts.
Skill paths are matched top down, starting from goals.

[Diagram: the concept hierarchy is matched upward from percepts, while paths through the skill hierarchy are matched downward from goals.]
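To illustrate the bottom-up direction, here is a minimal sketch in Python of inference over a concept hierarchy: a concept is recognized once all of its ingredients (percepts or lower-level concepts) hold. The propositional simplification and the example names are assumptions; real ICARUS matches relational patterns with variables.

from typing import Dict, List, Set

def infer_beliefs(percepts: Set[str], concepts: Dict[str, List[str]]) -> Set[str]:
    """Repeatedly add any concept whose ingredients are all satisfied."""
    beliefs = set(percepts)
    changed = True
    while changed:
        changed = False
        for head, parts in concepts.items():
            if head not in beliefs and all(p in beliefs for p in parts):
                beliefs.add(head)        # concept recognized bottom up
                changed = True
    return beliefs

# Higher-level concepts build on lower-level ones:
concepts = {
    "in-lane": ["self-detected", "lane-detected"],
    "driving-well": ["in-lane", "aligned-with-lane"],
}
print(infer_beliefs({"self-detected", "lane-detected", "aligned-with-lane"}, concepts))
# The result includes both 'in-lane' and 'driving-well'.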

ICARUS Interleaves Execution and Problem Solving

[Flowchart: a problem enters reactive execution, which runs over the skill hierarchy and its primitive skills; if no impasse arises, this yields an executed plan; if an impasse occurs, control passes to problem solving.]

This organization reflects the psychological distinction between automatized and controlled behavior.

Means-Ends Problem Solving in ICARUS

Previous versions of ICARUS have used means-ends analysis, which has been observed repeatedly in humans, but which differs from standard variants in that it interleaves backward chaining with execution.

Solve(G):
  Push the goal literal G onto the empty goal stack GS.
  On each cycle,
    If the top goal G of the goal stack GS is satisfied,
      Then pop GS.
    Else if the goal stack GS does not exceed the depth limit,
      Let S be the skill instances whose heads unify with G.
      If any applicable skill paths start from an instance in S,
        Then select one of these paths and execute it.
      Else let M be the set of primitive skill instances that have
           not already failed in which G is an effect.
        If the set M is nonempty,
          Then select a skill instance Q from M and push the start
               condition C of Q onto the goal stack GS.
        Else if G is a complex concept with unsatisfied subconcepts H
             and satisfied subconcepts F,
          Then if there is a subconcept I in H that has not yet failed,
            Then push I onto the goal stack GS.
            Else pop G from the goal stack GS and store information
                 about failure with G's parent.
        Else pop G from the goal stack GS and store information
             about failure with G's parent.
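The Solve(G) loop can be rendered as running code. Below is a minimal, propositional sketch in Python under strong simplifying assumptions (no variables, no skill hierarchy, no concept chaining, no delete effects); it illustrates the goal-stack logic, and is not the ICARUS implementation.

from typing import List, Set, Tuple

Operator = Tuple[str, List[str], List[str]]  # (name, preconditions, effects)

def means_ends(goal: str, state: Set[str], ops: List[Operator],
               depth_limit: int = 20) -> List[str]:
    """Interleave backward chaining with execution, as in Solve(G)."""
    stack: List[str] = [goal]
    plan: List[str] = []
    while stack:
        if len(stack) > depth_limit:
            raise RuntimeError("goal stack exceeded the depth limit")
        g = stack[-1]
        if g in state:                    # top goal satisfied: pop it
            stack.pop()
            continue
        achievers = [op for op in ops if g in op[2]]
        if not achievers:
            raise ValueError(f"no skill has {g!r} as an effect")
        name, pre, effects = achievers[0]
        unmet = [p for p in pre if p not in state]
        if not unmet:                     # applicable: execute in the environment
            state |= set(effects)
            plan.append(name)
            stack.pop()
        else:                             # chain backward through a start condition
            stack.append(unmet[0])
    return plan

# Blocks-world fragment echoing the trace slide at the end of this talk:
ops = [
    ("unstack C from B", ["clear C", "hand-empty"], ["holding C", "clear B"]),
    ("unstack B from A", ["clear B", "hand-empty"], ["holding B", "clear A"]),
]
print(means_ends("clear A", {"clear C", "hand-empty"}, ops))
# -> ['unstack C from B', 'unstack B from A']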

Learning from Problem Solutions

ICARUS incorporates a mechanism for learning new skills that:

operates whenever problem solving overcomes an impasse;
incorporates only information available from the goal stack;
generalizes beyond the specific objects concerned;
depends on whether chaining involved skills or concepts;
supports cumulative learning and within-problem transfer.

This skill creation process is fully interleaved with means-ends analysis and execution. Learned skills carry out forward execution in the environment rather than backward chaining in the mind.

Forward Search and Mental Simulation

However, in some domains, humans carry out forward-chaining search with methods like progressive deepening (de Groot, 1978). In response, we are adding a new module to ICARUS that:

performs mental simulation of a single trajectory consistent with its stored hierarchical skills;
repeats this process to find a number of alternative paths from the current environmental state;
selects the path that produces the best outcome to determine the next primitive skill to execute.

We refer to this memory-limited search method as hierarchical iterative sampling (Langley, 1992).
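The sampling idea can be sketched compactly: draw a handful of bounded-depth rollouts, score each trajectory's final state, and commit to the first action of the best one. In the Python sketch below, the action generator, simulator, and value function are hypothetical stand-ins for ICARUS's hierarchical skills and conceptual evaluation.

import random
from typing import Callable, List, Optional, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")

def iterative_sampling(state: State,
                       actions: Callable[[State], List[Action]],
                       simulate: Callable[[State, Action], State],
                       value: Callable[[State], float],
                       depth: int = 5,
                       samples: int = 10) -> Optional[Action]:
    """Choose the next action by memory-limited lookahead over sampled paths."""
    best_action: Optional[Action] = None
    best_value = float("-inf")
    for _ in range(samples):
        s, first = state, None
        for _ in range(depth):             # mentally simulate one trajectory
            options = actions(s)
            if not options:
                break
            a = random.choice(options)     # sample rather than search exhaustively
            if first is None:
                first = a                  # remember this path's first step
            s = simulate(s, a)             # simulation only; no real execution
        if first is not None and value(s) > best_value:
            best_action, best_value = first, value(s)
    return best_action                     # the primitive skill to execute next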

A Trace of Iterative Sampling

Challenges in Lookahead Search

To support such mental simulation in ICARUS, we must first:

extend its representation to associate beliefs with states;
use expected values to guide selection of desirable paths.

This should be easy for some domains, but the architecture must also:

reason about environments that change on their own;
operate in settings that involve other goal-directed agents.

We want a single mechanism that will let ICARUS handle all of these situations.

More on Mental Simulation

We must address other issues to make this idea operational:

determine the depth of lookahead and the number of samples;
ensure reasonable diversity among the sampled paths;
explain when problem solvers chain backward and when they chain forward;
use the results of forward search to drive skill acquisition.

Answering these questions will let ICARUS provide a more complete theory of human problem solving.

Learning from Undesirable Outcomes

Despite their best efforts, humans sometimes take actions that produce undesired results. We plan to model learning from such outcomes in ICARUS by:

identifying conditions on the path that, if violated, would avoid the result;
carrying out search to find another path that violates those conditions;
analyzing the alternative path to learn skills that produce it;
storing the new skills so that they mask older, problematic ones.

Learning from such counterfactual reasoning is an important human ability.

Plans for Evaluation

We propose to evaluate these extensions to ICARUS on two different testbeds:

a simulated urban driving environment that contains other vehicles and pedestrians;
a mobile robot that carries out joint activities with humans to achieve shared goals.

Both dynamic environments should illustrate the benefits of mental simulation and counterfactual learning.

Concluding Remarks

ICARUS is a unified theory of the cognitive architecture that:

includes hierarchical memories for concepts and skills;
interleaves conceptual inference with reactive execution;
resorts to problem solving when it lacks routine skills;
learns such skills from successful resolution of impasses.

However, we plan to extend the framework so it can support:

forward-chaining search via repeated mental simulation;
learning new skills through counterfactual reasoning.

These extensions will let ICARUS more fully model human cognition.

End of Presentation

ICARUS Concepts for In-City Driving

((in-rightmost-lane ?self ?clane)
 :percepts ((self ?self) (segment ?seg)
            (line ?clane segment ?seg))
 :relations ((driving-well-in-segment ?self ?seg ?clane)
             (last-lane ?clane)
             (not (lane-to-right ?clane ?anylane))))

((driving-well-in-segment ?self ?seg ?lane)
 :percepts ((self ?self) (segment ?seg) (line ?lane segment ?seg))
 :relations ((in-segment ?self ?seg) (in-lane ?self ?lane)
             (aligned-with-lane-in-segment ?self ?seg ?lane)
             (centered-in-lane ?self ?seg ?lane)
             (steering-wheel-straight ?self)))

((in-lane ?self ?lane)
 :percepts ((self ?self segment ?seg) (line ?lane segment ?seg dist ?dist))
 :tests ((> ?dist -10) (<= ?dist 0)))

Representing Short-Term Beliefs/Goals

(current-street me A)              (current-segment me g550)
(lane-to-right g599 g601)          (first-lane g599)
(last-lane g599)                   (last-lane g601)
(at-speed-for-u-turn me)           (slow-for-right-turn me)
(steering-wheel-not-straight me)   (centered-in-lane me g550 g599)
(in-lane me g599)                  (in-segment me g550)
(on-right-side-in-segment me)      (intersection-behind g550 g522)
(building-on-left g288)            (building-on-left g425)
(building-on-left g427)            (building-on-left g429)
(building-on-left g431)            (building-on-left g433)
(building-on-right g287)           (building-on-right g279)
(increasing-direction me)          (buildings-on-right g287 g279)

ICARUS Skills for In-City Driving

((in-rightmost-lane ?self ?line)
 :percepts ((self ?self) (line ?line))
 :start ((last-lane ?line))
 :subgoals ((driving-well-in-segment ?self ?seg ?line)))

((driving-well-in-segment ?self ?seg ?line)
 :percepts ((segment ?seg) (line ?line) (self ?self))
 :start ((steering-wheel-straight ?self))
 :subgoals ((in-segment ?self ?seg)
            (centered-in-lane ?self ?seg ?line)
            (aligned-with-lane-in-segment ?self ?seg ?line)
            (steering-wheel-straight ?self)))

((in-segment ?self ?endsg)
 :percepts ((self ?self speed ?speed) (intersection ?int cross ?cross)
            (segment ?endsg street ?cross angle ?angle))
 :start ((in-intersection-for-right-turn ?self ?int))
 :actions ((steer 1)))

A Successful Means-Ends Trace

[Figure: a goal-stack trace in the blocks world. Initial state: (ontable A T), (on B A), (on C B), (hand-empty). To reach the goal (clear A), the solver chains backward through (unstack B A), whose precondition (clear B) it achieves by executing (unstack C B) and (putdown C T), restoring (hand-empty) before (unstack B A) yields (clear A).]