A Unified Cognitive Architecture for Embodied Agents

Pat Langley
School of Computing and Informatics, Arizona State University, Tempe, Arizona USA

Thanks to D. Choi, T. Konik, N. Li, D. Shapiro, and D. Stracuzzi for their contributions. This talk reports research partly funded by grants from DARPA IPTO, which is not responsible for its contents.

Cognitive Architectures

A cognitive architecture (Newell, 1990) is the infrastructure for an intelligent system that is constant across domains:
- the memories that store domain-specific content
- the system's representation and organization of knowledge
- the mechanisms that use this knowledge in performance
- the processes that learn this knowledge from experience

An architecture typically comes with a programming language that eases construction of knowledge-based systems. Research in this area incorporates many ideas from psychology about the nature of human thinking.

The ICARUS Architecture

ICARUS (Langley, 2006) is a computational theory of the human cognitive architecture that posits:
1. Short-term memories are distinct from long-term stores
2. Memories contain modular elements cast as symbolic structures
3. Long-term structures are accessed through pattern matching
4. Cognition occurs in retrieval/selection/action cycles
5. Learning involves monotonic addition of elements to memory
6. Learning is incremental and interleaved with performance

It shares these assumptions with other cognitive architectures like Soar (Laird et al., 1987) and ACT-R (Anderson, 1993).

Distinctive Features of ICARUS

However, ICARUS also makes assumptions that distinguish it from these architectures:
1. Cognition is grounded in perception and action
2. Categories and skills are separate cognitive entities
3. Short-term elements are instances of long-term structures
4. Inference and execution are more basic than problem solving
5. Skill/concept hierarchies are learned in a cumulative manner

Some of these tenets also appear in Bonasso et al.'s (2003) 3T, Freed's (1998) APEX, and Sun et al.'s (2001) CLARION.

Cascaded Integration in ICARUS

Like other unified cognitive architectures, ICARUS incorporates a number of distinct modules. It adopts a cascaded approach to integration in which lower-level modules produce results for higher-level ones:

    conceptual inference -> skill execution -> problem solving -> learning
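
To make the cascade concrete, here is a minimal Python sketch of one cycle, assuming hypothetical functions infer, execute, solve, and learn that stand in for the four modules; the real architecture defines these over its own memories and representations, so this is an illustration rather than ICARUS code.

    from typing import Callable

    def cognitive_cycle(percepts, concepts, skills, goals,
                        infer: Callable, execute: Callable,
                        solve: Callable, learn: Callable):
        """One cascaded cycle: each module consumes results of the one below it."""
        beliefs = infer(percepts, concepts)                 # conceptual inference
        action, impasse = execute(goals, beliefs, skills)   # skill execution
        if impasse is not None:                             # no applicable skill path
            trace = solve(impasse, beliefs, skills)         # problem solving
            skills = learn(trace, skills)                   # learning from the trace
        return action, skills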

Goals for ICARUS

Our main objectives in developing ICARUS are to produce a computational theory of higher-level cognition in humans:
- that is qualitatively consistent with results from psychology
- that exhibits as many distinct cognitive functions as possible

Although quantitative fits to specific results are desirable, they can distract from achieving broad theoretical coverage.

An ICARUS Agent for Urban Driving

Consider driving a vehicle in a city, which requires:
- selecting routes
- obeying traffic lights
- avoiding collisions
- being polite to others
- finding addresses
- staying in the lane
- parking safely
- stopping for pedestrians
- following other vehicles
- delivering packages

These tasks range from low-level execution to high-level reasoning.

ICARUS Concepts for In-City Driving

((in-rightmost-lane ?self ?clane)
 :percepts  ((self ?self) (segment ?seg) (line ?clane segment ?seg))
 :relations ((driving-well-in-segment ?self ?seg ?clane)
             (last-lane ?clane)
             (not (lane-to-right ?clane ?anylane))))

((driving-well-in-segment ?self ?seg ?lane)
 :percepts  ((self ?self) (segment ?seg) (line ?lane segment ?seg))
 :relations ((in-segment ?self ?seg)
             (in-lane ?self ?lane)
             (aligned-with-lane-in-segment ?self ?seg ?lane)
             (centered-in-lane ?self ?seg ?lane)
             (steering-wheel-straight ?self)))

((in-lane ?self ?lane)
 :percepts ((self ?self segment ?seg) (line ?lane segment ?seg dist ?dist))
 :tests    ((> ?dist -10) (<= ?dist 0)))

Structure and Use of Conceptual Memory

ICARUS organizes conceptual memory in a hierarchical manner. Conceptual inference occurs from the bottom up, starting from percepts to produce high-level beliefs about the current state.
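
As a rough illustration, the sketch below forward-chains from percepts to beliefs in the bottom-up style described here, assuming concepts reduced to propositional (head, body) rules; the real ICARUS matcher unifies relational patterns with variables, which this sketch omits.

    def infer_beliefs(percepts, concept_rules):
        """Derive higher-level beliefs from percepts until nothing new follows."""
        beliefs = set(percepts)
        changed = True
        while changed:
            changed = False
            for head, body in concept_rules:
                if head not in beliefs and body <= beliefs:
                    beliefs.add(head)        # a higher-level concept instance
                    changed = True
        return beliefs

    # Propositionalized driving concepts (illustrative names, not ICARUS content):
    rules = [("in-lane", {"self-seen", "line-seen"}),
             ("in-segment", {"self-seen", "segment-seen"}),
             ("driving-well", {"in-lane", "in-segment", "steering-straight"})]
    print(infer_beliefs({"self-seen", "line-seen", "segment-seen",
                         "steering-straight"}, rules))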

Representing Short-Term Beliefs/Goals

(current-street me A)              (current-segment me g550)
(lane-to-right g599 g601)          (first-lane g599)
(last-lane g599)                   (last-lane g601)
(at-speed-for-u-turn me)           (slow-for-right-turn me)
(steering-wheel-not-straight me)   (centered-in-lane me g550 g599)
(in-lane me g599)                  (in-segment me g550)
(on-right-side-in-segment me)      (intersection-behind g550 g522)
(building-on-left g288)            (building-on-left g425)
(building-on-left g427)            (building-on-left g429)
(building-on-left g431)            (building-on-left g433)
(building-on-right g287)           (building-on-right g279)
(increasing-direction me)          (buildings-on-right g287 g279)

ICARUS Skills for In-City Driving

((in-rightmost-lane ?self ?line)
 :percepts ((self ?self) (line ?line))
 :start    ((last-lane ?line))
 :subgoals ((driving-well-in-segment ?self ?seg ?line)))

((driving-well-in-segment ?self ?seg ?line)
 :percepts ((segment ?seg) (line ?line) (self ?self))
 :start    ((steering-wheel-straight ?self))
 :subgoals ((in-segment ?self ?seg)
            (centered-in-lane ?self ?seg ?line)
            (aligned-with-lane-in-segment ?self ?seg ?line)
            (steering-wheel-straight ?self)))

((in-segment ?self ?endsg)
 :percepts ((self ?self speed ?speed)
            (intersection ?int cross ?cross)
            (segment ?endsg street ?cross angle ?angle))
 :start    ((in-intersection-for-right-turn ?self ?int))
 :actions  ((steer 1)))

ICARUS Skills Build on Concepts

Each concept is defined in terms of other concepts and/or percepts. Each skill is defined in terms of other skills, concepts, and percepts. ICARUS stores skills in a hierarchical manner that links to concepts.

Skill Execution in ICARUS

Skill execution occurs from the top down, starting from goals to find applicable paths through the skill hierarchy. This process repeats on each cycle to give teleoreactive control (Nilsson, 1994) with a bias toward persistence of initiated skills.
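
The sketch below illustrates this top-down search in Python, assuming a propositional skill format (dicts with goal, start, and either subgoals or a primitive action); real ICARUS skills are relational, and the persistence bias toward already-initiated skills is left out for brevity.

    class Impasse(Exception):
        """Raised when no applicable path through the hierarchy achieves a goal."""

    def select_action(goal, beliefs, skills):
        if goal in beliefs:
            return None                             # goal already satisfied
        for skill in skills:
            if skill["goal"] != goal or not skill["start"] <= beliefs:
                continue                            # wrong goal index or start unmet
            if "action" in skill:
                return skill["action"]              # primitive skill: act directly
            for sub in skill["subgoals"]:           # descend on first unmet subgoal
                act = select_action(sub, beliefs, skills)
                if act is not None:
                    return act
            return None     # all subgoals hold; inference should soon add the goal
        raise Impasse(goal)

    # Illustrative skills: clearing block A by unstacking B from it.
    skills = [{"goal": "clear-A", "start": {"on-B-A"}, "subgoals": ["holding-B"]},
              {"goal": "holding-B", "start": {"hand-empty", "on-B-A"},
               "action": "unstack(B, A)"}]
    print(select_action("clear-A", {"on-B-A", "hand-empty"}, skills))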

Execution and Problem Solving in ICARUS

[Diagram: a problem enters Reactive Execution over the Skill Hierarchy and its Primitive Skills; when an impasse arises, control passes to Problem Solving, which yields an executed plan.]

Problem solving involves means-ends analysis that chains backward over skills and concept definitions, executing skills whenever they become applicable.
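
Below is a minimal sketch of means-ends analysis in this style, assuming propositional skills with start conditions and effects, and ignoring delete effects; the real module chains over relational skill and concept definitions and executes skills as soon as they apply.

    def means_ends(goal, state, skills, depth=8):
        """Chain backward from a goal; return (plan, resulting state) or None."""
        if goal in state:
            return [], state
        if depth == 0:
            return None                        # give up on deep regressions
        for skill in (s for s in skills if goal in s["effects"]):
            plan, cur = [], set(state)
            feasible = True
            for pre in skill["start"]:         # first achieve each start condition
                result = means_ends(pre, cur, skills, depth - 1)
                if result is None:
                    feasible = False
                    break
                subplan, cur = result
                plan += subplan
            if feasible:                       # skill became applicable: run it
                return plan + [skill["name"]], cur | set(skill["effects"])
        return None

    # Illustrative operator: unstacking B from A to clear A.
    skills = [{"name": "unstack(B, A)",
               "start": ["hand-empty", "clear(B)", "on(B, A)"],
               "effects": ["clear(A)", "holding(B)"]}]
    print(means_ends("clear(A)", {"hand-empty", "clear(B)", "on(B, A)"}, skills))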

ICARUS Learns Skills from Problem Solving

[Diagram: the same execution and problem-solving loop, extended with a Skill Learning module that turns solved impasses into new clauses in the skill hierarchy.]

Learning from Problem Solutions

ICARUS incorporates a mechanism for learning new skills that:
- operates whenever problem solving overcomes an impasse
- incorporates only information available from the goal stack
- generalizes beyond the specific objects concerned
- depends on whether chaining involved skills or concepts
- supports cumulative learning and within-problem transfer

This skill creation process is fully interleaved with means-ends analysis and execution. Learned skills carry out forward execution in the environment rather than backward chaining in the mind.
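
As a sketch of how such a clause might be assembled, assume the goal stack yields the achieved goal, its subgoals, and the literals true when chaining began; the variabilize helper below is a hypothetical stand-in for ICARUS's generalization over specific objects.

    def variabilize(literal, bindings):
        """Replace each constant argument with a shared variable like ?x1."""
        pred, *args = literal.split()
        out = [pred]
        for a in args:
            if a not in bindings:
                bindings[a] = "?x%d" % (len(bindings) + 1)
            out.append(bindings[a])
        return " ".join(out)

    def build_skill_clause(goal, subgoals, start_conditions):
        """Compose one new skill clause from a solved subproblem."""
        bindings = {}
        return {"goal":     variabilize(goal, bindings),
                "start":    [variabilize(s, bindings) for s in start_conditions],
                "subgoals": [variabilize(s, bindings) for s in subgoals]}

    # One solved impasse for clearing a block yields a reusable clause:
    print(build_skill_clause("clear A",
                             subgoals=["unstackable B A", "unstacked B A"],
                             start_conditions=["on B A", "hand-empty"]))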

ICARUS Memories and Processes

[Architecture diagram with modules and memories: Long-Term Conceptual Memory, Short-Term Belief Memory, Short-Term Goal Memory, Long-Term Skill Memory, Conceptual Inference, Skill Retrieval and Selection, Skill Execution, Problem Solving, Skill Learning, Perceptual Buffer, Motor Buffer, Perception, Environment.]

An ICARUS Agent for Urban Combat

ICARUS Summary

ICARUS is a unified theory of the cognitive architecture that:
- includes hierarchical memories for concepts and skills;
- interleaves conceptual inference with reactive execution;
- resorts to problem solving when it lacks routine skills;
- learns such skills from successful resolution of impasses.

We have developed ICARUS agents for a variety of simulated physical environments, including urban driving. However, it has a number of limitations that we must address to improve its coverage of human intelligence.

Challenge 1: Arbitrary Behaviors

ICARUS indexes skills by the goals they achieve; this aids in:
- retrieving relevant candidate skills for execution
- determining when skill execution should terminate
- constructing new skills from successful solutions

But these goals can describe only instantaneous states of the environment, which limits ICARUS's representational power. For example, it cannot encode skills for complex dance steps that end where they start, or the notion of a round trip.

Incorporating Temporal Constraints

To support richer skills, we are extending ICARUS to include:
- concepts that indicate temporal relations which must hold among their subconcepts
- skills that use these temporally defined concepts as their goals and subgoals
- a belief memory that includes episodic traces of when each concept instance began and ended

We are also augmenting its inference, execution, and learning modules to take advantage of these temporal structures.

The Concept of a Round Trip

Any round trip from A to B involves:
- first being located at place A
- then being located at place B
- then being located at place A again

We can specify this concept in the new formalism as:

((round-trip ?self ?a ?b)
 :percepts    ((self ?self) (location ?a) (location ?b))
 :relations   ((at ?self ?a) ?start1 ?end1
               (at ?self ?b) ?start2 ?end2
               (at ?self ?a) ?start3 ?end3)
 :constraints ((<= ?end1 ?start2) (<= ?end2 ?start3)))

Episodes and Skills for Round Trips

The inference module automatically adds episodic traces like:

[(at me loc1) ]                [(home loc1) 200 …]
[(in-transit me loc1 loc2) ]   [(office loc2) 220 …]
[(at me loc2) ]                [(at me loc1) 558 …]

The execution module compares these to extended skills like:

((round-trip ?self ?a ?b)
 :percepts    ((self ?self) (location ?a) (location ?b))
 :start       ((at ?self ?a) ?start1 ?end1)
 :subgoals    ((at ?self ?b) ?start2 ?end2
               (at ?self ?a) ?start3 ?end3)
 :constraints ((<= ?end1 ?start2) (<= ?end2 ?start3)))

This checks their heads and uses constraints to order subgoals.
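
A small Python sketch of this matching step appears below, assuming ground literals and episodes stored as (literal, start, end) triples; the interval numbers echo the trace above but are otherwise illustrative, and real ICARUS also handles still-open episodes and relational patterns.

    from itertools import product

    def matches(relations, constraints, trace):
        """Assign episodes to a concept's relations so ordering constraints hold."""
        candidates = [[ep for ep in trace if ep[0] == rel] for rel in relations]
        for combo in product(*candidates):
            spans = [(s, e) for _, s, e in combo]
            if all(spans[i][1] <= spans[j][0] for i, j in constraints):
                return combo             # episodes instantiating the concept
        return None

    trace = [("at me loc1", 100, 200), ("in-transit me loc1 loc2", 200, 220),
             ("at me loc2", 220, 480), ("at me loc1", 558, 600)]
    print(matches(["at me loc1", "at me loc2", "at me loc1"],
                  constraints=[(0, 1), (1, 2)], trace=trace))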

Challenge 2: Robust Learning

ICARUS currently acquires new hierarchical skill clauses by:
- solving novel problems through means-ends analysis
- analyzing the steps used to achieve each subgoal
- storing one skill clause for each solved subproblem

However, this mechanism has two important limitations:
- it can create skills with overly general start conditions
- it depends on a hand-crafted hierarchy of concepts

We hypothesize that a revised mechanism which also learns new concepts can address both of these problems.

Forming New Concepts

To support better skill learning, we are extending ICARUS to create new conceptual predicates, with associated definitions, for the start conditions and effects of acquired skills:
- that are functionally motivated but structurally defined
- that extend the concept hierarchy to support future problem solving and skill learning

Learned concepts for skills' preconditions serve as perceptual chunks which access responses that achieve the agent's goals.

Learning Concepts in the Blocks World

[Diagram: goal decompositions (clear A) via (unstacked B A) and (unstackable B A), which rests on (clear B), (hand-empty), and (on B A), with analogous goals (clear C), (unstacked D C), (unstackable D C), shown over two block configurations.]

When the problem solver achieves a goal, it learns both a new skill and two concepts, one for its preconditions and one for its effects. The system uses a mechanism similar to that in composition (Neves & Anderson, 1981) to determine the conditions for each one.

ICARUS uses the same predicate in two clauses if the achieved goals are the same and if the initially true subconcepts are the same (for concept chaining) or the utilized skills are the same (for skill chaining). This produces disjunctive and recursive concepts.
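
The predicate-reuse test just described might look like the following sketch, where each learning episode is assumed to record the achieved goal predicate, the initially true subconcepts, the skills used, and which kind of chaining occurred; the record format and the sc- naming convention are illustrative.

    def reuse_predicate(episode, known_clauses):
        """Reuse a predicate when the goal and its provenance match; else mint one."""
        for clause in known_clauses:
            if clause["goal_pred"] != episode["goal_pred"]:
                continue
            concept_case = (episode["chaining"] == "concept" and
                            clause["init_subconcepts"] == episode["init_subconcepts"])
            skill_case = (episode["chaining"] == "skill" and
                          clause["skills_used"] == episode["skills_used"])
            if concept_case or skill_case:
                return clause["predicate"]     # add a disjunctive clause to it
        return "sc" + episode["goal_pred"]     # fresh predicate, e.g. scclear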

Learning Concepts in the Blocks World

((scclear ?C)
 :percepts  ((block ?C) (block ?D))
 :relations ((unstackable ?D ?C)))

ICARUS solves novel problems in a top-down manner, using means-ends analysis to chain backward from goals. But it acquires concepts from the bottom up, just as it learns skills. Here it defines the base case for the start concept associated with the skill for making a block clear.

Learning Concepts in the Blocks World

((scclear ?B)
 :percepts  ((block ?B) (block ?C))
 :relations ((scunstackable ?C ?B)))

This process continues upward as the architecture achieves higher-level goals. Here ICARUS defines the recursive case for the start concept associated with the skill for making a block clear.

Learning Concepts in the Blocks World

((scunstackable ?B ?A)
 :percepts  ((block ?B) (block ?A))
 :relations ((on ?B ?A) (hand-empty) (scclear ?B)))

Skills acquired with these learned concepts appear to be more accurate than those created with ICARUS's old mechanism.

Learning Concepts in the Blocks World

((scclear ?A)
 :percepts  ((block ?A) (block ?B))
 :relations ((scunstackable ?B ?A)))

The same recursive pattern yields the start concept for the top-level goal of clearing block A.

Benefits of Concept Learning (Free Cell)

Benefits of Concept Learning (Logistics)

Challenge 3: Reasoning about Others

ICARUS is designed to model intelligent behavior in embodied agents, but our work to date has treated them in isolation. The framework can deal with other independent agents, but only by viewing them as objects in the environment.

But people can reason more deeply about the goals and actions of others, then use their inferences to make decisions. Adding this ability to ICARUS will require knowledge, but it may also demand extensions to the architecture.

An Urban Driving Example You are driving in a city behind another vehicle when a dog suddenly runs across the road ahead of it. You do not want to hit the dog, but you are in no danger of that, yet you guess the other driver shares this goal. You reason that, if you were in his situation, you would swerve or step on the brakes to avoid hitting the dog. This leads you to predict that the other car may soon slow down very rapidly. Since you have another goal – to avoid collisions – you slow down in case that event happens.

Social Cognition in ICARUS

For ICARUS to handle social cognition of this sort, it must:
- imagine itself in another agent's physical/social situation;
- infer the other agent's goals either by default reasoning or based on its behavior;
- carry out mental simulation of the other agent's plausible actions and their effects on the world;
- take high-probability trajectories into account in selecting which actions to execute itself.

Each of these abilities requires changes to the architecture of ICARUS, not just its knowledge base.

Architectural Extensions

In response, we are planning a number of changes to ICARUS:
- add abductive reasoning that makes plausible inferences about goals, via a relational cascaded Bayesian classifier;
- extend the problem solver to support forward-chaining search, via mental simulation using repeated lookahead;
- revise skill execution to consider the probability of future events, using the desirability of likely trajectories.

These extensions will let ICARUS agents reason about other agents and use the results to influence their own behavior.
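
The lookahead idea can be sketched as below, assuming hypothetical simulate and desirability functions and hand-set probabilities for the other agent's actions (standing in for the abductive goal inference); it mirrors the driving example, where slowing down hedges against the lead car braking hard.

    def choose_action(state, my_actions, their_actions, simulate, desirability):
        """Pick my action with the best expected desirability over the other
        agent's probability-weighted responses."""
        def expected(my_act):
            return sum(p * desirability(simulate(state, my_act, their_act))
                       for their_act, p in their_actions)
        return max(my_actions, key=expected)

    def simulate(state, my_act, their_act):
        # Toy transition: a collision occurs if I keep speed while they brake.
        return {"collision": my_act == "continue" and their_act == "brake-hard"}

    def desirability(state):
        return -100 if state["collision"] else 0

    their_actions = [("brake-hard", 0.7), ("swerve", 0.2), ("continue", 0.1)]
    print(choose_action({}, ["continue", "slow-down"], their_actions,
                        simulate, desirability))      # -> slow-down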

Automating Social Cognition

Although humans can reason explicitly about other agents' likely actions, they gradually compile responses and automate them. The ICARUS skill learning module should achieve this effect by:
- treating goals achieved via anticipation as solved impasses;
- analyzing steps that led to this solution to learn new skills;
- using these skills to automate behavior when the agent finds itself in a similar situation.

Over time, the agent will behave in socially relevant ways with no need for explicit reasoning or mental simulation.

Concluding Remarks

ICARUS is a unified theory of cognition that exhibits important human abilities but that also has limitations. However, our recent work has extended the architecture to:
- represent concepts and skills with temporal relations and use them to execute arbitrary behaviors;
- acquire new predicates that extend the concept hierarchy and enable better skill learning;
- reason about other agents' situations and goals, predict their behavior, and select appropriate responses.

These extensions bring ICARUS a few steps closer to a broad-coverage theory of higher-level cognition.

End of Presentation