Pat Langley Dongkyu Choi Computational Learning Laboratory Center for the Study of Language and Information Stanford University, Stanford, California USA.

Presentation transcript:

A Unified Cognitive Architecture for Physical Agents

Pat Langley, Dongkyu Choi
Computational Learning Laboratory
Center for the Study of Language and Information
Stanford University, Stanford, California, USA

Thanks to K. Cummings, N. Nejati, S. Rogers, S. Sage, and D. Shapiro for their many contributions. This talk reports research funded by grants from DARPA IPTO, which is not responsible for its contents.

Psychological Ideas as Design Heuristics

To develop intelligent systems, we must constrain their design, and findings about human behavior can suggest:
- how the system should represent and organize knowledge
- how the system should use that knowledge in performance
- how the system should acquire knowledge from experience

This approach has led to many new insights, starting with Newell, Shaw, and Simon's (1956) work on the Logic Theorist.

Cascaded Integration in ICARUS

In this talk I will use ICARUS, a unified cognitive architecture, to illustrate the value of ideas from psychology. ICARUS adopts a cascaded approach to system integration in which lower-level modules produce results for higher-level ones:
- conceptual inference
- skill execution
- problem solving
- learning

Representing and Using Concepts

Psychology makes claims about conceptual knowledge, e.g.:
- concepts are distinct cognitive entities that support both categorization and inference;
- the majority of human concepts are grounded in perception and action (Barsalou, 1999);
- many human concepts are relational in nature, describing connections among entities (Kotovsky & Gentner, 1996);
- concepts are organized in a hierarchical manner, with more complex categories defined in terms of simpler ones.

ICARUS adopts these assumptions about conceptual memory.

ICARUS Concepts for In-City Driving

((in-rightmost-lane ?self ?clane)
 :percepts  ((self ?self) (segment ?seg)
             (line ?clane segment ?seg))
 :relations ((driving-well-in-segment ?self ?seg ?clane)
             (last-lane ?clane)
             (not (lane-to-right ?clane ?anylane))))

((driving-well-in-segment ?self ?seg ?lane)
 :percepts  ((self ?self) (segment ?seg) (line ?lane segment ?seg))
 :relations ((in-segment ?self ?seg) (in-lane ?self ?lane)
             (aligned-with-lane-in-segment ?self ?seg ?lane)
             (centered-in-lane ?self ?seg ?lane)
             (steering-wheel-straight ?self)))

((in-lane ?self ?lane)
 :percepts ((self ?self segment ?seg)
            (line ?lane segment ?seg dist ?dist))
 :tests    ((> ?dist -10) (<= ?dist 0)))

Structure and Use of Conceptual Memory

ICARUS organizes conceptual memory in a hierarchical manner. Conceptual inference occurs from the bottom up, starting from percepts to produce high-level beliefs about the current state.
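To make the bottom-up inference process concrete, here is a minimal Python sketch. It is not ICARUS code: concepts are reduced to propositional (name, parts) pairs, whereas ICARUS matches relational patterns with variables, and all concept names below are illustrative.

```python
def infer_beliefs(percepts, concepts):
    """Derive high-level beliefs from percepts, bottom up.

    `concepts` maps a concept name to the set of lower-level
    beliefs/percepts that must all hold for it to be inferred.
    Iterates to a fixpoint, so complex concepts defined over
    simpler ones fire once their parts are believed.
    """
    beliefs = set(percepts)
    changed = True
    while changed:
        changed = False
        for name, parts in concepts.items():
            if name not in beliefs and parts <= beliefs:
                beliefs.add(name)
                changed = True
    return beliefs

# Hypothetical driving concepts, propositional for brevity.
concepts = {
    "in-lane":           {"self-seen", "lane-seen"},
    "aligned-with-lane": {"in-lane", "heading-ok"},
    "driving-well":      {"in-lane", "aligned-with-lane",
                          "steering-straight"},
}

beliefs = infer_beliefs(
    {"self-seen", "lane-seen", "heading-ok", "steering-straight"},
    concepts)
```

Note that "driving-well" is only inferred after its parts "in-lane" and "aligned-with-lane" have themselves been inferred, mirroring the hierarchical organization described above.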

Representing Short-Term Beliefs/Goals

(current-street me A)              (current-segment me g550)
(lane-to-right g599 g601)          (first-lane g599)
(last-lane g599)                   (last-lane g601)
(at-speed-for-u-turn me)           (slow-for-right-turn me)
(steering-wheel-not-straight me)   (centered-in-lane me g550 g599)
(in-lane me g599)                  (in-segment me g550)
(on-right-side-in-segment me)      (intersection-behind g550 g522)
(building-on-left g288)            (building-on-left g425)
(building-on-left g427)            (building-on-left g429)
(building-on-left g431)            (building-on-left g433)
(building-on-right g287)           (building-on-right g279)
(increasing-direction me)          (buildings-on-right g287 g279)

Skills and Execution

Psychology also makes claims about skills and their execution:
- the same generic skill may be applied to distinct objects that meet its application conditions;
- skills support the execution of complex activities that have hierarchical organization (Rosenbaum et al., 2001);
- humans can carry out open-loop sequences, but they can also operate in closed-loop reactive mode;
- humans can deal with multiple goals with different priorities, which can lead to interrupted behavior.

ICARUS embodies these ideas in its skill execution module.

ICARUS Skills for In-City Driving

((in-rightmost-lane ?self ?line)
 :percepts ((self ?self) (line ?line))
 :start    ((last-lane ?line))
 :subgoals ((driving-well-in-segment ?self ?seg ?line)))

((driving-well-in-segment ?self ?seg ?line)
 :percepts ((segment ?seg) (line ?line) (self ?self))
 :start    ((steering-wheel-straight ?self))
 :subgoals ((in-segment ?self ?seg)
            (centered-in-lane ?self ?seg ?line)
            (aligned-with-lane-in-segment ?self ?seg ?line)
            (steering-wheel-straight ?self)))

((in-segment ?self ?endsg)
 :percepts ((self ?self speed ?speed)
            (intersection ?int cross ?cross)
            (segment ?endsg street ?cross angle ?angle))
 :start    ((in-intersection-for-right-turn ?self ?int))
 :actions  ((steer 1)))

ICARUS Skills Build on Concepts

Each concept is defined in terms of other concepts and/or percepts. Each skill is defined in terms of other skills, concepts, and percepts. ICARUS stores skills in a hierarchical manner that links to concepts.

Skill Execution in ICARUS

Skill execution occurs from the top down, starting from goals to find applicable paths through the skill hierarchy. This occurs repeatedly on each cycle to support reactive control, with a bias toward persistence of initiated skills.
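The top-down traversal can be sketched in a few lines of Python. This is a simplification under stated assumptions: skills are propositional records with a start condition and either subgoals or a primitive action, whereas ICARUS skills are relational and variablized; all names here are illustrative.

```python
def find_action(goal, skills, beliefs):
    """Walk the skill hierarchy from `goal` down to an executable action.

    A skill is considered when its start condition holds in `beliefs`.
    Returns the first primitive action found on an applicable path,
    or None when the goal is already satisfied or no path applies
    (an impasse).
    """
    if goal in beliefs:              # goal already achieved
        return None
    for skill in skills.get(goal, []):
        if skill["start"] <= beliefs:
            if "action" in skill:    # primitive skill: execute directly
                return skill["action"]
            for sub in skill["subgoals"]:
                act = find_action(sub, skills, beliefs)
                if act is not None:
                    return act
    return None                      # impasse: no applicable path

# Hypothetical skills keyed by the goal they achieve.
skills = {
    "driving-well": [{"start": {"steering-straight"},
                      "subgoals": ["in-lane", "centered"]}],
    "in-lane":      [{"start": set(), "action": "steer-toward-lane"}],
    "centered":     [{"start": set(), "action": "adjust-position"}],
}

act = find_action("driving-well", skills, {"steering-straight"})
```

Because already-satisfied subgoals return None and are skipped, repeated calls on successive cycles naturally advance through a skill's decomposition, which is one way to realize the reactive, persistent control described above.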

Ideas about Problem Solving and Learning

Psychology also has ideas about problem solving and learning:
- humans often resort to means-ends analysis to solve novel problems (Newell & Simon, 1961);
- problem solving often occurs in a physical context and is interleaved with execution (Gunzelmann & Anderson, 2003);
- efforts to overcome impasses during problem solving lead to incremental acquisition of new skills (Anzai & Simon, 1979);
- structural learning involves monotonic addition of symbolic elements to long-term memory;
- learning can transform backward-chaining heuristic search into informed forward-chaining execution (Larkin et al., 1980).

ICARUS reflects these ideas in its problem solving and learning.

ICARUS Interleaves Execution and Problem Solving

[Diagram: a problem is first handed to reactive execution over the skill hierarchy and its primitive skills; on an impasse, control passes to problem solving, which produces an executed plan.]

This organization reflects the psychological distinction between automatized and controlled behavior.
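The impasse path invokes means-ends analysis. Here is a minimal sketch, assuming STRIPS-like operators (preconditions plus an add list) rather than ICARUS's actual problem-solver representation; the operator names are hypothetical.

```python
def means_ends(goal, operators, beliefs, depth=10):
    """Backward-chain from `goal`: choose an operator whose add list
    contains the goal, recursively achieve its preconditions, and
    return the plan as an ordered list of operator names.
    Returns None if the goal cannot be reduced within `depth`.
    """
    if goal in beliefs:
        return []
    if depth == 0:
        return None
    for name, (pre, adds) in operators.items():
        if goal in adds:
            plan = []
            for p in pre:
                sub = means_ends(p, operators, beliefs, depth - 1)
                if sub is None:
                    break            # precondition unreachable
                plan += sub
            else:
                return plan + [name]
    return None

# Hypothetical driving operators: (preconditions, effects).
operators = {
    "change-to-right-lane": (set(), {"in-rightmost-lane"}),
    "turn-right":           ({"in-rightmost-lane"}, {"on-cross-street"}),
}

plan = means_ends("on-cross-street", operators, set())
```

The returned plan is ordered so that each operator's preconditions are achieved before it runs, which is the backward-chaining search that, per the slide above, learning later compiles into forward-chaining execution.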

ICARUS Learns Skills from Problem Solving

[Diagram: the same execution/problem-solving cascade, with a skill learning module that turns problem-solving traces into new structures in the skill hierarchy.]
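One way to read the skill-learning arrow: a successful problem-solving trace is cached as a new hierarchical skill indexed by the goal it achieved, so later cycles can execute it reactively instead of re-invoking the problem solver. A minimal sketch with illustrative names, not ICARUS's actual learning mechanism:

```python
def learn_skill(goal, subgoal_trace, skills):
    """Cache a solved problem as a new skill: the achieved goal becomes
    the skill's head, and the ordered subgoals from the trace become
    its decomposition. Appending (rather than overwriting) keeps
    learning monotonic and cumulative: earlier skills for the same
    goal remain available.
    """
    skills.setdefault(goal, []).append(
        {"start": set(), "subgoals": list(subgoal_trace)})
    return skills

skills = {}
learn_skill("make-right-turn",
            ["in-rightmost-lane", "slow-for-right-turn"], skills)
```

The monotonic, additive update matches the claim above that structural learning adds symbolic elements to long-term memory rather than revising them.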

Learning Skills for In-City Driving

We have trained ICARUS to drive in a simulated in-city environment, providing the system with tasks of increasing complexity. Learning transforms the problem-solving traces into hierarchical skills. The agent uses these skills to change lanes, turn, and park using only reactive control.

Similarities to Previous Architectures

ICARUS has much in common with other cognitive architectures like Soar (Laird et al., 1987) and ACT-R (Anderson, 1993):
1. Short-term memories are distinct from long-term stores
2. Memories contain modular elements cast as symbolic structures
3. Long-term structures are accessed through pattern matching
4. Cognition occurs in retrieval/selection/action cycles
5. Learning involves monotonic addition of elements to memory
6. Learning is incremental and interleaved with performance

These ideas all have their origin in theories of human memory, problem solving, and skill acquisition.

Distinctive Features of ICARUS

However, ICARUS also makes assumptions that distinguish it from these architectures:
1. Cognition is grounded in perception and action
2. Categories and skills are separate cognitive entities
3. Short-term elements are instances of long-term structures
4. Inference and execution are more basic than problem solving
5. Skill/concept hierarchies are learned in a cumulative manner

Some of these assumptions appear in Bonasso et al.'s (2003) 3T, Freed's APEX, and Sun et al.'s (2001) CLARION architectures. These ideas have their roots in cognitive psychology, but they are also effective in building integrated intelligent agents.

Directions for Future Research

Future work on ICARUS should incorporate other ideas about:
- progressive deepening in forward-chaining search
- the graded nature of categories and category learning
- the model-based character of human reasoning
- the persistent but limited nature of short-term memories
- creating perceptual chunks to reduce these limitations
- storing and retrieving episodic memory traces

These additions will further increase ICARUS's debt to psychology. For more details, see:

End of Presentation