
1 A Cognitive Architecture for Integrated Intelligent Agents
Pat Langley
School of Computing and Informatics, Arizona State University, Tempe, Arizona
http://cll.stanford.edu/
Thanks to D. Choi, K. Cummings, N. Nejati, S. Rogers, S. Sage, and D. Shapiro for their contributions. This talk reports research funded by grants from DARPA IPTO and the National Science Foundation, which are not responsible for its contents.

2 Claim 1: Integrated Cognitive Systems
The original goal of artificial intelligence was to design and implement computational artifacts that:
- handled difficult tasks that require cognitive processing;
- combined many capabilities into integrated systems;
- provided insights into the nature of mind and intelligence.
Instead, modern AI has divided into many subfields that care little about cognition, systems, or intelligence. But the challenge remains, and we need far more research on integrated cognitive systems.

3 Claim 2: Psychology and Design Heuristics
To develop intelligent systems, we must constrain their design, and findings about human behavior can suggest:
- how the system should represent and organize knowledge;
- how the system should use that knowledge in performance;
- how the system should acquire knowledge from experience.
This approach has led to many new insights and methods, but few modern AI researchers take advantage of it. We need far more work that incorporates ideas from cognitive psychology into the design of AI systems.

4 The Fragmentation of AI Research
[Diagram: isolated subfields, including action, perception, reasoning, learning, planning, and language.]

5 The Domain of In-City Driving
Consider driving a vehicle in a city, which requires:
- selecting routes
- obeying traffic lights
- avoiding collisions
- being polite to others
- finding addresses
- staying in the lane
- parking safely
- stopping for pedestrians
- following other vehicles
- delivering packages
These tasks range from low-level execution to high-level reasoning.

6 Newell's Critique
In 1973, Allen Newell argued, "You can't play twenty questions with nature and win." Instead, he proposed that we:
- move beyond isolated phenomena and capabilities to develop complete models of intelligent behavior;
- demonstrate our systems' intelligence on the same range of domains and tasks as humans can handle;
- evaluate these systems in terms of generality and flexibility rather than success on a single class of tasks.
However, there are different paths toward achieving such systems.

7 A System with Communicating Modules
[Diagram: the modules (action, perception, reasoning, learning, planning, language) linked by direct communication, as in software engineering and multi-agent systems.]

8 A System with Shared Short-Term Memory
[Diagram: the modules (action, perception, reasoning, learning, planning, language) connected through a shared store of short-term beliefs and goals, as in blackboard architectures.]

9 Integration vs. Unification
Newell's vision for research on theories of intelligence was that:
- cognitive systems should make strong theoretical assumptions about the nature of the mind;
- theories of intelligence should change only gradually, as new structures or processes are determined necessary;
- later design choices should be constrained heavily by earlier ones, not made independently.
A successful framework is all about mutual constraints, and it should provide a unified theory of intelligent behavior. He associated these aims with the idea of a cognitive architecture, which was also to incorporate results from psychology.

10 A System with Shared Long-Term Memory
[Diagram: the modules share both short-term beliefs and goals and long-term memory structures, as in cognitive architectures.]

11 A Constrained Cognitive Architecture
[Diagram: the same modules, short-term beliefs and goals, and long-term memory structures, now with strongly constrained interfaces among them.]

12 The ICARUS Architecture
In this talk I will use one such framework, ICARUS, to illustrate the advantages of cognitive architectures. ICARUS incorporates a variety of assumptions from psychological theories; the most basic are that:
1. Short-term memories are distinct from long-term stores
2. Memories contain modular elements cast as list structures
3. Long-term structures are accessed through pattern matching
4. Cognition occurs in retrieval/selection/action cycles
5. Performance and learning compose elements in memory
These claims give ICARUS much in common with other cognitive architectures like ACT-R, Soar, and Prodigy.

13 Cascaded Integration in ICARUS
ICARUS adopts a cascaded approach to integration in which lower-level modules produce results for higher-level ones:
- conceptual inference
- skill execution
- problem solving
- learning
Each of these modules incorporates a variety of ideas that have their origin in cognitive psychology.

14 Representing and Using Concepts
Cognitive psychology makes claims about conceptual knowledge:
- concepts are distinct cognitive entities that support both categorization and inference;
- the majority of human concepts are grounded in perception and action (Barsalou, 1999);
- many human concepts are relational in nature, describing connections among entities (Kotovsky & Gentner, 1996);
- concepts are organized in a hierarchical manner, with more complex categories defined in terms of simpler ones.
ICARUS adopts these assumptions about conceptual memory.

15 ICARUS Concepts for In-City Driving

((in-rightmost-lane ?self ?clane)
 :percepts  ((self ?self) (segment ?seg)
             (line ?clane segment ?seg))
 :relations ((driving-well-in-segment ?self ?seg ?clane)
             (last-lane ?clane)
             (not (lane-to-right ?clane ?anylane))))

((driving-well-in-segment ?self ?seg ?lane)
 :percepts  ((self ?self) (segment ?seg) (line ?lane segment ?seg))
 :relations ((in-segment ?self ?seg) (in-lane ?self ?lane)
             (aligned-with-lane-in-segment ?self ?seg ?lane)
             (centered-in-lane ?self ?seg ?lane)
             (steering-wheel-straight ?self)))

((in-lane ?self ?lane)
 :percepts ((self ?self segment ?seg) (line ?lane segment ?seg dist ?dist))
 :tests    ((> ?dist -10) (<= ?dist 0)))

16 Structure and Use of Conceptual Memory I CARUS organizes conceptual memory in a hierarchical manner. Conceptual inference occurs from the bottom up, starting from percepts to produce high-level beliefs about the current state.
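As a rough illustration of this bottom-up process, the sketch below treats each concept as a propositional definition over lower-level concepts and percepts, and infers beliefs to a fixpoint. This is a deliberate simplification: real ICARUS concepts are relational and matched against structured percepts, and every name here (driving-well, in-lane, and so on) is invented for the example.

```python
# A minimal, propositional sketch of bottom-up conceptual inference.
# All concept and percept names are illustrative, not actual ICARUS code.

CONCEPTS = {
    # concept -> lower-level concepts/percepts that must all hold
    "in-segment":        {"self-near-segment"},
    "in-lane":           {"self-near-lane"},
    "aligned-with-lane": {"in-lane", "heading-matches-lane"},
    "driving-well":      {"in-segment", "in-lane", "aligned-with-lane"},
}

def infer_beliefs(percepts):
    """Start from percepts and repeatedly add any concept whose conditions
    are all believed, until no new beliefs can be inferred (a fixpoint)."""
    beliefs = set(percepts)
    changed = True
    while changed:
        changed = False
        for concept, conditions in CONCEPTS.items():
            if concept not in beliefs and conditions <= beliefs:
                beliefs.add(concept)
                changed = True
    return beliefs

beliefs = infer_beliefs({"self-near-segment", "self-near-lane",
                         "heading-matches-lane"})
```

The high-level belief driving-well emerges only after its simpler conditions have been inferred, mirroring the bottom-up flow from percepts to high-level beliefs about the current state.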

17 Representing Short-Term Beliefs/Goals
(current-street me A)              (current-segment me g550)
(lane-to-right g599 g601)          (first-lane g599)
(last-lane g599)                   (last-lane g601)
(at-speed-for-u-turn me)           (slow-for-right-turn me)
(steering-wheel-not-straight me)   (centered-in-lane me g550 g599)
(in-lane me g599)                  (in-segment me g550)
(on-right-side-in-segment me)      (intersection-behind g550 g522)
(building-on-left g288)            (building-on-left g425)
(building-on-left g427)            (building-on-left g429)
(building-on-left g431)            (building-on-left g433)
(building-on-right g287)           (building-on-right g279)
(increasing-direction me)          (buildings-on-right g287 g279)

18 Skills and Execution
Psychology also makes claims about skills and their execution:
- the same generic skill may be applied to distinct objects that meet its application conditions;
- skills support the execution of complex activities that have hierarchical organization (Rosenbaum et al., 2001);
- humans can carry out open-loop sequences, but they can also operate in closed-loop reactive mode;
- humans can deal with multiple goals with different priorities, which can lead to interrupted behavior.
ICARUS embodies these ideas in its skill execution module.

19 ICARUS Skills for In-City Driving

((in-rightmost-lane ?self ?line)
 :percepts ((self ?self) (line ?line))
 :start    ((last-lane ?line))
 :subgoals ((driving-well-in-segment ?self ?seg ?line)))

((driving-well-in-segment ?self ?seg ?line)
 :percepts ((segment ?seg) (line ?line) (self ?self))
 :start    ((steering-wheel-straight ?self))
 :subgoals ((in-segment ?self ?seg)
            (centered-in-lane ?self ?seg ?line)
            (aligned-with-lane-in-segment ?self ?seg ?line)
            (steering-wheel-straight ?self)))

((in-segment ?self ?endsg)
 :percepts ((self ?self speed ?speed) (intersection ?int cross ?cross)
            (segment ?endsg street ?cross angle ?angle))
 :start    ((in-intersection-for-right-turn ?self ?int))
 :actions  ((steer 1)))

20 ICARUS Skills Build on Concepts
Each concept is defined in terms of other concepts and/or percepts. Each skill is defined in terms of other skills, concepts, and percepts. ICARUS stores skills in a hierarchical manner that links to concepts.

21 Skill Execution in ICARUS
Skill execution occurs from the top down, starting from goals to find applicable paths through the skill hierarchy. This process repeats on each cycle to give teleoreactive control (Nilsson, 1994) with a bias toward persistence of initiated skills.
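The top-down search for an applicable path can be sketched as follows, again as a propositional simplification: each skill clause carries a head goal, start conditions, and either subgoals or a primitive action. All clause and action names are invented for the example; real ICARUS matches clauses relationally and may select several paths per cycle.

```python
# Hypothetical skill hierarchy: each clause has start conditions plus either
# subgoals (nonprimitive) or a primitive action. Names are illustrative only.
SKILLS = {
    "parked":            {"start": set(),
                          "subgoals": ["in-rightmost-lane", "stopped"]},
    "in-rightmost-lane": {"start": {"last-lane"},
                          "subgoals": ["driving-well"]},
    "driving-well":      {"start": {"steering-straight"},
                          "action": "adjust-steering"},
    "stopped":           {"start": set(), "action": "brake"},
}

def find_path(goal, beliefs, path=()):
    """Descend from a top-level goal to the first applicable primitive
    action, returning (path, action), or None if the goal is already
    satisfied or no clause applies."""
    if goal in beliefs:                       # goal already achieved
        return None
    clause = SKILLS.get(goal)
    if clause is None or not clause["start"] <= beliefs:
        return None                           # clause not applicable
    path = path + (goal,)
    if "action" in clause:
        return path, clause["action"]
    for subgoal in clause["subgoals"]:        # work on first unmet subgoal
        result = find_path(subgoal, beliefs, path)
        if result:
            return result
    return None

path, action = find_path("parked", {"last-lane", "steering-straight"})
```

Here the agent pursuing the goal parked finds the path through in-rightmost-lane to driving-well and executes its steering action; rerunning the search on every cycle, as the slide describes, is what makes the control teleoreactive.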

22 Ideas about Problem Solving and Learning
Psychology also has ideas about problem solving and learning:
- humans often resort to means-ends analysis to solve novel problems (Newell & Simon, 1961);
- problem solving often occurs in a physical context and is interleaved with execution (Gunzelman & Anderson, 2003);
- efforts to overcome impasses during problem solving lead to incremental acquisition of new skills (Anzai & Simon, 1979);
- structural learning involves monotonic addition of symbolic elements to long-term memory;
- learning can transform backward-chaining heuristic search into informed forward-chaining execution (Larkin et al., 1980).
ICARUS reflects these ideas in its problem solving and learning.

23 ICARUS Interleaves Execution and Problem Solving
[Diagram: reactive execution over the skill hierarchy handles each problem; when an impasse arises, control passes to problem solving over primitive skills.]
This organization reflects the psychological distinction between automatized and controlled behavior.

24 A Successful Problem-Solving Trace
[Figure: a means-ends trace in the blocks world. From the initial state, with C on B on A and A on the table, the solver chains backward from the goal (clear A): it unstacks C from B, puts C down on the table, and unstacks B from A.]
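A trace like this can be reproduced with a tiny means-ends solver. The sketch below uses a STRIPS-style blocks world with only the two operators the trace needs (unstack and putdown); it is a bare illustration of backward chaining, not the ICARUS planner itself, and the small depth bound simply keeps the naive search from wandering.

```python
BLOCKS = ["A", "B", "C"]

def instances():
    """Ground operator instances: (name, ordered preconds, adds, deletes)."""
    for x in BLOCKS:
        for y in BLOCKS:
            if x != y:
                yield (("unstack", x, y),
                       (("on", x, y), ("clear", x), ("hand-empty",)),
                       {("holding", x), ("clear", y)},
                       {("on", x, y), ("clear", x), ("hand-empty",)})
    for x in BLOCKS:
        yield (("putdown", x),
               (("holding", x),),
               {("ontable", x), ("clear", x), ("hand-empty",)},
               {("holding", x)})

def achieve(goal, state, depth=3):
    """Means-ends analysis: chain backward from the goal, achieving each
    precondition in order; returns (new_state, plan) or None."""
    if goal in state:
        return state, []
    if depth == 0:
        return None
    for act, pre, add, dele in instances():
        if goal not in add:
            continue                      # operator cannot achieve the goal
        s, plan = state, []
        for p in pre:                     # achieve preconditions in order
            result = achieve(p, s, depth - 1)
            if result is None:
                break
            s, subplan = result
            plan += subplan
        else:
            if set(pre) <= s:             # preconditions still hold together
                return (s - dele) | add, plan + [act]
    return None

# Initial state of slide 24: C on B on A, A on the table, hand empty.
init = {("ontable", "A"), ("on", "B", "A"), ("on", "C", "B"),
        ("clear", "C"), ("hand-empty",)}
state, plan = achieve(("clear", "A"), init)
```

The recovered plan (unstack C from B, put C down, unstack B from A) matches the trace on the slide, including the restacking step needed to restore hand-empty before the final unstack.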

25 ICARUS Learns Skills from Problem Solving
[Diagram: the same organization as before, with a skill learning module that adds the results of successful problem solving to the skill hierarchy.]

26 Constructing Skills from a Trace
[Figure: the trace from slide 24, with the first learned skill (1) marked via skill chaining.]

27 Constructing Skills from a Trace
[Figure: the same trace with a second skill (2) added via skill chaining.]

28 Constructing Skills from a Trace
[Figure: the same trace with a third skill (3) added via concept chaining.]

29 Constructing Skills from a Trace
[Figure: the same trace with a fourth skill (4) added via skill chaining.]

30 Learned Skills in the Blocks World

(clear (?C)
 :percepts ((block ?D) (block ?C))
 :start    ((unstackable ?D ?C))
 :skills   ((unstack ?D ?C)))

(clear (?B)
 :percepts ((block ?C) (block ?B))
 :start    ((on ?C ?B) (hand-empty))
 :skills   ((unstackable ?C ?B) (unstack ?C ?B)))

(unstackable (?C ?B)
 :percepts ((block ?B) (block ?C))
 :start    ((on ?C ?B) (hand-empty))
 :skills   ((clear ?C) (hand-empty)))

(hand-empty ( )
 :percepts ((block ?D) (table ?T1))
 :start    ((putdownable ?D ?T1))
 :skills   ((putdown ?D ?T1)))

Hierarchical skills are generalized traces of successful means-ends problem solving.
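The idea behind these clauses, that each solved subproblem yields a clause whose head is the achieved goal and whose start conditions come from its first subskill, can be sketched as below. This is a propositional simplification with no variable generalization, and the skill names are invented for the example.

```python
def learn_clause(goal, subskills, skill_memory):
    """Record a new skill clause from a solved subproblem: the head is the
    achieved goal, the body is the ordered subskills, and (in the
    skill-chaining case) the start conditions are those of the first
    subskill in the chain."""
    first = skill_memory[subskills[0]]
    clause = {"head": goal,
              "start": set(first["start"]),
              "subskills": list(subskills)}
    skill_memory[goal] = clause
    return clause

# Hypothetical primitive skill drawn from the blocks-world trace.
skill_memory = {
    "unstack-C-B": {"head": "unstack-C-B",
                    "start": {"on-C-B", "clear-C", "hand-empty"},
                    "subskills": []},
}
clause = learn_clause("clear-B", ["unstack-C-B"], skill_memory)
```

Learned clauses join the memory immediately, so later subproblems (say, clearing A) can build on clear-B, giving the cumulative hierarchical structure the slides describe.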

31 Cumulative Curves for Blocks World

32 Cumulative Curves for FreeCell

33 Learning Skills for In-City Driving
We have also trained ICARUS to drive in our in-city environment. We provide the system with tasks of increasing complexity. Learning transforms the problem-solving traces into hierarchical skills. The agent uses these skills to change lanes, turn, and park using only reactive control.

34 Skill Clauses Learned for In-City Driving

((parked ?me ?g1152)
 :percepts ((lane-line ?g1152) (self ?me))
 :start    ( )
 :subgoals ((in-rightmost-lane ?me ?g1152) (stopped ?me)))

((in-rightmost-lane ?me ?g1152)
 :percepts ((self ?me) (lane-line ?g1152))
 :start    ((last-lane ?g1152))
 :subgoals ((driving-well-in-segment ?me ?g1101 ?g1152)))

((driving-well-in-segment ?me ?g1101 ?g1152)
 :percepts ((lane-line ?g1152) (segment ?g1101) (self ?me))
 :start    ((steering-wheel-straight ?me))
 :subgoals ((in-lane ?me ?g1152)
            (centered-in-lane ?me ?g1101 ?g1152)
            (aligned-with-lane-in-segment ?me ?g1101 ?g1152)
            (steering-wheel-straight ?me)))

35 Learning Curves for In-City Driving

36 Transfer of Skills in ICARUS
The architecture also supports the transfer of knowledge in that:
- skills acquired later can build on those learned earlier;
- skill clauses are indexed by the goals they achieve;
- conceptual inference supports mapping across domains.
We are exploring such effects in ICARUS as part of a DARPA program on the transfer of learned knowledge. Testbeds include first-person shooter games, board games, and physics problem solving.

37 Transfer Effects in FreeCell
On tasks with more cards, prior training aids solution probability.

38 Transfer Effects in FreeCell
However, it also lets the system solve problems with less effort.

39 Architectures as Programming Languages
Cognitive architectures come with a programming language that:
- includes a syntax linked to its representational assumptions;
- inputs long-term knowledge and initial short-term elements;
- provides an interpreter that runs the specified program;
- incorporates tracing facilities to inspect system behavior.
Such programming languages ease construction and debugging of knowledge-based systems. For this reason, cognitive architectures support far more efficient development of software for intelligent systems.

40 Programming in ICARUS
The programming language associated with ICARUS comes with:
- a syntax for concepts, skills, beliefs, and percepts;
- the ability to load and parse such programs;
- an interpreter for inference, execution, planning, and learning;
- a trace package that displays system behavior over time.
We have used this language to develop adaptive intelligent agents in a variety of domains.

41 An ICARUS Agent for Urban Combat

42 ICARUS Memories and Processes
[Diagram: perception fills a perceptual buffer from the environment; conceptual inference draws on long-term conceptual memory to update short-term belief memory; skill retrieval and selection over long-term skill memory, guided by short-term goal memory, drives skill execution and the motor buffer; problem solving and skill learning operate when execution alone is insufficient.]

43 Similarities to Previous Architectures
ICARUS has much in common with other cognitive architectures like Soar (Laird et al., 1987) and ACT-R (Anderson, 1993):
1. Short-term memories are distinct from long-term stores
2. Memories contain modular elements cast as symbolic structures
3. Long-term structures are accessed through pattern matching
4. Cognition occurs in retrieval/selection/action cycles
5. Learning involves monotonic addition of elements to memory
6. Learning is incremental and interleaved with performance
These ideas all have their origin in theories of human memory, problem solving, and skill acquisition.

44 Distinctive Features of ICARUS
However, ICARUS also makes assumptions that distinguish it from these architectures:
1. Cognition is grounded in perception and action
2. Categories and skills are separate cognitive entities
3. Short-term elements are instances of long-term structures
4. Inference and execution are more basic than problem solving
5. Skill/concept hierarchies are learned in a cumulative manner
Some of these assumptions appear in Bonasso et al.'s (2003) 3T, Freed's APEX, and Sun et al.'s (2001) CLARION architectures.

45 Directions for Future Research
Future work on ICARUS should incorporate other ideas about:
- progressive deepening in forward-chaining search;
- the graded nature of categories and category learning;
- the model-based character of human reasoning;
- the persistent but limited nature of short-term memories;
- creating perceptual chunks to reduce these limitations;
- storing and retrieving episodic memory traces.
These additions will further increase ICARUS's debt to psychology.

46 Concluding Remarks
We need more research on integrated intelligent systems that:
- are embedded within a unified cognitive architecture;
- incorporate constraints based on ideas from psychology;
- demonstrate a wide range of intelligent behavior;
- are evaluated on multiple tasks in challenging testbeds.
For more information about the ICARUS architecture, see: http://cll.stanford.edu/research/ongoing/icarus/

47 End of Presentation

48 ICARUS Inference-Execution Cycle
On each successive execution cycle, the ICARUS architecture:
1. places descriptions of sensed objects in the perceptual buffer;
2. infers instances of concepts implied by the current situation;
3. finds paths through the skill hierarchy from top-level goals;
4. selects one or more applicable skill paths for execution;
5. invokes the actions associated with each selected path.
ICARUS agents are teleoreactive (Nilsson, 1994) in that they are executed reactively but in a goal-directed manner.

49 ICARUS Constraints on Skill Learning
What determines the hierarchical structure of skill memory?
- The structure emerges from the subproblems that arise during problem solving, which, because operator conditions and goals are single literals, form a semilattice.
What determines the heads of the learned clauses/methods?
- The head of a learned clause is the goal literal that the planner achieved for the subproblem that produced it.
What are the conditions on the learned clauses/methods?
- If the subproblem involved skill chaining, they are the conditions of the first subskill clause.
- If the subproblem involved concept chaining, they are the subconcepts that held at the subproblem's outset.

50 Cumulative Curves for Blocks World

