Soar: An Architecture for Human Behavior Representation
Randall W. Hill, Jr.
Information Sciences Institute, University of Southern California
http://www.isi.edu/soar/hill

What is Soar?
- An artificial intelligence architecture
  - A system for building intelligent agents
  - A learning system
- A cognitive architecture
  - A candidate Unified Theory of Cognition (Allen Newell, 1990)

History
- Created by Allen Newell, John Laird, and Paul Rosenbloom
- Officially created in 1983, with roots in the 1950s and onwards
- Currently on version 8 of the Soar architecture
- Written in ANSI C for portability and speed
- In the public domain

User Community
- Academia: USC, U. of Michigan, CMU, BYU, others
- International: Britain, Europe, Japan
- Commercial: Soar Technology, Inc.; ExpLore Reasoning Systems, Inc.

Objectives of the Architecture
- Support multi-method problem solving
- Apply to a wide variety of tasks and methods
- Combine reactive and goal-directed symbolic processing
- Represent and use multiple knowledge forms: procedural, declarative, episodic, iconic
- Support very large bodies of knowledge (>100,000 rules)
- Interact with the outside world
- Learn about all aspects of tasks

Notes: The full range of tasks includes natural-language processing and generation, and design. The full range of problem-solving methods includes means-ends analysis (MEA) and planning (HTN/PO). Interaction with the outside world is through sensors and effectors: e.g., helicopter pilots can send and receive messages, sense other entities and the terrain, and sense and control the helicopter and weapon systems. Also Quake agents.

Cognitive Behavior: Underlying Assumptions
- Goal-oriented
- Reactive
- Requires use of symbols
- Problem space hypothesis
- Requires learning

Notes:
- Symbolic computational system: manipulates and evaluates symbols via other symbol structures. That computations can be performed on symbol structures rather than on numbers was a key insight of 1950s AI.
- Problem spaces: an insight from early AI: use heuristic search to deal with difficult tasks. Tasks are formulated as search in a space of states by means of operators that produce new states.
- Production system: long-term memory for both program and data consists of parallel-acting condition-action rules. Flexible, intelligent action requires that data in working memory call forth the knowledge in long-term memory about what to do (the recognize-act cycle).
- Goal hierarchy: intelligent activity is driven by difficulties; subgoals are set up as ways of overcoming small difficulties. This principle applies both in studies of human problem solving and in AI.
- Chunking: based on George Miller's paper on chunking. Soar learns continuously by building new productions that capture knowledge Soar generated in the process of resolving difficulties (impasses).

Soar is an accumulation and integration of results achieved over the last 40 years.

Soar Architecture (diagram)
- Long-term knowledge: e.g., doctrine, tactics, flying techniques, missions, coordination, properties of planes, weapons, sensors, …
- Match: long-term knowledge matches against changes in working memory
- Working memory: situational assessment, intermediate results, actions, goals, …
- Perception / motor interface connects the agent to the external world

Soar Decision Cycle (Perception → Cognition → Motor)
- Input phase: sense the world, perceptual pre-processing, assert to working memory
- Elaboration phase: fire rules, generate preferences, update working memory
- Decision phase: evaluate operator preferences; select a new operator OR create a new state
- Output phase: command effectors, adjust perception

Which Rule(s) Should Fire?
- Fire all matched rules in parallel until quiescence
- Sequential operators generate behavior, e.g., turn, adjust-radar, select-missile, climb
- Provides a trace of behavior comparable to human actions
- Rules select, apply, and terminate operators:
  - Select: create preferences to propose and compare operators
  - Apply: modify the current situation, send motor commands
  - Terminate: determine that the operator is finished
(Diagram: repeating cycle of Input → Elaboration (propose operators) → Decide (select operator) → Elaboration (apply operator) → Output → Elaboration (terminate operator & propose) → …)

Example Rules
- PROPOSE: If I encounter the enemy, propose an operator to break contact with the enemy.
- SELECT: If I am enroute to my holding area and I come into contact with an enemy unit, prefer breaking contact over engaging targets.
- APPLY: If the enemy is to my left, break to the right.
- APPLY: If the enemy is to my right, break to the left.
- TERMINATE: If break-contact is the current operator, and contact is broken, then terminate the break-contact operator.
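As a rough illustration of how such rules drive the decision cycle, the following toy Python sketch (not the actual Soar rule language or API; all names and the numeric-preference scheme are invented for illustration) fires matching rules to quiescence and then selects the most-preferred operator:

```python
# Toy sketch of one Soar-style decision cycle. Working memory is a set of
# symbols; each rule is (condition-set, (operator, preference)). Rules fire
# in parallel until quiescence; then the decision phase picks an operator.
def run_cycle(wm, rules, percepts):
    wm = set(wm) | set(percepts)            # input phase: assert percepts
    preferences = {}                        # operator -> best preference seen
    fired = set()
    while True:                             # elaboration: fire to quiescence
        newly = [name for name, (cond, _) in rules.items()
                 if name not in fired and cond <= wm]
        if not newly:
            break
        for name in newly:                  # all matches fire "in parallel"
            op, value = rules[name][1]
            preferences[op] = max(preferences.get(op, 0), value)
            fired.add(name)
    # decision phase: select the best-preferred operator (in real Soar a
    # tie or absence of preferences would create an impasse and a substate)
    operator = max(preferences, key=preferences.get) if preferences else None
    return operator, wm

# The PROPOSE/SELECT rules above, encoded with invented symbols: sighting
# the enemy while enroute to the holding area prefers break-contact.
rules = {
    "propose*engage": ({"enemy-seen"}, ("engage-targets", 1)),
    "propose*break":  ({"enemy-seen", "enroute-to-holding-area"},
                       ("break-contact", 2)),
}
op, wm = run_cycle(set(), rules, {"enemy-seen", "enroute-to-holding-area"})
```

Here the SELECT knowledge is collapsed into numeric preferences for brevity; real Soar uses symbolic preferences (acceptable, better, best, etc.) rather than numbers.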

Goal-Driven Behavior
- Complex operators are decomposed into simpler ones
- Occurs whenever rules are insufficient to apply an operator
- Decomposition is dynamic and situation-dependent
- Over 90 operators in RWA-Soar
(Diagram: operator hierarchy, e.g., Execute-Mission → Fly-Flight-Plan, Engage, Prepare-to-return-to-base; Fly-Flight-Plan → Fly-control-route, Select-route-point; flight modes High-level, Low-level, Contour, NOE; Engage → Mask, Unmask, Employ-weapons; also Initialize-hover, Return-to-control-…)
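The decomposition idea can be sketched as follows. The hierarchy table is a hypothetical reconstruction of the RWA-Soar fragment on the slide (the slide's diagram structure is partly garbled, so the grouping is a guess); note that in Soar the decomposition is not a fixed table but arises at run time, when the inability to apply an operator creates an impasse and a subgoal.

```python
# Illustrative operator decomposition (hypothetical grouping of the
# RWA-Soar operators named on the slide; not an actual Soar mechanism).
hierarchy = {
    "Execute-Mission": ["Fly-Flight-Plan", "Engage",
                        "Prepare-to-return-to-base"],
    "Fly-Flight-Plan": ["Fly-control-route", "Select-route-point"],
    "Engage": ["Mask", "Unmask", "Employ-weapons"],
}

def decompose(operator):
    """Expand a complex operator into the primitive operators beneath it."""
    subs = hierarchy.get(operator)
    if subs is None:
        return [operator]          # primitive: rules can apply it directly
    leaves = []
    for sub in subs:
        leaves.extend(decompose(sub))
    return leaves
```

Because real decomposition is situation-dependent, a given complex operator may expand differently on different occasions; the static table here only conveys the shape of the idea.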

Coordination of Behavior & Action
- Combines goal-driven and reactive behaviors:
  - Suggest new operators anywhere in the goal hierarchy
  - Generate preferences for operators
  - Terminate operators
- Provides a limited multi-task capability
  - Constrained by Soar's single goal hierarchy
- Other possible approaches:
  - Multiple goal hierarchies
  - Flush and rebuild goal hierarchies when needed

Modeling Perceptual Attention
Problem:
- Naïve vision model: entity-level resolution, unrealistic field of view (360°, 7 km), no focus of attention
- Perceptual overload often occurs; the pilot crashes the helicopter
Approach:
- Zoom-lens model of attention
- Gestalt grouping in the pre-attentive stage
- Multi-resolution focus
- Control of attention:
  - Goal-driven: task-based, group-oriented
  - Stimulus-driven: abrupt onset, contrast

Naïve Vision Model vs. Model of Attention
- Naïve vision model: entity-oriented, stimulus-driven, no perceptual control
- Model of attention: Gestalt grouping, zoom-lens effect, goal- and stimulus-driven

Underlying Technologies/Algorithms
- Optimized RETE algorithm: enables efficient matching in large rule sets
- Universal subgoaling
- Operator-based architecture
- Truth maintenance system (TMS)
- Learning algorithm: chunking mechanism
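The key idea behind RETE-style matching can be conveyed with a greatly simplified sketch (illustrative only; real RETE also has beta/join nodes that bind shared variables across conditions, and the class and element names here are invented): instead of re-testing every rule against all of working memory each cycle, each condition keeps an "alpha memory" of the working-memory elements that pass its test, updated incrementally as elements are added or removed.

```python
# Minimal flavor of RETE-style incremental matching (greatly simplified).
class AlphaMemory:
    def __init__(self, test):
        self.test = test          # predicate over one working-memory element
        self.items = set()        # elements currently passing the test

    def add(self, wme):           # called only when WM changes,
        if self.test(wme):        # not on every decision cycle
            self.items.add(wme)

    def remove(self, wme):
        self.items.discard(wme)

# A two-condition rule matches when both alpha memories are non-empty
# (real RETE additionally joins the memories on shared variables).
enemy = AlphaMemory(lambda w: w[0] == "enemy")
enroute = AlphaMemory(lambda w: w[0] == "enroute")
for wme in [("enemy", "mig-29"), ("enroute", "holding-area")]:
    enemy.add(wme)
    enroute.add(wme)
rule_matches = bool(enemy.items) and bool(enroute.items)
```

The cost of matching then tracks the rate of change in working memory rather than its total size, which is what makes >100,000-rule systems practical.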

Soar Applications
- Agents for synthetic battlespaces: commanders and helicopter pilots (USC); fixed-wing aircraft pilots (U. of Michigan, Soar Technology)
- Animated, pedagogical agents: Steve (Rickel and Johnson, USC; Center for Advanced Research in Technology for Education, CARTE)
- Game agents: Quake (Laird and van Lent, U. of Michigan)

Intelligent Synthetic Forces
- Helicopter pilots: teamwork
- C3I modeling: planning, execution, re-planning, collaboration

Steve: An Embodied Intelligent Agent for Virtual Environments
- A 3D agent that interacts with students in virtual environments
- Can take different roles: teammate, teacher, guide, demonstrator
- Multiple trainees and agents work together in virtual teams
- Intelligent tutoring in the context of a shared team environment

Soar/Games Project (U. of Michigan, Laird and van Lent)
- Build an AI engine around the Soar AI architecture
- Soar/Quake II project
- Soar/Descent 3 project
(Diagram: the game sends sensor data through an interface DLL and a socket to the AI engine (Soar) with its knowledge files, which returns actions. DLL = dynamically loadable library; Quake II has a published API and automatically loads a DLL to connect to it.)

Validation Efforts
- Intelligent synthetic forces: Synthetic Theater of War '97 experience; subject-matter experts
- Human factors / HCI studies: e.g., B. John (CMU) & R. Young (U.K.)
- Better models for validating integrated models of human behavior are needed

Summary of Capabilities/Limitations
Capabilities:
- Mixes goal-oriented and reactive behavior
- Supports interaction with the external world
- The architecture lends itself to creating integrated models of human behavior
Limitations:
- The learning mechanism is not easily used

Future Development / Application Plans
- Integrate emotion with cognition
- Learn from experience: incorporate inductive models of learning
- Continue work on models of collaboration in complex decision-making
- Extend the current C3I models