Goal-Driven Autonomy Learning for Long-Duration Missions
Héctor Muñoz-Avila


Goal-Driven Autonomy (GDA)

Key concepts:
 Expectation: the state predicted to hold after executing an action.
 Discrepancy: a mismatch between the expected state and the actual state.
 Explanation: a hypothesized reason for the mismatch.
 Goal: a state, or set of desired states, the agent pursues.

GDA is a model of introspective reasoning in which an agent revises its own goals.
Where does the GDA knowledge come from?
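The four concepts above can be sketched in code. This is a minimal, hypothetical illustration of one GDA step (the class, field names, and the toy explanation rule are assumptions, not part of the MIDCA/TREX implementation):

```python
from dataclasses import dataclass

@dataclass
class GdaStep:
    expectation: dict   # expected state after executing an action
    observed: dict      # actual state reported by sensors

    def discrepancy(self):
        """State variables whose observed value differs from the expected one."""
        return {k for k, v in self.expectation.items()
                if self.observed.get(k) != v}

def explain(discrepancy):
    # Placeholder explanation rule: a depth mismatch is attributed
    # to an unexpected current; anything else is left unexplained.
    if "depth" in discrepancy:
        return "unexpected current changed the vehicle's depth"
    return "unknown cause"

step = GdaStep(expectation={"depth": 50, "heading": 90},
               observed={"depth": 55, "heading": 90})
d = step.discrepancy()   # {"depth"}
reason = explain(d)
```

A goal-formulation step would then take `reason` and decide whether to revise the agent's goals, closing the GDA loop.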

Objectives

Enable greater autonomy and flexibility for unmanned systems.
 A key requirement of the project is for autonomous systems to remain robust over long periods of time.
 It is very difficult to encode all possible circumstances in advance (i.e., months ahead).
 Conditions (e.g., environmental) change over time.

Need for learning: adapt to a changing environment.
 GDA knowledge needs to adapt to uncertain and dynamic environments while performing long-duration activities.

Goal-Driven Autonomy (GDA)

Concrete objective: learn/adapt the GDA knowledge elements for each of the four components.

Three levels:
 At the object (TREX) level,
 At the meta-cognition (MIDCA) level, and
 At the integrative object and meta-reasoning (MIDCA+TREX) level.

Goal Management Learning

An initial set of priorities can be set at the beginning of the deployment.
But for a long-term mission, such priorities will need to be adjusted automatically as a function of changes in the environment.
For example, by default we might prioritize sonar sensing goals
 E.g., to detect potentially hazardous conditions surrounding the UUV.

Goal Management Learning - Example

Situation: four unknown contacts in the area.
Default: identification of each contact can be set as a goal.
 Each goal can be assigned a priority as a function of its distance to the UUV.
 Each unknown contact has some initial sensor readings.

Adaptation: once the contacts have been identified, the system might change the priority of future contacts with the same sensor readings.
 E.g., giving higher priority to a contact that could be a fast-moving vessel.
 Initial sensor readings for the same target might change as a result of changing environmental conditions.
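The contact-identification example can be sketched as a priority update. Everything here is an assumption for illustration (the inverse-distance rule, the 10x boost factor, and the field names are not from the talk):

```python
# Goals get priority inversely proportional to contact distance; once a
# sensor signature has been identified as a fast-moving vessel, future
# contacts with the same signature are boosted.

def base_priority(distance_m):
    return 1.0 / max(distance_m, 1.0)

identified = {}   # signature -> classification learned during the mission

def priority(contact):
    p = base_priority(contact["distance"])
    if identified.get(contact["signature"]) == "fast_vessel":
        p *= 10.0   # boost goals for signatures known to move quickly
    return p

contacts = [
    {"id": 1, "distance": 200.0, "signature": "sigA"},
    {"id": 2, "distance": 100.0, "signature": "sigB"},
]
# Before identification: the nearer contact (id 2) ranks first.
before = max(contacts, key=priority)["id"]

# Contact 1's signature is later identified as a fast-moving vessel,
# so it now outranks the nearer but slower contact.
identified["sigA"] = "fast_vessel"
after = max(contacts, key=priority)["id"]
```

The point is that the same goal set is re-ranked as mission experience accumulates, without re-encoding priorities by hand.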

Goal Formulation Learning

New goals can be formulated depending on:
 the discrepancies encountered,
 the explanation generated, and
 the observations from the state.

Example:
 As before, an unknown contact has some initial sensor readings.
 The contact turns out to be a large mass that moves very close to the vehicle, forcing a trajectory change.
 New goal: keep a distance from any contact with those initial readings.
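A goal-formulation rule for the example above could be sketched as a function of discrepancy, explanation, and state. The rule body, thresholds, and field names are hypothetical:

```python
# Formulate a stand-off goal when a trajectory discrepancy is explained
# by a large moving mass near the vehicle.

def formulate_goal(discrepancy, explanation, state):
    if explanation == "large_moving_mass" and state["range_m"] < 50:
        return {"type": "keep_distance",
                "signature": state["signature"],   # initial sensor readings
                "min_range_m": 200}
    return None   # no new goal warranted

goal = formulate_goal(
    discrepancy={"trajectory"},
    explanation="large_moving_mass",
    state={"range_m": 30, "signature": "sigX"},
)
```

Learning here would mean acquiring or refining such rules from experience rather than authoring them all before deployment.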

Explanation Learning

Explanations are assumed to be deterministic:
 Discrepancy → Explanation
But such mappings frequently assume perfect observability.
We need to relax this assumption to handle noisy sensor information.
Associate priorities with explanations:
 These need to be adapted over time.
 Prior work studied the underpinnings.
 Sensor readings need to be taken into account.
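One way to relax the deterministic discrepancy-to-explanation mapping is to keep an adaptable weight per (discrepancy, explanation) pair, updated from observed outcomes. This sketch and its update rule are illustrative assumptions, not the project's method:

```python
from collections import defaultdict

weights = defaultdict(lambda: 1.0)   # (discrepancy, explanation) -> priority

def observe_outcome(discrepancy, explanation, confirmed, lr=0.5):
    """Move the weight toward 1 if the explanation was confirmed, 0 if not."""
    key = (discrepancy, explanation)
    target = 1.0 if confirmed else 0.0
    weights[key] += lr * (target - weights[key])

def best_explanation(discrepancy, candidates):
    return max(candidates, key=lambda e: weights[(discrepancy, e)])

cands = ["sensor_noise", "strong_current"]
# Repeated evidence that depth discrepancies stem from currents.
for _ in range(5):
    observe_outcome("depth_off", "strong_current", confirmed=True)
    observe_outcome("depth_off", "sensor_noise", confirmed=False)
chosen = best_explanation("depth_off", cands)
```

The preferred explanation for a given discrepancy thus drifts with the environment over a long mission, instead of being fixed at design time.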

Expectation Learning

Expectations need to consider time intervals.
They must take into account the plan look-ahead and latency.
These two factors can be adapted over time by reasoning at the integrative object and meta-reasoning (MIDCA+TREX) level.
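Interval-based expectations can be sketched as follows: an expected effect holds not at a single tick but over a window shifted by latency and widened by the plan look-ahead. The function names and numbers are assumptions:

```python
def expectation_window(t_action, lookahead_s, latency_s):
    """Interval during which the expected effect should be observed."""
    return (t_action + latency_s, t_action + latency_s + lookahead_s)

def met_within(window, observation_times):
    """Expectation is satisfied if any matching observation falls in the window."""
    lo, hi = window
    return any(lo <= t <= hi for t in observation_times)

w = expectation_window(t_action=100.0, lookahead_s=30.0, latency_s=5.0)
ok = met_within(w, observation_times=[112.0])    # inside (105, 135)
late = met_within(w, observation_times=[140.0])  # outside: a discrepancy
```

Adapting `lookahead_s` and `latency_s` from experience is where the MIDCA+TREX-level reasoning would come in.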

Conclusions

Operating autonomously over long periods of time is a challenging task:
 It is too difficult to pre-define all circumstances in advance.
 Conditions change over time.

Our vision is for UUVs that adapt to uncertain and dynamic environments while performing long-duration activities
 by learning and refining GDA knowledge.