Lecture 4: Command and Behavior Fusion Gal A. Kaminka Introduction to Robots and Multi-Robot Systems Agents in Physical and Virtual Environments.

2 © Gal Kaminka Previously, on Robots…
Behavior Selection/Arbitration
Activation-based selection: winner-take-all selection, behavior networks
argmax selection (priority, utility, success likelihood, …)
State-based selection: world models, preconditions and termination conditions
Sequencing through FSAs

3 © Gal Kaminka This week, on Robots…
Announcements
Programming homework: this is not an option!
Project: expectations
Rosenblatt-Payton
Subsumption: command override → information loss
An alternative design: Command Fusion
Behavior Fusion
Context-dependent merging of behaviors

4 © Gal Kaminka Brooks' Subsumption Architecture
Multiple layers, divided by competence
All layers get all sensor readings and issue commands
Higher layers override commands from lower layers
What happens when a command is overridden?

5 © Gal Kaminka Information Loss in Selection
A layer chooses the best command for its competence level
Implication: no other command is possible
But the layer implicitly knows about satisficing solutions
Satisficing: loosely translated as "good enough"
These get ignored if the command is overridden

6 © Gal Kaminka Why is this a problem?
Example:
Avoid-obstacle wants to keep away from an obstacle on the left: any heading in [20, 160] is possible; best is a +90 degree heading
Seek-goal wants to keep towards the goal ahead: any heading in [-30, 30] is possible; best is a 0 degree heading
There is no way to layer these such that overriding works
But [20, 30] is good for both

7 © Gal Kaminka There is a deeper problem
When a higher-level behavior subsumes another, it must take the other's decision-making into account
A higher behavior may have to contain lower behaviors: include within itself their decision-making
Example: Goal-seek overrides avoid
Must still avoid while seeking the goal
May need to reason about obstacles while seeking

8 © Gal Kaminka Rosenblatt-Payton Command Fusion
Two principles:
Every behavior weights ALL possible commands; weights are numbers in [-inf, +inf] ("no information loss")
Any behavior can access the weights of another, not just override its output ("no information hidden")

9 © Gal Kaminka Weighting outputs
AVOID does not choose a specific heading
It provides preferences for all possible headings
[Diagram: AVOID maps sensor input to a weight for each heading, from -20 to 160 degrees]

10 © Gal Kaminka Merging outputs
Now combine AVOID and SEEK-GOAL by adding
[Diagram: AVOID and SEEK weight curves over headings from -20 to 160 degrees]

11 © Gal Kaminka Merging outputs
Now combine AVOID and SEEK-GOAL by adding
Then choose the top one (e.g., winner-take-all)
[Diagram: summed AVOID and SEEK weights per heading]
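The add-then-choose scheme on these slides can be sketched as follows. The heading ranges match the earlier avoid/seek example ([20, 160] and [-30, 30]), but the weight functions themselves are illustrative assumptions, not taken from the paper.

```python
# Sketch of Rosenblatt-Payton-style command fusion (weights assumed for
# illustration). Each behavior assigns a weight to every candidate heading;
# fusion sums the weights, then winner-take-all picks the best heading.

def fuse(behaviors, candidates):
    """Sum each behavior's weight for every candidate, return the best."""
    totals = {c: sum(b(c) for b in behaviors) for c in candidates}
    return max(totals, key=totals.get)

def avoid(heading):
    """AVOID: obstacle on the left, so headings in [20, 160] are acceptable."""
    return 1.0 if 20 <= heading <= 160 else -1.0

def seek(heading):
    """SEEK-GOAL: prefers headings near 0, acceptable within [-30, 30]."""
    return 1.0 - abs(heading) / 30.0 if -30 <= heading <= 30 else -1.0

candidates = list(range(-180, 181, 10))
best = fuse([avoid, seek], candidates)
# best lands in [20, 30]: the compromise that pure command
# overriding (subsumption) would have lost.
```

Note that neither behavior's individually best heading (+90 for AVOID, 0 for SEEK) wins; the sum surfaces the satisficing overlap.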

12 © Gal Kaminka Advantages
Easy to add new behaviors that modify heading: their output is also merged
Considers all possibilities, finds useful compromises
Negative weights are possible and useful: can forbid certain commands, in principle
And…

13 © Gal Kaminka Advantages
Easy to add new behaviors that modify heading: their output is also merged
Considers all possibilities, finds useful compromises
Negative weights are possible and useful: can forbid certain commands, in principle
And… Intermediate Variables

14 © Gal Kaminka Rosenblatt-Payton 2nd Principle: No information hidden
Not only should all of a behavior's choices be open
Internal state can also be important for other layers
In Rosenblatt-Payton, a variable is a vector of weights
We operate on, combine, and reason about weights, even as internal state variables
Example: the trajectory speed behavior (in the article) combines the internal "Safe-Path" and "Turn-choices" outputs

15 © Gal Kaminka A problem
Two behaviors:
If the goal is far, speed should be fast
If an obstacle is close, speed should be stop
The robot may not slow sufficiently, because the overall result is a compromise between stop and fast
Options:
Tweak weights… hack the numbers
Add distance to goal into the obstacle-avoidance rules
Re-define fast to be slower

16 © Gal Kaminka Behavior Weighting
Use the decision context to weight the behaviors
Dynamic weighting: change weights based on context
Have a meta-behavior with context rules:
IF obstacle-close THEN increase weight of AVOID
IF NOT obstacle-close THEN increase weight of SEEK
Scale/weight based on context rules: the weights of AVOID and SEEK are multiplied by the behavior weight
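The context rules above can be sketched as a meta-behavior that scales each behavior's whole weight vector before the vectors are summed and fused. The behavior outputs and gain values below are illustrative assumptions.

```python
# Sketch of context-dependent behavior weighting (names and gains are
# assumptions). A meta-behavior's context rules scale each behavior's
# whole weight vector before the vectors are summed and fused.

def context_gains(obstacle_close):
    """Context rules: boost AVOID near an obstacle, boost SEEK otherwise."""
    if obstacle_close:
        return {"AVOID": 3.0, "SEEK": 1.0}
    return {"AVOID": 1.0, "SEEK": 3.0}

def fuse_weighted(outputs, gains, candidates):
    """Scale each behavior's per-command weights by its context gain,
    sum across behaviors, and pick the top command (winner-take-all)."""
    totals = {c: sum(gains[name] * out[c] for name, out in outputs.items())
              for c in candidates}
    return max(totals, key=totals.get)

candidates = [0, 45, 90]
outputs = {
    "AVOID": {0: -1.0, 45: 0.5, 90: 1.0},  # wants to turn away
    "SEEK":  {0: 1.0, 45: 0.2, 90: -1.0},  # wants to go straight
}
near = fuse_weighted(outputs, context_gains(True), candidates)   # 90: AVOID dominates
far = fuse_weighted(outputs, context_gains(False), candidates)   # 0: SEEK dominates
```

The same per-command weight vectors produce different winners purely because the context rule rescales them, which is the point of dynamic weighting.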

17 © Gal Kaminka Merging outputs: Static
Combine AVOID and SEEK-GOAL by adding
Then choose the top one (e.g., winner-take-all)
[Diagram: summed AVOID and SEEK weights per heading, equal behavior weights]

18 © Gal Kaminka When obstacle close
Combine AVOID and SEEK-GOAL by adding
Then choose the top one (e.g., winner-take-all)
[Diagram: summed weights per heading with AVOID scaled up]

19 © Gal Kaminka When obstacle not close
Combine AVOID and SEEK-GOAL by adding
Then choose the top one (e.g., winner-take-all)
[Diagram: summed weights per heading with SEEK scaled up]

20 © Gal Kaminka Final thoughts…
Context rules combine activation and fusion:
Activation of a behavior by priority
A behavior has a vector output (rather than a single value)
When should we use this? What about meta-meta behaviors?
Compromises are a problem: they can cause selection of a non-satisficing solution
Sometimes one must choose!
Compromises must be on package deals: otherwise, you get behavior X on speed and behavior Y on heading!

21 © Gal Kaminka Questions?

22 © Gal Kaminka Potential Fields
Developed independently of Payton-Rosenblatt
Inspired by physics: the robot is a particle moving through forces of attraction and repulsion
Basic idea:
Each behavior is a force, pushing the robot this way or that
The combination of forces results in the action selected by the robot

23 © Gal Kaminka Example: Goal-directed obstacle-avoidance
Move towards the goal while avoiding obstacles
We have seen this before:
In Brooks' subsumption architecture (two layers)
In Payton-Rosenblatt command fusion
Two behaviors:
One pushes the robot towards the goal
One pushes the robot away from the obstacle

24 © Gal Kaminka SEEK Goal

25 © Gal Kaminka Avoid Obstacle

26 © Gal Kaminka In combination…

27 © Gal Kaminka Run-time
The robot calculates the forces currently acting on it
Combines all forces
Moves along the resulting vector

28 © Gal Kaminka But how to calculate these fields?
Translate a position into a direction and magnitude
Given the x, y of the robot's position, generate the force acting on this position (dx, dy)
Do this for all forces
Combine forces:
Final_dx = sum of all forces' dx
Final_dy = sum of all forces' dy
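The run-time combination step above can be sketched directly. The two example fields (a pull toward an assumed goal at (1, 0) and a push away from the origin) are illustrative, not from the lecture.

```python
# Sketch of the run-time combination step: each field maps the robot's
# (x, y) position to a force vector (dx, dy); the final motion vector is
# the component-wise sum over all active fields.

def combine(fields, x, y):
    """Sum the (dx, dy) contributions of every active field."""
    final_dx = sum(f(x, y)[0] for f in fields)
    final_dy = sum(f(x, y)[1] for f in fields)
    return final_dx, final_dy

attract = lambda x, y: (1.0 - x, 0.0 - y)  # pull toward assumed goal (1, 0)
repel = lambda x, y: (x, y)                # push away from the origin

dx, dy = combine([attract, repel], 0.5, 0.5)  # -> (1.0, 0.0)
```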

29 © Gal Kaminka Example: SEEK
Given: (Xg, Yg) goal coordinates, (x, y) current position
Calculate:
Find the distance to the goal: d = sqrt((Xg-x)^2 + (Yg-y)^2)
Find the angle to the goal: a = tan⁻¹((Yg-y)/(Xg-x))
Now:
If d = 0, then dx = dy = 0
If d < s, then dx = d cos(a), dy = d sin(a)
If d >= s, then dx = s cos(a), dy = s sin(a)
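A transcription of the SEEK field on this slide. The slide leaves s implicit; here it is assumed to be the maximum force magnitude, passed as a parameter. `atan2` is used instead of tan⁻¹ of the ratio so every quadrant is handled correctly.

```python
import math

# SEEK field: attractive force toward the goal. Magnitude grows linearly
# with distance up to d = s, then is capped at s (s assumed as a parameter).

def seek_force(xg, yg, x, y, s=1.0):
    """Force (dx, dy) pulling the robot at (x, y) toward the goal (xg, yg)."""
    d = math.sqrt((xg - x) ** 2 + (yg - y) ** 2)
    if d == 0:
        return 0.0, 0.0               # at the goal: no force
    a = math.atan2(yg - y, xg - x)    # angle to goal, all quadrants
    m = d if d < s else s             # linear inside radius s, capped outside
    return m * math.cos(a), m * math.sin(a)

# Far from the goal the force has magnitude s, pointing at the goal:
fx, fy = seek_force(3.0, 4.0, 0.0, 0.0, s=1.0)  # direction (0.6, 0.8)
```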

30 © Gal Kaminka Other types of potential fields
Potential fields are useful for more than avoiding obstacles
Many different types can be set up in advance and combined dynamically
Each field is a behavior
All behaviors are always active

31 © Gal Kaminka Uniform potential field
dx = constant, dy = 0
e.g., for following a wall, or returning to an area

32 © Gal Kaminka Perpendicular field
dx = +constant or -constant, depending on the reference
e.g., for avoiding a fence
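The uniform and perpendicular fields above can be sketched in a few lines. The constant c and the fence line (assumed here to be y = 0) are illustrative choices.

```python
# Sketches of the uniform and perpendicular fields; the constant c and
# the fence line y = 0 are assumptions for illustration.

def uniform_field(x, y, c=1.0):
    """Uniform field: the same force everywhere (e.g., follow a wall)."""
    return c, 0.0

def perpendicular_field(x, y, c=1.0):
    """Perpendicular field: constant push away from a reference line,
    here assumed to be a fence along y = 0; the sign depends on the side."""
    return 0.0, c if y >= 0 else -c
```

Both are position-independent in magnitude, which is what distinguishes them from the attractive/repulsive fields whose strength varies with distance.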

33 © Gal Kaminka Problem: Stuck!
The robot can get stuck in a local minimum: all forces in a certain area push towards a sink
Once the robot is stuck, it cannot get out
One solution: a random field
Distance d and angle a chosen randomly
Gets the robot unstuck, but also makes it unstable

34 © Gal Kaminka A really difficult problem:

35 © Gal Kaminka A surprising solution
Balch and Arkin found a surprisingly simple solution: the avoid-the-past field
Repulsion from places already visited
This is a field that changes dynamically
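One way an avoid-the-past field might be implemented is to record visited grid cells and let each one repel the robot. The grid discretization, gain, and inverse-square falloff below are assumptions for illustration, not Balch and Arkin's exact formulation.

```python
# Hedged sketch of an avoid-the-past field: the robot records visited grid
# cells, and every visited cell repels it. The cell size, gain, and
# inverse-square falloff are assumed values.

class AvoidPast:
    def __init__(self, cell=1.0, gain=0.5):
        self.visited = {}   # (cell_x, cell_y) -> visit count
        self.cell = cell
        self.gain = gain

    def visit(self, x, y):
        """Record the robot's current position; the field changes dynamically."""
        key = (round(x / self.cell), round(y / self.cell))
        self.visited[key] = self.visited.get(key, 0) + 1

    def force(self, x, y):
        """Sum repulsion pointing away from every visited cell center."""
        dx = dy = 0.0
        for (cx, cy), count in self.visited.items():
            vx, vy = x - cx * self.cell, y - cy * self.cell
            d2 = vx * vx + vy * vy
            if d2 < 1e-9:
                continue  # on the cell center: no defined direction
            push = self.gain * count / d2   # inverse-square magnitude
            d = d2 ** 0.5
            dx += push * vx / d
            dy += push * vy / d
        return dx, dy

ap = AvoidPast()
ap.visit(0.0, 0.0)
fx, fy = ap.force(1.0, 0.0)  # pushed in +x, away from the visited cell
```

Because `visit` keeps updating the map, the field changes as the robot moves, which is what lets it escape a box canyon that static fields cannot.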