Intelligent Robotics
Today: AI Control & Localization
Monday: Hunting Demonstration
CoWorker by iRobot Corporation: http://www.irobot.com/industrial/coworker.asp

Robot Control: Layers of Abstraction

Robot: Movement, Sensing, Reasoning
Assessment of your environment and the robot's goals:
What will it manipulate? How will it move? (ME)
What situations does it need to sense? (sensor suite, signal processing) (ECE)
How does it decide what actions to take? (CS)

What does it take to get an intelligent robot to do a simple task?
Robot parts: two arms, vision, and a brain.
The brain can communicate with all parts.
Arms can take commands: left, right, up, down, forward, and backward.
Arms can answer yes/no about whether they are touching something, but cannot distinguish what they are touching.
The vision system can answer any question the brain asks, but cannot volunteer information.
The vision system can move around to get a better view.

Why is this simple task so difficult?
Coordination is difficult: feedback is indirect.
Knowledge about the environment must be updated: unexpected events force the robot to re-plan.
Different coordinate systems need to be resolved: box-centered and arm-centered.

Dealing with the Physical World
A robot needs to be able to handle its environment, or the environment must be altered and controlled.
Closed World Assumption: the robot knows everything relevant to performing its task (a "complete world model"); there are no surprises.
Open World Assumption: the robot does not assume complete knowledge; it must be able to handle unexpected events.

Spectrum of AI Robot Control

Deliberative/Hierarchical Robot Control
Emphasizes planning. The robot senses the world, constructs a model representation of the world, "shuts its eyes," creates a plan of action, executes the action, then senses the results of the action.
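
A minimal sketch of that sense-plan-act cycle in C. All the names here (WorldModel, sense_world, make_plan, execute_plan) are hypothetical placeholders, not functions from the lecture's code:

/* Hypothetical sense-plan-act skeleton. */
typedef struct WorldModel WorldModel;
typedef struct Plan Plan;

WorldModel *sense_world(void);         /* sense: read all sensors, build the model */
Plan       *make_plan(WorldModel *m);  /* plan: "eyes shut", search over the model only */
void        execute_plan(Plan *p);     /* act: carry out the plan open-loop */

void deliberative_loop(void)
{
    while (1) {
        WorldModel *model = sense_world();
        Plan *plan = make_plan(model);
        execute_plan(plan);
        /* only now does the robot sense again to check the results */
    }
}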

Deliberative: Good & Bad
Good: goal oriented; solves problems that need cognitive abilities; can optimize its solution; predictable.
Bad: dependence on a world model; requires a closed world assumption; symbol grounding problem; frame problem; qualification problem.

Reactive/Behavior-Based Control
Sense → Act
Ignores world models: "The world is its own best model."
Tightly couples perceptions to actions; no intervening abstract representations.
Primitive behaviors are used as building blocks; individual behaviors can be made up of primitive behaviors.
Reactive: no memory. Behavior-based: short-term memory (STM).

Behavior Coordination
If multiple behaviors are possible, which one does the robot execute?

Where does the overall robot behavior come from?
There is no planning; the goal is generally not explicit.
Emergent behavior: emergence is the appearance of a novel property of a whole system that cannot be explained by examining the individual components, for example the wetness of water.
The overall behavior is a result of the robot's interaction with its surroundings and the coordination between the individual behaviors.

Reactive: Good & Bad
Good: works with the open world assumption; provides a timely response in a dynamic environment that is difficult to characterize and contains a lot of uncertainty.
Bad: unpredictable; low-level intelligence; cannot manage tasks that require LTM or planning, such as tasks requiring localization or order-dependent steps.

Hybrid Paradigm
Combines reactive and deliberative control: a planner sits above the reactive Sense → Act coupling (plan, then sense-act).

Reactive/Behavior-Based Control Design
Sense → Act
Design considerations:
What are the primitive behaviors?
What are the individual behaviors? (Individual behaviors can be made up of primitive and other individual behaviors.)
How are behaviors grounded to sensors and actuators?
How are these behaviors effectively coordinated? If more than one behavior is appropriate for the situation, how does the robot choose which to take?

Design for robot soccer
What primitive behaviors would you program? What individual behaviors?
What situations does the robot need to recognize?
If the "pass behavior" is active and the "shoot behavior" is active, how does it choose?

Situated Activity Design
The robot's actions are based on the situations in which it finds itself.
Robot perception is characterized by recognizing which situation the robot is in and choosing an appropriate action.

Implementing Behaviors
Schema = knowledge + process.
Perceptual schema: interpretation of sensory data.
Motor schema: actions to take.
Releaser: instantiates the motor schema.
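
One way to picture "knowledge + process" is a struct bundling the three parts; the type and field names below are illustrative, not from the lecture:

/* A minimal sketch of a schema in C. */
typedef int  (*releaser_fn)(void);             /* true when the triggering stimulus is present */
typedef void (*perceptual_fn)(void *percept);  /* perceptual schema: interpret sensor data */
typedef void (*motor_fn)(const void *percept); /* motor schema: turn the percept into action */

typedef struct {
    releaser_fn   releaser;   /* instantiates the motor schema when it fires */
    perceptual_fn perceive;
    motor_fn      act;
} Schema;

void run_schema(Schema *s, void *percept)
{
    if (s->releaser()) {       /* the releaser gates the behavior */
        s->perceive(percept);  /* interpret sensory data */
        s->act(percept);       /* take the corresponding action */
    }
}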

Schema for Toad Feeding Behavior

Visual Representation in a Finite State Automaton

Design of Behaviors Represented by a State Transition Table
q ∈ K: the set of states (behaviors)
σ ∈ Σ: the set of releasers
δ: the transition function
s: the state the robot starts in
q ∈ F: the set of terminating states
Trash pick-up example.
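
A sketch of a state transition table in C; the trash pick-up states and releasers below are illustrative guesses at the example, not the slide's exact table:

/* K = states (behaviors), Sigma = releasers, delta: K x Sigma -> K */
enum state    { WANDER, GRAB, HOME, DROP, N_STATES };
enum releaser { NOTHING, SEE_TRASH, HAVE_TRASH, AT_HOME, N_RELEASERS };

const int delta[N_STATES][N_RELEASERS] = {
    /* NOTHING  SEE_TRASH  HAVE_TRASH  AT_HOME */
    {  WANDER,  GRAB,      WANDER,     WANDER },  /* WANDER: roam until trash is seen */
    {  GRAB,    GRAB,      HOME,       GRAB   },  /* GRAB: pick up, then head home   */
    {  HOME,    HOME,      HOME,       DROP   },  /* HOME: travel until at home base */
    {  DROP,    DROP,      DROP,       WANDER },  /* DROP: release, resume wandering */
};

int step(int q, int sigma) { return delta[q][sigma]; }  /* one transition */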

Competitive Coordination: Action Selection Method
Behaviors compete using an activation level.
The response associated with the behavior with the highest activation level wins.
The activation level is determined by attention (sensors) and intention (goals).
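
A minimal sketch of that competition in C; the attention and intention fields are hypothetical stand-ins for sensor-derived and goal-derived contributions:

typedef struct {
    const char *name;
    double attention;   /* contribution from current sensor stimuli */
    double intention;   /* contribution from current goals */
} Behavior;

int select_action(const Behavior b[], int n)
{
    int winner = 0;
    int i;
    for (i = 1; i < n; i++) {
        /* activation level = attention + intention; the highest level wins */
        if (b[i].attention + b[i].intention >
            b[winner].attention + b[winner].intention)
            winner = i;
    }
    return winner;   /* index of the behavior whose response is executed */
}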

Competitive Coordination: Suppression Network Method
The response is determined by a fixed prioritization in which a strict behavioral dominance hierarchy exists.
Higher-priority behaviors can inhibit or suppress lower-priority behaviors.

Subsumption Architecture
A suppression network architecture built in layers.
Each layer gives the system a set of pre-wired behaviors.
Layers reflect a hierarchy of intelligence: lower layers are basic survival functions (obstacle avoidance); higher layers are more goal directed (navigation).
The layers operate asynchronously (multi-tasking).
Lower layers can override the output from behaviors in the next higher level (rank ordering).

Foraging Example

Using Multiple Behaviors Requires the Robot to Multi-task
Multi-tasking is having more than one computing process run in parallel; true parallel processing requires multiple CPUs.
IC functions can be run as processes operating in parallel; the processor is actually shared among the active processes.
main is always an active process.
Each process, in turn, gets a slice of processing time (5 ms).
Each process gets its own default program stack of 256 bytes.
A process, once started, continues until it has received enough processing time to finish (or until it is "killed" by another process).
Global variables are used for inter-process communication.

IC: Functions vs. Processes
Functions are called sequentially; processes can be run simultaneously.
start_process(function-call); returns a process id.
Processes halt when their function exits or their parent process exits.
Processes can be halted by using kill_process(process_id);
hog_processor(); allows a process to take over the CPU for an additional 250 milliseconds, cancelled only if the process finishes or defers.
defer(); causes a process to give up the rest of its time slice until next time.
More info: http://www.newtonlabs.com/ic/ic_11.html#SEC77

IC: Process Example

#use pause.ic

int done;   /* global variable for inter-process communication */

void main()
{
    pause();
    done = 0;
    start_process(ao_when_stop());
    start_process(avoidBehavior());
    start_process(cruiseBehavior());
    start_process(collisionBehavior());
    start_process(arbitrate());
    /* . . . more code . . . */
}

void ao_when_stop()
{
    while (stop_button() == 0);   /* wait for stop button */
    done = 1;                     /* signal other processes */
    ao();                         /* stop all motors */
}

// Example Behavior: Avoid

int avoidCommand;   // global variable to indicate when behavior is active

void avoidBehavior()
{
    while (1) {
        if (LIGHT_SENSOR < averageLight - 3) {   /* releaser */
            // Back away from the border.
            avoidCommand = COMMAND_STOP;
            Wait(20);
            avoidCommand = COMMAND_REVERSE;
            Wait(50);
            // Turn left or right for a random duration.
            if (Random(1) == 0)
                avoidCommand = COMMAND_LEFT;
            else
                avoidCommand = COMMAND_RIGHT;
            Wait(Random(200));
            avoidCommand = COMMAND_NONE;
        }
    }
}

// Example Coordinator function: Arbitrator

int motorCommand;   // global command setting the motors to the winning behavior's motor schema

void arbitrate()
{
    while (1) {
        if (cruiseCommand != COMMAND_NONE)
            motorCommand = cruiseCommand;
        if (avoidCommand != COMMAND_NONE)
            motorCommand = avoidCommand;
        if (collisionCommand != COMMAND_NONE)
            motorCommand = collisionCommand;
        motorControl();   // set actual motor controls to winning behavior
    }
}

// Example function grounding behavior to motor commands

void motorControl()
{
    if (motorCommand == COMMAND_FORWARD) {
        fd(1);
        fd(3);
    } else if (motorCommand == COMMAND_REVERSE) {
        bk(1);
        bk(3);
    } else if (motorCommand == COMMAND_LEFT) {
        fd(1);
        bk(3);
    } else if (motorCommand == COMMAND_RIGHT) {
        bk(1);
        fd(3);
    } else if (motorCommand == COMMAND_STOP) {
        brake(1);
        brake(3);
    }
}

Localization & Navigation

Mobile Robots: Computers on the Move
Where am I? The localization problem; the kidnapped robot problem.
What way should I take to get there? Path planning (pathing).
Where am I going? Mission planning.
Where have I been? Map making or map enhancement.

"Where am I?" Localization: given an initial estimate, q, of the robot's location in configuration space, maintain an ongoing estimate of the robot's pose with respect to the map, P(q).

Overview of Localization

Behavior-Based Navigation
Navigating without an abstraction of the world (a map). Can Robbie the Reactive Robot get from point A to point B?

Goal-Directed Behavior-Based Control

Start-Goal Algorithm: Lumelsky Bug Algorithms

Lumelsky Bug Algorithms
Unknown obstacles; known start and goal.
Simple "bump" sensors and encoders.
Choose an arbitrary direction to turn (left/right), used for all turns, called the "local direction."
Motion is like an ant walking around:
In Bug 1, the robot goes all the way around each obstacle encountered, recording the point nearest the goal, then goes around again to leave the obstacle from that point.
In Bug 2, the robot goes around each obstacle encountered until it can continue on its previous path toward the goal (see the sketch below).
How would Robbie navigate an office building?
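
A high-level sketch of Bug 2 under the slide's assumptions (bump sensors, encoders, known goal); every helper function below is hypothetical. The m-line is the line from start to goal, fixed at the outset:

int    at_goal(void);
int    bumped(void);
int    on_m_line(void);           /* back on the start-goal line? */
double distance_to_goal(void);
void   step_toward_goal(void);    /* one motion step along the m-line */
void   follow_wall_step(void);    /* one step around the obstacle, in the local direction */

void bug2(void)
{
    while (!at_goal()) {
        if (!bumped()) {
            step_toward_goal();
        } else {
            double hit = distance_to_goal();   /* where we met the obstacle */
            do {
                follow_wall_step();
            } while (!(on_m_line() && distance_to_goal() < hit));
            /* leave point: back on the m-line and closer to the goal than before */
        }
    }
}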

Behavior-Based Navigation vs. Deliberative Navigation

If your robot has a map, why is it difficult for it to know where it is?
Sensors are the fundamental input for the process of perception; therefore, the degree to which sensors can discriminate the world state is critical.
Sensor aliasing: a many-to-one mapping from environmental states to the robot's perceptual inputs.
The amount of information is generally insufficient to identify the robot's position from a single sensor reading.

Why is it difficult for a robot to know where it is?
Sensor noise adds a limitation on the consistency of sensor readings.
Often the source of noise problems is that some environmental features are not captured by the robot's representation.
Dynamic environments: unanticipated events, obstacle avoidance.

The Configuration Space
A set of "reachable" areas constructed from knowledge of both the robot and the world.
How to create it: first abstract the robot as a point object; then enlarge the obstacles to account for the robot's footprint and degrees of freedom (see the sketch below).
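
A minimal sketch of that construction for a circular robot on an occupancy grid: grow every obstacle cell by the robot's radius so the robot can then be planned for as a point. The grid dimensions and the radius (in cells) are illustrative assumptions:

#define W 100
#define H 100

void inflate_obstacles(const char grid[H][W], char cspace[H][W], int r)
{
    int x, y, dx, dy;
    for (y = 0; y < H; y++) {
        for (x = 0; x < W; x++) {
            cspace[y][x] = 0;
            /* mark (x,y) occupied in C-space if any obstacle lies within r cells */
            for (dy = -r; dy <= r && !cspace[y][x]; dy++) {
                for (dx = -r; dx <= r; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < W && ny >= 0 && ny < H &&
                        grid[ny][nx] && dx * dx + dy * dy <= r * r) {
                        cspace[y][x] = 1;   /* too close for the robot's footprint */
                        break;
                    }
                }
            }
        }
    }
}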

Configuration Space: the robot has...
A footprint: the amount of space a robot occupies.
Degrees of freedom: the number of variables necessary to fully describe a robot's configuration in space.
Six DoF are possible; most use 2 DoF. Why? When would you need more?

Configuration Space: Accommodate Robot Size
[Figure: obstacles enlarged within the free space around the robot's (x, y) position, so the robot can be treated as a point object.]

The Cartographer: Spatial Memory
Data structures and methods for interpreting and storing sensory input in relation to the robot's world.
Representation of the world; sensory input interpretation; focus of attention; path planning and evaluation.
A collection of information about the current environment.

Map Representations
Quantitative (metric representations) and topological (landmarks).
Important considerations:
Sufficiently represent the environment, with enough detail to navigate potential problems.
Space and time complexity.
Sufficiently represent the limitations of the robot.
Support for map changes and re-planning.
Compatibility with the reactive control layer.

Using Dead Reckoning to Estimate Pose
"Dead reckoning" comes from "ded" (deduced) reckoning. Reckon: to determine by reference to a fixed basis.
Keep track of the current position by noting how far the robot has traveled on a specific heading.
Historically used for maritime navigation.
Proprioceptive.

Dead Reckoning with Shaft Encoders
How far has a wheel traveled? distance = 2 * PI * radius * #revolutions
Two types of wheel encoders: reflectance sensors and slot sensors.
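
The slide's formula in code. The counts-per-revolution and wheel radius below are hypothetical values for a small robot, not taken from the lecture:

#define PI             3.14159265358979
#define COUNTS_PER_REV 16.0    /* encoder ticks per full wheel revolution */
#define WHEEL_RADIUS   0.04    /* wheel radius in meters */

double wheel_distance(long counts)
{
    double revolutions = (double)counts / COUNTS_PER_REV;
    return 2.0 * PI * WHEEL_RADIUS * revolutions;   /* 2 * PI * radius * #revolutions */
}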

How far has the robot traveled and how far has it turned?

Which way am I going? Heading: the heading change corresponds to the fraction of the circumference of the circle with radius d (the distance between the wheels) that the moving wheel has traversed.

How far have I gone? The center of the robot moves half the distance of the arc traced by the moving wheel at the end of the motion.

Adding it up: (x, y, Θ)
Update the position information at each sensor update. How large can a single segment be? What does this assume? (A sketch of the update appears below.)
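
A sketch of the per-segment (x, y, Θ) update for a differential drive, assuming each segment is short enough to treat the motion as a circular arc; the WHEEL_BASE value (wheel separation) is a hypothetical constant:

#include <math.h>

#define WHEEL_BASE 0.20   /* meters between the two drive wheels */

typedef struct { double x, y, theta; } Pose;

void odometry_update(Pose *p, double dLeft, double dRight)
{
    double dCenter = (dLeft + dRight) / 2.0;          /* distance moved by the robot's center */
    double dTheta  = (dRight - dLeft) / WHEEL_BASE;   /* heading change in radians */

    /* integrate along the average heading over the segment */
    p->x     += dCenter * cos(p->theta + dTheta / 2.0);
    p->y     += dCenter * sin(p->theta + dTheta / 2.0);
    p->theta += dTheta;
}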

Many Types of Mobile Bases
Differential drive: two independently driven wheels on opposite sides of the robot.
3 DoF: pose = [x, y, Θ].
Often treated as holonomic: because it can turn in place, it can be approximated as a massless point that can move in any direction.

Types of Mobile Bases
Omni drive: wheels capable of rolling in any direction.
Synchro drive: the robot can change direction without rotating the base.

Types of Mobile Bases
Ackerman drive: typical car steering.
Non-holonomic: must take into account position and velocity variables (you can't turn a car without moving it forward).

Types of Mobile Bases

Topological Representation
Use of landmarks: natural or artificial, passive or active.
Based on a specific sensing modality: vision, laser, sonar, IR.

Example

Errors
Systematic errors vs. random errors. Can they be managed?