Intelligent Robotics
Today: AI Control & Localization. Monday: Hunting Demonstration.
CoWorker by iRobot Corporation
Robot Control: Layers of Abstraction
Robot: Movement, Sensing, Reasoning
Assess the environment and the robot's goals:
What will it manipulate? How will it move? (ME)
What situations does it need to sense? (sensor suite, signal processing) (ECE)
How does it decide what actions to take? (CS)
What does it take to get an intelligent robot to do a simple task?
Robot parts: two arms, vision, and a brain.
The brain can communicate with all parts.
Arms can take commands: left, right, up, down, forward, and backward.
Arms can answer yes/no about whether they are touching something, but cannot distinguish what they are touching.
The vision system can answer any question the brain asks, but cannot volunteer information.
The vision system can move around to get a better view.
Why is this simple task so difficult?
Coordination is difficult:
Indirect feedback
Updating knowledge about the environment
Unexpected events; the need to re-plan
Different coordinate systems must be resolved (box-centered vs. arm-centered)
Dealing with the Physical World
A robot needs to be able to handle its environment, or the environment must be altered and controlled.
Closed World Assumption: the robot knows everything relevant to performing its task (a "complete world model"); no surprises.
Open World Assumption: the robot does not assume complete knowledge and must be able to handle unexpected events.
Spectrum of AI Robot Control
Deliberative/Hierarchical Robot Control
Emphasizes planning. The robot senses the world, constructs a model representation of the world, "shuts its eyes," creates a plan of action, executes the action, then senses the results of the action.
Deliberative: Good & Bad
Good:
Goal oriented
Solves problems that need cognitive abilities
Can optimize the solution
Predictable
Bad:
Depends on a world model, which requires a closed world assumption
Symbol grounding problem
Frame problem
Qualification problem
Reactive/Behavior-Based Control
Sense → Act.
Ignores world models: "the world is its own best model."
Tightly couples perceptions to actions; no intervening abstract representations.
Primitive behaviors are used as building blocks; individual behaviors can be made up of primitive behaviors.
Reactive: no memory. Behavior-based: short-term memory (STM).
A minimal sense-act loop is sketched below.
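To make the sense-act coupling concrete, here is a minimal sketch in C of one reactive behavior. The sensor and actuator calls (read_left_bumper, read_right_bumper, drive) are hypothetical stand-ins for a real robot API, not functions from the IC library.

/* Minimal reactive sense-act loop (sketch; hardware calls are assumed). */
int read_left_bumper(void);      /* 1 if left bumper pressed  (assumed)  */
int read_right_bumper(void);     /* 1 if right bumper pressed (assumed)  */
void drive(int left, int right); /* wheel speeds, -100..100   (assumed)  */

void avoid_loop(void)
{
    while (1) {                       /* no world model, no memory:      */
        int l = read_left_bumper();   /* sense ...                       */
        int r = read_right_bumper();
        if (l && r)                   /* ... and act, directly coupled   */
            drive(-50, -50);          /* blocked: back up                */
        else if (l)
            drive(50, -50);           /* turn away from left contact     */
        else if (r)
            drive(-50, 50);           /* turn away from right contact    */
        else
            drive(50, 50);            /* default: cruise forward         */
    }
}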
Behavior Coordination
If multiple behaviors are possible, which one does the robot perform?
Where does the overall robot behavior come from?
No planning; the goal is generally not explicit.
Emergent behavior: emergence is the appearance of a novel property of a whole system that cannot be explained by examining the individual components; for example, the wetness of water.
Overall behavior results from the robot's interaction with its surroundings and from the coordination between the individual behaviors.
Reactive: Good & Bad
Good:
Works with the open world assumption
Provides a timely response in a dynamic environment that is difficult to characterize and contains a lot of uncertainty
Bad:
Unpredictable
Low-level intelligence
Cannot manage tasks that require LTM or planning, such as tasks requiring localization or order-dependent steps
Hybrid Paradigm
Combines reactive and deliberative control: Plan, then Sense-Act.
Reactive/Behavior-Based Control Design
Sense → Act. Design considerations:
What are the primitive behaviors?
What are the individual behaviors? (Individual behaviors can be made up of primitive and other individual behaviors.)
How are behaviors grounded to sensors and actuators?
How are these behaviors effectively coordinated? If more than one behavior is appropriate for the situation, how does the robot choose which to take?
Design for robot soccer
What primitive behaviors would you program? What individual behaviors?
What situations does the robot need to recognize?
If the "pass behavior" and the "shoot behavior" are both active, how does the robot choose?
Situated Activity Design
Robot actions are based on the situations in which the robot finds itself.
Robot perception is characterized by recognizing what situation the robot is in and choosing an appropriate action.
Implementing Behaviors
Schema = knowledge + process.
Perceptual schema: interprets sensory data.
Motor schema: the actions to take.
Releaser: instantiates the motor schema.
One way to express this in code is sketched below.
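A sketch of the schema idea as a C data structure; the type and field names here are illustrative, not a prescribed implementation.

/* Sketch: a behavior as perceptual schema + releaser + motor schema. */
typedef struct {
    int (*percept)(void);          /* perceptual schema: sensors -> percept  */
    int (*releaser)(int percept);  /* releaser: should the motor schema run? */
    void (*motor)(int percept);    /* motor schema: percept -> actuators     */
} Behavior;

void step_behavior(Behavior *b)
{
    int p = b->percept();          /* interpret the sensory data   */
    if (b->releaser(p))            /* the releaser instantiates... */
        b->motor(p);               /* ...the motor schema          */
}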
Schema for Toad Feeding Behavior
Visual Representation as a Finite State Automaton
Design of Behaviors represented by a State Transition Table
M = (K, Σ, δ, s, F), where:
q ∈ K: the set of states (behaviors)
σ ∈ Σ: the set of releasers
δ: the transition function
s: the state the robot starts in
q ∈ F: the set of terminating states
Trash pick-up example (a code sketch follows).
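Here is a sketch in C of how a transition table can drive behavior, using a hypothetical trash pick-up robot; the states, releasers, and transitions below are illustrative, not the exact table from the slide.

/* Sketch: behaviors as an FSA driven by a transition function delta. */
typedef enum { WANDER, MOVE_TO_TRASH, GRAB, MOVE_TO_BIN, DROP, DONE } State;
typedef enum { NOTHING, SEE_TRASH, AT_TRASH, HOLDING, AT_BIN, DROPPED } Releaser;

/* delta: (state, releaser) -> next state; unlisted pairs self-loop.  */
State delta(State q, Releaser sigma)
{
    switch (q) {
    case WANDER:        return sigma == SEE_TRASH ? MOVE_TO_TRASH : WANDER;
    case MOVE_TO_TRASH: return sigma == AT_TRASH  ? GRAB          : MOVE_TO_TRASH;
    case GRAB:          return sigma == HOLDING   ? MOVE_TO_BIN   : GRAB;
    case MOVE_TO_BIN:   return sigma == AT_BIN    ? DROP          : MOVE_TO_BIN;
    case DROP:          return sigma == DROPPED   ? DONE          : DROP;
    default:            return q;  /* DONE is the terminating state (q in F) */
    }
}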
Competitive Coordination
Action Selection Method:
Behaviors compete using an activation level.
The response associated with the behavior with the highest activation level wins.
The activation level is determined by attention (sensors) and intention (goals).
A winner-take-all sketch appears below.
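A sketch of winner-take-all action selection in C. Summing attention and intention into a single activation level is one plausible combination rule, assumed here for illustration.

/* Sketch: pick the behavior with the highest activation level. */
typedef struct {
    double attention;   /* contribution from current sensor data */
    double intention;   /* contribution from current goals       */
} Candidate;

int select_action(Candidate b[], int n)
{
    int winner = 0;
    int i;
    for (i = 1; i < n; i++) {
        double a = b[i].attention + b[i].intention;            /* activation */
        double w = b[winner].attention + b[winner].intention;
        if (a > w)
            winner = i;             /* highest activation wins */
    }
    return winner;                  /* index of the winning behavior's response */
}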
Competitive Coordination
Suppression Network Method:
The response is determined by a fixed prioritization: a strict behavioral dominance hierarchy.
Higher-priority behaviors can inhibit or suppress lower-priority behaviors.
Subsumption Architecture
A suppression network architecture built in layers.
Each layer gives the system a set of pre-wired behaviors.
Layers reflect a hierarchy of intelligence: lower layers are basic survival functions (e.g., obstacle avoidance); higher layers are more goal-directed (e.g., navigation).
The layers operate asynchronously (multi-tasking).
Higher layers can override (suppress) the output from behaviors in the next lower layer: rank ordering.
Foraging Example
Using Multiple Behaviors requires the Robot to Multi-task
Multi-tasking means more than one computing process runs in parallel; true parallel processing requires multiple CPUs.
IC functions can be run as processes operating in parallel; the processor is actually shared among the active processes.
main is always an active process.
Each process, in turn, gets a slice of processing time (5 ms).
Each process gets its own default program stack of 256 bytes.
A process, once started, continues until it has received enough processing time to finish (or until it is "killed" by another process).
Global variables are used for inter-process communication.
IC: Functions vs. Processes
Functions are called sequentially; processes can run simultaneously.
start_process(function-call); returns a process ID.
Processes halt when their function exits or their parent process exits.
Processes can be halted with kill_process(process_id);
hog_processor(); lets a process take over the CPU for an additional 250 milliseconds, cancelled only if the process finishes or defers.
defer(); makes a process give up the rest of its time slice until next time.
IC: Process Example

#use pause.ic
int done;   /* global variable for inter-process communication */

void main()
{
    pause();
    done = 0;
    start_process(ao_when_stop());
    start_process(avoidBehavior());
    start_process(cruiseBehavior());
    start_process(collisionBehavior());
    start_process(arbitrate());
    /* . . . more code . . . */
}

void ao_when_stop()
{
    while (stop_button() == 0);  /* wait for stop button   */
    done = 1;                    /* signal other processes */
    ao();                        /* stop all motors        */
}
/* Example Behavior: Avoid */
int avoidCommand;   /* global variable to indicate when the behavior is active */

void avoidBehavior()
{
    while (1) {
        if (LIGHT_SENSOR < averageLight - 3) {   /* releaser */
            /* Back away from the border. */
            avoidCommand = COMMAND_STOP;
            Wait(20);
            avoidCommand = COMMAND_REVERSE;
            Wait(50);
            /* Turn left or right for a random duration. */
            if (Random(1) == 0)
                avoidCommand = COMMAND_LEFT;
            else
                avoidCommand = COMMAND_RIGHT;
            Wait(Random(200));
            avoidCommand = COMMAND_NONE;
        }
    }
}
/* Example Coordinator function: Arbitrator */
int motorCommand;   /* global command setting the motors to the winning behavior's motor schema */

void arbitrate()
{
    while (1) {
        /* Later assignments win: priority is collision > avoid > cruise. */
        if (cruiseCommand != COMMAND_NONE)
            motorCommand = cruiseCommand;
        if (avoidCommand != COMMAND_NONE)
            motorCommand = avoidCommand;
        if (collisionCommand != COMMAND_NONE)
            motorCommand = collisionCommand;
        motorControl();   /* set actual motor controls to the winning behavior */
    }
}
/* Example function grounding behavior to motor commands */
void motorControl()
{
    if (motorCommand == COMMAND_FORWARD) {
        fd(1); fd(3);
    } else if (motorCommand == COMMAND_REVERSE) {
        bk(1); bk(3);
    } else if (motorCommand == COMMAND_LEFT) {
        fd(1); bk(3);
    } else if (motorCommand == COMMAND_RIGHT) {
        bk(1); fd(3);
    } else if (motorCommand == COMMAND_STOP) {
        brake(1); brake(3);
    }
}
Localization & Navigation
Mobile Robots: Computers on the Move
Where am I? The localization problem; the kidnapped robot problem.
What way should I take to get there? Path planning (pathing).
Where am I going? Mission planning.
Where have I been? Map making or map enhancement.
“Where am I?” Localization
Given an initial estimate, q, of the robot’s location in configuration space, maintain an ongoing estimate of the robot pose with respect to the map, P(q).
Overview of Localization
Behavior-Based Navigation
Navigating without an abstraction of the world (map). Can Robbie the Reactive Robot get from point A to point B?
Goal-Directed Behavior-Based Control
Start-Goal Algorithm: Lumelsky Bug Algorithms
Lumelsky Bug Algorithms
Unknown obstacles; known start and goal.
Simple "bump" sensors and encoders.
Choose an arbitrary direction (left or right) for all turns, called the "local direction."
Motion is like an ant walking around:
Bug 1: the robot goes all the way around each obstacle encountered, recording the point nearest the goal, then goes around again to leave the obstacle from that point.
Bug 2: the robot goes around each obstacle encountered until it can continue on its previous path toward the goal.
How would Robbie navigate an office building? (A sketch of Bug 2 follows.)
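A minimal sketch of the Bug 2 control logic in C. All helper functions are hypothetical placeholders; a real implementation needs the encoder-based geometry (tracking the start-goal line) that is omitted here.

/* Sketch of Bug 2: head along the start-goal line (the "m-line");   */
/* on contact, follow the obstacle boundary in the local direction   */
/* until the m-line is re-crossed closer to the goal, then resume.   */
int at_goal(void);               /* assumed */
int bumped(void);                /* assumed */
int on_m_line(void);             /* back on the start-goal line? (assumed)       */
double dist_to_goal(void);       /* assumed */
void step_toward_goal(void);     /* assumed */
void step_along_boundary(void);  /* follow obstacle in local direction (assumed) */

void bug2(void)
{
    while (!at_goal()) {
        if (!bumped()) {
            step_toward_goal();             /* free space: follow the m-line */
        } else {
            double d_hit = dist_to_goal();  /* where we met the obstacle     */
            do {
                step_along_boundary();
            } while (!(on_m_line() && dist_to_goal() < d_hit));
        }                                   /* leave point found: resume     */
    }
}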
Behavior-Based Navigation vs. Deliberative Navigation
If your robot has a map, why is it difficult for it to know where it is?
Sensors are the fundamental input for the process of perception, so the degree to which sensors can discriminate the world state is critical.
Sensor aliasing: a many-to-one mapping from environmental states to the robot's perceptual inputs.
The amount of information is generally insufficient to identify the robot's position from a single sensor reading.
Why is it difficult for a robot to know where it is?
Sensor noise: limits the consistency of sensor readings; often the source of noise problems is that some environmental features are not captured by the robot's representation.
Dynamic environments: unanticipated events, obstacle avoidance.
The Configuration Space
A set of "reachable" configurations constructed from knowledge of both the robot and the world.
How to create it: first abstract the robot as a point object; then enlarge the obstacles to account for the robot's footprint and degrees of freedom.
Configuration Space: the robot has...
A footprint: the amount of space the robot occupies.
Degrees of freedom: the number of variables necessary to fully describe the robot's configuration in space.
A free body has six DoF; most planners use 2 DoF. Why? When would you need more?
Configuration Space: Accommodate Robot Size
(Figure: obstacles and free space in the plane; the robot at (x, y) is treated as a point object.)
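A sketch of the obstacle-growing step on an occupancy grid, assuming a circular robot whose radius is r grid cells; the grid size and representation are illustrative.

/* Sketch: grow grid obstacles by the robot's radius (in cells) so */
/* the robot can then be planned for as a point. Not optimized.    */
#define W 20
#define H 20

void inflate(int grid[H][W], int cspace[H][W], int r)
{
    int x, y, dx, dy;
    for (y = 0; y < H; y++) {
        for (x = 0; x < W; x++) {
            cspace[y][x] = 0;                       /* assume free */
            for (dy = -r; dy <= r && !cspace[y][x]; dy++) {
                for (dx = -r; dx <= r; dx++) {
                    int ny = y + dy, nx = x + dx;
                    if (ny < 0 || ny >= H || nx < 0 || nx >= W)
                        continue;
                    if (dx*dx + dy*dy <= r*r && grid[ny][nx]) {
                        cspace[y][x] = 1;           /* too close to an obstacle */
                        break;
                    }
                }
            }
        }
    }
}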
The Cartographer: Spatial Memory
Data structures and methods for interpreting and storing sensory input in relation to the robot's world:
Representation of the world
Sensory input interpretation
Focus of attention
Path planning & evaluation
Collection of information about the current environment
Map Representations
Quantitative (metric) representations or topological (landmark) representations.
Important considerations:
Sufficiently represent the environment, with enough detail to navigate potential problems
Space and time complexity
Sufficiently represent the limitations of the robot
Support for map changes and re-planning
Compatibility with the reactive control layer
Using Dead Reckoning to Estimate Pose
"Ded reckoning," from deduced reckoning. To reckon: to determine by reference to a fixed basis.
Keep track of the current position by noting how far the robot has traveled on a specific heading.
Long used for maritime navigation.
Proprioceptive: it relies only on internal sensing.
Dead Reckoning with Shaft Encoders
How far has a wheel traveled?
distance = 2 * PI * radius * #revolutions
Two types of wheel encoders: reflectance sensors and slot sensors.
In code (a sketch follows):
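The formula above as a small C function; the ticks-per-revolution value is encoder-specific and treated here as an assumed input.

/* Sketch: wheel travel from encoder counts. */
#define PI 3.14159265358979

double wheel_distance(long ticks, double radius, double ticks_per_rev)
{
    double revolutions = ticks / ticks_per_rev;   /* counts -> revolutions */
    return 2.0 * PI * radius * revolutions;       /* circumference * #revs */
}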
How far has the robot traveled and how far has it turned?
Which way am I going? Heading
For a differential drive with each wheel a distance d from the center, the heading change is the fraction of the circumference of the circle with radius d swept by the difference in wheel travel: Δθ = (d_right - d_left) / (2d).
How far have I gone?
The center of the robot moves half the sum of the two wheel arcs at the end of the motion: distance = (d_left + d_right) / 2.
Adding it up: (x, y, θ)
Update the position information at each sensor update: x ← x + d cos θ, y ← y + d sin θ, θ ← θ + Δθ.
How large can a single segment be? What does this assume? (Each segment is treated as a straight line, so updates must be frequent relative to the robot's turning.) A code sketch follows.
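A sketch of the per-update integration for a differential drive, under the straight-segment assumption; units and the wheelbase parameter (the 2d wheel separation above) are illustrative.

#include <math.h>

/* Sketch: differential-drive odometry, one encoder update at a time. */
typedef struct { double x, y, theta; } Pose;

void odom_update(Pose *p, double d_left, double d_right, double wheelbase)
{
    double d      = 0.5 * (d_left + d_right);        /* center distance */
    double dtheta = (d_right - d_left) / wheelbase;  /* heading change  */
    p->x     += d * cos(p->theta);   /* straight-segment assumption     */
    p->y     += d * sin(p->theta);
    p->theta += dtheta;
}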
Many Types of Mobile Bases
Differential drive: two independently driven wheels on opposite sides of the robot.
3 DoF: pose = [x, y, θ].
Because it can turn in place, it is often treated as a massless point that can head in any direction; strictly speaking, though, it cannot move sideways instantaneously.
Types of Mobile Bases: Omni Drive and Synchro Drive
Omni drive: wheels capable of rolling in any direction; the robot can change direction without rotating the base.
Synchro drive: all wheels are steered and driven in unison.
Types of Mobile Bases: Ackerman Drive
Typical car steering.
Non-holonomic: must take into account both position and velocity variables (you can't turn a car without moving it forward).
Types of Mobile Bases
Topological Representation
Uses landmarks: natural or artificial, passive or active.
Based on a specific sensing modality: vision, laser, sonar, or IR.
Example
Errors: Systematic vs. Random
Systematic errors (e.g., a miscalibrated wheel radius) bias the estimate consistently; random errors (e.g., wheel slip) accumulate unpredictably. Can they be managed?