1
AI Robotics Concepts Chapter 25
2
Content
– Tasks
– Effectors
– Sensors
– Agent Architectures
– Actions in continuous space
3
Robot
– Definition from the "Robot Institute of America" (fits a conveyor belt): a robot is a programmable, multifunction manipulator designed to move material, parts, tools, or specific devices through variable programmed motions for the performance of a variety of tasks.
– Russell & Norvig definition (excludes softbots): a robot is an active, artificial agent whose environment is the physical world.
4
What is 'Artificial Intelligence'?
– What is 'Artificial Intelligence' (Jim Hendler, CNN, 1999)? What computers cannot do yet.
– E.g., NLP started as an AI field; once it succeeded (found an objective mathematical model), it became a field of its own.
– Getting out of AI (conventional wisdom): Artificial Neural Networks are the 2nd best solution to any problem; Genetic Algorithms are the 3rd best solution to any problem; the (non-AI) mathematical solution, once found, is the best solution to any problem.
– AI is about generalization to unseen and unexpected situations. Once the situation is explicitly taken into account, we can no longer speak of AI.
5
The real world as an environment
– Inaccessible (sensors do not tell all): need to maintain a model of the world
– Nondeterministic (wheels slip, parts break): need to deal with uncertainty
– Nonepisodic (effects of an action change over time): need to handle sequential decision problems
– Dynamic: need to know when to plan and when to act by reflex
– Continuous: cannot enumerate the possible actions
6
Tasks
– Manufacturing and material handling: not autonomous; simple machines are the best solution (they provide accuracy and power, and parts arrive in standard cradles and shapes)
– Gofer robots (with autonomy): mobile robots (mobots), couriers, underwater spies
– Hazardous environments: toxic or deep-sea environments
– Telepresence and virtual reality: teleoperated robots, e.g., for bomb scares in New York
– Augmentation of human abilities: artificial fingers, eyes, exoskeletons
7
Components
– Rigid links and joints
– End effectors (e.g., screwdrivers), connected to "actuators" (which convert electricity into movement)
– An actuator = a degree of freedom
– Uses: locomotion, manipulation, processing
8
DOF
– Holonomic robots: same number of effective and controllable DOF
– A car has 3 effective but only 2 controllable DOF
– Robotic arms are holonomic; mobile robots usually are not (holonomic drives need more complex mechanics)
– Figures: Stanford manipulator, 6 DOF, 5 revolute + 1 prismatic joint; a non-holonomic 4-wheeled vehicle
9
Sensors
– Distance sensors: laser scanner & range scan (figure)
– Force, torque, tactile, range (sonar), image, odometry & shaft decoders (proprioceptive sensors), inertial, GPS
10
Locomotion
– Types: statically stable (wheels, tracks, slow legs) and dynamically stable (legs, hopping)
– Dynamically stable locomotion is less practical for most robots (expensive)
– Figures: the four-legged "BigDog" (Marc Raibert); DFKI Bremen
11
Perception
– Dynamic belief network: update the belief state over time
– The filtering equations combine the transition (motion) model with the sensor model
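A minimal statement of the filtering update referred to here, in the usual notation (state X, observation z, action a); this is the standard Bayes filter form, not a formula specific to these slides:

```latex
P(\mathbf{X}_{t+1} \mid \mathbf{z}_{1:t+1}, \mathbf{a}_{1:t})
  = \alpha \, P(\mathbf{z}_{t+1} \mid \mathbf{X}_{t+1})
    \int P(\mathbf{X}_{t+1} \mid \mathbf{x}_t, \mathbf{a}_t)\,
         P(\mathbf{x}_t \mid \mathbf{z}_{1:t}, \mathbf{a}_{1:t-1})\, d\mathbf{x}_t
```

The first factor is the sensor model, the factor inside the integral is the transition (motion) model, and α is a normalizing constant.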
12
Localization
– Problems: tracking, global localization, the kidnapping problem
– Techniques: probabilistic prediction; landmark models or range-scan models
13
Localization
– Monte Carlo Localization (MCL, particle filtering): starts with a uniform distribution of particles, computes/updates their probability, then generates another set of samples from the new distribution
– Kalman filters: need local linearization, e.g., using a Taylor expansion; uncertainty grows until landmarks are spotted; uncertainty about which landmark was seen => the data association problem
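A minimal sketch of one MCL (particle filter) update step. The `motion_model` and `sensor_likelihood` callables are hypothetical placeholders the caller must supply; they are not part of the slides.

```python
import random

def mcl_step(particles, action, observation, motion_model, sensor_likelihood):
    """One Monte Carlo Localization update.

    particles: list of state hypotheses (e.g., (x, y, theta) tuples)
    motion_model(state, action): samples a new state from P(x' | x, a)
    sensor_likelihood(observation, state): returns P(z | x)
    """
    # 1. Prediction: push every particle through the (noisy) motion model.
    predicted = [motion_model(p, action) for p in particles]

    # 2. Correction: weight each particle by how well it explains the observation.
    weights = [sensor_likelihood(observation, p) for p in predicted]
    total = sum(weights)
    if total == 0:                      # every particle is implausible
        return predicted                # degenerate case: keep the prediction
    weights = [w / total for w in weights]

    # 3. Resampling: draw a new particle set proportionally to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))
```

Iterating this step is what produces the progression shown in the MCL run on the next slides: from global uncertainty to a single mode.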
14
Monte Carlo Localization
15
MCL run: initial global uncertainty → bimodal uncertainty (due to symmetry) → unimodal uncertainty
16
Mapping
– Simultaneous localization and mapping (SLAM): similar to the localization (filtering) problem, but the map is part of the posterior
– Often uses Kalman filters + landmarks
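In the same filtering notation, the SLAM posterior simply carries the map m alongside the pose (a sketch of the standard formulation, not a formula from these slides):

```latex
P(\mathbf{X}_t, m \mid \mathbf{z}_{1:t}, \mathbf{a}_{1:t-1})
```

Kalman-filter SLAM represents this posterior as a joint Gaussian over the robot pose and the landmark positions.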
18
Usual Mapping Other approaches are less probabilistic and more ad-hoc, and they represent a trend!
19
Planning to Move
– Two types of motion: point-to-point (free motion) and compliant motion (in contact, e.g., sliding along a wall or driving a screw)
– Two representations of planning problems: configuration space and workspace
– Planning algorithms: cell decomposition and skeletonization
20
Configuration Space
– The workspace is the 3D world around the robot; typically a 3D position is needed for each joint or link
– The configuration space is the n-dimensional space of possible robot configurations; it can be smaller than the workspace representation, as it integrates the linkage constraints
– Occupied space vs. free space
– Sometimes both spaces are required: direct kinematics (configuration → workspace) and inverse kinematics (workspace → configuration)
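A small sketch of direct and inverse kinematics for a hypothetical planar 2-link arm: map a configuration (two joint angles) to the workspace position of the end effector, and back. The link lengths and the "elbow-down" choice are illustrative assumptions.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    """Direct kinematics: configuration (theta1, theta2) -> workspace point (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y, l1=1.0, l2=0.8):
    """Inverse kinematics: workspace point (x, y) -> one of the possible configurations."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))            # clamp numerical noise
    theta2 = math.acos(c2)                  # "elbow-down" solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Note that the inverse problem generally has several solutions (elbow up vs. elbow down), which is one reason planning is usually done in configuration space.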
21
Workspace vs. Configuration Space
– Workspace of a robot with 2 DOF: a box with a flat hanging obstacle
– Configuration space of the same robot: white regions are free of collisions; the dot marks the current configuration
22
Robot configurations in workspace and configuration space.
23
Cell decomposition methods
– Continuous space is hard to work with, so it is discretized using cell decomposition
– Grid: easy to scan with dynamic programming; difficult to handle "gray" (partially occupied) cells, which causes incompleteness or unsoundness; cells may need further subdivision, exponential in the number of dimensions; difficult to prove that a cell is fully available
– Quad-trees
– Exact cell decomposition (complex cell shapes)
– Potential field optimization via value iteration (see the sketch below)
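A sketch of the value-iteration idea on a grid decomposition: each free cell gets a cost-to-goal, and greedily descending that value function yields a path. The 4-connected grid, unit move cost, and iteration count below are illustrative assumptions.

```python
def grid_value_iteration(occupied, goal, width, height, iterations=200):
    """Compute cost-to-goal over a 4-connected grid; occupied cells stay infinite."""
    INF = float("inf")
    value = {(x, y): INF for x in range(width) for y in range(height)}
    value[goal] = 0.0
    for _ in range(iterations):
        for x in range(width):
            for y in range(height):
                if (x, y) in occupied or (x, y) == goal:
                    continue
                neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                best = min((value[n] for n in neighbors if n in value), default=INF)
                value[(x, y)] = 1.0 + best   # unit cost per move
    return value
```

The path is then extracted by repeatedly stepping to the neighboring cell with the smallest value, which is what the value-function figure on the next slide depicts.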
24
Figure: value function and path for a discrete cell approximation of the configuration space; the same path in workspace coordinates, where the robot bends its elbow to avoid collision
25
Potential
– A repelling potential field pushes the robot away from obstacles
– The path is found by minimizing a combination of path length and potential (see the sketch below)
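A sketch of one common way to build such a potential: an attractive quadratic term toward the goal plus a repulsive term that grows near obstacles. The functional form and the constants are illustrative assumptions, not the slides' exact field.

```python
import math

def potential(q, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Scalar potential at configuration q = (x, y)."""
    # Attractive part: quadratic bowl centered on the goal.
    u_att = 0.5 * k_att * ((q[0] - goal[0])**2 + (q[1] - goal[1])**2)
    # Repulsive part: only active within distance d0 of an obstacle.
    u_rep = 0.0
    for ox, oy in obstacles:
        d = math.hypot(q[0] - ox, q[1] - oy)
        if 0 < d < d0:
            u_rep += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u_att + u_rep
```

A planner can then score candidate paths by summing this potential along them plus their length, trading clearance against detours.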
26
Skeletonization methods Skeleton – a lower dimensional representation of the configuration space –Voronoi graph (points equally distant from two obstacles) –Probabilistic roadmap
27
Voronoi
– The Voronoi graph is the set of points equidistant to two or more obstacles in configuration space
– Probabilistic roadmap: 400 randomly chosen points in free space
28
Skeletonization methods
– Skeleton: a lower-dimensional representation of the configuration space
– Voronoi graph (points equally distant from two or more obstacles): reduces the dimension of the path-planning problem; difficult to compute in the configuration space; difficult in many dimensions; leads to large/inefficient detours
– Probabilistic roadmap: randomly generate many points in the configuration space (discarding occupied ones); add lines between close points if the connecting segments lie in free space; the distribution of points may be biased toward areas of need and promise; scales best (see the sketch below)
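A compact sketch of probabilistic-roadmap construction. The helpers `is_free(q)` (is a sampled configuration collision-free?) and `segment_free(a, b)` (is the straight segment between two configurations collision-free?) are hypothetical placeholders the caller must supply.

```python
import math
import random

def build_prm(is_free, segment_free, bounds, n_samples=400, max_dist=1.5):
    """Return (nodes, edges) of a probabilistic roadmap in a 2-D configuration space.

    bounds: ((xmin, xmax), (ymin, ymax))
    """
    (xmin, xmax), (ymin, ymax) = bounds
    nodes = []
    while len(nodes) < n_samples:                      # sample free configurations
        q = (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
        if is_free(q):
            nodes.append(q)
    edges = []
    for i, a in enumerate(nodes):                      # connect nearby pairs
        for b in nodes[i + 1:]:
            if math.dist(a, b) <= max_dist and segment_free(a, b):
                edges.append((a, b))
    return nodes, edges
```

Planning then reduces to a graph search (e.g., A*) over the roadmap between the nodes closest to the start and goal configurations.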
29
Planning Uncertain Movements
– Errors come from the stochastic model and from the approximation (particle filtering) algorithms
– Common simplification: assume the "most likely state"; this "works" when uncertainty is limited
– Solution (with observable states): Markov Decision Processes return an optimal policy (an action in each state), called a "navigation function"
– Solution (with partially observable states): POMDPs create policies over distributions of state probabilities, but are not practical here
30
Robust Methods
– Assume a bounded amount of uncertainty: an interval of possibilities; the plan must work for any value in that interval
– "Conformant planning" is the state-independent form of this planning (Chap. 12)
– Fine Motion Planning (FMP): assumes motion in close proximity to static objects; relies on sensor feedback, because the motion is too sensitive for the robot's position control alone
– Uses "guarded motions": a guard is a predicate stating when to end the motion, attached to a "compliant motion", i.e., one that may slide along surfaces if needed (see the sketch below)
– Constructing an FMP uses the configuration space, the cone of uncertainty on motion directions, and the sensors
– Generates a sequence of steps guaranteeing success in the worst case; may not be the most efficient plan
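A guarded motion can be sketched as a loop that applies a commanded (compliant) velocity until a guard predicate on the sensed state fires; `read_sensors` and `apply_velocity` are hypothetical robot-interface functions, not an API from the slides.

```python
def guarded_motion(command_velocity, guard, read_sensors, apply_velocity,
                   dt=0.01, timeout=10.0):
    """Execute a compliant motion until its termination condition (guard) holds."""
    t = 0.0
    while t < timeout:
        state = read_sensors()                 # e.g., position plus contact forces
        if guard(state):                       # guard: predicate deciding when to stop
            return True                        # motion ended as planned
        apply_velocity(command_velocity, dt)   # may slide along contacted surfaces
        t += dt
    return False                               # guard never fired: report failure
```

A fine-motion plan is then a sequence of such (command, guard) pairs, each chosen so the worst-case outcome still satisfies the next guard.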
31
Fine Motion Planning 2-dimensional environment, velocity uncertainty cone and envelope of possible robot movements.
32
Fine Motion Plan
– First motion command: no matter the error, we know the final configuration will be to the left of the hole
– Second motion command: even with error, the peg ends up in the hole
33
Moving
– Computing forces and inertia: the dynamic state = position + velocities; the relation between kinematic state and velocity is given by differential equations
– Controller: keeps the robot on track
– Reference controllers keep the robot on a reference path
– Optimal controllers optimize a cost function (e.g., MDP policies, from Markov decision processes)
34
Control
a) proportional control with gain factor 1
b) proportional control with gain 0.1
c) proportional-derivative control: gain 0.3 for the proportional term and 0.8 for the derivative term
The robot tries to follow the path shown in gray.
35
Controllers
– Control proportional to the displacement: P-controllers, $a_t = K_P\,(y(t) - x_t)$, where $y(t)$ is the desired location and $K_P$ the gain parameter
– Assume small perturbations
– Stable controller: the error $y(t) - x_t$ remains bounded (e.g., P-controllers)
– Strictly stable controller: can return to the reference path
36
P(I)D-Controllers
– Proportional-derivative (PD) controllers: $a_t = K_P\,(y(t) - x_t) + K_D\,\frac{d}{dt}\bigl(y(t) - x_t\bigr)$
– Proportional-integral-derivative (PID) controllers: $a_t = K_P\,(y(t) - x_t) + K_I \int (y(t) - x_t)\,dt + K_D\,\frac{d}{dt}\bigl(y(t) - x_t\bigr)$
– The integral term corrects systematic errors
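A direct transcription of the PID law above into a discrete-time sketch; the gains and the usage example are illustrative (the 0.3/0.8 values echo the PD figure two slides back), with y the desired value and x the measured one.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, y, x):
        """Return a_t = Kp*e + Ki*integral(e) + Kd*de/dt, with error e = y - x."""
        error = y - x
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: PD control (no integral term) with gains 0.3 and 0.8.
controller = PID(kp=0.3, ki=0.0, kd=0.8, dt=0.1)
command = controller.update(y=1.0, x=0.0)
```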
37
Potential for Control
– Has many local minima; does not depend on velocities
– Figure: the robot ascends a potential field composed of repelling forces asserted by the obstacles and an attracting force toward the goal configuration; (a) a successful path, (b) a local optimum
38
Reactive Control
– Reflex agent design = reactive control: used when there are intractably many DOFs or too few sensors
– Simple rules (e.g., move 3 legs at a time) give a simple control
– Emergent behavior (no explicit environment model)
– Figure: Genghis, a hexapod robot; an augmented finite state machine (AFSM) for the control of a single leg; if the leg is stuck, it is lifted increasingly higher
39
Augmented Finite State Machines
– Finite state machines augmented with timers
– Timers enable state changes after preprogrammed periods of time
– AFSMs let behaviors override each other: a suppression signal overrides the normal input signal; an inhibition signal causes the output to be completely inhibited (see the toy sketch below)
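A toy sketch of an AFSM: states plus a timer that forces transitions after a preprogrammed dwell time, and a suppression hook that lets a higher behavior override the normal input. The leg states and timings are invented for illustration only.

```python
class AFSM:
    """Minimal augmented finite state machine: states + timer + suppression."""

    def __init__(self, transitions, timers, start):
        self.transitions = transitions   # {state: next_state} on timer expiry
        self.timers = timers             # {state: dwell time in ticks}
        self.state = start
        self.ticks = 0

    def step(self, input_signal, suppression=None):
        # A suppression signal from a higher behavior overrides the normal input.
        signal = suppression if suppression is not None else input_signal
        self.ticks += 1
        if self.ticks >= self.timers[self.state]:    # timer-driven state change
            self.state = self.transitions[self.state]
            self.ticks = 0
        return self.state, signal

# Example: a single leg alternating between lifting and swinging forward.
leg = AFSM(transitions={"lift": "swing", "swing": "lift"},
           timers={"lift": 3, "swing": 5}, start="lift")
for _ in range(10):
    leg.step(input_signal="walk")
```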
40
Robotic Software Architectures
– Software architecture = a methodology for structuring algorithms, combining reactive with deliberative control
– Reactive control is sensor-driven and handles low-level decisions; combining the two leads to hybrid architectures
– Reactive architectures: the subsumption architecture (1986), in which each layer's goal subsumes that of the underlying layers; bottom-up design, e.g., avoid objects → wander → explore; built from composable augmented finite state machines (AFSMs, see the hexapod)
– Limitations: assumes good sensors, focuses on one task, becomes complex
41
Three-layer architecture
– The most popular hybrid architecture (a schematic sketch follows below)
– Reactive layer: low-level control, millisecond cycle
– Executive layer: selects which reactive control to invoke, following waypoints proposed by the deliberative layer
– Deliberative layer: planning, minutes per cycle
– Other possible layers: user interface layer, inter-robot interface
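One way to picture the three layers is as loops running at very different rates. The functions `plan_route`, `select_behavior`, and `reactive_control`, and the cycle times, are hypothetical placeholders for illustration, not an API from the slides.

```python
import time

def run_three_layer(plan_route, select_behavior, reactive_control,
                    steps=1000, reactive_dt=0.01, executive_dt=0.5,
                    deliberative_dt=60.0):
    """Schematic scheduler: deliberative (minutes), executive (sub-second), reactive (ms)."""
    last_exec = last_delib = float("-inf")
    plan, behavior = None, None
    for _ in range(steps):
        now = time.monotonic()
        if now - last_delib >= deliberative_dt:
            plan = plan_route()                 # deliberative layer: global planning
            last_delib = now
        if now - last_exec >= executive_dt:
            behavior = select_behavior(plan)    # executive layer: pick the next behavior
            last_exec = now
        reactive_control(behavior)              # reactive layer: low-level control
        time.sleep(reactive_dt)
```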
42
Summary
Chapters to read:
– 11.4 GraphPlan
– 12. Planning and Acting in the Real World
– 14. Bayesian Nets
– 15. Filtering, HMMs, Kalman filters, DBNs
– 17. Decisions (17.1 to 17.4)