Robotics
CMSC 25000 Artificial Intelligence, March 11, 2008

Roadmap
- Robotics is AI-complete: integration of many AI techniques
- Classic AI: search in configuration space
- (Ultra) Modern AI: subsumption architecture, multi-level control
- Conclusion

Mobile Robots

Robotics is AI-complete
Robotics integrates many AI tasks:
- Perception: vision, sound, haptics
- Reasoning: search, route planning, action planning
- Learning: recognition of objects/locations, exploration

Sensors and Effectors
Robots interact with the real world.
- Need direct sensing for:
  - Distance to objects: range finding, sonar, GPS
  - Recognizing objects: vision
  - Self-sensing (proprioception): pose/position
- Need effectors to:
  - Move the robot in the world (locomotion): wheels, legs
  - Move other things in the world: manipulators (joints, arms), which are complex with many degrees of freedom

Real-World Complexity
- The real world is the hardest environment: partially observable, multiagent, stochastic.
- Problems: localization and mapping
  - Where things are, what routes are possible, where the robot is
- Sensors may be noisy; effectors are imperfect (the robot doesn't necessarily go where intended).
- Solved in a probabilistic framework (sketched below).
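
A minimal sketch of handling noisy sensing and imperfect motion probabilistically: a one-dimensional histogram (Bayes) filter for localization. The corridor map and the noise probabilities below are illustrative assumptions, not values from the lecture.

```python
# Minimal 1D histogram (Bayes) filter for robot localization.
# The world map and noise values are made-up illustrations.

WORLD = ['door', 'wall', 'wall', 'door', 'wall']   # hypothetical corridor map
P_HIT, P_MISS = 0.8, 0.2                            # assumed sensor model
P_MOVE_OK, P_STAY = 0.9, 0.1                        # assumed motion model

def sense(belief, measurement):
    """Multiply in the sensor likelihood, then renormalize."""
    new = [b * (P_HIT if WORLD[i] == measurement else P_MISS)
           for i, b in enumerate(belief)]
    total = sum(new)
    return [b / total for b in new]

def move(belief, step):
    """Shift probability mass by the commanded step, allowing for slip."""
    n = len(belief)
    new = [0.0] * n
    for i, b in enumerate(belief):
        new[(i + step) % n] += P_MOVE_OK * b   # intended motion
        new[i] += P_STAY * b                   # slipped: stayed put
    return new

belief = [1.0 / len(WORLD)] * len(WORLD)   # uniform prior: a "lost" robot
belief = sense(belief, 'door')
belief = move(belief, 1)
belief = sense(belief, 'wall')
print(belief)
```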

Navigation

Application: Configuration Space
- Problem: robot navigation. Move the robot between two objects without changing its orientation. Possible?
- Complex search space: boundary tests, etc.

Configuration Space
Basic problem: infinitely many states! Convert to a finite state space.
- Cell decomposition: divide the space into simple cells, each of which can be traversed "easily" (e.g., convex).
- Skeletonization: identify a finite number of easily connected points/lines that form a graph such that any two points are connected by a path on the graph.
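
As one concrete (hypothetical) instance of cell decomposition, the sketch below discretizes the workspace into a grid of cells and searches the free cells with breadth-first search; the map, start, and goal are made up for illustration.

```python
# Cell decomposition as a grid: 0 = free cell, 1 = obstacle.
from collections import deque

GRID = [[0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0]]

def bfs(grid, start, goal):
    """Breadth-first search over adjacent free cells; returns a cell path."""
    rows, cols = len(grid), len(grid[0])
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None   # no path through free cells

print(bfs(GRID, (0, 0), (3, 4)))
```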

Skeletonization Example
First step: problem transformation.
- Model the robot as a point.
- Model each obstacle by combining its perimeter with the path the robot traces around it.
- The resulting "configuration space" gives a simpler search.
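
For a disk-shaped robot, this transformation can be realized by inflating each obstacle by the robot's radius and then planning for a point. A minimal sketch; the obstacle layout and radii are invented for illustration.

```python
# C-space transformation for a disk robot among circular obstacles:
# grow each obstacle by the robot's radius, then treat the robot as a point.
import math

ROBOT_RADIUS = 0.5
OBSTACLES = [((2.0, 2.0), 1.0), ((4.0, 1.0), 0.7)]   # (center, radius), made up

def in_collision(point, robot_radius=ROBOT_RADIUS):
    """A point configuration collides if it lies inside any grown obstacle."""
    x, y = point
    for (cx, cy), r in OBSTACLES:
        if math.hypot(x - cx, y - cy) <= r + robot_radius:   # inflated radius
            return True
    return False

print(in_collision((3.0, 2.0)))   # True: inside a grown obstacle
print(in_collision((0.0, 0.0)))   # False: free space
```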

Navigation

Navigation

Navigation as Simple Search
- Replace a funny robot shape in a field of funny-shaped obstacles with a point robot in a field of configuration-space shapes.
- All movement is start to vertex, vertex to vertex, or vertex to goal.
- Search over the start, the vertices, the goal, and their connections: A* search yields an efficient least-cost path.
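
A minimal A* sketch over such a graph of start, vertices, and goal, using straight-line distance as the admissible heuristic; the coordinates and edges are invented for illustration.

```python
# A* over a small graph of configuration-space vertices.
import heapq, math

COORDS = {'start': (0, 0), 'a': (2, 1), 'b': (1, 3), 'goal': (4, 3)}
EDGES = {'start': ['a', 'b'], 'a': ['goal', 'b'], 'b': ['goal'], 'goal': []}

def dist(u, v):
    (x1, y1), (x2, y2) = COORDS[u], COORDS[v]
    return math.hypot(x1 - x2, y1 - y2)

def a_star(start, goal):
    """Least-cost path; f = g (cost so far) + straight-line distance to goal."""
    frontier = [(dist(start, goal), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr in EDGES[node]:
            g2 = g + dist(node, nbr)
            if g2 < best.get(nbr, float('inf')):
                best[nbr] = g2
                heapq.heappush(frontier,
                               (g2 + dist(nbr, goal), g2, nbr, path + [nbr]))
    return None, float('inf')

print(a_star('start', 'goal'))
```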

Online Search
- Offline search: think a lot, then act once.
- Online search: think a little, act, look, think, ...
  - Necessary for exploration and for (semi)dynamic environments.
- Components: actions, step cost, goal test.
- Compare the online cost to the optimal cost if the environment were known: the competitive ratio (possibly infinite).

Online Search Agents
- Exploration: perform an action in a state and record the result; search locally.
- Why not DFS or BFS? Backtracking requires reversible actions.
- Strategy: hill-climb, using memory: if stuck, try the apparently best neighbor (see the sketch below).
- Assume an unexplored state is the closest; this encourages exploration.
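
A minimal sketch of such an online agent in the spirit of LRTA*: it acts, records results, keeps learned cost estimates in memory, and treats unexplored neighbors optimistically. The grid world below is a made-up example, not from the lecture.

```python
# LRTA*-flavored online agent: move to the apparently best neighbor and
# update a local cost-to-goal estimate H as you go.

GRID = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]          # 0 = free, 1 = blocked (illustrative map)
GOAL = (2, 0)

def neighbors(state):
    r, c = state
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
            yield (nr, nc)

def h0(state):
    """Optimistic initial estimate: Manhattan distance to the goal."""
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

def lrta_star(start, max_steps=50):
    H, state, trace = {}, start, [start]
    for _ in range(max_steps):
        if state == GOAL:
            return trace
        # Estimated cost of stepping to each neighbor, then on to the goal.
        costs = {n: 1 + H.get(n, h0(n)) for n in neighbors(state)}
        best = min(costs, key=costs.get)
        # Learn: the cost from here is at least the best neighbor's cost.
        H[state] = costs[best]
        state = best
        trace.append(state)
    return trace

print(lrta_star((0, 0)))
```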

Acting without Modeling
- Goal: move through terrain.
- Problem I: we don't know what the terrain is like; there is no model (e.g., a rover on Mars).
- Problem II: motion planning is complex, too hard to model.
- Solution: reactive control.

Reactive Control Example
- Hexapod robot in rough terrain.
- Sensors are inadequate for full path planning.
- 2 DOF x 6 legs: the kinematics make a full plan intractable.

Model-free Direct Control
- No environmental model.
- Control law: each leg cycles between being on the ground and in the air.
- Coordinate legs so that 3 (opposing) legs are always on the ground to retain balance.
- Simple, and works on flat terrain.
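
A sketch of the coordination rule implied here, an alternating-tripod gait: the six legs are split into two sets of three non-adjacent legs that take turns standing and swinging. The leg numbering and step count are assumptions.

```python
# Alternating-tripod gait sketch for a six-legged robot.
import itertools

TRIPOD_A = (0, 2, 4)   # e.g. left-front, right-middle, left-rear (assumed)
TRIPOD_B = (1, 3, 5)   # e.g. right-front, left-middle, right-rear (assumed)

def tripod_gait(steps):
    """Yield (stance_legs, swing_legs) for each gait phase."""
    phases = itertools.cycle([(TRIPOD_A, TRIPOD_B), (TRIPOD_B, TRIPOD_A)])
    for phase in itertools.islice(phases, steps):
        yield phase

for stance, swing in tripod_gait(4):
    # Stance legs push the body forward; swing legs lift and move ahead.
    print("stance:", stance, "swing:", swing)
```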

Handling Rugged Terrain
- Problem: obstacles block a leg's forward motion.
- Solution: add a control rule: if blocked, lift the leg higher and repeat.
- Implementable as an FSM: a reflex agent with state.

FSM Reflex Controller
[Figure: a four-state FSM (S1-S4) for one leg, cycling through lift up, move forward, set down, and push back; a "Stuck?" test on the move-forward step branches (yes) to "retract, lift higher" and retries, or (no) continues the cycle.]
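
A runnable sketch of the leg controller as a reflex FSM. The exact mapping of actions onto the slide's S1-S4 labels and the stubbed "stuck" sensor are assumptions.

```python
# Reflex controller for one leg as a finite-state machine.
import random

def stuck():
    return random.random() < 0.3       # stand-in for a leg-blocked sensor

def leg_fsm(steps=12):
    state = 'lift_up'
    for _ in range(steps):
        if state == 'lift_up':
            print('lift up');        state = 'move_forward'
        elif state == 'move_forward':
            if stuck():
                print('stuck: retract, lift higher')   # stay and retry the swing
            else:
                print('move forward'); state = 'set_down'
        elif state == 'set_down':
            print('set down');       state = 'push_back'
        elif state == 'push_back':
            print('push back');      state = 'lift_up'

leg_fsm()
```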

Emergent Behavior
- The reactive controller walks robustly, model-free, with no search or planning.
- It depends on feedback from the environment: behavior emerges from the interaction of simple software with a complex environment.
- The controller can be learned (reinforcement learning).

Subsumption Architecture
- Assembles reactive controllers from FSMs that test and condition on sensor variables.
- Arcs are tagged with messages, sent when the arc is traversed; messages go to effectors or to other FSMs.
- Clocks control the time to traverse an arc (an augmented FSM, or AFSM), e.g. the previous leg controller.
- Reacts to contingencies between the robot and its environment; outputs from multiple AFSMs are synchronized and merged.

Subsumption Architecture
- Controllers are built by composing AFSMs, bottom up: from a single leg, to multiple legs, to obstacle avoidance.
- This avoids complexity and brittleness: no need to model drift, sensor error, or effector error, and no need to model the full motion.
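
A toy sketch of the subsumption idea in software: behaviors are layered bottom up, and a higher layer's output suppresses the layers below it. The specific behaviors and percept fields are invented for illustration; real subsumption wires AFSMs together with timed message passing rather than plain Python functions.

```python
# Minimal subsumption-style controller: each behavior maps a percept to a
# command, and higher layers suppress the layers below them.

def wander(percept):
    return 'forward'                              # layer 0: default behavior

def avoid_obstacle(percept):
    if percept.get('obstacle_ahead'):
        return 'turn_left'                        # layer 1: overrides wander
    return None                                   # no opinion: defer downward

def seek_goal(percept):
    if percept.get('goal_visible'):
        return 'head_to_goal'                     # layer 2: overrides both
    return None

LAYERS = [wander, avoid_obstacle, seek_goal]      # bottom to top

def subsumption_step(percept):
    """The highest layer with an output suppresses everything beneath it."""
    command = None
    for behavior in LAYERS:                       # later (higher) layers win
        out = behavior(percept)
        if out is not None:
            command = out
    return command

print(subsumption_step({'obstacle_ahead': True}))   # turn_left
print(subsumption_step({'goal_visible': True}))     # head_to_goal
print(subsumption_step({}))                         # forward
```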

Subsumption Problems
- Relies on raw sensor data: sensitive to sensor failure, with limited sensor integration.
- Typically restricted to local tasks, and hard to change the task.
- Behavior is emergent rather than a specified plan, so it is hard to understand; interactions among multiple AFSMs are complex.

Solution: Hybrid Approach
A three-layer architecture integrates classic and modern AI:
- Base reactive layer: low-level control, a fast sensor-action loop.
- Executive (glue) layer: sequences actions for the reactive layer.
- Deliberative layer: generates global solutions to complex tasks with planning; model-based (pre-coded and/or learned); slower.
Some variant of this appears in most modern robots (see the sketch below).
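
A schematic sketch of the three layers as a control loop; all function bodies are illustrative stubs, not the lecture's implementation.

```python
# Three-layer architecture sketch: a slow deliberative planner produces a
# plan, an executive sequences it into subgoals, and a fast reactive layer
# turns each subgoal into motor commands while handling immediate obstacles.

def deliberative_plan(world_model, goal):
    """Slow, model-based planning (stubbed): returns a list of waypoints."""
    return ['waypoint1', 'waypoint2', goal]

def executive(plan):
    """Glue layer: hand the reactive layer one subgoal at a time."""
    for subgoal in plan:
        yield subgoal

def reactive_control(subgoal, percept):
    """Fast sensor-action loop: avoid obstacles, else head toward the subgoal."""
    if percept.get('obstacle_ahead'):
        return 'swerve'
    return 'drive_toward ' + subgoal

plan = deliberative_plan(world_model={}, goal='charging_station')
for subgoal in executive(plan):
    print(reactive_control(subgoal, percept={'obstacle_ahead': False}))
```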

Conclusion
- Robotics is an AI microcosm.
- Back to the PEAS model: performance measure, environment, actuators, sensors.
- Robots are agents acting on tasks in the full, complex real world, relying on actuators and sensing of the environment.
- They exploit perception, learning, and reasoning.
- Robotics integrates classic AI (search, representation) with modern AI (learning, robustness, a real-world focus).