https://www.youtube.com/watch?v=INyVbf-yG7s#t=15

Presentation transcript:


Robot position: ξ_I = [x_I, y_I, θ_I]^T. Mapping between frames: ξ_R = R(θ) ξ_I and ξ_I = R(θ)^-1 ξ_R, with R(θ) = [[cos θ, sin θ, 0], [−sin θ, cos θ, 0], [0, 0, 1]]. Each wheel contributes rφ/2 to the forward speed; for rotation, the right wheel contributes ω_r = rφ_r/(2l).

Example: θ = π/4, r_l = 2, r_r = 3, l = 5, φ_l = φ_r = 6; sin(π/4) = cos(π/4) = 1/√2. 1) What is ξ_R? 2) What is ξ_I?
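A minimal sketch of how these two questions could be answered numerically, assuming the standard differential-drive forward-kinematics model (wheel radii r_l and r_r, half-axle length l, wheel speeds φ_l and φ_r, with ξ_R and ξ_I read as robot-frame and inertial-frame velocities); the function name and the use of NumPy are illustrative, not from the slides:

```python
import numpy as np

def forward_kinematics(theta, r_l, r_r, l, phi_l, phi_r):
    """Differential-drive forward kinematics: robot-frame velocity, then inertial frame."""
    # Robot-frame velocity: each wheel contributes r*phi/2 to forward speed;
    # the difference between the wheels produces rotation about the axle midpoint.
    xi_R = np.array([
        (r_r * phi_r + r_l * phi_l) / 2.0,        # forward speed
        0.0,                                       # no sideways motion (non-holonomic)
        (r_r * phi_r - r_l * phi_l) / (2.0 * l),   # angular speed
    ])
    # R(theta) maps inertial-frame to robot-frame quantities; its inverse maps back.
    R = np.array([
        [np.cos(theta),  np.sin(theta), 0.0],
        [-np.sin(theta), np.cos(theta), 0.0],
        [0.0,            0.0,           1.0],
    ])
    xi_I = np.linalg.inv(R) @ xi_R
    return xi_R, xi_I

# Values from the example slide: theta = pi/4, r_l = 2, r_r = 3, l = 5, phi_l = phi_r = 6
xi_R, xi_I = forward_kinematics(np.pi / 4, 2.0, 3.0, 5.0, 6.0, 6.0)
print("xi_R =", xi_R)   # [15, 0, 0.6]
print("xi_I =", xi_I)   # [15/sqrt(2), 15/sqrt(2), 0.6] ~= [10.61, 10.61, 0.6]
```

Under that model, the slide's numbers give ξ_R = [15, 0, 0.6] and ξ_I = [15/√2, 15/√2, 0.6] ≈ [10.6, 10.6, 0.6].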

Note: Piazza post. Note: Lab 3, Lab 4, lecture.

There is no ideal drive configuration that simultaneously maximizes stability, maneuverability, and controllability. For example, there is typically an inverse correlation between controllability and maneuverability.

Holonomicity: If the number of controllable degrees of freedom equals the total degrees of freedom, the robot is said to be holonomic. – Holonomic constraints reduce the number of degrees of freedom of the system. If the controllable degrees of freedom are fewer than the total degrees of freedom, the robot is non-holonomic. A robot is considered redundant if it has more controllable degrees of freedom than degrees of freedom in its task space.

Why study holonomic constraints? How many independent motions can our TurtleBot produce? – 2 at most. How many DOF in the task space does the robot need to control? – 3 DOF. The difference implies that our system has holonomic constraints.

Open-loop control vs. closed-loop control. Example: a car's cruise control.
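A toy sketch of the difference, assuming a made-up first-order speed model with drag and a constant disturbance (a hill); the gains and dynamics are illustrative only. The open-loop controller picks a throttle from its model and never looks at the measured speed, so the unmodeled hill pulls it off target; the closed-loop (feedback) controller corrects the measured error and settles near the set-point:

```python
def simulate(controller, target=30.0, steps=600, dt=0.1):
    """Toy cruise-control model: speed changes with throttle, minus drag and a hill."""
    speed = 0.0
    for _ in range(steps):
        throttle = controller(target, speed)
        accel = throttle - 0.05 * speed - 0.5   # drag + constant disturbance (the hill)
        speed += accel * dt
    return speed

# Open loop: throttle sized for flat ground; it never uses the measured speed.
open_loop = lambda target, speed: 0.05 * target

# Closed loop: proportional feedback on the measured speed error.
closed_loop = lambda target, speed: 2.0 * (target - speed)

print("open loop:  ", simulate(open_loop))    # ends up well below the 30 m/s target
print("closed loop:", simulate(closed_loop))  # settles near 29, despite the hill
```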

Open-loop control vs. closed-loop control. Recap: – Control Architectures – Sensors & Vision – Control & Kinematics

Core AI Problem. Mobile robot path planning: identifying a trajectory that, when executed, will enable the robot to reach the goal location. Representation: – State (state space) – Actions (operators) – Initial and goal states. Plan: – Sequence of actions/states that achieves the desired goal state.

Model the world. Compute a path. Smooth it and satisfy differential constraints. Design a trajectory (velocity function) along the path. Design a feedback control law that tracks the trajectory. Execute.

Fundamental Questions How is a plan represented? How is a plan computed? What does the plan achieve? How do we evaluate a plan’s quality?

1) World is known, goal is not

2) Goal is known, world is not

3) World is known, goal is known

4) Finding shortest path?

5) Finding shortest path with costs

The world consists of... Obstacles – places where the robot can't (or shouldn't) go. Free space – unoccupied space within the world – robots "might" be able to go here; there may be unknown obstacles, and the state may be unreachable.

Configuration Space (C-Space) Configuration Space: the space of all possible robot configurations. – Data structure that allows us to represent occupied and free space in the environment

Configuration Space. For a point robot moving in the 2-D plane, the C-space is the plane itself. (Figure: q_init, q_goal, C, C_free, C_obs for a point robot with no constraints.)

What if the robot is not a point?

What if the robot is not a point? Expand the obstacle(s); reduce the robot to a point.
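One common way to do this on a grid, sketched below under the assumption of a circular robot and a boolean occupancy grid: inflate every obstacle cell by the robot's radius, after which the robot can be planned for as a point. The grid and radius are made up for illustration; the dilation uses SciPy:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def inflate_obstacles(occupancy, robot_radius_cells):
    """Grow occupied cells by the robot radius so the robot can be treated as a point."""
    r = robot_radius_cells
    # Disk-shaped structuring element with the robot's radius (in cells).
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    disk = x**2 + y**2 <= r**2
    return binary_dilation(occupancy, structure=disk)

# Tiny example: a 10x10 grid with one obstacle cell, robot radius of 2 cells.
grid = np.zeros((10, 10), dtype=bool)
grid[5, 5] = True
c_obs = inflate_obstacles(grid, robot_radius_cells=2)
print(c_obs.sum(), "cells are now forbidden (C_obs) instead of 1")
```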

What if we want to preserve the angular component?

If we want to preserve the angular component…

Rigid Body Planning

Transfer in Reinforcement Learning via Shared Features: Konidaris, Scheidwasser, and Barto, 2012

Back to Path Planning… Typical simplifying assumptions for indoor mobile robots: – 2 DOF for representation – robot is round, so that orientation doesn’t matter – robot is holonomic, can move in any direction

Back to Path Planning… Fundamental Questions: How is a plan represented? How is a plan computed? What does the plan achieve? How do we evaluate a plan's quality? (Figure: start and goal – how do we get from q_init to q_goal?)

Discretize. (Figure: start and goal.)

Occupancy Grid. (Figure: start and goal.)

Occupancy Grid, accounting for C-Space. (Figure: start and goal.)

Occupancy Grid, accounting for C-Space. (Figure: start and goal.) A slightly larger grid size can make the goal unreachable. Problem if the grid is "too small"?
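A small sketch of that grid-size trade-off, assuming a conservative rasterization in which a cell is marked occupied if any obstacle overlaps it; the world size, obstacle box, and goal coordinates are invented for illustration:

```python
import numpy as np

def rasterize(obstacles, world_size, cell):
    """Conservative occupancy grid: a cell is occupied if any obstacle box overlaps it."""
    n = int(np.ceil(world_size / cell))
    grid = np.zeros((n, n), dtype=bool)
    for (x0, y0, x1, y1) in obstacles:              # axis-aligned obstacle boxes
        i0, j0 = int(x0 // cell), int(y0 // cell)
        i1, j1 = int(np.ceil(x1 / cell)), int(np.ceil(y1 / cell))
        grid[i0:i1, j0:j1] = True
    return grid

def cell_of(point, cell):
    return int(point[0] // cell), int(point[1] // cell)

obstacles = [(9.0, 9.0, 9.5, 9.5)]                  # one obstacle near the corner
goal = (9.7, 9.7)
for cell in (1.0, 0.25):
    grid = rasterize(obstacles, world_size=10.0, cell=cell)
    i, j = cell_of(goal, cell)
    print(f"cell size {cell}: goal cell occupied -> {bool(grid[i, j])}")
# cell size 1.0: the goal's cell also covers the obstacle, so the goal looks unreachable.
# cell size 0.25: the finer grid keeps the goal cell free.
```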

4) Finding shortest path

General Tree Search Important ideas: – Fringe – Expansion – Exploration strategy Main question: which fringe nodes to explore?
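A minimal sketch of that skeleton, assuming a successor function that maps a state to its children; the only thing that distinguishes exploration strategies is the order in which nodes are popped from the fringe (LIFO behaves like DFS, FIFO like BFS). The example graph is invented:

```python
from collections import deque

def tree_search(start, goal_test, successors, use_stack=True):
    """Generic tree search: the strategy is just the order we pop nodes off the fringe."""
    fringe = deque([(start, [start])])               # fringe nodes, each with its path so far
    while fringe:
        state, path = fringe.pop() if use_stack else fringe.popleft()  # LIFO = DFS, FIFO = BFS
        if goal_test(state):
            return path
        for child in successors(state):              # expansion: generate the node's children
            fringe.append((child, path + [child]))
    return None                                      # fringe exhausted: no solution

# Tiny example graph (adjacency lists); 'S' is the start, 'G' is the goal.
graph = {'S': ['a', 'b'], 'a': ['G'], 'b': ['c'], 'c': ['G'], 'G': []}
print(tree_search('S', lambda s: s == 'G', lambda s: graph[s], use_stack=True))   # DFS: S, b, c, G
print(tree_search('S', lambda s: s == 'G', lambda s: graph[s], use_stack=False))  # BFS: S, a, G
```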

Review: Depth First Search. Strategy: expand the deepest node first. Implementation: the fringe is a LIFO stack. (Figure: the example state-space graph and the search tree DFS expands on it.)

Depth-first search: expand the deepest unexpanded node. Implementation: – fringe = LIFO queue, i.e., put successors at the front (i.e., a stack).

Depth-first search DEMOS

Review: Breadth First Search. Strategy: expand the shallowest node first. Implementation: the fringe is a FIFO queue. (Figure: the example state-space graph and its search tiers.)

Breadth-first search: expand the shallowest unexpanded node. Implementation: – fringe is a FIFO queue, i.e., new successors go at the end. (Demos.)
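A minimal BFS sketch on a grid like the occupancy grids above, assuming 4-connected moves with unit step cost (the grid shown is invented); because every step costs the same, the FIFO fringe returns a path with the fewest cells:

```python
from collections import deque

def bfs_grid(grid, start, goal):
    """Breadth-first search over an occupancy grid (1/True = obstacle).
    Returns a shortest 4-connected path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    fringe = deque([start])                      # FIFO queue: expand shallowest node first
    parent = {start: None}                       # also serves as the visited set
    while fringe:
        cell = fringe.popleft()
        if cell == goal:
            path = []
            while cell is not None:              # walk parent pointers back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                fringe.append((nr, nc))
    return None

# 4x5 grid with a wall; 0 = free, 1 = obstacle.
grid = [[0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 1, 0],
        [0, 1, 0, 0, 0]]
print(bfs_grid(grid, start=(0, 0), goal=(0, 4)))
```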

DFS. Infinite paths make DFS incomplete… How can we fix this? DFS (Depth First Search): – Complete: N – Optimal: N – Time: O(b^LMAX), infinite when paths are infinite – Space: O(LMAX), infinite when paths are infinite. (Figure: a small graph with START, GOAL and states a, b forming a loop.)

DFS. With cycle checking, DFS is complete.* DFS w/ Path Checking: – Complete: Y – Optimal: N – Time: O(b^(m+1)) – Space: O(bm). (Figure: search tree with 1 node at the root, then b nodes, b^2 nodes, …, b^m nodes; m tiers.) * Or with the graph-search version of DFS.
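A short sketch of path checking, assuming a recursive DFS over an adjacency-list graph (the graph is invented): any successor already on the current path is skipped, so a cycle such as a ↔ b can no longer produce an infinite branch:

```python
def dfs_path_checking(state, goal_test, successors, path=None):
    """Depth-first search that skips successors already on the current path,
    so cycles cannot trap the search in an infinite branch."""
    path = [state] if path is None else path + [state]
    if goal_test(state):
        return path
    for child in successors(state):
        if child in path:                 # path checking: don't revisit a state on this branch
            continue
        result = dfs_path_checking(child, goal_test, successors, path)
        if result is not None:
            return result
    return None

# A graph with a cycle between 'a' and 'b'; tree-search DFS could bounce between
# them forever, but path checking cuts the loop.
graph = {'START': ['a'], 'a': ['b'], 'b': ['a', 'GOAL'], 'GOAL': []}
print(dfs_path_checking('START', lambda s: s == 'GOAL', lambda s: graph[s]))
```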

BFS. When is BFS optimal? DFS w/ Path Checking: – Complete: Y – Optimal: N – Time: O(b^(m+1)) – Space: O(bm). BFS: – Complete: Y – Optimal: N* – Time: O(b^(s+1)) – Space: O(b^s). (Figure: search tree with tiers of 1, b, b^2, … nodes; the shallowest solution lies in tier s, which contains b^s nodes.)

In this problem the start state is S, and the goal state is G. IGNORE the heuristic estimate, h, and the transition costs next to the edges. Assume that ordering is defined by, and ties are always broken by, choosing the state that comes first alphabetically. 1. What is the order of states expanded using Depth First Search? Assume DFS terminates as soon as it reaches G. 2. What is the order of states expanded using Breadth First Search? Assume BFS terminates as soon as it reaches G.