Lab 4
1. Get an image into a ROS node
2. Find all the orange pixels (HSV is suggested)
3. Identify the midpoint of all the orange pixels
4. Explore the findContours and SimpleBlobDetector functions in OpenCV. Give a 1-2 sentence description of each, then use one of them (a rough sketch follows below)
5. Extra credit: use a video rather than a single image
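
A minimal sketch of steps 1-4, assuming ROS 1 (rospy), cv_bridge, and OpenCV 4; the topic name /camera/image_raw and the HSV range used for "orange" are assumptions you will need to adjust for your robot and lighting:

import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_callback(msg):
    # Step 1: convert the ROS Image message to a BGR OpenCV array.
    bgr = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")

    # Step 2: threshold "orange" in HSV space (the range here is a rough guess).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 100, 100), (20, 255, 255))

    # Step 3: midpoint (centroid) of the orange pixels via image moments.
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        rospy.loginfo("orange centroid: (%.1f, %.1f)", cx, cy)

    # Step 4: findContours groups the mask into connected blobs
    # (OpenCV 4 return signature; OpenCV 3 also returns the modified image).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rospy.loginfo("found %d orange contours", len(contours))

if __name__ == "__main__":
    rospy.init_node("orange_tracker")
    rospy.Subscriber("/camera/image_raw", Image, image_callback)
    rospy.spin()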

Matching [figure: global map vs. local map, with an obstacle] Where am I on the global map? Examine different possible robot positions.

General approach: A: action, S: pose, O: observation. The position at time t depends on the previous position, the action taken, and the current observation.

1. Uniform prior 2. Observation: see pillar 3. Action: move right 4. Observation: see pillar

Localization cycle [diagram]: Initial Belief → Sense (gain information) → Move (lose information) → repeat.

Bayes Formula
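
Written out, the formula on this slide is the standard form of Bayes' rule:

P(x \mid z) = \frac{P(z \mid x)\,P(x)}{P(z)}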

Simple Example of State Estimation

Example: P(z | open) = 0.6, P(z | ¬open) = 0.3, P(open) = P(¬open) = 0.5
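
Carrying these numbers through Bayes' rule gives the posterior that the door is open after the measurement:

P(\mathrm{open} \mid z) = \frac{P(z \mid \mathrm{open})\,P(\mathrm{open})}{P(z \mid \mathrm{open})\,P(\mathrm{open}) + P(z \mid \neg\mathrm{open})\,P(\neg\mathrm{open})} = \frac{0.6 \cdot 0.5}{0.6 \cdot 0.5 + 0.3 \cdot 0.5} = \frac{2}{3} \approx 0.67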

Actions: the world is often dynamic, since actions carried out by the robot, actions carried out by other agents, or simply the passage of time all change the world. How can we incorporate such actions?

Typical Actions Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty. (Can you think of an exception?)

Modeling Actions

Example: Closing the door

If the door is open, the action "close door" succeeds in 90% of all cases: P(closed | u = "close door", open) = 0.9 and P(open | u, open) = 0.1.

Integrating the Outcome of Actions: applied to the status of the door, given that we just (tried to) close it.

Integrating the Outcome of Actions: P(closed | u) = P(closed | u, open) P(open) + P(closed | u, closed) P(closed)
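
Plugging in the numbers (the value P(closed | u, closed) = 1, i.e. closing an already-closed door leaves it closed, is assumed here since it appears only in the slide's figure):

P(\mathrm{closed} \mid u) = 0.9 \cdot 0.5 + 1 \cdot 0.5 = 0.95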

Example: The Resulting Belief

OK… but then what’s the chance that it’s still open?

Example: The Resulting Belief

Summary Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
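
The recursive update the summary refers to has the standard Bayes-filter form (η is a normalizing constant):

\mathrm{bel}(x_t) = \eta \, P(z_t \mid x_t) \sum_{x_{t-1}} P(x_t \mid u_t, x_{t-1}) \, \mathrm{bel}(x_{t-1})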

Example

[figure: bar chart of the belief P(s) over states s]

Example

How does the probability distribution change if the robot now senses a wall?

Example 2 [figure: uniform prior of 0.2 per cell]

Example: the robot senses yellow. The probability of cells that match the observation should go up; the probability of cells that don't should go down.

Example 2: the robot senses yellow. States that match the observation: multiply the prior by 0.6. States that don't match the observation: multiply the prior by the miss probability.

Example 2: the weighted values sum to 0.36, so the probability distribution no longer sums to 1. Normalize (divide by the total).
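
A small sketch of this discrete "sense" update. The hit/miss weights (0.6 and 0.2) and the five-cell world layout are assumptions chosen so the unnormalized total comes out to 0.36 as on the slide:

# Discrete Bayes "sense" update: weight each cell by how well it matches
# the measurement, then renormalize.
def sense(prior, world, measurement, p_hit=0.6, p_miss=0.2):
    posterior = [p * (p_hit if color == measurement else p_miss)
                 for p, color in zip(prior, world)]
    total = sum(posterior)            # 0.36 for the example below
    return [p / total for p in posterior]

belief = [0.2] * 5                                          # uniform prior
world = ["green", "yellow", "yellow", "green", "green"]     # assumed map
belief = sense(belief, world, "yellow")
print(belief)   # matching cells rise to 1/3 each, the others drop to 1/9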

Nondeterministic Robot Motion [figure: robot on a row of cells]. The robot can now move left and right.

Nondeterministic Robot Motion. The robot can now move left and right. When executing "move x steps to the right" (or left): 0.8: move x steps, 0.1: move x-1 steps, 0.1: move x+1 steps.

Nondeterministic Robot Motion: "Right 2". When executing "move x steps to the right": 0.8: move x steps, 0.1: move x-1 steps, 0.1: move x+1 steps.

Nondeterministic Robot Motion: "Right 2" [figure: resulting belief, 0.8 at the commanded cell and 0.1 one cell short of it and one cell past it]. When executing "move x steps to the right": 0.8: move x steps, 0.1: move x-1 steps, 0.1: move x+1 steps.
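
A matching sketch of this noisy "move" step as a discrete convolution of the belief with the motion model (a cyclic, wrap-around world is assumed here purely to keep the example short):

# Discrete Bayes "move" update: convolve the belief with the motion noise
# model (0.8 exact, 0.1 undershoot, 0.1 overshoot), wrapping around the ends.
def move(belief, steps):
    n = len(belief)
    return [0.8 * belief[(i - steps) % n]
            + 0.1 * belief[(i - steps + 1) % n]   # undershot by one cell
            + 0.1 * belief[(i - steps - 1) % n]   # overshot by one cell
            for i in range(n)]

belief = [0.0, 1.0, 0.0, 0.0, 0.0]   # robot known to be in cell 1
print(move(belief, 2))               # "Right 2" -> [0.0, 0.0, 0.1, 0.8, 0.1]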

Nondeterministic Robot Motion What is the probability distribution after 1000 moves?

Example

Right

Example

Right

Kalman Filter Model: the belief is a Gaussian (mean μ, variance σ²).

Kalman Filter cycle [diagram]: Initial Belief is a Gaussian (μ, σ²); Sense gains information via Bayes rule (multiplication); Move loses information via convolution (addition).

Measurement Example [figure: prior Gaussian (μ, σ²) and measurement Gaussian (v, r²)]

Measurement Example [figure: the two Gaussians]: what is the updated variance σ²'?

Measurement Example: to compute the updated μ' and σ²', multiply the two Gaussian densities (μ, σ²) and (v, r²) and renormalize so the result integrates to 1. The product of two Gaussian densities is itself proportional to a Gaussian.

Multiplying Two Gaussians: prior (μ, σ²) times measurement (v, r²).
1. Write out the product of the two exponentials (ignoring the constant terms in front).
2. To find the new mean, maximize this function (i.e., take the derivative).
3. Set the derivative equal to 0 and solve for x, which gives μ' = (r²μ + σ²v) / (r² + σ²).
4. Something similar for the variance gives σ²' = 1 / (1/σ² + 1/r²) = σ²r² / (σ² + r²).
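
The same measurement update as a small Python sketch (the function name is mine; the example inputs are a guess chosen only to reproduce the μ' = 12.4, σ²' = 1.6 result shown a few slides later):

# 1D Kalman measurement update: fuse the prior N(mu, sigma2)
# with a measurement N(v, r2).
def measurement_update(mu, sigma2, v, r2):
    mu_new = (r2 * mu + sigma2 * v) / (r2 + sigma2)
    sigma2_new = 1.0 / (1.0 / sigma2 + 1.0 / r2)
    return mu_new, sigma2_new

print(measurement_update(10.0, 8.0, 13.0, 2.0))   # (12.4, 1.6)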

Another point of view: the prior (μ, σ²) is p(x), the measurement (v, r²) is p(z|x), and the result (μ', σ²') is p(x|z).

Example

μ' = 12.4, σ²' = 1.6

Kalman Filter cycle [diagram, highlighting the Move step]: Move loses information via convolution (addition); the belief stays a Gaussian (μ, σ²).

Motion Update: for a motion u with motion-noise variance r²: μ' = μ + u, σ²' = σ² + r².

Motion Update: for motion u with motion-noise variance r²: μ' = μ + u = 18, σ²' = σ² + r² = 10.
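
The matching prediction step in code (again, the function name is mine): shift the mean by the commanded motion and add the motion-noise variance.

# 1D Kalman motion (prediction) update: the belief shifts right by u and widens by r2.
def motion_update(mu, sigma2, u, r2):
    return mu + u, sigma2 + r2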

Kalman Filter cycle [diagram]: Initial Belief is a Gaussian (μ, σ²); Sense multiplies Gaussians; Move shifts and widens: μ' = μ + u, σ²' = σ² + r².

Kalman Filter in Multiple Dimensions: 2D Gaussian.
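
The density behind the "2D Gaussian" on this slide, in its general multivariate form (mean vector μ, covariance matrix Σ, dimension d):

\mathcal{N}(x;\,\mu,\Sigma) = \frac{1}{\sqrt{(2\pi)^d \,\lvert \Sigma \rvert}} \exp\!\left(-\tfrac{1}{2}(x-\mu)^{\top} \Sigma^{-1} (x-\mu)\right)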