
/09/dji-phantom-crashes-into- canadian-lake/

HW2
Under door by 11:59pm today
Via Angel by 11:59pm today (or just hand it to me now)
Any questions? (Can also talk after class.)

Next Monday (10/14) – Tell Me About It: Language and Learning for Human-Robot Interaction – 4:10pm, EME 52
– Cynthia Matuszek is a Ph.D. candidate in Computer Science and Engineering at the University of Washington, studying with Drs. Dieter Fox and Luke Zettlemoyer. Her research focuses on how robots can interact robustly in unconstrained real-world settings.
Next Tuesday – Intro to HRI
Next Thursday – Reaction to “Camera-Based Navigation of a Low-Cost Quadrocopter” – Review
Next next Tuesday – Midterm – Remember, the book is in lab if you want to study

Reaction to gel12iros.pdf
Due by 11:59pm on Wednesday, 10/16
1 paragraph to ½ a page
What did you find interesting? What was confusing?
Summarize the paper
What questions does this paper raise?
Any ideas for a future project from this paper?

Overview
Histogram Filter – Discrete – Multimodal
Particle Filter – Continuous – Multimodal
Kalman Filter – Continuous – Unimodal

Example (figure slides: in both examples, the belief shifts right)

Kalman Filter Model: a Gaussian, with mean μ and variance σ²

Kalman Filter cycle: Initial Belief → Sense → Move → repeat
Sense: gain information – Bayes rule (multiplication)
Move: lose information – convolution (addition)
The belief is a Gaussian: μ, σ²

Measurement Example
Prior: N(μ, σ²); measurement: N(v, r²)

Measurement Example
Given prior N(μ, σ²) and measurement N(v, r²), what is the new variance σ²′?

Measurement Example
To calculate μ′ and σ²′, multiply the two Gaussian densities and renormalize so the result integrates to 1. The product of two Gaussian densities is itself (proportional to) a Gaussian.

Multiplying Two Gaussians
Prior N(μ, σ²), measurement N(v, r²):
1. Write the product of the two Gaussian densities (ignoring the constant term in front):
exp(−(x−μ)²/2σ²) · exp(−(x−v)²/2r²)
2. To find the mean, maximize this function (i.e., take the derivative with respect to x)
3. Set the derivative equal to 0 and solve for x:
μ′ = (r²μ + σ²v) / (r² + σ²)
4. …something similar for the variance:
σ²′ = σ²r² / (σ² + r²)
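The Gaussian-multiplication update above can be sketched in a few lines of Python. This is a minimal sketch; the function name and the example inputs (μ=10, σ²=8, v=13, r²=2) are assumptions, chosen because they reproduce the μ′=12.4, σ²′=1.6 result on the later example slide.

```python
def measurement_update(mu, sigma2, v, r2):
    """Multiply prior N(mu, sigma2) by measurement N(v, r2).

    Returns the mean and variance of the renormalized product,
    which is itself a Gaussian.
    """
    mu_new = (r2 * mu + sigma2 * v) / (r2 + sigma2)
    sigma2_new = (sigma2 * r2) / (sigma2 + r2)
    return mu_new, sigma2_new

print(measurement_update(10.0, 8.0, 13.0, 2.0))  # (12.4, 1.6)
```

Note that the new variance is always smaller than either input variance: sensing can only gain information.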

Another point of view…
Prior N(μ, σ²): p(x)
Measurement N(v, r²): p(z|x)
Posterior N(μ′, σ²′): p(x|z)

Example

μ′ = 12.4, σ²′ = 1.6

Kalman Filter cycle: Initial Belief → Sense → Move
Move: lose information – convolution (addition)
The belief is a Gaussian: μ, σ²

Motion Update
For motion u, with model of motion noise r²:
μ′ = μ + u
σ²′ = σ² + r²

Motion Update
For motion u, with model of motion noise r²:
μ′ = μ + u = 18
σ²′ = σ² + r² = 10

Kalman Filter cycle: Initial Belief → Sense → Move
The belief is a Gaussian: μ, σ²
After the move: μ′ = μ + u = 18, σ²′ = σ² + r² = 10
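The full sense/move cycle can be sketched as follows. This is a minimal sketch; the function names and the measurement/motion sequence are assumptions chosen for illustration.

```python
def sense(mu, sigma2, v, r2):
    # Bayes rule: multiply Gaussians -> variance shrinks (gain information)
    return ((r2 * mu + sigma2 * v) / (r2 + sigma2),
            (sigma2 * r2) / (sigma2 + r2))

def move(mu, sigma2, u, r2):
    # Convolution: add means and variances -> variance grows (lose information)
    return mu + u, sigma2 + r2

mu, sigma2 = 0.0, 10000.0               # weak initial belief
for z, u in [(5.0, 1.0), (6.0, 1.0), (7.0, 1.0)]:
    mu, sigma2 = sense(mu, sigma2, z, 4.0)   # measurement noise variance 4
    mu, sigma2 = move(mu, sigma2, u, 2.0)    # motion noise variance 2
```

After a few cycles the variance settles where the information gained per sense balances the information lost per move.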

Kalman Filter in Multiple Dimensions: the belief becomes a multivariate (e.g., 2D) Gaussian, with mean vector μ and covariance matrix Σ

Implementing a Kalman Filter: example
VERY simple model of robot movement
What information do our sensors give us?

Implementing a Kalman Filter: example
F is the state transition matrix; the measurement function H = [1 0] means the sensor observes only the first component of the state.

Implementing a Kalman Filter
Prediction:
x′ = F x + u
P′ = F P Fᵀ (plus motion noise, when modeled)
Measurement update:
y = z − H x′
S = H P′ Hᵀ + R
K = P′ Hᵀ S⁻¹
x = x′ + K y
P = (I − K H) P′
where x = state estimate, P = uncertainty covariance, F = state transition matrix, u = motion vector, z = measurement, H = measurement function, R = measurement noise, I = identity matrix
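For a position–velocity state with H = [1 0], the matrix equations on this slide reduce to a few lines of scalar arithmetic. This is a minimal pure-Python sketch; the state layout [position, velocity], F = [[1, dt], [0, 1]], and the example numbers are assumptions.

```python
def kf_step(x, P, z, dt=1.0, r=1.0, q=0.0):
    """One Kalman filter predict + update step for state [position, velocity].

    F = [[1, dt], [0, 1]] (constant velocity), H = [1, 0] (measure position),
    r = measurement noise variance, q = optional process noise on the diagonal.
    """
    # Predict: x' = F x
    pos, vel = x
    x_pred = [pos + dt * vel, vel]
    # P' = F P F^T (+ q I), expanded for this F
    p00, p01 = P[0]
    p10, p11 = P[1]
    P_pred = [[p00 + dt * (p01 + p10) + dt * dt * p11 + q, p01 + dt * p11],
              [p10 + dt * p11, p11 + q]]
    # Update with a scalar measurement z of position (H = [1, 0])
    y = z - x_pred[0]                         # innovation
    S = P_pred[0][0] + r                      # innovation variance: H P' H^T + R
    K = [P_pred[0][0] / S, P_pred[1][0] / S]  # Kalman gain: P' H^T / S
    x_new = [x_pred[0] + K[0] * y, x_pred[1] + K[1] * y]
    # P = (I - K H) P', expanded
    P_new = [[(1 - K[0]) * P_pred[0][0], (1 - K[0]) * P_pred[0][1]],
             [P_pred[1][0] - K[1] * P_pred[0][0],
              P_pred[1][1] - K[1] * P_pred[0][1]]]
    return x_new, P_new

# Position-only measurements 1, 2, 3 let the filter infer velocity ~1,
# even though velocity is never observed directly.
x, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for z in [1.0, 2.0, 3.0]:
    x, P = kf_step(x, P, z)
```

The large initial covariance encodes "no good starting guess"; after three measurements the off-diagonal terms of P have tied position to velocity, which is what lets the filter estimate the unobserved variable.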

Summary
Kalman Filter
– Continuous
– Unimodal
– Harder to implement
– More efficient
– Requires a good starting guess of robot location
Particle Filter
– Continuous
– Multimodal
– Easier to implement
– Less efficient
– Does not require an accurate prior estimate

SLAM Simultaneous localization and mapping: Is it possible for a mobile robot to be placed at an unknown location in an unknown environment and for the robot to incrementally build a consistent map of this environment while simultaneously determining its location within this map?

Three Basic Steps The robot moves – increases the uncertainty on robot pose – need a mathematical model for the motion – called motion model

Three Basic Steps The robot discovers interesting features in the environment – called landmarks – uncertainty in the location of landmarks – need a mathematical model to determine the position of the landmarks from sensor data – called inverse observation model

Three Basic Steps The robot observes previously mapped landmarks – uses them to correct both self localization and the localization of all landmarks in space – uncertainties decrease – need a model to predict the measurement from predicted landmark location and robot localization – called direct observation model

How to do SLAM

The Essential SLAM Problem

SLAM Paradigms Some of the most important approaches to SLAM: – Extended Kalman Filter SLAM (EKF SLAM) – Particle Filter SLAM (FAST SLAM) – GraphSLAM

EKF SLAM
Keep track of a combined state vector at time t:
– x, y, θ
– m1,x, m1,y, s1
– …
– mN,x, mN,y, sN
m = estimated coordinates of a landmark
s = sensor’s signature for this landmark
Very similar to EKF localization, starting at the origin
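The layout of this combined state vector can be made concrete with a small indexing helper (a sketch; the helper name is an assumption):

```python
def landmark_slice(i):
    """Index range of landmark i (0-based) in the combined EKF-SLAM state
    [x, y, theta, m1x, m1y, s1, ..., mNx, mNy, sN]:
    3 entries of robot pose, then 3 entries per landmark."""
    start = 3 + 3 * i
    return start, start + 3

# landmark_slice(0) -> (3, 6): the first landmark's (mx, my, s) entries
```

The covariance matrix of this state has (3 + 3N)² entries, which is why EKF SLAM updates get expensive as the number of landmarks N grows.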

EKF-SLAM Grey: Robot Pose Estimate White: Landmark Location Estimate

Visual SLAM
Single camera: what’s harder? How could it be possible?

GraphSLAM
SLAM can be interpreted as a sparse graph of nodes and constraints between nodes.
Nodes: robot locations and map-feature locations
Edges: constraints between
▫ consecutive robot poses (given by the odometry input u)
▫ robot poses and the features observed from these poses

GraphSLAM
Key property: the constraints are not to be thought of as rigid constraints but as soft constraints
▫ constraints act like springs
Solve full SLAM by relaxing these constraints
▫ get the best estimate of the robot path and the environment map by computing the minimal-energy state of this spring-mass network

GraphSLAM

1. Build the graph
2. Inference: solve a system of linear equations to get the map and path

GraphSLAM
Maintain a matrix Ω and a vector ξ, indexed over the poses x1, x2, x3 and landmarks m1, m2.
Each observation (movement, sensing) adds a constraint to the matrix and vector through addition to the four elements of the matrix defined over those entities.
Example: the motion constraint x2 = x1 + 5 contributes the linear constraints x1 − x2 = −5 and x2 − x1 = 5.
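Building Ω and ξ and then solving for the poses can be sketched in pure Python for a 1-D world. This is a minimal sketch: the first motion of 5 follows the constraint on this slide, while the anchor value x1 = −3 and the second motion of 3 are assumptions added so the system has a unique solution.

```python
def solve(A, b):
    """Solve A mu = b by Gauss-Jordan elimination (small dense system)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

n = 3                                   # poses x1, x2, x3 in a 1-D world
Omega = [[0.0] * n for _ in range(n)]
xi = [0.0] * n

def add_anchor(i, value):               # constraint x_i = value
    Omega[i][i] += 1.0
    xi[i] += value

def add_motion(i, j, d):                # constraint x_j - x_i = d
    Omega[i][i] += 1.0; Omega[j][j] += 1.0
    Omega[i][j] -= 1.0; Omega[j][i] -= 1.0
    xi[i] -= d; xi[j] += d

add_anchor(0, -3.0)                     # assumed: the robot starts at x1 = -3
add_motion(0, 1, 5.0)                   # the slide's constraint: x2 = x1 + 5
add_motion(1, 2, 3.0)                   # assumed second motion of 3
mu = solve(Omega, xi)                   # mu ≈ [-3, 2, 5]
```

Each constraint only touches the matrix entries of the variables it relates, which is what keeps Ω sparse for long trajectories.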


(matrix figure: Ω and ξ after adding the constraints, indexed over x1, x2, x3, m1, m2)

GraphSLAM
Solve the linear system Ω μ = ξ to obtain the best estimate μ of all poses and landmarks.

Example

GraphSLAM
The update time of the graph is constant.
The required memory is linear in the number of features.
The final graph optimization can become computationally costly if the robot path is long.
Impressive results with even hundreds of millions of features.