Brian Peasley and Stan Birchfield

Presentation transcript:

Fast and Accurate PoseSLAM by Combining Relative and Global State Spaces
Brian Peasley and Stan Birchfield
Microsoft Robotics / Clemson University

PoseSLAM
Problem: Given a sequence of robot poses and loop closure(s), update the poses.
  pose at a: $p_a = (x_a, y_a, \theta_a)$
  pose at b: $p_b = (x_b, y_b, \theta_b)$
  relative measurement between a and b: $d_{ab} = (\delta x_{ab}, \delta y_{ab}, \delta\theta_{ab})$
[Figure: initial (noisy) pose estimates, where the end of the trajectory fails to meet the start because of error caused by sensor drift; updating the loop closure edge yields the final pose estimates.]
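As a concrete illustration of these inputs, a pose graph can be stored as little more than two arrays; the field names below are hypothetical, not from the paper:

    # Minimal pose-graph containers (illustrative sketch, not the authors' code).
    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float          # position
        y: float
        theta: float      # heading

    @dataclass
    class Edge:
        a: int            # index of first pose
        b: int            # index of second pose
        dx: float         # relative measurement d_ab = (dx, dy, dtheta)
        dy: float
        dtheta: float
        info: float = 1.0 # scalar stand-in for the information matrix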

Video

PoseSLAM as graph optimization
Key: Formulate as a graph
  Nodes = poses
  Edges = relative pose measurements
Solution: Minimize an objective function
  total error: $F(x) = \sum_{(a,b)} r_{ab}^T(x)\, \Omega_{ab}\, r_{ab}(x)$
  residual for a single edge: $r_{ab}(x) = d_{ab} - \hat{d}_{ab}(x)$ (observed measurement minus the value predicted by the model)
  where $x$ is the state and $\Omega_{ab}$ is the information matrix
Question: How does the state $x$ relate to the poses $p$?
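A minimal sketch of evaluating this objective, assuming the residuals and information matrices have already been computed (notation mine, not the authors' code):

    # Total error: sum of Mahalanobis-weighted edge residuals r^T * Omega * r.
    import numpy as np

    def total_error(residuals, infos):
        # residuals: list of (3,) arrays r_ab; infos: list of (3,3) matrices Omega_ab
        return sum(r @ Omega @ r for r, Omega in zip(residuals, infos))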

PoseSLAM as nonlinear minimization
With a little math, minimizing $F(x)$ becomes a linear system $A\,\Delta x = b$:
$$\underbrace{J^T(\tilde{x})\,\Omega\,J(\tilde{x})}_{A}\ \Delta x = \underbrace{J^T(\tilde{x})\,\Omega\,r(\tilde{x})}_{b}$$
where $J$ is the Jacobian, $\tilde{x}$ is the current state estimate, and $\Delta x$ is the increment to the state estimate. Repeatedly solve this linear system until convergence.
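To make the iteration concrete, here is a toy Gauss-Newton loop for a 1D pose chain with one loop-closure edge; a sketch under my own simplifications (scalar states, scalar weights, node 0 anchored to fix the gauge), not the authors' code:

    # Repeatedly build and solve J^T Omega J dx = J^T Omega r, then update x.
    import numpy as np

    def gauss_newton(x, edges, iters=10):
        n = len(x)
        for _ in range(iters):
            A = np.zeros((n, n))
            b = np.zeros(n)
            for a, c, d, w in edges:          # edge: measurement d of x[c] - x[a]
                r = d - (x[c] - x[a])         # residual: observed - predicted
                J = np.zeros(n)               # Jacobian of the prediction
                J[a], J[c] = -1.0, 1.0
                A += w * np.outer(J, J)
                b += w * J * r
            A[0, 0] += 1.0                    # anchor node 0 (gauge freedom)
            x = x + np.linalg.solve(A, b)     # solve A dx = b and update
        return x

    x0 = np.array([0.0, 2.1, 3.9, 6.2])       # noisy initial poses
    odom = [(0, 1, 2.0, 1.0), (1, 2, 2.0, 1.0), (2, 3, 2.0, 1.0)]
    loop = [(0, 3, 6.0, 1.0)]                 # loop-closure edge
    print(gauss_newton(x0, odom + loop))      # approx. [0, 2, 4, 6]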

How to solve the linear system?
Option 1: Solve the equation directly, using sparse linear algebra.
  Example: g2o (Kummerle et al. ICRA 2011)
  Drawback: solving the equation requires an external sparse linear algebra package (e.g., CHOLMOD)
Option 2: Perform gradient descent, updating based on a single edge at a time: $\Delta x = \lambda M^{-1} J_{ab}^T \Omega_{ab}\, r_{ab}$, where $M$ is a preconditioning matrix and $\lambda$ is a learning rate.
  Example: TORO (Olson et al. ICRA 2006; Grisetti et al. RSS 2007)
  Drawback: requires many iterations (with a global state space)

Choosing the state space (how the state $x$ maps to the poses $p$):
  Global state space (GSS): allows fine adjustments
  Incremental state space (ISS): simple Jacobian, decoupled parameters, slow convergence (Olson et al. ICRA 2006)
  Relative state space (RSS): simple Jacobian, coupled parameters, fast convergence (our approach)

Global state space (GSS): each state is a pose directly, $p_i = x_i = (x, y, \theta)$.
Example (four collinear poses): $x_0 = (0,0,0)$, $x_1 = (2,0,0)$, $x_2 = (4,0,0)$, $x_3 = (6,0,0)$.
Rotating pose 1 by 45° changes only one state, $x_1 = (2,0,45)$; the remaining poses are unaffected.

Incremental state space (ISS): each state is a pose increment $(\Delta x, \Delta y, \Delta\theta)$, and $p_i = \sum_{j=0}^{i} x_j$.
Same example: $x_0 = (0,0,0)$, $x_1 = (2,0,0)$, $x_2 = (2,0,0)$, $x_3 = (2,0,0)$.
Setting $\Delta\theta_1 = 45°$ (i.e., $x_1 = (2,0,45)$) changes the headings of all downstream poses, but their positions stay on the x-axis: the parameters are decoupled.

Relative state space (RSS): each state is a relative pose $(\Delta x, \Delta y, \Delta\theta)$, and each increment is rotated by the accumulated heading: $p_i = \sum_{j=0}^{i} R\big(\textstyle\sum_{k=0}^{j-1} \theta_k\big)\, x_j$.
Same example: $x_0 = (0,0,0)$, $x_1 = (2,0,0)$, $x_2 = (2,0,0)$, $x_3 = (2,0,0)$.
Now setting $\Delta\theta_1 = 45°$ rotates the positions of all downstream poses as well: the parameters are coupled, so a single update can swing an entire chain of poses.
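The difference is easy to see in code. A minimal sketch (notation and the rotate-before-increment convention are my choices, not the authors' implementation) that recovers global poses from ISS and RSS states:

    # ISS: increments are summed independently; RSS: each translational
    # increment is first rotated by the heading accumulated so far.
    import numpy as np

    def poses_from_iss(states):
        # states: list of (dx, dy, dtheta); cumulative sums give the poses
        return np.cumsum(np.asarray(states, dtype=float), axis=0)

    def poses_from_rss(states):
        poses, xy, th = [], np.zeros(2), 0.0
        for dx, dy, dth in states:
            c, s = np.cos(np.radians(th)), np.sin(np.radians(th))
            xy = xy + np.array([c * dx - s * dy, s * dx + c * dy])
            th += dth
            poses.append((xy[0], xy[1], th))
        return poses

    states = [(0, 0, 0), (2, 0, 45), (2, 0, 0), (2, 0, 0)]  # the slides' example
    print(poses_from_iss(states))  # positions stay on the x-axis (decoupled)
    print(poses_from_rss(states))  # downstream positions rotate too (coupled)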

Proposed approach: two phases
Phase 1: non-stochastic gradient descent with a relative state space (POReSS)
  RSS allows many poses to be affected in each iteration
  nSGD prevents being trapped in local minima
  Result: quickly reaches a "good" solution
  Drawback: long time to convergence
Phase 2: Gauss-Seidel with a global state space (Graph-Seidel)
  Initialized with the output of POReSS
  GSS changes only one pose at a time in each iteration
  Gauss-Seidel allows for quick calculations
  Result: fast "fine tuning"

Pose Optimization using a Relative State Space (POReSS)
Full batch update: $\Delta x = \big(J^T(\tilde{x})\,\Omega\,J(\tilde{x})\big)^{-1} J^T(\tilde{x})\,\Omega\,r(\tilde{x})$
nSGD considers only one edge at a time: $\Delta x = M^{-1} J_{ab}^T(\tilde{x})\,\Omega_{ab}\,r_{ab}(\tilde{x})$
For both ISS and RSS, the Jacobian of edge $(a,b)$ is block-sparse, $J_{ab} = [\,\mathbf{0}\ \cdots\ \mathbf{I}\ \cdots\ \mathbf{I}\ \cdots\ \mathbf{0}\,]$, with identity blocks in columns $a{+}1$ through $b$: a single block for an edge between consecutive nodes, a run of blocks for a loop-closure edge between non-consecutive nodes.
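A hedged sketch of this single-edge step, restricted to 1D translation with a crude diagonal preconditioner of my choosing (not the authors' code):

    # Distribute one edge's residual over the relative states a+1..b,
    # the nonzero columns of J_ab.
    def nsgd_edge_step(x, a, b, d_ab, omega=1.0, lam=0.5):
        pred = sum(x[a + 1:b + 1])        # edge prediction from relative states
        r = d_ab - pred                   # residual: observed - predicted
        step = lam * omega * r / (b - a)  # M^{-1} taken here as I/(b - a)
        for j in range(a + 1, b + 1):
            x[j] += step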

How edges affect states
[Figure: chain of nodes $x_0, x_1, x_2, x_3$ with consecutive edges $e_{01}, e_{12}, e_{23}$ and loop-closure edge $e_{03}$]
  $e_{01}$ affects $x_1$
  $e_{12}$ affects $x_2$
  $e_{23}$ affects $x_3$
  $e_{03}$ affects $x_0, x_1, x_2, x_3$

Gauss-Seidel
Repeat: linearize about the current estimate, then solve $Ax = b$; $A$ is updated each time.
$A$ is sparse, so solve row by row. Row $i$ reads $a_{ii} x_i + \sum_{j \neq i} a_{ij} x_j = b_i$, giving the update
$x_i = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \neq i} a_{ij} x_j \Big)$
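For reference, a plain Gauss-Seidel sweep implementing that row update (a generic sketch, assuming $A$ is such that the iteration converges, e.g. diagonally dominant):

    # Each sweep solves row i for x_i using the latest values of the others.
    import numpy as np

    def gauss_seidel(A, b, x, sweeps=100):
        for _ in range(sweeps):
            for i in range(len(b)):
                s = A[i] @ x - A[i, i] * x[i]   # sum_{j != i} a_ij x_j
                x[i] = (b[i] - s) / A[i, i]
        return x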

Graph-Seidel
Do not linearize! Instead assume the orientations $\theta$ are constant, so $A$ remains constant.
  $A$, diagonal: connectivity of the node (sum of the off-diagonals)
  $A$, off-diagonal: connectivity between nodes ($-\Omega_{ib}$ if connected, 0 otherwise)
  $b$: sum of edges in minus sum of edges out
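A sketch of assembling that constant system with scalar weights in one spatial dimension, node 0 anchored (my simplification, not the authors' code):

    # A is Laplacian-like: degree on the diagonal, -weight off-diagonal;
    # b accumulates weighted measurements of edges in minus edges out.
    import numpy as np

    def build_graph_seidel_system(n, edges):
        # edges: list of (a, c, d_ac, w) with d_ac the measurement of x_c - x_a
        A = np.zeros((n, n))
        b = np.zeros(n)
        for a, c, d, w in edges:
            A[a, a] += w; A[c, c] += w   # diagonal: connectivity of node
            A[a, c] -= w; A[c, a] -= w   # off-diagonal: -w if connected
            b[c] += w * d                # edge into node c
            b[a] -= w * d                # edge out of node a
        A[0, 0] += 1.0                   # anchor node 0 to fix the gauge
        return A, b

Feeding the returned A and b to the gauss_seidel sweep above performs the refinement described in the next two slides.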

Refining the estimate using a Global State Space (Graph-Seidel)
Per-node update: $x_i \leftarrow \Big[\operatorname{diag}\big(\textstyle\sum_{(i,b)\in\varepsilon_i} \Omega_{ib}\big)\Big]^{-1} \Big( b_i - \textstyle\sum_{(i,b)\in\varepsilon_i} (-\Omega_{ib})\, x_b \Big)$, where $\varepsilon_i$ is the set of edges incident to node $i$.

Refining the estimate using a Global State Space (Graph-Seidel)
Repeat: form the linear system $Ax = b$, then compute the new state using Graph-Seidel.

Results: Manhattan world
On the Manhattan world dataset, our approach is faster and more powerful than TORO, and faster than g2o with comparable accuracy.

Performance comparison
[Plots: residual vs. time on the Manhattan world dataset; time vs. size of graph on the corrupted square dataset]
Our approach combines the minimization capability of g2o (with faster convergence) with the simplicity of TORO.

Results: Parking lot
On the parking lot dataset (Blanco et al. AR 2009), our approach is faster and more powerful than TORO, and more powerful than g2o.

Graph-Seidel

Results: Simple square

Rotational version (rPOReSS)
The majority of drift is caused by rotational errors, so first remove the rotational drift:
  treat the (x, y) components of the poses as constants
  greatly reduces the complexity of the Jacobian derivation

Results of rPOReSS: Intel dataset

Incremental version (irPOReSS)
Rotational POReSS can be run incrementally, keeping the graph "close" to optimized.
Run Graph-Seidel whenever a fully optimized graph is desired.
(Note: further iterations of POReSS will undo the fine adjustments made by Graph-Seidel.)

Video

Conclusion
Introduced a new two-phase method for graph optimization for loop closure:
  POReSS: relative state space + non-stochastic gradient descent → fast but rough convergence
  Graph-Seidel: does not linearize about the current estimate; instead assumes orientation constant → very fast iterations, refines the estimate
No external linear algebra package needed: ~100 lines of C++ code.
Natural extensions: rotational (rPOReSS), incremental (irPOReSS)
Future work: landmarks, 3D

Thanks!

Video