COMP 417 – Jan 12th, 2006
Guest Lecturer: David Meger
Topic: Camera Networks for Robot Localization


Introduction
- Who am I?
- Overview of Camera Networks for Robot Localization: What, Where, Why, How (technical stuff)

Introduction - Hardware

Intro - What
- Previously: Localization is a key task for a robot. It's typically achieved using the robot's sensors and a map.
- Can "the environment" help with this?

Typical Robot Localization

Sensor Networks

Intro - Where
In cases where there is sensing already in the environment, we can invert the direction of sensing. Where is this true?
- Buildings with security systems
- Public transportation areas (metro)
- More and more large cities (scary but true)

Intro – Why
Advantages:
- In many cases sensors already exist
- Many robots operating in the same place can all share the same sensors
- Computation can be done at a powerful central computer, saving robot computation
- Interesting research problem

Intro – How
As the robot appears in images, we can use 3-D vision techniques to determine its position relative to the cameras. What do we need to know about the cameras to make this work?
- Can we assume we know where the cameras are?
- Can we assume we know the camera properties?

Problem
Can we use images from arbitrary cameras placed in unknown positions in the environment to help a robot navigate?

Proposed Method
1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Detection – An algorithm to detect these robots?

Detection (cont'd)
Computer Vision techniques attempt detection of (moving) objects:
- Background subtraction or image differencing
- Image templates
- Color matching
- Feature matching
A robust algorithm for arbitrary robots is likely beyond current methods.
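As a concrete illustration of the first idea above, image differencing, here is a minimal sketch (not the lecture's code), assuming grayscale frames supplied as NumPy uint8 arrays; the threshold value is an arbitrary choice.

```python
import numpy as np

def difference_mask(prev_frame, curr_frame, threshold=25):
    """Flag pixels whose intensity changed by more than `threshold`
    between two grayscale frames of the same shape."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold  # boolean mask of "moving" pixels

def changed_pixel_centroid(mask):
    """Very crude localization: centroid of all changed pixels.
    A real detector would segment connected components first."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # nothing moved
    return float(xs.mean()), float(ys.mean())
```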

Detection – Our Method

ARTag Markers

Proposed Method
1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Position Measurement
Question: Can we determine the 3-D position of an object relative to the camera from examining 2-D images?
Hint: start from the introduction to Computer Vision from last time.

Pinhole Camera Model
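In its standard form (stated here for reference), the pinhole model projects a 3-D point (X, Y, Z) in camera coordinates, with focal length f, to image coordinates:

\[ x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z} \]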

Camera Calibration
- An image depends on BOTH scene geometry and camera properties. For example, zooming in and out and moving the object closer and farther have essentially the same effect.
- Calibration means determining relevant camera properties (e.g. focal length f).

Projective Calibration Equations
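In their standard form, these equations relate a world point to its pixel coordinates (u, v) through an intrinsic matrix A and an extrinsic transformation [R | t], up to a scale factor s:

\[
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
= A \,[\, R \mid t \,]\,
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
A = \begin{pmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
\]

where f_x, f_y are the focal lengths in pixels, (c_x, c_y) is the principal point, and γ is the skew.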

Coordinate Transformation
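In the usual notation, such a coordinate transformation maps a world point X_w into the camera frame as a rotation plus translation, packaged as a homogeneous matrix T:

\[
X_c = R\,X_w + t,
\qquad
T = \begin{pmatrix} R & t \\ 0^\top & 1 \end{pmatrix}
\]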

Calibration Equations
- The matrix AT is 3x4 and fully describes the geometry of image formation.
- Given known object points M and image points m, it is possible to solve for both A and T.
- How many points are needed?
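Since the combined 3x4 matrix has 11 degrees of freedom up to scale and each point correspondence contributes two linear equations, at least six points in general position are needed. A standard Direct Linear Transform sketch in NumPy (illustrative; not necessarily the exact procedure used in this project):

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Direct Linear Transform: estimate the 3x4 matrix P = A T (up to scale)
    from n >= 6 correspondences between 3-D points (n x 3) and pixels (n x 2).
    A full calibration would then factor P into intrinsics A and extrinsics T."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    M = np.asarray(rows, dtype=float)
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1].reshape(3, 4)
```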

Calibration Targets

3-Plane ARTag Target

Position Measurement Conclusion
- With enough image points whose 3-D locations are known, measurement of the coordinate transformation T is possible.
- The process is more complicated than traditional sensing, but luckily we only need to do it once per camera.

Proposed Method
1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Mapping Camera Locations
Given the robot's position, a measurement of the relative position of the camera allows us to place it in our map.
Question: What affects the accuracy of this type of relative measurement?
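The bookkeeping this implies can be sketched with homogeneous transforms (the variable names below are illustrative, not from the lecture):

```python
import numpy as np

# T_map_robot:    robot pose in the map (from odometry / the SLAM estimate)
# T_robot_camera: camera pose relative to the robot, from the calibration step
def place_camera_in_map(T_map_robot, T_robot_camera):
    """Compose two 4x4 homogeneous transforms to get the camera's map pose."""
    return T_map_robot @ T_robot_camera
```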

Proposed Method
1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Robot Motion
A robot moves by using electric motors to turn its wheels. There are numerous strategies for each of the important aspects:
- Physical design
- Control algorithms
- Programming interface
- High-level software architecture

Nomad Scout

Differential Drive Kinematics
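In its standard form, the differential drive model for wheel radius r, wheel separation L, wheel angular velocities ω_l and ω_r, and heading θ is:

\[
v = \frac{r(\omega_r + \omega_l)}{2}, \qquad
\omega = \frac{r(\omega_r - \omega_l)}{L}, \qquad
\dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \omega
\]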

Odometry Position Readings
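A minimal dead-reckoning sketch under the model above (illustrative only; it assumes per-step left/right wheel displacements in meters computed from the encoders):

```python
import math

def integrate_odometry(x, y, theta, d_left, d_right, wheel_base):
    """Update a 2-D pose (x, y, theta) from left/right wheel displacements,
    using the usual mid-heading approximation for a differential drive."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta
```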

Robot Motion - Specifics
- Robot control is accomplished using an in-house application, Robodaemon.
- Allows "point and shoot" motion, not continuous control.
- Graphical and programmatic interfaces to query robot odometry, send motion commands, and collect sensor data.

Proposed Method
1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat
Are we done?

Challenges
- In general, it's impossible to know the robot or camera positions exactly. All measurements have error.
- What should the robot do if the cameras can't see the whole environment?
- I didn't say anything about how the robot should decide where to go next.
- More?

Mapping with Uncertainty
- Given exact knowledge of the robot's position, mapping is possible.
- Given a pre-built map, localization is possible.
- What if neither is present? Is it realistic to assume they will be? If so, when?

Uncertainty in Robot Position
In general, kinematics equations do not exactly predict robot locations. Sources of error:
- Wheel slippage
- Encoder quantization
- Manufacturing artifacts
- Uneven terrain
- Rough/slippery/wet terrain
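One common way to represent these errors (a sketch of a generic probabilistic motion model, not the course's specific one) is to perturb each odometric step with noise that grows with the size of the motion:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noisy_motion(d_trans, d_rot, trans_noise=0.05, rot_noise=0.02):
    """Sample a plausible 'true' motion given the odometric motion.
    Noise standard deviations scale with the magnitude of the step."""
    d_trans_hat = d_trans + rng.normal(0.0, trans_noise * abs(d_trans))
    d_rot_hat = d_rot + rng.normal(0.0, rot_noise * abs(d_rot))
    return d_trans_hat, d_rot_hat
```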

Typical Odometry Error

Simultaneous Localization and Mapping (SLAM)
- When both the robot and map features are uncertain, both must be estimated.
- Progress can be made by viewing measurements as probability densities instead of precise quantities.
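A one-dimensional illustration of "measurements as probability densities": fusing a Gaussian estimate of a position with a Gaussian measurement, which is the basic update inside Kalman-filter-style SLAM (the numbers are made up):

```python
def fuse_gaussians(mean_prior, var_prior, mean_meas, var_meas):
    """Product of two 1-D Gaussians: the scalar Kalman measurement update."""
    k = var_prior / (var_prior + var_meas)          # Kalman gain
    mean_post = mean_prior + k * (mean_meas - mean_prior)
    var_post = (1.0 - k) * var_prior
    return mean_post, var_post

# Uncertain odometry estimate fused with a more precise camera observation:
print(fuse_gaussians(2.0, 0.5, 2.4, 0.1))  # posterior mean is pulled toward 2.4
```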

SLAM Progress

SLAM (cont'd)
- Much of the work in robotics in the last 5-10 years has involved localization and SLAM; results are now very pleasing indoors with good sensing.
- These methods apply to our system.
- More on this later in the course, or after class today if you're interested.

Motion Planning
The mapping framework described is dependent on the robot's motion:
- The robot must pass in front of a camera in order to collect any images
- Numerous points are needed for each camera to perform calibration
- SLAM accuracy is affected by the order of camera visitation

Local and Global Planning
- Local: how should the robot move while in front of one camera, to collect the set of calibration images?
- Global: in which order should the cameras be visited?

Local Planning
Modern calibration algorithms are quite good at estimating from noisy data, but there are some geometric considerations:
- Field of view
- Detection accuracy
- Singularities in calibration equations

Local Planning
We must avoid configurations where all the collected points lie in a linear subspace of R^3. For example, a set of images of a single plane moved only through translation gives all co-planar points.
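A quick numerical check for this kind of degeneracy (a sketch; the tolerance is arbitrary): coplanar or collinear target points span fewer than three dimensions once centered.

```python
import numpy as np

def points_are_degenerate(points_3d, tol=1e-6):
    """True if the 3-D calibration points lie (numerically) in a plane or a
    lower-dimensional subspace, which makes full calibration ill-posed."""
    pts = np.asarray(points_3d, dtype=float)
    centered = pts - pts.mean(axis=0)
    return np.linalg.matrix_rank(centered, tol=tol) < 3
```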

Projective Calibration Equations

Global Planning
- Camera positions are estimated by relative measurements from the robot.
- This information is only as accurate as our knowledge about the robot.
- "Re-localizing" is our only way to reduce error.

Distance / Accuracy Tradeoff
- Returning to well-known cameras helps our position estimates but causes the robot to travel farther than necessary.
- An intelligent strategy is needed to manage this tradeoff.
- Some partial results so far; this is work in progress.

Review
- Using sensors in the environment, we can localize a robot.
- In order to use previously uncalibrated and unmapped cameras, a robot can carry out exploration and SLAM.
- This must only be done once, and then accurate localization is possible.

Future Work
- Better global motion planning strategies
- Integrate other sensing (especially if the cameras have blind spots)
- Lose the targets?
- Other types of ubiquitous sensing (wireless, motion detection, etc.)