Velocity Estimation using the Kinect Sensor
Everett Bryan, Bryce Pincock
ECEn 670 Mini-Conference, 29 Nov. 2011

Outline
- Problem Statement
- Noise Characterization
- Velocity Estimation
- Results
- Conclusion

Problem Statement
- The Microsoft Kinect is a new RGB-D sensor and an inexpensive alternative to stereo cameras, but no noise models are available for it
- Robots operating in GPS-denied environments require external proximity sensors to estimate their states for safe, reliable operation
- Goal: characterize the noise in the Kinect depth sensor and apply the model to a linear velocity estimator

Noise Characterization
- Data Collection
- Error Analysis
- Noise Analysis
- Verification of Noise Models

Noise Characterization: Data Collection
- Mount the Kinect parallel to a flat wall
- Capture a depth map at 1 cm increments of distance

Noise Characterization: Error Analysis
- The true depth map is known (a flat wall at a measured distance)
- Subtract the captured depth map from the truth to obtain the per-pixel error
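The error computation described above can be sketched as follows; `depth_error` and its summary statistics are illustrative names, not code from the project:

```python
import numpy as np

def depth_error(measured, true_distance):
    """Per-pixel error of a captured depth map against a flat-wall truth.

    `measured` is an H x W depth map in meters; because the Kinect is held
    parallel to a flat wall, the true depth map is simply a constant plane
    at `true_distance`.
    """
    truth = np.full_like(measured, true_distance, dtype=float)
    error = measured - truth             # signed per-pixel error
    return error.mean(), error.std()     # bias and spread of the error
```

The mean of the error captures its deterministic component and the standard deviation its random component, matching the split used in the noise analysis.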

Noise Characterization: Noise Analysis
- Deterministic noise
- Random noise

Noise Characterization: Noise Analysis (Deterministic Noise)
- Error vs. distance
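A minimal sketch of modeling the deterministic error as a function of distance. The sample data and the quadratic form are hypothetical stand-ins for the measured error-vs.-distance curve, not the project's actual numbers (Kinect depth error is commonly reported to grow roughly quadratically with distance):

```python
import numpy as np

# Hypothetical data: mean depth error (m) at several wall distances (m).
distances  = np.array([0.8, 1.2, 1.6, 2.0, 2.4, 2.8])
mean_error = np.array([0.002, 0.004, 0.008, 0.013, 0.019, 0.027])

# Least-squares fit of a quadratic model: error(d) ~ a*d^2 + b*d + c.
a, b, c = np.polyfit(distances, mean_error, deg=2)

def deterministic_bias(d):
    """Predicted deterministic error at distance d, to subtract from readings."""
    return a * d**2 + b * d + c
```

Once fitted, the deterministic component can be subtracted from raw measurements, leaving only the random component to characterize.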

Noise Characterization: Noise Analysis (Random Noise)

Noise Characterization: Verification of Noise Models
- Compare a real Kinect measurement against a measurement simulated from the noise models
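Verification along these lines can be sketched by generating a simulated measurement from the fitted models and comparing its statistics to a real capture. The quadratic coefficients below are illustrative placeholders, not the values identified in the project:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_kinect_depth(true_distance, shape=(480, 640),
                          bias_fn=lambda d: 0.0035 * d**2,
                          sigma_fn=lambda d: 0.0015 * d**2):
    """Simulate a Kinect depth map of a flat wall from noise models.

    bias_fn and sigma_fn stand in for the deterministic and random noise
    models identified from the error analysis: every pixel is the true
    depth plus a distance-dependent bias plus zero-mean Gaussian noise.
    """
    truth = np.full(shape, true_distance)
    noise = rng.normal(0.0, sigma_fn(true_distance), shape)
    return truth + bias_fn(true_distance) + noise
```

If the models are accurate, the simulated map's mean and spread should match those of a real capture at the same distance.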

Velocity Estimation
- Improve velocity estimates using a Minimum Mean Squared Error (MMSE) linear estimator

Velocity Estimation
- Track features in successive frames
- Simplified to tracking the n x n center pixels of the depth map, which avoids complex feature extraction and data correspondence
- Take the average value of the n x n pixels as the range measurement r_t
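The center-patch range extraction and a linear estimator of this kind can be sketched as below. This is a generic scalar LMMSE (Wiener) update under assumed prior and measurement-noise variances, not the exact estimator derived in the project:

```python
import numpy as np

def center_patch_range(depth_map, n=8):
    """Average the n x n center pixels of a depth map (meters) as r_t."""
    h, w = depth_map.shape
    patch = depth_map[h//2 - n//2 : h//2 + n//2,
                      w//2 - n//2 : w//2 + n//2]
    return float(np.mean(patch))

def lmmse_velocity(r_prev, r_curr, dt, sigma_v2, sigma_n2, v_prior=0.0):
    """Scalar LMMSE estimate of velocity from two noisy range readings.

    Raw velocity: v_raw = (r_curr - r_prev) / dt. Differencing two
    independent range errors (variance sigma_n2 each) gives a
    measurement-noise variance of 2*sigma_n2/dt^2 on v_raw, so the
    LMMSE estimate shrinks v_raw toward the prior mean v_prior.
    """
    v_raw = (r_curr - r_prev) / dt
    noise_var = 2.0 * sigma_n2 / dt**2
    gain = sigma_v2 / (sigma_v2 + noise_var)   # Wiener/LMMSE gain
    return v_prior + gain * (v_raw - v_prior)
```

With zero range-noise variance the gain is 1 and the estimate reduces to the raw finite difference; as the noise variance grows, the estimate is pulled toward the prior.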

Results


Conclusion
- Successfully characterized the noise within the Kinect depth sensor
- Successfully applied an MMSE linear estimator to estimate velocity

Thank you