Sensor Fusion Localization and Navigation for Visually Impaired People

Sensor Fusion Localization and Navigation for Visually Impaired People
G. Galioto, I. Tinnirello, D. Croce, L. Giarré, F. Inderst, F. Pascucci

Outline
- Arianna navigation system
  - Problem setting
  - Constraints
  - Set up
- Arianna 2.0 navigation and localization system
  - Activity recognition / step detection
  - Position
  - Heading: quaternions, camera
  - Sensor fusion
- Results: indoor / outdoor

Key ideas
- Smartphone as enabling technology
- Camera as the user's eye
- Tactile interface (vibration)
- Predefined path (painted lines)

Problem setting - Target
Tracking the pose (x, y, φ) of a visually impaired user to support navigation in unknown planar environments.
- Cartesian reference frame (i.e., the Navigation frame)
- Position: (x, y)
- Heading: φ

Constraints
- A handheld device
- Sensory system inside the device (accelerometers, gyroscopes, camera)
- Human activities: moving and standing still
- Visual landmarks and a map are available
- Easy to deploy and maintain
- Online computation
- Low power consumption

Set up
Visual landmarks: painted lines or colored tapes deployed on the floor.
The smartphone camera detects the lines on the floor and, together with the IMU (working in the Body frame), provides continuous feedback to the user on the direction of the path.

Arianna 2.0
[Architecture block diagram: the accelerometer (a_k) and gyroscope (ω_k) feed two chains. (1) A human behavior model: activity recognition (moving m / standing still ss, decision P < α) extracts the vertical-acceleration extrema a_{i,z}^M and a_{i,z}^m, and a position block (step-length parameter β) computes (x_i, y_i). (2) A quaternion attitude EKF (prediction/correction) outputs the inertial heading γ_k with covariance Γ_k. In parallel, the camera image ℐ_i is processed by smoothing (Gaussian filter), edge detection (Canny scheme), and line/slope detection (Hough transform), producing the camera heading γ_{C,i} referenced to the map heading φ_map (a priori knowledge). A sensor fusion block combines γ_k and γ_{C,i} into the heading φ_i that drives the position update. The equations of each block are detailed on the following slides.]

Activity recognition (Human behavior model)
Requirements:
- Activity recognition: standing still (ss) vs. walking/moving (m)
- Step detection: number of steps
Decision rule: if P < α the user is standing still, otherwise moving. For each detected step, the maximum and minimum vertical accelerations a_{i,z}^M and a_{i,z}^m are extracted.
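A minimal sketch of this logic, assuming gravity-aligned vertical acceleration sampled at the 100 Hz IMU rate and using the sample variance as the statistic P; the threshold α, window size, and zero-crossing step counter are illustrative choices, not the authors' exact detector.

```python
import numpy as np

def detect_activity(acc_z, alpha=0.5, window=50):
    """Classify fixed windows of vertical acceleration as moving (m)
    or standing still (ss), and count steps in the moving windows.

    acc_z  : 1-D array of gravity-aligned acceleration samples (m/s^2)
    alpha  : variance threshold (the slide's alpha; value is illustrative)
    window : samples per decision window (0.5 s at 100 Hz)
    """
    labels, steps = [], 0
    for start in range(0, len(acc_z) - window + 1, window):
        w = acc_z[start:start + window]
        if np.var(w) < alpha:                 # P < alpha -> standing still
            labels.append("ss")
        else:                                 # otherwise -> moving
            labels.append("m")
            # crude step counter: one step per pair of zero crossings
            # of the detrended window
            crossings = np.count_nonzero(np.diff(np.sign(w - w.mean())))
            steps += crossings // 2
    return labels, steps
```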

Position (Human behavior model)
Position of the handheld device; the heading is a parameter and the human step model is used:

  (x_i, y_i) = (x_{i−1}, y_{i−1}) + l_i · (sin φ_i, cos φ_i)

Step length:
- moving: l_i = β · (a_{i,z}^M − a_{i,z}^m)^{1/4}
- standing still: l_i = 0
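This update is straightforward to code. A minimal sketch: the fourth-root step-length expression is the Weinberg model, where β is user-specific, and the value below is a placeholder rather than the paper's calibration.

```python
import numpy as np

def pdr_update(pos, phi, acc_z_step, moving, beta=0.45):
    """One pedestrian dead-reckoning update, as on the slide.

    pos        : np.array([x, y]), previous position (m)
    phi        : current heading estimate phi_i (rad)
    acc_z_step : vertical acceleration samples covering step i
    moving     : activity label from the previous block
    beta       : Weinberg coefficient; user-specific placeholder
    """
    if not moving:
        return pos                            # l_i = 0 while standing still
    a_max, a_min = acc_z_step.max(), acc_z_step.min()
    l = beta * (a_max - a_min) ** 0.25        # l_i = beta*(aM - am)^(1/4)
    # note the slide's convention: sin on x, cos on y
    return pos + l * np.array([np.sin(phi), np.cos(phi)])
```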

Heading – Quaternions
EKF to compute:
- the attitude of the smartphone
- the accuracy of the estimation
Inputs: gyroscope ω_k (or magnetometer m_k) and accelerometer a_k; outputs: heading γ_k with covariance Γ_k.

Prediction:
  q_{k|k−1} = e^{Ω_k Δt_k} q_{k−1|k−1}
  P_k = Φ_k Γ_{k−1} Φ_k^T + Q_k
Correction:
  S_k = H_k Γ_{k−1} H_k^T + V_k
  K_k = P_k H_k^T S_k^{−1}
  q_{k|k} = q_{k|k−1} + K_k (a_k − R(q_{k|k−1}) g)
  Γ_k = (I − K_k H_k) P_k

Reference: F. De Cillis et al., "Hybrid Indoor Positioning System for First Responders," IEEE Transactions on Systems, Man, and Cybernetics: Systems.
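The predict/correct cycle above can be sketched numerically as follows. This is a minimal illustration, not the authors' implementation: the observation Jacobian H_k is approximated by finite differences instead of being derived analytically, the noise covariances Q and V are placeholders, and the magnetometer path is omitted.

```python
import numpy as np
from scipy.linalg import expm

G = np.array([0.0, 0.0, 9.81])  # gravity in the navigation frame (m/s^2)

def omega_matrix(w):
    """Quaternion-rate matrix Omega(w) for angular rate w = [wx, wy, wz]."""
    wx, wy, wz = w
    return 0.5 * np.array([[0.0, -wx, -wy, -wz],
                           [wx,  0.0,  wz, -wy],
                           [wy, -wz,  0.0,  wx],
                           [wz,  wy, -wx,  0.0]])

def rot(q):
    """Rotation matrix (body -> navigation) for unit quaternion q = [w,x,y,z]."""
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def jac(f, q, eps=1e-6):
    """Finite-difference Jacobian of f at q (stand-in for the analytic H_k)."""
    f0 = f(q)
    J = np.zeros((f0.size, q.size))
    for j in range(q.size):
        dq = q.copy()
        dq[j] += eps
        J[:, j] = (f(dq) - f0) / eps
    return J

def ekf_attitude_step(q, P, w, a, dt, Q, V):
    """One EKF cycle: gyro-driven prediction, accelerometer correction."""
    # prediction: q_{k|k-1} = exp(Omega_k * dt) q_{k-1|k-1}
    Phi = expm(omega_matrix(w) * dt)
    q_pred = Phi @ q
    P_pred = Phi @ P @ Phi.T + Q
    # correction: residual between measured acceleration and gravity
    # rotated into the body frame by the predicted attitude
    h = lambda qq: rot(qq).T @ G
    H = jac(h, q_pred)
    S = H @ P_pred @ H.T + V
    K = P_pred @ H.T @ np.linalg.inv(S)
    q_new = q_pred + K @ (a - h(q_pred))
    q_new /= np.linalg.norm(q_new)        # keep the quaternion unit-norm
    P_new = (np.eye(4) - K @ H) @ P_pred
    return q_new, P_new
```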

Heading – Camera
Feature extraction (input: image ℐ_i and map heading φ_map; output: camera heading γ_{C,i}):
- Smoothing → Gaussian filter
- Edge detection → Canny scheme
- Line/slope detection → Hough transform
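The three stages map directly onto standard OpenCV calls. A minimal sketch: the kernel size and all thresholds are illustrative, not the paper's values, and the returned angle still has to be referenced to φ_map.

```python
import cv2
import numpy as np

def camera_heading(image_bgr):
    """Dominant floor-line orientation from one camera frame I_i.

    Pipeline from the slide: Gaussian smoothing -> Canny edges ->
    Hough line detection. Returns the median line angle (rad),
    or None when no line is found.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)          # smoothing
    edges = cv2.Canny(blurred, 50, 150)                    # edge detection
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)     # line detection
    if lines is None:
        return None
    angles = [theta for rho, theta in lines[:, 0]]
    return float(np.median(angles))
```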

Sensor fusion
KF update:
- performed when images are available
- requires synchronization between the IMU and camera streams

  φ_i = γ_i + Γ_k / (Γ_k + R) · (γ_{C,i} − γ_i)
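Since the update is a scalar Kalman correction, it fits in a few lines. A minimal sketch: the camera measurement variance R is a placeholder, and the innovation is wrapped to [−π, π], which the slide leaves implicit.

```python
import math

def fuse_heading(gamma, Gamma, gamma_cam, R=0.01):
    """Scalar Kalman correction of the inertial heading with the
    camera heading, per the slide's update equation.

    gamma     : inertial heading gamma_i (rad), with variance Gamma
    gamma_cam : camera heading gamma_C,i (rad)
    R         : camera measurement noise variance (placeholder)
    """
    # wrap the innovation to [-pi, pi] before applying the gain
    innov = math.atan2(math.sin(gamma_cam - gamma),
                       math.cos(gamma_cam - gamma))
    gain = Gamma / (Gamma + R)                # Kalman gain
    phi = gamma + gain * innov                # fused heading phi_i
    Gamma_new = (1.0 - gain) * Gamma          # reduced uncertainty
    return phi, Gamma_new
```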

Experimental set up
Smartphone: Samsung Galaxy S6 (SM-G920F)
- Running Android 6.0.1
- IMU: InvenSense MPU-6500, 100 Hz
- Camera: Sony IMX240, 20 Hz
Ground truth: OptiTrack
- 10 infrared cameras
- 4 markers on the smartphone

Results – Square test
Target: evaluate the accuracy when a closed loop is considered.
Path: square path in an indoor environment, executed 5 times without stops.
Length: 130 m.
Legend: PDR estimate (green line), tracking system (red line), ground truth (blue line).

algorithm | avg err (m) | min err (m) | max err (m) | err cov
PDR       | 0.66        | 0.15        | 1.77        | 0.22
TS        | 0.34        | –           | 0.61        | 0.02

Results – Outdoor path
Target: evaluate the accuracy in a real scenario (Favara Cultural Park, Agrigento).
Path: open-loop path in an urban canyon with sharp turns.
Length: 76 m.
Legend: PDR estimate (green line), tracking system (red line), ground truth (blue line).

algorithm | final err (m) | % of path length
PDR       | 3.10          | 4%
TS        | 0.41          | < 1%

Conclusion – Future developments
ARIANNA 2.0: an innovative smartphone-centric tracking system
- Indoor/outdoor environments
- PDR + computer vision
What else:
- Human activities
- Handheld device
- Human in the loop
- Indoor localization
- Augmented reality: Arianna 4.0 without infrastructure

Many thanks for sharing your thoughts. Keep the gradient "to the TOP"!
federica.pascucci@uniroma3.it