Sensor Fusion Localization and Navigation for Visually Impaired People


1 Sensor Fusion Localization and Navigation for Visually Impaired People
G. Galioto, I. Tinnirello, D. Croce, L. Giarré, F. Inderst, F. Pascucci

2 Outline
Arianna navigation system
  Problem setting
  Constraints
  Set up
Arianna 2.0 navigation and localization system
  Activity recognition / step detection
  Position
  Heading: quaternions, camera
  Sensor fusion
Results: indoor / outdoor

3 Key ideas
Smartphone as enabling technology
Camera as the user's eye
Tactile interface (vibration)
Predefined path (painted lines)

4 Problem setting – Target
Tracking the pose of a visually impaired user to support navigation in unknown planar environments:
Cartesian reference frame (i.e., the navigation frame)
Position $(x, y)$
Heading $\phi$

5 Constraints
A handheld device
Sensory system inside the device (accelerometers, gyroscopes, camera)
Human activities (moving and standing still)
Visual landmarks and a map are available
Easy to deploy and maintain
Online computation
Low power consumption

6 Set up
Visual landmarks: painted lines or colored tapes deployed on the floor.
The smartphone camera detects the lines on the floor and, using the IMU (body frame), provides continuous feedback to the user on the direction of the path.

7 Arianna 2.0
[Architecture diagram: accelerometer ($a_k$), gyroscope ($\omega_k$), and camera ($\mathcal{I}_i$) feed the pipeline detailed on the following slides: activity recognition with the human behavior model ($P<\alpha$, moving/standing still), position update, quaternion-based attitude EKF ($\gamma_k, \Gamma_k$), camera heading extraction ($\gamma_{C,i}$, $\phi_{map}$), and sensor fusion producing the heading $\phi_i$.]

8 Activity recognition
HUMAN BEHAVIOR MODEL
Requirements: distinguish standing still from walking; count the number of steps.
Step detection: a statistic $P$ of the vertical acceleration is tested against a threshold $\alpha$; $P<\alpha$ means standing still, otherwise moving $(m, ss)$.
Each detected step provides the vertical-acceleration extrema $a_{i,z}^M$ and $a_{i,z}^m$, used for the step length (see the sketch below).
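A minimal Python sketch of this step, assuming the statistic $P$ is the variance of the vertical acceleration over a sliding window (the slides only give the threshold test $P<\alpha$); the constant ALPHA is a hypothetical calibration value:

```python
import numpy as np

ALPHA = 0.05  # hypothetical threshold on P; needs per-user calibration

def classify_window(a_z: np.ndarray) -> str:
    """Label one window of vertical-acceleration samples."""
    P = np.var(a_z)  # assumed form of the activity statistic P
    return "standing still" if P < ALPHA else "moving"

def step_extrema(a_z: np.ndarray):
    """Extrema a_{i,z}^M, a_{i,z}^m of a detected step, fed to the step-length model."""
    return float(a_z.max()), float(a_z.min())
```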

9 Position
HUMAN BEHAVIOR MODEL
Position of the handheld device; the heading is a parameter and the human step model is considered.
$$\begin{bmatrix} x_i \\ y_i \end{bmatrix} = \begin{bmatrix} x_{i-1} \\ y_{i-1} \end{bmatrix} + l_i \begin{bmatrix} \sin\phi_i \\ \cos\phi_i \end{bmatrix}$$
with step length
$$l_i = \beta\,\sqrt[4]{a_{i,z}^M - a_{i,z}^m} \ \text{(moving)}, \qquad l_i = 0 \ \text{(standing still)}$$
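The update is short enough to sketch directly; BETA below is an illustrative value of the calibration gain $\beta$, which in practice is user-specific:

```python
import math

BETA = 0.5  # illustrative value of the step-length gain beta

def position_update(x, y, phi, a_max, a_min, moving):
    """One dead-reckoning step: Weinberg-style step length along heading phi."""
    l = BETA * (a_max - a_min) ** 0.25 if moving else 0.0
    return x + l * math.sin(phi), y + l * math.cos(phi)
```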

10 Heading – Quaternions
EKF to compute the attitude of the smartphone and the accuracy of the estimate, yielding $(\gamma_k, \Gamma_k)$ from the gyroscope/magnetometer ($\omega_k$ / $m_k$) and the accelerometer ($a_k$).
Prediction:
$$q_{k|k-1} = e^{\Omega_k \Delta t_k}\, q_{k-1|k-1}, \qquad P_k = \Phi_k \Gamma_{k-1} \Phi_k^T + Q_k$$
Correction:
$$S_k = H_k \Gamma_{k-1} H_k^T + V_k, \qquad K_k = P_k H_k^T S_k^{-1}$$
$$q_{k|k} = q_{k|k-1} + K_k \big( a_{k,z} - R(q_{k|k-1})\, g \big), \qquad \Gamma_k = (I - K_k H_k) P_k$$
F. De Cillis et al., "Hybrid Indoor Positioning System for First Responders," IEEE Trans. on Systems, Man, and Cybernetics: Systems.
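A hedged sketch of one EKF iteration, assuming the standard quaternion rate matrix $\Omega(\omega)$ and a gravity measurement model; the Jacobian $H_k$ is computed numerically for brevity, and the innovation covariance uses the predicted $P_k$ (the usual EKF form) where the slide writes $\Gamma_{k-1}$:

```python
import numpy as np
from scipy.linalg import expm

G = np.array([0.0, 0.0, 9.81])  # gravity in the navigation frame

def Omega(w):
    """Quaternion rate matrix: q_dot = Omega(w) q (0.5 factor folded in)."""
    wx, wy, wz = w
    return 0.5 * np.array([[0., -wx, -wy, -wz],
                           [wx,  0.,  wz, -wy],
                           [wy, -wz,  0.,  wx],
                           [wz,  wy, -wx,  0.]])

def h(q):
    """Predicted accelerometer reading: gravity rotated into the body frame."""
    w, x, y, z = q / np.linalg.norm(q)
    R_nb = np.array([[1-2*(y*y+z*z), 2*(x*y+w*z),   2*(x*z-w*y)],
                     [2*(x*y-w*z),   1-2*(x*x+z*z), 2*(y*z+w*x)],
                     [2*(x*z+w*y),   2*(y*z-w*x),   1-2*(x*x+y*y)]])
    return R_nb @ G

def jac(f, q, eps=1e-6):
    """Numerical 3x4 Jacobian of f at q (analytic H_k omitted for brevity)."""
    J = np.zeros((3, 4))
    for j in range(4):
        d = np.zeros(4); d[j] = eps
        J[:, j] = (f(q + d) - f(q - d)) / (2 * eps)
    return J

def ekf_step(q, Gam, w, a, dt, Q, V):
    # prediction: q_{k|k-1} = exp(Omega_k * dt) q_{k-1|k-1}
    Phi = expm(Omega(w) * dt)
    q_pred = Phi @ q
    P = Phi @ Gam @ Phi.T + Q
    # correction with the accelerometer (informative while quasi-static)
    H = jac(h, q_pred)
    S = H @ P @ H.T + V
    K = P @ H.T @ np.linalg.inv(S)
    q_new = q_pred + K @ (a - h(q_pred))
    Gam_new = (np.eye(4) - K @ H) @ P
    return q_new / np.linalg.norm(q_new), Gam_new
```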

11 Heading – Camera
Feature extraction on the image $\mathcal{I}_i$, yielding the camera heading $\gamma_{C,i}$ (matched against the map heading $\phi_{map}$):
Smoothing -> Gaussian filter
Edge detection -> Canny scheme
Line/slope detection -> Hough transform
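The same three stages in OpenCV, as a sketch; the kernel size, Canny thresholds, and Hough threshold are illustrative, and the mapping from the Hough angle to $\gamma_{C,i}$ (camera calibration, map alignment) is omitted:

```python
import numpy as np
import cv2

def line_angle(image_bgr):
    """Dominant painted-line angle in the image, or None if no line is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    smooth = cv2.GaussianBlur(gray, (5, 5), 1.5)        # smoothing
    edges = cv2.Canny(smooth, 50, 150)                  # Canny edge detection
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)   # Hough transform
    if lines is None:
        return None
    rho, theta = lines[0][0]  # strongest line, normal form (rho, theta)
    return float(theta)       # angle of the line's normal in the image plane
```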

12 πœ™ 𝑖 Sensor fusion KF update 𝛾 π‘˜ , Ξ“ π‘˜ 𝛾 𝐢,𝑖
Performed when images are available Synchronization 𝛾 π‘˜ , Ξ“ π‘˜ πœ™ 𝑖 = 𝛾 𝑖 + Ξ“ π‘˜ Ξ“ π‘˜ +𝑅 𝛾 𝐢,𝑖 βˆ’ 𝛾 𝑖 πœ™ 𝑖 𝛾 𝐢,𝑖
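Since the heading and its variance are scalars here, the fusion reduces to one line ($R$ is the camera-measurement variance):

```python
def fuse_heading(gamma, Gamma, gamma_C, R):
    """Scalar Kalman update: phi = gamma + Gamma/(Gamma+R) * (gamma_C - gamma)."""
    return gamma + Gamma / (Gamma + R) * (gamma_C - gamma)
```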

13 Experimental set up
Smartphone: Samsung Galaxy S6 (SM-G920F) running Android 6.0.1
IMU: InvenSense MPU-6500, 100 Hz
Camera: Sony IMX240, 20 Hz
Ground truth: OptiTrack, 10 infrared cameras, 4 markers on the smartphone

14 Results – Square test
Target: evaluate the accuracy when a closed loop is considered.
Path: square path in an indoor environment, executed 5 times without stops. Length: 130 m.
PDR estimate - green line; tracking system - red line; ground truth - blue line.

algorithm | avg err (m) | min err (m) | max err (m) | err cov
PDR       | 0.66        | 0.15        | 1.77        | 0.22
TS        | 0.34        |             | 0.61        | 0.02

15 Results – Outdoor path
Target: evaluate the accuracy in a real scenario (Favara Cultural Park, Agrigento).
Path: open-loop path in an urban canyon with sharp turns. Length: 76 m.
PDR estimate - green line; tracking system - red line; ground truth - blue line.

algorithm | final err (m) | % of path length
PDR       | 3.10          | 4%
TS        | 0.41          | < 1%

16 Conclusion – Future Developments
ARIANNA 2.0: an innovative smartphone-centric tracking system
Indoor/outdoor environments
PDR + computer vision
What else:
Human activities
Handheld device
Human in the loop
Indoor localization
Augmented reality: Arianna 4.0, without infrastructure

17 Many thanks for sharing your thoughts
Keep the gradient "to the TOP"

