1
Sensor Fusion Localization and Navigation for Visually Impaired People
G. Galioto, I. Tinnirello, D. Croce, L. Giarré, F. Inderst, F. Pascucci
2
Outline
- Arianna navigation system
  - Problem setting
  - Constraints
  - Set up
- Arianna 2.0 navigation and localization system
  - Activity recognition / step detection
  - Position
  - Heading: quaternion, camera
  - Sensor fusion
- Results: indoor / outdoor
3
Key ideas
- Smartphone as enabling technology
- Camera as the user's eyes
- Tactile interface (vibration)
- Predefined path (painted lines)
4
Problem setting - Target
Tracking the pose of a visually impaired user to support navigation in unknown planar environments, expressed in a Cartesian reference frame (the Navigation frame):
- Position (x, y)
- Heading (θ)
5
Constraints
- A handheld device
- Sensory system inside the device (accelerometers, gyroscopes, camera)
- Human activities (moving and standing still)
- Visual landmarks and a map are available
- Easy to deploy and maintain
- Online computation
- Low power consumption
6
Set up
Visual landmarks: painted lines or colored tape deployed on the floor. The smartphone camera detects the lines on the floor and gives the user continuous feedback on the direction of the path, combined with the IMU (Body frame).
7
Arianna 2.0 system architecture (block diagram)
Inputs: accelerometer, gyroscope and camera.
- ACTIVITY RECOGNITION: if σ_{a,z}^k < α the user is standing still, otherwise moving.
- POSITION (human behavior model): [x_k, y_k] = [x_{k-1}, y_{k-1}] + l_k [sin θ_k, cos θ_k], with step length l_k = β (a_{max,z}^k − a_{min,z}^k)^{1/4} while moving and l_k = 0 when standing still.
- ATTITUDE (quaternion EKF): prediction q_{k|k-1} = e^{Ω_k Δt} q_{k-1|k-1}, P_k = Φ_k P_{k-1} Φ_k^T + Q_k; correction S_k = H_k P_{k-1} H_k^T + R_k, K_k = P_k H_k^T S_k^{-1}, q_{k|k} = q_{k|k-1} + K_k (z_{a,k} − h), P_k = (I − K_k H_k) P_k.
- CAMERA HEADING: Gaussian-filter smoothing, Canny edge detection, Hough line/slope transform, yielding γ_{C,k} and the offset d_off.
- SENSOR FUSION: θ_k = γ_k + Δ_k/(Δ_k + τ) (γ_{C,k} − γ_k), driven by the camera innovation γ_{C,k} − γ_k.
8
Activity recognition
HUMAN BEHAVIOR MODEL
Requirements:
- Activity recognition: standing still vs. walking
- Step detection: number of steps
Decision rule on the vertical accelerometer deviation σ_{a,z}^k: if σ_{a,z}^k < α the user is standing still, otherwise moving.
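The decision rule above can be sketched as follows; this is a minimal illustration, and the threshold values α and the step-peak threshold are placeholders, not the ones tuned for Arianna:

```python
import numpy as np

def is_moving(acc_z, alpha=0.3):
    """Classify a window of vertical accelerometer samples (gravity removed)
    as 'moving' when its standard deviation exceeds the threshold alpha."""
    return float(np.std(acc_z)) > alpha

def count_steps(acc_z, threshold=1.0):
    """Count steps as upward threshold crossings: a sample that exceeds
    `threshold` while its predecessor did not."""
    above = acc_z > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

# Synthetic walking signal: two gait cycles of vertical acceleration.
acc = 2.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
print(is_moving(acc))              # True: large variance
print(is_moving(np.zeros(100)))   # False: standing still
print(count_steps(acc))           # 2: one peak per cycle
```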
9
Position
HUMAN BEHAVIOR MODEL
Position of the handheld device; the heading θ_k is a parameter and the human model is considered:
[x_k, y_k] = [x_{k-1}, y_{k-1}] + l_k [sin θ_k, cos θ_k]
with step length l_k = β (a_{max,z}^k − a_{min,z}^k)^{1/4} while moving and l_k = 0 when standing still.
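The dead-reckoning update above can be written as a short sketch; β = 0.41 is only an illustrative Weinberg coefficient, not the calibrated value:

```python
import numpy as np

def step_length(acc_z_window, beta=0.41):
    """Weinberg step-length estimate: l_k = beta * (a_max - a_min)^(1/4)."""
    return beta * (np.max(acc_z_window) - np.min(acc_z_window)) ** 0.25

def update_position(x, y, l_k, theta_k):
    """Advance the position by one step of length l_k along heading
    theta_k (radians), matching [x,y] += l_k [sin(theta), cos(theta)]."""
    return x + l_k * np.sin(theta_k), y + l_k * np.cos(theta_k)

# One step of length 1 m with heading 0 moves straight along y.
print(update_position(0.0, 0.0, 1.0, 0.0))  # (0.0, 1.0)
```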
10
Heading - Quaternions
EKF to compute the attitude of the smartphone and the accuracy of the estimation.
Prediction: q_{k|k-1} = e^{Ω_k Δt} q_{k-1|k-1}, P_k = Φ_k P_{k-1} Φ_k^T + Q_k
Correction: S_k = H_k P_{k-1} H_k^T + R_k, K_k = P_k H_k^T S_k^{-1}, q_{k|k} = q_{k|k-1} + K_k (z_{a,k} − h), P_k = (I − K_k H_k) P_k
F. De Cillis, et al., Hybrid Indoor Positioning System for First Responders, IEEE Trans. on Systems, Man, and Cybernetics: Systems.
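The prediction/correction steps above can be sketched as a generic quaternion-state EKF skeleton; the matrices Φ, Q, H, R here are placeholders for the gyro transition and accelerometer measurement models, not the ones derived on the slide:

```python
import numpy as np

def ekf_predict(q, P, Phi, Q):
    """Time update: propagate the quaternion with the discretised gyro
    transition Phi (standing in for e^{Omega*dt}) and inflate covariance."""
    q = Phi @ q
    q = q / np.linalg.norm(q)        # re-normalise to keep a unit quaternion
    P = Phi @ P @ Phi.T + Q
    return q, P

def ekf_correct(q, P, z, h, H, R):
    """Measurement update with innovation z - h (e.g. measured vs. predicted
    gravity direction from the accelerometer)."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    q = q + K @ (z - h)
    q = q / np.linalg.norm(q)
    P = (np.eye(len(q)) - K @ H) @ P
    return q, P
```

With an identity transition the prediction only inflates the covariance, and a zero innovation leaves the quaternion unchanged while shrinking the covariance, which is the qualitative behaviour the slide's equations describe.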
11
Heading - Camera
Feature extraction pipeline:
- Smoothing -> Gaussian filter
- Edge detection -> Canny scheme
- Line/slope detection -> Hough transform
The camera measurement yields the heading γ_{C,k} and the offset d_off with respect to the painted line.
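The last stage of the pipeline can be illustrated with a minimal Hough transform written directly in NumPy (standing in for a library Canny + Hough implementation such as OpenCV's); given a binary edge map, it votes each edge pixel into (θ, ρ) bins and returns the normal angle of the strongest line:

```python
import numpy as np

def dominant_line_angle(edges, n_theta=180):
    """Minimal Hough transform: return the normal angle theta (radians, in
    [0, pi)) of the strongest straight line in a binary edge image."""
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.hypot(*edges.shape)) + 1          # max possible |rho|
    acc = np.zeros((n_theta, 2 * diag), dtype=np.int32)
    for t, th in enumerate(thetas):
        # Line model: rho = x*cos(theta) + y*sin(theta); shift rho >= 0.
        rho = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc[t], rho, 1)                   # unbuffered vote count
    return thetas[np.unravel_index(np.argmax(acc), acc.shape)[0]]

# A vertical painted line (x = 10) has normal angle 0.
edges = np.zeros((50, 50), dtype=bool)
edges[:, 10] = True
print(dominant_line_angle(edges))  # 0.0
```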
12
Sensor fusion
KF update, performed when images are available (synchronization between IMU and camera streams):
θ_k = γ_k + Δ_k/(Δ_k + τ) (γ_{C,k} − γ_k)
where γ_{C,k} − γ_k is the innovation provided by the camera.
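The fused heading update can be sketched as a one-line complementary blend; the weight form Δ/(Δ + τ) follows the reconstructed equation above, and τ = 1.0 is only an illustrative tuning constant:

```python
def fuse_heading(gamma_k, gamma_c, delta_k, tau=1.0):
    """Blend the gyro heading gamma_k with the camera heading gamma_c.
    The weight delta_k / (delta_k + tau) scales the camera innovation
    (gamma_c - gamma_k): the larger delta_k, the more the camera is trusted."""
    w = delta_k / (delta_k + tau)
    return gamma_k + w * (gamma_c - gamma_k)

# Equal trust (delta = tau) lands halfway between gyro and camera headings.
print(fuse_heading(0.0, 1.0, 1.0))  # 0.5
# With delta = 0 the camera is ignored and the gyro heading is kept.
print(fuse_heading(0.2, 1.0, 0.0))  # 0.2
```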
13
Experimental set up
Smartphone: Samsung Galaxy S6 (SM-G920F) running Android 6.0.1
- IMU: MPU6500 by InvenSense, 100 Hz
- Camera: IMX240 by Sony, 20 Hz
Ground truth: OptiTrack, 10 infrared cameras, 4 markers on the smartphone
14
Results - Square Test
Target: to evaluate the accuracy when a closed loop is considered.
Path: square path in an indoor environment, executed 5 times without stops. Length: 130 m.
PDR estimate - green line; Tracking system - red line; Ground truth - blue line.

algorithm | avg err | min err | max err | cov err
PDR       | 0.66    | 0.15    | 1.77    | 0.22
TS        | 0.34    |         | 0.61    | 0.02
15
Results - Outdoor path
Target: to evaluate the accuracy in a real scenario (Favara Cultural Park, Agrigento).
Path: open-loop path in an urban canyon with sharp turns. Length: 76 m.
PDR estimate - green line; Tracking system - red line; Ground truth - blue line.

algorithm | final err | % final
PDR       | 3.10      | 4%
TS        | 0.41      | < 1%
16
Conclusion - Future Developments
ARIANNA 2.0: an innovative smartphone-centric tracking system
- Indoor/outdoor environments
- PDR + computer vision
What else:
- Human activities
- Handheld device
- Human in the loop
- Indoor localization without infrastructure
- Augmented reality: Arianna 4.0
17
Many thanks for sharing your thoughts
Keep the gradient "to the TOP"