Sensor Fusion Localization and Navigation for Visually Impaired People
G. Galioto, I. Tinnirello, D. Croce, L. Giarré, F. Inderst, F. Pascucci
Outline
- Arianna navigation system: problem setting, constraints, set up
- Arianna 2.0 navigation and localization system: activity recognition/step detection, position, heading (quaternion, camera), sensor fusion
- Results: indoor/outdoor
Key ideas
- Smartphone as enabling technology
- Camera as the user's eye
- Tactile interface (vibration)
- Predefined path (painted lines)
Problem setting – Target
Tracking the pose of a visually impaired user to support navigation in unknown planar environments, expressed in a Cartesian reference frame (i.e., the Navigation frame):
- Position $(x, y)$
- Heading $\phi$
Constraints
- A handheld device
- Sensory system inside the device (accelerometers, gyroscopes, camera)
- Human activities: moving and standing still
- Visual landmarks and a map are available
- Easy to deploy and maintain
- Online computation
- Low power consumption
Set up
- Visual landmarks: painted lines or colored tapes deployed on the floor
- The smartphone camera detects the lines on the floor and provides continuous feedback to the user on the direction of the path, using the IMU (Body frame)
Arianna 2.0 – System architecture
Inputs: accelerometer samples $a_k$, gyro rates $\omega_k$, camera images $\mathcal{I}_i$. A priori knowledge: the human behavior model parameters $\alpha$ and $\beta$, the map heading $\phi_{map}$.
- ACTIVITY RECOGNITION (human behavior model): standing still when the signal power satisfies $P < \alpha$, moving otherwise; each detected step provides the vertical acceleration extrema $a_{i,z}^{M}$ and $a_{i,z}^{m}$.
- ATTITUDE (quaternions): EKF with prediction $q_{k|k-1} = e^{\Omega_k \Delta t_k}\, q_{k-1|k-1}$, $P_k = \Phi_k \Gamma_{k-1} \Phi_k^{T} + Q_k$ and correction $S_k = H_k P_k H_k^{T} + V_k$, $K_k = P_k H_k^{T} S_k^{-1}$, $q_{k|k} = q_{k|k-1} + K_k \big( a_k - R(q_{k|k-1})\, g \big)$, $\Gamma_k = (I - K_k H_k) P_k$, yielding the inertial heading $\gamma_k$ with covariance $\Gamma_k$.
- CAMERA HEADING: smoothing (Gaussian filter), edge detection (Canny scheme), line/slope detection (Hough transform) on $\mathcal{I}_i$, yielding the camera heading $\gamma_{C,i}$ referred to $\phi_{map}$.
- SENSOR FUSION: $\phi_i = \gamma_i + \frac{\Gamma_k}{\Gamma_k + R} \left( \gamma_{C,i} - \gamma_i \right)$.
- POSITION: $\begin{bmatrix} x_i \\ y_i \end{bmatrix} = \begin{bmatrix} x_{i-1} \\ y_{i-1} \end{bmatrix} + l_i \begin{bmatrix} \sin\phi_i \\ \cos\phi_i \end{bmatrix}$, with $l_i = \beta \sqrt[4]{a_{i,z}^{M} - a_{i,z}^{m}}$ while moving and $l_i = 0$ when standing still.
Activity recognition (human behavior model)
Requirements:
- Activity recognition: distinguish standing still from walking, via the threshold $\alpha$ on the signal power ($P < \alpha$: standing still; otherwise: moving)
- Step detection: count the steps and extract, for each step $i$, the vertical acceleration extrema $a_{i,z}^{M}$ and $a_{i,z}^{m}$ (see the sketch below)
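A minimal sketch of this stage in Python, assuming a sliding window of vertical accelerations, the window variance as the signal power $P$, and an illustrative value for $\alpha$ (the real threshold is a calibration parameter):

```python
# Sketch of the activity-recognition / step-detection stage.
# Assumptions (not from the slides): fixed-size windows, variance as the
# signal power P, and a placeholder value for the threshold alpha.
import numpy as np

ALPHA = 0.5  # hypothetical power threshold [(m/s^2)^2]

def classify_window(a_z: np.ndarray) -> str:
    """Label a window of vertical accelerations: 'ss' (standing still) or 'm' (moving)."""
    p = np.var(a_z)              # signal power P of the window
    return "ss" if p < ALPHA else "m"

def step_extrema(a_z: np.ndarray) -> tuple[float, float]:
    """Return (a_z^M, a_z^m): max and min vertical acceleration within a step."""
    return float(a_z.max()), float(a_z.min())

# Usage: one window of samples per candidate step.
window = np.array([9.6, 9.8, 11.2, 12.0, 10.1, 8.3, 7.9, 9.5])
if classify_window(window) == "m":
    a_max, a_min = step_extrema(window)
```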
Position (human behavior model)
- Position of the handheld device
- The heading $\phi_i$ is a parameter
- The human model is considered:
$$\begin{bmatrix} x_i \\ y_i \end{bmatrix} = \begin{bmatrix} x_{i-1} \\ y_{i-1} \end{bmatrix} + l_i \begin{bmatrix} \sin\phi_i \\ \cos\phi_i \end{bmatrix}, \qquad l_i = \begin{cases} \beta \sqrt[4]{a_{i,z}^{M} - a_{i,z}^{m}} & \text{moving} \\ 0 & \text{standing still} \end{cases}$$
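A sketch of this position update with the step-length model above; the value of the calibration gain $\beta$ is a placeholder, not the one used in the experiments:

```python
# Sketch of the PDR position update. BETA is a user-specific calibration
# gain; the value below is illustrative only.
import numpy as np

BETA = 0.45  # hypothetical calibration constant

def step_length(a_max: float, a_min: float, moving: bool) -> float:
    """l_i = beta * (a_z^M - a_z^m)^(1/4) while moving, 0 when standing still."""
    return BETA * (a_max - a_min) ** 0.25 if moving else 0.0

def update_position(x: float, y: float, phi: float, l: float) -> tuple[float, float]:
    """[x_i, y_i] = [x_{i-1}, y_{i-1}] + l_i [sin(phi_i), cos(phi_i)]."""
    return x + l * np.sin(phi), y + l * np.cos(phi)

# Usage: one update per detected step.
x, y = update_position(0.0, 0.0, np.deg2rad(30.0), step_length(12.0, 7.9, True))
```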
Heading – Quaternions
EKF to compute:
- Attitude of the smartphone
- Accuracy of the estimation
Prediction:
$$q_{k|k-1} = e^{\Omega_k \Delta t_k}\, q_{k-1|k-1}, \qquad P_k = \Phi_k \Gamma_{k-1} \Phi_k^{T} + Q_k$$
Correction:
$$S_k = H_k P_k H_k^{T} + V_k, \quad K_k = P_k H_k^{T} S_k^{-1}, \quad q_{k|k} = q_{k|k-1} + K_k \big( a_k - R(q_{k|k-1})\, g \big), \quad \Gamma_k = (I - K_k H_k) P_k$$
Inputs: gyro/magnetometer measurements $\omega_k$/$m_k$ and accelerations $a_k$; output: heading $\gamma_k$ with covariance $\Gamma_k$.
Reference: F. De Cillis et al., "Hybrid Indoor Positioning System for First Responders," IEEE Trans. on Systems, Man, and Cybernetics: Systems.
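A sketch of the prediction step under the usual quaternion-kinematics assumptions: $\Omega_k$ is the 4x4 rate matrix built from the gyro sample, and the noise covariances below are illustrative, not taken from the paper:

```python
# Sketch of the EKF prediction: q_{k|k-1} = exp(Omega_k dt_k) q_{k-1|k-1},
# P_k = Phi_k Gamma_{k-1} Phi_k^T + Q_k. Omega is the standard quaternion
# rate matrix; Q and the initial covariance are placeholder values.
import numpy as np
from scipy.linalg import expm

def omega_matrix(w: np.ndarray) -> np.ndarray:
    """4x4 quaternion rate matrix for body rates w = [wx, wy, wz]."""
    wx, wy, wz = w
    return 0.5 * np.array([
        [0.0, -wx, -wy, -wz],
        [ wx, 0.0,  wz, -wy],
        [ wy, -wz, 0.0,  wx],
        [ wz,  wy, -wx, 0.0],
    ])

def predict(q: np.ndarray, gamma: np.ndarray, w: np.ndarray,
            dt: float, Q: np.ndarray):
    """Propagate the quaternion and its covariance over one gyro sample."""
    Phi = expm(omega_matrix(w) * dt)     # state transition over dt
    q_pred = Phi @ q
    q_pred /= np.linalg.norm(q_pred)     # keep the quaternion unit-norm
    P = Phi @ gamma @ Phi.T + Q
    return q_pred, P

# Usage: identity attitude, small yaw rate, 100 Hz sampling.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1, P1 = predict(q0, np.eye(4) * 1e-3,
                 np.array([0.0, 0.0, 0.1]), 0.01, np.eye(4) * 1e-6)
```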
Heading – Camera
Feature extraction on the image $\mathcal{I}_i$:
- Smoothing -> Gaussian filter
- Edge detection -> Canny scheme
- Line/slope detection -> Hough transform
The slope of the detected line provides the camera heading $\gamma_{C,i}$, referred to the map direction $\phi_{map}$ (see the sketch below).
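A sketch of this pipeline with OpenCV, using `cv2.GaussianBlur`, `cv2.Canny`, and `cv2.HoughLinesP`; all thresholds, and the choice of the longest segment as the painted line, are illustrative assumptions:

```python
# Sketch of the camera-heading pipeline: Gaussian smoothing, Canny edge
# detection, Hough line detection, dominant line slope as relative heading.
# Parameter values are placeholders, not the ones used in the paper.
import cv2
import numpy as np

def camera_heading(frame_bgr: np.ndarray) -> float | None:
    """Return the dominant line angle [rad] in the image, or None if no line."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    smooth = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.5)   # smoothing
    edges = cv2.Canny(smooth, 50, 150)                    # edge detection
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    # Assumption: the longest segment corresponds to the painted line.
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    # Angle w.r.t. the image "up" axis (image y grows downward).
    return float(np.arctan2(x2 - x1, y1 - y2))
```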
Sensor fusion
KF update of the heading, performed when images are available (synchronization between IMU and camera data is required):
$$\phi_i = \gamma_i + \frac{\Gamma_k}{\Gamma_k + R} \left( \gamma_{C,i} - \gamma_i \right)$$
The inertial heading $\gamma_i$ (covariance $\Gamma_k$) is corrected by the camera heading $\gamma_{C,i}$.
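A sketch of this fusion step as a scalar Kalman update; the camera noise variance `R_CAM` and the angle wrapping are assumptions added on top of the formula above:

```python
# Sketch of the heading fusion: scalar Kalman update applied whenever a
# camera estimate gamma_C arrives. R_CAM is a placeholder noise variance.
import numpy as np

R_CAM = 0.05  # hypothetical camera heading noise variance [rad^2]

def fuse_heading(gamma: float, Gamma: float, gamma_c: float) -> tuple[float, float]:
    """phi = gamma + Gamma / (Gamma + R) * (gamma_c - gamma)."""
    k = Gamma / (Gamma + R_CAM)                       # Kalman gain
    innovation = np.arctan2(np.sin(gamma_c - gamma),  # wrap angle difference
                            np.cos(gamma_c - gamma))
    return gamma + k * innovation, (1.0 - k) * Gamma  # fused heading, variance

# Usage: inertial heading 28 deg, camera heading 33 deg.
phi, Gamma_new = fuse_heading(np.deg2rad(28.0), 0.1, np.deg2rad(33.0))
```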
Experimental set up
Smartphone:
- Samsung Galaxy S6 (SM-G920F), running Android 6.0.1
- IMU: InvenSense MPU-6500, sampled at 100 Hz
- Camera: Sony IMX240, 20 Hz
Ground truth:
- OptiTrack motion capture system with 10 infrared cameras
- 4 markers on the smartphone
Results – Square Test
Target: to evaluate the accuracy when a closed loop is considered
Path: square path in an indoor environment, executed 5 times without stops
Length: 130 m
Plot legend: PDR estimate - green line; Tracking system - red line; Ground truth - blue line

algorithm | avg err [m] | min err [m] | max err [m] | err cov
PDR       | 0.66        | 0.15        | 1.77        | 0.22
TS        | 0.34        | –           | 0.61        | 0.02
Results – Outdoor path
Target: to evaluate the accuracy in a real scenario (Favara Cultural Park, Agrigento)
Path: open-loop path in an urban canyon with sharp turns
Length: 76 m
Plot legend: PDR estimate - green line; Tracking system - red line; Ground truth - blue line

algorithm | final err [m] | final err (% of path length)
PDR       | 3.10          | 4%
TS        | 0.41          | < 1%
Conclusion – Future Developments
ARIANNA 2.0: an innovative smartphone-centric tracking system
- Indoor/outdoor environments
- PDR + computer vision
What else:
- Human activities
- Handheld device
- Human in the loop
- Indoor localization without infrastructure
- Augmented reality: Arianna 4.0
Many thanks for sharing your thoughts
Keep the gradient "to the TOP"
federica.pascucci@uniroma3.it