MAPVI: Meeting Accessibility for Persons with Visual Impairments

Presentation transcript:

MAPVI: Meeting Accessibility for Persons with Visual Impairments
Sebastian Günther, Reinhard Koutny, Naina Dhingra, Markus Funk, Christian Hirt, Klaus Miesenberger, Max Mühlhäuser, Andreas Kunz
02.07.2019, PETRA 2019

Table of Contents
- Introduction
- Related Work
- Our Approach
- Summary & Outlook
- Q & A

Introduction
- Distributed workspaces (artifact spaces)
- Logical context between workspaces → 3D information space
- Explicit information (artifacts) is supplemented by non-verbal communication (NVC)
- Chosen application: Metaplan brainstorming method

Introduction
«Over there … is the inaccessible information!»
- Setting the stage: blind & sighted users in a collaborative brainstorming meeting
- Explicit information accessible using e.g. screen readers
- Audio channel "occupied" by ongoing discussion
- A lot of information is not made explicit, but transferred via NVC elements

Introduction
- NVCs occur in the task space
- 136 NVCs are listed in [1]
- 55% of communication is non-verbal [2]

[1] Brannigan, C.R.; Humphries, D.A.: «Human Non-Verbal Behavior, A Means of Communication»; in Blurton-Jones (Ed.): Ethological Studies of Child Behavior; Cambridge University Press; 1972
[2] Mehrabian, A.: «Nonverbal Communication»; 1972

Related Work
- Tracking pointing gestures above a tabletop using Kinect [1]
- Tracking pointing gestures using Leap Motion [2]
- Postures at a table [3]

[1] Kunz, A.; Alavi, A.; Sinn, P.: «Integrating Pointing Gesture Detection for Enhancing Brainstorming Meetings using Kinect and PixelSense»; 8th International Conference on Digital Enterprise Technology; 205-212; 2014
[2] Schnelle-Walka, D.; Alavi, A.; Ostie, P.; Mühlhäuser, M.; Kunz, A.: «A Mind Map for Brainstorming Sessions with Blind and Sighted Persons»; 14th International Conference on Computers Helping People with Special Needs; 214-219; 2014
[3] Kunz, A.; Schnelle-Walka, D.; Alavi, A.; Pölzer, S.; Mühlhäuser, M.; Miesenberger, K.: «Making Tabletop Interaction Accessible for Blind Users»; Interactive Tabletops and Surfaces; 327-332; 2014

Our Approach
NVCs to be captured:
- Gaze direction
- Deictic gestures
- Postures (nodding, shrugging)
Tracking methods (see the optical-flow sketch after this list):
- SLAM (Simultaneous Localization and Mapping)
- Structured light, time-of-flight (e.g. Kinect)
- Optical flow (e.g. RGB camera)
- Bluetooth, WiFi, etc.
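The slides only name optical flow as one of several tracking options; as a minimal sketch, a dense optical-flow motion cue could be computed from an RGB camera with OpenCV as below. The camera index and the magnitude threshold are illustrative assumptions, not values from the talk.

```python
# Minimal sketch: dense optical flow on an RGB stream as a motion cue
# for gesture tracking. Camera index and threshold are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # assumed: default webcam
ok, frame = cap.read()
if not ok:
    raise RuntimeError("no camera frame available")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farnebäck dense optical flow between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Crude motion cue: mean flow magnitude above a threshold
    if magnitude.mean() > 1.0:  # threshold is an illustrative assumption
        print("motion detected - candidate gesture frame")
    prev_gray = gray

cap.release()
```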

Our Approach
No NVC should be missed:
- Multiple sensors
- Various sensor types, including audio
False alerts to blind & visually impaired users should be avoided (see the fusion sketch after this list):
- Logical context between measured signals
- Temporal context between measured signals
- Classification of behavior
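The slides don't specify a fusion algorithm. As a hypothetical sketch of the logical and temporal context idea, the snippet below only raises an alert when at least two sensors report the same NVC within a short time window; all names, thresholds, and the window size are assumptions.

```python
# Hypothetical sketch: suppress false alerts by requiring that at least
# two sensors report the same NVC event within a short time window.
from collections import defaultdict

WINDOW_S = 0.5        # assumed temporal context window (seconds)
MIN_SENSORS = 2       # assumed corroboration requirement
MIN_CONFIDENCE = 0.6  # assumed per-detection confidence threshold

def fuse(detections):
    """detections: list of (timestamp, sensor_id, nvc_label, confidence)."""
    events = defaultdict(list)
    for ts, sensor, label, conf in detections:
        if conf >= MIN_CONFIDENCE:
            events[label].append((ts, sensor))
    alerts = []
    for label, hits in events.items():
        hits.sort()
        for ts, _ in hits:
            # sensors that corroborate this hit within the time window
            nearby = {s for t, s in hits if abs(t - ts) <= WINDOW_S}
            if len(nearby) >= MIN_SENSORS:
                alerts.append((ts, label))
                break  # one alert per label is enough here
    return alerts

print(fuse([(0.10, "kinect", "pointing", 0.9),
            (0.32, "rgb_cam", "pointing", 0.7),
            (5.00, "rgb_cam", "nodding", 0.8)]))  # nodding uncorroborated
```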

Our Approach
- Eye tracker for gaze detection: SMI remote eye tracking device, will be used as reference
- RGB camera for gaze detection → OpenGaze [1]

[1] Zhang, X.; Sugano, Y.; Bulling, A.: «Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications»; Proceedings ACM SIGCHI Conference on Human Factors in Computing Systems (CHI); 2019

Our Approach
Accuracy comparison conditions (see the angular-error sketch after this list):
- Eye tracker at ideal distance of 0.7 m, accuracy measured on screen
- Eye tracker at ideal distance of 0.7 m, accuracy measured on whiteboard
- Camera at ideal distance of 0.7 m, accuracy measured on whiteboard
- Camera on whiteboard, accuracy measured on whiteboard
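The slides don't show how accuracy is computed; a common metric for such comparisons is the angular error between the estimated and the true gaze point as seen from the eye. A minimal sketch, with all coordinates as illustrative assumptions:

```python
# Hypothetical sketch: angular gaze error between an estimated and a
# ground-truth target point on a plane (screen or whiteboard), as seen
# from the eye. All coordinates are illustrative, in metres.
import numpy as np

def angular_error_deg(eye, estimated, target):
    """Angle between the rays eye->estimated and eye->target, in degrees."""
    v1 = np.asarray(estimated, float) - np.asarray(eye, float)
    v2 = np.asarray(target, float) - np.asarray(eye, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

eye = [0.0, 0.0, 0.0]         # eye at the origin
target = [0.05, 0.0, 0.7]     # true gaze point at the ideal 0.7 m distance
estimate = [0.07, 0.01, 0.7]  # gaze point reported by the tracker
print(f"{angular_error_deg(eye, estimate, target):.2f} deg")
```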

Our Approach
- RGB camera for pose detection → OpenPose [1] (see the nodding sketch below)

[1] Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.-E.; Sheikh, Y.: «OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields»; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2018
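OpenPose outputs per-frame 2D keypoints (the nose is keypoint 0 in its BODY_25 and COCO layouts); the talk doesn't show how postures such as nodding are classified from them. One hypothetical approach is to count vertical direction reversals of the nose keypoint over a short window; all thresholds below are assumptions.

```python
# Hypothetical sketch: classify nodding from a time series of OpenPose
# nose keypoints (index 0 in the BODY_25/COCO layouts) by counting
# vertical direction reversals. Thresholds are illustrative assumptions.
import numpy as np

def is_nodding(nose_y, min_reversals=2, min_amplitude_px=5.0):
    """nose_y: vertical nose coordinates (pixels) over ~1-2 s of frames."""
    y = np.asarray(nose_y, float)
    if np.ptp(y) < min_amplitude_px:    # too little movement overall
        return False
    d = np.diff(y)
    d = d[np.abs(d) > 0.5]              # ignore sub-pixel jitter
    reversals = np.sum(np.signbit(d[1:]) != np.signbit(d[:-1]))
    return reversals >= min_reversals   # down-up-down... oscillation

print(is_nodding([100, 104, 108, 104, 99, 103, 108]))  # True: oscillation
print(is_nodding([100, 100.4, 100.1, 99.8, 100.2]))    # False: jitter only
```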

Future Work
- Development of an accessible Metaplan tool
- Deep learning approaches for gesture classification
- Developing suitable output devices for spatial awareness of NVCs
- User studies on cognitive maps of BVIP (blind & visually impaired persons) and their relation to real spaces

Q & A
Thank you for your attention
guenther@tk.tu-darmstadt.de
reinhard.koutny@jku.at