MULTI-LINGUAL AND DEVICELESS COMPUTER ACCESS FOR DISABLED USERS C. Premnath and J. Ravikumar, S.S.N. College of Engineering, Tamil Nadu.

Abstract The hand is one of the most effective interaction tools for HCI. Currently, the only technology that satisfies the advanced requirements is glove-based sensing. We present a new tool for gesture analysis, based on a simple gesture-to-symbol/character mapping especially suited to disabled users.

AIM: To use the hand gestures of the user to help them control and access the computer easily. OBJECTIVES: To provide a simple and cheap system of communication for people with single or multiple disabilities, and to overcome language problems in communication and computer access.

INTRODUCTION Physical difficulties and impairments reduce computer use. Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI).

Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. It hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures.

Glove-Based Approach The basic operation is to sense the gesture by electric/magnetic contact or by monitoring threshold values in chemical/electrode-based sensors. The sensors are connected to a control unit that identifies the gesture. Drawbacks: cost, and reduced dexterity and flexibility.

Computer vision has the potential to provide much more natural, non-contact solutions.

Gesture recognition methods Model-based approach Appearance-based approach

Model-Based Approach Model-based approaches estimate the position of a hand by projecting a 3-D hand model into image space and comparing it with image features. The steps involved are: extracting a set of features from the input images; projecting the model onto the scene (or back-projecting the image features into 3-D); and establishing a correspondence between groups of model and image features.

Appearance-Based Approach Appearance-based approaches estimate hand postures directly from the images after learning the mapping from image feature space to hand configuration space. The feature vectors obtained are compared against the user's stored templates to identify the captured hand posture.
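The template comparison above can be sketched as nearest-neighbour matching in feature space. This is a minimal illustration, not the authors' implementation: the feature vectors, template names, and Euclidean distance metric are all assumptions for the example.

```python
import math

def nearest_template(feature_vec, templates):
    """Return the label of the stored template closest (by Euclidean
    distance) to the extracted feature vector."""
    best_label, best_dist = None, math.inf
    for label, tmpl in templates.items():
        dist = math.dist(feature_vec, tmpl)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical 3-element feature vectors for two stored hand postures.
templates = {"open_palm": [1.0, 1.0, 1.0], "fist": [0.1, 0.1, 0.1]}
print(nearest_template([0.9, 1.1, 1.0], templates))  # open_palm
```

In practice the feature space would be higher-dimensional and the templates captured per user during calibration.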

Appearance-based methods require considerable research on mapping and other related work, but they allow us to create simple and cost-effective systems.

Systems providing computer access for people with disabilities JavaSpeak: parses a program and "speaks" the program's structure to a blind user. ViaVoice, which has a published API, is used as the speech reader.

Emacspeak: provides functions geared directly to programming, but is suitable only for someone familiar with a UNIX environment.

METHODOLOGY A novel approach that maps the character set of the language to the possible set of hand gestures and executes the action mapped to each particular gesture: capture the user's gesture, manipulate it to create a 5-bit code, and execute the required system operation. User-friendliness is provided through audio prompts.

The phases involved: IMAGE CAPTURING → PRE-PROCESSING → EDGE DETECTION → EDGE TRACKING → CODE GENERATION → ACTION EXECUTION.
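The six phases chain into a single per-frame pipeline. The sketch below uses stub functions (placeholders invented for illustration, not the real image operations) just to show the data flow from capture to action:

```python
# Stub phases -- placeholders for the real image-processing steps.
def capture(): return "frame"
def preprocess(frame): return frame              # channel arithmetic
def detect_edges(image): return image            # gradient magnitude
def track_edges(edges): return ["tip", "valley"] # critical points
def generate_code(points): return 0b10110        # 5-bit gesture code
def execute_action(code): return f"action-{code}"

def process_frame(frame):
    """Run one captured frame through the six phases in order."""
    synthetic = preprocess(frame)
    edges = detect_edges(synthetic)
    points = track_edges(edges)
    code = generate_code(points)
    return execute_action(code)

print(process_frame(capture()))  # action-22
```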

Image Capturing Setup and capture

Pre-processing A synthetic image is produced by performing an arithmetic operation across the different colour channels.
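The slide does not specify which arithmetic operation is used on the channels; a common crude choice for emphasising skin regions is red minus green, clamped to the valid range. The sketch below assumes that operation purely for illustration:

```python
def synthetic_image(red, green):
    """Per-pixel channel arithmetic: red minus green, clamped to
    [0, 255].  The operation itself is an assumption; the original
    slides only say 'an arithmetic operation' on the channels."""
    return [[max(0, r - g) for r, g in zip(rrow, grow)]
            for rrow, grow in zip(red, green)]

red   = [[200, 40], [180, 30]]   # red channel of a 2x2 image
green = [[120, 50], [100, 20]]   # green channel of the same image
print(synthetic_image(red, green))  # [[80, 0], [80, 10]]
```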

a) Sample input image b) Synthetic image

Edge Detection Need for edge detection. Edges. Edge detection methods: Fourier domain, spatial domain.

GradientMagnitude operation A spatial-domain method that performs convolution operations on the source image using a pair of kernels.
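A typical GradientMagnitude operator convolves the image with a horizontal and a vertical kernel and combines the two responses. The sketch below assumes the Sobel kernel pair (the slides do not name the kernels) and evaluates one interior pixel:

```python
import math

# Sobel kernel pair -- an assumed choice; the slides only say
# "convolution operations using kernels".
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient

def gradient_magnitude(img, y, x):
    """Convolve both kernels at interior pixel (y, x) and combine
    the responses as sqrt(gx**2 + gy**2)."""
    gx = gy = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            p = img[y + dy][x + dx]
            gx += KX[dy + 1][dx + 1] * p
            gy += KY[dy + 1][dx + 1] * p
    return math.hypot(gx, gy)

# A vertical step edge: strong horizontal gradient at the centre pixel.
img = [[0, 0, 255], [0, 0, 255], [0, 0, 255]]
print(gradient_magnitude(img, 1, 1))  # 1020.0
```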

Sample Output a) Synthetic image b) Edge detection output

Edge Tracking Find critical points.

Let us see in detail how we trace the fingertip shown below. In-depth fingertip image

Tracing the finger valley shown below is done in the exact reverse manner as discussed for the fingertip. In-depth finger valley image

Output after edge tracking a) Critical points marked with red dots. b) Finger length computed using the Pythagorean theorem.
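Given a fingertip and its adjacent finger-valley critical points, the finger length is just the Euclidean distance between them (the Pythagorean theorem on the coordinate differences). The coordinates below are made up for the example:

```python
import math

def finger_length(tip, valley):
    """Euclidean distance between the fingertip and finger-valley
    critical points, i.e. sqrt(dx**2 + dy**2)."""
    return math.hypot(tip[0] - valley[0], tip[1] - valley[1])

# Hypothetical pixel coordinates for one finger.
print(finger_length((3, 4), (0, 0)))  # 5.0
```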

CODE GENERATION Using phalanx information

Information about the phalanxes of the right hand

Values to be assigned: 1 if the finger is open; 0 if the finger is half-closed, i.e., only the proximal phalanx is visible. The system already holds the full finger-length data for the user. During code generation, 1 is assigned when the measured length approximately matches the stored value, and 0 when the obtained finger length is half that of the corresponding entry in the database.

5 fingers, 2 values each: 32 (2×2×2×2×2) action gestures overall.
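Classifying each finger and packing the five bits into one of the 32 codes can be sketched as follows. The tolerance value and the sample lengths are assumptions for illustration; the slides only say "approximately matches" and "half that of the stored value":

```python
def finger_bits(measured, stored, tol=0.15):
    """Classify each finger: 1 if the measured length approximately
    matches the stored full length (within `tol`), else 0
    (half-closed).  `tol` is an assumed tolerance."""
    return [1 if abs(m - s) / s <= tol else 0
            for m, s in zip(measured, stored)]

def gesture_code(bits):
    """Pack five finger bits into one of the 32 (2**5) gesture codes."""
    code = 0
    for b in bits:
        code = (code << 1) | b
    return code

stored   = [60, 80, 90, 85, 65]   # full finger lengths (pixels), per user
measured = [59, 41, 92, 84, 33]   # second and fifth finger half-closed
bits = finger_bits(measured, stored)
print(bits, gesture_code(bits))   # [1, 0, 1, 1, 0] 22
```

The resulting integer (0–31) then indexes the gesture-to-character/action table.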

Multilingualism Map the gestures, currently associated only with English characters, to the characters in other languages by analyzing the phonetics and their translation to English.

For example, the corresponding Tamil, Hindi, Telugu, and Malayalam characters can all be mapped to the English letter 'A'.

For European languages the alphabet is largely similar, so Latin-script support in the voice engine is important. Non-Latin languages without spaces between words (Hindi and Arabic) are supported by tailoring the run-time speech engine, FreeTTS.

ACTION EXECUTION The tree panel acquires the path information of a file/folder whenever that file/folder is selected by the user's input. The filename is passed to the speech-synthesizer unit and verified by the user.

The JMF player controls the browsing. For example, if the character 'A' is passed to the file manager, it passes the next file/folder name starting with the letter 'A' to the JMF player. File operations depend on the type of the file selected (media/text) and the user's input gesture: the file is passed to the JMF player unit and the appropriate operations are executed.
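The "next file starting with the letter" lookup can be sketched as below. The function name, cycling behaviour, and file names are all illustrative assumptions; the real system hands the result to the JMF player and speech synthesizer:

```python
def next_file_for_letter(files, letter, after=None):
    """Return the next file whose name starts with `letter`, cycling
    past the currently selected file (`after`) so repeated gestures
    step through all matches."""
    matches = [f for f in sorted(files)
               if f.lower().startswith(letter.lower())]
    if not matches:
        return None
    if after in matches:
        return matches[(matches.index(after) + 1) % len(matches)]
    return matches[0]

files = ["Album.mp3", "Agenda.txt", "Budget.txt", "Anthem.mp3"]
print(next_file_for_letter(files, "A"))                      # Agenda.txt
print(next_file_for_letter(files, "A", after="Agenda.txt"))  # Album.mp3
```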

Features Minimized cost and user-friendliness. Flexibility to change the gesture mapping based on the user's comfort. Ambidexterity.

Limitations Gesture mapping is difficult for languages with large character sets, such as Chinese and Japanese. Voice support depends on the speech engine.

Conclusion A novel approach for providing multilingual computer access to disabled users. It overcomes problems of the user's age and physical condition, and supports illiterate users.

FUTURE WORK Use both hands as input, aided with touch-pad technology, for computer access (2^10 = 1024 values, taking 2 values for each of the ten fingers). Suppose a ten-bit code is associated with the word "Pause": the system would type the word "Pause" if the environment is a text editor, and PAUSE the current music track if the environment is a music player.
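The environment-dependent interpretation of the same code word can be sketched as a small dispatcher. The environment names and return strings are illustrative assumptions:

```python
def dispatch(code_word, environment):
    """Interpret the same ten-bit code word differently depending on
    the active environment, as proposed for the two-hand extension."""
    if environment == "text_editor":
        return f"type '{code_word}'"        # insert the word as text
    if environment == "music_player":
        return f"{code_word.upper()} playback"  # control the player
    return "ignore"                         # no mapping in this context

print(dispatch("Pause", "text_editor"))   # type 'Pause'
print(dispatch("Pause", "music_player"))  # PAUSE playback
```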

Map the gestures to system commands and to other applications currently inaccessible to disabled users.

The project has proposed an ambidextrous system in which computer access is all within your five fingers, and the proposed enhancement has the potential to bring the world into your hands.