A Method for Hand Gesture Recognition. Jaya Shukla, Department of Computer Science, Shiv Nadar University, Gautam Budh Nagar, India 203207. Ashutosh Dwivedi, Department of Electrical Engineering, Shiv Nadar University, Gautam Budh Nagar, India. Fourth International Conference on Communication Systems and Network Technologies.

Outline: Introduction; The Kinect device; Hand segmentation; Object shape detection and feature extraction; Experimental results; Conclusions.

Introduction. Gestures can be defined as physical actions made by humans that convey meaningful information for interacting with the environment. Gestures provide a non-haptic interface between our physical world and the cyber-physical world. They are expressive and can be made with movements of the fingers, hands, arms, head, face, or body. A gesture recognition system therefore provides a more natural way to interact.

Introduction. Based on the body part involved, gestures can be categorized as: 1) hand and arm gestures: recognition of hand poses, sign languages, and entertainment applications (e.g., playing games); 2) head and face gestures: a) nodding or shaking the head, b) raising the eyebrows, c) looks of surprise, fear, or anger;

Introduction. 3) body gestures: a) tracking the movement of people, b) analyzing the movements of a dancer, c) recognizing human gaits for medical rehabilitation and athletic training. Among these gestures used in verbal/nonverbal, non-haptic human interaction, hand gestures are the most expressive and the most frequently used.

Introduction. The first attempts to solve the gesture recognition problem in HCI used glove-based devices, but a glove-based interface requires the user to wear a cumbersome device. Vision-based techniques can remove this restriction on interaction; however, they face the problems of background subtraction, occlusion, lighting changes, rapid motion, and other skin-colored objects in the scene.

Introduction. These problems can be addressed with the help of a depth camera. In 2010, Microsoft launched a 3D depth-sensing camera known as the Kinect. In 2011, Ila et al. proposed an algorithm for a hand gesture recognition system, but their method requires the user to wear red gloves [5]. In 2011, Meenakshi et al. converted RGB information into the YCbCr color space and segmented the hand based on YCbCr [6]. In 2010, Abhishek et al. used the HSV color space, keeping pixel values between two thresholds, hsv_max and hsv_min, for skin segmentation [7].

Introduction. In this work, we captured images of different hand gestures showing one, two, three, four, and five fingers with the Microsoft Kinect. Using a depth-thresholding algorithm, we removed the background of the images, leaving only the hand images with the different gestures.


The Kinect device. The Kinect consists of an infrared (IR) projector, two cameras (RGB and IR), and a multi-array microphone in a small base with a motorized pivot. We use the open-source OpenNI (Open Natural Interaction) library, which produces 640 × 480 RGB and depth images at 30 fps.

The Kinect device. Kinect depth information can be used to aid foreground/background segmentation, human face tracking, human pose tracking, skeleton tracking, etc.


Hand segmentation. The proposed algorithm makes the following assumptions: 1) The human hand is the closest object in the depth image. 2) The hand is positioned in front of the body. 3) The distance of the hand from the camera is within a predefined range. 4) There is no occlusion between the hand and the camera.

Hand segmentation. To visualize the data as a visible image, the depth image is converted into a grayscale image as follows: g(x, y) = 255 if T1 ≤ d(x, y) ≤ T2, and g(x, y) = 0 otherwise, where d(x, y) is the depth value and T1 and T2 are thresholds defined by the user. In this paper, we used T1 = 20 and T2 = 40.
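As an illustrative sketch (not the authors' code), the depth-thresholding step can be written as follows, assuming the depth map is a NumPy array expressed in the same units as T1 and T2:

```python
import numpy as np

def threshold_depth(depth, t1=20, t2=40):
    """Map depth pixels inside [t1, t2] to white (255), everything else to black.

    Pixels with depth 0 (unmeasured by the Kinect) fall outside the range
    and are therefore also mapped to black.
    """
    depth = np.asarray(depth)
    mask = (depth >= t1) & (depth <= t2)
    return np.where(mask, 255, 0).astype(np.uint8)

# Toy example: a one-dimensional "scanline" of depth values.
scan = np.array([5, 25, 35, 60, 0])
print(threshold_depth(scan))  # only the pixels at depth 25 and 35 survive
```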


Preprocessing. In the depth image obtained from the Kinect, there are some points for which the Kinect is not able to measure a depth value; it simply assigns 0 to these points, which we treat as noise. We assume that the scene is continuous, so each pixel is highly correlated with its neighbouring pixels and a missing depth value should match its nearest neighbours. We therefore use a nearest-neighbour interpolation algorithm to fill these pixels, obtaining a depth array with meaningful values at every pixel.

Preprocessing. Then, we apply a median filter with a 5 × 5 window to the depth array to smooth the data.
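The two preprocessing steps can be sketched together as follows. This is an illustrative implementation, not the authors' code: SciPy's Euclidean distance transform supplies, for every pixel, the indices of the nearest valid pixel, which gives the nearest-neighbour fill in one vectorized step.

```python
import numpy as np
from scipy import ndimage

def fill_and_smooth(depth):
    """Fill zero (unmeasured) pixels with the nearest valid depth, then smooth."""
    depth = np.asarray(depth, dtype=float)
    invalid = depth == 0
    # For each pixel, indices (ri, ci) of the nearest valid (non-zero) pixel;
    # valid pixels simply map to themselves.
    _, (ri, ci) = ndimage.distance_transform_edt(invalid, return_indices=True)
    filled = depth[ri, ci]
    # 5x5 median filter to smooth the depth array.
    return ndimage.median_filter(filled, size=5)

a = np.full((8, 8), 42.0)
a[3, 3] = 0.0                 # one missing depth measurement
print(fill_and_smooth(a)[3, 3])  # hole is filled from its neighbours
```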


Contour detection. The first step in contour detection is to produce a binary image showing where the objects of interest could be located. To find the contours, we use an algorithm from the OpenCV library that retrieves the connected components from the binary image and labels them.

Convex hull. The convex hull of a set of points is the smallest convex polygon enclosing those points. The OpenCV library has been used to calculate the convex hull; its implementation is based on the algorithm proposed by Sklansky [16].

Convexity defect. A convexity defect is a region where the object's contour deviates inward from its convex hull; the shapes of many complex objects are well characterized by such defects [16].


Experimental results. We calculate the confusion matrix on a set of 75 images for each gesture showing one, two, three, four, and five fingers. For classifying the gestures we use a naive Bayes classifier, which requires only a small amount of training data to estimate the parameters necessary for classification. For classification we use the machine learning toolkit Weka, an open-source package written in Java.
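Since Weka is a Java toolkit, the sketch below instead hand-rolls a minimal Gaussian naive Bayes in Python to illustrate the classification step. The features used here (defect count, normalized area) and the training values are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means, variances, priors."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        # Small floor on the variance avoids division by zero.
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class's diagonal Gaussian.
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

# Hypothetical training set: (defect count, normalized contour area) per gesture.
X = np.array([[0, 0.30], [1, 0.40], [1, 0.42], [4, 0.70], [4, 0.72], [0, 0.28]])
y = np.array([1, 2, 2, 5, 5, 1])  # finger-count labels
clf = GaussianNB().fit(X, y)
print(clf.predict(np.array([[4, 0.71], [0, 0.29]])))  # -> [5 1]
```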



Conclusions. We obtain binary images after applying depth thresholding; the contour, convex hull, and convexity defects are then obtained using image processing algorithms. There are several potential improvements to this work for the future: 1) the ability to recognize gestures made with two hands; 2) the ability to recognize gestures at different orientations and rotations; 3) the ability to recognize not only static gestures but dynamic gestures as well.