Team Members Ming-Chun Chang Lungisa Matshoba Steven Preston Supervisors Dr James Gain Dr Patrick Marais.
PROJECT OBJECTIVES Create an application in which 3D objects can be manipulated using hand gestures. The interface must be simple and intuitive to use. Translation, rotation, and selection of objects must be possible.

The 3D objects to be manipulated will be molecules. Hand gestures will be captured using a web camera. A set of hand gestures will be specified, where each gesture maps to a specific task.

The project is decomposed into three phases: 2D Image Processing (M. Chang), Data Analysis (S. Preston), and Front-end Visualisation (L. Matshoba).

System Architecture

AIMS – Feature extraction from a sequence of hand images. – Elimination of noise. – Adequate real-time performance.

INPUT – A Logitech webcam is used as the capture device. – A sequence of hand images is captured by the webcam. – The webcam is capable of capturing 30 frames per second.

IMPLEMENTATION – Segmentation of the image to isolate the hand. – Image smoothing and filtering to eliminate noise. – Thresholding of the image to isolate the desired features.
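The smoothing and thresholding steps above can be sketched with basic array operations (a minimal illustration on a toy grayscale frame; the slides do not specify the actual algorithms or libraries, so the box filter and fixed threshold here are assumptions):

```python
import numpy as np

def smooth(frame, k=3):
    """Box-filter smoothing: mean of each k x k neighbourhood, to suppress pixel noise."""
    h, w = frame.shape
    padded = np.pad(frame, k // 2, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def segment_hand(frame, threshold=128):
    """Threshold the smoothed frame to isolate the bright (hand) region as a binary mask."""
    return smooth(frame) > threshold

# Toy 'frame': a bright 4x4 square (the hand) on a dark background.
frame = np.zeros((8, 8))
frame[2:6, 2:6] = 255
mask = segment_hand(frame)
print(int(mask.sum()))  # 12 -- smoothing erodes the four corner pixels below the threshold
```

A real implementation would also need skin-colour or background segmentation before thresholding, which this sketch omits.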

OUTPUT – Sets of features extracted from the sequence of hand images. – A basic representation of the hand structure.

CHALLENGES – Efficient algorithm implementation capable of real-time processing. – Removal of background noise. – Precise and accurate identification of hand features.

SUCCESS FACTORS – Processing of 24 frames of hand images per second. – 95% accuracy of feature extraction. – Elimination of noise.

AIMS – To analyse the data provided by the image processing phase. – To determine what hand gesture the user has performed. Two training methods will be used: – Neural Network – Principal Component Analysis (PCA)

This is a pattern classification problem – a common application of training techniques such as neural networks and PCA. Many similar examples suggest it is feasible, e.g. face recognition using neural networks.
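As a minimal illustration of the PCA route, gestures can be classified by projecting feature vectors onto the principal components and picking the nearest class centroid (the synthetic data, gesture names, and nearest-centroid rule are assumptions for the example; the slides do not fix the feature representation or classifier):

```python
import numpy as np

def fit_pca(X, n_components=2):
    """Fit PCA via SVD: return the data mean and the top principal axes."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(X, mean, axes):
    """Project samples onto the principal axes."""
    return (X - mean) @ axes.T

def classify(sample, centroids):
    """Assign the gesture label whose class centroid is nearest in PCA space."""
    labels = list(centroids)
    dists = [np.linalg.norm(sample - centroids[g]) for g in labels]
    return labels[int(np.argmin(dists))]

# Toy training set: two synthetic 'gesture' classes in a 6-D feature space.
rng = np.random.default_rng(0)
X_open = rng.normal(0.0, 0.1, (20, 6))
X_fist = rng.normal(1.0, 0.1, (20, 6))
X = np.vstack([X_open, X_fist])

mean, axes = fit_pca(X)
centroids = {
    "open": project(X_open, mean, axes).mean(axis=0),
    "fist": project(X_fist, mean, axes).mean(axis=0),
}
new = project(rng.normal(1.0, 0.1, (1, 6)), mean, axes)[0]
print(classify(new, centroids))  # fist
```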

The input is the data extracted from the 2D images. The Logitech webcam captures at most 30 frames per second; each input will consist of a representation of 24 frames.
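One natural way to turn 24 frames of extracted features into a single classifier input is to concatenate the per-frame feature vectors (a sketch; the number of features per frame is an assumed value, as the slides do not specify it):

```python
import numpy as np

FRAMES = 24     # frames per gesture, as stated in the slides
FEATURES = 5    # features per frame -- an assumed value for illustration

def gesture_input(frames):
    """Concatenate per-frame feature vectors into one fixed-length input vector."""
    assert len(frames) == FRAMES
    return np.concatenate(frames)

frames = [np.zeros(FEATURES) for _ in range(FRAMES)]
print(gesture_input(frames).shape)  # (120,)
```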

Both PCA and the neural network will require a training data set; hundreds of inputs will be needed. This is unlikely to pose a problem, as data collection incurs no expense and requires few resources.

Output will be provided to the front-end visualisation phase. The output is simple: one variable indicating the gesture that has been performed, and possibly a speed variable as well.

PROBLEM The input data is captured from a single still camera – thus the input data is in 2D form, but the user performs gestures in the 3D world. Tilting and rotating the hand could make it difficult to detect the correct gesture.

SOLUTION An appropriate set of gestures and a well-designed neural network are needed. OTHER PROBLEMS? Speed and efficiency are not a concern.

Whether the neural network implementation recognises at least 95% of hand gestures correctly. Whether the PCA implementation recognises at least 95% of hand gestures correctly. Whether the recognised hand gestures agree with at least 95% of those recognised by the Polhemus tracker.
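The 95% agreement criterion can be checked with a simple per-gesture comparison against the tracker's labels (a sketch; representing the Polhemus tracker output as a list of gesture labels is an assumption):

```python
def agreement(predicted, reference):
    """Fraction of gestures on which the classifier and the reference tracker agree."""
    assert len(predicted) == len(reference)
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

# Hypothetical classifier output vs. tracker ground truth.
predicted = ["rotate", "translate", "select", "rotate"]
reference = ["rotate", "translate", "select", "select"]
print(agreement(predicted, reference))  # 0.75 -- below the 95% target
```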

AIMS - To produce a usable application for the gesture recognition interface. - To create a testing system comparing the 2D and 3D gesture-driven interfaces. - To test the usability of the gesture system for a real-world application.

■ The front-end visualisation will deal with two main kinds of input: ■ input from the 2D hand gestures as extracted by the Data Analysis phase; ■ input from the 3D hand gestures, assumed to be more accurate. ■ A metric will be generated to measure the gesture recognition capability of the 2D hand gesture extraction.

■ Visual feedback from the system. ■ An accuracy metric measuring the difference between 2D and 3D gesture recognition.

■ An interface for viewing 3D molecule structures. ■ Different molecule level-of-detail views offered – selected regions shown in more detail. ■ Rotation of the visible section. ■ Seamless changes between ‘Ribbon’ and ‘Ball & Stick’ representations.
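Rotating the visible section reduces to applying a rotation matrix to the atom coordinates before rendering (a minimal sketch of rotation about the z-axis; the rendering itself and the molecule data format are outside the scope of this example):

```python
import numpy as np

def rotate_z(points, angle):
    """Rotate an (N, 3) array of atom positions about the z-axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

# Two toy atom positions; a 90-degree turn maps the x-axis onto the y-axis.
atoms = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.5]])
rotated = rotate_z(atoms, np.pi / 2)
print(np.round(rotated, 6))
```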

Whether the system can run in real time. The accuracy of the data extracted from the 2D images. Whether 95% of hand gestures are recognised correctly. Whether the motion capture and learning technique implementations agree on 95% of gestures.

This is not an entirely new concept. At the very least, we will build an application in which basic transformations can be performed, and compare the effectiveness of the learning-techniques approach against the motion-capture approach.