Virtual Dart: An Augmented Reality Game on Mobile Device
Supervisor: Professor Michael R. Lyu
Prepared by: Lai Chung Sum, Siu Ho Tung

Presentation transcript:

Virtual Dart: An Augmented Reality Game on Mobile Device
Supervisor: Professor Michael R. Lyu
Prepared by: Lai Chung Sum, Siu Ho Tung

Outline
Background Information
Motivation
Objective
Methods
Results
Future Work
Q & A

What is Augmented Reality (AR)?
A combination of the real world and computer-generated data
Computer graphics are added to live video

Background Information
Most mobile phones are equipped with cameras
Games are written in J2ME and on proprietary development platforms

Background Information
Typical mobile games

Background Information
Mobile games that employ Augmented Reality

Motivation
How can a game "remember" the external environment?
→ Save information about the external environment

Objectives
Demonstrate how a game can "remember" its external environment for Augmented Reality (AR)
Virtual Dart is simply a game that demonstrates the proposed methodology

Problems to be solved…
1. What information should we store?
2. How does the game recognize the information?
3. How does the game perform motion tracking?

Introduction to the Mobile Video Object Tracking Engine (mVOTE)
Converts camera movement into a translational movement and a degree of rotation
Feature Selection (find a feature to track)
Motion Tracking of Translational Movement
Motion Tracking of Rotational Movement

What is a feature?
A section of an image that is easily highlighted for the purpose of detection and tracking
It has high contrast relative to its immediate surroundings
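To make "high contrast" concrete, here is a minimal selection sketch (not mVOTE's actual selector; scoring candidate blocks by intensity variance, and the 25-pixel block size, are our assumptions):

```python
import numpy as np

def select_feature_block(gray, block=25):
    """Return the top-left corner of the block with the largest
    intensity variance, a simple proxy for 'high contrast relative
    to the immediate surroundings'."""
    h, w = gray.shape
    step = block // 2                      # half-block stride
    best_score, best_xy = -1.0, (0, 0)
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            patch = gray[y:y + block, x:x + block].astype(np.float64)
            score = patch.var()            # internal intensity spread
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```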

What does our program need?
Functions needed for our program | Can mVOTE do this?
Feature Selection (What information should we store?) | Yes
Feature Recognition (How does the game recognize the information?) | No
Motion Tracking of Translational Movement (How does the game perform motion tracking?) | Yes

Program Flow – Initial Algorithm

Experiment on Feature Selection
Feature Selection in mVOTE vs. the FAST Corner Detection Algorithm
Testing environments:
1. Normal lighting
2. Insufficient lighting

Normal Lighting Condition – Feature Selection in mVOTE vs. FAST Corner Detector (result screenshots)

Insufficient Lighting Condition – Feature Selection in mVOTE vs. FAST Corner Detector (result screenshots)

Analysis
Normal lighting: both algorithms worked reasonably well
Insufficient lighting: only mVOTE's Feature Selection could produce output
Occasionally, Feature Selection in mVOTE selected flat regions as features
The FAST Corner Detector worked better in terms of accuracy

Experiment of Initial Approach – Selected Features vs. Recognized Features (result screenshots)

Initial Feature Recognition Conclusion
Accuracy? Low!
(Selected Features vs. Recognized Features screenshots)

Analysis
Matching accuracy was very poor
We stored three 25×25-pixel feature blocks
Feature Recognition was run on the three blocks
More than one point produced the same SSD value

Program Flow – Enhanced Feature Recognition Algorithm

Enhanced Feature Recognition – Sets 1 to 6 (sample photo results)

Analysis
10 sets of sample photos in total
3 trials for each set
Each run produced a slightly different result
This may come from small vibrations during image capture, or from small changes in light intensity

Algorithm Comparison
Initial Feature Recognition vs. Enhanced Feature Recognition
Initial approach: 3 feature blocks
New approach: the whole selection area
Reason for the initial approach's low accuracy:
→ The features may not be descriptive enough

Improvement of Feature Selection
Two conditions of a "good" feature:
Descriptive
Large internal intensity difference
→ A corner detector can help us find good features

FAST Corner Detector
Examine a small patch of the image
Consider the Bresenham circle of radius r around the candidate pixel p
If the intensities of n contiguous pixels on the circle are all larger than p, or all smaller than p, by more than the barrier
→ Potential corner

e.g. r = 3, n = 12, barrier = 25
215 – 65 = 150 > 25 = barrier → marked red (brighter than p)
65 – 39 = 26 > 25 = barrier → marked blue (darker than p)

e.g. r = 3, n = 12, barrier = 25 (circle illustration)

FAST Corner Detector
The typical values of r and n are 3 and 12 respectively
The value of the barrier was chosen by experiment
We settled on a barrier of 25
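To make the segment test concrete, here is a minimal sketch (an illustration, not the project's implementation; the function name and layout are ours, with r = 3, n = 12 and barrier = 25 taken from the slides):

```python
import numpy as np

# Offsets of the 16 pixels on a Bresenham circle of radius 3 around p.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(gray, x, y, n=12, barrier=25):
    """Segment test: (x, y) is a potential corner if n contiguous circle
    pixels are all brighter than p + barrier or all darker than p - barrier.
    Caller must keep (x, y) at least 3 pixels from the image border."""
    p = int(gray[y, x])
    ring = [int(gray[y + dy, x + dx]) for dx, dy in CIRCLE]
    ring += ring[:n - 1]                 # let a run wrap around the circle
    run_bright = run_dark = 0
    for v in ring:
        run_bright = run_bright + 1 if v - p > barrier else 0
        run_dark = run_dark + 1 if p - v > barrier else 0
        if run_bright >= n or run_dark >= n:
            return True
    return False
```

Running this test at every interior pixel yields corner maps like those compared on the surrounding slides.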

FAST Corner Detector
Advantage:
Fast
Disadvantages:
Cannot work well in noisy environments
Accuracy depends on the barrier parameter

FAST Corner Detector – barrier = 10 vs. barrier = 40 (result screenshots)

How does Feature Recognition work?
The full screen is used as the search window
The Sum of Squared Differences (SSD) measures block similarity
Still slow at the current stage (about 20–60 seconds)
We tried using a smaller image and scaling up to full screen
But the scaling step was too time-consuming
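A minimal sketch of the SSD search just described (illustrative, not the project's code); the exhaustive full-frame scan is also why recognition takes tens of seconds:

```python
import numpy as np

def ssd_match(gray, template):
    """Slide `template` over the whole frame and return the top-left
    corner of the window with the smallest sum of squared differences.
    Note that several windows can tie on SSD, the ambiguity that hurt
    the initial three-block approach."""
    H, W = gray.shape
    h, w = template.shape
    t = template.astype(np.float64)
    best, best_xy = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            window = gray[y:y + h, x:x + w].astype(np.float64)
            ssd = np.sum((window - t) ** 2)
            if ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy, best
```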

Motion Tracking during the game
Keep track of three feature points
Two features are used to locate the dart board
The last feature point serves as a backup
It is used if either of the other feature points fails
Conditions for a feature point failure:
The feature point is at the edge of the screen
Two feature points are too close together
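The failure conditions can be written as a small validity check (a sketch with assumed names; the margin and minimum-distance thresholds are illustrative, not the project's values):

```python
def feature_failed(pt, other, frame_w, frame_h, margin=12, min_dist=20):
    """Return True if the tracked feature point `pt` should be swapped
    for the backup point: it has drifted to the edge of the screen, or
    it sits too close to the other active point."""
    x, y = pt
    at_edge = (x < margin or y < margin or
               x > frame_w - margin or y > frame_h - margin)
    too_close = (x - other[0]) ** 2 + (y - other[1]) ** 2 < min_dist ** 2
    return at_edge or too_close
```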

Future Work
Allow users to load saved features
Increase the speed of feature recognition
Add a physics calculation engine

Q & A