Imaging System for the Measurement of Head and Body Motion for the RIT-Wearable-Eye-Tracker
Constantin Rothkopf, Advisor: Dr. Jeff Pelz

Introduction

Research in visual perception and attention using eye movements has moved from signal detection paradigms and the assessment of the mechanics and metrics of eye movements to the study of complex behavior in natural tasks. In such tasks, subjects are able to move their head and body and to interact purposefully with a changing environment. Under these circumstances the analysis of the eye movements is more difficult, because the eye tracker does not record the subject's head movements. Recovering the head movements can give additional information about the type of eye movement that was carried out and the overall gaze change in world coordinates, as well as insight into higher-order perceptual strategies.

Specific Aim

The aim of this senior project is to develop a system that makes it possible to recover the head movements of a subject during natural tasks. The proposed solution uses an omnidirectional vision sensor consisting of a small CCD video camera and a hyperbolic mirror. The camera is mounted on an ASL eye tracker and records an image sequence at 60 Hz.

The omnidirectional image sensor

The geometry of the system consisting of the hyperbolic mirror and the CCD camera is shown in the figure below.

[Figure: Geometry of omnidirectional camera]

In a standard parameterization, with the coordinate system centered at one of the focal points of the mirror, mirror semi-axes a and b, and e = \sqrt{a^2 + b^2}, the points on the mirror surface can be expressed as:

    \frac{(z - e)^2}{a^2} - \frac{x^2 + y^2}{b^2} = 1

The intersection between the mirror and a line through the origin in the direction of a vector v is the point X = t v, where t is the positive root of:

    \frac{(t v_z - e)^2}{a^2} - \frac{t^2 (v_x^2 + v_y^2)}{b^2} = 1

With a rotation R between the camera and mirror coordinate systems, a point on the mirror surface can then be calculated from the pixel coordinates in the captured image: for pixel coordinates u = [u_1, u_2, 1]^T, the normalized image coordinates are q = K^{-1} u, the ray direction in mirror coordinates is v = R q, and the mirror point follows from the intersection above (a toy back-projection sketch is given below, after the Rotation estimation panel). The matrix K is the camera calibration matrix, of the standard form

    K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},

which relates points [x, y, z]^T in space to image pixel coordinates u. The Matlab Calibration Toolbox written by J.-Y. Bouguet was used for calibration.

Rotation estimation

Several algorithms for the estimation of rotational motion from omnidirectional image sequences have been developed. Because of the low resolution of a standard video image, a method based on the spherical harmonic decomposition, developed by Makadia and Daniilidis (2003), was implemented. The image sequence captured by the omnidirectional camera is remapped onto a sphere and represented in (θ, φ) space.

[Figure: Remapping of image]

The spherical harmonics

    Y_l^m(\theta, \phi) = \sqrt{\frac{2l+1}{4\pi}\,\frac{(l-m)!}{(l+m)!}}\; P_l^m(\cos\theta)\, e^{i m \phi},

with the P_l^m being associated Legendre functions, are a set of orthonormal basis functions on the sphere.

[Figure: Real part of the spherical harmonics; vertically 0 ≤ l ≤ 4, horizontally −l ≤ m ≤ +l]

The reprojected images are decomposed using the discrete spherical harmonic transform. Under a rotation parameterized with ZYZ Euler angles (α, β, γ), i.e. R = R_z(α) R_y(β) R_z(γ), the coefficients \hat{f}_{lm} of the decomposition can be expressed as

    \hat{g}_{lm} = \sum_{|n| \le l} e^{-i m \alpha}\, P_l^{mn}(\cos\beta)\, e^{-i n \gamma}\, \hat{f}_{ln},

with the P_l^{mn} being generalized associated Legendre functions. The advantage of this parameterization is that the unknown angles appear only in the exponential terms and that the P_l^{mn} can be calculated as sums of binomial coefficients. The resulting system of equations was minimized using C. T. Kelley's Matlab library implementation of Broyden's method.
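The back-projection described in the sensor panel above can be made concrete with a short sketch. This is a minimal illustration under assumed values, not the project's code: the entries of K, the mirror semi-axes a and b, and the rotation R are invented placeholders.

import numpy as np

# Hypothetical calibration matrix K; focal lengths and principal point are
# invented placeholders, not values from a Bouguet-toolbox calibration.
K = np.array([[400.0,   0.0, 320.0],
              [  0.0, 400.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Invented hyperbolic-mirror semi-axes; e locates the focal point at the origin.
a, b = 28.0, 23.0
e = np.sqrt(a**2 + b**2)

# Assumed rotation between camera and mirror coordinate systems (identity here).
R = np.eye(3)

def pixel_to_mirror_point(u1, u2):
    """Back-project pixel (u1, u2) to the point where its viewing ray hits the mirror."""
    q = np.linalg.inv(K) @ np.array([u1, u2, 1.0])   # normalized image coordinates
    v = R @ q                                        # ray direction in mirror coordinates
    # The scale t solves ((t*vz - e)^2)/a^2 - (t^2*(vx^2 + vy^2))/b^2 = 1.
    A = v[2]**2 / a**2 - (v[0]**2 + v[1]**2) / b**2
    B = -2.0 * e * v[2] / a**2
    C = e**2 / a**2 - 1.0
    roots = np.roots([A, B, C])
    t = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return t * v                                     # point on the mirror surface

print(pixel_to_mirror_point(350.0, 260.0))

The quadratic yields two intersections with the two-sheeted hyperboloid; the sketch keeps the smaller positive root, which lies on the mirror sheet facing the camera.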
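The rotation estimation hinges on the shift theorem for spherical-harmonic coefficients. The following toy sketch reduces it to the simplest case, a rotation about the z-axis, where the generalized Legendre terms drop out and only the phase e^{-i m α} remains; the grid sizes, the synthetic band-limited signal, and the probed coefficient (l = 2, m = 1) are arbitrary choices for illustration, not the project's values. Recovering the full three-angle rotation requires minimizing over the P_l^{mn} terms, as described above.

import numpy as np
from scipy.special import sph_harm  # Y_l^m(order m, degree l, azimuth, polar)

# Sphere grid after remapping: theta is the azimuth, phi the polar angle.
n_phi, n_theta = 64, 128
phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi
theta = np.arange(n_theta) * 2.0 * np.pi / n_theta
PHI, THETA = np.meshgrid(phi, theta, indexing="ij")

def coeff(f, l, m):
    """Spherical-harmonic coefficient of f by simple quadrature (sketch accuracy)."""
    Y = sph_harm(m, l, THETA, PHI)
    w = np.sin(PHI) * (np.pi / n_phi) * (2.0 * np.pi / n_theta)  # area weights
    return np.sum(f * np.conj(Y) * w)

# A synthetic band-limited "image" on the sphere, standing in for a remapped frame.
rng = np.random.default_rng(0)
f = sum(rng.normal() * sph_harm(m, l, THETA, PHI).real
        for l in range(1, 5) for m in range(-l, l + 1))

# Rotate about the z-axis by alpha, i.e. shift the azimuth samples.
shift = 6
alpha_true = shift * 2.0 * np.pi / n_theta
g = np.roll(f, shift, axis=1)

# Shift theorem: g_hat_lm = exp(-i*m*alpha) * f_hat_lm, so a single coefficient
# with m != 0 already reveals the rotation angle.
l, m = 2, 1
alpha_est = -np.angle(coeff(g, l, m) / coeff(f, l, m)) / m
print(alpha_true, alpha_est)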
Eye movement classification

A two-state Hidden Markov Model was introduced by D. Salvucci (1999) for the classification of fixations and saccades from eye movement recordings during equation solving. This model was extended by incorporating the estimated head movement as a second observation variable, in order to classify fixations, saccades, smooth pursuits, and VORs. The resulting observation distributions are modeled as bivariate normal distributions. The initial guesses for the parameters of these distributions were obtained from experimental data. The Baum-Welch algorithm is used to estimate the distribution parameters and the transition probabilities from recorded data. The Hidden Markov Model is then used to decode the sequence of eye and head velocities in order to classify the types of eye movements (a toy sketch of this pipeline follows the references below).

[Figure: Schematic of the Hidden Markov Model]

Preliminary results

While the rotational motion estimates using synthetic images were accurate to less than one degree, comparable accuracy could not be reached with the image sequence from the omnidirectional camera. A Fastrack system was used to obtain ground truth for the measurements. Further work should investigate methods for improving the rotational motion estimation; an increase in the spatial resolution of the camera is expected to have a significant impact.

Results from rotational motion estimation with synthetic images of size 512x512 (estimated angle for three bandwidths of the decomposition):

    true angle    l ≤ 5     l ≤ 8     l ≤ 12
    5º            4.08º     5.62º     5.74º
    10º           9.26º     10.38º    10.57º
    25º           25.15º    24.19º    25.20º

The classification algorithm was used on a sequence of three minutes' length, in which one subject carried out sequences of smooth pursuits, VORs, fixations, and saccades. The proposed algorithm classified the fixations, saccades, and VORs with 100% accuracy, and the smooth pursuits with 65% accuracy. Further validation by trained experts should be used to evaluate the algorithm.

References:
A. Makadia, K. Daniilidis: "Direct 3D-Rotation Estimation from Spherical Images via a Generalized Shift Theorem", 2003.
D. D. Salvucci: "Mapping Eye Movements to Cognitive Processes", PhD Thesis, Carnegie Mellon University, 1999.
T. Svoboda, T. Pajdla: "Epipolar Geometry for Central Catadioptric Cameras", IJCV 49(1), 2002.
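To illustrate the classification stage referenced above, here is a minimal sketch of a four-state HMM over (eye velocity, head velocity) observations, using the hmmlearn package as a stand-in for the project's implementation; the synthetic velocities, the hand-picked initial means, and the state ordering are invented for illustration and are not the experimentally derived parameters described in the poster.

import numpy as np
from hmmlearn.hmm import GaussianHMM  # stand-in for the project's HMM code

# Synthetic observations: rows of [eye velocity, head velocity] in deg/s.
# The magnitudes are invented, merely ordered like the four movement types.
rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal([1.0, 1.0], 1.0, size=(200, 2)),    # fixation: eye and head still
    rng.normal([150.0, 2.0], 10.0, size=(20, 2)),  # saccade: fast eye, still head
    rng.normal([15.0, 2.0], 3.0, size=(100, 2)),   # smooth pursuit: slow eye drift
    rng.normal([2.0, 15.0], 3.0, size=(100, 2)),   # VOR: eye compensates head motion
])

# Four states with bivariate-normal emissions. The initial means are hand-picked
# guesses here (the poster derives them from experimental data); Baum-Welch (EM)
# then refines the means, covariances, and transition probabilities.
model = GaussianHMM(n_components=4, covariance_type="full",
                    init_params="stc", n_iter=50, random_state=0)
model.means_ = np.array([[0.0, 0.0], [140.0, 0.0], [12.0, 0.0], [0.0, 12.0]])
model.fit(X)

labels = model.predict(X)  # Viterbi decoding into the four movement classes
print(labels[:10], labels[-10:])

Here fit runs the Baum-Welch re-estimation and predict performs Viterbi decoding, mirroring the train-then-decode pipeline described in the classification panel.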