Tracking Systems in VR

Optical Trackers
- Photosensors detect a range of the electromagnetic spectrum
- Usually arranged in a 2D grid (X pixels by Y pixels), either CCD (old) or CMOS (fast)
- What information does a single 2D image point provide? Location, intensity, and possibly color
- Camera = sensor + lens + color filter

Simple Camera Model
- You want a projection of the 3D world, i.e. what your eyes see; the sensor by itself does not provide this
- The pinhole camera is easy to understand: each pixel represents light from a particular direction, but light gathering is poor
- A lens approximates a pinhole: it focuses light from a particular point in space, at the cost of depth-of-field problems and lens distortions
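A minimal sketch of the pinhole projection described above, assuming a focal length in pixels and the principal point at the sensor center (all values hypothetical):

```python
def project_pinhole(point, f=800.0, cx=320.0, cy=240.0):
    """Ideal pinhole projection of a 3D point in camera coordinates
    (Z forward): u = f*X/Z + cx, v = f*Y/Z + cy."""
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point is behind the camera")
    return (f * X / Z + cx, f * Y / Z + cy)

# A point 2 m ahead and 0.5 m to the right lands 200 px right of center:
print(project_pinhole((0.5, 0.0, 2.0)))  # (520.0, 240.0)
```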

Simple Camera Calibration
- Focal length: the distance from the pinhole to the image plane; together with the sensor size it determines the field of view
- Center of projection: was the center of the sensor mounted exactly on the optical axis? Can be assumed to be (res_x/2, res_y/2)
- Position and orientation: well-known techniques exist
- ImagePoint = C * WorldPoint, where C is the 3 x 4 camera matrix
- Each world point gives 2 equations, so 6 points are required to obtain C; in practice we use many more
- From C we can recover the focal length, center of projection, position, and orientation
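The estimation step the slide alludes to is the classic direct linear transform (DLT); a minimal sketch, assuming at least six known 3D-to-2D correspondences (function and variable names hypothetical):

```python
import numpy as np

def estimate_camera_matrix(world_pts, image_pts):
    """Direct Linear Transform: solve ImagePoint ~ C * WorldPoint for
    the 3x4 camera matrix C, two equations per correspondence."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # C is defined only up to scale: take the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)
```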

Issues in Camera Tracking
- Isolating tracking points: which pixels contain the tracked object?
- Finding point correspondences: which pixels correspond to which features on the tracked object?
- Mapping the tracked object to a 3D pose
- Camera resolution limits accuracy and sensitivity
- Camera exposure time limits the update rate
- Points are rarely a single pixel; this can be a good thing (an additional constraint, sub-pixel accuracy; see the sketch below)
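One way multi-pixel points buy sub-pixel accuracy: the intensity-weighted centroid of a marker blob locates it to a fraction of a pixel. A minimal sketch (the sample patch is hypothetical):

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid (x, y) of an image patch containing
    a single marker blob; resolution is finer than one pixel."""
    patch = patch.astype(float)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    return (xs * patch).sum() / total, (ys * patch).sum() / total

blob = np.array([[0, 1, 2],
                 [0, 4, 8],
                 [0, 1, 2]])
print(subpixel_centroid(blob))  # x ~ 1.67: between pixel columns 1 and 2
```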

Optical Object Tracking (with a calibrated camera and known point correspondences)
- 1 point: direction of the object relative to the camera
- 2 points: direction of the object, plus roll
- 3 non-collinear points: 6 degrees of freedom (up to 4 possible solutions)
- 4 coplanar, non-collinear points: 6 degrees of freedom (unique solution)
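Recovering a 6-DOF pose from 3 or more known points is the classic perspective-n-point (PnP) problem. A sketch using OpenCV's solver (the intrinsics and point data are hypothetical):

```python
import numpy as np
import cv2

# Four coplanar, non-collinear marker positions in the object's frame (m).
object_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                       [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]], dtype=np.float32)
# Their detected pixel locations in the image.
image_pts = np.array([[320, 240], [400, 238],
                      [402, 318], [322, 320]], dtype=np.float32)
# Intrinsics from calibration: focal length and center of projection.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
# rvec/tvec give the object's 6-DOF pose in camera coordinates (unique
# for four coplanar points, as the slide states).
```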

Markers
- Pure image features (e.g. corners) are difficult to segment and use, and where possible take too long to compute (lag)
- Markers: fast segmentation, high accuracy, known point correspondences; at the cost of encumbrance, occlusion, and setup
- Active marker: emits light
- Passive marker: absorbs light
- Reflective markers are a good compromise: strobed light source, low exposure, infrared imaging; requires expensive cameras (see the sketch below)
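Why reflective markers segment so quickly: under a strobed infrared source with low exposure, the markers are nearly the only bright pixels left. A minimal sketch with OpenCV (file name and threshold value hypothetical):

```python
import cv2

frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
# Markers saturate under the strobe while the under-exposed background
# stays dark, so one global threshold isolates them.
_, mask = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
# Each connected component is one candidate marker.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for cx, cy in centroids[1:]:  # label 0 is the background
    print(f"marker at ({cx:.1f}, {cy:.1f})")
```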

Optical Tracking Configurations
- Outside-looking-in: fixed camera positions, moving objects
- Inside-looking-out: fixed objects, moving camera
- 1 to N cameras

Multiple Camera Tracking (with calibrated cameras and image correspondences)
- A single point is a direction (ray) from each camera
- With known camera positions and orientations, find the shortest line segment between the camera rays; they may not exactly intersect (see the sketch below)
- 1 point provides position only; combine multiple points for orientation
- Orientation quality depends on the distance between the points: good for inside-looking-out, bad for outside-looking-in
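A minimal sketch of that triangulation step: take the midpoint of the shortest segment between two camera rays as the 3D position (all inputs hypothetical):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays p = o1 + s*d1 and
    q = o2 + t*d2; the rays rarely intersect exactly."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    b = d1 @ d2
    denom = 1.0 - b * b            # ~0 when the rays are near parallel
    s = (b * (d2 @ w) - (d1 @ w)) / denom
    t = ((d2 @ w) - b * (d1 @ w)) / denom
    return (o1 + s * d1 + o2 + t * d2) / 2.0

# Two perpendicular rays that miss each other by 1 mm in y:
print(triangulate_midpoint(np.array([0., 0., 0.]), np.array([1., 0., 0.]),
                           np.array([0., 0.001, -1.]), np.array([0., 0., 1.])))
# -> [0. 0.0005 0.], splitting the gap between the rays
```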

Lateral Effect Photodiode (LEPD)
- Like a photocell, but the current output is proportional to the centroid of the incident light intensity
- If the LEPD can see only a single light, it immediately senses that light's position on the imager (a direction); multiple lights on at a time are a problem
- Superior update rate and little processing; near-zero latency
- HiBall tracker: 6 LEPDs, a ceiling of LED strip lights, 1 LED on at a time (in fast sequence); the best head tracker in VR
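A sketch of the 1D readout principle, assuming the standard position-sensitive-detector relation (names and numbers hypothetical): the photocurrent splits between the two end electrodes in proportion to where the light spot falls, so position comes straight from two currents, with no pixel array to scan.

```python
def lepd_position(i1, i2, length_mm=10.0):
    """1D lateral-effect photodiode: the spot position, measured from
    the detector center, is x = (L/2) * (i2 - i1) / (i1 + i2)."""
    return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

print(lepd_position(1.0, 3.0))  # spot halfway toward electrode 2: 2.5 mm
```

Two lights at once superimpose their currents and yield only a combined centroid, which is why the HiBall's LEDs flash one at a time.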

Depth Cameras
- In addition to measuring light intensity, also measure the distance to the surface seen by each pixel
- Two existing varieties:
- Time-of-flight: measures how long an infrared light pulse takes to be picked up by each pixel; high price per pixel, low latency
- Structured light: measures properties of a projected infrared light field (Kinect); low price per pixel, but "depth shadows" and processing latency
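The time-of-flight arithmetic in one line: the pulse travels out and back, so depth is half the round trip at the speed of light (the timing value is hypothetical):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_s):
    """Depth from a time-of-flight pulse: half the round-trip distance."""
    return C * round_trip_s / 2.0

print(tof_depth(13.3e-9))  # a ~13.3 ns round trip is about 2 m of depth
```

Timing each pixel at nanosecond scale is what drives the high per-pixel price the slide mentions.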

Tracking with Depth Cameras
- Big advantages over normal cameras: simple background/foreground segmentation (see the sketch below), and depth that does not depend on image features (unlike stereo camera pairs)
- Usually used in addition to, not as a replacement for, a normal camera
- Can track deformable bodies: known algorithms fit an articulated body model to the depth points
- PrimeSense NITE (used by FAAST): based on a calibration pose and a frame-to-frame best-fit model
- Microsoft: based on a machine learning algorithm that recognizes body parts
- Depth is low precision right now, but will improve
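The segmentation advantage in a few lines: with per-pixel depth, the foreground is simply everything nearer than a cut-off; no color or feature analysis is needed (thresholds hypothetical):

```python
import numpy as np

def foreground_mask(depth_m, near=0.4, far=2.5):
    """Keep pixels between `near` and `far` meters. Readings of zero
    (no return, e.g. structured-light "depth shadows") are discarded."""
    return (depth_m > near) & (depth_m < far)

depth = np.array([[0.0, 1.2, 3.1],
                  [0.9, 1.1, 3.0]])  # hypothetical depth frame in meters
print(foreground_mask(depth))       # True only for the mid-range pixels
```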

Optical Tracking Review
- Data returned: 6 DOF
- Spatial distortion: very low (very good accuracy)
- Resolution: very good
- Jitter (precision): very good
- Drift: none
- Lag: moderate
- Update rate: low to high
- Range: very large (40' x 40' and up)
- Number of tracked points: 4
- Wireless: yes
- Interference and noise: occlusion
- Mass, inertia, and encumbrance: moderate
- Price: cheap to very expensive
Pros: can be inexpensive; wide area; very accurate; wireless, near-zero mass
Cons: high quality is very expensive; occlusion; calibration

Optical Tracking Performance (Outside-Looking-In)

Variable          Quality (relative to alternatives)
Accuracy          Excellent position (<1 mm), good orientation (<1 degree)
Resolution        Excellent (<0.1 mm, <0.1 degree)
Jitter            Good position, poor orientation (for small markers)
Drift             None
Lag               Okay (better with on-board electronics)
Update rate       Poor to excellent (30 Hz to 1500 Hz)
Interference      Okay (line of sight)
Encumbrance       Very good position (low weight, wireless), okay orientation (large marker)
Space             Very good (depends on camera distance); large physical space requirements
Tracked entities  Excellent (no practical limit)
Calibration       Poor (shifts over time, done by the end user)
Cost              Get what you pay for ($$ to $$$$$$); high-quality, fast cameras are very expensive

Optical Tracking Performance (Inside-Looking-Out)

Variable          Quality (relative to alternatives)
Accuracy          Okay position (<1 cm), excellent orientation (<0.1 degree)
Resolution        Excellent (<0.1 mm, <0.1 degree)
Jitter            Okay position, excellent orientation
Drift             None to some (think optical mouse)
Lag               Okay (better with on-board electronics)
Update rate       Poor to excellent (30 Hz to 1500 Hz)
Interference      Okay (line of sight)
Encumbrance       Poor (cameras need a wire and have significant weight)
Space             Very good (easier to move than outside-looking-in; can track very large spaces)
Tracked entities  1 per camera
Calibration       Good (markers on walls tend to stay put)
Cost              Get what you pay for ($$ to $$$$$); high-quality, fast cameras are very expensive, but you only need to buy one

Hand/Finger Tracking
- A special case of human body tracking
- Very difficult. Why? 14 joints in a small area

Mechanical Data Gloves
- The most common solution to hand tracking: measure the joint angles directly
- Mechanical tracking (but not good): encumbering, sweaty, and you still need to track the hand itself, etc.
- Need calibration (see the sketch below); not user-independent
- Poor performance in general
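Why per-user calibration is needed: raw bend-sensor readings vary with hand size and sensor placement, so each joint's reading is mapped to an angle from two recorded poses. A minimal sketch (sensor values and the 90-degree fist assumption are hypothetical):

```python
def calibrate_joint(raw_flat, raw_fist, angle_fist_deg=90.0):
    """Return a function mapping a raw bend-sensor reading to a joint
    angle, interpolated linearly between two calibration poses: hand
    held flat (0 degrees) and a closed fist (assumed ~90 degrees)."""
    scale = angle_fist_deg / (raw_fist - raw_flat)
    return lambda raw: (raw - raw_flat) * scale

index_angle = calibrate_joint(raw_flat=120, raw_fist=680)
print(index_angle(400))  # mid-range reading -> 45.0 degrees
```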

Optically Tracked Data Gloves
- Use specialized markers to facilitate hand tracking in a small space
- Pulsed LEDs (A.R.T. GmbH)
- Colored segments (MIT)