Fig. 2 System and method for determining retinal disparities when participants are engaged in everyday tasks.


Fig. 2 System and method for determining retinal disparities when participants are engaged in everyday tasks.
(A) The device. A head-mounted binocular eye tracker (EyeLink II) was modified to include outward-facing stereo cameras. The eye tracker measured the gaze direction of each eye using a video-based pupil and corneal reflection tracking algorithm. The stereo cameras captured uncompressed images at 30 Hz and were synchronized to one another with a hardware trigger. The cameras were also synchronized to the eye tracker using digital inputs. Data from the tracker and cameras were stored on mobile computers in a backpack worn by the participant. Depth maps were computed from the stereo images offline.
(B) Calibration. The cameras were offset from the participant's eyes by several centimeters. To reconstruct the disparities from the eyes' viewpoints, we performed a calibration procedure for determining the translation and rotation between the cameras' and the eyes' 3D positions. The participant was positioned with a bite bar to place the eyes at known positions relative to a large display. A calibration pattern was then displayed for the cameras and used to determine the transformation between the cameras' viewpoints and the eyes' known positions.
(C) Schematic of the entire data collection and processing workflow.
(D) Two example images from the stereo cameras. Images were warped to remove lens distortion, resulting in bowed edges. The disparities between these two images were used to reconstruct the 3D geometry of the scene. These disparities are illustrated as a grayscale image, with bright points representing near pixels and dark points representing far pixels. Yellow indicates regions in which disparity could not be computed due to occlusions and lack of texture.
(E) The 3D points from the cameras were reprojected to the two eyes.
The yellow circles (20° diameter) indicate the areas over which statistical analyses were performed.
(F) Horizontal disparities for this scene at one time point. Different disparities are represented by different colors, as indicated by the color map on the right.

William W. Sprague et al. Sci Adv 2015;1:e1400254. Published by AAAS.
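The core geometry of the pipeline described above — triangulating a stereo disparity map into 3D scene points, then reprojecting those points through a calibrated rigid transform into each eye's viewpoint — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the focal length, baseline, principal point, and eye offsets below are made-up placeholder values, and a simple pinhole model stands in for the paper's full calibration.

```python
import numpy as np

# Hypothetical camera parameters -- illustrative only, not from the paper.
f_px = 700.0            # focal length in pixels
baseline_m = 0.10       # stereo camera baseline in meters
cx, cy = 320.0, 240.0   # principal point

def disparity_to_points(disparity, f=f_px, B=baseline_m):
    """Triangulate a dense disparity map (in pixels) into 3D camera-frame
    points via the pinhole model: Z = f*B/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f.
    Invalid pixels (d <= 0, e.g. occlusions or textureless regions) become NaN."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0
    Z = np.where(valid, f * B / np.where(valid, disparity, 1.0), np.nan)
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)   # shape (h, w, 3)

def reproject_to_eye(points_cam, R, t, f=f_px):
    """Map camera-frame 3D points into an eye's frame using the calibrated
    rigid transform (R, t), then project to image coordinates in that eye."""
    P = points_cam.reshape(-1, 3) @ R.T + t
    x = f * P[:, 0] / P[:, 2]
    y = f * P[:, 1] / P[:, 2]
    shape = points_cam.shape[:2]
    return x.reshape(shape), y.reshape(shape)

# Horizontal retinal disparity is then the difference between the two eyes'
# horizontal projections (x_left - x_right) at corresponding points.
```

For a frontoparallel surface at depth Z viewed with interocular separation I, this reduces to the familiar relation: horizontal disparity ≈ f·I/Z, so nearer points produce larger disparities, matching the bright-is-near convention in panel (D).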