Dewarped Minds United
Progress Report from Bozeman: Simulator, Estimating Eye Motion, Dewarping, Montaging, Mosaicing


Simulator

Simulates translational eye motion, including saccades, and generates the corresponding AOSLO videos:
1. Start with a BIG IMAGE.
2. Model the raster motion r(t).
3. Model the eye motion x(t).
4. Record from the BIG IMAGE pixel by pixel according to r(t) + x(t).

Validates motion estimation:
1. Tracking saccadic eye motions involves more than just estimating translational transforms.
2. Saccadic eye motions induce spurious rotational estimates.
3. Using small patches induces spurious vertical motion and rotation estimates.

Also simulates rotational eye motion and generates videos.
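As a minimal sketch of step 4, the recording loop can be vectorized: sample the BIG IMAGE at r(t) + x(t) for each pixel's readout time. `record_frame`, the array layout, and the toy sizes are hypothetical names for illustration, not the simulator's actual code.

```python
import numpy as np

def record_frame(big_image, raster_xy, eye_xy):
    """Record one simulated AOSLO frame pixel by pixel.

    raster_xy, eye_xy: arrays of shape (H*W, 2) giving the raster
    position r(t) and the eye motion x(t) at each pixel's sample time.
    The sampling position in the big image is r(t) + x(t), rounded
    to the nearest pixel and clipped to the image bounds.
    """
    pos = np.rint(raster_xy + eye_xy).astype(int)
    rows = np.clip(pos[:, 0], 0, big_image.shape[0] - 1)
    cols = np.clip(pos[:, 1], 0, big_image.shape[1] - 1)
    return big_image[rows, cols]

# Toy example: a 6x8 frame read out of a 32x32 "BIG IMAGE" with no eye motion.
H, W = 6, 8
big = np.arange(32 * 32, dtype=float).reshape(32, 32)
r = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"),
             axis=-1).reshape(-1, 2)          # static raster scan positions
x = np.zeros_like(r)                          # x(t) = 0: no eye motion
frame = record_frame(big, r, x).reshape(H, W)
```

With zero eye motion the recorded frame is just the top-left 6x8 crop of the big image; a nonzero x(t) shears and warps the frame the way real eye motion warps AOSLO videos.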

2. Model Raster Motion

3. Model Eye Motion: drift + sinusoid; saccades modelled by an overdamped oscillator
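A hedged sketch of one way the overdamped-oscillator saccade could be written, as the step response of a second-order system with two distinct real poles. The pole values `p1`, `p2` and the 100 ms duration are illustrative assumptions, not the simulator's actual parameters.

```python
import numpy as np

def saccade(t, amplitude, p1, p2):
    """Step response of an overdamped 2nd-order system (p1 != p2, both > 0):

        x(t) = A * (1 - (p2*exp(-p1*t) - p1*exp(-p2*t)) / (p2 - p1))

    Starts at 0 with zero velocity and settles monotonically at `amplitude`.
    """
    t = np.asarray(t, dtype=float)
    step = 1.0 - (p2 * np.exp(-p1 * t) - p1 * np.exp(-p2 * t)) / (p2 - p1)
    return amplitude * step

t = np.linspace(0.0, 0.1, 101)                 # a 100 ms saccade
x = saccade(t, amplitude=1.0, p1=60.0, p2=400.0)  # hypothetical pole rates
```

The overdamped form gives the rapid rise and smooth, non-oscillatory settling characteristic of saccades.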

4. Record Video

Validation: tracking through a pure translational saccade.
1. This suggests that non-translational transforms are needed to accurately track real saccadic eye movements.
2. Saccadic eye motions induce spurious rotational estimates.

3. Using very small patches induces spurious vertical motion and rotation estimates during periods of pure drift: 4 patches/frame, 16 patches/frame …

128 patches/frame

What’s the “optimal” patch size so that spurious rotations are not induced? Answer: it depends on the patch length relative to the frame size.

Optimal Patch Size for 480×512 simulated images

[Table: Patch Length, Patch Height, Patches/Frame]

Rule of thumb: to prevent spurious rotational estimates, each patch must cover about 8% of the total image:

(1/num_patches) × (patch_length/cols) = (patch_height/rows) × (patch_length/cols) = .08

The rule held for:
- Simulated data with pure drift or drift with high-frequency oscillations
- Different quantizations of the rotation transform (1/8 to 1/32 degree)

Even for the “optimal” patch size, we still get spurious rotations at a saccade.
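Read this way, the rule of thumb gives the patch length directly from the image width and the number of patches. The helper below is only a sketch of that arithmetic, assuming patches are horizontal strips so that patch_height = rows/num_patches; the name and the example strip counts are invented.

```python
def patch_length_for(cols, num_patches, coverage=0.08):
    """Patch length (in pixels) satisfying the ~8% coverage rule of thumb:

        (1/num_patches) * (patch_length/cols) = coverage

    which assumes patches are horizontal strips, patch_height = rows/num_patches,
    so each patch covers `coverage` of the total image area.
    """
    return coverage * cols * num_patches

# For 512-pixel-wide frames and 5 strips per frame (illustrative numbers):
example = patch_length_for(512, 5)   # 0.08 * 512 * 5 = 204.8 px
```

Note the trade-off the rule implies: more (shorter-in-height) patches per frame require longer patches to keep the 8% coverage, until the required length exceeds the frame width.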

Simulating Pure Rotational Eye Motion:

Estimating Rotations in Simulated Data:

Estimating Eye Motion

Three motion types: transforms are estimated …
1. Differential: between subsequent frames, so the reference frame is continually changing.
2. Absolute: from a single, fixed reference frame.
3. Compromise: from a dynamically changing reference frame, where a reference frame change occurs due to some criterion, such as a low correlation.

Filtering: remove motion estimates with low correlation (then interpolate).
Aligning: properly align motion tracks after each reference frame change.
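The compromise strategy above can be sketched as a small loop that promotes the current frame to be the new reference whenever correlation with the old reference drops. `track` and `correlate` are hypothetical names; `correlate` stands in for the actual patch-based registration, which is not shown here.

```python
def track(frames, correlate, corr_threshold=0.3):
    """'Compromise' tracking sketch: estimate each frame's shift against a
    reference frame, and change the reference whenever the correlation
    drops below `corr_threshold`.

    `correlate(ref, frame)` is a stand-in returning (shift, correlation).
    Returns one (shift, correlation, reference_index) record per frame.
    """
    ref, ref_idx = frames[0], 0
    records = []
    for i, frame in enumerate(frames):
        shift, corr = correlate(ref, frame)
        records.append((shift, corr, ref_idx))
        if corr < corr_threshold:        # criterion for a reference change
            ref, ref_idx = frame, i
    return records

# Toy demo: "frames" are just numbers, "shift" is the difference from the
# reference, and correlation collapses once the shift gets large.
frames = list(range(10))
fake_corr = lambda ref, f: (f - ref, 1.0 if abs(f - ref) < 4 else 0.1)
records = track(frames, fake_corr)
```

Setting `corr_threshold` to 0 recovers pure absolute tracking (the reference never changes); a threshold near 1 degenerates toward differential tracking (the reference changes almost every frame).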

1. Differential Motion
PROS: yields motion estimates with high image-to-image correlations AND reliably detects saccades

1. Differential Motion
CONS: not possible to determine absolute motion by “integration” or least-squares methods (even when adding penalty methods or an AR(1) model for correlations) due to additive error.
LESSON LEARNED: aligning after each reference frame change contributes some error to all subsequent motion estimates.
POSSIBLE SOLUTION: Kalman filtering?
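The additive-error CON can be illustrated numerically: each frame-to-frame shift carries a small error, and summing the shifts sums the errors too, so the absolute-position error grows like a random walk, with standard deviation proportional to sqrt(number of frames). The 0.1 px per-frame error below is an invented figure for the toy simulation.

```python
import numpy as np

# Each differential estimate has a small independent error; "integrating"
# (cumulatively summing) the estimates also cumulatively sums the errors.
rng = np.random.default_rng(0)
n_frames, n_trials = 400, 2000
step_err = rng.normal(0.0, 0.1, size=(n_trials, n_frames))  # per-frame error (px)
abs_err = step_err.cumsum(axis=1)        # absolute-position error after integration

std_early = abs_err[:, 9].std()          # error spread after 10 frames
std_late = abs_err[:, -1].std()          # error spread after 400 frames
# Random-walk growth: std_late / std_early should be near sqrt(400/10) ~ 6.3
```

This is why raw integration fails even with penalty terms, and why a filter with an explicit state model (e.g. a Kalman filter) is a plausible remedy.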

2. Absolute Motion from a single reference frame
PROS: no error due to reference frame changes.
CONS: correlations drop as a function of distance from the reference frame (the second frame shows a filtered motion track, corr < .3, dropping 65% of the motion estimates).

3. Compromise: dynamically change the reference frame whenever the correlation drops below some threshold for some proportion of the patches between the current reference frame and the current frame in the video.

Estimating Eye Motion: Filtering and Aligning (filtering out all corr < .3 dropped about 5% of the motion estimates)

Reference Frame changes are not always necessary … (for sk_v15cropped, the reference frame never changes)

Or changing reference frames is a necessity … such as when estimating frame to frame motion from a video which pans about the retina (ARDISK) …

(lots of reference frame changes)

Dewarping images
1. Interpolate between motion estimates (nearest neighbor, linear, cubic, splines) to get a motion estimate at each pixel of an image.
2. Guess the motion during which the very first reference frame was recorded (extrapolation based on an average of the “first set” of motion estimates).
3. Interpolate in 2-D to create a dewarped image after each pixel in a frame has been moved according to the corresponding motion.
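Step 1 can be sketched in 1-D with linear interpolation: motion is estimated once per patch (every few scan lines), then interpolated to every row of the frame. The row positions and shift values below are made-up numbers, not real estimates.

```python
import numpy as np

# Motion estimates exist only at the center row of each patch ...
patch_rows = np.array([8.0, 24.0, 40.0, 56.0])    # rows where patches matched
patch_shift_x = np.array([1.0, 2.0, 1.5, 0.5])    # estimated horizontal shift (px)

# ... so interpolate linearly to get a shift estimate at every scan row.
all_rows = np.arange(64.0)
row_shift_x = np.interp(all_rows, patch_rows, patch_shift_x)
```

Dewarping then moves each row by the negative of its interpolated shift (and likewise vertically), after which step 3's 2-D interpolation resamples the scattered, motion-corrected pixels back onto a regular grid. Note that `np.interp` clamps at the ends, which is a cruder stand-in for the extrapolation described in step 2.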

dewarping OC…

dewarping SK_v15cropped …

Montaging
1. Estimate the motion for a sequence of frames.
2. Cluster the frames according to reference frame.
3. Dewarp each frame in a (subset of a) cluster; choose which frames, e.g. by staying away from saccades.
4. Average the dewarped frames together via 2-D interpolation (Curt’s method, Voronoi method).
POINT: a montage is a de-noised retinal image
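The POINT above follows from averaging statistics: with N registered frames and independent noise, the montage's noise standard deviation drops by about a factor of sqrt(N). A toy check, with an invented stand-in "retinal image" and noise level:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(0.0, 1.0, size=(64, 64))             # stand-in retinal image
frames = truth + rng.normal(0.0, 0.2, size=(25, 64, 64)) # 25 noisy aligned frames

montage = frames.mean(axis=0)            # averaging = the de-noising step

err_single = (frames[0] - truth).std()   # noise level of one frame (~0.2)
err_montage = (montage - truth).std()    # noise level of the montage (~0.2/5)
```

This is the best case, perfectly registered frames with independent noise; residual misregistration or correlated noise reduces the gain, which is one motivation for excluding saccade-contaminated frames in step 3.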

Montages from OC

Montages from ARDISK

Mosaicing
1. Estimate the motion for a sequence of frames.
2. Cluster the frames according to reference frame.
3. Dewarp each frame in a (subset of a) cluster; choose which frames, e.g. by staying away from saccades.
4. Three ways to build the mosaic, by adding select:
   1. “raw” frames (e.g. a cluster representative)
   2. dewarped frames
   3. montages
5. The current mosaic is the “reference frame” when adding another image to the mosaic.

Mosaicing Difficulties:
1. It is problematic to dewarp the additions (with respect to the mosaic), especially when only a small part of the mosaic and the addition overlap. Thus, a mosaic appears to lack the detail which the individual montages have.
2. How to choose “select frames” or “select montages”?
3. How does one ensure that additions to the mosaic are placed correctly? If an addition is placed incorrectly, all subsequent additions are referenced to an incorrect mosaic.
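Difficulty 3 comes down to finding the peak of the correlation landscape. A brute-force sketch using normalized cross-correlation; `ncc` and `best_offset` are hypothetical helpers for illustration, not the actual pipeline.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size image patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_offset(mosaic, addition, search=5):
    """Slide `addition` over `mosaic` and keep the offset with the highest
    NCC, i.e. the peak of the 'correlation landscape'."""
    h, w = addition.shape
    best = (0, 0, -2.0)
    for dy in range(search + 1):
        for dx in range(search + 1):
            window = mosaic[dy:dy + h, dx:dx + w]
            if window.shape != (h, w):
                continue
            score = ncc(window, addition)
            if score > best[2]:
                best = (dy, dx, score)
    return best

# Toy check: cut a patch out of a synthetic mosaic and re-find its position.
rng = np.random.default_rng(2)
mosaic = rng.uniform(0.0, 1.0, size=(40, 40))
addition = mosaic[3:23, 2:22].copy()     # true offset is (3, 2)
dy, dx, score = best_offset(mosaic, addition)
```

Misplacements happen when the landscape has a false peak, e.g. with small overlap or repetitive retinal texture, which is why the later slides compare the correlation landscapes of a correct and an incorrect mosaic.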

Mosaicing OC (adding all montages)

How to select representative frames or montages?
[Table: cluster, min corr, mean corr, max corr, wt]

Mosaicing ARDISK from raw frames… CONS: incorrectly placed frames

A mosaic from raw cluster reps and dewarped cluster reps … CONS: still, incorrectly placed frames (although not as many)

A mosaic from (SMOOTHED) dewarped cluster reps … looks good …

A mosaic from montages … still looks good, and it agrees with the previous mosaic

zooming into the mosaics …

Why are there incorrectly placed images into the mosaic? (comparing an incorrect mosaic with a correct one)

The corresponding “correlation landscapes” …

… and the “normalized correlation landscapes”

Mosaicing SKV20 from raw cluster reps …