
EpicFlow: Edge-Preserving Interpolation of Correspondences for Optical Flow
Jerome Revaud, joint work with Philippe Weinzaepfel, Zaid Harchaoui and Cordelia Schmid (Inria)
[Speaker note: restate the title, no matter what]

Optical flow estimation is challenging! Main remaining problems:
- large displacements
- occlusions
- motion discontinuities

Our approach: "EpicFlow" (Epic: Edge-Preserving Interpolation of Correspondences)
- leverages a state-of-the-art matching algorithm, invariant to large displacements
- incorporates an edge-aware distance: handles occlusions and motion discontinuities
- state-of-the-art results

Related work: variational optical flow [Horn and Schunck 1981]
- energy combining color/gradient constancy (data term) with a smoothness constraint
Solutions for handling large displacements:
- minimization using a coarse-to-fine scheme [Horn'81, Brox'04, Sun'13, ...]: iterative energy minimization at several scales; displacements are small at coarse scales
- addition of a matching term [Brox'11, Braux-Zin'13, Weinzaepfel'13, ...]: penalizes the difference between the flow and HOG matches
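For reference, a typical form of such an energy reads as follows (a sketch only; the exact robust penalty $\Psi$, weights and terms differ between the cited methods):

\[
E(\mathbf{w}) \;=\; \int_{\Omega} \Psi\!\big(|I_2(\mathbf{x}+\mathbf{w}(\mathbf{x})) - I_1(\mathbf{x})|^2\big)
\;+\; \gamma\,\Psi\!\big(|\nabla I_2(\mathbf{x}+\mathbf{w}(\mathbf{x})) - \nabla I_1(\mathbf{x})|^2\big)
\;+\; \alpha\,\Psi\!\big(\|\nabla \mathbf{w}(\mathbf{x})\|^2\big)\, d\mathbf{x}
\]

The first two terms encode color and gradient constancy, the last one the smoothness constraint; the matching-based variants add a term that penalizes the distance between $\mathbf{w}$ and the precomputed matches at matched pixels.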

Related work: coarse-to-fine minimization. Problems with coarse-to-fine:
- flow discontinuities overlap at the coarsest scales
- errors are propagated across scales
- no theoretical guarantees or proof of convergence!
(Figure: full-scale image, 1024x436 pixels, vs. coarsest level, 59x26 pixels; estimation at the coarsest scale, full-scale estimate, and EpicFlow.)

Overview of our approach: avoid the coarse-to-fine scheme.
Other matching-based approaches [Chen'13, Lu'13, Leordeanu'13]:
- less robust matching algorithms (e.g. PatchMatch)
- complex schemes to handle occlusions & motion discontinuities
- no geodesic distance
(Figure: overview of the approach, with image edges (SED) and matches (DeepMatching) as inputs.)

Dense Interpolation. DeepMatching [Weinzaepfel et al. 2013]: 5000 matches / image pair. Does not respect motion boundaries.

Dense Interpolation. Assumption: motion boundaries are mostly included in image edges.
(Figure: image contours (SED, [Dollar and Zitnick, 2013]), ground-truth flow, and motion boundaries.)
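As an aside, the SED contour detector has an implementation in OpenCV's ximgproc module; a minimal sketch of computing such a contour/cost map (assuming opencv-contrib-python is installed; the model and image file names below are placeholders):

```python
import cv2
import numpy as np

# Structured Edge Detection (SED) via OpenCV's ximgproc module.
# "model.yml.gz" is the pre-trained structured-edge model distributed with
# OpenCV's extra data; "frame1.png" is a placeholder image name.
sed = cv2.ximgproc.createStructuredEdgeDetection("model.yml.gz")

img = cv2.imread("frame1.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0  # detectEdges expects float32 RGB in [0, 1]
edges = sed.detectEdges(img)  # HxW float32 map of edge strengths, usable as a cost map
```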

Dense Interpolation: edge-aware geodesic distance. Replace the Euclidean distance with an edge-aware distance to find nearest neighbors.
Geodesic distance:
- anisotropic cost map = image edges
- cost of a path: sum of the cost of all traversed pixels
- geodesic distance: minimum cost among all possible paths
(Figure: minimum path between two pixels p and q.)
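In symbols (notation assumed here, not taken from the slides): with a cost map $C(\mathbf{x})$ built from the image edges, the geodesic distance between two pixels $p$ and $q$ is

\[
D_G(p, q) \;=\; \min_{\Gamma \in \mathcal{P}_{p,q}} \int_{\Gamma} C(\mathbf{x})\, d\mathbf{x},
\]

i.e. the minimum, over all paths $\Gamma$ joining $p$ and $q$, of the edge cost accumulated along the path.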

Dense Interpolation: edge-aware geodesic distance.
(Figure: matches, image edges, occlusions; the 100 closest matches under the geodesic distance vs. the 100 closest matches under the Euclidean distance.)

Dense Interpolation: approximated geodesic distance. Approximation:
- assign each pixel to its (geodesic) closest match
- compute geodesic distances in a graph of neighboring matches: ~5000 nodes (matches) instead of ~1M pixels
(Figure: contour map, pixel assignment to the closest match, match graph.)
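The exact version of this assignment can be computed with a multi-source Dijkstra over the pixel grid; the sketch below (not the authors' implementation, which uses a faster approximation) propagates geodesic distances from all matches at once, so every pixel ends up labelled with its geodesically closest match:

```python
import heapq
import numpy as np

def assign_pixels_to_matches(cost_map, matches):
    """cost_map: HxW array of non-negative per-pixel costs (e.g. edge strengths).
    matches: list of (row, col) match positions in the first image.
    Returns (dist, label): geodesic distance to, and index of, the closest match."""
    H, W = cost_map.shape
    dist = np.full((H, W), np.inf)
    label = np.full((H, W), -1, dtype=int)
    heap = []
    for i, (r, c) in enumerate(matches):
        dist[r, c] = 0.0
        label[r, c] = i
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + cost_map[nr, nc]  # cost of traversing the neighboring pixel
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    label[nr, nc] = label[r, c]
                    heapq.heappush(heap, (nd, nr, nc))
    return dist, label
```

The resulting label map gives the pixel-to-match assignment directly, and the match graph mentioned on the slide can then be built by connecting matches whose assigned regions are adjacent.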

Variational refinement: one-level variational minimization, with initialization = dense interpolation, using classic optimization.
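As an illustration of what a one-level refinement does, here is a much-simplified Horn-Schunck-style iteration (the 1981 model cited in the related work), initialized with a dense flow; this is only a stand-in, not the robust energy and solver actually used in EpicFlow:

```python
import numpy as np
from scipy.ndimage import convolve, map_coordinates

def horn_schunck_refine(I1, I2, u0, v0, alpha=15.0, n_iter=100):
    """Refine an initial flow (u0, v0) between grayscale images I1, I2:
    warp I2 by the initial flow, estimate a small residual flow with classic
    single-level Horn-Schunck iterations, and add it to the initialization."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    H, W = I1.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Warp the second image towards the first using the initial flow.
    I2w = map_coordinates(I2, [ys + v0, xs + u0], order=1, mode='nearest')
    # Finite-difference derivatives averaged over both frames (Horn-Schunck stencils).
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    kt = 0.25 * np.ones((2, 2))
    Ix = convolve(I1, kx) + convolve(I2w, kx)
    Iy = convolve(I1, ky) + convolve(I2w, ky)
    It = convolve(I2w - I1, kt)
    # Weighted local average used in the Horn-Schunck fixed-point update.
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0.0, 1/6], [1/12, 1/6, 1/12]])
    du = np.zeros((H, W))
    dv = np.zeros((H, W))
    for _ in range(n_iter):
        du_avg = convolve(du, avg)
        dv_avg = convolve(dv, avg)
        common = (Ix * du_avg + Iy * dv_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        du = du_avg - Ix * common
        dv = dv_avg - Iy * common
    return u0 + du, v0 + dv
```

Because the interpolation already provides a good initialization, such a refinement can work at a single scale: the residual displacements it has to correct are small.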

Experimental results: datasets. MPI-Sintel [Butler et al. 2012]:
- sequences from a realistic animated movie
- large displacements (>20px for 17.5% of pixels)
- atmospheric effects and motion blur
(Figure: example frames from the Ambush_4 and Temple_3 sequences.)

Experimental results: datasets. KITTI [Geiger et al. 2013]:
- sequences captured from a driving platform
- large displacements (>20px for 16% of pixels)
- real-world lighting, repetitive textures (roads, trees…)

Results: with/without variational refinement.
(Figure: ground truth, dense interpolation before variational refinement (errors 5.9, 4.2) and EpicFlow after variational refinement (errors 5.7, 3.9), on frames temple_3 #42, Cave_2 #16 and Ambush_5 #22.)

(Figure: qualitative comparison with the state of the art on cave_2 #16, ambush_5 #22 and temple_3 #42: ground truth, EpicFlow (errors 5.7, 3.9), MDP-Flow (errors 38.9, 9.2), LDOF (errors 52.8, 10.5) and Classic+NL (errors 11.8, 11.6).)

Comparison to the state of the art. Average Endpoint Error (lower is better) and timings:

Method      Error on MPI-Sintel   Error on KITTI   Timings
EpicFlow    6.28                  3.8              16.4s
TF+OFM      6.73                  5.0              ~500s
DeepFlow    7.21                  5.8              19s
NLTGV-SC    8.75                  -                16s (GPU)

Sensitivity to edge detectors and matching algorithms. Evaluation with different:
- input matchings: Kd-trees assisted PatchMatch (KPM) [He and Sun, 2012] and DeepMatching (DM) [Weinzaepfel et al. 2013]
- edge detectors: SED [Dollar and Zitnick 2013], gPb [Arbelaez et al. 2011], Canny

Edges   Matching   MPI-Sintel (train)   KITTI (train)
SED     KPM        5.76                 11.3
SED     DM         3.68                 3.3

Edges                       Matching   MPI-Sintel (train)   KITTI (train)
gPb                         DM         4.16                 3.4
Canny                       DM         4.55                 3.3
none (Euclidean distance)   DM         4.61                 3.6

Comparison with coarse-to-fine. Experimental protocol:
- same input matching (DeepMatching)
- same input contours (for local smoothness)
- same training set

Average endpoint error:

Method           MPI-Sintel (train)   KITTI (train)   Time
Coarse-to-fine   4.095                4.422           25.0s
EpicFlow         3.686                3.334           16.4s

More detailed experiments in the paper.

Failure cases: missing matches, missing contours.

Conclusion: excellent results. Code is available online at http://lear.inrialpes.fr/software