Francisco Barranco, Cornelia Fermüller, Yiannis Aloimonos: Event-based contour motion estimation.

Presentation transcript:

Francisco Barranco, Cornelia Fermüller, Yiannis Aloimonos: Event-based contour motion estimation

Asynchronous Event-based Dynamic Visual Sensor [1]
- No blurring, no artifacts
- Accurate fast-motion estimation
- No occlusions
- Real-time performance
The DVS output is an asynchronous Address-Event Representation (AER) stream: at a point P(u,v) and time t, an event event(u,v,t) of value either +1 or -1 is fired if the change in the logarithm of the intensity is greater than a threshold T.
DAVIS/ApsDVS or ATIS sensors additionally:
- Provide dynamic + static scene information
- Provide absolute scene-reflectance values
[1] P. Lichtsteiner, C. Posch, and T. Delbruck, "A 128×128 120 dB 15 µs latency asynchronous temporal contrast vision sensor," IEEE J. Solid-State Circuits, 43(2), 566–576, 2008.
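The event-generation rule above can be sketched as a minimal model, assuming an event fires when the log-intensity change since the reference exceeds the contrast threshold T (the function name and threshold value are illustrative):

```python
import numpy as np

def dvs_events(log_I_prev, log_I_curr, threshold=0.15):
    """Emit +1/-1 events where the log-intensity change exceeds T.

    log_I_prev / log_I_curr: per-pixel log intensities at the reference
    and current times; threshold: contrast threshold T (illustrative).
    """
    diff = log_I_curr - log_I_prev
    events = np.zeros_like(diff, dtype=int)
    events[diff > threshold] = +1   # ON event: brightness increased
    events[diff < -threshold] = -1  # OFF event: brightness decreased
    return events
```

A real DVS applies this test independently and asynchronously at every pixel, so only changing pixels produce output.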

Motion pathway with the Dynamic Visual Sensor
Pipeline: early contour boundaries → rough motion estimation → early segmentation → refined motion estimation → 3D pose estimation
- Accurate motion estimation
- No additional assumptions
- Estimates only on the contours
- High temporal resolution

Problems of current approaches vs. the DVS camera solution:

| Problem with current approaches | DVS camera solution |
| --- | --- |
| Handling fast motion (multiresolution techniques; 3D motion from matching interest points, not from motion) | High temporal resolution |
| Depth discontinuities | Early extraction of contours |
| Separating different 3D motions | The high temporal resolution allows separating two superimposed rigid motions |
| Computational cost | Compute normal flow only when there is a change in time |
| Motion blur, light artifacts | High (logarithmic) dynamic range |

Contour motion estimation
Speed ∝ local bar width, with: the number of events at pixel p, a set of connected pixels, and the speed estimated for pixel p; a robust function is used for the sign estimation.
Figure: events collected with the DVS camera zooming in on a chessboard pattern. The first row shows the collected positive (black) and negative (red) events for row 70, and an example of the output of the DVS.
Fusing DVS and intensity data yields an accurate intensity reconstruction for every event.
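The speed-from-bar-width idea can be illustrated with a small sketch; the function names, and the use of a simple majority vote as the robust sign estimate, are assumptions for illustration, not the talk's exact formulation:

```python
def edge_speed(bar_width_px, t_leading, t_trailing):
    """Speed of a moving bar: the local bar width divided by the time the
    pattern takes to sweep past a pixel (width in px, times in seconds)."""
    dt = t_trailing - t_leading
    if dt <= 0:
        raise ValueError("trailing edge must arrive after the leading edge")
    return bar_width_px / dt  # pixels per second

def robust_sign(polarities):
    """Majority vote over event polarities of a set of connected pixels,
    a simple stand-in for the slide's robust sign function."""
    s = sum(polarities)
    return 1 if s > 0 else (-1 if s < 0 else 0)
```

With microsecond event timestamps, the interval between the leading and trailing edge events is resolved finely enough to recover high speeds without multiresolution processing.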

Contour motion estimation
Speed ∝ local bar width + a multi-temporal approach → early segmentation → refined motion estimation
Figure: events collected for the "Translation Tree" sequence. The first row shows the image and the accumulated positive events. The second and third rows show the positive events collected for row 80 of the image. The last row shows the speed estimated for pixel (80, 40) over different time intervals (over 3300 ms at a step size of 100 ms).

Problem I: Fast speed estimation
Handling fast motion with frame-based methods requires multiresolution techniques, which hurt time performance and computational complexity, and which are not very accurate because of:
- Scale-to-scale error propagation
- Outlier suppression
- Small objects moving fast
Solution: the high temporal resolution of the DVS.
[2] D. Sun, S. Roth, and M. Black, "A quantitative analysis of current practices in optical flow estimation and the principles behind them," IJCV, 106(2), 115–137, 2014 (Classic+NL-fast).

Problem II: Occlusions
Frame-based methods must detect and prevent occlusions, e.g. via reprojection of the estimated flow or via flow divergence; once flow is available, occluded regions combined with the smoothing constraint lead to error propagation. Solution: with the DVS, occlusions do not arise in the same way, since events are generated only at moving contours.
Table: occlusion results (AEPE rel (%) and density (%)) on the Satellite sequence [3] for the event-based method and the Classic+NL-fast [2] algorithm.
[3] B. McCane, K. Novins, D. Crannitch, and B. Galvin, "On benchmarking optical flow," Computer Vision and Image Understanding, 84(1), 126–143, 2001.
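For contrast, the flow-divergence occlusion test that frame-based methods rely on can be sketched as follows; the helper name and threshold are illustrative, not from the talk. Pixels where the flow field converges strongly (negative divergence) are flagged as likely occlusions:

```python
import numpy as np

def occlusion_mask(u, v, div_threshold=-0.5):
    """Flag likely occlusions in a dense flow field (u, v): where the
    divergence du/dx + dv/dy is strongly negative, neighboring pixels
    map onto the same target, i.e. something is being covered up."""
    du_dx = np.gradient(u, axis=1)
    dv_dy = np.gradient(v, axis=0)
    divergence = du_dx + dv_dy
    return divergence < div_threshold
```

A regularized flow estimator then has to exclude these masked regions from its smoothness term, or their errors spread into the surrounding flow.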

Problem III: Performance
Table: frame rates on the Satellite sequence for our event-based method and for [2] (0.07 fps), with ~15,000 events processed. Real-time performance is achieved with the DVS.
Conventional sophisticated methods rely on:
- Texture decomposition
- Multiresolution schemes
- Non-convex weighting functions
- Spline-based bicubic interpolation
- Global smoothing terms
- Non-local regularization terms
These are computationally very expensive; a single estimate may take minutes. The current framework seems exhausted: without an early motion-boundary segmentation, obtaining more accurate methods is very hard.

Event-based contour motion results
Table: per-sequence AEPE rel (%) for our method, AEPE rel (%) for [2], and density (%), on the Translating Tree, Diverging Tree, Yosemite, RubberWhale, Dimetrodon, and Satellite sequences.
[4] S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. J. Black, and R. Szeliski, "A database and evaluation methodology for optical flow," Int. J. Comput. Vision, 92(1), 1–31, 2011.
Our contour motion estimation is more accurate than [2], an algorithm that ranked among the top entries in the Middlebury benchmark [4] as of December 2013!
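The AEPE metric behind these tables can be computed as below; the relative variant normalized by the mean ground-truth flow magnitude is one plausible reading of the slide's "AEPE rel (%)", not a confirmed definition:

```python
import numpy as np

def aepe(u_est, v_est, u_gt, v_gt):
    """Average endpoint error between estimated and ground-truth flow:
    the mean Euclidean distance between the two flow vectors per pixel."""
    return float(np.mean(np.hypot(u_est - u_gt, v_est - v_gt)))

def aepe_rel_percent(u_est, v_est, u_gt, v_gt):
    """Relative AEPE in percent, normalized by the mean ground-truth
    flow magnitude (an assumed normalization, for illustration)."""
    gt_mag = np.mean(np.hypot(u_gt, v_gt))
    return 100.0 * aepe(u_est, v_est, u_gt, v_gt) / gt_mag
```

For event-based output, the error is evaluated only where estimates exist, which is why the tables report density (%) alongside the error.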

Take-home message
The current frame-based framework seems exhausted. The DVS offers:
- Asynchronous event-based data
- A multi-temporal, biologically inspired approach
- Accurate motion estimation
- Real-time performance with fewer resources
- No occlusions
Next step: fusion with current frame-based techniques.