Using Photographs to Enhance Videos of a Static Scene
Pravin Bhat 1, C. Lawrence Zitnick 2, Noah Snavely 1, Aseem Agarwala 3, Maneesh Agrawala 4, Michael Cohen 1,2, Brian Curless 1, Sing Bing Kang 2
EGSR 2007
1 University of Washington, 2 Microsoft Research Redmond, 3 Adobe Systems, 4 University of California

An overview of Spacetime Fusion

Motivation
Low quality input video
Reconstructed video
– Reconstructed from photos
– Good spatial reconstruction
– Bad temporal reconstruction
[Videos: Input Video, Reconstructed Video]

Motivation
Spacetime Fusion result
– Spatial properties of the reconstruction
– Temporal properties of the input video
[Videos: Input Video, Spacetime Fusion Result]

Spacetime Fusion
Define a 3D gradient field
– Spatial gradients from the reconstruction
– Temporal gradients from the input video
– Key idea: temporal gradients are defined between motion-compensated temporal neighbors. With motion vectors (u, v) from video frame t - 1 to frame t:
G_t(x, y, t) = V(x, y, t) - V(x - u, y - v, t - 1)
rather than the naive G_t(x, y, t) = V(x, y, t) - V(x, y, t - 1). This increases compatibility between the temporal gradients and the spatial gradients.
Integrate the 3D gradient field
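The gradient field described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function name, the array shapes, and the nearest-neighbor warp of the previous frame (instead of subpixel interpolation) are all assumptions.

```python
import numpy as np

def build_gradient_field(R, V, flow):
    """Gradient field for spacetime fusion (illustrative sketch).

    R    : (T, H, W) reconstructed frames  -> source of spatial gradients
    V    : (T, H, W) input video frames    -> source of temporal gradients
    flow : (T, H, W, 2) motion vectors (u, v); pixel (x, y) in frame t
           corresponds to (x - u, y - v) in frame t - 1
    """
    T, H, W = V.shape
    # Spatial gradients come from the reconstruction (good spatially).
    Gx = np.zeros_like(R)
    Gy = np.zeros_like(R)
    Gx[:, :, 1:] = R[:, :, 1:] - R[:, :, :-1]
    Gy[:, 1:, :] = R[:, 1:, :] - R[:, :-1, :]
    # Temporal gradients come from the input video (good temporally),
    # taken between motion-compensated neighbors:
    #   G_t(x, y, t) = V(x, y, t) - V(x - u, y - v, t - 1)
    Gt = np.zeros_like(V)
    ys, xs = np.mgrid[0:H, 0:W]
    for t in range(1, T):
        u, v = flow[t, ..., 0], flow[t, ..., 1]
        # Nearest-neighbor warp of frame t - 1 (an assumption; subpixel
        # interpolation would be more faithful).
        xp = np.clip(np.rint(xs - u).astype(int), 0, W - 1)
        yp = np.clip(np.rint(ys - v).astype(int), 0, H - 1)
        Gt[t] = V[t] - V[t - 1][yp, xp]
    return Gx, Gy, Gt
```

With zero flow this reduces to the naive temporal difference V(t) - V(t - 1); nonzero (u, v) shifts the previous-frame sample along the motion path.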

Integrating the gradient field
Solve the linear system Av = b, with one constraint per gradient:
v(x, y, t) - v(x - 1, y, t) = G_x(x, y, t)
v(x, y, t) - v(x, y - 1, t) = G_y(x, y, t)
v(x, y, t) - v(x - u, y - v, t - 1) = G_t(x, y, t)
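One way to assemble and solve this least-squares system is sketched below with SciPy sparse matrices. The function name, the dense triplet-assembly loop, and the single anchor constraint (added because gradient constraints determine v only up to a global constant) are illustrative assumptions, not the paper's solver.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def spacetime_fuse(Gx, Gy, Gt, flow, anchor_value=0.0):
    """Least-squares integration of the 3D gradient field (sketch).

    One unknown per pixel v(x, y, t); each gradient contributes a row
    of A v = b of the form v_i - v_j = g.
    """
    T, H, W = Gx.shape
    n = T * H * W
    idx = lambda t, y, x: (t * H + y) * W + x
    rows, cols, vals, rhs = [], [], [], []

    def constraint(i, j, g):
        r = len(rhs)
        rows.extend([r, r]); cols.extend([i, j]); vals.extend([1.0, -1.0])
        rhs.append(g)

    for t in range(T):
        for y in range(H):
            for x in range(W):
                if x > 0:  # v(x,y,t) - v(x-1,y,t) = G_x
                    constraint(idx(t, y, x), idx(t, y, x - 1), Gx[t, y, x])
                if y > 0:  # v(x,y,t) - v(x,y-1,t) = G_y
                    constraint(idx(t, y, x), idx(t, y - 1, x), Gy[t, y, x])
                if t > 0:  # v(x,y,t) - v(x-u,y-v,t-1) = G_t
                    u, v = flow[t, y, x]
                    xp = int(np.clip(np.rint(x - u), 0, W - 1))
                    yp = int(np.clip(np.rint(y - v), 0, H - 1))
                    constraint(idx(t, y, x), idx(t - 1, yp, xp), Gt[t, y, x])

    # Gradient constraints fix v only up to a constant; anchor one pixel.
    r = len(rhs)
    rows.append(r); cols.append(0); vals.append(1.0)
    rhs.append(anchor_value)

    A = coo_matrix((vals, (rows, cols)), shape=(len(rhs), n))
    sol = lsqr(A, np.asarray(rhs))[0]
    return sol.reshape(T, H, W)
```

A practical implementation would vectorize the assembly and use a faster solver (e.g. conjugate gradients on the normal equations); the point here is only the structure of A: two nonzeros per gradient row, with the temporal row linking each pixel to its motion-compensated neighbor in the previous frame.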

Applications

Enhanced Exposure

Edit Propagation
[Video frames: Input Video, User Edits propagating across frames, Edited Video]

Super-Resolution

Conclusion
Spacetime fusion
– Combines spatial and temporal gradients from two different sources
– Requires motion vectors for the temporal source: stereo (static scenes) or optical flow (dynamic scenes)
– Major applications: enforcing temporal coherence; transferring lighting information