

MIT Media Lab | Camera Culture | University of Waikato
Coded Time of Flight Cameras: Sparse Deconvolution to Resolve Multipath Interference
Achuta Kadambi, Refael Whyte, Ayush Bhandari, Lee Streeter, Christopher Barsi, Adrian Dorrington, Ramesh Raskar
in ACM Transactions on Graphics 2013 (SIGGRAPH Asia)

How can we create new time of flight cameras that can range translucent objects, look through diffusers, resolve multipath artifacts, and create time-profile movies?

Conventional Time of Flight Cameras
Time of flight (ToF) cameras use the Amplitude Modulated Continuous Wave (AMCW) principle to obtain fast, real-time range maps of a scene. These cameras are increasingly popular (the new Kinect 2 is a ToF camera), and application areas include gesture recognition, robotic navigation, and more. A ToF camera sends an optical code and measures the shift in the code by sampling the cross-correlation function. Conventional ToF cameras use square or sinusoidal codes.

Forward Model: Convolution with the Environment
The cross-correlation function is convolved with the environment response (here denoted as a delta function). For a simple object, such as a wall, the environment response can be modelled as a single Dirac delta; a wall further away would have a shifted Dirac.

Forward Model: Convolution with a Multi-Path Environment
Consider a scene with mixed pixels, e.g., a translucent sheet in front of a wall. The resulting environment response is no longer 1-sparse; in this figure it is modelled as 2-sparse. When convolved with a sinusoid (top row), the resulting measurement y is another sinusoid, which leads to a unicity problem. Perhaps the solution lies in creating a custom correlation function (bottom row). Multi-path scenarios occur at any edge.

Environment profiles are often sparse. At left is a one-sparse environment function; at middle, the environment response resulting from a transparency; at right, a non-sparse environment profile. To recover these, a Tikhonov deconvolution is used.

In equation form, we express the measurement as the convolution of the correlation waveform with the environment response. Because we have expressed this as a linear inverse problem, we have access to techniques for solving sparse linear inverse problems, and the model can be written in a linear-algebra framework with a notion of true sparsity defined for this system. To solve this problem we consider greedy approaches, such as Orthogonal Matching Pursuit (OMP). We make two modifications to classic OMP that are tailored to our problem:

Modification 1: Nonnegativity.
a) Consider only positive projections when searching for the next atom.
b) When updating the residual, use a solver to impose positivity on the coefficients (we use CVX).

Modification 2: Proximity Constraints.

Hardware Prototype
An FPGA is used for readout and for controlling the modulation frequency. The FPGA is interfaced with a PMD sensor, which allows external control of the modulation signal. Finally, laser-diode illumination is synced to the illumination control signal from the FPGA.

Comparing Different Codes
We compare different codes. The codes sent to the FPGA are shown in the blue column. Good codes for deconvolution have a broadband spectrum (green). The autocorrelation of the blue codes is shown in red, and the measured autocorrelation function (gold) is a low-pass version of the red curves; the low-pass operator represents the smoothing of the correlation waveform due to the rise/fall time of the electronics. The code we use is the m-sequence, which has strong autocorrelation properties. The code that conventional cameras use is the square code, which approximates a sinusoid when smoothed.

Comparing Different Sparse Programs
We compare different programs for deconvolving the measurement (upper left) into its constituent Diracs. A naive pseudo-inverse results in a poor solution. Tikhonov regularization is better, but lacks the sparsity of the original signal. The LASSO solution is decent, but has many spurious entries. Finally, the modified orthogonal matching pursuit approach provides a faithful reconstruction (bottom right).

Application 1: Light Sweep Imaging
Here light is visualized sweeping across a checkered wall at labelled time slots; colors represent different time slots. Since we know the size of the checkers, we can estimate the time resolution.

Application 2: Looking Through a Diffuser
(left) The measured amplitude image. (right) The component amplitude.

Application 3: Ranging of Translucent Objects
(left) Range map taken by a conventional time of flight camera. (middle) We can "refocus" on the foreground depth, or (right) the background depth.

Application 4: Correcting Multi-Path Artifacts
A conventional time of flight camera measures incorrect phase depths of a scene with edges (red). Our correction is able to obtain the correct depths (green).

Related Work:
Velten, Andreas, et al. "Femto-photography: Capturing and visualizing the propagation of light." ACM Trans. Graph. 32 (2013).
Heide, Felix, et al. "Low-budget transient imaging using photonic mixer devices." Technical paper, SIGGRAPH 2013 (2013).
Raskar, Ramesh, Amit Agrawal, and Jack Tumblin. "Coded exposure photography: motion deblurring using fluttered shutter." ACM Transactions on Graphics (TOG), Vol. 25, No. 3. ACM.
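The forward model described above can be sketched numerically: the camera samples the cross-correlation of the emitted code with itself, circularly convolved with a sparse environment response. This is an illustrative NumPy sketch with made-up names and values, not the authors' code.

```python
import numpy as np

def autocorrelation(code):
    """Circular autocorrelation of a modulation code, via the FFT."""
    spectrum = np.fft.fft(code)
    return np.real(np.fft.ifft(spectrum * np.conj(spectrum))) / len(code)

def measure(code, environment):
    """Forward model: correlation waveform circularly convolved with the environment."""
    corr = autocorrelation(code)
    return np.real(np.fft.ifft(np.fft.fft(corr) * np.fft.fft(environment)))

n = 64
code = np.random.default_rng(0).integers(0, 2, n).astype(float)

# A 2-sparse environment response: a translucent sheet in front of a wall.
environment = np.zeros(n)
environment[10] = 1.0  # return from the sheet
environment[25] = 0.6  # weaker return from the wall behind it

y = measure(code, environment)  # one period of the sampled correlation function
```

For a 1-sparse environment (a single wall at zero depth), the measurement reduces to the code's autocorrelation itself, which is what a conventional ToF camera samples.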
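The m-sequence used for the modulation code can be generated with a linear feedback shift register. A minimal sketch follows; the 6-bit register and tap positions are assumptions for illustration (the poster does not state the sequence length or taps it uses). The classical property being exploited is that a length-N m-sequence in +/-1 form has circular autocorrelation N at zero lag and -1 at every other lag.

```python
import numpy as np

def m_sequence(taps=(6, 5), nbits=6):
    """Maximal-length sequence from a Fibonacci LFSR; returns +/-1 values.

    Tap positions are 1-indexed from the input end of the register.
    """
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):  # period of a maximal-length sequence
        seq.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(seq) * 2.0 - 1.0  # map {0,1} -> {-1,+1}

code = m_sequence()
n = len(code)  # 63 chips

# Circular autocorrelation: a single strong peak, flat everywhere else.
corr = np.array([np.dot(code, np.roll(code, k)) for k in range(n)])
```

This near-impulse autocorrelation is why the m-sequence is a good code for deconvolution, in contrast to the square code, whose smoothed autocorrelation approximates a sinusoid.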
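Tikhonov deconvolution, mentioned above as one way to recover environment profiles, has a simple closed form in the Fourier domain: E(w) = conj(H(w)) Y(w) / (|H(w)|^2 + lambda), where H is the spectrum of the correlation waveform. A sketch on synthetic data (the Gaussian correlation lobe and all names are illustrative assumptions):

```python
import numpy as np

def tikhonov_deconvolve(y, h, lam=1e-2):
    """Closed-form Tikhonov-regularized deconvolution in the Fourier domain."""
    H = np.fft.fft(h)
    Y = np.fft.fft(y)
    return np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

n = 128
h = np.exp(-0.5 * ((np.arange(n) - 8) / 3.0) ** 2)  # smooth correlation lobe

# 2-sparse environment: translucent sheet (strong) plus wall behind it (weak).
e_true = np.zeros(n)
e_true[[20, 45]] = [1.0, 0.6]

y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(e_true)))  # noiseless measurement
e_hat = tikhonov_deconvolve(y, h, lam=1e-6)
```

As the poster notes, the recovery is not sparse: the Diracs come back as broadened lobes, which motivates the sparse programs (LASSO, modified OMP) compared above.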
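The greedy recovery can be sketched as a toy modified OMP. This sketch implements Modification 1a (only positive projections are considered when selecting atoms); the paper additionally imposes positivity on the coefficients with a constrained solver (CVX) and adds proximity constraints (Modification 2), both omitted here for brevity. The dictionary, shifts, and amplitudes below are illustrative assumptions.

```python
import numpy as np

def omp_nonneg(A, y, k):
    """Greedy recovery of a k-sparse nonnegative x from y = A x (toy OMP)."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        projections = A.T @ residual        # correlate every atom with the residual
        projections[support] = -np.inf      # never reselect an atom
        atom = int(np.argmax(projections))  # Modification 1a: largest positive projection
        if projections[atom] <= 0:
            break                           # no positive atom left
        support.append(atom)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x

n = 64
corr = np.exp(-0.5 * ((np.arange(n) - 5) / 2.0) ** 2)       # correlation lobe
A = np.stack([np.roll(corr, j) for j in range(n)], axis=1)  # circulant dictionary of shifts
A /= np.linalg.norm(A, axis=0)                              # unit-norm atoms

x_true = np.zeros(n)
x_true[[12, 40]] = [1.0, 0.5]  # two well-separated returns
y = A @ x_true

x_hat = omp_nonneg(A, y, k=2)
```

On this well-separated synthetic example the unconstrained least-squares update already yields positive coefficients; in the mixed-pixel regimes targeted by the poster, the constrained coefficient update and proximity constraints matter.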