Slides prepared on 09 September, 2018


Exploiting occlusion to image hidden scenes
Christos Thrampoulidis, joint work with G. Shulkind, F. Xu, W. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell
Reference: "Exploiting occlusion in non-line-of-sight active imaging", C. Thrampoulidis, G. Shulkind, F. Xu, W. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, IEEE Transactions on Computational Imaging, 2018.

Seeing around the corner
Conventional imaging: a lens (physically) maps each scene element x0,i to a distinct pixel value yi on the sensor.
Seeing around the corner: there is no direct line of sight to the hidden scene; light reaches the sensor only after scattering off a visible (diffuse) wall.
Applications: defense, search & rescue, robotic vision, autonomous vehicles, …

Challenges
Unlike a lens, the visible diffuse wall scatters light from the entire hidden scene onto every sensor pixel. Two challenges follow:
1. measurements are linear but uninformative (every pixel sees nearly the same mixture of the hidden scene);
2. the signal is very weak.
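The "uninformative" part can be made concrete with a toy sketch: without a lens, each sensed wall point integrates light from the whole hidden scene with smooth weights, so the rows of the forward matrix are nearly identical and the linear system is severely ill-conditioned. The kernel below is purely illustrative, not the paper's radiometric model.

```python
import numpy as np

# Toy 1-D model of lensless imaging off a diffuse wall (illustrative only).
# n hidden scene elements, n observed wall points; each wall point mixes
# light from all scene elements with a smooth falloff weight.
rng = np.random.default_rng(0)
n = 32
scene = np.linspace(-1, 1, n)   # hidden scene positions
wall = np.linspace(-1, 1, n)    # observed wall-patch positions

# Smooth, inverse-square-like mixing weights (assumed form, for illustration).
A = 1.0 / (1.0 + (scene[None, :] - wall[:, None]) ** 2)

x = rng.random(n)               # unknown hidden reflectivity pattern
y = A @ x                       # measurements: linear, but nearly flat

print("relative spread of measurements:", np.ptp(y) / y.mean())
print("condition number of A:", np.linalg.cond(A))
```

The smooth kernel makes neighboring rows of `A` almost identical, so inverting `y = A x` amplifies noise enormously.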

Active NLOS imaging
Challenge: weak signal. Solution: probe the scene with an active light source (laser).
Challenge: uninformative measurements. Previous approach: ultrafast time-of-flight measurements [Velten et al. '12; Buttafava et al. '15; Klein et al. '16, …].

Active NLOS imaging
Challenge: weak signal. Solution: probe the scene with an active light source (laser).
Challenge: uninformative measurements. Previous approach: ultrafast time-of-flight information.
Drawbacks: expensive and sensitive camera; extremely fast laser; slow (very long acquisition times). The curse of light speed!
[Velten et al. '12; Buttafava et al. '15; Klein et al. '16, …]

Opportunistic NLoS imaging
Question: can we see into a hidden room without ultrafast ToF measurements? What additional structural features can we exploit?
Our idea: opportunistically exploit visible occluders between the hidden scene and the visible diffuse wall.

Opportunistic NLoS imaging
Question: can we see into a hidden room without ultrafast ToF measurements?
Our idea: opportunistically exploit visible occluders.

Opportunistic NLoS imaging
Question: can we see into a hidden room without ultrafast ToF measurements?
Our idea: opportunistically exploit visible occluders. Occlusions create diversity: the shadows they cast zero out entries of the measurement matrix, making otherwise near-identical measurements distinguishable.
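A minimal numerical illustration of this diversity, under assumed toy geometry rather than the paper's setup: take the same smooth, ill-conditioned lensless mixing matrix and zero out entries according to a hypothetical binary visibility mask standing in for occluder shadows. The condition number drops by many orders of magnitude.

```python
import numpy as np

# Toy sketch (assumed model, not the paper's): an occluder casts shadows
# that zero out entries of the forward matrix A, differentiating its rows.
rng = np.random.default_rng(0)
n = 32
scene = np.linspace(-1, 1, n)
wall = np.linspace(-1, 1, n)

# Smooth lensless mixing: nearly identical rows, severely ill-conditioned.
A_open = 1.0 / (1.0 + (scene[None, :] - wall[:, None]) ** 2)

# Hypothetical shadow pattern: each scene-to-wall path is either blocked (0)
# or clear (1).  A random mask stands in for the real occluder geometry.
mask = (rng.random((n, n)) > 0.5).astype(float)
A_occ = A_open * mask

print("cond without occluder:", np.linalg.cond(A_open))
print("cond with occluder:   ", np.linalg.cond(A_occ))
```

The zeros break the smoothness of the rows, so the inverse problem becomes far better posed without any time-of-flight information.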

NLoS in action
From raw photon counts on the visible wall, together with the known occluder, the hidden pattern is recovered computationally.
Computational method: binomial likelihood (photon-efficient), exploiting image structure.
[Thrampoulidis, Shulkind, Xu, Freeman, Shapiro, Torralba, Wong, Wornell, IEEE Trans. Comp. Imaging, 2018]
[Xu*, Thrampoulidis*, Shulkind*, Shapiro, Torralba, Wong, Wornell, Optics Express, 2018]
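The binomial-likelihood idea can be sketched as follows, as a hedged toy reconstruction with assumed parameters rather than the paper's algorithm: a single-photon detector fires at most once per laser pulse, so over m pulses the count at pixel i is Binomial(m, p_i) with p_i = 1 − exp(−(Ax)_i), and x is recovered by minimizing the negative log-likelihood under a nonnegativity constraint.

```python
import numpy as np
from scipy.optimize import minimize

# Toy photon-efficient reconstruction with a binomial likelihood (assumed
# model and sizes, for illustration only).
rng = np.random.default_rng(1)
n, m = 16, 500
# Toy occluded forward matrix: random weights with random shadow zeros.
A = rng.random((n, n)) * (rng.random((n, n)) > 0.5)
x_true = rng.random(n) * 0.2

# Each pixel fires with probability p_i = 1 - exp(-(A x)_i) per pulse;
# over m pulses the photon count is binomial.
p_true = 1.0 - np.exp(-(A @ x_true))
k = rng.binomial(m, p_true)

def nll(x):
    """Negative log-likelihood of binomial photon counts k given scene x."""
    p = np.clip(1.0 - np.exp(-(A @ x)), 1e-9, 1 - 1e-9)
    return -(k * np.log(p) + (m - k) * np.log(1 - p)).sum()

res = minimize(nll, x0=np.full(n, 0.1), bounds=[(0, None)] * n)
x_hat = res.x
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the paper's setting this likelihood is what makes the method photon-efficient; a Gaussian noise model would mis-weight the low-count pixels.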

More examples

Modeling gains
Binomial likelihood vs. Gaussian likelihood:
Binomial likelihood: ~70 photons per pixel (PPP), 2.5 min acquisition.
Gaussian likelihood: ~1100 PPP, 40 min acquisition.