Applications Presented by: Michal Kamara

Outline Motivation - shadow removal from multi-projector displays Dynamic shadow elimination for multi-projector displays Dynamic shadow removal from front-projection displays Automatic generation of consistent shadows for Augmented Reality

Motivation The use of large-scale front-projection displays has emerged in recent years: –Immersive teleconferencing –Virtual reality environments –Augmented reality One fundamental problem: shadows easily remove the user from the visually immersive experience.

What can be done? Back-projection –Problems: space considerations, intensity and sharpness attenuation, and mechanical complexity. Constrain user movement –Some interactive display environments adaptively render a model based on the user's position. –May prevent the user from viewing particular parts of the model. Or…

Dynamic Shadow Elimination For Multi-Projector Displays Rahul Sukthankar, Tat-Jen Cham, Gita Sukthankar (2001)

Outline System Overview Automatic Alignment Reference Images Shadow Detection Shadow Elimination Iterative feedback results

System Overview

System Overview – cont'd The system must accurately align the projected images on the display surface. Each occluder can create multiple shadows on the display surface. The system must precisely adjust projector output to compensate for each occlusion. Shadow boundaries must be treated carefully.

Algorithm's steps (flow diagram): the occluded display is compared against the reference image to produce an alpha mask in the camera frame; the camera-screen homography warps the mask to the screen frame, where it is applied to the raw slide to give a shadow-compensated slide; the screen-projector homographies (projector 1 … projector N) then warp the compensated slide into each projector's frame, yielding the multi-projector display after shadow elimination.

Automatic Alignment We need to find a transform T such that p' = T p for all corresponding points p and p' in the two coordinate systems. Because T is a planar projective transform, it can be determined up to an unknown scale factor by 4 pairs of matching points.
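As a concrete sketch, such a homography can be estimated from four (or more) point correspondences with the standard DLT algorithm; the function names below are illustrative, not from the paper:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 planar homography T with dst ~ T @ src from
    n >= 4 point correspondences, via the DLT algorithm.
    src, dst: (n, 2) arrays of matching points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the system A h = 0.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # h is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    T = Vt[-1].reshape(3, 3)
    return T / T[2, 2]   # fix the free scale factor

def apply_homography(T, pts):
    """Map (n, 2) points through T using homogeneous coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return p[:, :2] / p[:, 2:3]
```

With exactly four non-degenerate correspondences the DLT system has a one-dimensional null space, so T is recovered exactly up to scale.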

Automatic Alignment – cont'd The camera-projector homography can be determined by: –iteratively projecting a random point from the projector onto the display surface and observing that point in the camera; or –projecting a rectangle from the projector, where the coordinates of the rectangle's corners in the projector are known and can be located in the camera frame using image processing techniques. The display area is either automatically determined by the camera, or interactively specified by the user. The camera-screen homography can be determined from the corners of the display surface.

Automatic Alignment – cont'd The projector-screen calibration is important to avoid distortions and double images on the display surface that may be caused by the off-center projections.

Reference Images Creating the reference images is done during the initialization phase, when the scene is occluder-free. For each slide the system projects, several camera images are captured and pixel-wise averaged to create a reference image for that slide.

Shadow Detection During operation, the camera acquires a current image which is compared to the reference image. A pixel-wise difference between the reference and current camera images is used to detect shadows. A median filter (5×5) is applied to the difference image to reduce the effects of camera noise and minor calibration errors.
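A minimal sketch of this detection step; the slides specify only the 5×5 median filter, so the detection threshold of 30 intensity levels is an illustrative assumption:

```python
import numpy as np

def shadow_difference(reference, current, ksize=5, threshold=30):
    """Pixel-wise difference between reference and current camera images,
    followed by a ksize x ksize median filter to suppress camera noise
    and minor calibration errors. Images are 2D grayscale arrays.
    Returns a boolean shadow map: True where the scene is darker than
    the reference by more than `threshold` (an assumed value)."""
    diff = reference.astype(int) - current.astype(int)
    pad = ksize // 2
    padded = np.pad(diff, pad, mode="edge")
    filtered = np.empty_like(diff)
    h, w = diff.shape
    for i in range(h):
        for j in range(w):
            # Median over the ksize x ksize neighborhood of each pixel.
            filtered[i, j] = np.median(padded[i:i + ksize, j:j + ksize])
    return filtered > threshold
```

An isolated dark pixel (sensor noise) is removed by the median, while a contiguous shadow region survives it.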

Shadow Elimination From the difference image a mask, called the alpha mask, is constructed: α_{t+1}(x,y) = clip( α_t(x,y) + γ [ I_ref(x,y) − I_t(x,y) ], 0, 1 ), where I_t is the camera image at time t, I_ref is the reference image, and γ is a system parameter, set to 0.25, to avoid rapid fluctuations. Note that there is only one alpha mask for all projectors.
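A sketch of one alpha-mask update. The update equation is garbled in the transcript, so the form below — raise alpha in proportion to how much darker the current image is than the reference, damped by γ = 0.25 — is a reconstruction consistent with the surrounding slides, not the paper's verbatim formula:

```python
import numpy as np

def update_alpha_mask(alpha, current, reference, gamma=0.25):
    """One feedback update of the single alpha mask shared by all
    projectors: where the current camera image is darker than the
    reference, alpha is raised so the projectors add light there.
    gamma damps rapid fluctuations; the /255 normalization of 8-bit
    intensities is an implementation assumption."""
    delta = reference.astype(float) - current.astype(float)
    alpha = alpha + gamma * delta / 255.0
    return np.clip(alpha, 0.0, 1.0)   # alpha stays a valid blend weight
```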

Shadow Elimination – cont'd The alpha mask is computed in the camera frame and hence must be transformed into the screen frame. Well, we know how to do that… Applying the alpha mask to the current slide is done by replacing the alpha channel of the slide image. What channel? An alpha channel is a fourth channel (alongside R, G, B) that may be added to an image; it describes the weight of each pixel when the image is composited over another image.

Shadow Elimination – cont'd After applying the alpha mask to the screen slide, it is transformed for each projector and… displayed.

Iterative Feedback Since there is no good photometric model of the environment, there is no precise prediction of how much light is needed to remove the shadow. That is why the iterative feedback loop is used: the system keeps adding light to shadowed regions until they appear as in the reference image. Surprisingly, this creates robustness: suppose one of the projectors fails; the alpha mask will simply increase uniformly. The main drawback is time – shadows are eliminated in approximately 3 iterations.
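The feedback behavior can be illustrated with a toy simulation. The photometric model `reference * occlusion * (1 + alpha)` is a crude assumption made only so the loop has something to converge on; it is not the paper's model:

```python
import numpy as np

def run_feedback_loop(reference, occlusion, gamma=0.25, steps=20):
    """Toy simulation of the iterative feedback loop: the camera 'sees'
    reference * occlusion * (1 + alpha) (an assumed photometric model);
    each iteration the alpha mask is raised where the display is still
    darker than the reference. Returns the SSD error to the reference
    at each iteration."""
    alpha = np.zeros_like(reference, dtype=float)
    errors = []
    for _ in range(steps):
        observed = np.clip(reference * occlusion * (1.0 + alpha), 0.0, 255.0)
        errors.append(float(np.sum((reference - observed) ** 2)))
        # Add light where the display is darker than the reference.
        alpha = np.clip(alpha + gamma * (reference - observed) / 255.0, 0.0, 1.0)
    return errors
```

The SSD error decays geometrically: each iteration removes a fixed fraction of the remaining shadow deficit.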

Results

Results – cont'd To examine image quality over the shadow removal process, the SSD error of grayscale intensities was calculated against the reference image. As expected, the hard shadow from the single projector is the major source of error.

Results – cont'd (error-vs-frame plot): the occluder enters at t=4 and leaves at t=11; the remaining low errors are attributed to the "halo" effect.

Dynamic Shadow Removal from Front-Projection Displays Christopher Jaynes, Stephen Webb, R. Matt Steele, Michael Brown, W. Brent Seales (2001)

Outline System Overview Requirements Calibration –Geometric Calibration –Color Calibration Creating an expected image Alpha mask generation Results Main drawbacks

System Overview Very similar to the previous system, with one main difference: –The expected image is created during operation from the projector frame buffers, using the calibration. This difference requires a new type of calibration: color calibration.

Requirements Screen points are illuminated by more than 1 projector. At least 1 camera is able to observe the screen surface at all times.

Calibration Critical both for shadow detection and removal. A two-phase process, performed prior to use of the system: –Geometric Calibration –Color Calibration

Geometric Calibration Very similar to the previous algorithm, only now the calibration is directly between camera and projector. Given a camera and projector pair, calibration determines the transform from pixels in the camera plane to their corresponding positions in the projector's frame buffer.

Geometric Calibration – cont'd Reminder: we need to find A such that p_proj = A p_cam for all points p_cam in the camera and corresponding points p_proj in the projector. Because A is a planar projective transform, it can be determined up to an unknown scale factor by 4 pairs of matching points. We can find such points by iteratively projecting a random point from the projector onto the display surface and observing that point in the camera.

Geometric Calibration – cont'd The accuracy of A can be measured as the mean reprojection error over held-out matching pairs. In this study, 10 matching pairs were used for calculating A and 50 points for calculating the calibration error. To improve results, a Monte Carlo technique was used.
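One natural reading of this accuracy measure, plus the Monte Carlo selection, is sketched below. The slide does not show the exact formula, so the mean reprojection error and the best-of-candidates selection are assumptions:

```python
import numpy as np

def calibration_error(A, cam_pts, proj_pts):
    """Mean reprojection error of homography A over held-out matches:
    the average distance between the observed projector points and the
    points predicted by mapping the camera points through A."""
    p = np.hstack([cam_pts, np.ones((len(cam_pts), 1))]) @ A.T
    mapped = p[:, :2] / p[:, 2:3]
    return float(np.mean(np.linalg.norm(proj_pts - mapped, axis=1)))

def best_homography(candidates, cam_pts, proj_pts):
    """Monte Carlo model selection: among candidate homographies (each
    fit from a random subset of the matched pairs), keep the one with
    the lowest error on the held-out validation points."""
    return min(candidates, key=lambda A: calibration_error(A, cam_pts, proj_pts))
```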

Color Calibration A given camera C observes the display surface while uniform color images of increasing intensity are iteratively projected from projector P. For each projected color image, the mean color intensity is computed over the corresponding observed image. This is computed for each channel separately, and the mean value over 10 trials is computed for each color channel. This calibration does not account for color differences between the projectors.

Measured transfer function for each color channel

Color Calibration – cont'd The transfer function for each color channel C is a parametric curve whose four parameters are fit to the measured data points using Levenberg-Marquardt nonlinear optimization [et al 1998]. These color transfer functions provide a straightforward way to predict how a color in projector space will appear in the camera image.
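Since the exact four-parameter curve is not reproduced in the transcript, the sketch below fits a simpler gamma model (cam = a·(proj/255)^g + b) by grid search over g and linear least squares for a and b, instead of Levenberg-Marquardt; it is a stand-in for the idea, not the paper's fit:

```python
import numpy as np

def fit_transfer_function(proj_levels, cam_means):
    """Fit a per-channel transfer function mapping projector intensity
    to observed camera intensity, using an assumed gamma model
    cam = a * (proj/255)**g + b. g is searched on a grid; for each g,
    a and b are solved by linear least squares."""
    x = np.asarray(proj_levels, dtype=float) / 255.0
    y = np.asarray(cam_means, dtype=float)
    best = None
    for g in np.linspace(0.2, 4.0, 200):
        X = np.column_stack([x ** g, np.ones_like(x)])
        sol, *_ = np.linalg.lstsq(X, y, rcond=None)
        err = float(np.sum((X @ sol - y) ** 2))
        if best is None or err < best[0]:
            best = (err, sol[0], sol[1], g)
    _, a, b, g = best
    return a, b, g
```

Given the fitted (a, b, g), predicting how a projector color will appear in the camera is a single evaluation of the curve.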

Color correction results: observed image vs. predicted image, without and with color correction.

Creating an expected image In a dynamic display the imagery may change in an unpredictable way (user movement, simulations, video data). The expected image must account for the changing display. The expected image is the basis for subsequent modification of projector frame buffer pixels, so we want it to be as accurate as possible.

Creating an expected image – cont'd An expected image is recovered by: –Warping all projector pixels into the camera frame (geometric calibration); for higher accuracy, a supersampling technique is used. –Applying color correction (color calibration) to the geometrically warped image.

Example: camera view vs. predicted image, before color correction.

Alpha mask generation The expected image is compared to the captured imagery by subtracting color components, which yields two delta images. Each delta image is filtered (3×3 median) to remove the effect of sensor noise. All of the above happens in the camera coordinate frame. Using the camera-projector homography, the delta images are then warped to the reference frame of each projector for correction.

Alpha mask generation – cont'd Once a delta image has been aligned to a projector, an appropriate alpha mask is computed from it, with the per-frame change clamped to Δmax, the maximum intensity change allowed between any two frames, to avoid rapid fluctuations. The alpha blending process takes into account whether incoming alpha values should be added to or subtracted from the alpha channel currently being projected.
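A minimal sketch of the clamped, signed alpha update; the Δmax value of 0.1 and the [-1, 1] normalization of the delta image are illustrative assumptions:

```python
import numpy as np

def blend_alpha(alpha, delta, delta_max=0.1):
    """Update a projector's alpha channel from an aligned delta image.
    The per-frame change is clamped to delta_max to avoid rapid
    fluctuations; positive deltas (display darker than expected) add
    alpha, negative ones subtract it. delta is assumed normalized to
    [-1, 1] and alpha to [0, 1]."""
    step = np.clip(delta, -delta_max, delta_max)
    return np.clip(alpha + step, 0.0, 1.0)
```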

Example: difference image and the resulting alpha mask.

Results

Main drawback It takes the system about 3-4 frames to converge to a blended image. This is not an interactive rate.

Automatic Generation of Consistent Shadows for Augmented Reality Katrien Jacobs, Jean-Daniel Nahmias, Cameron Angus, Alex Reche, Celine Loscos, Anthony Steed (2005)

Outline Motivation The problem Previous work Method overview Shadow detection step –Automatic estimate of the shadow intensity Shadow Protection step Shadow Generation step

Motivation A wide range of applications use computer-generated animations in combination with pictures of real scenes: –Medical training –Medical surgery –Entertainment Some require an instantaneous inclusion of the virtual elements among the real ones. A consistent shadow of the virtual objects gives a correct geometric interpretation, and correct lighting enhances the feeling that the virtual objects are part of the real scene.

The problem This doesn't seem natural: the shadow lies correctly on the ground but overlaps incorrectly with the real shadow.

Previous work Since the early 90's, a few solutions for the illumination inconsistency have been proposed. Most of them assume that a model of the real scene is available; if not, it is reconstructed using photos from different viewpoints. This usually leads to a mismatch between the simplified geometry and the texture. In this paper, a new procedure is presented that offers a solution regardless of the quality of the geometric reconstruction.

Geometric reconstruction example: outdoor scene; reconstructed geometry; shadow created based on the geometry; mismatch between geometry and texture.

Method overview The system is applied on scenes with one main real light source. The real element’s geometry and the position of the light source only need to be known approximately. A three-step mechanism is designed: –Shadow detection step –Shadow protection step –Shadow generation step

Shadow detection step In order to protect the existing shadows in the scene from any post-processing, the shadow pixels in the texture need to be identified. –First, a shadow contour estimate is calculated using the geometry and the light source position. –Next, the exact shadow contour is extracted using an edge detector, in this case a Canny edge detector.

Edge detection example: input for the edge detector; using the geometric estimate, an accurate edge detection is performed.

Shadow detection step – cont'd Correct detection will occur when: –The position of the geometrical estimate is close to that of the real shadow, regardless of the difference in shadow shape or detail. –The shadow is hard or soft and shows a relatively high contrast with the background. –The contrast between the shadow and the background is larger than the contrast in the texture pattern of the background. The computation speed of the shadow edge detector depends on the size of the real shadows.

Shadow detection step: automatic estimate of the shadow intensity Once the true shadow contour is known, it is possible to calculate a scaling factor per material in shadow that reflects the color intensity in the shadow region.

Shadow detection step: automatic estimate of the shadow intensity The scaling factor for color channel C ∈ {R, G, B} is the average color intensity over the shadow region divided by the average over the non-shadow region: s_C = ( (1/N_SR) Σ_{p∈SR} I_C(p) ) / ( (1/N_NSR) Σ_{p∈NSR} I_C(p) ), where SR is the shadow region, NSR the non-shadow region, and N_SR, N_NSR are the numbers of pixels in each.
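The per-channel scaling factor can be computed directly from the two pixel sets; the helper name is illustrative:

```python
import numpy as np

def shadow_scale_factors(image, shadow_mask):
    """Per-channel scaling factor: mean intensity inside the shadow
    region divided by mean intensity in the non-shadow region,
    computed separately for C in {R, G, B}.
    image: (h, w, 3) float array; shadow_mask: (h, w) bool array."""
    sr = image[shadow_mask]     # pixels in the shadow region, (N_SR, 3)
    nsr = image[~shadow_mask]   # pixels outside it, (N_NSR, 3)
    return sr.mean(axis=0) / nsr.mean(axis=0)
```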

Shadow Protection step A binary shadow mask is created in order to protect points inside a real shadow from any scaling. The scaling factor is chosen to match the color of the non-overlapping areas with the points inside the real shadow.

Shadow Generation step A real-time shadow method such as shadow maps or shadow volumes is used to generate the virtual shadows. The intensity of the shadow is set from the scaling factor computed in the shadow detection step. Overlap between real and virtual shadows is prevented by using the mask generated in the shadow protection step. The intensities of the pixels in the non-overlapping regions are calculated by scaling the texture color with the scaling factor.
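The non-overlap compositing rule can be sketched as follows; function and parameter names are illustrative:

```python
import numpy as np

def apply_virtual_shadow(texture, virtual_shadow, real_shadow_mask, scale):
    """Darken the texture where the virtual shadow falls, using the
    per-channel scale factor from the detection step, while the binary
    real-shadow mask protects pixels already inside a real shadow from
    any scaling (preventing double-darkening in overlap regions).
    texture: (h, w, 3); virtual_shadow, real_shadow_mask: (h, w) bool."""
    out = texture.astype(float).copy()
    target = virtual_shadow & ~real_shadow_mask   # non-overlapping region only
    out[target] *= scale
    return out
```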

Results: geometric estimation of the shadow.

Real scene; estimated shadow in yellow; the green area is sent to edge detection; edge detection result.

Real-time results: virtual man walking around a real laptop.

References Rahul Sukthankar, Tat-Jen Cham, Gita Sukthankar. Dynamic Shadow Elimination for Multi-Projector Displays. Proceedings of IEEE CVPR, 2001. Christopher Jaynes, Stephen Webb, R. Matt Steele, Michael Brown, W. Brent Seales. Dynamic Shadow Removal from Front Projection Displays. Proceedings of IEEE Visualization, 2001. Katrien Jacobs, Jean-Daniel Nahmias, Cameron Angus, Alex Reche, Celine Loscos, Anthony Steed. Automatic Generation of Consistent Shadows for Augmented Reality. Proceedings of Graphics Interface 2005 (ACM International Conference Proceeding Series, Vol. 112).