1 Removing Highlight Spots in Visual Hull Rendering
Jie Feng, Liang Chen, and Bingfeng Zhou, Institute of Computer Science & Technology, Peking University, Beijing, China

2 Visual Hull Rendering An efficient technique for image-based rendering
The intersection of "viewing cones" constructs a convex hull that contains the object Visual hull is an efficient technique for image-based rendering, and it has been widely used in recent years. Its main idea comes from "shape-from-silhouette": from each source image, we can extrude a "viewing cone" according to the calibration information of its camera. The intersection of these viewing cones constructs a convex hull that contains the object, which can be considered an approximation of the object. However, the visual hull cannot reconstruct concave regions of the object.
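As a concrete illustration of the shape-from-silhouette idea described above, here is a minimal sketch that carves a voxel visual hull by keeping only the voxels whose projection falls inside every silhouette. The projection matrices, silhouette masks, and grid resolution are hypothetical placeholders, not part of the authors' method (which, as slide 7 explains, works purely in image space and never builds an explicit 3D model).

```python
import numpy as np

def carve_visual_hull(silhouettes, proj_mats, grid_min, grid_max, res=64):
    """Minimal shape-from-silhouette sketch: a voxel is kept only if it
    projects inside the silhouette of every reference view.
    silhouettes: list of HxW boolean masks; proj_mats: list of 3x4 matrices."""
    xs = np.linspace(grid_min[0], grid_max[0], res)
    ys = np.linspace(grid_min[1], grid_max[1], res)
    zs = np.linspace(grid_min[2], grid_max[2], res)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel(), Z.ravel(), np.ones(X.size)], axis=0)  # 4 x N homogeneous

    inside = np.ones(pts.shape[1], dtype=bool)
    for sil, P in zip(silhouettes, proj_mats):
        uvw = P @ pts                                   # project all voxel centers into this view
        u = (uvw[0] / uvw[2]).round().astype(int)
        v = (uvw[1] / uvw[2]).round().astype(int)
        h, w = sil.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros_like(inside)
        hit[valid] = sil[v[valid], u[valid]]
        inside &= hit                                   # intersection of all viewing cones
    return inside.reshape(res, res, res)
```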

3 Visual Hull Rendering Suffers from the highlight spots on the images
As an image-based technique, visual hull rendering also suffers from the highlight spots in the source images. These spots remain on the reconstructed model and cause artifacts, especially when the illumination of the environment is changed. (Figure: reference images and the rendered new view.)

4 Highlight Spots Removal Methods
Some methods need more than one image at a single position Increase the difficulty of acquiring and storing source images in visual hull rendering There have been many highlight removal methods based on image editing techniques. Some of them need more than one image at a single position. For example, Agrawal et al. used two images taken at the same position: guided by the ambient image, reflections and highlights can be removed from the flash image. However, such methods increase the difficulty of acquiring and storing source images in visual hull rendering. [Agrawal et al., SIGGRAPH 2005]

5 Highlight Spots Removal Methods
Single-image-based methods Utilizing image gradients, illumination constraints, etc. in image editing Working well when the background is simple or uniformly textured Cannot guarantee the consistency of different reference images [Pérez et al., SIGGRAPH 2003] There are also single-image-based methods. They usually make use of the image gradient field, or introduce illumination-based constraints in image editing. Such methods can work quite well when the background is simple or uniformly textured. One shortcoming, however, is that they edit each reference image independently and therefore cannot guarantee consistency between corresponding parts of different images. [Tan et al., ICCV 2003]

6 Removing Highlight on VH
Visual hull methods provide great convenience for highlight removal Reference images usually have much overlap The correspondence of pixels in different images can be found In fact, the visual hull method itself provides great convenience for highlight removal. As illustrated in the figure, the reference images of a visual hull usually overlap considerably. This overlap offers sufficient information to remove highlights, because the counterpart of a highlighted area in another image is often free of highlight. Moreover, using the calibration information of the cameras, the correspondence of pixels in different images can be found during the construction of the visual hull. (Figure: target reference image and another reference image.)

7 Image-based Visual Hull Rendering
Image-based Visual Hull (IBVH), Matusik et al. The calculations are limited to image space Main steps to render a new view: Projecting the viewing ray onto the reference images (epipolar line) The epipolar line intersects the 2D silhouette Projecting the 2D intervals back to get 3D bounding edges on the viewing ray Here we present a highlight removal method based on the framework of IBVH. The IBVH method does not reconstruct a 3D model explicitly, but directly renders the image of the desired view; the calculations are limited to image space, so it is very efficient. The main steps to render a new view are: 1. For each pixel of the desired view, project its viewing ray onto the reference images by epipolar geometry, obtaining an epipolar line in each. 2. Each epipolar line intersects the corresponding 2D silhouette, resulting in a group of 2D intervals. 3. Project the 2D intervals back onto the viewing ray; their intersection gives the 3D bounding edges. In this way the intersection points of each viewing ray with the visual hull are found, and their rendering colors are sampled from the reference images.
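The exact IBVH algorithm intersects epipolar lines with 2D silhouette intervals and lifts the result back onto the viewing ray; the sketch below only approximates those 3D bounding edges by sampling depths along the ray and testing each sample against every silhouette. All names (ray origin, projection matrices, depth range) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ray_visual_hull_interval(origin, direction, silhouettes, proj_mats,
                             t_near=0.1, t_far=10.0, samples=512):
    """Approximate the 3D bounding edges of a viewing ray: keep the depth
    samples that project inside every silhouette. (The real IBVH computes
    exact 2D intervals on epipolar lines instead of sampling.)"""
    t = np.linspace(t_near, t_far, samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]          # samples x 3
    pts_h = np.concatenate([pts, np.ones((samples, 1))], axis=1).T   # 4 x samples

    inside = np.ones(samples, dtype=bool)
    for sil, P in zip(silhouettes, proj_mats):
        uvw = P @ pts_h
        u = (uvw[0] / uvw[2]).round().astype(int)
        v = (uvw[1] / uvw[2]).round().astype(int)
        h, w = sil.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(samples, dtype=bool)
        hit[valid] = sil[v[valid], u[valid]]
        inside &= hit
    if not inside.any():
        return None                      # ray misses the visual hull
    idx = np.where(inside)[0]
    return t[idx[0]], t[idx[-1]]         # nearest and farthest intersection depths
```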

8 Removing Highlight Spots on IBVH
The main steps of our method: Selecting sub-images in the target image that contain highlight spots Finding correspondences of pixels in other reference images Resampling the highlight sub-image from its counterparts In our highlight removal method, we first select sub-images in the target reference image that contain highlight spots. Then, using the calibration information of the cameras, the correspondences of pixels in the other reference images are found. Finally, the sub-images are resampled from their counterparts to remove the highlight spots.

9 Selecting Highlight Sub-images
Calculations are applied only to the sub-images; the rest of the image remains unchanged To reduce computing cost and minimize the error introduced by pixel mapping and resampling The purpose of selecting sub-images is to reduce the computing cost and to minimize the error introduced by pixel mapping and resampling. Therefore, all the calculations are applied only to the sub-images, and the rest of the image remains unchanged.
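The slides do not say how the highlight sub-images are chosen (they may well be marked by hand). The sketch below shows one plausible automatic selection under that caveat: flag bright, weakly saturated pixels as likely specular spots and take their bounding box as the sub-image. The thresholds and the detection criterion itself are assumptions, not the authors' procedure.

```python
import numpy as np

def select_highlight_subimage(img, brightness_thresh=0.9, saturation_thresh=0.2):
    """Hypothetical highlight detector: bright, low-saturation pixels are
    treated as specular spots; return the bounding box of all flagged pixels
    as the sub-image to be processed."""
    rgb = img.astype(np.float64) / 255.0
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    mask = (mx > brightness_thresh) & (saturation < saturation_thresh)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1   # (x0, y0, x1, y1) bounds
```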

10 Finding Pixel Correspondence
Finding pixel correspondence between reference images is similar to rendering a new view in IBVH, except that: The target image is not a synthesized new view, but one of the reference images The calculations are limited to the selected highlight sub-images The approach to finding pixel correspondence between different reference images is similar to that of rendering a desired new view in IBVH, except that: 1) when rendering a desired view the target image is a new, virtual, synthesized one, while here it is one of the reference images; 2) the calculations are no longer performed over the whole image, but are limited to the selected highlight sub-images.

11 Finding Pixel Correspondence
The approach (the figure labels the cameras C0 and Ck, the viewing ray r, the rays rak and rbk, the points pak, pbk, ps, pk, and the segment endpoints vak, vbk, va, vb in images I0, Ik, Is): 1. Given the target image I0, each pixel p0 in a highlight sub-image defines a 3D viewing ray r. 2. For another reference image Ik, the epipolar line of p0 (denoted le) is calculated using the fundamental matrix between I0 and Ik. Let le intersect the silhouette of Ik at points pak and pbk. 3. Two 3D rays, rak and rbk, are emitted from camera Ck and pass through pak and pbk, respectively. Since r, rak and rbk all lie in the same plane, they intersect and yield a 3D segment (vak, vbk). 4. Applying the same calculation to the other reference images gives a group of such segments. Their intersection, denoted (va, vb), is taken as the intersection of r with the visual hull of the object, and the nearer endpoint va is the corresponding 3D point of pixel p0. 5. Using the calibration information of camera Ck, va is projected onto image Ik to obtain the corresponding pixel pk. 6. Similarly, the corresponding pixel of p0 can be found in every other reference image.
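Reusing the sampled interval routine sketched after slide 7, here is a minimal sketch of the correspondence step: lift p0 to its viewing ray, take the near endpoint va of the ray's visual-hull interval, and project va with the other camera's matrix. The ray construction from intrinsics K0 and pose (R0, t0) is an assumed convention; the slide itself works with epipolar lines and fundamental matrices rather than an explicit depth search.

```python
import numpy as np

def corresponding_pixel(p0, K0, R0, t0, silhouettes, proj_mats, P_k):
    """Find the counterpart of target pixel p0 in reference image Ik:
    intersect p0's viewing ray with the visual hull and project the near
    endpoint va into Ik (assumed camera conventions, P = K [R | t])."""
    # Viewing ray of p0 in world space: origin is the camera center of I0.
    origin = -R0.T @ t0
    direction = R0.T @ np.linalg.inv(K0) @ np.array([p0[0], p0[1], 1.0])
    direction /= np.linalg.norm(direction)

    interval = ray_visual_hull_interval(origin, direction, silhouettes, proj_mats)
    if interval is None:
        return None                       # p0 falls outside every silhouette
    t_nearest, _ = interval
    va = origin + t_nearest * direction   # nearest intersection with the visual hull

    # Project va into reference image Ik with its 3x4 projection matrix.
    uvw = P_k @ np.append(va, 1.0)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```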

12 Finding Pixel Correspondence
The counterparts of a highlight sub-image in the other reference images In this way, the counterparts of a highlight sub-image in the other reference images are found. Note that they usually contain less highlight, or none at all; we take advantage of this fact to remove the spots. (Figure: target reference image and the other reference images.)

13 Highlight Sub-image Resampling
Re-calculating the colors of the sub-image pixels to reduce the highlight effect Blending the appearance colors of p0 and its corresponding pixels {pk | k=1…n} weighted by their gray-level deviations After finding the counterparts of a sub-image, we re-calculate the colors of its pixels to reduce the highlight effect. This is done by blending the appearance colors of pixel p0 and its corresponding pixels {pk | k=1…n}. We can assume that the appearance colors of {pk | k=1…n} are mostly distributed around the true diffuse color. Thus, when blending the colors, a pixel whose color deviates far from the others is given a small weight, and pixels closer to the average color are given larger weights. In practice, weights based on the gray-level deviation of the pixels work well enough. (The slide shows the formulas for the average gray level, the blending weight, and the final color.)
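The blending formulas themselves appeared as images on the original slide, so only their labels survive above. The sketch below uses one plausible weighting consistent with the description: the farther a pixel's gray level deviates from the average, the smaller its weight. The exact weight function used by the authors may differ.

```python
import numpy as np

def blend_colors(colors, eps=1e-3):
    """Blend the colors of p0 and its corresponding pixels {pk}.
    colors: (n+1) x 3 array of RGB values. Weights shrink with gray-level
    deviation from the average, so highlighted outliers contribute little.
    (Assumed weight function; the slide's exact formula is not shown.)"""
    colors = np.asarray(colors, dtype=np.float64)
    gray = colors @ np.array([0.299, 0.587, 0.114])    # per-pixel gray level
    avg = gray.mean()                                  # average gray level
    weights = 1.0 / (np.abs(gray - avg) + eps)         # small weight for outliers
    weights /= weights.sum()
    return weights @ colors                            # final blended color
```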

14 Highlight Sub-image Re-sampling
n nearest images are used in blending a pixel, selected according to the angles between viewing rays (θk) A larger n helps filter out highlighted pixels, but also causes blurring in complex-textured areas We assume n reference images (besides the target image) are used in blending a pixel. These n images are selected from all reference images according to the angles θk between their viewing rays and the target ray. Using a larger n brings the average color closer to the true color and helps filter out highlighted pixels; however, blending more pixel colors causes blurring, especially in areas with complex texture. (Figure: images I0, Ik, Is with pixels p0, pk, ps and angles θk, θs.)
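A minimal sketch of the view selection, assuming the viewing-ray directions toward the 3D point are already available: rank the reference images by the angle θk between their ray and the target ray, and keep the n closest.

```python
import numpy as np

def select_nearest_views(target_dir, ref_dirs, n):
    """Rank reference views by the angle between their viewing ray to the
    3D point and the target ray; return the indices of the n closest.
    target_dir: (3,) vector; ref_dirs: (m, 3) array, one row per view."""
    target_dir = target_dir / np.linalg.norm(target_dir)
    ref_dirs = ref_dirs / np.linalg.norm(ref_dirs, axis=1, keepdims=True)
    angles = np.arccos(np.clip(ref_dirs @ target_dir, -1.0, 1.0))   # θk per view
    return np.argsort(angles)[:n]
```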

15 Highlight Sub-image Re-sampling
Adjusting n according to each sub-image's complexity (gray-level deviation sum) Using fewer images for complex-textured sub-images, and more for simple-textured sub-images To get a globally better result, we adjust the value of n according to each sub-image's gray-level deviation sum. A sub-image with more complex texture is given a smaller n to prevent blurring, and a sub-image with simpler texture is given a larger n for a smoother result. (The slide shows the formulas for the average gray level, the gray-level deviation sum, and the number of images in use.)
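The mapping from texture complexity to n is again only labelled on the slide. The sketch below uses one plausible choice consistent with the description: compute the sub-image's gray-level deviation sum and interpolate n linearly between Nmax (simple texture) and Nmin (complex texture). The linear mapping, the normalisation constant, and the default bounds (taken from the example values on slide 16) are assumptions.

```python
import numpy as np

def images_in_use(sub_image, n_min=4, n_max=9, complexity_scale=0.2):
    """Choose how many reference images to blend for this sub-image.
    Simple texture (small deviation sum) -> n close to n_max (smoother);
    complex texture (large deviation sum) -> n close to n_min (less blur).
    The linear mapping and complexity_scale are assumptions."""
    gray = sub_image.astype(np.float64).mean(axis=2) / 255.0
    deviation_sum = np.abs(gray - gray.mean()).sum() / gray.size   # normalized deviation sum
    complexity = np.clip(deviation_sum / complexity_scale, 0.0, 1.0)
    return int(round(n_max - complexity * (n_max - n_min)))
```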

16 Experimental Results Fixing a single reference image
23 reference images in total; Nmax=9 (simple texture), Nmin=4 (complex texture)

17 Experimental Results Rendering new views without highlight spots
(Figure: original and fixed reference images; rendering result.)

18 Summary A simple and efficient method to remove highlight spots in visual hull rendering Utilizing the calibration information of the reference images to find pixel correspondences Blending corresponding pixels to filter out highlight spots More realistic and precise results Re-lighting of the visual hull becomes possible

19 Thanks!

