Multi-Aperture Photography

Presentation on theme: "Multi-Aperture Photography"— Presentation transcript:

1 Multi-Aperture Photography
Paul Green – MIT CSAIL Wenyang Sun – MERL Wojciech Matusik – MERL Frédo Durand – MIT CSAIL

2 Motivation Post-Exposure Depth of Field Control Depth of Field
Portrait / Landscape. Large Aperture: Shallow Depth of Field. Small Aperture: Large Depth of Field. Today I will be talking about how to provide useful depth of field controls to photographers. Depth of field is the term photographers use to describe the range of distances in the scene that are sharp in the final image. Shallow DOF, as shown in the top row of images, emphasizes the subject and removes distracting backgrounds, and is often used in portrait photography. Large DOF is necessary when there are large distances between points of interest in the scene, for example in landscape and outdoor photography. DOF is directly controlled by the size of the lens aperture: small apertures produce large DOF, and large apertures produce shallow DOF. A photographer must choose the aperture size before taking each photo. [CLICK] We would like to alleviate some of this burden by allowing post-exposure depth of field control, by capturing multiple aperture settings in one exposure. Now I will go into some details of DOF.

3 Depth and Defocus Blur sensor lens plane of focus circle of confusion Here I am showing a lens focusing light from a point on the plane of focus into a single point at the sensor plane. [CLICK] However, points not at the focus plane will not be in sharp focus at the sensor plane, and instead produce a blurred spot. The further a point is from the plane of focus, the larger the spot. We call the size of the blurred spot the circle of confusion, or the defocus blur. Defocus blur depends on distance from the plane of focus.
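For concreteness, here is a minimal sketch (not from the slides) of the standard thin-lens relationship behind this: the further a point sits from the plane of focus, the larger its circle of confusion on the sensor. The function names and units are illustrative assumptions.

```python
def thin_lens_image_distance(f, d):
    """Image distance for an object at distance d (thin-lens equation: 1/f = 1/d + 1/v)."""
    return f * d / (d - f)

def circle_of_confusion(f, aperture_diameter, focus_dist, obj_dist):
    """Blur-circle diameter on the sensor for a point at obj_dist when the lens
    is focused at focus_dist; it shrinks to zero exactly on the plane of focus."""
    v_sensor = thin_lens_image_distance(f, focus_dist)  # the sensor sits here
    v_point = thin_lens_image_distance(f, obj_dist)     # where the point's rays converge
    # similar triangles on the defocused cone of light
    return aperture_diameter * abs(v_point - v_sensor) / v_point

# blur grows with distance from the plane of focus (all units in meters)
for d in (1.5, 2.0, 3.0, 5.0):
    print(d, circle_of_confusion(f=0.05, aperture_diameter=0.025,
                                 focus_dist=2.0, obj_dist=d))
```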

4 Defocus Blur & Aperture
sensor lens plane of focus aperture circle of confusion defocus blur depends on aperture size In addition to depending on depth, defocus blur also depends on the size of the aperture. [CLICK] The smaller the aperture, the smaller the blur.

5 Goals Aperture size is a critical parameter for photographers
post-exposure DOF control extrapolate shallow DOF beyond physical aperture It's for this reason that aperture size is a critical parameter for a photographer to set before taking each photo, and also a parameter that takes experience to control well. We want to facilitate post-exposure DOF control by allowing the photographer to explore the aperture settings after taking the photo. Additionally, the amount of defocus blur is limited by the physical size of the aperture, so to get the shallow depth of field that many photographers want, it is necessary to use expensive, large-aperture lenses. We would like to allow extrapolation of shallow DOF beyond the constraints of the physical aperture on your lens.

6 Outline Multi-Aperture Camera New camera design
Capture multiple aperture settings simultaneously Applications Depth of field control Depth of field extrapolation Refocusing In this talk I will be presenting a new camera design that enables the capture of multiple aperture settings in a single exposure. I will also discuss some of the applications of the multi-aperture camera, with an emphasis on post-exposure depth of field control and depth of field extrapolation.

7 Related Work Computational Cameras Plenoptic Cameras
Adelson and Wang ’92, Ng et al. ’05, Georgiev et al. ’06; Split-Aperture Camera: Aggarwal and Ahuja ’04; Optical Splitting Trees: McGuire et al. ’07; Coded Aperture: Levin et al. ’07, Veeraraghavan et al. ’07; Wavefront Coding: Dowski and Cathey ’95; Depth from Defocus: Pentland ’87. We build on a lot of previous work on passive computational cameras that allow DOF control. I don’t have time to cover everything, but I will go into the details and differences of several, and discuss how they ultimately shaped our design.

8 Plenoptic Cameras Capture 4D Lightfield: 2D Spatial (x,y)
2D Angular (u,v, Aperture) Main Idea: Trade resolution for flexibility after capture Refocusing DOF control Improved Noise Characteristics Lenslet Array The first body of work that really shaped our design is plenoptic, or lightfield, cameras that capture the 4D lightfield entering the camera, sampling 2 spatial dimensions and 2 angular dimensions. The general idea of these cameras is to trade some spatial resolution to record angular information instead. For example, the design of Ng and colleagues uses a lenslet array placed just over the camera sensor to direct the light that comes from different regions of the aperture to different pixels on the sensor. The purple ray that passes through the center of the aperture is captured at one pixel, while rays that pass through the edges of the aperture are recorded at different pixels. In a sense, what you get is a 4D function that has the standard 2 spatial dimensions and 2 extra angular dimensions that sample the aperture in a grid-like pattern, as I show here. This extra information allows them to perform refocusing and DOF control after the exposure is taken, and produces very high quality images. Unfortunately the image resolution is reduced by as much as 100 times, and adding the lenslet array permanently modifies your camera.
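As a side note, the refocusing that a captured 4D lightfield enables is usually done by shift-and-add over the angular samples. The sketch below illustrates the idea under assumed conventions (array layout L[u, v, y, x], integer pixel shifts, a made-up alpha parameter); it is not the specific pipeline of any of the cited cameras.

```python
import numpy as np

def lightfield_refocus(L, alpha):
    """Shift-and-add refocusing over a 4D lightfield L[u, v, y, x]: each angular
    sample is shifted in proportion to its offset from the aperture center
    (controlled by alpha), then all samples are averaged. Integer shifts with
    wrap-around are used here purely to keep the sketch short."""
    nu, nv, ny, nx = L.shape
    cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
    out = np.zeros((ny, nx))
    for u in range(nu):
        for v in range(nv):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(L[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (nu * nv)
```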

9 1D vs 2D Aperture Sampling
2D Grid Sampling Going back to the particular application of DOF control, we remember that the main controllable parameter that affects DOF is the aperture size. The image on the right shows how the aperture size changes in a normal camera. We can mimic the same behavior with a lightfield camera by summing different portions of the 2D grid of samples. For example, to create an image with a large depth of field, we would only use the central sample. [CLICK] Now, to synthesize the image from a slightly larger aperture, we sum the central sample plus the next “ring” of samples. And so on…

10 1D vs 2D Aperture Sampling
2D Grid Sampling: 45 Samples. 1D “Ring” Sampling: 4 Samples. Lightfield cameras are extremely general, but if what we are really interested in is changing the aperture, then we could have saved a lot of resolution if we had stored the “rings” directly instead of the grid of samples. In other words, we have stored a 2D sampling for something that is essentially 1D. This is the key idea that shaped our camera design.
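To make the "45 samples vs. 4 samples" point concrete, here is a small sketch of how a 2D grid of angular samples collapses into concentric ring images; only the per-ring sums are needed for aperture-size control. The radius binning is an assumption made for illustration.

```python
import numpy as np

def ring_sums(L, n_rings=4):
    """Collapse the (u, v) angular grid of a lightfield L[u, v, y, x] into
    n_rings images, one per concentric aperture ring, by binning each angular
    sample by its normalized distance from the aperture center."""
    nu, nv, ny, nx = L.shape
    cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
    rmax = np.hypot(cu, cv) + 1e-9
    rings = np.zeros((n_rings, ny, nx))
    for u in range(nu):
        for v in range(nv):
            r = int(np.hypot(u - cu, v - cv) / rmax * n_rings)
            rings[min(r, n_rings - 1)] += L[u, v]
    return rings

# an image for the k-th aperture size is then just rings[:k + 1].sum(axis=0)
```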

11 Optical Splitting Trees
General framework for sampling imaging parameters: beamsplitters, multiple imagers (McGuire et al. ’07). One alternative to a lightfield camera for capturing the 1D space of apertures is to use a network of beamsplitters and cameras, for example as described in the Optical Splitting Trees work by McGuire and colleagues. The main drawback to this approach is that you lose light: in order to get variations in the aperture settings, each camera must use a physical aperture that blocks light. It is also difficult to image onto a single CCD sensor, which would preclude it from being used in consumer photography.

12 Goals post-exposure DOF control extrapolate shallow DOF 1D sampling
no beamsplitters single sensor removable Our review of previous work has added a few more goals. [CLICK] We want to capture the 1D space of aperture rings directly, [CLICK] without using beamsplitters. [CLICK] We want to use a single image sensor so that it can be used in consumer photography. [CLICK] And finally, we would like it to be removable, so that you can take a normal photograph if you want.

13 Outline Multi-Aperture Camera New camera design
Capture multiple aperture settings simultaneously Applications Depth of field control Depth of field extrapolation Refocusing Now I will go into the details of our particular optical design.

14 Optical Design Principles
3D sampling 2D spatial 1D aperture size 1 image for each “ring” To restate our design principles, we want to capture a 3D sampling of the light entering the camera: 2D spatial and 1 aperture dimension. In practice, we will capture 4 aperture rings onto the 4 quadrants of the sensor. [CLICK] For example, the light that enters the smallest green aperture should form an image in the lower left corner of the sensor. So the main task of our optical system is to divert the light from each aperture ring along a different path.

15 Aperture Splitting Goal: Split aperture into 4 separate optical paths
concentric tilted mirrors at aperture plane In order to split the light arriving at the aperture into different directions, we place tilted mirror surfaces at the aperture. [CLICK] This is a 3D illustration of the type of aperture-splitting mirrors we use.

16 Aperture Splitting Sensor Tilted Mirrors Focusing lenses Mirrors
The tilted surfaces are oriented such that light striking different regions of the aperture is diverted along different directions. By placing folding mirrors and focusing lenses, we can finally direct the light onto the sensor.

17 Aperture Splitting Ideally at the aperture plane, but not physically possible!
Solution: Relay optics to create a virtual aperture plane. In practice we can’t place the aperture-splitting mirror directly at the aperture plane, which lies inside of the lens. Instead, we use a relay system to image the aperture to a plane outside of the photographic lens, and place our splitting mirrors at this new plane.

18 Optical Prototype Mirror Close-up SLR Camera mirrors lenses
Here is a photo of the optical prototype as well as a close-up of the central splitting mirrors. Our prototype uses an off-the-shelf photographic lens attached to relay optics. The relay optics produce an image of the aperture onto the central splitting mirrors. The light is then reflected onto one of four folding mirrors depending on its position in the aperture, and finally focused onto the CCD sensor of an SLR camera.

19 Sample Data Raw data from our camera
Our design successfully splits the aperture into four regions. Here is an example of the data we capture with our camera. Each quadrant is an image through one of the rings.

20 PSF Occlusion Analysis
inner, ring 1, ring 2, outer, combined. Ideally these would be rings; the gaps are from occlusion. One issue with our design is that there is occlusion from the different rings of the mirrors. I am showing the point spread functions of each of the rings for a point light source placed off the plane of focus. Ideally each of the rings would be circular and not contain any gaps or occlusions. We can see that the amount of occlusion increases with the larger rings. The rightmost image shows the four other images summed together, and is an illustration of the PSF of the reconstructed full-aperture image.

21 Outline Multi-Aperture Camera New camera design
Capture multiple aperture settings simultaneously Applications Depth of field control Depth of field extrapolation Refocusing In this talk I will be presenting a new camera design that enables the capture of multiple aperture settings in a single exposure. I will also discuss some of the applications of the multi-aperture camera, with an emphasis on post-exposure depth of field control and depth of field extrapolation.

22 DOF Interpolation The first thing we can do with this data is interpolate the depth of field between the smallest and largest apertures captured. By summing the images from different rings, we can create images as if taken through larger apertures.
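In code, this interpolation is just a cumulative sum over the ring images; a minimal sketch, assuming the four quadrants of the raw data have already been cropped and registered into a list ring_images ordered from innermost to outermost:

```python
import numpy as np

def interpolate_dof(ring_images, k):
    """Synthesize the image for the k-th captured aperture by summing the
    innermost k+1 ring images I_0..I_k; k=0 gives the smallest aperture
    (largest DOF), k=3 the largest captured aperture (shallowest DOF)."""
    return np.sum(ring_images[:k + 1], axis=0)
```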

23 DOF Extrapolation? Approximate defocus blur as convolution
So we were able to construct a sequence of images taken with different aperture sizes. But what if we want to synthesize an image as if taken with an even larger aperture? What is this extrapolated image? We know that it should be similar to the other images, but blurrier, because we are using a larger aperture. But exactly how much blurrier? If we make some approximations, for example that the scene is composed of planar objects, then we can approximate the defocus blur as a convolution with a circular aperture blurring kernel whose size depends on the aperture size and the depth at each pixel. This allows us to relate the blur already observed to the extrapolated image. Assuming that I0 is the smallest-aperture image, we can express the blur in the images I1, I2 and I3 as a convolution of image I0: a pixel in I1 is a sum over some area in I0; for I2 it is a slightly larger area, and for I3 the area is even larger. The blur will be larger still for the extrapolated image. The only issue is to figure out exactly how large it should be, and to do this for every pixel. So to synthesize a new extrapolated image IE, we need to figure out what the blur is at each pixel in IE.
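A hedged sketch of that convolution approximation (the disc-kernel construction and the scipy usage are my own illustration, not the paper's implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def disc_kernel(diameter):
    """Circular ('pillbox') blurring kernel of the given diameter in pixels."""
    r = max(diameter / 2.0, 0.5)
    n = 2 * int(np.ceil(r)) + 1
    yy, xx = np.mgrid[:n, :n] - n // 2
    k = (np.hypot(yy, xx) <= r).astype(float)
    return k / k.sum()

def blur_for_aperture(I0, coc_diameter):
    """Approximate a larger-aperture image as the smallest-aperture image I0
    convolved with a disc whose size equals the circle of confusion. This treats
    the scene as locally planar; spatially varying blur needs a per-pixel kernel."""
    return fftconvolve(I0, disc_kernel(coc_diameter), mode='same')
```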

24 DOF Extrapolation Roadmap
Capture → estimate blur → fit model → extrapolate blur. The general roadmap of what we would like to do to extrapolate blur is as follows: First, we capture data with our multi-aperture camera, as I’ve described earlier. From the data we have 4 images, each taken with a different aperture setting. Next, we would like to estimate the amount of blur, or size of the circle of confusion, at each point in each of the 4 images. Then, we would like to fit a function to the blur samples that allows us to extrapolate. Finally, we would evaluate the extrapolation function for the new aperture size and synthesize a new blurred image using the extrapolated circle-of-confusion size. In fact, aperture size has a simple linear relation to blur size, and we can use this model to combine the estimation of blur and the fitting of the extrapolation function for a more robust estimation. Next, I’m going to briefly discuss the linear model of aperture and blur size that we will use as our extrapolation function.

25 Defocus Gradient
(Plot: blur size vs. aperture diameter D for images I0 through I3 and the extrapolated IE beyond the largest physical aperture. Right: the defocus gradient map.) As I have mentioned several times, there is a relationship between the size of the defocus blur and the aperture number. This equation also involves object depth and focal length. We can combine these terms into one constant and arrive at a simple equation that describes how the blur at a point changes with aperture number. We call that constant G the defocus gradient. If we replot the left graph, this time as a function of 1/aperture-number, we see that G is the slope of the line. So our task is essentially to fit a line to several data points; to extrapolate, we just evaluate the line at the desired aperture number. The right image shows the defocus gradient map, which is the estimated defocus gradient at each pixel. The defocus gradient map is related to a depth map, and in fact we could have solved for depth and then converted it into defocus gradient. But this is a simple method that directly solves for the quantity we are interested in: how defocus changes with aperture size.
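A minimal per-pixel sketch of that line fit, using the transcript's linear model (blur size proportional to 1/aperture-number, with slope G); the variable names and example numbers are made up:

```python
import numpy as np

def fit_defocus_gradient(blur_sizes, f_numbers):
    """Least-squares slope G of the line blur_size = G * (1 / N) through the
    origin, fitted to the blur estimates for one pixel across the captured
    aperture numbers N."""
    sigma = np.asarray(blur_sizes, dtype=float)
    x = 1.0 / np.asarray(f_numbers, dtype=float)
    return np.dot(x, sigma) / np.dot(x, x)

# four captured apertures, then extrapolate to a wider (smaller-N) virtual one
G = fit_defocus_gradient(blur_sizes=[0.5, 1.0, 2.1, 4.0], f_numbers=[16, 8, 4, 2])
sigma_f1_8 = G / 1.8   # predicted blur size for a synthetic f/1.8 aperture
```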

26 Optimization solve for discrete defocus gradient values G at each pixel Data term Graph Cuts with spatial regularization term To solve for the defocus gradient map, we construct a graph problem and solve it using graph cuts optimization. Our objective function includes a data fitting term as well as a regularization term to enforce spatial smoothness among neighbors. The data term searches for the defocus gradient value that best explains the observed blur in different images. The output is a map, with a discrete set of defocus gradient values. Please see the paper for more details of the optimization. Smallest Aperture Image Defocus Gradient Map

27 DOF Extrapolation video

28 Synthetic Refocusing Need to focus on nearest object
We can also perform a type of synthetic refocusing. By shifting the labels in the defocus gradient map, we can synthetically move the apparent plane of focus, and then blur using the DOF extrapolation technique just described. This method only works if you originally focused on the nearest or furthest object in the scene; otherwise there is an ambiguity about whether a point is in front of or behind the original plane of focus. This is also a problem in depth-from-defocus methods, and focusing on the nearest object is commonly used. This data was not captured from our prototype, but instead was taken as 4 separate photographs. (Images: gradient map, “refocused” map, extrapolated f/1.8, “refocused” synthetic f/1.8.)
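A rough sketch of the label-shifting idea, under the stated assumption that the original focus was on the nearest (or farthest) object so the sign of the shift is unambiguous; this is an illustration of the idea, not the paper's exact procedure.

```python
import numpy as np

def refocus_gradient_map(G_map, G_new_focus):
    """Shift the defocus gradient map so the chosen gradient value becomes the
    new in-focus plane (gradient 0); the per-pixel blur for a virtual aperture
    of number N is then roughly G_refocused / N, applied with the extrapolation
    blur described above."""
    return np.abs(G_map - G_new_focus)
```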

29 Synthetic Refocusing Video

30 Depth Guided Deconvolution

31 Limitations Optical Design Occlusion Difficult alignment process
DOF extrapolation and refocusing DOF is dependent on smallest aperture (but our deconvolution helps) The main limitation of our optical design is the occlusion that the central splitting mirror produces. Also, there is a difficult and tedious alignment process to get the images to route correctly to the CCD. The main issue with our DOF extrapolation and refocusing methods is that the DOF of the output image is dependent on the DOF present in the smallest-aperture image. Also, we used a simple blurring method that suffers from halo artifacts at boundaries. Improved blurring results could be accomplished using the new blurring model introduced in the Active Refocusing paper by Moreno-Noguer et al. from this year.

32 Conclusions Post-Exposure DOF control DOF extrapolation
1D sampling of aperture No beamsplitters Removable In conclusion, we have presented a method to perform post-exposure DOF control, and in particular DOF extrapolation. Our camera design samples the 1D space of aperture sizes instead of a 2D grid. Our system doesn’t use beamsplitters, which lose light. And finally, it is completely removable.

33 Thanks People John Barnwell Jonathan Westhues SeBaek Oh Daniel Vlasic
Eugene Hsu Tom Mertens Jane Malcolm Funding NSF CAREER award Ford Foundation predoctoral Fellowship Microsoft Research New Faculty Fellowship Sloan Fellowship

