Northeastern University, Fall 2005. CSG242: Computational Photography. Ramesh Raskar, Mitsubishi Electric Research Labs.


CSG242: Computational Photography. Ramesh Raskar, Mitsubishi Electric Research Labs / Northeastern University, Nov 20, 2005. Course WebPage:

Plan for Today
Exam review (avg 84.5, max 89, min 76)
Light field
Assignment 4
Tools: gradient domain and graph cuts
Paper reading
– 2 per student, 15 mins each; reading list on the web
– starts Nov 10th
Course feedback

Reading Paper Presentations
15 minutes: 10-minute presentation, 5 minutes for discussion
Use PowerPoint slides; bring your own laptop or put the slides on a USB drive (print the slides to be safe)
Format:
– Motivation
– Approach (new contributions)
– Results
– Your own view of what is useful and what the limitations are
– Your ideas on improvements to the technique or new applications (at least 2 new ideas)
It is difficult to explain all the technical details in 15 minutes, so focus on the key concepts and give an intuition about what is new here. Ignore second-order details in the paper; instead, describe them in the context of the results. Keep the description of the approach simple; a rule of thumb: no more than 3 equations in your presentation.
Most authors below have PowerPoint slides on their websites, so feel free to use those slides and modify them. Be careful: do not simply present all their slides in sequence. You should focus on only the key concepts and add your own views. If the slides are not available on the author's website, copy-paste images from the PDF to create your slides. Sometimes you can write to the author, and s/he will send you the slides.

Tentative Schedule
Nov 30th
– project update
– special lecture: Video Special Effects
Dec 2nd (Friday)
– Hw 4 due midnight
Dec 7th
– in-class exam (instead of Hw 5)
– special lecture: Mok3
Dec 15th (exam week)
– final project presentation

Department of Computer and Information Science
The Matrix: A Real Revolution (Warner Brothers, 1999). A Neo-Realism!

Assignment 4: Playing with Epsilon Views
See course webpage for details
Resynthesizing images from epsilon views (rebinning of rays)
In this assignment, you will use multiple pictures taken under slightly varying positions to create a large synthetic aperture and multiple-center-of-projection (MCOP) images. You will create (i) an image with a programmable plane of focus, and (ii) a see-through effect.
(A) Available set: use only 16 images along the horizontal translation
(B) Your own data set:
– take at least pictures by translating a camera (push broom)
– the foreground scene is a flat striped paper
– the background scene is a flat book cover or painting
– choose objects with vibrant, bright, saturated colors
– instead of translating the camera, you may find it easier to translate the scene
– put the digital camera in remote-capture time-lapse interval mode (5-second interval)
Effect 1: Programmable focus
– rebin rays to focus on the first plane
– rebin rays to focus on the back plane
– rebin rays to focus on the back plane while rejecting the first plane
Effect 2: MCOP images
– rebin rays to create a single image with multiple views
Useful links

Light Fields, Lumigraph, and Image-based Rendering

Pinhole Camera
Ray origin – image plane
A camera captures a set of rays
A pinhole camera captures a set of rays passing through a common 3D point

Camera Array and Light Fields
Digital cameras are cheap
– low image quality
– no sophisticated aperture (depth of field) effects
Build up an array of cameras
– it captures an array of images
– it captures an array of sets of rays

Why is it useful?
If we want to render a new image:
– we can query each new ray in the light field ray database
– interpolate each ray (we will see how)
– no 3D geometry is needed

Key Problem
How to represent a ray? What coordinate system to use?
How to represent a line in 3D space:
– two points on the line (3D + 3D = 6D). Problem?
– a point and a direction (3D + 2D = 5D). Problem?
– any better parameterization?

Two-Plane Parameterization
Each ray is parameterized by its two intersection points with two fixed planes
For simplicity, assume the two planes are z = 0 (the st plane) and z = 1 (the uv plane)
Alternatively, we can view (s, t) as the camera index and (u, v) as the pixel index
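The two-plane parameterization is easy to make concrete. A minimal sketch (function name is illustrative), assuming the st plane at z = 0 and the uv plane at z = 1 as on the slide:

```python
import numpy as np

def ray_from_two_plane(s, t, u, v):
    """Convert a two-plane-parameterized ray (s, t, u, v) into an origin and a
    unit direction, assuming the st plane is z = 0 and the uv plane is z = 1."""
    origin = np.array([s, t, 0.0])
    direction = np.array([u - s, v - t, 1.0])  # from (s,t,0) toward (u,v,1)
    return origin, direction / np.linalg.norm(direction)

o, d = ray_from_two_plane(0.0, 0.0, 3.0, 4.0)
# marching along the ray until z = 1 should land on the uv-plane point (3, 4, 1)
hit = o + d * (1.0 / d[2])
```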

Ray Representation
For the moment, let's consider just a 2D slice of this 4D ray space
Rendering new pictures = interpolating rays
How to represent rays? 6D? 5D? 4D?

Light Field Rendering
For each desired ray:
– compute its intersection with the (u, v) and (s, t) planes
– blend the closest rays
– what does "closest" mean?

Light Field Rendering: Linear Interpolation
What happens in higher dimensions? Bilinear, trilinear, quadralinear
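A sketch of the quadralinear case under the slide's camera-index/pixel-index view: the 16 nearest rays of a discrete light field L[s, t, u, v] are blended with 4D multilinear weights. Function and array names are illustrative, not from the original slides:

```python
import numpy as np

def quadralinear_lookup(L, s, t, u, v):
    """Blend the 16 nearest rays of a discrete 4D light field L[s, t, u, v]
    with quadralinear (4D multilinear) weights; indices are fractional."""
    coords = [s, t, u, v]
    base = [int(np.floor(c)) for c in coords]
    frac = [c - b for c, b in zip(coords, base)]
    out = 0.0
    for corner in range(16):                 # 2^4 corners of the 4D cell
        idx, w = [], 1.0
        for dim in range(4):
            bit = (corner >> dim) & 1
            idx.append(base[dim] + bit)
            w *= frac[dim] if bit else (1.0 - frac[dim])
        out += w * L[tuple(idx)]
    return out

# toy light field whose value is the sum of its indices:
# multilinear interpolation reproduces a linear function exactly
S, T, U, V = np.meshgrid(np.arange(4), np.arange(4),
                         np.arange(4), np.arange(4), indexing='ij')
L = (S + T + U + V).astype(float)
val = quadralinear_lookup(L, 1.5, 0.25, 2.0, 0.75)  # expect 1.5+0.25+2.0+0.75
```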

Quadralinear Interpolation
Serious aliasing artifacts

Ray Structure
2D light field, 2D EPI (epipolar-plane image)
All the rays in a light field passing through a 3D geometry point

Why Aliasing?
We study aliasing in the spatial domain
Next class, we will study the frequency domain (take a quick review of the Fourier transform if you can)

Better Reconstruction Method
Assume some geometric proxy
Dynamically Reparameterized Light Fields

Focal Surface Parameterization

Focal Surface Parameterization (2)
Intersect each new ray with the focal plane
Back-trace the ray into the data cameras
Interpolate the corresponding rays
But this needs a ray–plane intersection; can we avoid that?

Relative Parameterization
Using relative parameterization is hardware friendly:
– [s′, t′] corresponds to a particular texture
– [u′, v′] corresponds to the texture coordinate
– the focal plane can be encoded as the disparity across the light field

Results
Quadralinear vs. focal plane at the monitor

Aperture Filtering
Simple and easy to implement
Causes aliasing (as we will see later in the frequency analysis)
Can we blend more than 4 neighboring rays? 8? 16? 32? Or more? That is aperture filtering
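Blending many neighboring rays is exactly synthetic-aperture refocusing. A shift-and-add sketch (helper name and toy scene are illustrative), assuming a purely horizontal camera translation and a fronto-parallel focal plane:

```python
import numpy as np

def synthetic_aperture(views, disparity):
    """Shift-and-add refocusing: average all views after shifting view k by
    k * disparity pixels. Objects whose per-camera disparity matches come
    into focus; everything else is blurred away."""
    h, w = views[0].shape
    acc = np.zeros((h, w))
    for k, img in enumerate(views):
        shift = int(round(k * disparity))
        acc += np.roll(img, -shift, axis=1)   # horizontal camera translation
    return acc / len(views)

# toy scene: a bright column that moves 2 px per camera (disparity 2)
views = []
for k in range(8):
    img = np.zeros((4, 32))
    img[:, 5 + 2 * k] = 1.0
    views.append(img)

focused = synthetic_aperture(views, disparity=2.0)    # refocus on that plane
defocused = synthetic_aperture(views, disparity=0.0)  # focus elsewhere: smeared
```

With the matching disparity all eight copies align at column 5 and reinforce; with the wrong disparity each contributes only 1/8 to scattered columns.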

Small aperture size

Large aperture size

Using a very large aperture size

Variable focus

Close focal surface

Distant focal surface

© Marc Levoy: Synthetic aperture photography

Synthetic aperture videography

Long-range synthetic aperture photography
– width of aperture: 6′
– number of cameras: 45
– spacing between cameras: 5″
– camera's field of view: 4.5°

The scene
– distance to occluder: 110′
– distance to targets: 125′
– field of view at target: 10′

People behind bushes
– close to sunset, so input images were noisy
– noise decreases as sqrt(number of cameras) → camera arrays facilitate low-light imaging
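The sqrt(number of cameras) claim is easy to verify numerically; a small Monte Carlo sketch (signal, noise level, and trial counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.5        # true pixel value
sigma = 0.2         # per-camera noise standard deviation
n_trials = 20000

for n_cams in (1, 4, 16):
    # each trial: average n_cams noisy measurements of the same pixel
    frames = signal + sigma * rng.standard_normal((n_trials, n_cams))
    noise_std = frames.mean(axis=1).std()
    # residual noise should fall like sigma / sqrt(n_cams)
    print(n_cams, round(noise_std, 3), round(sigma / np.sqrt(n_cams), 3))
```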

Effective depth of field
– 35mm camera lens with equivalent field of view: 460mm
– typical depth of field for an f/4 460mm lens: 10′ (allowing a 1-pixel circle of confusion in a 640 × 480 pixel image)
– effective depth of field of our array: 1′

Synthetic aperture photography using an array of mirrors
– 11-megapixel camera (4064 × 2047 pixels)
– 18 × 12 inch effective aperture, 9 feet to scene
– 22 mirrors, tilted inwards → 22 views, each 750 × 500 pixels


Perspective? Or Not? It's pretty cool, dude!

Multiperspective Camera?

Computational Illumination
Presence or absence
– flash/no-flash
Light position
– multi-flash for depth edges
– programmable dome (image re-lighting and matting)
Light color/wavelength
Spatial modulation
– synthetic aperture illumination
Temporal modulation
– TV remote, motion tracking, Sony ID-cam, RFIG
Natural lighting condition
– day/night fusion

A Nighttime Scene: Objects Are Difficult to Understand due to Lack of Context
– dark buildings, reflections on buildings, unknown shapes

Enhanced Context: all features from the night scene are preserved, but the background is clear
– "well-lit" buildings, reflections in building windows, tree and street shapes

The background is captured from the daytime scene using the same fixed camera
Night image + day image → result: enhanced image

Mask is automatically computed from scene contrast

But Simple Pixel Blending Creates Ugly Artifacts

Pixel Blending vs. Our Method: Integration of Blended Gradients

Pipeline: nighttime image I1 and daytime image I2 → gradient fields G1 and G2 → importance image W → mixed gradient field G → final result

Reconstruction from Gradient Field
Problem: minimize the error |∇I′ − G|
Estimate I′ so that ∇I′ = G
Poisson equation: ∇²I′ = div G
Full multigrid solver
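The slides solve the Poisson equation with a full multigrid solver; as a compact stand-in, here is an FFT-based solve that assumes periodic boundaries (a simplification for illustration, not the method used in the slides):

```python
import numpy as np

def poisson_reconstruct(gx, gy):
    """Recover I (up to a constant) from gradient fields gx, gy by solving
    the Poisson equation lap(I) = div G in the Fourier domain, assuming
    periodic boundaries."""
    h, w = gx.shape
    # divergence of G via backward differences (adjoint of forward differences)
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    fx = np.fft.fftfreq(w)
    fy = np.fft.fftfreq(h)
    # eigenvalues of the periodic discrete Laplacian
    denom = (2 * np.cos(2 * np.pi * fx)[None, :] - 2) \
          + (2 * np.cos(2 * np.pi * fy)[:, None] - 2)
    denom[0, 0] = 1.0                 # DC term is unconstrained
    I_hat = np.fft.fft2(div) / denom
    I_hat[0, 0] = 0.0                 # pin the free constant: zero mean
    return np.real(np.fft.ifft2(I_hat))

# round-trip check on a smooth periodic image
h, w = 32, 32
y, x = np.mgrid[0:h, 0:w]
I = np.sin(2 * np.pi * x / w) + np.cos(2 * np.pi * y / h)
gx = np.roll(I, -1, axis=1) - I       # periodic forward differences
gy = np.roll(I, -1, axis=0) - I
I_rec = poisson_reconstruct(gx, gy)
err = np.abs((I_rec - I_rec.mean()) - (I - I.mean())).max()
```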

Image Tools
Gradient domain operations
– applications in tone mapping, fusion and matting
Graph cuts
– applications in segmentation and mosaicing
Bilateral and trilateral filters
– applications in image enhancement

Intensity Gradient in 1D
Gradient at x: G(x) = I(x+1) − I(x) (forward difference)

Reconstruction from Gradients
For n intensity values, about n gradients

Reconstruction from Gradients
1D integration: I(x) = I(x−1) + G(x−1), i.e. a cumulative sum
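In code, the 1D integration really is just a cumulative sum; a minimal NumPy sketch:

```python
import numpy as np

I = np.array([3.0, 5.0, 4.0, 7.0, 6.0])
G = I[1:] - I[:-1]      # forward differences: n-1 gradients for n values

# 1D integration: I(x) = I(x-1) + G(x-1), i.e. a cumulative sum
# seeded with the first intensity value
I_rec = np.concatenate(([I[0]], I[0] + np.cumsum(G)))
```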

Intensity Gradient in 2D
Gradient at (x, y) as forward differences:
Gx(x, y) = I(x+1, y) − I(x, y)
Gy(x, y) = I(x, y+1) − I(x, y)
G(x, y) = (Gx, Gy)
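These forward differences translate directly to array slicing; a small sketch (leaving the last column/row of each gradient at zero is one common convention, assumed here):

```python
import numpy as np

def forward_gradients(I):
    """Forward-difference gradients Gx(x,y) = I(x+1,y) - I(x,y) and
    Gy(x,y) = I(x,y+1) - I(x,y); here axis 1 is x and axis 0 is y."""
    Gx = np.zeros_like(I)
    Gy = np.zeros_like(I)
    Gx[:, :-1] = I[:, 1:] - I[:, :-1]
    Gy[:-1, :] = I[1:, :] - I[:-1, :]
    return Gx, Gy

I = np.arange(12, dtype=float).reshape(3, 4)  # columns differ by 1, rows by 4
Gx, Gy = forward_gradients(I)
```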

Intensity Gradient Vectors in Images

Reconstruction from Gradients (2D integration)
Given G(x, y) = (Gx, Gy), how to compute I(x, y) for the image?
For n² image pixels, 2n² gradients!

Intensity Gradient in 2D: recovering the original image (Grad X, Grad Y → 2D integration)

Intensity Gradient Manipulation
Pipeline: Grad X, Grad Y → gradient processing → New Grad X, New Grad Y → 2D integration → recovered manipulated image

Intensity Gradient Manipulation: A Common Pipeline

Reconstruction from Gradients

Euler-Lagrange Equation

Application: Compressing Dynamic Range
How could you put all this information into one image?

Attenuate High Gradients
Maintain local detail at the cost of global range (Fattal et al., SIGGRAPH 2002)

Basic Assumptions
– The eye responds more to local intensity differences (ratios) than to global illumination
– An HDR image must have some large-magnitude gradients
– Fine details consist only of smaller-magnitude gradients

Gradient Compression in 1D

Gradient Domain Method

Basic Method
– Take the log of the luminances
– Calculate the gradient at each point
– Scale the magnitudes of the gradients with a progressive scaling function (large magnitudes are scaled down more than small magnitudes)
– Re-integrate the gradients and invert the log to get the final image
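A single-scale sketch of the attenuation step (Fattal et al. apply it in a multiresolution pyramid, and the α, β values below are illustrative choices): gradients of the log luminance with magnitude m are scaled by (m/α)^(β−1) with β < 1, so large gradients shrink more than small ones. Re-integration would use the Poisson solve from the earlier slides and is omitted here:

```python
import numpy as np

def attenuate_gradients(lum, alpha_frac=0.1, beta=0.85):
    """Fattal-style progressive gradient attenuation on the log luminance.
    Gradients larger than alpha are compressed; smaller ones are boosted."""
    H = np.log(lum + 1e-6)
    Gx = np.zeros_like(H); Gy = np.zeros_like(H)
    Gx[:, :-1] = H[:, 1:] - H[:, :-1]
    Gy[:-1, :] = H[1:, :] - H[:-1, :]
    mag = np.sqrt(Gx**2 + Gy**2)
    alpha = alpha_frac * mag.mean() + 1e-12
    safe = np.maximum(mag, 1e-12)
    scale = (safe / alpha) ** (beta - 1.0)   # progressive scaling function
    scale[mag < 1e-12] = 1.0                 # leave zero gradients alone
    return Gx * scale, Gy * scale, mag, mag * scale

# an HDR-ish luminance ramp spanning 4 orders of magnitude
lum = np.logspace(0, 4, 64)[None, :].repeat(8, axis=0)
Gx, Gy, mag_before, mag_after = attenuate_gradients(lum)
```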

Summary: Intensity Gradient Manipulation
Grad X, Grad Y → gradient processing → New Grad X, New Grad Y → 2D integration

Graph and Images (credits: Jianbo Shi)

Brush strokes → computed labeling

Image objective: 0 if red, ∞ otherwise; 0 for any label

Graph-Based Image Segmentation
V: graph nodes (image pixels)
E: edges connecting nodes (links to neighboring pixels)
Wij: edge weight (pixel similarity)
Segmentation = graph partition

Minimum-Cost Cuts in a Graph
Cut: set of edges whose removal makes the graph disconnected
S(i, j): similarity between pixel i and pixel j
Cost of a cut (A, B): cut(A, B) = Σ_{i∈A, j∈B} S(i, j)
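A tiny numeric illustration of the cut cost, foreshadowing the next slide's point that plain min cuts favor isolated clusters (the similarity matrix is made up for illustration):

```python
import numpy as np

def cut_cost(S, A):
    """Cost of cut (A, B): sum of similarities S[i, j] over i in A, j in B."""
    n = S.shape[0]
    B = [j for j in range(n) if j not in A]
    return sum(S[i, j] for i in A for j in B)

# 4 pixels: {0, 1} and {2} form two groups linked with moderate similarity;
# pixel 3 is an outlier with tiny similarity to everything
S = np.array([[0.0,  0.9,  0.2,  0.01],
              [0.9,  0.0,  0.2,  0.01],
              [0.2,  0.2,  0.0,  0.01],
              [0.01, 0.01, 0.01, 0.0 ]])

balanced = cut_cost(S, [0, 1])   # cut between the two natural groups
isolated = cut_cost(S, [3])      # cut that strips off the outlier alone
# the isolated cut is much cheaper, so plain min cut prefers it
```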

Problem with Min Cuts
Min cuts favor isolated clusters

Normalized Cuts in a Graph
Ncut = balanced cut
NP-hard!

Graph Cuts for Segmentation and Mosaicing
Brush strokes → computed labeling
Cut ~ string on a height field

Modeling Projections

Modeling Projection: The Coordinate System
We will use the pinhole model as an approximation
– put the optical center (center of projection, COP) at the origin
– put the image plane (projection plane, PP) in front of the COP (why?)
– the camera looks down the negative z axis; we need this if we want right-handed coordinates
Slide by Steve Seitz

Modeling Projection: Projection Equations
Compute the intersection with the PP of the ray from (x, y, z) to the COP; derived using similar triangles (on board):
(x, y, z) → (−d·x/z, −d·y/z, −d)
We get the projection by throwing out the last coordinate:
(x, y, z) → (−d·x/z, −d·y/z)
Slide by Steve Seitz

Homogeneous Coordinates
Is this a linear transformation? No: division by z is nonlinear
Trick: add one more coordinate:
– homogeneous image coordinates: (x, y) ⇒ [x, y, 1]ᵀ
– homogeneous scene coordinates: (x, y, z) ⇒ [x, y, z, 1]ᵀ
Converting from homogeneous coordinates: [x, y, w]ᵀ ⇒ (x/w, y/w)
Slide by Steve Seitz

Perspective Projection
Projection is a matrix multiply using homogeneous coordinates:
[x, y, z, 1]ᵀ ↦ [x, y, −z/d]ᵀ (a 3×4 matrix with rows [1 0 0 0], [0 1 0 0], [0 0 −1/d 0]),
then divide by the third coordinate to get (−d·x/z, −d·y/z)
This is known as perspective projection; the matrix is the projection matrix
Can also formulate as a 4×4 matrix, dividing by the fourth coordinate
Slide by Steve Seitz
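The 3×4 formulation can be checked in a few lines; a sketch with the image plane at z = −d (the value of d is a free parameter here):

```python
import numpy as np

d = 1.0   # distance from the COP to the image plane

# 3x4 perspective projection matrix: [x, y, z, 1]^T -> [x, y, -z/d]^T
P = np.array([[1.0, 0.0, 0.0,      0.0],
              [0.0, 1.0, 0.0,      0.0],
              [0.0, 0.0, -1.0 / d, 0.0]])

def project(X):
    """Multiply homogeneous scene coordinates by P, then divide by the
    third (homogeneous) coordinate."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# a point in front of the camera (negative z), per the slide's convention
p = project(np.array([4.0, 2.0, -2.0]))   # expect (-d*x/z, -d*y/z) = (2, 1)
```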

Orthographic Projection
Special case of perspective projection: the distance from the COP to the PP is infinite
Also called "parallel projection": (x, y, z) → (x, y)
What's the projection matrix?
Slide by Steve Seitz

Spherical Projection
What if the PP is spherical with center at the COP?
In spherical coordinates, projection is trivial: (θ, φ) → (θ, φ)
Note: doesn't depend on the focal length d!
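A sketch of the spherical projection, confirming that moving a point along its viewing ray (equivalently, changing d) leaves (θ, φ) unchanged:

```python
import numpy as np

def spherical_project(X):
    """Project a 3D point onto a sphere centered at the COP: keep only the
    ray direction's angles (theta, phi). The sphere radius drops out."""
    x, y, z = X
    theta = np.arctan2(y, x)                  # azimuth
    phi = np.arccos(z / np.linalg.norm(X))    # polar angle from +z
    return theta, phi

# scaling the point (sliding it along its ray) does not change the projection
a = spherical_project(np.array([1.0, 1.0, 1.0]))
b = spherical_project(np.array([5.0, 5.0, 5.0]))
```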