3D From 2D Image Processing Seminar Presented by: Eli Arbel.

Topics Covered
- Inferring 3D surfaces from 2D contours using symmetry (from a single view)
- Using symmetry to enhance structure-from-motion methods (from a sequence of images)

Inferring 3D surfaces from 2D contours using symmetry

The Challenges
- Recover the orientations of 3D surfaces projected onto a 2D image.
- Ambiguity: there is an infinite number of 3D interpretations for a single image.
- Recovery cannot be done without some assumptions.

The Challenges – cont'd
Still, humans are able to perceive 3D surfaces in a single image…

Monocular Cues
- Shading: depends on the reflectance properties of the surface and on the light sources.
- Texture: prior knowledge is required.
- Contour lines: give shape information near the contours only.
What about conflicts?
- Shading vs. contours: humans prefer contour cues.
- Perception of shape from color images is not faster.
- Methods based on cues other than contour require stronger assumptions.

Shape from contour – some examples

The method Based on symmetries in the scene to infer surface orientation Assumes orthographic projection Also considers interaction between surfaces

Symmetries
Defined as a pointwise correspondence between two curves. Associated notions:
- Curves of symmetry
- Lines of symmetry
- Axis of symmetry

Symmetries – cont'd
- Parallel symmetry: defined by a correspondence function between the two curves; the lines of symmetry connect corresponding points.
- Skew symmetry: the axis is straight, and the lines of symmetry meet it at a constant angle (not necessarily orthogonal).

Qualitative Inferences from Symmetries
- Symmetries can be a major source of information for extracting shape from contour.
- They can give constraints on the interpretations of single surfaces.
Definition: General Viewpoint. A scene is said to be imaged from a general viewpoint if its perceptual properties are preserved under slight variations of the viewing direction; in particular straightness, parallelism, and symmetry of curves.

Qualitative Inferences – Case I
Case I: one skew symmetry covers the entire boundary of the surface. Such a contour – if bounded by non-limb edges – must be planar under the general-viewpoint assumption.
Definition: limb edges are points on the surface whose normal is orthogonal to the viewing direction; non-limb edges (e.g., wireframes) are not.

Qualitative Inferences – Case I, Proof:
1. Parallel lines in the image plane must be a projection of parallel lines in 3D space; thus the symmetry lines are parallel in 3D.
2. The axis of the skew symmetry must be a projection of a straight line.
3. The axis of symmetry is obtained by connecting the midpoints of the symmetry lines.
From 1, 2, and 3 above, the 3D contour must be planar.

Qualitative Inferences – Case II
Case II: the boundary is covered by two symmetries, at least one of which must be a parallel symmetry. Figures belonging to Case II give us the most information about surface shape.

Qualitative Inferences – Case II
Definition: a Zero Gaussian Curvature (ZGC) surface is a surface on which the product of the minimum and maximum curvatures is zero everywhere. (Figure: a non-ZGC vs. a ZGC surface.)

Qualitative Inferences – Case II
If the surface generates one parallel symmetry and one skew symmetry, with straight curves of skew symmetry which are also the lines of symmetry for the parallel symmetry, then the surface must be a ZGC surface (assuming a general viewpoint and no surface variations that do not produce edges in the image plane). This can be proved.

Qualitative Inferences – Case III
Case III includes all remaining cases: figures whose contours do not satisfy the properties above, e.g., figures with some missing boundaries.

Recovering Surface Orientations
Reminder: in order to recover a 3D surface from its 2D projection, we need to find the orientation (normal) at each point of the surface. Orientations of a surface can be recovered using a constraint system derived from symmetries in the image. We will focus on ZGC surfaces; their presence is indicated by observing the properties given above.

Some Math… Parametric representation of curves: S(r) = (x(r), y(r)) in 2D S(r) = (x(r), y(r), z(r)) in 3D

Some More Math...
Parametric representation of a surface: X(u,v) = (x(u,v), y(u,v), z(u,v)).
Example – Dini's surface:
x = cos(u)·sin(v)
y = sin(u)·sin(v)
z = cos(v) + log(tan(v/2)) + u/50
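The parametric surface above is easy to evaluate directly. A minimal sketch (the function name is mine; the u/50 term from the slide is generalized to a parameter b):

```python
import math

def dini_surface(u, v, a=1.0, b=1.0 / 50.0):
    """Evaluate Dini's surface at parameters (u, v).

    With a = 1 and b = 1/50 this matches the slide's formulas:
    x = cos(u)sin(v), y = sin(u)sin(v), z = cos(v) + log(tan(v/2)) + u/50.
    Requires 0 < v < pi so that tan(v/2) > 0.
    """
    x = a * math.cos(u) * math.sin(v)
    y = a * math.sin(u) * math.sin(v)
    z = a * (math.cos(v) + math.log(math.tan(v / 2.0))) + b * u
    return (x, y, z)
```

Sweeping u over several turns and v over (0, pi/2] traces the familiar twisted-horn shape.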

And a Little More...
Gradient space: given a plane Ax + By + Cz + D = 0, its normal N is given by the vector (A, B, C). For C ≠ 0 the plane equation can be rewritten as px + qy + z + c = 0, and so its normal vector becomes (p, q, 1), where p = A/C and q = B/C. The pair (p, q) defines a 2D space in which every point corresponds to the normal of a plane in 3D.
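The gradient-space mapping is a one-line rescaling; a small sketch (function name mine):

```python
def gradient_space(A, B, C):
    """Map a plane Ax + By + Cz + D = 0 (C != 0) to its gradient-space point (p, q).

    The normal (A, B, C) is rescaled to (p, q, 1) with p = A/C and q = B/C,
    so every non-vertical plane orientation becomes one point in the p-q plane.
    """
    if C == 0:
        raise ValueError("vertical plane: no finite gradient-space point")
    return (A / C, B / C)
```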

One Last Thing...
Definition: rulings are the lines connecting corresponding points of two curves forming a parallel symmetry (the correspondence is determined by the tangents of the curves).
Note: the orientation of a ZGC surface does not change along a ruling.

Curved Shared Boundary Constraint (CSBC)
This constraint relates the orientations of two surfaces on opposite sides of an edge.
Two surfaces: X_i(u,v), i = 1, 2.
Normal vectors (in p-q space): N_i(s) = (p_i(s), q_i(s), 1), i = 1, 2.
Intersection curve: C(s), with tangent vector T(s) = dC/ds.
Since T(s) lies on the tangent planes of both X_1 and X_2, it is orthogonal to both N_1(s) and N_2(s). That is:
T(s) · N_1(s) = 0,  T(s) · N_2(s) = 0
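As a numerical sketch of the CSBC (a helper of my own, using the (p, q, 1) normal convention above): given the 3D tangent of the shared boundary and the two gradient-space normals, both dot products should vanish.

```python
def csbc_residuals(tangent, n1, n2):
    """Curved Shared Boundary Constraint residuals.

    tangent: 3D tangent vector T(s) of the shared boundary curve.
    n1, n2:  gradient-space normals (p_i, q_i, 1) of the two surfaces.
    Both returned dot products are zero exactly when the constraint holds.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(tangent, n1), dot(tangent, n2))
```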

CSBC – cont'd
A stronger constraint can be obtained if we assume that the intersection curve C(s) is planar. Let N_c = (p_c, q_c, 1) be the normal of the plane on which C(s) lies. From this constraint, T(s) · N_c = 0, we get:
z'(s) = -(p_c · x'(s) + q_c · y'(s))
Note that the image tangent (x'(s), y'(s)) can be extracted directly from the image.

Inner Surface Constraint (ISC)
This constraint restricts the relative orientations of neighboring points within a surface, using the image of the surface's rulings. Let X(u,v) = (x(u,v), y(u,v), z(u,v)) be a parametric representation of the surface, with v along the direction of minimum curvature (the rulings, for ZGC surfaces).
ISC, intuitively: as we move along the u parameter (the axis of symmetry), the surface orientation should move in the p-q plane in a direction orthogonal to the image direction of the rulings. (Figure: the step from (p_i, q_i) to (p_i+1, q_i+1) in the p-q plane, with ruling R_i.)

ISC – cont'd
The change in the surface's orientation when moving from point i to point i+1 in p-q space is given by the vector (p_i+1 - p_i, q_i+1 - q_i). Let (x_i, y_i) be the image direction of the ruling R_i. According to the ISC, the following equation should hold:
(p_i+1 - p_i)·x_i + (q_i+1 - q_i)·y_i = 0
Again, (x_i, y_i) can be computed directly from the image.
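The ISC equation for one pair of neighboring points is a single dot product; a sketch (function name mine):

```python
def isc_residual(p_i, q_i, p_next, q_next, ruling_dir):
    """Inner Surface Constraint residual for one step along the axis of symmetry.

    The gradient-space step (p_next - p_i, q_next - q_i) should be orthogonal
    to the image direction (x_i, y_i) of the ruling R_i, so the residual is
    their dot product; it is zero exactly when the ISC holds.
    """
    dx, dy = ruling_dir
    return (p_next - p_i) * dx + (q_next - q_i) * dy
```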

Orthogonality Constraint (OC)
This constraint is derived from the assumption of orthogonality between the axis of parallel symmetry and the lines of symmetry. For ZGC surfaces, this constraint implies that the cross section must be cut along the lines of maximum curvature. In general, however, this conflicts with our drive to perceive the cross section as planar. (Figure: cut along curves of maximum curvature? Preferred perception vs. OC perception.)

OC – cont'd
Given a ZGC surface, let A be the tangent vector of the axis of symmetry and B the tangent vector of the ruling at the same point, and let N = (p, q, 1) be the normal of the tangent plane there. Since A and B lie on the tangent plane of the surface, they can be represented as A = (a_1, a_2, -(a_1·p + a_2·q)) and B = (b_1, b_2, -(b_1·p + b_2·q)). From the orthogonality constraint A · B = 0 we get:
a_1·b_1 + a_2·b_2 + (a_1·p + a_2·q)(b_1·p + b_2·q) = 0
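The OC equation can likewise be evaluated per point; a sketch (function name mine) that lifts the two image directions onto the tangent plane with normal (p, q, 1) and takes their 3D dot product:

```python
def oc_residual(a, b, p, q):
    """Orthogonality Constraint residual at a point with tangent-plane normal (p, q, 1).

    a = (a1, a2): image direction of the axis of parallel symmetry.
    b = (b1, b2): image direction of the ruling at the same point.
    A tangent vector (t1, t2, t3) satisfies t1*p + t2*q + t3 = 0, so its
    z-component is -(t1*p + t2*q); the 3D dot product A . B then reduces to
    the expression below, which is zero exactly when the OC holds.
    """
    a1, a2 = a
    b1, b2 = b
    return a1 * b1 + a2 * b2 + (a1 * p + a2 * q) * (b1 * p + b2 * q)
```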

Combining the Constraints
In order to recover a 3D surface from its 2D projection, we need to know the orientation at each point of the surface. For a ZGC surface, the orientation along a ruling is constant, so it is sufficient to find the orientation of a single point on each ruling. This leaves us with computing the orientations of points along the axis of symmetry.

Combining the Constraints – cont'd
Suppose the surface orientation is to be computed for n points. Equation counts: CSBC gives n equations, ISC gives n-1, and OC gives n. We therefore have 3n - 1 constraint equations in 2n + 2 unknowns (the n orientations (p_i, q_i) plus the cross-section plane orientation (p_c, q_c)). For n > 3, the system is over-constrained.
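One generic way to handle such an over-constrained system (a sketch, not the paper's actual solver) is to stack the linearized constraint equations into a matrix system A x = b over the 2n + 2 unknowns and solve it in the least-squares sense:

```python
import numpy as np

def solve_orientations(A, b):
    """Least-squares solution of an over-constrained linear system A x = b,
    e.g. 3n - 1 stacked constraint rows over the 2n + 2 orientation unknowns.
    """
    x, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return x
```

With more equations than unknowns, `lstsq` returns the x minimizing the total squared residual rather than satisfying every constraint exactly.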

Combining the Constraints – cont'd
Because the system is over-constrained, it may be impossible to find an interpretation of the contours such that all the constraints are obeyed by the surface. The OC and the planarity of the cross section assumed by the CSBC are usually in conflict. However, there are cases where this set of constraints gives a unique answer, or leaves only one degree of freedom unconstrained.

Recoverable Surfaces
Circular cones: a circular cone is a linear straight homogeneous generalized cylinder whose cross section is a circle. These are the only surfaces for which the three constraints have a unique solution.

Recoverable Surfaces – cont'd
Cylindrical surfaces: a cylindrical surface is a ZGC surface whose rulings are parallel to each other in 3D. With this kind of surface, one degree of freedom remains: q_c in the CSBC.

Recoverable Surfaces – cont'd
General ZGC surfaces: for general ZGC surfaces, the three constraints cannot be satisfied exactly. In most cases, the planarity assumption is stronger than the orthogonality assumption, so the solution is to maximize orthogonality while keeping the CSBC and ISC satisfied exactly. Again, the degree of freedom is the orientation of the cross-section plane, namely (p_c, q_c).

Estimating (pc,qc)
The method is based on observations of human perception preferences. In particular:
- We prefer compact shapes.
- We prefer medium slant to very high or very low slant.
- We have a large range of uncertainty for the perceived slants.

Estimating (pc,qc) – cont'd
An ellipse-fitting process is applied to the top surface to approximate the cross section by an ellipse.

Estimating (pc,qc) – cont'd
After the ellipse is fitted, the orientation of the circle that would project as the fitted ellipse is found; this orientation becomes (p_c, q_c). A final update of q_c is performed to reflect the bias humans have toward orienting the cross section at 45°. This technique was developed from experimental results with humans: human estimation of top-plane slants is imprecise, with an average standard deviation of 8°, while the average difference between the algorithm's estimates and the human estimates is 6°.
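The geometric step can be sketched as follows (a function of my own, not the paper's exact procedure): under orthographic projection, a circle viewed at slant σ projects to an ellipse whose axis ratio is minor/major = cos σ, with the tilt along the minor axis, and a slant/tilt pair maps to the gradient-space point at distance tan σ along the tilt direction.

```python
import math

def circle_orientation_from_ellipse(major, minor, tilt_angle):
    """Orientation (p_c, q_c) of a circle that would project orthographically
    to an ellipse with the given semi-axes.

    tilt_angle is the image direction of the ellipse's minor axis (radians).
    Sketch only: assumes cos(slant) = minor/major and places (p_c, q_c) at
    distance tan(slant) along the tilt direction in gradient space.
    """
    slant = math.acos(minor / major)
    t = math.tan(slant)
    return (t * math.cos(tilt_angle), t * math.sin(tilt_angle))
```

A circle seen head-on (major == minor) maps to the frontal orientation (0, 0), as expected.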

Testing the Algorithm
The inputs to the program are a collection of curves, grouped into closed regions using continuity. Each closed region is considered to be a surface. Each curve in a region is checked for parallel symmetry against every other curve in the region; two curves are considered parallel symmetric if they yield a low symmetry error.

Testing the Algorithm – cont'd
Surfaces containing a parallel symmetry are treated as curved, and the others as planar. For curved surfaces, the curves joining parallel-symmetric curves are checked for straightness, which confirms that the surface is ZGC. For each surface, the orientation of the planar cross section (p_c, q_c) is computed, and then the orientation (p_i, q_i) at each point on the surface is computed using the ISC and CSBC constraints.

Some results

Some results – cont’d

Some results – cont’d

Using Bilateral Symmetry To Improve 3D Reconstruction From Image Sequences

Introduction
Structure-from-motion methods:
- Like stereo, but with a single camera
- Static scene
- A sequence of images, each one a different view of the object
- Reconstruction based on features
- Like other methods, sensitive to noise

Introduction – cont’d

Noisy Projections
Noise can be present in the 2D projections of the scene due to measurement deviations, motion of the camera, etc. 3D mirror symmetry is one of the most common symmetries in our environment. If the reconstructed object is known to be mirror symmetric, a symmetrization procedure can be applied to the noisy projections to enhance the reconstruction of the 3D object.

The Framework
1. Apply a symmetrization procedure to the 2D data with respect to the 3D symmetry.
2. Reconstruct the object using any structure-from-motion method.
3. Given the 3D configuration of stage 2, find the closest mirror-symmetric configuration to it.

3D Symmetrization
Consider a 3D configuration of points {P_i}, i = 1..n. If the configuration comes from a 3D mirror-symmetric object, then for every point P_i there exists a point P_j which is its counterpart under reflection.
Definition: Symmetry Distance. Let {P̂_i} be the closest mirror-symmetric configuration derived from {P_i}. The Symmetry Distance is the quantity:
SD = (1/n) · Σ_i ||P_i - P̂_i||²
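Given a configuration and a candidate mirror-symmetric configuration, the Symmetry Distance is a mean of squared distances; a sketch (function name mine):

```python
def symmetry_distance(points, sym_points):
    """Symmetry Distance: mean squared distance between each point P_i and its
    counterpart in a mirror-symmetric configuration {hat P_i}.
    """
    n = len(points)
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, ps))
        for p, ps in zip(points, sym_points)
    ) / n
```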

3D Symmetrization Algorithm
Given a configuration of points {P_i} in R³:
1. Divide the points into sets of one or two points, for instance {P0,P0}, {P1,P3}, {P2,P2}. This defines a matching on the points.
2. Reflect all points across a randomly chosen mirror plane, obtaining the reflected points.
3. Find the optimal rotation and translation which minimize the sum of squared distances between the original points and the reflected points.
4. Average each original point with its matched reflected point, obtaining the point P̂_i. The points {P̂_i} are mirror symmetric.
5. Evaluate the Symmetry Distance.
6. Minimize the Symmetry Distance by repeating steps 1-5 over all possible divisions of the points into sets.
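Steps 2 and 4 can be sketched for a fixed mirror plane (a simplification of my own: the optimal rotation/translation search of step 3 is omitted, and the function name is hypothetical):

```python
import numpy as np

def symmetrize_pairs(points, pairs, plane_normal, plane_point):
    """Reflect points across a given mirror plane, route each reflected point
    to its matched partner, and average (steps 2 and 4 of the algorithm).

    pairs: list of (i, j) index pairs from step 1; a self-matched point is (i, i).
    The output configuration is mirror symmetric about the given plane.
    """
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    p0 = np.asarray(plane_point, dtype=float)
    P = np.asarray(points, dtype=float)
    # reflect every point across the plane
    R = P - 2.0 * ((P - p0) @ n)[:, None] * n
    # average each original point with the reflection of its matched partner
    out = P.copy()
    for i, j in pairs:
        out[i] = (P[i] + R[j]) / 2.0
        out[j] = (P[j] + R[i]) / 2.0
    return out
```

For a self-matched pair (i, i), the averaged point lands on the mirror plane itself.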

Complexity of Matching
Matching feature points is of exponential complexity. However, by constraining the search space of possible matches, the complexity can be greatly reduced: graph matching allows only points of equal connectivity rank (rank 1, rank 3, rank 4, ...) to be matched. Is rank alone sufficient? Points of equal rank, such as P0 and P1 in the example, may need higher-order connectivity consideration as well.

Complexity of Matching – cont'd
In the above example, the number of possible matchings was reduced to 2. For the class of cyclically connected configurations, the number of possible matchings is reduced to linear.

Complexity of Matching: Heuristic Approach
The above approach assumes the matching is found prior to finding the reflection plane. But the matching and the reflection plane are related to each other:
- Given a matching, we can determine the reflection plane.
- Given the reflection plane, we can constrain the possible matchings.

Complexity of Matching: Heuristic Approach – cont'd
The following heuristic can be used:
1. For every possible pair of points, determine the corresponding reflection plane (the plane perpendicular to, and passing through the midpoint of, the segment connecting the two points).
2. Build a histogram of all reflection planes.
3. Peaks in the histogram point to candidates for the optimal reflection plane.
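The heuristic above can be sketched as follows (my own implementation; the plane parameterization as a binned unit normal plus offset is an illustrative choice, not from the paper):

```python
from collections import Counter
import math

def candidate_plane_histogram(points, bin_size=0.1):
    """For every pair of points, compute the perpendicular bisector plane
    (candidate mirror plane) and bin its parameters (unit normal + offset).
    Peaks of the returned Counter are candidates for the optimal plane.
    """
    hist = Counter()
    n_pts = len(points)
    for i in range(n_pts):
        for j in range(i + 1, n_pts):
            d = [b - a for a, b in zip(points[i], points[j])]
            norm = math.sqrt(sum(c * c for c in d))
            if norm == 0:
                continue
            n = [c / norm for c in d]  # plane normal: direction between the points
            mid = [(a + b) / 2 for a, b in zip(points[i], points[j])]
            offset = sum(nc * mc for nc, mc in zip(n, mid))
            # canonical sign so (n, offset) and (-n, -offset) share a bin
            if n[0] < 0 or (n[0] == 0 and (n[1] < 0 or (n[1] == 0 and n[2] < 0))):
                n = [-c for c in n]
                offset = -offset
            key = tuple(round(c / bin_size) for c in n + [offset])
            hist[key] += 1
    return hist
```

For a configuration that is symmetric about a single plane, the true mirror plane accumulates one vote per mirror pair and dominates the histogram.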

2D Symmetrization
Definition: projected mirror-symmetry constraint. All segments connecting pairs of mirror-symmetric points of a mirror-symmetric 3D object, projected onto a 2D plane, have the same orientation.

2D Symmetrization – cont'd
Given a 2D configuration of connected points and a matching between the points of the configuration, we find a new configuration which satisfies:
- It has the same topology as the original configuration.
- It satisfies the projected mirror-symmetry constraint.
- The Symmetry Distance is minimized.

Finding the Closest Projected Mirror Symmetry
Consider two matched points P_0 and P_1 in the configuration and an orientation ω. We want to find two points P̂_0 and P̂_1 such that the segment connecting them is at orientation ω and the sum ||P_0 - P̂_0||² + ||P_1 - P̂_1||² is minimized.

Finding the Closest Projected Mirror Symmetry – cont'd
Suppose we have n 2D points and a matching of these points; we need to find the orientation ω which minimizes the Symmetry Distance. For a given orientation ω, we get a sum of squared residuals; taking the derivative with respect to ω and equating it to zero yields the optimal orientation in closed form.
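A sketch of that closed form under my own derivation (assuming each matched segment d_k = P1_k - P0_k is forced parallel to ω, with the perpendicular residual split equally between its endpoints; minimizing Σ|d_k × u(ω)|² then gives tan 2ω = 2·Σ(dx·dy) / Σ(dx² - dy²), the classic principal-axis formula):

```python
import math

def best_segment_orientation(segments):
    """Closed-form orientation omega (radians) minimizing the total squared
    perpendicular residual of the segment vectors d_k = (dx, dy).

    Derived by setting d/d(omega) of sum |d_k x u(omega)|^2 to zero:
    tan(2*omega) = 2*sum(dx*dy) / sum(dx^2 - dy^2).
    """
    sxx = sum(dx * dx for dx, dy in segments)
    syy = sum(dy * dy for dx, dy in segments)
    sxy = sum(dx * dy for dx, dy in segments)
    return 0.5 * math.atan2(2.0 * sxy, sxx - syy)
```

If every segment already points the same way, the formula simply returns that direction.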

Finding the Closest Projected Mirror Symmetry – Examples
(Figures: configurations (a) and (b), before and after symmetrization.)

The Reconstruction Process
The enhancement algorithm described above is independent of the reconstruction method. It should be applied only when reconstructing bilaterally symmetric objects. Bilateral symmetry can be determined using the 2D Symmetry Distance: if the Symmetry Distance is small in all projections, we may assume that the 3D configuration is symmetric (deviations may occur due to noise).

Some Results
Three variations of the algorithm were tested:
1. Only 2D symmetrization, applied prior to the reconstruction
2. Only 3D symmetrization, applied following the reconstruction
3. Symmetrization both prior to and following the reconstruction
Reconstruction was performed on synthetic and real objects.

Some Results – cont’d Original configuration Reconstruction without enhancement 2D symmetrization only 3D symmetrization only Both symmetrizations

Some Results – cont’d Original configuration Reconstruction without enhancement 2D symmetrization only 3D symmetrization only Both symmetrizations

Some Results – cont’d Both 3D 2D No Symmetrization 1.976036 3.192995 1.919489 3.335983 Error 40.8 4.3 42.5 % improvement

References
- Fatih Ulupinar, Ramakant Nevatia, "Perception of 3-D Surfaces from 2-D Contours."
- Fatih Ulupinar, Ramakant Nevatia, "Using Symmetries for Analysis of Shape from Contour."
- Hagit Zabrodsky, Daphna Weinshall, "Using Bilateral Symmetry To Improve 3D Reconstruction From Image Sequences."