Research directions on Visual Inspection Planning, by Alexis H. Rivera.


1 Research directions on Visual Inspection Planning, by Alexis H. Rivera

2 Overview
– Brief review of visual inspection planning
– Discussion of issues
– Discussion of possible solutions and research directions

3 Visual Inspection Problem
Visual Inspection
– Given an object, determine whether it satisfies the design specification using a camera as the measurement tool.
Visual Inspection Planning
– Where should the camera be located to optimally inspect the desired geometric entities?

4 Visual Inspection Problem (cont.)
Optimality criterion
– minimize the inherent measurement errors (quantization and displacement)
– such that the desired entities are: resolvable, in focus, within the field of view, and visible
– satisfy dimensional tolerances
– minimize the number of camera sensors needed

5 Visual Inspection Problem (cont.)
Finding the optimal camera pose:
– For a given set of entities S
– minimize F(t_x, t_y, t_z, Φ, θ, Ψ, S)
– subject to:
  g_1j <= 0 (resolution), for j = 1 to k
  g_2a <= 0, g_2b <= 0 (focus)
  g_3 <= 0 (field of view)
  g_4i <= 0 (visibility), for i = 1 to m
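Read as a standard constrained nonlinear program, this formulation can be prototyped with an off-the-shelf solver. A minimal sketch with SciPy, where F and the g_* functions are hypothetical stand-ins (the real objective and constraints come from the planner and are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical placeholder objective and constraints; the planner's real F and g1..g4
# are not reproduced here.
def F(pose, entities):
    return float(np.sum(pose**2))          # stand-in objective

def g_resolution(pose, entities): return -1.0   # stand-in: constraint already satisfied
def g_focus(pose, entities):      return -1.0
def g_fov(pose, entities):        return -1.0
def g_visibility(pose, entities): return -1.0

def solve_pose(entities, initial_pose):
    # pose = (tx, ty, tz, phi, theta, psi); SciPy expects c(x) >= 0, so negate g(x) <= 0
    cons = [{"type": "ineq", "fun": lambda x, g=g: -g(x, entities)}
            for g in (g_resolution, g_focus, g_fov, g_visibility)]
    res = minimize(F, np.asarray(initial_pose, float), args=(entities,),
                   constraints=cons, method="SLSQP")
    return res.x, res.success

pose, ok = solve_pose(entities=[], initial_pose=[0.5, 0.5, 1.0, 0.0, 0.0, 0.0])
```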

6 Resolution
For each entity j there is a constraint g_1j().
The smallest line entity must map to at least l pixels on the image. Example: l = 2.
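A rough sketch of such a constraint, assuming a pinhole model with the entity roughly parallel to the image plane; the function and parameter names (and l_min) are illustrative, not the planner's:

```python
def resolution_margin(entity_length, depth, focal_length, pixel_pitch, l_min=2):
    """g_1j-style margin: <= 0 (satisfied) when the entity spans at least l_min pixels.

    Rough pinhole approximation for an entity of length entity_length at distance
    depth from the camera, roughly parallel to the image plane.
    """
    pixels = focal_length * entity_length / (depth * pixel_pitch)
    return l_min - pixels   # <= 0 means the resolution constraint holds

# e.g. a 1 mm feature, 500 mm away, 25 mm lens, 10 um pixels -> spans 5 px, margin -3
print(resolution_margin(1.0, 500.0, 25.0, 0.010))
```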

7 Focus
Two constraints: g_2a(), g_2b().
Require the closest and farthest entity vertices from the camera to lie within the near and far limits of the depth of field.
(Figure: camera with near and far depth-of-field limits; r_c and r_f are the distances to the closest and farthest vertices.)
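A sketch using the standard thin-lens depth-of-field approximations as a stand-in for g_2a and g_2b; the function and parameter names are hypothetical, and the original planner may compute the limits differently.

```python
def dof_limits(focal_length, f_number, circle_of_confusion, focus_distance):
    """Near/far depth-of-field limits from the standard thin-lens approximation."""
    H = focal_length**2 / (f_number * circle_of_confusion) + focal_length  # hyperfocal distance
    near = H * focus_distance / (H + (focus_distance - focal_length))
    far = (H * focus_distance / (H - (focus_distance - focal_length))
           if focus_distance < H else float("inf"))
    return near, far

def focus_margins(r_closest, r_farthest, near, far):
    """g_2a, g_2b-style margins: both <= 0 when every vertex lies inside the DOF."""
    return near - r_closest, r_farthest - far

near, far = dof_limits(25.0, 8.0, 0.02, 500.0)      # units: mm
print(focus_margins(450.0, 560.0, near, far))        # both negative -> in focus
```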

8 Field Of View
One constraint: g_3().
The bounding cone of the entities must be contained within the camera's viewing cone.
(Figure: bounding cone vs. viewing cone.)
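A simple angular version of this test (my own reading, not necessarily the planner's exact g_3): the largest angle between the viewing direction and the rays to the entity vertices must not exceed the camera's half field-of-view angle.

```python
import numpy as np

def fov_margin(camera_pos, view_dir, vertices, half_fov_rad):
    """g_3-style margin: <= 0 when the entities' bounding cone (apex at the camera)
    fits inside the viewing cone."""
    d = np.asarray(view_dir, float)
    d = d / np.linalg.norm(d)
    rays = np.asarray(vertices, float) - np.asarray(camera_pos, float)
    rays = rays / np.linalg.norm(rays, axis=1, keepdims=True)
    max_angle = np.max(np.arccos(np.clip(rays @ d, -1.0, 1.0)))
    return max_angle - half_fov_rad

verts = [(0.1, 0.0, 1.0), (-0.1, 0.05, 1.2)]
print(fov_margin((0, 0, 0), (0, 0, 1), verts, np.radians(20)))   # negative -> inside FOV
```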

9 Visibility
Many equations: g_4i() for i = 1 to m.
Plane equations that bound the visibility of the desired entities.
(Figure: entities e1, e2, e3, e4 in the x-y plane.)
Example: to see entities e1, e2, e3, e4, the camera must satisfy y < 0.
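A sketch of how such half-space constraints can be evaluated, assuming each visibility plane is stored as a normal n and offset d with "visible" meaning n·c + d < 0; the sign convention and names are assumptions on my part.

```python
import numpy as np

def visibility_margins(camera_pos, planes):
    """g_4i-style margins: one value per visibility plane (n, d), satisfied (<= 0)
    when the camera position lies on the visible side of the plane."""
    c = np.asarray(camera_pos, float)
    return [float(np.dot(n, c) + d) for n, d in planes]

# The slide's example "y < 0" corresponds to the single plane n = (0, 1, 0), d = 0.
planes = [((0.0, 1.0, 0.0), 0.0)]
print(visibility_margins((2.0, -1.5, 3.0), planes))   # [-1.5] -> camera can see e1..e4
```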

10 Visual Inspection Planning Solution: Finding optimal plan

11 Issues
– The generation of the visibility constraints is incomplete
– The software used to generate such constraints is obsolete
– The current nonlinear optimization algorithm is very sensitive to the initial conditions
– The process of generating a plan is not automated and is very tedious
– The test cases used to validate the software were very simple

12 Plan Generation

13 Possible Experiments
– Optimization Process
– Error Models
– Inspection Planning Strategies

14 Possible Experiments (cont.)
Optimization Process
– Alternative objective functions: Minimum Mean Square Error vs. robustness approach
– Alternative optimization algorithms: Is there a better optimization algorithm? Does it matter?
– Choice of initial pose: Is it possible to find a good initial pose that guarantees an optimal solution?

15 Possible Experiments (cont.)
Error Models
– Sub-pixel quantization vs. pixel quantization: will it make a difference?
Inspection Planning Strategies
– Characterizing sub-node strategies
– Observations on more complicated objects
– How to evaluate the system? What criteria can be used to validate it?

16 Building Blocks

17 Automation How do I plan to automate this process?

18 Plan Generation

19 Plan Verification

20 Optimization Process

21 Alternative objective functions
Idea: How does the choice of objective function affect the inspection plan?
– Explain the straightforward objective function
– Explain the derivation of the MMSE
– Explain the robustness approach

22 Optimization Problem
Finding the optimal camera pose:
– For a given set of entities S
– minimize F(t_x, t_y, t_z, Φ, θ, Ψ, S)
– subject to:
  g_1j <= 0 (resolution), for j = 1 to k
  g_2a <= 0, g_2b <= 0 (focus)
  g_3 <= 0 (field of view)
  g_4i <= 0 (visibility), for i = 1 to m

23 Optimization Problem
Different choices for the objective function:
– (Tarabanis) Maximize a weighted sum of sensor constraints
– (Crosby) Minimize the mean square error
– (Gu) Maximize the robustness
The objective function characterizes the quality of the solution.

24 Maximizing weighted sum of sensor constraints
Max F(t_x, t_y, t_z, Φ, θ, Ψ, S) = α_1 g_1 + α_2a g_2a + α_2b g_2b + α_3 g_3 + α_4 g_4
The higher the value of the objective function, the better the constraints are satisfied.
The weights indicate how much each constraint contributes to the objective function.

25 Maximizing weighted sum of sensor constraints
Issues:
– Weights are based on experimental results
– Optimization can be biased because of scaling issues
– Assumes that multiple, coupled objectives can be combined additively into a single objective

26 Minimizing Mean Square Error
Minimize the total expected error:
– Displacement errors
– Quantization errors
MSE: E[ε²] = E[ε_d²] + E[ε_q²]

27 Next Slides
– Give a feel for the derivation of the MSE
– Statistics are found by defining and relating random variables
– Some of these results are used in other objective function definitions

28 Imaging process
(Figure: world coordinate system, camera coordinate system, and image coordinate system, with focal length f and image coordinates (u, v).)

29 Imaging process
A point (x, y, z) in the WCS is related to a point (u, v) in image-plane coordinates by:
u = F_u(x, y, z, t_x, t_y, t_z, Φ, θ, Ψ)
v = F_v(x, y, z, t_x, t_y, t_z, Φ, θ, Ψ)
F_u, F_v are coordinate transformation functions
(t_x, t_y, t_z) is the vector representing the camera location
(Φ, θ, Ψ) is the camera orientation
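A sketch of what F_u and F_v typically look like, assuming a rigid world-to-camera transform followed by a pinhole projection; the Euler-angle convention and the function name are my assumptions, not the original definitions.

```python
import numpy as np

def project_point(p_world, t, phi, theta, psi, focal_length):
    """Hypothetical F_u, F_v: rotate/translate the world point into the camera frame
    (Z-Y-X Euler convention assumed here), then apply a pinhole projection."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    Rz = np.array([[cph, -sph, 0], [sph, cph, 0], [0, 0, 1]])
    Ry = np.array([[cth, 0, sth], [0, 1, 0], [-sth, 0, cth]])
    Rx = np.array([[1, 0, 0], [0, cps, -sps], [0, sps, cps]])
    R = Rz @ Ry @ Rx
    pc = R.T @ (np.asarray(p_world, float) - np.asarray(t, float))  # world -> camera frame
    u = focal_length * pc[0] / pc[2]
    v = focal_length * pc[1] / pc[2]
    return u, v

print(project_point((0.1, 0.2, 2.0), (0, 0, 0), 0.0, 0.0, 0.0, 25.0))   # (1.25, 2.5)
```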

30 Imaging with displacement error
Let dx, dy, dz, dΦ, dθ, dΨ be the displacement errors in position and orientation.
Assume i.i.d. Gaussian random variables with zero mean and variances (σ_x², σ_y², σ_z², σ_Φ², σ_θ², σ_Ψ²).
The mapping is now:
u' = F_u(x, y, z, t_x, t_y, t_z, Φ, θ, Ψ, dx, dy, dz, dΦ, dθ, dΨ)
v' = F_v(x, y, z, t_x, t_y, t_z, Φ, θ, Ψ, dx, dy, dz, dΦ, dθ, dΨ)

31 Displacement error of single point
(Figure: image plane showing (u, v), (u', v') and the errors ε_du, ε_dv.)
The displacement errors for each end point are new Gaussian RVs:
ε_du = u' – u
ε_dv = v' – v

32 Displacement error of single point
The displacement errors for each end point are new RVs.
Let ξ, χ, ς be functions of the RVs dx, dy, dz, dΦ, dθ, dΨ; these are Gaussian RVs.
Let ε_du = u' – u = ς / χ and ε_dv = v' – v = ξ / χ.
Then ε_du and ε_dv are functions of the form g(x, y) = x / y, whose mean and variance can be approximated.
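For reference, a first-order (delta-method) approximation of the mean and variance of a ratio of two random variables; this is a standard textbook result and may differ in detail from the approximation used in the original derivation.

```latex
% First-order Taylor (delta-method) approximation for g(X, Y) = X / Y,
% expanded about the means (\mu_X, \mu_Y):
\mathrm{E}\!\left[\frac{X}{Y}\right] \approx \frac{\mu_X}{\mu_Y}, \qquad
\mathrm{Var}\!\left[\frac{X}{Y}\right] \approx
\frac{\mu_X^2}{\mu_Y^2}\left(
\frac{\sigma_X^2}{\mu_X^2} + \frac{\sigma_Y^2}{\mu_Y^2}
- \frac{2\,\mathrm{Cov}(X, Y)}{\mu_X \mu_Y}\right)
```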

33 Displacement error of individual components of line
(Figure: image plane showing the two end points with errors ε_du1, ε_dv1 and ε_du2, ε_dv2.)
The displacement errors for a line are new RVs:
ε_dx = ε_du1 – ε_du2
ε_dy = ε_dv1 – ε_dv2

34 Displacement error of line
The dimensional error is geometrically approximated as:
ε_d ≈ ε_dx cos(θ) + ε_dy sin(θ)
where θ is the orientation angle of the line.

35 Displacement error of k lines Total dimensional error for k lines is:

36 Quantization Error 1D
Actual length: L = l r_x + u + v, where u, v are uniform random variables.
Quantized length:
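As a sanity check on this model, a small simulation sketch (my own construction, not from the slides): assuming each end point's quantization error is independent and uniform over one pixel, the length error is the difference of the two end-point errors, which is triangular on [-r_x, r_x] with variance r_x²/6, matching the result quoted on the later quantization slides.

```python
import numpy as np

rng = np.random.default_rng(0)
r_x = 1.0   # pixel pitch

# Quantization error of each end point, assumed independent and uniform over one pixel
e1 = rng.uniform(-r_x / 2, r_x / 2, size=100_000)
e2 = rng.uniform(-r_x / 2, r_x / 2, size=100_000)

# Length error = difference of the two end-point errors: triangular on [-r_x, r_x]
err = e1 - e2

print(err.var(), r_x**2 / 6)   # both close to 0.1667 for r_x = 1
```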

37 Quantization Error 2D
The actual and quantized lengths define new RVs.

38 Quantization Error 2D The quantization error for the horizontal and vertical components:

39 Quantization Error for a line
The total quantization error is determined by a geometric approximation:
ε_q ≈ ε_qx cos(θ) + ε_qy sin(θ)
It has zero mean, and E[ε_q²] = σ_εq² ≈ (1/6)(r_x² cos²θ + r_y² sin²θ).

40 Total quantization error for k lines Total dimensional error due to quantization in all lines:

41 Mean Square Error
Total error: ε = ε_d – ε_q
MSE: E[ε²] = E[ε_d²] + E[ε_q²]

42 Dimensional Tolerances
The dimensional tolerance is satisfied if the probability that the inspection error falls within [-ΔL, ΔL] exceeds a threshold.
f_ε(ε) is the probability density function of the dimensional inspection error.

43 Dimensional Tolerances
This can be rewritten in terms of the characteristic function, where:
Φ_ε(w) = Φ_εd(w) Φ_εq(w)

44 Finding Φ_εd(w)
Recall the following relations between the random variables:
ε_d = f(ε_dx, ε_dy) ≈ ε_dx cos(θ) + ε_dy sin(θ)
ε_dx = f(ε_du1, ε_du2) = ε_du1 – ε_du2
ε_dy = f(ε_dv1, ε_dv2) = ε_dv1 – ε_dv2
ε_du = f(ς, χ) = ς / χ
ε_dv = f(ξ, χ) = ξ / χ
Problem: we don't know the distribution of ε_d because we don't know the distributions of ε_du and ε_dv.

45 Finding Φ_εd(w)
Solution: approximate the distributions of ε_du and ε_dv with Gaussians.
It is shown that this is an acceptable representation if the camera pose is within the feasible region.

46 Finding Φ_εd(w)
Consequence: ε_d is Gaussian because of the linear relationships between the RVs.

47 Finding Φ_εq(w)
Recall the following relations between the random variables:
ε_q = f(ε_qx, ε_qy) ≈ ε_qx cos(θ) + ε_qy sin(θ)
ε_qx = f(L_qx, L_x) = L_qx – L_x
ε_qy = f(L_qy, L_y) = L_qy – L_y
f_εqx(ε_qx) and f_εqy(ε_qy) are triangular distributions defined on the ranges [-r_x, r_x] and [-r_y, r_y] respectively. Their characteristic functions are:
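For reference, the characteristic function of a zero-mean triangular density on [-r, r] is a squared sinc (a standard result, since that density is the convolution of two uniform densities on [-r/2, r/2]); Φ_εqx and Φ_εqy would follow by setting r = r_x and r = r_y.

```latex
% Characteristic function of the triangular density on [-r, r] with peak at 0:
\Phi(w) = \left(\frac{\sin(w r / 2)}{w r / 2}\right)^{2}
```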

48 Finding Φ εq (w) Finally Φ εq (w) is given by:

49 Why do I care about all that derivation?? Next objective function: Robustness

50 Maximizing the robustness
Recall the tolerance condition. Define δ*² as the maximum permissible inspection variance: the variance for which the probability of the error lying within [-ΔL, ΔL] exactly equals the threshold.
(Figure: two error densities over [-ΔL, ΔL]; when the probability exceeds the threshold, δ*² > δ²; when it equals the threshold, δ*² = δ².)

51 Robustness for an entity
The ratio between the maximum permissible variance δ*² and the actual inspection variance δ²:
R = δ*² / δ²
If R >= 1, the entity can be observed effectively.
The greater R is, the more believable and accurate the inspection result is expected to be.

52 Robustness objective function
For a set of entities E_i, with acceptable tolerances ΔL_i and thresholds T_i:
Find the camera pose that maximizes the minimum robustness over all entities:
Minimize ( Max (1/R_1, 1/R_2, …, 1/R_n) )
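A minimal sketch of this minimax objective, assuming a hypothetical stand-in inspection_variance(pose, entity) for the pose-dependent δ_i² (the real variance comes from the MSE model above); because the max operator makes the objective non-smooth, the sketch uses a derivative-free method.

```python
import numpy as np
from scipy.optimize import minimize

def inspection_variance(pose, entity):
    # Hypothetical stand-in for the pose-dependent inspection variance delta_i^2
    return 1e-4 * (1.0 + float(np.sum((pose[:3] - entity["centroid"]) ** 2)))

def minimax_robustness_objective(pose, entities):
    # max_i (1/R_i), with R_i = delta*_i^2 / delta_i^2
    inv_R = [inspection_variance(pose, e) / e["max_permissible_var"] for e in entities]
    return max(inv_R)

entities = [
    {"centroid": np.array([0.0, 0.0, 1.0]), "max_permissible_var": 4e-4},
    {"centroid": np.array([0.2, 0.1, 1.5]), "max_permissible_var": 2e-4},
]
x0 = np.array([0.1, 0.1, 2.0, 0.0, 0.0, 0.0])
res = minimize(minimax_robustness_objective, x0, args=(entities,), method="Nelder-Mead")
print(res.x, 1.0 / minimax_robustness_objective(res.x, entities))   # worst-case robustness
```

Minimizing max(1/R_i) is equivalent to maximizing min(R_i), i.e. protecting the least robust entity.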

53 Issues
Calculating robustness can be computationally expensive:
– Numerical integration for each entity
It can be simplified under some assumptions:
– Displacement errors >> quantization errors
– The threshold is the same for each entity

54 Simplified objective function
Then:
– δ*²_i = (A_i L_i)²
where
– A_i = accuracy for entity i
– L_i = length of entity i
– R_i = (A_i L_i)² / δ_i²
Minimize ( Max (1/R_1, 1/R_2, …, 1/R_n) )
Consequence: there exists a single final camera pose that optimizes this objective function.

55 Alternative objective functions (summary)
Max Weight, Min MSE, Max Robustness
– What to do with the objective functions?
– Max Weight assigns importance to constraints
– Min MSE finds a trade-off that minimizes error
– Max Robustness guarantees certainty of results
Be able to perform experiments varying the objective functions and observe the results.
TO DO: plan experiments; how will they be evaluated?

56 Alternative optimization algorithms
Idea: research different algorithms to see which fits this kind of problem better.
Different algorithms give different solutions (the nature of nonlinear programming when there are local minima).
Find one that is robust.

57 Choice of initial pose
Idea: different optimization algorithms give different results depending on the initial condition.
Is this a function of the objective function? Of the algorithm? How does this affect the planning?
Example: for special cases, the robustness objective function guarantees an optimal solution. Is that true?

58 Error Models

59 Sub-pixel Quantization vs. Pixel Quantization
– Review pixel quantization
– Talk about sub-pixel accuracy in edge detection
– Issue: is the quantization model a model of the camera or of the edge detection method?
– Issue: quantization vs. sub-pixel accuracy
– How to evaluate this?

60 Quantization Error 1D
Actual length: L = l r_x + u + v, where u, v are uniform random variables.
Quantized length:

61 Subpixel accuracy
Use of interpolation or probabilistic methods to approximate feature locations to better than a pixel.
Different areas: edge detection, corner detection, registration.

62 Subpixel accuracy example
Given a discrete set of points, fit an edge to those points.
Different methods for this:
– Interpolation
– Fitting statistical moments of the point set to a step edge

63 Fitting statistical moments
Moments from the sampled data: (Figure: samples taking two levels, h1 and h2.)
Define an ideal step containing k elements of value h1.
Edge moments:
Edge location: k = total number of samples × the portion of them that has level h1 = n · p1
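A small sketch of the first-moment version of this idea (a simplification for illustration, not necessarily the exact method behind the slide): match the sample mean of a 1-D profile to an ideal two-level step with levels h1 and h2, solve for the fraction p1 of samples at level h1, and place the edge at k = n·p1.

```python
import numpy as np

def moment_edge_location(samples, h1, h2):
    """Sub-pixel edge location from the first moment of a 1-D profile.

    Assumes an ideal step with k samples at level h1 followed by n - k at level h2;
    matching the sample mean m = p1*h1 + (1 - p1)*h2 gives p1 and hence k = n*p1.
    """
    samples = np.asarray(samples, float)
    n = samples.size
    m = samples.mean()
    p1 = (m - h2) / (h1 - h2)     # fraction of samples at level h1
    return n * p1                  # edge location, in (fractional) samples

# Example: step from 10 to 200 with the edge 3.6 samples in (the 4th sample is 60% low level)
profile = [10, 10, 10, 0.6 * 10 + 0.4 * 200, 200, 200, 200, 200]
print(moment_edge_location(profile, h1=10, h2=200))   # ~3.6
```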

64 Questions about quantization
How does quantization relate to sub-pixel accuracy?
Quantization certainly happens during image acquisition.
Measurement of line entities implies some pre-processing, for example edge detection, object registration, etc.
This pre-processing can introduce error.

65 Questions about quantization
Is this quantization error model taking the pre-processing error into account? Or is it just a model of the camera's intensity quantization process?
The accuracy of the measurement depends on the image processing techniques used to measure entities.
How are entities measured?
– Edge detection?
– Corner detection?
– Registration / template matching?

66 Approach
– Clarify the relation, if any, between pixel quantization and sub-pixel accuracy measurement
– Look for related work on line measurement, techniques, and error analysis
– Study the feasibility of incorporating such models into the inspection plan

67 Inspection Planning Strategies

68 Characterizing sub-node strategies
– How to determine which one is better for what? Is there any way to measure this?

69 Observations on more complicated objects
– Perform inspection with different objects; what can you say about the process?

70 System evaluation
– How to evaluate the system? What criteria can be used to validate it?

