2 Robot vision review Martin Jagersand

3 What is Computer Vision? Three related fields: –Image Processing: changes 2D images into other 2D images –Computer Graphics: takes 3D models, renders 2D images –Computer Vision: extracts scene information from 2D images and video, e.g. geometry ("where" something is in 3D) and objects ("what" something is). What information is in a 2D image? What information do we need for 3D analysis? CV is hard: only special cases can be solved. [Diagram relating Image Processing, Computer Graphics and Computer Vision.]

4 Machine Vision. 3D camera vision in general environments is hard. Machine vision: –use an engineered environment –use 2D when possible –use special markers/LEDs –you can buy a working system! Photogrammetry: –outdoors –3D surveying using cameras

5 Vision. Full: human vision –we don't know how it works in detail. Limited vision: machines, robots, "AI". What is the most basic useful information we can get from a camera? 1. Location of a dot (LED/marker): [u,v] = f(I) 2. Segmentation of object pixels. All of these are 2D image-plane measurements! What is the best camera location? Usually overhead, pointing straight down; adjust the camera position so that pixel [u,v] = s[X,Y], i.e. pixel coordinates are scaled world coordinates.

6 Tracking LED special markers. Put the camera overhead, pointing straight down on the worktable. –Adjust the camera position so that pixel [u,v] = s[X,Y]: pixel coordinates are scaled world coordinates. –Lower the brightness so the LED is brightest. Put the LED on the robot end-effector. Detection algorithm: –threshold the brightest pixels, I(u,v) > 200 –find the centroid [u,v] of the max pixels. Variations: –A blinking LED can enhance detection in ambient light. –Different color LEDs can be detected separately from the R, G, B color video.
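The threshold-and-centroid detection above can be sketched in a few lines of NumPy (a minimal sketch; the threshold of 200 is the slide's example value and would be tuned per setup):

```python
import numpy as np

def detect_led(image, threshold=200):
    """Find the centroid [u, v] of the brightest pixels.

    image: 2D grayscale array. Returns None if no pixel exceeds the threshold.
    u is the column (horizontal) coordinate, v the row (vertical) coordinate.
    """
    vs, us = np.nonzero(image > threshold)   # rows give v, columns give u
    if us.size == 0:
        return None
    return us.mean(), vs.mean()
```

With the overhead camera calibrated so that [u,v] = s[X,Y], this centroid converts to worktable coordinates by a single scale factor.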

7 Commercial tracking systems: Polaris Vicra infra-red system (Northern Digital Inc.); MicronTracker visible light system (Claron Technology Inc.)

8 Commercial tracking system. Images acquired by the Polaris Vicra infra-red stereo system: left image, right image.

9 6/8/2016 Introduction to Machine Vision. IMAGE SEGMENTATION: How many "objects" are there in the image below? Assuming the answer is "4", what exactly defines an object?

10 8 BIT GRAYSCALE IMAGE

11 GRAY LEVEL THRESHOLDING [Histogram: background and "objects" peaks; set the threshold between them.]

12 BINARY IMAGE

13 CONNECTED COMPONENT LABELING: FIRST PASS. Provisional labels row by row: AA / AAA / BB CC / BBB CC / BBBBBB. EQUIVALENCE: B=C

14 CONNECTED COMPONENT LABELING: SECOND PASS. Applying the equivalence B=C relabels the C pixels as B: AA / AAA / BB BB / BBB BB / BBBBBB. TWO OBJECTS!
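The two-pass labeling with an equivalence table can be sketched as follows (a minimal, unoptimized NumPy version using 4-connectivity and a small union-find structure for the equivalences):

```python
import numpy as np

def label_components(binary):
    """Two-pass connected component labeling (4-connectivity)."""
    labels = np.zeros_like(binary, dtype=int)
    parent = {}                      # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    h, w = binary.shape
    # First pass: assign provisional labels, record equivalences (e.g. B=C).
    for i in range(h):
        for j in range(w):
            if not binary[i, j]:
                continue
            up = labels[i - 1, j] if i > 0 else 0
            left = labels[i, j - 1] if j > 0 else 0
            if up == 0 and left == 0:
                parent[next_label] = next_label
                labels[i, j] = next_label
                next_label += 1
            elif up and left and up != left:
                labels[i, j] = min(up, left)
                parent[find(max(up, left))] = find(min(up, left))
            else:
                labels[i, j] = up or left
    # Second pass: replace each provisional label by its equivalence root.
    for i in range(h):
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels
```

A U-shaped object triggers exactly the situation on the slide: its two arms get different provisional labels that the second pass merges into one object.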

15 IMAGE SEGMENTATION – CONNECTED COMPONENT LABELING. Pixel P_i,j with 4-neighbor connectivity; pixel P_i,j with 8-neighbor connectivity.

16 What are some examples of form parameters that would be useful in identifying the objects in the image below?

17 OBJECT RECOGNITION – BLOB ANALYSIS. Examples of form parameters that are invariant with respect to position, orientation, and scale: number of holes in the object; compactness or complexity, (Perimeter)²/Area; moment invariants. All of these parameters can be evaluated during contour following.

18 GENERALIZED MOMENTS. Shape features or form parameters provide a high-level description of objects or regions in an image. For a digital image of size n by m pixels: M_ij = Σ_{x=1..n} Σ_{y=1..m} x^i y^j f(x,y). For binary images the function f(x,y) takes a value of 1 for pixels belonging to class "object" and 0 for class "background".
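The moment definition can be computed directly (a minimal sketch; here x is taken as the column index and y as the row index, a convention the slides leave implicit):

```python
import numpy as np

def moment(f, i, j):
    """Generalized moment M_ij = sum over x, y of x^i * y^j * f(x, y).

    f: 2D array (binary or grayscale); x = column index, y = row index.
    """
    ys, xs = np.indices(f.shape)
    return np.sum((xs ** i) * (ys ** j) * f)
```

For a binary object, M_00 counts the object pixels, i.e. the area.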

19 GENERALIZED MOMENTS. Worked example on a small binary object (X axis 0–9, Y axis 1–6), tabulating M_ij for ij = 00, 10, 01, 20, 02, 11: M_00 = 7 is the area; M_20 and M_02 are moments of inertia (the remaining table values in the original are 20, 33, 159, 64, 93).

20 SOME USEFUL MOMENTS. The center of mass of a region can be defined in terms of generalized moments as follows: x̄ = M_10/M_00, ȳ = M_01/M_00.
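A minimal sketch of the center-of-mass computation (same x = column, y = row convention as in the moment definition):

```python
import numpy as np

def center_of_mass(f):
    """Center of mass: x_bar = M10 / M00, y_bar = M01 / M00."""
    ys, xs = np.indices(f.shape)
    m00 = f.sum()
    return (xs * f).sum() / m00, (ys * f).sum() / m00
```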

21 SOME USEFUL MOMENTS. The moments of inertia relative to the center of mass can be determined by applying the general form of the parallel axis theorem: μ_20 = M_20 − x̄²M_00, μ_02 = M_02 − ȳ²M_00, μ_11 = M_11 − x̄ȳM_00.

22 SOME USEFUL MOMENTS. The principal axis of an object is the axis passing through the center of mass which yields the minimum moment of inertia. This axis forms an angle θ with respect to the X axis, with tan 2θ = 2μ_11/(μ_20 − μ_02). The principal axis is useful in robotics for determining the orientation of randomly placed objects.
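Putting the central moments and the angle formula together (a sketch; using θ = ½·atan2(2μ₁₁, μ₂₀ − μ₀₂) resolves the quadrant ambiguity of tan 2θ):

```python
import numpy as np

def principal_axis_angle(f):
    """Angle of the principal axis with the X axis, from central moments."""
    ys, xs = np.indices(f.shape)
    m00 = f.sum()
    xb, yb = (xs * f).sum() / m00, (ys * f).sum() / m00
    mu20 = ((xs - xb) ** 2 * f).sum()
    mu02 = ((ys - yb) ** 2 * f).sum()
    mu11 = ((xs - xb) * (ys - yb) * f).sum()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

For an elongated blob, this angle tells the robot how a randomly placed part is oriented before grasping.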

23 Example [Figure: object in X–Y axes with its center of mass and principal axis marked.]

24 3D MACHINE VISION SYSTEM [Figure: digital camera with its field of view, laser projector casting a plane of laser light onto a granite surface plate, XY table, measured point P(x,y,z).]

25 3D MACHINE VISION SYSTEM

26 3D MACHINE VISION SYSTEM

27 Morphological processing: Erosion

28 Morphological processing: Erosion

29 Dilation

30 Opening: erosion followed by dilation, denoted ∘. Eliminates protrusions, breaks necks, smooths the contour.

31 Opening

32 Opening

33 Opening

34 Closing: dilation followed by erosion, denoted •. Smooths the contour, fuses narrow breaks and long thin gulfs, eliminates small holes, fills gaps in the contour.
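The four morphological operations can be sketched for binary images with a 3×3 structuring element (a minimal NumPy version; erosion is implemented as the complement of dilating the complement, which implicitly treats the border as foreground for erosion):

```python
import numpy as np

def dilate(img):
    """Binary dilation with a 3x3 structuring element (img holds 0/1 ints)."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    h, w = img.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= p[1 + di : 1 + di + h, 1 + dj : 1 + dj + w]
    return out

def erode(img):
    """Binary erosion: complement of the dilation of the complement."""
    return 1 - dilate(1 - img)

def opening(img):
    """Erosion followed by dilation: removes small protrusions and specks."""
    return dilate(erode(img))

def closing(img):
    """Dilation followed by erosion: fills small holes and gaps."""
    return erode(dilate(img))
```

Opening deletes an isolated single pixel outright, while closing fills a single-pixel hole, matching the slide's descriptions.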

35 Closing

36 Image Processing. Filtering: noise suppression, edge detection.

37 Image filtering: mean filter. Input image f ∗ filter h = output image g. In the example, f contains two isolated bright pixels (values 200 and 100) and h is the 3×3 mean filter with all entries 1/9 ≈ 0.11; g spreads each bright pixel over its 3×3 neighborhood (values ≈22 and ≈11, ≈33 where the neighborhoods overlap).
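Direct 2D convolution with the mean filter can be sketched as follows (zero padding, "same" output size; the kernel flip is a no-op for the symmetric mean filter but is kept so the function is a true convolution):

```python
import numpy as np

def convolve2d(f, h):
    """Direct 2D convolution g = f * h with zero padding, 'same' output size."""
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f, ((ph, ph), (pw, pw)))
    g = np.zeros(f.shape, dtype=float)
    hr = h[::-1, ::-1]                    # convolution flips the kernel
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = np.sum(fp[i : i + kh, j : j + kw] * hr)
    return g

mean_filter = np.ones((3, 3)) / 9.0       # the 0.11... mask on the slide
```

Convolving an isolated bright pixel with the mean filter spreads its energy evenly over the surrounding 3×3 block, exactly as in the slide's example.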

38 Animation of convolution. To accomplish convolution of the whole image, we just slide the mask across it, computing C_4,2, C_4,3, C_4,4, … [original image vs. image after convolution].

39 Gaussian filter. Input image f ∗ filter h = output image g, where h is a Gaussian kernel (its weights can be computed empirically).

40 Convolution for Noise Elimination. The noise is eliminated, but the operation causes loss of sharp edge definition; in other words, the image becomes blurred.

41 Median filtering: convolution with a non-linear mask. There are many masks used in noise elimination; the median mask is a typical one. The principle of the median mask is to take a sub-image (e.g. a 3×3 window, I, J = 1..3), rank its values (e.g. 23, 47, 64, 65, 72, 90, 120, 187, 209) and use the median (here 72) as the pixel's value in the new image.
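The median mask can be sketched as follows (border pixels handled by replicating edge values; window size 3 matches the example):

```python
import numpy as np

def median_filter(f, size=3):
    """Median filtering: each output pixel is the median of its neighborhood."""
    p = size // 2
    fp = np.pad(f, p, mode="edge")
    g = np.empty_like(f)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = np.median(fp[i : i + size, j : j + size])
    return g
```

Unlike the mean filter, the median removes impulse ("salt and pepper") noise without blurring edges, because an outlier never becomes the median of its window.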

42 Median Filtering

43 Edge detection using the Sobel operator. Masks: S_x = [−1 0 1; −2 0 2; −1 0 1], S_y = [−1 −2 −1; 0 0 0; 1 2 1]. Magnitude: √(g_x² + g_y²). Orientation: θ = atan2(g_y, g_x). In practice, it is common to use |g_x| + |g_y| for the magnitude.
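A sketch of the Sobel computation using the |g_x| + |g_y| magnitude approximation mentioned above (zero padding at the border; the masks are applied by correlation, which only affects the sign convention of the gradient direction):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T   # [[-1,-2,-1],[0,0,0],[1,2,1]]

def sobel(f):
    """Return gradient magnitude (|gx|+|gy|) and orientation (atan2)."""
    fp = np.pad(f.astype(float), 1)
    gx = np.zeros(f.shape, dtype=float)
    gy = np.zeros(f.shape, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            win = fp[i : i + 3, j : j + 3]
            gx[i, j] = np.sum(win * SOBEL_X)
            gy[i, j] = np.sum(win * SOBEL_Y)
    return np.abs(gx) + np.abs(gy), np.arctan2(gy, gx)
```

A vertical step edge produces a strong horizontal response g_x and zero vertical response g_y.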

44 Sobel operator: original image, orientation, magnitude.

45 Effects of noise on edge detection / derivatives. Consider a single row or column of the image: plotting intensity as a function of position gives a signal. With noise, where is the edge?

46 Solution: smooth first. Look for peaks in d/dx (f ∗ g), the derivative of the smoothed signal.

47 Vision-Based Control (Visual Servoing) Initial Image Desired Image User

48 Vision-Based Control (Visual Servoing). y: current image features; y*: desired image features.

53 Image-Space Error. y: current image features; y*: desired image features; the error is measured in pixel coordinates [u,v]. One point gives one pixel coordinate pair; many points are stacked into a feature vector.

54 Other (easier) solution: image-based motion control. Motor-visual function: y = f(x). Achieving 3D tasks via 2D image control. Note: what follows will work for one or two (or 3…n) cameras, either fixed or mounted on the robot (eye-in-hand). Here we will use two cameras.

55 Vision-Based Control Image 1 Image 2

60 Image-based Visual Servoing. Observed features: y. Motor joint angles: x. Local linear model: Δy = J Δx. Visual servoing steps: 1. solve J Δx = y* − y; 2. update x ← x + Δx.
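One servoing iteration from the two steps above can be sketched as follows (least squares handles a non-square or poorly conditioned J; the gain parameter is a common damping addition, not on the slide):

```python
import numpy as np

def servo_step(J, y, y_star, x, gain=1.0):
    """One image-based visual servoing iteration.

    Solve the local linear model J @ dx = gain * (y* - y) in the
    least-squares sense, then update the joint angles x.
    """
    dx = np.linalg.lstsq(J, gain * (y_star - y), rcond=None)[0]
    return x + dx
```

Repeating this step drives the observed features y toward the desired features y*, as long as the local linear model stays valid.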

61 Find J, Method 1: Test movements along basis. Remember: J is an unknown m×n matrix. Assume small test movements along each joint basis direction. Finite difference: the k-th column of J is approximately Δy_k / Δx_k.
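The finite-difference estimate can be sketched as follows (f stands for the unknown motor-visual function y = f(x); in a real system each call to f would be a physical test movement and image measurement):

```python
import numpy as np

def estimate_jacobian(f, x, eps=1e-3):
    """Estimate J column-by-column with test movements along each joint basis.

    f: maps joint angles (n-vector) to image features (m-vector).
    Returns an m-by-n finite-difference Jacobian at x.
    """
    y0 = np.asarray(f(x))
    J = np.zeros((y0.size, x.size))
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = eps
        J[:, k] = (np.asarray(f(x + e)) - y0) / eps
    return J
```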

62 Find J, Method 2: Secant Constraints. A constraint along a line, J Δx = Δy, defines m equations. Collect n arbitrary but different measures y, and solve for J.

63 Find J, Method 3: Recursive Secant Constraints. Based on the initial J and one measured pair (Δx, Δy), adjust J such that J Δx = Δy. Rank-1 update: J ← J + (Δy − J Δx) Δxᵀ / (Δxᵀ Δx). Considered in rotated coordinates, this update is the same as finite differences for n orthogonal moves.
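The rank-1 secant (Broyden-style) update can be sketched as:

```python
import numpy as np

def broyden_update(J, dx, dy):
    """Rank-1 secant update: after the update, J @ dx equals dy exactly."""
    dx = dx.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    return J + (dy - J @ dx) @ dx.T / (dx.T @ dx)
```

Interleaving this update with normal servoing motions lets the system refine J online without dedicated calibration movements.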

64 Achieving visual goals: Uncalibrated Visual Servoing. 1. Solve for motion: J Δx = y* − y. 2. Move robot joints: x ← x + Δx. 3. Read the actual visual move Δy. 4. Update the Jacobian (rank-1 secant update). Can we always guarantee when a task is achieved/achievable?

65 Visually guided motion control. Issues: 1. What tasks can be performed? –Camera models, geometry, visual encodings. 2. How to do vision-guided movement? –Hand-eye transform estimation, feedback, feedforward motion control. 3. How to plan, decompose and perform whole tasks?

66 How to specify a visual task?

67 Task and Image Specifications. Task function T(x) = 0; image encoding E(y) = 0. [Diagram: task-space error vs. image-space error; the encoding should be chosen so that image space satisfied implies task space satisfied.]

68 Visual specifications. Point-to-point task "error": the difference between the current and desired point features. Why 16 elements?

69 Visual specifications 2. Point to Line: the error measures the incidence of a point with a line (note: y in homogeneous coordinates).

70 Parallel Composition example. E(y) stacks several component errors, e.g. point-to-point differences between image points and point-to-line incidence terms of the form yᵢᵀ(yⱼ × yₖ) (plus end-point checks).

71 Visual Specifications. Additional examples: line to line; point to conic (identify the conic C from 5 points on the rim; error function yᵀCy); image moments on segmented images; any image feature vector that encodes pose; etc.

72 Achieving visual goals: Uncalibrated Visual Servoing. 1. Solve for motion: J Δx = y* − y. 2. Move robot joints: x ← x + Δx. 3. Read the actual visual move Δy. 4. Update the Jacobian (rank-1 secant update). Can we always guarantee when a task is achieved/achievable?


