Sensing for Robotics & Control – Remote Sensors
R. R. Lindeke, Ph.D.
Remote Sensing Systems:
- Radar – uses long-wavelength microwaves for point or field detection; supports speed and range analysis and trajectory analysis
- Sonar – uses high-energy, high-frequency sound waves to detect range or create images in "conductive media"
- Vision Systems – operate in the visible or near-visible light regime; use structured light and high-contrast environments to control data-mapping problems
Parts of the Remote Sensors – field sensors:
- Source: a structured illumination system (high contrast)
- Receiver: a field detector – in machine vision typically a CCD or CID
  - CCD (charge-coupled device): each detector (phototransistor) stores charge on a capacitor, which is regularly sampled/harvested through a field sweep using a "rastering" technique (at about 60 Hz)
  - CID (charge-injected device): each pixel can be randomly sampled at variable times and rates
- Image analyzers: examine the raw field image and apply enhancement and identification algorithms to the data
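A minimal sketch (not from the slides) of the two readout disciplines – a fixed raster sweep for the CCD versus random pixel addressing for the CID; the function names and frame size are illustrative assumptions:

```python
import numpy as np

def ccd_readout(frame):
    """CCD-style readout: the whole field is harvested in one fixed
    raster sweep (row by row), so every pixel is sampled at the same
    ~60 Hz field rate."""
    for row in frame:                 # fixed scan order
        for charge in row:
            yield charge

def cid_readout(frame, coords):
    """CID-style readout: individual pixels are addressed at arbitrary
    times and rates, in any order the caller chooses."""
    for r, c in coords:
        yield frame[r, c]

frame = np.random.rand(480, 640)                        # one captured field
full_sweep = list(ccd_readout(frame))                   # all 307,200 pixels
spot_check = list(cid_readout(frame, [(0, 0), (10, 20)]))  # just two pixels
```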
The Vision System – issues:
- Blurring of moving objects – a result of the data capture rastering through the receiver's 2-D array. The sampling system lags the real world as it processes field by field, with parts of the information being harvested while adjacent pixels continue to change in time. This limits speed of response, speed of objects, and system through-put rates.
- Contrast enhancements are developed by examining extrema in the field of view:
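The slide's trailing colon points at a formula that did not survive extraction; a common enhancement built from the field extrema is the linear contrast stretch, given here as a hedged reconstruction (assuming an 8-bit output range):

$$ I'(x,y) = \frac{I(x,y) - I_{\min}}{I_{\max} - I_{\min}} \times 255 $$

where $I_{\min}$ and $I_{\max}$ are the intensity extrema found in the field of view.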
Some Additional Issues:
- Beware of 'bloom' in the image – bloom occurs when a high-intensity pixel overflows into adjacent pixels, increasing or changing the size of an information set.
- Also consider lensing and operational errors:
  - Vignetting – lenses transmit more effectively at the center than at their edges, leading to intensity variation across the image field even without changes in the image field information itself
  - Blur – caused by lack of full-field focus
  - Distortion – parabolic and geometric changes due to lens-shape errors
  - Motion blur – moving images "smeared" over many pixels during capture (a CCD system typically samples 3 to 5 fields to build a stable image, limiting one to about 12 to 20 stable images/sec)
Data Handling Issues: A typical field camera (780×640 = 499,200 pixels/image) with 8-bit color means 3 separate 8-bit words (24-bit color) per pixel (32-bit color typically includes a saturation or brightness byte too). Data in one field image, as captured during each rastering sweep: 499,200 pixels × 3 bytes of color = 1,497,600 bytes/image (about 1.43 MB). In a minute: 1,497,600 × 60 frames/s × 60 s/min ≈ 5.39 GB raw (i.e., without compression or processing), or roughly 323 GB/hour of video information.
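A quick sanity check of that arithmetic in code:

```python
width, height = 780, 640
pixels = width * height                 # 499,200 pixels per field
bytes_per_pixel = 3                     # 24-bit color: one byte per channel
frame_bytes = pixels * bytes_per_pixel  # 1,497,600 bytes ~ 1.43 MB

fps = 60                                # one field per raster sweep
per_minute = frame_bytes * fps * 60     # ~ 5.39 GB raw
per_hour = per_minute * 60              # ~ 323 GB raw

print(f"{frame_bytes:,} B/frame, {per_minute/1e9:.2f} GB/min, {per_hour/1e9:.0f} GB/hour")
```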
Helping with this 'Data Bloat':
- Do we really need color? If not, the data is reduced by a factor of 3.
- Do we really need "shades"? If not, the data set drops by a factor of 8 – but this requires 'thresholding' of the data field.
- Thresholding is used to construct 'bit maps': after sampling test cases, set a level of pixel intensity above which a pixel takes the value 1 ('on') and below which it is 0 ('off'), regardless of image difficulties and material variations (see the sketch below).
- Storage per pixel is reduced to 1 bit rather than the 8 to 24 bits in the original field of view!
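A minimal thresholding sketch; the threshold value of 128 is an illustrative assumption – in practice it would be set from the sampled test cases:

```python
import numpy as np

def to_bitmap(gray, threshold=128):
    """Collapse an 8-bit grayscale field to a 1-bit map:
    1 ('on') at or above the threshold, 0 ('off') below it."""
    return (gray >= threshold).astype(np.uint8)

gray = np.random.randint(0, 256, size=(640, 780), dtype=np.uint8)
bitmap = to_bitmap(gray, threshold=128)   # 24 bits/pixel -> 1 bit/pixel
print(bitmap.mean())                      # fraction of 'on' pixels
```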
Analyzing the Images: Do we really need the entire field – or just the important parts? This requires post-processing to analyze what is in the 'thresholded' image. Image processing is designed to build or "grow" field maps of the important parts of an image for identification purposes. These field maps must then be analyzed by applications that can make decisions, using some form of intelligence applied to the field data sets.
Image Building:
- First we enhance the data array
- Then we digitize (threshold) the array
- Then we look for image edges – an edge is where the pixel value changes from 0 to 1 or 1 to 0! (A minimal edge test is sketched below.)
[Figure: raw image before thresholding and image analysis]
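A small sketch of that edge definition on a thresholded bitmap – mark any pixel whose value differs from its left or upper neighbor (the neighborhood choice is an assumption):

```python
import numpy as np

def edge_map(bitmap):
    """Mark pixels where the value changes between horizontal or
    vertical neighbors (0 -> 1 or 1 -> 0), the slide's edge definition."""
    b = bitmap.astype(np.int8)
    horiz = np.zeros_like(b)
    vert = np.zeros_like(b)
    horiz[:, 1:] = np.abs(np.diff(b, axis=1))   # change vs. left neighbor
    vert[1:, :] = np.abs(np.diff(b, axis=0))    # change vs. upper neighbor
    return ((horiz | vert) > 0).astype(np.uint8)

bitmap = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 0, 0, 1]], dtype=np.uint8)
print(edge_map(bitmap))
```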
Working the Array – hardware and software
[Figures: bottles for selection; the image field after reorganizing; the field image after thresholding]
After Thresholding: The final image is a series of on and off pixels (the light and dark parts of the second image on the previous slide). The image is then scanned to detect edges in the information. One popular method uses an algorithm, "GROW," that searches the data array (of 1s and 0s) to map out changes in pixel value.
Using Grow Methods: We begin a directed scan – once a state-level change is discovered, we stop the directed scan and look around the changed pixel to see whether it is just a bit error. If it is next to changed bits in all "new" directions, we start exploring for edges by stepping forward from the first bit and stepping back and forth about the change line as it "circumvents" the part. The algorithm is then said to grow shapes from full arrays – but without exhaustive enumeration! (A sketch of one plausible reading follows.)
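The slide describes GROW only in outline, so the sketch below implements one plausible reading: a directed scan finds the first 'on' pixel, then the connected region is grown from that seed instead of enumerating the whole array. The function name and the 4-connectivity choice are assumptions.

```python
from collections import deque
import numpy as np

def grow_shape(bitmap):
    """Directed scan for the first 'on' pixel, then grow its connected
    region by exploring neighbors (4-connectivity). Only pixels in or
    adjacent to the shape are ever visited -- no exhaustive enumeration."""
    rows, cols = bitmap.shape
    seed = None
    for r in range(rows):                    # the directed scan
        for c in range(cols):
            if bitmap[r, c] == 1:
                seed = (r, c)
                break
        if seed:
            break
    if seed is None:
        return set()

    shape, frontier = {seed}, deque([seed])
    while frontier:                          # grow outward from the seed
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and bitmap[nr, nc] == 1 and (nr, nc) not in shape:
                shape.add((nr, nc))
                frontier.append((nr, nc))
    return shape

bitmap = np.array([[0, 0, 0, 0],
                   [0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]], dtype=np.uint8)
print(sorted(grow_shape(bitmap)))   # the four 'on' pixels
```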
So let's see if it works:
[Figure: worked example of the GROW scan on a sample pixel array]
Once Completed:
- The image must be compared to standard shapes
- The image can be analyzed to find centers, sizes, or other shape information (a small sketch follows)
- After analysis is completed, objects can then be handled and/or sorted
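Continuing the hypothetical grow_shape sketch above, the grown pixel set yields size and center directly; shape_stats is an assumed helper name:

```python
def shape_stats(shape):
    """Area (pixel count) and centroid of a grown pixel set."""
    n = len(shape)
    r_bar = sum(r for r, _ in shape) / n
    c_bar = sum(c for _, c in shape) / n
    return n, (r_bar, c_bar)

area, center = shape_stats({(1, 1), (1, 2), (2, 1), (2, 2)})
print(area, center)    # 4, (1.5, 1.5)
```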
Sorting Routines: Based on conditional probabilities. This is a measure of the probability that x is a member of class i (W_i), given knowledge of how likely x is under each of the several other classes in the study (the W_j's).
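The slide's formula did not survive extraction; the standard Bayes form it describes is, as a reconstruction:

$$ P(W_i \mid x) = \frac{p(x \mid W_i)\,P(W_i)}{\sum_j p(x \mid W_j)\,P(W_j)} $$

where $p(x \mid W_j)$ is the class-conditional density of the measurement and $P(W_j)$ the prior probability of class $W_j$.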
Typically a Gaussian Approximation Is Assumed: We perform the characteristic measurement (with the vision system), then compute the conditional probability that x fits each class j – the class with the highest value is accepted as the best fit (if the classes are each mutually exclusive).
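The assumed class density, reconstructed here (the slide's own equation image is missing), uses the class mean $\mu_j$ and standard deviation $\sigma_j$ learned from training:

$$ p(x \mid W_j) = \frac{1}{\sigma_j \sqrt{2\pi}} \exp\!\left( -\frac{(x - \mu_j)^2}{2\sigma_j^2} \right) $$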
Let's Examine the Use of This Technique:
- Step 1: Present a "training set" to the camera system, including representative sizes and shapes of each type across its acceptable sizes
- Step 2: For each potential class, using its learned values, compute a mean and standard deviation
- Step 3: Present unknowns to the trained system and make measurements – compute the appropriate dimensions and the conditional probability for each potential class
- Step 4: Assign the unknown to the class having the highest conditional probability – if that value is above a threshold of acceptability (the four steps are sketched in code below)
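A minimal sketch of the four steps, assuming a single measured feature per part; the training values and the min_density acceptability threshold are illustrative assumptions:

```python
import math
from statistics import mean, stdev

def train(training_sets):
    """Steps 1-2: learn a Gaussian (mu, sigma) per class from measured
    training examples. training_sets maps class name -> list of values."""
    return {cls: (mean(v), stdev(v)) for cls, v in training_sets.items()}

def gaussian(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify(x, model, min_density=0.1):
    """Steps 3-4: score x against every class and pick the best -- or
    reject below the (assumed) acceptability threshold."""
    scores = {cls: gaussian(x, mu, s) for cls, (mu, s) in model.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= min_density else "hand-inspect"

# Illustrative training measurements (assumed values, not from the slides)
model = train({"A": [3.592, 3.606, 3.619], "B": [3.802, 3.816, 3.830]})
print(classify(3.61, model))   # -> "A"
```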
Using Vision to Determine Class & Quality – a system to sort by "body diagonals" (BD) for a series of rectangular pieces:
- A is 2±.01" × 3±.01"
- B is 2±.01" × 3.25±.01"
- C is 1.75±.01" × 3.25±.01"
Body diagonals with part dimensions at acceptable limits:
- A: √(1.99² + 2.99²) to √(2.01² + 3.01²) = 3.592" to 3.619" (mean is 3.606")
- B: √(1.99² + 3.24²) to √(2.01² + 3.26²) = 3.802" to 3.830" (mean is 3.816")
- C: √(1.74² + 3.24²) to √(1.76² + 3.26²) = 3.678" to 3.705" (mean is 3.691")
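The limit calculations, checked in code (math.hypot computes √(w² + h²)):

```python
import math

# Nominal width x height and tolerance for each class (from the slide)
classes = {"A": (2.00, 3.00), "B": (2.00, 3.25), "C": (1.75, 3.25)}
tol = 0.01

for name, (w, h) in classes.items():
    lo = math.hypot(w - tol, h - tol)   # smallest in-tolerance diagonal
    hi = math.hypot(w + tol, h + tol)   # largest in-tolerance diagonal
    print(f"{name}: {lo:.3f} to {hi:.3f} (mean {(lo + hi) / 2:.3f})")
```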
Computing Class Variances:
- Can use range techniques: find the range of samples for a class, then use the statistic d₂ to estimate σ: σ_class i = R_sample / d₂
- Can also estimate σ_class from sample standard deviations using the c₄ statistic: σ_class i = s_sample / c₄
- c₄ and d₂ are available in any good engineering statistics text – see handout
Computing Variances: Here, using estimates from the ideal values of the BD range, σ_class i = R/d₂. For a sample size of 2, d₂ = 1.128 (the value of d₂ changes with sample size), giving σ ≈ 0.024" to 0.025" for each of the three classes.
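The same estimate in code (d₂ = 1.128 for samples of size 2 is the standard control-chart constant):

```python
# Range-based sigma estimate, sigma = R / d2, using each class's
# body-diagonal limits from the earlier slide (d2 = 1.128 at n = 2).
d2 = 1.128
limits = {"A": (3.592, 3.619), "B": (3.802, 3.830), "C": (3.678, 3.705)}

sigma = {cls: (hi - lo) / d2 for cls, (lo, hi) in limits.items()}
for cls, s in sigma.items():
    print(f"sigma_{cls} = {s:.4f} in")   # roughly 0.024-0.025 in each case
```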
Fitting an Unknown: The unknown's body diagonal is measured at 3.681". Compute Z and the conditional probability for each class (worked below). From this analysis we would tentatively place the unknown in class C – but more likely we would place it in a hand-inspect bin!
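A hedged worked example using the class means from the BD slide and the σ ≈ 0.024" range estimate above; class C is by far the nearest fit, and the acceptability threshold then decides between class C and the hand-inspect bin:

```python
from math import erf, sqrt

x = 3.681                                   # measured body diagonal (inches)
means = {"A": 3.606, "B": 3.816, "C": 3.691}
sigma = 0.024                               # range-based estimate from above

for cls, mu in means.items():
    z = (x - mu) / sigma
    # two-sided tail probability of landing at least |z| sigmas from the mean
    p = 1 - erf(abs(z) / sqrt(2))
    print(f"{cls}: Z = {z:+.2f}, P = {p:.3f}")
# A: Z = +3.13 (p ~ 0.002), B: Z = -5.63 (p ~ 0), C: Z = -0.42 (p ~ 0.68)
```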