Machine vision is not a subset of:
- Computer Science
- Image Processing
- Pattern Recognition
- Artificial Intelligence (whatever this is!)
However, tools and concepts from these areas are often applied to vision applications.

Machine vision is "…the use of devices for optical, non-contact sensing to receive and interpret an image of a real scene automatically, in order to obtain information and/or control machines or processes." (Automated Vision Association, 1985)

Machine vision requires:
- Mechanical handling
- Lighting
- Optics, including conventional imaging, lasers, diffractive optics, fiber optics, etc.
- Sensors (cameras)
- Electronics (analog and digital)
- Digital systems architecture
- Software

Illumination Invariance: A Simple 1-D Model for Illumination. Incident light reaching a linear sensor can be expressed as I[n] = I_0[n] s[n], in which I[n] is the light reaching the nth pixel, I_0[n] is the background illumination, and s[n] is the reflectance or transmittance of the object being observed, which can range between 0 and 1.

Video Signal Generation. The sensor converts the light into an electrical signal v[n] = b[n] s[n], in which b[n] is proportional to I_0[n]. In many vision applications, b[n] varies slowly with position because background illumination does not change rapidly.

Scan Line from a Hypothetical Image. Small objects have 50% contrast with the background, and background illumination variations can be caused by the optics and lighting. The objective here is to find the dark features!
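
As a concrete stand-in for the slide's plot, here is a minimal numpy sketch of such a scan line; the background shape, feature positions, and feature widths are all illustrative assumptions, not values from the slides.

```python
import numpy as np

# Hypothetical 1-D scan line: a slowly varying background b[n] modulated by
# a reflectance profile s[n] with small dark features at 50% contrast
# (s = 0.5 inside a feature, 1.0 elsewhere).
n = np.arange(512)
b = 150.0 + 60.0 * np.sin(2 * np.pi * n / 512)   # slowly varying illumination
s = np.ones(512)
for start in (60, 200, 340, 460):                # arbitrary feature locations
    s[start:start + 8] = 0.5                     # 50%-contrast dark features
v = b * s                                        # sensor signal v[n] = b[n] s[n]
```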

Ideal Low-Pass Filter. Filtering provides an estimate of b[n]. Subtracting the low-pass filtered image from the original image provides the lower curve: y_hp[n] = b[n]s[n] - b[n] = b[n](s[n] - 1)
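
Continuing the sketch above, a simple moving average can stand in for the ideal low-pass filter (the window width is an assumption):

```python
# Low-pass estimate of the background, then the high-pass output
# y_hp[n] = v[n] - b_est[n], which ideally equals b[n](s[n] - 1):
# near 0 on the background, negative at the dark features.
kernel = np.ones(31) / 31.0                      # assumed window width
b_est = np.convolve(v, kernel, mode='same')      # crude estimate of b[n]
y_hp = v - b_est
```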

Linear High-Pass Filtering. The results on the previous slide show that high-pass filtering provides a significant improvement in the detection of dark features. A single threshold can be selected that will detect all of the features, but the sensitivity varies with the illumination level. Differentiating between features with different contrast values would therefore be difficult.

Illumination-Invariant Processing. Transforming image pixel values logarithmically, however, removes the effects of varying illumination. A logarithmic image is sometimes referred to as a density image, borrowing a concept from medical imaging. A single threshold will then detect all features equally well regardless of light level.
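
A minimal sketch of the idea, continuing from above: in the log (density) domain the feature depth is log s[n], independent of b[n], so one fixed threshold works everywhere. The threshold value below is illustrative.

```python
# Density image: log v[n] = log b[n] + log s[n]. High-pass filtering the
# density image removes the slowly varying log b[n] term, leaving a dip of
# depth ~log(0.5) at every 50%-contrast feature regardless of illumination.
d = np.log(v)
d_hp = d - np.convolve(d, kernel, mode='same')
features = d_hp < 0.5 * np.log(0.5)              # one threshold finds them all
```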

Background Illumination. Background illumination for the following discussion refers to the source or sources that illuminate a given scene. The illumination may be either from the front or back but not from both at the same time. Background illumination may be measured directly in some vision applications but has to be estimated in others. If the background can be measured directly or estimated, illumination-invariant techniques can be employed so that a feature in a scene may be extracted regardless of the illumination level at the feature’s location.

Homomorphic Filtering. One classical image enhancement technique combines linear filtering with a nonlinear point transform of the gray-scale values of the input image. If a logarithmic transform is used, the result is illumination invariant. The inverse operation is not useful when the results will be thresholded. Pipeline: Input Image → Nonlinear Transform → High-Pass Filter → Inverse Nonlinear Transform → Output Image.
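
A sketch of this pipeline as a 1-D function; the function name and the epsilon guard are my own, since the slides do not give an implementation:

```python
import numpy as np

def homomorphic(signal, kernel, invert=False):
    """Nonlinear (log) transform -> high-pass filter -> optional inverse.

    The inverse (exp) step is skipped when the output will be thresholded,
    as the slide notes."""
    d = np.log(np.maximum(signal, 1e-6))          # guard against log(0)
    hp = d - np.convolve(d, kernel, mode='same')  # high-pass in the log domain
    return np.exp(hp) if invert else hp
```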

Homomorphic Example (figure): the original image, its log transform, the result of plain high-pass filtering, and the result of homomorphic filtering.

Linear Filtering Limitations. In the hypothetical example, it was assumed that a linear filter could be obtained that would filter out the dark features. In actuality, this turns out to be difficult to accomplish: implementation requires complicated (and therefore computationally intensive) algorithms and may well not provide a good estimate of the background illumination. As will be shown, however, it is still possible to obtain illumination-invariant results.

Illumination-Invariant FIR Filter. Consider an FIR filter with coefficients h[k] that removes the DC component over a neighborhood. Assuming that all pixels in the neighborhood have gray-scale value μ, the filter output is Σ_k h[k]·μ = μ·Σ_k h[k] = 0. From this, it can be inferred that Σ_k h[k] = 0.

Invariant Filter Output. Now, since v[n] = s[n]b[n], the density image is d[n] = log v[n] = log s[n] + log b[n]. For slowly varying illumination, b[n-k] is assumed to be a constant β over the filter extent, so

y[n] = Σ_k h[k] d[n-k] = Σ_k h[k] log s[n-k] + log β · Σ_k h[k] = Σ_k h[k] log s[n-k]

and y[n] does not depend on illumination.
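
A quick numerical check of the invariance claim, continuing the running sketch (the kernel is an arbitrary zero-sum example): scaling the illumination by any constant shifts the density image by a constant offset, and a zero-sum kernel annihilates that offset.

```python
h = np.array([-0.25, -0.25, 1.0, -0.25, -0.25])  # coefficients sum to 0
assert abs(h.sum()) < 1e-12

y1 = np.convolve(np.log(v), h, mode='same')
y2 = np.convolve(np.log(3.0 * v), h, mode='same')  # 3x brighter scene
print(np.allclose(y1, y2))                         # True: illumination drops out
```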

Morphological Background Estimation. For an image containing dark regions smaller than some given structuring element, a gray-scale closing operation can be used to estimate the background. With a good estimate of the background, an illumination-invariant output is still obtained.
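
With SciPy, a sketch of this might look as follows, continuing the example above; the structuring-element size just needs to exceed the assumed ~8-pixel feature width:

```python
from scipy import ndimage

# Gray-scale closing fills in dark regions smaller than the structuring
# element, leaving an estimate of the background illumination b[n].
b_morph = ndimage.grey_closing(v, size=15)       # size > feature width
invariant = np.log(v) - np.log(b_morph)          # ~log s[n]; 0 on background
```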

Slow and Fast Illumination Changes. Illumination changes can occur over a variety of time spans. Slow variations are often associated with filament evaporation in incandescent lamps and similar aging effects; they can, in some cases, be corrected by viewing a diffuse uniform reflector periodically. Rapid variations are often associated with voltage fluctuations and ripple due to sinusoidal driving voltages.

Short-Term Compensation. Regulated DC sources can be used for the illumination, but this is quite expensive for high-power applications. In some applications, cameras can be scanned synchronously with the power line, although AC power regulation (e.g., Sola constant-voltage transformers) may still be required. This approach, however, is unsuitable for high-speed matrix and line-scan camera applications.

Alternative Short-Term Compensation. Suppose that the effective illumination reaching the sensor is a function of both time and spatial position: I[j,n] = I_0[j,n] s[j,n]. Since illumination sources are usually driven from a common power source, the light output of each source will vary temporally in exactly the same manner, so that I_0[j,n] can be decomposed into the product I_0[j,n] = I_t[j] I_s[n]. The voltage output of the sensor becomes v[j,n] = I_t[j] b[n] s[j,n].

Short-Term Compensation (cont.). The density representation becomes

d[j,n] = log(I_t[j] b[n] s[j,n]) = log I_t[j] + log(b[n] s[j,n])

Let a[j] be the average value of the N density-image pixels in a reference region R whose content never changes, computed for the jth sample:

a[j] = (1/N) Σ_{n∈R} d[j,n] = log I_t[j] + (1/N) Σ_{n∈R} log(b[n] s[j,n])

Note that log I_t[j] is constant for all pixels in region R, since it varies only as a function of the sample number.

Short-Term Compensation (cont.). A constant k_R can be defined as k_R = (1/N) Σ_{n∈R} log(b[n] s[n]), so that a[j] = log I_t[j] + k_R. A normalized image that compensates for the short-term variations then follows:

d_n[j,n] = log I_t[j] + log(b[n] s[j,n]) - a[j]
         = log I_t[j] + log(b[n] s[j,n]) - log I_t[j] - k_R
         = log(b[n] s[j,n]) - k_R

The amount of processing required is relatively small: the region R need only be large enough to minimize errors due to camera signal noise, and the image normalization simply requires that a constant value be added to each pixel.
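
A minimal per-frame sketch, assuming a fixed reference region R at known pixel indices (the slice below is hypothetical):

```python
import numpy as np

REF = slice(0, 32)                               # assumed reference region R

def normalize_frame(frame):
    """Cancel the frame-wide log I_t[j] term using the reference region."""
    d = np.log(np.maximum(frame, 1e-6))          # density image for frame j
    a_j = d[REF].mean()                          # a[j] = log I_t[j] + k_R
    return d - a_j                               # = log(b[n] s[j,n]) - k_R
```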

Quantization Considerations (figure): the effects of quantization error when the logarithmic transformation is performed before quantization (nearly horizontal line) and after quantization (approximately triangular waveform). The lower and higher ramps represent background and foreground data.
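
The effect is easy to reproduce numerically; the signal range and 8-bit scaling below are illustrative assumptions:

```python
import numpy as np

ramp = np.linspace(2.0, 50.0, 500)               # dim (low gray level) ramp

# (a) log first, then 8-bit quantization of the density values:
scale = 255.0 / np.log(255.0)
err_log_first = np.round(np.log(ramp) * scale) / scale - np.log(ramp)

# (b) 8-bit quantization first, then log:
err_q_first = np.log(np.round(ramp)) - np.log(ramp)

print(np.abs(err_log_first).max())   # small, uniform error ("nearly horizontal")
print(np.abs(err_q_first).max())     # much larger at low levels ("triangular")
```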

Accidental Illumination Invariance. Scaled gamma correction with an exponent of 0.45 (top curve) and a logarithmic transformation function (bottom curve) are very similar except at low gray levels. Enabling gamma correction can therefore provide a good approximation to a logarithmic transformation; the SNR of typical video cameras probably does not justify a better approximation.
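
A quick comparison of the two curves; the affine fit of the log curve stands in for the slide's unspecified scaling (an assumption):

```python
import numpy as np

x = np.arange(1.0, 256.0)
gamma_curve = 255.0 * (x / 255.0) ** 0.45        # scaled gamma, exponent 0.45

# Best affine fit a*log(x) + c to the gamma curve:
A = np.column_stack([np.log(x), np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, gamma_curve, rcond=None)
log_curve = A @ coef

diff = np.abs(gamma_curve - log_curve)
print(diff.max(), x[diff.argmax()])              # inspect where the curves diverge
```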

Image Format Considerations:
- GIF: the original CompuServe format!
- JPEG: very good compression available.
- BMP: the Windows standard.
- PCX: the original PC Paintbrush format.
- TIFF: the almost universal standard!
- PGM: Portable Gray Map; no endian problems!
IrfanView reads all of the above and many more!