Digital Image Processing - Ahmadi Fard


Image Segmentation

Digital Image Processing (Table of Contents)
Fundamentals of segmentation
Edge-based segmentation
Threshold-based segmentation

Fundamentals
Let R represent the entire spatial region occupied by an image. We may view image segmentation as a process that partitions R into n subregions $R_1, R_2, \dots, R_n$ such that:
(a) $\bigcup_{i=1}^{n} R_i = R$;
(b) $R_i$ is a connected set, for $i = 1, 2, \dots, n$;
(c) $R_i \cap R_j = \varnothing$ for all i and j, $i \ne j$;
(d) $Q(R_i) = \text{TRUE}$ for $i = 1, 2, \dots, n$;
(e) $Q(R_i \cup R_j) = \text{FALSE}$ for any adjacent regions $R_i$ and $R_j$,
where $Q(R_k)$ is a logical predicate defined over the points in set $R_k$.

Fundamentals (continued)
Segmentation algorithms for monochrome images generally are based on one of two categories of intensity properties: discontinuity and similarity. In the first category, the assumption is that boundaries of regions are sufficiently different from each other and from the background to allow boundary detection based on local discontinuities in intensity (edge-based segmentation approaches). Region-based segmentation approaches in the second category partition an image into regions that are similar according to a set of predefined criteria.

Comparison between the two segmentation approaches
FIGURE 10.1 (a) Image containing a region of constant intensity. (b) Image showing the boundary of the inner region, obtained from intensity discontinuities. (c) Result of segmenting the image into two regions. (d) Image containing a textured region. (e) Result of edge computations. Note the large number of small edges that are connected to the original boundary, making it difficult to find a unique boundary using only edge information. (f) Result of segmentation based on region properties.

Line detection using the Laplacian
FIGURE 10.5 (a) Original image. (b) Laplacian image: the magnified section shows the positive/negative double-line effect characteristic of the Laplacian. (c) Absolute value of the Laplacian. (d) Positive values of the Laplacian.
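A minimal sketch of how such responses can be computed is shown below. The transcript does not say which Laplacian kernel Fig. 10.5 uses, so the 8-connected variant and the function name laplacian_line_response are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def laplacian_line_response(image):
    """Convolve with a Laplacian kernel and return the raw response,
    its absolute value, and its positive part (cf. Fig. 10.5 b-d)."""
    lap_kernel = np.array([[1,  1, 1],
                           [1, -8, 1],
                           [1,  1, 1]], dtype=float)
    lap = ndimage.convolve(image.astype(float), lap_kernel, mode='reflect')
    return lap, np.abs(lap), np.clip(lap, 0, None)
```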

Line detection using specific operators

Line detection using specific operators
FIGURE 10.7 (a) Image of a wire-bond template. (b) Result of processing with the line detector mask in Fig. 10.6. (c) Zoomed view of the top left region of (b). (d) Zoomed view of the bottom right region of (b). (e) The image in (b) with all negative values set to zero. (f) All points (in white) whose values satisfied the condition, where g is the image in (e). (The points in (f) were enlarged to make them easier to see.)
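The masks of Fig. 10.6 are not reproduced in the transcript, so the sketch below assumes the commonly used 3x3 line-detector kernels; the function detect_lines and the "keep the strongest responses" rule are illustrative, not necessarily the exact condition used in panel (f):

```python
import numpy as np
from scipy import ndimage

# Commonly used 3x3 line-detector masks; their correspondence to the masks
# in Fig. 10.6 is an assumption, since the figure is not reproduced here.
LINE_MASKS = {
    'horizontal': np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], dtype=float),
    'vertical':   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], dtype=float),
    'diag_down':  np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], dtype=float),
    'diag_up':    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], dtype=float),
}

def detect_lines(image, direction='diag_down'):
    """Respond to one-pixel-wide lines in one direction, discard negative
    responses, and keep the strongest points (cf. Fig. 10.7 e-f)."""
    g = ndimage.convolve(image.astype(float), LINE_MASKS[direction], mode='reflect')
    g = np.clip(g, 0, None)            # set all negative values to zero
    return g, g >= g.max()             # response image and its strongest points
```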

Edge detection

Sobel edge detector (example)
FIGURE 10.16 (a) Original image of size 814 X 1114 pixels with intensity values scaled to the range [0, 1]. (b) $g_x$, the component of the gradient in the x-direction, obtained using the Sobel mask in Fig. 10.14(f) to filter the image. (c) $g_y$, obtained using the mask in Fig. 10.14(g). (d) The gradient image, $|g_x| + |g_y|$.
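A compact sketch of the computation illustrated above, assuming the standard 3x3 Sobel masks and using $|g_x| + |g_y|$ as the gradient image (a common fast approximation to the true magnitude); which mask corresponds to Fig. 10.14(f) versus (g) is an assumption, since the figure is not reproduced here:

```python
import numpy as np
from scipy import ndimage

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # d/dx
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)  # d/dy

def sobel_gradient(image):
    """Return gx, gy, and the gradient image |gx| + |gy| (a common fast
    approximation to sqrt(gx^2 + gy^2))."""
    f = image.astype(float)
    gx = ndimage.convolve(f, SOBEL_X, mode='reflect')
    gy = ndimage.convolve(f, SOBEL_Y, mode='reflect')
    return gx, gy, np.abs(gx) + np.abs(gy)
```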

Edge detection in diagonal directions

Edge linking and boundary detection
Edge detection typically is followed by linking algorithms designed to assemble edge pixels into meaningful edges and/or region boundaries.
Local processing: pixels in the neighborhood of each edge point are analyzed, and all points that are similar according to predefined criteria are linked.
Regional processing: if we have a sequence of ordered points on a region boundary, the points can be linked to assemble a polygon that approximates the region of interest.
Global processing: when we do not have enough knowledge about the object of interest and the points on it, we have to link edges using global properties.

Edge grouping using local processing
One of the simplest approaches for linking edge points is to analyze the characteristics of pixels in a small neighborhood about every point (x, y) that has been declared an edge point by one of the techniques discussed in the previous section. All points that are similar according to predefined criteria are linked, forming an edge of pixels that share common properties according to the specified criteria. The two principal properties used for establishing similarity of edge pixels in this kind of analysis are (1) the strength (magnitude) and (2) the direction of the gradient vector. The first property is based on Eq. (10.2-10). Let $S_{xy}$ denote the set of coordinates of a neighborhood centered at point (x, y) in an image. An edge pixel with coordinates (s, t) in $S_{xy}$ is similar in magnitude to the pixel at (x, y) if $|M(s,t) - M(x,y)| \le E$, where E is a positive threshold. The direction angle of the gradient vector is given by Eq. (10.2-11). An edge pixel with coordinates (s, t) in $S_{xy}$ has an angle similar to the pixel at (x, y) if $|\alpha(s,t) - \alpha(x,y)| \le A$, where A is a positive angle threshold. As noted in Section 10.2.5, the direction of the edge at (x, y) is perpendicular to the direction of the gradient vector at that point.

Local processing (a simplified approach example)
1. Compute the gradient magnitude and angle arrays, M(x, y) and $\alpha(x, y)$, of the input image f(x, y).
2. Form a binary image g whose value at any pair of coordinates (x, y) is given by $g(x,y) = 1$ if $M(x,y) > T_M$ and $\alpha(x,y) = A \pm T_A$, and $g(x,y) = 0$ otherwise, where $T_M$ is a threshold, A is a specified angle direction, and $\pm T_A$ defines a "band" of acceptable directions about A.
3. Scan the rows of g and fill (set to 1s) all gaps (sets of 0s) in each row that do not exceed a specified length, K. Note that, by definition, a gap is bounded at both ends by one or more 1s. The rows are processed individually, with no memory between them.
4. To detect gaps in any other direction, θ, rotate g by this angle and apply the horizontal scanning procedure in Step 3, then rotate the result back by -θ.
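A minimal sketch of Steps 1-3 of this procedure (Step 4, the rotation for other directions, is omitted). The Sobel-based gradient, the parameter names, and the unhandled angle wraparound at ±180° are simplifying assumptions:

```python
import numpy as np
from scipy import ndimage

def link_edges_horizontally(f, t_m, angle_a, t_a, max_gap):
    """Sketch of Steps 1-3: threshold the gradient magnitude and direction,
    then bridge short horizontal gaps between edge points in each row."""
    f = f.astype(float)
    gx = ndimage.sobel(f, axis=1)            # Step 1: gradient components
    gy = ndimage.sobel(f, axis=0)
    mag = np.hypot(gx, gy)                   # M(x, y)
    ang = np.degrees(np.arctan2(gy, gx))     # alpha(x, y), in degrees

    # Step 2: binary image of strong edges whose direction lies in A +/- T_A
    # (wraparound at +/-180 degrees is ignored in this simplified sketch).
    g = ((mag > t_m) & (np.abs(ang - angle_a) <= t_a)).astype(np.uint8)

    # Step 3: in each row, fill runs of 0s no longer than max_gap that are
    # bounded by 1s on both sides; rows are processed independently.
    for row in g:
        ones = np.flatnonzero(row)
        for a, b in zip(ones[:-1], ones[1:]):
            if 1 < b - a <= max_gap + 1:
                row[a:b] = 1
    return g
```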

Edge grouping using local processing (example)

Regional processing
Often, the locations of regions of interest in an image are known or can be determined. This implies that knowledge is available regarding the regional membership of pixels in the corresponding edge image. In such situations, we can use techniques for linking pixels on a regional basis, with the desired result being an approximation to the boundary of the region. One approach to this type of processing is functional approximation, where we fit a 2-D curve to the known points. Typically, interest lies in fast-executing techniques that yield an approximation to essential features of the boundary, such as extreme points and concavities. Polygonal approximations are particularly attractive because they can capture the essential shape features of a region while keeping the representation of the boundary (i.e., the vertices of the polygon) relatively simple. In this section, we develop and illustrate an algorithm suitable for this purpose. Before stating the algorithm, we discuss the mechanics of the procedure using a simple example. Figure 10.28 shows a set of points representing an open curve in which the end points have been labeled A and B.
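The transcript does not reproduce the algorithm itself, but the splitting idea can be sketched as follows: join the endpoints A and B with a straight segment, find the curve point farthest from it, and split there if that distance exceeds a threshold. Everything below (function name, parameters, termination rule) is an illustrative assumption, not the book's exact bookkeeping:

```python
import numpy as np

def polygon_fit(points, threshold):
    """Recursive polygonal approximation of an open curve: join the two
    endpoints, find the point farthest from that segment, and split there
    if its distance exceeds the threshold."""
    points = np.asarray(points, dtype=float)
    a, b = points[0], points[-1]
    dx, dy = b - a
    norm = np.hypot(dx, dy) or 1.0
    # Perpendicular distance of every point from the line through A and B.
    dist = np.abs(dx * (points[:, 1] - a[1]) - dy * (points[:, 0] - a[0])) / norm
    k = int(np.argmax(dist))
    if dist[k] <= threshold or len(points) < 3:
        return [tuple(a), tuple(b)]                # segment A-B is acceptable
    left = polygon_fit(points[:k + 1], threshold)  # A ... farthest point
    right = polygon_fit(points[k:], threshold)     # farthest point ... B
    return left[:-1] + right                       # merge, drop duplicate vertex
```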

Regional Processing (continued)

Regional Processing (example)

Regional Processing (example)

Global linking by Hough transform

Hough transform in Cartesian space

Hough transform in Polar space

Hough transform (illustration)

Hough transform for line detection (example)
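Since the transcript keeps only the slide titles for this part, here is a minimal accumulator-based sketch of the polar (normal) line parameterization $\rho = x\cos\theta + y\sin\theta$; the function name hough_lines and the discretization choices are illustrative assumptions:

```python
import numpy as np

def hough_lines(edge_map, n_theta=180):
    """Vote in (rho, theta) space for every edge pixel, using the normal
    parameterization rho = x*cos(theta) + y*sin(theta)."""
    h, w = edge_map.shape
    thetas = np.deg2rad(np.arange(-90.0, 90.0, 180.0 / n_theta))
    rho_max = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * rho_max + 1, thetas.size), dtype=np.int64)

    ys, xs = np.nonzero(edge_map)              # coordinates of edge pixels
    cols = np.arange(thetas.size)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + rho_max, cols] += 1          # one vote per (rho, theta) cell
    return acc, thetas, np.arange(-rho_max, rho_max + 1)
```

Peaks in the accumulator correspond to sets of collinear edge points; each peak's (ρ, θ) pair gives one detected line.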

Thresholding

Thresholding based on histogram analysis

Global thresholding

Global thresholding (example)
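The slide content for global thresholding is not included in the transcript; a common basic procedure, sketched below under that assumption, iterates between splitting the image at a threshold T and resetting T to the average of the two class means until T stabilizes:

```python
import numpy as np

def iterative_global_threshold(image, delta=0.5):
    """Basic iterative global thresholding: start from the overall mean and
    repeatedly set T to the average of the two class means until the change
    in T falls below delta."""
    f = image.astype(float)
    t = f.mean()
    while True:
        low, high = f[f <= t], f[f > t]
        if low.size == 0 or high.size == 0:     # degenerate (near-constant) image
            return t
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < delta:
            return t_new
        t = t_new
```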

Optimal Global Adaptive Thresholding

Optimal Global Adaptive Thresholding for Gaussian Distribution function

The role of noise in image thresholding

Otsu's Method
Let {0, 1, 2, ..., L-1} denote the L distinct intensity levels in a digital image of size $M \times N$ pixels, and let $n_i$ denote the number of pixels with intensity i. The total number, MN, of pixels in the image is $MN = n_0 + n_1 + \dots + n_{L-1}$. The normalized histogram (see Section 3.3) has components $p_i = n_i / MN$, from which it follows that $\sum_{i=0}^{L-1} p_i = 1$, $p_i \ge 0$. Now, suppose that we select a threshold T(k) = k, 0 < k < L-1, and use it to threshold the input image into two classes, $C_1$ and $C_2$, where $C_1$ consists of all the pixels in the image with intensity values in the range [0, k] and $C_2$ consists of the pixels with values in the range [k+1, L-1]. Using this threshold, the probability, $P_1(k)$, that a pixel is assigned to (i.e., thresholded into) class $C_1$ is given by the cumulative sum $P_1(k) = \sum_{i=0}^{k} p_i$.

Otsu's Method (continued)
Viewed another way, this is the probability of class $C_1$ occurring. For example, if we set k = 0, the probability of class $C_1$ having any pixels assigned to it is zero. Similarly, the probability of class $C_2$ occurring is $P_2(k) = \sum_{i=k+1}^{L-1} p_i = 1 - P_1(k)$. From Eq. (3.3-18), the mean intensity value of the pixels assigned to class $C_1$ is $m_1(k) = \sum_{i=0}^{k} i\,P(i \mid C_1) = \frac{1}{P_1(k)} \sum_{i=0}^{k} i\,p_i$, where $P_1(k)$ is given in Eq. (10.3-4). The term $P(i \mid C_1)$ in the first line of Eq. (10.3-6) is the probability of value i, given that i comes from class $C_1$. The second line in the equation follows from Bayes' formula: $P(i \mid C_1) = \frac{P(C_1 \mid i)\,P(i)}{P(C_1)}$.

Otsu's Method (continued)
Similarly, the mean intensity value of the pixels assigned to class $C_2$ is $m_2(k) = \frac{1}{P_2(k)} \sum_{i=k+1}^{L-1} i\,p_i$. The cumulative mean (average intensity) up to level k is given by $m(k) = \sum_{i=0}^{k} i\,p_i$, and the average intensity of the entire image (i.e., the global mean) is given by $m_G = \sum_{i=0}^{L-1} i\,p_i$.

Otsu's Method (continued)
The validity of the following two equations can be verified by direct substitution of the preceding results: $P_1 m_1 + P_2 m_2 = m_G$ and $P_1 + P_2 = 1$, where we have omitted the ks temporarily in favor of notational clarity. In order to evaluate the "goodness" of the threshold at level k we use the normalized, dimensionless metric $\eta = \sigma_B^2 / \sigma_G^2$, where $\sigma_G^2$ is the global variance [i.e., the intensity variance of all the pixels in the image, as given in Eq. (3.3-19)], $\sigma_G^2 = \sum_{i=0}^{L-1} (i - m_G)^2 p_i$,

Otsu's Method (continued)
and $\sigma_B^2$ is the between-class variance, defined as $\sigma_B^2 = P_1 (m_1 - m_G)^2 + P_2 (m_2 - m_G)^2$. This expression can also be written as $\sigma_B^2 = P_1 P_2 (m_1 - m_2)^2 = \frac{(m_G P_1 - m)^2}{P_1 (1 - P_1)}$, where $P_1$, $m_G$, and m are as stated earlier. These equations follow from the preceding results. The last form is slightly more efficient computationally because the global mean, $m_G$, is computed only once, so only two parameters, m and $P_1$, need to be computed for any value of k. We see from the first line in Eq. (10.3-15) that the farther the two means $m_1$ and $m_2$ are from each other, the larger $\sigma_B^2$ will be, indicating that the between-class variance is a measure of separability between classes. Because $\sigma_G^2$ is a constant, it follows that η also is a measure of separability, and maximizing this metric is equivalent to maximizing $\sigma_B^2$. The objective, then, is to determine the threshold value, k, that maximizes the between-class variance. Note that Eq. (10.3-12) assumes implicitly that $\sigma_G^2 \ne 0$. This variance can be zero only when all the intensity levels in the image are the same, which implies the existence of only one class of pixels. This in turn means that η = 0 for a constant image, since the separability of a single class from itself is zero.

Otsu's Method (continued)
Reintroducing k, we have the final results: $\sigma_B^2(k) = \frac{[m_G P_1(k) - m(k)]^2}{P_1(k)\,[1 - P_1(k)]}$. Then, the optimum threshold is the value, k*, that maximizes $\sigma_B^2(k)$: $\sigma_B^2(k^*) = \max_{0 \le k \le L-1} \sigma_B^2(k)$. In other words, to find k* we simply evaluate Eq. (10.3-18) for all integer values of k (such that the condition $0 < P_1(k) < 1$ holds) and select the value of k that yields the maximum $\sigma_B^2(k)$. If the maximum exists for more than one value of k, it is customary to average the various values of k for which $\sigma_B^2(k)$ is maximum. It can be shown (Problem 10.33) that a maximum always exists, subject to the condition that $0 < P_1(k) < 1$. Evaluating Eqs. (10.3-17) and (10.3-18) for all values of k is a relatively inexpensive computational procedure, because the maximum number of integer values that k can have is L.

Otsu's Method (continued)
Otsu's algorithm may be summarized as follows:
1. Compute the normalized histogram of the input image. Denote the components of the histogram by $p_i$, i = 0, 1, 2, ..., L-1.
2. Compute the cumulative sums, $P_1(k)$, for k = 0, 1, 2, ..., L-1, using Eq. (10.3-4).
3. Compute the cumulative means, $m(k)$, for k = 0, 1, 2, ..., L-1, using Eq. (10.3-8).
4. Compute the global intensity mean, $m_G$, using Eq. (10.3-9).
5. Compute the between-class variance, $\sigma_B^2(k)$, for k = 0, 1, 2, ..., L-1, using Eq. (10.3-17).
6. Obtain the Otsu threshold, k*, as the value of k for which $\sigma_B^2(k)$ is maximum. If the maximum is not unique, obtain k* by averaging the values of k corresponding to the various maxima detected.
7. Obtain the separability measure, $\eta^*$, by evaluating Eq. (10.3-16) at k = k*.
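A self-contained NumPy sketch of these seven steps, assuming an 8-bit image with L = 256 levels; the function name otsu_threshold is illustrative:

```python
import numpy as np

def otsu_threshold(image, levels=256):
    """Otsu's method following the steps above: normalized histogram, P1(k),
    cumulative mean m(k), global mean mG, between-class variance, and the k
    that maximizes it (averaged if the maximum is not unique)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()                       # step 1: normalized histogram
    p1 = np.cumsum(p)                           # step 2: P1(k)
    m = np.cumsum(np.arange(levels) * p)        # step 3: cumulative mean m(k)
    m_g = m[-1]                                 # step 4: global mean mG

    # Step 5: between-class variance; where P1(k) is 0 or 1 the numerator is
    # also 0, so the resulting NaNs are safely replaced by zero.
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = (m_g * p1 - m) ** 2 / (p1 * (1.0 - p1))
    sigma_b2 = np.nan_to_num(sigma_b2)

    # Step 6: average the k values at which the maximum occurs.
    k_star = np.mean(np.flatnonzero(sigma_b2 == sigma_b2.max()))

    # Step 7: separability measure eta* = sigma_B^2(k*) / sigma_G^2.
    sigma_g2 = np.sum((np.arange(levels) - m_g) ** 2 * p)
    eta = sigma_b2.max() / sigma_g2 if sigma_g2 > 0 else 0.0
    return k_star, eta
```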

Thresholding using Otsu's method (example)