Published by Neal Wade. Modified over 9 years ago.
1
Joonas Vanninen Antonio Palomino Alarcos
2
Segmentation is one of the objectives of biomedical image analysis: the process of dividing an image into different parts based on discontinuity or similarity. The characteristics of the resulting regions are examined later in detail. Many of the methods can be used more generally for the detection of features.
3
If the gray levels of the objects of interest are known, the image can be thresholded to include only those levels. Thresholding alone does not generally produce uniform regions.
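As a minimal sketch of band thresholding in NumPy (the helper name `threshold_band` and the test values are illustrative, not from the slides):

```python
import numpy as np

def threshold_band(img, lo, hi):
    """Binary mask of the pixels whose gray level lies in [lo, hi]."""
    return ((img >= lo) & (img <= hi)).astype(np.uint8)

img = np.array([[10, 120, 200],
                [130, 140, 30],
                [250, 125, 90]])
mask = threshold_band(img, lo=100, hi=150)  # keeps 120, 130, 140, 125
```

The mask marks pixels of the right gray level wherever they occur, which is why thresholding alone rarely yields uniform, connected regions.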
4
Point detection can be useful in noise removal and in the analysis of particles. Isolated points can be detected with a convolution mask, and the response can be thresholded. Straight lines can be detected with similar masks.
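The slide's mask is not reproduced in the transcript; the sketch below assumes the standard 3 × 3 isolated-point mask (8 in the center, −1 elsewhere) and an illustrative `convolve3x3` helper:

```python
import numpy as np

# Standard 3x3 isolated-point detection mask (assumed; the slide's own
# mask is not reproduced in the transcript)
MASK = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]])

def convolve3x3(img, mask):
    """'Valid' 3x3 correlation with a symmetric mask, plain NumPy."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * mask)
    return out

img = np.zeros((5, 5))
img[2, 2] = 1.0                 # a single isolated bright point
resp = convolve3x3(img, MASK)
points = np.abs(resp) > 4       # threshold the response to localize it
```

The response is strongly positive at the point itself and small elsewhere, so a simple threshold isolates it.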
6
An edge is a large change in the gray level. The change occurs in a particular direction, depending upon the orientation of the edge, and can be measured, for example, with derivatives or gradients.
7
Derivatives and gradients can be approximated by differences, computed with convolution masks such as the Prewitt operators.
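A sketch of the Prewitt masks applied to a synthetic step edge (the `correlate3x3` helper and the test image are illustrative):

```python
import numpy as np

# Prewitt operators: difference along one axis, averaging along the other
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])   # responds to vertical edges
PREWITT_Y = PREWITT_X.T              # responds to horizontal edges

def correlate3x3(img, mask):
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * mask)
    return out

# A vertical step edge: left half dark, right half bright
img = np.zeros((4, 4))
img[:, 2:] = 1.0
gx = correlate3x3(img, PREWITT_X)    # uniformly strong response at the step
```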
8
Sobel operators have larger weights for the pixels in the same row/column. Roberts operators use 2 × 2 neighborhoods to compute cross-differences.
9
With two masks we can get a vector value for the gradient (a magnitude and a direction).
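Combining the responses of two Sobel masks into a gradient vector might look like this (a sketch; the magnitude and angle formulas are the standard ones, and the helper and test image are illustrative):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def correlate3x3(img, mask):
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * mask)
    return out

img = np.zeros((4, 4))
img[:, 2:] = 1.0                      # vertical step edge
gx = correlate3x3(img, SOBEL_X)
gy = correlate3x3(img, SOBEL_Y)
magnitude = np.hypot(gx, gy)          # strength of the edge
direction = np.arctan2(gy, gx)        # orientation of the gradient
```

For the vertical step edge the gradient points horizontally (direction 0), with magnitude equal to the x-response.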
10
The Laplacian is a second-order difference operator. It is omnidirectional: sensitive to edges in all directions, but unable to detect the direction of an edge. It is sensitive to noise, since there is no averaging. It produces positive and negative values for each edge, with zero crossings in between that can be used to find the local maxima of the first-order gradients.
11
The noise in an image can be reduced by first convolving it with a Gaussian. The order of the operators (Gaussian smoothing and the Laplacian) can be interchanged.
12
The result is called the Laplacian of Gaussian operator (LoG), often referred to as the Mexican hat function. It can be approximated by the difference between two Gaussians (the DoG operator).
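A sketch of the LoG kernel and its DoG approximation (the σ ratio of 1.6 is a commonly quoted choice, not stated on the slide, and the LoG is written up to a constant factor):

```python
import numpy as np

def gaussian2d(sigma, size):
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()                 # normalized to unit sum

def log2d(sigma, size):
    """Laplacian of Gaussian ('Mexican hat'), up to a constant factor."""
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    return ((r2 - 2 * sigma**2) / sigma**4) * np.exp(-r2 / (2 * sigma**2))

sigma = 1.0
# DoG: a narrow Gaussian minus a wider one approximates the LoG shape
dog = gaussian2d(sigma, 9) - gaussian2d(1.6 * sigma, 9)
log = log2d(sigma, 9)
```

Since both Gaussians are normalized, the DoG kernel sums to zero, like a proper second-derivative operator.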
13
Zero-crossings of the image convolved with the LoG operator are used to represent edges. Problems: if the edges are not well separated, zero crossings may also represent local minima (false zero crossings), and the edge localization may be poor.
14
Different structures are visible at different scales (the parameter σ of the Gaussian). Ideally an edge would be seen at as many scales as possible. A stability map measures the persistence of boundaries over a range of filter scales.
15
An ideal detector for step-type edges corrupted with additive white noise is designed by three criteria: detection (no false or missing edges), localization (detected edges are spatially near the real ones), and a single output for a single edge. The image is convolved with a Gaussian and the gradient is estimated for each pixel; the direction of the gradient is the normal of the edge, and the amplitude of the gradient is the strength of the edge.
16
Non-maximal suppression: gradient values that are not local maxima are set to zero. The gradients are then hysteresis thresholded: a pixel is considered an edge pixel if it has a gradient value larger than the higher threshold, or if it has a gradient value larger than the lower threshold and is spatially connected to another edge pixel. The zero-crossings of the second derivative in the direction of the normal can also be used; this allows sub-pixel accuracy.
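The hysteresis step by itself can be sketched as a flood fill from the strong pixels through the weak ones (8-connectivity assumed; the thresholds and toy gradient image are illustrative):

```python
import numpy as np
from collections import deque

def hysteresis(grad, lo, hi):
    """Keep pixels above `hi`, plus pixels above `lo` connected to them."""
    strong = grad > hi
    weak = grad > lo
    out = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    h, w = grad.shape
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):          # visit all 8 neighbors
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and weak[ni, nj] and not out[ni, nj]:
                    out[ni, nj] = True
                    q.append((ni, nj))
    return out

grad = np.array([[0.0, 0.4, 0.9, 0.4, 0.0],
                 [0.0, 0.0, 0.0, 0.0, 0.4]])
edges = hysteresis(grad, lo=0.3, hi=0.8)
```

The isolated weak pixel at the end of the second row survives only because it touches a kept pixel diagonally; a weak pixel with no path to a strong one would be dropped.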
17
Highpass filters in the Fourier domain can be used to find edges. Against high-frequency noise, use a bandpass filter. The LoG filter combines a high-frequency-emphasizing Laplacian with a Gaussian lowpass filter. Use of the frequency domain may be computationally advantageous if the LoG is specified with a large array (large σ).
18
Detected edge pixels are usually not linked. The similarity of edge pixels can be measured by the strength and the direction of the gradient; the most similar pixels should be used to link edges to each other.
28
Dividing the image into regions that could correspond to ROIs is an important prerequisite for applying image analysis techniques. Computer analysis of images usually starts with segmentation, which reduces pixel data to region-based information about the objects present in the image.
29
Thresholding techniques assume that all pixels whose values lie within a certain range belong to the same class; the threshold may be determined from the histogram of the image. Boundary-based methods assume that pixel values change rapidly at the boundaries between regions, so the intensity discontinuities lying at the boundaries between objects and background must be detected.
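One common histogram-based way to pick the threshold is Otsu's method (not named on the slide), which maximizes the between-class variance; a sketch:

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Histogram-based threshold maximizing between-class variance."""
    hist, bin_edges = np.histogram(img, bins=n_bins, range=(0, n_bins))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, n_bins):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, n_bins) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal image: one class around 50, one around 200
img = np.concatenate([np.full(100, 50), np.full(100, 200)])
t = otsu_threshold(img)                        # lands between the two modes
```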
30
Region-based methods assume that neighboring pixels within a region have similar values. They may be divided into two groups: region splitting and merging, and region growing. Hybrid techniques combine boundary and region criteria.
31
Noise modifies the gray levels of the object and background classes into distributions represented by Gaussian PDFs. The probability of erroneous classification depends on the threshold T. Differentiating with respect to T, equating the result to zero, and making some simplifications (σ1 = σ2 = σ) yields the optimal threshold T = (μ1 + μ2)/2 + (σ²/(μ1 − μ2)) ln(P2/P1), where μ1, μ2 are the class means and P1, P2 the prior probabilities.
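With equal priors the optimal threshold reduces to the midpoint of the two means; a sketch of the standard closed-form result (symbol names follow the slide: class means μ1, μ2, common σ, priors P1, P2):

```python
import numpy as np

def optimal_threshold(mu1, mu2, sigma, p1, p2):
    """Minimum-error threshold for two equal-variance Gaussian classes
    (standard closed-form result)."""
    return (mu1 + mu2) / 2 + (sigma**2 / (mu1 - mu2)) * np.log(p2 / p1)

# Equal priors: the log term vanishes and T is midway between the means
t = optimal_threshold(mu1=50, mu2=150, sigma=10, p1=0.5, p2=0.5)
```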
32
This method partitions R (the entire image region) into n subregions such that the subregions are disjoint and together cover R, each R_i is a connected region for i = 1, 2, …, n, and a logical predicate F(R_i) is TRUE for each region but FALSE for the union of any two adjacent regions. Results are highly dependent upon the procedure used to select the seed pixels and the inclusion criteria used.
33
Initially, the given image is divided arbitrarily into a set of disjoint quadrants. If F(R_i) = FALSE for any quadrant, that quadrant is subdivided into subquadrants, and the procedure is iterated until no further changes are made. Since the splitting procedure could result in adjacent regions that are similar, a merging step is required: adjacent regions R_i and R_k are merged if F(R_i U R_k) = TRUE, iterating until no further merging is possible.
34
A neighboring pixel f(m, n) is appended to the region if its gray level is within T of the seed value, where T ≡ the "additive tolerance level". Problem: the size and shape of the region depend on the seed pixel selected.
36
Running-mean algorithm: the new pixel is compared with the mean gray level (running mean) of the region being grown. "Current center pixel" method: after a pixel C is appended to the region, its 4- (or 8-) connected neighbors are checked for inclusion in the region in the same way, comparing against the gray level of C.
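A running-mean region grower can be sketched as a breadth-first search from the seed (4-connectivity, the tolerance, and the toy image are illustrative):

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol):
    """Region growing: a 4-connected neighbor joins if its gray level is
    within `tol` of the running mean of the region grown so far."""
    h, w = img.shape
    in_region = np.zeros((h, w), dtype=bool)
    in_region[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if 0 <= ni < h and 0 <= nj < w and not in_region[ni, nj]:
                if abs(img[ni, nj] - total / count) <= tol:
                    in_region[ni, nj] = True
                    total += img[ni, nj]   # update the running mean
                    count += 1
                    q.append((ni, nj))
    return in_region

img = np.array([[10, 11, 50],
                [12, 11, 52],
                [51, 50, 53]], dtype=float)
region = grow_region(img, seed=(0, 0), tol=5)  # grows over the dark corner
```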
38
A relative difference, based upon a "multiplicative tolerance level" τ, could be employed: |f(m, n) − μ_Rc| / μ_Rc ≤ τ, where f(m, n) ≡ the gray level of the pixel being checked and μ_Rc ≡ the reference value, which may be the original seed pixel value, the current center pixel value, or the running-mean gray level.
39
The preceding methods present difficulties in the selection of the tolerance value. A possible solution is to make use of characteristics of the human visual system (HVS). A new parameter, the "just-noticeable difference" (JND), is used: JND = L · C_T, where L ≡ the background luminance and C_T ≡ the threshold contrast.
40
To apply this method, the JND must be determined as a function of the background gray level. It is possible to determine this relationship from psychophysical experiments.
41
1. Start with a 4-connected neighbor-pixel grouping under the JND condition. 2. Remove small regions. 3. Merge connected regions if any two neighboring regions meet the JND condition. 4. Iterate until no neighboring region satisfies the JND condition.
46
In the Hough domain, straight lines are characterized by the pair of parameters (m, c), where m is the slope and c is the intercept. Disadvantage: m and c have unbounded ranges. With the normal parametric representation ρ = x cos θ + y sin θ, θ is limited to [0, π] (or to [0, 2π]) and ρ is limited by the size of the image; the limits of (ρ, θ) are affected by the choice of the origin.
47
If the normal parameters of the line are (ρ0, θ0), the derived properties are: a point in (x, y) space corresponds to a sinusoidal curve in (ρ, θ) space; a point in (ρ, θ) space corresponds to a straight line in (x, y) space; points on the same straight line in (x, y) space correspond to curves through a common point in (ρ, θ) space; and points on the same curve in the parameter space correspond to lines through a common point in (x, y) space.
48
1. Discretize the (ρ, θ) space into accumulator cells by quantizing ρ and θ. 2. For each pixel with a value of 1, increase by one each accumulator cell along the curve ρ = x(n) cos θ + y(n) sin θ. 3. The coordinates of the points of intersection of the curves in the parameter space provide the parameters of the lines.
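The accumulation step can be sketched as follows (bin sizes and the example image are illustrative; each edge pixel votes along its sinusoid ρ = x cos θ + y sin θ):

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Vote rho = x*cos(theta) + y*sin(theta) for each edge pixel."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))       # |rho| never exceeds the diagonal
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        acc[np.round(rhos).astype(int) + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Five edge pixels on the vertical line x = 2
img = np.zeros((5, 5), dtype=int)
img[:, 2] = 1
acc, thetas, diag = hough_lines(img)
votes = acc[2 + diag, 0]   # all five pixels vote for (rho = 2, theta = 0)
```

The ρ index is shifted by the image diagonal so that negative ρ values map to valid accumulator rows.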
49
Image: Wikipedia
50
Any circle in (x, y) space is represented by a single point in the 3D (a, b, c) parameter space, where (a, b) is the center and c is the radius. Points along the perimeter of a circle describe a circular cone in (a, b, c) space. The algorithm for the detection of straight lines may be extended to the detection of circles.
51
The methods presented here are quite general; their applications are not limited to segmentation. The purpose of any image processing method is to obtain a result that can be presented to humans or used in further analysis, and the result should be consistent with a human observer's assessment. A priori information about the shapes and features in an image is important in segmentation.