An article by: Itay Bar-Yosef, Nate Hagbi, Klara Kedem, Itshak Dinstein
Computer Science Department, Ben-Gurion University, Beer-Sheva, Israel
Presented by: Doron Ben-Zion and Michael Wasserstrum
A distance transform, also known as a distance map or distance field, is a derived representation of a digital image. Derived means extracting information from the original image. We label each pixel of the image with the Euclidean distance to the nearest "obstacle pixel", which in our case is a foreground pixel. For each pixel x: DT(x) = distance(x, y), where y is the nearest foreground pixel.
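As a quick illustration (not taken from the article), here is a minimal sketch of the Euclidean distance transform using SciPy. It assumes foreground pixels are marked with 1; scipy.ndimage.distance_transform_edt measures the distance of non-zero pixels to the nearest zero pixel, so the mask is inverted first.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# A 7x7 image with a single foreground ("obstacle") pixel in the centre.
img = np.zeros((7, 7), dtype=np.uint8)
img[3, 3] = 1

# DT(x) = Euclidean distance from x to the nearest foreground pixel.
# distance_transform_edt computes distances of non-zero pixels to the
# nearest zero, so we pass the inverted mask (foreground becomes 0).
dt = distance_transform_edt(img == 0)

print(dt[3, 3])  # 0.0 - foreground pixels are labelled zero
print(dt[0, 0])  # ~4.24 = sqrt(3^2 + 3^2), the distance to the centre pixel
```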
The distance transform is sometimes very sensitive to small changes in the object. If, for example, we take a white rectangle that contains a small black region at its center, the distance transform becomes:
The distance transform is also very sensitive to noise.
An example of applying the distance transform to a real-world image: to obtain a binary input image, we threshold the image at a value of 100, and then compute the distance transform on the binary result.
To perform an estimation of the DT on a given binary matrix (which represents an image) we use one of three given "masks", each of which represents a different metric:
L1 mask (diamond)
L2 mask (Euclidean)
L∞ mask (chessboard)
Computers "prefer" working with integers, so to simplify the process we multiply the mask values by 100 while preserving the ratios (e.g. sqrt(2) ≈ 1.41 becomes 141).
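The exact mask values below are a sketch based on the standard chamfer approximations (an assumption, since the slides only show the sqrt(2) value for L2); all entries are scaled by 100 so the algorithm can stay in integer arithmetic.

```python
import numpy as np

# Centre entry is 0 and is never added to a neighbour's value.
L1_MASK = 100 * np.array([[2, 1, 2],
                          [1, 0, 1],
                          [2, 1, 2]])          # city-block / diamond metric

LINF_MASK = 100 * np.array([[1, 1, 1],
                            [1, 0, 1],
                            [1, 1, 1]])        # chessboard metric

L2_MASK = np.array([[141, 100, 141],
                    [100,   0, 100],
                    [141, 100, 141]])          # Euclidean metric, 141 = round(100 * sqrt(2))
```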
For a given image matrix A (m×n) and a given mask M:
Initialize every background pixel to ∞ and every foreground pixel to zero.
Loop 1:
For k = 1 to m (top-down)
  For s = 1 to n (left-to-right)
    A[k,s] = min{ A[k+i, s+j] + M[i,j] : −1 ≤ i ≤ 1, −1 ≤ j ≤ 1 }
Loop 2:
For k = m down to 1 (bottom-up)
  For s = n down to 1 (right-to-left)
    A[k,s] = min{ A[k+i, s+j] + M[i,j] : −1 ≤ i ≤ 1, −1 ≤ j ≤ 1 }
In our case we use the L2 (Euclidean) mask.
Note that we modify A in place and do not create a new matrix (image). This means that once a pixel's value has been changed, its new value is taken into consideration in the upcoming iterations. Also note that in the first loop we do not consider the values to the right of the pixel, since they can still change later. That is why we need the second loop!
We can use only part of the mask in each loop!
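A minimal sketch of the two-pass algorithm above, assuming a NumPy array with foreground pixels equal to 1: the forward pass uses only the upper-left half of the L2 mask, the backward pass only the lower-right half, and the array is updated in place so new values propagate within the same pass.

```python
import numpy as np

INF = 10**9  # stands in for the "infinity" used to initialize background pixels

def chamfer_dt(binary, ortho=100, diag=141):
    """Two-pass chamfer distance transform; binary has foreground pixels == 1."""
    m, n = binary.shape
    A = np.where(binary == 1, 0, INF).astype(np.int64)

    # Loop 1 (top-down, left-to-right): only neighbours above and to the left.
    forward = [(-1, -1, diag), (-1, 0, ortho), (-1, 1, diag), (0, -1, ortho)]
    for k in range(m):
        for s in range(n):
            for di, dj, w in forward:
                i, j = k + di, s + dj
                if 0 <= i < m and 0 <= j < n:
                    A[k, s] = min(A[k, s], A[i, j] + w)

    # Loop 2 (bottom-up, right-to-left): only neighbours below and to the right.
    backward = [(1, 1, diag), (1, 0, ortho), (1, -1, diag), (0, 1, ortho)]
    for k in range(m - 1, -1, -1):
        for s in range(n - 1, -1, -1):
            for di, dj, w in backward:
                i, j = k + di, s + dj
                if 0 <= i < m and 0 <= j < n:
                    A[k, s] = min(A[k, s], A[i, j] + w)

    return A / 100.0  # undo the integer scaling
```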
Document skew estimation is an important step in the process of document analysis. It affects the performance of subsequent stages of the document-capturing process, such as: line extraction, page segmentation, and OCR (optical character recognition). We will use the distance transform to estimate the skew of a document.
1. Use thresholding to obtain a binarized document.
2. Apply the distance transform.
3. Use Gaussian blur to smooth the image.
4. Calculate the gradient for each background pixel.
5. Calculate the average orientation for each window.
6. Produce a histogram, fit a Gaussian on top of it, and return the Gaussian's central value.
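The whole pipeline can be sketched with off-the-shelf SciPy / scikit-image building blocks. The sigma, window size, bin width, and the folding of the gradient angle into (−90°, 90°) below are assumptions of this sketch, not values taken from the article, and the final Gaussian fit is shown separately at the end.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter, uniform_filter
from skimage.filters import threshold_otsu

def estimate_skew(gray, sigma=3.0, window=16, bin_width=0.5):
    binary = gray < threshold_otsu(gray)        # 1. binarize (text pixels become True)
    dt = distance_transform_edt(~binary)        # 2. distance transform of the background
    dt = gaussian_filter(dt, sigma=sigma)       # 3. Gaussian smoothing of the DT
    dy, dx = np.gradient(dt)                    # 4. gradient of the smoothed DT
    # The gradient is roughly perpendicular to the text lines; fold its angle
    # into (-90, 90) so it lines up with the skew (a convention assumed here).
    theta = (np.degrees(np.arctan2(dy, dx)) % 180.0) - 90.0
    theta = uniform_filter(theta, size=window)  # 5. simple windowed average of orientations
    hist, edges = np.histogram(                 # 6. orientation histogram h_theta
        theta, bins=np.arange(-90.0, 90.0 + bin_width, bin_width))
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres[np.argmax(hist)]             # histogram peak ~ the skew angle
```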
Binarized documents: a binarized document is a document represented by only two pixel values, 0 and 1; usually 0 for black and 1 for white. To obtain a binarized document from a gray-scale document we simply use a threshold. In our case, Otsu's global thresholding approach was used.
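A short sketch of this step using scikit-image's implementation of Otsu's method. It assumes the text is darker than the background; note that the convention is flipped here so that text pixels become 1, since they are the foreground (obstacle) pixels for the distance transform.

```python
import numpy as np
from skimage.filters import threshold_otsu

def binarize(gray):
    t = threshold_otsu(gray)              # single global threshold chosen by Otsu's method
    return (gray < t).astype(np.uint8)    # 1 = text (foreground), 0 = background
```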
Original image | Binarized image
We don't need color information for estimating the document orientation. Binarization is also crucial for the distance transform, an important step in our skew estimation process.
We use the DT as explained before. We use the DT because of the observation that the dominant orientation of its gradients accurately reflects the skew of the document. (a) A portion of a text document image. (b) The DT of the document image.
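A sketch of computing the gradient orientation field of the DT with NumPy: np.gradient returns the row and column derivatives, from which each pixel's orientation follows with arctan2.

```python
import numpy as np

def gradient_orientations(dt):
    dy, dx = np.gradient(dt.astype(np.float64))  # derivatives along rows and columns
    return np.arctan2(dy, dx)                    # per-pixel gradient orientation, in radians
```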
A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function.
The space between characters creates local maxima that are irrelevant to the document orientation and interfere with the statistics computed later. We would like to avoid those local maxima. Blurring helps us eliminate the local maxima between characters.
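The smoothing itself is a single call in SciPy; the sigma value here is an assumption and in practice would be tuned to the typical character spacing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_dt(dt, sigma=3.0):
    # Gaussian smoothing suppresses the narrow local maxima between characters
    # while keeping the coarser structure between text lines.
    return gaussian_filter(dt.astype(np.float64), sigma=sigma)
```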
(c) Gradient orientation field of the DT. (d) Gradient orientation field of the smoothed DT.
(a) A document image rotated at 20°. (b) Corresponding histogram hθ. (c) A document image rotated at −30°. (d) Corresponding histogram hθ.
Fitting a Gaussian allows us to return an accurate value of the skew. We return the Gaussian central value!
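A sketch of this final step, assuming an array of window orientations in degrees: build the histogram hθ and fit a Gaussian around its peak with scipy.optimize.curve_fit. The bin width and the initial guess are assumptions of the sketch; the fitted mean (the Gaussian central value) is returned as the skew.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def skew_from_orientations(orientations_deg, bin_width=0.5):
    bins = np.arange(-90.0, 90.0 + bin_width, bin_width)
    hist, edges = np.histogram(orientations_deg, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])

    # Initial guess: the histogram peak with a small spread.
    p0 = [hist.max(), centres[np.argmax(hist)], 2.0]
    (a, mu, sigma), _ = curve_fit(gaussian, centres, hist, p0=p0)
    return mu  # the Gaussian central value = the estimated skew angle
```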