Detecting Artifacts and Textures in Wavelet Coded Images
Rajas A. Sambhare
ECE 738, Spring 2003 Final Project
Motivation
- Wavelet-based image coders like JPEG 2000 lead to new types of artifacts when used at low bit rates:
  - Blocking artifacts
  - Color distortion
  - Ringing artifacts
  - Blurring artifacts
- Illustrated at http://ai.fri.uni-lj.si/~aleks/jpeg/artifacts.htm
- Ringing artifacts can be detected and removed easily with smoothing, e.g. [Hu, Tull, Yang and Nguyen, 2001], [Nosratinia, 2003]
Motivation
- Blurring artifacts can be removed by using texture synthesis, as demonstrated in [Hu, Sambhare, 2003]
- Two image regions have to be manually identified:
  - Target regions (T): where the blurry artifact is visible
  - Source regions (S): where the original texture is preserved
Motivation
- Blurring artifacts are easy to detect visually but difficult to detect automatically
- For visual detection, humans use:
  - Contextual information
  - Color and contour cues
- In this project we:
  - Segment the image using texture and color
  - Attempt to identify potential source and target regions
Algorithm Overview
1. Feature extraction
2. K-means segmentation
3. Identification of potential source and target regions
- The algorithm is computationally intensive; reducing the computational requirements while maintaining high quality was another goal
- MATLAB implementation (uses the Image Processing Toolbox)
Feature Extraction
- Used an 11- or 13-dimensional feature vector for each pixel (11 for gray and 13 for color images)
- Features:
  - Normalized position (row/maxRow, column/maxCol)
  - Low-passed intensity value (or RGB triplet)
    - Median filtering instead of linear low-pass filtering, to preserve contour edges
  - 8 different texture features
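A minimal sketch of how the position and intensity parts of such a per-pixel feature matrix might be assembled. Python/NumPy stands in for the original MATLAB implementation; the function name `pixel_features` and the 5×5 median window are illustrative assumptions, and the eight texture channels described on the next slides would be appended as extra columns.

```python
import numpy as np
from scipy.ndimage import median_filter

def pixel_features(img):
    """Per-pixel features: normalized (row, col) position plus a
    median-filtered intensity. A median filter (rather than a linear
    low-pass filter) is used so contour edges are preserved."""
    rows, cols = img.shape
    r, c = np.mgrid[0:rows, 0:cols]
    smoothed = median_filter(img, size=5)   # window size is an assumption
    feats = np.stack([r / (rows - 1),       # row / maxRow
                      c / (cols - 1),       # column / maxCol
                      smoothed], axis=-1)
    return feats.reshape(-1, 3)             # one feature vector per pixel

img = np.random.default_rng(0).random((16, 16))
F = pixel_features(img)                     # shape: (256, 3)
```

For a color image the single intensity column would be replaced by a median-filtered RGB triplet, giving the 13-dimensional vector mentioned above.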
Feature Extraction: Texture Features
- Use oriented Difference of Offset Gaussian (DOOG) and Difference of Gaussian (DOG) filters to extract features [Malik, Perona, 1990]
- Use 2 DOG filters to detect spot-like regions
- Use 6 DOOG filters at orientations from 0° to 180° to detect bar-like regions
Feature Extraction: Texture Features
- Apply a magnitude operator (abs()) to replicate the non-linear step in human texture recognition
- Apply a median filter to the result to replicate lateral neuronal inhibition in humans, giving the final texture feature
Segmentation
- Use k-means to cluster the pixels into k regions
- Complexity is O(cknd), where
  - n: number of data points
  - d: dimensionality of the feature space
  - c: number of iterations (depends sub-linearly on k, n, d)
- For a 256 × 256 image with 8 clusters, this takes more than 5 minutes to run in MATLAB
- Unusable for larger images, since n grows as the square of the image dimension
Segmentation: Complexity Reduction
- Reduce feature dimensionality using Principal Component Analysis (reduces d)
- Modify the k-means algorithm (reduces n):
  - Randomly select 10% of the data points
  - Classify them and compute centroids
  - Use the centroids to classify the remaining data points
- A 256 × 256 image takes 10-15 seconds to segment (including feature extraction) in MATLAB
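The modified k-means step could be sketched like this (Python/NumPy in place of the MATLAB implementation; the function name, iteration count, and random initialization are assumptions for illustration):

```python
import numpy as np

def subsampled_kmeans(X, k, frac=0.1, iters=20, seed=0):
    """K-means run only on a random fraction of the points; the
    resulting centroids then label every point. This cuts the O(cknd)
    clustering cost by roughly a factor of 1/frac."""
    rng = np.random.default_rng(seed)
    m = max(k, int(frac * len(X)))
    sub = X[rng.choice(len(X), m, replace=False)]       # 10% subsample
    cent = sub[rng.choice(len(sub), k, replace=False)]  # initial centroids
    for _ in range(iters):
        lbl = np.argmin(((sub[:, None] - cent) ** 2).sum(-1), axis=1)
        cent = np.array([sub[lbl == j].mean(0) if (lbl == j).any() else cent[j]
                         for j in range(k)])
    # classify ALL points with the centroids found on the subsample
    labels = np.argmin(((X[:, None] - cent) ** 2).sum(-1), axis=1)
    return labels, cent

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (500, 2)), rng.normal(5, 0.1, (500, 2))])
labels, cent = subsampled_kmeans(X, 2)
```

PCA would be applied to the feature matrix before this step, so the distance computations run in the reduced d dimensions.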
Segmentation Results
Identification of Source and Target Regions
- k-means may place non-adjacent segments in the same cluster; these must be separated into distinct regions
- Combine all texture features and identify the textured regions by thresholding (Otsu's method, MATLAB graythresh function)
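Separating non-adjacent segments of one cluster amounts to a connected-component relabeling, which might look like this (Python sketch; the slide's MATLAB code is not shown, and the function name is hypothetical):

```python
import numpy as np
from scipy.ndimage import label

def split_clusters(seg):
    """k-means groups pixels by feature similarity only, so one cluster
    can cover several disconnected image regions. Relabel each cluster's
    pixels by connected component to separate them."""
    out = np.zeros_like(seg)
    nxt = 0
    for c in np.unique(seg):
        comp, n = label(seg == c)           # connected components of this cluster
        out[seg == c] = comp[seg == c] + nxt
        nxt += n                            # keep region ids globally unique
    return out

seg = np.array([[0, 0, 1, 0, 0],            # cluster 0 appears on both sides
                [0, 0, 1, 0, 0]])
regions = split_clusters(seg)               # left and right halves now differ
```

After this step every region id corresponds to one spatially connected area, which is what the source/target matching below operates on.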
Identification of Source and Target Regions
- Algorithm:
    for each textured region
        for each adjacent non-textured region
            if (adjacent region is "similar" to textured region)
                mark regions as potential source and target
- "Similar": small difference between the average gray levels (or average RGB values) of the low-passed source and target regions
- Comparing histograms instead of average gray levels might give better results
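The "similar" test at the heart of the loop above could be sketched as follows (Python in place of MATLAB; the function name and the tolerance value are illustrative assumptions, and the slide's suggested histogram comparison would replace the mean comparison here):

```python
import numpy as np

def similar(img, mask_a, mask_b, tol=10):
    """Two regions are potential source/target partners when their
    average gray levels (of the low-passed image) differ by less
    than a tolerance."""
    return abs(img[mask_a].mean() - img[mask_b].mean()) < tol

img = np.array([[100, 102, 104],
                [101, 103, 200]])
a = np.array([[True,  True,  False], [True,  False, False]])  # mean ~101
b = np.array([[False, False, True],  [False, True,  False]])  # mean 103.5
c = np.array([[False, False, False], [False, False, True]])   # mean 200
```

Here `similar(img, a, b)` holds while `similar(img, a, c)` does not, so only the a/b pair would be marked as potential source and target.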
Results
- Image: 228 × 228, 24-bit
- Time: 17 s
- Dimensions after PCA: 4
Analysis and Future Work
- Results are quite promising for color images. Why?
  - The color cue is exploited in finding similarity
- Gray-scale results are not as good as color so far. Why?
  - An inherently more difficult problem
  - Humans use implied and visible contour cues, which were ignored in this project
- Possible improvements:
  - Include contour information while segmenting
  - Use a better segmentation method than k-means
References
[Hu, Tull, Yang and Nguyen, 2001] S. Yang, Y. H. Hu, D. L. Tull, and T. Q. Nguyen, "Maximum likelihood parameter estimation for image ringing artifact removal," IEEE Trans. Circuits and Systems for Video Technology, vol. 11, no. 8, pp. 963-973, August 2001.
[Nosratinia, 2003] A. Nosratinia, "Post-processing of JPEG-2000 images to remove compression artifacts," to appear in IEEE Signal Processing Letters.
[Hu, Sambhare, 2003] Y. H. Hu and R. A. Sambhare, "Constrained texture synthesis for image post processing," to appear in ICME 2003, Baltimore, MD.
[Malik, Perona, 1990] J. Malik and P. Perona, "Preattentive texture discrimination with early vision mechanisms," J. Opt. Soc. America A, vol. 7, no. 5, pp. 923-932, May 1990.