Digital Image Processing Lecture 3: Image Display & Enhancement March 2, 2005 Prof. Charlene Tsai

Slide 2: Spatial Resolution
- The spatial resolution of an image is the physical size of a pixel in the image.
- Basically, it is the smallest discernible detail in an image.

Slide 3: Spatial Resolution (con't)
- The greater the spatial resolution, the more pixels are used to display the image.
- How to alter the resolution in MATLAB: imresize(x,n).
- When n = 1/2, each dimension of the image is halved.

Slide 4: Spatial Resolution (con't)
- When n = 2, each dimension of the image is doubled.
- Commands that generate the images (down-sampling, then scaling back up to the original size):
  imresize(imresize(x,1/2),2);
  imresize(imresize(x,1/4),4);
  imresize(imresize(x,1/8),8);
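A rough sketch of how this loss of effective resolution can be reproduced, assuming the greyscale image 'newborn.jpg' used later in this lecture (any greyscale image will do):

  x = imread('newborn.jpg');            % assumed test image
  x2 = imresize(imresize(x, 1/4), 4);   % discard detail, then scale back to the original size
  subplot(121); imshow(x);  title('original');
  subplot(122); imshow(x2); title('effective resolution reduced by 4');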

Slide 5: Quantization
- Quantization refers to the number of greyscales used to represent the image.
- A set of n quantization levels comprises the integers 0, 1, 2, ..., n-1.
- 0 and n-1 are usually black and white respectively, with intermediate levels rendered in various shades of gray.
- Quantization levels are commonly referred to as gray levels.
- Sometimes the range of values spanned by the gray levels is called the dynamic range of an image.

Slide 6: Quantization (con't)
- n is usually an integral power of 2: n = 2^b, where b is the number of bits used for quantization.
- Typically b = 8, giving 256 possible gray levels.
- If b = 1, there are only two values, 0 and 1 (a binary image).
- Uniform quantization to 4 levels:
  y = floor(double(x)/64); uint8(64*y);
  OR
  y = grayslice(x,4); imshow(y, gray(4));
(Figure: a mapping for uniform quantization)
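The same idea generalises to any n = 2^b levels. A minimal sketch, assuming the greyscale image 'newborn.jpg' and an illustrative b = 3:

  x = imread('newborn.jpg');       % assumed test image
  b = 3;                           % illustrative: quantize to 2^3 = 8 gray levels
  step = 256 / 2^b;                % width of each quantization bin
  y = floor(double(x) / step);     % levels 0 .. 2^b - 1
  imshow(uint8(y * step));         % scale back to the 0..255 range for display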

Slide 7: Quantization Using grayslice
(Figures: the image quantized to 8, 4, and 2 greyscales)

Slide 8: Method 1: Dithering
- Without dithering, visible contours can be detected between two levels; our visual system is particularly sensitive to them.
- By adding random noise to the original image, we break up the contouring; the quantization noise itself remains evenly distributed over the entire image.
- The purpose:
  - darker areas will contain more black than white;
  - lighter areas will contain more white than black.
- Method: compare the image to a random matrix d.

Slide 9: Dithering: Halftone
- Using one standard matrix D.
- Generate a matrix d, which is as big as the image matrix, by repeating D.
- The halftone image is obtained by comparing the image with d, setting a pixel to white wherever it exceeds the corresponding entry of d (see the sketch below).
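A minimal sketch of halftoning to pure black and white with such a matrix. The 2x2 matrix below is an illustrative Bayer-style matrix scaled to the 0-255 range, not necessarily the one shown on the slide; the file name is the image used later in the lecture:

  x = double(imread('newborn.jpg'));                    % assumed test image
  D = [0 128; 192 64];                                  % illustrative 2x2 dither matrix (assumption)
  d = repmat(D, ceil(size(x,1)/2), ceil(size(x,2)/2));  % tile D over the image
  d = d(1:size(x,1), 1:size(x,2));                      % trim the tiled matrix to the image size
  imshow(x > d);                                        % white wherever the pixel exceeds the dither value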

Slide 10: Dithering: General
- Example: 4 levels.
- Step 1: first quantize by dividing the grey value x(i,j) by 85, giving q(i,j) = floor(x(i,j)/85).
- Suppose now that our replicated dither matrix d(i,j) is scaled so that its values are in the range 0-85.
- The final quantization level is p(i,j) = q(i,j) + 1 if x(i,j) - 85*q(i,j) > d(i,j), and p(i,j) = q(i,j) otherwise.

Slide 11: Dithering: General
(Figures: output with 4 greyscales and output with 8 greyscales)

  x = imread('newborn.jpg');
  x = double(x);
  % Dither to 4 grey levels
  D = [0 56; 84 28];
  r = repmat(D, 128, 128);
  q = floor(x/85);
  x4 = q + (x - 85*q > r);
  subplot(121); imshow(uint8(85*x4));
  % Dither to 8 grey levels
  clear r x4
  D = [0 24; 36 12];
  r = repmat(D, 128, 128);
  q = floor(x/37);
  x4 = q + (x - 37*q > r);
  subplot(122); imshow(uint8(37*x4));

Slide 12: Method 2: Error Diffusion
- Quantization error: the difference between the original gray value and the quantized value.
- If quantized to 0 and 255, intensity values closer to the center (128) will have higher error.
- Goal: spread this error over neighboring pixels.
- Method: sweeping from left to right and top to bottom, for each pixel:
  - perform quantization;
  - calculate the quantization error;
  - spread the error to the right and below neighbors.

Slide 13: Error Diffusion Formulae
- To spread the error, there are several formulae.
- Floyd-Steinberg, for example, distributes the error with weights 7 (right), 3 (below left), 5 (below), and 1 (below right).
- Weights must be normalized (here, divided by 16) in an actual implementation; a worked sketch follows.
(Figure: the Floyd-Steinberg mask)
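As a concrete illustration, a minimal sketch of Floyd-Steinberg error diffusion to two output levels (0 and 255), assuming the same test image; border pixels simply receive no error from outside the image:

  x = double(imread('newborn.jpg'));    % assumed test image
  [rows, cols] = size(x);
  y = x;                                % working copy that accumulates the diffused error
  for i = 1:rows
    for j = 1:cols
      old = y(i,j);
      new = 255 * (old >= 128);         % quantize the current pixel to 0 or 255
      y(i,j) = new;
      err = old - new;                  % quantization error at this pixel
      if j < cols,             y(i,j+1)   = y(i,j+1)   + err*7/16; end
      if i < rows && j > 1,    y(i+1,j-1) = y(i+1,j-1) + err*3/16; end
      if i < rows,             y(i+1,j)   = y(i+1,j)   + err*5/16; end
      if i < rows && j < cols, y(i+1,j+1) = y(i+1,j+1) + err*1/16; end
    end
  end
  imshow(uint8(y));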

Slide 14: Error Diffusion Formulae
- The MATLAB dither function implements Floyd-Steinberg error diffusion.
(Figures: the Jarvis-Judice-Ninke and Stucki masks)
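For comparison, the built-in function can be called directly (file name as before):

  x = imread('newborn.jpg');
  bw = dither(x);        % Floyd-Steinberg error diffusion to a binary image
  imshow(bw);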

Slide 15: Summary
- Quantization is necessary when the display device can handle fewer grayscales than the image contains.
- The simplest (and least pleasing) method is uniform quantization.
- The second method is dithering: comparing the image to a random matrix.
- The best method is error diffusion: spreading the quantization error over neighboring pixels.

Slide 16: Image Enhancement
- Two main categories:
  - Point operations: a pixel's gray value is changed without knowledge of its surroundings, e.g. thresholding.
  - Neighborhood processing: the new gray value of a pixel is affected by its small neighborhood, e.g. smoothing.
- We start with point operations (a minimal example follows).
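A minimal point-operation example, thresholding, assuming the same test image and an illustrative threshold of 120:

  x = imread('newborn.jpg');   % assumed test image
  t = 120;                     % illustrative threshold (assumption)
  imshow(x > t);               % each output pixel depends only on the corresponding input pixel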

Slide 17: Gray-level Slicing
- Highlighting a specific range of gray levels.
- Applications:
  - enhancing specific features, such as masses of water in satellite images;
  - enhancing flaws in X-ray images.
- Two basic approaches (a sketch of both follows):
  - a high value for all gray levels in the range of interest, and a low value outside the range;
  - a high value for all gray levels in the range of interest, with the background unchanged.
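A minimal sketch of the two approaches, assuming the same test image and an illustrative range of interest of 100-150:

  x = imread('newborn.jpg');            % assumed test image
  lo = 100; hi = 150;                   % illustrative range of interest (assumption)
  mask = (x >= lo) & (x <= hi);
  y1 = uint8(255 * mask);               % approach 1: high in range, low elsewhere
  y2 = x; y2(mask) = 255;               % approach 2: high in range, background unchanged
  subplot(121); imshow(y1);
  subplot(122); imshow(y2);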

Slide 18: Gray-level Slicing (con't)
(Figure: slicing transformations; Gonzalez, section 3.2)

Slide 19: Bit-plane Slicing (Gonzalez, section 3.2)
- Highlighting the contribution made by specific bits.
- Bit-plane 0 is the least significant, containing all the lowest-order bits.
- Bit-plane 7 is the most significant, containing all the highest-order bits.
- Higher-order bits contain visually significant data; lower-order bits contain subtle details.
- One application is image compression.
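A minimal sketch that extracts the eight bit planes of an 8-bit image (file name as before):

  x = imread('newborn.jpg');
  for b = 0:7
    plane = bitget(x, b + 1);            % bit-plane b (bitget numbers bits from 1)
    subplot(2, 4, b + 1); imshow(logical(plane));
  end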

Slide 20: Demo: A Fractal Image

Slide 21: Eight Bit Planes of the Image