Image processing is any form of signal processing for which the input is an image, such as a photograph or a frame of video, and the output can be either an image or a set of characteristics or parameters related to the image.

Image processing generally refers to the processing of a two-dimensional picture, by which we mean a digital image. A digital image is an array of real or complex numbers represented by a finite number of bits. Nowadays optical and analog image processing are also possible.

Applications include:
- Face detection
- Feature detection
- Non-photorealistic rendering
- Medical image processing
- Microscope image processing
- Morphological image processing
- Remote sensing
- Automated sieving procedures
- Fingerprint recognition

Image processing can be done using various software packages and languages, such as:
Software: MATLAB, Adobe Photoshop, IrfanView
Languages: VHDL, C/C++

The name MATLAB stands for matrix laboratory. It is a high-performance language for technical computing. It is an interactive system whose basic data element is an array that does not require dimensioning. This allows us to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time.

MATLAB features a family of add-on application-specific solutions called toolboxes. These toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include image processing, signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

The Image Processing Toolbox is a collection of functions that extend the capability of the MATLAB numeric computing environment. These functions are written as M-files, and we can further extend the capabilities of the Image Processing Toolbox by writing our own M-files. This toolbox supports a wide range of image processing operations, such as:

- Color corrections such as brightness and contrast adjustments, quantization, or color translation to a different color space.
- Image registration, the alignment of two or more images.
- Image segmentation.

- Neighborhood and block operations.
- Linear filtering and filter design.
- Transforms.
- High dynamic range imaging by combining multiple images.
- Deblurring.

The following image formats are supported by MATLAB:
- BMP
- HDF
- JPEG
- PCX
- TIFF
- XWD

Intensity image: This is the equivalent of a "grayscale image". It represents an image as a matrix where every element has a value corresponding to how bright or dark that pixel is (each element represents an intensity).
Binary image: This format also stores an image as a matrix, but each pixel can only be black or white (and nothing in between). It assigns 0 for black and 1 for white.

Indexed image: This is a practical way of representing color images. An indexed image stores an image as two matrices. The first matrix has the same size as the image, with one number for each pixel. The second matrix is called the color map, and its size may be different from the image. Each number in the first matrix is an index telling which entry of the color map to use for that pixel.

RGB image: It represents an image with three matrices whose sizes match the image dimensions. Each matrix corresponds to one of the colors red, green, or blue and specifies how much of that color a given pixel should use. A pixel whose color components are (255,255,255) is displayed as white, and one whose components are (0,0,0) is displayed as black.
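
As a minimal sketch of these four representations (assuming the standard toolbox sample image pout.tif is available, and using conversion functions described later in this presentation), the following builds one image of each type and inspects its size and class:

I = imread('pout.tif');            % intensity image: an M-by-N matrix of gray values
bw = im2bw(I, graythresh(I));      % binary image: a logical matrix of 0s and 1s
[X, map] = gray2ind(I, 64);        % indexed image: index matrix X plus a 64-row color map
RGB = ind2rgb(X, map);             % RGB image: an M-by-N-by-3 array
whos I bw X map RGB                % show the size and class of each representation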

histeq( ): This command is used to improve the contrast of an image. It spreads the intensity values over the full range of the image; this process is known as histogram equalization. Syntax histeq('image_name.format');

imadjust( ): This command is used to adjust the contrast of an image. imadjust increases the contrast of the image by saturating 1% of the data at both the low and high intensities of the image and by stretching the intensity values in between. Syntax imadjust('image_name.format')

Examples of imadjust

adapthisteq( ): This command is used to perform contrast-limited adaptive histogram equalization (CLAHE). It is an alternative to histeq: while histeq works on the entire image, adapthisteq operates on small regions of the image, called tiles. Syntax adapthisteq('image_name.format');
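
The Syntax lines above pass a file name for brevity; in practice the image is first read into an array and the result is stored in a variable. A minimal sketch comparing the three contrast commands, assuming the low-contrast sample image pout.tif:

I  = imread('pout.tif');        % low-contrast grayscale sample image (assumed available)
J1 = histeq(I);                 % histogram equalization over the whole image
J2 = imadjust(I);               % saturate 1% at each end and stretch the rest
J3 = adapthisteq(I);            % CLAHE: equalize small tiles, then blend them
figure, imshow(I),  title('Original');
figure, imshow(J1), title('histeq');
figure, imshow(J2), title('imadjust');
figure, imshow(J3), title('adapthisteq');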

imread( ): This command is used to read the image on which the operation is to be done. imread returns the image data in an array. If the file contains a grayscale image, it returns a two-dimensional (M-by-N) array, and if the file contains a color image, it returns a three-dimensional (M-by-N-by-3) array. The class of the returned array depends on the data type used by the file format. Syntax imread('image_name.format')

imview( ): This command is used to view the image on the screen. It is typically used together with the imread and imwrite commands, because it displays only the image that is currently being processed. Syntax imview('image_name.format')

Result of the commands explained above.

imwrite( ): This command is used to write the image to a file and so to change the format of a read file. By using this command we can convert files from one format to another. Syntax imwrite('image_name.format2', 'image_name.format1')
imfinfo( ): This command is used to obtain information about the image under process, such as its size, number of rows and columns, file format, and bit depth. Syntax imfinfo('image_name.format')
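
A minimal sketch tying the read, display, write, and query commands together (the file names here are hypothetical):

I = imread('myimage.jpg');        % read the image data into an array (hypothetical file)
imshow(I);                        % display it on screen
imwrite(I, 'myimage.png');        % write the data back out in a different format
info = imfinfo('myimage.png');    % query width, height, bit depth, file format, etc.
disp(info);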

imsubtract( ): This command is used to create a more uniform background by subtracting the background image from the original image. To estimate the background image, we use the following command:
background = imopen('image_name.format', strel('disk',15))
To see the estimated background image, type:
imview(background)
Now, to subtract the background image from the original image, type:
imsubtract('image_name.format', background);

Result of the commands used above.

imresize( ): This command is used to resize the processed image. To enlarge an image, specify a magnification factor greater than 1. To reduce an image, specify a magnification factor between 0 and 1. Syntax imresize('image_name.format', value);

imrotate( ): This command is used to rotate the given image. It accepts two primary arguments: the image to be rotated and the rotation angle. We specify the rotation angle in degrees. If a positive value is specified, imrotate rotates the image counterclockwise; if a negative value is specified, imrotate rotates the image clockwise. Syntax imrotate('image_name.format', angle_in_degrees);
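
A minimal sketch of resizing and rotating, assuming the standard sample image cameraman.tif:

I     = imread('cameraman.tif');   % grayscale sample image (assumed available)
small = imresize(I, 0.5);          % factor between 0 and 1: reduce to half size
large = imresize(I, 2);            % factor greater than 1: enlarge to double size
ccw   = imrotate(I, 35);           % positive angle: counterclockwise rotation
cw    = imrotate(I, -35);          % negative angle: clockwise rotation
figure, imshow(ccw), title('Rotated 35 degrees counterclockwise');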

Result of the commands used above.

imcrop( ): This command is used to crop a particular image. It accepts two primary arguments:
- The image to be cropped.
- The coordinates of a rectangle that defines the crop area.
If we call imcrop without specifying the crop rectangle, we can specify the crop rectangle interactively. In this case, the cursor changes to crosshairs when it is over the image. Position the crosshairs over a corner of the crop region, then press and hold the left mouse button. As we drag the crosshairs over the image, we specify the rectangular crop region; imcrop draws a rectangle around the area being selected. When we release the mouse button, imcrop creates a new image from the selected region.

Syntax imcrop('image_name.format'); If imcrop is called with no arguments at all, it operates on the image in the current axes. If we call imcrop and specify the crop rectangle directly, the syntax is imcrop('image_name.format', [rect]);
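
A minimal sketch of both calling styles, assuming the sample image cameraman.tif and an arbitrary crop rectangle:

I = imread('cameraman.tif');           % assumed sample image
figure, imshow(I);
cropInteractive = imcrop;              % no arguments: select the region with the mouse
rect      = [50 40 120 90];            % [xmin ymin width height] (assumed values)
cropFixed = imcrop(I, rect);           % crop a predefined rectangle
figure, imshow(cropFixed), title('Cropped region');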

imerode( ): This command is used to erode the image. The function accepts two primary arguments:
- The input image to be processed (a grayscale or binary image).
- A structuring element object, returned by the strel function, or a binary matrix defining the neighborhood of a structuring element.
Syntax imerode('binary_image.format', strel);
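
A minimal sketch of erosion, assuming the grayscale sample image coins.png and thresholding it first with im2bw (described next):

I  = imread('coins.png');              % grayscale sample image (assumed available)
bw = im2bw(I, graythresh(I));          % binarize before eroding
se = strel('disk', 3);                 % disk-shaped structuring element, radius 3
bwEroded = imerode(bw, se);            % erosion shrinks the white (foreground) regions
figure, imshow(bw),       title('Binary image');
figure, imshow(bwEroded), title('Eroded image');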

im2bw( ): This command is used to convert the given image into a binary image. It is usually preceded by another command, level = graythresh(image), which computes a suitable threshold. Syntax
level = graythresh('image_name.format')
bw = im2bw('image_name.format', level)

duller: This is not a built-in command but a simple scaling operation used to fade an image, so that another image overlapped on it can be viewed easily. Syntax duller = 0.5 * image (0.5 is the amount by which the fading is done).
combine: This operation is used to combine, i.e. overlap, the two images; it is the operation on which the overall project stands. Syntax combine = image1 + image2.

We can convert images between the formats/types described above using the following commands (a combined sketch follows these four conversions).
- Intensity format to indexed format: gray2ind( )

- Indexed format to intensity format: ind2gray( )

- RGB format to intensity format: rgb2gray( )

- RGB format to indexed format: rgb2ind( )
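
A minimal combined sketch of these conversions, assuming the standard RGB sample image peppers.png:

RGB        = imread('peppers.png');    % truecolor sample image (assumed available)
gray       = rgb2gray(RGB);            % RGB -> intensity
[Xrgb, m1] = rgb2ind(RGB, 128);        % RGB -> indexed, with a 128-color map
[Xg,   m2] = gray2ind(gray, 64);       % intensity -> indexed, with 64 gray levels
grayAgain  = ind2gray(Xg, m2);         % indexed -> intensity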

Image analysis returns information about the structure of an image. This section describes toolbox functions that we can use for the following image analysis techniques:
- Edge detection
- Boundary tracing
- Quadtree decomposition

We can use the edge function to detect edges, which are those places in an image that correspond to object boundaries. To find edges, this function looks for places in the image where the intensity changes rapidly. edge takes an intensity image I as its input and returns a binary image BW of the same size as I, with 1s where the function finds edges in I and 0s elsewhere.

Canny method: The most powerful edge-detection method that edge provides is the Canny method. The Canny method differs from the other edge-detection methods in that it uses two different thresholds (to detect strong and weak edges) and includes the weak edges in the output only if they are connected to strong edges. This method is therefore less likely than the others to be fooled by noise, and more likely to detect true weak edges. Syntax edge('image_name.format','canny');
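
A minimal sketch comparing the default Sobel detector with the Canny detector, assuming the sample image coins.png:

I       = imread('coins.png');     % grayscale sample image (assumed available)
bwSobel = edge(I, 'sobel');        % single-threshold gradient detector
bwCanny = edge(I, 'canny');        % two thresholds: keeps weak edges only if connected to strong ones
figure, imshow(bwSobel), title('Sobel edges');
figure, imshow(bwCanny), title('Canny edges');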

The toolbox includes two functions you can use to find the boundaries of objects in a binary image:
- bwtraceboundary
- bwboundaries

bwtraceboundary: The bwtraceboundary function returns the row and column coordinates of all the pixels on the border of an object in an image. We must specify the location of a border pixel on the object as the starting point for the trace. Syntax boundary = bwtraceboundary(binary_image, [row, col], 'N');

bwboundaries: The bwboundaries function returns the row and column coordinates of the border pixels of all the objects in an image. Syntax
binary_image_filled = imfill(binary_image, 'holes');
boundaries = bwboundaries(binary_image_filled);

Example of the commands explained above.

Quadtree decomposition is an analysis technique that involves subdividing an image into blocks that are more homogeneous than the image itself. This technique reveals information about the structure of the image. Syntax qtdecomp('image_name.format', threshold);
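
A minimal sketch, assuming the 512-by-512 sample image liftingbody.png (qtdecomp requires the image dimensions to be a power of two) and an arbitrary threshold of 0.27:

I = imread('liftingbody.png');     % 512x512 grayscale sample image (assumed available)
S = qtdecomp(I, 0.27);             % split any block whose intensity range exceeds the threshold
full(S(1:8, 1:8))                  % S is sparse; nonzero entries give the block sizes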

Digital images are prone to a variety of types of noise. There are several ways that noise can be introduced into an image, depending on how the image is created. For example: if the image is scanned from a photograph made on film, the film grain is a source of noise; noise can also be the result of damage to the film, or be introduced by the scanner itself. If the image is acquired directly in a digital format, the mechanism for gathering the data (such as a CCD detector) can introduce noise. Electronic transmission of image data can also introduce noise. The Image Processing Toolbox provides a number of different ways to remove or reduce noise in an image. Different methods are better for different kinds of noise. The method most often used to remove noise is filtering.

Filtering is a technique for modifying or enhancing an image. It is a neighborhood operation, in which the value of any given pixel in the output image is determined by applying some algorithm to the values of the pixels in the neighborhood of the corresponding input pixel. A pixel's neighborhood is some set of pixels, defined by their locations relative to that pixel. Filtering is further categorized as follows (a brief sketch follows this list):
- Linear filtering
- Median filtering
- Adaptive filtering
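
A minimal sketch of the three categories applied to an artificially noisy image, assuming the sample image eight.tif:

I     = imread('eight.tif');                   % grayscale sample image (assumed available)
noisy = imnoise(I, 'salt & pepper', 0.02);     % add salt-and-pepper noise for the demonstration
h     = fspecial('average', [3 3]);            % 3x3 averaging kernel
linF  = imfilter(noisy, h);                    % linear filtering
medF  = medfilt2(noisy, [3 3]);                % median filtering
adapF = wiener2(noisy, [5 5]);                 % adaptive (Wiener) filtering
figure, imshow(medF), title('Median-filtered image');

Median filtering typically handles salt-and-pepper noise best, while the adaptive Wiener filter is better suited to Gaussian-like noise.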

An image histogram is a chart that shows the distribution of intensities in an indexed or intensity image. The image histogram function imhist creates this plot by making n equally spaced bins, each representing a range of data values; it then counts the number of pixels within each range. Syntax imhist('image_name.format');
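
A minimal sketch, assuming the sample image pout.tif and showing the optional bin count n:

I = imread('pout.tif');     % grayscale sample image (assumed available)
figure, imhist(I, 64);      % histogram with n = 64 equally spaced bins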

Histogram of a given image

FINGERPRINT RECOGNITION or FINGERPRINT AUTHENTICATION refers to the automated method of verifying a match between two human fingerprints. Fingerprints are one of many forms of biometrics used to identify an individual and verify their identity. Fingerprint recognition touches on two major classes of algorithms: one based upon PATTERNS and the other based upon MINUTIAE.

The basic patterns of fingerprint ridges include:
- Arch pattern: An arch is a pattern where the ridges enter from one side of the finger, rise in the center forming an arc, and then exit the other side of the finger, as shown in the following figure.

- Loop pattern: A loop is a pattern where the ridges enter from one side of the finger, form a curve, and tend to exit from the same side they entered, as shown in the following figure.

- Whorl pattern: A whorl is completely different from the others; its ridges form circles around a central point on the finger, as shown in the following figure.

The major minutia features of fingerprint ridges include:
- Ridge ending: The ridge ending is the point at which a ridge terminates; this is best explained with the help of the image shown below.

- Bifurcations: These are the points at which a single ridge splits into two ridges, as shown in the following image.

- Short ridges (or dots): These are ridges which are significantly shorter than the average ridge length on the fingerprint, as shown in the following image.

There are various methods of fingerprint recognition using MATLAB, which include:
- Overlapping or combining.
- Comparing slopes of ridges.
- Other methods.

Overlapping or combining method: This is one of the oldest techniques of fingerprint matching. In it, the images of the known and unknown fingerprints are overlapped and checked for similarities. For recognition, the positions of the pixels of the combined image are also matched against the other images for a better result.

Basically, to overlap the images, they are first converted into binary images and then overlapped. To improve the results, some other operations such as normalization, thinning, etc. are also performed according to the requirements.

Program
i = imread('fp1.bmp');          % read the first fingerprint image
imview(i);
level = graythresh(i);          % compute a global threshold
bw = im2bw(i, level);           % convert to a binary image
imview(bw);
imview(~bw);                    % view the inverted binary image
imshow(bw, [1 0 0 ; 0 0 1]);    % display with a red/blue two-color map

Result of the commands used above.

i2 = imread('fp2.bmp');         % read the second fingerprint image
imview(i2);
level = graythresh(i2);         % threshold for the second image
bw2 = im2bw(i2, level);
imview(bw2);
imview(~bw2);
imshow(bw2, [1 0 0 ; 0 0 1]);

Result of the commands used above.

duller = bw2 * 0.5;             % fade the second fingerprint
imview(duller);
combine = bw + duller;          % overlap the two fingerprints
imview(combine);

From the above program, we can tell whether two fingerprints are similar or not. It is clear from the output that the two fingerprints do not overlap each other completely; hence these fingerprints belong to two different persons.

Advantages
- It is the simplest method for fingerprint authentication.
- It is a less time-consuming method.
- It is comparatively easy to implement.
- It is an interactive method for recognizing fingerprints.
Disadvantages
- It does not give an accurate result every time, because it does not take the minutia features into account.
- We can only overlap images having the same dimensions.

This method is mainly used nowadays for fingerprint authentication. In this method we compare the slopes of the ridges in two fingerprint images. It is mainly based upon matching minutiae.

Program
Step 1: Load Image
RGB = imread('fingerprint.bmp');
imshow(RGB);
imview(RGB);
[Figure: original image]

Step 2: Extract The Region Of Interest
start_row = 73;
start_col = 105;
cropRGB = RGB(start_row:134, start_col:233, :);
figure, imshow(cropRGB);
imview(RGB);
offsetX = start_col-1;
offsetY = start_row-1;
[Figure: cropped image]

Step 3: Threshold The Image
Convert the image to black and white for subsequent extraction of the edge coordinates using the bwtraceboundary routine.
I = rgb2gray(cropRGB);
threshold = graythresh(I);
BW = im2bw(I, threshold);
BW = ~BW;
figure, imshow(BW);
i2 = edge(BW, 'canny');
figure, imshow(i2);
imview(i2);
[Figures: binary image and Canny edge image]

Step 4: Find Initial Point On Each Boundary
The bwtraceboundary routine requires that you specify a single point on a boundary. This point is used as the starting location for the boundary tracing process.
dim = size(i2);
col1 = 40;
row1 = min(find(i2(:,col1)));
row2 = 30;
col2 = min(find(i2(row2,:)));

Step 5: Trace The Boundaries
boundary1 = bwtraceboundary(BW, [row1, col1], 'N', 8, 70);
boundary2 = bwtraceboundary(i2, [row2, col2], 'E', 8, 90, 'counter');
figure, imshow(RGB);
imview(RGB);
hold on;    % keep the displayed image so the traced boundaries can be drawn on top of it
plot(offsetX+boundary1(:,2), offsetY+boundary1(:,1), 'g', 'LineWidth', 2);
plot(offsetX+boundary2(:,2), offsetY+boundary2(:,1), 'g', 'LineWidth', 2);
[Figure: marked image]

Step 6: Fit Lines To The Boundaries
ab1 = polyfit(boundary1(:,2), boundary1(:,1), 1);
ab2 = polyfit(boundary2(:,2), boundary2(:,1), 1);

Step 7: Find The Angle Of Intersection
Use the dot product to find the angle.
vect1 = [1 ab1(1)];
vect2 = [1 ab2(1)];
dp = dot(vect1, vect2);
length1 = sqrt(sum(vect1.^2));
length2 = sqrt(sum(vect2.^2));
angle = 180 - acos(dp/(length1*length2))*180/pi

Result of all the commands used in the program.

The angle between the marked ridges of the image is displayed in the command window.

Advantages
- This method is more accurate than the overlapping method because it is based upon minutiae.
- It is an interactive method for recognizing fingerprints.
Disadvantages
- It is more time consuming than the overlapping method.
- The program is more complex.