T490 (IP): Tutorial 2 Chapter 2: Digital Image Fundamentals

2.4 Image Sampling and Quantization
Consider an image as a 2-D function f(x, y). Such an image may be continuous in its coordinates x and y and in its amplitude. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.
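
A minimal NumPy sketch of the two steps, assuming a synthetic continuous function in place of real sensed data (illustrative only; nothing here comes from the slides):

```python
import numpy as np

# A continuous scene modelled by an analytic function (an assumption made
# purely for illustration; in practice a sensor supplies f).
def f(x, y):
    return 0.5 * (1.0 + np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y))

# Sampling: evaluate f on an M x N grid of discrete coordinates.
M, N = 64, 64
xs = np.linspace(0.0, 1.0, M, endpoint=False)
ys = np.linspace(0.0, 1.0, N, endpoint=False)
samples = f(xs[:, None], ys[None, :])        # amplitudes are still continuous here

# Quantization: map each amplitude (in [0, 1]) to one of L = 2**k discrete levels.
k = 8
L = 2 ** k
digital_image = np.round(samples * (L - 1)).astype(np.uint8)

print(digital_image.shape, digital_image.min(), digital_image.max())
```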

2.4.2 Representing Digital Images
A digital image f(x, y) can be viewed as a 2-D array with M rows and N columns, where x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1. x and y are called spatial coordinates, and the values of f at (x, y) are called intensities. The symbol L is usually used to denote the number of discrete intensity levels. Images can be monochromatic (gray-scale) or chromatic (colour); Chapter 6 deals with colour image processing.

Matrix form
An M x N digital image can be represented in a compact matrix form (reproduced below). Each element of this matrix array is called an image element, picture element, pixel, or pel.
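
The matrix itself did not survive the transcript; the standard form it refers to is:

\[
f(x,y) =
\begin{bmatrix}
f(0,0) & f(0,1) & \cdots & f(0,N-1)\\
f(1,0) & f(1,1) & \cdots & f(1,N-1)\\
\vdots & \vdots &        & \vdots\\
f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1)
\end{bmatrix}.
\]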

Size and Storage
The number of gray levels is typically an integer power of 2, L = 2^k. The number of bits, b, required to store a digitized M x N image is b = M x N x k. An image with 2^k possible gray-level values is commonly called a "k-bit image"; for example, an image with 256 possible gray-level values is called an 8-bit image.
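
As a quick worked example (not from the slide): a 1024 x 1024 image with k = 8 requires b = 1024 x 1024 x 8 = 8,388,608 bits, i.e. 1,048,576 bytes (1 MB).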

2.4.3 Spatial and Gray-Level Resolution
Intuitively, spatial resolution (SR) is a measure of the smallest discernible detail in an image. Quantitatively, it is expressed as line pairs per unit distance or dots (pixels) per unit distance (dpi). Examples: newspapers are printed at a resolution of 75 dpi, magazines at 133 dpi, and glossy brochures at 175 dpi.

Intensity Resolution
Intensity resolution (IR) refers to the smallest discernible change in intensity level. The number of intensity levels is usually an integer power of 2; the most common choice is 8 bits (256 levels).

Example 2.2
Fig. 2.20 shows the effects of reducing spatial resolution in an image. The images are shown at 1250, 300, 150, and 72 dpi. The lower-resolution images are smaller than the original: the original is 3692x2812 pixels, while the 72 dpi image is an array of only 213x162 pixels.
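
As a quick check of those numbers (not stated on the slide): scaling the pixel counts by the ratio of printing resolutions gives 3692 x 72/1250 ≈ 213 and 2812 x 72/1250 ≈ 162, which matches the reported 72 dpi array size.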

Example 2.3
Effect of varying the number of intensity levels: here we keep the number of samples constant and reduce the number of intensity levels from 256 to 2. Fig. 2.21 shows this effect.
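
A small NumPy sketch of this kind of requantization, using a synthetic gradient in place of the book's figure (illustrative only):

```python
import numpy as np

def reduce_intensity_levels(img_8bit, k):
    """Requantize an 8-bit gray-scale image to L = 2**k levels (1 <= k <= 8),
    keeping the result in the displayable 0-255 range."""
    step = 256 // (2 ** k)            # width of each quantization bin
    return (img_8bit // step) * step  # map every pixel to the base value of its bin

# Synthetic horizontal gradient standing in for a real picture.
gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
two_level = reduce_intensity_levels(gradient, 1)
print(np.unique(two_level))           # [  0 128]: only two intensity levels remain
```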

2.4.4 Image Interpolation
Interpolation is used for zooming, shrinking, rotating, and geometric corrections. The first two are image-resizing tasks and come under image resampling methods. In this chapter we study interpolation for image resizing (zooming and shrinking). Interpolation is the process of using known data to estimate values at unknown locations.

Bilinear Interpolation
Here we use the 4 nearest neighbours to estimate the intensity at a given location. Let (x, y) denote the coordinates of the location to which we want to assign an intensity value, and let v(x, y) denote that value. Then

v(x, y) = a x + b y + c x y + d,

where the four coefficients a, b, c, d are determined from the four equations in four unknowns written using the 4 nearest neighbours of (x, y).
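
A minimal NumPy sketch of bilinear interpolation at a single non-integer location (illustrative only; the weighted-average form used here is algebraically equivalent to solving the four equations for a, b, c, d):

```python
import numpy as np

def bilinear(img, x, y):
    """Estimate the intensity at non-integer (x, y) from the 4 surrounding pixels.
    x indexes rows and y indexes columns, matching the f(x, y) convention above."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[0] - 1)
    y1 = min(y0 + 1, img.shape[1] - 1)
    dx, dy = x - x0, y - y0
    # Weighted average of the 4 nearest neighbours.
    return ((1 - dx) * (1 - dy) * img[x0, y0] +
            dx       * (1 - dy) * img[x1, y0] +
            (1 - dx) * dy       * img[x0, y1] +
            dx       * dy       * img[x1, y1])

img = np.array([[10, 20],
                [30, 40]], dtype=float)
print(bilinear(img, 0.5, 0.5))   # 25.0, the average of the 4 corner pixels
```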

Bicubic Interpolation
This involves the 16 nearest neighbours of a point:

v(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} a_{ij} x^i y^j.

Here the 16 coefficients a_{ij} are determined from the 16 equations in 16 unknowns written using the 16 nearest neighbours of (x, y).
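
For resizing whole images, library routines offer both schemes. A short sketch using SciPy's ndimage.zoom as a stand-in for Scilab's resize functions (an assumption for illustration only; order=1 gives (bi)linear interpolation, while order=3 gives cubic-spline interpolation, a close relative of bicubic interpolation):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(100, 100)                     # stand-in for a real gray-scale image

zoom_bilinear = ndimage.zoom(img, 2.0, order=1)    # linear interpolation
zoom_bicubic = ndimage.zoom(img, 2.0, order=3)     # cubic-spline interpolation
print(zoom_bilinear.shape, zoom_bicubic.shape)     # both (200, 200)
```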

Example 2.4

2.5 Some Basic Relationships Between Pixels
2.5.1 Neighbors of a Pixel
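
The body of this slide did not carry over. As a reminder of the standard definitions, a pixel p at (x, y) has four horizontal and vertical neighbours, denoted N4(p); four diagonal neighbours, ND(p); and N8(p) = N4(p) ∪ ND(p). Some of these coordinates fall outside the image when p lies on the border. A minimal sketch, not taken from the slides:

```python
def n4(x, y):
    """4-neighbours of p = (x, y): the horizontal and vertical neighbours."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbours of p = (x, y)."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbours: the union of N4(p) and ND(p)."""
    return n4(x, y) | nd(x, y)

print(sorted(n8(1, 1)))   # the 8 pixels surrounding (1, 1)
```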

2.5.2 Adjacency, Connectivity, Regions, and Boundaries
Connectivity between pixels is a fundamental concept that simplifies the definition of numerous digital image concepts, such as regions and boundaries. To establish whether two pixels are connected, we must determine whether they are neighbours and whether their gray levels satisfy a specified criterion of similarity (say, whether their gray levels are equal). For instance, in a binary image with values 0 and 1, two pixels may be 4-neighbours, but they are said to be connected only if they have the same value.

Let V be the set of intensity values used to define adjacency. In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a gray-scale image the idea is the same, but set V typically contains more elements; for example, for the adjacency of pixels whose gray levels can range from 0 to 255, V could be any subset of these 256 values. We consider three types of adjacency: 4-, 8-, and m-adjacency (recapped in the sketch below).
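
The slide illustrating the three adjacency types did not survive. The standard definitions: two pixels p and q whose values are in V are 4-adjacent if q is in N4(p), 8-adjacent if q is in N8(p), and m-adjacent (mixed adjacency) if either q is in N4(p), or q is in ND(p) and N4(p) ∩ N4(q) contains no pixel whose value is in V. A self-contained sketch (illustrative only; the image and set V below are made up):

```python
def n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def in_V(img, coords, V):
    """Coordinates from coords that lie inside img and whose values are in V."""
    h, w = len(img), len(img[0])
    return {(x, y) for (x, y) in coords if 0 <= x < h and 0 <= y < w and img[x][y] in V}

def both_in_V(img, p, q, V):
    return img[p[0]][p[1]] in V and img[q[0]][q[1]] in V

def adjacent_4(img, p, q, V):
    return both_in_V(img, p, q, V) and q in n4(*p)

def adjacent_8(img, p, q, V):
    return both_in_V(img, p, q, V) and q in (n4(*p) | nd(*p))

def adjacent_m(img, p, q, V):
    """Mixed adjacency: removes the ambiguous double paths allowed by 8-adjacency."""
    if adjacent_4(img, p, q, V):
        return True
    if both_in_V(img, p, q, V) and q in nd(*p):
        return not (in_V(img, n4(*p), V) & in_V(img, n4(*q), V))
    return False

binary = [[0, 1, 1],
          [0, 1, 0],
          [0, 0, 1]]
print(adjacent_8(binary, (1, 1), (2, 2), {1}))   # True
print(adjacent_m(binary, (1, 1), (2, 2), {1}))   # True: no shared 4-neighbour with value in V
```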

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x_0, y_0), (x_1, y_1), ..., (x_n, y_n), where (x_0, y_0) = (x, y), (x_n, y_n) = (s, t), and pixels (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for 1 ≤ i ≤ n. In this case, n is the length of the path.

We can define 4-, 8-, or m-paths depending on the type of adjacency specified. For example, the paths shown in Fig. 2.25(b) between the top right and bottom right points are 8-paths, and the path in Fig. 2.25(c) is an m-path.

Connected set
Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S. For any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S. If S has only one connected component, then S is called a connected set.
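
A small sketch that makes the definition concrete by growing the connected component of a pixel under 4-adjacency (illustrative only; the pixel set S below is made up):

```python
from collections import deque

def connected_component(S, p):
    """Connected component of S containing pixel p, using 4-adjacency.
    S is a set of (x, y) coordinates; a path inside S exists between p and
    every pixel returned."""
    if p not in S:
        return set()
    component, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q in S and q not in component:
                component.add(q)
                frontier.append(q)
    return component

S = {(0, 0), (0, 1), (1, 1), (3, 3)}        # two components under 4-adjacency
print(connected_component(S, (0, 0)))        # the component {(0, 0), (0, 1), (1, 1)}
print(len(connected_component(S, (0, 0))) == len(S))   # False: S is not a connected set
```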

Region
Let R be a subset of pixels in an image. We call R a region of the image if R is a connected set. Two regions are said to be adjacent if their union forms a connected set; regions that are not adjacent are said to be disjoint. We consider 4- and 8-adjacency when referring to regions.
Boundary
The boundary (also called the border or contour) of a region R is the set of points that are adjacent to points in the complement of R.
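
A small sketch of the (inner) boundary of a region under this definition, using 4-adjacency by default (illustrative only; the region below is a made-up filled square):

```python
def inner_boundary(R, use_8_adjacency=False):
    """Pixels of region R that are adjacent to at least one pixel of R's complement."""
    def neighbours(x, y):
        n = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        if use_8_adjacency:
            n += [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
        return n

    return {p for p in R if any(q not in R for q in neighbours(*p))}

R = {(x, y) for x in range(4) for y in range(4)}   # a filled 4x4 square region
print(len(inner_boundary(R)))                       # 12: every pixel except the 2x2 interior
```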

Edges are formed from pixels with derivative values that exceed a preset threshold, and there is a key difference between edges and boundaries. The boundary of a finite region forms a closed path and is thus a "global" concept, whereas an edge is a "local" concept based on a measure of gray-level discontinuity at a point. It is possible to link edge points into edge segments, and sometimes these segments are linked in such a way that they correspond to boundaries, but this is not always the case. The one exception, in which edges and boundaries do correspond, is in binary images. For the time being, it is helpful to think of edges as intensity discontinuities and boundaries as closed paths.

2.5.3 Distance Measures
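
The slides carrying the actual definitions did not survive the transcript. The standard measures for p = (x, y) and q = (s, t) are the Euclidean distance De(p, q) = \sqrt{(x - s)^2 + (y - t)^2}, the city-block distance D4(p, q) = |x - s| + |y - t|, and the chessboard distance D8(p, q) = max(|x - s|, |y - t|). A minimal sketch:

```python
import math

def d_euclidean(p, q):
    """Euclidean distance De(p, q)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """City-block distance: the pixels with D4 <= r from p form a diamond."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance: the pixels with D8 <= r from p form a square."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (2, 3), (5, 7)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 5.0 7 4
```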

Course Software Package
Scilab (open source and free): http://www.scilab.org/
Note: MATLAB is used extensively for image processing, but it is expensive and has licensing requirements.
Scilab toolboxes (http://atoms.scilab.org/): IPD, SIVP
Other libraries: OpenCV

Installation Steps
1. Install Scilab 5.2.2 with the ATLAS LIB option selected during the installation process.
2. Install OpenCV.
3. After the first two steps, load SIVP first and then IPD (put them in the CONTRIB directory). Starting Scilab will then show SIVP and IPD at the top of the console window under Toolboxes.
4. Click on SIVP to install it.
5. From the ATOMS manager in the Scilab console window, install the IPD toolbox.

Scilab Tutorials
Intro to Scilab tutorial from www.scilab.org
Some IP material: http://www.comp.dit.ie/bmacnamee/materials/dip/labs/IntroToScilab.pdf

Practice Questions (ungraded)
I/O and display: Read the images peppers.png and baboon.png and display them as colour images.
Conversion: Convert them to gray-scale images and display them.
Processing: Reduce their sizes to 200x200 using the bilinear and bicubic interpolation methods available in Scilab.
Hints/Help: imread, imwrite, ReadImage, imshow, ShowImage, ShowColorImage, imresize

Peppers.png

Baboon.png

200x200, bilinear

200x200, bicubic