Computer Vision Lecture 3: Digital Images


A simple two-stage model of computer vision:

Bitmap image → Image processing → Scene analysis → Scene description

The image processing stage prepares the image for scene analysis; the scene analysis stage builds an iconic model of the world and sends feedback (tuning) back to the image processing stage.

September 9, 2014

The image processing stage prepares the input image for the subsequent scene analysis. Usually, image processing results in one or more new images that contain specific information on relevant features of the input image. The information in the output images is arranged in the same way as in the input image. For example, in the upper left corner of the output images we find information about the upper left corner of the input image.

The scene analysis stage interprets the results from the image processing stage. Its output depends entirely on the problem that the computer vision system is supposed to solve. For example, it could be the number of bacteria in a microscopic image, or the identity of a person whose retinal scan was input to the system. In the following lectures we will focus on the lower-level image processing techniques; later we will discuss a variety of scene analysis methods and algorithms.

How can we turn a visual scene into something that can be algorithmically processed? Usually, we map the visual scene onto a two-dimensional array of intensities. In the first step, we have to project the scene onto a plane. This projection is most easily understood by imagining a transparent plane between the observer (camera) and the visual scene. The intensities from the scene are projected onto the plane by moving them along a straight line from their initial position to the observer. The result is a two-dimensional projection of the three-dimensional scene as seen by the observer.

Digitizing Visual Scenes

With a focal distance f between the observer and the image plane, any 3D point (x, y, z) is mapped onto a 2D point (x', y') by the following equations:

x' = x · f / z
y' = y · f / z
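As a minimal sketch, this perspective projection can be written as a small function (the pinhole convention assumed here places the camera at the origin looking along the positive z axis):

```python
def project(x, y, z, f=1.0):
    """Perspective projection of a 3D scene point (x, y, z) onto the
    image plane at focal distance f; points must lie in front of the
    camera (z > 0)."""
    if z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (x * f / z, y * f / z)
```

For example, the point (2, 4, 2) with f = 1 projects to (1.0, 2.0), while the same point twice as far away projects to (0.5, 1.0); this halving of projected coordinates with distance is why farther objects appear smaller.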

Actual Camera Geometry

Color Imaging via Bayer Filter

Magnetic Resonance Imaging

Digitizing Visual Scenes

In this course, we will mostly restrict our concept of images to grayscale. Grayscale values usually have a resolution of 8 bits (256 different values), in medical applications sometimes 12 bits (4096 values), or in binary images only 1 bit (2 values). We simply choose the available gray level whose intensity is closest to the value we want to convert.
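This nearest-level quantization can be sketched as follows (a hypothetical helper, with intensities assumed to be normalized to the range [0.0, 1.0]):

```python
def quantize(intensity, bits=8):
    """Map a continuous intensity in [0.0, 1.0] to the nearest of the
    2**bits available gray levels (e.g. 256 levels for 8 bits)."""
    levels = (1 << bits) - 1          # highest representable level
    return round(intensity * levels)  # nearest available gray level
```

With bits=1 this reduces to thresholding into a binary image; with bits=12 it yields the 4096 levels used in some medical applications.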

Digitizing Visual Scenes

With regard to spatial resolution, we will map the intensity in our image onto a two-dimensional finite array, for example a 3×4 array of pixels indexed from [0, 0] in the upper left corner to [2, 3] in the lower right corner:

[0, 0] [0, 1] [0, 2] [0, 3]
[1, 0] [1, 1] [1, 2] [1, 3]
[2, 0] [2, 1] [2, 2] [2, 3]

Digitizing Visual Scenes

So the result of our digitization is a two-dimensional array of discrete intensity values. Notice that in such a digitized image F[i, j], the first coordinate i indicates the row of a pixel, starting with 0, and the second coordinate j indicates the column of a pixel, starting with 0. In an m×n pixel array, the relationship between image coordinates (with the origin in the image's center) and pixel coordinates is given by the equations

x' = j − (n − 1)/2
y' = (m − 1)/2 − i
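A short sketch of this conversion (the sign convention chosen here, x' growing rightward with the column index and y' growing upward as the row index decreases, is an assumption; other conventions differ only in signs):

```python
def pixel_to_image(i, j, m, n):
    """Convert pixel coordinates [i, j] (row, column, origin in the
    top-left corner) of an m-by-n array to image coordinates (x', y')
    with the origin at the image center."""
    x = j - (n - 1) / 2   # columns grow rightward, like x'
    y = (m - 1) / 2 - i   # rows grow downward, opposite to y'
    return (x, y)
```

For a 3×4 array, pixel [0, 0] maps to (−1.5, 1.0) and pixel [2, 3] to (1.5, −1.0); the image origin lies between pixels because the array dimensions are even in one direction.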

Image Size and Resolution

Intensity Resolution

Dithering

Floyd-Steinberg Dithering

The best dithering results are achieved by compensating for the thresholding error at a pixel by adjusting the intensity of neighboring pixels. Floyd-Steinberg dithering distributes the error at the current pixel (marked x) to its not-yet-processed neighbors with the following weights:

        x     7/16
3/16  5/16  1/16
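A minimal sketch of this error diffusion on a grayscale image stored as a list of rows (the 0/255 output levels and the threshold of 128 are choices made here, not taken from the lecture):

```python
def floyd_steinberg(image, threshold=128):
    """Binarize a grayscale image (list of rows of 0..255 values),
    diffusing each pixel's quantization error to its unprocessed
    neighbors with the Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16."""
    img = [[float(v) for v in row] for row in image]
    m, n = len(img), len(img[0])
    for i in range(m):
        for j in range(n):
            old = img[i][j]
            new = 255.0 if old >= threshold else 0.0
            img[i][j] = new
            err = old - new
            if j + 1 < n:                     # right neighbor
                img[i][j + 1] += err * 7 / 16
            if i + 1 < m:
                if j > 0:                     # below left
                    img[i + 1][j - 1] += err * 3 / 16
                img[i + 1][j] += err * 5 / 16  # below
                if j + 1 < n:                  # below right
                    img[i + 1][j + 1] += err * 1 / 16
    return [[int(v) for v in row] for row in img]
```

Pixels are processed left to right, top to bottom, so the error is only ever pushed to pixels that have not been visited yet.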

Levels of Computation

As we discussed before, computer vision systems usually operate at various levels of computation. In the following, we will discuss the different levels of computation as they occur during the image processing and scene analysis stages. We will formalize this concept by means of an operation O that receives a set of pixels and returns a single intensity value that can be used to determine the value of a pixel in the output image. We will look at operations at the point level, local level, global level, and object level, mapping an input image FA[i, j] to an output image FB[i, j].

Point Level

Operation type: FB[i, j] = Opoint{FA[i, j]}

This means that the intensity of each pixel in the output image depends only on the intensity of the corresponding pixel in the input image. Examples of this kind of operation include inversion (creating a negative image), thresholding, darkening, and increasing brightness.
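Two of these examples, sketched as point-level operations on an image stored as a list of rows of 8-bit intensities:

```python
def invert(image):
    """Point level: each output pixel is 255 minus the corresponding
    input pixel, producing a negative image."""
    return [[255 - v for v in row] for row in image]

def threshold(image, t=128):
    """Point level: each output pixel is 255 or 0 depending only on
    whether the corresponding input pixel reaches the threshold t."""
    return [[255 if v >= t else 0 for v in row] for row in image]
```

In both cases the output value at [i, j] is computed from the input value at [i, j] alone, which is what makes these operations point-level.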

Local Level

Operation type: FB[i, j] = Olocal{FA[k, l]; [k, l] ∈ N[i, j]}

where N[i, j] is a neighborhood around the position [i, j]. For example, it could be defined as N[i, j] = {[u, v] | |i − u| < 3 ∧ |j − v| < 3}. Then the neighborhood would include all pixels within a 5×5 square centered at [i, j]. So the intensity of each pixel in the output image depends on the intensity of pixels in the neighborhood of the corresponding position in the input image. Examples: blurring, sharpening.
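Blurring, one of the named examples, can be sketched as a local-level averaging filter (the 3×3 neighborhood and the integer averaging are choices made here, not taken from the lecture):

```python
def box_blur(image, radius=1):
    """Local level: each output pixel is the mean of the input pixels
    in a (2*radius+1)-square neighborhood, clipped at the border."""
    m, n = len(image), len(image[0])
    out = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            vals = [image[u][v]
                    for u in range(max(0, i - radius), min(m, i + radius + 1))
                    for v in range(max(0, j - radius), min(n, j + radius + 1))]
            out[i][j] = sum(vals) // len(vals)
    return out
```

At the image border the neighborhood is clipped, so the average there is taken over fewer pixels.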

Global Level

Operation type: FB[i, j] = Oglobal{FA[k, l]; 0 ≤ k < m, 0 ≤ l < n} for an m×n image.

So the intensity of each pixel in the output image may depend on the intensity of any pixel in the input image. Examples: histogram modification, rotating the image.
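Contrast stretching, one form of histogram modification, is a small global-level example: every output pixel depends on the global minimum and maximum intensities of the whole input image (the linear mapping to the full 0..255 range is an assumption made for this sketch):

```python
def stretch_contrast(image):
    """Global level: linearly map the image's intensity range
    [min, max] onto the full range [0, 255]."""
    flat = [v for row in image for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                      # flat image: nothing to stretch
        return [row[:] for row in image]
    return [[(v - lo) * 255 // (hi - lo) for v in row] for row in image]
```

No purely point-level or local-level operation could compute this, because the minimum and maximum may occur anywhere in the image.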

Object Level

The goal of computer vision algorithms is usually to determine properties of an image with regard to specific objects shown in it. To do this, operations must be performed at the object level, that is, they must include all pixels that belong to a particular object. Problem: we must use all points that belong to an object to determine its properties, but we need some of these properties to determine which pixels belong to the object. While this seems effortless in biological systems, we will later see that complex algorithms are required to solve this problem in an artificial system.