Artificial Intelligence Chapter 6 Robot Vision Biointelligence Lab School of Computer Sci. & Eng. Seoul National University.

Similar presentations
Lecture 2: Convolution and edge detection. CS4670: Computer Vision, Noah Snavely. From Sandlot Science.

Spatial Filtering (Chapter 3)
Topic 6 - Image Filtering - I DIGITAL IMAGE PROCESSING Course 3624 Department of Physics and Astronomy Professor Bob Warwick.
Artificial Intelligence Chapter 5 State Machines.
EDGE DETECTION.
Ellen L. Walker. Edges: Humans easily understand “line drawings” as pictures.
Image Filtering CS485/685 Computer Vision Prof. George Bebis.
MSU CSE 803 Stockman Fall 2009 Vectors [and more on masks] Vector space theory applies directly to several image processing/representation problems.
MSU CSE 803 Stockman Linear Operations Using Masks Masks are patterns used to define the weights used in averaging the neighbors of a pixel to compute.
Edge Detection Phil Mlsna, Ph.D. Dept. of Electrical Engineering
Texture Reading: Chapter 9 (skip 9.4) Key issue: How do we represent texture? Topics: –Texture segmentation –Texture-based matching –Texture synthesis.
Lecture 2: Image filtering
The Segmentation Problem
MSU CSE 803 Linear Operations Using Masks Masks are patterns used to define the weights used in averaging the neighbors of a pixel to compute some result.
E.G.M. Petrakis, Binary Image Processing: Binary Image Analysis. Segmentation produces homogeneous regions – each region has uniform gray-level – each region.
September 10, 2012 – Introduction to Artificial Intelligence Lecture 2: Perception & Action. Boundary-following Robot Rules 1, 2, 3, 4, 5.
Recap Low Level Vision –Input: pixel values from the imaging device –Data structure: 2D array, homogeneous –Processing: 2D neighborhood operations Histogram.
CS559: Computer Graphics Lecture 3: Digital Image Representation Li Zhang Spring 2008.
Discrete Images (Chapter 7) Fourier Transform on discrete and bounded domains. Given an image: 1.Zero boundary condition 2.Periodic boundary condition.
Introduction to Image Processing Grass Sky Tree ? ? Sharpening Spatial Filters.
Digital Image Processing CCS331 Relationships of Pixel 1.
Lecture 03 Area Based Image Processing Lecture 03 Area Based Image Processing Mata kuliah: T Computer Vision Tahun: 2010.
Image Processing Jitendra Malik. Different kinds of images Radiance images, where a pixel value corresponds to the radiance from some point in the scene.
Chapter 10, Part I.  Segmentation subdivides an image into its constituent regions or objects.  Image segmentation methods are generally based on two.
Artificial Intelligence Chapter 3 Neural Networks Artificial Intelligence Chapter 3 Neural Networks Biointelligence Lab School of Computer Sci. & Eng.
COMP322/S2000/L171 Robot Vision System Major Phases in Robot Vision Systems: A. Data (image) acquisition –Illumination, i.e. lighting consideration –Lenses,
October 7, 2014 – Computer Vision Lecture 9: Edge Detection II. Laplacian Filters. Idea: Smooth the image, compute the second derivative
Digital Image Processing Lecture 16: Segmentation: Detection of Discontinuities Prof. Charlene Tsai.
Edge Detection and Geometric Primitive Extraction Jinxiang Chai.
(c) 2000, 2001 SNU CSE Biointelligence Lab. Finding Regions: another method for processing an image – to find “regions”. Finding regions – finding outlines.
Mestrado em Ciência de Computadores Mestrado Integrado em Engenharia de Redes e Sistemas Informáticos VC 15/16 – TP7 Spatial Filters Miguel Tavares Coimbra.
APECE-505 Intelligent System Engineering Basics of Digital Image Processing! Md. Atiqur Rahman Ahad Reference books: – Digital Image Processing, Gonzalez.
Reference books: – Digital Image Processing, Gonzalez & Woods. - Digital Image Processing, M. Joshi - Computer Vision – a modern approach, Forsyth & Ponce.
CSE 6367 Computer Vision. Image Operations and Filtering. “You cannot teach a man anything, you can only help him find it within himself.” ― Galileo Galilei.
Digital Image Processing Lecture 16: Segmentation: Detection of Discontinuities May 2, 2005 Prof. Charlene Tsai.
Machine Vision Edge Detection Techniques ENT 273 Lecture 6 Hema C.R.
Edge Segmentation in Computer Images CSE350/ Sep 03.
Chapter 5: Image Filtering. Montri Karnjanadecha, ac.th/~montri. Image Processing.
Instructor: Mircea Nicolescu Lecture 5 CS 485 / 685 Computer Vision.
Digital Image Processing
CSE 185 Introduction to Computer Vision Image Filtering: Spatial Domain.
Digital Image Processing CSC331
September 26, 2013 – Computer Vision Lecture 8: Edge Detection II. Gradient: In the one-dimensional case, a step edge corresponds to a local peak in the first.
Sliding Window Filters Longin Jan Latecki October 9, 2002.
Lecture 1: Images and image filtering CS4670/5670: Intro to Computer Vision Noah Snavely Hybrid Images, Oliva et al.,
Artificial Intelligence Chapter 7 Agents That Plan Biointelligence Lab School of Computer Sci. & Eng. Seoul National University.
Correspondence and Stereopsis. Introduction Disparity – Informally: difference between two pictures – Allows us to gain a strong sense of depth Stereopsis.
What is Digital Image Processing? The term image refers to a two-dimensional light intensity function f(x,y), where x and y denote spatial (plane)
Image Enhancement in the Spatial Domain.
April 21, 2016 – Introduction to Artificial Intelligence Lecture 22: Computer Vision II. Canny Edge Detector: The Canny edge detector is a good approximation.
Miguel Tavares Coimbra
Edge Detection slides taken and adapted from public websites:
Edge Detection Phil Mlsna, Ph.D. Dept. of Electrical Engineering Northern Arizona University.
Digital Image Processing Lecture 16: Segmentation: Detection of Discontinuities Prof. Charlene Tsai.
Chapter 6. Robot Vision.
Computer Vision Lecture 9: Edge Detection II
Artificial Intelligence Chapter 3 Neural Networks
Spatial operations and transformations
Image Segmentation Image analysis: First step:
Digital Image Processing
Artificial Intelligence Chapter 3 Neural Networks
Linear Operations Using Masks
Artificial Intelligence Chapter 3 Neural Networks
Edge Detection in Computer Vision
Artificial Intelligence Chapter 3 Neural Networks
Lecture 2: Image filtering
IT472 Digital Image Processing
IT472 Digital Image Processing
Spatial operations and transformations
Artificial Intelligence Chapter 3 Neural Networks
Presentation transcript:

Artificial Intelligence Chapter 6 Robot Vision Biointelligence Lab School of Computer Sci. & Eng. Seoul National University

Introduction
Computer vision
- Endowing machines with the means to “see”: creating an image of a scene and extracting features from it.
- A very difficult problem for machines:
  - Several different scenes can produce identical images.
  - Images can be noisy.
  - The imaging process cannot be directly “inverted” to reconstruct the scene.
Figure 6.1 The Many-to-One Nature of the Imaging Process

Steering an Automobile
ALVINN system [Pomerleau 1991, 1993]
- Uses an artificial neural network.
- Input: a 30×32 TV image (960 input nodes).
- 5 hidden nodes, 30 output nodes.
- Training regime: the network is modified “on the fly”. A human driver drives the car, and his actual steering angles are taken as correct labels for the corresponding inputs. Shifted and rotated versions of the images were also used for training.
- ALVINN has driven for 120 consecutive kilometers at speeds of up to 100 km/h.

Steering an Automobile: The ALVINN Network
Figure 6.2 The ALVINN Network
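The slide's numbers describe a very small fully connected network: 960 inputs (the flattened 30×32 image), 5 hidden units, and 30 output units. The NumPy sketch below only illustrates that forward pass under assumed details (random weights, tanh activations, argmax decoding of the steering output); it is not Pomerleau's implementation, which was trained on the fly from a human driver's steering angles.

```python
import numpy as np

# Sketch of an ALVINN-style feedforward network: 30x32 image -> 960 inputs,
# 5 hidden units, 30 output units. Weights, activations, and the decoding of
# the 30 outputs into a steering direction are illustrative assumptions.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, size=(5, 960))   # input -> hidden weights
b1 = np.zeros(5)
W2 = rng.normal(0.0, 0.1, size=(30, 5))    # hidden -> output weights
b2 = np.zeros(30)

def forward(image_30x32):
    """Map a 30x32 intensity image to 30 steering-direction activations."""
    x = image_30x32.reshape(960)           # flatten the input retina
    h = np.tanh(W1 @ x + b1)               # 5 hidden units
    return np.tanh(W2 @ h + b2)            # 30 output units, one per steering bin

frame = rng.random((30, 32))               # stand-in for one TV camera frame
steering_bin = int(np.argmax(forward(frame)))
print("most active steering output:", steering_bin)
```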

Two Stages of Robot Vision (1/3)
Finding objects in the scene
- Looking for “edges” in the image.
  - Edge: a part of the image across which the image intensity or some other property changes abruptly.
- Attempting to segment the image into regions.
  - Region: a part of the image in which the image intensity or some other property changes only gradually.
Figure 6.3 Scene Discontinuities

Two Stages of Robot Vision (2/3)
Image processing stage
- Transforms the original image into one that is more amenable to the scene analysis stage.
- Involves various filtering operations that help reduce noise, accentuate edges, and find regions.
Scene analysis stage
- Attempts to create an iconic or a feature-based description of the original scene, providing the task-specific information.
Figure 6.4 The Two Stages of Robot Vision

Two Stages of Robot Vision (3/3)
The scene analysis stage produces task-specific information.
- If only the disposition of the blocks is important, an appropriate iconic model might be (C B A FLOOR).
- If it is important to determine whether there is another block on top of the block labeled C, an adequate description will include the value of a feature such as CLEAR_C, as in the sketch below.
Figure 6.5 A Robot in a Room with Toy Blocks
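As a toy illustration (an assumed Python representation, not code from the text), the two kinds of task-specific description for the blocks scene might look like this:

```python
# Iconic description: the stacking order of the blocks, listed here from the
# floor upward, corresponding to the model (C B A FLOOR) on the slide.
iconic_model = ("FLOOR", "A", "B", "C")

# Feature-based description: only the task-relevant feature is kept.
features = {"CLEAR_C": True}               # nothing sits on top of block C

def is_clear(block, stacking):
    """A block is clear when it is the topmost block in the stacking order."""
    return stacking[-1] == block

assert is_clear("C", iconic_model) == features["CLEAR_C"]
```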

Averaging (1/4)
The original image can be represented as an m×n array of numbers, where the numbers represent the light intensities at the corresponding points in the image.
Certain irregularities in the image can be smoothed by an averaging operation, which involves sliding an averaging window all over the image array.

Averaging (2/4)
The smoothing operation thickens broad lines and eliminates thin lines and small details.
The averaging window is centered at each pixel, and the weighted sum of all the pixel values within the window is computed. This sum then replaces the original value at that pixel.
Figure 6.6 Elements of the Averaging Operation
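A minimal sketch of the averaging operation just described: a mask of weights is slid over the image, and each pixel is replaced by the weighted sum of its neighborhood. The 3×3 window, equal weights, and edge-replication border handling are assumptions made for illustration.

```python
import numpy as np

def window_average(image, weights):
    """Slide a weight mask over the image; at each pixel, replace the value
    with the weighted sum of the window centered there. Borders are handled
    by replicating edge pixels (an assumption, not specified in the text)."""
    k = weights.shape[0]
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * weights)
    return out

image = np.random.rand(8, 8)               # stand-in for an m x n intensity array
box = np.full((3, 3), 1.0 / 9.0)           # equal weights: plain averaging
smoothed = window_average(image, box)
```

With equal weights this is plain averaging; the next slides replace the equal weights with samples of a Gaussian.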

Averaging (3/4)
A common function used for smoothing is a two-dimensional Gaussian.
Convolving an image with a Gaussian is equivalent to solving a diffusion equation whose initial condition is given by the image intensity field.
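A sketch of Gaussian smoothing under assumed parameters (a 5×5 kernel with sigma = 1.0): the two-dimensional Gaussian is sampled, normalized to sum to one, and convolved with the image using SciPy.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=5, sigma=1.0):
    """Sample a 2-D Gaussian on a size x size grid and normalize it to sum to 1."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()

image = np.random.rand(64, 64)             # stand-in for an image intensity array
smoothed = convolve2d(image, gaussian_kernel(5, 1.0), mode="same", boundary="symm")
```

A larger sigma smooths more aggressively, which matches the diffusion view: running the diffusion longer corresponds to convolving with a wider Gaussian.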

Averaging (4/4)
Figure 6.7 The Gaussian Smoothing Function
Figure 6.8 Image Smoothing with a Gaussian Filter

Edge Enhancement (1/2)
Edge: any boundary between parts of the image with markedly different values of some property.
Edges are often related to important object properties.
Edges in the image occur at places where the second derivative of the image intensity crosses zero.

Edge Enhancement (2/2)
Figure 6.9 Edge Enhancement
Figure 6.10 Taking Derivatives of Image Intensity
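A one-dimensional sketch of what Figure 6.10 depicts, using a made-up smoothed step edge: the first derivative peaks at the edge and the second derivative crosses zero there.

```python
import numpy as np

# A smooth step from 0 to 1 (a synthetic edge, not data from the text).
x = np.linspace(-5.0, 5.0, 101)
intensity = 1.0 / (1.0 + np.exp(-3.0 * x))

d1 = np.gradient(intensity, x)     # first derivative: peaks at the edge
d2 = np.gradient(d1, x)            # second derivative: crosses zero at the edge

crossings = np.where(np.diff(np.sign(d2)) != 0)[0]
print("edge located near x =", x[crossings])   # approximately x = 0
```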

Combining Edge Enhancement with Averaging (1/2)
Edge enhancement alone would tend to emphasize noise along with the edges.
To be less sensitive to noise, both operations are needed: first averaging, then edge enhancement.
For a one-dimensional image, both operations can be combined by convolving the image with the second derivative of a Gaussian curve.
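A one-dimensional sketch of the combined operation, with an assumed kernel size, sigma, and noise level: convolving a noisy step edge with the second derivative of a Gaussian smooths and differentiates in one pass, and the edge appears as a zero crossing of the response.

```python
import numpy as np

def second_deriv_gaussian(size=21, sigma=2.0):
    """Sample g''(x) = (x^2/sigma^4 - 1/sigma^2) * exp(-x^2 / (2 sigma^2))."""
    half = size // 2
    x = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return (x**2 / sigma**4 - 1.0 / sigma**2) * g

# A step edge at index 50 with a little added noise.
rng = np.random.default_rng(1)
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)

response = np.convolve(signal, second_deriv_gaussian(), mode="same")

# The edge is the zero crossing where the response changes most steeply.
crossings = np.where(np.diff(np.sign(response)) != 0)[0]
edge = crossings[np.argmax(np.abs(np.diff(response))[crossings])]
print("edge detected near index", edge)    # close to the true edge at index 50
```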

Combining Edge Enhancement with Averaging (2/2)
The Laplacian is a second-derivative-type operation that enhances edges of any orientation.
The Laplacian of the two-dimensional Gaussian function looks like an upside-down hat and is often called a sombrero function.
The entire averaging/edge-finding operation can be achieved by convolving the image with the sombrero function; this is called Laplacian filtering.
Figure 6.11 The Sombrero Function Used in Laplacian Filtering
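A two-dimensional sketch of Laplacian filtering under assumed parameters (a 9×9 sombrero kernel sampled from the Laplacian of a Gaussian with sigma = 1.4), applied to a synthetic step-edge image:

```python
import numpy as np
from scipy.signal import convolve2d

def sombrero_kernel(size=9, sigma=1.4):
    """Sample the Laplacian of a 2-D Gaussian:
    (r^2 - 2 sigma^2) / sigma^4 * exp(-r^2 / (2 sigma^2))."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    kernel = (r2 - 2.0 * sigma**2) / sigma**4 * np.exp(-r2 / (2.0 * sigma**2))
    return kernel - kernel.mean()          # zero mean: uniform regions give zero response

image = np.zeros((40, 40))
image[:, 20:] = 1.0                        # a vertical step edge

response = convolve2d(image, sombrero_kernel(), mode="same", boundary="symm")
# Edges of any orientation show up as zero crossings (sign changes) in 'response'.
```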