Advanced Multimedia Image Content Analysis Tamara Berg
Announcements HW2 due March 13, 11:59pm – Questions? HW3 online Thurs

How are images stored?

Images Images are sampled and quantized measurements of light hitting a sensor.

Images in the computer

Color Images Stored as 3d matrix of r, g, and b values at each pixel – Image matrix would be size NxMx3 – ? = img(:,:,1), ? = img(:,:,2), ? = img(:,:,3) Ellsworth Kelly Red Blue Green, 1963 N M img =

Color Images Stored as 3d matrix of r, g, and b values at each pixel – Image matrix would be size NxMx3 – R = img(:,:,1), G = img(:,:,2), B = img(:,:,3) Ellsworth Kelly Red Blue Green, 1963 N M img =

Red component img(:,:,1) =

Green component img(:,:,2) =

Blue component img(:,:,3) =

Color Images Stored as 3d matrix of r, g, and b values at each pixel – How could you get the r,g,b values for pixel (i,j)? Ellsworth Kelly Red Blue Green, 1963 N M img =

Color Images Stored as 3d matrix of r, g, and b values at each pixel vals = img(i,j,:); then r = vals(1); g = vals(2); b = vals(3); Ellsworth Kelly Red Blue Green, 1963 N M img =

Color Images Stored as 3d matrix of r, g, and b values at each pixel – So r,g,b values are stored in img(i,j,:). – In the case above these values might be [1 0 0]. Ellsworth Kelly Red Blue Green, 1963 N M img = i j
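The slides use Matlab indexing; as a rough illustration only, the same per-channel and per-pixel access can be sketched in plain Python with a nested-list image. The `channel` helper and the 2x2 pixel values below are hypothetical, not from the slides.

```python
# Hypothetical 2x2 RGB image as a nested list, mirroring the N x M x 3
# layout described above (row i, column j, then the 3 color values).
img = [
    [[1, 0, 0], [0, 1, 0]],
    [[0, 0, 1], [1, 1, 1]],
]

def channel(img, c):
    # Like img(:,:,c) in Matlab, but 0-indexed: pull out one color plane.
    return [[pixel[c] for pixel in row] for row in img]

R, G, B = channel(img, 0), channel(img, 1), channel(img, 2)

# The r, g, b values for pixel (i, j), like img(i,j,:):
i, j = 0, 0
r, g, b = img[i][j]
print(R, (r, g, b))
```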

RGB color space RGB cube – Easy for devices – But not perceptual – Where do the grays live? – Where is hue and saturation? Slide by Steve Seitz

HSV Cone Hue, Saturation, Value (Intensity/Brightness). Use rgb2hsv() and hsv2rgb() in Matlab
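Python's standard-library `colorsys` module offers the same conversion as Matlab's rgb2hsv()/hsv2rgb() pair, which makes the cone easy to probe, e.g. where the grays live:

```python
import colorsys  # stdlib equivalent of Matlab's rgb2hsv / hsv2rgb

# Pure red: hue 0, full saturation, full value (all channels in [0, 1]).
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)                               # 0.0 1.0 1.0

# Round-trip back to RGB.
print(colorsys.hsv_to_rgb(h, s, v))          # (1.0, 0.0, 0.0)

# Grays live where saturation is 0 -- hue no longer matters.
print(colorsys.rgb_to_hsv(0.5, 0.5, 0.5))    # (0.0, 0.0, 0.5)
```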

How should we represent images?

Motivation Image retrieval – We have a database of images – We have a query image – We want to find those images in our database that are most similar to the query. As with text retrieval and music retrieval, we first need a representation for our data.

How should we represent an image?

First try Just represent the image by all its pixel values

First try Img1 = Img2 = Say we measure similarity as: sim = sum(abs(img1 - img2))

First try How similar are these two images? Is this bad? Img1 = Img2 = Say we measure similarity as: sim = average diff between values in img1 and img2
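A minimal sketch of the sum-of-absolute-differences measure above, on two tiny made-up grayscale "images" (the `pixel_distance` helper is hypothetical; smaller distance = more similar). It also shows why raw pixels are fragile: the same content, merely moved, scores as very different.

```python
def pixel_distance(img1, img2):
    # Sum of absolute pixel differences: 0 means identical images.
    return sum(abs(a - b)
               for row1, row2 in zip(img1, img2)
               for a, b in zip(row1, row2))

img1 = [[0, 0], [255, 255]]
img2 = [[0, 0], [255, 255]]    # the same picture
img3 = [[255, 255], [0, 0]]    # same content, flipped vertically
print(pixel_distance(img1, img2))   # 0
print(pixel_distance(img1, img3))   # 1020 -- "completely different"
```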

What do we want? Features should be robust to small changes in the image such as: – Translation – Rotation – Illumination changes

Second Try Represent the image as its average pixel color Pros? Cons? Photo by: marielito

Third Try Represent the image as a spatial grid of average pixel colors Pros? Cons? Photo by: marielito
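The spatial-grid idea can be sketched as follows (hypothetical `grid_average` helper, grayscale for brevity; a color version would average each channel separately):

```python
def grid_average(img, gh, gw):
    # Average intensity in each cell of a gh x gw grid over the image.
    H, W = len(img), len(img[0])
    cells = []
    for gy in range(gh):
        row = []
        for gx in range(gw):
            ys = range(gy * H // gh, (gy + 1) * H // gh)
            xs = range(gx * W // gw, (gx + 1) * W // gw)
            vals = [img[y][x] for y in ys for x in xs]
            row.append(sum(vals) / len(vals))
        cells.append(row)
    return cells

img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [255, 255, 0, 0],
       [255, 255, 0, 0]]
print(grid_average(img, 2, 2))   # [[0.0, 255.0], [255.0, 0.0]]
```

Unlike a single global average, the grid keeps coarse layout information, at the cost of some translation robustness.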

QBIC system First content-based image retrieval system – Query by image content (QBIC) – IBM 1995 – QBIC interprets the virtual canvas as a grid of coloured areas, then matches this grid to other images stored in the database.

Color is not always enough! The representation of these two umbrellas should be similar… Under a color-based representation they look completely different!

What next? Edges! But what are they & how do we find them?

Filtering Alternatively you can convolve the input signal with a filter to get a frequency-limited output signal. Convolution: signal f, filter g = [1/4 1/2 1/4]. Convolution computes a weighted average.
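The 1-D smoothing convolution above can be sketched in pure Python (hypothetical `convolve1d` helper; the [1/4 1/2 1/4] kernel is symmetric, so flipping it for true convolution changes nothing):

```python
def convolve1d(signal, kernel):
    # 'Same'-size 1-D convolution with zero padding at the borders
    # (kernel assumed symmetric, so no flip is needed).
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

f = [0, 0, 0, 8, 0, 0, 0]      # an impulse in the signal
g = [0.25, 0.5, 0.25]          # the smoothing filter from the slide
print(convolve1d(f, g))        # [0.0, 0.0, 2.0, 4.0, 2.0, 0.0, 0.0]
```

The impulse gets spread over its neighbors with the kernel's weights: exactly a weighted average at each position.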

Images -> 2d filtering

Let’s replace each pixel with a weighted average of its neighborhood The weights are called the filter kernel What are the weights for a 3x3 unweighted moving average? Moving average “box filter” Source: D. Lowe

“box filter” g(x,y)

Moving Average In 2D Source: S. Seitz

Moving Average In 2D Source: S. Seitz What is this filter doing?

Practice with linear filters Original ? Source: D. Lowe

Practice with linear filters Original Filtered (no change) Source: D. Lowe

Practice with linear filters Original ? Source: D. Lowe

Practice with linear filters Original Shifted left by 1 pixel Source: D. Lowe

Practice with linear filters Original ? Source: D. Lowe

Practice with linear filters Original Blur (with a box filter) Source: D. Lowe
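The box-filter blur can be sketched directly: each weight in a 3x3 unweighted moving average is 1/9 (hypothetical `box_blur` helper; borders are left unchanged for simplicity):

```python
def box_blur(img):
    # 3x3 box filter: each interior output pixel is the mean of its
    # 3x3 neighborhood; border pixels are copied through unchanged.
    H, W = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s / 9.0
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(box_blur(img)[1][1])   # 1.0 -- the bright pixel is spread out
```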

Gaussian Kernel Constant factor at front makes volume sum to 1 (can be ignored, as we should re-normalize weights to sum to 1 in any case) 5 x 5, σ = 1 Source: C. Rasmussen
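Building the 5 x 5, σ = 1 kernel and re-normalizing as the slide suggests can be sketched like this (hypothetical `gaussian_kernel` helper):

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    # size x size Gaussian weights, re-normalized to sum to 1
    # (so the constant 1/(2*pi*sigma^2) factor can be ignored).
    k = size // 2
    kernel = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
               for x in range(-k, k + 1)]
              for y in range(-k, k + 1)]
    total = sum(sum(row) for row in kernel)
    return [[w / total for w in row] for row in kernel]

K = gaussian_kernel(5, 1.0)
print(round(sum(sum(row) for row in K), 6))   # 1.0
print(K[2][2] > K[2][3] > K[2][4])            # True: falls off from center
```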

Example: Smoothing with a Gaussian source: Svetlana Lazebnik

Edges

Edge detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the image can be encoded in the edges More compact than pixels Ideal: artist’s line drawing (but artist is also using object-level knowledge) Source: D. Lowe

Origin of Edges Edges are caused by a variety of factors: depth discontinuity, surface color discontinuity, illumination discontinuity, surface normal discontinuity. Source: Steve Seitz

Characterizing edges An edge is a place of rapid change in the image intensity function. Image intensity function (along horizontal scanline); first derivative: edges correspond to extrema of the derivative. source: Svetlana Lazebnik

Edge filters Approximations of derivative filters: Source: K. Grauman Convolve filter with image to get edge map

Edge filters Approximations of derivative filters: Source: K. Grauman Respond highly to vertical edges

Edge filters Approximations of derivative filters: Source: K. Grauman Respond highly to horizontal edges
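As a sketch of a vertical-edge derivative filter (a Prewitt-style x-derivative; the `response` helper and the step-edge image are hypothetical, and the sign convention is irrelevant for edge strength):

```python
# Prewitt-style kernel: responds highly to vertical edges.
kx = [[-1, 0, 1],
      [-1, 0, 1],
      [-1, 0, 1]]

def response(img, y, x, kernel):
    # Filter response at (y, x); large magnitude = strong edge.
    return sum(kernel[dy + 1][dx + 1] * img[y + dy][x + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

# Dark-to-bright vertical step edge:
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 0, 255, 255]]
print(response(img, 1, 1, kx))   # 765: strong response at the edge
```

Sliding `response` over every pixel and keeping the magnitudes yields the edge map the slide describes.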

Edges: example source: Svetlana Lazebnik

What about our umbrellas? The representation of these two umbrellas should be similar… Under a color-based representation they look completely different! How about using edges?

Edges Edges extracted using convolution with a Prewitt filter. Red umbrella / Gray umbrella

Edges Edges overlaid from the red and gray umbrellas. How is this?

Edge Energy in Spatial Grid Red Umbrella / Gray Umbrella How is this representation?

Quick overview of other common kinds of Features

Important concept: Histograms Graphical display of tabulated frequencies, shown as bars. It shows what proportion of cases fall into each of several categories. The categories are usually specified as non-overlapping intervals of some variable.

Color Histograms Representation of the distribution of colors in an image, derived by counting the number of pixels in each of a given set of color ranges in a (typically 3D) color space (RGB, HSV, etc.). What are the bins in this histogram?
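The binning can be sketched in one dimension (hypothetical `gray_histogram` helper over grayscale intensities; a full color histogram does the same per channel, or over 3-D color bins):

```python
def gray_histogram(img, n_bins=4, max_val=256):
    # Count pixels falling into n_bins equal-width intensity ranges
    # (bin width = max_val / n_bins, here 64).
    hist = [0] * n_bins
    for row in img:
        for v in row:
            hist[v * n_bins // max_val] += 1
    return hist

img = [[0, 10, 200, 250],
       [5, 100, 130, 255]]
print(gray_histogram(img))   # [3, 1, 1, 3]
```

Note the bins here are the four intensity ranges; the histogram discards all spatial layout, which is both its strength (translation/rotation robustness) and its weakness.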

Shape Descriptors: shape context Representation of the local shape around a feature location (star) – as a histogram of edge points in the image relative to that location. Computed by counting the edge points in log-polar space. So what are the bins of this histogram?

Shape descriptors: SIFT Descriptor computation: Divide patch into 4x4 sub-patches. Compute histogram of gradient orientations (convolve with filters that respond to edges in different directions) inside each sub-patch. Resulting descriptor: 4x4x8 = 128 dimensions. David G. Lowe. "Distinctive image features from scale-invariant keypoints." IJCV 60(2), 2004. source: Svetlana Lazebnik
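The orientation quantization at the heart of each sub-patch histogram can be sketched as follows (hypothetical `orientation_bin` helper; this only illustrates the binning, not the full SIFT pipeline with Gaussian weighting and interpolation):

```python
import math

def orientation_bin(dx, dy, n_bins=8):
    # Quantize a gradient direction into one of 8 orientation bins,
    # as in each SIFT sub-patch histogram.
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi) * n_bins) % n_bins

print(orientation_bin(1, 0))    # 0: gradient pointing right
print(orientation_bin(0, 1))    # 2: gradient pointing up
print(4 * 4 * 8)                # 128 descriptor dimensions in total
```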

Texture Features Universal texton dictionary histogram. Julesz, 1981; Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003. Convolve with various filters – spot, oriented bar. Compute histogram of responses.