Advanced Multimedia Image Content Analysis Tamara Berg.

Announcements
– Start thinking about project ideas. These can be related to text, sound, images, combinations of different media, interaction with digital media, social media, …
– Next week, on March 10 or 12, visit office hours to discuss your project ideas (20 points for coming with a well-thought-out idea or ideas).
– Can work alone or in pairs.
– Project proposal presentations March 17.
– Reminder: assignment due March 12 (a week from today).

Possible Project Ideas
– Build a spam/ham detector
– Extend your PageRank algorithm to do web search by query
– Build a document topic classification system
– Implement a music retrieval system based on low-level audio features or pitch class profile
– Implement an image retrieval system
– Projects related to combined sources of information: images + maps, images + text
– Something cool with Twitter? Example: twittering the Super Bowl, or Twitter + location information/maps
– Projects related to interactivity
– Build a Facebook app to do …

How are images stored?

Reminder: Images Images are sampled and quantized measurements of light hitting a sensor. What do we mean by sampled? What is being quantized?

Images in the computer

Color Images Stored as a 3d matrix of r, g, and b values at each pixel – The image matrix would be size NxMx3 – R = img(:,:,1), G = img(:,:,2), B = img(:,:,3) Ellsworth Kelly, Red Blue Green, 1963

Red component: img(:,:,1)

Green component: img(:,:,2)

Blue component: img(:,:,3)

Color Images Stored as a 3d matrix of r, g, and b values at each pixel – So img(i,j,:) = [r g b]. – In the case above this might be [ ]. Ellsworth Kelly, Red Blue Green, 1963
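The indexing above can be sketched in NumPy (the course uses MATLAB; this is an equivalent illustration with a made-up 2x3 image):

```python
import numpy as np

# A tiny 2x3 color image (values are made up for illustration);
# in MATLAB this would be an N x M x 3 matrix `img`.
img = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]],
                [[10, 20, 30], [40, 50, 60], [70, 80, 90]]], dtype=np.uint8)

R = img[:, :, 0]  # MATLAB: img(:,:,1)
G = img[:, :, 1]  # MATLAB: img(:,:,2)
B = img[:, :, 2]  # MATLAB: img(:,:,3)

# img(i,j,:) gives the [r g b] triple at row i, column j
print(img[1, 2, :])  # -> [70 80 90]
```

Note the only real difference from the MATLAB slides: NumPy indexing is 0-based, MATLAB is 1-based.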

Useful Matlab image functions
– img = imread(filename); reads in an image
– imagesc(img); displays an image
– imwrite(img, outfilename); writes an image
– img(i,j,:) indexes into the ith row, jth column of the image
– subimg = img(1:10,20:30,:) extracts part of img

Matlab demo 1 This will be very useful for homework 3!

How should we represent them?

Motivation Image retrieval – We have a database of images – We have a query image – We want to find the images in our database that are most similar to the query. As with text retrieval and music retrieval, we first need a representation for our data.

How should we represent an image?

First try Just represent the image by all its pixel values

First try Img1 = Img2 = Say we measure similarity as: sim = sum(abs(img1 - img2))

First try How similar are these two images? Is this bad? Img1 = Img2 = Say we measure similarity as: sim = average diff between values in img1 and img2
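A minimal NumPy sketch of why this raw-pixel measure is fragile, using a made-up 1-D signal and the average-difference similarity from the slide: a one-pixel shift of identical content produces a large difference.

```python
import numpy as np

def sim(img1, img2):
    # average absolute difference between pixel values (lower = more similar)
    return np.mean(np.abs(img1.astype(float) - img2.astype(float)))

# A toy 1-D "image": a bright bar on a dark background
img1 = np.array([0, 0, 255, 255, 0, 0])
img2 = np.roll(img1, 1)   # the same bar, shifted by one pixel

print(sim(img1, img1))  # -> 0.0: identical images
print(sim(img1, img2))  # -> 85.0: raw pixels treat a tiny shift as very different
```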

Matlab demo 2

What do we want? Features should be robust to small changes in the image such as: – Translation – Rotation – Illumination changes

Second Try Represent the image as its average pixel color. Pros? Cons? Photo by: marielito

Third Try Represent the image as a spatial grid of average pixel colors. Pros? Cons? Photo by: marielito
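The grid-of-average-colors idea can be sketched in NumPy (a toy image and a 2x2 grid; the grid size is a free parameter):

```python
import numpy as np

def grid_average_colors(img, rows=2, cols=2):
    """Represent an image as a rows x cols grid of average colors."""
    H, W, C = img.shape
    feat = np.zeros((rows, cols, C))
    for r in range(rows):
        for c in range(cols):
            cell = img[r*H//rows:(r+1)*H//rows, c*W//cols:(c+1)*W//cols, :]
            feat[r, c] = cell.mean(axis=(0, 1))  # average color of this cell
    return feat

# Toy image: left half red, right half blue
img = np.zeros((4, 4, 3))
img[:, :2, 0] = 255   # red channel on the left
img[:, 2:, 2] = 255   # blue channel on the right

feat = grid_average_colors(img)
print(feat[0, 0])  # -> [255. 0. 0.]: left cells are pure red
print(feat[0, 1])  # -> [0. 0. 255.]: right cells are pure blue
```

Unlike a single global average, the grid keeps coarse spatial layout while staying robust to small pixel-level changes.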

QBIC system The first content-based image retrieval system – Query By Image Content (QBIC) – IBM, 1995 – QBIC interprets the virtual canvas as a grid of coloured areas, then matches this grid to other images stored in the database.

Matlab demo 3

Color is not always enough! The representations of these two umbrellas should be similar… Under a color-based representation they look completely different!

What next? Edges! But what are they & how do we find them?

Reminder: Convolution

Filtering Alternatively, you can convolve the input signal with a filter to get a frequency-limited output signal. Example filter: g = [1/4 1/2 1/4]. Slide the (flipped) filter along the signal f; at each position, the output is the sum of the products of the overlapping values. Convolution computes a weighted average.
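The step above can be sketched in NumPy; np.convolve with mode='same' applies the [1/4 1/2 1/4] filter and keeps the output the same length as the input (the signal values are made up):

```python
import numpy as np

f = np.array([2.0, 4.0, 6.0, 4.0, 2.0])   # a made-up input signal
g = np.array([0.25, 0.5, 0.25])           # the smoothing filter from the slide

# 'same' keeps the output the same length as the input
out = np.convolve(f, g, mode='same')
print(out)  # -> [2. 4. 5. 4. 2.]
```

Note the peak: out[2] = 0.25*4 + 0.5*6 + 0.25*4 = 5, a weighted average of the neighborhood, which is exactly what convolution computes.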

Images -> 2d filtering

Moving average ("box filter") Let's replace each pixel with a weighted average of its neighborhood. The weights are called the filter kernel. What are the weights for a 3x3 moving average? (Each weight is 1/9.) Source: D. Lowe

Defining convolution in 2d Let f be the image and g be the kernel. The output of convolving f with g is denoted f * g. Convention: the kernel is "flipped" (without the flip, the operation is correlation). MATLAB: conv2 performs convolution, filter2 performs correlation (see also imfilter). Source: F. Durand
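A minimal NumPy sketch of 2-D convolution, including the kernel flip (MATLAB's conv2 convention); the explicit loops are for clarity, not speed:

```python
import numpy as np

def conv2d(f, g):
    """'valid' 2-D convolution of image f with kernel g. The kernel is
    flipped first (as in conv2); skipping the flip gives correlation
    (as in filter2)."""
    g = np.flipud(np.fliplr(g))          # flip the kernel in both directions
    kh, kw = g.shape
    H, W = f.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # weighted sum of the neighborhood under the kernel
            out[i, j] = np.sum(f[i:i+kh, j:j+kw] * g)
    return out

f = np.arange(16, dtype=float).reshape(4, 4)
box = np.ones((3, 3)) / 9.0              # 3x3 box filter
print(conv2d(f, box))                    # each output = average of a 3x3 patch
```

For a symmetric kernel like the box filter, convolution and correlation give identical results; the flip only matters for asymmetric kernels.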

The "box filter" kernel g(x,y): constant weights over the neighborhood.

Moving Average In 2D The box filter slides across the image; each output pixel is the average of the input pixels in the window around it. What is this filter doing? Source: S. Seitz

Practice with linear filters (Source: D. Lowe)
– An impulse kernel (a single 1 at the center, zeros elsewhere): filtered image is unchanged (no change).
– An impulse shifted off-center: image is shifted left by 1 pixel.
– A box filter: image is blurred.
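The three practice filters can be checked in 1-D with np.convolve (a sketch with a made-up signal; the slides use 2-D images, but the behavior is the same):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

identity = np.array([0.0, 1.0, 0.0])      # impulse at the center: no change
shift    = np.array([1.0, 0.0, 0.0])      # off-center impulse: shifts left
box      = np.array([1.0, 1.0, 1.0]) / 3  # box filter: blur (local average)

print(np.convolve(f, identity, mode='same'))  # -> [1. 2. 3. 4. 5.]
print(np.convolve(f, shift, mode='same'))     # -> [2. 3. 4. 5. 0.]
print(np.convolve(f, box, mode='same'))       # -> [1. 2. 3. 4. 3.]
```

The edge values of the blurred output differ because 'same' mode pads the borders with zeros.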

Gaussian Kernel The constant factor in front makes the volume sum to 1 (it can be ignored, since we should re-normalize the weights to sum to 1 in any case). Example: 5 x 5 kernel, σ = 1. Source: C. Rasmussen
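A sketch of building such a kernel in NumPy, with the re-normalization mentioned above (size and sigma are parameters; 5x5 with sigma = 1 matches the slide):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """size x size Gaussian kernel, re-normalized to sum to 1
    (so the constant 1/(2*pi*sigma^2) factor can be dropped)."""
    ax = np.arange(size) - size // 2            # e.g. [-2, -1, 0, 1, 2]
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()                           # normalize weights to sum to 1

g = gaussian_kernel(5, 1.0)
print(g.sum())   # -> 1.0: weights sum to 1
print(g[2, 2])   # center has the largest weight
```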

Example: Smoothing with a Gaussian source: Svetlana Lazebnik

Edges

Edge detection Goal: identify sudden changes (discontinuities) in an image. Intuitively, most semantic and shape information in the image can be encoded in the edges. More compact than pixels. Ideal: an artist's line drawing (but the artist is also using object-level knowledge). Source: D. Lowe

Origin of Edges Edges are caused by a variety of factors:
– depth discontinuity
– surface color discontinuity
– illumination discontinuity
– surface normal discontinuity
Source: Steve Seitz

Characterizing edges An edge is a place of rapid change in the image intensity function. Plotting the image intensity along a horizontal scanline and its first derivative shows that edges correspond to extrema of the derivative. Source: Svetlana Lazebnik

Edge filters Approximations of derivative filters. Convolve the filter with the image to get an edge map: one kernel responds highly to vertical edges, and its transpose responds highly to horizontal edges. Source: K. Grauman

Edges: example source: Svetlana Lazebnik

What about our umbrellas? The representations of these two umbrellas should be similar… Under a color-based representation they look completely different! How about using edges?

Edges Edges extracted using convolution with the Prewitt filter. Red umbrella / Gray umbrella

Edges Edges overlaid from the red and gray umbrellas. How is this representation?

Edge Energy in Spatial Grid Red Umbrella / Gray Umbrella How is this representation?

Quick overview of other common kinds of Features

Important concept: Histograms A graphical display of tabulated frequencies, shown as bars. A histogram shows what proportion of cases fall into each of several categories; the categories are usually specified as non-overlapping intervals of some variable.

Color Histograms A representation of the distribution of colors in an image, derived by counting the number of pixels in each of a given set of color ranges in a (typically 3D) color space (RGB, HSV, etc.). What are the bins in this histogram?
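A sketch of a coarse color histogram in NumPy, using 2 bins per channel on a made-up 2x2 image (real systems use many more bins, often in HSV space):

```python
import numpy as np

# Toy 2x2 RGB image; we count pixels falling into coarse color-range bins
img = np.array([[[255, 0, 0], [250, 5, 5]],
                [[0, 0, 255], [0, 255, 0]]], dtype=np.uint8)

pixels = img.reshape(-1, 3).astype(float)
# 2 bins per channel (low/high) -> 2*2*2 = 8 color-range bins
hist, _ = np.histogramdd(pixels, bins=(2, 2, 2), range=((0, 256),) * 3)

print(hist.sum())     # -> 4.0: every pixel falls in exactly one bin
print(hist[1, 0, 0])  # -> 2.0: the two reddish pixels land in the same bin
```

The bins are the color ranges themselves: each histogram cell counts how many pixels fall into one box of the quantized color space.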

Shape Descriptors: shape context A representation of the local shape around a feature location (the star), expressed as a histogram of the edge points in the image relative to that location, computed by counting edge points in a log-polar space. So what are the bins of this histogram?

Shape descriptors: SIFT Descriptor computation: – Divide the patch into 4x4 sub-patches – Compute a histogram of gradient orientations (convolve with filters that respond to edges in different directions) inside each sub-patch – Resulting descriptor: 4x4x8 = 128 dimensions David G. Lowe. "Distinctive image features from scale-invariant keypoints." IJCV 60 (2), pp. 91–110, 2004. Source: Svetlana Lazebnik
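A toy NumPy sketch of the 4x4 x 8-orientation layout described above; real SIFT additionally applies Gaussian weighting, trilinear interpolation, and descriptor normalization:

```python
import numpy as np

def sift_like_descriptor(patch):
    """Toy version of the SIFT layout: split a 16x16 patch into 4x4
    sub-patches and take an 8-bin, magnitude-weighted histogram of
    gradient orientations in each, giving 4*4*8 = 128 dimensions."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)               # gradient magnitude
    ang = np.arctan2(gy, gx)             # gradient orientation in [-pi, pi]
    desc = []
    for i in range(4):
        for j in range(4):
            sl = (slice(i*4, i*4+4), slice(j*4, j*4+4))
            h, _ = np.histogram(ang[sl], bins=8, range=(-np.pi, np.pi),
                                weights=mag[sl])
            desc.append(h)
    return np.concatenate(desc)

patch = np.random.rand(16, 16)           # stand-in for a detected keypoint patch
d = sift_like_descriptor(patch)
print(d.shape)   # -> (128,)
```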

Texture Features Convolve with various filters (spot, oriented bar) and compute a histogram of responses over a universal texton dictionary. Julesz, 1981; Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003
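A sketch of the filter-response histogram idea with two made-up 3x3 filters (a true texton approach would cluster the responses into a learned dictionary first):

```python
import numpy as np

# Two made-up 3x3 filters: a "spot" (center-surround) and an oriented bar
spot = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]], dtype=float)
bar  = np.array([[-1, -1, -1],
                 [ 2,  2,  2],
                 [-1, -1, -1]], dtype=float)

def responses(img, filt):
    """'valid' filter responses over the image."""
    kh, kw = filt.shape
    out = np.zeros((img.shape[0]-kh+1, img.shape[1]-kw+1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * filt)
    return out

img = np.random.rand(8, 8)               # stand-in for a texture patch
# texture feature: histogram of each filter's responses across the image
feats = [np.histogram(responses(img, f), bins=4)[0] for f in (spot, bar)]
print([f.sum() for f in feats])          # each sums to the number of response pixels
```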