Parallel Integration of Video Modules


Parallel Integration of Video Modules
T. Poggio, E.B. Gamble, J.J. Little
6.899 Paper Presentation
Presenter: Brian Whitman

Overview
- Different cues make up a 'reliable map': edge, stereo, color, motion
- How can we integrate these cues to find surface discontinuities?

Architecture

Physical Discontinuities
- Depth
- Orientation
- Albedo edges
- Specular edges
- Shadow edges

Implementation
- The architecture was not fully implemented.
- Results integrate brightness with hue, texture, motion, and stereo
- But each separately, not all together.

Smoothness
- The physical processes behind the cues vary slowly: two adjacent points rarely lie at vastly different depths.
- We need a representation that captures this.

Discontinuities
- Cues are assumed smooth everywhere except at discontinuities.
- Each module needs to assume and interpolate smoothness, and to detect edges and changes.

Dual Lattices
- Circles are smooth sites; crosses are line / discontinuity sites.

Neighborhoods

Quickly, MRF (again)
- Prior probability of depth in the lattice is given by a Gibbs distribution: Z is the normalization, T is the temperature, U is the energy (a sum of local contributions).
- If we know g (the observation), use it.
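The equation itself did not survive the transcript; from the symbols the slide defines (Z, T, U), it is presumably the standard Gibbs form of an MRF prior, a reconstruction rather than a copy from the paper:

```latex
P(f) = \frac{1}{Z}\, e^{-U(f)/T},
\qquad
U(f) = \sum_{C} U_C(f)
```

where the sum runs over local cliques (neighborhoods) C of the lattice.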

Membrane Prior Prior energy when surface is smooth:
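The membrane prior's energy term is also missing from the transcript; a sketch of the standard first-order membrane energy, assuming the usual nearest-neighbor form:

```latex
U(f) = \sum_{(i,j)\,\in\,\mathcal{N}} (f_i - f_j)^2
```

This is small wherever the surface f varies slowly, encoding the smoothness assumption.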

Gaussian Process
If we assume a Gaussian process generated the noise:
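If the observations g are the true values f corrupted by i.i.d. Gaussian noise of variance σ², the posterior energy adds a quadratic data term to the membrane prior. Again a standard reconstruction, not copied from the paper:

```latex
U(f \mid g) = \sum_{(i,j)\,\in\,\mathcal{N}} (f_i - f_j)^2
            \;+\; \lambda \sum_i (f_i - g_i)^2,
\qquad \lambda \propto \frac{1}{2\sigma^2}
```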

Line Process
- Where is the smoothness assumption broken?
- l: the line variable between sites i and j
- Vc: varying energies for different line configurations
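With a binary line variable l_ij switched on, the smoothness penalty between sites i and j is disabled at a configuration-dependent cost; a Geman-and-Geman-style sketch of the coupled energy:

```latex
U(f, l) = \sum_{(i,j)\,\in\,\mathcal{N}} (f_i - f_j)^2\,(1 - l_{ij})
        \;+\; \sum_{C} V_C(l)
```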

Integrated Process
- Extend the energy function to tie the vision modules to brightness gradients.
- Assumption: changes in brightness guide our belief about the source of surface discontinuities.

High Brightness Gradients
- Instead of energy terms based on line configuration alone, use the strengths of brightness edges.
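One way to read this slide (a hedged sketch, not the paper's exact formulation): the configuration penalties V_C(l) are replaced by a term that makes a line cheap where the brightness edge strength e_ij is high and expensive where it is low:

```latex
U(f, l) = \sum_{(i,j)} (f_i - f_j)^2\,(1 - l_{ij})
        \;+\; \sum_{(i,j)} g(e_{ij})\, l_{ij}
```

with g(·) a decreasing function of edge strength, so discontinuities are encouraged to align with brightness edges.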

Low-level Modules
- The paper mentions: edge detection, stereo, motion, color, texture
- But it gives detail only on texture and color.

Texture Module
- Measures local blob density
- 'Blobs' are extracted with a center-surround filter
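A minimal sketch of a center-surround filter as a difference of Gaussians; the kernel size and sigmas are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def center_surround(image, sigma_c=1.0, sigma_s=3.0, size=11):
    """Difference-of-Gaussians (narrow center minus wide surround) response.

    Positive responses mark bright blob-like regions.
    """
    dog = gaussian_kernel(size, sigma_c) - gaussian_kernel(size, sigma_s)
    pad = size // 2
    padded = np.pad(image, pad)  # zero-pad for a 'same'-size output
    h, w = image.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # DoG kernel is symmetric, so correlation equals convolution
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * dog)
    return out
```

Thresholding the response and counting peaks per window would then give the local blob density the slide describes.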

Color Module
- Hue = R / (R + G)
- Should be independent of illumination
- The MRF uses this to segment the image into regions of 'constant reflectance'
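The hue measure on this slide is simple to sketch; handling of zero denominators is my own assumption for numerical safety:

```python
import numpy as np

def hue_channel(rgb):
    """Hue as defined on the slide: H = R / (R + G).

    Scaling all channels by the same illumination factor leaves H unchanged.
    Pixels with R + G == 0 are assigned hue 0 (an assumed convention).
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    denom = r + g
    return np.divide(r, denom, out=np.zeros_like(r), where=denom > 0)
```

An MRF segmenter would then treat this channel, rather than raw brightness, as the observation when grouping pixels of constant reflectance.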

Original image + brightness edges

Stereo data, MRF generated depth

Motion data, MRF generated flow

Texture data, MRF generated texture regions

Hue, MRF hue segmentation

Parallelizing
- Much of the paper concerns a specialized architecture.
- Many small processes are better suited for mass computation.
- A model of specialized experts.

More Recent
- A recent Mohan, Papageorgiou, Poggio paper: "Example-Based Object Detection in Images by Components"
- Trains an 'ACC' using different 'experts'

Conclusions
- All extracted surface discontinuities can be used in later image understanding.
- Open question: "Do brightness edges aid human computation of surface discontinuities?"
- Parallelizing image analysis…