Video Object Tracking and Replacement for Post TV Production LYU0303 Final Year Project Fall 2003

Outline
- Project Introduction
- Basic parts of the proposed system
- Working principles of individual parts
- Future Work
- Q&A

Introduction
- Post-TV production software changes the content of the original video clips.
- Extensively used in the video-making industry.
- Why change the content of a video?
  - Reducing video production cost
  - Performing dangerous actions
  - Producing effects that are impossible in reality

Difficulties to be overcome
- Things in a video can be treated individually as "video objects".
- Computers cannot perform object detection directly because:
  - the image is processed byte by byte;
  - there is no prior knowledge about the video objects to be detected;
  - the result is deterministic, with no fuzzy logic.
- Although computers cannot perform object detection directly, they can be programmed to do it indirectly.

Basic parts of the proposed system
- Simple bitmap reader/writer
- RGB/HSV converter
- Edge detector
- Edge equation finder
- Equation processor
- Translation detector
- Texture mapper

RGB/HSV converter
- Human eyes are more sensitive to brightness than to the true color components of an object.
- It is therefore more reasonable to convert the color representation into the HSV (Hue, Saturation, Value/brightness) model.
- After processing, convert back to RGB and save to disk.

RGB/HSV converter
[Slide shows the RGB-to-HSV and HSV-to-RGB conversion formulas; a code sketch follows below.]
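The exact formulas on the original slide are not reproduced in the transcript; below is a minimal C++ sketch of the two standard conversions, assuming 8-bit RGB channels and H in [0, 360) with S and V in [0, 1]. The project's own converter may use different ranges (e.g. 0-255 for S and V, as suggested on the next slide).

    #include <algorithm>
    #include <cmath>

    struct HSV { double h, s, v; };   // assumed ranges: h in [0, 360), s and v in [0, 1]

    // RGB (8-bit channels) to HSV.
    HSV rgbToHsv(unsigned char r8, unsigned char g8, unsigned char b8) {
        double r = r8 / 255.0, g = g8 / 255.0, b = b8 / 255.0;
        double maxc = std::max({r, g, b}), minc = std::min({r, g, b});
        double delta = maxc - minc;
        HSV out{0.0, 0.0, maxc};
        if (delta > 0.0) {
            out.s = delta / maxc;
            if (maxc == r)      out.h = 60.0 * std::fmod((g - b) / delta, 6.0);
            else if (maxc == g) out.h = 60.0 * ((b - r) / delta + 2.0);
            else                out.h = 60.0 * ((r - g) / delta + 4.0);
            if (out.h < 0.0) out.h += 360.0;
        }
        return out;
    }

    // HSV back to RGB (8-bit channels).
    void hsvToRgb(const HSV& in, unsigned char& r8, unsigned char& g8, unsigned char& b8) {
        double c = in.v * in.s;
        double hp = in.h / 60.0;
        double x = c * (1.0 - std::fabs(std::fmod(hp, 2.0) - 1.0));
        double r = 0, g = 0, b = 0;
        if      (hp < 1) { r = c; g = x; }
        else if (hp < 2) { r = x; g = c; }
        else if (hp < 3) { g = c; b = x; }
        else if (hp < 4) { g = x; b = c; }
        else if (hp < 5) { r = x; b = c; }
        else             { r = c; b = x; }
        double m = in.v - c;
        r8 = static_cast<unsigned char>((r + m) * 255.0 + 0.5);
        g8 = static_cast<unsigned char>((g + m) * 255.0 + 0.5);
        b8 = static_cast<unsigned char>((b + m) * 255.0 + 0.5);
    }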

Edge detector
- Usually, a sharp change in hue, saturation or brightness indicates a boundary line.
[Slide shows two adjacent regions, HSV (0, 0, 0) and HSV (0, 255, 255), whose boundary the detector should find.]
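A hedged sketch of this idea: mark a pixel as an edge point when the brightness (V) difference to its right or lower neighbour exceeds a threshold. The actual detector may also compare hue and saturation; the threshold value and the neighbour choice here are assumptions.

    #include <cmath>
    #include <vector>

    // V channel in [0, 1], row-major; returns a per-pixel edge flag.
    std::vector<bool> detectEdges(const std::vector<double>& v,
                                  int width, int height,
                                  double threshold = 0.15) {     // assumed threshold
        std::vector<bool> edge(v.size(), false);
        for (int y = 0; y < height - 1; ++y) {
            for (int x = 0; x < width - 1; ++x) {
                int i = y * width + x;
                if (std::fabs(v[i] - v[i + 1]) > threshold ||        // change to the right
                    std::fabs(v[i] - v[i + width]) > threshold)      // change downwards
                    edge[i] = true;
            }
        }
        return edge;
    }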

Edge equation finder
- Derives mathematical facts from the edge points.
- Works with the voting algorithm of the Hough Transform.
- Automatically adjusts the tolerance value to minimize the effect of noise points.
- This helps when the edge is not completely straight or is blurred.

Edge equation finder
[Slide shows the Hough accumulator (vote frequency against angle in degrees), an edge point (x1, y1), and the desired linear equation in point-slope form.]
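A minimal sketch of the Hough voting step, written against the normal form x·cos(theta) + y·sin(theta) = rho rather than point-slope form. The 1-degree / 1-pixel accumulator resolution and the single-peak search are simplifying assumptions, and the automatic tolerance adjustment mentioned above is not shown.

    #include <cmath>
    #include <utility>
    #include <vector>

    struct Point { int x, y; };

    // Returns the strongest line as (theta in degrees, rho in pixels).
    // maxRho should be at least the image diagonal length.
    std::pair<int, int> houghStrongestLine(const std::vector<Point>& edgePoints, int maxRho) {
        const double kPi = 3.14159265358979323846;
        const int nTheta = 180;
        const int rhoBins = 2 * maxRho + 1;
        std::vector<int> acc(nTheta * rhoBins, 0);
        for (const Point& p : edgePoints) {                 // every edge point votes
            for (int t = 0; t < nTheta; ++t) {
                double theta = t * kPi / 180.0;
                int rho = static_cast<int>(std::lround(p.x * std::cos(theta) + p.y * std::sin(theta)));
                ++acc[t * rhoBins + (rho + maxRho)];
            }
        }
        int best = 0, bestTheta = 0, bestRho = 0;           // pick the accumulator peak
        for (int t = 0; t < nTheta; ++t)
            for (int r = -maxRho; r <= maxRho; ++r)
                if (acc[t * rhoBins + (r + maxRho)] > best) {
                    best = acc[t * rhoBins + (r + maxRho)];
                    bestTheta = t; bestRho = r;
                }
        return {bestTheta, bestRho};   // line: x*cos(theta) + y*sin(theta) = rho
    }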

Equation processor
- After the equations are found, constraints can be applied to remove redundant equations, pick up shadows, or detect occlusion.
- Finds the corner points needed by the translation detector and the texture mapper.
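One hedged sketch of the corner-finding task: intersect two detected edge lines, here represented in the general form a·x + b·y = c (the representation is an assumption, not taken from the project).

    #include <optional>

    struct Line   { double a, b, c; };     // a*x + b*y = c
    struct Corner { double x, y; };

    // Returns the intersection point of two edge lines, or nothing if they are parallel.
    std::optional<Corner> intersect(const Line& l1, const Line& l2) {
        double det = l1.a * l2.b - l2.a * l1.b;
        if (det == 0.0) return std::nullopt;               // parallel edges: no corner
        return Corner{ (l1.c * l2.b - l2.c * l1.b) / det,
                       (l1.a * l2.c - l2.a * l1.c) / det };
    }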

Translation detector
- A simple object motion tracker.
- Collects data from the first key frame to accelerate processing of the remaining video frames.
- Beneficial when the video segment is long and the scene seldom changes.
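The slides do not spell out the tracking mechanism; a plausible minimal sketch, assuming the corner points from the key frame and the current frame are already matched in the same order, is to average their displacement.

    #include <algorithm>
    #include <vector>

    struct Vec2 { double x, y; };

    // Average displacement of matched corners = estimated object translation.
    Vec2 estimateTranslation(const std::vector<Vec2>& keyCorners,
                             const std::vector<Vec2>& currentCorners) {
        Vec2 t{0.0, 0.0};
        size_t n = std::min(keyCorners.size(), currentCorners.size());
        for (size_t i = 0; i < n; ++i) {
            t.x += currentCorners[i].x - keyCorners[i].x;
            t.y += currentCorners[i].y - keyCorners[i].y;
        }
        if (n > 0) { t.x /= n; t.y /= n; }
        return t;
    }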

Texture Mapper
- A graphics process in which a 2-D surface, called a texture map, is "wrapped around" a 3-D object.
- The 3-D object acquires a surface texture similar to the texture map.

Texture Mapper
[Slide illustrates the mapping from the original position of a pixel to its new position.]

Texture Mapper
- Every polygon is assigned two sets of coordinates:
  - image coordinates (r, c): the location of a pixel in the image;
  - texture coordinates (u, v): the location in the texture image that contains the color information for those image coordinates.

Texture Mapper
- Mapping functions map texture coordinates to image coordinates, or vice versa.
- They are usually determined from image points whose texture coordinates are given explicitly.

Texture Mapper
[Slide shows a quadrilateral whose four corners carry both image coordinates (r1, c1) … (r4, c4) and the corresponding texture coordinates (u1, v1) … (u4, v4).]

Texture Mapper
- Scan conversion: the process of scanning all the pixels and performing the necessary calculations.
- Forward mapping maps from texture space to image space.
- Inverse mapping maps from image space to texture space.

Scan conversion with forward mapping
- Algorithm:
    for u = umin to umax
      for v = vmin to vmax
        r = R(u,v)
        c = C(u,v)
        copy pixel at source (u,v) to destination (r,c)
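A runnable C++ version of the pseudocode above, assuming the mapping functions R(u, v) and C(u, v) are supplied as callables and both images are row-major RGB byte buffers. The Image type and names are illustrative, not taken from the project.

    #include <functional>
    #include <vector>

    struct Image {
        int width = 0, height = 0;
        std::vector<unsigned char> rgb;                       // 3 bytes per pixel, row-major
        unsigned char* at(int r, int c) { return &rgb[3 * (r * width + c)]; }
    };

    void forwardMap(Image& texture, Image& dest,
                    const std::function<int(int,int)>& R,     // image row    r = R(u, v)
                    const std::function<int(int,int)>& C) {   // image column c = C(u, v)
        for (int u = 0; u < texture.height; ++u)
            for (int v = 0; v < texture.width; ++v) {
                int r = R(u, v), c = C(u, v);
                if (r < 0 || r >= dest.height || c < 0 || c >= dest.width) continue;
                unsigned char* s = texture.at(u, v);
                unsigned char* d = dest.at(r, c);
                d[0] = s[0]; d[1] = s[1]; d[2] = s[2];        // unmapped destination pixels stay as holes
            }
    }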

Scan conversion with forward mapping
- Advantage: easy to compute as long as the forward mapping function is known.
- Disadvantage: the pixel-to-pixel mapping is not one-to-one, so holes may appear; it can also result in aliasing.

Scan conversion with forward mapping

Scan conversion with inverse mapping
- Algorithm:
    for each polygon pixel (r,c)
      u = TEXR(r,c)
      v = TEXC(r,c)
      copy pixel at source (u,v) to destination (r,c)
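A C++ sketch of the same loop over the destination polygon's bounding box, reusing the Image type and headers from the forward-mapping sketch above. TEXR/TEXC and the insidePolygon test are assumed callables, not the project's actual interfaces.

    void inverseMap(Image& texture, Image& dest,
                    int rMin, int rMax, int cMin, int cMax,
                    const std::function<int(int,int)>& TEXR,          // u = TEXR(r, c)
                    const std::function<int(int,int)>& TEXC,          // v = TEXC(r, c)
                    const std::function<bool(int,int)>& insidePolygon) {
        for (int r = rMin; r <= rMax; ++r)
            for (int c = cMin; c <= cMax; ++c) {
                if (!insidePolygon(r, c)) continue;                   // only fill polygon pixels
                int u = TEXR(r, c), v = TEXC(r, c);
                if (u < 0 || u >= texture.height || v < 0 || v >= texture.width) continue;
                unsigned char* s = texture.at(u, v);
                unsigned char* d = dest.at(r, c);
                d[0] = s[0]; d[1] = s[1]; d[2] = s[2];                // every destination pixel gets a value: no holes
            }
    }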

Scan conversion with inverse mapping
- Advantage: every destination pixel is filled (no holes).
- Allows easy incorporation of pre-filtering and resampling operations to prevent aliasing.

Scan conversion with inverse mapping
- Takes advantage of the scanline polygon fill algorithm.
- For each row scan, maintain a list of scanline/polygon intersections.
- The intersections at scanline r+1 are computed efficiently from those at scanline r.
[Slide shows two consecutive scanlines y_k and y_k+1 with their polygon-edge intersection points (x_k, y_k) and (x_k+1, y_k+1).]

Scan conversion with inverse mapping
- The (u, v) coordinates at a non-boundary pixel are computed by linearly interpolating the (u, v) coordinates of the bounding pixels on the scanline.
[Same scanline illustration as on the previous slide.]

Scan conversion with inverse mapping
- Suppose (ri, ci) maps to (ui, vi), for i = 1, …, 5.
- (r4, c4) = s·(r1, c1) + (1 − s)·(r3, c3), where s is known from the image coordinates.
- Hence (u4, v4) = s·(u1, v1) + (1 − s)·(u3, v3) becomes known; similarly for (u5, v5).
- For an interior pixel (r, c) on the scanline, let t = (c − c4)/(c5 − c4); then (u, v) = t·(u5, v5) + (1 − t)·(u4, v4).
[Slide shows the polygon in the image with corners (r1, c1), (r2, c2), (r3, c3), boundary points (r4, c4) and (r5, c5) on scanline y_k, and the interior pixel (r, c).]
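A small sketch of that last interpolation step on one scanline: given the texture coordinates at the left and right boundary pixels, an interior pixel's (u, v) is interpolated from its column position. Names are illustrative.

    struct UV { double u, v; };

    // Linear interpolation of texture coordinates along a scanline.
    UV interpolateUV(int c, int cLeft, int cRight, const UV& uvLeft, const UV& uvRight) {
        double t = (cRight == cLeft) ? 0.0
                 : static_cast<double>(c - cLeft) / (cRight - cLeft);
        return { (1.0 - t) * uvLeft.u + t * uvRight.u,
                 (1.0 - t) * uvLeft.v + t * uvRight.v };
    }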

Basic 2D linear mapping
- Scaling and translation:
    u = a·r + d
    v = b·c + e
  (maps an upright rectangle to an upright square)
- Euclidean mapping:
    u = (cos θ)·r − (sin θ)·c + d
    v = (sin θ)·r + (cos θ)·c + e
  (maps a rotated unit square to an upright square)

Basic 2D linear mapping
- Similarity mapping:
    u = s(cos θ)·r − s(sin θ)·c + d
    v = s(sin θ)·r + s(cos θ)·c + e
  (maps a rotated square to an upright unit square)
- Affine mapping:
    u = f(cos θ)·r − g(sin θ)·c + d
    v = h(sin θ)·r + i(cos θ)·c + e
  (maps a rotated rectangle to an upright unit square)
- DEMO!
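All of the mappings above are special cases of one matrix-plus-translation form; a hedged sketch of applying it to an image point (r, c), with the composed coefficients written out explicitly (the struct and names are assumptions):

    struct Affine2D  { double a11, a12, a21, a22, du, dv; };   // composed coefficients
    struct TexCoord  { double u, v; };

    // u = a11*r + a12*c + du,  v = a21*r + a22*c + dv
    TexCoord applyAffine(const Affine2D& m, double r, double c) {
        return { m.a11 * r + m.a12 * c + m.du,
                 m.a21 * r + m.a22 * c + m.dv };
    }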

Basic 2D linear mapping
- Projective mapping: the most general 2D linear map; it maps a square to an arbitrary quadrangle!
    u = (a11·r + a12·c + a13) / (a31·r + a32·c + 1)
    v = (a21·r + a22·c + a23) / (a31·r + a32·c + 1)
- The eight variables a11, a12, …, a32 have to be determined.

Basic 2D linear mapping
- Each of the four corner correspondences gives two equations, so we have a system of 8 equations in the 8 unknowns, which can be solved directly.
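A hedged sketch of recovering the eight projective coefficients a11…a32 from four corner correspondences (r_i, c_i) → (u_i, v_i). Each corner contributes two linear equations; the resulting 8×8 system is solved here with plain Gaussian elimination and partial pivoting. The structure, names, and the absence of a singularity check are simplifying assumptions.

    #include <array>
    #include <cmath>
    #include <utility>

    using Coeffs = std::array<double, 8>;   // a11 a12 a13 a21 a22 a23 a31 a32

    Coeffs solveProjective(const std::array<std::array<double, 2>, 4>& rc,   // image corners (r, c)
                           const std::array<std::array<double, 2>, 4>& uv) { // texture corners (u, v)
        double A[8][9] = {};                // augmented matrix [A | b]
        for (int i = 0; i < 4; ++i) {
            double r = rc[i][0], c = rc[i][1], u = uv[i][0], v = uv[i][1];
            double row1[9] = { r, c, 1, 0, 0, 0, -u * r, -u * c, u };   // u-equation
            double row2[9] = { 0, 0, 0, r, c, 1, -v * r, -v * c, v };   // v-equation
            for (int j = 0; j < 9; ++j) { A[2 * i][j] = row1[j]; A[2 * i + 1][j] = row2[j]; }
        }
        for (int k = 0; k < 8; ++k) {                                   // forward elimination
            int pivot = k;
            for (int i = k + 1; i < 8; ++i)
                if (std::fabs(A[i][k]) > std::fabs(A[pivot][k])) pivot = i;
            for (int j = 0; j < 9; ++j) std::swap(A[k][j], A[pivot][j]);
            for (int i = k + 1; i < 8; ++i) {
                double f = A[i][k] / A[k][k];
                for (int j = k; j < 9; ++j) A[i][j] -= f * A[k][j];
            }
        }
        Coeffs x{};
        for (int k = 7; k >= 0; --k) {                                  // back substitution
            double s = A[k][8];
            for (int j = k + 1; j < 8; ++j) s -= A[k][j] * x[j];
            x[k] = s / A[k][k];
        }
        return x;
    }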

Future Work
- Mapping cans
- Speed optimization
- Movie manipulation
- Use of 3D markers

Q & A
See the footnotes.