1
Video Analysis Mei-Chen Yeh May 29, 2012
2
Outline Video representation Motion Actions in Video
3
Videos A natural video stream is continuous in both the spatial and temporal domains. A digital video stream samples pixels in both domains.
4
Video processing YCbCr
5
Video signal representation (1) Composite color signal – R, G, B – Y, Cb, Cr Why Y, Cb, Cr? – Backward compatibility (black-and-white to color TV) – The eye is less sensitive to changes in the Cb and Cr components Luminance (Y) Chrominance (Cb + Cr)
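To make the Y, Cb, Cr decomposition concrete, here is a minimal NumPy sketch of the standard ITU-R BT.601 full-range conversion; the function name and the 8-bit H x W x 3 input layout are assumptions made for this illustration.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an 8-bit RGB image (H x W x 3) to YCbCr using ITU-R BT.601
    full-range coefficients. Y carries luminance; Cb/Cr carry chrominance."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b          # luma
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0  # blue-difference chroma
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0  # red-difference chroma
    return np.clip(np.stack([y, cb, cr], axis=-1), 0, 255).astype(np.uint8)
```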
6
Video signal representation (2) Y is the luma component; Cb and Cr are the blue-difference and red-difference chroma components.
7
Sampling formats (1) 4:4:4, 4:2:2 (DVB), 4:1:1 (DV) Slide from Dr. Ding
8
Sampling formats (2) 4:2:0 (VCD, DVD)
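A rough sketch of what 4:2:0 subsampling does to the chroma planes, assuming a YCbCr image like the one produced above; the function name, the 2x2 averaging, and the even-dimension requirement are choices made for this illustration (real codecs may position and filter chroma samples differently).

```python
import numpy as np

def subsample_420(ycbcr):
    """Apply 4:2:0 chroma subsampling: keep full-resolution Y, and average
    Cb and Cr over each 2x2 block so both chroma planes are halved in
    width and height. Assumes even image dimensions."""
    y  = ycbcr[..., 0].astype(np.float64)
    cb = ycbcr[..., 1].astype(np.float64)
    cr = ycbcr[..., 2].astype(np.float64)
    h, w = cb.shape
    # Average each 2x2 neighbourhood of the chroma planes.
    cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb_sub, cr_sub   # Y: H x W, Cb/Cr: H/2 x W/2
```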
9
TV encoding system (1) PAL – Phase Alternating Line, is a color encoding system used in broadcast television systems in large parts of the world. SECAM – (French: Séquentiel Couleur Avec Mémoire), is an analog color television system first used in France. NTSC – National Television System Committee, is the analog television system used in most of North America, South America, Burma, South Korea, Taiwan, Japan, the Philippines, and some Pacific island nations and territories.
10
TV encoding system (2)
11
Uncompressed bitrate of videos Slide from Dr. Chang
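A small helper, shown only to make the arithmetic behind such bitrate tables concrete; the function name and parameters are invented for this sketch, and 8 bits per sample is assumed.

```python
def uncompressed_bitrate(width, height, fps, samples_per_pixel=1.5,
                         bits_per_sample=8):
    """Raw bitrate in bits/s.  samples_per_pixel is 3.0 for 4:4:4,
    2.0 for 4:2:2, and 1.5 for 4:2:0 or 4:1:1 (one Y sample per pixel
    plus the subsampled Cb and Cr samples)."""
    return width * height * samples_per_pixel * bits_per_sample * fps

# Example: PAL-resolution video, 720x576 at 25 fps with 4:2:0 sampling
print(uncompressed_bitrate(720, 576, 25) / 1e6)   # ~124.4 Mbit/s
```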
12
Outline Video representation Motion Actions in Video
13
Motion and perceptual organization Sometimes, motion is the foremost cue
14
Motion and perceptual organization Even poor motion data can evoke a strong percept
15
Motion and perceptual organization Even poor motion data can evoke a strong percept
16
Uses of motion Estimating 3D structure Segmenting objects based on motion cues Learning dynamical models Recognizing events and activities Improving video quality (motion stabilization) Compressing videos ……
17
Motion field The motion field is the projection of the 3D scene motion into the image
18
Motion field P(t) is a moving 3D point Velocity of scene point: V = dP/dt p(t) = (x(t), y(t)) is the projection of P in the image Apparent velocity v in the image: v_x = dx/dt, v_y = dy/dt These components are known as the motion field of the image (figure: P(t) moves to P(t+dt) with velocity V; its projection moves from p(t) to p(t+dt) with image velocity v)
19
Motion estimation techniques Based on temporal changes in image intensities Direct methods – Directly recover image motion at each pixel from spatio-temporal image brightness variations – Dense motion fields, but sensitive to appearance variations – Suitable when image motion is small Feature-based methods – Extract visual features (corners, textured areas) and track them over multiple frames – Sparse motion fields, but more robust tracking – Suitable when image motion is large
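To make the dense vs. sparse distinction concrete, here is a hedged OpenCV sketch: Farneback flow as a direct/dense method, and corner detection plus pyramidal Lucas-Kanade tracking as a feature-based/sparse method. The frame file names are hypothetical and the parameter values are common defaults, not tuned settings.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames (hypothetical file names).
prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Direct / dense method: Farneback flow gives an (H x W x 2) field of
# per-pixel displacements computed from brightness variations.
dense_flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)

# Feature-based / sparse method: detect corners, then track them with
# pyramidal Lucas-Kanade, which tolerates larger image motion.
corners = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, corners, None)
motion_vectors = (tracked - corners)[status.ravel() == 1]
```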
20
Optical flow The observed 2-D velocity (motion) vectors in the image Can be caused by – object motions – camera movements – illumination condition changes
21
Optical flow ≠ the true motion field – Motion field exists but no optical flow (e.g., a uniformly colored sphere rotating under constant illumination) – No motion field but optical flow (e.g., a static scene under moving illumination, which changes the shading)
22
Problem definition: optical flow How to estimate pixel motion from image I(x, y, t) to image I(x, y, t+dt)? Solve the pixel correspondence problem – given a pixel in I(t), look for nearby pixels of the same color in I(t+dt) Key assumptions – color constancy: a point in I(t) looks the same in I(t+dt); for grayscale images, this is brightness constancy – small motion: points do not move very far This is called the optical flow problem.
23
Optical flow constraints (grayscale images) Let's look at these constraints more closely: – brightness constancy: I(x+u, y+v, t+dt) − I(x, y, t) = 0 – small motion (u and v are small): using Taylor's expansion, I(x+u, y+v, t+dt) ≈ I(x, y, t) + I_x·u + I_y·v + I_t·dt
24
Optical flow equation Combining these two equations: I_x·u + I_y·v + I_t·dt = 0 ((u, v): displacement vector) Dividing both sides by dt: I_x·v_x + I_y·v_y + I_t = 0, i.e., ∇I · v + I_t = 0, where v = (v_x, v_y) is the velocity vector and ∇I = (I_x, I_y) is the spatial gradient vector Known as the optical flow equation
25
Q: how many unknowns and equations per pixel? – 2 unknowns, one equation What does this constraint mean? – The component of the flow perpendicular to the gradient (i.e., parallel to the edge) is unknown – If (v_x, v_y) satisfies the equation, so does (v_x + u', v_y + v') if ∇I · (u', v') = 0, i.e., if (u', v') is parallel to the edge
26
Q: how many unknowns and equations per pixel? – 2 unknowns, one equation What does this constraint mean? – The component of the flow perpendicular to the gradient (i.e., parallel to the edge) is unknown – This explains the barber pole illusion
27
The aperture problem Perceived motion
28
The aperture problem Actual motion
29
The barber pole illusion http://en.wikipedia.org/wiki/Barberpole_illusion
30
The barber pole illusion http://en.wikipedia.org/wiki/Barberpole_illusion
31
To solve the aperture problem… We need more equations for a pixel. Example – Spatial coherence constraint: assume the pixel's neighbors have the same (v_x, v_y) – Lucas & Kanade (1981)
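A minimal sketch of the Lucas & Kanade idea at a single pixel, assuming grayscale frames: the optical flow equation from each pixel in a small window is stacked into a least-squares system under the shared-(v_x, v_y) assumption. The function name, window size, and simple finite-difference gradients are simplifications for illustration, not the original implementation.

```python
import numpy as np

def lucas_kanade_at(I1, I2, x, y, half_win=7):
    """Estimate the flow (vx, vy) at pixel (x, y) by assuming every pixel
    in a (2*half_win+1)^2 window satisfies Ix*vx + Iy*vy + It = 0 with the
    same (vx, vy), then solving the stacked equations by least squares."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    # Spatial and temporal derivatives via simple finite differences.
    Iy_full, Ix_full = np.gradient(I1)
    It_full = I2 - I1
    win = (slice(y - half_win, y + half_win + 1),
           slice(x - half_win, x + half_win + 1))
    A = np.stack([Ix_full[win].ravel(), Iy_full[win].ravel()], axis=1)  # N x 2
    b = -It_full[win].ravel()                                           # N
    v, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution of A v = b
    return v   # (vx, vy)
```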
32
Outline Video representation Motion Actions in Video – Background subtraction – Recognition of actions based on motion patterns
33
Using optical flow: recognizing facial expressions Recognizing Human Facial Expression (1994) by Yaser Yacoob, Larry S. Davis
34
Example use of optical flow: visual effects in films http://www.fxguide.com/article333.html
35
Slide credit: Birgi Tamersoy
36
Background subtraction Simple techniques can do OK with a static camera… but it is hard to do perfectly Widely used in: – Traffic monitoring (counting vehicles, detecting & tracking vehicles, pedestrians) – Human action recognition (run, walk, jump, squat) – Human-computer interaction – Object tracking
37
Slide credit: Birgi Tamersoy
41
Frame differences vs. background subtraction Toyama et al. 1999
42
Slide credit: Birgi Tamersoy
43
Pros and cons Advantages: – Extremely easy to implement and use – Fast – Background models need not be constant; they can change over time Disadvantages: – Accuracy of frame differencing depends on object speed and frame rate – Median background model: relatively high memory requirements – Setting the global threshold Th is difficult Slide credit: Birgi Tamersoy
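A minimal sketch of the median background model and global threshold Th discussed above, assuming grayscale frames; the function name and default threshold are arbitrary choices for this example.

```python
import numpy as np

def median_background_subtraction(frames, th=25):
    """frames: sequence of grayscale frames (T x H x W).  Build the background
    as the per-pixel median over time (the high-memory part noted above), then
    mark pixels whose absolute difference from the background exceeds the
    global threshold Th as foreground."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    background = np.median(stack, axis=0)                 # H x W background model
    masks = [np.abs(f - background) > th for f in stack]  # per-frame boolean masks
    return background, masks
```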
44
Background subtraction with depth How can we select foreground pixels based on depth information? Leap: http://www.leapmotion.com/
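One plausible answer, sketched under the assumption that a depth map of the empty scene is available: mark as foreground any valid pixel that is noticeably closer to the camera than the recorded background depth. The function name, margin value, and zero-means-invalid convention are assumptions for this illustration.

```python
import numpy as np

def foreground_from_depth(depth, background_depth, margin=50):
    """depth, background_depth: depth maps in the same units (e.g. mm).
    A pixel is foreground if it is at least `margin` closer to the camera
    than the recorded empty-scene depth; zeros are treated as invalid."""
    valid = (depth > 0) & (background_depth > 0)
    return valid & (depth < background_depth - margin)
```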
45
Outline Video representation Motion Actions in video – Background subtraction – Recognition of actions based on motion patterns
46
Motion analysis in video “Actions”: atomic motion patterns – often gesture-like, single clear-cut trajectory, single nameable behavior (e.g., sit, wave arms) “Activity”: series or composition of actions (e.g., interactions between people) “Event”: combination of activities or actions (e.g., a football game, a traffic accident) Modified from Venu Govindaraju
47
Surveillance http://users.isr.ist.utl.pt/~etienne/mypubs/Auvinetal06PETS.pdf
48
Interfaces https://flutterapp.com/
49
Human activity in video: basic approaches Model-based action/activity recognition: – Use human body tracking and pose estimation techniques, relate to action descriptions – Major challenge: accurate tracks in spite of occlusion, ambiguity, low resolution Activity as motion, space-time appearance patterns: – Describe overall patterns, but no explicit body tracking – Typically learn a classifier – We'll look at a specific instance…
50
Recognize actions at a distance [ICCV 2003] – Low resolution, noisy data, not going to be able to track each limb. – Moving camera, occlusions – Wide range of actions (including non-periodic) [Efros, Berg, Mori, & Malik 2003] http://graphics.cs.cmu.edu/people/efros/research/action/ The 30-Pixel Man
51
Approach Motion-based approach – Non-parametric; use a large amount of data – Classify a novel motion by finding the most similar motion from the training set More specifically, – a motion description based on optical flow – an associated similarity measure used in a nearest neighbor framework [Efros, Berg, Mori, & Malik 2003] http://graphics.cs.cmu.edu/people/efros/research/action/
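A rough sketch of the kind of descriptor the paper describes (blurred, half-wave rectified optical flow channels compared with normalized correlation), assuming a dense flow field for the stabilized, figure-centric window is already available. This is an illustration of the idea, not the authors' code; the function names are invented and SciPy's gaussian_filter is used for the blurring.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def motion_descriptor(flow, sigma=1.5):
    """flow: H x W x 2 optical flow of a stabilized, figure-centric window.
    Split each flow component into half-wave rectified positive/negative
    channels and blur them, giving a 4-channel motion descriptor that is
    more robust to noisy flow."""
    fx, fy = flow[..., 0], flow[..., 1]
    channels = [np.maximum(fx, 0), np.maximum(-fx, 0),
                np.maximum(fy, 0), np.maximum(-fy, 0)]
    return np.stack([gaussian_filter(c, sigma) for c in channels], axis=-1)

def frame_similarity(d1, d2):
    """Normalized correlation between two frame descriptors; nearest-neighbor
    classification compares a query frame against all labeled training frames
    with this score and copies the label of the best match."""
    a, b = d1.ravel(), d2.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
```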
52
Motion description matching results More matching results: video 1, video 2
53
Action classification result demo video
54
Gathering action data Tracking – Simple correlation-based tracker – User-initialized
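A minimal sketch of a user-initialized, correlation-based tracker using OpenCV template matching, in the spirit of the simple tracker mentioned above; the function name, the (x, y, w, h) box format, and the per-frame template refresh are choices made for this illustration.

```python
import cv2

def track_template(frames, init_box):
    """frames: list of grayscale frames; init_box: user-supplied (x, y, w, h)
    around the person in the first frame.  Track by normalized cross-correlation:
    match the current template in each new frame, then update the box."""
    x, y, w, h = init_box
    template = frames[0][y:y + h, x:x + w]
    boxes = [init_box]
    for frame in frames[1:]:
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(scores)   # top-left corner of best match
        boxes.append((x, y, w, h))
        template = frame[y:y + h, x:x + w]        # refresh the template
    return boxes
```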
55
Figure-centric representation Stabilized spatio-temporal volume – No translation information – All motion caused by person’s limbs, indifferent to camera motion!
56
Using optical flow: action recognition at a distance Extract optical flow to describe the region's motion. [Efros, Berg, Mori, & Malik 2003] http://graphics.cs.cmu.edu/people/efros/research/action/
57
Using optical flow: action recognition at a distance Use a nearest neighbor classifier to name the actions occurring in new video frames. (figure: input sequence alongside matched training frames) [Efros, Berg, Mori, & Malik 2003] http://graphics.cs.cmu.edu/people/efros/research/action/
58
Football Actions: classification – 0.67, 0.58, 0.68, 0.79, 0.59, 0.68, 0.58, 0.66 (8 actions, 4500 frames, taken from 72 tracked sequences)
59
Application: motion retargeting [Efros, Berg, Mori, & Malik 2003] http://graphics.cs.cmu.edu/people/efros/research/action/ SHOW VIDEO
60
Summary Background subtraction: – Essential low-level processing tool to segment moving objects from a static camera's video Action recognition: – Increasing attention to actions as motion and appearance patterns – For constrained environments, relatively simple techniques allow effective gesture or action recognition
61
Closing remarks Thank you all for your attention and participation in the class! Please be well prepared for the final project (06/12 and 06/19). Come to class on time. Start early!