1
2D Human Pose Estimation in TV Shows
Vittorio Ferrari, Manuel Marin, Andrew Zisserman
Dagstuhl Seminar, July 2008
2
Goal: detect people and estimate their 2D pose in images and video
- Pose: the spatial configuration of body parts
- Estimation: localize the body parts in the image
Desired scope: TV shows and feature films; focus on the upper body
Body parts: fixed aspect-ratio rectangles for head, torso, upper and lower arms
Related work: Ramanan and Forsyth, Mori et al., Sigal and Black, Felzenszwalb and Huttenlocher, Bergtholdt et al.
3
The search space of pictorial structures is large
Body parts: fixed aspect-ratio rectangles for head, torso, upper and lower arms = 4 parameters each
Search space:
- 4P dims (a scale factor per part) or 3P+1 dims (a single scale factor)
- P = 6 for the upper body
- a 720x405 image gives ~10^45 configurations!
Kinematic constraints:
- reduce the space to valid body configurations (~10^28)
- gain efficiency by modelling independencies (~10^12)
Related work: Ramanan and Forsyth, Mori et al., Sigal and Black, Felzenszwalb and Huttenlocher, Bergtholdt et al.
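As a sanity check on the figures above, a back-of-envelope computation of the search-space size. Only the image size (720x405) and P = 6 come from the slide; the 24 orientation bins and 5 scales per part are illustrative assumptions.

```python
# Back-of-envelope check of the ~10^45 configurations quoted on the slide.
# Assumptions (not from the slide): 24 orientation bins and 5 scale bins per part.
positions = 720 * 405            # candidate (x, y) locations per part
orientations = 24                # assumed orientation bins
scales = 5                       # assumed scale bins (one scale factor per part)
states_per_part = positions * orientations * scales

P = 6                            # head, torso, two upper arms, two lower arms
configs = states_per_part ** P   # all parts enumerated independently

print(f"{configs:.1e}")          # ~1.8e+45, i.e. on the order of 10^45
```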
4
The challenge of unconstrained imagery
- varying illumination and low contrast
- moving camera and background
- multiple people
- scale changes
- extensive clutter
- any clothing
5
The challenge of unconstrained imagery
Extremely difficult when nothing is known about appearance, pose, or location
6
Approach overview
- Human detection and foreground highlighting: reduce the search space (new additions)
- Image parsing: estimate the pose (Ramanan NIPS 2006)
  (these stages run on every frame independently)
- Learn appearance models over multiple frames
- Spatio-temporal parsing
  (these stages operate on multiple frames jointly)
7
Single frame
8
Search space reduction by human detection
Idea: get an approximate location and scale with a detector generic over pose and appearance
Building an upper-body detector:
- based on Dalal and Triggs CVPR 2005
- training set: 96 frames x 12 perturbations (no Buffy)
- fast; also detects back views
Benefits for pose estimation:
+ fixes the scale of body parts
+ sets bounds on the x, y locations
- little information about pose (arms)
Figure: training and test examples; the detected window is enlarged before parsing
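A minimal sketch of a Dalal-Triggs-style window classifier (HOG features + a linear SVM), in the spirit of the upper-body detector above. This is not the authors' implementation; the window size, HOG layout, and training data are placeholders.

```python
# Sliding-window HOG + linear SVM sketch (Dalal and Triggs CVPR 2005 style).
# Window size, cell layout and C are illustrative; run detect() per scale of
# an image pyramid and follow with non-maximum suppression.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = (96, 96)  # assumed upper-body window size (rows, cols)

def descriptor(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def train(pos_patches, neg_patches):
    X = np.array([descriptor(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    return LinearSVC(C=0.01).fit(X, y)

def detect(gray, clf, stride=8, threshold=0.0):
    rows, cols = gray.shape
    hits = []
    for y in range(0, rows - WIN[0], stride):
        for x in range(0, cols - WIN[1], stride):
            patch = gray[y:y + WIN[0], x:x + WIN[1]]
            score = clf.decision_function([descriptor(patch)])[0]
            if score > threshold:
                hits.append((x, y, score))
    return hits
```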
9
Search space reduction by human detection
Upper-body detection and temporal association code available:
www.robots.ox.ac.uk/~vgg/software/UpperBody/index.html
10
Search space reduction by foreground highlighting
Idea: exploit knowledge about the structure of the search area to initialize GrabCut (Rother et al. 2004)
Initialization:
- learn fg/bg models from regions where the person is likely present/absent
- clamp the central strip to foreground
- do not clamp any background (the arms can be anywhere)
Benefits for pose estimation:
+ further reduces clutter
+ conservative (no loss in 95.5% of cases)
+ needs no knowledge of the background
+ allows for a moving background
Figure: initialization and output
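A minimal sketch of the GrabCut initialization described above, using OpenCV's grabCut in mask mode. The strip geometry (fractions of the enlarged detection window) is an illustrative assumption, not the authors' exact layout.

```python
# GrabCut initialized from an enlarged upper-body detection (mask mode).
# Only the central strip is clamped to foreground; background is never clamped,
# since the arms can be anywhere inside the search area.
import cv2
import numpy as np

def highlight_foreground(img_bgr, box, iters=5):
    """box = (x, y, w, h): enlarged detection window (assumed geometry)."""
    x, y, w, h = box
    mask = np.full(img_bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)   # probably background
    mask[y:y + h, x:x + w] = cv2.GC_PR_FGD                        # person likely present
    cx = x + w // 2
    mask[y:y + h, cx - w // 8:cx + w // 8] = cv2.GC_FGD           # clamp central strip

    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```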
12
Pose estimation by image parsing (Ramanan NIPS 2006)
Goal: estimate the posterior of the part configuration, P(L|I) ∝ ∏_i Φ(l_i) · ∏_(i,j) Ψ(l_i, l_j), where Φ = image evidence (given edge/appearance models) and Ψ = spatial prior (kinematic constraints)
Algorithm:
1. inference with Φ = edges
2. learn appearance models of body parts and background
3. inference with Φ = edges + appearance
Advantages of the space reduction:
+ much more robust
+ much faster (10x-100x)
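A hedged sketch of step 2 of the algorithm: learning colour models of the body parts and of the background from the soft edge-based parse. The 8x8x8 RGB histogram and the posterior-weighted accumulation are simplifying assumptions, not the exact models of Ramanan NIPS 2006.

```python
# Posterior-weighted colour histograms as part/background appearance models.
import numpy as np

N_BINS = 8  # bins per RGB channel

def colour_histogram(img_rgb, weights):
    """img_rgb: uint8 RGB image; weights: per-pixel responsibility in [0, 1]."""
    idx = (img_rgb // (256 // N_BINS)).reshape(-1, 3)
    hist = np.zeros((N_BINS, N_BINS, N_BINS))
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), weights.ravel())
    return hist / max(hist.sum(), 1e-12)

def learn_appearance_models(img_rgb, part_posteriors):
    """part_posteriors: dict part_name -> posterior map P(l_i | I, edges)."""
    models = {name: colour_histogram(img_rgb, post)
              for name, post in part_posteriors.items()}
    # background = whatever is unlikely to belong to any part
    fg = np.clip(sum(part_posteriors.values()), 0.0, 1.0)
    models["background"] = colour_histogram(img_rgb, 1.0 - fg)
    return models
```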
13
Figure: appearance models; edge-only parse; edge + appearance parse
14
Figure: the same parse without foreground highlighting, for comparison
15
Failure of direct pose estimation
Figure: Ramanan NIPS 2006, unaided
16
Multiple frames
17
Transferring appearance models across frames
Idea: refine the parsing of difficult frames based on appearance models from confident ones (exploit the continuity of appearance)
Algorithm:
1. select frames with low entropy of P(L|I)
2. integrate their appearance models by minimizing KL-divergence
3. re-parse every frame using the integrated appearance models
Advantages:
+ improves the parse in difficult frames
+ better than Ramanan CVPR 2005: richer and more robust models; needs no specific pose at any point in the video
Figure: lowest-entropy frames vs. a higher-entropy frame
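A hedged sketch of the transfer step. Frame selection by posterior entropy follows the slide; the final integration here is a plain average of per-part colour histograms over the confident frames, a simplification of the KL-divergence-based integration described above.

```python
# Select confident frames by posterior entropy, then merge their part models.
import numpy as np

def posterior_entropy(posterior_map):
    """Entropy of a normalized posterior map P(l_i | I) for one part."""
    p = posterior_map.ravel()
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def select_confident_frames(per_frame_posteriors, k=5):
    """per_frame_posteriors: list (per frame) of lists of posterior maps (per part)."""
    scores = [np.mean([posterior_entropy(m) for m in maps])
              for maps in per_frame_posteriors]
    return np.argsort(scores)[:k]          # k lowest-entropy (most confident) frames

def integrate_appearance(per_frame_histograms, confident_ids):
    """per_frame_histograms: array (n_frames, n_parts, n_bins) of colour histograms."""
    merged = per_frame_histograms[confident_ids].mean(axis=0)
    return merged / merged.sum(axis=1, keepdims=True)  # renormalize per part
```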
21
The repulsive model
Idea: extend the kinematic tree with edges preferring non-overlapping left/right arms
Model:
- add repulsive edges, with Ψ = kinematic constraints (spatial prior) and Ψ^R = repulsive prior
- inference with Loopy Belief Propagation (the extended graph is no longer a tree)
Advantage:
+ less double-counting (both arms explaining the same image region)
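A hedged LaTeX reconstruction of the repulsive model's posterior; the edge sets and symbol names are assumptions consistent with the slide, not necessarily the authors' notation.

```latex
% L = (l_1,\dots,l_P): part configuration, E: kinematic (tree) edges,
% R: repulsive edges between left/right arm parts.
P(L \mid I) \;\propto\;
    \prod_{i=1}^{P} \Phi(l_i \mid I)              % image evidence (edges + appearance)
    \prod_{(i,j) \in E} \Psi(l_i, l_j)            % spatial prior (kinematic constraints)
    \prod_{(i,j) \in R} \Psi^{R}(l_i, l_j)        % repulsive prior (prefers non-overlapping arms)
```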
23
Spatio-temporal parsing
Pipeline recap: Image parsing → Learn appearance models over multiple frames → Spatio-temporal parsing
Transferring appearance models exploits the continuity of appearance; after the repulsive model, we now exploit the continuity of position over time.
24
Spatio-temporal parsing
Idea: extend the model to a spatio-temporal block of frames
Model:
- add temporal edges (linking each part to itself in the next frame)
- inference with Loopy Belief Propagation
Terms:
Φ = image evidence (given edge + appearance models)
Ψ = spatial prior (kinematic constraints)
Ψ^T = temporal prior (continuity constraints)
Ψ^R = repulsive prior (anti double-counting)
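A hedged LaTeX sketch of the spatio-temporal posterior over a block of N frames; the indexing is an assumption consistent with the terms listed above.

```latex
% l_i^t: part i in frame t; E: kinematic edges; R: repulsive edges;
% the final product links each part to itself in the next frame.
P(L \mid I_{1:N}) \;\propto\;
    \prod_{t=1}^{N} \Bigl[ \prod_{i} \Phi(l_i^t \mid I_t)
        \prod_{(i,j) \in E} \Psi(l_i^t, l_j^t)
        \prod_{(i,j) \in R} \Psi^{R}(l_i^t, l_j^t) \Bigr]
    \prod_{t=1}^{N-1} \prod_{i} \Psi^{T}(l_i^t, l_i^{t+1})   % temporal prior (continuity)
```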
26
Spatio-temporal parsing
Integrated spatio-temporal parsing reduces uncertainty
Pipeline recap: Image parsing → Learn appearance models over multiple frames → Spatio-temporal parsing
Figure: posterior with vs. without spatio-temporal parsing
27
Full-body pose estimation in easier conditions
Weizmann action dataset (Blank et al. ICCV 05)
28
Evaluation dataset
- >70,000 frames over 4 episodes of Buffy the Vampire Slayer (>1,000 shots)
- uncontrolled and extremely challenging: low contrast, scale changes, moving camera and background, extensive clutter, any clothing, any pose
- figure overlays = final output, after spatio-temporal inference
29
Example estimated poses
36
Quantitative evaluation
Ground truth: 69 shots x 4 frames = 276 frames x 6 parts = 1656 body parts (sticks)
The upper-body detector fires on 243 of these frames (88%, 1458 body parts)
Data available: http://www.robots.ox.ac.uk/~vgg/data/stickmen/index.html
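The slide does not state the matching criterion. A common stick-based criterion for this kind of ground truth (an assumption here, not necessarily the authors' exact protocol) counts a part as correct when both endpoints of the estimated stick lie within a fraction of the ground-truth stick length; a minimal sketch:

```python
# Hypothetical stick-matching check; alpha = 0.5 is an assumed tolerance.
import numpy as np

def stick_correct(est, gt, alpha=0.5):
    """est, gt: (2, 2) arrays of stick endpoints [[x1, y1], [x2, y2]]."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    tol = alpha * np.linalg.norm(gt[0] - gt[1])

    def match(a, b):  # allow either endpoint ordering
        return (np.linalg.norm(est[a] - gt[0]) <= tol and
                np.linalg.norm(est[b] - gt[1]) <= tol)

    return match(0, 1) or match(1, 0)

def accuracy(estimates, ground_truths, alpha=0.5):
    """Fraction of correctly estimated sticks over all evaluated body parts."""
    return float(np.mean([stick_correct(e, g, alpha)
                          for e, g in zip(estimates, ground_truths)]))
```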
37
Quantitative evaluation: pose estimation accuracy
- 9.6% Ramanan NIPS 2006, unaided
- 41.2% with detection
- 57.9% with foreground highlighting (best single-frame result)
- 59.4% with appearance transfer
- 62.6% with repulsive model (with spatio-temporal parsing: see conclusions)
Conclusions:
+ both search space reduction techniques improve the results
+ small improvement from appearance transfer; the method also works well on static images
+ the repulsive model brings a further improvement
= spatio-temporal parsing has mostly an aesthetic impact
38
Application: foreground/background segmentation
Idea: use the localized body parts to initialize a spatio-temporal extension of GrabCut
Initialization:
- partition the shot into blocks
- learn fg/bg models from eroded/dilated body-part segmentations
- clamp the eroded region to foreground
Spatio-temporal GrabCut:
- add edges over time (same edge model)
- share fg/bg models over the whole block
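A hedged sketch of the mask construction for this application: rasterize the localized part rectangles, erode/dilate them, and clamp only the eroded region to foreground. The kernel sizes are illustrative; the temporal edges and block-wise model sharing are not sketched here.

```python
# Build a GrabCut initialization mask from localized body-part boxes.
# Erosion/dilation radii are assumptions; the eroded parts are clamped to fg,
# the dilated band stays uncertain, and the rest is treated as probable bg.
import cv2
import numpy as np

def init_mask_from_parts(image_shape, part_rects, erode_px=7, dilate_px=25):
    """part_rects: list of (x, y, w, h) boxes for the localized body parts."""
    parts = np.zeros(image_shape[:2], np.uint8)
    for x, y, w, h in part_rects:
        parts[y:y + h, x:x + w] = 1

    eroded = cv2.erode(parts, np.ones((erode_px, erode_px), np.uint8))
    dilated = cv2.dilate(parts, np.ones((dilate_px, dilate_px), np.uint8))

    mask = np.full(image_shape[:2], cv2.GC_PR_BGD, np.uint8)
    mask[dilated > 0] = cv2.GC_PR_FGD
    mask[eroded > 0] = cv2.GC_FGD
    return mask
```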
39
Application: foreground/background segmentation
Applications:
- remove the actor before location recognition
- get a silhouette for action recognition
Benefit:
+ spatially and temporally continuous figure/ground segmentation
40
Video !