
1 Human Detection under Partial Occlusions using Markov Logic Networks
Raghuraman Gopalan and William Schwartz
Center for Automation Research, University of Maryland, College Park

2 Human Detection

3 Existing approaches
- Holistic window-based: Dalal and Triggs, CVPR (2005); Tuzel et al., CVPR (2007)
- Part-based: Wu and Nevatia, ICCV (2005); Mikolajczyk et al., ECCV (2004)
- Scene-related cues: Torralba et al., IJCV (2006)

4 The occlusion challenge
(Figure: example windows, body parts occluded by objects and a person occluded by another person, with the probability of the presence of a human* overlaid: 0.0059, 0.0816, 0.1452, 0.1272.)
* Probabilities obtained from Schwartz et al., ICCV (2009)

5 Related work
- Bilattice-based logical reasoning: Shet et al., CVPR (2007)
- Integrating probabilities of human parts using first-order logic (FOL): Schwartz et al., ICB (2009)

6 Our approach: Motivation
A data-driven, part-based method:
1. Probabilistic logical inference using Markov logic networks (MLN) [Domingos et al., Machine Learning (2006)]
2. Representing 'semantic context' between the detection probabilities of parts:
   - within-window and between-windows
   - with and without occlusions

7 Our approach: An overview
Pipeline: multiple detection windows → part detectors' outputs and face detector outputs → instantiation of the MLN (using learned contextual rules) → inference → final result.
Queries: person(d1)? occluded(d1)? occludedby(d1,d2)?

8 Main questions
How to integrate the detectors' outputs to detect people under occlusion?
- Enforce consistency according to the spatial locations of the detectors → removal of false alarms.
- Exploit relations between persons to resolve inconsistencies → explain occlusions.
- Both using MLN, which combines FOL and graphical models in a single representation → avoids contradictions.

9 Our approach: An overview
Pipeline: multiple detection windows → part detectors' outputs and face detector outputs → instantiation of the MLN (using learned contextual rules) → inference → final result.
Queries: person(d1)? occluded(d1)? occludedby(d1,d2)?

10 Part-based detectors
To handle human detection under occlusion, the original detector is split into parts, and the MLN is used to integrate their outputs.
Parts: original, top, torso, legs, top-torso, torso-legs, top-legs.

11 Detector: An overview
Exploit more representative features (edges, textures, and color) to provide a richer set of descriptors and improve detection results.
Consequences of the feature augmentation:
- an extremely high-dimensional feature space (>170,000 features)
- the number of samples in the training dataset is smaller than the dimensionality
These characteristics prevent the use of classical machine learning techniques such as SVM, but make an ideal setting for Partial Least Squares (PLS)*.
* H. Wold, Partial Least Squares, Encyclopedia of Statistical Sciences, 6:581-591 (1985)

12 Detector: Partial Least Squares (PLS)
PLS is a wide class of methods for modeling relations between sets of observations by means of latent variables. Although originally proposed as a regression technique, PLS can also be used as a class-aware dimensionality reduction tool.
By setting the dependent variable to a set of discrete values (class ids), we use PLS for dimensionality reduction followed by classification in the low-dimensional space: the extracted feature vector is projected onto a set of latent vectors (estimated using PLS), then a classifier is applied in the resulting low-dimensional subspace.

13 Detection using PLS
PLS models relations between the predictor variables in matrix X (n x p) and the response variable in vector y (n x 1), where n denotes the number of samples and p the number of features:
X = T P^T + E
y = U q^T + f
where T and U are (n x h) matrices of the h extracted latent vectors, P (p x h) and q (1 x h) are the loading matrices, and E (n x p) and f (n x 1) are the residuals of X and y, respectively.
The NIPALS (nonlinear iterative partial least squares) method finds the set of weight vectors W (p x h) = {w_1, w_2, ..., w_h} such that each latent vector t_i = X w_i maximizes the sample covariance with the response: [cov(t_i, u_i)]^2 = max_{||w||=1} [cov(Xw, y)]^2.
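As a concrete illustration of the decomposition above, here is a minimal NIPALS sketch for a single response variable (PLS1) in plain NumPy; the dimensions and data are made up for the example.

```python
import numpy as np

def nipals_pls1(X, y, n_components):
    """PLS1 via NIPALS: returns weights W, scores T, loadings P, q."""
    X = X.astype(float).copy()
    y = y.astype(float).copy()
    n, p = X.shape
    W = np.zeros((p, n_components))
    T = np.zeros((n, n_components))
    P = np.zeros((p, n_components))
    q = np.zeros(n_components)
    for i in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)      # weight vector w_i, ||w_i|| = 1
        t = X @ w                   # latent score t_i = X w_i
        tt = t @ t
        p_i = X.T @ t / tt          # X-loading
        q_i = (y @ t) / tt          # y-loading
        X -= np.outer(t, p_i)       # deflate X: remove explained part
        y -= t * q_i                # deflate y
        W[:, i], T[:, i], P[:, i], q[i] = w, t, p_i, q_i
    return W, T, P, q

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 50))
y = X[:, 0] + 0.1 * rng.normal(size=30)  # response driven by feature 0
W, T, P, q = nipals_pls1(X, y, n_components=3)
print(T.shape)  # (30, 3)
```

The columns of T are the low-dimensional representation used for classification; W plays the role of the projection weights in the slide's notation.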

14 Our approach: An overview
Pipeline: multiple detection windows → part detectors' outputs and face detector outputs → instantiation of the MLN (using learned contextual rules) → inference → final result.
Queries: person(d1)? occluded(d1)? occludedby(d1,d2)?

15 Context: Consistency between the detector outputs
Each detector acts on a specific region of the body. One can look at the outputs of detectors acting on the same spatial location to check for consistency: similar responses are expected.
Example: given that the top-torso detector outputs a high probability, the top and torso detectors need to output high probabilities as well, since they intersect the region covered by top-torso.
First-order logic rules:
topTorso(d1) ^ top(d1) ^ torso(d1) → person(d1) (consistent)
topTorso(d1) ^ (¬top(d1) v ¬torso(d1)) → ¬person(d1) (false alarm)
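The two rules above can be read as a simple decision procedure. The sketch below hard-codes them in Python with an illustrative 0.5 confidence threshold; in the actual system they are soft, weighted MLN formulas, not hard checks.

```python
THRESH = 0.5  # assumed detector-confidence threshold (illustrative)

def consistent(probs):
    """probs: dict of part-detector probabilities for one window."""
    top_torso = probs["topTorso"] > THRESH
    top = probs["top"] > THRESH
    torso = probs["torso"] > THRESH
    # topTorso(d1) ^ top(d1) ^ torso(d1) -> person(d1)
    if top_torso and top and torso:
        return "person"
    # topTorso(d1) ^ (not top(d1) or not torso(d1)) -> not person(d1)
    if top_torso and (not top or not torso):
        return "false alarm"
    return "undetermined"

print(consistent({"topTorso": 0.9, "top": 0.8, "torso": 0.7}))  # person
print(consistent({"topTorso": 0.9, "top": 0.2, "torso": 0.7}))  # false alarm
```

The "undetermined" branch is where the MLN's soft weighting matters: weak, conflicting evidence lowers the probability of person(d1) rather than forcing a hard verdict.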

16 Context: Understanding the relationship between different windows
A low response from a detector might be caused by a second detection window: a person may be occluding another, causing low responses from the occluded person's detectors.
First-order logic rule:
intersect(d1,d2) ^ person(d1) ^ matching(d1,d2) → person(d2) ^ occluded(d2) ^ occludedby(d2,d1)
matching(d1,d2) is true if:
- d1 and d2 are persons
- d1 and d2 intersect
- detectors at the visible parts of d2 have high responses
- detectors at the occluded parts of d2 have low responses, while detectors located at the corresponding positions of d1 have high responses
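A rough Boolean sketch of the matching(d1,d2) condition follows; the threshold and part names are assumptions, and the MLN treats this as a weighted formula over detector evidence rather than a hard test.

```python
THRESH = 0.5  # assumed confidence threshold (illustrative)

def matching(d1_probs, d2_probs, occluded_parts, visible_parts):
    """d2 is plausibly occluded by d1 if d2's visible parts respond
    strongly and, for each occluded part, d2 responds weakly while
    d1 responds strongly at the corresponding position."""
    visible_ok = all(d2_probs[p] > THRESH for p in visible_parts)
    occluded_ok = all(d2_probs[p] <= THRESH and d1_probs[p] > THRESH
                      for p in occluded_parts)
    return visible_ok and occluded_ok

d1 = {"top": 0.9, "torso": 0.8, "legs": 0.9}   # occluding person
d2 = {"top": 0.85, "torso": 0.2, "legs": 0.1}  # occluded person
print(matching(d1, d2,
               occluded_parts=["torso", "legs"],
               visible_parts=["top"]))  # True
```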

17 Our approach: An overview
Pipeline: multiple detection windows → part detectors' outputs and face detector outputs → instantiation of the MLN (using learned contextual rules) → inference → final result.
Queries: person(d1)? occluded(d1)? occludedby(d1,d2)?

18 3. Inference using MLN*: The basic idea
A logical knowledge base (KB) is a set of hard constraints (F_i) on the set of possible worlds.
Let's make them soft constraints: when a world violates a formula, it becomes less probable, not impossible. Give each formula a weight w_i (higher weight → stronger constraint).
* Contents of the next three slides are partially adapted from the Markov Logic Networks tutorial by Domingos et al., ICML (2007)

19 MLN: At a glance
- Logical language: first-order logic
- Probabilistic language: Markov networks
- Syntax: first-order formulas with weights
- Semantics: templates for Markov network features
- Learning: parameters (generative or discriminative); structure (ILP with arbitrary clauses and MAP score)
- Inference: MAP (weighted satisfiability); marginal (MCMC with moves proposed by a SAT solver); partial grounding + lazy inference / lifted inference

20 MLN: Definition
A Markov logic network (MLN) is a set of pairs (F_i, w_i), where:
- F_i is a formula in first-order logic
- w_i is a real number

24 Example: Humans & Occlusions
Two constants: detection window 1 (D1) and detection window 2 (D2).

25 Example: Humans & Occlusions
One node for each grounding of each predicate in the MLN: Parts(D1), Parts(D2), Human(D1), Human(D2).

26 Example: Humans & Occlusions
Grounding the occlusion predicate adds the nodes Occlusion(D1,D1), Occlusion(D1,D2), Occlusion(D2,D1), and Occlusion(D2,D2).

27 Example: Humans & Occlusions
One feature for each grounding of each formula F_i in the MLN, with the corresponding weight w_i.

30 Instantiation
An MLN is a template for ground Markov networks. Probability of a world x:
P(X = x) = (1/Z) exp( Σ_i w_i n_i(x) )
where w_i is the weight of formula F_i, n_i(x) is the number of true groundings of F_i in x, and Z is the normalizing partition function.
Learning of the weights and inference are performed using the open-source Alchemy system [Domingos et al. (2006)].
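The world-probability formula P(X = x) = (1/Z) exp(Σ_i w_i n_i(x)) can be checked on a toy ground network. The two formulas and their weights below are invented for illustration; they are not the rules or learned weights from the paper.

```python
import itertools
import math

# Toy ground MLN over two Boolean atoms Human(D1), Human(D2), with
# two weighted formulas (weights are illustrative, not learned):
w_human = 1.5  # unit clause: Human(d)
w_both = 0.5   # pairwise clause: Human(D1) ^ Human(D2)

def score(world):
    """Unnormalized weight exp(sum_i w_i * n_i(world)) of one world."""
    h1, h2 = world
    n_human = int(h1) + int(h2)  # true groundings of Human(d)
    n_both = int(h1 and h2)      # true groundings of the pair clause
    return math.exp(w_human * n_human + w_both * n_both)

worlds = list(itertools.product([False, True], repeat=2))
Z = sum(score(w) for w in worlds)           # partition function
probs = {w: score(w) / Z for w in worlds}   # P(X = x) for each world

# With positive weights, the most probable world makes both true.
best = max(probs, key=probs.get)
print(best)  # (True, True)
```

Violating a formula only lowers a world's score (soft constraint); no world gets probability zero, which is exactly the "less probable, not impossible" idea from slide 18.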

31 Our approach: An overview
Pipeline: multiple detection windows → part detectors' outputs and face detector outputs → instantiation of the MLN (using learned contextual rules) → inference → final result.
Queries: person(d1)? occluded(d1)? occludedby(d1,d2)?

32 Results


35 Comparisons
Dataset details:
- 200 images
- 5 to 15 humans per image
- occluded humans: ~35%

36 Comparisons

37 Comparisons

38 Conclusions
- A data-driven approach to detect humans under occlusions
- Modeling the semantic context of detector probabilities across spatial locations
- Probabilistic contextual inference using Markov logic networks
Question of interest: integrating analytical models for occlusions and context with this data-driven method.

39 Questions?

