Slide 1: Computer Vision: Vision and Modeling
Slide 2: Lucas-Kanade Extensions
- Support Maps / Layers: robust norm, layered motion, background subtraction, color layers
- Statistical Models (Forsyth+Ponce Chap. 6; Duda+Hart+Stork Chap. 1-5)
  - Bayesian decision theory
  - Density estimation
Slide 3: A Different View of Lucas-Kanade
Stack the brightness-constancy residual of every pixel i = 1..n into one least-squares objective:
E(\mathbf{v}) = \sum_i \left( \nabla I(i)^{\top} \mathbf{v} - I_t(i) \right)^2
(whiteboard) Note that pixels with high gradient magnitude contribute more to E, so they carry higher weight; the sketch below makes this concrete.
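A minimal NumPy sketch of this least-squares view (not the lecture's Matlab demo; the image pair F, G and the pure-translation case are assumptions for illustration):

```python
import numpy as np

def lucas_kanade_translation(F, G, n_iter=10):
    """Estimate one global 2D translation v = (dx, dy) between frames F and G
    by minimizing E(v) = sum_i (grad I(i)^T v - I_t(i))^2 over all pixels."""
    v = np.zeros(2)
    for _ in range(n_iter):
        # Crude integer warp of G by the current estimate.
        Gw = np.roll(G, (int(round(v[1])), int(round(v[0]))), axis=(0, 1))
        Iy, Ix = np.gradient(Gw)           # spatial gradients (rows=y, cols=x)
        It = Gw - F                        # temporal difference
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # one row per pixel
        # Solve the stacked system A dv = -I_t; pixels with large gradients
        # dominate A^T A, i.e. they automatically get higher weight.
        dv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
        v += dv
    return v
```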
Slide 4: Constrained Optimization
Minimize the same stacked residual energy, but with the flow field V constrained:
E(V) = \sum_i \left( \nabla I(i)^{\top} \mathbf{v}_i - I_t(i) \right)^2, \quad V \in V_{\text{constrained}}
Slide 5: Constraints = Subspaces
Constrain V to a subspace when minimizing E(V):
- Analytically derived: affine / twist / exponential map
- Learned: linear / non-linear subspaces
Slide 6: Motion Constraints
- Optical flow: local constraints
- Region layers: rigid / affine constraints
- Articulated: kinematic chain constraints
- Nonrigid: implicit / learned constraints
Slide 7: Constrained Function Minimization
Parameterize the constrained flow field as V = M(\theta) and minimize
E(\theta) = \sum_i \left( \nabla I(i)^{\top} \mathbf{v}_i(\theta) - I_t(i) \right)^2
Slide 8: 2D Translation: Lucas-Kanade
Every pixel shares one translation, \mathbf{v}_i = (dx, dy)^{\top} for all i, so V is 2D.
Slide 9: 2D Affine: Bergen et al., Shi-Tomasi
Each pixel's flow is an affine function of its position (6D parameter vector):
\mathbf{v}_i = \begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix} \begin{pmatrix} x_i \\ y_i \end{pmatrix} + \begin{pmatrix} dx \\ dy \end{pmatrix}
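A hedged NumPy sketch of why this 6-DOF model keeps the problem linear least squares (the derivative images Ix, Iy, It are assumed precomputed; names are illustrative):

```python
import numpy as np

def affine_flow_system(Ix, Iy, It):
    """Solve the 6-DOF least-squares system for affine flow
    v_i = A (x_i, y_i)^T + d, with theta = (a1, a2, a3, a4, dx, dy)."""
    h, w = Ix.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y = xs.ravel(), ys.ravel()
    ix, iy, it = Ix.ravel(), Iy.ravel(), It.ravel()
    # Each pixel contributes one row: grad I(i)^T v_i(theta) = -I_t(i).
    # v_i is linear in theta, so the stacked problem stays linear.
    J = np.stack([ix * x, ix * y, iy * x, iy * y, ix, iy], axis=1)
    theta, *_ = np.linalg.lstsq(J, -it, rcond=None)
    return theta  # (a1, a2, a3, a4, dx, dy)
```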
Slide 10: Affine Extension
Affine motion model:
- 2D translation
- 2D rotation
- Scale in X / Y
- Shear
(Matlab demo)
Slide 11: Affine Extension
Affine motion model -> Lucas-Kanade (Matlab demo)
Slide 12: 2D Affine: Bergen et al., Shi-Tomasi
V is constrained to the 6D affine subspace.
Slide 13: K-DOF Models
General case: V = M(\theta) with K degrees of freedom; minimize
E(\theta) = \sum_i \left( \nabla I(i)^{\top} \mathbf{v}_i(\theta) - I_t(i) \right)^2
Slide 14: V = M(\theta) with a quadratic error norm (SSD): is that always the right choice? (whiteboard: what about outliers?)
Slide 15: Support Maps / Layers
L2 norm vs. robust norm: the dangers of least-squares fitting.
[Plot: L2 penalty as a function of the residual D]
Slide 16: Support Maps / Layers
L2 norm vs. robust norm: the dangers of least-squares fitting.
[Plot: L2 penalty vs. robust penalty as functions of the residual D]
Slide 17: Support Maps / Layers
The robust norm handles outliers well, but requires nonlinear optimization.
[Plot: robust penalty as a function of the residual D]
Slide 18: Support Maps / Layers
Iterative technique: add a weight to each pixel's equation (whiteboard).
Slide 19: Support Maps / Layers
How to compute the weights?
- From the previous iteration: how well does the warped G match F?
- Use a probabilistic distance, e.g. a Gaussian of the residual (see the sketch below).
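A minimal sketch of that reweighting loop (iteratively reweighted least squares), assuming a Gaussian of the residual as the weight; sigma and the integer warp are simplifications, not values from the lecture:

```python
import numpy as np

def irls_translation(F, G, sigma=4.0, n_iter=10):
    """Robust Lucas-Kanade: each pixel's equation is weighted by how well
    the warped G matched F on the previous iteration (Gaussian weight)."""
    v = np.zeros(2)
    for _ in range(n_iter):
        Gw = np.roll(G, (int(round(v[1])), int(round(v[0]))), axis=(0, 1))
        Iy, Ix = np.gradient(Gw)
        r = (Gw - F).ravel()                 # residual per pixel
        w = np.exp(-r**2 / (2 * sigma**2))   # Gaussian support weights
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
        # Weighted normal equations: outlier pixels get weight ~0.
        AtWA = A.T @ (w[:, None] * A)
        AtWb = A.T @ (w * -r)
        v += np.linalg.solve(AtWA, AtWb)
    return v
```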
Slide 20: Error Norms / Optimization Techniques
Norm         Method                   Optimization
SSD          Lucas-Kanade (1981)      Newton-Raphson
SSD          Bergen et al. (1992)     Coarse-to-fine
SSD          Shi-Tomasi (1994)        Good features
Robust norm  Jepson-Black (1993)      EM
Robust norm  Ayer-Sawhney (1995)      EM + MRF
MAP          Weiss-Adelson (1996)     EM + MRF
ML/MAP       Bregler-Malik (1998)     Twists / EM
ML/MAP       Irani (+Anandan) (2000)  SVD
Slide 21: Lucas-Kanade Extensions
- Support Maps / Layers: robust norm, layered motion, background subtraction, color layers
- Statistical Models (Forsyth+Ponce Chap. 6; Duda+Hart+Stork Chap. 1-5)
  - Bayesian decision theory
  - Density estimation
Slide 22: Support Maps / Layers
Black-Jepson '95
Slide 23: Support Maps / Layers
More general: layered motion (Jepson/Black, Weiss/Adelson, ...)
Slide 24: Support Maps / Layers
Special cases of layered motion:
- Background subtraction
- Outlier rejection (== robust norm)
- Simplest case: each layer has uniform color
Slide 25: Support Maps / Layers
Color layers: P(skin | F(x,y)) (see the sketch below)
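A minimal sketch of such a color layer, assuming Gaussian class-conditionals for skin and background color; the means, covariances, and prior are placeholders, not values from the slides:

```python
import numpy as np

def skin_posterior(img, mu_skin, cov_skin, mu_bg, cov_bg, prior_skin=0.3):
    """Per-pixel P(skin | color) from Gaussian class-conditionals via Bayes.
    img: (H, W, 3) float array; mu_*, cov_*: class means / covariances."""
    def gauss(x, mu, cov):
        d = x - mu
        inv = np.linalg.inv(cov)
        norm = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov))
        # d^T inv d, evaluated per pixel
        return np.exp(-0.5 * np.einsum('...i,ij,...j', d, inv, d)) / norm
    p_skin = gauss(img, mu_skin, cov_skin) * prior_skin
    p_bg = gauss(img, mu_bg, cov_bg) * (1 - prior_skin)
    return p_skin / (p_skin + p_bg)   # support map in [0, 1]
```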
Slide 26: Lucas-Kanade Extensions
- Support Maps / Layers: robust norm, layered motion, background subtraction, color layers
- Statistical Models (Duda+Hart+Stork Chap. 1-5)
  - Bayesian decision theory
  - Density estimation
Slide 27: Statistical Models / Probability Theory
- Statistical models represent uncertainty and variability.
- Probability theory is the proper mechanism for reasoning about uncertainty.
- Basic facts (whiteboard).
Slide 28: General Performance Criteria
Optimal Bayes, with applications to classification.
Slide 29: Bayes Decision Theory
Example: character recognition. Goal: classify each new character so as to minimize the probability of misclassification.
Slide 30: Bayes Decision Theory
1st concept: priors.
Sample: a a b a b a a b a b a a a a b a a b a a b a a a a b b a b a b a a b a a
P(a) = 0.75, P(b) = 0.25. What should we predict for an unseen character?
Slide 31: Bayes Decision Theory
2nd concept: conditional probability P(X | class), where X = number of black pixels.
Slide 32: Bayes Decision Theory
Example: X = 7.
Slide 33: Bayes Decision Theory
Example: X = 8.
Slide 34: Bayes Decision Theory
Example: X = 8. Well... remember the priors: P(a) = 0.75, P(b) = 0.25.
Slide 35: Bayes Decision Theory
Example: X = 9, with P(a) = 0.75, P(b) = 0.25.
Slides 36-38: Bayes Decision Theory
Bayes Theorem:
P(C_k \mid X) = \frac{P(X \mid C_k)\, P(C_k)}{P(X)}
i.e. posterior = (likelihood x prior) / normalization factor, with P(X) = \sum_j P(X \mid C_j) P(C_j).
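The X = 8 discussion on the surrounding slides relies on plotted densities we can't recover from the transcript; with made-up likelihood values, the posterior computation goes like this:

```python
# Assumed (made-up) class-conditional likelihoods for X = 8;
# the priors P(a) = 0.75, P(b) = 0.25 are from the slides.
p_x_given = {'a': 0.10, 'b': 0.40}   # hypothetical P(X=8 | class)
prior = {'a': 0.75, 'b': 0.25}

evidence = sum(p_x_given[c] * prior[c] for c in prior)  # P(X=8)
posterior = {c: p_x_given[c] * prior[c] / evidence for c in prior}
print(posterior)  # roughly {'a': 0.43, 'b': 0.57}: the likelihood favors b,
                  # but the strong prior on a keeps the decision close
```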
Slides 39-41: Bayes Decision Theory
Example: applying Bayes' theorem to the character data, the posteriors cross so that for X > 8 the decision is class b.
Slides 42-43: Bayes Decision Theory
Goal: classify each new character so as to minimize the probability of misclassification. Decision boundaries partition the feature space into the regions assigned to each class.
Slide 44: Bayes Decision Theory
Decision regions: R1, R2, R3.
Slides 45-48: Bayes Decision Theory
Goal: minimize the probability of misclassification. (Derivation sketch: the misclassification probability is minimized by assigning each x to the class with the largest posterior P(C_k | x).)
Slide 49: Bayes Decision Theory
Discriminant functions: class membership is based solely on the relative sizes of the y_k(x). Reformulate the classification process in terms of discriminant functions y_k(x): x is assigned to C_k if y_k(x) > y_j(x) for all j != k.
Slide 50: Bayes Decision Theory
Discriminant function examples: any monotonic function of the posterior works, e.g. y_k(x) = P(C_k | x) or y_k(x) = ln p(x | C_k) + ln P(C_k).
Slide 51: Bayes Decision Theory
Discriminant function example, 2-class problem: a single discriminant y(x) = y_1(x) - y_2(x) suffices; assign x to C_1 if y(x) > 0.
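A toy sketch of classification by discriminant functions in the log form (the densities and priors are assumed inputs, not anything from the slides):

```python
import numpy as np

def classify(x, class_conditionals, priors):
    """Assign x to the class k maximizing y_k(x) = ln p(x|C_k) + ln P(C_k).
    class_conditionals: list of callables, each returning p(x | C_k)."""
    scores = [np.log(p(x)) + np.log(pr)
              for p, pr in zip(class_conditionals, priors)]
    return int(np.argmax(scores))
```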
Slide 52: Bayes Decision Theory
Why is it such a big deal?
Slide 53: Bayes Decision Theory
Why is it such a big deal? Example #1: speech recognition.
[Figure: acoustic signal x ("7189") with labels y; FFT -> mel-scale filter bank -> features; outputs range over phonemes /ah/, /eh/, ..., /uh/ and words apple, ..., zebra]
Slide 54: Bayes Decision Theory
Why is it such a big deal? Example #1: speech recognition.
[Figure: FFT -> mel-scale filter bank; a /t/ followed by an ambiguous vowel: /aal/, /aol/, /owl/]
Slide 55: Bayes Decision Theory
Why is it such a big deal? Example #1: speech recognition. How do humans do it?
Slide 56: Bayes Decision Theory
Why is it such a big deal? Example #1: speech recognition.
"This machine can recognize speech" ??
Slide 57: Bayes Decision Theory
Why is it such a big deal? Example #1: speech recognition.
"This machine can wreck a nice beach" !!
Slide 58: Bayes Decision Theory
Why is it such a big deal? Example #1: speech recognition.
[Figure: acoustic signal x ("7189") with labels y; FFT -> mel-scale filter bank]
Slide 59: Bayes Decision Theory
Why is it such a big deal? Example #1: speech recognition. A language model supplies the priors:
P("wreck a nice beach") = 0.001
P("recognize speech") = 0.02
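A sketch of how the language-model prior enters the decision; the prior probabilities are the slide's, but the acoustic likelihoods below are assumed for illustration:

```python
# Acoustic model gives similar likelihoods for both transcriptions
# (assumed values); the language-model prior breaks the tie.
acoustic = {'wreck a nice beach': 0.30,    # hypothetical p(x | words)
            'recognize speech':   0.25}
language = {'wreck a nice beach': 0.001,   # P(words), from the slide
            'recognize speech':   0.02}

best = max(acoustic, key=lambda w: acoustic[w] * language[w])
print(best)  # -> 'recognize speech'
```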
Slide 60: Bayes Decision Theory
Why is it such a big deal? Example #2: computer vision. Combine low-level image measurements with high-level model knowledge.
Slide 61: Bayes
Why is it such a big deal? Example #3: curve fitting. MAP estimation minimizes
E(c) = -\ln p(x \mid c) - \ln p(c)
Slide 62: Bayes
Why is it such a big deal? Example #4: snake tracking. Same MAP energy:
E(c) = -\ln p(x \mid c) - \ln p(c)
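A sketch of the curve-fitting case under assumed Gaussian noise and an assumed Gaussian prior on the coefficients: minimizing -ln p(x|c) - ln p(c) then reduces to regularized least squares:

```python
import numpy as np

def map_polyfit(x, t, degree=3, noise_var=0.1, prior_var=1.0):
    """MAP polynomial fit: Gaussian likelihood p(t|c) and Gaussian prior p(c)
    give E(c) = ||Phi c - t||^2 / (2*noise_var) + ||c||^2 / (2*prior_var),
    i.e. ridge regression with lambda = noise_var / prior_var."""
    Phi = np.vander(x, degree + 1)              # polynomial design matrix
    lam = noise_var / prior_var
    A = Phi.T @ Phi + lam * np.eye(degree + 1)  # the prior regularizes the fit
    return np.linalg.solve(A, Phi.T @ t)
```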
Slide 63: Lucas-Kanade Extensions
- Support Maps / Layers: robust norm, layered motion, background subtraction, color layers
- Statistical Models (Forsyth+Ponce Chap. 6; Duda+Hart+Stork Chap. 1-5)
  - Bayesian decision theory
  - Density estimation
Slide 64: Probability Density Estimation
Collect data x1, x2, x3, x4, x5, ... and estimate the density p(x) that generated it.
Slide 65: Probability Density Estimation
- Parametric representations
- Non-parametric representations
- Mixture models
Slide 66: Probability Density Estimation
Parametric representations:
- Normal distribution (Gaussian)
- Maximum likelihood
- Bayesian learning
Slide 67: Normal Distribution
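The slide's formula was lost in extraction; the standard univariate Gaussian density it shows is:

```latex
p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)
```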
Slide 68: Multivariate Normal Distribution
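Again reconstructing the lost formula, the standard d-dimensional Gaussian density:

```latex
p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}}
  \exp\!\left( -\tfrac{1}{2} (\mathbf{x}-\boldsymbol{\mu})^{\top} \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu}) \right)
```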
Slide 69: Multivariate Normal Distribution
Why Gaussian?
Simple analytical properties:
- Linear transformations of Gaussians are Gaussian.
- Marginal and conditional densities of Gaussians are Gaussian.
- Any moment of a Gaussian density is an explicit function of \mu and \Sigma.
"Good" model of nature:
- Central Limit Theorem: the mean of M random variables is distributed normally in the limit.
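A quick numeric illustration of the CLT claim (uniform variables; M = 50 and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
M, trials = 50, 100_000
# Means of M uniform(0,1) variables: by the CLT, approximately normal
# with mean 0.5 and std sqrt((1/12) / M).
means = rng.random((trials, M)).mean(axis=1)
print(means.mean(), means.std())   # ~0.5, ~0.041
print(np.sqrt(1 / 12 / M))         # predicted std: ~0.041
```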
Slide 70: Multivariate Normal Distribution
Discriminant functions:
Slide 71: Multivariate Normal Distribution
Discriminant functions: with equal priors and equal covariances, classification reduces to the Mahalanobis distance.
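What these two slides show is the standard result (Duda+Hart+Stork): with Gaussian class-conditionals, y_k(x) = ln p(x|C_k) + ln P(C_k) becomes

```latex
y_k(\mathbf{x}) = -\tfrac{1}{2} (\mathbf{x}-\boldsymbol{\mu}_k)^{\top} \Sigma_k^{-1} (\mathbf{x}-\boldsymbol{\mu}_k)
  - \tfrac{1}{2} \ln |\Sigma_k| + \ln P(C_k) + \text{const}
```

With equal priors and a shared \Sigma, only the quadratic term varies across classes, so the rule is: pick the class with the smallest Mahalanobis distance (\mathbf{x}-\boldsymbol{\mu}_k)^{\top} \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu}_k).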
Slide 72: Multivariate Normal Distribution
How to "learn" it from examples:
- Maximum likelihood
- Bayesian learning
Slide 73: Maximum Likelihood
How to "learn" the density from examples x1, x2, ...?
Slides 74-75: Maximum Likelihood
Likelihood that the density model (parameters \theta) generated the data set X = {x_1, ..., x_N}:
L(\theta) = p(X \mid \theta) = \prod_{n=1}^{N} p(x_n \mid \theta)
Slide 76: Maximum Likelihood
Learning = optimizing: maximize the likelihood, or equivalently minimize the negative log-likelihood
E(\theta) = -\ln L(\theta) = -\sum_{n=1}^{N} \ln p(x_n \mid \theta)
Slide 77: Maximum Likelihood
Maximum likelihood for the Gaussian density has a closed-form solution:
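The closed-form solution the slide refers to is the sample mean and (biased) sample covariance:

```latex
\hat{\boldsymbol{\mu}} = \frac{1}{N} \sum_{n=1}^{N} \mathbf{x}_n, \qquad
\hat{\Sigma} = \frac{1}{N} \sum_{n=1}^{N}
  (\mathbf{x}_n - \hat{\boldsymbol{\mu}}) (\mathbf{x}_n - \hat{\boldsymbol{\mu}})^{\top}
```

In NumPy, for data X of shape (N, d), these are X.mean(axis=0) and np.cov(X, rowvar=False, bias=True).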