CSSE463: Image Recognition Day 17 (presentation transcript)

1 CSSE463: Image Recognition Day 17
Sunset detector spec posted. Homework tonight: read it, prepare questions!
Tomorrow: Exam and sunset questions (+ lab time?)
Weds night: Lab 4 due
Thursday: Exam. Bring your calculator.
Today: Bayesian classifiers
Questions?

2 Late-breaking news!
Sunset detector technique is now patented, with two colleagues at Kodak and me as inventors: "Method for using effective spatio-temporal image recomposition to improve scene classification", U.S. Patent 7,313,268, issued December 25, 2007.

3 Bayesian classifiers
Use training data. Assume that you know the probabilities of each feature.
If 2 classes:
Classes ω1 and ω2 (say, circles vs. non-circles)
A single feature, x
Both classes equally likely
Both types of errors equally bad
Where should we set the threshold between classes? Here? Where in the graph are the 2 types of errors?
[Figure: class-conditional densities p(x|ω1) (non-circles) and p(x|ω2) (circles) over feature x, with the region above the threshold detected as circles.]
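A minimal sketch of this setup, assuming (my own illustration, not from the slides) Gaussian class-conditional densities with made-up parameters; the two error types are the areas of each density on the wrong side of the threshold:

```python
from scipy.stats import norm

# Hypothetical class-conditional densities (all parameters are made up):
p_noncircle = norm(loc=0.0, scale=1.0)   # p(x | w1), non-circles
p_circle = norm(loc=3.0, scale=1.0)      # p(x | w2), circles

t = 1.5  # candidate threshold: call anything with x > t a circle

# The two error types, as areas on the wrong side of the threshold:
false_alarms = 1 - p_noncircle.cdf(t)  # non-circles detected as circles
misses = p_circle.cdf(t)               # circles detected as non-circles
print(false_alarms, misses)            # both ~0.067 at this midpoint
```

With equal priors, equal error costs, and equal variances, the total error is minimized where the two densities cross, here the midpoint of the two means.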

4 What if we have prior information?
Bayesian probabilities say that if we only expect 10% of the objects to be circles, that should affect our classification.

5 Bayesian classifier in general
Bayes rule: P(ωi | x) = P(x | ωi) P(ωi) / P(x). Verify with an example.
For classifiers:
x = feature(s)
ωi = class
P(ωi | x) = posterior probability
P(ωi) = prior
P(x) = unconditional probability
Find the best class by the maximum a posteriori (MAP) principle: find the class i that maximizes P(ωi | x). The denominator doesn't affect the calculations.
Example: indoor/outdoor classification.
P(x | ωi) is learned from examples (histogram); P(ωi) is learned from the training set (or left out if unknown); P(x) is fixed.
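A sketch of the MAP rule with made-up histogram values (my own illustration): for each class, multiply the class-conditional probability of the observed feature by the prior, then pick the class with the largest product; P(x) never needs to be computed.

```python
# MAP classification sketch; all numbers are assumed for illustration.
# Feature x has been quantized into 4 histogram bins.

priors = {"indoor": 0.4, "outdoor": 0.6}          # P(w_i)
likelihood = {                                    # P(x_bin | w_i), from histograms
    "indoor":  [0.50, 0.30, 0.15, 0.05],
    "outdoor": [0.05, 0.15, 0.30, 0.50],
}

def map_classify(x_bin):
    # Maximize P(w|x), proportional to P(x|w) P(w); P(x) is common to all classes.
    return max(priors, key=lambda w: likelihood[w][x_bin] * priors[w])

print(map_classify(0))  # -> indoor  (0.50*0.4 = 0.20 beats 0.05*0.6 = 0.03)
print(map_classify(3))  # -> outdoor (0.50*0.6 = 0.30 beats 0.05*0.4 = 0.02)
```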

6 Indoor vs. outdoor classification
I can use low-level image info (color, texture, etc.). But there's another source of really helpful info!

7 Camera Metadata Distributions
[Figure: per-class distributions of four metadata cues: exposure time, flash, subject distance, and scene brightness.]

8 Why we need Bayes Rule
Problem: We know conditional probabilities like P(flash was on | indoor). We want to find conditional probabilities like P(indoor | flash was on, exp time = 0.017, sd = 8 ft, SVM output).
Let ω = class of image, and x = all the evidence. More generally, we know P(x | ω) from the training set (why?), but we want P(ω | x).
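A worked example of this inversion, with made-up numbers: suppose P(flash on | indoor) = 0.7, P(flash on | outdoor) = 0.1, and equal priors P(indoor) = P(outdoor) = 0.5. Then

P(indoor | flash on) = P(flash on | indoor) P(indoor) / [P(flash on | indoor) P(indoor) + P(flash on | outdoor) P(outdoor)] = (0.7)(0.5) / ((0.7)(0.5) + (0.1)(0.5)) = 0.35 / 0.40 = 0.875.

The training set gives us the right-hand side directly; Bayes rule turns it into the quantity we actually want.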

9 Using Bayes Rule
P(ω | x) = P(x | ω) P(ω) / P(x). The denominator is constant for an image, so P(ω | x) = α P(x | ω) P(ω).
We have two types of features: from image metadata (M) and from low-level features, like color (L).
Conditional independence means P(x | ω) = P(M | ω) P(L | ω), so
P(ω | x) = α P(M | ω) P(L | ω) P(ω)
where P(M | ω) comes from histograms, P(L | ω) from the SVM, and P(ω) is the prior (initial bias).
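A minimal sketch of this fusion rule (names and numbers are my own assumptions): multiply the metadata likelihood, the SVM-derived likelihood, and the prior for each class, then normalize, which plays the role of α.

```python
# Naive-Bayes-style fusion sketch; all probabilities are made up for illustration.
classes = ["indoor", "outdoor"]

p_M = {"indoor": 0.7, "outdoor": 0.2}    # P(M|w): metadata likelihood, from histograms
p_L = {"indoor": 0.4, "outdoor": 0.6}    # P(L|w): low-level likelihood, from SVM output
prior = {"indoor": 0.5, "outdoor": 0.5}  # P(w): prior (initial bias)

# P(w|x) = alpha * P(M|w) * P(L|w) * P(w); normalizing over classes supplies alpha.
unnorm = {w: p_M[w] * p_L[w] * prior[w] for w in classes}
alpha = 1.0 / sum(unnorm.values())
posterior = {w: alpha * u for w, u in unnorm.items()}
print(posterior)  # ~{'indoor': 0.70, 'outdoor': 0.30}
```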

10 Bayesian network
An efficient way to encode conditional probability distributions and calculate marginals.
Use for classification by having the classification node at the root.
Examples:
Indoor-outdoor classification
Automatic image orientation detection
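A tiny sketch of what "calculate marginals" means here (the structure and CPD entries are my own, not the slides'): with the class node at the root, the marginal of an observed child sums the product of CPD entries over the class, and the posterior at the root follows by Bayes rule. The numbers below match the worked example on slide 8.

```python
# Two-node network: class -> flash. CPD entries are made-up illustrations.
P_class = {"indoor": 0.5, "outdoor": 0.5}
P_flash = {"indoor":  {"on": 0.7, "off": 0.3},
           "outdoor": {"on": 0.1, "off": 0.9}}

# Marginal: P(flash=on) = sum over w of P(flash=on | w) P(w)
evidence = sum(P_flash[w]["on"] * P_class[w] for w in P_class)
# Posterior at the root: P(w | flash=on)
posterior = {w: P_flash[w]["on"] * P_class[w] / evidence for w in P_class}
print(evidence, posterior)  # 0.4 {'indoor': 0.875, 'outdoor': 0.125}
```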

11 Indoor vs. outdoor classification
[Figure: Bayesian network for indoor/outdoor classification; inputs include color features and texture features (via SVMs, with KL divergence) and the EXIF header.]
Each edge in the graph has an associated matrix of conditional probabilities.

12 Effects of Image Capture Context
Recall for a class C is the fraction of images in class C that are classified correctly.
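A sketch of that definition on a toy labeled set (labels are hypothetical):

```python
# Recall for class c = (# of class-c examples classified as c) / (total # of class-c examples)
truth = ["indoor", "indoor", "outdoor", "indoor", "outdoor"]
pred  = ["indoor", "outdoor", "outdoor", "indoor", "outdoor"]

def recall(c, truth, pred):
    total = sum(1 for t in truth if t == c)
    correct = sum(1 for t, p in zip(truth, pred) if t == c and p == c)
    return correct / total

print(recall("indoor", truth, pred))   # 2/3 ~ 0.667
print(recall("outdoor", truth, pred))  # 2/2 = 1.0
```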

13 Orientation detection
See the IEEE TPAMI paper (hardcopy or posted). It also uses a single-feature Bayesian classifier (answer to #1).
Keys:
4-class problem (North, South, East, West)
Priors really helped here!
You should be able to understand the two papers (both posted in Angel).

