Announcements
• Homework due Tuesday.
• Office hours Monday 1-2 instead of Wed. 2-3.

Knowledge of the world is often statistical.
• When the appearance of an object varies, but not completely arbitrarily.
• Examples:
  – Classes of objects: faces, cars, bicycles
  – Segmentation: contour shape, texture, background appearance

Represent statistics with a probability distribution
• Every value has a probability
• Probabilities are between 0 and 1
• Probabilities sum to 1
• Two issues:
  – How do we get these distributions?
  – How do we use them?

Background Subtraction
• We'll use this as our first example.
• Many images of the same scene.
• A pixel is foreground or background.
• Many training examples of background.
• Classify pixels in a new image.

The Problem …

Look at each pixel individually … Then classify:

Just Subtract?
Subtract the background image from the new image and threshold the difference.
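A minimal sketch of this step (not the lecture's code; the file names and the threshold tau are hypothetical):

    bg    = double(imread('background.png'));   % stored background image
    frame = double(imread('frame.png'));        % new image to classify
    tau   = 25;                                 % hand-picked intensity threshold
    fg    = abs(frame - bg) > tau;              % 1 = foreground, 0 = background
    figure(1); imagesc(fg); colormap(gray);     % show the foreground mask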

Background isn’t static

Probability Distribution for Pixels
• p(I(x,y)=k) denotes the probability that the pixel at (x,y) has intensity k.

Bayes' Law
This tells us how to reach conclusions using evidence, if we know the probability that the evidence would occur under each hypothesis.
• Probability that (x,y) is background given that its intensity is 107? Who knows?
• Probability that its intensity is 107 given that it is background? That we can measure.
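In symbols, writing B for "the pixel is background":

P(B | I(x,y)=k) = p(I(x,y)=k | B) · P(B) / p(I(x,y)=k)

The right-hand side uses only quantities we can measure or assume; the left-hand side is the conclusion we want.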

Bayes' law cont'd
If we assume a uniform prior for foreground pixels, then the key is to find the probability distribution for the background.
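Spelled out: if the foreground likelihood p(I(x,y)=k | F) is a constant c for all k, then

P(B | I(x,y)=k) = p(k | B)·P(B) / ( p(k | B)·P(B) + c·P(F) ),

which increases monotonically with p(k | B). Thresholding the posterior is therefore equivalent to thresholding the background likelihood, so the background distribution is all we need to estimate.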

Sample Distribution with Histogram
• Histogram: count the # of times each intensity appears.
• We estimate the distribution from experience.
• If a background pixel has intensity 17 in 1/100 of the training images, then assume P(I(x,y)=17 | B) = 1/100.
• May not be true, but it's our best estimate.
• Requires ergodicity, i.e., the distribution doesn't change over time.
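A minimal sketch for a single pixel; the sample vector is a made-up stand-in for the intensities observed at (x,y) across the training images:

    samples = [17 17 18 16 17 20 17 18 17 19];  % training intensities at (x,y)
    counts  = histc(samples, 0:255);            % # of times each intensity appears
    Pb      = counts / numel(samples);          % estimate of P(I(x,y)=k | B)
    Pb(17+1)                                    % P(I(x,y)=17 | B); +1 for 1-based indexing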

Sample Distribution Problems
• This estimate can be noisy. Try, for different values of k and n:
    k = 6; n = 10;
    figure(1); hist(floor(k*rand(1,n)), 0:(k-1))
• Need a lot of data.

Histogram of One Pixel's Intensities

Kernel Density Estimation
• Assume p(I(x,y)=k) is similar for similar values of k.
• So an observation of k tells us that a new observation at or near k is more likely.
• Equivalent to smoothing the distribution. (Smoothing reduces noise.)

KDE vs. Sample Distribution
• Suppose we have one observation.
  – The sample distribution says that event has probability 1; all other events have probability 0.
  – KDE says there's a smooth distribution with a peak at that event.
• Many observations just average what happens with one.

KDE cont'd
• To compute P(I(x,y)=k), for every sample we add a contribution based on the distance between that sample and k.
• Let s_i be sample i at (x,y), N the number of samples, and σ a smoothing parameter. With a Gaussian kernel, the standard choice:

P(I(x,y)=k) = (1/N) Σ_{i=1..N} (1/(σ√(2π))) exp(−(k − s_i)² / (2σ²))
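A sketch of this computation for one pixel, reusing the made-up samples from above; sigma is a hand-picked bandwidth:

    s     = [17 17 18 16 17 20 17 18 17 19];        % samples at (x,y)
    sigma = 2;                                      % smoothing parameter
    k     = 0:255;                                  % all possible intensities
    P     = zeros(size(k));
    for i = 1:numel(s)
        P = P + exp(-(k - s(i)).^2 / (2*sigma^2)); % one Gaussian bump per sample
    end
    P = P / (numel(s) * sigma * sqrt(2*pi));        % normalize; sums to ~1 over k
    figure(2); plot(k, P);                          % smoothed version of the histogram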

KDE for Background Subtraction
• For each pixel, compute the probability that the background would look like this.
• Then threshold.
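Continuing the sketch above (theta is a hypothetical, hand-tuned threshold):

    k_new = 107;                           % intensity observed at (x,y) in the new image
    theta = 1e-3;                          % probability threshold
    isForeground = P(k_new + 1) < theta;   % too unlikely under the background model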

Naïve Subtraction With Model of Background Distribution