The Essence of the Scene Gist. Trayambaka Karra (KT) and Garold Fuks.


1 The Essence of the Scene Gist. Trayambaka Karra (KT) and Garold Fuks

2 The “Gist” of a scene: If this is a street, this must be a pedestrian.

3 Physiological Evidence. People are excellent at identifying pictures (Standing, L., Q. J. Exp. Psychol., 1973). Gist: the abstract meaning of a scene. Obtained within 150 ms (Biederman, 1981; Thorpe, S. et al., 1996). Obtained without attention (Oliva & Schyns, 1997; Wolfe, J.M., 1998). Possibly derived via statistics of low-level structures (e.g. Swain & Ballard, 1991). Change blindness, by contrast, operates on a timescale of seconds (Simons, D.J. & Levin, D.T., Trends Cogn. Sci., 1997).

4 What is the “gist”? An inventory of the objects (2-3 objects in 150 ms; Luck & Vogel, Nature 390, 1997). Relations between objects (layout) (J. Wolfe, Curr. Biol. 8, 1998). Presence of other objects. “Visual stuff”: an impression of low-level features.

5 How does the “Gist” work? Statistical properties vs. object properties. (R.A. Rensink, lecture notes)

6 Outline Context Modeling –Previous Models –Scene based Context Model Context Based Applications –Place Identification –Object Priming –Control of Focus of Attention –Scale Selection –Scene Classification Joint Local and Global Features Applications –Object Detection and Localization Summary

7 Probabilistic Framework: the MAP estimator. v – image measurements; O – object properties: category (o), location (x), scale (σ)
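The MAP framework on this slide can be sketched numerically: pick the object property O that maximizes P(O | v) ∝ P(v | O) P(O). A minimal illustration; all numbers below are made up and not from the talk:

```python
# MAP estimation sketch: posterior ∝ likelihood × prior.
# The hypotheses and probabilities are illustrative only.

def map_estimate(likelihoods, priors):
    """Return the hypothesis maximizing the (unnormalized) posterior."""
    return max(likelihoods, key=lambda o: likelihoods[o] * priors[o])

# P(v | O): how well the image measurements fit each hypothesis
likelihoods = {"car": 0.20, "person": 0.60, "tree": 0.20}
# P(O): prior on each object property, e.g. supplied by scene context
priors = {"car": 0.70, "person": 0.20, "tree": 0.10}

best = map_estimate(likelihoods, priors)  # "car": the context prior outweighs the likelihood
```

Note how a strong context prior can overturn the per-pixel likelihood, which is exactly the role the talk assigns to the gist.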

8 Object-Centered Object Detection (B. Moghaddam & A. Pentland, IEEE PAMI-19, 1997). The only image features relevant to object detection are those belonging to the object, not the background.

9 The “Gist” of a scene: local features can be ambiguous; context can provide a prior.

10 Scene Based Context Model: the background provides a likelihood of finding an object. Prob(Car | image) = low; Prob(Person | image) = high.

11 Context Modeling. Previous context models (Fu, Hammond & Swain, 1994; Haralick, 1983; Song et al., 2000): – Rule Based Context Model – Object Based Context Model. Scene-centered context representation (Oliva & Torralba, 2001, 2002).

12 Rule Based Context Model: a structural description of the scene as objects (O1…O4) linked by spatial relations such as Above, Right-of, Left-of, and Touch.

13 Fu, Hammond and Swain, 1994

14 Object Based Context Model: context is incorporated only through the prior probability of object combinations in the world. (R. Haralick, IEEE PAMI-5, 1983)

15 Scene Based Context Model: what features represent the scene? Statistics of local low-level features: color histograms, oriented band-pass filters.

16 Context Features – v_C: a bank of filters g_1(x), …, g_K(x) produces the measurements v(x,1), v(x,2), …, v(x,K).

17 Context Features – v_C: example Gabor-filter responses for two scenes (car, no people vs. people, no car).

18 Context Features – v_C: dimensionality reduction via PCA.

19 PCA Detour: take a natural-images database, calculate v(x,k), arrange the v(x,k)'s in a matrix, calculate the correlation matrix of V, perform SVD, and use the columns of U as the basis.
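The PCA steps above can be sketched on toy 2-D data. This minimal version uses the closed-form eigendecomposition of a 2×2 covariance matrix as a stand-in for SVD on the full filter-response matrix (a real system would run SVD on thousands of filter outputs); the data points are invented:

```python
import math

def pca_2d(points):
    """Principal axes of 2-D data via closed-form eigendecomposition
    of the 2x2 covariance matrix (a tiny stand-in for SVD)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # covariance entries
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # eigenvalues of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    d = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + d, tr / 2 - d          # l1 >= l2
    # eigenvector for l1: the first principal direction
    if abs(sxy) > 1e-12:
        vx, vy = l1 - syy, sxy
    else:
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return (l1, l2), (vx / norm, vy / norm)

# Data stretched along y = x: the first principal axis comes out near (0.707, 0.707)
pts = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9), (5, 5.1)]
(l1, l2), axis = pca_2d(pts)
```

Keeping only the leading components is the dimension-reduction step of slide 20: most of the variance of the filter responses lives in a few principal directions.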

20 Context Features – Summary: image I(x) → bank of filters → dimension reduction (PCA).

21 Probability from Features. How do we obtain context-based probability priors P(O | v_C) on object properties? Options: GMM (Gaussian Mixture Model), logistic regression, Parzen window.

22 Probability from Features: GMM. We want P(object property | context), so we need to learn two likelihoods: P(v_C | O), the likelihood of the features given the presence of the object, and P(v_C | ¬O), the likelihood given its absence. Each is modeled as a Gaussian mixture, and the unknown parameters are learned with the EM algorithm.
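The two-likelihood scheme can be sketched in a deliberately simplified form: one Gaussian per class fit by maximum likelihood instead of a full mixture trained with EM, and a 1-D context feature instead of the PCA vector. The training values are invented:

```python
import math

def fit_gaussian(xs):
    """Maximum-likelihood 1-D Gaussian fit: (mean, variance)."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior(x, pos_params, neg_params, prior=0.5):
    """P(O | v_C) from the two class-conditional likelihoods via Bayes rule."""
    p_pos = gauss_pdf(x, *pos_params) * prior
    p_neg = gauss_pdf(x, *neg_params) * (1 - prior)
    return p_pos / (p_pos + p_neg)

# Hypothetical 1-D context feature: high for street scenes, low for indoor ones
street = fit_gaussian([2.1, 2.4, 1.9, 2.2, 2.5])   # models P(v_C | O)
indoor = fit_gaussian([0.3, 0.5, 0.2, 0.6, 0.4])   # models P(v_C | ¬O)
p = posterior(2.3, street, indoor)                 # near 1: context says "street"
```

Replacing each `fit_gaussian` with an EM-trained mixture recovers the slide's actual model; the Bayes-rule combination is unchanged.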

23 Probability from Features. How do we obtain context-based probability priors P(O | v_C) on object properties? Options: GMM (Gaussian Mixture Model), logistic regression, Parzen window.

24 Probability from Features Logistic Regression

25 Example: O = having back problems, v_C = age. The intercept plus 20·β₁ gives the log odds for a 20-year-old person; β₁ is the log odds ratio when comparing two persons who differ by 1 year in age. Training stage: learn the coefficients. Working stage: evaluate P(O | v_C) for new inputs.
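The back-problems example can be made concrete. The coefficients below are invented for illustration, not fit to any data:

```python
import math

# Hypothetical coefficients: beta0 = intercept, beta1 = per-year effect of age
beta0, beta1 = -3.0, 0.1

def log_odds(age):
    return beta0 + beta1 * age

def prob(age):
    """P(back problems | age) via the logistic (sigmoid) link."""
    return 1.0 / (1.0 + math.exp(-log_odds(age)))

lo20 = log_odds(20)        # log odds for a 20-year-old: -3.0 + 20*0.1 = -1.0
ratio = math.exp(beta1)    # odds ratio between two persons one year apart
```

For the context model, `age` is replaced by the components of v_C and the same training/working split applies.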

26 Probability from Features. How do we obtain context-based probability priors P(O | v_C) on object properties? Options: GMM (Gaussian Mixture Model), logistic regression, Parzen window.

27 Probability from Features: Parzen window with a radial Gaussian kernel.
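A Parzen-window estimate places a Gaussian kernel on every training sample and averages them. A minimal 1-D sketch with invented samples and an arbitrary bandwidth:

```python
import math

def parzen_density(x, samples, h=0.5):
    """Parzen-window density estimate with a radial Gaussian kernel;
    h is the bandwidth (window width), chosen arbitrarily here."""
    n = len(samples)
    kernel = lambda u: math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
    return sum(kernel((x - s) / h) for s in samples) / (n * h)

samples = [1.0, 1.2, 0.8, 1.1, 5.0]
# density is high near the cluster around 1.0 and low far from any sample
```

Unlike the GMM, this is non-parametric: no EM training stage, at the cost of keeping all samples around at evaluation time.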

28 What did we have so far… Context Modeling Context Based Applications –Place Identification –Object Priming –Control of Focus of Attention –Scale Selection –Scene Classification

29 Place Identification Goal: Recognize specific locations

30 Place Identification A.Torralba, K.Murphy, W. Freeman, M. Rubin ICCV 2003

31 Place Identification: decide only when the posterior confidence exceeds a threshold, and evaluate the resulting precision vs. recall rate. (A. Torralba, P. Sinha, MIT AIM 2001-015)
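The "decide only when confident" rule is a reject option. A small sketch with made-up posteriors and an illustrative threshold:

```python
def decide(posteriors, threshold=0.9):
    """Return the most likely place only when its posterior clears a
    confidence threshold; otherwise abstain. Raising the threshold
    trades recall for precision. The threshold value is illustrative."""
    place, p = max(posteriors.items(), key=lambda kv: kv[1])
    return place if p >= threshold else None

confident = decide({"office": 0.95, "corridor": 0.05})   # "office"
uncertain = decide({"office": 0.55, "corridor": 0.45})   # None: abstain
```

Sweeping `threshold` from 0 to 1 traces out the precision-recall curve the slide refers to.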

32 Object Priming. How do we detect objects in an image? – Search the whole image for the object model. – What if I am searching in images where the object doesn't exist at all? Obviously, wasting "my precious" computational resources. ......... GOLLUM. Can we do better and if so, how? – Use the "great eye", the contextual features of the image (v_C), to predict the probability of finding our object of interest o in the image, i.e. P(o | v_C).

33 Object Priming ….. What to do? – Use my experience: learn from a database of labeled images. How to do it? – Learn the PDF P(v_C | o) with a mixture of Gaussians. – Also learn the PDF P(v_C | ¬o).

34 Object Priming …..

36 Control of Focus of Attention. How do biological visual systems deal with the analysis of complex real-world scenes? – By focusing attention on image regions that require detailed analysis.

37 Modeling the Control of Focus of Attention. How do we decide which regions are "more" important than others? Local-type methods: 1. Low-level saliency maps – regions whose properties differ from their neighborhood are considered salient. 2. Object-centered methods. Global-type methods: 1. Contextual control of focus of attention.

38 Contextual Control of Focus of Attention Contextual control is both –Task driven (looking for a particular object o) and –Context driven (given global context information: v C ) No use of object models (i.e. ignores object centered features)

39 Contextual Control of Focus of Attention …

40 Focus on spatial regions that have a high probability of containing the target object o given the context information v_C. For each location x, calculate the probability of presence of the object o given the context, P(x | o, v_C), evaluating the PDF from the past experience of the system.

41 Contextual Control of Focus of Attention … Learning Stage: Use the Swiss Army Knife, the EM algorithm, to estimate the parameters

42 Contextual Control of Focus of Attention … Learning Stage: one component models the distribution of object locations, the other the distribution of contextual features. The training data are {v_t} and {x_t}, t = 1…N, where v_t are the contextual features of picture t and x_t is the location of object o in that scene. Use the Swiss Army Knife, the EM algorithm, to estimate the parameters.

43 Contextual Control of Focus of Attention …

44 Scale Selection. Scale selection is a fundamental problem in computer vision and a key bottleneck for object-centered object detection algorithms. Can we estimate scale in a pre-processing stage? Yes, using saliency measures of low-level operators across spatial scales. Other methods? Of course, …..

45 Context-Driven Scale Selection: the preferred scale is the σ that maximizes P(σ | o, v_C).

46 Context-Driven Scale Selection ….

48 Scene Classification. There is strong correlation between the presence of many types of objects. Do not model this correlation directly; rather, use a "common" cause, which we shall call the "scene". Train a classifier to identify scenes. Then all we need is to calculate P(o | v_C) = Σ_s P(o | s) P(s | v_C).
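The marginalization over scenes is a one-liner once the scene classifier's output is available. All probabilities below are made up for illustration:

```python
# P(o | v_C) = sum over scenes s of P(o | s) * P(s | v_C):
# the scene acts as the common cause that correlates object presences.

p_scene = {"street": 0.7, "indoor": 0.3}             # P(s | v_C) from the scene classifier
p_obj_given_scene = {"street": 0.8, "indoor": 0.05}  # P(car | s), e.g. from counts

p_car = sum(p_obj_given_scene[s] * p_scene[s] for s in p_scene)
# 0.8*0.7 + 0.05*0.3 = 0.575
```

The same `p_scene` vector is reused for every object class, which is what makes the common-cause factorization cheap.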

49 What did we have so far… Context Modeling Context Based Applications Joint Local and Global Features Applications –Object Detection and Localization Need new tools: Learning and Boosting

50 Weak Learners. Given (x_1,y_1),…,(x_m,y_m), where the x_i are examples and the y_i their labels, can we extract "rules of thumb" for classification purposes? A weak learner finds a weak hypothesis (rule of thumb) h : X → {spam, non-spam}.

51 Decision Stumps. Consider the following simple family of component classifiers generating ±1 labels: h(x;p) = a·[x_k > t] − b, where p = {a, b, k, t}. These are called decision stumps. Use sign(h) for classification and |h| as a confidence measure. Each decision stump pays attention to only a single component of the input vector.
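The stump family above is tiny to implement. A direct sketch (the sample vector is invented):

```python
def stump(x, a, b, k, t):
    """Decision stump h(x;p) = a*[x_k > t] - b.
    With a = 2, b = 1 the output is exactly +1 or -1."""
    return a * (1 if x[k] > t else 0) - b

x = [0.3, 1.7, -0.2]
h = stump(x, a=2, b=1, k=1, t=1.0)   # looks only at component 1: 1.7 > 1.0
```

Note the stump really does ignore every component except `x[k]`, which is what makes it weak, and cheap.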

52 Ponders his maker, ponders his will. Can we combine weak classifiers to produce a single strong classifier in a simple manner: h_m(x) = h(x;p_1) + … + h(x;p_m), where the predicted label for x is the sign of h_m(x)? Is it beneficial to allow some of the weak classifiers to have more "votes" than others: h_m(x) = α_1·h(x;p_1) + … + α_m·h(x;p_m), where the non-negative votes α_i can be used to emphasize the components that are more reliable than others?

53 Boosting. What is boosting? – A general method for improving the accuracy of any given weak learning algorithm. – Introduced in the framework of the PAC learning model. – But it works with any weak learner (in our case, decision stumps).

54 Boosting ….. A boosting algorithm sequentially estimates and combines classifiers by re-weighting the training examples, each time concentrating on the harder ones – each component classifier is presented with a slightly different problem depending on the weights. Ingredients: – a set of "weak" binary (±1) classifiers h(x;p) such as decision stumps; – normalized weights D_1(i) on the training examples, initially uniform (D_1(i) = 1/m).

55 AdaBoost. 1. At the t-th iteration, find a weak classifier h(x;p_t) whose classification error ε_t is better than chance. 2. The new component classifier is assigned "votes" based on its performance: α_t = ½ ln((1 − ε_t)/ε_t). 3. The weights on the training examples are updated according to D_{t+1}(i) = D_t(i)·exp(−α_t·y_i·h(x_i;p_t)) / Z_t, where Z_t is a normalization factor.
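The three AdaBoost steps can be sketched end-to-end on a toy 1-D problem. This version uses simple threshold stumps h(x) = s·sign(x − t) rather than the full {a, b, k, t} family, and the dataset is invented:

```python
import math

def train_adaboost(X, y, rounds=5):
    """AdaBoost over 1-D threshold stumps h(x) = s * sign(x - t)."""
    n = len(X)
    D = [1.0 / n] * n                      # D_1(i): uniform weights
    model = []                             # list of (alpha, t, s)
    for _ in range(rounds):
        # Step 1: pick the stump with the lowest weighted error
        best = None
        for t in sorted(set(X)):
            for s in (1, -1):
                err = sum(D[i] for i in range(n)
                          if s * (1 if X[i] > t else -1) != y[i])
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        # Step 2: votes for this round
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, t, s))
        # Step 3: re-weight, emphasizing the examples this stump got wrong
        D = [D[i] * math.exp(-alpha * y[i] * s * (1 if X[i] > t else -1))
             for i in range(n)]
        Z = sum(D)                                   # normalization factor Z_t
        D = [d / Z for d in D]
    return model

def predict(model, x):
    score = sum(a * s * (1 if x > t else -1) for a, t, s in model)
    return 1 if score >= 0 else -1

X = [0.1, 0.4, 0.6, 0.9]
y = [-1, -1, 1, 1]
model = train_adaboost(X, y)
```

The exhaustive threshold search is fine for a handful of points; real feature spaces need the per-feature search the later slides describe.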

56 AdaBoost

57 Gambling Uri Gari KT

58 Object Detection and Localization. Three families of approaches: – Parts based: the object is defined as a spatial arrangement of small parts. – Region based: use segmentation to extract a region of the image from the background and deduce shape and texture information from its local features. – Patch based: use local features to classify each rectangular image region as object or background. Object detection is reduced to a binary classification problem, i.e. compute just P(O_i^c = 1 | v_i^c), where O_i^c = 1 if patch i contains (part of) an object of class c, and v_i^c is the feature vector for patch i computed for class c.

59 Feature Vector for a Patch: Step 1

60 Feature Vector for a Patch: Step 2

61 Feature Vector for a Patch: Step 3

62 Summary: Feature Vector Extraction. 12 × 30 × 2 = 720 features

63 Filters and Spatial Templates

64 Object Detection ….. Do I need all the features for a given object class? If not, which features should I extract for a given object class? – Use training to learn which features are more important than others.

65 Classifier: Boosted Features. What is available? – Training data: v = the features of patches containing an object o. Weak learners pay attention to single features: – h_t(v) picks the best feature and threshold. The output is a weighted vote, where – h_t(v) = the output of the weak classifier at round t, and – α_t = the weight assigned by boosting. ~100 rounds of boosting.

66 Examples of Learned Features

67 Example Detections

68 Using the Gist for Object Localization. Use the gist to predict the possible location of the object. Should I run my detectors only in that region? – No! We would miss the detection if the object is at any other location. – So, search everywhere but penalize detections that are far from the predicted location. But how?
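One way to realize "search everywhere but penalize distant detections" is a soft down-weighting of the detector score by distance from the gist-predicted location. A sketch; the Gaussian falloff and the `sigma` value are illustrative choices, not the talk's exact scheme:

```python
import math

def penalized_score(det_score, loc, predicted_loc, sigma=50.0):
    """Down-weight a local detector score by a Gaussian falloff with
    distance from the context-predicted location. sigma (in pixels)
    controls how much we trust the gist prediction; 50 is arbitrary."""
    dx = loc[0] - predicted_loc[0]
    dy = loc[1] - predicted_loc[1]
    return det_score * math.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))

near = penalized_score(0.9, (100, 100), (110, 105))   # barely reduced
far = penalized_score(0.9, (400, 300), (110, 105))    # heavily reduced
```

Because the penalty is soft, a sufficiently strong detection far from the predicted region can still win, which is exactly the property the slide argues for.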

69 Using the Gist for Object Localization …. Construct a feature vector which combines the output of the boosted classifier and the difference between the patch location and the predicted location. Train another classifier on it to compute the final detection probability.

70 Using the Gist for Object Localization ….

71 Summary Context Modeling –Previous Models –Scene based Context Model

72 Summary Context Modeling Context Based Applications –Place Identification –Object Priming –Control of Focus of Attention –Scale Selection –Scene Classification

73 Summary Context Modeling Context Based Applications Joint Local and Global Features Applications –Object Detection and Localization

