1
My Group’s Current Research on Image Understanding
2
An image-understanding task
3
Low-level vision
4
Color, Shape, Texture Low-level vision
5
Color, Shape, Texture Simple Segmentation Low-level vision
6
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition
7
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition High-level perception
8
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition High-level perception Pattern recognition
9
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition High-level perception Pattern recognition Analogy-making
10
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition High-level perception Pattern recognition “Meaning” Analogy-making
11
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition High-level perception ??? Pattern recognition “Meaning” Analogy-making
12
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition High-level perception Pattern recognition “Meaning” Analogy-making The “SEMANTIC GAP”
13
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition High-level perception Pattern recognition “Meaning” Analogy-making HMAX model of visual cortex Riesenhuber, Poggio, et al. The “SEMANTIC GAP”
14
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition High-level perception Pattern recognition “Meaning” Analogy-making Active Symbol Architecture for high-level perception Hofstadter et al. HMAX model of visual cortex Riesenhuber, Poggio, et al. The “SEMANTIC GAP”
15
Color, Shape, Texture Simple Segmentation Low-level vision Object recognition High-level perception Pattern recognition “Meaning” Analogy-making Active Symbol Architecture for high-level perception Hofstadter et al. HMAX model of visual cortex Riesenhuber, Poggio, et al. The “SEMANTIC GAP”
16
The HMAX model for object recognition (Riesenhuber, Poggio, Serre, et al.)
17
Recognition phase of the Streetscenes “scene understanding” system (Bileschi, 2006):
1. Densely tile the image with windows of different sizes.
2. HMAX features are computed in each window.
3. The features in each window are given as input to the trained support vector machine.
4. If the SVM returns a score above a learned threshold, then the object is said to be “detected”.
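To make the exhaustive-search recipe above concrete, here is a minimal Python sketch of a sliding-window detection loop of this kind. The feature extractor and per-class SVMs are passed in as stand-in callables (hypothetical names); this is a sketch of the idea, not the actual Streetscenes/HMAX code.

```python
# Minimal sketch of exhaustive sliding-window detection (stand-in functions,
# not the actual Streetscenes/HMAX implementation).
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence, Tuple

@dataclass
class Detection:
    label: str
    window: Tuple[int, int, int]   # (row, col, size)
    score: float

def detect_objects(
    image_shape: Tuple[int, int],
    window_sizes: Sequence[int],
    stride: int,
    extract_features: Callable[[Tuple[int, int, int]], List[float]],
    svms: Dict[str, Callable[[List[float]], float]],
    thresholds: Dict[str, float],
) -> List[Detection]:
    """Tile the image with windows, compute features in each window, and score
    every window against every object-class SVM."""
    height, width = image_shape
    detections: List[Detection] = []
    for size in window_sizes:                                   # all window sizes
        for row in range(0, height - size + 1, stride):         # all locations
            for col in range(0, width - size + 1, stride):
                feats = extract_features((row, col, size))
                for label, svm in svms.items():                 # all categories
                    score = svm(feats)
                    if score > thresholds[label]:               # learned threshold
                        detections.append(Detection(label, (row, col, size), score))
    return detections

# Toy usage with a stand-in feature extractor and a single fake "car" SVM.
print(detect_objects(image_shape=(128, 128), window_sizes=[64], stride=64,
                     extract_features=lambda win: [float(win[2])],
                     svms={"car": lambda feats: 0.9},
                     thresholds={"car": 0.5}))
```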
18
Object detection (here, “car”) with HMAX model (Bileschi, 2006)
19
Some limitations of the Streetscenes approach to scene understanding
20
Requires exhaustive search for object identification and localization
21
Some limitations of the Streetscenes approach to scene understanding Requires exhaustive search for object identification and localization Exhaustive search over:
22
Some limitations of the Streetscenes approach to scene understanding Requires exhaustive search for object identification and localization Exhaustive search over: Window size and location in the image
23
Some limitations of the Streetscenes approach to scene understanding Requires exhaustive search for object identification and localization Exhaustive search over: Window size and location in the image Object categories (e.g., car, pedestrian, tree, etc.)
24
Some limitations of the Streetscenes approach to scene understanding Requires exhaustive search for object identification and localization Exhaustive search over: Window size and location in the image Object categories (e.g., car, pedestrian, tree, etc.) Exhaustive use of HMAX features in each window
25
Does not recognize spatial and abstract relationships among objects for whole scene understanding
26
Has no prior knowledge about object categories and their place in “conceptual space”
27
Does not recognize spatial and abstract relationships among objects for whole scene understanding Has no prior knowledge about object categories and their place in “conceptual space” HMAX model is completely feed-forward; no feedback to allow context to aid in scene understanding.
28
Goal of our project: Perform whole-scene interpretation without exhaustive search.
– Incorporate conceptual knowledge
– Allow feedforward and feedback modes to interact
29
A Simple Semantic Network (or “Ontology”) for “Dog walking” [Diagram: Person, Dog, leash; links: holds, attached to, walking action]
30
But... http://www.dogasaur.com/blog/wp-content/uploads/2011/04/dogwalker.jpg
31
But... http://www.vet.k-state.edu/depts/development/lifelines/images/dog_jog_1435.jpg
32
“Dog walking” [Diagram: Person, Dog, Dog Group, leash; links: holds, attached to, walking action, running]
33
Allowing “conceptual slippage” in the “Dog walking” network [Diagram: Person, Dog, Dog Group, leash; links: holds, attached to, walking action, running]
34
But... http://3.bp.blogspot.com/_1YuoCTv4oKQ/S71jUDm7kOI/AAAAAAAAAak/jz4Pg7zzzQ8/s1600/23743577.JPG
35
http://lh3.ggpht.com/-ZZrYWeBFTjo/SFQH_0ijwaI/AAAAAAAABjA/8nwryW2BmEw/IMG_0356.JPG
36
“Dog walking” [Diagram: Person, Dog, Dog Group, Cat, Iguana, Tail, leash; links: holds, attached to, walking action, running]
37
But... http://www.mileanhour.com/post/Dog-walking-bike.aspx
38
http://cl.jroo.me/z3/Z/e/C/d/a.aaa-Thus-walking-dog.png
39
http://thedaemon.com/images/DARPA_Segue_Dog.jpg
40
http://www.bikeforest.com/product45422.jpg
41
http://www.k9ring.com/blog/image.axd?picture=2010%2F3%2Fwalking_dog_from_car.jpg
42
http://www.guy-sports.com/fun_pictures/dog_walking_helicopter.jpg
43
http://static.themetapicture.com/media/funny-dog-walking-horse-leash.jpg
44
http://macwetblog.files.wordpress.com/2012/05/dog-walking.jpg
45
“Dog walking” [Diagram: Person, Dog, Dog Group, Cat, Iguana, Horse, Tail, leash; links and slippages: holds, attached to, walking action, running, Biking, Car, Helicopter, Driving, Segue-ing, Treadmill-ing]
46
Active Symbol Architecture (Hofstadter et al., 1995)
47
Basis for:
– Copycat (analogy-making), Hofstadter & Mitchell
– Tabletop (analogy-making), Hofstadter & French
– Metacat (analogy-making and self-awareness), Hofstadter & Marshall
and many others…
49
Active Symbol Architecture (Hofstadter et al., 1995) [Diagram: Semantic network, Workspace, Temperature] Perceptual agents (codelets) are “active symbols”.
50
Petacat (descendant of Copycat, part of the PetaVision project)
– Integration of the Active Symbol Architecture and HMAX
– Initial task: Decide whether an image is an instance of “taking a dog for a walk”, and if so, how good an instance it is.
51
Workspace
52
Semantic network Workspace
53
Semantic Network [Diagram: the “taking a dog for a walk” node connected by links such as “has component”, “has action”, and “has location” to Object nodes (person, dog, cat, horse, a road, a beach, trail, grass, sidewalk, leash, rope, belt, string, outdoors, indoors), Action nodes (walks, runs, drives, flies, swims, stands, sits), and Spatial Relation nodes (is on, is touching, is in front of, is behind, is next to)]
54
Semantic Network, with property links and slip links [Diagram: the same network as above, with property links and slip links drawn as distinct link types]
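As an illustration of how a network like this might be represented in code, here is a minimal Python sketch: concept nodes carry activation, property links carry labels such as “has component”, and slip links mark pairs of concepts between which conceptual slippage is allowed. All class and variable names here are hypothetical, not Petacat’s actual data structures.

```python
# Minimal sketch of a semantic network with property links and slip links
# (hypothetical class names, not Petacat's actual data structures).
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Concept:
    name: str
    category: str = "Object"      # "Object", "Action", or "Spatial Relation"
    activation: float = 0.0       # spreads to related concepts during a run

@dataclass
class SemanticNetwork:
    concepts: Dict[str, Concept] = field(default_factory=dict)
    property_links: List[Tuple[str, str, str]] = field(default_factory=list)   # (src, label, dst)
    slip_links: List[Tuple[str, str, float]] = field(default_factory=list)     # (a, b, slippage cost)

    def add_concept(self, name: str, category: str = "Object") -> None:
        self.concepts[name] = Concept(name, category)

    def add_property_link(self, src: str, label: str, dst: str) -> None:
        self.property_links.append((src, label, dst))

    def add_slip_link(self, a: str, b: str, cost: float) -> None:
        self.slip_links.append((a, b, cost))

net = SemanticNetwork()
for name in ["taking a dog for a walk", "person", "dog", "cat", "leash", "rope", "outdoors"]:
    net.add_concept(name)
net.add_concept("walks", category="Action")
net.add_property_link("taking a dog for a walk", "has component", "person")
net.add_property_link("taking a dog for a walk", "has component", "dog")
net.add_property_link("taking a dog for a walk", "has component", "leash")
net.add_property_link("taking a dog for a walk", "has location", "outdoors")
net.add_property_link("person", "has action", "walks")
net.add_slip_link("dog", "cat", 0.5)      # related pets: slippage allowed at a cost
net.add_slip_link("leash", "rope", 0.3)   # leash-like objects: slippage allowed at a cost
```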
55
Semantic Network [Diagram: the same network as above]
56
[Diagram: the same semantic network as above]
57
[Diagram: the same semantic network as above]
58
[Diagram: the same semantic network as above]
59
[Diagram: the same semantic network as above]
60
[Diagram: the same semantic network as above]
61
[Diagram: the same semantic network as above]
62
[Diagram: the same semantic network as above]
63
Temperature
Measures how well organized the program’s “understanding” is as processing proceeds:
– Little organization → high temperature
– Lots of organization → low temperature
Temperature feeds back to affect perceptual agents:
– High temperature → low confidence in decisions → decisions are made more randomly
– Low temperature → high confidence in decisions → decisions are made more deterministically
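One plausible way to realize this feedback is a softmax-style choice rule: codelet urgencies are sharpened at low temperature and flattened at high temperature. The sketch below uses a hypothetical formula and hypothetical codelet names to show the idea; it is not Petacat’s actual rule.

```python
# Minimal sketch of temperature-modulated choice among codelets
# (hypothetical formula and names).
import random
from typing import List, Tuple

def temperature_weighted_choice(options: List[Tuple[str, float]], temperature: float) -> str:
    """Pick an option in proportion to urgency ** (1 / temperature):
    high temperature -> nearly random; low temperature -> nearly deterministic."""
    exponent = 1.0 / max(temperature, 1e-3)          # avoid division by zero
    weights = [max(urgency, 1e-9) ** exponent for _, urgency in options]
    names = [name for name, _ in options]
    return random.choices(names, weights=weights)[0]

codelets = [("builder: dog", 0.8), ("scout: sidewalk", 0.4), ("scout: person", 0.3)]
print(temperature_weighted_choice(codelets, temperature=0.95))  # close to random
print(temperature_weighted_choice(codelets, temperature=0.05))  # almost always "builder: dog"
```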
64
Input image
65
Weak segmentation
66
Input image. Weak segmentation. Location “heat map” (probability distribution over pixel locations).
67
Input image. Weak segmentation. Location “heat map” (probability distribution over pixel locations). Scale “heat map” (probability distribution over scales at each pixel location).
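A minimal sketch of how such heat maps could be represented and sampled from: a probability distribution over pixel locations, and for each location a distribution over window scales. The dictionary-based representation and all names are assumptions for illustration, not the system’s actual representation.

```python
# Minimal sketch of location/scale heat maps as discrete distributions
# (hypothetical representation).
import random
from typing import Dict, List, Tuple

def normalize(weights: Dict[Tuple[int, int], float]) -> Dict[Tuple[int, int], float]:
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

def sample_window(location_heat: Dict[Tuple[int, int], float],
                  scale_heat: Dict[Tuple[int, int], List[float]],
                  scales: List[int]) -> Tuple[Tuple[int, int], int]:
    """Sample a pixel location from the location heat map, then a scale from
    that location's scale distribution."""
    locations = list(location_heat.keys())
    loc = random.choices(locations, weights=[location_heat[l] for l in locations])[0]
    scale = random.choices(scales, weights=scale_heat[loc])[0]
    return loc, scale

# Toy example: two candidate locations, three candidate window scales.
location_heat = normalize({(40, 60): 3.0, (120, 200): 1.0})
scale_heat = {(40, 60): [0.2, 0.5, 0.3], (120, 200): [0.6, 0.3, 0.1]}
print(sample_window(location_heat, scale_heat, scales=[32, 64, 128]))
```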
68
Scout codelets: Send C1 features in window to corresponding SVM. If positive result, post builder codelet with urgency equal to SVM’s confidence. [Image window: Dog?]
69
Scout codelets: Send C1 features in window to corresponding SVM. If positive result, post builder codelet with urgency equal to SVM’s confidence. [Image windows: Dog? Person?]
70
Scout codelets: Send C1 features in window to corresponding SVM. If positive result, post builder codelet with urgency equal to SVM’s confidence. [Image windows: Dog? Sidewalk? Person?]
71
Scout codelets: Send C1 features in window to corresponding SVM. If positive result, post builder codelet with urgency equal to SVM’s confidence. [Image windows: Dog? Sidewalk? Person? Dog? Outdoors?]
72
Scout codelets: Send C1 features in window to corresponding SVM. If positive result, post builder codelet with urgency equal to SVM’s confidence. [Image windows: Dog? negative; Dog? negative; Sidewalk? positive: 0.4; Person? negative; Outdoors? positive: 0.7; Dog? positive: 0.8]
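A minimal sketch of the scout-codelet step described on these slides: score one window for one object class cheaply, and if the SVM result is positive, post a builder codelet whose urgency equals the SVM’s confidence. The C1-feature and SVM functions are stand-ins, and all names are hypothetical.

```python
# Minimal sketch of a scout codelet (hypothetical names; C1 features and SVMs
# are stand-in callables).
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class BuilderCodelet:
    label: str
    window: Tuple[int, int, int]   # (row, col, size)
    urgency: float                 # SVM confidence reported by the scout

def run_scout(label: str,
              window: Tuple[int, int, int],
              compute_c1: Callable[[Tuple[int, int, int]], List[float]],
              svm_confidence: Callable[[List[float]], float],
              coderack: List[BuilderCodelet]) -> None:
    """Scout codelet: a cheap first look at one window for one object class."""
    confidence = svm_confidence(compute_c1(window))
    if confidence > 0.0:                       # positive SVM result
        coderack.append(BuilderCodelet(label, window, urgency=confidence))

# Toy usage with stand-in feature and SVM functions.
coderack: List[BuilderCodelet] = []
run_scout("dog", (40, 60, 64),
          compute_c1=lambda w: [0.1, 0.7, 0.2],
          svm_confidence=lambda feats: 0.8,
          coderack=coderack)
print(coderack)   # [BuilderCodelet(label='dog', window=(40, 60, 64), urgency=0.8)]
```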
73
Builder codelets: Ask HMAX to compute C2 features using prototype shapes specific to the object class, and send them to corresponding SVM. If positive, decide to build structure with probability equal to SVM confidence. Break competing structures if necessary. [Image windows: Dog? negative; Dog? negative; Sidewalk? positive: 0.4; Person? negative; Outdoors? positive: 0.7; Dog? positive: 0.8]
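Likewise, a minimal sketch of the builder-codelet step: recheck the window with class-specific C2 features, build the object with probability equal to the SVM’s confidence, and break any weaker competing structure at the same window. The C2-feature and SVM functions are stand-ins, and all names are hypothetical.

```python
# Minimal sketch of a builder codelet (hypothetical names; C2 features and
# SVMs are stand-in callables).
import random
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class BuiltObject:
    label: str
    window: Tuple[int, int, int]
    strength: float

def run_builder(label: str,
                window: Tuple[int, int, int],
                compute_c2: Callable[[Tuple[int, int, int], str], List[float]],
                svm_confidence: Callable[[List[float]], float],
                workspace: List[BuiltObject]) -> None:
    """Builder codelet: a more expensive, class-specific second look."""
    confidence = svm_confidence(compute_c2(window, label))
    if confidence <= 0.0:
        return                                   # negative SVM result
    if random.random() >= confidence:
        return                                   # probabilistic build decision
    # Break any weaker competing structure occupying the same window.
    for existing in list(workspace):
        if existing.window == window and existing.strength < confidence:
            workspace.remove(existing)
    if not any(obj.window == window and obj.strength >= confidence for obj in workspace):
        workspace.append(BuiltObject(label, window, strength=confidence))

# Toy usage with stand-in C2 features and SVM (the build itself is probabilistic).
workspace: List[BuiltObject] = []
run_builder("dog", (40, 60, 64),
            compute_c2=lambda w, label: [0.2, 0.6, 0.1],
            svm_confidence=lambda feats: 0.8,
            workspace=workspace)
print(workspace)
```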
74
[Image with built objects: Outdoors, Dog]
75
[Diagram: the same semantic network as above]
76
Object-specific heat maps are updated. [Diagram: built Dog; Person heat map]
77
Object-specific heat maps are updated. [Diagram: built Dog; Person heat map; Person? scout window]
78
Object-specific heat maps are updated. As codelets build structure, heat maps are continually updated to reflect prior (learned) expectations about location and scale as a function of the location and scale of “built” objects. [Diagram: built Dog; Person heat map; Person? scout window]
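A minimal sketch of the heat-map update described above: once a dog is built, locations where a person would be expected relative to that dog get extra weight, and the map is renormalized. The offset prior, boost factor, and function names are all assumptions for illustration.

```python
# Minimal sketch of updating an object-specific heat map from a built object
# (hypothetical prior, boost, and names).
from typing import Dict, Tuple

def update_heat_map(heat: Dict[Tuple[int, int], float],
                    built_location: Tuple[int, int],
                    expected_offset: Tuple[int, int],
                    boost: float = 2.0,
                    radius: int = 50) -> Dict[Tuple[int, int], float]:
    """Multiply weight near the location predicted by the learned offset prior,
    then renormalize so the map stays a probability distribution."""
    target = (built_location[0] + expected_offset[0],
              built_location[1] + expected_offset[1])
    updated = {}
    for loc, w in heat.items():
        dist = abs(loc[0] - target[0]) + abs(loc[1] - target[1])
        updated[loc] = w * (boost if dist <= radius else 1.0)
    total = sum(updated.values())
    return {loc: w / total for loc, w in updated.items()}

# Toy example: a dog built at (120, 200) raises expectation of a person above it.
person_heat = {(40, 180): 0.25, (60, 220): 0.25, (200, 50): 0.25, (10, 10): 0.25}
person_heat = update_heat_map(person_heat, built_location=(120, 200),
                              expected_offset=(-70, 0))
print(person_heat)
```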
79
[Image with built objects Outdoors, Dog; scout windows: Dog? Leash? Leash? Sidewalk? Person?]
80
[Image with built objects Outdoors, Dog, Sidewalk, Person; Strength: 0.6]
81
[Image with built objects Outdoors, Dog, Sidewalk]
82
[Diagram: the same semantic network as above]
83
[Image with built objects Outdoors, Dog, Sidewalk; scout windows: Leash? Dog? Sidewalk? Dog? Rope?]
84
[Image with built objects Outdoors, Dog, Sidewalk, Leash, Dog (weak)]
85
[Image with built objects Outdoors, Dog, Sidewalk, Leash, Dog (weak), Dog (strong)]
86
[Image with built objects Outdoors, Dog, Sidewalk, Leash, Dog]
88
Once objects begin to be built, relation and grouping codelets can run on them. [Image with built structures: Outdoors, Sidewalk, Leash, Dog, Dog group, “is next to”]
89
Once objects begin to be built, relation and grouping codelets can run on them. [Image with built structures: Outdoors, Sidewalk, Leash, Dog, Dog, Dog group, “is next to”]
90
Once objects begin to be built, relation and grouping codelets can run on them. [Image with built structures: Outdoors, Sidewalk, Leash, Dog, Dog, Dog group, two “is next to” relations]
91
How Petacat makes a final decision [Diagram: Temperature, the “taking a dog for a walk” node, and the built workspace structures (Dog, Dog, Dog group, Leash, Sidewalk, Outdoors, “is next to” relations)]
92
How Petacat makes a final decision: the “Situation” codelet is more likely to run when temperature is low. [Diagram: Temperature, the “taking a dog for a walk” node, and the built workspace structures]
93
The situation codelet tries to match the prototypical situation with existing workspace structures, possibly allowing slippages. [Image with built structures: Dog, Dog, Dog group, Leash, Sidewalk, Outdoors, “is next to” relations]
94
The situation codelet tries to match the prototypical situation with existing workspace structures, possibly allowing slippages. [Diagram: the prototypical “taking a dog for a walk” situation (person, dog, leash, outdoors; has component, has location, is in front of, is next to) being matched against the built workspace structures]
95
[Diagram: the prototypical “taking a dog for a walk” situation being matched against the built workspace structures (Dog, Dog, Dog group, Leash, Sidewalk, Outdoors)]
96
If the resulting temperature is low enough, classify the scene as positive. [Diagram: the prototypical situation matched against the built workspace structures]
97
If the situation codelet fails enough times or does not run for a long time, the program has an increasing chance of ending with a negative classification. If the resulting temperature is low enough, classify the scene as positive. [Diagram: the prototypical situation matched against the built workspace structures]
98
Temperature at the end of the run gives a measure of how good an instance the picture is (e.g., of the “dog walking” situation).
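Putting the final-decision slides together, here is a minimal sketch of how a run might end: a situation codelet runs more often at low temperature, a good match to the prototypical situation lowers temperature, a low enough temperature yields a positive classification, and repeated failure makes a negative ending increasingly likely, with the final temperature doubling as a quality score. The thresholds, probabilities, and temperature-update rule are all hypothetical, not Petacat’s actual values.

```python
# Minimal sketch of the end-of-run decision loop (hypothetical thresholds,
# probabilities, and temperature-update rule).
import random

def maybe_run_situation_codelet(temperature: float) -> bool:
    """The situation codelet is more likely to run when temperature is low."""
    return random.random() < (1.0 - 0.9 * temperature)

def classify_scene(match_quality, temperature: float = 1.0,
                   positive_threshold: float = 0.3,
                   max_failures: int = 20) -> str:
    """match_quality() -> a value in [0, 1]: how well the prototypical
    "taking a dog for a walk" situation matches current workspace structures,
    slippages included (a stand-in for the real matcher)."""
    failures = 0
    while True:
        if maybe_run_situation_codelet(temperature):
            quality = match_quality()
            # A good match organizes the workspace and lowers temperature.
            temperature = min(temperature, 1.0 - quality)
            if temperature < positive_threshold:
                return f"positive (final temperature {temperature:.2f})"
        failures += 1
        # Repeated failure (or long inaction) makes a negative ending more likely.
        if failures > max_failures and random.random() < failures / (4 * max_failures):
            return f"negative (final temperature {temperature:.2f})"

print(classify_scene(lambda: random.uniform(0.4, 0.9)))
```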