Labeling Images for FUN!!! Yan Cao, Chris Hinrichs
How do you improve learning systems? Get more processing power (faster computers, more memory, more parallelism). Find a more sophisticated algorithm. Or get lots and lots of quality data.
Why manually label images? It is a job that is easy for humans but challenging for computer vision. Why do it? To acquire ground truth: – Segmentation, i.e. extracting objects from an image, is hard – Objects appear in multiple poses and views – Depth ordering matters: which object is in front when two overlap – Relationships hold between objects and their parts, e.g. face and eyes, car and wheels
The general idea for making computers do the labeling is supervised learning. It requires enough training data: images with manually pre-assigned labels. Classifiers are trained on this data and then used to label the queried images. If we also want to segment the queried images, the training images must include boundary information for the objects they contain.
Who is willing to volunteer? Manually labeling numerous images is a tedious job. Motivations that can make humans do something: – Money! You know you will be paid – Fun! You enjoy doing it – Respect gained from others
ESP – an image-labeling game. Rules: – The server randomly assigns you a partner (who could be a "bot") – The same image is shown on both partners' screens – When the labels typed by the partners match, both gain points and move on to the next image – There may be taboo words that cannot be used as labels for the image
ESP – an image-labeling game. Rules (continued): – Partners strive to agree on as many images as they can in 2.5 minutes – Partners can skip an image when both click the "Pass" button – The more images the partners agree on, the higher their final scores
Taboo words are gained from the game itself: – When an image is shown in ESP for the first time, it has no taboo words – When the image is used again, its taboo words are the labels agreed upon in earlier rounds. There are at most 6 taboo words per image. Taboo words guarantee that each image accumulates many different labels.
Good-label threshold. This is the threshold for adding a label to an image's taboo-word list. If the threshold is 1, then as soon as one pair of partners agrees on a label, that label becomes a taboo word. If the threshold is 10, a label becomes taboo only after 10 pairs of partners have agreed on it for the image.
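The threshold bookkeeping can be sketched as follows (a hypothetical implementation for illustration; the function name and data structures are not from the paper):

```python
def update_taboo(taboo_words, agreement_counts, label, threshold=1, max_taboo=6):
    """Record one more pair agreeing on `label`; promote it to taboo once
    `threshold` pairs have agreed, up to 6 taboo words per image."""
    agreement_counts[label] = agreement_counts.get(label, 0) + 1
    if (agreement_counts[label] >= threshold
            and label not in taboo_words
            and len(taboo_words) < max_taboo):
        taboo_words.append(label)
    return taboo_words
```

With a threshold of 2, the first agreement on "dog" leaves the taboo list empty; the second promotes it.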
Image source. Images are randomly selected from the Web with a small amount of filtering, using "Random Bounce Me", which returns random images from Google's index. Qualifying images must be: – Large enough (more than 20 pixels in each dimension) – Of aspect ratio between 1/4.5 and 4.5 – Not blank or single-color
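The qualification checks might look like this (an illustrative sketch; the paper's exact thresholds and tests may differ):

```python
def qualifies(width, height, is_single_color):
    """Return True if an image meets the filtering criteria listed above."""
    if width <= 20 or height <= 20:            # large enough in each dimension
        return False
    if not (1 / 4.5 < width / height < 4.5):   # aspect ratio between 1/4.5 and 4.5
        return False
    return not is_single_color                 # reject blank/single-color images
```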
Evaluation. Are the labels relevant to the images? – Run searches within the labeled images in the ESP database. Are the players motivated by the game? – Compute statistics from the user logs. What is the labeling rate? – Count how many images are labeled within a given time period.
Accuracy of labels. 20 images were randomly selected from ESP, and 15 participants were asked to give 6 labels for each image, with no information about the taboo words. When the participants' labels were compared with the labels obtained from the game, 83% of them matched. For every image, the 3 most common words entered by the participants were among the ESP labels.
Example: some images labeled with “car”
Is it fun? Over 80% of users played the game on multiple dates. In 4 months, 33 players each spent more than 50 hours playing.
Labeling rate. If 5,000 users were online around the clock (an easy number for online games to reach), all 425,000,000 images in Google's index would be labeled within a month!
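A quick back-of-envelope check of this claim, using only the numbers on the slide (the per-user rate is derived here, not taken from the paper):

```python
# Numbers from the slide: 5,000 users, one month (~30 days), 425M images.
users = 5000
hours_in_month = 30 * 24
total_images = 425_000_000
per_user_per_hour = total_images / (users * hours_in_month)
print(round(per_user_per_hour))  # each user would need to help label ~118 images/hour
```

Roughly 118 images per user per hour is about two agreed labels per minute, which is plausible for a fast-paced game.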
More than labeling. What if the players could tell us more about the images, such as where the objects are? Peekaboom is a game that is fun and, at the same time, collects information beyond labels.
Peekaboom
Rules of Peekaboom. Pairs of partners are randomly matched by the server. One partner sees the whole image and its label (the Boom side); the other sees a blank screen with an input box at the bottom (the Peek side). The Boom partner clicks on the image, and each click reveals an area with a 20-pixel radius to the Peek partner.
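The reveal mechanic can be sketched with a boolean visibility mask (the 20-pixel radius comes from the slide; the rest of the interface here is an assumption):

```python
import numpy as np

def reveal(mask, click, radius=20):
    """Mark as visible all pixels within `radius` of the click point.
    `mask` is a boolean (H, W) visibility map, `click` an (x, y) pair."""
    h, w = mask.shape
    ys, xs = np.ogrid[:h, :w]      # column and row coordinate grids
    cx, cy = click
    mask |= (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    return mask
```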
Rules of Peekaboom (continued). Based on the revealed parts, the Peek partner enters labels until one matches the label shown on the Boom side. The Boom partner can give hints to help the Peek partner find the right label: – "Ping" the key parts of the image – Indicate how the word is related to the image
Hints given by the boom partner
Rules of Peekaboom (continued). The partners alternate between the Peek and Boom roles. For images with a hard-to-guess label, the partners can choose to pass. The more images they correctly label in 2.5 minutes, the higher their score. To make the game more fun, bonus rounds are added and users are ranked by their scores.
Information collected by Peekaboom: – How the word relates to the image (from hints) – The pixels necessary to guess the word – The pixels inside the object, animal, or person (from pings) – The most salient aspects of the objects in the image (from the sequence of clicks) – Elimination of poor image-word pairs (from passing)
Applications based on the collected information. Improving image-search results: images in which the word refers to a higher fraction of the total pixels should be ranked higher. Computing bounding boxes of objects.
Applications based on the collected information (continued). Using ping data for pointing.
Evaluation. Do people have fun? – More than 90% of players play multiple times on different days – The players on the "Top Scores" list all played over 53 hours. How accurate is the collected data? – Bounding boxes: participants vs. Peekaboom, 0.754 overlap – Pings: participants vs. Peekaboom, 100% accuracy!
LabelMe. Russell et al., MIT CSAIL
Improving on image captions. Many available image databases have a caption for every image saying what it contains. LabelMe lets users go further: they draw their own bounding boxes around objects and label them directly. LabelMe's authors claim the pictures are taken from a wide variety of places (they seem to be mostly street scenes and other travel photos, plus a few house interiors).
How do you participate? Just go to http://labelme.csail.mit.edu/. You are given an image, which may or may not already have drawn boundaries. If you see an object you can identify, draw a boundary around it; when you close the polygon, you are asked for a label. There are no rules on how to choose labels or how tightly to draw the boundaries. They trust your judgment – and, more importantly, the results reflect people's different ideas.
How good are the bounding boxes? It varies.
More general results: the 25th, 50th, and 75th percentiles, by polygon count, of some common object types.
From the distribution of where objects are located, we can learn something about the way people take pictures: generally, people are standing when they take them.
What do the average objects look like?
Tying it in with WordNet. Some labels are synonyms: man/woman, person, pedestrian; car, automobile, cab, SUV. Each label is looked up in WordNet. The authors report that 93% of labels found a matching WordNet entry, though some manual word-sense disambiguation had to be done. This allows queries to match at various levels of specificity in the WordNet tree, and supports more general queries.
Some general queries & results, using WordNet
Dealing with occlusion: simple rules. If an object is completely contained in the overlap, it is inside. Otherwise, the object with more control points in the overlapping region is probably on top. Features like color histograms could also be used to match the overlapping region to one object or the other, but that is expensive, complicated, and does not work as well.
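The first two rules can be sketched as follows (an illustrative reading of the heuristic; `overlap_region` is a hypothetical point-in-overlap predicate, not part of LabelMe's code):

```python
def depth_order(points_a, points_b, overlap_region):
    """Apply the simple occlusion rules: a polygon fully inside the overlap
    is 'inside'; otherwise the one with more control points in the overlap
    is judged to be on top."""
    in_a = sum(1 for p in points_a if overlap_region(p))
    in_b = sum(1 for p in points_b if overlap_region(p))
    if in_a == len(points_a):
        return "a_inside"
    if in_b == len(points_b):
        return "b_inside"
    return "a_on_top" if in_a > in_b else "b_on_top"
```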
Depth ordering results
Image search re-ranking. Segment the query image, extract features, compare them with the features of regions labeled with the search terms, and reorder the results by the strength of the correlation.
80 Million Tiny Images. Torralba et al. http://people.csail.mit.edu/torralba/tinyimages/
Shrinking images. How much information does an image need to contain for its contents to be identifiable? Why not ask humans before asking computers? Torralba et al. looked for the minimum resolution humans need in order to identify the contents of an image.
Can you tell what these are?
Note that for color images, human accuracy levels off at 32x32; for grayscale, the same happens at 64x64. At 32x32, humans did much better than the best recognition algorithms did at full resolution. A 32x32 color image has 32x32x3 = 3,072 dimensions, and a 64x64 grayscale image has 4,096, with very nearly the same accuracy, so roughly 3,000 dimensions suffice for recognition.
Next: acquire a huge number of images. Where do you start? Even at reduced resolution, there are far too many images out there to get them all. Start with WordNet: for each of the 75,062 concrete nouns in WordNet, run an image-retrieval search on several search engines (they used Google, Cydral, AltaVista, Flickr, Picsearch, and Webshots). Then eliminate duplicates and solid-color images. About 10% of the words were rare and had no matching images.
Finding nearest neighbors. A distance metric is needed to compare the tiny images. They examine three: SSD (sum of squared differences), Warp, and Shift.
SSD. Normal SSD is computed by summing the squared difference over all dimensions. Computing the distance between all pairs this way is too expensive, so they used only the top 19 principal components. Experiments showed that this approximation works reliably.
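The idea of approximating SSD in a low-dimensional principal subspace can be sketched like this (a minimal sketch, not the paper's exact pipeline):

```python
import numpy as np

def pca_ssd(query, database, n_components=19):
    """Approximate SSD from `query` to every row of `database` using only the
    top principal components. `database` is an (N, D) array of flattened
    tiny images; `query` is a length-D vector."""
    mean = database.mean(axis=0)
    centered = database - mean
    # Principal directions come from the SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = vt[:n_components]              # (k, D) projection matrix
    db_low = centered @ proj.T            # (N, k) low-dimensional database
    q_low = (query - mean) @ proj.T       # (k,) low-dimensional query
    return ((db_low - q_low) ** 2).sum(axis=1)
```

Projecting once and comparing 19 numbers per pair is far cheaper than comparing ~3,000 raw dimensions.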
Warp & Shift. Warp: warp the image in some simple way, such as flipping, scaling, or translating, and see whether that improves the SSD. Shift: allow each pixel to shift within a 5x5 window and take the best SSD from that (a crude approximation of general warping).
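A minimal sketch of the per-pixel Shift distance, assuming a plain nested-loop implementation (border pixels use the window clipped to the image):

```python
import numpy as np

def shift_ssd(a, b, max_shift=2):
    """Per-pixel Shift distance: each pixel of `a` matches the closest-valued
    pixel of `b` within a (2*max_shift+1)^2 window (5x5 for max_shift=2),
    and the minimum squared differences are summed."""
    h, w = a.shape
    total = 0.0
    for y in range(h):
        for x in range(w):
            window = b[max(0, y - max_shift):y + max_shift + 1,
                       max(0, x - max_shift):x + max_shift + 1]
            total += float(((window - a[y, x]) ** 2).min())
    return total
```

Because each pixel can move independently, small translations of the whole image cost almost nothing under this metric.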
Effect of database size. As the database grows, the quality of the nearest neighbors noticeably improves, even up to ~100,000,000 images.
Applications: – Object recognition – Image-retrieval re-ranking – Person detection & localization – Image colorization
Recognition. Recognition is done by finding the query's neighbors and retrieving the WordNet entry for each. Each entry corresponds to a unique leaf node in the WordNet tree and gets a single "vote". The branches are unified into a tree, with internal nodes weighted by how many branches pass through them. Classification then proceeds by repeatedly following the link to the highest-voted child node.
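The voting scheme can be sketched as follows. This is a simplification: instead of walking the tree child by child, it picks the most-voted node at a fixed depth, and `hypernym_paths` is an assumed precomputed map from label to root-to-leaf WordNet path:

```python
from collections import Counter

def classify(neighbor_labels, hypernym_paths, depth):
    """Each neighbor's label votes for every node on its WordNet path;
    return the most-voted node at the given tree depth."""
    votes = Counter()
    for label in neighbor_labels:
        for node in hypernym_paths.get(label, ()):
            votes[node] += 1
    candidates = {p[depth] for p in hypernym_paths.values() if len(p) > depth}
    return max(candidates, key=lambda n: votes[n])
```

Voting over internal nodes lets noisy leaf labels (dog vs. cat) still agree on a broader category (animal).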
Image search re-ranking. Run an image search, say for "person", on any retrieval engine. Then, for each returned image, measure the correlation between the search term and the image's neighbor set, and re-rank the results by the strength of that correlation.
Person detection. Shown are images matched to the WordNet node "person", together with their nearest neighbors. Note that the neighbors match the part of the person visible in the query image, as well as the pose and clothing color. Here the system only reports whether the best match passes through the "person" internal node. The Web has a large bias toward images containing people, so this method will not transfer equally well to categories other than people.
Person localization. Given a portion of an image, we can find its neighbors and measure the correlation with "person" in that set. Extending this, we can find the portion of a query image whose neighbor set has the highest correlation with "person"; that region is very likely to contain a person.
Colorization. Given a grayscale query image, find its neighbor set and take the average color of the set, then apply that coloring to the grayscale image. Surprisingly, this works, especially given that the neighbor images are not even all of the same type of object!
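One simple way the averaging step could be realized (the modulation by intensity is an assumption about the transfer step; the paper's exact method may differ):

```python
import numpy as np

def colorize(gray, neighbor_colors):
    """Average the colors of the neighbor set, then modulate the result by
    the grayscale intensity of the query image. `gray` is (H, W);
    `neighbor_colors` is a list of (H, W, 3) color images."""
    avg = np.mean(neighbor_colors, axis=0)                 # (H, W, 3) mean color
    scale = gray[..., None] / max(float(gray.max()), 1e-8) # normalized intensity
    return avg * scale
```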