CS 1674: Intro to Computer Vision Recurrent Neural Networks


1 CS 1674: Intro to Computer Vision Recurrent Neural Networks
Prof. Adriana Kovashka University of Pittsburgh December 5, 2016

2 Announcements
Next time: review for the final exam. By Tuesday at noon, send me three topics you want me to review (for participation credit!). Please do OMETs (thanks!). Grades before the final: see CourseWeb, “Overall” column (I won’t need to curve).

3 Plan for today
Motivation/history: vision and language, image captioning. Tools: recurrent neural networks. Recent problem: visual question answering, and some approaches.

4 Vision and Language
Humans don’t use their visual processing or speaking/listening abilities in isolation; they use them together. While computer vision and natural language processing are separate fields, there has been increasing interest in combining them. A popular task is image captioning: given an image, automatically generate a caption that agrees well with human-generated captions.

5 Descriptive Text
“It was an arresting face, pointed of chin, square of jaw. Her eyes were pale green without a touch of hazel, starred with bristly black lashes and slightly tilted at the ends. Above them, her thick black brows slanted upward, cutting a startling oblique line in her magnolia-white skin–that skin so prized by Southern women and so carefully guarded with bonnets, veils and mittens against hot Georgia suns” (Scarlett O’Hara described in Gone with the Wind). People sometimes produce very vivid and richly informative descriptions about the visual world; for example, here the writer says “it was an arresting face, pointed of chin, …” Berg, Attributes Tutorial CVPR13

6 More Nuance than Traditional Recognition…
[Figure: example images labeled “person”, “shoe”, “car”] You’ll notice that this human output of recognition is quite different from traditional computer vision recognition outputs, which might recognize this picture as a person, this one as a shoe, or this one as a car. Berg, Attributes Tutorial CVPR13

7 Toward Complex Structured Outputs
Categorical labels (“car”): a lot of research in visual recognition has focused on producing categorical labels for items. Berg, Attributes Tutorial CVPR13

8 Toward Complex Structured Outputs
Attributes of objects (“pink car”): today we’ve been talking about attributes, which are a first step toward producing more complex structured recognition outputs. Berg, Attributes Tutorial CVPR13

9 Toward Complex Structured Outputs
Relationships between objects (“car on road”): we can also think about recognizing the context of where objects are located with respect to the overall scene, or relative to other objects, maybe recognizing that this is a car on a road. Berg, Attributes Tutorial CVPR13

10 Toward Complex Structured Outputs
Telling the “story of an image” (“Little pink smart car parked on the side of a road in a London shopping district.”): ultimately we might like our recognition systems to produce more complete predictions about the objects, their appearance, their relationships, actions, and context, perhaps even going so far as to produce a short description of the image that tells the “story behind the image.” For this image we might like to say something like “little pink smart car…” Berg, Attributes Tutorial CVPR13

11 Some good results
“This is a picture of one sky, one road and one sheep. The gray sky is over the gray road. The gray sheep is by the gray road.”
“Here we see one road, one sky and one bicycle. The road is near the blue sky, and near the colorful bicycle. The colorful bicycle is within the blue sky.”
“This is a picture of two dogs. The first dog is near the second furry dog.”
Kulkarni et al., CVPR11

12 Some bad results
Missed detections: “Here we see one potted plant.” “This is a picture of one dog.”
False detections: “There are one road and one cat. The furry road is in the furry cat.” “This is a picture of one tree, one road and one person. The rusty tree is under the red road. The colorful person is near the rusty tree, and under the red road.” “This is a photograph of two sheeps and one grass. The first black sheep is by the green grass, and by the second black sheep. The second black sheep is by the green grass.”
Incorrect attributes: “This is a photograph of two horses and one grass. The first feathered horse is within the green grass, and by the second feathered horse. The second feathered horse is within the green grass.”
Of course it doesn’t always work! Some common mistakes are missed detections, false detections, and incorrectly predicted attributes. Kulkarni et al., CVPR11

13 Results with Recurrent Neural Networks
Karpathy and Fei-Fei, CVPR 2015

14 Recurrent Networks offer a lot of flexibility:
Vanilla Neural Networks Andrej Karpathy

15 Recurrent Networks offer a lot of flexibility:
e.g. Image Captioning image -> sequence of words Andrej Karpathy

16 Recurrent Networks offer a lot of flexibility:
e.g. Sentiment Classification sequence of words -> sentiment Andrej Karpathy

17 Recurrent Networks offer a lot of flexibility:
e.g. Machine Translation seq of words -> seq of words Andrej Karpathy

18 Recurrent Networks offer a lot of flexibility:
e.g. Video classification on frame level Andrej Karpathy

19 Recurrent Neural Network
[Diagram: an input vector x feeds into an RNN block with a recurrent loop] Andrej Karpathy

20 Recurrent Neural Network
[Diagram: RNN with input x and output y] We usually want to output a prediction y at some time steps. Adapted from Andrej Karpathy

21 Recurrent Neural Network
We can process a sequence of vectors x by applying a recurrence formula at every time step:
h_t = f_W(h_{t-1}, x_t)
where h_t is the new state, h_{t-1} is the old state, x_t is the input vector at some time step, and f_W is some function with parameters W. Andrej Karpathy

22 Recurrent Neural Network
We can process a sequence of vectors x by applying a recurrence formula at every time step: h_t = f_W(h_{t-1}, x_t). Notice: the same function and the same set of parameters are used at every time step. Andrej Karpathy

23 (Vanilla) Recurrent Neural Network
The state consists of a single “hidden” vector h:
h = tanh(Whh * h + Wxh * x)
y = Why * h
Andrej Karpathy
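To make this concrete, here is a minimal numpy sketch (mine, not from the lecture; the sizes and random weights are purely illustrative) of the vanilla recurrence, applying the same weights at every time step:

import numpy as np

hidden, inputs, outputs = 8, 4, 4           # illustrative sizes
rng = np.random.default_rng(0)
Whh = rng.normal(scale=0.1, size=(hidden, hidden))   # state-to-state weights
Wxh = rng.normal(scale=0.1, size=(hidden, inputs))   # input-to-state weights
Why = rng.normal(scale=0.1, size=(outputs, hidden))  # state-to-output weights

h = np.zeros(hidden)                        # the single "hidden" state vector
for x in rng.normal(size=(5, inputs)):      # a sequence of 5 input vectors
    h = np.tanh(Whh @ h + Wxh @ x)          # same function, same W, every step
    y = Why @ h                             # prediction at this time step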

24 Character-level language model example
Vocabulary: [h,e,l,o]. Example training sequence: “hello”. Andrej Karpathy

25 Character-level language model example
Vocabulary: [h,e,l,o]. Example training sequence: “hello”. [Figure: the input characters “h”, “e”, “l”, “l” encoded as one-hot vectors] Andrej Karpathy

26 Character-level language model example
Vocabulary: [h,e,l,o]. Example training sequence: “hello”. [Figure: the hidden-layer states computed from the one-hot inputs by the recurrence] Andrej Karpathy

27 Character-level language model example
Vocabulary: [h,e,l,o]. Example training sequence: “hello”. [Figure: the output-layer scores at each step; the training targets are the next characters “e”, “l”, “l”, “o”] Andrej Karpathy
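Putting the example together, here is a minimal numpy sketch (my own illustration; the weights are random and untrained) of the forward pass of this character-level model on “hello”:

import numpy as np

vocab = ['h', 'e', 'l', 'o']
char_to_ix = {c: i for i, c in enumerate(vocab)}

def one_hot(c):
    x = np.zeros(len(vocab))
    x[char_to_ix[c]] = 1.0
    return x

hidden_size = 3
rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.1, size=(hidden_size, len(vocab)))
Whh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
Why = rng.normal(scale=0.1, size=(len(vocab), hidden_size))

h = np.zeros(hidden_size)
for c, target in zip('hell', 'ello'):            # inputs and next-char targets
    h = np.tanh(Wxh @ one_hot(c) + Whh @ h)      # recurrence
    scores = Why @ h                             # unnormalized next-char scores
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax
    # training would push up probs[char_to_ix[target]] via backprop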

28 Image Captioning
Explain Images with Multimodal Recurrent Neural Networks, Mao et al.
Deep Visual-Semantic Alignments for Generating Image Descriptions, Karpathy and Fei-Fei
Show and Tell: A Neural Image Caption Generator, Vinyals et al.
Long-term Recurrent Convolutional Networks for Visual Recognition and Description, Donahue et al.
Learning a Recurrent Visual Representation for Image Caption Generation, Chen and Zitnick
Andrej Karpathy

29 Image Captioning
[Diagram: a Convolutional Neural Network encodes the image; a Recurrent Neural Network generates the caption] Andrej Karpathy

30 Image Captioning
[Figure: a test image to be captioned] Andrej Karpathy

31 Image Captioning
[Figure: the test image is fed through the CNN] Andrej Karpathy

32 Image Captioning
[Figure: the CNN’s final classifier layer is discarded, marked with an X] Andrej Karpathy

33 Image Captioning
[Figure: the <START> token becomes the first RNN input x0] Andrej Karpathy

34 Image Captioning
before: h = tanh(Wxh * x + Whh * h)
now: h = tanh(Wxh * x + Whh * h + Wih * v), where v is the CNN’s feature vector for the test image
[Figure: x0 is the <START> token; the image feature v enters the recurrence through Wih to produce h0 and y0] Andrej Karpathy
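As a minimal sketch (my own; the sizes and random weights are illustrative, with a 4096-dim CNN code assumed), the image-conditioned step looks like this:

import numpy as np

hidden, vocab, feat = 8, 10, 4096            # illustrative sizes
rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.01, size=(hidden, vocab))
Whh = rng.normal(scale=0.01, size=(hidden, hidden))
Wih = rng.normal(scale=0.01, size=(hidden, feat))  # new image-to-state weights

v = rng.normal(size=feat)                    # CNN feature of the test image
x0 = np.zeros(vocab)
x0[0] = 1.0                                  # one-hot <START> token
h = np.zeros(hidden)

# before: h = np.tanh(Wxh @ x0 + Whh @ h)
# now the image feature enters the recurrence:
h0 = np.tanh(Wxh @ x0 + Whh @ h + Wih @ v)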

35 Image Captioning
[Figure: from the first output distribution y0 we sample a word, e.g. “straw”] Andrej Karpathy

36 Image Captioning
[Figure: the sampled word “straw” is fed back as the next input x1; the RNN computes h1 and y1] Andrej Karpathy

37 Image Captioning
[Figure: from y1 we sample the next word, e.g. “hat”] Andrej Karpathy

38 Image Captioning
[Figure: “hat” is fed back as the next input; the RNN computes h2 and y2] Andrej Karpathy

39 Image Captioning
Caption generated: “straw hat”. Sampling the <END> token finishes generation. Adapted from Andrej Karpathy
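The whole test-time loop, as a minimal sketch (my own; the toy vocabulary and random untrained weights are illustrative, so a trained model is needed for sensible output):

import numpy as np

words = ['<START>', 'straw', 'hat', '<END>']   # toy vocabulary
hidden, vocab, feat = 8, len(words), 16
rng = np.random.default_rng(1)
Wxh = rng.normal(scale=0.5, size=(hidden, vocab))
Whh = rng.normal(scale=0.5, size=(hidden, hidden))
Wih = rng.normal(scale=0.5, size=(hidden, feat))
Why = rng.normal(scale=0.5, size=(vocab, hidden))

def one_hot(i):
    x = np.zeros(vocab)
    x[i] = 1.0
    return x

v = rng.normal(size=feat)                  # CNN feature of the test image
h = np.zeros(hidden)
word = words.index('<START>')
caption = []
for _ in range(20):                        # cap the length in case <END> never comes
    h = np.tanh(Wxh @ one_hot(word) + Whh @ h + Wih @ v)
    p = np.exp(Why @ h)
    p /= p.sum()                           # softmax over the vocabulary
    word = rng.choice(vocab, p=p)          # sample!
    if words[word] == '<END>':             # <END> token => finish
        break
    caption.append(words[word])
print(' '.join(caption))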

40 Image Sentence Datasets
Microsoft COCO [Tsung-Yi Lin et al. 2014], mscoco.org: currently ~120K images, ~5 sentences each. Andrej Karpathy
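For instance, here is a minimal sketch of reading those reference sentences with the pycocotools package (assuming it is installed and the caption annotations have been downloaded; the file path below is illustrative):

from pycocotools.coco import COCO

coco = COCO('annotations/captions_train2014.json')  # illustrative local path
img_id = coco.getImgIds()[0]                        # pick some image
ann_ids = coco.getAnnIds(imgIds=img_id)
for ann in coco.loadAnns(ann_ids):
    print(ann['caption'])                           # one human caption per line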

41 Some Results Andrej Karpathy

42 Visual Question Answering (VQA)
Task: Given an image and a natural language open-ended question, generate a natural language answer. Aishwarya Agrawal

43 VQA Dataset Aishwarya Agrawal

44 Applications of VQA
An aid to the visually impaired: “Is it safe to cross the street now?” Aishwarya Agrawal

45 Applications of VQA
Surveillance: “What kind of car did the man in the red shirt leave in?” Aishwarya Agrawal

46 Applications of VQA
Interacting with robots: “Is my laptop in my bedroom upstairs?” Aishwarya Agrawal

47 2-Channel VQA Model
[Diagram: the image channel (convolution layer + non-linearity, pooling layer, fully-connected MLP) produces a 4096-dim image embedding; the question channel encodes “How many horses are in this image?” into a 1024-dim question embedding; a neural network combines the two embeddings and feeds a softmax over the top K answers.] Aishwarya Agrawal
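A minimal sketch of the two-channel fusion (my own; the slide does not specify how the embeddings are combined, so I assume the common choice of projecting both to the same size and multiplying elementwise, with random untrained weights):

import numpy as np

K = 1000                                    # softmax over top-K frequent answers
rng = np.random.default_rng(0)

img_emb = rng.normal(size=4096)             # image embedding from the CNN channel
q_emb = rng.normal(size=1024)               # question embedding from the text channel

W_img = rng.normal(scale=0.01, size=(1024, 4096))   # project image to 1024-dim
fused = np.tanh(W_img @ img_emb) * np.tanh(q_emb)   # assumed elementwise fusion

W_out = rng.normal(scale=0.01, size=(K, 1024))      # fully-connected classifier
scores = W_out @ fused
probs = np.exp(scores - scores.max())
probs /= probs.sum()                        # softmax over candidate answers
answer_idx = int(probs.argmax())            # index of the predicted answer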

48 Incorporating Knowledge
Wu et al., CVPR 2016

49 Incorporating Attention
Shih et al., CVPR 2016

50 Visual Question Answering Demo
Aishwarya Agrawal

