Explainable AI: Making Visual Question Answering Systems More Transparent
Raymond Mooney
Department of Computer Science, University of Texas at Austin
with Nazneen Rajani and Jialin Wu
Explainable AI (XAI)
AI systems are complex and their decisions are based on weighting and combining many sources of evidence. People, understandably, don’t trust decisions from opaque “black boxes.” AI systems should be able to “explain” their reasoning process to human users to engender trust.
History of XAI (personalized)
Explainable Rule-Based Expert Systems
MYCIN was one of the early rule-based “expert systems,” developed at Stanford in the early 1970s to diagnose blood infections. Experiments showed that MYCIN was as accurate as medical experts, but, initially, doctors wouldn’t trust its decisions. MYCIN was therefore augmented to explain its reasoning by providing a trace of the rule-based inferences that supported its decision.
Sample MYCIN Explanation
Explaining Probabilistic Machine Learning
In the late 1990s, I developed a content-based book-recommending system that used a naïve-Bayes bag-of-words text classifier to recommend books (Mooney & Roy, 2000). It used training examples of rated books supplied by the user and textual information about each book extracted from Amazon. It could explain its recommendations based on the content words that most influenced its probabilistic conclusions.
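For concreteness, here is a minimal sketch of how such a keyword explanation can be read off a naïve-Bayes bag-of-words model (illustrative names, not the original system’s code): each word’s contribution to the log-odds of the “like” class is computed, and the top contributors are reported as the explanation.

```python
import math
from collections import Counter

def keyword_explanation(book_words, word_probs_like, word_probs_dislike, top_k=10):
    """Rank the words in a book's description by how strongly they push a
    naive-Bayes classifier toward the 'like' class (log-odds contribution).
    word_probs_like / word_probs_dislike are assumed smoothed per-class
    word probabilities estimated from the user's rated training books."""
    counts = Counter(book_words)
    contributions = {
        w: c * (math.log(word_probs_like[w]) - math.log(word_probs_dislike[w]))
        for w, c in counts.items()
        if w in word_probs_like and w in word_probs_dislike
    }
    # The words with the largest positive contribution "explain" the recommendation.
    return sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```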
Sample Book Recommendation Explanation
Sample Keyword Explanation
Evaluating Recommender Explanations
Herlocker et al. (2000) provided the first experimental evaluation of recommender system explanations. They showed that some methods for explaining recommendations for a collaborative filtering system were “better” than others. Explanations were evaluated based on how much they increased the likelihood that a user would agree to adopt a recommendation.
Satisfaction vs. Promotion
In our own evaluation of recommender-system explanations (Bilgic & Mooney, 2005), we argued that user satisfaction with an adopted recommendation is more important than just convincing users to adopt it (promotion). We experimentally evaluated how explanations affected users’ satisfaction with the recommendations they adopted.
Evaluating Satisfaction
We asked users to predict their rating of a recommended book twice. First, they predicted their rating of a book after just seeing the system’s explanation for why it was recommended. Second, they predicted their rating again after they had a chance to examine detailed information about the book, i.e., all the information on Amazon about it. An explanation was deemed “better” if the difference between these two ratings was smaller, i.e., the explanation allowed users to more accurately predict their final opinion.
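In other words, each explanation style is scored by the average gap between the two predicted ratings; a rough sketch of that metric under an assumed data layout:

```python
def satisfaction_gap(ratings):
    """ratings: list of (rating_after_explanation, rating_after_full_info) pairs,
    each on a 1-5 scale. A lower mean gap means the explanation better
    predicts the user's final opinion."""
    return sum(abs(r_expl - r_full) for r_expl, r_full in ratings) / len(ratings)

# Example: explanations that over-promote books produce a larger gap.
print(satisfaction_gap([(5, 3), (4, 4), (5, 2)]))  # ~1.67
print(satisfaction_gap([(4, 4), (3, 3), (4, 5)]))  # ~0.33
```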
Explanation Evaluation Experiment
We compared three styles of explanation:
NSE: Neighborhood Style, which shows the most similar training examples and was best at promotion according to Herlocker et al.
KSE: Keyword Style, which was used by our original content-based recommender.
ISE: Influence Style, which presents the training examples that most influenced the recommendation.
Experimental Results
We found that, in terms of satisfaction: KSE > ISE > NSE (ratings on a 5-point scale). NSE caused people to “over-rate” recommendations.
XAI for Deep Learning
Deep Learning
Recent neural networks have made remarkable progress on challenging AI problems such as object recognition, machine translation, image and video captioning, and game playing (e.g., Go). They have hundreds of layers and millions of parameters. Their decisions are based on complex non-linear combinations of many “distributed” input representations (called “embeddings”).
Types of Deep Neural Networks (DNNs)
Convolutional neural nets (CNNs) for vision.
Recurrent neural nets (RNNs) for machine translation, speech recognition, and image/video captioning:
    Long Short-Term Memory (LSTM)
    Gated Recurrent Unit (GRU)
Deep reinforcement learning for game playing.
Recent Interest in XAI
The desire to make deep neural networks (and other modern AI systems) more transparent has renewed interest in XAI. DARPA has a new XAI program that funds 12 diverse teams of researchers to develop more explainable AI systems. The EU’s new General Data Protection Regulation (GDPR) gives consumers the “Right to Explanation” (Recital 71) when any automated decision is made about them.
XAI for VQA
I am part of a DARPA XAI team focused on making explainable deep-learning systems for Visual Question Answering (VQA):
BBN (prime): Bill Ferguson
GaTech: Dhruv Batra and Devi Parikh
MIT: Antonio Torralba
UT: Ray Mooney
Visual Question Answering (Agrawal et al., 2016)
Answer natural language questions about an image.
VQA Architectures
Most systems are DNNs using both CNNs and RNNs.
Visual Explanation
Recent VQA research shows that deep-learning models attend to relevant parts of the image while answering the question (Goyal et al., 2016). The parts of the image that a model focuses on can be viewed as a visual explanation. Heat-maps are used to visualize these explanations over the image. On the left is an image from the VQA dataset; on the right is the heat-map overlaid on the image for the question “What is the man eating?”
Generating Visual Explanations
Grad-CAM (Selvaraju et al., 2017) is used to generate heat-map explanations.
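For concreteness, a condensed sketch of the Grad-CAM computation (PyTorch; it assumes hooks have already captured the last convolutional feature maps and their gradients with respect to the predicted answer’s score, and the names are illustrative):

```python
import torch
import torch.nn.functional as F

def grad_cam(feature_maps, gradients, out_size):
    """feature_maps: (C, H, W) activations of the last conv layer for one image.
    gradients:      (C, H, W) gradients of the predicted answer's score w.r.t. them.
    out_size:       (height, width) of the original image for upsampling."""
    weights = gradients.mean(dim=(1, 2))                 # global-average-pool the gradients
    cam = torch.relu((weights[:, None, None] * feature_maps).sum(dim=0))
    cam = F.interpolate(cam[None, None], size=out_size,  # upsample to image resolution
                        mode="bilinear", align_corners=False)[0, 0]
    return cam / (cam.max() + 1e-8)                      # normalize to a [0, 1] heat-map
```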
Ensembles
Combining multiple learned models has been a popular and successful approach since the 1990s. Ensembling VQA systems produces better results (Fukui et al., 2016) but further complicates explaining those results. Visual explanations can also be ensembled, and this improves explanation quality over that of the individual component models.
Ensembling Visual Explanations
Explain a complex VQA ensemble by ensembling the visual explanations of its component systems. Average the explanatory heat-maps of systems that agree with the ensemble, weighted by their performance on validation data (Weighted Average, WA). Can also subtract the explanatory heat-maps of systems that disagree with the ensemble (Penalized Weighted Average, PWA).
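As a concrete illustration, here is a minimal numpy sketch of the two schemes (illustrative function and argument names; the actual implementation may differ):

```python
import numpy as np

def ensemble_heatmaps(maps, weights, agrees, penalized=False):
    """maps:    list of (H, W) heat-maps, one per component VQA system.
    weights: each system's accuracy on validation data.
    agrees:  True if that system's answer matched the ensemble's answer."""
    combined = np.zeros_like(maps[0], dtype=float)
    for m, w, a in zip(maps, weights, agrees):
        if a:
            combined += w * m          # Weighted Average (WA)
        elif penalized:
            combined -= w * m          # Penalized Weighted Average (PWA)
    combined = np.clip(combined, 0.0, None)
    return combined / (combined.max() + 1e-8)
```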
Sample Ensembling of Visual Explanations
Combine the heat-maps of the MCB and HieCoAtt systems (whose correct answers agreed with the ensemble) with that of the LSTM system (whose answer disagreed) to get an improved heat-map for the ensemble.
Evaluating Ensembled Explanations
Crowd-sourced human judges were shown two visual explanations and asked: “Which picture highlights the part of the image that best supports the answer to the question?” Our ensemble explanation was judged better than any individual system’s explanation 63% of the time.
Mechanical Turk Interface for Explanation Comparison
Explanation Comparison Results
Textual Explanations
Generate a natural-language sentence that justifies the answer (Hendricks et al., 2016). Use an LSTM to generate an explanatory sentence given embeddings of:
the image
the question
the answer
Train this LSTM on human-provided explanatory sentences.
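A sketch of this conditioning scheme in PyTorch (an assumed layout with illustrative dimensions, not Hendricks et al.’s exact architecture): the image, question, and answer embeddings are fused into the LSTM’s initial state, and the LSTM is trained with teacher forcing on the human explanation.

```python
import torch
import torch.nn as nn

class TextualExplainer(nn.Module):
    def __init__(self, img_dim, q_dim, a_dim, vocab_size, hidden=512, emb=300):
        super().__init__()
        self.fuse = nn.Linear(img_dim + q_dim + a_dim, hidden)  # joint context vector
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, img_emb, q_emb, a_emb, expl_tokens):
        ctx = torch.tanh(self.fuse(torch.cat([img_emb, q_emb, a_emb], dim=-1)))
        h0 = ctx.unsqueeze(0)                 # fused context becomes the initial hidden state
        c0 = torch.zeros_like(h0)
        hs, _ = self.lstm(self.embed(expl_tokens), (h0, c0))
        return self.out(hs)                   # per-step vocabulary logits (teacher forcing)
```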
Sample Textual Explanations (Park et al., 2018)
Multimodal Explanations
Combine both a visual and textual explanation to justify an answer to a question (Park et al., 2018).
VQA-X
Using crowdsourcing, Park et al. collected human multimodal explanations for training and testing VQA explanations. Explanation: “Because he is on a snowy hill wearing skis”
Post Hoc Rationalizations
Previous textual and multimodal explanations for VQA (Park et al., 2018) are not “faithful” or “introspective.” They do not reflect any details of the internal processing of the network or how it actually computed the answer. They are just trained to mimic human explanations in an attempt to “justify” the answer and get humans to trust it (analogous to “promoting” a recommendation).
Faithful Multimodal Explanations
We are attempting to produce more faithful explanations that actually reflect important aspects of the VQA system’s internal processing. The explanation focuses on detected objects that are highly attended to while the VQA network generates its answer. The generator is trained on human explanations but is explicitly biased to include references to these objects.
Sample Faithful Multimodal Explanation
VQA with BUTD
To make it more explainable, we use a recent state-of-the-art VQA system, BUTD (Bottom-Up Top-Down attention; Anderson et al., 2018). BUTD first detects a wide range of objects and attributes using detectors trained on Visual Genome data, and attends to them when computing an answer.
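For reference, here is a rough sketch of the top-down attention step in this style of model (simplified, with illustrative dimensions; not Anderson et al.’s exact code). The per-object attention weights are exactly what our explanations try to reflect.

```python
import torch
import torch.nn as nn

class TopDownAttention(nn.Module):
    """Attend over bottom-up object features given the question encoding."""
    def __init__(self, obj_dim=2048, q_dim=1024, hidden=512):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(obj_dim + q_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, obj_feats, q_emb):
        # obj_feats: (B, K, obj_dim) features of K detected objects; q_emb: (B, q_dim)
        q = q_emb.unsqueeze(1).expand(-1, obj_feats.size(1), -1)
        alpha = torch.softmax(self.score(torch.cat([obj_feats, q], -1)).squeeze(-1), dim=-1)
        attended = (alpha.unsqueeze(-1) * obj_feats).sum(dim=1)  # question-guided image vector
        return attended, alpha   # alpha: per-object attention used to drive the explanation
```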
Using Visual Segmentations
We use recent methods that exploit detailed image segmentations for VQA (VQS; Gan et al., 2017). These provide more precise visual information than BUTD’s bounding boxes.
High-Level VQA Architecture
Detecting Explanatory Phrases
We first extract frequent phrases about objects (2-5 words ending in a common noun) that appear in human explanations. We then train visual detectors that identify the relevant explanatory phrases from an image’s detected segmentations.
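A simple sketch of this phrase-mining step (a heuristic approximation that assumes a precomputed common_nouns word list; the real pipeline may rely on a POS tagger):

```python
from collections import Counter

def mine_phrases(explanations, common_nouns, min_count=5):
    """explanations: list of tokenized human explanations (lists of lowercase words).
    common_nouns: a set of common nouns (e.g., from a POS-tagged vocabulary)."""
    counts = Counter()
    for tokens in explanations:
        for n in range(2, 6):                       # 2- to 5-word phrases
            for i in range(len(tokens) - n + 1):
                phrase = tuple(tokens[i:i + n])
                if phrase[-1] in common_nouns:      # must end in a common noun
                    counts[phrase] += 1
    return [" ".join(p) for p, c in counts.most_common() if c >= min_count]
```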
High-Level System Architecture
Textual Explanation Generator
Finally, we train an LSTM to generate an explanatory sentence from embeddings of the segmented objects and the detected phrases. It is trained on VQA-X data to produce human-like textual explanations, and it is also trained to cover the segments highly attended to by the VQA model, so that the explanation faithfully reflects the focus of the network that computed the answer.
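One illustrative way to encode such a bias (a hypothetical formulation for exposition, not necessarily the exact training objective used) is to add a coverage term to the usual word-level cross-entropy loss:

```python
import torch
import torch.nn.functional as F

def explanation_loss(word_logits, target_words, expl_attn, vqa_attn, coverage_weight=1.0):
    """word_logits: (B, T, V) generator outputs; target_words: (B, T) gold explanation.
    expl_attn: (B, T, K) generator attention over K segments at each word.
    vqa_attn:  (B, K) attention the VQA model placed on the same segments."""
    ce = F.cross_entropy(word_logits.reshape(-1, word_logits.size(-1)),
                         target_words.reshape(-1))
    # Reward the generator for spending attention on the segments the VQA model used.
    coverage = (vqa_attn * expl_attn.mean(dim=1)).sum(dim=-1).mean()
    return ce - coverage_weight * coverage
```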
Multimodal Explanation Generator
Words generated while attending to a particular visual segment are highlighted and linked to the corresponding segmentation in the visual explanation by depicting them both in the same color.
High-Level System Architecture
Sample Explanation
Sample Explanation
Sample Explanation
Evaluating Textual Explanations
Compare the system’s explanation to “gold standard” human explanations using standard machine-translation metrics for judging the similarity of sentences. Ask human judges on Mechanical Turk to compare the system’s explanation to a human explanation and judge which is better (allowing for ties). Report the percentage of the time the algorithm beats or ties the human.
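As an illustration of the automated comparison, BLEU against the human “gold” explanations can be computed with NLTK (a hedged example; other sentence-similarity metrics can be plugged in the same way):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_against_humans(system_explanation, human_explanations):
    """Compare one generated explanation to one or more human references."""
    hypothesis = system_explanation.lower().split()
    references = [ref.lower().split() for ref in human_explanations]
    return sentence_bleu(references, hypothesis,
                         smoothing_function=SmoothingFunction().method1)

print(bleu_against_humans("because he is on a snowy hill wearing skis",
                          ["because he is on a snowy hill wearing skis"]))
```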
Textual Evaluation Results
(Table of results under both automated metrics and human evaluation.)
Evaluating Multimodal Explanations
Ask human judges on Mechanical Turk to qualitatively evaluate the final multimodal explanations by answering two questions:
“How well do the highlighted image regions support the answer to the question?”
“How well do the colored image segments highlight the appropriate regions for the corresponding colored words in the explanation?”
Multimodal Explanation Results
Explanation Evaluation Issues
Explanations should not be just “post-hoc rationalizations,” such as in Hendricks et al. (2016) and Park et al. (2017). Explanations should be “faithful” and elucidate the system’s actual decision process. But, it is hard to evaluate “faithfulness.” One approach is to test whether explanations actually aid the ability of a human user to correctly decide whether or not to accept the conclusions of an AI system. We have previously done such an evaluation for book recommending (Bilgic & Mooney, 2005) and are considering how to do it for VQA.
Conclusions
Complex AI systems such as DNNs should be able to explain their decisions to human users to engender trust. VQA is a challenging AI problem that needs explanation. Visual explanations for VQA can be effectively ensembled, and human evaluations indicate that ensembled explanations are superior to the original ones. Multimodal explanations for VQA that integrate both textual and visual information are particularly useful. Our approach, which uses high-level object segmentations to drive both VQA and human-like explanation, is promising and superior to previous “post hoc rationalization.”