Explanation and Trust for Adaptive Systems Alyssa Glass (Stanford / SRI / Willow Garage) In collaboration with Deborah McGuinness (Stanford/RPI), Michael Wolverton (SRI), and Paulo Pinheiro da Silva (UTEP)
Outline Adaptive Systems The User Perspective Trusting Adaptive Systems Discussion & Future Work
Why is “trust” an issue? Systems are getting more complex Hybrid and distributed processing Multiple learning components Multiple heterogeneous, distributed information sources Highly variable reliability of information sources Less transparency of system computation and reasoning Systems are taking more autonomous control Guide/assist user actions Perform autonomous actions on behalf of user “reason, learn from experience, be told what to do, explain what they are doing, reflect on their experience, and respond robustly to surprise” * * DARPA PAL program: http://www.darpa.mil/ipto/programs/pal/
One Adaptive System: CALO Cognitive Assistant that Learns and Organizes Personal office assistant, tasked with: Noticing things in the cyber and physical environments Aggregating what it notices, thinks, and does Executing, adding/deleting, suspending/resuming tasks Planning to achieve abstract objectives Anticipating things it may be called upon to do or respond to Interacting with the user Adapting its behavior in response to past experience, user guidance Contributed to by 22 different organizations
Working with a Cognitive Assistant CALO users need to Understand system behavior and responses Trust system reasoning and actions To believe and act on recommendations from CALO, users need ways of exploring how and why the system acted, responded, recommended, and reasoned the way it did. Additional wrinkle: CALO knowledge, behavior, and assumptions are constantly changing through several forms of machine learning. A unified framework for explaining behavior and reasoning is essential for users to trust and adopt cognitive assistants.
Outline Adaptive Systems The User Perspective Trusting Adaptive Systems Discussion & Future Work
Interacting with Complex Systems
Study Procedure 14 participants 12 men, 2 women Wide range of ages, education, previous CALO experience Assigned tasks to accomplish with CALO (many scripted) Told about trust study in advance Structured interview format Identified 8 themes, in 3 major categories
Usability Theme 1: Basic usability is important even in prototype-level systems. “I can’t tell you how much I would love to have [the system]. But I also can’t tell you how much I can’t stand it.”
Usability Theme 2: Learning algorithms can give the impression that the user is being ignored. “You specify something, and [the system] comes up with something completely different. And you’re like, it’s ignoring what I want!”
Explanations Theme 3: Users consistently want to ask context-sensitive questions, particularly when they are surprised by responses or failures. What are you doing? Why did you do that? When will you be finished? What information sources did you use? “If there had been an option to ask a question, I would have loved to ask a question.” “I asked [‘Why?’] all the time, but I wasn’t getting answers!”
Explanations Theme 4: The granularity of feedback is important. “I don’t just want an idiot light.”
Trust Theme 5: Users don’t trust opaque systems; they want transparency. “The ability to check up on the system, ask it questions, get transparency to verify what it is doing, is the number one thing that would make me want to use it.”
Trust Theme 6: Access to information provenance can improve trust in both the information and the automated reasoning. “[The system] needs a better way to have a meta-conversation.”
Trust Theme 7: Like in politics and the economy, gaining user trust relies on properly managing expectations. “I was paralyzed with fear about what it would understand and what it would not.”
Trust Theme 8: Most users have a “trust but verify” attitude that makes system autonomy difficult without explainable verification. “I trust [the system’s] accuracy, but not its judgment.”
Outline Adaptive Systems The User Perspective Trusting Adaptive Systems Discussion & Future Work
The Use-Ask-Understand-Update Cycle [Diagram: a loop: Use → Ask → Understand → Update, and back to Use]
What is an “explanation”… Initial request and answer strategy: Q: Why are you doing <task>? A: I am trying to do <goal>, and <task> is one subgoal in the process. Follow-up questions for mixed-initiative dialogue: Why are you doing <goal>? Why haven’t you completed <task> yet? Why is <task> a subgoal of <goal>? When will you finish <task>? What sources did you use to do <task>?
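A minimal sketch of how such a mixed-initiative dialogue might route each question type to an answer strategy. The question categories and strategy names are drawn from these slides, but the dispatch table, function names, and example task are illustrative assumptions, not ICEE's actual API.

```python
# Illustrative routing of context-sensitive questions to answer strategies.
# The strategy names echo the slides; everything else is an assumption.

QUESTION_STRATEGIES = {
    "why-task":       "reveal-task-hierarchy",          # Why are you doing <task>?
    "why-incomplete": "expose-preconditions",           # Why haven't you completed <task> yet?
    "why-subgoal":    "reveal-task-dependencies",       # Why is <task> a subgoal of <goal>?
    "when-finished":  "expose-termination-conditions",  # When will you finish <task>?
    "what-sources":   "explain-provenance",             # What sources did you use to do <task>?
}

def answer(question_type: str, task: str) -> str:
    """Select an answer strategy; unknown questions fall back to clarification."""
    strategy = QUESTION_STRATEGIES.get(question_type, "request-clarification")
    return f"Answering about '{task}' via strategy: {strategy}"

print(answer("why-task", "reserve a conference room"))
```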
How Explanation Can Help T1: Basic Usability T2: Being Ignored T3: Context-Sensitive Questions T4: Granularity of Feedback T5: Transparency T6: Provenance T7: Managing Expectations T8: Autonomy & Verification
The Integrated Cognitive Explanation Environment (ICEE) Unified framework for explaining logical and task reasoning. Applicable to multiple task execution systems. Leverage existing InferenceWeb work for generating formal justifications. Underlying task reasoning useful beyond explanation. Provide sample implementation of end-to-end system.
Explanation Example Sample question type: task motivation. Q: Why are you doing <task>? Strategy: reveal task hierarchy. A: I am trying to do <goal>, and <task> is one subgoal in the process. Alternate strategies: provide task abstraction; expose preconditions; expose termination conditions; reveal meta-information about task dependencies; explain provenance related to task preconditions or other knowledge. Possible follow-up suggestions: request additional detail; request clarification of the given explanation; request an alternate strategy to the original query. McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. A Categorization of Explanation Questions for Task Processing Systems. AAAI 2007 Workshop on Explanation-Aware Computing (ExaCt-2007), Vancouver, Canada, 2007.
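A minimal sketch of the "reveal task hierarchy" strategy, assuming tasks sit in a simple parent-linked tree; the Task class and names are illustrative, not ICEE's task model.

```python
# Sketch of the "reveal task hierarchy" answer strategy (illustrative model).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    parent: Optional["Task"] = None  # the goal this task serves, if any

def explain_motivation(task: Task) -> str:
    """Answer 'Why are you doing <task>?' by revealing one level of hierarchy."""
    if task.parent is None:
        return f"'{task.name}' is a top-level goal."
    return (f"I am trying to do '{task.parent.name}', and "
            f"'{task.name}' is one subgoal in the process.")

schedule = Task("schedule the project review")
find_room = Task("find a free conference room", parent=schedule)
print(explain_motivation(find_room))
# -> I am trying to do 'schedule the project review', and 'find a free
#    conference room' is one subgoal in the process.
```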
Sidetrack: An InferenceWeb Primer [Diagram: layered stack from Data up through Information Manipulation, Provenance Meta-data, Inference Rule Specs, Inference Meta-Language, Abstraction, Presentation, Explanation, and Trust, with Interaction and Understanding alongside, built on the Proof Markup Language.] Framework for explaining reasoning and execution tasks by abstracting, storing, exchanging, combining, annotating, filtering, comparing, and rendering justifications from varied cognitive reasoners. 1. Registry and service support for knowledge provenance. 2. Language for encoding hybrid, distributed proof fragments (both formal and informal). 3. Declarative inference rule representation for checking proofs. 4. Multiple strategies for proof abstraction, presentation, and interaction.
Sidetrack continued: Representations in PML Proof Markup Language (PML) is a proof interlingua, used to represent justifications of information manipulation steps done by theorem provers, extractors, and other reasoners. Main components concern inference representation and provenance issues. Specification written in OWL. [Diagram: an example justification: an iw:NodeSet whose iw:hasConclusion is the KIF sentence (Supports GA BL) (iw:hasLanguage: KIF); the node set iw:isConsequenceOf an iw:InferenceStep with iw:hasRule: SupportsTopLevelGoal, iw:hasSourceUsage: http://foo.com/Example.owl#Laptop, and iw:hasEngine: SPARK, annotated with a Tailor comment.]
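Read back into structure, the fragment above might be encoded as follows. This is a hedged sketch in plain Python data rather than OWL: the iw: property names are PML's, but the nesting and values simply echo the diagram and are not taken from an actual PML serialization.

```python
# Hedged sketch of the slide's PML fragment as plain Python data.
# Property names are PML's; the structure is an illustrative assumption.

node_set = {
    "@type": "iw:NodeSet",
    "iw:hasConclusion": {
        "value": "(Supports GA BL)",   # the sentence being justified
        "iw:hasLanguage": "KIF",
    },
    "iw:isConsequenceOf": {            # how that conclusion was derived
        "@type": "iw:InferenceStep",
        "iw:hasRule": "SupportsTopLevelGoal",
        "iw:hasSourceUsage": "http://foo.com/Example.owl#Laptop",
        "iw:hasEngine": "SPARK",       # the task executor that ran the step
    },
}
```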
Sample Interface Linked to ICEE
Learning by Instruction Relatively straightforward to explain: store the instruction and the resulting modification; strategies present the instruction and related meta-information. Demonstrated in CALO with the Tailor task learning system.
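A minimal sketch of that store-and-replay approach, assuming instructions and their effects are logged as simple records; the names here are illustrative, not Tailor's.

```python
# Illustrative store-and-replay explanation of instruction-based learning.

from dataclasses import dataclass
from datetime import date

@dataclass
class InstructionRecord:
    instruction: str    # what the user told the system
    modification: str   # the procedure change that resulted
    when: date

def explain(record: InstructionRecord) -> str:
    return (f"On {record.when:%Y-%m-%d} you told me: '{record.instruction}'. "
            f"As a result, I {record.modification}.")

rec = InstructionRecord(
    instruction="always CC my manager on travel requests",
    modification="added a CC step to the travel-request procedure",
    when=date(2007, 5, 14),
)
print(explain(rec))
```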
Learning by Demonstration Generalizes the user’s demonstration to learn a procedure. One data point means the generalization will sometimes be wrong; specifically, it will occasionally over-generalize: generalize the wrong variables, or too many variables, or produce too general a procedure because of a coarse-grained type hierarchy. Explain the relevant aspects of the generalization process, both to help the user identify and correct over-generalizations, and to help the user understand and trust the learned procedures. Working with the LAPDOG task learning system in CALO.
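A toy sketch of why one-shot generalization over-generalizes: a naive learner that lifts every constant in the demonstration to a variable, including constants the user meant literally. The trace format and names are invented for illustration; this is not LAPDOG's algorithm.

```python
# Toy one-shot generalizer: lifts every constant argument to a variable.
# (Invented for illustration; not LAPDOG's actual algorithm.)

def generalize(trace):
    procedure, bindings = [], {}
    for i, (action, arg) in enumerate(trace):
        var = f"?x{i}"
        bindings[var] = arg        # remember what each variable generalized
        procedure.append((action, var))
    return procedure, bindings

demo = [("open", "budget.xls"), ("email", "alice@example.com")]
proc, bindings = generalize(demo)
print(proc)      # [('open', '?x0'), ('email', '?x1')]
print(bindings)  # {'?x0': 'budget.xls', '?x1': 'alice@example.com'}

# Over-generalization: if the user always emails alice@example.com, ?x1
# should have stayed constant. Explaining which constants became which
# variables lets the user spot and correct exactly this kind of error.
```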
Support-Vector Machines Augment the SVM to gather additional meta-information about the SVM itself: support vectors identified by the SVM; support vectors nearest to the query point; margin to the query point; average margin over all data points; non-support vectors nearest to the query point; kernel transformation used, if any. Represent the SVM learning and meta-information as a justification in PML, using added SVM rules. Design abstraction strategies for presenting the justification to the user as a similarity-based explanation. Demonstrated in CALO with the PLIANT preference learning system. PLIANT uses user-elicited preferences and past choices to learn user scheduling preferences. Inconsistent user preferences, over-constrained schedules, and the necessity of exploring the preference space result in user confusion about why a schedule is being presented. Lack of user understanding of PLIANT’s updates creates confusion, mistrust, and the appearance that preferences are being ignored. Provide justifications of schedule suggestions, without requiring the user to understand SVM learning.
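A hedged sketch of gathering the meta-information listed above from a trained SVM. The slides do not name an implementation; this assumes scikit-learn, uses toy data and a toy query point, and treats the decision-function value as an unnormalized margin.

```python
# Sketch: extracting SVM meta-information for a similarity-based explanation.
# Assumes scikit-learn; data and query point are toy values.

import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3], [0.3, 0.2],
              [2.0, 2.0], [2.1, 1.9], [1.8, 2.2], [2.2, 2.1]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
clf = SVC(kernel="rbf").fit(X, y)

query = np.array([[1.7, 1.8]])

sv = clf.support_vectors_                          # support vectors identified
nearest_sv = sv[np.linalg.norm(sv - query, axis=1).argmin()]

margin_q = clf.decision_function(query)[0]         # (unnormalized) margin at query
avg_margin = np.abs(clf.decision_function(X)).mean()  # average over all data

non_sv = np.delete(X, clf.support_, axis=0)        # non-support vectors, if any
nearest_non_sv = (non_sv[np.linalg.norm(non_sv - query, axis=1).argmin()]
                  if len(non_sv) else None)

kernel_used = clf.kernel                           # kernel transformation used

print(f"Nearest support vector: {nearest_sv}, margin at query: {margin_q:.2f}")
```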
Future Work Using conflicts to drive the learn-explain cycle Using explanations to identify high-reward learning opportunities Support more advanced dialogues and interfaces User study using ICEE explanations
Questions?
Resources Explanation questions & strategies: McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. A Categorization of Explanation Questions for Task Processing Systems. AAAI 2007 Workshop on Explanation-Aware Computing (ExaCt-2007), Vancouver, Canada, 2007. CALO trust study: Glass, A., McGuinness, D.L., and Wolverton, M. Toward Establishing Trust in Adaptive Agents. Technical Report KSL-07-04, Knowledge Systems, Artificial Intelligence Laboratory, Stanford University, 2007. Explanation interfaces: McGuinness, D.L., Ding, L., Glass, A., Chang, C., Zeng, H., and Furtado, V. Explanation Interfaces for the Semantic Web: Issues and Models. 3rd International Semantic Web User Interaction Workshop (SWUI’06), Athens, Georgia, 2006. Overview of ICEE: McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. Explaining Task Processing in Cognitive Assistants that Learn. Proceedings of the 20th International FLAIRS Conference (FLAIRS-20), Key West, Florida, 2007. Video demonstration of ICEE: http://iw.stanford.edu/2006/10/ICEE.640.mov