Slide 1: Explaining Task Processing in Cognitive Assistants that Learn
March 26, 2007
Deborah McGuinness (1), Alyssa Glass (1,2), Michael Wolverton (2), Paulo Pinheiro da Silva (3*)
(1) Knowledge Systems, AI Laboratory, Stanford University: {dlm | glass}@ksl.stanford.edu
(2) SRI International: mjw@ai.sri.com
(3) University of Texas at El Paso: paulo@utep.edu (*work done while on staff at Stanford KSL)
Thanks to Li Ding, Cynthia Chang, Honglei Zeng, Vasco Furtado, Jim Blythe, Karen Myers, Ken Conley, and David Morley.
Slide 2: General Motivation
- Interoperability: as systems use varied sources and multiple information manipulation engines, they benefit more from encodings that are shareable and interoperable.
- Provenance: if users (humans and agents) are to use and integrate data from unknown, unreliable, or evolving sources, they need provenance metadata for evaluation.
- Explanation/Justification: if information has been manipulated (e.g., by sound deduction or by heuristic processes), the information manipulation trace should be available.
- Trust: if some sources are more trustworthy than others, representations should be available to encode, propagate, combine, and (appropriately) display trust values.
Goal: provide an interoperable knowledge provenance infrastructure that supports explanations of sources, assumptions, learned information, and answers as an enabler for trust.
Slide 3: Inference Web Infrastructure
[Architecture diagram: question answerers and their languages (CWM for NSF TAMI, JTP for DAML/NIMD, SPARK for DARPA CALO, UIMA text analytics for DTO NIMD Exp Aggregation, and the Semantic Discovery Service for DAML/SNRC; N3, KIF, SPARK-L, OWL-S/BPEL) emit justifications in the Proof Markup Language (PML). The IW toolkit consumes these: the IW Explainer/Abstractor, the IWBase provenance registry, the IWBrowser (expert-friendly and end-user-friendly visualization), IWSearch (search-engine-based publishing), and IWTrust (trust computation).]
Framework for explaining question answering tasks by abstracting, storing, exchanging, combining, annotating, filtering, segmenting, comparing, and rendering proofs and proof fragments provided by question answerers.
Primary collaborators: Ding, Chang, Zeng, Fikes.
Slide 4: ICEE: Integrated Cognitive Explanation Environment
Improve trust in cognitive assistants that learn by providing transparency concerning:
- provenance
- information manipulation
- task processing
- learning
Slide 5: Task Management Framework
[Architecture diagram: the SPARK Task Manager at the center, connected to the PTIME Time Manager, the ProPL Execution Monitor & Predictor, process models, the ICEE Task Explainer, the Machinetta Team Coordinator, procedure learners (Tailor, LAPDOG, PrimTL, PLOW), the BEAM ToDo Interpreter, the Towel ToDo UI, an Activity Recognizer, and a Location Estimator; advice and preferences flow in, and effectors and sensors connect the framework to the world.]
Slide 6: ICEE Architecture
[Architecture diagram: a TM Wrapper around the Task Manager (TM) populates a Task State Database; an Explanation Dispatcher, serving a Collaboration Agent, routes questions to specialized explainers (the TM Explainer, a Tailor Explainer for learning by instruction, and a Constraint Explainer backed by a Constraint Reasoner); a Justification Generator assembles the answers.]
Slide 7: Task Explanation
- Ability to ask "why" at any point.
- Contextually relevant responses (using the current processing state and underlying provenance).
- Context-appropriate follow-up questions are presented.
- Explanations are generated completely automatically; no additional work is required by the user to supply information.
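To make this concrete, here is a minimal sketch, in Python, of how a "why" question might be answered from the current task state by walking up the task hierarchy. All names here (Task, explain_why, the example tasks) are illustrative assumptions, not the CALO or ICEE API; the point is the shape of a context-sensitive answer.

```python
# Hypothetical sketch: answering "Why are you doing this?" from task state.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    parent: Optional["Task"] = None
    provenance: str = "unknown"  # e.g. "predefined procedure", "learned by instruction"

def explain_why(task: Task) -> str:
    """Justify the current action by its place in the task hierarchy."""
    if task.parent is None:
        return f"'{task.name}' is a top-level goal (source: {task.provenance})."
    return (f"I am executing '{task.name}' because it is a step of "
            f"'{task.parent.name}' (source: {task.provenance}).")

buy = Task("BuyLaptop", provenance="predefined procedure")
approve = Task("GetApproval", parent=buy)
sign = Task("GetSignature", parent=approve, provenance="learned by instruction")

print(explain_why(sign))
# I am executing 'GetSignature' because it is a step of 'GetApproval'
# (source: learned by instruction).
```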
Slide 8: Explainer Strategy
- Present the query answer.
- Abstract the justification (using PML encodings).
- Provide access to meta-information.
- Suggest context-appropriate drill-down options (also provide feedback options).
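The abstraction step can be pictured as rewriting the justification tree before rendering: machine-level inference steps are collapsed so the user sees only salient steps. The sketch below is one plausible shape for that technique, with made-up rule names; it is not the actual IW Explainer/Abstractor.

```python
# Hypothetical sketch: collapse low-level steps in a justification tree.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    conclusion: str
    rule: str                               # inference rule that produced it
    antecedents: List["Node"] = field(default_factory=list)

LOW_LEVEL = {"and-elim", "var-binding", "rewrite"}  # assumed machine-level rules

def abstract(node: Node) -> Node:
    """Splice out low-level steps, promoting their antecedents upward."""
    kept: List[Node] = []
    for a in node.antecedents:
        a = abstract(a)
        if a.rule in LOW_LEVEL:
            kept.extend(a.antecedents)      # hide the step, keep its support
        else:
            kept.append(a)
    return Node(node.conclusion, node.rule, kept)

leaf = Node("ParentOf(GA, BL)", "told")
low = Node("Supports(GA, BL)", "and-elim", [leaf])
top = Node("SupportsTopLevelGoal(GA)", "support-rule", [low])
print([a.conclusion for a in abstract(top).antecedents])  # ['ParentOf(GA, BL)']
```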
Slide 9: Sample Introspective Predicates: Provenance
- Author
- Modifications
- Algorithm
- Addition date/time
- Data used
- Collection time span for data
- Author comment
- Delta from previous version
- Link to original
Glass, A., and McGuinness, D.L. 2006. Introspective Predicates for Explaining Task Execution in CALO. Technical Report KSL-06-04, Knowledge Systems Laboratory, Stanford University.
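Purely as an illustration, the answers these predicates return could be collected into a record like the following; the field names mirror the bullet list above, not the actual CALO predicate signatures.

```python
# Hypothetical provenance record for a learned or modified procedure.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class ProcedureProvenance:
    author: str
    modifications: List[str]
    algorithm: str                             # e.g. "learning by instruction"
    added_at: datetime                         # addition date/time
    data_used: List[str]
    data_time_span: Tuple[datetime, datetime]  # collection time span for data
    author_comment: Optional[str] = None
    delta_from_previous: Optional[str] = None
    link_to_original: Optional[str] = None
```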
Slide 10: Task Action Schema
- The wrapper extracts portions of the task intention structure through introspective predicates.
- The extracted information is stored in an action schema.
- The schema is designed to meet three criteria:
  1. Salience: information relevant to user information needs.
  2. Reusability: information usable by cognitive agent activities such as procedure learning or state estimation.
  3. Generality: a conceptual model appropriate for action reasoning in BDI systems, blackboard systems, production systems, etc.
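A minimal sketch of what such an action schema record might look like; the field names are assumptions chosen to reflect the three criteria (task context for salience, declarative structure for reusability, a reasoner-neutral vocabulary for generality), not the published schema.

```python
# Hypothetical action schema extracted by the TM wrapper.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ActionSchema:
    action: str                       # e.g. "GetApproval"
    parent_goal: Optional[str]        # salience: why this action is running
    preconditions: List[str] = field(default_factory=list)
    effects: List[str] = field(default_factory=list)
    status: str = "pending"           # "pending" | "executing" | "completed"
    provenance: dict = field(default_factory=dict)  # author, algorithm, ...

schema = ActionSchema(
    action="GetApproval",
    parent_goal="BuyLaptop",
    preconditions=["QuoteSelected"],
    effects=["ApprovalObtained"],
    status="executing",
)
```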
Slide 11: User Trust Study
- Interviewed 10 Critical Learning Period (CLP) participants: programmers, researchers, and administrators.
- Focus of the study: trust; failures, surprises, and other sources of confusion; desired questions to ask CALO.
- Initial results:
  - Explanations are required in order to trust agents that learn.
  - To build trust, users want transparency and provenance.
  - Identified the question types most important to CALO users, motivating future work.
Slide 12: Selected Future Directions
- Broaden explanation of learning (and CALO integration): explain learning by demonstration (integrating initially with the CALO component LAPDOG) and explain preference learning (integrating initially with the CALO component PTIME).
- Investigate explanation of conflicts/failures; explore this as feedback and as a driver to initiate learning procedure modifications or learning new procedures.
- Expand dialogue-based interaction and presentation of explanations (expanding our integration with Towel).
- Use the trust study results to prioritize provenance, strategy, and dialogue work.
- Exploit our work on IWTrust, a method for representing, propagating, and presenting trust, within the CALO setting; we already have results in intelligence analyst tools and integration with text analytics and Wikipedia, and it is likely to be used in IL.
Slide 13: Advantages of the ICEE Approach
- A unified framework for explaining task execution and deductive reasoning, built on the Inference Web infrastructure.
- An architecture designed for reuse among many task execution systems.
- Introspective predicates and a software wrapper that extract explanation-relevant information from the task reasoner.
- A reusable action schema for representing task reasoning.
Slide 14: Resources
- Overview of ICEE: McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. Explaining Task Processing in Cognitive Assistants That Learn. In Proc. of the 20th International FLAIRS Conference, Key West, Florida, May 7-9, 2007.
- Introspective predicates: Glass, A., and McGuinness, D.L. Introspective Predicates for Explaining Task Execution in CALO. Technical Report KSL-06-04, Knowledge Systems, AI Laboratory, Stanford University, 2006.
- Video demonstration of ICEE: http://iw.stanford.edu/2006/10/ICEE.640.mov
- Explanation interfaces: McGuinness, D.L., Ding, L., Glass, A., Chang, C., Zeng, H., and Furtado, V. Explanation Interfaces for the Semantic Web: Issues and Models. In Proc. of the 3rd International Semantic Web User Interaction Workshop (SWUI'06), co-located with the International Semantic Web Conference, Athens, Georgia, 2006.
- Inference Web (including the above publications): http://iw.stanford.edu/
Slide 15: Extra Slides
Slide 16: Sample Task Justification (PML NodeSets)
Abbreviations: GS = GetSignature, BL = BuyLaptop, GA = GetApproval.
NodeSets in the justification of Executing(GS):
- Executing(GS)
- SupportsTopLevelGoal(GS)
- IntentionPreconditionMet(GS)
- TerminationConditionNotMet(GS)
- Supports(GS, BL)
- TopLevelGoal(BL)
- ParentOf(GS, GA)
- Supports(GA, BL)
- ParentOf(GA, BL)
- Supports(BL, BL)
Rules:
- SupportsTopLevelGoal(x) & IntentionPreconditionMet(x) & TerminationConditionNotMet(x) => Executing(x)
- TopLevelGoal(y) & Supports(x, y) => SupportsTopLevelGoal(x)
- ParentOf(x, y) & Supports(y, z) => Supports(x, z)
- Supports(x, x)
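These rules can be checked mechanically. The following self-contained Python sketch (plain forward chaining, not SPARK or the IW toolkit) applies the four rules above to the base facts and derives Executing(GS):

```python
# Forward-chain the slide's rules to derive Executing(GetSignature).
facts = {
    ("TopLevelGoal", "BL"),
    ("ParentOf", "GA", "BL"),            # GetApproval is a child of BuyLaptop
    ("ParentOf", "GS", "GA"),            # GetSignature is a child of GetApproval
    ("IntentionPreconditionMet", "GS"),
    ("TerminationConditionNotMet", "GS"),
}
facts |= {("Supports", t, t) for t in ("GS", "GA", "BL")}   # Supports(x, x)

def step(facts):
    new = set()
    supports = [f for f in facts if f[0] == "Supports"]
    # ParentOf(x, y) & Supports(y, z) => Supports(x, z)
    for (_, x, y) in [f for f in facts if f[0] == "ParentOf"]:
        for (_, y2, z) in supports:
            if y == y2:
                new.add(("Supports", x, z))
    # TopLevelGoal(y) & Supports(x, y) => SupportsTopLevelGoal(x)
    for (_, y) in [f for f in facts if f[0] == "TopLevelGoal"]:
        for (_, x, y2) in supports:
            if y == y2:
                new.add(("SupportsTopLevelGoal", x))
    # SupportsTopLevelGoal(x) & preconditions => Executing(x)
    for (_, x) in [f for f in facts if f[0] == "SupportsTopLevelGoal"]:
        if (("IntentionPreconditionMet", x) in facts
                and ("TerminationConditionNotMet", x) in facts):
            new.add(("Executing", x))
    return new - facts

while (new := step(facts)):
    facts |= new

print(("Executing", "GS") in facts)   # True
```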
Slide 17: Explaining Learning by Demonstration
General motivation:
- LAPDOG (Learning Assistant Procedures from Demonstration, Observation, and Generalization) generalizes the user's demonstration to learn a procedure.
- While LAPDOG's generalization process is designed to produce reasonable procedures, it will occasionally get it wrong. Specifically, it will occasionally over-generalize: generalizing the wrong variables or too many variables, or producing too general a procedure because of a coarse-grained type hierarchy.
- ICEE needs to explain the relevant aspects of the generalization process in a user-friendly format, both to help the user identify and correct over-generalizations and to help the user understand and trust the learned procedures.
Specific elements of LAPDOG reasoning to explain:
- Ontology-based parameter generalization: the variables (elements of the user's demonstration) that LAPDOG chooses to generalize, and the type hierarchy on which the generalization is based (see the sketch after this slide).
- Procedure completion: the knowledge-producing actions that were added to the demonstration, and the generalization done on those actions.
- Background knowledge that biases the learning, e.g. "rich information about the email, calendar events, files, web pages, and other objects upon which it executes its actions" (primarily for future versions of LAPDOG).
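To illustrate the role of the type hierarchy, here is a hedged sketch of ontology-based parameter generalization: choose the most specific type that covers all demonstrated values. The hierarchy and values below are invented, not LAPDOG's ontology; a coarse hierarchy is exactly what forces the over-general result on the second query, which is what ICEE would need to surface.

```python
# Hypothetical type hierarchy: child -> parent (None marks the root).
parent = {
    "PDFFile": "File", "WordFile": "File",
    "File": "Object", "EmailMessage": "Object", "Object": None,
}

def ancestors(t):
    """Chain from a type up to the root, most specific first."""
    chain = []
    while t is not None:
        chain.append(t)
        t = parent[t]
    return chain

def generalize(types):
    """Most specific common ancestor of all observed value types."""
    chains = [ancestors(t) for t in types]
    common = set(chains[0]).intersection(*map(set, chains[1:]))
    return next(t for t in chains[0] if t in common)

print(generalize(["PDFFile", "WordFile"]))      # File   (reasonable)
print(generalize(["PDFFile", "EmailMessage"]))  # Object (over-general)
```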
Slide 18: Explaining Preferences
General motivation:
- PLIANT (Preference Learning through Interactive Advisable Non-intrusive Training) uses user-elicited preferences and past choices to learn user scheduling preferences for PTIME, using a support vector machine (SVM).
- Inconsistent user preferences, over-constrained schedules, and the necessity of exploring the preference space result in user confusion about why a schedule is being presented.
- Lack of user understanding of PLIANT's updates creates confusion, mistrust, and the appearance that preferences are being ignored.
- ICEE needs to provide justifications of PLIANT's schedule suggestions, in a user-friendly format, without requiring the user to understand SVM learning.
Providing transparency into preference learning:
- Augment PLIANT to gather additional meta-information about the SVM itself (a sketch follows this slide): the support vectors identified by the SVM, the support vectors nearest to the query point, the margin to the query point, the average margin over all data points, the non-support vectors nearest to the query point, and the kernel transformation used, if any.
- Represent SVM learning and this meta-information as a justification in PML, using added SVM rules.
- Design abstraction strategies for presenting the justification to the user as a similarity-based explanation.
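As a sketch of the meta-information listed above, the snippet below uses scikit-learn's SVC as a stand-in for PLIANT's SVM (PLIANT itself is not scikit-learn, and the toy data is invented). The decision-function values serve as unnormalized margins.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 1], [1, 0], [0, 1], [2, 2], [2, 1]])
y = np.array([0, 1, 0, 0, 1, 1])           # toy "schedule liked?" labels

clf = SVC(kernel="rbf").fit(X, y)
query = np.array([[1.5, 1.0]])             # candidate schedule to explain

sv = clf.support_vectors_                  # support vectors identified by SVM
order = np.argsort(np.linalg.norm(sv - query, axis=1))
nearest_sv = sv[order[:2]]                 # support vectors nearest the query

query_margin = clf.decision_function(query)[0]        # margin to query point
avg_margin = np.abs(clf.decision_function(X)).mean()  # average over all data

print("kernel:", clf.kernel)
print("nearest support vectors:\n", nearest_sv)
print(f"query margin: {query_margin:.3f}  avg margin: {avg_margin:.3f}")
```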
Slide 19: During the demo, notice:
- The user can ask questions at any time.
- Responses are context-sensitive: dependent on the current task processing state and on the provenance of the underlying process.
- Explanations are generated completely automatically; no additional work is required by the user to supply information.
- Follow-up questions provide additional detail at the user's discretion, avoiding needless distraction.
Slide 20: Example Usage: Live Demo and/or Video Clip
Slide 21: Future Directions
- Broaden explanation of learning and CALO integration: explain learning by demonstration, integrating initially with the CALO component LAPDOG, and explain preference learning, integrating initially with the CALO component PTIME.
- Investigate explanation of conflicts; explore this as a driver to initiate learning procedure modifications or learning new procedures.
- Expand dialogue-based interaction and presentation of explanations, expanding our integration with Towel.
- Write up and distribute the trust study (using our interviews with 10 year-3 CLP subjects); use the trust study results to prioritize provenance, strategy, and dialogue work.
- Potentially exploit our work on IWTrust, a method for representing, propagating, and presenting trust, within the CALO setting; we already have results in intelligence analyst tools and integration with text analytics and Wikipedia, and it is likely to be used in IL.
- Continue discussions with Tom Garvey about transition opportunities to CPOF, with Tom Dietterich about explanation-directed learning and provenance, and with Adam Cheyer about explaining parts of the OPIE environment.
Slide 22: How PML Works
[Diagram: a justification trace of PML NodeSets backed by the IWBase registry. A NodeSet (e.g. foo:ns1, foo:ns2) hasConclusion (in some Language, via hasLanguage) and isConsequentOf one or more InferenceSteps. An InferenceStep hasRule (an InferenceRule), hasInferenceEngine (an InferenceEngine), hasVariableMapping (a Mapping), hasAntecedent NodeSets, and hasSourceUsage (a SourceUsage with hasSource and usageTime). A Query (foo:query1) isQueryFor a Question (foo:question1) and hasAnswer NodeSets, with fromQuery and fromAnswer links back.]
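To make the structure concrete, here is a minimal object sketch of the entities in the diagram. The class and field names follow the slide's labels, not the full PML OWL ontology, and the rule name is invented.

```python
# Hypothetical, simplified PML-like justification structures.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SourceUsage:
    source: str                     # hasSource, e.g. a document in IWBase
    usage_time: str                 # usageTime

@dataclass
class InferenceStep:
    rule: Optional[str] = None                 # hasRule
    inference_engine: Optional[str] = None     # hasInferenceEngine
    antecedents: List["NodeSet"] = field(default_factory=list)  # hasAntecedent
    source_usage: Optional[SourceUsage] = None  # hasSourceUsage

@dataclass
class NodeSet:
    conclusion: str                 # hasConclusion
    language: str = "KIF"           # hasLanguage
    steps: List[InferenceStep] = field(default_factory=list)  # isConsequentOf

# foo:ns2 is concluded from foo:ns1 via an inference rule; foo:ns1 is
# directly asserted from a source.
ns1 = NodeSet("ParentOf(GA, BL)",
              steps=[InferenceStep(source_usage=SourceUsage("foo:doc1", "t0"))])
ns2 = NodeSet("Supports(GA, BL)",
              steps=[InferenceStep(rule="support-transitivity",
                                   inference_engine="SPARK",
                                   antecedents=[ns1])])
```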
Slide 23: Future Directions
- We will leverage results from our trust study to focus and prioritize our strategies for explaining cognitive assistants, e.g. learning-specific provenance.
- We will expand our explanations of learning: augmenting explanation of learning by instruction, and designing and implementing explanation of learning by demonstration (initially focusing on LAPDOG).
- We will expand our initial design for explaining preferences in PTIME.
- Write up and distribute the user trust study to CALO participants.
- Consider using conflicts to drive learning and explanations, e.g. "I have not finished ... because x has not completed ...".
- Develop advanced dialogues exploiting Towel and other CALO components.
- Potentially exploit our work on IWTrust, a method for representing, propagating, and presenting trust, within the CALO setting; we already have results in intelligence analyst tools and integration with text analytics and Wikipedia, and it is likely to be used in IL.
Slide 24: Sample Task Hierarchy: Purchase equipment
Top-level task: Purchase equipment. Steps:
- Collect requirements
- Get quotes
- Do research
- Choose set of quotes
- Pick single item
- Get approval
- Place order
Slide 25: Sample Task Hierarchy: Get travel authorization
Top-level task: Get travel authorization. Steps:
- Collect requirements
- Get approval, if necessary (note: this conditional step was added to the original procedure through learning by instruction)
- Submit travel paperwork
Slide 26: PML in Swoop
[Screenshot: a PML justification viewed in the Swoop ontology browser.]
Slide 27: Explaining Extracted Entities
- Source: fbi_01.txt; source usage: span from 01 to 78.
- This extractor decided that Person_fbi-01.txt_46 is a Person and not an Occupation.
- The same conclusion was reached by multiple extractors; a conflicting conclusion came from one extractor.