Answering Complex Questions and Performing Deep Reasoning in Advanced QA Systems – Reasoning with Logic and Probability. Chitta Baral, Arizona State University.


Answering Complex Questions and Performing Deep Reasoning in Advanced QA Systems – Reasoning with Logic and Probability. Chitta Baral, Arizona State University. Co-Investigators: Michael Gelfond and Richard Scherl. Other collaborator: Nelson Rushton.

Developing formalisms for reasoning with logic and probability
One of the research issues we are working on; a core component of our ``deep reasoning'' goals:
– What-if reasoning;
– Reasoning about counterfactuals;
– Reasoning about causes, etc.
In parallel: we are also using existing languages and formalisms (AnsProlog) to represent domain knowledge and do reasoning.

Reasoning with counterfactuals
Text (from Pearl's Causality):
– If the court orders the execution, then the captain gives a signal.
– If the captain gives a signal, then rifleman A shoots and rifleman B shoots.
– Rifleman A may shoot out of nervousness.
– If either rifleman A or B shoots, then the prisoner dies.
– The probability that the court orders the execution is p.
– The probability that A shoots out of nervousness is q.
A counterfactual question:
– What is the probability that the prisoner would be alive had A not shot, given that the prisoner is in fact dead?

Pictorial representation
– U: court orders execution (probability p)
– C: captain gives the signal
– A: rifleman A shoots
– B: rifleman B shoots
– V: rifleman A is nervous (probability q)
– D: the prisoner dies
Logic:
– U causes C
– C causes A; V causes A
– C causes B
– A causes D; B causes D
Probability:
– Pr(U) = p
– Pr(V) = q
[Figure: the causal graph over U, V, C, A, B, D with edges U → C, C → A, V → A, C → B, A → D, B → D.]
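Because the mechanisms in this model are deterministic, the whole joint distribution is fixed by the two exogenous variables U and V. A minimal sketch in Python (my illustration; the values p = 0.7 and q = 0.1 are made up, not from the slides) enumerates the four possible worlds and recovers, for example, the prediction P(D):

```python
from itertools import product

def joint(p, q):
    """Enumerate the worlds of the firing-squad model. Each world is
    fixed by the exogenous U, V; the rest follows the causal rules."""
    worlds = {}
    for u, v in product([True, False], repeat=2):
        c = u               # U causes C
        a = c or v          # C causes A; V causes A
        b = c               # C causes B
        d = a or b          # A causes D; B causes D
        worlds[(u, v, c, a, b, d)] = (p if u else 1 - p) * (q if v else 1 - q)
    return worlds

p, q = 0.7, 0.1
pd = sum(w for (u, v, c, a, b, d), w in joint(p, q).items() if d)
print(pd)  # P(D) = p + q - p*q = 0.73
```

Since D holds exactly when U or V does, the enumeration agrees with the closed form P(D) = p + q − pq.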

Bayes nets, causal Bayes nets, and Pearl's structural causal models
– Bayes nets do not have to respect causality; they only succinctly represent the joint probability distribution.
– One cannot reason about causes and effects with Bayes nets (one can with causal Bayes nets).
– Even then, one needs to distinguish between doing and observing; this distinction is important for counterfactual reasoning.
– Pearl's structural causal models give an algorithm for counterfactual reasoning. But:
– The logical language is weak (also true of other attempts); we need a more general knowledge representation language that can express defaults, normative statements, etc.
– The variables with probabilities are assumed to be independent.
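The doing/observing distinction can be made concrete on the firing-squad model. In this sketch (my own illustration, not the authors' code; p and q are made-up values), an intervention do(A = true) severs the arrows into A, so it carries no evidence about the court order, whereas observing A = true does:

```python
from itertools import product

def worlds(p, q, do_a=None):
    """Worlds keyed by (U, V, C, A); do_a=True/False forces A,
    cutting its dependence on C and V (an intervention)."""
    out = {}
    for u, v in product([True, False], repeat=2):
        c = u
        a = (c or v) if do_a is None else do_a
        out[(u, v, c, a)] = (p if u else 1 - p) * (q if v else 1 - q)
    return out

def cond(ws, pred, given=lambda k: True):
    """Conditional probability of pred given the filter `given`."""
    den = sum(w for k, w in ws.items() if given(k))
    return sum(w for k, w in ws.items() if pred(k) and given(k)) / den

p, q = 0.7, 0.1
see = cond(worlds(p, q), lambda k: k[2], given=lambda k: k[3])   # P(C | A)
act = cond(worlds(p, q, do_a=True), lambda k: k[2])              # P(C | do(A))
print(round(see, 4), round(act, 4))  # 0.9589 0.7
```

Observing A shoot raises the probability of a court order from p to p/(p + q − pq); forcing A to shoot leaves it at p.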

Further motivation for a richer language: the Monty Hall problem
A player is given the opportunity to select one of three closed doors, behind one of which there is a prize; the other two doors are empty. Once the player has made a selection, Monty is obligated to open one of the remaining closed doors, revealing that it does not contain the prize. Monty then gives the player the choice to switch to the other unopened door.
Question: Does it matter if the player switches? Equivalently:
– Which unopened door has the higher probability of containing the prize?

Illustration 1
First, assume that the car is behind door 1.
– We can do this without loss of generality: if the car were behind door 2, we would only have to exchange all occurrences of "door 1" and "door 2", and the proof would still hold.
– The candidate has three doors to choose from. Because he has no additional information, he selects one at random, so the probability of choosing each of doors 1, 2, and 3 is 1/3:

  Candidate chooses   p
  Door 1              1/3
  Door 2              1/3
  Door 3              1/3
  Sum                 1

Illustration 2
Going on from this table, we split each case by the door opened by the host.
– Since we assume the car is behind door 1, the host has a choice if and only if the candidate selects door 1; otherwise there is only one "goat door" left.
– We assume that if the host has a choice, he selects the door to open at random.

  Candidate chooses   Host opens   p
  Door 1              Door 2       1/6
  Door 1              Door 3       1/6
  Door 2              Door 3       1/3
  Door 3              Door 2       1/3
  Sum                              1

Illustration 3
A candidate who always sticks to his original choice, no matter what happens:

  Candidate chooses   Host opens   Final choice   Win   p
  Door 1              Door 2       Door 1         yes   1/6
  Door 1              Door 3       Door 1         yes   1/6
  Door 2              Door 3       Door 2         no    1/3
  Door 3              Door 2       Door 3         no    1/3
  Sum                                                   1
  Sum of cases where the candidate wins: 1/3

A candidate who always switches to the other door whenever he gets the chance:

  Candidate chooses   Host opens   Final choice   Win   p
  Door 1              Door 2       Door 3         no    1/6
  Door 1              Door 3       Door 2         no    1/6
  Door 2              Door 3       Door 1         yes   1/3
  Door 3              Door 2       Door 1         yes   1/3
  Sum                                                   1
  Sum of cases where the candidate wins: 2/3
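The two tables can be checked mechanically. A small Python sketch (exact arithmetic via fractions; same convention as the illustrations, with the car behind door 1) enumerates the same cases:

```python
from fractions import Fraction

def outcomes():
    """Yield (candidate's choice, host's door, probability) triples,
    with the car fixed behind door 1, as in the tables."""
    third = Fraction(1, 3)
    for choice in (1, 2, 3):
        # Host may open any door that is neither chosen nor the car's.
        goats = [d for d in (1, 2, 3) if d != choice and d != 1]
        for host in goats:
            yield choice, host, third * Fraction(1, len(goats))

def win_prob(switch):
    total = Fraction(0)
    for choice, host, pr in outcomes():
        final = choice if not switch else next(
            d for d in (1, 2, 3) if d not in (choice, host))
        if final == 1:  # the car is behind door 1
            total += pr
    return total

print(win_prob(switch=False), win_prob(switch=True))  # 1/3 2/3
```

The enumeration reproduces the table totals: sticking wins with probability 1/3, switching with probability 2/3.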

Key issues
– The existing languages of probability do not really give us the syntax to express certain kinds of knowledge about the problem.
– A lot of the reasoning is done by the human.
– Our goal: develop a knowledge representation language and a reasoning system such that, once we express our knowledge in the language, the system can do the desired reasoning.
– P-log is such an attempt.

Representing the Monty Hall problem in P-log

doors = {1, 2, 3}.
open, selected, prize : doors.   (D, D1 below are variables ranging over doors.)

~can_open(D) ← selected = D.
~can_open(D) ← prize = D.
can_open(D) ← not ~can_open(D).

random(prize). random(selected).
random(open : {X : can_open(X)}).

pr(prize = D) = 1/3.
pr(selected = D) = 1/3.
pr(open = D |c can_open(D), can_open(D1), D ≠ D1) = 1/2.
– By default, pr(open = D |c can_open(D)) = 1 when there is no D1 such that can_open(D1) and D ≠ D1.

obs(selected = 1). obs(open = 2). obs(prize ≠ 2).

Queries: P(prize = 1) = ?  P(prize = 3) = ?
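P-log itself computes these queries from the program's possible worlds. A hand-rolled Python sketch of that computation (my paraphrase of the semantics for this one program, not the P-log implementation) gives the expected answers:

```python
from fractions import Fraction

def p_log_worlds():
    """Possible worlds of the Monty Hall P-log program: assignments to
    prize, selected, open that satisfy the can_open rules, each weighted
    by the product of the causal probabilities."""
    third = Fraction(1, 3)
    for prize in (1, 2, 3):
        for selected in (1, 2, 3):
            can_open = [d for d in (1, 2, 3) if d != selected and d != prize]
            for opened in can_open:
                yield prize, selected, opened, third * third * Fraction(1, len(can_open))

def query(pred):
    """Probability of pred(prize) given obs(selected = 1), obs(open = 2)."""
    given = [(pz, s, o, w) for pz, s, o, w in p_log_worlds() if s == 1 and o == 2]
    den = sum(w for *_, w in given)
    num = sum(w for pz, s, o, w in given if pred(pz))
    return num / den

print(query(lambda pz: pz == 1), query(lambda pz: pz == 3))  # 1/3 2/3
```

The asymmetry comes from the 1/2 in the `can_open` rule: when prize = selected = 1 the host's door is split 1/2–1/2, but when prize = 3 he is forced to open door 2, so switching wins with probability 2/3.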

General syntax of P-log
Sorted signature:
– objects and function symbols (term-building functions and attributes)
Declarations:
– definitions of sorts and typing information for attributes
– e.g., doors = {1, 2, 3}.  open, selected, prize : doors.
Regular part: a collection of AnsProlog rules.
Random selection:
– [r] random(a(t) : {Y : p(Y)}) ← B.
Probabilistic information:
– pr_r(a(t) = y |c B) = v.
Observations and actions:
– obs(l).
– do(a(t) = y).

Semantics: the main ideas
– The logical part is translated into an AnsProlog program; its answer sets correspond to possible worlds.
– The probabilistic part is used to define a measure over the possible worlds, which in turn defines the probability of formulas and conditional probabilities.
– Consistency conditions, and sufficiency conditions for consistency.
– Bayes nets and Pearl's causal models are special cases of P-log programs.

Rifleman example: various P-log encodings (Encoding 1)

U, V, C, A, B, D : boolean.
random(U). random(V). random(C). random(A). random(B). random(D).

pr(U) = p.  pr(V) = q.
pr(C |c U) = 1.                    pr(~C |c ~U) = 1.
pr(A |c C) = 1.  pr(A |c V) = 1.   pr(~A |c ~V, ~C) = 1.
pr(B |c C) = 1.                    pr(~B |c ~C) = 1.
pr(D |c A) = 1.  pr(D |c B) = 1.   pr(~D |c ~A, ~B) = 1.

Prediction and explanation work as expected. Formulating counterfactual reasoning is work in progress.
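For the counterfactual query, which the slide marks as work in progress in P-log, Pearl's structural-model recipe (abduction, action, prediction) can at least be sketched directly. This is my illustration of the target computation, not the authors' method, with exact fractions standing in for p and q:

```python
from itertools import product
from fractions import Fraction

def exo_worlds(p, q):
    """The exogenous variables U, V fix everything else
    (C = U, A = C or V, B = C, D = A or B)."""
    for u, v in product([True, False], repeat=2):
        yield u, v, (p if u else 1 - p) * (q if v else 1 - q)

def p_alive_had_a_not_shot(p, q):
    """Abduction: condition (U, V) on the evidence D, i.e. U or V.
    Action: force A = false. Prediction: now D' = B = C = U,
    so the prisoner would be alive exactly when U is false."""
    evidence = [(u, v, w) for u, v, w in exo_worlds(p, q) if u or v]
    den = sum(w for *_, w in evidence)
    alive = sum(w for u, v, w in evidence if not u)
    return alive / den

print(p_alive_had_a_not_shot(Fraction(7, 10), Fraction(1, 10)))  # 3/73
```

In closed form this is q(1 − p) / (p + q − pq): the prisoner would have survived only if A alone fired, out of nervousness, with no court order.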

Conclusion: highlights of progress
Modules and generalizations:
– Travel module: first milestone reached (dialog with PARC has started).
– Generalization of the methodology: in progress.
– Developing a couple of other modules: about to start.
– Further generalizing the process: subsequent step.
AnsProlog enhancements and their use in various kinds of reasoning:
– P-log and reasoning with it: in progress.
– CR-Prolog (consistency-restoring Prolog): in progress.
– GUIs, modular AnsProlog, etc.: in progress.