
Automating the Processes of Inference and Inquiry
Kevin H. Knuth
University at Albany
Game Theory 2009

Describing the World

States

apple, banana, cherry: the possible states of a piece of fruit picked from my grocery basket

Statements (States of Knowledge)

Statements describe potential states. The powerset of the states { a, b, c } of a piece of fruit gives the statements about the fruit, ordered by subset inclusion:

{ a } { b } { c } { a, b } { a, c } { b, c } { a, b, c }
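The powerset construction above is easy to experiment with in code. A minimal Python sketch (the fruit names are illustrative):

```python
from itertools import chain, combinations

def powerset(states):
    """All subsets of a set of states; each subset is a statement."""
    s = list(states)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

states = {"apple", "banana", "cherry"}
statements = powerset(states)
print(len(statements))  # 8: the seven statements shown above plus the empty set

# Subset inclusion orders the statements: {apple} implies {apple, banana}
assert frozenset({"apple"}) <= frozenset({"apple", "banana"})
```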

Implication

The ordering encodes implication among the statements about a piece of fruit: for example, { a } implies { a, b }, which implies { a, b, c }.

Inference

Inference works backwards: quantify to what degree knowing that the system is in one of the three states { a, b, c } implies knowing that it is in some other set of states.

Quantification

Quantification

To quantify the partial order is to assign real numbers to the elements { a } { b } { c } { a, b } { a, c } { b, c } { a, b, c }.

Any quantification must be consistent with the lattice structure. Otherwise, it does not quantify the partial order!

Local Consistency

Any general rule must hold for special cases, so we look at special cases to constrain the general rule. Enforcing local consistency on the diamond formed by x, y, their join x ∨ y, and the bottom element I implies that:

v(x ∨ y) = F(v(x), v(y))

for some function F.

Associativity of Join

Write the same element two different ways: (x ∨ y) ∨ z = x ∨ (y ∨ z). This implies that:

F(F(v(x), v(y)), v(z)) = F(v(x), F(v(y), v(z)))

The general solution (Aczél) is additivity, up to an arbitrary regraduation:

v(x ∨ y) = v(x) + v(y)

DERIVATION OF THE SUMMATION AXIOM IN MEASURE THEORY! (Knuth, 2003, 2009)
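Aczél's result can be illustrated numerically. The combination rule F below is a made-up associative rule, not the one singled out by the derivation; the point is that an invertible regraduation f turns any such rule into ordinary addition:

```python
import math

def F(a, b):
    # A hypothetical associative combination rule (not plain addition)
    return a + b + a * b

a, b, c = 0.2, 0.5, 1.3

# Associativity: F(F(a, b), c) == F(a, F(b, c))
assert math.isclose(F(F(a, b), c), F(a, F(b, c)))

# Regraduation f(t) = log(1 + t) makes F additive,
# since 1 + F(a, b) = (1 + a)(1 + b)
f = math.log1p
assert math.isclose(f(F(a, b)), f(a) + f(b))
```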

VALUATION

A valuation assigns a real number to each lattice element (diagram: x, y, their join x ∨ y, and the bottom I).

General Case

(lattice diagram: x, y, z, with the join x ∨ y and the meet x ∧ y)

SUM RULE

v(x ∨ y) = v(x) + v(y) - v(x ∧ y)

In symmetric (self-dual) form:

v(x ∨ y) + v(x ∧ y) = v(x) + v(y)
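As a sanity check, the symmetric sum rule holds exactly when the valuation is set cardinality on the lattice of subsets (an illustrative choice of valuation):

```python
def v(s):
    # Cardinality: a simple valuation on the lattice of subsets
    return len(s)

x = {"apple", "banana"}
y = {"banana", "cherry"}

# Symmetric (self-dual) sum rule: v(join) + v(meet) = v(x) + v(y)
assert v(x | y) + v(x & y) == v(x) + v(y)
```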

Lattice Products

The direct (Cartesian) product × combines two spaces into one lattice.

DIRECT PRODUCT RULE

The lattice product is associative. After the sum rule, the only freedom left is rescaling, which forces:

v(x × y) = v(x) v(y)
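Again with cardinality as the valuation, the direct product rule is just the familiar count of a Cartesian product (a small illustrative check):

```python
from itertools import product

A = {"apple", "banana"}
B = {"red", "green", "blue"}

# Direct product rule with cardinality as the valuation: v(A x B) = v(A) v(B)
assert len(list(product(A, B))) == len(A) * len(B)
```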

Context and Bi-Valuations

Valuation v(x): a measure of x in which the context I is implicit.
Bi-valuation w(x | i): a measure of x with respect to an explicit context i.

Bi-valuations generalize lattice inclusion to degrees of inclusion.

Context Explicit

Sum Rule: w(x ∨ y | i) + w(x ∧ y | i) = w(x | i) + w(y | i)
Direct Product Rule: w(x × y | i × j) = w(x | i) w(y | j)

Associativity of Context

Chaining contexts in two different ways must give the same result; this associativity constrains how bi-valuations compose across a change of context.

CHAIN RULE

For a chain a ≤ b ≤ c:

w(a | c) = w(a | b) w(b | c)
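The chain rule can be checked with a toy bi-valuation: take the degree of inclusion to be a ratio of cardinalities (one illustrative choice, not the unique one):

```python
from fractions import Fraction

def w(x, y):
    # Degree to which context y includes statement x (a toy bi-valuation)
    return Fraction(len(x & y), len(y))

a = {"apple"}
b = {"apple", "banana"}
c = {"apple", "banana", "cherry"}   # a <= b <= c

# Chain rule along the chain: w(a | c) = w(a | b) * w(b | c)
assert w(a, c) == w(a, b) * w(b, c)
```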

Extending the Chain Rule

Since x ≤ x and x ≤ x ∨ y, we have w(x | x) = 1 and w(x ∨ y | x) = 1.

Extending the Chain Rule

(lattice diagram: the chain rule extended over x, y, z and the joins x ∨ y, y ∨ z, x ∨ y ∨ z)

Constraint Equations

Sum Rule: w(x ∨ y | i) + w(x ∧ y | i) = w(x | i) + w(y | i)
Direct Product Rule: w(x × y | i × j) = w(x | i) w(y | j)
Product Rule: w(x ∧ y | i) = w(x | i) w(y | x ∧ i)

(Knuth, MaxEnt 2009)

Commutativity leads to Bayes' Theorem. From the product rule and the commutativity of ∧,

w(x ∧ y | i) = w(x | i) w(y | x ∧ i) = w(y | i) w(x | y ∧ i),

so that

w(y | x ∧ i) = w(y | i) w(x | y ∧ i) / w(x | i).

Bayes' Theorem involves a change of context.
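A numerical change-of-context example with two candidate models (all numbers invented for illustration):

```python
# Prior state of knowledge and likelihoods for two candidate models
prior = {"M1": 0.5, "M2": 0.5}        # w(M | T)
likelihood = {"M1": 0.8, "M2": 0.2}   # w(D | M ∧ T)

# Evidence w(D | T), then the updated context w(M | D ∧ T)
evidence = sum(prior[m] * likelihood[m] for m in prior)
posterior = {m: prior[m] * likelihood[m] / evidence for m in prior}

assert abs(sum(posterior.values()) - 1.0) < 1e-12
assert posterior["M1"] > posterior["M2"]
```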

Automated Learning

Application to Statements

Applied to the lattice of statements, our bi-valuation quantifies degrees of implication:

M represents a statement about our MODEL
D represents a statement about our observed DATA
T is the TRUISM (what we assume to be true)

Change of Context = Learning

Re-arranging the terms highlights the learning process:

w(M | D ∧ T) = w(M | T) × [ w(D | M ∧ T) / w(D | T) ]

updated state of knowledge about the MODEL = initial state of knowledge about the MODEL × DATA-dependent term

Information Gain

Predict a Measurement Value

Predict the measurement value D_e we would expect to obtain by measuring at some position (x_e, y_e). We rely on our previous data D and hypothesized model M. Using the product rule, and summing (or integrating) over models:

p(D_e | D) = Σ_M p(D_e | M, D) p(M | D)
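The marginalization over a discrete set of hypothesized models can be sketched as follows (made-up numbers):

```python
# Posterior over two hypothesized models, p(M | D) (illustrative)
posterior = {"M1": 0.7, "M2": 0.3}

# Probability each model assigns to a white reading at (x_e, y_e)
p_white_given_model = {"M1": 0.9, "M2": 0.1}

# Product rule + marginalization: p(D_e | D) = sum over M of p(D_e | M, D) p(M | D)
p_white = sum(p_white_given_model[m] * posterior[m] for m in posterior)
assert abs(p_white - 0.66) < 1e-9
```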

Select an Experiment

Probability theory is not sufficient to select an optimal experiment. Instead, we rely on decision theory: choose the experiment that maximizes the expected utility of the predicted measurement, where U(·) is a utility function. Here we use the Shannon information as the utility function.

Maximum Information Gain

By writing the joint entropy of the model M and the predicted measurement D_e in two different ways, one can show (Loredo 2003) that maximizing the expected information gain is equivalent to choosing the experiment that maximizes the entropy of the distribution of predicted measurements. Other cost functions will lead to other results.

(GOOD FOR ROBOTICS!)
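Selecting the measurement location whose predicted-measurement distribution has the greatest entropy can be sketched in a few lines (positions and probabilities invented for illustration):

```python
import math

def entropy(p):
    # Shannon entropy (in bits) of a discrete distribution
    return -sum(q * math.log2(q) for q in p if q > 0)

# Predicted probability of a white reading at three candidate positions;
# in the robot these would come from the sampled circle models
predictions = {"pos_A": 0.95, "pos_B": 0.50, "pos_C": 0.10}

best = max(predictions,
           key=lambda e: entropy([predictions[e], 1 - predictions[e]]))
print(best)  # pos_B: the most uncertain prediction is the most informative
```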

Robotic Scientists

This robot is equipped with a light sensor. It is to locate and characterize a white circle on a black playing field with as few measurements as possible.

Initial Stage

BLUE: the Inference Engine generates samples from the space of polygons / circles.
COPPER: the Inquiry Engine computes an entropy map of predicted measurement results.

With little data, the hypothesized shapes are extremely varied, and it is good to look just about anywhere.

After Several Black Measurements

With several black measurements, the hypothesized shapes become smaller. Exploration is naturally focused on unexplored regions.

After One White Measurement

A positive result naturally focuses exploration around the promising region.

After Two White Measurements

A second positive result naturally focuses exploration around the edges.

After Many Measurements

Edge exploration becomes more pronounced as data accumulate. This is all handled naturally by the entropy!

Special Thanks to:

John Skilling, János Aczél, Ariel Caticha, Keith Earle, Philip Goyal, Steve Gull, Jeffrey Jewell, Carlos Rodriguez, Phil Erner, Scott Frasso, Rotem Gutman, Nabin Malakar, A.J. Mesiti