Evaluation of representations in AI problem solving Eugene Fink.


1. Introduction The performance of all reasoning systems depends on problem representation, and researchers have accumulated much evidence on the importance of appropriate representations for both humans and artificial-intelligence systems; however, the notion of “good” representations has remained at an informal level. We formalize the concept of representation in artificial intelligence, and propose a framework for evaluating and comparing alternative representations.

2. Alternative definitions Informally, a representation is a certain view of a problem or class of problems, and an approach to solving these problems. Although researchers agree in their intuitive understanding of this concept, they have not yet developed a standard formalization; several alternative views are as follows. A representation:
- includes a machine language for the description of reasoning tasks and a specific encoding of a given problem in this language (Amarel);
- is the space expanded by a solver during its search for a solution (Newell and Simon);
- is the state space of a given problem, formed by all legal states of the simulated world and the transitions between them (Korf);
- consists of both data structures and programs operating on them to make inferences (Simon).

3. Domain representation We follow Simon’s view of representation as “data structures and programs operating on them.” A problem solver is an algorithm that performs some type of reasoning task. When we apply it to a given problem, it may solve the problem or report a failure. A problem description is the input to a solver; in most search systems, it includes the allowed operations, the initial world state, the goal description, and possibly heuristics for guiding the search. A domain description is the part of a problem description that is common to a class of problems. A representation is a domain description along with a problem solver that uses this description. A representation change may involve improving the description, selecting a new solver, or both.
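These definitions can be encoded as data structures; the following is a minimal sketch (the field names and string-typed states are illustrative assumptions, not part of the paper):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DomainDescription:
    # The part of a problem description shared by a class of problems,
    # e.g. the allowed operations.
    operations: tuple

@dataclass
class ProblemDescription:
    # Problem-specific input to a solver: the shared domain description
    # plus an initial world state and a goal description.
    domain: DomainDescription
    initial_state: str
    goal: str

@dataclass
class Representation:
    # A domain description paired with a problem solver that uses it;
    # the solver returns a solution or None to report failure.
    domain: DomainDescription
    solver: Callable[[ProblemDescription], Optional[str]]
```

Under this encoding, a representation change means replacing the `domain` field, the `solver` field, or both.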

4. Gain function We assume that the application of a problem solver leads to one of three outcomes: finding a solution, terminating with failure after exhausting its search space, or hitting a time bound. We pay for running time and get a reward for solving a problem; the reward may depend on the specific problem and its solution. The overall problem-solving gain is a function of a problem, time, and search result, denoted gn(prob, time, result), where the result may be a specific solution or a failure (denoted “fail”). A user has to provide a specific gain function, thus defining the values of different solutions. We impose three constraints on the allowed functions:
- The gain decreases with time: for every prob, result, and time1 < time2, gn(prob, time1, result) ≥ gn(prob, time2, result).
- A zero-time failure gives zero gain: for every prob, gn(prob, 0, fail) = 0.
- The gain of solving a problem is no smaller than the failure “gain”: for every prob, time, and result, gn(prob, time, result) ≥ gn(prob, time, fail).
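As a concrete illustration (a toy example, not from the paper), a gain function that pays a fixed reward of 10 for any solution, nothing for a failure, and charges one unit per time step satisfies all three constraints:

```python
FAIL = "fail"

def gn(prob, time, result):
    """Toy gain function: fixed reward of 10 for any solution,
    0 for failure, minus the running time."""
    reward = 0.0 if result == FAIL else 10.0
    return reward - time

prob = "blocks-world-1"
assert gn(prob, 1, "soln") >= gn(prob, 2, "soln")  # gain decreases with time
assert gn(prob, 0, FAIL) == 0                      # zero-time failure gives zero gain
assert gn(prob, 5, "soln") >= gn(prob, 5, FAIL)    # solving is no worse than failing
```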

5. Solution quality We may define the relative quality of solutions through a gain function. Suppose that soln1 and soln2 are two solutions of prob. Then soln1 has higher quality than soln2 if, for every time, gn(prob, time, soln1) ≥ gn(prob, time, soln2). If soln1 gives larger gains than soln2 for some running times and lower gains for others, then neither of them has higher quality than the other. If the solutions of every problem are totally ordered by relative quality, then we can define a quality function, quality(prob, result), that satisfies the following conditions for every problem and every two results:
- quality(prob, fail) = 0;
- if gn(prob, 0, result1) = gn(prob, 0, result2), then quality(prob, result1) = quality(prob, result2);
- if gn(prob, 0, result1) > gn(prob, 0, result2), then quality(prob, result1) > quality(prob, result2).
We may then view gain as a function of a problem, time, and solution quality, gn_q(prob, time, quality), which satisfies the following condition: for every prob, time, and result, gn_q(prob, time, quality(prob, result)) = gn(prob, time, result). Most domains have natural quality measures that satisfy this condition, such as the length or cost of solutions.
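A toy instance of these definitions (an assumption for illustration): take a gain function whose solution reward is the result's numeric value, define quality as the zero-time gain, and check the defining condition for gn_q:

```python
FAIL = "fail"

def gn(prob, time, result):
    # Toy gain: a solution's reward is its numeric value
    # (e.g. a negated solution cost); failure pays nothing.
    reward = 0.0 if result == FAIL else float(result)
    return reward - time

def quality(prob, result):
    # Quality as the zero-time gain; this gives quality(prob, fail) = 0.
    return gn(prob, 0, result)

def gn_q(prob, time, q):
    # Gain viewed as a function of problem, time, and solution quality.
    return q - time

# Defining condition: gn_q(prob, time, quality(prob, result)) == gn(prob, time, result)
assert gn_q("p", 4, quality("p", 9)) == gn("p", 4, 9)
assert quality("p", FAIL) == 0
```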

6. Representation utility We derive a utility function for evaluating representations, and then extend it to account for the use of time bounds and multiple representations. We assume that solver algorithms never make random choices; then, for every problem prob, a representation uniquely determines the running time, time(prob), and the result, result(prob). Therefore, it also uniquely determines the gain, gn(prob, time(prob), result(prob)). We define a representation utility by averaging the gain over the set P of all possible problems. We assume a fixed probability distribution on P, and denote the probability of encountering prob by p(prob). If we select a problem at random, the expected gain is

G = Σ_{prob ∈ P} p(prob) · gn(prob, time(prob), result(prob)).

We use G as a utility function for evaluating a representation; it unifies the three main dimensions of utility: the number of solved problems, speed, and solution quality.
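The expected gain is a straightforward weighted sum; a small sketch with a hypothetical two-problem distribution (all numbers are made up for illustration):

```python
FAIL = "fail"

# Hypothetical setup: a fixed representation deterministically yields
# each problem's running time and result.
p         = {"easy": 0.7, "hard": 0.3}      # probability distribution on P
time_of   = {"easy": 2.0, "hard": 8.0}      # time(prob)
result_of = {"easy": "soln", "hard": FAIL}  # result(prob)

def gn(prob, time, result):
    # Toy gain: reward 10 for any solution, 0 for failure, minus time.
    return (10.0 if result != FAIL else 0.0) - time

# G = sum over P of p(prob) * gn(prob, time(prob), result(prob))
G = sum(p[pr] * gn(pr, time_of[pr], result_of[pr]) for pr in p)
# 0.7 * (10 - 2) + 0.3 * (0 - 8) = 3.2
```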

7. Time bounds If we never interrupt a solver, its search time may be infinite; in practice, we eventually have to stop the search. If we use a time bound B, the search time and result are as follows:

time′ = min(B, time(prob))
result′ = result(prob), if B ≥ time(prob); fail, if B < time(prob).

Thus, a time bound may affect the problem-solving gain. We denote the function that maps problems and bounds into gains by gn′: gn′(prob, B) = gn(prob, time′, result′). The choice of a time bound often depends on the specific problem; for example, we usually set smaller bounds for smaller-scale problems. If we view the selected time bound as a function of a given problem, B(prob), the expected gain is

G = Σ_{prob ∈ P} p(prob) · gn′(prob, B(prob)).
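The bounded gain gn′ can be sketched directly from the two cases above; the problems, times, and the toy gain function below are illustrative assumptions:

```python
FAIL = "fail"

# Hypothetical per-problem times and results for one representation.
time_of   = {"p1": 3.0, "p2": 12.0}
result_of = {"p1": "soln", "p2": "soln"}

def gn(prob, time, result):
    # Toy gain: reward 10 for any solution, 0 for failure, minus time.
    return (10.0 if result != FAIL else 0.0) - time

def gn_bounded(prob, B):
    """gn'(prob, B): run the solver for at most B time units;
    if the bound hits before the solver finishes, the result is fail."""
    t = min(B, time_of[prob])
    result = result_of[prob] if B >= time_of[prob] else FAIL
    return gn(prob, t, result)

# With bound B = 5: p1 finishes (gain 10 - 3), p2 is interrupted (gain 0 - 5).
```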

8. Multiple representations We next consider the use of multiple alternative representations; that is, we analyze a system that includes a library of representations and a mechanism for selecting among them. We denote the number of representations by k, and consider the respective gain functions gn_1, …, gn_k and bound-selection functions B_1, …, B_k. When solving a problem prob with representation i, we set the time bound B_i(prob) and use gn_i to determine the gain. For every i, we define a gain function gn′_i analogous to gn′; the gain of solving prob with representation i is gn′_i(prob, B_i(prob)). For each given problem prob, the system chooses an appropriate representation, i(prob), and then sets the bound B_{i(prob)}(prob). If we select a problem at random, the expected gain is

G = Σ_{prob ∈ P} p(prob) · gn′_{i(prob)}(prob, B_{i(prob)}(prob)).

Thus, the utility G depends on the gain function, the probability distribution, and the procedure for selecting representations and time bounds.
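A sketch of such a system with a library of k = 2 representations; the selection rule shown (pick the representation with the larger bounded gain) is one possible policy chosen for illustration, not the paper's selection mechanism, and all numbers are assumptions:

```python
FAIL = "fail"

# Hypothetical library: per-representation times, results, and
# bound-selection functions B_i(prob).
time_of   = [{"p": 4.0}, {"p": 9.0}]
result_of = [{"p": FAIL}, {"p": "soln"}]   # representation 1 solves "p"; 0 fails
bound_of  = [lambda prob: 6.0, lambda prob: 10.0]

def gn(prob, time, result):
    # Toy gain: reward 10 for any solution, 0 for failure, minus time.
    return (10.0 if result != FAIL else 0.0) - time

def gn_bounded(i, prob, B):
    # gn'_i(prob, B): gain of running representation i under bound B.
    t = min(B, time_of[i][prob])
    result = result_of[i][prob] if B >= time_of[i][prob] else FAIL
    return gn(prob, t, result)

def select(prob):
    # Illustrative selection rule i(prob): the representation whose
    # bounded gain is largest.
    return max(range(2), key=lambda i: gn_bounded(i, prob, bound_of[i](prob)))
```

Here representation 0 exhausts its search and fails (gain −4), while representation 1 finds a solution within its bound (gain 1), so the rule selects representation 1.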

9. Conclusions The proposed framework shows that the relative quality of representations depends on the user’s judgment of relative solution quality, the probabilities of encountering different problems, and the heuristics for selecting time bounds. Thus, it confirms the well-known observation that no representation is universally better than the others, and that the choice of representation should depend on specific requirements and problem types. We have applied the framework to develop a mechanism for the evaluation and selection of representations (Fink, 2004), which is part of a system for automated representation improvement (Fink, 2003).

Closely related work:
- Eugene Fink. Systematic approach to the design of representation-changing algorithms. SARA Symposium.
- Eugene Fink. Changes of problem representation: Theory and experiments. Springer.
- Eugene Fink. Automatic evaluation and selection of problem-solving methods. JETAI Journal, 16(2), 2004.