Evaluation of representations in AI problem solving
Eugene Fink
1. Introduction
The performance of all reasoning systems depends on problem representation, and researchers have accumulated much evidence on the importance of appropriate representations for both humans and artificial-intelligence systems; however, the notion of “good” representations has remained at an informal level. We formalize the concept of representation in artificial intelligence, and propose a framework for evaluating and comparing alternative representations.
2. Alternative definitions
Informally, a representation is a certain view of a problem or class of problems, and an approach to solving these problems. Although researchers agree in their intuitive understanding of this concept, they have not yet developed its standard formalization; several alternative views are as follows. A representation:
- includes a machine language for the description of reasoning tasks and a specific encoding of a given problem in this language (Amarel);
- is the space expanded by a solver during its search for a solution (Newell and Simon);
- is the state space of a given problem, formed by all legal states of the simulated world and the transitions between them (Korf);
- consists of both data structures and programs operating on them to make inferences (Simon).
3. Domain representation
We follow Simon’s view of representation as “data structures and programs operating on them.”
- A problem solver is an algorithm that performs some class of reasoning tasks. When we apply it to a given problem, it may solve the problem or report a failure.
- A problem description is an input to a solver; in most search systems, it includes the allowed operations, the initial world state, a goal description, and possibly heuristics for guiding the search.
- A domain description is the part of a problem description that is common to a class of problems.
- A representation is a domain description along with a problem solver that uses this description.
- A representation change may involve improving a description, selecting a new solver, or both.
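As a concrete illustration of this decomposition (a minimal sketch, not from the paper; all names are assumptions), a representation could be modeled as a pair of a domain description and a solver:

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DomainDescription:
    """Part of a problem description shared by a class of problems (e.g., allowed operations)."""
    operations: list

@dataclass
class Problem:
    """Domain description plus the problem-specific part: initial state and goal."""
    domain: DomainDescription
    initial_state: object
    goal: object

# A solver maps a problem to a solution, or to None on failure.
Solver = Callable[[Problem], Optional[object]]

@dataclass
class Representation:
    """A domain description along with a problem solver that uses it."""
    domain: DomainDescription
    solver: Solver

Under this modeling, a representation change replaces the domain field, the solver field, or both.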
4. Gain function
We assume that the application of a problem solver leads to one of three outcomes: finding a solution, terminating with failure after exhausting its search space, or hitting a time bound. We pay for running time and get a reward for solving a problem; the reward may depend on a specific problem and its solution. The overall problem-solving gain is a function of a problem, time, and search result. We denote it by gn(prob, time, result), where the result may be a specific solution or the failure, denoted fail. A user has to provide a specific gain function, thus defining the values of different solutions. We impose three constraints on the allowed functions:
- The gain decreases with time: for every prob, result, and time_1 < time_2, gn(prob, time_1, result) ≥ gn(prob, time_2, result).
- A zero-time failure gives zero gain: for every prob, gn(prob, 0, fail) = 0.
- The gain of solving a problem is no smaller than the failure “gain”: for every prob, time, and result, gn(prob, time, result) ≥ gn(prob, time, fail).
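As an illustration (a minimal sketch with assumed numbers, not from the paper), one gain function that meets all three constraints charges a linear time cost and rewards any solution:

FAIL = None  # sentinel for the failure result

def gain(prob, time, result, reward=100.0, time_cost=1.0):
    """Example gain gn(prob, time, result).

    - decreases with time (linear time cost),
    - gives zero gain for a zero-time failure,
    - never values a solution below a failure found in the same time.
    """
    if result is FAIL:
        return -time_cost * time
    return reward - time_cost * time  # in general, the reward may depend on prob and result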
5. Solution quality
We may define a relative quality of solutions through a gain function. Suppose that soln_1 and soln_2 are two solutions of prob. Then soln_1 has higher quality than soln_2 if, for every time, gn(prob, time, soln_1) ≥ gn(prob, time, soln_2). If soln_1 gives larger gains than soln_2 for some running times and lower gains for others, then neither of them has higher quality than the other.
If the solutions of every problem are totally ordered by relative quality, then we can define a quality function, quality(prob, result), that satisfies the following conditions for every problem and every two results:
- quality(prob, fail) = 0;
- if gn(prob, 0, result_1) = gn(prob, 0, result_2), then quality(prob, result_1) = quality(prob, result_2);
- if gn(prob, 0, result_1) > gn(prob, 0, result_2), then quality(prob, result_1) > quality(prob, result_2).
We may then view gain as a function of a problem, time, and solution quality, gn_q(prob, time, quality), which satisfies the following condition: for every prob, time, and result, gn_q(prob, time, quality(prob, result)) = gn(prob, time, result). Most domains have natural quality measures that satisfy it, such as the length and cost of solutions.
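For instance, a minimal sketch (assumed, not from the paper) of a length-based quality measure and a corresponding gn_q; here gn(prob, time, result) = gn_q(prob, time, quality(prob, result)) holds by construction:

FAIL = None  # sentinel for the failure result

def quality(prob, result, max_reward=100.0):
    """quality(prob, result): shorter solutions are better; failure has quality 0."""
    if result is FAIL:
        return 0.0
    return max(1.0, max_reward - len(result))  # assumes a solution is a sequence of steps

def gain_q(prob, time, q, time_cost=1.0):
    """gn_q(prob, time, quality): reward equal to the quality, minus a linear time cost."""
    if q == 0.0:  # failure
        return -time_cost * time
    return q - time_cost * time

def gain(prob, time, result):
    """gn(prob, time, result), expressed through the quality of the result."""
    return gain_q(prob, time, quality(prob, result))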
6. Representation utility
We derive a utility function for evaluating representations, and then extend it to account for the use of time bounds and multiple representations. We assume that solver algorithms never make random choices; then, for every problem prob, a representation uniquely determines the running time, time(prob), and the result, result(prob). Therefore, it also uniquely determines the gain, gn(prob, time(prob), result(prob)). We define the utility of a representation by averaging the gain over the set P of all possible problems. We assume a fixed probability distribution on P, and denote the probability of encountering prob by p(prob). If we select a problem at random, the expected gain is
G = Σ_{prob ∈ P} p(prob) · gn(prob, time(prob), result(prob)).
We use G as a utility function for evaluating representations; it unifies the three main dimensions of utility: number of solved problems, speed, and solution quality.
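A minimal sketch of computing the utility G for one representation, given a problem distribution and a deterministic solver that reports its time and result (all parameter names are assumptions, not from the paper):

def utility(problems, p, run, gn):
    """Expected gain G = Σ over prob of p(prob) * gn(prob, time(prob), result(prob)).

    problems: iterable of problems
    p:        prob -> probability of encountering prob
    run:      prob -> (time, result) for the solver under this representation
    gn:       (prob, time, result) -> gain
    """
    total = 0.0
    for prob in problems:
        time, result = run(prob)
        total += p(prob) * gn(prob, time, result)
    return total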
7. Time bounds
If we never interrupt a solver, its search time may be infinite; in practice, we eventually have to stop the search. If we use a time bound B, the search time and result are as follows:
time′ = min(B, time(prob));
result′ = result(prob), if B ≥ time(prob); fail, if B < time(prob).
Thus, a time bound may affect the problem-solving gain. We denote the function that maps problems and bounds into gains by gn′: gn′(prob, B) = gn(prob, time′, result′). The choice of a time bound often depends on a specific problem; for example, we usually set smaller bounds for smaller-scale problems. If we view the selected time bound as a function of a given problem, B(prob), the expected gain is
G = Σ_{prob ∈ P} p(prob) · gn′(prob, B(prob)).
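A sketch of the bounded gain gn′ under the same assumed names, wrapping the unbounded solver with a time bound B:

FAIL = None  # sentinel for the failure result

def bounded_gain(prob, B, run, gn):
    """gn'(prob, B): charge min(B, time(prob)) and treat an interrupted search as a failure."""
    time, result = run(prob)        # unbounded time and result for prob
    if B >= time:
        return gn(prob, time, result)  # the solver finishes within the bound
    return gn(prob, B, FAIL)           # interrupted: pay for B and get the failure result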
8. Multiple representations
We next consider the use of multiple alternative representations; that is, we analyze a system that includes a library of representations and a mechanism for selecting among them. We denote the number of representations by k, and consider the respective gain functions gn_1, …, gn_k and bound-selection functions B_1, …, B_k. When solving a problem prob with representation i, we set the time bound B_i(prob) and use gn_i to determine the gain. For every i, we define the gain function gn′_i similarly to gn′; the gain of solving prob with representation i is gn′_i(prob, B_i(prob)). For each given problem prob, the system chooses an appropriate representation, i(prob), and then sets the bound B_{i(prob)}(prob). If we select a problem at random, the expected gain is
G = Σ_{prob ∈ P} p(prob) · gn′_{i(prob)}(prob, B_{i(prob)}(prob)).
Thus, the utility G depends on the gain function, the probability distribution, and the procedure for selecting representations and time bounds.
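A sketch of the multi-representation utility under these assumptions (hypothetical names; choose plays the role of the selection mechanism i(prob)):

def multi_rep_utility(problems, p, choose, bounds, bounded_gains):
    """Expected gain with a library of k representations.

    choose:        prob -> index i of the representation selected for prob
    bounds:        list of k bound-selection functions, bounds[i](prob) = B_i(prob)
    bounded_gains: list of k functions, bounded_gains[i](prob, B) = gn'_i(prob, B)
    """
    total = 0.0
    for prob in problems:
        i = choose(prob)
        B = bounds[i](prob)
        total += p(prob) * bounded_gains[i](prob, B)
    return total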
9. Conclusions
The proposed framework shows that the relative quality of representations depends on the user’s judgment of relative solution quality, the probabilities of encountering different problems, and the heuristics for selecting time bounds. Thus, it confirms the well-known observation that no representation is universally better than the others, and that the choice of representation should depend on specific requirements and problem types. We have applied it to develop a mechanism for evaluation and selection of representations (Fink, 2004), which is part of a system for automated representation improvement (Fink, 2003).
Closely related work:
Eugene Fink. Systematic approach to the design of representation-changing algorithms. SARA Symposium, 1995.
Eugene Fink. Changes of problem representation: Theory and experiments. Springer, 2003.
Eugene Fink. Automatic evaluation and selection of problem-solving methods. JETAI Journal, 16(2), 2004.