1 © Enn Tyugu1 Algorithms of Artificial Intelligence Lecture 7: Constraint satisfaction E. Tyugu

2 © Enn Tyugu2 Problem solving The last part of the course is dedicated to problem solving techniques. It is divided in its turn into problem solving by constraint satisfaction, program synthesis, planning, and agents. These parts overlap to some extent, but each of them has its own methods and approaches to problem solving. Constraint satisfaction and program synthesis are applicable, first of all, to computational problems; planning is a more general approach, and the most difficult to represent algorithmically. Agents are complex programs with intelligent behavior, i.e. with rather universal problem solving capabilities.

3 © Enn Tyugu3 Constraint satisfaction problem Let us have sets D1, D2,..., Dn. We shall call a relation a subset of a direct product Da × ... × Db, where a,..., b ∈ {1,..., n} are all different. Such a relation is the most general representation of a constraint. Each relation can be extended to the set D1 × D2 × ... × Dn so that the projection of the extended relation on the set Da × ... × Db is the relation itself. A solution of the set of constraints given in the form of relations R1,..., Rk is any element of the intersection S of the extended relations: S = R1′ ∩ R2′ ∩ ... ∩ Rk′. This is a tuple which contains an element from each set D1,..., Dn, and it satisfies the relations R1,..., Rk. Another problem on the same set of constraints is finding all solutions of the set of constraints, i.e. finding the intersection S itself. An important variation of the problem is finding a tuple of elements of only some sets Dp,..., Dq which can be extended to a solution as defined above.
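
As an illustration of the definition, here is a small brute-force sketch in Python (my own example, not from the lecture): two hypothetical relations over three small domains, solved by enumerating the direct product D1 × D2 × D3 and keeping the tuples that satisfy every relation.

from itertools import product

# Hypothetical domains and two binary relations (assumptions for illustration).
D = {"x1": {1, 2, 3}, "x2": {1, 2, 3}, "x3": {1, 2, 3}}
constraints = {
    ("x1", "x2"): lambda u, v: u < v,       # relation binding x1 and x2
    ("x2", "x3"): lambda u, v: u + v == 4,  # relation binding x2 and x3
}

def solutions(domains, constraints):
    """Enumerate every tuple of the direct product that satisfies all relations."""
    names = sorted(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(rel(assignment[a], assignment[b])
               for (a, b), rel in constraints.items()):
            yield assignment

print(list(solutions(D, constraints)))
# e.g. {'x1': 1, 'x2': 2, 'x3': 2}, {'x1': 1, 'x2': 3, 'x3': 1}, {'x1': 2, 'x2': 3, 'x3': 1}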

4 © Enn Tyugu4 Modification of the problem We shall use very extensively the structure of the set of relations, i.e. the knowledge telling us which sets actually are bound by one or another constraint. This knowledge can be represented by a graph called a constraint network. Before doing this, we shall also agree that we consider the problem of finding one tuple as a solution of the set of constraints. We introduce a variable for each set D1,..., Dn which takes values from this set, and denote these variables by x1,..., xn respectively. Now the problem becomes finding values of the variables x1,..., xn which satisfy the constraints. Having a relation R as a subset of Da × ... × Db, we shall say that the relation R binds the variables xa,..., xb, which have domains Da,..., Db.

5 © Enn Tyugu5 Constraint network The structure of a set of constraints given as relations can be represented by a bipartite graph whose nodes are the variables x1,..., xn and the relations R1,..., Rk. Edges of the graph correspond to the bindings, i.e. there is an edge between a relation R and a variable x iff the relation binds this variable. This graph is called a constraint network. Example:

6 © Enn Tyugu6 Example: three constraints

7 © Enn Tyugu7 Solution of the constraints

8 © Enn Tyugu8 Term constraints, rational trees In logic programming, constraints have to be solved for unification. In this case it is convenient to regard terms as trees. Recursive terms give infinite trees which contain only a finite number of distinct subtrees – rational trees. Good constraint solving algorithms are known only for finite trees.

9 © Enn Tyugu9 Binary constraints Let D1,..., Dn be finite sets. We shall consider here binary relations and denote by Rij a relation which binds the variables xi and xj from the sets Di and Dj. This relation can be considered as a mapping from Di, which is called its domain, to Dj, which is called its range. In the case of binary relations, the graph representation of a constraint network can be simplified. A binary constraint graph is a graph with variables as its nodes and constraints as its arcs. It can be obtained from the general constraint network by dropping all constraint nodes and replacing the two edges of each constraint by a single arc. The direction of the arc is chosen so that a constraint Rij is represented by the arc from xi to xj.

10 © Enn Tyugu10 Arc-consistency If a tuple of values (v1,..., vn) is a solution of the constraint satisfaction problem, then for any arc (xi, xj) of the binary constraint graph the pair of values (vi, vj) must satisfy the relation Rij. Therefore, if Di contains an element vi′ which does not satisfy Rij with any element of Dj, then the element vi′ can be excluded from consideration when a solution is searched for. And, vice versa, if an element vj′ of Dj has no match in Di that satisfies Rij, then the element vj′ can be excluded from consideration. We call such elements no-match elements. This gives the following consistency condition for arcs. An arc (xi, xj) of a binary constraint graph is consistent iff the sets Di and Dj do not have no-match elements. This is expressed also by the following formula: (∀u ∈ Di ∃v ∈ Dj Rij(u,v)) & (∀v ∈ Dj ∃u ∈ Di Rij(u,v)). An arc can be made consistent by removing from Di and Dj all no-match elements. (Strictly speaking, this changes the relations binding xi and xj, but it does not influence the solution of the consistency problem.) A binary constraint graph is arc-consistent if all its arcs are consistent.

11 © Enn Tyugu11 Arc-consistency algorithm The procedure semijoin(R, found) removes the no-match elements of the domain of the first bound variable of R and sets the variable found to true if such elements were found. We denote by Dom(R) and Range(R) the domains of the first and second bound variables of R, i.e. the domain and range of R. Inv(R) is the inverse relation of R, i.e. R(x,y) = Inv(R)(y,x). A.4.1: semijoin(R, found) = for x ∈ Dom(R) do L: begin for y ∈ Range(R) do if R(x,y) then exit L od; Dom(R) := Dom(R) \ {x}; found := true end od
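
A possible Python rendering of A.4.1, under an assumed representation that is not from the slides: a binary relation is stored as a set of allowed (x, y) pairs, and domains maps each variable name to its current set of values.

def semijoin(domains, rel_pairs, i, j):
    """Remove from domains[i] every value with no match in domains[j]; return True if something was removed."""
    no_match = {x for x in domains[i]
                if not any((x, y) in rel_pairs for y in domains[j])}
    domains[i] -= no_match
    return bool(no_match)

# Example: x1 in {1,2,3}, x2 in {1,2,3}, constraint x1 < x2.
domains = {"x1": {1, 2, 3}, "x2": {1, 2, 3}}
R12 = {(u, v) for u in range(1, 4) for v in range(1, 4) if u < v}
semijoin(domains, R12, "x1", "x2")                         # drops 3 from x1
semijoin(domains, {(v, u) for (u, v) in R12}, "x2", "x1")  # inverse relation; drops 1 from x2
print(domains)   # {'x1': {1, 2}, 'x2': {2, 3}}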

12 © Enn Tyugu12 Arc-consistency algorithm A.4.2: ArcConsistent(G) = found := true; while found do found := false; for R ∈ G do semijoin(R, found); semijoin(Inv(R), found) od od
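
Using the same assumed representation and the semijoin sketch above, the A.4.2 loop could look roughly like this; here constraints maps a pair of variable names (i, j) to the set of allowed (value_i, value_j) pairs.

def arc_consistent(domains, constraints):
    """Repeat semijoins in both directions until no domain changes."""
    found = True
    while found:
        found = False
        for (i, j), pairs in constraints.items():
            inv = {(v, u) for (u, v) in pairs}   # inverse relation
            if semijoin(domains, pairs, i, j):
                found = True
            if semijoin(domains, inv, j, i):
                found = True
    return domains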

13 © Enn Tyugu13 Arc-consistency continued The algorithm for arc consistency can be significantly improved by taking into account that only the domains of those constraints whose ranges have changed must be checked repeatedly. The following clever arc consistency algorithm for acyclic graphs requires only time linear in the number of constraints. Here selectvar(G) selects a node from the graph G, inc(y) is the set of relations which bind the node y, and incvar(R,y) is the node bound by R to y. A.4.3: for R ∈ G do semijoin(R, found); semijoin(Inv(R), found) od; open := {selectvar(G)}; for y ∈ open do for R ∈ inc(y) do found := false; semijoin(R, found); semijoin(Inv(R), found); if found then open := open ∪ {incvar(R,y)} fi od; open := open \ {y} od
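
A.4.3 itself relies on the helpers selectvar, inc and incvar; as a rough illustration of the same idea over the representation used above, here is a simplified worklist sketch (closer to the classical AC-3 scheme than to the literal algorithm, so treat it as an assumption): only arcs whose range variable has just changed are re-examined.

def arc_consistent_worklist(domains, constraints):
    """Re-check only arcs pointing at a variable whose domain has changed."""
    arcs = {}
    for (i, j), pairs in constraints.items():
        arcs[(i, j)] = pairs                           # forward direction
        arcs[(j, i)] = {(v, u) for (u, v) in pairs}    # inverse direction
    work = set(arcs)
    while work:
        i, j = work.pop()
        if semijoin(domains, arcs[(i, j)], i, j):
            # domain of i changed: re-check every arc whose range is i
            work |= {(k, m) for (k, m) in arcs if m == i and k != j}
    return domains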

14 © Enn Tyugu14 Path consistency Let us consider a network with arcs (xi,xj), (xi,xm) and (xm,xj). Even when all its arcs are consistent, there is no guarantee that the elements chosen from Di, Dj and Dm will together satisfy all three relations Rij, Rim and Rmj. These relations will be satisfied if the graph is arc consistent and for each pair of values u ∈ Di and v ∈ Dj there exists a value w ∈ Dm such that Rim(u,w) & Rmj(w,v) is true. This can be written as ∀u ∈ Di ∀v ∈ Dj (Rij(u,v) → ∃w ∈ Dm Rim(u,w) & Rmj(w,v)). If this condition is satisfied for any path of length 2 in a network, the network is called path consistent, or 3-consistent (considering the number of consistent arcs). This means that three consecutive nodes chosen along any path are consistent with respect to the relations between these nodes on the path. Path consistency can be generalized for any number of nodes in the following way.
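
The path-consistency condition for one triple of variables (i, m, j) can be checked directly; a small sketch over the same pair-set representation (illustrative only, not from the lecture):

def path_consistent_triple(domains, R_ij, R_im, R_mj, i, m, j):
    """True iff every pair allowed by R_ij is supported by some value w of the middle variable m."""
    return all(any((u, w) in R_im and (w, v) in R_mj for w in domains[m])
               for (u, v) in R_ij
               if u in domains[i] and v in domains[j])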

15 © Enn Tyugu15 k-consistency A network is k-consistent iff, given any instantiation of k-1 variables satisfying the condition that all the direct constraints among those variables are satisfied, it is possible to find an instantiation of any k-th variable such that the k values taken together satisfy all the constraints among the k variables. A network is strongly k-consistent iff it is j-consistent for all j ≤ k. Even if a network with n variables is strongly k-consistent for k < n, there is no guarantee that a solution exists. Remark. For good performance of arc- and path-consistency algorithms a special data structure is used to represent binary relations: every value bound by a relation is connected by pointers to all values which together with it satisfy the relation.

16 © Enn Tyugu16 Functional constraint networks A relation which binds variables u,...,v, x,...,y is called a functional dependency with input variables u,...,v and output variables x,...,y iff for any pair t1, t2 of tuples of values of its bound variables, equality of the respective values of u,...,v in these tuples implies equality of the respective values of x,...,y. A functional constraint network is a constraint network which contains only functional dependencies as constraints. On a functional constraint network it is easy to solve the following constraint satisfaction problem: given values of variables x1, ..., xm, find values of variables y1, ..., yn that satisfy the constraints together with the given values of x1, ..., xm.
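
A small illustration of the definition (my own example, not from the slides): a relation stored as a list of named tuples is a functional dependency with the chosen input variables iff equal input values always come with equal output values.

def is_functional(tuples, inputs, outputs):
    """Check the functional-dependency condition for a relation given as a list of dicts."""
    seen = {}
    for t in tuples:                          # t maps variable name -> value
        key = tuple(t[v] for v in inputs)
        out = tuple(t[v] for v in outputs)
        if seen.setdefault(key, out) != out:
            return False                      # same inputs, different outputs
    return True

# Example: the relation x = u * u is functional with input u, but not with input x.
rel = [{"u": u, "x": u * u} for u in range(-3, 4)]
print(is_functional(rel, ["u"], ["x"]))   # True
print(is_functional(rel, ["x"], ["u"]))   # False: x = 4 allows both u = 2 and u = -2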

17 © Enn Tyugu17 Function propagation known – set of variables already known to be computable; open – set of unprocessed variables from the set known; count(r) – counter of the constraint r showing how many input variables of the constraint are still unknown; countdown(r) – decreases the value of count(r) by one; initcount – initializes the values of the counters according to the numbers of input variables of the corresponding constraints; succ(e) – successors of the node e in the constraint network; plan – sequence of applied constraints which is a plan for computations; takefrom(s) – produces an element of the set s and excludes it from s.

18 © Enn Tyugu18 Function propagation A.4.4: plan := (); known := in; open := in; initcount; while not empty(open) do if out ⊆ known then success fi; x := takefrom(open); for r ∈ succ(x) do if count(r) > 1 & succ(r) ⊆ known then countdown(r) elif count(r) = 1 then plan := append(plan, r); open := open ∪ (succ(r) \ known); known := known ∪ succ(r); countdown(r) fi od od; failure
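
A possible Python sketch of function propagation in the spirit of A.4.4, under an assumed data layout that is not the lecture's own: each constraint maps a set of input variables to a set of output variables and is appended to the plan as soon as its last input becomes known.

def propagate(constraints, inputs, goal):
    """constraints: {name: (in_vars, out_vars)}; returns a plan (list of constraint names) or None."""
    count = {r: len(ins) for r, (ins, outs) in constraints.items()}   # initcount
    known, open_, plan = set(inputs), set(inputs), []
    while open_:
        if goal <= known:
            return plan                                               # success
        x = open_.pop()                                               # takefrom(open)
        for r, (ins, outs) in constraints.items():
            if x not in ins:
                continue                                              # r is not a successor of x
            count[r] -= 1                                             # countdown(r)
            if count[r] == 0 and not outs <= known:
                plan.append(r)
                open_ |= outs - known
                known |= outs
    return plan if goal <= known else None                            # failure

# Hypothetical network: r1 computes c from a and b, r2 computes d from c.
constraints = {"r1": ({"a", "b"}, {"c"}), "r2": ({"c"}, {"d"})}
print(propagate(constraints, {"a", "b"}, {"d"}))   # ['r1', 'r2']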

19 © Enn Tyugu19 Equational problem solver (EPS) It differs from A.4.4 in 1) the representation of a plan, 2) the applicability condition of a relation and 3) the function succ(r), which gives all neighbors of r. A.4.5: plan := (); known := in; open := in; initcount; while not empty(open) do if out ⊆ known then success fi; x := takefrom(open); for r ∈ succ(x) do if count(r) > 1 then countdown(r) elif count(r) = 1 then plan := append(plan, (r, succ(r) \ known)); open := open ∪ (succ(r) \ known); known := known ∪ succ(r); countdown(r) fi od od; failure

20 © Enn Tyugu20 Improved EPS Here we encapsulate the derivation step into a separate procedure DERIVE: A.4.6: plan := (); known := in; open := in; initcount; while not empty(open) do if out ⊆ known then success fi; x := takefrom(open); for r ∈ succ(x) do DERIVE(r, plan, open, known) od od; failure

21 © Enn Tyugu21 Improved EPS continued A.4.7: DERIVE(r, plan, open, known): if count(r) > 1 then countdown(r) elif count(r) = 1 and input(r) ⊆ known then plan := append(plan, (r, succ(r) \ known)); open := open ∪ (succ(r) \ known); known := known ∪ succ(r); countdown(r) fi The procedure DERIVE can be adapted to various representations and generalizations of relations. Here we accept equations with input variables input(r) that must be known for applying a relation.

22 © Enn Tyugu22 EPS with structural relations A.4.8: DERIVE1: if count(r) > 1 and not x = str(r) then countdown(r) elif (count(r) = 1 and input(r) ⊆ known) or x = str(r) then plan := append(plan, r); open := open ∪ (succ(r) \ known); known := known ∪ succ(r); countdown(r) fi Here we accept structural relations r that bind a structured variable str(r) with its components.

23 © Enn Tyugu23 Clustering of equations Example: given a system of independent linear equations, find its partitioning into minimal independently solvable blocks, i.e. strongly connected blocks.

24 © Enn Tyugu24 Clustering of equations A sufficient structural condition for the solvability of a system of linearly independent equations is that a bijective (one-to-one) mapping can be established between the variables and the equations. After checking this condition, the clustering algorithm is as follows: 1. Orient the edges of the network so that each edge which binds a variable x with a constraint C according to the established mapping becomes an arc (x, C), and all other edges become arcs of the form (C', x'). 2. Find the strongly connected parts P1,..., Pk of the directed graph obtained. Each such part represents a minimal system of equations that must be solved simultaneously. 3. Order the parts P1,..., Pk in the ordering opposite to the directions of the arcs which bind them. This gives the correct order for constraint propagation between P1,..., Pk.
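
A sketch of the three steps using the networkx library (an external dependency assumed here; the equations and the matching are hypothetical): orient the edges by the matching, condense the strongly connected components, and order the blocks against the arc direction.

import networkx as nx

equations = {"R1": {"x1", "x2"}, "R2": {"x1", "x2", "x3"}, "R3": {"x3"}}  # equation -> bound variables
matching = {"x1": "R1", "x2": "R2", "x3": "R3"}                           # assumed bijection

# Step 1: orient the edges according to the matching.
G = nx.DiGraph()
for eq, variables in equations.items():
    for x in variables:
        if matching.get(x) == eq:
            G.add_edge(x, eq)        # matched edge: variable -> equation
        else:
            G.add_edge(eq, x)        # all other edges: equation -> variable

# Step 2: strongly connected parts are the minimal simultaneously solvable blocks.
condensed = nx.condensation(G)       # DAG whose nodes carry a "members" attribute

# Step 3: order the blocks opposite to the arc direction for propagation.
order = list(reversed(list(nx.topological_sort(condensed))))
for block in order:
    print(sorted(condensed.nodes[block]["members"]))
# prints roughly ['R3'], ['x3'], ['R1', 'R2', 'x1', 'x2']:
# first solve R3 for x3, then the block {R1, R2} simultaneously for {x1, x2}.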

25 © Enn Tyugu25 Clustering of equations continued Maximal mapping: NB! One variable (x5) is left out of the mapping – it should be given, or the equation R2 cannot be taken into account.

26 © Enn Tyugu26 Clustering of equations continued Resulting constraint network (numbers of variables computable by clusters are shown in the cluster nodes):

27 © Enn Tyugu27 Higher-order functional constraints The constraint c is a higher-order functional constraint, because it has a functional variable g as an input.

28 © Enn Tyugu28 Classes of constraint satisfaction problems (classification diagram): constraint satisfaction over finite domains and over infinite domains; n-ary constraints; binary constraints; consistency propagation; functional constraints; equational problem solvers; higher-order functional constraints; inequalities; interval constraints; rational trees; subdefinite models; constraint hierarchies.

29 © Enn Tyugu29 Bibliography R. Bagnara, R. Gori, P. Hill, E. Zaffanella (2001) Finite-tree analysis for constraint logic-based languages. In: Proc. 2001 Joint Conference on Declarative Programming. http://host.di.uevora.pt/~agp01/accepted.html A. Colmerauer (1984) Solving Equations and Inequations on Finite and Infinite Trees. In: Proceedings of the Conference on Fifth Generation Computer Systems, Tokyo. faculty.plattsburgh.edu/jan.plaza/research/lp/papers/06A-equality.ps

