Presentation on theme: "Constraint satisfaction problems"— Presentation transcript:

1 Constraint satisfaction problems
CS171, Fall 2016 Introduction to Artificial Intelligence Prof. Alexander Ihler

2 Constraint Satisfaction Problems
What is a CSP? A finite set of variables X1, X2, …, Xn; a nonempty domain of possible values for each: D1, …, Dn; and a finite set of constraints C1, …, Cm. Each constraint Ci limits the values that variables can take, e.g., X1 ≠ X2. Each constraint Ci is a pair: Ci = (scope, relation). Scope = tuple of variables that participate in the constraint. Relation = list of allowed combinations of values. May be an explicit list of allowed combinations. May be an abstract relation allowing membership testing & listing. CSP benefits: standard representation pattern; generic goal and successor functions; generic heuristics (no domain-specific expertise required).
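The (scope, relation) view above can be written directly in code. A minimal Python sketch (illustrative only; the variable names and the `satisfies` helper are ours, not from the slides), encoding X1 ≠ X2 as an explicit list of allowed pairs:

```python
# A CSP as the slides define it: variables, a domain per variable,
# and constraints as (scope, relation) pairs.

variables = ["X1", "X2"]
domains = {"X1": {1, 2, 3}, "X2": {1, 2, 3}}

# Each constraint is (scope, relation); here the relation is an explicit
# set of allowed value combinations, encoding X1 != X2.
constraints = [
    (("X1", "X2"),
     {(a, b) for a in domains["X1"] for b in domains["X2"] if a != b}),
]

def satisfies(assignment, constraints):
    """True if the (possibly partial) assignment violates no constraint.

    Constraints whose scope is not fully assigned are skipped, matching
    the notion of a consistent partial assignment."""
    for scope, relation in constraints:
        if all(v in assignment for v in scope):
            if tuple(assignment[v] for v in scope) not in relation:
                return False
    return True
```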

3 Example: Sudoku Problem specification
Variables: {A1, A2, A3, …, I7, I8, I9} Domains: {1, 2, 3, …, 9} Constraints: each row and column "all different": alldiff(A1,A2,…,A9), …; each 3x3 block "all different": alldiff(G7,G8,G9,H7,…,I9), … Task: solve (complete a partial solution); check "well-posed": exactly one solution?

4 CSPs: What is a Solution?
State: assignment of values to some or all variables. An assignment is complete when every variable has an assigned value, and partial when one or more variables have no assigned value. Consistent assignment: an assignment that does not violate any constraint. A solution to a CSP is a complete and consistent assignment: all variables are assigned, and no constraints are violated. CSPs may require a solution that maximizes an objective. Linear objective ⇒ linear programming or integer linear programming. Ex: "weighted" CSPs. Examples of applications: scheduling the time of observations on the Hubble Space Telescope, airline schedules, cryptography, computer vision, image interpretation.

5 Example: Map Coloring Variables: Domains: { red, green, blue }
A solution is any setting of the variables that satisfies all the constraints, e.g., Variables: Domains: { red, green, blue } Constraints: bordering regions must have different colors:

6 Example: Map Coloring Constraint graph Graphical model Binary CSP
Vertices: variables Edges: constraints (connect involved variables) Graphical model Abstracts the problem to a canonical form Can reason about problem through graph connectivity Ex: Tasmania can be solved independently (more later) Binary CSP Constraints involve at most two variables Sometimes called “pairwise”

7 Aside: Graph coloring More general problem than map coloring
Planar graph: graph in the 2D plane with no edge crossings. Guthrie's conjecture (1852): every planar graph can be colored with ≤ 4 colors. Proved (using a computer) by Appel & Haken (1977).

8 Varieties of CSPs Discrete variables Continuous variables
Finite domains, size d ⇒ O(d^n) complete assignments. Ex: Boolean CSPs: Boolean satisfiability (NP-complete). Infinite domains (integers, strings, etc.). Ex: job scheduling; variables are start/end days for each job. Need a constraint language, e.g., StartJob_1 + 5 ≤ StartJob_3. Infinitely many solutions. Linear constraints: solvable. Nonlinear: no general algorithm. Continuous variables. Ex: building an airline schedule or class schedule. Linear constraints: solvable in polynomial time by LP methods.

9 Varieties of constraints
Unary constraints involve a single variable, e.g., SA ≠ green Binary constraints involve pairs of variables, e.g., SA ≠ WA Higher-order constraints involve 3 or more variables, Ex: jobs A,B,C cannot all be run at the same time Can always be expressed using multiple binary constraints Preference (soft constraints) Ex: “red is better than green” can often be represented by a cost for each variable assignment Combines optimization with CSPs

10 Simplify… We restrict attention to: Discrete & finite domains
Variables have a discrete, finite set of values No objective function Any complete & consistent solution is OK Solution Find a complete & consistent assignment Example: Sudoku puzzles

11 Binary CSPs CSPs only need binary constraints! Unary constraints
Just delete values from the variable's domain. Higher order (3 or more variables): reduce to binary. Simple example: 3 variables X, Y, Z. Domains Dx={1,2,3}, Dy={1,2,3}, Dz={1,2,3}. Constraint C[X,Y,Z] = {X+Y=Z} = {(1,1,2),(1,2,3),(2,1,3)}. (Plus other variables & constraints elsewhere in the CSP.) Create a new variable W, taking values as triples (3-tuples). Domain of W is Dw={(1,1,2),(1,2,3),(2,1,3)}. Dw is exactly the tuples that satisfy the higher-order constraint. Create three new constraints: C[X,W] = { [1,(1,1,2)], [1,(1,2,3)], [2,(2,1,3)] } C[Y,W] = { [1,(1,1,2)], [2,(1,2,3)], [1,(2,1,3)] } C[Z,W] = { [2,(1,1,2)], [3,(1,2,3)], [3,(2,1,3)] } Other constraints elsewhere involving X, Y, Z are unaffected.
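The reduction above is mechanical enough to write out. A small Python sketch (our own rendering of the slide's example) that builds W's domain and the three binary constraints from the satisfying triples:

```python
# The slide's ternary constraint X+Y=Z over domains {1,2,3} is replaced by
# a new variable W whose domain is the set of satisfying triples, plus
# three binary constraints tying each original variable to one coordinate.

Dw = [(1, 1, 2), (1, 2, 3), (2, 1, 3)]   # exactly the triples with x + y == z

C_XW = {(t[0], t) for t in Dw}   # X must equal W's first coordinate
C_YW = {(t[1], t) for t in Dw}   # Y must equal W's second coordinate
C_ZW = {(t[2], t) for t in Dw}   # Z must equal W's third coordinate
```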

12 Example: Cryptarithmetic problems
Find numeric substitutions that make an equation hold: T W O + T W O = F O U R, with all letters taking distinct digits. Column equations (C1, C2, C3 are the carries): O+O = R + 10*C1; W+W+C1 = U + 10*C2; T+T+C2 = O + 10*C3; C3 = F. A non-pairwise CSP: C1, C2, C3 ∈ {0,1}. For example: O = 4, R = 8, W = 3, U = 6, T = 7, F = 1. Note: not unique. How many solutions?
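Since the puzzle is small, brute force over digit assignments settles the "how many solutions?" question. A Python sketch (ours, not from the slides; it assumes no leading zeros and all letters distinct):

```python
from itertools import permutations

def solve_two_two_four():
    """Enumerate digit assignments satisfying TWO + TWO = FOUR."""
    solutions = []
    for t, w, o, f, u, r in permutations(range(10), 6):  # distinct digits
        if t == 0 or f == 0:              # no leading zeros
            continue
        two = 100 * t + 10 * w + o
        four = 1000 * f + 100 * o + 10 * u + r
        if two + two == four:
            solutions.append({"T": t, "W": w, "O": o, "F": f, "U": u, "R": r})
    return solutions
```

The slide's example (T=7, W=3, O=4, F=1, U=6, R=8, i.e. 734 + 734 = 1468) appears among the results, and it is indeed not unique.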

13 Example: Cryptarithmetic problems
Try it yourself at home (a frequent request from college students to parents): S E N D + M O R E = M O N E Y

14 (adapted from http://www.unitime.org/csp.php)
Random binary CSPs A random binary CSP is defined by a four-tuple (n, d, p1, p2): n = the number of variables; d = the domain size of each variable; p1 = probability a constraint exists between two variables; p2 = probability a pair of values in the domains of two variables connected by a constraint is incompatible. Note that R&N lists compatible pairs of values instead; the formulations are equivalent, just take the set complement. (n, d, p1, p2) generate random binary constraints: the so-called "model B" of Random CSP (n, d, n1, n2). n1 = p1 · n(n-1)/2 pairs of variables are randomly and uniformly selected and binary constraints are posted between them. For each constraint, n2 = p2 · d^2 randomly and uniformly selected pairs of values are picked as incompatible. The random CSP as an optimization problem (minCSP): the goal is to minimize the total sum of values for all variables.
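The model-B recipe can be sketched as a generator. This is our own Python rendering (the rounding of n1 and n2 and the representation of constraints as sets of incompatible value pairs are implementation choices, not from the slides):

```python
import itertools
import random

def random_binary_csp(n, d, p1, p2, seed=0):
    """Model B sketch: pick exactly n1 = p1*n(n-1)/2 variable pairs, and
    for each, mark exactly n2 = p2*d^2 value pairs as incompatible."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n), 2))
    n1 = round(p1 * len(pairs))
    n2 = round(p2 * d * d)
    constraints = {}
    for (i, j) in rng.sample(pairs, n1):
        value_pairs = list(itertools.product(range(d), repeat=2))
        # The stored set lists INCOMPATIBLE value pairs, as in the slide
        # (take the complement for the R&N "allowed pairs" convention).
        constraints[(i, j)] = set(rng.sample(value_pairs, n2))
    return constraints
```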

15 CSP as a standard search problem
A CSP can easily be expressed as a standard search problem. Incremental formulation: Initial state: the empty assignment {}. Actions: assign a value to an unassigned variable, provided that it does not violate a constraint. Goal test: the current assignment is complete (by construction it is consistent). Path cost: constant cost for every step (not really relevant). Aside: can also use a complete-state formulation; local search techniques (Chapter 4) tend to work well. BUT: the solution is at depth n (# of variables). For BFS: branching factor at the top level is nd, at the next level (n-1)d, … Total: n!·d^n leaves! But there are only d^n complete assignments!

16 Commutativity CSPs are commutative.
Order of any given set of actions has no effect on the outcome. Example: choose colors for Australian territories, one at a time. [WA=red then NT=green] same as [NT=green then WA=red]. All CSP search algorithms can generate successors by considering assignments for only a single variable at each node in the search tree ⇒ there are d^n irredundant leaves. (Figure out later which value to assign to which variable.)

17 Backtracking search Similar to depth-first search
At each level, pick a single variable to expand Iterate over the domain values of that variable Generate children one at a time, one per value Backtrack when a variable has no legal values left Uninformed algorithm Poor general performance

18 Backtracking search (R&N Fig. 6.5)
function BACKTRACKING-SEARCH(csp) returns a solution or failure
  return RECURSIVE-BACKTRACKING({}, csp)

function RECURSIVE-BACKTRACKING(assignment, csp) returns a solution or failure
  if assignment is complete then return assignment
  var ← SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)
  for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
    if value is consistent with assignment according to CONSTRAINTS[csp] then
      add {var = value} to assignment
      result ← RECURSIVE-BACKTRACKING(assignment, csp)
      if result ≠ failure then return result
      remove {var = value} from assignment
  return failure
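The pseudocode above translates almost line for line into Python. A sketch (ours; the `consistent` callback stands in for CONSTRAINTS[csp], and the variable and value orderings are left naive here):

```python
def backtracking_search(variables, domains, consistent):
    """consistent(var, value, assignment) -> bool checks the constraints
    against the current partial assignment."""
    def recurse(assignment):
        if len(assignment) == len(variables):
            return assignment                    # complete => solution
        # SELECT-UNASSIGNED-VARIABLE (naive: first unassigned)
        var = next(v for v in variables if v not in assignment)
        # ORDER-DOMAIN-VALUES (naive: domain order)
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value
                result = recurse(assignment)
                if result is not None:
                    return result
                del assignment[var]              # backtrack
        return None                              # failure
    return recurse({})
```

SELECT-UNASSIGNED-VARIABLE and ORDER-DOMAIN-VALUES are the two hooks that the later heuristic slides (MRV, degree, LCV) refine.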

19 Backtracking search Expand deepest unexpanded node
Generate only one child at a time. Goal-Test when inserted. For CSP, Goal-test at bottom Future= green dotted circles Frontier=white nodes Expanded/active=gray nodes Forgotten/reclaimed= black nodes


31 Backtracking search (R&N Fig. 6.5)
function BACKTRACKING-SEARCH(csp) returns a solution or failure
  return RECURSIVE-BACKTRACKING({}, csp)

function RECURSIVE-BACKTRACKING(assignment, csp) returns a solution or failure
  if assignment is complete then return assignment
  var ← SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)
  for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
    if value is consistent with assignment according to CONSTRAINTS[csp] then
      add {var = value} to assignment
      result ← RECURSIVE-BACKTRACKING(assignment, csp)
      if result ≠ failure then return result
      remove {var = value} from assignment
  return failure

32 Improving Backtracking O(exp(n))
Make our search more “informed” (e.g. heuristics) General purpose methods can give large speed gains CSPs are a generic formulation; hence heuristics are more “generic” as well Before search: Reduce the search space Arc-consistency, path-consistency, i-consistency Variable ordering (fixed) During search: Look-ahead schemes: Detecting failure early; reduce the search space if possible Which variable should be assigned next? Which value should we explore first? Look-back schemes: Backjumping Constraint recording Dependency-directed backtracking

33 Look-ahead: Variable and value orderings
Intuition: Apply propagation at each node in the search tree (reduce future branching) Choose a variable that will detect failures early (low branching factor) Choose value least likely to yield a dead-end (find solution early if possible) Forward-checking (check each unassigned variable separately) Maintaining arc-consistency (MAC) (apply full arc-consistency)

34 Dependence on variable ordering
Example: coloring Color WA, Q, V first: 9 ways to color none inconsistent (yet) only 3 lead to solutions… Color WA, SA, NT first: 6 ways to color all lead to solutions no backtracking

35 Dependence on variable ordering
Another graph coloring example:

36 Minimum remaining values (MRV)
A heuristic for selecting the next variable a.k.a. most constrained variable (MCV) heuristic choose the variable with the fewest legal values will immediately detect failure if X has no legal values (Related to forward checking, later)
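MRV is a one-liner once domains are tracked. A minimal sketch (ours), assuming `domains` maps each variable to its current set of legal values:

```python
def select_mrv(domains, assignment):
    """Minimum remaining values: pick the unassigned variable with the
    fewest legal values ("most constrained variable")."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))
```

If some variable's domain has shrunk to zero values, it is selected immediately, so the failure is detected at once, exactly the behavior the slide describes.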

37 Degree heuristic Another heuristic for selecting the next variable
a.k.a. most constraining variable heuristic Select variable involved in the most constraints on other unassigned variables Useful as a tie-breaker among most constrained variables What about the order to try values?

38 Least Constraining Value
Heuristic for selecting what value to try next Given a variable, choose the least constraining value: the one that rules out the fewest values in the remaining variables Makes it more likely to find a solution early
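A sketch of LCV (ours), under the assumption of binary not-equal-style constraints where a neighbor loses exactly the chosen value; `ruled_out` counts how many neighbor options each candidate value would eliminate:

```python
def order_lcv(var, domains, neighbors, assignment):
    """Least constraining value: order var's values by how many options
    they rule out in unassigned neighboring variables, fewest first."""
    def ruled_out(value):
        return sum(value in domains[nb]
                   for nb in neighbors[var] if nb not in assignment)
    return sorted(domains[var], key=ruled_out)
```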

39 Variable and value orderings
Minimum remaining values for variable ordering Least constraining value for value ordering Why do we want these? Is there a contradiction? Intuition: Choose a variable that will detect failures early (low branching factor) Choose value least likely to yield a dead-end (find solution early if possible) MRV for variable selection reduces current branching factor Low branching factor throughout tree = fast search Hopefully, when we get to variables with currently many values, forward checking or arc consistency will have reduced their domains & they’ll have low branching too LCV for value selection increases the chance of success If we’re going to fail at this node, we’ll have to examine every value anyway If we’re going to succeed, the earlier we do, the sooner we can stop searching

40 Summary CSPs special kind of problem: states defined by values of a fixed set of variables, goal test defined by constraints on variable values Backtracking = depth-first search with one variable assigned per node Heuristics Variable ordering and value selection heuristics help significantly Variable ordering (selection) heuristics Choose variable with Minimum Remaining Values (MRV) Degree Heuristic – break ties after applying MRV Value ordering (selection) heuristic Choose Least Constraining Value

41 Constraint satisfaction problems (continued)
CS171, Fall 2016 Introduction to Artificial Intelligence Prof. Alexander Ihler

42 You Should Know Node consistency, arc consistency, path consistency, K-consistency (6.2) Forward checking (6.3.2) Local search for CSPs Min-Conflict Heuristic (6.4) The structure of problems (6.5)

43 Minimum remaining values (MRV)
A heuristic for selecting the next variable a.k.a. most constrained variable (MCV) heuristic choose the variable with the fewest legal values will immediately detect failure if X has no legal values (Related to forward checking, later) Idea: reduce the branching factor now Smallest domain size = fewest # of children = least branching

44 Detailed MRV example Initially, all regions have |Di|=3
WA=red Initially, all regions have |Di|=3 Choose one randomly, e.g. WA & pick value, e.g., red (Better: tie-break with degree…) Do forward checking (next topic) NT & SA cannot be red Now NT & SA have 2 possible values – pick one randomly

45 Detailed MRV example NT & SA have two possible values
NT=green NT & SA have two possible values Choose one randomly, e.g. NT & pick value, e.g., green (Better: tie-break with degree; select value by least constraining) Do forward checking (next topic) SA & Q cannot be green Now SA has only 1 possible value; Q has 2 values.

46 Detailed MRV example SA has only one possible value Assign it
SA=blue SA has only one possible value Assign it Do forward checking (next topic) Now Q, NSW, V cannot be blue Now Q has only 1 possible value; NSW, V have 2 values.

47 Degree heuristic Another heuristic for selecting the next variable
a.k.a. most constraining variable heuristic Select variable involved in the most constraints on other unassigned variables Useful as a tie-breaker among most constrained variables Note: usually (& in picture above) we use the degree heuristic as a tie-breaker for MRV; however, in homeworks & exams we may use it without MRV to show how it works. Let’s see an example.

48 Ex: Degree heuristic (only)
Select variable involved in largest # of constraints with other un-assigned vars Initially: degree(SA) = 5; assign (e.g., red) No neighbor can be red; we remove the edges to assist in counting degree Now, degree(NT) = degree(Q) = degree(NSW) = 2 Select one at random, e.g. NT; assign to a value, e.g., blue Now, degree(NSW)=2 Idea: reduce branching in the future The variable with the largest # of constraints will likely knock out the most values from other variables, reducing the branching factor in the future SA=red NT=blue NSW=blue

49 Ex: MRV + degree Idea: reduce branching in the future
Initially, all variables have 3 values; tie-breaker degree => SA No neighbor can be red; we remove the edges to assist in counting degree Now, WA, NT, Q, NSW, V have 2 values each WA,V have degree 1; NT,Q,NSW all have degree 2 Select one at random, e.g. NT; assign to a value, e.g., blue Now, WA and Q have only one possible value; degree(Q)=1 > degree(WA)=0 Idea: reduce branching in the future The variable with the largest # of constraints will likely knock out the most values from other variables, reducing the branching factor in the future SA=red NT=blue NSW=blue

50 Least Constraining Value
Heuristic for selecting what value to try next Given a variable, choose the least constraining value: the one that rules out the fewest values in the remaining variables Makes it more likely to find a solution early

51 Look-ahead: Constraint propagation
Intuition: Apply propagation at each node in the search tree (reduce future branching) Choose a variable that will detect failures early (low branching factor) Choose value least likely to yield a dead-end (find solution early if possible) Forward-checking (check each unassigned variable separately) Maintaining arc-consistency (MAC) (apply full arc-consistency)

52 Forward checking Idea:
Keep track of remaining legal values for unassigned variables Backtrack when any variable has no legal values

53 Forward checking Idea:
Keep track of remaining legal values for unassigned variables Backtrack when any variable has no legal values Red Not red Assign {WA = red} Effect on other variables (neighbors of WA): NT can no longer be red SA can no longer be red

54 Forward checking Idea:
Keep track of remaining legal values for unassigned variables Backtrack when any variable has no legal values Red Not red Not green Green Assign {Q = green} Effect on other variables (neighbors of Q): NT can no longer be green SA can no longer be green NSW can no longer be green (We already have failure, but FC is too simple to detect it now)

55 Forward checking Idea:
Keep track of remaining legal values for unassigned variables Backtrack when any variable has no legal values Not red Not green Green Red Not green Not blue Not red Not green Not blue Blue Assign {V = blue} Effect on other variables (neighbors of V): NSW can no longer be blue SA can no longer be blue (no values possible!) Forward checking has detected this partial assignment is inconsistent with any complete assignment
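The bookkeeping in the example above can be sketched in a few lines. This version (ours) assumes binary not-equal constraints, as in map coloring; it records what it pruned so the caller can restore the domains on backtracking:

```python
def forward_check(var, value, domains, neighbors, assignment):
    """After assigning var=value, delete incompatible values from each
    unassigned neighbor's domain. Returns (pruned, ok); ok is False as
    soon as some domain empties (as with SA in the example above)."""
    pruned = []
    for nb in neighbors[var]:
        if nb not in assignment and value in domains[nb]:
            domains[nb].discard(value)     # not-equal constraint assumed
            pruned.append((nb, value))
            if not domains[nb]:
                return pruned, False       # dead end detected early
    return pruned, True
```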

56 Ex: 4-Queens Problem X1 {1,2,3,4} X3 X4 X2 1 3 2 4 X3 X2 X4 X1
Backtracking search with forward checking Bookkeeping is tricky & complicated X1 {1,2,3,4} X3 X4 X2 1 3 2 4 X3 X2 X4 X1

57 Ex: 4-Queens Problem X1 {1,2,3,4} X3 X4 X2 1 3 2 4 X3 X2 X4 X1
Red = value is assigned to variable


59 Ex: 4-Queens Problem X1 Level: Deleted:
{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) } (Please note: As always in computer science, there are many different ways to implement anything. The book-keeping method shown here was chosen because it is easy to present and understand visually. It is not necessarily the most efficient way to implement the book-keeping in a computer. Your job as an algorithm designer is to think long and hard about your problem, then devise an efficient implementation.) One possibly more efficient equivalent alternative (of many): { (X2:1,2) (X3:1,3) (X4:1,4) }

60 Ex: 4-Queens Problem X1 {1,2,3,4} X3 { ,2, ,4} X4 { ,2,3, } X2
{ ,2, ,4} X4 { ,2,3, } X2 { , ,3,4} 1 3 2 4 X3 X2 X4 X1 Red = value is assigned to variable


63 Ex: 4-Queens Problem X1 Level: X2 Level: Deleted:
{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) } X2 Level: { (X3,2) (X3,4) (X4,3) } (Please note: Of course, we could have failed as soon as we deleted { (X3,2) (X3,4) }. There was no need to continue to delete (X4,3), because we already had established that the domain of X3 was null, and so we already knew that this branch was futile and we were going to fail anyway. The book-keeping method shown here was chosen because it is easy to present and understand visually. It is not necessarily the most efficient way to implement the book-keeping in a computer. Your job as an algorithm designer is to think long and hard about your problem, then devise an efficient implementation.)

64 Ex: 4-Queens Problem X1 {1,2,3,4} X3 { , , , } X4 { ,2, , } X2
{ , , , } X4 { ,2, , } X2 { , ,3,4} 1 3 2 4 X3 X2 X4 X1 Red = value is assigned to variable

65 Ex: 4-Queens Problem X1 Level: X2 Level: Deleted: FAIL at X2=3.
{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) } X2 Level: FAIL at X2=3. Restore: { (X3,2) (X3,4) (X4,3) }

66 Ex: 4-Queens Problem X X1 {1,2,3,4} X3 { ,2, ,4} X4 { ,2,3, } X2
{ ,2, ,4} X4 { ,2,3, } X2 { , ,3,4} X 1 3 2 4 X3 X2 X4 X1 Red = value is assigned to variable X = value led to failure


69 Ex: 4-Queens Problem X1 Level: X2 Level: Deleted:
{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) } X2 Level: { (X3,4) (X4,2) }

70 Ex: 4-Queens Problem X X1 {1,2,3,4} X3 { ,2, , } X4 { , ,3, } X2
{ ,2, , } X4 { , ,3, } X2 { , ,3,4} X 1 3 2 4 X3 X2 X4 X1 Red = value is assigned to variable X = value led to failure


73 Ex: 4-Queens Problem X1 Level: X2 Level: X3 Level: Deleted:
{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) } X2 Level: { (X3,4) (X4,2) } X3 Level: { (X4,3) }

74 Ex: 4-Queens Problem X X1 {1,2,3,4} X3 { ,2, , } X4 { , , , } X2
{ ,2, , } X4 { , , , } X2 { , ,3,4} X 1 3 2 4 X3 X2 X4 X1 Red = value is assigned to variable X = value led to failure

75 Ex: 4-Queens Problem X1 Level: X2 Level: X3 Level: Deleted:
{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) } X2 Level: { (X3,4) (X4,2) } X3 Level: Fail at X3=2. Restore: { (X4,3) }

76 Ex: 4-Queens Problem X X X1 {1,2,3,4} X3 { ,2, , } X4 { , ,3, } X2
{ ,2, , } X4 { , ,3, } X2 { , ,3,4} X 1 3 2 4 X3 X2 X4 X1 X Red = value is assigned to variable X = value led to failure

77 Ex: 4-Queens Problem X1 Level: X2 Level: Deleted: Fail at X2=4.
{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) } X2 Level: Fail at X2=4. Restore: { (X3,4) (X4,2) }

78 Ex: 4-Queens Problem X X X1 {1,2,3,4} X3 { ,2, ,4} X4 { ,2,3, } X2
{ ,2, ,4} X4 { ,2,3, } X2 { , ,3,4} X X 1 3 2 4 X3 X2 X4 X1 Red = value is assigned to variable X = value led to failure

79 Ex: 4-Queens Problem X1 Level: Fail at X1=1. Restore:
{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) }

80 Ex: 4-Queens Problem X X1 {1,2,3,4} X3 X4 X2
Red = value is assigned to variable X = value led to failure


83 Ex: 4-Queens Problem X1 Level: Deleted:
{ (X2,1) (X2,2) (X2,3) (X3,2) (X3,4) (X4,2) }

84 Ex: 4-Queens Problem X X1 {1,2,3,4} X3 {1, ,3, } X4 {1, ,3,4} X2
{1, ,3, } X4 {1, ,3,4} X2 { , , ,4} X 1 3 2 4 X3 X2 X4 X1 Red = value is assigned to variable X = value led to failure


87 Ex: 4-Queens Problem X1 Level: X2 Level: Deleted:
{ (X2,1) (X2,2) (X2,3) (X3,2) (X3,4) (X4,2) } X2 Level: { (X3,3) (X4,4) }

88 Ex: 4-Queens Problem X X1 {1,2,3,4} X3 {1, , , } X4 {1, ,3, } X2
{1, , , } X4 {1, ,3, } X2 { , , ,4} X 1 3 2 4 X3 X2 X4 X1 Red = value is assigned to variable X = value led to failure


91 Ex: 4-Queens Problem X1 Level: X2 Level: X3 Level: Deleted:
{ (X2,1) (X2,2) (X2,3) (X3,2) (X3,4) (X4,2) } X2 Level: { (X3,3) (X4,4) } X3 Level: { (X4,1) }

92 Ex: 4-Queens Problem X X1 {1,2,3,4} X3 {1, , , } X4 { , ,3, } X2
{1, , , } X4 { , ,3, } X2 { , , ,4} X 1 3 2 4 X3 X2 X4 X1 Red = value is assigned to variable X = value led to failure


94 Constraint propagation
Forward checking propagates information from assigned to unassigned variables But, doesn't provide early detection for all failures: NT and SA cannot both be blue! Constraint propagation repeatedly enforces constraints locally Can detect failure earlier But, takes more computation – is it worth the extra effort?

95 Arc consistency (AC-3) Simplest form of propagation makes each arc consistent. X → Y is consistent iff for every value x of X there is some allowed value y for Y (note: directed!). Consider the state after WA=red, Q=green: SA → NSW is consistent if SA = blue and NSW = red.

96 Arc consistency Simplest form of propagation makes each arc consistent
X → Y is consistent iff for every value x of X there is some allowed value y for Y (note: directed!). Consider the state after WA=red, Q=green: NSW → SA is consistent if NSW = red and SA = blue. NSW = blue and SA = ??? ⇒ NSW = blue can be pruned: no current domain value for SA is consistent.

97 Arc consistency Simplest form of propagation makes each arc consistent
X → Y is consistent iff for every value x of X there is some allowed value y for Y (note: directed!). Enforce arc consistency: the arc can be made consistent by removing blue from NSW. Continue to propagate constraints. Check V → NSW: not consistent for V = red; remove red from V. If X loses a value, neighbors of X need to be rechecked.

98 Arc consistency Simplest form of propagation makes each arc consistent
X → Y is consistent iff for every value x of X there is some allowed value y for Y (note: directed!). Continue to propagate constraints. SA → NT is not consistent, and cannot be made consistent! Failure. Arc consistency detects failure earlier than FC, but requires more computation: is it worth the effort?

99 Ex: Arc Consistency in Sudoku
Variables: 81 slots. Domains = {1,2,3,4,5,6,7,8,9}. Constraints: each row, column and major 3x3 block must be alldifferent: 27 alldiff constraints. "Well posed" if it has a unique solution.

100 Arc consistency checking
Can be run as a preprocessor, or after each assignment As preprocessor before search: Removes obvious inconsistencies After each assignment: Reduces search cost but increases step cost AC is run repeatedly until no inconsistency remains Like Forward Checking, but exhaustive until quiescence Trade-off Requires overhead to do; but usually better than direct search In effect, it can successfully eliminate large (and inconsistent) parts of the state space more effectively than can direct search alone Need a systematic method for arc-checking If X loses a value, neighbors of X need to be rechecked: i.e., incoming arcs can become inconsistent again (outgoing arcs stay consistent).

101 Arc consistency algorithm (AC-3)
function AC-3(csp) returns false if an inconsistency is found, else true; may reduce csp domains
  inputs: csp, a binary CSP with variables {X1, X2, …, Xn}
  local variables: queue, a queue of arcs, initially all the arcs in csp
  /* initial queue must contain both (Xi, Xj) and (Xj, Xi) */
  while queue is not empty do
    (Xi, Xj) ← REMOVE-FIRST(queue)
    if REMOVE-INCONSISTENT-VALUES(Xi, Xj) then
      if size of Di = 0 then return false
      for each Xk in NEIGHBORS[Xi] − {Xj} do
        add (Xk, Xi) to queue if not already there
  return true

function REMOVE-INCONSISTENT-VALUES(Xi, Xj) returns true iff we delete a value from the domain of Xi
  removed ← false
  for each x in DOMAIN[Xi] do
    if no value y in DOMAIN[Xj] allows (x,y) to satisfy the constraints between Xi and Xj
      then delete x from DOMAIN[Xi]; removed ← true
  return removed

(from Mackworth, 1977)
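A runnable Python rendering of AC-3 (ours; the `compatible(xi, x, xj, y)` callback plays the role of the slides' allowed-pairs relation):

```python
from collections import deque

def ac3(domains, neighbors, compatible):
    """Prune domains to arc consistency; False iff some domain empties."""
    # Initial queue holds both directions of every constraint arc.
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, compatible):
            if not domains[xi]:
                return False                 # inconsistency found
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))   # recheck incoming arcs
    return True

def revise(domains, xi, xj, compatible):
    """REMOVE-INCONSISTENT-VALUES: drop values of xi with no support in xj."""
    removed = False
    for x in list(domains[xi]):
        if not any(compatible(xi, x, xj, y) for y in domains[xj]):
            domains[xi].discard(x)
            removed = True
    return removed
```

Note the asymmetry the next slide relies on: when revising (Xi, Xj) removes a value from Xi, only arcs incoming to Xi are re-queued, since outgoing arcs stay consistent.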

102 Complexity of AC-3 A binary CSP has at most n^2 arcs
Each arc can be inserted in the queue at most d times (worst case): for arc (X, Y), there are only d values of X to delete. Consistency of an arc can be checked in O(d^2) time. Complexity is O(n^2 d^3). Although substantially more expensive than forward checking, arc consistency is usually worthwhile.

103 K-consistency Arc consistency does not detect all inconsistencies:
Partial assignment {WA=red, NSW=red} is inconsistent. Stronger forms of propagation can be defined using the notion of k-consistency. A CSP is k-consistent if for any set of k-1 variables and for any consistent assignment to those variables, a consistent value can always be assigned to any kth variable. E.g. 1-consistency = node-consistency E.g. 2-consistency = arc-consistency E.g. 3-consistency = path-consistency Strongly k-consistent: k-consistent for all values {k, k-1, …2, 1}

104 Trade-offs Running stronger consistency checks…
Takes more time
But will reduce the branching factor and detect more inconsistent partial assignments
No "free lunch": in the worst case, n-consistency takes exponential time
"Typically" in practice:
Often helpful to enforce 2-consistency (arc consistency)
Sometimes helpful to enforce 3-consistency
Higher levels may take more time to enforce than they save.

105 Improving backtracking
Before search (reducing the search space):
Arc consistency, path consistency, i-consistency
Variable ordering (fixed)
During search:
Look-ahead schemes:
Value ordering/pruning (choose a least-restricting value)
Variable ordering (choose the most constraining variable)
Constraint propagation (carry the implications of a decision forward)
Look-back schemes:
Backjumping
Constraint recording
Dependency-directed backtracking

106 Further improvements Checking special constraints
Checking the Alldiff(…) constraint
E.g., {WA = red, NSW = red}
Checking the Atmost(…) constraint: bounds propagation for larger value domains
Intelligent backtracking
Standard form is chronological backtracking, i.e., try a different value for the preceding variable
More intelligent: backtrack to the conflict set, the set of previously assigned variables connected to the failed variable by constraints (the variables that caused the failure)
Backjumping moves back to the most recent element of the conflict set
Forward checking can be used to determine the conflict set.
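The simplest Alldiff check can be sketched as a pigeonhole test: if m variables must all take distinct values but the union of their domains contains fewer than m values, the constraint is unsatisfiable. This is a minimal illustration (the function name is my own; practical solvers use stronger matching-based propagation):

```python
def alldiff_inconsistent(domains, alldiff_vars):
    """Pigeonhole test: m variables needing distinct values
    cannot all be assigned from fewer than m shared values."""
    union = set()
    for v in alldiff_vars:
        union |= domains[v]
    return len(union) < len(alldiff_vars)
```

For instance, three regions restricted to the two values {red, green} fail the test immediately, before any search.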

107 Local search for CSPs
Use a complete-state representation:
Initial state = all variables assigned values
Successor states = change 1 (or more) values
For CSPs:
Allow states with unsatisfied constraints (unlike backtracking)
Operators reassign variable values
Hill-climbing with n-queens is an example
Variable selection: randomly select any conflicted variable
Value selection: the min-conflicts heuristic
Select the new value that results in the minimum number of conflicts with the other variables

108 Local search for CSPs
function MIN-CONFLICTS(csp, max_steps) returns a solution or failure
  inputs: csp, a constraint satisfaction problem
          max_steps, the number of steps allowed before giving up
  current ← an initial complete assignment for csp
  for i = 1 to max_steps do
    if current is a solution for csp then return current
    var ← a randomly chosen conflicted variable from VARIABLES[csp]
    value ← the value v for var that minimizes CONFLICTS(var, v, current, csp)
    set var = value in current
  return failure
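The pseudocode above, specialized to n-queens, can be sketched in Python (a minimal illustration; the one-queen-per-row representation and function names are my assumptions, not from the slides):

```python
import random

def conflicts(cols, r, c):
    """Queens attacking a queen at (row r, column c); one queen per row."""
    return sum(1 for r2, c2 in enumerate(cols)
               if r2 != r and (c2 == c or abs(c2 - c) == abs(r2 - r)))

def min_conflicts_queens(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]  # initial complete assignment
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                          # solution: no conflicts left
        r = rng.choice(conflicted)               # random conflicted variable
        # min-conflicts value selection (ties broken toward the lowest column)
        cols[r] = min(range(n), key=lambda c: conflicts(cols, r, c))
    return None
```

Like any local search, a single run can stall on a plateau within `max_steps`; restarting from a fresh random assignment usually succeeds quickly.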

109 Solving 4-queens with local search
Note: here I check all neighbors and pick the best; in practice, typically pick one at random
(figure: board with 5 conflicts; candidate moves annotated with their resulting conflict counts)

110 Solving 4-queens with local search
Note: here I check all neighbors and pick the best; in practice, typically pick one at random
(figure: successive boards with 5, 2, and 0 conflicts; candidate moves annotated with their resulting conflict counts)

111 Local optima
Local search may get stuck at local optima:
Locations where no neighboring value is better
Success depends on the quality of the initialization and its basin of attraction
Can use multiple initializations to improve:
Re-initialize randomly ("repeated" local search)
Re-initialize by perturbing the last optimum ("iterated" local search)
Can also add sideways and random moves (e.g., WalkSAT; R&N Fig. 7.18)
(figure: objective landscape showing the current state, global maximum, local maximum, and a plateau of local optima)

112 Local optimum example
Solving 4-queens with local search
"Plateau" example: no single move can decrease the number of conflicts
(figure: board with 1 conflict; every candidate move leaves at least 1 conflict)

113 Comparison of CSP algorithms
Evaluate methods on a number of problems
Median number of consistency checks over 5 runs to solve each problem
Parentheses → no solution found
USA: 4-coloring
n-queens: n = 2 to 50
Zebra: see Exercise 6.7 (3rd ed.); Exercise 5.13 (2nd ed.)

114 Advantages of local search
Local search can be particularly useful in an online setting
Airline schedule example:
E.g., mechanical problems require that 1 plane be taken out of service
Can locally search for another "close" solution in state-space
Much better (and faster) in practice than finding an entirely new schedule
The runtime of min-conflicts is roughly independent of problem size:
Can solve the million-queens problem in roughly 50 steps
Why? n-queens is easy for local search because of the relatively high density of solutions in state-space

115 Hardness of CSPs
x1 … xn discrete, domain size d: O(d^n) configurations
"SAT": Boolean satisfiability (d = 2)
One of the first known NP-complete problems
"3-SAT": conjunctive normal form (CNF), at most 3 variables in each clause
Each CNF clause rules out one configuration of its variables
Still NP-complete
How hard are "typical" problems?

116 Hardness of random CSPs
Random 3-SAT problems with n variables and p clauses in CNF:
Each clause chooses 3 variables and their signs uniformly at random
What is the probability that there is no solution to the CSP?
Phase transition at (p/n) ≈ 4.25
"Hard" instances fall in a very narrow regime around this point!
(figures: Pr[unsat] vs. the ratio p/n, switching from satisfiable to unsatisfiable near the threshold; average minisat runtime vs. p/n, peaking at the threshold, with easy-sat and easy-unsat regions on either side)
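This experiment can be sketched for very small n by brute-force enumeration (a rough illustration with hypothetical names; real phase-transition studies use a SAT solver such as minisat rather than enumerating all 2^n assignments):

```python
import itertools
import random

def random_3sat(n, p, rng):
    """p clauses; each picks 3 distinct variables and random signs.
    A literal (v, True) is satisfied when variable v is True."""
    return [tuple((v, rng.choice((True, False)))
                  for v in rng.sample(range(n), 3)) for _ in range(p)]

def satisfiable(n, clauses):
    """Brute force: try every assignment (only feasible for small n)."""
    for bits in itertools.product((True, False), repeat=n):
        if all(any(bits[v] == sign for v, sign in cl) for cl in clauses):
            return True
    return False

def pr_unsat(n, ratio, trials=50, seed=0):
    """Estimate Pr[no solution] for random 3-SAT at a given clause ratio p/n."""
    rng = random.Random(seed)
    p = round(ratio * n)
    return sum(not satisfiable(n, random_3sat(n, p, rng))
               for _ in range(trials)) / trials
```

Well below the threshold almost all instances are satisfiable; well above it almost none are, matching the curve described above.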

117 Hardness of random CSPs
(same content as the previous slide, with the minisat runtime plot shown on a log scale)

118 Ex: Sudoku
Backtracking search + forward checking
R = [number of initially filled cells] / [total number of cells]
Success rate = P(random puzzle is solvable)
[total number of cells] = 9 × 9 = 81
[number of initially filled cells] = variable
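A compact Sudoku solver in this spirit can be sketched as follows. This is my own illustration, not the code behind the slide's experiment: it propagates singleton domains to a fixpoint (a slightly stronger variant of forward checking) and uses minimum-remaining-values variable ordering:

```python
def peers(cell):
    """Cells sharing a row, column, or 3x3 block with (r, c)."""
    r, c = cell
    same = {(r, j) for j in range(9)} | {(i, c) for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    same |= {(i, j) for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return same - {cell}

def propagate(domains):
    """Prune each singleton cell's digit from its peers, to a fixpoint.
    Returns False if two peer cells are forced to the same digit."""
    changed = True
    while changed:
        changed = False
        for cell, d in domains.items():
            if len(d) == 1:
                v = next(iter(d))
                for p in peers(cell):
                    if v in domains[p]:
                        if len(domains[p]) == 1:
                            return False
                        domains[p].discard(v)
                        changed = True
    return True

def solve(domains):
    """Backtracking over domains: dict (row, col) -> set of candidate digits."""
    if not propagate(domains):
        return None
    open_cells = [c for c in domains if len(domains[c]) > 1]
    if not open_cells:
        return domains                 # every cell forced: solved
    cell = min(open_cells, key=lambda c: len(domains[c]))  # MRV ordering
    for v in sorted(domains[cell]):
        trial = {c: set(d) for c, d in domains.items()}
        trial[cell] = {v}
        result = solve(trial)
        if result is not None:
            return result
    return None
```

A puzzle with R filled cells is encoded by giving those cells singleton domains and all others the full domain {1, …, 9}.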

119 Graph structure and complexity
(figure: Australia constraint graph; Tasmania T is disconnected from the mainland)
Disconnected subproblems:
The configuration of one subproblem cannot affect the other: independent!
Exploit: solve each subproblem independently
Suppose each subproblem has c variables out of n
Worst-case cost: O((n/c) · d^c)
Compare to O(d^n), exponential in n
Ex: n = 80, c = 20, d = 2 ⇒
2^80 ≈ 4 billion years at 10 million nodes per second
4 · 2^20 ≈ 0.4 seconds at 10 million nodes per second
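Finding the independent subproblems amounts to computing the connected components of the constraint graph, which can be sketched with a breadth-first search (the helper name and the dict-of-neighbor-sets representation are assumptions):

```python
from collections import deque

def components(neighbors):
    """Connected components of a constraint graph (var -> set of neighbors)."""
    seen, comps = set(), []
    for start in neighbors:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp.add(v)
            for u in neighbors[v]:
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
        comps.append(comp)
    return comps
```

On the Australia map, this splits off Tasmania as its own trivial subproblem, leaving the six mainland regions as the other component.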

120 Tree-structured CSPs
Theorem: if the constraint graph has no cycles, the CSP can be solved in O(n d²) time
Compare to a general CSP: worst case O(d^n)
Method: directed arc consistency (= dynamic programming)
Select a root (e.g., A) and do arc consistency from the leaves to the root:
D → F: remove values for D not consistent with any value for F, etc.
D → E, B → D, … etc.
Then select a value for A
There must be a value for B compatible with it; select it
There must be values for C and for D compatible with B's; select them
There must be values for E and F compatible with D's; select them
You've found a consistent solution!
(figure: tree rooted at A, with B below A, C and D below B, and E and F below D)
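The two-pass method above can be sketched in Python (a minimal illustration under assumed data structures: `children` maps each variable to its children in the rooted tree, and `allowed(parent_val, child_val)` is the binary constraint on each edge):

```python
def solve_tree_csp(root, children, domains, allowed):
    """Solve a tree-structured CSP by directed arc consistency.
    domains maps each variable to a set of values; pruned in place."""
    # Topological order: parents before children.
    order = [root]
    for v in order:
        order.extend(children.get(v, []))
    # Pass 1 (leaves -> root): make each parent arc-consistent w.r.t. each child.
    for v in reversed(order):
        for c in children.get(v, []):
            domains[v] = {x for x in domains[v]
                          if any(allowed(x, y) for y in domains[c])}
            if not domains[v]:
                return None        # some domain wiped out: no solution
    # Pass 2 (root -> leaves): pick any root value, then a compatible
    # value for each child; pass 1 guarantees one always exists.
    assignment = {root: next(iter(domains[root]))}
    for v in order:
        for c in children.get(v, []):
            assignment[c] = next(y for y in domains[c]
                                 if allowed(assignment[v], y))
    return assignment
```

Both passes touch each edge once and compare at most d² value pairs per edge, giving the O(n d²) bound from the slide.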

121 Exploiting structure
How can we use the efficiency of trees?
Cutset conditioning: exploit easy-to-solve problems during search
Assign SA = red ⇒ the remaining constraint graph is a tree
Now the WA–SA constraint is "unary": WA ≠ red (and likewise for SA's other neighbors)
Tree decomposition: convert non-tree problems into (harder) trees
Change "variables" to the colors of pairs of areas: (WA,SA), (NT,SA), (Q,SA), (NSW,SA), (V,SA), and T
A "binary" constraint such as (WA,SA) – (NT,SA) requires all 3 regions to be colored consistently

122 Summary
CSPs are a special kind of problem: states are defined by the values of a fixed set of variables; the goal test is defined by constraints on variable values
Backtracking = depth-first search, one variable assigned per node
Heuristics: variable-ordering and value-selection heuristics help a lot
Constraint propagation does additional work to constrain values and detect inconsistencies
Works effectively when combined with heuristics
Iterative min-conflicts is often effective in practice
The graph structure of a CSP determines problem complexity; e.g., tree-structured CSPs can be solved in linear time

