Constraint Satisfaction Problems (Chapter 6) Oliver Schulte, Summer 2011


Midterm Announcements Please bring ID to the exam. You can bring a cheat sheet. Chapters 1-6 covered, see Lecture Schedule. CSP definition is covered, also arc consistency. Not covered: CSP search methods (topic today). No calculators, smartphones, textbook, notes. Be on time. Read the instructions ahead of time – posted on the web.

Talk Announcement CS SEMINAR June 23rd, 2:30 p.m., TASC1 9204W Generalized Planning: Theory and Practice Hector J. Levesque ABSTRACT: While most of the research in automated planning within AI has focussed on methods for finding sequential (or straight-line) plans, there is growing interest in a more general account, where the plans may need to contain branches and loops. In this talk, I will present some recent work in this area, including some new theoretical results, as well as a description of a new planning system called FSAPLANNER and some of the generalized planning problems it can solve.

Outline CSP examples Backtracking search for CSPs Problem structure and problem decomposition Local search for CSPs Fields Institute Summer Workshops

Environment Type Discussed In this Lecture Static environment; fully observable; deterministic; discrete. Constraint satisfaction is vector search over a factored state, in contrast to planning and heuristic search (sequential) and to continuous function optimization (continuous); control and cybernetics are not discussed.

Agent Architecture Discussed In this Lecture Graph-based search: the state is a black box with no internal structure (atomic). Factored representation: the state is a list or vector of facts; in a CSP, a fact has the form "Variable = value". A model is a structured representation of the world.

Constraint satisfaction problems (CSPs) CSP: the state is defined by variables Xi with values from domain Di; the goal test is a set of constraints specifying allowable combinations of values for subsets of variables. This allows useful general-purpose algorithms with more power than standard search algorithms; their power is close to simulating Turing machines.

Example: Map-Coloring
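The map figure is not reproduced in this transcript. As a minimal sketch, the standard Australia map-coloring CSP from the following slides can be encoded in Python like this (the data layout and function names are my own, not from the slides):

```python
# Adjacency of Australian regions: adjacent regions must have different colors.
NEIGHBORS = {
    "WA": ["NT", "SA"],
    "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"],
    "Q":  ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V":  ["SA", "NSW"],
    "T":  [],                      # Tasmania has no constraints
}
DOMAIN = ["red", "green", "blue"]

def consistent(assignment):
    """A complete assignment is a solution iff no two adjacent regions share a color."""
    return all(assignment[r] != assignment[n]
               for r in assignment for n in NEIGHBORS[r])

solution = {"WA": "red", "NT": "green", "SA": "blue",
            "Q": "red", "NSW": "green", "V": "red", "T": "red"}
print(consistent(solution))  # True
```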

CSPs (continued) An assignment is complete when every variable is assigned a value. A solution to a CSP is a complete assignment that satisfies all constraints. Some CSPs also require a solution that maximizes an objective function. Constraints with continuous variables are common: linear constraints lead to linear programming. Example applications: airline schedules, final exam scheduling, cryptography, Sudoku, crosswords.

Example: Map-Coloring contd.

Varieties of constraints Unary constraints involve a single variable, e.g., SA ≠ green. Binary constraints involve pairs of variables, e.g., SA ≠ WA. Higher-order constraints involve 3 or more variables. Preferences (soft constraints), e.g., red is better than green, are often representable by a cost for each variable assignment, giving constrained optimization problems.

Constraint graph Binary CSP: each constraint relates at most two variables Constraint graph: nodes are variables, arcs show constraints General-purpose CSP algorithms use the graph structure to speed up search. E.g., Tasmania is an independent subproblem!
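The observation that Tasmania is an independent subproblem can be checked mechanically: independent subproblems are exactly the connected components of the constraint graph. A hypothetical sketch (names and data layout are mine):

```python
def connected_components(neighbors):
    """Group variables into independent subproblems: the connected
    components of the constraint graph can be solved separately."""
    seen, components = set(), []
    for start in neighbors:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                 # depth-first traversal
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(neighbors[v])
        seen |= comp
        components.append(comp)
    return components

# Australia map: Tasmania has no constraint edges, so it is its own component.
graph = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
         "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
         "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
comps = connected_components(graph)
print(len(comps))  # 2
```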

Graphs and Factored Representations (UBC AISpace CSP demo) Graphs over variables (concepts, facts) capture local dependencies between those variables; absence of edges means independence. AI systems try to reason locally as much as possible. A potential solution to the relevance problem: how does the brain retrieve the relevant facts in a given situation, out of the millions of facts that it knows? Answer: direct links represent direct relevance. Computation in general is a local process operating on a factored state. (What is the state of a program run?)

Problem Consider the constraint graph on the right, with variables a, b, c, d, e. The domain of every variable is {1,2,3,4}. There are 2 unary constraints: variable "a" cannot take values 3 and 4, and variable "b" cannot take value 4. There are 8 binary constraints stating that variables connected by an edge cannot have the same value.

Example: 4-Queens Problem X1 ∈ {1,2,3,4}, X2 ∈ {1,2,3,4}, X3 ∈ {1,2,3,4}, X4 ∈ {1,2,3,4}

Arc consistency An arc X → Y is consistent if for every value x of X there is some value y of Y consistent with x (note that this is a directed property). Consider the state of search after WA and Q are assigned: SA → NSW is consistent if SA=blue and NSW=red.

Arc consistency X → Y is consistent if for every value x of X there is some value y consistent with x. NSW → SA is consistent for NSW=red (take SA=blue), but for NSW=blue there is no consistent value of SA.

Arc consistency We can enforce arc consistency: the arc can be made consistent by removing blue from NSW. Continue to propagate constraints: check V → NSW; it is not consistent for V = red, so remove red from V.

Arc consistency Continue to propagate constraints…. SA  NT is not consistent  and cannot be made consistent

Standard search formulation (incremental) Let’s formulate a state graph problem, then take advantage of special structure later. States are defined by the values assigned so far  Initial state: the empty assignment, { }  Successor function: assign a value to an unassigned variable that does not conflict with current assignment. ⇒ fail if no legal assignments (not fixable!)  Goal test: the current assignment is complete This is the same for all CSPs!

Standard search formulation (incremental) Can we use breadth-first search? Branching factor at the top level: n·d, since any of the d values can be assigned to any of the n variables. Next level: (n-1)·d. We generate n!·d^n leaves even though there are only d^n complete assignments. Why? Commutativity: the order in which any given set of actions is applied has no effect on the outcome.

Backtracking search Variable assignments are commutative, i.e., [WA=red then NT=green] is the same as [NT=green then WA=red]. We only need to consider assignments to a single variable at each node: b = d and there are d^n leaves. Depth-first search for CSPs with single-variable assignments is called backtracking search. Is this uninformed or informed? Backtracking search is the basic uninformed algorithm for CSPs.
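The algorithm just described can be sketched in a few lines of Python (a minimal sketch; the slides give no code, and the names and the three-region example are my own):

```python
def backtrack(assignment, variables, domains, consistent):
    """Depth-first search that assigns one variable per node and undoes
    the assignment on failure: backtracking search."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)  # fixed order
    for val in domains[var]:
        if consistent(var, val, assignment):
            assignment[var] = val
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]  # undo and try the next value
    return None

# Three mutually adjacent regions need three different colors.
adj = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
ok = lambda var, val, a: all(a.get(n) != val for n in adj[var])
result = backtrack({}, list(adj), {v: ["red", "green", "blue"] for v in adj}, ok)
print(result)  # {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```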

Backtracking example (four successive slides show the search tree being extended one assignment at a time)

Improving backtracking efficiency General-purpose methods can give huge gains in speed: Which variable should be assigned next? In what order should its values be tried? Can we detect inevitable failure early? Constraint learning: can we keep track of what search has learned? Can we take advantage of problem structure?

Most constrained variable Most constrained variable: choose the variable with the fewest legal values, a.k.a. the minimum remaining values (MRV) heuristic. This only picks a variable, not a value. Demo for MRV

Most constraining variable How do we choose between variables with the fewest legal values? Tie-breaker among most constrained variables: the degree heuristic chooses the variable with the most constraints on remaining variables.

Least constraining value Given a variable, choose the least constraining value: the one that rules out the fewest values in the remaining variables. Intuition: choose the "most likely" solution. Combining these heuristics makes 1000-queens feasible.
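The two ordering heuristics above can be sketched as small Python helpers (a sketch under my own assumptions about the data layout: `legal_values` maps each variable to its remaining values, `neighbors` gives the constraint graph):

```python
def select_variable_mrv(variables, assignment, legal_values):
    """Most constrained variable (MRV): pick the unassigned variable
    with the fewest remaining legal values."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: len(legal_values[v]))

def order_values_lcv(var, legal_values, neighbors):
    """Least constraining value: try first the value that rules out
    the fewest values in the domains of var's neighbors."""
    def ruled_out(val):
        return sum(val in legal_values[n] for n in neighbors[var])
    return sorted(legal_values[var], key=ruled_out)

vals = {"A": [1, 2], "B": [1], "C": [1, 2, 3]}
print(select_variable_mrv(["A", "B", "C"], {}, vals))  # B
print(order_values_lcv("A", vals, {"A": ["B", "C"]}))  # [2, 1]
```

Value 1 appears in both B's and C's domains while value 2 appears only in C's, so LCV tries 2 first.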

Forward checking Idea: keep track of the remaining legal values for unassigned variables, and terminate search when any variable has no legal values. (Four successive slides step through the map-coloring example.)
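The forward-checking step can be sketched as a single pruning function (a hedged sketch, not the slides' code; it assumes inequality constraints between neighbors, as in map coloring):

```python
def forward_check(var, value, domains, neighbors, assignment):
    """After assigning var = value, prune value from the domains of var's
    unassigned neighbors; return None (early failure) if a domain empties."""
    pruned = {v: set(d) for v, d in domains.items()}  # copy, don't mutate
    for n in neighbors[var]:
        if n not in assignment:
            pruned[n].discard(value)
            if not pruned[n]:
                return None  # some variable has no legal value left
    return pruned

# Assigning SA=red removes red from every unassigned neighbor of SA.
adj = {"SA": ["WA", "NT"], "WA": ["SA"], "NT": ["SA"]}
doms = {v: {"red", "green", "blue"} for v in adj}
after = forward_check("SA", "red", doms, adj, {"SA": "red"})
```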

Constraint propagation NT and SA cannot both be blue, but forward checking does not detect this. Constraint propagation repeatedly enforces constraints locally, using arc consistency. Inference complements search, but it has to be faster than searching to pay off.

Example: 4-Queens Problem (propagation trace over several slides; Xi is the row of the queen in column i) Start: X1={1,2,3,4}, X2={1,2,3,4}, X3={1,2,3,4}, X4={1,2,3,4}. Assign X1=1 and propagate: X2={3,4}, X3={2,4}, X4={2,3}. Try X2=3: propagation empties X3 (X3={}, X4={3}), so backtrack. Try X2=4: propagation leaves X3={2}, X4={3}; but assigning X3=2 then empties X4, so X1=1 admits no solution and search backtracks to X1.

Arc Consistency vs. Search Arc consistency speeds up search, but it is not by itself a complete search procedure. Example: Simple Problem 2 in AIspace; arc consistency does not detect that the problem is insoluble.

Constraint propagation Techniques like constraint propagation (CP) and forward checking (FC) are in effect eliminating parts of the search space. Inference complements search (= simulation). Constraint propagation goes further than forward checking by repeatedly enforcing constraints locally. Arc consistency (AC) is a systematic procedure for constraint propagation (Mackworth 1977, UBC).

Arc consistency checking Can be run as a preprocessor (before search starts) or after each assignment. AC must be run repeatedly until no inconsistency remains: if X loses a value, the neighbors of X need to be rechecked, so a systematic method for arc-checking is needed. Trade-off: it requires some overhead, but it is generally more effective than direct search, because it can eliminate large (inconsistent) parts of the state space.
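The systematic arc-checking method referred to here is Mackworth's AC-3. A minimal Python sketch (my own names and data layout; it assumes domains are mutable sets and `constraint` is a binary predicate, here inequality):

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """AC-3: revise arcs until no domain changes. Returns False as soon
    as some domain becomes empty (no solution possible)."""
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        revised = False
        for vx in list(domains[x]):
            # remove vx if it has no supporting value in y's domain
            if not any(constraint(vx, vy) for vy in domains[y]):
                domains[x].remove(vx)
                revised = True
        if revised:
            if not domains[x]:
                return False
            # if x lost a value, arcs into x must be rechecked
            queue.extend((z, x) for z in neighbors[x] if z != y)
    return True

doms = {"WA": {"red"}, "NT": {"red", "green"}}
adj = {"WA": ["NT"], "NT": ["WA"]}
print(ac3(doms, adj, lambda a, b: a != b), doms["NT"])  # True {'green'}
```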


Arc-consistency as message-passing This is a propagation algorithm: it is like sending messages to neighbors on the graph. How do we schedule these messages? Every time a domain changes, all incoming messages need to be re-sent. Repeat until convergence, i.e., until no message will change any domain. Since we only remove values from domains when they can never be part of a solution, an empty domain means no solution is possible at all, so we back out of that branch.

Example The 5-queens problem in AIspace. Apply the heuristics: use the variable with minimum remaining values; break ties with the variable of maximum degree; use the least constraining value. Use arc consistency after choosing each value.

K-consistency: bounded local checks Arc consistency does not detect all inconsistencies: the partial assignment {WA=red, NSW=red} is inconsistent. Stronger forms of propagation can be defined using the notion of k-consistency. A CSP is k-consistent if, for any set of k-1 variables and any consistent assignment to those variables, a consistent value can always be assigned to any k-th variable. E.g., 1-consistency = node consistency, 2-consistency = arc consistency, 3-consistency = path consistency. Strongly k-consistent: k-consistent for all values k, k-1, ..., 2, 1.

Trade-offs Running stronger consistency checks takes more time, but it reduces the branching factor and detects more inconsistent partial assignments. There is no "free lunch": in the worst case, n-consistency takes exponential time.

Back-tracking or back-jumping? Consider the partial assignment {Q=red, NSW=green, V=blue, T=red}. When SA has no consistent value left, chronological backtracking reconsiders the most recent variable, T, but T cannot fix the conflict; back-jumping returns directly to a variable that actually constrains SA.

Local search for CSPs Use a complete-state representation: the initial state assigns values to all variables, and successor states change 1 (or more) values. For CSPs: allow states with unsatisfied constraints (unlike backtracking), and let operators reassign variable values; hill-climbing with n-queens is an example. Variable selection: randomly select any conflicted variable (local stochastic search demo). Value selection: the min-conflicts heuristic selects the new value that results in the minimum number of conflicts with the other variables.

Min-conflicts example 1 Use of the min-conflicts heuristic in hill-climbing: successive states have h=5, h=3, h=1.

Min-conflicts example 2 A two-step solution for an 8-queens problem using the min-conflicts heuristic. At each stage a queen is chosen for reassignment in its column; the algorithm moves the queen to the min-conflicts square, breaking ties randomly.

Local search for CSP
function MIN-CONFLICTS(csp, max_steps) returns a solution or failure
  inputs: csp, a constraint satisfaction problem
          max_steps, the number of steps allowed before giving up
  current ← an initial complete assignment for csp
  for i = 1 to max_steps do
    if current is a solution for csp then return current
    var ← a randomly chosen conflicted variable from VARIABLES[csp]
    value ← the value v for var that minimizes CONFLICTS(var, v, current, csp)
    set var = value in current
  return failure

Advantages of local search Local search can be particularly useful in an online setting. Airline schedule example: mechanical problems require that a plane be taken out of service; we can locally search for another "close" solution in state space, which is much better (and faster) in practice than finding an entirely new schedule. The runtime of min-conflicts is roughly independent of problem size: it can solve the million-queens problem in roughly 50 steps. Why? n-queens is easy for local search because of the relatively high density of solutions in state space.

Graph structure and problem complexity Divide-and-conquer: solve disconnected subproblems separately. Suppose each subproblem has c variables out of a total of n. The worst-case solution cost is O((n/c)·d^c), i.e., linear in n, instead of O(d^n), exponential in n. E.g., for n=80, c=20, d=2: 2^80 nodes is about 4 billion years at 10 million nodes/sec, while 4·2^20 nodes is about 0.4 seconds at 10 million nodes/sec.
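The slide's numbers can be checked directly (a quick sketch; the node rate is an assumption chosen so that the 0.4-second figure comes out):

```python
# Worst-case node counts: n = 80 variables, domain size d = 2,
# decomposed into n/c = 4 independent subproblems of c = 20 variables each.
n, d, c = 80, 2, 20
whole = d ** n                  # the CSP solved as one problem
decomposed = (n // c) * d ** c  # the four subproblems solved separately

rate = 10_000_000               # nodes per second (assumed)
print(decomposed / rate)                      # well under a second
print(whole / rate / (3600 * 24 * 365))       # billions of years
```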

Tree-structured CSPs Theorem: if a constraint graph has no loops, then the CSP can be solved in O(n·d^2) time, linear in the number of variables! Compare this with the general CSP, where the worst case is O(d^n).

Algorithm for Solving Tree-structured CSPs Choose some variable as root and order the variables from root to leaves such that every node's parent precedes it in the ordering; label the variables X1 to Xn. Every variable now has 1 parent. Backward pass: for j from n down to 2, apply arc consistency to the arc (Parent(Xj), Xj), removing values from Parent(Xj) if needed to make the graph directed-arc-consistent. Forward pass: for j from 1 to n, assign Xj consistently with Parent(Xj).

Tree CSP Example (figure) The backward pass (constraint propagation) prunes each parent's domain against its children's domains; the forward pass (assignment) then picks a consistent value for each variable, parent first.

Tree CSP complexity Backward pass: n arc checks, each of complexity at worst d^2. Forward pass: n variable assignments, O(n·d). The overall complexity is O(n·d^2). The algorithm works because if the backward pass succeeds, then every variable by definition has a legal assignment in the forward pass.
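The two passes can be sketched in Python for a chain-structured example (a hedged sketch under my own naming: `order` is root-first, `parent[root]` is None, and `allowed(pv, cv)` says whether a parent value is compatible with a child value):

```python
def solve_tree_csp(parent, order, domains, allowed):
    """Solve a tree-structured CSP in O(n d^2): a backward arc-consistency
    pass from the leaves to the root, then a forward assignment pass.
    Mutates `domains`; returns an assignment or None."""
    # Backward pass: prune each parent's domain against its child.
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = [pv for pv in domains[p]
                      if any(allowed(pv, cv) for cv in domains[child])]
        if not domains[p]:
            return None  # some variable lost all values: no solution
    # Forward pass: every surviving parent value has support in each child.
    assignment = {order[0]: domains[order[0]][0]}
    for child in order[1:]:
        assignment[child] = next(cv for cv in domains[child]
                                 if allowed(assignment[parent[child]], cv))
    return assignment

# Chain A - B - C with inequality constraints.
parent = {"A": None, "B": "A", "C": "B"}
doms = {"A": ["red", "blue"], "B": ["red", "green"], "C": ["green"]}
result = solve_tree_csp(parent, ["A", "B", "C"], doms, lambda p, c: p != c)
print(result)  # {'A': 'blue', 'B': 'red', 'C': 'green'}
```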

What about non-tree CSPs? The general idea is to convert the graph to a tree. There are 2 general approaches: 1. Assign values to specific variables (the cycle cutset method), which tries to exploit context-specific independence. 2. Construct a tree decomposition of the graph, in which connected subproblems (subgraphs) become nodes in a tree structure.

Cycle-cutset conditioning Choose a subset S of variables such that the graph without S is a tree (S is the "cycle cutset"). For each possible consistent assignment to S: remove from the remaining variables any values that are inconsistent with S; use the tree-structured CSP algorithm to solve the remaining tree structure; if it has a solution, return it along with S; if not, continue with the next assignment for S.

Finding the optimal cutset If the cutset size c is small, this technique works very well. However, finding the smallest cycle cutset is NP-hard, although there are good approximation algorithms.

Tree Decompositions (figure: the map-coloring constraint graph split into overlapping subproblems, each listing its candidate colorings, e.g., red, green, blue)

Rules for a Tree Decomposition Every variable appears in at least one of the subproblems. If two variables are connected in the original problem, they must appear together (with the constraint) in at least one subproblem. If a variable appears in two subproblems, it must appear in each node on the path between them.

Tree Decomposition Algorithm View each subproblem as a "super-variable" whose domain is the set of solutions for the subproblem, obtained by running a CSP solver on each subproblem; e.g., there are 6 solutions for 3 fully connected variables in the map problem. (The maximum subproblem size minus 1 is the treewidth of the constraint graph.) Now use the tree CSP algorithm to solve the constraints connecting the subproblems: declare a subproblem the root node, create the tree, and run the backward and forward passes. This is an example of a "divide and conquer" strategy.

Tree Decomposition Every graph has a tree decomposition, not just constraint graphs. A tree decomposition shows the optimal divide-and-conquer strategy: for CSPs, optimization, search, and inference. Typical result: if the treewidth of a problem graph is constant, then the search/optimization/inference problem is solvable by divide and conquer. Treewidth in General

Summary CSPs are a special kind of problem: states are defined by the values of a fixed set of variables, and the goal test is defined by constraints on variable values (a factored representation). CSPs represent many practical problems (examples from Microsoft Research and the University of Essex). Backtracking = depth-first search with one variable assigned per node. Variable-ordering and value-selection heuristics help significantly. Constraint propagation does additional work to constrain values and detect inconsistencies, and it works effectively when combined with heuristics. Iterative min-conflicts is often effective in practice. The graph structure of a CSP determines problem complexity; e.g., tree-structured CSPs can be solved in linear time.