MBD and CSP Meir Kalech Partially based on slides of Jia You and Brian Williams.


Outline
Last lecture: 1. Autonomous systems  2. Model-based programming  3. Livingstone
Today's lecture: 1. Constraint satisfaction problems (CSP)  2. Optimal CSP  3. Conflict-directed A*  4. Expand best child

Intro Example: 8-Queens
Generate-and-test: 8^8 combinations.

Intro Example: 8-Queens

Constraint Satisfaction Problem
- A set of variables {X1, X2, …, Xn}
- Each variable Xi has a domain Di of possible values; usually Di is discrete and finite
- A set of constraints {C1, C2, …, Cp}; each constraint Ck involves a subset of the variables and specifies the allowable combinations of values for those variables
- Goal: assign a value to every variable such that all constraints are satisfied

Example: 8-Queens Problem
- 8 variables Xi, i = 1 to 8
- Domain for each variable: {1, 2, …, 8}
- Constraints are of the form: Xi = k ⇒ Xj ≠ k, for all j = 1 to 8, j ≠ i
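The constraint form above can be sketched directly in Python (a minimal illustration; note this slide's formulation only forbids equal values, so a full 8-queens model would also add diagonal constraints):

```python
def all_different(assignment):
    """Check the slide's constraints: X_i = k implies X_j != k for all j != i.
    `assignment` maps variable index i to its value X_i (may be partial)."""
    values = list(assignment.values())
    return len(values) == len(set(values))

print(all_different({1: 1, 2: 3, 3: 5}))  # True: all values distinct
print(all_different({1: 1, 2: 1}))        # False: X1 = 1 forbids X2 = 1
```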

Example: Map Coloring
- 7 variables: {WA, NT, SA, Q, NSW, V, T}
- Each variable has the same domain: {red, green, blue}
- No two adjacent variables have the same value: WA ≠ NT, WA ≠ SA, NT ≠ SA, NT ≠ Q, SA ≠ Q, SA ≠ NSW, SA ≠ V, Q ≠ NSW, NSW ≠ V
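A small encoding of this map-coloring CSP as a sanity check (the helper name and the sample coloring are illustrative; the variables and constraints come from the slide):

```python
# Adjacency constraints listed on the slide.
NEIGHBORS = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q"),
             ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]
DOMAIN = {"red", "green", "blue"}

def valid_coloring(assignment):
    """True iff every value is in-domain and all adjacency constraints hold."""
    return (all(assignment[v] in DOMAIN for v in assignment)
            and all(assignment[a] != assignment[b] for a, b in NEIGHBORS))

coloring = {"WA": "red", "NT": "green", "SA": "blue", "Q": "red",
            "NSW": "green", "V": "red", "T": "green"}
print(valid_coloring(coloring))  # True
```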

Example: Task Scheduling
- T1 must be done during T3
- T2 must be achieved before T1 starts
- T2 must overlap with T3
- T4 must start after T1 is complete

Constraint Graph (binary constraints)
Two variables are adjacent, or neighbors, if they are connected by an edge or an arc.

CSP as a Search Problem
- Initial state: the empty assignment
- Successor function: assign a value to any unassigned variable such that it does not conflict with the currently assigned variables
- Goal test: the assignment is complete
- Path cost: irrelevant

Backtracking example

Backtracking Algorithm

CSP-BACKTRACKING(PartialAssignment a)
  If a is complete then return a
  X ← select an unassigned variable
  D ← select an ordering for the domain of X
  For each value v in D do
    If v is consistent with a then
      Add (X = v) to a
      result ← CSP-BACKTRACKING(a)
      If result ≠ failure then return result
      Remove (X = v) from a
  Return failure

Start with CSP-BACKTRACKING({})
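The pseudocode translates almost line-for-line into Python. This sketch fixes both selections to defaults (the slide leaves variable and value ordering open) and solves the earlier map-coloring example:

```python
def csp_backtracking(a, variables, domains, consistent):
    """CSP-BACKTRACKING from the slide; returns a complete assignment or None."""
    if len(a) == len(variables):
        return a
    x = next(v for v in variables if v not in a)   # select an unassigned variable
    for v in domains[x]:                           # default domain ordering
        if consistent(x, v, a):
            a[x] = v                               # add (X = v) to a
            result = csp_backtracking(a, variables, domains, consistent)
            if result is not None:
                return result
            del a[x]                               # remove (X = v) from a
    return None                                    # failure

# Map-coloring instance from the earlier slide.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
adjacent = {("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q"),
            ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")}

def consistent(x, v, a):
    return all(a.get(y) != v for y in variables
               if (x, y) in adjacent or (y, x) in adjacent)

solution = csp_backtracking({}, variables, domains, consistent)
print(solution is not None)  # True: a valid coloring was found
```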

Improving backtracking efficiency Which variable should be assigned next? In what order should its values be tried? Can we detect obvious failure early?

Outline
Last lecture: 1. Autonomous systems  2. Model-based programming  3. Livingstone
Today's lecture: 1. Constraint satisfaction problems (CSP)  2. Optimal CSP  3. Conflict-directed A*  4. Expand best child

What is an Optimal CSP (OCSP)?
- A set of decision variables y
- A utility function g on y
- A set of constraints C that y must satisfy
The solution is an assignment to y that maximizes g and satisfies C.

What is an OCSP formally?
An OCSP combines a CSP with decision variables and a cost function:
- CSP = ⟨x, Dx, Cx⟩, where x is the set of variables, Dx the domains over the variables, and Cx the set of constraints over the variables
- y ⊆ x: the set of decision variables
- A cost function g over assignments to y
We call the elements of Dy decision states. A solution y* to an OCSP is a minimum-cost decision state that is consistent with the CSP.
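For intuition only, the definition can be realized by brute force (all names below are made up; real solvers such as conflict-directed A*, introduced next, avoid this exhaustive enumeration):

```python
from itertools import product

def solve_ocsp(domains, cost, consistent):
    """Enumerate the decision states D_y and keep the minimum-cost
    consistent one -- a direct reading of the definition above."""
    best = None
    for values in product(*domains.values()):
        state = dict(zip(domains.keys(), values))
        if consistent(state) and (best is None or cost(state) < cost(best)):
            best = state
    return best

# Toy instance: two binary decision variables.
domains = {"y1": [0, 1], "y2": [0, 1]}
cost = lambda s: s["y1"] + 2 * s["y2"]
consistent = lambda s: s["y1"] + s["y2"] >= 1   # the CSP rules out (0, 0)
best = solve_ocsp(domains, cost, consistent)
print(best)  # {'y1': 1, 'y2': 0}
```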

Methods for solving an OCSP
- Traditional method: A*. Good at finding optimal solutions, but it visits every state whose estimated cost is less than the true optimum*, which is computationally unacceptable for model-based executives. (*The A* heuristic is admissible, that is, it never overestimates cost.)
- Solution: conflict-directed A*. Also searches in best-first order, but eliminates subspaces around inconsistent states.

Outline
Last lecture: 1. Autonomous systems  2. Model-based programming  3. Livingstone
Today's lecture: 1. Constraint satisfaction problems (CSP)  2. Optimal CSP  3. Conflict-directed A*  4. Expand best child

Conflict-Directed A* (Williams & Ragno 2003)
Step 1: The best candidate S is generated.
Step 2: S is tested for consistency.
Step 3: If S is inconsistent, the inconsistency is generalized to conflicts, removing a subspace of the solution space.
Step 4: The algorithm jumps to the next-best candidate that resolves all conflicts.

Enumerating Probable Candidates
[Flowchart] Starting from the leading candidate based on priors: generate a candidate and test it. If it is consistent, keep it and compute its posterior p; if p is below the threshold, we are done, otherwise generate the next candidate. If it is inconsistent, extract a conflict and continue.

[Figure: states ordered by increasing cost, marked consistent/inconsistent]
A* must visit all states with cost lower than the optimum.

[Figure] Conflict-directed A*: having found an inconsistent state, it eliminates all states that entail the same symptom.

[Figure: successive frames as Conflict 1, Conflict 2, and Conflict 3 are discovered, each eliminating the region of states sharing that conflict]

[Figure] Conflict-directed A*: the feasible regions are described by the implicates of the known conflicts (kernel assignments). We want the kernel assignment containing the best-cost candidate.

Conflict-directed A*

Function Conflict-Directed-A*(OCSP) returns the leading minimal-cost solutions.
  Conflicts[OCSP] ← {}
  OCSP ← Initialize-Best-Kernels(OCSP)
  Solutions[OCSP] ← {}
  loop do
    decision-state ← Next-Best-State-Resolving-Conflicts(OCSP)
    if no decision-state returned or Terminate?(OCSP)
      then return Solutions[OCSP]
    if Consistent?(CSP[OCSP], decision-state)
      then add decision-state to Solutions[OCSP]
      else
        new-conflicts ← Extract-Conflicts(CSP[OCSP], decision-state)
        Conflicts[OCSP] ← Eliminate-Redundant-Conflicts(Conflicts[OCSP] ∪ new-conflicts)
  end

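A heavily simplified structural sketch of this loop, with a made-up toy instance: candidates arrive in best-first order, inconsistent ones yield conflicts, and conflicts prune every candidate they subsume before any consistency test is run. (Real conflict-directed A* generates only conflict-resolving candidates from kernels rather than filtering a pre-built list.)

```python
from itertools import product

def conflict_directed_search(candidates_by_cost, consistent, extract_conflict):
    """Simplified skeleton: return the first (best-cost) consistent candidate,
    skipping candidates already ruled out by known conflicts."""
    conflicts = []
    for cand in candidates_by_cost:
        if any(all(cand.get(var) == val for var, val in c.items())
               for c in conflicts):
            continue                       # pruned without a consistency test
        if consistent(cand):
            return cand
        conflicts.append(extract_conflict(cand))
    return None

# Toy instance: two mode variables; any candidate with a = "G" is inconsistent,
# and the (given) conflict generalizes that whole subspace.
cands = [dict(zip("ab", vals)) for vals in product(["G", "U"], repeat=2)]
cands.sort(key=lambda c: sum(v == "U" for v in c.values()))  # fewest faults first
best = conflict_directed_search(
    cands,
    consistent=lambda c: c["a"] == "U",
    extract_conflict=lambda c: {"a": "G"},
)
print(best)  # {'a': 'U', 'b': 'G'}
```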

Example: Diagnosis as an OCSP
[Figure: circuit with multipliers M1, M2, M3 and adders A1, A2; inputs A–E, outputs F, G, internal lines X, Y, Z]
Assume independent failures (G = good, U = unknown):
- P(G(mi)) >> P(U(mi))
- P(single fault) >> P(double fault)
- P(U(M2)) > P(U(M1)) > P(U(M3)) > P(U(A1)) > P(U(A2))

Example: Diagnosis
- OBS = the observation
- COMPONENTS = the decision-variable set y
- System description = the constraints Cy; e.g., M1=G ⇒ M1's output X is the correct function of its inputs
- Candidate = a mode assignment to y
- Diagnosis = a candidate that is consistent with Cy and OBS
- Utility = candidate probability P(y); cost = 1/P(y)
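A hedged sketch of this consistency test in Python. The transcript lost the figure's values, so the classic polybox wiring and observations are assumed here (X = A·C, Y = B·D, Z = C·E, F = X + Y, G = Y + Z, with the standard observed values); with these assumptions the results reproduce the iterations on the following slides:

```python
# Assumed observations (A-E inputs, F and G outputs).
OBS = {"A": 3, "B": 2, "C": 2, "D": 3, "E": 3, "F": 10, "G": 12}

def consistent(modes):
    """A candidate (mode assignment) is a diagnosis iff the constraints of all
    G components can hold simultaneously; U components impose no constraint."""
    vals = {"x": set(), "y": set(), "z": set()}   # derived values per line
    if modes["M1"] == "G": vals["x"].add(OBS["A"] * OBS["C"])
    if modes["M2"] == "G": vals["y"].add(OBS["B"] * OBS["D"])
    if modes["M3"] == "G": vals["z"].add(OBS["C"] * OBS["E"])
    changed = True
    while changed:                                # propagate adder constraints
        changed = False
        if modes["A1"] == "G":                    # F = x + y, both directions
            for x in list(vals["x"]):
                if OBS["F"] - x not in vals["y"]:
                    vals["y"].add(OBS["F"] - x); changed = True
            for y in list(vals["y"]):
                if OBS["F"] - y not in vals["x"]:
                    vals["x"].add(OBS["F"] - y); changed = True
        if modes["A2"] == "G":                    # G = y + z, both directions
            for z in list(vals["z"]):
                if OBS["G"] - z not in vals["y"]:
                    vals["y"].add(OBS["G"] - z); changed = True
            for y in list(vals["y"]):
                if OBS["G"] - y not in vals["z"]:
                    vals["z"].add(OBS["G"] - y); changed = True
    # Inconsistent iff some line is forced to two different values.
    return all(len(v) <= 1 for v in vals.values())

all_good = {c: "G" for c in ["M1", "M2", "M3", "A1", "A2"]}
print(consistent(all_good))                   # False: yields the first conflict
print(consistent({**all_good, "M2": "U"}))    # False: yields the second conflict
print(consistent({**all_good, "M1": "U"}))    # True: M1=U is a diagnosis
```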

First Iteration
- Conflicts / constituent diagnoses: none
- Best kernel: {}
- Best candidate: ?

{ } → M1=? ∧ M2=? ∧ M3=? ∧ A1=? ∧ A2=? → M1=G ∧ M2=G ∧ M3=G ∧ A1=G ∧ A2=G
Select the most likely value for the unassigned modes.

Test: M1=G ∧ M2=G ∧ M3=G ∧ A1=G ∧ A2=G
[Figure: observed values propagated through the circuit, frame by frame]
Extract conflict and constituent diagnoses: ¬[M1=G ∧ M2=G ∧ A1=G]

Test: M1=G ∧ M2=G ∧ M3=G ∧ A1=G ∧ A2=G
Extract conflict and constituent diagnoses: ¬[M1=G ∧ M2=G ∧ A1=G], i.e. M1=U ∨ M2=U ∨ A1=U
This consistency test is performed by a propositional satisfiability procedure (the DPLL algorithm*).
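The DPLL procedure mentioned in the footnote can be sketched in a few lines (a minimal illustration, not the executive's actual implementation):

```python
def dpll(clauses):
    """Tiny DPLL satisfiability check. Clauses are sets of integer literals
    (a negative literal is a negated variable). Returns True iff satisfiable."""
    clauses = [set(c) for c in clauses]
    if not clauses:
        return True                       # no constraints left: satisfiable
    if any(not c for c in clauses):
        return False                      # empty clause: contradiction
    for c in clauses:                     # unit propagation
        if len(c) == 1:
            lit = next(iter(c))
            return dpll([cl - {-lit} for cl in clauses if lit not in cl])
    lit = next(iter(clauses[0]))          # split on a literal, try both values
    return (dpll([cl - {-lit} for cl in clauses if lit not in cl])
            or dpll([cl - {lit} for cl in clauses if -lit not in cl]))

print(dpll([{1, 2}, {-1}, {-2}]))  # False: forces both 1 and 2 false, breaking {1, 2}
print(dpll([{1, 2}, {-1}]))        # True: e.g. 1 false, 2 true
```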

Second Iteration
- Conflicts / constituent diagnoses: M1=U ∨ M2=U ∨ A1=U
- Best kernel: M2=U
- Best candidate: M1=G ∧ M2=U ∧ M3=G ∧ A1=G ∧ A2=G
(Later we will describe how to determine the best kernel.)

Test: M1=G ∧ M2=U ∧ M3=G ∧ A1=G ∧ A2=G
[Figure: observed values propagated through the circuit, frame by frame]
Extract conflict: ¬[M1=G ∧ M3=G ∧ A1=G ∧ A2=G], i.e. M1=U ∨ M3=U ∨ A1=U ∨ A2=U

Third Iteration
- Conflicts / constituent diagnoses: M1=U ∨ M2=U ∨ A1=U; M1=U ∨ M3=U ∨ A1=U ∨ A2=U
- Best kernel: M1=U
- Best candidate: M1=U ∧ M2=G ∧ M3=G ∧ A1=G ∧ A2=G

Test: M1=U ∧ M2=G ∧ M3=G ∧ A1=G ∧ A2=G
[Figure: observed values propagated through the circuit, frame by frame]
Consistent!

Conflict-Directed A*: Generating Best Kernels
Constituent diagnoses: {A1=U, M1=U, M2=U} and {A1=U, A2=U, M1=U, M3=U}
[Figure: a set-covering tree over the constituent diagnoses; branches that are supersets of an existing kernel, e.g. M1=U ∧ A2=U ⊇ M1=U, are pruned]
Insight: kernels are found by minimal set covering, which is an instance of breadth-first search. To find the best kernel, expand the tree in best-first order.

[Figure: the resulting kernels are M1=U, A1=U, M2=U ∧ M3=U, and M2=U ∧ A2=U]
Continue the expansion to find the best candidate; recall that PU(M2) > PU(M1).
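Kernel generation can be sketched as minimal hitting sets of the constituent-diagnosis sets, enumerated smallest-first as the slide describes (a brute-force illustration; the slide's tree expansion prunes supersets the same way but in best-first order):

```python
from itertools import combinations

def kernels(constituent_sets):
    """Minimal hitting sets of the constituent-diagnosis sets,
    enumerated by increasing size (breadth-first)."""
    universe = sorted(set().union(*constituent_sets))
    found = []
    for size in range(1, len(universe) + 1):
        for combo in combinations(universe, size):
            s = set(combo)
            if any(k <= s for k in found):
                continue                    # superset of a kernel: not minimal
            if all(s & c for c in constituent_sets):
                found.append(s)             # hits every conflict: a kernel
    return found

# The two conflicts from the diagnosis example, as constituent diagnoses.
conflicts = [{"M1=U", "M2=U", "A1=U"},
             {"M1=U", "M3=U", "A1=U", "A2=U"}]
result = kernels(conflicts)
for k in result:
    print(sorted(k))
```

This reproduces the four kernels on the slide: A1=U, M1=U, M2=U ∧ A2=U, and M2=U ∧ M3=U.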

Outline
Last lecture: 1. Autonomous systems  2. Model-based programming  3. Livingstone
Today's lecture: 1. Constraint satisfaction problems (CSP)  2. Optimal CSP  3. Conflict-directed A*  4. Expand best child

OCSP Admissible Estimate
For the partial assignment M2=U, with M1, M3, A1, A2 unassigned:
g = P(M2=U);  h = P(M1=G) × P(M3=G) × P(A1=G) × P(A2=G)
Select the most likely value for the unassigned modes; F = G + H (H is admissible).
MPI (mutual preferential independence): to find the best candidate we assign each variable its best utility value, independently of the values assigned to the other variables.
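A small numeric illustration of this estimate. The prior probabilities below are invented (only their ordering follows the earlier slide, PU(M2) > PU(M1) > PU(M3) > PU(A1) > PU(A2)); probabilities are multiplied here, which matches the additive F = G + H under cost = −log P:

```python
# Invented priors: P(mode = U); P(mode = G) = 1 - P(mode = U).
p_u = {"M1": 0.003, "M2": 0.004, "M3": 0.002, "A1": 0.001, "A2": 0.0005}

def f_estimate(assignment):
    """g = probability of the assigned modes; h = optimistic bound for the
    rest: each unassigned mode takes its most likely value (valid under MPI)."""
    g = h = 1.0
    for comp, p in p_u.items():
        if comp in assignment:
            g *= p if assignment[comp] == "U" else 1.0 - p
        else:
            h *= max(p, 1.0 - p)
    return g * h

# Committing M2 to a fault can only lower the optimistic estimate.
print(f_estimate({"M2": "U"}) < f_estimate({}))  # True
```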

Conflict-Directed A*: Expand Best Child & Sibling
Constituent kernels: M2=U ∨ M1=U ∨ A1=U
[Figure: the root { } with children M2=U, M1=U, A1=U]
For any node N: the child of N containing the best-cost candidate is the child with the best estimated cost f = g + h (by MPI). We only need to expand the best child.

[Figure: the constituents are ordered by decreasing likelihood, M2=U >> M1=U >> A1=U, and only the best child, M2=U, is expanded]

Conflict-Directed A*: Expand Best Child & Sibling
[Figure: the tree after the second conflict, M1=U ∨ M3=U ∨ A1=U ∨ A2=U, is added]
When a best child loses any candidate, expand the child's next-best sibling:
- If there are unresolved conflicts: expand as soon as the child's next conflict is resolved.
- If all conflicts are resolved: expand as soon as the child is expanded to a full candidate.

Appendix: A* Search: Preliminaries
Problem: a state-space search problem
- Initial state
- Expand(node): children of a search node = next states
- Goal-Test(node): true if the search node is at a goal state
- h: an admissible heuristic, an optimistic cost-to-go
Search node: a node in the search tree
- State: the state the search is at
- Parent: the parent in the search tree

A* Search: Preliminaries (continued)
Problem: a state-space search problem
- Initial state
- Expand(node): children of a search node = adjacent states
- Goal-Test(node): true if the search node is at a goal state
- Nodes: search nodes still to be expanded
- Expanded: search nodes already expanded
- Initialize: the search starts at the initial state, with no expanded nodes
- h: an admissible heuristic, an optimistic cost-to-go
Search node: a node in the search tree
- State: the state the search is at
- Parent: the parent in the search tree
Operations on Nodes[Problem]:
- Remove-Best(f): removes the best-cost node according to f
- Enqueue(new-node, f): adds a search node to those to be expanded

A* Search (expand best first)

Function A*(problem, h) returns the best solution or failure. Problem pre-initialized.
  f(x) ← g[problem](x) + h(x)
  loop do
    if Nodes[problem] is empty then return failure
    node ← Remove-Best(Nodes[problem], f)
    state ← State(node)
    remove any n from Nodes[problem] such that State(n) = state
    Expanded[problem] ← Expanded[problem] ∪ {state}
    new-nodes ← Expand(node, problem)
    for each new-node in new-nodes
      unless State(new-node) is in Expanded[problem]
        then Nodes[problem] ← Enqueue(Nodes[problem], new-node, f)
    if Goal-Test[problem] applied to State(node) succeeds then return node
  end
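A runnable sketch of this procedure using a priority queue (the toy graph and heuristic values are made up; h never overestimates the true cost-to-go, so it is admissible):

```python
import heapq

def a_star(initial, expand, goal_test, h):
    """Sketch of the slide's A*: best-first on f = g + h, with an Expanded set
    so no state is expanded twice (the dynamic-programming principle)."""
    nodes = [(h(initial), 0.0, initial)]          # queue of (f, g, state)
    expanded = set()
    while nodes:
        f, g, state = heapq.heappop(nodes)        # Remove-Best according to f
        if state in expanded:
            continue                              # stale duplicate in the queue
        if goal_test(state):
            return state, g
        expanded.add(state)
        for child, step_cost in expand(state):
            if child not in expanded:
                g2 = g + step_cost
                heapq.heappush(nodes, (g2 + h(child), g2, child))
    return None, float("inf")                     # Nodes empty: failure

# Toy graph: S -> A -> G costs 1 + 5; S -> B -> G costs 4 + 1 (optimal).
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 4, "B": 1, "G": 0}
state, cost = a_star("S", lambda s: graph[s], lambda s: s == "G", lambda s: h[s])
print(state, cost)  # G 5.0
```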

The annotations on the repeated slides highlight two points about the procedure above: the search terminates when Nodes[problem] is empty (returning failure) or when the goal test succeeds on the best node; and removing duplicate states via Expanded[problem] is the dynamic-programming principle, so each state is expanded at most once.

Next Best State Resolving Conflicts (an instance of A*)

Function Next-Best-State-Resolving-Conflicts(OCSP) returns the best-cost state consistent with Conflicts[OCSP].
  f(x) ← G[problem](g[problem](x), h(x))
  loop do
    if Nodes[OCSP] is empty then return failure
    node ← Remove-Best(Nodes[OCSP], f)
    state ← State[node]
    add state to Visited[OCSP]
    new-nodes ← Expand-State-Resolving-Conflicts(node, OCSP)
    for each new-node in new-nodes
      unless there exists n in Nodes[OCSP] such that State[n] = State[new-node]
          or State[new-node] is in Visited[OCSP]
        then Nodes[OCSP] ← Enqueue(Nodes[OCSP], new-node, f)
    if Goal-Test-State-Resolving-Conflicts[OCSP] applied to state succeeds then return node
  end