MBD and CSP Meir Kalech Partially based on slides of Jia You and Brian Williams
Outline Last lecture: 1. Autonomous systems 2. Model-based programming 3. Livingstone Today's lecture: 1. Constraint satisfaction problems (CSP) 2. Optimal CSP 3. Conflict-directed A* 4. Expand best child
Intro Example: 8-Queens. Generate-and-test: 8^8 combinations.
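To make the 8^8 figure concrete, here is a minimal generate-and-test sketch in Python (not from the lecture): it enumerates every assignment of a row to each of the 8 columns and keeps only the non-attacking ones. It is deliberately brute force, so it takes a while to run.

from itertools import product

def is_solution(rows):
    """rows[i] is the row of the queen placed in column i (0-based)."""
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if rows[i] == rows[j] or abs(rows[i] - rows[j]) == j - i:
                return False  # same row or same diagonal
    return True

# Generate-and-test: try all 8**8 = 16,777,216 candidate assignments.
solutions = [rows for rows in product(range(8), repeat=8) if is_solution(rows)]
print(len(solutions))  # the 8-queens puzzle has 92 solutions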
Intro Example: 8-Queens
Constraint Satisfaction Problem. Set of variables {X1, X2, …, Xn}. Each variable Xi has a domain Di of possible values; usually Di is discrete and finite. Set of constraints {C1, C2, …, Cp}. Each constraint Ck involves a subset of the variables and specifies the allowable combinations of values of these variables. Goal: assign a value to every variable such that all constraints are satisfied.
Example: 8-Queens Problem. 8 variables Xi, i = 1 to 8 (one per column). Domain for each variable: {1, 2, …, 8}. Constraints are of the form: Xi = k ⇒ Xj ≠ k for all j = 1 to 8, j ≠ i (no two queens in the same row); similar constraints exclude shared diagonals.
Example: Map Coloring. 7 variables {WA, NT, SA, Q, NSW, V, T}. Each variable has the same domain {red, green, blue}. No two adjacent variables have the same value: WA ≠ NT, WA ≠ SA, NT ≠ SA, NT ≠ Q, SA ≠ Q, SA ≠ NSW, SA ≠ V, Q ≠ NSW, NSW ≠ V.
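For concreteness, a small Python encoding of this map-coloring CSP (the variables and the inequality constraints are taken from the slide; the dictionary representation itself is just one illustrative choice):

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
# Binary not-equal constraints between adjacent regions.
constraints = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
               ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

def consistent(assignment):
    """True if no constraint between two already-assigned variables is violated."""
    return all(assignment[a] != assignment[b]
               for a, b in constraints
               if a in assignment and b in assignment)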
Example: Task Scheduling. T1 must be done during T3. T2 must be achieved before T1 starts. T2 must overlap with T3. T4 must start after T1 is complete.
Constraint Graph. Binary constraints form a graph whose nodes are the variables and whose edges are the constraints (the map-coloring graph over WA, NT, SA, Q, NSW, V, T; the scheduling graph over T1–T4). Two variables are adjacent, or neighbors, if they are connected by an edge or an arc.
CSP as a Search Problem. Initial state: the empty assignment. Successor function: assign to some unassigned variable a value that does not conflict with the currently assigned variables. Goal test: the assignment is complete. Path cost: irrelevant.
Backtracking example
Backtracking Algorithm
CSP-BACKTRACKING(PartialAssignment a)
  If a is complete then return a
  X ← select an unassigned variable
  D ← select an ordering for the domain of X
  For each value v in D do
    If v is consistent with a then
      Add (X = v) to a
      result ← CSP-BACKTRACKING(a)
      If result ≠ failure then return result
      Remove (X = v) from a
  Return failure
Start with CSP-BACKTRACKING({})
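A direct Python rendering of CSP-BACKTRACKING, assuming a CSP given as variables, per-variable domains, and a consistency test like the map-coloring encoding above; variable and value orderings are left as trivial defaults (a sketch, not the lecture's reference implementation):

def csp_backtracking(assignment, variables, domains, consistent):
    """Recursive backtracking search; returns a complete assignment or None (failure)."""
    if len(assignment) == len(variables):                    # a is complete
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)  # select an unassigned X
    for value in domains[var]:                               # domain ordering as given
        assignment[var] = value                              # add (X = v) to a
        if consistent(assignment):
            result = csp_backtracking(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]                                   # remove (X = v) from a
    return None

# Start with CSP-BACKTRACKING({}), e.g. on the map-coloring CSP sketched earlier:
#   print(csp_backtracking({}, variables, domains, consistent))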
Improving backtracking efficiency Which variable should be assigned next? In what order should its values be tried? Can we detect obvious failure early?
Outline Last lecture: 1. Autonomous systems 2. Model-based programming 3. Livingstone Today's lecture: 1. Constraint satisfaction problems (CSP) 2. Optimal CSP 3. Conflict-directed A* 4. Expand best child
What is an Optimal CSP (OCSP)? A set of decision variables y, a utility function g on y, and a set of constraints C that y must satisfy. The solution is an assignment to y maximizing g and satisfying C.
What is an OCSP formally? An OCSP is a tuple ⟨CSP, y, f⟩, where CSP = ⟨x, Dx, Cx⟩: x – set of variables, Dx – domains over the variables, Cx – set of constraints over the variables; y ⊆ x – set of decision variables; f – cost function over assignments to y. We call the elements of Dy decision states. A solution y* to an OCSP is a minimum-cost decision state that is consistent with the CSP.
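As a data structure, an OCSP can be sketched roughly as follows in Python (all field names here are illustrative, not the lecture's notation):

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CSP:
    variables: List[str]                                  # x
    domains: Dict[str, List[str]]                         # Dx
    constraints: List[Callable[[Dict[str, str]], bool]]   # Cx: each tests an assignment

@dataclass
class OCSP:
    csp: CSP
    decision_vars: List[str]                  # y, a subset of csp.variables
    cost: Callable[[Dict[str, str]], float]   # f over decision states; a solution minimizes it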
Methods for solving OCSP. Traditional method: A*. Good at finding optimal solutions. Visits every state whose estimated cost is less than the true optimum*. Computationally unacceptable for model-based executives. (* The A* heuristic is admissible, that is, it never overestimates cost.) Solution: CONFLICT-DIRECTED A*. Also searches in best-first order, but eliminates subspaces around inconsistent states.
Outline Last lecture: 1. Autonomous systems 2. Model-based programming 3. Livingstone Today's lecture: 1. Constraint satisfaction problems (CSP) 2. Optimal CSP 3. Conflict-directed A* 4. Expand best child
CONFLICT-DIRECTED A* (Williams & Ragno 03). Step 1: The best candidate S is generated. Step 2: S is tested for consistency. Step 3: If S is inconsistent, the inconsistency is generalized to conflicts, removing a subspace of the solution space. Step 4: The search jumps to the next-best candidate resolving all conflicts.
Enumerating Probable Candidates (flowchart): generate a candidate (the leading candidate, based on the priors); test the candidate for consistency; if it is inconsistent, extract a conflict and generate the next candidate; if it is consistent, keep it and compute the posterior p; if p falls below the threshold, the enumeration is done, otherwise continue with the next candidate.
Increasing Cost (figure: candidate states laid out in order of increasing cost, each marked consistent or inconsistent). A* must visit all states with cost lower than the optimum.
Increasing Cost. Conflict-directed A*: when an inconsistent state is found, it eliminates all states that entail the same symptom.
Increasing Cost (figure sequence): Conflict-directed A* discovers Conflict 1, then Conflict 2, then Conflict 3; each conflict removes the region of candidate states that shares it.
Increasing Cost: with Conflicts 1–3 known, the feasible regions are described by the implicates of the known conflicts (kernel assignments). We want the kernel assignment containing the best-cost candidate.
Conflict-directed A*
Function Conflict-directed-A*(OCSP) returns the leading minimal-cost solutions.
  Conflicts[OCSP] ← {}
  OCSP ← Initialize-Best-Kernels(OCSP)
  Solutions[OCSP] ← {}
  loop do
    decision-state ← Next-Best-State-Resolving-Conflicts(OCSP)
    if no decision-state returned or Terminate?(OCSP)
      then return Solutions[OCSP]
    if Consistent?(CSP[OCSP], decision-state)
      then add decision-state to Solutions[OCSP]
    else
      new-conflicts ← Extract-Conflicts(CSP[OCSP], decision-state)
      Conflicts[OCSP] ← Eliminate-Redundant-Conflicts(Conflicts[OCSP] ∪ new-conflicts)
  end
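The same loop as a compact Python sketch; the candidate generator, consistency test, and conflict extraction are passed in as callables because the lecture treats them as subroutines (all names here are illustrative):

def conflict_directed_a_star(next_best_state, consistent, extract_conflicts, terminate):
    """Enumerate leading minimal-cost decision states, reusing conflicts to prune the search."""
    conflicts, solutions = [], []
    while True:
        # Best-cost decision state that resolves every known conflict (found by A* over kernels).
        state = next_best_state(conflicts)
        if state is None or terminate(solutions):
            return solutions
        if consistent(state):
            solutions.append(state)
        else:
            # Generalize the failure: each new conflict rules out a whole subspace of states.
            conflicts.extend(extract_conflicts(state))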
Example: Diagnosis as an OCSP (circuit with multipliers M1, M2, M3 and adders A1, A2; inputs A–E, internal values X, Y, Z, observed outputs F, G). Assume independent failures: P_G(mi) >> P_U(mi) (G = good, U = unknown), P_single >> P_double, and P_U(M2) > P_U(M1) > P_U(M3) > P_U(A1) > P_U(A2).
Example: Diagnosis. OBS = the observations. COMPONENTS = the decision-variable set y. System description = the constraints Cy, e.g. M1=G ⇒ X=A+B. Candidate = a mode assignment to y. Diagnosis = a candidate that is consistent with Cy and OBS. Utility = the candidate probability P(y). Cost = 1/P(y).
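Under the independent-failure assumption, a candidate's utility is simply the product of its components' mode probabilities; a small illustrative computation (the numeric priors below are invented for the example, chosen to respect P_U(M2) > P_U(M1) > P_U(M3) > P_U(A1) > P_U(A2)):

# Hypothetical priors P(component = G); P(component = U) = 1 - P(G).
p_good = {"M1": 0.99, "M2": 0.98, "M3": 0.995, "A1": 0.999, "A2": 0.9995}

def candidate_probability(candidate):
    """P(y) for a full mode assignment, assuming independent component failures."""
    p = 1.0
    for comp, mode in candidate.items():
        p *= p_good[comp] if mode == "G" else (1.0 - p_good[comp])
    return p

cand = {"M1": "G", "M2": "U", "M3": "G", "A1": "G", "A2": "G"}
print(candidate_probability(cand))         # utility = P(y)
print(1.0 / candidate_probability(cand))   # cost = 1 / P(y)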
First Iteration. Conflicts / constituent diagnoses: none. Best kernel: {}. Best candidate: ?
Starting from the empty kernel { }, with M1=? M2=? M3=? A1=? A2=? unassigned, select the most likely value for each unassigned mode: M1=G M2=G M3=G A1=G A2=G.
Test: M1=G M2=G M3=G A1=G A2=G (figure sequence: the observations are propagated through the circuit and contradict an observed output). Extract conflict and constituent diagnoses: conflict ¬(M1=G ∧ M2=G ∧ A1=G); constituent diagnoses M1=U, M2=U, A1=U. The consistency check and conflict extraction are carried out with a propositional satisfiability (DPLL) algorithm.
Second Iteration. Conflicts / constituent diagnoses: {M1=U, M2=U, A1=U}. Best kernel: M2=U. Best candidate: M1=G M2=U M3=G A1=G A2=G. (How the best kernel is determined is described later.)
Test: M1=G M2=U M3=G A1=G A2=G (figure sequence: propagation again contradicts the observations). Extract conflict: ¬(M1=G ∧ M3=G ∧ A1=G ∧ A2=G); constituent diagnoses: M1=U, M3=U, A1=U, A2=U.
Third Iteration. Conflicts / constituent diagnoses: {M1=U, M2=U, A1=U} and {M1=U, M3=U, A1=U, A2=U}. Best kernel: M1=U. Best candidate: M1=U M2=G M3=G A1=G A2=G.
Test: M1=U M2=G M3=G A1=G A2=G (figure sequence: propagation completes without contradicting the observations). Consistent!
Conflict-Directed A*: Generating Best Kernels. Constituent diagnoses: {A1=U, M1=U, M2=U} and {A1=U, A2=U, M1=U, M3=U}. Insight: kernels are found by minimal set covering, which is an instance of breadth-first search; to find the best kernel, expand the tree in best-first order, pruning branches that are supersets of an already-found kernel (e.g. supersets of M1=U). Continue the expansion to find the best candidate, using P_U(M2) > P_U(M1).
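A brute-force sketch of the set-covering view: each conflict contributes its set of constituent diagnoses, and a kernel picks at least one constituent from every conflict, keeping only the minimal picks. (The slide's best-first expansion replaces this exhaustive enumeration; the code is only meant to show what the kernels are.)

from itertools import product

def kernels(constituent_sets):
    """Minimal hitting sets: choose one constituent per conflict, then drop non-minimal sets."""
    candidates = {frozenset(choice) for choice in product(*constituent_sets)}
    return [set(k) for k in candidates
            if not any(other < k for other in candidates)]   # < is proper subset

conflict1 = {"M1=U", "M2=U", "A1=U"}
conflict2 = {"M1=U", "M3=U", "A1=U", "A2=U"}
print(kernels([conflict1, conflict2]))
# -> {'M1=U'}, {'A1=U'}, {'M2=U', 'M3=U'}, {'M2=U', 'A2=U'} (in some order)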
Outline Last lecture: 1. Autonomous systems 2. Model-based programming 3. Livingstone Today's lecture: 1. Constraint satisfaction problems (CSP) 2. Optimal CSP 3. Conflict-directed A* 4. Expand best child
OCSP Admissible Estimate: f = g + h, with h admissible. In the example M2=U is assigned and M1, M3, A1, A2 are still unassigned; selecting the most likely value for each unassigned mode gives the estimate P(M2=U) × P(M1=G) × P(M3=G) × P(A1=G) × P(A2=G). MPI (mutual preferential independence): to find the best candidate we assign each variable its best utility value, independent of the values assigned to the other variables.
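A sketch of that estimate in Python, reusing the hypothetical priors from the diagnosis example: the assigned part contributes its exact probability, and every unassigned variable contributes its best (most likely) value independently, which is what MPI licenses; since utilities multiply here, the estimate is optimistic for maximization.

p_good = {"M1": 0.99, "M2": 0.98, "M3": 0.995, "A1": 0.999, "A2": 0.9995}  # hypothetical priors

def mode_prob(comp, mode):
    return p_good[comp] if mode == "G" else 1.0 - p_good[comp]

def f_estimate(partial, components):
    """g * h: probability of the assigned modes times an optimistic bound for the rest."""
    g = 1.0
    for comp, mode in partial.items():
        g *= mode_prob(comp, mode)
    h = 1.0
    for comp in components:
        if comp not in partial:
            h *= max(mode_prob(comp, "G"), mode_prob(comp, "U"))  # best value, chosen independently
    return g * h

print(f_estimate({"M2": "U"}, ["M1", "M2", "M3", "A1", "A2"]))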
Conflict-directed A*: Expand Best Child & Sibling. Constituent kernels: M2=U, M1=U, A1=U (the children of the root { }). For any node N: the child of N containing the best-cost candidate is the child with the best estimated cost f = g + h (by MPI), so we only need to expand the best child. Order the constituents by decreasing likelihood; here M2=U comes first, so only the M2=U child is expanded.
Conflict-directed A*: Expand Best Child & Sibling. Constituent kernels: M2=U, M1=U, A1=U. When a best child loses any candidate, expand the child's next-best sibling: if there are unresolved conflicts, expand the sibling as soon as the child's next conflict is resolved; if all conflicts are resolved, expand the sibling as soon as the child has been expanded to a full candidate.
Appendix: A* Search: Preliminaries
Problem (state-space search problem): Initial State; Expand(node) – children of a search node = next states; Goal-Test(node) – true if the search node is at a goal state; h – admissible heuristic, an optimistic cost-to-go.
Search node (node in the search tree): State – the state the search is at; Parent – the parent in the search tree.
A* Search: Preliminaries
Problem (state-space search problem): Initial State; Expand(node) – children of a search node = adjacent states; Goal-Test(node) – true if the search node is at a goal state; h – admissible heuristic, an optimistic cost-to-go.
Nodes – search nodes to be expanded; Expanded – search nodes already expanded; Initialize – the search starts at the initial state, with no expanded nodes.
Search node (node in the search tree): State – the state the search is at; Parent – the parent in the search tree.
Nodes[Problem] operations: Remove-Best(f) – removes the best-cost node according to f; Enqueue(new-node, f) – adds a search node to those to be expanded.
A* Search
Function A*(problem, h) returns the best solution or failure. Problem pre-initialized.
  f(x) ← g[problem](x) + h(x)
  loop do
    if Nodes[problem] is empty then return failure
    node ← Remove-Best(Nodes[problem], f)
    state ← State(node)
    remove any n from Nodes[problem] such that State(n) = state
    Expanded[problem] ← Expanded[problem] ∪ {state}
    new-nodes ← Expand(node, problem)
    for each new-node in new-nodes
      unless State(new-node) is in Expanded[problem] then
        Nodes[problem] ← Enqueue(Nodes[problem], new-node, f)
    if Goal-Test[problem] applied to State(node) succeeds then return node
  end
Expand best first
Termination: A* returns a solution as soon as the node removed as best passes the goal test, and returns failure when Nodes[problem] is empty. Dynamic programming principle: each state is expanded at most once; nodes whose state is already in Expanded[problem] are discarded.
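A generic A* corresponding to this pseudocode, in Python; it assumes states are hashable and that g is a cost defined on the state itself (as it is for OCSP partial assignments), with a heap for Remove-Best and a set for Expanded (a sketch, not the lecture's code):

import heapq

def a_star(start, expand, goal_test, g, h):
    """Best-first search with f = g + h over states; returns a goal state or None."""
    counter = 0                                   # tie-breaker so the heap never compares states
    frontier = [(g(start) + h(start), counter, start)]
    expanded = set()
    while frontier:
        _, _, state = heapq.heappop(frontier)     # Remove-Best(f)
        if state in expanded:
            continue                              # dynamic programming: expand each state once
        if goal_test(state):
            return state
        expanded.add(state)
        for child in expand(state):               # Expand(node)
            if child not in expanded:
                counter += 1
                heapq.heappush(frontier, (g(child) + h(child), counter, child))
    return None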
Next Best State Resolving Conflicts (an instance of A*)
Function Next-Best-State-Resolving-Conflicts(OCSP) returns the best-cost state consistent with Conflicts[OCSP].
  f(x) ← G[problem](g[problem](x), h(x))
  loop do
    if Nodes[OCSP] is empty then return failure
    node ← Remove-Best(Nodes[OCSP], f)
    state ← State[node]
    add state to Visited[OCSP]
    new-nodes ← Expand-State-Resolving-Conflicts(node, OCSP)
    for each new-node in new-nodes
      unless exists n in Nodes[OCSP] such that State[new-node] = State[n], or State[new-node] is in Visited[OCSP], then
        Nodes[OCSP] ← Enqueue(Nodes[OCSP], new-node, f)
    if Goal-Test-State-Resolving-Conflicts[OCSP] applied to state succeeds then return node
  end