Presentation on theme: "Constraint Optimization And counting, and enumeration 275 class"— Presentation transcript:

1 Chapter 13: Constraint Optimization, Counting, and Enumeration (275 class)

2 Outline
- Introduction
- Optimization tasks for graphical models
- Solving optimization problems with inference and search
  - Inference: bucket elimination (dynamic programming); mini-bucket elimination
  - Search: branch and bound and best-first; lower-bounding heuristics; AND/OR search spaces
- Hybrids of search and inference: cutset decomposition; super-bucket scheme

3 Constraint Satisfaction
Example: map coloring. Variables are countries (A, B, C, D, E, F, G); values are colors (e.g., red, green, yellow); constraints require adjacent countries to receive different colors (allowed pairs such as (red, green), (red, yellow), (green, red), (green, yellow), (yellow, green), (yellow, red)). Tasks: decide consistency, find one solution, find all solutions, count solutions.
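As a sketch, the backtracking search behind these tasks can be written in a few lines; the adjacency map below is a hypothetical fragment, not the slide's exact map:

```python
# Minimal backtracking solver for map coloring. The `neighbors`
# adjacency used in the example is illustrative, not the slide's map.
def color_map(neighbors, colors):
    variables = list(neighbors)
    assignment = {}

    def consistent(var, value):
        # a value is allowed if no assigned neighbor already uses it
        return all(assignment.get(n) != value for n in neighbors[var])

    def backtrack(i):
        if i == len(variables):
            return dict(assignment)
        var = variables[i]
        for value in colors:
            if consistent(var, value):
                assignment[var] = value
                result = backtrack(i + 1)
                if result is not None:
                    return result
                del assignment[var]
        return None  # no consistent value: backtrack

    return backtrack(0)

neighbors = {"A": ["B", "C"], "B": ["A", "C"],
             "C": ["A", "B", "D"], "D": ["C"]}
solution = color_map(neighbors, ["red", "green", "yellow"])
```

Counting all solutions instead of finding one only requires the recursion to keep going after each success and accumulate a counter.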

4 Propositional Satisfiability
φ = {(¬C), (A ∨ B ∨ C), (¬A ∨ B ∨ E), (¬B ∨ C ∨ D)}
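Satisfiability of this particular formula can be checked by brute force over the 2^5 assignments; the clause encoding below is a straightforward sketch:

```python
from itertools import product

# Each clause is a list of (variable, sign) literals:
# (v, True) means v, (v, False) means ¬v.
clauses = [[("C", False)],
           [("A", True), ("B", True), ("C", True)],
           [("A", False), ("B", True), ("E", True)],
           [("B", False), ("C", True), ("D", True)]]
variables = ["A", "B", "C", "D", "E"]

def satisfies(assignment, clauses):
    # a clause is satisfied if at least one literal matches
    return all(any(assignment[v] == sign for v, sign in clause)
               for clause in clauses)

models = [dict(zip(variables, values))
          for values in product([False, True], repeat=len(variables))
          if satisfies(dict(zip(variables, values)), clauses)]
```

Since (¬C) forces C = False, the surviving constraints reduce to (A ∨ B), (¬A ∨ B ∨ E), and (¬B ∨ D), so the formula is satisfiable and `models` lists every model.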

5 Constraint Optimization Problems for Graphical Models
A finite COP is defined by a set of discrete variables X with their domains of values D and a set of cost functions defined on subsets of the variables; for example, f(A,B,D) has scope {A,B,D}. A global cost function defined over the variables must be optimized (minimized or maximized). For the remainder of the talk we assume the global cost function is the sum of the individual cost functions and that the objective is to minimize it. Graphically, a COP instance is represented by a graph in which nodes correspond to variables and an edge connects any two nodes that appear in the same function.

6 Constraint Optimization Problems for Graphical Models
Primal graph: variables become nodes; functions and constraints become arcs. Example: with cost functions f1(A,B,D), f2(D,F,G), f3(B,C,F), the global cost function is
F(a,b,c,d,f,g) = f1(a,b,d) + f2(d,f,g) + f3(b,c,f),
and the primal graph over {A, B, C, D, F, G} connects every pair of variables that share a function.
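A brute-force minimization of this global cost function is easy to sketch; the cost tables below are hypothetical stand-ins, since the slide's table entries are not reproduced in the transcript:

```python
from itertools import product

# Hypothetical cost functions with the slide's scopes {A,B,D},
# {D,F,G}, {B,C,F}; the actual table entries are placeholders.
def f1(a, b, d): return (a + b + d) % 3
def f2(d, f, g): return 2 * (d ^ f) + g
def f3(b, c, f): return abs(b - c) + f

def F(a, b, c, d, f, g):
    # global cost = sum of the individual cost functions
    return f1(a, b, d) + f2(d, f, g) + f3(b, c, f)

# exhaustive minimization over 0/1 domains (2^6 = 64 assignments)
best = min(product([0, 1], repeat=6), key=lambda v: F(*v))
best_cost = F(*best)
```

Exhaustive search is exponential in the number of variables, which is exactly what the inference and search schemes in the rest of the talk try to avoid.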

7 Constrained Optimization
Example: power plant scheduling

8 Probabilistic Networks
A Bayesian network over Smoking (S), Bronchitis (B), Cancer (C), X-Ray (X), and Dyspnoea (D), with CPTs P(S), P(B|S), P(C|S), P(X|C,S), and P(D|C,B); the slide shows P(D|C,B) as a table indexed by (C, B) with columns D=0 and D=1 (entries such as 0.1/0.9, 0.7/0.3, 0.8/0.2). The joint distribution factorizes as
P(S,C,B,X,D) = P(S) · P(C|S) · P(B|S) · P(X|C,S) · P(D|C,B)

9 Outline
- Introduction
- Optimization tasks for graphical models
- Solving by inference and search
  - Inference: bucket elimination (dynamic programming), tree clustering; mini-bucket elimination, belief propagation
  - Search: branch and bound and best-first; lower-bounding heuristics; AND/OR search spaces
- Hybrids of search and inference: cutset decomposition; super-bucket scheme

10 Computing MPE (Variable Elimination)
MPE = max_{a,b,c,d,e} P(a) · P(b|a) · P(c|a) · P(d|b,a) · P(e|b,c). Pushing the maximization over B past the factors that do not mention B gives
MPE = max_{a,c,d,e} P(a) · P(c|a) · max_b [P(b|a) · P(d|b,a) · P(e|b,c)].
The "moral" graph over A, B, C, D, E connects each variable to its parents and the parents of each variable to each other.

11 Finding the MPE: Algorithm elim-mpe (Dechter 1996)
Non-serial dynamic programming (Bertele and Brioschi, 1973). The functions are partitioned into buckets along the ordering A, E, D, C, B and processed with the elimination (maximization) operator:
bucket B: P(b|a), P(d|b,a), P(e|b,c)
bucket C: P(c|a)
bucket D:
bucket E: e = 0
bucket A: P(a)
Each bucket's product is maximized over the bucket's variable and the resulting function is placed in the bucket of its highest remaining variable; the constant left in bucket A is the MPE.
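The bucket-processing scheme can be sketched as a generic max-product elimination routine; the factors in the usage example are illustrative, not the slide's CPTs:

```python
from itertools import product

# Max-product bucket elimination for MPE over 0/1 variables.
# A factor is a (scope, table) pair mapping assignment tuples
# (in scope order) to values.
def prod_eval(factors, env):
    p = 1.0
    for scope, table in factors:
        p *= table[tuple(env[v] for v in scope)]
    return p

def eliminate(factors, order, domain=(0, 1)):
    for var in order:
        bucket = [f for f in factors if var in f[0]]        # this bucket
        factors = [f for f in factors if var not in f[0]]
        scope = sorted({v for s, _ in bucket for v in s if v != var})
        table = {}
        for assign in product(domain, repeat=len(scope)):
            env = dict(zip(scope, assign))
            # maximize the bucket's product over the eliminated variable
            table[assign] = max(prod_eval(bucket, {**env, var: val})
                                for val in domain)
        factors.append((tuple(scope), table))
    result = 1.0
    for _, table in factors:    # only constant factors remain
        result *= table[()]
    return result

# MPE of the tiny chain P(A) * P(B|A), eliminating B then A
factors = [(("A",), {(0,): 0.6, (1,): 0.4}),
           (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1,
                         (1, 0): 0.2, (1, 1): 0.8})]
mpe = eliminate(factors, ["B", "A"])
```

For this chain the MPE is max(0.6·0.9, 0.6·0.1, 0.4·0.2, 0.4·0.8) = 0.54.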

12 Generating the MPE Tuple
After all buckets are processed, the MPE assignment is generated top-down: choose the maximizing value of A from bucket A, then of E (here e = 0), D, C, and B, each time maximizing the functions in that bucket given the values already assigned.

13 Complexity
Algorithm elim-mpe (Dechter 1996); non-serial dynamic programming (Bertele and Brioschi, 1973). With the same bucket ordering over A, E, D, C, B, the largest function generated has four arguments, so bucket processing is exp(w* = 4), where w* is the "induced width" (max clique size) of the ordered graph.

14 Complexity of Bucket Elimination
Bucket elimination is time O(r · exp(w*+1)) and space O(n · exp(w*)), where r is the number of functions, n the number of variables, and w* the induced width of the ordering used. The ordering matters: different orderings of the constraint graph over A, B, C, D, E (e.g., A, B, C, D, E versus E, D, C, B, A) yield different induced widths. Finding the ordering of smallest induced width is hard.
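The induced width of an ordering can be computed directly on the running example's moral graph (edges A-B, A-C, A-D, B-C, B-D, B-E, C-E); the sketch below checks two orderings:

```python
# Induced width of an ordering: eliminate variables from last to
# first; eliminating a variable connects all of its earlier
# neighbors; the width is the largest earlier-neighbor count seen.
def induced_width(edges, order):
    pos = {v: i for i, v in enumerate(order)}
    adj = {v: set() for v in order}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    width = 0
    for v in reversed(order):
        earlier = {u for u in adj[v] if pos[u] < pos[v]}
        width = max(width, len(earlier))
        for u in earlier:               # connect earlier neighbors
            adj[u] |= earlier - {u}
    return width

# moral graph of P(a)P(b|a)P(c|a)P(d|b,a)P(e|b,c)
edges = [("A", "B"), ("A", "C"), ("A", "D"),
         ("B", "C"), ("B", "D"), ("B", "E"), ("C", "E")]
w_bad = induced_width(edges, list("AEDCB"))   # B eliminated first
w_good = induced_width(edges, list("ABCDE"))  # E eliminated first
```

The ordering that eliminates B first reproduces the w* = 4 of the previous slide, while eliminating leaves first gives w* = 2, so the same problem costs exp(4) or exp(2) depending on the ordering.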

15 Directional i-consistency
Bounded levels of directional consistency over the buckets of the example graph on B, C, D, E: adaptive consistency (full bucket processing), directional path consistency (d-path), and directional arc consistency (d-arc) form a spectrum of increasingly cheaper but weaker relaxations.

16 Mini-bucket approximation: MPE task
Split a bucket into mini-buckets to bound the complexity.

17 Mini-Bucket Elimination
Bucket B's functions are split into two mini-buckets, {P(B|A), P(D|A,B)} and {P(E|B,C)}, and maxB Π is applied to each separately, producing hB(A,D) and hB(C,E). Processing continues through buckets C, D, E, A (with E = 0), generating hC(A,E), hD(A), and hE(A). The resulting MPE* is an upper bound U on the MPE; evaluating the solution tuple it generates yields a lower bound L.

18 MBE-MPE(i): Algorithm approx-mpe (Dechter and Rish, 1997)
Input: i, the maximum number of variables allowed in a mini-bucket. Output: [lower bound (cost of a suboptimal solution), upper bound]. Example: approx-mpe(3) versus elim-mpe.
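The greedy partitioning step of MBE(i) can be sketched as follows, using bucket B's scopes from the earlier example:

```python
# Greedy mini-bucket partitioning: each mini-bucket's combined scope
# is capped at i variables (the MBE(i) accuracy parameter). Scopes
# are sets of variable names; only the combined scopes are returned
# in this sketch.
def partition_bucket(scopes, i):
    minibuckets = []            # each entry: union of its scopes
    for scope in scopes:
        for mb in minibuckets:
            if len(mb | scope) <= i:
                mb |= scope     # fits within the i-bound: merge
                break
        else:
            minibuckets.append(set(scope))
    return minibuckets

# bucket B of the slide: scopes of P(B|A), P(D|A,B), P(E|B,C)
parts = partition_bucket([{"A", "B"}, {"A", "B", "D"},
                          {"B", "C", "E"}], 3)
```

Maximizing out B in each mini-bucket separately is what produces the upper bound: the max of a product is never larger than the product of the separate maxes.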

19 Properties of MBE(i)
- Complexity: O(r · exp(i)) time and O(exp(i)) space.
- Yields an upper bound and a lower bound.
- Accuracy: determined by the upper/lower (U/L) bound ratio. As i increases, both accuracy and complexity increase.
- Possible uses of mini-bucket approximations: as anytime algorithms, or as heuristics in search.
- Other tasks: similar mini-bucket approximations exist for belief updating, MAP, and MEU (Dechter and Rish, 1997).

20 Outline
- Introduction
- Optimization tasks for graphical models
- Solving by inference and search
  - Inference: bucket elimination (dynamic programming); mini-bucket elimination
  - Search: branch and bound and best-first; lower-bounding heuristics; AND/OR search spaces
- Hybrids of search and inference: cutset decomposition; super-bucket scheme

21 The Search Space
Nine pairwise cost functions f1(A,B), f2(A,C), f3(A,E), f4(A,F), f5(B,C), f6(B,D), f7(B,E), f8(C,D), f9(E,F) define the objective function F = f1 + ... + f9 to be minimized. The figure shows the OR search tree that assigns the variables one by one along a fixed ordering; each root-to-leaf path is a complete assignment. (The cost tables and the full tree appear only in the slide's figure.)

22 The Search Space: Arc Costs
Arc costs are calculated from the cost components: the arc into a node assigning X = x carries the sum of all cost functions whose scopes become fully instantiated once X is assigned. The figure annotates every arc of the search tree with its cost.

23 The Value Function
The value of a node is the minimal-cost solution below it, computed bottom-up: a leaf has value 0, and an internal node's value is the minimum over its children of (arc cost + child value). The figure labels every node of the tree with its value.

24 An Optimal Solution
Following, from the root down, the child that achieves each node's value traces an optimal solution path; its cost equals the root's value (5 in the figure).

25 Basic Heuristic Search Schemes
The heuristic evaluation function f(x^p) computes a bound on the best extension of the partial assignment x^p and can be used to guide a heuristic search algorithm. We focus on:
1. Branch and Bound: uses f(x^p) to prune the depth-first search tree; linear space.
2. Best-First Search: always expands the node with the best heuristic value f(x^p); needs a lot of memory.

26 Classic Branch-and-Bound
Maintain an upper bound UB, the cost of the best solution found so far. For a node n of the OR search tree, the lower bound is LB(n) = g(n) + h(n), where g(n) is the cost of the path from the root to n and h(n) underestimates the best cost below n. Prune n if LB(n) ≥ UB.
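A minimal depth-first branch-and-bound sketch, with a trivial h = 0 heuristic standing in for the mini-bucket bound and hypothetical cost factors:

```python
import math

# Depth-first branch and bound minimizing a sum of cost factors.
# factors: list of (scope, function-of-env); a factor is charged once
# all of its scope variables are assigned. h defaults to the trivial
# admissible heuristic 0; a real solver would plug in MB(i) bounds.
def branch_and_bound(variables, domain, factors, h=lambda env: 0):
    best = {"ub": math.inf, "assignment": None}

    def g(env):
        return sum(f(env) for scope, f in factors
                   if all(v in env for v in scope))

    def expand(i, env):
        if i == len(variables):
            cost = g(env)
            if cost < best["ub"]:
                best["ub"], best["assignment"] = cost, dict(env)
            return
        var = variables[i]
        for val in domain:
            env[var] = val
            if g(env) + h(env) < best["ub"]:   # prune if LB >= UB
                expand(i + 1, env)
            del env[var]

    expand(0, {})
    return best["ub"], best["assignment"]

# toy problem: minimize A + |A - B| over 0/1 domains
factors = [(("A",), lambda env: env["A"]),
           (("A", "B"), lambda env: abs(env["A"] - env["B"]))]
ub, assignment = branch_and_bound(["A", "B"], [0, 1], factors)
```

Recomputing g at every node keeps the sketch short; an efficient implementation would charge each factor incrementally as its last scope variable is assigned.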

27 How to Generate Heuristics
The principle of relaxed models:
- Linear relaxation of integer programs
- Mini-bucket elimination
- Bounded directional consistency ideas

28 Generating Heuristics for Graphical Models (Kask and Dechter, 1999)
Given a cost function f(a,b,c,d,e) = f(a) · f(b,a) · f(c,a) · f(e,b,c) · f(d,b,a), define the evaluation function of a partial assignment as the cost of its best extension:
f*(a,e,d) = min_{b,c} f(a,b,c,d,e) = f(a) · min_{b,c} [f(b,a) · f(c,a) · f(e,b,c) · f(d,a,b)] = g(a,e,d) · H*(a,e,d)

29 Generating Heuristics (cont.)
H*(a,e,d) = min_{b,c} f(b,a) · f(c,a) · f(e,b,c) · f(d,a,b)
= min_c [f(c,a) · min_b [f(e,b,c) · f(b,a) · f(d,a,b)]]
≥ min_c [f(c,a) · min_b f(e,b,c) · min_b [f(b,a) · f(d,a,b)]]
= min_b [f(b,a) · f(d,a,b)] · min_c [f(c,a) · min_b f(e,b,c)]
= hB(d,a) · hC(e,a) = H(a,e,d)
Hence f(a,e,d) = g(a,e,d) · H(a,e,d) ≤ f*(a,e,d). The heuristic function H is exactly what is compiled during the preprocessing stage of the Mini-Bucket algorithm.
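The bound direction can be checked numerically: for nonnegative functions, the product of separate minima never exceeds the minimum of the product, which is what makes the mini-bucket heuristic a valid bound.

```python
import random

# Check min_b [p(b) * q(b)] >= (min_b p(b)) * (min_b q(b)) on random
# nonnegative tables: moving the min inside each mini-bucket can only
# move the estimate toward the bounding side.
random.seed(0)
all_hold = True
for _ in range(1000):
    p = [random.random() for _ in range(4)]
    q = [random.random() for _ in range(4)]
    exact = min(pb * qb for pb, qb in zip(p, q))
    bound = min(p) * min(q)
    all_hold = all_hold and bound <= exact
```

The inequality holds because min(p) ≤ p(b*) and min(q) ≤ q(b*) at the b* achieving the exact minimum, and both factors are nonnegative.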

31 Static MBE Heuristics
Given a partial assignment x^p, estimate the cost of the best extension to a full solution. The evaluation function f(x^p) can be computed using the functions recorded by the Mini-Bucket scheme. For the belief network over A, B, C, D, E the buckets are:
B: P(E|B,C), P(D|A,B), P(B|A)
C: P(C|A), hB(E,C)
D: hB(D,A)
E: hC(E,A)
A: P(A), hE(A), hD(A)
and the evaluation function for the partial assignment (a, e, D) is f(a,e,D) = g(a,e) · H(a,e,D) = P(a) · hB(D,a) · hC(e,a); h is admissible.

32 Heuristic Properties
The MB heuristic is monotone and admissible, and is retrieved in linear time. Importantly, heuristic strength can vary with MB(i): a higher i-bound means more preprocessing, a stronger heuristic, and less search, allowing a controlled trade-off between preprocessing and search.

33 Experimental Methodology
Algorithms: BBMB(i) (branch and bound with MB(i)), BBFB(i) (best-first with MB(i)), MBE(i).
Test networks: random coding (Bayesian), CPCS (Bayesian), random CSPs.
Measures of performance: accuracy given a fixed amount of time (how close the cost found is to the optimal solution), and trade-off performance as a function of time.

34 Empirical Evaluation of mini-bucket heuristics, Bayesian networks, coding
Random Coding, K=100, noise=0.32 Random Coding, K=100, noise=0.28

35 Max-CSP experiments (Kask and Dechter, 2000)

36 Dynamic MB Heuristics Rather than pre-compiling, the mini-bucket heuristics can be generated during search Dynamic mini-bucket heuristics use the Mini-Bucket algorithm to produce a bound for any node in the search space (a partial assignment, along the given variable ordering)

37 Dynamic MB and MBTE Heuristics (Kask, Marinescu and Dechter, 2003)
Rather than precompiling, compute the heuristics during search. Dynamic MB: use the Mini-Bucket algorithm to produce a bound for any node during search. Dynamic MBTE: compute heuristics simultaneously for all uninstantiated variables using mini-bucket-tree elimination. MBTE is an approximation scheme defined over cluster trees; it outputs multiple bounds, for each variable and value extension, at once.

38 Cluster Tree Elimination: Example
For the graph over A-G, the cluster tree has clusters 1 = {A,B,C}, 2 = {B,C,D,F}, 3 = {B,E,F}, 4 = {E,F,G}, linked by separators BC, BF, and EF.

39 Mini-Clustering
Motivation: the time and space complexity of Cluster Tree Elimination depends on the induced width w* of the problem; when w* is big, CTE becomes infeasible.
The basic idea: reduce the size of each cluster (the exponent) by partitioning it into mini-clusters with fewer variables. An accuracy parameter i bounds the maximum number of variables in a mini-cluster. The same idea was explored for variable elimination (Mini-Bucket).

40 Idea of Mini-Clustering
Split a cluster into mini-clusters to bound the complexity.

41 Mini-Clustering: Example
The same cluster tree (clusters {A,B,C}, {B,C,D,F}, {B,E,F}, {E,F,G}; separators BC, BF, EF), with each large cluster partitioned into mini-clusters before messages are computed.

42 Mini-Bucket Tree Elimination
Messages are passed along the cluster tree {A,B,C} - {B,C,D,F} - {B,E,F} - {E,F,G} over separators BC, BF, EF, with each cluster's message computed approximately from its mini-clusters.

43 Mini-Clustering
Correctness and completeness: algorithm MC(i) computes a bound (or an approximation) for each variable and each of its values. MBTE is the special case in which the clusters are the buckets of BTE.

44 Branch and Bound w/ Mini-Buckets
BB with static Mini-Bucket Heuristics (s-BBMB) Heuristic information is pre-compiled before search. Static variable ordering, prunes current variable BB with dynamic Mini-Bucket Heuristics (d-BBMB) Heuristic information is assembled during search. Static variable ordering, prunes current variable BB with dynamic Mini-Bucket-Tree Heuristics (BBBT) Heuristic information is assembled during search. Dynamic variable ordering, prunes all future variables

45 Empirical Evaluation
Algorithms: complete (BBBT, BBMB) and incomplete (DLM, GLS, SLS, IJGP, IBP for coding).
Measures: time, accuracy (% exact), number of backtracks, bit error rate (coding).
Benchmarks: coding networks, Bayesian Network Repository, grid networks (N-by-N), random noisy-OR networks, random networks.

46 Real World Benchmarks
Average accuracy and time; 30 samples, 10 observations, 30 seconds.

47 Empirical Results: Max-CSP
Random binary problems ⟨N, K, C, T⟩: N = number of variables, K = domain size, C = number of constraints, T = tightness. Task: Max-CSP.

48 BBBT(i) vs. BBMB(i), N = 100, for i = 2, 3, 4, 5, 6, 7.

49 Searching the Graph; Caching Goods
With the same nine pairwise cost functions, the search tree can be collapsed into a search graph by caching: each variable X has a context, the subset of ancestors whose assignment determines the subproblem below X. Here context(A) = [A], context(B) = [AB], context(C) = [ABC], context(D) = [ABD], context(E) = [AE], context(F) = [F]; nodes with identical context assignments are merged.

50 Searching the Graph; Caching Goods
The same search graph with node values filled in, using the contexts context(A) = [A], context(B) = [AB], context(C) = [ABC], context(D) = [ABD], context(E) = [AE], context(F) = [F]; the root value is again 5.
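Context-based caching can be sketched with memoization keyed on the context assignment; the chain problem and its cost tables below are hypothetical:

```python
from functools import lru_cache

# Context-based caching on a chain A - B - C: the value below each
# variable depends only on its context (here the single preceding
# variable), so subproblems with equal contexts are computed once
# and merged. Cost tables are illustrative.
costs = {("A", "B"): {(0, 0): 2, (0, 1): 1, (1, 0): 4, (1, 1): 3},
         ("B", "C"): {(0, 0): 1, (0, 1): 2, (1, 0): 4, (1, 1): 1}}
order = ["A", "B", "C"]

@lru_cache(maxsize=None)
def value(i, parent_val):
    """Minimal cost of completing order[i:], given order[i-1]'s value."""
    if i == len(order):
        return 0
    pair = (order[i - 1], order[i])
    return min(costs[pair][(parent_val, v)] + value(i + 1, v)
               for v in (0, 1))

best = min(value(1, a) for a in (0, 1))   # also minimize over A
```

With full caching, the number of distinct subproblems is bounded by the number of context assignments rather than by the size of the search tree.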

51 Outline
- Introduction
- Optimization tasks for graphical models
- Solving by inference and search
  - Inference: bucket elimination (dynamic programming); mini-bucket elimination, belief propagation
  - Search: branch and bound and best-first; lower-bounding heuristics; AND/OR search spaces
- Hybrids of search and inference: cutset decomposition; super-bucket scheme

