Dynamic Restarts: Optimal Randomized Restart Policies with Observation
Henry Kautz, Eric Horvitz, Yongshao Ruan, Carla Gomes, and Bart Selman
Outline
Background: heavy-tailed run-time distributions of backtracking search; restart policies
Optimal strategies to improve expected time to solution, using observation of solver behavior during particular runs and a predictive model of solver performance
Empirical results
Backtracking Search
Backtracking search algorithms often exhibit remarkable variability in performance across slightly different problem instances, slightly different heuristics, and different runs of randomized heuristics.
This is problematic for practical applications such as verification, scheduling, and planning.
Heavy-tailed Runtime Distributions
Observation (Gomes 1997): the distributions of runtimes of backtrack solvers often have heavy tails, with infinite mean and variance; the probability of long runs decays by a power law (Pareto-Levy) rather than exponentially (Normal).
[Histogram: most runs are very short, but some are very long.]
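For reference, a power-law (Pareto-Levy) tail can be written as below; the tail exponent alpha is generic, not a value from the talk.

```latex
% Heavy (power-law) tail of the run length T, versus an exponential tail
P(T > t) \;\approx\; C\,t^{-\alpha}
\quad\text{vs.}\quad
P(T > t) \;\approx\; C\,e^{-\lambda t}
% For \alpha \le 1 the mean is infinite; for \alpha \le 2 the variance is infinite.
```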
Formal Models of Heavy-tailed Behavior
Imbalanced tree-search models (Chen 2001): exponentially growing subtrees occur with exponentially decreasing probabilities.
A heavy-tailed runtime distribution can arise in backtrack search for imbalanced models with appropriate parameters p and b, where p is the probability of the branching heuristic making an error and b is the branching factor.
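A minimal sketch of how such a model produces a power-law tail, assuming the simplest form of the imbalanced model (a run costs b^i with probability (1-p)p^i); the exact formulation in Chen et al. may differ:

```latex
P[T = b^{i}] = (1-p)\,p^{i}
\;\Rightarrow\;
P[T \ge b^{i}] = p^{i}
\;\Rightarrow\;
P[T \ge t] = t^{-\alpha}, \qquad \alpha = \frac{\log(1/p)}{\log b}
```

Under this reading, the mean is infinite (a heavy tail) whenever p·b ≥ 1.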
Randomized Restarts
Solution: randomize the systematic solver by adding noise to the heuristic branching (variable choice) function, and cut off and restart the search after some number of steps.
This provably eliminates heavy tails and is effective whenever search stagnates, even if the RTD is not formally heavy-tailed.
Used by all state-of-the-art SAT engines (Chaff, GRASP, BerkMin), e.g., for superscalar processor verification. A minimal sketch of such a restart loop follows.
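A sketch of a fixed-cutoff restart loop; `solve_with_cutoff` is a hypothetical solver interface, not part of the systems named above.

```python
import random

def restart_search(solve_with_cutoff, cutoff, max_restarts=None):
    """Repeatedly run a randomized solver with a fixed cutoff until it succeeds.

    solve_with_cutoff(seed, cutoff) is assumed to run the randomized
    backtracking solver for at most `cutoff` steps and return a solution,
    or None if the cutoff was reached.
    """
    attempt = 0
    while max_restarts is None or attempt < max_restarts:
        seed = random.randrange(2**32)        # new seed => new branching noise
        result = solve_with_cutoff(seed, cutoff)
        if result is not None:                # solved within the cutoff
            return result
        attempt += 1                          # cutoff hit: restart from scratch
    return None
```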
Complete Knowledge of RTD
[Plots: the runtime distribution P(t) vs. t for distribution D, with the optimal fixed cutoff T* marked.]
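These slides illustrate the classical result (Luby, Sinclair, and Zuckerman 1993, the source of the "Luby" policies compared later) that with complete knowledge of the RTD the optimal policy is a fixed cutoff T*. For reference, with F the cumulative RTD, the expected time to solution under a cutoff T is:

```latex
E[\text{time with cutoff } T]
  \;=\; \frac{\int_{0}^{T}\bigl(1 - F(t)\bigr)\,dt}{F(T)},
\qquad
T^{*} \;=\; \arg\min_{T}\; \frac{\int_{0}^{T}\bigl(1 - F(t)\bigr)\,dt}{F(T)}
```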
No Knowledge of RTD
Open cases: partial knowledge of the RTD (CP 2002); additional knowledge beyond the RTD.
Example: Runtime Observations
Idea: use observations of the early progress of a run to induce finer-grained RTDs.
[Plot: combined RTD D with component RTDs D1 and D2 and cutoffs T1, T2, T*.]
Example: Runtime Observations
What is the optimal policy, given the original and component RTDs and a classification of each run?
Lazy: use the static optimal cutoff T* for the combined RTD D.
[Plot: combined RTD D and components D1, D2 with the single cutoff T*.]
Example: Runtime Observations
Naive: use a static optimal cutoff for each component RTD (T1* for D1, T2* for D2).
[Plot: component RTDs D1 and D2 with their separate cutoffs T1* and T2*.]
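As a restatement of the setup (not a formula from the slides), the combined RTD is a mixture of the component RTDs, weighted by the probability that a run is classified into each component:

```latex
D(t) \;=\; p_{1}\,D_{1}(t) \;+\; p_{2}\,D_{2}(t),
\qquad p_{1} + p_{2} = 1
```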
Results
Method for inducing component distributions using Bayesian learning on traces of the solver (resampling and runtime observations)
Optimal policy where an observation assigns each run to a component distribution
Conditions under which the optimal policy prunes one (or more) distributions
Empirical demonstration of speedup
I. Learning to Predict Solver Performance
Formulation of the Learning Problem
Consider a burst of evidence over an observation horizon and learn a runtime predictive model using supervised learning (Horvitz et al., UAI 2001).
[Figure: a run trace with the observation horizon marked; runs are labeled Long or Short relative to the median run time.]
Runtime Features
The solver is instrumented to record at each choice (branch) point:
SAT and CSP generic features: number of free variables, depth of tree, amount of unit propagation, number of backtracks, …
CSP domain-specific features (QCP): degree of balance of uncolored squares, …
Statistics are gathered over 10 choice points: initial / final / average values, plus 1st and 2nd derivatives (a sketch of such a summary follows).
SAT: 127 variables; CSP: 135 variables.
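A minimal sketch of turning a per-choice-point trace into fixed-length summary features; the exact set of statistics (here, averages of the derivatives) is an assumption for illustration.

```python
import numpy as np

def summarize_trace(trace):
    """Summarize per-choice-point measurements into fixed-length run features.

    `trace` is a hypothetical array of shape (num_choice_points, num_raw_features):
    one row per choice point, with columns such as free variables, tree depth,
    unit propagations, and backtracks.
    """
    first_deriv = np.diff(trace, axis=0)          # change between consecutive choice points
    second_deriv = np.diff(first_deriv, axis=0)   # change of the change
    return np.concatenate([
        trace[0],                    # initial values
        trace[-1],                   # final values
        trace.mean(axis=0),          # average values
        first_deriv.mean(axis=0),    # 1st-derivative summary
        second_deriv.mean(axis=0),   # 2nd-derivative summary
    ])
```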
Learning a Predictive Model
Training data: samples from the original RTD, each recorded as (summary features, length of run).
Learn a decision tree that predicts whether the current run will complete in less than the median run time.
Accuracy: 65%-90%. A sketch of this step follows.
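A minimal sketch of this step using scikit-learn; the file names, feature layout, and tree depth are assumptions for illustration, not details from the talk.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: one row of summary features per training run,
# plus that run's length (number of choice points to solution).
features = np.load("run_features.npy")     # shape: (num_runs, num_features); assumed file
run_lengths = np.load("run_lengths.npy")   # shape: (num_runs,); assumed file

# Label each run as "short" (1) if it finished below the median run time, else "long" (0).
median = np.median(run_lengths)
labels = (run_lengths < median).astype(int)

# Fit a small decision tree that predicts short vs. long from the early-run features.
tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(features, labels)
print("training accuracy:", tree.score(features, labels))
```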
Generating Distributions by Resampling the Training Data
Reasons: the predictive models are imperfect, and analyses that include a layer of error analysis for the imperfect model are cumbersome.
Resampling the training data: use the inferred decision trees to define different classes, and relabel the training data according to these classes.
Creating Labels
The decision tree reduces all of the observed features to a single evidential feature F. F can be:
Binary valued: indicates the prediction (shorter than the median runtime?)
Multi-valued: indicates the particular leaf of the decision tree that is reached when the trace of a partial run is classified
A sketch of the relabeling follows.
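Continuing the earlier sketch (the variable names come from it, not from the talk), the multi-valued feature F can be taken to be the decision-tree leaf reached by each run, and the training runs regrouped by that label to form component RTDs.

```python
from collections import defaultdict

# Multi-valued F: the index of the tree leaf reached by each run's early-trace features.
leaf_ids = tree.apply(features)

# Group observed run lengths by leaf to get an empirical component RTD per class.
component_rtds = defaultdict(list)
for leaf, runtime in zip(leaf_ids, run_lengths):
    component_rtds[leaf].append(runtime)

# Binary-valued F instead: the predicted class, shorter (1) vs. longer (0) than the median.
binary_f = tree.predict(features)
```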
Result
The decision tree can be used to precisely classify new runs as random samples from the induced RTDs.
[Plot: P(t) vs. t showing the combined RTD D and, after the observation is made, the RTDs conditioned on the observed feature F; the median is marked.]
II. Creating Optimal Control Policies
Control Policies
Problem statement: a process generates runs randomly from a known RTD; after a run has completed K steps, we may observe features of the run; we may stop a run at any point.
Goal: minimize the expected time to solution.
Note: using induced component RTDs implies that runs are statistically independent, so the optimal policy is stationary. A sketch of such an observation-based policy follows.
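A sketch of an observation-based (dynamic) restart loop under this problem statement; the solver interface, classifier, and parameters are illustrative assumptions, not the paper's implementation.

```python
import random

def dynamic_restart(run_for, classify, k_steps, cutoff_by_class, pruned_classes):
    """Observe each run after K steps, then prune it or continue to its class cutoff.

    run_for(seed, start, end) is a hypothetical solver interface: it continues the
    run identified by `seed` from step `start` to step `end` and returns
    (solution_or_None, trace_of_features).
    classify(trace) maps the early trace to a class label, e.g. a decision-tree leaf.
    """
    while True:
        seed = random.randrange(2**32)
        # Phase 1: run K steps to gather the observation.
        solution, trace = run_for(seed, 0, k_steps)
        if solution is not None:
            return solution
        cls = classify(trace)
        if cls in pruned_classes:
            continue                                  # prune: restart immediately
        # Phase 2: continue the same run up to the cutoff chosen for its class.
        solution, _ = run_for(seed, k_steps, cutoff_by_class[cls])
        if solution is not None:
            return solution
        # Cutoff reached without a solution: fall through and restart.
```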
Optimal Policies
[The optimal policy is stated as formulas on the slide.]
Straightforward generalization to multi-valued features.
Case (2): Determining Optimal Cutoffs
Optimal Pruning
Runs from component D2 should be pruned (terminated) immediately after the observation when:
[pruning condition given as a formula on the slide]
III. Empirical Evaluation
Backtracking Problem Solvers
Randomized SAT solver: Satz-Rand, a randomized version of Satz (Li 1997); DPLL with 1-step lookahead; randomization with a noise parameter for increasing variable choices.
Randomized CSP solver: a specialized CSP solver for QCP built with the ILOG constraint programming library; variable choice by a variant of the Brelaz heuristic.
Domains
Quasigroup With Holes; Graph Coloring; Logistics Planning (SATPLAN)
Dynamic Restart Policies
Binary dynamic policies: runs are classified as having either a short or a long run-time distribution.
N-ary dynamic policies: each leaf in the decision tree is considered to define a distinct distribution.
Policies for Comparison
Luby optimal fixed cutoff: for the original combined distribution.
Luby universal policy (a sketch of its cutoff sequence follows).
Binary naive policy: select distinct, separately optimal fixed cutoffs for the long and for the short distributions.
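For reference, the universal policy of Luby, Sinclair, and Zuckerman uses the cutoff sequence 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ... (times some base unit of steps, left implicit here); a minimal sketch:

```python
def luby(i):
    """Return the i-th term (1-indexed) of the Luby universal restart sequence."""
    k = 1
    while (1 << k) - 1 < i:              # find k with 2^(k-1) - 1 < i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:
        return 1 << (k - 1)              # i = 2^k - 1: the term is 2^(k-1)
    return luby(i - (1 << (k - 1)) + 1)  # otherwise recurse into the repeated prefix

print([luby(i) for i in range(1, 16)])
# -> [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]
```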
Illustration of Cutoffs
[Plot: combined RTD D and components D1, D2, showing the naive cutoffs T1*, T2*, the fixed optimal cutoff T*, the observation point, and the dynamic cutoffs T1**, T2**.]
Comparative Results
Expected runtime (choice points):

Policy               QCP (CSP)   QCP (Satz)   Graph Coloring (Satz)   Planning (Satz)
Dynamic n-ary            3,295        8,962                   9,499             5,099
Dynamic binary           5,220       11,959                  10,157             5,366
Fixed optimal            6,534       12,551                  14,669             6,402
Binary naive            17,617       12,055                  14,669             6,962
Universal               12,804       29,320                  38,623            17,359
Median (no cutoff)      69,046       48,244                  39,598            25,255

Improvement of the dynamic policies over the Luby fixed optimal cutoff policy is 40-65%.
Cutoffs: Graph Coloring (Satz)
Dynamic n-ary: 10, 430, 10, 345, 10, 10
Dynamic binary: 455, 10
Binary naive: 342, 500
Fixed optimal: 363
Discussion
Most optimal policies turned out to prune runs.
Policy construction is independent of run classification, so other learning techniques may be used; the approach does not require highly accurate prediction.
Widely applicable.
Limitations
The analysis does not apply when runs are statistically dependent.
Example: we begin with two or more RTDs (e.g., of SAT and UNSAT formulas); the environment flips a coin to choose an RTD and then always samples from that RTD. We do not get to see the coin flip, so each unsuccessful run gives us information about it.
The Dependent Case
The dependent case is much harder to solve: see Ruan et al., CP-2002, "Restart Policies with Dependence among Runs: A Dynamic Programming Approach."
Future work: using RTDs of ensembles to reason about RTDs of individual problem instances; learning RTDs on the fly (reinforcement learning).
Big Picture
[Diagram linking Problem Instances, Solver, Learning / Analysis, and Predictive Model via static features, dynamic features, runtime, and control / policy.]