1
Hydra-MIP: Automated Algorithm Configuration and Selection for Mixed Integer Programming
Lin Xu, Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown
Department of Computer Science, University of British Columbia
2
Solving MIP more effectively
– Portfolio-based algorithm selection (SATzilla) [Xu et al., 2007; 2008; 2009]
– Where are the solvers? Parameter settings of a single solver (e.g., CPLEX)
– How to find good settings? Automated algorithm configuration tools [Hutter et al., 2007; 2009]
– How to find good candidates for algorithm selection? Algorithm configuration with a dynamic performance metric [Xu et al., 2010]
3
Hydra
– Portfolio-based algorithm selection. Some particularly related work: [Rice, 1976]; [Leyton-Brown, Nudelman & Shoham, 2003; 2009]; [Guerri & Milano, 2004]; [Nudelman, Leyton-Brown, Shoham & Hoos, 2004]
– Automated algorithm configuration. Some particularly related work: [Gratch & Dejong, 1992]; [Balaprakash, Birattari & Stuetzle, 2007]; [Hutter, Babic, Hoos & Hu, 2007]; [Hutter, Hoos, Stuetzle & Leyton-Brown, 2009]
4
Outline
– Improve algorithm selection
  – SATzilla
  – Drawback of SATzilla
  – New SATzilla with cost-sensitive classification
  – Results
– Reduce the construction cost
  – Hydra
  – The cost
  – Make full use of configuration
  – Results
– Conclusion
5
SATzilla: Portfolio-Based Algorithm Selection [Xu, Hutter, Hoos, Leyton-Brown, 2007; 2008]
– Given:
  – a training set of instances
  – a performance metric
  – candidate solvers
  – a portfolio builder (incl. instance features)
– Training:
  – collect performance data
  – the portfolio builder learns predictive models
– At runtime:
  – predict performance
  – select a solver
[Diagram: the Training Set, Metric, and Candidate Solvers feed the Portfolio Builder, which produces a Portfolio-Based Algorithm Selector; a Novel Instance is mapped to a Selected Solver]
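To make the pipeline concrete, here is a minimal sketch of regression-based selection in the SATzilla style (not the authors' code). The feature matrix, runtime matrix, and the choice of Ridge as the linear model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def train_selector(X, runtimes):
    """Fit one runtime-prediction model per candidate solver.
    X: (n_instances, n_features) instance features;
    runtimes: (n_instances, n_solvers) observed runtimes."""
    return [Ridge().fit(X, runtimes[:, s]) for s in range(runtimes.shape[1])]

def select_solver(models, x_new):
    """At runtime: predict every solver's runtime and pick the lowest."""
    preds = [m.predict(x_new.reshape(1, -1))[0] for m in models]
    return int(np.argmin(preds))
```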
6
Drawback of SATzilla
– Algorithm selection in SATzilla is based on regression:
  – predict each solver's performance independently
  – select the solver with the best prediction
  – i.e., classification via regression
– Goal of regression: accurately predict each solver's performance
– Goal of algorithm selection: pick solvers on a per-instance basis so as to minimize an overall performance metric
– Better regression does not necessarily mean better algorithm selection (see the toy example below)
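A toy illustration of this mismatch, with hypothetical numbers: the model with far lower prediction error still picks the worse solver, because only the predicted ranking matters for selection.

```python
true_runtimes = {"A": 10.0, "B": 11.0}   # solver A is truly faster

model1_preds = {"A": 14.0, "B": 16.0}    # errors of 4-5 s, but ranks A < B: selects A (correct)
model2_preds = {"A": 10.6, "B": 10.4}    # errors under 1 s, but ranks B < A: selects B (wrong)

pick = min(model2_preds, key=model2_preds.get)
assert pick == "B"                        # accurate regression, bad selection
```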
7
Cost-sensitive classification for SATzilla
– Loss function: the performance difference between solvers
  – punish misclassifications in direct proportion to their impact on portfolio performance
  – no need to predict runtime
– Implementation: binary cost-sensitive classifiers, realized as decision forests (DF)
  – build one DF for each pair of candidate solvers; it casts one vote for the better solver
  – the solver with the most votes is selected (see the sketch below)
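A minimal sketch of this pairwise voting scheme (not the authors' implementation): each pairwise random forest is made cost-sensitive by weighting every training instance with the runtime difference between the two solvers, so errors on near-ties cost almost nothing.

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier

def train_pairwise_forests(X, runtimes):
    """One cost-sensitive forest per solver pair (i, j)."""
    forests = {}
    for i, j in combinations(range(runtimes.shape[1]), 2):
        labels = (runtimes[:, j] < runtimes[:, i]).astype(int)   # 1 if solver j is faster
        weights = np.abs(runtimes[:, i] - runtimes[:, j])        # loss a misclassification incurs
        forests[(i, j)] = RandomForestClassifier(random_state=0).fit(
            X, labels, sample_weight=weights)
    return forests

def select_by_vote(forests, x_new, n_solvers):
    """Each pairwise forest votes for its predicted winner; most votes wins."""
    votes = np.zeros(n_solvers)
    for (i, j), f in forests.items():
        winner = j if f.predict(x_new.reshape(1, -1))[0] == 1 else i
        votes[winner] += 1
    return int(np.argmax(votes))
```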
8
SATzilla DF performance (LR baseline)

Data set | Model | Average time | Solved percentage | Time speedup
---------|-------|--------------|-------------------|-------------
RAND     | LR    | 177          | 99.1%             |
HAND     | LR    | 549          | 92.9%             |
INDU     | LR    | 545          | 92.1%             |

LR: linear regression, as used in previous SATzilla; DF: cost-sensitive decision forest.
9
SATzilla DF performance

Data set | Model | Average time | Solved percentage | Time speedup
---------|-------|--------------|-------------------|-------------
RAND     | LR    | 177          | 99.1%             |
RAND     | DF    | 164          | 99.3%             | 1.08×
HAND     | LR    | 549          | 92.9%             |
HAND     | DF    | 475          | 94.4%             | 1.16×
INDU     | LR    | 545          | 92.1%             |
INDU     | DF    | 487          | 94.4%             | 1.12×

LR: linear regression, as used in previous SATzilla; DF: cost-sensitive decision forest.
10
MIPzilla DF performance (LR baseline)

Data set   | Model | Average time | Solved percentage | Time speedup
-----------|-------|--------------|-------------------|-------------
           | LR    | 39.4         | 100%              |
           | LR    | 102.6        | 100%              |
ISAC (new) | LR    | 2.36         | 100%              |
MIX        | LR    | 56           | 99.6%             |
11
MIPzilla DF performance

Data set   | Model | Average time | Solved percentage | Time speedup
-----------|-------|--------------|-------------------|-------------
           | LR    | 39.4         | 100%              |
           | DF    | 39.3         | 100%              | 1.00×
           | LR    | 102.6        | 100%              |
           | DF    | 98.8         | 100%              | 1.04×
ISAC (new) | LR    | 2.36         | 100%              |
ISAC (new) | DF    | 2.00         | 100%              | 1.18×
MIX        | LR    | 56           | 99.6%             |
MIX        | DF    | 48           | 99.6%             | 1.05×
12
Hydra Procedure: Iteration 1
[Diagram: the Training Set, Metric, and Parameterized Algorithm feed the Algorithm Configurator, which yields a Candidate Solver; the Candidate Solver Set feeds the Portfolio Builder, which produces a Portfolio-Based Algorithm Selector]
13
Hydra Procedure: Iteration 2
[Diagram: as in iteration 1, with the newly configured Candidate Solver added to the Candidate Solver Set]
14
Hydra Procedure: Iteration 3
[Diagram: as in iteration 1, with the Candidate Solver Set grown by one further configured solver]
15
Hydra Procedure: After Termination
Output: [Diagram: the Portfolio-Based Algorithm Selector; a Novel Instance is mapped to a Selected Solver]
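The loop can be summarized in a high-level sketch (pseudocode-style, not the authors' implementation). The callables `configure`, `build_selector`, and `evaluate` are hypothetical stand-ins for the algorithm configurator, the portfolio builder, and the performance metric; the inner `dynamic_cost` reflects the dynamic performance metric of [Xu et al., 2010], under which a configuration is rewarded only where it improves on the current portfolio.

```python
def hydra(configure, build_selector, evaluate, training_set, n_iterations):
    """configure(cost_fn, instances) -> one new configuration;
    build_selector(solvers, instances) -> algorithm selector;
    evaluate(solver, instance) -> performance (e.g., penalized runtime)."""
    candidates, selector = [], None
    for _ in range(n_iterations):
        def dynamic_cost(config, instance):
            # Score a configuration by the better of itself and the current
            # portfolio, so configuration targets the portfolio's weak spots.
            cost = evaluate(config, instance)
            if candidates:
                cost = min(cost, min(evaluate(c, instance) for c in candidates))
            return cost
        candidates.append(configure(dynamic_cost, training_set))
        selector = build_selector(candidates, training_set)
    return selector
```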
16
We are wasting configuration results!
[Diagram: the Training Set, Metric, and Parameterized Algorithm feed the Algorithm Configurator, yet only a single Candidate Solver is kept from each run]
17
Make full use of configurations
[Diagram: the same configurator run now contributes k Candidate Solvers instead of one]
18
Make full use of configurations
– Advantages:
  – k solvers instead of 1 are added in each iteration (good for algorithm selection)
  – no validation step is needed in configuration (saves time)
– Disadvantage:
  – runtime data must be collected for more solvers (costs time)
– In our experiments, the time saved roughly offset the extra cost (k = 4); a sketch of this variant follows below
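A sketch of this variant under the same hypothetical interface as before: `configure_k` stands in for k configurator runs whose incumbents are all kept, skipping the validation step.

```python
def hydra_k(configure_k, build_selector, training_set, n_iterations, k=4):
    """configure_k(current_candidates, instances, k) -> k new configurations
    (e.g., the incumbents of k independent configurator runs)."""
    candidates, selector = [], None
    for _ in range(n_iterations):
        # Add k configurations per iteration instead of one validated incumbent.
        candidates.extend(configure_k(candidates, training_set, k))
        selector = build_selector(candidates, training_set)
    return selector
```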
19
Experimental Setup: Hydra's Inputs
– Portfolio builders: MIPzilla_{LR} (SATzilla for MIP) [Xu et al., 2008]; MIPzilla_{DF} (MIPzilla using cost-sensitive DF)
– Parameterized solver: CPLEX 12.1
– Algorithm configurator: FocusedILS 2.4.3 [Hutter, Hoos, Leyton-Brown, 2009]
– Performance metric: penalized average runtime (PAR)
– Instance sets: 4 heterogeneous sets built by combining homogeneous subsets [Hutter et al., 2010]; [Kadioglu et al., 2010]; [Ahmadizadeh et al., 2010]
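For reference, penalized average runtime in its common PAR10 form; the slide does not state the penalty factor, so the factor of 10 here is an assumption.

```python
def penalized_average_runtime(runtimes, cutoff, penalty=10):
    """PAR: runs that hit the cutoff are counted as penalty * cutoff seconds."""
    return sum(t if t < cutoff else penalty * cutoff for t in runtimes) / len(runtimes)

# Example: two solved instances and one timeout at a 300 s cutoff.
print(penalized_average_runtime([12.0, 250.0, 300.0], cutoff=300))  # (12 + 250 + 3000) / 3
```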
20
Three versions of Hydra for MIP
– Hydra_{LR,1}: original Hydra for MIP [Xu et al., 2010]
– Hydra_{DF,1}: Hydra for MIP with Improvement I (cost-sensitive DF-based selection)
– Hydra_{DF,4}: Hydra for MIP with Improvements I and II (additionally, k = 4 configurations per iteration)
21
Hydra-MIP performance on MIX
– Hydra_{DF,*} performs better than Hydra_{LR,1}
– Hydra_{DF,4} performs similarly to Hydra_{DF,1}, but converges faster
– Performance is close to that of the Oracle and MIPzilla_{DF}
22
Conclusion
– Cost-sensitive-classification-based SATzilla outperforms the original SATzilla
– The new Hydra-MIP outperforms the CPLEX default, algorithm configuration alone, and the original Hydra on four heterogeneous MIP sets
– Technical contributions:
  – cost-sensitive classification yields better algorithm selection for SAT and MIP
  – using multiple configurations per iteration speeds up the convergence of Hydra