On the Potential of Automated Algorithm Configuration
Frank Hutter, University of British Columbia, Vancouver, Canada


Motivation for automated tuning
- Algorithm design: every new application or algorithm restarts parameter tuning from scratch, a waste of time for both researchers and practitioners.
- Algorithm analysis: comparability. Is algorithm A faster than algorithm B only because more time was spent tuning it?
- Algorithm use in practice: users want to solve THEIR problems fast, not necessarily the ones the developers used for parameter tuning.

Automated tuning: related work
- Best default parameter settings for an instance set:
  - Full factorial / fractional factorial experimental design
  - Racing algorithms [Birattari, Stuetzle, Paquete, and Varrentrapp, GECCO-02]
  - Sequential Parameter Optimization: response-surface based [Bartz-Beielstein, 2006]
  - CALIBRA: experimental design and local search [Adenso-Diaz and Laguna, OR Journal, 2006]
- Best parameter setting per instance:
  - Predict runtime under each parameter setting, pick the best one [Hutter, Hamadi, Hoos, and Leyton-Brown, CP-06]
- Best sequence of parameter settings during an algorithm run:
  - Reactive search [Battiti and Brunato, 2007]
  - Reinforcement learning [Lagoudakis and Littman, AAAI-00]

The algorithm configuration problem
Given:
- a parametric algorithm A with possible parameter configurations C,
- an instance distribution D,
- a cost function assigning a cost to each run of A on an instance of D,
- a function combining multiple costs.
Find: a parameter configuration c in C such that algorithm A achieves its lowest overall combined cost on instances from D.

Example: Spear (an algorithm by Domagoj Babic)
- A new tree search algorithm for SAT.
- Design choices made explicit as parameters: 26 parameters, including many categorical and continuous ones.
- Benchmarks: SAT-encoded software verification problems.
- Objective: minimize mean runtime across instances.

Training vs. test instances
- Disjoint training and test instances (50-50 split); we want generalization to unseen instances.
- Standard in machine learning; unfortunately not yet standard in optimization.

Stochastic optimization problem
- A single instance gives only a noisy estimate of a configuration's cost.
- Reduce the noise by performing multiple runs.

Automated algorithm configuration
- Solvers and benchmark data sets.
- Search in parameter configuration space.
- Evaluation of a parameter configuration c: run the algorithm with configuration c on a number of instances.
- Comparison of parameter configurations: with respect to their performance on the same instances and with the same random seeds (variance reduction).

Conclusions
- Do not tune your parameters manually anymore: the automated approach is easier, more principled, and yields better results.
- Large potential for improvement over the default parameter configuration: large speedups in our domains, even based on simple random sampling.
- The potential is even higher for high-dimensional tuning problems: humans cannot grasp high-dimensional parameter interactions well.

Analysis of tuning scenarios based on random sampling
- Performance variation across random configurations; training performances compared to the default.
- Training performance of random sampling (keeping the incumbent); variation across different orderings of the configurations.
- Hardness variation across instances; CDFs for five random configurations and the default.
- Test performance of random sampling (N = 100 training instances); variation across different orderings of the configurations.
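The random-sampling baseline described above can be sketched in a few lines: draw random configurations, evaluate each on the same fixed (instance, seed) pairs so that comparisons are blocked for variance reduction, and keep the incumbent (best configuration seen so far). This is an illustrative reconstruction, not the authors' code; all function names (`run_algorithm`, `sample_config`, etc.) are hypothetical.

```python
import random
import statistics

def evaluate(run_algorithm, config, blocks):
    """Mean cost of `config` over a fixed list of (instance, seed) blocks.
    Using identical blocks for every configuration reduces variance when
    comparing configurations (blocking on instances and random seeds)."""
    return statistics.mean(run_algorithm(config, inst, s) for inst, s in blocks)

def random_sampling_configurator(run_algorithm, sample_config, instances,
                                 n_configs=100, runs=5, seed=0):
    """Sketch of random sampling with an incumbent. `run_algorithm(c, inst, s)`
    returns the cost of one run; `sample_config(rng)` draws a random
    configuration. Names and defaults are illustrative."""
    rng = random.Random(seed)
    # Fix the (instance, seed) pairs once, so every configuration is
    # evaluated on exactly the same runs.
    blocks = [(inst, rng.randrange(2**31))
              for inst in instances for _ in range(runs)]
    incumbent, best_cost = None, float("inf")
    for _ in range(n_configs):
        c = sample_config(rng)
        cost = evaluate(run_algorithm, c, blocks)
        if cost < best_cost:  # keep the incumbent
            incumbent, best_cost = c, cost
    return incumbent, best_cost
```

Here the number of runs per configuration is fixed for simplicity; adaptive configurators instead spend more runs on promising configurations.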
Solvers
- (Dynamic) local search for SAT: SAPS, 4 continuous parameters.
- Tree search for SAT: Spear, 26 categorical/binary/continuous/integer parameters.
- Branch and cut for MIP: CPLEX, 80 categorical/ordinal/continuous/integer parameters.

Benchmark data sets
- SAT-encoded quasigroup completion.
- SAT-encoded graph colouring based on small-world graphs.
- Winner determination in combinatorial auctions.

Further results
- Test performance of random sampling for different sizes of the training benchmark set.
- Percentage of 'correct' pairwise comparisons of parameter configurations, for varying training cutoff times and numbers of training instances N.

Comparison to iterated local search
- BasicILS and FocusedILS from [Hutter, Hoos, and Stuetzle, AAAI-07].
- Both random sampling and ILS are enhanced by a novel pruning technique.

Future work
- Continuous parameters (currently discretized).
- Statistical tests (cf. racing).
- Learning approaches (sequential experimental design).
- Per-instance tuning.
- Automated algorithm design.
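The iterated-local-search side of that comparison can be illustrated with a generic ILS over a discretized configuration space: first-improvement descent in a one-exchange neighbourhood (change one parameter value at a time), followed by perturbation and restart from the incumbent. This is a sketch in the spirit of BasicILS, not the published algorithm; the names and parameter choices are illustrative.

```python
import random

def ils_configure(evaluate, domains, n_iterations=50, perturb_strength=3, seed=0):
    """Generic iterated local search over a discretized configuration space.
    `domains` maps each parameter name to its list of allowed values;
    `evaluate(config)` returns the (estimated) cost of a configuration dict."""
    rng = random.Random(seed)

    def neighbours(config):
        # One-exchange neighbourhood: change a single parameter's value.
        for p, values in domains.items():
            for v in values:
                if v != config[p]:
                    yield {**config, p: v}

    def local_search(config, cost):
        improved = True
        while improved:
            improved = False
            for n in neighbours(config):
                c = evaluate(n)
                if c < cost:
                    config, cost, improved = n, c, True
                    break  # first-improvement descent
        return config, cost

    # Start from a random configuration and descend to a local optimum.
    incumbent = {p: rng.choice(vs) for p, vs in domains.items()}
    incumbent, inc_cost = local_search(incumbent, evaluate(incumbent))
    for _ in range(n_iterations):
        # Perturb the incumbent with a few random one-exchange steps,
        # then descend again; accept only strict improvements.
        config = dict(incumbent)
        for _ in range(perturb_strength):
            p = rng.choice(list(domains))
            config[p] = rng.choice(domains[p])
        config, cost = local_search(config, evaluate(config))
        if cost < inc_cost:
            incumbent, inc_cost = config, cost
    return incumbent, inc_cost
```

In the talk's setting, `evaluate` would itself run the target algorithm on training instances (as in the random-sampling sketch above) rather than compute a closed-form cost.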