Schreiber, Yevgeny. Value-Ordering Heuristics: Search Performance vs. Solution Diversity. In: D. Cohen (Ed.) CP 2010, LNCS 6308, pp. 429-444. Springer, Heidelberg (2010).


Presented by: Michael Gould, CS 275, December 7, 2010

Introduction Problem: given a CSP, generate a large number of diverse solutions as fast as possible. Trade-off: solution search performance (time) vs. the diversity of the generated solutions. Tool: value-ordering heuristics are used to navigate this trade-off.

General Solution Search Method Input: a CSP (X, D, C).
(a) Repeatedly select an unassigned variable x.
(b) Try to assign to x one of the remaining values in its domain.
(c) Propagate every constraint c that involves x by removing the conflicting values from the domains of the other unassigned variables.
(d) Backtrack if necessary.
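The steps above can be sketched as a recursive backtracking search with constraint propagation. This is an illustrative sketch, not the paper's implementation; the `order_values` hook is where a value-ordering heuristic plugs in, and constraints are assumed to be binary and stored in both orientations:

```python
import copy

def backtrack(assignment, domains, constraints, order_values):
    """Backtracking search for a binary CSP.
    constraints: dict mapping (x, y) -> set of allowed (vx, vy) pairs,
    with each constraint stored in both orientations."""
    unassigned = [x for x in domains if x not in assignment]
    if not unassigned:
        return assignment  # every variable is assigned: a solution
    # (a) Select an unassigned variable (here: minimal remaining domain).
    x = min(unassigned, key=lambda v: len(domains[v]))
    # (b) Try the remaining values in the order chosen by the heuristic.
    for value in order_values(x, domains[x]):
        new_domains = copy.deepcopy(domains)
        new_domains[x] = [value]
        # (c) Propagate: remove conflicting values from other unassigned domains.
        ok = True
        for y in new_domains:
            if y == x or y in assignment:
                continue
            allowed = constraints.get((x, y))
            if allowed is not None:
                new_domains[y] = [w for w in new_domains[y] if (value, w) in allowed]
                if not new_domains[y]:
                    ok = False  # domain wipe-out: this value fails
                    break
        if ok:
            result = backtrack({**assignment, x: value},
                               new_domains, constraints, order_values)
            if result is not None:
                return result
    # (d) No value worked: backtrack.
    return None
```

With `order_values = lambda x, dom: list(dom)` this degenerates to lexicographic value ordering; the heuristics discussed below replace that hook.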

Value-Ordering Heuristics A value-ordering heuristic determines which value the backtracking search algorithm selects for a given unassigned variable. Example: survivors-first heuristics use simple statistics accumulated during the search to select the value that has been involved in the fewest conflicts.

Solution Diversity Solution diversity is the requirement that the multiple solutions we find be as "different" from each other as possible. Solution diversity is not the same as solution distribution: consider a solution space composed of a small subset S1 of solutions in which every variable is assigned a different value, and a much larger set S2 in which only a single variable differs between solutions. A uniform sample from this space would consist mostly of near-identical S2 solutions, so a well-distributed sample is not necessarily a diverse one.

Solution Diversity (continued) There is a trade-off between the search performance and the solution diversity. The MaxDiverse-kSet problem is to compute k maximally diverse solutions of a given CSP. One measure of the distance between a pair of solutions is the Hamming distance: given solutions s = (s_1, ..., s_n) and s' = (s'_1, ..., s'_n), we define H_i(s, s') = 1 if s_i ≠ s'_i and 0 otherwise, for 1 ≤ i ≤ n. The Hamming distance is the sum of all the H_i values.
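The Hamming distance, and the average pairwise distance commonly used to score the diversity of a whole solution set, can be computed directly; a minimal sketch:

```python
from itertools import combinations

def hamming_distance(s, s_prime):
    """Number of positions i with s_i != s'_i (solutions as equal-length tuples)."""
    assert len(s) == len(s_prime)
    return sum(1 for a, b in zip(s, s_prime) if a != b)

def average_pairwise_distance(solutions):
    """Mean Hamming distance over all pairs: one way to score a set's diversity."""
    pairs = list(combinations(solutions, 2))
    return sum(hamming_distance(s, t) for s, t in pairs) / len(pairs)
```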

Automatic Test Generation Problem (ATGP) A real-world example of a CSP where solution diversity is important. The problem is to automatically generate, for a given hardware specification, a valid test consisting of a long sequence of instructions. The tests must be diverse in order to trigger as many different hardware events as possible. Time matters: thousands of tests have to be generated even for a small subset of a modern hardware specification. What counts is the overall quality of the whole test base: the size and diversity of the tests.

The RANDOM Heuristic Simply selects a uniformly random value from the domain of the variable. It often achieves relatively high solution diversity while running relatively fast, for two reasons: in these problems a relatively large fraction of the values in every domain can be selected without causing a conflict, and the act of randomly selecting a value is itself very cheap.
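As a baseline, RANDOM amounts to a one-liner; a sketch (the function name is illustrative):

```python
import random

def random_value_ordering(var, domain, rng=random):
    """RANDOM heuristic: select a uniformly random value from the domain."""
    return rng.choice(list(domain))
```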

(1) The LeastFails Heuristic

(2) BestSuccessRatio Heuristic

(3) & (4) Probabilistic Versions

Heuristics (5) and (6)
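The formulas for heuristics (1)-(6) appeared as figures on the original slides and are not in this transcript. As a hedged reconstruction, the following sketch shows how survivors-first counters could drive LeastFails and BestSuccessRatio; the class name and exact scoring expressions are assumptions, not the paper's definitions:

```python
from collections import defaultdict

class SurvivorsFirstStats:
    """Per-(variable, value) counters accumulated during the search.
    Plausible reconstruction of the survivors-first bookkeeping; the
    paper's actual formulas may differ."""
    def __init__(self):
        self.fails = defaultdict(int)      # assignments that led to a conflict
        self.successes = defaultdict(int)  # assignments that survived propagation

    def record(self, var, value, failed):
        if failed:
            self.fails[(var, value)] += 1
        else:
            self.successes[(var, value)] += 1

    def least_fails(self, var, domain):
        # (1) LeastFails: prefer the value with the fewest recorded failures.
        return min(domain, key=lambda v: self.fails[(var, v)])

    def best_success_ratio(self, var, domain):
        # (2) BestSuccessRatio: prefer the highest success/(success+fail) ratio.
        def ratio(v):
            s, f = self.successes[(var, v)], self.fails[(var, v)]
            return s / (s + f) if s + f else 0.0
        return max(domain, key=ratio)
```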

Initial Ordering of Values No information is available at the beginning of the search for a first solution, so all the values in each domain are initially unordered. The heuristics can only order values that have already been tried during the search. For the values whose order cannot yet be determined, a selection probability must still be chosen. This choice defines the conservativeness of a heuristic: the more conservative the heuristic, the lower the probability of selecting an unordered value. The conservativeness C(H) of a heuristic H can be defined separately for each variable whose domain contains at least one unordered value. A low C(H) value leads to initially random behavior, but it prevents situations where many unordered values are never selected.

Heuristic Parameters α: controls the conservativeness of a heuristic; it represents the "initial score" of an unordered value u. β: used in the probabilistic heuristics to control the aggressiveness of the heuristic (the shape of the probability distribution). γ: a tie-range parameter used to smooth out differences between "sufficiently close" scores in the LeastFails and BestSuccessRatio heuristics.
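The roles of the three parameters can be illustrated with a probabilistic value-selection sketch. The score function, the power-law weighting for β, and the tie-snapping rule for γ are all assumptions made for illustration; scores are assumed to be non-negative (e.g. success ratios in [0, 1]):

```python
import random

def probabilistic_select(domain, score, alpha, beta, gamma, rng=random):
    """Illustrative probabilistic value selection.
    score(v) -> non-negative float for values seen before, or None if v
    is still unordered.
    alpha: initial score for unordered values (conservativeness);
    beta:  aggressiveness exponent; larger beta concentrates probability
           on the best-scoring values;
    gamma: tie range; scores within gamma of the best are treated as equal."""
    scores = [score(v) if score(v) is not None else alpha for v in domain]
    # Smooth ties: snap scores within gamma of the maximum up to the maximum.
    top = max(scores)
    scores = [top if top - s <= gamma else s for s in scores]
    weights = [s ** beta for s in scores]
    total = sum(weights)
    if total == 0:
        return rng.choice(list(domain))  # degenerate case: uniform choice
    # Roulette-wheel selection proportional to the weights.
    r = rng.random() * total
    acc = 0.0
    for v, w in zip(domain, weights):
        acc += w
        if r <= acc:
            return v
    return domain[-1]  # guard against floating-point rounding
```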

Experiments (1) Randomly generated problems, parameterized by (a, b, c, d): a = number of variables, b = domain size of each variable, c = number of binary constraints, d = number of incompatible value pairs in each constraint. Two parameter settings were tested, with 30 problems in each set. The search looked for 30 solutions to each problem, using the variable ordering that picks the variable with the minimal remaining domain. (2) ATGP problems, with thousands of variables and constraints, modeling the Intel 64 and IA-32 processor architectures; parameterized by (n, m): n = number of problems in the set, m = average number of instructions to generate for each problem. Two such sets were tested.

Results The results are summarized in tables with the following columns. A: the acronym of the heuristic. B: the value of α used as the "initial score" of an unordered value. C: the value of either β or γ. D: the ratio of the average time the heuristic needs to find a single solution to the time required by the RANDOM heuristic. E: the ratio of the average Hamming distance between solutions found by the heuristic to the distance achieved by RANDOM.

Analysis Entries in the tables are sorted by D/E: the lower the ratio, the better the performance/quality trade-off. A heuristic is considered better than RANDOM if D ≤ E. It is hard to achieve better solution diversity than RANDOM: only a few entries have E > 1. Many heuristic configurations run much faster than RANDOM (e.g., LeastFails can run 10 to 20 times faster), but a very high speedup comes with a significant loss of solution diversity. Some good heuristic configurations, which achieve a significant speedup with hardly any loss of solution diversity, appear at the top of the tables.

Analysis (continued) LeastFails and BestSuccessRatio usually achieve a very high speedup, but often at the cost of a significant loss of solution diversity; in the randomly generated problems, the loss of diversity was smaller than the speedup gain. ProbMostFails and ProbWorstSuccessRatio usually do not achieve better solution diversity than RANDOM, yet can be much slower. ProbLeastFails and ProbBestSuccessRatio often achieve a moderate speedup without sacrificing much solution diversity.

Questions Do the experimental results hold for larger problems (hundreds of variables, larger domain sizes)? Do they hold for other real-world CSPs? Do other variable orderings achieve better performance? Can value-ordering heuristics be combined with inference methods such as search look-ahead or arc consistency, or is the overhead too high? Is there a consensus on which value-ordering heuristic is best?