Software Development: Developing a MAX-CSP Solver (Karl Lieberherr)


General solver Based on a transition system. Correctness of the solver: –Does it find the maximum? –Does it find an assignment satisfying t_G?

General Theme The average over a set of assignments is f. We want to derive an algorithm that finds an assignment of value ≥ f. Two choices: –randomized algorithm (very simple, but has a failure probability) –derandomized algorithm (more complex)
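The randomized choice can be sketched as follows. The clause encoding (a constraint as a tuple of (variable, wanted value) literals, satisfied when any literal matches) and the names `frac_satisfied` and `randomized_max` are illustrative assumptions, not from the slides:

```python
import random

def frac_satisfied(clauses, assignment):
    """Fraction of constraints with at least one matching literal.
    A literal is a (variable, wanted_value) pair."""
    sat = sum(any(assignment[v] == w for v, w in cl) for cl in clauses)
    return sat / len(clauses)

def randomized_max(clauses, n, trials=200, seed=0):
    """Sample uniform random assignments and keep the best one seen.
    Some assignment reaches the average f, so repeated sampling finds
    an assignment of value >= f with high probability -- but it can fail."""
    rng = random.Random(seed)
    best_a, best_f = None, -1.0
    for _ in range(trials):
        a = [rng.randrange(2) for _ in range(n)]
        f = frac_satisfied(clauses, a)
        if f > best_f:
            best_a, best_f = a, f
    return best_a, best_f

# Toy formula: three 2-literal OR-constraints over variables 0, 1, 2.
# Each is satisfied by 3/4 of all assignments, so the average f is 3/4.
clauses = [((0, 1), (1, 1)), ((1, 0), (2, 1)), ((0, 0), (2, 0))]
best_a, best_f = randomized_max(clauses, 3)
```

The failure probability shrinks geometrically with the number of trials; the derandomized version below removes it at the cost of extra bookkeeping.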

Derandomized algorithm Need a correctness proof. For each variable x in H: set x = 1 if average_H(x=1) ≥ average_H(x=0), else x = 0. Need: average_H ≤ max(average_H(x=1), average_H(x=0)) (the overall average is a mixture of the two branch averages, so it cannot exceed their maximum).

Polynomials based on Averages mean_H(n,k) = average fraction of satisfied constraints in H among all assignments to the n variables of H that set k variables to true. appmean_H(x) = average fraction of satisfied constraints in H among all assignments that are generated with a bent (biased) coin that sets each variable to true with probability x. –The averaging includes all assignments, but they have different weights, based on the binomial distribution. –We generate the assignments following the binomial distribution, and appmean gives the expected value of a random variable: the fraction of satisfied constraints. –expected value = mean
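The definition of appmean_H(x) can be checked by exact weighted enumeration. This is a minimal sketch under the same assumed clause encoding as before (tuples of (variable, wanted value) literals); the function name `appmean` mirrors the slide's notation:

```python
from itertools import product

def appmean(clauses, n, x):
    """appmean_H(x): expected fraction of satisfied constraints when each
    variable is independently set to true with probability x (the bent
    coin), computed exactly by summing over all 2^n weighted assignments."""
    total = 0.0
    for bits in product((0, 1), repeat=n):
        weight = 1.0
        for b in bits:
            weight *= x if b else 1.0 - x
        sat = sum(any(bits[v] == w for v, w in cl) for cl in clauses)
        total += weight * sat / len(clauses)
    return total

# Single clause (x0 OR x1): the polynomial is appmean(x) = 1 - (1 - x)^2.
cl = [((0, 1), (1, 1))]
```

For example, appmean(cl, 2, 0.5) gives 0.75 and appmean(cl, 2, 0.3) gives 1 - 0.7^2 = 0.51, matching the closed form.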

Look-ahead polynomial lap F,N (x) = appmean n-map(F,N) (x)

Binomial Distribution X ~ B(n,p). The probability of getting exactly k successes: C(n,k)·p^k·(1-p)^(n-k). E(X) = np. The most likely value (mode): the largest integer ≤ (n+1)·p.
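A quick numerical check of the pmf and the mode formula; `binom_pmf` is an illustrative helper name:

```python
from math import comb, floor

def binom_pmf(n, p, k):
    """P(X = k) for X ~ B(n, p): C(n,k) * p^k * (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.3
pmf = [binom_pmf(n, p, k) for k in range(n + 1)]
mode = max(range(n + 1), key=lambda k: pmf[k])  # most likely value of X
```

Here (n+1)·p = 3.3, so the mode should be 3, and the pmf values should sum to 1.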

Start with a simple case meanall_H = average fraction of satisfied constraints in H among all assignments to the variables of H. meanall_H = 1/2 (meanall_H(k=1) + meanall_H(k=0)) meanall_H = Sum over all relations R in H of t_R(H) · m_R · 2^(-r(R)) –t_R(H) is the fraction of constraints of H using relation R; m_R is the number of satisfying rows in the truth table of R; r(R) is the rank of R. –check: relation 22: 3/8 –max_{0 ≤ k ≤ n} mean_H(n,k) = 4/9 + h(n)
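The relation-22 check follows from the encoding in which the bits of the relation number mark the satisfying rows of its truth table, so m_R is the popcount of the relation number. A sketch assuming that encoding (`satisfying_fraction` is an illustrative name):

```python
def satisfying_fraction(relation_number, rank):
    """m_R / 2^r(R): fraction of truth-table rows satisfying relation R.
    Assumes the bits of relation_number mark the satisfying rows, so
    m_R is simply its popcount."""
    m_R = bin(relation_number).count("1")
    return m_R / 2 ** rank

# Relation 22 = 10110 in binary: rows 1, 2, 4 satisfy (exactly one of
# the three variables true), hence m_R = 3 and the fraction is 3/8.
frac = satisfying_fraction(22, 3)
```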

Start with a simple case meanall_H = 1/2 (meanall_H(k=1) + meanall_H(k=0)) Proof: Consider all 2^n assignments. Half of them set variable k to 1 and half set k to 0. We compute the average for the two halves separately and then average the two averages to get the overall average.

Start with a simple case Algorithm MEANALL Input: H = CSP(G)-formula. Output: An assignment that satisfies at least a meanall_H fraction of the constraints. For each variable k: set k = 1 if meanall_H(k=1) ≥ meanall_H(k=0), else set k = 0; continue with the reduced formula.
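Algorithm MEANALL can be sketched as a greedy method-of-conditional-expectations loop. The clause encoding and the helper names (`assign`, `meanall`, `meanall_greedy`) are illustrative assumptions; an open OR-clause with j remaining literals is satisfied by a 1 - 2^(-j) fraction of the remaining assignments:

```python
def assign(clauses, v, b):
    """The reduced formula H(v=b): a clause becomes True when it contains
    the literal (v, b); otherwise its literal on v is dropped."""
    out = []
    for cl in clauses:
        if cl is True or any(var == v and want == b for var, want in cl):
            out.append(True)
        else:
            out.append(tuple(lit for lit in cl if lit[0] != v))
    return out

def meanall(clauses):
    """Average satisfied fraction over all assignments to the remaining
    variables: an open clause with j literals is satisfied by a 1 - 2^-j
    fraction; an emptied clause by none."""
    total = sum(1.0 if cl is True else (1.0 - 0.5 ** len(cl) if cl else 0.0)
                for cl in clauses)
    return total / len(clauses)

def meanall_greedy(clauses, n):
    """Algorithm MEANALL: fix each variable to whichever value keeps the
    conditional average larger; the result never drops below meanall_H."""
    a = []
    for v in range(n):
        one, zero = assign(clauses, v, 1), assign(clauses, v, 0)
        if meanall(one) >= meanall(zero):
            clauses, a = one, a + [1]
        else:
            clauses, a = zero, a + [0]
    return a, meanall(clauses)  # all variables fixed: achieved fraction

clauses = [((0, 1), (1, 1)), ((1, 0), (2, 1)), ((0, 0), (2, 0))]
start = meanall(clauses)            # 3/4 for this toy formula
a, achieved = meanall_greedy(clauses, 3)
```

On this toy formula the greedy loop reaches an assignment satisfying every constraint, comfortably above the 3/4 average it is guaranteed.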

Next more complicated case mean_H(n,k) Want an algorithm for finding an assignment at least as good as mean_H(n,k).

mean polynomials mean_H(n,k) = k/n · mean_H(k=1)(n-1,k-1) + (n-k)/n · mean_H(k=0)(n-1,k) Compare Pascal's rule: C(n,k) = C(n-1,k-1) + C(n-1,k)

Example C(5,2) = C(4,1) + C(4,2), i.e. 10 = 4 + 6 mean(5,2) = C(4,1)/C(5,2) · mean(4,1) + C(4,2)/C(5,2) · mean(4,2) mean(n,k) = C(n-1,k-1)/C(n,k) · mean(n-1,k-1) + C(n-1,k)/C(n,k) · mean(n-1,k)
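The two recursions agree because the binomial ratios are exactly the weights k/n and (n-k)/n; a one-screen check of that identity at n = 5, k = 2:

```python
from math import comb

n, k = 5, 2

# Pascal's rule: C(5,2) = C(4,1) + C(4,2), i.e. 10 = 4 + 6.
assert comb(n, k) == comb(n - 1, k - 1) + comb(n - 1, k) == 10

# The binomial ratios equal the mean-recursion weights k/n and (n-k)/n.
w1 = comb(n - 1, k - 1) / comb(n, k)  # C(4,1)/C(5,2) = 4/10
w0 = comb(n - 1, k) / comb(n, k)      # C(4,2)/C(5,2) = 6/10
```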

Ahmed’s and Christine’s conjecture For all assignments M: max_{0 ≤ x ≤ 1} appmean_n-map(H,M)(x) ≥ fsat(H,M) An assignment M is maximal if max_{0 ≤ x ≤ 1} appmean_n-map(H,M)(x) = fsat(H,M) Intuition: an assignment is maximal if the polynomials don’t help.

Question Does it matter in the definition of maximal whether we use appmean or mean?

Proofs or Counterexamples Using Daniel’s programs, we can easily find counterexamples. If we cannot find counterexamples, we should find proofs.

Which ones are correct? mean_H(n,k) = k/n · mean_H(k=1)(n-1,k-1) + (n-k)/n · mean_H(k=0)(n-1,k) mean_H(n,k) ≤ max(mean_H(k=1)(n-1,k-1), mean_H(k=0)(n-1,k)) appmean_H(x) ≤ max(appmean_H(k=1)(x), appmean_H(k=0)(x))
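The third claim holds because appmean_H(x) = x · appmean_H(k=1)(x) + (1-x) · appmean_H(k=0)(x) (condition on variable k's coin flip), a convex combination bounded by its larger term. A numerical check on a small formula, with the same assumed clause encoding as earlier; pinning a variable via `fixed` models H(k=1) and H(k=0):

```python
from itertools import product

def appmean(clauses, n, x, fixed=None):
    """Expected satisfied fraction under a bent coin of probability x.
    Variables in `fixed` are pinned to given values, modeling H(k=b)."""
    fixed = fixed or {}
    total = 0.0
    for bits in product((0, 1), repeat=n):
        if any(bits[v] != b for v, b in fixed.items()):
            continue
        weight = 1.0
        for v, b in enumerate(bits):
            if v not in fixed:
                weight *= x if b else 1.0 - x
        sat = sum(any(bits[var] == w for var, w in cl) for cl in clauses)
        total += weight * sat / len(clauses)
    return total

# Small formula over variables 0, 1, 2; split on variable 0.
clauses = [((0, 1), (1, 1)), ((1, 0), (2, 1)), ((0, 0), (2, 0))]
```

Evaluating at a few x values confirms both the convex-combination identity and the max bound for this instance; a single instance is of course evidence, not a proof.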

Which ones are correct? Def: maxappmean_H = max_{0 ≤ x ≤ 1} appmean_H(x) maxappmean_H ≤ max(maxappmean_H(k=1), maxappmean_H(k=0))

By Analogy mean_H(n,k) = k/n · mean_H(k=1)(n-1,k-1) + (n-k)/n · mean_H(k=0)(n-1,k) For 1 ≤ k ≤ n: appmean_H(k/n) = k/n · appmean_H(k=1)((k-1)/(n-1)) + (n-k)/n · appmean_H(k=0)(k/(n-1))

Example Relation = 22 appmean(x) = 3x(1-x)^2 mean(n,k) = (3/C(3,1)) · k · C(n-k,2)/C(n,3) = k · C(n-k,2)/C(n,3) rough approximation, with x = k/n: –k/n · ((n-k)/n)^2 · (1/2) · 6 = 3x(1-x)^2
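The exact mean(n,k) for a relation-22 (exactly-one-in-three) constraint is the hypergeometric probability of drawing exactly one true variable, and for large n it approaches the polynomial 3x(1-x)^2, which peaks at x = 1/3 with value 4/9. A sketch comparing the two (`mean_22` and `appmean_22` are illustrative names):

```python
from math import comb

def mean_22(n, k):
    """Fraction of k-true assignments satisfying one relation-22
    constraint: hypergeometric P(exactly 1 of its 3 variables true)
    = C(k,1) * C(n-k,2) / C(n,3)."""
    return k * comb(n - k, 2) / comb(n, 3)

def appmean_22(x):
    """Bent-coin counterpart 3 x (1-x)^2, maximized at x = 1/3."""
    return 3 * x * (1 - x) ** 2

n, k = 300, 100             # x = k/n = 1/3
exact = mean_22(n, k)
approx = appmean_22(k / n)  # = 4/9
```

At n = 300 the exact value and the polynomial already differ by well under one percent, illustrating the 4/9 + h(n) behavior mentioned earlier.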