
1 Independence and chromatic number (and random k-SAT): Sparse Case. Dimitris Achlioptas, Microsoft.

2 Random graphs

6 Hamiltonian cycle. Let $t_2$ be the moment, in the random graph process, at which all vertices have degree 2. W.h.p. $G_{t_2}$ has a Hamiltonian cycle, and w.h.p. one can be found in polynomial time [Ajtai, Komlós, Szemerédi 85] [Bollobás, Fenner, Frieze 87]. In $G(n,p)$ with $p$ constant, Hamiltonicity can be decided in $O(n)$ expected time [Gurevich, Shelah 84].

10 Cliques in random graphs. The largest clique in $G(n,1/2)$ has size $\sim 2\log_2 n$ [Bollobás, Erdős 75] [Matula 76]. W.h.p. there is no maximal clique of size much below $\log_2 n$. Can we find a clique of size $(1+\varepsilon)\log_2 n$ in polynomial time? [Karp 76] What if we "hide" a clique of size $n^{1/2-\varepsilon}$?
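For concreteness, a minimal sketch (not from the talk) of the greedy procedure behind Karp's question: any maximal clique it halts on has, w.h.p., about half the size of the maximum clique, and beating this by any constant factor is open.

```python
import random

def greedy_clique(n, p=0.5, seed=0):
    """Greedy clique search in G(n, p): scan the vertices in random order,
    keeping each one adjacent to everything kept so far.  The result is a
    maximal clique, hence w.h.p. of size ~ log2(n) in G(n, 1/2), i.e. about
    half the maximum size 2*log2(n)."""
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = rng.random() < p
    clique = []
    order = list(range(n))
    rng.shuffle(order)
    for v in order:
        if all(adj[v][u] for u in clique):
            clique.append(v)
    return clique

print(len(greedy_clique(2000)))  # typically close to log2(2000) ~ 11
```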

12 Two problems for which we know much less: the chromatic number of sparse random graphs, and random k-SAT. These are canonical for random constraint satisfaction: binary constraints over a k-ary domain, and k-ary constraints over a binary domain. Studied in AI, math, optimization, physics, ...

13 A factor-graph representation of k-coloring. Each vertex is a variable with domain {1, 2, ..., k}. Each edge is a constraint on two variables; all constraints are "not-equal". Random graph = each constraint picks two variables at random. (Figure: variable nodes $v_1, v_2, \ldots$ connected to edge nodes $e_1, e_2, \ldots$.)

15 SAT via factor-graphs. Edge between variable x and clause c iff x occurs in c. Edges are labeled +/- to indicate whether the literal is negated. Constraints are "at least one literal must be satisfied". Random k-SAT = each constraint picks k literals at random. (Figure: variable nodes and clause nodes.)

16 Diluted mean-field spin glasses. Small, discrete domains: spins. Conflicting, fixed constraints: quenched disorder. Random bipartite graph: lack of geometry, mean field. Sparse: diluted. Also: hypergraph coloring, random XOR-SAT, error-correcting codes, ... (Figure: variables x and constraints c.)

17 Random graph coloring: Background

19 A trivial lower bound. For any graph, the chromatic number is at least (number of vertices) / (size of maximum independent set). For random graphs, use the known upper bound on the size of the largest independent set.
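Spelled out for $G(n, d/n)$, using the standard w.h.p. value of the independence number (a sketch reconstructing the lost formula, not the slide itself):

```latex
\chi(G) \;\ge\; \frac{n}{\alpha(G)}, \qquad
\alpha\big(G(n, d/n)\big) \approx \frac{2\ln d}{d}\,n \ \text{ w.h.p.}
\quad\Longrightarrow\quad
\chi\big(G(n, d/n)\big) \;\gtrsim\; \frac{d}{2\ln d}.
```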

21 An algorithmic upper bound. Repeat: pick a random uncolored vertex; assign it the lowest allowed number (color). This uses about twice the trivial lower bound, i.e. roughly $d/\ln d$ colors. No algorithm is known to do better.
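A minimal runnable sketch of this first-fit heuristic (illustrative code, not the talk's):

```python
import random

def first_fit_coloring(n, d, seed=0):
    """Color G(n, d/n): visit vertices in random order, giving each the
    smallest color unused among its already-colored neighbors."""
    rng = random.Random(seed)
    p = d / n
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    color = {}
    order = list(range(n))
    rng.shuffle(order)
    for v in order:
        used = {color[u] for u in nbrs[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return 1 + max(color.values())

# d = 20: trivial lower bound ~ d/(2 ln d) ~ 3.3, first fit ~ d/ln d ~ 6.7
print(first_fit_coloring(3000, 20))
```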

22 The lower bound is asymptotically tight. As d grows, $G(n, d/n)$ can w.h.p. be colored using independent sets of essentially maximum size [Bollobás 89] [Łuczak 91].

24 Only two possible values: for every d there is an integer $k_d$ such that w.h.p. $\chi(G(n, d/n)) \in \{k_d, k_d + 1\}$ [Łuczak 91].

25 "The Values": $k_d$ is the smallest integer k such that $d < 2k\ln k$.

27 Examples. For one (small) value of d, w.h.p. the chromatic number is 4 or 5. For a much larger d, w.h.p. it is one of two consecutive integers around 377145549067226075809014239493833600551612641764765068157.

29 One value. If $(2k-1)\ln k < d < 2k\ln k$, then w.h.p. the chromatic number is $k + 1$.

30 Random k-SAT: Background

31 Random k-SAT. Fix a set of n variables. Among all $2^k\binom{n}{k}$ possible k-clauses, select m uniformly and independently. Typically $m = rn$ for a constant r. Example (k = 3): a conjunction of 3-clauses such as $(x_1 \lor \bar{x}_2 \lor x_4)$.
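A sketch of this sampling model in code (function and variable names are mine):

```python
import random

def random_ksat(n, m, k=3, seed=0):
    """m clauses, each chosen uniformly and independently among the
    2^k * C(n, k) k-clauses on n variables: k distinct variables, each
    negated with probability 1/2.  Literal v means x_v, -v its negation."""
    rng = random.Random(seed)
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), k))
            for _ in range(m)]

print(random_ksat(n=10, m=4))   # e.g. [(3, -7, 1), ...]
```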

33 Generating hard 3-SAT instances [Mitchell, Selman, Levesque 92]. The critical point appears to be around $m/n \approx 4.2$.

35 The satisfiability threshold conjecture. For every $k \ge 3$, there is a constant $r_k$ such that for every $\varepsilon > 0$: $\lim_{n\to\infty}\Pr[F_k(n, rn)\text{ is satisfiable}] = 1$ if $r = (1-\varepsilon)r_k$, and $= 0$ if $r = (1+\varepsilon)r_k$.

37 Unit-clause propagation. Repeat: pick a random unset variable and set it to 1; while there are unit-clauses, satisfy them; if a 0-clause is generated, fail. UC finds a satisfying truth assignment with positive probability if $r \lesssim 2^k/k$ [Chao, Franco 86].
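A runnable sketch of UC (my code, following the pseudocode above; clauses represented as in `random_ksat`). For k = 3 the known UC bound is $r < 8/3$:

```python
import random

def unit_clause(formula, n, seed=0):
    """Unit-clause propagation: repeatedly set a random free variable to
    True; whenever unit clauses appear, satisfy them; fail if an empty
    (0-)clause is ever generated."""
    rng = random.Random(seed)
    clauses = [set(c) for c in formula]
    value = {}

    def assign(lit):
        """Make literal `lit` true; shrink or remove clauses accordingly."""
        nonlocal clauses
        value[abs(lit)] = lit > 0
        new = []
        for c in clauses:
            if lit in c:
                continue          # clause satisfied, drop it
            c = c - {-lit}        # the falsified literal disappears
            if not c:
                return False      # 0-clause generated: fail
            new.append(c)
        clauses = new
        return True

    order = list(range(1, n + 1))
    rng.shuffle(order)
    for v in order:
        if v in value:
            continue
        if not assign(v):         # free step
            return None
        while True:               # forced steps
            unit = next((c for c in clauses if len(c) == 1), None)
            if unit is None:
                break
            if not assign(next(iter(unit))):
                return None
    return value
```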

38 An asymptotic gap. The probability of satisfiability is at most $\big[2(1-2^{-k})^{r}\big]^n$, which tends to 0 for $r > 2^k\ln 2$.

39 Since the mid-80s, no asymptotic progress over $r \approx 2^k/k$.

40 Getting to within a factor of 2

41 The trivial upper bound is the truth!

42 Some explicit bounds for the k-SAT threshold

43 The second moment method. For any non-negative r.v. X, $\Pr[X > 0] \ge \mathrm{E}[X]^2 / \mathrm{E}[X^2]$.
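The inequality is one line of Cauchy-Schwarz:

```latex
\mathrm{E}[X] \;=\; \mathrm{E}\!\big[X\,\mathbf{1}_{X>0}\big]
\;\le\; \sqrt{\mathrm{E}[X^2]\;\Pr[X>0]}
\quad\Longrightarrow\quad
\Pr[X>0] \;\ge\; \frac{\mathrm{E}[X]^2}{\mathrm{E}[X^2]} .
```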

44 Ideal for sums: if $X = \sum_i X_i$, then $\mathrm{E}[X^2] = \sum_{i,j}\mathrm{E}[X_i X_j]$, so the method only requires control of pairwise correlations.

48 General observations. The method works well when the $X_i$ are like "needles in a haystack": lack of correlations, i.e. a rapid drop in influence around solutions. Algorithms get no "hints".

52 The second moment method for random k-SAT. Let X be the # of satisfying truth assignments. The number of satisfying truth assignments has huge variance: the satisfying truth assignments do not form a "uniformly random mist" in $\{0,1\}^n$.
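For the record, the two moments behind these claims (standard computations; the slide's displayed formulas were lost in transcription):

```latex
\mathrm{E}[X] \;=\; 2^n\big(1 - 2^{-k}\big)^{rn},
\qquad
\mathrm{E}[X^2] \;=\; \sum_{\sigma,\tau}\Pr\big[\,\sigma, \tau \text{ both satisfy } F\,\big],
```

where the pair term depends only on the overlap of σ and τ. One can check that correlated pairs make $\mathrm{E}[X^2] \ge (1+\varepsilon_k)^n\,\mathrm{E}[X]^2$ for every density $r > 0$, so the method, applied to this X, proves nothing.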

54 To prove $r_k \ge 2^k\ln 2 - k/2 - 1$: let $H(\sigma, F)$ be the number of satisfied literal occurrences in F under σ, and let X be defined as $X = \sum_\sigma \gamma^{H(\sigma, F)}$, where the sum ranges over the σ that satisfy F and $0 < \gamma < 1$ is a constant.

57 General functions. Given any truth assignment σ and any k-clause c, let $v = (v_1, \ldots, v_k) \in \{-1,+1\}^k$ be the values of the literals in c under σ. We will study random variables of the form $X = \sum_\sigma \prod_{c} f(v(\sigma, c))$, where $f : \{-1,+1\}^k \to \mathbb{R}$ is an arbitrary function.

58 Examples: $f = \mathbf{1}\{v \text{ contains a } +1\}$ gives X = # of satisfying truth assignments; $f = \mathbf{1}\{v \text{ contains both a } +1 \text{ and a } -1\}$ gives X = # of "Not All Equal" (NAE) truth assignments.

59 The NAE count equals the number of satisfying truth assignments whose complement is also satisfying.

62 Overlap parameter = distance. The overlap parameter is Hamming distance: for any f, $\mathrm{E}[X_\sigma X_\tau]$ depends only on the number of variables on which σ, τ agree. If σ, τ agree on $z = \alpha n$ variables, let $\Lambda_f(\alpha)$ be the per-clause correlation $\mathrm{E}[f(v(\sigma,c))\,f(v(\tau,c))]$.
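A brute-force sketch of this overlap correlation (my code; the name `lam` and the exact normalization are mine, matching the definition just given):

```python
from itertools import product

def lam(f, alpha, k):
    """Lambda_f(alpha) = E[f(u) f(w)], where u is uniform on {-1,+1}^k and
    each w_i independently equals u_i with probability alpha (sigma and tau
    agree on an alpha-fraction of the variables)."""
    total = 0.0
    for u in product((1, -1), repeat=k):
        for w in product((1, -1), repeat=k):
            p = 1.0
            for ui, wi in zip(u, w):
                p *= alpha if ui == wi else 1 - alpha
            total += f(u) * f(w) * p
    return total / 2 ** k

sat = lambda v: float(1 in v)                 # at least one satisfied literal
nae = lambda v: float(1 in v and -1 in v)     # not-all-equal

for a in (0.3, 0.5, 0.7):
    # Lambda_sat is increasing at alpha = 1/2 (unbalanced), while Lambda_nae
    # is symmetric about 1/2, so its derivative vanishes there (balanced).
    print(a, round(lam(sat, a, 5), 5), round(lam(nae, a, 5), 5))
```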

63 Independence. (Plot: contribution to $\mathrm{E}[X^2]$ according to distance.)

64 Entropy vs. correlation. For every function f: $\mathrm{E}[X^2] = 2^n \sum_z \binom{n}{z}\,\Lambda_f(z/n)^{rn}$. Recall: $\mathrm{E}[X]^2 = 4^n\,\Lambda_f(1/2)^{rn}$ is (up to polynomial factors) the $\alpha = 1/2$ term, so the s.m.m. succeeds iff the exponent $H(\alpha) + r\ln\Lambda_f(\alpha)$ is maximized at $\alpha = 1/2$.


66 (Plot: contribution to $\mathrm{E}[X^2]$ as a function of the overlap $\alpha = z/n$.)



69 (Plot: the overlap curve for NAE 5-SAT.)


72 The importance of being balanced. An analytic condition: if $\Lambda_f'(1/2) \neq 0$, the s.m.m. fails. A geometric criterion: the "center of mass" $\sum_v f(v)\,v$ must lie at the origin.

73 The importance of being balanced. (Picture: the cube between $(+1,\ldots,+1)$ and $(-1,\ldots,-1)$, with the centers of mass of the "Constant", "SAT", and "Complementary" functions; SAT is pulled toward $(+1,\ldots,+1)$.)

76 Balance and information theory. We want to balance the vectors in an "optimal" way. Information theory: maximize the entropy of the $f(v)$ subject to $\sum_v f(v)\,v = 0$ and $f(v) = 0$ for the unsatisfying $v = (-1,\ldots,-1)$. Lagrange multipliers: the optimal f is $f(v) = \gamma^{\#\{i\,:\,v_i = +1\}}$ for the unique $0 < \gamma < 1$ that satisfies the constraints.
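A numerical sketch of that last step (assuming, as on the weighting slide earlier, the geometric form $f(v) = \gamma^{\#\{i:\,v_i=+1\}}$ supported on satisfying v; the helper names are mine):

```python
from itertools import product

def imbalance(gamma, k):
    """Coordinate-1 'center of mass' of f(v) = gamma^(# of +1s in v),
    restricted to satisfying v (v != all -1).  Balance means this is 0."""
    s = 0.0
    for v in product((1, -1), repeat=k):
        if all(x == -1 for x in v):
            continue                  # f vanishes on the unsatisfying v
        f = gamma ** sum(1 for x in v if x == 1)
        s += f * v[0]
    return s

def solve_gamma(k):
    """Bisect for the unique gamma in (0,1) with zero imbalance
    (negative near 0, positive at 1, for k >= 3)."""
    lo, hi = 1e-9, 1.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if imbalance(mid, k) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(solve_gamma(5))
```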

77 (Picture: the balanced, "heroic" f on the cube between $(+1,\ldots,+1)$ and $(-1,\ldots,-1)$.)

78 Random graph coloring

79 Threshold formulation: $G(n, m = dn/2)$ is w.h.p. k-colorable if $d \le 2(k-1)\ln(k-1)$, and w.h.p. non-k-colorable if $d \ge 2k\ln k - \ln k$.

80 Main points. Non-k-colorability: first moment. k-colorability: second moment. Proof: the probability that there exists any k-coloring is at most $\mathrm{E}[\#\,k\text{-colorings}] = k^n (1 - 1/k)^m$.
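The first-moment computation spelled out:

```latex
\Pr[\chi \le k] \;\le\; k^n\Big(1 - \frac{1}{k}\Big)^{m}
\;=\; \Big[\,k\,(1 - 1/k)^{d/2}\Big]^n \xrightarrow[n\to\infty]{} 0
\quad\text{for } d > \frac{2\ln k}{\ln\frac{k}{k-1}} \;\approx\; 2k\ln k - \ln k .
```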

81 Setup. Let $X_\sigma$ be the indicator that the balanced k-partition σ is a proper k-coloring, and let $X = \sum_\sigma X_\sigma$. We will prove that if $d \le 2(k-1)\ln(k-1)$, then for all n there is a constant $C = C(k) > 0$ such that $\mathrm{E}[X^2] \le C\,\mathrm{E}[X]^2$. This gives $\Pr[X > 0] \ge 1/C$, and a sharp-threshold argument boosts this to: $G(n, m)$ is k-colorable w.h.p.

82 Setup. $\mathrm{E}[X^2] = \sum_{\sigma,\tau} \mathrm{E}[X_\sigma X_\tau]$, a sum over all pairs of balanced k-partitions. For any such pair σ, τ, let $a_{ij}\,n$ be the # of vertices having color i in σ and color j in τ.

84 Examples. When σ, τ are uncorrelated, the overlap matrix $A = (a_{ij})$ is the flat matrix with all entries $1/k^2$. As σ, τ align, A tends to $I/k$. Balance means that $kA$ is doubly stochastic.

86 A matrix-valued overlap. $\mathrm{E}[X_\sigma X_\tau]$ depends only on A. So $\mathrm{E}[X^2] = \sum_A \#\{\text{pairs with overlap } A\}\cdot\Pr[\text{both proper}]$, which is controlled by the maximizer of $H(A) + \frac{d}{2}\ln\!\big(1 - \frac{2}{k} + \|A\|_F^2\big)$ over doubly-stochastic matrices.
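A quick numerical sanity check of this exponent (my code; the functional form is the one displayed above). The two reference points behave as they should: the flat matrix recovers the exponent of $\mathrm{E}[X]^2$, and $I/k$ recovers that of $\mathrm{E}[X]$:

```python
import numpy as np

def exponent(A, d, k):
    """Per-vertex exponent H(A) + (d/2) ln(1 - 2/k + ||A||_F^2) for a
    k x k overlap matrix A with total mass 1."""
    A = np.asarray(A, dtype=float)
    H = -np.sum(A[A > 0] * np.log(A[A > 0]))   # entropy of the overlap
    corr = 1 - 2 / k + np.sum(A ** 2)          # Pr[edge proper in both]
    return H + (d / 2) * np.log(corr)

k = 5
d = 2 * (k - 1) * np.log(k - 1)                # right at the k-col bound
flat = np.full((k, k), 1 / k ** 2)             # uncorrelated partitions
ident = np.eye(k) / k                          # fully aligned partitions
print(exponent(flat, d, k), exponent(ident, d, k))   # flat wins here
```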

92 Entropy decreases away from the flat matrix. For small d, this loss overwhelms the sum-of-squares gain; but for d large enough, the maximizer jumps instantaneously from flat to a matrix in which a few entries capture the majority of the mass. We prove that this jump happens only after $d = 2(k-1)\ln(k-1)$.

99 Proof overview. Compare the value at the flat matrix with an upper bound for everywhere else, derived as follows: (1) Relax to singly stochastic matrices. (2) Prescribe the $L_2$ norm $\rho_i$ of each row i. (3) Find the max-entropy value, $f(\rho_i)$, of each row given $\rho_i$. (4) Prove that $f''' > 0$. (5) Use (4) to determine the optimal distribution of the $\rho_i$ given their total ρ. (6) Optimize over ρ.

100 Random regular graphs

107 A vector analogue (optimizing a single row). Maximize the entropy of a probability vector v subject to $\|v\|_2^2 = \rho$, for some given ρ. For ρ below a critical value the maximizer spreads its mass nearly uniformly; for ρ above it, the maximizer has a single dominant entry and $k-1$ equal small ones. (See the numerical sketch below.)
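A numerical sketch of the vector problem (my code). The Lagrange conditions force the maximizer to take at most two distinct entry values, so it suffices to scan the number j of "big" entries:

```python
import numpy as np

def entropy(v):
    v = v[v > 1e-12]
    return -np.sum(v * np.log(v))

def two_level(k, j, rho):
    """Probability vector with j equal 'big' entries x and k-j equal 'small'
    ones, with ||v||^2 = rho.  The norm is increasing in x on [1/k, 1/j],
    so bisection finds x."""
    if rho > 1 / j:
        return None
    lo, hi = 1 / k, 1 / j
    for _ in range(60):
        x = (lo + hi) / 2
        y = (1 - j * x) / (k - j)
        if j * x ** 2 + (k - j) * y ** 2 < rho:
            lo = x
        else:
            hi = x
    y = (1 - j * x) / (k - j)
    return np.array([x] * j + [y] * (k - j))

def max_entropy_given_norm(k, rho):
    cands = [two_level(k, j, rho) for j in range(1, k)]
    return max((v for v in cands if v is not None), key=entropy)

k = 10
for rho in (0.11, 0.3, 0.6):
    v = max_entropy_given_norm(k, rho)
    print(rho, np.round(v, 3))   # one entry grabs most of the mass as rho grows
```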

108 Maximum entropy image restoration. Create a composite image of an object that: minimizes "empirical error" (typically least-squares error over luminance) and maximizes "plausibility" (typically maximum entropy).

109 Maximum entropy image restoration. The structure of the maximizer helps detect stars in astronomy.

110 The End

