Definition and Related Problems

Presentation on theme: "Definition and Related Problems"— Presentation transcript:

1 Lattices: Definition and Related Problems

2 Lattices
Definition (lattice): Given a basis v1,..,vn ∈ Rn, the lattice L = L(v1,..,vn) is {Σ aivi | ai integers}.
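The definition is easy to experiment with numerically. A minimal sketch (plain Python; the basis below is an arbitrary toy choice, not one from the slides) enumerates the integer combinations Σ aivi for a small range of coefficients:

```python
from itertools import product

def lattice_points(basis, coeff_range):
    """Enumerate the integer combinations sum(a_i * v_i), with each
    coefficient a_i drawn from coeff_range."""
    n = len(basis)
    dim = len(basis[0])
    points = []
    for coeffs in product(coeff_range, repeat=n):
        p = tuple(sum(a * v[d] for a, v in zip(coeffs, basis))
                  for d in range(dim))
        points.append(p)
    return points

# Arbitrary toy basis for a lattice in R^2.
basis = [(2, 0), (1, 2)]
pts = lattice_points(basis, range(-1, 2))
print(len(pts))        # 3 choices per coefficient -> 9 points
print((0, 0) in pts)   # the origin is always a lattice point
```

Only a finite window of the (infinite) lattice is produced, of course; the point is that a lattice is fully described by its basis.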

3 Illustration - A lattice in R2
Each point corresponds to a vector in the lattice. “Recipe”: 1. Take two linearly independent vectors in R2. 2. Close them under addition and under multiplication by integer scalars.

4 Shortest Vector Problem
SVP (Shortest Vector Problem): Given a lattice L, find s ≠ 0 in L s.t. for any x ≠ 0 in L, ||s|| ≤ ||x||.
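For toy dimensions SVP can be solved by exhaustive search over a bounded box of coefficients — exactly what becomes infeasible as the dimension grows. A sketch, with an arbitrarily chosen toy basis:

```python
from itertools import product
from math import sqrt

def shortest_vector(basis, bound=3):
    """Brute-force SVP: search all non-zero integer coefficient vectors
    in [-bound, bound]^n. Only meaningful for toy dimensions."""
    n, dim = len(basis), len(basis[0])
    best, best_len = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        if all(a == 0 for a in coeffs):
            continue  # SVP asks for a NON-zero lattice vector
        v = [sum(a * b[d] for a, b in zip(coeffs, basis)) for d in range(dim)]
        length = sqrt(sum(x * x for x in v))
        if length < best_len:
            best, best_len = v, length
    return best, best_len

s, slen = shortest_vector([(1, 2), (3, 1)])  # arbitrary toy basis
print(s, slen)
```

Here the shortest vector has squared length 5 (e.g. (1,2), or equivalently (-2,1) = (1,2) - ... in other coordinates); the search only certifies this within the coefficient box it explores.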

5 The Shortest Vector - Examples
What’s the shortest vector in the lattice spanned by the two given vectors?

6 Closest Vector Problem
CVP (Closest Vector Problem): Given a lattice L and a vector y ∈ Rn, find v ∈ L s.t. ||y - v|| is minimal. Which lattice vector is closest to the marked vector?
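CVP admits the same kind of toy brute force (again with an arbitrary illustrative basis and target, not ones from the slides):

```python
from itertools import product
from math import sqrt

def closest_vector(basis, y, bound=3):
    """Brute-force CVP: return the lattice vector minimizing ||y - v||,
    searching coefficients in [-bound, bound]^n (toy dimensions only)."""
    n, dim = len(basis), len(basis[0])
    best, best_dist = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        v = [sum(a * b[d] for a, b in zip(coeffs, basis)) for d in range(dim)]
        dist = sqrt(sum((yi - vi) ** 2 for yi, vi in zip(y, v)))
        if dist < best_dist:
            best, best_dist = v, dist
    return best, best_dist

# Target y is not a lattice point; the answer is the nearest one.
v, d = closest_vector([(2, 0), (1, 2)], (2.4, 1.7))
print(v, d)   # (3, 2) is the nearest lattice point here
```

Note that, unlike SVP, the zero vector is a legitimate answer for CVP — a distinction the coming slides turn into an actual obstacle.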

7 Lattice Approximation Problems
g-Approximation version: find a non-zero lattice vector y s.t. ||y|| ≤ g·shortest(L). g-Gap version: given L and a number d, distinguish between the ‘yes’ instances (shortest(L) ≤ d) and the ‘no’ instances (shortest(L) > g·d). If the g-Gap problem is NP-hard, then having a polynomial g-approximation algorithm implies P=NP.


9 Lattice Problems - Brief History
[Dirichlet, Minkowski]: no CVP algorithms. [LLL]: approximation algorithm for SVP, factor 2^(n/2). [Babai]: extension to CVP. [Schnorr]: improved factor, 2^(n/lg n), for both CVP and SVP. [vEB]: CVP is NP-hard. [ABSS]: approximating CVP is NP-hard to within any constant, and almost-NP-hard to within an almost polynomial factor.

10 Lattice Problems - Recent History
[Ajtai96]: worst-case/average-case reduction for SVP. [Ajtai-Dwork96]: cryptosystem. [Ajtai97]: SVP is NP-hard (for randomized reductions). [Micc98]: SVP is NP-hard to approximate to within some constant factor. [DKRS]: CVP is NP-hard to within an almost polynomial factor. [LLS]: approximating CVP to within n^1.5 is in coNP. [GG]: approximating SVP and CVP to within √n is in coAM ∩ NP.

11 CVP/SVP - which is easier?
Reminder: Definition (lattice): given a basis v1,..,vn ∈ Rn, the lattice L = L(v1,..,vn) is {Σ aivi | ai integers}. SVP (Shortest Vector Problem): find the shortest non-zero vector in L. CVP (Closest Vector Problem): given a vector y ∈ Rn, find v ∈ L closest to y. Why is SVP not the same as CVP with y = 0? ...Ohh, but isn’t that just an annoying technicality?

12 Trying to Reduce SVP to CVP
#1 try: y = 0. The CVP oracle, given (B, y), finds (c1,...,cn) ∈ Zn which minimizes ||c1b1+...+cnbn - y||; SVP asks for (c1,...,cn) ≠ 0 which minimizes ||c1b1+...+cnbn||. With y = 0 the oracle simply returns c = 0, the trivial zero vector. Note that similar naive attempts will also yield s = 0.

13 Geometrical Intuition
The obvious reduction: the shortest vector is the difference between (say) b2 and the lattice vector closest to b2 (not b2 itself!) — e.g., shortest = b2 - 2b1, where 2b1 is the closest to b2 besides b2 itself. Thus we would like to somehow “extract” b2 from the lattice, so the oracle for CVP will be forced to find the non-trivial vector closest to b2. ...This is not as simple as it sounds...

14 Trying to Reduce SVP to CVP
The trick: replace b1 with 2b1 in the basis, obtaining B(1), and invoke the CVP oracle on (B(1), b1). The oracle finds (c1,...,cn) ∈ Zn which minimizes ||2c1b1 + c2b2 + ... + cnbn - b1|| = ||(2c1-1)b1 + c2b2 + ... + cnbn||. Since c1 ∈ Z, the coefficient 2c1-1 is odd, so s ≠ 0. But in this way we only discover the shortest vector among those with an odd coefficient for b1. That’s not really a problem! Since one of the coefficients of the shortest vector must be odd (why? — otherwise all coefficients are even, and half the vector is a shorter lattice vector), we can do this process for each of the vectors in the basis and take the shortest result.

15 Geometrical Intuition
By doubling a vector in the basis, we extract it from the lattice without changing the lattice “too much”: L’ = L(b1,2b2) ⊆ L, L’’ = L(2b1,b2) ⊆ L. For instance, the closest to b2 in the original lattice may also lie in the new lattice, while the closest to b1 in the original lattice may be lost in the new one. So we risk losing the closest point in the process. It’s a calculated risk though: one of the closest points has to survive...

16 The Reduction of g-SVP to g-CVP
Input: a pair (B,d), B = (b1,..,bn) and d ∈ R. For j = 1 to n: invoke the CVP oracle on (B(j), bj, d), where B(j) = (b1,..,bj-1, 2bj, bj+1,..,bn). Output: the OR of all oracle replies.
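The search version of the reduction is mechanical enough to sketch in code. Here the CVP oracle is played by an exact brute-force search (a stand-in for whatever oracle the reduction assumes, and usable only in toy dimensions); B(j) doubles the j-th basis vector, and the shortest of the n answers is returned:

```python
from itertools import product
from math import sqrt

def cvp_oracle(basis, y, bound=4):
    """Stand-in CVP oracle: exact brute force over a coefficient box."""
    best, best_d = None, float("inf")
    dim = len(basis[0])
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        v = [sum(a * b[d] for a, b in zip(coeffs, basis)) for d in range(dim)]
        dist = sqrt(sum((yi - vi) ** 2 for yi, vi in zip(y, v)))
        if dist < best_d:
            best, best_d = v, dist
    return best, best_d

def svp_via_cvp(basis):
    """SVP via n CVP calls: for each j, double b_j to get B(j), ask for the
    vector of L(B(j)) closest to b_j, and subtract.  The difference is a
    non-zero vector of the original lattice (odd coefficient on b_j);
    the shortest difference over all j is the answer."""
    best, best_len = None, float("inf")
    for j, bj in enumerate(basis):
        doubled = [list(b) if i != j else [2 * x for x in b]
                   for i, b in enumerate(basis)]
        v, _ = cvp_oracle(doubled, bj)
        s = [vi - bi for vi, bi in zip(v, bj)]
        slen = sqrt(sum(x * x for x in s))
        if slen < best_len:
            best, best_len = s, slen
    return best, best_len

s, slen = svp_via_cvp([(1, 2), (3, 1)])  # arbitrary toy basis
print(s, slen)
```

Note that b_j itself is not in L(B(j)) (its b_j-coefficient would be 1/2), so the oracle's answer necessarily differs from b_j and the returned s is non-zero, exactly as the slides argue.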


18 Hardness of SVP & applications
Finding and even approximating the shortest vector is hard. Next we will see how this fact can be exploited for cryptography. We start by explaining the general framework: a well-known cryptographic method called a public-key cryptosystem.

19 Public-Key Cryptosystems and brave spies...
The brave spy wants to send HQ a secret message (“The enemy will attack within a week”). ...But the enemy is in between: they can see and duplicate whatever is transferred. In HQ a brand-new lock was developed for such cases. The solution: HQ sends the spy an open lock, keeping the key. The spy can easily lock the message without the key, and now the locked message can be sent back to HQ without fear, to be read only there.

20 Public-Key Cryptosystem (76)
Requirements: two poly-time computable functions Encr and Decr, s.t.: 1. ∀x: Decr(Encr(x)) = x. 2. Given Encr(x) only, it is hard to find x. Usage: make Encr public so anyone can send you messages; keep Decr private.

21 The Dual Lattice L* = { y | ∀x ∈ L: ⟨y,x⟩ ∈ Z }
Given a basis {v1, .., vn} for L one can construct, in poly-time, a basis {u1,…,un} for L*: ⟨ui, vj⟩ = 0 (i ≠ j), ⟨ui, vi⟩ = 1. In other words U = (Vt)^-1, where the columns of U are u1,…,un and the columns of V are v1, .., vn.
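The formula U = (Vt)^-1 is a one-liner to verify numerically. A sketch in exact rational arithmetic for an arbitrary toy 2-D basis (the 2×2 inverse is written out by hand to keep the block dependency-free):

```python
from fractions import Fraction

def dual_basis_2d(v1, v2):
    """Given basis vectors v1, v2 (as rows), return u1, u2 with
    <u_i, v_j> = 1 if i == j else 0, i.e. U = (V^t)^{-1}."""
    a, b = v1
    c, d = v2
    det = Fraction(a * d - b * c)   # det(V^t) = det(V)
    u1 = (Fraction(d) / det, Fraction(-c) / det)
    u2 = (Fraction(-b) / det, Fraction(a) / det)
    return u1, u2

v1, v2 = (2, 0), (1, 2)   # arbitrary toy basis
u1, u2 = dual_basis_2d(v1, v2)

dot = lambda x, y: x[0] * y[0] + x[1] * y[1]
print(dot(u1, v1), dot(u1, v2), dot(u2, v1), dot(u2, v2))  # 1 0 0 1
```

Since every lattice vector is an integer combination of v1, v2, the biorthogonality relations immediately give ⟨ui, x⟩ ∈ Z for all x ∈ L — the defining property of the dual lattice.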

22 Shortest Vector - Hidden Hyperplanes
Observation: the shortest vector s induces distinct layers in the dual lattice: the hidden hyperplanes Hk = {y | ⟨y,s⟩ = k}, k = 0, 1, 2, …, with distance 1/||s|| between consecutive hyperplanes.

23 Encrypting
Given the lattice L, encryption is polynomial. Encoding 0: (1) choose a random lattice point, (2) perturb it. Encoding 1: choose a random point. (s – shortest vector, H – hidden hyperplanes.)

24 Decrypting Given s, decryption can be carried out in polynomial time; otherwise it is hard. Decoding 0: the projection of the point on s is close to one of the hyperplanes. Decoding 1: the projection of the point is not close to any of the hyperplanes. (s – shortest vector, H – hidden hyperplanes.)


26 GG Approximating SVP and CVP to within √n is in NP ∩ coAM. Hence if these problems are shown NP-hard, the polynomial-time hierarchy collapses.

27 The World According to Lattices
[Chart: the approximation-factor axis 1, 1+1/n, O(1), O(log n), √2, n^(1/lglg n), √n, 2^(n/lg n), for SVP and CVP; NP-hardness at the low end (Ajtai-Micciancio, DKRS), NP ∩ co-AM at √n (GG), poly-time approximation at 2^(n/lg n) (LLL).]

28 Open Problems
Is g-SVP NP-hard to within √n? For super-polynomial, sub-exponential factors — is it a class of its own? Can LLL be improved?


30 Approximating SVP in Poly-Time
The LLL Algorithm

31 What’s coming up next? To within what factor can SVP be approximated?
In this chapter we describe a polynomial-time algorithm for approximating SVP to within factor 2^((n-1)/2). We will later see that approximating the shortest vector to within √2/(1+2ε), for some ε > 0, is NP-hard.

32 The Fundamental Insight(?)
Assume an orthogonal basis for a lattice. The shortest vector in this lattice is...

33 Illustration [Figure: x = 2v1 + v2 with v1, v2 orthogonal; ||x|| > ||2v1|| and ||x|| > ||v2||.]

34 The Fundamental Insight(!)
Assume an orthogonal basis for a lattice. The shortest vector in this lattice is the shortest basis vector.

35 Why? If a1,...,akZ and v1,...,vk are orthogonal, then ||a1v1+...+akvk||2 = a12•||v1||2+...+ak2•||vk||2 Therefore if vi is the shortest basis vector, and there exits an 1  i  n s.t ai  0, then ||a1v1+...+akvk||2  ||vi||2(a ak2)  ||vi||2  No non-zero lattice vector is longer than vi

36 What if we don’t get an orthogonal basis?
Remember the good old Gram-Schmidt procedure: take each vector and subtract its projections on each one of the vectors already taken. Gram-Schmidt turns a basis v1 .. vk of a subspace of Rn into an orthogonal basis v1* .. vk* of the same subspace.

37 Projections Computing the projection w of v on u: w = (⟨v,u⟩/⟨u,u⟩)·u.

38 Formally: The Gram-Schmidt Procedure
Input: a basis {v1,...,vk} of some subspace of Rn. Output: an orthogonal basis {v1*,...,vk*}, s.t. for every 1 ≤ i ≤ k, span({v1,...,vi}) = span({v1*,...,vi*}). Process: the procedure starts with v1* = v1. Each iteration (1 < i ≤ k) adds a vector orthogonal to the subspace already spanned: vi* = vi - Σ_{j<i} μij·vj*, where μij = ⟨vi, vj*⟩/⟨vj*, vj*⟩.
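The procedure translates directly into code. A sketch in exact rational arithmetic (to avoid floating-point noise; the input basis is an arbitrary toy example):

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Orthogonalize: v_i* = v_i - sum_{j<i} mu_ij * v_j*, with
    mu_ij = <v_i, v_j*> / <v_j*, v_j*>.  Returns (ortho basis, mu)."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    ortho, mu = [], {}
    for i, v in enumerate(basis):
        w = [Fraction(x) for x in v]
        for j, vj in enumerate(ortho):
            mu[i, j] = dot(v, vj) / dot(vj, vj)   # exact Fraction
            w = [wx - mu[i, j] * ox for wx, ox in zip(w, vj)]
        ortho.append(w)
    return ortho, mu

ortho, mu = gram_schmidt([(1, 1, 0), (1, 0, 1), (0, 1, 1)])
dot = lambda x, y: sum(a * b for a, b in zip(x, y))
print(dot(ortho[0], ortho[1]), dot(ortho[0], ortho[2]),
      dot(ortho[1], ortho[2]))   # 0 0 0
```

The pairwise inner products of the output vectors are exactly zero, and span({v1,...,vi}) = span({v1*,...,vi*}) holds at every step by construction.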

39 Example
[Figure: v3 is projected on the subspace spanned by v1*, v2*; v3* is v3 minus its projections on v1* and on v2*. Here v3 ⊥ v1, only to simplify the presentation.]

40 Wishful Thinking Unfortunately, the basis Gram-Schmidt constructs doesn’t necessarily span the same lattice. Example: the projection of v2 on v1 is 1.35v1; v1 and v2 - 1.35v1 don’t span the same lattice as v1 and v2. As a matter of fact, not every lattice even has an orthogonal basis...

41 Nevertheless... Invoking Gram-Schmidt on a lattice basis produces a lower bound on the length of the shortest vector in this lattice.

42 Lower Bound on the Length of the Shortest Vector
Claim: Let vi* be the shortest vector in the basis constructed by Gram-Schmidt. For any non-zero lattice vector x: ||vi*|| ≤ ||x||. Proof: there exist z1,...,zk ∈ Z and r1,...,rk ∈ R such that x = Σ zjvj = Σ rjvj*. For j < m the projection of vj on vm* is 0, and the projection of vm on vm* is vm*. Let m be the largest index for which zm ≠ 0; then rm = zm, and thus ||x|| ≥ |rm|·||vm*|| = |zm|·||vm*|| ≥ ||vi*||. ∎

43 Compromise Still, we’ll have to settle for less than an orthogonal basis: we’ll construct a reduced basis. Reduced bases are composed of “almost” orthogonal and relatively short vectors; they will therefore suffice for our purpose.

44 Reduced Basis
Definition (reduced basis): a basis {v1,…,vn} of a lattice is called reduced if: (1) ∀ 1 ≤ j < i ≤ n: |μij| ≤ ½; (2) ∀ 1 ≤ i < n: ¾||vi*||² ≤ ||vi+1* + μi+1,i·vi*||² (the two sides compare the projections of vi and of vi+1 on span{vi*,...,vn*}).
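Both conditions are mechanical to check. A sketch in exact arithmetic (the two test bases are arbitrary); since vi* and vi+1* are orthogonal, ||vi+1* + μi+1,i·vi*||² expands to ||vi+1*||² + μ²i+1,i·||vi*||²:

```python
from fractions import Fraction

def is_reduced(basis):
    """Check the two conditions of a reduced basis:
    (1) |mu_ij| <= 1/2 for all j < i, and
    (2) 3/4 * ||v_i*||^2 <= ||v_{i+1}*||^2 + mu_{i+1,i}^2 * ||v_i*||^2."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    ortho, mu = [], {}
    for i, v in enumerate(basis):                 # Gram-Schmidt
        w = [Fraction(x) for x in v]
        for j, vj in enumerate(ortho):
            mu[i, j] = dot(v, vj) / dot(vj, vj)
            w = [wx - mu[i, j] * ox for wx, ox in zip(w, vj)]
        ortho.append(w)
    n = len(basis)
    size_ok = all(abs(mu[i, j]) <= Fraction(1, 2)
                  for i in range(n) for j in range(i))
    lovasz_ok = all(
        Fraction(3, 4) * dot(ortho[i], ortho[i])
        <= dot(ortho[i + 1], ortho[i + 1])
           + mu[i + 1, i] ** 2 * dot(ortho[i], ortho[i])
        for i in range(n - 1))
    return size_ok and lovasz_ok

print(is_reduced([(1, 0), (0, 1)]), is_reduced([(1, 0), (100, 1)]))  # True False
```

The second basis fails condition (1) (μ21 = 100), even though it spans the same lattice Z² as the first.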

45 Properties of Reduced Basis (1)
Claim: if a basis {v1,...,vn} is reduced, then for every 1 ≤ i < n: ½||vi*||² ≤ ||vi+1*||². Proof: since {v1,...,vn} is reduced, ¾||vi*||² ≤ ||vi+1* + μi+1,i·vi*||²; since vi* and vi+1* are orthogonal, this equals ||vi+1*||² + μ²i+1,i·||vi*||²; and since |μi+1,i| = |⟨vi+1,vi*⟩/⟨vi*,vi*⟩| ≤ ½, it is at most ||vi+1*||² + ¼||vi*||². And the claim follows. ∎ Corollary: by induction on i-j, for all i ≥ j: (½)^(i-j)·||vj*||² ≤ ||vi*||².

46 Properties of Reduced Basis (2)
Claim: if a basis {v1,...,vn} is reduced, then for every 1 ≤ j ≤ i ≤ n: (½)^(i-1)·||vj||² ≤ ||vi*||². Proof: since {v1*,...,vn*} is an orthogonal basis, ||vj||² = ||vj*||² + Σ_{k<j} μ²jk·||vk*||². Since |μjk| = |⟨vj,vk*⟩/⟨vk*,vk*⟩| ≤ ½ and, by the previous corollary, ||vk*||² ≤ (½)^(k-j)·||vj*||² for 1 ≤ k ≤ j-1, some arithmetic (a geometric sum) gives ||vj||² ≤ 2^(j-1)·||vj*||². Rearranging the terms and applying the corollary once more, (½)^(i-1)·||vj||² ≤ (½)^(i-j)·||vj*||² ≤ ||vi*||², which is in fact what we wanted to prove. ∎

47 Approximation for SVP The previous claim, together with the lower bound min_i ||vi*|| on the length of the shortest vector, gives: the length of the first vector of any reduced basis provides at least a 2^((n-1)/2) approximation of the length of the shortest vector. It now remains to show that any basis can be reduced in polynomial time.

48 Reduced Basis Recall that the definition of a reduced basis is composed of two requirements: (1) ∀ 1 ≤ j < i ≤ n: |μij| ≤ ½; (2) ∀ 1 ≤ i < n: ¾||vi*||² ≤ ||vi+1* + μi+1,i·vi*||². We introduce two types of lattice-preserving transformations, reduction and swap, which will allow us to reduce any given basis.

49 First Transformation: Reduction
The transformation (for 1 ≤ l < k ≤ n): vk ← vk - ⌈μkl⌋·vl, where ⌈·⌋ denotes rounding to the nearest integer; every other vi is unchanged. The consequences: all the vi* are unchanged; μkj ← μkj - ⌈μkl⌋·μlj for 1 ≤ j < l; μkl ← μkl - ⌈μkl⌋; all other μij are unchanged. Using this transformation (for l = k-1 down to 1, for each k) we can ensure |μij| ≤ ½ for all j < i.

50 Second Transformation: Swap
The transformation (for 1 ≤ k < n): vk ↔ vk+1; every other vi is unchanged. The important consequence: vk* ← vk+1* + μk+1,k·vk*. If we use this transformation for a k which satisfies ¾||vk*||² > ||vk+1* + μk+1,k·vk*||², the value of ||vk*||² shrinks by a factor smaller than ¾.

51 Algorithm for Basis Reduction
1. Use the reduction transformation to obtain |μij| ≤ ½ for all i > j. 2. Apply the swap transformation for some k which satisfies ¾||vk*||² > ||vk+1* + μk+1,k·vk*||². 3. Stop if there is no such k; otherwise repeat.
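Putting the two transformations together gives the LLL algorithm. A compact exact-arithmetic sketch (deliberately unoptimized: Gram-Schmidt is recomputed from scratch after every change, which keeps the code close to the slides at the cost of speed):

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Basis reduction: alternate size-reduction (|mu_kj| <= 1/2) with a swap
    wherever delta*||v_k*||^2 <= ||v_{k+1}*||^2 + mu^2*||v_k*||^2 fails."""
    b = [[Fraction(x) for x in v] for v in basis]
    dot = lambda x, y: sum(p * q for p, q in zip(x, y))

    def gs():  # fresh Gram-Schmidt data for the current basis b
        ortho, mu = [], {}
        for i, v in enumerate(b):
            w = list(v)
            for j, vj in enumerate(ortho):
                mu[i, j] = dot(v, vj) / dot(vj, vj)
                w = [wx - mu[i, j] * ox for wx, ox in zip(w, vj)]
            ortho.append(w)
        return ortho, mu

    n, k = len(b), 1
    while k < n:
        ortho, mu = gs()
        for j in range(k - 1, -1, -1):        # reduction transformation
            r = round(mu[k, j])
            if r:
                b[k] = [x - r * y for x, y in zip(b[k], b[j])]
                ortho, mu = gs()
        if delta * dot(ortho[k - 1], ortho[k - 1]) <= \
           dot(ortho[k], ortho[k]) \
           + mu[k, k - 1] ** 2 * dot(ortho[k - 1], ortho[k - 1]):
            k += 1                            # condition (2) holds here
        else:
            b[k - 1], b[k] = b[k], b[k - 1]   # swap transformation
            k = max(k - 1, 1)
    return b

reduced = lll([(201, 37), (1648, 297)])       # arbitrary skewed toy basis
print(reduced)
```

Both transformations preserve the lattice, so the output spans the same lattice (same determinant up to sign) while its first vector is a 2^((n-1)/2) approximation of the shortest vector, as argued above.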

52 Termination Given a basis {v1,...,vn}, vi ∈ Zn, of a lattice, define: di = ||v1*||²·...·||vi*||² and D = d1·...·dn-1. di is the square of the determinant of the rank-i lattice spanned by v1,...,vi; thus each di, and hence D, is a positive integer.

53 Termination How do changes in the basis affect D?
The reduction transformation doesn’t change the vj*-s, and therefore does not affect D. Suppose we apply the swap transformation at index i: for all j ≠ i, dj is not affected by the swap (for j < i the vj* are unchanged, and for j > i the spanned sub-lattice is the same), while di is reduced by a factor < ¾. Thus D is reduced by a factor < ¾.

54 Polynomial-Time Since D is a positive integer and its value shrinks by a constant factor with every iteration, the algorithm necessarily terminates. Note that D’s initial value is poly-time computable and has polynomially many bits, which implies the total number of iterations is also polynomial. It is also true that each iteration takes polynomial time; the proof is omitted here.

55 Summary We have seen that a reduced basis gives us a 2^((n-1)/2) approximation of SVP. We have also presented a polynomial-time algorithm for the construction of such a basis ([LLL82]).


57 Hardness of Approx. SVP [MICC]
GapSVPg: Input: (B,d) where B is a basis for a lattice in Rn and d ∈ R. Yes instances: (B,d) s.t. shortest(L(B)) ≤ d. No instances: (B,d) s.t. shortest(L(B)) > g·d. GapCVP’g: Input: (B,y,d) where B ∈ Z^(n×k), y ∈ Zn, and d ∈ R. Yes instances: (B,y,d) s.t. ||Bz - y|| ≤ d for some z ∈ {0,1}^k. No instances: (B,y,d) s.t. ||Bz - w·y|| > g·d for all z ∈ Zk and all integers w ≠ 0.

58 Reducing CVP to SVP We will use the fact that GapCVP’c is NP-hard for every constant c, and give a reduction from GapCVP’_(2/ε) to GapSVP_(√2/(1+2ε)), for every ε > 0.

59 A Robust Lattice: No Short Vectors
p1,…,pm – the m smallest prime numbers. [Figure: the lattice L is generated by a basis built from the logarithms ln p1,…,ln pm.]

60 A Robust Lattice: No Short Vectors
Lemma: every non-zero vector of L is not too short. Proof: let z ∈ Zm be a non-zero vector. Define g+ = Π_{zi>0} pi^zi, g- = Π_{zi<0} pi^(-zi), and g = g+·g-. g+ and g- are integers, and z ≠ 0 ⇒ g+ ≠ g- ⇒ |g+ - g-| ≥ 1.

61 A Robust Lattice: No Short Vectors (2)
Proof (cont.): the squared length of the lattice vector corresponding to z is bounded below by an expression which is a convex function of g, with its minimum at g = 2. ∎

62 A Robust Lattice: Many Close Vectors
[Figure: the same lattice L, now with a target vector built from -ln b.]

63 A Robust Lattice: Many Close Vectors
Lemma: ∀z ∈ {0,1}m, if … then … Proof: …

64 Scheme

65 The End Result
[Figure: the SVP lattice assembled from L, the CVP basis B ∈ Z^(n×k), the matrix C ∈ Z^(k×m), the target -y, and -ln b.]

66 The End Result Lemma: Let Z ⊆ {0,1}m be a set of vectors containing exactly n 1’s. If … and C ∈ {0,1}^(k×m) is chosen by setting each entry independently at random with probability …, then Pr[∀x ∈ {0,1}k ∃z ∈ Z: Cz = x] > 1 - 7….

67 The End Result (2)
[The figure of the assembled SVP lattice is repeated.]

68 The End Result (2) Lemma: For any constant ε > 0 there exists a probabilistic polynomial-time algorithm that, on input 1^k, computes a lattice basis L ∈ R^((m+1)×m), a vector s ∈ R^(m+1) and a matrix C ∈ Z^(k×m) s.t., with probability arbitrarily close to 1: ∀x ∈ {0,1}k ∃z ∈ Zm s.t. Cz = x and …

69 The End Result (3) Proof: Let ε be a constant, 0 < ε < 1, let k be a sufficiently large integer, and let … = b^(1-…). Let m = k^(4/ε+1), and let Z be the set of vectors z ∈ {0,1}m containing exactly n 1’s, s.t. …

70 The End Result (4) Proof (cont.): Let S be the set of all products of n distinct primes ≤ pm ⇒ |S| = |Z|. By the prime number theorem, ∀ 1 ≤ i ≤ m: pi < 2m·ln m = M ⇒ S ⊆ [1, M^n]. Divide [1, M^n] into intervals of the form [b, b+b’], where … There are O(M^((1-…)n)) such intervals ⇒ they contain an average of at least … elements of S each.

71 The End Result (5) Proof (cont.): Choose a random element of S and select the interval containing it. The probability that this interval contains fewer than … elements of S is at most O((ln m/2)^(-n)) < 2^(-n) ⇒ for all sufficiently large k we can assume …, and therefore, with probability arbitrarily close to 1, ∀x ∈ {0,1}k ∃z ∈ Z: Cz = x.

72 Sum Up Theorem: The shortest vector in a lattice is NP-hard to approximate to within any constant factor less than √2. Proof: The proof is by reduction from GapCVP’c to GapSVPg, where c = 2/ε and g = √2/(1+2ε). Let (B,y,d) be an instance of GapCVP’c. We define an instance (V,t) of GapSVPg s.t.: (B,y,d) is a Yes instance ⇒ (V,t) is a Yes instance; (B,y,d) is a No instance ⇒ (V,t) is a No instance. Let L, s, C and V be as defined above, where … = …/d, and let t = …(1+2…).

73 Completeness Proof (cont.):
Assume that (B,y,d) is a Yes instance of GapCVP’c ⇒ from the previous lemma, ∃z ∈ Zm s.t. Cz = x and … Define a vector … ⇒ (V,t) is a Yes instance of GapSVPg.

74 Soundness Proof (cont.):
Assume that (B,y,d) is a No instance of GapCVP’c, and let … If w = 0 then z ≠ 0, and … If w ≠ 0 then … ⇒ (V,t) is a No instance of GapSVPg.


76 Ajtai: SVP Instances Hard on Average
A procedure approximating SVP (factor n^c) on random instances from a specific constructible distribution would yield worst-case procedures for: approximating Shortest Basis (factor n^(10+c)), approximating SVP (factor n^(10+c)), and finding the shortest vector where it is unique (Unique-SVP).

77 Average-Case Distribution
Pick an n×m matrix A = (v1 v2 … vm), with coefficients uniformly ranging over {0,…,q-1}. Def: Λ(A) = {x ∈ Zm | Ax ≡ 0 (mod q)}. Picking A at random induces a distribution on the lattices Λ(A).
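Instances from this distribution are easy to sample. The sketch below draws A uniformly and tests membership in Λ(A) = {x ∈ Zm | Ax ≡ 0 (mod q)}; parameters are toy-sized, far below anything cryptographic:

```python
import random

def sample_instance(n, m, q, seed=0):
    """Draw an n x m matrix A with entries uniform over {0,...,q-1}."""
    rng = random.Random(seed)
    return [[rng.randrange(q) for _ in range(m)] for _ in range(n)]

def in_lattice(A, x, q):
    """x is in Lambda(A) iff A x == 0 (mod q), coordinate by coordinate."""
    return all(sum(a * xi for a, xi in zip(row, x)) % q == 0 for row in A)

n, m, q = 2, 4, 5
A = sample_instance(n, m, q)
print(in_lattice(A, [0, 0, 0, 0], q))   # True: 0 is always in the lattice
print(in_lattice(A, [q, 0, 0, 0], q))   # True: q*e_i is always in the lattice
```

Since q·Zm ⊆ Λ(A), the lattice always has full rank m; the interesting question, exploited below, is whether it contains a vector much shorter than q.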

78 A mod-q lattice: Λ(v1 v2 v3 v4)
[Figure: lattice points such as (2,0,0,1) and (1,1,1,0), and the points q·(a,b,c,d) for integers a,b,c,d.]

79 A Lattice with a Short Vector
For the variant distribution (which generates a lattice with a short vector), only v1,…,vm-1 are chosen randomly. vm is generated by choosing δ1,…,δm-1 ∈R {0,1} and setting vm = Σ_{i<m} δi·vi mod q, so that x = (δ1,…,δm-1,-1) is a short lattice vector. Lemma: for sufficiently large m, this distribution is exponentially close to the original distribution.

80 sh & bl Def: sh(L) – the length of the shortest vector in L.
Def: Let b1,…,bn ∈ L. The length of {b1,…,bn} is defined as max_i ||bi||. Def: bl(L) – the length of the shortest basis of L.

81 SVP & BL Def: the problem SVPf – for at least ½ of the lattices L from the distribution, find a vector of length at most f(n)·sh(L). Def: the problem BLf – given a lattice L, find a basis of length at most f(n)·bl(L).

82 BL  SVP() Thm: There are constants c1,c2,c3 such that
Proof: Assuming a procedure for SVP() we construct a poly-time algorithm that finds a short basis for any lattice L. Lemma: Given a set of linearly independent elements r1,…,rn  L we can construct, in polynomial time, a basis s1,…,sn of L such that

83 Halving M Let a1,…,an ∈ L be a set of independent elements, and let M = max_i ||ai||. The previous lemma shows that if M is small enough, we can find a short basis. Otherwise, we will construct another set of linearly independent elements b1,…,bn ∈ L such that max_i ||bi|| ≤ M/2. Iterating this process for log M steps, we can find a set of linearly independent c1,…,cn ∈ L from which a short basis can be constructed.

84 Given linearly independent a1,…,an ∈ L, find linearly independent f1,…,fn ∈ L s.t. … and such that W = P(f1,…,fn) (the parallelepiped they span) is almost a cube [i.e., the distance of each vertex of W from the vertices of a fixed cube is at most nM]. Now we cut W into q^n parallelepipeds.

85 We take a random sequence of lattice points ξ1,…,ξm, and find for each 1 ≤ i ≤ m the small parallelepiped that contains ξi. Let vi be the corner of the parallelepiped that contains ξi. Then v1,…,vm define a lattice from the average-case distribution, and therefore we can find, with probability ≥ ½, a vector h ∈ Zm s.t. … and …

86 Proposition: With positive probability, … and …

87 SVP  BL Lemma: There is an absolute constant c such that 1  sh(L*)bl(L)  cn2. Theorem: Proof: If we can get an estimate on bl(L*), then by the above lemma, we can obtain an estimate on sh((L*)*) = sh(L).


89 Hardness of approx. CVP [DKRS]
g-CVP is NP-hard for g = n^(1/loglog n) (n – the lattice dimension). The improvements: NP-hardness instead of quasi-NP-hardness, and a larger non-approximation factor (up from 2^((log n)^(1-ε))).

90 [ABSS] reduction: uses PCP to show
NP-hardness for g = O(1), and quasi-NP-hardness for g = 2^((log n)^(1-ε)) by repeated blow-up. Barrier: 2^((log n)^(1-ε)), constant ε > 0. SSAT: a new non-PCP characterization of NP, NP-hard to approximate to within g = n^(1/loglog n).

91 SAT Input: Φ = f1,..,fn Boolean functions (‘tests’)
over variables x1,..,xn’ with range {0,1}. Problem: Is Φ satisfiable? Thm (Cook-Levin): SAT is NP-complete (even when every test depends on 3 variables).

92 SAT as a consistency problem
Input: Φ = f1,..,fn Boolean functions (‘tests’) over variables x1,..,xn’ with range R, and for each test a list of satisfying assignments. Problem: Is there an assignment to the tests that is consistent? Example tests: f(x,y,z), g(w,x,z), h(y,w,x), with satisfying-assignment lists such as (0,2,7), (2,3,7), (3,1,1), (1,0,7), (1,3,1), (3,2,2), (0,1,0), (2,1,0), (2,1,5).

93 Super-Assignments
A natural assignment for f(x,y,z): A(f) = 1·(3,1,1). A super-assignment for f(x,y,z): a formal integer combination of satisfying assignments, e.g. SA(f) = -2·(3,1,1) + 2·(3,2,5) + 3·(5,1,2) over the list (1,1,2), (3,1,1), (3,2,5), (3,3,1), (5,1,2). Norm: ||SA(f)|| = |-2| + |2| + |3| = 7; the norm of SA is Average_f ||SA(f)||.

94 Consistency In the SAT case: if A(f) = (3,2,5), then the restriction to x is A(f)|x := (3).
Consistency: ∀x, ∀f,g that depend on x: A(f)|x = A(g)|x.

95 Consistency If SA(f) = +3·(1,1,2) - 2·(3,2,5) + 2·(3,3,1), then
SA(f)|x := +3·(1) + 0·(3), since the coefficients of the two assignments with x = 3 cancel: -2+2 = 0. Consistency: ∀x, ∀f,g that depend on x: SA(f)|x = SA(g)|x.
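The projection operation is concrete enough to sketch: a super-assignment maps assignment tuples to integer coefficients, and projecting onto one variable sums the coefficients of tuples that agree on that variable's value (so coefficients may cancel, as in the example above). The variable order inside each tuple is assumed known:

```python
from collections import defaultdict

def project(super_assignment, var_index):
    """Project a super-assignment {assignment-tuple: coefficient} onto one
    variable position: coefficients of tuples sharing a value there are
    summed (and may cancel); zero entries are dropped."""
    proj = defaultdict(int)
    for assignment, coeff in super_assignment.items():
        proj[assignment[var_index]] += coeff
    return {value: c for value, c in proj.items() if c != 0}

# SA(f) = +3*(1,1,2) - 2*(3,2,5) + 2*(3,3,1), over variables (x, y, z).
sa_f = {(1, 1, 2): 3, (3, 2, 5): -2, (3, 3, 1): 2}
print(project(sa_f, 0))   # x-values: +3*(1) and (-2+2)*(3)  ->  {1: 3}
```

Consistency of two tests f, g on a shared variable x then amounts to equality of the two projected dictionaries.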

96 g-SSAT - Definition Input:
Φ = f1,..,fn tests over variables x1,..,xn’ with range R; for each test fi, a list of satisfying assignments. Problem: distinguish between [Yes] there is a natural consistent assignment for Φ; [No] any non-trivial consistent super-assignment is of norm > g. Theorem: SSAT is NP-hard for g = n^(1/loglog n) (conjecture: g = n^ε for some constant ε).

97 SSAT is NP-hard to approximate to within g = n^(1/loglog n)
One can’t extend everything at once: the proof uses the recursion-composition paradigm.

98 Reducing SSAT to CVP
Yes → Yes: dist(L, target) ≤ √n. No → No: dist(L, target) > g·√n. Choose w = g·√n + 1. [Figure: the CVP matrix with blocks for tests f(w,x) and f’(z,x), weight-w columns, and the target vector.]

99 A consistency gadget
[Figure: a block of weight-w columns.]
100 A consistency gadget
[Figure: weight-w blocks over variables a1, a2, a3, b1, b2, b3, with the constraints a1 + a2 + a3 = 1, b1 + a2 + a3 = 1, b2 + a1 + a3 = 1, b3 + a1 + a2 = 1.]

