The Polynomial Hierarchy By Moti Meir And Yitzhak Sapir Based on notes from lectures by Oded Goldreich taken by Ronen Mizrahi, and lectures by Ely Porat.


Introduction A Karp reduction is a deterministic polynomial-time machine that transforms an input for one problem into an input for another machine. If the second machine is of class NP, the combined machine that performs both steps is also of class NP. The question is whether the combined machine can also be deterministic polynomial-time; this is how we have shown that languages are NP-complete. A Cook reduction is a deterministic polynomial-time machine that makes use of some NP oracle. If this oracle can be computed by a deterministic polynomial-time machine (that is, if P = NP), then a Cook reduction is no more powerful than a Karp reduction. Furthermore, if we use such a combined machine as an oracle, we still always get a deterministic polynomial-time Turing machine. But what if P ≠ NP?

Generalization of Cook Reductions We would like to generalize the idea of Cook reductions. In particular, we would like to consider a machine given access to an oracle that is not necessarily from class NP. Let us assume an oracle from some class C. In this case, we may define the class M^C to be the class of all languages decided by oracle machines M that use an oracle from class C. A simple Cook reduction, in this notation, is simply M^NP with M deterministic polynomial-time. We may further generalize this, in some cases, to specify the possible behaviors of the machine M itself: if M is a machine of the kind associated with class C1, which can query an oracle for a language in class C2 in a single step, then we may define C1^C2 to be the class of all languages decided by such machines.

Some Examples of Generalized Oracle Machines This can't always be done, but it can be done in some important cases:
P^NP – a regular Cook reduction machine
P^C – a Cook reduction machine with access to an oracle from class C
NP^C – an NP machine with access to an oracle from class C
BPP^C – a probabilistic machine with access to an oracle from class C
We can note that NP ∪ coNP ⊆ P^NP:
NP ⊆ P^NP – for L ∈ NP, we simply take the oracle to be an oracle for L; the P machine queries the oracle on its input and returns its answer.
coNP ⊆ P^NP – same as for NP ⊆ P^NP, except that the machine returns the complement of the oracle's answer.
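As a toy illustration of these two containments, the sketch below models an oracle as a black-box membership function; the even-parity "language" is just an invented stand-in for an NP oracle, since only the query pattern matters here:

```python
# Toy sketch (not a real complexity-theoretic machine): a deterministic
# "P machine" decides both L and its complement with one oracle query each,
# mirroring NP ∪ coNP ⊆ P^NP.

def decide_L(x, oracle):
    """P^NP-style machine for L: query the oracle on the input, return its answer."""
    return oracle(x)

def decide_co_L(x, oracle):
    """P^NP-style machine for the complement of L: flip the oracle's answer."""
    return not oracle(x)

# Stand-in "language": binary strings with an even number of 1s.
# (The point is the query pattern, not the power of this particular oracle.)
L_oracle = lambda x: x.count("1") % 2 == 0

print(decide_L("1010", L_oracle))     # True: "1010" is in L
print(decide_co_L("1010", L_oracle))  # False: "1010" is not in the complement
```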

Definition of Σi, Πi, and Δi A Karp reduction's combined machine will always solve only NP problems if the second machine is NP. But as we just saw, a Cook reduction in the same situation will also solve coNP problems. This shows that if P ≠ NP, then Cook reductions are more powerful than Karp reductions. Let us define the following:
Δ1 = P, Σ1 = NP, Π1 = coNP
and, for every i ≥ 1:
Δ_{i+1} = P^Σi, Σ_{i+1} = NP^Σi, Π_{i+1} = co-Σ_{i+1}

Definition and Properties of the Polynomial Hierarchy (PH) We have defined the various Σi, Πi, and Δi. And now, define: PH = ∪_{i≥1} Σi. That is, PH is the union of all the Σi. The choice of the union of the Σi, rather than the union of all the Πi or Δi, was arbitrary, but it can be justified by the following properties:

Proof (Σi ∪ Πi ⊆ Δ_{i+1}): We will show that for any machine M that accepts a language L in either Σi or Πi, we can create a new machine in Δ_{i+1} that accepts the same language. Suppose L ∈ Σi; then use the machine M as the oracle of the new Δ_{i+1} machine, which is a polynomial-time machine that simply asks the oracle for the answer. Similarly, if L ∈ Πi, that is, L's complement is in Σi, then use the complement's machine in Σi as the oracle of the new Δ_{i+1} machine, but have the polynomial-time machine complement the oracle's answer before returning it.

Proof (Δ_{i+1} ⊆ Σ_{i+1} ∩ Π_{i+1}): Trivially, a deterministic machine is a special case of the corresponding non-deterministic machine. So: Δ_{i+1} = P^Σi ⊆ NP^Σi = Σ_{i+1}. What remains to be proven is that the same goes for Π_{i+1}. This follows from the fact that the Δ classes are closed under complementation. So, co-Δ_{i+1} = Δ_{i+1}. And so, Δ_{i+1} = co-Δ_{i+1} ⊆ co-Σ_{i+1} = Π_{i+1}. And therefore, Δ_{i+1} ⊆ Σ_{i+1} ∩ Π_{i+1}.

Proof (P^Πi = P^Σi = Δ_{i+1}): Given a machine M and an oracle A, it is easy to create a new machine M̃ such that M̃ with oracle A behaves exactly as M with oracle Ā. Simply create M̃ as a copy of M, and wherever M queries the oracle, flip the answer before giving it to M. If M is a deterministic polynomial-time Turing machine, then so is M̃. Thus, for such M and any class C, M^co-C = M̃^C. In particular, for our case, C is Σi, co-C is Πi, and M is a P machine since it is deterministic polynomial-time; hence P^Πi = P^Σi = Δ_{i+1}.

Proof (NP^Πi = NP^Σi = Σ_{i+1}): This is the same as above, except that instead of deterministic machines, we are working with non-deterministic machines. Given a machine M and an oracle A, it is easy to create a new machine M̃ such that M̃ with oracle A behaves exactly as M with oracle Ā: simply create M̃ as a copy of M, and wherever M queries the oracle, flip the answer before giving it to M. If M is a non-deterministic polynomial-time Turing machine, then so is M̃. Thus, for such M and any class C, M^co-C = M̃^C. In particular, for our case, C is Σi, co-C is Πi, and M is an NP machine since it is non-deterministic polynomial-time; hence NP^Πi = NP^Σi = Σ_{i+1}.

Polynomial Hierarchy Definition: L ∈ NP ⟺ (x ∈ L ⟺ ∃y such that M(x,y) accepts). From here we can define: Definition: L ∈ coNP ⟺ (x ∈ L ⟺ x ∉ L̄, where L̄ ∈ NP) ⟺ ¬(∃y such that M(x,y)) ⟺ ∀y M′(x,y). For example, F ∈ Taut ⟺ ∀y F(y) = 1. Problem: given an electric circuit, determine whether it is minimal.
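The two quantifier definitions above can be made concrete with a brute-force (exponential-time) sketch, in which M is a polynomial-time verifier for CNF formulas; the clause encoding used here is invented for illustration:

```python
from itertools import product

# Membership in an NP language is ∃y M(x, y); membership in a coNP language
# such as Taut is ∀y M(x, y). M below is a simple polynomial-time verifier,
# and the quantifiers are evaluated by exhaustive search over assignments.

def eval_formula(formula, assignment):
    # formula: list of clauses; each clause is a list of (var_index, polarity) literals
    return all(any(assignment[v] == pol for v, pol in clause) for clause in formula)

def in_SAT(formula, n_vars):          # ∃y such that M(x, y)
    return any(eval_formula(formula, y) for y in product([False, True], repeat=n_vars))

def in_TAUT(formula, n_vars):         # ∀y, M(x, y)
    return all(eval_formula(formula, y) for y in product([False, True], repeat=n_vars))

# (x0 ∨ x1) is satisfiable but not a tautology; (x0 ∨ ¬x0) is a tautology.
print(in_SAT([[(0, True), (1, True)]], 2))    # True
print(in_TAUT([[(0, True), (1, True)]], 2))   # False
print(in_TAUT([[(0, True), (0, False)]], 1))  # True
```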

Polynomial Hierarchy We want to show that this problem belongs to (coNP)^NP. Let L be the language of all minimal electric circuits. Cn ∈ L ⟺ ∀C′n ∃x ∈ {0,1}^n such that Cn(x) ≠ C′n(x) or |Cn| ≤ |C′n|. This means that if Cn is in L, then for every other circuit C′n with the same number of input parameters, there is an x ∈ {0,1}^n that causes a different output, which means that Cn and C′n differ, OR Cn is smaller than C′n. If this holds for every C′n, then Cn is minimal, and therefore Cn ∈ L.
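The ∀C′ ∃x structure above can be sketched for a finite toy family of "circuits". The (size, function) representation is an assumption of this sketch, and real minimality quantifies over all circuits of the right arity, which brute force cannot do; the sketch only checks the condition within an explicitly given family:

```python
from itertools import product

# A "circuit" is modeled as a pair (size, func) on n-bit inputs. C is minimal
# within `family` iff for every C' there: either some input distinguishes them
# (∃x with C(x) ≠ C'(x)), or C is no larger than C'.

def is_minimal(C, family, n):
    size, f = C
    for size2, g in family:
        differs = any(f(x) != g(x) for x in product([0, 1], repeat=n))
        if not (differs or size <= size2):
            return False   # found an equivalent, strictly smaller circuit
    return True

XOR = (3, lambda x: x[0] ^ x[1])
BIG_XOR = (7, lambda x: (x[0] ^ x[1]) | (x[0] & x[1] & 0))  # same function, larger "size"
AND = (1, lambda x: x[0] & x[1])

family = [XOR, BIG_XOR, AND]
print(is_minimal(XOR, family, 2))      # True: nothing equivalent is smaller
print(is_minimal(BIG_XOR, family, 2))  # False: XOR computes the same function and is smaller
```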

How can we prove that? Cn ∈ L ⟺ ∀y M^SAT(Cn, y) accepts. That is, for every candidate circuit C′n = y, the machine uses SAT to determine whether there exists an x such that Cn(x) ≠ C′n(x) (and compares the sizes), which means that a coNP machine uses SAT, which is NP. Therefore, the problem belongs to (coNP)^NP.

The Definition of the Class PH (NP) Definition: L ∈ NP if there exists a polynomially bounded and polynomial-time recognizable machine M such that: x ∈ L ⟺ ∃y such that M(x,y) accepts.

The Definition of the Class PH (Σi) Definition: L ∈ Σi if there exists a polynomially bounded and polynomial-time recognizable machine M such that x ∈ L ⟺ ∃y1 ∀y2 ∃y3 … Qi yi such that M(x, y1, …, yi) accepts, where each quantifier Q is ∃ or ∀ according to its position (∃ at odd positions, ∀ at even ones). M is a polynomial-time machine that gets guessing tapes y1, …, yi.
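This alternating-quantifier definition can be evaluated directly by brute force for tiny parameters. The sketch below mirrors the definition, with ∃ at odd positions and ∀ at even ones; the predicate M at the end is an invented toy example:

```python
from itertools import product

# Brute-force evaluator for:  x ∈ L ⟺ ∃y1 ∀y2 ∃y3 … Qi yi  M(x, y1, …, yi),
# where each yj ranges over {0,1}^m. Real Σi languages bound m by a polynomial
# in |x|; here we just enumerate everything.

def sigma_i_member(x, M, i, m):
    def level(j, ys):                      # decide the j-th quantifier block
        if j > i:
            return M(x, *ys)
        blocks = product([0, 1], repeat=m)
        if j % 2 == 1:                     # odd position: ∃
            return any(level(j + 1, ys + [y]) for y in blocks)
        else:                              # even position: ∀
            return all(level(j + 1, ys + [y]) for y in blocks)
    return level(1, [])

# Toy Σ2 predicate: ∃y1 ∀y2 such that y1 majorizes y2 bitwise.
# y1 = (1,1) works against every y2, so the answer is True.
M = lambda x, y1, y2: all(a >= b for a, b in zip(y1, y2))
print(sigma_i_member("x", M, 2, 2))   # True
```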

Equivalence of Definitions We will show that the two definitions of PH are equivalent: for every i, the class Σi is identical in both definitions. We denote by Σi¹ the Σi produced by the first (oracle) definition, by Σi² the Σi produced by the second (quantifier) definition, and the same for Πi.

Equivalence of Definitions We prove by induction on i that for every i, Σi² ⊆ Σi¹. Base of the induction: Σ1 was defined to be NP in both cases, so there is nothing to prove. We assume that the claim holds for i and prove it for i+1. Suppose L ∈ Σ_{i+1}²; then by definition there exists a machine M such that: x ∈ L ⟺ ∃y1 ∀y2 ∃y3 … Qi yi Q_{i+1} y_{i+1} such that M(x, y1, …, y_{i+1}) accepts.

Equivalence of Definitions In other words, this means that: x ∈ L ⟺ ∃y1, such that (x, y1) ∈ Li, where Li is defined as follows: Li = {(x′, y′) : ∀y2 ∃y3 … Qi yi Q_{i+1} y_{i+1}, such that M(x′, y′, y2, …, y_{i+1}) accepts}.

Equivalence of Definitions We claim that Li ∈ Πi², and this follows by complementing the definition of Σi². If we do this complementation we get: x ∈ L ⟺ ∃y1 ∀y2 ∃y3 … Qi yi, such that M(x, y1, …, yi) accepts, and x ∈ L′ ⟺ ∀y1 ∃y2 … Q′i yi, such that M′(x, y1, …, yi) accepts.

Equivalence of Definitions This is almost what we had in the definition of Li, except for the M′ as opposed to M. Remember that M is polynomial-time recognizable; therefore its complement is also polynomial-time recognizable. Now that we have Li ∈ Πi², we can use the inductive hypothesis: Πi² ⊆ Πi¹.

Equivalence of Definitions So far we have managed to show that: x ∈ L ⟺ ∃y1, such that (x, y1) ∈ Li, where Li belongs to Πi¹. We now claim that L ∈ NP^Πi¹. This is true because we can write a non-deterministic polynomial-time machine that decides membership in L by guessing y1 and using an oracle for Li.

Equivalence of Definitions Therefore we can further conclude that: L ∈ NP^Πi¹ = NP^Σi¹ = Σ_{i+1}¹, using the earlier result that NP^Πi = NP^Σi.

Proof of Equivalence (Σ_{i+1}¹ ⊆ Σ_{i+1}²): Again, this will be proven by induction. The base of the induction, as before, is that Σ1 was defined to be NP in both cases, so Σ1¹ ⊆ Σ1². The induction step assumes that Σi¹ ⊆ Σi² and proves Σ_{i+1}¹ ⊆ Σ_{i+1}². Now, suppose L ∈ Σ_{i+1}¹; then there exists a non-deterministic polynomial-time machine M that uses an oracle A ∈ Σi¹ ⊆ Σi². From the quantifier definition, we know that if L′ is the language of A, then L′ can be described in terms of quantifiers: q ∈ L′ ⟺ ∃y1 ∀y2 ∃y3 … Qi yi, s.t. (q, y1, …, yi) ∈ R_L′.

Proof of Equivalence: Now, M may make any polynomially bounded number of queries to A; let us call this number t, with queries q1, …, qt. For each query qj, it receives a boolean answer aj:
aj = 0 → qj isn't in L′ → ∀y1 ∃y2 … Q̄i yi, s.t. (qj, y1, …, yi) ∈ R̄_L′ (the complement relation)
aj = 1 → qj is in L′ → ∃z1 ∀z2 … Qi zi, s.t. (qj, z1, …, zi) ∈ R_L′
We can organize and combine this as follows:

Proof of Equivalence: This combined two queries with different answers into one quantified formula. We can combine any one query and its two possible answers in this way and determine which answer is correct. Furthermore, we can do this for more than one query at once: simply merge the independent quantifier prefixes, block by block, to determine the correct answer for each query. Define the blocks wj as the combination over all queries q1 … qt as follows: w1 is all the combined variables following the first ∃; wj is all the combined variables following the j-th quantifier.

Proof of Equivalence: Define R_L: (w1, …, w_{i+1}) ∈ R_L iff for every query answered 1 (using z variables), that query's set of variables satisfies (z1, …, zi) ∈ R_L′, and for every query answered 0 (using y variables), that query's set of variables satisfies (y1, …, yi) ∈ R̄_L′. We can now write the combined quantified formula over all queries as: ∃q1, a1, q2, a2, …, qt, at, w1 ∀w2 ∃w3 … Q̄i w_{i+1}, s.t. (x, w1, …, w_{i+1}) ∈ R_L. This is the definition of Σ_{i+1}². So we have shown that if L ∈ Σ_{i+1}¹, then L ∈ Σ_{i+1}². QED

Proof: PH ⊆ PSPACE This proof is rather simple. Given any i, the proof will show that Σi ⊆ PSPACE. Using the quantifier definition: x ∈ L iff ∃y1 ∀y2 ∃y3 … Qi yi, s.t. (x, y1, …, yi) ∈ R_L. Because the relation R_L is polynomially bounded, there is a polynomial bound on the length of each variable yj. This means that we can try all possible values of the variables yj, and run the machine recognizing R_L on each combination. We need space for recognizing R_L, which is polynomial, and space for all the variables yj, that is, i · max(SPACE(yj)). Since i is constant and each yj is polynomially bounded, the overall space is polynomial as well.
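The space bound in this argument can be made concrete: exhaustive evaluation visits exponentially many tuples, but only the current partial assignment (one block per quantifier level) is ever stored, which is what keeps the space polynomial. The Σ3-style predicate below is an invented toy example:

```python
from itertools import product

# We track the peak number of simultaneously stored quantifier blocks; it
# stays at i (here 3), even though 2^(i*m) combinations are examined.

peak = 0

def evaluate(quants, verifier, m, prefix=()):
    global peak
    peak = max(peak, len(prefix))          # blocks stored right now
    if not quants:
        return verifier(prefix)
    combine = any if quants[0] == "E" else all
    return combine(evaluate(quants[1:], verifier, m, prefix + (y,))
                   for y in product((0, 1), repeat=m))

# ∃y1 ∀y2 ∃y3 : y3 = y1 XOR y2 (bitwise), with m = 2. Always satisfiable,
# since y3 can be chosen to match y1 XOR y2.
ok = evaluate(("E", "A", "E"),
              lambda p: p[2] == tuple(a ^ b for a, b in zip(p[0], p[1])), 2)
print(ok, peak)   # True 3: the formula holds, and at most 3 blocks were ever stored
```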

PH = Σk Proposition: for every k ≥ 1, if Σk = Πk then PH = Σk. Proof: for an arbitrary fixed k, we will show by induction on i that for every i ≥ k, Σi = Σk:

PH = Σk Base of the induction: when i = k, there is nothing to show. Induction step: by the inductive hypothesis it follows that Σi = Σk, so what remains to be shown is that NP^Σk = Σk (since then Σ_{i+1} = NP^Σi = NP^Σk = Σk). Containment in one direction is obvious: Σk ⊆ NP^Σk, since the NP machine can simply query its oracle on the input.

PH = Σk We now show that NP^Σk ⊆ Σk. Let L ∈ NP^Σk; then there exist a non-deterministic polynomial-time machine N and an oracle A ∈ Σk such that L = L(N^A). Since Σk = Πk, it follows that Ā ∈ Σk too. Therefore there are polynomially bounded and polynomial-time recognizable machines M, M′ such that:

PH = Σk x ∈ A ⟺ ∃y1 ∀y2 ∃y3 … Qk yk, such that M(x, y1, …, yk) accepts, and x ∈ Ā ⟺ ∃y1 ∀y2 … Qk yk, such that M′(x, y1, …, yk) accepts (note that both have the Σk form, precisely because Ā ∈ Σk). Using those machines and the definition of NP^Σk we get: x ∈ L ⟺ ∃y, q1, a1, …, qt, at such that for all 1 ≤ j ≤ t:

PH = Σk aj = 1 → qj ∈ A ⟺ ∃w1(j,1) ∀w2(j,1) … Qk wk(j,1), such that M(qj, w1(j,1), …, wk(j,1)) accepts
aj = 0 → qj ∈ Ā ⟺ ∃w1(j,0) ∀w2(j,0) … Qk wk(j,0), such that M′(qj, w1(j,0), …, wk(j,0)) accepts

PH = Σk We define: w1 is the concatenation of y, q1, a1, …, qt, at, w1(1,0), …, w1(t,0), w1(1,1), …, w1(t,1); and so on, down to wk, which is the concatenation of wk(1,0), …, wk(t,0), wk(1,1), …, wk(t,1).

PH = Σk We define a combined machine M such that M(x, w1, …, wk) accepts ⟺ for all 1 ≤ j ≤ t:
– aj = 1 ⟹ M(qj, w1(j,1), …, wk(j,1)) accepts
– aj = 0 ⟹ M′(qj, w1(j,0), …, wk(j,0)) accepts

PH = Σk All together we get that there exists a polynomially bounded and polynomial-time recognizable M such that: x ∈ L ⟺ ∃w1 ∀w2 … Qk wk, such that M(x, w1, …, wk) accepts. By the definition of Σk, L ∈ Σk.

BPP ⊆ Σ2 Not knowing whether BPP is contained in NP, it is some comfort to know that it is contained in the Polynomial Hierarchy, which extends NP. Theorem (Sipser–Lautemann): BPP ⊆ Σ2.

BPP ⊆ Σ2 We show a Σ2 algorithm that shifts the gap between the success probability for 'good' inputs and 'bad' ones: in the 'good' case, where the probability is high for a random string to succeed, that probability becomes exactly 1; the probability of success in the bad case stays below 1. At that point, x ∈ L iff for every random string our test succeeds.

BPP ⊆ Σ2 [Diagram: for any input, whether x ∈ L or x ∉ L, the algorithm is correct with probability 1 − 1/poly and mistaken with probability 1/poly.]

BPP ⊆ Σ2 We'll show how to shift the error, so that the algorithm will only make mistakes when the input is 'bad' (x ∉ L). [Diagram: the error region is shifted entirely to the x ∉ L side.]

BPP   2 For a ‘Good’ input: all random strings become accepting

BPP   2 For a ‘Bad’ input: some random strings must remain non-accepting

BPP ⊆ Σ2 Proof: let L ∈ BPP; then there exists a probabilistic polynomial-time Turing machine A(x, r), where x is the input and r is a random guess. By the definition of BPP, with some amplification we get, for some polynomial p(·): ∀x ∈ {0,1}^n, Pr_{r ∈R {0,1}^p(n)} [A(x, r) ≠ c(x)] < 1/(3·p(n)), where c(x) = 1 if x ∈ L and c(x) = 0 if x ∉ L. Note: the error probability achievable here depends on the randomness complexity of the algorithm; repeating the algorithm O(log n) times and taking a majority vote is enough to push a constant error down to an inverse-polynomial one such as this.
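The amplification step mentioned above can be computed exactly rather than simulated: if a single run errs with probability ε < 1/2, the majority vote of k independent runs errs with probability given by a binomial tail, which shrinks exponentially in k. The ε = 1/4 below is an arbitrary example value:

```python
from math import comb

# Error of a k-run majority vote, computed exactly:
#   Pr[majority wrong] = sum over j > k/2 of C(k, j) * eps^j * (1 - eps)^(k - j)

def majority_error(eps, k):
    return sum(comb(k, j) * eps**j * (1 - eps)**(k - j)
               for j in range(k // 2 + 1, k + 1))

eps = 0.25                      # single-run error of a hypothetical BPP algorithm
for k in (1, 9, 49):            # odd k avoids ties
    print(k, majority_error(eps, k))
# The error drops fast, quickly going below the 1/(3*p(n)) bound the proof needs.
```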

BPP ⊆ Σ2 Claim: for every x ∈ L ∩ {0,1}^n there exists a set of elements s1, …, sm ∈ {0,1}^m, where m = p(n), s.t. ∀r ∈ {0,1}^m, ∃i ∈ {1..m}: A(x, r ⊕ si) = 1. This means that there is a polynomially bounded list of elements such that, for every selection of r, at least one element, when XORed with r and used as the random string, will cause the algorithm to give the correct answer.

BPP ⊆ Σ2 Proof: the proof is based on the probabilistic method. The general sketch is: instead of proving existence of the sequence directly, we prove that a random sequence has positive probability of satisfying the claim. We'll actually upper-bound the probability that a random sequence {si} does not satisfy the claim. This is equal to the probability that for x ∈ L there exists an r s.t. for every i the algorithm rejects r ⊕ si.

BPP ⊆ Σ2 Joining the two claims we get: x ∈ L iff ∃s1, …, sm ∈ {0,1}^m ∀r ∈ {0,1}^m ∃i ∈ {1..m}: A(x, r ⊕ si) = 1, as we proved both directions of the claim. This proves L ∈ Σ2 as required, since the sentence above is in Σ2 form (the bounded ∃i is part of the polynomial-time test).
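The shift claim can be demonstrated concretely for a tiny m. Below, the accepting set misses only one string, and a greedy search finds at most m shifts covering every r; the actual proof uses random shifts via the probabilistic method, and greedy selection is used here only to make the demo deterministic:

```python
from itertools import product

m = 4
ALL = list(product((0, 1), repeat=m))
ACCEPT = set(ALL) - {(0, 0, 0, 0)}       # accepting set: all but one random string

def xor(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

# Greedily pick shifts s1, …: each round, take the shift that "fixes" the most
# still-uncovered r's (r is covered when some r XOR si lands in ACCEPT).
shifts, uncovered = [], set(ALL)
while uncovered and len(shifts) < m:
    s = max(ALL, key=lambda s: sum(xor(r, s) in ACCEPT for r in uncovered))
    shifts.append(s)
    uncovered = {r for r in uncovered if xor(r, s) not in ACCEPT}

print(len(shifts), not uncovered)        # every r ∈ {0,1}^m covered with ≤ m shifts
assert all(any(xor(r, s) in ACCEPT for s in shifts) for r in ALL)
```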

Proof: If NP ⊆ P/poly, then PH Collapses to Σ2 The proof will show that if NP ⊆ P/poly, then Σ2 = Π2. By the proposition proven earlier (with k = 2), we know that in that case PH = Σ2. Because Σ2 and Π2 are complementary classes, only one containment (either Σ2 ⊆ Π2 or Π2 ⊆ Σ2) needs to be shown; symmetrically, the other will follow as well. So the proof will only show that Π2 ⊆ Σ2.

Proof: If NP ⊆ P/poly, then PH Collapses to Σ2 By definition of Π2, we know that if L ∈ Π2, then there exists a ternary polynomially bounded relation R_L = {(x, y, z)} that is recognizable in polynomial time, such that x ∈ L if and only if ∀y ∃z, s.t. (x, y, z) ∈ R_L. The '∃z' part can be considered an NP relation R_NP, allowing us to say: if L ∈ Π2, then there exists a binary polynomially bounded relation R_NP = {(x, y)} that is recognizable in non-deterministic polynomial time, such that x ∈ L if and only if ∀y, (x, y) ∈ R_NP.

Proof: If NP ⊆ P/poly, then PH Collapses to Σ2 At this point, we note that our assumption is that NP ⊆ P/poly. This means, by definition of P/poly, that any NP language L has a family of circuits {Cn}, where for each n, Cn has n inputs and one output, there exists some polynomial p such that for all n, size(Cn) ≤ p(n), and Cn(x) computes whether x ∈ L for all x ∈ {0,1}^n. We don't know what this circuit is, but we can guess it. That is, we can write: ∃Cn, s.t. ∀x, Cn(x) = true iff x ∈ L. Evaluating Cn can be done in deterministic polynomial time. However, since we guessed this circuit, we also need to check that it does indeed decide membership in the language.

Proof: If NP ⊆ P/poly, then PH Collapses to Σ2 Let us assume that our NP language is 3SAT. This won't impact the generality, since 3SAT is NP-complete and any NP problem can be Karp-reduced to 3SAT in polynomial time. In this case, we have a 3SAT formula x with n variables, and we must determine whether there is an assignment of the variables that satisfies it. For this, we constructed a circuit that tells us whether our formula has a satisfying assignment. But we don't know whether the circuit is indeed a valid circuit.

Proof: If NP ⊆ P/poly, then PH Collapses to Σ2 So what we can do is construct from any n-variable formula φn a smaller formula φ′n with n − 1 variables, in which the n-th variable is set to false, and a formula φ″n with n − 1 variables, in which the n-th variable is set to true. Now, the big n-variable formula φn has a satisfying assignment iff at least one of φ′n or φ″n has a satisfying assignment. Since we assumed NP ⊆ P/poly and 3SAT is in NP, we know there is a corresponding circuit for inputs of length n − 1 that decides whether an (n − 1)-variable formula is in the language. So, again, we will guess this circuit, and then we can check that our main circuit was valid: ∃Cn, C_{n−1}, s.t. ∀φn: Cn(x) = true and Cn(φn) = (C_{n−1}(φ′n) ∨ C_{n−1}(φ″n)).
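The φ′n / φ″n splitting is the standard downward self-reduction of satisfiability: given only a decision procedure (below, a brute-force stand-in for the guessed circuits), a satisfying assignment can be recovered by fixing one variable at a time, which is exactly how the chain of circuits is used:

```python
from itertools import product

def eval_cnf(clauses, assign):
    # clauses: list of clauses; each clause is a list of (var, polarity) literals
    return all(any(assign[v] == pol for v, pol in cl) for cl in clauses)

def sat_decide(clauses, n):              # stand-in for the circuit family {Cn}
    return any(eval_cnf(clauses, a) for a in product([False, True], repeat=n))

def restrict(clauses, var, val):
    """Set `var` to `val`: drop satisfied clauses, remove falsified literals."""
    out = []
    for cl in clauses:
        if any(v == var and pol == val for v, pol in cl):
            continue
        out.append([(v, pol) for v, pol in cl if v != var])
    return out

def self_reduce(clauses, n):
    assign = {}
    for var in range(n):
        for val in (False, True):        # try φ' (var = false), then φ'' (var = true)
            if sat_decide(restrict(clauses, var, val), n):
                assign[var] = val
                clauses = restrict(clauses, var, val)
                break
        else:
            return None                  # unsatisfiable
    return assign

# (x0 ∨ x1) ∧ (¬x0 ∨ x2)
phi = [[(0, True), (1, True)], [(0, False), (2, True)]]
a = self_reduce(phi, 3)
print(a, eval_cnf(phi, a))               # the returned assignment satisfies φ
```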

Proof: If NP ⊆ P/poly, then PH Collapses to Σ2 But now, we don't know whether the smaller circuit C_{n−1} is valid. But we can check it by a method similar to the way we checked the Cn circuit. This gives us: ∃Cn, C_{n−1}, C_{n−2}, s.t. ∀φn, φ_{n−1}: Cn(x) = true and Cn(φn) = (C_{n−1}(φ′n) ∨ C_{n−1}(φ″n)) and C_{n−1}(φ_{n−1}) = (C_{n−2}(φ′_{n−1}) ∨ C_{n−2}(φ″_{n−1})). Furthermore, we can easily extend this to verify all circuits down to circuit C1, the circuit that decides whether a 1-variable 3SAT formula is in the language:

Proof: If NP ⊆ P/poly, then PH Collapses to Σ2 Checking whether a 1-variable 3SAT formula has a satisfying assignment is trivial: just test the formula with true and with false. So if we add this condition (that C1 answers correctly on 1-variable formulas) to our quantified formula, we have, assuming that NP ⊆ P/poly, a Σ2 quantified formula that determines whether a 3SAT formula x has a satisfying assignment. We had before an NP relation R_NP(x, y) for a language L ∈ Π2. We can Karp-reduce it to 3SAT, since 3SAT is NP-complete; let f_NP(x, y) be the reduction function. We can therefore write: x ∈ L iff ∀y ∃C1, …, Cn, s.t. the circuit-consistency conditions above hold and Cn(f_NP(x, y)) = true (taking n to bound the number of variables of f_NP(x, y)).

Proof: If NP ⊆ P/poly, then PH Collapses to Σ2 But since C1 … Cn are independent of y, we can rewrite this as a Σ2 quantified formula: x ∈ L iff ∃C1, …, Cn, ∀y, φ1, …, φn, s.t. the circuit-consistency conditions hold and Cn(f_NP(x, y)) = true. So, we have shown that, assuming NP ⊆ P/poly, any language in Π2 is also in Σ2; in other words, Π2 ⊆ Σ2. By symmetry, this also shows that Σ2 ⊆ Π2, and therefore Σ2 = Π2, and as shown before, this implies that PH collapses down to Σ2. QED