Complexity Theory Lecture 4 Lecturer: Moni Naor

Recap
Last week: Space Complexity
Savitch's Theorem: NSPACE(f(n)) ⊆ SPACE(f²(n))
– Collapse of NPSPACE to PSPACE
PSPACE-completeness – TQBF, games
Logarithmic space – deterministic and non-deterministic
Sublogarithmic space
Non-deterministic space is closed under complementation
This week: Probabilistic Space Complexity

The Central Questions of Complexity Theory
LogSpace ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE
Are any of the containments proper? All we can say is NL ⊊ PSPACE, since NL ⊆ SPACE(log² n) ⊊ PSPACE by diagonalization (the space hierarchy theorem).
Is NP = co-NP? Is P = NP ∩ co-NP?
– If P = NP ∩ co-NP then factoring is easy
Have not yet seen the power of: oracles, interaction, randomization

2-SAT
2-SAT: given a formula φ = C_1 ∧ C_2 ∧ … ∧ C_m where each clause C_i = (α ∨ β) with α ∈ {x_j, ¬x_j} and β ∈ {x_k, ¬x_k} for some j and k, is there a satisfying assignment for φ?
Let G(φ) be a directed graph with
– Nodes: the literals
– Edge (α, β) iff the clause (¬α ∨ β) exists; the edge represents the implication the clause forces
Claim: φ is satisfiable iff there is no x_i with both a path from x_i to ¬x_i and a path from ¬x_i to x_i in G(φ)
Corollary: 2-SAT is in NL
– easy to see that 2-UNSAT is in NL, then apply Immerman–Szelepcsényi
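As an illustration of the implication-graph criterion, here is a minimal Python sketch (not from the lecture; the clause encoding and function names are my own). It decides 2-SAT by building G(φ) and checking, for each variable, whether paths exist in both directions between x_i and ¬x_i; a plain DFS stands in for the NL reachability test.

```python
def two_sat(num_vars, clauses):
    """Decide 2-SAT via the implication graph G(phi).

    clauses: list of pairs of non-zero ints; literal k means x_k, -k means not x_k.
    Returns True iff no variable has paths both x_i -> not x_i and not x_i -> x_i.
    """
    # Implication graph: clause (a or b) contributes edges (not a -> b) and (not b -> a).
    edges = {}
    def add_edge(u, v):
        edges.setdefault(u, []).append(v)
    for a, b in clauses:
        add_edge(-a, b)
        add_edge(-b, a)

    def reachable(src, dst):
        # Plain DFS; an NL machine would instead guess the path using O(log n) space.
        stack, seen = [src], {src}
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for v in edges.get(u, []):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    for i in range(1, num_vars + 1):
        if reachable(i, -i) and reachable(-i, i):
            return False
    return True

# (x1 or x2) and (not x1 or x2) and (not x2 or x1) is satisfiable (x1 = x2 = True)
print(two_sat(2, [(1, 2), (-1, 2), (-2, 1)]))   # True
# (x1 or x1) and (not x1 or not x1) forces a contradiction
print(two_sat(1, [(1, 1), (-1, -1)]))           # False
```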

2-SAT is Complete for NL
Reduce from (un)reachability in DAGs
– this is NL-complete as well
Given a DAG G, source s and target t, construct φ(G):
– For each node x create a variable x
– For each edge (x,y) ∈ G: add the clause (¬x ∨ y)
– Add the clauses (s) and (¬t)
Claim: φ(G) is satisfiable iff there is no path from s to t in G
The construction is computable in log space
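A small Python sketch of this reduction (my own encoding; unit clauses are written as a literal repeated twice so they fit the pair format used by the two_sat sketch above):

```python
def unreachability_to_2sat(n, edges, s, t):
    """Reduce s-t unreachability in a DAG on nodes 1..n to 2-SAT.

    Output clauses use the convention: literal k = x_k, -k = not x_k.
    The formula is satisfiable iff there is NO path from s to t in the DAG.
    """
    clauses = [(s, s)]                        # unit clause (s), written as (s or s)
    clauses.append((-t, -t))                  # unit clause (not t)
    clauses += [(-x, y) for (x, y) in edges]  # edge (x,y) becomes the implication x -> y
    return clauses

# DAG 1 -> 2 -> 3 with s=1, t=3: a path exists, so the formula is unsatisfiable
# (feeding it to the two_sat sketch above returns False).
print(unreachability_to_2sat(3, [(1, 2), (2, 3)], s=1, t=3))
```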

Algorithm for 2-SAT
Start with an arbitrary assignment
While the assignment is not satisfying:
– Choose an arbitrary unsatisfied clause C_i = (α ∨ β)
– Choose one of {α, β} at random and flip the assignment to that variable
– If unsuccessful for many steps, stop and declare `unsat'
Analysis: let A ∈ {0,1}^n be any satisfying assignment
– With probability at least ½ the distance to A is reduced
– With probability at most ½ the distance to A is increased
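A minimal sketch of this randomized walk (my own; the 100n² step budget is an arbitrary constant multiple of the O(n²) expected hitting time established by the analysis below):

```python
import random

def random_walk_2sat(num_vars, clauses, max_steps=None):
    """Randomized 2-SAT by local flips: repeatedly pick an unsatisfied clause
    and flip one of its two variables at random. One-sided error: a 'sat'
    answer is always correct; 'unsat' may be wrong with small probability."""
    if max_steps is None:
        max_steps = 100 * num_vars ** 2   # well above the O(n^2) expected hitting time

    assign = {v: random.choice([False, True]) for v in range(1, num_vars + 1)}

    def value(lit):
        return assign[lit] if lit > 0 else not assign[-lit]

    for _ in range(max_steps):
        unsat = [c for c in clauses if not (value(c[0]) or value(c[1]))]
        if not unsat:
            return 'sat', assign
        a, b = random.choice(unsat)          # an (arbitrary) unsatisfied clause
        var = abs(random.choice((a, b)))     # one of its two literals, chosen uniformly
        assign[var] = not assign[var]
    return 'unsat', None

print(random_walk_2sat(2, [(1, 2), (-1, 2), (-2, 1)])[0])  # almost surely 'sat'
```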

Analysis of the Algorithm
The distance to A can never be larger than n
Want to compute how long it takes a pebble to reach 0 with high probability if it starts at some 0 ≤ i ≤ n
Dominated by a walk on the line {0, 1, …, n} where
– With probability exactly ½ the distance to A is reduced
– With probability exactly ½ the distance to A is increased
– When the pebble hits 0 it is absorbed
– When the pebble hits n it is reflected
Dominated: for every t, the probability that the exact ½/½ process takes more than t steps is at least that of the original process

Analysis of Random Walks
Would like to be able to say: what is the expected time to visit 0?
For what time bound can we say that 0 is reached with high probability?

Probabilistic Turing Machines
Probabilistic TM: the transition function δ: Γ × Q → Γ × Q × {left, right} may be probabilistic: a probability distribution on the set of moves
Alternative view: in addition to the input tape, a random-coins tape
– How do we count it in space-bounded computation? Head movement on the coins tape is one-way
– Accuracy
When is the PTM M considered to recognize a language L?
Two-sided error: for all x ∈ L, Pr[M stops with `yes'] > 2/3; for all x ∉ L, Pr[M stops with `no'] > 2/3
One-sided error: for all x ∈ L, Pr[M stops with `yes'] > 1/2; for all x ∉ L, Pr[M stops with `no'] = 1
Zero error: M never stops with the wrong answer but might never stop; want to consider the expected consumption of resources
Monte Carlo (always stops, may err) vs. Las Vegas (never errs)

Randomized LogSpace (RL)
A probabilistic Turing machine with a worktape of size O(log |x|)
– Long enough to count and ensure stopping in polynomial time
RL: one-sided error
– For all x ∈ L, Pr[M stops with `yes'] > 1/2
– For all x ∉ L, Pr[M stops with `no'] = 1
BPL: two-sided error
– For all x ∈ L, Pr[M stops with `yes'] > 2/3
– For all x ∉ L, Pr[M stops with `no'] > 2/3
ZPL: no error, but the stopping time is only expected polynomial
Show: ZPL = RL ∩ co-RL

Undirected Connectivity and RL
Given an undirected graph G=(V,E):
– for nodes s and t, is there a path from s to t?
– is the graph connected?
Homework: the two versions are log-space equivalent via oracle reductions
Idea of the algorithm: perform a random walk for a certain amount of time; if node t is visited declare `connected', otherwise `unconnected'
Random walk: if at node u with degree d_u, choose a neighbor of u uniformly at random and move to it
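A small Python sketch of this randomized connectivity test (my own; the 4n³ step budget anticipates the cover-time bound proved later in the lecture, and the adjacency-list format is an assumption):

```python
import random

def undirected_st_connected(adj, s, t, steps=None):
    """Random-walk test for s-t connectivity in an undirected graph.

    adj: dict mapping each node to a list of its neighbors (symmetric).
    One-sided error: 'True' is always correct; 'False' may be wrong,
    with constant error probability for the given step budget.
    """
    n = len(adj)
    if steps is None:
        steps = 4 * n ** 3       # enough for constant success probability (cover-time bound)
    u = s
    for _ in range(steps):
        if u == t:
            return True
        u = random.choice(adj[u])  # move to a uniformly random neighbor
    return u == t

path = {0: [1], 1: [0, 2], 2: [1]}               # 0 - 1 - 2
print(undirected_st_connected(path, 0, 2))       # almost surely True
two_comps = {0: [1], 1: [0], 2: [3], 3: [2]}     # two components
print(undirected_st_connected(two_comps, 0, 3))  # False
```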

Markov Chains
A discrete-time stochastic process X_0, X_1, …, X_t, …
– defined over a set of states S, finite or countably infinite
– described by a transition matrix P
Entry P_ij is the probability that the next state is j given that the current state is i: P_ij = Pr[X_{t+1}=j | X_t=i]
– For all i,j ∈ S: 0 ≤ P_ij ≤ 1 and Σ_{j ∈ S} P_ij = 1
Memoryless property: Pr[X_{t+1}=j | X_t=i] = Pr[X_{t+1}=j | X_t=i, X_{t-1}=i_{t-1}, …, X_0=i_0]
– what matters is where I am, not how I got here
Example: pebble walk on the line, S = {0,1,…,n}
– For all 0 < i < n: P_{i,i+1} = ½ and P_{i,i-1} = ½
– P_{0,0} = 1 and P_{n,n-1} = 1; all other P_ij = 0
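For concreteness, a short sketch (my own, not from the lecture) that builds the transition matrix of this line walk and estimates, by simulation, the expected time to reach 0 starting from n; the estimate comes out close to the n² suggested by the 2-SAT analysis above.

```python
import random

def line_walk_matrix(n):
    """Transition matrix of the pebble walk on {0,...,n}: absorbing at 0, reflecting at n."""
    P = [[0.0] * (n + 1) for _ in range(n + 1)]
    P[0][0] = 1.0
    P[n][n - 1] = 1.0
    for i in range(1, n):
        P[i][i - 1] = 0.5
        P[i][i + 1] = 0.5
    return P

def hitting_time_to_zero(n, start, trials=2000):
    """Monte Carlo estimate of E[steps to reach 0] starting from `start`."""
    total = 0
    for _ in range(trials):
        i, steps = start, 0
        while i != 0:
            i = i - 1 if (i == n or random.random() < 0.5) else i + 1
            steps += 1
        total += steps
    return total / trials

print(line_walk_matrix(3))            # small example of the matrix itself
print(hitting_time_to_zero(20, 20))   # roughly n^2 = 400
```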

Markov Chains
Initial state distribution q^(0) = (q_1^(0), q_2^(0), …, q_n^(0))
State probability vector q^(t) = (q_1^(t), q_2^(t), …, q_n^(t)): the distribution of the chain at time t
q^(t+1) = q^(t) P and q^(t) = q^(0) P^t
Stationary distribution: for a Markov chain with transition matrix P, a distribution π such that π = πP
If a Markov chain is in a stationary distribution at step t it remains so forever (steady-state behavior)
There can be a unique stationary distribution, several, or none; which one the chain ends up in can depend on the initial distribution
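A minimal sketch (my own) of the relation q^(t) = q^(0) P^t: iterating the distribution of a small two-state chain until it stops changing yields its (here unique) stationary distribution.

```python
def evolve(q, P):
    """One step of the chain on distributions: q <- q P."""
    n = len(q)
    return [sum(q[i] * P[i][j] for i in range(n)) for j in range(n)]

def stationary_by_iteration(P, q0, steps=1000):
    """Iterate q^(t+1) = q^(t) P; for nice chains this converges to pi with pi = pi P."""
    q = q0
    for _ in range(steps):
        q = evolve(q, P)
    return q

P = [[0.9, 0.1],
     [0.2, 0.8]]                                   # a small ergodic chain
print(stationary_by_iteration(P, [1.0, 0.0]))      # approx [2/3, 1/3], independent of q0
```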

Markov Chains
For states i,j ∈ S, the first transition probability r_ij(t) is the probability that, given X_0 = i, the first visit to state j occurs at step t:
r_ij(t) = Pr[X_t = j ∧ X_s ≠ j for 1 ≤ s < t | X_0 = i]
Probability that there is such a visit: f_ij = Σ_{t>0} r_ij(t)
Hitting time: let h_ij be the expected number of steps to reach state j starting from state i: h_ij = Σ_{t>0} t · r_ij(t)
If f_ij < 1 then h_ij = ∞; the converse is not necessarily true in infinite chains
If f_ii = 1 then state i is called persistent

Markov Chains
Underlying directed graph of a Markov chain: edge (i,j) iff P_ij > 0
A Markov chain is irreducible if its underlying graph consists of a single strongly connected component
Periodicity of a state i: the GCD of the lengths of the paths from i back to itself
– Bipartite graphs: periodicity 2
– A state is aperiodic if it has period 1

Fundamental Theorem of Markov Chains
For any irreducible, finite and aperiodic Markov chain:
– All states are ergodic (h_ii < ∞)
– There is a unique stationary distribution π
– For all i ∈ S: π_i > 0, f_ii = 1 and h_ii = 1/π_i

A Famous Markov Chain: the PageRank Algorithm [Brin and Page 98]
Good authorities should be pointed to by good authorities
Random walk on the web graph:
– pick a page at random
– with probability α follow a random outgoing link
– with probability 1-α jump to a random page
Rank pages according to the stationary distribution
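A toy sketch of this walk's stationary distribution (my own, via power iteration on a four-page graph; α = 0.85 is the customary damping factor, not something specified in the slide, and every page is assumed to have at least one outgoing link):

```python
def pagerank(links, alpha=0.85, iters=100):
    """Stationary distribution of the PageRank walk: with probability alpha follow a
    random outgoing link, with probability 1-alpha jump to a uniformly random page.

    links: dict node -> list of nodes it links to (assumed non-empty here)."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - alpha) / n for u in nodes}
        for u in nodes:
            share = alpha * rank[u] / len(links[u])
            for v in links[u]:
                new[v] += share
        rank = new
    return rank

web = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a'], 'd': ['c']}
ranks = pagerank(web)
print(sorted(ranks, key=ranks.get, reverse=True))   # 'c' collects the most rank, then 'a'
```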

Random Walks on Graphs
Undirected graph G=(V,E), |E|=m, |V|=n
The transition matrix: P_uv = 1/d_u if (u,v) ∈ E and 0 otherwise
Claim: if G=(V,E) is connected and non-bipartite, then for all v ∈ V the stationary distribution satisfies π_v = d_v/2m
Conclusion: h_uu = 2m/d_u
How to make the graph non-bipartite? Add self-loops
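A quick numerical check of the claim (my own sketch; the example graph is arbitrary): on a small connected non-bipartite graph, the vector (d_v/2m) is fixed by the walk's transition matrix.

```python
def walk_matrix(adj):
    """Transition matrix of the simple random walk: P[u][v] = 1/deg(u) for neighbors v."""
    nodes = sorted(adj)
    return [[1 / len(adj[u]) if v in adj[u] else 0.0 for v in nodes] for u in nodes]

# Triangle with a pendant edge: connected and non-bipartite (odd cycle), m = 4 edges.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
m = sum(len(v) for v in adj.values()) // 2
P = walk_matrix(adj)
pi = [len(adj[u]) / (2 * m) for u in sorted(adj)]   # claimed stationary distribution d_v / 2m
pi_next = [sum(pi[i] * P[i][j] for i in range(len(pi))) for j in range(len(pi))]
print(pi)
print(pi_next)   # agrees with pi up to floating-point rounding
```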

Cover and Commute Time
Commute time C_uv: the expected time for a random walk starting at u to reach v and come back; C_uv = h_uv + h_vu
Cover time C_u(G): the expected length of a walk starting at u that visits every node at least once; C(G) = max_u C_u(G)
Lemma: for any edge (u,v): C_uv = h_uv + h_vu ≤ 2m
– not true for non-edges
Proof idea: consider the Markov chain on directed edges; its stationary distribution is uniform, giving each edge-plus-direction probability 1/2m, so the expected time to get from u to v and return along the edge (v,u) is 2m

Upper Bound on the Cover Time
Theorem: C_u(G) ≤ 2m(n-1)
Proof: consider any spanning tree T of G. There is a traversal of T, v_0, v_1, …, v_{2n-2}, in which each edge of T is traversed once in each direction. Consider a random walk starting and ending at v_0 that traverses each edge of T once in each direction; its expected length is an upper bound on the cover time:
C_u(G) ≤ Σ_{j=0}^{2n-3} h_{v_j v_{j+1}} = Σ_{(u,w) ∈ T} C_uw ≤ (n-1) · 2m
– each commute time along a tree edge is at most 2m, by the Lemma
Conclusion: taking 4n³ as the step bound in the random-walk algorithm suffices for a constant probability of success
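A quick empirical check of the 2m(n-1) bound (my own sketch, on a cycle, where the true cover time is much smaller than the bound):

```python
import random

def cover_time_estimate(adj, start, trials=500):
    """Monte Carlo estimate of the cover time C_start(G)."""
    total = 0
    for _ in range(trials):
        u, seen, steps = start, {start}, 0
        while len(seen) < len(adj):
            u = random.choice(adj[u])
            seen.add(u)
            steps += 1
        total += steps
    return total / trials

n = 10
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # cycle graph, m = n edges
print(cover_time_estimate(cycle, 0), "bound:", 2 * n * (n - 1))   # estimate well under 180
```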

Universal Traversal Sequences
Let G be a regular graph of degree d
Labeling: the edges incident to a node are labeled with {1,2,…,d} in an arbitrary manner
– port numbers
Can we find a polynomial-size sequence of labels s_1, s_2, …, s_k with s_i ∈ {1,2,…,d} such that following it is guaranteed to visit all nodes of the graph?
Yes, such a sequence exists. Use the probabilistic method:
– fix a graph and a labeling
– compute the probability that a random sequence is not good for that graph
– sum the failure probabilities over all graphs (there are n^{O(dn)} of them)
A log-space construction of such a sequence would put undirected connectivity in log space

Hot off the Press
Omer Reingold, Undirected ST-Connectivity in Log-Space. Available: Electronic Colloquium on Computational Complexity, Report TR
Important web resources on complexity:
– ECCC
– Lance Fortnow's Computational Complexity Web Log

What Happens in Directed Graphs
There are strongly connected directed graphs with exponential cover time
Probabilistically one can count up to doubly exponential values in logarithmic space
So if we do not insist on always stopping, directed connectivity can be decided in randomized log space (but exponential time)
– uninteresting

Probabilistic Variants of P
RP: one-sided error
– For all x ∈ L, Pr[M stops with `yes'] > 1/2
– For all x ∉ L, Pr[M stops with `no'] = 1
BPP: two-sided error
– For all x ∈ L, Pr[M stops with `yes'] > 2/3
– For all x ∉ L, Pr[M stops with `no'] > 2/3
ZPP: no error, but the stopping time is only expected polynomial

The Schwartz–Zippel Algorithm/Theorem
Theorem: Let Q(x_1, x_2, …, x_n) ∈ F[x_1, x_2, …, x_n] be a non-zero multivariate polynomial of total degree d. Fix any finite set S ⊆ F and choose r_1, r_2, …, r_n from S uniformly and independently at random. Then Pr[Q(r_1, r_2, …, r_n) = 0] ≤ d/|S|
Proof: by induction on n
Useful when Q is not given explicitly and we want to test whether it is identically zero
– e.g., the determinant of a matrix
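A minimal polynomial-identity-testing sketch along these lines (my own; the functions, trial count and field size are assumptions): to test whether two arithmetic expressions define the same polynomial, evaluate both at random points of a large set; by Schwartz–Zippel, agreement at random points is strong evidence of identity.

```python
import random

def probably_equal(f, g, n, trials=20, field_size=10**9 + 7):
    """Schwartz-Zippel identity test for two degree-d polynomials given as
    black boxes f, g on n variables: if f != g as polynomials, each trial
    detects a difference except with probability <= d / field_size."""
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(n)]
        if f(point) % field_size != g(point) % field_size:
            return False
    return True

# (x + y)^2 versus x^2 + 2xy + y^2: the same polynomial, so the test accepts
f = lambda p: (p[0] + p[1]) ** 2
g = lambda p: p[0] ** 2 + 2 * p[0] * p[1] + p[1] ** 2
print(probably_equal(f, g, n=2))   # True
h = lambda p: p[0] ** 2 + p[1] ** 2
print(probably_equal(f, h, n=2))   # almost surely False
```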

Matching in Bipartite Graphs
Let G=(V,U,E) be a bipartite graph with |V|=|U|=n
Define A to be the n × n matrix with |E| variables: A_ij = x_ij if (i,j) ∈ E and 0 otherwise
Theorem (Edmonds): G has a perfect matching iff det(A) ≢ 0 as a polynomial
Proof: det(A) = Σ_{σ ∈ S_n} sgn(σ) A_{1,σ(1)} A_{2,σ(2)} … A_{n,σ(n)}; each non-zero term corresponds to a perfect matching, and distinct terms use distinct sets of variables, so they cannot cancel

Algorithm for Deciding Whether a Matching Exists in Bipartite Graphs
Fix a prime P larger than 2n
Choose {r_ij}_{(i,j) ∈ E} uniformly at random from GF[P]
Compute det(A({r_ij}))
If non-zero, declare that a matching exists; otherwise, declare no matching
In general, computing det(A) is more expensive than running a combinatorial algorithm for matching
– not true in parallel computation
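A small sketch of this test (my own), using Gaussian elimination over GF(P) to compute the determinant of the random substitution matrix; the prime P = 1,000,003 and the node numbering are arbitrary choices, not part of the lecture.

```python
import random

def det_mod_p(M, p):
    """Determinant of a square matrix over GF(p) by Gaussian elimination."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % p), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det                           # row swap flips the sign
        det = det * M[col][col] % p
        inv = pow(M[col][col], p - 2, p)         # inverse via Fermat's little theorem
        for r in range(col + 1, n):
            factor = M[r][col] * inv % p
            for c in range(col, n):
                M[r][c] = (M[r][c] - factor * M[col][c]) % p
    return det % p

def probably_has_perfect_matching(n, edges, p=1_000_003):
    """Edmonds' test: substitute random field elements for the edge variables."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = random.randrange(1, p)
    return det_mod_p(A, p) != 0

# Perfect matching exists, e.g. {(0,0), (1,1)}
print(probably_has_perfect_matching(2, [(0, 0), (1, 1), (0, 1)]))  # almost surely True
# No perfect matching: both left vertices only see right vertex 0
print(probably_has_perfect_matching(2, [(0, 0), (1, 0)]))          # False
```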