1 On the Computation of the Permanent Dana Moshkovitz.



2 Overview
§ Presenting the problem.
§ Introducing the Markov chain Monte-Carlo method.

3 Perfect Matchings in Bipartite Graphs
An undirected graph G=(U∪V,E) is bipartite if U∩V=∅ and E⊆U×V. A 1-1 and onto function f:U→V is a perfect matching if for every u∈U, (u,f(u))∈E.

4 Finding Perfect Matchings is Easy Matching as a flow problem

5 What About Counting Them?
§ Let A=(a(i,j)), 1≤i,j≤n, be the adjacency matrix of a bipartite graph G=({u1,...,un}∪{v1,...,vn},E), i.e. a(i,j)=1 iff (ui,vj)∈E.
§ The permanent of A is Perm(A) = Σσ Πi=1..n a(i,σ(i)), where the sum ranges over the permutations σ of {1,...,n}.
§ The number of perfect matchings in the graph is exactly Perm(A).
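As a concrete (exponential-time) baseline, the definition can be checked directly: the permanent of the 0-1 adjacency matrix counts the perfect matchings. A minimal Python sketch (the function name and the example matrix are ours, not from the lecture):

```python
from itertools import permutations

def permanent(A):
    """Naive permanent: sum over all permutations sigma of the
    product a(i, sigma(i)).  Exponential time -- illustration only."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
            if prod == 0:          # dead permutation, skip the rest
                break
        total += prod
    return total

# 0-1 adjacency matrix: a(i,j) = 1 iff (u_i, v_j) is an edge.
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(permanent(A))  # 2: the matchings {u1v1,u2v2,u3v3} and {u1v2,u2v3,u3v1}
```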

6 Cycle-Covers
Given an undirected bipartite graph G=({u1,...,un}∪{v1,...,vn},E), the corresponding directed graph is G’=({w1,...,wn},E’), where (wi,wj)∈E’ iff (ui,vj)∈E.
Definition: Given a directed graph G=(V,E), a set of node-disjoint cycles that together cover V is called a cycle-cover of G.
Observation: Every perfect matching in G corresponds to a cycle-cover in G’ and vice-versa.
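The observation can be tested mechanically: a cycle-cover of G’ is exactly a permutation σ all of whose edges (wi,wσ(i)) are present, so counting cycle-covers is again a permanent computation. A brute-force sketch (names and the toy graph are ours):

```python
from itertools import permutations

def count_cycle_covers(edges, n):
    """Count cycle-covers of a directed graph on nodes 0..n-1.
    Choosing one outgoing edge per node so that in-degrees are also
    all 1 is exactly choosing a permutation sigma with every edge
    (i, sigma(i)) present; those edges split into node-disjoint
    cycles covering all nodes."""
    E = set(edges)
    return sum(all((i, sigma[i]) in E for i in range(n))
               for sigma in permutations(range(n)))

# Directed G' derived from a bipartite G: (w_i, w_j) iff (u_i, v_j) in E.
E = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 0), (2, 2)]
print(count_cycle_covers(E, 3))  # 2, the number of perfect matchings of G
```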

7 Three Ways To View Our Problem 1) Counting the number of Perfect Matchings in a bipartite graph. 2) Computing the Permanent of a 0-1 matrix. 3) Counting the number of Cycle-Covers in a directed graph.

8 #P - A Complexity Class of Counting Problems
f∈#P iff f(x)=|{ y : |y|≤p(|x|) and R(x,y) }| for some polynomial p and some polynomial-time decidable binary relation R, i.e. R is the witness relation associated with some NP problem.
We say a #P function is #P-Complete, if every #P function Cook-reduces to it. It is well known that #SAT (i.e. counting the number of satisfying assignments) is #P-Complete.
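For concreteness, #SAT — the canonical #P-Complete function — can be evaluated by brute force over all witnesses y (the assignments); the clause encoding below is our own:

```python
from itertools import product

def count_sat(clauses, n_vars):
    """#SAT by exhaustive search.  A clause is a tuple of literals:
    +i means variable i is true, -i means variable i is false.
    The check against one assignment plays the role of the
    polynomial-time relation R(x, y)."""
    count = 0
    for y in product([False, True], repeat=n_vars):
        if all(any(y[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3):
print(count_sat([(1, 2), (-1, 3)], 3))  # 4 satisfying assignments
```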

9 On the Hardness of Computing the Permanent
Claim [Val79]: Counting the number of cycle-covers in a directed graph is #P-Complete.
Proof: By a reduction from #SAT to a generalization of the problem.

10 The Generalization: Integer Permanent
§ Activity: an integer weight attached to each edge (u,v)∈E, denoted λ(u,v).
§ The activity of a matching M is λ(M)=Π(u,v)∈M λ(u,v).
§ The activity of a set of matchings S is λ(S)=ΣM∈S λ(M).
§ The goal is to compute the total activity of the set of all perfect matchings.

11 Integer Permanent Reduces to 0-1 Permanent
We would have loved to do something of this sort...
[figure: an edge of activity 2 simulated by parallel activity-1 connections to the rest of the graph]

12 Integer Permanent Reduces to 0-1 Permanent
So instead we do:
[figure: the replacement gadget, attached to the rest of the graph]

13 But this is really cheating! The integers may be exponentially large, but we are forbidden to add an exponential number of nodes!

14 The Solution
[figure: the gadget construction, attached to the rest of the graph]

15 What About Negative Numbers?
§ Without loss of generality, let us assume the only negative numbers are -1’s.
§ We can reduce the problem to calculating the Permanent modulo a (big enough) N of a 0-1 matrix by replacing each -1 with (N-1).
§ Obviously, Perm mod N is efficiently reducible to calculating the Permanent.
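The trick in the second bullet can be sketched directly: pick N larger than twice the largest possible |Perm|, replace each -1 by N-1, and read the signed value back from the residue. A brute-force permanent stands in for the 0-1 reduction; all names are illustrative:

```python
from itertools import permutations
from math import factorial

def perm_mod(A, N):
    """Permanent modulo N, computed naively."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod = (prod * A[i][sigma[i]]) % N
        total = (total + prod) % N
    return total

def signed_permanent_via_mod(A):
    """For entries in {-1, 0, 1} we have |Perm(A)| <= n!, so any
    N > 2*n! lets us recover the signed permanent from its residue."""
    n = len(A)
    N = 2 * factorial(n) + 1
    B = [[x % N for x in row] for row in A]   # -1 becomes N-1
    r = perm_mod(B, N)
    return r - N if r > N // 2 else r

print(signed_permanent_via_mod([[-1, 0], [0, 1]]))  # -1
```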

16 Continuing With The Hardness Proof § We showed that computing the permanent of an integer matrix reduces to computing the permanent of a 0-1 matrix. § It remains to prove the reduction from #SAT to integer Permanent. § We start by presenting a few gadgets.

17 The Choice Gadget
Observation: in any cycle-cover the two nodes must be covered by either the left cycle (x = true) or the right cycle (x = false).
[figure: the choice gadget, left cycle labeled x = true, right cycle labeled x = false]

18 The Clause Gadget
Observation:
§ No cycle-cover of this graph contains all three external edges.
§ However, for every proper subset of the external edges, there is exactly one cycle-cover containing it.
(Each external edge corresponds to one literal of the clause.)

19 The Exclusive-Or Gadget
§ The Perm. of the whole matrix is 0.
§ The Perm. of the matrix resulting if we delete the first (last) row and column is 0.
§ The Perm. of the matrix resulting if we delete the first (last) row and the last (first) column is 4.

20 Plugging in the XOR-Gadget
Observe a cycle-cover of the graph with a XOR-gadget plugged in between edges e and t as in the figure.
§ If e is traversed but not t (or vice versa), the contribution to the Perm. is multiplied by 4.
§ Otherwise, the contribution to the Perm. is 0.
[figure: a XOR-gadget connecting edge e to edge t]

21 Putting It All Together
§ One choice gadget for every variable.
§ One clause gadget for every clause.
§ XOR-gadgets connect each clause's external edges to the corresponding choice gadgets: to the x = true side if the literal is x, and to the x = false side if the literal is ¬x.

22 Sum Up
§ Though finding a perfect matching in a bipartite graph can be done in polynomial time,
§ counting the number of perfect matchings is #P-Complete, and hence believed to be impossible in polynomial time.
§ So what can we do?

23 Our Goal - FPRAS for Perm
Describing an algorithm which, given a 0-1 n×n matrix M and an ε>0, computes, in time polynomial in n and in ε^-1, a r.v. Y s.t.
Pr[(1-ε)Perm(M) ≤ Y ≤ (1+ε)Perm(M)] ≥ 1-δ,
where 0<δ≤¼.

24 The Markov Chain Monte Carlo Method
§ Let Ω be a very large (but finite) set of combinatorial structures,
§ and let π be a probability distribution on Ω.
§ The task is to sample an element of Ω according to the distribution π.

25 The Connection to Approximate Counting
The Monte-Carlo method, for estimating the size of a set G⊆U:
§ Choose u1,...,uN ∈ U uniformly at random.
§ Let Y=|{i : ui∈G}|.
§ Output Y·|U|/N.
Analysis: By a standard Chernoff bound, N = O((|U|/|G|)·ε^-2·log δ^-1) samples suffice for a (1±ε)-approximation with probability at least 1-δ.
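The three steps above can be run as-is on a toy universe (the instance below is ours, chosen so the true answer is known):

```python
import random

def monte_carlo_count(universe, in_G, N, rng):
    """Estimate |G| for G inside a known finite universe U:
    sample N uniform elements of U, scale the hit fraction by |U|."""
    hits = sum(in_G(rng.choice(universe)) for _ in range(N))
    return hits * len(universe) / N

# Toy instance: U = {0,...,999}, G = multiples of 5, so |G| = 200.
rng = random.Random(0)
U = list(range(1000))
est = monte_carlo_count(U, lambda u: u % 5 == 0, 20000, rng)
print(est)  # close to 200
```

Note that the number of samples needed grows with |U|/|G|, which is why this method alone does not give an FPRAS when G is exponentially sparse inside U.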

26 Randomized Self Reducibility
§ Let M denote the set of perfect matchings.
§ For any e∈E let m_e be the number of perfect matchings containing e.
§ Let m_ne be the number of perfect matchings not containing e.
§ Claim: If |E|>n+1>2 and |M|>0, then there exists e∈E s.t. m_ne/|M| ≥ 1/n.

27 Counting Reduces to Sampling
PermFPRAS(G)
Input: a bipartite graph G=(V∪U,E).
Output: an approximation for |M|.
1. if |E|≤n+1 or n<2, compute |M| exactly.
2. for each e∈E do
3.   sample ⌈4n|E|² ln(2|E|/δ)/ε²⌉ perfect matchings
4.   Y ← fraction of matchings not containing e
5.   if Y ≥ 1/n, return PermFPRAS(V∪U,E\{e})/Y
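The recursion can be exercised end-to-end on a small graph by plugging in a stand-in sampler — here brute-force enumeration, whereas the lecture's algorithm gets its samples from the Markov chain built next. All names and the toy instance are ours:

```python
import random
from itertools import permutations

def perfect_matchings(n, E):
    """All perfect matchings of ({u_i}, {v_j}, E), as permutations."""
    return [s for s in permutations(range(n))
            if all((i, s[i]) in E for i in range(n))]

def approx_count(n, E, samples, rng):
    """Self-reduction sketch: find an edge e avoided by a fraction
    Y >= 1/n of sampled matchings, recurse on G - e, divide by Y."""
    M = perfect_matchings(n, E)
    if len(E) <= n + 1 or n < 2:
        return len(M)                      # base case: count exactly
    for e in sorted(E):
        draws = [rng.choice(M) for _ in range(samples)]
        Y = sum(m[e[0]] != e[1] for m in draws) / samples
        if Y >= 1 / n:
            return approx_count(n, E - {e}, samples, rng) / Y
    return len(M)

rng = random.Random(1)
E = {(i, j) for i in range(3) for j in range(3)}   # K_{3,3}: 6 matchings
print(approx_count(3, E, 5000, rng))  # approximately 6
```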

28 Markov Chains
Definition: A sequence of random variables {Xt}t≥0 is a Markov Chain (MC) with state space Ω, if
Pr[Xt+1=y | Xt=xt,...,X0=x0] = Pr[Xt+1=y | Xt=xt]
for any natural t and x0,...,xt∈Ω.
We only deal with time-homogeneous MCs, i.e. Pr[Xt+1=y | Xt=xt] is independent of t.

29 Graph Representation of MC
Conceptually, a Markov chain is a HUGE directed weighted graph.
§ The nodes correspond to the objects in Ω.
§ Xt = position in step t.
§ The weight of an edge (x,y)∈Ω×Ω is P(x,y)=Pr[X1=y|X0=x].

30 Iterated Transition
Definition: For any natural t, P^t is the t-th power of the transition matrix P, i.e. P^t(x,y)=Pr[Xt=y|X0=x].

31 More Definitions
§ A MC is irreducible, if for every pair of states x,y∈Ω, there exists t∈ℕ s.t. P^t(x,y)>0.
§ A MC is aperiodic, if gcd{t : P^t(x,x)>0}=1 for every x∈Ω.
§ A finite MC is ergodic if it is both irreducible and aperiodic.

32 Stationary Distribution
Definition: A probability distribution π:Ω→[0,1] is a stationary distribution of a MC with transition matrix P, if π(y)=Σx∈Ω π(x)P(x,y).
Proposition: An ergodic MC converges to a unique stationary distribution π:Ω→(0,1], i.e. for all x,y∈Ω, lim t→∞ P^t(x,y)=π(y).
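For a small chain the stationary distribution can be found by simply iterating π ← πP from any start, which also illustrates the convergence in the proposition (the example matrix is ours):

```python
def stationary(P, iters=200):
    """Approximate the stationary distribution of an ergodic chain
    by repeatedly applying the transition matrix to any start."""
    n = len(P)
    pi = [1.0] + [0.0] * (n - 1)           # arbitrary start state
    for _ in range(iters):
        pi = [sum(pi[x] * P[x][y] for x in range(n)) for y in range(n)]
    return pi

# A symmetric (hence doubly stochastic) chain: stationary = uniform.
P = [[0.5, 0.3, 0.2],
     [0.3, 0.5, 0.2],
     [0.2, 0.2, 0.6]]
pi = stationary(P)
print(pi)  # each entry close to 1/3
```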

33 Time Reversible Chains
Definition: Markov chains for which some distribution π satisfies π(M)P(M,M’)=π(M’)P(M’,M) for all M,M’∈Ω (the detailed balance condition) are called (time) reversible. Moreover, such a π is necessarily the stationary distribution.

34 Mixing Time
Definition: Given a MC with transition matrix P and stationary distribution π, we define the mixing time as
τx(ε)=min{ t : ½ Σy∈Ω |P^t(x,y)-π(y)| ≤ ε }.
Definition: A MC is rapidly mixing, if for any fixed ε>0, τx(ε) is bounded above by a polynomial (in the size of the problem).
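Both definitions can be computed exactly for a tiny chain by powering P and measuring the total variation distance at each step (the 3-state example is ours):

```python
def tv_from_x(P, pi, x, t):
    """Total variation distance between P^t(x, .) and pi."""
    n = len(P)
    row = [1.0 if y == x else 0.0 for y in range(n)]
    for _ in range(t):
        row = [sum(row[z] * P[z][y] for z in range(n)) for y in range(n)]
    return 0.5 * sum(abs(row[y] - pi[y]) for y in range(n))

def mixing_time(P, pi, x, eps):
    """Smallest t at which the chain started from x is eps-close to pi."""
    t = 0
    while tv_from_x(P, pi, x, t) > eps:
        t += 1
    return t

P = [[0.5, 0.3, 0.2],
     [0.3, 0.5, 0.2],
     [0.2, 0.2, 0.6]]
pi = [1 / 3, 1 / 3, 1 / 3]
print(mixing_time(P, pi, 0, 0.01))  # a handful of steps for this tiny chain
```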

35 Conductance
Definition: the conductance of a reversible MC is defined as Φ=min{ Φ(S) : S⊆Ω, 0<π(S)≤½ }, where Φ(S) = (Σx∈S,y∉S π(x)P(x,y)) / π(S).
Theorem: For an ergodic, reversible Markov chain with self-loop probabilities P(x,x)≥½ for all states x∈Ω,
τx(ε) ≤ (2/Φ²)·(ln π(x)^-1 + ln ε^-1).
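Conductance is brute-forceable for a small state space by minimizing over all subsets S with π(S) ≤ ½ (the example chain is ours; note its self-loop probabilities are all ≥ ½, as the theorem requires):

```python
from itertools import combinations

def conductance(P, pi):
    """Phi = min over S with 0 < pi(S) <= 1/2 of the stationary
    probability flow leaving S, divided by pi(S)."""
    n = len(P)
    best = float("inf")
    for k in range(1, n):
        for S in combinations(range(n), k):
            piS = sum(pi[x] for x in S)
            if piS <= 0.5:
                flow = sum(pi[x] * P[x][y]
                           for x in S for y in range(n) if y not in S)
                best = min(best, flow / piS)
    return best

P = [[0.5, 0.3, 0.2],    # self-loops all >= 1/2
     [0.3, 0.5, 0.2],
     [0.2, 0.2, 0.6]]
pi = [1 / 3, 1 / 3, 1 / 3]
print(conductance(P, pi))
```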

36 Framework
MC over state space Ω:
§ irreducible + aperiodic ⇒ ergodic (we also add ½ self-loop probabilities).
§ detailed balance condition ⇒ reversible, with stationary distribution π.
§ conductance Φ ≥ 1/poly ⇒ rapid mixing.

37 Our Markov Chain
§ The state space Ω will consist of all perfect and near-perfect (size n-1) matchings in the graph.
§ The stationary distribution π will be uniform over the perfect matchings and will assign them total probability on the order of 1/n².