An FPTAS for Counting Integer Knapsack Solutions Made Easy


An FPTAS for Counting Integer Knapsack Solutions Made Easy Nir Halman Hebrew University of Jerusalem

The Integer Knapsack Problem
Input: n items. Item i has weight w_i, value v_i, and a limit u_i on the number of copies (U = max_i u_i). Knapsack capacity C.
Output: max Σ_{i≤n} v_i x_i s.t. Σ_{i≤n} w_i x_i ≤ C; 0 ≤ x_i ≤ u_i
Admits an FPTAS [IK75, HS76]. If all u_i = 1, it is called 0/1 knapsack.
Counting problem: How many feasible solutions exist? (v becomes irrelevant)
Theorem. An O((n³ log³U log C / ε) log(n log U / ε)) time FPTAS
Improves over Gopalan et al. [FOCS11] by a factor of Õ(n²/ε)
Other well-known counting problems: counting satisfying assignments of disjunctive normal form (DNF) formulas (1st FPRAS by [KL85]), counting independent sets in graphs of max. degree ≤ 5 (1st FPTAS by [We06]).

A naïve DP formulation
Let m_i(j) = max. #copies of item i (≤ u_i) using ≤ j space, and s_i(j) = #feasible solutions using ≤ j space and items 1,…,i.
Recurrence:
s_1(j) = m_1(j) + 1,  j = 1,…,C
s_i(j) = Σ_{k=0}^{m_i(j)} s_{i−1}(j − k·w_i),  2 ≤ i ≤ n, j = 1,…,C
Solution = s_n(C)
Running time: i ≤ n items, space j ≤ C, and a summation of size ≤ U ⇒ O(nUC) time, i.e., pseudopolynomial in both U and C.
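The recurrence above translates directly into code. A minimal Python sketch (my own illustration, not from the slides; 0-based item indexing):

```python
def count_knapsack_naive(w, u, C):
    """Count feasible integer knapsack solutions by the naive DP.

    s_i(j) = number of solutions using items 1..i and capacity <= j.
    Runs in O(n*U*C) time -- pseudopolynomial in both U and C.
    """
    n = len(w)
    # Base case i = 1: s_1(j) = m_1(j) + 1, where
    # m_i(j) = min(u_i, j // w_i) is the max number of copies of item i.
    s_prev = [min(u[0], j // w[0]) + 1 for j in range(C + 1)]
    for i in range(1, n):
        s_cur = []
        for j in range(C + 1):
            m = min(u[i], j // w[i])
            # Sum over k = 0..m_i(j) copies of item i.
            s_cur.append(sum(s_prev[j - k * w[i]] for k in range(m + 1)))
        s_prev = s_cur
    return s_prev[C]
```

For example, with weights (1, 2), limits (1, 1) and C = 2 the feasible solutions are (0,0), (1,0), (0,1).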

A second DP formulation
Idea: for each item i, break the integer decision x_i into ≤ log u_i binary sub-decisions.
If all u_i were powers of 2 minus one, this would be easy: break item i into m = log(u_i + 1) bundles of weights w_i, 2w_i, …, ((u_i + 1)/2)·w_i and solve the corresponding 0/1 knapsack counting problem.
E.g., if u_i = 7 (= 111 in binary encoding) and w_i = 1: bundles of weights 1, 2 and 4.
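To see why the power-of-two-minus-one case is easy, here is a small check (my own sketch, not from the talk): for a single item with u = 2^m − 1, 0/1 choices over the bundles count exactly the same solutions as the integer choices x = 0,…,u.

```python
from itertools import product

def count_bundled(w, u, C):
    """u = 2^m - 1: each x in {0,...,u} has a unique m-bit binary
    encoding, so 0/1 choices over bundles of weight w, 2w, ..., 2^(m-1)*w
    are in bijection with the integer choices x = 0..u of one item."""
    m = (u + 1).bit_length() - 1   # u = 2^m - 1, so m = log2(u + 1)
    bundles = [w * (1 << b) for b in range(m)]
    return sum(1 for bits in product([0, 1], repeat=m)
               if sum(b * t for b, t in zip(bundles, bits)) <= C)

def count_direct(w, u, C):
    """Direct count for one item: #x in {0..u} with x*w <= C."""
    return min(u, C // w) + 1
```
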

A second DP formulation
Idea: for each item i, break the integer decision x_i into ≤ log u_i binary sub-decisions.
But when u_i is not a power of 2 minus one, the bundles are not necessarily “independent”: taking a high-order bundle may make the constraint x_i ≤ u_i binding for the lower-order ones.
E.g., if u_i = 5 (= 101 in binary encoding) and w_i = 1: after taking the bundle of weight 4 (the leading “1” bit), the constraint becomes binding and only a bundle of weight 1 may still be taken; if the bundle of weight 4 is skipped, the constraint is non-binding and bundles of weights 2 and 1 are both available.

A second DP formulation
Idea: for each item i, break the integer decision x_i into ≤ log u_i binary sub-decisions.
Let msb(x, i) = position (counted from the right) of the most significant “1” bit among the i least significant bits of x, if one exists, and −∞ otherwise.
E.g., msb(5, 2) = 1 (5 in binary encoding is 101; among its 2 least significant bits, “01”, the highest “1” is in position 1); msb(4, 1) = −∞ (4 in binary encoding is 100; its least significant bit is 0).
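The msb operator is easy to state in code. A sketch of mine (None stands in for −∞):

```python
def msb(x, i):
    """Position (1-indexed from the right) of the most significant '1'
    bit among the i least significant bits of x; None stands in
    for -infinity when no such bit exists."""
    masked = x & ((1 << i) - 1)   # keep only the i least significant bits
    return masked.bit_length() if masked else None
```
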

A second DP formulation – cont’
z_{i,l,0}(j) = z_{i,l−1,0}(j) + z_{i,l−1,0}(j − 2^{l−1} w_i),  l = 2,…   (a)
z_{i,l,1}(j) = z_{i,l−1,0}(j) + z_{i,msb(u_i,l−1),1}(j − 2^{l−1} w_i),  l = 2,…   (b)
z_{i,1,r}(j) = z_{i−1,⌊log⁺ m_{i−1}(j)⌋+1,1}(j) + z_{i−1,⌊log⁺ m_{i−1}(j−w_i)⌋+1,1}(j − w_i)   (c)
z_{i,−∞,1}(j) = z_{i−1,⌊log⁺ m_{i−1}(j)⌋+1,1}(j)   (d)
z_{1,l,r}(j) = m_1(j) + 1,  l = 1,…   (e)
z_{i,l,r}(j) = 0,  j < 0   (f)
where r = 0,1, i = 2,…,n and j = 0,…,C. The last index of z is an indicator for the constraint x_i ≤ u_i being potentially binding.
E.g., u_i = 5 (101 in binary encoding) and w_i = 1: z_{i,3,0}(j) considers placing a bundle of 4 and then recursively checks a bundle of 2 by using z_{i,2,0}(·); z_{i,3,1}(j) considers placing a bundle of 4 and then recursively checks a bundle of 1 (because msb(5,2) = 1) by using z_{i,1,1}(·).
Solution = z_{n,⌊log⁺ u_n⌋+1,1}(C)
Running time: i ≤ n items, l ≤ log U sub-decisions, r = 0,1, space j ≤ C, and a summation of size ≤ 2 ⇒ O(nC log U) time, i.e., pseudopolynomial in C only.

An FPTAS
Note: all functions z_{i,l,r}(·) are univariate and monotone non-decreasing in their variable. This is called a monotone dynamic program.
Thm [HKLOS14]: A monotone DP with depth T, |action space| ≤ A, |state space| ≤ S and bound M on the max. value admits an O((T²/ε)·A·log S·log M·log(T log M / ε)) FPTAS.
Here T = n log U, A = 2, S = C, and M = U^n ⇒ O((n³/ε)·log³U·log C·log(n log U / ε)) time.

K-approximation functions
Assume function φ is defined over an integer domain and is non-negative, i.e., φ: [0, U] → R⁺.
We say that φ* is a K-approximation of φ if φ(x) ≤ φ*(x) ≤ Kφ(x) for all x ∈ [0, U].
We denote this by φ* ≅_K φ.
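The definition can be checked mechanically. A sketch of mine (the “round up to a power of two” approximator is a hypothetical example; on positive values it overestimates by a factor < 2):

```python
def is_K_approx(phi_star, phi, K, domain):
    """phi* is a K-approximation of phi iff
    phi(x) <= phi*(x) <= K*phi(x) for every x in the domain."""
    return all(phi(x) <= phi_star(x) <= K * phi(x) for x in domain)

phi = lambda x: x * x
# Rounding a positive value up to the next power of 2 is a 2-approximation.
phi_star = lambda x: 1 << (x * x - 1).bit_length()
```
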

1. Calculus of K-apx. Functions
Let α, β ≥ 0, φ_i* ≅_{K_i} φ_i for i = 1,2, and K_1 ≥ K_2.
summation: α + βφ_1* + φ_2* ≅_{K_1} α + βφ_1 + φ_2
minimization: min{φ_1*, φ_2*} ≅_{K_1} min{φ_1, φ_2}
composition: φ_1*(ψ) ≅_{K_1} φ_1(ψ)
approximation: φ_2* ≅_{K_1 K_2} φ_1 when φ_2 = φ_1*
Recall recurrence (a) and its approximate counterpart:
z_{i,l,0}(j) = z_{i,l−1,0}(j) + z_{i,l−1,0}(j − 2^{l−1} w_i)   (a)
z′_{i,l,0}(j) = z*_{i,l−1,0}(j) + z*_{i,l−1,0}(j − 2^{l−1} w_i)   (a′)
Corollary (summation of composition): Let the z* in (a′) be K_1-apx. functions of the original functions. Then z′_{i,l,0}(j) ≅_{K_1} z_{i,l,0}(j).
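The summation rule, for instance, can be verified numerically. A sketch of mine, not from the talk:

```python
def holds_summation_rule(phi1, phi1s, phi2, phi2s, K1, alpha, beta, domain):
    """Calculus rule: if phi_i* is a K_i-approximation of phi_i with
    K1 >= K2 and alpha, beta >= 0, then alpha + beta*phi1* + phi2*
    is a K1-approximation of alpha + beta*phi1 + phi2."""
    for x in domain:
        apx = alpha + beta * phi1s(x) + phi2s(x)
        exact = alpha + beta * phi1(x) + phi2(x)
        if not (exact <= apx <= K1 * exact):
            return False
    return True
```
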

2. K-approximation Sets
Definition: Let φ: [0,…,S] → R⁺ be monotone. A K-apx. set of φ is a subset W ⊆ [0,…,S] with 0, S ∈ W and ratio ≤ K between the values of φ at each two consecutive points of W.
Construction: let φ* be the step-function approximation of φ induced by W; then φ* ≅_K φ.
Question: How small can a K-apx. set of φ be, and how fast can it be constructed?

Approximating a Monotone Function: The Compress Operator
Answer: A monotone φ admits a K-apx. function whose number of breakpoints is polynomial in the input size (= log S + log M), via binary search over both domain and range.
We explicitly construct it via the compress procedure: g := Compress(φ, K = 1 + ε).
g is a piecewise step function with g ≅_K φ.
#pieces is O(log_K M) = O(log M / ε) (since log K ≥ ε/2 for K < 2).
Running time is O(log M log S / ε); query time is O(log(log M / ε)), assuming the pieces are stored as a list of pairs (x, φ(x)) sorted by x.
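A possible implementation of Compress as a sketch (my own, under the assumption that the oracle φ is monotone non-decreasing on {0,…,S} with φ(0) > 0):

```python
import bisect

def compress(phi, S, K):
    """Build a K-approximation set of a monotone non-decreasing oracle
    phi: {0,...,S} -> R+ via binary search on the domain. Returns the
    breakpoints (x, phi(x)); the values of consecutive breakpoints
    differ by at most a factor K, except where phi itself jumps by
    more than K between adjacent points (forced unit steps)."""
    pieces = [(0, phi(0))]
    x = 0
    while x < S:
        target = K * phi(x)
        if phi(x + 1) > target:
            x += 1                 # forced unit step: phi jumps by > K
        else:
            lo, hi = x + 1, S      # largest point still within factor K
            while lo < hi:
                mid = (lo + hi + 1) // 2
                if phi(mid) <= target:
                    lo = mid
                else:
                    hi = mid - 1
            x = lo
        pieces.append((x, phi(x)))
    return pieces

def query(pieces, x):
    """The step function induced by the breakpoints: return phi's value
    at the first breakpoint >= x (overestimates by at most K)."""
    xs = [p[0] for p in pieces]
    return pieces[bisect.bisect_left(xs, x)][1]
```

On φ(x) = x² + 1 with S = 100 and K = 1.5 this keeps a few dozen breakpoints instead of 101 table entries, and query(pieces, x) is sandwiched between φ(x) and 1.5·φ(x) everywhere.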

FPTAS formulation
begin
1. K := (1 + ε)^{1/(2T)}
2. define z*_{1,l,r}(·) := z_{1,l,r}(·)
3. for t = 2 to depth T do
4.   implement a z′_{i,l,r}(·) oracle using the DP and the z*_{i′,l′,r}(·) where either i′ < i or l′ < l
5.   z*_{i,l,r}(·) := Compress(z′_{i,l,r}(·), K)
6. endfor
end
Corollary (summation of composition): Let the z* in (a′) be K_1-apx. functions of the original functions. Then z′_{i,l,0}(j) ≅_{K_1} z_{i,l,0}(j).

Running Time and Error Analysis
begin
1. K := (1 + ε)^{1/(2T)}
2. define z*_{1,l,r}(·) := z_{1,l,r}(·)
3. for t = 2 to depth T do
4.   implement a z′_{i,l,r}(·) oracle using the DP and the z*_{i′,l′,r}(·) where either i′ < i or l′ < l   [O(log S log_K M) queries to z′_{i,l,r}(·)]
5.   z*_{i,l,r}(·) := Compress(z′_{i,l,r}(·), K)   [each query = O(log log_K M)]
6. endfor
end
Running time: O(T log S log_K M log log_K M) = O((T²/ε) log S log M log(T log M / ε))
Apx. ratio: By line 1, K^{2T} = 1 + ε.

Summary / Our Approach
• Functional point of view
• Approximating functions via apx. sets
• Propagation of errors via the calculus of K-approximation functions
• Modular framework for deriving FPTASs
• The notion of binding constraints is of independent interest

Bibliography:
♦ [G+11] P. Gopalan, A. Klivans, R. Meka, D. Štefankovič, S. Vempala, E. Vigoda, An FPTAS for #Knapsack and Related Counting Problems, FOCS 2011
♦ [HKLOS14] N. Halman, D. Klabjan, C.L. Li, J. Orlin, D. Simchi-Levi, FPTASs for stochastic DPs, SIAM J. Discrete Math., 2014
♦ [HS76] E. Horowitz & S. Sahni, Exact and approximate algs. for scheduling nonidentical processors, J. ACM, 1976
♦ [IK75] O. Ibarra & C. Kim, Fast approximation algs. for the knapsack and sum of subset problems, J. ACM, 1975
♦ [KL85] R. Karp & M. Luby, Monte Carlo algs. for the planar multiterminal network reliability problem, J. Complexity, 1985
♦ [We06] D. Weitz, Counting independent sets up to the tree threshold, STOC 2006

Thank you!

Relative Approximation
OPT – a minimization problem; OPT(x) – value of the problem on instance x.
A K-apx. algorithm A (K > 1) returns A(x) ≤ K·OPT(x).
A PTAS (Polynomial Time Approximation Scheme) is a (1 + ε)-apx. that runs in poly(|x|) time for every fixed ε, e.g., O(|x|^{3/ε}).
An FPTAS (Fully Polynomial Time Approximation Scheme) is a PTAS that runs in poly(|x|, 1/ε) time, e.g., O((|x|/ε)²).
FPTASs are considered the “Holy Grail” among apx. algorithms because one can get as close to OPT(x) as one wants in polynomial time.

A Question
Does a value oracle for a monotone function φ: [0,…,S] → R⁺ admit an efficient succinct K-approximation? (Let M = φ_max. Input size = log S + log M.)
Def: a K-approximation φ* is:
succinct if it can be stored in logarithmic space, e.g., O(log S log M);
efficient if it can be built in logarithmic time (# of oracle queries).