1
Submodular Maximization with Cardinality Constraints
Moran Feldman
Joint work with: Niv Buchbinder, Joseph (Seffi) Naor, and Roy Schwartz
2
Set Functions

Definition
Given a ground set N, a set function f : 2^N → ℝ assigns a number to every subset of the ground set.

Notation
Given a set A and an element u, fu(A) is the marginal contribution of u to A: fu(A) = f(A ∪ {u}) − f(A).

Properties a Set Function May Have
Non-negativity: f(A) ≥ 0 for every set A ⊆ N.
Monotonicity: f(A) ≤ f(B) for all sets A ⊆ B ⊆ N.
Submodularity: for sets A ⊆ B ⊆ N and u ∉ B: fu(A) ≥ fu(B).
Equivalently, for sets A, B ⊆ N: f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B).
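To make the definitions concrete, here is a minimal Python sketch (my own toy example, not from the slides) of a coverage function, its marginal contributions, and a brute-force check that the two equivalent submodularity conditions above hold for it:

```python
from itertools import chain, combinations

# A toy coverage function; the ground set and the covered items are made up
# purely for illustration. Coverage functions are non-negative, monotone,
# and submodular.
covers = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}
N = set(covers)

def f(S):
    """f(S) = number of items covered by the elements of S."""
    covered = set()
    for u in S:
        covered |= covers[u]
    return len(covered)

def marginal(u, A):
    """f_u(A) = f(A ∪ {u}) − f(A), the marginal contribution of u to A."""
    return f(A | {u}) - f(A)

def subsets(X):
    """All subsets of X, as sets."""
    return (set(c) for c in chain.from_iterable(
        combinations(sorted(X), r) for r in range(len(X) + 1)))

# Diminishing returns: f_u(A) >= f_u(B) whenever A ⊆ B ⊆ N and u ∉ B.
for B in subsets(N):
    for A in subsets(B):
        for u in N - B:
            assert marginal(u, A) >= marginal(u, B)

# Equivalent condition: f(A) + f(B) >= f(A ∪ B) + f(A ∩ B) for all A, B ⊆ N.
for A in subsets(N):
    for B in subsets(N):
        assert f(A) + f(B) >= f(A | B) + f(A & B)

print("both submodularity conditions hold for the toy coverage function")
```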
3
Where Can One Find Submodular Set Functions?
In Applicative Settings
Utility/cost functions in economics (economy of scale).
Influence of a set of users in a social network.

In Combinatorial Settings
Ground Set: Nodes of a graph. Submodular Function: The number of edges leaving a set of nodes.
Ground Set: A collection of sets. Submodular Function: The number of elements in the union of a sub-collection.
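A similarly minimal sketch of the first combinatorial example, the cut function of a small made-up directed graph; unlike coverage it is submodular but not monotone:

```python
# Toy directed graph; the edge list is hypothetical and only illustrates the example.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "a")]
nodes = {"a", "b", "c", "d"}

def cut(S):
    """Number of edges leaving the node set S: submodular, but not monotone
    (e.g. cut(nodes) = 0 while cut({'a'}) = 2)."""
    return sum(1 for (u, v) in edges if u in S and v not in S)

print(cut({"a"}), cut({"a", "b"}), cut(nodes))  # 2 2 0
```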
4
Maximization Subject to a Cardinality Constraint
Instance
A non-negative submodular function f : 2^N → ℝ+ and an integer k.

Objective
Find a subset S ⊆ N of size at most k maximizing f(S).

Accessing the Function
A representation of f can be exponential in the size of the ground set, so the algorithm has access to f only via an oracle.
Value Oracle – given a set S, returns f(S).
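Since the running times below are measured in value-oracle queries, here is a minimal sketch of such an oracle in Python (the wrapper class and its name are my own, not from the paper):

```python
class ValueOracle:
    """Wraps a set function so that each evaluation f(S) counts as one oracle query."""
    def __init__(self, func):
        self.func = func
        self.queries = 0

    def __call__(self, S):
        self.queries += 1
        return self.func(set(S))

# Hypothetical usage with the toy cut function sketched above:
#   oracle = ValueOracle(cut)
#   oracle({"a"})            # evaluates f on {"a"}
#   print(oracle.queries)    # -> 1
```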
5
The (Classical) Greedy Algorithm
The Algorithm
Do k iterations; in each iteration pick the element with the maximum marginal contribution.

More Formally
Let S0 ← ∅.
For i = 1 to k do:
  Let ui be the element u maximizing fu(Si-1).
  Let Si ← Si-1 ∪ {ui}.
Return Sk.
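A minimal Python sketch of this greedy rule (the names and interface are my own; f can be any set function or a ValueOracle as above, and k ≤ |N| is assumed):

```python
def greedy(f, N, k):
    """Classical greedy: k iterations, each adding the element with the largest
    marginal contribution f(S ∪ {u}) − f(S). Uses O(nk) value-oracle queries."""
    S = set()
    for _ in range(k):
        base = f(S)
        # the element outside S with the maximum marginal contribution
        u = max((v for v in N if v not in S), key=lambda v: f(S | {v}) - base)
        S.add(u)
    return S
```

Evaluating f(S) once per iteration and reusing it inside the key function keeps the count at roughly n + 1 queries per iteration, i.e. O(nk) in total.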
6
Greedy Achieves

For Monotone Functions
A 1 − 1/e approximation [Nemhauser et al. 78], matching a hardness of 1 − 1/e [Nemhauser et al. 78].
Time complexity: O(nk).

For Non-Monotone Functions
Same time complexity, but no constant approximation ratio.

Recent Results for Non-Monotone Functions
0.325 approx. (simulated annealing) [Oveis Gharan and Vondrak 11]
1/e − o(1) approx. (measured continuous greedy) [Feldman et al. 11]
0.491 hardness [Oveis Gharan and Vondrak 11]
All known algorithms are much slower than the greedy algorithm.
7
The Random Greedy Algorithm
The Algorithm
Do k iterations; in each iteration pick, uniformly at random, one element out of the k elements with the largest marginal contributions.

More Formally
Let S0 ← ∅.
For i = 1 to k do:
  Let Mi be the set of k elements with the largest marginal contributions fu(Si-1).
  Let ui be a uniformly random element of Mi.
  Let Si ← Si-1 ∪ {ui}.
Return Sk.
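A hedged Python sketch of the random greedy rule, in the same style as the greedy sketch above (it assumes there are always at least k elements outside the current solution, which the dummy-element reduction on the next slide guarantees):

```python
import random

def random_greedy(f, N, k):
    """Random greedy: in each of k iterations, compute the k elements with the
    largest marginal contributions and add one of them chosen uniformly at random.
    Uses O(nk) value-oracle queries, like the classical greedy."""
    S = set()
    for _ in range(k):
        base = f(S)
        candidates = [v for v in N if v not in S]
        # M_i: the k candidates with the largest marginal contributions to S
        M = sorted(candidates, key=lambda v: f(S | {v}) - base, reverse=True)[:k]
        S.add(random.choice(M))
    return S
```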
8
Reduction Helper Lemma
Reduction
We add k dummy elements of value 0; the dummy elements are removed at the end. This allows us to assume OPT is of size exactly k.

Helper Lemma
For a non-negative submodular function g : 2^N → ℝ+ and a random set R containing every element with probability at most p: E[g(R)] ≥ (1 − p) ∙ g(∅).
Similar to a lemma from [Feige et al. 2007].
Intuition: adding all the elements can reduce the value by at most g(∅) (from g(∅) down to 0, by non-negativity). Adding at most a p fraction of every element should therefore reduce g(∅) by no more than p ∙ g(∅).
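A small Monte Carlo sanity check of the helper lemma (independent sampling only, made-up inputs, and of course not a proof):

```python
import random

def check_helper_lemma(g, N, p, trials=20000):
    """Empirically compares E[g(R)], where R contains each element of N
    independently with probability p, against the bound (1 - p) * g(∅)."""
    total = 0.0
    for _ in range(trials):
        R = {u for u in N if random.random() < p}
        total += g(R)
    return total / trials, (1 - p) * g(set())

# Hypothetical usage: the bound is only interesting when g(∅) > 0, e.g. take the
# toy coverage function f from earlier and a fixed set playing the role of OPT:
#   g = lambda S: f({"a", "c"} | S)
#   print(check_helper_lemma(g, N, 0.5))   # empirical mean vs. (1 - p) * g(∅)
```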
9
Algorithm Analysis

First Objective
Lower bound E[f(OPT ∪ Si)].
Method: show that no element belongs to Si with large probability, and then apply the helper lemma.

Observation
In every iteration i, every element outside of Si-1 has probability at most 1/k of getting into Si.

Corollary
An element belongs to Si with probability at most 1 − (1 − 1/k)^i.

Applying the Helper Lemma
Let g(S) = f(OPT ∪ S). Observe that g(S) is non-negative and submodular.
E[f(OPT ∪ Si)] = E[g(Si)] ≥ (1 − 1/k)^i ∙ g(∅) = (1 − 1/k)^i ∙ f(OPT).
10
Algorithm Analysis (cont.)
In iteration i
Fix everything that happened before iteration i; all the expectations below are conditioned on this history.
By submodularity, the marginal contributions of the elements of OPT to Si-1 sum to at least f(OPT ∪ Si-1) − f(Si-1).
The element ui is picked uniformly at random from Mi, and OPT is a potential candidate to be Mi; combining these facts gives the per-iteration bound reconstructed below.
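A hedged reconstruction of the per-iteration bound these two facts yield (all expectations conditioned on the history; my rendering of the standard argument, not a verbatim copy of the slide):

```latex
\mathbb{E}\big[f(S_i) - f(S_{i-1})\big]
  \;=\; \frac{1}{k}\sum_{u \in M_i} f_u(S_{i-1})
  \;\ge\; \frac{1}{k}\sum_{u \in OPT} f_u(S_{i-1})
  \;\ge\; \frac{f(OPT \cup S_{i-1}) - f(S_{i-1})}{k}.
```

The first inequality uses that Mi maximizes the sum of k marginal contributions while, after the dummy reduction, OPT is itself a set of k elements; the second inequality is the submodularity step above (elements of OPT already in Si-1 contribute zero).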
11
Algorithm Analysis (cont.)
Unfixing the history and using the previous observations, we get:
E[f(Si)] − E[f(Si-1)] ≥ (1/k) ∙ [(1 − 1/k)^(i-1) ∙ f(OPT) − E[f(Si-1)]].

Adding Up All Iterations
We now have a lower bound on the (expected) improvement in each iteration. Using induction it is possible to prove that:
E[f(Sk)] ≥ (1 − 1/k)^(k-1) ∙ f(OPT) ≥ (1/e) ∙ f(OPT).

Remarks
This algorithm is both faster than previous ones and gets rid of the o(1) in the approximation ratio.
The algorithm achieves a 1 − 1/e approximation (in expectation) for monotone functions.
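One way to carry out the induction mentioned above (my own rendering of the standard argument, not something stated explicitly on the slide):

```latex
% Invariant (holds trivially for i = 0 since f is non-negative):
\mathbb{E}[f(S_i)] \;\ge\; \frac{i}{k}\Big(1-\frac{1}{k}\Big)^{i-1} f(OPT).
% Inductive step, using the per-iteration lower bound above:
\mathbb{E}[f(S_i)]
  \;\ge\; \Big(1-\frac{1}{k}\Big)\mathbb{E}[f(S_{i-1})] + \frac{1}{k}\Big(1-\frac{1}{k}\Big)^{i-1} f(OPT)
  \;\ge\; \frac{i}{k}\Big(1-\frac{1}{k}\Big)^{i-1} f(OPT).
% Setting i = k and using (1 - 1/k)^{k-1} >= 1/e gives E[f(S_k)] >= f(OPT)/e.
```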
12
Equality Cardinality Constraint
New Objective
Find a subset S ⊆ N of size exactly k maximizing f(S).

Monotone Functions
Not interesting: we can always add arbitrary elements to the output.

Non-monotone Functions
Best previous approximation: ¼ − o(1) [Vondrak 2009].
13
The Double Greedy Algorithm
The Algorithm
Based on the ½ approximation for unconstrained submodular maximization of [Buchbinder et al. 2012].
Starts with two vectors: x = 1∅ (the all-zeros vector) and y = 1N (the all-ones vector).
The vectors continuously evolve toward each other and satisfy: at time t, for every element u ∈ N, yu − xu = 1 − t.
At time 1, x = y is a feasible solution.

Approximation Ratio – For Both Variants
For k = n/2, both problems have an approximation ratio of ½, and this is tight.
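For context, here is a minimal Python sketch of the building block the slide refers to: the randomized double greedy of [Buchbinder et al. 2012] for the unconstrained problem. The constrained, continuous variant described on this slide maintains fractional vectors x and y instead; this sketch only illustrates the unconstrained ½-approximation it is based on.

```python
import random

def double_greedy_unconstrained(f, N):
    """Randomized double greedy for UNCONSTRAINED non-negative submodular
    maximization [Buchbinder et al. 2012]; a 1/2-approximation in expectation.
    X grows from the empty set while Y shrinks from N, until X == Y."""
    X, Y = set(), set(N)
    for u in list(N):
        a = f(X | {u}) - f(X)   # marginal gain of adding u to X
        b = f(Y - {u}) - f(Y)   # marginal gain of removing u from Y
        a_plus, b_plus = max(a, 0.0), max(b, 0.0)
        # commit u into the solution or discard it for good, with probability
        # proportional to the two (clipped) gains
        if a_plus + b_plus == 0 or random.random() < a_plus / (a_plus + b_plus):
            X.add(u)
        else:
            Y.discard(u)
    return X  # X == Y at this point
```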
14
Other Results

Cardinality Constraint
For a non-equality (at most k) constraint:
A (1/e + ε) approximation, for a small constant ε > 0, which proves that 1/e is not the right answer for the problem.
Is 1/e the right answer for a general matroid independence constraint?
For an equality constraint:
A fast algorithm whose approximation ratio depends on k/n and is always at least a fixed constant.
For every ratio k/n, one of our algorithms achieves at least a constant approximation.

Fast Algorithms for General Matroid Constraint
State-of-the-art approximation ratio for a general matroid constraint: 1/e − o(1).
Approximation Ratio | Time Complexity
1/4 | O(nk)
(1 + e^(-2))/4 − ε > 0.283 | O(nk + k^(ω+1))
15
Open Problems

Cardinality Constraint
The approximability depends on k/n: for k/n = 0.5 we have a 0.5 approximation, while for small k one cannot beat the 0.491 hardness of [Oveis Gharan and Vondrak 11].
What is the correct approximation ratio for a given k/n?

Fast Algorithms
Finding fast algorithms for more involved constraints.
Beating 1/e using a fast algorithm, even for k = n/2.
Even faster algorithms: for monotone functions there is a 1 − 1/e − ε approximation using O(n ε^(-1) log(n/ε)) oracle queries [Ashwinkumar and Vondrak 14].
16
Questions?