Guess Free Maximization of Submodular and Linear Sums


Guess Free Maximization of Submodular and Linear Sums Moran Feldman, The Open University of Israel

Motivation: Adding Dessert Ground set N of elements (dishes). Valuation function f : 2^N → ℝ (a value for each meal). Submodularity: f(A + u) – f(A) ≥ f(B + u) – f(B) ∀ A ⊆ B ⊆ N, u ∉ B. Alternative definition: f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) ∀ A, B ⊆ N. (Slide illustration: Meal 1, Meal 2.)
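As a concrete sanity check (a minimal sketch, not from the slides): coverage functions are a standard example of non-negative monotone submodular functions, and the alternative definition above can be verified exhaustively on a tiny instance. The ground-set names and covered items below are illustrative.

```python
from itertools import combinations

# Each dish "covers" a set of items; f(A) counts the distinct items covered.
coverage = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
}

def f(A):
    """Coverage value of a set A of ground-set elements."""
    covered = set()
    for u in A:
        covered |= coverage[u]
    return len(covered)

# Verify f(A) + f(B) >= f(A ∪ B) + f(A ∩ B) for every pair of subsets.
ground = list(coverage)
subsets = [frozenset(s) for r in range(len(ground) + 1)
           for s in combinations(ground, r)]
assert all(f(A) + f(B) >= f(A | B) + f(A & B)
           for A in subsets for B in subsets)
```

The exhaustive check is feasible only for toy ground sets, but it illustrates exactly the inequality stated on the slide.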

Another Example (Slide figure: a ground set N of elements with values 7, 11, 6, 8, 5, 10, 5, 4 and −8.)

Submodular Optimization Submodular functions can be found in combinatorics, machine learning, image processing, and algorithmic game theory. This motivates the optimization of submodular functions subject to various constraints. Such optimization generalizes classical problems (such as Max DiCut and Max k-cover) and has many practical applications. In this talk, we only consider maximization of non-negative monotone submodular functions. Monotonicity: f(A) ≤ f(B) ∀ A ⊆ B ⊆ N.

The Multilinear Relaxation Submodular maximization problems are discrete. Nevertheless, many state-of-the-art algorithms for them make use of a continuous relaxation (in the same way LPs are often used in combinatorial algorithms). In the linear world: solve a linear program relaxation max {w ∙ x : x ∈ P}, then round the solution. In the submodular world: solve the multilinear relaxation max {F(x) : x ∈ P}, then round the solution. The Multilinear Extension: given a vector x ∈ [0, 1]^N, F(x) is the expected value of f on a random set R(x) containing each element u ∈ N with probability x_u, independently. F is a multilinear function, and it agrees with f on integral vectors.
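The definition of F can be made concrete by estimating it with sampling (a hedged sketch; the function names and the sample count are assumptions of this illustration, not part of the talk):

```python
import random

def sample_R(x, rng):
    """Draw R(x): include each element u independently with probability x[u]."""
    return {u for u, p in x.items() if rng.random() < p}

def multilinear_F(f, x, samples=20000, seed=0):
    """Monte Carlo estimate of the multilinear extension F(x) = E[f(R(x))]."""
    rng = random.Random(seed)
    return sum(f(sample_R(x, rng)) for _ in range(samples)) / samples

# On an integral vector, F agrees exactly with f (here f = set size).
f = len
x_integral = {"a": 1.0, "b": 0.0, "c": 1.0}
assert abs(multilinear_F(f, x_integral) - f({"a", "c"})) < 1e-9
```

For fractional x, the estimate converges to the true expectation; e.g. with f = len and x = {"a": 0.5, "b": 0.5}, the estimate is close to 1.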

Continuous Greedy [Calinescu et al. 2011] The first algorithm for (approximately) solving the multilinear relaxation. Description: Start at the point 0. At every time in [0, 1], make a small step by adding an infinitesimal fraction of some vector x ∈ P. The vector x chosen is the vector yielding the maximum improvement, which can be (approximately) found because the step is infinitesimal.
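As an illustration only (not the authors' implementation), a discretized continuous greedy for a cardinality constraint |S| ≤ k might look like the following. The step size 1/T, the sampling-based marginal estimator, and all names are assumptions of this sketch:

```python
import random

def continuous_greedy(f, ground, k, T=50, samples=200, seed=0):
    """Discretized continuous greedy for the polytope {x : sum(x) <= k}."""
    rng = random.Random(seed)
    y = {u: 0.0 for u in ground}
    for _ in range(T):
        # Estimate w_u ~ E[f(R(y) + u) - f(R(y))], the marginal of F at y.
        w = {u: 0.0 for u in ground}
        for _ in range(samples):
            R = {u for u in ground if rng.random() < y[u]}
            base = f(R)
            for u in ground:
                w[u] += (f(R | {u}) - base) / samples
        # The best direction x in the cardinality polytope is the indicator
        # of the k elements with the largest estimated marginals.
        best = sorted(ground, key=lambda u: -w[u])[:k]
        for u in best:
            y[u] = min(1.0, y[u] + 1.0 / T)
    return y
```

With T steps, exactly k coordinates are incremented per step, so the output stays inside the polytope, mirroring the "convex combination of points in P" feasibility argument from the next slide.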

Analyzing Continuous Greedy Feasibility: we end up with a convex combination of points in the polytope P. Lemma: the multilinear extension is concave along positive directions. Use: regardless of our current location, there is a good direction. If y is the current solution, then adding an infinitesimal fraction of 1_OPT (the indicator vector of OPT, which is in P) is at least as good as adding an infinitesimal fraction of 1_OPT ∙ (1 – y), which increases the value by at least an infinitesimal fraction of F(y + 1_OPT ∙ (1 – y)) – F(y) ≥ f(OPT) – F(y).
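The last inequality can be spelled out in one line (a standard step, written here for completeness under the slide's notation): the point y + 1_OPT ∙ (1 − y) equals the coordinate-wise maximum of y and 1_OPT, so monotonicity of f (and hence of F) gives

```latex
F\bigl(y + 1_{OPT} \cdot (1 - y)\bigr)
  = F\bigl(y \vee 1_{OPT}\bigr)
  \ge F\bigl(1_{OPT}\bigr)
  = f(OPT).
```

Subtracting F(y) from both sides yields the bound used on the slide.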

Approximation Ratio Differential equation: dF(y)/dt ≥ f(OPT) − F(y). Solution: F(y) ≥ (1 − e^(−t)) ∙ f(OPT). For t = 1, the approximation ratio is 1 – 1/e ≈ 0.632, which is known to be tight. Theorem: When f is a non-negative monotone submodular function, the multilinear relaxation can be optimized up to a factor of 1 – 1/e.
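The stated solution follows from the differential inequality by a standard integrating-factor argument (a sketch, not part of the slides):

```latex
\frac{d}{dt}\Bigl[e^{t} F(y(t))\Bigr]
  = e^{t}\left(\frac{dF(y)}{dt} + F(y)\right)
  \ge e^{t} f(OPT).
```

Integrating from 0 to t and using F(y(0)) ≥ 0 gives e^t ∙ F(y(t)) ≥ (e^t − 1) ∙ f(OPT), i.e., F(y(t)) ≥ (1 − e^(−t)) ∙ f(OPT).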

Are we done? Theorem: When f is a non-negative monotone submodular function, the multilinear relaxation can be optimized up to a factor of 1 – 1/e. Can we improve for special cases? In machine learning one often needs to optimize f(S) – ℓ(S), where f is submodular and ℓ(S) is a linear regularizer. The resulting objective is often non-monotone, and it is hard to guarantee its non-negativity.

Linear and Submodular Sums Theorem [Sviridenko, Vondrák and Ward (2017)]: When f is the sum of a non-negative monotone submodular function g and a linear function ℓ, one can find in polynomial time a vector x ∈ P such that F(x) ≥ (1 – 1/e) ∙ g(OPT) + ℓ(OPT). Every non-negative monotone submodular f can be decomposed in this way. The approximation ratio improves when the linear component ℓ(OPT) of OPT is large. We also have a guarantee when the linear regularizer makes the function non-monotone or negative at some points.

About the Algorithm of Sviridenko et al. We already saw that adding an infinitesimal fraction of OPT increases G by an infinitesimal fraction of g(OPT) – G(y); doing so also increases ℓ by an infinitesimal fraction of ℓ(OPT). By guessing ℓ(OPT), we can use an LP to find a direction with both guarantees. Equation: dG(y)/dt ≥ g(OPT) − G(y), so at t = 1, G(y) ≥ (1 – 1/e) ∙ g(OPT). Equation: dℓ(y)/dt ≥ ℓ(OPT), so at t = 1, ℓ(y) ≥ ℓ(OPT). Together: F(y) = G(y) + ℓ(y) ≥ (1 – 1/e) ∙ g(OPT) + ℓ(OPT).

Shortcomings of the Above Algorithm Expensive and involved guessing: the algorithm has to be run for many guesses of ℓ(OPT), which slows it down and complicates its implementation. LP solving: in general, even the basic version of continuous greedy requires solving an LP. This can sometimes be avoided when the constraint polytope P has extra properties, but the additional constraint in the algorithm of Sviridenko et al. prevents this saving. Matroid speedup: tricks developed for speeding up continuous greedy under matroid constraints cannot be applied, for the same reason.

Our Observation dG(y)/dt ≥ g(OPT) − G(y): every extra gain in G now decreases the guarantee at later times. dℓ(y)/dt ≥ ℓ(OPT): a gain in ℓ now does not affect later guarantees. Hence, gaining in ℓ is more important than gaining in G, and this imbalance reduces with time. By explicit calculation of the imbalance, we get that at time t we really want to maximize the improvement in e^(t – 1) ∙ G(y) + ℓ(y).

Our Algorithm Start at the point 0. At every time in [0, 1], make a small step by adding an infinitesimal fraction of some vector x ∈ P. The vector x chosen is the vector yielding the maximum improvement in e^(t – 1) ∙ G(y) + ℓ(y), where y is the current solution. No guesses. No extra constraints. Analysis: we study the change in the potential function Φ(t) = e^(t – 1) ∙ G(y(t)) + ℓ(y(t)), where y(t) is the solution at time t.
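A hedged sketch of how the guess-free step rule could be discretized for a cardinality constraint |S| ≤ k. All names, the sampling estimator, and the discretization are assumptions of this illustration, not the paper's algorithm; the point it shows is that each step scores elements by e^(t − 1) times the marginal of G plus the linear weight, with no guessing of ℓ(OPT):

```python
import math
import random

def guess_free_greedy(g, ell, ground, k, T=50, samples=100, seed=0):
    """g: non-negative monotone submodular set function; ell: linear weights."""
    rng = random.Random(seed)
    y = {u: 0.0 for u in ground}
    for step in range(T):
        t = step / T
        # Estimate the marginal of G (multilinear extension of g) at y.
        w = {u: 0.0 for u in ground}
        for _ in range(samples):
            R = {u for u in ground if rng.random() < y[u]}
            base = g(R)
            for u in ground:
                w[u] += (g(R | {u}) - base) / samples
        # Score by the time-dependent combination e^(t-1) * w_u + ell_u,
        # then step toward the k highest-scoring elements.
        score = {u: math.exp(t - 1.0) * w[u] + ell[u] for u in ground}
        for u in sorted(ground, key=lambda u: -score[u])[:k]:
            y[u] = min(1.0, y[u] + 1.0 / T)
    return y
```

Because ℓ is linear, its contribution to the score is just the per-element weight, so a heavily penalized element is avoided automatically, without enumerating guesses.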

Analysis (cont.) The derivative of Φ(t) = e^(t – 1) ∙ G(y(t)) + ℓ(y(t)) is at least e^(t – 1) ∙ G(y(t)) + e^(t – 1) ∙ [g(OPT) − G(y(t))] + ℓ(OPT), where the first term is due to the increase in e^(t – 1), and the remaining terms hold because the x chosen is at least as good as OPT. This leads to Φ(t) ≥ (e^(t – 1) – 1/e) ∙ g(OPT) + t ∙ ℓ(OPT). For t = 1, F(y(1)) = Φ(1) ≥ (1 – 1/e) ∙ g(OPT) + ℓ(OPT).
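The potential computation can be written out explicitly (a sketch consistent with the slide, in integrating-factor form):

```latex
\Phi'(t)
  = e^{t-1} G(y(t)) + e^{t-1} \frac{dG(y)}{dt} + \frac{d\ell(y)}{dt}
  \ge e^{t-1} G(y(t)) + e^{t-1}\bigl[g(OPT) - G(y(t))\bigr] + \ell(OPT)
  = e^{t-1} g(OPT) + \ell(OPT).
```

Integrating from 0 to t, with Φ(0) = e^(−1) ∙ G(0) ≥ 0 by non-negativity, gives Φ(t) ≥ (e^(t − 1) − 1/e) ∙ g(OPT) + t ∙ ℓ(OPT).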

Follow-Up Work A follow-up work recently appeared in ICML 2019: “Submodular Maximization beyond Non-negativity: Guarantees, Fast Algorithms, and Applications” by Harshaw, Feldman, Ward and Karbasi. Results: this work obtains the same approximation guarantee, but using faster algorithms, for a cardinality constraint and for the unconstrained problem. It also extends the guarantee to weakly submodular functions. Technique: the same basic observation is applied, not to continuous greedy, but to a random greedy algorithm due to [Buchbinder et al. (2014)].

Questions ?