Factoring distributions

Given random variables X1,…,Xn with index set V, partition V into sets A and V\A that are as independent as possible.

Formally: want A* = argmin_A I(XA; XV\A) s.t. 0 < |A| < n, where I(XA; XB) = H(XB) − H(XB | XA).

[Figure: variables X1,…,X7 partitioned into A = {X1, X3, X4, X6} and V\A = {X2, X5, X7}]
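As a concrete illustration, the objective can be brute-forced on a tiny joint distribution. The distribution below is a hypothetical toy example (not from the slides): X0 and X1 are correlated while X2 is an independent fair coin, so the best split peels off X2 with mutual information 0.

```python
import itertools
import math

# Hypothetical toy joint distribution (not from the slides): X0 and X1 are
# correlated, X2 is an independent fair coin.
p = {}
for x0, x1, x2 in itertools.product((0, 1), repeat=3):
    p_x0x1 = 0.4 if x0 == x1 else 0.1   # correlated pair (X0, X1)
    p[(x0, x1, x2)] = p_x0x1 * 0.5      # times P(X2 = x2) = 0.5

def marginal(dist, idxs):
    """Marginal distribution of the variables at positions idxs."""
    m = {}
    for x, pr in dist.items():
        key = tuple(x[i] for i in idxs)
        m[key] = m.get(key, 0.0) + pr
    return m

def entropy(dist):
    return -sum(pr * math.log2(pr) for pr in dist.values() if pr > 0)

def mutual_information(dist, A, n):
    """I(X_A; X_{V\\A}) = H(X_A) + H(X_{V\\A}) - H(X_V)."""
    B = [i for i in range(n) if i not in A]
    return (entropy(marginal(dist, list(A)))
            + entropy(marginal(dist, B))
            - entropy(dist))

n = 3
best = min(
    (A for k in range(1, n) for A in itertools.combinations(range(n), k)),
    key=lambda A: mutual_information(p, A, n),
)
print(best)  # (2,): splitting off the independent coin gives I = 0
```

This exhaustive search is exponential in n; the point of the rest of the slides is that submodularity makes the problem tractable.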

Example: Mutual information

Given random variables X1,…,Xn, define z(A) = I(XA; XV\A) = H(XV\A) − H(XV\A | XA) = z(V\A), so z is symmetric.

Lemma: Mutual information z(A) is submodular.

Proof sketch: z(A ∪ {s}) − z(A) = H(Xs | XA) − H(Xs | XV\(A∪{s})).
The first term is nonincreasing in A ("information never hurts": A ⊆ B ⇒ H(Xs | XA) ≥ H(Xs | XB)); the second term is nondecreasing in A, since V\(A∪{s}) shrinks as A grows. Hence the marginal gain δs(A) = z(A∪{s}) − z(A) is monotonically nonincreasing, which is the diminishing-returns characterization, so z is submodular.
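The lemma can be sanity-checked numerically. The sketch below (a hypothetical random joint distribution, not from the slides) verifies the diminishing-returns inequality for every A ⊆ B and s ∉ B:

```python
import itertools
import math
import random

random.seed(0)
n = 4
V = set(range(n))

# Hypothetical random joint distribution over n binary variables.
outcomes = list(itertools.product((0, 1), repeat=n))
weights = [random.random() for _ in outcomes]
total = sum(weights)
p = {x: w / total for x, w in zip(outcomes, weights)}

def entropy_of(idxs):
    """H(X_A) for A = idxs, computed from the marginal of p."""
    m = {}
    for x, pr in p.items():
        key = tuple(x[i] for i in sorted(idxs))
        m[key] = m.get(key, 0.0) + pr
    return -sum(pr * math.log2(pr) for pr in m.values() if pr > 0)

def z(A):
    """z(A) = I(X_A; X_{V\\A}) = H(X_A) + H(X_{V\\A}) - H(X_V)."""
    return entropy_of(A) + entropy_of(V - set(A)) - entropy_of(V)

def subsets(S):
    S = sorted(S)
    return itertools.chain.from_iterable(
        itertools.combinations(S, k) for k in range(len(S) + 1))

# Diminishing returns: A ⊆ B, s ∉ B  ⇒  z(A∪{s}) - z(A) ≥ z(B∪{s}) - z(B).
ok = all(
    z(set(A) | {s}) - z(set(A)) >= z(set(B) | {s}) - z(set(B)) - 1e-9
    for B in subsets(V)
    for A in subsets(B)
    for s in V - set(B)
)
print(ok)  # True
```

The same check also confirms the symmetry z(A) = z(V\A) used on the next slide.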

Queyranne’s algorithm [Queyranne ’98]

Theorem: There is a fully combinatorial, strongly polynomial algorithm for solving A* = argmin_A z(A) s.t. 0 < |A| < n for symmetric submodular functions z. It runs in time O(n^3) [instead of O(n^8)…]
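For intuition, a canonical symmetric submodular function is the graph cut function, and minimizing it over 0 < |A| < n is the global minimum cut problem. Below is the brute-force baseline that Queyranne's algorithm improves on, replacing the O(2^n) search with O(n^3) oracle calls; the graph and weights are a hypothetical example:

```python
import itertools

# Hypothetical weighted graph on 4 vertices; cut(A) = total weight of edges
# crossing between A and V\A. The cut function is symmetric and submodular.
edges = {(0, 1): 3.0, (1, 2): 1.0, (2, 3): 3.0, (0, 3): 1.0}
n = 4

def cut(A):
    A = set(A)
    return sum(w for (u, v), w in edges.items() if (u in A) != (v in A))

# Brute force over all proper nonempty subsets: O(2^n) evaluations.
best = min(
    (set(A) for k in range(1, n) for A in itertools.combinations(range(n), k)),
    key=cut,
)
print(best, cut(best))  # a minimum cut of value 2.0
```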

Why are pendent pairs useful?

Recall: (t, u) is a pendent pair for z if z({u}) = min { z(A) : u ∈ A, t ∉ A }.

Key idea: Let (t, u) be a pendent pair and A* = argmin z(A). Then EITHER t and u are separated by A*, say u ∈ A* and t ∉ A*; but then, by the pendent-pair property, {u} is also a minimizer, so A* = {u} works! OR u and t are not separated by A*; then we can merge u and t and recurse on the smaller ground set.

[Figure: the three possible positions of u and t relative to A*]
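Putting the two cases together gives the whole algorithm: find a pendent pair, record z({u}) as a candidate (case 1), merge t and u (case 2), and repeat; the best candidate over the n−1 rounds is optimal. A compact sketch, assuming z is given as a set-function oracle; the graph and weights at the bottom are a hypothetical illustration:

```python
import itertools

def queyranne(V, f):
    """Sketch of Queyranne's algorithm: minimize a symmetric submodular f
    over sets A with 0 < |A| < |V|, using O(n^3) oracle calls."""
    # Supernodes: frozensets of original elements merged so far.
    nodes = [frozenset([v]) for v in V]

    def F(group):
        # Evaluate f on the union of a list of supernodes.
        return f(set().union(*group))

    best_set, best_val = None, float("inf")
    while len(nodes) > 1:
        # Build the ordering; its last two nodes form a pendent pair (t, u).
        order, rest = [nodes[0]], nodes[1:]
        while rest:
            # Next node: the x minimizing f(W ∪ {x}) - f({x}).
            x = min(rest, key=lambda x: F(order + [x]) - F([x]))
            order.append(x)
            rest.remove(x)
        t, u = order[-2], order[-1]
        # Case 1 candidate: if the optimum separates t and u, {u} is optimal.
        if F([u]) < best_val:
            best_set, best_val = set(u), F([u])
        # Case 2: otherwise t and u can be merged without losing the optimum.
        nodes = [x for x in nodes if x not in (t, u)] + [t | u]
    return best_set, best_val

# Hypothetical weighted graph; its cut function is symmetric and submodular.
edges = {(0, 1): 3.0, (1, 2): 1.0, (2, 3): 3.0, (0, 3): 1.0}

def cut(A):
    return sum(w for (a, b), w in edges.items() if (a in A) != (b in A))

A, val = queyranne([0, 1, 2, 3], cut)
print(A, val)  # a global minimum cut of value 2.0
```

Each ordering costs O(n^2) oracle calls and there are n−1 merge rounds, which is where the O(n^3) bound in the theorem comes from.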