Ugo Montanari: On the optimal approximation of discrete functions with low-dimensional tables.

Overview
- Introduction
- Approximation with a sum of low-dimensional functions
- Optimal approximation with a given interaction graph
- Optimal approximation with a fixed amount of memory

Introduction
The problem of storing large high-dimensional arrays is often critical (dynamic programming optimization techniques, belief propagation, etc.). Montanari proposes a method for the optimal approximation (in the least-squares sense) of a given function by a sum of lower-dimensional functions.

Advantages
- The decoding process is very simple (a fixed number of summations).
- The compression ratio is often high and the mean error can be small if the interaction between the separated variables is limited.
- The approximating function has the form of a sum of terms and is therefore suitable for dynamic programming optimization.

Approximation with a sum of low-dimensional functions
F is a function of n discrete variables x_1, ..., x_n, all with the same domain.
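In formulas (a sketch; the subsets X_i of the variables and the approximation symbol below follow the notation of the later slides):

```latex
F(x_1,\dots,x_n) \;\approx\; \tilde F(x_1,\dots,x_n) \;=\; \sum_i f_i(X_i),
\qquad X_i \subset \{x_1,\dots,x_n\}.
```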

Lattice
The subsets X_i of the set of variables, ordered by inclusion, form the lattice L.

Approximation with a sum of low-dimensional functions

Average projection of the function F on a subset X_i of its variables: the mean of F over the variables not in X_i.
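The formula itself appears only as an image; a plausible reconstruction, assuming every variable ranges over a common domain of size d and writing P_{X_i} (an illustrative name) for the averaging operator, is:

```latex
\big(P_{X_i}F\big)(X_i) \;=\; \frac{1}{d^{\,n-|X_i|}} \sum_{x_j \,\notin\, X_i} F(x_1,\dots,x_n).
```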

Proper function: a function g(X_i) such that its average projections on all the proper subsets of X_i are identically zero will be called a proper function of X_i.
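In the projection notation assumed above:

```latex
g \text{ is a proper function of } X_i
\iff
P_{X_j}\, g \equiv 0 \quad \text{for every } X_j \subsetneq X_i .
```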

Theorem 2.1
The set S_i of all the proper functions of X_i is a vector space, called the proper space of X_i.
Theorem: the proper spaces S_i of the elements X_i of the lattice L are mutually orthogonal.
Proof (sketch): take proper functions g_i of X_i and g_j of X_j with X_i distinct from X_j, and assume X_i is not contained in X_j. Summing the product g_i g_j first over the variables of X_i not in X_j reduces the inner product to a multiple of the average projection of g_i on X_i ∩ X_j, which is zero because X_i ∩ X_j is a proper subset of X_i.

Characteristic function B
B: L -> {0,1}
Monotonicity constraint:
The meaning of the characteristic function B is to specify the form of the approximating sum of terms for the function F.
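The constraint itself appears only as a figure; a plausible reading, consistent with Theorem 3.1 below, is that B is downward closed on the lattice L:

```latex
X_j \le X_i \ \text{ and } \ B(X_i) = 1 \;\Longrightarrow\; B(X_j) = 1 .
```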

Example of a characteristic function
We want to approximate the function F(x_1, x_2, x_3) with a sum of the form f_1(x_1, x_2) + f_2(x_2, x_3).
Function B:
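The table of B is given as a figure; under the downward-closed reading above it would be (a reconstruction):

```latex
B(X) = 1 \ \text{ for } \ X \in \big\{\varnothing,\{x_1\},\{x_2\},\{x_3\},\{x_1,x_2\},\{x_2,x_3\}\big\},
\qquad
B(\{x_1,x_3\}) = B(\{x_1,x_2,x_3\}) = 0 .
```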

Characteristic space S_B:
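A reconstruction of the definition, combining B with the proper spaces of Theorem 2.1:

```latex
S_B \;=\; \bigoplus_{X_i \in L \,:\; B(X_i)=1} S_i .
```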

Problem A
Given the function F and a characteristic function B, find the approximating sum F̃ in the characteristic space S_B such that the error |F - F̃| is minimal.

Algorithm A (solves Problem A)
Step 1. Compute the average projections of F on all elements X_i of the lattice L.
Step 2. Let k_∅ be the average projection of F on the empty set, i.e., the overall average of F.
Step 3. Execute the next step for all r = 1, ..., n.
Step 4. For all elements X_i of L having cardinality r, let k_i be the average projection of F on X_i minus the sum of the k_j, where the summation is extended to all X_j of L smaller than X_i.
Then compute the approximating function F̃ as the sum of the k_i over all X_i with B(X_i) = 1.
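This scheme can be sketched in a few lines of numpy (an illustration, not the paper's code; the helper names average_projection, embed, proper_components, approximate and the representation of F as an n-dimensional array with equal-sized axes are assumptions):

```python
import itertools
import numpy as np

def average_projection(F, subset):
    """Average F over every axis NOT in `subset`; the result is a table
    indexed by the variables in `subset` only (Step 1)."""
    other = tuple(ax for ax in range(F.ndim) if ax not in subset)
    if not other:                         # nothing to average out
        return np.asarray(F, dtype=float)
    return np.asarray(F.mean(axis=other))

def embed(g, T, S):
    """View a table over the variables T as a table over the variables S
    (T is a subset of S) by broadcasting over the missing axes."""
    idx = tuple(slice(None) if v in T else np.newaxis for v in S)
    return g[idx]

def proper_components(F):
    """Steps 2-4: the proper component k_S of every subset S of variables,
    computed bottom-up by cardinality (subtracting the smaller components)."""
    n, k = F.ndim, {}
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            comp = average_projection(F, S)
            for T in k:                   # all strictly smaller subsets
                if set(T) < set(S):
                    comp = comp - embed(k[T], T, S)
            k[S] = comp
    return k

def approximate(F, B):
    """Final step: sum the proper components selected by the characteristic
    function B, given here as a collection of variable subsets."""
    n, full = F.ndim, tuple(range(F.ndim))
    k = proper_components(F)
    approx = np.zeros(F.shape)
    for S in B:
        S = tuple(sorted(S))
        approx = approx + embed(k[S], S, full)
    return approx
```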

Theorem 2.2
Theorem 2.2 proves the validity of Algorithm A: the sum it produces is the solution to Problem A.

Proof of Theorem 2.2
(a) For every X_i of L we have the identity stated on the slide.
(b) and (c) We assume inductively that the claim holds for the functions k_j(X_j) and the spaces S_j with the cardinality of X_j smaller than r, and prove it for r.

Proof of Theorem 2.2 (cont'd)
(The intermediate formulas on this slide are not reproduced in the transcript.)

Proof of Theorem 2.2 (cont'd)
The approximating sum F̃ is proved to be the solution to Problem A.

Optimal approximation with a given interaction graph
Sum of terms: the approximation F̃ is a sum of terms f_i(X_i).
Interaction graph: the graph G whose vertices correspond to the variables, with an edge between two vertices whenever the corresponding variables appear together in some term of the sum.
Alternative sum of terms: different sums of terms can share the same interaction graph.

Problem B
Given the function F and an interaction graph G, find the sum F̃ such that G is the interaction graph of F̃ and the error |F - F̃| is minimal.
Note: the interaction graph does not uniquely define the form of the approximating function, so Problem B is not trivially reducible to Problem A.

Theorem 3.1
The theorem shows that the form of the optimal approximating sum depends only on the given interaction graph G, and not on the actual values of F.
Theorem: the characteristic function B of an optimal approximating sum is computable as follows. We have B(X_i) = 1 iff the set of vertices W_i corresponding to the set of variables X_i defines a complete subgraph of G.

Proof of Theorem 3.1

Example
Problem B reduces to (see the sketch below):
- finding all complete subgraphs of the graph G
- solving Problem A
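A minimal sketch of this reduction, reusing the approximate helper sketched above (again, the names and the edge-list representation of G are assumptions, not the paper's notation):

```python
import itertools

def characteristic_from_graph(n, edges):
    """Theorem 3.1: B(X) = 1 iff the vertices of X form a complete
    subgraph of G.  Vertices are 0..n-1; `edges` is an iterable of pairs.
    Returns the collection of selected variable subsets."""
    adj = {frozenset(e) for e in edges}
    B = []
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            if all(frozenset(p) in adj for p in itertools.combinations(S, 2)):
                B.append(S)
    return B

# Example: G has the edges {x1,x2} and {x2,x3} (0-indexed below), so B
# selects the empty set, the singletons, {x1,x2} and {x2,x3}; Problem B
# then reduces to running Algorithm A with this B.
B = characteristic_from_graph(3, [(0, 1), (1, 2)])
# approx = approximate(F, B)   # F: a 3-dimensional array, as in Problem A
```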

Optimal approximation with a fixed amount of memory
Problem C: given the function F, find a sum F̃ whose terms can be stored as tables in no more than M cells of memory and such that the error |F - F̃| is minimal.

2 ways of storing a sum
1) Storing the sum directly requires 2N^2 cells.
2) Store the 6 functions from the table in such a way that, if any of the arguments of f_1, ..., f_5 is zero, the value of the function is zero and is not stored. Total storage space:

2 ways of storing a sum
In general, the first method requires a number of cells given by a summation extended to all maximal sets.
The second method requires a number of cells given by a summation extended to all selected sets, and it is optimal because this number is exactly equal to the number of dimensions of the vector space S_B.
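A plausible reconstruction of the two counts, assuming d is the common domain size and that the second method stores, for each selected X_i, only the entries of the proper component in which no argument is zero:

```latex
M_1 \;=\; \sum_{X_i \text{ maximal}} d^{\,|X_i|},
\qquad
M_2 \;=\; \sum_{X_i :\, B(X_i)=1} (d-1)^{\,|X_i|} \;=\; \dim S_B .
```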

Error
By definition, the squared error is the squared norm of F - F̃; thus, by the orthogonality of the proper spaces (Theorem 2.1), it equals the sum of the squared norms of the proper components that are discarded.
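In symbols (a reconstruction, using the decomposition of F into its proper components k_i):

```latex
\|F - \tilde F\|^2
\;=\; \Big\| \sum_{X_i :\, B(X_i)=0} k_i \Big\|^2
\;=\; \sum_{X_i :\, B(X_i)=0} \|k_i\|^2 .
```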

Translation of Problem C into a (0,1)-restricted integer programming problem
Problem D: determine the (0,1)-restricted integer variables y_i (i = 1, ..., m) that optimize the objective function subject to the constraints.
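The objective and the constraints appear only as figures; a plausible reconstruction, assuming y_i selects the proper component k_i, m_i is the number of cells needed to store it, and the monotonicity of B carries over, is:

```latex
\max \ \sum_{i=1}^{m} y_i \,\|k_i\|^2
\quad \text{subject to} \quad
\sum_{i=1}^{m} y_i\, m_i \le M,
\qquad
y_i \le y_j \ \text{ whenever } X_j \subseteq X_i,
\qquad
y_i \in \{0,1\}.
```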

Correspondence between Problem C and Problem D
In Problem D both the objective function and the constraints are linear. Therefore, linear integer programming methods apply.