1 / 20 Arkadij Zakrevskij United Institute of Informatics Problems of NAS of Belarus A NEW ALGORITHM TO SOLVE OVERDEFINED SYSTEMS OF LINEAR LOGICAL EQUATIONS

2 / 20 Outline
- How the problem is stated
- How the problem can be solved
- Theoretical background
- Example
- Solving the core equation
- Experiments
- Results of experiments

3 / 20 How the problem is stated
A system of m linear logical equations (SLLE) with n Boolean variables:
a11 x1 ⊕ a12 x2 ⊕ … ⊕ a1n xn = y1,
a21 x1 ⊕ a22 x2 ⊕ … ⊕ a2n xn = y2,
…
am1 x1 ⊕ am2 x2 ⊕ … ⊕ amn xn = ym.

4 / 20 How the problem is stated
Any SLLE can be presented by the matrix equation Ax = y, where A is the matrix of coefficients, x the vector of unknowns, and y the vector of constant terms, all Boolean. Usually A and y are given, and the problem is to find a root: a value of vector x satisfying Ax = y. An SLLE can be
- defined (exactly one root), usually when m = n;
- undefined (underdefined: several roots), usually when m < n;
- overdefined (inconsistent, contradictory: no root), usually when m > n.
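The three cases can be checked mechanically. A minimal sketch (not from the presentation, names illustrative): Gaussian elimination over GF(2) on the augmented matrix decides whether Ax = y has a root.

```python
def gf2_consistent(A, y):
    """Return True iff the SLLE Ax = y has a root over GF(2).
    A is a list of m rows (each a list of n bits), y a list of m bits."""
    m, n = len(A), len(A[0])
    aug = [row[:] + [y[i]] for i, row in enumerate(A)]  # augmented matrix
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if aug[i][c]), None)
        if piv is None:
            continue
        aug[r], aug[piv] = aug[piv], aug[r]
        for i in range(m):
            if i != r and aug[i][c]:
                aug[i] = [a ^ b for a, b in zip(aug[i], aug[r])]
        r += 1
    # Inconsistent iff elimination produced a row (0 ... 0 | 1).
    return not any(not any(row[:n]) and row[n] for row in aug)
```

For the overdefined case (m > n) most right-hand sides y make the system inconsistent, which is exactly the situation the talk addresses.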

5 / 20 How the problem is stated
Finding optimal solutions: looking for a shortest root in an undefined SLLE.
[Figure: matrix A and vector y, with the transposed vector x^T marked as the shortest root.]

6 / 20 How the problem is stated
Satisfying the maximum number of equations in an overdefined SLLE.
[Figure: matrix A with vectors y, e and y*; two equations are marked "not satisfied"; x* is an optimal solution.]

7 / 20 How the problem is stated
Let m > n, and let all columns of matrix A be linearly independent. Then the system Ax = y is consistent for 2^n values of vector y (called suitable) out of 2^m possible values. Suppose a suitable vector y* is distorted to y = y* ⊕ e, where e is a distortion vector. The problem is to restore vector y* (or e) for given A and y. When y is not too far from y*, the problem can be solved by finding the suitable value y″ nearest to y; then y″ = y*.
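For tiny n this restoration can be done by brute force: enumerate all 2^n suitable vectors Ax and take the one nearest to y in Hamming distance. A sketch under that assumption (illustrative code, not the author's):

```python
from itertools import product

def matvec_gf2(A, x):
    """Product Ax over GF(2)."""
    return [sum(a & b for a, b in zip(row, x)) % 2 for row in A]

def nearest_suitable(A, y):
    """Enumerate all 2^n suitable vectors y'' = Ax and return
    (distance to y, y'', root x) for the nearest one."""
    best = None
    for x in product([0, 1], repeat=len(A[0])):
        yx = matvec_gf2(A, x)
        d = sum(a ^ b for a, b in zip(yx, y))
        if best is None or d < best[0]:
            best = (d, yx, list(x))
    return best

# Example: y* = Ax* for x* = (1, 1) is (1, 1, 0, 1, 1); flipping bit 2
# gives y = (1, 1, 1, 1, 1).  The nearest suitable vector recovers y*.
A = [[1, 0], [0, 1], [1, 1], [1, 0], [0, 1]]
d, y2, x = nearest_suitable(A, [1, 1, 1, 1, 1])
# d == 1, y2 == [1, 1, 0, 1, 1], x == [1, 1]
```

The rest of the talk is about avoiding this 2^n enumeration when n is large.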

8 / 20 How the problem can be solved
Matrix A generates a linear vector space M consisting of all distinct sums (modulo 2) of columns of A. The equation Ax = y is consistent (and y is suitable) iff y ∈ M. The problem is to calculate the vector distance d(A, y) between the vector space M and vector y. It can be regarded as the distortion vector e if its weight w(e) (the number of 1s) is smaller than ρ, the average shortest Hamming distance between elements of M. Vector e can also be regarded as the correction vector.

9 / 20 How the problem can be solved
The value of ρ is defined by the inequality
η(m, n, ρ) < 1 ≤ η(m, n, ρ + 1),
where η(m, n, k) is the expected number of suitable values of vector y with weight at most k in a random SLLE with parameters m and n:
η(m, n, k) = Σ (i = 0 … k) C(m, i) · 2^(n − m).
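The threshold can be computed directly from this definition. A small sketch, writing eta(m, n, k) for the expected number of suitable vectors of weight at most k and rho for the threshold (both symbol names are assumptions, since the original glyphs were lost in extraction); it assumes m > n:

```python
from math import comb

def eta(m, n, k):
    """Expected number of suitable vectors y of weight at most k
    in a random SLLE with parameters m, n."""
    return sum(comb(m, i) for i in range(k + 1)) * 2 ** (n - m)

def rho(m, n):
    """Threshold: the largest k with eta(m, n, k) < 1, so that
    eta(m, n, rho) < 1 <= eta(m, n, rho + 1).  Assumes m > n."""
    k = 0
    while eta(m, n, k + 1) < 1:
        k += 1
    return k
```

For example, eta(10, 3, 2) = 56/128 < 1 ≤ eta(10, 3, 3) = 176/128, so rho(10, 3) = 2.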

10 / 20 Theoretical background
Changing some column a_i of matrix A for its sum with another column a_j, we obtain a matrix A⁺ equivalent to the initial one (generating the same linear vector space M).
Affirmation 1. Vector distance d(A⁺, y) = d(A, y).
Changing vector y in system (A, y) for its sum with an arbitrary column a_j of matrix A, we obtain y⁺.
Affirmation 2. Vector distance d(A, y⁺) = d(A, y).

11 / 20 Theoretical background
Using the introduced operations, we canonize the system (A, y):
1) select n linearly independent rows in matrix A;
2) in each of them delete all 1s except one (put into position i for the i-th of the selected rows);
3) delete the 1s in the corresponding components of vector y.
After that the obtained system (A⁺, y⁺) is reduced in size:
4) all selected rows are deleted from matrix A⁺, as well as the corresponding components of vector y⁺.
The remaining rows and components constitute a Boolean ((m − n) × n)-matrix B and an (m − n)-vector u.
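The canonization can be sketched with column operations only (the equivalence operations of the previous slide), so the generated space M is untouched. An illustrative reimplementation, not the author's code:

```python
def canonize(A, y):
    """Column-reduce (A, y): pick n pivot rows, make them an identity
    block using sums of columns, clear the matching components of y,
    then drop the pivot rows.  Returns the core pair (B, u) and the
    list of selected rows."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    y = y[:]
    sel = []
    for j in range(n):
        r = next((i for i in range(m) if i not in sel and cols[j][i]), None)
        if r is None:
            raise ValueError("columns of A are linearly dependent")
        sel.append(r)
        for k in range(n):            # clear row r in all other columns
            if k != j and cols[k][r]:
                cols[k] = [a ^ b for a, b in zip(cols[k], cols[j])]
        if y[r]:                      # clear component r of y
            y = [a ^ b for a, b in zip(y, cols[j])]
    rest = [i for i in range(m) if i not in sel]
    B = [[cols[j][i] for j in range(n)] for i in rest]
    u = [y[i] for i in rest]
    return B, u, sel
```

On the small 5 × 2 instance A = [[1,0],[0,1],[1,1],[1,0],[0,1]], y = [1,1,1,1,1], this selects rows 0 and 1 and leaves a 3 × 2 matrix B with u = (1, 0, 0).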

12 / 20 Theoretical background
Affirmation 3. The task of restoration (finding vector d(A, y)) is reduced to solving the core equation Bx = u: to find a column subset C in matrix B which minimizes the arithmetic sum w(c) + w(s). In that case d(A, y) = (c, s), the concatenation of vectors c and s, where
c is the Boolean n-vector indicating the columns of B entering C;
w(c) is the number of 1s in c;
σ(C) is the mod-2 sum of the columns in C;
s = σ(C) ⊕ u;
w(s) is the number of 1s in s.
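A direct way to attack the core equation is to enumerate column subsets C in order of increasing size and keep the best w(c) + w(s). A brute-force sketch, feasible only for small n (illustrative, not the author's implementation):

```python
from itertools import combinations

def solve_core(B, u, max_level):
    """Enumerate subsets C of the columns of B up to the given size,
    minimizing w(c) + w(s) with s = (mod-2 sum of C) xor u.
    Returns (weight, c, s)."""
    n = len(B[0]) if B else 0
    best = None
    for level in range(max_level + 1):
        for idx in combinations(range(n), level):
            s = [(sum(row[j] for j in idx) % 2) ^ u[i]
                 for i, row in enumerate(B)]
            w = level + sum(s)        # w(c) + w(s)
            if best is None or w < best[0]:
                c = [1 if j in idx else 0 for j in range(n)]
                best = (w, c, s)
    return best
```

For B = [[1,1],[1,0],[0,1]] and u = (1,0,0) the minimum is w = 1, reached by the empty subset (c = 0, s = u).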

13 / 20 Example
[Figure: a worked numerical example showing the system (A, y), its equivalent canonical form (A⁺, y⁺), and the reduced core pair (B, u).]

14 / 20 Example
Restoring the initial system:
1. Solving the system Bx = u, i.e. finding a value c of x which minimizes the function w(Bx ⊕ u) + w(x).
2. Obtaining d(A, y) = (c, Bc ⊕ u), which can be accepted as the distortion (correction) vector e.
3. Calculating, if needed, the suitable vector y* = y ⊕ e, then solving the consistent system Ax = y* and finding x*.
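With the core equation solved, steps 2 and 3 are mechanical. A sketch on a hypothetical 5 × 2 instance; the values of sel, c and s below are assumed to have come out of canonization and the core search for this A and y:

```python
from itertools import product

# Hypothetical small instance (m = 5, n = 2) and its core-search result.
A = [[1, 0], [0, 1], [1, 1], [1, 0], [0, 1]]
y = [1, 1, 1, 1, 1]
sel = [0, 1]               # rows selected during canonization
c, s = [0, 0], [1, 0, 0]   # minimizer of w(c) + w(s) for the core pair

# Step 2: assemble e = (c, s): c on the selected rows, s on the rest.
e = [0] * len(A)
rest = [i for i in range(len(A)) if i not in sel]
for j, i in enumerate(sel):
    e[i] = c[j]
for j, i in enumerate(rest):
    e[i] = s[j]

# Step 3: y* = y xor e; solve the now-consistent system Ax = y*
# (brute force over x, since n is tiny here).
ystar = [a ^ b for a, b in zip(y, e)]

def matvec_gf2(A, x):
    return [sum(a & b for a, b in zip(row, x)) % 2 for row in A]

xstar = next(list(x) for x in product([0, 1], repeat=len(A[0]))
             if matvec_gf2(A, x) == ystar)
# e == [0, 0, 1, 0, 0], ystar == [1, 1, 0, 1, 1], xstar == [1, 1]
```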

15 / 20 Example
[Figure: the numerical example continued, showing B, u, Bc ⊕ u, the vectors e, y and y*, matrix A, and the resulting values c and x*.]

16 / 20 Solving the core equation Bx = u
The suggested method can be applied when w(e) < ρ. As soon as w(c, s) < ρ for a current subset C from B, the vector (c, s) can be accepted as vector e. The subsets C are checked one by one while increasing the number of columns in C up to L, the level of search. The run-time T strongly depends on L, which, in its turn, depends statistically on m, n and w(e), with a great dispersion.

17 / 20 Solving the core equation Bx = u
It follows that efficient algorithms can be constructed which solve the problem in a quasi-parallel mode, using a set of many (q) canonical forms of the system (A, y) with different basics selected at random.

18 / 20 Solving the core equation Bx = u
Additional acceleration in finding a short solution can be achieved by randomization. q different canonical forms are prepared, having various basics selected at random. Then the solution is searched in parallel over all these forms, at levels of exhaustive search 0, 1, etc., until at a current level L a solution with weight w satisfying the condition w ≤ ρ − 1 is recognized. With raising q this level L can be reduced, as well as the run-time T, which depends strongly on L.
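The randomized scheme can be sketched end to end: build q canonical forms whose pivot rows are chosen in a random order, then sweep levels 0, 1, … across all forms until a solution lighter than the threshold (called rho here, an assumed name) appears. Illustrative code under the assumption that the columns of A are linearly independent, not the author's implementation:

```python
import random
from itertools import combinations

def canonize(A, y, order):
    """Column-reduce (A, y), choosing pivot rows in the given order;
    yields the core pair (B, u) for one random 'basic'."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    y = y[:]
    sel = []
    for j in range(n):
        # assumes linearly independent columns, so a pivot always exists
        r = next(i for i in order if i not in sel and cols[j][i])
        sel.append(r)
        for k in range(n):
            if k != j and cols[k][r]:
                cols[k] = [a ^ b for a, b in zip(cols[k], cols[j])]
        if y[r]:
            y = [a ^ b for a, b in zip(y, cols[j])]
    rest = [i for i in range(m) if i not in sel]
    return [[cols[j][i] for j in range(n)] for i in rest], [y[i] for i in rest]

def quasi_parallel_search(A, y, q, rho, seed=0):
    """Level-by-level search over q random canonical forms; stop as soon
    as some form yields weight w(c) + w(s) below the threshold rho.
    Returns (level, weight, c, s) or None."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    forms = []
    for _ in range(q):
        order = list(range(m))
        rng.shuffle(order)
        forms.append(canonize(A, y, order))
    for level in range(n + 1):
        for B, u in forms:
            for idx in combinations(range(n), level):
                s = [(sum(row[j] for j in idx) % 2) ^ u[i]
                     for i, row in enumerate(B)]
                w = level + sum(s)
                if w < rho:
                    c = [1 if j in idx else 0 for j in range(n)]
                    return level, w, c, s
    return None
```

Because every canonical form generates the same space M, the minimum weight is the same in all of them; randomizing the basics only changes how early in the level sweep it is met.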

19 / 20 Experiments
10 random overdefined SLLEs (A, y) were prepared with m = 1000, n = 100, and w(e) = 100. Each of them was solved. The level of search was minimized by:
- randomization: constructing q random equivalent forms (A⁺, y⁺) and transforming them to (B, u);
- solving the systems (B, u) in parallel, gradually raising the level of search;
- restricting the search by recognizing short solutions.
The experiments were conducted for q = 1, q = 10 and q = 100 to see how the run-time T depends on q.

20 / 20 Results of experiments (m = 1000, n = 100, w(e) = 100)

  №  |  q = 1    |  q = 10   |  q = 100
     |  L   T    |  L   T    |  L   T
   1 |  –   –    |  3   10s  |  3   6m
   2 |  –   –    |  8   27d  |  3   6m
   3 |  –   –    |  7   4d   |  3   7m
   4 |  –   –    |  5   33m  |  4   12m
   5 |  –   –    |  5   1h   |  3   7m
   6 |  –   –    |  7   3d   |  2   6m
   7 |  –   –    |  6   3h   |  4   15m
   8 |  –   –    |  4   25s  |  4   8m
   9 |  6   1h   |  4   2m   |  4   9m
  10 |  –   –    |  5   52m  |  5   1h

(The q = 1 entries, except row 9, are illegible in the source.)