Matrix Sparsification

Problem Statement: Reduce the number of 1s in a matrix.

Measuring Sparsity: I measured sparsity by adding up the total number of 1s in the matrix and dividing by the total number of elements. This gives a number between 0 and 1 that tells you what fraction of the matrix is filled with ones; lower values mean a sparser matrix.
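All code examples below are my own Python/NumPy sketches of what the slides describe, not the author's original code. A minimal version of this measure:

    import numpy as np

    def sparsity(A):
        # For a 0/1 matrix: the number of 1s divided by the number of
        # entries, a value in [0, 1] where lower means sparser.
        return np.count_nonzero(A) / A.size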

Experiment Setup: First I generated H, a sparse 300x582 matrix with a column weight of 3 (sparsity = 3/300 = .01). Then I multiplied it on the left by a random invertible matrix over GF(2); D is the resulting dense matrix, with sparsity ≈ .5. Then I tried to make D as sparse as the original matrix.
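A sketch of this setup, assuming the row positions of each column's 1s are chosen uniformly at random and building the invertible matrix from random row additions (neither construction is specified in the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, col_wt = 300, 582, 3

    # H: exactly col_wt ones per column, in random rows.
    H = np.zeros((m, n), dtype=int)
    for j in range(n):
        H[rng.choice(m, size=col_wt, replace=False), j] = 1

    # A random invertible matrix over GF(2): start from the identity and
    # apply many random row additions, each of which is invertible.
    R = np.eye(m, dtype=int)
    for _ in range(10 * m):
        i, j = rng.choice(m, size=2, replace=False)
        R[i] = (R[i] + R[j]) % 2

    # Left-multiplying by an invertible matrix preserves the row space
    # but destroys the sparsity.
    D = (R @ H) % 2

    print(H.sum() / H.size)  # 3/300 = 0.01
    print(D.sum() / D.size)  # roughly 0.5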

Comparing Rowspaces: From the sparse H matrix, generate a G matrix so that mod(H*G,2) = 0. Then test D, and any sparsified versions of D, to ensure that mod-2 multiplication by G still results in a 0 matrix; since D's rows are combinations of H's rows, this confirms the row space was preserved.
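One way to get such a G is as a null-space basis of H over GF(2). A sketch (gf2_nullspace is my own helper, built on Gaussian elimination mod 2; H and D are from the setup above):

    import numpy as np

    def gf2_nullspace(A):
        # Reduce A to reduced row echelon form over GF(2), then read off
        # a null-space basis: columns of G satisfy (A @ G) % 2 == 0.
        A = A.copy() % 2
        m, n = A.shape
        pivots, r = [], 0
        for c in range(n):
            hits = np.nonzero(A[r:, c])[0]
            if hits.size == 0:
                continue
            A[[r, r + hits[0]]] = A[[r + hits[0], r]]  # move a pivot up
            for i in range(m):
                if i != r and A[i, c]:
                    A[i] = (A[i] + A[r]) % 2           # clear column c
            pivots.append(c)
            r += 1
            if r == m:
                break
        free = [c for c in range(n) if c not in pivots]
        G = np.zeros((n, len(free)), dtype=int)
        for k, f in enumerate(free):
            G[f, k] = 1
            for row, p in enumerate(pivots):
                G[p, k] = A[row, f]  # x_p = sum of A[row, f] * x_f (mod 2)
        return G

    G = gf2_nullspace(H)
    assert not ((H @ G) % 2).any()  # G annihilates H by construction
    assert not ((D @ G) % 2).any()  # and D too, since D has H's row space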

GF(2) Difficulties: Matrix sparsification is difficult in GF(2) because it requires a different set of math rules. For example: with real numbers, the vectors [0 1 1], [1 0 1], and [1 1 0] are independent; in GF(2), any one of those vectors can be made from the other two.
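A quick check of that example:

    import numpy as np

    a = np.array([0, 1, 1])
    b = np.array([1, 0, 1])
    c = np.array([1, 1, 0])
    print(((a + b) % 2 == c).all())  # True: over GF(2), c = a + b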

Row Echelon Form: The dense matrix starts with sparsity ≈ .5. In an m x n matrix, row reduction gives m columns with only one 1 in them; the rest of the columns are still approximately half 1s. Now sparsity ≈ (m + .5*(n-m)*m)/(n*m). The more square the matrix is, the more this step helps.
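Plugging the experiment's dimensions into that estimate:

    m, n = 300, 582
    predicted = (m + 0.5 * (n - m) * m) / (n * m)
    print(predicted)  # about 0.244, down from about 0.5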

Null Space: Row echelon form tries to make the pivot columns the earliest possible columns. By computing the null space of the original matrix, and then computing the null space of that null space, you get back the original row space. Now the pivot columns are in different locations.
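With the gf2_nullspace helper sketched earlier, the round trip is short; the double-annihilator identity (the null space of the null space is the row space) holds over GF(2) as well:

    N = gf2_nullspace(D)             # columns of N span the null space of D
    D2 = gf2_nullspace(N.T).T        # rows of D2 span the row space of D
    assert not ((D2 @ N) % 2).any()  # same row space: still annihilated by N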

SP2 Function: Find the row addition that reduces the number of 1s the most. A(row,:)*A^T is a vector that gives the number of matching 1s in each row (i.e., the number of 1s that will be eliminated if the rows are added). (1-A(row,:))*A^T is a vector giving the number of 1s that will be introduced if the rows are added.

SP2 Function: If B = A(row,:)*A^T - (1-A(row,:))*A^T, then finding the maximum of B gives the addition that reduces the sparsity the most per row addition. But first, you have to set B(row) = -n, because otherwise B(row) will always be the maximum, and you can't add a row to itself.

SP2 Function: If max(B) is positive, then adding the rows helps sparsify the matrix. If max(B) is 0, then adding the rows keeps the sparsity the same but changes the locations of the 1s. When max(B) is negative, the addition makes the matrix denser, but it can still be helpful in overall sparsification because it helps get you out of local minima.
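A sketch of one SP2 pass built from those formulas (the loop structure and function name are mine; only strictly improving additions are applied here, though the slides note that zero and negative moves can also be useful):

    import numpy as np

    def sp2_step(A):
        # Score every candidate row addition by the net number of 1s it
        # removes, and apply the single best one.
        A = A.copy()
        m, n = A.shape
        best_gain, best_r, best_j = 0, None, None
        for r in range(m):
            matches = A[r] @ A.T           # 1s eliminated per candidate row
            introduced = (1 - A[r]) @ A.T  # 1s introduced per candidate row
            B = matches - introduced       # net 1s removed from row r
            B[r] = -n                      # never add a row to itself
            j = int(np.argmax(B))
            if B[j] > best_gain:
                best_gain, best_r, best_j = B[j], r, j
        if best_r is not None:
            A[best_r] = (A[best_r] + A[best_j]) % 2
        return A, best_gain

    # Repeat until no single row addition removes any more 1s.
    A = D.copy()
    while True:
        A, gain = sp2_step(A)
        if gain == 0:
            break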

SP3 Function: First I make a matrix of all possible mod-2 sums of two different rows, together with the original rows, excluding the row I'm testing against. This new matrix has (m-1)-choose-2 + (m-1) rows. Then I follow the same process as with the SP2 function, but using this new matrix instead of one made up of only the original rows.
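A sketch of building that candidate matrix for one target row (the function name and looping are mine):

    import numpy as np
    from itertools import combinations

    def sp3_candidates(A, target):
        # Every other row on its own, plus every mod-2 sum of two distinct
        # other rows: (m-1) + (m-1 choose 2) candidates in total.
        others = [i for i in range(A.shape[0]) if i != target]
        singles = [A[i] for i in others]
        pairs = [(A[i] + A[j]) % 2 for i, j in combinations(others, 2)]
        return np.array(singles + pairs)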

Why Not Add 3? The matrix size grows too quickly. With 300 rows, the candidate matrix for adding 1 or 2 rows has (299 choose 2) + 299 = 44,850 rows, and the matrix for adding 1 to 3 rows would have (299 choose 3) more, about 4.5 million rows in total.
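The counts follow directly from the formula on the SP3 slide:

    from math import comb

    m = 300
    print(comb(m - 1, 2) + (m - 1))                   # 44850: add 1 or 2 rows
    print(comb(m - 1, 3) + comb(m - 1, 2) + (m - 1))  # 4455399: add 1 to 3 rows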

Why Not Add 3? You could break the candidate matrix down into multiple smaller matrices, save the best row from each, and then add in the best overall row. One reason for the significant improvement when adding two rows instead of one is that in GF(2), 1+1 = 0.

Orthogonal Projection: The easiest way to span a space is with orthogonal vectors. If the projection of vector a onto b is greater than .5, then b is a significant component of a, so removing it will make the vectors closer to orthogonal and the matrix more sparse. That is, if (a·b)/|b|^2 > .5, set row_a = mod2(row_a + row_b). For 0/1 rows, |b|^2 equals the number of 1s in b, so this condition fires exactly when the addition removes more 1s from a than it introduces.
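A sketch of that sweep (the slides give only the per-pair test; the loop is mine):

    import numpy as np

    def projection_sweep(A):
        # For each ordered pair of rows (a, b): if the projection of a onto
        # b exceeds 1/2, replace a with (a + b) mod 2. For 0/1 rows this
        # fires exactly when the addition removes more 1s than it adds.
        A = A.copy()
        m = A.shape[0]
        for i in range(m):
            for j in range(m):
                if i == j:
                    continue
                wt_b = A[j].sum()  # |b|^2 for a 0/1 row
                if wt_b and (A[i] @ A[j]) / wt_b > 0.5:
                    A[i] = (A[i] + A[j]) % 2
        return A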

LDPC Decoding Results: In LDPC codes, every 1 in the parity-check matrix represents a connection between a check node and a variable node. Reducing the number of 1s in the matrix makes LDPC decoding faster and more reliable.

SNR vs. Probability of Error (results figure)