
Direct Methods for Sparse Linear Systems


1 Direct Methods for Sparse Linear Systems
Lecture 4, Alessandra Nardi. Thanks to Prof. Jacob White, Suvranu De, Deepak Ramaswamy, Michal Rewienski, and Karen Veroy.

2 Last lecture review
Solution of systems of linear equations:
- Existence and uniqueness review
- Gaussian elimination basics
- LU factorization
- Pivoting

3 Outline
- Error mechanisms
- Sparse matrices
  - Why are they nice?
  - How do we store them?
  - How can we exploit and preserve sparsity?

4 Error Mechanisms
- Round-off error
  - Pivoting helps
- Ill conditioning (almost singular)
  - Bad luck: property of the matrix
  - Pivoting does not help
- Numerical stability of the method

5 Ill-Conditioning: Norms
Norms are useful for discussing error in numerical problems. A norm $\|\cdot\|$ assigns a non-negative length to a vector and satisfies:
- $\|x\| \ge 0$, with $\|x\| = 0$ only for $x = 0$
- $\|\alpha x\| = |\alpha|\,\|x\|$ for any scalar $\alpha$
- $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality)

6 Ill-Conditioning: Vector Norms
- L2 (Euclidean) norm: $\|x\|_2 = (\sum_i |x_i|^2)^{1/2}$; its unit ball is the unit circle
- L1 norm: $\|x\|_1 = \sum_i |x_i|$; its unit ball is a diamond
- L-infinity norm: $\|x\|_\infty = \max_i |x_i|$; its unit ball is the unit square

7 Ill-Conditioning: Matrix Norms
Vector-induced norm: $\|A\| = \max_{x \ne 0} \|Ax\| / \|x\|$, the maximum "magnification" of $x$ by $A$.
- $\|A\|_1$ = max absolute column sum
- $\|A\|_\infty$ = max absolute row sum
- $\|A\|_2$ = (largest eigenvalue of $A^T A$)$^{1/2}$

8 Ill-Conditioning: Matrix Norms
More properties of the matrix norm:
- $\|Ax\| \le \|A\|\,\|x\|$
- $\|AB\| \le \|A\|\,\|B\|$
Condition number: $\kappa(A) = \|A\|\,\|A^{-1}\|$. It can be shown that $\kappa(A) \ge 1$, since $1 = \|I\| = \|A A^{-1}\| \le \|A\|\,\|A^{-1}\|$. Large $\kappa(A)$ means the matrix is almost singular (ill-conditioned).
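As a concrete check, here is a minimal sketch (assuming NumPy, which the lecture itself does not use; the nearly singular matrix is an illustrative choice) that evaluates the 2-norm condition number both from the definition and with the library routine:

```python
import numpy as np

# Nearly singular: the two rows are almost aligned.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

# kappa(A) = ||A|| * ||A^-1|| in the 2-norm (largest singular value).
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(kappa)                  # ~4e4: badly conditioned
print(np.linalg.cond(A, 2))   # same value from the built-in routine
```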

9 Ill-Conditioning: Perturbation Analysis
What happens if we perturb b? If $Mx = b$ and $M(x + \delta x) = b + \delta b$, then $\delta x = M^{-1}\,\delta b$, so $\|\delta x\| \le \|M^{-1}\|\,\|\delta b\|$. Combining this with $\|b\| \le \|M\|\,\|x\|$ gives
$\frac{\|\delta x\|}{\|x\|} \le \|M\|\,\|M^{-1}\|\,\frac{\|\delta b\|}{\|b\|} = \kappa(M)\,\frac{\|\delta b\|}{\|b\|}$
so a large $\kappa(M)$ is bad.
As the algebra on the slide shows, the relative change in the solution x is bounded by a matrix-dependent factor times the relative change in the data. That factor was historically used as the definition of the condition number, but this definition has been abandoned because it is norm-dependent; instead, the condition number of A is defined as the ratio of the largest to the smallest singular value of A. Singular values are outside the scope of this course; consider consulting Trefethen & Bau.
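A hedged numerical illustration of this bound (assuming NumPy; the matrix and the perturbation direction are my own example, chosen so the amplification is close to its worst case):

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = np.linalg.solve(M, b)            # exact solution is [1, 1]

db = 1e-6 * np.array([1.0, -1.0])    # tiny perturbation of b
dx = np.linalg.solve(M, b + db) - x

rel_in = np.linalg.norm(db) / np.linalg.norm(b)
rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
print(rel_out / rel_in)       # amplification: ~4e4
print(np.linalg.cond(M, 2))   # ...which is essentially kappa(M)
```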

10 Ill-Conditioning: Perturbation Analysis
What happens if we perturb M? For $(M + \delta M)(x + \delta x) = b$, a similar derivation gives
$\frac{\|\delta x\|}{\|x + \delta x\|} \le \kappa(M)\,\frac{\|\delta M\|}{\|M\|}$
so again a large $\kappa(M)$ is bad.
Bottom line: if the matrix is ill-conditioned, round-off gets you into trouble.

11 Ill-Conditioning: Perturbation Analysis
A geometric approach is more intuitive. Think of solving $Mx = b$ as expressing b as a combination of the columns of M. When the vectors are orthogonal, the split is well determined. When the vectors are nearly aligned, it is hard to decide how much of one versus how much of the other to use, so small perturbations move the solution a lot.

12 Numerical Stability Rounding errors may accumulate and propagate unstably in a bad algorithm. It can be proven that for Gaussian elimination the accumulated error is bounded.

13 Summary on Error Mechanisms for GE
- Rounding: due to the machine's finite precision we have an error in the solution even if the algorithm is perfect
  - Pivoting helps to reduce it
- Matrix conditioning
  - If the matrix is "good", complete pivoting solves any round-off problem
  - If the matrix is "bad" (almost singular), there is nothing to do
- Numerical stability: how rounding errors accumulate
  - GE is stable

14 LU – Computational Complexity
Cost is O(n^3) for an n x n matrix M: we cannot afford this complexity. Instead, exploit the natural sparsity that occurs in circuit equations.
- Sparsity: many zero elements
- A matrix is sparse when it is advantageous to exploit its sparsity
Exploiting sparsity brings the cost down to roughly O(n^1.1) to O(n^1.5).

15 LU – Goals of exploiting sparsity
(1) Avoid storing zero entries
  - Memory usage reduction
  - Decomposition is faster since you do not need to access them (but the data structure is more complicated)
(2) Avoid trivial operations
  - Multiplication by zero
  - Addition with zero
(3) Avoid losing sparsity

16 Sparse Matrices – Resistor Line
Tridiagonal case: a line of m resistors couples each node only to its two neighbors, so the nodal matrix has nonzeros only on the diagonal and the two adjacent diagonals. [Figure: resistor line and its m x m tridiagonal matrix]

17 GE Algorithm – Tridiagonal Example
For i = 1 to n-1 {          "For each row (the pivot row)"
    For j = i+1 to n {      "For each target row below the pivot"
        compute the multiplier M(j,i) / M(i,i)
        For k = i+1 to n {  "For each row element beyond the pivot"
            update M(j,k)
        }
    }
}
For a tridiagonal matrix the only nonzero below pivot i is in row j = i+1, and the only entry beyond the pivot in that row is k = i+1, so each step does constant work: order N operations! (A runnable version follows below.)
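Below is a runnable sketch of this specialization in Python (the function name and the convention a = sub-diagonal, d = diagonal, c = super-diagonal are illustrative; no pivoting is performed, so the diagonal must stay nonzero):

```python
def tridiag_solve(a, d, c, b):
    """GE specialized to a tridiagonal system: O(n) work, O(n) storage."""
    n = len(d)
    d, b = d[:], b[:]                 # work on copies
    for i in range(n - 1):            # forward elimination touches one row
        m = a[i] / d[i]               # the only multiplier below pivot i
        d[i + 1] -= m * c[i]          # the only entry beyond the pivot
        b[i + 1] -= m * b[i]
    x = [0.0] * n                     # back substitution, also O(n)
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

# 3x3 resistor-line pattern: diagonal 2, off-diagonals -1
print(tridiag_solve([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0],
                    [1.0, 0.0, 1.0]))   # -> [1.0, 1.0, 1.0]
```

For the resistor-line matrix the pivots stay safely nonzero because the matrix is diagonally dominant, which is why skipping pivoting is acceptable here.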

18 Sparse Matrices – Fill-in – Example 1
Nodal matrix. [Figure: resistor network and its nodal matrix]
Recalling from lecture 2, the entries in the nodal matrix can be derived by noting that a resistor connected between nodes i and j contributes to four locations in the matrix: (i,i), (i,j), (j,i), and (j,j). It is also interesting to note that Gii equals the sum of the conductances (one over resistance) incident at node i. The matrix is symmetric and diagonally dominant.

19 Sparse Matrices – Fill-in – Example 1
[Figure: matrix nonzero structure before and after one LU step; X = nonzero]
During a step of LU factorization, a multiple of a source row is subtracted from a target row below it. Since these two rows will not necessarily have nonzeros in the same columns, the subtraction may introduce additional nonzeros into the target row. As a simple example, consider LU factoring a small matrix whose bottom right entry is zero: the factored matrix has a nonzero entry there, whereas the original matrix did not. This changing of a zero entry into a nonzero entry is referred to as a fill-in.
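A small demonstration of fill-in (assuming NumPy; the "arrow" matrix with a dense first row and column is my own example, not the slide's): eliminating the first row immediately makes every remaining row dense.

```python
import numpy as np

n = 5
A = 4.0 * np.eye(n)
A[0, :] = 1.0          # dense first row...
A[:, 0] = 1.0          # ...and first column
A[0, 0] = 4.0
print(np.count_nonzero(A))             # 13 nonzeros out of 25

for i in range(n - 1):                 # plain GE without pivoting
    for j in range(i + 1, n):
        if A[j, i] != 0.0:
            A[j, i + 1:] -= (A[j, i] / A[i, i]) * A[i, i + 1:]

print(np.count_nonzero(np.triu(A)))    # 15: the upper factor is fully dense
```

Ordering the dense row and column last instead would produce no fill-in at all, which is exactly the motivation for the reordering discussed in the following slides.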

20 Sparse Matrices – Fill-in – Example 2
Fill-ins propagate. [Figure: 4x4 matrix pattern; X = nonzero]
In the example, the 4x4 mesh matrix begins with 7 zeros; during the LU factorization, 5 of them become nonzero. Of additional concern is how this compounds: the first step, where a multiple of the first row is subtracted from the second row, generates fill-ins in the third and fourth columns of row 2. When multiples of row 2 are then subtracted from rows 3 and 4, the fill-ins generated in row 2 generate second-level fill-ins in rows 3 and 4. Fill-ins from step 1 result in fill-ins in step 2.

21 Sparse Matrices – Fill-in & Reordering
[Figure: one node numbering that causes fill-ins vs. one that causes none]
In the context of the nodal equation formulation, renumbering the nodes seems like a simple way to reduce fill-in, since selecting the node numbers was arbitrary to begin with. Keep in mind, however, that such a renumbering corresponds to swapping both rows and columns of the matrix.
Node reordering can reduce fill-in:
- Preserves properties (symmetry, diagonal dominance)
- Equivalent to swapping rows and columns

22 Exploiting and maintaining sparsity
Criteria for exploiting sparsity:
- Minimum number of operations
- Minimum number of fill-ins
Pivoting to maintain sparsity is an NP-complete problem, so heuristics are used: Markowitz, Berry, Hsieh and Ghausi, Nakhla and Singhal and Vlach.
Typical choice: Markowitz (about 5% more fill-ins, but faster).
Caveat: pivoting for accuracy may conflict with pivoting for sparsity.

23 Sparse Matrices – Fill-in & Reordering
Where can fill-in occur? Only in the unfactored part of the matrix, at positions where the pivot column (the multipliers) and the pivot row both have nonzeros. [Figure: already-factored block, multipliers, possible fill-in locations]
Fill-in estimate = (nonzeros in unfactored part of the row - 1) times (nonzeros in unfactored part of the column - 1)
This estimate is the Markowitz product (a sketch as code follows below).
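A sketch of the estimate as code (Python; the coordinate-set representation and names are illustrative). For the arrow-shaped pattern below, the hub's Markowitz product is 16 while every leaf's is 1, so the leaves should be pivoted on first:

```python
def markowitz_products(nz, n, step=0):
    """nz: set of (row, col) nonzero positions; rows/cols >= step are unfactored.
    For each candidate diagonal, returns the product of off-diagonal nonzero
    counts in its unfactored row and column (the fill-in estimate)."""
    prods = {}
    for i in range(step, n):
        r = sum(1 for j in range(step, n) if (i, j) in nz and j != i)
        c = sum(1 for j in range(step, n) if (j, i) in nz and j != i)
        prods[i] = r * c
    return prods

n = 5
nz = ({(i, i) for i in range(n)}
      | {(0, j) for j in range(1, n)} | {(j, 0) for j in range(1, n)})
print(markowitz_products(nz, n))   # {0: 16, 1: 1, 2: 1, 3: 1, 4: 1}
```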

24 Sparse Matrices – Fill-in & Reordering
Markowitz reordering (diagonal pivoting): a greedy algorithm, but close to optimal!
To understand the Markowitz reordering algorithm, it is helpful to consider its cost. The first step is to find the diagonal with the minimum Markowitz product; its cost depends on the matrix size and on K, the average number of nonzeros per row. The second step is to swap rows and columns in the factorization; a good data structure makes the swap inexpensive. The third step is to factor the reordered matrix and insert the fill-ins; if the matrix is very sparse, this step is also inexpensive. Since one must then find the diagonal with the minimum Markowitz product in the updated matrix, the products must be recomputed. It is possible to improve the situation by noting that very few Markowitz products change during a single step of the factorization. The mechanics of such an optimization are easiest to see by examining the graph of the matrix (a sketch of the greedy loop follows below).
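Here is a minimal sketch of the greedy loop for the structurally symmetric case, where (as the later slides show) the Markowitz product equals the node degree squared, so the algorithm reduces to repeatedly eliminating a minimum-degree node; the adjacency-dictionary representation is an illustrative choice:

```python
def greedy_order(adj):
    """Greedy Markowitz ordering on a symmetric pattern.
    adj maps each node to the set of its neighbours."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # min degree = min product
        order.append(v)
        nbrs = adj.pop(v)
        for u in nbrs:                                # eliminating v ties its
            adj[u] |= nbrs - {u}                      # neighbours together; the
            adj[u].discard(v)                         # added edges are fill-ins
    return order

star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(greedy_order(star))   # [1, 2, 3, 4, 0]: hub last, so no fill-in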

25 Sparse Matrices – Fill-in & Reordering
Why try only diagonals?
- Corresponds to node reordering in the nodal formulation [Figure: three-node example renumbered 1-2-3 vs. 1-3-2]
- Reduces search cost
- Preserves matrix properties: diagonal dominance, symmetry

26 Sparse Matrices – Fill-in & Reordering
[Figure: typical pattern of a filled-in matrix: the leading rows and columns stay very sparse, while the trailing block becomes dense]

27 Sparse Matrices – Fill-in & Reordering
[Figure: nonzero pattern of an unfactored random matrix]

28 Sparse Matrices – Fill-in & Reordering
[Figure: nonzero pattern of the factored random matrix, showing substantial fill-in]

29 Sparse Matrices – Data Structure
There are several ways of storing a sparse matrix in compact form. The trade-off:
- Storage amount
- Cost of data accessing and update procedures
An efficient data structure: the linked list.

30 Sparse Matrices – Data Structure 1
Orthogonal linked list: each nonzero entry carries links to the next nonzero in its row and to the next nonzero in its column, so both row-wise and column-wise traversals are cheap. There is no explicit storage for the zeros.

31 Sparse Matrices – Data Structure 2
Vector of row pointers; for each row, arrays of data:
Row 1: matrix entries Val 11, Val 12, ..., Val 1K with column indices Col 11, Col 12, ..., Col 1K
Row 2: entries Val 21, Val 22, ..., Val 2L with column indices Col 21, Col 22, ..., Col 2L
...
Row N: entries Val N1, Val N2, ..., Val Nj with column indices Col N1, Col N2, ..., Col Nj
To store a sparse matrix efficiently, one needs a data structure that represents only the matrix nonzeros. One simple approach is based on the observation that each row of a sparse matrix has at least one nonzero entry: one constructs a pair of arrays for each row, where one array holds the matrix entries and the other holds each entry's column. Note that there is no explicit storage for the zeros.
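A minimal Python sketch of this layout (class and method names are illustrative): one pair of growable arrays per row, values alongside their column indices.

```python
class SparseRows:
    """Row-wise sparse storage: parallel per-row arrays of values and columns."""
    def __init__(self, n):
        self.vals = [[] for _ in range(n)]   # matrix entries, one list per row
        self.cols = [[] for _ in range(n)]   # matching column indices

    def insert(self, i, j, v):
        self.vals[i].append(v)
        self.cols[i].append(j)

    def row(self, i):
        """Iterate the nonzeros of row i as (column, value) pairs."""
        return zip(self.cols[i], self.vals[i])

A = SparseRows(3)
A.insert(0, 0, 2.0); A.insert(0, 2, -1.0); A.insert(1, 1, 4.0)
print(list(A.row(0)))   # [(0, 2.0), (2, -1.0)]: the zeros are never stored
```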

32 Sparse Matrices – Data Structure: Problem of Misses
Eliminating source row i from target row j. [Figure: nonzero patterns of rows i and j]
One must read all the row j entries to find the few (three, in the figure) that match row i's nonzero columns. Every miss is an unneeded memory reference (expensive!!!). There could be more misses than arithmetic operations!

33 Sparse Matrices – Data Structure
Scattering for miss avoidance:
1) Read all the elements in row j and scatter them into an n-length work vector
2) Access only the needed elements using array indexing!
A sketch of the trick follows below.
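A sketch of the trick in Python (names illustrative; a production version would also record which columns were touched so the final gather costs only the number of nonzeros rather than n):

```python
def eliminate_row(n, row_i, row_j, m):
    """Subtract m * (row i) from row j; rows are lists of (col, val) pairs."""
    work = [0.0] * n
    for c, v in row_j:        # 1) scatter row j into a dense work vector
        work[c] = v
    for c, v in row_i:        # 2) update by direct indexing: no searching
        work[c] -= m * v      #    through row j, hence no misses
    return [(c, work[c]) for c in range(n) if work[c] != 0.0]   # gather

row_i = [(1, 2.0), (3, -1.0)]
row_j = [(1, 4.0), (2, 5.0)]
print(eliminate_row(5, row_i, row_j, 2.0))
# [(2, 5.0), (3, 2.0)]: the pivot column cancels, and column 3 is a fill-in
```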

34 Sparse Matrices – Graph Approach
Structurally symmetric matrices and graphs: one node per matrix row, one edge per off-diagonal pair. [Figure: 5x5 structurally symmetric nonzero pattern and its graph]
The graph has two important properties: 1) the node degree squared yields the Markowitz product; 2) the graph can easily be updated after one step of factorization. The graph makes efficient a two-step approach to factoring a structurally symmetric matrix: first one determines an ordering that produces little fill by using the graph, then one numerically factors the matrix in the graph-determined order.

35 Sparse Matrices – Graph Approach Markowitz Products
Markowitz product = (node degree)^2. [Figure: example graph with node degrees]
That the ith node degree squared equals the Markowitz product associated with the ith diagonal is easy to see: the node degree is the number of edges emanating from the node, and each edge represents both an off-diagonal row entry and an off-diagonal column entry. Therefore, the number of off-diagonal row entries multiplied by the number of off-diagonal column entries equals the node degree squared.

36 Sparse Matrices – Graph Approach Factorization
One step of LU factorization: [Figure: 5x5 nonzero pattern and its graph, before and after the step]
- Delete the node associated with the pivot row
- "Tie together" the graph edges
One step of LU factorization requires a number of floating point operations and produces a reduced matrix. After step i of the factorization, the unfactored portion of the matrix is smaller, of size (n-i) x (n-i), and may be denser if there are fill-ins. The graph can be used to represent the locations of nonzeros in this unfactored portion, but two things must change: 1) a node must be removed, since the unfactored portion has one fewer row; 2) the edges associated with fill-ins must be added. We can state the manipulation precisely: if row i is eliminated from the matrix, node i must be eliminated from the graph, and all nodes adjacent to node i (adjacent nodes are ones connected by an edge) are made adjacent to each other by adding the necessary edges. The added edges represent fill-ins (a sketch of this update follows below).
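The update rule as a short sketch (Python; the adjacency-dictionary representation is illustrative). Eliminating the hub of a five-node star, for instance, creates all six possible edges among the remaining nodes, whereas eliminating a leaf creates none:

```python
def eliminate(adj, v):
    """Remove pivot node v and pairwise-connect its former neighbours.
    Returns the list of edges that were added, i.e. the fill-ins."""
    nbrs = adj.pop(v)
    for u in nbrs:
        adj[u].discard(v)
    fill = []
    for u in nbrs:
        for w in nbrs:
            if u < w and w not in adj[u]:
                adj[u].add(w)
                adj[w].add(u)
                fill.append((u, w))
    return fill

star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(sorted(eliminate(star, 0)))   # six fill edges: nodes 1-4 become a clique
```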

37 Sparse Matrices – Graph Approach Example
[Figure: five-node example graph with the Markowitz product (= node degree squared) marked at each node]

38 Sparse Matrices – Graph Approach Example
Swap node 2 with node 1 in the elimination order. [Figure: the example graph after the swap]
Examples that factor with no fill-in:
- A tridiagonal matrix, eliminating the nodes in order along the line
- Another ordering for the tridiagonal matrix that is more parallel: eliminate every other node of the line first, then the remaining ones
[Figure: a seven-node line graph A-B-C-D-E-F-G and the two orderings]

39 Summary
- Gaussian elimination error mechanisms: ill-conditioning, numerical stability
- Gaussian elimination for sparse matrices
  - Improved computational cost: factor in O(N^1.5) operations (dense is O(N^3))
  - Example: tridiagonal matrix factorization is O(N)
  - Data structures
  - Markowitz reordering to minimize fill-ins
  - Graph-based approach

