Slide 1: Sparse Matrix Computations
CSCI 317, Mike Heroux
Slide 2: Matrices
Matrix (defn, not rigorous): an m-by-n, two-dimensional array of numbers.
Examples:

        1.0  2.0  1.5          a11  a12  a13
    A = 2.0  3.0  2.5      A = a21  a22  a23
        1.5  2.5  5.0          a31  a32  a33
Slide 3: Sparse Matrices
Sparse matrix (defn, not rigorous): an m-by-n matrix with enough zero entries that it makes sense to keep track of which entries are zero and which are nonzero.
Example:

        a11  a12   0    0    0    0
        a21  a22  a23   0    0    0
    A =   0  a32  a33  a34   0    0
          0    0  a43  a44  a45   0
          0    0    0  a54  a55  a56
          0    0    0    0  a65  a66
Slide 4: Dense vs. Sparse Costs
– What is the cost of storing the tridiagonal matrix with all entries?
– What is the cost if we store each of the diagonals as a vector?
– What is the cost of computing y = Ax for vectors x (known) and y (to be computed):
  – if we ignore sparsity?
  – if we take sparsity into account?
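The storage and flop-count questions above can be made concrete with a small sketch (my illustration, not from the slides): storing only the three diagonals of a tridiagonal matrix costs about 3n doubles instead of n*n, and the matrix-vector product touches only those entries (about 5n flops instead of about 2n^2).

```cpp
#include <cassert>
#include <vector>

// Tridiagonal y = A*x, storing only the three diagonals:
//   sub[i]  = a(i, i-1)  (sub[0] unused)
//   diag[i] = a(i, i)
//   sup[i]  = a(i, i+1)  (sup[n-1] unused)
// Storage: ~3n doubles; work: at most 3 multiplies + 2 adds per row (~5n flops),
// versus n*n doubles and ~2n^2 flops if sparsity is ignored.
std::vector<double> tridiagMatvec(const std::vector<double>& sub,
                                  const std::vector<double>& diag,
                                  const std::vector<double>& sup,
                                  const std::vector<double>& x) {
  int n = static_cast<int>(x.size());
  std::vector<double> y(n);
  for (int i = 0; i < n; ++i) {
    y[i] = diag[i] * x[i];
    if (i > 0)     y[i] += sub[i] * x[i - 1];
    if (i < n - 1) y[i] += sup[i] * x[i + 1];
  }
  return y;
}
```

For the model matrix with 2 on the diagonal and -1 on the off-diagonals, multiplying by x = (1, 1, 1) gives y = (1, 0, 1).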
Slide 5: Origins of Sparse Matrices
In practice, most large matrices are sparse. Specific sources:
– Differential equations. These encompass the vast majority of scientific and engineering simulation, e.g., structural mechanics (F = ma, car crash simulation).
– Stochastic processes. Matrices describe probability distribution functions.
– Networks. Electrical and telecommunications networks: matrix element aij is nonzero if there is a wire connecting point i to point j.
– And more…
Slide 6: Example: 1D Heat Equation (Laplace Equation)
The one-dimensional, steady-state heat equation on the interval [0, 1] is as follows. The solution u(x) of this equation describes the distribution of heat on a wire whose temperature is held at a and b at the left and right endpoints, respectively.
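The equation itself appears only as an image in the original slide; assuming the standard steady-state (Laplace) formulation with the boundary values named in the text, it reads:

```latex
-u''(x) = 0, \quad 0 < x < 1, \qquad u(0) = a, \quad u(1) = b.
```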
Slide 7: Finite Difference Approximation
The following formula provides an approximation of u''(x) in terms of values of u. For example, we may want to approximate u''(0.5) with h = 0.25.
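The formula is an image in this transcript; the standard second-order central difference it refers to is presumably:

```latex
u''(x) \approx \frac{u(x-h) - 2\,u(x) + u(x+h)}{h^2},
\qquad \text{e.g.} \qquad
u''(0.5) \approx \frac{u(0.25) - 2\,u(0.5) + u(0.75)}{(0.25)^2}.
```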
Slide 8: 1D Grid
Interval and grid (n = 5):
    x:    x0 = 0,  x1 = 0.25,  x2 = 0.5,  x3 = 0.75,  x4 = 1
    u(x): u(0) = u0 = a,  u(0.25) = u1,  u(0.5) = u2,  u(0.75) = u3,  u(1) = u4 = b
Note that it is impossible to find u(x) for all values of x. Instead we:
– create a “grid” with n points, then
– find an approximation to u at these grid points.
If we want a better approximation, we increase n.
Note: we know u0 and u4, and we know a relationship between the ui via the finite difference equations. We need to find ui for i = 1, 2, 3.
Slide 9: What We Know
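The slide's equations are images in this transcript. Applying the central difference of Slide 7 to u'' = 0 at the three interior grid points (h = 0.25) presumably gives:

```latex
\frac{u_0 - 2u_1 + u_2}{h^2} = 0, \qquad
\frac{u_1 - 2u_2 + u_3}{h^2} = 0, \qquad
\frac{u_2 - 2u_3 + u_4}{h^2} = 0,
```

with u_0 = a and u_4 = b known.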
Slide 10: Write in Matrix Form
This is a linear system with three equations and three unknowns, which we can easily solve. Note that n = 5 grid points generate this three-equation system. In general, for n grid points on [0, 1], we will have n-2 equations and unknowns.
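The matrix itself is an image in the transcript. Moving the known values u_0 = a and u_4 = b to the right-hand side, the system presumably reads:

```latex
\begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}
=
\begin{bmatrix} a \\ 0 \\ b \end{bmatrix}.
```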
Slide 11: General Form of the 1D Finite Difference Matrix
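The general matrix is shown only as an image; extending the n = 5 case above, for n grid points the (n-2)-by-(n-2) tridiagonal matrix is presumably:

```latex
A = \begin{bmatrix}
 2 & -1 &        &        &    \\
-1 &  2 & -1     &        &    \\
   & \ddots & \ddots & \ddots & \\
   &        & -1     &  2     & -1 \\
   &        &        & -1     &  2
\end{bmatrix} \in \mathbb{R}^{(n-2)\times(n-2)}.
```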
Slide 12: A View of More Realistic Problems
The previous example is very simple, but the basic principles apply to more complex problems. Finite difference approximations exist for any differential equation, and they lead to far more complex matrix patterns. For example…
Slide 13: “Tapir” Matrix (John Gilbert)
Slide 14: Corresponding Mesh
Slide 15: Sparse Linear Systems: Problem Definition
A frequent requirement in scientific and engineering computing is to solve
    Ax = b,
where A is a known, large (sparse) matrix (a linear operator), b is a known vector, and x is an unknown vector.
NOTE: we are now using x differently than before. Previous x: points in the interval [0, 1]. New x: the vector of u values.
Goal: find x.
Method: we will look at two iterative methods: Jacobi and Gauss-Seidel.
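The Jacobi and Gauss-Seidel methods are developed later in the course; as a preview, here is a minimal Jacobi sketch of my own (not the course's official code), written for a dense A for clarity. A sparse version would loop only over the stored nonzeros of each row.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One-file Jacobi iteration sketch for Ax = b, assuming a_ii != 0.
// Each sweep computes, for every row i,
//   xNew[i] = (b[i] - sum_{j != i} a[i][j] * x[j]) / a[i][i],
// using only values from the previous sweep.
std::vector<double> jacobi(const std::vector<std::vector<double>>& A,
                           const std::vector<double>& b,
                           int numSweeps) {
  int n = static_cast<int>(b.size());
  std::vector<double> x(n, 0.0);          // initial guess x(0) = 0
  for (int k = 0; k < numSweeps; ++k) {
    std::vector<double> xNew(n);
    for (int i = 0; i < n; ++i) {
      double sum = b[i];
      for (int j = 0; j < n; ++j)
        if (j != i) sum -= A[i][j] * x[j];
      xNew[i] = sum / A[i][i];
    }
    x = xNew;                             // advance to x(k+1)
  }
  return x;
}
```

On the 3-by-3 finite difference system with a = b = 1, the exact solution is u = (1, 1, 1), and Jacobi converges to it.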
Slide 16: Iterative Methods
Given an initial guess for x, called x(0) (x(0) = 0 is acceptable), compute a sequence x(k), k = 1, 2, …, such that each x(k) is “closer” to x.
Definition of “close”:
– Suppose x(k) = x exactly for some value of k.
– Then r(k) = b - Ax(k) = 0 (the vector of all zeros),
– and norm(r(k)) = sqrt(<r(k), r(k)>) = 0 (a number).
– For any x(k), let r(k) = b - Ax(k).
– If norm(r(k)) = sqrt(<r(k), r(k)>) is small (say, < 1.0E-6), then we say that x(k) is close to x.
– The vector r(k) is called the residual vector.
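The residual test above translates directly into code. A small sketch (mine, with a dense A for brevity; a sparse A would only sum over stored nonzeros):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// norm(r) = sqrt(<r, r>) for the residual r = b - A*x.
// If x solves Ax = b exactly, this returns 0 (up to rounding).
double residualNorm(const std::vector<std::vector<double>>& A,
                    const std::vector<double>& x,
                    const std::vector<double>& b) {
  int n = static_cast<int>(b.size());
  double sumSq = 0.0;
  for (int i = 0; i < n; ++i) {
    double ri = b[i];                         // r_i = b_i - (A*x)_i
    for (int j = 0; j < n; ++j) ri -= A[i][j] * x[j];
    sumSq += ri * ri;                         // accumulate <r, r>
  }
  return std::sqrt(sumSq);
}
```

For the 3-by-3 system of Slide 10 with a = b = 1, the exact solution x = (1, 1, 1) gives a zero residual, while the initial guess x(0) = 0 gives norm(r) = sqrt(2).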
Slide 17: Linear Conjugate Gradient Methods
The linear conjugate gradient algorithm is built from a small set of types of objects and types of operations:
– scalar product defined by the vector space,
– vector-vector operations,
– linear operator applications,
– scalar operations.
Slide 18: General Sparse Matrix
Example:

        a11    0    0    0    0  a16
          0  a22  a23    0    0    0
    A =   0  a32  a33    0  a35    0
          0    0    0  a44    0    0
          0    0  a53    0  a55  a56
        a61    0    0    0  a65  a66
Slide 19: Compressed Row Storage (CRS) Format
Idea: create three length-m arrays of pointers and one length-m array of ints:

    double** values     = new double*[m];
    double** diagonals  = new double*[m];
    int**    indices    = new int*[m];
    int*     numEntries = new int[m];
Slide 20: Compressed Row Storage (CRS) Format
Fill the arrays as follows:

    for (i=0; i<m; i++) { // for each row
      numEntries[i] = numRowEntries; // number of nonzero entries in row i
      values[i]  = new double[numRowEntries];
      indices[i] = new int[numRowEntries];
      for (j=0; j<numRowEntries; j++) { // for each entry in row i
        values[i][j]  = /* value of the j-th entry in row i */;
        indices[i][j] = /* column index of the j-th entry in row i */;
        if (i == indices[i][j]) diagonals[i] = &(values[i][j]);
      }
    }
Slide 21: CRS Example (diagonal omitted)
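The slide's worked example is only an image in this transcript. Putting Slides 19 and 20 together, here is a complete, runnable sketch (my own) that fills the pointer-based CRS structure for a hypothetical 3-by-3 tridiagonal matrix:

```cpp
#include <cassert>

// The slides' CRS structure, filled for the 3x3 matrix
//   [ 2 -1  0 ]
//   [-1  2 -1 ]
//   [ 0 -1  2 ]
// Raw pointers and no cleanup, to stay close to the slides' style;
// production code would manage this memory.
struct CrsMatrix {
  int m;
  double** values;     // values[i]  = nonzero values of row i
  double** diagonals;  // diagonals[i] = pointer to a(i,i) inside values[i]
  int**    indices;    // indices[i] = column index of each stored value
  int*     numEntries; // numEntries[i] = nonzeros in row i
};

CrsMatrix buildExample() {
  const int m = 3;
  static const double dense[3][3] = {{2, -1, 0}, {-1, 2, -1}, {0, -1, 2}};
  CrsMatrix A;
  A.m = m;
  A.values     = new double*[m];
  A.diagonals  = new double*[m];
  A.indices    = new int*[m];
  A.numEntries = new int[m];
  for (int i = 0; i < m; ++i) {
    int nnz = 0;                       // count nonzeros in row i
    for (int j = 0; j < m; ++j)
      if (dense[i][j] != 0.0) ++nnz;
    A.numEntries[i] = nnz;
    A.values[i]  = new double[nnz];
    A.indices[i] = new int[nnz];
    for (int j = 0, k = 0; j < m; ++j) {
      if (dense[i][j] == 0.0) continue;
      A.values[i][k]  = dense[i][j];
      A.indices[i][k] = j;             // column index of k-th stored entry
      if (i == j) A.diagonals[i] = &A.values[i][k];
      ++k;
    }
  }
  return A;
}
```

Rows 0 and 2 each store two entries, row 1 stores three, and every `diagonals[i]` points at the stored value 2.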
Slide 22: Matrix, Scalar-Matrix and Matrix-Vector Operations
Given vectors w, x, and y, scalars alpha and beta, and matrices A and B, we define:
– matrix trace: alpha = tr(A), i.e., α = a11 + a22 + … + ann
– matrix scaling: B = alpha * A, i.e., bij = α aij
– matrix-vector multiplication (with update): w = alpha * A * x + beta * y, i.e., wi = α (ai1 x1 + ai2 x2 + … + ain xn) + β yi
Slide 23: Common Operations (See your notes)
Consider the following operations:
– matrix trace,
– matrix scaling,
– matrix-vector product.
Write each mathematically and in C/C++.
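One possible set of C++ answers (my sketches, not the course's official solutions), written for a dense matrix stored as a vector of rows; sparse versions would loop only over the stored nonzeros of each row:

```cpp
#include <cassert>
#include <vector>

using Matrix = std::vector<std::vector<double>>;
using Vector = std::vector<double>;

// alpha = tr(A) = a11 + a22 + ... + ann (A assumed square)
double trace(const Matrix& A) {
  double t = 0.0;
  for (std::size_t i = 0; i < A.size(); ++i) t += A[i][i];
  return t;
}

// B = alpha * A, i.e., b_ij = alpha * a_ij
Matrix scale(double alpha, const Matrix& A) {
  Matrix B = A;
  for (auto& row : B)
    for (auto& v : row) v *= alpha;
  return B;
}

// w = alpha * A * x + beta * y  (matrix-vector product with update)
Vector matvec(double alpha, const Matrix& A, const Vector& x,
              double beta, const Vector& y) {
  Vector w(y.size());
  for (std::size_t i = 0; i < A.size(); ++i) {
    double dot = 0.0;                         // (A*x)_i
    for (std::size_t j = 0; j < A[i].size(); ++j) dot += A[i][j] * x[j];
    w[i] = alpha * dot + beta * y[i];
  }
  return w;
}
```

For the 3-by-3 finite difference matrix, tr(A) = 6, and w = 1*A*(1,1,1) + 2*(1,1,1) = (3, 2, 3).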
Slide 24: Complexity
– (arithmetic) complexity (defn): the total number of arithmetic operations performed using a given algorithm; often a function of one or more parameters.
– parallel complexity (defn): the number of parallel operations performed, assuming an infinite number of processors.
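As an illustration of the two definitions (my example, not from the slides): a dense matrix-vector product w = Ax with an m-by-n matrix does n multiplies and n-1 adds per row, while with unlimited processors all rows proceed at once and each row's sum collapses in a log-depth reduction:

```latex
\text{complexity} = m(2n - 1) \approx 2mn \ \text{flops},
\qquad
\text{parallel complexity} \approx 1 + \lceil \log_2 n \rceil .
```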
Slide 25: Complexity Examples (See your notes)
What is the complexity of:
– sparse matrix trace?
– sparse matrix scaling?
– sparse matrix-vector product?
What is the parallel complexity of these operations?