Ax = b: Methods for Solution of the System of Equations (Recap)


Methods for solution of the system Ax = b (recap):

Direct methods: one obtains the exact solution (ignoring round-off errors) in a finite number of steps. This group of methods is more efficient for dense and banded matrices.
- Gauss elimination; Gauss-Jordan elimination
- LU decomposition
- Thomas algorithm (for a tri-diagonal banded matrix)

Iterative methods: the solution is obtained through successive approximation. The number of computations depends on the desired accuracy/precision of the solution and is not known a priori. These methods are more efficient for sparse matrices.
- Jacobi iteration
- Gauss-Seidel iteration, with successive over/under-relaxation

Gauss Elimination for the matrix equation Ax = b:

$$\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots &        & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & \vdots &        & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_i \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_i \\ \vdots \\ b_n \end{pmatrix}$$

Approach in two steps:
1. Operating on the rows of matrix A and vector b, transform A into an upper triangular matrix.
2. Solve the resulting system using the back-substitution algorithm.

Indices: i: row index; j: column index; k: step index.

Matrix after the k-th step: we only need to perform steps up to k = n − 1 in order to make the matrix upper triangular.

Gauss Elimination Algorithm

Forward elimination: for k = 1, 2, …, (n − 1), define the multiplication factors

$$l_{ik} = \frac{a_{ik}}{a_{kk}}$$

and compute

$$a_{ij} = a_{ij} - l_{ik}\, a_{kj}; \qquad b_i = b_i - l_{ik}\, b_k$$

for i = k + 1, k + 2, …, n and j = k + 1, k + 2, …, n.

The resulting system of equations is upper triangular. Solve it using the back-substitution algorithm:

$$x_n = \frac{b_n}{a_{nn}}; \qquad x_i = \frac{b_i - \sum_{j=i+1}^{n} a_{ij}\, x_j}{a_{ii}}, \quad i = n-1, n-2, \ldots, 2, 1$$
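As a concrete illustration (not from the slides), here is a minimal NumPy sketch of forward elimination followed by back substitution, applied to the 3×3 system used later in this deck. It assumes every pivot a_kk is non-zero; pivoting is discussed on the next slide.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by forward elimination + back substitution (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: step k zeros out column k below the pivot a_kk.
    for k in range(n - 1):
        for i in range(k + 1, n):
            l = A[i, k] / A[k, k]            # multiplication factor l_ik
            A[i, k + 1:] -= l * A[k, k + 1:] # a_ij = a_ij - l_ik * a_kj
            A[i, k] = 0.0
            b[i] -= l * b[k]                 # b_i = b_i - l_ik * b_k
    # Back substitution on the upper triangular system.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3., -1., 1.], [-2., -5., 3.], [1., 3., -3.]])
b = np.array([2., -11., 4.])
print(gauss_solve(A, b))  # ≈ [1. 3. 2.]
```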

For large n, the number of floating-point operations required to solve a system of equations using Gauss elimination is ~2n³/3 (of that order).

When is the Gauss elimination algorithm going to fail? Recall the elimination step: for k = 1, 2, …, (n − 1),

$$l_{ik} = \frac{a_{ik}}{a_{kk}}; \qquad a_{ij} = a_{ij} - l_{ik}\, a_{kj}; \qquad b_i = b_i - l_{ik}\, b_k$$

for i = k + 1, k + 2, …, n and j = k + 1, k + 2, …, n.

The algorithm fails if a_kk is zero at any step. The a_kk's are called "pivots" (pivotal elements). If a pivot becomes zero at some step, solve the system by exchanging rows such that a_kk is non-zero.
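The row exchange can be automated as partial pivoting: at each step k, swap in the row with the largest |a_ik|, so the pivot is never zero for a non-singular matrix. A sketch (my own, illustrative; the example system is chosen so its first pivot is zero and plain elimination would divide by zero):

```python
import numpy as np

def gauss_solve_pivot(A, b):
    """Gauss elimination with partial pivoting + back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Pick the row with the largest |a_ik| as the pivot row and swap.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            l = A[i, k] / A[k, k]
            A[i, k:] -= l * A[k, k:]
            b[i] -= l * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# a_11 = 0: elimination without row exchange fails immediately here.
A = np.array([[0., 2., 1.], [1., 1., 1.], [2., 1., 3.]])
b = np.array([7., 6., 13.])
print(gauss_solve_pivot(A, b))  # ≈ [1. 2. 3.]
```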

Gauss-Jordan Elimination for the matrix equation Ax = b:

$$\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots &        & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & \vdots &        & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_i \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_i \\ \vdots \\ b_n \end{pmatrix}$$

Approach: operating on the rows of matrix A and vector b, transform the matrix A into an identity matrix. The vector b then transforms into the solution vector.

Indices: i: row index; j: column index; k: step index.

Gauss-Jordan Algorithm: transform the system toward

$$\begin{pmatrix}
1 & 0 & \cdots & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & & \vdots \\
0 & 0 & \cdots & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & \cdots & 1
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_i \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_i \\ \vdots \\ b_n \end{pmatrix}$$

For k = 1, 2, …, n:

$$a_{kj} = \frac{a_{kj}}{a_{kk}}, \quad b_k = \frac{b_k}{a_{kk}}, \qquad j = k, \ldots, n$$
$$a_{ij} = a_{ij} - a_{ik}\, a_{kj}, \quad b_i = b_i - a_{ik}\, b_k, \qquad i = 1, 2, \ldots, n\ (i \neq k);\ j = k, \ldots, n$$

The final b vector is the solution.

If we work with the augmented matrix [A | b]:

$$a_{kj} = \frac{a_{kj}}{a_{kk}}, \qquad j = k, \ldots, n+1$$
$$a_{ij} = a_{ij} - a_{ik}\, a_{kj}, \qquad i = 1, 2, \ldots, n\ (i \neq k);\ j = k, \ldots, n+1$$

The (n + 1)-th column is then the solution vector.
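A minimal sketch of the augmented-matrix form of the algorithm (illustrative, no pivoting, pivots assumed non-zero):

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Gauss-Jordan on the augmented matrix [A | b]: after the loop,
    A has become the identity and the (n+1)-th column holds x."""
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for k in range(n):
        M[k, k:] /= M[k, k]                    # normalize pivot row: a_kk -> 1
        for i in range(n):
            if i != k:
                M[i, k:] -= M[i, k] * M[k, k:] # zero column k above and below
    return M[:, n]

A = np.array([[3., -1., 1.], [-2., -5., 3.], [1., 3., -3.]])
b = np.array([2., -11., 4.])
print(gauss_jordan_solve(A, b))  # ≈ [1. 3. 2.]
```

Unlike Gauss elimination, no back-substitution pass is needed; the extra cost is the elimination of entries above the diagonal as well.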

Homework: calculate the number of floating-point operations required for the solution using the Gauss-Jordan algorithm!

When is the Gauss-Jordan algorithm going to fail?

The inverse of an (n × n) matrix can be computed using the Gauss-Jordan algorithm:
- Augment the matrix to be inverted with an identity matrix of order n; the resulting matrix is (n × 2n).
- Carry out the operations of the Gauss-Jordan algorithm.
- The original matrix becomes an identity matrix, and the augmented identity matrix becomes its inverse!

$$\left(\begin{array}{cccccc|cccccc}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 & \cdots & 0 \\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & & \vdots & \vdots & \vdots & & \vdots & & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} & 0 & 0 & \cdots & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn} & 0 & 0 & \cdots & 0 & \cdots & 1
\end{array}\right)$$

Gauss-Jordan Algorithm: for k = 1, 2, …, n

$$a_{kj} = \frac{a_{kj}}{a_{kk}}, \qquad j = k, \ldots, 2n$$
$$a_{ij} = a_{ij} - a_{ik}\, a_{kj}, \qquad i = 1, 2, \ldots, n\ (i \neq k);\ j = k, \ldots, 2n$$

Can you see why this inversion algorithm works?
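The inversion procedure above can be sketched in a few lines (illustrative, no pivoting):

```python
import numpy as np

def gj_inverse(A):
    """Invert A by running Gauss-Jordan on the n x 2n block [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        M[k, k:] /= M[k, k]
        for i in range(n):
            if i != k:
                M[i, k:] -= M[i, k] * M[k, k:]
    return M[:, n:]  # right-hand n columns now hold the inverse

A = np.array([[3., -1., 1.], [-2., -5., 3.], [1., 3., -3.]])
Ainv = gj_inverse(A)
print(np.allclose(A @ Ainv, np.eye(3)))  # True
```

It works because every row operation applied to A is simultaneously applied to I; the accumulated operations that turn A into I, applied to I, produce A⁻¹.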

Ax = b, LU decomposition: a general method.

In most engineering problems the matrix A remains constant while the vector b changes with time: the matrix A describes the system, and the vector b describes the external forcing. Examples: all network problems (pipes, electrical, canal, road, reactors, etc.), structural frames, and many financial analyses.

If all b's were available together, one could solve the system using an augmented matrix; in practice, they are not. Instead of performing ~n³ floating-point operations every time a new b becomes available, the system can be solved with ~n² floating-point operations if an LU decomposition of A is available. The LU decomposition itself requires ~n³ floating-point operations, but it is performed only once.

Consider the system Ax = b, where b changes. Perform a decomposition of the form A = LU, where L is a lower-triangular and U an upper-triangular matrix; this requires ~n³ floating-point operations. For any given b, solving Ax = LUx = b is then equivalent to solving two triangular systems:
1. Solve Ly = b by forward substitution to obtain y (~n² operations).
2. Solve Ux = y by back substitution to obtain x (~n² operations).

This is the most frequently used method for engineering applications. We will derive the LU decomposition from Gauss elimination.
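The two triangular solves can be sketched as follows (my own illustration). The L and U factors used here are the exact-fraction versions of the worked Gauss-elimination example that follows, so the decomposition is assumed to be already available:

```python
import numpy as np

def forward_sub(L, b):
    """Solve Ly = b for lower-triangular L: ~n^2 operations."""
    n = len(b)
    y = np.empty(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve Ux = y for upper-triangular U: ~n^2 operations."""
    n = len(y)
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[1., 0., 0.], [-2/3, 1., 0.], [1/3, -10/17, 1.]])
U = np.array([[3., -1., 1.], [0., -17/3, 11/3], [0., 0., -20/17]])
b = np.array([2., -11., 4.])
x = back_sub(U, forward_sub(L, b))  # two O(n^2) solves per right-hand side
print(x)  # ≈ [1. 3. 2.]
```

Each new b costs only the two substitutions; the expensive factorization is reused.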

An example of Gauss elimination (four decimal places shown):

$$\begin{pmatrix} 3 & -1 & 1 \\ -2 & -5 & 3 \\ 1 & 3 & -3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
\begin{pmatrix} 2 \\ -11 \\ 4 \end{pmatrix}$$

Step 1, with l21 = −2/3 = −0.6667 and l31 = 1/3 = 0.3333:

$$\begin{pmatrix} 3 & -1 & 1 \\ 0 & -5.6667 & 3.6667 \\ 0 & 3.3333 & -3.3333 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
\begin{pmatrix} 2 \\ -9.6667 \\ 3.3333 \end{pmatrix}$$

Step 2, with l32 = 3.3333/(−5.6667) = −0.5882:

$$\begin{pmatrix} 3 & -1 & 1 \\ 0 & -5.6667 & 3.6667 \\ 0 & 0 & -1.1765 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
\begin{pmatrix} 2 \\ -9.6667 \\ -2.3529 \end{pmatrix}$$

At this point, you may solve the system using back substitution to obtain x1 = 1, x2 = 3 and x3 = 2. Check the following matrix identity:

$$\begin{pmatrix} 1 & 0 & 0 \\ -0.6667 & 1 & 0 \\ 0.3333 & -0.5882 & 1 \end{pmatrix}
\begin{pmatrix} 3 & -1 & 1 \\ 0 & -5.6667 & 3.6667 \\ 0 & 0 & -1.1765 \end{pmatrix} =
\begin{pmatrix} 3 & -1 & 1 \\ -2 & -5 & 3 \\ 1 & 3 & -3 \end{pmatrix}$$

One can derive the general algorithm of LU decomposition by carefully studying Gauss elimination!
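The identity can be checked numerically; the small discrepancies come purely from the four-decimal rounding of the multipliers:

```python
import numpy as np

# Unit-lower-triangular matrix of multipliers from the elimination steps,
# and the final upper-triangular matrix, as printed on the slide.
L = np.array([[1.,      0.,      0.],
              [-0.6667, 1.,      0.],
              [0.3333, -0.5882,  1.]])
U = np.array([[3., -1.,      1.],
              [0., -5.6667,  3.6667],
              [0.,  0.,     -1.1765]])
A = np.array([[3., -1., 1.], [-2., -5., 3.], [1., 3., -3.]])

print(np.round(L @ U, 3))                 # reproduces A to rounding error
print(np.allclose(L @ U, A, atol=1e-3))   # True
```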

Matrix after the k-th step (annotations from the slide figure): below the diagonal, the entries of column 1 changed once and became zero, those of column 2 changed twice and became zero, …, those of column k changed k times and became zero. On and above the diagonal, the entries of row 1 changed 0 times, those of row 2 changed once, those of row 3 changed twice, …, those of row k changed (k − 1) times.

Gauss elimination steps (example 4×4 matrix):

$$\begin{pmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & a_{14}^{(1)} \\
a_{21}^{(1)} & a_{22}^{(1)} & a_{23}^{(1)} & a_{24}^{(1)} \\
a_{31}^{(1)} & a_{32}^{(1)} & a_{33}^{(1)} & a_{34}^{(1)} \\
a_{41}^{(1)} & a_{42}^{(1)} & a_{43}^{(1)} & a_{44}^{(1)}
\end{pmatrix}
\xrightarrow{\text{Step 1}}
\begin{pmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & a_{14}^{(1)} \\
0 & a_{22}^{(2)} & a_{23}^{(2)} & a_{24}^{(2)} \\
0 & a_{32}^{(2)} & a_{33}^{(2)} & a_{34}^{(2)} \\
0 & a_{42}^{(2)} & a_{43}^{(2)} & a_{44}^{(2)}
\end{pmatrix}$$

$$\xrightarrow{\text{Step 2}}
\begin{pmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & a_{14}^{(1)} \\
0 & a_{22}^{(2)} & a_{23}^{(2)} & a_{24}^{(2)} \\
0 & 0 & a_{33}^{(3)} & a_{34}^{(3)} \\
0 & 0 & a_{43}^{(3)} & a_{44}^{(3)}
\end{pmatrix}
\xrightarrow{\text{Step 3}}
\begin{pmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & a_{14}^{(1)} \\
0 & a_{22}^{(2)} & a_{23}^{(2)} & a_{24}^{(2)} \\
0 & 0 & a_{33}^{(3)} & a_{34}^{(3)} \\
0 & 0 & 0 & a_{44}^{(4)}
\end{pmatrix}$$

(The superscript (k) marks the value an entry holds at the start of step k.)

For elements on and above the diagonal (i ≤ j): a_ij is actively modified during the first (i − 1) steps and remains constant for the remaining (n − i) steps.

For elements below the diagonal (j < i): a_ij is actively modified during the first j steps and remains zero for the remaining (n − j) steps.

Combined statement: any element a_ij is actively modified for the first p steps, where p = min{(i − 1), j}.

Any element a_ij is actively modified for p steps, where p = min{(i − 1), j}. The modification formula at step k is

$$a_{ij}^{(k+1)} = a_{ij}^{(k)} - l_{ik}\, a_{kj}^{(k)}$$

Summing over the p steps (the left-hand side telescopes):

$$\sum_{k=1}^{p} \left( a_{ij}^{(k+1)} - a_{ij}^{(k)} \right) = -\sum_{k=1}^{p} l_{ik}\, a_{kj}^{(k)}, \qquad p = \min\{(i-1),\, j\}$$

For i ≤ j, p = i − 1:

$$a_{ij}^{(i)} - a_{ij}^{(1)} = -\sum_{k=1}^{i-1} l_{ik}\, a_{kj}^{(k)}
\quad\Rightarrow\quad
a_{ij}^{(1)} = a_{ij} = a_{ij}^{(i)} + \sum_{k=1}^{i-1} l_{ik}\, a_{kj}^{(k)}$$

Define l_ii = 1 (for every i). Then

$$a_{ij} = \sum_{k=1}^{i} l_{ik}\, a_{kj}^{(k)}$$

For i ≤ j, p = i − 1:

$$a_{ij} = \sum_{k=1}^{i} l_{ik}\, a_{kj}^{(k)}$$

For j < i, p = j:

$$a_{ij}^{(j+1)} - a_{ij}^{(1)} = -\sum_{k=1}^{j} l_{ik}\, a_{kj}^{(k)}$$

But a_ij^(j+1) = 0 (the element has been eliminated), so

$$a_{ij}^{(1)} = a_{ij} = \sum_{k=1}^{j} l_{ik}\, a_{kj}^{(k)}$$

Combining the two statements:

$$a_{ij} = \sum_{k=1}^{p} l_{ik}\, a_{kj}^{(k)}, \qquad p = \min\{i, j\}$$

$$a_{ij} = \sum_{k=1}^{p} l_{ik}\, a_{kj}^{(k)}, \qquad p = \min\{i, j\}$$

Define the elements of matrix L as the multiplication factors l_ik (with l_ii = 1), and the elements of matrix U as u_kj = a_kj^(k). Then

$$a_{ij} = \sum_{k=1}^{p} l_{ik}\, u_{kj}, \qquad p = \min\{i, j\}$$

Since l_ik = 0 for k > i and u_kj = 0 for k > j, extending the sum to k = n adds only zero terms, so this is exactly the matrix multiplication A = LU. Can you see it?

$$a_{ij} = \sum_{k=1}^{p} l_{ik}\, u_{kj}, \qquad p = \min\{i, j\}$$

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
=
\begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}$$

$$\begin{aligned}
a_{11} &= l_{11} u_{11} & a_{12} &= l_{11} u_{12} & a_{13} &= l_{11} u_{13} \\
a_{21} &= l_{21} u_{11} & a_{22} &= l_{21} u_{12} + l_{22} u_{22} & a_{23} &= l_{21} u_{13} + l_{22} u_{23} \\
a_{31} &= l_{31} u_{11} & a_{32} &= l_{31} u_{12} + l_{32} u_{22} & a_{33} &= l_{31} u_{13} + l_{32} u_{23} + l_{33} u_{33}
\end{aligned}$$

12 unknowns and 9 equations: 3 free entries! In general, n² equations and n² + n unknowns: n free entries, conventionally fixed by setting the diagonal of L or of U to 1.

Doolittle's Algorithm: define l_ii = 1 (unit lower-triangular L).

The U matrix (i ≤ j): from

$$a_{ij} = a_{ij}^{(i)} + \sum_{k=1}^{i-1} l_{ik}\, a_{kj}^{(k)} = u_{ij} + \sum_{k=1}^{i-1} l_{ik}\, u_{kj}$$

$$u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik}\, u_{kj}, \qquad i = 1, 2, \ldots, n;\ j = i, i+1, \ldots, n$$

The L matrix (j < i): from

$$a_{ij} = \sum_{k=1}^{j} l_{ik}\, a_{kj}^{(k)} = \sum_{k=1}^{j} l_{ik}\, u_{kj}$$

$$l_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} l_{ik}\, u_{kj}}{u_{jj}}, \qquad j = 1, 2, \ldots, n;\ i = j+1, \ldots, n$$
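A direct implementation sketch of these formulas (illustrative, no pivoting), alternating rows of U and columns of L so that every term on the right-hand side is already known:

```python
import numpy as np

def doolittle(A):
    """Doolittle LU: L unit lower-triangular, U upper-triangular,
    computed directly from a_ij = sum_k l_ik u_kj."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):        # row i of U: u_ij = a_ij - sum l_ik u_kj
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for r in range(i + 1, n):    # column i of L: divide by pivot u_ii
            L[r, i] = (A[r, i] - L[r, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[3., -1., 1.], [-2., -5., 3.], [1., 3., -3.]])
L, U = doolittle(A)
print(np.allclose(L @ U, A))  # True
```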

Crout's Algorithm: starting again from

$$a_{ij} = \sum_{k=1}^{p} l_{ik}\, u_{kj}, \qquad p = \min\{i, j\}$$

define u_ii = 1 (unit upper-triangular U).

The L matrix (j ≤ i):

$$a_{ij} = \sum_{k=1}^{j} l_{ik}\, u_{kj} \quad\Rightarrow\quad
l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}\, u_{kj}, \qquad j = 1, 2, \ldots, n;\ i = j, j+1, \ldots, n$$

The U matrix (i < j):

$$a_{ij} = \sum_{k=1}^{i} l_{ik}\, u_{kj} \quad\Rightarrow\quad
u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} l_{ik}\, u_{kj}}{l_{ii}}, \qquad i = 1, 2, \ldots, n;\ j = i+1, \ldots, n$$
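The corresponding Crout sketch (illustrative, no pivoting), this time alternating columns of L and rows of U:

```python
import numpy as np

def crout(A):
    """Crout LU: L lower-triangular, U unit upper-triangular."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):        # column j of L: l_ij = a_ij - sum l_ik u_kj
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for c in range(j + 1, n):    # row j of U: divide by pivot l_jj
            U[j, c] = (A[j, c] - L[j, :j] @ U[:j, c]) / L[j, j]
    return L, U

A = np.array([[3., -1., 1.], [-2., -5., 3.], [1., 3., -3.]])
L, U = crout(A)
print(np.allclose(L @ U, A))  # True
```

The only difference from Doolittle is which factor carries the unit diagonal.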

Doolittle's Algorithm (3×3 example): l11 = l22 = l33 = 1.

$$\begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}$$

The U matrix (i ≤ j), u_ij = a_ij − Σ_{k=1}^{i−1} l_ik u_kj, i = 1, 2, …, n; j = i, i+1, …, n:
- i = 1, j = 1, 2, 3: u11 = a11, u12 = a12, u13 = a13
- i = 2, j = 2, 3: u22 = a22 − l21 u12, u23 = a23 − l21 u13
- i = 3, j = 3: u33 = a33 − l31 u13 − l32 u23

The L matrix (j < i), l_ij = (a_ij − Σ_{k=1}^{j−1} l_ik u_kj)/u_jj, j = 1, 2, …, n − 1; i = j+1, …, n:
- j = 1, i = 2, 3: l21 = a21/u11, l31 = a31/u11
- j = 2, i = 3: l32 = (a32 − l31 u12)/u22

The numbered annotations on the slide mark the computation order: U row 1 (step 1), L column 1 (step 2), U row 2 (step 3), L column 2 (step 4), U row 3 (step 5), so every right-hand-side quantity is available when needed.

Crout's Algorithm (3×3 example): u11 = u22 = u33 = 1.

$$\begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}
\begin{pmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{pmatrix}$$

The L matrix (j ≤ i): l_ij = a_ij − Σ_{k=1}^{j−1} l_ik u_kj, j = 1, 2, …, n; i = j, j+1, …, n

The U matrix (i < j): u_ij = (a_ij − Σ_{k=1}^{i−1} l_ik u_kj)/l_ii, i = 1, 2, …, n − 1; j = i+1, …, n

Verify this computation sequence: L column 1 (step 1), U row 1 (step 2), L column 2 (step 3), U row 2 (step 4), L column 3 (step 5).