MATLAB EXAMPLES Matrix Solution Methods


MATLAB EXAMPLES: Matrix Solution Methods
58:110 Computer-Aided Engineering, Spring 2005
Department of Mechanical and Industrial Engineering, January 2005

Some useful functions
det(A)       Determinant
lu(A)        LU decomposition
cond(A)      Matrix condition number
inv(A)       Matrix inverse
rank(A)      Matrix rank
diag, trace  Diagonal elements as a vector; sum of the diagonal elements
tril, triu   Lower and upper triangular part of a matrix
cgs, pcg     (Preconditioned) conjugate gradients iterative linear equation solvers
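As a quick illustration (a sketch, not part of the original slides), these functions can be tried on the 4x4 matrix A used in the examples that follow:

>> A = [7 3 -1 2; 3 8 1 -4; -1 1 4 -1; 2 -4 -1 6];
>> det(A)      % determinant
>> rank(A)     % rank; equals 4 here, so A is nonsingular
>> trace(A)    % sum of the diagonal elements (7+8+4+6 = 25)
>> diag(A)     % diagonal elements as a column vector
>> tril(A)     % lower triangular part (the q matrix used later by Gauss-Seidel)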

LU decomposition
Here is an example of using LU decomposition for a 4x4 matrix A:

>> A = [7 3 -1 2; 3 8 1 -4; -1 1 4 -1; 2 -4 -1 6]
A =
     7     3    -1     2
     3     8     1    -4
    -1     1     4    -1
     2    -4    -1     6

>> [l u p] = lu(A)
l =
    1.0000         0         0         0
    0.4286    1.0000         0         0
   -0.1429    0.2128    1.0000         0
    0.2857   -0.7234    0.0898    1.0000
u =
    7.0000    3.0000   -1.0000    2.0000
         0    6.7143    1.4286   -4.8571
         0         0    3.5532    0.3191
         0         0         0    1.8862
p =
     1     0     0     0
     0     1     0     0
     0     0     1     0
     0     0     0     1

Note: p is a permutation matrix recording the row interchanges made by the pivoting strategy; here no rows were exchanged, so p is the identity. If the matrix is not diagonally dominant, p may not be an identity matrix.
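Once the factors are computed, they can be used to solve A x = b with one forward and one backward substitution (a minimal sketch, not from the original slide; it assumes the right-hand side b = [-1; 0; -3; 1] used in the iterative examples below):

>> b = [-1; 0; -3; 1];
>> [l,u,p] = lu(A);   % p*A = l*u
>> y = l \ (p*b);     % forward substitution: solve l*y = p*b
>> x = u \ y          % backward substitution: solve u*x = y; gives the exact solution (-1, 1, -1, 1)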

Condition number
The condition number of A is defined (in the 2-norm) as cond(A) = ||A|| * ||inv(A)||. To calculate the condition number of matrix A with the MATLAB built-in function:
>> cond(A)
ans =
   13.7473
Or, directly from the definition, as the product of the 2-norm of A and the 2-norm of its inverse:
>> norm(A,2)*norm(inv(A),2)
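The condition number bounds how much a relative change in b can be amplified in the solution x: norm(dx)/norm(x) <= cond(A) * norm(db)/norm(b). A small perturbation experiment (a sketch with a hypothetical perturbation db, not part of the original slide) illustrates this:

>> b  = [-1; 0; -3; 1];
>> x  = A \ b;
>> db = 1e-6*[1; 0; 0; 0];          % hypothetical small perturbation of the right-hand side
>> xp = A \ (b + db);
>> norm(xp - x)/norm(x)             % actual relative change in the solution
>> cond(A)*norm(db)/norm(b)         % upper bound given by the condition number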

Iterative methods – Jacobi method
The following M-file shows how to use the Jacobi method in MATLAB (jocobi_example.m):

% Iterative solutions of linear equations: (1) Jacobi method
% Linear system: A x = b
% Coefficient matrix A, right-hand side vector b
A=[7 3 -1 2; 3 8 1 -4; -1 1 4 -1; 2 -4 -1 6];
b=[-1;0;-3;1];
% Set initial value of x to the zero vector
x0=zeros(1,4);
% Set maximum iteration number k_max
k_max=1000;
% Set the convergence control parameter erp
erp=0.0001;
% Show the q matrix
q=diag(diag(A))
% loop for iterations
for k=1:k_max
    for i=1:4
        s=0.0;
        for j=1:4
            if j==i
                continue
            else
                s=s+A(i,j)*x0(j);
            end
        end
        x1(i)=(b(i)-s)/A(i,i);
    end
    if norm(x1-x0)<erp
        break
    end
    x0=x1;
end
% show the final solution
x=x1
% show the total iteration number
n_iteration=k
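The q matrix printed by the script is the splitting matrix of the iteration: with q = diag(diag(A)), each Jacobi sweep solves q*x1 = b - (A - q)*x0. A compact matrix-form version of the same loop (a sketch, not in the original M-file; it reuses A, b, k_max and erp from the script above) is:

q  = diag(diag(A));                  % Jacobi splitting matrix
x0 = zeros(4,1);                     % column start vector
for k = 1:k_max
    x1 = q \ (b - (A - q)*x0);       % one Jacobi sweep in matrix form
    if norm(x1 - x0) < erp
        break
    end
    x0 = x1;
end
x = x1

The Gauss-Seidel and SOR methods on the following slides use exactly the same loop with a different splitting matrix q.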

Iterative methods – Jacobi method
Running the M-file in the command window:
>> jocobi_r_ex
q =
     7     0     0     0
     0     8     0     0
     0     0     4     0
     0     0     0     6
x =
   -0.9996    0.9996   -0.9999    0.9996
n_iteration =
    60

Iterative methods – Gauss-Seidel method
In the M-file for the Gauss-Seidel method, the only difference is the q matrix, which is replaced by q = tril(A), and the code inside the iteration loop, which changes as follows:

for i=1:4
    s1=0.0; s2=0.0;
    if i>1
        for j=1:i-1
            s1=s1+A(i,j)*x1(j);    % components already updated in this sweep
        end
    end
    for j=i+1:4
        s2=s2+A(i,j)*x0(j);        % components from the previous sweep
    end
    x1(i)=(b(i)-s1-s2)/A(i,i);
end

Running the M-file in the command window:
>> gauss_example
q =
     7     0     0     0
     3     8     0     0
    -1     1     4     0
     2    -4    -1     6
x =
   -0.9996    0.9997   -0.9999    0.9997
n_iteration =
    10
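In matrix form, the Gauss-Seidel sweep is the same splitting iteration as in the Jacobi sketch above, only with q = tril(A) (a brief sketch, reusing A, b and a column start vector x0 as in that sketch):

q  = tril(A);                        % lower triangular part of A, including the diagonal
x1 = q \ (b - (A - q)*x0);           % one Gauss-Seidel sweep; A - q is the strictly upper part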

Iterative methods – SOR method
Again, in the M-file for the SOR method, the only difference is the q matrix, which is replaced by:

q1=tril(A)-diag(diag(A));
q2=diag(diag(A))/1.4;
q=q1+q2;

Note: the relaxation factor r is set to 1.4 for this case. The code inside the iteration loop becomes:

for i=1:4
    s1=0.0; s2=0.0;
    if i>1
        for j=1:i-1
            s1=s1+A(i,j)*x1(j);
        end
    end
    for j=i+1:4
        s2=s2+A(i,j)*x0(j);
    end
    x1(i)=r*(b(i)-s1-s2)/A(i,i)+(1.0-r)*x0(i);
end

Run this M-file:
>> SOR_example
q =
    5.0000         0         0         0
    3.0000    5.7143         0         0
   -1.0000    1.0000    2.8571         0
    2.0000   -4.0000   -1.0000    4.2857
r =
    1.4000
x =
   -0.9996    0.9997   -0.9999    0.9997
n_iteration =
    12
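The relaxation factor 1.4 is one possible choice; a quick way to compare candidate values of r is to look at the spectral radius of the SOR iteration matrix I - q\A, which governs the asymptotic convergence rate (a sketch, not part of the original slides; a smaller value means faster convergence):

for r = 1.0:0.1:1.8
    q   = tril(A) - diag(diag(A)) + diag(diag(A))/r;   % SOR splitting matrix for this r
    rho = max(abs(eig(eye(4) - q\A)));                  % spectral radius of the iteration matrix
    fprintf('r = %.1f   spectral radius = %.4f\n', r, rho);
end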

Iterative methods – Conjugate gradient method
Use cgs or pcg to solve the same linear system; here the tolerance is set to 1.0e-5 (MATLAB's default is 1.0e-6):

>> x=cgs(A,b,0.00001)
cgs converged at iteration 4 to a solution with relative residual 1.2e-015
x =
   -1.0000
    1.0000
   -1.0000
    1.0000

>> x=pcg(A,b,0.00001)
pcg converged at iteration 4 to a solution with relative residual 3.1e-016
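Both solvers can also return diagnostic outputs instead of printing the message; for example, requesting the convergence flag, relative residual, and iteration count from cgs (a sketch, not from the original slide):

>> [x, flag, relres, iter] = cgs(A, b, 0.00001);
>> relres        % relative residual norm(b - A*x)/norm(b), as reported in the message
>> iter          % number of iterations performed

Note that pcg requires A to be symmetric positive definite, while cgs only requires a square, nonsingular system.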

Iterative methods – Summary

Method                       Solution                               Total iteration number
Jacobi method                (-0.9996, 0.9996, -0.9999, 0.9996)     60
Gauss-Seidel method          (-0.9996, 0.9997, -0.9999, 0.9997)     10
SOR method                   (-1.0000, 0.9999, -1.0000, 1.0000)     12
Conjugate gradient method    (-1.0000, 1.0000, -1.0000, 1.0000)      4

Note: the exact solution is (-1, 1, -1, 1).
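The iteration counts reflect the spectral radius of each method's iteration matrix I - q\A (the smaller it is, the faster the asymptotic convergence), while conjugate-gradient-type methods terminate in at most n = 4 steps in exact arithmetic. A short comparison sketch (not part of the original slides):

qj = diag(diag(A));                                   % Jacobi splitting
qg = tril(A);                                         % Gauss-Seidel splitting
qs = tril(A) - diag(diag(A)) + diag(diag(A))/1.4;     % SOR splitting with r = 1.4
rho_jacobi       = max(abs(eig(eye(4) - qj\A)))
rho_gauss_seidel = max(abs(eig(eye(4) - qg\A)))
rho_sor          = max(abs(eig(eye(4) - qs\A)))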