Basics of Matrix Algebra: The roots are in the tricks and tribulations of solving linear equations. Nethra Sambamoorthi, PhD

Ok, why do we need matrix algebra? Real-life problems (opportunities) almost always involve more than one or two variables – opportunities are multivariate in nature. Without extracting the multivariate structure in the form of matrices, the structural simplification of representation, solvability, and systematic evaluation of solutions is not possible. On closer look, matrices turn out to have their own set of nice properties, giving rise to a form of algebra that simplifies our understanding and extends the science of matrices. The word “algebra” means a system of working with letters and other general symbols to represent numbers and quantities in formulae and equations.

The story of lost receipts from a store… Solving equations to figure out the unit prices of a mango and a banana. Let x = unit price of a mango and y = unit price of a banana: 2x + 3y = 9, 5x + 1y = 16. The receipts that Ren brought back from the store are lost, and Mom does not know the price of a mango or a banana. Ren, being mischievous, tells Mom that the price of a mango and a banana can be figured out from the equations above. Jen, Ren's younger sister, solves the equations and finds the price of a mango and the price of a banana, so that Mom can give Ren the right amount of money to buy 25 mangoes and 50 bananas. What amount does Ren's mom have to give Ren? Solve this system of two equations in two unknowns and work out the solution. This approach will not work if there are many (say 20) items on the billing sheets across multiple days, with different items missing on different days, even if prices stay the same across days, simply because of the tedium and the difficulty of maintaining accuracy. The question is: are there systematic steps for solving such equations that simplify the method, irrespective of the number of equations and the number of unknowns? There is a way. For learning purposes, we will define simpler problems and solve them.
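For concreteness, here is one way to work the two equations by hand, using substitution; the last line answers the 25-mango, 50-banana question in the story:

$$
\begin{aligned}
2x + 3y &= 9, \qquad 5x + y = 16 \;\Rightarrow\; y = 16 - 5x \\
2x + 3(16 - 5x) &= 9 \;\Rightarrow\; 48 - 13x = 9 \;\Rightarrow\; x = 3,\; y = 1 \\
25x + 50y &= 25(3) + 50(1) = 125
\end{aligned}
$$

So a mango costs 3, a banana costs 1, and Ren's mom has to give Ren 125.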

Rewriting the equations in matrix form. This representation of a rectangular system of numbers is called a matrix; a single column of numbers (such as the unknowns or the totals) is called a vector.
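Written out, the matrix form of the two mango/banana equations from the story is:

$$
\begin{pmatrix} 2 & 3 \\ 5 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} 9 \\ 16 \end{pmatrix}
$$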

An alternative way of writing the equations is…
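The transcript does not reproduce the slide image; one common alternative form, possibly the one the slide shows (this is an assumption), is the “column picture”, which writes the right-hand side as a combination of the coefficient columns:

$$
x \begin{pmatrix} 2 \\ 5 \end{pmatrix} + y \begin{pmatrix} 3 \\ 1 \end{pmatrix} = \begin{pmatrix} 9 \\ 16 \end{pmatrix}
$$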

The Key Question then is …

Verifying in Excel that the answers are the same… Excel comes with formulas such as MINVERSE, MMULT, and TRANSPOSE.
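The same check can be scripted. Below is a minimal NumPy sketch that mirrors the Excel workflow (MINVERSE corresponds to np.linalg.inv, MMULT to the @ operator, TRANSPOSE to .T); the variable names are illustrative only:

```python
import numpy as np

U = np.array([[2.0, 3.0],   # units of mangoes and bananas on each receipt
              [5.0, 1.0]])
t = np.array([9.0, 16.0])   # totals on the two receipts

U_inv = np.linalg.inv(U)    # Excel: MINVERSE
p = U_inv @ t               # Excel: MMULT of MINVERSE(U) and the totals
print(p)                    # [3. 1.]  -> mango costs 3, banana costs 1
print(U.T)                  # Excel: TRANSPOSE
```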

Matrices, vectors, scalars… It turns out that just by studying the properties of numerical structures such as matrices, vectors, and scalars, we can create a systematic way of solving equations, however big they are and however inconvenient the numbers are in terms of the number of decimals they carry… Mathematically this is written as (units matrix)(price vector) = (total price vector), or Up = t. If there is only one variable, say mangoes alone, then p = t/U (agree…?), or p = U^{-1} t (agree…?). This is easy to understand, interpret, and apply. But here we have a rectangular collection of numbers, with multiple units of multiple items; so what does it mean to say U^{-1}? How do I calculate it? What are the properties of such a square, or possibly rectangular, collection of numbers? The subject that studies the properties and algorithms of such rectangular collections of numbers is called matrix algebra. We will only go through the important kinds of matrices that help solve our problems, and as we go deeper into many of the analytics solutions we will bring more matrix algebra into the discussion. U U^{-1} = I = U^{-1} U; this defines the identity (unit) matrix I. If U_1 U_2 p = t, then the order in which the inverses are applied is reversed: p = U_2^{-1} U_1^{-1} t. This defines the order of inversion.
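A quick numerical sanity check of these two facts, sketched in NumPy (the matrices here are arbitrary invertible examples, not taken from the slides):

```python
import numpy as np

U1 = np.array([[2.0, 3.0], [5.0, 1.0]])
U2 = np.array([[1.0, 4.0], [0.0, 2.0]])

# U @ inv(U) and inv(U) @ U both give the identity matrix
print(np.allclose(U1 @ np.linalg.inv(U1), np.eye(2)))   # True
print(np.allclose(np.linalg.inv(U1) @ U1, np.eye(2)))   # True

# The inverse of a product reverses the order: (U1 U2)^-1 = U2^-1 U1^-1
lhs = np.linalg.inv(U1 @ U2)
rhs = np.linalg.inv(U2) @ np.linalg.inv(U1)
print(np.allclose(lhs, rhs))                             # True
```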

Properties of Row Echelon (Elementary) Transformations: E E^{-1} = E^{-1} E = I; (E^{-1})^T = (E^T)^{-1}; (E_1 E_2)^T = E_2^T E_1^T.

Mathematically, we want to find U^{-1} for a square matrix U; U^{-1} is called the inverse matrix of U. In the numerical example shown earlier, we are clear that U^{-1} U = I; does this mean U U^{-1} = I (the identity matrix) as well? The numerical calculation verifies it, but it is also taken as a property in the definition of the inverse of a matrix. What this property implies is that, for the application of row echelon transformations, it does not matter whether we place the augmented identity matrix on the right of U or on the left of U. (U^T)^{-1} U^T = (U U^{-1})^T = I: we want this to happen, so we define the property (AB)^T = B^T A^T. This simply means the order of the row echelon transformations can be reversed and we will still get the same inverse: (U^T)^{-1} = (U^{-1})^T. We want this to happen, so we also define this as a property. If U is diagonal (with nonzero diagonal entries), then U U^{-1} = I = U^{-1} U, where U^{-1} is simply the diagonal matrix of reciprocals.
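Again, a quick NumPy check of these transpose and inverse facts (the matrices are arbitrary invertible examples chosen only for illustration):

```python
import numpy as np

A = np.array([[2.0, 3.0], [5.0, 1.0]])
B = np.array([[1.0, 4.0], [0.0, 2.0]])

# (AB)^T equals B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))                       # True

# The inverse of the transpose equals the transpose of the inverse
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))     # True

# For a diagonal matrix, the inverse is just the reciprocals on the diagonal
D = np.diag([2.0, 4.0])
print(np.allclose(np.linalg.inv(D), np.diag([0.5, 0.25])))     # True
```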

A note on the humble beginnings of matrices: though solving systems of equations was the starting point, matrix algebra is now applied to all kinds of sophisticated and huge problems: complex non-linear and less-than-full-rank systems of equations, linear and non-linear optimization, sparse data analysis, least squares, data and image compression, computer vision, dimensionality reduction, dynamical systems, and so on.

So, how to compute such an inverse matrix…? (That is, how to bring about the identity matrix times the variable vector on the left side of the equation?) Let us get back to the equations. The way we solve them becomes what is called the “sweeping method”, a method where we sweep the coefficient of each “diagonal” variable to 1 and the other coefficients to zero. See three specific examples of this row echelon sweeping method used to solve the problem on Khan Academy, and think about why Sal (the Khan Academy visionary) provides three different examples: echelon-form-1, echelon-form-2, echelon-form-3. Big question: can you guess how to get inverses using the same method, but putting something else in the place of the cost vector that is appended at the end in those examples…?!! Remember, inverses exist only for square matrices.

Ok,…, here is the way
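The slide content itself is not in the transcript; as a sketch of the idea it points to, the trick is to augment the square matrix U with the identity matrix I instead of the cost vector and sweep U to I; whatever ends up where I started is U^{-1}. A minimal Gauss-Jordan sketch in plain NumPy (no pivoting refinements, function name is illustrative):

```python
import numpy as np

def gauss_jordan_inverse(U):
    """Invert a square matrix by sweeping [U | I] to [I | U^-1]."""
    n = U.shape[0]
    aug = np.hstack([U.astype(float), np.eye(n)])     # augment with the identity
    for i in range(n):
        aug[i] = aug[i] / aug[i, i]                    # sweep pivot to 1 (assumes nonzero pivot)
        for j in range(n):
            if j != i:
                aug[j] = aug[j] - aug[j, i] * aug[i]   # sweep the other rows' entries to 0
    return aug[:, n:]                                  # right half now holds U^-1

U = np.array([[2.0, 3.0], [5.0, 1.0]])
print(gauss_jordan_inverse(U))
print(np.linalg.inv(U))   # same answer, for comparison
```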

Once you get these foundation ideas well ingrained… It becomes surprisingly simple to understand why the linear programming simplex solution approach works (remember, it is always important to know both how and why). What is meant by eigenvectors and eigenvalues, and why do we need them? What is meant by the decomposition of a square matrix, as well as of a general matrix, and how are these related to eigenvectors and eigenvalues? Are there eigenvectors for non-square matrices, and how do we compute them? What is meant by the singular value decomposition, and why does it matter? What is meant by the g-inverse (generalized inverse) of a matrix, and why is it needed? And, of course, most importantly, how do we solve the least squares problem, which is the reason we started down this path? All good stuff; suddenly you feel that you know how to easily work with these strange-looking items called matrices, wherever they penetrate…, and they penetrate all areas of applied mathematics.