Matrix Algebra (and why it’s important!)


Matrix Algebra (and why it’s important!) Methods for Dummies, FIL, October 2007. Steve Fleming & Verity Leeson

Sources and further information

Jon Machtynger & Jen Marchant’s slides!
Human Brain Function textbook (for the GLM)
SPM course: http://www.fil.ion.ucl.ac.uk/spm/course/
Web guides:
http://mathworld.wolfram.com/LinearAlgebra.html
http://www.maths.surrey.ac.uk/explore/emmaspages/option1.html
http://www.inf.ed.ac.uk/teaching/courses/fmcs1/ (Formal Modelling in Cognitive Science course)
http://www.wikipedia.org

Scalars, vectors and matrices

Scalar: a variable described by a single number, e.g. image intensity (a pixel value).

Vector: a variable described by magnitude and direction, e.g. image intensity at a particular time. Linear algebra had its beginnings in the study of vectors in Cartesian 2-space and 3-space: a vector there is a directed line segment, characterised by both its magnitude (represented by its length) and its direction. Vectors can be used to represent physical entities such as forces, and they can be added to each other and multiplied by scalars, forming the first example of a real vector space. Modern linear algebra has been extended to spaces of arbitrary or infinite dimension; a four-dimensional vector, for instance, can describe an image at a time point in 3D space.

Matrix: a rectangular array of numbers defined by its number of rows and columns, e.g. square (3 x 3) or rectangular (3 x 2). The element d_rc sits in the r-th row and c-th column: row before column ("Roman Catholic" is a handy mnemonic).

Matrices in Matlab

Vector formation: [1 2 3]
Matrix formation: ‘;’ is used to signal the end of a row, e.g. X = [1 2 3; 4 5 6; 7 8 9]
‘:’ is used to signify all rows or all columns.
Subscripting: each element of a matrix can be addressed with a pair of numbers, row first, column second ("Roman Catholic"), e.g. X(2,3) = 6; X(3, :) returns the whole third row; X([2 3], 2) returns the second and third elements of the second column.
“Special” matrix commands: zeros(3,1) gives a 3 x 1 matrix of zeros, ones(2) a 2 x 2 matrix of ones, and magic(3) a 3 x 3 magic square, where rows, columns and diagonals all sum to the same total (a magic square has to be square). More to come…
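For readers working outside MATLAB, the same formation and subscripting operations can be sketched with NumPy (an assumption of this aside, not part of the slides; note that NumPy indexes from 0, so MATLAB's X(2,3) becomes X[1, 2]):

```python
import numpy as np

X = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# MATLAB's X(2,3) is X[1, 2] in 0-based indexing
assert X[1, 2] == 6

row3 = X[2, :]         # third row, like X(3, :)
col_part = X[1:3, 1]   # rows 2-3 of column 2, like X([2 3], 2)

Z = np.zeros((3, 1))   # like zeros(3,1)
O = np.ones((2, 2))    # like ones(2)
```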

Matrix addition

Addition is defined for matrices of the same size and works element by element.
Commutative: A + B = B + A
Associative: (A + B) + C = A + (B + C)
Subtraction can be considered as the addition of a negative matrix.
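A quick NumPy check of these properties (illustrative values, not from the slides):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[1, 0], [0, 1]])

# element-wise addition is commutative and associative
assert np.array_equal(A + B, B + A)
assert np.array_equal((A + B) + C, A + (B + C))

# subtraction = addition of the negative matrix
assert np.array_equal(A - B, A + (-B))
```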

Matrix multiplication

Scalar multiplication: multiplying a matrix by a scalar multiplies every element by that scalar.

Matrix multiplication rule: when A is an m x n matrix and B is a k x l matrix, the product AB is only defined if n = k; the result is an m x l matrix. For example, with a 4 x 3 matrix A and a 3 x 2 matrix B:

    a11 a12 a13       b11 b12
    a21 a22 a23   X   b21 b22
    a31 a32 a33       b31 b32
    a41 a42 a43

the inner dimensions (3 and 3) match, so AB exists and is a 4 x 2 matrix.

Multiplication method

Each element of the product is a sum over the products of the corresponding row and column: c_rc = sum over k of (a_rk x b_kc), i.e. multiply the r-th row of A element by element with the c-th column of B and add up the products.

Matlab does all this for you! Simply type: C = A * B
N.B. If you want element-wise multiplication instead, use: A .* B
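The shape rule and the row-times-column sum can be verified in NumPy (values chosen here for illustration; `@` plays the role of MATLAB's `*`, and `*` the role of `.*`):

```python
import numpy as np

A = np.arange(1, 13).reshape(4, 3)   # 4 x 3
B = np.arange(1, 7).reshape(3, 2)    # 3 x 2

C = A @ B                            # inner dims match (3 = 3), result is 4 x 2
assert C.shape == (4, 2)

# c_rc is the sum over products of row r of A and column c of B
assert C[0, 0] == sum(A[0, k] * B[k, 0] for k in range(3))

# element-wise multiplication (MATLAB's A .* B) needs equal shapes
D = A * A
assert D[1, 2] == A[1, 2] ** 2
```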

Transposition

Transposing turns each column into a row and each row into a column: (M^T)_rc = M_cr.
In Matlab: A^T = A’
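In NumPy the same operation is the `.T` attribute (a small sketch with made-up values):

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3
T = M.T                     # 3 x 2, like M' in MATLAB

# rows become columns and columns become rows: T[r, c] == M[c, r]
assert T.shape == (3, 2)
assert all(T[r, c] == M[c, r] for r in range(3) for c in range(2))
```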

Outer and inner products of vectors

Given two n-element vectors:
Inner product: row vector times column vector, (1 x n)(n x 1) → (1 x 1), i.e. a scalar.
Outer product: column vector times row vector, (n x 1)(1 x n) → (n x n), i.e. a matrix.
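Both products are one call in NumPy (the vectors here are illustrative):

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

inner = np.inner(u, v)       # (1 x n)(n x 1) -> scalar
assert inner == 1*4 + 2*5 + 3*6

outer = np.outer(u, v)       # (n x 1)(1 x n) -> n x n matrix
assert outer.shape == (3, 3)
assert outer[1, 2] == u[1] * v[2]
```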

Identity matrices

Is there a matrix which plays a similar role to the number 1 in number multiplication? Yes: the identity matrix I_n, the n x n matrix with 1s on the diagonal and 0s elsewhere.
A square n x n matrix A has one identity: A I_n = I_n A = A.
An n x m matrix A has two!! I_n A = A and A I_m = A.

Worked example of A I_3 = A for a 3 x 3 matrix:

    1 2 3       1 0 0       1+0+0  0+2+0  0+0+3       1 2 3
    4 5 6   X   0 1 0   =   4+0+0  0+5+0  0+0+6   =   4 5 6
    7 8 9       0 0 1       7+0+0  0+8+0  0+0+9       7 8 9

In Matlab: eye(n) produces the n x n identity matrix (eye(r, c) gives an r x c matrix with 1s on the main diagonal).
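The worked example, and the "two identities" of a rectangular matrix, check out in NumPy (`np.eye` mirrors MATLAB's `eye`):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
I = np.eye(3, dtype=int)                 # like eye(3)

# a square matrix has one identity: A I = I A = A
assert np.array_equal(A @ I, A)
assert np.array_equal(I @ A, A)

# a rectangular 2 x 3 matrix has two: I_2 B = B and B I_3 = B
B = np.array([[1, 2, 3], [4, 5, 6]])
assert np.array_equal(np.eye(2) @ B, B)
assert np.array_equal(B @ np.eye(3), B)
```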

Inverse matrices

Definition: a matrix A is nonsingular or invertible if there exists a matrix B such that AB = BA = I.

Worked example: for A = [2 1; -1 1], the inverse is A^-1 = 1/3 [1 -1; 1 2], since

    A^-1 A  =  (2+1)/3   (1-1)/3   =  1 0
               (-2+2)/3  (1+2)/3      0 1

The common notation for the inverse of a matrix A is A^-1. The inverse matrix A^-1 is unique when it exists. If A is invertible, A^-1 is also invertible, and A is the inverse matrix of A^-1. Matrix division can be defined via the inverse: A/B = AB^-1. If A is an invertible matrix, then (A^T)^-1 = (A^-1)^T.
In Matlab: A^-1 = inv(A)
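The worked example and the transpose property can be confirmed numerically (`np.linalg.inv` mirrors MATLAB's `inv`):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [-1.0, 1.0]])
A_inv = np.linalg.inv(A)     # like inv(A) in MATLAB

# A^-1 A = A A^-1 = I
assert np.allclose(A_inv @ A, np.eye(2))
assert np.allclose(A @ A_inv, np.eye(2))

# (A^T)^-1 == (A^-1)^T
assert np.allclose(np.linalg.inv(A.T), A_inv.T)
```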

Determinants

The determinant is a function whose input is an n x n matrix and whose output is a single number (real or complex), called the determinant. A matrix A has an inverse matrix A^-1 if and only if det(A) ≠ 0 (see next slide).

In Matlab: det(A)

Calculation of inversion using determinants

For a 2 x 2 matrix A = [a b; c d]:

    A^-1 = 1/det(A) [d -b; -c a],  where det(A) = ad - bc

Note: this requires det(A) ≠ 0. More complex matrices can be inverted using methods such as Gauss-Jordan elimination, Gaussian elimination or LU decomposition. Or you can just type inv(A)!
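The 2 x 2 determinant formula is short enough to implement directly and compare against the library inverse (the helper `inv2` is illustrative, not from the slides; the matrix matches the one used in the simultaneous-equations example below, with determinant -7):

```python
import numpy as np

def inv2(A):
    """Invert a 2x2 matrix via A^-1 = (1/det A) [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "singular matrix has no inverse"
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2.0, 3.0],
              [1.0, -2.0]])        # det = (2 x -2) - (3 x 1) = -7
assert np.allclose(inv2(A) @ A, np.eye(2))
assert np.allclose(inv2(A), np.linalg.inv(A))
```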

Applications

SEM: http://www.maths.soton.ac.uk/~jav/soton/MATH1007/workbook_8/8_2_inv_mtrx_sim_lin_eqnpdf.pdf
Neural Networks: http://csn.beckman.uiuc.edu/k12/nn_matrix.pdf
SPM: http://imaging.mrc-cbu.cam.ac.uk/imaging/PrinciplesStatistics

Solving simultaneous equations

For one linear equation ax = b, where x is the unknown and a and b are constants, there are 3 possibilities:
a ≠ 0: exactly one solution, x = b/a
a = 0 and b = 0: every x is a solution (infinitely many)
a = 0 and b ≠ 0: no solution

With >1 equation and >1 unknown

The idea from the single equation carries over: collect the coefficients into a matrix A, the unknowns into a vector X and the constants into a vector B, so the whole system is written in matrix form as AX = B.

We need the determinant of matrix A (because X = A^-1 B). For A = [2 3; 1 -2], from earlier: det(A) = (2 x -2) - (3 x 1) = -4 - 3 = -7. So the determinant is -7; since it is non-zero, the inverse exists: A^-1 = 1/(-7) [-2 -3; -1 2].

Then, for a given constant vector B, the solution is simply X = A^-1 B: multiplying the inverse by B yields the values of the unknowns.
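As a concrete sketch with the coefficient matrix from above and an illustrative right-hand side (the slide's own B is not reproduced here, so b = [7, 0] is an assumption for demonstration):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, -2.0]])        # det = -7, so a unique solution exists
b = np.array([7.0, 0.0])           # illustrative right-hand side

x = np.linalg.solve(A, b)          # numerically preferable to inv(A) @ b
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.inv(A) @ b)
```

Here the system is 2x + 3y = 7 and x - 2y = 0, giving x = 2, y = 1.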

Neural Networks

Neural networks create a mathematical model of the connections in a neural system. The connections are the excitatory and inhibitory synapses between neurons: an excitatory connection from an input neuron tends to activate the output neuron, while an inhibitory connection tends to suppress it.

Scenario 1

A single excitatory connection from an input neuron to an output neuron: if the input neuron is active, the output neuron becomes active; if the input neuron is inactive, so is the output neuron.

Scenario 2

The combination of an active excitatory input and an active inhibitory input cancels out: there is no net activity in the output neuron.

Matrix Representations of Neural Connections (Scenario 2 again)

Excitatory = positive influence on the post-synaptic cell (weight +1); inhibitory = negative influence (weight -1). With the synapses labelled (#1 to #3) and their activity levels specified, we can translate this information into a set of vectors (1-row matrices).

Input vector = (1 1), relating to the activity of neurons #1 and #2.
Weight vector = (1 -1), relating to the connection weights from #1 and #2 onto neuron #3.
Activity of neuron #3 = input x weight = (1 x 1) + (1 x -1) = 0.
With varying input (activity) and weights, neuron 3 can take on a wide range of values.
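The inner product computing neuron #3's activity, in NumPy:

```python
import numpy as np

inputs = np.array([1, 1])      # activities of input neurons #1 and #2
weights = np.array([1, -1])    # +1 excitatory synapse, -1 inhibitory synapse

# activity of neuron #3 = inner product of input and weight vectors
activity = inputs @ weights
assert activity == 0           # excitation and inhibition cancel

# varying the input changes the output
assert np.array([1, 0]) @ weights == 1    # excitatory input only
assert np.array([0, 1]) @ weights == -1   # inhibitory input only
```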

How are matrices relevant to fMRI data?

Consider what is measured:
Response variable: e.g. the BOLD signal at a particular voxel, giving many scalar observations for this one voxel over time.
Explanatory variables: assumed to be measured without error; they may be continuous, or dummy variables indicating the levels of an experimental factor.

With a single explanatory variable:

Y = X . β + ε
Observed = Predictors * Parameters + Error
BOLD = Design Matrix * Betas + Error

Y = X . β + ε

After preprocessing, Y is a matrix of BOLD signals: each column represents a single voxel sampled at successive time points (intensity against time). Each voxel is considered an independent observation, so this is an analysis of individual voxels over time, not of groups over space.

Design Matrix

In the design matrix X, each column holds the values of a single predictor (X1, X2, ...), and each row holds the values of all predictors at a single time point. In the greyscale image of the design matrix, the most negative values are nearest black and the most positive nearest white.

A complex version

Y = X × β + ε

where Y is the data vector (a voxel's time series), X is the design matrix, β the parameter vector and ε the error vector. Solving the equation for β tells us how much of the BOLD signal is explained by each column of X.
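Solving for β is ordinary least squares; a minimal sketch with a simulated design (the data, design and noise level here are invented for illustration, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy GLM: 20 scans, 2 predictors plus a constant column
n = 20
X = np.column_stack([rng.standard_normal(n),
                     rng.standard_normal(n),
                     np.ones(n)])
true_beta = np.array([2.0, -1.0, 0.5])
Y = X @ true_beta + 0.01 * rng.standard_normal(n)

# ordinary least squares: beta = (X^T X)^-1 X^T Y
beta = np.linalg.inv(X.T @ X) @ X.T @ Y
assert np.allclose(beta, true_beta, atol=0.05)

# the library solver gives the same answer
beta2, *_ = np.linalg.lstsq(X, Y, rcond=None)
assert np.allclose(beta, beta2)
```

With low noise the recovered β is close to the true parameters; in real fMRI analysis SPM performs this fit at every voxel.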

The End… Any (easy) questions?!