X.6 Non-Negative Matrix Factorization

a.k.a. Multivariate Curve Resolution (MCR) when applied to spectroscopy.

Goal: Decompose a set of measured spectra into their pure-component spectra.

Method: Assuming the pure-component spectra are known (or guessed), determine the concentration of each component in each spectrum, setting negative entries to zero. From that set of concentrations, determine the spectrum of each component, again setting negative entries to zero. Iterate to convergence.

When to consider MCR/NNMF

Multiple spectra are available, each with varying (but not necessarily known) contributions from all of the components.

Each measured spectrum can be described as a linear combination of the pure-component spectra.

Overview

1. Decide the number of species present in your sample. Select that many of the spectra at random and assign them as the initial pure-component spectra to get the ball rolling.
2. Fit each spectrum in the set as a linear combination of the pure-component spectra, and replace negative concentrations with zeros.
3. Using these concentrations, solve for the set of pure-component spectra, setting all negative amplitudes in the spectra to zero.
4. Iterate between steps 2 and 3 until no negative values appear in either the concentrations or the pure-component spectra.
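A minimal NumPy sketch of steps 1-4 above (the function name mcr_als, the iteration cap, and the random seeding are illustrative choices, not taken from the slides):

import numpy as np

def mcr_als(D, m, max_iter=500, seed=0):
    """Alternating least-squares MCR/NNMF sketch.

    D : (N, n) array whose columns are the measured spectra.
    m : assumed number of pure components.
    Returns S, the (N, m) pure-component spectra, and C, the (m, n) concentrations.
    """
    rng = np.random.default_rng(seed)
    N, n = D.shape
    # Step 1: seed the pure spectra with m randomly chosen measured spectra.
    S = D[:, rng.choice(n, size=m, replace=False)].copy()
    for _ in range(max_iter):
        # Step 2: fit each spectrum as a linear combination of the current pure
        # spectra (least squares), then replace negative concentrations with zero.
        C = np.linalg.lstsq(S, D, rcond=None)[0]
        neg_C = C < 0
        C[neg_C] = 0.0
        # Step 3: use these concentrations to re-solve for the pure spectra
        # (the transposed problem), then zero out negative amplitudes.
        S = np.linalg.lstsq(C.T, D.T, rcond=None)[0].T
        neg_S = S < 0
        S[neg_S] = 0.0
        # Step 4: stop once neither solve produced negative values.
        if not neg_C.any() and not neg_S.any():
            break
    return S, C

For an (N x n) data matrix D and an assumed m components, S, C = mcr_als(D, m) returns factors with D approximately equal to S @ C.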

The Maths

Remember this? If we have guesses for the pure-component spectra, we can invert the problem and solve for the concentrations.

The Maths

In MCR and NNMF:

D = the (N × n) data matrix; the set of n measured multi-component spectra, each of length N
S = the (N × m) matrix of m pure-component spectra
C = the (m × n) matrix of the corresponding concentrations
m = the assumed number of pure components

Each column of D is a measured spectrum. Each column of S is a pure-component spectrum. Each column of C describes the combination of pure components required to recover the corresponding column of D.
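With these definitions (using S here for the pure-component spectra matrix), the assumed model is the bilinear factorization

\[
D \;=\; S\,C, \qquad (N \times n) = (N \times m)\,(m \times n).
\]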

The Maths

Part 1. If the pure spectra are known (or guessed), the concentrations can be isolated by matrix inversion. Since negative concentrations correspond to nonphysical results, set all negative entries to zero.
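In terms of the model D = SC, one standard way to write this "matrix inversion" step is the least-squares (pseudoinverse) solution, followed by the nonnegativity clip:

\[
\hat{C} \;=\; \left(S^{\mathsf T} S\right)^{-1} S^{\mathsf T}\, D,
\qquad
\hat{C}_{ij} \leftarrow \max\!\bigl(\hat{C}_{ij},\, 0\bigr).
\]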

The Maths

Part 2. Use the recovered nonnegative concentrations to solve for the matrix of pure-component spectra, S. Since negative intensities/absorbances correspond to nonphysical results, set all negative entries to zero.
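The corresponding least-squares update (again one standard form, using the symbols defined above) solves the transposed problem and clips negatives:

\[
\hat{S} \;=\; D\, C^{\mathsf T} \left(C\, C^{\mathsf T}\right)^{-1},
\qquad
\hat{S}_{ij} \leftarrow \max\!\bigl(\hat{S}_{ij},\, 0\bigr).
\]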

Uncertainties

For a given measurement i, the standard deviation is calculated in the usual way. Note: N is the number of elements in each spectrum, and m is the number of pure components (i.e., the number of fitted parameters per spectrum). The variance in the concentration of the species in row r for the ith measurement then follows from the same linear least-squares treatment; both expressions are written out below.
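Assuming the conventional general linear least-squares error analysis, with the symbols defined earlier, these take the form:

\[
s_i^{2} \;=\; \frac{1}{N - m}\sum_{j=1}^{N}\Bigl(D_{ji} - (S\,C)_{ji}\Bigr)^{2},
\qquad
\sigma^{2}_{C_{ri}} \;=\; s_i^{2}\,\Bigl[\bigl(S^{\mathsf T} S\bigr)^{-1}\Bigr]_{rr}.
\]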