Self-Modeling Curve Resolution and Constraints
Hamid Abdollahi, Department of Chemistry, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran

Factorization: a matrix can be decomposed into the product of two significantly smaller matrices. In many chemical studies, the measured or calculated properties of the system can be considered a linear sum of terms, each representing a fundamental effect in the system multiplied by an appropriate weighting factor:

D = AB + R

Row and Column Vector Spaces

D = USV^T = XV^T = UY^T, with X = US and Y^T = SV^T

The rows of X are the coordinates of the rows of D in the row vector space, on the orthonormal V basis set. The rows of Y are the coordinates of the columns of D in the column vector space, on the orthonormal U basis set.
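
A minimal NumPy sketch of these two coordinate sets on simulated bilinear data; the matrix sizes and random profiles are illustrative assumptions, not part of the lecture:

```python
import numpy as np

# Hypothetical two-component bilinear data: D = C A (concentrations x spectra)
rng = np.random.default_rng(0)
C = rng.random((20, 2))          # concentration profiles (20 samples, 2 species)
A = rng.random((2, 50))          # pure spectra (2 species, 50 channels)
D = C @ A

# Truncated SVD, keeping the r = 2 significant components: D = U S V^T
U, s, Vt = np.linalg.svd(D, full_matrices=False)
U, S, Vt = U[:, :2], np.diag(s[:2]), Vt[:2, :]

X = U @ S        # rows of X: coordinates of the rows of D on the V basis
Y = (S @ Vt).T   # rows of Y: coordinates of the columns of D on the U basis

print(np.allclose(D, X @ Vt))    # True: D = X V^T
print(np.allclose(D, U @ Y.T))   # True: D = U Y^T
```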

Two-component chromatographic system

Two-component kinetic system

Two-component multivariate calibration

Three-component chromatographic system

Bilinearity

D = XV^T = CA^T, with C = XT and A^T = T^{-1}V^T for any invertible matrix T

Bilinearity is a property of a model, and calling data bilinear is entirely a statement that the data obey that particular model. So when chemometricians say "a data matrix is bilinear," they mean that the data obey a bilinear model, and the meaning of "non-bilinear data" follows in the same way. Thus, mathematically, every data matrix is bilinear.
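
A short sketch of this rotational ambiguity under the same simulated-data assumptions as above: any invertible T yields a new factor pair that reproduces D exactly.

```python
import numpy as np

# Rank-2 bilinear data and its abstract factors D = X V^T (sizes are assumptions)
rng = np.random.default_rng(1)
D = rng.random((20, 2)) @ rng.random((2, 50))
U, s, Vt = np.linalg.svd(D, full_matrices=False)
X, Vt = U[:, :2] * s[:2], Vt[:2, :]

T = np.array([[1.0, 0.3], [0.2, 1.5]])   # arbitrary invertible transformation
C = X @ T                                # C = X T
At = np.linalg.inv(T) @ Vt               # A^T = T^{-1} V^T

print(np.allclose(D, C @ At))            # True for every invertible T
```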

Is rank-deficient data bilinear?

Bilinearity in Abstract Space

In the vector space of the data, every column or row can always be produced by a linear combination of the basis set of the corresponding subspace. When the data obey Beer's law, all rows and all columns are linear combinations of the pure spectra and of the pure concentration profiles, respectively. Beer's law is not fulfilled when the rows and columns cannot all be produced from the same set of pure spectral or concentration profiles.

Bilinearity in Abstract Space: a simple first-order kinetic system

Bilinearity in Abstract Space: row (spectral) space

Bilinearity in Abstract Space: column (concentration) space

Non-negativity Constraint

Bilinear decomposition and the non-negativity constraint

Mathematically, a bilinear decomposition is possible for every data matrix. In general, for an r-component system, r independent vectors in the row space and r independent vectors in the column space of the data must be defined. In most cases, a chemically meaningful decomposition must produce r independent vectors whose elements are all non-negative. Accordingly, the question is: how can we find basis vectors with non-negative elements?

Meaningful non-negative profiles must lie in the abstract spaces of the data:

Is there an exact definition of the non-negative subspace?

D = USV^T = XV^T = UY^T

All elements of the data matrix are greater than or equal to zero: D ≥ 0. The coordinates of the rows and columns of D in the abstract space are given by the rows of X (coordinates of the rows of D) and the rows of Y (coordinates of the columns of D), so

XV^T ≥ 0 (the row vectors are non-negative)
UY^T ≥ 0 (the column vectors are non-negative)

Definition of non-negativity in abstract space: row space

Each point z^T = [z_1 z_2 z_3 … z_r] in the abstract space defines a spectral profile

s = z_1 v_1 + z_2 v_2 + z_3 v_3 + … + z_r v_r

and the z points must make every element of s non-negative. For each element of s, an inequality can be defined:

z_1 v_{11} + z_2 v_{21} + z_3 v_{31} + … + z_r v_{r1} ≥ 0
z_1 v_{12} + z_2 v_{22} + z_3 v_{32} + … + z_r v_{r2} ≥ 0
…
z_1 v_{1p} + z_2 v_{2p} + z_3 v_{3p} + … + z_r v_{rp} ≥ 0

Definition of non-negativity in abstract space: z 11 v 11 + z 21 v 21 ≥ 0 z 12 v 12 + z 22 v 22 ≥ 0 z 1p v 1p + z 2p v 2p ≥ 0 … … Two components system in spectral space: z 21 ≥ (-v 11 / v 21 ) z 11 … … z 22 ≥ (-v 12 / v 22 ) z 12 z 2p ≥ (-v 1p / v 2p ) z 1p z2z2 z2z2 0 ith half-plane ith border line z 2 ≥ (-v 1i / v 2i ) z 1

Inequality borders corresponding to each element of the spectral profile:

Inequality borders corresponding to each element of the concentration profile:

Microscopic structure of the data:
* Negative region
* Data region
* Feasible solution regions

Microscopic structure of the data: normalization has no effect on the areas of the different regions. Normalizing to unit length in abstract space transfers each object point onto an arc of the circle of radius one.

Microscopic structure of the data: normalizing to the first eigenvector in abstract space transfers each object point onto a line on which the first coordinate of every point equals one.
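
The two normalizations just described can be stated in two lines of NumPy; X here stands for any score matrix (an assumed example, for illustration only):

```python
import numpy as np

# Example score points (rows of X) in a two-dimensional abstract space
X = np.array([[2.0, 0.5], [1.5, -0.3], [3.0, 1.2]])

X_unit = X / np.linalg.norm(X, axis=1, keepdims=True)  # onto the unit circle
X_ev1 = X / X[:, [0]]                                  # onto the line z1 = 1

print(np.allclose(np.linalg.norm(X_unit, axis=1), 1.0))  # True: unit length
print(X_ev1[:, 0])                                       # all first coordinates = 1
```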

Microscopic structure of the data: the Lawton-Sylvestre plot (LSP) is a very simple image of the data containing all the information that can be obtained under the non-negativity constraint of a bilinear decomposition.

Concentration and spectral bands: some parts of the LSP can be translated into meaningful feasible spectral and concentration bands. The "true" solutions certainly lie here.

Discussion: One of the feasible solutions of a two-component system under only the non-negativity constraint always exists in the data.

Simulate a chemical data matrix and use the LSP m-file to visualize the "data-based uniqueness".

Definition of non-negativity in abstract space: three-component system in the spectral space

z_1 v_{11} + z_2 v_{21} + z_3 v_{31} ≥ 0
z_1 v_{12} + z_2 v_{22} + z_3 v_{32} ≥ 0
…
z_1 v_{1p} + z_2 v_{2p} + z_3 v_{3p} ≥ 0

which, for v_{3i} > 0, rearrange to

z_3 ≥ −(v_{11} / v_{31}) z_1 − (v_{21} / v_{31}) z_2
z_3 ≥ −(v_{12} / v_{32}) z_1 − (v_{22} / v_{32}) z_2
…
z_3 ≥ −(v_{1p} / v_{3p}) z_1 − (v_{2p} / v_{3p}) z_2

Definition of non-negativity in abstract space: three-component system in the spectral space

For the ith element of the spectral profile, the border of the non-negative subspace is a plane:

z_3 ≥ −(v_{1i} / v_{3i}) z_1 − (v_{2i} / v_{3i}) z_2

In general, each such inequality defines a non-negative half-volume in the three-dimensional (z_1, z_2, z_3) space, bounded by the ith border plane.
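
In three dimensions, checking a candidate point against every border plane is a single matrix product, as in this sketch (the simulated data are assumptions):

```python
import numpy as np

# Rank-3 non-negative data; rows of V3 are the first three right singular vectors
rng = np.random.default_rng(3)
D = rng.random((30, 3)) @ rng.random((3, 60))
_, _, Vt = np.linalg.svd(D, full_matrices=False)
V3 = Vt[:3]

def feasible(z, tol=1e-12):
    # True if s = z1*v1 + z2*v2 + z3*v3 is non-negative at every channel,
    # i.e. z satisfies all p half-space inequalities at once
    return np.all(np.asarray(z) @ V3 >= -tol)

z0 = np.sign(V3[0].sum()) * np.array([1.0, 0.0, 0.0])  # first-eigenvector direction
print(feasible(z0))               # True: lies inside all border planes
print(feasible([0.0, 1.0, 0.0]))  # typically False: v2 has negative elements
```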

Definition of non-negativity in abstract space: non-negativity boundaries and a non-negative feasible solution (figure).

Schematic presentation of the microstructure of the data: non-negativity boundaries and a non-negative feasible solution (figure).

Microstructure of 3-component chromatographic data:

Row space: inner and outer polyhedra (figure).

Microstructure of 3-component chromatographic data, column space: inner and outer polyhedra (figure).

3D visualization of the microstructure of the data

Microstructure of 3-component data:
* Negative region
* Data region
* Feasible solution regions
* Positive non-feasible regions

Discussion: Implementation of the non-negativity constraint

Closure Constraint

Closed Two-Component Systems: c_1 + c_2 = const., D = CA

d_1 = c_{11} a_1 + c_{12} a_2 = const. a_2 + c_{11} (a_1 − a_2)
d_2 = c_{21} a_1 + c_{22} a_2 = const. a_2 + c_{21} (a_1 − a_2)
…
d_i = c_{i1} a_1 + c_{i2} a_2 = const. a_2 + c_{i1} (a_1 − a_2)
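
A quick numeric check of this identity (the spectra, closure constant, and sizes are arbitrary assumptions):

```python
import numpy as np

# Closed two-component system: c_i1 + c_i2 = k for every row i
rng = np.random.default_rng(4)
a1, a2 = rng.random(40), rng.random(40)   # pure spectra
k = 1.0                                   # closure constant
c1 = rng.random(15)                       # c_i1; then c_i2 = k - c_i1

D = np.outer(c1, a1) + np.outer(k - c1, a2)            # D = C A, closed
print(np.allclose(D, k * a2 + np.outer(c1, a1 - a2)))  # True: d_i = k*a2 + c_i1*(a1 - a2)
```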

Spectral Subspace of Closed Systems: D = CA

d_i = c_{i1} a_1 + c_{i2} a_2 = const. a_2 + c_{i1} (a_1 − a_2)

In a closed two-component system, all row spectra lie on a line (in the direction a_1 − a_2, offset by const. a_2) which, together with the origin, spans a two-dimensional space.

Convex Linear Combinations and Closure

d_i = c_i a_1 + (1 − c_i) a_2

In a closed system, the subspace of the data rows is convex: each row is a convex linear combination of the pure spectra.

Discussion: Spectral subspace reduction in a closed system is general for a system with any number of components.

Discussion: Why does column mean-centering in a closed system reduce the rank of the data?
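
One way to see the answer numerically; this sketch (with assumed sizes and profiles) shows the rank dropping by one after column mean-centering, because subtracting the column means removes the common offset direction of the closed rows:

```python
import numpy as np

# Closed two-component system: every row of C sums to 1
rng = np.random.default_rng(5)
c1 = rng.random(25)
C = np.column_stack([c1, 1.0 - c1])
A = rng.random((2, 50))                   # pure spectra
D = C @ A                                 # rank 2

Dc = D - D.mean(axis=0)                   # column mean-centering
print(np.linalg.matrix_rank(D))           # 2
print(np.linalg.matrix_rank(Dc))          # 1: closure costs one rank
```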

Score plot of a closed system:

Microscopic structure of a closed system:

Closure: Normalization or Constraint?

Feasible solutions whose intensities are restricted to the data line (the closure normalization line) correspond to closed concentration profiles.

Discussion: Implementation of the closure constraint