A Fast Non-Negativity-Constrained Least Squares Algorithm. R. Bro and S. De Jong, J. Chemometrics, 11, 393-401, 1997. Presented by: Maryam Khoshkam.

Presentation transcript:

1 A Fast Non-Negativity-Constrained Least Squares Algorithm. R. Bro and S. De Jong, J. Chemometrics, 11, 393-401, 1997. By: Maryam Khoshkam

2 Outline
- Introduction
- Algorithm of the classical non-negative least squares (NNLS)
- A numerical example
- Fast non-negative least squares (FNNLS)
- Results and discussion

3 Introduction
Estimation of models subject to non-negativity constraints is of practical importance in chemistry. The time required to estimate a true least squares non-negativity-constrained model is typically many times longer than for an unconstrained model.
Approximation procedure: unconstrained estimation + setting the negative values to zero.
What is the problem with this forced-to-zero approach?

4 Problems with forcing to zero:
1. There is no guarantee whatsoever for the quality of the model.
2. When included in a multiway algorithm, it can cause the algorithm to diverge (especially with noisy data or difficult models).
Non-negativity-constrained linear least squares

5 NNLS
NNLS will be stated using the following nomenclature: given Z (N x M) and x (N x 1), solve min_d ||Zd - x||^2 subject to d_m >= 0 for all m.
Lawson, C. L. and Hanson, R. J., Solving Least-Squares Problems, Prentice-Hall, 1974, Chapter 23, p. 161. The lsqnonneg command in MATLAB is based on this algorithm.
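As a quick illustration (the data here are hypothetical), the following Python snippet contrasts the forced-to-zero shortcut with a true NNLS fit; scipy.optimize.nnls plays the same role as MATLAB's lsqnonneg:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
Z = rng.random((10, 4))                        # N x M design matrix
d_true = np.array([2.0, 0.0, 1.0, 0.0])        # true non-negative solution
x = Z @ d_true + 0.05 * rng.standard_normal(10)

# forced to zero: unconstrained LS, then clip the negative entries
d_ls = np.linalg.lstsq(Z, x, rcond=None)[0]
d_clip = np.clip(d_ls, 0.0, None)

# true non-negativity-constrained LS (Lawson-Hanson active-set algorithm)
d_nnls, rnorm = nnls(Z, x)

# clipping is feasible but not optimal, so its residual can only be larger
print(np.linalg.norm(x - Z @ d_clip), rnorm)
```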

6 Some important aspects of the algorithm
This is an active-set algorithm: NNLS first finds the true passive and active sets, then performs least squares on the corresponding columns of Z. How?
First of all, it supposes that all elements of d are in the active set: R = {1, 2, ..., M}, P = Ø.
If all elements of d are active, what form should the initial d take?

7 The initial solution vector d is feasible: it is set equal to an M x 1 zero vector. Then, using the vector w, the algorithm moves elements of d into the passive set one by one, step by step. How?

8 w = Z^T(x - Zd)
One necessary condition for optimality of a solution is that the derivatives with respect to the parameters of the passive set be zero. Why? And what is f'(d_m) if m is in the active set?
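For reference, the standard one-line derivation behind this (implicit in the slide):

```latex
f(d) = \tfrac{1}{2}\,\lVert x - Zd \rVert^{2}
\;\Longrightarrow\;
\nabla f(d) = -Z^{\mathsf T}(x - Zd) = -w,
\qquad
\frac{\partial f}{\partial d_m} = -w_m
```

So w is the negative gradient: w_m = 0 is the zero-derivative condition for the passive set, while for an active m (d_m = 0) the KKT conditions only require w_m <= 0.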

9 Thus, at the optimal solution we expect w_m = 0 for every passive m and w_m <= 0 for every active m. When we are not yet at the optimal condition, some w_m > 0. What is the meaning of a positive value for w_m?

10 A positive w_m shows that increasing d_m to a more positive value gives a negative change in the residual (f', the slope, is negative). That is, the residual becomes smaller, closer to zero, when d_m moves in the positive direction.

11 Algorithm NNLS
Initialization: 1. P = Ø; 2. R = {1, 2, ..., M}; 3. d = 0; 4. w = Z^T(x - Zd)
Loop A: while R ≠ Ø and max_{n in R}(w_n) > tolerance:
  find m = argmax_{n in R}(w_n); set P = P + {m} and R = R - {m}
  Loop B: build s_P = [(Z_P)^T Z_P]^{-1} (Z_P)^T x and set s_R = 0
    if all s_P > 0: set d = s, recompute w = Z^T(x - Zd) and return to Loop A
    otherwise: enter Loop C (slide 29)
When the Loop A test fails, d is the optimal solution.
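As a concrete companion to the flowchart, here is a minimal Python sketch of the classical active-set algorithm in the slides' notation (Loops A, B and C). It is a didactic reconstruction, not the authors' code; scipy.optimize.nnls is the production-quality equivalent:

```python
import numpy as np

def nnls_active_set(Z, x, tol=1e-15):
    """Classical active-set NNLS (Lawson & Hanson, 1974):
    minimize ||Z d - x||_2 subject to d >= 0."""
    N, M = Z.shape
    P = np.zeros(M, dtype=bool)        # passive set; R is its complement
    d = np.zeros(M)                    # feasible starting vector (step 3)
    w = Z.T @ (x - Z @ d)              # negative gradient (step 4)

    def ls_on_passive():
        # Loop B: unconstrained LS on the passive columns only; s_R = 0
        s = np.zeros(M)
        s[P] = np.linalg.lstsq(Z[:, P], x, rcond=None)[0]
        return s

    # Loop A: while some active variable could still reduce the residual
    while (~P).any() and w[~P].max() > tol:
        m = np.argmax(np.where(~P, w, -np.inf))   # largest w_n over n in R
        P[m] = True                               # P = P + {m}, R = R - {m}
        s = ls_on_passive()
        # Loop C: restore feasibility while some passive s_m <= 0
        while P.any() and s[P].min() <= 0:
            bad = P & (s <= 0)
            alpha = np.min(d[bad] / (d[bad] - s[bad]))
            d = d + alpha * (s - d)    # step back to the constraint boundary
            P &= d > tol               # variables that hit zero rejoin R
            s = ls_on_passive()
        d = s
        w = Z.T @ (x - Z @ d)
    return d
```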

12 A simple numerical example. Loop A: test max_{n in R}(w_n) > 1e-15 and R ≠ Ø, then proceed to Loop B. [Flowchart figure with the worked numbers]

13 Loop B. When max_{n in R}(w_n) > 1e-15 and R ≠ Ø no longer holds, d is the optimal solution. [Flowchart figure]

14 Graphical representation of the non-negative least squares algorithm for a triprotic acid:
H3A ↔ H+ + H2A-, pKa1 = 2.6
H2A- ↔ H+ + HA2-, pKa2 = 4.0
HA2- ↔ H+ + A3-, pKa3 = 6.3
Model: X = ZD, estimated by inverse least squares (ILS): D = Z+X. [Figure: the Z, D and X matrices]

15 X (10x100) = Z (10x4) D (4x100).
Step 1: take x (10x1) = X(:,1) and estimate d (4x1) = D(:,1) using the vector w.
Initialization: 1) P = Ø; 2) R = {1, 2, 3, 4}.
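A sketch of how the 10x4 concentration matrix Z could be generated for this system (the pH grid is an assumption; the slides fix only the pKa values and the matrix sizes):

```python
import numpy as np

pKa = np.array([2.6, 4.0, 6.3])      # pKa1..pKa3 from the slide
Ka = 10.0 ** -pKa
pH = np.linspace(1.0, 8.0, 10)       # assumed grid: 10 pH levels
h = 10.0 ** -pH                      # [H+] at each level

# cumulative acidity products: 1, Ka1, Ka1*Ka2, Ka1*Ka2*Ka3
K = np.concatenate(([1.0], np.cumprod(Ka)))
terms = np.array([h ** (3 - i) * K[i] for i in range(4)])   # 4 x 10
Z = (terms / terms.sum(axis=0)).T    # 10 x 4 fractions of H3A, H2A-, HA2-, A3-
```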

16 [Figure: variation of w during the non-negative least squares iterations, comparing the forced-to-zero and NNLS solutions]

17 Residual matrix: w = Z^T(x - Zd), with w (4x1), r = x - Zd (10x1) and Z (10x4).
At each wavelength λ, w_i is the contribution of the ith species to the residual vector.

18 Why is it necessary to modify the NNLS algorithm?

19 PARAFAC-ALS: the three-way array X (I x J x K) is fitted by alternating least squares, so each loading matrix update is itself a least squares problem. [Figure: trilinear decomposition of X]

20 If the size of X is 10x100x5 for a 3-component system, the size of Z is 500x3, and so on for the other modes. Computation of Z can be costly for large arrays, and excessive memory is required to form the unfolded matrices X(I x JK), X(J x IK) and X(K x IJ).
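For context, here is the standard cross-product form of the PARAFAC-ALS update for A (standard trilinear algebra, not taken verbatim from the slides), with Z = C ⊙ B the Khatri-Rao product:

```latex
X_{(I \times JK)} = A\,(C \odot B)^{\mathsf T}
\;\Longrightarrow\;
A = X_{(I \times JK)}\, Z\, (Z^{\mathsf T} Z)^{-1},
\qquad
Z^{\mathsf T} Z = (C^{\mathsf T} C) \ast (B^{\mathsf T} B)
```

where ∗ denotes the elementwise (Hadamard) product. The F x F matrix Z^T Z can thus be formed without ever building the JK x F matrix Z explicitly, which is precisely why an NNLS variant that works from cross products is attractive.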

21 B and C are estimated in a similar way.

22 Classical NNLS cannot be used with this simplified, cross-product-based version of PARAFAC. Why?

23 When estimating A (I x F) from X (I x JK) under a non-negativity constraint, classical NNLS needs the full Z, and that is SLOW! A modification is needed…

24 Fast non-negativity-constrained least squares (FNNLS)
1. Accept the cross products (Z^T x and Z^T Z) instead of the raw data:
w = Z^T(x - Zd) becomes w = Z^T x - (Z^T Z)d
s_P = [(Z_P)^T Z_P]^{-1} (Z_P)^T x becomes s_P = [(Z^T Z)_P]^{-1} (Z^T x)_P
2. Set the passive and active variables before entering Loop B (a warm start); d is not the zero vector in this case.
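A minimal Python sketch of FNNLS working purely from the cross products. This is a didactic reconstruction of the published algorithm, shown without the warm start of modification 2; the tolerance heuristic is an assumption, not taken from the paper:

```python
import numpy as np

def fnnls(ZtZ, Ztx, tol=None):
    """FNNLS sketch after Bro & De Jong (1997): NNLS driven entirely by
    the cross products Z'Z (M x M) and Z'x (M,), never the raw data."""
    M = len(Ztx)
    if tol is None:
        tol = 10 * np.finfo(float).eps * np.abs(ZtZ).max() * M
    P = np.zeros(M, dtype=bool)
    d = np.zeros(M)
    w = Ztx - ZtZ @ d                  # w = Z'x - (Z'Z)d

    def solve_passive():
        # s_P = [(Z'Z)_P]^{-1} (Z'x)_P : normal equations on the passive set
        s = np.zeros(M)
        if P.any():
            s[P] = np.linalg.solve(ZtZ[np.ix_(P, P)], Ztx[P])
        return s

    while (~P).any() and w[~P].max() > tol:
        m = np.argmax(np.where(~P, w, -np.inf))
        P[m] = True
        s = solve_passive()
        while P.any() and s[P].min() <= 0:   # inner Loop C, as in classical NNLS
            bad = P & (s <= 0)
            alpha = np.min(d[bad] / (d[bad] - s[bad]))
            d = d + alpha * (s - d)
            P &= d > tol
            s = solve_passive()
        d = s
        w = Ztx - ZtZ @ d
    return d
```

Calling fnnls(Z.T @ Z, Z.T @ x) should match the classical NNLS solution; in PARAFAC-ALS the same Z'Z is shared by every row of the loading matrix being updated, which is where the speedup comes from.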

25 Thanks

26 Example: unconstrained solution d = Z\x. Forced to zero: RMS = 103; NNLS: RMS = 20. [Figure: comparison of the two solutions]

27 In the unconstrained solution of d, the negative elements (here the 3rd element of d) would be taken as the active set; in the constrained solution, the zero elements (here the 2nd and 3rd elements of d) are the active set. The two do not coincide, so it is not possible to identify the true active set from the unconstrained least squares solution. What happens if the true active set is known?

28 For example, if we know that the 2nd and 3rd elements of d belong to the true active set, we can simply perform an unconstrained least squares fit with the 1st column of Z (the columns corresponding to the passive set), much like applying a selectivity constraint! A sketch follows below.
NOTE: 1) f(d) is minimized using only the columns of Z corresponding to the passive set. 2) The active-set elements, for which d_m = 0, take no part in minimizing the residuals.
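A small sketch of this observation (the matrix and data here are hypothetical; only the choice of active elements follows the slide):

```python
import numpy as np

# hypothetical 4x3 system; suppose we know elements 2 and 3 are truly active
Z = np.array([[1.0, 0.2, 0.1],
              [0.5, 1.0, 0.3],
              [0.2, 0.4, 1.0],
              [0.8, 0.1, 0.6]])
x = np.array([2.0, 1.1, 0.5, 1.7])

passive = [0]                  # only the 1st element remains passive
d = np.zeros(3)
d[passive] = np.linalg.lstsq(Z[:, passive], x, rcond=None)[0]
# d[1] = d[2] = 0 by construction: active elements play no part in the fit
```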

29 Loop C (update R and P):
1) α = min_{m in P, s_m <= 0} [d_m / (d_m - s_m)]
2) d = d + α(s - d)
3) move to R every m in P with d_m = 0; update R and P
4) s_P = [(Z_P)^T Z_P]^{-1} (Z_P)^T x
5) s_R = 0
Repeat until all s_P > 0, then continue in Loop B.