
Lecture 12 Equality and Inequality Constraints

Syllabus
Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture
Review the Natural Solution and SVD
Apply SVD to other types of prior information and to equality constraints
Introduce Inequality Constraints and the Notion of Feasibility
Develop Solution Methods
Solve Exemplary Problems

Part 1 Review the Natural Solution and SVD

Subspaces: the model parameters m_p can affect the data; m_0 cannot affect the data. The data d_p can be fit by the model; d_0 cannot be fit by any model.

Natural solution: determine m_p by solving d_p - G m_p = 0, and set m_0 = 0. The error is then reduced to its minimum, E = e_0^T e_0, and the solution length is reduced to its minimum, L = m_p^T m_p.

Singular Value Decomposition (SVD)

Singular value decomposition: G = U Λ V^T, with U^T U = I and V^T V = I.

Suppose only p of the λ's are non-zero.

Keep only the first p columns of U and the first p columns of V: G = U_p Λ_p V_p^T.

U_p^T U_p = I and V_p^T V_p = I, since the vectors are mutually perpendicular and of unit length; but U_p U_p^T ≠ I and V_p V_p^T ≠ I, since the vectors do not span the entire space.

The Natural Solution

The natural generalized inverse: G^{-g} = V_p Λ_p^{-1} U_p^T, so m^est = G^{-g} d.

Resolution and covariance of the natural solution: model resolution R = G^{-g} G = V_p V_p^T, data resolution N = G G^{-g} = U_p U_p^T, and covariance [cov m] = σ_d^2 V_p Λ_p^{-2} V_p^T.
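In MatLab, all of these quantities follow from the truncated SVD. A minimal sketch, assuming G, dobs, the effective rank p, and the data variance sd2 are already defined:

[U, L, V] = svd(G);                  % full SVD of G
Up = U(:,1:p); Vp = V(:,1:p);        % keep only the first p columns
Lpi = diag(1./diag(L(1:p,1:p)));     % inverse of Lambda_p
Gmg = Vp*Lpi*Up';                    % natural generalized inverse G^-g
mest = Gmg*dobs;                     % natural solution
Rm = Vp*Vp';                         % model resolution matrix
Nd = Up*Up';                         % data resolution matrix
covm = sd2*(Vp*Lpi*Lpi*Vp');         % covariance of the estimate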

Part 2 Application of SVD to other types of prior information and to equality constraints

General solution to the linear inverse problem (from two lectures ago, the general minimum-error solution): m = V_p Λ_p^{-1} U_p^T d + V_0 α, the natural solution plus an amount α of null vectors.

You can adjust α to match whatever a priori information you want. For example, to make m close to an a priori model ⟨m⟩, minimize L = ||m - ⟨m⟩||^2 with respect to α. This gives α = V_0^T ⟨m⟩, so m = V_p Λ_p^{-1} U_p^T d + V_0 V_0^T ⟨m⟩.
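Continuing the MatLab sketch above, and assuming a hypothetical a priori model vector mA:

V0 = V(:,p+1:end);                     % the null-space vectors
mest = Vp*Lpi*Up'*dobs + V0*(V0'*mA);  % natural solution plus V0 V0' <m>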

Equality constraints: minimize E subject to the constraint Hm = h.

Step 1: find the part of the solution constrained by Hm = h. Take the SVD of H (not of G): H = U_p Λ_p V_p^T, so m = V_p Λ_p^{-1} U_p^T h + V_0 α.

Step 2: convert Gm = d into an equation for α: G V_p Λ_p^{-1} U_p^T h + G V_0 α = d, and rearrange to [G V_0] α = [d - G V_p Λ_p^{-1} U_p^T h], that is, G'α = d'.

Step 3: solve G'α = d' for α using least squares.

Step 4: reconstruct m from α: m = V_p Λ_p^{-1} U_p^T h + V_0 α.
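Putting Steps 1-4 together in MatLab, a minimal sketch (assuming G, dobs, H, and h are defined, with the SVD factors here belonging to H):

[UH, LH, VH] = svd(H);               % Step 1: SVD of H (not G)
p = rank(H);                         % number of non-zero singular values
UHp = UH(:,1:p); VHp = VH(:,1:p); VH0 = VH(:,p+1:end);
LHpi = diag(1./diag(LH(1:p,1:p)));
mh = VHp*LHpi*UHp'*h;                % part of m constrained by Hm = h
Gp = G*VH0;                          % Step 2: G' = G*V0
dp = dobs - G*mh;                    % d' = d - G Vp Lp^-1 Up' h
alpha = Gp\dp;                       % Step 3: least squares for alpha
mest = mh + VH0*alpha;               % Step 4: reconstruct m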

Part 3 Inequality Constraints and the Notion of Feasibility

Not all inequality constraints provide new information: given x > 3 and x > 2, the second follows from the first.

Some inequality constraints are incompatible: x > 3 and x < 2, since nothing can be both bigger than 3 and smaller than 2.

Every row of the inequality constraint Hm ≥ h divides the space of m into two parts: one where a solution is feasible, and one where it is infeasible. The boundary is a planar surface.

When all the constraints are considered together, they either create a feasible volume or they don't. If they do, then the solution must be in that volume; if they don't, then no solution exists.
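In MatLab the feasibility of a candidate solution is a one-line test (a trivial sketch, assuming H, h, and a candidate m):

feasible = all(H*m >= h);            % true only if m is in the feasible volume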

[Figure: panels (A) and (B) in the (m_1, m_2) plane, illustrating constraints that do and do not define a feasible region.]

now consider the problem of minimizing the error E subject to inequality constraints Hm ≥ h

if the global minimum is inside the feasible region then the inequality constraints have no effect on the solution

But if the global minimum is outside the feasible region, then the solution is on the surface of the feasible volume, at the point on the surface where E is smallest.

[Figure: error surface E(m_1, m_2) with the constraint Hm ≥ h dividing the plane into feasible and infeasible halves; the estimate m^est lies on the constraint boundary, not at the unconstrained minimum E_min.]

Furthermore, the feasible-pointing normal to the surface must be parallel to ∇E; otherwise you could slide the point along the surface to reduce the error E.

[Figure: the same construction, showing ∇E parallel to the constraint normal at m^est.]

Kuhn-Tucker theorem

At the solution it is possible to find a vector y with y_i ≥ 0 such that

∇E = H^T y

The rows of H are the feasible-pointing normals to the constraint surfaces, and the theorem says that the gradient of the error is a non-negative combination of these feasible normals; y specifies the combination. For the linear case with Gm = d, the gradient is ∇E = 2 G^T (Gm - d).

Some coefficients y_i are positive: the solution is on the corresponding constraint surface. The other coefficients y_i are zero: the solution is on the feasible side of the corresponding constraint surface.
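The theorem can be checked numerically for the simplest constraints H = I, h = 0 (treated in Part 4 below), where the feasible normals are the coordinate directions and y = ∇E. A minimal sketch, assuming G and dobs are defined:

mest = lsqnonneg(G, dobs);           % solution with all m_i >= 0
gradE = 2*G'*(G*mest - dobs);        % gradient of E = ||d - G m||^2
% Kuhn-Tucker: every gradE(i) should be >= 0, and gradE(i) should be
% zero wherever the constraint is inactive (mest(i) > 0)
disp([mest, gradE]);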

Part 4 Solution Methods

Simplest case: minimize E subject to m_i ≥ 0 (that is, H = I and h = 0), via an iterative algorithm with two nested loops.

Step 1: Start with an initial guess for m. The particular initial guess m = 0 is feasible, and it has all of its elements in the set m_E, their constraints satisfied in the equality sense.

Step 2: Any model parameter m_i in m_E that has a negative gradient [∇E]_i associated with it can be changed so as both to decrease the error and to remain feasible. If there is no such model parameter in m_E, the Kuhn-Tucker theorem indicates that this m is the solution to the problem.

Step 3: If some model parameter m_i in m_E has a corresponding negative gradient, then the solution can be changed to decrease the prediction error. To change the solution, we select the model parameter corresponding to the most negative gradient and move it to the set m_S. All the model parameters in m_S are now recomputed by solving the system G_S m'_S = d_S in the least squares sense. The subscript S on the matrix indicates that only the columns multiplying the model parameters in m_S have been included in the calculation. All the elements of m_E are still zero. If the new model parameters are all feasible, then we set m = m' and return to Step 2.

Step 4: If some of the elements of m'_S are infeasible, however, we cannot use this vector as a new guess for the solution. So we compute the change in the solution, δm = m'_S - m_S, and add as much of this vector as possible to the solution m_S without causing the solution to become infeasible. That is, we replace m_S with the new guess m_S + α δm, where α is the largest choice that can be made without some element of m_S becoming infeasible. At least one of the elements of m_S now has its constraint satisfied in the equality sense and must be moved back to m_E. The process then returns to Step 3.
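Written out, Steps 1-4 form the following loop. This is an illustrative sketch only; MatLab's built-in lsqnonneg, shown next, implements the same active-set idea:

function m = nnlsdemo(G, d)
% illustrative active-set loop following Steps 1-4 above
M = size(G,2);
m = zeros(M,1);                      % Step 1: m = 0 is feasible (all in mE)
inS = false(M,1);                    % true for parameters currently in mS
for iter = 1:10*M
    g = 2*G'*(G*m - d);              % gradient of E = ||d - G m||^2
    if all(g(~inS) >= 0)
        break;                       % Step 2: Kuhn-Tucker satisfied
    end
    iE = find(~inS);
    [~, k] = min(g(iE));             % Step 3: most negative gradient
    inS(iE(k)) = true;
    while true
        mp = zeros(M,1);
        mp(inS) = G(:,inS)\d;        % least squares over columns in mS
        if all(mp(inS) > 0)
            m = mp;                  % feasible: accept, back to Step 2
            break;
        end
        bad = inS & (mp <= 0);       % Step 4: back off toward feasibility
        a = min(m(bad)./(m(bad) - mp(bad)));
        m = m + a*(mp - m);
        inS = inS & (m > 1e-12);     % zeroed parameters return to mE
    end
end

For a given G and dobs, m = nnlsdemo(G, dobs) should agree with lsqnonneg(G, dobs).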

In MatLab mest = lsqnonneg(G,dobs);

Example: gravity. The observations (the gravitational force) depend upon the model parameters (density) via the inverse-square law (the theory).

[Figure: panels (A)-(F) of the gravity example: observation points (x_i, y_i), source points (x_j, y_j), separation R_ij, angle θ_ij, and the observed gravity force d_i as a function of distance y_i.]

More complicated case: minimize ||m||^2 subject to Hm ≥ h.

this problem is solved by transformation to the previous problem

Solve G'm' ≈ d' for m' ≥ 0 by non-negative least squares, with G' = [H, h]^T and d' = [0, ..., 0, 1]^T; then compute m_i = -e'_i / e'_{M+1}, where e' = d' - G'm'.

In MatLab
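The transcript omits this slide's code; the following is a minimal sketch consistent with the transformation just described and with the code for the general case at the end of the lecture (H and a column vector h are assumed given):

Gp = [H, h]';                        % transformation to non-negative least squares
dp = [zeros(1,size(H,2)), 1]';
mpp = lsqnonneg(Gp,dp);
ep = dp - Gp*mpp;                    % e' = d' - G'm'
mest = -ep(1:end-1)/ep(end);         % recover m from the residual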

[Figure: panels (A)-(D) in the (m_1, m_2) plane: the half-planes (A) m_1 - m_2 ≥ 0, (B) ½m_1 - m_2 ≥ 1, and (C) m_1 ≥ 0.2, and (D) their intersection, the feasible region containing the estimate m^est.]

Yet more complicated case: minimize ||d - Gm||^2 subject to Hm ≥ h.

this problem is solved by transformation to the previous problem

Minimize ||m'|| subject to H'm' ≥ h', where m' = U_p^T d - Λ_p V_p^T m, H' = -H V_p Λ_p^{-1}, and h' = h - H V_p Λ_p^{-1} U_p^T d (so that ||d - Gm||^2 equals ||m'||^2 plus a constant).

[Figure: panels (A)-(C): the feasible and infeasible regions in the (m_1, m_2) plane, and the observed data d as a function of z.]

In MatLab:

[Up, Lp, Vp] = svd(G,0);             % economy-size SVD of G
lambda = diag(Lp);
rlambda = 1./lambda;
Lpi = diag(rlambda);                 % inverse of Lambda_p
% transformation 1: to minimize ||m'|| subject to H'm' >= h'
Hp = -H*Vp*Lpi;
hp = h + Hp*Up'*dobs;
% transformation 2: to non-negative least squares
Gp = [Hp, hp]';
dp = [zeros(1,length(Hp(1,:))), 1]';
mpp = lsqnonneg(Gp,dp);
ep = dp - Gp*mpp;
mp = -ep(1:end-1)/ep(end);
% take mp back to m
mest = Vp*Lpi*(Up'*dobs-mp);
dpre = G*mest;