Computing Ill-Conditioned Eigenvalues and Polynomial Roots


Computing Ill-Conditioned Eigenvalues and Polynomial Roots
Zhonggang Zeng, Northeastern Illinois University
International Conference on Matrix Theory and its Applications, Shanghai

Can you solve (x - 1.0)^100 = 0?

Can you solve x^100 - 100x^99 + 4950x^98 - 161700x^97 + 3921225x^96 - ... - 100x + 1 = 0?
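The failure in the expanded form is easy to reproduce with NumPy (an illustrative sketch, not code from the talk; degree 20 stands in for 100 to keep it quick, and `np.roots` stands in for a generic root finder):

```python
import numpy as np

# Expanded coefficients of (x - 1)^20: an exact 20-fold root at 1.
coeffs = np.poly(np.ones(20))

# A standard solver treats the expanded form as 20 simple roots...
roots = np.roots(coeffs)

# ...and the computed roots scatter around 1 on a circle of radius
# roughly eps^(1/20), losing almost all accuracy.
print(np.max(np.abs(roots - 1.0)))
```

The scatter radius eps^(1/m) is exactly the "attainable precision" folklore that the talk goes on to challenge.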

Eigenvalues of A = X J X^(-1), where J is the bidiagonal matrix

    [ 1          ]
    [ 1  1       ]
    [    .  .    ]
    [      1  1  ]

with 1's on the diagonal and subdiagonal -- a single Jordan block for the multiple eigenvalue 1.

The Wilkinson polynomial:

    p(x) = (x-1)(x-2)...(x-20) = x^20 - 210x^19 + 20615x^18 + ...

Wilkinson wrote in 1984: "Speaking for myself I regard it as the most traumatic experience in my career as a numerical analyst."
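Wilkinson's experiment is easy to repeat (an illustrative sketch, not code from the talk). Perturbing the x^19 coefficient by 2^-23, as in Wilkinson's classic account, sends several roots off the real axis entirely:

```python
import numpy as np

# Wilkinson's polynomial p(x) = (x-1)(x-2)...(x-20), expanded.
w = np.poly(np.arange(1, 21))

# Perturb the x^19 coefficient (-210) by 2^-23, a relative change
# of about 6e-10 -- Wilkinson's original experiment.
w_pert = w.copy()
w_pert[1] -= 2.0**-23

roots = np.roots(w_pert)

# Several of the larger roots acquire imaginary parts of order 1.
print(np.max(np.abs(roots.imag)))
```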

Myths about multiple eigenvalues/roots:

- Multiple eigenvalues/roots are ill-conditioned, or even intractable.
- Extended machine precision is necessary to calculate multiple roots.
- There is an "attainable precision" for multiple eigenvalues/roots:

      attainable precision = machine precision / multiplicity

  Example: to get 5 digits of a 100-fold eigenvalue, one would need 500 digits of machine precision (5 digits = 500 digits / multiplicity 100).

The backward error: 5 x 10^(-10) -- the method is good!
The forward error: 5 -- ouch! Who's responsible?

Conclusion: the problem is "bad".

"If the answer is highly sensitive to perturbations, you have probably asked the wrong question."
-- L. N. Trefethen, Maxims about numerical mathematics, computers, science and life, SIAM News

Who is asking the wrong question?   A: the "customer"           B: the numerical analyst
What is the wrong question?         A: the polynomial or matrix B: the computing objective

Kahan's pejorative manifolds: all degree-n polynomials having a given multiplicity structure form a pejorative manifold.

    x^n + a_1 x^(n-1) + ... + a_(n-1) x + a_n  <=>  (a_1, ..., a_(n-1), a_n)

Example: (x - t)^2 = x^2 + (-2t)x + t^2. Pejorative manifold: a_1 = -2t, a_2 = t^2.

Pejorative manifolds of degree-3 polynomials:

    (x - s)(x - t)^2 = x^3 + (-s - 2t)x^2 + (2st + t^2)x + (-st^2)
        a_1 = -s - 2t,  a_2 = 2st + t^2,  a_3 = -st^2
        -- the pejorative manifold of multiplicity structure [1, 2]

    (x - s)^3 = x^3 + (-3s)x^2 + (3s^2)x + (-s^3)
        a_1 = -3s,  a_2 = 3s^2,  a_3 = -s^3
        -- the pejorative manifold of multiplicity structure [3]
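These parametrizations come from multiplying out the factors, which a few lines of NumPy can do for any multiplicity structure (a sketch; `pejorative_point` is an illustrative helper name, not from the talk):

```python
import numpy as np

def pejorative_point(z, mult):
    """Coefficient vector (a_1, ..., a_n) of prod_j (x - z_j)^(m_j) --
    a point u = G(z) on the manifold of multiplicity structure `mult`."""
    p = np.array([1.0])
    for zj, mj in zip(z, mult):
        for _ in range(mj):
            p = np.convolve(p, [1.0, -zj])  # multiply by (x - z_j)
    return p[1:]  # drop the leading 1

# Structure [1, 2]: (x - s)(x - t)^2 with s = 2, t = 3 should give
# a_1 = -s - 2t = -8, a_2 = 2st + t^2 = 21, a_3 = -s*t^2 = -18.
print(pejorative_point([2.0, 3.0], [1, 2]))
```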

[Figure: the pejorative manifolds of degree-3 polynomials. The "wings" are the structure-[1,2] manifold (a_1 = -s - 2t, a_2 = 2st + t^2, a_3 = -st^2); the "edge" is the structure-[3] manifold (a_1 = -3s, a_2 = 3s^2, a_3 = -s^3).]

General form of a pejorative manifold: u = G(z).

Ill-condition is caused by solving polynomial equations on a wrong manifold (W. Kahan, Conserving confluence curbs ill-condition, 1972):

1. Ill-condition occurs when a polynomial/matrix is near a pejorative manifold.
2. A small "drift" of the problem on that pejorative manifold does not cause a large forward error in the multiple roots, except:
3. If a multiple root/eigenvalue is sensitive to small perturbations on the pejorative manifold, then the polynomial/matrix is near a pejorative submanifold of higher multiplicity.


Given a polynomial p(x) = x^n + a_1 x^(n-1) + ... + a_(n-1) x + a_n:

The wrong question (because you are asking for simple roots):
    find (z_1, ..., z_n) such that p(x) = (x - z_1)(x - z_2) ... (x - z_n)

The right question (do it on the pejorative manifold):
    find distinct z_1, ..., z_m such that
    p(x) = (x - z_1)^(s_1) (x - z_2)^(s_2) ... (x - z_m)^(s_m),   s_1 + ... + s_m = n,   m < n

For an ill-conditioned polynomial

    p(x) = x^n + a_1 x^(n-1) + ... + a_(n-1) x + a_n  ~  a = (a_1, ..., a_(n-1), a_n),

the objective: find u* = G(z*) that is nearest to p(x) ~ a.

Let (x - z_1)^(s_1) (x - z_2)^(s_2) ... (x - z_m)^(s_m)
    = x^n + g_1(z_1, ..., z_m) x^(n-1) + ... + g_(n-1)(z_1, ..., z_m) x + g_n(z_1, ..., z_m).

Then p(x) = (x - z_1)^(s_1) (x - z_2)^(s_2) ... (x - z_m)^(s_m)  <==>

    g_1(z_1, ..., z_m) = a_1
    g_2(z_1, ..., z_m) = a_2
    ...
    g_n(z_1, ..., z_m) = a_n

-- n equations in m unknowns (m < n), i.e. an overdetermined polynomial system G(z) = a.

[Figure: the polynomial a lies off the pejorative manifold u = G(z). From the initial iterate u_0 = G(z_0), project a onto the tangent plane P_0: u = G(z_0) + J(z_0)(z - z_0) to obtain the new iterate u_1 = G(z_1), approaching the pejorative root u* = G(z*).]

Solve G(z) = a for the nonlinear least squares solution z = z*. Each step solves G(z_0) + J(z_0)(z - z_0) = a for the linear least squares solution z = z_1:

    G(z_0) + J(z_0)(z - z_0) = a
    J(z_0)(z - z_0) = -[G(z_0) - a]
    z_1 = z_0 - J(z_0)^+ [G(z_0) - a]

    z_(i+1) = z_i - J(z_i)^+ [G(z_i) - a],   i = 0, 1, 2, ...

Theorem: If z = (z_1, ..., z_m) with z_1, ..., z_m distinct, then the Jacobian J(z) of G(z) is of full rank.

Theorem: Let u* = G(z*) be nearest to p(x) ~ a. If
1. z* = (z*_1, ..., z*_m) has distinct components,
2. z_0 is sufficiently close to z*, and
3. a is sufficiently close to u*,
then the iteration converges at a linear rate. If furthermore a = u*, the convergence is quadratic.
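The iteration z_(i+1) = z_i - J(z_i)^+ [G(z_i) - a] can be sketched in a few lines of NumPy. This is an illustrative sketch, not the talk's implementation: the function names are invented, and a forward-difference Jacobian stands in for the analytic J used in practice.

```python
import numpy as np

def G(z, mult):
    """Coefficients (a_1, ..., a_n) of prod_j (x - z_j)^(m_j), leading 1 dropped."""
    p = np.array([1.0])
    for zj, mj in zip(z, mult):
        for _ in range(mj):
            p = np.convolve(p, [1.0, -zj])
    return p[1:]

def pejorative_roots(a, z0, mult, steps=100, tol=1e-12):
    """Gauss-Newton on the overdetermined system G(z) = a."""
    z = np.array(z0, dtype=float)
    h = 1e-7  # forward-difference step (an analytic Jacobian is used in practice)
    for _ in range(steps):
        r = G(z, mult) - a
        J = np.empty((len(a), len(z)))
        for k in range(len(z)):
            zk = z.copy()
            zk[k] += h
            J[:, k] = (G(zk, mult) - G(z, mult)) / h
        dz = np.linalg.lstsq(J, r, rcond=None)[0]  # least-squares step J^+ r
        z -= dz
        if np.linalg.norm(dz) < tol:
            break
    return z

# Recover the pejorative root of (x - 1)^2 (x - 2)^3 from a nearby guess.
a = G([1.0, 2.0], [2, 3])
print(pejorative_roots(a, [1.1, 1.9], [2, 3]))
```

Here the data a lies exactly on the manifold, so by the second theorem the convergence is fast even though z_0 is only a rough guess.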

The "pejorative" condition number. Let v = G(z) and u = G(y), so that ||u - v||_2 is the backward error and ||y - z||_2 the forward error. Then

    u - v = G(y) - G(z) = J(z)(y - z) + h.o.t.
    ||u - v||_2 = ||J(z)(y - z)||_2 >= s ||y - z||_2
    ||y - z||_2 <= (1/s) ||u - v||_2

where s is the smallest singular value of J(z). The quantity 1/s is the pejorative condition number.
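A sketch of computing 1/s numerically (illustrative helper names, not from the talk; a forward-difference Jacobian stands in for the analytic one). Note how the simple-root structure [1, 1] is badly conditioned at a near-double root while the structure [2] stays well conditioned:

```python
import numpy as np

def G(z, mult):
    """Coefficients of prod_j (x - z_j)^(m_j), leading 1 dropped."""
    p = np.array([1.0])
    for zj, mj in zip(z, mult):
        for _ in range(mj):
            p = np.convolve(p, [1.0, -zj])
    return p[1:]

def pejorative_condition(z, mult, h=1e-7):
    """1/s, where s is the smallest singular value of a forward-difference
    approximation to the Jacobian J(z) of G."""
    z = np.asarray(z, dtype=float)
    J = np.empty((sum(mult), len(z)))
    for k in range(len(z)):
        zk = z.copy()
        zk[k] += h
        J[:, k] = (G(zk, mult) - G(z, mult)) / h
    return 1.0 / np.linalg.svd(J, compute_uv=False)[-1]

# A double root treated as a double root: well conditioned.
print(pejorative_condition([1.0], [2]))
# A near-double root treated as two simple roots: badly conditioned.
print(pejorative_condition([1.0001, 0.9999], [1, 1]))
```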

Example: (x - 0.9)^18 (x - 1.0)^10 (x - 1.1)^16 = 0

    Step  z1                  z2                  z3
    --------------------------------------------------------------
     0    .92                 .95                 1.12
     1    .87                 1.05                1.10
     2    .92                 .95                 1.11
     3    .88                 1.01                1.10
     4    .90                 .97                 1.12
     5    .901                .992                1.101
     6    .89993              .9998               1.1002
     7    .9000003            .999998             1.1000007
     8    .899999999997       .999999999991       1.100000000009
     9    .900000000000006    .99999999999997     1.10000000000001

Forward error: 6 x 10^(-15). Backward error: 8 x 10^(-16). Pejorative condition: 58.

Even clustered multiple roots are pejoratively well conditioned.

Example: (x - .3 - .6i)^100 (x - .1 - .7i)^200 (x - .7 - .5i)^300 (x - .3 - .4i)^400 = 0

Scary enough? Round the coefficients to 6 digits.

    z1                     z2                      z3                      z4
    ------------------------------------------------------------------------------------------
    .289 + .601i           .100 + .702i            .702 + .498i            .301 + .399i
    .309 + .602i           .097 + .698i            .698 + .499i            .299 + .401i
    .293 + .596i           .101 + .7003i           .7002 + .5005i          .3007 + .4003i
    .300005 + .600006i     .099998 + .6999992i     .69999992 + .4999993i   .2999992 + .3999992i
    .3000002 + .60000005i  .09999995 + .69999998i  .69999997 + .49999998i  .29999997 + .400000002i

Roots are correct up to 7 digits! Pejorative condition: 0.58.

Example: the Wilkinson polynomial

    p(x) = (x-1)(x-2)...(x-20) = x^20 - 210x^19 + 20615x^18 + ...

There are 605 pejorative manifolds in total. The polynomial is near some of them, but which ones?

    Multiplicity             Backward           Condition         Estimated
    structure                error              number            error
    -----------------------------------------------------------------------
    [1,1,1,1,1,1,1,1,1...,1] .000000000000003   550195997640164   1.6
    [1,1,1,1,2,2,2,4,2,2,2]  .000000003         29269411          .09
    [1,1,1,2,3,4,5,3]        .0000001           33563             .003
    [1,1,2,3,4,6,3]          .000001            6546              .007
    [1,1,2,5,7,4]            .000005            812               .004
    [1,2,5,7,5]              .00004             198               .008
    [1,3,8,8]                .0002              25                .005
    [2,8,10]                 .003               6                 .02
    [5,15]                   .04                1                 .04
    [20]                     .9                 .2                .2

What are the roots of the Wilkinson polynomial? Choose your poison!

The "right" question for an ill-conditioned eigenproblem: given a matrix A, find a structured Schur form S and a matrix U such that

    AU - US = 0
    U*U - I = 0        -- overdetermined!

Here S is upper triangular with the multiple eigenvalues repeated on the diagonal (e.g. a 3-fold eigenvalue lambda and a 2-fold eigenvalue mu, with the coupling entries marked "+").

Minimize || AU - US ||_F^2 + || U*U - I ||_F^2  -- a nonlinear least squares problem.
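A toy sketch of this minimization for a 3x3 matrix with a double eigenvalue (the example matrix, the plain Gauss-Newton solver with a forward-difference Jacobian, and all names here are illustrative assumptions, not the talk's algorithm):

```python
import numpy as np

# A 3x3 matrix whose leading 2x2 block is a Jordan block with the
# double eigenvalue 2 (chosen for illustration).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

def residual(x):
    """Stack AU - US and U^T U - I for S = [[lam, 1], [0, lam]]."""
    lam, U = x[0], x[1:].reshape(3, 2)
    S = np.array([[lam, 1.0], [0.0, lam]])
    return np.concatenate([(A @ U - U @ S).ravel(),
                           (U.T @ U - np.eye(2)).ravel()])

# Gauss-Newton on the overdetermined system: 10 equations, 7 unknowns
# (lam plus the 6 entries of U).
x = np.concatenate([[2.1], np.eye(3, 2).ravel()])  # initial guess
for _ in range(50):
    r = residual(x)
    J = np.empty((len(r), len(x)))
    for k in range(len(x)):
        xk = x.copy()
        xk[k] += 1e-7
        J[:, k] = (residual(xk) - r) / 1e-7
    dx = np.linalg.lstsq(J, r, rcond=None)[0]
    x -= dx
    if np.linalg.norm(dx) < 1e-12:
        break

print(x[0])  # recovered double eigenvalue, close to 2
```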

Example: A + E, where ||E|| / ||A|| ~ 1.0 x 10^(-7).

    Step  lambda        mu
    --------------------------------
     0    4.0           1.1
     1    2.99          2.01
     2    3.0006        1.9998
     3    3.000001      1.9999997
     4    3.00000001    1.99999992

U^T (A + E) U recovers the structured Schur form, with the multiple eigenvalues 3 and 2 on the diagonal, up to O(10^(-7)). The pejorative condition number: 22.8.

Conclusion:
1. Ill-condition is caused by a wrong "identity".
2. Multiple eigenvalues/roots are pejoratively well conditioned, and thereby tractable.
3. Extended machine precision is NOT needed; a change in computing concept is.
4. To calculate ill-conditioned eigenvalues/roots, one has to figure out the pejorative structure (how?).