The numerical computation of the multiplicity of a component of an algebraic set Dan Bates (IMA) Joint work with: Chris Peterson (Colorado State) Andrew Sommese (Notre Dame) IMA Algorithms in Algebraic Geometry workshop September 20, 2006

Contents 1. Motivation 2. Background 3. The method 4. Complete examples 5. A few implementation details 6. Other examples 7. A similar method 1/27

Motivation Basic problem: Compute the regularity and multiplicity of a 0-scheme supported at a single point. 2/27

Motivation Basic problem: Compute the regularity and multiplicity of a 0-scheme supported at a single point. Why we got involved: To develop algorithms for computing numerical free resolutions, we needed to know which degrees could appear among the syzygies. 2/27

Motivation Basic problem: Compute the regularity and multiplicity of a 0-scheme supported at a single point. Why we got involved: To develop algorithms for computing numerical free resolutions, we needed to know which degrees could appear among the syzygies. In other words, we needed a way to compute regularity. Multiplicity came for free! 2/27

Motivation Why would anyone else care about regularity or multiplicity? 3/27

Motivation Why would anyone else care about regularity or multiplicity? Here are 3 answers: 1. Like I said, regularity bounds the degrees that syzygies could have, so regularity is a stopping criterion. 3/27

Motivation Why would anyone else care about regularity or multiplicity? Here are 3 answers: 1. Like I said, regularity bounds the degrees that syzygies could have, so regularity is a stopping criterion. 2. Multiplicity is intrinsic to the polynomial system. In some settings, it is the number of paths leading to the point during homotopy continuation. 3/27

Motivation 3. Numerical primary decomposition. The witness sets of numerical algebraic geometry provide a kind of numerical primary decomposition of the radical. These witness points are found by slicing the algebraic set with hyperplane sections. 4/27

Motivation 3. Numerical primary decomposition. The witness sets of numerical algebraic geometry provide a kind of numerical primary decomposition of the radical. These witness points are found by slicing the algebraic set with hyperplane sections. One step towards the numerical primary decomposition of the original ideal is to retain the scheme structure when slicing. 4/27

Motivation Q: Why not just use symbolic methods? 5/27

Motivation Q: Why not just use symbolic methods? A: Here are two reasons: 1. Coefficient blowup (and other computational difficulties). 5/27

Motivation Q: Why not just use symbolic methods? A: Here are two reasons: 1. Coefficient blowup (and other computational difficulties). 2. Inexact input (since the polynomial system may contain random complex numbers and the point is almost surely inexact). 5/27

Motivation The point of this talk: - Can compute the multiplicity and the regularity of a 0-scheme supported at a single point numerically 6/27

Motivation The point of this talk: - Can compute the multiplicity and the regularity of a 0-scheme supported at a single point numerically - No coefficient blowup 6/27

Motivation The point of this talk: - Can compute the multiplicity and the regularity of a 0-scheme supported at a single point numerically - No coefficient blowup - Works under small perturbations 6/27

Motivation There are actually two related methods: ● B., Peterson, and Sommese (see [BPS]): Inspired by known modern symbolic methods; uses a regularity criterion of Bayer and Stillman (see [BS]) to know when to stop ● Dayton and Zeng (see [DZ]): Inspired by the duality approach of Macaulay; relies on the use of structured matrices The two methods each give the multiplicity, but the other output differs…. 7/27

Background What is multiplicity? Multiplicity is something you have seen before, e.g., the double root of x^2. It is the number of times a component should be counted. 8/27

Background What is multiplicity? Multiplicity is something you have seen before, e.g., the double root of x^2. It is the number of times a component should be counted. It can be extracted from the Hilbert polynomial. 8/27
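
For instance (a worked case matching the first example later in the talk): for I = (x^2, y) in C[x, y, z], the degree-k piece of R/I has basis {z^k, x z^(k-1)} for k ≥ 1, so the Hilbert polynomial is the constant 2, and the multiplicity is 2.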

Background What is regularity? This is more subtle. 9/27

Background What is regularity? This is more subtle. The regularity tells us a point at which the Hilbert function stabilizes. 9/27

Background Basic setup: ● I = {f_1, f_2, …, f_r} is a homogeneous ideal. 10/27

Background Basic setup: ● I = {f_1, f_2, …, f_r} is a homogeneous ideal. ● I is supported at a single point p in P^n. 10/27

Background Basic setup: ● I = {f_1, f_2, …, f_r} is a homogeneous ideal. ● I is supported at a single point p in P^n. Note: I could be produced by slicing a positive-dimensional irreducible component with generic hyperplanes: multiplicity is preserved, regularity is not. 10/27

Background Basic setup: ● I = {f_1, f_2, …, f_r} is a homogeneous ideal. ● I is supported at a single point p in P^n. Note: I could be produced by slicing a positive-dimensional irreducible component with generic hyperplanes: multiplicity is preserved, regularity is not. ● R = C[x_0, x_1, …, x_n]. 10/27

Background ● (I)_k is the degree-k homogeneous part of I. 11/27

Background ● (I)_k is the degree-k homogeneous part of I. ● I_p is the prime ideal at the point p. 11/27

Background ● (I)_k is the degree-k homogeneous part of I. ● I_p is the prime ideal at the point p. ● (I:F) = {G in R | GF is in I} is the ideal quotient of I by F (where F is in R). 11/27

Background ● (I)_k is the degree-k homogeneous part of I. ● I_p is the prime ideal at the point p. ● (I:F) = {G in R | GF is in I} is the ideal quotient of I by F (where F is in R). ● The saturation of I is the intersection of all primary ideals in a reduced primary decomposition of I that are not m-primary (i.e., those that have geometric content). 11/27
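
A small illustration of these operations (not from the talk): in C[x, y] with m = (x, y), the ideal I = (x^2, xy) decomposes as (x) ∩ (x^2, y), so the ideal quotient (I : x) is (x, y), and the saturation of I is (x), since (x^2, y) is the m-primary component.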

Background Key theorem (Bayer and Stillman, 1987): 12/27

Background Key theorem (Bayer and Stillman, 1987): Suppose I is generated in degree at most m. 12/27

Background Key theorem (Bayer and Stillman, 1987): Suppose I is generated in degree at most m. Then the following are equivalent: 1. reg(I) = m. 12/27

Background Key theorem (Bayer and Stillman, 1987): Suppose I is generated in degree at most m. Then the following are equivalent: 1. reg(I) = m. 2. (something) 12/27

Background Key theorem (Bayer and Stillman, 1987): Suppose I is generated in degree at most m. Then the following are equivalent: 1. reg(I) = m. 2. (something) 3. (something else) 12/27

Background Corollary (to Bayer and Stillman, 1987): Suppose I (homogeneous) is generated in degree at most k, vanishing only at the point p in P^n. Let L be a linear form not contained in I_p. Then reg(I) ≤ k if and only if (I:L)_k = (I)_k and (I, L)_k = (R)_k. 13/27

The Algorithm (roughly) A 0-scheme consists of a point and some multiplicity structure around the point. Our idea is to look at successively larger infinitesimal neighborhoods around the point until all multiplicity information is captured. This is similar to computing the multiplicity of a monomial ideal using the “staircase” method. 14/27
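
For instance, for the monomial ideal (x^2, y^3) (the second worked example below), the monomials under the staircase are 1, x, y, xy, y^2, xy^2, so the multiplicity is 6.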

Background Two other facts: 1. Suppose I_p is an associated, non-embedded prime of an ideal I (p a point in P^n). Let J_k = (I, I_p^k). Then the multiplicity of I at I_p is equal to the multiplicity of J_k at I_p for k >> 0. 2. If the multiplicity of J_k at I_p is equal to the multiplicity of J_{k+1} at I_p, then the multiplicity of I at I_p is also equal to the same. 15/27

Background As mentioned before, if regularity is known, then computing the multiplicity is trivial: μ = dim(R/I)_k for k >> 0. Which k suffices? k = reg(I). 16/27
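
A minimal numerical sketch of that computation (a toy numpy illustration, not the Bertini implementation; the monomial bookkeeping and the rank tolerance are assumptions made here): span (I)_k by the monomial multiples of the generators, stack their coefficient vectors into a matrix, and read off μ = dim(R)_k - rank, with the rank computed via the SVD. For I = (x^2, y) in C[x, y, z] and k = 3 this returns μ = 2, the multiplicity found in the first example below.

import numpy as np
from itertools import combinations_with_replacement

def monomials(num_vars, deg):
    # all exponent vectors of total degree deg in num_vars variables
    mons = []
    for combo in combinations_with_replacement(range(num_vars), deg):
        e = [0] * num_vars
        for i in combo:
            e[i] += 1
        mons.append(tuple(e))
    return mons

def degree_part(generators, num_vars, k):
    # rows = coefficient vectors (w.r.t. the degree-k monomials) of every product
    # (monomial) * (generator) of total degree k; each generator is a dict
    # {exponent tuple: coefficient} and is assumed homogeneous
    basis = monomials(num_vars, k)
    index = {m: i for i, m in enumerate(basis)}
    rows = []
    for g in generators:
        d = sum(next(iter(g)))                      # degree of the generator
        for m in monomials(num_vars, k - d):
            row = np.zeros(len(basis))
            for expo, c in g.items():
                row[index[tuple(a + b for a, b in zip(expo, m))]] += c
            rows.append(row)
    return np.array(rows), len(basis)

# I = (x^2, y) in C[x, y, z]; exponent tuples are (a, b, c) for x^a y^b z^c
gens = [{(2, 0, 0): 1.0}, {(0, 1, 0): 1.0}]
M, dim_Rk = degree_part(gens, 3, 3)
rank = np.linalg.matrix_rank(M, tol=1e-10)          # numerical rank via the SVD
print(dim_Rk, rank, dim_Rk - rank)                  # 10 8 2, so mu = dim(R/I)_3 = 2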

The Algorithm (detailed) INPUT: I = {F_1, …, F_r} homogeneous in R (n+1 variables) and the point p at which I is supported (with the final coordinate of p being nonzero, WLOG). OUTPUT: Multiplicity and regularity. Let I_p = {p_i z_j - p_j z_i | 0 ≤ i, j ≤ n}. Let m = (z_0, …, z_n). Let k = the maximal degree in which I is generated, μ(k-1) = -2, and μ(k) = -1. 17/27

While μ(k) ≠ μ(k-1) do: - Form I_p^k, J_k = (I, I_p^k), z_n m^k. Let A = 0, B = 1. - While A ≠ B do: - Form (J_k)_{k+1}. - Compute P = (J_k)_{k+1} ∩ z_n m^k. - Compute the preimage P' of P. - Compute A = rank((J_k)_k), B = rank(P'). - If A = B, μ(k) = rank((m)_k) - A; else let J_k = P'. - If μ(k) = μ(k-1), then μ = μ(k) and reg(I) ≤ k; else increment k. 18/27

Complete examples Example: I = (x^2, y) in the variables (x, y, z) at the point (0, 0, 1). I_p = (x, y). m = (x, y, z). k = 2, μ(2) = -2, and μ(3) = -1. 19/27

Complete examples Example: I = (x^2, y) in the variables (x, y, z) at the point (0, 0, 1). I_p = (x, y). m = (x, y, z). k = 2, μ(2) = -2, and μ(3) = -1. After some work at the board: μ = 2 and reg(I) = 3. 19/27

Complete-ish examples Example: I = (x^2, y^3) in the variables (x, y, z) at the point (0, 0, 1). 20/27

Complete-ish examples Example: I = (x^2, y^3) in the variables (x, y, z) at the point (0, 0, 1). k = 3: A = 5, B = 5, so μ(3) = 10 - 5 = 5. 20/27

Complete-ish examples Example: I = (x^2, y^3) in the variables (x, y, z) at the point (0, 0, 1). k = 3: A = 5, B = 5, so μ(3) = 10 - 5 = 5. k = 4: A = 9, B = 9, so μ(4) = 15 - 9 = 6. 20/27

Complete-ish examples Example: I = (x^2, y^3) in the variables (x, y, z) at the point (0, 0, 1). k = 3: A = 5, B = 5, so μ(3) = 10 - 5 = 5. k = 4: A = 9, B = 9, so μ(4) = 15 - 9 = 6. k = 5: A = 15, B = 15, so μ(5) = 21 - 15 = 6. 20/27

Complete-ish examples Example: I = (x^2, y^3) in the variables (x, y, z) at the point (0, 0, 1). k = 3: A = 5, B = 5, so μ(3) = 10 - 5 = 5. k = 4: A = 9, B = 9, so μ(4) = 15 - 9 = 6. k = 5: A = 15, B = 15, so μ(5) = 21 - 15 = 6. So μ = 6 and reg(I) = 4. 20/27

Implementation Details After fixing a term order, homogeneous polynomials may easily be represented as vectors of coefficients. 21/27

Implementation Details After fixing a term order, homogeneous polynomials may easily be represented as vectors of coefficients. Most steps of the algorithm are very straightforward symbolic maneuvers. 21/27

Implementation Details After fixing a term order, homogeneous polynomials may easily be represented as vectors of coefficients. Most steps of the algorithm are very straightforward symbolic maneuvers. However, the matrices are inexact, so the exact rank cannot be computed. Therefore, we use the SVD to detect the rank of a matrix. 21/27

Implementation Details The singular value decomposition (SVD) of a matrix A has the form A = UΣV* with: ● U, V unitary; ● Σ diagonal with the “singular values” along the diagonal; ● the number of “nonzero” singular values is the numerical rank; and ● it is easy to find a basis for the nullspace of A. 22/27
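
For example, here is a minimal numpy version of that rank/nullspace computation (the tolerance used to decide which singular values count as “nonzero” is an assumption; the actual implementation would choose it more carefully):

import numpy as np

def numerical_rank_and_nullspace(A, tol=1e-10):
    # A = U * diag(s) * Vh; singular values above tol give the numerical rank,
    # and the remaining rows of Vh span the (right) nullspace of A
    U, s, Vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return rank, Vh[rank:].conj()

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # a rank-1 matrix
r, N = numerical_rank_and_nullspace(A)
print(r)                               # 1
print(np.allclose(A @ N.T, 0))         # True: rows of N span the nullspace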

Implementation Details We also use the SVD in one other step of the algorithm, namely: “Compute P = (J_k)_{k+1} ∩ z_n m^k” 23/27

Implementation Details We also use the SVD in one other step of the algorithm, namely: “Compute P = (J_k)_{k+1} ∩ z_n m^k” V = (J_k)_{k+1} and W = z_n m^k are just vector spaces, so we use the SVD to compute bases for the nullspaces of V and W, concatenate them, and finally find the nullspace of that matrix. 23/27
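
A small numpy sketch of that step (reading “the nullspace of V” as the nullspace of a matrix whose rows span V, i.e., its orthogonal complement; the matrices and the tolerance below are made up for illustration):

import numpy as np

def nullspace(A, tol=1e-10):
    # orthonormal basis, as rows, for { x : A x = 0 }, read off from the SVD
    U, s, Vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vh[rank:].conj()

def intersect_row_spaces(V, W, tol=1e-10):
    # span(V) ∩ span(W) is the orthogonal complement of the sum of the two
    # orthogonal complements: stack the two nullspace bases, take the nullspace
    return nullspace(np.vstack([nullspace(V, tol), nullspace(W, tol)]), tol)

V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])        # span{e1, e2}
W = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])        # span{e2, e3}
print(intersect_row_spaces(V, W))      # one row, a unit multiple of e2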

Implementation Details We also use the SVD in one other step of the algorithm, namely: “Compute P = (J_k)_{k+1} ∩ z_n m^k” V = (J_k)_{k+1} and W = z_n m^k are just vector spaces, so we use the SVD to compute bases for the nullspaces of V and W, concatenate them, and finally find the nullspace of that matrix. NOTE: This was implemented in Bertini. It will not be released with the first release of Bertini. 23/27

Other examples I = {x^4 + 2x^2y^2 + y^4 + 3x^2yz - y^3z, x^6 + 3x^4y^2 + 3x^2y^4 + y^6 - 4x^2y^2z^2} p = (0, 0, 1) Multiplicity is known to be 14. 24/27

Other examples I = {x^4 + 2x^2y^2 + y^4 + 3x^2yz - y^3z, x^6 + 3x^4y^2 + 3x^2y^4 + y^6 - 4x^2y^2z^2} p = (0, 0, 1) Multiplicity is known to be 14. Computed multiplicity = 14 and regularity = 9 for: ● p = (0, 0, 1) 24/27

Other examples I = {x^4 + 2x^2y^2 + y^4 + 3x^2yz - y^3z, x^6 + 3x^4y^2 + 3x^2y^4 + y^6 - 4x^2y^2z^2} p = (0, 0, 1) Multiplicity is known to be 14. Computed multiplicity = 14 and regularity = 9 for: ● p = (0, 0, 1) ● p = (…, …, 1) 24/27

Other examples I = {x^4 + 2x^2y^2 + y^4 + 3x^2yz - y^3z, x^6 + 3x^4y^2 + 3x^2y^4 + y^6 - 4x^2y^2z^2} p = (0, 0, 1) Multiplicity is known to be 14. Computed multiplicity = 14 and regularity = 9 for: ● p = (0, 0, 1) ● p = (…, …, 1) ● p = (0.0001, …, 1) 24/27

Other examples I = {3x^2 + … + (383678/77763)w^2, x^3 + … + (…/699867)w^3, z^3 + … + (…/766521)w^3, x^2 + … + (64/9)w^2} p = (1, 8/3, -2/7, 34/23) Multiplicity = 2. 25/27

Other examples I = {3x^2 + … + (383678/77763)w^2, x^3 + … + (…/699867)w^3, z^3 + … + (…/766521)w^3, x^2 + … + (64/9)w^2} p = (1, 8/3, -2/7, 34/23) Multiplicity = 2. Key remark: We used truncated floating point numbers in place of all rational numbers, and it still worked. 25/27

A similar method Another method for computing the multiplicity of a 0-scheme supported at a single point may be found in [DZ]. Each method has its own benefits aside from producing the multiplicity. The method of [DZ] provides a more elaborate “multiplicity structure” as well as a better bound on the number of deflations needed, while the method of [BPS] produces the regularity. 26/27

A similar method A little about deflation: Given a singular root x of a polynomial system F, one can use deflation to produce a system containing F that has (x, u) as a nonsingular solution (where the u variables are added during deflation). [OWM] developed the method, [LVZ] proved that it terminates after no more than μ steps, and [DZ] sharpened the bound. Also see Anton Leykin’s talk at this workshop. 27/27

References
[BPS] D. Bates, C. Peterson, and A. Sommese. A numeric-symbolic method for computing the multiplicity of a component of a zero-scheme. J. Complexity, 22, 2006.
[BS] D. Bayer and M. Stillman. A criterion for detecting m-regularity. Invent. Math., 87(1):1-11, 1987.
[DZ] B. Dayton and Z. Zeng. Computing the multiplicity structure in solving polynomial systems. In ISSAC 2005 Proc.
[LVZ] A. Leykin, J. Verschelde, and A. Zhao. Evaluation of Jacobian matrices for Newton’s method with deflation for isolated singularities of polynomial systems. In SNC 2005 Proc.
[OWM] T. Ojika, S. Watanabe, and T. Mitsui. Deflation algorithm for the multiple roots of a system of nonlinear equations. J. Math. Anal. Appl., 96(2), 1983.