Efficient polynomial interpolation algorithms


Overview: Introduction to Vandermonde matrices and their uses; Univariate interpolation; Multivariate interpolation

Properties of Vandermonde Matrices: it is easy to ensure that they are non-singular, and systems of linear equations whose coefficient matrices are Vandermonde matrices are easy to solve exactly.

The Vandermonde Matrix
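A minimal reconstruction of the matrix this slide refers to, using the node notation k_1, ..., k_n that appears on the later slides (the entry layout is an assumption, since the slide's image is not reproduced in the transcript):

$$V(k_1,\dots,k_n) = \begin{pmatrix} 1 & k_1 & k_1^2 & \cdots & k_1^{n-1} \\ 1 & k_2 & k_2^2 & \cdots & k_2^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & k_n & k_n^2 & \cdots & k_n^{n-1} \end{pmatrix}$$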

Generalized Vandermonde where
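A sketch of the generalized form being referred to, under the standard convention (an assumption here) that the exponents are arbitrary non-negative integers rather than 0, 1, ..., n-1:

$$V = \bigl(k_i^{\,e_j}\bigr)_{1 \le i,j \le n}, \qquad 0 \le e_1 < e_2 < \cdots < e_n.$$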

Determinant of a Vandermonde
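The classical product formula, which is presumably what these slides derive (stated here for reference):

$$\det V(k_1,\dots,k_n) = \prod_{1 \le i < j \le n} (k_j - k_i).$$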

Determinant of a Vandermonde

Determinant of a Vandermonde: the Vandermonde matrix is non-singular if and only if the k_i are distinct.

Example: the previous result cannot be applied to generalized Vandermonde matrices; there are generalized Vandermonde matrices whose determinant is 0 even when the k_i are distinct (one such example is sketched below).
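The slide's own counterexample is not reproduced; one simple illustration (my own, not necessarily the slide's): taking exponents e = (0, 2) and the distinct nodes k = (1, -1) gives

$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \det = 0,$$

so distinctness of the k_i alone is not enough in the generalized case.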

Non-singularity of generalized Vandermonde matrices. Proposition 1: if the k_i are distinct positive real numbers, then the determinant is non-zero (the matrix is non-singular).

The inverse of a Vandermonde matrix

The inverse of a Vandermonde matrix
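The slide's formulas are not reproduced; a standard description, assuming the row convention V_{ij} = k_i^{j-1} used in the sketch above: if

$$P_i(Z) = \prod_{j \ne i} \frac{Z - k_j}{k_i - k_j} = \sum_{r=0}^{n-1} p_{i,r} Z^{r},$$

then the i-th column of V^{-1} is the coefficient vector (p_{i,0}, ..., p_{i,n-1}), because applying V to that column just evaluates P_i at the nodes, giving the i-th unit vector.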

Solving a Vandermonde system of equations

Solving a Vandermonde system of equations

Solving a Vandermonde system of equations

The algorithm to solve the system

The algorithm to solve the system: the computation of the x_i is arranged as follows: calculate each vector in turn and add it to the accumulating solution X (see the sketch below).
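A compact way to state the accumulation, under the same assumptions as in the sketches above (system V x = w, polynomials P_i as defined for the inverse):

$$X = \sum_{i=1}^{n} w_i \, p_i, \qquad p_i = (p_{i,0}, \dots, p_{i,n-1}) \text{ the coefficient vector of } P_i(Z),$$

so each iteration contributes one scaled coefficient vector to the accumulating X.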

Analysis of the algorithm: by calculating the vectors one after the other, we only need to compute one P_i(Z) at a time. Each P_i(Z) needs only O(n) time, and since we have n polynomials to compute, the complexity is O(n^2) and the space needed is O(n). Because the inverse of the transposed matrix is the transpose of the inverse of the matrix, the algorithm needs only a small adjustment to solve a transposed Vandermonde system of equations. In the appendix there is an example of this algorithm taken from Zippel.

Univariate Interpolation Lagrange Interpolation Newton Interpolation Abstract Interpolation

Lagrange Interpolation: given are a set of distinct evaluation points with their corresponding functional values. The goal is to find the interpolating polynomial.

Lagrange Interpolation This is a Vandermonde system where
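Spelled out under the notation of the earlier slides (nodes k_i, values w_i, unknown coefficients a_j; the slide's own symbols are not reproduced): requiring f(k_i) = w_i for f(x) = a_0 + a_1 x + ... + a_{n-1} x^{n-1} gives

$$\begin{pmatrix} 1 & k_1 & \cdots & k_1^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & k_n & \cdots & k_n^{n-1} \end{pmatrix} \begin{pmatrix} a_0 \\ \vdots \\ a_{n-1} \end{pmatrix} = \begin{pmatrix} w_1 \\ \vdots \\ w_n \end{pmatrix}.$$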

Lagrange Interpolation

Lagrange Interpolation
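For reference, the closed form this derivation leads to (the standard Lagrange formula):

$$f(x) = \sum_{i=1}^{n} w_i \prod_{j \ne i} \frac{x - k_j}{k_i - k_j}.$$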

Newton Interpolation: the starting point is the polynomial remainder theorem, f(a) = f(x) mod (x - a).

The Chinese remainder algorithm over Z
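The integer statement being recalled (standard form; the slide's own notation is not reproduced): given pairwise relatively prime moduli m_1, ..., m_r and residues u_1, ..., u_r, there is a unique integer u with

$$0 \le u < m_1 m_2 \cdots m_r \quad\text{and}\quad u \equiv u_i \pmod{m_i} \text{ for every } i,$$

and it can be computed incrementally, folding in one modulus at a time, which is exactly the pattern that Newton interpolation mirrors for polynomials.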

Chinese remainder with polynomials: when given f(x) mod (x - k_i) for several evaluation points k_i, we change it to the following situation: given f(x) modulo a product of such linear factors together with one more residue, compute f(x) modulo the enlarged product.

Newton Interpolation algorithm:
Let f(x) = 0 and q(x) = 1, then loop n times, doing the following for i = 1, ..., n:
    f(x) := f(x) + q(k_i)^(-1) * q(x) * (w_i - f(k_i))
    q(x) := (x - k_i) * q(x)

Newton's interpolation formula: Newton's interpolation formula states that there exist constants c_0, ..., c_{n-1} such that the interpolant can be written in the nested form below. In fact c_0 = w_1, and the vector of constants is the solution of a lower-triangular system.
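The nested form being referred to, written with the node/value notation used earlier (a standard statement, since the slide's formula is not reproduced):

$$f(x) = c_0 + c_1 (x - k_1) + c_2 (x - k_1)(x - k_2) + \cdots + c_{n-1} (x - k_1) \cdots (x - k_{n-1}).$$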

Newton's interpolation formula: then f(k_1) = c_0 = w_1; more generally, evaluating at k_j involves only c_0, ..., c_{j-1}, and solving these equations successively gives the c_i.

Multivariate Interpolation Dense Interpolation Probabilistic Sparse Interpolation Deterministic Sparse Interpolation without degree bounds

Multivariate dense Interpolation: we are given a black box for the polynomial P(x_1, ..., x_n) together with a degree bound d, so we can assume that P has the form below.
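One way to write that form explicitly, assuming the bound d applies to the degree in each variable (the slide's own formula is not reproduced):

$$P(x_1, \dots, x_n) = \sum_{e_1=0}^{d} \cdots \sum_{e_n=0}^{d} c_{e_1, \dots, e_n} \, x_1^{e_1} \cdots x_n^{e_n}.$$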

Multivariate dense Interpolation: so we get the values of the coefficients found by interpolating P on X_1. By repeating this procedure we recursively compute P(X_1, ..., X_k, x_{k+1,0}, ..., x_{n,0}).

Multivariate dense Interpolation

The complexity of the dense interpolation: let I(d) be the complexity of interpolating d+1 values to produce a univariate polynomial of degree d, and let N_k be the complexity for the first k variables.
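A sketch of how the recurrence for this recursive scheme typically looks; this is an assumption about the missing formula, not a transcription of it. Interpolating in the k-th variable takes d+1 recursive interpolations in the first k-1 variables plus one univariate interpolation per coefficient slot:

$$N_k \approx (d+1)\, N_{k-1} + (d+1)^{k-1} I(d), \qquad N_1 = I(d),$$

so in particular the number of black-box evaluations grows like (d+1)^n.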

Probabilistic Sparse Interpolation Formal Presentation Example Analysis

Probabilistic Sparse Interpolation: assume we want to determine P(X_1, ..., X_n), which is an element of L[X] where L is a field of cardinality q, the degree in each X_i is bounded by d, and there are no more than T non-zero monomials.

Probabilistic Sparse Interpolation. Definition: a point is a precise evaluation point if:

Probabilistic Sparse Interpolation: the probability with which a point is an imprecise evaluation point. For each k we can write …; it is an imprecise evaluation point if one of the c_{ik} = 0, and the probability that this happens is no more than …

Probabilistic Sparse Interpolation: given is a (k-1)-tuple …. The probability that … is 0 if we are working over a field of characteristic 0, or at least …; when working over a field of q elements the probability is bounded by …

Probabilistic Sparse Interpolation: so the following probability is the one that underlies the probabilistic sparse interpolation.

Probabilistic Sparse Interpolation: assume we want to determine P(X_1, ..., X_n), which is an element of L[X] where L is a field of cardinality q, the degree in each X_i is bounded by d, and there are no more than T non-zero monomials. As in the dense interpolation, we interpolate one variable at a time.

Probabilistic Sparse Interpolation: at the kth stage the first computation gives us …. We then assume that …. The probability of that being the right skeleton is …. We then pick a (k-1)-tuple … and set up the following transposed Vandermonde system of linear equations:
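A sketch of the shape such a system typically takes in Zippel-style sparse interpolation; the symbols used here, the skeleton monomials m_1, ..., m_t and the tuple a = (a_1, ..., a_{k-1}), are assumptions, since the slide's formulas are not reproduced. Evaluating the black box at the successive powers of a gives, for j = 1, ..., t,

$$\sum_{i=1}^{t} m_i(a)^{\,j} \, c_i = v_j,$$

a transposed Vandermonde system in the unknown coefficients c_i with nodes m_i(a), which the earlier O(n^2) solver handles after a small adjustment.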

Probabilistic Sparse Interpolation: so each of these can be computed using O(n^2) operations, and we can avoid computing the other interpolations.

Probabilistic Sparse Interpolation: the probability that the Vandermonde system of equations is non-singular is bounded by

Probabilistic Sparse Interpolation: so for each k we get …. Then we solve … through the dense interpolation. We then expand it and get …, and we are ready to compute the (k+1)th stage.

Probabilistic Sparse Interpolation, Example: let's assume we are given a black box representing the following polynomial.
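The slide's actual example polynomial is not reproduced in this transcript. Purely as a hypothetical stand-in, a black box of the kind the algorithm consumes might look like this (the polynomial, variable names, and field are all made up for illustration):

```python
# A hypothetical black box for a sparse polynomial in three variables over
# the prime field GF(10007).  The interpolation algorithm only ever sees
# this function, never the expression hidden inside it.
P_MOD = 10007

def black_box(x1, x2, x3):
    """Evaluate the (hidden) sparse polynomial 3*x1^4*x3 + 5*x2^2 - 7 mod P_MOD."""
    return (3 * pow(x1, 4, P_MOD) * x3 + 5 * pow(x2, 2, P_MOD) - 7) % P_MOD

# Example query, e.g. during the first (univariate) interpolation stage:
print(black_box(2, 1, 1))   # -> (3*16*1 + 5 - 7) % 10007 = 46
```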

Deterministic Sparse Interpolation without degree bounds: given are a bound T on the number of non-zero terms and the number of variables n. We want to compute …. By choosing a distinct prime for each X_i, the quantities … will all be distinct. Let …; then we get:
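A sketch of the classical prime-substitution setup (in the style of Ben-Or and Tiwari) that this appears to describe; the symbol names are assumptions, since the slide's formulas are not reproduced. If p_i is the prime assigned to X_i and P = c_1 M_1 + ... + c_t M_t with monomials M_i, then each monomial maps to a distinct integer m_i = M_i(p_1, ..., p_n), and evaluating the black box at powers of the primes gives

$$v_j = P(p_1^{\,j}, \dots, p_n^{\,j}) = \sum_{i=1}^{t} c_i \, m_i^{\,j}, \qquad j = 0, 1, 2, \dots$$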

Deterministic Sparse Interpolation without degree bounds: the rank of the system of equations is exactly the number of non-zero monomials in P. This can be determined by taking the first T equations and computing their rank, which requires O(T^3) operations.

Deterministic Sparse Interpolation without degree bounds: let … and consider …; consider also …

Deterministic Sparse Interpolation without degree bounds Then we get the following Toeplitz system of linear equations
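A sketch of the kind of system being referred to, in the standard power-sum formulation (the slide's exact indexing is not reproduced, so treat the details as assumptions): if Q(Z) = (Z - m_1) ... (Z - m_t) = Z^t + q_{t-1} Z^{t-1} + ... + q_0, then the values v_j = c_1 m_1^j + ... + c_t m_t^j satisfy the linear recurrence

$$v_{j+t} + q_{t-1} v_{j+t-1} + \cdots + q_0 v_j = 0 \qquad (j \ge 0),$$

and writing this out for j = 0, ..., t-1 gives a structured (Toeplitz/Hankel-type) system for the unknown coefficients q_0, ..., q_{t-1}.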

Deterministic Sparse Interpolation without degree bounds: so the system is non-singular if the m_i are distinct.

Deterministic Sparse Interpolation without degree bounds: so the system can be solved by Gaussian elimination in O(t^3) operations, and then we have Q(Z). To find the m_i we just need to find the zeros of Q, which are in fact positive integers, making this step much easier. Knowing the m_i, the exponents e_i can easily be determined by factoring each m_i, which is very easy because the only possible divisors are the first n primes, already known. Knowing the m_i, it is also easy to compute the coefficients c_i, just by solving the Vandermonde system formed by the first t equations.
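A small sketch of the exponent-recovery step described above, under the assumption (as in the prime-substitution setup) that each integer m was obtained by evaluating a monomial at the first n primes:

```python
def recover_exponents(m, primes):
    """
    Given m = p1**e1 * p2**e2 * ... * pn**en and the list of primes used,
    recover the exponent vector (e1, ..., en) by repeated trial division.
    Cheap because the only possible prime divisors are already known.
    """
    exponents = []
    for p in primes:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        exponents.append(e)
    assert m == 1, "m had a prime factor outside the supplied list"
    return exponents

# Example: the monomial X1^2 * X3 evaluated at the primes (2, 3, 5) is 20.
print(recover_exponents(20, [2, 3, 5]))   # -> [2, 0, 1]
```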

The End Questions? Bibliography: Richard Zippel

Appendix: pseudo-code for the interpolation algorithms

Code for Vandermonde Matrices
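The pseudo-code from Zippel that this slide shows is not reproduced in the transcript. As a stand-in, here is a sketch in Python of the O(n^2) solver described earlier; the function names and the use of floating-point arithmetic are my own assumptions, not the original code.

```python
def poly_mul_linear(q, r):
    """Multiply polynomial q (coefficient list, lowest degree first) by (Z - r)."""
    out = [0.0] * (len(q) + 1)
    for j, c in enumerate(q):
        out[j + 1] += c       # contribution of c * Z
        out[j] -= r * c       # contribution of c * (-r)
    return out


def solve_vandermonde(k, w):
    """
    Solve V a = w for the Vandermonde matrix V[i][j] = k[i]**j with distinct
    nodes k[i]; equivalently, find the coefficients a of the polynomial A
    with A(k[i]) = w[i].  Follows the accumulation scheme on the slides:
    for each i, form P_i(Z) = prod_{j != i} (Z - k[j]), normalize it, scale
    it by w[i], and add its coefficient vector into the solution.
    O(n^2) time, O(n) extra space beyond the output.
    """
    n = len(k)
    # Master polynomial Q(Z) = prod_i (Z - k[i]), degree n, leading coefficient 1.
    Q = [1.0]
    for ki in k:
        Q = poly_mul_linear(Q, ki)

    a = [0.0] * n
    for i in range(n):
        # P_i(Z) = Q(Z) / (Z - k[i]) by synthetic division (degree n - 1).
        p = [0.0] * n
        carry = Q[n]
        for j in range(n - 1, -1, -1):
            p[j] = carry
            carry = Q[j] + k[i] * carry
        # Normalizing constant P_i(k[i]) = prod_{j != i} (k[i] - k[j]),
        # evaluated here by Horner's rule.
        denom = 0.0
        for c in reversed(p):
            denom = denom * k[i] + c
        scale = w[i] / denom
        for j in range(n):
            a[j] += scale * p[j]
    return a


# Example: nodes 1, 2, 3 with values 2, 3, 6 give A(x) = 3 - 2x + x^2.
print(solve_vandermonde([1.0, 2.0, 3.0], [2.0, 3.0, 6.0]))   # ~[3.0, -2.0, 1.0]
```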

Code for Lagrange Interpolation
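This slide's code is also missing from the transcript; a minimal sketch of plain Lagrange evaluation (the function name is assumed):

```python
def lagrange_eval(k, w, x):
    """
    Evaluate at x the unique polynomial of degree < n with value w[i] at the
    node k[i] (nodes assumed distinct), using the Lagrange form
        f(x) = sum_i  w[i] * prod_{j != i} (x - k[j]) / (k[i] - k[j]).
    O(n^2) arithmetic operations per evaluation point.
    """
    total = 0.0
    for i, ki in enumerate(k):
        term = w[i]
        for j, kj in enumerate(k):
            if j != i:
                term *= (x - kj) / (ki - kj)
        total += term
    return total


# Example: interpolating (1, 2), (2, 3), (3, 6) and evaluating at x = 4
# gives 4^2 - 2*4 + 3 = 11.
print(lagrange_eval([1.0, 2.0, 3.0], [2.0, 3.0, 6.0], 4.0))  # -> 11.0
```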

Code for Newton Interpolation
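Again, Zippel's pseudo-code is not shown in the transcript; below is a sketch that follows the loop given on the "Newton Interpolation algorithm" slide (the names and the dense coefficient representation are assumptions):

```python
def newton_interpolate(k, w):
    """
    Incremental Newton interpolation, following the loop on the slides:
        f := 0, q := 1
        for each node k_i:
            f := f + q(k_i)^(-1) * q * (w_i - f(k_i))
            q := (x - k_i) * q
    Polynomials are coefficient lists, lowest degree first; returns the
    coefficient list of the interpolating polynomial f.
    """
    def eval_poly(p, x):
        value = 0.0
        for c in reversed(p):     # Horner's rule
            value = value * x + c
        return value

    f = [0.0]
    q = [1.0]
    for ki, wi in zip(k, w):
        scale = (wi - eval_poly(f, ki)) / eval_poly(q, ki)
        # f := f + scale * q  (pad f so both lists have the same length)
        if len(f) < len(q):
            f += [0.0] * (len(q) - len(f))
        for j, c in enumerate(q):
            f[j] += scale * c
        # q := (x - ki) * q
        new_q = [0.0] * (len(q) + 1)
        for j, c in enumerate(q):
            new_q[j + 1] += c
            new_q[j] -= ki * c
        q = new_q
    return f


# Example: nodes 1, 2, 3 with values 2, 3, 6 again give x^2 - 2x + 3.
print(newton_interpolate([1.0, 2.0, 3.0], [2.0, 3.0, 6.0]))  # ~[3.0, -2.0, 1.0]
```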