Fast Fourier Transform for Speeding Up the Multiplication of Polynomials: an Algorithm Visualization. Alexandru Cioaca.


Defining the problem

The explicit form of a polynomial is given by its list of coefficients, and we can use it to compute the polynomial's value at any point. This operation is called Evaluation. In reverse, if we have the values of a polynomial of degree N at N+1 (or more) distinct points, we can determine its coefficients. This operation is called Interpolation.
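Evaluation from the coefficient list can be sketched in a few lines. This is a minimal example using Horner's rule; the sample polynomial is an illustrative choice, not one from the slides.

```python
# Evaluate a polynomial given as a coefficient list [a0, a1, ..., an],
# i.e. p(x) = a0 + a1*x + ... + an*x^n, using Horner's rule.

def evaluate(coeffs, x):
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# p(x) = 2 + 3x + x^2, evaluated at x = 2: 2 + 6 + 4 = 12
print(evaluate([2, 3, 1], 2))  # 12
```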

Consider the following polynomial

Adding these 4 components gives us our polynomial (in black)

Let’s draw a Cartesian grid for our polynomial

We can evaluate our polynomial at these points. This is Evaluation.

Now imagine the reverse operation for our polynomial. What if we don’t have its explicit form, so we can’t evaluate it?

Instead, we only have its value at certain points.

From these values, the polynomial can be reconstructed approximately. The approximation improves as more values become available.

This is Interpolation.
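Interpolation can be sketched with the classic Lagrange formula: given values at distinct points, reconstruct the polynomial's value anywhere. The sample data below is hypothetical, chosen so the underlying polynomial is p(x) = x².

```python
# Lagrange interpolation: evaluate the unique polynomial of degree < len(xs)
# passing through the points (xs[i], ys[i]) at an arbitrary x.

def lagrange_eval(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # Lagrange basis factor
        total += term
    return total

xs, ys = [0, 1, 2], [0, 1, 4]      # samples of p(x) = x^2
print(lagrange_eval(xs, ys, 3))    # recovers p(3) = 9.0
```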

Consider the following two polynomials. Their product is:

The coefficients of the product polynomial can be computed from the following outer product

This means computing the product of each pair of coefficients

And then adding the terms
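The outer-product-and-add procedure above is exactly a convolution of the two coefficient lists. A minimal sketch of this naive O(n·m) multiplication, with a hypothetical pair of small polynomials:

```python
# Naive polynomial multiplication: form the product of every pair of
# coefficients (the outer product) and sum the anti-diagonals, which
# amounts to a discrete convolution.

def poly_mul_naive(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj  # x^i * x^j contributes to x^(i+j)
    return out

# (1 + 2x) * (3 + 4x) = 3 + 10x + 8x^2
print(poly_mul_naive([1, 2], [3, 4]))  # [3, 10, 8]
```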

For the FFT, the evaluation points are chosen to be the Nth roots of unity. Look at the symmetry of these roots on the unit circle

N=1

N=2

N=3

N=4

N=5

N=6

N=7

N=8
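The roots pictured for each N can be generated directly: the Nth roots of unity are w^k = exp(2πik/N), evenly spaced on the unit circle. A small sketch for N = 8:

```python
# The Nth roots of unity: w^k = exp(2*pi*i*k/N). Every root is a power of
# the primitive root w^1, and they are evenly spaced on the unit circle.
import cmath

N = 8
roots = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
for k, w in enumerate(roots):
    print(f"w^{k} = {w.real:+.3f} {w.imag:+.3f}i")
```

The symmetry shown on the slides is visible in the numbers: w^(k+N/2) = -w^k, e.g. w^4 = -w^0 = -1.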

We can see that the DFT matrix is a Vandermonde matrix of the Nth roots of unity.

The rows of the DFT matrix correspond to basic harmonic waveforms. They transform the input vector into the spectral domain.

This computation is nothing but a matrix-vector product

Each element of the result is equal to the inner product of the corresponding row of the matrix with the input vector
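The O(N²) computation described here can be sketched directly: build the Vandermonde matrix of roots of unity implicitly and take one inner product per row. The example vector is hypothetical.

```python
# Direct DFT as a matrix-vector product: entry (j, k) of the DFT matrix is
# w^(j*k) with w a primitive nth root of unity, so output j is the inner
# product of row j with the coefficient vector. Cost: O(n^2).
import cmath

def dft_matvec(a):
    n = len(a)
    w = cmath.exp(2j * cmath.pi / n)  # primitive nth root of unity
    return [sum(a[k] * w ** (j * k) for k in range(n)) for j in range(n)]

A = dft_matvec([1, 2, 3, 4])
print(A)  # spectral coefficients of the input vector
```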

So we are dealing with 8 terms obtained from multiplications

Adding these terms that come from multiplications

And, first and foremost, computing the elements of the DFT matrix...

...for every pair of elements from the matrix and the vector

And we have to do this for each row, which might take a while...

We can speed up this matrix-vector product using some nice properties of the DFT. This is the FFT algorithm (the Fast Fourier Transform).

After only 3-4 steps, we have filled the DFT matrix completely

Fast Fourier Transform (FFT) FFT is used to compute this matrix-vector product with a smaller number of operations. It is a recursive divide-and-conquer strategy. FFT uses the observation made previously that any polynomial can be split into a part built from its even-indexed coefficients and a part built from its odd-indexed coefficients. This splitting can be repeated until the polynomial is reduced to linear polynomials that can be easily evaluated.
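The recursive strategy just described can be sketched in a few lines. This is a minimal radix-2 FFT, assuming the input length is a power of two and using w = exp(2πi/n) as the primitive root, consistent with the DFT matrix on the earlier slides.

```python
# Recursive radix-2 FFT: split coefficients into even- and odd-indexed
# halves, transform each recursively, then combine. Evaluating at w^k and
# at -w^k = w^(k + n/2) shares all the recursive work.
import cmath

def fft(a):
    n = len(a)
    if n == 1:
        return a[:]                 # a degree-0 polynomial evaluates to itself
    even = fft(a[0::2])             # even-indexed coefficients
    odd = fft(a[1::2])              # odd-indexed coefficients
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(2j * cmath.pi * k / n)   # twiddle factor w^k
        out[k] = even[k] + w * odd[k]          # value at w^k
        out[k + n // 2] = even[k] - w * odd[k] # value at -w^k
    return out

print(fft([1, 2, 3, 4]))  # matches the direct O(n^2) DFT
```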

FFT

FFT transforms the vector of coefficients “a” into the vector “A”.

FFT It starts by splitting the given vector of coefficients into two subvectors. One contains the odd-indexed coefficients, and the other contains the even-indexed ones.

FFT Then, it proceeds in a recursive fashion to split these vectors again

FFT This recursion stops when we reach subvectors representing polynomials of degree 1

FFT The actual computation is performed when the algorithm starts to exit the recursion.

FFT At each step backward, the output coefficients are updated.

FFT It evaluates these subpolynomials at powers of the primitive root of unity.

Let’s follow the algorithm step-by-step on the DFT matrix-vector product.

We pass the vector of coefficients to FFT which starts the recursion

First, it splits the 8 coefficients into 2 sets (odd and even)

It follows the recursion down one step for the first set of coefficients.

FFT splits this vector too and the recursion goes down one more step.

At the third split (log2 8 = 3), FFT is passed a linear polynomial and returns.

FFT reached a polynomial of order 1, so it will evaluate it.

The first coefficient of A gets updated with this value.

Then, FFT evaluates the polynomial at the negative of the previous root.

The corresponding coefficient is updated with this value.

By computing these two values, FFT already computed the pairs for the other 3 polynomials.

We now exit the FFT for this polynomial (RED) and enter the branch of the recursion corresponding to the next polynomial

Again, we evaluate the two values.

And update the corresponding coefficients.

Looking at the corresponding columns, we can see that the other values have already been computed, but they can be used only when the other polynomials are active and FFT evaluates at the right power of the primitive root of unity.

After exiting the recursion to the second level, we can update the output coefficients by combining the values computed already.

FFT exits the recursion to the higher level and works on the second half.

FFT evaluates these basic polynomials too, and updates the coefficients.

After evaluating the last linear polynomial, FFT has computed all the values it needs. From now on, the computation will rely on combining these values.

Exiting the recursion, the coefficients are, again, updated at each step.

Finally, FFT goes back to the upper level and combines the subpolynomials.

At this level, we can see the strength of FFT.

It combines larger and larger subpolynomials, so the savings from reusing previously computed values grow with each level.

With FFT, after three levels of recursion, we have computed the matrix-vector product.

Multiplying the polynomials In order to compute the product polynomial, we evaluate the two input polynomials at enough points (at least 2n-1) and multiply their values element-wise. These products are the spectral coefficients of the product polynomial. To obtain its explicit form, we interpolate these values. This is done with the inverse DFT matrix, whose entries are the complex conjugates of the DFT matrix entries, scaled by 1/N. We can employ the same FFT algorithm to compute this step fast as well.
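The whole pipeline can be sketched end to end: pad to a power of two of at least 2n-1, transform both coefficient vectors, multiply element-wise, then interpolate with the inverse transform (the conjugate transform scaled by 1/N). The example polynomials are hypothetical.

```python
# Polynomial multiplication via the FFT: evaluate, multiply point-wise,
# then interpolate with the inverse FFT.
import cmath

def fft(a, invert=False):
    n = len(a)
    if n == 1:
        return a[:]
    sign = -1 if invert else 1       # conjugate roots give the inverse transform
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul_fft(a, b):
    size = 1
    while size < len(a) + len(b) - 1:  # need at least 2n-1 evaluation points
        size *= 2
    fa = fft(a + [0] * (size - len(a)))
    fb = fft(b + [0] * (size - len(b)))
    spectral = [x * y for x, y in zip(fa, fb)]            # element-wise product
    coeffs = [c / size for c in fft(spectral, invert=True)]  # scale by 1/N
    return [round(c.real) for c in coeffs[:len(a) + len(b) - 1]]

# (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
print(poly_mul_fft([1, 2, 3], [4, 5]))  # [4, 13, 22, 15]
```

Rounding to integers at the end is appropriate here only because the inputs have integer coefficients; with real coefficients one would keep the floating-point values.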