11/26/02  CSE 202 - Algorithms: Polynomial Representations, Fourier Transforms, and other goodies. (Chapters 28-30)



Representation matters

Example: simple graph algorithms.
- Adjacency matrix: M[i,j] = 1 iff (i,j) is an edge. Requires Ω(V^2) time just to find the edges (if you're unlucky and the graph is sparse).
- Adjacency lists: M[i] is the list of nodes attached to node i by an edge. Gives an O(E) algorithm for connected components, O(E lg E) for MST, ...
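As a sketch (not from the slides), here is what the two representations look like in Python, with a DFS over the adjacency lists giving the fast connected-components algorithm:

```python
n = 5
edges = [(0, 1), (1, 2), (3, 4)]

# Adjacency matrix: Theta(V^2) space, even when the graph is sparse.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency lists: O(V + E) space; scanning all edges costs O(E).
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def components(adj):
    """Connected components by DFS over adjacency lists."""
    seen, comps = set(), []
    for s in range(len(adj)):
        if s in seen:
            continue
        stack, comp = [s], []
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        comps.append(sorted(comp))
    return comps
```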

Representation matters

Example: long integer operations.
- Roman numerals: multiplication isn't hard, but if the "size" of an instance is its number of characters, multiplication can be O(N^2) (why??).
- Arabic (positional) notation: O(N^{lg 3}), or even better.
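A minimal Python sketch of the divide-and-conquer trick behind the O(N^{lg 3}) bound (Karatsuba: trade four half-size multiplies for three); splitting by bits rather than decimal digits is my implementation choice, not the slides':

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with 3 half-size products per level."""
    if x < 10 or y < 10:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    # Third product yields the cross terms: (hi+lo)(hi+lo) - a - b.
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
    return (a << (2 * n)) + (c << n) + b
```

The recurrence is T(N) = 3T(N/2) + O(N), which solves to O(N^{lg 3}) ≈ O(N^1.585).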

Another long integer representation

Let q_1, q_2, ..., q_k be relatively prime integers. Assume each is short (e.g. < 2^31). For 0 <= i < q_1 q_2 ... q_k, represent i as the vector (i mod q_1, i mod q_2, ..., i mod q_k).
- Converting to this representation takes perhaps O(k^2): k divides of a k-long number by a short one.
- Converting back requires k Chinese Remainder operations.
Why bother?
- + or x on two such numbers takes O(k) time.
- But comparisons are very time-consuming.
- If you have many x's per convert, you can save time!
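A small Python sketch of this residue representation, using the toy moduli 97, 101, 103 (any pairwise relatively prime "short" integers would do); `from_residues` does the Chinese Remainder reconstruction:

```python
moduli = [97, 101, 103]      # pairwise relatively prime q_1, ..., q_k
M = 97 * 101 * 103           # can represent any integer in [0, M)

def to_residues(x):
    # One short division per modulus.
    return [x % q for q in moduli]

def add(a, b):
    # O(k): componentwise, with no carries between components.
    return [(ai + bi) % q for ai, bi, q in zip(a, b, moduli)]

def mul(a, b):
    return [(ai * bi) % q for ai, bi, q in zip(a, b, moduli)]

def from_residues(r):
    # Chinese Remainder reconstruction; pow(., -1, q) is the
    # modular inverse (Python 3.8+).
    x = 0
    for ri, q in zip(r, moduli):
        Mi = M // q
        x = (x + ri * Mi * pow(Mi, -1, q)) % M
    return x
```

Many multiplies amortize the conversion cost, exactly as the slide argues.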

Representing degree-(N-1) polynomials

Option 1: use the N coefficients of a_0 + a_1 x + a_2 x^2 + ... + a_{N-1} x^{N-1}.
- Addition: pointwise.
- Multiplication: convolution, c_i = Σ_k a_k b_{i-k}.
Option 2: use N point values b_0 = f(0), ..., b_{N-1} = f(N-1).
- Addition: pointwise.
- Multiplication: pointwise.
To convert between them:
- Evaluate at N points: O(N^2).
- Add N basis functions: O(N^2).
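The two multiplications can be sketched in Python; `poly_mul_coeffs` is the O(N^2) convolution on coefficients, and `evaluate` produces the point-value form (the function names are mine, not the slides'):

```python
def poly_mul_coeffs(a, b):
    """Coefficient representation: multiply = convolution, O(N^2)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for k, bk in enumerate(b):
            c[i + k] += ai * bk
    return c

def evaluate(a, xs):
    """Convert coefficients to point values (naively: O(N^2))."""
    return [sum(ak * x**k for k, ak in enumerate(a)) for x in xs]
```

In point-value form, the product polynomial's values are just the pointwise products of the factors' values at the same points.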

So what?

Slow convolutions can be changed to faster pointwise ops. E.g., in signal processing, you often want the dot product of a set of weights w_0, w_1, ..., w_{N-1} with all N cyclic shifts of the data a_0, a_1, ..., a_{N-1}. This is a convolution. It takes N^2 time done naively.
- Pretend the data and weights are coefficients of polynomials.
- Convert each set to point-value representation: might take O(N^2) time.
- Now do one pointwise multiplication: O(N) time.
- And finally convert each back to the original form: might take O(N^2) time.
Oops... we haven't saved any time.
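A sketch of the naive O(N^2) computation described above, in Python (the function name is mine):

```python
def cyclic_dot_products(w, a):
    """Dot product of weights w with every cyclic shift of data a.

    N shifts, N multiply-adds each: O(N^2) done naively.
    """
    n = len(a)
    return [sum(w[j] * a[(s + j) % n] for j in range(n)) for s in range(n)]
```

This is exactly a (cyclic) convolution, which is what the FFT on the next slide speeds up.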

Fast Fourier Transform (FFT)

It turns out we can switch back and forth between the two representations (often called the "time domain" and "frequency domain") in O(N lg N) time. Basic idea: rather than evaluating the polynomial at 0, ..., N-1, do so at the N "roots of unity", the complex numbers that solve x^N = 1. Then use fancy algebraic identities to reduce the work.
Bottom line: to form a convolution, it only takes
- O(N lg N) to convert to the frequency domain,
- O(N) to form the convolution (pointwise multiplication),
- O(N lg N) to convert back.
O(N lg N) instead of O(N^2).
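A minimal recursive radix-2 FFT and the resulting O(N lg N) polynomial multiplication, sketched in Python with `cmath` (it assumes the padded length is a power of two, and makes no attempt at the iterative bit-reversal form shown on the next slide):

```python
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    sign = 1 if invert else -1
    even = fft(a[0::2], invert)   # coefficients at even powers
    odd = fft(a[1::2], invert)    # coefficients at odd powers
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # root of unity
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def convolve(a, b):
    """Multiply integer polynomials via the frequency domain."""
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2
    fa = fft(a + [0] * (n - len(a)))
    fb = fft(b + [0] * (n - len(b)))
    fc = [x * y for x, y in zip(fa, fb)]          # O(N) pointwise step
    c = fft(fc, invert=True)                      # inverse needs a 1/n factor
    return [round((x / n).real) for x in c[:len(a) + len(b) - 1]]
```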

The FFT computation

[Figure: a "butterfly network" with lg N stages. The inputs a_0, a_1, ..., a_15 enter in "bit reversal" permutation order; the outputs are b_0, b_1, ..., b_15. Each box takes inputs x and y and computes x + ω_i y and x - ω_i y, where ω_i is some root of unity. Each box takes 10 floating-point ops (these are complex numbers).]

Factoid: In 1990, 40% of all Cray Supercomputer cycles were devoted to FFTs.

Fast Matrix Multiplication (chapter 28)

Naive matrix multiplication is O(N^3) for multiplying two NxN matrices. Strassen's algorithm uses 7 multiplies of N/2 x N/2 matrices (rather than 8): T(N) = 7 T(N/2) + c N^2. Thus, T(N) is O(N^{lg 7}) ≈ O(N^{2.81}).
- It's faster than the naive method for N > 45-ish.
- It gives a slightly different answer, due to different rounding. It can be argued that Strassen is more accurate.
Current champion: O(N^{2.376}) [Coppersmith & Winograd].
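A sketch of Strassen's seven-product scheme in Python for power-of-two sizes, falling back to the naive O(N^3) method below a cutoff (the cutoff value here is arbitrary, not the 45-ish crossover the slide mentions):

```python
def strassen(A, B, cutoff=32):
    """Strassen's algorithm for n x n matrices, n a power of two."""
    n = len(A)
    if n <= cutoff:  # naive O(n^3) base case
        return [[sum(A[i][k] * B[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    h = n // 2
    def split(M):
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # Seven half-size products instead of eight.
    M1 = strassen(add(A11, A22), add(B11, B22), cutoff)
    M2 = strassen(add(A21, A22), B11, cutoff)
    M3 = strassen(A11, sub(B12, B22), cutoff)
    M4 = strassen(A22, sub(B21, B11), cutoff)
    M5 = strassen(add(A11, A12), B22, cutoff)
    M6 = strassen(sub(A21, A11), add(B11, B12), cutoff)
    M7 = strassen(sub(A12, A22), add(B21, B22), cutoff)
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return ([C11[i] + C12[i] for i in range(h)] +
            [C21[i] + C22[i] for i in range(h)])
```

Additions cost O(N^2) per level, giving exactly the recurrence T(N) = 7T(N/2) + cN^2 from the slide.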

Linear Programming (chapter 29)

Given n variables x_1, x_2, ..., x_n:
- maximize the "objective function" c_1 x_1 + c_2 x_2 + ... + c_n x_n,
- subject to the m linear constraints:
  a_11 x_1 + a_12 x_2 + ... + a_1n x_n <= b_1
  ...
  a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n <= b_m
Simplex method: exponential worst case, but very fast in practice. The ellipsoid method is polynomial time. There exist some good implementations (e.g. in IBM's OSL library).
NOTE: Integer linear programming (where the answer must be integers) is NP-hard (i.e., probably not polynomial time).
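For intuition only, a tiny Python sketch that solves a two-variable LP by enumerating constraint intersections (candidate vertices) rather than running simplex; an optimum of a feasible bounded LP lies at a vertex, and real solvers scale far beyond this:

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x subject to A x <= b and x >= 0, for 2 variables.

    Tries every intersection of two constraint boundaries (Cramer's rule),
    keeps the feasible ones, and returns (best value, best point).
    """
    rows = A + [[-1, 0], [0, -1]]   # fold x >= 0 in as -x_i <= 0
    rhs = b + [0, 0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (a1, a2), (a3, a4) = rows[i], rows[j]
        det = a1 * a4 - a2 * a3
        if det == 0:
            continue                 # parallel boundaries, no vertex
        x = (rhs[i] * a4 - a2 * rhs[j]) / det
        y = (a1 * rhs[j] - rhs[i] * a3) / det
        if all(r[0] * x + r[1] * y <= rb + 1e-9 for r, rb in zip(rows, rhs)):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best
```

This vertex enumeration is exponential in general (simplex walks between vertices instead of trying them all), which is why it is only a toy.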

Glossary (in case symbols are weird)

⊆ subset; ∈ element of; ∞ infinity; ∅ empty set; ∀ for all; ∃ there exists; ∩ intersection; ∪ union; Θ big theta; Ω big omega; Σ summation; ≥ >=; ≤ <=; ≈ about equal; ≠ not equal; ℕ natural numbers (N); ℝ reals (R); ℚ rationals (Q); ℤ integers (Z)