1
Divide-and-Conquer

Divide-and-conquer.
Break up problem into several parts.
Solve each part recursively.
Combine solutions to sub-problems into overall solution.

Most common usage.
Break up problem of size n into two equal parts of size ½n.
Solve two parts recursively.
Combine two solutions into overall solution in linear time.

Consequence.
Brute force: n².
Divide-and-conquer: n log n.

"Divide et impera. Veni, vidi, vici." - Julius Caesar
2
5.1 Mergesort "Divide and conquer" (divide et impera); Caesar's other famous quote: "I came, I saw, I conquered" (veni, vidi, vici). The divide-and-conquer idea dates back to Julius Caesar: his favorite war tactic was to divide an opposing army in two halves, and then assault one half with his entire force.
3
Sorting. Given n elements, rearrange in ascending order.

Obvious sorting applications. List files in a directory. Organize an MP3 library. List names in a phone book. Display Google PageRank results.

Problems become easier once sorted. Find the median. Find the closest pair. Binary search in a database. Identify statistical outliers. Find duplicates in a mailing list.

Non-obvious sorting applications. Data compression. Computer graphics. Interval scheduling. Computational biology. Minimum spanning tree. Supply chain management. Simulate a system of particles. Book recommendations on Amazon. Load balancing on a parallel computer.
4
Mergesort

Mergesort. [John von Neumann, 1945]
Divide array into two halves.
Recursively sort each half.
Merge two halves to make sorted whole.

Example input: A L G O R I T H M S. Costs: divide O(1), sort 2T(n/2), merge O(n).
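The scheme above can be written as a short Python sketch (my own illustration of the slide's three steps, not code from the deck):

```python
def mergesort(a):
    """Sort a list by divide-and-conquer: divide, recurse, merge."""
    if len(a) <= 1:               # a list of 0 or 1 elements is already sorted
        return a
    mid = len(a) // 2             # divide: O(1)
    left = mergesort(a[:mid])     # sort each half: 2T(n/2)
    right = mergesort(a[mid:])
    out, i, j = [], 0, 0          # merge: O(n)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

For the slide's example, `mergesort(list("ALGORITHMS"))` returns the letters in ascending order.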
5
Merging

Merging. Combine two pre-sorted lists into a sorted whole.

How to merge efficiently? Linear number of comparisons. Use temporary array.

Challenge for the bored. In-place merge, using only a constant amount of extra storage. [Kronrod, 1969]
6
Merging (the original slides animate this one step at a time).

Merge. Keep track of the smallest remaining element in each sorted half. Insert the smaller of the two into the auxiliary array. Repeat until done; once one half is exhausted, copy over the remainder of the other.

Example: merging the sorted halves A G L O R and H I M S T fills the auxiliary array in the order A, G, H, I, L, M, O, R; the first half is then exhausted, so S, T are copied over, giving A G H I L M O R S T.
17
A Useful Recurrence Relation

Def. T(n) = number of comparisons to mergesort an input of size n.

Mergesort recurrence.
T(n) = 0 if n = 1; otherwise T(n) <= T(⌈n/2⌉) + T(⌊n/2⌋) + n.

Solution. T(n) = O(n log₂ n).

Assorted proofs. We describe several ways to prove this recurrence. Initially we assume n is a power of 2 and replace <= with =.
18
Proof by Recursion Tree
Recursion tree (figure): the root costs n to merge; its two children T(n/2) cost 2(n/2); the 2^k subproblems T(n/2^k) at depth k cost 2^k (n/2^k) = n; the n/2 leaves T(2) cost (n/2)(2). Each of the log₂ n levels contributes n, for a total of n log₂ n.
19
Proof by Telescoping

Claim. If T(n) satisfies this recurrence, then T(n) = n log₂ n.

Pf. For n > 1 (assuming n is a power of 2, so T(n) = 2T(n/2) + n), divide by n and telescope:
T(n)/n = T(n/2)/(n/2) + 1 = T(n/4)/(n/4) + 1 + 1 = … = T(n/n)/(n/n) + log₂ n = log₂ n.
20
Proof by Induction

Claim. If T(n) satisfies this recurrence, then T(n) = n log₂ n.

Pf. (by induction on n, assuming n is a power of 2)
Base case: n = 1.
Inductive hypothesis: T(n) = n log₂ n.
Goal: show that T(2n) = 2n log₂ (2n).
T(2n) = 2T(n) + 2n = 2n log₂ n + 2n = 2n (log₂ (2n) - 1) + 2n = 2n log₂ (2n). ▪
21
Analysis of Mergesort Recurrence
Claim. If T(n) satisfies the following recurrence, then T(n) <= n ⌈lg n⌉ (lg = log₂).

Pf. (by strong induction on n)
Base case: n = 1.
Define n1 = ⌊n/2⌋, n2 = ⌈n/2⌉.
Induction step: assume true for 1, 2, ..., n-1. Then
T(n) <= T(n1) + T(n2) + n <= n1 ⌈lg n1⌉ + n2 ⌈lg n2⌉ + n <= n ⌈lg n2⌉ + n.
Since n2 = ⌈n/2⌉ <= 2^⌈lg n⌉ / 2, we have ⌈lg n2⌉ <= ⌈lg n⌉ - 1, so
T(n) <= n (⌈lg n⌉ - 1) + n = n ⌈lg n⌉. ▪
22
5.3 Counting Inversions
23
Counting Inversions

Music site tries to match your song preferences with others (amazon.com, launch.com, restaurants, movies, ...). You rank n songs. Music site consults database to find people with similar tastes.

Similarity metric: number of inversions between two rankings.
My rank: 1, 2, …, n.
Your rank: a1, a2, …, an.
Songs i and j inverted if i < j, but ai > aj.

Brute force: check all Θ(n²) pairs i and j.

Note: there can be a quadratic number of inversions, so an asymptotically faster algorithm must compute the total number without even looking at each inversion individually.

Example:
Songs:      A B C D E
Me:         1 2 3 4 5
You:        1 3 4 2 5
Inversions: 3-2, 4-2
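The Θ(n²) brute-force check described above is a two-line Python sketch (my own illustration):

```python
def count_inversions_brute(rank):
    """Count pairs i < j with rank[i] > rank[j] by checking all pairs: O(n^2)."""
    n = len(rank)
    return sum(1 for i in range(n)
                 for j in range(i + 1, n)
                 if rank[i] > rank[j])
```

On the slide's example ranking 1 3 4 2 5 it finds the two inversions 3-2 and 4-2.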
24
Applications Applications. Voting theory. Collaborative filtering.
Measuring the "sortedness" of an array. Sensitivity analysis of Google's ranking function. Rank aggregation for meta-searching on the Web. Nonparametric statistics (e.g., Kendall's Tau distance).
25
Counting Inversions: Divide-and-Conquer
1 5 4 8 10 2 6 9 12 11 3 7
26
Counting Inversions: Divide-and-Conquer
Divide: separate list into two pieces. Divide: O(1). 1 5 4 8 10 2 6 9 12 11 3 7 1 5 4 8 10 2 6 9 12 11 3 7
27
Counting Inversions: Divide-and-Conquer
Divide: separate list into two pieces. Conquer: recursively count inversions in each half. Divide: O(1). 1 5 4 8 10 2 6 9 12 11 3 7 1 5 4 8 10 2 6 9 12 11 3 7 Conquer: 2T(n / 2) 5 blue-blue inversions 8 green-green inversions 5-4, 5-2, 4-2, 8-2, 10-2 6-3, 9-3, 9-7, 12-3, 12-7, 12-11, 11-3, 11-7
28
Counting Inversions: Divide-and-Conquer
Divide: separate list into two pieces. Divide: O(1). Conquer: recursively count inversions in each half. Conquer: 2T(n / 2). Combine: count inversions where ai and aj are in different halves, and return sum of three quantities. Combine: ??? 1 5 4 8 10 2 6 9 12 11 3 7. 5 blue-blue inversions. 8 green-green inversions. 9 blue-green inversions: 5-3, 4-3, 8-6, 8-3, 8-7, 10-6, 10-9, 10-3, 10-7. Total = 5 + 8 + 9 = 22.
29
Counting Inversions: Combine
Combine: count blue-green inversions.
Assume each half is sorted.
Count inversions where ai and aj are in different halves.
Merge two sorted halves into sorted whole (to maintain sorted invariant).

Example: halves 3 7 10 14 18 19 and 2 11 16 17 23 25.
13 blue-green inversions: 6 + 3 + 2 + 2 (contributed as 2, 11, 16, 17 are merged in). Count: O(n).
Merged result: 2 3 7 10 11 14 16 17 18 19 23 25. Merge: O(n).
30
Merge and Count (the original slides animate this one step at a time).

Merge and count step. Given two sorted halves, count the number of inversions where ai and aj are in different halves, and combine the two sorted halves into a sorted whole. Maintain i = the number of elements remaining in the first half. Whenever an element of the second half is copied to the auxiliary array, it is smaller than all i remaining first-half elements, so add i to the running total.

Example: merging 3 7 10 14 18 19 with 2 11 16 17 23 25.
Copying 2 (i = 6) adds 6; copying 11 (i = 3) adds 3; copying 16 and 17 (i = 2) add 2 each; 23 and 25 are copied after the first half is exhausted (i = 0) and add nothing.
Auxiliary array: 2 3 7 10 11 14 16 17 18 19 23 25.
Total: 6 + 3 + 2 + 2 = 13.
55
Counting Inversions: Implementation
Pre-condition. [Merge-and-Count] A and B are sorted.
Post-condition. [Sort-and-Count] L is sorted.

Sort-and-Count(L) {
    if list L has one element
        return 0 and the list L
    Divide the list into two halves A and B
    (rA, A) ← Sort-and-Count(A)
    (rB, B) ← Sort-and-Count(B)
    (r, L) ← Merge-and-Count(A, B)
    return rA + rB + r and the sorted list L
}
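The Sort-and-Count pseudocode above translates directly to Python (a sketch; the inline merge plays the role of Merge-and-Count):

```python
def sort_and_count(L):
    """Return (number of inversions in L, sorted copy of L) in O(n log n)."""
    if len(L) <= 1:
        return 0, L
    mid = len(L) // 2
    rA, A = sort_and_count(L[:mid])   # inversions within the first half
    rB, B = sort_and_count(L[mid:])   # inversions within the second half
    # merge-and-count: A and B are sorted (pre-condition)
    r, merged, i, j = 0, [], 0, 0
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            merged.append(A[i]); i += 1
        else:
            # B[j] jumps ahead of every remaining element of A
            r += len(A) - i
            merged.append(B[j]); j += 1
    merged += A[i:] + B[j:]
    return rA + rB + r, merged
```

On the slides' example list 1 5 4 8 10 2 6 9 12 11 3 7 it returns the total of 22 inversions (5 + 8 + 9).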
56
5.4 Closest Pair of Points
57
Closest Pair of Points

Closest pair. Given n points in the plane, find a pair with smallest Euclidean distance between them.

Fundamental geometric primitive. Graphics, computer vision, geographic information systems, molecular modeling, air traffic control. Special case of nearest neighbor, Euclidean MST, Voronoi (fast closest pair inspired fast algorithms for these problems).

Brute force. Check all pairs of points p and q with Θ(n²) comparisons.

1-D version. O(n log n) easy if points are on a line.

Assumption. No two points have the same x coordinate (to make presentation cleaner).

Foundation of the then-fledgling field of computational geometry. Shamos and Hoey (1970s) wanted to work out basic computational primitives in computational geometry, and asked whether it was possible to do better than quadratic; it was surprisingly challenging to find an efficient algorithm. The algorithm we present is essentially their solution.
58
Closest Pair of Points: First Attempt
Divide. Sub-divide region into 4 quadrants. L
59
Closest Pair of Points: First Attempt
Divide. Sub-divide region into 4 quadrants. Obstacle. Impossible to ensure n/4 points in each piece. L
60
Closest Pair of Points Algorithm.
Divide: draw vertical line L so that roughly ½n points on each side. L
61
Closest Pair of Points Algorithm.
Divide: draw vertical line L so that roughly ½n points on each side. Conquer: find closest pair in each side recursively. L 21 12
62
Closest Pair of Points Algorithm.
Divide: draw vertical line L so that roughly ½n points are on each side. Conquer: find closest pair on each side recursively. Combine: find closest pair with one point in each side (seems like Θ(n²)). Return best of 3 solutions. (Figure: 12 on the left, 21 on the right, 8 across the line.)
63
Closest Pair of Points

Find closest pair with one point in each side, assuming that distance < δ. (Figure: δ = min(12, 21) = 12.)
Observation: only need to consider points within δ of line L.
Sort points in the 2δ-strip by their y coordinate.
Only check distances of those within 11 positions in the sorted list!
67
Closest Pair of Points

Def. Let si be the point in the 2δ-strip with the ith smallest y-coordinate.

Claim. If |i - j| >= 12, then the distance between si and sj is at least δ.
Pf. No two points lie in the same ½δ-by-½δ box. Two points at least 2 rows apart are separated by at least 2(½δ) = δ. ▪

Fact. Still true if we replace 12 with 7 (can reduce neighbors checked to 6).

Note: no points lie on the median line; the assumption that no two points have the same x coordinate ensures that the median line can be drawn in this way.
68
Closest Pair Algorithm
Closest-Pair(p1, …, pn) {
    Compute separation line L such that half the points        O(n log n)
    are on one side and half on the other side.
    δ1 = Closest-Pair(left half)                               2T(n / 2)
    δ2 = Closest-Pair(right half)
    δ = min(δ1, δ2)
    Delete all points further than δ from separation line L    O(n)
    Sort remaining points by y-coordinate.                     O(n log n)
    Scan points in y-order and compare distance between        O(n)
    each point and next 11 neighbors. If any of these
    distances is less than δ, update δ.
    return δ.
}

(Separation can be done in O(n) using a linear-time median algorithm.)
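The pseudocode above can be sketched in Python. This is the simpler O(n log² n) variant (it re-sorts the strip at each level; the next slide explains how merging pre-sorted lists removes that factor). Names and structure are mine:

```python
from math import hypot, inf

def closest_pair(points):
    """Smallest pairwise Euclidean distance among >= 2 points (x, y)."""
    px = sorted(points)                      # sort once by x-coordinate

    def rec(px):
        if len(px) <= 3:                     # base case: brute force
            d = inf
            for i in range(len(px)):
                for j in range(i + 1, len(px)):
                    d = min(d, hypot(px[i][0] - px[j][0], px[i][1] - px[j][1]))
            return d
        mid = len(px) // 2
        x_mid = px[mid][0]                   # separation line L
        d = min(rec(px[:mid]), rec(px[mid:]))
        # keep only points within d of L, sorted by y-coordinate
        strip = sorted((p for p in px if abs(p[0] - x_mid) < d),
                       key=lambda p: p[1])
        # compare each strip point with its next 11 neighbors in y-order
        for i, p in enumerate(strip):
            for q in strip[i + 1 : i + 12]:
                if q[1] - p[1] >= d:
                    break
                d = min(d, hypot(p[0] - q[0], p[1] - q[1]))
        return d

    return rec(px)
```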
69
Closest Pair of Points: Analysis
Running time. T(n) <= 2T(n/2) + O(n log n) ⇒ T(n) = O(n log² n).

Q. Can we achieve O(n log n)?
A. Yes. Don't sort the points in the strip from scratch each time. Each recursive call returns two lists: all points sorted by y coordinate, and all points sorted by x coordinate. Sort by merging two pre-sorted lists.
70
5.5 Integer Multiplication
"Divide and conquer", Caesar's other famous quote "I came, I saw, I conquered" Divide-and-conquer idea dates back to Julius Caesar. Favorite war tactic was to divide an opposing army in two halves, and then assault one half with his entire force.
71
Integer Arithmetic

Add. Given two n-digit integers a and b, compute a + b. O(n) bit operations.

Multiply. Given two n-digit integers a and b, compute a × b. Brute force solution: Θ(n²) bit operations.
72
Divide-and-Conquer Multiplication: Warmup
To multiply two n-digit integers x and y (assuming n is a power of 2):
Multiply four ½n-digit integers.
Add two ½n-digit integers, and shift to obtain result.

Write x = 2^(n/2) x1 + x0 and y = 2^(n/2) y1 + y0. Then
xy = 2^n x1 y1 + 2^(n/2) (x1 y0 + x0 y1) + x0 y0.

T(n) = 4T(n/2) + Θ(n)  ⇒  T(n) = Θ(n²).
73
Karatsuba Multiplication
To multiply two n-digit integers x and y:
Add two ½n-digit integers.
Multiply three ½n-digit integers.
Add, subtract, and shift ½n-digit integers to obtain result.

With x = 2^(n/2) x1 + x0 and y = 2^(n/2) y1 + y0, the cross terms satisfy
x1 y0 + x0 y1 = (x1 + x0)(y1 + y0) - x1 y1 - x0 y0,
so three half-size products suffice. T(n) <= 3T(n/2) + O(n) ⇒ T(n) = O(n^(log₂ 3)) = O(n^1.585).

Theorem. [Karatsuba-Ofman, 1962] Can multiply two n-digit integers in O(n^1.585) bit operations.

(Karatsuba also works for multiplying two degree n univariate polynomials.)
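The three-products trick above, as a Python sketch over bits rather than decimal digits (my own rendering, not the slides' code):

```python
def karatsuba(a, b):
    """Multiply non-negative integers using 3 half-size recursive products."""
    if a < 10 or b < 10:                      # small base case: multiply directly
        return a * b
    m = max(a.bit_length(), b.bit_length()) // 2
    a1, a0 = a >> m, a & ((1 << m) - 1)       # a = a1 * 2^m + a0
    b1, b0 = b >> m, b & ((1 << m) - 1)       # b = b1 * 2^m + b0
    p1 = karatsuba(a1, b1)
    p0 = karatsuba(a0, b0)
    # (a1 + a0)(b1 + b0) - p1 - p0 = a1*b0 + a0*b1, the cross terms
    pm = karatsuba(a1 + a0, b1 + b0) - p1 - p0
    return (p1 << (2 * m)) + (pm << m) + p0
```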
74
Karatsuba: Recursion Tree
Recursion tree (figure): the root costs n; depth k has 3^k subproblems of size n/2^k, costing 3^k (n/2^k) in total; the 3^(lg n) leaves T(2) cost 3^(lg n) (2). Summing the geometric series over the lg n levels gives T(n) = Σ_{k=0..lg n} n (3/2)^k = 3 n^(lg 3) - 2n = O(n^1.585).
75
Matrix Multiplication
"Divide and conquer", Caesar's other famous quote "I came, I saw, I conquered" Divide-and-conquer idea dates back to Julius Caesar. Favorite war tactic was to divide an opposing army in two halves, and then assault one half with his entire force.
76
Matrix Multiplication
Matrix multiplication. Given two n-by-n matrices A and B, compute C = AB (among the most basic problems in linear algebra and scientific computing).

Brute force. Θ(n³) arithmetic operations.

Fundamental question. Can we improve upon brute force?
77
Matrix Multiplication: Warmup
Divide-and-conquer.
Divide: partition A and B into ½n-by-½n blocks.
Conquer: multiply 8 pairs of ½n-by-½n blocks recursively.
Combine: add appropriate products using 4 matrix additions.

T(n) = 8T(n/2) + Θ(n²)  ⇒  T(n) = Θ(n³).

(For scalars it is probably not worth eliminating one multiplication at the expense of 14 extra additions, since the costs of an add and a multiply are similar; but for matrices it can make a huge difference, since standard matrix multiplies are much more expensive than additions.)
78
Matrix Multiplication: Key Idea
Key idea. Multiply 2-by-2 block matrices with only 7 multiplications.
7 multiplications.
18 = 10 + 8 additions (or subtractions).

(Somewhat counterintuitive to be subtracting, especially if all original matrix entries are positive.)

Winograd's variant. 7 multiplications, 15 additions, but less numerically stable than Strassen.
79
Fast Matrix Multiplication
Fast matrix multiplication. (Strassen, 1969)
Divide: partition A and B into ½n-by-½n blocks.
Compute: 14 ½n-by-½n matrices via 10 matrix additions.
Conquer: multiply 7 pairs of ½n-by-½n matrices recursively.
Combine: 7 products into 4 terms using 8 matrix additions.

Analysis. Assume n is a power of 2, and let T(n) = # arithmetic operations.
T(n) = 7T(n/2) + Θ(n²)  ⇒  T(n) = Θ(n^(log₂ 7)) = O(n^2.81).
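The 7-multiplication identity at the heart of Strassen's method, shown for plain 2-by-2 matrices (in the full algorithm the entries a…h are ½n-by-½n blocks; this scalar sketch is my own):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications (Strassen)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)               # the 7 products
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # combine into the 4 entries of C = AB using 8 additions/subtractions
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4,           p1 + p5 - p3 - p7]]
```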
80
Fast Matrix Multiplication in Practice
Implementation issues. Sparsity. Caching effects. Numerical stability. Odd matrix dimensions. Crossover to classical algorithm around n = 128. Common misperception: "Strassen is only a theoretical curiosity." Advanced Computation Group at Apple Computer reports 8x speedup on G4 Velocity Engine when n ~ 2,500. Range of instances where it's useful is a subject of controversy. Remark. Can "Strassenize" Ax=b, determinant, eigenvalues, and other matrix ops. Parallelization can also be an issue. Not strongly numerically stable, but in practice behavior is better than theory.
81
Fast Matrix Multiplication in Theory
Q. Multiply two 2-by-2 matrices with only 7 scalar multiplications?
A. Yes! [Strassen, 1969]

Q. Multiply two 2-by-2 matrices with only 6 scalar multiplications?
A. Impossible. [Hopcroft and Kerr, 1971] (The Hopcroft-Kerr bound applies to the non-commutative case.)

Q. Two 3-by-3 matrices with only 21 scalar multiplications?
A. Also impossible.

Q. Two 70-by-70 matrices with only 143,640 scalar multiplications?
A. Yes! [Pan, 1980]

Decimal wars. December 1979: O(n^2.521813). January 1980: O(n^2.521801). ("Imagine the excitement in January 1980 when this was improved.")
82
Fast Matrix Multiplication in Theory
Best known. O(n^2.376). [Coppersmith-Winograd, 1987]

Conjecture. O(n^(2+ε)) for any ε > 0.

Caveat. Theoretical improvements to Strassen are progressively less practical.
83
5.6 Convolution and FFT
84
Fast Fourier Transform: Applications
Applications. Optics, acoustics, quantum physics, telecommunications, control systems, signal processing, speech recognition, data compression, image processing. DVD, JPEG, MP3, MRI, CAT scan. Numerical solutions to Poisson's equation. (Variants of the FFT are used for JPEG, e.g., fast sine/cosine transforms.)

Significance. Perhaps the single algorithmic discovery that has had the greatest practical impact in history. Progress in these areas was limited by the lack of fast algorithms.

"The FFT is one of the truly great computational developments of this [20th] century. It has changed the face of science and engineering so much that it is not an exaggeration to say that life as we know it would be very different without the FFT." - Charles van Loan
85
Fast Fourier Transform: Brief History
Gauss (1805, 1866). Analyzed periodic motion of asteroid Ceres. Runge-König (1924). Laid theoretical groundwork. Danielson-Lanczos (1942). Efficient algorithm. Cooley-Tukey (1965). Monitoring nuclear tests in Soviet Union and tracking submarines. Rediscovered and popularized FFT. Importance not fully realized until advent of digital computers. Gauss' algorithm published posthumously in 1866
86
Polynomials: Coefficient Representation
Polynomial. [coefficient representation] A(x) = a0 + a1 x + … + a_{n-1} x^{n-1}.
Add: O(n) arithmetic operations.
Evaluate: O(n) using Horner's method.
Multiply (convolve): O(n²) using brute force. (Can also multiply in O(n^1.585) using Karatsuba polynomial multiplication.)
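Horner's O(n) evaluation, mentioned above, factors the polynomial as a0 + x(a1 + x(a2 + …)) so each coefficient costs one multiply and one add (a sketch of the classic method, not code from the slides):

```python
def horner(coeffs, x):
    """Evaluate a0 + a1*x + ... + a_{n-1}*x^(n-1) with n multiplies and adds.

    coeffs[i] is the coefficient of x^i.
    """
    result = 0
    for a in reversed(coeffs):   # work from the highest power inward
        result = result * x + a
    return result
```

For example, `horner([1, 2, 3], 2)` computes 1 + 2·2 + 3·4 = 17.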
87
Polynomials: Point-Value Representation
Fundamental theorem of algebra. [Gauss, PhD thesis] A degree n polynomial with complex coefficients has n complex roots. (A deep theorem; the proof requires analysis. Conjectured in the 17th century, but not rigorously proved until Gauss's PhD thesis, which established imaginary numbers as fundamental mathematical objects.)

Corollary. A degree n-1 polynomial A(x) is uniquely specified by its evaluation at n distinct values of x, i.e., by the points (xj, yj) with yj = A(xj).
88
Polynomials: Point-Value Representation
Polynomial. [point-value representation]
Add: O(n) arithmetic operations.
Multiply: O(n), but need 2n-1 points.
Evaluate: O(n²) using Lagrange's formula (which is numerically unstable).
89
Converting Between Two Polynomial Representations
Tradeoff. Fast evaluation or fast multiplication. We want both!

Representation    Multiply    Evaluate
Coefficient       O(n²)       O(n)
Point-value       O(n)        O(n²)

Goal. Make all ops fast by efficiently converting between the two representations (coefficient ⇄ point-value).
90
Converting Between Two Polynomial Representations: Brute Force
Coefficient to point-value. Given a polynomial a0 + a1 x + … + a_{n-1} x^{n-1}, evaluate it at n distinct points x0, ..., x_{n-1}. O(n²) for the matrix-vector multiply.

Point-value to coefficient. Given n distinct points x0, ..., x_{n-1} and values y0, ..., y_{n-1}, find the unique polynomial a0 + a1 x + … + a_{n-1} x^{n-1} that has the given values at the given points. O(n³) for Gaussian elimination. (The Vandermonde matrix is invertible iff the xi are distinct.)
91
Coefficient to Point-Value Representation: Intuition
Coefficient to point-value. Given a polynomial a0 + a1 x + … + a_{n-1} x^{n-1}, evaluate it at n distinct points x0, ..., x_{n-1}.

Divide. Break the polynomial up into even and odd powers. For example, with
A(x) = a0 + a1x + a2x² + a3x³ + a4x⁴ + a5x⁵ + a6x⁶ + a7x⁷:
Aeven(x) = a0 + a2x + a4x² + a6x³.
Aodd(x) = a1 + a3x + a5x² + a7x³.
A(x) = Aeven(x²) + x Aodd(x²).
A(-x) = Aeven(x²) - x Aodd(x²).

Intuition. Choose the two points to be ±1.
A(1) = Aeven(1) + Aodd(1).
A(-1) = Aeven(1) - Aodd(1).
Can evaluate a polynomial of degree n at 2 points by evaluating two polynomials of degree ½n at 1 point.
92
Coefficient to Point-Value Representation: Intuition
Coefficient to point-value. Given a polynomial a0 + a1 x + … + a_{n-1} x^{n-1}, evaluate it at n distinct points x0, ..., x_{n-1}.

Divide. Break the polynomial up into even and odd powers, as before:
Aeven(x) = a0 + a2x + a4x² + a6x³.
Aodd(x) = a1 + a3x + a5x² + a7x³.
A(x) = Aeven(x²) + x Aodd(x²).
A(-x) = Aeven(x²) - x Aodd(x²).

Intuition. Choose the four points to be ±1, ±i.
A(1) = Aeven(1) + Aodd(1).
A(-1) = Aeven(1) - Aodd(1).
A(i) = Aeven(-1) + i Aodd(-1).
A(-i) = Aeven(-1) - i Aodd(-1).
Can evaluate a polynomial of degree n at 4 points by evaluating two polynomials of degree ½n at 2 points.
93
Discrete Fourier Transform
Coefficient to point-value. Given a polynomial a0 + a1 x + … + a_{n-1} x^{n-1}, evaluate it at n distinct points x0, ..., x_{n-1}.

Key idea: choose xk = ω^k where ω is a principal nth root of unity. The resulting map (a0, ..., a_{n-1}) ↦ (y0, ..., y_{n-1}) with yk = A(ω^k) is the discrete Fourier transform; its matrix is the Fourier matrix Fn.
94
Roots of Unity

Def. An nth root of unity is a complex number x such that x^n = 1.

Fact. The nth roots of unity are ω^0, ω^1, …, ω^(n-1) where ω = e^(2πi/n).
Pf. (ω^k)^n = (e^(2πik/n))^n = (e^(πi))^(2k) = (-1)^(2k) = 1.

Fact. The ½nth roots of unity are ν^0, ν^1, …, ν^(n/2-1) where ν = e^(4πi/n).

Fact. ω² = ν, and (ω²)^k = ν^k.

(The roots of unity form a group under multiplication, similar to the additive group Z_n of integers modulo n.)

Figure (n = 8): ω^0 = ν^0 = 1, ω^2 = ν^1 = i, ω^4 = ν^2 = -1, ω^6 = ν^3 = -i.
95
Fast Fourier Transform
Goal. Evaluate a degree n-1 polynomial A(x) = a0 + … + a_{n-1} x^{n-1} at its nth roots of unity: ω^0, ω^1, …, ω^(n-1).

Divide. Break the polynomial up into even and odd powers.
Aeven(x) = a0 + a2 x + a4 x² + … + a_{n-2} x^(n/2-1).
Aodd(x) = a1 + a3 x + a5 x² + … + a_{n-1} x^(n/2-1).
A(x) = Aeven(x²) + x Aodd(x²).

Conquer. Evaluate Aeven(x) and Aodd(x) at the ½nth roots of unity: ν^0, ν^1, …, ν^(n/2-1).

Combine.
A(ω^k) = Aeven(ν^k) + ω^k Aodd(ν^k), 0 <= k < n/2
A(ω^(k+n/2)) = Aeven(ν^k) - ω^k Aodd(ν^k), 0 <= k < n/2
using ν^k = (ω^k)² = (ω^(k+n/2))² and ω^(k+n/2) = -ω^k.
96
FFT Algorithm

fft(n, a0, a1, …, an-1) {
    if (n == 1) return a0
    (e0, e1, …, en/2-1) ← FFT(n/2, a0, a2, a4, …, an-2)
    (d0, d1, …, dn/2-1) ← FFT(n/2, a1, a3, a5, …, an-1)
    for k = 0 to n/2 - 1 {
        ωk ← e^(2πik/n)
        yk ← ek + ωk dk
        yk+n/2 ← ek - ωk dk
    }
    return (y0, y1, …, yn-1)
}
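The same recursion in runnable Python (a sketch assuming, as the slides do, that n is a power of 2):

```python
import cmath

def fft(a):
    """Evaluate the polynomial with coefficients a at the nth roots of unity.

    Recursive radix-2 FFT; len(a) must be a power of 2.
    """
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])        # A_even at the (n/2)th roots of unity
    odd = fft(a[1::2])         # A_odd  at the (n/2)th roots of unity
    y = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(2j * cmath.pi * k / n)   # omega^k
        y[k] = even[k] + w * odd[k]            # A(omega^k)
        y[k + n // 2] = even[k] - w * odd[k]   # A(omega^(k+n/2)) = A(-omega^k)
    return y
```

For instance, the constant polynomial 1 evaluates to 1 at every root of unity, and 1 + x + x² + x³ evaluates to 4 at ω^0 = 1 and 0 at the other 4th roots.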
97
FFT Summary

Theorem. The FFT algorithm evaluates a degree n-1 polynomial at each of the nth roots of unity in O(n log n) steps (assuming n is a power of 2).

Running time. T(2n) = 2T(n) + O(n) ⇒ T(n) = O(n log n).

coefficient representation → FFT, O(n log n) → point-value representation.
98
Recursion Tree a0, a1, a2, a3, a4, a5, a6, a7 perfect shuffle
each recursive call performs a "perfect shuffle" leaves of tree are in "bit-reversed" order - if you read the bits backwards, it counts from 000 to 111 in binary a0, a4 a2, a6 a1, a5 a3, a7 a0 a4 a2 a6 a1 a5 a3 a7 000 100 010 110 001 101 011 111 "bit-reversed" order
99
Point-Value to Coefficient Representation: Inverse DFT
Goal. Given the values y0, ..., y_{n-1} of a degree n-1 polynomial at the n points ω^0, ω^1, …, ω^(n-1), find the unique polynomial a0 + a1 x + … + a_{n-1} x^{n-1} that has the given values at the given points.

Inverse DFT: multiply by the inverse of the Fourier matrix, (Fn)^-1.
100
Inverse FFT

Claim. The inverse of the Fourier matrix Fn is the matrix Gn with entries (Gn)_kj = (1/n) ω^(-kj).

Consequence. To compute the inverse FFT, apply the same algorithm but use ω^-1 = e^(-2πi/n) as the principal nth root of unity (and divide by n).
101
Inverse FFT: Proof of Correctness
Claim. Fn and Gn are inverses.
Pf. The (k, k') entry of Fn Gn is (1/n) Σ_{j=0..n-1} ω^(kj) ω^(-jk') = (1/n) Σ_{j=0..n-1} ω^((k-k')j), which by the summation lemma is 1 if k = k' and 0 otherwise.

Summation lemma. Let ω be a principal nth root of unity. Then Σ_{j=0..n-1} ω^(kj) = n if k is a multiple of n, and 0 otherwise.
Pf. If k is a multiple of n then ω^k = 1, so every term is 1 and the sum is n. Otherwise ω^k ≠ 1 is a root of x^n - 1 = (x - 1)(1 + x + x² + … + x^(n-1)), so 1 + ω^k + ω^(2k) + … + ω^((n-1)k) = 0. ▪
102
Inverse FFT: Algorithm
ifft(n, a0, a1, …, an-1) {
    if (n == 1) return a0
    (e0, e1, …, en/2-1) ← IFFT(n/2, a0, a2, a4, …, an-2)
    (d0, d1, …, dn/2-1) ← IFFT(n/2, a1, a3, a5, …, an-1)
    for k = 0 to n/2 - 1 {
        ωk ← e^(-2πik/n)
        yk ← (ek + ωk dk) / 2
        yk+n/2 ← (ek - ωk dk) / 2
    }
    return (y0, y1, …, yn-1)
}

(Dividing by 2 at each of the log₂ n levels accumulates the overall 1/n factor.)
103
Inverse FFT Summary

Theorem. The inverse FFT algorithm interpolates a degree n-1 polynomial given its values at each of the nth roots of unity in O(n log n) steps (assuming n is a power of 2).

point-value representation → inverse FFT, O(n log n) → coefficient representation.
104
Polynomial Multiplication
Theorem. Can multiply two degree n-1 polynomials in O(n log n) steps:
coefficient representation → FFT, O(n log n) → point-value multiplication, O(n) → inverse FFT, O(n log n) → coefficient representation.
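The full pipeline (FFT, pointwise multiply, inverse FFT) as a self-contained Python sketch; the helper `_fft` runs the same butterfly with ω or ω^-1, and the final division by n completes the inverse transform:

```python
import cmath

def _fft(a, invert=False):
    """Radix-2 FFT; with invert=True uses omega^-1 (unnormalized inverse DFT)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = _fft(a[0::2], invert)
    odd = _fft(a[1::2], invert)
    sign = -1 if invert else 1
    y = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        y[k] = even[k] + w * odd[k]
        y[k + n // 2] = even[k] - w * odd[k]
    return y

def poly_multiply(a, b):
    """Multiply coefficient vectors a and b in O(n log n).

    Pads to a power-of-2 length >= 2n - 1, converts to point-value form,
    multiplies pointwise, converts back, and rounds to integer coefficients.
    """
    n = 1
    while n < len(a) + len(b) - 1:        # need at least 2n - 1 points
        n *= 2
    fa = _fft([complex(x) for x in a] + [0j] * (n - len(a)))
    fb = _fft([complex(x) for x in b] + [0j] * (n - len(b)))
    fc = [x * y for x, y in zip(fa, fb)]  # pointwise multiplication: O(n)
    c = _fft(fc, invert=True)             # unnormalized inverse DFT
    return [round((v / n).real) for v in c[: len(a) + len(b) - 1]]
```

For example, (1 + 2x)(3 + 4x) = 3 + 10x + 8x², i.e. `poly_multiply([1, 2], [3, 4])` gives `[3, 10, 8]`.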
105
FFT in Practice

Fastest Fourier Transform in the West (FFTW). [Frigo and Johnson] Optimized C library. Features: DFT, DCT, real, complex, any size, any dimension. Won the 1999 Wilkinson Prize for Numerical Software. Portable, competitive with vendor-tuned code.

Implementation details. Instead of executing a predetermined algorithm, it evaluates your hardware and uses a special-purpose compiler to generate an optimized algorithm catered to the "shape" of the problem. The core algorithm is a nonrecursive version of the Cooley-Tukey radix-2 FFT. O(n log n), even for prime sizes.
106
Integer Multiplication
Integer multiplication. Given two n-bit integers a = a_{n-1} … a1 a0 and b = b_{n-1} … b1 b0, compute their product c = a · b.

Convolution algorithm.
Form two polynomials A(x) = a0 + a1 x + … + a_{n-1} x^{n-1} and B(x) = b0 + b1 x + … + b_{n-1} x^{n-1}. Note: a = A(2), b = B(2).
Compute C(x) = A(x) B(x).
Evaluate C(2) = a · b.
Running time: O(n log n) complex arithmetic steps.

Theory. [Schönhage-Strassen, 1971] O(n log n log log n) bit operations.

Practice. [GNU Multiple Precision Arithmetic Library] GMP proclaims to be "the fastest bignum library on the planet." It uses brute force, Karatsuba, and FFT, depending on the size of n.
107
Extra Slides
108
Fourier Matrix Decomposition