
1  Algorithm Design and Analysis, Lectures 10-11: Divide and Conquer, Recurrences, Closest Pair. Adam Smith (9/10/10); based on slides by E. Demaine, C. Leiserson, S. Raskhodnikova, K. Wayne.

2  Divide and Conquer
– Break the problem into several parts.
– Solve each part recursively.
– Combine the solutions to the sub-problems into an overall solution.
Most common usage:
– Break a problem of size n into two equal parts of size n/2.
– Solve the two parts recursively.
– Combine the two solutions into an overall solution in linear time.
Consequence:
– Brute force: Θ(n²).
– Divide and conquer: Θ(n log n).
"Divide et impera. Veni, vidi, vici." - Julius Caesar

3  Sorting. Given n elements, rearrange them in ascending order.
Applications.
Obvious applications:
– Sort a list of names.
– Display Google PageRank results.
Problems that become easy once items are in sorted order:
– Find the median.
– Find the closest pair.
– Binary search in a database.
– Find duplicates in a mailing list.
Non-obvious applications:
– Data compression.
– Computer graphics.
– Computational biology.
– Load balancing on a parallel computer.
...

4  Mergesort [John von Neumann, 1945]
– Divide the array into two halves.  O(1)
– Recursively sort each half.  2T(n/2)
– Merge the two halves to make a sorted whole.  O(n)
Example: A L G O R I T H M S → divide → A L G O R | I T H M S → sort each half → A G L O R | H I M S T → merge → A G H I L M O R S T.

5  Merging. Combine two pre-sorted lists into a sorted whole.
How to merge efficiently?
– Linear number of comparisons.
– Use a temporary array.
Example: merging A G L O R with H I M S T; after four steps the output so far is A G H I.
Challenge for the bored: in-place merge [Kronrod, 1969], using only a constant amount of extra storage.
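A minimal Python sketch of the mergesort just described, with the merge done through a temporary list (the function names and the example call are mine, not from the slides):

```python
def merge(left, right):
    """Merge two pre-sorted lists into one sorted list using a temporary list.
    Uses a linear number of comparisons."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])   # at most one of these extends is non-empty
    result.extend(right[j:])
    return result

def merge_sort(a):
    """Divide, recursively sort both halves, merge. O(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print("".join(merge_sort(list("ALGORITHMS"))))  # AGHILMORST
```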

6  Recurrence for Mergesort.
T(n) = worst-case running time of Mergesort on an input of size n.
T(n) = Θ(1) if n = 1;  T(n) = 2T(n/2) + Θ(n) if n > 1.
(Strictly it should be T(⌈n/2⌉) + T(⌊n/2⌋), but it turns out not to matter asymptotically.)
We usually omit the base case because our algorithms always run in time Θ(1) when n is a small constant.
There are several methods for finding an upper bound on T(n).

7  Recursion Tree Method. A technique for guessing solutions to recurrences.
– Write out the tree of recursive calls.
– Each node is assigned the work done during that call to the procedure (dividing and combining).
– The total work is the sum of the work at all nodes.
After guessing the answer, we can prove by induction that it works.

8-12  Recursion Tree for Mergesort. Solve T(n) = 2T(n/2) + cn, where c > 0 is constant.
– The root costs cn; its two children (the size-n/2 subproblems) cost cn/2 each; the four nodes on the next level cost cn/4 each; level k holds the T(n/2^k) subproblems; the leaves cost Θ(1).
– Height of the tree: h = lg n.  Number of leaves: n.
– Each level contributes cn in total, and the leaves contribute Θ(n).
– Total: Θ(n lg n).
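For reference, the sum that the recursion tree encodes, written out (a standard calculation, shown here in LaTeX rather than taken verbatim from the slide):

```latex
T(n) = \sum_{i=0}^{\lg n - 1} 2^i \cdot \frac{cn}{2^i} + \Theta(n)
     = \sum_{i=0}^{\lg n - 1} cn + \Theta(n)
     = cn \lg n + \Theta(n)
     = \Theta(n \lg n).
```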

13  Counting Inversions.
A music site tries to match your song preferences with others'.
– You rank n songs.
– The music site consults its database to find people with similar tastes.
Similarity metric: the number of inversions between two rankings.
– My rank: 1, 2, …, n.
– Your rank: a_1, a_2, …, a_n.
– Songs i and j are inverted if i < j but a_i > a_j.
Brute force: check all Θ(n²) pairs i and j.
Example (songs A-E): my ranking 1 2 3 4 5, your ranking 1 3 4 2 5; inversions: 3-2 and 4-2.
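For concreteness, a brute-force Θ(n²) counter matching the definition above (a sketch; the names are mine):

```python
def count_inversions_brute(a):
    """Count pairs (i, j) with i < j and a[i] > a[j] by checking all pairs."""
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n) if a[i] > a[j])

print(count_inversions_brute([1, 3, 4, 2, 5]))  # 2  (the inversions 3-2 and 4-2)
```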

14  Applications of counting inversions:
– Voting theory.
– Collaborative filtering.
– Measuring the "sortedness" of an array.
– Sensitivity analysis of Google's ranking function.
– Rank aggregation for meta-searching on the Web.
– Nonparametric statistics (e.g., Kendall's tau distance).

15  Counting Inversions: Algorithm. Divide-and-conquer.
Input list: 1 5 4 8 10 2 6 9 12 11 3 7

16  Counting Inversions: Algorithm. Divide-and-conquer.
– Divide: separate the list into two pieces.
1 5 4 8 10 2 | 6 9 12 11 3 7
Divide: Θ(1).

17  Counting Inversions: Algorithm. Divide-and-conquer.
– Divide: separate the list into two pieces.
– Conquer: recursively count the inversions in each half.
1 5 4 8 10 2 | 6 9 12 11 3 7
5 blue-blue inversions: 5-4, 5-2, 4-2, 8-2, 10-2.
8 green-green inversions: 6-3, 9-3, 9-7, 12-3, 12-7, 12-11, 11-3, 11-7.
Divide: Θ(1). Conquer: 2T(n/2).

18  Counting Inversions: Algorithm. Divide-and-conquer.
– Divide: separate the list into two pieces.
– Conquer: recursively count the inversions in each half.
– Combine: count the inversions where a_i and a_j are in different halves, and return the sum of the three quantities.
1 5 4 8 10 2 | 6 9 12 11 3 7
9 blue-green inversions: 5-3, 4-3, 8-6, 8-3, 8-7, 10-6, 10-9, 10-3, 10-7.
Total = 5 + 8 + 9 = 22.
Divide: Θ(1). Conquer: 2T(n/2). Combine: ???

19  Counting Inversions: Combine.
Combine: count blue-green inversions.
– Assume each half is sorted.
– Count the inversions where a_i and a_j are in different halves.
– Merge the two sorted halves into a sorted whole (to maintain the sorted invariant).
Example: merging the sorted halves 3 7 10 14 18 19 and 2 11 16 17 23 25 gives 13 blue-green inversions: 6 + 3 + 2 + 2 + 0 + 0.
Count: Θ(n). Merge: Θ(n).
T(n) = 2T(n/2) + Θ(n).  Solution: T(n) = Θ(n log n).

20  Implementation.
Pre-condition [Merge-and-Count]: A and B are sorted.
Post-condition [Sort-and-Count]: L is sorted.

Sort-and-Count(L) {
   if list L has one element
      return 0 and the list L
   Divide the list into two halves A and B
   (r_A, A) ← Sort-and-Count(A)
   (r_B, B) ← Sort-and-Count(B)
   (r, L)   ← Merge-and-Count(A, B)
   return r_A + r_B + r and the sorted list L
}
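A runnable Python version of Sort-and-Count / Merge-and-Count (the function names follow the pseudocode; everything else is my adaptation):

```python
def merge_and_count(a, b):
    """Merge sorted lists a and b and count pairs (x in a, y in b) with x > y.
    Whenever an element of b is copied out, every element still waiting in a
    forms an inversion with it."""
    merged, inversions = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
            inversions += len(a) - i   # all remaining elements of a exceed b[j]
    merged.extend(a[i:]); merged.extend(b[j:])
    return inversions, merged

def sort_and_count(lst):
    """Return (number of inversions in lst, sorted copy of lst). O(n log n)."""
    if len(lst) <= 1:
        return 0, lst
    mid = len(lst) // 2
    r_a, a = sort_and_count(lst[:mid])
    r_b, b = sort_and_count(lst[mid:])
    r, merged = merge_and_count(a, b)
    return r_a + r_b + r, merged

print(sort_and_count([1, 5, 4, 8, 10, 2, 6, 9, 12, 11, 3, 7])[0])  # 22
```

On the running example it reports 22 inversions, matching the 5 + 8 + 9 count above.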

21-26  Binary search. Find an element in a sorted array:
1. Divide: check the middle element.
2. Conquer: recursively search one subarray.
3. Combine: trivial.
Example: find 9 in 3 5 7 8 9 12 15 (compare with the middle element 8, then recurse on the right half 9 12 15, and so on).

27  Binary Search
BinarySearch(b, A[1..n])    ▷ find b in sorted array A
1. If n = 0 then return "not found"
2. If A[⌈n/2⌉] = b then return ⌈n/2⌉
3. If A[⌈n/2⌉] > b then
4.    return BinarySearch(b, A[1..⌈n/2⌉ − 1])
5. Else
6.    return ⌈n/2⌉ + BinarySearch(b, A[⌈n/2⌉ + 1..n])
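An iterative Python version of the same search (0-indexed; my adaptation of the pseudocode, returning None instead of "not found"):

```python
def binary_search(b, A):
    """Return an index i with A[i] == b in sorted list A, or None if b is absent."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # check the middle element
        if A[mid] == b:
            return mid
        elif A[mid] > b:
            hi = mid - 1            # search the left subarray
        else:
            lo = mid + 1            # search the right subarray
    return None

print(binary_search(9, [3, 5, 7, 8, 9, 12, 15]))  # 4
```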

28  Recurrence for binary search.
T(n) = 1 · T(n/2) + Θ(1)
(# subproblems) · (subproblem size)  +  (work dividing and combining)

29  Recurrence for binary search.
T(n) = 1 · T(n/2) + Θ(1)
Unrolling: T(n) = T(n/2) + c = T(n/4) + 2c = … = c⌈log n⌉ = Θ(lg n).

30  Exponentiation.
Problem: compute a^b, where b ∈ N is n bits long.
Question: how many multiplications?
Naive algorithm: Θ(b) = Θ(2^n) multiplications (exponential in the input length!).
Divide-and-conquer algorithm (repeated squaring):
  a^b = a^(b/2) · a^(b/2)                 if b is even;
  a^b = a^((b−1)/2) · a^((b−1)/2) · a     if b is odd.
T(b) = T(b/2) + Θ(1)  ⇒  T(b) = Θ(log b) = Θ(n).
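A short Python sketch of the repeated-squaring recursion (the function name and example are mine):

```python
def power(a, b):
    """Compute a**b with Theta(log b) multiplications by repeated squaring."""
    if b == 0:
        return 1
    half = power(a, b // 2)       # one recursive call on b/2 (or (b-1)/2)
    if b % 2 == 0:
        return half * half        # a^b = a^(b/2) * a^(b/2)
    return half * half * a        # a^b = a^((b-1)/2) * a^((b-1)/2) * a

print(power(3, 13))  # 1594323
```

Each call makes one recursive call on an exponent half as large plus one or two multiplications, matching T(b) = T(b/2) + Θ(1).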

31  So far: two recurrences.
Mergesort; counting inversions:  T(n) = 2T(n/2) + Θ(n) = Θ(n log n).
Binary search; exponentiation:   T(n) = 1·T(n/2) + Θ(1) = Θ(log n).
Master Theorem: a general method for solving such recurrences.

32  Review question: Exponentiation (recap of slide 30).
Problem: compute a^b, where b ∈ N is n bits long. How many multiplications?
Repeated squaring uses a^b = a^(b/2) · a^(b/2) if b is even, and a^((b−1)/2) · a^((b−1)/2) · a if b is odd, giving T(b) = T(b/2) + Θ(1) = Θ(log b) = Θ(n) multiplications, versus Θ(b) = Θ(2^n) for the naive algorithm (exponential in the input length!).

33  Review: the two recurrences from slide 31: T(n) = 2T(n/2) + Θ(n) = Θ(n log n) (mergesort, counting inversions) and T(n) = T(n/2) + Θ(1) = Θ(log n) (binary search, exponentiation). Next: the Master Theorem, a general method for solving recurrences.

34  The master method.
The master method applies to recurrences of the form
  T(n) = a T(n/b) + f(n),
where a ≥ 1, b > 1, and f is asymptotically positive, that is, f(n) > 0 for all n > n₀.
First step: compare f(n) to n^(log_b a).

35-36  Three common cases. Compare f(n) with n^(log_b a):
1. f(n) = O(n^(log_b a − ε)) for some constant ε > 0.
   f(n) grows polynomially slower than n^(log_b a) (by an n^ε factor).
   Solution: T(n) = Θ(n^(log_b a)).
2. f(n) = Θ(n^(log_b a) · lg^k n) for some constant k ≥ 0.
   f(n) and n^(log_b a) grow at similar rates.
   Solution: T(n) = Θ(n^(log_b a) · lg^(k+1) n).

37  Three common cases (cont.). Compare f(n) with n^(log_b a):
3. f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0.
   f(n) grows polynomially faster than n^(log_b a) (by an n^ε factor), and f(n) satisfies the regularity condition a·f(n/b) ≤ c·f(n) for some constant c < 1.
   Solution: T(n) = Θ(f(n)).
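As a small illustration (not on the slides), here is a toy classifier for the common special case f(n) = Θ(n^d · lg^k n), applying the three cases above; the function name and interface are my own:

```python
import math

def master(a, b, d, k=0):
    """Solve T(n) = a*T(n/b) + Theta(n^d * lg^k n) using the three cases above.
    Returns a string describing Theta(T(n)). Assumes f has this polylog form."""
    p = math.log(a, b)                       # the critical exponent log_b a
    if abs(d - p) < 1e-9:                    # Case 2: same polynomial rate
        return f"Theta(n^{p:.3g} * lg^{k + 1} n)"
    if d < p:                                # Case 1: f grows polynomially slower
        return f"Theta(n^{p:.3g})"
    return f"Theta(n^{d} * lg^{k} n)"        # Case 3 (regularity holds for this form of f)

print(master(2, 2, 1))   # mergesort:      Theta(n^1 * lg^1 n)
print(master(1, 2, 0))   # binary search:  Theta(n^0 * lg^1 n) = Theta(lg n)
print(master(4, 2, 3))   # slide 43:       Theta(n^3 * lg^0 n) = Theta(n^3)
```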

38-41  Idea of the master theorem.
Recursion tree for T(n) = a T(n/b) + f(n):
– The root costs f(n); it has a children, each costing f(n/b); the next level has a² nodes, each costing f(n/b²); and so on. Level i contributes a^i · f(n/b^i).
– Height: h = log_b n.
– Number of leaves: a^h = a^(log_b n) = n^(log_b a); each leaf costs Θ(1), so the leaves contribute Θ(n^(log_b a)).

42  Idea of the master theorem, Case 1: the weight increases geometrically from the root to the leaves, so the leaves hold a constant fraction of the total weight. Total: Θ(n^(log_b a)).

43  Examples.
Ex. T(n) = 4T(n/2) + n³.
a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n³.
Case 3: f(n) = Ω(n^(2+ε)) for ε = 1, and 4(n/2)³ ≤ c·n³ (regularity condition) for c = 1/2.
∴ T(n) = Θ(n³).

44  Examples (cont.).
Ex. T(n) = 4T(n/2) + n²/lg n.
a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n²/lg n.
The master method does not apply. In particular, for every constant ε > 0 we have n^ε = ω(lg n), so f(n) is not O(n^(2−ε)); yet f(n) is also not Θ(n² · lg^k n) for any k ≥ 0.

45  Notes. The Master Theorem was generalized by Akra and Bazzi to cover many more recurrences, roughly those of the form T(n) = a_1 T(n/b_1) + … + a_k T(n/b_k) + f(n), with several subproblems of different sizes. See http://www.dna.lth.se/home/Rolf_Karlsson/akrabazzi.ps

46  Multiplying large integers. Given n-bit integers a and b (in binary), compute c = ab.
Naive (grade-school) algorithm:
– Write a, b in binary.
– Compute n intermediate products (one shifted copy of a per bit of b).
– Do n additions.
– Total work: Θ(n²).
(The inputs a_{n−1} a_{n−2} … a_0 and b_{n−1} b_{n−2} … b_0 have n bits each; the output has 2n bits.)

47  Multiplying large integers.
Divide and conquer (attempt #1):
– Write a = A₁·2^(n/2) + A₀ and b = B₁·2^(n/2) + B₀.
– We want ab = A₁B₁·2^n + (A₁B₀ + A₀B₁)·2^(n/2) + A₀B₀.
– Multiply the n/2-bit integers recursively.
– T(n) = 4T(n/2) + Θ(n).
– Alas, this is still Θ(n²) (Master Theorem, Case 1).

48  Multiplying large integers.
Attempt #1 gives T(n) = 4T(n/2) + Θ(n), which is still Θ(n²). (Exercise: write out the recursion tree.)
Karatsuba's idea:
  (A₀ + A₁)(B₀ + B₁) = A₀B₀ + A₁B₁ + (A₀B₁ + A₁B₀),
so we can get away with 3 multiplications:
  x = A₁B₁,  y = A₀B₀,  z = (A₀ + A₁)(B₀ + B₁).
Now ab = A₁B₁·2^n + (A₁B₀ + A₀B₁)·2^(n/2) + A₀B₀ = x·2^n + (z − x − y)·2^(n/2) + y.

49  Multiplying large integers.
Multiply(n, a, b)    ▷ a and b are n-bit integers
                     ▷ assume n is a power of 2 for simplicity
1. If n ≤ 2 then use the grade-school algorithm, else:
2.    A₁ ← a div 2^(n/2);  B₁ ← b div 2^(n/2)
3.    A₀ ← a mod 2^(n/2);  B₀ ← b mod 2^(n/2)
4.    x ← Multiply(n/2, A₁, B₁)
5.    y ← Multiply(n/2, A₀, B₀)
6.    z ← Multiply(n/2, A₁ + A₀, B₁ + B₀)
7.    Output x·2^n + (z − x − y)·2^(n/2) + y
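A direct Python rendering of this pseudocode for nonnegative integers (the base-case threshold and the bit-length handling are my choices):

```python
def multiply(a, b):
    """Karatsuba multiplication: 3 recursive multiplications of half-size
    numbers instead of 4."""
    if a < 4 or b < 4:                          # base case: tiny numbers
        return a * b
    n = max(a.bit_length(), b.bit_length())
    half = n // 2
    A1, A0 = a >> half, a & ((1 << half) - 1)   # a = A1 * 2^half + A0
    B1, B0 = b >> half, b & ((1 << half) - 1)   # b = B1 * 2^half + B0
    x = multiply(A1, B1)
    y = multiply(A0, B0)
    z = multiply(A1 + A0, B1 + B0)
    return (x << (2 * half)) + ((z - x - y) << half) + y   # z - x - y = A1*B0 + A0*B1

print(multiply(1234567, 7654321) == 1234567 * 7654321)  # True
```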

50  Multiplying large integers.
The resulting recurrence: T(n) = 3T(n/2) + Θ(n).
Master Theorem, Case 1: T(n) = Θ(n^(log₂ 3)) = Θ(n^1.59…).
Note: there is an O(n log n) algorithm for multiplication (more on it later in the course).

51  Matrix multiplication

52  Matrix multiplication.
Input: A = [a_ij], B = [b_ij], with i, j = 1, 2, …, n.
Output: C = [c_ij] = A × B, where c_ij = Σ_k a_ik · b_kj.

53  Standard algorithm.
for i ← 1 to n do
  for j ← 1 to n do
    c_ij ← 0
    for k ← 1 to n do
      c_ij ← c_ij + a_ik · b_kj
Running time = Θ(n³).
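The same triple loop in Python on plain lists of lists (0-indexed sketch):

```python
def matmul(A, B):
    """Standard Theta(n^3) multiplication of two n x n matrices."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```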

54  Divide-and-conquer algorithm.
IDEA: view an n×n matrix as a 2×2 matrix of (n/2)×(n/2) submatrices:
  C = [[r, s], [t, u]],  A = [[a, b], [c, d]],  B = [[e, f], [g, h]],  C = A·B,
with
  r = ae + bg,  s = af + bh,  t = ce + dg,  u = cf + dh.
This is 8 recursive multiplications of (n/2)×(n/2) submatrices plus 4 additions of (n/2)×(n/2) submatrices.

55  Analysis of the D&C algorithm.
T(n) = 8 T(n/2) + Θ(n²)
(# submatrices = 8; submatrix size = n/2; work adding submatrices = Θ(n²).)
n^(log_b a) = n^(log₂ 8) = n³ ⇒ Case 1 ⇒ T(n) = Θ(n³).
No better than the ordinary algorithm.

56  Strassen's idea. Multiply 2×2 matrices with only 7 recursive multiplications.
P₁ = a·(f − h)
P₂ = (a + b)·h
P₃ = (c + d)·e
P₄ = d·(g − e)
P₅ = (a + d)·(e + h)
P₆ = (b − d)·(g + h)
P₇ = (a − c)·(e + f)
r = P₅ + P₄ − P₂ + P₆
s = P₁ + P₂
t = P₃ + P₄
u = P₅ + P₁ − P₃ − P₇
7 multiplications, 18 additions/subtractions. Note: no reliance on commutativity of multiplication!

57  Strassen's idea (checking the formula for r):
r = P₅ + P₄ − P₂ + P₆
  = (a + d)(e + h) + d(g − e) − (a + b)h + (b − d)(g + h)
  = ae + ah + de + dh + dg − de − ah − bh + bg + bh − dg − dh
  = ae + bg.

58  Strassen's algorithm.
1. Divide: partition A and B into (n/2) x (n/2) submatrices. Form the terms to be multiplied using + and −.
2. Conquer: perform 7 multiplications of (n/2) x (n/2) submatrices recursively.
3. Combine: form the product matrix C using + and − on (n/2) x (n/2) submatrices.
Recurrence: T(n) = 7 T(n/2) + Θ(n²).
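A compact Python sketch of Strassen's algorithm for n a power of 2 (the helper functions and the n = 1 base case are my additions; a practical implementation would fall back to the standard algorithm below some cutoff, e.g. n around 32 as the next slide notes):

```python
def add(X, Y):            # entrywise sum of two square matrices
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):            # entrywise difference
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply n x n matrices (n a power of 2) with 7 recursive multiplications."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Partition into (n/2) x (n/2) blocks: A = [[a,b],[c,d]], B = [[e,f],[g,h]]
    a = [row[:m] for row in A[:m]]; b = [row[m:] for row in A[:m]]
    c = [row[:m] for row in A[m:]]; d = [row[m:] for row in A[m:]]
    e = [row[:m] for row in B[:m]]; f = [row[m:] for row in B[:m]]
    g = [row[:m] for row in B[m:]]; h = [row[m:] for row in B[m:]]
    P1 = strassen(a, sub(f, h));         P2 = strassen(add(a, b), h)
    P3 = strassen(add(c, d), e);         P4 = strassen(d, sub(g, e))
    P5 = strassen(add(a, d), add(e, h)); P6 = strassen(sub(b, d), add(g, h))
    P7 = strassen(sub(a, c), add(e, f))
    r = add(sub(add(P5, P4), P2), P6)    # r = P5 + P4 - P2 + P6 = ae + bg
    s = add(P1, P2)                      # s = P1 + P2
    t = add(P3, P4)                      # t = P3 + P4
    u = sub(sub(add(P5, P1), P3), P7)    # u = P5 + P1 - P3 - P7
    # Reassemble C = [[r, s], [t, u]] row by row
    return [r[i] + s[i] for i in range(m)] + [t[i] + u[i] for i in range(m)]

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```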

59  Analysis of Strassen.
T(n) = 7 T(n/2) + Θ(n²).
n^(log_b a) = n^(log₂ 7) ≈ n^2.81 ⇒ Case 1 ⇒ T(n) = Θ(n^(lg 7)).
Best exponent to date (of theoretical interest only): Θ(n^2.376…).
The number 2.81 may not seem much smaller than 3, but the difference is in the exponent, so the impact on running time is significant. Strassen's algorithm beats the ordinary algorithm on today's machines for n ≈ 32 or so.

60  Closest pair

61  Closest Pair of Points.
Closest pair: given n points in the plane, find a pair with the smallest Euclidean distance between them.
Fundamental geometric primitive:
– Graphics, computer vision, geographic information systems, molecular modeling, air traffic control.
– Special case of nearest neighbor, Euclidean MST, Voronoi (the fast closest-pair algorithm inspired fast algorithms for these problems).
Brute force: check all pairs of points p and q with Θ(n²) comparisons.
1-D version (points on a line): O(n log n), easy via sorting.
Assumption (to make the presentation cleaner): no two points have the same x-coordinate.

62-63  Closest Pair of Points: First Attempt.
Divide: sub-divide the region into 4 quadrants.
Obstacle: it is impossible to ensure that there are n/4 points in each piece.

64-66  Closest Pair of Points. Algorithm:
– Divide: draw a vertical line L so that roughly ½n points are on each side.
– Conquer: find the closest pair in each side recursively (distances 16 and 21 in the figure).
– Combine: find the closest pair with one point in each side (this step seems like Θ(n²)); in the figure the best such pair has distance 8.
– Return the best of the 3 solutions.

67-70  Closest Pair of Points. Find the closest pair with one point in each side, assuming its distance is < δ, where δ = min(16, 21) is the best distance found within the two sides.
– Observation: we only need to consider points within δ of the line L.
– Sort the points in the 2δ-strip by their y-coordinate.
– Theorem: each point only needs to be checked against the points within 11 positions of it in the sorted list!

71  Closest Pair of Points.
Def. Let s_i be the point in the 2δ-strip with the i-th smallest y-coordinate.
Claim. If |i − j| ≥ 12, then the distance between s_i and s_j is at least δ.
Proof.
– No two points lie in the same ½δ-by-½δ box (two such points would lie on the same side of L at distance < δ).
– Two points at least 2 rows of boxes apart have distance ≥ 2(½δ) = δ. ▪
Fact. The claim is still true if we replace 12 with 7.

72  Closest Pair Algorithm.
Closest-Pair(p_1, …, p_n) {
   Compute a separation line L such that half the points        O(n log n)
   are on one side and half on the other side.
   δ₁ = Closest-Pair(left half)                                 2T(n/2)
   δ₂ = Closest-Pair(right half)
   δ  = min(δ₁, δ₂)
   Delete all points further than δ from the separation line L  O(n)
   Sort the remaining points by y-coordinate.                   O(n log n)
   Scan the points in y-order and compare the distance between  O(n)
   each point and its next 11 neighbors. If any of these
   distances is less than δ, update δ.
   return δ.
}
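A Python sketch that follows this pseudocode, i.e. the O(n log² n) version that re-sorts the strip by y in every call (function and variable names are mine):

```python
import math

def closest_pair(points):
    """Closest-pair distance among 2-D points, following the pseudocode above."""
    pts = sorted(points)                      # sort by x-coordinate

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def rec(P):                               # P is sorted by x
        n = len(P)
        if n <= 3:                            # brute force for tiny inputs
            return min((dist(P[i], P[j]) for i in range(n) for j in range(i + 1, n)),
                       default=float('inf'))
        mid = n // 2
        x_line = P[mid][0]                    # separation line L
        delta = min(rec(P[:mid]), rec(P[mid:]))
        strip = sorted((p for p in P if abs(p[0] - x_line) < delta),
                       key=lambda p: p[1])    # points within delta of L, by y
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 12]:     # next 11 neighbors in y-order
                delta = min(delta, dist(p, q))
        return delta

    return rec(pts)

print(closest_pair([(0, 0), (3, 4), (1, 1), (7, 7), (0.5, 0.9)]))  # ~0.5099
```

The next slide explains how to remove the per-call sort and reach O(n log n).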

73  Closest Pair of Points: Analysis.
Running time: T(n) = 2T(n/2) + O(n log n), which gives T(n) = O(n log² n).
Q. Can we achieve O(n log n)?
A. Yes. Don't sort the points in the strip from scratch each time:
– Sort the entire point set by x-coordinate only once.
– Each recursive call takes as input a set of points sorted by x-coordinate and returns the same points sorted by y-coordinate (together with the closest pair).
– Create the new y-sorted list by merging the two outputs from the recursive calls.

74  Divide and Conquer in low-dimensional geometry.
– A powerful technique for low-dimensional geometric problems.
– Intuition: points in different parts of the plane don't interfere too much.
– Example: convex hull in O(n log n) time, a la Mergesort:
  1. Convex-Hull(left half)     T(n/2)
  2. Convex-Hull(right half)    T(n/2)
  3. Merge                      Θ(n)
  (see Cormen et al., Chapter 33).

