9/10/10 A. Smith; based on slides by E. Demaine, C. Leiserson, S. Raskhodnikova, K. Wayne. Adam Smith, Algorithm Design and Analysis, Lectures 10-11: Divide and Conquer.

Lectures 10-11: Divide and Conquer. Recurrences. Nearest pair.

9/19/2008 S. Raskhodnikova; based on slides by E. Demaine, C. Leiserson, A. Smith, K. Wayne.

Divide and Conquer.
- Break up the problem into several parts.
- Solve each part recursively.
- Combine solutions to sub-problems into an overall solution.

Most common usage:
- Break up a problem of size n into two equal parts of size n/2.
- Solve the two parts recursively.
- Combine the two solutions into an overall solution in linear time.

Consequence:
- Brute force: Θ(n^2).
- Divide-and-conquer: Θ(n log n).

"Divide et impera. Veni, vidi, vici." - Julius Caesar

Sorting. Given n elements, rearrange in ascending order.

Applications:
- Obvious applications: sort a list of names; display Google PageRank results; binary search in a database; find duplicates in a mailing list.
- Problems that become easy once items are in sorted order: find the median; find the closest pair.
- Non-obvious applications: data compression; computer graphics; computational biology; load balancing on a parallel computer; ...

Mergesort.
- Divide the array into two halves. O(1)
- Recursively sort each half. 2T(n/2)
- Merge the two halves to make a sorted whole. O(n)

Example: ALGORITHMS → divide → ALGOR | ITHMS → sort → AGLOR | HIMST → merge → AGHILMORST.

John von Neumann (1945).
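The three steps above translate directly into code. A minimal Python sketch (the names `merge_sort` and `merge` are mine, not the slides'):

```python
def merge(left, right):
    """Merge two pre-sorted lists into one sorted list with a linear number of comparisons."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])    # at most one of these two tails is non-empty
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Divide, recursively sort each half, merge: T(n) = 2T(n/2) + O(n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print("".join(merge_sort(list("ALGORITHMS"))))  # → AGHILMORST
```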

Merging. Combine two pre-sorted lists into a sorted whole.

How to merge efficiently?
- Use a temporary array.
- Linear number of comparisons.

Challenge for the bored: in-place merge [Kronrod, 1969], using only a constant amount of extra storage.

Recurrence for Mergesort. T(n) = worst-case running time of Mergesort on an input of size n.

T(n) = Θ(1) if n = 1;  T(n) = 2T(n/2) + Θ(n) if n > 1.

- Strictly it should be T(⌈n/2⌉) + T(⌊n/2⌋), but it turns out not to matter asymptotically.
- We usually omit the base case because our algorithms always run in time Θ(1) when n is a small constant.
- There are several methods to find an upper bound on T(n).

Recursion Tree Method. A technique for guessing solutions to recurrences:
- Write out the tree of recursive calls.
- Each node gets assigned the work done during that call to the procedure (dividing and combining).
- Total work is the sum of the work at all nodes.
After guessing the answer, we can prove by induction that it works.

Recursion Tree for Mergesort. Solve T(n) = 2T(n/2) + cn, where c > 0 is constant.

The root costs cn; its two children cost cn/2 each (level total cn); the four grandchildren cost cn/4 each (level total cn); and so on, down to leaves costing Θ(1) each. The height is h = lg n, and #leaves = n, so the leaf level totals Θ(n).

Total: Θ(n lg n).
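To sanity-check the Θ(n lg n) total, the recurrence can be evaluated exactly for powers of two: for c = 1 and n = 2^k, the level-by-level sum above gives T(n) = cn lg n + cn. A small illustrative script (not from the slides):

```python
from functools import lru_cache

C = 1  # the constant c in T(n) = 2T(n/2) + c*n, with base case T(1) = C

@lru_cache(maxsize=None)
def T(n):
    """Evaluate the mergesort recurrence exactly (n a power of two)."""
    if n == 1:
        return C
    return 2 * T(n // 2) + C * n

for k in (4, 8, 10):
    n = 2 ** k
    # For powers of two, T(n) equals c*n*lg(n) + c*n exactly.
    print(n, T(n), C * n * k + C * n)
```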

Counting Inversions. A music site tries to match your song preferences with others'.
- You rank n songs.
- The music site consults its database to find people with similar tastes.

Similarity metric: number of inversions between two rankings.
- My rank: 1, 2, …, n.
- Your rank: a_1, a_2, …, a_n.
- Songs i and j are inverted if i < j but a_i > a_j.

Example (songs A-E): my ranks 1, 2, 3, 4, 5; your ranks 1, 3, 4, 2, 5; inversions: 3-2, 4-2.

Brute force: check all Θ(n^2) pairs i and j.

Applications:
- Voting theory.
- Collaborative filtering.
- Measuring the "sortedness" of an array.
- Sensitivity analysis of Google's ranking function.
- Rank aggregation for meta-searching on the Web.
- Nonparametric statistics (e.g., Kendall's tau distance).

Counting Inversions: Algorithm. Divide-and-conquer.
- Divide: separate the list into two pieces. Divide: Θ(1).
- Conquer: recursively count inversions in each half. Conquer: 2T(n/2).
- Combine: count inversions where a_i and a_j are in different halves, and return the sum of the three quantities. Combine: ???

Example: 1 5 4 8 10 2 | 6 9 12 11 3 7.
- 5 blue-blue inversions (within the left half): 5-4, 5-2, 4-2, 8-2, 10-2.
- 8 green-green inversions (within the right half): 6-3, 9-3, 9-7, 12-3, 12-7, 12-11, 11-3, 11-7.
- 9 blue-green inversions (across the halves): 5-3, 4-3, 8-6, 8-3, 8-7, 10-6, 10-9, 10-3, 10-7.
Total = 5 + 8 + 9 = 22.

Counting Inversions: Combine. Count blue-green inversions:
- Assume each half is sorted.
- Count inversions where a_i and a_j are in different halves.
- Merge the two sorted halves into a sorted whole (to maintain the sorted invariant).

(Figure: 13 blue-green inversions.)

Count: Θ(n). Merge: Θ(n).
T(n) = 2T(n/2) + Θ(n). Solution: T(n) = Θ(n log n).

Implementation.
Pre-condition. [Merge-and-Count] A and B are sorted.
Post-condition. [Sort-and-Count] L is sorted.

Sort-and-Count(L) {
   if list L has one element
      return 0 and the list L
   Divide the list into two halves A and B
   (r_A, A) ← Sort-and-Count(A)
   (r_B, B) ← Sort-and-Count(B)
   (r, L) ← Merge-and-Count(A, B)
   return r_A + r_B + r and the sorted list L
}
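A runnable version of Sort-and-Count and Merge-and-Count (a sketch; the Python names are mine):

```python
def sort_and_count(L):
    """Return (#inversions in L, sorted copy of L): T(n) = 2T(n/2) + O(n)."""
    if len(L) <= 1:
        return 0, list(L)
    mid = len(L) // 2
    r_a, A = sort_and_count(L[:mid])
    r_b, B = sort_and_count(L[mid:])
    r, merged = merge_and_count(A, B)
    return r_a + r_b + r, merged

def merge_and_count(A, B):
    """Merge sorted A, B and count pairs (a, b) with a in A, b in B, a > b."""
    out, count = [], 0
    i = j = 0
    while i < len(A) and j < len(B):
        if A[i] <= B[j]:
            out.append(A[i]); i += 1
        else:
            # B[j] precedes all len(A) - i remaining elements of A:
            # each of those pairs is a cross inversion.
            count += len(A) - i
            out.append(B[j]); j += 1
    out.extend(A[i:]); out.extend(B[j:])
    return count, out

print(sort_and_count([1, 5, 4, 8, 10, 2, 6, 9, 12, 11, 3, 7])[0])  # → 22
```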

Binary search. Find an element in a sorted array:
1. Divide: check the middle element.
2. Conquer: recursively search one subarray.
3. Combine: trivial.

Binary Search.

BinarySearch(b, A[1..n])   ▷ find b in sorted array A
1. If n = 0 then return "not found"
2. If A[⌈n/2⌉] = b then return ⌈n/2⌉
3. If A[⌈n/2⌉] > b then
4.    return BinarySearch(b, A[1..⌈n/2⌉-1])
5. Else
6.    return ⌈n/2⌉ + BinarySearch(b, A[⌈n/2⌉+1..n])
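An equivalent iterative implementation (a sketch; it uses 0-based indexing rather than the slide's A[1..n], and the name `binary_search` is mine):

```python
def binary_search(b, A):
    """Return a 0-based index i with A[i] == b, or None if b is not in sorted A."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2    # divide: check the middle element
        if A[mid] == b:
            return mid
        elif A[mid] < b:
            lo = mid + 1        # conquer: continue in the right subarray
        else:
            hi = mid - 1        # conquer: continue in the left subarray
    return None                 # subarray is empty: not found

print(binary_search(9, [3, 5, 7, 8, 9, 12, 15]))  # → 4
```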

Recurrence for binary search.

T(n) = 1·T(n/2) + Θ(1)
(# subproblems: 1; subproblem size: n/2; work dividing and combining: Θ(1).)

T(n) = T(n/2) + c = T(n/4) + 2c = … = c·⌈log n⌉ = Θ(lg n).

Exponentiation.
Problem: compute a^b, where b ∈ N is n bits long.
Question: how many multiplications?

Naive algorithm: Θ(b) = Θ(2^n) multiplications (exponential in the input length!).

Divide-and-conquer algorithm:
  a^b = a^(b/2) · a^(b/2)              if b is even;
  a^b = a^((b-1)/2) · a^((b-1)/2) · a  if b is odd.

T(b) = T(b/2) + Θ(1)  ⇒  T(b) = Θ(log b) = Θ(n).
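The even/odd recursion above can be sketched as follows (the function name `power` is mine):

```python
def power(a, b):
    """Compute a**b for integer b >= 0 with Θ(log b) multiplications."""
    if b == 0:
        return 1
    half = power(a, b // 2)     # a^(b//2); note b//2 == (b-1)//2 when b is odd
    if b % 2 == 0:
        return half * half      # b even: a^b = a^(b/2) * a^(b/2)
    return half * half * a      # b odd:  a^b = a^((b-1)/2) * a^((b-1)/2) * a

print(power(3, 13))  # → 1594323
```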

So far: 2 recurrences.
- Mergesort; Counting Inversions: T(n) = 2T(n/2) + Θ(n) = Θ(n log n).
- Binary Search; Exponentiation: T(n) = 1·T(n/2) + Θ(1) = Θ(log n).

Master Theorem: a method for solving recurrences.

9/22/2008 A. Smith; based on slides by E. Demaine, C. Leiserson, S. Raskhodnikova.

Review Question: Exponentiation. Compute a^b, where b ∈ N is n bits long. How many multiplications? Divide and conquer gives T(b) = T(b/2) + Θ(1), so T(b) = Θ(log b) = Θ(n); the naive algorithm uses Θ(b) = Θ(2^n) multiplications (exponential in the input length!).


The master method. The master method applies to recurrences of the form T(n) = a·T(n/b) + f(n), where a ≥ 1, b > 1, and f is asymptotically positive, that is, f(n) > 0 for all n > n_0.

First step: compare f(n) to n^(log_b a).

Three common cases. Compare f(n) with n^(log_b a):

1. f(n) = O(n^(log_b a - ε)) for some constant ε > 0: f(n) grows polynomially slower than n^(log_b a) (by an n^ε factor). Solution: T(n) = Θ(n^(log_b a)).

2. f(n) = Θ(n^(log_b a) · lg^k n) for some constant k ≥ 0: f(n) and n^(log_b a) grow at similar rates. Solution: T(n) = Θ(n^(log_b a) · lg^(k+1) n).

Three common cases (cont.). Compare f(n) with n^(log_b a):

3. f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0: f(n) grows polynomially faster than n^(log_b a) (by an n^ε factor), and f(n) satisfies the regularity condition a·f(n/b) ≤ c·f(n) for some constant c < 1. Solution: T(n) = Θ(f(n)).

Idea of master theorem. Recursion tree: the root costs f(n); it has a children, each costing f(n/b), for a level total of a·f(n/b); the next level totals a^2·f(n/b^2); and so on. The height is h = log_b n, and #leaves = a^h = a^(log_b n) = n^(log_b a), each costing Θ(1), so the leaf level totals n^(log_b a)·Θ(1).

Case 1: the weight increases geometrically from the root to the leaves, and the leaves hold a constant fraction of the total weight: T(n) = Θ(n^(log_b a)).

Examples.

Ex. T(n) = 4T(n/2) + n^3. a = 4, b = 2 ⇒ n^(log_b a) = n^2; f(n) = n^3. Case 3: f(n) = Ω(n^(2+ε)) for ε = 1, and 4(n/2)^3 ≤ c·n^3 (regularity condition) holds for c = 1/2. ⇒ T(n) = Θ(n^3).

Ex. T(n) = 4T(n/2) + n^2/lg n. a = 4, b = 2 ⇒ n^(log_b a) = n^2; f(n) = n^2/lg n. The master method does not apply: in particular, for every constant ε > 0, we have n^ε = ω(lg n).
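The first example can also be checked numerically: evaluating T(n) = 4T(n/2) + n^3 with an arbitrary base value, the ratio T(2n)/T(n) should approach 2^3 = 8, consistent with T(n) = Θ(n^3). An illustrative script (not from the slides):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 4 T(n/2) + n^3, with T(1) = 1 (base value chosen arbitrarily;
    # it does not affect the asymptotic growth rate).
    return 1 if n == 1 else 4 * T(n // 2) + n ** 3

ratios = [T(2 ** (k + 1)) / T(2 ** k) for k in range(5, 15)]
print(ratios[-1])  # close to 8, consistent with T(n) = Θ(n^3)
```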

Notes. The Master Theorem was generalized by Akra and Bazzi to cover many more recurrences, of the form T(n) = f(n) + Σ_{i=1..k} a_i·T(n/b_i), where the constants satisfy a_i > 0 and b_i > 1.

Multiplying large integers. Given n-bit integers a, b (in binary), compute c = ab.

Naïve (grade-school) algorithm:
- Write a, b in binary: a_{n-1} a_{n-2} … a_0 × b_{n-1} b_{n-2} … b_0.
- Compute n intermediate products.
- Do n additions.
- Total work: Θ(n^2). (The output has 2n bits.)

Multiplying large integers. Divide and Conquer (Attempt #1):
- Write a = A_1·2^(n/2) + A_0 and b = B_1·2^(n/2) + B_0.
- We want ab = A_1·B_1·2^n + (A_1·B_0 + B_1·A_0)·2^(n/2) + A_0·B_0.
- Multiply the n/2-bit integers recursively.
- T(n) = 4T(n/2) + Θ(n).
- Alas! This is still Θ(n^2). (Exercise: write out the recursion tree.)

Karatsuba's idea: (A_0 + A_1)(B_0 + B_1) = A_0·B_0 + A_1·B_1 + (A_0·B_1 + A_1·B_0).
- We can get away with 3 multiplications: x = A_1·B_1, y = A_0·B_0, z = (A_0 + A_1)(B_0 + B_1).
- Now we use ab = A_1·B_1·2^n + (A_1·B_0 + B_1·A_0)·2^(n/2) + A_0·B_0 = x·2^n + (z - x - y)·2^(n/2) + y.

Multiplying large integers.

Multiply(n, a, b)   ▷ a and b are n-bit integers; assume n is a power of 2 for simplicity
1. If n ≤ 2 then use the grade-school algorithm, else:
2.    A_1 ← a div 2^(n/2);  B_1 ← b div 2^(n/2)
3.    A_0 ← a mod 2^(n/2);  B_0 ← b mod 2^(n/2)
4.    x ← Multiply(n/2, A_1, B_1)
5.    y ← Multiply(n/2, A_0, B_0)
6.    z ← Multiply(n/2, A_1+A_0, B_1+B_0)
7.    Output x·2^n + (z-x-y)·2^(n/2) + y
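A runnable sketch of Multiply using Python's arbitrary-precision integers (the name `karatsuba` and the base-case cutoff are mine; the div/mod splitting is done with shifts and masks):

```python
def karatsuba(a, b):
    """Multiply non-negative integers with 3 recursive multiplications:
    T(n) = 3T(n/2) + Θ(n) = Θ(n^log2(3))."""
    if a < 4 or b < 4:                 # base case: tiny operands, multiply directly
        return a * b
    n = max(a.bit_length(), b.bit_length())
    half = n // 2
    A1, A0 = a >> half, a & ((1 << half) - 1)   # a = A1 * 2^half + A0
    B1, B0 = b >> half, b & ((1 << half) - 1)   # b = B1 * 2^half + B0
    x = karatsuba(A1, B1)
    y = karatsuba(A0, B0)
    z = karatsuba(A1 + A0, B1 + B0)
    # ab = x * 2^(2*half) + (z - x - y) * 2^half + y
    return (x << (2 * half)) + ((z - x - y) << half) + y

print(karatsuba(1234, 5678))  # → 7006652
```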

Multiplying large integers. The resulting recurrence: T(n) = 3T(n/2) + Θ(n). Master Theorem, Case 1: T(n) = Θ(n^(log_2 3)) = Θ(n^1.59…).

Note: there is a Θ(n log n) algorithm for multiplication (more on it later in the course).

Matrix multiplication

9/24/2008 A. Smith; based on slides by E. Demaine, C. Leiserson, S. Raskhodnikova, K. Wayne.

Matrix multiplication. Input: A = [a_ij], B = [b_ij], for i, j = 1, 2, …, n. Output: C = [c_ij] = A·B.

Standard algorithm.

for i ← 1 to n do
   for j ← 1 to n do
      c_ij ← 0
      for k ← 1 to n do
         c_ij ← c_ij + a_ik · b_kj

Running time = Θ(n^3).
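The standard algorithm as runnable code (a sketch on nested lists; the name `mat_mult` is mine):

```python
def mat_mult(A, B):
    """Standard Θ(n^3) matrix multiplication: c_ij = sum over k of a_ik * b_kj."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(mat_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```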

Divide-and-conquer algorithm. Idea: view an n×n matrix as a 2×2 matrix of (n/2)×(n/2) submatrices. With A = [a b; c d], B = [e f; g h], and C = A·B = [r s; t u]:

r = ae + bg
s = af + bh
t = ce + dg
u = cf + dh

8 recursive mults of (n/2)×(n/2) submatrices; 4 adds of (n/2)×(n/2) submatrices.

Analysis of D&C algorithm.

T(n) = 8T(n/2) + Θ(n^2)
(# submatrices: 8; submatrix size: n/2; work adding submatrices: Θ(n^2).)

n^(log_b a) = n^(log_2 8) = n^3 ⇒ Case 1 ⇒ T(n) = Θ(n^3). No better than the ordinary algorithm.

Strassen's idea. Multiply 2×2 matrices with only 7 recursive mults (7 mults, 18 adds/subs). Note: no reliance on commutativity of multiplication!

P_1 = a·(f - h)
P_2 = (a + b)·h
P_3 = (c + d)·e
P_4 = d·(g - e)
P_5 = (a + d)·(e + h)
P_6 = (b - d)·(g + h)
P_7 = (a - c)·(e + f)

r = P_5 + P_4 - P_2 + P_6
s = P_1 + P_2
t = P_3 + P_4
u = P_5 + P_1 - P_3 - P_7

Strassen's idea (check):

r = P_5 + P_4 - P_2 + P_6
  = (a + d)(e + h) + d(g - e) - (a + b)h + (b - d)(g + h)
  = ae + ah + de + dh + dg - de - ah - bh + bg + bh - dg - dh
  = ae + bg.

Strassen's algorithm.
1. Divide: partition A and B into (n/2)×(n/2) submatrices. Form the terms to be multiplied using + and -.
2. Conquer: perform 7 multiplications of (n/2)×(n/2) submatrices recursively.
3. Combine: form the product matrix C using + and - on (n/2)×(n/2) submatrices.

T(n) = 7T(n/2) + Θ(n^2)
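A sketch of the three steps on nested lists, assuming n is a power of 2 (the helper names are mine, not the slides'):

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply n x n matrices (n a power of 2) with 7 recursive mults:
    T(n) = 7 T(n/2) + Θ(n^2) = Θ(n^lg 7)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Divide: partition into (n/2) x (n/2) submatrices [[a, b], [c, d]], [[e, f], [g, h]].
    a = [row[:m] for row in A[:m]]; b = [row[m:] for row in A[:m]]
    c = [row[:m] for row in A[m:]]; d = [row[m:] for row in A[m:]]
    e = [row[:m] for row in B[:m]]; f = [row[m:] for row in B[:m]]
    g = [row[:m] for row in B[m:]]; h = [row[m:] for row in B[m:]]
    # Conquer: 7 recursive multiplications.
    P1 = strassen(a, sub(f, h))
    P2 = strassen(add(a, b), h)
    P3 = strassen(add(c, d), e)
    P4 = strassen(d, sub(g, e))
    P5 = strassen(add(a, d), add(e, h))
    P6 = strassen(sub(b, d), add(g, h))
    P7 = strassen(sub(a, c), add(e, f))
    # Combine with + and - only.
    r = add(sub(add(P5, P4), P2), P6)
    s = add(P1, P2)
    t = add(P3, P4)
    u = sub(sub(add(P5, P1), P3), P7)
    # Reassemble C = [[r, s], [t, u]].
    return [r[i] + s[i] for i in range(m)] + [t[i] + u[i] for i in range(m)]

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```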

Analysis of Strassen.

T(n) = 7T(n/2) + Θ(n^2). n^(log_b a) = n^(log_2 7) ≈ n^2.81 ⇒ Case 1 ⇒ T(n) = Θ(n^(lg 7)).

Best to date (of theoretical interest only): Θ(n^2.376…).

The number 2.81 may not seem much smaller than 3, but the difference is in the exponent, so the impact on running time is significant. Strassen's algorithm beats the ordinary algorithm on today's machines for n ≥ 32 or so.

Closest pair

Closest Pair of Points. Closest pair: given n points in the plane, find a pair with smallest Euclidean distance between them.

Fundamental geometric primitive.
- Graphics, computer vision, geographic information systems, molecular modeling, air traffic control.
- Special case of nearest neighbor, Euclidean MST, Voronoi. (The fast closest-pair algorithm inspired fast algorithms for these problems.)

Brute force: check all pairs of points p and q with Θ(n^2) comparisons.
1-D version (points on a line): O(n log n), easy via sorting.
Assumption: no two points have the same x coordinate (to make the presentation cleaner).

Closest Pair of Points: First Attempt. Divide: sub-divide the region into 4 quadrants. Obstacle: it is impossible to ensure n/4 points in each piece.

Closest Pair of Points. Algorithm:
- Divide: draw a vertical line L so that roughly n/2 points are on each side.
- Conquer: find the closest pair on each side recursively.
- Combine: find the closest pair with one point on each side (seems like Θ(n^2)).
- Return the best of the 3 solutions.

Closest Pair of Points. Find the closest pair with one point on each side, assuming that its distance is < δ (in the figure, δ = min(16, 21)).
- Observation: only need to consider points within δ of line L.
- Sort the points in the 2δ-strip by their y coordinate.
- Theorem: only need to check distances to points within 11 positions in the sorted list!

Closest Pair of Points. Def. Let s_i be the point in the 2δ-strip with the i-th smallest y-coordinate.

Claim. If |i - j| ≥ 12, then the distance between s_i and s_j is at least δ.
Proof.
- No two points lie in the same ½δ-by-½δ box.
- Two points at least 2 rows apart have distance ≥ 2(½δ) = δ. ▪

Fact. The claim is still true if we replace 12 with 7.

Closest Pair Algorithm.

Closest-Pair(p_1, …, p_n) {
   Compute separation line L such that half the points      O(n log n)
   are on one side and half on the other side.
   δ_1 = Closest-Pair(left half)                            2T(n/2)
   δ_2 = Closest-Pair(right half)
   δ = min(δ_1, δ_2)
   Delete all points further than δ from separation line L  O(n)
   Sort remaining points by y-coordinate.                   O(n log n)
   Scan points in y-order and compare the distance between  O(n)
   each point and its next 11 neighbors. If any of these
   distances is less than δ, update δ.
   return δ.
}
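A runnable sketch of the pseudocode above, in its simpler O(n log^2 n) form (the strip is sorted from scratch at each level; function names are mine):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def closest_pair(points):
    """Return the smallest pairwise Euclidean distance among >= 2 points."""
    return _rec(sorted(points))          # sort by x-coordinate once

def _rec(pts):
    n = len(pts)
    if n <= 3:                           # brute force tiny cases
        return min(dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
    mid = n // 2
    x_mid = pts[mid][0]                  # vertical separation line L
    delta = min(_rec(pts[:mid]), _rec(pts[mid:]))
    # Keep only points within delta of L, sorted by y-coordinate.
    strip = sorted((p for p in pts if abs(p[0] - x_mid) < delta),
                   key=lambda p: p[1])
    # Compare each strip point with its next 11 neighbors in y-order.
    for i, p in enumerate(strip):
        for q in strip[i + 1 : i + 12]:
            delta = min(delta, dist(p, q))
    return delta

print(closest_pair([(0, 0), (3, 4), (1, 1), (10, 10)]))  # → 1.4142135623730951
```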

Closest Pair of Points: Analysis. Running time: T(n) = 2T(n/2) + O(n log n), which solves to O(n log^2 n).

Q. Can we achieve O(n log n)?
A. Yes. Don't sort the points in the strip from scratch each time.
- Sort the entire point set by x-coordinate only once.
- Each recursive call takes as input a set of points sorted by x coordinate and returns the same points sorted by y coordinate (together with the closest pair).
- Create the new y-sorted list by merging the two outputs from the recursive calls.

Divide and Conquer in low-dimensional geometry.
- A powerful technique for low-dimensional geometric problems.
- Intuition: points in different parts of the plane don't interfere too much.
- Example: convex hull in O(n log n) time à la MergeSort:
  1. Convex-Hull(left half)     T(n/2)
  2. Convex-Hull(right half)    T(n/2)
  3. Merge                      Θ(n)
  (see Cormen et al., Chap. 33)