Lecture 34 CSE 331 Nov 26, 2012
Closest pairs of points
Input: n 2-D points P = {p1, …, pn}; pi = (xi, yi); $d(p_i, p_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$
Output: the pair of points p and q that are closest
A property of Euclidean distance
$d(p_i, p_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$
The distance is at least as large as the x- or y-coordinate difference
[Figure: two points p_i and p_j with their x- and y-coordinate differences marked]
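In symbols, the property the divide-and-conquer step relies on (my restatement, immediate from the definition above) is

    d(p_i, p_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} \;\ge\; \max\bigl(|x_i - x_j|,\ |y_i - y_j|\bigr),

since dropping either squared term under the square root can only decrease the value.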
Dividing up P
Q = the first n/2 points according to the x-coordinate; R = the rest
[Figure: P split by a vertical line into Q (left) and R (right)]
Recursively find closest pairs
Find the closest pair within Q and the closest pair within R
δ = min(closest distance within Q, closest distance within R)
[Figure: the closest pair in Q and the closest pair in R highlighted]
An aside: maintain sorted lists
Px and Py are P sorted by x-coordinate and by y-coordinate, respectively
Qx, Qy, Rx and Ry can be computed from Px and Py in O(n) time
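As a concrete illustration (not from the slides), here is a minimal Python sketch of that O(n) split, assuming the points are distinct (x, y) tuples:

    def split_sorted_lists(Px, Py):
        """Split P into Q (left half by x) and R (the rest), keeping both sort orders.

        Px, Py: the same points sorted by x-coordinate and by y-coordinate.
        Qx and Rx are slices of Px; Qy and Ry come from one pass over Py,
        so every list stays sorted and the whole step takes O(n) time.
        """
        n = len(Px)
        Qx, Rx = Px[:n // 2], Px[n // 2:]
        left = set(Qx)                           # membership test: "is this point in Q?"
        Qy = [p for p in Py if p in left]        # still sorted by y
        Ry = [p for p in Py if p not in left]    # still sorted by y
        return Qx, Qy, Rx, Ry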
An easy case
If every point of Q is at x-distance more than δ from every point of R, then all “crossing” pairs have distance > δ
δ = min(closest distance within Q, closest distance within R)
[Figure: Q and R separated by a gap wider than δ]
Life is not so easy though
[Figure: points of Q and R lie close to the dividing line, so a crossing pair could have distance less than δ]
δ = min(closest distance within Q, closest distance within R)
Today’s agenda Taking care of life’s unfairness
Euclid to the rescue (?)
$d(p_i, p_j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$
The distance is at least as large as the x- or y-coordinate difference
[Figure: the coordinate-difference picture again]
Life is not so easy though
[Figure: the region within δ of the dividing line; several crossing pairs are marked as having distance > δ]
δ = min(closest distance within Q, closest distance within R)
All we have to do now
S = the points of P within x-distance δ of the dividing line
Figure out whether some pair in S has distance < δ
δ = min(closest distance within Q, closest distance within R)
[Figure: the strip S of width δ on each side of the dividing line]
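Why looking only inside S is safe (a short argument combining the coordinate property above with the definition of S; x* denotes the x-coordinate of the dividing line): if q ∈ Q and r ∈ R form a crossing pair with d(q, r) < δ, then x_q ≤ x* ≤ x_r, so

    |x_q - x*| \le |x_q - x_r| \le d(q, r) < δ   and   |x_r - x*| \le |x_q - x_r| \le d(q, r) < δ,

hence both q and r lie in S, and any crossing pair closer than δ will be found by examining S alone.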
The algorithm so far…
Input: n 2-D points P = {p1, …, pn}; pi = (xi, yi)

Sort P to get Px and Py                                         O(n log n)
Closest-Pair (Px, Py):
    If n < 4 then find the closest pair by brute force          T(< 4) = c
    Q is the first half of Px and R is the rest                 O(n)
    Compute Qx, Qy, Rx and Ry                                   O(n)
    (q0, q1) = Closest-Pair (Qx, Qy)
    (r0, r1) = Closest-Pair (Rx, Ry)
    δ = min( d(q0, q1), d(r0, r1) )                             O(n)
    S = points (x, y) in P s.t. |x – x*| < δ                    O(n)   (x* = the x-coordinate separating Q from R)
    return Closest-in-box (S, (q0, q1), (r0, r1))               assume this can be done in O(n)

Total running time: O(n log n) for sorting + T(n), where T(n) = 2T(n/2) + cn and T(< 4) = c, i.e. O(n log n) overall
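A runnable Python sketch of the recursion above (mine, not from the slides). The helper names dist, brute_force and closest_in_box are hypothetical; closest_in_box is a quadratic brute-force placeholder for the O(n) routine the lecture develops next, so the structure matches the pseudocode but the overall bound is not yet O(n log n). Points are assumed to be distinct (x, y) tuples and n ≥ 2.

    import math

    def dist(p, q):
        """Euclidean distance between points p = (x, y) and q = (x, y)."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def brute_force(pts):
        """Closest pair among a handful of points (used for n < 4)."""
        return min(((p, q) for i, p in enumerate(pts) for q in pts[i + 1:]),
                   key=lambda pq: dist(*pq))

    def closest_pair(Px, Py):
        """Px: points sorted by x-coordinate; Py: the same points sorted by y-coordinate."""
        n = len(Px)
        if n < 4:
            return brute_force(Px)
        # Q = first half of Px, R = the rest; rebuild all four sorted lists in O(n).
        Qx, Rx = Px[:n // 2], Px[n // 2:]
        left = set(Qx)                              # assumes distinct (x, y) tuples
        Qy = [p for p in Py if p in left]
        Ry = [p for p in Py if p not in left]
        q0, q1 = closest_pair(Qx, Qy)
        r0, r1 = closest_pair(Rx, Ry)
        delta = min(dist(q0, q1), dist(r0, r1))
        x_star = Qx[-1][0]                          # x-coordinate separating Q from R
        S = [p for p in Py if abs(p[0] - x_star) < delta]   # the strip, still sorted by y
        return closest_in_box(S, (q0, q1), (r0, r1))

    def closest_in_box(S, qpair, rpair):
        """Placeholder: brute force over the strip S.

        The lecture assumes this step can be done in O(n); the quadratic scan
        here only makes the sketch self-contained and runnable."""
        best = min([qpair, rpair], key=lambda pq: dist(*pq))
        for i, p in enumerate(S):
            for q in S[i + 1:]:
                if dist(p, q) < dist(*best):
                    best = (p, q)
        return best

Calling closest_pair(sorted(pts), sorted(pts, key=lambda p: p[1])) on any list of distinct points returns a closest pair.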
Rest of today’s agenda Implement Closest-in-box in O(n) time
High level view of CSE 331
Problem Statement → Problem Definition → Algorithm (three general techniques) → “Implementation” (data structures) → Analysis (correctness + runtime)
Greedy Algorithms Natural algorithms Reduced exponential running time to polynomial
Divide and Conquer Recursive algorithmic paradigm Reduced large polynomial time to smaller polynomial time
A new algorithmic technique Dynamic Programming
Dynamic programming vs. Divide & Conquer
Same because: both design recursive algorithms
Different because Dynamic programming is smarter about solving recursive sub-problems
Using Memory to be smarter

Pow (a, n):                      // n is even and ≥ 2
    return Pow(a, n/2) * Pow(a, n/2)        O(n), as we recompute the same sub-problem!

Pow (a, n):                      // n is even and ≥ 2
    t = Pow(a, n/2)
    return t * t                            O(log n), as we compute each sub-problem only once
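The same contrast written out in Python (a sketch, assuming for simplicity that n is a power of two, so the base case is n = 1):

    def pow_slow(a, n):
        """Recomputes both halves: about n multiplications in total, so O(n)."""
        if n == 1:
            return a
        return pow_slow(a, n // 2) * pow_slow(a, n // 2)

    def pow_fast(a, n):
        """Computes the half once and squares it: O(log n) multiplications."""
        if n == 1:
            return a
        t = pow_fast(a, n // 2)
        return t * t

    # For n = 16: pow_slow performs 15 multiplications, pow_fast performs 4,
    # even though both compute exactly the same value a**16.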
End of Semester blues
Can only do one thing on any day: what is the optimal schedule to obtain maximum value?
[Figure: five tasks over Monday–Friday with values — Write up a term paper (10), Party! (2), Exam study (5), 331 HW (3), Project (30)]
Previous Greedy algorithm
Order by end time and pick jobs greedily
Greedy value = 5 + 2 + 3 = 10, but OPT = 30
[Figure: the same five tasks — Write up a term paper (10), Party! (2), Exam study (5), 331 HW (3), Project (30) — over Monday–Friday]
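A small sanity check of these numbers in Python. The day ranges below (Mon = 1, …, Fri = 5) are hypothetical, chosen only to reproduce the slide’s outcome; the task values and the greedy = 10 vs. OPT = 30 totals are from the slide.

    from itertools import combinations

    # Hypothetical (name, start day, end day, value) tuples; only the values
    # and the greedy/OPT totals come from the slides.
    jobs = [
        ("Exam study",  1, 2,  5),
        ("Party!",      3, 3,  2),
        ("Term paper",  2, 4, 10),
        ("331 HW",      4, 5,  3),
        ("Project",     1, 5, 30),
    ]

    def compatible(job, chosen):
        """A job is compatible if its days overlap no already-chosen job."""
        _, s, e, _ = job
        return all(e < cs or ce < s for _, cs, ce, _ in chosen)

    # The greedy from the earlier lecture: order by end time, take whatever fits.
    greedy = []
    for job in sorted(jobs, key=lambda j: j[2]):
        if compatible(job, greedy):
            greedy.append(job)
    print("greedy value:", sum(v for *_, v in greedy))   # 5 + 2 + 3 = 10

    # Brute-force optimum over all compatible subsets (fine for 5 jobs).
    best = 0
    for k in range(len(jobs) + 1):
        for subset in combinations(jobs, k):
            if all(compatible(j, [x for x in subset if x is not j]) for j in subset):
                best = max(best, sum(v for *_, v in subset))
    print("OPT:", best)                                   # 30 (just the Project)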
Today’s agenda Formal definition of the problem Start designing a recursive algorithm for the problem