Computer Algorithms Ch. 2
Some of these slides are courtesy of D. Plaisted et al., UNC; M. Nicolescu, UNR; and Prof. J. Elder, York Univ.
How to Add 2 n-bit Numbers
[Figure: grade-school addition of two n-bit numbers, worked digit by digit with carries]
Time cost depends on what? The problem complexity.
Time cost of grade-school addition: T(n) = Θ(n) = O(n).
Is there an algorithm to add two n-bit numbers faster than in linear time? No: any correct algorithm needs to examine every bit.
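For concreteness, here is a minimal runnable sketch of grade-school addition on bit lists (an illustration, not code from the original deck):

    def add(x, y):
        """Grade-school addition of two n-bit numbers given as lists of bits
        (least significant bit first). Theta(n): one pass, one carry bit."""
        n = max(len(x), len(y))
        x = x + [0] * (n - len(x))      # pad to equal length
        y = y + [0] * (n - len(y))
        result, carry = [], 0
        for i in range(n):              # examine every bit exactly once
            s = x[i] + y[i] + carry
            result.append(s % 2)
            carry = s // 2
        result.append(carry)            # possible final carry bit
        return result

    # 6 + 7 = 13: 110 + 111 -> 1101 (LSB first: [1, 0, 1, 1])
    assert add([0, 1, 1], [1, 1, 1]) == [1, 0, 1, 1]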
Big-O Notation
For a function g(n), we define O(g(n)), big-O of g of n, as the set:
    O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }
Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n).
Technically, f(n) ∈ O(g(n)). Older usage: f(n) = O(g(n)). For the purposes of this class either notation is acceptable.
Ω-Notation
For a function g(n), we define Ω(g(n)), big-Omega of g of n, as the set:
    Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }
Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n).
Technically, f(n) ∈ Ω(g(n)). Older usage: f(n) = Ω(g(n)). For the purposes of this class either notation is acceptable.
Θ-Notation
For a function g(n), we define Θ(g(n)), big-Theta of g of n, as the set:
    Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
Technically, f(n) ∈ Θ(g(n)). Older usage: f(n) = Θ(g(n)). For the purposes of this class either notation is acceptable.
f(n) = Θ(g(n)) if and only if f(n) = Ω(g(n)) and f(n) = O(g(n)).
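A quick worked example (added here for illustration): take f(n) = 3n² + 10n. Then f(n) = Θ(n²), because for all n ≥ 10 we have 3n² ≤ 3n² + 10n ≤ 4n², so the definition is satisfied with c1 = 3, c2 = 4, n0 = 10. By the last fact above, f(n) = O(n²) and f(n) = Ω(n²) as well.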
How to Multiply 2 n-bit Numbers
[Figure: grade-school multiplication, producing n rows of n partial-product bits]
Time cost: T(n) = Θ(n²) = O(n²)
Function Growth
[Plot: running time vs. n, the number of bits (problem complexity); a linear and a quadratic curve]
No matter how dramatic the difference in the constants, the quadratic curve will eventually dominate the linear curve. An algorithm with which cost would you prefer?
What function do you know that grows even slower than linear?
[Plot: running time vs. n, the number of bits (problem complexity)]
Time Cost
Grade-school addition: O(n)
Grade-school multiplication: O(n²)
Is there a cheaper/faster algorithm to multiply two numbers?
What is a complex number?
Introduced by the Italian mathematician Gerolamo Cardano in 1545. A complex number is one that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit (i² = –1).
[Figure: visual representation of complex numbers in the plane]
Complex Number Multiplication
(a + bi)(c + di) = ac + bci + adi + bdi² = (ac – bd) + (ad + bc)i
Input: a, b, c, d. Output: ac – bd, ad + bc.
If a real multiplication costs $1 and an addition costs 1¢, what is the cheapest way to obtain the output from the input? The obvious way, 4 multiplications and 2 additions, costs $4.02. Can we do better?
Gauss' Method
Carl Friedrich Gauss (1777–1855), German mathematician.
Had a remarkable influence in many fields of mathematics and science, and is ranked as one of history's most influential mathematicians.
Gauss' $3.05 Method
Input: a, b, c, d. Output: ac – bd, ad + bc.
    X1 = a + b (1¢)
    X2 = c + d (1¢)
    X3 = X1·X2 = ac + ad + bc + bd ($1)
    X4 = a·c ($1)
    X5 = b·d ($1)
    X6 = X4 – X5 = ac – bd, the real part (1¢)
    X7 = X3 – X4 – X5 = ad + bc, the imaginary part (2¢, two subtractions)
The Gauss optimization saves one multiplication out of four: 25% less multiplication work.
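The X1 through X7 steps translate directly into code; a minimal sketch (added here for illustration):

    def gauss_complex_mult(a, b, c, d):
        """Compute (a+bi)(c+di) = (ac-bd) + (ad+bc)i with 3 real
        multiplications ($3) and 5 additions/subtractions (5 cents)."""
        x1 = a + b                # 1c
        x2 = c + d                # 1c
        x3 = x1 * x2              # $1: ac + ad + bc + bd
        x4 = a * c                # $1
        x5 = b * d                # $1
        real = x4 - x5            # 1c: ac - bd
        imag = x3 - x4 - x5       # 2c: ad + bc
        return real, imag

    # (1+2i)(3+4i) = -5 + 10i
    assert gauss_complex_mult(1, 2, 3, 4) == (-5, 10)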
Recursion
Informal: recursion is a procedure in which one of the steps involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.
Formal: in mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties:
    - A simple base case (or cases)
    - A set of rules that reduce all other cases toward the base case
Example: the Fibonacci sequence is a classic example of recursion:
    Fib(0) = 0, base case 1
    Fib(1) = 1, base case 2
    For all integers n > 1, Fib(n) = Fib(n – 1) + Fib(n – 2)
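The definition transcribes directly into runnable code (an illustration added here):

    def fib(n):
        """Recursive Fibonacci, straight from the definition above."""
        if n == 0:
            return 0                      # base case 1
        if n == 1:
            return 1                      # base case 2
        return fib(n - 1) + fib(n - 2)    # reduce toward the base cases

    assert [fib(i) for i in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]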
Divide and Conquer
"Nothing is particularly hard if you divide it into small jobs." (Henry Ford)
Divide-and-conquer algorithms are recursive in structure:
    Divide the problem into sub-problems that are similar to the original but smaller in size.
    Conquer the sub-problems by solving them recursively. If they are small enough, just solve them in a straightforward manner.
    Combine the solutions to create a solution to the original problem.
The tasks of dividing, conquering, and combining might be non-trivial.
Multiplication of 2 n-bit Numbers
Split each n-bit number X and Y into two halves of n/2 bits each: a high half and a low half.
Decimal analogy: 6487 = 6·10³ + 4·10² + 8·10¹ + 7. Binary: 1101 = 1·2³ + 1·2² + 0·2¹ + 1·2⁰.
What is multiplication of a binary number by 2^m? (A left shift by m bit positions.) Time cost? (Linear in the length of the result.)
Multiplication of 2 n-bit Numbers
X = a·2^(n/2) + b and Y = c·2^(n/2) + d, where a, b and c, d are the n/2-bit halves.
XY = (a·2^(n/2) + b)(c·2^(n/2) + d) = ac·2^n + (ad + bc)·2^(n/2) + bd
    MULT(X, Y):
        if |X| = |Y| = 1 then return XY
        else break X into a;b and Y into c;d
        return MULT(a,c)·2^n + MULT(a,d)·2^(n/2) + MULT(b,c)·2^(n/2) + MULT(b,d)
The base case, 1-bit number multiplication, is trivial. Why? The product of two single bits is just their AND.
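A runnable sketch of this four-call MULT on nonnegative Python integers (an illustration; shifting and masking play the role of breaking X into a;b):

    def mult(x, y, n):
        """Divide-and-conquer multiplication with 4 recursive calls.
        x, y are n-bit nonnegative integers; n is assumed a power of 2."""
        if n == 1:
            return x * y                             # 1-bit base case
        half = n // 2
        a, b = x >> half, x & ((1 << half) - 1)      # X = a*2^(n/2) + b
        c, d = y >> half, y & ((1 << half) - 1)      # Y = c*2^(n/2) + d
        ac = mult(a, c, half)
        ad = mult(a, d, half)
        bc = mult(b, c, half)
        bd = mult(b, d, half)
        # XY = ac*2^n + (ad+bc)*2^(n/2) + bd; shifts realize the 2^k factors
        return (ac << n) + ((ad + bc) << half) + bd

    assert mult(13, 11, 4) == 143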
Time Cost of MULT
T(n) = time taken by MULT on two n-bit numbers.
What is T(n)? What is its growth rate? Is the growth rate O(n²)?
Divide and Conquer
Divide-and-conquer algorithms are recursive in structure:
    Divide the problem into sub-problems that are similar to the original but smaller in size.
    Conquer the sub-problems by solving them recursively. If they are small enough, just solve them in a straightforward manner.
    Combine the solutions to create a solution to the original problem.
How to combine solutions of subproblems into the solution of the whole problem is not a trivial task.
Unrolling MULT(X, Y)
    MULT(X, Y):
        if |X| = |Y| = 1 then return XY
        else break X into a;b and Y into c;d
        return MULT(a,c)·2^n + MULT(a,d)·2^(n/2) + MULT(b,c)·2^(n/2) + MULT(b,d)
The top-level call MULT(X, Y), with X = a;b and Y = c;d, spawns four recursive calls MULT(a,c), MULT(a,d), MULT(b,c), and MULT(b,d), which return ac, ad, bc, and bd.
Recurrence Relation
T(1) = k for some constant k
T(n) = 4T(n/2) + k1·n + k2 for some constants k1 and k2
    4T(n/2) is the conquering time (the four recursive sub-multiplications).
    k1·n + k2 is the divide and combine time (splitting, shifts, and additions are linear).
Big questions: Is T(n) = O(n²)? How can we identify the time cost of this recursion?
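Before solving the recurrence analytically, one can sanity-check the guess numerically (an illustration added here, with the constants set to 1):

    def T(n):
        """T(n) = 4*T(n/2) + n with T(1) = 1, for n a power of 2."""
        return 1 if n == 1 else 4 * T(n // 2) + n

    for n in (2**i for i in range(1, 11)):
        print(n, T(n) / n**2)   # ratio climbs toward 2: here T(n) = 2n^2 - n

If T(n) = Θ(n²), the ratio T(n)/n² should approach a constant, and it does.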
Recursion Tree
Assume for simplicity T(1) = 1 and T(n) = n + 4T(n/2).
Expand T(n) as a tree: the root costs n and has four subtrees, one per subproblem of size n/2; the leaves are the T(1) = 1 base cases.
Expanding level by level:
    T(n) = n at the root, plus four copies of T(n/2).
    Each T(n/2) = n/2 plus four copies of T(n/4); so level 1 has four nodes costing n/2 each, and level 2 has sixteen nodes costing n/4 each.
    ……………………… (the expansion continues down to the T(1) leaves)
Level i is the sum of 4^i copies of n/2^i (i = 0, 1, 2, …):
    Level 0: n
    Level 1: n/2 + n/2 + n/2 + n/2 = 2n
    Level 2: sixteen copies of n/4 = 4n
What is the height of the recursion tree? (log₂ n: the problem size halves at each level.)
At the bottom level, i = log₂ n:
    4^(log n) × n/2^(log n) = 2^(2 log n) × 1 = n²
Summing all levels:
    T(n) = Σ_{i=0}^{log n} 4^i · n/2^i = n · Σ_{i=0}^{log n} 2^i = n(2n – 1) = Θ(n²)
All that work for nothing!
Divide-and-conquer MULT: Θ(n²) time.
Grade-school multiplication: Θ(n²) time.
MULT Revisited
    MULT(X, Y):
        if |X| = |Y| = 1 then return XY
        else break X into a;b and Y into c;d
        return MULT(a,c)·2^n + MULT(a,d)·2^(n/2) + MULT(b,c)·2^(n/2) + MULT(b,d)
MULT calls itself 4 times. Can you see a way to reduce the number of calls?
Anatoly Karatsuba (1937–2008)
In 1962 Karatsuba "Gaussified" MULT.
This was the first multiplication algorithm to break the n² barrier.
Gauss' Optimization
Input: a, b, c, d. Output: ac – bd, ad + bc.
    X1 = a + b (1¢)
    X2 = c + d (1¢)
    X3 = X1·X2 = ac + ad + bc + bd ($1)
    X4 = a·c ($1)
    X5 = b·d ($1)
    X6 = X4 – X5 = ac – bd, the real part (1¢)
    X7 = X3 – X4 – X5 = ad + bc, the imaginary part (2¢)
XY = (a·2^(n/2) + b)(c·2^(n/2) + d) = ac·2^n + (ad + bc)·2^(n/2) + bd
   = (2^n – 2^(n/2))·ac + 2^(n/2)·(a + b)(c + d) + (1 – 2^(n/2))·bd
Check, by expanding the second form:
    2^n·ac – 2^(n/2)·ac + 2^(n/2)·ac + 2^(n/2)·bc + 2^(n/2)·ad + 2^(n/2)·bd + bd – 2^(n/2)·bd
    = 2^n·ac + 2^(n/2)·(bc + ad) + bd
The formula looks more complicated, but it requires only three multiplications of size n/2, plus a constant number of shifts and additions.
Gaussified MULT (Karatsuba 1962)
    MULT(X, Y):
        if |X| = |Y| = 1 then return XY
        else break X into a;b and Y into c;d
        e = MULT(a, c)
        f = MULT(b, d)
        return e·2^n + (MULT(a+b, c+d) – e – f)·2^(n/2) + f
T(n) = 3T(n/2) + kn.
Strictly speaking, T(n) = 2T(n/2) + T(n/2 + 1) + kn, since a+b and c+d may be (n/2 + 1)-bit numbers. Why can we work with the simpler recurrence? (The extra bit does not change the asymptotic solution.)
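A runnable version of the three-call scheme on nonnegative Python integers (a sketch added here; Python ints stand in for bit strings):

    def karatsuba(x, y):
        """Karatsuba multiplication: 3 recursive calls instead of 4."""
        if x < 2 or y < 2:
            return x * y                             # single-bit base case
        n = max(x.bit_length(), y.bit_length())
        half = n // 2
        a, b = x >> half, x & ((1 << half) - 1)      # x = a*2^half + b
        c, d = y >> half, y & ((1 << half) - 1)      # y = c*2^half + d
        e = karatsuba(a, c)
        f = karatsuba(b, d)
        g = karatsuba(a + b, c + d)                  # the one "Gauss" product
        # x*y = e*2^(2*half) + (g - e - f)*2^half + f
        return (e << (2 * half)) + ((g - e - f) << half) + f

    assert karatsuba(1234, 5678) == 1234 * 5678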
The Karatsuba Recursion Tree
Each node now spawns only three subproblems of half the size: T(n) = n at the root, plus three copies of T(n/2); each of those is n/2 plus three copies of T(n/4); and so on.
Level i is the sum of 3^i copies of n/2^i:
    Level 0: n
    Level 1: n/2 + n/2 + n/2 = (3/2)n
    Level 2: nine copies of n/4 = (9/4)n
Summing the levels:
    1 × n,  3 × n/2,  9 × n/4,  …,  3^i × n/2^i,  …
At the bottom level: 3^(log n) × n/2^(log n) = n^(log₂ 3) × 1 = n^(log₂ 3)
Dramatic Improvement for Large n
T(n) = 3n^(log₂ 3) – 2n = Θ(n^(log₂ 3)) = Θ(n^1.58…)
A huge savings over Θ(n²) when n gets large.
Multiplication Algorithms
    Grade school: n²
    Karatsuba: n^1.58…
    Fastest known (Schönhage–Strassen algorithm, 1971): n · log n · log log n
Another Recursion Example: Merge Sort
What sorting algorithms do you know?
Sorting
Analysis of Insertion Sort
Best case: the input array is already in the correct order.
Worst case: the input array is in reverse order.
Average case: about half the worst case, still Θ(n²).
The maximum number of comparisons while inserting A[i] is i – 1, so the worst-case number of comparisons is
    C_wc(n) = Σ_{i=2..n} (i – 1) = Σ_{j=1..n–1} j = n(n – 1)/2 = Θ(n²)
For which input does insertion sort perform n(n – 1)/2 comparisons?
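A runnable version (an illustration added here; a reverse-sorted input triggers the full n(n – 1)/2 comparisons):

    def insertion_sort(A):
        """Grow a sorted prefix, inserting A[i] into it; inserting the
        i-th element (1-based) makes at most i - 1 comparisons."""
        for i in range(1, len(A)):
            key = A[i]
            j = i - 1
            while j >= 0 and A[j] > key:   # shift larger elements right
                A[j + 1] = A[j]
                j -= 1
            A[j + 1] = key
        return A

    assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]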
Bubble Sort (another elementary sorting algorithm)
Idea: repeatedly pass through the array, swapping adjacent elements that are out of order.
Easier to implement, but slower than insertion sort.
[Figure: array 1 3 2 9 6 4 8 with scan indices i and j]
Bubble-Sort Running Time
    BUBBLESORT(A)
        for i ← 1 to length[A]
            do for j ← length[A] downto i + 1
                do if A[j] < A[j – 1]
                    then exchange A[j] ↔ A[j – 1]
Comparisons: ≈ n²/2. Exchanges: ≈ n²/2 in the worst case.
Summing the per-line costs (a constant per iteration over Θ(n²) inner iterations, plus Θ(n) outer-loop overhead) gives T(n) = Θ(n²).
Another Recursion Example: Merge Sort
Sorting Problem: Sort a sequence of n elements into non-decreasing order. Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each Conquer: Sort the two subsequences recursively using merge sort. Combine: Merge the two sorted subsequences to produce the sorted answer.
Merge Sort – Example
[Figure: the sequence 18 26 32 6 43 15 9 1 is split recursively down to single elements, then merged back up in sorted order, yielding the sorted sequence 1 6 9 15 18 26 32 43]
Merge-Sort (A, p, r)
INPUT: a sequence of n numbers stored in array A. OUTPUT: an ordered sequence of n numbers.
    MergeSort(A, p, r)   // sort A[p..r] by divide & conquer
        if p < r
            then q ← ⌊(p + r)/2⌋
                 MergeSort(A, p, q)
                 MergeSort(A, q + 1, r)
                 Merge(A, p, q, r)   // merges A[p..q] with A[q+1..r]
Initial call: MergeSort(A, 1, n)
Procedure Merge
Input: array A containing sorted subarrays A[p..q] and A[q+1..r]. Output: the merged, sorted subarray in A[p..r].
    Merge(A, p, q, r)
        n1 ← q – p + 1
        n2 ← r – q
        for i ← 1 to n1 do L[i] ← A[p + i – 1]
        for j ← 1 to n2 do R[j] ← A[q + j]
        L[n1 + 1] ← ∞
        R[n2 + 1] ← ∞
        i ← 1; j ← 1
        for k ← p to r
            do if L[i] ≤ R[j]
                then A[k] ← L[i]; i ← i + 1
                else A[k] ← R[j]; j ← j + 1
The ∞ values are sentinels, to avoid having to check at each step whether either subarray is fully copied.
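The same procedure in runnable form, using 0-based indices (an illustration added here):

    import math

    def merge_sort(A, p, r):
        """Sort A[p..r] (inclusive) by divide and conquer."""
        if p < r:
            q = (p + r) // 2
            merge_sort(A, p, q)
            merge_sort(A, q + 1, r)
            merge(A, p, q, r)

    def merge(A, p, q, r):
        """Merge sorted A[p..q] and A[q+1..r] using infinity sentinels."""
        L = A[p:q + 1] + [math.inf]
        R = A[q + 1:r + 1] + [math.inf]
        i = j = 0
        for k in range(p, r + 1):
            if L[i] <= R[j]:
                A[k] = L[i]; i += 1
            else:
                A[k] = R[j]; j += 1

    A = [18, 26, 32, 6, 43, 15, 9, 1]   # the example from the previous slide
    merge_sort(A, 0, len(A) - 1)
    assert A == [1, 6, 9, 15, 18, 26, 32, 43]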
Merge – Example
[Figure: merging the sorted subarrays L = 6 8 26 32 ∞ and R = 1 9 42 43 ∞ back into A, with i and j scanning L and R while k scans A]
Analysis of Merge Sort
Running time T(n) of merge sort:
    Divide: computing the middle takes Θ(1)
    Conquer: solving 2 subproblems takes 2T(n/2)
    Combine: merging n elements takes Θ(n)
Total: T(n) = Θ(1) if n = 1; T(n) = 2T(n/2) + Θ(n) if n > 1.
Solution: T(n) = Θ(n lg n).
Recurrences
The running time of an algorithm with recursive calls can be described using a recurrence: an equation or inequality that describes a function in terms of its value on smaller inputs.
For divide-and-conquer algorithms the recurrence typically has the form T(n) = Θ(1) for small n, and T(n) = aT(n/b) + D(n) + C(n) otherwise, where D(n) is the divide cost and C(n) the combine cost.
Example: merge sort, with a = b = 2, D(n) = Θ(1), C(n) = Θ(n).
Recursion Tree for Merge Sort
For the original problem, we have a cost of cn plus two subproblems, each of size n/2 and running time T(n/2). Here cn is the cost of divide and merge, and the T(n/2) terms are the cost of sorting the subproblems.
Each of the size-n/2 problems has a cost of cn/2 plus two subproblems, each costing T(n/4).
Recursion Tree for Merge Sort
Continue expanding until the problem size reduces to 1.
[Figure: tree with cn at the root, cn/2 at each of two children, cn/4 at each of four grandchildren, down to c at each leaf; every level sums to cn, over log₂ n + 1 levels]
Total: cn·log₂ n + cn
Recursion Tree for Merge Sort
Continue expanding until the problem size reduces to 1.
Each level has total cost cn: each time we go down one level, the number of subproblems doubles but the cost per subproblem halves, so the cost per level remains the same.
There are log₂ n + 1 levels; the height is log₂ n (assuming n is a power of 2; this can be proved by induction).
Total cost = sum of the costs at each level = (log₂ n + 1)·cn = cn·log₂ n + cn = Θ(n log₂ n).
Master Method (Master Theorem)
A "cookbook" approach for solving recurrences of the form
    T(n) = aT(n/b) + f(n)
where a ≥ 1 and b > 1 are constants: a subproblems of size n/b are solved recursively, each in time T(n/b), and f(n) is the cost of dividing the problem and combining the results. f(n) is asymptotically positive (positive for all sufficiently large n). n/b may not be an integer, but we ignore floors and ceilings.
Three cases. Idea: compare f(n) with n^(log_b a):
    f(n) is asymptotically smaller than n^(log_b a) by a polynomial factor, or
    f(n) is asymptotically larger than n^(log_b a) by a polynomial factor, or
    f(n) is asymptotically equal to n^(log_b a).
Why compare with n^(log_b a)?
Recursion Tree View
    Root: cost f(n).
    Level 1: a nodes, each costing f(n/b); level total a·f(n/b).
    Level 2: a² nodes, each costing f(n/b²); level total a²·f(n/b²).
The problem splits into a parts at each of the log_b n levels. There are n^(log_b a) leaves, each costing Θ(1), for a leaf-level total of Θ(n^(log_b a)).
Total: T(n) = Θ(n^(log_b a)) + Σ_{i=0}^{log_b n – 1} a^i · f(n/b^i)
Master Theorem
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence T(n) = aT(n/b) + f(n), where n/b may be interpreted as ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically in three cases:
    1. If f(n) = O(n^(log_b a – ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
    2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log₂ n).
    3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
Is the problem cost dominated by the root or by the leaves?
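As a worked check (added here), the theorem recovers the recurrences solved earlier in these slides by recursion trees:
    Naive MULT: T(n) = 4T(n/2) + Θ(n). Here a = 4, b = 2, n^(log₂ 4) = n², and f(n) = Θ(n) = O(n^(2 – ε)) with ε = 1. Case 1: T(n) = Θ(n²).
    Karatsuba: T(n) = 3T(n/2) + Θ(n). Here n^(log₂ 3) ≈ n^1.58 and f(n) = O(n^(log₂ 3 – ε)). Case 1: T(n) = Θ(n^(log₂ 3)).
    Merge sort: T(n) = 2T(n/2) + Θ(n). Here n^(log₂ 2) = n = Θ(f(n)). Case 2: T(n) = Θ(n log n).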
Master Theorem – What Does It Mean?
Case 1: if f(n) = O(n^(log_b a – ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
n^(log_b a) = a^(log_b n) is the number of leaves in the recursion tree.
f(n) = O(n^(log_b a – ε)) means the sum of the costs of the nodes at each internal level is asymptotically smaller than the cost of the leaves by a polynomial factor.
The cost of the problem is dominated by the leaves, hence the cost is Θ(n^(log_b a)).
Master Theorem – What Does It Mean?
Case 2: if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · lg n).
n^(log_b a) = a^(log_b n) is the number of leaves in the recursion tree.
f(n) = Θ(n^(log_b a)) means the sum of the costs of the nodes at each level is asymptotically the same as the cost of the leaves.
There are Θ(lg n) levels, hence the total cost is Θ(n^(log_b a) · lg n).
Master Theorem – What Does It Mean?
Case 3: if f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
n^(log_b a) = a^(log_b n) is the number of leaves in the recursion tree.
f(n) = Ω(n^(log_b a + ε)) means the cost of the root is asymptotically larger than the total cost of the leaves by a polynomial factor.
The cost is dominated by the root, hence the cost is Θ(f(n)).
Analysis of Merge Sort, Revisited
T(n) = Θ(1) if n = 1; T(n) = 2T(n/2) + Θ(n) if n > 1.
With a = 2 and b = 2 we get n^(log₂ 2) = n = Θ(f(n)): case 2 of the Master Theorem, so T(n) = Θ(n lg n), matching the recursion-tree analysis.
Sorting Terminology
Comparison-based sort: uses only the order relation among keys, not any special property of the representation of the keys themselves.
In-place sort: needs only a constant amount of extra space in addition to that needed to store the keys. An in-place algorithm transforms its input using a small (constant) amount of extra storage; in the context of sorting, this means the input array is overwritten by the output as the algorithm executes, instead of a new array being introduced.
    Advantages of in-place solutions? They are sometimes more difficult to write, but take up much less memory: a tradeoff between space efficiency and simplicity of the algorithm. It is more difficult to write in-place solutions for recursive algorithms.
Stable sort: records with equal keys retain their original relative order; i.e., if i < j and K(p_i) = K(p_j), then p_i precedes p_j in the output (next slide).
Stability
A STABLE sort preserves the relative order of records with equal keys.
[Example: sort a file on its first key, then sort it on its second key; with an unstable sort, the records with second-key value 3 are no longer in order on the first key!]
Sorting
    Insertion sort: design approach: incremental; sorts in place: yes; best case: Θ(n); worst case: Θ(n²)
    Bubble sort: design approach: incremental; sorts in place: yes; running time: Θ(n²) (≈ n²/2 comparisons and n²/2 exchanges in the worst case)
Sorting
    Merge sort: design approach: divide and conquer; sorts in place: no; running time: Θ(n lg n)
Comparison-Based Sorting
A comparison sort may use only comparisons of pairs of elements to gain order information about a sequence. Hence, a lower bound on the number of comparisons is a lower bound on the complexity of any comparison-based sorting algorithm.
The merge sort algorithm that we analyzed is a comparison sort. What other sorting algorithms do you know? Are they comparison-based?
The best worst-case complexity achievable by a comparison-based algorithm is Θ(n lg n): to sort n elements, a comparison sort must make Ω(n lg n) comparisons in the worst case.
Decision Tree Model
Full binary tree: every node is either a leaf or has degree 2.
A decision tree represents the comparisons made by a sorting algorithm on an input of a given size: it models all possible execution traces. Control, data movement, and other operations are ignored; count only the comparisons.
[Figure: decision tree for insertion sort on three elements; each internal node is a comparison, each root-to-leaf path is one execution trace]
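The Ω(n lg n) lower bound stated on the previous slide follows from this model (the step is not spelled out on the slide itself): a decision tree that sorts n elements must have at least n! leaves, one per permutation of the input; a binary tree of height h has at most 2^h leaves, so 2^h ≥ n!, hence h ≥ log₂(n!) = Ω(n lg n) by Stirling's approximation. The height h is exactly the worst-case number of comparisons.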
Beating the Lower Bound
We can beat the lower bound if we don't base our sort on comparisons:
    Counting sort, for keys in [0..k], k = O(n)
    Radix sort, for keys with a fixed number of "digits"
    Bucket sort, for random keys (uniformly distributed)
Counting Sort
Assumption: the elements to be sorted are integers in the range 0 to k.
Idea: determine, for each input element x, the number of elements smaller than or equal to x, then place x directly into its correct position in the output array.
[Figure: input array A, count array C of size k + 1, output array B]
Counting Sort
    COUNTING-SORT(A, B, n, k)
        for i ← 0 to k
            do C[i] ← 0
        for j ← 1 to n
            do C[A[j]] ← C[A[j]] + 1      // C[i] now holds the number of elements equal to i
        for i ← 1 to k
            do C[i] ← C[i] + C[i – 1]     // C[i] now holds the number of elements ≤ i
        for j ← n downto 1
            do B[C[A[j]]] ← A[j]
               C[A[j]] ← C[A[j]] – 1
Example
[Figure: tracing CountingSort on A = 2 5 3 0 2 3 0 3 with k = 5: after counting, C = 2 0 2 3 0 1; after the prefix sums, C = 2 2 4 7 7 8; then A[8]=3, A[7]=0, A[6]=3, A[5]=2, A[4]=0, A[3]=3, A[2]=5, A[1]=2 are placed into B from right to left, yielding B = 0 0 2 2 3 3 3 5]
Counting-Sort (A, B, k): Running Time
    for i ← 1 to k do C[i] ← 0                            O(k)
    for j ← 1 to length[A] do C[A[j]] ← C[A[j]] + 1       O(n)
    for i ← 2 to k do C[i] ← C[i] + C[i – 1]              O(k)
    for j ← length[A] downto 1                            O(n)
        do B[C[A[j]]] ← A[j]
           C[A[j]] ← C[A[j]] – 1
Overall time: Θ(n + k).
Stable, but not in place. No comparisons are made: it uses the actual values of the elements to index into an array.
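A runnable translation using 0-based arrays (an illustration added here):

    def counting_sort(A, k):
        """Counting sort for integers in 0..k, following the pseudocode
        above. Stable, Theta(n + k), not in place."""
        B = [0] * len(A)
        C = [0] * (k + 1)
        for a in A:                   # C[i] = number of elements equal to i
            C[a] += 1
        for i in range(1, k + 1):     # C[i] = number of elements <= i
            C[i] += C[i - 1]
        for a in reversed(A):         # right-to-left keeps the sort stable
            C[a] -= 1                 # 0-based: decrement before placing
            B[C[a]] = a
        return B

    assert counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5) == [0, 0, 2, 2, 3, 3, 3, 5]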
Radix Sort
It was used by the card-sorting machines. Card sorters worked on one column at a time; it is the algorithm for using the machine that extends the technique to multi-column sorting. The human operator was part of the algorithm!
Key idea: sort on the least significant digit first, then on the remaining digits in sequential order. The sorting method used on each digit must be stable.
If we started with the most significant digit, we would need extra storage.
Radix Sort
Considers keys as numbers in base k; a d-digit number occupies a field of d columns.
Sorting looks at one column at a time: for a d-digit number, sort on the least significant digit first, then continue sorting on the next least significant digit, until all digits have been sorted.
Requires only d passes through the list.
An Example
[Figure: a column of three-digit numbers (928, 495, …) shown four times: the input, after sorting on the least significant digit, after sorting on the middle digit, and after sorting on the most significant digit]
1 is the lowest order digit, d is the highest-order digit
Radix-Sort(A, d) RadixSort(A, d) 1. for i 1 to d do use a stable sort to sort array A on digit I 1 is the lowest order digit, d is the highest-order digit Correctness of Radix Sort By induction on the number of digits sorted. Assume that radix sort works for d – 1 digits. Show that it works for d digits. Radix sort of d digits radix sort of the low-order d – 1 digits followed by a sort on digit d .
Correctness of Radix Sort
We use induction on the number d of passes through the digits.
Basis: if d = 1, there is only one digit, and sorting on it is trivially correct.
Inductive step: assume digits 1, 2, …, d – 1 are sorted; now sort on the d-th digit.
    If a_d < b_d, the sort puts a before b: correct, since a < b regardless of the low-order digits.
    If a_d > b_d, the sort puts a after b: correct, since a > b regardless of the low-order digits.
    If a_d = b_d, a stable sort leaves a and b in the same order, and a and b are already sorted on the low-order d – 1 digits.
Analysis of Radix Sort
Given n numbers of d digits each, where each digit may take up to k possible values, RADIX-SORT correctly sorts the numbers in Θ(d(n + k)) time:
    one pass of sorting per digit takes Θ(n + k), assuming counting sort is used;
    there are d passes, one per digit.
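A runnable LSD radix sort built on a stable counting sort per digit (a sketch added here, matching the scheme above):

    def radix_sort(A, d, k=10):
        """Sort d-digit base-k integers, least significant digit first."""
        for i in range(d):                   # digit 0 = least significant
            digit = lambda x: (x // k**i) % k
            C = [0] * k
            for a in A:
                C[digit(a)] += 1
            for j in range(1, k):
                C[j] += C[j - 1]
            B = [0] * len(A)
            for a in reversed(A):            # right-to-left: stability
                C[digit(a)] -= 1
                B[C[digit(a)]] = a
            A = B
        return A

    assert radix_sort([928, 495, 329, 457, 657, 839], 3) == \
           [329, 457, 495, 657, 839, 928]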
Bucket Sort
Assumption: the input is generated by a random process that distributes elements uniformly over [0, 1). (The uniform distribution is the probability distribution in which all values of the set of possible values are equally probable.)
Idea:
    Divide [0, 1) into n equal-sized buckets
    Distribute the n input values into the buckets
    Sort each bucket
    Go through the buckets in order, listing the elements in each one
Input: A[1..n], where 0 ≤ A[i] < 1 for all i. Output: the elements of A, sorted.
Auxiliary storage: B[0..n – 1] of linked lists, each list initially empty.
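A compact runnable sketch (added here; Python lists and the built-in sort stand in for the linked lists and the per-bucket insertion sort the slides describe):

    def bucket_sort(A):
        """Bucket sort for values uniformly distributed in [0, 1)."""
        n = len(A)
        B = [[] for _ in range(n)]        # buckets B[0..n-1], initially empty
        for x in A:
            B[int(n * x)].append(x)       # value x lands in bucket floor(n*x)
        for bucket in B:
            bucket.sort()                 # expected O(1) elements per bucket
        return [x for bucket in B for x in bucket]

    A = [.78, .17, .39, .26, .72, .94, .21, .12, .23, .68]
    assert bucket_sort(A) == sorted(A)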
Example – Bucket Sort
[Figure: the values .78 .17 .39 .26 .72 .94 .21 .12 .23 .68 distributed into 10 buckets; e.g., bucket 1 holds .17 and .12, bucket 2 holds .26, .21, .23, and bucket 7 holds .78 and .72]
Example - Bucket Sort Concatenate the lists from
.17 .12 .23 .26 .21 .39 .68 .78 .72 / / 1 .12 / 2 .21 .23 / 3 / 4 5 / 6 7 .72 / Concatenate the lists from 0 to n – 1 together, in order 8 9 /
Analysis
Relies on no bucket getting too many values.
All lines except the insertion sorting take O(n) altogether.
Intuitively, if each bucket gets a constant number of elements, it takes O(1) time to sort each bucket, giving O(n) sort time for all buckets.
We "expect" each bucket to have few elements, since the average is 1 element per bucket. But we need to do a careful analysis (its conclusion: expected total time O(n)).
Summary: Non-Comparison-Based Sorts
Running time (worst case / average case / best case; in place?):
    Counting Sort: O(n + k) / O(n + k) / O(n + k); in place: no
    Radix Sort: O(d(n + k')) / O(d(n + k')) / O(d(n + k')); in place: no
    Bucket Sort: expected O(n) average case; in place: no
Counting sort assumes the input elements are in the range [0, 1, 2, …, k] and uses array indexing to count the number of occurrences of each value.
Radix sort assumes each integer consists of d digits, each in the range [1, 2, …, k'].
Bucket sort requires advance knowledge of the input distribution (it sorts n numbers uniformly distributed in [0, 1) in expected O(n) time).