1
Instructor Neelima Gupta ngupta@cs.du.ac.in
2
Expected Running Times and Randomized Algorithms Instructor Neelima Gupta ngupta@cs.du.ac.in
3
Expected Running Time of Insertion Sort x_1, x_2, ..., x_{i-1}, x_i, ..., x_n For i = 2 to n: insert the i-th element x_i into the partially sorted list x_1, x_2, ..., x_{i-1} (at the r-th position).
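As a concrete companion to the pseudocode above, here is a minimal Python sketch of insertion sort (illustrative, not from the slides); the list name `a` and the position variable `r` are just naming choices.

```python
def insertion_sort(a):
    """Sort list a in place, inserting a[i] into the sorted prefix a[0..i-1]."""
    for i in range(1, len(a)):           # i-th element (0-based here)
        key = a[i]
        r = i
        # Shift larger elements one step right until the insertion position r is found.
        while r > 0 and a[r - 1] > key:
            a[r] = a[r - 1]
            r -= 1
        a[r] = key                        # insert at position r
    return a
```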
4
Expected Running Time of Insertion Sort Let X_i be the random variable representing the number of comparisons required to insert the i-th element of the input array into the sorted subarray of the first i-1 elements. X_i can take the values 1, ..., i-1; let x_ij denote the number of comparisons made when x_i ends up in position j, for the i possible positions j = 1, ..., i. Then E(X_i) = Σ_j x_ij · p(x_ij), where E(X_i) is the expected value of X_i and p(x_ij) is the probability of inserting x_i at the j-th position, 1 ≤ j ≤ i.
5
Expected Running Time of Insertion Sort x_1, x_2, ..., x_{i-1}, x_i, ..., x_n How many comparisons does it take to insert the i-th element at the j-th position?
6
Position | # of Comparisons
i | 1
i-1 | 2
i-2 | 3
... | ...
2 | i-1
1 | i-1
Note: both position 2 and position 1 need i-1 comparisons. Why? To insert the element at position 2 we must compare it with the first element of the sorted prefix, and that same comparison already tells us whether the new element goes before or after that element, so position 1 requires no extra comparison.
7
Thus, E(X_i) = (1/i) [ Σ_{k=1}^{i-1} k + (i-1) ], where 1/i is the probability of inserting at the j-th position among the i possible positions. For n elements, E(X_2 + ... + X_n) = Σ_{i=2}^{n} E(X_i) = Σ_{i=2}^{n} (1/i) [ Σ_{k=1}^{i-1} k + (i-1) ] = n(n-1)/4 + n - H_n, where H_n is the n-th harmonic number. Therefore the average case of insertion sort takes Θ(n²).
8
For n elements, the expected number of comparisons is T = Σ_{i=2}^{n} (1/i) [ Σ_{k=1}^{i-1} k + (i-1) ], where 1/i is the probability of inserting at the r-th position among the i possible positions, and E(X_2 + ... + X_n) = Σ_{i=2}^{n} E(X_i), where X_i is the number of comparisons made to insert the i-th element. This gives T = Θ(n²), so the average case of insertion sort takes Θ(n²) time.
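A small experiment (a sketch, not part of the slides) that counts comparisons and checks the average derived above; the counter plays the role of X_2 + ... + X_n, and the observed average should land close to n(n-1)/4 plus a lower-order term.

```python
import random

def count_comparisons(a):
    """Return the number of key comparisons insertion sort makes on list a."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key, r = a[i], i
        while r > 0:
            comparisons += 1             # compare key with a[r-1]
            if a[r - 1] <= key:
                break
            a[r] = a[r - 1]
            r -= 1
        a[r] = key
    return comparisons

n, trials = 200, 500
avg = sum(count_comparisons(random.sample(range(10 * n), n))
          for _ in range(trials)) / trials
# The prediction n(n-1)/4 is the dominant term of the expected count.
print(f"average comparisons for n={n}: {avg:.0f}, n(n-1)/4 = {n * (n - 1) / 4:.0f}")
```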
9
Quick-Sort Pick the first item from the array and call it the pivot. Partition the items in the array around the pivot so that all elements to the left are less than or equal to the pivot and all elements to the right are greater than the pivot. Use recursion to sort the two partitions. [figure: partition 1 (items ≤ pivot) | pivot | partition 2 (items > pivot)]
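A minimal sketch of the scheme just described, with the first element as the pivot (illustrative; it sorts into new lists rather than in place).

```python
def quicksort(items):
    """Quicksort with the first element as pivot, as described above."""
    if len(items) <= 1:
        return items
    pivot = items[0]
    left = [x for x in items[1:] if x <= pivot]    # partition 1: items <= pivot
    right = [x for x in items[1:] if x > pivot]    # partition 2: items > pivot
    return quicksort(left) + [pivot] + quicksort(right)
```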
10
Quicksort: Expected number of comparisons Partition may generate any of the splits (0 : n-1, 1 : n-2, 2 : n-3, ..., n-2 : 1, n-1 : 0), each with probability 1/n. If T(n) is the expected running time, then T(n) = (1/n) Σ_{k=0}^{n-1} [ T(k) + T(n-1-k) ] + Θ(n), which solves to T(n) = O(n log n).
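To make the recurrence concrete, here is a small script (illustrative, not from the slides) that evaluates it bottom-up, taking n-1 comparisons as the partitioning cost and using the symmetry Σ_k [T(k) + T(n-1-k)] = 2 Σ_k T(k); the output can be compared with the 2·n·ln n growth of the known solution.

```python
import math

def expected_comparisons(n_max):
    """Evaluate T(n) = (n - 1) + (2/n) * sum_{k=0}^{n-1} T(k), with T(0) = T(1) = 0."""
    T = [0.0] * (n_max + 1)
    running = 0.0                      # running sum T(0) + ... + T(n-1)
    for n in range(1, n_max + 1):
        running += T[n - 1]
        if n >= 2:
            T[n] = (n - 1) + 2.0 * running / n
    return T

T = expected_comparisons(1000)
for n in (10, 100, 1000):
    # The solution grows like 2 n ln n, i.e. O(n log n).
    print(n, round(T[n], 1), round(2 * n * math.log(n), 1))
```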
11
Randomized Quick-Sort Pick an element from the array uniformly at random and call it the pivot. Partition the items in the array around the pivot so that all elements to the left are less than or equal to the pivot and all elements to the right are greater than the pivot. Use recursion to sort the two partitions. [figure: partition 1 (items ≤ pivot) | pivot | partition 2 (items > pivot)]
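The only change from the quicksort sketch above is how the pivot is chosen; a short illustrative variant (the extra `equal` list is just a convenient way to handle duplicate keys):

```python
import random

def randomized_quicksort(items):
    """Same scheme as before, but the pivot index is chosen uniformly at random."""
    if len(items) <= 1:
        return items
    pivot = items[random.randrange(len(items))]
    left = [x for x in items if x < pivot]      # strictly less than the pivot
    equal = [x for x in items if x == pivot]    # the pivot and any duplicates
    right = [x for x in items if x > pivot]
    return randomized_quicksort(left) + equal + randomized_quicksort(right)
```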
12
Remarks Not much different from Quick-Sort, except that earlier the algorithm was deterministic and only the bounds were probabilistic; here the algorithm itself is randomized. We pick the pivot element at random. Notice that there is no difference in how the algorithm behaves from that point onwards. In the earlier case we can identify a worst-case input; here no input is worst case.
13
Randomized Select
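The slide only names the algorithm; the following is a minimal sketch of the standard randomized selection (quickselect) routine it presumably refers to, which finds the k-th smallest element in expected O(n) time. The function name and 1-based convention for k are assumptions, not from the slides.

```python
import random

def randomized_select(items, k):
    """Return the k-th smallest element of items (k is 1-based)."""
    pivot = items[random.randrange(len(items))]
    left = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    if k <= len(left):
        return randomized_select(left, k)          # answer lies in the left part
    if k <= len(left) + len(equal):
        return pivot                               # the pivot itself is the answer
    right = [x for x in items if x > pivot]
    return randomized_select(right, k - len(left) - len(equal))
```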
14
Randomized Algorithms A randomized algorithm performs coin tosses (i.e., uses random bits) to control its execution:
i ← random()
if i = 0 then do A ...
else (i.e., i = 1) do B ...
Its running time depends on the outcomes of the coin tosses.
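In Python, the coin toss in the pseudocode above could look like the following (an illustrative sketch; `do_A` and `do_B` are placeholder functions standing in for the two branches).

```python
import random

def do_A():
    print("branch A")

def do_B():
    print("branch B")

i = random.randint(0, 1)   # one unbiased coin toss: 0 or 1
if i == 0:
    do_A()
else:                      # i.e. i == 1
    do_B()
```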
15
Assumptions: coins are unbiased, and coin tosses are independent. The worst-case running time of a randomized algorithm may be large but occurs with very low probability (e.g., it occurs when all the coin tosses come up "heads").
16
Monte Carlo Algorithms Running times are guaranteed but the output may not be completely correct. Probability of error is low.
17
Las Vegas Algorithms Output is guaranteed to be correct. Bounds on running times hold with high probability. What type of algorithm is Randomized Qsort?
18
Why expected running times? Markov's inequality: P(X > k·E(X)) < 1/k. That is, the probability that the algorithm takes more than 2·E(X) time is less than 1/2, and the probability that it takes more than 10·E(X) time is less than 1/10. This is why Quicksort does well in practice.
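A quick empirical illustration (not from the slides) of Markov's inequality applied to randomized quicksort's comparison count; the counting helper is a hypothetical wrapper written just for this experiment.

```python
import random

def qsort_comparisons(items):
    """Number of comparisons made by randomized quicksort on the given list."""
    if len(items) <= 1:
        return 0
    pivot = items[random.randrange(len(items))]
    left = [x for x in items if x < pivot]
    right = [x for x in items if x > pivot]
    return len(items) - 1 + qsort_comparisons(left) + qsort_comparisons(right)

n, trials = 500, 2000
counts = [qsort_comparisons(list(range(n))) for _ in range(trials)]
mean = sum(counts) / trials
frac_over = sum(c > 2 * mean for c in counts) / trials
# Markov guarantees at most 0.5 here; in practice the fraction is far smaller.
print(f"mean = {mean:.0f}, fraction of runs above 2*mean = {frac_over:.4f}")
```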
19
Markov's bound: P(X > k·E(X)) < 1/k, where k is a constant. Chernoff's bound: P(X > 2μ) < 1/2. A stronger result: P(X > k·μ) < 1/n^k, where k is a constant.
20
Binary Search Tree What is a binary search tree? A BST is a (possibly empty) rooted tree in which each node holds a key and has a possibly empty left subtree and a possibly empty right subtree; keys in the left subtree are smaller than the node's key and keys in the right subtree are larger. Each of the left subtree and the right subtree is itself a BST.
21
Binary Search Tree Pick the first item from the array and call it the pivot; it becomes the root of the BST. Partition the items in the array around the pivot so that all elements to the left are less than or equal to the pivot and all elements to the right are greater than the pivot. Recursively build a BST on each partition; the results become the left and the right subtree of the root.
22
Binary Search Tree Consider the following input: 1, 2, 3, ..., 10,000. What is the time for construction? Search time?
23
Randomly Built Binary Search Tree Pick an item from the array at random and call it the pivot; it becomes the root of the BST. Partition the items in the array around the pivot so that all elements to the left are less than or equal to the pivot and all elements to the right are greater than the pivot. Recursively build a BST on each partition; the results become the left and the right subtree of the root.
24
Example Consider the input 10, 20, 30, 40, 50, 60, 70, 80, 90, 100.
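A sketch of the randomly built BST described above, run on the example input from this slide; the tuple-based node representation `(key, left, right)` is just one convenient, illustrative choice.

```python
import random

def build_random_bst(keys):
    """Build a BST by picking a random key as the root (pivot) and recursing."""
    if not keys:
        return None
    pivot = keys[random.randrange(len(keys))]
    left = [k for k in keys if k < pivot]
    right = [k for k in keys if k > pivot]
    return (pivot, build_random_bst(left), build_random_bst(right))

def height(node):
    """Number of nodes on the longest root-to-leaf path."""
    if node is None:
        return 0
    return 1 + max(height(node[1]), height(node[2]))

keys = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]   # input from the example slide
tree = build_random_bst(keys)
print("root:", tree[0], "height:", height(tree))    # expected height is O(log n)
```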
25
Height of the RBST WLOG, assume that the keys are distinct. (What if they are not?) Rank(x) = number of elements < x. Let X_i be the height of the tree rooted at a node with rank i, and let Y_i = 2^{X_i} be its exponential height. Let H be the height of the entire BST; then H = max{H1, H2} + 1, where H1 is the height of the left subtree and H2 is the height of the right subtree.
26
Y = 2^H = 2 · max{2^{H1}, 2^{H2}}. Let E(EH(T(n))) be the expected exponential height of a tree with n nodes. Then E(EH(T(n))) = (2/n) Σ_{k=0}^{n-1} E[ max{ EH(T(k)), EH(T(n-1-k)) } ] = O(n³), and hence E(H(T(n))) = E(log EH(T(n))) ≤ log E(EH(T(n))) = O(log n).
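The last step, moving the expectation inside the logarithm, uses Jensen's inequality (log is concave). Written out in the slide's notation (a sketch of the standard argument):

```latex
% Jensen's inequality for the concave function \log_2:
% E[\log_2 Y] \le \log_2 E[Y], applied to Y = EH(T(n)) = 2^{H(T(n))}.
\[
E\bigl[H(T(n))\bigr]
  = E\bigl[\log_2 \mathrm{EH}(T(n))\bigr]
  \le \log_2 E\bigl[\mathrm{EH}(T(n))\bigr]
  \le \log_2\bigl(c\,n^3\bigr)
  = O(\log n).
\]
```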
27
Construction Time? Search Time? What is the worst case input?
28
Acknowledgements Kunal Verma Nidhi Aggarwal And other students of MSc(CS) batch 2009.
29
Hashing Motivation: symbol tables. A compiler uses a symbol table to relate symbols to associated data. Symbols: variable names, procedure names, etc. Associated data: memory location, call graph, etc. For a symbol table (also called a dictionary), we care about search, insertion, and deletion; we typically don't care about sorted order.
30
Hash Tables More formally: given a table T and a record x, with a key (= symbol) and satellite data, we need to support Insert(T, x), Delete(T, x), and Search(T, x). We want these to be fast, but don't care about sorting the records. The structure we will use is a hash table; it supports all of the above in O(1) expected time!
31
Hash Functions Next problem: collisions. [figure: universe of keys U with actual keys K ⊆ U; table T with slots 0 ... m-1 holding h(k1), h(k4), h(k3), and a collision h(k2) = h(k5)]
32
Resolving Collisions How can we solve the problem of collisions? One solution is chaining; other solutions include open addressing.
33
Chaining Chaining puts elements that hash to the same slot into a linked list. [figure: hash table T where the keys from K that hash to the same slot are stored in a linked list hanging off that slot]
34
Chaining How do we insert an element? [figure: same chained hash table as above]
35
Chaining How do we delete an element? [figure: same chained hash table as above]
36
Chaining How do we search for an element with a given key? [figure: same chained hash table as above]
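A compact sketch of chaining (illustrative, not from the slides; the slot count `m` and the modular hash are assumptions), showing how insert, delete, and search all walk the list stored in slot h(key).

```python
class ChainedHashTable:
    """Hash table with collision resolution by chaining (one list per slot)."""

    def __init__(self, m=8):
        self.m = m
        self.slots = [[] for _ in range(m)]    # one chain per slot

    def _h(self, key):
        return hash(key) % self.m              # assumed simple modular hash

    def insert(self, key, value):
        chain = self.slots[self._h(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)         # overwrite an existing key
                return
        chain.append((key, value))

    def search(self, key):
        for k, v in self.slots[self._h(key)]:
            if k == key:
                return v
        return None

    def delete(self, key):
        chain = self.slots[self._h(key)]
        self.slots[self._h(key)] = [(k, v) for (k, v) in chain if k != key]

table = ChainedHashTable()
table.insert("x", 42)
print(table.search("x"))   # 42
table.delete("x")
print(table.search("x"))   # None
```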
37
Analysis of Chaining Assume simple uniform hashing: each key is equally likely to hash to any slot of the table. Given n keys and m slots in the table, the load factor α = n/m is the average number of keys per slot. What will be the average cost of an unsuccessful search for a key?
38
Analysis of Chaining Assume simple uniform hashing: each key is equally likely to hash to any slot of the table. Given n keys and m slots in the table, the load factor α = n/m is the average number of keys per slot. What will be the average cost of an unsuccessful search for a key? A: O(1 + α)
39
Analysis of Chaining Assume simple uniform hashing: each key is equally likely to hash to any slot of the table. Given n keys and m slots in the table, the load factor α = n/m is the average number of keys per slot. What will be the average cost of an unsuccessful search for a key? A: O(1 + α) What will be the average cost of a successful search?
40
Analysis of Chaining Assume simple uniform hashing: each key is equally likely to hash to any slot of the table. Given n keys and m slots in the table, the load factor α = n/m is the average number of keys per slot. What will be the average cost of an unsuccessful search for a key? A: O(1 + α) What will be the average cost of a successful search? A: O((1 + α)/2) = O(1 + α)
41
Analysis of Chaining Continued So the cost of searching is O(1 + α). If the number of keys n is proportional to the number of slots m in the table, what is α? A: α = O(1). In other words, we can make the expected cost of searching constant if we make α constant.
42
A Final Word About Randomized Algorithms If we could prove this: P(failure) < 1/k (we are sort of happy); P(failure) < 1/n^k (most of the time this is what holds, and we are happy); P(failure) < 1/2^n (this is difficult to achieve, but it is what we really want).
43
Acknowledgements Kunal Verma Nidhi Aggarwal And other students of MSc(CS) batch 2009.
44
END