Approximate Medians and other Quantiles in One Pass and with Limited Memory. Researchers: G. Singh, S. Rajagopalan & B. Lindsey. Lecturer: Eitan Ben Amos.

1 Approximate Medians and other Quantiles in One Pass and with Limited Memory Researchers: G. Singh, S. Rajagopalan & B. Lindsey Lecturer: Eitan Ben Amos, 2003

2 Lecture Structure Problem Definition A Deterministic Algorithm Proof Complexity analysis Comparison to other algorithms A randomized solution. Pros & cons of randomized solution.

3 Problem Definition When given a large data set of N values, design an algorithm for computing approximate quantiles (φ) in a single pass. The approximation guarantee is an input (ε). The algorithm should be applicable to any distribution of values & arrival order. Compute multiple quantiles at no extra cost. Low memory requirements.

4 Quantiles Given a stream of N values, the φ-quantile, for φ ∈ [0,1], is the value located in position ⌈φ*N⌉ in the sorted input stream. When φ=0.5 it is the median. An element is an ε-approximate φ-quantile if its rank in the sorted input stream is between ⌈(φ-ε)*N⌉ and ⌈(φ+ε)*N⌉. There can be several values in this range.
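As a sketch of this definition, here is a tiny offline, all-in-memory reference implementation (function names are my own) — the streaming algorithm of the following slides exists precisely to avoid this full sort:

```python
import math

def phi_quantile(values, phi):
    """Exact phi-quantile: the element at 1-based position
    ceil(phi * N) of the sorted sequence."""
    s = sorted(values)
    return s[max(1, math.ceil(phi * len(s))) - 1]

def is_approx_quantile(values, phi, eps, candidate):
    """True if candidate's rank lies between ceil((phi-eps)*N)
    and ceil((phi+eps)*N)."""
    s = sorted(values)
    n = len(s)
    lo = max(1, math.ceil((phi - eps) * n))
    hi = math.ceil((phi + eps) * n)
    ranks = [i + 1 for i, v in enumerate(s) if v == candidate]
    return any(lo <= r <= hi for r in ranks)

data = [9, 1, 8, 2, 7, 3, 6, 4, 5, 10]
print(phi_quantile(data, 0.5))                # rank ceil(5) = 5 -> value 5
print(is_approx_quantile(data, 0.5, 0.1, 6))  # rank 6 in [4, 6] -> True
```

Note that with ε=0.1 and N=10, any element of rank 4 to 6 is an acceptable answer, illustrating "several values in this range".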

5 Database Applications Used for query optimizations. Used by parallel DB systems in order to split the inserted data among the servers into approximately equal parts. Distributed parallel sorting uses quantiles to split the ranges between the machines.

6 Algorithm Framework An algorithm is parameterized by 2 integers: b,k. It will use b buffers, each storing k elements; memory usage is b*k plus a constant. Every buffer x is associated with a positive integer weight, w(x). The weight denotes the number of input elements represented by an element in x.

7 Algorithm Framework (Cont'd) Buffers are labeled either "Empty" or "Full". Initially all buffers are "Empty". The values of b,k are calculated so that they enforce the approximation guarantee (ε) and minimize the memory requirement b*k. The algorithm must be able to process N elements.

8 Framework Basic Operations (1) NEW Takes an empty buffer as input. Populates the buffer with the next k elements from the input stream & assigns the "Full" buffer a weight of 1. If fewer than k elements remain, an equal number of +∞ & -∞ elements are added to fill the buffer. The input stream with the additional ±∞ elements is called the "augmented stream".
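A minimal sketch of NEW (function name and the choice to keep buffers sorted are my own, not from the paper); a short final buffer is padded with as near as possible equal numbers of -∞/+∞ sentinels:

```python
import itertools

def new_op(stream, k):
    """Fill one buffer with the next k stream elements; weight is 1.
    A short final buffer is padded with -inf/+inf sentinels, turning
    the input into the 'augmented stream'."""
    buf = list(itertools.islice(stream, k))
    short = k - len(buf)
    buf = [float('-inf')] * (short // 2) + buf + [float('inf')] * (short - short // 2)
    return sorted(buf), 1

print(new_op(iter([3, 1, 2]), 5))  # ([-inf, 1, 2, 3, inf], 1)
```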

9 Quantile in augmented stream Length of the augmented stream is β*N, β ≥ 1. φ' = (2*φ + β - 1)/(2*β). The φ-quantile in the original stream is the φ'-quantile in the augmented stream. Proof: (β-1)*N elements were added, ½ of which appear before the φ-quantile in the sorted stream, so ⌈φ'*β*N⌉ = φ*N + (β-1)*N/2 = (N/2)*(2φ+β-1).
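The mapping can be checked numerically (a sketch; β and φ as defined above):

```python
import math

def phi_prime(phi, beta):
    # phi' = (2*phi + beta - 1) / (2*beta)
    return (2 * phi + beta - 1) / (2 * beta)

# N = 100 original elements, beta*N = 150 after padding:
N, beta, phi = 100, 1.5, 0.5
p = phi_prime(phi, beta)
print(p)                                              # 0.5
# ceil(phi' * beta * N) equals phi*N + (beta-1)*N/2:
print(math.ceil(p * beta * N), phi * N + (beta - 1) * N / 2)  # 75 75.0
```

With φ=0.5 the median stays the median (φ'=0.5): half of the padding precedes it and half follows it.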

10 Basic Operations (Cont'd) (2) COLLAPSE Takes c ≥ 2 "Full" input buffers X_1,…,X_c & outputs a buffer Y (all of size k). All input buffers are marked "Empty"; the output buffer Y is marked "Full". The weight of Y is the sum of the weights of all input buffers: w(Y) = Σ w(X_i).

11 Collapsing Buffers Make w(X_i) copies of each element in X_i. Sort the elements from all buffers together. The elements of Y are k equally-spaced elements of this sorted sequence: if w(Y) is odd, those at positions j*w(Y)+(w(Y)+1)/2, j=0,…,k-1; if w(Y) is even, those at positions j*w(Y)+w(Y)/2 or j*w(Y)+(w(Y)+2)/2.

12 Collapsing Buffers (Cont'd) For 2 successive COLLAPSE operations with even w(Y), alternate between the two choices. Define offset(Y)=(w(Y)+z)/2, z ∈ {0,1,2}; Y has the elements at positions j*w(Y)+offset(Y). Collapsing buffers does not require the creation of multiple copies of elements: a single scan of the elements, in a manner similar to merge-sort, will do.

13 COLLAPSE example
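The example image for this slide is not reproduced in the transcript. As a substitute, here is a runnable sketch of COLLAPSE (function names and data layout are my own): it walks the weighted merge in a single merge-sort-style scan, never materializing the w(X_i) copies:

```python
import heapq

def collapse(buffers, even_choice=0):
    """buffers: list of (sorted list of k elements, weight).
    Returns (Y, w(Y)); Y holds the elements at the 1-based positions
    j*w(Y)+offset(Y), j=0..k-1, of the weighted merge."""
    k = len(buffers[0][0])
    wY = sum(w for _, w in buffers)
    if wY % 2 == 1:
        offset = (wY + 1) // 2
    else:  # alternate even_choice between 0 and 1 on successive calls
        offset = (wY + 2 * even_choice) // 2
    targets = [j * wY + offset for j in range(k)]
    out, pos, t = [], 0, 0
    for v, w in heapq.merge(*[[(v, w) for v in buf] for buf, w in buffers]):
        pos += w              # element v occupies positions pos-w+1 .. pos
        while t < k and targets[t] <= pos:
            out.append(v)
            t += 1
    return out, wY

print(collapse([([1, 3, 5], 1), ([2, 4, 6], 1)]))  # ([1, 3, 5], 2)
```

Tracking the cumulative weight `pos` is what replaces the explicit copies: each element "covers" a run of w positions of the virtual expanded sequence.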

14 Lemma 1 C = Number of COLLAPSE operations made by the algorithm. W = Sum of weights of output buffers produced by these COLLAPSE operations. Lemma: sum of offsets of all COLLAPSE operations is at least (W+C-1)/2

15 Proof C = C_odd + C_even (the numbers of COLLAPSE operations with w(Y) odd & even respectively). C_even = C_even1 + C_even2 (the numbers of COLLAPSE operations with offset(Y) equal to w(Y)/2 & (w(Y)+2)/2 respectively). The sum of all offsets is (W + C_odd + 2*C_even2)/2.

16 Proof (Cont'd) Since COLLAPSE alternates between the 2 offset choices for even w(Y): if C_even1 = C_even2 then C_even = 2*C_even2; if C_even1 = C_even2 + 1 then C_even = 2*C_even2 + 1. In either case C_even2 ≥ (C_even - 1)/2, so the sum of offsets is at least (W + C_odd + C_even - 1)/2 = (W + C - 1)/2.

17 Basic Operations (Cont'd) (3) OUTPUT OUTPUT is performed only once, just before termination. Takes c ≥ 2 "Full" input buffers X_1,…,X_c of size k. Outputs a single element corresponding to the φ'-quantile of the augmented stream.

18 OUTPUT (Cont'd) Makes w(X_i) copies of each element in buffer X_i & sorts all input buffers together. Outputs the element in position ⌈φ'*k*W⌉, where W = w(X_1)+…+w(X_c).
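A sketch of OUTPUT in the same single-weighted-scan style as COLLAPSE (names my own):

```python
import math
import heapq

def output_op(buffers, phi_p):
    """buffers: list of (sorted list of k elements, weight).
    Returns the element at 1-based position ceil(phi' * k * W)
    of the weighted merge of the final buffers."""
    k = len(buffers[0][0])
    W = sum(w for _, w in buffers)
    target = math.ceil(phi_p * k * W)
    pos = 0
    for v, w in heapq.merge(*[[(v, w) for v in buf] for buf, w in buffers]):
        pos += w
        if pos >= target:
            return v

# weighted sequence 1,1,2,3,3,4,5,5,6; position ceil(0.5*3*3)=5 -> 3
print(output_op([([1, 3, 5], 2), ([2, 4, 6], 1)], 0.5))  # 3
```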

19 COLLAPSE policies Different COLLAPSE policies mean different criteria for when to use the NEW/COLLAPSE operations. – Munro & Paterson – Alsabti, Ranka & Singh – New Algorithm.

20 Munro & Paterson If there are empty buffers, invoke NEW. Otherwise, invoke COLLAPSE on 2 buffers having the same weight. Following is an example of the operations for b=6.

21 Munro & Paterson

22 Alsabti, Ranka & Singh Fill b/2 “Empty” buffers by invoking NEW & then invoke COLLAPSE on them. Repeat this b/2 times. Invoke OUTPUT on b/2 resulting buffers. Following is an example of operations for b=10.

23 Alsabti, Ranka & Singh

24 New Algorithm Associate with every buffer X an integer l(X) denoting its level. Let l = minimum among all levels of currently “Full” buffers. If there’s exactly one “Empty” buffer, invoke NEW & assign it level l. If there are at least 2 “Empty” buffers, invoke NEW on each & assign them level 0.

25 New Algorithm (Cont'd) If there are no "Empty" buffers, invoke COLLAPSE on the set of buffers with level l & assign the output buffer level l+1. Following is an example of the operations for b=5, h=4.

26 New Algorithm
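The example image for slide 26 is not reproduced in the transcript. The decision rule of slides 24-25 can instead be sketched as a pure function (the `(state, level)` data layout is my own):

```python
def choose_action(buffers):
    """buffers: list of (state, level), state in {'empty', 'full'}.
    Returns the next operation per the new algorithm's policy."""
    n_empty = sum(1 for s, _ in buffers if s == 'empty')
    if n_empty >= 2:
        return ('NEW', 0)        # fill empty buffers at level 0
    l = min(lvl for s, lvl in buffers if s == 'full')
    if n_empty == 1:
        return ('NEW', l)        # fill the single empty buffer at level l
    return ('COLLAPSE', l)       # collapse all level-l buffers into level l+1

print(choose_action([('empty', 0)] * 5))                    # ('NEW', 0)
print(choose_action([('full', 0), ('full', 0), ('empty', 0),
                     ('full', 1), ('full', 2)]))            # ('NEW', 0)
print(choose_action([('full', 1), ('full', 1), ('full', 2),
                     ('full', 2), ('full', 3)]))            # ('COLLAPSE', 1)
```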

27 Tree representation Sequence of operations can be seen as a tree. Vertices (except root) represent the set of all logical buffers (initial, intermediate, final). Leaves correspond to initial buffers which are populated from the input stream by the NEW operation.

28 Tree representation (Cont'd) An edge is drawn from every input buffer to its output buffer (created by COLLAPSE). The root corresponds to the final OUTPUT operation. The children of the root are the final buffers that are produced (by COLLAPSE operations). Broken edges are drawn toward the children of the root.

29 Definitions User Specified: – N: size of the input stream – φ: quantile to be computed – ε: approximation guarantee Others: – b: number of buffers – k: size of each buffer – φ': quantile in the augmented stream

30 Definitions (Cont'd) More Others: – C: number of COLLAPSE operations – W: sum of weights of all COLLAPSE operations – w_max: weight of the heaviest COLLAPSE – L: number of leaves in the tree – h: height of the tree

31 Approximation Guarantees We will prove the following: the difference in rank between the true φ-quantile of the original data set & the output of the algorithm is at most w_max + (W-C-1)/2.

32 Lemma 2 Lemma: The sum of weights of the top buffers (the children of the root) is L, the number of leaves

33 Proof Every buffer that is filled by NEW has a weight of 1. COLLAPSE creates a buffer whose weight is the sum of the weights of its input buffers. Looking at the tree of operations, every node's weight equals the sum of its children's weights. Applying this recursively, the weight of a top buffer equals the number of leaves in the subtree rooted at it, so the weights of the top buffers sum to L.

34 Definitely Small/Large Let Q be the output of the algorithm. An element in the input stream is DS(DL) if it is smaller(larger) than Q. In order to identify all the DS(DL) elements we will start from the top buffers (children of root) and move towards the leaves. Mark elements of top buffers as DS(DL) if they are smaller(larger) than Q.

35 Definitely Small/Large (Cont'd) When going from a parent to its children, mark as DS(DL) all elements in the child buffers that are smaller(larger) than the DS(DL) elements in their parent. We will now bound how many DS(DL) elements exist.

36 Weighted DS/DL bound The weight of an element is the weight of the buffer it is in. The weighted DS(DL) count adds w(X) for every element in buffer X that is DS(DL). Let DS_top (DL_top) denote the weighted sum of DS(DL) elements among the top buffers.

37 Lemma 3 ⌈φ'kL⌉ - w_max ≤ DS_top ≤ ⌈φ'kL⌉ - 1. Right side: OUTPUT returns the element at position ⌈φ'kL⌉ of the weighted buffers, so there are obviously fewer than that many elements which are smaller.

38 Lemma 3 (Cont'd) Left side: surrounding Q there are w(X_i)-1 elements that are copies of Q. Had we asked for a slightly different quantile we would have gotten a different copy of Q as the output, although it would have been a different element of the input stream. The error can be as large as w(X_i), which is bounded by w_max. Excluding these copies at the position of Q, all elements below are DS for sure.

39 Lemma 3 (Cont'd) kL - ⌈φ'kL⌉ - w_max + 1 ≤ DL_top ≤ kL - ⌈φ'kL⌉. Right side: there are a total of kL elements in the augmented stream & Q is in position ⌈φ'kL⌉, so there are kL - ⌈φ'kL⌉ elements after the position of Q, some of which might be copies of Q.

40 Lemma 3 (Cont'd) Left side: of the kL - ⌈φ'kL⌉ elements after the position of Q, at most w_max - 1 are copies of Q (w_max including Q itself), after which all elements are DL.

41 Weighted DS Consider a node Y of the tree corresponding to a COLLAPSE operation, & let Y have s > 0 DS elements. The largest of these DS elements appears in position (s-1)*w(Y)+offset(Y) in the sorted sequence of the elements of Y's children, with each element duplicated according to the weight of the buffer it originates from.

42 Weighted DS (Cont'd) Therefore, the weighted sum of DS elements among the children of Y is at least (s-1)*w(Y) + offset(Y), which equals s*w(Y) - (w(Y) - offset(Y)).

43 Weighted DL Similarly, let Y have l > 0 DL elements. The smallest of these DL elements appears in position (l-1)*w(Y) + [w(Y)-offset(Y)] in the sorted sequence of the elements of Y's children (counting from the end of the sequence towards its beginning), with each element duplicated according to the weight of the buffer it originates from.

44 Weighted DL (Cont'd) The weighted sum of DL elements among the children of Y is at least (l-1)*w(Y) + [w(Y)-offset(Y)], which equals l*w(Y) - offset(Y).

45 DS/DL Conclusion The weighted sum of DS(DL) elements among the children of a node Y is smaller by at most w(Y)-offset(Y) than the weighted sum of DS(DL) elements in Y itself. So we can count DS(DL) elements from the top buffers towards the leaves, subtracting w(Y)-offset(Y) for each COLLAPSE on the way.

46 How many leaves ? Let DS_leaves (DL_leaves) denote the number of definitely-small(large) elements among the leaf buffers of the operations tree. Since the weight of a leaf is 1, DS_leaves (DL_leaves) are, in fact, the numbers of definitely-small(large) elements in the augmented stream.

47 Lemma 4 DS_leaves ≥ DS_top - (W-C+1)/2 and DL_leaves ≥ DL_top - (W-C+1)/2.

48 Lemma 4 – Proof Starting at the top buffers, the initial weighted sum of DS(DL) elements is DS_top (DL_top). Each COLLAPSE that creates a node Y diminishes the weighted sum by at most w(Y)-offset(Y). Travelling down to the leaves, we apply this for all COLLAPSE operations.

49 Lemma 4 – Proof (Cont'd) The sum of w(Y) over all COLLAPSE operations is W; the sum of offset(Y) over all COLLAPSE operations is at least (W+C-1)/2 by Lemma 1. The total diminution is therefore at most W - (W+C-1)/2 = (W-C+1)/2, which yields Lemma 4.

50 Lemma 5 The difference in rank between the true φ-quantile of the original input stream & the output of the algorithm is at most (W-C-1)/2 + w_max.

51 Lemma 5 - proof Since there are L leaves, each of size k, there are a total of k*L elements in the augmented input stream. The true φ'-quantile of the augmented stream is in position ⌈φ'*k*L⌉. The output of the algorithm can be any element that is neither DS nor DL.

52 Lemma 5 – proof (Cont'd) So the output can be as small as position DS_leaves + 1 or as large as position k*L - DL_leaves. The difference between the true φ'-quantile & the output can therefore be as large as ⌈φ'kL⌉ - DS_leaves - 1 or kL - DL_leaves - ⌈φ'kL⌉. Substituting DS_leaves from Lemma 4 we get: ⌈φ'kL⌉ - DS_leaves - 1 ≤ ⌈φ'kL⌉ - DS_top + (W-C+1)/2 - 1.

53 Lemma 5 – proof (Cont'd) Substituting ⌈φ'kL⌉ - DS_top ≤ w_max from Lemma 3 we get: ⌈φ'kL⌉ - DS_leaves - 1 ≤ w_max + (W-C+1)/2 - 1 = w_max + (W-C-1)/2. The same bound can be established for the quantity kL - DL_leaves - ⌈φ'kL⌉.

54 Approx. bound Munro-Paterson Requires 2 buffers at the leaf level & one buffer at every other level except the root, so the height is at most b. The original paper assumes there are exactly 2^(b-1) leaves & that the final OUTPUT operation takes 2 buffers of weight 2^(b-2) as inputs.

55 Approx. bound Munro-Paterson W = (b-2)*2^(b-1), since the weights of the nodes at each level sum to 2^(b-1) & COLLAPSE occurs at all levels except the leaves & the root. C = 2^(b-1) - 2, since a tree of height b-1 (ignoring leaves) has 2^(b-1) - 1 nodes; removing the root yields the proper value. w_max = 2^(b-2), since the heaviest COLLAPSE produces a child of the root, whose subtree spans half the leaves.

56 Approx. bound Munro-Paterson Plugging these values into Lemma 5 yields: (W-C-1)/2 + w_max = (b-2)*2^(b-2) + 1/2. This value has to be smaller than ε*N for the output to be an ε-approximate quantile.
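As a sanity check (a sketch of my own, not from the paper), the raw values of W, C and w_max reproduce the closed form:

```python
def mp_error_bound(b):
    """(W - C - 1)/2 + w_max for the Munro-Paterson tree
    with 2**(b-1) leaves."""
    W = (b - 2) * 2 ** (b - 1)
    C = 2 ** (b - 1) - 2
    w_max = 2 ** (b - 2)
    return (W - C - 1) / 2 + w_max

for b in (4, 6, 8):
    assert mp_error_bound(b) == (b - 2) * 2 ** (b - 2) + 0.5
print(mp_error_bound(6))  # (6-2)*16 + 0.5 = 64.5
```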

57 Approx. bound Alsabti-Ranka-Singh b is assumed to be even (since b/2 is used). C = b/2. W = (b/2)^2, since there are b/2 COLLAPSE operations, each over b/2 buffers of weight 1. w_max = b/2, since all COLLAPSE operations are the same. L = (b/2)^2, since the root has b/2 children, each having b/2 children.

58 Approx. bound Alsabti-Ranka-Singh Plugging these values into Lemma 5 yields: (W-C-1)/2 + w_max = [(b^2)/4 - b/2 - 1]/2 + b/2 = (b^2)/8 + b/4 - 1/2. This value has to be smaller than ε*N for the output to be an ε-approximate quantile.
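The same kind of sanity check for the Alsabti-Ranka-Singh values (again a sketch of my own):

```python
def ars_error_bound(b):
    """(W - C - 1)/2 + w_max for Alsabti-Ranka-Singh (b even)."""
    W, C, w_max = (b // 2) ** 2, b // 2, b // 2
    return (W - C - 1) / 2 + w_max

for b in (4, 10, 20):
    assert ars_error_bound(b) == b * b / 8 + b / 4 - 0.5
print(ars_error_bound(10))  # 100/8 + 10/4 - 1/2 = 14.5
```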

59 Approx. bound new-Algorithm The values W, C, w_max are functions of the height of the tree, denoted h, in addition to b. The height of the tree is not restricted by b, unlike the previous schemes we saw. Assume h ≥ 3 (so there is at least one level of COLLAPSE operations between the leaves & the root).

60 Approx. bound new-Algorithm

61 Plugging these values into Lemma 5 yields a bound that has to be smaller than ε*N for the output to be an ε-approximate quantile.

62 Memory Usage Comparison

63 Memory Usage (Cont'd) Why does the curve of the Munro-Paterson algorithm have these kinks? We optimize under 2 constraints: (b-2)*2^(b-2) + 1/2 ≤ ε*N and k*L ≥ N. As N increases, k is increased until N reaches a threshold at which adding 1 to b (constraint 1) diminishes k by half, thereby decreasing the memory usage roughly by half.

64 Multiple Quantiles During the analysis we did not assume that only a single quantile is requested, nor did we use the specific quantile until the last operation (OUTPUT), which selects a single element from the top buffers. Conclusion: any algorithm in this framework can output multiple quantiles at the same memory cost as computing a single quantile.

65 Space Complexity The best space complexity is achieved for b=h. The ε-approximation constraint can be relaxed a little, giving b = h = O(log(ε*N)).

66 Space Complexity (Cont'd) The second constraint is k*L ≥ N. Replacing L with its value yields: k = (1/ε)*O(b) = (1/ε)*O(log(ε*N)) = O((1/ε)*log(ε*N)).

67 Space Complexity (Cont'd) The overall space complexity is b*k = O((1/ε)*log²(ε*N)).

68 Parallel Version The new algorithm scales very well on parallel machines. The input stream can be divided among the processors either statically (each one takes T values) or dynamically. Until the top buffers (the children of the root), which are the input buffers for the OUTPUT operation, have been produced, parallelism is obvious.

69 Sampling based Algorithm The deterministic algorithm presented earlier, coupled with sampling, can reduce the memory requirements dramatically. Interestingly, we will achieve a space bound that is independent of N. We add a new input parameter, δ. The probability that the output is correct is required to be 1-δ.

70 Hoeffding's Inequality Let X_1,…,X_n be independent random variables with 0 ≤ X_i ≤ 1 for i=1,…,n. Let X = X_1 + … + X_n & let E(X) denote the expected value of X. Then, for any λ > 0, the following holds: Pr[X - E(X) ≥ λ] ≤ exp(-2*λ²/n).
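A minimal numeric sketch of the inequality (function name my own):

```python
import math

def hoeffding_tail(n, lam):
    """Upper bound on Pr[X - E(X) >= lam] for X the sum of n
    independent random variables each bounded in [0, 1]."""
    return math.exp(-2 * lam ** 2 / n)

# 100 fair coin tosses: chance of exceeding the mean by 10 or more
print(hoeffding_tail(100, 10))  # exp(-2) ~ 0.135
```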

71 Lemma 7 Let ε = ε1 + ε2. A sufficiently large sample of S elements drawn from a population of N elements is enough to guarantee that the set of elements between the pair of positions ⌈(φ±ε1)*S⌉ in the sorted sequence of samples is a subset of the set of elements between the pair of positions ⌈(φ±ε)*N⌉ in the sorted sequence of the N elements.

72 Proof We say a sample is "bad" if it does not satisfy the previously mentioned property; otherwise it is called "good". Let N_{φ-ε} (N_{φ+ε}) denote the elements preceding (succeeding) the (φ-ε)-quantile ((φ+ε)-quantile) among the N elements. A sample of size S is "bad" iff more than ⌈(φ-ε1)*S⌉ elements are drawn from N_{φ-ε}, or more than S - ⌈(φ+ε1)*S⌉ elements are drawn from N_{φ+ε}.

73 Proof (Cont'd) The probability that more than ⌈(φ-ε1)*S⌉ elements are drawn from N_{φ-ε} is bounded as follows. The drawing of S elements from a population of N can be seen as S independent coin tosses, each with success probability φ-ε. The expected number of successful tosses is (φ-ε)*S.

74 Proof (Cont'd) Since φ-ε1 = (φ-ε) + ε2, exceeding ⌈(φ-ε1)*S⌉ successes means exceeding the expectation by about ε2*S. By Hoeffding's inequality (with λ = ε2*S and n = S), the probability that this occurs is at most exp(-2*ε2²*S), which is at most δ/2 for a suitable S.
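The slide's sample-size formula was an image and is not transcribed. Under the Hoeffding bound above (each of the two "bad" events held to probability δ/2), one sufficient choice is the following reconstruction, not necessarily the slide's exact formula:

```python
import math

def sample_size(eps2, delta):
    """Smallest S with exp(-2 * eps2**2 * S) <= delta / 2, i.e. each
    of the two 'bad' events has probability at most delta / 2."""
    return math.ceil(math.log(2 / delta) / (2 * eps2 ** 2))

S = sample_size(0.05, 0.01)
print(S)                                    # 1060
assert math.exp(-2 * 0.05 ** 2 * S) <= 0.01 / 2
```

Note that S depends only on ε2 and δ, not on N, which is what makes the final space bound independent of N.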

75 Values of ε1, ε2 When ε1 is close to ε (ε2 close to 0), the number of samples required becomes very large. When ε1 is close to 0, the approximation guarantee required from the deterministic algorithm becomes very tight. In either case the memory requirement is high.

76 Values of ε1, ε2 (Cont'd) We need to optimize ε1, ε2 to reduce the memory usage. The theoretical complexity can be determined by setting ε1 = ε2 = ε/2. Then S becomes O((1/ε²)*log(1/δ)).

77 Values of ε1, ε2 (Cont'd) The space required to run the new algorithm on the samples (with guarantee ε1 = ε/2) is O((1/ε)*log²(ε*S)), which is independent of N.

78 Multiple Quantiles We want p different quantiles, each with error bound ε & confidence 1-δ. Let ε = ε1 + ε2 & let δ' = δ/p. We choose S samples (computed with δ' in place of δ) & feed them all to the deterministic algorithm, which is ε1-approximate. Read the p quantiles from the output buffers.

79 Multiple Quantiles (Cont'd) All quantiles are guaranteed, with probability ≥ 1-δ, to be ε-approximate. Using Lemma 7 & substituting δ with δ' = δ/p, we compute the number of samples. The probability that a given quantile is not ε-approximate is at most δ' = δ/p, so the probability that any quantile isn't ε-approximate is at most p*δ' = δ.

80 Pros & Cons (Pros) The randomized algorithm has a space complexity that is not a function of N. (Cons) When computing multiple quantiles the deterministic algorithm is unchanged; the randomized algorithm, however, requires a larger sample as the number of quantiles increases.

