Overcoming Limitations of Sampling for Aggregation Queries
Surajit Chaudhuri (Microsoft Research), Gautam Das (Microsoft Research), Mayur Datar (Stanford University)

Presentation transcript:

Overcoming Limitations of Sampling for Aggregation Queries
Surajit Chaudhuri (Microsoft Research), Gautam Das (Microsoft Research), Mayur Datar (Stanford University), Rajeev Motwani (Stanford University), Vivek Narasayya (Microsoft Research)
Presented by: Nishit Shah

Introduction
Why sampling? Why uniform random sampling?
Factors that cause uniform random samples to introduce significant error:
 Skewed databases, i.e., those characterized by the presence of outlier values
 Low selectivity of (aggregate) queries
How do we overcome these problems?

Sampling Techniques
Offline sampling
 Precomputes the sample
 Materializes it in the database
Online sampling
 Computes the sample during query execution
 Returns an answer set with an error estimate

Pre-computing Samples

Techniques suggested in the paper
Overcoming the data skew problem
 Isolate the values in the dataset that contribute heavily to error in sampling
Overcoming the problem of queries with low selectivity
 Exploit workload information to overcome limitations in answering queries with low selectivity

Key contributions of the paper
The paper suggests a new technique
 A combination of outlier indexing and weighted sampling yields significant error reduction compared to uniform random sampling
Experimental evaluation of the proposed techniques, based on an implementation on Microsoft SQL Server
 Graphs showing that the new techniques perform better than the other techniques

Example of Data Skew and its adverse effect
Suppose N = 10,000 tuples, of which 99% have a value of 1 and the remaining 1% have a value of 1000. The sum of this table is 9,900 + 100,000 = 109,900.
Suppose we take a uniform random sample of size 1%, so n = 100.
It is possible that all 100 tuples in the sample have value 1, giving an estimated sum of 100 * 100 = 10,000, which is far less than the correct answer.
If we get 99 tuples with value 1 and 1 tuple with value 1000, the estimate is 1,099 * 100 = 109,900. Similarly, for 98 tuples of 1 and 2 tuples of 1000, the estimate is 2,098 * 100 = 209,800.
This shows that sampling a value of 1000 more than once produces a huge error, and not sampling any 1000 value also produces a huge error. Yet the probability of getting exactly one value of 1000 in the sample is a mere 0.37, so the probability of an erroneous result is 0.63.
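The arithmetic above can be checked with a quick simulation (a sketch; the 99%/1% data and the 1% sample size are taken directly from the slide's example):

```python
import random

# Recreate the slide's example: 10,000 tuples, 99% with value 1 and
# 1% with value 1000 (true sum = 9,900 + 100 * 1000 = 109,900).
random.seed(0)
data = [1] * 9900 + [1000] * 100
true_sum = sum(data)  # 109,900

n = 100                 # 1% uniform random sample
scale = len(data) / n   # multiplication factor N/n

estimates = []
for _ in range(1000):
    sample = random.sample(data, n)
    estimates.append(scale * sum(sample))

# Each extra outlier tuple in the sample shifts the estimate by
# 999 * 100 = 99,900, so the estimate is exact only when exactly one
# outlier is sampled.
exact = sum(1 for e in estimates if e == true_sum)
print(f"true sum = {true_sum}, fraction of exact estimates ~ {exact / 1000:.2f}")
```

Across the trials, roughly a third of the estimates hit the true sum, matching the slide's 0.37 probability of sampling exactly one outlier.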

Effect of Data Skew (Continued)
Tuples that deviate from the rest of the values in terms of their contribution to the aggregate are known as OUTLIERS.
The paper suggests a technique called Outlier Indexing to deal with outliers in the dataset.

Effect of Data Skew (Continued)
Theorem 1: For a relation R with values {y_1, y_2, …, y_N}, let U be a uniform random sample of size n. The actual sum is Y = Σ_{i=1..N} y_i, and the unbiased estimate of the sum is Y_e = (N/n) Σ_{y_i ∈ U} y_i, with standard error N·S/√n, where S is the standard deviation of the values in the relation, defined as S = √( (1/(N−1)) Σ_{i=1..N} (y_i − ȳ)² ).
This shows that if there are outliers in the data, S can be very large. In that case, for a given error bound, we will need to increase the sample size n.
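A small sketch of Theorem 1's quantities on an assumed skewed relation (the 99-to-1 value mix is an illustration, not data from the paper), showing how outliers inflate S and hence the error scale:

```python
import math
import random

# Assumed skewed relation: each value is 1 with probability 99/100,
# else 1000, for N = 10,000 tuples.
random.seed(1)
pool = [1] * 99 + [1000]
y = [random.choice(pool) for _ in range(10000)]
N, n = len(y), 100

sample = random.sample(y, n)
Y_e = (N / n) * sum(sample)  # unbiased estimate: (N/n) * sample sum

mean = sum(y) / N
S = math.sqrt(sum((v - mean) ** 2 for v in y) / (N - 1))  # std dev of the relation
std_error = N * S / math.sqrt(n)  # Theorem 1's error scale (ignoring the
                                  # finite-population correction)

print(f"estimate = {Y_e:.0f}, true sum = {sum(y)}, standard error ~ {std_error:.0f}")
```

Because the few 1000-valued tuples drive S near 100, the standard error is on the order of the true sum itself, which is exactly the problem the outlier index targets.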

Effect of Low Selectivity and Small Groups
What happens if the selectivity of a query is low? It adversely impacts the accuracy of sampling-based estimation.
A selection query partitions the relation into two sub-relations:
 Tuples that satisfy the condition of the select query (the relevant sub-relation)
 Tuples that do not satisfy the condition
When we sample uniformly, the number of tuples sampled from the relevant sub-relation is proportional to its size. Hence, if the relevant sample size is low, it can lead to huge errors.
For uniform random sampling to perform well, the relevant sub-relation should be large in size, which is not the case in general.

Handling Data Skew: Outlier Indexes
Outliers/deviants in the data contribute to large variance, and by Theorem 1 the error grows with the standard deviation S.
Hence we identify tuples with outlier values and store them in a separate sub-relation. The proposed technique then yields a very accurate estimate of the aggregate.
In the outlier-indexing method, a given relation R is partitioned into R_O (outliers) and R_NO (non-outliers). A query Q can then be run as the union of two sub-queries, one on R_O and another on R_NO.

Handling Data Skew: Outlier Indexes (Example)
Preprocessing steps
 Determine outliers – specify the sub-relation R_O of R to be the set of outliers
 Sample non-outliers – select a uniform random sample T of the relation R_NO
Query processing steps
 Aggregate outliers – apply the query to the outliers in R_O
 Aggregate non-outliers – apply the query to the sample T and extrapolate to obtain an estimate of the query result for R_NO
 Combine aggregates – combine the approximate result for R_NO with the exact result for R_O to obtain an approximate result for R
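The steps above can be sketched as follows. The value-based threshold used to pick outliers here is an assumption for illustration only; the paper's actual selection method is given later by Theorem 2.

```python
import random

# Sketch of outlier indexing on the earlier skewed example:
# partition R into outliers R_O and non-outliers R_NO, sample only R_NO,
# then combine the exact outlier aggregate with the extrapolated sample.
random.seed(2)
R = [1] * 9900 + [1000] * 100

threshold = 100  # assumed cutoff: values above this are treated as outliers
R_O = [v for v in R if v > threshold]    # stored exactly in the outlier index
R_NO = [v for v in R if v <= threshold]  # sampled uniformly

n = 100
T = random.sample(R_NO, n)

exact_outlier_sum = sum(R_O)                   # aggregate outliers exactly
estimated_rest = (len(R_NO) / n) * sum(T)      # extrapolate the sample to R_NO
estimate = exact_outlier_sum + estimated_rest  # combine the two aggregates

print(f"estimate = {estimate:.0f}, true sum = {sum(R)}")
```

With the outliers removed, R_NO has zero variance, so the combined estimate hits the true sum of 109,900 exactly; in general the residual error comes only from the (now small) variance of R_NO.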

Selection of Outliers
With outlier indexing, the query error comes solely from the error in estimating the non-outlier aggregate. However, there is additional overhead for maintaining and accessing the outlier index.
The goal is to find an optimal sub-relation R_O that leads to the minimum possible sampling error.

Theorem 2
Consider a multiset R = {y_1, y_2, …, y_N} in sorted order. Let R_O ⊆ R with |R_O| ≤ τ be such that S(R \ R_O) = min over all R′ ⊆ R with |R′| ≤ τ of S(R \ R′). Then there exists some 0 ≤ τ′ ≤ τ such that R_O = {y_i | 1 ≤ i ≤ τ′} ∪ {y_i | N + τ′ + 1 − τ ≤ i ≤ N}.
In words: the subset that minimizes the standard deviation of the remaining set consists of the leftmost τ′ elements and the rightmost τ − τ′ elements of the multiset R when its elements are arranged in sorted order.

Algorithm for Outlier Index (R, C, τ)
Algorithm Outlier-Index(R, C, τ):
 Sort relation R on the values in column C
 For i = 1 to τ + 1, compute E(i) = S({y_i, y_{i+1}, …, y_{N−τ+i−1}})
 Let i′ be the value of i for which E(i) is minimum, and set τ′ = i′ − 1
 The outlier index consists of the tuples corresponding to the set of values {y_j | 1 ≤ j ≤ τ′} ∪ {y_j | N + τ′ + 1 − τ ≤ j ≤ N}, i.e., the lowest τ′ values and the highest τ − τ′ values
Using Theorem 1, it is possible to give a probabilistic guarantee on the standard error of the estimated answer.
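A minimal Python sketch of the algorithm above, using 0-based indexing (so the loop variable i directly plays the role of τ′ = i′ − 1):

```python
import statistics

def outlier_index(values, tau):
    """Select the optimal outlier set of size tau.

    By Theorem 2, the optimal set takes tau' values from the low end and
    tau - tau' from the high end of the sorted data, so we slide a window
    of size N - tau over the sorted values and keep the window whose
    standard deviation is minimum; everything outside it is the outlier set.
    """
    y = sorted(values)
    N = len(y)
    best_i, best_E = 0, float("inf")
    for i in range(tau + 1):                    # i plays the role of tau'
        E = statistics.stdev(y[i:N - tau + i])  # S of the remaining window
        if E < best_E:
            best_i, best_E = i, E
    tau_prime = best_i
    # Outliers: the lowest tau' values and the highest tau - tau' values.
    return y[:tau_prime] + y[N - (tau - tau_prime):]

# The heavy values are picked as outliers:
print(outlier_index([1] * 98 + [1000, 2000], tau=2))  # → [1000, 2000]
```

Note that outliers may sit at either end: for data with one very low and one very high value, the algorithm returns one element from each end of the sorted order.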

Storage Allocation for Outlier Indexing
Given sufficient space to store m tuples, how do we allocate storage between the sample and the outlier index so as to get minimum error?
Let S(t) denote the standard deviation of the non-outliers for the optimal outlier index of size t. From Theorem 1, with t tuples in the outlier index and m − t tuples in the sample, the error is proportional to S(t) / √(m − t), so we choose t to minimize this quantity.
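A brute-force sketch of this allocation (function names are assumptions; a real implementation would exploit the sliding-window structure from Theorem 2 rather than recomputing S(t) from scratch for every t):

```python
import statistics

def S_of_t(values, t):
    """S(t): std dev of the remaining values after removing the optimal
    size-t outlier set (a leftmost/rightmost split, per Theorem 2)."""
    y = sorted(values)
    N = len(y)
    return min(statistics.stdev(y[i:N - t + i]) for i in range(t + 1))

def best_split(values, m):
    """Choose t minimizing S(t) / sqrt(m - t) over the storage budget m."""
    # m - t must leave at least a 2-element sample for stdev to exist.
    return min(range(m - 1), key=lambda t: S_of_t(values, t) / (m - t) ** 0.5)

data = [1] * 990 + [1000] * 10
t = best_split(data, m=50)
print(f"allocate {t} tuples to the outlier index, {50 - t} to the sample")
```

On this data the error term drops to zero once all ten outliers fit in the index, so the search settles on t = 10 and leaves the remaining 40 slots to the sample.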

Handling Low Selectivity and Small Groups
In this case, we want to use weighted sampling: sample more from subsets of the data that are small in size but important (i.e., heavily used).
We select a representative workload (i.e., a set of queries) and tune the sample so that the queries posed to the database are answered more accurately.
The technique mentioned in the paper is for precomputed samples only; research is ongoing on applying this solution to online sampling.

Exploiting Workload Information
Workload collection: obtaining a workload consisting of representative queries posed to the database. Tools such as the Profiler component of Microsoft SQL Server allow logging of the queries posed to the database.
Tracing query patterns: analyzing the workload to obtain parsed information.
Tracing tuple usage: tracing how many times a tuple was accessed, how many times it satisfied a query's condition, how many times it did not, etc.
Weighted sampling: sampling that takes the tuples' weights into account.

Weighted Sampling: A Deeper Insight
Each tuple is assigned a weight based on its traced usage.
When we draw a sample, tuple i is included in the sample with probability p_i = n · w_i′, where w_i′ is the tuple's normalized weight.
The inverse of p_i is the multiplication factor: each aggregate computed over this tuple is multiplied by 1/p_i to answer the query. In uniform random sampling every tuple has equal probability, so the multiplication factor is N/n.
This method works well only if the workload is a good representation of the queries actually posed in the future, and if the access pattern of the queries is local in nature.
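A sketch of the weighted-sampling estimator described above, with assumed tuple values and usage counts; the inverse-probability (1/p_i) scaling is what keeps the sum estimate unbiased:

```python
import random

# Assumed setup: tuples touched often by the workload get higher normalized
# weights w_i', are included with probability p_i = n * w_i', and each
# sampled value is scaled by the multiplication factor 1/p_i.
random.seed(3)

values = [10, 20, 30, 40]        # tuple values (assumed)
usage = [100, 1, 1, 100]         # assumed workload usage counts
total = sum(usage)
w = [u / total for u in usage]   # normalized weights w_i'
n = 2                            # expected sample size

estimates = []
for _ in range(20000):
    est = 0.0
    for v, wi in zip(values, w):
        p = min(1.0, n * wi)     # inclusion probability p_i = n * w_i'
        if random.random() < p:
            est += v / p         # scale by the multiplication factor 1/p_i
    estimates.append(est)

mean_estimate = sum(estimates) / len(estimates)
print(f"mean estimate ~ {mean_estimate:.1f}, true sum = {sum(values)}")
```

The heavily used tuples are included almost every time with a scale factor near 1, while the rarely used tuples contribute large scaled values on rare occasions; on average the estimate still equals the true sum, and queries touching the high-usage tuples see low variance.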

Implementation and Experimental Setup
Databases: the database used for experimentation is the TPC-R benchmark database. The data generator was modified to produce varying degrees of skew; the modified program generates data following a Zipfian distribution.
Parameters:
 Skew of the data (z), varied over 1, 1.5, 2, 2.5, and 3
 Sampling fraction (f), varied from 1% to 100%
 Storage for the outlier index, varied over 1%, 5%, 10%, and 20%

Implementation and Experimental Setup (2)
Three options were compared:
 1) Uniform sampling
 2) Weighted sampling
 3) Weighted sampling + outlier indexing
The results depicted here are those for which the storage allotted to the outlier index was only 10% of the size of the data set.

Experimental Results

Experimental Results (2)

Experimental Results (3)

Conclusion
Skew in the data can lead to considerably large errors; outlier indexing addresses this problem at a small additional overhead. The remaining challenge lies in creating a single outlier index that works for any query on the database.
Low selectivity of queries is addressed by workload-based weighted sampling.
Some important references:
 The TPC-R benchmark – for more info on the TPC-R standard database.