
1 1 Massive Data Sets: Theory & Practice Ziv Bar-Yossef IBM Almaden Research Center

2 2 What are Massive Data Sets?
Technology: the World-Wide Web, IP packet flows, phone call logs
Science: genomic data, astronomical sky surveys, weather data
Business: credit card transactions, billing records, supermarket sales
Scale: gigabytes to terabytes to petabytes
Characteristics: huge, distributed, dynamic, heterogeneous, noisy, unstructured / semi-structured

3 3 Nontraditional Challenges
Traditionally: cope with the complexity of the problem.
Massive data sets: cope with the complexity of the data.
New challenges:
How to efficiently compute on massive data sets?
–Restricted access to the data
–Not enough time to read the whole data
–Only a tiny fraction of the data can be held in main memory
How to find desired information in the data?
How to summarize the data?
How to clean the data?

4 4 Computational Models for Massive Data Sets
Sampling: query a small number of data elements.
Data streams: stream through the data with limited main memory storage.
Sketching: compress data chunks into small “sketches”; compute over the sketches.

5 5 Outline of the Talk
Web statistics
Sampling lower bounds
Hamming distance sketching
Template detection
(Topics range from “theory” to “practice”.)

6 6 Web Statistics (with A. Berg, S. Chien, J. Fakcharoenphol, D. Weitz, VLDB 2000)
(Figure: the “BowTie” structure of the web [Broder et al, 2000], with the crawlable web highlighted.)
How large is the web?
What fraction of the web is covered by Google?
Which is the largest country domain on the web?
What is the percentage of French language pages?

7 7 Our Approach
Straightforward solution:
–Crawl the crawlable web
–Generate statistics based on the crawl
Drawbacks:
–Expensive
–Complicated implementation
–Slow
–Inaccurate
Our approach: uniform sampling by random walks
–Random walk on an undirected & regular version of the crawlable web
Advantages:
–Provably uniform samples from the crawlable web
–Runs on a desktop PC in a couple of days

8 8 Undirected Regular Random Walk
Follow a random out-link or a random in-link at each step.
Use weighted self loops to even out page degrees: w(v) = deg_max - deg(v).
Fact: A random walk on a connected (non-bipartite) undirected regular graph converges to a uniform limit distribution.
(Figure: example graph with per-node self-loop weights.)
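A minimal Python sketch of this walk, assuming a `neighbors` map that already combines each page's out-links and in-links (in the real system these come from the crawl and from reverse-link services):

```python
import random

def uniform_walk(neighbors, start, steps, deg_max):
    """Random walk with weighted self loops: every node behaves as if it
    had degree deg_max, so the limit distribution is uniform."""
    v = start
    for _ in range(steps):
        deg = len(neighbors[v])
        # With probability w(v)/deg_max = (deg_max - deg)/deg_max,
        # take a (free) self-loop step and stay put.
        if random.random() < (deg_max - deg) / deg_max:
            continue
        # Otherwise follow a uniformly random undirected edge.
        v = random.choice(neighbors[v])
    return v

# Toy usage on a tiny undirected, non-bipartite graph:
g = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}
sample = uniform_walk(g, start=1, steps=10000, deg_max=3)
```

Each real edge is crossed with probability (deg/deg_max)·(1/deg) = 1/deg_max, which is exactly the regular-graph transition rule the slide describes.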

9 9 Convergence Rate (“Mixing Time”)
Theorem: mixing time ≈ log(N)/α (N = graph size, α = the transition matrix’s spectral gap).
Experiment (based on a crawl): for the web, α ≈ 10^-5, giving a mixing time of 3.3 million steps.
But self loop steps are free, and 29,999 out of every 30,000 steps are self loop steps,
so the actual mixing time is only 3.3×10^6 / 30,000 ≈ 110 (non-self-loop) steps.

10 10 Realization of the Random Walk
Problems:
–The in-links of pages are not readily available
–The degrees of pages are not available
Available sources of in-links:
–Previously visited nodes
–Reverse link services of search engines
Experiments indicate the samples are still nearly uniform.

11 11 Top 20 Internet Domains (summer 2003)

12 12 Search Engine Coverage (summer 2000)

13 13 Subsequent Extensions
Focused Sampling (with T. Kanungo and R. Krauthgamer, 2003)
–“Focused statistics” about web communities: statistics about the .uk domain, about pages on bicycling, about Arabic language pages
–Based on a sophisticated extension of the above random walk
Study of the web’s decay (with A. Broder, R. Kumar, and A. Tomkins, 2003)
–A measure of how well-maintained web pages are
–Based on a random walk idea

14 14 Sampling Lower Bounds (STOC 2003)
Q1. How many samples are needed to estimate:
–the fraction of pages covered by Google?
–the number of distinct web sites?
–the distribution of languages on the web?
A1. A “recipe” for obtaining sampling lower bounds for symmetric functions.
Q2. Can we save samples by sampling non-uniformly?
A2. For “symmetric” functions, uniform sampling is the best possible.
(“Symmetric” = invariant under permutations of the data elements.)

15 15 Optimality of Uniform Sampling (with R. Kumar and D. Sivakumar, STOC 2001)
Theorem: When estimating symmetric functions, uniform sampling is the best possible.
Proof idea: simulate the original algorithm on a randomly permuted input x → π(x); the algorithm’s fixed query positions (e.g., x_2, x_7, x_5 in the figure) then receive uniform samples, while symmetry keeps the function value unchanged.

16 16 Preliminaries
f: A^n → B: a symmetric function; ε: an approximation parameter.
The “sample distribution” λ_x of an input x is the distribution of a uniformly random element of x. Example: for x = 1 1 1 2 2 3 (n = 6), λ_x(1) = 1/2, λ_x(2) = 1/3, λ_x(3) = 1/6.
“Pairwise disjoint” inputs a, b, c: inputs whose values f(a), f(b), f(c) are pairwise far apart in B, so any ε-approximation of f distinguishes them.

17 17 The Lower Bound Recipe
x_1,…,x_m: “pairwise disjoint” inputs; λ_1,…,λ_m: their “sample distributions”.
Theorem: Any algorithm approximating f requires q = Ω(log m / JS(λ_1,…,λ_m)) samples.
Proof steps:
–Reduction from statistical classification
–Lower bound for statistical classification (note 0 ≤ JS(λ_1,…,λ_m) ≤ log m)
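To make the recipe concrete, here is a small Python sketch of the quantity it uses: the generalized Jensen-Shannon divergence of the m sample distributions (uniform weights over the distributions are assumed). Plugging in near-identical distributions shows how the bound log(m)/JS blows up:

```python
import math

def entropy(p):
    """Shannon entropy in nats; ignores zero-probability entries."""
    return -sum(x * math.log(x) for x in p if x > 0)

def js_divergence(dists):
    """Generalized JS divergence: H(mean of dists) - mean of H(dist).
    Always lies in [0, log m] for m distributions."""
    m = len(dists)
    mean = [sum(d[i] for d in dists) / m for i in range(len(dists[0]))]
    return entropy(mean) - sum(entropy(d) for d in dists) / m

# Two nearly identical sample distributions force many samples:
l1 = [0.51, 0.49]
l2 = [0.49, 0.51]
print(math.log(2) / js_divergence([l1, l2]))  # lower-bound scale log m / JS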

18 18 Reduction from Statistical Classification
Statistical classification: given uniform samples from x ∈ {a, b, c}, decide whether x = a, x = b, or x = c.
For a symmetric function f: A^n → B and pairwise disjoint inputs a, b, c, classification can be solved by any sampling algorithm approximating f.
(Figure: B with the well-separated values f(a), f(b), f(c).)

19 19 The “Election Problem”
Input: a sequence x of n votes to k parties.
(Figure: a vote distribution λ_x with shares 7/18, 4/18, 3/18, 2/18, 1/18, … for n = 18, k = 6.)
Goal: an estimate λ̂ such that ||λ̂ - λ_x|| < ε.
Theorem: A poll of size Ω(k/ε²) is required for the election problem.
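A minimal poll sketch in Python illustrating the statement. The vote split below is illustrative: the slide’s listed shares 7/18, 4/18, 3/18, 2/18, 1/18 are completed with an assumed sixth share of 1/18 so the fractions sum to 1.

```python
import random
from collections import Counter

def poll(votes, q):
    """Sample q votes uniformly with replacement and return the
    empirical party distribution (parties never sampled are omitted).
    The theorem says q must be on the order of k / eps^2 for the
    L1 error to drop below eps."""
    sample = [random.choice(votes) for _ in range(q)]
    counts = Counter(sample)
    return {party: counts[party] / q for party in counts}

votes = [1]*7 + [2]*4 + [3]*3 + [4]*2 + [5]*1 + [6]*1   # n = 18, k = 6
estimate = poll(votes, q=1000)
```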

20 20 Combinatorial Designs
A family of subsets B_1,…,B_m of a universe U such that:
1. Each of them constitutes half of U.
2. The intersection of each two of them is relatively small.
Fact: there exist designs of size exponential in |U|.
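The existence fact has a standard probabilistic flavor, which this small Python sketch illustrates: random half-size subsets have pairwise intersections near |U|/4 with high probability. The intersection threshold below is an illustrative choice, not a parameter from the talk.

```python
import itertools
import random

def random_design(universe_size, m, slack=0.1):
    """Draw m random half-size subsets of U and accept them as a design
    if every pairwise intersection stays below (1/4 + slack)|U|."""
    u = list(range(universe_size))
    sets = [frozenset(random.sample(u, universe_size // 2)) for _ in range(m)]
    bound = universe_size * (0.25 + slack)
    ok = all(len(a & b) <= bound for a, b in itertools.combinations(sets, 2))
    return sets if ok else None  # retry on the (rare) failure

design = random_design(universe_size=100, m=50)
```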

21 21 Proof of the Lower Bound for the Election Problem
Step 1: identification of a set S of pairwise disjoint inputs.
–B_1,…,B_m ⊆ {1,…,k}: a design of size m = 2^Ω(k).
–S = { x_1,…,x_m }, where in x_i: ½ + ε of the votes are split among parties in B_i, and ½ - ε of the votes are split among parties in B_i^c.
Step 2: JS(λ_1,…,λ_m) = O(ε²).
By our theorem, the number of queries is at least Ω(log m / JS) = Ω(k/ε²).

22 22 Hamming Distance Sketching (with T.S. Jayram and R. Kumar, 2003)
Alice holds x and sends the referee a short sketch σ(x); Bob holds y and sends σ(y).
From the sketches alone, the referee must decide whether Ham(x,y) ≤ k or Ham(x,y) > k.

23 23 Hamming Distance Sketching
Applications: maintenance of large crawls; comparison of large files over the network.
Previous schemes: sketches of size O(k²) [Kushilevitz, Ostrovsky, Rabani 98], [Yao 03].
Lower bound: Ω(k).
Our scheme: sketches of size O(k log k).

24 24 Preliminaries
Balls and bins:
–When throwing n balls into n/log n bins, with high probability the fullest bin has O(log n) balls.
–When throwing n balls into n² bins, with high probability no two balls fall into the same bin.
Using the KOR scheme, we can assume Ham(x,y) ≤ 2k.
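A quick Python simulation of the first balls-and-bins fact, as a sanity check (the size n below is an arbitrary toy choice):

```python
import math
import random
from collections import Counter

def max_load(n):
    """Throw n balls into n/log n bins and report the fullest bin's load."""
    bins = max(1, round(n / math.log(n)))
    counts = Counter(random.randrange(bins) for _ in range(n))
    return max(counts.values())

n = 1 << 16
print(max_load(n), math.log(n))  # max load stays within O(log n)
```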

25 25 First Level Hashing
Hash the coordinates of x and y into k/log k bins, obtaining blocks x_1,…,x_{k/log k} and y_1,…,y_{k/log k}.
Then Ham(x,y) = Σ_i Ham(x_i,y_i), and by the balls-and-bins fact, with high probability ∀i, Ham(x_i,y_i) = O(log k).
(Figure: the bits of x and y distributed among the bins.)

26 26 Second Level Hashing
Within block i, hash the positions into log² k bins and record one parity bit per bin: σ_{i,j} = XOR of the bits of x_i hashed to bin j (and τ_{i,j} likewise for y_i).
σ_{i,j} = τ_{i,j} iff the number of differing positions (“pink positions” in the figure) in the j-th bin is even.
If no two differing positions collide, Ham(σ_i, τ_i) = Ham(x_i, y_i); with collisions, Ham(σ_i, τ_i) ≤ Ham(x_i, y_i).
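A Python sketch of one second-level hash (toy sizes; the shared random position hash below stands in for the scheme’s hash function):

```python
import random

def parity_sketch(bits, num_bins, position_hash):
    """One sketch bit per bin: the XOR (parity) of the block's bits
    hashed to that bin. Sketch bits of x and y differ exactly in bins
    that received an odd number of differing positions, so the sketch
    Hamming distance never exceeds the true block distance."""
    sketch = [0] * num_bins
    for pos, b in enumerate(bits):
        sketch[position_hash[pos]] ^= b
    return sketch

n, num_bins = 64, 36                                 # num_bins ~ log^2 k
h = [random.randrange(num_bins) for _ in range(n)]   # shared random hash
x = [random.randint(0, 1) for _ in range(n)]
y = list(x); y[3] ^= 1; y[17] ^= 1                   # Ham(x, y) = 2
sx = parity_sketch(x, num_bins, h)
sy = parity_sketch(y, num_bins, h)
print(sum(a != b for a, b in zip(sx, sy)))           # <= 2, usually exactly 2
```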

27 27 The Sketch
For each i = 1,…,k/log k, repeat the second level hashing t = O(log k) times, obtaining (σ_i^1, τ_i^1),…,(σ_i^t, τ_i^t).
(The probability of a collision in any single repetition is a small constant.)
With probability at least 1 - 1/k, Ham(x_i, y_i) = max_j Ham(σ_i^j, τ_i^j).
σ(x) = { σ_i^j | i = 1,…,k/log k, j = 1,…,t }; τ(y) = { τ_i^j | i = 1,…,k/log k, j = 1,…,t }.
The referee decides Ham(x,y) ≤ k if and only if Σ_i max_j Ham(σ_i^j, τ_i^j) ≤ k.

28 28 Other Sketching Results
A sketching scheme for the edit distance
–Leads to the first almost-linear time approximation algorithm for the edit distance.
Sketch lower bounds for (compressed) pattern matching.

29 29 Template Detection (with S. Rajagopalan, WWW 2002)
Template: a master HTML shell page used for composing new pages.
Our contributions:
–An efficient algorithm for template detection
–An application to improving search engine precision

30 30 Templates are Bad for Web IR
Templates are a significant source of “noise” in web pages:
–Their content is not related to the topics of the pages in which they reside
–They create spurious linkage to unimportant pages
They are also extremely common, having become standard in website design.

31 31 Pagelets [Chakrabarti 01]
Pagelet: a region in a page that
–has a single theme, and
–is not nested within a bigger region with the same theme.
Examples: navigation bar, search box, directory, and news headlines pagelets.

32 32 Template Definition
Template: a collection of pagelets that
1. belong to the same website, and
2. are nearly identical.

33 33 Template Detection
Template Detection Problem: given a set of pages S, find all the templates in S.
Template Detection Algorithm:
Group the pages in S according to website. For each website w:
–For each page p ∈ w: partition p into pagelets p_1,…,p_k and compute a “shingle” sketch for each pagelet [Broder et al. 1997]
–Group the resulting pagelets by their sketches
–Output all the pagelet groups of size > 1
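A minimal Python sketch of this grouping loop; `site_of`, `pagelets_of`, and `shingle_sketch` are assumed helpers (site grouping, page partitioning, and a Broder-style shingle fingerprint), not parts of the published algorithm’s code:

```python
from collections import defaultdict

def detect_templates(pages, site_of, pagelets_of, shingle_sketch):
    """Group pages by website, group each site's pagelets by their
    shingle sketch, and report every group of near-identical pagelets
    of size > 1 as a template."""
    by_site = defaultdict(list)
    for p in pages:
        by_site[site_of(p)].append(p)

    templates = []
    for site, site_pages in by_site.items():
        groups = defaultdict(list)   # sketch value -> matching pagelets
        for p in site_pages:
            for pagelet in pagelets_of(p):
                groups[shingle_sketch(pagelet)].append(pagelet)
        templates.extend(g for g in groups.values() if len(g) > 1)
    return templates
```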

34 34 HITS & Clever [Kleinberg 1997, Chakrabarti et al. 1998]
Hub scores h and authority scores a are mutually reinforcing:
h(p) = Σ_{q ∈ out(p)} a(q)
a(p) = Σ_{q ∈ in(p)} h(q)
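A minimal power-iteration sketch of these update rules in Python, over an assumed adjacency dict `out_links` mapping each page to the pages it links to:

```python
import math

def hits(out_links, iterations=50):
    """Iterate a(p) = sum of h(q) over in-links and h(p) = sum of a(q)
    over out-links, L2-normalizing after each round."""
    pages = set(out_links) | {q for qs in out_links.values() for q in qs}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        auth = {p: 0.0 for p in pages}
        for q, outs in out_links.items():     # q's hub score flows to
            for p in outs:                    # the authorities it cites
                auth[p] += hub[q]
        hub = {p: sum(auth[q] for q in out_links.get(p, ())) for p in pages}
        for vec in (hub, auth):
            norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
            for p in vec:
                vec[p] /= norm
    return hub, auth

h, a = hits({'p1': ['p2', 'p3'], 'p2': ['p3'], 'p4': ['p3']})
```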

35 35 “Template” Clever
Hubs: all the non-templatized constituent pagelets of pages in the base set.
Authorities: all pages in the base set.
(Figure legend: page, pagelet, templatized pagelet.)

36 36 Classical Clever vs. Template Clever

37 37 Template Proliferation

38 38 Summary
Web data mining via random walks on the web graph:
–Web statistics
–Focused statistics
–Web decay
Sampling lower bounds:
–Optimality of uniform sampling for symmetric functions
–A “recipe” for lower bounds
Sketching of string distance measures:
–Hamming distance
–Edit distance
Template detection

39 39 Some of My Other Work
Database:
–Semi-structured data and XML
Computational complexity:
–Communication complexity
–Pseudo-randomness and de-randomization
–Space-bounded computations
–Parallel computation complexity
Algorithm design:
–Data stream algorithms
–Internet auctions


41 41 Web Statistics (with A. Berg, S. Chien, J. Fakcharoenphol, D. Weitz, VLDB 2000)
(Figure: the “BowTie” structure of the web [Broder et al, 2000], with its SCC, IN, and OUT components and the crawlable web highlighted.)
How large is the web?
What fraction of the web is covered by Google?
Which is the largest country domain on the web?
What is the percentage of porn pages?

42 42 Straightforward Random Walk
Follow a random out-link at each step.
Problems:
–Gets stuck in sinks and in dense web communities
–Biased towards popular pages
–Converges slowly, if at all
(Figure: an example walk over pages such as yahoo.com, amazon.com, and www.almaden.ibm.com/cs/people/ziv.)

43 43 Undirected Regular Random Walk
Follow a random out-link or a random in-link at each step.
Use weighted self loops to even out page degrees: w(v) = deg_max - deg(v).
Fact: A random walk on a connected (non-bipartite) undirected regular graph converges to a uniform limit distribution.
(Figure: example graph over pages such as yahoo.com and amazon.com, with per-node self-loop weights.)

44 44 Evaluation: Bias towards High Degree Nodes
(Chart: percent of nodes from the walk across deciles of nodes ordered by degree, from high degree to low degree.)

45 45 Evaluation: Bias towards the Search Engines
(Chart: estimate of search engine size versus actual search engine size, with marks at 30% and 50%.)

46 46 Link-Based Web IR Applications
Search and ranking:
–HITS and Clever [Kleinberg 1997, Chakrabarti et al. 1998]
–PageRank [Brin and Page 1998]
–SALSA [Lempel and Moran 2000]
Similarity search:
–Co-Citation [Dean and Henzinger 1999]
Categorization:
–Hyperclass [Chakrabarti, Dom, Indyk 1998]
Focused crawling:
–FOCUS [Chakrabarti, van der Berg, Dom 1999]
…

47 47 Hypertext IR Principles
Underlying principles of link analysis:
Relevant Linkage Principle [Kleinberg 1997]
–p links to q ⇒ q is relevant to p
Topical Unity Principle [Kessler 1963, Small 1973]
–q_1 and q_2 are co-cited in p ⇒ q_1 and q_2 are related to each other
Lexical Affinity Principle [Maarek et al. 1991]
–The closer the links to q_1 and q_2 are, the stronger the relation between them

48 48 Example: HITS & Clever [Kleinberg 1997, Chakrabarti et al. 1998]
Relevant Linkage Principle: all links propagate score from hubs to authorities and vice versa.
Topical Unity Principle: co-cited authorities propagate score to each other.
Lexical Affinity Principle (Clever): text around links is used to weight the relevance of the links.
h(p) = Σ_{q ∈ out(p)} a(q)   a(p) = Σ_{q ∈ in(p)} h(q)

