1 USING TREES TO DEPICT A FOREST Bin Liu, H. V. Jagadish EECS, University of Michigan, Ann Arbor Reading Assignment Presentation Courtesy of the 35th International Conference on Very Large Data Bases (VLDB), 2009

2 Motivation – Too Many Results In interactive database querying, we often get more results than we can comprehend immediately. Try searching for a popular keyword: when do you actually click past 2-3 pages of results? – 85% of users never go to the second page [1,2]

3 Why IR Solutions Do NOT Apply Sorting and ranking are standard IR techniques – Search engines show the most relevant hits on the first page However, for a database query, all tuples in the result set are equally relevant – For example, SELECT * FROM Cars WHERE price < 13,000 – All matching results should be available to the user – What to do when there are millions of results?

4 Make the First Page Count If no user preference information is available, how do we best arrange the results? – Sort by an attribute? – Random selection? – Others? Show the most “representative” results – They best help users learn what is in the result set – Users can decide further actions based on the representatives

5 Our Proposal – MusiqLens Experience

6 Suppose a user wants a 2005 Civic, but there are too many of them…

7 MusiqLens on the Car Data
ID  | MODEL | PRICE   | YEAR | MILEAGE | CONDITION
872 | Civic | $12,000 | 2005 | 50,000  | Good       (122 more like this)
901 | Civic | $16,000 | 2005 | 40,000  | Excellent  (345 more like this)
725 | Civic | $18,500 | 2005 | 30,000  | Excellent  (86 more like this)
423 | Civic | $17,000 | 2005 | 42,000  | Good       (201 more like this)
132 | Civic | $9,500  | 2005 | 86,000  | Fair       (185 more like this)
322 | Civic | $14,000 | 2005 | 73,000  | Good       (55 more like this)

8 MusiqLens on the Car Data (the table is the same as on slide 7)

9 Zooming in: 2005 Honda Civics ~ ID 132
ID  | MODEL | PRICE   | YEAR | MILEAGE | CONDITION
342 | Civic | $9,800  | 2005 | 72,000  | Good  (25 more like this)
768 | Civic | $10,000 | 2005 | 60,000  | Good  (10 more like this)
132 | Civic | $9,500  | 2005 | 86,000  | Fair  (63 more like this)
122 | Civic | $9,500  | 2005 | 76,000  | Good  (5 more like this)
123 | Civic | $9,100  | 2005 | 81,000  | Fair  (40 more like this)
898 | Civic | $9,000  | 2005 | 69,000  | Fair  (42 more like this)

10 Now Suppose the User Filters by “Price < 9,500” (the table is the same as on slide 9, before the filter takes effect)

11 After Filtering by “Price < 9,500”
ID  | MODEL | PRICE   | YEAR | MILEAGE | CONDITION
123 | Civic | $9,100  | 2005 | 81,000  | Fair  (40 more like this)
898 | Civic | $9,000  | 2005 | 69,000  | Fair  (42 more like this)
133 | Civic | $9,300  | 2005 | 87,000  | Fair  (33 more like this)
126 | Civic | $9,200  | 2005 | 89,000  | Good  (3 more like this)
129 | Civic | $8,900  | 2005 | 81,000  | Fair  (20 more like this)
999 | Civic | $9,000  | 2005 | 87,000  | Fair  (12 more like this)

12 Challenges Metric challenge – What is the best set of representatives? Representative finding challenge – How to find them efficiently? Query challenge – How to efficiently adapt to the user’s query operations?

13 Finding a Suitable Metric Users should be the ultimate judge – Which metric generates the representatives that I can learn the most from? User study – Use a set of candidate metrics – Users observe the representatives – Users estimate more data points in the data set – The metric whose representatives lead to the best estimation wins

14 Metric Candidates Sort by attributes Uniform random sampling Density-biased sampling [3] Sort by typicality [4] K-medoids – Average – Maximum

15 Density-biased Sampling Proposed by C. R. Palmer and C. Faloutsos [3] Sample more from sparse regions, less from dense regions, to counter the weakness of uniform sampling, where small clusters are missed
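As a rough illustration of this idea only (a minimal sketch, not the actual algorithm of [3]; the grid size and the exponent are assumptions made for the example), one can bucket points into grid cells and weight each point inversely to its cell's population:

```python
import random
from collections import Counter

def density_biased_sample(points, sample_size, grid=10, alpha=0.5):
    """Toy density-biased sampling over 2D points normalized to [0, 1].

    Each point is weighted 1 / (population of its grid cell) ** alpha, so
    points in sparse cells are more likely to be picked; alpha = 0 reduces
    to uniform random sampling.
    """
    def cell(p):
        return (min(int(p[0] * grid), grid - 1), min(int(p[1] * grid), grid - 1))

    counts = Counter(cell(p) for p in points)
    weights = [1.0 / counts[cell(p)] ** alpha for p in points]
    # Weighted sampling without replacement via the u ** (1 / w) key trick.
    keyed = sorted(((random.random() ** (1.0 / w), i) for i, w in enumerate(weights)),
                   reverse=True)
    return [points[i] for _, i in keyed[:sample_size]]
```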

16 Sort by Typicality Proposed by Ming Hua, Jian Pei, et al. [4] Figure source: slides from Ming Hua

17 Metric Candidates – K-medoids A medoid of a cluster is the object whose average or maximum dissimilarity to the other objects is smallest – the average medoid and the max medoid, respectively K-medoids are k objects, each the medoid of its cluster Why not k-means? – K-means cluster centers need not exist in the database – We must present real objects to users
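To make the two variants concrete, here is a minimal in-memory sketch, assuming Euclidean distance over already-normalized attributes (the algorithms later in the talk avoid scanning all points by using the cover tree; this brute-force version is only for illustration):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def avg_medoid(cluster):
    """The object whose average distance to the other objects is smallest."""
    return min(cluster, key=lambda o: sum(euclidean(o, p) for p in cluster))

def max_medoid(cluster):
    """The object whose maximum distance to any other object is smallest."""
    return min(cluster, key=lambda o: max(euclidean(o, p) for p in cluster))

# Example: normalized (price, mileage) pairs for a small cluster of cars.
cluster = [(0.30, 0.86), (0.28, 0.81), (0.27, 0.69), (0.33, 0.72)]
print(avg_medoid(cluster), max_medoid(cluster))
```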

18 Plotting the Candidates Data: Yahoo! Autos, 3,922 data points; price and mileage normalized to [0, 1].

19 Plotting the Candidates – Typicality

20 Plotting the Candidates – K-medoids

21 User Study Procedure Users are given – 7 sets of data, generated using the 7 candidate methods – Each set consists of 8 representative points Users predict 4 more data points – that are most likely in the data set – and should not pick points already given Measure the prediction error

22 Prediction Quality Measurement For a data point S_o with distances D_1 and D_2 to the points P_1 and P_2 (D_1 being the smaller): MinDist: D_1 MaxDist: D_2 AvgDist: (D_1 + D_2) / 2
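A small sketch of how such errors could be aggregated over a whole prediction set. The slide does not spell out the exact pairing and averaging used in the study, so scoring each predicted point against the actual data points and averaging is an assumption made for illustration:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prediction_error(predicted, actual):
    """For each predicted point S_o, take its distances to the actual points
    and record the minimum, maximum, and average; the per-point scores are
    then averaged over all predictions (an assumed aggregation)."""
    mins, maxs, avgs = [], [], []
    for s in predicted:
        dists = [euclidean(s, p) for p in actual]
        mins.append(min(dists))
        maxs.append(max(dists))
        avgs.append(sum(dists) / len(dists))
    n = len(predicted)
    return {"MinDist": sum(mins) / n,
            "MaxDist": sum(maxs) / n,
            "AvgDist": sum(avgs) / n}
```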

23 Performance – AvgDist and MaxDist For AvgDist: Avg-Medoid is the winner. For MaxDist: Max-Medoid is the winner.

24 Performance – MinDist Avg-Medoid seems to be the winner

25 Verdict Based on AvgDist and MinDist: Avg-Medoid. Based on MaxDist: Max-Medoid. Although the MinDist result is not statistically significant, overall Avg-Medoid is better than density-biased sampling. In this paper, we choose average k-medoids – our algorithm can extend to max-medoids with small changes

26 Challenges Metric challenge – What is the best set of representatives? Representative finding challenge – How to find them efficiently? Query challenge – How to efficiently adapt to the user’s query operations?

27 Cover Tree Based Algorithm The cover tree was proposed by Beygelzimer, Kakade, and Langford in 2006 [5] Briefly discuss cover tree properties Cover tree based algorithms for computing k-medoids

28 Cover Tree Properties (1) Nesting: for all i, C_i ⊆ C_{i+1} – every node is repeated in each lower level after its first appearance. Assume all pairwise distances are at most 1. (Figure, modified from the Cover Tree authors' slides, shows the points of a one-dimensional data set at levels C_i and C_{i+1}.)

29 Cover Tree Properties (2) Covering: a node in C_i is within distance 2^(-i) of its children in C_{i+1}. The distance from a node to any of its descendants is less than 2^(-i+1); this value is called the “span” of the node. This ensures that nodes are close enough to their children.

30 Cover Tree Properties (3) Separation: nodes in C_i are separated by at least 2^(-i), so nodes at higher levels are more separated. (Figure modified from the Cover Tree authors' slides.)
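To internalize the three properties, here is a sketch that checks them on an explicit levels-plus-parents representation. The data layout and the exact 2^(-i) constants (taken from the standard cover tree definition) are assumptions for illustration, not the paper's data structure:

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def check_cover_tree(levels, parent):
    """levels[i] is the list of points (tuples) in C_i, all pairwise distances <= 1;
    parent[(i + 1, p)] is the parent in C_i of point p appearing in C_{i+1}."""
    for i in range(len(levels) - 1):
        # Nesting: C_i is a subset of C_{i+1}.
        assert set(levels[i]) <= set(levels[i + 1]), "nesting violated"
        # Covering: every node in C_{i+1} lies within 2**-i of its parent in C_i.
        for p in levels[i + 1]:
            assert dist(p, parent[(i + 1, p)]) <= 2.0 ** -i, "covering violated"
        # Separation: nodes in C_i are at least 2**-i apart.
        for a in range(len(levels[i])):
            for b in range(a + 1, len(levels[i])):
                assert dist(levels[i][a], levels[i][b]) >= 2.0 ** -i, "separation violated"
```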

31 Additional Stats for Cover Tree (2D Example) Density (DS): the number of points in the node's subtree (the example figure shows subtrees with DS = 10 and DS = 3). Centroid (CT): the geometric center of the points in the subtree.
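A sketch of how the two statistics could be computed bottom-up. The node layout, with .point and .children holding each distinct data point exactly once, is an assumption; in the actual system these statistics would be stored in the tree rather than recomputed:

```python
def subtree_stats(node):
    """Return (density DS, centroid CT) for the subtree rooted at `node`.

    Assumes each node has .point (a coordinate tuple) and .children, and that
    each distinct data point appears exactly once in this representation
    (the self-copies a cover tree keeps at lower levels are not materialized).
    """
    count = 1
    sums = list(node.point)
    for child in node.children:
        child_count, child_centroid = subtree_stats(child)
        count += child_count
        sums = [s + c * child_count for s, c in zip(sums, child_centroid)]
    return count, tuple(s / count for s in sums)
```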

32 k-medoid Algorithm Outline Descend the cover tree to a level with more than k nodes Choose k initial points as the first set of medoids (seeds) – Bad seeds can lead to local minima with a high distance cost Assign nodes and update repeatedly until the medoids converge

33 Cover Tree Based Seeding Descend the cover tree to a level with more than k nodes (denote it as level m) Use the parent level (m-1) as the starting point for seeds – Each node has a weight, calculated as the product of its span and density (the contribution of its subtree to the distance cost) – Expand nodes using a priority queue – Fetch the first k nodes from the queue as seeds
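A possible reading of this step, sketched in code. The node interface (.span, .density, .children) and the policy of expanding the heaviest nodes until at least k candidates exist are assumptions for illustration, not the paper's exact procedure:

```python
import heapq

def cover_tree_seeds(level_nodes, k):
    """Pick k seed medoids from the nodes of level m-1 (sketch).

    A node's weight, span * density, approximates its subtree's contribution
    to the distance cost; the heaviest nodes are expanded into their children
    first, and once at least k candidates are available the k heaviest are
    returned as seeds.
    """
    def weight(n):
        return n.span * n.density

    heap = [(-weight(n), i, n) for i, n in enumerate(level_nodes)]
    heapq.heapify(heap)
    tie = len(heap)
    unexpandable = []                 # leaves that have no children to offer

    while heap and len(heap) + len(unexpandable) < k:
        _, _, node = heapq.heappop(heap)
        if not node.children:
            unexpandable.append(node)
            continue
        for child in node.children:   # replace the node by its children
            heapq.heappush(heap, (-weight(child), tie, child))
            tie += 1

    candidates = [n for _, _, n in heap] + unexpandable
    candidates.sort(key=weight, reverse=True)
    return candidates[:k]
```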

34 A Simple Example: k = 4 (The example tree's levels have spans 2, 1, 1/2, and 1/4.) Priority queue on node weight (density * span): initially S_3(5), S_8(3), S_5(2); after expansion, S_8(3/2), S_5(1), S_3(1), S_7(1), S_2(1/2). The first k nodes fetched from the queue are the final set of seeds.

35 Update Process
1. Initially, assign all nodes to the closest seed to form k clusters
2. For each cluster, calculate the geometric center, using the centroid and density information to approximate each subtree
3. Find the node that is closest to the geometric center and designate it as the new medoid
4. Repeat from step 1 until the medoids converge
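A sketch of this Lloyd-style refinement over cover-tree nodes. Each node is assumed to carry .centroid and .density so that a whole subtree can be approximated by its centroid weighted by its density; the node interface is an assumption for illustration:

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def refine_medoids(nodes, seeds, max_iters=20):
    """Iteratively refine the seed medoids over a set of cover-tree nodes."""
    medoids = list(seeds)
    for _ in range(max_iters):
        # Step 1: assign every node to its closest medoid.
        clusters = [[] for _ in medoids]
        for node in nodes:
            j = min(range(len(medoids)),
                    key=lambda m: dist(node.centroid, medoids[m].centroid))
            clusters[j].append(node)
        # Steps 2-3: density-weighted geometric center of each cluster, then
        # the member node closest to that center becomes the new medoid.
        new_medoids = []
        for j, members in enumerate(clusters):
            if not members:
                new_medoids.append(medoids[j])
                continue
            total = sum(n.density for n in members)
            dims = len(members[0].centroid)
            center = [sum(n.centroid[d] * n.density for n in members) / total
                      for d in range(dims)]
            new_medoids.append(min(members, key=lambda n: dist(n.centroid, center)))
        # Step 4: stop when the medoid set no longer changes.
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids
```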

36 Challenges Metric challenge – What is the best set of representatives? Representative finding challenge – How to find them efficiently? Query challenge – How to efficiently adapt to the user’s query operations?

37 Query Adaptation Handle user actions – Zooming – Selection (filtering) Zooming – Expand all nodes assigned to the chosen medoid – Run the k-medoid algorithm on the new set of nodes
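Expressed in terms of the two sketches above (cover_tree_seeds and refine_medoids), zooming might look like the following; the assignment map from each medoid to the cover-tree nodes it owns is an assumed interface, not the paper's API:

```python
def zoom_into(medoid, assignment, k):
    """Expand all cover-tree nodes assigned to `medoid` one level down and
    re-run the k-medoid algorithm on that subset only (sketch)."""
    expanded = []
    for node in assignment[medoid]:
        expanded.extend(node.children or [node])   # a leaf stays as it is
    return refine_medoids(expanded, cover_tree_seeds(expanded, k))
```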

38 Selection Effect of a selection on a node – Completely invalid – Fully valid – Partially valid Estimate the validity percentage (VG) of each node Multiply the VG by the weight of each node
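One simple way to estimate VG for a range predicate, sketched below. It assumes each node exposes the bounding range of every attribute over its subtree and that values are roughly uniform inside that range; both are illustrative assumptions, not the paper's estimator:

```python
def validity_fraction(node, attr, threshold):
    """Estimate the fraction of a node's subtree satisfying `attr < threshold`.

    Assumes node.low[attr] and node.high[attr] bound the attribute over the
    subtree, and that values are roughly uniform inside that range.
    """
    lo, hi = node.low[attr], node.high[attr]
    if threshold <= lo:
        return 0.0                        # completely invalid
    if threshold >= hi or hi == lo:
        return 1.0                        # fully valid
    return (threshold - lo) / (hi - lo)   # partially valid

def effective_weight(node, attr, threshold):
    """Scale the node's weight (span * density) by its validity fraction (VG)."""
    return node.span * node.density * validity_fraction(node, attr, threshold)
```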

39 What about Projection? What if the user removes one attribute? The distance between each pair of points will change. So should the cover tree be recomputed?

40 System Architecture The MusiqLens module sits between the client user interface and the DBMS; it contains a cover-tree indexer, a k-medoid generator, a zooming operator, and a query operator. The initial query goes to the DBMS and its results are indexed by the cover tree; the k-medoid generator returns medoids to the user interface, and subsequent zooming operations and query operations are handled by the zooming and query operators, which again return medoids.

41 Experiments – Initial Medoid Quality Compare with an R-tree based method [6] Data sets – Synthetic dataset: 2D points with a Zipf distribution – Real dataset: the LA dataset from the R-tree Portal, 130k points Measurements – Time to compute the medoids – Average distance from a data point to its medoid
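The second measurement is straightforward to state in code (a minimal sketch assuming Euclidean distance over the 2D points):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def average_distance_cost(points, medoids):
    """Average distance from each data point to its closest medoid."""
    return sum(min(dist(p, m) for m in medoids) for p in points) / len(points)
```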

42 Results on Synthetic Data For various data sizes, the cover-tree based method outperforms the R-tree based method in both computation time and average distance.

43 Results on Real Data For various values of k, the cover-tree based method outperforms the R-tree based method on real data.

44 Query Adaptation On both synthetic and real data, we compare with re-building the cover tree and running the k-medoid algorithm from scratch. The time cost of re-building is orders of magnitude higher than that of incremental computation.

45 Related Work Classic/textbook k-medoid methods – Partitioning Around Medoids (PAM) and Clustering LARge Applications (CLARA), L. Kaufman and P. Rousseeuw, 1990 – CLARANS, R. T. Ng and J. Han, TKDE 2002 Tree-based methods – Focusing on Representatives (FOR), M. Ester, H. Kriegel, and X. Xu, KDD 1996 – Tree-based Partitioning Querying (TPAQ), K. Mouratidis, D. Papadias, and S. Papadimitriou, VLDBJ 2008

46 Related Work (2) Clustering methods – For example, BIRCH, T. Zhang, R. Ramakrishnan, and M. Livny, SIGMOD 1996 Result presentation methods – Automatic result categorization, K. Chakrabarti, S. Chaudhuri, and S. Hwang, SIGMOD 2004 – DataScope, T. Wu et al., VLDB 2007 Other recent work – Finding a representative set from massive data, ICDM 2005 – Generalized group-by, C. Li et al., SIGMOD 2007 – Query result diversification, E. Vee et al., ICDE 2008

47 Conclusion We proposed the MusiqLens framework for solving the many-answer problem We conducted a user study to select a metric for choosing representatives We proposed efficient methods for computing and maintaining the representatives under user actions Part of the database usability project at the University of Michigan – Led by Prof. H. V. Jagadish – http://www.eecs.umich.edu/db/usable/

48 Example Questions How does the tree get constructed in the given example? Would removing many attributes (via projection) affect the shape of the cover tree? What would be examples of datasets where k is greater than 100? What is the impact of increasing dimensionality on building a cover tree?

