Low-Support, High-Correlation: Finding Rare but Similar Items (Minhashing)

1 Low-Support, High-Correlation: Finding Rare but Similar Items. Minhashing.

2 The Problem
- Rather than finding high-support item pairs in basket data, look for items that are highly "correlated."
  - If one appears in a basket, there is a good chance that the other does.
  - "Yachts and caviar" as an itemset: low support, but the two items often appear together.

3 Correlation Versus Support
- A-Priori and similar methods are useless for low-support, high-correlation itemsets.
- When the support threshold is low, too many itemsets are frequent.
  - Memory requirements become too high.
- A-Priori does not address correlation.

4 Matrix Representation of Item/Basket Data
- Columns = items.
- Rows = baskets.
- Entry (r, c) = 1 if item c is in basket r; 0 if not.
- Assume the matrix is almost all 0's.

5 In Matrix Form

              m  c  p  b  j
    {m,c,b}   1  1  0  1  0
    {m,p,b}   1  0  1  1  0
    {m,b}     1  0  0  1  0
    {c,j}     0  1  0  0  1
    {m,p,j}   1  0  1  0  1
    {m,c,b,j} 1  1  0  1  1
    {c,b,j}   0  1  0  1  1
    {c,b}     0  1  0  1  0
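To make the representation concrete, here is a minimal Python sketch (the variable names are ours, not the slides') that builds the 0/1 matrix above from the baskets:

    # Minimal sketch: build the item/basket 0/1 matrix from slide 5.
    items = ["m", "c", "p", "b", "j"]                      # columns
    baskets = [{"m","c","b"}, {"m","p","b"}, {"m","b"}, {"c","j"},
               {"m","p","j"}, {"m","c","b","j"}, {"c","b","j"}, {"c","b"}]  # rows

    # Entry (r, c) = 1 iff item c is in basket r.
    matrix = [[1 if item in basket else 0 for item in items] for basket in baskets]
    for basket, row in zip(baskets, matrix):
        print(sorted(basket), row)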

6 Applications (1)
- Rows = customers; columns = items.
  - (r, c) = 1 if and only if customer r bought item c.
  - Well-correlated columns are items that tend to be bought by the same customers.
  - Used by on-line vendors to select items to "pitch" to individual customers.

7 Applications (2)
- Rows = (footprints of) shingles; columns = documents.
  - (r, c) = 1 iff footprint r is present in document c.
  - Goal: find similar documents.

8 Applications (3)
- Rows and columns are both Web pages.
  - (r, c) = 1 iff page r links to page c.
  - Correlated columns are pages with many of the same in-links.
  - These pages may be about the same topic.

9 Assumptions (1)
1. The number of items allows a small amount of main memory per item.
   - E.g., main memory = (number of items) x (a small constant).
2. There are too many items to store anything in main memory for each pair of items.

10 Assumptions (2)
3. There are too many baskets to store anything in main memory for each basket.
4. The data is very sparse: it is rare for an item to be in a given basket.

11 From Correlation to Similarity
- Statistical correlation is too hard to compute, and probably meaningless.
  - Most entries are 0, so correlation of columns is always high.
- Substitute "similarity," as in the shingles-and-documents study.

12 Similarity of Columns
- Think of a column as the set of rows in which it has 1.
- The similarity of columns C1 and C2, Sim(C1, C2), is the ratio of the sizes of the intersection and union of C1 and C2:
  - Sim(C1, C2) = |C1 ∩ C2| / |C1 ∪ C2| = the Jaccard measure.

13 Example
- (The slide shows two example columns C1 and C2.) They agree in 2 of the 5 rows in which either has a 1, so Sim(C1, C2) = 2/5 = 40%.
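To make the definition concrete, here is a minimal Python sketch (our own helper, not from the slides) that computes the Jaccard similarity of two columns represented as sets of row numbers:

    # Minimal sketch: Jaccard similarity of two columns,
    # each represented as the set of rows in which it has a 1.
    def jaccard(c1: set, c2: set) -> float:
        return len(c1 & c2) / len(c1 | c2)

    # Columns agreeing in 2 rows out of 5 total, as in slide 13.
    c1 = {2, 3, 5}
    c2 = {1, 3, 5, 6}
    print(jaccard(c1, c2))  # 2/5 = 0.4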

14 Outline of Algorithm
1. Compute "signatures" ("sketches") of columns = small summaries of columns.
   - Read from disk into main memory.
2. Examine signatures in main memory to find similar signatures.
   - Essential: the similarity of signatures and the similarity of columns must be related.
3. Check that columns with similar signatures are really similar (optional).

15 Signatures
- Key idea: "hash" each column C to a small signature Sig(C), such that:
  1. Sig(C) is small enough that we can fit a signature in main memory for each column.
  2. Sim(C1, C2) is the same as the "similarity" of Sig(C1) and Sig(C2).

16 An Idea That Doesn't Work
- Pick 100 rows at random, and let the signature of column C be the 100 bits of C in those rows.
- Because the matrix is sparse, many columns would have all 0's as a signature, yet be very dissimilar because their 1's are in different rows.

17 Four Types of Rows
- Given columns C1 and C2, rows may be classified as:

         C1  C2
    a:    1   1
    b:    1   0
    c:    0   1
    d:    0   0

- Also, let a = the number of rows of type a, etc.
- Note Sim(C1, C2) = a / (a + b + c).

18 Minhashing
- Imagine the rows permuted randomly.
- Define the "hash" function h(C) = the number of the first row (in the permuted order) in which column C has a 1.
- Use several (100?) independent hash functions to create a signature.
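A minimal Python sketch of this definition (our own names; a column is represented as the set of rows in which it has a 1):

    import random

    # Minimal sketch: minhash by an explicit random permutation of the rows.
    def minhash(column, permutation):
        # permutation[i] = the row that sits at position i of the permuted order;
        # h(C) = the first position whose row appears in the column.
        for position, row in enumerate(permutation):
            if row in column:
                return position
        return None  # an all-zero column has no minhash

    rows = list(range(10))
    perm = rows[:]
    random.shuffle(perm)
    print(minhash({2, 3, 5}, perm))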

19 Minhashing Example (figure: an input matrix and the resulting signature matrix M)

20 Surprising Property
- The probability (over all permutations of the rows) that h(C1) = h(C2) is the same as Sim(C1, C2).
- Both are a / (a + b + c)!
- Why?
  - Look down columns C1 and C2 until we see a 1.
  - If it's a type-a row, then h(C1) = h(C2). If it's a type-b or type-c row, then not.

21 Similarity for Signatures
- The similarity of signatures is the fraction of the rows in which they agree.
  - Remember, each row corresponds to a permutation, i.e., one "hash function."
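Slide 20's property can be checked empirically: over many random permutations, the fraction on which two columns' minhashes agree approaches their Jaccard similarity. A hedged, self-contained sketch (our own code):

    import random

    # Empirical check of slide 20's property: Pr[h(C1) = h(C2)] = Sim(C1, C2).
    def minhash(column, position_of):
        # position_of[row] = position of that row in the permuted order.
        return min(position_of[r] for r in column)

    c1, c2 = {2, 3, 5}, {1, 3, 5, 6}   # Jaccard similarity = 2/5
    rows = list(range(10))
    trials, agree = 10000, 0
    for _ in range(trials):
        perm = rows[:]
        random.shuffle(perm)
        position_of = {row: i for i, row in enumerate(perm)}
        if minhash(c1, position_of) == minhash(c2, position_of):
            agree += 1
    print(agree / trials)              # close to 0.4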

22 Min-Hashing Example (figure: an input matrix, its signature matrix M, and the column-column vs. signature-signature similarities)

23 Minhash Signatures
- Pick (say) 100 random permutations of the rows.
- Think of Sig(C) as a column vector.
- Let Sig(C)[i] = the row number of the first row with a 1 in column C, under the i-th permutation.

24 Implementation (1)
- Suppose the number of rows = 1 billion.
- It is hard to pick a random permutation of 1, ..., 1 billion.
- Representing a random permutation requires 1 billion entries.
- Accessing rows in permuted order is tough!
  - The number of passes would be prohibitive.

25 Implementation (2)
1. Pick (say) 100 hash functions.
2. For each column c and each hash function h_i, keep a "slot" M(i, c) for that minhash value.
3. For each row r, for each column c with a 1 in row r, and for each hash function h_i:
   if h_i(r) is smaller than M(i, c), then M(i, c) := h_i(r).
- Needs only one pass through the data.

26 Example

    Row  C1  C2
     1    1   0
     2    0   1
     3    1   1
     4    1   0
     5    0   1

h(x) = x mod 5
g(x) = (2x + 1) mod 5

Processing rows 1 through 5 in one pass; after each row, the slots (M(h, C1), M(h, C2)) and (M(g, C1), M(g, C2)) are:

    h(1) = 1    1   -        g(1) = 3    3   -
    h(2) = 2    1   2        g(2) = 0    3   0
    h(3) = 3    1   2        g(3) = 2    2   0
    h(4) = 4    1   2        g(4) = 4    2   0
    h(5) = 0    1   0        g(5) = 1    2   0
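As a sanity check, here is a minimal Python sketch (our own code and names, not from the slides) of the one-pass algorithm from slide 25, run on the slide-26 data:

    INF = float("inf")

    # Minimal sketch of slide 25's one-pass minhash computation.
    # rows: list of (row_number, columns that have a 1 in that row).
    def minhash_signatures(rows, hash_funcs, num_cols):
        M = [[INF] * num_cols for _ in hash_funcs]   # M[i][c] = slot for h_i, column c
        for r, cols_with_one in rows:                # a single pass over the data
            hashed = [h(r) for h in hash_funcs]      # hash each row number once
            for c in cols_with_one:
                for i, hr in enumerate(hashed):
                    if hr < M[i][c]:
                        M[i][c] = hr
        return M

    # Slide-26 data: C1 has 1's in rows 1, 3, 4; C2 in rows 2, 3, 5.
    rows = [(1, [0]), (2, [1]), (3, [0, 1]), (4, [0]), (5, [1])]
    h = lambda x: x % 5
    g = lambda x: (2 * x + 1) % 5
    print(minhash_signatures(rows, [h, g], 2))  # [[1, 0], [2, 0]]: Sig(C1) = (1, 2), Sig(C2) = (0, 0)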

27 Candidate Generation
- Pick a similarity threshold s, a fraction < 1.
- A pair of columns c and d is a candidate pair if their signatures agree in at least a fraction s of the rows.
  - I.e., M(i, c) = M(i, d) for at least a fraction s of the values of i.
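A direct rendering of this test (our own names; slide 28 explains why the all-pairs loop below is too slow at scale):

    from itertools import combinations

    # A pair is a candidate if its signatures agree in at least a fraction s of rows.
    def is_candidate(sig_c, sig_d, s):
        agree = sum(1 for a, b in zip(sig_c, sig_d) if a == b)
        return agree >= s * len(sig_c)

    def candidate_pairs(signatures, s):
        # Naive all-pairs scan; quadratic in the number of columns.
        return [(c, d)
                for (c, sig_c), (d, sig_d) in combinations(signatures.items(), 2)
                if is_candidate(sig_c, sig_d, s)]

    sigs = {"c1": [1, 2], "c2": [0, 0], "c3": [1, 2]}
    print(candidate_pairs(sigs, 0.5))  # [('c1', 'c3')]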

28 The Problem with Checking Candidates
- While the signatures of all columns may fit in main memory, comparing the signatures of all pairs of columns is quadratic in the number of columns.
- Example: 10^6 columns implies about 5 * 10^11 comparisons.
- At 1 microsecond per comparison: almost 6 days.
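The arithmetic behind that estimate, as a quick check:

    # Back-of-the-envelope check of slide 28's numbers.
    columns = 10**6
    pairs = columns * (columns - 1) // 2   # about 5 * 10**11 comparisons
    seconds = pairs * 1e-6                 # at 1 microsecond per comparison
    print(pairs, seconds / 86400)          # ~5 * 10**11 and ~5.8 days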