Entity Tables, Relationship Tables. We classify using any table (as the training table) on any one of its columns, the class label column.
Medical Expert System: Using the entity training table Patients(PID, Symptom1, Symptom2, Symptom3, Disease) and class label Disease, we can classify new patients based on symptoms using Nearest Neighbor or model-based (Decision Tree, Neural Network, SVM, SVD, etc.) classification.
Netflix Contest: Using the relationship training table Rents(UID, MID, Rating, Date) and class label Rating, classify new (UID, MID, Date) tuples. How (since there are really no feature columns)?
Example schemas (from the entity-relationship diagrams on this slide):
Netflix: Movie(MID, Mname, Date) -- Rents(TranID, Rating, Date) -- User(UID, CNAME, AGE)
Market Basket: Item(I#, Iname, Suppl, Date, Price) -- buys(TranID, Count, Date) -- Customer(C#, CNAME, AGE)
Educational: Course(C#, CNAME, ST, TERM) -- Enrollments(S#, C#, GR) -- Student(S#, SNAME, GEN)
Medical: Ward(WID, Wname, Capacity) -- "is in" -- Patients(PID, PNAME, Symptom1, Symptom2, Symptom3, Disease)
We cluster on a table also, but usually just to "prepare" a classification training table (move up a semantic hierarchy, e.g., in Items, cluster on PriceRanges in 100-dollar intervals), or to determine the classes in the first place (each cluster is then declared a separate class). We do Association Rule Mining (ARM) on relationships (e.g., Market Basket Research: ARM on the "buys" relationship).
We can let near-neighbor movies vote. What makes a movie, mid, "near" to MID? That mid is rated similarly to MID by users uid_1, ..., uid_k. Here we use a correlation for "near" instead of an actual distance.
We can let near-neighbor users vote. What makes a user, uid, "near" to UID? That uid rates similarly to UID on movies mid_1, ..., mid_k. Again we use a correlation for "near" instead of a distance. This is typical when classifying on a relationship table.
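A minimal sketch (toy, entirely hypothetical data; not the course's P-tree code) of that last idea: when classifying on a relationship table like Rents(UID, MID, Rating, Date), "near" users are found by correlation over co-rated movies rather than by a feature-space distance, and the near users vote on the rating.

```python
from math import sqrt

# ratings[user][movie] = rating  (hypothetical toy data for illustration only)
ratings = {
    "u1": {"m1": 5, "m2": 3, "m3": 4},
    "u2": {"m1": 4, "m2": 3, "m3": 5, "m4": 2},
    "u3": {"m1": 1, "m2": 5, "m4": 4},
}

def pearson(u, v):
    """Correlation of users u and v over the movies both have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    xs = [ratings[u][m] for m in common]
    ys = [ratings[v][m] for m in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def predict_rating(uid, mid):
    """Let positively correlated ("near") users who rated mid vote, weighted by correlation."""
    votes = [(pearson(uid, v), r[mid]) for v, r in ratings.items()
             if v != uid and mid in r]
    votes = [(w, r) for w, r in votes if w > 0]
    wsum = sum(w for w, _ in votes)
    return sum(w * r for w, r in votes) / wsum if wsum else None

print(predict_rating("u1", "m4"))   # 2.0 here: only u2 is positively correlated with u1
```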
WALK THRU of a kNN classification example: 3NN classification of an unclassified sample a = (a5, a6, a11, a12, a13, a14) = (0, 0, 0, 0, 0, 0), using class label column a10 = C and Hamming distance on the six feature columns.

Training table:

Key  a1 a2 a3 a4 a5 a6 a7 a8 a9 a10=C a11 a12 a13 a14 a15 a16 a17 a18 a19 a20
t12   1  0  1  0  0  0  1  1  0   1    0   1   1   0   1   1   0   0   0   1
t13   1  0  1  0  0  0  1  1  0   1    0   1   0   0   1   0   0   0   1   1
t15   1  0  1  0  0  0  1  1  0   1    0   1   0   1   0   0   1   1   0   0
t16   1  0  1  0  0  0  1  1  0   1    1   0   1   0   1   0   0   0   1   0
t21   0  1  1  0  1  1  0  0  0   1    1   0   1   0   0   0   1   1   0   1
t27   0  1  1  0  1  1  0  0  0   1    0   0   1   1   0   0   1   1   0   0
t31   0  1  0  0  1  0  0  0  1   1    1   0   1   0   0   0   1   1   0   1
t32   0  1  0  0  1  0  0  0  1   1    0   1   1   0   1   1   0   0   0   1
t33   0  1  0  0  1  0  0  0  1   1    0   1   0   0   1   0   0   0   1   1
t35   0  1  0  0  1  0  0  0  1   1    0   1   0   1   0   0   1   1   0   0
t51   0  1  0  1  0  0  1  1  0   0    1   0   1   0   0   0   1   1   0   1
t53   0  1  0  1  0  0  1  1  0   0    0   1   0   0   1   0   0   0   1   1
t55   0  1  0  1  0  0  1  1  0   0    0   1   0   1   0   0   1   1   0   0
t57   0  1  0  1  0  0  1  1  0   0    0   0   1   1   0   0   1   1   0   0
t61   1  0  1  0  1  0  0  0  1   0    1   0   1   0   0   0   1   1   0   1
t72   0  0  1  1  0  0  1  1  0   0    0   1   1   0   1   1   0   0   0   1
t75   0  0  1  1  0  0  1  1  0   0    0   1   0   1   0   0   1   1   0   0

Workspace for the 3 nearest neighbors so far (columns a5 a6 | a10=C | a11 a12 a13 a14, distance from (0,0,0,0,0,0)):
t12  0 0 | 1 | 0 1 1 0   distance 2
t13  0 0 | 1 | 0 1 0 0   distance 1
t15  0 0 | 1 | 0 1 0 1   distance 2

Scanning the remaining rows:
t16 distance=2, don't replace; t21 distance=4, don't replace; t27 distance=4, don't replace; t31 distance=3, don't replace; t32 distance=3, don't replace; t33 distance=2, don't replace; t35 distance=3, don't replace; t51 distance=2, don't replace; t53 distance=1, replace t15 with t53 (0 0 | 0 | 0 1 0 0, distance 1); t55 distance=2, don't replace; t57 distance=2, don't replace; t61 distance=3, don't replace; t72 distance=2, don't replace; t75 distance=2, don't replace.

Vote of the 3NN set {t12, t13, t53}: C=1, C=1, C=0 -- C=1 wins!
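For concreteness, a minimal Python sketch (not the P-tree implementation used in the course) of the same one-scan 3NN vote, with the six feature columns and class label copied from the training table above:

```python
# Rows are (key, (a5,a6,a11,a12,a13,a14), class C = a10), copied from the training table.
train = [
    ("t12", (0,0,0,1,1,0), 1), ("t13", (0,0,0,1,0,0), 1), ("t15", (0,0,0,1,0,1), 1),
    ("t16", (0,0,1,0,1,0), 1), ("t21", (1,1,1,0,1,0), 1), ("t27", (1,1,0,0,1,1), 1),
    ("t31", (1,0,1,0,1,0), 1), ("t32", (1,0,0,1,1,0), 1), ("t33", (1,0,0,1,0,0), 1),
    ("t35", (1,0,0,1,0,1), 1), ("t51", (0,0,1,0,1,0), 0), ("t53", (0,0,0,1,0,0), 0),
    ("t55", (0,0,0,1,0,1), 0), ("t57", (0,0,0,0,1,1), 0), ("t61", (1,0,1,0,1,0), 0),
    ("t72", (0,0,0,1,1,0), 0), ("t75", (0,0,0,1,0,1), 0),
]

def hamming(x, y):
    """Number of positions where the two binary tuples disagree."""
    return sum(a != b for a, b in zip(x, y))

def knn_vote(sample, k=3):
    """Keep the k closest rows and let them vote on the class label."""
    nearest = sorted(train, key=lambda row: hamming(sample, row[1]))[:k]
    votes = [c for _, _, c in nearest]
    return max(set(votes), key=votes.count), nearest

label, nbrs = knn_vote((0, 0, 0, 0, 0, 0))
print(label, [key for key, _, _ in nbrs])   # 1 ['t13', 't53', 't12']: C=1 wins the plain 3NN vote
```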
WALK THRU of the required 2nd scan to find the Closed 3NN set (same training table as the previous slide). Does it change the vote?

3NN set after the 1st scan (columns a5 a6 | a10=C | a11 a12 a13 a14, distance):
t12  0 0 | 1 | 0 1 1 0   distance 2
t13  0 0 | 1 | 0 1 0 0   distance 1
t53  0 0 | 0 | 0 1 0 0   distance 1
Vote after the 1st scan: C=1.

Unclassified sample: (0, 0, 0, 0, 0, 0). Second scan: include every training point whose distance does not exceed that of the 3rd nearest neighbor (distance 2):
t12 d=2, already voted; t13 d=1, already voted; t15 d=2, include it also; t16 d=2, include it also; t21 d=4, don't include; t27 d=4, don't include; t31 d=3, don't include; t32 d=3, don't include; t33 d=2, include it also; t35 d=3, don't include; t51 d=2, include it also; t53 d=1, already voted; t55 d=2, include it also; t57 d=2, include it also; t61 d=3, don't include; t72 d=2, include it also; t75 d=2, include it also.

Does it change the vote? YES! C=0 wins now (6 votes for C=0 against 5 for C=1).
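A minimal sketch of the closed-3NN re-vote, reusing `train` and `hamming` from the previous sketch: instead of an arbitrary 3 winners, every row within the distance of the 3rd nearest neighbor is included before voting.

```python
def closed_knn_vote(sample, k=3):
    """Include every training row within the distance of the k-th nearest, then vote."""
    dists = sorted(hamming(sample, feats) for _, feats, _ in train)
    radius = dists[k - 1]                                   # distance of the k-th nearest (2 here)
    closed = [(key, c) for key, feats, c in train if hamming(sample, feats) <= radius]
    votes = [c for _, c in closed]
    return max(set(votes), key=votes.count), closed

label, nbrs = closed_knn_vote((0, 0, 0, 0, 0, 0))
print(label, len(nbrs))   # 0 11: the closed 3NN set has 11 members, 6 vote C=0 and 5 vote C=1
```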
WALK THRU: C3NN using P-trees. The training table is stored vertically: one bit column (P-tree) per attribute a1, ..., a20 plus the class column C = a10, over the keys t12, ..., t75 (the transpose of the table on the earlier slide; a complemented column P'_ai is used wherever the sample bit s_i = 0).

First let all training points at distance 0 vote, then distance 1, then distance 2, ..., until at least 3 have voted.

For distance 0 (exact matches), construct the P-tree
  P_s = AND over i in {5,6,11,12,13,14} of P_{ai = s_i}
(here s = (0,0,0,0,0,0), so P_s = P'_a5 & P'_a6 & P'_a11 & P'_a12 & P'_a13 & P'_a14), then AND it with P_C and with P_C' to compute the vote. Here P_s is all zeros: there are no neighbors at distance 0.
WALK THRU: C3NN, distance=1 neighbors. Construct the P-tree of the distance-1 ring as an OR of single-dimension interval P-trees:
  P_{D(s,1)} = OR over i in {5,6,11,12,13,14} of P_i,
where
  P_i = P_{|s_i - t_i| = 1} AND (AND over j in {5,6,11,12,13,14}, j != i, of P_{|s_j - t_j| = 0}),
i.e., exactly one of the six feature attributes disagrees with the sample. Each P_i is an AND of the six columns P_a5, P_a6, P_a11, P_a12, P_a13, P_a14, complemented or not according to s. ANDing P_{D(s,1)} with P_C and with P_C' gives the distance-1 votes: one for C=1 (t13) and one for C=0 (t53).
WALK THRU: C3NN, distance=2 neighbors. OR of all double-dimension interval P-trees:
  P_{D(s,2)} = OR over pairs {i,j} in {5,6,11,12,13,14} of P_{i,j},
where
  P_{i,j} = P_{|s_i - t_i| = 1} AND P_{|s_j - t_j| = 1} AND (AND over k in {5,6,11,12,13,14} - {i,j} of P_{|s_k - t_k| = 0}),
i.e., exactly two of the six feature attributes disagree with the sample. There are 15 such pair P-trees: P_{5,6}, P_{5,11}, P_{5,12}, P_{5,13}, P_{5,14}, P_{6,11}, P_{6,12}, P_{6,13}, P_{6,14}, P_{11,12}, P_{11,13}, P_{11,14}, P_{12,13}, P_{12,14}, P_{13,14}.

Partway through ORing them in we already have 3 nearest neighbors; we could quit and declare C=1 the winner? Once all pairs are ORed in we have the complete C3NN set and can declare C=0 the winner!
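A minimal sketch of the vertical idea on the last three slides, with each attribute stored as a bit column and plain Python ints standing in for (uncompressed) P-trees; the distance-0, 1 and 2 rings are built purely with bitwise AND/OR/NOT and then ANDed with the class column to vote. It reuses `train` from the earlier 3NN sketch.

```python
from itertools import combinations

feat_names = ["a5", "a6", "a11", "a12", "a13", "a14"]
cols = {}                                                   # attribute name -> bit column over the 17 rows
for j, name in enumerate(feat_names):
    cols[name] = sum(row[1][j] << i for i, row in enumerate(train))
P_C = sum(row[2] << i for i, row in enumerate(train))       # class bit column (a10 = C)
ALL = (1 << len(train)) - 1                                 # mask of all 17 row positions

sample = dict(zip(feat_names, (0, 0, 0, 0, 0, 0)))

def match_col(name):
    """Bit column of training rows that agree with the sample on attribute `name`."""
    return cols[name] if sample[name] == 1 else (~cols[name] & ALL)

def ring(d):
    """Rows at Hamming distance exactly d: OR over all choices of d disagreeing attributes."""
    result = 0
    for wrong in combinations(feat_names, d):
        mask = ALL
        for name in feat_names:
            m = match_col(name)
            mask &= (~m & ALL) if name in wrong else m
        result |= mask
    return result

def vote(mask):
    """(count of C=1 rows, count of C=0 rows) inside the mask."""
    return bin(mask & P_C).count("1"), bin(mask & ~P_C & ALL).count("1")

print(vote(ring(0)))                       # (0, 0): no exact matches
print(vote(ring(0) | ring(1)))             # (1, 1): t13 and t53 at distance 1
print(vote(ring(0) | ring(1) | ring(2)))   # (5, 6): the closed-3NN vote, C=0 wins
```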
In this example there were no exact matches (distance=0, i.e., similarity=6 neighbors) for the sample. There were two neighbors found at distance 1 (similarity 5) and nine at distance 2 (similarity 4). All 11 neighbors got an equal vote, even though the two similarity-5 neighbors are much closer than the nine similarity-4 neighbors; processing for those nine is also costly. A better approach is to weight each vote by the similarity of the voter to the sample. We will use a vote-weight function that is linear in the similarity (admittedly, a function that is Gaussian in the similarity would be a better choice, but so far it has been too hard to compute). As long as we are weighting votes by similarity, we might as well also weight attributes by relevance (assuming some attributes are more relevant than others; e.g., the relevance weight of a feature attribute could be its correlation with the class label). P-trees accommodate this method very well; in fact, a variation on this theme won the KDD-Cup competition in 2002 ( http://www.biostat.wisc.edu/~craven/kddcup/ ) and is published as the so-called Podium Classification method. Notice, though, that the P-tree method (Horizontal Processing of Vertical Data, or HPVD) really relies on a bona fide distance, in this case the Hamming (L1) distance, whereas Vertical Processing of Horizontal Data (VPHD) doesn't; VPHD therefore works even when we use a correlation as the notion of "near". If you have been involved in the DataSURG Netflix Prize effort, you will note that we used P-tree technology to calculate correlations, but not to find near neighbors through correlations.
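A minimal sketch of the similarity-weighted vote (linear weight = 6 minus Hamming distance), reusing `train` and `hamming` from the 3NN sketch; attribute-relevance weighting is omitted to keep it short.

```python
def similarity_weighted_vote(sample, radius=2):
    """Each neighbor within `radius` votes with weight = similarity = 6 - Hamming distance."""
    tally = {0: 0.0, 1: 0.0}
    for _, feats, c in train:
        d = hamming(sample, feats)
        if d <= radius:
            tally[c] += len(sample) - d
    return tally

print(similarity_weighted_vote((0, 0, 0, 0, 0, 0)))
# {0: 25.0, 1: 21.0}: C=0 still wins here, but the two similarity-5 voters now count more
```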
Netflix schema: Movie(MID, Mname, Date) -- Rents(TID, UID, MID, Rating, Date) -- User(UID, CNAME, AGE). Again, Rents is also a table, Rents(TID, UID, MID, Rating, Date); using the class label column Rating, we can classify (predict) potential new ratings. Using these predicted ratings, we can recommend Rating=5 rentals.
Does ARM require a binary relationship? YES! But since every table gives a binary relationship between any two of its columns, we can do ARM on them. E.g., we could do ARM on Patients(Symptom1, Disease) to try to determine which diseases Symptom1 alone implies (with high confidence).
AND, in some cases, the columns of a table are really instances of another entity. E.g., images, such as Landsat satellite images, LS(R, G, B, NIR, MIR, TIR), with wavelength intervals in micrometers: B=(.45,.52], G=(.52,.60], R=(.63,.69], NIR=(.76,.90], MIR=(2.08,2.35], TIR=(10.4,12.5]. Known recording instruments (which record the number of photons detected in a small interval of time in a given wavelength range) seem to be very limited in capability (e.g., our eyes, CCD cameras, ...). If we could record all consecutive bands (e.g., of radius .025 µm), then we would have a relationship between pixels (given by latitude-longitude) and wavelengths. Then we could do ARM on imagery!
Intro to Association Rule Mining (ARM). Given any relationship between entities T (e.g., a set of transactions an enterprise performs) and I (e.g., a set of items which are acted upon by those transactions). E.g., in Market Basket Research (MBR) the transactions, T, are the checkout transactions (a customer going through checkout) and the items, I, are the items available for purchase in that store. The itemset, T(I), associated with (related to) a particular transaction, t, is the subset of items found in the shopping cart or market basket that the customer is bringing through checkout at that time.
An Association Rule, A ⇒ C, associates two disjoint subsets of I (called itemsets); A is called the antecedent, C the consequent.
Example Horizontal Transaction Table, T(I) (its graph is the bipartite graph between T = {t1,...,t5} and I = {i1,...,i4}, with A = {i1,i2} and C = {i4} marked):
t1: i1
t2: i1, i2, i4
t3: i1, i3
t4: i1, i2, i4
t5: i3, i4
The support [set] of itemset A, supp(A), is the set of t's that are related to every a ∈ A; e.g., if A = {i1,i2} and C = {i4}, then supp(A) = {t2, t4} (support ratio = |{t2,t4}| / |{t1,t2,t3,t4,t5}| = 2/5). Note: | | means set size, the count of elements in the set.
The support [ratio] of the rule A ⇒ C, supp(A ⇒ C), is the support of A ∪ C = |{t2,t4}| / |{t1,t2,t3,t4,t5}| = 2/5.
The confidence of the rule A ⇒ C, conf(A ⇒ C), is supp(A ⇒ C) / supp(A) = (2/5) / (2/5) = 1.
Data miners typically want to find all STRONG RULES, A ⇒ C, with supp(A ⇒ C) ≥ minsupp and conf(A ⇒ C) ≥ minconf (minsupp and minconf are threshold levels). Note that conf(A ⇒ C) is also just the conditional probability of t being related to C, given that t is related to A.
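A minimal sketch of these definitions on the five-transaction example above:

```python
# Transactions t1..t5 over items i1..i4, as in the Horizontal Transaction Table above.
T = {
    "t1": {"i1"},
    "t2": {"i1", "i2", "i4"},
    "t3": {"i1", "i3"},
    "t4": {"i1", "i2", "i4"},
    "t5": {"i3", "i4"},
}

def supp_set(itemset):
    """Support set: the transactions related to every item in `itemset`."""
    return {t for t, items in T.items() if itemset <= items}

def supp(itemset):
    """Support ratio of an itemset."""
    return len(supp_set(itemset)) / len(T)

def conf(A, C):
    """Confidence of A => C: supp(A union C) / supp(A)."""
    return supp(A | C) / supp(A)

A, C = {"i1", "i2"}, {"i4"}
print(supp_set(A))   # {'t2', 't4'}
print(supp(A | C))   # 0.4  (= 2/5)
print(conf(A, C))    # 1.0
```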
APRIORI Association Rule Mining: Given a Transaction-Item relationship, the APRIORI algorithm for finding all strong I-rules can be done by: 1. vertical processing of a Horizontal Transaction Table (HTT), or 2. horizontal processing of a Vertical Transaction Table (VTT).
In 1., a Horizontal Transaction Table (HTT) is processed through vertical scans to find all frequent I-sets (I-sets with support ≥ minsupp, i.e., I-sets "frequently" found in transaction market baskets). In 2., a Vertical Transaction Table (VTT) is processed through horizontal operations to find all frequent I-sets. Then each frequent I-set found is analyzed to determine whether it is the support set of a strong rule.
Finding all frequent I-sets is the hard part. To do this efficiently, APRIORI takes advantage of the "downward closure" property of frequent I-sets: if an I-set is frequent, then all of its subsets are also frequent. E.g., in the Market Basket example, if A is an I-subset of B and all of B is in a given transaction's basket, then certainly all of A is in that basket too. Therefore supp(A) ≥ supp(B) whenever A ⊆ B.
First, APRIORI scans to determine all frequent 1-item I-sets (they contain 1 item, so they are called 1-itemsets); next it uses downward closure to efficiently find candidates for frequent 2-itemsets; next it scans to determine which of those candidate 2-itemsets are actually frequent; next it uses downward closure to efficiently find candidates for frequent 3-itemsets; next it scans to determine which of those candidate 3-itemsets are actually frequent; ... until no candidates remain. (On the following slides we walk through an example using both an HTT and a VTT; a minimal code sketch follows below.)
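A minimal level-wise Apriori sketch (HTT-style, one scan per level; illustrative only, not production code), run on the four-transaction example of the next slide:

```python
from itertools import combinations

def apriori(transactions, minsupp_count):
    """Return {frequent itemset: support count} using level-wise candidate generation."""
    items = sorted({i for t in transactions for i in t})
    freq, level = {}, [frozenset([i]) for i in items]
    k = 1
    while level:
        # scan: count each candidate's support
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        Lk = {c: n for c, n in counts.items() if n >= minsupp_count}
        freq.update(Lk)
        # candidate generation for level k+1, pruned by downward closure
        k += 1
        prev = list(Lk)
        level = {a | b for a in prev for b in prev if len(a | b) == k}
        level = [c for c in level
                 if all(frozenset(s) in Lk for s in combinations(c, k - 1))]
    return freq

D = [{"1", "3", "4"}, {"2", "3", "5"}, {"1", "2", "3", "5"}, {"2", "5"}]
for itemset, count in sorted(apriori(D, 2).items(),
                             key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(sorted(itemset), count)   # ends with ['2', '3', '5'] 2, matching the walk-through
```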
ARM example. minsupp is set by the querier at 1/2 and minconf at 3/4. (Note that minsupp and minconf can be expressed as counts rather than ratios; since there are 4 transactions, as counts minsupp=2 and minconf=3.) The relationship between transactions and items can be expressed in a Horizontal Transaction Table (HTT) or a Vertical Transaction Table (VTT):

TID  items        (item bit map 1 2 3 4 5)
100  1 3 4        (1 0 1 1 0)
200  2 3 5        (0 1 1 0 1)
300  1 2 3 5      (1 1 1 0 1)
400  2 5          (0 1 0 0 1)

(Downward closure property of "frequent": any subset of a frequent itemset is frequent.)
APRIORI METHOD: Iteratively find the frequent k-itemsets, k = 1, 2, ...; then find all strong association rules supported by each frequent itemset. (C_k will denote the candidate k-itemsets generated at each step; F_k = L_k will denote the frequent k-itemsets.)
Start by finding the frequent 1-itemsets. The 1-itemset supports are 2, 3, 3, 1, 3 (two 1's, three 2's, three 3's, one 4, three 5's), so items 1, 2, 3 and 5 are frequent (supp ≥ 2) and item 4 is not.
Example ARM using uncompressed P-trees (the 1-count is placed at the root of each P-tree).

Build the P-trees, one bit column per item over TIDs 100, 200, 300, 400:
P1 = 1010 (count 2), P2 = 0111 (count 3), P3 = 1110 (count 3), P4 = 1000 (count 1), P5 = 0111 (count 3).

Scan D for C1: F1 = L1 = {1}, {2}, {3}, {5} (item 4 is dropped, count 1 < 2).

Scan D for C2, counting by ANDing P-trees:
P1^P2 = 0010 (1), P1^P3 = 1010 (2), P1^P5 = 0010 (1), P2^P3 = 0110 (2), P2^P5 = 0111 (3), P3^P5 = 0110 (2).
F2 = L2 = {1,3}, {2,3}, {2,5}, {3,5}.

C3: of the candidates {1,2,3}, {1,3,5}, {2,3,5}, {1,2,3} is pruned since {1,2} is not frequent and {1,3,5} is pruned since {1,5} is not frequent (their counts, P1^P2^P3 = 0010 (1) and P1^P3^P5 = 0010 (1), confirm this).
Scan D for C3: P2^P3^P5 = 0110 (2), so F3 = L3 = {2,3,5}.
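A minimal sketch of the vertical counting above, with each item's P-tree flattened to a plain Python int used as a bit vector over the four transactions (so no real P-tree compression):

```python
P = {                       # bit columns over TIDs 100, 200, 300, 400 (leftmost bit = TID 100)
    "1": 0b1010, "2": 0b0111, "3": 0b1110, "4": 0b1000, "5": 0b0111,
}

def support(itemset):
    """AND the item bit vectors and count the surviving 1-bits."""
    mask = 0b1111
    for item in itemset:
        mask &= P[item]
    return bin(mask).count("1")

print(support({"1", "3"}))        # 2   (P1 ^ P3 = 1010)
print(support({"2", "5"}))        # 3   (P2 ^ P5 = 0111)
print(support({"2", "3", "5"}))   # 2   (P2 ^ P3 ^ P5 = 0110)
```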
1-itemsets don't support association rules (they would have no antecedent or no consequent). Are there any strong rules supported by the frequent (large) 2-itemsets (at minconf = .75)?
{1,3}: conf({1} ⇒ {3}) = supp{1,3}/supp{1} = 2/2 = 1 ≥ .75, STRONG!
       conf({3} ⇒ {1}) = supp{1,3}/supp{3} = 2/3 = .67 < .75
{2,3}: conf({2} ⇒ {3}) = supp{2,3}/supp{2} = 2/3 = .67 < .75
       conf({3} ⇒ {2}) = supp{2,3}/supp{3} = 2/3 = .67 < .75
{2,5}: conf({2} ⇒ {5}) = supp{2,5}/supp{2} = 3/3 = 1 ≥ .75, STRONG!
       conf({5} ⇒ {2}) = supp{2,5}/supp{5} = 3/3 = 1 ≥ .75, STRONG!
{3,5}: conf({3} ⇒ {5}) = supp{3,5}/supp{3} = 2/3 = .67 < .75
       conf({5} ⇒ {3}) = supp{3,5}/supp{5} = 2/3 = .67 < .75
So 2-itemsets do support association rules. Are there any strong rules supported by the frequent (large) 3-itemsets?
{2,3,5}: conf({2,3} ⇒ {5}) = supp{2,3,5}/supp{2,3} = 2/2 = 1 ≥ .75, STRONG!
         conf({2,5} ⇒ {3}) = supp{2,3,5}/supp{2,5} = 2/3 = .67 < .75
         conf({3,5} ⇒ {2}) = supp{2,3,5}/supp{3,5} = 2/3 = .67 < .75
No smaller (subset) antecedent can yield a strong rule either: there is no need to check conf({2} ⇒ {3,5}), conf({3} ⇒ {2,5}) or conf({5} ⇒ {2,3}), since their denominators are at least as large and therefore their confidences are at least as low. DONE!
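A minimal sketch of this rule-generation step, reusing the `support` counter from the previous sketch: every non-empty proper subset of a frequent itemset is tried as antecedent, and only rules reaching minconf are kept.

```python
from itertools import combinations

def strong_rules(frequent_itemsets, minconf=0.75):
    """Return (antecedent, consequent, confidence) for every strong rule."""
    rules = []
    for F in frequent_itemsets:
        if len(F) < 2:
            continue                      # 1-itemsets have no antecedent/consequent split
        for r in range(1, len(F)):
            for A in map(set, combinations(F, r)):
                C = F - A
                confidence = support(F) / support(A)
                if confidence >= minconf:
                    rules.append((sorted(A), sorted(C), confidence))
    return rules

frequents = [{"1", "3"}, {"2", "3"}, {"2", "5"}, {"3", "5"}, {"2", "3", "5"}]
for A, C, confidence in strong_rules(frequents):
    print(A, "=>", C, round(confidence, 2))
# Prints exactly the four STRONG rules above: {1}=>{3}, {2}=>{5}, {5}=>{2}, {2,3}=>{5}
```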
E.g., the $1,000,000 Netflix Contest was to develop a ratings prediction program that beats the one Netflix currently uses (called Cinematch) by 10% in predicting what rating users gave to movies; i.e., predict rating(M, U) where (M, U) ∈ QUALIFYING(MovieID, UserID). Netflix uses Cinematch to decide which movies a user will probably like next (based on all past rating history). All ratings are "5-star" ratings (5 is highest, 1 is lowest; caution: 0 means "did not rate"). Unfortunately, rating=0 does not mean that the user "disliked" that movie, but that it wasn't rated at all. Most "ratings" are 0; therefore the ratings data sets are NOT vector spaces!
One can approach the Netflix contest problem as a data mining classification/prediction problem. A history of ratings given by users to movies, TRAINING(MovieID, UserID, Rating, Date), is provided with which to train your predictor, which will then predict the ratings given to the QUALIFYING movie-user pairs (Netflix knows the ratings given to the QUALIFYING pairs, but we don't). Since TRAINING is very large, Netflix also provides a smaller but representative subset of TRAINING, PROBE(MovieID, UserID) (about 2 orders of magnitude smaller than TRAINING). Netflix allowed 5 years to submit QUALIFYING predictions; the contest was won in the late summer of 2009, when the submission window was about half gone.
The Netflix Contest problem is an example of the Collaborative Filtering problem, which is ubiquitous in the retail business world (how do you filter out what a customer will want to buy or rent next, based on similar customers?). Collaborative Filtering is the prediction of likes and dislikes (retail or rental) from the history of previously expressed purchase or rental satisfactions (filtering new likes through the historical filter of "collaborator" likes).
ARM in Netflix? (Schema: Movie(MID, Mname, Date) -- Rents(UID, MID, Rating, Date) -- User(UID, CNAME, AGE).) Can ARM be used for pattern mining in the Netflix data? In general we look for rating patterns (RP) that have the property: RP true ⇒ MID r (the movie MID is rated r).
Singleton homogeneous rating patterns: RP = (N r), a single movie N rated r.
Multiple homogeneous rating patterns: RP = (N1 r) & ... & (Nk r).
Multiple heterogeneous rating patterns: RP = (N1 r1) & ... & (Nk rk).