

Classify x as class C iff there exists c∈C such that (c-x)o(c-x) ≤ r²




1 Classify x as class C iff there exists c∈C such that (c-x)o(c-x) ≤ r²
FAUST Classifiers. FAUST = Fast, Analytic, Unsupervised and Supervised Technology. C = class, X = unclassified samples, r = a chosen minimum gap threshold.

FAUST One-Class Spherical (OCS): classify x as class C iff there exists c∈C such that (c-x)o(c-x) ≤ r².

FAUST Multi-Class Spherical (MCS): classify x as class Ck iff the count of c∈Ck s.t. (c-x)o(c-x) ≤ r² is maximal over k.

FAUST One-Class Linear (OCL): construct a hull, H, around C; x is class C iff x∈H. For a series of vectors D, let loD = mnCoD (or the 1st PCI) and hiD = mxCoD (or the last PCD). Classify x∈C iff loD ≤ Dox ≤ hiD ∀D. E.g., let the D-series be the diagonals e1, e2, ..., en, e1+e2, e1-e2, e1+e3, e1-e3, ..., e1-e2-...-en (add more D's until diamH - diamC < ε?).

FAUST Multi-Class Linear (MCL): construct a hull, Hk, about Ck ∀k, as above. Then x is a Ck iff ∃k with x∈Hk. (This allows a "none of the classes" outcome when x∉Hk ∀k.) The Hk's can be constructed in parallel.

[Figure: a 2D sketch comparing the convex hull of class C with our hull H, built from the cut-points mnCoD12 and mxCoD12 on the D12 = e1-e2 line (with classes a, b, c shown), plus a 3D example of HULL1 using D = e1+e2+e3, e1+e3, e1-e3 and the e1 line. One-class classification.]
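As a plain-arithmetic sketch of the OCS rule above (NumPy arrays stand in for the pTree/SPTS machinery here; the function name is ours, not FAUST's):

```python
import numpy as np

def ocs_classify(C, x, r):
    # One-Class Spherical rule: x is class C iff some c in C satisfies
    # (c-x)o(c-x) <= r^2, i.e. x lies within r of at least one class point.
    d2 = ((C - x) ** 2).sum(axis=1)   # (c-x)o(c-x) for every c in C
    return bool((d2 <= r * r).any())
```

On the Spaeth example of the next slide, the class points of C = {yb, yc, yd, ye} are all more than r = 3 from yf, so a call like this would come back False.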

2 FAUST OCS One Class Spherical on the Spaeth dataset as a "lazy" classifier
Let the class be C = {yb, yc, yd, ye}. OCS-classify x = yf with r = 3.

[Figure: the Spaeth points y1..yf on a hex-labeled grid; C = {yb, yc, yd, ye} is the lower cluster and x = yf sits apart from it. The PTS for C (pTrees C1, C2) and for x = yf are shown, along with the SPTS (c-x)o(c-x) for yb, yc, yd, ye.]

How expensive is the algorithm? For each x:
1. Compute the SPTS (C-x)o(C-x).
2. Compute the mask pTree (C-x)o(C-x) < r².
3. Count the 1-bits in that mask pTree.

In steps, for x = yf = (7,8) and r² = 9:
1.a Compute the SPTS (C-x)o(C-x).
1.b Form the constant SPTSs.
1.c Construct (C-x)o(C-x) by SPTS arithmetic: (C-x)o(C-x) = (C1-#7)(C1-#7) + (C2-#8)(C2-#8).
1.d Construct the mask pTree (C-x)o(C-x) < #9.
1.e Count the 1-bits: the count is 0.
1.f Conclude that yf∉C.

So yf is not in C, since it is spherically gapped away from C by r = 3 units. Shortcut for 1.d,e,f: some comparison of the high bit-slices of (C-x)o(C-x) with #9? (Yue Cui may have a shortcut?)

Shortcut for all of 1.a-f: (C-x)o(C-x) = CoC - 2Cox + |x|² < r², i.e., |x|² - r² + CoC < 2Cox. Precompute (one time) the SPTS CoC and the PTS 2C (2C is just a re-labeling, a left shift, of the pTrees of C). For each new unclassified sample x:
- add |x|² - r² to CoC (adding one constant to one SPTS);
- compute 2Cox (n multiplications of one SPTS, 2Ci, by one constant, xi, then add the n resulting SPTSs);
- compare |x|² - r² + CoC to 2Cox, giving a mask pTree;
- count the 1-bits in this mask pTree (shortcuts?, shortcuts?, shortcuts?).

For x = yf: a = |x|² - r² = 104; CoC+a > 2Cox everywhere, so Ct = 0 and x is not in C. One-class classifying the unclassified sample x = (a,9) with r = 3 instead gives Ct = 4, so x is in C.
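The shortcut above rests on the identity (C-x)o(C-x) = CoC - 2Cox + |x|², so the r²-ball test becomes |x|² - r² + CoC < 2Cox, with CoC precomputable once per class. A small sketch of that rearranged test (ordinary NumPy arithmetic in place of the SPTS operations; names are ours):

```python
import numpy as np

def ocs_count(C, x, r):
    # Rearranged test: (c-x)o(c-x) < r^2  iff  |x|^2 - r^2 + CoC < 2Cox.
    CoC = (C * C).sum(axis=1)   # one-time precompute per class
    twoCox = (2 * C) @ x        # n scalar multiplies + n-way SPTS add
    a = float(x @ x) - r * r    # the constant |x|^2 - r^2
    return int((a + CoC < twoCox).sum())   # 1-bit count of the mask
```

The returned count plays the role of Ct on the slide: Ct = 0 rejects x, Ct > 0 accepts it under OCS (and under MCS the class with the maximal count wins).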

3 FAUST OCL One Class Linear classifier applied to IRIS, SEEDS, WINE, CONCRETE datasets
For the series of D's = diagonals e1, e2, ..., en, e1+e2, e1-e2, e1+e3, e1-e3, ..., e1-e2-...-en:

For IRIS with C=Versicolor, outliers=Virginica, FAUST 1D: SL cut-points (49,70); SW cut-points (22,32); PL cut-points (33,49); PW cut-points (10,16). 44 Versicolor correct; 7 Virginica errors. After trimming outliers:
The 1D_2D model classifies 50 Versicolor (no eliminated outliers) and 3 Virginica in the 1-class.
The 1D_2D_3D model classifies 50 Versicolor (no eliminated outliers) and 3 Virginica in the 1-class.
The 1D_2D_3D_4D model classifies 50 Versicolor (no eliminated outliers) and 3 Virginica in the 1-class.
The same 3 Virginica errors persist in every model.

For SEEDS with C=class1 and outliers=class2:
The 1D model classifies 50 class1 and 15 class2 incorrectly as class1.
The 1D_2D model classifies 50 class1 and 8 class2 incorrectly as class1.
The 1D_2D_3D model classifies 50 class1 and 8 class2 incorrectly as class1.
The 1D_2D_3D_4D model classifies 50 class1 and 8 class2 incorrectly as class1.

For SEEDS with C=class1 and outliers=class3:
The 1D model classifies 50 class1 and 30 class3 incorrectly as class1.
The 1D_2D model classifies 50 class1 and 27 class3 incorrectly as class1.
The 1D_2D_3D model classifies 50 class1 and 27 class3 incorrectly as class1.
The 1D_2D_3D_4D model classifies 50 class1 and 27 class3 incorrectly as class1.

For SEEDS with C=class2 and outliers=class3:
The 1D model classifies 50 class1 and 0 class3 incorrectly as class1.
The 1D_2D model classifies 50 class1 and 0 class3 incorrectly as class1.
The 1D_2D_3D model classifies 50 class1 and 0 class3 incorrectly as class1.
The 1D_2D_3D_4D model classifies 50 class1 and 0 class3 incorrectly as class1.

For WINE with C=class4 and outliers=class7 (class4 was enhanced with 3 class3's to fill out the 50):
The 1D model classifies 50 class1 and 48 class3 incorrectly as class1.
The 1D_2D model classifies 50 class1 and 43 class3 incorrectly as class1.
The 1D_2D_3D model classifies 50 class1 and 43 class3 incorrectly as class1.
The 1D_2D_3D_4D model classifies 50 class1 and 42 class3 incorrectly as class1.

For CONCRETE, concLH with C=class(8-40) and outliers=class(43-67):
The 1D model classifies 50 class1 and 43 class3 incorrectly as class1.
The 1D_2D model classifies 50 class1 and 35 class3 incorrectly as class1.
The 1D_2D_3D model classifies 50 class1 and 30 class3 incorrectly as class1.
The 1D_2D_3D_4D model classifies 50 class1 and 27 class3 incorrectly as class1.

For CONCRETE, concM (the class is the middle range of strengths):
The 1D model classifies 50 class1 and 47 class3 incorrectly as class1.
The 1D_2D model classifies 50 class1 and 37 class3 incorrectly as class1.
The 1D_2D_3D_4D model classifies 50 class1 and 26 class3 incorrectly as class1.

4 FAUST MCL: Class1=C1={y1,y2,y3,y4}, Class2=C2={y7,y8,y9}, Class3=C3={yb,yc,yd,ye}
x∈Ck iff lok,D ≤ Dox ≤ hik,D ∀D. Class1 = C1 = {y1,y2,y3,y4}, Class2 = C2 = {y7,y8,y9}, Class3 = C3 = {yb,yc,yd,ye}.

[Figure: the Spaeth grid with the three classes and the test points, plus per-class tables of diagonal minimums (mn) and maximums (mx) for e1, e2 and the diagonal directions.]

Shortcuts for MCL? Pre-compute all diagonal minimums and maximums for e1, e2, e1+e2, e1-e2. Then there is in fact no pTree processing left to do, just straightforward number comparisons. Example lookups on the Spaeth grid: xf and ya are rejected as "none-of-the-above" on the basis of e1 alone; the point (9,a) is in class3 (red) only; another test point is in class2 (green) only.

On IRIS: 1D MCL Hversicolor has 7 Virginica! 1D_2D MCL Hversicolor has 3 Virginica. 1D_2D_3D MCL Hversicolor has 3 Virginica. 1D_2D_3D_4D MCL Hversicolor has 3 Virginica (24, 27, 28). 1D_2D_3D_4D MCL Hvirginica has 20 Versicolor errors!! Look at removing outliers (gapped ≥ 3) from Hullvirginica.

[Tables: count/gap (Ct, gp) listings for e1, e2, e3, e4 and the 2D diagonals 12, 13, 14, 24, 34; e4 and 13 show outlier gaps, 24 shows no outliers.]

After successive outlier removals, Hvirginica has 12, then 15, then 3 Versicolor errors; 1D MCL Hvirginica has only 16 Versicolors! One possibility would be to keep track of those points that are outliers to their class but are not in any other class hull, and put a sphere around each. Then any unclassified sample that doesn't fall in any class hull would be checked to see whether it falls in any of the class-outlier spheres.
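The hull test x∈Ck iff lok,D ≤ Dox ≤ hik,D ∀D reduces, once the per-class minimums and maximums are precomputed, to plain interval comparisons. A minimal sketch (NumPy in place of pTrees; the helper names are ours, and an empty result plays the role of "none of the classes"):

```python
import numpy as np

def build_hulls(classes, Ds):
    # Pre-compute lo_{k,D} = min CkoD and hi_{k,D} = max CkoD for every
    # class k and every direction D in the D-series.
    return [[((Ck @ D).min(), (Ck @ D).max()) for D in Ds] for Ck in classes]

def mcl_classify(x, hulls, Ds):
    # x is assigned class k iff lo <= Dox <= hi for every D.
    # Returns the list of matching classes; [] means "none of the classes".
    return [k for k, hull in enumerate(hulls)
            if all(lo <= x @ D <= hi for (lo, hi), D in zip(hull, Ds))]
```

Once build_hulls has run, classifying a sample costs only |Ds| dot products and comparisons per class, which matches the slide's observation that no pTree processing remains.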

5 FAUST Outlier Detector: when the goal is only to find outliers as quickly as possible
FOD-1 recursively uses a vector D = FurthestFromMedian-to-FurthestFromFurthestFromMedian. Mean = (8.53, 4.73), Median = (9, 3).

[Figure: the Spaeth grid with tables of d²(y1,x) for D = y1→y9, xoD = xo(14,2), d²(med,x), and the xoD2 and xoD3 values for y1..yf.]

xoD distribution down to 2⁵: intervals [0,32), [32,64), [64,96), [96,128), [128,160), [160,192), [192,224). Thinnings at [0,32) and [64,128), so we check y1, y5, yf. y5 and yf check out as outliers; y1 does not. Note that y6 does not either!

Let D2 be mean-to-median and go down to 2²: intervals [0,4), [4,8), [8,12), [12,16), [16,20), [20,24). Thinnings at [4,12) and [20,24). yf checks out as an outlier; y4 does not. Note that y6 does not either!

Let D3 be (Median-to-FurthestFromMedian)/6 and go down to 2²: intervals [0,4), [4,8), [8,12), [12,16), [16,20), [20,24). Thinning at [8,16). yf and y6 check out as outliers; yd does not. Is this D3 the best?

FOD-1 won't work for big data: finding outliers is local, and big data has many localities to search exhaustively. We may need to enclose each outlier in a gapped hull. Those gapped hulls will likely be filled in when projecting onto a randomly chosen line; i.e., barrel gapping suffers from a chicken-and-egg problem: first look for linear gaps and then for radial gaps out from them. Unless the line runs through the outlier, the radial gap is not likely to appear. FOD-1 also doesn't work well for interior outlier identification (which is the case for all the Spaeth outliers).

FOD-2 uses the FAUST CC Clusterer (CC = Count Change) to find outliers. CC removes big clusters, so that as it moves down the dendrogram the clusters get smaller and smaller. Thus outliers are more likely to reveal themselves as singletons (and doubletons?) gapped away from their complements.
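The projection step of FOD-1 amounts to an ordinary gap scan: project the points onto D, sort the xoD values, and report the wide gaps that isolate outlier candidates. A minimal sketch (the UDR interval bracketing is simplified here to a plain sort; names are ours):

```python
import numpy as np

def projection_gaps(X, D, min_gap):
    # Project the rows of X onto D and return (left, right) pairs where
    # consecutive sorted xoD values are at least min_gap apart; points
    # isolated by such gaps are outlier candidates to be checked.
    proj = np.sort(X @ D)
    d = np.diff(proj)
    return [(proj[i], proj[i + 1]) for i in np.where(d >= min_gap)[0]]
```

As the slide notes, a single D is not enough: interior outliers only reveal a gap when the line happens to run near them, which is why FOD-1 cycles through several D's (and why FOD-2 switches strategy altogether).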
With each dendrogram iteration we will attempt to identify outlier candidates and construct the SPTS of distances from each candidate (if the minimum of those distances exceeds a threshold, declare that candidate an outlier). E.g., look for outliers using projections onto the sequence of D's = e1, ..., en, then the diagonals e1+e2, e1-e2, .... We look for singleton (and doubleton? ...) sets gapped away from the other points. We start out looking for coordinate hulls (rectangles) that provide a gap around 1 (or 2? or 3?) points only. We can do this by intersecting "thinnings" in each DoX distribution. Note: if all we're interested in is anomalies, then we might ignore all PCCs that are not involved in thinnings; this would save lots of time! (A "thinning" is a PCD to below threshold s.t. the next PCC is a PCI to above threshold. The threshold should be the PCC threshold.)

[Figure: the Spaeth grid and the DensityCount/r² labeled dendrogram for FAUST on Spaeth with D = AvgMedian, DET = .3.]

So intersect the thinnings [1,1]1, [5,7]1 and [13,14]1 with [4,10]2.

6 Choosing a clustering from a Density and/or Density Uniformity labeled Dendrogram
The algorithm for choosing the optimal clustering from a labeled dendrogram is as follows. Let DET = DensityThreshold = .4 and DUT = DensityUniformityThreshold = ½. Since a full dendrogram is far bigger than the original table, we set threshold(s) and build only a partial dendrogram (ending a branch when the threshold(s) are met). A slider for density would then work as follows: the user sets the threshold(s) and we give the clustering. The user increases the threshold(s); we prune the dendrogram and give the new clustering. The user decreases the threshold(s); we build each branch down further until the new threshold(s) are exceeded and give the new clustering. We might also want to display the dendrogram to the user and let him select a "node = cluster" for further analysis, etc.

[Figure: a dendrogram with branches labeled A through G and per-node density/density-uniformity labels, e.g., DEL=.1 DUL=1/6; DEL=.2 DUL=1/8; DEL=.5 DUL=½; DEL=.3 DUL=½; DEL=.4 DUL=1.]

7 FAUST Clusterer Gap-based cut-points only. Spaeth Dataset
FAUST Clusterer, gap-based cut-points only, on the Spaeth dataset, with a DensityCount/r² labeled dendrogram. We make cuts only in the middle of gaps in the XoD SPTS, rather than at all Precipitous Count Changes (PCCs). Since the Spaeth dataset is very small, we get by with gaps only; for large datasets we would use PCC cuts. No DensityUniformity is used here. We create a labeled dendrogram using FAUST Cluster on Spaeth: UDR gives the distribution of XoD values, where D is the Average-to-FurthestFromAverage (AFFA) vector, and gap cuts are made at 7 and 11.

[Figure: the Spaeth grid with the D-line and the DensityThreshold=.3 cut-lines.]

Dendrogram at DensityThreshold=.3: Y (density .15) splits into {y1,y2,y3,y4,y5}(.37), {y6,yf}(.08) and {y7,y8,y9,ya,yb,yc,yd,ye}(.07); {y6,yf} splits into {y6}() and {yf}(); {y7,y8,y9,ya,yb,yc,yd,ye} splits into {y7,y8,y9,ya}(.39) and {yb,yc,yd,ye}(1.01).

On the next slide the DensityThreshold is increased to .5. In the user interface this would presumably be done with a slider. The DensityThreshold=.3 dendrogram above is the starting point; we build the leaves with density below the new threshold down further until they satisfy it.

One final note: for large datasets we don't apply UDR all the way down to singleton points, since the distribution object would then be bigger than the dataset itself. We build it down until the PCCs reveal themselves roughly, but we may then want to build those PCC branches all the way down so that we get a precise placement for our cut-point. There should be approximately two PCCs per cluster, and therefore the number of branches built all the way down will usually not be high.
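One level of the gap-only cut rule can be sketched as follows (a plain-array stand-in for the SPTS/UDR machinery; the AFFA direction and the cut-in-the-middle-of-the-gap rule come from the slide, while the helper name and the use of a unit-length D are our assumptions):

```python
import numpy as np

def affa_gap_split(X, min_gap):
    # One level of FAUST gap clustering: project onto the unit
    # Average-to-FurthestFromAverage (AFFA) vector and split the points
    # at every projection gap of width >= min_gap (cutting mid-gap).
    # Assumes X holds at least two distinct points.
    avg = X.mean(axis=0)
    far = X[np.argmax(((X - avg) ** 2).sum(axis=1))]
    D = (far - avg) / np.linalg.norm(far - avg)
    proj = X @ D
    order = np.argsort(proj)
    clusters, current = [], [order[0]]
    for a, b in zip(order, order[1:]):
        if proj[b] - proj[a] >= min_gap:
            clusters.append(current)
            current = []
        current.append(b)
    clusters.append(current)
    return [X[idx] for idx in clusters]
```

Applied recursively to each piece whose density stays below threshold, this produces the labeled dendrogram of the slide; a PCC-based variant would cut at precipitous count changes instead of only at empty gaps.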

8 FAUST Clusterer, gap-based cut-points only, applied to the Spaeth dataset. Increase the DensityThreshold from .3 to .5.

[Figure: the Spaeth grid; D = Average-to-FurthestFromAverage.]

DensityThreshold=.3: Y(.15) → {y1,y2,y3,y4,y5}(.37), {y6,yf}(.08), {y7,y8,y9,ya,yb,yc,yd,ye}(.07).
DensityThreshold=.5: {y1,y2,y3,y4}(.63), {y5}(), {y6}(), {yf}(), {y7,y8,y9,ya}(.39), {yb,yc,yd,ye}(1.01); then {y7,y8,y9,ya} → {y7,y8,y9}(1.27), {ya}().

9 FAUST Clusterer, gap-based cut-points only, applied to the Spaeth dataset. Increase the DensityThreshold from .5 to 1.

[Figure: the Spaeth grid; D = Average-to-FurthestFromAverage.]

DensityThreshold=.5: Y(.15) → {y1,y2,y3,y4,y5}(.37), {y6,yf}(.08), {y7,y8,y9,ya,yb,yc,yd,ye}(.07) → {y1,y2,y3,y4}(.63), {y5}(), {y6}(), {yf}(), {y7,y8,y9,ya}(.39), {yb,yc,yd,ye}(1.01).
DensityThreshold=1: {y7,y8,y9}(1.27), {ya}(), {y1,y2,y3}(2.54), {y4}().

10 FAUST Clusterer Gap-based cut-points only; applied to the Spaeth Dataset. DensityThreshold = .3
Comparing D = Average-to-FurthestFromAverage (AFFA) applied recursively against D = diagonals applied cyclically. On this slide we see that we can use a different sequence of D-lines and get a different dendrogram; however, notice that the leaf clusterings arrived at in the end are the same.

[Figure: two Spaeth grids with their labeled dendrograms.]

Left, the labeled dendrogram for FAUST Cluster on Spaeth with D = Furthest-to-Average, DensityThreshold=.3: Y(.15) → {y1,y2,y3,y4,y5}(.37), {y6,yf}(.08), {y7,y8,y9,ya,yb,yc,yd,ye}(.07); {y6,yf} → {y6}(), {yf}(); {y7,y8,y9,ya,yb,yc,yd,ye} → {y7,y8,y9,ya}(.39), {yb,yc,yd,ye}(1.01).

Right, the DCount/r² labeled dendrogram for FAUST Cluster on Spaeth with D cycling through the diagonals nnxx, nxxn, nnxx, nxxn, ..., DensThresh=.3: Y(.15) → {y1,y2,y3,y4,y5}(.37), {y6,y7,y8,y9,ya,yb,yc,yd,ye,yf}(.09); the latter → {y6,y7,y8,y9,ya}(.17), {yb,yc,yd,ye,yf}(.25); then {y6}(), {y7,y8,y9,ya}(.39), {yf}(), {yb,yc,yd,ye}(1.01).

11 UDR, the Univariate Distribution Revealer (on Spaeth)
Applied to S, a column of numbers in bit-slice format (an SPTS), UDR will produce the distribution tree of S, DT(S).

[Figure: the Spaeth column yofM with its bit slices p6..p0 and their complements p6'..p0', and the interval 1-counts of DT(S) at depths h = 0, 1, ...; e.g., node2,3 covers [96,128), and the depth-1 counts are 5/64 on [0,64) and 10/64 on [64,128).]

depthDT(S) ≡ b = BitWidth(S); h = the depth of a node; k = the node offset. Node(h,k) has a pointer to pTree{x∈S | F(x)∈[k·2^(b-h+1), (k+1)·2^(b-h+1))} and its 1-count.

Pre-compute and enter into the ToC all DT(Yk), plus those for selected linear functionals (e.g., d = the main diagonals, the ModeVector). Suggestion: in our pTree-base, every pTree (basic, mask, ...) should be referenced in ToC(pTree, pTreeLocationPointer, pTreeOneCount), and these OneCounts should be repeated everywhere (e.g., in every DT). The reason is that the OneCounts help us select the pertinent pTrees to access, and in fact they are often all we need to know about a pTree to get the answers we are after.
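The distribution tree DT(S) can be sketched as a recursive interval-halving count (plain Python counting in place of pTree 1-counts; the dictionary layout is our assumption):

```python
def udr(values, lo, hi, depth):
    # Univariate Distribution Revealer sketch: recursively halve the
    # value range [lo, hi) and record the count of values falling in
    # each sub-interval, mirroring the node(h, k) 1-counts of DT(S).
    count = sum(lo <= v < hi for v in values)
    node = {"interval": (lo, hi), "count": count, "children": []}
    if depth > 0 and count > 0 and hi - lo > 1:
        mid = (lo + hi) // 2
        node["children"] = [udr(values, lo, mid, depth - 1),
                            udr(values, mid, hi, depth - 1)]
    return node
```

Stopping the recursion once the counts reveal the PCCs, as the earlier slide suggests, keeps the tree far smaller than the column itself; in the pTree setting each node's count comes for free from the stored OneCount.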

12 APPENDIX: UDR: can we create distribution X1+2 etc. using only the X1 and X2 basic pTrees (concurrently with the creation of distribution X1 and distribution X2)?

[Example: two 2-bit columns X1 and X2 with bit slices P1,1, P1,0 and P2,1, P2,0, and their sum X1+2 with slices P1+2,2, P1+2,1, P1+2,0.]

Let D = D1,2 ≡ e1+e2. Then DoX = 2¹P1,1 + 2⁰P1,0 + 2¹P2,1 + 2⁰P2,0 = 2¹(P1,1+P2,1) + 2⁰(P1,0+P2,0), so we can make the two SPTS additions (the ones in parentheses), shift the first left by 1, and add it to the second. But can we go directly to the UDR construction? Since there is no carry into the zero bit, P1+2,0 = P1,0 XOR P2,0, and P1+2,1 = (P1,1 XOR P2,1) XOR (P1,0 AND P2,0), and so on up the carry chain. Is there a more efficient way to get the X1+X2 distribution by this route? Md? We don't need the SPTS for X1+X2, and it's expensive to create it just to get its distribution.

Md: This seems like it would give us a tremendous advantage over the "horizontal data-mining boys and girls", because even though they could concurrently create all the diagonal distributions X1, X2, X1+2, X1-2, ... in one pass down the table, we would be able to do it with concurrent programs that make one pass across the SPTSs for X1 and X2.
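The carry chain hinted at above is just ripple-carry addition done slice-wise: each sum slice is a XOR b XOR carry, and the next carry is majority(a, b, carry). A small sketch using Python integers as bit vectors, one int per slice with one bit per row (the helper names are ours):

```python
def bitslice_add(A, B):
    # Ripple-carry addition done entirely on vertical bit slices.
    # A[i] and B[i] are the i-th bit slices of two columns, packed as
    # Python ints (bit j of a slice = bit i of row j's value).
    out, carry = [], 0
    for a, b in zip(A, B):
        out.append(a ^ b ^ carry)                    # sum slice
        carry = (a & b) | (a & carry) | (b & carry)  # majority carry
    out.append(carry)                                # overflow slice
    return out

def to_slices(vals, width):
    # Pack a column of ints into vertical bit slices.
    return [sum(((v >> i) & 1) << j for j, v in enumerate(vals))
            for i in range(width)]

def from_slices(slices, n):
    # Unpack bit slices back into a column of ints (for checking only).
    return [sum(((s >> j) & 1) << i for i, s in enumerate(slices))
            for j in range(n)]
```

Each loop iteration is a constant number of AND/OR/XOR passes across whole slices, which is exactly the one-pass-across-the-SPTSs advantage claimed at the end of the slide.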

13 From Data Mining, Han, Kamber 2nd ed, pgs 24-27
Classifiers (supervised analytics) construct a model that describes and distinguishes data classes or concepts, for the purpose of using the model to [quickly] predict the class of objects whose class label is unknown. The derived model is based on the analysis of a set of training data (i.e., data objects whose class label is known, usually a table with a class-label column or attribute). Classification is often called "prediction" when the class labels are numbers. Classification may need to be preceded by relevance analysis to identify and eliminate attributes that do not seem to contribute much information as to class.

Clusterers (unsupervised analytics) analyze data objects without consulting a known class label (usually none are present). Clustering can be used to generate class labels in a training set that doesn't have them (i.e., to create a training set). Objects are clustered (grouped) by maximizing the intra-class similarity and minimizing the inter-class similarity. Clustering can facilitate taxonomy formation, i.e., organizing objects into a hierarchy of classes that group similar events together (using a dendrogram?).

Anomaly Detectors (outlier-detection analytics) identify objects that don't comply with the behavior of the data model. Most data-mining methods discard outliers as noise or exceptions. However, in some applications, such as fraud detection, the rare events can be the more interesting ones. Outliers may be detected using statistical tests that assume a distribution or probability model for the data, using distance measures, or using deviation-based methods that identify outliers by examining differences in the objects' main characteristics. "One person's noise may be another person's signal." So outlier mining is an analytic in its own right. Outlier mining can mean:
1. Given a set of n objects and k, find the top k objects in terms of dissimilarity from the rest of the objects.
2. Given a classification training set, identify objects within each class that are outliers within that class (they are correctly classified but noticeably dissimilar from the other objects in the class). We may find outliers for an entire set of objects (those that don't fit into any cluster or class) and then find objects within a cluster or class that are noticeably dissimilar to the other objects in that class; i.e., find set outliers, then class outliers.
3. Given a set of objects, determine "fuzzy" clusters, i.e., assign a degree of membership for each object in each cluster. In a way, a dendrogram does that.

There are statistics-based, distance-based, density-based and deviation-based outlier detectors (the deviation-based ones, using a dissimilarity measure, reduce the overall dissimilarity of the object set by removing "deviation outliers").

14 Mohammad's results 2014_02_15: experimental results for addition

Data size: 1, 2, 3 and 4 billion rows; number of columns: 2; bit widths of the values: 4, 8, 12 and 16 bits. Times are in milliseconds; "Horizontal" is the horizontal-processing baseline. (In the accompanying chart, the vertical axis is time in milliseconds and the horizontal axis is the number of bit positions.)

Size (billions)   P_4bit   P_8bit   P_12bit   P_16bit   Horizontal
1                    465      930      1390      1790         2990
2                    950     1910      2860      3590         5970
3                   1420     2850      4270      5360         8950
4                   1800     3560      5370      7140        12420

15 APPLYING FAUST GAP Clusterer TO SPAETH
DensityCount/r2 labeled dendogram for FAUST GAP Cluster on Spaeth with D=Avg-to-Furthest and DensityThreshold=.3 a bc def Y(.15) {y1,y2,y3,y4,y5}(.37) {y6,yf}(.08) {y7,y8,y9,ya,yb.yc.yd.ye}(.07) D=Avg-to-Furthest cut at 7 and 11 1 y1y y7 2 y3 y y8 3 y y y9 ya 5 6 7 yf yb a yc b yd ye c d e f a b c d e f {y1,y2,y3,y4}(.63) {y5}() {y6}() {yf}() {y7,y8,y9,ya}(.39) {yb,yc,yd,ye}(1.01) {y7,y8,y9}(1.27) {ya}() {y1,y2,y3}(2.54) {y4}() D=Avg-to-Furth DensityThresh=1 D=Avg-to-Furth DensityThresh=.5 D-line Labeled dendogram for FAUST Cluster on Spaeth with D=furthest-to-Avg, DensityThreshold=.3 1 y1y y7 2 y3 y y8 3 y y y9 ya 5 6 7 yf yb a yc b yd ye a b c d e f DCount/r2 labeled dendogram for FAUST Gap Cluster on Spaeth w D=cylces thru diagonals nnxx,nxxn,nnxx,nxxn..., DensThresh=.3 Y(.15) {y1,y2,y3,y4,y5}(.37) {y6,y7,y8,y9,ya,yb.yc.yd.ye,yf}(.09) Y(.15) {y6,y7,y8,y9,ya}(.17) {yb,yc,yd,ye,yf}(.25) y1,2,3,4,5(.37 {y6,yf}(.08) {y7,y8,y9,ya,yb.yc.yd.ye}(.07) {y7,y8,y9,ya}(.39) {y6}() {yf}() {yb,yc,yd,ye}(1.01) {y6}() {yf}() {y7,8,9,a}(.39) {yb,yc,yd,ye}(1.01)

