Visualization of TV Space
[Figure: the graph of TV_X over the feature space (axes X, Y; TV on the vertical axis), marking TV_X(μ) = TV_X(x33) and TV_X(x15).]
Proof that the graph of TV_X is a steep hyper-parabola centered at the mean, μ = (Σ_{x∈X} x)/|X|
Let f(c) = TV_X(c) = Σ_{x∈X} (x−c)∘(x−c) = Σ_{x∈X} Σ_{i=1..n} (x_i − c_i)² = Σ_{x∈X} Σ_{i=1..n} x_i² − 2 Σ_{x∈X} Σ_{i=1..n} x_i·c_i + Σ_{x∈X} Σ_{i=1..n} c_i². This is clearly parabolic in each dimension, c_i (fixing all other dimensions): ∂f/∂c_k = Σ_{x∈X} −2(x_k − c_k) = 0 iff Σ_{x∈X} x_k = Σ_{x∈X} c_k = |X|·c_k iff c_k = (Σ_{x∈X} x_k)/|X| = μ_k. We can say more about the shape of the hyper-parabolic graph of f. Since ∂f/∂c_k = Σ_{x∈X} −2(x_k − c_k) = −2 Σ_{x∈X} x_k + 2 Σ_{x∈X} c_k = 2|X|·c_k − 2|X|·μ_k = 2|X|(c_k − μ_k), we see that on each dimensional slice the parabola has the same shape. The parabola in the xy-plane centered at (x₀, y₀) has the form y − y₀ = a(x − x₀)², with a = f(x₀+1) − f(x₀) and y′ = 2a(x − x₀); so in our case a = |X|, a very large number (a steep parabola), and x₀ = μ_k. Since the steepness of the graph is governed by |X|, if one wants, roughly, an ε-radius contour (hyper-circular) centered at a, one needs to take the pre-image of the ε|X|-interval about TV_X(a): f⁻¹( TV_X(a) − ε|X|, TV_X(a) + ε|X| ).
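A minimal numerical check of the proof above, assuming NumPy; X is a hypothetical 1000-point, 3-D dataset and tv is an illustrative name. The slice along dimension k should rise by exactly |X|·t² when c_k moves t away from μ_k.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # |X| = 1000 points in n = 3 dimensions
mu = X.mean(axis=0)

def tv(c):
    # TV_X(c) = sum over x in X of (x - c).(x - c)
    d = X - c
    return np.sum(d * d)

k, t = 0, 2.5
c = mu.copy()
c[k] += t
# f(mu + t*e_k) - f(mu) should equal |X| * t^2 exactly (up to rounding).
print(tv(c) - tv(mu), len(X) * t**2)   # both ~6250.0
```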
Proof that the graph of IP_X is a steep hyper-plane ⊥ to μ = (Σ_{x∈X} x)/|X|
Inner Product functional: IP_X(c) = Σ_{x∈X} x∘c = Σ_{x∈X} Σ_{i=1..n} x_i·c_i = Σ_{i=1..n} c_i Σ_{x∈X} x_i = Σ_{i=1..n} c_i·|X|·μ_i = |X| Σ_{i=1..n} c_i·μ_i = |X| c∘μ, so IP_X(c) = |X| |μ| |c| cos θ, where θ is the angle between c and μ. We can use any of these equivalent formulas, depending upon which one sheds the most light on the issue we are concerned with. The polar form, IP_X(c) = |X| |μ| |c| cos θ, tells us that the graph is extremely steep vertically (a slight change in the length of c causes a tremendous change in IP_X(c)), that contour(IP_X, a, r) about a point a is a linear slice perpendicular to μ, and how to choose the interval radius on the IP_X axis so that the contour has radius r. The form IP_X(c) = |X| c∘μ guides efficient preprocessing. The steepness of the graph is evident from ∂f/∂c_k = |X|·μ_k, i.e., the gradient ∇f = |X|·μ. [Figure: the IP_X contour through a, perpendicular to μ, in the (x1, x2) plane.]
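A minimal numerical check of the collapse IP_X(c) = |X|·(c∘μ), assuming NumPy and a hypothetical random X as before.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
mu = X.mean(axis=0)
c = rng.normal(size=3)

ip = np.sum(X @ c)                     # sum over x in X of x . c
print(ip, len(X) * np.dot(c, mu))      # equal up to float rounding
# Gradient check: d IP_X / d c_k = |X| * mu_k for every k.
```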
Proof that the graph of X_a is a 45° hyper-plane, nearly ⊥ to μ = (Σ_{x∈X} x)/|X|
For a ∈ Domain(A₁)×…×Domain(Aₙ), the projection onto a, X_a(x) = x∘a = Σ_{i=1..n} x_i·a_i, is a functional whose graph is a hyperplane at a 45° angle with a. Contour(X_a, X, b, r) is a linear (n−1)-dimensional hyper-bar through b, perpendicular to a. X_i(x) = x_i is just X_{e_i}, which also has a planar graph and linear hyper-slice ((n−1)-dimensional) contours perpendicular to its coordinate basis vector, e_i. X_a is just as easily calculated as TV (easier!), but which ones? All of them? That's impractical! One could, though, preprocess each X_i. To classify all s in S, we could first cluster S based on some notion of closeness (isotropic clusters), then take the cluster means as representatives of the entire clusters and classify those cluster means individually (giving the same class assignment to all other points in each cluster, addressing the curse of cardinality of S); or we can classify each s in S individually. In either case we then classify s in S as follows (sketched in code below): for an unclassified sample (a cluster mean or just any sample) s, find a set of ε-contours (from TV_X, IP_X, and the X_i's) that reduce the candidate near-neighbor set to a manageable size; select the candidate near neighbors that are Euclidean-close enough (or the closest k of them); let those selected near neighbors vote with Gaussian Radial Basis Function (RBF) weighted votes. Done!
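A minimal sketch of that three-step recipe, assuming NumPy; every name here (contour_mask, classify, eps, radius, sigma) is illustrative rather than from the source, labels is assumed to be a NumPy array, and only the cheap coordinate contours X_i are used for pruning.

```python
import numpy as np

def contour_mask(fvals, fa, eps):
    # epsilon-contour of one functional: |f(x) - f(a)| < eps
    return np.abs(fvals - fa) < eps

def classify(a, X, labels, eps=2.0, radius=1.0, sigma=0.5):
    # 1. Reduce the candidate near-neighbor set with coordinate contours X_i.
    cand = np.ones(len(X), dtype=bool)
    for i in range(X.shape[1]):
        cand &= contour_mask(X[:, i], a[i], eps)
    # 2. Keep only candidates that are Euclidean-close enough.
    d2 = np.sum((X[cand] - a) ** 2, axis=1)
    close = d2 < radius ** 2
    # 3. Gaussian-RBF-weighted vote among the surviving near neighbors.
    votes = {}
    for lab, w in zip(labels[cand][close], np.exp(-d2[close] / (2 * sigma**2))):
        votes[lab] = votes.get(lab, 0.0) + w
    return max(votes, key=votes.get) if votes else None
```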
e.g., with 2 contours, use the X_{a−μ}-contour
To prune halos with a small number of contours (fewer than the n+1 contours {TV; X_i, i=1..n}), e.g., with 2 contours, use the X_{a−μ}-contour, or use just a few X_i-contours corresponding to the largest coordinates of a−μ. [Figure: three cases around a: no halo and not too large; no halo but very large; halos but small; bounded by the isobars TV_X(a)±ε, IP_X(a)±ε, X_1(a±ε), X_2(a)±ε, and X_{a−μ}(a)±ε.]
Contours of TV_X, IP_X, X_a, X_{a−μ}
[Figures in the (X, Y) plane about μ: Contour_{TV_X}(a, r) and Contour_{IP_X}(c, r); Contour_{X_x}(a, r) and Contour_{X_y}(a, r); Contour_{X_{a−μ}}(a, r) and Contour_{X_b}(a, r).] Note: Contour_{TV_X}(a, r) = Contour(TV, X, a, r), etc.
How about higher dimensions?
[Figure: two (n−1)-D hypersurfaces (isobars) bounding Contour_{X_a}(a, r); the outside surface of the (n−1)-D surface Contour_{TV_X}(a, r); and Contour_{X_y}(a, r) outside the surface of Contour_{TV_X}(a, r).]
Type-0 P-trees
Example: dimension = 1, fanout = 2^dimension = 2, depth = 5, NumberOfPotentialLeaves = NOPL = fanout^depth = 32. The upper inode levels are not productive. Instead, indicate which leaves are either mixed or pure-1 in a Leaf Existence Array (LA) with a Purity Field (PF), or in a Leaf Existence Map (LM) and Purity Map (PM). Only the mixed leaves get stored. Type-0 means pure-0 leaves are omitted and pure-1 leaves are tagged in the Purity Field. [Figure: a 32-leaf example with its Leaf Existence Array (positions 31..0), Purity Field with leaf length, Leaf Existence Map (size = NOPL), and Purity Map.] Leaves are bit vectors, any or all of which could be compressed the same way (i.e., this 2-level structure can be nested to more levels). The LM is an "existential smoothing" of the P-tree (it tells us precisely which leaves contain at least one 1-bit). If we nest enough levels, the LMs give us multiple smoothing levels. Needless to say, I prefer the LM/PM approach, though the LA/PF may be clearer.
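A minimal sketch of the Type-0 bookkeeping in pure Python; build_type0 and the tiny 12-bit example are illustrative, not from the source. It splits a bit vector into fixed-size leaves, records existence and purity, and stores only the mixed leaves.

```python
def build_type0(bits, leaf_size):
    lm, pm, mixed = [], [], []
    for i in range(0, len(bits), leaf_size):
        leaf = bits[i:i + leaf_size]
        ones = sum(leaf)
        lm.append(1 if ones > 0 else 0)           # leaf exists (not pure-0)
        pm.append(1 if ones == len(leaf) else 0)  # leaf is pure-1
        if 0 < ones < len(leaf):
            mixed.append(leaf)                    # only mixed leaves stored
    return lm, pm, mixed

lm, pm, mixed = build_type0([1,1,1,1, 0,0,0,0, 1,0,1,1], leaf_size=4)
print(lm, pm, len(mixed))   # [1, 0, 1] [1, 0, 0] 1
```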
Type-1 P-trees
The Type-0 P-tree above can be expressed as Type-1, using either a Leaf Existence Array (LA) with Purity Field (PF) or a Leaf Existence Map (LM) and a Purity Map (PM). Type-1 means pure-1 leaves are omitted and pure-0 leaves are tagged in the Purity Field. [Figure: the same 32-leaf example as a Type-1 P-tree, with its LA (positions 31..0), PF, LM (size = NOPL = 32), and PM.]
How should P-trees be stored?
[Figure: a cube of P-trees, each row holding a type bit, its impure leaves with leaf length, and per-leaf Leaf Maps and Purity Maps.] The tempting way to store these structures is to cluster by P-tree (horizontally, across the rows of the cube above). But since those leaves almost never get ANDed with one another (except in 1-time preprocessing), it is better to cluster by Leaf Offset, or LM position (vertically, down the cube), since those are precisely the bit vectors that get ANDed together. If there is good compression (not too many mixed leaves per P-tree), then storing each Leaf Offset (a vertical slice of the cube) on a page (or extent) would mean that only that page needs to be brought in when an actual AND is called for (and prefetching is straightforward). The collection of LMs and PMs could be stored separately on one extent, since they are processed separately, before the leaves (or processed as smoothings).
P-tree operation: COMPLEMENT
To COMPLEMENT a P-tree: flip the Type Bit and complement the leaves (that's all!). If the structure is nested, complement a leaf by flipping its Type Bit and complementing its leaves (the leaves of the leaves), and so on. [Figure: a P-tree with leaves 111…, 101, 100, 000 and its complement with leaves 000…, 010, 011, 111.]
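A sketch of COMPLEMENT over the (type_bit, lm, pm, mixed) structure from the previous sketch, assuming a flat (non-nested) tree; the dual reading of the maps under the flipped type bit is what makes "flip and complement the leaves" sufficient.

```python
def complement(ptree):
    type_bit, lm, pm, mixed = ptree
    # Flipping the type bit turns omitted pure-0 leaves into omitted pure-1
    # leaves (their correct complements), and purity-tagged leaves swap
    # pure-1 <-> pure-0; only stored mixed leaves need bit-complementing.
    return (1 - type_bit,
            lm, pm,
            [[1 - b for b in leaf] for leaf in mixed])
```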
P-tree operation: AND
Assume mixed leaves are clustered by Leaf Offset (vertically, down the cube), and the collection of LMs (and PMs) is stored separately on one additional extent.
1. AND all 0-LMs → A.
2. Scan left-to-right across A for the next 1-bit; if that position is 1 in any 1-PM, then GOTO 2; else fetch & AND the non-pure leaves → B; GOTO 2.
3. A forms the LM of the result and the Bs are the non-pure leaves.
E.g., for p1 ∧ p3 ∧ p6: at position 1, PM3(1) = 0, so fetch & AND the leaves; the result P-tree consists of its 0-Leaf Map, 0-Pure Map, and impure leaves, with root-count = 1.
Even better: ∧{0LM} ∧{1PM′} → 0LM (the result is always type-0). Fetch & AND the leaves corresponding to 1-bits in the 0LM, then set the Purity Map. In assembly, is there a combined AND-and-COUNT operation, to count 1-bits as they are produced?
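A pure-Python sketch of the AND loop for type-0 operands only (the general mixed-type case follows the 1-PM rule above); ptree_and and its tuple layout (lm, pm, leaves-by-offset) are illustrative names, and the root count is accumulated as 1-bits are produced.

```python
def ptree_and(trees, leaf_size):
    # trees: list of (lm, pm, leaves) where leaves[offset] -> bit list
    n = len(trees[0][0])
    out_lm, out_leaves, root_count = [], {}, 0
    for off in range(n):
        if all(t[0][off] for t in trees):            # step 1: AND the 0-LMs
            impure = [t for t in trees if not t[1][off]]
            leaf = [1] * leaf_size                   # pure-1 unless ANDed down
            for t in impure:                         # fetch & AND impure leaves
                leaf = [a & b for a, b in zip(leaf, t[2][off])]
            ones = sum(leaf)
            out_lm.append(1 if ones else 0)
            root_count += ones                       # count 1-bits as produced
            if ones:
                out_leaves[off] = leaf
        else:
            out_lm.append(0)                         # some operand is pure-0 here
    return out_lm, out_leaves, root_count
```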
Leaf Maps (red are type-1)
[Figure: the Leaf Maps and Purity Maps of the example P-trees P_{1,3}..P_{1,0} and P_{2,3}..P_{2,0} (red ones are type-1), laid out by Leaf Offset lo = 0..6, with LeafSize = 8 and NOPL = 7.]
Example: P_{1,3} ∧ P_{1,2} ∧ P_{1,1} ∧ P_{1,0}. Compute ∧{0LM} ∧{1PM′} → 0LM, then fetch & AND the leaves at lo = 0,1,2,3,4,5,6. Root counts per offset: lo0: rc=2; lo1: rc=2; lo2: rc=2; lo3: rc=3; lo4: rc=1; lo5: rc=1; lo6: rc=1. Total rc = 12. [Figure: the Leaf Maps (red = type-1) and Purity Maps used in the computation.]
Example: P_{2,2} ∧ P_{2,1}. Compute ∧{0LM} ∧{1PM′} → 0LM, then fetch & AND the leaves at the 1-bit positions of the 0LM (i.e., lo = 2,3,4,5,6) for P_{2,2} and P_{2,1} (those that exist). Per-offset root counts include rc = 4, 7, 0, 4 over lo = 2..6. Total rc = 22. [Figure: the Leaf Maps (red = type-1) and Purity Maps used in the computation.]
Example: P_{2,2} ∧ P_{2,1} ∧ P_{1,3} ∧ P_{1,2}. Compute ∧{0LM} ∧{1PM′} → 0LM, then fetch & AND the leaves at the 1-bit positions of the 0LM (i.e., lo3) for P_{2,2}, P_{2,1}, P_{1,3}, P_{1,2} (those that exist). Root count: rc = 4. [Figure: the Leaf Maps (red = type-1) and Purity Maps used in the computation.]
Vertical Data Assistant (VDA) is a Windows (or Windows CE) application that can data mine massive datasets efficiently. (A separate application can be built to convert and store the data properly.) I am changing my thinking a bit on DataMIME and the whole idea of competing in the "big iron" community with a data mining system. There is no way to win there (too many players with too much money: Google, Microsoft, all the bioinformatics and drug development companies...). It occurred to me, as I was revising Masum's slides, that other than the size of the datasets, our methods and modules are getting surprisingly simple and compact. Developing DataMIME on big hardware may be a mistake if no one uses it! Maybe it is wrong-headed? Maybe the killer app is a completely portable, tiny client system (on a desktop/laptop/PDA) that can do scalable data mining for "anyone, anytime, anywhere", provided their data is captured in a consistent, universal format. If this works, it could eventually be scaled up to supercomputers and as a Grid App; but first, let's do a tiny version (tiny in terms of system requirements, but not in terms of DM power). DataSURG software development efforts (e.g., SMILEY, DataMIME) weren't too successful. The one software system that has been used is TM-Lab, a small app on small PCs requiring data in a simple format (BSQ). I now favor a Windows and Windows CE Utility Suite approach. Each utility does one thing and can be invoked in a GUI drag-and-drop mode (e.g., an AND operation pulls the specified P-trees from the specified folder and returns either just the root count, or drops the resulting "derived P-tree" back in the folder and returns its name). Each platform (PDA to supercomputer) will get an appropriate subset of the suite.
Body Area Network (BAN) and Network Is The Computer (NITC) technologies may eventually converge to produce the next "sea change" technology. Imagine nano-computers, massive storage (e.g., 4 GB thumb drives), wireless networks, and applications such as real-time health monitors (blood_enzymes/body_temperature/blood_sugar/coronary_flow...); PDA apps (schedulers/reminders/auto_thought_recorders, DNA/FingerPrint/FaceGeo/IrisScan real-time name_recall/restraining_order_enforcement/homeland_security); and environmental sensors (eyes_in_back_of_head/sense_nearby_explosives/virus_sensing...). All that data will have to be mined for exceptions. Exception mining (i.e., classification into 2 classes, exceptional_situation or normal_situation) is what DataSURG does better than anyone in the world (add "ego monitoring" to VDA apps ;-). Our approach can be implemented on tiny processing platforms. Of course the datasets are large, so they must be broken up, compressed, and stored in an extremely simple and universal (no variations!) format: so simple that Windows CE can handle it and Joe Public can understand it. After all, simplicity and consistency made the Relational Model successful!
μ_i = (Σ_{x∈X} x_i)/|X| = (1/57) Σ_x Σ_k 2^k x_{i,k} = (1/57) Σ_k 2^k Σ_x x_{i,k}
[Figure: the 57-point example set plotted on a 4-bit grid (coordinates 0000 through 1111).]
μ_i = (Σ_{x∈X} x_i)/|X| = (1/57) Σ_x Σ_k 2^k x_{i,k} = (1/57) Σ_k 2^k Σ_x x_{i,k} = (1/57) Σ_k 2^k rc(P_{i,k}).
For i = 1: (1/57)(7·2³ + 24·2² + 35·2¹ + 23·2⁰) = 4.3. For i = 2: (1/57)(22·2³ + 35·2² + 35·2¹ + 33·2⁰) = 7.35.
If one created a bit map for each {ring(f, Y, k)}_{k=1..K}, for each Y = {x}, one would have all reverse-kNN information and much more! The ring(f, Y, k)s give up distance information, so {ring(f, Y, (kr, (k+1)r])}_{k=0..R} holds more info, but makes it more difficult to compute reverse-kNN sets (though more useful sets can easily be calculated to do a better job of whatever the reverse-kNN sets were to be used for anyway).
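The root-count arithmetic above as a small Python check; mean_from_rootcounts is an illustrative name, and root counts are listed high bit first.

```python
def mean_from_rootcounts(rcs, n_points):
    # rcs[j] = root count of the bit-slice P-tree P_{i,k}, high bit first;
    # mu_i = (1/|X|) * sum_k 2^k * rc(P_{i,k})
    width = len(rcs)
    return sum(rc * 2 ** (width - 1 - j) for j, rc in enumerate(rcs)) / n_points

print(mean_from_rootcounts([7, 24, 35, 23], 57))   # ~4.30  (i = 1)
print(mean_from_rootcounts([22, 35, 35, 33], 57))  # ~7.35  (i = 2)
```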
Rings: r_{a,0} = skin(a, 1)
[Figure: the example point set on the 4-bit grid, with doubling rings r_{x1,1,k} drawn around x1.]
r_{a,0} = skin(a, 1); r_{a,k} = ring(a, 2^{k−1}, 2^k). The rings ring(a, d·2^{k−1}, d·2^k), k = 0, ..., tell a great story about neighbors (here, d = 1). We note that the rings ring(a, d·(k−1), d·k) tell an even greater story, but they may have a higher cost. For a functional f, r_{f,a,k} = ring(f, f⁻¹(a), 2^{k−1}, 2^k).
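A sketch of the doubling-ring masks, assuming NumPy; ring_masks is an illustrative name, skin(a, d) is read as the points within distance d of a, and ring(a, lo, hi) as the points x with lo < d(x, a) <= hi.

```python
import numpy as np

def ring_masks(X, a, k_max, d=1.0):
    dist = np.sqrt(np.sum((X - a) ** 2, axis=1))
    masks = [dist <= d]                           # r_{a,0} = skin(a, d)
    for k in range(1, k_max + 1):                 # r_{a,k} = ring(a, d*2^(k-1), d*2^k)
        masks.append((dist > d * 2 ** (k - 1)) & (dist <= d * 2 ** k))
    return masks                                  # counts: [m.sum() for m in masks]
```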
[Slides 22–29: figures tabulating the ring counts r_{x,k} = |ring(x, 2^{k−1}, 2^k)|, k = 0, 1, 2, for selected points of the example set (S, U; T, V, v; q, w, s; d, h, j; l, n; I, G, E; C, x; and others), each shown against the 4-bit grid.]
TV-contours bounded by isobar gaps
[Figure: the example set with its TV_X values; the points sort into TV-contours bounded by isobar gaps of at least 57·(radial distance from μ = (7.4, 4.3)), with gap boundaries at 57·1, 57·2, ..., 57·15.]
We note taking a = (1, 0) and a = (0, 2)
With μ = (7.4, 4.3), we note that taking a = (1, 0) and a = (0, 2), the 4 resulting TV-X_a-contours nearly partition a thick ring. Thickening the X_a-contours even more gives better coverage without increasing the neighborhood much. Does this hold true in higher dimensions? Do we need to consider other diagonals, e.g., (0, 0, .., i, 0, .., j, 0, .., 0), etc.? [Figure: the thick TV-ring about μ cut into four pieces by the two X_a-contours.]
FAML Vector Space Classification using sorting (à la SMART-TV)
Given T = R(A₁,…,Aₙ,C) and X = R[A₁,…,Aₙ]:
- Create and store the n 2-column tables X_i(x, f_i(x)), sorted on f_i(x); equivalently (à la 4th normal form), decompose R(A₁..Aₙ, C, A_{f1}..A_{fm}) into T, X₁,..., X_m sorted on f_i(x); equivalently, create secondary indexes X_i for each derived attribute A_{fi} on R(A₁..Aₙ, C, A_{f1}..A_{fm}). Store 3-column tables X_i(x, f_i(x), x.C) so votes are handy (coding classes so bitwidth = log₂|C|)?
- For an unclassified sample a, calculate {f_i(a)} and retrieve the {cskin(f_i(a), r)} [or cskin(f_i(a), k)] (see the retrieval sketch after this slide).
- Form the contour(f_i, cskin(f_i(a), r)) sets and intersect them to get a candidate near-neighbor set.
- For every candidate x in that set, if d(x, a) < threshold, tally the RBF-weighted vote of x.
Sorting (creating indexes) is expensive; even though it is a 1-time activity that can be amortized over all classifications, it may be too expensive for very large data sets. (SMART-TV is essentially "creating indexes on derived attributes".)
FAML Vector Space Classification using P-trees (includes PINE): to address the curse of cardinality (mostly sorting), we use P-tree technology on the derived attributes:
- Create basic P-trees for each derived attribute, A_{fi}.
- Using EIN technology, create a P-tree mask for contour(f_i, cskin(a, r)), and AND the masks.
- For every x in the resulting mask, if d(x, a) < threshold, tally the RBF-weighted vote of x.
Next we examine some specific functionals (dually, derived attributes). First, some notation: a functional f: X→R where Dom(R) = {0,1} (binary) is called a predicate. The derived attribute A_f will have bitwidth = 1 and is called a derived map. When starting with a functional f: X→R, the dual derived attribute will be denoted A_f. When starting with a derived attribute A, the dual functional will be denoted f_A: X→R.
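A sketch of the sorted-index retrieval step, assuming Python's bisect and reading cskin(f_i(a), k) as the count-based skin: the k stored tuples whose f_i-value is closest to f_i(a). build_index and cskin are illustrative names.

```python
import bisect

def build_index(xs, f):
    # the 2-column table (f(x), x), sorted on f(x)
    return sorted(((f(x), x) for x in xs), key=lambda t: t[0])

def cskin(index, fa, k):
    # walk outward from the insertion point of fa, collecting the k
    # stored tuples with the nearest f-values
    keys = [fv for fv, _ in index]
    pos = bisect.bisect_left(keys, fa)
    lo, hi, out = pos - 1, pos, []
    while len(out) < k and (lo >= 0 or hi < len(keys)):
        if hi >= len(keys) or (lo >= 0 and fa - keys[lo] <= keys[hi] - fa):
            out.append(index[lo][1]); lo -= 1
        else:
            out.append(index[hi][1]); hi += 1
    return out
```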
FAML Classification
Given R(A₁..Aₙ, C) = Training Space and X = R[A₁..Aₙ] = Feature Space (since R and X have the same key, we'll use X.C), and functionals {f_i: dom(A₁)×…×dom(Aₙ) → Reals} (e.g., TV; IP; projection onto a, X_a), define:
- Contour(f_i, X, a, r) = {x∈X : |f_i(x) − f_i(a)| < r} (polar).
- Isobar(f_i, X, a, r) = {x∈X : |f_i(x) − f_i(a)| = r}.
- ContourNbrhd(X, a, r) = ∩_i Contour(f_i, X, a, r).
Basic contour classification algorithm: ∀x ∈ ContourNbrhd, if dist(a, x) < ε, then tally x.C's vote as a [weighted] vote for a.C (a mask-based sketch follows).
Coordinate projection X_i(x) = x_i is just X_{e_i} (the X_i's define L∞-neighborhoods in EIN-PINE technology). IP_X(a) = Σ_{x∈X} x∘a = Σ_{x∈X} X_a(x) is TAPP, with Contour_{IP_X}(a, r) a linear hyper-bar ⊥ to μ. [Figure: Contour(TV, X, a, r) and Contour(X_a, X, b, r) in the (X, Y) plane, with the TV_X graph and the interval (TV_X(a) − r, TV_X(a) + r) above.]
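The definitions above as boolean masks, assuming NumPy; the closed forms used for TV_X and IP_X follow from the earlier slides (TV_X(c) = Σ|x|² − 2|X| c∘μ + |X||c|², IP_X(c) = |X| c∘μ), and all names are illustrative.

```python
import numpy as np

def make_functionals(X):
    # Closed forms from the earlier proofs: precompute mu, |X|, and sum|x|^2.
    mu, n, sq = X.mean(axis=0), len(X), np.sum(X * X)
    def tv(P):
        return sq - 2 * n * (P @ mu) + n * np.sum(P * P, axis=1)   # TV_X row-wise
    def ip(P):
        return n * (P @ mu)                                        # IP_X row-wise
    return tv, ip

def contour(f, X, a, r):
    # Contour(f, X, a, r) = { x in X : |f(x) - f(a)| < r }
    return np.abs(f(X) - f(a[None, :])) < r

def contour_nbrhd(fs, X, a, rs):
    # ContourNbrhd(X, a, r) = intersection over i of Contour(f_i, X, a, r_i)
    mask = np.ones(len(X), dtype=bool)
    for f, r in zip(fs, rs):
        mask &= contour(f, X, a, r)
    return mask
```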
FAML Classification (early version)
∀x∈X and ∀f, compute f(X). Store Table(f_i), consisting of (x, f_i(x))-tuples sorted on f_i (or basic P-trees of the derived attributes?). For an unclassified sample a:
- Calculate {f_i(a)} for a subset of the i's (no calculation is required for X_i(a) = a_i).
- Retrieve the portion of Table(f_i) at f_i(a), NBR(f_i(a)). (Count- or radius-based? A different k/r for each f_i?)
- Construct the contours f⁻¹(NBR(f_i(a))). Prune halos by intersecting other contours until |Contour(X, f_i, a, r)| < threshold.
- Prune to the VoterSet by checking d(a, x); for each x close enough, cast x's vote using a weighted (Gaussian Radial Basis) vote.
(Note that the last part could be done in bulk on the set of all unclassified samples, S, by clustering S by distance, then applying it to each component.)
FAML Clustering? One pervasive use of clustering is class identification or class generation, that is, identifying sets of highly similar objects that might form the classes of a training set, against which subsequent samples can be classified. These generated classes can be isotropic or density based (in either case subsequent classification can be correctly based on near neighbors). In fact, if class generation is done exclusively to create a training set for subsequent classification, then isotropic clustering will always suffice; that is, the round pre-clusters do not have to be joined into arbitrarily shaped clusters, since the near-neighbor set will determine the class assignment anyway (independent of the other round pre-clusters that might or might not get connected up to that one).
Assume a [large] data set of historical tuples that we want to use to classify subsequent tuples we receive from, say, a data stream. First, generate classes from the historical data set through isotropic clustering (from a recent window of the stream, or the whole stream so far). Then use that "classed" dataset as a training set for near-neighbor classification. We can partition into core, core-neighbor, border, and noise points (à la Rana's method) using functional near-neighbor sets (intersecting all, or just some, of the n+2 functional contours) instead of ε-neighborhoods. Then the border points can be attached to the best (closest / most overlapping) core cluster. Then classify according to overlap with these clusters???
POLAR Neighborhoods
Constructing contours f⁻¹(NN) to prune halos on NN until |NN| < threshold, revisited:
- First cluster the unclassified sample set, S.
- Identify dense angular contours in S−μ: AContour(a−μ, θ).
- Identify dense radial contours in S−μ: RContour(a−μ, r) (a TV_X-contour(a, r)).
- Intersect them and pick out the dense Polar Neighborhood, PN(a−μ, r, θ) (no halos).
- For dense PNs in S−μ, roughly classify the entire PN (pruned by Euclidean distance to be isotropic) by classifying one representative! Then classify the rest of S one point at a time.
The Polar (r, θ)-Neighborhood at a, PN(a, r, θ) = AContour(a, θ) ∩ RContour(a, r), is an interesting gridding for grid-based clustering. Can we easily construct this partition (see the sketch below)? AContour(a, θ) = {x∈X : ∠(x, a) < θ} = {x∈X : (x/|x|)∘(a/|a|) > cos θ} = {x∈X : x∘a > |x||a| cos θ}. AContour(a−μ, θ) = {x∈X : ∠(x−μ, a−μ) < θ} = {x∈X : ((x−μ)/|x−μ|)∘((a−μ)/|a−μ|) > cos θ}.
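A sketch of PN(a−μ, r, θ) as the intersection of an angular and a radial mask, assuming NumPy; polar_neighborhood is an illustrative name and the small constant guards against division by zero.

```python
import numpy as np

def polar_neighborhood(X, a, mu, r_lo, r_hi, theta):
    V = X - mu                                    # shift the origin to mu
    v = a - mu
    norms = np.linalg.norm(V, axis=1)
    # Angular contour: angle(x - mu, a - mu) < theta, i.e. cosine > cos(theta).
    cosang = (V @ v) / (norms * np.linalg.norm(v) + 1e-12)
    acont = cosang > np.cos(theta)
    # Radial contour: r_lo <= |x - mu| < r_hi (a TV_X-contour, up to scaling).
    rcont = (norms >= r_lo) & (norms < r_hi)
    return acont & rcont
```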
Non-uniform Polar Griddings
For a ∈ Dom(A₁)×…×Dom(Aₙ), the projection onto a is X_a(x) = x∘a = Σ_{i=1..n} x_i·a_i. Contour(X_a, X, b, r) is a linear (n−1)-dimensional hyper-bar through b, perpendicular to a. Contour(TV, X, a, r) ∩ Contour(X_a, X, a, θ) is an approximation of PN(a, r, θ), and X_a(x) values are easier to calculate than TV_X(x).
Consider a very non-uniform gridding (for finding dense cores in clustering, or for finding near-neighbor sets):
- Determine the densest rings (about μ).
- Within dense rings, determine directions a such that the neighborhoods are dense, and extract them.
- Treat the remainder of X one point at a time.
Non-uniform Parallel Griddings: for each dimension e_i, partition into r-slices (actually, we should use j-low griddings?). Starting on one side, determine the count in each r-slice by determining the count in the first one (one "<" inequality); P-tree mask it (for j-low griddings this is already determined). Do the same for the next one and then AND off the first, and so on (sketched below). As in the previous slide, determine the dense cells (j-low cells) and treat the remaining points as one large (but sparse) partition. Each partition is P-tree masked.
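A sketch of the parallel r-slice gridding, assuming NumPy: each slice mask costs one "<" inequality plus ANDing off the previous mask, mirroring the P-tree-mask description; r_slice_masks is an illustrative name.

```python
import numpy as np

def r_slice_masks(X, dim, r, n_slices):
    below_prev = np.zeros(len(X), dtype=bool)
    slices = []
    for j in range(1, n_slices + 1):
        below = X[:, dim] < j * r           # one inequality per slice boundary
        slices.append(below & ~below_prev)  # AND off everything already sliced
        below_prev = below
    return slices                           # counts: [m.sum() for m in slices]
```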
APPENDIX (slides that may be unnecessary)
No halos! But what is the functional X_{a−μ}? It is clearly projection onto the a−μ direction: f_{a−μ}(x) = x∘(a−μ)/|a−μ|. Is this preprocessing compliant (Taufik Abidin preprocessing compliant)? I think (from the chapter in the book) we can conclude that it is quickly computed. X_a = X ∪ {g(x,a) : x∈X}, where g(x,a) = x + 2[((x−a)∘a/|a|)·a/|a| − (x−a)], has mean(X_a) = a, so the IP_{X_a}-contours are ⊥ a. [Figure: the construction of g(x,a) from x, x−a, and the projection (x−a)∘a/|a|, together with Contour_{TV_X}(a, r) and Contour_{X_{a−μ}}(a−μ, r).]