FAUST Oblique (our best alg?): P_R = P_{(X·d) < a} - the formula! One pass gives the entire predicted-class pTree.

Presentation transcript:

FAUST Oblique (our best alg?). P_R = P_{(X·d) < a} - the formula! One pass gives the entire predicted-class pTree. D ≡ m_R→m_V (the vector from the class-R mean to the class-V mean), d = D/|D|.

Separate class R from class V using the midpoint-of-means (mom) method: calculate a = (m_R + (m_V - m_R)/2)·d = ((m_R + m_V)/2)·d. (The same a results when d points the other way, e.g., D = m_V→m_R.)

Training ≡ choosing the "cut-hyper-plane" (CHP), which is always an (n-1)-dimensional hyperplane (which cuts the space in two). Classifying is one horizontal program (AND/OR) across pTrees to get a mask pTree for each entire class (bulk classification).

Improve accuracy? E.g., by considering the dispersion within classes when placing the CHP:
1. Use the vector of medians, vom, to represent each class rather than m_V: vom_V ≡ (median{v_1 | v∈V}, median{v_2 | v∈V}, ...).
2. mom_std and vom_std methods: project each class onto the d-line; then calculate the std of those projections (one horizontal formula per class, using Md's method); then use the std ratio to place the CHP (no longer at the midpoint between m_R [vom_R] and m_V [vom_V]).

[Slide figure: a dim1-dim2 scatter of class R points (r) and class V points (v) with m_R, m_V, vom_R, vom_V marked, the d-line through them, and the std of the projected distances measured along the d-line.]

Note that training (finding a and d) is a one-time process. If we don't have training pTrees, we can use horizontal data to get a and d (one time) and then apply the formula to the test data (as pTrees).
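Below is a minimal sketch of the midpoint-of-means training and bulk classification just described, with ordinary NumPy arrays standing in for the vertical pTree structures; the function and variable names are illustrative, not from the FAUST codebase.

```python
import numpy as np

def train_mom(class_R, class_V):
    """Midpoint-of-means training: return the unit direction d and cut value a."""
    m_R, m_V = class_R.mean(axis=0), class_V.mean(axis=0)
    D = m_V - m_R
    d = D / np.linalg.norm(D)            # d = D/|D|
    a = (m_R + m_V).dot(d) / 2.0         # a = ((m_R + m_V)/2) . d
    return d, a

def predict_R_mask(X, d, a):
    """One pass over the data: True where a sample is predicted to be class R."""
    return X.dot(d) < a                  # this mask plays the role of P_R = P_{(X.d) < a}

# toy usage
R = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.0]])
V = np.array([[6.0, 5.0], [7.0, 6.0], [6.5, 5.5]])
d, a = train_mom(R, V)
print(predict_R_mask(np.vstack([R, V]), d, a))   # [ True  True  True False False False]
```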

Mark S. said "Faust is fast... takes ~15 sec on the same dataset that takes over 9 hours with knn and 40 min with pTree knn. I’m ready to take on oblique, need better accuracy (still working on that with cut method ("best gap" method)." FAUST is this many times faster than, Horizontal KNN 2160 taking hours = minutes = 32,400 sec. pCKNN: 160 taking.670 hours = minutes = 2,400 sec. while Mdpt FAUST takes.004 hours =.25 minutes = 15 sec. "Doing experiments on faust to assess cutting off classification when gaps got too small (with an eye towards using knn or something from there). Results are pretty darn good… for faust this is still single gap, working on total gap (max of (min of prev and next gaps)) Here’s a new data sheet I’ve been working on focused on gov’t clients." Bill P: You might try tweaking BestClassAttributeGap-FAUST (BCAG FAUST) by using all gaps that meet a criteria (e.g., where the sum of the two stds from the two bounding classes add up to less than the gap width), Then just AND all of the mask pTrees. Also, Oblique FAUST is more accurate and faster as well. I will have Mohammad send what he has and please interact with him on quadratics - he will help you with the implementation. I wonder if in return we could get the datasets you are using for your performance analysis (with code of competitor algorithms etc.?) It would help us a lot in writing papers Mark S: I'm working on a number of benchmarks. Bill P: Maybe we can work together on Oblique FAUST performance analysis using your benchmarks. You'd be co-author. My students crunch numbers... Mark S: Vendor opp: Provides data mining solutions to telecom operators for call analysis, etc - using faust in an unsupervised mode - thots on that for anomaly detection. Bill P: FAUST should be great for that.

Multi-hop Data Mining (MDM): relationship1 (e.g., Buys = B(P,I)) ties table1 (e.g., People = P, an axis with descriptive feature columns) to table2 (e.g., Items). Table2 (I = Items) is tied by relationship2 (e.g., Friends = F(P,P)) to table3 (e.g., also P)... Can we do interesting clustering and/or classification on one of the tables, using the relationships to define "close" or to define the other notions?

[Slide figure: a People table with feature columns pc, bc, lc, cc, pe, age, ht, wt; the Friends relationship F(P,P); the Buys relationship B(P,I); and an Items table with columns category, color, size, wt, store, city, state, country.]

Define the NearestNeighbor VoterSet of {f} using strong R-rules with F in the consequent? A correlation is a relationship. A strong cluster based on several self-relationships (but different relationships, so it's not just strong implication both ways) is a set that strongly implies itself (or strongly implies itself after several hops, or when closing a loop).

P=People, I=Items, F(P,C)=Friends, B(C,I)=Buys. Find all strong rules A⇒C, A⊆P, C⊆I:
frequent iff ct(P_A) > minsup, and
confident iff ct(&_{p∈A} P_p AND &_{i∈C} P_i) / ct(&_{p∈A} P_p) > minconf.
This says: "a friend of all in A will buy C if all in A buy C" (AND throughout).
Closures: if A is frequent then A+ is frequent; if A⇒C is not confident, then A⇒C- is not confident.
ct(|_{p∈A} P_p AND &_{i∈C} P_i) / ct(|_{p∈A} P_p) > minconf: "a friend of any in A will buy C if any in A buy C."
ct(|_{p∈A} P_p AND |_{i∈C} P_i) / ct(|_{p∈A} P_p) > minconf: change to "a friend of any in A will buy something in C if any in A buy C."

Dear Amal, Yes, we have looked at the 2012 cup too and you are right that it would form a good testbed for social media data mining work. Ya Zhu in our Sat gp is leading on "contests" and is looking at the 2012 KDD Cup as well as the Heritage Provider Network Health Prize (see kaggle.com). I am hoping also for a nice test bed involving our Netflix datasets (which you and then Dr. Wettstein prepared as pTrees and all have worked on extensively - Matt Piehl and Tingda Lu particularly...). I am hoping to find (in the Netflix-contest-related literature) a real-life social network (a social relationship between two copies of the Netflix customers, such as, maybe, facebook friends) that we can use in conjunction with the Netflix "rates" relationship between Netflix customers and Netflix movies. We would be able to do something with that setup (all as PTreeSets both ways). For those who are new to our little "group", Dr. Amal Shehan Perera is a senior professor in Sri Lanka and was (definitely a lead) researcher in our group for many years. He is the architect of using GAs to win the KDD Cup in both 2002 and 2006. He gets most of the credit for those wins, as it was definitely GA work in both cases that pushed us over the top (I believe anyway). He's the best!! You would be wise to stay in touch with him.

Sat, Mar 24, Amal Shehan Perera: Just had a peek into the slides last week and saw a request for social media data. Just wanted to point out that the 2012 KDD Cup is on social media data. I have not had a chance to explore the data yet. If I do I will update you. Rgds, -amal
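A minimal sketch of the frequency/confidence test in the "all in A" form above, with the relationship columns modelled as NumPy boolean vectors. Reading P_p as the friends column of person p and P_i as the buyers column of item i is an assumption here, and all names and data are illustrative.

```python
import numpy as np

def rule_stats(F_cols, B_cols, A, C):
    """F_cols[p]: friends-of-p bit vector over People; B_cols[i]: buyers-of-i bit vector.
    Returns (support_count, confidence) of 'a friend of all in A buys every item in C'."""
    antecedent = np.logical_and.reduce([F_cols[p] for p in A])   # & over p in A
    consequent = np.logical_and.reduce([B_cols[i] for i in C])   # & over i in C
    denom = int(antecedent.sum())
    conf = (antecedent & consequent).sum() / denom if denom else 0.0
    return denom, conf

# toy data: 5 people, friends columns for persons 0 and 1, buyers columns for items 'a','b'
F = {0: np.array([1, 1, 0, 1, 0], bool), 1: np.array([1, 1, 1, 1, 0], bool)}
B = {"a": np.array([1, 0, 0, 1, 1], bool), "b": np.array([1, 1, 0, 1, 0], bool)}
print(rule_stats(F, B, A=[0, 1], C=["a", "b"]))   # (3, 0.666...)
```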

Bioinformatics Data Mining: Most bioinformatics done so far is not really data mining but is more toward the database-querying side (e.g., a BLAST search). What would real Bioinformatics Data Mining (BDM) be? A radical approach: view the whole Human Genome as 4 binary relationships between People (P, ~7B rows) and base-pair positions (bpp, billions of columns, ordered by chromosome first, then gene region?): AHG(P,bpp), THG(P,bpp), GHG(P,bpp), CHG(P,bpp).
AHG is the relationship between People and adenine (A) (1/0 for yes/no).
THG is the relationship between People and thymine (T) (1/0 for yes/no).
GHG is the relationship between People and guanine (G) (1/0 for yes/no).
CHG is the relationship between People and cytosine (C) (1/0 for yes/no).
How to order bpp? By chromosome and by gene or region (level-2 is chromosome, level-1 is gene within chromosome). Do it to facilitate cross-organism bioinformatics data mining? This is a comprehensive view of the human genome (plus other genomes). Create both a People PTreeSet and a bpp PTreeSet vertical human-genome DB, with a human health-records feature table associated with the People entity. Then use that as a training set for both classification and multi-hop ARM. A challenge would be to use some comprehensive decomposition (ordering of bpps) so that cross-species genomic data mining would be facilitated. On the other hand, if we have separate PTreeSets for each chromosome (or even each region - gene, intron, exon...) then we may be able to data-mine horizontally across all of these vertical pTree databases.

[Slide figure: the People feature columns (pc, bc, lc, cc, pe, age, ht, wt) alongside AHG(P,bpp); the person features shown in red are used to define classes, and the AHG pTrees are used for data mining.]

We can look for similarity (near neighbors) in a particular chromosome, a particular gene sequence, overall, or anything else.
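As a toy illustration of the "genome as four binary relationships" idea, the sketch below turns each person's sequence into one row of the AHG/THG/GHG/CHG bit matrices; the sequences and names are made up.

```python
import numpy as np

def encode_person(seq):
    """Return base -> 0/1 vector over base-pair positions for one person's sequence."""
    arr = np.frombuffer(seq.encode(), dtype="S1")
    return {base: (arr == base.encode()).astype(np.uint8) for base in "ATGC"}

people = ["ACGTAC", "ACGTTC"]                    # two toy 'people'
AHG = np.vstack([encode_person(s)["A"] for s in people])
print(AHG)   # People x bpp, 1 wherever the base at that position is adenine
```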

A facebook Member, m, purchases Item, x, and tells all friends. Let's make everyone a friend of him/her self. Each friend responds back with the Items, y, she/he bought and liked.

Facebook-Buys: [Slide figure: Members, F ≡ Friends(M,M), P ≡ Purchase(M,I), I ≡ Items.]

For X ⊆ I, MX ≡ &_{x∈X} P_x = the people that purchased everything in X, and FX ≡ OR_{m∈MX} F_m = the friends of an MX person. So, for X={x}: is "Mx purchases x" strong? Mx = OR_{m∈Px} F_m. x is frequent if Mx is large. This is a tractable calculation: take one x at a time and do the OR. x is confident if ct(Mx & P_x) / ct(Mx) > minconf. Example: K_2 = {1,2,4}, P_2 = {2,4}, ct(K_2) = 3, ct(K_2 & P_2)/ct(K_2) = 2/3.

To mine X, start with X={x}. If it is not confident then no superset is. Closure: X={x,y} for x and y forming confident rules themselves... ct(OR_{m∈Px} F_m & P_x) / ct(OR_{m∈Px} F_m) > minconf.

Second setup: Kx = OR_{g ∈ OR_{b∈Px} F_b} O_g; x is frequent if Kx is large (tractable - one x at a time and OR). [Slide figure: Kiddos, F ≡ Friends(K,B), Buddies, P ≡ Purchase(B,I), I ≡ Items, Groupies, Others(G,K).] Example: K_2 = {1,2,3,4}, P_2 = {2,4}, ct(K_2) = 4, ct(K_2 & P_2)/ct(K_2) = 2/4.

A facebook buddy, b, purchases x and tells friends; each friend tells all friends. Strong purchase possibility? Intersect rather than union (AND rather than OR). Ad to friends of friends. [Slide figure: Kiddos, F ≡ Friends(K,B), Buddies, P ≡ Purchase(B,I), I ≡ Items, Groupies, Compatriots(G,K).] Example: K_2 = {2,4}, P_2 = {2,4}, ct(K_2) = 2, ct(K_2 & P_2)/ct(K_2) = 2/2.
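A small sketch of the one-item-at-a-time Mx computation and confidence test above, with the pTrees modelled as boolean vectors; the data and names are illustrative.

```python
import numpy as np

def item_strength(x, P_cols, F_cols, minconf=0.5):
    """P_cols[x]: buyers-of-x bit vector over Members; F_cols[m]: friends of member m.
    Mx = OR of the friend vectors of everyone who bought x."""
    buyers = np.flatnonzero(P_cols[x])
    if buyers.size == 0:
        return 0.0, False
    Mx = np.logical_or.reduce([F_cols[m] for m in buyers])
    conf = (Mx & P_cols[x].astype(bool)).sum() / Mx.sum()
    return conf, conf > minconf

# toy data: 4 members; member m's friend vector includes m itself, as the slide assumes
F = {0: np.array([1, 1, 0, 0], bool), 1: np.array([1, 1, 1, 0], bool),
     2: np.array([0, 1, 1, 0], bool), 3: np.array([0, 0, 1, 1], bool)}
P = {"x": np.array([0, 1, 0, 1])}                # members 1 and 3 bought x
print(item_strength("x", P, F))                  # (0.5, False) with minconf=0.5
```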

Multi-level pTrees for data tables: given an n-row table, a row predicate (e.g., a bit-slice predicate, or a category map) and a row ordering (e.g., ascending on the key; for spatial data: column/row raster, Z=Peano, or Hilbert), the sequence of predicate truth bits (1/0) is the raw, or level-0, predicate map (pMap) for that table, predicate and row order. Decompose a raw pMap, pM, into mutually exclusive, collectively exhaustive bit intervals; given a bit-interval predicate, bip (e.g., pure1, pure0, gte50%One), the bip stride=m level-1 pMap of pM is the string of bip truth values generated by applying bip to the consecutive intervals of the decomposition. For an equiwidth decomposition, the interval sequence is fully determined by the width m>1, AKA stride=m.

[Slide example: an IRIS table (Name, SL, SW, PL, PW, Color) with 15 rows - 5 setosa, 5 versicolor, 5 virginica - and its pMaps; the individual bit values are not recoverable from this transcript.]
pM_{SL,1}: predicate remainder(SL/2)=1, order: the given table order.
pM_{Color=red}: predicate Color='red', order: the given order.
pM_{SL,0}: predicate rem(div(SL/2)/2)=1, order: the given order.
pM_{PW<7}: predicate PW<7, order: the given order; its gte50% stride=5 pMap predicts setosa.
For pM_{SL,1} and pM_{C=red}: the gte50%, pure1, gte25% and gte75% stride=5 level-1 pMaps.
pM together with all of its level-1 pMaps is the pTree of the same name as pM.
A level-2 pMap is a level-1 pMap built on a level-1 pMap (a one-column table): e.g., pM_{gte50%,s=4,SL,0} ≡ the gte50% stride=4 pMap of pM_{SL,0}, and then the level-2 gte50% stride=2 pMap of pM_{gte50%,s=4,SL,0}.
gte50% with strides 4, 8, 16 on SL,0 gives the pTree pT_{gte50%_s=4,8,16_SL,0}: the raw level-0 pMap, the level-1 gte50 stride=4 pMap, and the level-1 gte50 stride=2 pMap built on it (the gte50_pTrees).
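A minimal sketch of building a level-1 pMap from a raw (level-0) pMap with an equiwidth stride and a gte50%-ones interval predicate; the helper name and toy bits are illustrative.

```python
import numpy as np

def level1_pmap(level0_bits, stride, bip=lambda chunk: chunk.mean() >= 0.5):
    """level0_bits: 1-D 0/1 array. Returns one bip truth bit per stride-wide interval."""
    n = len(level0_bits) - len(level0_bits) % stride     # ignore a ragged tail, if any
    chunks = level0_bits[:n].reshape(-1, stride)
    return np.array([int(bip(c)) for c in chunks], dtype=np.uint8)

raw = np.array([1,0,1,1,0, 1,1,1,0,1, 0,0,1,0,0], dtype=np.uint8)   # level-0 pMap, 15 rows
print(level1_pmap(raw, stride=5))                                    # gte50% stride=5 -> [1 1 0]
print(level1_pmap(raw, stride=5, bip=lambda c: c.all()))             # pure1  stride=5 -> [0 0 0]
```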

gte50 Satlog-Landsat, stride=64; classes: redsoil, cotton, greysoil, dampgreysoil, stubble, verydampgreysoil. [Slide tables: per-class R, G, ir1, ir2 statistics for stride=64 and stride=320; the numeric values are not recoverable from this transcript.] With gte50 Satlog-Landsat stride=320, note that the means are way off and will produce inaccurate classification.

A level-0 pVector is a bit string with 1 bit per record. A level-1 pVector is a bit string with 1 bit per record stride, equal to the predicate truth applied to that record stride. A level-N pTree = level-K pVectors (K=0...N-1), all with the same predicate and such that each level-K stride is contained within one level-(K-1) stride.

[Slide table: the 320-bit strides with their start/end rows and classes, and the per-class means and stds of R, G, ir1, ir2; values not recoverable.]

The table is pixels x (R, G, ir1, ir2, class), i.e., one column per wavelength-band interval [w1,w2), [w2,w3), [w3,w4), [w4,w5) plus the class column. It generates the [labeled-by-value] relationships: a pixels-WLs relationship, and any such relationship (being a matrix) in turn generates 2 dual tables, pixels(WLs) and WLs(pixels).

FAUST Satlog evaluation. [Slide tables: per-class means and stds of R, G, ir1, ir2, and the true-positive / false-positive counts per class (1's, 2's, 3's, 4's, 5's, 7's) for each experiment; the numeric values are not recoverable from this transcript.]

Experiments compared:
NonOblique level-0 (pure).
NonOblique level-1 gt50.
Oblique level-0 using the midpoint of means.
Oblique level-0 using means and stds of the projections (without class elimination).
Oblique level-0 using means and stds of the projections, with class elimination in order (note that none occurs).
Oblique level-0 using means and stds of the projections, doubling pstd_r: no elimination.
Oblique level-0, doubling pstd_r, classify, then eliminate in the order 2,3,4,5,7,1.
Oblique level-0, doubling pstd_r, classify, then eliminate in the order 3,4,7,5,1,2.

The std-weighted cut point (doubling pstd_r) is
a = pm_r + (pm_v - pm_r) * 2*pstd_r / (pstd_v + 2*pstd_r) = (pm_r*pstd_v + pm_v*2*pstd_r) / (pstd_v + 2*pstd_r).
With 2s_1 the number of FPs is reduced and the TPs are somewhat reduced. Better? Parameterize the 2 to maximize TPs and minimize FPs. What is the best parameter?
The ratios above = (std + std_up)/gap_up and below = (std + std_dn)/gap_dn suggest the elimination order. [Slide table: average above/below ratios per band (red, green, ir1, ir2) and class, and the s1/(2s1+s2) column; values not recoverable.]

Summary rows (total TP, actual TP, and FP for): nonOblique level-0 pure; nonOblique level-1 50%; Oblique level-0 MeansMidPoint; Oblique s1/(s1+s2); Oblique 2s1/(2s1+s2) with no elimination and with the various elimination orders; and BandClass rule mining.

BandClass rule mining (below): G[0,46]→2, G[47,64]→5, G[65,81]→7, G[81,94]→4, G[94,255]→{1,3}; R[0,48]→{1,2}, R[49,62]→{1,5}, R[82,255]→3; ir1[0,88]→{5,7}; ir2[0,52]→5.

Conclusion? MeansMidPoint and Oblique std1/(std1+std2) are best, with the Oblique version slightly better. I wonder how these two methods would work on Netflix? Two ways: UTbl(User, M_1,...,M_17770) → (u,m), with umTrainingTbl = SubUTbl(Support(m), Support(u), m); and MTbl(Movie, U_1,...,U_480189) → (m,u), with muTrainingTbl = SubMTbl(Support(u), Support(m), u).
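A one-function sketch of that std-weighted cut-point, with the doubling factor exposed as the parameter the slide suggests tuning (the name w is illustrative).

```python
def cut_point(pm_r, pstd_r, pm_v, pstd_v, w=2.0):
    """a = pm_r + (pm_v - pm_r) * w*pstd_r / (pstd_v + w*pstd_r).
    w=1 gives the plain std-ratio placement s_r/(s_r + s_v); w=2 doubles pstd_r."""
    return pm_r + (pm_v - pm_r) * (w * pstd_r) / (pstd_v + w * pstd_r)

print(cut_point(10.0, 2.0, 20.0, 2.0))          # equal stds, w=2 -> 16.67, shifted toward pm_v
print(cut_point(10.0, 2.0, 20.0, 2.0, w=1.0))   # w=1 -> 15.0, the midpoint when stds are equal
```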

Mark Silverman, Feb 29: Speed-wise, knn on oakes (using 50% as the training set and classifying the other 50%) using RapidMiner: over 9 hrs; vertical knn: 40 min (resisting attempts to optimize). Curious to see FAUST. Accuracy is pretty similar (for the knns). Very excited about MYRRH and classification problems - seems hugely innovative... I know who would be interested in twitter bloom analysis. Tweaking Greg's FAUST impl to generalize it and look at gap split (currently it looks for the max gap, not the max gap on both sides of the mean - should it?).

WP: Looks like 50%-ones impure pTrees can give cut-hyperplanes (for FAUST) as good as raw pTrees. What's the advantage? Since FAUST training is a one-time process, it isn't speed critical. Very fast impure-pTree batch classification (after training) would be very exciting: once the cut-hyperplanes are identified, e.g., an FPGA spits out 50%-ones impure pTrees for incoming unclassified datasets (e.g., satellite images) and sends them through (FPGA) for Md's "One-Pass-Across-Columns = OPAC" batch classification - all happening on-the-fly with nearly zero delay... For PINE (nearest neighbor), we don't even train a model, so the 50%-ones impure pTree classification phase could be very significantly better.

Business Intelligence = "What does this customer want next, based on histories?": FAUST is model-based (training phase = build a model of 1 hyperplane for Oblique, or up to 1 per column for non-Oblique). Use the model to classify. In Bus-Intel, with every new unclassified sample a different vector space appears (every customer rates a different set of items). So to use FAUST-PINE, there's a non-vector-space problem to solve. Non-Oblique FAUST is better than Oblique here, since the columns have different cardinalities (not a vector space in which to calculate oblique hyperplanes). In general, what we're attempting is to marry MYRRH multi-hop Relationship (or Rule) Mining with FAUST-PINE Classification (or Table Mining).

On Social Network Mining: we have some social network mining research threads percolating: 1. facebook-friends multi-hopped with buying-preference relationships (or multi-hopped with security-threat relationships, or with ?); 2. implications of twitter blooms for event prediction (e.g., commodity/stock changes, events, political trends, bubbles/bursts, purchasing patterns...). I would like to tie image classification with social networks somehow too ;-)

WP, 3/1/12: Note on "...very excited about the discussions on MYRRH and applying it to classification problems, seems hugely innovative..." I want to try to view images as relationships, rather than as tables: each row = a pixel and each column is "the photon count in a frequency band". Any table = relationship (AKA a matrix, a rolodex card) with 2 entity axes: 1. the usual row entity (e.g., pixels); 2. the column entity (e.g., wavelength interval). Any matrix is a dual pair of tables (via rotation): the Cust-Item rating matrix is the rating-table pair Custs(Items) and its rotated dual, Items(Custs). When there are sufficiently many fine-band, hyper-spectral sensors in the air (plus on/in the ground), there will be a sufficient number of separate columns to do MYRRH on the relationship between pixels and wavelengths, multi-hopped with the relationship between classes and pixels. (...Nearly every measurement is a summarization or an intervalization - even a pixel is a 2-D intervalization of an infinite set of points in space - so viewing wavelength as an intervalization of a continuous phenomenon is just as valid, right?)

What if we do FAUST-PINE on the rotated image relationship, Wavelength(pixel_photon_count), instead of Pixel(Wavelength_photon_count)? Note that classes which are not convex in Pix(WL) (that are spread out spatially all over the image) might be convex in WL(Pix)? Tried prelims - disappointing for classification (tried applying the concept on SatLog-Landsat(R,G,ir1,ir2,class); too few bands or classes?). Still, I'm hoping for a "Wow! Look at this!" when, e.g., classes aren't known/clear and there are thousands of them and millions of bands... e.g., 2 huge square-ish relationships to multi-hop. Difficult (curse of dimensionality = too many columns - which are the relevant ones?); rule mining comes into its own. One last thought regarding "the curse of dimensionality = too many columns - which are the relevant ones?": FAUST automatically filters irrelevant columns to find those that reveal [convex] classes (all good classes are convex in a proper feature space). E.g., Class=yellow_car may be round-ish in Pix(RedWaveLen, GreenWaveLen, BlueWaveLen, OtherWaveLens) once R, G, B are isolated as the relevant ones. Class=pavement is fragmented in Pix(RWL,GWL,BWL,OWLs) but may be convex in WL(pix_x, pix_y) (because pavement is color-consistent?).

Last point: We have to get you a FAUST implementation! It almost has to be orders of magnitude faster than pknn! The speedup should be very sublinear - almost constant (nearly independent of cardinality) - because it is a bulk classifier (one horizontal pass gains us a class_mask_pTree, distinguishing all points predicted to be in that class). So, not only is it model-based, but it is a batch classifier. Model-based classifiers that require scanning horizontal datasets cannot compete!

Mark, 3/2/12: Very close on FAUST.

WP: It's important that the classification step be done in bulk, lest you lose the main huge benefit of FAUST. What happens at the end if you've peeled off all the classes and there are still some unclassified points left? Have a "mixed"/"default" class (e.g., SatLog class=6="mixed"). Potential interest from some folks who have a close relationship with Arbitron. Seems like a Netflix story to me...

Mar 06: Yes, pTrees for med informatics, Bill! We could work so many miracles... The data we can generate requires robust informatics; comp. bio. would put resources into this. - Keith Murphy, Chair Genetics/Biochem, Dir, Clemson U Genomics.

WP, March 06: I forgot to point out in the slides that we have applied pTrees to Bioinformatics successfully too: took second in the 2002 ACM KDD-Cup in bioinformatics and took first in the 2006 ACM KDD-Cup in medical informatics. Association of Computing Machinery (ACM) Knowledge Discovery and Data Mining (KDD) Cup, Task 2, Yeast Gene Regulation Prediction.

Netflix data: {m_k}, k = 1..17,770. Each movie file m_k(u,r,d) has columns uID, rating, date (entries r_{m_k,u}, d_{m_k,u}), averaging 5655 users per movie. The main table Main(m,u,r,d) has columns mID, uID, rating, date (entries r_{m,u}, d_{m,u}), averaging 209 movies per user.

[Slide figures: UserTable(uID, m_1,...,m_17770) with its UPTreeSet, 3x17770 bit-slices wide, and the rotated dual MTbl(mID, u_1,...,u_480189) with its MPTreeSet, 3x480189 bit-slices wide; for a (u,m) to be predicted, form umTrainingTbl = SubUTbl(Support(m), Support(u), m).]

Of course, the two supports won't be tight together like that, but they are drawn that way for clarity. There are lots of 0s in the vector space umTrainingTbl; we want the largest subtable without zeros. How? SubUTbl(∩_{n∈Sup(u)∪{m}} Sup(n), Sup(u), m)?

Using coordinate-wise FAUST (not Oblique): in each coordinate n ∈ Sup(u), divide up all users v ∈ Sup(n) ∩ Sup(m) into their rating classes, by rating(m,v); then 1. calculate the class means and stds and sort the means, 2. calculate the gaps, 3. choose the best gap and define the cut-point using the stds. This of course may be slow. How can we speed it up?

Dually, coordinate FAUST in each coordinate v ∈ Sup(m): divide up all movies n ∈ Sup(v) ∩ Sup(u) into rating classes; then 1. calculate the class means and stds and sort the means, 2. calculate the gaps, 3. choose the best gap and define the cut-point using the stds.

Gaps alone are not best (especially since the sum of the gaps is no more than 4 and there are 4 gaps). Weighting (correlation(m,n)-based) is useful (the higher the correlation, the more significant the gap??). The cut-points are constructed for just this one prediction, rating(u,m). Does it make sense to find all of them? Should we just find, e.g., which n-class-mean(s) rating(u,n) is closest to and make those the votes?
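An illustrative sketch of the closest-class-mean voting idea in the last sentence above, for one coordinate n in Sup(u): group the co-raters of n and m by their rating of m, and vote for the rating class whose mean rating-of-n is closest to rating(u,n). The dict-of-ratings layout and all names are assumptions, not the group's actual data structures.

```python
from collections import defaultdict

def vote_for_coordinate(ratings, u, m, n):
    """ratings: dict (user, movie) -> rating. Returns the rating class of m voted for
    by coordinate n, or None if n has no co-raters with m."""
    by_class = defaultdict(list)                       # rating(m, v) -> [rating(n, v)]
    for (v, movie), r in ratings.items():
        if movie == n and (v, m) in ratings:
            by_class[ratings[(v, m)]].append(r)
    if not by_class:
        return None
    means = {cls: sum(vals) / len(vals) for cls, vals in by_class.items()}
    target = ratings[(u, n)]
    return min(means, key=lambda cls: abs(means[cls] - target))   # closest class mean wins
```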