At the recently concluded OpenWorld conference, Oracle's Larry Ellison announced a new in-memory option for the company's latest 12c database platform. This much-awaited in-memory option from Oracle comes almost three years after the launch of HANA, a similar product from competitor SAP. An in-memory platform uses a computer's main memory instead of its disk drive, processing queries faster by avoiding slow disk I/O. Such a platform improves query processing rates many-fold and reduces the load on the CPU. The new in-memory option for Oracle databases is expected to see strong demand from Oracle's customers. Data is stored in both row and column formats in the database, with the row format serving faster transactional operations and the column format serving faster analytical operations. Oracle expects the new platform to increase query rates by 100 times and improve transaction processing rates by three times. [1] In the three years since the launch of HANA, SAP has seen tremendous growth, with more than 2,500 customers shifting to the hybrid cloud-based platform. [2] With roughly 100,000 customers for its database business alone, Oracle's decision to offer an in-memory option for its new cloud-based database looks promising. This offering could provide some resistance to SAP HANA, and we therefore expect demand for the cloud-based in-memory platform to be robust.

Check out our complete analysis for Oracle

Cloud Computing Bubble Signals Transition From On-Premise Software

Streamlined processes and increased demand for productivity from businesses have brought various cloud options into the software market. Cloud-based services provide faster provisioning, on-demand access and agile resource scheduling by using countless virtualized servers in the cloud to cope with increased processing requirements. A survey by Oracle shows 65% of respondents would shift from an on-premise database system to a Database-as-a-Service (DBaaS) system because DBaaS deployments are quicker.
[3] Although continuously growing data requirements will keep driving growth in the database market, cloud-based databases are on the rise. The adoption of cloud-based services by businesses is growing at a rapid pace, as the DBaaS model costs less than on-premise software. An increase in virtualized offerings from cloud players and weakness in IT spending have resulted in slow growth in on-premise database software. Currently, the cloud-based DBaaS market is estimated to be worth $150 million; however, it is estimated to grow at an annualized rate of 86%, to reach $1.8 billion. In comparison, the market for on-premise deployments is expected to grow at 33% annually. [4] With new products like the 12c database, coupled with its efforts to make inroads into the cloud market, we expect the company to leverage this incredible growth opportunity in cloud services. The latest in-memory offering for the company's first cloud-ready database offers seamless transitioning from older databases, with no data-migration challenges for customers.
We believe this could be the start of a transition from on-premise database management services to DBaaS for the company, and we expect this business to be a big growth opportunity for market leader Oracle.

Notes:
[1] Oracle Announces Enhanced In-Memory Applications with New Oracle Database In-Memory Option, Oracle PressRoom, September 2013
[2] Seven More Questions for SAP's Co-CEO Bill McDermott, AllThingsD, January 2013
[3] Delivering Database as a Service (DBaaS) using Oracle Enterprise Manager 12c, oracle.com, March 2013
[4] DBaaS poised to drive next-generation database growth, 451Research, August 2013

From another source: SAP's HANA has one columnar store for both transactions and analytics. Oracle's approach keeps two redundant copies of the data, one in a row store for transactions and another in a column store for analytics. Other players in the in-memory market: ScaleOut Software announced hServer V2, which brings real-time analytics to Map/Reduce in Hadoop, and Hazelcast released its in-memory datastore at JavaOne.
From: Arjun Roy
Sent: Tuesday, November 19, :47 PM
To: Perrizo, William
Subject: Barrel Clustering

Is it right that in high dimensions, considering d in any direction would lead to no linear gaps before proceeding to radial gaps?

Yes, but maybe we should say "is highly likely to lead to few linear gaps" rather than "would lead to no linear gaps." Remote outliers should be isolated by a gap on one extreme of the projection line for some d. It may be unlikely that we'd see any "interior" gaps on the projection line, where "interior" means other than gaps at one end of the projection line separating out a singleton or doubleton outlier from the rest of the set.

I would like to test 2 things:
a) Reducing the dimension (Han's book talks about information gain on specific attributes): just use the attributes which give the highest information gains and then apply Functional Gap Analysis. I am just wondering if reduced dimensions will preserve the essence of the actual data.

Yes, that is a good step. In general terms it is called attribute selection (selecting the "important" attributes and throwing out the "unimportant"). Important means those that retain the essence of the information we are trying to data mine out. Information gain is one way to filter out the "important" attributes.

b) Reducing dimensions to, say, 10%, e.g., considering 5 attributes out of 50. So it's going to be a vector rather than a functional value. I'm not sure how to tackle this situation.

Is this something different from attribute selection (selecting the most important 5 out of 50)? I have found sequential FAUST (~2011) gave good results, comparable to and even better than the functional gap approach.

There was FAUST Unsupervised, in which we did classification, but I don't recall a method which we called FAUST which did clustering by means other than dot product gap analysis. Possibly the sequential version you are thinking of was when we used unit vectors dk such that d1 = e1 = (1,0,...,0), d2 = e2 = (0,1,0,...,0), etc.?
The nice part about that is that the dot product values take no computation, since they are given as the original columns of numbers already. So in an n-dimensional space, starting with d1, d2, ..., dn makes good sense and often finishes the job. But if it doesn't, then we go to d's that are linear combinations of the dk's (i.e., diagonal and midpoint-to-diagonal d's, etc.). Note that the d's we called the midline d's are exactly the dimensional dk = ek cases. So you have reminded me that they are certainly the first ones we should consider, because the dot product step can be skipped. Thanks for forcing me to think about this.

First, my analysis last Saturday was not only wrong but also incomplete. There are many more d's to be considered in a comprehensive set than I listed. In 2D I had them all, but even in 3D I left out the 4 d's that run from a side-midpoint to an adjacent edge midpoint. I will try to get it right for next Saturday ;-)

So starting by considering gaps in the columns themselves (dk = ek, k = 1..n) should be the first step. Even for large datasets, at least one of these might find an "interior" gap too. Once one interior gap is found, the spaces considered in the next round are half the size, so there is twice the potential for a gap in the next round, etc. I think Mark is finding that mostly even large datasets reveal gaps under some d.

How to approach barrel? The thing to remember is that there are really two functionals:
1. The linear dot product with a d: (y-p) o d
2. The spherical: (y-p) o (y-p)
Barrel is then just [(y-p) o (y-p)] - [(y-p) o d]^2
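The two functionals above, and the barrel functional built from them, can be sketched in a few lines of numpy. This is a minimal illustration, not code from the exchange; the function name is mine and d is assumed to be a unit vector.

```python
import numpy as np

def barrel_functionals(Y, p, d):
    """Compute the two FAUST functionals for every row y of Y:
    linear    L(y) = (y - p) . d          (projection onto the d-line through p)
    spherical S(y) = (y - p) . (y - p)    (squared distance from p)
    barrel    B(y) = S(y) - L(y)**2       (squared radial distance from the d-line)
    d is assumed to be a unit vector."""
    diff = Y - p
    L = diff @ d
    S = np.einsum('ij,ij->i', diff, diff)   # row-wise squared norms
    B = S - L**2
    return L, S, B
```

Note that when d = ek and p = 0, L is just the k-th original column, which is exactly the "no computation" observation above.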
Oblique FAUST with comprehensive initial linear step

Ways to handle negative numbers generated in the dot product projection step:
1. Only pair p with d if (y-p) o d >= 0.
2. Use a sign mask to separate into 2 parts (positive part, negative part), then analyze whether there's a gap at 0.
3. After computing the SpTS which represents the dot product result (expressed in 2's complement), before converting to regular decimal bitslice pTrees, compute the minimum and subtract it from the result; then convert to a decimal bitslice PTreeSet.

Choices of starting point: Min_{i1..ik} ≡ the vector with minY_h at position h for h = i1..ik and maxY_h elsewhere (MinVec = Min_{1..n}; MaxVec = Min_∅). For d = e_k, k = 1..n, use p = MinVec. For d = Diag_{i1..ik} ≡ (q-p)/|q-p|, where p = Min_{i1..ik} and q = Min_{comp{i1..ik}}, use that p.

In the Barrel method it is paramount to locate the barrel carefully. Find the mode(s) of every column (use pTree Gap Finder, but watch for denseness, not sparseness (next slide)). For the column k with the maximum density at the mode, let d = e_k and let p = VecMode ≡ (modeY_1, ..., modeY_n).

Let's take our pulse with respect to the first linear FAUST step (done prior to any barrel-based reach-limitation masking):
1. Exhaustive search for a good unit vector d (one which produces good linear gaps). The idea is to sequence through a comprehensive collection of d's (pairing each with a starting point p such that (Y-p) o d >= 0, or else deal with negatives by analyzing for gaps in 3 steps: in the negative range, around 0, and in the positive range). At a rough coverage this is easy: take d in {e_1..e_n} with p = MinVec, and take d in {Diag_c | c a subset of {1..n}}, where Diag_c is the unit diagonal from the corner p with p_k = MinY_k for k in c, else p_k = MaxY_k, to the corner q with q_k = MaxY_k for k in c, else q_k = MinY_k. Here negatives never appear. When average points of sides and hypersides are used for p and/or q, care must be taken in picking p, since negatives can appear.
2. Selective choices of d: ModeVec seems very valuable (along with MedVec and MeanVec).
Worked 2D example: X has points p1=(1,1), p2=(3,1), p3=(2,2), p4=(3,3), p5=(6,2), p6=(9,3), ..., pa=(13,4), pb=(10,9), pd=(9,11), pf=(7,8). (The accompanying pTree bitslice tables for p0'..p6' are slide residue and are omitted; no zero counts, i.e., no gaps, appear until the following round.)

With f = p1 and x o fM - GT = 2, the first round of finding L_p gaps yields:
gap of width 2^4 = 16: [64, 80)
gap of width 2^3 = 8: [40, 48)
gap of width 2^3 = 8: [56, 64)
gap of width 2^4 = 16: [88, 104)
gap of width 2^3 = 8: [0, 8)
OR between gaps 1 and 2 for cluster C1 = {p1, p3, p2, p4}; OR between gaps 2 and 3 for cluster C2 = {p5}; between gaps 3 and 4 for cluster C3 = {p6, pf}; or for cluster C4 = {p7, p8, p9, pa, pb, pc, pd, pe}.

pTree Gap Finder can also be used to find the mode(s) of the distribution of F values: instead of watching for sparseness (and ultimately a zero count), we watch for large (the largest?) counts.
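The gap-and-cluster step in the example above can be sketched directly on the projection values. This is a plain numpy illustration of 1-D gap analysis (not the pTree-based Gap Finder itself, which works on bitslice counts); the function names are mine.

```python
import numpy as np

def find_gaps(values, min_gap):
    """Find 'interior' gaps in a 1-D set of projection values: maximal
    empty intervals of width >= min_gap between consecutive distinct
    sorted values. Returns a list of (lo, hi) interval endpoints."""
    v = np.sort(np.unique(values))
    return [(lo, hi) for lo, hi in zip(v[:-1], v[1:]) if hi - lo >= min_gap]

def cluster_by_gaps(values, min_gap):
    """Partition point indices into clusters separated by the gaps."""
    gaps = find_gaps(values, min_gap)
    cuts = [(lo + hi) / 2 for lo, hi in gaps]   # one cut per gap
    labels = np.searchsorted(cuts, values)      # cluster id per point
    return labels, gaps
```

A pTree implementation would obtain the same intervals from zero counts in the bitslice count distribution rather than from a sort.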
Oblique FAUST Pipe 1

1. Linearly project onto the d-line from within a pipe. If no good gaps, start over with a new p, d; else, if good linearly gapped region(s) appear:
2. Look for good radial gaps and use them as in OFP0. If none, then for each linearly gapped region [x o d = a1, x o d = a2] (the corresponding points on the pd-line are b_k = p + (a_k - p o d)d, k = 1,2), do: in a narrowed region around p1 = avg(b1, b2), increase the radius until the first gap or thinning appears (at r1). Let q1 = mean(post-thinning barrel stave ring, radius from r1 to r1 + delta) and let d1 = (q1 - p1)/|q1 - p1|.
3. Start over with p1, d1 (more likely to run down the middle of the round cluster and therefore produce barrel gaps, both linear and radial).

Alternatively, one could keep finding points on the sphere (like b1 and b2) until one has n-1 of them (n = dimension of the space). Then there is a formula for the center of the circle through those points (at least in low dimensions?). However, even if there is a formula in high dimensions, it would be a nasty one and would take lots of rounds of the above to produce the n-1 points (~n/2 rounds). If n = 17,000, for instance, that makes it infeasible.
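Step 2's recentering (take the mean of the stave ring and turn toward it) can be sketched as follows. This is a hypothetical helper, assuming numpy arrays and a unit vector d; the name and signature are mine.

```python
import numpy as np

def recenter_from_ring(X, p1, d, r1, delta):
    """After a radial gap/thinning at r1, take q1 = mean of the points in
    the barrel-stave ring (radial distance in [r1, r1+delta] from the
    d-line through p1) and return the revised direction
    d1 = (q1 - p1)/|q1 - p1|. Returns None if the ring is empty."""
    diff = X - p1
    L = diff @ d
    S = np.einsum('ij,ij->i', diff, diff)
    radial = np.sqrt(np.maximum(S - L**2, 0.0))  # distance from the d-line
    ring = (radial >= r1) & (radial <= r1 + delta)
    if not ring.any():
        return None
    q1 = X[ring].mean(axis=0)
    v = q1 - p1
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else None
```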
Oblique FAUST Pipe 2

1. Linearly project onto the d-line from within a pipe. If no good gaps, reset p, d.
2. For a linearly gapped region (mean = m1), increase the radius until a region appears between gaps or thinnings (at r1 and at r2). Let m2 = mean(2nd pre-thinning barrel stave).
3. Look again for a radial gap; if none, go to 1.
4. If there is a radial gap, restrict to the full barrel and look for linear gaps. If none, go to 1. If there is a linear gap, declare a subcluster, mask it off, and go to 1.
Let r3 = (r2 + r1)/2 and r4 = r2 - r3. Reset the d-line through p' = (r3*m2 + r4*m1)/2.
Q&A

For f a distance-dominated functional, avgGap = (f_max - f_min)/|f(X)| may be a good measurement for setting thresholds; e.g., x is an outlier (anomaly) if the gap around {x} is > 3*avgGap?

If d and t are trained over DocumentTerm (DT), Gradient(F) = G = (G_d, G_t). Instead of a line search using F(s) = f + sG, always use a 2D rectangle search, F(s_d, s_t) = F(f + s_d*G_d + s_t*G_t). Set dF/ds_d = 0 and dF/ds_t = 0.

It may be a better approach to find dense cells (sphere, barrel, cone) and then fuse them, because it's difficult to position them around clusters (due to bumps, protrusions, etc.). (Not true for outlier clusters (singleton/doubleton).)

An algorithm: Start with a line and a small-radius barrel around it. Find dense regions between 2 consecutive gaps in this pipe. This should identify a portion of a dense cluster. Lots of ways to go from there:
a. Use the centroid of the dense pipe piece as the sphere/barrel center.
b. Move to a better centroid for that cluster by a gradient ascent/descent process.
c. In a "GA mutation" fashion, jump to a nearby centroid, governed by some fitness function (e.g., the count in the dense pipe piece).

If the minimum barrel radius is >> 0, we have chosen a d-line far from the data. It may be advisable to pick p to be an actual data point.

Here are the formulas from the spreadsheet:
G = (B12-B$6)*B$9 + (C12-C$6)*C$9 + (D12-D$6)*D$9 + (E12-E$6)*E$9
H = G12 - $G$9   [L = (x-p) o d - min]
I = (B12-B$6)^2 + (C12-C$6)^2 + (D12-D$6)^2 + (E12-E$6)^2   [B = SQRT((x-p) o (x-p) - ((x-p) o d)^2)]

Note we don't round, so we are calculating pTree bitslices by truncating. We don't even need to do that! For fixed point, keep going (take bitslices to the right of the decimal point). Floating point? Bitslice the mantissa; the exponent shifts the slice name.

Gap analytic tools: L(x) = x o d, S(x) = (x-p) o (x-p), and then from those, B(x) = S(x) - L^2(x). (If T is the minimum gap threshold, use T^2 for S and B.)

Oblique FAUST, Barrel (OFLB): alternate L_pq(x), B_pq(x) to get a cluster dendrogram (top-down). Take p = 1st TR point?
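The avgGap outlier heuristic in the first Q&A item can be sketched as follows. This is my reading of the rule, treating "the gap around {x}" as the empty interval to the nearest other value on each side; the function name and that interpretation are assumptions.

```python
import numpy as np

def avggap_outliers(f_values, factor=3.0):
    """Flag x as an outlier if the gap around {x} exceeds factor * avgGap,
    where avgGap = (f_max - f_min)/|f(X)| as in the slide."""
    f = np.asarray(f_values, dtype=float)
    thresh = factor * (f.max() - f.min()) / f.size
    order = np.argsort(f)
    s = f[order]
    left = np.diff(s, prepend=-np.inf)    # distance to left neighbor
    right = np.diff(s, append=np.inf)     # distance to right neighbor
    flagged = (left > thresh) & (right > thresh)
    mask = np.zeros(f.size, dtype=bool)
    mask[order] = flagged                 # map back to input order
    return mask
```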
Defining Average Density: AvD = count / prod_{k=1..dim}(max_k - min_k)? This is for choosing good thresholds: MinGapThres = T_{b,AvD} ≡ b*(1/AvD)^{1/dim}, with b an adjustable parameter. If we're given a training set, TR, with K classes, is avg_{k=1..K} vom_k a better medoid than VoM? Is taking p = MinCorner, q = MaxCorner of the box circumscribing {VoM_k}_{k=1..K} better than the circumscribing box of TR?

SSPTS = set of all SPTSs (columns of reals); V = n-dimensional vector space. Code operations on SSPTS (both 1-level and multi-level):
- SSPTS x SSPTS -> SSPTS (binary algebraic operations), including +, -, /, and RWP = Row-Wise Product.
- SSPTS -> SSPTS (unary operations), including SP_c = Scalar Product (multiply each SPTS row by the same constant c; use a constant SPTS with all rows = c, then RWP, or, more efficiently, without forming the constant SPTS, use only c's bit pattern?).
- {SPTS_k}_{k=1..n} -> SSPTS (unary ops, typically SPTS_k = V_k), including DP_v (Dot Product with a fixed vector v in V) and SD_v (Square Distance from a fixed vector v in V).

Note: SSPTS includes SPTSs of all cardinalities (= depths = number of rows). It seems best to code on SSPTS rather than on SSPTS_n (card(SPTS) = n). Of course, it is very important to know what the rows represent so as to avoid nonsense results; however, why restrict the operations themselves? When SPTS operands are of different depths, the result SPTS's depth = depth of the shallowest operand (operate from the top of each).

ER_a = FP's EinRings (n = 1, a in R): the result masks rows such that row < a. SPTS -> R operations include AG_a = YC's Aggregates and iceberg queries: count, sum, avg, max, min, median, rank_k, top_k, IceBergQueries.
OFLB clustering on the CONCRETE dataset (STrength, ConcreteMix, WAter, FineAggregate, AGgregate), with minimum gap width T = MGW = 12 and d = x-n. STrength is binned into classes L < 40, M < 60, H otherwise, and per-cluster class counts are used to assess the STerror. The run alternates (x-p) o d/4 linear gap analysis with Br/4 radial (barrel) gap analysis, masking off each gapped subcluster; if the first barrel radius is >> 0, p is reset to the minimum-radius point. The result is a top-down cluster dendrogram (without purity): c0, c1, c2, c3, c4 at the first level, with c2 splitting into c21..c27, c24 into c241 and then c2411, c25 into c251, c23 into c231, c21 into c211, and c1 into c11 and c12. (The per-cluster L/M/H class-count tables are slide residue and are omitted.)
Oblique FAUST (OF) Clustering: Linear (default) OFL, Spherical OFS, Barrel OFB, Conical OFC. Assume a table of real numbers, T, converted to a PTreeSet. Each method uses a real-valued functional from X to R, and all methods are completely data-parallel (data can be distributed over a cluster, processed in parallel (dot product), then the partial results sent home to be added).

Oblique FAUST Linear (OFL) clustering (enclose clusters between (n-1)-dimensional hyperplanar gaps): L_{p,d}: X -> R, L_{p,d}(x) = (x-p) o d. Find a1 < a2 such that Gap_Lower = {x | a1 < L_{pd}(x) < a1+T} and Gap_Upper = {x | a2 < L_{pd}(x) < a2+T} are empty, and C = {x | a1+T < L_{pd}(x) < a2}.

Oblique FAUST Spherical (OFS) (enclose clusters with spherical gaps): S_p(x) = (x-p) o (x-p). Search S_p for a spherical gap, {x | r^2 <= S_p(x) < (r+T)^2} = empty, so that the interior of the r-sphere about p encloses a subcluster.

Oblique FAUST Barrel (OFB) (enclose clusters with barrel gaps): B_{p,d}(x) = (x-p) o (x-p) - ((x-p) o d)^2. Search for Gap_Lower > T, Gap_Upper > T and Gap_Barrel > T^2 (BR ≡ Barrel Radius). Note: B_{pd}(x) = S_p(x) - L^2_{pd}(x).

Oblique FAUST Cone (OFC) (enclose clusters with cone gaps): C_{p,d}(x) = (x-p) o d / sqrt((x-p) o (x-p)). Note: C^2_{pd}(x) = L^2_{pd}(x) / S_p(x).

(Figure: no gaps show on the red, blue or green projection lines.)
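The four functionals above can be stated compactly in numpy; this is a minimal sketch (function names mine), with d assumed to be a unit vector and X a row-per-point matrix.

```python
import numpy as np

def ofl(X, p, d):
    """Linear: L(x) = (x-p) . d"""
    return (X - p) @ d

def ofs(X, p):
    """Spherical: S(x) = (x-p) . (x-p)"""
    diff = X - p
    return np.einsum('ij,ij->i', diff, diff)

def ofb(X, p, d):
    """Barrel: B(x) = S(x) - L(x)^2"""
    return ofs(X, p) - ofl(X, p, d) ** 2

def ofc(X, p, d):
    """Cone: C(x) = L(x) / sqrt(S(x)), so C^2 = L^2 / S"""
    return ofl(X, p, d) / np.sqrt(ofs(X, p))
```

Each returns one real per point, so the same 1-D gap analysis applies to all four, as the slide says.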
Oblique FAUST Pipe 0 (OFP0) clustering on the SEEDS dataset:
0. Always start with a pd-line linear gap analysis, then:
1. Find gapped regions in the pipe: project the inside of the pd-pipe (small radius). For the full linearly gapped region, analyze for an Initial Radial Gap (IRG).
2. If there is an IRG, increase the linear region width until cap gaps appear.
3. Mask off that cluster.
4. Go to 1, revising p, d at this point or if either step 2 or 3 shows no gaps.

Notes:
1. OFP0 may not work well if the pipe runs through the edge of a round cluster. The philosophy is that, probabilistically, pipes are more likely to run through the center region of a round cluster, since there is more of it. Next we try ways of adjusting p so that the pipe is more nearly in the center of the subcluster.
2. I also tried Spherical when it appeared from the pipe analysis that we were at the center of a cluster. So far this didn't work out.

(The slide's count/gap tables, with p = min and q = max, are figure residue and are omitted. They show a thinning in the x o d projection, a gap in the pipe region, radial gaps for 4.5 < F < 7.5, a MinRad-too-high reset of p, and a follow-up linear gap analysis in the r = 26 barrel. The algorithm only specifies looking at the first region, but it is interesting that other clusters are revealed.)
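Step 1, projecting only the inside of the pd-pipe, can be sketched by masking on the barrel functional before the linear gap analysis. A minimal numpy illustration (names mine, d a unit vector):

```python
import numpy as np

def pipe_mask(X, p, d, radius):
    """Mask of points inside the pd-pipe: squared radial (barrel) distance
    from the d-line through p is at most radius^2."""
    diff = X - p
    L = diff @ d
    S = np.einsum('ij,ij->i', diff, diff)
    return (S - L**2) <= radius**2

def pipe_projection(X, p, d, radius):
    """Linear projections of only the in-pipe points (step 1 of OFP0),
    plus the mask used to restrict to the pipe."""
    mask = pipe_mask(X, p, d, radius)
    return (X[mask] - p) @ d, mask
```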
Oblique FAUST Pipe 01 clustering on SEEDS:
1. Linearly project in a pdr0-pipe.
2. For every very dense region (take just the most dense middle portion?), find the first radial density falloff at r1. If none, go to 5.
3. Linearly project the pdr1 barrel to determine the linear extent of that dense region. If it fails to show up, go to 5.
4. Mask off that cluster.
5. Revise p and d and go to 1.

(Figure residue omitted: with p = nnnn, q = xxxx and r0 = 1.5, the pipe's linear count/gap table shows no thinnings, so it is just one dense region; in the middle of the dense portion [=3], the radial count/gap table finds the falloff r1.)
US ≡ Universe of Scalar pTreeSets. A Scalar pTreeSet (SpTS) is the complete set of pTrees for a column of real numbers (complete: a pTree for each bit position, -∞ to ∞).

n-ary operations US x ... x US -> US:
- add (row-wise): SpTS_1 x ... x SpTS_m -> SpTS_result, with SpTS_result,k = SpTS_1,k + ... + SpTS_m,k and |SpTS_result| ≡ depth = min{|SpTS_1|, ..., |SpTS_m|}. The operations -, /, * are similarly defined row-wise. =, >, <, >=, <= are binary ops producing a mask pTree (i.e., bitwidth(SpTS_result) = 1).
- DP_v := Dot Product with a fixed real vector, v in V. Again, use v's bit pattern?
- SD_v := Square Distance from a fixed real vector, v in V. Again, use v's bit pattern only?
- SP_c = Scalar Product (multiplying every row of an SpTS by a constant, c). One can use * above or, possibly, use c's bit pattern to avoid constructing a constant SpTS.
- ER_a = EinRings = pTree mask of rows < a. Apply < above? Better, use a's bit pattern only?
- AG_a = YC's Aggregates: count, sum, avg, max, min, median, rank_k, top_k, IceBergQueries. (Here the result is a number, but a number is a depth=1, width=1 SpTS.)

How do we define (black box) Scalar pTreeSets (SpTSs) in a ToC or VDB Catalog? A ToC entry for an SpTS in a RBaDB (Real Bitarray DataBase) might hold: predicate, level, depth, Min, Rank¼, Median = Rank½, Rank¾, Max, Sum, plus a table of (bit position, pointer | purity, count) rows: (n+1, ∞) pure0; positions n down to -m, each with a pointer or a pure0/pure1 flag; (-∞, -m-1) pure0.

Notes:
0. pointer_k = pointer to a bit vector.
1. The first and last rows can be implied?
2. Other columns? (e.g., Identical Twins)
3. We need a separate ToC for PTreeSets (as sequences of same-depth SpTSs for tables of real numbers).
4. It's OK to black-box SpTSs as real columns, since complex columns can then be defined at a higher ToC level via pointers to their real and complex parts.
5. To multiply by 2^k, shift the bit position definitions only.

So a cleaner ToC might be: predicate, ..., Sum?, Highest_NonPure0_Bit_Position = n, PointerArray = (ptr(n), ..., ptr(-m)), CountArray = (cnt(n), ..., cnt(-m)). Some of this info is redundant. How much pre-computed (redundant?)
info should be placed in the ToC? Rules of thumb: pre-compute everything that might be useful, and certainly pre-compute all info that might require horizontal processing of vertical pTrees. I believe Min, Median, Max and Sum can be derived from the ToC info (the counts) without accessing the actual pTrees, and thus should be stored in the ToC only if doing so saves significant processing time. I.e., use offsets instead of keywords to implement the pTree pointer table.
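The bitslice ideas above (deriving Sum from slice counts, and the ER_a "use a's bit pattern only" mask) can be sketched with plain boolean arrays standing in for pTrees. This is an illustration under my own naming, for nonnegative integer columns only:

```python
import numpy as np

def to_bitslices(col, nbits):
    """slices[j] holds bit j of every row (a stand-in for a pTree)."""
    col = np.asarray(col, dtype=np.int64)
    return [((col >> j) & 1).astype(bool) for j in range(nbits)]

def slice_sum(slices):
    """Column sum from the slices alone: sum_j 2^j * popcount(slice j),
    i.e., Sum is derivable from the per-slice counts, as noted above."""
    return sum((1 << j) * int(s.sum()) for j, s in enumerate(slices))

def ein_ring(slices, a):
    """ER_a: mask of rows with value < a, using only a's bit pattern and
    the slices (no per-row decode). Scanning from the high bit, a row is
    < a iff at the first differing bit the row has 0 where a has 1."""
    n = len(slices[0])
    lt = np.zeros(n, dtype=bool)   # rows already known to be < a
    eq = np.ones(n, dtype=bool)    # rows still matching a's prefix
    for j in reversed(range(len(slices))):
        if (a >> j) & 1:
            lt |= eq & ~slices[j]
            eq &= slices[j]
        else:
            eq &= ~slices[j]
    return lt
```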
Oblique FAUST with a comprehensive initial linear step (done in parallel?)

For a table Y = (Y_1, Y_2, Y_3, Y_4), let n = minY_k, x = maxY_k, a = avgY_k, m = medianY_k for k = 1|2|3|4. For y in Y, L_pq(y) = (y-p) o (q-p)/|q-p|, where p and q form:

diagonals (corner to opposite corner): nnnn-xxxx, nnnx-xxxn, nnxn-xxnx, nnxx-xxnn, nxnn-xnxx, nxnx-xnxn, nxxn-xnnx, nxxx-xnnn, and the reversed pairs xnnn-nxxx through xxxx-nnnn. Indexing the 16 pairs 0 through f, pairs 8 through f are the same diagonals as 7 through 0, so we only need the 2^{n-1} = 8 pairs 0 through 7.

or p and q midlines (combo(n, n-1) of them = n!/((n-(n-1))!(n-1)!) = n = 4): aaan-aaax, aana-aaxa, anaa-axaa, naaa-xaaa.

or p and q from a side midpoint to an opposite corner (n*2^{n-1} = 32):
aana - xxxx, xxxn, xnxx, xnxn, nxxx, nxxn, nnxx, nnxn
aaan - xxxx, xxnx, xnxx, xnnx, nxxx, nxnx, nnxx, nnnx
anaa - xxxx, xxxn, xxnx, xxnn, nxxx, nxxn, nxnx, nxnn
naaa - xxxx, xxxn, xxnx, xxnn, xnxx, xnxn, xnnx, xnnn

By substituting m for a (median for average) we could get 32 more, but they seem likely to be essentially the same lines as those given by a. Possibility: always use m instead of a as a center? The 32 midpoint-to-corner lines become more and more distinct from the diagonals as the dimension increases. There are (n+1)(2^{n-1}+1) (p,q) pairs to consider.

I have not used column numbers; e.g., n = minimum means n is a number, the minimum of the column indicated by the position in which it appears, not by subscript (offset, not keyword, identification of the column). Each of n, x, a, m is a width-4 vector, a 4-tuple of numbers: (n_1, n_2, n_3, n_4), (x_1, x_2, x_3, x_4), (a_1, a_2, a_3, a_4), (m_1, m_2, m_3, m_4). As on slide 1, these are precomputed and stored as ToC info. For y in Y = the PTreeSet for a table of reals, for any of the 44 (p,q) pairs, in a first step do all 44 dot products with (q-p) and the gap analyses. A lot of work?
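The 2^{n-1} distinct diagonals (dropping the reversed duplicates by fixing the first coordinate of p to the minimum) can be enumerated mechanically. A small sketch with my own naming:

```python
from itertools import product

def diagonal_pairs(mins, maxs):
    """Enumerate the 2^(n-1) distinct main diagonals of the bounding box
    [mins, maxs]: each corner p (an n/x choice per coordinate) paired
    with the opposite corner q. Fixing p's first coordinate to min
    removes the duplicate reversed pairs (pairs 8..f of the slide)."""
    n = len(mins)
    pairs = []
    for bits in product((0, 1), repeat=n - 1):
        choice = (0,) + bits                 # first coordinate fixed to min
        p = [mins[k] if c == 0 else maxs[k] for k, c in enumerate(choice)]
        q = [maxs[k] if c == 0 else mins[k] for k, c in enumerate(choice)]
        pairs.append((p, q))
    return pairs
```

For n = 4 this yields the 8 diagonal pairs nnnn-xxxx through nxxx-xnnn listed above.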
Yes, but there is computational parallelism. (Note there is no need to insist on unit vectors: we can dot with q-p and then realize that the gaps will be |q-p| times wider than they would have been had we used d_pq instead; this affects the choice of threshold only.) So if p = (n_1, x_2, n_3, x_4), then q = (x_1, n_2, x_3, n_4), and then

(y-p) o (q-p) = y o q - y o p - p o q + p o p
= (y_1, y_2, y_3, y_4) o (x_1, n_2, x_3, n_4) - (y_1, y_2, y_3, y_4) o (n_1, x_2, n_3, x_4) - p o q + p o p
= x_1y_1 + n_2y_2 + x_3y_3 + n_4y_4 - n_1y_1 - x_2y_2 - n_3y_3 - x_4y_4 - p o q + p o p,

where p, q are precomputed fixed vectors. So if, given Y, we pre-compute all the scalar-times-SpTS multiplications of each Y_k with the 16 pre-computed numbers n_k, x_k, a_k and m_k, then any of these dot product SpTSs is an 8-term sum of precomputed SpTSs and is thus not much work (4*45 = 180 8-sums). It may be possible to engineer the 8-summing process to cut time even further, or to engineer an efficient way to do the 8 scalar multiplications and the 8-summings in one efficient operation. Of course, it won't always be 8: for the Netflix Movie PTreeSet there are 17,000 columns, not 4, and for the User PTreeSet there are 500,000.

If we are finding outliers (anomalies), then since most anomalies occur at the outside boundaries of a set, this simple method might find all of them without further processing. The other aspect of OF that needs attention is making this local, in the sense that clusters are revealed by local application of linear gap analysis even though, for the entire space, no gaps appear. This is the reason to introduce barrel gaps when the initial linear gap analysis fails.
Oblique FAUST: a comprehensive initial linear step using a central point, p

For Y = (Y_1, Y_2, Y_3, Y_4), let n = minY_k, x = maxY_k, m = medY_k, o = rank¼Y_k, t = rank¾Y_k for k = 1|2|3|4. For y in Y, L_pq(y) = (y-p) o (q-p)/|q-p|, where p = central point = the 1st point in the 1st ring around VoM(Y) in S_VoM(Y), and q is any of:

corners (2^{n-1} of them): xxxx, xxxn, xxnx, xxnn, xnxx, xnxn, xnnx, xnnn

or q from m = median point = Rank½ (n of them): mmmx, mmxm, mxmm, xmmm

or q from o = Rank¼ and t = Rank¾ points (n*2^{n-1} of them): tttx, ttox, totx, toox, ottx, otox, ootx, ooox, txtt, txto, txot, txoo, oxtt, oxto, oxot, oxoo, xttt, xtto, xtot, xtoo, xott, xoto, xoot, xooo, ttxt, ttxo, toxt, toxo, otxt, otxo, ooxt, ooxo

What we are attempting to do here is get full coverage with approximately evenly spaced projection lines (pq-lines, or d-lines). We do so by attempting to space q evenly on [half of] the sides; that seems easier than evenly spacing points on [half of] the sphere (using angles). If the dimension N is high, there may be unevenness to this method; however, using rank(k/N), k = 1..N-1, instead of length increments of (max-min)/N, should ameliorate that. We can calculate the ranks using our log(n) pTree rank procedures. Furthermore, once we move to barrel gapping to limit the radial reach (and therefore materialize gaps, for large datasets, that would not appear with just linear analysis), using an actual point of the set as p has definite advantages: it always centers the barrel on actual points in the space, so we get an r = 0 radius every time.

Oblique FAUST 2nd barrel step (parallelizing?):
1. L_{(q-p)/|q-p|}(y) = (y-p) o (q-p)/|q-p|: find outliers and linearly gapped clusters (writing a = |q-p|).
2. Then look for barrel gaps around all dense regions (between thinnings) using S_p(y) = (y-p) o (y-p) = y o y - 2 y o p + p o p and B_{(q-p)/|q-p|}(y) = S_p(y) - L_{(q-p)/|q-p|}(y)^2 = y o y - 2 y o p + p o p - (y o q - y o p - p o q + p o p)^2 / a^2.

Every dot product in the above is a linear combination of SpTSs of the form b*Y_k for some b in {n, x, a, 0, m, t}. Thus, if we precompute those SpTSs, we can put together the L and B projections efficiently.