Published by Felix Adams. Modified over 8 years ago.
Slide 1

The Data Cube is a table: the Universal Relation, UR(S#, C#, SNAME, AGE, CNAME, SITE, GRADE). Its nonblank rows form the joined table EDU = E:

  S#  C#  SNAME   AGE  CNAME  SITE  GRADE
  17  5   BAID    19   3UA    ND    96
  25  6   CLAY    21   3UA    NJ    76
  25  7   CLAY    21   CUS    ND    68
  32  6   THAISZ  18   3UA    NJ    62
  32  7   THAISZ  18   CUS    ND    91
  32  8   THAISZ  18   DSDE   ND    89
  38  6   GOOD    20   3UA    NJ    98

The full UR lists all 20 (S#, C#) pairs, with GRADE blank where no grade exists; a GR+1 column stores each grade plus one (e.g., 97 for BAID in course 5).

We can convert this UR to pTrees. For numeric columns, blanks are set to 0, since 0 does not contribute to sums; for averages we need only the nonblank count (metadata). Beyond that, watch out for interval masks (e.g., P_{x>a}): don't mask zero if blanks are to be excluded.

In Boyce-Codd normal (relational) form the same data is three tables:

  GRADE = G       COURSE = C          STUDENTS = S
  S#  C#  GR      C#  CNAME  SITE    S#  SNAME   AGE
  17  5   96      5   3UA    ND      17  BAID    19
  25  6   76      6   3UA    NJ      25  CLAY    21
  25  7   68      7   CUS    ND      32  THAISZ  18
  32  6   62      8   DSDE   ND      38  GOOD    19
  32  7   91                         57  BROWN   20
  32  8   89
  38  6   98

and in Data Cube form, G is an S#-by-C# grid. IT'S ALL TABLES!

For AGE, the level-0 UR pTree for bit position 0 is P_{4,0} = 1111111100001111000011111111000011110000. For level-1 pTrees, use a predicate and a stride: P(gte50%,4)_{4,0} = 1 1 0 1 0; P(gte50%,4)_{4,1} = 1 0 1 1 0; P(gte50%,4)_{4,2} = 0 1 0 0 1; P(gte50%,4)_{4,3} = 0 0 0 0 0; P(gte50%,4)_{4,4} = 1 1 1 1 1. These level-1 UR pTrees are exactly the basic S.AGE pTrees!

What about GRADE? Since 0 is a grade, use the GR+1 column. Plain level-1 pTrees for GR+1 are not as useful: S# = 17, 38, 57 come out as GR = 0, though the average is close (L1AvgGR = 84.5 vs. truAvgGR = 82.8). Instead, build level-1 pTrees for GR+1 by applying a nonblank mask before evaluating the predicate; this recovers the per-student values 97, 77, 94, 99, 0. So the best use is P_{½(nbMask),4}.

Conclusion: pTree-ize the Data Cube as a rotatable table. Create pTrees for both rotations and include the pTrees of the appropriate entity table with each. Since the level-0 pTrees of the UR hold no information (they are pure strides) and the level-1 pTrees are exactly the entity pTrees above, all useful pTrees are contained in the Data Cube set.
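The bit-slice construction above can be sketched in code. This is a minimal sketch, not the authors' pTree implementation: plain lists of bits stand in for compressed trees, and the level-1 gte50% stride predicate is applied over fixed-size chunks.

```python
# Illustrative sketch: bit-slice "pTrees" for a numeric column, plus level-1
# pTrees built with a gte50% predicate over fixed strides.
AGES = [19, 21, 18, 19, 20]  # the S.AGE column from the STUDENTS table above

def bit_slices(values, nbits):
    """One bit vector per bit position; slice b holds bit b of each value."""
    return [[(v >> b) & 1 for v in values] for b in range(nbits)]

def level1(bits, stride):
    """Collapse each stride to 1 iff at least half of its bits are 1 (gte50%)."""
    return [1 if 2 * sum(bits[i:i + stride]) >= stride else 0
            for i in range(0, len(bits), stride)]

slices = bit_slices(AGES, 5)   # slices[0] is the low-order-bit pTree of S.AGE
```

Applied to the ages 19, 21, 18, 19, 20, the bit-0 slice is 1 1 0 1 0, matching the basic S.AGE pTree on the slide.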
Slide 2

3D Social Media Communications Analytics: prediction and anomaly detection for emails, tweets, phone records, and texts.

Use a 3-dimensional DSR matrix (Document x Sender x Receiver) together with 2-dimensional TD (Term x Document) and UT (User x Term) matrices. Distinguishing senders and receivers means replacing the 2-dimensional DU (Document x User) matrix with the 3-dimensional DSR. Email is emphasized, but the same analytics and data structures apply to phone records, tweets, and SMS texts, which also distinguish senders and receivers.

We do the pTree conversions and train the feature vector f in the cloud, then download the resulting f to users' personal devices for predictions and anomaly detections. The same setup should work for phone-record documents, tweet documents (in the US Library of Congress), text documents, etc.

The pSVD trick is to replace these massive relationship matrices with small feature matrices: replace UT with f_U and f_T feature matrices (2 features); replace TD with f_T and f_D; replace DSR with f_D, f_S, and f_R. Using just one feature, replace them with vectors: f = (f_D, f_T, f_U, f_S, f_R) or f = (f_D, f_T, f_U). Use gradient descent plus line search to minimize the sum of square errors, sse, where sse is summed over all nonblanks in TD, UT, and DSR.

Should we train the User feature segments separately (train f_U with UT only, and f_S and f_R with DSR only), or train the User segment just once with UT and DSR and let f_S = f_R = f_U? Keeping separate segments, f = (f_T, f_D, f_U, f_S, f_R), will be called 3D f; training the User segment just once, f = (f_T, f_D, f_U), will be called 3DTU f.
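The pSVD recipe above, approximating a relationship matrix by the outer product of small feature vectors and minimizing sse over nonblanks only with gradient descent plus a 1-D line search on the step size, can be sketched for the one-feature, one-matrix case. All names here are illustrative, and NumPy arrays stand in for the pTree machinery; the line search is a simple grid search, not the slides' polynomial method.

```python
import numpy as np

def sse(fr, fc, M, mask):
    """Sum of square errors over nonblank cells only (mask = 1 at nonblanks)."""
    E = (np.outer(fr, fc) - M) * mask
    return float((E * E).sum())

def grad(fr, fc, M, mask):
    """Analytic gradient of sse w.r.t. the row and column feature vectors."""
    E = (np.outer(fr, fc) - M) * mask
    return 2 * E @ fc, 2 * E.T @ fr

def train(M, mask, rounds=200):
    fr, fc = np.ones(M.shape[0]), np.ones(M.shape[1])
    for _ in range(rounds):
        gr, gc = grad(fr, fc, M, mask)
        # crude line search over candidate step sizes t (0 is included,
        # so sse never increases)
        ts = np.linspace(-0.5, 0.5, 101)
        t = min(ts, key=lambda t: sse(fr - t * gr, fc - t * gc, M, mask))
        fr, fc = fr - t * gr, fc - t * gc
    return fr, fc

M = np.array([[1., 3., 0.], [4., 0., 5.]])   # the slides' toy DT; 0 marks blanks
mask = np.array([[1., 1., 0.], [1., 0., 1.]])
fr, fc = train(M, mask)
```

With four nonblanks and five free parameters, the fit can drive sse close to zero, as the later slides observe for this same toy DT.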
Slide 3

3DTU: Structure each relationship as a rotatable matrix, then create PTreeSets for each rotation, attaching the entity table's PTreeSet to its rotation.

Always treat an entity as an attribute of another entity if possible, rather than adding it as a new dimension of a matrix. E.g., treat Sender as a Document attribute instead of as the 3rd dimension of the matrix DSR. The reason: Sender is a candidate key for Document, while Receiver is not. (Problem to solve: a mechanism for SVD prediction of Sender?)

The toy data consists of TD (rotation DT), UT (rotation TU), DR (rotation RD), and a Document attribute table holding Sender, SendTime (ST), and Length (LN). For each rotation, store one pTree per column per bit position plus a blank mask, provided only when the column actually has blanks; e.g., pDT_{T1,0}, pDT_{T1,1}, pDT_{T1,2}, pDT_{T1,Mask}, and similarly for pTD, pTU, pUT, pDR, pRD, and the Document attribute pTrees pD_{Sender,b}, pD_{ST,b}, pD_{LN,b}.

Next: create the scalar trees as well (sDT_{T1}, sTD_{T1}, sTU_{T1}, sUT_{T1}, ...)? Next: train feature vectors from these pTrees?
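The blank-handling rule from slide 1 (store blanks as 0 so they contribute nothing to sums, and keep a nonblank mask as metadata for counts and averages) is easy to sketch. The column values below are made up for illustration.

```python
# Illustrative sketch: a numeric column with blanks, its nonblank mask,
# and its bit-slice pTrees.  Blanks become 0; the mask distinguishes a
# real 0 (position 3) from a blank (position 1).
col = [3, None, 5, 0, 1]                     # None = blank
mask = [0 if v is None else 1 for v in col]  # nonblank mask (metadata)
vals = [0 if v is None else v for v in col]  # blanks stored as 0
slices = [[(v >> b) & 1 for v in vals] for b in range(3)]

total = sum(vals)          # blanks add 0, so the sum is unaffected
avg = total / sum(mask)    # average taken over nonblanks only
```

This is why an interval mask such as P_{x>a} must not sweep in 0 when blanks are to be excluded: only the mask knows which zeros are real.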
Slide 4

3D f: f = (f_T, f_D, f_U, f_S, f_R), trained with gradient descent to minimize sse taken over the 2D matrices only: DT, UT, DS, DR. [em6]

[Numeric walkthrough over (T1, T2, T3, D1, D2, U1, U2, S1, S2, R1, R2), starting from f = (1, ..., 1): with line-search steps t = 1.279, 0.114, 0.09, 0.04, -0.001, the sse falls 55.090 -> 19.329 -> 15.756 -> 15.305 -> 15.304, essentially converged at f ~ (1.60, 2.39, 1.80, 0.68, 2.03, 2.12, 0.72, 0.92, 0.49, 1.14, 0.43).]
Slide 5

Here we try a comprehensive comparison of the three alternatives: 3D (DSR); 2D (separate DS and DR); and DTU (2D). [em9, em10]

[Numeric walkthroughs over four small example datasets, each started from f = (1, ..., 1) and run for about five gradient-descent + line-search rounds. In every case sseDTU plateaus quickly near 60, while sse2D and sse3D keep falling; the final values per dataset are roughly (sseDTU, sse2D, sse3D) = (59.9, 11.3, 34.9), (60.8, 12.1, 35.3), (60.8, 14.2, 5.1), and (60.8, 15.6, 3.8).]
Slide 6

2DTU f: training the User feature segment just once makes sense, assuming DSR(d,u,u) = 0 always (no one sends to oneself).

Write d = f_D(d), s = f_S(s), r = f_R(r), t = f_T(t), u = f_U(u), with predictions p_DSR = dsr, p_TD = td, p_UT = ut. Then

  sse = Σ_{nonblank DSR} (dsr − DSR_dsr)² + Σ_{nonblank TD} (td − TD_td)² + Σ_{nonblank UT} (ut − UT_ut)²

  ∂sse/∂d = 2[ Σ_{sr ∈ Supp_DSR(d)} sr(dsr − DSR_dsr) + Σ_{t ∈ Supp_TD(d)} t(td − TD_td) ]

  ∂sse/∂t = 2[ Σ_{d ∈ Supp_TD(t)} d(td − TD_td) + Σ_{u ∈ Supp_UT(t)} u(ut − UT_ut) ]

  ∂sse/∂u = 2[ Σ_{dr ∈ Supp_DSR(s=u)} dr(dur − DSR_dur) + Σ_{ds ∈ Supp_DSR(r=u)} ds(dsu − DSR_dsu) + Σ_{t ∈ Supp_UT(u)} t(ut − UT_ut) ]
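The partial-derivative formulas above can be checked numerically. Below is a hedged sketch for the single-feature 2DTU case with f_S = f_R = f_U; the dict-of-nonblanks representation and the toy feature values are illustrative assumptions, with TD keyed by (t, d), UT by (u, t), and DSR by (d, s, r).

```python
def sse(fT, fD, fU, TD, UT, DSR):
    """sse over nonblanks of TD, UT, DSR, with f_S = f_R = f_U."""
    s = sum((fT[t] * fD[d] - v) ** 2 for (t, d), v in TD.items())
    s += sum((fU[u] * fT[t] - v) ** 2 for (u, t), v in UT.items())
    s += sum((fD[d] * fU[a] * fU[b] - v) ** 2 for (d, a, b), v in DSR.items())
    return s

def grad_fD(fT, fD, fU, TD, UT, DSR):
    """dsse/dfD[d] = 2[ sum_t t(td - TD_td) + sum_(s,r) sr(dsr - DSR_dsr) ]."""
    g = [0.0] * len(fD)
    for (t, d), v in TD.items():
        g[d] += 2 * fT[t] * (fT[t] * fD[d] - v)
    for (d, a, b), v in DSR.items():
        g[d] += 2 * fU[a] * fU[b] * (fD[d] * fU[a] * fU[b] - v)
    return g

# toy nonblanks modeled on the slides' example (2 docs, 3 terms, 2 users)
TD  = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): 4.0, (2, 1): 5.0}
UT  = {(0, 0): 3.0, (0, 1): 5.0, (0, 2): 4.0,
       (1, 0): 1.0, (1, 1): 2.0, (1, 2): 1.0}
DSR = {(0, 1, 0): 1.0, (1, 0, 0): 1.0, (1, 0, 1): 1.0, (0, 0, 0): 0.0}
fT, fD, fU = [1.1, 0.9, 1.2], [1.0, 1.3], [0.8, 1.1]
g = grad_fD(fT, fD, fU, TD, UT, DSR)
```

A finite-difference check on each component of g confirms the formula: each sum ranges only over the support (the nonblank cells) touching that feature.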
Slide 7

3DTU: f = (f_T, f_D, f_U), using gradient descent to minimize sse taken over DT, UT, DSR (equating S = R = U).

In this tiny example we walk through the training process when S = R = U. There are 2 documents, 3 terms, and 2 users, so f = (fT1, fT2, fT3, fD1, fD2, fU1, fU2). The data:

  DT     term1  term2  term3      UT     term1  term2  term3
  doc1   1      3      -          user1  3      5      4
  doc2   4      -      5          user2  1      2      1

  DSR              sender1  sender2
  receiver1  doc1  0        1
             doc2  1        0
  receiver2  doc1  0        0
             doc2  1        0

[Numeric walkthrough, from f = (1, ..., 1): with line-search steps t = 0.125, 0.02, 0.001, 0.001, the sse falls 65.061 -> 62.769 -> 62.757 -> 62.755 and then stalls (the next step is t = 0).]

For this data, f does not train up well (i.e., does not represent the matrices) when equating S = R = U. The data is random, so the S and R portions are not necessarily reflective of U. In real data they may be, and the training may then be more successful.
Slide 8

3D f: f = (f_T, f_D, f_U, f_S, f_R), with gradient descent to minimize sse taken over DT, UT, DSR, NOT equating S = R = U. The training is much more successful!

Along the search direction, sse(f + t·g) is a degree-6 polynomial in t, so its derivative has degree 5. Since there is no closed-form quintic formula (no closed-form solution for the roots of a degree-5 polynomial), we find the t that minimizes sse by numeric line search.

[Numeric walkthrough over (T1, T2, T3, D1, D2, U1, U2, S1, S2, R1, R2), from f1 = (1, ..., 1): with steps t = 1.09, 0.129, 0.11, 0.121, 0.06, the sse falls 64.380 -> 13.877 -> 6.0480 -> 5.1468 -> 5.0888.]
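Because the derivative of sse(f + t·g) is a quintic with no closed-form roots, t must be found numerically. A simple grid-refinement line search serves as a sketch; the degree-6 polynomial below is a stand-in for an actual sse cross-section, not data from the slides.

```python
import numpy as np

def line_search(phi, lo=-2.0, hi=2.0, iters=40):
    """Minimize a 1-D function phi(t) by repeated grid refinement:
    evaluate phi on a grid, then zoom in around the best grid point."""
    for _ in range(iters):
        ts = np.linspace(lo, hi, 33)
        t = min(ts, key=phi)
        step = (hi - lo) / 32
        lo, hi = t - step, t + step   # new bracket around the grid minimum
    return t

# stand-in for sse(f + t*g): degree 6 in t, minimized at t = 0.3
phi = lambda t: (t - 0.3) ** 6 + 2 * (t - 0.3) ** 2
t_star = line_search(phi)
```

For a unimodal cross-section the bracket always contains the true minimizer, so the interval shrinks geometrically; smarter alternatives (golden-section search, or root-finding on the quintic derivative) work equally well here.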
Slide 9

Using just DT, train f = (f_T, f_D) with gradient descent minimizing sse over DT, but this time using a vector of t values, t' = (t_T1, t_T2, t_T3, t_D1, t_D2), rather than a single t. After many standard rounds, we optimize the t_i's one at a time according to a sequencing of the nonblanks. (This approach still needs to be formulated mathematically.) In this simple example we are able to zero out all square errors: first e(T1, D1), then e(T1, D2), then e(T2, D2), then e(T3, D1).

[Numeric walkthrough over (T1, T2, T3, D1, D2), from f = (1, 1, 1, 1, 1): standard gd + line-search rounds take the sse 8.7504 -> 1.67626 -> 1.14179 -> 0.86418 -> 0.53553 -> 0.37440 -> ..., and after 26 rounds sse = 0.05366 at f = (1.270, 1.614, 3.065, 0.953, 3.107). Switching to the t' method: t'_T1 = 4.2 zeroes e(T1, D1), and sse temporarily shoots up to 0.40602, but be patient! Then t'_D2 = 0.86 zeroes e(T1, D2) (sse 0.86584), t'_T2 = 0.074 zeroes e(T2, D2) (sse 0.02941), and t'_T3 = 1.183 zeroes e(T3, D1) (sse 0.00001).]

We zero out all error using t' = (4.2, 0.074, 1.183, 1, 0.86); f = (1.088, 1.361, 3.258, 0.920, 3.672) is then a lossless DT representation. Do we need the initial 26 rounds at all? No! (Next slide.) After the 26 rounds of gradient descent and line search, f = (1.270, 1.614, 3.065, 0.953, 3.107); after the t' vector method it is f = (1.088, 1.361, 3.258, 0.920, 3.672). What this tells us is that we probably would have reached zero sse eventually with more gd + ls rounds, since we seem to be heading toward the same vector.
Slide 10

Using just TD, train f = (f_T, f_D) using a vector of t' values right away (not after 26 standard gd + ls rounds). We optimize the t_i's one at a time according to a sequencing of the nonblanks, and are able to zero out all square errors.

[Numeric walkthrough, from f = (1, 1, 1, 1, 1) with the fixed direction G = (3, 4, 2, 2, 7): the square error at (D1, T1) is already zero with t' = (0, 0, 0, 0, 0), sse = 29; t'_D2 = 0.429 zeroes (D2, T1), sse = 4.99; t'_T2 = 0.062 zeroes (D2, T2), sse = 4.00; t'_T3 = 1 zeroes (D2, T3), sse = 0.00, at f = (1, 1.248, 3, 1, 4.003).]

There seems to be something fishy here: we use the same gradient G in every round, so we aren't really doing gradient descent. We could start with any vector instead of G, and then just tune one t_i at a time to drive that cell's error to zero (we don't need to square the errors, either). This requires a round for every nonblank cell, which looks fine when the data is toy-small, but what about Netflix-sized data? When it is possible to find a sequence through the nonblank cells such that the i-th one can be zeroed by the right choice of t_i, we can find an f such that sse = 0. Netflix is mostly blanks (98%), so it may be possible.

It seems productive to explore doing standard gradient descent until it converges, and then introducing this t'-vectorized method to further reduce only the high-error individual cells. Another thought: we might delete all but the "pertinent" cells for a particular difficult prediction, and do so in a way that makes it possible to find a t' that zeroes out the sse.
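The "zero one cell per step" idea can be sketched directly for the rank-1 DT example: fix one feature value as a gauge and solve each remaining value from a single nonblank cell. The DT layout and the choice fD1 = 1 are illustrative assumptions, not the slides' exact sequencing.

```python
# Sketch of the t'-style idea: walk the nonblank cells in an order where each
# cell's error is zeroed exactly by solving for one remaining free parameter.
DT = [[1, 3, None], [4, None, 5]]   # rows = docs, cols = terms; None = blank

fD = [1.0, None]                    # fix fD[0] = 1 as the free gauge
fT = [None, None, None]

fT[0] = DT[0][0] / fD[0]            # zero e(D1, T1):  fD1 * fT1 = 1
fT[1] = DT[0][1] / fD[0]            # zero e(D1, T2):  fD1 * fT2 = 3
fD[1] = DT[1][0] / fT[0]            # zero e(D2, T1):  fD2 * fT1 = 4
fT[2] = DT[1][2] / fD[1]            # zero e(D2, T3):  fD2 * fT3 = 5

total_sse = sum((fD[d] * fT[t] - DT[d][t]) ** 2
                for d in range(2) for t in range(3) if DT[d][t] is not None)
```

Each nonblank cell here introduces one equation with one unknown, so the walk terminates with sse = 0; such a sequencing exists exactly when the nonblank pattern lets every cell be "reached" with a fresh parameter, which is why a 98%-blank matrix like Netflix's might admit one.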
Slide 11

Here Sender is carried as a Document attribute (pD_{Sender,b}), alongside SendTime and Length, with the same DT/TD, TU/UT, DR/RD pTrees and masks as on slide 3. With f = (f_D, f_T, f_U, f_S, f_R), take

  sse(f) = Σ_DR (f_d f_r − DR_dr)² + Σ_TD (f_t f_d − TD_td)² + Σ_UT (f_u f_t − UT_ut)² + Σ_s (f_s − D.S_s)²

  sse(f + xG) = Σ_DR ((f_d + xG_d)(f_r + xG_r) − DR_dr)² + Σ_TD ((f_t + xG_t)(f_d + xG_d) − TD_td)²
              + Σ_UT ((f_u + xG_u)(f_t + xG_t) − UT_ut)² + Σ_s (f_s + xG_s − D.S_s)²

  G = (∂sse/∂f_d, ∂sse/∂f_t, ∂sse/∂f_u, ∂sse/∂f_s, ∂sse/∂f_r)
    = 2 ( Σ_DR f_r e_dr + Σ_DT f_t e_dt,  Σ_TD f_d e_td + Σ_UT f_u e_ut,  Σ_UT f_t e_ut,  Σ_s e_s,  Σ_RD f_d e_rd )

But what is f_S here? There is no such thing! An alternative possibility is to have DR and a separate DS matrix. (Next slide.)
Slide 12

Same pTrees as the previous slide, but with a separate DS (Document x Sender) matrix in place of the Sender attribute:

  sse(f) = Σ_DR (f_d f_r − DR_dr)² + Σ_TD (f_t f_d − TD_td)² + Σ_UT (f_u f_t − UT_ut)² + Σ_DS (f_d f_s − DS_ds)²

  G = (∂sse/∂f_d, ∂sse/∂f_t, ∂sse/∂f_u, ∂sse/∂f_s, ∂sse/∂f_r)
    = 2 ( Σ_DR f_r e_dr + Σ_DT f_t e_dt + Σ_DS f_s e_ds,  Σ_TD f_d e_td + Σ_TU f_u e_tu,  Σ_UT f_t e_ut,  Σ_SD f_d e_sd,  Σ_RD f_d e_rd )