
1 Linear Least-Squares Approximation Ellen and Jason

2 Problem the Algorithm Solves Finds the line that most closely fits a set of points. The algorithm is used for – Summarizing data – Predicting data

3 Standard variable names & expressions (points indexed k = 0, 1, ..., m, so there are m + 1 points)
p = Σ x_k
q = Σ y_k
r = Σ x_k y_k
s = Σ x_k²
d = (m + 1)s − p²
a = [(m + 1)r − pq] / d
b = [sq − pr] / d
y = ax + b
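
As a quick illustration, here is a minimal Python sketch of these formulas; the function name least_squares_line and its list inputs are assumptions made for this example, not something from the slides.

```python
# Minimal sketch of the slide's formulas (function name assumed for illustration).
def least_squares_line(xs, ys):
    m = len(xs) - 1                         # points are indexed k = 0..m
    p = sum(xs)                             # p = sum of x_k
    q = sum(ys)                             # q = sum of y_k
    r = sum(x * y for x, y in zip(xs, ys))  # r = sum of x_k * y_k
    s = sum(x * x for x in xs)              # s = sum of x_k^2
    d = (m + 1) * s - p ** 2                # d = (m + 1)s - p^2
    a = ((m + 1) * r - p * q) / d           # slope
    b = (s * q - p * r) / d                 # intercept
    return a, b                             # fitted line: y = a*x + b
```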

4 Standard Terms & Definitions
l₁ approximation: Σ |a x_k + b − y_k|
l₂ approximation: min over a, b of Σ (a x_k + b − y_k)²
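
To make the two definitions concrete, the following sketch evaluates both error measures for a candidate line y = ax + b; the helper names are assumptions for illustration. Least squares minimizes the second one.

```python
# Sum of absolute residuals (l1) and squared residuals (l2) for a line y = a*x + b.
def l1_error(a, b, xs, ys):
    return sum(abs(a * x + b - y) for x, y in zip(xs, ys))

def l2_error(a, b, xs, ys):
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))
```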

5 Principle behind algorithm/How it was derived
Σ (y_i − a x_i − b)²
This expression gives us the error.

6 Principle behind algorithm/How it was derived
Now we take the derivative with respect to m (the slope, written a on the earlier slides) and with respect to b.
∂f/∂m = Σ 2(y_i − m x_i − b)(−x_i) = Σ (−2 x_i y_i + 2m x_i² + 2b x_i)
∂f/∂b = Σ 2(y_i − m x_i − b)(−1) = Σ (−2 y_i + 2m x_i + 2b)
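
The slides carry out this differentiation by hand; as a cross-check, a symbolic tool such as sympy (an assumption here, not something the presentation uses) reproduces the per-point derivatives:

```python
import sympy as sp

x, y, m, b = sp.symbols('x y m b')
term = (y - m * x - b) ** 2           # one term of the squared-error sum

# d/dm of one term: 2*(y - m*x - b)*(-x)  ->  -2*x*y + 2*m*x**2 + 2*b*x
print(sp.expand(sp.diff(term, m)))
# d/db of one term: 2*(y - m*x - b)*(-1)  ->  -2*y + 2*m*x + 2*b
print(sp.expand(sp.diff(term, b)))
```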

7 Principle behind algorithm/How it was derived
Now we set the derivatives to 0 because we are trying to find the minimum error: the error, viewed as a function of m and b, is minimized where both derivatives equal 0. Splitting each sum term by term:
Σ (−2 y_i + 2m x_i + 2b) = Σ (−2 y_i) + m Σ (2 x_i) + b Σ 2
Σ (−2 x_i y_i + 2m x_i² + 2b x_i) = Σ (−2 x_i y_i) + m Σ (2 x_i²) + b Σ (2 x_i)

8 Principle behind algorithm/How it was derived
Σ (−2 x_i y_i) + m Σ (2 x_i²) + b Σ (2 x_i) = 0
m Σ (2 x_i²) + b Σ (2 x_i) = Σ (2 x_i y_i)
m Σ x_i² + b Σ x_i = Σ x_i y_i

Σ (−2 y_i) + m Σ (2 x_i) + b Σ 2 = 0
m Σ (2 x_i) + b Σ 2 = Σ (2 y_i)
m Σ x_i + n b = Σ y_i        (where n is the number of points, since Σ 1 = n)
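
These two normal equations form a 2x2 linear system in m and b. A minimal sketch, assuming numpy is available (the function name is made up for illustration), assembles and solves that system directly:

```python
import numpy as np

def normal_equations_fit(xs, ys):
    # Build the 2x2 system from the normal equations:
    #   [sum(x_i^2)  sum(x_i)] [m]   [sum(x_i * y_i)]
    #   [sum(x_i)    n       ] [b] = [sum(y_i)      ]
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.array([[np.sum(xs * xs), np.sum(xs)],
                  [np.sum(xs),      len(xs)]])
    rhs = np.array([np.sum(xs * ys), np.sum(ys)])
    m, b = np.linalg.solve(A, rhs)
    return m, b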

9 Principle behind algorithm/How it was derived
Write the two normal equations as
Am + Bb = E
Cm + Db = F
To eliminate m, we multiply the first equation by C and the second equation by −A:
C(Am + Bb = E)   →   ACm + BCb = EC
−A(Cm + Db = F)  →   −ACm − ADb = −AF
Adding the two gives (BC − AD)b = EC − AF, so
b = (EC − AF) / (BC − AD)

10 Principle behind algorithm/How it was derived
Now, in order to eliminate b, we multiply the first equation by D and the second equation by −B:
D(Am + Bb = E)   →   ADm + BDb = ED
−B(Cm + Db = F)  →   −BCm − BDb = −BF
Adding the two gives (AD − BC)m = ED − BF, so
m = (ED − BF) / (AD − BC)
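
A small sketch of the elimination just derived; the function name and argument order are assumptions for illustration, not part of the slides.

```python
# Solve  A*m + B*b = E  and  C*m + D*b = F  with the closed forms from these slides.
def solve_2x2(A, B, E, C, D, F):
    m = (E * D - B * F) / (A * D - B * C)   # from eliminating b
    b = (E * C - A * F) / (B * C - A * D)   # from eliminating m
    return m, b
```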

11 Example
Find the linear least-squares solution for the table of values (x, y) = (1, 2), (3, 6), (4, 9), (9, 8), so there are m + 1 = 4 points.
p = 1 + 3 + 4 + 9 = 17
q = 2 + 6 + 9 + 8 = 25
r = (1·2) + (3·6) + (4·9) + (9·8) = 128
s = 1² + 3² + 4² + 9² = 107
d = (3 + 1)·107 − 17² = 139
a = [(3 + 1)·128 − 17·25] / 139 = 87/139
b = [107·25 − 17·128] / 139 = 499/139
y = ax + b
y = (87/139)x + (499/139) ≈ 0.6259x + 3.5899
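
The worked example can be cross-checked numerically; the sketch below compares the slide's fractions against numpy.polyfit (using numpy here is an assumption, not something the presentation does).

```python
import numpy as np

xs = [1, 3, 4, 9]
ys = [2, 6, 9, 8]
a, b = np.polyfit(xs, ys, 1)     # degree-1 (linear) least-squares fit
print(a, 87 / 139)               # both ~0.6259
print(b, 499 / 139)              # both ~3.5899
```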

12 Example: Graph (plot of the four data points together with the fitted line y ≈ 0.6259x + 3.5899)

13 Example: Error
Squared residual for each point: [y − (ax + b)]²
Point 1: [2 − (0.6259·1 + 3.5900)]² = 4.9102
Point 2: [6 − (0.6259·3 + 3.5900)]² = 0.2833
Point 3: [9 − (0.6259·4 + 3.5900)]² = 8.4472
Point 4: [8 − (0.6259·9 + 3.5900)]² = 1.4960
Total error: 15.1367
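
A short sketch recomputing these squared residuals (printed values are approximate and match the slide up to rounding):

```python
a, b = 87 / 139, 499 / 139
points = [(1, 2), (3, 6), (4, 9), (9, 8)]
squared_residuals = [(y - (a * x + b)) ** 2 for x, y in points]
print(squared_residuals)        # ~[4.91, 0.28, 8.45, 1.50]
print(sum(squared_residuals))   # ~15.1367
```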

14 Advantages
Least Squares Regression   | Least Absolute Deviation Regression
Not very robust            | Robust
Stable solution            | Unstable solution
Always one solution        | Possibly multiple solutions

15 Disadvantages Limitations in shape – May not be effective for data that is nonlinear. For example, a linear function would not represent a set of points that follows a curve very well.

