Slide 1
Chapter 6: LSA by CAS
CAS: Computer Algebra Systems. Ideal for heavy yet routine analytical derivation (also useful for numerical/programming tasks), and an independent method for checking spreadsheet results.
Mathematics involved: Taylor-series expansion of vector functions; the analytical, calculus-based theory of LSA.
Slide 2
Taylor Series Expansion
Taylor's Theorem gives an approximation of f(x) at x near x_0, where x = x_0 + Δx. It requires (i) the values of f and its (various) derivatives, all evaluated at x_0, and (ii) a small quantity Δx:
f(x) = f(x_0 + Δx) = f(x_0) + f'(x_0) Δx + H.O.T.   (6.1)
H.O.T. = "Higher Order Terms"
To approximate m multi-variate functions f_1(x), f_2(x), ..., f_m(x): view them collectively as components of a vector function f(x); then
f(x) = f(x_0 + Δx) = f(x_0) + A Δx + H.O.T.   (6.2a)
where the elements of the matrix A are defined by
A_ij = ∂f_i/∂x_j, evaluated at x_0   (6.2b)
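A quick CAS check of (6.1), sketched in Wolfram Language; the sample function g, the expansion point, and the step size are arbitrary choices for illustration, not values from the text:

    (* First-order Taylor expansion about x0, as in (6.1):
       g(x0 + h) ~ g(x0) + g'(x0) h + H.O.T., with h playing the role of Δx *)
    g[x_] := ArcTan[x];                       (* arbitrary sample function *)
    x0 = 1.0;                                 (* arbitrary expansion point *)
    approx = Normal[Series[g[x0 + h], {h, 0, 1}]];
    {approx /. h -> 0.01, g[x0 + 0.01]}       (* linear estimate vs. exact value *)

The two numbers differ only by the dropped H.O.T., which shrinks as h gets smaller.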
Slide 3
Variation of Coordinates via Series Expansion
Resection with redundant targets: many (m) angles are measured.
Objective: obtain the best set of (n) coordinates (i.e. E, N) for the unknown station(s) that fits the m observed data as closely as possible. Assume m > n.
Arrange the observed data into a column vector (denoted ℓ here):
ℓ = [ℓ_1, ℓ_2, ..., ℓ_m]^T
Slide 4
Apply the least-squares (LS) condition: choose x so that f(x) fits ℓ as closely as possible, i.e. minimize the (weighted) sum of squared differences between ℓ and f(x).
x = LS solution for the coordinates, e.g. x = [E_U, N_U]^T in Section 3.5.2 (n = 2).
f(x): the calculated version of the measured angles and/or distances, computed using values of the coordinates x (what is the best x?).
Slide 5
Example: f_1 = the calculated angle A-U-B in Fig. 3-13, i.e. the angle at the unknown station U turned from target A to target B, where each direction is the azimuth from U to the target. Hence f_1 as a function of the unknown coordinates x = [E_U, N_U]^T is
f_1(x) = arctan[(E_B - E_U)/(N_B - N_U)] - arctan[(E_A - E_U)/(N_A - N_U)]   (6.4)
(See Fig. 3-13.)
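A sketch of (6.4) in Wolfram Language; the target coordinates (eA, nA, eB, nB) and the trial station position below are invented numbers, used only to show the mechanics:

    (* Calculated angle A-U-B: azimuth from U to B minus azimuth from U to A.
       ArcTan[dN, dE] gives the direction of the vector (dE, dN) measured from the N axis. *)
    f1[eU_, nU_] := ArcTan[nB - nU, eB - eU] - ArcTan[nA - nU, eA - eU];
    {eA, nA} = {1000., 2000.};                (* assumed coordinates of target A *)
    {eB, nB} = {1500., 2100.};                (* assumed coordinates of target B *)
    f1[1200., 1500.]                          (* f1 evaluated at a trial E_U, N_U *)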
Slide 6
How do we find the best solution x? Use the fact that x = x_0 + Δx, where x_0 = some approximate solution. Thus ℓ ≈ f(x_0 + Δx). Applying (6.2a, b):
ℓ ≈ f(x_0) + A Δx + H.O.T.
Hence
A Δx - [ℓ - f(x_0)] + H.O.T. ≈ 0   (6.5)
Note: Δx is the only unknown in this problem.
Rephrasing (6.5): minimize ||A Δx - k + H.O.T.||^2, where k ≡ ℓ - f(x_0) (a weighted problem, with weight matrix W).
** If we modify the problem only very slightly (by dropping the H.O.T.), then the solution should also differ only slightly. **
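A minimal numerical check of the linearization behind (6.5), again in Wolfram Language; the two-component f, the provisional point, and the shift dx are all invented for illustration:

    (* Invented vector function of two unknowns *)
    f[{e_, n_}] := {ArcTan[n, e], Sqrt[e^2 + n^2]};
    x0 = {100., 200.};                                             (* assumed provisional solution *)
    A  = D[f[{e, n}], {{e, n}}] /. {e -> x0[[1]], n -> x0[[2]]};   (* Jacobian (6.2b) at x0 *)
    dx = {0.5, -0.3};                                              (* a small shift Δx *)
    {f[x0] + A.dx, f[x0 + dx]}                                     (* linearized vs. exact: the H.O.T. is tiny *)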
Slide 7
First obtain the approximate correction Δx (really: the Δx that minimizes ||A Δx - k||^2, weighted by W):
Δx = (A^T W A)^(-1) A^T W k   (6.7)
The solution is then improved to
x_new = x_0 + Δx   (6.8)
This updated (still approximate) solution provides a new (better) x_0.
Fig. 6.1: Improving the provisional coordinates by the (approximate) Δx.
Use the new x_0 to repeat the procedure until convergence is reached.
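One step of (6.7)-(6.8) sketched in Wolfram Language; the design matrix A, misclosure vector k, provisional x0, and equal weights W are made-up placeholders:

    (* Weighted LS step (6.7): the dx minimizing ||A.dx - k||^2 under weight matrix W *)
    A  = {{0.8, 0.1}, {-0.2, 0.9}, {0.5, 0.4}};   (* invented 3 x 2 design matrix *)
    k  = {0.03, -0.01, 0.02};                     (* invented k = ℓ - f(x0)       *)
    x0 = {1000., 2000.};                          (* invented provisional coords  *)
    W  = IdentityMatrix[3];                       (* equal weights assumed        *)
    dx   = Inverse[Transpose[A].W.A].Transpose[A].W.k;   (* (6.7) *)
    xNew = x0 + dx                                       (* (6.8): the new, better x0 *)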
Slide 8
Calculation of the derivatives (6.2b) for the matrix elements A_ij (i = 1 to m, j = 1 to n):
By hand this is lengthy (m can be >> 1, and so can n) and error-prone.
The symbolic expressions must be evaluated numerically over and over by substituting each new x_0; the same goes for k = ℓ - f(x_0).
Seek help from CAS tools: Maple V, Mathematica ("Mtka"), REDUCE, DERIVE, MACSYMA, MuMath, MathCAD, etc., and CAS calculators.
URL for the free Mtka trial download (saving disallowed): http://www.wolfram.com/products/mathematica/trial.cgi
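A sketch of how a CAS removes the hand work: D[] yields the symbolic Jacobian once, and the same expression is then evaluated numerically at each successive x_0. The single-angle model and all coordinates below are invented stand-ins, not Program 6.1:

    (* Symbolic A_ij = ∂f_i/∂x_j for one calculated angle; the unknowns are e, n *)
    fSym = {ArcTan[2100. - n, 1500. - e] - ArcTan[2000. - n, 1000. - e]};
    ASym = D[fSym, {{e, n}}];                 (* 1 x 2 symbolic matrix, derived once *)
    (* Re-evaluate numerically at each new provisional solution: *)
    ASym /. {e -> 1200., n -> 1500.}
    ASym /. {e -> 1203.4, n -> 1498.7}        (* e.g. after an update (6.8) *)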
Slide 9
Resection example:
1. Download and install the trial version of Mtka.
2. Enter Program 6.1 given in the book.
3. Press Shift + Enter to run each line.
4. The results should agree with the Solver results in Ch. 3.
Slide 10
Generic procedure:
1. Define the unknowns x (n x 1 vector).
2. Put the observed data into ℓ (m x 1 vector).
3. Prepare the computed version of ℓ as f(x) (m x 1).
4. Prepare A_ij = D[f_i, x_j] (m x n) (symbolic).
5. Prepare a reasonable provisional solution, x_0.
6. k = ℓ - f(x_0); A -> A(x_0) (now numerical).
7. Δx = (A^T W A)^(-1) A^T W k.
8. Update x_0 to x_0 + Δx; repeat from step 6 until the solution converges.
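A compact Wolfram Language sketch of steps 1-8 for a three-angle resection; the target coordinates, the "observed" angles lObs, the provisional station, and the tolerance are invented placeholders (Program 6.1 in the book is the authoritative version):

    (* Steps 1-4: unknowns x = {e, n}; observed vector lObs; computed f(x); symbolic A *)
    targets = {{1000., 2000.}, {1500., 2100.}, {1800., 1600.}};   (* assumed E, N of targets A, B, C *)
    az[{et_, nt_}, {e_, n_}] := ArcTan[nt - n, et - e];           (* azimuth from station to target  *)
    f[{e_, n_}] := {az[targets[[2]], {e, n}] - az[targets[[1]], {e, n}],
                    az[targets[[3]], {e, n}] - az[targets[[2]], {e, n}],
                    az[targets[[3]], {e, n}] - az[targets[[1]], {e, n}]};
    lObs = {0.8145, 0.9415, 1.7560};                              (* invented observed angles (rad)  *)
    A = D[f[{e, n}], {{e, n}}];                                   (* symbolic m x n Jacobian         *)
    W = IdentityMatrix[3];                                        (* equal weights assumed           *)

    (* Steps 5-8: iterate from a provisional x0 until the correction is negligible *)
    x0 = {1200., 1500.};
    Do[
      k  = lObs - f[x0];
      A0 = A /. {e -> x0[[1]], n -> x0[[2]]};
      dx = Inverse[Transpose[A0].W.A0].Transpose[A0].W.k;
      x0 = x0 + dx;
      If[Norm[dx] < 10.^-6, Break[]],
      {10}];
    x0

Press Shift + Enter on each input, as on the previous slide; x0 should settle after a few iterations.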
Slide 11
Potential applications:
Recovering the missing parameters of a circle from (4 or more) observed points.
Locating the center and the major and minor axes of an ellipse from observed points.
Determining the parameters of a comet trajectory from observed data.
Etc.
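For instance, the circle problem drops into the same template: with unknowns x = {a, b, r} (center and radius, names chosen here only for illustration), the computed observations can be set up as below, after which steps 4-8 of the generic procedure run unchanged. The point list is invented:

    (* Circle through observed points: each point should lie at distance r from (a, b) *)
    pts = {{2., 1.}, {5., 4.}, {1., 6.}, {-1., 2.}};               (* >= 4 invented observed points *)
    fCircle[{a_, b_, r_}] :=
      Map[Sqrt[(#[[1]] - a)^2 + (#[[2]] - b)^2] - r &, pts];
    lCircle = ConstantArray[0., Length[pts]];                      (* "observed" values: all zero   *)
    ACircle = D[fCircle[{a, b, r}], {{a, b, r}}];                  (* symbolic 4 x 3 Jacobian       *)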