CpE-310B Engineering Computation and Simulation, Dr. Manal Al-Bzoor, Yarmouk University, Computer Engineering Department. Chapter 3: Interpolation and Curve Fitting
Interpolation Basic problem: for given data (a set of points) (xi, yi), i = 1, 2, …, m with x1 < x2 < … < xm, determine a function f such that f(xi) = yi, i = 1, 2, …, m. Such an f is called an interpolating function for the given data.
Purposes of Interpolation Plotting a smooth curve through discrete data points; reading between the lines of a table; differentiating or integrating tabular data; replacing a complicated function by a simple one.
Interpolation vs Approximation An interpolating function fits the given data points exactly. Interpolation is inappropriate if the data points are subject to significant errors; approximation is usually preferable for smoothing noisy data.
Interpolating Functions Families of functions commonly used for interpolation include polynomials, piecewise polynomials, trigonometric functions, exponential functions, and rational functions. We will focus on interpolation by polynomials and piecewise polynomials for now.
Polynomial Interpolation The simplest type of interpolation uses polynomials. A unique polynomial of degree at most n−1 passes through n data points (xi, yi), i = 1, …, n, where the xi are distinct. There are many ways to represent or compute this polynomial, but in theory all must give the same result.
Lagrangian Polynomials Example We choose 4 points for a third-degree polynomial P3(x) = a x^3 + b x^2 + c x + d. We need to find the coefficients a, b, c, d. They can be found using the methods of the previous chapter by formulating 4 equations in a, b, c, and d from the chosen points, as sketched below.
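As an illustration of that direct approach (not shown on the slide), the 4 equations can be assembled as a small linear system and solved numerically. The data values below are made up for illustration, since the slide's table is not reproduced here.

import numpy as np

# Hypothetical data points (illustration only; not the slide's table)
x = np.array([1.0, 2.0, 4.0, 5.0])
y = np.array([2.3, 3.1, 5.6, 7.2])

# Cubic P3(x) = a*x**3 + b*x**2 + c*x + d  ->  one equation per data point
A = np.vander(x, 4)                 # columns: x**3, x**2, x, 1
a, b, c, d = np.linalg.solve(A, y)  # coefficients of the interpolating cubic
print(a, b, c, d)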
Lagrangian Polynomials A simpler way is to use Lagrangian polynomials. For the cubic case, 4 points must be available: (x0, f0), (x1, f1), (x2, f2), (x3, f3). The interpolating polynomial is then defined by P3(x) = [(x − x1)(x − x2)(x − x3)] / [(x0 − x1)(x0 − x2)(x0 − x3)] f0 + [(x − x0)(x − x2)(x − x3)] / [(x1 − x0)(x1 − x2)(x1 − x3)] f1 + [(x − x0)(x − x1)(x − x3)] / [(x2 − x0)(x2 − x1)(x2 − x3)] f2 + [(x − x0)(x − x1)(x − x2)] / [(x3 − x0)(x3 − x1)(x3 − x2)] f3.
Lagrangian Polynomials Example Find the interpolated value for x = 3.0 using a cubic polynomial fitting the first 4 data points of the Table in previous slides
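A minimal Python sketch of this cubic Lagrange interpolation follows; the four data points are placeholders (the slide's table is not reproduced here), while x = 3.0 is the interpolation point from the example.

def lagrange(x_pts, f_pts, x):
    """Evaluate the Lagrange interpolating polynomial through (x_pts, f_pts) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(x_pts, f_pts)):
        term = fi
        for j, xj in enumerate(x_pts):
            if j != i:
                term *= (x - xj) / (xi - xj)   # the Lagrange basis factor
        total += term
    return total

x_pts = [1.0, 2.0, 4.0, 5.0]   # hypothetical first 4 points of the table
f_pts = [2.3, 3.1, 5.6, 7.2]
print(lagrange(x_pts, f_pts, 3.0))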
Divided Difference Polynomial With the Lagrange form, the interpolation function must be recomputed whenever a data point is added or removed. The divided-differences method avoids this problem and uses fewer arithmetic operations. Divided differences give the same polynomial as Lagrangian interpolation.
Divided Difference Consider the interpolating polynomial written as Pn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + … + an(x − x0)(x − x1)…(x − xn−1). If we choose the ai so that Pn(x) = f(x) at the points (xi, fi), i = 0, …, n, then Pn(x) is an interpolating polynomial. The ai's are determined by the divided differences of the tabulated data.
Divided Difference Given data points (xi, fi), i = 0, …, n, the divided differences, denoted by f[ ], are defined recursively by f[xi, xi+1, …, xi+k] = (f[xi+1, …, xi+k] − f[xi, …, xi+k−1]) / (xi+k − xi), where f[xi] = fi.
Divided Difference Using the standard notation, the first divided differences can be written as f[x0, x1] = (f1 − f0)/(x1 − x0), the second as f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0), and so on for higher orders.
Divided Difference Example
Divided Difference In the equation for Pn(x) above, let us write the polynomial at x = x0, x = x1, x = x2, …, x = xn; we get Pn(x0) = a0, Pn(x1) = a0 + a1(x1 − x0), Pn(x2) = a0 + a1(x2 − x0) + a2(x2 − x0)(x2 − x1), and so on.
Divided Difference If Pn(x) is the interpolating polynomial, then it should match the table for all n+1 points; solving these equations successively gives a0 = f[x0], a1 = f[x0, x1], a2 = f[x0, x1, x2], and so on.
Divided Difference Pn(x) can now be written in terms of divided differences: Pn(x) = f[x0] + f[x0, x1](x − x0) + f[x0, x1, x2](x − x0)(x − x1) + … + f[x0, x1, …, xn](x − x0)(x − x1)…(x − xn−1).
Divided Difference Using the data obtained in the divided-difference table, the interpolating polynomial of degree 3 is built from the top entries of the table. The degree-4 polynomial is then found by adding one more term to P3(x).
Divided Differences For a polynomial, the divided-difference table has a useful property: for an nth-degree polynomial Pn(x) whose highest-power term has the coefficient an, the nth divided differences will always be equal to an (and all higher divided differences are zero).
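A short sketch of building the top row of a divided-difference table and evaluating the Newton-form polynomial; the data values are placeholders, since the slides' table is not reproduced here.

def divided_differences(x, f):
    """Return the top row of the divided-difference table: f[x0], f[x0,x1], ..."""
    coef = list(f)
    n = len(x)
    for k in range(1, n):
        # work from the bottom up so entries of the previous column stay valid
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - k])
    return coef

def newton_eval(x_pts, coef, x):
    """Evaluate Pn(x) = a0 + a1(x-x0) + a2(x-x0)(x-x1) + ... in nested form."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - x_pts[i]) + coef[i]
    return result

x_pts = [1.0, 2.0, 4.0, 5.0]   # illustrative data only
f_pts = [2.3, 3.1, 5.6, 7.2]
coef = divided_differences(x_pts, f_pts)
print(newton_eval(x_pts, coef, 3.0))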
Error of Polynomial Interpolation Interpolation works better for x within the range of the xi's; the error is smaller if x is centered among the xi. The error term of polynomial interpolation is En(x) = (x − x0)(x − x1)…(x − xn) f^(n+1)(ξ) / (n+1)!, with ξ in the smallest interval that contains {x, x0, x1, …, xn}. This is not very useful for computing the real error, as f is usually unknown. If the function is "smooth," a low-degree polynomial should work satisfactorily.
Error Estimation: Next-Term Rule The error of the interpolants for f(1.75) using polynomials of degree one, two, and three can be bounded by taking derivatives of the original function and evaluating their minimum and maximum within the interval, using the error term above:
Error Estimation: Next Term Rule En(x) = (approximately) the value of the next term that would be added to Pn(x). For the previous example
Evenly Spaced Data If the data are given at evenly spaced intervals, arrange the data with the x values in ascending order. The difference table is then calculated without dividing by the x differences, as shown on the next slide.
Evenly Spaced Data: difference table Where Δfi = fi+1 − fi, Δ²fi = Δfi+1 − Δfi, Δ³fi = Δ²fi+1 − Δ²fi, and so on.
Polynomial for Evenly Spaced Data The Newton-Gregory forward polynomial passes through equally spaced points with a distance h between consecutive points: Pn(x) = f0 + s Δf0 + s(s−1)/2! Δ²f0 + s(s−1)(s−2)/3! Δ³f0 + …, where s = (x − x0)/h.
Polynomial for Evenly Spaced Data For the data in the difference table, write a Newton-Gregory forward polynomial of degree 3 that fits the four points at x = 0.4 to x = 1.0, and use it to interpolate for f(0.73). To make the polynomial fit as specified, we must index the x's so that x0 = 0.4; it follows that s = (0.73 − 0.4)/0.2 = 1.65.
Polynomial for Evenly Spaced Data
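A minimal sketch of evaluating the Newton-Gregory forward polynomial in code; the x values 0.4–1.0 with h = 0.2 come from the example, but the f values are placeholders since the slide's table is not reproduced.

def newton_gregory_forward(x0, h, f_vals, x):
    """Evaluate f0 + s*df0 + s(s-1)/2!*d2f0 + ...  with s = (x - x0)/h."""
    # build the leading entry of each forward-difference column
    diffs = list(f_vals)
    leading = [diffs[0]]
    for _ in range(1, len(f_vals)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        leading.append(diffs[0])
    s = (x - x0) / h
    total = 0.0
    term = 1.0                      # s(s-1)...(s-k+1)/k!, starting at 1 for k = 0
    for k, dk in enumerate(leading):
        total += term * dk
        term *= (s - k) / (k + 1)
    return total

f_vals = [1.2, 1.9, 2.8, 4.1]       # illustrative values at x = 0.4, 0.6, 0.8, 1.0
print(newton_gregory_forward(0.4, 0.2, f_vals, 0.73))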
Least Square Approximation Given a set of (x, y) data points, approximation is the process of finding a function (usually a line or a polynomial) that comes the "closest" to the data points. When the data contain "noise," an interpolating line cannot be found, so a best-fit line is sought instead.
Least Square Approximation: Linear Data Assume we have experimental data for the effect of temperature on resistance. The graph suggests a linear relationship.
Least Square Approximation: Linear Data The criterion used to find a and b is to minimize the sum of the squares of the errors, the "least-squares" principle. Let Yi represent an experimental value, and let yi be a value from the equation yi = a xi + b; the least-squares criterion requires that S = Σ (Yi − a xi − b)² be a minimum.
Least Square Approximation To find the minimum of S, the partial derivatives ∂S/∂a and ∂S/∂b should be zero. Reducing, we get the normal equations: a Σxi² + b Σxi = Σ xi Yi and a Σxi + b N = Σ Yi.
Least Square Approximation For the temperature data we have, Y is R and x is T. The normal equations then give a = 3.395, b = 702.2.
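A minimal sketch of the linear least-squares fit via these two normal equations; the temperature/resistance values are placeholders, since the slide's table is not reproduced (the slide reports a ≈ 3.395, b ≈ 702.2 for its own data).

def linear_least_squares(x, Y):
    """Fit y = a*x + b by solving the two normal equations."""
    n = len(x)
    Sx = sum(x)
    Sxx = sum(xi * xi for xi in x)
    SY = sum(Y)
    SxY = sum(xi * Yi for xi, Yi in zip(x, Y))
    # a*Sxx + b*Sx = SxY  and  a*Sx + b*n = SY, solved by Cramer's rule
    det = Sxx * n - Sx * Sx
    a = (SxY * n - Sx * SY) / det
    b = (Sxx * SY - Sx * SxY) / det
    return a, b

T = [20.0, 30.0, 40.0, 50.0, 60.0]        # illustrative temperatures
R = [770.0, 805.0, 838.0, 872.0, 905.0]   # illustrative resistances
print(linear_least_squares(T, R))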
Least Square Approximation: Nonlinear Data Nonlinear data can be fitted using exponential or power functions. Perform linearization by taking logarithms; for example, y = b x^a becomes ln y = ln b + a ln x. Rebuild the table to hold ln y and ln x instead of x and y, then fit a straight line to the transformed data.
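A sketch of that linearization for an assumed power-law model y = b x^a, reusing the linear_least_squares helper from the previous sketch; the data are illustrative only.

import math

x = [1.0, 2.0, 3.0, 4.0]            # illustrative data only
y = [2.0, 8.1, 17.9, 32.2]          # roughly 2*x**2, so a should come out near 2

ln_x = [math.log(v) for v in x]
ln_y = [math.log(v) for v in y]
a, ln_b = linear_least_squares(ln_x, ln_y)   # straight-line fit in log space
b = math.exp(ln_b)
print(a, b)                          # y is approximately b * x**a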
Least Square Approximation: Nonlinear Data Polynomial approximation is the common method used to approximate nonlinear data. We assume the functional relationship to be y = a0 + a1 x + a2 x² + … + an x^n. The error at each point is defined as ei = Yi − a0 − a1 xi − … − an xi^n, and the sum of squares is S = Σ ei².
Least Square Approximation: Polynomial approximation of nonlinear data At the minimum, all partial derivatives ∂S/∂a0, ∂S/∂a1, …, ∂S/∂an should be zero.
Least Square Approximation: Polynomial approximation of nonlinear data Dividing each by −2 and rearranging gives the n + 1 normal equations to be solved simultaneously: a0 N + a1 Σxi + … + an Σxi^n = Σ Yi; a0 Σxi + a1 Σxi² + … + an Σxi^(n+1) = Σ xi Yi; …; a0 Σxi^n + a1 Σxi^(n+1) + … + an Σxi^(2n) = Σ xi^n Yi.
Least Square Approximation: Polynomial approximation of nonlinear data Putting the previous equations in matrix notation gives a symmetric (n+1)×(n+1) system for the coefficients a0, a1, …, an.
Least Square Approximation: Polynomial approximation of nonlinear data Use a quadratic polynomial to fit the data in the following table. We need to calculate the normal sums N, Σxi, Σxi², Σxi³, Σxi⁴, Σ Yi, Σ xi Yi, and Σ xi² Yi as follows.
Least Square Approximation: Polynomial approximation of nonlinear data Substituting these sums into the normal equations and solving the resulting set of equations for the coefficients gives the least-squares polynomial.
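A minimal sketch of this quadratic least-squares fit, building and solving the normal equations in matrix form; the data values are placeholders since the slide's table and numerical results are not reproduced here.

import numpy as np

def poly_least_squares(x, Y, degree):
    """Build and solve the (degree+1) normal equations for a least-squares polynomial."""
    x = np.asarray(x, dtype=float)
    Y = np.asarray(Y, dtype=float)
    # A[j][k] = sum(x**(j+k)),  rhs[j] = sum(Y * x**j)
    A = np.array([[np.sum(x ** (j + k)) for k in range(degree + 1)]
                  for j in range(degree + 1)])
    rhs = np.array([np.sum(Y * x ** j) for j in range(degree + 1)])
    return np.linalg.solve(A, rhs)    # coefficients a0, a1, ..., a_degree

x = [0.0, 1.0, 2.0, 3.0, 4.0]         # illustrative data only
Y = [2.1, 3.2, 6.4, 11.1, 18.3]
print(poly_least_squares(x, Y, 2))    # a0 + a1*x + a2*x**2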