CURVE FITTING ENGR 351 Numerical Methods for Engineers Southern Illinois University Carbondale College of Engineering Dr. L.R. Chevalier

Copyright © 2000 by Lizette R. Chevalier Permission is granted to students at Southern Illinois University at Carbondale to make one copy of this material for use in the class ENGR 351, Numerical Methods for Engineers. No other permission is granted. All other rights are reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the copyright owner.

Applications: We need to determine the parameters of a saturation-growth rate model to characterize microbial kinetics. (Figure: specific growth rate μ plotted against food available, S.)

Applications: (Figure: stratified lake layers labeled epilimnion, thermocline, and hypolimnion.)

Applications: Interpolation of data. What is the kinematic viscosity at 7.5 °C?

We want to find the best "fit" of a curve through the data. (Figure: scattered data points in the f(x) versus x plane.)

Mathematical Background. The prerequisite mathematical background for interpolation is found in the material on the Taylor series expansion and finite divided differences. For regression we also need simple statistics: the average, the standard deviation, and the normal distribution.

Normal Distribution. A histogram can be used to depict the distribution of exam grades.

Material to be Covered in Curve Fitting
Least-squares regression: linear regression, polynomial regression, multiple regression, general linear least squares, nonlinear regression
Interpolation: Newton's polynomial, Lagrange polynomial, coefficients of polynomials

Specific Study Objectives
Understand the fundamental difference between regression and interpolation, and realize why confusing the two could lead to serious problems.
Understand the derivation of linear least-squares regression and be able to assess the reliability of the fit using graphical and quantitative assessments.

Specific Study Objectives
Know how to linearize data by transformation.
Understand situations where polynomial, multiple, and nonlinear regression are appropriate.
Understand the general matrix formulation of linear least squares.
Understand that there is one and only one polynomial of degree n or less that passes exactly through n+1 points.

Specific Study Objectives
Realize that more accurate results are obtained if the data used for interpolation are centered around and close to the unknown point.
Recognize the liabilities and risks associated with extrapolation.
Understand why spline functions have utility for data with local areas of abrupt change.

Least-Squares Regression. The simplest case is fitting a straight line to a set of paired observations (x1, y1), (x2, y2), ..., (xn, yn). The resulting mathematical expression is y = a0 + a1x + e, where e is the error (residual) at each data point. We will consider this error to develop a strategy for determining the "best fit" equations.

(Figure: data points in the f(x) versus x plane with a fitted straight line and the residual at each point.)

The criterion is the sum of the squares of the residuals, Sr = Σ(yi − a0 − a1xi)². To determine the values of a0 and a1, differentiate Sr with respect to each coefficient. (Note: the summation symbols are written without their limits, which run from i = 1 to n.) What mathematical technique will minimize Sr?

Setting each derivative equal to zero will minimize Sr. Doing so gives:
∂Sr/∂a0 = −2 Σ(yi − a0 − a1xi) = 0
∂Sr/∂a1 = −2 Σ[(yi − a0 − a1xi) xi] = 0

Note: We now have two simultaneous equations with two unknowns, a0 and a1. What are these equations? (Hint: place only the terms containing a0 and a1 on the LHS of the equations.) What are the final expressions for a0 and a1?

These first two equations are called the normal equations:
n·a0 + (Σxi) a1 = Σyi
(Σxi) a0 + (Σxi²) a1 = Σxiyi
Solving them gives
a1 = [n Σxiyi − Σxi Σyi] / [n Σxi² − (Σxi)²]
a0 = (Σyi)/n − a1 (Σxi)/n, i.e., a0 = ȳ − a1·x̄
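As a quick check of these formulas, here is a minimal Python sketch (NumPy is assumed; the x and y arrays are made-up illustration data, not from the slides):

import numpy as np

def linear_regression(x, y):
    """Fit y = a0 + a1*x by solving the two normal equations."""
    n = len(x)
    sx, sy = np.sum(x), np.sum(y)
    sxx, sxy = np.sum(x * x), np.sum(x * y)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx**2)   # slope
    a0 = sy / n - a1 * sx / n                      # intercept = ybar - a1*xbar
    return a0, a1

# made-up illustration data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.5, 1.7, 3.4, 3.9, 6.1])
a0, a1 = linear_regression(x, y)
print(f"y = {a0:.4f} + {a1:.4f} x")   # same result as np.polyfit(x, y, 1)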

Error. Recall that the most common measure of the "spread" of a sample is the standard deviation about the mean, s = sqrt[St/(n − 1)], where St = Σ(yi − ȳ)² is the total sum of the squares of the residuals about the mean.

By analogy, introduce a term to measure the standard error of the estimate, s_(y/x) = sqrt[Sr/(n − 2)], which quantifies the spread of the data about the regression line. The coefficient of determination is r² = (St − Sr)/St; r itself is the correlation coefficient.

Sr = 0 and r = r² = 1 signify that the line explains 100 percent of the variability of the data. If r = r² = 0, then Sr = St and the fit represents no improvement over simply using the mean of the data.
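A small helper implementing these goodness-of-fit measures, again a sketch that assumes NumPy and the a0, a1 values returned by the fit above:

import numpy as np

def goodness_of_fit(x, y, a0, a1):
    """Return St, Sr, the standard error of the estimate, and r**2."""
    st = np.sum((y - np.mean(y)) ** 2)       # spread about the mean
    sr = np.sum((y - a0 - a1 * x) ** 2)      # spread about the regression line
    s_yx = np.sqrt(sr / (len(x) - 2))        # standard error of the estimate
    r2 = (st - sr) / st                      # coefficient of determination
    return st, sr, s_yx, r2

# usage: st, sr, s_yx, r2 = goodness_of_fit(x, y, a0, a1)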

Consider the following four sets of data

Linearization of Nonlinear Relationships. Some data are simply ill-suited for linear least-squares regression... or so it appears. (Figure: data following a clearly nonlinear trend in the f(x) versus x plane.)

EXPONENTIAL EQUATIONS: P = P0 e^(rt). Taking the natural logarithm of both sides linearizes the model: ln P = ln P0 + r·t. On a plot of ln P versus t, the intercept is ln P0 and the slope is r. Why?

Can you see the similarity with the equation for a line, y = b + mx, where b is the y-intercept and m is the slope? (Figure: ln P versus t, with intercept ln P0 and slope r.)

After taking the natural log of the y-data, perform linear regression. From this regression, the value of b gives us ln(P0); hence P0 = e^b. The value of m gives us r directly.
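A sketch of the whole transform-and-fit procedure in Python (NumPy's polyfit does the straight-line regression; the t and P values are hypothetical):

import numpy as np

# Hypothetical population observations (t, P); the model P = P0*exp(r*t) is assumed.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
P = np.array([2.1, 3.4, 5.7, 9.4, 15.2])

m, b = np.polyfit(t, np.log(P), 1)   # regress ln(P) on t
P0 = np.exp(b)                       # intercept b = ln(P0)
r = m                                # slope m = r
print(f"P0 = {P0:.3f}, r = {r:.3f}")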

POWER EQUATIONS (e.g., flow over a weir): Q = c·H^a. Here we linearize the equation by taking the log of the H and Q data: log Q = log c + a·log H. What is the resulting intercept and slope?

(Figure: log Q versus log H; the slope is a and the intercept is log c.)

So how do we get c and a from performing regression on the log H versus log Q data? Comparing with y = mx + b: the intercept b = log c, so c = 10^b, and the slope m = a.

SATURATION-GROWTH RATE EQUATION: μ = μmax·S / (Ks + S). Here μ is the growth rate of a microbial population, μmax is the maximum growth rate, S is the substrate or food concentration, and Ks is the substrate concentration at which μ = μmax/2. Inverting both sides linearizes the model: 1/μ = 1/μmax + (Ks/μmax)(1/S). On a plot of 1/μ versus 1/S, the slope is Ks/μmax and the intercept is 1/μmax.
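The same idea works here by regressing 1/μ on 1/S; a sketch with hypothetical (S, μ) data:

import numpy as np

# Hypothetical observations; the model mu = mu_max*S/(Ks + S) is assumed.
S  = np.array([7.0, 9.0, 15.0, 25.0, 40.0, 75.0, 100.0, 150.0])
mu = np.array([0.29, 0.37, 0.48, 0.65, 0.80, 0.97, 0.99, 1.07])

slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)  # regress 1/mu on 1/S
mu_max = 1.0 / intercept          # intercept = 1/mu_max
Ks = slope * mu_max               # slope = Ks/mu_max
print(f"mu_max = {mu_max:.3f}, Ks = {Ks:.2f}")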

General Comments on Linear Regression. You should be cognizant of the fact that there are theoretical aspects of regression that are of practical importance but are beyond the scope of this book. In particular, statistical assumptions are inherent in the linear least-squares procedure.

General Comments on Linear Regression. The assumptions are: x has a fixed value; it is not random and is measured without error. The y values are independent random variables and all have the same variance. The y values for a given x are normally distributed.

General Comments on Linear Regression. The regression of y versus x is not the same as the regression of x versus y, because the error minimized in y versus x (measured in the y-direction) is not the same as the error minimized in x versus y (measured in the x-direction).

Polynomial Regression. One of the reasons you were presented with the theory behind linear regression was to give you insight into similar procedures for higher-order polynomials. The straight line y = a0 + a1x generalizes to the mth-degree polynomial y = a0 + a1x + a2x² + ... + am·x^m + e.

The fit is again based on the sum of the squares of the residuals, Sr = Σ(yi − a0 − a1xi − a2xi² − ... − am·xi^m)². 1. Take the derivative of this equation with respect to each of the unknown coefficients, e.g., the partial with respect to a2: ∂Sr/∂a2 = −2 Σ xi²(yi − a0 − a1xi − a2xi² − ... − am·xi^m).

2. These equations are set to zero to minimize Sr, i.e., to minimize the error. 3. Move all terms containing the unknowns to the LHS of each equation; again, using the partial of Sr with respect to a2: (Σxi²) a0 + (Σxi^3) a1 + (Σxi^4) a2 + ... + (Σxi^(m+2)) am = Σ xi² yi. 4. This set of normal equations consists of m+1 simultaneous equations, which can be solved using matrix methods to determine a0, a1, a2, ..., am.
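A sketch of the resulting procedure in Python, forming and solving the normal equations directly (NumPy is assumed; the data are made up for illustration):

import numpy as np

def poly_regress(x, y, m):
    """Least-squares fit of an m-th degree polynomial by forming and
    solving the (m+1) normal equations."""
    A = np.array([[np.sum(x ** (i + j)) for j in range(m + 1)]
                  for i in range(m + 1)])
    b = np.array([np.sum(y * x ** i) for i in range(m + 1)])
    return np.linalg.solve(A, b)          # coefficients a0, a1, ..., am

# made-up illustration data
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])
print(poly_regress(x, y, 2))              # compare with np.polyfit(x, y, 2)[::-1]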

Multiple Linear Regression. A useful extension of linear regression is the case where y is a linear function of two or more variables, y = a0 + a1x1 + a2x2. We follow the same procedure, minimizing the residuals of y = a0 + a1x1 + a2x2 + e.

Multiple Linear Regression. For two variables, we solve a 3 x 3 system [A]{x} = {c} of the following form:
[ n      Σx1     Σx2   ] [ a0 ]   [ Σy   ]
[ Σx1    Σx1²    Σx1x2 ] [ a1 ] = [ Σx1y ]
[ Σx2    Σx1x2   Σx2²  ] [ a2 ]   [ Σx2y ]
[A] and {c} are clearly based on the data given for x1, x2, and y; we solve for the unknowns in {x}.
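A minimal NumPy sketch that builds and solves this 3 x 3 system; the (x1, x2, y) values below are hypothetical and were constructed to lie exactly on y = 5 + 4x1 − 3x2:

import numpy as np

x1 = np.array([0.0, 2.0, 2.5, 1.0, 4.0, 7.0])
x2 = np.array([0.0, 1.0, 2.0, 3.0, 6.0, 2.0])
y  = np.array([5.0, 10.0, 9.0, 0.0, 3.0, 27.0])

n = len(y)
A = np.array([[n,          np.sum(x1),      np.sum(x2)],
              [np.sum(x1), np.sum(x1 * x1), np.sum(x1 * x2)],
              [np.sum(x2), np.sum(x1 * x2), np.sum(x2 * x2)]])
c = np.array([np.sum(y), np.sum(x1 * y), np.sum(x2 * y)])
a0, a1, a2 = np.linalg.solve(A, c)
print(a0, a1, a2)   # recovers 5, 4, -3 for this constructed data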

Interpolation. The general formula for an nth-order polynomial is y = a0 + a1x + a2x² + ... + an·x^n. For n+1 data points, there is one and only one polynomial of order n or less that passes through all the points. Example: y = a0 + a1x (first order) fits exactly through 2 points.

Interpolation. We will explore two mathematical methods well suited for computer implementation: Newton's divided-difference interpolating polynomials and the Lagrange interpolating polynomial.

Newton’s Divided Difference Interpolating Polynomials Linear Interpolation Quadratic Interpolation General Form Errors

Linear Interpolation. How would you approach estimating the density at 17 °C?

(Figure: density ρ versus temperature T, with known points ρ = 999.1 at T = 15 °C and ρ = 998.2 at T = 20 °C. The unknown density at 17 °C must satisfy 998.2 < ρ < 999.1.)

Assume a straight line between the known data points (15 °C, 999.1) and (20 °C, 998.2). Then calculate the slope.

Assuming this linear relationship holds, the slope between the unknown point at 17 °C and a known point is the same as the slope between the two known points.

Solve for ρ. Setting the slope of one interval equal to the slope of the other gives [f1(x) − f(x0)] / (x − x0) = [f(x1) − f(x0)] / (x1 − x0), i.e., f1(x) = f(x0) + [(f(x1) − f(x0)) / (x1 − x0)] (x − x0).

Note: The notation f1(x) designates that this is a first-order interpolating polynomial.
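A one-function Python sketch of first-order interpolation, applied to the density values quoted above:

def linear_interp(x0, f0, x1, f1, x):
    """First-order (straight-line) interpolation between (x0, f0) and (x1, f1)."""
    return f0 + (f1 - f0) / (x1 - x0) * (x - x0)

# density between 15 and 20 degrees C, using the values from the slide
rho_17 = linear_interp(15.0, 999.1, 20.0, 998.2, 17.0)
print(rho_17)   # 998.74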

(Figure: the true solution compared with two first-order estimates; smaller intervals provide a better estimate.)

(Figure: the true solution with a second-order fit.) An alternative approach would be to include a third point and estimate f(x) from a 2nd-order polynomial.

Quadratic Interpolation: f2(x) = b0 + b1(x − x0) + b2(x − x0)(x − x1). Prove that this is a 2nd-order polynomial of the form f2(x) = a0 + a1x + a2x².

First, multiply out the terms: f2(x) = b0 + b1x − b1x0 + b2x² + b2x0x1 − b2x·x0 − b2x·x1. Collect terms and recognize that a0 = b0 − b1x0 + b2x0x1, a1 = b1 − b2x0 − b2x1, and a2 = b2.

Procedure for Quadratic Interpolation. (Figure: the known data points (x1, f(x1)), (x2, f(x2)), ... and the unknown point (x, f(x)).) The coefficients follow from the data points taken in sequence:
b0 = f(x0)
b1 = [f(x1) − f(x0)] / (x1 − x0)
b2 = { [f(x2) − f(x1)] / (x2 − x1) − [f(x1) − f(x0)] / (x1 − x0) } / (x2 − x0)

Example: Include the data point at 10 °C in your calculation of the density at 17 °C.

General Form of Newton’s Interpolating Polynomials for the nth-order polynomial To establish a methodical approach to a solution define the first finite divided difference as:

If we let i = 1 and j = 0, then this is b1: b1 = f[x1, x0] = [f(x1) − f(x0)] / (x1 − x0).

Similarly, we can define the second finite divided difference as the difference of two first divided differences, f[xi, xj, xk] = (f[xi, xj] − f[xj, xk]) / (xi − xk); with i = 2, j = 1, k = 0 this expresses b2. Following the same scheme, the third divided difference is the difference of two second finite divided differences, giving b3 = f[x3, x2, x1, x0].

This leads to a scheme that can easily be implemented in a spreadsheet:
i   xi    f(xi)     first       second          third
0   x0    f(x0)    f[x1,x0]    f[x2,x1,x0]    f[x3,x2,x1,x0]
1   x1    f(x1)    f[x2,x1]    f[x3,x2,x1]
2   x2    f(x2)    f[x3,x2]
3   x3    f(x3)

These differences can be used to evaluate the b-coefficients. The result is the following interpolating polynomial, called Newton's Divided-Difference Interpolating Polynomial: fn(x) = f(x0) + (x − x0) f[x1, x0] + (x − x0)(x − x1) f[x2, x1, x0] + ... + (x − x0)(x − x1)···(x − x_(n−1)) f[xn, ..., x0]. To estimate the error we need an extra point; the error follows a relationship analogous to the remainder term of the Taylor series.
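A compact Python sketch of the divided-difference scheme and the nested evaluation of the polynomial. The density of water at 10 °C (about 999.7 kg/m³) is not given on the slide; it is assumed here from standard water-property tables:

import numpy as np

def newton_interp(xd, yd, x):
    """Evaluate Newton's divided-difference interpolating polynomial at x."""
    n = len(xd)
    b = np.array(yd, dtype=float)          # b[i] ends up holding f[x_i, ..., x_0]
    for j in range(1, n):                  # build the divided-difference table in place
        b[j:] = (b[j:] - b[j - 1:-1]) / (xd[j:] - xd[:-j])
    # nested evaluation: b0 + (x-x0)*(b1 + (x-x1)*(b2 + ...))
    p = b[-1]
    for k in range(n - 2, -1, -1):
        p = b[k] + (x - xd[k]) * p
    return p

xd = np.array([10.0, 15.0, 20.0])       # temperatures; 10 C value assumed from tables
yd = np.array([999.7, 999.1, 998.2])    # densities of water
print(newton_interp(xd, yd, 17.0))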

Lagrange Interpolating Polynomial: fn(x) = Σ (i = 0 to n) Li(x) f(xi), where Li(x) = Π (j = 0 to n, j ≠ i) (x − xj) / (xi − xj), and Π designates the "product of". The linear version of this expression is at n = 1: f1(x) = [(x − x1)/(x0 − x1)] f(x0) + [(x − x0)/(x1 − x0)] f(x1).

Your text shows you how to do n = 2 (second order). What would third order be?

Note: in the j = 1 term of the third-order polynomial, x1 is never subtracted, neither from x in the numerator nor from xi = x1 in the denominator. The same pattern holds for the j = 2 term (x2 is never subtracted) and for the j = 3 term (x3 is never subtracted).
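A direct Python sketch of the Lagrange form; with the same three density points it reproduces the Newton result:

def lagrange_interp(xd, yd, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    n = len(xd)
    for i in range(n):
        Li = 1.0
        for j in range(n):
            if j != i:                      # skip the (x - x_i)/(x_i - x_i) factor
                Li *= (x - xd[j]) / (xd[i] - xd[j])
        total += Li * yd[i]
    return total

# same density data as before (10 C value assumed from water tables)
print(lagrange_interp([10.0, 15.0, 20.0], [999.7, 999.1, 998.2], 17.0))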

Example: Determine the density at 17 °C.

Using Newton’s Interpolating Polynomial In fact, you can derive Lagrange directly from Newton’s Interpolating Polynomial

Coefficients of an Interpolating Polynomial: y = a0 + a1x + a2x² + ... + am·x^m. How can we be more straightforward in getting the values of the coefficients?

For a 2nd-order polynomial we need three data points. Plug each value of xi and f(xi) directly into y = a0 + a1x + a2x². This gives three simultaneous equations to solve for a0, a1, and a2.
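In Python this amounts to solving a small Vandermonde system (NumPy assumed; the 10 °C density is again the assumed table value):

import numpy as np

xd = np.array([10.0, 15.0, 20.0])
fd = np.array([999.7, 999.1, 998.2])     # 10 C value assumed from water tables

# Each data point gives one equation a0 + a1*x_i + a2*x_i**2 = f(x_i)
V = np.vander(xd, 3, increasing=True)    # columns: 1, x, x**2
a = np.linalg.solve(V, fd)               # a0, a1, a2
print(a)
print(a[0] + a[1] * 17.0 + a[2] * 17.0**2)   # density at 17 C

Note that Vandermonde matrices become ill-conditioned as the number of points grows, which is one reason the Newton and Lagrange forms are preferred for higher orders.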

Example: Determine the density at 17 °C.

Spline Interpolation. Our previous approach was to derive an nth-order polynomial for n+1 data points. An alternative approach is to apply lower-order polynomials to subsets of the data points. Such connecting polynomials are called spline functions, an adaptation of drafting techniques.

Spline interpolation is an adaptation of the drafting technique of using a spline to draw smooth curves through a series of points

Linear Splines

Quadratic Spline

Example: A well pumping at 250 gallons per minute has observation wells located 15, 42, 128, 317, and 433 ft away along a straight line from the well. After three hours of pumping, the following drawdowns were observed in the five wells: 14.6, 10.7, 4.8, 1.7, and 0.3 ft, respectively. Derive the equations of each quadratic spline.

Splines. To ensure that the mth derivatives are continuous at the "knots," a spline of order at least m+1 must be used. Third-order polynomials, or cubic splines, which ensure continuous first and second derivatives, are most frequently used in practice. Although third and higher derivatives may be discontinuous when using cubic splines, the discontinuities usually cannot be detected visually and are consequently ignored.

Splines. The derivation of cubic splines is somewhat involved, so we first illustrate the concepts of spline interpolation using second-order polynomials. These "quadratic splines" have continuous first derivatives at the "knots." Note: this does not ensure equal second derivatives at the knots.

Quadratic Spline. 1. The function values must be equal at the interior knots. With the ith segment written as ai·x² + bi·x + ci, this condition can be represented as a_(i−1)·x_(i−1)² + b_(i−1)·x_(i−1) + c_(i−1) = f(x_(i−1)) and ai·x_(i−1)² + bi·x_(i−1) + ci = f(x_(i−1)). Note: both equations reference the same x and f(x).

This holds for i = 2 to n. Since there are n − 1 interior knots, this provides 2(n − 1) = 2n − 2 equations.

2. The first and last functions must pass through the end points. This adds two more equations, so we now have 2n − 2 + 2 = 2n equations. How many do we need?

3. The first derivatives at the interior knots must be equal. This provides another n − 1 equations, for a total of 2n + n − 1 = 3n − 1. We need 3n.

4. Unless we have some additional information regarding the functions or their derivatives, we must make an arbitrary choice in order to compute the constants. 5. Assume the second derivative is zero at the first point, i.e., a1 = 0. The visual interpretation of this condition is that the first two points will be connected by a straight line.

Cubic Splines. Each interval is fitted with a third-order polynomial, which has 3 + 1 = 4 coefficients. Consequently, for n intervals there are 4n unknown constants to evaluate. What are the 4n equations?

Cubic Splines. The conditions are:
The function values must be equal at the interior knots (2n − 2 equations).
The first and last functions must pass through the end points (2).
The first derivatives at the interior knots must be equal (n − 1).
The second derivatives at the interior knots must be equal (n − 1).
The second derivatives at the end knots are zero (2).
Altogether: (2n − 2) + 2 + (n − 1) + (n − 1) + 2 = 4n equations for the 4n unknowns.
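For cubic splines in practice one would typically lean on a library; a sketch using SciPy's CubicSpline on the well-drawdown data from the earlier example (bc_type='natural' imposes the zero-second-derivative end condition listed above):

import numpy as np
from scipy.interpolate import CubicSpline

# Drawdown data from the pumping-well example (distance in ft, drawdown in ft)
r = np.array([15.0, 42.0, 128.0, 317.0, 433.0])
s = np.array([14.6, 10.7, 4.8, 1.7, 0.3])

cs = CubicSpline(r, s, bc_type='natural')   # natural cubic spline
print(cs(100.0))                            # interpolated drawdown at 100 ft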

SPECIAL NOTE. On the surface it may appear that a third-order approximation using splines would be inferior to higher-order polynomials. Consider a situation where a spline may perform better: a generally smooth function undergoes an abrupt change in a region of interest.

The abrupt change induces oscillations in interpolating polynomials. In contrast, the cubic spline provides a much more acceptable approximation.

Previous Exam Question. Given the following data, develop the simultaneous equations for a quadratic spline and express your final answer in matrix form. (The data points, recoverable from the equations below, are (1, 0.5), (4, 4.6), (6, 1.5), and (7, 3.0).)

Interior knots (4, 4.6) and (6, 1.5):
16a1 + 4b1 + c1 = 4.6
16a2 + 4b2 + c2 = 4.6
36a2 + 6b2 + c2 = 1.5
36a3 + 6b3 + c3 = 1.5
End conditions (1, 0.5) and (7, 3.0):
a1 + b1 + c1 = 0.5
49a3 + 7b3 + c3 = 3.0
First-derivative continuity at the interior knots:
8a1 + b1 = 8a2 + b2
12a2 + b2 = 12a3 + b3
Extra equation: a1 = 0
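These nine equations can also be assembled and solved numerically; a minimal NumPy sketch (the ordering of the unknowns is my own choice):

import numpy as np

# Unknowns ordered [a1, b1, c1, a2, b2, c2, a3, b3, c3]
A = np.array([
    [16, 4, 1,  0,  0, 0,   0,  0, 0],   # interior knot (4, 4.6), spline 1
    [ 0, 0, 0, 16,  4, 1,   0,  0, 0],   # interior knot (4, 4.6), spline 2
    [ 0, 0, 0, 36,  6, 1,   0,  0, 0],   # interior knot (6, 1.5), spline 2
    [ 0, 0, 0,  0,  0, 0,  36,  6, 1],   # interior knot (6, 1.5), spline 3
    [ 1, 1, 1,  0,  0, 0,   0,  0, 0],   # end point (1, 0.5)
    [ 0, 0, 0,  0,  0, 0,  49,  7, 1],   # end point (7, 3.0)
    [ 8, 1, 0, -8, -1, 0,   0,  0, 0],   # first-derivative continuity at x = 4
    [ 0, 0, 0, 12,  1, 0, -12, -1, 0],   # first-derivative continuity at x = 6
    [ 1, 0, 0,  0,  0, 0,   0,  0, 0],   # extra condition a1 = 0
], dtype=float)
b = np.array([4.6, 4.6, 1.5, 1.5, 0.5, 3.0, 0.0, 0.0, 0.0])

coeffs = np.linalg.solve(A, b)
print(coeffs.reshape(3, 3))   # rows give (a_i, b_i, c_i) for each spline segment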