Engineering Analysis ENG 3420 Fall 2009
Dan C. Marinescu
Office: HEC 439 B
Office hours: Tu-Th 11:00-12:00

Lecture 22
Attention: The last homework (HW5) and the last project are due on Tuesday, November 24!
Last time:
- Linear regression
- Exponential, power, and saturation non-linear models
- Linear least-squares regression
Today:
- Linear regression versus the sample mean; the coefficient of determination
- Polynomial least-squares fit
- Multiple linear regression
- General linear least squares
- More on non-linear models
- Interpolation (Chapter 15): polynomial interpolation, Newton interpolating polynomials, Lagrange interpolating polynomials
Next time:
- Splines

Quantification of Errors
For a straight line, the sum of the squares of the estimate residuals is:
S_r = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2
The standard error of the estimate is:
s_{y/x} = \sqrt{S_r / (n - 2)}
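A minimal MATLAB sketch of these two quantities for a straight-line fit (the data vectors below are invented for illustration):
x = [10 20 30 40 50 60 70 80]';          % illustrative independent data
y = [25 70 380 550 610 1220 830 1450]';  % illustrative dependent data
a = polyfit(x, y, 1);                    % a(1) = slope a1, a(2) = intercept a0
Sr = sum((y - polyval(a, x)).^2);        % sum of the squares of the residuals
syx = sqrt(Sr / (length(x) - 2))         % standard error of the estimate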

Linear regression versus the sample mean
What is the difference between linear regression and simply computing the sample mean and drawing a horizontal line at that value? The difference shows up in the spread: compare the histogram of the differences between the predicted values and the actual sample values in each case.
Regression data showing (a) the spread of the data around the mean of the dependent variable and (b) the spread of the data around the best-fit line. The reduction in spread represents the improvement due to linear regression.

Coefficient of Determination
The coefficient of determination is
r^2 = (S_t - S_r) / S_t
where S_t = \sum_{i=1}^{n} (y_i - \bar{y})^2 is the total sum of squares around the mean.
r^2 represents the fraction of the original uncertainty explained by the model. For a perfect fit, S_r = 0 and r^2 = 1. If r^2 = 0, there is no improvement over simply picking the mean. If r^2 < 0, the model is worse than simply picking the mean!

Example
Given measurements of velocity v (m/s) and force F (N), tabulate for each point i the values x_i, y_i, the prediction a_0 + a_1 x_i, (y_i - \bar{y})^2, and (y_i - a_0 - a_1 x_i)^2; summing the last two columns gives S_t and S_r. The resulting r^2 tells what percentage of the original uncertainty has been explained by the linear model. [Numeric table from the slide not reproduced.]

Polynomial least-squares fit
MATLAB has a built-in function polyfit that fits a least-squares n-th order polynomial to data:
p = polyfit(x, y, n)
x: independent data
y: dependent data
n: order of the polynomial to fit
p: coefficients of the polynomial f(x) = p_1 x^n + p_2 x^{n-1} + ... + p_n x + p_{n+1}
MATLAB's polyval command can be used to evaluate the polynomial from its coefficients:
y = polyval(p, x)
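For instance, a short sketch of a quadratic fit with polyfit and polyval (the data are invented for illustration):
x = (0:5)';
y = [2.1 7.7 13.6 27.2 40.9 61.1]';                   % illustrative measurements
p = polyfit(x, y, 2);                                 % p(1)*x^2 + p(2)*x + p(3)
yhat = polyval(p, x);                                 % fitted values at the data points
r2 = 1 - sum((y - yhat).^2) / sum((y - mean(y)).^2)   % coefficient of determination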

Fitting an m-th order polynomial to n data points
Minimize:
S_r = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i - a_2 x_i^2 - ... - a_m x_i^m)^2
The standard error is:
s_{y/x} = \sqrt{S_r / (n - (m+1))}
because the m-th order polynomial has (m+1) coefficients. The coefficient of determination is, as before:
r^2 = (S_t - S_r) / S_t

Multiple Linear Regression
Now y is a linear function of two or more independent variables. The best fit minimizes the sum of the squares of the estimate residuals:
S_r = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_{1,i} - a_2 x_{2,i} - ... - a_m x_{m,i})^2
For example, when y = a_0 + a_1 x_1 + a_2 x_2 + e, instead of a line we fit a plane, as in the sketch below.
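A minimal MATLAB sketch (with invented data) of fitting such a plane using the left-divide operator:
x1 = [0 2 2.5 1 4 7]';           % illustrative first independent variable
x2 = [0 1 2 3 6 2]';             % illustrative second independent variable
y  = [5 10 9 0 3 27]';           % illustrative dependent data
Z  = [ones(size(x1)) x1 x2];     % basis functions: 1, x1, x2
a  = Z \ y                       % least-squares coefficients [a0; a1; a2]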

General Linear Least Squares
Linear, polynomial, and multiple linear regression all belong to the general linear least-squares model:
y = a_0 z_0 + a_1 z_1 + a_2 z_2 + ... + a_m z_m + e
where z_0, z_1, ..., z_m are a set of m+1 basis functions and e is the error of the fit. The basis functions can be any functions of the data but cannot contain any of the coefficients a_0, a_1, etc. The equation can be re-written for all data points as a matrix equation:
{y} = [Z]{a} + {e}
where {y} is a vector of the n dependent data, {a} is a vector of the (m+1) coefficients, {e} contains the error at each point, and [Z] is:
| z_{0,1}  z_{1,1}  ...  z_{m,1} |
| z_{0,2}  z_{1,2}  ...  z_{m,2} |
|   ...      ...    ...    ...   |
| z_{0,n}  z_{1,n}  ...  z_{m,n} |
with z_{j,i} representing the value of the j-th basis function evaluated at the i-th point.

Solving General Linear Least Squares Coefficients
Generally, [Z] is an n x (m+1) matrix, so simple inversion cannot be used to solve for the (m+1) coefficients {a}. Instead the sum of the squares of the estimate residuals is minimized:
S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} ( y_i - \sum_{j=0}^{m} a_j z_{j,i} )^2
The outcome of this minimization yields the normal equations:
[Z]^T [Z] {a} = [Z]^T {y}

Example
Given the column vectors x and y, find the coefficients of the best-fit parabola y = a_0 + a_1 x + a_2 x^2:
Z = [ones(size(x)) x x.^2]
a = (Z'*Z)\(Z'*y)
MATLAB's left-divide will automatically include the [Z]^T terms if the matrix is not square, so a = Z\y would work as well.
To calculate measures of fit:
St = sum((y-mean(y)).^2)
Sr = sum((y-Z*a).^2)
r2 = 1-Sr/St
syx = sqrt(Sr/(length(x)-length(a)))

Nonlinear Models
How to deal with nonlinear models (when we cannot fit a straight line to the sample data):
- Transform the variables and solve for the best fit of the transformed variables. This works well for exponential, power, and saturation models, but not all equations can be transformed easily or at all.
- Perform nonlinear regression to directly determine the least-squares fit.
To perform nonlinear regression:
- write a function that returns the sum of the squares of the estimate residuals for a fit, and then
- use the fminsearch function to find the values of the coefficients where a minimum occurs.
The arguments to the function that computes S_r should be the coefficients, the independent variables, and the dependent variables.

Example
Given two vectors of n observations, ym for the force F and xm for the velocity v, find the coefficients a_0 and a_1 for the best fit of the equation F = a_0 v^{a_1}.
First, write a function called fSSR.m containing the following:
function f = fSSR(a, xm, ym)
yp = a(1)*xm.^a(2);
f = sum((ym-yp).^2);
Then use fminsearch in the command window to obtain the values of a that minimize fSSR:
a = fminsearch(@fSSR, [1, 1], [], v, F)
where [1, 1] is an initial guess for the [a0, a1] vector and [] is a placeholder for the options.

Comparison between the transformed power equation and the direct method in our example
In general the two methods produce different results (the coefficients of the equations are different). The direct method produces the larger r^2.

Polynomial Interpolation
Problem: estimate intermediate values between precise data points.
Related to curve fitting, but the function used to interpolate must pass through the data points; this makes interpolation more restrictive than fitting.
Polynomial interpolation: an (n-1)-th order polynomial is found that passes through n data points:
f(x) = p_1 x^{n-1} + p_2 x^{n-2} + ... + p_{n-1} x + p_n
How to find the coefficients of the polynomial:
- Use linear algebra to solve a system of n linear equations.
- Use the polyfit and polyval built-in functions, making sure the order of the fit for n data points is n-1.

Matrix formulation of polynomial interpolation: find the coefficients p_1, p_2, ..., p_n knowing the values of the function f(x_1), f(x_2), ..., f(x_n). The interpolation conditions form the linear system:
| x_1^{n-1}  x_1^{n-2}  ...  x_1  1 |   | p_1 |   | f(x_1) |
| x_2^{n-1}  x_2^{n-2}  ...  x_2  1 | * | p_2 | = | f(x_2) |
|    ...        ...     ...  ...    |   | ... |   |  ...   |
| x_n^{n-1}  x_n^{n-2}  ...  x_n  1 |   | p_n |   | f(x_n) |
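As a sketch, MATLAB's vander builds exactly this coefficient matrix, so the system can be solved directly (the nodes and values below are illustrative):
x  = [300 400 500]';             % illustrative interpolation nodes
fx = [0.616 0.525 0.457]';       % illustrative function values at the nodes
V  = vander(x);                  % rows [x_i^2  x_i  1] for three points
p  = V \ fx                      % coefficients p_1, p_2, p_3 of the interpolant
% equivalently: p = polyfit(x, fx, length(x)-1)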

Ill-conditioned linear problems
A matrix is ill-conditioned if small changes in its coefficients produce drastic changes in the solution, which makes iterating the solution down to a small residual a tricky operation. A matrix whose entries vary by several orders of magnitude is another source of ill-conditioning. Numerical round-off makes solving a model with an ill-conditioned matrix challenging.

Problems
Matrices of the form shown in the previous slide are known as Vandermonde matrices. They are very ill-conditioned, so their solutions are very sensitive to round-off errors. The issue can be minimized by scaling and shifting the data.

Newton Interpolating Polynomials
The differences between a simple polynomial and Newton's interpolating polynomial for first- and second-order interpolations are:
Simple: f_1(x) = a_1 + a_2 x               Newton: f_1(x) = b_1 + b_2 (x - x_1)
Simple: f_2(x) = a_1 + a_2 x + a_3 x^2     Newton: f_2(x) = b_1 + b_2 (x - x_1) + b_3 (x - x_1)(x - x_2)

First-order Newton interpolating polynomial
The first-order Newton interpolating polynomial may be obtained from linear interpolation and similar triangles. The resulting formula based on known points x_1 and x_2 and the values of the dependent function at those points is:
f_1(x) = f(x_1) + ((f(x_2) - f(x_1)) / (x_2 - x_1)) (x - x_1)

Second-order Newton interpolating polynomial
The second-order Newton interpolating polynomial introduces some curvature to the line connecting the points, but still passes through the first two points. The resulting formula based on known points x_1, x_2, and x_3 and the values of the dependent function at those points is:
f_2(x) = f(x_1) + ((f(x_2) - f(x_1))/(x_2 - x_1)) (x - x_1) + [ ((f(x_3) - f(x_2))/(x_3 - x_2) - (f(x_2) - f(x_1))/(x_2 - x_1)) / (x_3 - x_1) ] (x - x_1)(x - x_2)

Newton interpolating polynomial of degree n-1
In general, an (n-1)-th order Newton interpolating polynomial has all the terms of the (n-2)-th order polynomial plus one extra. The general formula is:
f_{n-1}(x) = b_1 + b_2 (x - x_1) + ... + b_n (x - x_1)(x - x_2)...(x - x_{n-1})
where
b_1 = f(x_1), b_2 = f[x_2, x_1], b_3 = f[x_3, x_2, x_1], ..., b_n = f[x_n, x_{n-1}, ..., x_1]
and the f[...] represent divided differences.

Divided differences
First divided differences are calculated as follows:
f[x_i, x_j] = (f(x_i) - f(x_j)) / (x_i - x_j)
Higher-order divided differences are calculated from divided differences of a smaller number of terms:
f[x_i, x_j, x_k] = (f[x_i, x_j] - f[x_j, x_k]) / (x_i - x_k)
f[x_n, x_{n-1}, ..., x_2, x_1] = (f[x_n, x_{n-1}, ..., x_2] - f[x_{n-1}, ..., x_1]) / (x_n - x_1)
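The recursion above maps directly to code. A compact MATLAB sketch (the function name newtint is our own) that builds the divided-difference table and evaluates the Newton polynomial at xx:
function yint = newtint(x, y, xx)
% Newton interpolating polynomial via a divided-difference table (sketch)
n = length(x);
b = zeros(n, n);
b(:,1) = y(:);                                   % zeroth-order differences f(x_i)
for j = 2:n
    for i = 1:n-j+1
        b(i,j) = (b(i+1,j-1) - b(i,j-1)) / (x(i+j-1) - x(i));
    end
end
xt = 1;
yint = b(1,1);                                   % b_1 = f(x_1)
for j = 1:n-1
    xt = xt .* (xx - x(j));                      % running product (xx-x_1)...(xx-x_j)
    yint = yint + b(1,j+1) .* xt;                % add b_{j+1} term
end
% usage example: newtint([1 4 6], [log(1) log(4) log(6)], 2) approximates log(2)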

Lagrange interpolating polynomials
Another method that uses shifted values to express an interpolating polynomial is the Lagrange interpolating polynomial. Compared with a simple polynomial, the first- and second-order Lagrange interpolating polynomials are written as:
f_1(x) = L_1 f(x_1) + L_2 f(x_2)
f_2(x) = L_1 f(x_1) + L_2 f(x_2) + L_3 f(x_3)
where the L_i are weighting coefficients that are functions of x.

First-order Lagrange interpolating polynomial
The first-order Lagrange interpolating polynomial may be obtained from a weighted combination of two linear interpolations. The resulting formula based on known points x_1 and x_2 and the values of the dependent function at those points is:
f_1(x) = ((x - x_2)/(x_1 - x_2)) f(x_1) + ((x - x_1)/(x_2 - x_1)) f(x_2)

Lagrange interpolating polynomial for n points
In general, the Lagrange polynomial interpolation for n points is:
f_{n-1}(x) = \sum_{i=1}^{n} L_i(x) f(x_i)
where L_i is given by:
L_i(x) = \prod_{j=1, j \neq i}^{n} (x - x_j) / (x_i - x_j)
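A direct MATLAB transcription of this formula (a sketch; the function name lagrange is our own):
function yint = lagrange(x, y, xx)
% Lagrange interpolating polynomial evaluated at xx (sketch)
n = length(x);
yint = 0;
for i = 1:n
    L = 1;
    for j = 1:n
        if j ~= i
            L = L .* (xx - x(j)) ./ (x(i) - x(j));   % weighting coefficient L_i(xx)
        end
    end
    yint = yint + L .* y(i);                         % accumulate L_i(xx) * f(x_i)
end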

Inverse Interpolation
Interpolation generally means finding some value f(x) for an x that lies between given independent data points. Sometimes it is useful to find the x for which f(x) takes a certain value; this is inverse interpolation. Rather than interpolating x as a function of f(x), it may be better to fit f(x) as a function of x using interpolation and then solve the corresponding roots problem f(x) - f_desired = 0 for x.
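A sketch of this roots-based approach in MATLAB (the data and desired value are invented for illustration):
x = [1 2 3 4 5];
y = 1 ./ x;                          % illustrative samples of f(x) = 1/x
fdesired = 0.3;                      % find the x where f(x) = 0.3
p = polyfit(x, y, length(x)-1);      % interpolating polynomial through the samples
p(end) = p(end) - fdesired;          % shift so f(x) - fdesired = 0 becomes a roots problem
r = roots(p);
r = real(r(abs(imag(r)) < 1e-8));    % discard complex roots
x0 = r(r >= min(x) & r <= max(x))    % keep the root inside the data range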

Extrapolation
Extrapolation is the process of estimating a value of f(x) that lies outside the range of the known base points x_1, x_2, ..., x_n. Extrapolation represents a step into the unknown, and extreme care should be exercised when extrapolating!

Extrapolation Hazards
The figure accompanying this slide showed the results of extrapolating a seventh-order polynomial fit to a population data set: outside the range of the data the polynomial departs rapidly from any plausible trend.

Oscillations
Higher-order polynomials can not only lead to round-off errors due to ill-conditioning, but can also introduce oscillations into an interpolation or fit where they should not be. In the figures accompanying this slide, the dashed line represents a function, the circles represent samples of the function, and the solid line represents the result of a polynomial interpolation.