Young's Modulus Example
The pairs (1,1), (2,2), (4,3) represent strain (millistrains) and stress (ksi) measurements. Estimate Young's modulus using the three commonly used error norms, and estimate the error in Young's modulus using cross validation. Our model is y = Ex, where x is strain, y is stress, and E is Young's modulus.

Best fits
The data provides y(1)=1, y(2)=2, y(4)=3. With y = Ex, first determine the range of reasonable moduli from the individual points: E = y/x gives 1, 1, and 0.75, so 0.75 ≤ E ≤ 1. The errors are e1 = 1 - E, e2 = 2 - 2E, e3 = 3 - 4E. Then:
Sum of absolute errors – the objective |1-E| + |2-2E| + |3-4E| is piecewise linear, so it must take its minimum value at one of the breakpoints E = y/x; the minimum is at E = 0.75.
Sum of squares of errors – setting the derivative of (1-E)² + (2-2E)² + (3-4E)² to zero gives E = 17/21 ≈ 0.81.
Max error – equating the two largest errors with opposite signs gives E = 5/6 ≈ 0.83.
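A minimal NumPy sketch of these three fits: a fine grid over E (assumed to bracket the reasonable range above) handles the two nonsmooth norms, while least squares uses its closed form E = Σxy/Σx².

```python
# Verify the three one-parameter fits y = E*x to (1,1), (2,2), (4,3).
import numpy as np

x = np.array([1.0, 2.0, 4.0])   # strain (millistrains)
y = np.array([1.0, 2.0, 3.0])   # stress (ksi)

E_grid = np.linspace(0.5, 1.2, 70001)   # step 1e-5; assumed to bracket the optimum
resid = y - E_grid[:, None] * x         # e_i = y_i - E*x_i, one row per candidate E

E_abs = E_grid[np.argmin(np.abs(resid).sum(axis=1))]   # min sum |e_i|
E_ls = (x @ y) / (x @ x)                               # min sum e_i^2 (closed form)
E_max = E_grid[np.argmin(np.abs(resid).max(axis=1))]   # min max |e_i|

print(E_abs, E_ls, E_max)   # 0.75, 17/21 ~ 0.8095, 5/6 ~ 0.8333
```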

Sketch of fits
The average absolute error has a piecewise-linear objective, so its minimum ends up at the extreme of the reasonable range, E = 0.75. The max-error fit equates the two largest errors, at E = 5/6, and the rms fit lands, as expected, in between at E = 17/21.
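A short matplotlib sketch that reproduces the picture, using the three moduli found above:

```python
# Plot the data and the three fitted lines y = E*x.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1.0, 2.0, 4.0])
y = np.array([1.0, 2.0, 3.0])
xs = np.linspace(0.0, 4.5, 2)   # straight lines, so two points suffice

for E, label in [(3 / 4, "sum |e|: E = 3/4"),
                 (17 / 21, "least squares: E = 17/21"),
                 (5 / 6, "max |e|: E = 5/6")]:
    plt.plot(xs, E * xs, label=label)

plt.plot(x, y, "ko", label="data")
plt.xlabel("strain (millistrains)")
plt.ylabel("stress (ksi)")
plt.legend()
plt.show()
```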

Cross validation for average absolute error
Cross validation gives us the error in prediction, not in the coefficients. If at point x the error in y is e, we will consider it equivalent to an error in Young's modulus of e/x. Recall y(1)=1, y(2)=2, y(4)=3.
If we leave out the first point, the fit to the remaining two is E = 0.75; the error at x=1 is ¼, which is also the error in E.
If we leave out the second point, the fit is again E = 0.75; the error at x=2 is ½, so the error in E is again ¼.
If we omit the third point, obviously E = 1; the error at x=4 is 1, so the error in the modulus is ¼.
So the average cross-validation error is ¼.
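A leave-one-out sketch of this calculation, reusing the grid-search fit; the error in E at a left-out point (x, y) is taken as |y - Ex|/x:

```python
# Leave-one-out cross validation for the average-absolute-error fit.
import numpy as np

x = np.array([1.0, 2.0, 4.0])
y = np.array([1.0, 2.0, 3.0])
E_grid = np.linspace(0.5, 1.2, 70001)

err_E = []
for k in range(len(x)):
    keep = np.arange(len(x)) != k                             # omit point k
    obj = np.abs(y[keep] - E_grid[:, None] * x[keep]).sum(axis=1)
    E_k = E_grid[np.argmin(obj)]                              # fit to the other two points
    err_E.append(abs(y[k] - E_k * x[k]) / x[k])               # error in E at point k

print(err_E, np.mean(err_E))   # [0.25, 0.25, 0.25], average 0.25
```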

Cross validation for max error
Recall y(1)=1, y(2)=2, y(4)=3.
When we omit the first point, we need e2 = -e3, which gives E = 5/6; the error at point one is 1/6, which is also the error in E.
When we omit the second point, e1 = -e3 gives E = 4/5; the error at point two is 2/5, so the error in E is 1/5.
When we omit the third point, obviously E = 1; the error at point three is 1, so the error in E is ¼.
The maximum cross-validation error is ¼. Note that the average absolute error of these fits is (1/6 + 1/5 + 1/4)/3 = 37/180 ≈ 0.21, yet when we minimized the average absolute error it was ¼. How come?
I leave the rms case to you.
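The same leave-one-out sketch with the max-error fit inside the loop reproduces these numbers and the comparison above:

```python
# Leave-one-out cross validation for the max-error (minimax) fit.
import numpy as np

x = np.array([1.0, 2.0, 4.0])
y = np.array([1.0, 2.0, 3.0])
E_grid = np.linspace(0.5, 1.2, 70001)

err_E = []
for k in range(len(x)):
    keep = np.arange(len(x)) != k                             # omit point k
    obj = np.abs(y[keep] - E_grid[:, None] * x[keep]).max(axis=1)
    E_k = E_grid[np.argmin(obj)]                              # minimax fit to the other two
    err_E.append(abs(y[k] - E_k * x[k]) / x[k])               # error in E at point k

print(err_E)                       # ~[1/6, 1/5, 1/4]
print(max(err_E), np.mean(err_E))  # 0.25 and 37/180 ~ 0.206
```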