Curve fit

noise = randn(1,30);
x = 1:1:30;
y = x + noise
% displayed: 3.908 2.825 4.379 2.942 4.5314 5.7275 8.098 … 25.84 27.47 27.00 30.96
[p,s] = polyfit(x,y,1);
yfit = polyval(p,x);
plot(x,y,'+', x,x,'r', x,yfit,'b')

With dense data, the functional form is clear; the fit serves to filter out the noise.
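The second output s of polyfit carries the information needed for error estimates. A minimal sketch (my addition, not on the slide) using MATLAB's documented [yfit,delta] = polyval(p,x,s) form:

[p,s] = polyfit(x, y, 1);
[yfit, delta] = polyval(p, x, s);   % delta: standard error of prediction
% plot the fit with approximate +/- 1 standard error bounds
plot(x, y, '+', x, yfit, 'b', x, yfit+delta, 'b--', x, yfit-delta, 'b--')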

Regression

The process of fitting data with a curve by minimizing the root mean square (rms) error is known as regression. The term originated with the first paper to use it, Galton's study of the "regression" of children's heights toward the mean. The same curve can be obtained from a lot of data or from very little, so confidence in the fit is a major concern.

Surrogates (approximations)

Surrogates originated in experimental optimization, where measurements are very noisy. In the 1920s the approach was used to maximize crop yields by changing inputs such as water and fertilizer. With a lot of data, a curve fit can filter out the noise, so the "approximation" can then be more accurate than the data! The term "surrogate" captures the purpose of the fit: using it instead of the data for prediction. This is most important when data is expensive.

Surrogates for simulation-based optimization

There is now great interest in applying these techniques to computer simulations. Computer simulations are also subject to noise (numerical), but they are exactly repeatable, so if the noise is small they may be viewed as exact. Some surrogates (e.g., polynomial response surfaces) cater mostly to noisy data; others (e.g., kriging) to exact data.

Polynomial response surface approximations

The data is assumed to be contaminated with normally distributed error of zero mean and standard deviation σ. The response surface approximation has no bias error (the true response is assumed to have the fitted polynomial form), and by having more data points than polynomial coefficients it filters out some of the noise. Consequently, the approximation may be more accurate than the data.
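This noise filtering is easy to check by Monte Carlo. A sketch (my addition, not from the slides) that fits a line to noisy samples of y = x many times; with 2 coefficients and 30 points the rms error of the fit against the true response comes out near σ·sqrt(2/30) ≈ 0.26σ, well below the data's rms error of about σ:

% Monte Carlo check: is the fit more accurate than the data?
sigma = 1; n = 30; x = 1:n; reps = 1000;
edata = zeros(reps,1); efit = zeros(reps,1);
for k = 1:reps
    y = x + sigma*randn(1,n);             % noisy data around true line y = x
    yfit = polyval(polyfit(x,y,1), x);    % linear least-squares fit
    edata(k) = sqrt(mean((y - x).^2));    % rms error of the data
    efit(k)  = sqrt(mean((yfit - x).^2)); % rms error of the fit
end
fprintf('mean rms error: data %.3f, fit %.3f\n', mean(edata), mean(efit))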

Fitting an approximation to given data

– Noisy response model: y(x) = f(x) + ε, with ε normally distributed, zero mean, standard deviation σ.
– Data from n_y experiments: (x_i, y_i), i = 1, …, n_y.
– Linear approximation: ŷ(x) = Σ_i b_i ξ_i(x), a linear combination of given basis functions ξ_i (e.g., monomials).
– Rational approximation: a ratio of two such linear combinations.
– Error measures at the data points: e_i = y_i − ŷ(x_i), summarized by the average, rms, and maximum absolute errors e_av, e_rms, e_max.
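These error measures take one line each in MATLAB; a minimal sketch (my addition):

% Error measures for a fit yfit evaluated at the data points of y
e     = y - yfit;           % residuals e_i
e_av  = mean(abs(e));       % average absolute error
e_rms = sqrt(mean(e.^2));   % root mean square error
e_max = max(abs(e));        % maximum absolute error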

Linear regression

– Functional form: ŷ(x) = Σ_i β_i ξ_i(x), with given shape functions ξ_i.
– For a linear approximation, stacking the n_y data points gives y = Xβ + e, where X_ij = ξ_j(x_i).
– The estimate of the coefficient vector β is denoted b.
– Rms error: e_rms = sqrt(e^T e / n_y), with residuals e = y − Xb.
– Minimizing the rms error is equivalent to minimizing e^T e = (y − Xb)^T (y − Xb).
– Differentiating with respect to b and setting the gradient to zero gives the normal equations X^T X b = X^T y.
– Beware of ill-conditioning of X^T X!
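In MATLAB the normal equations can be solved directly, but the backslash operator (a QR-based solve) is less sensitive to ill-conditioning. A sketch (my addition), using the data of the example on the next slide:

% Least-squares line fit y = b0 + b1*x two ways
x = (0:2)';  y = [0; 1; 0];
X = [ones(size(x)), x];        % columns are the basis functions 1 and x
b_normal = (X'*X) \ (X'*y);    % normal equations: X'X b = X'y
b_qr     = X \ y;              % backslash: better conditioned
% both give b = [1/3; 0]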

Example

Data: y(0) = 0, y(1) = 1, y(2) = 0. Fit the linear polynomial y = b_0 + b_1 x. Then

X = [1 0; 1 1; 1 2],  X^T X = [3 3; 3 5],  X^T y = [1; 1],

and solving the normal equations we obtain b_0 = 1/3, b_1 = 0.
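A quick check with polyfit (my addition); note that polyfit returns coefficients from the highest power down:

x = 0:2;  y = [0 1 0];
p = polyfit(x, y, 1)    % returns [b_1 b_0] = [0 0.3333]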

Comparison with alternate fits

– Errors for the regression fit y = 1/3: e_av = 4/9 ≈ 0.444, e_rms = 0.471, e_max = 2/3.
– To minimize the maximum error, obviously y = 0.5; then e_av = e_rms = e_max = 0.5.
– To minimize the average error, y = 0; then e_av = 1/3, e_rms = 0.577, e_max = 1.
For a given fit, what should be the order of e_av, e_rms, and e_max from low to high?
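A sketch (my addition) that reproduces these numbers; all three fits happen to be constants here, since the least-squares line has zero slope:

% Error metrics for the three candidate fits
x = 0:2;  y = [0 1 0];
for c = [1/3, 0.5, 0]     % least squares, minimax, minimum average error
    e = y - c;
    fprintf('y=%.3f: e_av=%.3f  e_rms=%.3f  e_max=%.3f\n', ...
            c, mean(abs(e)), sqrt(mean(e.^2)), max(abs(e)))
end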

Three lines
(The slide's figure: a plot of the data with the three fits y = 1/3, y = 0.5, and y = 0.)
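The figure can be recreated with a short plot (my sketch):

% Plot the data and the three constant fits
x = 0:2;  y = [0 1 0];  xs = linspace(0, 2, 100);
plot(x, y, 'ko', xs, (1/3)*ones(size(xs)), 'b', ...
     xs, 0.5*ones(size(xs)), 'r', xs, zeros(size(xs)), 'g')
legend('data', 'least squares y = 1/3', 'minimax y = 0.5', 'min average y = 0')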