
1 Nonparametric Smoothing Methods and Model Selections T.C. Lin tsair@mail.ntpu.edu.tw Dept. of Statistics National Taipei University 5/4/2005

2 Outline: 1. Motivations 2. Smoothing 3. Degrees of Freedom

3 Motivation: Why nonparametric? Simple linear regression, E(Y|X) = α + βX, assumes that the mean of Y is a linear function of X. (+) easy in computation, description, interpretation, etc. (-) limited range of uses.

4 Note that the hat matrix S = X(X^T X)^{-1} X^T in the least-squares fit of a regression is: I. symmetric and idempotent; II. constant preserving, i.e. S1 = 1; III. tr(S) = # of linearly independent predictors in the model = # of parameters in the model.
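
A minimal numpy sketch checking properties I-III on a random design matrix (the data are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept + 3 predictors
S = X @ np.linalg.inv(X.T @ X) @ X.T                        # hat matrix

print(np.allclose(S, S.T))                      # I.  symmetric
print(np.allclose(S @ S, S))                    # I.  idempotent
print(np.allclose(S @ np.ones(n), np.ones(n)))  # II. constant preserving: S1 = 1
print(np.trace(S))                              # III. tr(S) = 4 = # of parameters
```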

5 If the dependence of E(Y) on X is far from linear, one can extend straight-line regression by adding terms like X^2 to the model, but it is difficult to guess the most appropriate functional form just from looking at the data.

6 Example: Diabetes data (Sockett et al., 1987): a study of the factors affecting patterns of insulin-dependent diabetes mellitus in children. Response: logarithm of C-peptide concentration at diagnosis. Predictors: age and base deficit.

7 [Figure: scatterplots of the diabetes data]

8 [Figure: scatterplots of the diabetes data]

9 What is a smoother? A tool for summarizing the trend of a response Y as a function of one or more predictor measurements X_1, X_2, ..., X_p.

10 Idea of smoothers. The simplest smoothers occur in the case of a categorical predictor. Example: sex (male, female). Example: color (red, blue, green). To smooth Y, simply average the values of Y in each category, as in the sketch below.
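
As a sketch, smoothing against a categorical predictor is just a group mean (the toy data are illustrative):

```python
import numpy as np

sex = np.array(["M", "F", "F", "M", "F"])
y = np.array([3.1, 2.4, 2.9, 3.5, 2.2])

# the smooth at each category is the within-category mean of Y
smooth = {g: y[sex == g].mean() for g in np.unique(sex)}
print(smooth)  # F: 2.5, M: 3.3
```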

11 How about a non-categorical predictor? We usually lack replicates at each predictor value, so we mimic category averaging through "local averaging", i.e. average the Y values in neighborhoods around each target value.

12 Two main uses of smoothers: I. Description: to enhance the visual appearance of the scatterplot of Y vs. X. II. Estimation: to estimate the dependence of the mean of Y on the predictor.

13 Two main decisions to be made in scatterplot smoothing: 1. How to average the response values in each neighborhood? (Which brand of smoother?) 2. How big to make the neighborhoods? (What value of the smoothing parameter?)

14 Scatterplot smoothing. Notation: y = (y_1, y_2, ..., y_n)^T and x = (x_1, x_2, ..., x_n)^T with x_1 < x_2 < ... < x_n. Def: the smooth at a target value x_0 is s(x_0) = S(y | x = x_0).

15 Some scatterplot smoothers: 1. Bin smoothers. Choose cutpoints c_0 < c_1 < ... < c_K and let R_k = the indices of the data points in region [c_{k-1}, c_k); the estimate is the mean of the y_i in the region containing the target point. (-): the estimate is not smooth (it jumps at each cutpoint).
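
A minimal sketch of a bin smoother (the cutpoints are up to the user):

```python
import numpy as np

def bin_smoother(x, y, cutpoints):
    """Replace each y_i by the average of y within its region."""
    region = np.digitize(x, cutpoints)        # region index of each x_i
    return np.array([y[region == r].mean() for r in region])
```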

16 2. Running-mean smoothers (moving average). Choose a symmetric nearest neighborhood N(x_i) = {max(i - k, 1), ..., i, ..., min(i + k, n)} and define the running mean s(x_i) = average of the y_j with j in N(x_i). (+): simple. (-): doesn't work well (wiggly), and is severely biased near the endpoints.
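
A sketch of the running mean over symmetric nearest neighborhoods (x assumed sorted, as in the notation slide; 0-based indices):

```python
import numpy as np

def running_mean(y, k):
    """s(x_i) = mean of y_j over N(x_i) = {max(i-k,0), ..., min(i+k,n-1)}."""
    n = len(y)
    return np.array([y[max(i - k, 0):min(i + k, n - 1) + 1].mean()
                     for i in range(n)])
```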

17 3. Running-line smoothers. Def: s(x_i) = α_i + β_i x_i, where α_i and β_i are the LSE computed from the data points in N(x_i). (-): jagged => use weighted LSE.
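
A sketch of the running-line smoother; each local fit is an ordinary least-squares line over the neighborhood (needs k >= 1 so every window has at least two points):

```python
import numpy as np

def running_line(x, y, k):
    """Fit an LS line to the points in N(x_i) and evaluate it at x_i."""
    n = len(y)
    fit = np.empty(n)
    for i in range(n):
        lo, hi = max(i - k, 0), min(i + k, n - 1) + 1
        slope, intercept = np.polyfit(x[lo:hi], y[lo:hi], 1)
        fit[i] = intercept + slope * x[i]
    return fit
```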

18 4. Kernel smoothers. Def: s(x_0) = Σ_i w_i y_i with weights w_i = (c_0/λ) d((x_0 - x_i)/λ), where d(t) is a smooth even function decreasing in |t|, λ = the bandwidth, and c_0 is chosen so that the weights sum to 1. Example: the Gaussian kernel d(t) = exp(-t^2/2). Example: the Epanechnikov kernel d(t) = (3/4)(1 - t^2) for |t| <= 1. Example: the minimum-variance kernel d(t) = (3/8)(3 - 5t^2) for |t| <= 1.
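
A sketch of kernel smoothing at one target point; normalizing the weights to sum to 1 plays the role of c_0:

```python
import numpy as np

def kernel_smooth(x, y, x0, lam, kernel="gaussian"):
    """Weighted average with weights proportional to d((x0 - x_i)/lam)."""
    t = (x0 - x) / lam
    if kernel == "gaussian":
        w = np.exp(-0.5 * t**2)
    else:                                 # Epanechnikov
        w = 0.75 * np.clip(1 - t**2, 0, None)
    return np.sum(w * y) / np.sum(w)      # normalization: weights sum to 1
```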

19 5. Running-median smoothers. Def: s(x_i) = median of the y_j with j in N(x_i). Makes the smoother resistant to outliers in the data; a nonlinear smoother.
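
The running median is the same neighborhood construction with the mean replaced by a median:

```python
import numpy as np

def running_median(y, k):
    """s(x_i) = median of y_j over the symmetric neighborhood N(x_i)."""
    n = len(y)
    return np.array([np.median(y[max(i - k, 0):min(i + k, n - 1) + 1])
                     for i in range(n)])
```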

20 6. Regression splines. The regions are separated by a sequence of knots. Fit a piecewise polynomial, e.g. piecewise cubic polynomials joined smoothly at these knots. Note: more knots => more flexibility.

21 6a. Piecewise-cubic splines. (1) s is a cubic polynomial in each subinterval; (2) s has two continuous derivatives; (3) s has a third derivative that is a step function with jumps at the knots.

22 (Continued) Its parametric expression: s(x) = Σ_{j=0}^{3} β_j x^j + Σ_{k=1}^{K} θ_k (x - ξ_k)^3_+, where a_+ denotes the positive part of a. It can be rewritten as a linear combination of K + 4 basis functions; de Boor (1978) gives the numerically more stable B-spline basis functions.
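
A sketch of the K + 4 truncated-power basis with an ordinary least-squares fit on it (the knot locations and test function are illustrative; a B-spline basis would be numerically preferable, as the slide notes):

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Columns 1, x, x^2, x^3, (x - xi_k)^3_+  ->  K + 4 basis functions."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - xi, 0, None)**3 for xi in knots]
    return np.column_stack(cols)

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + np.random.default_rng(1).normal(0, 0.2, 50)
B = cubic_spline_basis(x, knots=[0.25, 0.5, 0.75])
beta, *_ = np.linalg.lstsq(B, y, rcond=None)   # regression-spline fit by OLS
fit = B @ beta
```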

23 6b. Natural splines. Def: a regression spline (see 6a) with the added constraint that the function is linear in the boundary regions (beyond the extreme knots).

24 7. Cubic smoothing splines. Find the f that minimizes the penalized residual sum of squares Σ_i (y_i - f(x_i))^2 + λ ∫ (f''(t))^2 dt. The first term measures closeness to the data; the second term penalizes curvature in the function. λ: (1) large values produce a smoother curve; (2) small values produce a wiggly curve.
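
A discrete sketch of the penalized criterion, replacing the integrated squared second derivative with squared second differences (a Whittaker-type smoother, not a true cubic smoothing spline):

```python
import numpy as np

def penalized_smooth(y, lam):
    """Minimize ||y - f||^2 + lam * ||D f||^2, D = second-difference matrix.
    Large lam -> smoother fit; small lam -> wigglier fit."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)            # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```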

25 8. Locally-weighted running-line smoothers (loess), Cleveland (1979). Define N(x_0) = the k nearest neighbors of x_0, and use the tri-cube weight function w_i = [1 - (|x_0 - x_i| / Δ(x_0))^3]^3, with Δ(x_0) = max over N(x_0) of |x_0 - x_j|, in a weighted least-squares line fit.
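
A sketch of loess evaluated at the observed points, assuming distinct x values (tri-cube weights, then a weighted LS line):

```python
import numpy as np

def loess(x, y, k):
    """Locally-weighted running line with tri-cube weights (Cleveland 1979)."""
    n = len(y)
    fit = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]           # the k nearest neighbors of x_i
        u = d[idx] / d[idx].max()         # scaled distances in [0, 1]
        sw = np.sqrt((1 - u**3)**3)       # sqrt of tri-cube weights for WLS
        A = np.column_stack([np.ones(k), x[idx]])
        (a, b), *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
        fit[i] = a + b * x[i]
    return fit
```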


27 Smoothers for multiple predictors. 1. Multiple-predictor smoothers, e.g. a multidimensional kernel smoother (see the figure on the next slide). (-): difficult to interpret and compute. 2. Additive models. 3. Semi-parametric models.

28 [Figure: a kernel smoother with two predictors]

29 "Curse of dimensionality". Neighborhoods with a fixed number of points become less local as the dimension increases (Bellman, 1961). For p = 1, a span of 0.1 needs an interval of length 0.1; for p = 10, a cube capturing 10% of the data needs side length 0.1^{1/10} ≈ 0.8.

30 Additive models. Additive: Y = f_1(X_1) + ... + f_p(X_p) + e. Selection and estimation are usually based on smoothing and backfitting: BRUTO, ACE, Projector, etc. (Hastie & Tibshirani, 1990).

31 Backfitting (see HT 90; a sketch follows below). The BRUTO algorithm (see HT 90) is a forward model-selection procedure that uses a modified GCV, defined later, to choose the significant variables and their smoothing parameters.
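
A minimal sketch of backfitting for an additive model; `smoother(x, r)` can be any scatterplot smoother above that returns fitted values at the data points:

```python
import numpy as np

def backfit(X, y, smoother, n_iter=20):
    """Cycle through predictors, smoothing the partial residuals
    r_j = y - alpha - sum_{l != j} f_l against X_j (HT 90)."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            r = y - alpha - f.sum(axis=1) + f[:, j]  # partial residuals
            f[:, j] = smoother(X[:, j], r)
            f[:, j] -= f[:, j].mean()                # center for identifiability
    return alpha, f
```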

32 (Smoothing in detail) Assume Y_i = f(X_i) + ε_i, where the ε_i are independent of X with E(ε_i) = 0 and Var(ε_i) = σ^2.

33 The bias-variance trade-off. Example: the running mean.


35 Expanding f in a Taylor series, assuming the data are equally spaced with x_i = i/n, and ignoring the remainder R: E[s(x_i)] ≈ f(x_i) + [k(k+1)/(6n^2)] f''(x_i), and Var[s(x_i)] = σ^2/(2k+1),

36 and the optimal k is chosen by minimizing the resulting mean squared error, MSE(k) = [k(k+1) f''(x_i)/(6n^2)]^2 + σ^2/(2k+1); as n → ∞ the minimizing k grows like n^{4/5}, giving MSE of order n^{-4/5}.

37 Automatic selection of smoothing parameters. (1) Average mean squared error: MSE = (1/n) Σ_i E[(s(x_i) - f(x_i))^2]. (2) Average predictive squared error: PSE = (1/n) Σ_i E[(Y_i* - s(x_i))^2], where Y_i* is a new observation at x_i. Note that PSE = σ^2 + MSE.

38 Some estimates of PSE: 1. Cross-validation (CV): CV = (1/n) Σ_i (y_i - s^{(-i)}(x_i))^2, where s^{(-i)}(x_i) indicates the fit at x_i computed by leaving out the i-th data point.
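
A sketch of leave-one-out CV for any smoother exposed as fit-then-predict; `predict(x_train, y_train, x0)` is an assumed interface (e.g. a wrapper around the kernel smoother above):

```python
import numpy as np

def loocv(x, y, predict):
    """CV = (1/n) * sum_i (y_i - s^(-i)(x_i))^2."""
    n = len(y)
    sq_err = 0.0
    for i in range(n):
        keep = np.arange(n) != i                       # drop the i-th point
        sq_err += (y[i] - predict(x[keep], y[keep], x[i]))**2
    return sq_err / n
```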

39 Fact: E(CV) ≈ PSE. Since y_i is independent of the leave-one-out fit s^{(-i)}(x_i), E[(y_i - s^{(-i)}(x_i))^2] = σ^2 + E[(f(x_i) - s^{(-i)}(x_i))^2] ≈ σ^2 + MSE.

40 2. Average squared residual: ASR = (1/n) Σ_i (y_i - s(x_i))^2 is not a good estimate of PSE, because each y_i is used both to form the fit and to assess it, so ASR rewards overfitting.

41 Linear smoothers. Def 1: a smoother is linear if the vector of fitted values is a linear function of y. Def 2: ŝ = S y, where the n × n matrix S is called the smoother matrix (free of y). Examples: running mean, running line, smoothing spline, kernel, loess, and regression splines.

42 The bias-variance trade-off for linear smoothers. For ŝ = S y with Y = f + ε: E(ŝ) = S f, so bias = S f - f, and Cov(ŝ) = σ^2 S S^T.

43 Cross-validation for linear smoothers. If the smoother is constant preserving, leaving out the i-th point amounts to renormalizing the weights, S_ij => S_ij/(1 - S_ii) for j ≠ i => y_i - s^{(-i)}(x_i) = (y_i - s(x_i))/(1 - S_ii), so CV = (1/n) Σ_i [(y_i - s(x_i))/(1 - S_ii)]^2.

44 Generalized cross-validation (GCV): replace each S_ii by its average, tr(S)/n: GCV = (1/n) Σ_i [(y_i - s(x_i))/(1 - tr(S)/n)]^2.
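
For a linear smoother the shortcut above needs no refitting; a sketch given the smoother matrix S:

```python
import numpy as np

def cv_gcv(S, y):
    """CV uses the diagonal S_ii; GCV replaces each S_ii by tr(S)/n."""
    n = len(y)
    r = y - S @ y                                  # residuals y_i - s(x_i)
    cv = np.mean((r / (1 - np.diag(S)))**2)
    gcv = np.mean((r / (1 - np.trace(S) / n))**2)
    return cv, gcv
```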

45 Degrees of freedom of a smoother. Why do we need df? The same data set and the computational power of modern computers are used routinely in the formulation, selection, estimation, diagnosis, and prediction of statistical models. For a linear smoother ŝ = S y, three common definitions are: 1. df = tr(S); 2. df_err = n - tr(2S - S S^T); 3. df_var = tr(S S^T).

46 EDF (Ye, 1998). Idea: a modeling/forecasting procedure is said to be stable if small changes in Y produce small changes in the fitted values.

47 More precisely (EDF): for a small perturbation δ we would like ŷ(Y + δ) ≈ ŷ(Y) + Hδ for some n × n matrix H. => h_ii can be viewed as the slope of the straight line relating the i-th fitted value to the i-th component of the perturbation, and EDF = Σ_i h_ii.

48 Data perturbation procedure. For an integer m > 1 (the Monte Carlo sample size), generate δ_1, ..., δ_m i.i.d. N(0, t^2 I_n), where t > 0 and I_n is the n × n identity matrix. Use the "perturbed" data Y + δ_j to refit the model. For i = 1, 2, ..., n, the slope of the LS line fitted to the pairs (δ_ij, ŷ_i(Y + δ_j)), j = 1, ..., m, gives an estimate of h_ii.
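
A Monte Carlo sketch of the procedure; `refit` is an assumed interface mapping a response vector to its fitted values with everything else held fixed:

```python
import numpy as np

def edf(y, refit, t=0.1, m=200, seed=0):
    """Estimate each h_ii by the LS slope of fitted value vs. perturbation,
    then EDF = sum_i h_ii (Ye, 1998)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    delta = rng.normal(0.0, t, size=(m, n))          # m perturbations of Y
    fits = np.array([refit(y + d) for d in delta])   # refit on each Y + delta_j
    h = np.array([np.polyfit(delta[:, i], fits[:, i], 1)[0] for i in range(n)])
    return h.sum()
```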

49 An application. Table 1: MSE and SD of five models fitted to the lynx data. About SD: fit the same class of models to the first 100 observations, keeping the last 100 for out-of-sample prediction; SD = the standard deviation of the multi-step-ahead prediction errors.

