Introduction to Smoothing Splines Tongtong Wu Feb 29, 2004
Outline
Introduction
  Linear and polynomial regression, and interpolation
  Roughness penalties
Interpolating and smoothing splines
  Cubic splines
  Interpolating splines
  Smoothing splines
  Natural cubic splines
  Choosing the smoothing parameter
Available software
Key Words: roughness penalty, penalized sum of squares, natural cubic splines
Motivation
[Figures: motivating example data sets, including a smoothing spline fit to the y18 data, Spline(y18)]
Introduction
Linear and polynomial regression:
  Each observation influences the fit globally.
  Increasing the polynomial degree happens in discrete steps and cannot be controlled continuously.
Interpolation:
  Unsatisfactory as an explanation of the given data.
Purposes of modeling:
  Provide a summary that explores and presents the relationship between the explanatory variable and the response variable.
  Prediction.
  Study the slowly varying trend in the data, regarding the very local variation in the curve as random noise.
Roughness penalty approach A method for relaxing the model assumptions in classical linear regression along lines a little different from polynomial regression.
Roughness penalty approach
Aims of curve fitting:
  A good fit to the data.
  A curve estimate that does not display too much rapid fluctuation.
Basic idea: make the necessary compromise between these two rather different aims in curve estimation.
Roughness penalty approach
Quantifying the roughness of a curve (g: a twice-differentiable curve on [a,b]):
  An intuitive measure is the integrated squared second derivative, int_a^b g''(t)^2 dt.
  Motivation from a mechanical device: if a thin piece of flexible wood, called a spline, is bent to the shape of the graph of g, then the leading term in the strain energy is proportional to int_a^b g''(t)^2 dt.
  The measure of roughness should not be affected by the addition of a constant or linear function; this leads naturally to a roughness measure that depends on the second derivative of the curve under consideration.
  Other possible measures: max |g''|, the number of inflection points of g.
  The integrated squared second derivative is a global measure of roughness with considerable computational advantages.
Roughness penalty approach
Penalized sum of squares
  S(g) = sum_{i=1}^n (Y_i - g(t_i))^2 + alpha * int_a^b g''(t)^2 dt
  g: any twice-differentiable function on [a,b]
  alpha: smoothing parameter ('rate of exchange' between residual error and local variation)
Penalized least squares estimator: g_hat = argmin S(g) over all twice-differentiable g.
The cost S(g) is determined not only by goodness-of-fit to the data, quantified by the residual sum of squares, but also by roughness. For a given alpha, minimizing S(g) gives the best compromise between smoothness and goodness-of-fit.
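As a concrete illustration of this definition, here is a minimal R sketch: a hypothetical helper (not part of any package) that approximates S(g) for a user-supplied twice-differentiable function g, using a finite-difference second derivative and a Riemann sum for the integral.

# Hypothetical helper: numerically approximate the penalized sum of squares S(g).
# g: any twice-differentiable (vectorized) function; (t, y): the data; alpha: the
# smoothing parameter. The integral of g''(u)^2 over [a, b] is approximated on a grid.
penalized_ss <- function(g, t, y, alpha, a = min(t), b = max(t), m = 1001) {
  rss <- sum((y - g(t))^2)                    # residual sum of squares
  u <- seq(a, b, length.out = m)
  du <- u[2] - u[1]
  d2 <- diff(g(u), differences = 2) / du^2    # finite-difference approximation of g''
  rss + alpha * sum(d2^2) * du                # add the roughness penalty
}

# A straight line has zero roughness, so only its residual sum of squares contributes:
t <- 1:10; y <- sin(t)
penalized_ss(function(x) 0.1 * x, t, y, alpha = 1)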
Roughness penalty approach
Curve for a large value of alpha:
  The main component of S(g) is the roughness penalty term, so the minimizer g_hat displays very little curvature.
  In the limiting case, as alpha tends to infinity, the penalty term is forced to zero and g_hat approaches the linear regression line.
Roughness penalty approach
Curve for a small value of alpha:
  The main component of S(g) is the residual sum of squares, and the curve estimate g_hat tracks the data closely.
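This trade-off can be seen directly with smooth.spline, whose spar argument is (roughly) a rescaled, monotone version of the smoothing parameter; the data below are simulated purely for illustration.

set.seed(1)
x <- seq(0, 1, length = 50)
y <- sin(4 * pi * x) + rnorm(50, sd = 0.3)
plot(x, y)
lines(smooth.spline(x, y, spar = 0.2), col = "red")   # small smoothing: tracks the data
lines(smooth.spline(x, y, spar = 1.5), col = "blue")  # heavy smoothing: nearly a straight line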
Interpolating and Smoothing Splines Cubic splines Interpolating splines Smoothing splines Choosing the smoothing parameter
Cubic Splines
Given a < t_1 < t_2 < ... < t_n < b, a function g is a cubic spline if:
  On each of the intervals (a,t_1), (t_1,t_2), ..., (t_n,b), g is a cubic polynomial.
  The polynomial pieces fit together at the points t_i (called knots) in such a way that g itself and its first and second derivatives are continuous at each t_i, and hence on the whole of [a,b].
Cubic Splines
How to specify a cubic spline: on each interval write the piecewise-polynomial representation g(t) = d_i (t - t_i)^3 + c_i (t - t_i)^2 + b_i (t - t_i) + a_i.
g is a natural cubic spline (NCS) if its second and third derivatives are zero at a and b; this implies d_0 = c_0 = d_n = c_n = 0, so that g is linear on the two extreme intervals [a,t_1] and [t_n,b].
The continuity conditions on g and on its first two derivatives imply various relations between the coefficients.
Natural Cubic Splines
Value-second derivative representation
The piecewise-polynomial form is not the most convenient representation of a natural cubic spline, either for computation or for mathematical discussion. Instead, we can specify an NCS by giving its value and its second derivative at each knot t_i.
Define g_i = g(t_i) and gamma_i = g''(t_i), and set g = (g_1, ..., g_n) and gamma = (gamma_2, ..., gamma_{n-1}) (for an NCS, gamma_1 = gamma_n = 0). These vectors specify the curve g completely.
However, not all possible pairs of vectors (g, gamma) represent a natural cubic spline!
Natural Cubic Splines
Value-second derivative representation
Theorem 2.1: The vectors g and gamma specify a natural cubic spline g if and only if Q^T g = R gamma.
If this condition is satisfied, the roughness penalty satisfies int_a^b g''(t)^2 dt = gamma^T R gamma.
Natural Cubic Splines
Value-second derivative representation
Let h_i = t_{i+1} - t_i for i = 1, ..., n-1.
Q is the n x (n-2) band matrix, with columns indexed by i = 2, ..., n-1, whose nonzero entries are q_{i-1,i} = 1/h_{i-1}, q_{i,i} = -1/h_{i-1} - 1/h_i, and q_{i+1,i} = 1/h_i.
R is the (n-2) x (n-2) symmetric tridiagonal matrix with r_{i,i} = (h_{i-1} + h_i)/3 and r_{i,i+1} = r_{i+1,i} = h_i/6.
Natural Cubic Splines
Value-second derivative representation
R is strictly diagonally dominant, hence positive definite, so we can define K = Q R^{-1} Q^T; the roughness penalty can then be written as g^T K g.
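For illustration, a minimal R sketch (a hypothetical helper, using dense matrices for clarity rather than banded storage) that builds Q, R and the penalty matrix K from a vector of knots, following the definitions above:

# Build the band matrices Q (n x (n-2)) and R ((n-2) x (n-2)) of the
# value-second-derivative representation, and K = Q R^{-1} Q^T.
make_QR <- function(t) {
  n <- length(t)
  h <- diff(t)                              # h_i = t_{i+1} - t_i
  Q <- matrix(0, n, n - 2)                  # column i-1 corresponds to knot index i
  R <- matrix(0, n - 2, n - 2)              # symmetric tridiagonal, positive definite
  for (i in 2:(n - 1)) {
    Q[i - 1, i - 1] <- 1 / h[i - 1]
    Q[i,     i - 1] <- -1 / h[i - 1] - 1 / h[i]
    Q[i + 1, i - 1] <- 1 / h[i]
    R[i - 1, i - 1] <- (h[i - 1] + h[i]) / 3
    if (i < n - 1) R[i - 1, i] <- R[i, i - 1] <- h[i] / 6
  }
  list(Q = Q, R = R, K = Q %*% solve(R) %*% t(Q))
}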
Interpolating Splines
Goal: find a smooth curve that interpolates the points (t_i, z_i), i.e. g(t_i) = z_i for all i.
Theorem 2.2: Suppose n >= 2 and t_1 < ... < t_n. Given any values z_1, ..., z_n, there is a unique natural cubic spline g with knots at the t_i satisfying g(t_i) = z_i for i = 1, ..., n.
Interpolating Splines
Theorem 2.3: The natural cubic spline interpolant is the unique minimizer of int_a^b g''(t)^2 dt over all functions in S2[a,b] that interpolate the data: if g is the natural cubic spline interpolant and g_tilde is any other interpolant in S2[a,b], then int g_tilde''^2 >= int g''^2, with equality only if g_tilde = g.
S2[a,b]: the space of functions on [a,b] that are differentiable and have absolutely continuous first derivative.
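In R, the natural cubic spline interpolant is available through the base function splinefun with method = "natural" (a quick sketch; the knots and values below are made up for illustration):

t <- c(1, 2, 4, 5, 7)
z <- c(3, 1, 4, 2, 5)
g <- splinefun(t, z, method = "natural")  # returns the interpolating function g
g(t)                                      # reproduces z exactly at the knots
curve(g(x), from = 0, to = 8)             # note g is linear beyond the extreme knots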
Smoothing Splines
Penalized sum of squares
  S(g) = sum_{i=1}^n (Y_i - g(t_i))^2 + alpha * int_a^b g''(t)^2 dt
  g: any twice-differentiable function on [a,b]
  alpha: smoothing parameter ('rate of exchange' between residual error and local variation)
Penalized least squares estimator: g_hat = argmin S(g).
Smoothing Splines
1. The curve estimator g_hat is necessarily a natural cubic spline with knots at t_i, for i = 1, ..., n.
Proof sketch: let g_tilde be any twice-differentiable candidate and let g be the NCS that interpolates the values g_tilde(t_i). Then g and g_tilde have the same residual sum of squares, while Theorem 2.3 gives int g''^2 <= int g_tilde''^2, so S(g) <= S(g_tilde), with equality only if g_tilde = g.
Smoothing Splines
2. Existence and uniqueness
Since the minimizer is a natural cubic spline, let g be precisely the vector of its values g(t_i), so that the roughness penalty is g^T K g with K = Q R^{-1} Q^T. Express S(g) in matrix form:
  S(g) = (Y - g)^T (Y - g) + alpha * g^T K g.
Smoothing Splines
2. Theorem 2.4: Let g_hat be the natural cubic spline with knots at the t_i for which g = (I + alpha*K)^{-1} Y. Then, for any g_tilde in S2[a,b], S(g_tilde) >= S(g_hat), with equality only if g_tilde and g_hat are identical. (Since K is non-negative definite, I + alpha*K is strictly positive definite, so g_hat exists and is unique.)
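A direct, if inefficient, way to compute the fitted values is to solve this linear system explicitly, reusing the make_QR helper sketched earlier (dense O(n^3) algebra, for illustration only; the Reinsch algorithm below does the same job in O(n)):

# Fitted values via g = (I + alpha K)^{-1} Y
direct_smooth <- function(t, y, alpha) {
  K <- make_QR(t)$K
  as.vector(solve(diag(length(y)) + alpha * K, y))
}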
Smoothing Splines
3. The Reinsch algorithm
The matrix R + alpha*Q^T Q has bandwidth 5 and is symmetric and strictly positive definite; therefore it has a Cholesky decomposition R + alpha*Q^T Q = L D L^T, where D is a strictly positive diagonal matrix and L is a lower triangular band matrix with L_ij = 0 for j < i-2 and j > i, and L_ii = 1 for all i.
The matrices Q and R can be found in O(n) algebraic operations, provided only the non-zero diagonals are stored; hence the factors L and D also require only linear time for their computation.
Smoothing Splines
3. The Reinsch algorithm for spline smoothing
Step 1: Evaluate the vector Q^T Y.
Step 2: Find the non-zero diagonals of R + alpha*Q^T Q, and hence the Cholesky factors L and D.
Step 3: Solve L D L^T gamma = Q^T Y for gamma by forward and back substitution.
Step 4: Recover the fitted values from g = Y - alpha*Q*gamma.
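A minimal R sketch of these four steps, again reusing the make_QR helper; dense matrix algebra and chol() stand in for the banded O(n) L D L^T factorization, but the arithmetic is the same:

reinsch_smooth <- function(t, y, alpha) {
  QR <- make_QR(t)
  Q <- QR$Q; R <- QR$R
  QtY <- crossprod(Q, y)                          # Step 1: Q^T Y
  U <- chol(R + alpha * crossprod(Q))             # Step 2: factor R + alpha Q^T Q
  gamma <- backsolve(U, forwardsolve(t(U), QtY))  # Step 3: forward and back substitution
  as.vector(y - alpha * Q %*% gamma)              # Step 4: g = Y - alpha Q gamma
}

# It agrees with the direct solve up to rounding error:
y18 <- c(1:3, 5, 4, 7:3, 2 * (2:5), rep(10, 4))
all.equal(reinsch_smooth(seq_along(y18), y18, 1), direct_smooth(seq_along(y18), y18, 1))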
Smoothing Splines
4. Some concluding remarks
The minimizing curve essentially does not depend on a and b, as long as all the data points lie between a and b.
If n = 2, then for any alpha, setting g_hat to be the straight line through the two points (t_1,Y_1) and (t_2,Y_2) reduces S(g) to zero.
If n = 1, the minimizer is no longer unique, since any straight line through (t_1,Y_1) yields S(g) = 0.
Choosing the Smoothing Parameter
Two different philosophical approaches:
  Subjective choice
  Automatic choice, determined from the data:
    Cross-validation
    Generalized cross-validation
Choosing the Smoothing Parameter
Cross-validation: CV(alpha) = (1/n) sum_{i=1}^n ((Y_i - g_hat(t_i)) / (1 - A_ii(alpha)))^2, where A(alpha) is the hat matrix mapping Y to the fitted values.
Generalized cross-validation: GCV(alpha) = (1/n) RSS(alpha) / ((1/n) tr(I - A(alpha)))^2, which replaces the individual leverages A_ii(alpha) by their average.
EDF = tr(I - A(alpha)): the equivalent degrees of freedom for error.
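In smooth.spline the automatic choice is controlled by the cv argument: cv = FALSE (the default) uses generalized cross-validation, while cv = TRUE uses ordinary leave-one-out cross-validation. A short sketch using the y18 data from the examples below:

y18 <- c(1:3, 5, 4, 7:3, 2 * (2:5), rep(10, 4))
fit.gcv <- smooth.spline(y18)             # GCV (the default)
fit.cv  <- smooth.spline(y18, cv = TRUE)  # ordinary cross-validation
c(gcv.df = fit.gcv$df, cv.df = fit.cv$df) # compare the chosen equivalent degrees of freedom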
Available Software
smooth.spline in R
Description: fits a cubic smoothing spline to the supplied data.
Usage:
plot(speed, dist)
cars.spl <- smooth.spline(speed, dist)
cars.spl2 <- smooth.spline(speed, dist, df = 10)
lines(cars.spl, col = "blue")
lines(cars.spl2, lty = 2, col = "red")
Available Software
Example 1
library(modreg)
y18 <- c(1:3, 5, 4, 7:3, 2 * (2:5), rep(10, 4))
xx <- seq(1, length(y18), len = 201)
(s2 <- smooth.spline(y18))   # GCV
(s02 <- smooth.spline(y18, spar = 0.2))
plot(y18, main = deparse(s2$call), col.main = 2)
lines(s2, col = "blue"); lines(s02, col = "orange")
lines(predict(s2, xx), col = 2)
lines(predict(s02, xx), col = 3); mtext(deparse(s02$call), col = 3)
Available Software
Example 1
[Figure: smoothing spline fits to the y18 data]
Available Software
Example 2
data(cars)   ## N = 50, n (number of distinct x) = 19
attach(cars)
plot(speed, dist, main = "data(cars) & smoothing splines")
cars.spl <- smooth.spline(speed, dist)
cars.spl2 <- smooth.spline(speed, dist, df = 10)
lines(cars.spl, col = "blue")
lines(cars.spl2, lty = 2, col = "red")
lines(smooth.spline(cars, spar = 0.1))   ## spar: smoothing parameter (alpha) in (0,1]
legend(5, 120, c(paste("default [C.V.] => df =", round(cars.spl$df, 1)), "s( * , df = 10)"),
       col = c("blue", "red"), lty = 1:2, bg = 'bisque')
detach()
Available Software
Example 2
[Figure: smoothing spline fits to the cars data]
Extensions of the Roughness Penalty Approach
Semiparametric modeling: a simple application to multiple regression, relaxing the assumption of linearity on just one of the explanatory variables.
Generalized linear models (GLM).
Additive models: allow all the explanatory variables to enter nonlinearly.
Reference P.J. Green and B.W. Silverman (1994) Nonparametric Regression and Generalized Linear Models. London: Chapman & Hall