Stable Runge-Free, Gibbs-Free Rational Interpolation Scheme: A New Hope. Qiqi Wang.

Quest for an adaptive, stable, fast-converging interpolation scheme
● Would be immensely useful in engineering, optimization, uncertainty quantification, etc.
● Radial basis function interpolation (Kriging):
– Adaptive,
– Unstable for a fixed absolute shape parameter (see the sketch after this list),
– Only algebraically convergent for a fixed relative shape parameter.
● Grid-based interpolation (Chebyshev, Smolyak):
– Stable and geometrically convergent,
– Not adaptive.
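A minimal numerical sketch (my addition, not from the slides) of the shape-parameter tradeoff mentioned above, assuming a Gaussian kernel, equispaced samples of a Runge-type test function, and shape parameters chosen purely for illustration; it reports the interpolation error and the condition number of the kernel matrix, which deteriorates rapidly as the kernel is flattened.

```python
import numpy as np

def gaussian_rbf_interpolate(x_data, y_data, x_eval, eps):
    """Illustrative Gaussian RBF interpolation with shape parameter eps."""
    K = np.exp(-(eps * (x_data[:, None] - x_data[None, :]))**2)       # kernel matrix
    w = np.linalg.lstsq(K, y_data, rcond=None)[0]                     # least-squares solve; tolerates a numerically singular K
    K_eval = np.exp(-(eps * (x_eval[:, None] - x_data[None, :]))**2)
    return K_eval @ w, np.linalg.cond(K)

f = lambda t: 1.0 / (1.0 + 25.0 * (2.0 * t - 1.0)**2)   # Runge-type test function on [0, 1]
x = np.linspace(0.0, 1.0, 21)
xe = np.linspace(0.0, 1.0, 401)
for eps in (1.0, 5.0, 20.0):
    g, cond = gaussian_rbf_interpolate(x, f(x), xe, eps)
    print(f"eps = {eps:5.1f}:  max error = {np.max(np.abs(g - f(xe))):.2e},  cond(K) = {cond:.1e}")
```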

Runge's phenomenon
● "Impossibility of approximating analytic functions from equispaced samples," Platte, Trefethen, and Kuijlaars, 2010:
– If a scheme tries to be geometrically convergent for all analytic functions on a uniform grid, it must be very ill-conditioned.
(A short numerical demonstration of the phenomenon follows below.)
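To make the phenomenon concrete, here is a small demonstration (my addition, not from the slides) of global polynomial interpolation of Runge's function 1/(1 + 25x^2) at equispaced nodes; SciPy is assumed to be available, and its barycentric evaluator is used so that the growth seen is Runge's phenomenon rather than round-off.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)       # Runge's function on [-1, 1]
x_fine = np.linspace(-1.0, 1.0, 2001)

for n in (5, 10, 20, 40):
    x_nodes = np.linspace(-1.0, 1.0, n + 1)   # equispaced interpolation nodes
    p = BarycentricInterpolator(x_nodes, f(x_nodes))
    err = np.max(np.abs(p(x_fine) - f(x_fine)))
    print(f"degree {n:3d}: max error on [-1, 1] = {err:.2e}")   # grows with n near the interval ends
```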

Gibbs phenomenon
● Existing methods for dealing with Gibbs oscillations either rely on knowing the location of the discontinuity (e.g. reconstruction) or sacrifice the geometric rate of convergence. (A small Fourier illustration follows below.)
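For reference, a minimal illustration (my addition, using Fourier partial sums of a square wave rather than the talk's interpolation setting) of the Gibbs overshoot that refuses to shrink as resolution increases.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)

for N in (9, 33, 129, 513):
    # Partial Fourier sum of the square wave sign(x): (4/pi) * sum over odd k <= N of sin(k x)/k
    S = sum(4.0 / (np.pi * k) * np.sin(k * x) for k in range(1, N + 1, 2))
    print(f"N = {N:3d}: peak value = {np.max(S):.4f}")   # peak stays near 1.09 instead of converging to 1.0
```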

Rational blending of accurate but unstable global approximations
● p_k(x): participant approximations,
● q_k(x): an error estimator of p_k(x),
● g(x) is a weighted average of the approximations; the (locally) most accurate approximation is weighted most heavily. (One plausible form of the blend is sketched below.)
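The blending formula on this slide did not survive transcription. A plausible reconstruction, consistent with the description above, weights each participant by the reciprocal of its error estimate; this form is an assumption on my part, not copied from the slides:

```latex
g(x) \;=\; \frac{\displaystyle \sum_k p_k(x)\,/\,q_k(x)}{\displaystyle \sum_k 1\,/\,q_k(x)} .
```

With this choice, wherever one error estimate q_k(x) is much smaller than the others, g(x) is dominated by the corresponding p_k(x); the limit argument after the next slide uses this form.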

When is g(x) an interpolant?
● Theorem: g(x) is an interpolant if, for every data point, at least one p_k(x) interpolates that data point with q_k(x) vanishing at the point. (A short limit argument is sketched below.)
● Similar ideas appear in WENO and Floater-Hormann interpolation.
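Under the reciprocal-weight reconstruction above (again an assumption, not the slide's own formula), the mechanism behind the theorem is a one-line limit: if q_k(x) tends to zero as x approaches a data point x_i for exactly one participant k, and that participant interpolates the point, then

```latex
\lim_{x \to x_i} g(x)
\;=\; \lim_{x \to x_i}
\frac{p_k(x)/q_k(x) \;+\; \sum_{j \neq k} p_j(x)/q_j(x)}
     {1/q_k(x) \;+\; \sum_{j \neq k} 1/q_j(x)}
\;=\; p_k(x_i) \;=\; f(x_i),
```

because the 1/q_k(x) terms dominate the remaining bounded ones.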

Special case with polynomial participants
● Participant approximations p_k(x) are polynomial interpolants on all contiguous subsets of the data points.
● Error estimate: q_k(x), built from the stencil nodes and a derivative-dependent prefactor e_k (one plausible form is sketched below).
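The error-estimate formula itself was not transcribed. One natural reconstruction (an assumption on my part, motivated by the Lagrange remainder and by the e_k estimation discussed two slides later): if p_k(x) interpolates f at the n_k contiguous nodes S_k, then

```latex
f(x) - p_k(x) \;=\; \frac{f^{(n_k)}(\xi)}{n_k!}\,\prod_{x_i \in S_k}(x - x_i)
\quad\Longrightarrow\quad
q_k(x) \;=\; e_k \prod_{x_i \in S_k} \lvert x - x_i\rvert,
\qquad
e_k \;\approx\; \frac{\sup\,\lvert f^{(n_k)}\rvert}{n_k!}.
```

Note that this q_k(x) vanishes exactly at the nodes of p_k(x), which is what the interpolation theorem of the previous slide requires.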

How fast does g(x) converge?
● Theorem: For every function f analytic on [0,1] there exist positive, finite a and c such that the uniform error of g is bounded in terms of h, the largest grid spacing (a plausible form of the bound is sketched below).
● In other words, the approximation g converges uniformly at a geometric rate to any analytic function, on uniform and almost arbitrary grids.
● An adaptive, stable, fast-converging interpolation scheme.
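The inequality on the slide was not transcribed. A reconstruction consistent with "uniform convergence at a geometric rate" in the largest grid spacing h (an assumption, not copied from the slide) reads:

```latex
\max_{x \in [0,1]} \,\lvert f(x) - g(x) \rvert \;\le\; c\, e^{-a/h}.
```

Since h is roughly 1/N on a quasi-uniform grid with N points, the bound decays geometrically in N.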

Compatibility with “Impossibility” proven by Platte et al.
● If a scheme tries to be geometrically convergent for all analytic functions on a uniform grid, it must be very ill-conditioned (Platte, Trefethen, and Kuijlaars, 2010).
● Ill-conditioned: the linear sensitivity of the interpolant with respect to individual data points grows exponentially.
– This does not directly imply instability for nonlinear schemes.
– It can even be beneficial for schemes such as WENO, which switch to a lower-order scheme upon detecting even small high-order oscillations.
● e_k in our error estimator depends on f and must be estimated.

A Bayesian way of estimating e_k
● For f to be an analytic function, there must exist a finite C bounding the growth of its derivatives (see the reconstruction below).
● A natural model for the growth of the derivatives involves constants C_0 and C.
● Our estimator computes the lower derivatives of f from polynomial interpolations, estimates C_0 and C, then extrapolates to higher derivatives.
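The two formulas on this slide were not transcribed. For a function analytic on [0,1] the standard Cauchy-type bound, and a matching growth model in the constants C_0 and C named on the slide, would read (this reconstruction is an assumption on my part):

```latex
\sup_{x \in [0,1]} \bigl\lvert f^{(n)}(x) \bigr\rvert \;\le\; C_0\, C^{\,n}\, n!
\qquad\text{(analyticity bound)},
\qquad\qquad
\frac{\sup \lvert f^{(n)} \rvert}{n!} \;\approx\; C_0\, C^{\,n}
\quad\text{(growth model)}.
```

A least-squares stand-in for the slide's Bayesian fit (my own sketch, not the talk's estimator): approximate f^(n)(ξ)/n! by Newton divided differences, fit the log-linear growth model, and extrapolate to the order needed for e_k.

```python
import numpy as np

def scaled_derivatives(x, y):
    """Top row of the Newton divided-difference table: d[n] ~ f^(n)(xi)/n!."""
    d = np.array(y, dtype=float)
    out = [abs(d[0])]
    for n in range(1, len(x)):
        d = (d[1:] - d[:-1]) / (x[n:] - x[:-n])
        out.append(abs(d[0]))
    return np.array(out)

def extrapolate_growth(d, n_target):
    """Fit |f^(n)|/n! ~ C0 * C**n on the available orders, then extrapolate to n_target."""
    n = np.arange(len(d))
    mask = d > 0
    slope, intercept = np.polyfit(n[mask], np.log(d[mask]), 1)   # log-linear least squares
    C0, C = np.exp(intercept), np.exp(slope)
    return C0 * C**n_target                                      # crude stand-in for e_k

x = np.linspace(0.0, 1.0, 8)
f = lambda t: 1.0 / (1.0 + 25.0 * (2.0 * t - 1.0)**2)   # Runge-type test function on [0, 1]
d = scaled_derivatives(x, f(x))
print("estimated e_k at order 10:", extrapolate_growth(d, n_target=10))
```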

Interpolating Runge's function
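To reproduce a figure like this one qualitatively, here is a self-contained toy version that combines the hedged reconstructions above (fixed-size contiguous stencils, reciprocal-of-error weights, and a constant e_k in place of the Bayesian estimate); it illustrates the blending idea on Runge's function but is not the scheme presented in the talk.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def blended_interpolant(x_data, y_data, x_eval, stencil=4, e_k=1.0):
    """Toy blend of local polynomial interpolants on contiguous stencils,
    weighted by the reciprocal of q_k(x) = e_k * prod |x - x_i| over the stencil.
    A simplified stand-in for the talk's scheme, not the scheme itself."""
    num = np.zeros_like(x_eval)
    den = np.zeros_like(x_eval)
    for start in range(len(x_data) - stencil + 1):
        nodes = x_data[start:start + stencil]
        p = BarycentricInterpolator(nodes, y_data[start:start + stencil])
        # Error-estimate surrogate; the tiny floor avoids 0/0 exactly at the nodes.
        # (With a single constant e_k the prefactor cancels in the ratio; kept for clarity.)
        q = e_k * np.prod(np.abs(x_eval[:, None] - nodes[None, :]), axis=1) + 1e-300
        num += p(x_eval) / q
        den += 1.0 / q
    return num / den

f = lambda t: 1.0 / (1.0 + 25.0 * t**2)      # Runge's function
x = np.linspace(-1.0, 1.0, 21)               # equispaced data
xe = np.linspace(-1.0, 1.0, 1001)
g = blended_interpolant(x, f(x), xe)
print("max error of the toy blend:", np.max(np.abs(g - f(xe))))   # stays bounded; no Runge-type blow-up
```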

Convergence to Runge's function (plot: L-infinity and L-2 errors versus number of grid points)

Participating sub-interpolants

Interpolating

Convergence (plot: L-infinity and L-2 errors versus number of grid points, excluding the interval [-0.01, 0.01])

Participating Subintervals

Conclusion
● Weighted average of participant approximations:
– For polynomial participant approximations, we can prove uniform geometric convergence on arbitrary grids, given exact function-derivative estimates.
– A Bayesian approach estimates the high-order derivatives in practice.
● Demonstrated to be Runge-free and Gibbs-free.
● Extension to high dimensions: needs an accurate multivariate participant approximation with a reliable error estimate.