REGULARIZATION THEORY OF INVERSE PROBLEMS - A BRIEF REVIEW - Michele Piana, Dipartimento di Matematica, Università di Genova.


PLAN
- Ill-posedness
- Applications
- Regularization theory
- Algorithms

SOME DATES
- 1902 (Hadamard): a problem is ill-posed when its solution is not unique, or does not exist, or does not depend continuously on the data.
- Early sixties: '…The crux of the difficulty was that numerical inversions were producing results which were physically unacceptable but were mathematically acceptable…' (Twomey, 1977).
- 1963 (Tikhonov): one may obtain stability by exploiting additional information on the solution.
- 1979 (Cormack and Hounsfield): the Nobel Prize in Physiology or Medicine is awarded 'for the development of computer assisted tomography'.

EXAMPLES
- Differentiation: given g, find f such that g(x) = ∫_0^x f(t) dt, i.e. f = g′. Edge-detection is ill-posed!
- Image restoration: g = K * f, where g is the blurred image, f the unknown object and K the response function. If K is band-limited, invisible objects exist; a solution exists if and only if ĝ/K̂ defines a square-integrable function.
- Interpolation: given the samples {(x_n, g_n)}, find f such that f(x_n) = g_n.
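As an added illustration (not part of the original slides), the following Python sketch shows the ill-posedness of differentiation numerically: a data perturbation of amplitude 1e-3 produces an O(1) error in the finite-difference derivative. The signal, grid size and noise level are arbitrary choices for the demo.

```python
# Minimal sketch: differentiation amplifies noise (ill-posedness in practice).
# The signal, grid size and noise level are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

g_exact = np.sin(2 * np.pi * x)                     # exact data
g_noisy = g_exact + 1e-3 * rng.standard_normal(n)   # tiny data perturbation

d_exact = np.diff(g_exact) / h                      # forward-difference derivative
d_noisy = np.diff(g_noisy) / h

print("max data error:      ", np.max(np.abs(g_noisy - g_exact)))    # ~1e-3
print("max derivative error:", np.max(np.abs(d_noisy - d_exact)))    # ~1e-3 / h, i.e. O(1)
```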

WHAT ABOUT LEARNING? Learning from examples can be regarded as the problem of approximating a multivariate function from sparse data.
- Data: the training set {(x_i, y_i)}, obtained by sampling according to some probability distribution.
- Unknown: an estimator f such that f(x) predicts y with high probability.
Is this an ill-posed problem? Is this an inverse problem? Next talk!

MATHEMATICAL FRAMEWORK
Let X and Y be Hilbert spaces and A: X → Y linear and continuous. Problem: find f given a noisy g such that A f = g.
- Pseudosolutions: the minimizers of ||A f − g||, or equivalently the solutions of A*A f = A*g.
- Generalized solution f†: the minimum-norm pseudosolution.
- Generalized inverse A†: the operator mapping g to f†.
- Ill-posedness: finding f† is ill-posed when the range of A is not closed, i.e. when A† is unbounded.
Remark: restoring existence and uniqueness through the generalized solution does not restore stability.
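To make the remark concrete, here is a small Python sketch, again not part of the slides: for a discretized Gaussian blurring operator, the generalized solution A†g is well defined, but applying A† to slightly noisy data gives a useless reconstruction because A† is (numerically) unbounded. The blur width, test signal and noise level are invented for the demo.

```python
# Minimal sketch: the generalized (minimum-norm least-squares) solution exists
# and is unique, but it is unstable when A has tiny singular values.
# The toy operator A is a discretized Gaussian blur; all numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))  # Gaussian blur matrix
A /= A.sum(axis=1, keepdims=True)

f_true = np.sin(3 * np.pi * x)
g_exact = A @ f_true
g_noisy = g_exact + 1e-4 * rng.standard_normal(n)

A_dagger = np.linalg.pinv(A)                 # generalized inverse (SVD-based)
err_exact = np.linalg.norm(A_dagger @ g_exact - f_true)
err_noisy = np.linalg.norm(A_dagger @ g_noisy - f_true)

print("reconstruction error, exact data:", err_exact)   # small
print("reconstruction error, noisy data:", err_noisy)   # huge: instability
```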

REGULARIZATION ALGORITHMS
A regularization algorithm for the ill-posed problem A f = g is a one-parameter family {R_λ}_{λ>0} of operators such that:
- each R_λ is linear and continuous;
- semiconvergence: given g_ε, a noisy version of g with ||g_ε − g|| ≤ ε, there exists a choice λ = λ(ε) such that R_{λ(ε)} g_ε → A†g as ε → 0.
Tikhonov method: f_λ = argmin { ||A f − g_ε||² + λ ||f||² }, i.e. f_λ = (A*A + λ I)⁻¹ A* g_ε.
Two major points: 1) how to compute the minimizer; 2) how to fix the regularization parameter.
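A minimal Python sketch of the Tikhonov estimate (A*A + λI)⁻¹A*g_ε follows; the toy operator, signal, noise level and the grid of λ values are assumptions made only for illustration, mirroring the previous sketch.

```python
# Minimal sketch of the Tikhonov method on a toy discretized problem A f = g.
# The toy operator, signal and noise level mirror the previous sketch.
import numpy as np

def tikhonov(A, g, lam):
    """Tikhonov-regularized solution f_lam = (A^T A + lam I)^(-1) A^T g."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ g)

rng = np.random.default_rng(0)
n = 64
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)
f_true = np.sin(3 * np.pi * x)
g_noisy = A @ f_true + 1e-4 * rng.standard_normal(n)

for lam in (1e-2, 1e-5, 1e-8, 1e-12):
    err = np.linalg.norm(tikhonov(A, g_noisy, lam) - f_true)
    print(f"lambda = {lam:.0e}   reconstruction error = {err:.3e}")
# large lambda oversmooths; tiny lambda approaches the unstable generalized solution
```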

COMPUTATION
Two 'easy' cases:
- A is a compact operator with singular system {σ_k; v_k, u_k} (A v_k = σ_k u_k): f_λ = Σ_k [σ_k / (σ_k² + λ)] ⟨g, u_k⟩ v_k.
- A is a convolution operator with kernel K: by Fourier transform, f̂_λ = K̂* ĝ / (|K̂|² + λ).
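The two 'easy' cases translate directly into code. The Python sketch below is an added illustration: the function names tikhonov_svd and tikhonov_fft are my own, and the FFT version assumes a periodic (circulant) discretization of the convolution.

```python
# Minimal sketch of the two 'easy' computational cases for the Tikhonov solution.
import numpy as np

def tikhonov_svd(A, g, lam):
    """Compact operator / matrix case, via the singular system of A:
    f_lam = sum_k  sigma_k / (sigma_k^2 + lam) * <g, u_k> v_k."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + lam)) * (U.T @ g))

def tikhonov_fft(kernel, g, lam):
    """(Periodic) convolution case, via the Fourier transform:
    f_lam_hat = conj(K_hat) * g_hat / (|K_hat|^2 + lam)."""
    K_hat, g_hat = np.fft.fft(kernel), np.fft.fft(g)
    return np.real(np.fft.ifft(np.conj(K_hat) * g_hat / (np.abs(K_hat) ** 2 + lam)))
```

For a circulant matrix A built from the same kernel (i.e. exact periodic convolution), the two routines return the same estimate up to rounding; the FFT route simply avoids forming and decomposing the matrix.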

THE REGULARIZATION PARAMETER
Basic definition: let ε be a measure of the amount of noise affecting the datum (example: ||g_ε − g|| ≤ ε). Then a choice λ = λ(ε) is optimal if the corresponding reconstruction error is of the order of the smallest achievable one.
Discrepancy principle: choose λ by solving ||A f_λ − g_ε|| = ε.
- Generalized to the case of noisy models.
- Often oversmoothing.
Other methods: GCV, L-curve…
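Here is a small Python sketch of the discrepancy principle, again on the toy blur problem of the earlier sketches; the candidate grid of parameters and the assumption that the noise level ε is known are illustrative choices. Since the Tikhonov residual grows with λ, the sketch scans λ from large to small and keeps the first value whose residual falls below ε.

```python
# Minimal sketch of Morozov's discrepancy principle on the toy blur problem.
# The candidate grid, noise level and toy operator are illustrative choices.
import numpy as np

def tikhonov(A, g, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ g)

def discrepancy_choice(A, g, eps, lams=np.logspace(2, -12, 150)):
    """Return the largest lambda (scanning from large to small) whose residual
    ||A f_lam - g|| does not exceed the noise level eps."""
    for lam in lams:
        f_lam = tikhonov(A, g, lam)
        if np.linalg.norm(A @ f_lam - g) <= eps:
            return lam, f_lam
    return lams[-1], tikhonov(A, g, lams[-1])   # fallback: smallest candidate

rng = np.random.default_rng(0)
n = 64
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)
f_true = np.sin(3 * np.pi * x)
noise = 1e-4 * rng.standard_normal(n)
g_noisy = A @ f_true + noise

eps = np.linalg.norm(noise)                  # noise level, assumed known
lam, f_lam = discrepancy_choice(A, g_noisy, eps)
print("chosen lambda:", lam, "  error:", np.linalg.norm(f_lam - f_true))
```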

ITERATIVE METHODS
Iterative methods can be used:
- to solve the Tikhonov minimum problem;
- as regularization algorithms in their own right.
In iterative regularization schemes:
- the role of the regularization parameter is played by the iteration number;
- the computational effort is affordable even for non-sparse matrices;
- new, tighter prior constraints can be introduced.
Example: projection onto a convex subset C of the source space, e.g. the projected Landweber iteration f_{k+1} = P_C( f_k + τ A*(g_ε − A f_k) ). Open problem: is this a regularization method?
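The Python sketch below is an added illustration of a projected Landweber iteration with projection onto the nonnegative cone; the choice of constraint set, step size and iteration counts are assumptions for the demo. The iteration number acts as the regularization parameter: with noisy data the error improves up to a point and then early stopping is needed.

```python
# Minimal sketch of a projected Landweber iteration on the toy blur problem.
# The projection (nonnegativity), step size and iteration counts are
# illustrative choices; the iteration number plays the role of the
# regularization parameter (early stopping).
import numpy as np

def projected_landweber(A, g, n_iter, project=lambda f: np.maximum(f, 0.0)):
    """f_{k+1} = P_C( f_k + tau * A^T (g - A f_k) ), with tau < 2 / ||A||^2."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        f = project(f + tau * A.T @ (g - A @ f))
    return f

rng = np.random.default_rng(0)
n = 64
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)
f_true = np.abs(np.sin(3 * np.pi * x))       # nonnegative object, so C = {f >= 0} is sensible
g_noisy = A @ f_true + 1e-4 * rng.standard_normal(n)

for k in (10, 100, 1000, 100000):
    err = np.linalg.norm(projected_landweber(A, g_noisy, k) - f_true)
    print(f"{k:6d} iterations   error = {err:.3e}")
# with noisy data the error eventually stops improving and can grow (semiconvergence)
```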

CONCLUSIONS
- There are plenty of ill-posed problems in the applied sciences.
- Regularization theory is THE framework for solving linear ill-posed problems.
- What about non-linear ill-posed problems?