Indirect imaging of stellar non-radial pulsations
Svetlana V. Berdyugina
University of Oulu, Finland; Institute of Astronomy, ETH Zurich, Switzerland

Overview
- Inversion methods in astrophysics: the inverse problem, the maximum likelihood method, regularization
- Stellar surface imaging: line profile distortions, localization of inhomogeneities
- Imaging of stellar non-radial pulsations: temperature variations, velocity field
- Mode identification: sectoral modes ($|m| = \ell$), symmetric tesseral modes ($\ell - |m|$ even), antisymmetric tesseral modes ($\ell - |m|$ odd), zonal modes ($m = 0$)

Inversion methods in astrophysics
- Inverse problem
- Maximum likelihood method
- Regularization: maximum entropy, Tikhonov, spherical harmonics, Occamian approach

Inverse problem
Determine the true properties of phenomena (objects) from their observed effects. In this sense, all problems in astronomy are inverse problems.

Inverse problem
[Diagram: object → response operator → data]
Trial-and-error method:
- The response operator (PSF, model) is known
- Model the data directly while assuming various properties of the object
Inversion:
- True inversion, $x = R^{-1} y$ for data $y = R\,x$: the solution is unstable due to noise; the problem is ill-posed
- Parameter estimation: fighting the noise
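As a minimal sketch of this instability (Python with NumPy; the Gaussian object, PSF, and noise level below are invented for illustration), direct division by the response operator in Fourier space blows up the noise at frequencies where the response is nearly zero:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 256)
obj = np.exp(-(x / 0.2) ** 2)                  # hypothetical true object
psf = np.exp(-(x / 0.05) ** 2)
psf /= psf.sum()                               # normalized response operator

# forward problem: data = response * object + noise
data = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(psf)))
data += 1e-3 * rng.standard_normal(x.size)

# "true inversion": divide by the response in Fourier space
naive = np.real(np.fft.ifft(np.fft.fft(data) / np.fft.fft(psf)))
print(np.abs(obj).max(), np.abs(naive).max())  # the naive solution explodes
```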

Inverse problem
Estimate the true properties of phenomena (objects) from their observed effects: a parameter estimation problem.

Maximum likelihood method
- Probability density function (PDF): $p(y_i \mid \theta)$ for data $y_i$ and parameters $\theta$
- Normal distribution: $p(y_i \mid \theta) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left[-\frac{(y_i - f_i(\theta))^2}{2\sigma_i^2}\right]$
- Likelihood function: $L(\theta) = \prod_i p(y_i \mid \theta)$
- Maximum likelihood: $\hat{\theta} = \arg\max_\theta L(\theta)$

Maximum likelihood method
For normally distributed errors, maximizing the likelihood is equivalent to residual minimization: $\max_\theta L(\theta) \Leftrightarrow \min_\theta \chi^2(\theta)$, with
$\chi^2(\theta) = \sum_i \left(\frac{y_i - f_i(\theta)}{\sigma_i}\right)^2.$
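For a concrete illustration (a Python/NumPy sketch with an invented straight-line model and noise level), the ML estimate under Gaussian noise is exactly the least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
sigma = 0.1
y = 2.0 + 3.0 * t + sigma * rng.standard_normal(t.size)   # simulated data

# model y = a + b*t; for Gaussian noise, maximizing L(theta) is exactly
# minimizing chi^2, i.e. ordinary least squares on the design matrix
A = np.column_stack([np.ones_like(t), t])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
chi2 = np.sum(((y - A @ theta) / sigma) ** 2)
print(theta, chi2)   # estimates near (2, 3); chi2 comparable to len(t)
```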

Maximum likelihood method
The maximum likelihood solution is:
- Unique
- Unbiased
- Minimum variance
- but UNSTABLE!
Remedy: reduce the overall probability demanded of a solution and accept every solution that passes statistical tests:
- $\chi^2$ test
- Kolmogorov test
- Mean information
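A sketch (Python with SciPy; the residuals are simulated here) of how such tests score a candidate solution: a solution is statistically acceptable when its standardized residuals are consistent with the assumed noise distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
resid = rng.standard_normal(50)      # stand-in standardized residuals of a fit

chi2_stat = np.sum(resid ** 2)
p_chi2 = stats.chi2.sf(chi2_stat, df=resid.size)   # chi-square test
p_ks = stats.kstest(resid, "norm").pvalue          # Kolmogorov(-Smirnov) test
print(p_chi2, p_ks)  # large p-values: the solution is within the noise level
```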

Maximum likelihood method
Accepting the multitude of solutions whose probability exceeds the chosen threshold gives a new solution that is:
- Biased only within the noise level
- Stable
- but NOT UNIQUE!
[Figure: likelihood plotted against the solutions, with the acceptance threshold marked]

Regularization
To provide a unique solution:
- Invoke additional constraints
- Assign special properties to the new solution
Maximize a functional of the form $\Phi(f) = \ln L(f) + \Lambda\, R(f)$, where $R$ is the regularization functional and $\Lambda$ its weight. The regularized solution is thereby forced to possess the assigned properties.

Bayesian approach
Thomas Bayes (1702–1761): the posterior and prior probabilities are related by $p(f \mid y) \propto p(y \mid f)\, p(f)$. The prior $p(f)$ carries a priori information on the solution; using such a priori constraints is the Bayesian approach.
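A toy illustration (Python/NumPy sketch; the data, prior width, and parameter grid are invented) of the posterior as likelihood times prior, evaluated on a grid:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.2
data = 1.5 + sigma * rng.standard_normal(20)    # measurements of an unknown mean

mu = np.linspace(0.0, 3.0, 601)                 # parameter grid
log_like = -0.5 * np.sum((data[None, :] - mu[:, None]) ** 2, axis=1) / sigma**2
prior = np.exp(-0.5 * ((mu - 1.0) / 0.5) ** 2)  # Gaussian prior belief about mu

post = np.exp(log_like - log_like.max()) * prior   # posterior ~ likelihood * prior
post /= post.sum() * (mu[1] - mu[0])               # normalize on the grid
print(mu[np.argmax(post)])   # MAP estimate, pulled from the data toward the prior
```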

Maximum entropy regularization
Entropy:
- In physics: a measure of "disorder"
- In mathematics (Shannon): a measure of "uninformativeness"
Maximum entropy method (MEM; Skilling & Bryan, 1984): regularize with the entropy $S(f) = \sum_k \left( f_k - m_k - f_k \ln \frac{f_k}{m_k} \right)$, where $m_k$ is a default model.
The MEM solution has:
- The largest entropy (within the noise level of the data)
- Minimum information (minimum correlation between parameters)
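A sketch of MEM in one dimension (Python with SciPy; the response operator, default model, and regularization weight are invented, and in practice the weight would be tuned so the fit stays within the noise level):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 40
i = np.arange(n)
K = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)                # smoothing response operator

truth = np.zeros(n)
truth[12], truth[28] = 1.0, 0.6                  # two sharp features
sigma = 0.01
data = K @ truth + sigma * rng.standard_normal(n)

m = np.full(n, 0.1)                              # default model m_k

def objective(u, lam=0.1):                       # lam: hand-tuned weight
    f = np.exp(u)                                # f = e^u enforces positivity
    chi2 = np.sum(((K @ f - data) / sigma) ** 2)
    entropy = np.sum(f - m - f * np.log(f / m))  # Skilling & Bryan entropy
    return chi2 - lam * entropy                  # fit the data, keep entropy high

f_mem = np.exp(minimize(objective, np.log(m), method="L-BFGS-B").x)
```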

Tikhonov regularization
Tikhonov (1963); applied to stellar surface imaging by Goncharsky et al. (1982): regularize with the gradient norm, $R(f) = -\left\| \nabla f \right\|^2$.
The TR solution has:
- The least gradient (within the noise level of the data)
- i.e., it is the smoothest solution (maximum correlation between parameters)
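Because the Tikhonov functional is quadratic, the regularized solution even has a closed form via the normal equations; a self-contained 1D sketch (Python/NumPy, with an invented smooth test object and a hand-tuned weight):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
i = np.arange(n)
K = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)                # smoothing response operator

truth = np.sin(2 * np.pi * i / n) ** 2           # smooth test object
sigma = 0.01
data = K @ truth + sigma * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)                   # discrete gradient operator
lam = 100.0                                      # weight, tuned to the noise level
# minimize chi^2 + lam*||D f||^2  ->  (K'K/sigma^2 + lam D'D) f = K'd/sigma^2
A = K.T @ K / sigma**2 + lam * (D.T @ D)
f_tik = np.linalg.solve(A, K.T @ data / sigma**2)
```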

Spherical harmonics regularization
Piskunov & Kochukhov (2002): multipole regularization, which penalizes the departure of the solution from a low-order spherical harmonics expansion.
The MPR solution is:
- Closest to the spherical harmonics expansion
- Justifiable by the physics of the phenomenon
Mixed regularization combines functionals, e.g. $R(f) = \Lambda_1 R_{\rm mp}(f) + \Lambda_2 R_{\rm T}(f)$.
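In one dimension, a rough analog of the multipole penalty (a hypothetical sketch; low-order Legendre polynomials stand in for spherical harmonics) projects out the part of the solution that the expansion cannot represent:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 40)
V = np.polynomial.legendre.legvander(x, 3)   # P_0..P_3 basis, shape (40, 4)
Q = V @ np.linalg.pinv(V)                    # orthogonal projector onto the basis
P = np.eye(x.size) - Q                       # picks out the non-harmonic part

# Adding  lam1 * (P.T @ P)  to the normal equations of the Tikhonov sketch
# penalizes departures from the low-order expansion; keeping a second weight
# lam2 on D.T @ D as well gives a mixed regularization.
```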

Occamian approach
William of Occam's Razor: the simplest explanation of any problem is the best explanation.
Terebizh & Biryukov (1994, 1995):
- The simplest solution (within the noise level of the data)
- No a priori information
Fisher information matrix (for normally distributed errors): $F_{jk} = \sum_i \frac{1}{\sigma_i^2}\, \frac{\partial f_i}{\partial \theta_j}\, \frac{\partial f_i}{\partial \theta_k}$

Occamian approach
An orthogonal transform of the problem (eigen-decomposition of the Fisher information matrix) yields principal components; keeping only the components that are actually constrained by the data gives the simplest solution, which is:
- Unique
- Stable
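A sketch of this idea via the singular value decomposition (Python/NumPy; the problem setup and the noise-based cutoff are invented for illustration): only the principal components that rise above the noise are kept.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40
i = np.arange(n)
K = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)            # smoothing response operator

truth = np.sin(2 * np.pi * i / n) ** 2
sigma = 0.01
data = K @ truth + sigma * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(K)                  # orthogonal transform of the problem
keep = s > sigma * np.sqrt(n)                # drop components buried in noise
f_occ = Vt[keep].T @ ((U[:, keep].T @ data) / s[keep])
print(keep.sum(), "of", n, "principal components retained")
```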

Key issues
- The inverse problem is to estimate the true properties of phenomena (objects) from observed effects
- The maximum likelihood method results in a unique but unstable solution
- Statistical tests provide a multitude of stable solutions
- Regularization is needed to choose a unique solution among them
- The regularized solution is forced to possess the assigned properties
- MEM solution → minimum correlation between parameters
- TR solution → maximum correlation between parameters
- MPR solution → closest to the spherical harmonics expansion
- OA solution → simplest among the statistically acceptable solutions