MathematicalMarketing Slide 3c.1 Mathematical Tools

Chapter 3: Part c – Parameter Estimation

We will be discussing:
- Nonlinear Parameter Estimation
- Maximum Likelihood Parameter Estimation

(These topics are needed for Chapters 9, 12, 14 and 15)

MathematicalMarketing Slide 3c.2 Mathematical Tools

Why Do We Need Nonlinear Parameter Estimation?

With the linear model, y = Xβ + e, we end up with a closed-form, algebraic solution. Sometimes there is no algebraic solution for the unknowns in a marketing model. Suppose the data depend in a nonlinear way on an unknown parameter θ, let's say y = f(θ) + e. To minimize e′e, we need to find the spot at which de′e/dθ = 0. But there may be no way to get θ by itself on one side of an equation, with only quantities that we know on the other.

MathematicalMarketing Slide 3c.3 Mathematical Tools

Steps to the Algorithm of Nonlinear Estimation

1. We take a stab at the unknown, inventing a starting value for it.
2. We assess the derivative of the objective function at the current value of θ. If the derivative is not zero, we modify θ by moving it in the direction in which the derivative gets closer to 0. We keep repeating this step until the derivative arrives at zero.

MathematicalMarketing Slide 3c.4 Mathematical Tools

A Picture of Nonlinear Estimation

[Figure: the objective function f plotted against θ, with two candidate values θ1 and θ2]

If the derivative is positive, we should move to the left (go more negative). If the derivative is negative, we should move to the right (go more positive). This suggests the rule

θ(new) = θ(old) − α · de′e/dθ,

where α is a small positive step size.
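The iterative rule above can be sketched in a few lines of code. This is a minimal illustration, not the slides' own implementation: the model y = f(θ) + e with f(θ) = exp(θ·x), the data, the step size, and the numerical derivative are all hypothetical choices made for the example.

```python
import numpy as np

# Hypothetical nonlinear model y = f(theta) + e with f(theta) = exp(theta * x).
x = np.array([0.5, 1.0, 1.5, 2.0])
true_theta = 0.7
y = np.exp(true_theta * x)           # noise-free data, for illustration only

def sse(theta):
    """The objective e'e: the sum of squared residuals."""
    e = y - np.exp(theta * x)
    return e @ e

theta = 0.0                          # step 1: invent a starting value
alpha = 0.005                        # a small positive step size
for _ in range(10000):
    # step 2: a numerical derivative of e'e at the current theta
    h = 1e-6
    d = (sse(theta + h) - sse(theta - h)) / (2 * h)
    if abs(d) < 1e-10:               # stop once the derivative reaches zero
        break
    theta -= alpha * d               # move against the sign of the derivative

print(round(theta, 4))               # recovers the value used to generate y
```

Moving θ by a multiple of the (negative) derivative is exactly the left/right rule in the picture: a positive derivative pushes θ down, a negative one pushes it up.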

MathematicalMarketing Slide 3c.5 Mathematical Tools

A Brief Introduction to Maximum Likelihood

- ML is an alternative philosophy to Least Squares.
- If ML estimators exist, they will be consistent.
- If ML estimators exist, they will be normally distributed.
- If ML estimators exist, they will be asymptotically efficient.
- ML leads to a Chi Square test of the model.
- The covariance matrix for ML estimators can be calculated from the second-order derivatives.
- Marketing scientists really like ML estimators.

MathematicalMarketing Slide 3c.6 Mathematical Tools

The Likelihood Principle

We wish to maximize the probability of the data given the model. We will start with the example of estimating the population mean, μ.

[Figure: a density Pr(x) plotted against x, centered at μ]

Assume we draw a sample of 3 values: x1 = 4, x2 = 5 and x3 = 6.

MathematicalMarketing Slide 3c.7 Mathematical Tools

The Likelihood of the Sample

What would be the likelihood of observing x1, x2 and x3 given that μ = 212? How about if μ = 5? With ML we choose an estimate for μ that maximizes the likelihood of the sample. The sample that we observed was presumably more likely, on average, than the samples that we did not observe. We should make its probability as large as possible.
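The comparison on this slide can be made concrete with a short sketch. It assumes, as the picture on the previous slide suggests, independent draws from a normal population; the standard deviation is not given on the slides, so σ = 1 is a hypothetical choice for illustration.

```python
import math

sample = [4.0, 5.0, 6.0]             # the observed values x1, x2, x3

def likelihood(mu, sigma=1.0):
    """Joint density of the sample under independent normal draws."""
    L = 1.0
    for x in sample:
        L *= math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return L

print(likelihood(212.0))             # essentially zero: the sample is wildly unlikely
print(likelihood(5.0))               # far larger: mu = 5 is the sample mean
```

A guess of μ = 212 makes the observed sample astronomically improbable, while μ = 5 makes it as probable as any normal population can; that is the likelihood principle at work.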

MathematicalMarketing Slide 3c.8 Mathematical Tools

Steps to ML Estimation

1. Derive the probability of an observation given the parameters, Pr(yi | θ).
2. Derive the likelihood of the sample, which typically involves multiplication when we assume independent sampling: l = Pr(y1 | θ) · Pr(y2 | θ) ··· Pr(yn | θ).
3. Derive the likelihood under the general alternative that the data are arbitrary, lA.
4. Pick the elements of the unknown parameter vector θ so that −2 ln(l / lA) is as small as possible.
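The first, second, and fourth steps above can be sketched as follows. The sketch reuses the three-observation sample from the earlier slides and again assumes, hypothetically, a normal population with known σ = 1; in practice one works with the log likelihood, since summing logs is equivalent to (and numerically safer than) multiplying probabilities.

```python
import math

sample = [4.0, 5.0, 6.0]

def log_likelihood(mu):
    # Step 1 gives log Pr(yi | mu) for each observation; step 2's product
    # becomes a sum under independent sampling once we take logs.
    return sum(-0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi)
               for x in sample)

# Step 4: pick the mu that makes the likelihood as large as possible
# (here by a crude grid search rather than by setting a derivative to zero).
candidates = [i / 100 for i in range(0, 1001)]   # 0.00, 0.01, ..., 10.00
mu_hat = max(candidates, key=log_likelihood)
print(mu_hat)                                    # the sample mean, 5.0
```

That the maximizer turns out to be the sample mean is no accident: for a normal population with known variance, the ML estimate of μ is always the sample mean.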