Today (2/16/16) Learning objectives (Sections 5.1, 5.2, and 5.3):


Today (2/16/16) Learning objectives (Sections 5.1, 5.2, and 5.3):
 - Apply the method of maximum likelihood to determine the most probable set of parameters in a linear fit.
 - Perform both weighted and unweighted linear fits.
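The lecture's fits are done in MathCad; as a language-neutral sketch of the same maximum-likelihood result, the normal equations for a straight-line model y = a0 + a1·x can be written in Python with NumPy. The function and check data below are illustrative, not taken from the lecture:

```python
import numpy as np

def linear_fit(x, y, sigma=None):
    """Least-squares fit of y ~ a0 + a1*x.

    sigma=None gives every point unit weight (an unweighted fit);
    otherwise each point is weighted by w_i = 1/sigma_i**2, the
    maximum-likelihood weighting for Gaussian measurement errors.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.ones_like(y) if sigma is None else 1.0 / np.asarray(sigma, float) ** 2
    # Sums appearing in the normal equations for the two-parameter model
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    a0 = (Sxx * Sy - Sx * Sxy) / delta   # intercept
    a1 = (S * Sxy - Sx * Sy) / delta     # slope
    return a0, a1

# Noise-free check: both fits recover the coefficients exactly
x = np.arange(5.0)
print(linear_fit(x, 3.0 + 2.0 * x))                         # ≈ (3.0, 2.0)
print(linear_fit(x, 3.0 + 2.0 * x, sigma=np.full(5, 0.5)))  # same line
```

With equal sigmas the weights cancel out of the normal equations, which is why a weighted fit with identical variances reduces to the unweighted one.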

Worked problem: Consider the following attempt to quantify the intensity and offset in a fluorescence lifetime measurement according to the linear equation below.
 - Import the data in the file named "y-data.dat"; the counts are Poisson-distributed.
 - Using the measured counts as initial estimates of the variance, write a MathCad file to invert the problem and solve for the best-fit values of a0 and a1.
 - Refine your estimates of the variance based on the equation above rather than the experimental values of y.
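Since "y-data.dat" is not reproduced in this transcript, the sketch below (Python rather than MathCad) generates synthetic Poisson counts around an assumed straight line, with made-up coefficients a0 = 10 and a1 = 5, and performs the first-pass fit using the measured counts themselves as the variance estimates (for Poisson data the variance equals the mean, so σ_i² ≈ y_i):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "y-data.dat": Poisson counts around a0 + a1*x
# (the true values a0 = 10, a1 = 5 are invented for this illustration)
x = np.arange(1.0, 21.0)
y = rng.poisson(10.0 + 5.0 * x).astype(float)

# First pass: weight each point by 1/sigma_i^2 with sigma_i^2 = y_i
w = 1.0 / np.maximum(y, 1.0)      # guard against zero counts
S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
delta = S * Sxx - Sx ** 2
a0 = (Sxx * Sy - Sx * Sxy) / delta
a1 = (S * Sxy - Sx * Sy) / delta
print(a0, a1)   # should land near the assumed 10 and 5
```

These first-pass coefficients are only as good as the σ_i² ≈ y_i approximation, which is what the refinement step on the slide addresses.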

Worked problem (continued):
 - Refine your estimates of the variance based on the equation above rather than the experimental values of y.
 - Substitute your new values of the coefficients and update the expression for σ² until a0 and a1 stop changing with additional iterations.
 - Compare your result based on Poisson weighting of the measurements with the result of an unweighted fit (i.e., one that assumes each measurement has identical variance), which is what most software packages (Excel, MathCad, etc.) perform.
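The iteration described above can be sketched as follows, again in Python as a stand-in for the MathCad worksheet, with invented data around a0 = 10, a1 = 5. Each pass replaces the variance estimate with the model prediction a0 + a1·x and refits until the coefficients stop changing, and the converged result is then set against an unweighted fit:

```python
import numpy as np

def wfit(x, y, var):
    """Straight-line fit with per-point variances var (weights 1/var)."""
    w = 1.0 / var
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    d = S * Sxx - Sx ** 2
    return (Sxx * Sy - Sx * Sxy) / d, (S * Sxy - Sx * Sy) / d

rng = np.random.default_rng(1)
x = np.arange(1.0, 21.0)
y = rng.poisson(10.0 + 5.0 * x).astype(float)   # synthetic Poisson counts

# Pass 1: measured counts as the variance; later passes: model prediction
a0, a1 = wfit(x, y, np.maximum(y, 1.0))
for _ in range(50):
    a0_new, a1_new = wfit(x, y, np.maximum(a0 + a1 * x, 1e-9))
    if abs(a0_new - a0) < 1e-10 and abs(a1_new - a1) < 1e-10:
        break
    a0, a1 = a0_new, a1_new

# Unweighted fit (identical variances), as spreadsheet built-ins assume
u0, u1 = wfit(x, y, np.ones_like(y))
print("Poisson-weighted:", (a0, a1))
print("Unweighted:      ", (u0, u1))
```

The two answers differ because the unweighted fit lets the noisier high-count points pull on the line just as hard as the quieter low-count points, whereas the Poisson weighting down-weights them in proportion to their variance.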

Next time: Introduction to χ²-space.