Statistical Inference
Statistical inference is concerned with using sample data to draw conclusions about unknown population parameters. For example, suppose we have a random variable X which has a normal distribution with unknown mean and variance. Examples of statistical inference are hypothesis tests about the population mean and the construction of intervals within which the mean is likely to lie.

Hypothesis Tests
To conduct a hypothesis test we need the following:
1. A hypothesis to be tested (usually described as the null hypothesis) and an alternative against which it can be tested.
2. A test statistic whose distribution is known under the null hypothesis.
3. A decision rule which tells us when to reject the null hypothesis and when not to reject it.
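As a rough sketch of how these three ingredients fit together in code, the structure below is purely illustrative: the statistic and critical value are placeholders to be filled in by whichever specific test is used.

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import statistics


@dataclass
class HypothesisTest:
    statistic: Callable[[Sequence[float]], float]  # 2. test statistic with a known null distribution
    critical_value: float                          # 3. decision rule: reject when |statistic| is too large

    def reject_null(self, data: Sequence[float]) -> bool:
        # 1. the null hypothesis is embedded in how the statistic is computed
        return abs(self.statistic(data)) > self.critical_value


# Hypothetical example: a crude z-type statistic for a null mean of 0 with known sd of 1.
test = HypothesisTest(
    statistic=lambda xs: statistics.mean(xs) / (1.0 / len(xs) ** 0.5),
    critical_value=1.96,
)
print(test.reject_null([0.3, -0.1, 0.4, 0.2]))
```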

Hypotheses can have either one-sided or two-sided alternatives. Two-sided alternatives lead to two-tailed tests, while one-sided alternatives lead to one-tailed tests.

The following outcomes are possible when we perform a test:

                          Null hypothesis is true    Alternative hypothesis is true
Accept null hypothesis    Correct decision           Type II error
Reject null hypothesis    Type I error               Correct decision

When we choose our decision rule we would like to avoid making errors of either kind.

It is relatively easy to control the probability of making a Type I error. We choose a critical value which fixes a particular probability of falsely rejecting the null (in a one-tailed test this probability lies entirely in one tail of the distribution; in a two-tailed test it is split between the two tails). This probability is known as the size of the test. Note that the smaller we make the probability of a Type I error, the larger the probability of a Type II error becomes.
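A minimal Monte Carlo sketch in Python (NumPy/SciPy), with a made-up population mean, standard deviation, and sample size, illustrating that a two-tailed test of size 0.05 falsely rejects a true null in roughly 5% of repeated samples:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu0, sigma, n, alpha = 10.0, 2.0, 30, 0.05    # hypothetical population and sample size
z_crit = norm.ppf(1 - alpha / 2)              # two-tailed 5% critical value (about 1.96)

rejections = 0
n_trials = 10_000
for _ in range(n_trials):
    x = rng.normal(mu0, sigma, n)             # data generated with the null actually true
    z = (x.mean() - mu0) / (sigma / np.sqrt(n))
    if abs(z) > z_crit:                       # reject when |z| exceeds the critical value
        rejections += 1

print(rejections / n_trials)                  # close to 0.05, the size of the test
```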

The power of a test is defined as the probability of rejecting the null hypothesis when the alternative is true, i.e. power = 1 − P(Type II error). Therefore the stricter the test (i.e. the smaller its size), the lower its power. It is hard to attach a specific number to the power of a test, but we can often rank different tests in terms of their relative power. If a number of different tests are available, we would normally choose the most powerful test.
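A small sketch of how power can be computed for a one-sided z-test; the hypothesised and alternative means, standard deviation, and sample size below are arbitrary illustrative values:

```python
import numpy as np
from scipy.stats import norm

mu0, mu1, sigma, n, alpha = 10.0, 10.5, 2.0, 30, 0.05   # hypothetical values
z_crit = norm.ppf(1 - alpha)                  # one-tailed 5% critical value (about 1.645)

# Under the alternative mean mu1, the z statistic is normal with this shifted mean.
shift = (mu1 - mu0) / (sigma / np.sqrt(n))
power = norm.sf(z_crit - shift)               # P(reject H0 | H1 true) = 1 - P(Type II error)
print(power)
```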

P-Values
When deciding whether or not to reject the null hypothesis, the p-value is often considered. The p-value is the probability of obtaining a result at least as extreme as the observed test statistic under the assumption that the null is true. Equivalently, it is the smallest significance level at which we would reject the null. For example, suppose we have a test statistic which is assumed to follow a standard normal distribution under the null. If we obtain a value of 1.5 then the p-value is approximately 0.067 for a one-tailed test and 0.134 for a two-tailed test.
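These values can be verified with SciPy's standard normal survival function:

```python
from scipy.stats import norm

z = 1.5
p_one_tailed = norm.sf(z)        # P(Z > 1.5), roughly 0.0668
p_two_tailed = 2 * norm.sf(z)    # P(|Z| > 1.5), roughly 0.1336
print(p_one_tailed, p_two_tailed)
```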

Critical Values
A critical value is a value of the test statistic corresponding to a predetermined significance level. For example, a 5% critical value is the value of the test statistic which would yield a p-value of exactly 0.05. Critical values are usually set at the 10%, 5% or 1% levels. For the standard normal distribution, the one-tailed critical values are:

10%: 1.282
5%: 1.645
1%: 2.326

If the test statistic exceeds the critical value then the test is said to reject the null hypothesis at that significance level.
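A short sketch computing these one-tailed critical values with SciPy's inverse normal CDF:

```python
from scipy.stats import norm

for level in (0.10, 0.05, 0.01):
    print(level, norm.ppf(1 - level))   # one-tailed critical values: ~1.282, ~1.645, ~2.326
```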

Confidence Intervals
A confidence interval is an interval estimate of a parameter of interest. It indicates a range within which the unknown parameter is likely to lie. Confidence intervals are particularly associated with classical or frequentist statistical theory. A confidence interval is normally expressed in percentage terms, e.g. we are '95% confident that the population mean lies between the limits L and U'. The correct interpretation of a 95% confidence interval is that if we were to repeat the random experiment many times, then approximately 95% of the interval estimates we calculate would contain the true population parameter.
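A minimal simulation sketch of this coverage interpretation, assuming a normal population with a known, made-up mean and standard deviation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma, n, conf = 5.0, 1.0, 25, 0.95       # hypothetical true parameters
z = norm.ppf(1 - (1 - conf) / 2)              # about 1.96 for a 95% interval

covered = 0
n_trials = 10_000
for _ in range(n_trials):
    x = rng.normal(mu, sigma, n)
    half_width = z * sigma / np.sqrt(n)
    lower, upper = x.mean() - half_width, x.mean() + half_width
    covered += (lower <= mu <= upper)         # does this interval contain the true mean?

print(covered / n_trials)                     # close to 0.95
```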

Example: Suppose we have a random variable X which follows a normal distribution with mean μ and variance σ². We wish to test:

H0: μ = μ0 against H1: μ ≠ μ0

We have a sample of N observations which we use to calculate the sample mean and the sample standard deviation. Under the null hypothesis the following random variable is N(0,1):

z = (x̄ − μ0) / (σ / √N)

We can calculate this for a particular sample mean and compare it with a critical value from the standard normal tables to decide if we should reject the null.
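A sketch of this z-test for a small hypothetical sample, assuming the population standard deviation is known:

```python
import numpy as np
from scipy.stats import norm

x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3])  # hypothetical sample
mu0, sigma = 10.0, 0.25                    # hypothesised mean and known population sd

z = (x.mean() - mu0) / (sigma / np.sqrt(len(x)))
z_crit = norm.ppf(0.975)                   # two-tailed 5% critical value
print(z, abs(z) > z_crit)                  # reject H0 if |z| exceeds the critical value
```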

Alternatively we can invert the z transformation to derive a confidence interval for the true population mean:

x̄ − z*·σ/√N ≤ μ ≤ x̄ + z*·σ/√N

where z* is the two-tailed critical value for the chosen confidence level (for example 1.96 for a 95% interval).
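The same hypothetical sample can be used to compute the interval:

```python
import numpy as np
from scipy.stats import norm

x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3])  # hypothetical sample
sigma = 0.25                                   # assumed known population sd

z = norm.ppf(0.975)                            # 95% two-sided quantile
half_width = z * sigma / np.sqrt(len(x))
print(x.mean() - half_width, x.mean() + half_width)   # 95% CI for the population mean
```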

Tests based on the t-distribution
The problem with the test statistics we have looked at so far is that they depend on the unknown population variance. If we replace the unknown population standard deviation σ with the sample standard deviation s, the resulting test statistic can be shown to follow a Student's t distribution with N − 1 degrees of freedom:

t = (x̄ − μ0) / (s / √N)

This is an operational test statistic because we can evaluate it for particular values of the sample mean and the sample standard deviation.
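A sketch of the one-sample t-test for the same hypothetical data, computed both by hand and with SciPy's ttest_1samp:

```python
import numpy as np
from scipy.stats import t, ttest_1samp

x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3])  # hypothetical sample
mu0 = 10.0

# Manual computation of the t statistic using the sample standard deviation.
t_stat = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))
t_crit = t.ppf(0.975, df=len(x) - 1)           # two-tailed 5% critical value

# SciPy performs the same test in one call.
print(t_stat, t_crit, ttest_1samp(x, mu0))
```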

Tests based on the chi-squared distribution
Chi-squared tests can be used to test whether the standard deviation is equal to a particular value, e.g. suppose we wish to test:

H0: σ = σ0 against H1: σ ≠ σ0

Under the null hypothesis the following test statistic follows a chi-squared distribution with N − 1 degrees of freedom (where N is the sample size):

χ² = (N − 1)s² / σ0²
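A sketch of the chi-squared variance test for the hypothetical sample, with a made-up hypothesised standard deviation:

```python
import numpy as np
from scipy.stats import chi2

x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3])  # hypothetical sample
sigma0 = 0.25                                   # hypothesised population sd

n = len(x)
stat = (n - 1) * x.var(ddof=1) / sigma0**2      # chi-squared statistic with n-1 df
p_value = 2 * min(chi2.cdf(stat, n - 1), chi2.sf(stat, n - 1))  # two-sided: double the smaller tail
print(stat, p_value)
```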

Tests based on the F-distribution
The F-test can be used when we wish to test whether the variance or the standard deviation has changed. For example, suppose we estimate the standard deviation s1 from n1 data points. We then observe another n2 data points, from which we estimate s2, and wish to test:

H0: σ1 = σ2 against H1: σ1 ≠ σ2

Under the null hypothesis we have a test statistic of the form:

F = s1² / s2², which follows an F distribution with (n1 − 1, n2 − 1) degrees of freedom.
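A sketch of the variance-ratio test on two simulated samples (the sample sizes and true standard deviations are made up); here the second sample's variance is placed in the numerator in order to test one-sided for an increase:

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(2)
x1 = rng.normal(0.0, 1.0, 40)                   # first hypothetical sample (n1 points)
x2 = rng.normal(0.0, 1.5, 50)                   # second hypothetical sample (n2 points)

F = x2.var(ddof=1) / x1.var(ddof=1)             # ratio of sample variances
p_value = f.sf(F, len(x2) - 1, len(x1) - 1)     # under H0 this ratio follows F(n2-1, n1-1)
print(F, p_value)                               # small p-value suggests the variance increased
```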

There is an interesting relationship between the t distribution and the F distribution. Let T be a random variable with a t distribution with ν degrees of freedom. It follows that:

T² ~ F(1, ν)

We can therefore think of the t distribution as a special case of the F distribution and, in this special case, we can perform either a t test or an F test with identical results.
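A quick numerical check of this relationship using SciPy's quantile functions, for an arbitrary choice of degrees of freedom:

```python
from scipy.stats import t, f

nu = 20                                         # hypothetical degrees of freedom
t_crit = t.ppf(0.975, nu)                       # two-tailed 5% critical value for t
f_crit = f.ppf(0.95, 1, nu)                     # 5% critical value for F(1, nu)
print(t_crit**2, f_crit)                        # equal: T ~ t_nu implies T**2 ~ F(1, nu)
```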