
1 Inferential Statistics. Inferential statistics: the part of statistics that allows researchers to generalize their findings beyond the data collected. Statistical inference: a procedure for making inferences or generalizations about a larger population from a sample of that population. Research is about trying to make valid inferences.

2 How Statistical Inference Works

3 Basic Terminology. Population (statistical population): any collection of entities that have at least one characteristic in common; a collection (an aggregate) of measurements about which an inference is desired; everything you wish to study. Parameter: a number that describes a characteristic of the scores in the population (mean, variance, standard deviation, correlation coefficient, etc.).

4 A Population of Values. [Figure: body weight data (kg) for a population of N = 28 students, with μ = 44 and σ² = 1.214; values shown include 44, 45, 44, 42, 43, 46, 42, 44, 45, ….]

5 Basic Terminology. Sample: a part of the population; a finite number of measurements chosen from a population. Statistics: the numbers that describe characteristics of the scores in the sample (mean, variance, standard deviation, correlation coefficient, reliability coefficient, etc.).

6-10 Drawing a sample from the population of body-weight values (X: student body weight), one value at a time: n = 1 gives x1 = 43; n = 2 adds x2 = 44; n = 3 adds x3 = 45; n = 4 adds x4 = 44; n = 5 adds x5 = 44. A sample that has been selected in such a way that all members of the population have an equal chance of being picked is a simple random sample.

11 Basic concepts of statistics: measures of central tendency; measures of dispersion and variability.

12 Measures of central tendency. Arithmetic mean (= simple average): for a population, $\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$, where $\sum$ denotes summation, $x_i$ is a measurement in the population, $i$ is the index of measurement, and $N$ is the population size. The best estimate of the population mean is the sample mean, $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} x_i$, where $n$ is the sample size.

13 Measures of variability. All describe how "spread out" the data are. 1. Sum of squares: the sum of squared deviations from the mean. For a sample, $SS = \sum_{i=1}^{n}(x_i - \bar{X})^2$.

14 2. Average or mean sum of squares = variance, $s^2$. For a sample, $s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{X})^2}{n - 1}$. Why $n - 1$?

15 $n - 1$ represents the degrees of freedom, $\nu$ (the Greek letter "nu"), or the number of independent quantities in the estimate $s^2$. Because the deviations from the sample mean must sum to zero, once $n - 1$ of the deviations are specified, the last deviation is already determined.

16 3. Standard deviation, $s$. For a sample, $s = \sqrt{s^2} = \sqrt{\frac{\sum (x_i - \bar{X})^2}{n - 1}}$. Variance has squared measurement units; to regain the original units, take the square root.

17 4. Standard error of the mean. For a sample, $s_{\bar{X}} = \frac{s}{\sqrt{n}}$. The standard error of the mean is a measure of variability among the means of repeated samples from a population.
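To make the formulas on slides 12-17 concrete, here is a minimal Python sketch computing each quantity for the n = 5 sample drawn on slide 10 (43, 44, 45, 44, 44); the variable names are illustrative, not from the slides:

    import math

    sample = [43, 44, 45, 44, 44]
    n = len(sample)

    mean = sum(sample) / n                     # sample mean, best estimate of the population mean
    ss = sum((x - mean) ** 2 for x in sample)  # sum of squared deviations from the mean
    variance = ss / (n - 1)                    # sample variance s^2, using n - 1 degrees of freedom
    sd = math.sqrt(variance)                   # standard deviation s, back in the original units
    sem = sd / math.sqrt(n)                    # standard error of the mean

    print(mean, ss, variance, sd, sem)         # 44.0 2.0 0.5 0.7071... 0.3162...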

18 Basic Statistical Symbols

19 A Population of Values (repeated from slide 4). [Figure: body weight data (kg), N = 28, μ = 44, σ² = 1.214.]

20-25 Repeated random sampling from the population of body-weight values, each with sample size n = 5. Sampling 1 draws, one value at a time: 43, 44, 45, 44, 44.

26-31 Repeated random samples, each with sample size n = 5. Sampling 2 draws: 46, 44, 46, 45, 44.

32-37 Repeated random samples, each with sample size n = 5. Sampling 3 draws: 42, 42, 43, 45, 43.

38 Summary (deviations from each sampling's mean in parentheses):

    Sample              Sampling 1   Sampling 2   Sampling 3
    First               43 (-1)      46 (+1)      42 (-1)
    Second              44 (+0)      44 (-1)      42 (-1)
    Third               45 (+1)      46 (+1)      43 (+0)
    Fourth              44 (+0)      45 (+0)      45 (+2)
    Fifth               44 (+0)      44 (-1)      43 (+0)
    Average             44           45           43
    Sum of squares      2            4            6
    Mean square         0.50         1.00         1.50
    Standard deviation  0.707        1.00         1.225

39 For a large enough number of large samples, the frequency distribution of the sample means (the sampling distribution) approaches a normal distribution.
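A small simulation makes slide 39 tangible. This Python sketch (illustrative; the synthetic population below is not the actual N = 28 population from slide 4) draws repeated simple random samples of n = 5 and compares the spread of the sample means to the standard error:

    import random
    import statistics

    random.seed(1)
    population = [random.choice([42, 43, 44, 45, 46]) for _ in range(28)]

    sample_means = []
    for _ in range(10_000):
        sample = random.sample(population, 5)      # simple random sample, n = 5
        sample_means.append(statistics.mean(sample))

    # The mean of the sample means approaches the population mean, and their
    # spread approaches sigma / sqrt(n), up to a finite-population correction.
    print(statistics.mean(population), statistics.mean(sample_means))
    print(statistics.pstdev(population) / 5 ** 0.5, statistics.stdev(sample_means))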

40 Normal distribution: bell-shaped curve

41 Testing statistical hypotheses between 2 means. 1. State the research question in terms of statistical hypotheses. We always start with a statement that hypothesizes "no difference", called the null hypothesis, H0. H0: the mean height of female students is equal to the mean height of male students.

42 Then we formulate a statement that must be true if the null hypothesis is false, called the alternative hypothesis, HA. HA: the mean height of female students is not equal to the mean height of male students. If we reject H0 as a result of sample evidence, then we conclude that HA is true.

43 2. Choose an appropriate statistical test that would allow you to reject H0 if H0 were false, e.g., Student's t test for hypotheses about means (developed by William Sealy Gosset, who published as "Student").

44 The t statistic: $t = \frac{\bar{X}_1 - \bar{X}_2}{s_{\bar{X}_1 - \bar{X}_2}}$, where $\bar{X}_1$ and $\bar{X}_2$ are the means of samples 1 and 2 and $s_{\bar{X}_1 - \bar{X}_2}$ is the standard error of the difference between the sample means. To estimate $s_{\bar{X}_1 - \bar{X}_2}$, we must first know the relation between the two populations.

45 How to evaluate the success of this experimental design class: (1) compare the statistics and experimental design scores of several students; (2) compare the experimental design scores of several students from two serial classes; (3) compare the experimental design scores of several students from two different classes.

46 1. Comparing the statistics and experimental design scores of several students. Similar students → dependent populations (identical variance). Different students → independent populations, with either identical or non-identical variance.

47 2. Comparing the experimental design scores of several students from two serial classes. Different students → independent populations, with either identical or non-identical variance.

48 3. Comparing the experimental design scores of several students from two different classes. Different students → independent populations, with either identical or non-identical variance.

49 Relation between populations: (a) dependent populations; (b) independent populations with (1) identical (homogeneous) variance or (2) non-identical (heterogeneous) variance.

50 Dependent populations. Null hypothesis: the mean difference is equal to $\mu_0$. Test statistic: $t = \frac{\bar{d} - \mu_0}{s_d / \sqrt{n}}$, where $\bar{d}$ is the mean of the paired differences and n is the number of pairs compared. Null distribution: t with n − 1 df. How unusual is this test statistic? If P < 0.05, reject H0; if P > 0.05, fail to reject H0.
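A minimal sketch of this dependent-populations test in Python, using SciPy and hypothetical paired scores (the data are made up for illustration):

    from scipy import stats

    before = [6, 8, 10, 6, 10]   # hypothetical first measurement per subject
    after  = [7, 8, 11, 8, 10]   # hypothetical second measurement, same subjects

    t, p = stats.ttest_rel(before, after)   # null distribution: t with n - 1 df
    print(t, p)
    # p < 0.05 -> reject H0; p > 0.05 -> fail to reject H0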

51 Independent populations with homogeneous variances. Pooled variance: $s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}$. Then $t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}$ with $n_1 + n_2 - 2$ df.


53 When sample sizes are small, the sampling distribution is described better by the t distribution than by the standard normal (Z) distribution. The shape of the t distribution depends on the degrees of freedom, $\nu = n - 1$.
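A quick way to see slides 53-54 numerically: the two-tailed 5% critical value of t shrinks toward the normal (Z) value of about 1.96 as ν grows. A sketch using SciPy:

    from scipy import stats

    for df in (1, 5, 25, 100, 10_000):
        print(df, stats.t.ppf(0.975, df))   # upper critical value, alpha = 0.05 two-tailed
    print("Z:", stats.norm.ppf(0.975))      # about 1.96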

54 [Figure: t distributions for ν = 1, 5, and 25; as ν → ∞, t converges to the standard normal Z.]

55 The distribution of a test statistic is divided into an area of acceptance and an area of rejection. [Figure: two-tailed t distribution for α = 0.05, with an area of acceptance of 0.95 between the lower and upper critical values and 0.025 in each rejection tail.]

56 Critical t for a test about equality: $t_{\alpha(2),\nu}$ (two-tailed α, with ν degrees of freedom).

57 Independent populations with heterogeneous variances (Welch's test): $t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}$.

58 Analysis of Variance (ANOVA)

59 Independent t-test: compares the means of one variable for TWO groups of cases. Statistical formula: $t = \frac{\bar{X}_1 - \bar{X}_2}{s_{\bar{X}_1 - \bar{X}_2}}$. Meaning: compare the "standardized" mean difference. But this is limited to two groups. What if there are more than 2 groups? Either pairwise t tests (previous example) or ANOVA (Analysis of Variance).

60 From t test to ANOVA. 1. Pairwise t tests. If you compare three or more groups using t tests at the usual 0.05 level of significance, you have to compare each pair (A to B, A to C, B to C), so the chance of getting at least one wrong result would be: 1 − (0.95 × 0.95 × 0.95) = 14.3%. Multiple t tests inflate the false-alarm (Type I error) rate.
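Slide 60's arithmetic generalizes to any number m of independent tests; a short Python loop confirms the 14.3% figure:

    # Chance of at least one false alarm across m independent tests at alpha = 0.05
    for m in (1, 3, 6, 10):
        print(m, 1 - 0.95 ** m)   # m = 3 gives 0.1426..., i.e. about 14.3%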

61 2. Analysis of Variance. In the t test, the mean difference is used. Similarly, the ANOVA test compares the observed variance among means. The logic behind ANOVA: if the groups are from the same population, the variance among the means will be small (note that the group means are not exactly the same); if the groups are from different populations, the variance among the means will be large.

62 What is ANOVA? Analysis of Variance: a procedure designed to determine if the manipulation of one or more independent variables in an experiment has a statistically significant influence on the value of the dependent variable. Assumptions: each independent variable is categorical (nominal scale); independent variables are called factors and their values are called levels; the dependent variable is numerical (ratio scale).

63 What is ANOVA? The basic idea of ANOVA: the "variance" of the dependent variable given the influence of one or more independent variables (the expected sum of squares for a factor) is checked to see if it is significantly greater than the "variance" of the dependent variable assuming no influence of the independent variables (also known as the mean square error, MSE).

64 Worked t-test example (two independent groups of five students, pooled variance):

    Group 1              Group 2
    Amir        6        Budi        9
    Abas        8        Berta       4
    Abi        10        Bambang     7
    Aura        6        Banu        5
    Ana        10        Betty       5
    Average     8                    6
    n           5                    5
    Sample variance 4                4

Pooled variance = 4; t_calc = 1.581; t_table = 2.306.
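The hand computation can be checked with SciPy's pooled-variance two-sample test (a sketch; only the data are from the slide):

    from scipy import stats

    group1 = [6, 8, 10, 6, 10]   # Amir, Abas, Abi, Aura, Ana
    group2 = [9, 4, 7, 5, 5]     # Budi, Berta, Bambang, Banu, Betty

    t, p = stats.ttest_ind(group1, group2, equal_var=True)  # pooled variance
    print(t, p)   # t = 1.5811...; |t| < t_table = 2.306 (df = 8), so fail to reject H0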

65 ANOVA table for 2 populations:

    Source of variation    SS           DF                   Mean square (MS)
    Between populations    SS_between   1                    MSB = SSB / DFB
    Within populations     SS_within    (n1 - 1) + (n2 - 1)  MSW = SSW / DFW
    Total                  SS_total     n1 + n2 - 1          s²

66 ANOVA table for the two groups above:

    Source of variation    SS    DF    Mean square
    Between populations    10    1     10
    Within populations     32    8     4
    Total                  42    9

F_calc = 10 / 4 = 2.50; F_table = 5.318. Note that with two groups F = t², since 1.581² = 2.50 and 2.306² = 5.318.
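Running the same two groups through one-way ANOVA shows the F = t² relation directly (a sketch using SciPy):

    from scipy import stats

    group1 = [6, 8, 10, 6, 10]
    group2 = [9, 4, 7, 5, 5]

    f, p = stats.f_oneway(group1, group2)
    print(f, p)   # F = 2.5 = 1.5811**2, below F_table = 5.318, so fail to reject H0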

67 Rationale for ANOVA: we can break the total variance in a study into meaningful pieces that correspond to treatment effects and error; that is why we call this Analysis of Variance. Notation: $\bar{Y}_{..}$ is the grand mean, taken over all observations; $\bar{Y}_{.j}$ is the mean of any group (e.g., $\bar{Y}_{.1}$ for group 1); $Y_{ij}$ is the observation or raw datum for the ith subject in group j.

68 The ANOVA model: $Y_{ij} = \mu + \tau_j + \epsilon_{ij}$, where $Y_{ij}$ is trial i in group j, $\mu$ is the grand mean, $\tau_j$ is a treatment effect, and $\epsilon_{ij}$ is error. SS_Total = SS_Treatment + SS_Error.

69 Analysis of Variance. Analysis of Variance (ANOVA) can be used to test for the equality of three or more population means using data obtained from observational or experimental studies. Use the sample results to test the following hypotheses: H0: $\mu_1 = \mu_2 = \mu_3 = \ldots = \mu_k$; Ha: not all population means are equal. If H0 is rejected, we cannot conclude that all population means are different; rejecting H0 means that at least two population means have different values.

70 Assumptions for Analysis of Variance: for each population, the response variable is normally distributed; the variance of the response variable, denoted σ², is the same for all of the populations; the effects of the independent variables are additive; the observations must be independent.
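In practice the first two assumptions are often screened before running ANOVA. A sketch of one common approach — Shapiro-Wilk per group for normality and Levene's test for equal variances; the choice of tests is ours, not the slides':

    from scipy import stats

    # Grades of the three classes from the worked example later in the deck
    groups = [[6, 8, 10, 6, 10], [9, 4, 7, 5, 5], [4, 10, 10, 5, 6]]

    for g in groups:
        print("Shapiro-Wilk p =", stats.shapiro(g).pvalue)   # normality within each group
    print("Levene p =", stats.levene(*groups).pvalue)        # homogeneity of variances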

71 Analysis of Variance: between-treatments estimate of population variance; within-treatments estimate of population variance; comparing the variance estimates: the F test; the ANOVA table; testing for the equality of k population means.

72 Between-treatments estimate of population variance. A between-treatments estimate of σ² is called the mean square due to treatments (MSTR): $MSTR = \frac{SSTR}{k - 1}$. The numerator of MSTR is called the sum of squares due to treatments (SSTR); the denominator, k − 1, is the degrees of freedom associated with SSTR.

73 Within-treatments estimate of population variance. The estimate of σ² based on the variation of the sample observations within each treatment is called the mean square due to error (MSE): $MSE = \frac{SSE}{n_T - k}$. The numerator of MSE is called the sum of squares due to error (SSE); the denominator, $n_T - k$, is the degrees of freedom associated with SSE.

74 Comparing the variance estimates: the F test. If the null hypothesis is true and the ANOVA assumptions are valid, the sampling distribution of MSTR/MSE is an F distribution with numerator d.f. equal to k − 1 and denominator d.f. equal to $n_T - k$. If the means of the k populations are not equal, the value of MSTR/MSE will be inflated because MSTR overestimates σ². Hence, we will reject H0 if the resulting value of MSTR/MSE appears to be too large to have been selected at random from the appropriate F distribution.

75 Test for the equality of k population means. Hypotheses: H0: $\mu_1 = \mu_2 = \mu_3 = \ldots = \mu_k$; Ha: not all population means are equal. Test statistic: F = MSTR/MSE.

76 Test for the equality of k population means. Rejection rule. Using the test statistic: reject H0 if $F > F_\alpha$. Using the p-value: reject H0 if p-value < α, where the value of $F_\alpha$ is based on an F distribution with k − 1 numerator degrees of freedom and $n_T - k$ denominator degrees of freedom.

77 Sampling distribution of MSTR/MSE. The figure shows the rejection region associated with a level of significance equal to α, where $F_\alpha$ denotes the critical value. [Figure: F distribution with "do not reject H0" to the left of $F_\alpha$ and "reject H0" to the right.]

78 ANOVA table:

    Source of    Sum of    Degrees of   Mean
    variation    squares   freedom      square   F
    Treatment    SSTR      k - 1        MSTR     MSTR/MSE
    Error        SSE       n_T - k      MSE
    Total        SST       n_T - 1

SST divided by its degrees of freedom, $n_T - 1$, is simply the overall sample variance that would be obtained if we treated the entire set of $n_T$ observations as one data set.

79 What does ANOVA tell us? ANOVA will tell us whether we have sufficient evidence to say that measurements from at least one treatment differ significantly from at least one other. It will not tell us which ones differ, or how many differ.

80 ANOVA vs t-test. ANOVA is like a t-test among multiple data sets simultaneously; t-tests can only be done between two data sets, or between one set and a "true" value. ANOVA uses the F distribution instead of the t distribution. ANOVA assumes that all of the data sets have equal variances; use caution on close decisions if they don't.

81 ANOVA as a hypothesis test. H0: there is no significant difference among the results provided by treatments. Ha: at least one of the treatments provides results significantly different from at least one other.

82 Linear model: $Y_{ij} = \mu + \tau_j + \epsilon_{ij}$, with, by definition, $\sum_{j=1}^{t} \tau_j = 0$. The experiment produces r × t data values $Y_{ij}$. The analysis produces estimates of $\mu, \tau_1, \tau_2, \ldots, \tau_t$ (we can then get estimates of the $\epsilon_{ij}$ by subtraction).

83 [Data layout: an r × t table of values $Y_{ij}$, with rows i = 1, …, r and treatment columns j = 1, …, t; $\bar{Y}_{.1}, \bar{Y}_{.2}, \ldots, \bar{Y}_{.t}$ are the column means.]

84 $\bar{Y} = \frac{1}{t}\sum_{j=1}^{t} \bar{Y}_{.j}$ = the grand mean (assuming the same number of data points in each column; otherwise $\bar{Y}$ = the mean of all the data).

85 Model: $Y_{ij} = \mu + \tau_j + \epsilon_{ij}$. $\bar{Y}$ estimates $\mu$; $\bar{Y}_{.j} - \bar{Y}$ estimates $\tau_j$ ($= \mu_j - \mu$) for all j. These estimates are based on Gauss' (1796) principle of least squares and on common sense.

86 Model: $Y_{ij} = \mu + \tau_j + \epsilon_{ij}$. If you insert the estimates into the model, (1) $Y_{ij} = \bar{Y} + (\bar{Y}_{.j} - \bar{Y}) + \hat{\epsilon}_{ij}$, and it follows that our estimate of $\epsilon_{ij}$ is (2) $\hat{\epsilon}_{ij} = Y_{ij} - \bar{Y}_{.j}$.

87 Then $Y_{ij} = \bar{Y} + (\bar{Y}_{.j} - \bar{Y}) + (Y_{ij} - \bar{Y}_{.j})$, or $(Y_{ij} - \bar{Y}) = (\bar{Y}_{.j} - \bar{Y}) + (Y_{ij} - \bar{Y}_{.j})$. (3) Total variability in Y = variability in Y associated with X + variability in Y associated with all other factors.

88 If you square both sides of (3) and double-sum both sides (over i and j), you get, after some unpleasant algebra with lots of terms that "cancel": $\sum_{j=1}^{t}\sum_{i=1}^{r}(Y_{ij} - \bar{Y})^2 = r\sum_{j=1}^{t}(\bar{Y}_{.j} - \bar{Y})^2 + \sum_{j=1}^{t}\sum_{i=1}^{r}(Y_{ij} - \bar{Y}_{.j})^2$; that is, TSS (total sum of squares) = $SSB_C$ (sum of squares between columns) + SSW = SSE (sum of squares within columns).
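The identity can be verified numerically; this Python sketch applies it to the three-class grade data that appears later in the deck (slide 93), with r = 5 rows and t = 3 columns:

    # Check TSS = SSB_C + SSW on a small r x t data set
    columns = [[6, 8, 10, 6, 10], [9, 4, 7, 5, 5], [4, 10, 10, 5, 6]]
    r = len(columns[0])

    col_means = [sum(c) / r for c in columns]
    grand = sum(col_means) / len(columns)    # grand mean (equal column sizes)

    tss = sum((y - grand) ** 2 for c in columns for y in c)
    ssb = r * sum((m - grand) ** 2 for m in col_means)
    ssw = sum((y - m) ** 2 for c, m in zip(columns, col_means) for y in c)

    print(tss, ssb, ssw)   # 74.0 10.0 64.0, and indeed 74 = 10 + 64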

89 ANOVA table:

    Source of variation            SS      DF         Mean square (MS)
    Among treatments (columns)     SSA_C   t - 1      MSA_C = SSA_C / (t - 1)
    Within columns (due to error)  SSW_C   (r - 1)t   MSW_C = SSW_C / ((r - 1)t)
    Total                          TSS     tr - 1

90 Hypotheses: H0: $\tau_1 = \tau_2 = \ldots = \tau_t = 0$ vs H1: not all $\tau_j = 0$; or equivalently, H0: $\mu_1 = \mu_2 = \ldots = \mu_t$ (all column means are equal) vs H1: not all $\mu_j$ are equal.

91 The probability law of $F_{calc} = \frac{MSA_C}{MSW_C}$ is the F distribution with (t − 1, (r − 1)t) degrees of freedom, assuming H0 is true. Compare $F_{calc}$ against the table value $F_\alpha$.

92 Example: experimental design grades. The Faculty of Agriculture, GMU, would like to know if the teaching quality of experimental design is similar among classes. A simple random sample of 5 students from each of 3 classes was taken, and their experimental design grades were collected.

93 Example: grades of experimental design. Sample data:

    Observation       Advance   Broadway   Cindy
    1                 6         9          4
    2                 8         4          10
    3                 10        7          10
    4                 6         5          5
    5                 10        5          6
    Sample mean       8         6          7
    Sample variance   4         4          8

94 Example: experimental design. Hypotheses: H0: $\mu_1 = \mu_2 = \mu_3$; Ha: not all the means are equal, where $\mu_1$ = Advance class, $\mu_2$ = Broadway class, $\mu_3$ = Cindy class.

95 Example: experimental design. Mean square due to treatments: since the sample sizes are all equal, the grand mean = (8 + 6 + 7)/3 = 7; SSTR = 5(8 − 7)² + 5(6 − 7)² + 5(7 − 7)² = 10; MSTR = 10/(3 − 1) = 5. Mean square due to error: SSE = 4(4) + 4(4) + 4(8) = 64; MSE = 64/(15 − 3) = 5.33.

96 Example: experimental design. F test: if H0 is true, the ratio MSTR/MSE should be near 1 because both MSTR and MSE are estimating σ². If Ha is true, the ratio should be significantly larger than 1 because MSTR tends to overestimate σ².

97 Rejection rule. Using the test statistic: reject H0 if F > 3.89. Using the p-value: reject H0 if p-value < .05, where F.05 = 3.89 is based on an F distribution with 2 numerator degrees of freedom and 12 denominator degrees of freedom.

98 Example: experimental design. Test statistic: F = MSTR/MSE = 5.00/5.33 = 0.938. Conclusion: F = 0.938 < F.05 = 3.89, so we fail to reject H0. There is no significant difference in quality among the experimental design classes.

99 Example: experimental design. ANOVA table:

    Source of        Sum of    Degrees of   Mean
    variation        squares   freedom      square   F_calc
    Among classes    10        2            5.00     0.938
    Within classes   64        12           5.33
    Total            74        14
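The whole worked example collapses to one SciPy call (a sketch; the class names are the slides'):

    from scipy import stats

    advance  = [6, 8, 10, 6, 10]
    broadway = [9, 4, 7, 5, 5]
    cindy    = [4, 10, 10, 5, 6]

    f, p = stats.f_oneway(advance, broadway, cindy)
    print(f, p)   # F = 0.9375, p = 0.42 > .05, so fail to reject H0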

100 Using Excel's Anova: Single Factor tool. Step 1: select the Tools pull-down menu. Step 2: choose the Data Analysis option. Step 3: choose Anova: Single Factor from the list of Analysis Tools.

101 Step 4: when the Anova: Single Factor dialog box appears: enter B1:D6 in the Input Range box; select Grouped By Columns; select Labels in First Row; enter .05 in the Alpha box; select Output Range; enter A8 (your choice) in the Output Range box; click OK.

102 Value worksheet (top portion). [Screenshot of the Excel output.]

103 Value worksheet (bottom portion). [Screenshot of the Excel output.]

104 Using the p-value. For these data the value worksheet shows a p-value of about 0.42 (the p-value for F = 0.938 with 2 and 12 df). The rejection rule is "reject H0 if p-value < .05"; since 0.42 > .05, we fail to reject H0. We conclude that the quality among the experimental design classes is similar.

