Published by Earl Howard, modified over 9 years ago
1
Sampling Distribution of the Mean: the Central Limit Theorem
Given a population with mean μ and variance σ², the sampling distribution of the mean will have:
A mean: μ_x̄ = μ
A variance: σ²_x̄ = σ²/N
A standard error (of the mean): σ_x̄ = σ/√N
As N increases, the shape of the sampling distribution becomes normal (whatever the shape of the population).
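The theorem is easy to check by simulation. This Python sketch is not from the slides; the exponential population, sample size N = 30, and number of replications are illustrative assumptions. It draws many samples from a clearly skewed population and verifies that the sample means have mean close to μ and standard deviation close to σ/√N.

```python
import random
import statistics

random.seed(42)

# Illustrative population: exponential with mu = 1 and sigma = 1 (clearly skewed).
mu, sigma, N = 1.0, 1.0, 30

# Draw 20,000 samples of size N and record each sample mean.
means = [statistics.fmean(random.expovariate(1.0) for _ in range(N))
         for _ in range(20_000)]

print(round(statistics.fmean(means), 2))   # close to mu = 1.0
print(round(statistics.stdev(means), 2))   # close to sigma / sqrt(N), about 0.18
```

A histogram of `means` would also look bell-shaped even though the exponential population is strongly skewed, which is the point of the theorem.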
2
Testing a Hypothesis When μ and σ Are Known
Remember: we could test a hypothesis about a single score drawn from a population by obtaining z = (x − μ)/σ and using the z table.
We will continue the same logic with means.
Given: Behavior Problem Scores of 10-year-olds, and a sample of 10-year-olds under stress.
Because we know μ and σ, we can use the Central Limit Theorem to obtain the sampling distribution of the mean when H₀ is true.
3
The sampling distribution will have mean μ and standard error σ_x̄ = σ/√N.
We can find areas under the distribution by referring to the z table; we need the z score for our sample mean.
A minor change from the single-score z score: NOW
z = (x̄ − μ)/σ_x̄ = (x̄ − μ)/(σ/√N)
The formula changes because we are dealing with a distribution of means, NOT individual scores. With our data, we compute this z and take it to the table.
4
From the z table we find that the area beyond our obtained z is 0.0901.
Because we want a two-tailed test, we double it: (2)(0.0901) = 0.1802.
Since 0.1802 > α = .05, we do NOT reject H₀.
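A minimal sketch of this two-tailed z test, using hypothetical values (not the slide's data, which were lost): x̄ = 56, μ = 50, σ = 10, N = 5 are chosen only because they yield z near 1.34, the z whose one-tailed table area is 0.0901.

```python
from statistics import NormalDist

def z_test(xbar, mu0, sigma, n):
    """Two-tailed one-sample z test with known population sigma."""
    se = sigma / n ** 0.5               # standard error of the mean
    z = (xbar - mu0) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical numbers, picked so z comes out near 1.34.
z, p = z_test(xbar=56, mu0=50, sigma=10, n=5)
print(round(z, 2))   # 1.34
print(round(p, 2))   # about 0.18, matching the doubled table area 0.1802
```

Because p exceeds .05, this example reaches the same decision as the slide: do not reject H₀.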
5
One-Sample t Test
Population: μ known, σ unknown. We must estimate σ with s.
Because we use s, we can no longer declare the answer to be a z; now it is a t.
Why? s² is an unbiased estimator of σ², but the problem is the shape of the sampling distribution of s²: it is positively skewed.
6
Thus s² is more likely to UNDERESTIMATE σ² (especially with small N), and thus t is likely to be larger than z (s² is in the denominator).
The t statistic substitutes s for σ:
t = (x̄ − μ)/(s/√N)
To treat t as a z would give us too many significant results.
7
Student's t Distribution
Developed by "Student" at the Guinness Brewing Company. We switch to the t table when we use s².
Unlike z, the t distribution is a function of the degrees of freedom (df).
For one-sample cases, df = N − 1: one df is lost because we used x̄ (the sample mean) to calculate s²; all x can vary save for one.
8
Example: One-Sample t, σ Unknown
Effect of statistics tutorials.
Last 100 years (no tutorials): population mean μ.
This year (tutorials): sample mean x̄, with N = 20 and s = 6.4.
9
Go to the t table.
The t table does not give the area (p) above or below a value of t; it gives the t values that cut off critical areas, e.g., 0.05. t is also defined for each df.
With N = 20, df = N − 1 = 20 − 1 = 19.
From the table, t.05(19) = 2.093. Our obtained t exceeds this critical value: reject H₀.
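The one-sample t computation can be sketched as follows. N = 20, s = 6.4, and the critical value t.05(19) = 2.093 come from the slides; the group means were not recoverable, so `xbar` and `mu0` below are hypothetical stand-ins.

```python
def one_sample_t(xbar, mu0, s, n):
    """One-sample t: t = (xbar - mu0) / (s / sqrt(n))."""
    return (xbar - mu0) / (s / n ** 0.5)

T_CRIT = 2.093                # t.05(19) from the slide's t table
# xbar and mu0 are hypothetical; s = 6.4 and N = 20 come from the slides.
t = one_sample_t(xbar=73.0, mu0=70.0, s=6.4, n=20)
print(round(t, 3))            # 2.096
print(abs(t) > T_CRIT)        # True -> reject H0 at alpha = .05
```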
10
Factors Affecting the Magnitude of t and the Decision
1. Difference between x̄ and μ: the larger the numerator, the larger the t value.
2. Size of s²: as s² decreases, t increases.
3. Size of N: as N increases, the denominator decreases and t increases.
4. The α level.
5. One- or two-tailed test.
11
Confidence Limits on the Mean
Point estimate: a specific value taken as the estimator of a parameter.
Interval estimate: a range of values estimated to include the parameter.
Confidence limits: a range of values that has a specific probability (p) of bracketing the parameter; the endpoints are the confidence limits.
In other words: how large or small could μ be without our rejecting H₀ if we ran a t test on the obtained sample mean?
12
Confidence Limits (C.I.)
We already know x̄, s, and N, and we know the critical value of t at α = .05.
We solve the t formula for μ. Rearranging t = (x̄ − μ)/(s/√N) gives:
μ = x̄ ± t.05 (s/√N)
Using +2.093 and −2.093 for t gives the upper and lower limits.
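The rearranged formula translates directly into code. s = 6.4, N = 20, and t.05(19) = 2.093 come from the slides; `xbar = 73.0` is a hypothetical stand-in for the sample mean lost in extraction.

```python
def ci_mean(xbar, s, n, t_crit):
    """Confidence limits on mu: xbar +/- t_crit * s / sqrt(n)."""
    half = t_crit * s / n ** 0.5       # half-width of the interval
    return xbar - half, xbar + half

# xbar is hypothetical; s, N, and t_crit come from the slides.
lo, hi = ci_mean(xbar=73.0, s=6.4, n=20, t_crit=2.093)
print(round(lo, 2), round(hi, 2))      # half-width is about 2.995 either side
```

Note the interval is centered on x̄, not on μ: it is x̄ that brackets the fixed parameter with probability .95 over repeated sampling.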
13
Two Related Samples t
Related samples: a design in which the same subjects are observed under more than one condition (repeated measures, matched samples).
Each subject will have two measures, x₁ and x₂, that will be correlated; this must be taken into account.
Example: promoting social skills in adolescents, measured before and after an intervention.
Difference scores: the set of scores representing the difference between a subject's performance on the two occasions, D = before − after.
14
Our data can be just the D column: from now on we are testing a hypothesis using ONE sample, the sample of difference scores, with H₀: μ_D = 0.
15
Related Samples t
t = D̄ / (s_D/√N), where N = the number of D scores.
Degrees of freedom: same as for the one-sample case, df = N − 1 = 15 − 1 = 14.
With our data, compute t and go to the t table with df = 14.
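A sketch of the related-samples t as a one-sample t on difference scores. The before/after values below are hypothetical (the slide's data were not recoverable); only the sample size of 15 subjects, giving df = 14, matches the slides.

```python
import statistics

def related_t(before, after):
    """Related-samples t: a one-sample t on the difference scores D."""
    d = [b - a for b, a in zip(before, after)]
    n = len(d)
    t = statistics.fmean(d) / (statistics.stdev(d) / n ** 0.5)
    return t, n - 1                     # t and df = N - 1 (N = number of D scores)

# Hypothetical scores for 15 subjects, before and after the intervention.
before = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14, 13, 11, 15, 12, 14]
after  = [10, 13, 9, 12, 10, 11, 10, 13, 11, 12, 12, 10, 13, 11, 12]
t, df = related_t(before, after)
print(df)   # 14
```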
16
Advantages of Related Samples
1. Control of extraneous variables.
2. Avoids problems that come with subject-to-subject variability: the difference between x₁ = 26 and x₂ = 24 is the same as between x₁ = 6 and x₂ = 4 (less variance, lower denominator, greater t, so increased power).
3. Requires fewer subjects.
Disadvantages
1. Order effects.
2. Carry-over effects.
17
Two Independent Samples t
Sampling distribution of differences between means. Suppose two populations with means μ₁ and μ₂. Draw pairs of samples of sizes N₁ and N₂; record the means x̄₁ and x̄₂ and the difference x̄₁ − x̄₂ for each pair of samples; repeat this process indefinitely.
18
The sampling distribution of differences will have:
Mean: μ₁ − μ₂
Variance: σ²_(x̄₁−x̄₂) = σ₁²/N₁ + σ₂²/N₂
Standard error: √(σ₁²/N₁ + σ₂²/N₂)
Variance sum law: the variance of a sum or difference of two INDEPENDENT variables equals the sum of their variances.
The distribution of the differences is also normal.
19
Difference Between Means
z = [(x̄₁ − x̄₂) − (μ₁ − μ₂)] / √(σ₁²/N₁ + σ₂²/N₂)
Because σ₁² and σ₂² are unknown, we must estimate them with s₁² and s₂², so we use t:
t = [(x̄₁ − x̄₂) − (μ₁ − μ₂)] / √(s₁²/N₁ + s₂²/N₂)
20
Using √(s₁²/N₁ + s₂²/N₂) is OK only when the Ns are the same size. When N₁ ≠ N₂ we need a better estimate of σ².
We must assume homogeneity of variance (σ₁² = σ₂²). Rather than using s₁² or s₂² alone to estimate σ², we use their average. Because N₁ ≠ N₂, we need a weighted average, weighted by their degrees of freedom.
Pooled variance: s_p² = [(N₁ − 1)s₁² + (N₂ − 1)s₂²] / (N₁ + N₂ − 2)
21
The formula for the standard error now becomes:
s_(x̄₁−x̄₂) = √(s_p²/N₁ + s_p²/N₂)
Degrees of freedom: df = (N₁ − 1) + (N₂ − 1) = N₁ + N₂ − 2, because two means have been used to calculate s_p².
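The pooled-variance t can be sketched as below. The two sample variances and Ns are hypothetical assumptions; only the means 18.00 and 15.25 (from the worked example on a later slide) are taken from the deck.

```python
def pooled_t(xbar1, s1_sq, n1, xbar2, s2_sq, n2):
    """Independent-samples t with pooled variance (assumes homogeneity of variance)."""
    # Weighted average of the two sample variances, weighted by df.
    sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
    se = (sp_sq / n1 + sp_sq / n2) ** 0.5
    return (xbar1 - xbar2) / se, n1 + n2 - 2   # t and df

# Means come from the slides' example; variances and Ns are hypothetical.
t, df = pooled_t(xbar1=18.00, s1_sq=9.0, n1=12, xbar2=15.25, s2_sq=8.0, n2=10)
print(df)           # 20
print(round(t, 2))  # 2.2
```

With equal Ns, s_p² reduces to the plain average of s₁² and s₂², which is why pooling only matters when the sample sizes differ.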
22
Example:
23
We have the numerator: x̄₁ − x̄₂ = 18.00 − 15.25 = 2.75.
We need the denominator. Because N₁ ≠ N₂, we use the pooled variance s_p², and the denominator becomes √(s_p²/N₁ + s_p²/N₂).
24
Go to the t table with df = N₁ + N₂ − 2 and compare the obtained t to the critical value.
25
Summary
If μ and σ are known, treat x̄ as in z = (x̄ − μ)/(σ/√N).
If μ is known and σ is unknown, then s replaces σ in the z-score formula and t replaces z.
If two related samples, then D̄ replaces x̄ and s_D replaces s.
26
If two independent samples and the Ns are of equal size, then x̄₁ − x̄₂ replaces x̄ − μ and √(s₁²/N₁ + s₂²/N₂) replaces the standard error.
If two independent samples and the Ns are NOT equal, then s₁² and s₂² are replaced by the pooled variance s_p².