Bootstrapping
Byrne, Chapter 12
Bootstrapping
Quick distinction:
– Population distribution
– Sample distribution
– Sampling distribution
Bootstrapping
The sampling distribution is approximately normal if:
– The underlying population distribution is normal, or
– N > 30 (by the Central Limit Theorem)
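To make the N > 30 claim concrete, here is a minimal simulation sketch (not from Byrne; the exponential population and the sample sizes are illustrative assumptions): sample means drawn from a clearly skewed population look more and more normal as N grows.

```python
import numpy as np

rng = np.random.default_rng(42)

# A skewed "population": exponential, decidedly non-normal
population = rng.exponential(scale=2.0, size=100_000)

# Draw many samples of size N; the distribution of their means
# is the sampling distribution of the mean
for n in (5, 30, 200):
    means = np.array([rng.choice(population, size=n).mean()
                      for _ in range(2_000)])
    # Skewness of the sampling distribution shrinks toward 0 as N grows
    skew = np.mean((means - means.mean()) ** 3) / means.std() ** 3
    print(f"N = {n:>3}: skewness of sample means = {skew:.3f}")
```

The skewness heads toward zero (the normal value) as N increases, which is the Central Limit Theorem in action.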
Bootstrapping
Usually we can't really assume the underlying population is normal.
So we use large samples to approximate normality, even if the population itself is not normal.
Bootstrapping
Bootstrapping is especially helpful for skewed/kurtotic data
– Which Likert-scale data have a bad tendency to be!
Bootstrapping
Reminder on why non-normal data is bad:
– Chi-square is adversely affected (inflated)
– That also makes your modification indices look like OH YEAH, when it's actually just the inflated chi-square talking
Bootstrapping
Reminder on why non-normal data is bad:
– CFI and TLI are decreased
– S.E. is underestimated, which makes paths look significant that might not be
Bootstrapping
How does it work?
– Start with your large sample
– Treat it like the population (sort of pretend it's everyone you ever wanted to sample)
– Take lots of samples, with replacement, from that "population," each with the same N
Bootstrapping
How does it work?
– Repeat! Lots of samples! Usually 500
– Run ML estimation on each of those samples
– Average the ML estimates to create a bootstrapped estimate (sketched below)
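A minimal sketch of that loop in Python (the data and names here are hypothetical stand-ins; in Amos the statistic would be the full ML estimation of your model on each resampled data set, not a simple mean):

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_estimates(data, statistic, n_boot=500):
    """Resample `data` with replacement (same N each time) and
    apply `statistic` to each resample: the core bootstrap loop."""
    n = len(data)
    return np.array([statistic(rng.choice(data, size=n, replace=True))
                     for _ in range(n_boot)])

# A skewed "large sample"; the mean stands in for an ML model estimate
data = rng.exponential(scale=2.0, size=300)
boot = bootstrap_estimates(data, np.mean)

# The bootstrapped estimate: the average of the 500 per-sample estimates
print("bootstrapped estimate:", boot.mean())
```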
Bootstrapping
How does it work?
– So you get the most probable estimate, averaged over the many fake samples you've taken from this "population"
Bootstrapping
Why?
– To test whether your mean and standard error (or estimates and their standard errors) would be stable if you sampled the population over and over again
– To deal with non-normality
Bootstrapping
How is that different from Bayes?
– Bayes starts with a proposed distribution for the parameters (the prior distribution)
– Remember that the prior can be any type of distribution
– Then it takes a random walk to come up with estimated samples (MCMC)
– Then you get estimates based on the data + that prior
Bootstrapping
How is that different from Bayes?
– Bootstrapping (BS) uses your data to generate lots of samples (with replacement!)
– Then it averages the estimates from those samples
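For contrast, here is a toy random-walk Metropolis sampler (one flavor of MCMC). Everything in it is an illustrative assumption rather than Byrne's example: a normal prior on the mean, a known residual SD of 1, and a fixed step size. The point is that the Bayesian estimate comes from walking around the prior-plus-data posterior, while the bootstrap estimate comes from resampling the data themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=50)

# Prior: mu ~ Normal(0, 10); likelihood: data ~ Normal(mu, 1)
def log_posterior(mu):
    log_prior = -0.5 * (mu / 10.0) ** 2
    log_lik = -0.5 * np.sum((data - mu) ** 2)
    return log_prior + log_lik

# Random-walk Metropolis: propose a small step, accept or reject
mu, draws = 0.0, []
for _ in range(5_000):
    proposal = mu + rng.normal(scale=0.3)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    draws.append(mu)

print("posterior mean of mu:", np.mean(draws[1_000:]))  # drop burn-in
```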
Bootstrapping
Good:
– Gives you an idea of whether the estimates are going to be stable
– Tells you how consistent the samples are
Bootstrapping
Bad:
– Really normal data gives you biased estimates
– The sample must be representative of the population
– You must be sure independence is met
Bootstrapping
In Amos: View > Analysis Properties > Bootstrap!
What to look at
– Compare loadings from ML and BS
– Compare SEs from ML and BS
– SESE = the standard error of the standard error; you want these to be very small
What to look at
– The Mean column is the bootstrapped estimate; use it to compare against the ML estimate
– (Make sure you compare standardized to standardized; don't mix and match with unstandardized)
What to look at
– Bias is the difference between the ML and BS estimates; you want it small!
– SE bias should be small too
– You can also get BS CIs! (sketched below)
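Here is a sketch of how those quantities relate, computed by hand for a hypothetical stand-in statistic (the mean of made-up skewed data). Amos reports all of these in its bootstrap output; the re-run-the-bootstrap approach to SESE below is just one way to build intuition for what that number means, not necessarily how Amos computes it.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=300)

def boot_stats(n_boot=500):
    """One full bootstrap: 500 resamples, one estimate each."""
    return np.array([rng.choice(data, size=len(data), replace=True).mean()
                     for _ in range(n_boot)])

ml_estimate = data.mean()          # stand-in for the ML estimate
boot = boot_stats()

bias = boot.mean() - ml_estimate   # BS minus ML: want this small
boot_se = boot.std(ddof=1)         # the bootstrap standard error

# SESE intuition: re-run the whole bootstrap several times and look at
# how much the SE itself wobbles
sese = np.std([boot_stats().std(ddof=1) for _ in range(20)], ddof=1)

# Percentile bootstrap confidence interval
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"bias = {bias:.4f}, SE = {boot_se:.4f}, SESE = {sese:.4f}")
print(f"95% CI = ({ci_low:.3f}, {ci_high:.3f})")
```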
Let's try it!
Go back to the one-factor RS model (the simple CFA) to test bootstrapping.