From Wikipedia: “Parametric statistics is a branch of statistics that assumes (that) data come from a type of probability distribution and makes inferences about the parameters of the distribution. Most well-known elementary statistical methods (e.g. the ones from our class) are parametric.” But there are alternative methods that don’t require any assumptions about the shape of the population’s probability distribution. Resampling methods are an example.

Resampling Methods
There are three kinds of resampling methods:

Permutation methods – used most commonly with correlations; the probability of the observed data is estimated by comparing the observed pairings to a large number of random pairings of the data.

Monte Carlo methods – estimate the population probability distribution through simulation.

Bootstrap methods – the sampling distribution of an observed statistic is estimated by repeatedly resampling the data with replacement and recalculating the statistic.
Example of a permutation method: Suppose you measured the IQs of 25 pairs of twins and found a correlation of r = 0.36. The scatter plot of your data is shown below. Is the observed correlation significantly greater than zero? (use α = .05)

[Scatter plot: IQ Twin 1 vs. IQ Twin 2, r = 0.36]

The (parametric) test used in our class would have found a critical value of r just below 0.36. We would reject H0 and conclude that a correlation of 0.36 is (barely) significantly greater than zero.
The distribution under the null hypothesis can be estimated by repeatedly shuffling (or ‘permuting’) the pairing between the X and Y values and recalculating the correlation each time.

[Figure: several example shuffles pairing X with a permuted Y′, each with its own recomputed correlation, e.g. r = .20, …]
[Figure: correlations from many permutations, e.g. r = 0.05, −0.01, −0.00, −0.22, 0.25, …]

This generates a distribution of correlations that should be centered around zero. We can then use this distribution to calculate the probability of obtaining our observed sample correlation.
After many repetitions, Pr(r > 0.36) = .0378. Only 3.78% of the correlations generated by permutation exceed the observed correlation of 0.36, so we’d reject the null hypothesis using α = .05.
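The permutation procedure above can be sketched in a few lines of Python. The twin IQ scores themselves are not given in the slides, so the data here are simulated with a built-in correlation; the seed and the 10,000-repetition count are illustrative choices, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 25 twin pairs with some true correlation
# (the slides' actual IQ scores are not available).
n = 25
twin1 = rng.normal(100, 15, n)
twin2 = 100 + 0.4 * (twin1 - 100) + rng.normal(0, 14, n)

r_obs = np.corrcoef(twin1, twin2)[0, 1]

# Permutation: shuffle the pairing and recompute r many times.
n_reps = 10_000
r_null = np.empty(n_reps)
for i in range(n_reps):
    r_null[i] = np.corrcoef(twin1, rng.permutation(twin2))[0, 1]

# One-tailed p-value: proportion of permuted r's that meet or exceed r_obs.
p = np.mean(r_null >= r_obs)
print(f"r_obs = {r_obs:.2f}, permutation p = {p:.4f}")
```

As the text notes, the permuted correlations pile up around zero; the p-value is simply the fraction of that null distribution at or beyond the observed r.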
Example of a Monte Carlo simulation: Liar’s dice. This is a game in which n players roll 6-sided dice (40 dice in total here) and keep the outcomes hidden under their own separate cups. The goal is to guess how many dice equal the mode. After a player makes a guess, the next player must either decide that the guess is too high or make a higher guess. If the guess is challenged as too high, the cups are lifted and the number of dice equal to the mode is counted. If the count is indeed lower than the guess, the challenger wins and the player who made the guess must drink (lemonade). Suppose there are eight players, each with 5 dice. The player to your right just guessed that 14 dice equal the modal value. What is the probability that the number of dice equal to the mode of the 40 dice is that high or higher? Here’s an example of 40 throws: the mode is 5, and 10 of these throws equal the mode.

[Figure: one example roll of 40 dice; mode = 5, count at mode = 10]
[Table: 20 example simulations. Each row is a throw of 40 dice; the last column is the number of dice in that throw that equal the mode.]
A computer simulation of one million rolls generated the histogram below. Shown in red are the simulations in which the number of dice equal to the mode is 14 or higher. Only 2.31% of the simulations found a count of 14 or higher. This small probability means that the player should challenge the guess: ask all players to lift their cups and count the dice.
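A minimal sketch of this Monte Carlo simulation: roll 40 fair six-sided dice, count how many show the modal face, repeat, and estimate Pr(count ≥ 14). The slides report about 2.31% from one million rolls; the rep count and seed below are illustrative choices, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

n_reps = 100_000
counts = np.empty(n_reps, dtype=int)
for i in range(n_reps):
    dice = rng.integers(1, 7, 40)                 # one throw of 40 dice
    face_counts = np.bincount(dice, minlength=7)[1:]  # counts for faces 1..6
    counts[i] = face_counts.max()                 # how many dice equal the mode
p = np.mean(counts >= 14)
print(f"Pr(count at mode >= 14) ~ {p:.4f}")
```

By the pigeonhole principle the count at the mode can never be below 7 (40 dice over 6 faces), which is a handy sanity check on the simulation.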
Third method of resampling: bootstrapping to conduct a hypothesis test on medians. Suppose you measured the amount of time it takes a subject to perform a simple mental rotation. Previous research shows that it should take a median of 2 seconds to perform this task. Your subject completes 500 trials and generates the distribution of response times below, which has a median of 2.15 seconds. Is this median significantly greater than 2 seconds? (use α = .05)
The trick to bootstrapping is to generate an estimate of the sampling distribution of your observed statistic by repeatedly sampling the data with replacement and recalculating the statistic each time.

[Figure: a series of bootstrap resamples, each with its recomputed median, e.g. median = 2.22, …]

For our example, we count the proportion of bootstrapped medians that fall below 2.
Since more than 5% of our bootstrapped medians fall below 2, we (just barely) cannot conclude that our observed median is significantly greater than 2.
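The bootstrap test above can be sketched as follows. The 500 response times are not given in the slides, so the data here are hypothetical lognormal RTs constructed to have a median near 2.15 s; the seed and 10,000-resample count are likewise illustrative, so the resulting p-value will not exactly reproduce the slides' borderline result.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical response times (lognormal, median ~ 2.15 s) standing in
# for the 500 observed trials.
rt = rng.lognormal(mean=np.log(2.15), sigma=0.5, size=500)

n_reps = 10_000
boot_medians = np.empty(n_reps)
for i in range(n_reps):
    resample = rng.choice(rt, size=rt.size, replace=True)  # with replacement
    boot_medians[i] = np.median(resample)

# Proportion of bootstrapped medians at or below the null value of 2 s.
p = np.mean(boot_medians <= 2.0)
print(f"observed median = {np.median(rt):.2f}, bootstrap p = {p:.4f}")
```

If fewer than 5% of the bootstrapped medians fall at or below 2, we conclude the observed median is significantly greater than 2; otherwise, as in the slides' example, we cannot.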