1
Biostatistics in Practice, Session 1: Quantitative and Inferential Issues. Youngju Pak, Biostatistician; Peter D. Christenson. http://research.LABioMed.org/Biostat
2
Class Note: We will typically have many more slides than are covered in class.
3
Session 1 Objectives: General quantitative needs in biological research; Statistical software; Protocol examples, with statistical sections; Overview of statistical issues using a published paper.
4
Session 1 Objectives: General quantitative needs in biological research; Statistical software; Protocol examples, with statistical sections; Overview of statistical issues using a published paper.
5
General Quantitative Needs. Descriptive: appropriate summarization to meet the scientific questions. E.g., changes, or % changes, or reaching a threshold? Mean, or minimum, or range of response? Average time to death, or chances of dying by a fixed time?
6
General Quantitative Needs, Cont’d. Inferential: could results be spurious, a fluke, due to “natural” variation or chance? Sensitivity/Power: how many subjects are needed? Validity: issues such as bias and valid inference are general scientific ones, but can be addressed statistically.
7
Session 1 Objectives: General quantitative needs in biological research; Statistical software; Protocol examples, with statistical sections; Overview of statistical issues using a published paper.
8
Professional Statistics Software Package (screenshot): code/syntax is entered, output is produced, and the data are stored and accessible.
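The point of the screenshot is that syntax-driven packages document an analysis as re-runnable code against stored data. As a minimal sketch of that style, here is a hypothetical two-group comparison written in Python with SciPy rather than in any of the packages pictured; the group names and values are invented for illustration.

```python
# Sketch of a syntax-driven analysis: the commands, not menu clicks,
# document exactly what was done and can be re-run on the stored data.
from scipy import stats

# Hypothetical outcome values for two groups (e.g., treated vs. control)
group_a = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]
group_b = [4.2, 4.6, 4.1, 4.9, 4.4, 4.3]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```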
9
Typical Statistics Software Package (screenshots): methods are selected from menus, output appears after the menu selection, and the data sit in a spreadsheet. Examples: www.ncss.com, www.minitab.com
10
Microsoft Excel for Statistics: primarily for descriptive statistics; limited output; no analyses for percentages.
11
Almost Free On-Line Statistics Software: runs from the browser, not locally; can store data and results on the StatCrunch server; $5 per 6 months of usage. www.statcrunch.com
12
Free Statistics Software: Mystat www.systat.com
13
Free Study Size Software www.stat.uiowa.edu/~rlenth/Power
14
Session 1 Objectives: General quantitative needs in biological research; Statistical software; Protocol examples, with statistical sections; Overview of statistical issues using a published paper.
15
Typical Statistics Section of a Protocol: overview of study design and goals; randomization/treatment assignment; study size; missing data / subject withdrawal or incompletion; definitions/outcomes; analysis populations; data analysis methods; interim analyses.
16
Public Protocol Registration: www.clinicaltrials.gov, www.controlled-trials.com. An attempt to make the public aware of studies that may turn out negative. Many journals now require registration in order to consider the results for future publication.
17
Public Protocol Registration
18
Example of Protocol --- Displayed in Class ---
19
Session 1 Objectives: General quantitative needs in biological research; Statistical software; Protocol examples, with statistical sections; Overview of statistical issues.
20
Statistical Issues: Subject selection; Randomization; Efficiency from study design; Summarizing study results; Making comparisons; Study size; Attributability of results; Efficacy vs. effectiveness; Exploring vs. proving.
21
Paper with Common Statistical Issues Case Study:
22
McCann et al., Lancet 2007 Nov 3;370(9598):1560-7. Food additives and hyperactive behaviour in 3-year-old and 8/9-year-old children in the community: a randomised, double-blinded, placebo-controlled trial.
Target population: children aged 3-4 and 8-9 years
Study design: randomized, double-blinded, controlled, crossover trial
Sample size: 153 (3-year-olds) and 144 (8-9-year-olds), in Southampton, UK
Objective: test whether intake of artificial food colour and additives (AFCA) affects childhood behaviour
Sampling: stratified sampling based on SES
Baseline measure: 24-h recall by the parent of the child's pretrial diet
Groups: three (mix A, mix B, placebo)
Outcomes: ADHD rating scale IV by teachers, WWP hyperactivity score by parents, classroom observation code, and Conners continuous performance test II (CPT II), combined into a global hyperactivity aggregate (GHA) score
23
Statistical Issues: Subject selection; Randomization; Efficiency from study design; Summarizing study results; Making comparisons; Study size; Attributability of results; Efficacy vs. effectiveness; Exploring vs. proving.
24
Selecting Study Subjects
25
Representative or Random Samples: How were the children to be studied selected (second column of the first page)? The authors purposely selected "representative" social classes. Is this better than a "randomly" chosen sample that ignores social class? We often hear: non-random = non-scientific.
26
Case Study: Participant Selection No mention of random samples.
27
Case Study: Participant Selection. It may be that only a few schools are needed to get enough individuals. If, among all possible schools, few are lower SES, a simple random sample of schools might include none of them. So instead, one random sample of schools is chosen from the lower-SES schools, and another random sample from the higher-SES schools.
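A minimal sketch of that stratified selection, assuming a hypothetical list of 40 schools tagged by SES stratum (all names and counts are invented):

```python
import random

# Hypothetical sampling frame: 40 schools, only a minority of them lower SES
schools = [("School %02d" % i, "lower" if i < 8 else "higher") for i in range(40)]

random.seed(1)
lower = [name for name, ses in schools if ses == "lower"]
higher = [name for name, ses in schools if ses == "higher"]

# Draw a random sample within each SES stratum, so lower-SES schools
# cannot be missed entirely by chance.
chosen = random.sample(lower, 3) + random.sample(higher, 5)
print(chosen)
```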
28
Selection by Over-Sampling: The % lower SES in the study need not be the same as in the population. Even so, a rare subgroup may still contain too few subjects to give reliable data. One can "over-sample" a rare subgroup, and then weight the overall results by the proportions of the subgroups in the population. The CDC NHANES studies do this.
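As a sketch of that re-weighting step, with invented proportions and subgroup means (NHANES itself uses far more elaborate survey weights):

```python
# Suppose the rare subgroup is 10% of the population but was over-sampled
# to 40% of the study. Weight each subgroup's mean by its population share.
pop_share = {"rare": 0.10, "common": 0.90}      # population proportions
sample_mean = {"rare": 7.2, "common": 5.0}      # hypothetical subgroup means

weighted_mean = sum(pop_share[g] * sample_mean[g] for g in pop_share)
print(f"Population-weighted mean: {weighted_mean:.2f}")   # 0.1*7.2 + 0.9*5.0 = 5.22
```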
29
Random Samples vs. Randomization We have been discussing the selection of subjects to study, often a random sample. An observational study would, well, just observe them. An interventional study assigns each subject to one or more treatments in order to compare treatments. Randomization refers to making these assignments in a random way.
30
Why Randomize? Plant breeding example: compare the yields of varieties A and B, planting each in 18 of 36 plots. Which design is better: a systematic layout (alternating rows of ABABAB and BABABA) or a randomized layout (A and B scattered at random across the plots)?
31
Why Randomize? So that the groups will be similar except for the intervention. So that, when enrolling, we will not unconsciously choose an "appropriate" treatment for a particular subject. Randomization minimizes the chance of introducing bias through a scheme that is meant to remove it systematically, as in the plant yield example.
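A minimal sketch of generating such a randomized layout: 18 plots per variety over a 6 x 6 field, mirroring the plant example; the random seed is arbitrary.

```python
import random

# 18 plots for variety A and 18 for variety B, assigned in random order
# instead of the systematic ABAB... layout.
assignments = ["A"] * 18 + ["B"] * 18
random.seed(7)
random.shuffle(assignments)

# Show the result as a 6 x 6 field layout
for row in range(6):
    print(" ".join(assignments[6 * row : 6 * row + 6]))
```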
32
Statistical Issues: Subject selection; Randomization; Efficiency from study design; Summarizing study results; Making comparisons; Study size; Attributability of results; Efficacy vs. effectiveness; Exploring vs. proving.
33
Basic Study Designs: 1. Prospective (longitudinal); 2. Retrospective (case-control); 3. Cross-sectional; 4. Randomized controlled.
34
Case Study: Crossover Design Each child is studied on 3 occasions under different diets. Is this better than three separate groups of children? Why, intuitively? How could you scientifically prove your intuition?
35
Blocked vs. Unblocked Studies. AKA matched vs. unmatched; AKA paired vs. unpaired. Block = pair = set receiving all treatments. The set could be an individual at multiple times (pre and post), or left and right arms for a sunscreen comparison; twins or a family; centers in a multi-center study, etc. Block ↔ homogeneous. Blocking is efficient because the treatment difference within a block is usually more consistent across subjects than the individual treatment responses are.
36
Potential Efficiency Due to Pairing (figure): unpaired A and B measured in separate groups vs. A and B paired within a set; the paired differences Δ = B - A (about 3 here) vary far less than the values in the separate groups do.
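To make the figure's point concrete, a small simulation with invented numbers: each subject's B exceeds their A by about 3, but subjects differ widely from one another, so pairing removes most of the noise.

```python
import random
import statistics

random.seed(3)
# Each subject has their own baseline level (large subject-to-subject spread);
# within a subject, B runs about 3 units higher than A.
baseline = [random.gauss(20, 5) for _ in range(30)]
a = [s + random.gauss(0, 1) for s in baseline]
b = [s + 3 + random.gauss(0, 1) for s in baseline]

# Unpaired comparison: the noise includes the subject-to-subject spread.
# Paired comparison: each subject is their own control, so that spread cancels in B - A.
diffs = [bi - ai for ai, bi in zip(a, b)]
print("SD within a single group (A):  %.2f" % statistics.stdev(a))
print("SD of the paired differences:  %.2f" % statistics.stdev(diffs))
```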
37
Statistical Issues: Subject selection; Randomization; Efficiency from study design; Summarizing study results; Making comparisons; Study size; Attributability of results; Efficacy vs. effectiveness; Exploring vs. proving.
38
Outcome Measures: Generally, how were the outcome measures defined (third page)? They are more complicated here than in most studies. What are the units (e.g., kg, mmol, $, years)? Outcome measures are specific and pre-defined; aims and goals may be more general.
39
Summarization / Data Reduction: How are the outcome measures summarized? See, e.g., Table 2 of the paper.
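As a sketch of this kind of data reduction, summarizing a score by diet group; the column names and values below are invented, not the paper's dataset.

```python
import pandas as pd

# Hypothetical long-format data: one row per child per diet period
df = pd.DataFrame({
    "diet":  ["mix A", "mix A", "mix B", "mix B", "placebo", "placebo"],
    "score": [0.20, 0.35, 0.10, 0.28, -0.05, 0.02],   # e.g., change in a hyperactivity score
})

# Reduce the raw scores to a mean, SD, and count per group, as in a summary table
print(df.groupby("diet")["score"].agg(["mean", "std", "count"]))
```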
40
Case Study: Statistical Comparisons How might you intuitively decide from the summarized results whether the additives have an effect? Different Enough? Clinically? Statistically?
41
Statistical Comparisons: Figure 3
42
Statistical Comparisons and Tests of Hypotheses. Engineering analogy: signal and noise. Signal = the diet effect; noise = the degree of precision. Statistical tests: an effect is probably real if the signal-to-noise ratio, Signal/Noise, is large enough. Hence the importance of reducing the "noise", which incorporates both subject variability and N.
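As a sketch of that ratio for a single mean change (all numbers invented): the noise is the standard error, which shrinks as subject variability falls or as N grows, and the ratio itself is the familiar t statistic.

```python
import math

# Hypothetical paired changes: signal is their mean, noise is their standard error
effect = 0.30          # mean change on the hyperactivity scale (invented)
sd = 1.20              # standard deviation of the changes (invented)
n = 140                # number of children

standard_error = sd / math.sqrt(n)          # "noise" shrinks with larger N
t = effect / standard_error                 # signal-to-noise ratio
print(f"SE = {standard_error:.3f}, t = {t:.2f}")
```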
43
Back to Efficiency of Design (same figure as before): the signal is the difference of about 3 in both designs, but the noise is far smaller for the paired differences Δ = B - A than for the unpaired, separate groups of A and B.
44
Statistical Issues: Subject selection; Randomization; Efficiency from study design; Summarizing study results; Making comparisons; Study size; Attributability of results; Efficacy vs. effectiveness; Exploring vs. proving.
45
Number of Subjects: The authors discuss their study size in the second column of the fourth page. Intuitively, what should go into selecting the study size? We will make this intuition rigorous in Session 4.
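A sketch of the usual ingredients, using the standard normal-approximation formula for comparing two independent means; the detectable difference, SD, alpha, and power below are invented, not the paper's values.

```python
from scipy.stats import norm

# Ingredients for study size (all invented here)
delta = 0.5      # smallest difference in mean score worth detecting
sigma = 1.2      # expected standard deviation of the outcome
alpha = 0.05     # two-sided type I error
power = 0.80     # chance of detecting delta if it is real

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

# Per-group size for comparing two independent means (normal approximation)
n_per_group = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
print(f"About {n_per_group:.0f} subjects per group")
```

Dedicated study-size software, such as the free Lenth site linked on an earlier slide, handles many more designs than this simple two-group case.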
46
Statistical Issues: Subject selection; Randomization; Efficiency from study design; Summarizing study results; Making comparisons; Study size; Attributability of results; Efficacy vs. effectiveness; Exploring vs. proving.
47
Other Effects, Potential Biases: The top of the second column on the fourth page mentions other effects on diet. The issue here is: could apparent diet differences (e.g., -0.26 for mix B vs. -0.44 for placebo) be attributable to something else?
48
Statistical Issues: Subject selection; Randomization; Efficiency from study design; Summarizing study results; Making comparisons; Study size; Attributability of results; Efficacy vs. effectiveness; Exploring vs. proving.
49
Non-Completing or Non-Adhering Subjects: What is the most relevant group of studied subjects: all randomized, mostly adherent, or fully adherent? It depends on the study goal: the scientific effect (efficacy), or the societal impact (effectiveness)?
50
Statistical Issues: Subject selection; Randomization; Efficiency from study design; Summarizing study results; Making comparisons; Study size; Attributability of results; Efficacy vs. effectiveness; Exploring vs. proving.
51
Multiple and Mid-Study Analyses: Many more analyses could have been performed on each of the individual behaviour ratings described in the first column of the third page. Wouldn't it be negligent not to do them, and miss something? Is there a downside to doing them? Should effects be monitored as more and more subjects complete?
52
Multiple Analyses (figure): the GHA (Global Hyperactivity Aggregate) combines many separate measures, including teacher ADHD, parent ADHD, classroom ADHD, and Conners ratings. Torture data long enough and it will confess to something.
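A small simulation of that warning: with many separate outcomes and no true effect at all, the chance that at least one test comes out "significant" at the 5% level grows well beyond 5%. Purely illustrative numbers:

```python
import random

random.seed(11)
n_outcomes = 10       # e.g., many separate behaviour ratings
n_studies = 2000      # simulated studies with NO real effect
alpha = 0.05

false_alarm = 0
for _ in range(n_studies):
    # Under no effect, each test's p-value is roughly uniform on (0, 1)
    p_values = [random.random() for _ in range(n_outcomes)]
    if min(p_values) < alpha:
        false_alarm += 1

print(f"At least one 'significant' result in {false_alarm / n_studies:.0%} of null studies")
# Expected: about 1 - 0.95**10, i.e. roughly 40%
```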
53
Mid-Study Analyses (figure: the effect estimate plotted against the number of subjects enrolled over time): too many looks at the accumulating data, as with too many outcomes on the previous slide, risk a wrong early conclusion. There is a need to monitor, but also to account for the many analyses.
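A companion sketch for mid-study looks: testing repeatedly as subjects accrue, and stopping at the first "significant" result, also inflates the false-positive rate when there is no effect. A rough simulation with invented settings (simple z tests of a mean against 0, known SD of 1):

```python
import math
import random

random.seed(5)
looks = [20, 40, 60, 80, 100]   # analyse after every 20 subjects
n_trials = 2000
z_cutoff = 1.96                 # nominal 5% two-sided cutoff at each look

stopped_early = 0
for _ in range(n_trials):
    data = []
    for target in looks:
        while len(data) < target:
            data.append(random.gauss(0, 1))       # no true effect
        mean = sum(data) / len(data)
        z = mean * math.sqrt(len(data))           # z statistic at this look (SD = 1)
        if abs(z) > z_cutoff:
            stopped_early += 1
            break

print(f"'Significant' at some look in {stopped_early / n_trials:.0%} of null trials")
# Noticeably more than 5%, which is why interim looks need special accounting
```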
54
Bad Science That May Seem Good: 1. Re-examining data, or using many outcomes, seeming to be due diligence. 2. Adding subjects to a study that is showing marginal effects; stopping early due to strong results. 3. Emphasizing effects in subgroups. Actually bad? It could be negligent NOT to do these things, but one needs to account for doing them.