Slide 1: The Role of Statistics and Statisticians in Effectiveness Evaluations
Wendy Bergerud, Research Branch, BC Min. of Forests, November 2002
Slide 2: Statistician’s roles
- Team member
  - or act as an advisor
  - ==> Either way, get us involved early
- Educate
  - Prepare a guidebook
  - Prepare topic-specific pamphlets
  - Present a workshop
- Develop new statistical methods
Slide 3: Effectiveness Evaluation Teams
- Teams provide the needed expertise and person-power to complete complicated effectiveness evaluations.
- Teams benefit from having members with different expertise, backgrounds and points of view.
- A statistician brings knowledge and skills about the methods used to obtain reliable results.
Slide 4: Evaluation Methods
- The statistician’s expertise is particularly relevant during:
  - Design of the evaluation
  - Analysis and interpretation of collected data
Slide 5: Questions we ask about design
- What questions are we trying to answer with this project? What is the objective?
- How can we design this project to answer these questions within the given logistical and resource constraints?
- What internal and external threats could undermine our confidence in the final results? How can we mitigate these threats?
Slide 6: Questions we ask about analysis and interpretation
- How do we analyse the resulting data to answer the questions?
- What assumptions are necessary to do this analysis and make these conclusions?
- And how well do we know these conclusions? (e.g. confidence limits around a mean; a sketch of this calculation follows)
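The slide mentions confidence limits around a mean as one way of expressing how well we know a conclusion. Below is a minimal sketch of that calculation for a simple random sample, using SciPy for the t critical value; the data values are made up for illustration and are not from the presentation.

```python
import statistics
from scipy import stats

# Hypothetical plot measurements (illustrative values only).
y = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 12.7, 9.9]

n = len(y)
mean = statistics.mean(y)
se = statistics.stdev(y) / n ** 0.5      # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)    # two-sided 95% t critical value

lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```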
Slide 7: “Statistical designs always involve compromises between the desirable and the possible.” - Leslie Kish
- Some assumptions and simplifications must always be made.
- We must understand their consequences:
  - Will others agree with them?
  - How do they weaken our ability to generalize the results?
  - How do they weaken any cause-and-effect statements that we might like to make?
Slide 8: Evaluation Objective
The team should:
- Refine its mandate into specific and detailed objectives.
- Define the population under study.
- Consider a range of possible outcomes and what they mean for design, data analysis and interpretation.
- If looking for differences, determine the minimum difference of practical importance (a sample-size sketch follows this slide).
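Fixing the minimum difference of practical importance up front lets the team check whether the design can actually detect it. The rough two-group sample-size sketch below assumes a normal approximation, equal group sizes, and a guessed standard deviation; the numbers are illustrative, not from the presentation.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a mean difference
    `delta` in a two-sided, two-sample comparison."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96
    z_b = NormalDist().inv_cdf(power)           # about 0.84
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# e.g. detect a 5-unit difference when the SD is thought to be about 10
print(n_per_group(delta=5, sigma=10))   # roughly 63 per group
```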
Slide 9: Categories of Evaluations
- Evaluation Projects
- Continuous Improvement Processes
Slide 10: Evaluation Projects
- One Group: Descriptive Survey
- Several Groups:
  - Analytical Survey
  - Controlled Experiment
  - Observational Study
- Changes over Time
Slide 11: Continuous Improvement Processes
- Acceptance Sampling
- Statistical Quality Control
- Two-Phase Sampling (Double Sampling)
- Bayesian Statistics
Slide 12: Acceptance Sampling
- Used to decide whether a threshold value has been reached.
- Controls the overall chances of making wrong decisions.
- Uses standard sampling theory, but to achieve a different objective.
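The deck does not spell out the calculation, so here is a minimal sketch of a standard single-sampling plan: inspect n units and accept if at most c are non-conforming. The probability of acceptance at a given true non-conformance rate follows the binomial distribution; the plan parameters below are invented for illustration.

```python
from math import comb

def prob_accept(p, n=50, c=2):
    """P(accept) for a single-sampling plan: inspect n units,
    accept if at most c are non-conforming, true defect rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Operating characteristic: how the plan behaves at different quality levels,
# which is what controls the chances of wrong accept/reject decisions.
for p in (0.01, 0.05, 0.10, 0.20):
    print(f"p = {p:.2f}  P(accept) = {prob_accept(p):.3f}")
```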
Slide 13: Statistical Quality Control
- Units produced by some process are sampled on a regular but random basis to determine whether the response values are within set limits.
- If the response is within the limits, then the process is ‘under control’ and all is well.
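As a minimal sketch of the idea (not the presenter’s method): control limits are set from data collected while the process was known to be stable, commonly at the centre line plus or minus three standard errors, and new subgroup means are checked against them. All values below are made up.

```python
import statistics

# Baseline subgroup means observed while the process was known to be stable
# (illustrative values only).
baseline = [10.2, 9.9, 10.1, 10.4, 9.8, 10.0, 10.3, 9.7, 10.1, 10.2]
centre = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
lcl, ucl = centre - 3 * sigma, centre + 3 * sigma   # 3-sigma control limits

# New subgroup means sampled on a regular but random basis.
new_means = [10.1, 9.9, 10.6, 11.2]
for i, x in enumerate(new_means, start=1):
    status = "within limits" if lcl <= x <= ucl else "outside limits -- investigate"
    print(f"subgroup {i}: {x:.1f}  ({status})")
```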
Slide 14: Two-Phase Sampling
- The population is intensively sampled once to obtain auxiliary information.
- A subsample of the first sample is selected and a response variable is measured.
- The strong relationship between the auxiliary variable and the response is used to develop a population estimate for the response variable.
- aka Double Sampling.
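A minimal sketch of one common form, the regression estimator for double sampling, with invented numbers: a cheap auxiliary variable x (say, an ocular estimate) is recorded on a large first-phase sample, the expensive response y is measured on a subsample, and the fitted relationship adjusts the subsample mean.

```python
import statistics

# Phase 1: cheap auxiliary variable x measured on a large first-phase
# sample; only its mean is needed here (illustrative value).
x1_bar = 42.0

# Phase 2: both x and the expensive response y measured on a subsample.
x2 = [40.0, 45.0, 38.0, 47.0, 41.0, 44.0]
y2 = [52.0, 60.0, 49.0, 63.0, 54.0, 58.0]

x2_bar = statistics.mean(x2)
y2_bar = statistics.mean(y2)

# Slope of the least-squares line of y on x within the subsample.
sxy = sum((xi - x2_bar) * (yi - y2_bar) for xi, yi in zip(x2, y2))
sxx = sum((xi - x2_bar) ** 2 for xi in x2)
b = sxy / sxx

# Regression (double-sampling) estimator of the mean of y:
# adjust the subsample mean using the first-phase information about x.
y_hat = y2_bar + b * (x1_bar - x2_bar)
print(f"subsample mean = {y2_bar:.1f}, double-sampling estimate = {y_hat:.1f}")
```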
Slide 15: Bayesian Statistics
- Uses prior information.
- New data are used to update the prior information.
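A minimal sketch of the update idea using a conjugate Beta-Binomial model for a proportion (say, the proportion of sites meeting a standard); the prior and the data values are invented for illustration.

```python
# Prior belief about a proportion, expressed as a Beta(a, b) distribution
# (roughly: a prior "successes" and b prior "failures"; values are illustrative).
a_prior, b_prior = 8, 2          # prior mean 0.80

# New data: 30 sites evaluated, 21 met the standard.
met, evaluated = 21, 30

# Conjugate update: the posterior is also a Beta distribution.
a_post = a_prior + met
b_post = b_prior + (evaluated - met)

prior_mean = a_prior / (a_prior + b_prior)
post_mean = a_post / (a_post + b_post)
print(f"prior mean = {prior_mean:.2f}, posterior mean = {post_mean:.2f}")
```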
Slide 16: Designing the Evaluation
- Using existing work
  - Literature Search
  - Meta-analysis
- Collecting new data
  - Retrospective Approach
  - Prospective Approach
Slide 17: Designing the Evaluation
- Arranging the material (a small sampling and assignment sketch follows this list)
  - Can we randomly sample units from the desired target population?
  - Can we randomly assign levels of interesting factors to the population units, or can we only observe what they are?
  - Can we get sufficient replication?
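As a small illustration of the first two questions, a sketch using Python’s random module; the unit labels and group sizes are hypothetical.

```python
import random

random.seed(1)  # reproducible example

# Hypothetical sampling frame of population units (e.g. candidate blocks).
units = [f"block_{i:02d}" for i in range(1, 21)]

# Random sampling: draw 8 units from the frame.
sampled = random.sample(units, k=8)

# Random assignment: split the sampled units between two treatment levels.
shuffled = sampled[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
assignment = {u: ("treated" if i < half else "control")
              for i, u in enumerate(shuffled)}
print(assignment)
```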
Slide 18: Populations