Slide 1
A modest attempt at measuring and communicating about quality
Laurie Reedman, Statistics Canada, June 28, 2018, Session 5
Slide 2
Outline
- Context
- Proposed quality indicator framework
- Metadata about quality
- Accuracy
- Next steps
Slide 3
Context
- Moving from a survey-centric business model to survey-supported, or survey-free
- Making more use of data we did not produce
- Increasing appetite for data: more timely and more detailed
- Quality challenge: how to estimate accuracy when we don't have sampling error
- Research question at Statistics Canada: how should we measure and report quality in this context?

We have just initiated a research project. We don't have answers yet; we just have lots of questions.
Slide 4
Unpack the research question
- How to measure: which dimensions or aspects of quality are important?
- How to report: who is the audience? Stakeholders (internal and external), casual data users, researchers, policy makers, others

Let's unpack the question. To figure out how to measure, we need to know what to measure: which dimensions or aspects of quality are important. To figure out how to report, we need to know the audience. We have several different audiences; they all understand at different levels, and they need different information. Internally, we measure and monitor all aspects of quality, to manage our own resources. Externally, we have casual data users, such as journalists, who look at aggregate totals but don't do complex analysis. Then we have power users, who could be researchers, or policy and decision makers. There could very well be other users whose needs we do not yet understand. So perhaps we need to report quality in several different ways, in order to reach the different audiences.
Slide 5
Quality dimensions or aspects
- Users will judge for themselves: relevance, timeliness, interpretability, accessibility, coherence with other sources
- We measure (estimate): accuracy, reliability, coverage, bias
- We assess when deciding to use data: perception of authority and credibility of the data producer, quality assurance practices followed by the data producer, processability of the data, combinability or linkability of the data, coherence with standards

When we talk about quality we break it down into different dimensions. The regular ones are still there, but when we start using data that comes from different sources, there are additional dimensions or aspects of quality that we need to consider. This is not an exhaustive list. What I've tried to capture here are the key things to consider about quality from three different perspectives. At the input stage we evaluate these things and make a decision about whether we are going to use the data or not. During processing we do editing, data integration, analysis, imputation, and calibration, and we measure or estimate the accuracy, bias and coverage of the data. At the output stage we create metadata, and while we are honest and transparent about the data, we also want to showcase everything that is good about the data, so that people will trust it and use it. So, for example, users will judge for themselves if our data is relevant for their purposes, but we can inform their decision by telling them where it comes from, what the concepts are, what the reference period is, and what the coverage is.
Slide 6
DRAFT Quality Indicator Framework (columns: Monitor Internally, Report Externally)
[Table: the quality dimensions and aspects from the previous slide, each marked "Yes" under Monitor Internally, with Report Externally values ranging over yes, sort of, maybe, and maybe not.]

It was deliberate that I wrote DRAFT across this slide and made it difficult for you to read. Remember that we have just started our research project at Statistics Canada, and we don't have answers yet. This is an early draft of a quality indicator framework. In the right-hand column it has all of the quality dimensions from the previous slide. On the left is a whole column of Yes's, showing that we measure and monitor all of these things internally. The fun is in the middle column. Here I am trying to show which dimensions or aspects we should report to data users. When I showed this slide to my colleagues we had a big discussion about which ones should be yes, and which ones should be maybe or maybe not. We don't agree yet on which ones we should report, never mind how to report them. That's why it's a research project. Let's look at linkability, for example; that one we agreed is a "yes". When we are deciding whether to use a particular dataset, we consider how easy it will be to link it with other data sources that we have. We could report the linkage rate to data users. We might also want to point out to users what variables are on the dataset that they can use to do their own linkages. This would help them assess the linkability or usefulness of our dataset for their purposes.
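The linkage rate mentioned above is straightforward to compute once a common identifier is known. A minimal sketch in Python; the record structure, the `sin` key field, and all values are invented for illustration, not taken from the presentation:

```python
def linkage_rate(dataset, reference_keys, key_field="sin"):
    """Share of records in `dataset` whose identifier is found in an
    external reference file (illustrative linkage-rate calculation)."""
    if not dataset:
        return 0.0
    linkable = [rec for rec in dataset if rec.get(key_field) in reference_keys]
    return len(linkable) / len(dataset)

# Toy example: 3 of 4 records carry an identifier present in the reference file.
records = [
    {"id": 1, "sin": "A1"},
    {"id": 2, "sin": "A2"},
    {"id": 3, "sin": None},   # missing identifier, cannot be linked
    {"id": 4, "sin": "A9"},
]
reference = {"A1", "A2", "A9", "B7"}
print(linkage_rate(records, reference))  # 0.75
```

Reporting the rate alongside the list of available linkage variables, as the speaker suggests, would let users judge linkability for their own purposes.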
Slide 7
Metadata about quality
- Quality indicator framework: a list, like a nutritional label; a composite index; an infographic (spiderweb, weather symbols, …)
- Assumptions, compromises and limitations: why we made assumptions and compromises, what the impact is on the quality and usability of the data, what the limitations of the data are
- Certification: what the standard is, who did the assessment, what's included (only the product? also the process?)

I like the idea of a list like on a nutritional label, where all the elements are in the same order and are reported with standardized units of measure. A composite quality index would be really cool, but I think we are a long way from being able to build one. Infographics convey a small amount of information very quickly. These might be ideal for the casual user but would likely not satisfy all our users. So maybe we need different layers to the metadata about quality that target different audiences. For a long time now we have been explaining in our metadata the assumptions and compromises we make, but maybe we should say a bit more about the impact these things have on the quality or usability of the data: what limitations there are, and what we would recommend you not try to do with this data. If we are going to have some sort of certification, then we should describe what the standards are, and ideally it would be an independent party doing the assessment, so we should say who that is. I would also like to see the assessment look at the quality of statistical processes as well as the quality of statistical products.
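The nutritional-label idea above amounts to a fixed ordering of dimensions with standardized reporting. A small Python sketch of what such a label could look like; the dimension names echo the presentation, but the ordering, formatting, and all values are invented examples:

```python
# Fixed reporting order, as on a nutritional label (illustrative choice).
LABEL_ORDER = [
    "Coverage", "Accuracy", "Reliability", "Timeliness",
    "Coherence with other sources", "Linkability",
]

def quality_label(indicators):
    """Render quality indicators in a fixed order with a standard layout,
    marking any dimension without a reported value as 'not assessed'."""
    lines = ["DATA QUALITY FACTS", "-" * 40]
    for dim in LABEL_ORDER:
        value = indicators.get(dim, "not assessed")
        lines.append(f"{dim:<32}{value}")
    return "\n".join(lines)

print(quality_label({
    "Coverage": "96%",
    "Accuracy": "CV 2.1%",
    "Timeliness": "reference period + 45 days",
}))
```

A fixed layout like this is what makes labels comparable across products, which is the appeal the speaker identifies.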
Slide 8
A few words about accuracy
How to estimate accuracy and reliability when there is no sampling error:
- Accuracy: how close the numbers are to reality
- Reliability: how accurate the numbers are through time
- Non-sampling errors can be systematic (resulting in bias) or random (resulting in increased variability, i.e. noise)
- To estimate bias: data validation, confrontation, resampling methods (Statistics Netherlands report, 2014)
- To estimate noise: the bootstrap method has potential under certain circumstances
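The bootstrap idea above can be sketched briefly: resample the observed data with replacement many times and use the spread of the estimator across replicates as a measure of noise. A minimal nonparametric example in Python; the dataset and the choice of the mean as estimator are invented for illustration, and whether the bootstrap is valid for a given non-survey source is exactly the open question the slide raises:

```python
import random
import statistics

def bootstrap_se(data, estimator=statistics.mean, n_boot=2000, seed=42):
    """Nonparametric bootstrap estimate of an estimator's standard error:
    draw n_boot resamples (with replacement, same size as the data),
    apply the estimator to each, and return the standard deviation of
    the resulting replicate estimates."""
    rng = random.Random(seed)
    replicates = [
        estimator([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    ]
    return statistics.stdev(replicates)

# Toy dataset standing in for values from an administrative source.
values = [12.0, 15.5, 9.8, 14.2, 11.1, 13.7, 10.4, 16.0]
print(round(bootstrap_se(values), 3))
```

This captures only the random (noise) component; systematic error would pass through every resample untouched, which is why the slide lists validation and confrontation separately for bias.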
Slide 9
Next steps
- Research project at Statistics Canada
- We don't want to re-invent the wheel
- We welcome collaboration on methods, standards, terminology
Slide 10
A modest attempt at measuring and communicating about quality
Thank you / Merci Laurie Reedman, Statistics Canada