“Mixed Methods in Program Evaluation” Presented by Tom Chapel


“Mixed Methods in Program Evaluation”
Thomas J. Chapel, MA, MBA, Chief Evaluation Officer, Centers for Disease Control and Prevention
Tchapel@cdc.gov, 404-639-2116
Now I'm going to jump into mixed methods as a special case of how we make data collection choices.

Agenda
1. The why and how of mixed methods: rationale, options, challenges, and criteria for making choices
2. Applying those points to some simple examples
We'll talk about rationale and options, challenges, and how you make those mixed-methods choices. I brought a couple of very, very simple examples; I wouldn't even call them case illustrations, they're really just examples, and we'll work those through.

CDC’s Evaluation Framework
Steps: Engage stakeholders; Describe the program; Focus the evaluation design; Gather credible evidence; Justify conclusions; Ensure use and share lessons learned
Standards: Utility, Feasibility, Propriety, Accuracy
The Standards apply especially when we're trying to make data collection choices. You'll see that this idea of evaluation standards sits at the center of the framework and comes into play at any one of our steps, but the standards really apply especially when we're trying to make data collection choices.

CDC’s Evaluation Standards: Utility, Feasibility, Propriety, Accuracy
The Standards provide a quick and easy way to identify the two or three best data collection choices for this evaluation. The standards keep us from overkill or underkill. We have hundreds of potential ways of gathering evidence and a lot of ways of analyzing it. The standards are a quick and easy way to ask: instead of a hundred ways, what are the two or three best ways, what are the choices I need to make at this stage of the game?

CDC’s Evaluation Framework
Not "Collect data." Not "Analyze data." Rather: "Gather credible evidence."
Now, if you look at our framework, it doesn't call steps four and five "collect the data" and "analyze the data," which is what you would usually see in the cycle of a normal data collection project. Instead we call it gathering credible evidence. Why?

Steps 1-3 Help You Focus Design and Data Collection Choices
After the first three steps of the Evaluation Framework, we know which evidence will work for these stakeholders in this situation. Randomized controlled trials? Qualitative data? Quantitative data? Performance measures?
The early steps of our framework (engaging stakeholders, understanding your program through a program description, and especially deciding in the focus step which key questions have to be asked) help guarantee that by the time we get to gathering credible evidence, we know which evidence is really going to sing. Are these people who really want qualitative data? Quantitative data? Are they expecting a randomized controlled trial design? Are they happy with a dashboard of performance measures?

CDC’s Evaluation Standards: Utility, Feasibility, Propriety, Accuracy
The Evaluation Standards help us narrow down our data collection choices to the handful of methods that will work for this evaluation at this time.
So at step four, the standards help narrow us down from hundreds of ways of collecting data to the handful that will work best for this evaluation this time.

Mixed Methods
Data collection methods that will work for this evaluation at this time sometimes means surveys or focus groups. But sometimes there is no one best way; the best choice is a combination of methods, or "mixed methods."
So that's enormously helpful, and what we'll see today is that sometimes the answer is "do a survey," sometimes the answer is "do a focus group," but sometimes (and this is where we're going today) the answer is that there is no one best way; the question will benefit from a combination of methods.

Six (Most) Common Ways to Collect Data ● Surveys ● Interviews ● Focus Groups ● Document Review ● Observation ● Secondary Data
Here are probably the six standard ways we cluster the types of data collection methods we choose. This is, again, at a high level. We're going to turn to how to mix these in a few minutes, but right now, if I'm looking for how to collect data on a certain question or a certain phenomenon, I would say with near certainty that just about every method of collecting the data falls into one of these six categories. Some sort of survey, whether it's by phone, a questionnaire people take in person, or a questionnaire they take through the mail; some kind of personal interview, whether structured or unstructured; some sort of group discussion or a focus group; document review, which can mean something as simple as analyzing the minutes or the attendance of a meeting, all the way up to doing a content analysis on the contents of a patient's chart; observation, covert or overt (these days covert observation is a pretty uncommon technique for data collection); and, as we talked about before, secondary data analysis: any kind of database that precedes the study, which I can draw on perhaps as a way of collecting some of the variables of interest. Now, the question becomes: if I always have six in play, when and how do I decide which of these six I'm gonna use to examine any question of interest?

How Standards Inform the Choice of Methods
Consider the context: How soon do I need the results? What resources can I use? Are there any ethical issues to consider?
Again, the evaluation standards come to the rescue here. Here are some standard ways we usually decide which of those methods to use. The first thing is the context. When I'm doing an evaluation and get to the data collection phase, I'm always asking at least these three questions. How soon do I need the results? That's going to point me to some methods more than others. How many resources can I really put into this; how much is it gonna cost? That's gonna point me to some sources as opposed to others. And then the ethical quandary: even if it's the best method possible, is there an ethical issue with it? For example, there are some phenomena which probably should be collected by survey, but unless I can promise absolute confidentiality, the survey would be unethical to do and I have to look for another method. Or covert observation: as I said before, absolutely the best way to collect certain things, but these days there's just too much ethical quandary around it, so even if it's the best way, the ethics stand in my way.

How Standards Inform the Choice of Methods
Also consider the content: sensitivity of the issue.
The second domain I look at when making these choices is something about the content to be measured. What's the nature of the question I'm looking at? The first consideration is the sensitivity of the issue. Am I asking a sensitive question, or a question where people are likely to give an accurate response that won't be distorted by the opinions of others?

How Standards Inform the Choice of Methods
Also consider the content: "The Hawthorne Effect." Will the act of being observed cause someone to distort their response?
The Hawthorne effect is not unlike that; it's an issue of obtrusiveness versus unobtrusiveness, and it comes from some old studies of worker productivity in the 1930s, in which researchers varied different dimensions of the work setting and concluded after a while that worker productivity went up no matter what they changed. They changed the lighting; productivity went up, then came down. They changed the heat; it went up and then came down. They changed the number of snacks; it went up and then came down. They concluded in the end that the workers were responding to being observed, not to any of the changes in the intervention itself. So sometimes, with the content we're measuring, we've got to ask ourselves: is this the kind of thing where the act of being measured is gonna cause someone to distort their response? That most often comes up when an issue is sensitive, but we'll see that sometimes with a sensitive issue we worry about this Hawthorne idea, this obtrusiveness, and other times we don't.

How Standards Inform the Choice of Methods
Also consider the content: validity and reliability.
The last two, validity and reliability, are a little bit different. There are validity and reliability issues with any method of data collection, but sometimes when I look at the question I'm asking, I see that with one method it's gonna be a lot harder to figure out whether what I'm seeing is real than with another. These validity and reliability issues are often also the lead-in to when we do mixed methods, or what we call triangulation: using more than one method.

Mixed Methods Address Concerns
Key Concept: Regardless of the method, when there are validity and reliability concerns, using more than one method (i.e., mixed methods) will often help.
If I have validity and reliability concerns, no matter what method I'm gonna use, often the answer (and this is how we'll turn to mixed methods in a second) is to use more than one method. I may not feel really strongly that I'm getting the right answer from a single method like a survey. But if I do a survey and I also do some observation, or if I do a survey and it's verified by something I see through some secondary analysis, then I feel a little more certain that I'm getting a valid and reliable response.

Mixed Methods: Definition
"The combination of at least one qualitative and at least one quantitative component in a single research project or program." (Bergman 2008)
So first, the world's shortest definition of mixed methods: a combination of at least one qualitative and at least one quantitative component in a single research project or program. That's the basic definition, so when you're mixing methods, it almost always implies that you're pulling something from column A, the quantitative column, and column B, the qualitative column.

Use Complementary Methods
Mixed methods is a combination of methods that has complementary strengths and non-overlapping weaknesses. The purpose is to supplement or complement the validity and reliability of the information.
A different way to look at it is that it's not just pulling any old thing from column A and any old thing from column B; the choice is made intentionally to balance strengths and avoid weaknesses. So you'll see that when we make mixed-methods choices, a lot of the time we're saying: here's the weakness in this qualitative method, and I can supplement it with this quantitative method; or here's the weakness in this quantitative method, and by adding this qualitative piece I can circumvent it.

Strengths of Quantitative Methods ● Require less time than qualitative methods ● Cost less ● Permit researcher control ● Quantitative data are considered "scientific" ● Easier to explain validity and reliability ● Easily amenable to statistical analyses
Now, why do we bother with mixed methods? Almost always it depends on what we're trying to understand in the content or in the context. What we buy with quantitative methods is generally less time and less cost; we get a lot of researcher control in the process; quantitative data, for good or ill (and probably less so today than a decade ago), are usually viewed by peers as more "scientific"; validity and reliability are generally a little easier to explain; and quantitative data are easily amenable to statistical analyses, so you'll see the spread and the disparity of responses, and you can look at the variation and the variance in a way that's harder to do with narrative data. So this looks great, and you think: gosh, what possible reason would I ever have to veer from quantitative methods?

Strengths of Qualitative Methods Choose qualitative methods when you are trying to: Explore or describe a phenomenon Well, when I go to qualitative methods, I generally go because the nature of what I'm trying to get at is less about point-in-time estimates or firm explanations of variance. I'm trying to explore, I'm trying to describe,

Strengths of Qualitative Methods Choose qualitative methods when you are trying to: Look for induction (i.e., "surprise") or I'm looking for induction, or what we call surprise. Sometimes surprises will jump out at me when I collect data quantitatively,

Strengths of Qualitative Methods Choose qualitative methods when you are trying to: Identify patterns but it's much more common for me to get surprised, or to learn patterns I wouldn't otherwise have discovered, when I'm asking a whole bunch of qualitative things.

Strengths of Qualitative Methods
Qualitative data can help you understand not just "what" but "WHY."
Now, hopefully this speaks to your life. You may say: gosh, I have this logic model and these performance measures, so I'm always in quantitative mode. But that's not true when you look at what you're trying to accomplish with a lot of the interventions we do in cancer or diabetes or heart disease or whatever. Even if on a routine basis quantitative data solves it, that quantitative data almost always continues to raise questions: Why is this going on? Is there anything I can understand that will help me better understand why I'm not getting higher knowledge, attitudes, and beliefs, or why knowledge, attitudes, and beliefs are not more rapidly turning into behavior change?

When to Use Mixed Methods
1. Corroboration: better understanding, more credibility; "triangulation," measuring the same thing from several different viewpoints.
2. Clarification: trying to understand why we got this result.
So when do we resort to mixed methods? That is, when do we decide that this combination of quantitative and qualitative is going to be the best way to go? Here are four of the most common functions mixed methods play, the most common kinds of added value. I'm mentioning all four because, depending on which of these reasons applies, we may be led to one method as opposed to another, or to a certain order: do I do the quantitative thing first or the qualitative thing first? The first is what we call corroboration. I see something and I want to understand it more completely or more credibly; I see something but I don't quite believe it, so I want to confirm it. That is triangulation. The term comes from surveying: if you've ever seen surveyors measuring out a road, they'll usually stand at three or four different positions, measure the same thing, and see whether their measurements converge. Same thing with corroboration. I use more than one method because I'm really not sure a survey alone is going to give me the valid data I'm looking for, so let me add this method or that one. If I'm seeing something that looks like the same answer, then I feel I have something defensible and valid. The second reason is probably more common. It's clarification: we're always surprised ("What the heck? Only X percent of people did X, Y, and Z. What is that about?"), so we use mixed methods to clarify, to understand more comprehensively or completely.

When to Use Mixed Methods
Mixed methods are most commonly used for:
3. Explanation: similar to clarification; we want to know the "why" or "what" behind the situation.
4. Exploration: similar to explanation; charting new territory, trying to observe patterns, examining different situations and varying results to induce patterns.
Explanation is almost the same thing. We're blindsided by a result, or we just want to know the why or the what behind it. For example, in the domestic violence case I mentioned, I wanted to really understand what was going on in an incident of domestic violence. So clearly I couldn't get by with a quantitative approach alone. I probably could have with a 20-page questionnaire, but that would have been a really horrible thing that would never have gotten through human subjects review, and even then it still would not have given me the control I need to be able to probe and understand. The final one is a lot like explanation, but we call it exploration: I really don't understand much at all, I'm charting new territory, and I'm out there observing, just trying to see if I see any patterns. Our domestic violence case sort of lived in this land between explanation and exploration. I picked people who had exhibited domestic violence or had an incident of domestic violence; I wanted to understand what was going on in that situation, but what I was really doing was exploring. I wanted to hear about enough cases, enough instances with enough variation in how they turned out, that I could start seeing patterns emerge inductively (what we call induction). You know what? When the woman acts this way at the beginning of the incident, it doesn't escalate; when she acts that way, it does escalate. That sort of observation. Or when the man is like this, then the woman should do this. Those are inductive observations, and I can only get them by collecting a lot of rich data about the situation.

Number of Project Facets Reported via Each Data Collection Method (Source: Gregory Guest, PhD)
Here's an example taken from the annals of real life. This is not my study but a friend's. First, look at this slide and make sure you understand what you're seeing. This was a study in which data were collected in three different ways. The blue data collection was a survey in which people were simply asked: how many facets of this complicated program have you actually implemented? That was followed up with a phone survey with a little more probing, and then with a site visit. Now, I love this slide because not only does it show how important it is to triangulate and how inadequate a single method of data collection might have been, it's also counterintuitive. In my experience, which is a couple of decades now, I would have expected exactly the opposite. If I asked these people how many facets of the intervention they were implementing, I would have expected the survey to show them all claiming to implement all of them (social desirability). I would have wanted to follow up with a phone survey to test them, hear their body language over the phone, probe the intensity of the intervention, and I'm betting then it wouldn't be 40 facets, it would be much closer to 30. Then I might go on site and find, holy mackerel, when you're really on site and looking at what they're doing, it's not 40, it's not 30, it's closer to 10 or 15. Now, these researchers found exactly the opposite. For whatever reason, it was so hard to describe what these facets were that when people were asked on paper, they got it way wrong. On the site visit, it turned out that good old project F thought they were only doing two or three facets when they were actually doing more facets than anybody else receiving this money. Projects J and L, if you look at them, are the same: in the survey they reported very few facets of the intervention, but on site they were actually implementing three out of four of the facets successfully.

Number of Project Facets Reported via Each Data Collection Method (Source: Gregory Guest, PhD)
This is an example of using a qualitative method (site visits) to corroborate a quantitative method (surveys). The result was increased validity of the data.
So this is a case where the combination of qualitative and quantitative for purposes of corroboration led to much richer and much more valid data. The survey findings were corroborated first with a phone survey and then with a site visit, a more qualitative method. Lo and behold, when that qualitative piece was added, that's when the really valid data emerged.
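As a rough illustration, the kind of cross-method comparison behind this corroboration example can be sketched in a few lines of Python. The site labels, facet counts, and ten-facet threshold below are invented for illustration; they are not the actual study data.

```python
# Hypothetical facet counts reported by three sites under each method.
# (Illustrative numbers only, not the actual Guest study data.)
facets_reported = {
    "F": {"survey": 3, "phone": 12, "site_visit": 34},
    "J": {"survey": 5, "phone": 10, "site_visit": 30},
    "L": {"survey": 4, "phone": 11, "site_visit": 29},
}

def flag_discrepancies(data, threshold=10):
    """Return sites where the site visit and the survey disagree by more
    than `threshold` facets, mapped to the size of the gap."""
    return {
        site: counts["site_visit"] - counts["survey"]
        for site, counts in data.items()
        if abs(counts["site_visit"] - counts["survey"]) > threshold
    }

print(flag_discrepancies(facets_reported))
```

Large gaps flagged this way are exactly the signal that a single method would have misled us and that the qualitative follow-up was worth the cost.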

Which to Choose?
How do you choose which methods to use? Which method comes first, the quantitative or the qualitative? You have a lot of flexibility in these decisions.
Now, there are a million ways to do mixed methods, and they vary across two dimensions; we're going to talk about some examples of each. The basic definition of mixed methods is simply combining a quantitative and a qualitative method. That's very easy. But there's a bit more richness and nuance to it that will help us decide when to use this method versus that one, and especially when to do the quantitative piece first or the qualitative piece first.

Parallel or Concurrent Mixed Methods
For "parallel" or "concurrent" mixed methods, quantitative and qualitative data collection happen at the same time: QUANTITATIVE + QUALITATIVE.
So one way mixed methods vary, and I'm gonna walk through some examples of each, is that some mixed methodologies are what we call parallel or concurrent: I'm doing something quantitative and something qualitative at exactly the same time.

Sequential Mixed Methods For “sequential” mixed methods, either quantitative or qualitative data collection can happen first. QUANTITATIVE QUALITATIVE OR QUALITATIVE QUANTITATIVE Many more mixed methodologies are what we call sequential. I’m gonna do something first and I’m gonna do something second. So I may do the quantitative piece first and I’ll do the qualitative one for corroboration.

Example of Sequential Mixed Methods to Corroborate Data
In this case, the qualitative method (site visits) was used to corroborate the quantitative method (surveys), and the results were different: QUANTITATIVE, then QUALITATIVE.
That's exactly what happened in the example I just gave you, with the site visits to those projects to see how many facets of the intervention were in place. I did a survey, which was quantitative: lo and behold, 10 or 15 out of 40 facets being implemented. Then I did a qualitative site visit: oh my gosh, they're implementing almost the entire intervention correctly. In other sequential cases I do the qualitative piece first, and that helps me boil things down, which I'll then test or verify with some quantitative technique.

Mixed Methods Is Your Choice
You are never required to use mixed methods. However, you may choose to use mixed methods when you have some indication that a single method may give you incorrect data, or an incorrect perception of reality.
Now again, if this sounds complicated, it's not. Remember, I never ever have to do mixed methods. I'm choosing to do mixed methods because something has told me that a single method alone is going to give me the wrong data, not enough data, not data I can work with, or an incorrect perception of reality. I'm not jumping into mixed methods as a default; I'm jumping in when something makes me uneasy about collecting data in a single way.

Mixing Methods During Data Analysis
Qualitative data (focus groups, observations, secondary data, etc.) can be converted to "numbers" via quantitative techniques like content analysis. This is also a mixed-method design approach.
Sometimes the mixing occurs at the point of collection: I do two different methods of data collection, then analyze each of them and combine them. But in other cases the mixing of methods really doesn't occur until the analysis. The most common example is when I do a qualitative data collection: I may just eyeball it for patterns, which is a qualitative analysis, but more commonly I'll run it through content analysis or some other simple software, and what it yields is counts of things. Then it becomes quantitative data. There I haven't mixed the collection methods; what I've done is decided to mix the analyses.

Mixing Methods During Data Analysis
Qualitative data can be very complex. Examining qualitative data with quantitative techniques helps to identify or validate patterns or themes.
Why is that useful? God bless us, qualitative data is wonderful, but sometimes it's so overwhelming that you get lost in the weeds, so sometimes we want to apply content analysis and other techniques that will yield some numbers, just to help us get our arms around what we're seeing. Am I really seeing a pattern? Let me see how often theme X, Y, or Z shows up. Oh my gosh, it shows up in 20% of my interviews; then I know that's a major theme, a major finding about, as in the example before, how a woman might be able to de-escalate an escalating domestic violence incident.
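A minimal content-analysis pass of the kind described, counting how often each coded theme appears across interviews, might look like this sketch. The theme codes are hypothetical, chosen only to echo the talk's example, and real content analysis involves a coding scheme far richer than this.

```python
from collections import Counter

# Each interview has been hand-coded with the set of themes it touches.
# (Hypothetical theme codes for illustration only.)
coded_interviews = [
    {"de-escalation", "fear", "support-network"},
    {"fear", "isolation"},
    {"de-escalation", "support-network"},
    {"isolation"},
    {"de-escalation", "fear"},
]

# Tally how many interviews mention each theme.
theme_counts = Counter(theme for interview in coded_interviews for theme in interview)
n = len(coded_interviews)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}/{n} interviews ({count / n:.0%})")
```

A theme that turns up in a large share of interviews is exactly the "it shows up in 20% of my interviews" signal the talk describes: the qualitative material has been turned into counts you can reason about quantitatively.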

Deciding When To Use Mixed Methods and How Key Concept: Using mixed methods is a deliberate design decision. You use it when you don’t trust the data from any single method. The reason for your uncertainty determines the methods you choose to mix and the order in which you use them. So we're gonna show some examples in a second, but almost always when you're using mixed methods, you've made a deliberate choice to mix a quantitative method and a qualitative method because, for some reason, you don't trust either method alone. Depending upon why you don't trust a single method, and what presenting problem you're trying to solve, you're going to choose something parallel (do both methods at once), something sequential (quantitative first with qualitative surrounding it, or qualitative first with quantitative summarizing it), or something iterative (a little more quantitative now, then some more qualitative to understand the outliers). All this sounds more complicated than it is. You have probably already done this, at least a poor person's version of it, without even knowing it, or maybe even while knowing it.

Example 1 Concurrent Design Problem or Purpose: Validity Do people give similar responses on surveys as well as in focus groups? Example Survey (quantitative) and focus groups (qualitative) are conducted concurrently with similar participants. So let's walk through some examples here. The first is an example of a concurrent design, and the purpose of doing a concurrent design is primarily the validity we talked about. I'm trying to determine if results are similar when I do the quantitative and the qualitative piece. So I do surveys and I do focus groups with similar participants. A side advantage is that I might learn some more richness in the focus groups, but in the concurrent design my main purpose is to figure out: when people are asked this in a qualitative setting like a focus group, are they giving similar answers to those they give when they answer a survey?

Example 2 Explanatory Sequential Design Problem or Purpose: Explain unexpected results Use a qualitative method to explain “blindside” results from a quantitative method. Example Survey (quantitative) followed by focus groups (qualitative) to explain or to better understand what’s going on. In the explanatory sequential design I'm trying to explain results from my quantitative data with my qualitative data, so this is what I think of as the blindsiding example. I get something in the survey and wonder: what in the heck is that answer about? Then I follow up with some qualitative data to figure out what's going on. What can explain that variation in results, or the fact that so few or so many people are doing this? In a case like this, the survey is the thing I conduct first; that's where my blindside comes from. Then I follow up with something like focus groups, personal interviews, or observation to try to explain or understand what's going on in that quantitative result.

Example 3 Exploratory Sequential Design Problem or Purpose: Verify suspected patterns Explore potential patterns with a qualitative method and then verify the patterns with a quantitative follow-up. Example Focus groups (qualitative) first, identify potential patterns, then do a survey (quantitative) to validate any patterns. The exploratory sequential design has the opposite order in mind. There, I'm going out in a formative way, trying to look at a new landscape, hunting around in the very earliest mode of induction: sitting behind a rock watching for patterns. Then I use the quantitative data either to zoom in on the patterns I found or to verify them. So I do a bunch of focus groups on a phenomenon first. From those I identify a whole bunch of patterns. Then I validate that those patterns exist by asking about them in a survey with similar folks.

Those are the first three designs; you can see how they resonate with the four purposes we covered before: explanation, corroboration, validation, and elaboration.

Design Options Summary You mix quantitative and qualitative methods in a different order depending on the presenting problem: Validate results. Explain the unexpected. Explore new themes. And you can see how it depends upon what my presenting problem is. Am I trying to validate something I'm seeing? Am I trying to explain a result that surprises me? Am I trying to understand the essence or the themes in something I'm observing for the first time? Each one of those calls for a combination of quantitative and qualitative methods, but a somewhat different mix and a different order in which we would do them.

Selected Resources (Page 1 of 2) Caracelli, V. and J. Greene (eds.). 1997. Advances in Mixed-Method Evaluation: The Challenges and Benefits of Integrating Diverse Paradigms. San Francisco, CA: Jossey-Bass. Creswell, J. and V. Plano Clark. 2010. Designing and Conducting Mixed Methods Research, 2nd edition. Thousand Oaks, CA: Sage Publications. Morse, J. and L. Niehaus. 2009. Mixed Method Design: Principles and Procedures. Walnut Creek, CA: Left Coast Press. So let me leave you with a couple of selected resources. I've bolded the ones I used most in these slides.

Selected Resources (Page 2 of 2) Johnson, R. B. and L. Christensen. 2008. Evaluation Methods. www.southalabama.edu/coe/bset/johnson/ Plano Clark, V. and J. Creswell. 2008. The Mixed Methods Reader. Thousand Oaks, CA: Sage Publications. Teddlie, C. and A. Tashakkori. 2009. Foundations of Mixed Methods Research: Integrating Quantitative and Qualitative Approaches in the Social and Behavioral Sciences. Thousand Oaks, CA: Sage Publications. I've given you five or six resources that are pretty common, easy to understand, and user-friendly on how to think about mixed methods, and especially the use of mixed methods in evaluation.

Recommended Resource Creswell, J. and V. Plano Clark. 2010. Designing and Conducting Mixed Methods Research, 2nd edition. Thousand Oaks, CA: Sage Publications. I'd say if you can only buy one book, the Creswell and Plano Clark is probably the one most people are reading these days. It's a revision of an earlier book of theirs that was quite influential. And the good thing about it, as some of these slides show, is that it's really easy, kind of user-friendly, and fairly practical. It's not always moving you to the most expensive mixed method; it's just making you understand why you would bother to mix a qualitative and a quantitative method.

The Community Tool Box Community Tool Box http://ctb.ku.edu Chapter 37, Section 5. Collecting and Analyzing Data And then finally, although the Community Tool Box isn't as great a source for this as it is for other things, people on the call may be aware of it. If you search around through there, there are often great resources on lots of things. I haven't looked recently at the data collection and analysis portion, but I'd be shocked if there weren't some really good, practical tools in there for helping you think about qualitative and quantitative data collection and the act of mixing methods.

End “Mixed Methods” Return to Webinar 4: Gathering Data, Developing Conclusions, and Putting Your Findings to Use (http://www.cdc.gov/asthma/program_eval/webinar4.htm) Return to Evaluation Webinars home page (http://www.cdc.gov/asthma/program_eval/evaluation_webinar.htm)