1
Process Evaluation Susan Kasprzak, March 13, 2009
2
Process Evaluation
Objectives:
Overview of ‘process evaluation’
Steps for developing a process evaluation plan
Measures

As you know, the framework that all of you have been working on over the last many months has been focused on outcomes as well as process; in other words, measuring client outcomes as well as the way the program is being delivered and whether the program is reaching the intended population. In many ways, this webinar could be perceived as coming too late; most of you will have already gone through the task of identifying process evaluation questions and struggled with how to objectively measure your own internal program. The emphasis of the Evaluation Capacity Building Grant is on building capacity, so I think the value in this and other webinars is in building knowledge, both for enhancing and modifying your current framework and for applying these new skills to further evaluation within your agencies. Over this webinar, I plan to identify the various steps that you need to consider in developing a process evaluation plan. Some, of course, will overlap with developing your outcome evaluation plan; however, some are distinctive and need specific time and attention. Lastly, we will address the issue of how to measure process-related variables.
3
Process Evaluation - Why
Emphasis has been on outcome evaluation to determine program success
Some see the ‘bottom line’ as everything
Can’t ignore the ‘bottom line’, but “good ideas do not always yield good results”
‘Hard’ evidence of program impact is grounded in how an intervention is implemented and delivered

A number of years ago, evaluation was all about numbers – how many clients were seen, how many staff hours were spent with clients, etc. From there, evaluation took an about-face to the point where emphasis has been solely on the measurement of client outcomes – more specifically, behavioural changes – as a way of determining program success. The mandated use of CAFAS and BCFPI is a clear example of this. I think it’s important that we begin to see that the way a program is implemented and delivered can have a significant impact on client outcomes, and without taking that into consideration, the results of an evaluation can be very misleading. Now, I’m not suggesting that counting the number of client participants and the number of clients on the waiting list, or measuring behavioural change over the term of an intervention, are not important. Clearly they are, and in fact, the methods for data collection that you are already using should be incorporated into an evaluation plan. What I’m suggesting is that your evaluation plan be more robust and comprehensive so that, by the end of the evaluation, the results of the data collection will provide a more reliable picture of the program and more fully inform program decisions.
4
Process Evaluation - Why
A program’s lack of success could be related to problems with program design and/or delivery, and failure to reach the target population
Process evaluation has the potential to ensure successful program implementation as well as increase understanding of outcome results (Babbie & Mouton, 2001; Pawson & Tilley, 2004)

Let’s say, for example, that your evaluation focuses only on outcome measures, and your results suggest a deficit in care. The problem with this, however, is that the results do not provide clues as to the causes of the deficit. Process measures, on the other hand, which assess the component parts of the actual treatment provided, can be used to identify sub-optimal practice and provide direction for improvement of program activities.

Process evaluation also adds a qualitative dimension to the statistics being gathered. You might learn, for example, how many counseling sessions of what duration were received by participants in a drug treatment program, or how many new cases were opened. A process evaluation would expand these indicators by providing information on particular modalities of treatment. In addition to furnishing richer data on some of the more quantitatively elusive aspects of program operations, a process assessment can also capture important information about a program's social, cultural, and political context. Since, for example, local cultures play a key role in shaping the direction and results of new program policies and approaches, understanding the local environment may be valuable to understanding how a program works. Field observation, reviews of case reports, and interviews of program personnel, agency staff, key personnel in related agencies, and community and political leaders are all important sources of descriptive and historical data on the program and the environment in which it operates.

These data serve a number of important purposes. First and foremost, understanding exactly how a program operates (and how it may be influenced by various environmental factors) can be a great way of investigating and interpreting program impact. Qualitative data are also essential for interpreting the results of a quantitative inquiry. It makes no sense, for instance, to declare that a program had no discernible effect on its target population or area if observation has revealed that the intended activities described by the program objectives were delivered in a qualitatively different way than intended. Finally, in cases where program impacts cannot be empirically measured, a process evaluation or case study may serve as a surrogate source of information about the nature of a particular program. Although the information will necessarily focus more on "what happened" than whether particular goals were achieved, case material may nonetheless shed important light on some of the apparent strengths and limitations of the program.

When you are evaluating process, there are three possible resultant scenarios (Weiss, C.H.): the program intervention sets into motion a causal process that leads to the desired effect; the program intervention does not contribute to the causal process which would have led to the desired effect (theory of change failed); or the program intervention contributes to the causal process but does not lead to the desired effect (effect theory failed). In the first instance, you’re celebrating your successes with staff and perhaps thinking about ways of replicating or expanding the program. In the second instance, there are clear issues with the way the program is being delivered. In the third instance, the program is being delivered in the intended manner, but it’s not making a difference within the targeted population. Perhaps in the latter case, a different or modified approach needs to be considered; perhaps the approach that is being used is not appropriate given the target population.
5
Process Evaluation
Formative evaluation: use of evaluation to improve a program during the development phase
Process evaluation: evaluates actual delivery of services vs. those intended
Implementation evaluation: evaluation of all of the activities focused on the actual operation of a program
Quality assurance: systematic process of checking to see whether a product or service being developed is meeting specified requirements

Okay, so you’re starting to understand the value of process evaluation. So you decide to do some further reading on what process evaluation is all about and how to go about conducting one. The problem is that there are so many terms out there to describe what appears to be the same approach. This can be very confusing. Essentially all of these terms – formative, process, implementation, and quality assurance – describe multiple evaluative approaches to effective program management. Formative evaluation is more appropriate to a piloted project, used to identify and make necessary changes to improve implementation and delivery prior to full implementation. Quality assurance, or quality improvement – another process that can be incorporated with process evaluation as a management tool – assesses services against a set of standards.
6
Process Evaluation - What
What was the program intended to be? Standards of care? Best practices? Reach? (Specification of program components, review of historical program data)
What is delivered in reality? (Methods for measuring program functioning)
Identification of gaps between the intended and the actual program delivery and reach

Process and implementation evaluation, on the other hand, are often used interchangeably. However, I think the one distinction between the two terms is the fact that process evaluation is comparative, measuring what is being delivered against what was intended. Implementation evaluation, on the other hand, focuses on all inputs, activities, and outputs and assesses their current functioning without necessarily making any comparisons.
7
Process Evaluation - What
“The use of empirical data to assess the delivery of programs… verifies what the program is, whether or not it is delivered as intended to the targeted recipients and in the intended dosage” (Scheirer, p. 40, in Handbook of Practical Program Evaluation, Wholey et al., 1994)

So exactly how is process evaluation defined? Scheirer identifies it as above. But I like Patton’s (1997) more simplified way of defining process evaluation. He says that process evaluation is all about “finding out if the program has all its parts, and if the parts are functioning as they are supposed to be functioning.”
8
Process Evaluation - Why
Provides information on quality of services, and whether program was implemented as planned
Assesses whether or not program is reaching the intended recipients, and their level of satisfaction
Increases knowledge about program components contributing to outcomes – extent to which they’re working or not working
Increases understanding about successful implementation of programs in complex organizational and community contexts
9
Process Evaluation - Why
Identifies and addresses program implementation problems in order to improve the program
Links program processes to program outcomes
Identifies organizational, program, and individual factors facilitating or impeding program implementation

Here are some additional reasons for conducting a process evaluation:

Provides positive reinforcement. Process evaluation provides the reinforcement that you’ll need to keep your project going; there is nothing like positive feedback to boost your morale and that of your group members.

Highlights errors. It will reveal the errors and miscalculations that are bound to arise. Although it may seem threatening to have your mistakes highlighted in this way, the negative feedback that comes up through the course of the process evaluation can actually provide you with great insights, and help to ensure the quality of your project.

Promotes self-analysis. Process evaluation sets up a healthy climate of self-analysis and reflection that is essential in any community-based mental health program. Sometimes it’s helpful to set up regular review sessions, where you can monitor your progress on reaching the goals and objectives of your project. In fact, you’re probably already engaging in management activities relating to program assessment, and these activities can be integrated into an ongoing monitoring of process information.

Provides a historical record. If your project achieves good outcomes, those wishing to replicate it would need a clear description of exactly what was done, and how it was done. If your project does not achieve its objectives, process evaluation findings will help you to pinpoint why – whether your objectives were unrealistic, or whether the implementation or program delivery did not go as planned.

Keeps you on track. Without conducting frequent internal checks, it’s easy for programs to lose their momentum and drift off target. Regular, informal evaluations at the end of meetings (How are we doing? Is everyone feeling well/being heard? What are the challenges/barriers to program delivery? What are your staffing needs?) or more formal ones (brief evaluation forms for participants and/or client satisfaction surveys) will help to keep your project on track.
10
Process Evaluation - Who
Funders
Agency and program staff
Community stakeholders
Program participants

You must determine your audience for the process evaluation: external audiences include funders, clients, and community advocates; internal audiences include program staff, managers, and the board of directors. These audiences, or stakeholders, need to be involved in the evaluation planning from the beginning stages. And not only involved, but their opinions valued and their ideas integrated into the plan. If, for example, your evaluation turns up findings relating to the success of your program and the need to expand services, but you failed to involve your funder until the end of the project, they’re unlikely to attach as much credibility to the findings. In order to ensure they are vested in the results, you must include them from the beginning. And who is responsible for ensuring the implementation of the evaluation findings? Who is going to ensure that needed changes illuminated over the evaluation are made to improve the program? Those persons should be closely involved and informed over the term of the evaluation. Program participants, such as youth and parents, are also key stakeholders for the process evaluation, and their perspectives matter. For example, if program participation or retention is the focus of the process evaluation, feedback from families or youth can provide valuable insights.
11
Process Evaluation - How
Successful programs are based on a set of standards and objectives:
Competent staff
Good management
Adequate facilities
Sufficient funding
Producing sufficient outputs to meet client needs
Output level
Output quality
Reaching the intended target population

As was mentioned earlier, the whole point of process evaluation is to measure actual program functioning and reach (the extent to which you are reaching your intended target population) against the intended program functioning. In the initial planning stages, you must identify the intended optimum delivery of your program. As we go through the six steps of developing a process evaluation, you will understand how this all works…
12
Process Evaluation – How: Six Steps (Steckler & Linnan, 2002)
Step 1 – Full description of the program: specification of purpose, underlying theory, objectives, strategies, expected impacts and outcomes (conveyed in the program logic model). Describe all elements consistent with optimum delivery of the program – strategies, activities, staffing. This may include identification of factors (internal and external) contributing to program implementation and delivery problems.

Important issues that must be addressed: understanding your program and how it is supposed to work; considering program characteristics and context and how these may affect implementation.

Before a program’s effectiveness can be judged, it is important to understand its operating environment. The program analysis stage requires a detailed description of the environment at the time of the evaluation. This must include a description of any problems encountered in the implementation and how they were resolved; whether all planned activities were implemented and, if not, what remains to be done; whether plans and timetables were revised and why that was necessary; what new objectives were added and why; what changes occurred in leadership or staffing and what effect these changes have had; whether costs exceeded initial projections and why; support from referral sources and programs; and lessons learned. For programs that have been functioning for some time, you will need to gather this information from program documentation.

A process evaluation should provide detailed information about the program as it was actually implemented and as it is actually functioning, in order to determine what is working and what is not working. A thorough process evaluation should include the following elements: a description of the program environment; a description of the process used to design and implement the program; a description of program operations; and identification and description of intervening events that may have affected implementation and outcomes. Documentation such as meeting minutes, reports, memos, newsletters, and forms can be used in determining whether the program was delivered as intended.

Consider how the goals and objectives of the program are translated into practice. What are the major project activities and how do these relate to the anticipated outcomes? Why and how is the program supposed to work? What are the underlying reasons the intervention is presumed to be effective? Who is the intended target population? What are the expectations of partnering programs? With respect to this last item – context – generally, there are participating agencies and partners as well as your target population: children/youth and their caregivers/families. Assessment of program delivery must take into account the surrounding social system, not only the characteristics of the organization in which the program is being administered.

All of these questions relate to the theory of the program. The program theory is a statement of the mechanisms through which the intervention is to work. Most importantly from an evaluation standpoint, it tells us what constructs to measure. One of the initial tasks over the term of the capacity-building grant was to develop a program logic model. This diagram essentially outlines your program theory. One of the subsequent tasks of evaluation planning is to construct a clear explication of the theory behind the program. What should the program deliver that will result in the intended changes within the targeted population? Each potential area of program emphasis implies a different causal process designed to change behaviour.

Although program designers and administrators may not always claim to have employed theory in the creation of the program, theory is implicit in all forms of intervention. The literature review you have conducted should also shed light on your program theory and help identify what program elements contribute to changes in the outcomes. You may also want to consider here how new staff are oriented or trained on the program. If there are several staff delivering the program, how do you ensure everyone is on the same page? While it may appear quite straightforward to specify what the program is, specifying the content – what is actually going on – may be more difficult. There may be substantial variation in operational procedures and among program staff. The more complex the program and the more varied the components, the more difficult and crucial this task is. You can see how potentially valuable this stage of the evaluation planning can be towards building program consensus, and the tremendous value of involving all program staff in the planning stages.

Step 2 – Describe high-quality implementation and delivery, based on direct experience and on observing similar models. Here, you will want to describe what the program would look like in the ideal situation. If all this sounds too complicated and difficult to do for your program, it may be a sign that your program is not yet ready for an outcome evaluation. The program may need to be further strengthened, and you may need to consider focusing on process evaluation.
13
Process Evaluation Questions - How
Develop a list of potential process evaluation questions. The most useful questions reflect:
A diversity of stakeholder perspectives
Key components of your program
Priority information needs
Available resources to answer your key evaluation questions

While process evaluation has the potential to contribute to a better understanding of outcome results, these evaluations can also be flawed with vague questions phrased in unanswerable ways (McClure, Turner, & Yorkstons, 2005). Step 3 involves developing a list of potential process evaluation questions.
14
Process Evaluation Questions - How
Process evaluation questions address the who, what, when, and how many in relation to program inputs, activities, outputs, and target population – the left-hand side of your program logic model.
15
Process Evaluation - Categories
Steckler and Linnan (2002) recommend a minimum of four elements or ‘categories’ of process evaluation:
Program activities delivered
Program activities received
Reach
Fidelity

They also recommend documenting recruitment procedures and describing the context of the intervention.
16
Process Evaluation Questions - How
Coverage – extent of target population participation in the program
At the activity, service, or agency level
At the system level
Service delivery – extent to which program components are being delivered as planned

Essentially, these components boil down to two forms of data around which the process evaluation can be organized: data on the program itself (service delivery), and data about the program participants (coverage).

Program-specific data would include the purposes of the program taken from program statements as well as staff descriptions, resource allocation, methods of operation, day-to-day procedures, staffing patterns, location, size of program, management structure, and inter-organizational relationships. Plus, every evaluation should carefully document the content, duration, and intensity of treatment involved in the intervention. Service delivery can be assessed from both an internal and external perspective. For example, agencies are increasingly comparing performance data against a set of external benchmarks – the same measurement data for similar agencies and programs (Bruder and Gray, 1994; Morley, Bryant, and Hatry, 2001). This can help an agency assess its own performance and set future target levels.

The second type of input data – coverage – concerns characteristics of program clients. Coverage relates to how people enter the treatment service and what happens to them once they’re there. Data include demographic and personal characteristics and assessments gathered at the initial point of intake. You can use these data to make comparisons between the actual and the intended target population. In addition, depending upon their relevance to the program, you may also wish to collect data on the attitudes and perspectives of program participants on a variety of issues that may be related to their performance in the project. These areas might include the youth's relationship to family and peers, attitude about the program, motivation for participation, and self-concept. While the program may be reasonably expected to alter some of these factors, others are quite impervious to change. It is useful to collect data on these and other control variables to determine the types of clients who are more likely to be successful in the program.

Measures: BCFPI, a standardized intake form, clinical assessment tools, a motivation for treatment scale (e.g., Treatment Entry Questionnaire; Wild, 1996)
17
Questions about coverage – At the activity, service, or agency level
Has the activity, service, or agency served the intended clients?
What were the demographic and clinical characteristics of clients?
What proportion of clients completed treatment, how many sessions were completed, and at what point did they drop out?
What were the characteristics of those who dropped out?

CASE EXAMPLE: Substance abuse program.
EVALUATION QUESTIONS: Has the number of referrals from external sources increased from the previous year? What percentage of clients are ages 15-18? What is the average age of program participants from each of the referral sources?
HOW WAS DATA COLLECTED? As each client was admitted to the program, counselors recorded the required demographics. At the end of the year, all forms were forwarded to the data clerk for tabulation.
HOW WAS DATA ANALYZED? The number and percent of clients from each referral source was calculated and compared to the previous year’s figures. Average age was calculated overall as well as for each referral source.
WHAT DID THEY FIND OUT? The results indicated that the number of referrals from the legal system and from schools had increased substantially from the previous year. 87% of participants were in the 15-18 age group. The average age was 17.8 years. Average age did not vary based on referral source.
WHAT DID THEY DO WITH THE INFORMATION? Program staff anticipate continuing increased numbers of referrals and plan to expand the program in the coming year.
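To make the tabulation step concrete, here is a minimal Python sketch of the kind of summary the data clerk might produce from the intake forms. The record layout, field names, and sample values are invented for illustration; they are not from the case example.

```python
# Hypothetical sketch: tabulating intake demographics as in the case example.
# The (referral_source, age) record layout and sample data are invented.
from collections import Counter
from statistics import mean

intake_forms = [
    ("legal system", 17), ("school", 16), ("self", 18),
    ("legal system", 18), ("school", 15), ("family", 17),
]

total = len(intake_forms)
by_source = Counter(source for source, _ in intake_forms)

print("Referrals by source (count, percent):")
for source, count in by_source.most_common():
    print(f"  {source}: {count} ({100 * count / total:.0f}%)")

ages = [age for _, age in intake_forms]
print(f"Percent aged 15-18: {100 * sum(15 <= a <= 18 for a in ages) / total:.0f}%")
print(f"Average age overall: {mean(ages):.1f}")

# Average age broken out by referral source
for source in by_source:
    source_ages = [a for s, a in intake_forms if s == source]
    print(f"Average age ({source}): {mean(source_ages):.1f}")
```

Comparing this year's counts against the previous year's figures would simply mean running the same tabulation on both years' forms.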
18
Questions about coverage – At the system level
How many treatment programs exist for this target population within the region? How many clients are seen by each program in a year? Are there differences in the types of clients seen at each program?

CASE EXAMPLE: New substance abuse program for youth within a particular region with other substance abuse programs.
EVALUATION QUESTIONS: What percent of clients live in the outlying area and what percent come from within city boundaries? Are clients from the different programs different in terms of age, sex, or level of substance abuse?
RESOURCES NEEDED? Managers from the different centres collaborated to devise a one-page data collection form, which was distributed to all clinicians.
HOW WAS DATA COLLECTED? Upon admission to each program, the admitting clinician completed a one-page form including the client’s age, sex, address, and substance use (type and frequency). Forms were forwarded to a central site where they were recorded in a log book maintained by the program data manager.
HOW WAS DATA ANALYZED? Client data was divided according to the different substance abuse sites. Within the new program site, clients’ addresses were coded as (1) living within the outlying area or (2) living within the city. The percentage of clients from each geographic area was calculated. Clients’ demographics were calculated across all of the centres.
HOW WAS DATA USED? To ensure no overlap of services and no gaps in delivery of the program across the community; to determine whether programs were reaching their target population from both the city and outlying areas; to determine the client focus for each of the participating agencies.
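As an illustration only, the central tabulation described above could be done with a few lines of Python; the record structure, site names, and area coding are assumptions, not details from the case.

```python
# Assumed record structure: one dict per one-page intake form.
from statistics import mean

records = [
    {"site": "new program", "age": 16, "sex": "F", "area": "outlying"},
    {"site": "new program", "age": 17, "sex": "M", "area": "city"},
    {"site": "downtown",    "age": 18, "sex": "M", "area": "city"},
]

# Geographic coverage of the new program: percent from the outlying area
new_program = [r for r in records if r["site"] == "new program"]
outlying = sum(r["area"] == "outlying" for r in new_program)
print(f"New program: {100 * outlying / len(new_program):.0f}% from the outlying area")

# Compare client age profiles across all centres
for site in sorted({r["site"] for r in records}):
    ages = [r["age"] for r in records if r["site"] == site]
    print(f"{site}: mean age {mean(ages):.1f}, n = {len(ages)}")
```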
19
Questions about service delivery – At the activity, service, or agency levels
By what route did clients enter treatment? What services were actually delivered to clients in treatment, and is this what was intended? What was the average length of stay or the average number of appointments kept?

CASE EXAMPLE: Substance abuse program.
EVALUATION QUESTIONS: What was the average number of days between the intake and the first scheduled appointment? What percentage of clients did not show for their first scheduled appointment?
RESOURCES NEEDED TO ANSWER QUESTIONS: A data entry clerk to maintain the database on a weekly basis in order to keep statistics on appointments and attendance.
HOW WAS DATA COLLECTED? Attendance data was recorded on daily appointment logs kept by the assessment workers. The intake worker kept a record of the initial intake and the date of the initially scheduled assessment appointment.
HOW WAS DATA ANALYZED? The person doing the analysis completed descriptive statistics on the average number of days between intake and the first scheduled appointment, and the percent of referrals who attended at least one assessment appointment during the past year.
WHAT DID THEY DO WITH THE INFORMATION? Based on the findings that 34% of clients were not being seen within the mandated time frame of 2 months between intake and first appointment, and that 40% were no-shows for their appointments, the agency decided to reorganize so that first appointments occurred on a specified day of the week at a specific time, with several staff on call during that time. As a result, more clients could be scheduled earlier, and the inefficiency of waiting for no-shows was greatly reduced.
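Here is a minimal sketch of the wait-time analysis, assuming a simple appointment log of intake date, first scheduled appointment date, and attendance; the log format and dates are invented for illustration.

```python
from datetime import date
from statistics import mean

# Assumed log format: (intake_date, first_appointment_date, attended)
appointment_log = [
    (date(2009, 1, 5),  date(2009, 3, 20), False),
    (date(2009, 1, 12), date(2009, 2, 2),  True),
    (date(2009, 2, 1),  date(2009, 4, 15), False),
]

waits = [(appt - intake).days for intake, appt, _ in appointment_log]
print(f"Average days from intake to first appointment: {mean(waits):.1f}")

# Percent exceeding the mandated 2-month window (about 60 days)
late = sum(w > 60 for w in waits)
print(f"Percent waiting more than 60 days: {100 * late / len(waits):.0f}%")

no_shows = sum(not attended for _, _, attended in appointment_log)
print(f"No-show rate for first appointment: {100 * no_shows / len(appointment_log):.0f}%")
```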
20
Questions about service delivery – At the system level
Are different treatment programs aware of one another? Do different treatment programs refer clients to one another? What is the relationship between general medical services and specialized treatment programs?

CENTRAL FOCUS: The coordination among specialized treatment programs, and between these programs and other services that clients may need. The first step in measuring service delivery at the system level is to define the boundaries of the system itself. The focus is on the degree of communication and collaboration between and among staff in a given network of agencies.
FOCUS OF REPORTS: Mutual awareness, frequency of interaction, frequency of cross-referrals, information exchange, staff sharing, other resources exchanged (e.g., meeting rooms, materials, etc.).
CASE EXAMPLE: Network of substance abuse treatment centres.
EVALUATION QUESTIONS: For each centre, from where do referrals originate? What is the waiting time for services within each centre?
WHAT RESOURCES? Case workers recorded referral information as part of their intake interviews. Consolidation of this data and tabulation of results were completed by a data entry clerk, requiring approximately one hour per month. Each program manager completed an interview and questionnaire that assessed current treatment services.
HOW WAS DATA COLLECTED? Data collection occurred in two parts. To assess the referral source, case workers recorded this information as part of their intake interviews; consolidation of this data and tabulation of results was completed by a data entry clerk on a monthly basis. For the second part, assessing treatment services, two interviewers were dispatched to each program to collect data on waiting times for current treatment.
HOW WAS DATA ANALYZED? All information was entered by a data entry clerk into a central records notebook. The clerk calculated the percentage of referrals from each referral source for each program, and categorized waiting times for the different types of treatment services offered.
WHAT DID THEY FIND OUT? For the downtown program, the greatest number of clients (37%) were referred by the local emergency department. However, the emergency department tended not to make referrals to outlying clinics, even for clients who lived in those regions. Most referrals to the outlying clinics came from family members or were self-referrals. The outlying program overlapped considerably with the downtown program, but tended to have shorter wait times.
WHAT DID THEY DO WITH THIS INFORMATION? The evaluators concluded that more education of emergency department staff about the outlying programs (including their shorter wait times) was warranted. They instituted a brochure campaign and a series of presentations at emergency department staff meetings to accomplish this goal.
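The clerk's referral tabulation could look like the following sketch; the program names and referral sources are placeholders, not data from the evaluation.

```python
from collections import defaultdict, Counter

# Assumed central log: (program, referral_source) pairs
referrals = [
    ("downtown", "emergency department"), ("downtown", "self"),
    ("downtown", "emergency department"), ("outlying", "family"),
    ("outlying", "self"), ("outlying", "family"),
]

by_program = defaultdict(Counter)
for program, source in referrals:
    by_program[program][source] += 1

# Percent of referrals from each source, per program
for program, sources in sorted(by_program.items()):
    total = sum(sources.values())
    for source, n in sources.most_common():
        print(f"{program}: {source} {100 * n / total:.0f}%")
```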
21
Process Evaluation Questions - How
Steps to identifying questions with staff and stakeholders:
Brainstorm questions relating to the program, using the logic model to guide the process
Sort questions from the brainstorming session into relevant categories

Use the logic model to guide the development of questions – focus on inputs, activities, and outputs to generate specific questions. Priorities should be supported by your funders, clients, staff, board, and any other stakeholders.
22
Process Evaluation: Six Steps
Step 4 – Determine methods for the process evaluation
Qualitative and quantitative data sources
Qualitative methods: interviews, focus groups, logs, case studies, open-ended survey questions, content analysis of videotaped sessions
Quantitative methods: surveys, direct observation, checklists, attendance logs, document review
(Baranowski & Stables, 2000; Devaney & Rossi, 1997; McGraw et al., 2000)

A lot of process evaluation measures are transactional data that agencies collect on an ongoing basis – requests for service, clients admitted/discharged, production records, activity logs, treatments administered, follow-up or repeat sessions conducted, complaints from clients, etc. This information is usually maintained in a database and is therefore readily accessible. Additional measures may be qualitative (e.g., focus groups, interviews).
23
Process Evaluation: Six Steps
Step 5 – Consider program resources and program characteristics and context
The team must consider the resources needed to answer the questions from step 3 using the methods from step 4
Considerations: time for planning, pilot testing instruments, any training needed, data collection, entry, analysis, and reporting; staff time, client burden, disruption to the intervention

Finally, for practical considerations and cost, you must weigh the usefulness of the measures and the quality of the data against issues of feasibility, time, effort, and costs.
24
Process Evaluation: Six Steps
Step 6 – Finalize the process evaluation plan
Set priorities and ensure feasibility of the plan
Determine priorities: important to staff and stakeholders; address important program needs; reflect key elements of the program logic model
Determine feasibility: questions can be answered with available resources and within the timeframe

Step 6 emerges from the planning involved over steps 3 to 5. An example of the use of process evaluation across the four elements: Program activities delivered – consistency in the approach being used with families. Program activities received – number of sessions actually attended by program participants. Reach – extent to which the program is reaching its intended target population. Fidelity – extent to which the program has been implemented as intended.
25
Process Evaluation: Methods
Qualitative:
Focus groups
Key informant interviews
Direct observation
Quantitative:
Chart or record reviews
Activity logs
Demographics
Qualitative and quantitative:
In-person interviews (clients, staff, management)

Measurement issues: Perhaps there is no more critical issue in evaluation than defining measurement tools. The validity of the evaluation depends on appropriate measures of project activities and outcomes. Indeed, the final judgment of the program may depend upon how the program operations are conceptualized and measured. The choice of measures and the design of the evaluation will, to a large degree, determine whether the evaluation is useful and credible.
26
Process Evaluation: Selection of Measures
How meaningful is the measure?
How feasible is the measure?
How useful is the measure?

Similar to the prioritizing of evaluation questions in step 6, you need to set priorities with regard to process measures. They must be meaningful and understandable – meaningful to stakeholders and focused on the goals, objectives, priorities, and dimensions of the program – and the findings must not be so obvious that the information is of little use.
27
Process Evaluation: Selection of Measures
Questions involving quality:
Budget process – e.g., review of minutes
Quality of services – standard client records
Questions re: program operations:
Checklist of intended programming/services
Questions on why a program was developed:
Planning documents, meeting minutes, funding proposals, needs assessments

Budget process – e.g., reviewing meeting minutes related to the budget to determine how budget decisions are made. Quality of services – e.g., standard client records; also program policy manuals, program logs, observations, media reports, legislative reports, interviews. Questions re: program operations – e.g., a checklist of intended programming/services; also program policies and manuals, program logs, observations, client records, and interviews.
28
Process Evaluation - Methods
Program coverage or reach:
Compare actual participation to intended participation
Provides information about the level of program participation by the target group
Requires a clear definition of the intended target group, monitoring of key target group characteristics, and rate of participation

In evaluating program coverage or reach, you are comparing actual participation to intended participation in the program. Coverage analysis provides feedback about participation in the program and can detect biases by examining differences in participation by subgroups of the target population (such as voluntary self-selection, intentional “creaming” of easier-to-serve clients by staff, and expansion of the target population to include those not intended for the program). Measures you would use, in this case, would be program demographics and participation records.
29
Process Evaluation – Methods to Evaluate Coverage
Develop a clear definition of the target population
Identify key characteristics of the target population for monitoring purposes
Collect data on the identified characteristics
Analyze data to determine if the actual population served is the intended population
Analyze data to determine if persons served by the program meet eligibility criteria
Using the characteristics data, determine which subgroups are over/under-represented
Analyze characteristics of program drop-outs

Key characteristics will vary by program; they could include age, gender, ethnicity, geographic location, place of residence, etc. Also, identify any specific characteristics that may affect the ability to attend the program (e.g., transportation issues, child care issues, distance from program site, problems with hours of operation, etc.).
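One way to operationalize the over/under-representation check is to compare the proportions actually served against the intended target proportions. The subgroups, proportions, and the flagging threshold in this sketch are all assumptions for illustration.

```python
# Intended proportions would come from program design documents; served
# proportions from intake records. All numbers here are made up.
intended = {"age 12-15": 0.40, "age 16-18": 0.60}
served   = {"age 12-15": 0.22, "age 16-18": 0.78}

THRESHOLD = 0.10  # arbitrary flagging threshold, chosen for illustration

for group in intended:
    gap = served[group] - intended[group]
    if gap > THRESHOLD:
        flag = "over-represented"
    elif gap < -THRESHOLD:
        flag = "under-represented"
    else:
        flag = "on target"
    print(f"{group}: intended {intended[group]:.0%}, served {served[group]:.0%} ({flag})")
```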
30
Process Evaluation: Methods to Evaluate Delivery
Program records:
Specify evaluation questions re: program delivery
Identify sources of data to answer each question
Assess the adequacy of current sources of data
Develop a data collection plan – sources of data, who will collect data, etc.
Develop an analysis plan (e.g., tables, graphs, figures)

Program records typically include client files and administrative data that provide a valuable source of information regarding client needs and characteristics, program resources and delivery, and program outputs. But in order to make good use of these documents, an agency has to have a good mastery of target group needs, program theory, and program operations. Sources of data typically include client intake/referral forms, assessments, service activity forms (type, duration, intensity, units of service), case notes, records of goal attainment and client progress, financial records, service statistics reports, quality assurance reports, and quarterly and annual reports.

One way of capturing intended program activities and outputs is by constructing service delivery pathways. These are flow diagrams that map the key program activities needed to achieve intended outcomes. This involves: developing a logic model; studying current operations; conducting a literature review re: best practices of similar programs; and developing a service delivery pathways diagram identifying the program milestones that need to be achieved to reach each outcome target. When used for monitoring program delivery, pathways can help to identify problems and ensure achievement of intended outcomes, service targets, and quality of services. A simple sketch of this idea follows.
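As a rough sketch only, a service delivery pathway can be treated as an ordered list of milestones and a client record checked against it; the milestone names here are hypothetical.

```python
# Hypothetical pathway milestones and a client's documented steps
intended_pathway = ["intake", "assessment", "treatment plan", "first session", "review"]
client_record = ["intake", "assessment", "first session"]

# Milestones the record does not yet document, in pathway order
missing = [step for step in intended_pathway if step not in client_record]
print("Milestones not yet documented:", missing)
```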
31
Process Evaluation: Methods
Client satisfaction or perception of care:
Client surveys can assess overall satisfaction with services as well as information regarding the content of the program and the manner in which it was delivered
Established measures:
Client Satisfaction Questionnaire (CSQ-8)
Youth Services Survey for Families
Consumer-Oriented Mental Health Survey
Kansas Family Satisfaction Survey
32
Process Evaluation: Selection of Measures
Standardized measures:
Example: The National Inventory of Mental Health Quality Measures, developed by the Center for Quality Assessment and Improvement in Mental Health (CQAIMH)
Future launch of the Centre’s Measures Database

The Centre is also developing a Measures Database, which will be accessible from the Centre’s website. We hope to launch this within the next few months. This is a database of the measures being used in various programs here in Ontario, with a review of their psychometric properties and how to get copies of each measure.
33
Process Evaluation – Tips for Data Collection
Create a schedule for collecting process data and stick to it
Divide tasks & assign individuals to each task
Maximize existing opportunities for data collection
Only identify questions of interest/value and keep the evaluation focused
Periodically review your instruments to ensure the evaluation approach is still useful

In a process evaluation, it is often difficult to decide what information is truly key to describing program operations and what information is simply extraneous detail. In selecting relevant data and posing questions about program operations, the evaluation team needs to refer carefully to the logic model prepared at the start of the project. Keep in mind that it is not only permissible but also important in process evaluation to revise the original logic model in light of findings during the evaluation.

Analysis of qualitative data requires considerable substantive knowledge on the part of the evaluator. The person doing the analysis needs to be familiar with similar projects, respondents, and responses, and the context in which the project is operating. He or she will also need to be able to understand the project's historical and political context as well as the organizational setting and culture in which services are delivered. At the same time, the challenge will be for that person to maintain some objectivity in order to be able to make an unbiased assessment of whether responses support or refute hypotheses about the way the project works and the effects it has.

Collecting qualitative data also requires skills in interviewing and observing; reading about how to conduct an interview or a focus group in advance is important in order to understand the intent of each question, the possible variety of answers that respondents might give, and ways to probe to ensure that full information about the issues under investigation is obtained. Data must be carefully recorded or taped. Notes on contextual factors and interim hypotheses need to be recorded as soon as possible after data collection.

There are many sophisticated data measurement applications for tracking process-related data, but for evaluation purposes, Microsoft Excel may be all that is needed.
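In that spirit, a few lines of Python can maintain an Excel-readable CSV of weekly process indicators; the file name and column set below are assumptions, not a prescribed format.

```python
import csv
from pathlib import Path

LOG = Path("process_log.csv")  # hypothetical file name; opens directly in Excel

def record_week(week: str, referrals: int, sessions_delivered: int, no_shows: int) -> None:
    """Append one week's process indicators, writing a header row on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["week", "referrals", "sessions_delivered", "no_shows"])
        writer.writerow([week, referrals, sessions_delivered, no_shows])

record_week("2009-03-09", referrals=12, sessions_delivered=30, no_shows=4)
```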
34
Next Steps
March 27: Webinar on writing the final report
March 31: Award term ends
April 30: Final report due
May 1 to 14: Evaluation readiness assessment for the Evaluation Implementation Grants (EIG)
June 2: Deadline for EIG grant applications
June webinar: The road to data collection
35
For more information
Susan Kasprzak: skasprzak@cheo.on.ca / Ext. 3320
Tanya Witteveen: Ext. 3483