EVALUATION AND RESEARCH

EVALUATION AND RESEARCH © LOUIS COHEN, LAWRENCE MANION AND KEITH MORRISON

STRUCTURE OF THE CHAPTER
Similarities between research and evaluation
Differences between research and evaluation
Connections between evaluation, research, politics and policy making

DEFINING EVALUATION
The provision of information about specified issues, upon which judgements are based and from which decisions for action are taken.

COMPARING RESEARCH AND EVALUATION
Origins: Research questions originate from scholars working in a field; evaluation questions issue from stakeholders.
Audiences: Evaluations are often commissioned; they become the property of the sponsors and are not intended for the public domain. Research is disseminated widely and publicly.
Purposes: Research contributes to knowledge in the field, regardless of its practical application, and provides empirical information, i.e. ‘what is’; evaluation uses that information to judge the worth, merit, value, efficacy, impact and effectiveness of something, i.e. ‘what is valuable’. Research is conducted to gain, expand and extend knowledge; evaluation is conducted to assess performance and provide feedback. Research generates theory; evaluation informs policy making. Research seeks to discover; evaluation seeks to uncover. Research seeks to predict what will happen; evaluation concerns what has happened or what is happening.
Stance: The evaluator is reactive (e.g. to a programme); the researcher is active and proactive.
Status: Evaluation is a means to an end; research is an end in itself.

COMPARING RESEARCH AND EVALUATION
Focus: Evaluation is concerned with how well something works; research is concerned with how something works.
Outcome focus: Evaluation is concerned with the achievement of intended outcomes; research may not prescribe or know its intended outcomes in advance (science concerns the unknown).
Participants: Evaluation focuses almost exclusively on stakeholders; research has no such focus.
Scope: Evaluations are concerned with the particular, e.g. a focus only on specific programmes; they seek to ensure internal validity and often have a more limited scope than research. Research often seeks to generalize (external validity) and, indeed, may not include evaluation.
Setting of the agenda: The evaluator works within a given brief; the researcher has greater control over what will be researched (though is often constrained by funding providers). Evaluators work within a set of ‘givens’, e.g. programme, field, participants, terms of reference and agenda, variables; researchers create and construct the field.

COMPARING RESEARCH AND EVALUATION
Relevance: Relevance to the programme or to what is being evaluated is a prime feature of evaluations; relevance for researchers has wider boundaries (e.g. generalizing to a wider community). Research may be prompted by interest rather than relevance. For the evaluator, relevance must take account of timeliness and particularity.
Timeframes: Evaluation begins at the start of the project and finishes at its end; research is ongoing and less time-bound (though this may not be the case with funded research).
Uses of results: Evaluation is designed to improve; research is designed to demonstrate or prove. Evaluation informs decision making; research provides a basis for drawing conclusions. Evaluations might be used to increase or withhold resources or to change practice; research provides information on which others might or might not act, i.e. it does not prescribe.
Decision making: Evaluation is used for micro decision making; research is used for macro decision making.
Data sources and types: Evaluation has a wide field of coverage (e.g. costs, benefits, feasibility, justifiability, needs, value for money), so evaluators employ a wider and more eclectic range of evidence, drawn from an array of disciplines and sources, than researchers.

COMPARING RESEARCH AND EVALUATION
Ownership of data: The evaluator often cedes ownership to the sponsor upon completion; the researcher retains the intellectual property.
Politics of the situation: The evaluator may be unable to stand outside the politics of the purposes and uses of, or participants in, an evaluation; the researcher provides information for others to use.
Use of theory: Researchers base their studies in social science theory; this is not a necessary component of evaluation. Research is theory-dependent; evaluation is ‘field-dependent’, i.e. not theory-driven but derived from the participants, the project and stakeholders. Researchers create research findings; evaluators may (or may not) use research findings.
Reporting: Evaluators report to stakeholders and commissioners; researchers may include these audiences but may also report more widely, e.g. in publications.

COMPARING RESEARCH AND EVALUATION
Standards for judging quality: Judgements of research quality are made by peers; judgements of evaluation quality are made by stakeholders. For researchers, standards for judging quality include validity, reliability, accuracy, causality, generalizability and rigour; for evaluators, to these are added utility, feasibility, involvement of stakeholders, side-effects, efficacy and fitness for purpose (though, increasingly, utility value and impact are seen as elements for judging research).

SIMILARITIES BETWEEN EVALUATION AND RESEARCH
Evaluation can examine the effectiveness of a programme or policy, as can research.
Evaluation and research share the same methodologies: styles, instrumentation, sampling, ethics, reliability, validity, data analysis techniques, reporting and dissemination mechanisms.

DIFFERENCES BETWEEN EVALUATION AND RESEARCH (Smith, M. and Glass, G. (1987) Research and Evaluation in the Social Sciences. New Jersey: Prentice Hall)
The intents and purposes of the investigation
The scope of the investigation
Values in the investigation
The origins of the study
The uses of the study
The timeliness of the study
Criteria for judging the study
The agendas of the study

DIFFERENCES BETWEEN EVALUATION AND RESEARCH (Norris, N. (1990) Understanding Educational Evaluation. London: Kogan Page)
The motivation of the enquirer
The objectives of the research
Laws versus description
The role of explanation
The autonomy of the enquiry

DIFFERENCES BETWEEN EVALUATION AND RESEARCH (Norris, N. (1990) Understanding Educational Evaluation. London: Kogan Page)
Properties of the phenomena that are assessed
Universality of the phenomena studied
Salience of the value question
Investigative techniques
Criteria for assessing the activity
Disciplinary base

CONFORMATIVE EVALUATION (Stronach, I. and Morris, B. (1994) Polemical notes on educational evaluation in an age of ‘policy hysteria’. Evaluation and Research in Education, 8 (1-2), pp. 5-19)
Short term; takes project goals as given.
Ignores the evaluation of longer-term outcomes.
Gives undue weight to the perceptions of programme participants who are responsible for the successful development and implementation of the programme: ‘over-reports’ change.
Neglects/‘under-reports’ the views of some practitioners and critics.

CONFORMATIVE EVALUATION
Adopts an atheoretical approach, and regards the aggregation of opinion as the determination of significance.
Involves a tight contractual relationship with programme sponsors that disbars public reporting or encourages self-censorship to protect future funding.
Risks implicit advocacy of the programme in its reporting style.

MODELS OF EVALUATION
Survey: cross-sectional, longitudinal.
Experiment.
Illuminative evaluation.
The CIPP model (Stufflebeam): Context, Input, Process, Product; and the Countenance model (Stake): Antecedents, Transactions, Outcomes. Both look for congruence between what was intended to happen and what actually happened in these areas.
Objectives model: how far have the objectives been achieved?

STAKE'S MODEL OF EVALUATION
Congruence between intentions and observations (what actually happened):
INTENTIONS               Congruence   OBSERVATIONS
Intended antecedents        <->       Actual antecedents
Intended transactions       <->       Actual transactions
Intended outcomes           <->       Actual outcomes
Antecedents = initial conditions
Transactions = processes, what takes place during the programme
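The congruence check that the model describes can be sketched in a few lines of code. The Python fragment below is only an illustration of the idea, not part of the original slides: the class name, the three area labels and the literacy-programme entries are invented for the example. Each area is judged congruent when the observed entry matches the intended one.

    # Minimal sketch (illustrative only): Stake's intended/observed matrix
    # represented as two dictionaries keyed by area, with a congruence check.
    from dataclasses import dataclass, field

    @dataclass
    class CountenanceMatrix:
        intended: dict = field(default_factory=dict)  # what was planned
        observed: dict = field(default_factory=dict)  # what actually happened

        def congruence(self):
            # Compare, area by area, what was intended with what was observed.
            areas = ("antecedents", "transactions", "outcomes")
            return {a: self.intended.get(a) == self.observed.get(a) for a in areas}

    # Hypothetical programme data, invented for illustration only.
    m = CountenanceMatrix(
        intended={"antecedents": "mixed-ability classes",
                  "transactions": "daily guided reading",
                  "outcomes": "improved reading scores"},
        observed={"antecedents": "mixed-ability classes",
                  "transactions": "guided reading twice a week",
                  "outcomes": "improved reading scores"},
    )
    print(m.congruence())
    # -> {'antecedents': True, 'transactions': False, 'outcomes': True}

In a real evaluation the comparison rests on evidence and judgement rather than a simple string match; the sketch only shows the shape of the intended/observed matrix.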

RESEARCH, POLITICS AND POLICY MAKING
Politics, research and evaluation are inextricably linked in respect of:
Funding
Policy-related research
Commissioned research
Control and release of data and findings
Dissemination of research
How does research influence policy?
Who judges research utilization?
Consonance with political agendas

RESEARCH, POLITICS AND POLICY MAKING
Researchers and policy makers may have conflicting:
Interests
Agendas
Audiences
Time scales
Terminology
Concern for topicality

RESEARCH, POLITICS AND POLICY MAKING
Policy makers like:
Simple impact models
Superficial facts
Unequivocal data
Short-term solutions
Simple, clear remedies for social problems
Certainty
Positivist methodologies
Researchers work with:
Complex models
Complex data
Uncertain findings
Longer-term time scales
Subtle, provisional data on complex issues
Conjecture
Diverse methodologies