Evaluation Test
Justin K. Reeve, EDTECH 505-4173, Dr. Ross Perkins

Reflection It's useful to know the different models available for evaluation and to be able to identify which one best fits a situation. The goals, activities, and evaluation procedures of a program play a crucial role in how its evaluation is conducted. Efficiency, efficacy, and impact should be measured across any program. A number of programs in our district currently lack any evaluation, or collect only a minimal amount of information that doesn't follow a defined, systematic process. At best, a simple survey is supplied at the end of a program, but no formative evaluation is conducted during the training sessions themselves, and the survey results are never reviewed by the training coordinator. Until these deficiencies are resolved, our evaluation process is ineffective. What I've learned in the class so far has given me the ability to identify these weak points.

Rationale for Evaluation An evaluation produces statistical data that help sponsors and staff members choose appropriate resources, identify areas in need of adjustment, become aware of program strengths and weaknesses, and understand the outcomes of the actions conducted in a program.

Evaluation and EdTech Evaluation enables educational technologists to choose, improve, and measure instruction and the learning process. Adopting a defined, systematic process gives the evaluator a vision of how the aspects of a program work together.

Research and Evaluation Research uses systematic processes to build general knowledge, whether through controlled variables and the scientific method or through qualitative methods with a rigorous base in scientific inquiry. Evaluation allows the evaluator and the program's stakeholders to better understand the internal processes of a particular program. Images: exterior of a watch (research); interior of a watch (evaluation).

Efficiency, Efficacy, and Impact Efficiency refers to the balance between the time and resources spent on a program and how well the returns exceed the investment. Efficacy refers to how well objectives and desired needs are attained. Impact refers to how permanently participants change their behavior. Images: scales; energy-efficient light bulb; change.
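To make the efficiency idea concrete, here is a minimal sketch in Python that compares a program's estimated returns against its costs. The dollar figures and the `efficiency_ratio` helper are hypothetical, invented purely for illustration, not drawn from any district data.

```python
def efficiency_ratio(benefit: float, cost: float) -> float:
    """Benefit-to-cost ratio; values above 1.0 mean the program
    returned more than it consumed."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return benefit / cost

# Hypothetical training program: $4,000 in staff time and materials,
# with an estimated $5,500 return in productivity gains.
ratio = efficiency_ratio(benefit=5500, cost=4000)
print(f"Efficiency ratio: {ratio:.2f}")  # 1.38 -> returns exceed the investment
```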

Impact of Evaluation An evaluation can identify connections between goals and needs and focus activities toward those ends. Formative and summative evaluation results identify the processes that work effectively and inform decisions about necessary program modifications. Image created by Justin K. Reeve.

Goal-Free Model A data collection- and analysis-oriented model, ideal for qualitative evaluations, in which the aspects to examine emerge from observation of the program itself, without regard to its goals and objectives. The evaluator uses the data to infer conclusions about the program's impact on clients' needs.

Kirkpatrick's Four-Level Model A model in which students evaluate the program's objectives, value, and relevance (Reaction) and participate in objective-based assessments (Learning). The model then evaluates the impact of the program on participants' behavior (Behavior) and on the organization, for the decision-makers (Results). Image created by Justin K. Reeve.

Qualitative vs. Quantitative Qualitative data are collected through observations, interviews, surveys, and case studies, and are interpreted subjectively. Quantitative data are collected with tests, instruments, and other quantifiable measuring tools; they state numerical facts clearly and may be used to predict outcomes. Image created by Justin K. Reeve.

Levels of Data Nominal data consist of mutually exclusive, unordered categories (black/white). Ordinal data consist of ranked, ordered items without meaningful numerical differences between them (a top-10 list). Interval data are ranked numerically without an absolute zero on the scale (temperature in degrees Celsius). Ratio data are ranked numerically with an absolute zero on the scale (length). Images: nominal, ordinal, interval, ratio (created by J. Reeve).
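A short Python sketch of the four levels, using made-up values; the point is which operations are meaningful at each level, not the specific numbers.

```python
# Nominal: categories only -- counting is meaningful, order is not.
colors = ["black", "white", "black"]
print(colors.count("black"))          # 2

# Ordinal: order is meaningful, but the gaps between ranks are not.
top3 = ["gold", "silver", "bronze"]   # the rank order carries the information
print(top3[0])                        # "gold" outranks "silver"

# Interval: differences are meaningful, ratios are not (no absolute zero).
temp_c = [10.0, 20.0]
print(temp_c[1] - temp_c[0])          # a 10-degree difference is meaningful,
                                      # but 20 C is NOT "twice as hot" as 10 C

# Ratio: an absolute zero exists, so ratios are meaningful.
lengths_cm = [5.0, 10.0]
print(lengths_cm[1] / lengths_cm[0])  # 2.0 -- genuinely twice as long
```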

Data Instruments Interview: a conversation that obtains considerable information but compares poorly across instances. Scale: responses in ranked intervals; a scale should be well constructed to prevent evaluator bias. Test: an assessment tool that produces numerical data. Observation: constructing narratives of perception from predetermined criteria. Images: interview; scale (personal collection); test; observation.
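As one concrete instrument example, here is a small sketch of scoring a five-point scale item in Python; the responses are invented for illustration.

```python
# Hypothetical responses to one 5-point scale item
# (1 = strongly disagree ... 5 = strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4]

mean_score = sum(responses) / len(responses)
print(f"Mean agreement: {mean_score:.2f} out of 5")  # 3.86
```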

Formative vs. Summative Formative evaluation tracks a program as it happens; real-time adjustments can be made to the program based on these results. Summative evaluation occurs after the program and gauges overall effectiveness and objective attainment. Images: parent assisting with homework (formative); parent reviewing homework (summative).

Samples and Populations The population represents all potential participants who meet the criteria for the intended evaluation. The sample is a selection of participants, typically drawn at random, that represents the target population or a desired section of it.
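A minimal sketch of drawing a simple random sample in Python, assuming a hypothetical roster of 200 teachers eligible for a training program.

```python
import random

# Hypothetical population: 200 teachers eligible for the program.
population = [f"teacher_{i:03d}" for i in range(200)]

# Draw a simple random sample of 20 participants for the evaluation.
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=20)
print(sample[:5])
```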

Validity and Reliability Reliability refers to data consistency: the ability to produce the same results when a measurement is repeated. Validity refers to whether accurate conclusions can be drawn from the data, that is, whether an instrument measures what it claims to measure. Evaluation data should be both reliable and valid. Image created by Justin K. Reeve.
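One common way to estimate reliability is test-retest correlation. Here is a hedged sketch, assuming hypothetical scores from the same ten participants on two administrations of the same test; it uses `statistics.correlation`, available in Python 3.10 and later.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores from two administrations of the same test.
test_1 = [72, 85, 90, 64, 78, 88, 70, 95, 60, 82]
test_2 = [70, 87, 91, 66, 75, 90, 72, 93, 63, 80]

# A coefficient near 1.0 suggests the instrument yields consistent results.
r = correlation(test_1, test_2)
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```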

Independent and Dependent Variables are elements of an evaluation that may take on different characteristics or values. Dependent variables are not controlled; they are strictly observed or studied. Independent variables are controlled by the evaluator or the participants and are manipulated when necessary.
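A small Python sketch of the relationship, with invented data: the training format is the manipulated independent variable, and the post-test score is the observed dependent variable.

```python
# Independent variable = training format (manipulated by the evaluator);
# dependent variable = post-test score (observed, not controlled).
scores = {
    "in_person": [82, 75, 90, 88],
    "online":    [78, 72, 85, 80],
}

for condition, values in scores.items():
    mean = sum(values) / len(values)
    print(f"{condition}: mean score {mean:.1f}")
```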

Criterion- vs. Norm-Referenced Criterion-referenced tests measure individual learning and personal achievement against curricular standards, with clear expectations of participant performance. Norm-referenced tests rank students against others in the population, without setting goals or using standards of achievement.
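The distinction is easy to show in code. In this sketch the scores, the cutoff, and the comparison student are all hypothetical: the criterion-referenced view compares each score to a fixed standard, while the norm-referenced view ranks a score against the group.

```python
# Hypothetical scores for a group of participants.
scores = [55, 62, 70, 74, 78, 81, 85, 88, 92, 97]

# Criterion-referenced: compare each score to a fixed standard (cutoff).
CUTOFF = 80  # hypothetical "proficient" standard
passed = [s for s in scores if s >= CUTOFF]
print(f"{len(passed)} of {len(scores)} met the standard")

# Norm-referenced: rank a score against the group, with no fixed standard.
student = 85
percentile = 100 * sum(s < student for s in scores) / len(scores)
print(f"A score of {student} beats {percentile:.0f}% of the group")
```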

Activity Evaluation The idea of using visual metaphors as an assessment is an interesting one. I'd actually like to see some data comparing the benefits and disadvantages of this form of assessment versus a traditional test. I personally struggle a little with coming up with actual metaphors rather than just visual examples of the concepts being assessed (hopefully this isn't too evident in these slides), but that gives me a chance to improve my abstract-thinking skills. I would have liked the activity's instructions to explicitly allow extra word space to explain each visual metaphor on the following slide, so I could better justify my choice of images (though maybe that defeats the point of coming up with a strong visual metaphor).