The Goldilocks Dilemma: Too Hard, Too Easy, or Just Right? Developing Pre/Posttest Questions to Evaluate Training Courses Christine M Allegra, PhD, LCSW.

Presentation transcript:

The Goldilocks Dilemma: Too Hard, Too Easy, or Just Right? Developing Pre/Posttest Questions to Evaluate Training Courses
Christine M Allegra, PhD, LCSW
Theresa McCutcheon, MSW
Institute for Families
May 2016

Overview
– New Jersey Child Welfare Training Partnership
– Pre/Posttests
– Development Process
– Revision Process
– Findings
– Discussion

New Jersey Child Welfare Training Partnership
– Educates 5,000+ trainees annually
– 160 unique course titles
– 1,500 training days per year
– Interest from the funder and the court-appointed monitor in documenting knowledge gain

Training Evaluation
Two Methods:
– Satisfaction Surveys
– Pre/Posttests

NJCWTP Pre/Posttests: The Beginning
Starting on July 1, 2013, all training titles were required to have a pretest and posttest.
January to June 2013 – 46 tests
– Average pretest score: 60%
– Average posttest score: 79%
July to December 2013 – 84 tests
– Average pretest score: 61%
– Average posttest score: 81%

Importance of Good Test Questions
– Evaluate trainee performance
  – Scores appear on the participant's employee transcript
  – Unsatisfactory posttest scores (< 80%) result in a request to repeat the test and/or training
– Demonstrate whether and the extent to which trainees are learning
– Evaluate instructor performance
– Contract requirement

Measuring Knowledge Gain: Approach
– Pretest: participants complete a multiple-choice knowledge assessment before training
– Posttest: participants complete the same multiple-choice test after training
– Training effect on knowledge = difference between pretest and posttest scores
  – Difference is attributed to knowledge gained as a result of the training
  – May also be related to the test, course content, instructor, or trainee population
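To make the arithmetic concrete, here is a minimal Python sketch of the comparison described above. The function names and the sample scores are illustrative, not from the presentation.

```python
def mean(scores: list[float]) -> float:
    """Average of a list of percent-correct scores."""
    return sum(scores) / len(scores)

def training_effect(pretest: list[float], posttest: list[float]) -> float:
    """Average posttest minus average pretest, in percentage points.
    As the slide notes, part of this difference may reflect the test,
    course content, instructor, or trainee population rather than learning."""
    return mean(posttest) - mean(pretest)

# Hypothetical scores for a five-person class:
pre = [55.0, 60.0, 65.0, 50.0, 70.0]   # average 60.0
post = [80.0, 75.0, 85.0, 78.0, 82.0]  # average 80.0
print(training_effect(pre, post))       # 20.0 percentage points
```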

Solution: Collaboration, Testing, Iteration
No one can do it alone in a single attempt – an untested single draft by a single author will almost certainly produce a poor test.
NJCWTP Approach:
– Content expert drafts questions using standard guidelines
– Committee revises the draft
– Revisions are sent to the content expert for approval
– Revised test is administered in training
– Analysis of scores indicates whether further revisions are necessary

Developing Test Questions: Template

Guidelines for Writing Questions
– Link Questions to Core Learning Objectives
– Test Trainees' Understanding – Not Memory
– Challenge the Test Taker
– Focus on Facts That Can Be Substantiated
– Be Clear, Concise, and Specific
– Be Careful Not to Give Away the Answer
– Use 'All/None of the Above' Wisely – or Eliminate It
– Eliminate True/False Questions

Common Issues Flagged in Review
– Questions are too easy
  – Indicator: more than 80% correct on the pretest
– Questions are too difficult (or perhaps not adequately covered during training)
  – Indicator: less than 80% correct on the posttest
– Questions are confusing (or perhaps not adequately covered during training)
  – Indicator: higher percent correct on the pretest than on the posttest
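The three indicators above are simple threshold rules. A hedged sketch of them in Python follows; the function name and data shape are illustrative (not from the presentation), while the 80% thresholds are the ones the slide gives.

```python
def flag_question(pct_correct_pre: float, pct_correct_post: float) -> list[str]:
    """Return the review flags for one question, given the percent of
    trainees answering it correctly on the pretest and the posttest."""
    flags = []
    if pct_correct_pre > 80:
        flags.append("too easy: more than 80% correct on pretest")
    if pct_correct_post < 80:
        flags.append("too difficult or not adequately covered: "
                     "less than 80% correct on posttest")
    if pct_correct_pre > pct_correct_post:
        flags.append("confusing: percent correct dropped from pretest to posttest")
    return flags

# All three flags fire for this hypothetical question:
print(flag_question(pct_correct_pre=85, pct_correct_post=75))
```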

Pre/Posttest Revision Committee
– Includes a variety of professionals, each bringing a unique perspective and skill set to evaluating questions:
  – PhD
  – MSWs / MSW students
  – Subject matter experts
  – Child welfare professionals
  – Trainers
– Discusses improvements to pre/posttests, based on:
  – Past question performance
  – Members' own assessment of the clarity, level of difficulty, and consistency with course objectives

NJCWTP Pre/Posttests: Current Findings
July to December 2015 – 124 tests
– Average pretest score: 59%
– Average posttest score: 83%
→ 78 more tests than before the mandate started
→ Average posttest score 4 points higher than before the mandate started

Question Analysis: Example Part 1 (Foundation Class Example)
Old Version of Test:
– Pretest: 46% (n=107)
– Posttest: 80% (n=104)
– *8 questions were too hard, ranging from 54% to 79% choosing the correct answer
Revised Version of Test:
– Pretest: 49% (n=82)
– Posttest: 82% (n=80)
– *6 questions were too hard, ranging from 51% to 78% choosing the correct answer
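An illustrative sketch of the item analysis such a review implies: compute the percent of test takers answering each question correctly, then list the items below the 80% posttest target. The data shape (one dict per trainee, mapping question number to whether it was answered correctly) is assumed, not taken from the presentation.

```python
def item_difficulty(responses: list[dict[int, bool]]) -> dict[int, float]:
    """Percent of trainees answering each question correctly."""
    n = len(responses)
    questions = responses[0].keys()
    return {q: 100.0 * sum(r[q] for r in responses) / n for q in questions}

def too_hard(difficulty: dict[int, float], target: float = 80.0) -> dict[int, float]:
    """Items short of the posttest target, like the 8 questions flagged above."""
    return {q: pct for q, pct in difficulty.items() if pct < target}

# Hypothetical posttest responses from three trainees on a 3-item test:
posttest = [{1: True, 2: False, 3: True},
            {1: True, 2: True, 3: True},
            {1: True, 2: False, 3: False}]
print(too_hard(item_difficulty(posttest)))  # items 2 and 3 fall below 80%
```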

Question Analysis: Example Part 2
Old Version of Test: Results by Class
– Sept, n=15, posttest: 83% (Trainer A)
– Sept, n=17, posttest: 93% (Trainer B)
– Sept, n=12, posttest: 93% (Trainer B)
– Oct, n=19, posttest: 87% (Trainer B)
– Nov, n=19, posttest: 64% (Trainer B)
– Dec, n=21, posttest: 66% (Trainer B)
Revised Version of Test: Results by Class
– July, n=22, posttest: 82% (Trainer C)
– Nov, n=15, posttest: 83% (Trainer C)
– Nov, n=17, posttest: 77% (Trainer D)
– Nov, n=13, posttest: 90% (Trainer E)
– Dec, n=13, posttest: 78% (Trainer C)
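One way to read this slide is to compare posttest averages by trainer. A sketch of that grouping, using the old-version figures above; the analysis itself is assumed, not shown in the presentation.

```python
from collections import defaultdict

def average_by_trainer(classes: list[tuple[str, int, float]]) -> dict[str, float]:
    """classes: (trainer, class size n, posttest average) per class.
    Returns each trainer's posttest average, weighted by class size."""
    totals = defaultdict(lambda: [0.0, 0])  # trainer -> [weighted sum, total n]
    for trainer, n, pct in classes:
        totals[trainer][0] += pct * n
        totals[trainer][1] += n
    return {t: round(s / n, 1) for t, (s, n) in totals.items()}

# Old version of the test, from the slide above:
old = [("A", 15, 83), ("B", 17, 93), ("B", 12, 93),
       ("B", 19, 87), ("B", 19, 64), ("B", 21, 66)]
print(average_by_trainer(old))  # {'A': 83.0, 'B': 79.0}
```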

Discussion Questions
– How do we better guide curriculum writers in developing new questions?
– How do we improve our revision process?
– How do we ensure test questions are fair?
– How do we reinforce standards when administering tests with trainers who may be overly empathetic to participant concerns?
– How do we support test takers (e.g., language, literacy, time constraints)?
– How do we guide the funder in understanding the value and limits of the test results?
– How do we guide the field in using results as information about learning, not as measures of practice performance?

Christine M Allegra, PhD, LCSW – Research Analyst
Theresa McCutcheon, MSW – Director, Office of Child Welfare Workforce Advancement