The Response-Shift Bias: Pre-Test/Post-Test v. Post-Test/Retrospective Pre-Test Evaluation of Information Literacy Training Programs

Marie T. Ascher, Head, Reference & Information Services, and Diana J. Cunningham, Associate Dean & Director, Health Sciences Library, New York Medical College, Valhalla, NY

Objective
The purpose of this research was to examine response-shift bias among public health workers who participated in informatics training. The purpose of this poster is to introduce the post-test/retrospective pre-test technique to information professionals and educators doing competency-based evaluation.

Background
Following a baseline survey to assess competencies in the spring of 2005, a series of training sessions aimed at improving proficiency on many of those informatics competencies [1] was conducted between October 2005 and the following January. Originally, self-reported proficiency at baseline (pre-test) was compared to self-reported post-test scores. Using this methodology, evidence of gains in proficiency was weak, and several competencies even showed a decline. Suspecting a case of "I know now what I didn't know then," the researchers looked for a so-called response-shift bias.

Figure 1. A segment of the original competencies instrument.

The Response-Shift Bias
In using self-report instruments, researchers assume that a subject's understanding of the standard of measurement for the dimension being assessed will not change from one testing to the next (pre-test to post-test). If the standard of measurement changes, the ratings no longer accurately reflect change due to the treatment and are invalid. [2]

Methods
Based upon the results of the pre-test, the following instructional sessions were offered to all employees of three county health departments:

1. Utilizing Your PC for Public Health
   a. Using Your PC for Inter-Office Communication – creating a disease fact sheet with Word, e-communication (27 participants)
   b. Managing and Presenting Your Data – using Excel to create tables and graphics, using PowerPoint (35 participants)
2. Information Sleuthing: Retrieving and Manipulating Public Health Information
   a. Finding and Using Public Health Data – locating web-based data for various scenarios, downloading data to Excel (32 participants)
   b. Finding and Managing the Literature of Public Health – phpartners.org resources, PubMed, evidence-based introduction (19 participants)
3. Grant-Writing Overview
   a. Scoping It Out: Researching Grant Opportunities – funding opportunities, online grant resources, grants.gov, etc. (14 participants)
   b. Making It Happen: Basics of Grant Writing – outsourced, hands-on workshop
4. Bringing It Together
   a. Finding, Evaluating and Organizing Information – developing a resource guide from a variety of sources using a pandemic influenza example; included some HTML instruction (13 participants)

As a follow-up to the initial post-test, investigators conducted focus groups with the employees who had attended training in order to better understand the results. During these focus groups, a post-test/retrospective pre-test questionnaire was distributed with a self-addressed stamped envelope and an incentive for its return. In contrast to a pre-test/post-test design, the post-test/retrospective pre-test survey asked "What is your proficiency for each competency today?" and "What was your proficiency for each competency one year ago?" This "then/post" survey enabled a comparison, for those participants, between results obtained using a pre-test/post-test methodology and results obtained using a post-test/retrospective pre-test methodology.

Figure 2. A segment of the post-test/retrospective pre-test instrument.
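To make the arithmetic behind the two designs concrete, the short sketch below computes a mean change score each way. It is a minimal illustration under assumed values, not the project's data or analysis code; the rating scale, sample numbers, and function name are invented for demonstration.

```python
# Minimal sketch: mean change score under the pre/post and then/post designs.
# All values are hypothetical; ratings are assumed to be numeric self-ratings
# on the instrument's scale (e.g., 1 = aware, 3 = knowledgeable).

from statistics import mean

def mean_change(before, after):
    """Average (after - before) rating across participants for one competency."""
    return mean([a - b for b, a in zip(before, after)])

# Hypothetical ratings for a single competency, five participants.
pretest       = [2, 3, 2, 3, 2]  # baseline self-rating (traditional pre-test)
posttest      = [2, 2, 3, 3, 2]  # self-rating after training
retro_pretest = [1, 1, 1, 2, 1]  # "What was your proficiency one year ago?" (then)
then_posttest = [3, 3, 3, 3, 2]  # "What is your proficiency today?" (post)

print("pre/post gain: ", mean_change(pretest, posttest))             # 0   (no apparent gain)
print("then/post gain:", mean_change(retro_pretest, then_posttest))  # 1.6 (substantial gain)
```

If participants overrate themselves at baseline and then recalibrate after training, the pre/post difference is flattened or even negative, while the then/post difference is measured against a single, post-training standard.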
Results
Focus group results: Germane to this poster, the third focus group question asked participants, "Based upon your current understanding of our project's efforts to identify your training needs, do you feel that you would rate yourself differently now than you did when you first completed the survey?" Several responses indicated, as one participant from Putnam County succinctly put it, "When I did the post-test, I realized that I didn't understand the questions during the pre-test."

Post-test/retrospective pre-test results: Fifteen valid surveys were returned by focus group participants (two additional surveys were discarded because they did not have valid ID numbers for comparison). Across all respondents and all twenty-six competencies, there was an average overall gain from the "aware" (1) to the "knowledgeable" (3) level. As demonstrated in Table 1, every respondent to the then/post survey except one (who reported zero change) reported increased overall proficiency. Most importantly, average levels of improvement were higher on the competencies originally targeted for training, those with a significant gap score between proficiency and relevance (roughly competency numbers 1-18). [3]

Overall, the then/post results showed significantly greater improvement across competencies than the original pre-test/post-test design had indicated, and, as Table 1 also demonstrates, individuals showed a greater increase in proficiency level under the then/post methodology. The self-reported "post" scores on the two surveys also differed considerably from each other: eleven of the fifteen participants rated themselves lower on most competencies on the then/post survey than they had on the earlier post-test.

Satisfaction ratings collected at the end of each session were overwhelmingly positive, with an overall average score of 4.5 on a 5-point scale.
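As an aside on how competencies were targeted for training, the gap score mentioned above (relevance rating minus proficiency rating) can be sketched as follows. The competency names, rating scale, and cutoff in this example are hypothetical assumptions for illustration, not the project's instrument or figures.

```python
# Illustrative gap-score ranking: gap = mean relevance - mean proficiency.
# Competencies with the largest gaps are candidates for targeted training.
# All names, ratings, and the cutoff below are hypothetical.

competencies = {
    # hypothetical competency: (mean relevance, mean proficiency) on an assumed 1-5 scale
    "Finding public health data on the web": (4.6, 2.1),
    "Using Excel to summarize data":         (4.2, 2.8),
    "Searching the published literature":    (3.9, 2.5),
    "Basic word processing":                 (4.0, 3.8),
}

GAP_CUTOFF = 1.0  # assumed threshold for a "significant" gap

gaps = {name: relevance - proficiency
        for name, (relevance, proficiency) in competencies.items()}

for name, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    status = "target for training" if gap >= GAP_CUTOFF else "lower priority"
    print(f"{name}: gap = {gap:.1f} ({status})")
```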
Conclusions
The decision to examine this methodology did not occur until after the results from the original pre/post methodology were questioned: how could our trainees have become less proficient? Self-report in general involves biases and perhaps should be avoided where other methods might be more reliable. The results of our post-test/retrospective pre-test survey are not conclusive: a small number of surveys, returned by what were probably our most diligent participants, with limited results to report. What we learned above all is that pre/post self-report is unreliable. The purpose of this inquiry was not to get better results, but to determine which results are more realistic. Based on the satisfaction surveys together with the focus group results, we believe the then/post results are more realistic and a better measure of the knowledge gained, or perceived to have been gained, by participants.

To conduct this research again and measure proficiency ratings for the recommended CDC competencies, this then/post methodology, combined with focus group follow-up, would be our methodology from the outset. There are times when self-report is the best or only option available. These researchers recommend that library researchers consider the then/post methodology over traditional pre/post self-report comparisons: even with bad training (which ours was not), learners should not know less when they complete a curriculum.

References
1. O'Carroll PW, and the Public Health Informatics Competencies Working Group. Informatics competencies for public health professionals. Seattle, WA: Northwest Center for Public Health Practice.
2. Howard GS. Response-shift bias: a problem in evaluating interventions with pre-post self-reports. Eval Rev 1980 Feb;4.
3. Cunningham DJ, Ascher MT, Viola D, Visintainer PF. Baseline assessment of public health informatics competencies in two Hudson Valley health departments. Public Health Rep 2007 May/June;122.

The Public Health Information Partners (PHIP) project is funded by the National Library of Medicine under NLM contract N01-LM with the National Network of Libraries of Medicine, Middle Atlantic Region (Region 1). Poster presented at the Annual Meeting of the Medical Library Association, May 21, 2007, Philadelphia, PA.