School of Information Studies
Using Rubrics to Collect Evidence for Decision-Making: What do Librarians Need to Learn?
Megan Oakleaf, MLS, PhD, School of Information Studies, Syracuse University
4th International Evidence Based Library & Information Practice Conference, May 2007

Overview
- Introduction
- Definition & Benefits of Rubrics
- Methodology
- Emergence of Expert Rubric User Group
- Characteristics of Expert Rubric Users
- Barriers to Expert Use of Rubrics
- The Need for Training
- Directions for Future Research

Rubrics Defined
Rubrics:
- describe 1) the parts, indicators, or criteria and 2) the levels of performance of a particular task, product, or service
- are formatted on a grid or table
- are employed to judge quality
- are used to translate difficult, unwieldy data into a form that can be used for decision-making
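The grid structure can be made concrete in code. The following is a minimal sketch, not taken from the presentation, of a rubric represented as criteria mapped to ordered performance levels, with a helper that translates per-criterion ratings into point values; the criterion names, levels, and sample ratings are illustrative assumptions.

```python
# A minimal sketch of a rubric as a data structure: each criterion maps to an
# ordered list of performance levels, and a helper translates per-criterion
# ratings into point values. Criterion names, levels, and the sample ratings
# are illustrative assumptions, not the rubric used in the study.

RUBRIC = {
    "cites sources":         ["beginning", "developing", "exemplary"],
    "evaluates credibility": ["beginning", "developing", "exemplary"],
}

def score_artifact(ratings: dict) -> int:
    """Convert descriptive level ratings into a numeric score for analysis."""
    total = 0
    for criterion, level in ratings.items():
        levels = RUBRIC[criterion]
        total += levels.index(level)  # beginning = 0, developing = 1, exemplary = 2
    return total

example = {"cites sources": "exemplary", "evaluates credibility": "developing"}
print(score_artifact(example))  # 3
```

This translation from descriptive levels to point values mirrors the way the study, described later, converts coded student responses into quantitative data.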

Rubrics are often used to make instructional decisions and evaluations. (Slide image: http://www.southcountry.org/BROOKHAVEN/classrooms/btejeda/images/rubric%20big.JPG)

Potential Rubric Uses in Libraries
To analyze and evaluate:
- Information-seeking behavior
- Employee customer service skills
- Marketing/outreach efforts
- Collection strengths
- Information commons spaces
- Student information literacy skills

Rubric for a Library Open House Event for First Year Students

| Indicator | Beginning | Developing | Exemplary | Data Source |
|---|---|---|---|---|
| Attendance | Attendance rates are similar to the 2006 Open House | Attendance rates increase by 20% from the 2006 Open House | Attendance rates increase by 50% from the 2006 Open House | Staff [Committee and Volunteers] records |
| Staff Participation | Staff participation is similar to the 2006 Open House; no volunteers | Increase in participation by library staff [librarians and paraprofessionals] and student volunteers | Increase in participation by library staff [librarians and paraprofessionals], student volunteers, student workers, and academic faculty | |
| Budget | Budget same as 2006 Open House ($200) | Budget increases by $100 from the 2006 Open House | Budget increases by $300 from the 2006 Open House | Budget, Financial Statements |
| Reference Statistics | Reference statistics similar to 2006 | Reference statistics increase by 20% from 2006 | Reference statistics increase by 50% from 2006 | Library Reference Department Statistics |
| Student Attitudes | Students are pleased with the Open House | Students enjoy the Open House and are satisfied with information | Students are excited about the Open House and volunteer to participate in the next year's event | Survey |

Rubric created by: Katherine Thurston & Jennifer Bibbens

Rubric for a Virtual Reference Service

| Indicator | Beginning | Developing | Exemplary | Data Source |
|---|---|---|---|---|
| Transactions | 0-4 reference transactions per week | 5-7 reference transactions per week | 8+ reference transactions per week | Transaction Logs |
| User Satisfaction | Students, faculty, and staff report they are "dissatisfied" or "very dissatisfied" with reference transactions | Students, faculty, and staff report they are "neutral" about reference transactions | Students, faculty, and staff report they are "satisfied" or "very satisfied" with reference transactions | User Surveys |
| Training | Librarians report they are "uncomfortable" or "very uncomfortable" with providing virtual reference service | Librarians report they are "neutral" about providing virtual reference service | Librarians report they are "comfortable" or "very comfortable" with providing virtual reference service | Post-Training Surveys |
| Technology | Between 75% and 100% of transactions a week report dropped calls or technical difficulties | Between 25% and 74% of transactions a week report dropped calls or technical difficulties | Between 0% and 24% of transactions a week report dropped calls or technical difficulties | System Transcripts |
| Electronic Resources | 0-50 hits on electronic resources a week | 50-100 hits on electronic resources a week | 100+ hits on electronic resources a week | Systems Analysis Logs |

Rubric created by: Ana Guimaraes & Katie Hayduke
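Because several rows of this rubric are defined by numeric thresholds, the corresponding ratings can be computed directly from service statistics. The sketch below uses invented weekly figures and resolves the rubric's overlapping boundaries (50 and 100 hits) in favor of the higher level; it is an illustration, not part of the original rubric.

```python
# A minimal sketch of applying the numeric rows of the virtual reference rubric
# to one week of service statistics. Thresholds follow the Transactions and
# Electronic Resources rows above; overlapping boundaries are resolved in favor
# of the higher level, and the weekly figures are invented for illustration.

def rate_transactions(per_week: int) -> str:
    if per_week >= 8:
        return "Exemplary"   # 8+ reference transactions per week
    if per_week >= 5:
        return "Developing"  # 5-7 per week
    return "Beginning"       # 0-4 per week

def rate_electronic_resource_hits(per_week: int) -> str:
    if per_week >= 100:
        return "Exemplary"   # 100+ hits per week
    if per_week >= 50:
        return "Developing"  # 50-100 hits per week
    return "Beginning"       # 0-50 hits per week

week = {"transactions": 6, "hits": 120}
print(rate_transactions(week["transactions"]))      # Developing
print(rate_electronic_resource_hits(week["hits"]))  # Exemplary
```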

Study Rubric

Benefits
Rubrics:
- provide librarians the opportunity to discuss, determine, and communicate agreed-upon values
- include descriptive, yet easily digestible data
- prevent inaccuracy of scoring
- prevent bias
When used in student learning contexts, rubrics:
- reveal the expectations of instructors and librarians to students
- offer more meaningful feedback than letter or numerical scores alone
- support not only student learning, but also self-evaluation and metacognition

The Research Question
To what extent can librarians use rubrics to make valid and reliable decisions?
- Library service: an information literacy tutorial
- Artifacts: student responses to questions within the tutorial
- Goal: to make decisions about the tutorial and the library instruction program

Methodology
- 75 randomly selected student responses to open-ended questions embedded in an information literacy tutorial at NCSU
- 25 raters: 15 internal & trained (NCSU librarians, faculty, students); 10 external & untrained (non-NCSU librarians)
- Raters code artifacts using rubrics
- Raters' experiences captured on comment sheets
- Reliability statistically analyzed using Cohen's kappa
- Validity statistically analyzed using a "gold standard" approach and Cohen's kappa

This study employed a survey design methodology. The data for the study came from student responses to open-ended questions embedded in an online information literacy tutorial. This textual data was translated into quantitative terms through the use of a rubric. Using a rubric, raters coded student answers into pre-set categories, and these categories were assigned point values. The point values assigned to student responses were subjected to quantitative analysis in order to describe student performance, test for interrater reliability, and explore the validity of the rubric. According to Lincoln, this approach is called "discovery phase" or preliminary experimental design, and it is commonly employed in the development of new rubrics. [1] Yvonna Lincoln. "Authentic Assessment and Research Methodology." E-mail to Megan Oakleaf. 2005.
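For readers unfamiliar with the reliability statistic named above, the following is a minimal sketch of computing Cohen's kappa for two raters who coded the same artifacts into rubric categories. The function and the example ratings are illustrative assumptions, not the study's actual analysis code.

```python
from collections import Counter

def cohen_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa for two raters who coded the same items into categories."""
    assert len(rater_a) == len(rater_b) and rater_a, "raters must code the same items"
    n = len(rater_a)

    # Observed agreement: proportion of items both raters coded identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement, estimated from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                     for c in set(rater_a) | set(rater_b))

    return (p_observed - p_expected) / (1 - p_expected)

# Two hypothetical raters scoring ten student responses with a three-level rubric.
a = ["beginning", "developing", "exemplary", "developing", "beginning",
     "exemplary", "developing", "beginning", "exemplary", "developing"]
b = ["beginning", "developing", "exemplary", "beginning", "beginning",
     "exemplary", "developing", "developing", "exemplary", "developing"]
print(round(cohen_kappa(a, b), 2))  # about 0.70: "substantial" on the scale that follows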

Kappa Index (Landis & Koch)

| Kappa Statistic | Strength of Agreement |
|---|---|
| 0.81 - 1.00 | Almost Perfect |
| 0.61 - 0.80 | Substantial |
| 0.41 - 0.60 | Moderate |
| 0.21 - 0.40 | Fair |
| 0.00 - 0.20 | Slight |
| < 0.00 | Poor |
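Expressed as code, the scale above is simply a lookup from a kappa value to a strength-of-agreement label. This small helper restates the table for illustration and is not code from the presentation.

```python
# The Landis & Koch scale above restated as a lookup function (illustrative only).

def agreement_strength(kappa: float) -> str:
    if kappa < 0.00:
        return "Poor"
    if kappa <= 0.20:
        return "Slight"
    if kappa <= 0.40:
        return "Fair"
    if kappa <= 0.60:
        return "Moderate"
    if kappa <= 0.80:
        return "Substantial"
    return "Almost Perfect"

print(agreement_strength(0.72))  # Substantial; 0.72 is the top-ranked rater's average on the next slide
```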

| Average Kappa | Rank | Participant Group | Status |
|---|---|---|---|
| 0.72 | 1 | NCSU Librarian | Expert |
| 0.69 | 2 | Instructor | |
| 0.67 | 3 | | |
| 0.66 | 4 | | |
| 0.62 | 5 | | |
| 0.61 | 6 | | Non-Expert |
| 0.59 | 7 | | |
| 0.58 | 8 | Student | |
| 0.56 | 9 | | |
| 0.55 | 10 | | |
| 0.55 | 11 | | |
| 0.54 | 12 | | |
| 0.52 | 13 | | |
| | 14 | | |
| 0.43 | 15 | External Instruction Librarian | |
| 0.32 | 16 | External Reference Librarian | |
| 0.31 | 17 | | |
| | 18 | | |
| 0.30 | 19 | | |
| | 20 | | |
| 0.27 | 21 | | |
| 0.21 | 22 | | |
| 0.19 | 23 | | |
| 0.14 | 24 | | |
| 0.13 | 25 | | |

Expert status does not appear to be correlated to educational background, experience, or position within the institution.

Expert Kappa Statistics

Non-Expert Kappa Statistics

Expert Characteristics
- focus on general features of the artifact
- adopt the values of the rubric
- revisit criteria while scoring
- experience training

Non-Expert Characteristics
- diverse outlooks or perspectives
- prior knowledge or experiences
- fatigue
- mood
- other barriers

Barrier 1: Difficulty Understanding an Outcomes-Based Approach
Many librarians are more familiar with inputs/outputs than outcomes.
Comments from raters:
- Using measurable outcomes to assess student learning focuses too much on specific skills: too much "science" and not enough "art."
- "While the rubric measures the presence of concepts…it doesn't check to see if students understand [the] issues."
- "This rubric tests skills, not…real learning."

Barrier 2: Tension between Analytic & Holistic Approaches
Some librarians are unfamiliar with analytical evaluation.
Comments from raters:
- The rubric "was really simple. But I worried that I was being too simplistic…and not rating [student work] holistically."
- "The rubric is a good and a solid way to measure knowledge of a process but it does not allow for raters to assess the response as a whole."

Analytic vs. Holistic
Analytic rubrics:
- Better for judging complex artifacts
- Allow for separate evaluations of artifacts with multiple facets
- Provide more detailed feedback
- Take more time to create and use
- Bottom line: better for providing formative feedback
Holistic rubrics:
- Better for simple artifacts with few facets
- Good for getting a "snapshot" of quality
- Provide only limited feedback
- Do not offer detailed analysis of strengths/weaknesses
- Bottom line: better for giving summative scores

Barrier 3: Failure to Comprehend the Rubric
Some librarians may not understand all aspects of a rubric.
Comments from raters:
- "I decided to use literally examples, indicators to mean that students needed to provide more than one."
- "The student might cite one example…but not…enough for me to consider it exemplary."

Barrier 4: Disagreement with Assumptions of the Rubric
Some librarians may not agree with all assumptions and values espoused by a rubric.
Comments from raters:
- The rubric "valued students' ability to use particular words but does not measure their understanding of concepts."

Barrier 5: Difficulties with Artifacts
Some librarians may be stymied by atypical artifacts.
Comments from raters:
- I found myself "giving the more cryptic answers the benefit of the doubt."
- "If a student answer consists of a bulleted list of responses to the prompt, but no discussion or elaboration, does that fulfill the requirement?"
- "It's really hard…when students are asked to describe, explain, draw conclusions, etc. and some answer with one word."

Barrier 6: Difficulties Understanding Library Context & Culture
Librarians need campus context to use rubrics well.

Training Topics
- Value & principles of outcomes-based analysis and evaluation
- Theories that underlie rubrics
- Advantages & disadvantages of rubric models
- Structural issues that limit rubric reliability and validity (too general or specific, too long, focused on quantity not quality, etc.)
- Ways to eliminate disagreement about rubric assumptions
- Methods for handling atypical artifacts

Future Research
Investigate:
- attributes of expert raters
- effects of different types and levels of rater training
- non-instruction library artifacts
- impact of diverse settings

Conclusion
Are rubrics worth the time and energy? This study confirmed the value of rubrics: nearly all participants stated that they could envision using rubrics to improve library instructional services. Such feedback attests to the merit of rubrics as tools for effective evidence-based decision-making practice.

References
American Library Association. Information Literacy Competency Standards for Higher Education. 2000. 22 April 2005 <http://www.ala.org/ala/acrl/acrlstandards/informationliteracycompetency.htm>.
Arter, Judith and Jay McTighe. Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance. Thousand Oaks, California: Corwin Press, 2000.
Bernier, Rosemarie. "Making Yourself Indispensible By Helping Teachers Create Rubrics." CSLA Journal 27.2 (2004).
Bresciani, Marilee J., Carrie L. Zelna, and James A. Anderson. Assessing Student Learning and Development: A Handbook for Practitioners. Washington: National Association of Student Personnel Administrators, 2004.
Callison, Daniel. "Rubrics." School Library Media Activities Monthly 17.2 (Oct 2000): 34.
Colton, Dean A., Xiaohong Gao, Deborah J. Harris, Michael J. Kolen, Dara Martinovich-Barhite, Tianyou Wang, and Catherine J. Welch. Reliability Issues with Performance Assessments: A Collection of Papers. ACT Research Report Series 97-3, 1997.
Gwet, Kilem. Handbook of Inter-Rater Reliability: How to Estimate the Level of Agreement between Two or Multiple Raters. Gaithersburg, Maryland: STATAXIS, 2001.
Hafner, John C. "Quantitative Analysis of the Rubric as an Assessment Tool: An Empirical Study of Student Peer-Group Rating." International Journal of Science Education 25.12 (2003).
Iannuzzi, Patricia. "We Are Teaching, But Are They Learning: Accountability, Productivity, and Assessment." Journal of Academic Librarianship 25.4 (1999): 263-266.
Landis, J. Richard and Gary G. Koch. "The Measure of Observer Agreement for Categorical Data." Biometrics 33 (1977).
Lichtenstein, Art A. "Informed Instruction: Learning Theory and Information Literacy." Journal of Educational Media and Library Sciences 38.1 (2000).
Mertler, Craig A. "Designing Scoring Rubrics For Your Classroom." Practical Assessment, Research and Evaluation 7.25 (2001).
Moskal, Barbara M. "Scoring Rubrics: What, When, and How?" Practical Assessment, Research, and Evaluation 7.3 (2000).
Nitko, Anthony J. Educational Assessment of Students. Englewood Cliffs, New Jersey: Prentice Hall, 1996.
Popham, W. James. Test Better, Teach Better: The Instructional Role of Assessment. Alexandria, Virginia: Association for Supervision and Curriculum Development, 2003.
Prus, Joseph and Reid Johnson. "A Critical Review of Student Assessment Options." New Directions for Community Colleges 88 (1994).
Smith, Kenneth R. New Roles and Responsibilities for the University Library: Advancing Student Learning through Outcomes Assessment. Association of Research Libraries, 2000.
Stevens, Dannielle D. and Antonia Levi. Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Sterling, Virginia: Stylus, 2005.
Tierney, Robin and Marielle Simon. "What's Still Wrong With Rubrics: Focusing On the Consistency of Performance Criteria Across Scale Levels." Practical Assessment, Research, and Evaluation 9.2 (2004).
Wiggins, Grant. "Creating Tests Worth Taking." A Handbook for Student Performance in an Era of Restructuring. Eds. R. E. Blum and Judith Arter. Alexandria, Virginia: Association for Supervision and Curriculum Development, 1996.
Wolfe, Edward W., Chi-Wen Kao, and Michael Ranney. "Cognitive Differences In Proficient and Nonproficient Essay Scorers." Written Communication 15.4 (1998).

Questions?