Reference Assessment Programs: Evaluating Current and Future Reference Services Dr. John V. Richardson Jr. Professor of Information Studies UCLA Department of Information Studies

Presentation Outline Why Survey Our Users? Question Design and Validity Concerns Methodological Issues Mini Case Studies Recommended Readings

Why Survey Our Users? Need to know what we don’t know – Satisfaction and dissatisfaction – Loyalty and the Internet – User needs and expectations Without such data, we can’t design effective new programs or identify best practices

Question Design and Validity Concerns Eight issues that must be addressed to ensure the validity of survey results: – Intent of the question – Clarity of the question – Unidimensionality – Scaling – Number of questions to include – Timing of administration – Question order – Sample sizes

1. Intent of the Question RUSA Behavioral Guidelines (1996) – Approachability – Interest in the query, and – Active listening skills UniFocus (based on 300 factor analyses in the hospitality industry) – Friendliness – Helpfulness or accuracy – Promptness of service

2. Clarity of the Question Data from unclear questions may be invalid – Use instructions to enhance question clarity

Mini Case Study What is the literal correct answer to the question posed?

3. Unidimensionality Unidimensionality is a statistical concept that describes the extent to which a set of questions all measure the same topic
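
In practice, unidimensionality is usually checked empirically. A minimal Python sketch (the item wordings and responses below are invented for illustration) computes Cronbach's alpha, a standard internal-consistency statistic; a high alpha suggests the items measure a single construct:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of survey items.

    items: list of k lists, each holding one item's responses
    across the same n respondents.
    """
    k = len(items)
    item_vars = sum(variance(col) for col in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Invented responses on a 1-5 satisfaction scale (3 items, 5 respondents).
satisfaction_items = [
    [4, 5, 3, 4, 2],  # "The staff member was friendly"
    [4, 4, 3, 5, 2],  # "The answer was helpful"
    [5, 4, 3, 4, 1],  # "I would ask here again"
]
print(f"alpha = {cronbach_alpha(satisfaction_items):.2f}")  # 0.93 here
```

By the conventional rule of thumb, alpha above roughly .70 is read as acceptable internal consistency.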

Constellation of Attitudes Satisfaction Delight Intent to Return Feelings about Experiences Value Loyalty

RUSA Behavioral Guidelines Approachability Interest in the query Active listening skills

4. Scaling Three key characteristics: – Does the scale have the right number of points (called response options)? – Are the words used to describe the scale points appropriate? – Is there a midpoint or neutral point on the scale?

A. Response Options A common four-point scale: – Very good, good, fair, and poor The distance between very good and good is not the same as the distance between fair and poor Assigning the numeric values 4, 3, 2, and 1 to these options may therefore lead to invalid results…
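
To see the problem concretely, here is a small sketch (responses invented): coding the labels as 4, 3, 2, and 1 and averaging silently assumes equal intervals between labels, whereas reporting the full distribution makes no interval assumption.

```python
from collections import Counter

# Invented responses to "How would you rate the service?"
responses = ["very good", "good", "good", "fair", "poor", "very good", "fair"]

# Coding labels as 4/3/2/1 treats the gaps between labels as equal,
# which respondents may not perceive; the mean inherits that assumption.
codes = {"very good": 4, "good": 3, "fair": 2, "poor": 1}
mean_score = sum(codes[r] for r in responses) / len(responses)
print(f"mean (assumes equal intervals): {mean_score:.2f}")

# Reporting the distribution avoids the equal-interval assumption.
for label, count in Counter(responses).most_common():
    print(f"{label:>10}: {count}")
```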

Mini Case Study What is the distance between each of these response options?

B1. Scale Anchors
Very Satisfied … Very Dissatisfied
Very Much Agree … Very Much Disagree
Very Positive … Very Negative
Very Valuable … Very Costly
Very Enjoyable … Very Unpleasant
Very Friendly … Very Unfriendly

Mini Case Study What are the scale anchors here?

B2. Seven-Point Scales
Scale A: Very Good … Very Poor, N/A
Scale B: Excellent … Very Poor, N/A
Scale C: Outstanding … Disappointing, N/A

C. Wording of Options The only difference among the scales on the preceding slide is the response anchors… – Is “very good” a rigorous enough expectation? – Would “excellent” be better? – What about “outstanding”?

Mini Case Study How many response points are there? What is the level of expectation?

D. Midpoint or Neutral Point The rate of skipped questions increases when a neutral response is not included Use an odd number of response points Also, a neutral response provides a way to treat missing data
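
A rough sketch of that last point (scale and responses invented; filling skipped items with the midpoint is one convention, not the only defensible one):

```python
# Five-point scale with an explicit neutral midpoint.
SCALE = {1: "very dissatisfied", 2: "dissatisfied", 3: "neutral",
         4: "satisfied", 5: "very satisfied"}
NEUTRAL = 3

# Invented responses; None marks a skipped question.
raw = [5, 4, None, 2, 3, None, 4]

# Treat skipped items as neutral rather than dropping the respondent
# (an assumption for illustration -- other missing-data treatments exist).
filled = [NEUTRAL if r is None else r for r in raw]
print(f"mean with neutral fill: {sum(filled) / len(filled):.2f}")
```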

Mini Case Study What’s the midpoint?

5. Number of Questions Short enough – So that users will answer all the questions Long enough – So that enough information is gathered for decision making purposes

A. Longer Surveys Take more time and effort on the part of the respondent A high perceived “cost of completion” results in questions left partially or completely unanswered

B. Likelihood of Complete Responses The higher the salience or importance of the topic to the user, the greater the likelihood that they will complete a longer survey Multiple questions measuring a single attitude make for longer surveys, although they also aid in evaluating user attitudes

6. Timing and Ease Administer during or immediately following the transaction – Do separate transactions blur together? Card or mail methods; IVR (interactive voice response) Delay seems to cause more positive results Electronic reference allows for ease of administration (more on PaSS™ later)

7. Question Order Specific questions first – Technology, resources, or staffing More general questions second – Value, overall satisfaction, intent to return – Halo Effect Four-question survey: one overall and three specific questions – Asking the general question last produces better data
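
A small sketch of this ordering rule (question wordings invented; shuffling the specific questions is an extra assumption for illustration, not part of the guideline): specific questions come first, and the general question always comes last.

```python
import random

specific = [
    "How would you rate the technology?",
    "How would you rate the resources?",
    "How would you rate the staffing?",
]
general = "Overall, how satisfied were you with the service?"

# Specific questions first (shuffled to average out position effects),
# general question last to limit the halo effect.
order = random.sample(specific, k=len(specific)) + [general]
for i, question in enumerate(order, 1):
    print(f"{i}. {question}")
```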

Mini Case Study

8. Sample Sizes Depends upon population size – Error rate – Confidence Consult a table of sample sizes

A. Error Rate Defined as the precision of measurement Accurate to plus or minus some figure Has to be precise enough to know which direction service quality is going (i.e., up or down)

B. Confidence Refers to the overall confidence in the results: – A .99 confidence level means that one can be relatively certain that the results fall within the stated error range 99% of the time – A .95 confidence level is common – A .90 confidence level is less common, but… – a .90 confidence level requires fewer respondents and will result in a less accurate survey

C. Population and Sample Population (N) refers to the people of interest Sample (n) refers to the people measured to represent the population Response rate is the proportion of those sampled who actually respond to the survey

D. Population & Sample Size N=n= – – – – – – SOURCE: Robert V. Krejcie and Daryle W. Morgan, “Determining sample size for research activities," Educational and Psychological Measurement 30 (Autumn 1970):

Appropriate Sample Sizes

Case Studies Much of the extant surveying of reference service is inadequate or misleading and can result in poor decision-making Improving user service means understanding what leads to satisfied and loyal users Patron Satisfaction Survey (PaSS)™ –

Recommended Bibliographies 1,000 citations to reference studies at – citations to virtual reference studies at –

Best Single Overview Richardson, “The Current State of Research on Reference Transactions,” In Advances in Librarianship, vol. 26, pages , edited by Frederick C. Lynden. New York: Academic Press, 2002.

Recommended Readings Saxton and Richardson, Understanding Reference Transactions (2002) – Most complete list of dependent and independent variables used in the study of reference service McClure et al., Statistics, Measures and Quality Standards (2002) – Most complete list of measures for virtual reference work

Questions and Answers What do you want to know now?