Online Virtual Chat Library Reference Service: A Quantitative and Qualitative Analysis
Dr. Dave Harmeyer
Associate Professor
Chair, Marshburn Memorial Library
Azusa Pacific University

Outline
1. Purpose of the Study
2. Research Questions
3. Methodology
4. Variables (I.V., D.V.)
5. Significant Findings
6. Conclusions
7. Questions & Answers

Purpose of the Study
- Virtual chat reference augments the face-to-face reference interview
- The library reference literature lacks research-based findings to back up recommended practices
- This study fills that void with a conceptual model grounded in an empirical study of chat reference transactions

Research Questions
1. What measurable indicators are found for virtual chat reference transactions, looking exclusively at data created from the chat reference transcripts?
2. Do published reference interview guidelines from RUSA, a set of other strategies, and the nature of the query contribute to an accurate answer?
3. What conceptual model of best practices can be suggested by an analysis of the data?

Methodology
- Two and a half years of archived academic library chat transcripts, examined using Krippendorff's (2004) content analysis
- 333 transcripts randomly sampled from 2,500
- 16 independent variables analyzed for their relationship with one dependent variable: an accurate reference answer
- Pearson correlations and ANOVA tests (see the sketch below)
- 120 virtual librarians at 43 American institutions
- 320 remote patrons accessing the service through one Southern California undergraduate/masters university
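
As a rough illustration of the analysis just described, the sketch below draws a random sample of coded transactions and runs a Pearson correlation and a one-way ANOVA against the accuracy score. This is a minimal sketch only; the file name and column names ("accuracy", "service_time_sec", "open_ended_question") are hypothetical placeholders, not artifacts of the actual study.

# A minimal sketch of the statistical workflow, assuming the coded transcripts
# sit in a CSV with one row per transaction. File and column names are
# hypothetical placeholders, not from the study itself.
import pandas as pd
from scipy import stats

coded = pd.read_csv("coded_transcripts.csv")      # ~2,500 archived transactions
sample = coded.sample(n=333, random_state=0)      # random sample, as in the study

# Pearson correlation between a quantitative IV and the accuracy DV
r, p = stats.pearsonr(sample["service_time_sec"], sample["accuracy"])
print(f"service time vs. accuracy: r = {r:.2f}, p = {p:.4f}")

# One-way ANOVA: does mean accuracy differ across a categorical (qualitative) IV?
groups = [grp["accuracy"].to_numpy() for _, grp in sample.groupby("open_ended_question")]
f_stat, p = stats.f_oneway(*groups)
print(f"open-ended question vs. accuracy: F = {f_stat:.2f}, p = {p:.4f}")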

Methodology (cont.)

Variables
- Research Question 1 answered:
  - 16 independent variables
  - 1 dependent variable: question accuracy
- Influence on question accuracy:
  - Observed in content analysis of the chat transcripts
  - Derived from RUSA guidelines
  - Derived from the literature review

Quantitative IVs
1. Librarian's initial contact time (hold time, in seconds)
2. Total time of transaction (service time, in seconds)
3. Longest time gap by the librarian (in seconds)
4. Number of URLs co-browsed with the patron
5. Keystrokes by the librarian
6. Keystrokes by the patron
7. Keystrokes by both
(A sketch below shows how these measures can be derived from a timestamped transcript.)
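
The sketch below derives the timing and keystroke measures above from a single transcript. The message format (speaker, timestamp, text), the sample dialogue, and the reading of "longest gap" as the longest interval between consecutive librarian posts are assumptions made for illustration, not details from the study.

from datetime import datetime

# Hypothetical transcript: (speaker, timestamp, text) tuples.
transcript = [
    ("patron",    "2006-03-01 14:02:10", "Hi, I need articles on chat reference."),
    ("librarian", "2006-03-01 14:02:55", "Sure - have you looked at the LISA database?"),
    ("patron",    "2006-03-01 14:04:30", "No, where do I find it?"),
    ("librarian", "2006-03-01 14:05:05", "On the library home page, under Databases A-Z."),
]

def ts(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

times = [ts(t) for _, t, _ in transcript]
hold_time_sec = (times[1] - times[0]).total_seconds()      # IV 1: librarian's initial contact time
service_time_sec = (times[-1] - times[0]).total_seconds()  # IV 2: total transaction time

# IV 3 (one reading): longest interval between consecutive librarian posts
librarian_times = [ts(t) for who, t, _ in transcript if who == "librarian"]
longest_gap_sec = max((b - a).total_seconds()
                      for a, b in zip(librarian_times, librarian_times[1:]))

# IVs 5-7: characters typed, used here as a proxy for keystrokes
keystrokes_librarian = sum(len(txt) for who, _, txt in transcript if who == "librarian")
keystrokes_patron = sum(len(txt) for who, _, txt in transcript if who == "patron")
keystrokes_both = keystrokes_librarian + keystrokes_patron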

Qualitative IVs
1. The question's difficulty (seven-point scale)
2. Response to a patron's "are you there" statements (scored as present, not present, not applicable, or ambiguous when coders disagreed)
3. Librarian's friendliness
4. Lack of jargon
5. Use of open-ended questions
6. Use of closed and/or clarifying questions
7. Librarian maintains objectivity
8. Asking if the question was answered completely
9. The type of question (seven categories: ready reference, research question, library technology, request for materials, bibliographic verification, other, and ambiguous for disagreements among coders)

Dependent Variable
Coders' Qualitative Judgments and Service Quality
8 - Librarian gave (or referred) the patron to a single source with an accurate answer: Excellent
7 - Librarian gave (or referred) the patron to more than one source, one of which provided an accurate answer: Very good
6 - Librarian gave (or referred) the patron to a single source which did not lead directly to an accurate answer but did serve as a preliminary source: Good
5 - Librarian gave (or referred) the patron to more than one source, none of which led directly to an accurate answer but one of which served as a preliminary source: Satisfactory

Dependent Variable (cont.)
4 - No direct accurate answer given; patron referred to another person or institution: Fair / poor
3 - No accurate answer (or referral) given (e.g., "I don't know"): Failure
2 - Librarian gave (or referred) the patron to a single source which did not answer the question: Unsatisfactory
1 - Librarian gave (or referred) the patron to more than one source, none of which answered the question: Most unsatisfactory
(Richardson and Reyes, 1995)
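
The eight-point rubric above can be recorded as a simple numeric code during content analysis. The mapping below is a sketch only; the condensed label wording is mine, not the coding manual's.

# The eight-point accuracy rubric expressed as a lookup, so a coder's judgment
# can be stored as a numeric dependent variable. Labels are condensed from the
# slides above, not quoted from the coding manual.
ACCURACY_SCALE = {
    8: "Single source with an accurate answer (Excellent)",
    7: "Multiple sources, one accurate (Very good)",
    6: "Single preliminary source, no direct answer (Good)",
    5: "Multiple preliminary sources, no direct answer (Satisfactory)",
    4: "No accurate answer; referred elsewhere (Fair / poor)",
    3: "No accurate answer or referral, e.g. 'I don't know' (Failure)",
    2: "Single source that did not answer the question (Unsatisfactory)",
    1: "Multiple sources, none answering the question (Most unsatisfactory)",
}

def accuracy_label(points: int) -> str:
    """Return the service-quality label for a coder's accuracy score (1-8)."""
    return ACCURACY_SCALE[points]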

Significant Findings Summary
- Research Question 2 answered: yes
- 30 significant relationships (p < .05)
- From 9 of the 16 variables
- 5 found in the RUSA guidelines
- 4 found in other strategies or the nature of online chat

Significant Findings: Answer Accuracy
Answer Accuracy as Judged by Coders (N = 331)
Accurate answer (single source): Excellent (1/4)
Accurate answer (multiple sources): Very good (1/2)
Preliminary source (single source): Good (2/3)
Preliminary source (multiple sources): Satisfactory (3/4)
No accurate answer, referred: Fair / poor
"I don't know," no referral: Failure
Not accurate (single source): Unsatisfactory
Not accurate (multiple sources): Most unsatisfactory

Significant Findings: Best Practices
Research Question 3 answered: yes
A Conceptual Model for Reference Chat Accuracy - minor plus est (less is more)
1. Keep time gaps between sending responses to patrons to no more than one and a half minutes.
2. Maintain a total chat transaction time of eight minutes or less.
3. Keep total keystrokes per transaction to within six and a half lines of text (or 480 characters).
4. Expect to type twice as many characters as the patron.

Significant Findings: Best Practices (cont.)
5. Be careful about beginning the question negotiation segment of the reference interview with an open question unless the nature of the patron's question explicitly calls for one.
6. Ask closed or clarifying questions when appropriate.
7. At the end of the reference transaction, ask, "Does this completely answer your question?"
8. Even moderately difficult questions decrease answer accuracy, not just those of medium to high difficulty.
(Points 1-4 are illustrated in the sketch below.)
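
The quantitative half of the model (points 1-4) can be checked mechanically against a logged transaction. The helper below is a hypothetical illustration, not part of the study; its function and argument names are my own, and only the thresholds come from the slides above.

# Hypothetical helper: flag one transaction against points 1-4 of the model.
def check_against_model(longest_gap_sec, service_time_sec,
                        keystrokes_librarian, keystrokes_patron):
    total_keystrokes = keystrokes_librarian + keystrokes_patron
    ratio = keystrokes_librarian / max(keystrokes_patron, 1)
    return {
        "gap_ok": longest_gap_sec <= 90,                # point 1: about 1.5 minutes
        "service_time_ok": service_time_sec <= 8 * 60,  # point 2: 8 minutes or less
        "keystrokes_ok": total_keystrokes <= 480,       # point 3: ~6.5 lines of text
        "librarian_to_patron_typing": round(ratio, 1),  # point 4: expect roughly 2.0
    }

# Example: a 7-minute transaction with a 60-second longest gap
print(check_against_model(longest_gap_sec=60, service_time_sec=7 * 60,
                          keystrokes_librarian=300, keystrokes_patron=150))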

Significant Findings: 1. Gaps
- Keep time gaps between sending responses to patrons to not much more than one and a half minutes
- Reinforces RUSA's interest guideline (2.6): keep time away from the patron short and maintain "word contact" (RUSA, June 2004)
- Anything nearing two minutes or more is likely to decrease answer accuracy

Significant Findings: 1. Gaps
Longest Librarian Gap
Quartile (min.)      Acc. Mean    Sig. of Diff. (p)
1st
2nd (1.87 - )                     (1st & 2nd)
3rd (2.85 - )                     (1st & 3rd, not sig.)
4th                               (1st & 4th)
Diff = .71, Diff = .65, Diff = .71

Significant Findings: 2. Service Time
- Maintain a total chat transaction time of eight minutes or less
- Average = 16.0 minutes (n = 331)
- About 7 minutes longer than Richardson's (2002) mean of 8.9 minutes (n = 20,000)
- However, similar to six face-to-face studies with mean service times ranging from 10 to 20 minutes

Significant Findings: 2. Service Time
Service Time of Transactions
Quartile (min.)      Accuracy Mean    Sig. of Diff. (p)
1st (0 - )
2nd (8.32 - )                         (1st & 2nd)
3rd (13.1 - )                         (1st & 3rd)
4th                                   (1st & 4th)
Diff = .78, Diff = .70, Diff = .80
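
The quartile comparison behind the table above can be reproduced in outline with pandas and scipy. This is a sketch under assumptions: the file and column names are hypothetical, and pairwise t-tests stand in here for the study's ANOVA-based comparisons.

import pandas as pd
from scipy import stats

# Hypothetical coded sample with per-transaction service time and accuracy score
sample = pd.read_csv("coded_transcripts_sample.csv")
sample["quartile"] = pd.qcut(sample["service_time_min"], q=4,
                             labels=["1st", "2nd", "3rd", "4th"])

baseline = sample.loc[sample["quartile"] == "1st", "accuracy"]
for q in ["2nd", "3rd", "4th"]:
    other = sample.loc[sample["quartile"] == q, "accuracy"]
    t_stat, p = stats.ttest_ind(baseline, other)
    diff = baseline.mean() - other.mean()
    print(f"1st vs. {q}: mean accuracy diff = {diff:.2f}, p = {p:.4f}")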

Significant Findings: 3. Keystrokes
- Keep total keystrokes per transaction to within six and a half lines of text (or 480 characters)
- An application for virtual reference software vendors (add a timer)
- Anything over 15 lines of text will decrease accuracy

Significant Findings: 3. Keystrokes
Keystroke Quartiles                  Accuracy Mean    Sig. of Diff. (p)
Librarian
  1st: 0 - 480 (6.5 lines)*          6.58
  4th: 1128 (15 lines)                                (1st & 4th)
Patron
  1st: 0 - 188 (2.5 lines)           6.65
  4th: 545 (7.5 lines)                                (1st & 4th)
Both Librarian & Patron
  1st: 0 - 690 (9 lines)             6.63
  4th: 1668 (22.5 lines)                              (1st & 4th)
*measured at 74 keystrokes per line of text
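
The line counts in the table follow from the footnote's conversion factor of 74 keystrokes per line of text; a tiny arithmetic illustration (function name is mine) follows.

CHARS_PER_LINE = 74  # conversion used in the table's footnote

def keystrokes_to_lines(keystrokes: int) -> float:
    """Convert a keystroke count to lines of text at 74 characters per line."""
    return round(keystrokes / CHARS_PER_LINE, 1)

print(keystrokes_to_lines(480))   # -> 6.5 lines (librarian, 1st-quartile boundary)
print(keystrokes_to_lines(1128))  # -> 15.2 lines (librarian, 4th-quartile boundary)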

Significant Findings: 4. Twice the Typing
- Expect to type twice as many characters as the patron
- This roughly 2:1 librarian-to-patron ratio appeared across all four quartile segments

Significant Findings: 5. Open-ended Questions
- Be careful about beginning the question negotiation segment of the reference interview with an open question unless the nature of the patron's question explicitly calls for one
Frequency of Open-ended Questions (Category / Frequency / Percent):
- Present
- Absent (but should have been asked)
- Not Applicable
- Ambiguous

Significant Findings: 5. Open-ended Questions
Open-ended Questions (Category / Accuracy Mean / Sig. of Diff. (p)):
3. Not Applicable: 6.72
1. Present: (3 & 1)

Significant Findings: 6. Closed-ended Questions
- Ask closed or clarifying questions when appropriate
Frequency of Closed-ended and/or Clarifying Questions (Category / Frequency / Percent):
- Present
- Absent
- Not Applicable
- Ambiguous: 44 (13.2%)

Significant Findings: 6. Closed-ended Questions
Closed and/or Clarifying Questions (Category / Accuracy Mean / Sig. of Diff. (p)):
3. Not Applicable
2. Absent: (3 & 2, not sig.)
With ambiguous cases filtered out:
3. Not Applicable
2. Absent: (3 & 2)

Significant Findings: 7. Follow-up Question
- At the end of the reference transaction, ask, "Does this completely answer your question?"
Frequency of the Librarian Asking If the Question Had Been Answered Completely (Category / Frequency / Percent):
- Present
- Absent
- Not Applicable
- Ambiguous

Significant Findings: 8. Question Difficulty
Question Difficulty (Criteria / Point / Frequency / % / Cum. %):
- Low
- Medium
- High

Significant Findings: 8. Question Difficulty
Question Difficulty and Accuracy (reporting only significant differences)
Compared difficulty points, Sig. of Diff. (p):
Low: (1.0 & 2.0), (1.0 & 2.5)
Medium: (1.0 & 3.5), (1.0 & 4.0), (1.0 & 4.5), (1.0 & 5.0), (1.0 & 5.5)
High: (1.0 & 6.0)
Low: (1.5 & 2.5)
Medium: (1.5 & 3.5), (1.5 & 4.0), (1.5 & 4.5), (1.5 & 5.5)
High: (1.5 & 6.0)

6. Conclusions
- Virtual reference has lacked a statistically sound conceptual model, built from empirical studies, to guide the library profession toward improving the reference interview and to inform best practices in professional training and assessment.
- This study addresses that knowledge void by uncovering several statistical relationships between nine behavioral factors and an accurate answer in the reference interview.
- It is hoped that the suggested eight-point rubric and other results of this project can be a catalyst for practical application, improving the practice of the global community of professionals and stakeholders in the field of library and information studies.

7. Questions & Answers