Using the READ Scale (Reference Effort Assessment Data): Capturing Qualitative Statistics for Meaningful Reference Assessment

1 Using the READ Scale (Reference Effort Assessment Data) Capturing Qualitative Statistics for Meaningful Reference Assessment

2 Introduction to The READ Scale (Reference Effort Assessment Data): Qualitative Statistics for Academic Reference Services
Bella Karr Gerlich and G. Lynn Berard

3 A 2002 reference services and assessment survey conducted by ARL hoped to reveal current best practices, but instead, “revealed a situation in flux”. ARL SPEC Kit 268, Reference Services & Assessment, 2002

4 Traditional Reference Statistics
Categories – typical examples:
–“Reference” question
–“Directional” question
–“Technical” question
Approach type recorded – examples:
–“Walk-up, or In-Person”
–“Off-desk”
–“Phone”
–“Email”

5 Traditional Reference Statistics: / (hash mark)

6 Traditional Reference Statistics Hash marks

7 The study reported a “general lack of confidence in current data collection techniques.” It remarked on declining transaction counts, yet librarians reported feeling ‘busier than ever’ and observed that the data collected ‘does not accurately reflect their own level of activity’. ARL SPEC Kit 268, Reference Services & Assessment, 2002

8 It was with sentiments similar to those expressed in the ARL study that the READ Scale was created and initially tested at the Carnegie Mellon University Libraries in 2003 - 2004.

9 The READ Scale (Reference Effort Assessment Data)
A six-point (1 - 6) sliding scale that asks librarians to assign a number after each reference transaction based on effort / knowledge / skill / teachable moment, instead of recording a hash mark.
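The scale’s basic mechanic, assigning a 1-6 rating per transaction rather than adding an undifferentiated hash mark, can be sketched as a tally keyed by level. This is a minimal illustration; the function and variable names are ours, not part of the study:

```python
from collections import Counter

# Tally of reference transactions keyed by READ level
# (1 = least effort ... 6 = most effort / expertise),
# replacing a single undifferentiated hash-mark count.
read_tally = Counter()

def record_transaction(read_level: int) -> None:
    """Record one reference transaction at the given READ level (1-6)."""
    if not 1 <= read_level <= 6:
        raise ValueError("READ levels run from 1 to 6")
    read_tally[read_level] += 1

# A quick directional question vs. an in-depth research consultation:
record_transaction(1)
record_transaction(4)
print(dict(read_tally))  # {1: 1, 4: 1}
```

Unlike hash marks, the tally preserves the distribution of effort across levels, which is what the data comparisons later in the deck are built on.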

10 Results from the pilot study at Carnegie Mellon were disseminated at an ALA poster session, and institutional grants received (2006) enabled the expansion of the study from nine (9) to fifteen (15) academic libraries in spring of 2007.

11 14 Institutions, 12 States

12 Study Components
–IRB / consent forms
–Timeline (3-week and/or semester-long)
–Pre-test / local calibration
–Blog
–Online survey

13 Study Components - IRB
–IRB (Institutional Review Board) approval
–Done at GCSU and Carnegie Mellon; done at participating institutions if required
–Consent forms delivered electronically, signed, and returned with the data at the end of the 3-week study period

14 Study Components - Timeline
–3-week and/or full-semester participation
–Feb. 2 - Feb. 24, 2007: all 14 institutions
–Full semester: 7 institutions
–February dates were selected to give institutions time to test and to minimize conflicts with spring break, holidays, etc.

15 Study Components – Pre-test, Local Calibration
–Sample questions created to choose from
–Institutions encouraged to include questions typical of their home institution (i.e., collection-specific)
–On-site coordinator distributed the pre-test locally and calibrated results, creating an ‘example key’ for participants
–Participants asked to record the time spent on each question during the test phase

16 Study Components - Pre-test, Local Calibration
–The pre-test allowed for study-wide calibration: test questions, responses, times, and READ Scale category assignments were, for the most part, the same across institutions
–Institutions used their own recording sheets

17 Study Components - Blog
–A blog was set up for participant questions
–Only one question was received during the study
–Online survey responses suggest the READ Scale was easy to apply, which could explain why the blog was not utilized

18 READ Scale - Data
–Comparisons per service point (READ Scale)
–Comparisons off-desk (READ Scale)
–Approach type, service points
–Approach type, off-desk
–Time

19 READ Scale - Data
Total transactions submitted, 3-week and semester-long studies
14 institutions, 24 service points, 170 participants

READ Scale          1      2      3      4     5     6   Totals
Service points   9497   5622   3085    926   303    68    19501
Off desk          658    635    565    295   117    53     2323
TOTALS          10155   6257   3650   1221   420   121    21824

20 Three Week Data, Service Points

21 Three Week Data, Off-Desk

22 Full Semester Participants Data, Service Points

23 Full Semester Participants Data, Off-Desk


26 Average time (in minutes) per READ level and time totals

READ Scale              1      2      3      4     5     6    Total
Avg. time (minutes)     1      5      7     15    90    90
Service points       9497   5622   3085    926   303    68    19501
Off desk              658    635    565    295   117    53     2323
Totals              10155   6257   3650   1221   420   121    21824
Hours                 169    521    426    305   630   181     2232
Days (24 hours)         7     22     18     12    26     8       93
Days (8-hour day)      21     65     53     38    78    22      277
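The time rows in the table above follow by simple arithmetic from the per-level counts and average minutes. A quick check (the level-6 average is inferred from the hours row: 181 hours over 121 questions is roughly 90 minutes each):

```python
# Per-level transaction totals and average minutes per transaction,
# as read off the slide (level-6 average inferred, not stated).
counts = {1: 10155, 2: 6257, 3: 3650, 4: 1221, 5: 420, 6: 121}
avg_minutes = {1: 1, 2: 5, 3: 7, 4: 15, 5: 90, 6: 90}

# hours = transactions * average minutes / 60
hours = {lvl: counts[lvl] * avg_minutes[lvl] / 60 for lvl in counts}
total_hours = sum(hours.values())

print(hours[5])                  # 630.0
print(round(total_hours / 24))   # 93  (days of around-the-clock service)
```

Summing the unrounded hours gives about 2233, a hair above the slide’s 2232, which is the sum of the per-level values rounded first; the day figures match either way.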

27 Study Components - Online Survey
–Online survey sent to all participants at the conclusion of the three-week study period
–Response rate was high: 102 responses out of 170, or 60%

28 Survey Results - Overall
–No difficulty using the Scale; easy to apply
–Ranked perception of added value to reference statistics as ‘high’
–Staff comfortable with rating their own efforts
–68% would recommend as is; 20% with modifications
–50% would adopt as is; an additional 31% with modifications
–Low percentage of changes in approach (10%) reinforces ease of use / local adaptability

29 READ Scale - What Works
–Local approach to using the READ Scale
–Pre-testing / common questions
–Easy to use
–Adds value to data gathering
–Adds value to work / satisfaction
–Records previously unrecorded effort / knowledge / skills (service point and off-desk)

30 The READ Scale at Lawrence University Gretchen Revie Reference Librarian and Instruction Coordinator Appleton, Wisconsin

31 Lawrence University
–Private undergraduate college of the liberal arts and sciences with a conservatory of music
–1,450 students, 98% live on campus
–170 FTE faculty, 97% with PhD or terminal degree
–Calendar of three 10-week terms, no summer session

32 Seeley G. Mudd Library
Library staff: 15.5 FTE (8 MLS, 9 other); approximately 50 student employees
Collections:
–400,000 book volumes
–1,800 periodical subscriptions
–20,000 audio-visual items
–14,000 musical scores

33 In the Beginning was the Stroke
And the stroke was good...
–Quick!
–Cheap!
But limited:
–Inconsistent
–Didn’t reflect effort
–Lots of time required to tabulate
–Didn’t tell us anything about the questions

34 It’s not what you know...
–November 2006: we were contacted by Bella Gerlich, friend and former colleague of our Music Librarian, Antoinette Powell, about being part of the READ study
–This was a timely coincidence:
 –Lawrence’s upcoming NCA visit
 –Reference zeitgeist

35 First step: calibrating the scale
–Customized the questions
–Each librarian answered the questions, recording the sources and process used and the amount of time spent, then assigned a rating from the READ scale
–Librarians met and discussed our answers, our process, and our ratings
–This also proved to be a very useful exercise in staff development

36 The study: Feb. 2 - 24, 2007
–Used a paper form (just like our old form, only bigger)
–Placed a paper copy of the scale at the reference desk on the same clipboard we used for the tally sheets
–Counted the number of digits as though they were strokes, to fit into our previous recording scheme

37 Immediately after the study
–We found the READ scale easy enough to adopt that we simply continued using it for the rest of the term, then the rest of the year
–Use of the scale helped us value, as well as evaluate, our work at the reference desk; we found we were answering many more complex questions than we had assumed

38 Follow-up: our adaptations
–Fall 2007: started using an Excel spreadsheet saved on shared file space; file names were included on our reference Moodle space
–Included room to record the content of the questions
–Spring 2008: included formulas in the spreadsheet to total as we go


40 Ongoing challenges
–“Ratings drift”: we still seem to underrate our questions
 –One response: include a copy of the scale as a tab in the spreadsheet
–Slight decline in total number of questions
 –Slightly fewer questions recorded with the READ scale, probably because we were double-counting complicated questions in the tickmark method
 –May also be due to an increase in the number of reference appointments

41 Future use of the READ scale
–Will look to see whether the level of questions fluctuates from term to term or over the course of a year
–May use to inform staffing decisions
–Helps provide evidence of reference as teaching
–Supports advocacy with faculty and administration

42 The READ Scale at Northern Michigan University Kevin McDonough Reference Librarian Marquette, Michigan

43 Basic Facts about NMU
–Medium-sized public university with undergraduate and Master’s programs
–8,600 undergrads, 725 grad students
–80% of students from Michigan; 55% from the Upper Peninsula of Michigan
–435 faculty, including 124 adjuncts

44 Basic Facts about Olson Library
Collection:
–650,000 monograph volumes
–95,000 serial volumes
–14,500 electronic serial volumes
Budget:
–$2 million overall; approximately 89% is labor costs
–$820,00 acquisition budget
Staff:
–25 staff members, including 10 librarians (5 in reference)
–Teach about 250 library instruction sessions per year

45 Why Revise Our Reference Stats?
–Wanted our statistics to provide meaningful data
–Wanted to record the statistics electronically
–Wanted a quick means of generating reports

46 Old Reference Form

47 READ Scale - Training
1. How can my dog become a therapy dog? Score: 2-3
2. How can I scan a microfilm copy of a New York Times article? Score: 2-3
3. Where is the media collection? Score: 1-2
4. Has influenza vaccine been linked to encephalopathy in adult humans? Score: 3-5
5. I need to find some contemporary criticisms of the play Fences by August Wilson, both the writing of the play and a production. Score: 2-3

48 READ Scale Training, cont.
6. I need the issue number for this citation: Le Goff, Jacques, “Ordres mendiants et urbanisation dans la France,” Annales: economies, societes, civilisations, vol. 25 (1970). Score: 3-4
7. I want to go to med school, but don’t know what kind of doctor I want to be… can you help me get some career path information? Score: 3
8. Based on national demand for iron ore in the late 1800s, how did economic growth in Marquette and similar communities compare? I need both primary and secondary resources, including statistics on economics, demographics, etc. Score: 4-6
9. I am writing a 10-page paper on drugs (steroids, growth hormones, etc.) in sports. I need articles and books. Where do I start? Score: 3-4
10. How do I print in the library? Score: 1-3


54 Total questions by means of delivery

55 Total questions by question level

56 Percentage of questions by question level

57 Percentage of total questions by level and time of day, Sunday - Friday

Hour    Levels 1 & 2   Levels 3-6
9am           80            20
10am          83            17
11am          85            15
12pm          83            17
1pm           74            26
2pm           69            31
3pm           77            23
4pm           79            21
5pm           66            34
6pm           72            28
7pm           70            30
8pm           75            25
9pm           77            23
10pm          57            43
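A breakdown like the one above is straightforward to derive once transactions are logged electronically. A minimal sketch, assuming a hypothetical log of (hour, READ level) pairs; the field layout is ours, not NMU’s actual form:

```python
from collections import defaultdict

# Hypothetical electronic log: one (hour, READ level) entry per transaction.
log = [("9am", 1), ("9am", 2), ("9am", 4), ("10am", 1), ("10am", 3)]

# Bucket each transaction into lower-level (1-2) vs. higher-level (3-6).
by_hour = defaultdict(lambda: {"1 & 2": 0, "3-6": 0})
for hour, level in log:
    bucket = "1 & 2" if level <= 2 else "3-6"
    by_hour[hour][bucket] += 1

for hour, buckets in by_hour.items():
    total = sum(buckets.values())
    low_pct = round(100 * buckets["1 & 2"] / total)
    print(hour, low_pct, 100 - low_pct)
# 9am 67 33
# 10am 50 50
```

The same grouping, keyed by day of week instead of hour, produces the day-by-day table on a later slide.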

58 Percentage of total questions by level and time of day

59 Total questions and sum of questions 1 & 2 by time of day

60 Percentage of question levels 1 & 2 by day of week and hour

Hour    Mon  Tue  Wed  Thu  Fri  Sat  Sun
9am      69   75   91  100   70   NA   NA
10am     80   61   91   89   95   NA   NA
11am     90   93   76   85   71   86   NA
12pm     83   95   75   91   93   47   60
1pm      92   71   53   92   77   42   63
2pm      80   69   73   66   79   38   61
3pm      86   76   79   69   87   67   64
4pm      84   91   81   70   86   61   68
5pm      57   76   62   83   40   53   67
6pm      68   63   89   92    0   17   71
7pm      58   65   95   83    0   NA   65
8pm      58   65   94   82    0   NA   94
9pm      57   77  100   75    0   NA   64
10pm     67   43    0    0    0   NA  100

61 So, What Have We Learned?
–Recording stats electronically is wonderful
–Being able to generate reports any time is convenient
–Knowing the times when we get the most lower-level questions is priceless!

