LibQUAL+: Finding the Right Numbers
Jim Self, University of Virginia Library, Charlottesville VA USA
7th Northumbria Conference, Stellenbosch, South Africa, 13 August 2007

Welcome. In this presentation I will provide my take on what is most important in LibQUAL+—which numbers are worthy of attention. When you run LibQUAL+ you get a huge variety of numbers. Which should you pay attention to? Although I work with ARL as a Visiting Program Officer, the views expressed are strictly my own. They may, or may not, agree with the ‘official’ view of ARL—if there is such a thing.
Initial Examination
- Focus on the separate user categories: Faculty, Undergraduates, Graduate students
- Tally the number of responses: 100+ needed in each category

First we need to get an overview of the results for each of the three major constituencies. The ARL notebook arrives with the combined results of all categories presented at the front of the notebook. I do not think the combined numbers are useful; undergrads, grads, and faculty are all very different in what they need and want from a library. We need to see how well we meet the needs of each group. Combining the three groups into a single group is not a helpful exercise, so I would say skip that part of the notebook. The next step is to check the number of respondents in each category. The more, the better. You should certainly aim to have at least 100 in each category. If you have fewer, you can still use the data, but you really cannot do much in the way of drilling down and looking at sub-categories.
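If you work with the respondent-level data file in a scripting language rather than the notebook, the tally itself is trivial. Here is a minimal sketch in Python/pandas; the file name and the "user_group" column are hypothetical placeholders, not part of the standard LibQUAL+ deliverables.

```python
# Minimal sketch: count respondents per user group from respondent-level data.
import pandas as pd

df = pd.read_csv("libqual_responses.csv")    # hypothetical file name
counts = df["user_group"].value_counts()     # "user_group" is a hypothetical column name
print(counts)

# Flag any group that falls short of the 100-response rule of thumb
for group, n in counts.items():
    if n < 100:
        print(f"{group}: only {n} responses -- too few for much drilling down")
```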
UVa 2006: Responses per category
- Faculty: 219
- Undergraduates: 210
- Graduate students: 244
- Library staff: 74

We were reasonably satisfied with the responses at UVa. More would have been better, but we have enough to do a lot of analysis. Most libraries do not include library staff in the survey. We did it for a couple of reasons—so staff would know how the survey worked and what it was like, and also to see if library staff perceptions differed from those of our customers.
The 22 core questions
- Examine the notebooks
  - Scan results for each user category
  - Scan the summary charts by dimension
  - Rearrange the dimension charts
- Create thermometer graphs
  - Question by question
  - 22 bars for each user category
- Identify the red zones

The heart of LibQUAL+ is the set of 22 questions that each respondent answers. And answers three times—for each item a desired score and a minimum score are given, as well as a perceived score, the respondent’s rating of the actual service. This three-part answering scheme is what makes LibQUAL+ distinctive. A lot of respondents are confused by it, or annoyed by it. Response rates are low, and attrition rates are high, for LibQUAL+. But LibQUAL+ gives information that you cannot get in a standard survey.
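The three scores lead directly to the gap scores used throughout the rest of this presentation: the adequacy gap (perceived minus minimum) and the superiority gap (perceived minus desired). For those who work from the raw data file, here is a minimal pandas sketch of that arithmetic; the file name, the item list, and the column naming pattern are hypothetical placeholders.

```python
# Sketch of the three-score scheme, assuming each core question is stored as
# three columns per respondent: <item>_min, <item>_desired, <item>_perceived.
import pandas as pd

df = pd.read_csv("libqual_responses.csv")      # hypothetical file name
items = ["AS_2", "IC_1", "IC_2", "IC_8"]       # illustrative subset of the 22 items

rows = []
for item in items:
    rows.append({
        "item": item,
        "minimum":   df[f"{item}_min"].mean(),
        "desired":   df[f"{item}_desired"].mean(),
        "perceived": df[f"{item}_perceived"].mean(),
    })
scores = pd.DataFrame(rows).set_index("item")

# adequacy gap  = perceived - minimum  (negative means below the minimum, a "red zone")
# superiority gap = perceived - desired (positive means exceeding the desired level)
scores["adequacy_gap"] = scores["perceived"] - scores["minimum"]
scores["superiority_gap"] = scores["perceived"] - scores["desired"]
print(scores.round(2))
```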
These dimension charts (or thermometer graphs) appear in the notebooks downloaded from the LibQUAL+ site. They compare the scores for each dimension, for each user category. So you can see, for example, that faculty give information control much more importance than they do library as place. These are useful charts, worth scrutinizing.
This is basically the same chart as on the previous page, except that I made it, and it came out with different colors. The top of the blue bar is the desired score; the bottom of the bar is the minimum score. And the red dot represents the perceived score. This thermometer graph slices up the data differently than the notebook presentation: it compares the Affect of Service ratings for each category. You can see that library staff have higher expectations for service than do the external users. This is a very useful technique, allowing you to analyze and present the data in several different ways. How do you make a graph like this? Look at ‘Charting LibQUAL+™ Data’ by Jeff Stark of Texas A&M, revised 2004. It can be found on the LibQUAL+ site, or through Google.
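If you would rather script the chart than follow the guide, here is a rough Python/matplotlib sketch of the same idea: a bar running from the minimum score to the desired score, with a dot at the perceived score. The numbers are made up for illustration and are not the UVa results.

```python
# Illustrative "thermometer" chart: min-to-desired bar plus a perceived-score dot.
import matplotlib.pyplot as plt

groups    = ["Faculty", "Grads", "Undergrads", "Staff"]
minimum   = [6.4, 6.5, 6.0, 7.0]     # hypothetical scores on the 1-9 scale
desired   = [8.0, 8.1, 7.8, 8.6]
perceived = [7.5, 7.2, 7.4, 7.3]

fig, ax = plt.subplots()
heights = [d - m for d, m in zip(desired, minimum)]
ax.bar(groups, heights, bottom=minimum, width=0.4, color="steelblue",
       label="Minimum-to-desired range")
ax.plot(groups, perceived, "ro", label="Perceived")
ax.set_ylim(1, 9)
ax.set_ylabel("Score (1-9)")
ax.set_title("Affect of Service by user group (illustrative data)")
ax.legend()
plt.show()
```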
This is the same chart as before, except it compares information control.
The same comparison for library as place.
This takes the thermometer graph to a new level of detail—displaying the faculty scores for desired, perceived, and minimum for each of the 22 core questions. This graph presents the same data as the LibQUAL+ radar chart. However, I would assert that this presentation is much more intelligible, and much more informative, than the radar charts. A chart like this can be easily explained to an administrator or an outsider. Looking at this chart, we see that UVa faculty place a relatively high value on service (the right side of the chart), and believe the library is offering almost optimal service. They place an even higher value on information control, and believe the library is barely meeting the minimum, or falling short. Library as place is not important for faculty, and given those low expectations, the library is performing adequately.
Here is the same picture for grad students, who also have high expectations, and rather low perceived ratings, for information control.
Undergrads give library as place a comparatively higher rating, and do not report any shortfalls.
Library staff think the library is performing adequately except in questions related to information access. The score for IC-2 (website) is strikingly low.
This display shows how the four groups answered one specific question, AS-2: Employees who are consistently courteous. Faculty gave it a relatively high ‘desired’ score, but said that the library was actually performing slightly above the desired level. Graduates and undergrads gave good scores to the library. Library staff have much higher desired and minimum scores for this question, and said the library was barely above the minimum. At first glance, it looks like courtesy is more important to the library staff than it is to the actual customers. But it is important to remember that ‘courtesy’ is something that front-line staff really own. They may not have much influence with the collections budget, or hours of operation, or the furnishings, but they decide, every day, how the customers are treated. The high standards of library staff regarding courtesy may well translate into the high ratings from our customers.
Here is a similar display for journal collections. Undergrads and library staff say the library is doing ok. But faculty and grads have higher minimum and desired scores, and say the library is not meeting the minimum.
Red Zones at UVa
- Journal collections: Faculty, Grad students
- Website: Grad students
- Remote access: Grad students

An important part of analyzing LibQUAL+ is to identify and investigate those questions that receive a negative score (a negative adequacy gap) from the actual customers. At UVa there were three questions in the ‘red zone.’
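If you have the respondent-level data, flagging red zones programmatically is straightforward. Here is a rough pandas sketch; the file name, the column naming pattern, and the (subset) item list are hypothetical placeholders.

```python
# Sketch of finding "red zones": questions where a user group's mean perceived
# score falls below its mean minimum score (a negative adequacy gap).
import pandas as pd

df = pd.read_csv("libqual_responses.csv")       # hypothetical file name
items = ["IC_1", "IC_2", "IC_8", "AS_2"]        # illustrative subset of the 22 items

for group, sub in df.groupby("user_group"):     # "user_group" is a hypothetical column
    for item in items:
        gap = sub[f"{item}_perceived"].mean() - sub[f"{item}_min"].mean()
        if gap < 0:
            print(f"RED ZONE  {group:12s} {item}: adequacy gap {gap:.2f}")
```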
Here is a display (the thermometer graphs) of the red zones.
Drilling into the data and making internal comparisons
- By academic status
- By discipline
- By home library

At UVa we are trying to find out more about certain issues, such as the reported journal shortfalls. We are drilling down into the data to see which academic units are dissatisfied. For other questions it may be more productive to subdivide the data by student or faculty status, or by the library used most frequently.
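A drill-down like the one on the next slide can be produced with a simple group-by. Here is a sketch for the journals question (IC-8); the file name and the "discipline" and "user_group" columns are hypothetical placeholders.

```python
# Sketch of drilling down: mean adequacy gap for IC-8 (journals) broken out
# by academic discipline and user group.
import pandas as pd

df = pd.read_csv("libqual_responses.csv")        # hypothetical file name
df["IC_8_gap"] = df["IC_8_perceived"] - df["IC_8_min"]

breakdown = (df.groupby(["discipline", "user_group"])["IC_8_gap"]
               .agg(["mean", "count"])
               .round(2))
print(breakdown)    # negative means indicate units reporting a journals shortfall
```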
This is the breakdown by academic unit of IC-8, the journals question. We can see shortfalls in architecture faculty and graduate students, engineering faculty and grads, humanities faculty and grads, and science/math grads.
This is a breakdown on the courtesy question. We can surmise that both grads and undergrads are treated courteously at all libraries. However, there are some differences in reported treatment.
Among the 22 core questions, the most desired are:
Faculty
- Journal collections: 8.60
- Web site: 8.49
Grad Students
- Journal collections: 8.61
- Remote access: 8.53
Undergrads
- Modern equipment: 8.35
- Comfortable and inviting location: 8.29
- Space that inspires study and learning: 8.29

Another topic worth exploring: what does each group value? Faculty and grads are similar: they give the highest desired score to journal collections. Undergrads are different—they want good equipment, and a good location. LibQUAL+ is a reminder to us of just how important journals are to faculty and grad students.
Among the 22 core questions, the least desired are:
Faculty
- Community space for group learning: 6.56
- Quiet space for individual activities: 7.03
Grad Students
- Community space for group learning: 6.86
- Giving users individual attention: 7.27
Undergrads
- Giving users individual attention: 7.06
- Employees who instill confidence: 7.30

What is not important? Faculty do not care about the place. Students, both grads and undergrads, do not want individual attention.
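Rankings like the two lists above come straight from the mean desired scores within each group. Here is a rough sketch for one group; the file name, the column naming pattern, and the handful of items shown are hypothetical placeholders for a loop over all 22 core questions.

```python
# Sketch of ranking items by mean desired score for one user group.
import pandas as pd

df = pd.read_csv("libqual_responses.csv")         # hypothetical file name
items = ["IC_8", "IC_2", "IC_1", "AS_2", "LP_1"]  # illustrative subset

faculty = df[df["user_group"] == "Faculty"]       # "user_group" is a hypothetical column
desired_means = pd.Series(
    {item: faculty[f"{item}_desired"].mean() for item in items}
).sort_values(ascending=False)

print("Most desired:\n",  desired_means.head(2).round(2))
print("Least desired:\n", desired_means.tail(2).round(2))
```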
Benchmarking with Peers: General Satisfaction Questions
- In general, I am satisfied with the way I am treated at the library.
- In general, I am satisfied with library support for my learning, research, and/or teaching.
- How would you rate the overall quality of the service provided by the library?

LibQUAL+ has three questions that allow comparison with other libraries. They are scored on a 1-9 scale; there are no separate scores for desired and minimum.
At UVa we compared our overall satisfaction results with those of the other ARL libraries that did LibQUAL+ in 2006, and we were happy with the results.
Jim Self
self@virginia.edu
http://www.lib.virginia.edu/mis/
Thank you! That’s all folks!