Welcome to the Sheridan Libraries' Usability Services web site. Usability Services is part of the Library Digital Programs unit. Researchers, customers, and staff interested in our usability evaluation efforts can review these pages for information about these services.
What is usability?
The goal of usability evaluation is to determine what can be done to make an interface efficient, satisfying, and easy to use, to learn, and to remember. Usability evaluation involves selecting some of the various methods designed to glean this information and applying them iteratively, from the early stages of a system's development through its active use. These methods may include surveys, focus groups, scenario-based think-aloud tests, contextual inquiry, card sorting, link-naming, and heuristic evaluation. Drawing on its experience using a range of these methods to evaluate library interfaces and related web sites, Library Digital Programs also offers usability services for other academic interfaces.
Library Digital Programs offers usability services to the Krieger School of Arts and Sciences, the Whiting School of Engineering, and the School of Professional Studies in Business and Education. We have worked with the Engineering and Applied Science Programs for Professionals to hold a series of focus groups to inform their web site redesign process. We have conducted scenario-based think-aloud tests during the redesign of the Krieger School web site.
We have also conducted usability evaluations for other organizations. We worked with the Collaborative Digitization Program to evaluate the usability of Colorado's Historic Newspaper Collection. We made observations at a workshop for teachers, and we held scenario-based think-aloud tests with teachers and university students. We conducted scenario-based think-aloud tests for ARTstor, a digital image library project of the Andrew W. Mellon Foundation. We worked with Project Muse to conduct scenario-based think-aloud tests, link-naming, and a heuristic evaluation.
We have collaborated with other units within the Sheridan Libraries to evaluate the usability of various web sites. We have held scenario-based think-aloud tests for the library homepage and the library catalog interface. We have worked with Special Collections to conduct an online survey of the Roman de la Rose site, as well as to conduct focus groups and scenario-based think-aloud tests to evaluate the usability of the Sheet Music Consortium.
Usability Evaluation Methods
Online surveys

A questionnaire is posted online for a period of time to gather feedback from users or the potential audience of a system. Questions may focus on how respondents currently use the system and what functionality they would like the system to have in the future.
Focus groups

A focus group typically involves a moderator, a note-taker, and 6-10 participants. Guided by a set of questions, the moderator facilitates a discussion about the system, while the note-taker and perhaps a tape recorder keep track of the conversation. Topics may include: how the participants currently use the system, what other systems they use instead, and what they would like the system to be able to do in the future.
Scenario-based think-aloud tests
A scenario-based think-aloud test session involves a participant, a facilitator, and a note-taker. The facilitator presents a series of scenarios to the participant. The participant uses the system to complete the tasks presented in the scenarios while "thinking aloud," that is, while providing comments on what he or she is doing. The note-taker and the facilitator keep track of these comments as well as the participant's actions and the system's responses. Several test sessions are held in order to observe the experiences of different users.
Contextual inquiry

An observer watches the participant working with the system in the context of his or her typical work environment. The observer may ask some questions at the end of the session, but the most important aspect is observation of real use of the system in the work environment.
Card sorting

A facilitator presents a set of cards to the participant. Each card contains a brief description of one page in the system. The participant sorts the cards into groups and labels each group. The facilitator compiles the results from several participants and conducts a cluster analysis in order to see which cards tend to be grouped together most frequently. This information is applied to the organization of pages and links.
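The compilation step described above can be sketched in a few lines of code. This is a minimal illustration, not our actual analysis tooling: the card names and participant groupings are hypothetical, and it computes only the pairwise co-occurrence counts that a fuller cluster analysis would take as input.

```python
from itertools import combinations
from collections import Counter

# Hypothetical results from three card-sort participants; each participant's
# sort is a list of groups, and each group is a set of card names.
participants = [
    [{"Hours", "Directions"}, {"Catalog", "Databases"}],
    [{"Hours", "Directions", "Contact"}, {"Catalog", "Databases"}],
    [{"Hours", "Contact"}, {"Directions"}, {"Catalog", "Databases"}],
]

def cooccurrence(sorts):
    """Count how often each pair of cards is placed in the same group."""
    counts = Counter()
    for groups in sorts:
        for group in groups:
            # Sort card names so each pair has one canonical key.
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

counts = cooccurrence(participants)

# Pairs grouped together most often suggest pages whose links belong together.
for pair, n in counts.most_common():
    print(pair, n)
```

Here "Catalog" and "Databases" are co-grouped by all three participants, so the analysis would suggest placing those pages together; the co-occurrence matrix built this way is what a hierarchical clustering routine would then consume.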
Link-naming

This is a two-stage method. In the first stage, the facilitator presents a set of page names to the participant and asks what he or she would expect to see if he or she clicked on links by those names. In the second stage, the facilitator presents descriptions of the pages or the pages themselves and asks what the participant would call the links to those pages. The facilitator can recommend new link names for the terms that were frequently misunderstood or renamed by participants.
Heuristic evaluation

In a heuristic evaluation, a usability specialist inspects a web site to determine if it meets general guidelines for usability and accessibility, such as consistency in navigation, clarity in language, and flexibility in the pace of interaction.