Welcome to the Sheridan Libraries' Usability Research web site. Usability Research is part of the Digital Knowledge Center unit.
"Click here." You may see these words on many web sites, but that does not make it any easier to figure out what they mean. Each time this blue, underlined phrase attracts your attention, you have to read the words around it to know where this link will take you. Ubiquity does not guarantee usability. Familiar words and common interface elements contribute to the usability of a web site, but many other aspects are involved.
The goal of usability evaluation is to determine what can be done to make an interface efficient, satisfying, and easy to use, to learn, and to remember. Usability evaluation applies a variety of methods to gather this information iteratively, from the early stages of a web site's development through its active use. These methods may include surveys, focus groups, scenario-based think-aloud tests, contextual inquiry, card-sorting, link-naming, and heuristic evaluation.
At the Digital Knowledge Center, conducting a usability evaluation often entails inviting the Sheridan Libraries' "target users" to discuss their needs and goals in using the libraries' web resources or to participate in sessions in which library staff observe their use of a library web site. We seek participants who represent a cross-section of the university community: students, faculty, and staff in different fields of study and at different levels of familiarity with the libraries' resources.
In addition to evaluating the usability of the libraries' web resources, the Digital Knowledge Center conducts research on digital library usability. We seek to find the best methods for evaluating the usability of digital library resources. Our Usability Research Agenda discusses the issues and opportunities that arise in this area.
Usability Evaluation Methods
- Surveys
A questionnaire is posted online for some period of time to gather feedback from users or the potential audience of a system. Questions may focus on how they currently use the system and what functionality they would like the system to have in the future.
- Focus groups
A focus group typically involves a facilitator, a note-taker, and 6-10 participants. Guided by a set of questions, the facilitator moderates a discussion about the system, while the note-taker and perhaps a tape recorder keep track of the conversation. Topics may include: how the participants currently use the system, what other systems they use instead, and what they would like the system to be able to do in the future.
- Scenario-based think-aloud tests
A scenario-based think-aloud test session involves a participant, a facilitator, and a note-taker. The facilitator presents a series of scenarios to the participant. The participant uses the system to complete the tasks presented in the scenarios while "thinking aloud," that is, while providing comments on what he or she is doing. The note-taker and the facilitator keep track of these comments as well as the participant's actions and the system's responses. Several test sessions are held in order to observe the experiences of different users.
- Contextual inquiry
An observer watches the participant working with the system in the context of his or her typical work environment. The observer may ask some questions at the end of the session, but the most important aspect is observation of real use of the system in the work environment.
- Card-sorting
A facilitator presents a set of cards to the participant. Each card contains a brief description of one page in the system. The participant sorts the cards into groups and labels each group. The facilitator compiles the results from several participants and conducts a cluster analysis in order to see which cards tend to be grouped together most frequently. This information is applied to the organization of pages and links.
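The first step of that cluster analysis can be sketched in a few lines: tally how often each pair of cards lands in the same group across participants, then rank the pairs. This is a minimal illustration, not a full clustering method, and the card names and data below are invented for the example.

```python
from itertools import combinations
from collections import Counter

# Hypothetical card-sort results: each participant's sort is a list of
# groups, and each group is a set of card (page) names. These names and
# groupings are made up for illustration.
sorts = [
    [{"Catalog", "E-Journals", "Databases"}, {"Hours", "Directions"}, {"Ask a Librarian"}],
    [{"Catalog", "Databases"}, {"E-Journals", "Ask a Librarian"}, {"Hours", "Directions"}],
    [{"Catalog", "E-Journals", "Databases"}, {"Hours", "Directions", "Ask a Librarian"}],
]

def cooccurrence(sorts):
    """Count how often each pair of cards is placed in the same group."""
    counts = Counter()
    for groups in sorts:
        for group in groups:
            # Sort each group so every pair is counted under one canonical key.
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

counts = cooccurrence(sorts)
for (a, b), n in counts.most_common():
    print(f"{a} / {b}: grouped together by {n} of {len(sorts)} participants")
```

Pairs that nearly all participants group together (here, for instance, "Catalog" with "Databases") are strong candidates for living under the same section of the site; a fuller analysis would feed this co-occurrence matrix into hierarchical clustering.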
- Link-naming
This is a two-stage method. In the first stage, the facilitator presents a set of page names to the participant and asks what she would expect to see if she clicked on links by those names. In the second stage, the facilitator presents descriptions of the pages or the pages themselves and asks what the participant would call the links to those pages. The facilitator can recommend new link names for the terms that were frequently misunderstood or renamed by participants.
- Heuristic evaluation
In a heuristic evaluation, a usability specialist inspects a web site to determine if it meets general guidelines for usability and accessibility, such as consistency in navigation, clarity in language, and flexibility in the pace of interaction.
The Digital Knowledge Center applies a range of usability methods to our projects. For Gamera, we have conducted a heuristic evaluation of the documentation and we are conducting a survey of Gamera users. For SCALE, we have conducted a heuristic evaluation, a focus group, and an accessibility assessment. For the Digital Audio Archives Project (DAAP), we will evaluate the interfaces and tools that we will develop for audio digitization and search.