Joseph Dumas and Janice Redish analyze the benefits of combining various evaluation methods as complementary tools for usability testing within usability engineering.

Getting experts to review the design

Calling in an expert allows for a heuristic evaluation of a product’s features. The authors cite Nielsen and Molich (1990), who, based on statistical findings, recommend that an interface be evaluated by a small group of three to five evaluators, and who provide a set of guidelines for evaluating usability: use simple and natural language; speak the user’s language; minimize user memory load; be consistent; provide feedback; provide clearly marked exits; provide shortcuts; provide good error messages; and prevent errors. (p. 65)
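The guideline list above lends itself to a simple checklist. As a purely illustrative sketch (the data structure and function names here are assumptions, not part of the authors’ method), a moderator might tally the problems that independent evaluators log against each heuristic:

```python
# Illustrative only: recording heuristic-evaluation findings against
# Nielsen and Molich's (1990) guidelines. Names are assumptions.

HEURISTICS = [
    "Use simple and natural language",
    "Speak the user's language",
    "Minimize user memory load",
    "Be consistent",
    "Provide feedback",
    "Provide clearly marked exits",
    "Provide shortcuts",
    "Provide good error messages",
    "Prevent errors",
]

def tally_findings(findings):
    """Count problems logged per heuristic.

    `findings` is a list of (evaluator, heuristic, note) tuples.
    """
    counts = {h: 0 for h in HEURISTICS}
    for _evaluator, heuristic, _note in findings:
        if heuristic not in counts:
            raise ValueError(f"Unknown heuristic: {heuristic}")
        counts[heuristic] += 1
    return counts

# Example: three evaluators working independently, per the
# three-to-five-evaluator recommendation.
findings = [
    ("A", "Provide feedback", "No confirmation after save"),
    ("B", "Provide feedback", "Upload progress is invisible"),
    ("C", "Be consistent", "Two different labels for Delete"),
]
print(tally_findings(findings)["Provide feedback"])  # prints 2
```

Tallying per heuristic makes it easy to see which guidelines attract the most independent findings, which is one way to prioritize fixes.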

The advantage of heuristic evaluations is that they reveal local problems, whereas usability testing reveals global problems that users may encounter. The authors suggest using both methods to uncover the most usability problems.

Having peers or experts walk through the design

Walkthroughs are somewhat similar to cognitive walkthroughs: both involve a group of peers and usability experts analyzing the steps required to accomplish a task and finding ways to reduce that task’s time cost and complexity. To distinguish the two variations: walkthroughs help revisit a design concept and determine programming and layout alterations, whereas cognitive walkthroughs are usually conducted before any prototype is implemented and focus on the human factor in terms of user expectations and goals.

For Dumas and Redish, “walkthroughs are less effective at finding usability problems than other evaluation methods such as heuristic evaluations.” (p. 69)

Having users work with prototypes (static or interactive)

Prototypes usually come in two basic forms: static (paper-based) or interactive (software-based). Each has its advantages and disadvantages.

With static prototypes, one person plays the computer, acting out the interactivity of a finished product. This interplay between participant and paper helps moderators find problems of hierarchy and language interpretation: which features are available and how they are worded in the sketches.

Interactive prototypes allow participants to interact with a seemingly working interface. These prototypes facilitate the incorporation of user feedback, the exploration of several design concepts, and the evaluation of several iterations. (p. 72)

Prototypes can also be tested in three ways: horizontal (the surface of the interface, with little depth), vertical (a small number of paths implemented in depth), or scenario prototypes (selected tasks made fully functional).

Getting user edits on early versions of documentation

Conducting a usability test in the early stages of a design saves time and money in the long run for both designers and engineers, as changes can be made rapidly and less workload falls on design alterations and programming modifications. However, it is important to test with the intended end users in order to make the right decisions in due time.

Following Atlas (1981), Dumas and Redish define a user edit as involving a user, a specific task, a set of product instructions, and an observer. A variation of this method is the usability edit (Soderston, 1985), which involves a user, written material, a camcorder and/or observer, and a system (i.e., a computer). Here, users not only look at the material but also interact with the information while thinking aloud. Working with real text helps moderators evaluate the effectiveness and efficiency of the material with respect to errors of both “commission” and “omission.” (p. 77)

Comparing usability testing with other usability evaluation methods

Finally, it is suggested that you combine usability testing with other evaluation methods to complement the advantages of the different approaches and diminish the risk of leaving out local problems that can make your product inadequate for your end users.

After comparing different ways of combining methods, the authors conclude that: (p. 82)
– Usability testing uncovers more usability problems, more global problems, and more unique problems than other evaluation methods.
– Usability testing uncovers fewer local problems and takes more hours to conduct than other methods, but is cost-effective in the long run.
– Heuristic evaluations are better at uncovering usability problems than walkthroughs, are more effective when conducted by several experts working independently, and are better at uncovering local problems.

Again, they emphasize the need to conduct both usability tests and heuristic evaluations, as together they help uncover the most usability problems.
