User Interface Evaluation
The last remaining part in the cycle of interactive UI/software development is the evaluation stage. Even if the developers have
strived to adhere to various HCI principles, guidelines, and rules,
and have applied the latest toolkits and implementation methodologies, the resulting UI or software is most probably not problem-free.
Frequently, careful consideration of interaction and interface design
may not even have been carried out in the first place. Aside from the
fact that there may be things the developer failed to foresee or
consider, the overall development process was meant to be a gradual
refinement process to begin with, in which each refinement stage
builds on the evaluation results of the previous round. In this
chapter, we will present several methods and examples of evaluation
for user interfaces.
8.1 Evaluation Criteria
When evaluating the interaction model and interface, there are broadly
two criteria: one is usability and the other is user experience (UX).
Simply put, usability refers to the ease of use and learnability of the
user interface (we come back to UX later in this section) [1]. Usability
can be measured in two ways: quantitatively or qualitatively.
Quantitative assessment often involves task-performance measurements. That is, we assume that an interface is “easy to use and learn”
(good usability) if the subject (or a reasonable pool of subjects) is able
to show some (absolute) minimum user performance on typical application tasks. The assessment of a given new interface is better made in
a comparative fashion against some nominal or conventional interface
(in terms of relative performance edge). Popular choices of such performance measures are task completion time, the amount of the task
completed in unit time (e.g., score), and task error rate. For example, suppose
we would like to test a new motion-based interface for a smartphone
game. We could have a pool of subjects play the game using both
the conventional touch-based interface and the newly proposed
motion-based one, then compare the scores to assess the comparative effectiveness of the new interface. The underlying assumption
is that task performance is closely correlated with usability (ease of
use and learnability). However, such an assumption is quite arguable.
In other words, task-performance measures, while quantitative, reveal
only the aspect of efficiency (or merely the aspect of ease of use) and
not necessarily the entirety of usability. The aspect of learnability
should be, and can be, assessed more explicitly by measuring the time
and effort (e.g., memory load) required for users to learn the interface. The
problem is that it is difficult to gather a homogeneous pool of subjects with
similar backgrounds (in order to make the evaluation fair). Measuring
learnability is generally likely to introduce many more biasing
factors, such as differences in educational, experiential, or cultural
background, age, gender, and so on. Finally, quantitative measurements in
practice cannot be applied to all the possible tasks for a given application and interface; usually only a very few representative tasks are chosen
for evaluation. This sometimes makes the evaluation only partial.
To complement the shortcomings of quantitative evaluation, qualitative evaluations are often conducted together with the quantitative analysis. In most cases, a qualitative evaluation amounts to conducting a usability survey, posing usability-related questions to a pool
of subjects after having them experience the interface. A usability survey often includes questions involving the ease of use, ease of learning,
fatigue, simple preference, and other questions specific to the given
interface. The NASA TLX (Task Load Index, Figure 8.1) and the IBM
Usability Questionnaire (Figure 8.2) are examples of often-used
semi-standard questionnaires for this purpose [2, 3, 4].
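To make the survey idea concrete, the sketch below scores one respondent's NASA TLX answers using the simplified unweighted ("raw TLX") variant, which averages the six standard subscale ratings; the ratings themselves are hypothetical, and the full TLX additionally weights subscales via pairwise comparisons.

```python
# Sketch: scoring a NASA TLX response with the unweighted "raw TLX" variant.
# The six subscales are the standard TLX dimensions (rated 0-100);
# the ratings below are hypothetical example data.
from statistics import mean

tlx_response = {
    "mental demand":   55,
    "physical demand": 20,
    "temporal demand": 60,
    "performance":     35,  # lower = better perceived performance
    "effort":          50,
    "frustration":     40,
}

# Raw TLX: the simple mean of the six subscale ratings.
raw_tlx = mean(tlx_response.values())
print(f"raw TLX workload = {raw_tlx:.1f}")
```

Averaged over a pool of subjects, such scores give a comparable workload number per interface condition, complementing the task-performance measures above.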
User experience (UX) is the other important aspect of interface
evaluation. There is no precise definition of UX. It is generally
accepted that the notion of user experience is “total” in the sense
that it is not just about the interface, but about the
whole product/application, and it even extends to the product family
(such as Apple® products or MS Office). It is also deeply related
to the user’s emotions and perceptions that result from the use or
anticipated use of the application (through the given interface) [4].
