A cost-effective usability evaluation progression for novel interactive systems

D. Hix, J.L. Gabbard, J.E. Swan, M.A. Livingston, T.H. Hollerer, S.J. Julier, Y. Baillot, D. Brown
Proceedings of the 37th Annual Hawaii International Conference on System Sciences, 2004.
For more than two decades, through our work in human-computer interaction and usability engineering, we have pursued the goals of developing, applying, and extending methods for improving the usability of interactive software applications. In particular, our work has focused on high-impact, cost-effective techniques for evaluating usability. This paper reports on user interface design and evaluation for a mobile, outdoor, augmented reality (AR) system. This novel system, called the Battlefield Augmented Reality System (BARS), supports information gathering for situational awareness in an urban warfighting setting. We have systematically applied a cost-effective progression of usability engineering activities from the very beginning of BARS development. To our knowledge, this is the first time usability engineering has been extensively and systematically incorporated into the research and development process of a real-world AR system. In this paper, we discuss how we first applied expert evaluation to BARS user interface development. This type of evaluation employs usability experts, not domain experts, to assess and iteratively redesign an evolving user interface. Our BARS team performed six cycles of structured expert evaluations on a series of visual user interface mockups representing occluded (non-visible) objects. We also discuss how the results of our expert evaluations informed subsequent user-based statistical evaluations and formative evaluations. User-based statistical evaluations employ users to help determine empirically validated design options for specific user interface components and features (in this case, occluded objects). Formative evaluation employs representative users of the evolving application to perform carefully constructed tasks, while evaluators collect both qualitative and quantitative data on user performance. Results from all these types of studies inform the selection of critical factors for more costly, comparative summative evaluations. Finally, we discuss how and why this sequence of evaluation types is cost-effective.
doi:10.1109/hicss.2004.1265653 dblp:conf/hicss/HixGSLHJBB04