Effect of User Sessions on the Heuristic Usability Method

Jehad Alqurni, Roobaea Alroobaea, Mohammed Alqahtani
Copyright: © 2018 |Pages: 20
DOI: 10.4018/IJOSSP.2018010104

Abstract

Heuristic evaluation (HE) is a widely used method for assessing software systems. Several studies have sought to improve the effectiveness of HE by developing its heuristics and procedures. However, few studies have involved the end-user, and to the best of the authors' knowledge, no HE studies involving end-users with non-expert evaluators have been reported. Therefore, the aim of this study is to investigate the impact of end-users on the results obtained by a non-expert evaluator within the HE process and, through that, to examine the number of usability problems identified and their severity. This article proposes introducing two sessions within the HE process: a user exploration session (UES-HE) and a user review session (URS-HE). The outcomes are compared with two solid benchmarks in the usability-engineering field: the traditional HE and usability testing (UT) methods. The findings show that the end-user has a significant impact on non-expert evaluator results in both sessions. The UES-HE method outperformed all other usability evaluation methods (UEMs) in the number of usability problems identified, and it tended to identify more major, minor, and cosmetic problems than the other methods.

1. Introduction

A revolution in technologies has led to a significant proliferation of system products and has thus increased the demand for system product development. One of the most popular types of system products is web-based systems (Sova and Nielsen, 2003), which play a significant role in enabling private or public organizations to provide information and services to end-users (Harrison and Petrie, 2007; Alqurni and Pooley, 2016). An end-user is anyone who can use the target system and interact with its interface (ISO 9241-11, 1998; Alqurni and Pooley, 2016). The user interface of web-based systems is the mediator of the interaction between the end-user and the website. The usability of the user interface has therefore attracted increasing interest, as the number of users grows every year. Quantitatively, the number of users of web-based systems grew dramatically from 1,971 million (28.8% of the world population) in 2010 to 3,675 million (50.1% of the world population) in 2016 (Group, 2016).

Chen (2012) stated that the success or failure of a product is significantly affected by its usability. Nielsen (2001) also reported that poor website usability is the reason behind the abandonment of 50% of sales websites. Commercial sites have also been shown to experience difficulties in the competitive environment due to poor usability (Osterbauer, Köhle, Grechenig, & Tscheligi, 2000). For example, nearly 39% of online buyers were found to have failed to complete their purchases online due to usage difficulties (S. Y. Chen & Macredie, 2005). This shows that usability, good or poor, plays a pivotal role in the success or failure of website products. Consequently, usability is considered one of the most important factors influencing the user interface of a web-based system, and it plays a significant role in ensuring user satisfaction. Therefore, several usability evaluation methods (UEMs) have been developed to measure the level of usability. The most common UEMs used for web-based systems are the usability testing (UT) and heuristic evaluation (HE) methods (Fernandez, Insfran, & Abrahão, 2011). Although the HE method is described as more affordable than the UT method, its results depend on the opinions of the evaluators. UT results, by contrast, are derived from end-users and thus reflect real problems, but the method is limited to the user tasks performed.

Several studies have attempted to improve the effectiveness of the HE method by examining its major factors, such as the lists of heuristics or evaluator expertise. Expertise is one of the most important factors contributing to the improvement of the HE method (Hwang & Salvendy, 2007), which can be applied by either expert or non-expert evaluators. Nielsen (1992) described the non-expert evaluator as one who lacks experience both in usability and in the system domain but who has a solid background in the computing field. In contrast, expert evaluators are described as those who have expertise in usability. Although the latter yield more accurate results (Nielsen, 1992), several studies have indicated that expert evaluators are difficult to find (Äijö & Mantere, 2001; Desurvire & Thomas, 1993; Nielsen, 1999; Paz, Paz, Villanueva, & Pow-Sang, 2015). In addition, Fernandez et al. (2011) stated that: “Although inspection methods are intended to be performed by expert evaluators, most of them were applied by novice evaluators such as Web designers or students.”
