
Showing papers on "Usability goals published in 2014"


Journal ArticleDOI
TL;DR: The results show a close correlation between usability and credibility: e-government websites with high usability were perceived as more credible, and vice versa.

143 citations


Journal ArticleDOI
TL;DR: This paper proposes a compilation of heuristic evaluation checklists taken from the existing bibliography but readapted to new mobile interfaces, and shows the usefulness of the proposed checklist for avoiding usability gaps even with nontrained developers.
Abstract: The rapid evolution and adoption of mobile devices raise new usability challenges, given their limitations (in screen size, battery life, etc.) as well as the specific requirements of this new form of interaction. Traditional evaluation techniques need to be adapted in order for these requirements to be met. Heuristic evaluation (HE), an inspection method in which experts evaluate a real system or prototype, relies on checklists which are desktop-centred and do not adequately detect mobile-specific usability issues. In this paper, we propose a compilation of heuristic evaluation checklists taken from the existing bibliography but readapted to new mobile interfaces. Selecting and rearranging these heuristic guidelines offers a tool which works well not just for evaluation but also as a best-practices checklist. The result is a comprehensive checklist which is experimentally evaluated as a design tool. This experimental evaluation involved two software engineers without any specific knowledge about usability and a group of ten users, who compared the usability of a first prototype designed without our heuristics with that of a second one produced after applying the proposed checklist. The results of this experiment show the usefulness of the proposed checklist for avoiding usability gaps even with nontrained developers.

124 citations
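
A compiled heuristic checklist of the kind proposed above can be represented, at its simplest, as guideline items grouped under heuristics with a pass/fail mark per item. The Python sketch below uses invented example guidelines purely for illustration; it is not the checklist from the paper.

```python
# Minimal sketch of a mobile heuristic checklist as plain data, plus a
# pass-rate summary per heuristic. The guideline items are invented
# examples, not the checklist proposed in the paper above.
checklist = {
    "Visibility of system status": [
        ("Progress is shown for long operations", True),
        ("The current screen is identifiable at a glance", False),
    ],
    "Error prevention": [
        ("Destructive actions ask for confirmation", True),
        ("Touch targets are large enough to avoid slips", True),
    ],
}

for heuristic, items in checklist.items():
    passed = sum(1 for _, ok in items if ok)
    print(f"{heuristic}: {passed}/{len(items)} guidelines satisfied")
```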


Journal ArticleDOI
TL;DR: The study confirms that too many companies still either neglect usability and UX or do not properly consider them; suggestions are given on how these problems may be properly addressed, since their solution is the starting point for reducing the gap between research and practice of usability and UX.
Abstract: The efforts of addressing user experience (UX) in product development keep growing, as demonstrated by the proliferation of workshops and conferences bringing together academics and practitioners who aim at creating interactive software able to satisfy their users. This special issue focuses on the "Interplay between User Experience Evaluation and Software Development", stating that the gap between human-computer interaction and software engineering with regard to usability has somewhat been narrowed. Unfortunately, our experience shows that software development organizations perform few usability engineering activities or none at all. Several authors acknowledge that, in order to understand the reasons for the limited impact of usability engineering and UX methods, and to try to modify this situation, it is fundamental to thoroughly analyze current software development practices, involving practitioners and possibly working from inside the companies. This article contributes to this research line by reporting an experimental study conducted with software companies. The study has confirmed that still too many companies either neglect usability and UX, or do not properly consider them. Interesting problems emerged, and this article gives suggestions on how they may be properly addressed, since their solution is the starting point for reducing the gap between research and practice of usability and UX. It also provides further evidence on the value of the research method called Cooperative Method Development, based on the collaboration of researchers and practitioners in carrying out empirical research; it was used in one step of the study and proved instrumental in showing practitioners why to improve their development processes and how to do so.

117 citations


Journal ArticleDOI
TL;DR: Usability testing identified mClinic development flaws and needed software enhancements, and analysis of midwives' reactions to low-fidelity prototypes during the interview process supported the applicability of mClinic to midwives' work and identified the need for additional functionality.

97 citations


Proceedings ArticleDOI
09 Oct 2014
TL;DR: This paper quantifies the sample size required for the Domain Specific-to-context Inspection (DSI) method, which itself is developed through an adaptive framework, and shows that there is no certain number of participants for finding all usability problems; however, the rule of 16 ± 4 users gains much validity in user testing.
Abstract: The growth of the Internet and related technologies has enabled the development of a new breed of dynamic websites, applications and software products that are growing rapidly in use and that have had a great impact on many businesses. These technologies need to be continuously evaluated by usability evaluation methods (UEMs) to measure their efficiency and effectiveness, to assess user satisfaction, and ultimately to improve their quality. However, estimating the sample sizes for these methods has become the source of considerable debate at usability conferences. This paper aims to determine an appropriate sample size through empirical studies on the social network and educational domains by employing three types of UEM; it also examines further the impact of sample size on the findings of usability tests. Moreover, this paper quantifies the sample size required for the Domain Specific-to-context Inspection (DSI) method, which itself is developed through an adaptive framework. The results show that there is no certain number of participants for finding all usability problems; however, the rule of 16 ± 4 users gains much validity in user testing. The magic number of five evaluators fails to find 80% of problems in heuristic evaluation, whereas three evaluators are enough to find 91% of usability problems in the DSI method.

92 citations
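
Sample-size questions like the one studied above are commonly reasoned about with the classic problem-discovery model from the usability literature, in which the expected proportion of problems found by n independent evaluators is 1 - (1 - p)^n for a per-evaluator detection probability p. The sketch below is background material rather than the paper's own analysis, and the value of p is an assumption chosen only for illustration.

```python
# Classic problem-discovery model: expected fraction of usability problems
# found by n independent evaluators, each detecting a given problem with
# probability p. The value of p used here is an illustrative assumption,
# not a figure reported in the paper above.

def discovery_rate(n: int, p: float) -> float:
    """Expected proportion of problems found by n evaluators."""
    return 1.0 - (1.0 - p) ** n

def evaluators_needed(target: float, p: float) -> int:
    """Smallest n whose expected discovery rate reaches the target."""
    n = 1
    while discovery_rate(n, p) < target:
        n += 1
    return n

if __name__ == "__main__":
    p = 0.31  # assumed per-evaluator detection probability
    for n in (3, 5, 10, 16):
        print(f"{n:2d} evaluators -> {discovery_rate(n, p):.0%} of problems found")
    print("Evaluators needed to reach 80%:", evaluators_needed(0.80, p))
```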


Journal ArticleDOI
TL;DR: In this paper, a knowledge-based risk mapping tool for systematically assessing risk-related variables that may lead to cost overrun in international markets is proposed; it uses an ontology that relates risk and vulnerability to cost overrun and a novel risk-vulnerability assessment methodology.

87 citations


Journal ArticleDOI
TL;DR: A systematic research review of whether and how researchers have studied and considered usability when conducting computer-based concept map assessments concludes that the interplay between usability and test use training has mostly been neglected in current studies.
Abstract: The concept map is now widely accepted as an instrument for the assessment of conceptual knowledge and is increasingly being embedded into technology-based environments. Usability addresses how appropriate (for a particular use) or how user-friendly a computer-based assessment instrument is. As we know from human-computer interaction research, if the interface is not user-friendly, a computer-based assessment can result in decreased test performance and reduced validity. This means that the usability of the interface affects the assessment in such a way that if the test is not user-friendly, then the test taker will not be able to fully demonstrate his/her level of proficiency and will instead be scored according to his/her information and communication technology (ICT) literacy skills. The guidelines of the International Test Commission (2006) require usability testing for such instruments and suggest that design standards be implemented. However, we do not know whether computer-conducted concept map assessments fulfill these standards. The present paper addresses this aspect. We conducted a systematic research review to examine whether and how researchers have studied and considered usability when conducting computer-based concept map assessments. Only 24 out of 119 journal articles that assessed computer-based concept maps discussed the usability issue in some way. Nevertheless, our review brings to light the idea that the impact of usability on computer-based concept map assessments is an issue that has received insufficient attention. In addition, usability ensures a suitable interaction between test taker and test device; thus, the training effort required for test use can be reduced if a test's usability is straightforward. Our literature review, however, illustrates that the interplay between usability and test use training has mostly been neglected in current studies.

75 citations


Journal ArticleDOI
TL;DR: A narrative review of peer-reviewed articles published from 2009 to 2013 that tested the usability of a web- or mobile-delivered system/application designed to educate and support patients with diabetes found mixed, descriptive quantitative, and qualitative methods were used to assess usability.
Abstract: Consumer health technologies can educate patients about diabetes and support their self-management, yet usability evidence is rarely published even though it determines patient engagement, optimal benefit of any intervention, and an understanding of generalizability. Therefore, we conducted a narrative review of peer-reviewed articles published from 2009 to 2013 that tested the usability of a web- or mobile-delivered system/application designed to educate and support patients with diabetes. Overall, the 23 papers included in our review used mixed (n = 11), descriptive quantitative (n = 9), and qualitative methods (n = 3) to assess usability, such as documenting which features performed as intended and how patients rated their experiences. More sophisticated usability evaluations combined several complementary approaches to elucidate more aspects of functionality. Future work pertaining to the design and evaluation of technology-delivered diabetes education/support interventions should aim to standardize the usability testing processes and publish usability findings to inform interpretation of why an intervention succeeded or failed and for whom.

64 citations


Journal ArticleDOI
TL;DR: A statistical study is conducted to evaluate the usability and accessibility levels of three popular academic websites based on human (user) perception, and the results were observed to be consistent with those of the performance-based evaluation.

60 citations


Journal ArticleDOI
TL;DR: Evaluation of the usability of widely used laboratory and radiology information systems with the heuristic evaluation method revealed numerous problems; it is recommended that designers build systems that inhibit the initiation of erroneous actions and provide sufficient guidance to users.
Abstract: This study was conducted to evaluate the usability of widely used laboratory and radiology information systems. Three usability experts independently evaluated the user interfaces of the Laboratory and Radiology Information Systems using the heuristic evaluation method. They applied Nielsen's heuristics to identify and classify usability problems and Nielsen's severity ratings to judge their severity. Overall, 116 unique heuristic violations were identified as usability problems. In terms of severity, 67 % of problems were rated as major or catastrophic. Among the 10 heuristics, "consistency and standards" was violated most frequently. Moreover, the mean severity of problems concerning the "error prevention" and "help and documentation" heuristics was higher than that of the others. Despite the widespread use of these healthcare information systems, they suffer from usability problems. Improving the usability of systems by following existing design standards and principles from the early phases of the system development life cycle is recommended. In particular, it is recommended that designers build systems that inhibit the initiation of erroneous actions and provide sufficient guidance to users.

59 citations
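
The way results are reported above (violations grouped by heuristic, each with a severity rating on Nielsen's 0-4 scale) maps naturally onto a small tally. The records in the sketch below are illustrative placeholders, not the study's actual findings.

```python
# Tallying heuristic violations by heuristic and by severity, in the style
# of the evaluation described above. The records are placeholders; severity
# follows Nielsen's 0-4 scale (3 = major, 4 = catastrophic).
from collections import Counter

violations = [
    ("Consistency and standards", 3),
    ("Consistency and standards", 2),
    ("Error prevention", 4),
    ("Help and documentation", 3),
]

per_heuristic = Counter(heuristic for heuristic, _ in violations)
major_or_worse = sum(1 for _, severity in violations if severity >= 3)

print("Violations per heuristic:", dict(per_heuristic))
print(f"Share rated major or catastrophic: {major_or_worse / len(violations):.0%}")
```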


Proceedings ArticleDOI
26 Apr 2014
TL;DR: This paper describes how two new CAPTCHA schemes for Google that focus on maximizing usability are designed and tested, and how the resulting scheme is now an integral part of the production system and is served to millions of users.
Abstract: Websites present users with puzzles called CAPTCHAs to curb abuse caused by computer algorithms masquerading as people. While CAPTCHAs are generally effective at stopping abuse, they might impair website usability if they are not properly designed. In this paper we describe how we designed two new CAPTCHA schemes for Google that focus on maximizing usability. We began by running an evaluation on Amazon Mechanical Turk with over 27,000 respondents to test the usability of different feature combinations. Then we studied user preferences using Google's consumer survey infrastructure. Finally, drawing on the insights gleaned during those studies, we tested our new captcha schemes first on Mechanical Turk and then on a fraction of production traffic. The resulting scheme is now an integral part of our production system and is served to millions of users. Our scheme achieved a 95.3% human accuracy, a 6.7.

Journal ArticleDOI
TL;DR: Although the correlations are far from perfect, there are reliable and reasonably strong positive correlations between subjective usability measures and task success rates, for both the laboratory and field studies at both the individual and system level.
Abstract: This article examines the relationship between users’ subjective usability assessments, as measured using the System Usability Scale (SUS), and the ISO metric of effectiveness, using task success as the measure. The article reports the results of two studies designed to explore the relationship between SUS scores and user success rates for a variety of interfaces. The first study was a field study, where stereotypical usability assessments on a variety of products and services were performed. The second study was a well-controlled laboratory study where the level of success that users were able to achieve was controlled. For both studies, the relationship between SUS scores and their attendant performance were examined at both the individual level and the average system level. Although the correlations are far from perfect, there are reliable and reasonably strong positive correlations between subjective usability measures and task success rates, for both the laboratory and field studies at both the indiv...
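
The core analysis described above, relating per-participant SUS scores to task success rates, amounts to an ordinary Pearson correlation over paired measurements. The sketch below uses hypothetical paired data; it is not the authors' analysis code.

```python
# Correlating SUS scores with task success rates (Pearson's r).
# The paired values below are hypothetical placeholders, not study data.
from statistics import correlation  # available in Python 3.10+

sus_scores = [72.5, 85.0, 55.0, 90.0, 67.5, 80.0]     # one SUS score per participant
success_rates = [0.70, 0.90, 0.50, 1.00, 0.60, 0.80]  # proportion of tasks completed

r = correlation(sus_scores, success_rates)
print(f"Pearson r between SUS score and task success: {r:.2f}")
```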

Proceedings ArticleDOI
15 Jun 2014
TL;DR: A checklist to measure the usability of augmented reality applications in a practical manner was developed by adapting ISO 9241-11 and Nielsen's heuristics to an augmented reality context, together with criteria created by the authors.
Abstract: Augmented reality applications merge virtual content with the real world, with real-time interaction, and they have inherent characteristics, such as lighting conditions, the use of sensors, and the user's position. These applications are very different from conventional applications that use a mouse and keyboard, so they require a usability evaluation to verify whether they achieve their goals and satisfy users. We developed a checklist to measure the usability of augmented reality applications in a practical manner. This checklist was developed by adapting ISO 9241-11 and Nielsen's heuristics to an augmented reality context, together with criteria we created ourselves. Five experts evaluated two applications to test the checklist. In both cases, the evaluation clearly identified key problems in the application design. The experts also evaluated the checklist itself. Analyzing the results, we could determine that the checklist appears to be a viable approach for evaluating the usability of augmented reality applications.

Journal ArticleDOI
TL;DR: The interface designed according to usability principles was perceived to be more usable and inspired greater confidence among physicians than the guided navigation interface.

Journal ArticleDOI
TL;DR: In this article, the authors explore and study the aspects of usability related to eMaintenance solutions, aiming to expand the domain of eMaintenance by increasing the usefulness of the solutions.
Abstract: Purpose – The purpose of this paper is to explore and study the aspects of usability related to eMaintenance solutions. The study aims to expand the domain of eMaintenance by increasing the usefuln ...

Joseph L. Gabbard1
01 Jan 2014
TL;DR: It is postulated that there is no single method for VE usability engineering, and the chapter addresses how each of these methodologies supports focused, specialized design, measurement, management, and assessment techniques such as those presented in other chapters of this Handbook.
Abstract: To this end, we present several usability engineering methods, mostly adapted from GUI development, that have been successfully applied to VE development. These methods include user task analysis, expert guidelines-based evaluation (also sometimes called heuristic evaluation or usability inspection), and formative usability evaluation. Further, we postulate that, like GUI development, there is no single method for VE usability engineering, and we address how each of these methodologies supports focused, specialized design, measurement, management, and assessment techniques such as those presented in other chapters of this Handbook. We include our experiences with usability engineering of three different VEs: one was a desktop-based new interaction technique, one was a CAVE-based medical imaging system, and one was a Responsive Workbench-based navigation application. We also discuss summative evaluation, even though it is not a usability engineering method per se. We present summative evaluation because it is an important aspect of making comparative assessments of VEs from a user's perspective.

Journal ArticleDOI
TL;DR: An Intelligent Usability Evaluation (IUE) tool is proposed that automates the usability evaluation process by employing a Heuristic Evaluation technique in an intelligent manner through the adoption of several research-based AI methods.
Abstract: With the major advances of the Internet throughout the past couple of years, websites have come to play a central role in the modern marketing business program. However, simply owning a website is not enough for a business to prosper on the Web. Indeed, it is the level of usability of a website that determines whether a user stays or abandons it for a competing one. It is therefore crucial to understand the importance of usability on the Web, and consequently the need for its evaluation. Nonetheless, a number of obstacles prevent software organizations from successfully applying sound website usability evaluation strategies in practice. From this point of view, automating such evaluation is extremely beneficial: it not only assists designers in creating more usable websites, but also enhances Internet users' experience on the Web and increases their level of satisfaction. As a means of addressing this problem, an Intelligent Usability Evaluation (IUE) tool is proposed that automates the usability evaluation process by employing a heuristic evaluation technique in an intelligent manner through the adoption of several research-based AI methods. Experimental results show a high correlation between the tool and human annotators when identifying the considered usability violations.
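
The abstract describes the IUE tool only at a high level, so the sketch below is not its implementation; it merely illustrates the general idea of machine-checkable heuristic rules by scanning raw HTML for two simple, mechanically detectable issues (images without alt text, text inputs without a label hook). The rules and the HTML snippet are invented for illustration.

```python
# Highly simplified illustration of automated, rule-based usability checks
# over raw HTML. This is NOT the paper's AI-based IUE tool; it only hints
# at the idea of heuristics that can be checked mechanically.
from html.parser import HTMLParser

class SimpleUsabilityChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.violations.append("image without alt text")
        if tag == "input" and attrs.get("type") not in ("hidden", "submit") \
                and not (attrs.get("aria-label") or attrs.get("id")):
            self.violations.append("form input without an associated label hook")

page = '<img src="logo.png"><input type="text"><a href="/help">Help</a>'
checker = SimpleUsabilityChecker()
checker.feed(page)
print(checker.violations)
```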

Book ChapterDOI
22 Jun 2014
TL;DR: A prototype of an intermodal passenger information system is investigated in a usability evaluation and tested in comparison with the leading mobility application in Germany, revealing easy-to-improve usability problems but also a trust issue and the need for a participatory component in public transportation.
Abstract: Public transportation is becoming increasingly diverse because of innovation in transport modalities and a large number of service providers. To facilitate passengers' comfort, intermodal passenger information systems are required, which combine data from different providers and transport modes. Context-sensitive mobile applications are therefore promising solutions for supporting passengers at every stage of their trip. Crucial for the success of these applications is their usability. In this paper, a prototype of an intermodal passenger information system is investigated in a usability evaluation and tested in comparison with the leading mobility application in Germany. Both iOS apps were evaluated with a questionnaire using the System Usability Scale (SUS) in a lab setting (n=20) and in a field test (n=20). Additionally, participants of the field test were interviewed retrospectively about the app and the setting. The user feedback was beneficial in learning about users' expectations towards the information retrieval procedures and functionalities of a passenger information system. The usability evaluation mainly revealed easy-to-improve usability problems, but also a trust issue and the need for a participatory component in public transportation, possibly through the integration of social media.
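
The System Usability Scale used in both the lab and field conditions above is scored with a fixed formula: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. The sketch below applies that standard formula to made-up responses.

```python
# Standard SUS scoring. The ten responses are illustrative only; each is an
# agreement rating on the usual 1-5 scale.

def sus_score(responses: list[int]) -> float:
    """Odd items contribute (resp - 1), even items (5 - resp); the sum is scaled by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item, resp in enumerate(responses, start=1):
        total += (resp - 1) if item % 2 == 1 else (5 - resp)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```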

Proceedings ArticleDOI
20 Dec 2014
TL;DR: A systematic review was performed to identify the evaluation methods that have been most employed over the last three years to assess the level of usability of a software application.
Abstract: Since usability is considered a critical success factor for any software application, several evaluation methods have been developed. Nowadays, it is possible to find many proposals in the literature that address the evaluation of usability issues. However, there is still discussion about which usability evaluation method is the most widely accepted by the scientific community. In this research, a systematic review was performed to identify the evaluation methods that have been most employed over the last three years to assess the level of usability of a software application. From these results, it has been possible to establish clear evidence about the current trends in this field. A total of 274 usability studies provided useful information for scholars in this area.

Proceedings ArticleDOI
07 Apr 2014
TL;DR: A case study in which a new set of usability heuristics for Transactional Web Sites is verified and shown to provide more accurate and promising results than Nielsen's current proposal.
Abstract: One of the most recognized methods to evaluate usability in software applications is the heuristic evaluation. In this inspection method, Nielsen's heuristics are the most widely used evaluation instrument. However, there is evidence in the literature which establishes that these heuristics are not appropriate when they are used to measure the level of usability of new emerging categories of software applications. This paper presents a case study where this claim is verified for Transactional Web Sites. Therefore, given the present limitations, a new set of usability heuristics was proposed following a structured and systematic methodology. Finally, fifteen new usability heuristics were obtained as the final product of this research. A validation phase allowed us to contrast this new proposal with Nielsen's principles in a real context, where a heuristic evaluation was performed on a Transactional Web Site. The results established that the new set of usability heuristics, which is presented in this study, provides more accurate and promising results than the current proposal of Nielsen.

Proceedings ArticleDOI
01 Nov 2014
TL;DR: Five usability dimensions and twelve relevant criteria (sub-dimensions) that can be used to evaluate m-banking applications have been created, and this set of dimensions and measurements is proposed as a suitable and appropriate basis for m-banking evaluation.
Abstract: Usability has been widely considered one of the significant quality attributes determining the success of mobile applications. The mobile banking application is increasingly recognized as an emergent m-commerce application that is poised to become the killer application in the mobile arena. However, prominent usability evaluation models for mobile applications are too general and do not adequately capture the complexities of interacting with the m-banking application platform. Similarly, there are insufficient descriptions of the relationship between phases and appropriate usability measures for a specific application. Some banks do not offer an m-banking application, while those that do often provide inadequate functionality, and their interfaces are still insufficient and not user-friendly. To date, work on usability dimensions and measurements for m-banking applications in particular is very limited or even isolated, which makes usability evaluation of m-banking more challenging. Consequently, this paper addresses the matter by proposing a suitable and appropriate set of usability dimensions and measurements for m-banking evaluation. A systematic literature review was employed to review relevant journals and conference proceedings. Seven hundred and eight papers were downloaded, but only forty-nine were selected and fully reviewed/analysed. Five usability dimensions and twelve relevant criteria (sub-dimensions) have been created that can be used to evaluate m-banking applications.

Journal ArticleDOI
TL;DR: This mixed method evaluation provided comprehensive and realistic feedback for iterative refinement of the ADAPT system prior to implementation and demonstrated that "think-aloud" protocol analysis with "near-live" clinical simulations provided a successful usability evaluation of a new primary care pre-diabetes shared goal setting tool.

Journal ArticleDOI
TL;DR: It is argued that the UIM complements existing usability evaluation methods, and future research on utility inspection is discussed.
Abstract: Whereas research in usability evaluation abounds, few evaluation approaches focus on utility. We present the utility inspection method (UIM), which prompts evaluators about the utility of the syste...

Journal ArticleDOI
TL;DR: The findings reveal that students' experience, or lack of it, did not carry much weight in this study, but confirm that the usability attributes are vital for natural and spontaneous interaction with e-learning web sites.

Book ChapterDOI
22 Jun 2014
TL;DR: A study comparing effectiveness, efficiency, ease of use, usability and user experience when using tablets and laptops in typical private and business tasks indicates that a pleasant and meaningful experience depends on more characteristics than work-related qualities such as effectiveness and efficiency.
Abstract: Initially perceived as a consumer device, in recent years tablets have become more frequently used in business contexts, where they often replace laptops as mobile computing devices. Since they follow different user interaction paradigms, we conducted a study comparing effectiveness, efficiency, ease of use, usability and user experience when using tablets and laptops in typical private and business tasks. To measure these characteristics we used the task completion rate, the task completion time, the Single Ease Question (SEQ), the System Usability Scale (SUS) and AttrakDiff. Results indicate that there is a difference between effectiveness, efficiency and the users' assessment of the devices. Users can carry out tasks more effectively and efficiently on laptops, but rate tablets higher in perceived usability and user experience, indicating that a pleasant and meaningful experience depends on more characteristics than work-related qualities such as effectiveness and efficiency.
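
The two objective measures named above, task completion rate (effectiveness) and task completion time (efficiency), are straightforward to compute from logged task outcomes. The sketch below uses invented log records, not the study's data.

```python
# Computing effectiveness (task completion rate) and efficiency (mean time
# on completed tasks) per device from a task log. The records are invented
# placeholders, not data from the study above.
from statistics import mean

# (device, completed?, seconds) per task attempt
log = [
    ("laptop", True, 42), ("laptop", True, 55), ("laptop", False, 90),
    ("tablet", True, 61), ("tablet", True, 70), ("tablet", True, 95),
]

for device in ("laptop", "tablet"):
    attempts = [row for row in log if row[0] == device]
    completed = [row for row in attempts if row[1]]
    effectiveness = len(completed) / len(attempts)
    efficiency = mean(seconds for _, _, seconds in completed)
    print(f"{device}: completion rate {effectiveness:.0%}, mean time {efficiency:.0f} s")
```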

Journal ArticleDOI
TL;DR: This paper proposes a novel model to extract knowledge from opinions in order to improve subjective software usability; this is the first time opinion mining has been applied to software usability.
Abstract: Usability is critical for any system, but in software it is one of the most important features. In fact, one of the main reasons for software failure is the system failing to achieve users' specified goals and satisfaction. For this reason, usability evaluation is becoming an important part of software development. Software usability evaluation can be costly in terms of time and human resources. Therefore, automation is a promising way to augment existing approaches, especially when the evaluation is subjective and usability centres on the user's "opinion". This paper proposes to use opinion mining as an automatic technique to evaluate subjective usability. Opinion mining is a research subtopic of data mining that aims to automatically obtain useful opinion-related knowledge from subjective texts. We propose a novel model to extract knowledge from opinions in order to improve subjective software usability. This is the first time opinion mining has been used in software usability. To evaluate our proposed model, a set of experiments was designed and conducted, and we obtained an average accuracy of 85.41%. We also propose using graphics to visualize users' opinions about software and to compare the usability of two software products.
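
The paper's opinion-mining model is not detailed in the abstract above, so the sketch below only illustrates the general idea: classifying free-text user opinions about a system as positive or negative usability feedback with a simple bag-of-words classifier. The tiny training set, labels, and library choice (scikit-learn) are assumptions for illustration, not the authors' method.

```python
# Toy illustration of opinion mining for usability: a bag-of-words sentiment
# classifier over user comments. The training texts and labels are invented
# placeholders; this is not the model described in the paper above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "the menus are easy to find and the app is simple to learn",
    "navigation is intuitive and tasks are quick to finish",
    "the interface is confusing and error messages are useless",
    "I could not find the settings and kept getting lost",
]
train_labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["the search feature is confusing to use"]))  # likely ['negative']
```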

Journal ArticleDOI
TL;DR: Two in-depth empirical studies of supporting software development practitioners by training them to become barefoot usability evaluators show that the SWPs after 30 hours of training obtained considerable abilities in identifying usability problems and that this approach revealed a high level of downstream utility.
Abstract: Usability evaluations provide software development teams with insights on the degree to which a software application enables a user to achieve his/her goals, how fast these goals can be achieved, how easy it is to learn and how satisfactory it is in use. Although usability evaluations are crucial in the process of developing software systems with a high level of usability, their use is still limited in the context of small software development companies. Several approaches have been proposed to support software development practitioners (SWPs) in conducting usability evaluations and this paper presents two in-depth empirical studies of supporting SWPs by training them to become barefoot usability evaluators. Findings show that the SWPs after 30 hours of training obtained considerable abilities in identifying usability problems and that this approach revealed a high level of downstream utility. Results also show that the SWPs created relaxed conditions for the test users when acting as test monitors but ex...

Book ChapterDOI
01 Jan 2014
TL;DR: This chapter takes a user-centered, Human-Computer Interaction-based perspective to discuss usability evaluation of information visualization techniques, drawing on a background of well-known usability evaluation methods from HCI to help understand why there are still open problems.
Abstract: In the last decade, the growing interest in the evaluation of information visualization techniques is a clear indication that usability and user experience are very important quality criteria in this context. However, beyond this level of agreement there is much room for discussion about how to extend the variety of usability evaluation approaches for assessing information visualization techniques, how to determine which ones are the most effective, and in what ways and for what purposes. In this chapter we take a user-centered, Human-Computer Interaction-based perspective to discuss usability evaluation of information visualization techniques. We begin by presenting a singular view of the evolution of visualization technique evaluation, briefly summarizing the main contributions of several works in this area, from its humble beginning as a collateral activity to the recent growth of interest. Then, we focus on current issues related to such evaluations, particularly concerning the way they are designed and conducted, taking into account a background of well-known usability evaluation methods from HCI to help understand why there are still open problems. A set of guidelines for a (more) user-centered usability evaluation of information visualization techniques is proposed and discussed. Our ultimate goal is to provide some insight regarding whether and how sound ergonomic, user-centered knowledge can be transferred to the information visualization context.

Journal ArticleDOI
TL;DR: The main contribution of the study is a set of usability evaluation criteria for BI applications presented as guidelines; these deviate from existing usability evaluation guidelines in that they emphasise the aspects of information architecture, learnability and operability.
Abstract: Business Intelligence (BI) applications provide business information to drive decision support. Usability is one of the factors determining the optimal use of and eventual benefit derived from BI applications. The documented need for more BI usability research, together with the practical necessity for BI evaluation guidelines in the mining industry, provides the rationale for this study. The purpose of the study was to investigate the usability evaluation of BI applications in the context of a coal mining organization. The research is guided by the question: How can existing usability criteria be customized to evaluate the usability of BI applications? The research design included user observation, heuristic evaluation and a survey. Based on observations made during user support on a BI application used at a coal mining organization, a log of usability issues was compiled. The usability issues extracted from this log were compared and contrasted with general usability criteria from the literature to synthesize an initial set of BI usability evaluation criteria. These criteria were used as the basis for a heuristic evaluation of the BI application used at the coal mining organization. The same BI application was also evaluated using the Software Usability Measurement Inventory (SUMI) standardized questionnaire. The results from the two evaluations were triangulated and then compared with the BI user issues again to contextualize the findings and synthesize a validated and refined set of criteria. The main contribution of the study is the set of usability evaluation criteria for BI applications presented as guidelines. These BI guidelines deviate from existing usability evaluation guidelines in that they emphasise the aspects of information architecture, learnability and operability.

Journal ArticleDOI
TL;DR: This paper proposes an approach based on Web mining to analyze product usability; it uses the massive volume of online customer reviews of analogous products and features as a data source, which is easy to obtain from the Web and reflects the most up-to-date customer opinions on product usability.