Open Access Journal Article

Usability Testing and Expert Inspections Complemented by Educational Evaluation: A Case Study of an e-Learning Platform

Andrina Granić, +1 more
01 Jan 2011, Vol. 14, Iss. 2, pp. 107–123
TLDR
It is expected that this contribution, with its general findings and know-how from the experience, will facilitate understanding of how to evaluate and improve the usability of e-learning systems based on feedback from users (learners and teachers) and experts.
Abstract
Introduction

In the context of the inclusive knowledge society, the role of system interfaces that are closely tailored to the way people naturally work, live and acquire knowledge is unquestionably recognized as important. In addition, the need for active and accessible learning favours only e-learning that engages users effectively. Nevertheless, despite considerable publicity and activity, progress in the field of e-learning was relatively slow until recently, and problems were often associated with poorly designed e-learning applications, cf. (SIGCHI, 2001; Granic, 2008). It seems that too much of the research has been driven by technical possibilities, while paying inadequate attention to the area of application. This issue was ignored for some time, in the hope that new technologies would somehow resolve the lack of real progress. However, to communicate content efficiently and improve the learning experience, interaction mechanisms merit particular consideration.

Usability studies in the e-learning field are not very frequent, despite the important role that usability plays in the success of every e-learning system. If the interface is not transparent and easy to use, learners/students concentrate on interaction aspects rather than on acquiring content. In addition, it has been claimed that usability assessment needs further consideration of the learning perspective: approaches to e-learning usability range from those adapted to e-learning to those applying heuristics without special adjustment to the educational context. Accordingly, as neither an established set of heuristics nor a joint evaluation methodology for e-learning systems exists yet, there is a clear need for further research and empirical evaluation.

The paper reports on a case study of an e-learning platform implemented in a network of fourteen European schools. The contribution of this paper is two-fold.
First, it critically examines the usability of a large-scale e-learning system deployed across several European countries. Second, it provides general findings and lessons learned from the experience. Usability testing, which integrated six empirical methods into a laboratory-based test, was complemented with heuristic inspections. Interface compliance with Nielsen's (1994) traditional principles was enhanced with experts' judgment of the system's "educational evaluation" by means of three sets of criteria: Learning with software heuristics (Squires & Preece, 1999), Educational design heuristics (Quinn, 1996) and Pedagogical dimensions (Reeves, 1994). We expect that these general findings and the know-how gained from the experience will facilitate understanding of how to evaluate and improve the usability of e-learning systems based on feedback from users (learners/students and teachers) and experts. Since studies in the field are limited, this contribution adds to the body of knowledge.

Related Work

Research in the human-computer interaction (HCI) field has provided numerous principles and guidelines that can steer designers in making their decisions. Although applying good design guidelines is a good start, it is no substitute for system usability evaluation. In general, usability is context-dependent and is shaped by the interaction between users, tasks and system purpose. A variety of usability evaluation methods have been developed over the past few decades; most are grouped into usability test methods, which involve end-users, and inspection methods, which engage HCI experts. Research studies involving different kinds of applications, user groups and evaluation techniques have been conducted, and the need for combining methods is well understood in the usability field; see e.g. Sears & Jacko (2008).
To analyze the usability of interaction mechanisms in e-learning systems, more or less standard assessments and studies have been carried out. …


Citations
Journal ArticleDOI

Perceived usability evaluation of learning management systems: Empirical evaluation of the System Usability Scale

TL;DR: An empirical evaluation of the SUS questionnaire in the context of LMSs' perceived usability evaluation found that the perceived usability of the evaluated LMSs is at a satisfactory level, and that the validity and reliability of SUS for LMS evaluation remain robust even for small sample sizes.
Book

Handbook of Research on Scholarly Publishing and Research Methods

TL;DR: This chapter provides an overview of multilevel modeling for researchers and provides guides for the development and investigation of these models.
Proceedings ArticleDOI

Perceived Usability Evaluation of Learning Management Systems: A First Step towards Standardization of the System Usability Scale in Greek

TL;DR: A first step towards standardization of a Greek version of SUS in the context of LMSs' perceived usability evaluation is reported; three studies involving 280 university students in both blended and distance learning education demonstrated the validity and reliability of the Greek version of SUS.
Journal ArticleDOI

A review of personalised e-learning: Towards supporting learner diversity

TL;DR: Some of the technological challenges which developers may encounter in creating authoring tools for personalised e-learning, and some of the pedagogical challenges which authors may encounter when creating personalised e-learning activities to enhance the learning experience of their students, are reviewed.
Book ChapterDOI

Usability Evaluation Methods: A Systematic Review

TL;DR: This chapter aims to identify, analyze, and classify the methodologies and methods described in the literature for the usability evaluation of systems and services based on information and communication technologies.
References
Book ChapterDOI

SUS: A 'Quick and Dirty' Usability Scale

John Brooke
TL;DR: This chapter describes the System Usability Scale (SUS) a reliable, low-cost usability scale that can be used for global assessments of systems usability.
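The SUS described above produces a single 0–100 score from ten 5-point Likert items. As a concrete illustration, here is a minimal sketch of the standard SUS scoring procedure (odd-numbered items contribute the rating minus 1, even-numbered items contribute 5 minus the rating, and the sum is multiplied by 2.5); the function name and input format are illustrative choices, not from the paper:

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten 1-5 Likert ratings,
    given in questionnaire order. A sketch of the standard scoring rule."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:
            total += r - 1   # odd-numbered items (1, 3, ...): rating - 1
        else:
            total += 5 - r   # even-numbered items (2, 4, ...): 5 - rating
    return total * 2.5       # rescale the 0-40 sum to 0-100

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # most favorable answers -> 100.0
```

Note that the resulting score is a global, unidimensional measure: it supports comparisons between systems or versions, but individual item scores are not meant to be interpreted in isolation.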
Proceedings ArticleDOI

Usability inspection methods

Jakob Nielsen
TL;DR: Usability inspection is the generic name for a set of cost-effective ways of evaluating user interfaces to find usability problems.
Book

Heuristic evaluation

Jakob Nielsen
TL;DR: This chapter discusses heuristic evaluation: inspection of a prototype or finished system to identify all changes necessary to optimize human performance and preference.
Book

The human-computer interaction handbook: fundamentals, evolving technologies and emerging applications

TL;DR: The evolution of human-computer interaction is discussed in detail, presenting human-computer interaction as a moving target shaped by evolving technologies and emerging applications.
Book

The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design

TL;DR: A twenty-year expert presents the techniques of Usability Engineering as a series of product lifecycle tasks that result directly in easier-to-learn, easier-to-use software.