
Showing papers on "Usability goals published in 1997"


Journal ArticleDOI
TL;DR: Based on human information processing theory, eight human factors considerations relevant to software usability are identified; these form the framework from which the Purdue Usability Testing Questionnaire (PUTQ) is derived.
Abstract: Usability is becoming an increasingly important software criterion, but present usability measurement methods are either difficult to apply or overly dependent upon evaluators' expertise. Based on human information processing theory, this study identified eight human factors considerations relevant to software usability. These considerations, together with the three stages of human information processing theory, formed the framework from which our Purdue Usability Testing Questionnaire (PUTQ) is derived. An experiment was conducted to test the validity of PUTQ. The results showed a high correlation between PUTQ and the Questionnaire for User Interaction Satisfaction (QUIS version 5.5). In addition, PUTQ detected the differences in user performance between two experimental interface systems, but QUIS failed to do so.
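To make the validation step concrete, the sketch below computes a Pearson correlation between overall scores on two questionnaires, the statistic used here to compare PUTQ against QUIS. The scores are hypothetical and the snippet is illustrative only, not the study's analysis.

```python
# Illustrative only: Pearson correlation between two questionnaire
# instruments, as used to validate PUTQ against QUIS. Scores below
# are hypothetical, not data from the study.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant overall scores on each questionnaire.
putq = [5.1, 6.3, 4.8, 5.9, 6.7, 4.2, 5.5]
quis = [5.4, 6.0, 4.5, 6.1, 6.9, 4.0, 5.2]
print(f"r = {pearson_r(putq, quis):.2f}")  # a high r supports convergent validity
```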

321 citations


Journal ArticleDOI
TL;DR: It is shown that the matching of predicted and actual problems requires careful attention, and that current approaches lack rigour or generality, and a solution is presented: a new report structure for usability problems.
Abstract: Recent HCI research has produced analytic evaluation techniques which claim to predict potential usability problems for an interactive system. Validation of these methods has involved matching predicted problems against usability problems found during empirical user testing. This paper shows that the matching of predicted and actual problems requires careful attention, and that current approaches lack rigour or generality. Requirements for more rigorous and general matching procedures are presented. A solution to one key requirement is presented: a new report structure for usability problems. It is designed to improve the quality of matches made between usability problems found during empirical user testing and problems predicted by analytic methods. The use of this report format is placed within its design research context, an ongoing project on domain-specific methods for software visualizations.

154 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: This work reports on a series of experiments designed to compare usability testing methods in a novel information retrieval interface by looking at the problem focus, the quality of the results and the cost effectiveness of each method.
Abstract: We report on a series of experiments designed to compare usability testing methods in a novel information retrieval interface. The purpose of this ongoing work is to investigate the problems people encounter while performing information retrieval tasks, and to assess evaluation methods by looking at the problem focus, the quality of the results and the cost effectiveness of each method. This first communication compares expert evaluation using heuristics [15] with end user testing [24].

130 citations


Journal ArticleDOI
TL;DR: A case study that tracks usability problems predicted with six usability evaluation methods through a development process and concludes that predictive methods are not as effective as the HCI field would like.
Abstract: We present a case study that tracks usability problems predicted with six usability evaluation methods (claims analysis, cognitive walkthrough, GOMS, heuristic evaluation, user action notation, and simply reading the specification) through a development process. We assess the methods' predictive power by comparing the predictions to the results of user tests. We assess the methods' persuasive power by seeing how many problems led to changes in the implemented code. We assess design-change effectiveness by user testing the resulting new versions of the system. We conclude that predictive methods are not as effective as the HCI field would like and discuss directions for future research.

123 citations


Journal ArticleDOI
TL;DR: A method is reported for measuring usability in terms of task performance: the achievement of frequent and critical task goals by particular users in a context simulating the work environment.
Abstract: This paper reports a method for measuring usability in terms of task performance: achievement of frequent and critical task goals by particular users in a context simulating the work environment. The terms usability and quality in use are defined in international standards as the effectiveness, efficiency and satisfaction with which goals are achieved in a specific context of use. The performance measurement method gives measures which, in combination with measures of satisfaction, operationalize these definitions. User performance is specified and assessed by measures including task effectiveness (the quantity and quality of task performance) and user efficiency (effectiveness divided by task time). Measures are obtained with users performing tasks in a context of evaluation which matches the intended context of use. This can also reveal usability problems which may not become evident if the evaluator interacts with the user. The method is supported by tools which make it practical in commercial t...
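The two measures named in the abstract reduce to simple formulas: task effectiveness combines the quantity and quality of the task output, and user efficiency divides effectiveness by task time. The sketch below is one plausible operationalization under those definitions; the percentage-based scoring and the numbers are illustrative assumptions, not the method's published tooling.

```python
# A minimal sketch of the performance measures described above.
# Assumed scoring: quantity and quality are each expressed as
# percentages of the task goals achieved; the combination rule
# (quantity x quality / 100) is one common operationalization.

def task_effectiveness(quantity_pct: float, quality_pct: float) -> float:
    """Task effectiveness as a percentage, combining quantity and
    quality of task performance."""
    return quantity_pct * quality_pct / 100.0

def user_efficiency(effectiveness_pct: float, task_time_min: float) -> float:
    """Effectiveness achieved per minute of task time."""
    return effectiveness_pct / task_time_min

eff = task_effectiveness(quantity_pct=90.0, quality_pct=80.0)  # -> 72.0
print(f"effectiveness = {eff:.1f}%")
print(f"efficiency    = {user_efficiency(eff, task_time_min=12.0):.1f} %/min")
```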

116 citations


Book ChapterDOI
14 Jul 1997
TL;DR: This thesis fulfils its objective of developing interface design guidelines for virtual environments, using interaction modelling as a theoretical base, and provides an improved understanding of user interaction in virtual environments that can be used to inform further theories, methods or tools for virtual environments and human-computer interfaces.
Abstract: The development of HCI guidelines for virtual environments is reported, using interaction modelling.

102 citations


Patent
19 May 1997
TL;DR: A method is proposed for quantitatively and objectively measuring the usability of a system, providing quantitative measures for usability satisfaction, usability performance, and usability performance indicators.
Abstract: A method for quantitatively and objectively measuring the usability of a system. The method provides quantitative measures for usability satisfaction, usability performance, and usability performance indicators. Usability satisfaction is measured by acquiring data from a system user population with respect to a set of critical factors that are identified for the system. Usability performance is measured by acquiring data for quantifying the statistical significance of the difference in the mean time for an Expert population to perform a task on a particular number of trials and the estimated mean time for a Novice population to perform the task on the same number of trials. The estimated mean time is calculated according to the Power Law of Practice. Usability Performance Indicators include Goal Achievement Indicators, Work Rate Usability Indicators, and Operability Indicators which are calculated according to one or more measurable parameters which include performance times, numbers of problems encountered, number of actions taken, time apportioned to problems, learning time, number of calls for assistance, and the number of unsolved problems.
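The Power Law of Practice referenced here predicts that task time falls as a power function of the number of practice trials, commonly written T(n) = T1 · n^(-α). A minimal sketch of the novice estimate follows, with hypothetical parameter values (T1, α, and the trial counts are not taken from the patent):

```python
# Hedged sketch: under the Power Law of Practice, the time to perform
# a task on trial n is T(n) = T1 * n**(-alpha), where T1 is the time
# on the first trial and alpha is a learning-rate constant. The patent
# uses the law to estimate a Novice population's mean time over a
# given number of trials; all values below are hypothetical.

def trial_time(t1: float, alpha: float, n: int) -> float:
    """Predicted task time on trial n under the power law of practice."""
    return t1 * n ** (-alpha)

def estimated_mean_time(t1: float, alpha: float, trials: int) -> float:
    """Mean predicted time over the first `trials` trials."""
    return sum(trial_time(t1, alpha, n) for n in range(1, trials + 1)) / trials

novice_mean = estimated_mean_time(t1=120.0, alpha=0.4, trials=10)
expert_mean = 45.0  # observed mean over the same number of trials
print(f"estimated novice mean: {novice_mean:.1f}s vs expert mean: {expert_mean:.1f}s")
```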

63 citations


01 Jan 1997
TL;DR: A comprehensive framework for software product quality is being incorporated in a revision to ISO/IEC 9126, which provides a general-purpose model which defines six broad categories of software quality: functionality, reliability, usability, efficiency, maintainability and portability.
Abstract: ISO/IEC 9126 (1991) established a practical way of decomposing software quality into a set of characteristics and subcharacteristics. Reconciling this approach to quality with a new standard for usability (ISO 9241-11) has led to a comprehensive framework for software product quality which is being incorporated in a revision to ISO/IEC 9126. The new framework defines three perspectives: internal quality (static properties of the code), external quality (behaviour of the software when it is executed) and quality in use (whether the software meets the needs of the user when it is in use). Quality in use is a broader view of the concept of usability defined in ISO 9241-11. ISO/IEC 14598 describes a process for evaluating software product quality which is consistent with this model.

Software quality characteristics (ISO/IEC 9126): In order to evaluate software it is necessary to select relevant quality characteristics. This can be done using a quality model which breaks software quality down into different characteristics. ISO/IEC 9126 (1991) provides a general-purpose model which defines six broad categories of software quality: functionality, reliability, usability, efficiency, maintainability and portability. These are further broken down into subcharacteristics which have measurable attributes (figure 1). The ISO/IEC 9126 characteristics and subcharacteristics provide a useful checklist of issues related to quality. The actual characteristics and subcharacteristics which are relevant in any particular situation will depend on the purpose of the evaluation, and should be identified by a quality requirements study.
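The characteristic/subcharacteristic decomposition lends itself to a simple checklist structure. The sketch below encodes the six 1991 categories with subcharacteristic names as they are commonly cited; treat the lists as an approximation to be verified against the standard itself.

```python
# Sketch of the ISO/IEC 9126 (1991) quality model as a checklist
# structure: six characteristics, each decomposed into measurable
# subcharacteristics. Subcharacteristic names follow the 1991
# standard as commonly cited and may be incomplete.
ISO_9126_MODEL = {
    "functionality":   ["suitability", "accuracy", "interoperability", "security"],
    "reliability":     ["maturity", "fault tolerance", "recoverability"],
    "usability":       ["understandability", "learnability", "operability"],
    "efficiency":      ["time behaviour", "resource behaviour"],
    "maintainability": ["analysability", "changeability", "stability", "testability"],
    "portability":     ["adaptability", "installability", "conformance", "replaceability"],
}

# A quality requirements study would select the subset relevant to
# the purpose of the evaluation, e.g.:
relevant = {c: subs for c, subs in ISO_9126_MODEL.items()
            if c in ("usability", "reliability")}
print(relevant)
```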

60 citations


Book ChapterDOI
Clare-Marie Karat1
01 Jan 1997
TL;DR: Usability engineering presents management with a vehicle for increased probability of success in the marketplace and reduced software development costs, as discussed by the authors; the focus on results has moved from relatively long-term goals to near-term quarterly deliverables and profits.
Abstract: Publisher Summary Practitioners and researchers in the field of usability engineering have sought to educate, communicate with, and influence corporate decision makers about the value of usability engineering in the software life cycle. Management has a heightened awareness of the time, money, and resources allocated for projects. The focus on results has moved from relatively long-term goals to near-term quarterly deliverables and profits. At the same time, customer service and satisfaction have become the lines for market differentiation. Business leaders with sound judgment invest in usability engineering in order to reap the benefits and solve their pressing business problems. Usability engineering presents management with a vehicle for increased probability of success in the marketplace and reduced software development costs.

56 citations


Book ChapterDOI
01 Jan 1997
TL;DR: This chapter presents the variety of techniques that fall under the umbrella term of usability inspection, along with considerations for practitioners contemplating the use of these techniques.
Abstract: Publisher Summary This chapter describes the costs and benefits associated with a collection of non-empirical methods for evaluating user interfaces, collectively called usability inspection methods. These methods are considered non-empirical because rather than collecting data or observing users interacting with a system, they rely on the ability of judges who attempt to predict the kinds of problems users will experience with a user interface. While on the face of it this may seem to be pure folly, in practice variants of the technique have been demonstrated to be both cost-effective and reliable ways of evaluating usability in some cases. This chapter presents the variety of techniques that fall under the umbrella term of usability inspection. Following this, research reports are reviewed. The chapter closes with considerations for practitioners contemplating the use of one of these techniques.

44 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: The results suggest that the dimensions of usability are highly interrelated in consumers' evaluation and have only a limited potential to explain product preferences.
Abstract: The design of smart products involves undesirable, yet frequent, cases when compromises between the quality of appearance, functionality, price and usability are required. Usability has lately been considered increasingly important for product competitiveness, but perceiving how usable a product might be prior to actual use is difficult. This paper considers the way people perceive and weight usability-related product attributes in a decision-making situation. The dimensions of usability are analysed from a consumer attitude formation point of view. A model of evaluation criteria related to expected usability is presented. It includes consumers' beliefs concerning product characteristics, benefits and an overall emotional response. Scales to measure the dimensions are developed. The scales are applied in a case study with 91 subjects evaluating six different heart rate monitors. The results suggest that the dimensions of usability are highly interrelated in consumers' evaluation and have only a limited potential to explain product preferences.

Journal ArticleDOI
TL;DR: The proposed technique is a specialisation for hypermedia of a general methodology for usability evaluation of interactive systems, named SUE (Systematic Usability Evaluation), based on the use of a hypermedia model, a set of hypermedia-specific usability attributes, and a set of patterns of inspection activities, called abstract tasks.
Abstract: This paper will discuss a usability inspection method for hypermedia applications that explicitly takes into account the particular features of this class of systems. The proposed technique is a specialisation for hypermedia of a general methodology for usability evaluation of interactive systems, named SUE (Systematic Usability Evaluation). Our approach aims to support a systematic evaluation process, making it well organised, fast and cheap, and is based on the use of a hypermedia model, a set of hypermedia-specific usability attributes, and a set of patterns of inspection activities, called abstract tasks. The model identifies the ‘dimensions’ along which hypermedia usability can be analysed, defines the application constituents that must be inspected, and allows a precise formulation of usability attributes and abstract tasks. Abstract tasks provide operational guidelines for systematically checking usability attributes throughout the application. To exemplify our approach, we will report som...

Proceedings Article
01 Jan 1997
TL;DR: The application of usability testing is designed to determine the current usability level of a workstation designed for the clinician's use, determine specific problems with the Clinical Workstation's usability, and then evaluate the effectiveness of changes that address those problems.
Abstract: Once the users' needs are determined, how does one ensure that the resulting software meets the users' needs? This paper describes our application of a process, usability testing, that is used to measure the usability of systems as well as guide modifications to address usability problems. Usability testing is not a method to elicit opinions about software, but rather a method to determine scientifically a product's level of usability. Our application of usability testing is designed to determine the current usability level of a workstation designed for the clinician's use, determine specific problems with the Clinical Workstation's usability, and then evaluate the effectiveness of changes that address those problems.

Book ChapterDOI
14 Jul 1997
TL;DR: This tutorial explains the benefits of measuring usability as part of a user-centred design process, and introduces the methodology for usability measurement developed by the collaborative European ESPRIT MUSiC (Measurement of Usability in Context) project.
Abstract: The tutorial explains the benefits of measuring usability as part of a user-centred design process, and introduces the participants to the methodology for usability measurement developed by the collaborative European ESPRIT MUSiC (Measurement of Usability in Context) project. The tutorial includes demonstration of the use of MUSiC tools, and class exercises to apply the methods to case studies.


Journal ArticleDOI
S. Hakiel1
TL;DR: A set of usability deliverables is identified, their content is described and how they relate to the software product design and development process is shown.
Abstract: The uptake of HCI (human-computer interaction) and usability engineering by software development organisations can be terminally impeded by lack of appreciation of their value, even where ease of use is recognised as important. Descriptions of HCI offerings in terms of activities and methods are not readily comprehensible to managers of software products. Similarly, descriptions of HCI activities and methods give little indication of how these are to be managed with reference to the stages and deliverables in a software product development life cycle. This lack of clear association between usability engineering and software engineering in product development contributes to the marginalisation of usability-related activities. Specifying HCI or usability contributions to product development in terms of deliverables, on the other hand, can provide a solid basis for a manageable usability perspective on the development of software products. This article identifies a set of usability deliverables, describes their content and shows how they relate to a software product design and development process.

Journal ArticleDOI
TL;DR: Why are some efforts to implement usability engineering successful and others not?
Abstract: Why are some efforts to implement usability engineering successful and others not? Usability methods have been applied in computer systems development for more than 15 years. Although much progress has been made, usability has not become a standard part of all development. Some attempts at usability have resulted in a "one-time-only" phenomenon. That is, things appear to have gone relatively well but when the projects and team members move on, no one initiates further usability efforts. Other attempts at usability have failed because the problems identified by usability methods did not influence product design.

Book ChapterDOI
14 Jul 1997
TL;DR: Do developers use proven usability techniques like user involvement, usability testing, and iterative design in industrial practice?
Abstract: Do developers use proven usability techniques like user involvement, usability testing, and iterative design in industrial practice? Based on inside knowledge of many different types of projects, the author must conclude that these techniques are seldom used. There are several reasons for that. For instance: (1) The market pressure is not there. Users ask primarily for functionality, and cannot formulate their usability requirements. (2) Developers misunderstand the usability techniques. For instance they assume that usability testing is a kind of debugging, rather than a step in designing the interface. Or they believe that expensive labs have to be used, rather than a low-cost approach that can be learned in a day and carried out by developers. (3) There is no proven way to make a good, first prototype. Since development in practice seems restricted to modifications of the first prototype, it is essential to make it good. (4) There is no proven way to correct observed problems.

Book ChapterDOI
14 Jul 1997
TL;DR: An automatic Usability Testing and Evaluation tool running at the operating system level that collects all the information related to user actions when s/he is using a particular application and displays this information in various useful formats as well as calculates suitable measures of Usability.
Abstract: Currently Laboratory Testing for Usability evaluation requires external monitoring and recording devices such as video and audio, as well as evaluator observation of user and user actions. It then requires review of these recordings, a time consuming and tedious process. We describe an automatic Usability Testing and Evaluation tool that we have developed. This consists of a piece of software called AUS (Automatic Usability Software) running at the operating system level that collects all the information related to user actions when s/he is using a particular application. It then displays this information in various useful formats as well as calculates suitable measures of Usability. There is no need with this system for any external recording devices. Patent protection has been applied for in respect of the inventive aspects of this system.
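AUS itself is described only at the level of an OS-resident recorder, so the sketch below is purely illustrative of the general idea: capture timestamped user actions, then derive simple usability measures such as task time and error counts from the log. All names are hypothetical, not from the paper.

```python
# Illustrative sketch of automatic usability data capture: record
# timestamped user-action events, then compute simple measures from
# the log. Not AUS itself; a toy stand-in for the idea it describes.
import time
from dataclasses import dataclass, field

@dataclass
class EventLog:
    events: list = field(default_factory=list)

    def record(self, kind: str, detail: str = "") -> None:
        """Append a timestamped user-action event."""
        self.events.append((time.time(), kind, detail))

    def task_time(self) -> float:
        """Elapsed seconds between the first and last recorded event."""
        return self.events[-1][0] - self.events[0][0] if len(self.events) > 1 else 0.0

    def count(self, kind: str) -> int:
        """Number of recorded events of the given kind."""
        return sum(1 for _, k, _ in self.events if k == kind)

log = EventLog()
log.record("click", "File > Open")
log.record("keypress", "Ctrl+F")
log.record("error", "invalid filename")
print(f"actions: {len(log.events)}, errors: {log.count('error')}, "
      f"time: {log.task_time():.2f}s")
```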

Proceedings ArticleDOI
01 Aug 1997
TL;DR: Case-based and organizational learning technology is used to support this process and integrates emerging interface design experiences with established guidelines to create a context-specific body of knowledge about usability practices.
Abstract: Usability guidelines are becoming increasingly popular with organizations that develop software with significant user interface components. But most guidelines fall short of the goal to put the accumulated knowledge of user-centered design at the fingertips of everyday developers, often becoming a static document used only by human factors specialists. This paper describes a process and technology designed to turn usability guidelines into a proactive development resource that can be applied throughout the development process. The process ensures conformance with established guidelines, but has the flexibility to meet the diverse needs of user interface design requirements, and uses project experiences to evolve the guidelines to meet the dynamic needs of organizations. Case-based and organizational learning technology is used to support this process and integrates emerging interface design experiences with established guidelines to create a context-specific body of knowledge about usability practices.

Proceedings ArticleDOI
22 Mar 1997
TL;DR: There is a tension between initial usability (measured by success at first encounter) and efficiency of skilled performance and a narrow focus on initial usability elevates learnability above efficiency once "up the learning curve".
Abstract: Usability studies are usually conducted in a compressed time scale (measured in hours) compared with a user's eventual experience with a product (often measured in years). For this reason, typical usability evaluations focus on success during initial interactions with a product (see for example Dumas & Redish, 1994 and Nielsen & Mack, 1994). Success on initial use is often driven by familiarity. Are what we call "intuitive" user interfaces really just familiar user interfaces? This "familiarity effect" can often swamp the usability differences between design alternatives. If usability evaluations continue to emphasize initial success with a product we may inhibit innovation in user interface design. There is a tension between initial usability (measured by success at first encounter) and efficiency of skilled performance. Initial learning of a product's user interface often results in quite rapid increases in efficiency of use. A narrow focus on initial usability elevates learnability above efficiency once "up the learning curve". While this approach is appropriate for some products targeted primarily for casual/occasional users, it fails to capture the usability issues associated with power users (those with significant experience, training, or a professional orientation to their interaction with the product).

Proceedings ArticleDOI
22 Mar 1997
TL;DR: This practical, hands-on tutorial examines everything from what is required to develop a usability strategy for a whole organisation to finding data that will convince stakeholders of the value of a single usability activity.
Abstract: Usability may now be practised by a large number of software developers, but has yet to gain wide acceptance. Communicating the value of usability must happen across multiple levels of an organisation, and requires speaking several "languages". This practical, hands-on tutorial will cover techniques for convincing management or potential clients of the value of usability, in terms each group understands. It will examine everything from what is required to develop a usability strategy for a whole organisation to finding data that will convince stakeholders of the value of a single usability activity.

Proceedings ArticleDOI
22 Mar 1997
TL;DR: This workshop explains how low-fidelity prototyping and usability testing can be used in a process of iterative refinement in order to develop more usable products.
Abstract: Product developers are typically faced with small budgets, tight schedules, and over-committed resources. To deliver high-quality products under these constraints, developers need an understanding of basic design principles, techniques that allow them to work effectively with materials on hand, and a development process that is built around the use of such techniques. This workshop explains how low-fidelity prototyping and usability testing can be used in a process of iterative refinement in order to develop more usable products.

Journal ArticleDOI
TL;DR: In this paper, the authors provide principles and examples of distributed learning and support systems that can aid individual development, considering motivation, speed of program development, and cost issues for some current projects where usability and impact are central design criteria.
Abstract: Many companies are adopting an approach of hiring the skills they need, then discarding them and hiring others as needs change. Individuals must rapidly learn in their specialty and also develop new skill sets over the long term. Access to intellectual capital and to learning and performance support systems is improving with developments in the World Wide Web. However, access is not usability, and usability is not organizational impact. This paper provides principles and examples of distributed learning and support systems that can aid individual development. It considers motivation, speed of program development, and cost issues for some current projects where usability and impact are central design criteria.

29 Jul 1997
TL;DR: The concept of usability evaluation is extended beyond the laboratory, typically using the network itself as a bridge to take interface evaluation to a broad range of users in their natural work settings.
Abstract: Much traditional user interface evaluation is conducted in usability laboratories, where a small number of selected users is directly observed by trained evaluators. However, as the network itself and the remote work setting have become intrinsic parts of usage patterns, evaluators often have limited access to representative users for usability evaluation in the laboratory, and the users' work context is difficult or impossible to reproduce in a laboratory setting. These barriers to usability evaluation led to extending the concept of usability evaluation beyond the laboratory, typically using the network itself as a bridge to take interface evaluation to a broad range of users in their natural work settings.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: The thesis provides knowledge and insights gained from real-life situations about what UCSD is and how it can be put into practice, and proposes a clear definition of UCSD together with a set of key principles encompassing it.
Abstract: Have you ever been frustrated with that IT system at work that does not behave the way you expect it to? Or had problems with using the features on your new mobile phone? When systems and appliances do not support us in what we are doing, and do not behave the way we expect them to, then usability is neglected. Poor usability may be frustrating and irritating when trying out your mobile phone, but in a critical work situation poor usability may be disastrous. In this thesis, user-centred systems design (UCSD) is advocated as an approach for facilitating the development of usable interactive systems. Systems that suit their intended use and users do not just “emerge”. They are the result of a UCSD process and a user-centred attitude during the development. This means, in short, that the real users and their needs, goals, context of use, abilities and limitations drive the development – in contrast to technology-driven development. We define UCSD as: a process focusing on usability throughout the entire development process and further throughout the system life cycle. I argue that this definition, along with a set of key principles, does help organisations and individual projects in the process of developing usable interactive systems. The key principles include the necessity of having an explicit focus on users and making sure that users are actively involved in the process. The thesis provides knowledge and insights gained from real-life situations about what UCSD is and how it can be put into practice. The most significant results are: the proposal of a clear definition of UCSD and a set of key principles encompassing UCSD; a process for usability design; and the usability designer role. Furthermore, design cases from different domains are provided as examples and illustrations.

29 Jul 1997
TL;DR: The over-arching goal of this work is to discuss the user-reported critical incident method, a cost-effective remote usability evaluation method for real-world applications involving real users, doing real tasks in real work environments.
Abstract: The over-arching goal of this work is to discuss the user-reported critical incident method, a cost-effective remote usability evaluation method for real-world applications involving real users doing real tasks in real work environments. Several methods have been developed for conducting usability evaluation without direct observation of a user by an evaluator. However, contrary to the user-reported critical incident method, none of the existing remote evaluation methods (nor even traditional laboratory-based evaluation) meets all the following criteria:
- data are centered around critical incidents that occur during task performance;
- tasks are performed by real users;
- users are located in their normal working environment;
- users self-report their own critical incidents;
- data are captured in day-to-day task situations;
- no direct interaction is needed between user and evaluator during an evaluation session;
- there is a cost-effective way to capture data; and
- data are of high quality and therefore relatively easy to convert into usability problems.

Journal ArticleDOI
TL;DR: The authors suggest ways in which ethnographic principles, historically used to describe a culture from the point of view of someone within that culture, can be used along with traditional usability testing to predict a product's acceptability in the marketplace.
Abstract: The only way to judge a product's acceptance in the workplace is through its use. However, before a product is released into the marketplace, its developers would like to predict its acceptability in the target market. One predictor of acceptability is usability test results. Typically, usability testing takes place outside of the user's natural environment in a usability test lab, an artificial environment. This article suggests ways in which ethnographic principles, historically used to describe a culture from the point of view of someone within that culture, can be used along with traditional usability testing to predict a product's acceptability in the marketplace.

Proceedings ArticleDOI
22 Mar 1997
TL;DR: The approach is to create tools and methods in which software development organizations can develop and evolve usability guidelines based on the kinds of applications they develop, which can be used to match customer requirements to specific interface techniques that have proven effective for similar users and application domains.
Abstract: Working with a large information technology organization in industry, we have been investigating how a repository of organization-specific usability guidelines can be created and used to produce high-quality end-user applications. Our approach is to create tools and methods with which software development organizations can develop and evolve usability guidelines based on the kinds of applications they develop. This information can then be used to match customer requirements to specific interface techniques that have proven effective for similar users and application domains. This is supported through a case-based system that attaches experience cases to guidelines to help find, explain, specialize, and extend usability guidelines.
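As an illustration of the case-based mechanism described (not the paper's actual system), the sketch below attaches context attributes to guideline cases and retrieves the best match for a new set of customer requirements by simple attribute overlap. All data and names are hypothetical.

```python
# Hypothetical sketch of case-based retrieval over usability
# guidelines: each case records the project context in which a
# guideline proved effective; new requirements are matched by
# counting shared context attributes.
CASES = [
    {"guideline": "Use wizards for infrequent, multi-step tasks",
     "context": {"users": "novice", "domain": "finance", "platform": "desktop"}},
    {"guideline": "Provide keyboard shortcuts for all menu actions",
     "context": {"users": "expert", "domain": "finance", "platform": "desktop"}},
]

def match_score(requirements: dict, case: dict) -> int:
    """Count how many requirement attributes the case's context shares."""
    return sum(1 for k, v in requirements.items() if case["context"].get(k) == v)

requirements = {"users": "novice", "platform": "desktop"}
best = max(CASES, key=lambda c: match_score(requirements, c))
print(best["guideline"])
```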

Proceedings ArticleDOI
22 Oct 1997
TL;DR: How usability testing affected the design and content of the documentation and how follow-on usability studies added significant new data not revealed in the initial tests are described.
Abstract: The paper describes two cases in which usability testing and documentation projects were performed in conjunction with one another. It describes how usability testing affected the design and content of the documentation and how follow-on usability studies added significant new data not revealed in the initial tests.