Author

Wayne D. Gray

Bio: Wayne D. Gray is an academic researcher from Rensselaer Polytechnic Institute. The author has contributed to research in topics: Cognitive model & Cognition. The author has an h-index of 33 and has co-authored 185 publications receiving 10,036 citations. Previous affiliations of Wayne D. Gray include the United States Department of the Navy & the University of California, Berkeley.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors define basic objects as those categories which carry the most information, possess the highest category cue validity, and are, thus, the most differentiated from one another.
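Because category cue validity is a conditional-probability measure, it can be illustrated directly. The sketch below is a minimal illustration assuming a toy representation of items as sets of attribute cues; the data and function names are invented for the example and are not taken from the paper, which defines a category's cue validity as the sum of the cue validities of its attributes.

def cue_validity(items, category, cue):
    # P(category | cue): of the items exhibiting `cue`, the fraction belonging to `category`.
    with_cue = [it for it in items if cue in it["cues"]]
    if not with_cue:
        return 0.0
    return sum(it["category"] == category for it in with_cue) / len(with_cue)

def category_cue_validity(items, category):
    # Sum of cue validities over every cue that occurs among the category's items.
    cues = {c for it in items if it["category"] == category for c in it["cues"]}
    return sum(cue_validity(items, category, c) for c in cues)

# Toy data (hypothetical): basic-level categories share many within-category cues
# that are rare in other categories, so their summed cue validity is high.
items = [
    {"category": "chair", "cues": {"legs", "seat", "back"}},
    {"category": "chair", "cues": {"legs", "seat", "arms"}},
    {"category": "table", "cues": {"legs", "flat_top"}},
]
print(category_cue_validity(items, "chair"))  # seat/back/arms are diagnostic; shared "legs" is not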

5,074 citations

Journal ArticleDOI
TL;DR: In this review, the designs of 5 experiments that compared usability evaluation methods (UEMs) are examined, showing that small problems in the way these experiments were designed and conducted call into serious question what the authors thought they knew regarding the efficacy of various UEMs.
Abstract: An interest in the design of interfaces has been a core topic for researchers and practitioners in the field of human-computer interaction (HCI); an interest in the design of experiments has not. To the extent that reliable and valid guidance for the former depends on the results of the latter, it is necessary that researchers and practitioners understand how small features of an experimental design can cast large shadows over the results and conclusions that can be drawn. In this review we examine the design of 5 experiments that compared usability evaluation methods (UEMs). Each has had an important influence on HCI thought and practice. Unfortunately, our examination shows that small problems in the way these experiments were designed and conducted call into serious question what we thought we knew regarding the efficacy of various UEMs. If the influence of these experiments were trivial, then such small problems could be safely ignored. Unfortunately, the outcomes of these experiments have been used to justify advice to practitioners regarding their choice of UEMs. Making such choices based on misleading or erroneous claims can be detrimental--compromising the quality and integrity of the evaluation, incurring unnecessary costs, or undermining the practitioner's credibility within the design team. The experimental method is a potent vehicle that can help inform the choice of a UEM as well as help to address other HCI issues. However, to obtain the desired outcomes, close attention must be paid to experimental design.

515 citations

Journal ArticleDOI
TL;DR: The process and results of model building, as well as the design and outcome of the field trial, are discussed; the accuracy of the GOMS predictions is assessed, and the mechanisms of the models are used to explain the empirical results.
Abstract: Project Ernestine served a pragmatic as well as a scientific goal: to compare the worktimes of telephone company toll and assistance operators on two different workstations and to validate a GOMS analysis for predicting and explaining real-world performance. Contrary to expectations, GOMS predicted and the data confirmed that performance with the proposed workstation was slower than with the current one. Pragmatically, this increase in performance time translates into a cost of almost $2 million a year to NYNEX. Scientifically, the GOMS models predicted performance with exceptional accuracy. The empirical data provided us with three interesting results: proof that the new workstation was slower than the old one, evidence that this difference was not constant but varied with call category, and (in a trial that spanned 4 months and collected data on 72,450 phone calls) proof that performance on the new workstation stabilized after the first month. The GOMS models predicted the first two results and explained all three. In this article, we discuss the process and results of model building as well as the design and outcome of the field trial. We assess the accuracy of GOMS predictions and use the mechanisms of the models to explain the empirical results. Last, we demonstrate how the GOMS models can be used to guide the design of a new workstation and evaluate design decisions before they are implemented.
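Because GOMS predicts task time by composing primitive operator durations, the basic arithmetic can be sketched in a few lines. The sketch below is deliberately simplified and serial, with hypothetical operator times; the Ernestine models were CPM-GOMS models with parallel perceptual, cognitive, and motor operators, and the durations and call sequences here are placeholders, not the published values.

# Hypothetical operator durations in seconds (placeholders, not the published values).
OPERATOR_TIME = {
    "greet_customer": 1.20,
    "keystroke": 0.28,
    "read_screen": 0.34,
    "system_response": 0.50,
}

def predict_call_time(operators):
    # Serial GOMS-style estimate: total task time is the sum of operator durations.
    return sum(OPERATOR_TIME[op] for op in operators)

current = ["greet_customer", "keystroke", "keystroke", "read_screen"]
proposed = ["greet_customer", "keystroke", "keystroke", "system_response", "read_screen"]

delta = predict_call_time(proposed) - predict_call_time(current)
print(f"Predicted extra time per call: {delta:.2f} s")

Even a fraction of a second per call matters at the scale of a telephone operator workforce, which is how a small predicted per-call slowdown translates into the roughly $2 million annual cost cited in the abstract.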

366 citations

Journal ArticleDOI
TL;DR: Model and data support the SCH view of resource allocation; at the under 1000-ms level of analysis, mixtures of cognitive and perceptual-motor resources are adjusted based on their cost-benefit tradeoffs for interactive behavior.
Abstract: Soft constraints hypothesis (SCH) is a rational analysis approach that holds that the mixture of perceptual-motor and cognitive resources allocated for interactive behavior is adjusted based on temporal cost-benefit tradeoffs. Alternative approaches maintain that cognitive resources are in some sense protected or conserved in that greater amounts of perceptual-motor effort will be expended to conserve lesser amounts of cognitive effort. One alternative, the minimum memory hypothesis (MMH), holds that people favor strategies that minimize the use of memory. SCH is compared with MMH across 3 experiments and with predictions of an Ideal Performer Model that uses ACT-R’s memory system in a reinforcement learning approach that maximizes expected utility by minimizing time. Model and data support the SCH view of resource allocation; at the under 1000-ms level of analysis, mixtures of cognitive and perceptual-motor resources are adjusted based on their cost-benefit tradeoffs for interactive behavior.
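The cost-benefit logic of the soft constraints hypothesis can be illustrated with a toy strategy-selection calculation. The sketch below is only illustrative: the time parameters, failure probability, and strategy names are assumptions made for the example, not the paper's fitted values, and the actual Ideal Performer Model uses ACT-R's memory system within a reinforcement-learning framework.

# Hypothetical per-strategy time costs in milliseconds (illustrative assumptions).
def expected_time_ms(strategy, accesses=4):
    if strategy == "perceptual_motor":
        saccade, encode = 150, 300          # re-look at the display on every access
        return accesses * (saccade + encode)
    if strategy == "memory":
        encode_once, retrieve = 900, 100    # encode once, then retrieve from memory
        p_fail, relook = 0.15, 450          # failed retrievals force a re-look
        return encode_once + accesses * (retrieve + p_fail * relook)
    raise ValueError(strategy)

for s in ("perceptual_motor", "memory"):
    print(f"{s}: {expected_time_ms(s):.0f} ms")

# Soft constraints: allocate whichever mixture of resources minimizes expected time.
print("chosen:", min(("perceptual_motor", "memory"), key=expected_time_ms))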

306 citations

Journal ArticleDOI
TL;DR: A model of cognitive control in task switching is developed in which controlled performance depends on the system maintaining access to a code in episodic memory representing the most recently cued task, suggesting that episodic task codes play an important role in keeping the cognitive system focused under a variety of performance constraints.
Abstract: A model of cognitive control in task switching is developed in which controlled performance depends on the system maintaining access to a code in episodic memory representing the most recently cued task. The main constraint on access to the current task code is proactive interference from old task codes. This interference and the mechanisms that contend with it reproduce a wide range of behavioral phenomena when simulated, including well-known task-switching effects, such as latency and error switch costs, and effects on which other theories are silent, such as within-run slowing and within-run error increase. The model generalizes across multiple task-switching procedures, suggesting that episodic task codes play an important role in keeping the cognitive system focused under a variety of performance constraints.
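One way to see how proactive interference from old task codes can produce errors and slowing is a toy simulation. The sketch below is not the published model: the decay rate, retrieval rule, and timing constants are invented for illustration, and it captures only the qualitative idea that retrieval of the current task code competes with older episodic codes.

import random

def run_block(cued_tasks, decay=0.7, base_rt=500, interference_cost=200, seed=1):
    # Each cued trial stores an episodic task code; retrieval samples among all
    # stored codes with recency-weighted probability, so older codes occasionally
    # intrude (proactive interference), producing errors and slower responses.
    random.seed(seed)
    codes = []          # list of [task, age]
    results = []
    for trial, task in enumerate(cued_tasks):
        codes.append([task, 0])
        weights = [decay ** age for _, age in codes]
        retrieved = random.choices([t for t, _ in codes], weights=weights)[0]
        competition = 1.0 - weights[-1] / sum(weights)   # activation held by old codes
        results.append({"trial": trial, "cued": task,
                        "correct": retrieved == task,
                        "rt_ms": round(base_rt + interference_cost * competition)})
        for code in codes:
            code[1] += 1
    return results

for r in run_block(["A", "A", "B", "B", "A"]):
    print(r)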

293 citations


Cited by
Book
01 Jan 1993
TL;DR: This guide to usability engineering emphasizes cost-effective methods that will help developers improve their user interfaces immediately and shows how to avoid the four most frequently listed reasons for delay in software projects.
Abstract: From the Publisher: Written by the author of the best-selling HyperText & HyperMedia, this book provides an excellent guide to the methods of usability engineering. Special features: emphasizes cost-effective methods that will help developers improve their user interfaces immediately, shows you how to avoid the four most frequently listed reasons for delay in software projects, provides step-by-step information about which methods to use at various stages during the development life cycle, and offers information on the unique issues relating to international usability. You do not need to have previous knowledge of usability to implement the methods provided, yet all of the latest research is covered.

11,929 citations

Journal ArticleDOI
TL;DR: The metric and dimensional assumptions that underlie the geometric representation of similarity are questioned on both theoretical and empirical grounds, and a set of qualitative assumptions is shown to imply the contrast model, which expresses the similarity between objects as a linear combination of the measures of their common and distinctive features.
Abstract: The metric and dimensional assumptions that underlie the geometric representation of similarity are questioned on both theoretical and empirical grounds. A new set-theoretical approach to similarity is developed in which objects are represented as collections of features, and similarity is described as a feature-matching process. Specifically, a set of qualitative assumptions is shown to imply the contrast model, which expresses the similarity between objects as a linear combination of the measures of their common and distinctive features. Several predictions of the contrast model are tested in studies of similarity with both semantic and perceptual stimuli. The model is used to uncover, analyze, and explain a variety of empirical phenomena such as the role of common and distinctive features, the relations between judgments of similarity and difference, the presence of asymmetric similarities, and the effects of context on judgments of similarity. The contrast model generalizes standard representations of similarity data in terms of clusters and trees. It is also used to analyze the relations of prototypicality and family resemblance.
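The contrast model itself is compact enough to state as code. The sketch below assumes the simplest salience measure, set cardinality; the feature sets and weights in the usage example are hypothetical, chosen only to show how unequal weighting of the two distinctive-feature terms yields asymmetric similarity.

def contrast_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5, f=len):
    # Tversky's contrast model: similarity is a weighted measure of the common
    # features minus weighted measures of each object's distinctive features.
    a, b = set(a), set(b)
    return theta * f(a & b) - alpha * f(a - b) - beta * f(b - a)

# Hypothetical feature sets: with alpha > beta, the less prominent object is judged
# more similar to the prominent one than the reverse (asymmetric similarity).
variant = {"asian", "country", "communist"}
prototype = {"asian", "country", "communist", "large", "populous", "ancient"}
print(contrast_similarity(variant, prototype, alpha=0.8, beta=0.2))  # 2.4
print(contrast_similarity(prototype, variant, alpha=0.8, beta=0.2))  # 0.6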

7,251 citations

Journal ArticleDOI
TL;DR: The data allow us to reject alternative accounts of the function of the fusiform face area (area “FF”) that appeal to visual attention, subordinate-level classification, or general processing of any animate or human forms, demonstrating that this region is selectively involved in the perception of faces.
Abstract: Using functional magnetic resonance imaging (fMRI), we found an area in the fusiform gyrus in 12 of the 15 subjects tested that was significantly more active when the subjects viewed faces than when they viewed assorted common objects. This face activation was used to define a specific region of interest individually for each subject, within which several new tests of face specificity were run. In each of five subjects tested, the predefined candidate “face area” also responded significantly more strongly to passive viewing of (1) intact than scrambled two-tone faces, (2) full front-view face photos than front-view photos of houses, and (in a different set of five subjects) (3) three-quarter-view face photos (with hair concealed) than photos of human hands; it also responded more strongly during (4) a consecutive matching task performed on three-quarter-view faces versus hands. Our technique of running multiple tests applied to the same region defined functionally within individual subjects provides a solution to two common problems in functional imaging: (1) the requirement to correct for multiple statistical comparisons and (2) the inevitable ambiguity in the interpretation of any study in which only two or three conditions are compared. Our data allow us to reject alternative accounts of the function of the fusiform face area (area “FF”) that appeal to visual attention, subordinate-level classification, or general processing of any animate or human forms, demonstrating that this region is selectively involved in the perception of faces.

7,059 citations

Journal ArticleDOI
TL;DR: This paper presents work on computing shape models that are computationally fast and invariant to basic transformations such as translation, scaling, and rotation, and proposes shape detection using a feature called shape context, which is descriptive of the shape of the object.
Abstract: We present a novel approach to measuring similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set.
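The descriptor at the heart of the method, the shape context, is a log-polar histogram of the other points' positions relative to each reference point, and it can be sketched compactly. The code below is a simplified illustration: the bin counts and radial range are arbitrary choices, and the correspondence (optimal assignment) and thin-plate-spline alignment stages described in the abstract are omitted.

import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    # Log-polar histogram, per reference point, of the positions of all other points.
    points = np.asarray(points, dtype=float)
    n = len(points)
    diffs = points[None, :, :] - points[:, None, :]            # (n, n, 2) offsets
    dists = np.linalg.norm(diffs, axis=2)
    angles = np.arctan2(diffs[..., 1], diffs[..., 0]) % (2 * np.pi)
    mean_dist = dists[dists > 0].mean()                        # normalize for scale invariance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_dist
    histograms = np.zeros((n, n_r, n_theta))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r_bin = np.searchsorted(r_edges, dists[i, j]) - 1
            if 0 <= r_bin < n_r:
                t_bin = int(angles[i, j] / (2 * np.pi) * n_theta) % n_theta
                histograms[i, r_bin, t_bin] += 1
    return histograms.reshape(n, -1)

# Toy usage: descriptors for points sampled on a unit-square outline.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0), (1, 0.5), (0.5, 1), (0, 0.5)]
print(shape_context(square).shape)   # (8, 60): one 5x12 histogram per point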

6,693 citations

Journal ArticleDOI
TL;DR: In this paper, the scope and range of ethnocentrism in group behavior are discussed, with the focus on the individual rather than on the group as a whole.
Abstract: Individual Processes in Intergroup Behavior: From Individual to Group Impressions. Group Membership and Intergroup Behavior: The Scope and Range of Ethnocentrism; The Development of Ethnocentrism; Intergroup Conflict and Competition (interpersonal and intergroup behavior, intergroup conflict and group cohesion, power and status in intergroup behavior); Social Categorization and Intergroup Behavior (social categorization: cognitions, values, and groups; social categorization and intergroup discrimination; social identity and social comparison). The Reduction of Intergroup Discrimination: Intergroup Cooperation and Superordinate Goals; Intergroup Contact; Multigroup Membership and "Individualization" of the Outgroup. Summary.

6,550 citations