
Showing papers in "International Journal of Human-Computer Studies / International Journal of Man-Machine Studies in 1997"


Journal ArticleDOI
TL;DR: The Ontolingua Server as mentioned in this paper is a set of tools and services to support the process of achieving consensus on commonly shared ontologies by geographically distributed groups, and it allows users to publish, browse, create and edit ontologies stored on an ontology server.
Abstract: Reusable ontologies are becoming increasingly important for tasks such as information integration, knowledge-level interoperation and knowledge-base development. We have developed a set of tools and services to support the process of achieving consensus on commonly shared ontologies by geographically distributed groups. These tools make use of the World Wide Web to enable wide access and provide users with the ability to publish, browse, create and edit ontologies stored on an ontology server. Users can quickly assemble a new ontology from a library of modules. We discuss how our system was constructed, how it exploits existing protocols and browsing tools, and our experience supporting hundreds of users. We describe applications using our tools to achieve consensus on ontologies and to integrate information. The Ontolingua Server may be accessed through the URL http://ontolingua.stanford.edu

893 citations


Journal ArticleDOI
TL;DR: The main message is that early in the knowledge engineering process an application-specific ontology should be constructed, and some principles for organizing a library of reusable ontological theories which can be configured into an application ontology are presented.
Abstract: This article presents a number of ways in which ontologies (schematic descriptions of the contents of domain knowledge) can be constructed and can be used to improve the knowledge engineering process. The main message is that early in the knowledge engineering process an application-specific ontology should be constructed. To facilitate this, the article presents some principles for organizing a library of reusable ontological theories which can be configured into an application ontology. This application ontology is then exploited to organize the knowledge acquisition process and to support computational design. The process is illustrated with a knowledge engineering scenario in the domain of treating acute radiation syndrome.

816 citations


Journal ArticleDOI
TL;DR: Based on users' revisitation patterns to World Wide Web pages, eight design guidelines for web browser history mechanisms were formulated; they explain why some aspects of today's browsers seem to work well, and others poorly.
Abstract: We report on users' revisitation patterns to World Wide Web (web) pages, and use the results to lay an empirical foundation for the design of history mechanisms in web browsers. Through history, a user can return quickly to a previously visited page, possibly reducing the cognitive and physical overhead required to navigate to it from scratch. We analysed 6 weeks of detailed usage data collected from 23 users of a well-known web browser. We found that 58% of an individual's pages are revisits, and that users continually add new web pages into their repertoire of visited pages. People tend to revisit pages just visited, access only a few pages frequently, browse in very small clusters of related pages and generate only short sequences of repeated URL paths. We compared different history mechanisms, and found that the stack-based prediction method prevalent in commercial browsers is inferior to the simpler approach of showing the last few recently visited URLs with duplicates removed. Other predictive approaches fare even better. Based on empirical evidence, eight design guidelines for web browser history mechanisms were then formulated. When used to evaluate the existing hypertext-based history mechanisms, they explain why some aspects of today's browsers seem to work well, and others poorly. The guidelines also indicate how history mechanisms in the web can be made even more effective. This article is a major expansion of a conference paper (Tauscher & Greenberg, 1997); the research reported here was performed as part of an M.Sc. project (Tauscher, 1996).
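The recency-based history approach the study found superior to stack-based prediction can be sketched as follows (a minimal illustration, not the authors' implementation; the function and variable names are invented):

```python
def recency_history(visits, n=10):
    """Return the last n distinct URLs, most recent first, duplicates removed."""
    seen, recent = set(), []
    for url in reversed(visits):          # walk backwards from the newest visit
        if url not in seen:
            seen.add(url)
            recent.append(url)
        if len(recent) == n:
            break
    return recent

# A visit log in which pages "a" and "b" are revisited:
print(recency_history(["a", "b", "c", "b", "d", "a"], n=3))  # ['a', 'd', 'b']
```

Unlike a stack, this list never loses branches of the browsing path, and removing duplicates keeps frequently revisited pages from crowding out the rest.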

638 citations


Journal ArticleDOI
TL;DR: The author defends the thesis of the independence between domain knowledge and problem-solving knowledge, arguing against the dominance of the so-called "interaction problem" invoked in a recent paper to dispute the feasibility of a single domain ontology shared by a number of different applications.
Abstract: I defend here the thesis of the independence between domain knowledge and problem-solving knowledge, arguing against the dominance of the so-called "interaction problem" mentioned in a recent paper by Van Heijst, Schreiber and Wielinga to dispute the feasibility of a single domain ontology shared by a number of different applications. The main point is that reusability across multiple tasks or methods can and should be systematically pursued even when modelling knowledge related to a single task or method. Under this view, I discuss how the principles of formal ontology and ontological engineering can be used in the practice of knowledge engineering, focusing in particular on the interplay between general ontologies, method ontologies and application ontologies, and on the role of ontologies in the knowledge engineering process. I will then stress the role of domain analysis, often absent in current methodologies for the development of knowledge-based systems.

458 citations


Journal ArticleDOI
John M. Carroll1
TL;DR: Human-computer interaction study has progressively integrated its scientific concerns with the engineering goal of improving the usability of computer systems and applications, which has resulted in a body of technical knowledge and methodology.
Abstract: Human-computer interaction (HCI) is the area of intersection between psychology and the social sciences, on the one hand, and computer science and technology, on the other. HCI researchers analyse and design specific user-interface technologies (e.g. three-dimensional pointing devices, interactive video). They study and improve the processes of technology development (e.g. usability evaluation, design rationale). They develop and evaluate new applications of technology (e.g. computer conferencing, software design environments). Through the past two decades, HCI has progressively integrated its scientific concerns with the engineering goal of improving the usability of computer systems and applications, thus establishing a body of technical knowledge and methodology. HCI continues to provide a challenging test domain for applying and developing psychology and social science in the context of technology development and use.

337 citations


Journal ArticleDOI
TL;DR: The current version of the Basic Support for Cooperative Work system is described in detail, including design choices resulting from use of the web as a cooperation platform and feedback from users following the release of a previous version of BSCW to the public domain.
Abstract: The emergence and widespread adoption of the World Wide Web offers a great deal of potential in supporting cross-platform cooperative work within widely dispersed working groups. The Basic Support for Cooperative Work (BSCW) project at GMD is attempting to realize this potential through development of web-based tools which provide cross-platform collaboration services to groups using existing web technologies. This paper describes one of these tools, the BSCW Shared Workspace system, a centralized cooperative application integrated with an unmodified web server and accessible from standard web browsers. The BSCW system supports cooperation through "shared workspaces": small repositories in which users can upload documents, hold threaded discussions and obtain information on the previous activities of other users to coordinate their own work. The current version of the system is described in detail, including design choices resulting from use of the web as a cooperation platform and feedback from users following the release of a previous version of BSCW to the public domain.

334 citations


Journal ArticleDOI
TL;DR: The study concludes that flattery from a computer can produce the same general effects as flattery from humans, as described in the psychology literature.
Abstract: A laboratory experiment examines the claims that (1) humans are susceptible to flattery from computers and (2) the effects of flattery from computers are the same as the effects of flattery from humans. In a cooperative task with a computer, subjects (N=41) received one of three types of feedback from a computer: “sincere praise”, “flattery” (insincere praise) or “generic feedback”. Compared to generic-feedback subjects, flattery subjects reported more positive affect, better performance, more positive evaluations of the interaction and more positive regard for the computer, even though subjects knew that the flattery from the computer was simply noncontingent feedback. Subjects in the sincere praise condition responded similarly to those in the flattery condition. The study concludes that the effects of flattery from a computer can produce the same general effects as flattery from humans, as described in the psychology literature. These findings may suggest significant implications for the design of interactive technologies.

322 citations


Journal ArticleDOI
TL;DR: A set of high-level hypermedia features including typed nodes and links, link attributes, structure-based query, transclusions, warm and hot links, private and public links, hypermedia access permissions, computed personalized links, external link databases, link update mechanisms, overviews, trails, guided tours, backtracking and history-based navigation are presented.
Abstract: World Wide Web authors must cope in a hypermedia environment analogous to second-generation computing languages, building and managing most hypermedia links using simple anchors and single-step navigation. Following this analogy, sophisticated application environments on the World Wide Web will require third- and fourth-generation hypermedia features. Implementing third- and fourth-generation hypermedia involves designing both high-level hypermedia features and the high-level authoring environments system developers build for authors to specify them. We present a set of high-level hypermedia features including typed nodes and links, link attributes, structure-based query, transclusions, warm and hot links, private and public links, hypermedia access permissions, computed personalized links, external link databases, link update mechanisms, overviews, trails, guided tours, backtracking and history-based navigation. We ground our discussion in the hypermedia research literature, and illustrate each feature both from existing implementations and a running scenario. We also give some direction for implementing these on the World Wide Web and in other information systems.

222 citations


Journal ArticleDOI
TL;DR: This review of design issues identifies genres of web sites, goals of designers, communities of users and a spectrum of tasks, and an Objects/Actions Interface Model is offered as a way to think about designing and evaluating web sites.
Abstract: "Gradually I began to feel that we were growing something almost organic in a new kind of reality, in cyberspace, growing it out of information, a pulsing tree of data that I loved to climb around in, scanning for new growth." (Mickey Hart, Drumming at the Edge of Magic: A Journey into the Spirit of Percussion, 1990) "Look at every path closely and deliberately. Try it as many times as you think necessary. Then ask yourself, and yourself alone, one question: Does this path have a heart? If it does, the path is good; if it doesn't, it is of no use." (Carlos Castaneda, The Teachings of Don Juan) The abundance of information on the World Wide Web has thrilled some, but frightened others. Improved web site design may increase users' successful experiences and positive attitudes. This review of design issues identifies genres of web sites, goals of designers, communities of users and a spectrum of tasks. Then an Objects/Actions Interface Model is offered as a way to think about designing and evaluating web sites. Finally, search and navigation improvements are described to bring consistency, comprehensibility and user control.

220 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived hypotheses about how and why individuals will choose between electronic mail and voice mail and tested them among users of both media in the corporate headquarters of a large company.
Abstract: How and why people choose which communication medium to use is an important issue for both behavioral researchers and software product developers. Little is yet known about how and why people in organizations choose among new media like electronic mail and voice mail, although the availability and use of new media are increasing dramatically. Media richness theory (MRT) is the most prominent, if contested, theory of media choice. It is concerned with identifying the most appropriate medium in terms of "medium richness" for communication situations characterized by equivocality and uncertainty. From this theory, we derived hypotheses about how and why individuals will choose between electronic mail and voice mail and tested them among users of both media in the corporate headquarters of a large company. The data are analysed using both quantitative and qualitative techniques. The results fail to support MRT, but they do support alternative explanations of people's media choice behavior. While the concept of media richness is too poor to explain the richness of people's media use behavior, our behavioral findings and explanations should prove useful to those building the next generation of integrated multimedia communication tools.

206 citations


Journal ArticleDOI
TL;DR: This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out information using various communication tools, and describes why social navigation is useful and how it can be supported better in future systems.
Abstract: This paper discusses a navigation behavior on Internet information services, in particular the World Wide Web, which is characterized by pointing out information using various communication tools. We call this behavior social navigation, as it is based on communication and interaction with other users, be it through email, or any other means of communication. Social navigation phenomena are quite common although most current tools (like web browsers or email clients) offer very little support for it. We describe why social navigation is useful and how it can be supported better in future systems. We further describe two prototype systems that, although originally not designed explicitly as tools for social navigation, provide features that are typical for social navigation systems. One of these systems, the Juggler system, is a combination of a textual virtual environment and a web client. The other system is a prototype of a web-hotlist organizer, called Vortex. We use both systems to describe fundamental principles of social navigation systems.

Journal ArticleDOI
TL;DR: Greater facilitative effects of the page-by-page presentation were observed in both tasks, and participants' reading task performance indicated that they built a better mental representation of the text as a whole and were better at locating relevant information and remembering the main ideas.
Abstract: Two studies using the methods of experimental psychology assessed the effects of two types of text presentation (page-by-page vs. scrolling) on participants' performance while reading and revising texts. Greater facilitative effects of the page-by-page presentation were observed in both tasks. The participants' reading task performance indicated that they built a better mental representation of the text as a whole and were better at locating relevant information and remembering the main ideas. Their revising task performance indicated a larger number of global corrections (which are the most difficult to make).

Journal ArticleDOI
TL;DR: The future of GSS research is discussed in terms of what is needed, some important research questions and some possible directions.
Abstract: Group support systems (GSS) represent one of the real success stories of research by the MIS academic community. There is no doubt that GSS academic research has had an impact on practice in the MIS field. This paper discusses the future of GSS research in terms of what is needed and some important research questions, and offers some possible directions. Section 1 describes what a GSS is and explains the underlying fundamental background. Section 2 explains why GSS research is needed. Section 3 describes the multimethodological approach that is needed for well-grounded GSS research programs. Section 4 discusses some of the major issues in applying GSS in organizational settings. Section 5 explores the scope of GSS research and the questions that need to be answered. Section 6 provides keys to successful distributed collaboration from our experience. Section 7 starts to answer the difficult question "what is needed for a distributed workspace?" Section 8 begins to clarify just what virtual reality can offer for distributed collaboration. Section 9 explains the justification for a virtual reality representation of the distributed office. Section 10 explores what we need to get real work done in a virtual workspace including: support for sense making during the process, automating bottlenecks in the process, modeling through simulation and animation, multiple languages, education, crisis response and software inspection. Research in GSS is just beginning and thousands of questions must be answered before we can have an understanding of the field.

Journal ArticleDOI
TL;DR: The evolution and development of one such inexpensive, simple, networked tele-operated mobile robot (tele-mobot), designed to provide the ability to enter the office of the person at the other end of the connection, is described.
Abstract: In the rush into cyberspace we leave our physical presence and our real-world environment behind. The internet, undoubtedly a remarkable modern communications tool, still does not empower us to enter the office of the person at the other end of the connection. We cannot look out of their window, admire their furniture, talk to their office-mates, tour their laboratory or walk outside. We lack the equivalent of a body at the other end that we can move around in, communicate through and observe with. However, by combining elements of the internet and tele-robotics it is possible to transparently immerse users into navigable real remote worlds filled with rich spatial sensorium and to make such systems accessible from any networked computer in the world, in essence: tele-embodiment. In this article we describe the evolution and development of one such inexpensive, simple, networked tele-operated mobile robot (tele-mobot) designed to provide this ability. We also discuss several social implications and philosophical questions raised by this research.

Journal ArticleDOI
TL;DR: The accuracy of, and influences on, attributions of authors' identities in seven field groups with considerable work history after they used the GSS to enter technically anonymous comments about salient topics during a brainstorming session are analyzed.
Abstract: This study explores the taken-for-granted assumption that "anonymous" comments posted on a group support system (GSS) are socially as well as technically anonymous. It analyses the accuracy of, and influences on, attributions of authors' identities in seven field groups with considerable work history after they used the system to enter technically anonymous comments about salient topics during a brainstorming session. GSS participants made attributions about authors' identities, but overall these attributions were about 12% accurate (ranging from 1 to 29%). The expected predictors of accuracy (an individual's total communication with the group, network centrality, and length of membership in the group) were inconsistent influences across the seven groups.

Journal ArticleDOI
TL;DR: This paper describes ANATAGONOMY, a personalized newspaper on the World Wide Web that is personalized without asking users to specify their preferences explicitly; a scheme in which the user scores each article explicitly is compared with one in which all the personalization is done automatically.
Abstract: This paper describes a personalized newspaper on the World Wide Web (WWW), called ANATAGONOMY. The main feature of this system is that the newspaper is personalized without asking the users to specify their preferences explicitly. The system monitors user operations on the articles and reflects them in the user profiles. Unlike conventional newspapers on the WWW, our system sends an interaction agent implemented as a Java applet to the client side, and the agent monitors the user operations and creates each user's newspaper pages automatically. The server side manages user profiles and anticipates how interesting an article would be for each user. The interaction agent on the client side manages all the user interactions, including the automatic layout of pages. Our system has multiple page-layout algorithms and the user can switch from one view to another anytime, according to preference or machine environment. On one of the views, the user can even see all the articles sequentially without performing any operations. We evaluated a scheme in which the user scores each article explicitly, and a scheme in which all the personalization is done automatically. The results show that automatic personalization works well when some parameters are set properly.
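Implicit profiling of the kind the abstract describes, scoring articles from monitored operations rather than explicit ratings, might be sketched like this (the signal names and weights below are invented for illustration; the paper's actual scheme differs in detail):

```python
# Hypothetical weights for implicit feedback signals (not taken from the paper).
OPERATION_WEIGHTS = {"open": 1.0, "scroll": 0.5, "enlarge": 2.0, "skip": -1.0}

def update_profile(profile, keywords, operation):
    """Fold one monitored user operation on an article into the keyword profile."""
    weight = OPERATION_WEIGHTS.get(operation, 0.0)
    for kw in keywords:
        profile[kw] = profile.get(kw, 0.0) + weight
    return profile

def predict_interest(profile, keywords):
    """Anticipate how interesting an article would be: sum of its keyword scores."""
    return sum(profile.get(kw, 0.0) for kw in keywords)

profile = {}
update_profile(profile, ["soccer", "finals"], "enlarge")
update_profile(profile, ["politics"], "skip")
print(predict_interest(profile, ["soccer"]))    # 2.0
print(predict_interest(profile, ["politics"]))  # -1.0
```

The server-side ranking the abstract mentions would then simply sort candidate articles by such a predicted-interest score.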

Journal ArticleDOI
TL;DR: The PDQ Tree-browser, as discussed by the authors, presents trees in two tightly-coupled views, one a detailed view and the other an overview; users can use dynamic queries, a method for rapidly filtering data, to filter nodes at each level of the tree.
Abstract: Users often must browse hierarchies with thousands of nodes in search of those that best match their information needs. The PDQ Tree-browser (Pruning with Dynamic Queries) visualization tool was specified, designed and developed for this purpose. This tool presents trees in two tightly-coupled views, one a detailed view and the other an overview. Users can use dynamic queries, a method for rapidly filtering data, to filter nodes at each level of the tree. The dynamic query panels are user-customizable. Sub-trees of unselected nodes are pruned out, leading to compact views of relevant nodes. Usability testing of the PDQ Tree-browser, done with eight subjects, helped assess strengths and identify possible improvements. The PDQ Tree-browser was used in Network Management (600 nodes) and UniversityFinder (1100 nodes) applications. A controlled experiment, with 24 subjects, showed that pruning significantly improved performance speed and subjective user satisfaction. Future research directions are suggested.
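The pruning behavior described above, dropping the entire subtree of any node that fails the dynamic-query filter at its level, can be sketched as follows (a simplified illustration; the data layout and names are assumed, not taken from the paper):

```python
def prune(node, level_filters, level=0):
    """Return a pruned copy of the tree, or None if this node fails the
    filter at its level (its whole subtree is then dropped)."""
    passes = level_filters[level] if level < len(level_filters) else (lambda n: True)
    if not passes(node):
        return None
    kept = [p for p in (prune(c, level_filters, level + 1)
                        for c in node.get("children", [])) if p is not None]
    return {"name": node["name"], "children": kept}

tree = {"name": "root", "children": [
    {"name": "cs", "children": [{"name": "hci", "children": []}]},
    {"name": "bio", "children": [{"name": "genetics", "children": []}]},
]}
# Filter level 1 to names containing "c"; other levels pass everything.
filters = [lambda n: True, lambda n: "c" in n["name"]]
print(prune(tree, filters))  # only the "cs" branch survives
```

Re-running such a filter on every slider or panel change is what makes the query "dynamic": the compacted view updates as the user adjusts the per-level criteria.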

Journal ArticleDOI
TL;DR: This paper proposes to enhance RSDA by two simple statistical procedures, both based on randomization techniques, to evaluate the validity of prediction based on the approximation quality of attributes of rough set dependency analysis.
Abstract: Rough set data analysis (RSDA) has recently become a frequently studied symbolic method in data mining. Among other things, it is being used for the extraction of rules from databases; it is, however, not clear from within the methods of rough set analysis, whether the extracted rules are valid. In this paper, we suggest to enhance RSDA by two simple statistical procedures, both based on randomization techniques, to evaluate the validity of prediction based on the approximation quality of attributes of rough set dependency analysis. The first procedure tests the casualness of a prediction to ensure that the prediction is not based on only a few (casual) observations. The second procedure tests the conditional casualness of an attribute within a prediction rule. The procedures are applied to three data sets, originally published in the context of rough set analysis. We argue that several claims of these analyses need to be modified because of lacking validity, and that other possibly significant results were overlooked.
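A randomization procedure of the general kind proposed, testing whether an observed approximation quality could arise from a chance assignment of the decision attribute, can be sketched as follows (a generic permutation test, not the authors' exact procedure; the toy quality measure simply counts objects in consistent condition classes):

```python
import random
from collections import defaultdict

def approximation_quality(condition_rows, decisions):
    """Toy gamma: fraction of objects whose condition class is consistent,
    i.e. all objects with identical condition values share one decision."""
    classes = defaultdict(list)
    for row, d in zip(condition_rows, decisions):
        classes[row].append(d)
    consistent = sum(len(ds) for ds in classes.values() if len(set(ds)) == 1)
    return consistent / len(decisions)

def randomization_test(condition_rows, decisions, trials=999, seed=0):
    """Estimate how often a random relabelling of the decision attribute
    reaches the observed approximation quality (smaller = less 'casual')."""
    rng = random.Random(seed)
    observed = approximation_quality(condition_rows, decisions)
    perm = list(decisions)
    hits = 0
    for _ in range(trials):
        rng.shuffle(perm)
        if approximation_quality(condition_rows, perm) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)   # permutation-test p-value estimate
```

A small estimated p-value suggests the dependency is unlikely to rest on a few chance observations, which is the concern the paper's casualness test addresses.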

Journal ArticleDOI
TL;DR: This work applies techniques from voting theory to arrive at consensus choices for meeting times while balancing different preferences, particularly in developing intelligent agents that can partially automate routine information processing tasks.
Abstract: Our research agenda focuses on building software agents that can facilitate and streamline group problem solving in organizations. We are particularly interested in developing intelligent agents that can partially automate routine information processing tasks by representing and reasoning with the preferences and biases of associated users. The distributed meeting scheduler is a collection of agents, responsible for scheduling meetings for their respective users. Users have preferences on when they like to meet, e.g. time of day, day of week, status of other invitees, topic of the meeting, etc. The agent must balance such concerns, proposing and accepting meeting times that satisfy as many of these criteria as possible. For example, a user might prefer not to meet at lunchtime unless the president of the company is hosting the meeting. We apply techniques from voting theory to arrive at consensus choices for meeting times while balancing different preferences.
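One standard voting-theoretic way to balance such preferences, though not necessarily the scheme the authors use, is a Borda count over the candidate meeting slots:

```python
def borda_consensus(rankings):
    """Pick the meeting slot with the highest total Borda score.
    Each agent submits its slots ordered from most to least preferred."""
    scores = {}
    for ranking in rankings:
        top = len(ranking) - 1
        for rank, slot in enumerate(ranking):
            scores[slot] = scores.get(slot, 0) + (top - rank)
    return max(scores, key=scores.get)

# Three agents rank three candidate slots:
prefs = [["9am", "2pm", "4pm"],
         ["2pm", "9am", "4pm"],
         ["2pm", "4pm", "9am"]]
print(borda_consensus(prefs))  # 2pm (scores: 9am=3, 2pm=5, 4pm=1)
```

Weighting a ranking by the invitee's status, as in the lunchtime-meeting example above, would amount to multiplying that agent's Borda scores before summing.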

Journal ArticleDOI
TL;DR: It is suggested that virtual hierarchies and virtual networks will assist users to find task-relevant information more easily and quickly and also help web authors to ensure that their pages are targeted at the users who wish to see them.
Abstract: The paper considers the usability of the World Wide Web in the light of a decade of research into the usability of hypertext and hypermedia systems. The concepts of virtual hierarchies and virtual networks are introduced as a mechanism for alleviating some of the shortcomings inherent in the current implementations of the web, without violating its basic philosophy. It is suggested that virtual hierarchies and virtual networks will assist users to find task-relevant information more easily and quickly and also help web authors to ensure that their pages are targeted at the users who wish to see them. The paper first analyses the published work on hypermedia usability, identifying the assumptions that underlie this research and relating them to the assumptions underlying the web. Some general conclusions are presented about both hypermedia usability principles and their applicability to the web. These results are coupled with problems identified from other sources to produce a requirements list for improving web usability. A possible solution is then presented which utilizes the capabilities of existing distributed information management software to permit web users to create virtual hierarchies and virtual networks. Some ways in which these virtual structures assist searchers to find useful information, and thus help authors to publicize their information more effectively, are described. The explanation is illustrated by examples taken from the GENIE Service, an implementation of some of the ideas. This uses the World Wide Web as a means of allowing global environmental change researchers throughout the world to find data that may be relevant to their research.

Journal ArticleDOI
TL;DR: This paper describes the approach to the development of an Internet-based course designed for distance education and provides general observations on the opportunities and constraints which the web provides and on the pedagogic issues which arise when using this delivery mechanism.
Abstract: The phenomenal growth of the Internet over the last few years, coupled with the development of various multimedia applications which exploit the Internet presents exciting opportunities for educators. In the context of distance education, the World Wide Web provides a unique challenge as a new delivery mechanism for course material allowing students to take a course (potentially) from anywhere in the world. In this paper, we describe our approach to the development of an Internet-based course designed for distance education. Using this experience, we provide general observations on the opportunities and constraints which the web provides and on the pedagogic issues which arise when using this delivery mechanism. We have found that the process of developing web-based courses is one area which requires careful consideration as technologies and tools for both the authoring and the delivery of courses are evolving so rapidly. We have also found that current tools are severely lacking in a number of important respects, particularly with respect to the design of pedagogically sound courseware.

Journal ArticleDOI
TL;DR: Monitoring for automation failure was inefficient when automation reliability was constant but not when it varied over time, replicating previous results; there was no evidence of a resource or speed-accuracy trade-off between tasks.
Abstract: Operators can be poor monitors of automation if they are engaged concurrently in other tasks. However, in previous studies of this phenomenon the automated task was always presented in the periphery, away from the primary manual tasks that were centrally displayed. In this study we examined whether centrally locating an automated task would boost monitoring performance during a flight-simulation task consisting of system monitoring, tracking and fuel resource management sub-tasks. Twelve nonpilot subjects were required to perform the tracking and fuel management tasks manually while watching the automated system monitoring task for occasional failures. The automation reliability was constant at 87.5% for six subjects and variable (alternating between 87.5% and 56.25%) for the other six subjects. Each subject completed four 30 min sessions over a period of 2 days. In each automation reliability condition the automation routine was disabled for the last 20 min of the fourth session in order to simulate catastrophic automation failure (0% reliability). Monitoring for automation failure was inefficient when automation reliability was constant but not when it varied over time, replicating previous results. Furthermore, there was no evidence of a resource or speed-accuracy trade-off between tasks. Thus, automation-induced failures of monitoring cannot be prevented by centrally locating the automated task.

Journal ArticleDOI
TL;DR: Issues regarding task delegation as they pertain to the design of intelligent agent-user interfaces are described; agent-interface designs need to overcome well-established drawbacks in delegation.
Abstract: There is currently a great deal of interest in the development of intelligent agents. While there is little agreement on exactly what constitutes an intelligent agent, many definitions embody a user-interface model that differs from the traditional one where users perform tasks with the help of computer-based "tools". In contrast, the "delegation" model associated with agents is based on entrusting tasks to an autonomous, sometimes anthropomorphized system, whose performance is monitored and evaluated. This change in user-interface model is a dramatic one since delegation can be a difficult and often-avoided behavior in humans. Agent-interface designs need to overcome well-established drawbacks in delegation. For this purpose, designers should find the management sciences and organizational psychology literatures to be as relevant as that of traditional human factors. This paper describes issues regarding task delegation as they pertain to the design of intelligent agent-user interfaces.

Journal ArticleDOI
TL;DR: Results show that, for hidden-profile tasks, a critical performance level must be reached before performance is positively related to satisfaction, and a curvilinear (U-shaped) relationship between information sharing and satisfaction was observed.
Abstract: This paper reports on an experimental study of information sharing for groups using a group support system (GSS). A group member's success or failure in sharing unique information can have important impacts on meeting outcomes. This research builds on previous work which has examined various factors that impact information-sharing performance. To examine these issues, groups processed a hidden profile task, i.e. a task with an asymmetrical distribution of information. In addition, group size (groups of four and seven) and the level of structure (structured or unstructured agenda) were manipulated. Results show that group size had no effect on information sharing. However, groups using the structured agenda shared more initially shared information and initially unshared information. Although no relationship was found between information-sharing performance and decision quality, a curvilinear (U-shaped) relationship between information sharing and satisfaction was observed. These results show that, for hidden-profile tasks, a critical performance level must be reached before performance is positively related to satisfaction. The paper concludes with a discussion of the findings and the implications for future research and use.
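A hidden-profile distribution of the kind manipulated here can be sketched as follows. The 50/50 shared/unshared split and the round-robin dealing scheme are illustrative assumptions, not the study's actual materials:

```python
def hidden_profile(items: list, group_size: int, shared_fraction: float = 0.5) -> list:
    # A "shared" subset of items is given to every group member, while
    # the remaining "unshared" items are dealt out round-robin, so that
    # no single member holds the complete profile: the group can only
    # reconstruct it by pooling unique information.
    n_shared = int(len(items) * shared_fraction)
    shared, unshared = items[:n_shared], items[n_shared:]
    members = [list(shared) for _ in range(group_size)]
    for i, item in enumerate(unshared):
        members[i % group_size].append(item)
    return members
```

For example, `hidden_profile(list(range(12)), 4)` gives each of four members the six shared items plus one or two unique ones, mirroring the asymmetrical distribution used in hidden-profile tasks.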

Journal ArticleDOI
TL;DR: It is demonstrated that the process of decision making in groups varies in terms of persuasive arguments exchanged as a function of the interaction between the medium of communication and the cultural setting in which the decision is attempted.
Abstract: In this paper we examine the impact of technology and culture and their interaction on the process and outcomes of group decision making. The conceptual foundation for this research draws on three research streams: GSS, cross-cultural studies and group polarization. This paper uses the theory of persuasive arguments for studying group behavior in a computer-mediated, cross-cultural setting. Our findings illustrate that group decisions are a function of the medium of communication and the cultural setting in which the decision is attempted. In addition, the protocol analysis conducted demonstrates that the process of decision making in groups varies in terms of persuasive arguments exchanged as a function of the interaction between the medium of communication and the cultural setting observed. These results have both theoretical and practical implications for GSS research.

Journal ArticleDOI
TL;DR: Cognitive Task Analysis methods are used to identify decision requirements, as part of a project to improve the decision making of AEGIS cruiser officers in high-stress situations and found that by identifying these requirements, and centering the system design process on them, they could develop storyboards for a human-computer interface that reflected the user's needs.
Abstract: The decision requirements of a task are the key decisions and how they are made. Most task analysis methods address the steps that have to be followed; decision requirements offer a complementary picture of the critical and difficult judgments and decisions needed to carry out the task. This article describes the use of Cognitive Task Analysis methods to identify decision requirements, as part of a project to improve the decision making of AEGIS cruiser officers in high-stress situations. We found that by identifying these requirements, and centering the system design process on them, we could develop storyboards for a human-computer interface that reflected the user's needs.

Journal ArticleDOI
TL;DR: The results suggest that users' attitudes toward software are strongly influenced by their past history of usage, including what interaction styles the user has encountered, and this should be considered in the design of software and training programs.
Abstract: In recent years, a body of literature has developed which shows that users' perceptions of software are a key element in its ultimate acceptance and use. We focus on how the interaction style and prior experience with similar software affect users' perceptions of software packages. In our experiment, direct manipulation, menu-driven and command-driven interfaces were investigated. We studied users' perceptions of the software in two hands-on training sessions. In the first session, novice users were given initial training with word-processing software, and in the second session the users were trained on a word processor which was functionally equivalent to the prior one, but had a different interaction style. In the initial training session, we found that the interaction style had a reliable but small effect on learners' perceptions of ease of use. The direct manipulation interface was judged easier to use than the command style. The interaction style, however, did not affect learners' perceptions of the usefulness of the software. In the second training session, subjects who had used a direct manipulation interface in the first session learned either the menu-based or command-based software. The perceptions of these users were compared to those of learners who had used the menu or command software in the initial training session. We found that both interaction style and the prior experience with a direct manipulation interface affected perceptions of ease of use. Subjects with prior experience of a direct style interface tended to have very negative attitudes toward a less direct interface style. The interaction style did not affect perceptions of usefulness of the package, but the prior experience did. These results suggest that users' attitudes toward software are strongly influenced by their past history of usage, including what interaction styles the user has encountered, and this should be considered in the design of software and training programs.

Journal ArticleDOI
TL;DR: The study demonstrates the importance of recognizing the influence that managerial interventions and the use of new technology can have upon the conduct of software development, as well as the difficulties such changes may bring about when they disrupt organizational and cognitive processes such as “mutual adjustment” and “knowledge sharing".
Abstract: In this paper we report findings from a study of the impact of cognitive and organizational factors upon the work of a software development project within a commercial context. We chose to study the relationship between the way in which project work is organized; the distribution of knowledge amongst project members; their use of programming tools; and the major problems that occurred during the development of a large-scale computer program. Our findings point to a dynamic interplay between these factors which partly reflects the importance of expertise and knowledge within the project as well as evidence of opportunistic and emergent forms of work organization, communication and collaboration. Our study demonstrates the importance of recognizing the influence that managerial interventions and the use of new technology can have upon the conduct of software development, as well as the difficulties such changes may bring about when they disrupt organizational and cognitive processes such as “mutual adjustment” and “knowledge sharing”. We conclude the paper by describing a series of implications and recommendations. These cover issues related to the “knowledge intensive” nature of software development; the influence of new technology upon project work; as well as recommendations regarding the management of software projects and the software process.

Journal ArticleDOI
TL;DR: A method is described for analysing task-related information needs linked to design of information displays by defining users' requirements with information types and selecting appropriate means of information delivery according to the users' needs.
Abstract: Task analysis methods have paid little attention to specification of information displays. A method is described for analysing task-related information needs linked to design of information displays. The method starts by defining users' requirements with information types. These are added to the task model to specify what type of information is required during the task. The next step selects appropriate means of information delivery according to the users' needs. Different information access and display paradigms, e.g. hypertext, data retrieval and display media are considered. The method is illustrated with a case study of a shipboard information system.
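The selection step of the method, matching each task-related information type to an appropriate means of delivery, might be represented as a simple lookup. The type names and media below are hypothetical examples for illustration, not the paper's taxonomy:

```python
# Hypothetical mapping from information types (annotated onto the task
# model) to candidate display/delivery paradigms, in the spirit of the
# method described above.
INFO_TYPE_TO_MEDIUM = {
    "spatial": "map or diagram display",
    "status": "dashboard indicator",
    "procedural": "hypertext checklist",
    "descriptive": "text panel via data retrieval",
}

def select_medium(info_type: str) -> str:
    # Pick a delivery medium for the information type a task step
    # requires, defaulting to plain text retrieval for unknown types.
    return INFO_TYPE_TO_MEDIUM.get(info_type, "text panel via data retrieval")
```

In a shipboard case study like the one reported, each step of the task model would carry such type annotations, and the display specification would follow from the mapping.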

Journal ArticleDOI
TL;DR: A case study of a user centred design method in an in-house project and theoretical implications for knowledge work and the concept of user participation are discussed and practical recommendations given.
Abstract: This paper is concerned with various problems that can impede the implementation and practice of user participation in the software development process. We describe a case study of a user-centred design method in an in-house project. Taking a work organization perspective, we highlight several problem areas, relating to human and organizational issues. These arise from the internal processes of the method, the method's relationship with other procedures and the organizational context. We discuss the impacts of these problems and the interconnections between them. The key underlying issues identified are a lack of integrated effort and the failure to include the full range of necessary knowledge. Theoretical implications for knowledge work and the concept of user participation are discussed and practical recommendations given.