
Showing papers in "International Journal of Human-Computer Studies / International Journal of Man-Machine Studies in 1998"


Journal ArticleDOI
TL;DR: This paper presents both theoretical analysis and initial experimental results showing that learning is beneficial in Bazaar, a sequential negotiation model that is adaptive, multi-issue and capable of exhibiting a rich set of negotiation behaviors.
Abstract: Negotiation has been extensively discussed in game-theoretic, economic and management science literatures for decades. Recent growing interest in autonomous interacting software agents and their potential application in areas such as electronic commerce has given increased importance to automated negotiation. Evidence both from theoretical analysis and from observations of human interactions suggests that if decision makers can somehow take into consideration what other agents are thinking and furthermore learn during their interactions how other agents behave, their payoff might increase. In this paper, we propose a sequential decision-making model of negotiation, called Bazaar. It provides an adaptive, multi-issue negotiation model capable of exhibiting a rich set of negotiation behaviors. Within the proposed negotiation framework, we model learning as a Bayesian belief update process. We present both theoretical analysis and initial experimental results showing that learning is beneficial in the sequential negotiation model.
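
As a rough illustration of the kind of Bayesian belief update the abstract describes, the following sketch updates a buyer's beliefs about a seller's reservation price from an observed offer. The hypotheses, likelihood function and numbers are illustrative assumptions, not taken from the Bazaar implementation.

```python
# Illustrative sketch only: a minimal Bayesian belief update over hypotheses
# about an opponent's reservation price, in the spirit of the Bazaar model.
# Hypotheses, likelihood and numbers are assumptions, not the paper's code.

def update_beliefs(priors, likelihood, observed_offer):
    """Posterior P(h | offer) proportional to P(offer | h) * P(h) for each hypothesis h."""
    unnormalized = {h: likelihood(observed_offer, h) * p for h, p in priors.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# Hypotheses: the seller's reservation price is 80, 100 or 120.
priors = {80: 1 / 3, 100: 1 / 3, 120: 1 / 3}

# Assumed likelihood: offers tend to fall some fixed margin above the reservation price.
def likelihood(offer, reservation_price, margin=30.0):
    distance = abs(offer - (reservation_price + margin))
    return max(1e-6, 1.0 - distance / 100.0)

posterior = update_beliefs(priors, likelihood, observed_offer=112)
print(posterior)  # belief mass shifts toward the hypothesis that best explains the offer
```

Under this reading, each incoming offer sharpens the belief distribution, and the negotiating agent can condition its counter-offers on the current posterior.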

515 citations


Journal ArticleDOI
TL;DR: Brahms was developed as a tool to support the design of work by illuminating how formal flow descriptions relate to the social systems of work; this is accomplished by incorporating multiple views—relating people, information, systems, and geography—in one tool.
Abstract: A continuing problem in business today is the design of human–computer systems that respect how work actually gets done. The overarching context of work consists of activities, which people conceive as ways of organizing their daily life and especially their interactions with each other. Activities include reading mail, going to workshops, meeting with colleagues over lunch, answering phone calls, and so on. Brahms is a multiagent simulation tool for modeling the activities of groups in different locations and the physical environment consisting of objects and documents, including especially computer systems. A Brahms model of work practice reveals circumstantial, interactional influences on how work actually gets done, especially how people involve each other in their work. In particular, a model of practice reveals how people accomplish a collaboration through multiple and alternative means of communication, such as meetings, computer tools, and written documents. Choices of what and how to communicate are dependent upon social beliefs and behaviors—what people know about each other's activities, intentions, and capabilities and their understanding of the norms of the group. As a result, Brahms models can help human–computer system designers to understand how tasks and information actually flow between people and machines, what work is required to synchronize individual contributions, and how tools hinder or help this process. In particular, workflow diagrams generated by Brahms are the emergent product of local interactions between agents and representational artifacts, not pre-ordained, end-to-end paths built in by a modeler. We developed Brahms as a tool to support the design of work by illuminating how formal flow descriptions relate to the social systems of work; we accomplish this by incorporating multiple views—relating people, information, systems, and geography—in one tool. Applications of Brahms could also include system requirements analysis, instruction, implementing software agents, and a workbench for relating cognitive and social theories of human behavior.
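
The following toy sketch, written for this summary rather than in Brahms' own modelling language, illustrates the activity-based idea: agents choose what to do from their location and beliefs rather than from a prescribed task sequence. All class, activity and agent names are invented for the example.

```python
# Toy sketch (not Brahms) of agents selecting activities by location and belief.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    location: str
    beliefs: dict = field(default_factory=dict)

    def choose_activity(self, activities):
        # Pick the first activity whose location and precondition both hold.
        for act in activities:
            if act["location"] == self.location and act["when"](self.beliefs):
                return act["name"]
        return "idle"

activities = [
    {"name": "read_mail", "location": "office", "when": lambda b: b.get("mail_waiting")},
    {"name": "meet_colleague", "location": "cafeteria", "when": lambda b: True},
]

kim = Agent("Kim", "office", {"mail_waiting": True})
print(kim.choose_activity(activities))  # -> "read_mail"
```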

218 citations


Journal ArticleDOI
TL;DR: It is concluded that video can result in more fluent conversation, particularly where there are more than two discussants, and in the case of dyadic conversation auditory cues to turn taking, etc., would seem to suffice.
Abstract: There are many commercial systems capable of transmitting a video image of parties in a conversation over a digital network. Typically, these have been used to provide facial images of the participants. Experimental evidence for the advantages of such a capability has been hard to find. This paper describes two experiments that demonstrate significant advantages for video conferencing over audio-only conferencing, in the context of a negotiation task using electronically shared data. In the video condition there was a large, high-quality image of the head and upper torso of the participant(s) at the other end of the link and high-quality sound. For the audio-alone condition the sound was the same but there was no video image. The criteria by which these two communication conditions were compared were not the conventional measures of task outcome. Rather, measures relating to conversational fluency and interpersonal awareness were applied. In each of the two experiments, participants completed the same task with data presented by a shared editor. In Experiment 1, they worked in pairs and in Experiment 2 they worked in quartets with two people at each end of the link. Fluency was assessed from transcripts in terms of length of utterance, overlapping speech and explicit questions. Only the latter measure discriminated between the two communication conditions in both experiments. The other measures showed significant effects in Experiment 2 but not in Experiment 1. Given this pattern of results it is concluded that video can result in more fluent conversation, particularly where there are more than two discussants. However, in the case of dyadic conversation auditory cues to turn taking, etc., would seem to suffice. In both experiments there was a large and significant effect on interpersonal awareness as assessed by ratings of the illusion of presence, and most clearly, awareness of the attentional focus of the remote partner(s). In Experiment 2, the ratings for the remote partners were similar to those for the co-located discussants, demonstrating the effectiveness of the video link with regard to these subjective scales.

157 citations


Journal ArticleDOI
TL;DR: A novel cognitive model of comprehension of multimodal presentations for the specific application of explaining how machines work is described and guidelines for hypermedia design derived from this model are proposed.
Abstract: Users' mental representations and cognitive strategies can have a profound influence on how they interact with computer interfaces (Janosky, Smith & Hildreth, 1986). However, there is very little research that elucidates such mental representations and strategies in the context of interactive hypermedia. Furthermore, interface design for hypermedia information presentation systems is rarely driven by what is known of users' mental models and strategies. This paper makes three contributions toward addressing these problems. First, it describes a novel cognitive model of comprehension of multimodal presentations for the specific application of explaining how machines work, and proposes guidelines for hypermedia design derived from this model. Since the development of this model draws heavily upon research in both cognitive science and computational modeling, a second contribution is that it contains a detailed review of literature in these fields on comprehension from static multimodal presentations. Third, it illustrates how cognitive and computational modeling are being used to inform the design of hypermedia information presentation systems about machines. This includes a framework for empirical validation of the model and evaluation of hypermedia design so that both theory and design can be refined iteratively.

128 citations


Journal ArticleDOI
TL;DR: In CMB, contexts are represented explicitly as contextual schemas, and an agent recognizes its context by finding the c-schemas that match it and merging these to form a coherent representation of the current context.
Abstract: Humans and other animals are exquisitely attuned to their context. Context affects almost all aspects of behavior, and it does so for the most part automatically, without a conscious reasoning effort. This would be a very useful property for an artificial agent to have: upon recognizing its context, the agent's behavior would automatically adjust to fit it. This paper describes context-mediated behavior (CMB), an approach to context-sensitive behavior we have developed over the past few years for intelligent autonomous agents. In CMB, contexts are represented explicitly as contextual schemas (c-schemas). An agent recognizes its context by finding the c-schemas that match it, then it merges these to form a coherent representation of the current context. This includes not only a description of the context, but also information about how to behave in it. From that point until the next context change, knowledge for context-sensitive behavior is available with no additional effort. This is used to influence perception, make predictions about the world, handle unanticipated events, determine the context-dependent meaning of concepts, focus attention and select actions. CMB is being implemented in the Orca program, an intelligent controller for autonomous underwater vehicles.
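
A minimal sketch of the match-then-merge cycle described above: c-schemas are matched against observed features, and the behavioral knowledge of the matches is merged into one working context. The schema contents and the matching rule are assumptions for illustration, not Orca's actual representation.

```python
# Hypothetical sketch of context-mediated behavior: match contextual schemas
# (c-schemas) against observations, then merge matches into one working context.

c_schemas = [
    {"name": "shallow_water", "features": {"depth": "low"},
     "behavior": {"max_speed": 1.0, "sonar": "high_rate"}},
    {"name": "survey_mission", "features": {"mission": "survey"},
     "behavior": {"pattern": "lawnmower"}},
]

def matching_schemas(observed, schemas):
    """A c-schema matches if all of its features agree with the observation."""
    return [s for s in schemas
            if all(observed.get(k) == v for k, v in s["features"].items())]

def merge_context(schemas):
    """Merge behavioral knowledge from every matching c-schema."""
    context = {}
    for s in schemas:
        context.update(s["behavior"])
    return context

observed = {"depth": "low", "mission": "survey"}
print(merge_context(matching_schemas(observed, c_schemas)))
```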

128 citations


Journal ArticleDOI
TL;DR: This study investigated how people make attributions of responsibility when interacting with computers and predicted that similarity between a user's personality and a computer's personality would reduce the tendency for users to exhibit a “self-serving bias” in assigning responsibility for outcomes in human–computer interaction.
Abstract: This study investigated how people make attributions of responsibility when interacting with computers. In particular, two questions were addressed: under what circumstances will users blame computers for failed outcomes? And under what circumstances will users credit computers for successful outcomes? The first prediction was that similarity between a user's personality and a computer's personality would reduce the tendency for users to exhibit a “self-serving bias” in assigning responsibility for outcomes in human–computer interaction. The second prediction was that greater user control would lead to more internal attributions, regardless of outcome. A 2×2×2 balanced, between-subjects experiment (N=80) was conducted. Results strongly supported the predictions: when the outcome was negative, participants working with a similar computer were less likely to blame the computer and more likely to blame themselves, compared with participants working with a dissimilar computer. When the outcome was positive, participants working with a similar computer were more likely to credit the computer and less likely to take the credit themselves, compared with participants working with a dissimilar computer. In addition, when users were given more control over outcomes, they tended to make more internal attributions, regardless of whether the outcome was positive or negative.

116 citations




Journal ArticleDOI
TL;DR: This research studies how the incorporation of case-specific, episodic knowledge enables decision-support systems to become more robust and to adapt to a changing environment by continuously retaining new problem-solving cases as they occur during normal system operation.
Abstract: Decision-support systems that help solve problems in open and weak-theory domains, i.e. hard problems, need improved methods to ground their models in real-world situations. Models that attempt to capture domain knowledge in terms of, e.g., rules or deeper relational networks tend either to become too abstract to be efficient or too brittle to handle new problems. In our research, we study how the incorporation of case-specific, episodic knowledge enables such systems to become more robust and to adapt to a changing environment by continuously retaining new problem-solving cases as they occur during normal system operation. The research reported in this paper describes an extension that incorporates additional knowledge of the problem-solving context into the architecture. The components of this context model are described and related to the roles they play in an abductive diagnostic process. Background studies are summarized, the context model is explained and an example shows its integration into an existing knowledge-intensive CBR system.
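
As a rough sketch of how case-specific, episodic knowledge plus a context model might drive retrieval, the example below scores stored cases by both finding similarity and context similarity. The case structure, weighting and domain values are invented for illustration and are not the system described in the paper.

```python
# Illustrative sketch: retrieve stored problem-solving cases weighted by how
# well their recorded context matches the current one (all names assumed).

def similarity(a, b, keys):
    shared = [k for k in keys if k in a and k in b]
    if not shared:
        return 0.0
    return sum(a[k] == b[k] for k in shared) / len(shared)

def retrieve(cases, findings, context, w_context=0.4):
    def score(case):
        s_find = similarity(case["findings"], findings, findings.keys())
        s_ctx = similarity(case["context"], context, context.keys())
        return (1 - w_context) * s_find + w_context * s_ctx
    return max(cases, key=score)

cases = [
    {"findings": {"noise": "high", "pressure": "low"},
     "context": {"season": "winter"}, "diagnosis": "frozen_valve"},
    {"findings": {"noise": "high", "pressure": "low"},
     "context": {"season": "summer"}, "diagnosis": "worn_bearing"},
]
print(retrieve(cases, {"noise": "high"}, {"season": "winter"})["diagnosis"])
```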

69 citations


Journal ArticleDOI
TL;DR: This work presents an inference structure for diagnostic problem solving optimized with respect to knowledge reuse among methods, motivated by experience from large knowledge bases built in medical, technical and other diagnostic domains.
Abstract: While diagnostic problem solving is in principle well understood, building and maintaining systems in large domains cost-effectively is an open issue. Various methods have different advantages and disadvantages, making their integration attractive. When switching from one method to another in an integrated system, as much knowledge as possible should be reused. We present an inference structure for diagnostic problem solving optimized with respect to knowledge reuse among methods, motivated by experience from large knowledge bases built in medical, technical and other diagnostic domains.

69 citations


Journal ArticleDOI
TL;DR: In NATURE, better reuse is anticipated if object system models in NATURE's database correspond to natural mental categories elicited from experienced software engineers using card sorting.
Abstract: Requirements engineering is the complex technical, social and cognitive process which produces requirements for a software-intensive system. However, little is understood about the problem domains for which these software-intensive systems are developed. Card sorting was used to determine mental categories of problem domains to inform design of a library of semi-formal, reusable object system models. Card sorting is a knowledge elicitation technique effective for eliciting mental categories from subjects who sort concepts such as objects or problems into categories. In NATURE, we anticipate better reuse if object system models in NATURE's database correspond to natural mental categories elicited using card sorting from experienced software engineers. Results led to some revision of the structure and contents of several models and how these models might be retrieved and used.

Journal ArticleDOI
TL;DR: In this paper, the authors use a robotic soccer system to study different types of multiagent learning (low-level skills, collaborative and adversarial) and discuss the issues that arise as the learning scenario is extended to require collaborative and adversarial learning.
Abstract: Soccer is a rich domain for the study of multiagent learning issues. Not only must the players learn lower-level skills, but they must also learn to work together and to adapt to the behaviors of different opponents. We are using a robotic soccer system to study these different types of multiagent learning: low-level skills, collaborative and adversarial. Here we describe in detail our experimental framework. We present a learned, robust, low-level behavior that is necessitated by the multiagent nature of the domain, viz. shooting a moving ball. We then discuss the issues that arise as we extend the learning scenario to require collaborative and adversarial learning.

Journal ArticleDOI
TL;DR: An initial empirical investigation of the effect of ecological interface design (EID) on subjects' deep knowledge suggests that EID can lead to a functionally organized knowledge base as well as superior performance, but only if subjects actively reflect on the feedback they get from the interface.
Abstract: Some researchers have argued that providing operators with externalized, graphic representations can lead to a trade-off whereby deep knowledge is sacrificed for cognitive economy and performance. This article provides an initial empirical investigation of this hypothesis by presenting a longitudinal study of the effect of ecological interface design (EID), a framework for designing interfaces for complex industrial systems, on subjects' deep knowledge. The experiment continuously observed the quasi-daily performance of the subjects over a period of six months. The research was conducted in the context of DURESS II, a real-time, interactive thermal-hydraulic process control simulation that was designed to be representative of industrial systems. The performance of two interfaces was compared: an EID interface based on physical and functional (P+F) system representations and a more traditional interface based solely on a physical (P) representation. Subjects were required to perform several control tasks, including startup, tuning, shutdown and fault management. Occasionally, a set of knowledge elicitation tests was administered to assess the evolution of subjects' deep knowledge of DURESS II. The results suggest that EID can lead to a functionally organized knowledge base as well as superior performance, but only if subjects actively reflect on the feedback they get from the interface. In contrast, if subjects adopt a surface approach to learning, then EID can lead to a shallow knowledge base and poor performance, although no worse than that observed with a traditional interface.

Journal ArticleDOI
TL;DR: The relationship between attention allocation strategies and performance on a thermal-hydraulic process simulation is investigated to provide some initial, specific evidence of the advantages of an abstraction hierarchy interface over more traditional interfaces that emphasize physical rather than functional information.
Abstract: Previous research has shown that Rasmussen's abstraction hierarchy, which consists of both physical and functional system models, provides a useful basis for interface design for complex human–machine systems. However, very few studies have quantitatively analysed how people allocate their attention across levels of abstraction. This experiment investigated the relationship between attention allocation strategies and performance on a thermal-hydraulic process simulation. Subjects controlled the process during both normal and fault situations for about an hour per weekday for approximately one month. All subjects used a multi-level interface consisting of four separate windows, each representing a level of the abstraction hierarchy. Subjects who made more frequent use of functional levels of information exhibited more accurate system control under normal conditions, and more accurate diagnosis performance under fault trials. Moreover, subjects who made efficient use of functional information exhibited faster fault compensation times. In contrast, subjects who made infrequent or inefficient use of functional information exhibited poorer performance on both normal and fault trials. These results provide some initial, specific evidence of the advantages of an abstraction hierarchy interface over more traditional interfaces that emphasize physical rather than functional information.

Journal ArticleDOI
TL;DR: In this paper, the authors analyse the discourse and cross-workspace information movement between two collaborators working side by side on a maintenance task and reveal four mutually constraining representations: task, system structure, modifications and system behavior.
Abstract: I analyse the discourse and cross-workspace information movement between two collaborators working side by side on a maintenance task. The analysis revealed four mutually constraining representations: task, system structure, modifications and system behavior. Side-by-side collaborators push or pull information across workspaces in an attempt to ground common representations of these four structures. Although the representations collaborators actually create vary with the type of collaborative endeavor, pushing and pulling information across workspaces is a general collaborative activity. I end by discussing ways in which the push/pull conception of collaboration can be used to inform the design of effective remote collaboration tools.

Journal ArticleDOI
TL;DR: It is argued that meta-dialog and meta-reasoning, far from being of only occasional use, are the very essence of conversation and communication between agents and that there may be a core set of meta-dialog principles that is in some sense complete and that may correspond to the human ability to engage in “free-ranging” conversation.
Abstract: We argue that meta-dialog and meta-reasoning, far from being of only occasional use, are the very essence of conversation and communication between agents. We give four paradigm examples of massive use of meta-dialog where only limited object dialog may be present, and use these to bolster our claim of centrality for meta-dialog. We further illustrate this with related work in active logics. We argue moreover that there may be a core set of meta-dialog principles that is in some sense complete, and that may correspond to the human ability to engage in “free-ranging” conversation. If we are right, then implementing such a set would be of considerable interest. We give examples of existing computer programs that converse inadequately according to our guidelines.

Journal ArticleDOI
TL;DR: A definition of context is proposed through the description of the different types of information manipulated by a process, and two main issues related to context are tackled: how context representation can be built and organized and how context contents can be re-used for other applications.
Abstract: This paper tackles several issues of context representation in knowledge-based systems. First, we propose a definition of context through the description of the different types of information manipulated by a process. Thanks to this definition we explain the role of the granularity level of processing and the role of the abstraction level of application in modelling context. Based on this definition two main issues related to context are tackled: how context representation can be built and organized and how context contents can be re-used for other applications. Then we propose several solutions to deal with these issues: using a multi-viewpoint representation and describing context through symbolic information. We illustrate the proposed context model with the process of dynamic scene interpretation. After explaining the reasons why this process is particularly concerned with the use of contextual information, we describe the context representation and its implementation for this specific process. Finally, we give an example illustrating the utilization of the context representation and we describe the software we have developed to ease the acquisition stage of context contents.

Journal ArticleDOI
TL;DR: A context-based representation of incidents based on the onion metaphor is discussed to support subway line traffic operators in incident solving with an incident-manager system, which is part of the SART project (a French acronym for a traffic control support system).
Abstract: The control of the subway line traffic is a domain where operators must deal with huge quantities of pieces of knowledge more or less implicit in the control itself. When an incident occurs on a subway line, the operator must choose the best strategy applicable for moving from the incidental context to the operational one. An incident on the subway line may cause traffic delay or service interruption and may last for a long or short time, depending on the nature of the incident and many other elements. Operators mainly focus on contextual information for incident solving. An operator said, "When an incident occurs, I look first at what the incident context is". We propose to support subway line traffic operators in incident solving with an incident-manager system, which is part of the SART project (a French acronym for a traffic control support system). The incident manager is a decision-support system based on the contextual analysis of events that arise at the time of the incident. It uses a context-based representation of incidents and applies context-based reasoning. In this paper we discuss a context-based representation of incidents on the basis of the onion metaphor. The SART project is now in its second year of system design and development and involves two universities and two subway companies in France and Brazil.
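
The onion metaphor can be pictured as nested layers of contextual facts around an incident, queried from the most specific layer outward. The layer names and facts below are invented for illustration; they are not SART's actual representation.

```python
# Toy sketch of the "onion" idea: an incident's context as nested layers,
# consulted from the innermost (most specific) layer outward.

incident_context = [
    {"layer": "incident", "facts": {"cause": "door_blocked", "station": "Opera"}},
    {"layer": "line",     "facts": {"rush_hour": True, "spare_trains": 1}},
    {"layer": "network",  "facts": {"strike": False}},
]

def lookup(context_layers, key):
    """Return the value from the innermost layer that defines the key."""
    for layer in context_layers:          # ordered from specific to general
        if key in layer["facts"]:
            return layer["facts"][key], layer["layer"]
    return None, None

print(lookup(incident_context, "rush_hour"))   # -> (True, 'line')
print(lookup(incident_context, "strike"))      # -> (False, 'network')
```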

Journal ArticleDOI
TL;DR: A domain-independent model for real-time decision support, organized as a structured collection of problem-solving methods, is presented and applied to the domain of traffic management following a knowledge modelling approach and the notion of problem-solving method.
Abstract: This article describes a knowledge-based application in the domain of road traffic management that we have developed following a knowledge modelling approach and the notion of problem-solving method. The article first presents a domain-independent model for real-time decision support as a structured collection of problem-solving methods. It then describes how this general model is used to develop an operational version for the domain of traffic management. For this purpose, a particular knowledge modelling tool, called Knowledge Structure Manager (KSM), was applied. Finally, the article presents an application developed for a traffic network in the city of Madrid and compares it with a second application developed for a different traffic area in the city of Barcelona.

Journal ArticleDOI
TL;DR: This paper elaborates the argument that assumptions, dynamic reasoning behaviour and functionality are the three elements necessary to characterize a problem-solving method and introduces a framework for characterizing and developing such efficient problem solvers.
Abstract: In this paper, we present the following view on problem-solving methods for knowledge-based systems: problem-solving methods describe an efficient reasoning strategy for achieving a goal by introducing assumptions about the available domain knowledge and the required functionality. Assumptions, dynamic reasoning behaviour and functionality are the three elements necessary to characterize a problem-solving method. In this paper, we elaborate this argument and introduce a framework for characterizing and developing such efficient problem solvers.
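
A small data-structure sketch of this three-part characterization, with an invented propose-and-revise example as the reasoning behaviour; the field names and the toy method are assumptions for illustration, not the authors' formal framework.

```python
# Hypothetical sketch: a problem-solving method characterized by its
# assumptions, its functionality and its dynamic reasoning behaviour.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProblemSolvingMethod:
    name: str
    assumptions: List[str]          # demands on the available domain knowledge
    functionality: str              # the goal the method claims to achieve
    reasoning: Callable             # the dynamic reasoning behaviour

def propose_and_revise(candidate, constraints, fixes):
    """Toy reasoning behaviour: patch a candidate until all constraints hold."""
    while not all(c(candidate) for c in constraints):
        violated = next(c for c in constraints if not c(candidate))
        candidate = fixes[violated](candidate)
    return candidate

psm = ProblemSolvingMethod(
    name="propose-and-revise",
    assumptions=["a fix exists for every constraint violation"],
    functionality="find a parameter assignment satisfying all constraints",
    reasoning=propose_and_revise,
)
print(psm.name, psm.assumptions)
```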

Journal ArticleDOI
TL;DR: Results indicated that monitoring systems have the potential to evoke altered arousal states in the form of increased heart rate and blood pressure, and that the hypothesized improvement in task performance within the performance monitoring condition was not observed.
Abstract: Electronic monitoring systems are becoming a prominent feature of the modern office. The aims of the present study were three-fold. First, to assess the effects electronic security monitoring systems (ESM) have on the user's physiological state. Second, the researchers aimed to examine the effects explicit security challenges have on both user behaviour and physiological state when using an ESM system. Finally, the research aimed to examine the effects one form of electronic performance monitoring system may have on the user's physiological state. To this effect, the present study examined the physiological and performance effects of two simulated electronic monitoring systems (security/performance). The computer task required 32 subjects to enter mock clinical case notes under various conditions. In the first session subjects were only required to enter the case notes while keystroke data were collected. In the “security baseline” condition subjects were informed that a keystroke security monitoring system had been instituted, but no security challenges occurred. In the “security challenge” condition, however, a number of explicit security challenges occurred. In the final “performance monitoring” condition, subjects were informed that their data entry speed was monitored and they were placed on a response-cost schedule for poor performance. Blood pressure and continuous inter-heartbeat latency were recorded for the security and performance conditions. Results indicated that monitoring systems have the potential to evoke altered arousal states in the form of increased heart rate and blood pressure. Contrary to expectations, the hypothesized improvement in task performance within the performance monitoring condition was not observed. The implications of these results for the design and implementation of electronic, behaviourally based security and performance monitoring systems are discussed.

Journal ArticleDOI
TL;DR: A data-driven approach to fuzzy modelling that provides the user with both accurate and transparent rule bases and is demonstrated on a real world problem concerning the modelling of algae growth in lakes.
Abstract: One of the objectives of machine learning is to enable intelligent systems to acquire knowledge in a highly automated manner. In systems modelling and control engineering, fuzzy systems have been shown to be highly suitable for the modelling of complex and uncertain systems. Recently, the interest in fuzzy systems has shifted from the seminal ideas about modelling the process or the behaviour of operators by knowledge acquisition towards a data-driven approach. Reasons to choose fuzzy systems instead of modelling techniques such as neural networks, radial basis functions, genetic algorithms or splines are mainly the possibility of integrating logical information processing with the attractive mathematical properties of general function approximators. Furthermore, the rule-based structure of fuzzy systems makes analysis easier. The fuzzy sets in the rules represent linguistic qualitative terms that approximate the human-like way of information quantization. However, many of the data-driven fuzzy modelling algorithms that have been developed aim at good numerical approximation and pay little attention to the semantical properties of the resulting rule base. In this article, we briefly discuss different approaches to data-intensive fuzzy modelling reported in the literature. Next, we present a data-driven approach to fuzzy modelling that provides the user with both accurate and transparent rule bases. The method has two main steps: data exploration by means of fuzzy clustering and fuzzy set aggregation by means of similarity analysis. First, fuzzy relations are identified in the product space of the system's variables and are described by means of fuzzy production rules. Compatible fuzzy concepts defined for the individual variables are then identified and aggregated to produce generalizing concepts, giving a comprehensible rule base with increased semantic properties. The transparent fuzzy modelling approach is demonstrated on a real world problem concerning the modelling of algae growth in lakes.
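
The second step, aggregating compatible fuzzy sets by similarity analysis, can be sketched as follows: membership functions whose overlap exceeds a threshold become candidates for merging into one generalizing concept. The triangular sets, the sampling grid and the threshold idea are simplifying assumptions, not the authors' algorithm.

```python
# Minimal sketch (not the paper's method) of merging similar fuzzy sets to
# keep a rule base transparent: compute a Jaccard-like similarity between
# membership functions and merge pairs above a threshold.

def triangle(a, b, c):
    return lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def similarity(f, g, xs):
    """Jaccard-like similarity: |A intersect B| / |A union B| over sampled points."""
    inter = sum(min(f(x), g(x)) for x in xs)
    union = sum(max(f(x), g(x)) for x in xs)
    return inter / union if union else 0.0

xs = [i / 10 for i in range(0, 101)]           # sample the domain [0, 10]
sets = {"low": triangle(0, 2, 4), "lowish": triangle(1, 3, 5),
        "high": triangle(6, 8, 10)}

# Pairs whose similarity exceeds a chosen threshold (e.g. 0.5) would be merged.
print(similarity(sets["low"], sets["lowish"], xs))   # large -> merge candidates
print(similarity(sets["low"], sets["high"], xs))     # near zero -> keep apart
```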

Journal ArticleDOI
TL;DR: The aim of Ripple-down Rules is to provide a system that lets the user choose the mode of interaction and view of the knowledge according to the situation in which they find themselves and their own personal preferences.
Abstract: Situated cognition poses a challenge that requires a paradigm shift in the way we build symbolic knowledge-based systems. Current approaches require complex analysis and modelling and the intervention of a knowledge engineer. They rely on building knowledge-level models which often result in static models that suffer from the frame of reference problem. This approach has also resulted in an emphasis on knowledge elicitation rather than user requirements elicitation. The situated nature of knowledge necessitates a review of how we build, maintain and validate knowledge-based systems. We need systems that are flexible, intuitive and that interact directly with the end-user. We need systems that are designed with maintenance in mind, allowing incremental change and on-line validation. This will require a technique that captures knowledge in context and assists the user to distinguish between contexts. We take up this challenge with a knowledge acquisition and representation method known as Ripple-down Rules. Context in Ripple-down Rules is handled by its exception structure and the storing of the case that prompted a rule to be added. A rule is added as a refinement to an incorrect rule by assigning the correct conclusion and picking the salient features in the case that differentiate the current case from the case associated with the wrong conclusion. Thus, knowledge acquisition and maintenance are simple tasks, designed to be performed incrementally while the system is in use. Knowledge acquisition, maintenance and inferencing are offered in modes that can be performed reflexively without a knowledge engineer. We further describe the addition of modelling tools to assist the user to reflect on their knowledge for such purposes as critiquing, explanation, “what-if” analysis and tutoring. Our aim is to provide a system that lets the user choose the mode of interaction and view of the knowledge according to the situation in which they find themselves and their own personal preferences.
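
A toy sketch of the Ripple-down Rules exception structure described above: each rule stores the cornerstone case that prompted it, and a wrong conclusion is corrected by adding an exception rule that distinguishes the new case from that cornerstone. Class and attribute names are invented for the example.

```python
# Toy Ripple-down Rules (RDR) sketch: rules keep their cornerstone case and
# corrections are added as exception rules, never by editing old knowledge.

class Rule:
    def __init__(self, condition, conclusion, cornerstone):
        self.condition, self.conclusion, self.cornerstone = condition, conclusion, cornerstone
        self.exceptions = []        # refinements tried only when this rule fires

    def classify(self, case):
        if not self.condition(case):
            return None
        for exc in self.exceptions:                 # a matching exception overrides us
            refined = exc.classify(case)
            if refined is not None:
                return refined
        return self.conclusion

    def add_exception(self, condition, conclusion, case):
        """Correct a wrong conclusion by refining on features of the new case."""
        self.exceptions.append(Rule(condition, conclusion, case))

root = Rule(lambda c: True, "normal", cornerstone={})
root.add_exception(lambda c: c.get("temperature", 0) > 38, "fever",
                   case={"temperature": 39})
print(root.classify({"temperature": 39.5}))   # -> "fever"
print(root.classify({"temperature": 37.0}))   # -> "normal"
```

The point of the structure is that each correction is local: the new rule is only consulted when its parent fires, so earlier behaviour on earlier cases is preserved.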

Journal ArticleDOI
TL;DR: This paper argues that situated cognition is not a mere philosophical concern: it has pragmatic implications for current practice in knowledge acquisition, and tools must move from being design-focused to being maintenance-focused.
Abstract: Situated cognition is not a mere philosophical concern: it has pragmatic implications for current practice in knowledge acquisition. Tools must move from being design-focused to being maintenance-focused. Reuse-based approaches (e.g. using problem-solving methods) will fail unless the reused descriptions can be extensively modified to suit the new situation. Knowledge engineers must model not only descriptions of expert knowledge, but also the environment in which a knowledge base will perform. Descriptions of knowledge must be constantly re-evaluated. This re-evaluation process has implications for assessing representations.

Journal ArticleDOI
TL;DR: This paper investigates the reuse of tasks and problem-solving methods, and a model of how to organize a library of reusable components for knowledge-based systems is proposed and illustrated in the area of parametric design.
Abstract: In this paper we investigate the reuse of tasks and problem-solving methods and we propose a model of how to organize a library of reusable components for knowledge-based systems. In our approach, we first describe a class of problems by means of a task ontology. Then we instantiate a generic model of problem solving as search in terms of the concepts in the task ontology, to derive a task-specific, but method-independent, problem-solving model. Individual problem-solving methods can then be (re-)constructed from the generic problem-solving model through a process of ontology/method specialization and configuration. The resulting library of reusable components enjoys a clear theoretical basis and has been tested successfully on a number of applications. In the paper, we illustrate the approach in the area of parametric design.

Journal ArticleDOI
TL;DR: The findings largely validate this paradigm of “cognitive fit” that has been applied in non-language computer display domains, and the results suggest language-fostered “perspective-bias” in the formation and use of mental representations of spatial (scenic) information.
Abstract: Previous studies have provided evidence of multi-level mental representations of language-conveyed spatial (scenic) information. However, the available evidence is largely inconclusive with regard to the structure of these mental representations. A laboratory experiment assesses computer-assisted problem-solving performance abilities when language-conveyed representations of spatial information are matched with the language perspective of the task and with individual cognitive skills. Our findings largely validate this paradigm of “cognitive fit” that has been applied in non-language computer display domains, and the results suggest language-fostered “perspective-bias” in the formation and use of mental representations of spatial (scenic) information.

Journal ArticleDOI
TL;DR: A visual formalism and a tool are described that support the design and evaluation of human–computer interaction in context-customized systems and make evaluations of interface correctness and usability easier or automatic.
Abstract: This paper describes a visual formalism and a tool to support design and evaluation of human–computer interaction in context-customized systems. The formalism is called XDM (for “context-sensitive dialogue modelling”) and combines extended Petri nets with Card, Moran and Newell's KLM operators theory to describe static and dynamic aspects of interaction in every context in which the system should operate, and to make evaluations of interface correctness and usability easier or automatic. The method was developed in the scope of a European Community Project to iteratively prototype a knowledge-based medical system. It has been subsequently employed in several research projects and in teaching activities.
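
The KLM side of such an evaluation can be sketched as a simple sum of standard operator times over a dialogue path; the operator values below are the commonly cited Card, Moran and Newell estimates, and how XDM actually combines them with its Petri-net dialogue model is not reproduced here.

```python
# Sketch of a Keystroke-Level Model estimate: sum standard operator times
# (seconds) along one interaction path. Values are the usual published
# averages, used here only for illustration.

KLM_SECONDS = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}  # keystroke, point, home, mental

def klm_estimate(operator_sequence):
    return sum(KLM_SECONDS[op] for op in operator_sequence)

# e.g. think, point to a field, home on the keyboard, type four characters
print(klm_estimate(["M", "P", "H", "K", "K", "K", "K"]))  # estimated seconds
```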

Journal ArticleDOI
TL;DR: This paper investigates how people extract information from any one of several common displays by analysing the match between display, decision task and data, and develops a model of strategy formulation.
Abstract: Decision tasks often require the extraction of information from displays of quantitative data. This paper investigates how people extract information from any one of several common displays by analysing the match between display, decision task and data. We posit two kinds of activity: first, the formulation of an appropriate extraction strategy and second, the execution of that strategy. We then develop a model of strategy formulation. We hypothesize that with matched designs a higher proportion of subjects use common strategies characterized by less time to formulate, less time to execute and more accurate decisions. A laboratory experiment using a new technique of graphical protocol analysis supported these hypotheses. Moreover, the experiment demonstrated how changes in display, decision task and data alter the way people select decision strategies. This suggests new opportunities for designing more effective human–computer interfaces.

Journal ArticleDOI
TL;DR: A plan-based agent architecture is described that models misunderstandings in cooperative NL agent communication and exploits a notion of coherence in dialogue based on the idea that the explicit and implicit goals which can be identified by interpreting a conversational turn can be related with the previous explicit/implicit goals of the interactants.
Abstract: We describe a plan-based agent architecture that models misunderstandings in cooperative NL agent communication; it exploits a notion of coherence in dialogue based on the idea that the explicit and implicit goals which can be identified by interpreting a conversational turn can be related with the previous explicit/implicit goals of the interactants. Misunderstandings are hypothesized when the coherence of the interaction is lost (i.e. an unrelated utterance occurs). The processes of analysis (and treatment) of a misunderstanding are modelled as rational behaviours caused by the acquisition of a supplementary goal when an incoherent turn occurs: the agent detecting the incoherence commits to restore the intersubjectivity in the dialogue; so, he restructures his own contextual interpretation, or he induces the partner to restructure his (according to who seems to have made the mistake). This commitment leads him to produce a repair turn, which initiates a sub-dialogue aimed at restoring the common interpretation ground. Since we model speech acts uniformly with respect to the other actions (the domain-level actions), our model is general and covers misunderstandings occurring at the linguistic level as well as at the underlying domain activities of the interactants.

Journal ArticleDOI
TL;DR: This work introduces a representation of the functionality of problem-solving methods that allows us to view the construction of problem solvers as a configuration problem, and specifically as a parametric design problem.
Abstract: The knowledge-engineering literature contains a number of approaches for constructing or selecting problem solvers. Some of these approaches are based on indexing and selecting a problem solver from a library, others are based on a knowledge acquisition process, or are based on search-strategies. None of these approaches sees constructing a problem solver as a configuration task that could be solved with an appropriate configuration method. We introduce a representation of the functionality of problem-solving methods that allows us to view the construction of problem solvers as a configuration problem, and specifically as a parametric design problem. From the available methods for parametric design, we use propose-critique-modify for the automated configuration of problem-solving methods. We illustrate this approach by a scenario in a small car domain example.
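
A minimal sketch of a propose-critique-modify loop applied to configuring a problem solver from method fragments; the fragment names, critique rule and requirement flags are invented for the example and do not reproduce the paper's configuration knowledge.

```python
# Illustrative propose-critique-modify loop for configuring a problem solver
# from method fragments (all names and rules invented for this sketch).

def propose(requirements):
    # Start from a generic generate-and-test configuration.
    return {"generate": "propose-candidates", "test": "check-constraints"}

def critique(config, requirements):
    issues = []
    if requirements.get("needs_repair") and "revise" not in config:
        issues.append("missing revise step for constraint violations")
    return issues

def modify(config, issues):
    if any("revise" in issue for issue in issues):
        config = dict(config, revise="apply-fixes")
    return config

def configure(requirements, max_rounds=5):
    config = propose(requirements)
    for _ in range(max_rounds):
        issues = critique(config, requirements)
        if not issues:
            return config
        config = modify(config, issues)
    return config

print(configure({"needs_repair": True}))
```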