
Showing papers in "International Journal of Human-computer Studies / International Journal of Man-machine Studies in 1991"


Journal ArticleDOI
John C. Tang
TL;DR: Specific features of collaborative work activity that raise design implications for collaborative technology are identified: collaborators use hand gestures to uniquely communicate significant information; the process of creating and using drawings conveys much information not contained in the resulting drawings.
Abstract: The work activity of small groups of three to four people was videotaped and analysed in order to understand collaborative work and to guide the development of tools to support it. The analysis focused on the group's shared drawing activity—their listing, drawing, gesturing and talking around a shared drawing surface. This analysis identified specific features of collaborative work activity that raise design implications for collaborative technology: (1) collaborators use hand gestures to uniquely communicate significant information; (2) the process of creating and using drawings conveys much information not contained in the resulting drawings; (3) the drawing space is an important resource for the group in mediating their collaboration; (4) there is a fluent mix of activity in the drawing space; and (5) the spatial orientation among the collaborators and the drawing space has a role in structuring their activity. These observations are illustrated with examples from the video data, and the design implications they raise are discussed.

646 citations


Journal ArticleDOI
TL;DR: An alternative model and explanation based on social identity (SI) theory and a re-conceptualization of de-individuation, which takes into account the social and normative factors associated with group polarization are provided.
Abstract: This paper discusses social psychological processes in computer-mediated communication (CMC) and group decision-making, in relation to findings that groups communicating via computer produce more polarized decisions than face-to-face groups. A wide range of possible explanations for such differences have been advanced, in which a lack of social cues, disinhibition, “de-individuation” and a consequent tendency to antinormative behaviour are central themes. In these explanations, both disinhibition and greater equality of participation are thought to facilitate the exchange of extreme persuasive arguments, resulting in polarization. These accounts are briefly reviewed and attention is drawn to various problematic issues. We provide an alternative model and explanation based on social identity (SI) theory and a re-conceptualization of de-individuation, which takes into account the social and normative factors associated with group polarization. Predictions from both sets of explanations are explored empirically by means of an experiment manipulating the salience of the discussion group, and de-individuation operationalized as the isolation and anonymity of the participants. In this experiment we were able to partial out the effects of the CMC technology which have confounded comparisons with face-to-face interaction in previous research. The results challenge the explanations based on persuasive arguments, while being consistent with our SI model. We discuss our approach in relation to other very recent research in group computer-mediated communication and offer a reinterpretation of previous findings.

504 citations


Journal ArticleDOI
TL;DR: The results imply that touchscreens, when properly used, have attractive advantages in selecting targets as small as 4 pixels per side (approximately one-quarter of the size of a single character).
Abstract: Three studies were conducted comparing speed of performance, error rates and user preference ratings for three selection devices. The devices tested were a touchscreen, a touchscreen with stabilization (stabilization software filters and smooths raw data from hardware), and a mouse. The task was the selection of rectangular targets 1, 4, 16 and 32 pixels per side (0·4 × 0·6, 1·7 × 2·2, 6·9 × 9·0, 13·8 × 17·9 mm respectively). Touchscreen users were able to point at single pixel targets, thereby countering widespread expectations of poor touchscreen resolution. The results show no difference in performance between the mouse and touchscreen for targets ranging from 32 to 4 pixels per side. In addition, stabilization significantly reduced the error rates for the touchscreen when selecting small targets. These results imply that touchscreens, when properly used, have attractive advantages in selecting targets as small as 4 pixels per side (approximately one-quarter of the size of a single character). A variant of Fitts' Law is proposed to predict touchscreen pointing times. Ideas for future research are also presented.
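The paper's own touchscreen variant of Fitts' Law is not reproduced in this listing, but the general shape of such a prediction can be sketched with the standard Shannon formulation; the constants `a` and `b` below are hypothetical and would be fit from observed pointing data.

```python
import math

def fitts_time(distance_mm, width_mm, a=0.1, b=0.15):
    """Predict pointing time (s) using the Shannon form of Fitts' Law.

    a and b are hypothetical device-specific constants fit from data;
    the paper's actual touchscreen variant is not shown here.
    """
    index_of_difficulty = math.log2(distance_mm / width_mm + 1)
    return a + b * index_of_difficulty

# Smaller targets at the same distance carry a higher index of
# difficulty, hence a longer predicted pointing time.
small = fitts_time(100, 1.7)   # roughly a 4-pixel target
large = fitts_time(100, 13.8)  # roughly a 32-pixel target
```

Fitting `a` and `b` per device would let such a model compare predicted touchscreen and mouse pointing times across the target sizes studied.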

483 citations



Journal ArticleDOI
TL;DR: Drawing on recent work in psychology and sociology, the authors created a more realistic model of the situation their users faced and applied it to the system to understand the breakdowns.
Abstract: When studying the use of Cognoter, a multi-user idea organizing tool, we noticed that users encountered unexpected communicative breakdowns. Many of these difficulties stemmed from an incorrect model of conversation implicit in the design of the software. Drawing on recent work in psychology and sociology, we were able to create a more realistic model of the situation our users faced and apply it to the system to understand the breakdowns. We discovered that users encountered difficulties coordinating their conversational actions. They also had difficulty determining that they were talking about the same objects and actions in the workspace. This work led to the redesign of the tool and to the identification of areas for further exploration.

246 citations


Journal ArticleDOI
TL;DR: In contrast to the common view of spreadsheets as single-user programs, the authors found that spreadsheets offer surprisingly strong support for cooperative development of a wide variety of applications, arguing that the division of the spreadsheet into two distinct programming layers permits effective distribution of computational tasks across users with different levels of programming skill.
Abstract: In contrast to the common view of spreadsheets as “single-user” programs, we have found that spreadsheets offer surprisingly strong support for cooperative development of a wide variety of applications. Ethnographic interviews with spreadsheet users showed that nearly all of the spreadsheets used in the work environments studied were the result of collaborative work by people with different levels of programming and domain expertise. We describe how spreadsheet users cooperate in developing, debugging and using spreadsheets. We examine the properties of spreadsheet software that enable cooperation, arguing that: (1) the division of the spreadsheet into two distinct programming layers permits effective distribution of computational tasks across users with different levels of programming skill; and (2) the spreadsheet's strong visual format for structuring and presenting data supports sharing of domain knowledge among co-workers.

229 citations


Journal ArticleDOI
TL;DR: A system CABARET (CAse-BAsed REasoning Tool) is described that provides a domain-independent shell that integrates reasoning with rules and reasoning with previous cases in order to apply rules containing ill-defined terms.
Abstract: Rules often contain terms that are ambiguous, poorly defined or not defined at all. In order to interpret and apply rules containing such terms, appeal must be made to their previous constructions, as in the interpretation of legal statutes through relevant legal cases. We describe a system, CABARET (CAse-BAsed REasoning Tool), that provides a domain-independent shell that integrates reasoning with rules and reasoning with previous cases in order to apply rules containing ill-defined terms. The integration of these two reasoning paradigms is performed via a collection of control heuristics, which suggest how to interleave case-based methods and rule-based methods to construct an argument to support a particular interpretation. CABARET is currently instantiated with cases and rules from an area of income tax law, the so-called “home office deduction”. An example of CABARET's processing of an actual tax case is provided in some detail. The advantages of CABARET's hybrid approach to interpretation stem from the synergy derived from interleaving case-based and rule-based tasks.

221 citations


Journal ArticleDOI
TL;DR: The conclusion is that users and designers should prepare to learn from breakdowns and focus shifts in cooperative prototyping sessions rather than trying to avoid them.
Abstract: In most development projects, descriptions and prototypes are developed by system designers on their own utilizing users as suppliers of information on the use domain. In contrast, we are proposing a cooperative prototyping approach where users are involved actively and creatively in design and evaluation of early prototypes. This paper illustrates the approach by describing the design of computer support for casework in a technical department of a Danish municipality. Prototyping is viewed as an ongoing learning process, and we analyse situations where openings for learning occur in the prototyping activity. The situations seem to fall into four categories: (1) Situations where the future work situation with a new computer application is simulated to some extent to investigate the future work activity; (2) situations where the prototype is manipulated and used as a basis for idea exploration; (3) situations focusing on the designers' learning about the users' work practice; (4) situations where the prototyping tool or the design session as such becomes the focus. Lessons learned from the analysis of these situations are discussed. In particular we discuss a tension between the need for careful preparation of prototyping sessions and the need to establish conditions for user and designer creativity. Our conclusion is that users and designers should prepare to learn from breakdowns and focus shifts in cooperative prototyping sessions rather than trying to avoid them.

205 citations


Journal ArticleDOI
TL;DR: HYPO's reasoning process and various computational definitions are described and illustrated, including its definitions for computing relevant similarities and differences, the most on point and best cases to cite, four kinds of counter-examples, targets for hypotheticals and the aspects of a case that are salient in various argument roles.
Abstract: HYPO is a case-based reasoning system that evaluates problems by comparing and contrasting them with cases from its Case Knowledge Base (CKB). It generates legal arguments citing the past cases as justifications for legal conclusions about who should win in problem disputes involving trade secret law. HYPO's arguments present competing adversarial views of the problem and it poses hypotheticals to alter the balance of the evaluation. HYPO uses Dimensions as a generalization scheme for accessing and evaluating cases. HYPO's reasoning process and various computational definitions are described and illustrated, including its definitions for computing relevant similarities and differences, the most on point and best cases to cite, four kinds of counter-examples, targets for hypotheticals and the aspects of a case that are salient in various argument roles. These definitions enable HYPO to make contextually sensitive assessments of relevance and salience without relying on either a strong domain theory or a priori weighting schemes.

196 citations


Journal ArticleDOI
TL;DR: A research study is described which was conducted to determine the possibility of using keystroke characteristics as a means of dynamic identity verification, and results indicate significant promise in the temporal personnel identification problem.
Abstract: The implementation of safeguards for computer security is based on the ability to verify the identity of authorized computer systems users accurately. The most common form of identity verification in use today is the password, but passwords have many poor traits as an access control mechanism. To overcome the many disadvantages of simple password protection, we are proposing the use of the physiological characteristics of keyboard input as a method for verifying user identity. After an overview of the problem and summary of previous efforts, a research study is described which was conducted to determine the possibility of using keystroke characteristics as a means of dynamic identity verification. Unlike static identity verification systems in use today, a verifier based on dynamic keystroke characteristics allows continuous identity verification in real-time throughout the work session. Study results indicate significant promise in the temporal personnel identification problem.
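As a toy illustration of the idea (not the study's actual statistical method), a keystroke-dynamics verifier can compare a live session's inter-key latencies against an enrolled reference profile; the timestamps and threshold below are invented for the example.

```python
from statistics import mean, stdev

def latency_profile(timestamps_ms):
    """Inter-keystroke latencies (ms) from successive key-press timestamps."""
    return [t2 - t1 for t1, t2 in zip(timestamps_ms, timestamps_ms[1:])]

def verify(reference_latencies, session_latencies, threshold=2.0):
    """Crude continuous-verification check: accept the session only if
    its mean latency lies within `threshold` standard deviations of the
    enrolled reference. A hypothetical sketch, not the paper's method."""
    mu, sigma = mean(reference_latencies), stdev(reference_latencies)
    return abs(mean(session_latencies) - mu) <= threshold * sigma

# Invented enrollment and session data (press times in ms).
reference = latency_profile([0, 110, 230, 340, 460, 580])
imposter = latency_profile([0, 300, 640, 980, 1290, 1600])
```

Run continuously over a work session, such a check can re-verify identity in real time rather than only at login, which is the contrast the abstract draws with static password protection.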

188 citations


Journal ArticleDOI
TL;DR: Comparisons of pre- and post-experimental attitudes show that both restricted and unrestricted subjects felt significantly more positive toward computers after their interactions with the natural-language system.
Abstract: This study tested whether people can be shaped to use the vocabulary and phrase structure of a program's output in creating their own inputs. Occasional computer-users interacted with four versions of an inventory program ostensibly capable of understanding natural-language inputs. The four versions differed in the vocabulary and the phrase length presented on the subjects' computer screen. Within each version, the program's outputs were worded consistently and presented repetitively in the hope that subjects would use the outputs as a model for their inputs. Although not told so in advance, one-half of the subjects were restricted to input phrases identical to those used by their respective program (shaping condition); the other half were not (modeling condition). Additionally, one-half of the subjects communicated with the program by speaking, the other half by typing. The analysis of the verbal dependent variables revealed four noteworthy findings. First, users will model the length of a program's output. Second, it is easier for people to model and to be shaped to terse, as opposed to conversational, output phrases. Third, shaping users' inputs through error messages is more successful in limiting the variability in their language than is relying on them to model the program's outputs. Fourth, mode of communication and output vocabulary do not affect the degree to which modeling or shaping occur in person-computer interactions. Comparisons of pre- and post-experimental attitudes show that both restricted and unrestricted subjects felt significantly more positive toward computers after their interactions with the natural-language system. Other performance and attitude differences as well as implications for the development of natural-language processors are discussed.

Journal ArticleDOI
TL;DR: The result is a new user interface integrating an enhanced spatial sound presentation system, an audio emphasis system, and a gestural input recognition system, which together convey added information without distraction or loss of intelligibility.
Abstract: This paper proposes an organization of presentation and control that implements a flexible audio management system we call “audio windows”. The result is a new user interface integrating an enhanced spatial sound presentation system, an audio emphasis system, and a gestural input recognition system. We have implemented these ideas in a modest prototype, also described, designed as an audio server appropriate for a teleconferencing system. Our system combines a gestural front end (currently based on a DataGlove, but whose concepts are appropriate for other devices as well) with an enhanced spatial sound system, a digital signal processing separation of multiple sound sources, augmented with “filtears”, audio feedback cues that convey added information without distraction or loss of intelligibility. Our prototype employs a manual front end (requiring no keyboard or mouse) driving an auditory back end (requiring no CRT or visual display).

Journal ArticleDOI
TL;DR: The hierarchical modeling principle and diagnostic algorithm are applied to a medium-scale medical problem and the performance of a four-level qualitative model of the heart is compared to other representations in terms of diagnostic efficiency and space requirements.
Abstract: Model-based reasoning about a system requires an explicit representation of the system's components and their connections. Diagnosing such a system consists of locating those components whose abnormal behavior accounts for the faulty system behavior. In order to increase the efficiency of model-based diagnosis, we propose a model representation at several levels of detail, and define three refinement (abstraction) operators. We specify formal conditions that have to be satisfied by the hierarchical representation, and emphasize that the multi-level scheme is independent of any particular single-level model representation. The hierarchical diagnostic algorithm which we define turns out to be very general. We show that it emulates the bisection method, and can be used for hierarchical constraint satisfaction. We apply the hierarchical modeling principle and diagnostic algorithm to a medium-scale medical problem. The performance of a four-level qualitative model of the heart is compared to other representations in terms of diagnostic efficiency and space requirements. The hierarchical model does not reach the time/space performance of dedicated diagnostic rules, but it speeds up the diagnostic efficiency of a one-level model by a factor of 20.
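The abstract notes that the hierarchical diagnostic algorithm emulates the bisection method. A minimal sketch of that behaviour, assuming a single fault in a simple series chain of components (the component names and `observe` probe are hypothetical, and the paper's algorithm handles general hierarchical models, not just chains), might look like:

```python
def diagnose_chain(components, observe):
    """Locate a single faulty component in a series chain by bisection.

    observe(i) reports whether the signal is still correct after
    component i -- a stand-in for probing intermediate model variables.
    Each probe halves the candidate interval, giving O(log n) probes.
    """
    lo, hi = 0, len(components) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if observe(mid):      # signal still good after `mid`
            lo = mid + 1      # fault lies downstream
        else:
            hi = mid          # fault at or before `mid`
    return components[lo]

# Hypothetical five-stage chain with the amplifier (index 2) broken.
chain = ["sensor", "filter", "amplifier", "converter", "display"]
found = diagnose_chain(chain, lambda i: i < 2)
```

The hierarchical model plays an analogous role: coarse levels rule out whole regions of the device before finer levels are consulted, which is how the paper reports a factor-of-20 speed-up over a flat one-level model.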

Journal ArticleDOI
TL;DR: The knowledge engineer's task of analysing interview data conceptually as part of the knowledge elicitation process is similar to that of the social scientist analysing qualitative data, and a range of methods originally developed by social scientists for the analysis of unstructured and semi-structured qualitative material will be of assistance to the knowledge engineer.
Abstract: In many practical knowledge engineering contexts, interview data is the commonest form in which information is obtained from domain experts. Having obtained interview data the knowledge engineer is then faced with the difficult task of analysing what is initially relatively unstructured and complex material. It is argued that the knowledge engineer's task of analysing interview data conceptually as part of the knowledge elicitation process is similar to that of the social scientist analysing qualitative data. One implication of this is that a range of methods originally developed by social scientists for the analysis of unstructured and semi-structured qualitative material will be of assistance to the knowledge engineer. The background philosophical issues linking qualitative social science research and knowledge elicitation are outlined; both are characterized as fundamentally creative, interpretative processes. “Grounded Theory”, a social science methodology for the systematic generation of conceptual models from qualitative data, is described in detail. An example is presented of the use of Grounded Theory for the analysis of expert interview transcripts, drawn from a knowledge engineering project in civil engineering. The discussion focuses upon the processes used to move from an initial unstructured interview transcript to a core set of interrelated concepts, memos and models that fully describe the data.

Journal ArticleDOI
TL;DR: This paper argues that quantitative experimental methods may not be practical at early stages of design, but a behavioural record used in conjunction with think-aloud protocols can provide a designer with the information needed to evaluate an early prototype in a cost-effective manner.
Abstract: A strong case has been made for iterative design, that is, progressing through several versions of a user interface design using feedback from users to improve each prototype. One obstacle to wider adoption of this approach is the perceived difficulty of obtaining useful data from users. This paper argues that quantitative experimental methods may not be practical at early stages of design, but a behavioural record used in conjunction with think-aloud protocols can provide a designer with the information needed to evaluate an early prototype in a cost-effective manner. Further, it is proposed that a method for obtaining this data can be specified which is straightforward enough to be used by people with little or no training in human factors. Two studies are reported in which trainee designers evaluated a user interface by observing a user working through some set tasks. These users were instructed to think aloud as they worked in a procedure described as “cooperative evaluation”. The instruction received by the designers took the form of a brief how-to-do-it manual. Study 1 examines the effectiveness of the trainee designers as evaluators of an existing bibliographic database. The problems detected by each team were compared with the complete set of problems detected by all the teams and the problems detected by the authors in a previous and more extensive evaluation. Study 2 examined the question of whether being the designer of a system makes one better or worse at evaluating it and whether designers can predict the problems users will experience in advance of user testing.

Journal ArticleDOI
TL;DR: The high-level architecture and user interface of the rIBIS system are described; rIBIS effectiveness is affected by both people and implementation issues.
Abstract: This paper describes rIBIS, a real-time group hypertext system, which allows a distributed set of users to simultaneously browse and edit multiple views of a hypertext network. At any time, rIBIS users can switch back and forth between tightly coupled and loosely coupled interaction modes. The paper describes the high-level architecture and user interface of the rIBIS system. Early use of the rIBIS system by a software system design team suggests that users' acceptance increases as they continue to use the tool. We conclude that rIBIS effectiveness is affected by both people and implementation issues.

Journal ArticleDOI
TL;DR: GeneratoR of Exemplar-Based Explanations (GREBE) as mentioned in this paper is a system that uses detailed knowledge of the facts and reasoning of specific past cases, together with legal rules and common-sense knowledge, to determine and justify the legal consequences of new cases.
Abstract: A central task underlying many of the activities of attorneys is inferring the legal consequences of a given set of facts. GREBE (GeneratoR of Exemplar-Based Explanations) is a system that uses detailed knowledge of the facts and reasoning of specific past cases, together with legal rules and common-sense knowledge, to determine and justify the legal consequences of new cases. GREBE can apply either rule-based reasoning or case-based reasoning to goals at any level of its analysis. GREBE uses an approach to case-based reasoning in which new cases are compared with the smallest collections of precedent facts that justified an individual inference step in the explanation of a precedent case. This enables knowledge of the interactions among individual inference steps in a precedent to be used in case comparison. Case comparison is also assisted by an expressive semantic network representation of case facts. Techniques are presented for retrieving and comparing cases represented in this formalism. GREBE's output is a memorandum that justifies a legal conclusion in terms of the applicable precedents and legal rules.

Journal ArticleDOI
TL;DR: A methodology for control of an active above-knee prosthesis (AKP) is described; it depends on the use of production rules, so that the controller may be thought of as a leg movement expert.
Abstract: A methodology for control of an active above-knee prosthesis (AKP) is described. This approach is called Artificial Reflex Control (ARC), and depends on the use of production rules, so that the controller may be thought of as a leg movement expert. This control strategy is applicable to a variety of different gait modes. Automatic adaptation, according to the environment, and to the gait mode required, is based on heuristics related to human motor control.

Journal ArticleDOI
TL;DR: The development and evaluation research conducted at the University of Arizona that has led to the installation of one Electronic Meeting System at more than 30 corporate and university sites around the world is described.
Abstract: In recent years, there has been a rapidly growing interest in the use of information technology to support face-to-face group meetings. Such Electronic Meeting System (EMS) environments represent a fundamental shift in the technology available for group meetings. In this paper, we describe the development and evaluation research conducted at the University of Arizona that has led to the installation of one EMS at more than 30 corporate and university sites around the world. Based on our experiences in working with student groups in controlled laboratory experiments and with organizational work groups in the field, we are convinced that EMS technology has the potential to dramatically change the way people work together by effectively supporting larger groups, reducing meeting and project time, and enhancing group member satisfaction.

Journal ArticleDOI
TL;DR: Findings indicate that animated demonstrations, as they were implemented for this study, were not robust enough to aid in later transfer.
Abstract: Animated demonstrations have been created due to the development of direct manipulation interfaces and the need for faster learning, so that users can learn interface procedures by watching. To compare animated demonstrations with written instructions we observed users learning and performing HyperCard™ authoring tasks on the Macintosh™ during three performance sessions. In the training session, users were asked either to watch a demonstration or read the procedures needed for the task and then to perform the task. In the later two sessions users were asked to perform tasks identical or similar to the tasks used in the training session. Results showed that demonstrations provided faster and more accurate learning during the training session. However, during the later sessions those who saw demonstrated procedures took longer to perform the tasks than did users of written instructions. Users appeared to be mimicking the training demonstrations without processing the information which would be needed later. In fact, when users had to infer procedures for tasks which were similar to those seen in the training session, the text group was much better at deducing the necessary procedures than the demonstration group. These findings indicate that animated demonstrations, as they were implemented for this study, were not robust enough to aid in later transfer.

Journal ArticleDOI
TL;DR: Examining the concept empirically for academic articles with a view to making recommendations for the design of a hypertext database shows that experienced journal readers do indeed possess a generic representation and can use this to organize isolated pieces of text into a more meaningful whole.
Abstract: Hypertext is often described as a liberating technology, freeing readers and authors from the constraints of “linear” paper document formats. However, there is little evidence to support such a claim and theoretical work in the text analysis domain suggests that readers form a mental representation of a paper document's structure that facilitates non-serial reading. The present paper examines this concept empirically for academic articles with a view to making recommendations for the design of a hypertext database. The results show that experienced journal readers do indeed possess such a generic representation and can use this to organize isolated pieces of text into a more meaningful whole. This representation holds for text presented on screens. Implications for hypertext document design are discussed.

Journal ArticleDOI
TL;DR: The SCL work forced the boundaries of social place to extend beyond the boundaries of physical place, and added to the existing knowledge of collaboration by focusing on intellectual effort where the primary resource is information.
Abstract: From 1985 for three years, the System Concepts Laboratory (SCL) of the Xerox Palo Alto Research Center had employees in both Palo Alto, California, and Portland, Oregon. The Portland remote site was intended to be a forcing function for the lab to focus on issues of interpersonal computing in a geographically distributed organization. Interpersonal computing supports people communicating and working together through computers; it includes tools to support interaction separated by time and/or space as well as face-to-face interaction and meetings. A consultant to the laboratory took on the role of outside observer to provide insight into questions about the process of working in a distributed organization and about tools for supporting collaboration in a distributed organization. The primary collaborative work of the lab itself was design. The major tool that developed to support the cross-site environment was Media Space, a network of video, audio and computing technologies. With the Media Space, SCL members were able to make significant progress in supporting their distributed design process. The SCL experience adds to the existing knowledge of collaboration by focusing on intellectual effort where the primary resource is information. The activities of the lab depended on reciprocal interdependence of group members for information. Their work required them to be in touch with one another to share and coordinate information, yet lab members were often not together physically or temporally. The SCL work forced the boundaries of social place to extend beyond the boundaries of physical place.

Journal ArticleDOI
TL;DR: It is concluded that even experienced users must acquire the information they need from the device's display during interactions, and that they do not necessarily remember regular details that are available in this way.
Abstract: This paper examines the hypothesis that information flow, from device to user, is a vital part of skilled activity in human-computer interaction. Two studies are reported. The first study questions users of keyboard-driven word processors about the effects of cursor-movement, finding and word-deletion commands in various contexts. The second study questions users of the Apple Macintosh-based systems, MacWrite and Microsoft Word, about the behaviour of the menu-driven find command. In both studies it is discovered that users often do not know the precise effects of frequently-used actions, such as the final position of the cursor, even though these effects are vital for future planning. It is concluded that even experienced users must acquire the information they need from the device's display during interactions, and that they do not necessarily remember regular details that are available in this way. This conclusion conflicts with those current models of user psychology that assume routine skill relies on complete mental specifications of methods for performing tasks.

Journal ArticleDOI
TL;DR: The critiquing approach to building knowledge-based interactive systems is described, discussing critics from the perspective of overcoming the problems of high-functionality computer systems, providing a new class of systems to support learning, extending applications-oriented construction kits to design environments, and providing an alternative to traditional autonomous expert systems.
Abstract: We describe the critiquing approach to building knowledge-based interactive systems. Critiquing supports computer users in their problem solving and learning activities. The challenges for the next generation of knowledge-based systems provide a context for the development of this paradigm. We discuss critics from the perspective of overcoming the problems of high-functionality computer systems, of providing a new class of systems to support learning, of extending applications-oriented construction kits to design environments, and of providing an alternative to traditional autonomous expert systems. One of the critiquing systems we have built—JANUS, a critic for architectural design—is used as an example for presenting the key aspects of the critiquing process. We then survey additional critiquing systems developed in our and other research groups. The paper concludes with a discussion of experiences and extensions to the paradigm.

Journal ArticleDOI
TL;DR: The theory may explain the “feeling of directness” that goes with good direct manipulation interfaces and the results indicate that user friendliness, as this is traditionally measured, in some cases may prove to reduce the users' problem-solving ability.
Abstract: According to a recent theory by Hayes and Broadbent (1988), learning of interactive tasks can proceed in one of two different learning modes. One learning mode, called S-mode, has characteristics not unlike what traditionally has been called “insight learning”. The other mode, called U-mode, is in some respects like trial and error learning. Extended to human-computer interaction, the theory predicts different problem-solving strategies for subjects (Ss) using command and direct manipulation interfaces. Command interfaces should induce S-mode learning, while direct manipulation interfaces should not. The theory was supported by two experiments involving the Tower of Hanoi problem. Ss with a command interface made the fewest errors, reached criterion in the fewest trials and used the most time per trial. They were also better able to verbalize principles governing the solution of the problem than Ss using a direct manipulation interface. It is argued that the theory may explain the “feeling of directness” that goes with good direct manipulation interfaces. Further, the results indicate that user friendliness, as this is traditionally measured, may in some cases reduce the users' problem-solving ability.

Journal ArticleDOI
TL;DR: Applications to qualitative circuit analysis for a class of feedback amplifiers and active resistive circuits, using a combination of Signal Flow Graph and Fuzzy Cognitive Map concepts, are discussed.
Abstract: The Fuzzy Cognitive Maps (FCM) introduced by Kosko represent a novel way of fuzzy causal knowledge processing, using the net rather than the traditional tree knowledge representation. In this paper similarities between Fuzzy Cognitive Maps and Signal Flow Graphs are pointed out, and the inference process used in Fuzzy Cognitive Maps is compared and paralleled with a fixed-point iterative solution of the equations describing the Signal Flow Graph. Then, applications to qualitative circuit analysis for a class of feedback amplifiers and active resistive circuits, using a combination of the Signal Flow Graph and Fuzzy Cognitive Map concepts, are discussed. Several examples are given.
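The fixed-point iteration the abstract refers to can be sketched briefly. The following is a minimal illustration, not the paper's own formulation: concept activations are repeatedly passed through the weighted causal links and a squashing function until the state stops changing. The weight matrix, squashing function and three-concept map below are illustrative assumptions.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def fcm_infer(W, state, squash=logistic, max_iter=200, tol=1e-6):
    """Iterate the FCM update A <- squash(W.T @ A) to a fixed point.

    W[i, j] is the causal edge weight from concept i to concept j;
    `state` holds the initial concept activations.
    """
    for _ in range(max_iter):
        new = squash(W.T @ state)
        if np.max(np.abs(new - state)) < tol:
            return new
        state = new
    return state  # no fixed point reached within max_iter

# Hypothetical three-concept map with weak causal links, chosen so the
# squashed update is a contraction and the iteration settles quickly.
W = np.array([[0.0, 0.4, 0.0],
              [0.0, 0.0, 0.3],
              [0.2, 0.0, 0.0]])
A = fcm_infer(W, np.array([1.0, 0.0, 0.0]))
```

Because the weights are small, the update is a contraction and the resulting state satisfies A = squash(Wᵀ A), which is the FCM analogue of solving the Signal Flow Graph equations by fixed-point iteration.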

Journal ArticleDOI
TL;DR: This two-edition series on computer supported cooperative work and groupware contains sixteen original articles, selected from over forty submissions, and will help the new reader gain some insight of what this field is about.
Abstract: This two-edition series on computer supported cooperative work and groupware contains sixteen original articles, selected from over forty submissions. As the papers were chosen on individual technical merit, the collection does not introduce all aspects of CSCW and groupware. Still, the new reader should gain some insight into what this field is about, while the active CSCW researcher and groupware implementor will be informed of several exciting new findings.

Journal ArticleDOI
TL;DR: Two methods are investigated for adding knowledge in neural network systems: decomposition of networks; and rule-injection hints, which play a role similar to adding rules or defining algorithms in symbolic systems.
Abstract: Neural network systems can be made to learn faster and generalize better through the addition of knowledge. Two methods are investigated for adding this knowledge: (1) decomposition of networks; and (2) rule-injection hints. Both of these approaches play a role similar to adding rules or defining algorithms in symbolic systems. Analyses explain two important points: (1) which functions are easy to learn, as well as which functions make effective hints, is shown by an analysis of the effect of learning monotonic functions; (2) a set-theoretic and functional entropy analysis shows for what kinds of systems hints are useful. The approaches have been tested in a variety of settings, and an example application using a lunar lander game is discussed.
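One common form of rule-injection hint can be sketched as follows: an auxiliary output unit, sharing the hidden layer with the main task, is trained on an intermediate concept that prior knowledge suggests is relevant, shaping the hidden representation. This is a minimal numpy illustration under stated assumptions, not the paper's experiments: the main task (AND), the hint task (OR), the network size and the training schedule are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: main task is AND; the hint output is OR, standing in for an
# intermediate concept suggested by prior (symbolic) knowledge.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0, 0], [0, 1], [0, 1], [1, 1]], dtype=float)  # cols: AND, OR

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One shared hidden layer feeding two outputs: output 0 = task, output 1 = hint.
W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 2)); b2 = np.zeros(2)

lr = 2.0
for _ in range(10000):                 # full-batch gradient descent on MSE
    H = sigmoid(X @ W1 + b1)           # shared hidden layer
    O = sigmoid(H @ W2 + b2)
    dO = (O - Y) * O * (1 - O)         # MSE gradient through output sigmoid
    dH = (dO @ W2.T) * H * (1 - H)     # backpropagated to the hidden layer
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

task_out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)[:, 0]
```

The hint's gradient flows into the shared weights W1, so the hidden layer is pushed toward features useful for both outputs, which is the sense in which the hint injects knowledge the way a rule would in a symbolic system.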

Journal ArticleDOI
TL;DR: The fact that subjects were able to recode icon meanings to screen positions after some training supports the everyday experience that icon design has little influence on the performance of advanced users.
Abstract: After subjects practised using a pointing device (two-button mouse) for selecting icons on a computer screen, the effect of “articulatory distance” (i.e. the difference between a picture and its meaning) on performance in menu-selection tasks was analysed. Three icon sets with different articulatory distances and one text set were constructed, validated and tested in a “search and select” experiment with icon positions randomized on the screen. This was contrasted with an experiment in which icons were to be selected from fixed screen positions. Results indicate that articulatory distance indeed had an effect on reaction time in the first design, but not in the latter. A recognition task was finally given to decide whether articulatory distance could influence memory for icons. The fact that subjects were able to recode icon meanings to screen positions after some training supports the everyday experience that icon design has little influence on the performance of advanced users. Icon-oriented interfaces are aimed, however, at the computer novice.

Journal ArticleDOI
TL;DR: This paper describes common obstacles that product developers face in obtaining knowledge about actual or potential users of their systems and applications, and applies to user involvement in human-computer interface development in general, but has particular relevance to CSCW and groupware development.
Abstract: This paper addresses one particular software development context: large product development organizations. It describes common obstacles that product developers face in obtaining knowledge about actual or potential users of their systems and applications. Many of these obstacles can be traced to organizational structures and development practices that arose prior to the widespread market for interactive systems. These observations apply to user involvement in human-computer interface development in general, but have particular relevance to CSCW and groupware development.