
Showing papers in "International Journal of Human-computer Studies / International Journal of Man-machine Studies in 1983"


Journal ArticleDOI
TL;DR: A sufficiency theory is presented of the process by which a computer programmer attempts to comprehend a program, intended to explain four sources of variation in behavior on this task: the kind of computation the program performs, the intrinsic properties of the program text, such as language and documentation, the reason for which the documentation is needed, and differences among the individuals performing the task.
Abstract: A sufficiency theory is presented of the process by which a computer programmer attempts to comprehend a program. The theory is intended to explain four sources of variation in behavior on this task: the kind of computation the program performs, the intrinsic properties of the program text, such as language and documentation, the reason for which the documentation is needed, and differences among the individuals performing the task. The starting point for the theory is an analysis of the structure of the knowledge required when a program is comprehended, which views the knowledge as organized into distinct domains that bridge between the original problem and the final program. The program comprehension process is one of reconstructing knowledge about these domains and the relationships among them. This reconstruction process is theorized to be a top-down, hypothesis-driven one in which an initially vague and general hypothesis is refined and elaborated based on information extracted from the program text and other documentation.

742 citations


Journal ArticleDOI
TL;DR: A presentation of what cognitive systems are, and of how CSE can contribute to the design of an MMS, from cognitive task analysis to final evaluation is given.
Abstract: This paper presents an approach to the description and analysis of complex Man-Machine Systems (MMSs) called Cognitive Systems Engineering (CSE). In contrast to traditional approaches to the study of man-machine systems which mainly operate on the physical and physiological level, CSE operates on the level of cognitive functions. Instead of viewing an MMS as decomposable by mechanistic principles, CSE introduces the concept of a cognitive system: an adaptive system which functions using knowledge about itself and the environment in the planning and modification of actions. Operators are generally acknowledged to use a model of the system (machine) with which they work. Similarly, the machine has an image of the operator. The designer of an MMS must recognize this, and strive to obtain a match between the machine's image and the user characteristics on a cognitive level, rather than just on the level of physical functions. This article gives a presentation of what cognitive systems are, and of how CSE can contribute to the design of an MMS, from cognitive task analysis to final evaluation.

617 citations


Journal ArticleDOI
TL;DR: It is proposed that a generalization of the set covering problem can be used as an intuitively plausible model for diagnostic problem solving and potentially useful as a basis for expert systems in that it provides a solution to the difficult problem of multiple simultaneous disorders.
Abstract: This paper proposes that a generalization of the set covering problem can be used as an intuitively plausible model for diagnostic problem solving. Such a model is potentially useful as a basis for expert systems in that it provides a solution to the difficult problem of multiple simultaneous disorders. We briefly introduce the theoretical model and then illustrate its application in diagnostic expert systems. Several challenging issues arise in adapting the set covering model to real-world problems, and these are also discussed along with the solutions we have adopted.
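
For orientation, a minimal sketch of the underlying idea (invented data and names, not the authors' generalized model): each disorder is associated with the set of manifestations it can cause, and a diagnosis is a smallest set of disorders whose manifestation sets jointly cover everything observed, which is how multiple simultaneous disorders are handled.

```python
# A toy sketch of diagnosis as set covering (illustrative only): each
# disorder causes a set of manifestations, and a diagnosis is a
# minimum-cardinality set of disorders whose causation sets jointly
# cover everything observed.
from itertools import combinations

def minimum_covers(causes, observed):
    """All minimum-cardinality disorder sets covering the observed findings."""
    disorders = list(causes)
    for size in range(1, len(disorders) + 1):
        covers = [set(combo) for combo in combinations(disorders, size)
                  if set().union(*(causes[d] for d in combo)) >= observed]
        if covers:
            return covers          # smallest cover size is reached first
    return []

causes = {"d1": {"m1", "m2"}, "d2": {"m2", "m3"}, "d3": {"m3"}}
print(minimum_covers(causes, {"m1", "m3"}))   # the two two-disorder explanations
```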

418 citations


Journal ArticleDOI
TL;DR: This paper will present the basis for this view of expertise, the reasoning model it implies, and a computer program which begins to implement the theory, called SHRINK, which models psychiatric diagnosis and treatment.
Abstract: Two major factors seem to distinguish novices from experts. First, experts generally know more about their domain. Second, experts are better than novices at applying and using that knowledge effectively. Within AI, the traditional approach to expertise has concentrated on the first difference. Thus, “expert systems” research has revolved around extracting the rules experts use and developing problem solving methodologies for dealing with those rules. Unlike these systems, human experts are able to introspect about their knowledge and learn from past experience. It is this view of expertise, based on the second distinguishing feature above, that we are exploring. Such a view requires a reasoning model based on organization of experience in a long-term memory, and incremental learning and refinement of both reasoning processes and domain knowledge. This paper will present the basis for this view, the reasoning model it implies, and a computer program which begins to implement the theory. The program, called SHRINK, models psychiatric diagnosis and treatment.

225 citations


Journal ArticleDOI
TL;DR: Results from sorting tasks show experts and novices begin their problem representations with specifically different problem categories, and a preliminary study of programming managers indicates an abstraction different from that used by programmers.
Abstract: The representation of computer programming problems in relation to the organization of programming knowledge is examined. An experiment previously done for physics knowledge is replicated to examine differences in the categories used for problem representation by novice and expert programmers. Results from sorting tasks show experts and novices begin their problem representations with specifically different problem categories. Experts initially abstract an algorithm to solve a problem, whereas novices base their approach on a problem's literal features. A preliminary study of programming managers indicates an abstraction different from that used by programmers.

157 citations


Journal ArticleDOI
TL;DR: ONCOCIN has been adapted to accept, analyze, and critique a physician's own therapy plan and provides a less intrusive method of computer-assisted consultation because the user need not be interrupted in the majority of cases.
Abstract: A predominant model for expert consultation systems is one in which a computer program simulates the decision making processes of an expert. The expert system typically collects data from the user and renders a solution. Experience with regular physician use of ONCOCIN, an expert system that assists with the treatment of cancer patients, has revealed that system users can be annoyed by this approach. In an attempt to overcome this barrier to system acceptance, ONCOCIN has been adapted to accept, analyze, and critique a physician's own therapy plan. A critique is an explanation of the significant differences between the plan that would have been proposed by the expert system and the plan proposed by the user. The critique helps resolve these differences and provides a less intrusive method of computer-assisted consultation because the user need not be interrupted in the majority of cases—those in which no significant differences occur. Extension of previous rule-based explanation techniques has been required to generate critiques of this type.
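
A schematic sketch of the critiquing loop as the abstract describes it (the data model and significance test below are invented, not ONCOCIN's): the system generates its own plan, compares it with the user's, and reports only the significant differences, staying silent otherwise.

```python
# A toy sketch of plan critiquing (invented data model): speak up only when
# a difference between the system's plan and the user's plan is significant.
def critique(system_plan, user_plan, significant):
    notes = []
    for item, sys_val in system_plan.items():
        usr_val = user_plan.get(item)
        if significant(item, sys_val, usr_val):
            notes.append(f"{item}: you chose {usr_val}, protocol suggests {sys_val}")
    return notes                      # empty list: the user is not interrupted

system = {"drug": "vincristine", "dose_mg": 2.0}
user = {"drug": "vincristine", "dose_mg": 1.9}
# Here only a drug substitution counts as significant, so no critique is issued.
print(critique(system, user, lambda item, s, u: item == "drug" and s != u))  # []
```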

154 citations


Journal ArticleDOI
TL;DR: This work argues for the primacy of models of causal interaction, rather than the traditional fault models, in troubleshooting digital electronics and points out the importance of making these models explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems.
Abstract: While expert systems have traditionally been built using large collections of rules based on empirical associations, interest has grown recently in the use of systems that reason “from first principles”, i.e. from an understanding of the causality of the device being examined. Our work explores the use of such models in troubleshooting digital electronics. In discussing troubleshooting we show why the traditional approach—test generation—solves a different problem and we discuss a number of its practical shortcomings. We consider next the style of debugging known as discrepancy detection and demonstrate why it is a fundamental advance over traditional test generation. Further exploration, however, demonstrates that in its standard form discrepancy detection encounters interesting limits in dealing with commonly known classes of faults. We suggest that the problem arises from a number of interesting implicit assumptions typically made when using the technique. In discussing how to repair the problems uncovered, we argue for the primacy of models of causal interaction, rather than the traditional fault models. We point out the importance of making these models explicit, separated from the troubleshooting mechanism, and retractable in much the same sense that inferences are retracted in current systems. We report on progress to date in implementing this approach and demonstrate the diagnosis of a bridge fault—a traditionally difficult problem—using our approach.
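
A toy rendering of discrepancy detection under assumed structures (the device, model format and suspect rule below are illustrative, not the paper's system): predict outputs from a structural model of the device, compare them with observations, and suspect the components that feed any discrepant output.

```python
# A toy sketch of discrepancy detection over a structural device model.
def predict(model, inputs):
    values = dict(inputs)
    for name, (fn, ins, out) in model.items():   # assumes topological order
        values[out] = fn(*(values[i] for i in ins))
    return values

def upstream(model, wire):
    """Components whose output transitively feeds the given wire."""
    suspects = set()
    for name, (fn, ins, out) in model.items():
        if out == wire:
            suspects.add(name)
            for i in ins:
                suspects |= upstream(model, i)
    return suspects

model = {  # two multipliers feeding an adder
    "m1":  (lambda a, c: a * c, ("a", "c"), "x"),
    "m2":  (lambda b, d: b * d, ("b", "d"), "y"),
    "add": (lambda x, y: x + y, ("x", "y"), "f"),
}
predicted = predict(model, {"a": 3, "b": 2, "c": 2, "d": 3})
observed = {"f": 10}                       # predicted f is 12: a discrepancy
suspects = set().union(*(upstream(model, w)
                         for w, v in observed.items() if predicted[w] != v))
print(predicted["f"], suspects)            # 12 and all three components suspected
```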

148 citations


Journal ArticleDOI
TL;DR: The generalized Boolean query model must be reconciled with the vector space approach, with suggested new lattice structures for weighted retrieval, and with probabilistic retrieval models, and proper retrieval evaluation mechanisms reflecting the fuzzy nature of retrieval are needed.
Abstract: Substantial work has been done on the application of fuzzy subset theory to information retrieval. Boolean query processing has been generalized to allow for weights to be attached to individual terms, in either the document indexing or the query representation, or both. Problems with the generalized Boolean lattice structure have been noted, and an alternative approach using query thresholds and appropriate document evaluation functions has been suggested. Problems remain unsolved, however. Criteria generated for the query processing mechanism are inconsistent. The exact functional form and appropriate parameters for the query processing mechanism must be specified. Moreover, the generalized Boolean query model must be reconciled with the vector space approach, with suggested new lattice structures for weighted retrieval, and with probabilistic retrieval models. Finally, proper retrieval evaluation mechanisms reflecting the fuzzy nature of retrieval are needed.
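
For concreteness, a minimal sketch of the weighted Boolean evaluation this literature generalizes, using the common min/max/complement operators (the query encoding is an assumption of this sketch):

```python
# A minimal sketch of fuzzy Boolean query evaluation: index weights in [0, 1],
# AND as min, OR as max, NOT as complement.
def evaluate(query, doc):
    op = query[0]
    if op == "term":
        return doc.get(query[1], 0.0)
    if op == "and":
        return min(evaluate(q, doc) for q in query[1:])
    if op == "or":
        return max(evaluate(q, doc) for q in query[1:])
    if op == "not":
        return 1.0 - evaluate(query[1], doc)
    raise ValueError(op)

doc = {"fuzzy": 0.8, "retrieval": 0.6, "boolean": 0.3}
query = ("and", ("term", "fuzzy"), ("or", ("term", "retrieval"), ("term", "boolean")))
print(evaluate(query, doc))   # min(0.8, max(0.6, 0.3)) = 0.6
```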

130 citations


Journal ArticleDOI
TL;DR: The aspects of NEOMYCIN that make abstract strategic explanations possible--the representation of strategic knowledge explicitly and separately from domain knowledge--and how this representation can be used to generate explanations are described.
Abstract: This paper examines the problem of automatic explanation of reasoning, especially as it relates to expert systems. By explanation we mean the ability of a program to discuss what it is doing in some understandable way. We first present a general framework in which to view explanation and review some of the research done in this area. We then focus on the explanation system for NEOMYCIN, a medical consultation program. A consultation program interactively helps a user to solve a problem. Our goal is to have NEOMYCIN explain its problem-solving strategies. An explanation of strategy describes the plan the program is using to reach a solution. Such an explanation is usually concrete, referring to aspects of the current problem situation. Abstract explanations articulate a general principle, which can be applied in different situations; such explanations are useful in teaching and in explaining by analogy. We describe the aspects of NEOMYCIN that make abstract strategic explanations possible--the representation of strategic knowledge explicitly and separately from domain knowledge--and demonstrate how this representation can be used to generate explanations.
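
A toy illustration of the design idea the abstract credits (the rules and bindings below are invented, not NEOMYCIN's actual representation): because strategic knowledge is stored apart from domain knowledge, the same strategy step can be explained abstractly, as a general principle, or concretely, by instantiating it with domain terms.

```python
# A toy sketch: strategy stated domain-independently, with a separate binding
# of general terms to domain terms, so explanations come in two flavors.
strategy = {"group-and-differentiate":
            "Consider broad classes of causes before refining within one."}
domain_binding = {"broad classes of causes": "infectious vs. non-infectious meningitis"}

def explain(task, abstract):
    text = strategy[task]
    if abstract:
        return text                       # the general principle
    for general, concrete in domain_binding.items():
        text = text.replace(general, concrete)
    return text                           # the concrete, situation-specific form

print(explain("group-and-differentiate", abstract=True))
print(explain("group-and-differentiate", abstract=False))
```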

125 citations


Journal ArticleDOI
TL;DR: This paper proposes a decomposition of graceful interaction into a number of relatively independent skills: skills involved in parsing elliptical, fragmented, and otherwise ungrammatical input; in ensuring robust communication; in explaining abilities and limitations, actions and the motives behind them; in keeping track of the focus of attention of a dialogue.
Abstract: Natural language processing is often seen as a way to provide easy-to-use and flexible interfaces to interactive computer systems. While natural language interfaces typically perform well in response to straightforward requests and questions within their domain of discourse, they often fail to interact gracefully with their users in less predictable circumstances. Most current systems cannot, for instance: respond reasonably to input not conforming to a rigid grammar; ask for and understand clarification if their user's input is unclear; offer clarification of their own output if the user asks for it; or interact to resolve any ambiguities that may arise when the user attempts to describe things to the system. We believe that graceful interaction in these and the many other contingencies that can arise in human conversation is essential if interfaces are ever to appear co-operative and helpful, and hence be suitable for the casual or naive user, and more habitable for the experienced user. In this paper, we attempt to outline key components of graceful interaction, to identify major problems involved in realizing them, and in some cases to suggest the shape of solutions. To this end we propose a decomposition of graceful interaction into a number of relatively independent skills: skills involved in parsing elliptical, fragmented, and otherwise ungrammatical input; in ensuring robust communication; in explaining abilities and limitations, actions and the motives behind them; in keeping track of the focus of attention of a dialogue; in identifying things from descriptions, even if ambiguous or unsatisfiable; and in describing things in terms appropriate for the context. We suggest these skills are necessary for graceful interaction in general and form a good working basis for graceful interaction in a certain large class of application domains, which we define. None of these components appear individually much beyond the current state of the art, at least for suitably restricted domains of discourse. Thus, we advocate research into the construction of gracefully interacting systems as an activity likely to pay major dividends in improved man-machine communication in a relatively short time.

114 citations


Journal ArticleDOI
TL;DR: The origins of the QWERTY keyboard, and other sequential keyboards which have been developed since 1909 are reviewed and the reasoning behind the design of these other keyboards and the subsequent impact they made on the keyboard world are discussed.
Abstract: The standard typewriter keyboard (nicknamed QWERTY) was designed over a century ago. During this time, QWERTY has become a controversial issue, because many individuals feel that the sequential keyboard market is being monopolized by a sub-optimum layout. Despite these feelings, in 1971 the International Standards Organization recognized QWERTY as the standard keyboard, and a year later Alden, Daniels & Kanarick (1972) concluded that QWERTY was “the de facto standard layout for communications and computer interface keyboards”. This article reviews the origins of the QWERTY keyboard, and other sequential keyboards which have been developed since 1909. The reasoning behind the design of these other keyboards and the subsequent impact they made on the keyboard world are discussed. Various explanations are suggested as to why this previous research has not had any effect on the design of the QWERTY keyboard.

Journal ArticleDOI
Ronald R. Yager1
TL;DR: The formalism of fuzzy subset theory is used to provide a framework in which to interpret linguistic quantifiers in binary logic and the problem of making quantified statements based upon the observation of a sample from a set of objects is studied.
Abstract: We introduce two methodologies for interpreting quantifiers in binary logic. We then extend these interpretations to the case where the quantifiers are linguistic. We use the formalism of fuzzy subset theory to provide a framework in which to interpret linguistic quantifiers. We discuss various methodologies for measuring the cardinality of a fuzzy set including the concept of a fuzzy cardinality. Among the important questions we study in the paper is the problem of making quantified statements based upon the observation of a sample from a set of objects.
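
A small sketch of one such methodology, scalar (sigma-count) cardinality, with an assumed membership shape for the quantifier "most" (the shape and data are illustrative only):

```python
# A hedged sketch of the sigma-count approach to linguistic quantifiers:
# the truth of "Q of the objects are A" is Q applied to the relative
# scalar cardinality of the fuzzy set A.
def most(p):            # an assumed S-shaped membership for "most"
    return min(1.0, max(0.0, 2.0 * p - 0.6))

def truth_of_quantified(memberships, quantifier):
    sigma_count = sum(memberships)                  # scalar cardinality of A
    return quantifier(sigma_count / len(memberships))

tall = [0.9, 0.8, 0.7, 0.4, 1.0]        # memberships of 5 people in "tall"
print(truth_of_quantified(tall, most))  # roughly 0.92: "most people are tall"
```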

Journal ArticleDOI
Masatoshi Sakawa1
TL;DR: Interactive computer programs that run in conversational mode are developed to implement man-machine interactive procedures, using a new method based on the combined use of the bisection method and the linear programming method.
Abstract: In this article, we present interactive computer programs for solving fuzzy linear programming problems with multiple objectives. Through the use of five types of membership functions, including non-linear functions, the fuzzy or imprecise goals of the decision maker are quantified. Although the formulated problem becomes a nonlinear programming problem, it can be reduced to a set of linear inequalities if some variable is fixed. Based on this idea, we propose a new method based on the combined use of the bisection method and the linear programming method. On the basis of the proposed method, FORTRAN programs that run in conversational mode are developed to implement man-machine interactive procedures. The commands in our programs and major prompt messages are also explained. An illustrative numerical example of the interactive processes is demonstrated together with the computer outputs.
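
A minimal sketch of the combined bisection/linear-programming idea on an assumed toy problem (not the paper's FORTRAN programs): with linear membership functions, requiring every membership to reach a satisfaction level lambda yields linear inequalities once lambda is fixed, so feasibility can be checked by an ordinary LP and lambda maximized by bisection.

```python
# A toy sketch: bisection on the satisfaction level lam, with an LP
# feasibility check at each step (memberships mu1 = x1, mu2 = x2 assumed).
import numpy as np
from scipy.optimize import linprog

def feasible(lam):
    # x1 + x2 <= 1,  x1 >= lam,  x2 >= lam
    A_ub = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    b_ub = np.array([1.0, -lam, -lam])
    res = linprog(c=[0.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return res.status == 0          # 0 means a feasible point was found

lo, hi = 0.0, 1.0
for _ in range(30):                 # bisection on lam
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(round(lo, 4))                 # converges to 0.5 for this toy problem
```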

Journal ArticleDOI
TL;DR: The projection or mapping appears to carry out an automatic “normalization of description” for the same object independent of retinal image size, which suggests new concepts regarding contrast sensitivity, the nature and role of indirect vision, and the recognition of patterns and the analysis of scenes.
Abstract: Based on Hubel & Wiesel's physiological findings on the projection from retina to cortex, a schematic model of that stage of visual processing is constructed and its properties investigated. The projection or mapping appears to carry out an automatic “normalization of description” for the same object independent of retinal image size. This property suggests new concepts regarding (1) contrast sensitivity, (2) the nature and role of indirect vision, (3) the role of eye movements and (4) the recognition of patterns and the analysis of scenes.
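
A compact way to see the "normalization of description" property is the complex-logarithmic (log-polar) form often used to model this projection (the use of cmath here is purely illustrative): scaling a retinal point by k only shifts its mapped image by log k, leaving the angular component untouched.

```python
# A small sketch of why a complex-log (log-polar) retina-to-cortex mapping
# normalizes retinal size: scaling by k becomes a pure shift in the map.
import cmath

def log_polar(z):
    return cmath.log(z)        # real part: log radius, imaginary part: angle

z = complex(3.0, 4.0)
for k in (1.0, 2.0, 0.5):      # three retinal sizes of the "same" feature
    w = log_polar(k * z)
    print(f"k={k}: mapped point = {w.real:.3f} + {w.imag:.3f}i")
# The angular part is identical; only the radial part shifts by log k.
```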

Journal ArticleDOI
TL;DR: An evaluation of different modalities of verbalization related to instructions of a “thinking aloud” kind for a sorting task shows that simultaneous verbalization slows down the automation of the activity, introduces hitches, and should be avoided outside of problem-solving situations.
Abstract: After having been out of fashion for some time, verbal reports have once again become established by certain authors as indicators of cognitive processes. In this perspective we have evaluated different modalities of verbalization, all related to instructions of a “thinking aloud” kind, for a sorting task: (a) simultaneous verbalization (the subject is asked to verbalize what he says to himself while performing the task); and (b) subsequent verbalization (he is asked to verbalize what he said to himself, at first without recall aids, afterwards in front of a record of his earlier behavior). The subjects performed three trials so that their activity became more-or-less routine. We have dealt with: (1) the compatibility between verbalization and the activity it refers to; and (2) the kind of verbal reports produced. It is shown that (a) simultaneous verbalization slows down the automation of the activity, introduces hitches, and should be avoided outside of problem-solving situations, (b) aided subsequent verbalization permits automation and produces satisfactory and very precise verbal reports, as simultaneous verbalization does, and (c) unaided subsequent verbalization produces too much distance from the task and not very valid data. We conclude in favor of aided subsequent verbalization.

Journal ArticleDOI
TL;DR: Part of the expert systems strategy of one major chemical company is outlined and this system is described briefly at the start of the paper and used to illustrate much of the later discussion.
Abstract: Expert systems have recently been arousing much interest in industry and elsewhere: it is envisaged that they will be able to solve problems in areas where computers have previously failed, or indeed, never been tried. However, although the literature in the field of expert systems contains much on their construction, on knowledge representation techniques, etc., relatively little has been devoted to discussing their application to real-life problems. This article seeks to bring together a number of issues relevant to the application of expert systems by discussing their advantages and limitations, their roles and benefits, and the influence that real-life applications might have on the design of expert systems software. Part of the expert systems strategy of one major chemical company is outlined. Because it was in constructing one particular expert system that many of these issues became important, this system is described briefly at the start of the paper and used to illustrate much of the later discussion. It is of the plausible-inference type and has application in the field of materials engineering. The article is aimed as much at the interested end-user who has a possible application in mind as at those working in the field of expert systems.

Journal ArticleDOI
TL;DR: The NLC system was an effective problem solver for the selected classes of problems and users, and none of the standard concerns about natural language programming related to vagueness, ambiguity, verbosity or correctness was a significant problem, although minor difficulties did arise occasionally.
Abstract: An experiment is described which gives data related to the usefulness and efficiency of English as a programming language. The experiment was performed with the NLC system, described herein, and used twenty-three paid volunteers from a first course in programming. Subjects were asked to solve two problems, namely (1) solution of linear equations and (2) gradebook averaging. A total of 1581 English sentences were typed, 81% of which were processed correctly. The remaining 19% were rejected because of questionable user syntax or system inadequacies. In most cases, subjects were able to paraphrase a rejected input in terminology understandable by the system. The overall success rate at solving an entire problem, within the 2½ h time constraint of the experiment, was 73.9%. In short, the system was an effective problem solver for the selected classes of problems and users. Many system failures resulted from “bugs” or syntactic oversights which appear amenable to easy repair. None of the standard concerns about natural language programming related to vagueness, ambiguity, verbosity or correctness was a significant problem, although minor difficulties did arise occasionally.

Journal ArticleDOI
TL;DR: An analysis of expert thinking has been developed to assist in understanding human expertise, and identifies a number of human, knowledge-handling techniques which could be implemented in a system to meet most of the user's specifications.
Abstract: Human expertise should be better understood before the users of expert systems specify the services needed and expected from such systems. An analysis of expert thinking has been developed to assist in this understanding. The analysis is discussed in the paper under three main headings: Specifications : examples are given of the services users obtain from human experts, in the particular domain of petroleum geology. These services indicate general qualities desirable in a human and, by analogy, in a system. The qualities are listed as specifications for expert system design. A theory of expert thinking : how human experts acquire, understand and use their knowledge (particularly with reference to petroleum geology). The theory identifies a number of human, knowledge-handling techniques which could be implemented in a system to meet most of the user's specifications. Human and system expertise : a comparison suggests that, properly designed and suitably applied, an expert system can help its users make well-informed decisions; failing this, the system may prove dangerously misleading and should not be accepted as a substitute for an accountable, human expert.

Journal ArticleDOI
TL;DR: Eight current and proposed implementations of fuzzy knowledge-based systems have been selected for study in order to provide a context for examining the overall structure and application of the system; the nature and locus of uncertainty; and the representation of the logical implication (If-Then) operator.
Abstract: The purpose of this study is to examine some critical issues in the development of practical knowledge-based systems using fuzzy logic and related techniques. Eight current and proposed implementations of such systems have been selected for study in order to provide a context for examining the following three general topic areas: the overall structure and application of the system; the nature and locus of uncertainty; and the representation of the logical implication (If-Then) operator. Our goal is not to determine which treatment of these issues is best overall, but rather to draw from such practical experience as currently exists to begin mapping out the advantages and disadvantages of each option relative to the specific structure of the application being addressed. The eight fuzzy knowledge-based systems selected are not intended as a complete survey of a rapidly-growing field, but only to provide a sampling from the wide variety of ways in which fuzzy logic can be used to represent and process various sorts of inexact knowledge.

Journal ArticleDOI
TL;DR: A measure of the information content of evidence inducing a belief function or a possibility function is axiomatically defined; its major property is to be additive for distinct bodies of evidence.
Abstract: A measure of the information content of evidence inducing a belief function or a possibility function is axiomatically defined. Its major property is to be additive for distinct bodies of evidence.

Journal ArticleDOI
TL;DR: The COUSIN project of Carnegie-Mellon University is developing command interfaces which appear more friendly and supportive to their users, using a form-based model of communication and incorporating error correction and on-line help.
Abstract: Currently available interactive command interfaces often fail to provide adequate error correction or on-line help facilities, leading to the perception of an unfriendly interface and consequent frustration and reduced productivity on the part of the user. The COUSIN project at Carnegie-Mellon University is developing command interfaces which appear more friendly and supportive to their users, using a form-based model of communication and incorporating error correction and on-line help. Because of the time and effort involved in constructing truly user-friendly interfaces, we are working on an interface system designed to provide interfaces to many different application systems, as opposed to separate interfaces to individual applications. A COUSIN interface system gets the information it needs to provide these services for a given application from a declarative description of that application's communication needs.
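
A bare-bones sketch of the declarative-description idea (the schema and rendering below are invented, not COUSIN's actual format): the interface system reads an application's communication needs and produces a form-style interface from them.

```python
# A toy sketch: one interface system renders forms for any application
# from a declarative description of that application's inputs.
app_description = {
    "command": "copy",
    "fields": [
        {"name": "source", "type": "filename", "required": True},
        {"name": "destination", "type": "filename", "required": True},
        {"name": "overwrite", "type": "flag", "required": False},
    ],
}

def render_form(desc):
    lines = [f"Command: {desc['command']}"]
    for f in desc["fields"]:
        marker = "*" if f["required"] else " "
        lines.append(f"  {marker} {f['name']:<12} ({f['type']}): ____")
    return "\n".join(lines)

print(render_form(app_description))
```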

Journal ArticleDOI
TL;DR: Procedures for translating and rotating a region are presented and the superposition of binary images with different characteristics is considered, showing translation, rotation, and superposition to be O(N log N) operations.
Abstract: In Gargantini (1982a) it has been shown that storing black nodes of a quadtree is sufficient to retrieve any basic property associated with quadtrees. To achieve this, each black node must be represented as a quaternary integer whose digits (from left to right) describe the path from the root to that node. The sorted sequence of quaternary integers representing a given region is called the linear quadtree associated with that region. Such a structure has been shown to save more than two-thirds of the memory locations used by regular quadtrees. In this paper we present procedures for translating and rotating a region and consider the superposition of binary images with different characteristics (such as different resolution parameter, different pixel size and/or different center). Translation, rotation, and superposition are shown to be O(N log N) operations; for translation N is the number of black pixels; for rotation N is the number of black nodes; for superposition N is the sum of black nodes or black pixels of the two images, depending on whether or not the two regions are centered on the same raster.
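
For intuition, a small sketch of why rotation is cheap in this representation (the encoding convention below is assumed): each black node is a string of quaternary digits giving the path from the root, and a rigid 90° rotation maps each quadrant onto another quadrant recursively, so rotation is a digit-wise relabeling of all N codes followed by a re-sort.

```python
# A hedged sketch of linear-quadtree rotation: digits 0=NW, 1=NE, 2=SW, 3=SE
# name the path from the root. Rotating 90 degrees clockwise permutes the
# quadrants recursively, so each code is relabeled digit by digit (O(N) work)
# and the list re-sorted (O(N log N)).
ROT90_CW = {"0": "1", "1": "3", "3": "2", "2": "0"}  # NW->NE->SE->SW->NW

def rotate_cw(linear_quadtree):
    return sorted("".join(ROT90_CW[d] for d in code) for code in linear_quadtree)

region = ["03", "1", "20"]          # three black nodes at mixed depths
print(rotate_cw(region))            # ['01', '12', '3']
```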

Journal ArticleDOI
TL;DR: The view of computer systems as communication media with formal communicating behaviour permits an explanation of problems arising from computer applications, especially at the human-machine interface, and shows directions for future research.
Abstract: Nowadays computers are increasingly used for communication purposes and less for mere calculation. Userfriendly dialog design for non-computer professicial users is becoming an important research issue. The discussion has already shown that human-machine systems have to be studied as a whole: apart from the machine and its users they include the designers and those persons responsible for the system's application, as well. Computers just play a special role as one element in a highly complex communication network with several human agents linked in space and time. In order to characterize communication between humans and machines the concept of formal communication is introduced and related to natural communication. Communicating behaviour and its determining factors are represented by a model which is based on psycholinguistic concepts of communication and which uses high-level Petri net interpretations. Formal communication can be observed among humans as well as with machines; of ten it is caused by delegation. Programming of computer systems can be conceived as a special form of delegation. The view of computer systems as communication media with formal communicating behaviour permits an explanation of problems arising from computer applications, especially at the human-machine interface, and shows directions for future research.

Journal ArticleDOI
TL;DR: SAGE is an adaptive production system model of strategy learning that begins a task with weak, overly general operators and uses these to find a solution to some problem by trial and error.
Abstract: SAGE is an adaptive production system model of strategy learning. The system begins a task with weak, overly general operators and uses these to find a solution to some problem by trial and error. The program then attempts to re-solve the problem, using its knowledge of the solution path to determine blame when an error occurs. Once the faulty operator has been found, the system employs a process of discrimination to generate more conservative versions of the rule containing additional conditions. Such variants are strengthened each time they are relearned, until they come to override their precursors. The program continues to learn until it can solve the problem without errors. SAGE has learned useful heuristics in the domains of the slide-jump puzzle, solving simple algebra equations and seriating blocks of different lengths.
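
A loose sketch of the discrimination step (invented condition sets, not SAGE's production syntax): a variant rule acquires the extra conditions that held where the overly general rule succeeded but not where it failed.

```python
# A toy sketch of discrimination: build a more conservative rule variant from
# the conditions that separate a good firing context from a bad one.
def discriminate(rule, good_ctx, bad_ctx):
    extra = good_ctx - bad_ctx              # conditions that separate the cases
    return {"conditions": rule["conditions"] | extra,
            "action": rule["action"],
            "strength": 1}                  # strengthened each time re-derived

rule = {"conditions": {"peg-clear"}, "action": "jump", "strength": 1}
good = {"peg-clear", "creates-pair"}        # context where "jump" worked
bad = {"peg-clear"}                         # context where it led to an error
variant = discriminate(rule, good, bad)
print(variant["conditions"])                # {'peg-clear', 'creates-pair'}
```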

Journal ArticleDOI
TL;DR: A precise and computationally effective model of the structure of human explanation is presented, based upon the careful analysis of actual, naturally occurring conversations between human beings involved in explanation, that emphasizes the view that explanation is a social process.
Abstract: Reasoning occupies a central position in studies of both artificial and natural intelligence. This paper presents a precise and computationally effective model of the structure of human explanation, based upon the careful analysis of transcripts of actual, naturally occurring conversations between human beings involved in explanation. Explanations are represented by trees whose internal nodes correspond to the three major types of justification which we have found offered for assertions (giving a reason, giving examples, and eliminating alternatives); the real-time process of explanation production is represented by a sequence of transformations on such trees. Focus of attention in explanation is represented by pointers in the tree, and shifts of focus of attention by pointer movement. The ordering and embedding of explanations are considered; we explain why some orderings are easier to understand than others, and we demonstrate that some forms of explanation require multiple pointers, some of which are embedded in others. A fairly complex example is analyzed in detail. The paper emphasizes the view that explanation is a social process, i.e. it occurs in particular contexts involving particular people who have particular assumptions and predispositions which significantly influence what actually occurs. Implications of the results of this paper for the design of systems for explanation production and for artificial reasoning are discussed.
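
A bare-bones rendering of the representation described (an assumed structure, not the paper's formalism): tree nodes carry an assertion and one of the three justification types, and a focus pointer marks where the explanation is currently growing.

```python
# A toy sketch of an explanation tree with a focus-of-attention pointer.
from dataclasses import dataclass, field

@dataclass
class Node:
    assertion: str
    justification: str = "reason"    # "reason" | "examples" | "eliminate"
    children: list["Node"] = field(default_factory=list)

root = Node("You should take the train")
root.children.append(Node("Driving is slower in rush hour", "reason"))
root.children.append(Node("Flying and driving both cost more", "eliminate"))
focus = root.children[0]             # attention currently on the first reason
focus.children.append(Node("The highway is down to one lane", "reason"))
print(len(root.children), focus.assertion)
```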

Journal ArticleDOI
TL;DR: Current theory and practice in psychology, computer science and process control is examined, and a consensus is sought for the design of representations suitable for describing the operations of on-line computer systems via their terminal interfaces.
Abstract: Computer science has now recognized that in holistic systems design the designer must include not just the terminal, but also the user within the boundaries of the system. Users, particularly if they are computer naive, require a conceptual model of the computer system so that they can form a clear idea of what the system is doing and what it can do. This model is communicated to the user by the representation of the system which appears at his terminal. Existing techniques for the design of terminal dialogues do not include methods for representing conceptual models, so new techniques are needed. If these are to be reliable they must be based in theory rather than just the intuitions of individual designers. This article examines current theory and practice in psychology, computer science and process control, and seeks a consensus for the design of representations suitable for describing the operations of on-line computer systems via their terminal interfaces.

Journal ArticleDOI
Brian R. Gaines1
TL;DR: A standard uncertainty logic (SUL) is presented that subsumes standard propositional, fuzzy and probability logics, and it is shown how many key results may be derived within SUL without further constraints.
Abstract: This paper examines the motivation and foundations of fuzzy sets theory, now some 20 years old, particularly possible misconceptions about possible operators and relations to probability theory. It presents a standard uncertainty logic (SUL) that subsumes standard propositional, fuzzy and probability logics, and shows how many key results may be derived within SUL without further constraints. These include resolutions of standard paradoxes such as those of the bald man and of the barber, decision rules used in pattern recognition and control, the derivation of numeric truth values from the axiomatic form of the SUL, and the derivation of operators such as the arithmetic mean. The addition of the constraint of truth-functionality to a SUL is shown to give fuzzy, or Lukasiewicz infinitely-valued, logic. The addition of the constraint of the law of the excluded middle to a SUL is shown to give probability, or modal S5, logic. An example is given of the use of the two logics in combination to give a possibility vector when modelling sequential behaviour with uncertain observations. This paper is based on the banquet address with the same title given at NAFIP-1, the First North American Fuzzy Information Processing Group Workshop, held at Utah State University, May 1982.

Journal ArticleDOI
TL;DR: The guidelines presented here are not considered to be exhaustive and were developed to apply to a specific system, but many of the issues addressed may be of considerable general interest in such situations.
Abstract: This paper presents, in the form of a case study, guidelines on relevant human factors considerations for use in designing a computer graphics system. Although the guidelines presented here are not considered to be exhaustive, and were developed to apply to a specific system, many of the issues addressed may be of considerable general interest in such situations. Both guidelines extracted from the literature and the authors' design observations are presented for each of the six principal system components addressed. These components are: the graphics display (color CRT monitor); the man-computer dialogue used for interactive communication (menu selection dialogue); the graphics tablet; an alphanumeric support display (black-and-white CRT monitor); an alphanumeric keyboard for inputting data into the support CRT; and the workspace within which these components are located. This study points out areas requiring further research and experimentation towards the development of man-computer interface guidelines.

Journal ArticleDOI
Ronald R. Yager1
TL;DR: The theory of approximate reasoning is used to provide both a means for translating propositions into a machine-understandable form and a methodology for making inferences from this information.
Abstract: We are interested in finding automated procedures for extracting information from knowledge bases which contain linguistic information. We use the concept of a possibility distribution to represent the information in a linguistic value. The theory of approximate reasoning is used to provide both a means for translating propositions into a machine-understandable form and a methodology for making inferences from this information. An algorithmic procedure consisting of the development of a knowledge tree and an evaluation procedure resulting in the desired information are presented. In this paper we restrict ourselves to answering questions about the value of a variable from a knowledge base consisting of simple data statements and implication statements.
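
For concreteness, a compact sketch of the compositional rule of inference that underlies such procedures, here with the min implication operator as one common choice (the domains, fuzzy sets and operator choice are assumptions of this sketch): given the fact "X is A'" and the rule "if X is A then Y is B", the induced possibility distribution is B'(y) = sup_x min(A'(x), R(x, y)).

```python
# A toy sketch of possibilistic inference by the compositional rule of
# inference, with R(x, y) = min(A(x), B(y)) as the implication operator.
domain_x = range(5)
domain_y = range(5)
A  = {x: max(0.0, 1 - abs(x - 1) / 2) for x in domain_x}   # rule antecedent
B  = {y: max(0.0, 1 - abs(y - 3) / 2) for y in domain_y}   # rule consequent
A1 = {x: max(0.0, 1 - abs(x - 2) / 2) for x in domain_x}   # observed fact

R  = {(x, y): min(A[x], B[y]) for x in domain_x for y in domain_y}
B1 = {y: max(min(A1[x], R[(x, y)]) for x in domain_x) for y in domain_y}
print({y: round(v, 2) for y, v in B1.items()})   # inferred distribution on Y
```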

Journal ArticleDOI
TL;DR: A modified fuzzy c-varieties (FCV) algorithm is proposed which is capable of seeking out a mixture of clusters of possibly different shapes and should thereby be more likely to successfully detect the actual structure in the data and less likely to impose it.
Abstract: A modified fuzzy c-varieties (FCV) algorithm is proposed which is capable of seeking out a mixture of clusters of possibly different shapes. The algorithm adapts to the structure encountered during the computation and should thereby be more likely to successfully detect the actual structure in the data and less likely to impose it.
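
For orientation, a sketch of the membership update shared by the fuzzy c-means/c-varieties family (FCV itself measures distance to a fitted linear variety rather than to a point prototype, and the adaptive version proposed here mixes shapes; the data below are illustrative):

```python
# A hedged sketch of the fuzzy membership update common to this family:
# u[i][k] = 1 / sum_j (d[i][k] / d[j][k]) ** (2 / (m - 1)),
# assuming all distances are strictly positive.
def memberships(distances, m=2.0):
    """distances[i][k]: distance of datum k to cluster i."""
    c, n = len(distances), len(distances[0])
    u = [[0.0] * n for _ in range(c)]
    for k in range(n):
        for i in range(c):
            u[i][k] = 1.0 / sum((distances[i][k] / distances[j][k]) ** (2 / (m - 1))
                                for j in range(c))
    return u

# Datum 1 leans to cluster 0 (u = 0.8), datum 2 to cluster 1 (u = 0.8).
print(memberships([[1.0, 4.0], [2.0, 1.0]]))
```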