
Showing papers in "International Journal of Human-computer Studies / International Journal of Man-machine Studies in 1986"


Journal ArticleDOI
TL;DR: A fuzzy causal algebra for governing causal propagation on FCMs is developed and it allows knowledge bases to be grown by connecting different FCMs.
Abstract: Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.

3,116 citations
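The matrix-based causal propagation described in the abstract can be sketched as repeated vector-matrix multiplication with thresholding. The concept names, edge weights, and threshold below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical three-concept map: C0 "rainfall" -> C1 "flooding" -> C2 "crop damage".
# W[i][j] is the fuzzy causal edge weight from concept i to concept j.
W = np.array([
    [0.0, 0.8, 0.0],   # rainfall promotes flooding
    [0.0, 0.0, 0.9],   # flooding promotes crop damage
    [0.0, 0.0, 0.0],
])

def step(state, W, threshold=0.5):
    """One forward-chaining step: propagate activation along the causal
    edges, then threshold the result back to a binary concept state."""
    return (state @ W > threshold).astype(float)

state = np.array([1.0, 0.0, 0.0])   # switch "rainfall" on
state = step(state, W)              # flooding becomes active
state = step(state, W)              # crop damage becomes active
```

Because knowledge bases are grown by connecting FCMs, combining two maps over the same concepts amounts to merging their weight matrices.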


Journal ArticleDOI
TL;DR: The research was based on the mental models theory which proposes that people can be trained to develop a “mental model” or a qualitative simulation of a system which will aid in generating methods for interacting with the system, debugging errors, and keeping track of one's place in the system.
Abstract: An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a database of bibliographic records. The research was based on the mental models theory which proposes that people can be trained to develop a “mental model” or a qualitative simulation of a system which will aid in generating methods for interacting with the system, debugging errors, and keeping track of one's place in the system. It follows that conceptual training based on a system model will be superior to procedural training based on the mechanics of the system. We performed a laboratory experiment with two training conditions (model and procedural), and with each condition split by sex. Forty-three subjects participated in the experiment, but only 32 were able to reach the minimum competency level required to complete the experiment. The data analysis incorporated time-stamped monitoring data, personal characteristics variables, affective variables, and interview data in which subjects described how they thought the system worked (an articulation of the model). As predicted, the model-based training had no effect on the ability to perform simple, procedural tasks, but subjects trained with a model performed better on complex tasks that required extrapolation from the basic operations of the system. A stochastic process analysis of search-state transitions reinforced this conclusion. Subjects had difficulty articulating a model of the system, and we found no differences in articulation by condition. The high number of subjects (26%) who were unable to pass the benchmark test indicates that the retrieval tasks were inherently difficult. More interestingly, those who dropped out were significantly more likely to be humanities or social science majors than science or engineering majors, suggesting important individual differences and equity issues.
The sex-related differences were slight, although significant, and suggest future research questions.

291 citations


Journal ArticleDOI
TL;DR: Results indicated that for persons with high anxiety, an English composition course treatment was significantly more effective in reducing computer anxiety than was a course in computer programming.
Abstract: Two studies are described in this article. The first examined antecedents of computer anxiety and the second exposed two groups of subjects who had no previous experience with computers to two treatments designed to lower the anxiety. Results indicated that for those persons with high anxiety an English composition course treatment in which students used word-processing as a tool was significantly more effective in reducing computer anxiety than was a course in computer programming. The programming course, however, was significantly more effective in reducing anxiety than was no treatment. Women were represented more often than men in the high-anxiety conditions. Results are discussed in terms of appropriate training techniques in educational and workplace environments to lower anxiety in vulnerable populations so that all might participate in the technological revolution.

213 citations


Journal ArticleDOI
TL;DR: None of the technologies available at present are “ideal”, particularly for the casual computer user, though they still can provide very successful interfaces if their particular properties are fully taken into account in the target application.
Abstract: Touch sensing on the surface of a computer display screen is a means of capturing man's natural pointing instincts and using them as a mode of human-computer communication. The “touch-sensitive screen” is gradually becoming more popular and it is vital that the individual characteristics of the technologies are understood in order to produce a successful interface. This review covers the methods used for touch sensing on displays, their modes of use and how well they might meet application designers' and users' expectations. None of the technologies available at present are “ideal”, particularly for the casual computer user, though they still can provide very successful interfaces if their particular properties are fully taken into account in the target application.

209 citations


Journal ArticleDOI
TL;DR: The paper attempts to provide a more systematic treatment of icon interfaces than has hitherto been made, and to create a classification which it is hoped will be of use to the dialogue designer.
Abstract: This paper is concerned with the use of icons in human-computer interaction (HCI). Icons are pictographic representations of data or processes within a computer system, which have been used to replace commands and menus as the means by which the computer supports a dialogue with the end-user. They have been applied principally to graphics-based interfaces to operating systems, networks and document-processing software. The paper attempts to provide a more systematic treatment of icon interfaces than has hitherto been made, and to create a classification which it is hoped will be of use to the dialogue designer. The characteristics, advantages and disadvantages of icon-based dialogues are described. Metaphors, design alternatives, display structures and implementation factors are discussed, and there is a summary of some icon design guidelines drawn from a variety of sources. Some mention is also made of attempts by researchers to measure the effectiveness of icon designs empirically.

192 citations


Journal ArticleDOI
Ronald R. Yager1
TL;DR: It is shown how variables whose values are represented by Dempster-Shafer structures can be combined under arithmetic operations such as addition, and that Dempster's rule is a special case of the general procedure under the intersection operation.
Abstract: We show how variables whose values are represented by Dempster-Shafer structures can be combined under arithmetic operations such as addition. We then generalize this procedure to allow for the combination of these types of variables under more general operations. We note that Dempster's rule is a special case of this situation under the intersection operation.

167 citations
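The combination procedure the abstract describes can be sketched as follows, assuming focal elements are finite sets of numbers; the example masses and values are invented for illustration:

```python
from collections import defaultdict
from itertools import product

def combine(m1, m2, op):
    """Combine two Dempster-Shafer structures under a binary operation `op`.
    m1, m2 map focal elements (frozensets of numbers) to masses; the image
    focal element {op(a, b) for a in A, b in B} receives mass m1(A) * m2(B)."""
    out = defaultdict(float)
    for (A, wa), (B, wb) in product(m1.items(), m2.items()):
        out[frozenset(op(a, b) for a in A for b in B)] += wa * wb
    return dict(out)

# Hypothetical example: X is "about 2", Y is exactly 3; combine under addition.
mX = {frozenset({1, 2}): 0.6, frozenset({2}): 0.4}
mY = {frozenset({3}): 1.0}
mSum = combine(mX, mY, lambda a, b: a + b)   # {4, 5}: 0.6, {5}: 0.4
```

Replacing `op` with set intersection (and renormalizing over conflicting mass) recovers the familiar Dempster's rule, which is the special case the paper notes.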


Journal ArticleDOI
TL;DR: An architecture for building better Computer-Assisted Instruction (CAI) programs by applying and extending Artificial Intelligence techniques which were developed for planning and controlling the actions of robots is proposed.
Abstract: This paper proposes an architecture for building better Computer-Assisted Instruction (CAI) programs by applying and extending Artificial Intelligence (AI) techniques which were developed for planning and controlling the actions of robots. A detailed example shows how programs built according to this architecture are able to plan global teaching strategies using local information. Since the student's behavior can never be accurately predicted, the pre-planned teaching strategies may be foiled by sudden surprises and obstacles. In such cases, the planning component of the program is dynamically reinvoked to revise the unsuccessful strategy, often by recognizing student misconceptions and planning a means to correct them. This plan-based teaching strategy scheme makes use of global course knowledge in a flexible way that avoids the rigidity of earlier CAI systems. It also allows larger courses to be built than has been possible in most AI-based “intelligent tutoring systems” (ITSs), which seldom address the problem of global teaching strategies.

159 citations


Journal ArticleDOI
William S. Cleveland1, R. McGill1
TL;DR: This paper describes an experiment that was conducted to investigate the accuracy of six basic judgments of graphical perception, and two types of position judgments were found to be the most accurate.
Abstract: Graphical perception is the visual decoding of categorical and quantitative information from a graph. Increasing our basic understanding of graphical perception will allow us to make graphs that convey quantitative information to viewers with more accuracy and efficiency. This paper describes an experiment that was conducted to investigate the accuracy of six basic judgments of graphical perception. Two types of position judgments were found to be the most accurate, length judgments were second, angle and slope judgments were third, and area judgments were last. Distance between judged objects was found to be a factor in the accuracy of the basic judgments.

149 citations


Journal ArticleDOI
TL;DR: The studies showed performance advantages for on-screen touch panel entry; preference ratings for the touch panel and keyboard devices depended on the type of task being performed, while the mouse was always the least preferred device.
Abstract: Two studies were conducted to test user performance and attitudes for three types of selection devices used in computer systems. The techniques examined included onscreen direct pointing (touch panel), off-screen pointer manipulation (mouse), and typed identification (keyboard). Both experiments tested subjects on target selection practice tasks, and in typical computer applications using menu selection and keyboard typing. The first experiment examined the performance and preferences of 24 subjects. The second experiment used 48 subjects divided into two typing skill groups and into male-female categories. The studies showed performance advantages for on-screen touch panel entry. Preference ratings for the touch panel and keyboard devices depended on the type of task being performed, while the mouse was always the least preferred device. Differences between this result and those reporting an advantage of mouse selection are discussed.

134 citations


Journal ArticleDOI
TL;DR: Using the method of rough classification it is shown that the given norms ensure a good classification of patients and some minimum sets of attributes significant for high-quality classification are obtained.
Abstract: The concept of “rough” sets is used to approximate the analysis of an information system describing 77 patients with duodenal ulcer treated by highly selective vagotomy (HSV). The patients are described by 11 attributes. The attributes concern sex, age, duration of disease, complication of duodenal ulcer and various factors of gastric secretion. Two values of sex and age are distinguished, five values of complications and three or four values of secretion attributes, according to norms proposed in this paper. For each patient, the result of treatment by HSV is expressed in the Visick grading which corresponds to four classes. Using the method of rough classification it is shown that the given norms ensure a good classification of patients. Afterwards, some minimum sets of attributes significant for high-quality classification are obtained. Upon analysis of values taken by attributes belonging to these sets a “model” of patients in each class is constructed. This model gives indications for treatment by HSV.

117 citations
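Rough classification rests on lower and upper approximations of a set of patients under the indiscernibility relation induced by the chosen attributes. A minimal sketch of those two approximations, with patients, attributes, and values invented for illustration:

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Lower/upper approximation of `target` (a set of object ids) under the
    indiscernibility relation induced by `attrs`.
    objects: id -> {attribute: value}."""
    classes = defaultdict(set)
    for oid, vals in objects.items():
        classes[tuple(vals[a] for a in attrs)].add(oid)
    lower, upper = set(), set()
    for c in classes.values():
        if c <= target:
            lower |= c   # class entirely inside the target: certain members
        if c & target:
            upper |= c   # class overlapping the target: possible members
    return lower, upper

# Hypothetical patients described by two coarse attributes.
patients = {
    1: {"sex": "M", "acid": "high"},
    2: {"sex": "M", "acid": "high"},   # indiscernible from patient 1
    3: {"sex": "F", "acid": "low"},
}
good_outcome = {1, 3}                  # e.g. a favourable Visick class
lower, upper = approximations(patients, ["sex", "acid"], good_outcome)
```

An attribute set that keeps the approximations tight while being minimal is, informally, what the paper's "minimum sets of attributes significant for high-quality classification" refers to.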


Journal ArticleDOI
TL;DR: The main objective of this paper is to show that the concept of “approximate classification” of a set is closely related to the statistical approach.
Abstract: Quinlan suggested an inductive algorithm based on the statistical theory of information originally proposed by Shannon. Recently Pawlak showed that the principles of inductive learning (learning from examples) can be precisely formulated on the basis of the theory of rough sets. These two approaches are apparently very different, although in both methods objects in the knowledge base are assumed to be characterized by “features” (attributes and attribute values). The main objective of this paper is to show that the concept of “approximate classification” of a set is closely related to the statistical approach. In fact, in the design of inductive programs, the criterion for selecting dominant attributes based on the concept of rough sets is a special case of the statistical method if an equally probable distribution of objects in the “doubtful region” of the approximation space is assumed.

Journal ArticleDOI
TL;DR: The computational and economic enabling of ICAI is proceeding more rapidly than are its empirical and cognitive foundations, but significant overall progress is being made; increasing availability, decreasing cost and growing commercial interest in AI-based educational devices are enhancing the development of I CAI systems.
Abstract: Educational devices incorporating artificial intelligence (AI) would “understand” what, whom and how they were teaching and could therefore tailor content and method to the needs of an individual learner without being limited to a repertoire of prespecified responses (as are conventional computer assisted instruction systems). This article summarizes and synthesizes some of the most important research in the development of stand-alone intelligent computer-assisted instruction (ICAI) systems; a review of passive AI-based educational tools (e.g. microworlds, “idea processors”, empowering environments) would require a separate discussion. ICAI tutors and coaches have four major components: a knowledge base, a student model, a pedagogical module and a user interface. Major current themes of research in the knowledge base include studies of expert cognition, the transfer of meaning, and the sequencing of content. Student-modelling issues focus on alternative ways to represent a pupil's knowledge, errors and learning. Pedagogical strategies used by ICAI devices range over presenting increasingly complex concepts or problems, simulating phenomena, Socratic tutoring with correction of pupil misconceptions and modelling of expert problem solving via coaching; the central theme of research is finding overarching paradigms for explanation. Language comprehension and generation topics which have special relevance to intelligent tutors and coaches are also briefly reviewed. Overall, increasing availability, decreasing cost and growing commercial interest in AI-based educational devices are enhancing the development of ICAI systems. Limits on the sophistication of user interfaces, on the scope of subject domains and on current understanding of individual learning are all constraining the effectiveness of computer tutors and coaches.
The explicitness required for constructing intelligent devices makes their evolution more difficult and time consuming, but enriches the theoretical perspective which emerges. In brief, the computational and economic enabling of ICAI is proceeding more rapidly than are its empirical and cognitive foundations, but significant overall progress is being made.

Journal ArticleDOI
TL;DR: Performance on the novel device, subjects' perceptions of the similarity among devices' functions and subjects' recall of the three training devices' operations, all provided data indicating that exploration-based training promoted the use of analogical reasoning in knowledge transfer and facilitated the induction of abstract device representations (schemas).
Abstract: This study compared exploration-based training and instruction-based training as methods of acquiring and transferring procedural device knowledge, and examined whether any differences in learning outcomes could be explained by the trainees' use of analogical reasoning from either abstract or concrete representations of devices in memory. The exploration trainees experimented with three analogous simulated devices in order to discover the procedures governing their operations, whereas the instructed trainees followed procedural examples contained in manuals. After a 2-day post-training delay, trainees were exposed to a novel transfer device, which was either analogous or disanalogous to the three training devices. Performance on the novel device, subjects' perceptions of the similarity among devices' functions and subjects' recall (written and behavioural) of the three training devices' operations, all provided data indicating that exploration-based training promoted the use of analogical reasoning in knowledge transfer and facilitated the induction of abstract device representations (schemas). No such claim could be made for instruction-based training. Implications for the future of exploration as a training method and suggestions for future research are discussed.

Journal ArticleDOI
TL;DR: Evidence for the existence and use of beacons in comprehension of a sort program and the results of both experiments support the idea that beacons exist as a focal point for study and understanding of programs by experienced programmers.
Abstract: In programming, beacons are lines of code which serve as typical indicators of a particular structure or operation. This research sought evidence for the existence and use of beacons in comprehension of a sort program. In the first experiment, subjects memorized and later recalled the whole sort program. Experienced programmers, but not novices or intermediates, recalled the beacon lines much better than non-beacon lines. In the second experiment, experienced programmers studied the same program and then were asked to recall several isolated parts of it. They did not know in advance that they would be asked to recall. Subjects recalled the beacon much better than non-beacon parts. They also were more certain that they recalled the beacon correctly. The results of both experiments support the idea that beacons exist as a focal point for study and understanding of programs by experienced programmers.

Journal ArticleDOI
Jakob Nielsen1
TL;DR: A model of computer-human interaction is presented, viewing the interaction as a hierarchy of virtual protocol dialogues, where each virtual protocol realizes the dialogue on the level above itself and is in turn supported by a lower-level protocol.
Abstract: A model of computer-human interaction is presented, viewing the interaction as a hierarchy of virtual protocol dialogues. Each virtual protocol realizes the dialogue on the level above itself and is in turn supported by a lower-level protocol. This model is inspired by the OSI-model for computer networks from the International Standards Organization. The virtual dialogue approach enables the separation of technical features of new devices (e.g. a mouse or a graphical display) from the conceptual features (e.g. menus or windows). Also, it is possible to analyse error messages and other feedback as part of the different protocols.

Journal ArticleDOI
Andrew Monk1
TL;DR: Experimental work presented shows that this mode-dependent keying-contingent sound can be an effective way of making users aware of mode changes and mode errors were reduced.
Abstract: It is often claimed that the user interfaces of advanced integrated systems are mode-free. However, if one applies the user-centred analysis developed in this paper, it is clear that almost any system of realistic complexity will have modes of some kind. By using this analysis it is also possible to identify the situations in which modes are likely to give rise to errors and those where they will not. Various measures for preventing mode errors are suggested. One of these is to signal mode by generating sounds which are contingent on the users' actions. Experimental work presented shows that this mode-dependent keying-contingent sound can be an effective way of making users aware of mode changes. Mode errors were reduced to a third of the number observed with a control group.

Journal ArticleDOI
TL;DR: The role of models and analogical thinking when learning to interact with a computer is discussed, suggestions are given as to how novices' difficulties could be alleviated and topics for future research are proposed.
Abstract: Literature dealing with cognitive aspects of novices' use of computers is reviewed. Many of the conclusions drawn in cognitive psychology about differences between novices and experts are supported also in the computer domain. Novices have less, and more fragmented, knowledge, spend less time encoding the task and do so in a way that is more determined by the surface features of the problem or information given, compared with experts. Novices in general make more errors and have greater difficulties finding them than experts. Other studies show that novices have difficulties in taking advantage of aid given by advisors, computer programs and other sources of information. The role of models and analogical thinking when learning to interact with a computer is discussed, suggestions are given as to how novices' difficulties could be alleviated and topics for future research are proposed.

Journal ArticleDOI
TL;DR: A theory of the “cognitive layout” of information presented in multiple windows or screens is developed and it is hypothesized that the particular layout adopted by a user will drastically affect the user's understanding and expectation of events at the human-computer interface and could either greatly facilitate or frustrate the interaction.
Abstract: In order to make computers easier to use and more versatile many system designers are exploring the use of multiple windows on a single screen and multiple coordinated screens in a single work station displaying linked or related information. The designers of such systems attempt to take into account the characteristics of the human user and the structure of the tasks to be performed. Central to this design issue is the way in which the user views and cognitively processes information presented in the windows or in multiple screens. This paper develops a theory of the “cognitive layout” of information presented in multiple windows or screens. It is assumed that users adopt a cognitive representation or layout of the type of information to be presented and the relationships among the windows or screens and the information they contain. A number of cognitive layouts are derived from theories in cognitive psychology and are discussed in terms of the intent of the software driving the system and congruence with the cognitive processing of the information. It is hypothesized that the particular layout adopted by a user will drastically affect the user's understanding and expectation of events at the human-computer interface and could either greatly facilitate or frustrate the interaction. Ways of ensuring the former and avoiding the latter are discussed in terms of implementations on existing multiple-window and multiple-screen systems.

Journal ArticleDOI
TL;DR: A detailed account of how people learn about a complex device in an instructionless-learning context and forms hypotheses about various aspects of the Big Trak, including the syntax of interaction, the semantics of operators, and the device model.
Abstract: In order to study the mechanisms that underlie “intuitive” scientific reasoning, verbal protocols were collected from seven computer-naive college students asked to “figure out” a Big Trak programmable toy, without a user's guide or other assistance. We call this paradigm Instructionless Learning. The present paper presents a detailed account of how people learn about a complex device in an instructionless-learning context. Subjects' behavior is divided into an orientation phase and a systematic phase. We attend most carefully to the systematic phase. Learners form hypotheses about various aspects of the Big Trak: the syntax of interaction, the semantics of operators, and the device model, which includes objects such as memories, switches, etc. Subjects attempt to confirm hypotheses from which predictions can be made, to refine hypotheses that do not immediately yield predictions, and to verify their total knowledge of the device. Hypotheses are formulated from observation. If an initial hypothesis is incorrect, it will yield incorrect predictions in interactions. When such failures occur, learners change their theory to account for the currently perceived behavior of the device. These changes are often based upon little evidence and may even be contradicted by available information. Thus, the new hypotheses may also be incorrect, and lead to further errors and changes.

Journal ArticleDOI
TL;DR: This paper surveys the development of HCI and related topics in artificial intelligence; their history, foundations, and relations to other computing disciplines; and topics relating to future developments in HCI.
Abstract: The human-computer interface is increasingly the major determinant of the success or failure of computer systems. It is time that we provided foundations of engineering human-computer interaction (HCI) as explicit and well-founded as those for hardware and software engineering. Computing technology has progressed through a repeated pattern of breakthroughs in one technology, leading to its playing a key role in initiating a new generation. The basic technologies of electronics, virtual machines, and software have gone through cycles of breakthrough, replication, empiricism, theory, automation and maturity. HCI entered its period of theoretical consolidation at the beginning of the fifth generation in 1980. The lists of pragmatic dialog rules for HCI in the fourth generation have served their purpose, and effort should now be directed to the underlying foundations. The recently announced sixth-generation computer system (SGCS) development program is targeted on these foundations and the formulation of knowledge science. This paper surveys the development of HCI and related topics in artificial intelligence; their history, foundations, and relations to other computing disciplines. The companion paper surveys topics relating to future developments in HCI.

Journal ArticleDOI
Takashi Kato1
TL;DR: It is argued that question-asking protocols shed light on what problems users experience in what context, what instructional information they come to need, what features of the system are harder to learn, and how users may come to understand or misunderstand the system.
Abstract: To make computer systems easier to use, we are in need of behavioral data which enable us to pinpoint what specific needs and problems users may have. Recently, the “thinking-aloud protocol” method was adopted as a technique for studying user behaviours in interactive computer systems. In the present paper, the “question-asking protocol” method is proposed as a viable alternative to the thinking-aloud method where the application of the latter is difficult or even inappropriate. It is argued that question-asking protocols shed light on (1) what problems users experience in what context, (2) what instructional information they come to need, (3) what features of the system are harder to learn, and (4) how users may come to understand or misunderstand the system.

Journal ArticleDOI
TL;DR: The development of styles of dialog through generations of computers, the principles involved, and the move towards integrated systems are surveyed and the foundations of HCI are explored by analysing the various analogies possible when the parties are taken to be general systems, equipment, computers or people.
Abstract: The human-computer interface is increasingly the major determinant of the success or failure of computer systems. It is time that we provided foundations of engineering human-computer interaction (HCI) as explicit and well-founded as those for hardware and software engineering. Through the influences of other disciplines and their contribution to software engineering, a rich environment for HCI studies, theory and applications now exists. Many principles underlying HCI have systemic foundations independent of the nature of the systems taking part and these may be analysed control-theoretically and information-theoretically. The fundamental principles at different levels may be used in the practical design of dialog shells for engineering effective HCI. This paper surveys the development of styles of dialog through generations of computers, the principles involved, and the move towards integrated systems. It then systematically explores the foundations of HCI by analysing the various analogies to HCI possible when the parties are taken to be general systems, equipment, computers or people.

Journal ArticleDOI
TL;DR: The arrow-jump keys were found to have the quickest traversal times for paths with either short or long target distances, and personality type was not found to play a critical role.
Abstract: This paper reports on an experiment which was conducted to examine relative merits of using a mouse or arrow-jump keys to select text in an interactive encyclopedia. Timed path traversals were performed by subjects using each device, and were followed by subjective questions. Personality and background of the subjects were recorded to see if those attributes would affect device preference and performance. The arrow-jump keys were found to have the quickest traversal times for paths with either short or long target distances. The subjective responses indicated that the arrow-jump method was overwhelmingly preferred over the mouse method. Personality type was not found to play a critical role.

Journal ArticleDOI
TL;DR: This paper compares three important approaches to building decision aids implemented as expert systems: Bayesian classification, rule-based deduction, and frame-based abduction.
Abstract: Given the current widespread interest in expert systems, it is important to examine the relative advantages and disadvantages of the various methods used to build them. In this paper we compare three important approaches to building decision aids implemented as expert systems: Bayesian classification, rule-based deduction, and frame-based abduction. Our critical analysis is based on a survey of previous studies comparing different methods used to build expert systems as well as our own collective experience over the last five years. The relative strengths and weaknesses of the different approaches are analysed, and situations in which each method is easy or difficult to use are identified.

Journal ArticleDOI
TL;DR: The Memory Extender (ME) system improves the user interface to a personal database by actively modeling the user's own memory for files and for the context in which these files are used.
Abstract: The benefits of electronic information storage are enormous and largely unrealized. As its cost continues to decline, the number of files in the average user's personal database may increase substantially. How is a user to keep track of several thousand, perhaps several hundred thousand, files? The Memory Extender (ME) system improves the user interface to a personal database by actively modeling the user's own memory for files and for the context in which these files are used. The ME system is similar, in many respects, to current spreading activation, network models of human memory. Files are multiply indexed through a network of variably weighted term links. Context is similarly represented and is used to minimize the user input necessary to specify a file unambiguously—either for purposes of storage or retrieval. Files are retrieved through a spreading-activation-like process. The system aims toward an ideal in which the computer provides a natural extension to the user's own memory.
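A much-simplified, one-hop sketch of the weighted term-link scoring the abstract describes; the actual ME system spreads activation through a network, whereas this reduces retrieval to a weighted sum over query and context terms. File names, terms, weights, and the `context_weight` parameter are all invented for illustration:

```python
def best_match(index, query_terms, context_terms, context_weight=0.5):
    """Score each file by its term-link weights to the query terms, plus a
    down-weighted contribution from the current context's terms, and
    return the highest-scoring file.
    index: file name -> {term: link weight}."""
    def score(links):
        s = sum(links.get(t, 0.0) for t in query_terms)
        return s + context_weight * sum(links.get(t, 0.0) for t in context_terms)
    return max(index, key=lambda f: score(index[f]))

# Hypothetical personal database: each file is multiply indexed by weighted terms.
index = {
    "thesis-draft": {"thesis": 0.9, "writing": 0.7},
    "tax-return":   {"money": 0.8, "forms": 0.6},
}
```

Because context contributes to every score, an ambiguous query such as `["forms"]` issued while working on financial files still resolves to the intended file with minimal user input, which is the behaviour the abstract highlights.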


Journal ArticleDOI
TL;DR: This paper represents a thesaurus (R) for an information system as the sum of two fuzzy relations, S (synonyms) and G (generalizations), and interprets R̄*, which extends the concept-pair fuzzy relation R initially provided by an expert, as a linguistic completion of the thesaurus.
Abstract: In this paper we represent a thesaurus (R) for an information system as the sum of two fuzzy relations, S (synonyms) and G (generalizations). The max-star completion of R is defined as R̄*, the max-star transitive closure of R. We interpret R̄*, which extends the concept-pair fuzzy relation R initially provided by an expert, as a linguistic completion of the thesaurus. Six max-star completions, corresponding to six well-known T-norms, are defined, analysed, and numerically illustrated on a nine-term dictionary. The application of our results in the context of document retrieval is this: one may use R̄* as a means of effecting replacements of terms appearing in a natural-language document request. The weights (R̄*)ij can be used to diminish or increase one's confidence in the degree of support being developed for each document considered relevant to a given query. The ijth element of R̄* can be regarded as the ultimate extent to which term j can be "reached" from term i; the values in R̄* thus represent degrees of confidence in max-star transitive chains.
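One concrete instance of the max-star closure takes the T-norm ("star") to be min, giving the classical max-min transitive closure. It can be computed by repeated max-min composition until the relation stabilizes; the three-term relation below is invented for illustration:

```python
# Max-min transitive closure of a fuzzy relation: the special case of
# the max-star closure with star = min. The 3x3 relation is invented.

def max_min_compose(a, b):
    """Max-min composition (R o R)_ij = max_k min(a_ik, b_kj)."""
    n = len(a)
    return [[max(min(a[i][k], b[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]

def transitive_closure(r):
    """Iterate R := R union (R o R), elementwise max, until stable."""
    closure = [row[:] for row in r]
    while True:
        comp = max_min_compose(closure, closure)
        nxt = [[max(closure[i][j], comp[i][j])
                for j in range(len(r))] for i in range(len(r))]
        if nxt == closure:
            return closure
        closure = nxt

R = [[1.0, 0.8, 0.0],
     [0.0, 1.0, 0.6],
     [0.0, 0.0, 1.0]]
closure = transitive_closure(R)
# closure[0][2] picks up the chain term1 -> term2 -> term3 with
# strength min(0.8, 0.6) = 0.6, even though R[0][2] was 0.0.
```

Swapping min for another T-norm (product, bounded difference, etc.) yields the other completions the paper analyses.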

Journal ArticleDOI
TL;DR: Dewlap, an interactive Prolog debugger, is described; it is implemented in Prolog on relatively standard hardware: a central processor running Unix and remote workstations with bit-mapped displays and mice.
Abstract: An interactive Prolog debugger, Dewlap, is described. Output from the debugger is in the form of graphical displays of both the derivation tree and the parameters to procedure calls. The major advantage of such displays is that they allow important information to be displayed prominently and unimportant information to be shrunk so that it is accessible but not distracting. Other advantages include the following: the control flow in Prolog is clearly shown; the control context of a particular call is readily determined; it is easy to find out whether two uninstantiated variables are bound together; and very fine control is possible over debugging and display options. A high-level graphics language is provided to allow the user to tailor the graphical display of data structures to particular applications. A number of issues raised by the need to update such displays efficiently and to control their perceived complexity are addressed. The Dewlap system is implemented in Prolog on relatively standard hardware with a central processor running Unix and remote workstations with bit-mapped displays and mice.
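The display principle described above, prominent important nodes with unimportant subtrees shrunk but still accessible, can be sketched textually; the derivation tree and the choice of which goals to shrink are invented for illustration (Dewlap itself uses graphical, not textual, displays):

```python
# Sketch of a derivation-tree display in the spirit described above:
# important nodes are shown expanded, while subtrees whose goal is
# marked unimportant are collapsed to a one-line "..." placeholder.
# The tree and the shrink set are invented for illustration.

def render(node, shrink, depth=0):
    """node = (goal, children). Returns indented display lines,
    collapsing any subtree whose goal appears in `shrink`."""
    goal, children = node
    lines = ["  " * depth + goal]
    if goal in shrink and children:
        lines.append("  " * (depth + 1) + "...")   # shrunk subtree
    else:
        for child in children:
            lines.extend(render(child, shrink, depth + 1))
    return lines

tree = ("append([1],[2],X)",
        [("append([],[2],Y)", []),
         ("builtin_unify", [("detail_a", []), ("detail_b", [])])])
display = "\n".join(render(tree, shrink={"builtin_unify"}))
```

In the real system the user steers this prominence interactively, and the tree is redrawn incrementally as the derivation grows.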

Journal ArticleDOI
Ellen Hisdal
TL;DR: A number of difficulties with present-day fuzzy-set theory are pointed out in order to justify the necessity of a modified approach, called the TEE model, which interprets a membership value μ_λ as a subject's estimate of the probability that the label λ would be assigned to an object in an LB or YN experiment.
Abstract: A number of difficulties with present-day fuzzy-set theory are pointed out in order to justify the necessity of a modified approach to this theory. For each difficulty, its resolution in a modified approach, called the TEE model, is outlined. The paper is therefore also a short survey of the TEE model theory for grades of membership. Superficially stated, this model interprets a membership value μ_λ(u_ex) assigned by a subject to an object of attribute value u_ex as his estimate of the probability that the label λ would be assigned to that object in an LB (labeling) or YN (yes-no) experiment; e.g. by himself under nonexact conditions of observation, or by another subject.
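The TEE model's probabilistic reading of a membership grade can be sketched directly: the estimated membership is the fraction of repeated yes-no (YN) labeling trials in which the label was assigned. The trial outcomes below are invented for illustration:

```python
# Sketch of the TEE-model reading of a membership grade: mu_lambda(u)
# is estimated as the fraction of YN (yes-no) labeling trials in which
# the label lambda was assigned to the object with attribute value u.
# The trial outcomes below are invented for illustration.

def estimate_membership(yn_outcomes):
    """yn_outcomes: one boolean per YN trial (True = label assigned).
    Returns the relative frequency, i.e. the estimated mu value."""
    return sum(yn_outcomes) / len(yn_outcomes)

# A subject labels a borderline-height person "tall" in 7 of 10 trials
# conducted under nonexact conditions of observation:
trials = [True, True, False, True, True,
          False, True, True, False, True]
mu_tall = estimate_membership(trials)
```

Under this reading a membership value is an ordinary probability estimate, which is exactly what lets the model sidestep the difficulties with fuzzy-set semantics that the paper surveys.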

Journal ArticleDOI
TL;DR: A study of 30 first-time users of LCS, the online library catalog system at the Ohio State University, found the profound influence that incorrect mental representations have on the viewing and interpretation of online and offline help and instructions.
Abstract: A study of 30 first-time users of LCS, the online library catalog system at the Ohio State University, was conducted. Subjects were provided with the online and offline help available to users of this system and were asked to conduct four standard searches (author, title, subject, etc.). While conducting the online searches, the subjects were asked to “think aloud”. A detailed analysis of errors and associated verbal protocols provided insights into the design features and mental processes contributing to the commission of errors. Of particular significance are: (1) the profound influence that incorrect mental representations have on the viewing and interpretation of online and offline help and instructions; and (2) the snowballing effects of a misconception as a user tries to seek and interpret additional information in attempts to recover from an error.