
Showing papers in "International Journal of Human-computer Studies / International Journal of Man-machine Studies in 1985"


Journal ArticleDOI
TL;DR: It is shown that knowledge spaces are in a one-to-one correspondence with AND/OR graphs of a particular kind and provided the foundation for later work on algorithmic procedures for the assessment of knowledge.
Abstract: The information regarding a particular field of knowledge is conceptualized as a large, specified set of questions (or problems). The knowledge state of an individual with respect to that domain is formalized as the subset of all the questions that this individual is capable of solving. A particularly appealing postulate on the family of all possible knowledge states is that it is closed under arbitrary unions. A family of sets satisfying this condition is called a knowledge space. Generalizing a theorem of Birkhoff on partial orders, we show that knowledge spaces are in a one-to-one correspondence with AND/OR graphs of a particular kind. Two types of economical representations of knowledge spaces are analysed: bases, and Hasse systems, a concept generalizing that of a Hasse diagram of a partial order. The structures analysed here provide the foundation for later work on algorithmic procedures for the assessment of knowledge.

395 citations
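
The two central notions here, closure under union and the base, are easy to make concrete. Below is a minimal Python sketch (illustrative, not from the paper; the three-question domain is invented) that checks whether a finite family of states is a knowledge space and extracts its base, the minimal subfamily whose unions regenerate every state.

```python
from itertools import combinations

def is_knowledge_space(states):
    """Closure check: a finite family is a knowledge space iff the union
    of every pair of states is again a state (pairwise closure implies
    closure under arbitrary unions for finite families)."""
    return all(a | b in states for a, b in combinations(states, 2))

def base(states):
    """The base: states that are NOT the union of the states they
    properly contain, i.e. the minimal generating subfamily."""
    return [k for k in states
            if frozenset().union(*(s for s in states if s < k)) != k]

# Toy domain {a, b, c} with five knowledge states.
space = {frozenset(), frozenset("a"), frozenset("b"),
         frozenset("ab"), frozenset("abc")}
print(is_knowledge_space(space))  # True
print(base(space))                # {'a'}, {'b'} and {'a','b','c'} (order may vary)
```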


Journal ArticleDOI
TL;DR: This paper reports the results of an exploratory study that investigated expert and novice debugging processes with the aim of contributing to a general theory of programming expertise.
Abstract: This paper reports the results of an exploratory study that investigated expert and novice debugging processes with the aim of contributing to a general theory of programming expertise. The method used was verbal protocol analysis. Data were collected from 16 programmers employed by the same organization. First, an expert-novice classification of subjects was derived from information based on subjects' problem-solving processes: the criterion of expertise was the subjects' ability to chunk effectively the program they were required to debug. Then, significant differences in subjects' approaches to debugging were used to characterize programmers' debugging strategies. Comparisons of these strategies with the expert-novice classification showed programmer expertise based on chunking ability to be strongly related to debugging strategy. The following strategic propositions were identified for further testing:

1. (a) Experts use breadth-first approaches to debugging and, at the same time, adopt a system view of the problem area; (b) experts are proficient at chunking programs and hence display smooth-flowing approaches to debugging.

2. (a) Novices use breadth-first approaches to debugging but are deficient in their ability to think in system terms; (b) novices use depth-first approaches to debugging; (c) novices are less proficient at chunking programs and hence display erratic approaches to debugging.

333 citations


Journal ArticleDOI
TL;DR: Drivers who listened to directions drove to destinations in fewer miles, took less time, and showed about 70% fewer errors than the map drivers.
Abstract: To compare the effectiveness of navigational aids, drivers attempted to follow routes in unfamiliar environments using either customized route maps, vocal directions, or both. The customized route maps, which included only information relevant to the particular route, were drawn to scale, used colour, included interturn mileages, and showed landmarks. The route to be driven was traced in red. To obtain vocal directions, drivers operated a tape recorder that permitted them to play the next or the previous instruction. Instructions were generated by a set of rules with roughly one set of instructions per turn. Information that was not on the map was not included in the vocal instructions. Drivers who listened to directions drove to destinations in fewer miles, took less time, and showed about 70% fewer errors than the map drivers. The performance of drivers with route maps and voice directions was between that of the map only and voice only drivers.

278 citations


Journal ArticleDOI
TL;DR: The concept of overlap coefficient is introduced to describe the properties of a given fuzzy measure and to characterize the human evaluation process and a new algorithm is developed to identify the fuzzy measure with full degrees of freedom.
Abstract: This paper presents a mathematical model for the human subjective evaluation process using fuzzy integrals based on the general fuzzy measure. First, the concept of overlap coefficient is introduced to describe the properties of a given fuzzy measure and to characterize the human evaluation process. Further, a new algorithm is developed to identify the fuzzy measure with full degrees of freedom. Lastly, the model and the identification scheme are applied to two practical examples: prediction of wood strength by an experienced inspector and trouble evasive actions taken by a computer game player.

237 citations
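
For concreteness, a Sugeno-type fuzzy integral with respect to a general (possibly non-additive) fuzzy measure can be computed as below; this Python sketch and its two-attribute wood-grading measure are invented illustrations, not material from the paper.

```python
def sugeno_integral(h, g):
    """Sugeno fuzzy integral: max over i of min(h(x_(i)), g(A_(i))),
    where items are sorted by descending score h and A_(i) is the set
    of the i best items; g is a fuzzy measure on subsets of the items.

    h: dict item -> score in [0, 1]
    g: callable frozenset -> measure in [0, 1]"""
    items = sorted(h, key=h.get, reverse=True)
    return max(min(h[items[i - 1]], g(frozenset(items[:i])))
               for i in range(1, len(items) + 1))

# Invented non-additive measure: the two attributes interact.
scores = {"grain": 0.8, "colour": 0.4}
measure = {frozenset(): 0.0,
           frozenset({"grain"}): 0.6,
           frozenset({"colour"}): 0.3,
           frozenset({"grain", "colour"}): 1.0}
print(sugeno_integral(scores, measure.get))  # max(min(0.8, 0.6), min(0.4, 1.0)) = 0.6
```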


Journal ArticleDOI
TL;DR: A reference profile is built to serve as a basis of comparison for future typing samples; the mechanism can provide identity surveillance throughout the user's entire time at the keyboard.
Abstract: Most personal identity mechanisms in use today are artificial. They require specific actions on the part of the user, many of which are not “friendly”. Ideally, a typist should be able to approach a computer terminal, begin typing, and be identified from keystroke characteristics. Individuals exhibit characteristic cognitive properties when interacting with the computer through a keyboard. By examining the properties of keying patterns, statistics can be compiled that uniquely describe the user. Initially, a reference profile is built to serve as a basis of comparison for future typing samples. The profile consists of the average time interval between keystrokes (mean keystroke latency) as well as a collection of the average times required to strike any two successive keys on the keyboard. Typing samples are scored against the reference profile and a score is calculated assessing the confidence that the same individual typed both the sample and the reference profile. This mechanism has the capability of providing identity surveillance throughout the entire time at the keyboard.

222 citations
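
The profile-and-score mechanism can be sketched directly. The Python below is an illustrative reconstruction, not the authors' code: it builds per-digraph latency statistics from a reference session and scores a later sample by how many of its latencies fall within two standard deviations of the profile (one plausible scoring rule among many).

```python
from statistics import mean, stdev

def digraph_latencies(keys, times):
    """Group inter-keystroke intervals by the ordered pair of keys
    (digraph) that produced them; times are keystroke timestamps in ms."""
    profile = {}
    for i in range(len(keys) - 1):
        pair = (keys[i], keys[i + 1])
        profile.setdefault(pair, []).append(times[i + 1] - times[i])
    return profile

def score(reference, sample):
    """Fraction of sample latencies within 2 sd of the reference mean for
    the same digraph; digraphs with too little reference data are skipped."""
    hits = total = 0
    for pair, latencies in sample.items():
        ref = reference.get(pair)
        if ref is None or len(ref) < 2:
            continue
        m, s = mean(ref), stdev(ref)
        for t in latencies:
            total += 1
            hits += abs(t - m) <= 2 * s
    return hits / total if total else 0.0

ref = digraph_latencies("ababab", [0, 100, 210, 305, 420, 515])
new = digraph_latencies("abab", [0, 98, 212, 309])
print(score(ref, new))  # 1.0: every sampled latency matches the profile
```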


Journal ArticleDOI
TL;DR: This report reviews work on defining and measuring conceptual structures of expert and novice fighter pilots and identifies areas of agreement and disagreement in the knowledge structures of experts and novices.
Abstract: This report reviews work on defining and measuring conceptual structures of expert and novice fighter pilots. Individuals with widely varying expertise were tested. Cognitive structures were derived using multidimensional scaling (MDS) and link-weighted networks (Pathfinder). Experience differences among pilots were reflected in the conceptual structures. Detailed analyses of individual differences point to factors that distinguish experts and novices. Analysis of individual concepts identified areas of agreement and disagreement in the knowledge structures of experts and novices. Applications in selection, training and knowledge engineering are discussed.

222 citations
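
Of the two scaling techniques, Pathfinder has a compact algorithmic core: a link survives only if no indirect path connects its endpoints more cheaply. The Python below is a generic sketch assuming the common parameter choice r = infinity, q = n - 1 (path cost = maximum edge weight along the path), not code from the report.

```python
import math

def pathfinder(w):
    """Prune a symmetric proximity matrix (math.inf = no link) to its
    Pathfinder network for r = inf, q = n - 1: compute minimax path
    distances Floyd-Warshall style, then keep each link whose direct
    weight already equals the minimax distance."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if w[i][j] < math.inf and w[i][j] <= d[i][j]]

w = [[0, 1, 4],
     [1, 0, 2],
     [4, 2, 0]]
print(pathfinder(w))  # [(0, 1), (1, 2)]: the 0-2 link (4) loses to the path of cost max(1, 2)
```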


Journal ArticleDOI
TL;DR: Methods from George Kelly's personal construct psychology have been incorporated into a computer program, the Expertise Transfer System, which interviews experts, and helps them construct, analyse, test and refine knowledge bases.
Abstract: Retrieving problem-solving information from a human expert is a major problem when building an expert system. Methods from George Kelly's personal construct psychology have been incorporated into a computer program, the Expertise Transfer System, which interviews experts, and helps them construct, analyse, test and refine knowledge bases. Conflicts in the problem-solving methods of the expert may be enumerated and explored, and knowledge bases from several experts may be combined into one consultation system. Fast (one to two hour) expert system prototyping is possible with the use of the system, and knowledge bases may be constructed for various expert system tools.

209 citations
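
The core data structure behind Kelly-style elicitation is the repertory grid: elements rated on bipolar constructs, with match scores between constructs used to prompt the expert about possible redundancy or hidden relationships. A minimal sketch with an invented two-construct grid (not ETS's actual analysis):

```python
def construct_matches(grid, lo=1, hi=5):
    """Pairwise match score between constructs: 1 minus the mean
    normalized rating difference across all elements, on a lo..hi scale."""
    names = list(grid)
    matches = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            diff = sum(abs(x - y) for x, y in zip(grid[a], grid[b]))
            matches[(a, b)] = 1 - diff / ((hi - lo) * len(grid[a]))
    return matches

# Four elements rated 1-5 on two invented bipolar constructs.
grid = {"reliable-unreliable": [1, 4, 5, 2],
        "simple-complex":      [2, 4, 4, 1]}
print(construct_matches(grid))  # {...: 0.8125}: high match, worth querying the expert about
```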


Journal ArticleDOI
TL;DR: The recognition problem is given a clear mathematical formulation as the search for that sequence of basic speech units that best fits the input acoustic pattern, and a best-few algorithm with partial traceback of explored paths, satisfying real-time requisites, is described.
Abstract: In this paper, the “continuous speech recognition” problem is given a clear mathematical formulation as the search for that sequence of basic speech units that best fits the input acoustic pattern. For this purpose, spoken language models in the form of hierarchical transition networks are introduced, where lower level subnetworks describe the basic units as possible sequences of spectral states. The units adopted in this paper are either whole words or smaller subword elements, called diphones. The recognition problem thus becomes that of finding the best path through the network, a task carried out by the linguistic decoder. By using this approach, knowledge sources at different levels are strongly integrated. In this way, early decision making based on partial information (in particular any segmentation operation or the speech/silence distinction) is avoided; usually this is a significant source of errors. Instead, decisions are deferred to the linguistic decoder, which possesses all the necessary pieces of information. The properties that a linguistic decoder must possess in order to operate in real-time are listed, and then a best-few algorithm with partial traceback of explored paths, satisfying the above requisites, is described. In particular, the amount of storage needed is almost constant for any sentence length, the computation is approximately linear with sentence length, and the interpretation of early words in a sentence may be possible long before the speaker has finished talking. Experimental results with two systems, one with words and the other with diphones as basic speech units, are reported. Finally, relative merits of words and diphones are discussed, taking into account aspects such as the storage and computing time requirements, their relative ability to deal with phonological variations and to discriminate between similar words, their speaker adaptation capability, and the ease with which it is possible to change the vocabulary and the language dependencies.

172 citations
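
The decoder's two key ideas, keeping only the best few partial paths per frame and committing early to the prefix all survivors share, can be sketched schematically. The Python below assumes additive log-scores and a flat successor dictionary rather than hierarchical networks; it is an illustration, not the authors' decoder.

```python
def best_few_decode(frames, successors, start, beam=3):
    """Time-synchronous 'best-few' search: after each frame, keep only
    the `beam` highest-scoring partial paths through the network.
    frames: list of dicts state -> log-score of that state at that frame."""
    hyps = [(0.0, [start])]
    for scores in frames:
        grown = [(total + scores.get(nxt, -1e9), path + [nxt])
                 for total, path in hyps
                 for nxt in successors.get(path[-1], [])]
        hyps = sorted(grown, key=lambda h: h[0], reverse=True)[:beam]
    return hyps

def common_prefix(paths):
    """Partial traceback: states shared by every live hypothesis are
    final and can be output before the speaker has finished."""
    out = []
    for states in zip(*paths):
        if len(set(states)) > 1:
            break
        out.append(states[0])
    return out

net = {"sil": ["d", "t"], "d": ["ay"], "t": ["ay"]}
frames = [{"d": -1.0, "t": -2.5}, {"ay": -0.5}]
hyps = best_few_decode(frames, net, "sil", beam=2)
print(common_prefix([path for _, path in hyps]))  # ['sil'] (paths still disagree afterwards)
```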


Journal ArticleDOI
TL;DR: A simple two-component model of transfer is proposed that allows for the differential practice of general and specific components when learning a skill.
Abstract: Computer-naive subjects were taught to use either one or two line editors and then a screen editor. Positive transfer was observed both between the line editors and from the line editors to the screen editor. Transfer expressed itself in terms of reductions in total time, keystrokes, residual errors, and seconds per keystroke. A simple two-component model of transfer is proposed that allows for the differential practice of general and specific components when learning a skill.

127 citations
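
A toy version of the model makes the mechanics clear: task time is the sum of a general component practised on every editor and a specific component practised only on the current one, each improving with its own practice. All constants and the power-law form below are assumptions for illustration, not estimates from the paper.

```python
def predicted_time(general_trials, specific_trials,
                   g0=20.0, s0=40.0, alpha=0.3, beta=0.4):
    """Toy two-component transfer model: time = general component
    (practised across all editors) + specific component (practised on
    the current editor only), each following an assumed power law."""
    return g0 * general_trials ** -alpha + s0 * specific_trials ** -beta

print(round(predicted_time(10, 10), 1))  # 25.9: trial 10 on the first editor
print(round(predicted_time(11, 1), 1))   # 49.7: trial 1 on a second editor
print(round(predicted_time(1, 1), 1))    # 60.0: a true novice's first trial
```

Positive transfer falls out automatically: the first trial on a second editor inherits the accumulated general practice, so it beats the true novice's first trial even though the specific component restarts from scratch.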


Journal ArticleDOI
TL;DR: It is argued that user models are an essential component of any system which attempts to be “user friendly”, and that expert systems should tailor explanations to their users, be they super-experts or novices.
Abstract: The paper argues that user models are an essential component of any system which attempts to be “user friendly”, and that expert systems should tailor explanations to their users, be they super-experts or novices. In particular, this paper discusses a data-driven user modelling front-end subsystem, UMFE, which assumes that the user has asked a question of the main system (e.g. an expert system, intelligent tutoring system, etc.), and that the system provides a response which is passed to UMFE. UMFE determines the user's level of sophistication by asking as few questions as possible, and then presents a response in terms of concepts which UMFE believes the user understands. Investigator-defined inference rules are then used to suggest additional concepts the user may or may not know, given the concepts the user indicated he or she knew in earlier questioning. Several techniques are discussed for detecting and removing inconsistencies in the user model. Additionally, UMFE modifies its inference rules for individual users when it detects certain types of inconsistencies. UMFE is a portable, domain-independent implementation of a system which infers overlay models for users. UMFE has been used in conjunction with NEOMYCIN, and the paper contains several protocols which demonstrate its principal features. The paper concludes with a critique of UMFE and suggestions for enhancing the current system.
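
The overlay-modelling loop is easy to caricature in code. The sketch below is an invented illustration (the rule format and the two NEOMYCIN-flavoured concept names are not from the paper): investigator-defined rules propagate beliefs about what the user knows until a fixed point, and contradictions are surfaced so the system can resolve them, e.g. by asking the user.

```python
# Hypothetical rule format: if the user's status for `concept` is
# `status`, infer status `new_status` for the related `target` concept.
RULES = [("knows", "bacteremia", "knows", "infection"),
         ("not_knows", "infection", "not_knows", "bacteremia")]

def infer(model, rules=RULES):
    """Propagate inference rules over an overlay model
    (dict: concept -> 'knows' / 'not_knows') until nothing changes."""
    changed = True
    while changed:
        changed = False
        for status, concept, new_status, target in rules:
            if model.get(concept) == status:
                old = model.get(target)
                if old is None:
                    model[target] = new_status
                    changed = True
                elif old != new_status:
                    raise ValueError(f"inconsistent beliefs about {target}")
    return model

print(infer({"bacteremia": "knows"}))  # infers the user also knows 'infection'
```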

Journal ArticleDOI
TL;DR: It was found that, in spite of the simplicity of the materials, experts were significantly faster and more accurate than novices, which supports the idea that experts automate some simple subcomponents of the programming task.
Abstract: Automation is the ability to perform a very well-practised task rapidly, smoothly and correctly, with little allocation of attention. This paper reports on experiments which sought evidence of automation in two programming subtasks: recognition of syntactic errors and understanding of the structure and function of simple stereotyped code segments. Novice and expert programmers made a series of timed decisions about short, textbook-type program segments. It was found that, in spite of the simplicity of the materials, experts were significantly faster and more accurate than novices. This supports the idea that experts automate some simple subcomponents of the programming task. This automation has potential implications for the teaching of programming, the evaluation of programmers, and programming language design.

Journal ArticleDOI
TL;DR: Methods are described for co-operatively indexing, evaluating and synthesizing information through well-specified interactions by many users with a common database, based on the use of a structured representation for reasoning and debate in which conclusions are explicitly justified or negated by individual items of evidence.
Abstract: Interactive computer networks create new opportunities for the co-operative structuring of information which would be impossible to implement within a paper-based medium. Methods are described for co-operatively indexing, evaluating and synthesizing information through well-specified interactions by many users with a common database. These methods are based on the use of a structured representation for reasoning and debate, in which conclusions are explicitly justified or negated by individual items of evidence. Through debates on the accuracy of information and on aspects of the structures themselves, a large number of users can co-operatively rank all available items of information in terms of significance and relevance to each topic. Individual users can then choose the depth to which they wish to examine these structures for the purposes at hand. The function of this debate is not to arrive at specific conclusions, but rather to collect and order the best available evidence on each topic. By representing the basic structure of each field of knowledge, the system would function at one level as an information retrieval system in which documents are indexed, evaluated and ranked in the context of each topic of inquiry. At a deeper level, the system would encode knowledge in the argument structures themselves. This use of an interactive system for structuring information offers further opportunities for improving the accuracy, integration and accessibility of information.
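
One way to picture the representation: each conclusion node carries evidence items that explicitly justify or negate it, and users' significance ratings induce a co-operative ranking. The data structure and the net-support scoring rule below are invented illustrations, not the paper's design.

```python
from dataclasses import dataclass, field

@dataclass
class Conclusion:
    """A node in the debate structure; evidence is a list of
    (polarity, ratings) pairs, polarity +1 justifying, -1 negating,
    ratings being users' 0-1 significance judgements of that item."""
    text: str
    evidence: list = field(default_factory=list)

    def rank(self):
        """Net support: mean rating of each item, signed by polarity."""
        return sum(sign * (sum(r) / len(r)) for sign, r in self.evidence if r)

c = Conclusion("Treatment X reduces failure rates")
c.evidence.append((+1, [0.9, 0.7]))  # a supporting study, rated by two users
c.evidence.append((-1, [0.4]))       # a negating report
print(round(c.rank(), 2))            # 0.4: readers choose how deep to inspect
```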

Journal ArticleDOI
TL;DR: A framework of three central man-machine interface issues (knowledge acquisition, knowledge representation and the communications interface) is used as a basis for evaluating a Prospector-type expert system shell.
Abstract: The effectiveness and acceptability of an expert system is critically dependent on its man-machine interface. This paper uses a framework of three central man-machine interface issues: knowledge acquisition, knowledge representation and the communications interface, as a basis for evaluating a Prospector-type expert system shell. The application domain used as an example is a small system for fault finding on 11 GHz radio equipment. Long-term implications for the design of good man-machine interfaces for future expert systems are discussed and, where possible, shorter-term guidelines for knowledge engineers are offered.

Journal ArticleDOI
TL;DR: The results suggest that psychological studies on how existing cognitive skills are applied to computerized situations, could provide a valuable source of information for designers of computer systems.
Abstract: The effect of presentation mode on subjects' performance of a reasoning task was tested by comparing four different modes of presentation. Subjects were required to search and integrate information that was presented in short texts (22 sentences long). The texts were presented via a VDU (computerized reading situation) or on paper (non-computerized reading situation), in their entirety or as separate sentences. Sixteen psychology students participated in the study. Reading speed and accuracy of judgement were unaffected by presentation medium (VDU or paper). Moreover, in both situations search times were longer when little information was available and when search demands were increased. Negative information had a similar effect on subjects' ratings of difficulty in the two situations. The way information was searched differed, however, in the computerized and the non-computerized reading situations when the texts were presented as separate sentences. Four different search strategies were found; they were unevenly distributed in the two situations. In the non-computerized situation, subjects searched almost twice as much information as they did in the computerized situation. On the other hand, in the computerized situation search times were almost twice as long. The results suggest that psychological studies of how existing cognitive skills are applied to computerized situations could provide a valuable source of information for designers of computer systems.

Journal ArticleDOI
Peter Hajek1
TL;DR: It is shown that the notion of an ordered Abelian group is central for the definition of these combining functions of consulting systems, and various particular groups are considered for use in consulting systems.
Abstract: Consulting systems are rule-based systems of Artificial Intelligence working with propositions that may be uncertain, i.e. may have a truth-value different from “true” and “false”. The work of a consulting system consists, roughly, in propagation of uncertain knowledge throughout the net of rules according to some combining functions. It is shown that the notion of an ordered Abelian group is central for the definition of these combining functions. Various particular groups, both Archimedean and non-Archimedean, are considered for use in consulting systems. Practical experiments with them are also described.
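
The group-theoretic point can be checked mechanically: PROSPECTOR-style multiplication of likelihood ratios on (0, ∞) and a bounded combining rule on (−1, 1) are isomorphic ordered Abelian groups. The Python below is an illustrative verification; the mapping shown is one standard isomorphism, not necessarily the paper's notation.

```python
def combine_odds(o1, o2):
    """Multiplicative group on (0, inf), identity 1: PROSPECTOR-style
    combination of likelihood ratios."""
    return o1 * o2

def combine_bounded(x, y):
    """Group on (-1, 1), identity 0, operation (x + y) / (1 + x*y):
    another ordered Abelian group usable as a combining function."""
    return (x + y) / (1 + x * y)

def to_bounded(o):
    """Order-preserving isomorphism (0, inf) -> (-1, 1)."""
    return (o - 1) / (o + 1)

# The isomorphism turns multiplication into the bounded combination:
print(to_bounded(combine_odds(2.0, 3.0)))                 # 0.714285...
print(combine_bounded(to_bounded(2.0), to_bounded(3.0)))  # 0.714285... (same)
```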

Journal ArticleDOI
TL;DR: An inexact inference using AND/OR/COMB relation and Dempster's rule of combination to combine two fuzzy sets with certainty factors is introduced.
Abstract: In structural engineering practice, situations exist where the available information is inexact or imprecise. Frequently, experienced structural engineers are capable of providing meaningful answers to such problems. The purpose of this investigation is to construct an expert system called SPERIL-II for the damage assessment of existing structures on the basis of the knowledge of experienced structural engineers. SPERIL-II is a knowledge-based damage assessment system in which the assessment proceeds in three steps: (1) the evaluation of local damageability from input data; (2) the evaluation of global damageability; (3) the estimation of the safety or damage state of the structure. This paper introduces an inexact inference using an AND/OR/COMB relation and Dempster's rule of combination to combine two fuzzy sets with certainty factors. This inexact inference is used in all steps, and a suitable measure is given according to the importance of the structure in step (3).
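
Dempster's rule of combination itself is compact. Below is a generic Python implementation over finite focal elements; the damage-grade frame is an invented example in the spirit of SPERIL-II, not its actual rule base (and the sketch assumes the two sources are not totally conflicting).

```python
def dempster(m1, m2):
    """Combine two basic probability assignments (dicts mapping
    frozenset focal elements to masses): intersect focal elements,
    multiply masses, and renormalize away the conflict mass that
    lands on the empty set. Assumes conflict < 1."""
    combined, conflict = {}, 0.0
    for a, x in m1.items():
        for b, y in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + x * y
            else:
                conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

theta = frozenset({"none", "slight", "severe"})          # frame of discernment
m1 = {frozenset({"severe"}): 0.6, theta: 0.4}            # one source of evidence
m2 = {frozenset({"slight", "severe"}): 0.7, theta: 0.3}  # a second source
print(dempster(m1, m2))  # severe: 0.6, {slight, severe}: 0.28, theta: 0.12
```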

Journal ArticleDOI
TL;DR: An experiment testing how much error correction would affect SQL user performance found that it improved performance by 26%.
Abstract: Previous human factors research on SQL has discovered many correctable errors made by users. An experiment was run to test how much error correction would affect SQL user performance. In the study, 39 subjects used SQL without error correction and 40 subjects had specific categories of errors corrected. The main result was that error correction improved user performance by 26%.

Journal ArticleDOI
TL;DR: An integrated approach based on Possibility Theory for evaluating the degree of match between the set of conditions occurring in the antecedent of a production rule and the input data, for combining the evidence degree of a fact with the strength of implication of a rule and for combining evidence degrees coming from different pieces of knowledge are presented.
Abstract: This paper discusses some of the problems related to the representation of uncertain knowledge and to the combination of evidence degrees in rule-based expert systems. Some of the methods proposed in the literature are briefly analysed with particular attention to the Subjective Bayesian Probability (used in PROSPECTOR) and the Confirmation Theory adopted in MYCIN. The paper presents an integrated approach based on Possibility Theory for evaluating the degree of match between the set of conditions occurring in the antecedent of a production rule and the input data, for combining the evidence degree of a fact with the strength of implication of a rule and for combining evidence degrees coming from different pieces of knowledge. The semantics of the logical operators AND and OR in possibility theory and in our approach are compared. Finally, the definitions of some quantifiers like AT LEAST n, AT MOST n, EXACTLY n are introduced.
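
The degree of match between a fuzzy rule antecedent and imprecise input data is standardly captured in possibility theory by a possibility/necessity pair, as in the sketch below; the membership functions are invented for illustration.

```python
def match(condition, datum, universe):
    """Degree of match between a fuzzy condition and a fuzzy datum over
    a finite universe: possibility Pi = sup min(mu_C, mu_D) and
    necessity N = inf max(mu_C, 1 - mu_D)."""
    pi = max(min(condition(u), datum(u)) for u in universe)
    n = min(max(condition(u), 1 - datum(u)) for u in universe)
    return pi, n

U = range(11)
high = lambda u: min(1.0, max(0.0, (u - 5) / 3))   # condition "value is high"
about7 = lambda u: max(0.0, 1 - abs(u - 7) / 2)    # imprecise datum "about 7"
print(match(high, about7, U))  # (0.666..., 0.5)
```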

Journal ArticleDOI
TL;DR: The results indicate that some calculi appear to prevent the development of good queries while those whose behaviour is appropriately smooth can give satisfactory performance and the evidence suggests that as queries become more complex the impact of the choice of calculus is reduced.
Abstract: This paper describes the results of an experimental investigation of the effects of different representations of uncertainty in an interactive rule-based expert system for Information Retrieval. We draw on Fuzzy Set Theory both to define the various representations and to help analyse the results. We conclude that specification of an uncertainty calculus is a subtle problem that interacts in several ways with the scheme used to represent the expert knowledge itself. Our results indicate that some calculi appear to prevent the development of good queries while those whose behaviour is appropriately smooth can give satisfactory performance. More interestingly, our evidence suggests that as queries become more complex the impact of the choice of calculus is reduced. The paper concludes with a discussion of the insights gained with respect to the general problem of building rule-based expert systems.

Journal ArticleDOI
TL;DR: Preliminary findings are presented of a research study into general problem decomposition strategies used in program design that revealed that solutions are strongly biased in favour of one of two paradigms.
Abstract: Preliminary findings are presented of a research study into general problem decomposition strategies used in program design. The initial phase of the investigation involved three separate experiments in which groups of subjects familiar with the principles of structured programming were asked to undertake certain tasks associated with a particular programming problem, solutions to which can be mapped onto one of two “process decomposition paradigms”. Problem-solving strategies are advanced that account for the two types of solution and are consistent with the experimental results obtained. The latter revealed that solutions are strongly biased in favour of one of these paradigms, and that this bias can be explained in terms of “perception difficulty” allied to inadequacies in abstraction skills attributable to inappropriate previous training. The possible effects caused by problem specification characteristics are also discussed briefly.

Journal ArticleDOI
TL;DR: The results of these experiments suggest that when menus have multiple levels, the options at a given level should include information about options at deeper levels in the menu.
Abstract: The results of two experiments on simple menu selection are reported in which participants searched for target words through hierarchical menu displays consisting of binary choices at six levels. The menu hierarchy contained 64 words at the lowest level. Category descriptor terms were provided at higher levels and participants were required to select a sequence of options which would lead to the target word. In addition to the standard menu options, participants in experimental groups were shown help fields containing either previous selections, the target word, or upcoming selections. Participants who selected options in the presence of options at the next lower level in the menu (upcoming selections) searched with greater accuracy than participants in the control condition (no additional information), but neither continuous display of the target nor providing a list of previous selections within a trial benefited search performance. This pattern of results was found both when participants had no previous experience on the task and when help fields were introduced after 64 trials on the standard menu. Similar trends were found when help fields were introduced after 128 trials on the standard menu, but between-group differences failed to reach significance in that condition. The results of these experiments suggest that when menus have multiple levels, the options at a given level should include information about options at deeper levels in the menu.

Journal ArticleDOI
TL;DR: Sixty students performed simple menu selection with one of ten menus, each containing 64 items arranged in four columns of 16 on a single frame; no difference in search time was observed for categorial vs alphabetical ordering within categories.
Abstract: Sixty students performed simple menu selection with one of ten menus; each with 64 items arranged in four columns of 16 on a single frame. Target words consisted of eight items from each of eight categories. In eight categorized menus, words belonging to the same category were presented together in the display. Three factors were varied in the categorized menus: alphabetical vs categorial ordering of words within categories; spacing vs no additional spacing between category groups; and category organization arranged by column or by row. In the final two menus the entire array was arranged in alphabetical order, top-to-bottom by column in one, and left-to-right by row in the other. Both spacing and columnar organization facilitated search time. Menus with spacings between category groups were searched approximately 1 s faster than menus without additional spacing and menus with categories organized by column were searched about 1 s faster than menus organized by row. Furthermore, the effects of spacing and organization were additive. Given categorized menus, no difference in search time was observed for categorial vs alphabetical ordering within categories. Menus in which the entire array was arranged in alphabetical order were searched with rates similar to those for categorized menus with spacings and faster than categorized menus without spacings; these effects were observed with both forms of organization, row and column. Explanations were offered for the results and their implications for menu design were discussed.

Journal ArticleDOI
TL;DR: Anchises is described, a coach which aims to detect inefficient use and ignorance of important facilities of an interactive program in a domain-independent way and provides highly-selective access to pertinent parts of the on-line documentation with little overhead for the user.
Abstract: A computer coach unobtrusively monitors interaction with a system and offers individualized advice on its use. Such active on-line assistance complements conventional documentation and its importance grows as the complexity of interactive systems increases. Instead of studying manuals, users learn highly-reactive systems through experiment, imported metaphors and natural intelligence. However, in so doing they inevitably fail to discover features which would help them in their work. This paper describes Anchises, a coach which aims to detect inefficient use and ignorance of important facilities of an interactive program in a domain-independent way. Its current knowledge base is the Emacs text editor, and Anchises provides highly-selective access to pertinent parts of the on-line documentation with little overhead for the user. In the design of Anchises, close attention has been paid to the user modelling component which determines the needs of an individual without entering into any explicit dialogue with him; in general this is the least well-understood aspect of computer coaches. An informal experiment was conducted to determine the effectiveness of the user modelling techniques employed.

Journal ArticleDOI
TL;DR: The several “compositional inference” axiom systems were used in an expert knowledge-based system and the quality of the system outputs—fuzzy linguistic phrases—were compared in terms of correctness and precision.
Abstract: In this paper we report the results of an empirical study to compare eleven alternative logics for approximate reasoning in expert systems. The several “compositional inference” axiom systems (described below) were used in an expert knowledge-based system. The quality of the system outputs—fuzzy linguistic phrases—were compared in terms of correctness and precision (non-vagueness). In the first section of the paper we discuss fuzzy expert systems. The second section provides a brief review of logic systems and their relation to approximate reasoning. Section three contains the experimental design, and section four supplies the results of the experiment. Finally, a summary is given.
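
All the compared logics share the compositional-inference skeleton B' = A' ∘ R; they differ in which operators replace min and max. A Python sketch of the classic max-min instance (the rule, relation, and grades are invented):

```python
def sup_min_compose(fact, relation, universe_y):
    """Compositional rule of inference B' = A' o R with max-min
    operators: mu_B'(y) = max over x of min(mu_A'(x), mu_R(x, y)).
    Substituting another t-norm/co-norm pair for min/max yields the
    alternative logics compared in the study."""
    return {y: max(min(fact[x], relation[(x, y)]) for x in fact)
            for y in universe_y}

# Invented rule "if temperature is high then risk is severe" as a relation R.
A = {"warm": 0.4, "hot": 0.9}                       # observed fuzzy fact A'
R = {("warm", "moderate"): 0.4, ("warm", "severe"): 0.4,
     ("hot", "moderate"): 0.2, ("hot", "severe"): 0.9}
print(sup_min_compose(A, R, ["moderate", "severe"]))  # {'moderate': 0.4, 'severe': 0.9}
```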

Journal ArticleDOI
TL;DR: The adaptation of basic knowledge representation constructs to the treatment of imprecise or uncertain information is discussed, imprecision and uncertainty being modelled by a distribution and a scalar value, respectively.
Abstract: This paper discusses the adaptation of basic knowledge representation constructs to the treatment of imprecise or uncertain information, imprecision and uncertainty being modelled by a distribution and a scalar value, respectively. Other representation issues such as the use of variables and the organization of rules in networks are briefly addressed. Then the problems of matching and propagation (in deductive inference and combination mechanisms) are considered in the specific setting of imprecision and uncertainty. Lastly, some questions concerning the control strategies involved in such problems are briefly considered.

Journal ArticleDOI
TL;DR: The Checklist Paradigm is described in some detail, providing a unifying framework within which the relationships among the various operators that have come to light can be understood; the valid modes of reasoning are increased from the classical two to a total of four.
Abstract: In the design of Expert Systems there is increasing recognition of the need for graded production rules, that is, the use of grades or degrees of strength of implication, leading to grades or degrees of certainty or possibility attached to the conclusions. This accords well with the desire to accommodate imprecise, incomplete and faulty input data, and nonetheless to arrive—as humans do—at meaningful results, however provisional. For historical reasons, probabilistic methods are the best-known and the oftenest attempted, although they form a small and not particularly suitable sub-set of the methods available. This paper displays the probabilistic rules in their proper perspective in the wider canvas, dwelling on some of the inherent relationships among the various operators which have come to light. For this purpose, the Checklist Paradigm is described in some detail, providing a unifying framework within which these relationships can be understood. Meanwhile, the valid modes of reasoning are increased from the classical two to a total of four.

Journal ArticleDOI
TL;DR: The key feature of “mechanistic” cognition appears to be a CNS embodiment of the category of simplicial sets, and specific structures for problem-solving and memory are determined in terms of this category.
Abstract: The topological structure of fibration (a total space coupled with a base space by a projection mapping) appears to be found throughout the CNS. Neuropsychological structure and functioning are analysed in terms of such fibrations, and application is made to perceptual and cognitive systems. The key feature of “mechanistic” cognition appears to be a CNS embodiment of the category of simplicial sets. Specific structures for problem-solving and memory are determined in terms of this category.

Journal ArticleDOI
TL;DR: It is suggested that it will be most difficult to learn a computerized task when new methods are related to old goals and/or when old methods require new conditions to be satisfied.
Abstract: This paper presents a theoretical analysis of the relationships between the requirements of a computerized task and people's knowledge of this task outside the computer system. The analysis is based on the goals to be reached, the methods which may be used, and the conditions which must be satisfied for each method to be used, with or without the computer system. It is suggested that it will be most difficult to learn a computerized task when new methods are related to old goals and/or when old methods require new conditions to be satisfied. Empirical observations supporting analyses of different tasks are presented. The empirical data reveal difficulties which are not predicted by the theoretical analyses, and it is concluded that a good prediction of the ease with which a new system is learnt can only result from a combination of theoretical analyses and empirical observations of users working with the system.

Journal ArticleDOI
TL;DR: The paper describes the course components and their inter-relationships, discusses how program control might be expressed in the form of production rules, and presents a program that demonstrates one facet of the intended course: the ability to parse student input in such a way that rules can be used to update a dynamic student model.
Abstract: Rule-based systems are a development associated with recent research in artificial intelligence (AI). These systems express their decision-making criteria as sets of production rules, which are declarative statements relating various system states to program actions. For computer-assisted instruction (CAI) programs, system states are defined in terms of a task analysis and student model, and actions take the form of the different teaching operations that the program can perform. These components are related by a set of means-ends guidance rules that determine what the program will do next for any given state. The paper presents the design of a CAI course employing a rule-based tutorial strategy. This design has not undergone the test of full implementation; the paper presents a conceptual design rather than a programming blueprint. One of the unique features of the course design described here is that it deals with the domain of computer graphics. The precise subject of the course is ReGIS, the Remote Graphics Instruction Set on Digital Equipment Corporation GIGI and VT125 terminals. The paper describes the course components and their inter-relationships, discusses how program control might be expressed in the form of production rules, and presents a program that demonstrates one facet of the intended course: the ability to parse student input in such a way that rules can be used to update a dynamic student model.
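
The control idea, declarative guidance rules mapping student-model states to teaching operations, fits in a few lines. The sketch below is schematic; the state variables and teaching actions are invented, not drawn from the ReGIS course design.

```python
# Student model: system states derived from the task analysis.
student = {"knows_coordinates": True, "knows_vectors": False, "errors": 3}

# Means-ends guidance rules: (condition on state, teaching operation).
RULES = [
    (lambda s: s["knows_coordinates"] and not s["knows_vectors"],
     "present lesson: drawing vectors"),
    (lambda s: s["errors"] > 2,
     "offer remedial exercise"),
]

def next_action(state, rules):
    """Fire the first production rule whose condition matches the
    current student model and return its teaching operation."""
    for condition, action in rules:
        if condition(state):
            return action
    return "advance to next topic"

print(next_action(student, RULES))  # present lesson: drawing vectors
```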