
Showing papers in "IEEE Intelligent Systems in 1986"


Journal ArticleDOI
TL;DR: Six generic tasks are identified as useful building blocks for the construction of knowledge-based systems; earlier work identified hierarchical classification, hypothesis matching, and knowledge-directed information passing as three such tasks and showed how certain classes of diagnostic problems can be implemented as an integration of these generic tasks.
Abstract: Viewed at an appropriate abstraction level relative to the information processing task, some control issues are artifacts of the representation. In our opinion these are often misinterpreted as issues at the knowledge level. For example, rule-based approaches often concern themselves with syntactic conflict-resolution strategies. When the knowledge is viewed at the appropriate level, we can often see the existence of organizations of knowledge that bring up only a small, highly relevant body of knowledge without any need for conflict resolution at all. Of course, these organizational constructs could be "programmed" in the rule language (metarules are meant to do this in rule-based systems), but because of the status assigned to the rules and their control as knowledge-level phenomena (as opposed to the implementation-level phenomena, which they often are), knowledge acquisition efforts are often directed toward strategies for conflict resolution, whereas they ought to be directed to issues of knowledge organization. This is not to argue that rule representations and backward- or forward-chaining controls are not natural for some situations. If all a problem solver has in the form of knowledge in a domain is a large collection of unorganized associative patterns, then data-directed or goal-directed associations may be the best the agent can do. But that is precisely the occasion for weak methods such as hypothesize-and-match (of which the above associations are variants), and, typically, successful solutions cannot be expected in complex problems without combinatorial searches. Typically, however, expertise consists of much better organized collections of knowledge, with control behavior indexed by the kinds of organization and forms of knowledge they contain. We have found six generic tasks that are very useful as building blocks for the construction (and understanding) of knowledge-based systems. These tasks cover a wide range of existing expert systems.
Because of their role as building blocks, we call them elementary generic tasks. While we have been adding to our repertoire of elementary generic tasks for quite some time, the basic elements of the framework have been in place for a number of years. In particular, our work on MDX4,5 identified hierarchical classification, hypothesis matching, and knowledge-directed information passing as three generic tasks and showed how certain classes of diagnostic problems can be implemented as an integration of these generic tasks. (In the past we have also referred to them as problem-solving types.) Over the years we have identified several others: object synthesis by plan selection and refinement,6 state abstraction,7 and abductive assembly of hypotheses.8 This list is not exhaustive; in fact, our ongoing research objective is to identify other useful generic tasks and understand their knowledge representation and control …
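The flavor of one of these generic tasks, hierarchical classification by establish-refine, can be sketched in a few lines; the disease hierarchy and the required-findings matcher below are invented illustrations for the sketch, not taken from MDX.

```python
# Minimal sketch of hierarchical classification via establish-refine.
# The hierarchy and matcher are hypothetical, not from MDX itself.

def classify(node, data, matches):
    """Establish a node; if it matches, refine into its children."""
    name, children = node
    if not matches(name, data):
        return []                      # node rejected: prune whole subtree
    if not children:
        return [name]                  # established tip-level hypothesis
    established = []
    for child in children:
        established.extend(classify(child, data, matches))
    return established or [name]       # keep parent if no child establishes

# Hypothetical diagnostic hierarchy: (name, children)
hierarchy = ("liver-disease", [
    ("jaundice", [("hepatitis", []), ("cirrhosis", [])]),
    ("no-jaundice", []),
])

# Hypothetical matcher: each hypothesis lists findings it requires.
REQUIRED = {
    "liver-disease": {"abnormal-lft"},
    "jaundice": {"yellow-skin"},
    "hepatitis": {"viral-markers"},
    "cirrhosis": {"fibrosis"},
    "no-jaundice": {"normal-bilirubin"},
}
matches = lambda name, data: REQUIRED[name] <= data

print(classify(hierarchy, {"abnormal-lft", "yellow-skin", "viral-markers"}, matches))
# prints ['hepatitis']
```

Note how rejecting "jaundice" would prune "hepatitis" and "cirrhosis" without ever examining them; this is the organizational focusing the passage contrasts with syntactic conflict resolution.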

717 citations


Journal ArticleDOI
TL;DR: A way to propagate belief functions in certain kinds of trees using only local computations is described; the scheme generalizes the computational scheme proposed by Pearl5 for Bayesian causal trees, and the article also describes qualitative Markov trees.
Abstract: In this article, we describe a way to propagate belief functions in certain kinds of trees using only local computations. This scheme generalizes the computational scheme proposed by Shafer and Logan1 for diagnostic trees of the type studied by Gordon and Shortliffe2,3 and the slightly more general scheme proposed by Shafer4 for hierarchical evidence. It also generalizes the computational scheme proposed by Pearl5 for Bayesian causal trees. Pearl's causal trees and Gordon and Shortliffe's diagnostic trees are both ways of breaking down the evidence that bears on a large problem into smaller items of evidence that bear on smaller parts of the problem so that these smaller problems can be dealt with one at a time. This localization of effort is often essential to make the process of probability judgment feasible, both for the person who is making probability judgments and for the machine that is combining them. The basic structure for our scheme is a type of tree that generalizes both Pearl's and Gordon and Shortliffe's trees. Trees of this type permit localized computation in Pearl's sense. They are based on qualitative judgments of conditional independence. We believe that the scheme we describe here will prove useful in expert systems. It is now clear that the successful propagation of probability or certainty factors in expert systems requires much more structure than can be provided in a pure production-system framework. Bayesian schemes, on the other hand, often make unrealistic demands for structure. The propagation of belief functions in trees and more general networks occupies a middle ground where some sensible and useful things can be done. We would like to emphasize that the basic idea of local computation for propagating probabilities is Judea Pearl's. It is an innovative idea; we do not believe that it can be found in the Bayesian literature prior to Pearl's work.
We see our contribution as extending Pearl's idea from Bayesian probabilities to belief functions. We will describe the scheme proposed by Pearl5 for Bayesian causal trees. Then, we will describe the scheme proposed by Shafer and Logan1 for diagnostic trees, and, afterwards, as background to our own scheme, we will describe qualitative Markov trees. Finally, we will describe our belief-function scheme, which has Pearl's and Shafer and Logan's schemes as special cases.
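As a concrete illustration of the belief-function machinery the article builds on, here is a minimal sketch of Dempster's rule of combination over a small frame of discernment; the mass assignments are hypothetical, and the sketch shows only pairwise combination of evidence, not the tree-structured local propagation the article develops.

```python
# Minimal sketch of Dempster's rule of combination for belief functions.
# Mass functions are given as {frozenset_of_hypotheses: mass} dicts;
# the example masses are hypothetical illustrations.

def combine(m1, m2):
    """Combine two mass functions by Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb            # mass on empty intersection
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

F = lambda *xs: frozenset(xs)
theta = F("flu", "cold", "allergy")            # frame of discernment

m1 = {F("flu", "cold"): 0.6, theta: 0.4}       # evidence 1: infection likely
m2 = {F("flu"): 0.7, theta: 0.3}               # evidence 2: points to flu

m12 = combine(m1, m2)
print(round(m12[F("flu")], 3))                 # mass committed exactly to {flu}
# prints 0.7
```

The point of the article's tree scheme is precisely to avoid computing such combinations over the full frame at once, replacing them with local computations at the nodes of a qualitative Markov tree.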

272 citations


Journal ArticleDOI
TL;DR: In this article, a VLSI-based inference engine based on fuzzy logic is proposed for decision-making in the areas of command and control for intelligent robot systems, process control, missile and aircraft guidance, and other high-performance machines.
Abstract: The role of inferencing with uncertainty is becoming more important in rule-based expert systems (ES), since knowledge given by a human expert is often uncertain or imprecise. We have succeeded in designing a VLSI chip which can perform an entire inference process based on fuzzy logic. The design of the VLSI fuzzy inference engine emphasizes simplicity, extensibility, and efficiency (operational speed and layout area). It is fabricated in 2.5-μm CMOS technology. The inference engine consists of three major components: a rule-set memory, an inference processor, and a controller. In this implementation, the rule-set memory is realized by a read-only memory (ROM). The controller consists of two counters. In the inference processor, one data path is laid out for each rule. The number of inference rules can be increased by adding more data paths to the inference processor. All rules are executed in parallel, but each rule is processed serially. The logical structure of fuzzy inference proposed in the current paper maps nicely onto the VLSI structure. A two-phase nonoverlapping clocking scheme is used. Timing tests indicate that the inference engine can operate at approximately 20.8 MHz. This translates to an execution speed of approximately 80,000 fuzzy logical inferences per second (FLIPS) and indicates that the inference engine is suitable for demanding real-time applications. The potential applications include decision-making in the areas of command and control for intelligent robot systems, process control, missile and aircraft guidance, and other high-performance machines.
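The max-min style of fuzzy inference that each hardware data path performs can be sketched in software. This is a minimal illustration of the general scheme (clip each rule's consequent by its antecedent's matching degree, then combine rule outputs by pointwise max), with hypothetical membership vectors; it is not a model of the chip itself.

```python
# Minimal software sketch of max-min fuzzy inference over discretized
# fuzzy sets. Each rule is (antecedent, consequent) membership vectors;
# the membership values below are hypothetical illustrations.

def infer(rules, x):
    """Fire all rules on crisp input index x; combine by pointwise max."""
    out = [0.0] * len(rules[0][1])
    for antecedent, consequent in rules:
        w = antecedent[x]                         # degree of match
        for i, c in enumerate(consequent):
            out[i] = max(out[i], min(w, c))       # clip, then combine
    return out

# Two hypothetical rules over a 5-point universe of discourse.
rules = [
    ([0.0, 0.5, 1.0, 0.5, 0.0], [1.0, 0.5, 0.0, 0.0, 0.0]),  # "medium -> low"
    ([0.0, 0.0, 0.0, 0.5, 1.0], [0.0, 0.0, 0.0, 0.5, 1.0]),  # "high -> high"
]
print(infer(rules, 3))   # input partially fires both rules
```

On the chip, the loop over rules is what runs in parallel (one data path per rule), while each rule's element-wise min/max is processed serially.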

267 citations


Journal ArticleDOI

220 citations


Journal ArticleDOI
Avinash C. Kak1, K.L. Boyer1, C. H. Chen1, R.J. Safranek1, H. S. Yang1 

64 citations


Journal ArticleDOI
TL;DR: The approach is to develop knowledge acquisition tools that make explicit the knowledge representation implications of various problem-solving methods; Salt and Sear, the knowledge acquisition tool described in this article, are examples of such tools for constructive problem solvers.
Abstract: For several years, Digital Equipment Corporation has used a system called R1 (or sometimes XCON) to configure the computer systems it manufactures. The most recent account of R1's development1 notes that as R1 has grown to several thousand rules, maintaining and developing it have become substantially more difficult. As we explored what kind of tool could facilitate knowledge acquisition for R1, we saw that the most valuable tool would (1) help determine the role any new piece of knowledge should play and (2) suggest how to represent the knowledge so it would be applied whenever relevant. Several researchers in recent years2-4 have stressed that to maintain and to continue to develop a knowledge base it is critical to identify the various knowledge roles and to represent the knowledge in a way that does not conflate these roles. What is not yet clear is how many interestingly different roles exist and, if there are many, how one identifies the appropriate subset for a particular expert system. We believe we will find the answers to these questions by studying knowledge roles in different problem-solving methods. Our approach is to develop knowledge acquisition tools that make explicit the knowledge representation implications of various methods. Until recently, most of the research in knowledge acquisition tools has concentrated on tools for classification problem solvers.5 Because knowledge acquisition tools such as Teiresias,6 ETS,7 and More8 presuppose relatively similar problem-solving methods, the systems built with these tools have similar knowledge roles. However, knowledge acquisition tools for constructive problem solvers are now being developed, e.g., Salt9 and Sear, the knowledge acquisition tool we describe in this article, that are …

58 citations




Journal ArticleDOI

48 citations


Journal ArticleDOI
TL;DR: The components and applications of expert systems, which are computer systems designed to simulate the problem-solving behavior of a person expert in a narrow field, are examined.
Abstract: Developments in the field of AI are discussed. The components and applications of expert systems, which are computer systems designed to simulate the problem-solving behavior of a person expert in a narrow field, are examined. Two types of expert systems, shallow and deep, are described and examples are given. A logic programming system, a rule-based system, and a frame-based system are presented as means of representing the expert system's data base. The limitations of expert systems are considered.

47 citations





Journal ArticleDOI
TL;DR: A vision system for planar-faced objects using range data, employing a modified version of the Hough transform on a VAX 11/780, is described as a step toward the ultimate goal of machine vision research: machines with the ability to sense, understand, and act upon their environments.
Abstract: The ultimate goal of machine/computer vision research is to develop and engineer machines with the ability to sense, understand, and act upon their environments in an autonomous manner. Machines with limited and specialized vision capabilities are already in industrial use. However, building a general-purpose machine/computer vision system is still far off in the future. An intermediate and possibly more immediate objective in developing a vision system would be the machine's ability to sense the three-dimensional structure of objects placed in the field of view of the sensor and to recognize the objects from a library of a "limited" number of objects. The characteristics of the possible objects will be stored in a database in the machine. The complexity and difficulty of this problem will depend upon the nature of the objects, the dissimilarity of the objects, the nature of sensing, the availability of the computing resources, and the type of the recognition techniques.
To obtain a better feel for the problem, the reader may consider the following possibilities. The objects may be planar faced or curved; several of the objects may be of the same type, as in the case of a bucket of nails; the objects may be selected from a finite number of dissimilar types; the sensor may be one or more video cameras or a range sensor; the available computer may range in computing power from an IBM PC, to a VAX 11/780, or to a Cray X-MP; and finally, the algorithms may be based upon statistical theorems, Hough transform procedures, or rule-based results.
The research of the past 20 years has addressed the issue of building a vision system with a variety of sensors and algorithmic tools. It is difficult, if not impossible, to review all the past research here. Reviews by Aggarwal et al.1 and Besl and Jain2,3 document the state of the art rather well; in fact, one of the papers by Besl and Jain3 is fairly encyclopedic.
The present article describes a particular vision system for planar-faced objects using range data, employing a modified version of the Hough transform on a VAX 11/780. The results are encouraging. However, our vision system also points to the limitations of current vision systems compared to the ideal general-purpose vision system and indicates directions for future research.
Overview. The use of range data for object recognition is a relatively new field of computer vision. To make the present article self-contained, a brief review of the relevant literature is in order. Several techniques have been demonstrated for extracting primitives (e.g., planes, cylinders, edges, vertices, circles) from the range image to construct descriptions of the objects in the scene.4-9 Underwood and Coates10 developed a program that used a series of two-dimensional views of a polyhedron to construct a 3-D model. The model could then be matched to a single 2-D view of the polyhedron. Bhanu11 introduced a method using stochastic labeling techniques to match faces of an unknown view with faces of the model. Another method is to segment the scene into planar and curved surfaces and to associate with each surface a set of 2-D properties (e.g., area, length of perimeter, mean radius) and relationships between surfaces (e.g., angle of intersection between adjoining planes). This method was demonstrated by Oshima and Shirai.12 These relationships between surfaces can be represented as a graph-type structure and the matching process as the graph/subgraph isomorphism problem,13 which is known to be NP-complete.14 Another method, demonstrated by Magee et al.,15 uses an intensity image to guide the use of a range image; it computes all of the distances between vertices in the image and attempts to match those with the models. Similarly, circles were matched by the distance between centers of the circles and by their radii. A distinction can be made between these methods since the Oshima-Shirai method is view-dependent (a different model is generated for each unique view of the …
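The article's recognizer uses a modified Hough transform for planar faces in range data; as a rough illustration of the underlying voting idea only, here is a standard Hough transform for lines in 2-D (the point set and discretization below are illustrative assumptions, not the article's method).

```python
# Minimal sketch of the Hough transform voting idea, for 2-D lines:
# each point votes for the (theta, rho) parameters of every line
# through it; peaks in the accumulator are lines with much support.
# The article's planar-face version works analogously in 3-D.

import math

def hough_lines(points, n_theta=180, rho_step=1.0):
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_step))      # accumulator bin
            acc[key] = acc.get(key, 0) + 1
    return max(acc, key=acc.get)                  # best-supported (t, rho bin)

# Ten collinear points on the vertical line x = 5 (hypothetical data):
pts = [(5, y) for y in range(10)]
t, r = hough_lines(pts)
print(t, r)   # line normal along the x-axis, at distance about 5
```

The appeal for recognition, here as in the range-data case, is that voting degrades gracefully: missing or noisy points lower a peak but rarely destroy it.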



Journal ArticleDOI
TL;DR: To perform specification and design successfully, software experts must depend on experiential knowledge gathered while developing similar software products; AI methods can be incorporated into design environments to facilitate specification and design.
Abstract: Software design, often regarded as an art, is frequently delegated to expert software engineers. They must interpret requirements and specifications provided by system analysts when designing solutions for new software problems. Interpreting these specifications leads to an internal conceptual model eventually transformed into a software design. To perform these processes successfully, software experts must depend on experiential knowledge gathered while developing similar software products. AI methods can be incorporated into design environments to facilitate specification and design, including …

Journal ArticleDOI
TL;DR: In this paper, the authors define analogies and analogical thinking as aspects of problem solving that may be found in a variety of domains and can be used by both novices and experts.
Abstract: Analogies and analogical thinking are aspects of problem solving that may be found in a variety of domains and can be used by both novices and experts. What do the terms "analogical thinking" and "analogy" mean? Let us consider the following example that illustrates an intuitive sense of the application of an analogy. The domain in the example falls within software development, though illustrative analogies are identifiable in everyday domains as well.


Journal ArticleDOI
TL;DR: The teaching experience convinced us that students can learn, in a one-semester course, to apply effective fundamental skills in expert systems development-skills providing a sound basis for further growth.



Journal ArticleDOI
Newton Lee1
TL;DR: P-Shell, as described in this paper, is a Prolog-based knowledge programming environment designed for building expert systems that supports knowledge representation in first-order predicate logic, frames, procedures, production rules, semantic networks, tables, and trees.
Abstract: P-Shell is a Prolog-based knowledge programming environment designed for building expert systems. It has been implemented in University of Edinburgh C-Prolog1 running on Unix System V. P-Shell is a set of Prolog tools, modules, and built-in library routines essential for knowledge programming. It supports knowledge representation in first-order predicate logic, frames, procedures, production rules, semantic networks, tables, and trees. Built-in control methods include backward- and forward-chaining, agenda, and blackboard architectures. P-Shell's support environment provides facilities for explanation, user/system interface, system/system interface, and debugging. It also provides knowledge structure editors for interactive manipulation of knowledge, and knowledge structure transformers for multiple views of a given piece of knowledge. In short, P-Shell is a rich environment for the development of expert systems.


Journal ArticleDOI
TL;DR: Prolog is an implementation of predicate logic as a programming language, that is, a language for logic programming; this contrasts with conventional procedural languages such as Fortran, Basic, and more recent variants such as Pascal and Ada.
Abstract: Prolog has become more popular as artificial intelligence applications, such as expert systems and knowledge engineering, have grown rapidly. Other AI-related applications include symbolic computation, natural language processing, very high level languages, natural language interfaces to databases, deductive databases, and automatic programming. Briefly stated, Prolog is an implementation of predicate logic as a programming language. For example, a statement in logic, "If Y loves X, then X loves Y," can be written as a Prolog clause: loves(X, Y) :- loves(Y, X). Here "loves(X, Y)" can be read as "X loves Y," and the symbol ":-" as "if." Prolog is a relational language, that is, a language for logic programming as stated above. This contrasts with conventional languages called procedural languages, such as Fortran, Basic, and more recent variants such as Pascal and Ada. In a procedural language, one specifies step by step how computations are carried out by the computer. For example, one can specify in one step "Add A to B giving C"; in the next step one can say "if X then do P1 else do P2," and so on. In a relational language, that is, in logic programming, one specifies a relationship among entities. We can say that one describes what in a relational language rather than how. For example, in the previous Prolog clause, "loves(X, Y) :- loves(Y, X)," no procedural command is specified. Instead, this clause simply states the fact that "if Y loves X, then X loves Y." During a program execution, suppose that X is instantiated to Mary and Y to John, asking a question: …
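The backward chaining a Prolog interpreter performs for this clause can be sketched in ordinary Python; the fact base, the `loves` helper, and the depth bound are hypothetical illustrations of the mechanism, not Prolog itself.

```python
# Minimal sketch of backward chaining for the clause
#   loves(X, Y) :- loves(Y, X).
# over a hypothetical fact base.

facts = {("loves", "john", "mary")}

def loves(x, y, depth=1):
    """Try to prove loves(x, y): check facts, then apply the rule."""
    if ("loves", x, y) in facts:
        return True
    # Rule: loves(X, Y) :- loves(Y, X). The depth bound stops looping;
    # a naive Prolog program with this clause risks the same loop.
    return depth > 0 and loves(y, x, depth - 1)

print(loves("mary", "john"))   # True: derived from the stored fact
print(loves("mary", "ann"))    # False: not provable
```

The first query mirrors the article's example: the goal loves(mary, john) is not a stored fact, so the rule reduces it to the subgoal loves(john, mary), which is.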


Journal ArticleDOI
TL;DR: The next generation of CBT systems will incorporate many innovative concepts in the acquisition, representation, and management of expert knowledge and common-sense knowledge.
Abstract: There is growing consensus among those involved with computer-based training (CBT) that the application of knowledge-based technology can improve the quality and cost effectiveness of training personnel. Recent articles by Freedman,1 Kearsley and Seidel,2 Montague and Wulfeck,3 Psotka,4 and Woolf5 conclude that the next generation of CBT systems will incorporate many innovative concepts in the acquisition, representation, and management of expert knowledge and common-sense knowledge.

Journal ArticleDOI
TL;DR: A modern computing system includes hundreds of operators and programs, each with its own control language; for those who have no desire to program, the Apple Macintosh, essentially a desk assistant with a one-button mouse and hierarchies of menus, is ideal.
Abstract: A modern computing system includes hundreds of operators and programs, each with its own control language. A technological marvel for information processing, it asks only that we figure out how to use it in achieving our goals. Unfortunately, all too often, the description and documentation of such a system entails several fat volumes of difficult text, so we frequently settle for using the system in limited ways to avoid the extensive effort required to learn it thoroughly.
Fortunately, help is on the way. The best-equipped modern laboratories now have specialized workstations such as Lisp machines and Suns. These large, fast microcomputers are supported by megabytes of internal memory, hundreds of megabytes of local disk storage, and, lurking in the background, a gigantic file server providing trillions of megs of long-term memory, not to mention network communications with the world.
But wait, there's more. There are bit-map displays on the order of 1000 x 1000 pixels and a mouse whose motion translates into movements of a pointer on the display. By merely pushing one or more buttons on top of the mouse, we can select a portion of the screen (actually of the array that the screen displays) to focus on specific data. The most common use of the mouse, however, is to select a choice from a menu. The pointer is guided to one of the choices displayed on the screen, a button is pressed on the mouse, and the system passes control to the procedure selected from the menu. This procedure may present additional menus or actually accomplish some task. A menu is usually a box drawn on the screen containing cells with the names of the procedures that can be selected, although it may take other display forms, such as a boxed border of the screen in which choices are shown.
For those who have no desire to program, the Apple Macintosh is ideal. It is essentially a desk assistant with a one-button mouse and hierarchies of menus. The bit-map display offers a large set of options for drawing, text preparation, editing, typesetting, and the display and printing of data. One or two hours' experience with such a system is sufficient to enable most of the uninitiated to graduate from a typewriter to a high-technology information control device. The Macintoshes of the world not only replace the printing functions of the typewriter, but add capabilities for drawing diagrams and artwork, for storing files, for graphically displaying and charting data, and for dabbling in an endless variety of creative activities. As long as we are content with the capabilities provided by the menus, we are almost fully protected from asking impossible questions (only meaningful sequences of menu choices are provided), and the system is equally protected from our well-meaning, but occasionally disastrous, efforts.
In general, the operating system, whether controlled by mouse and menu or by formal-language commands, provides a set of operations that users can combine to accomplish a set of information processing goals. These goals, with careful design and a bit of luck, will represent what the system understands of user intentions. From a user viewpoint, these operations are primitives at the bottom of the human goal system. In menu-controlled systems, possible processing goals are completely prespecified; users need only select a permissible sequence to accomplish their goals. In formal-language-controlled systems, the user may be offered much more freedom in accomplishing goals within the capability of the system, but at the cost of learning several formal command languages at the operating-system and production-program …

Journal ArticleDOI
TL;DR: The unifying concept in robotics is motion, a visible form of action; a robot is a physical agent capable of executing motion for the achievement of tasks, and the major issues in robotics follow logically from this motion-centric theme.
Abstract: (Fig. 1.9: A set of related topics in robotics with a motion-centric theme.) 1.6 Issues in Robotics. Robotics, or the study of robots, is an engineering discipline. Functionally, a robot is a physical agent which is capable of executing motion for the achievement of tasks. A robot's degree of autonomy depends on its ability to perform the ordered sequence of perception, decision-making, and action. As we know, a robot's dynamics in motion execution is dictated by mechanical energy consumption in connection with kinematic constraints imposed by the robot's mechanisms. Literally, the definitions of kinematics and dynamics are as follows: Definition 1.9: Kinematics is the study of motion without consideration of force and torque, while dynamics is the study of motion in relation to force and torque. Therefore, the unifying concept in robotics is motion, being a visible form of action. As illustrated in Fig. 1.9, the major issues in robotics will logically include: 1.6.1 Mechanism and Kinematics. From a mechanical point of view, a mechanism is a set of linkages without an actuator. The purpose of a mechanism is to impose kinematic constraints on the types of motion the mechanism can deliver at a particular point. By default, this particular point is at the tip of an end-effector. In general, a mechanism consists of joints and links. In robotics, a link is a rigid body inside a mechanism, while a joint is the point of intersection between any pair of adjacent links. Any changes in the relative geometry among the links will induce a specific type of motion. Therefore, it is important to study the relationship between the motion parameters of the linkages and the motion parameters of a particular point on the mechanism. This study is the object of robot kinematics.
There are two problems with robot kinematics:
• How do we determine the motion parameters of a particular point on the mechanism from the knowledge of the motion parameters of the linkages? This is commonly called the forward kinematics problem.
• How do we determine the motion parameters of the linkages necessary to produce a desired set of motion parameters at a particular point on a mechanism? This is known as the inverse kinematics problem.
In the mechanical domain, any motion is produced by the conversion of mechanical energy. The study of the relationship between motion parameters and …
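The two kinematics problems above can be made concrete with a two-link planar arm; the link lengths and joint angles below are hypothetical, and the closed-form inverse solution shown is the standard elbow-down one for this simple mechanism, not taken from the text.

```python
# Minimal sketch of forward and inverse kinematics for a two-link
# planar arm. Link lengths and angles are hypothetical illustrations.

import math

def forward(l1, l2, t1, t2):
    """Joint angles -> end-effector position (forward kinematics)."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def inverse(l1, l2, x, y):
    """End-effector position -> one joint-angle solution (inverse
    kinematics, elbow-down branch); raises if the target is unreachable."""
    d = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= d <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(d)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

x, y = forward(1.0, 1.0, 0.3, 0.5)      # where do these angles put the tip?
t1, t2 = inverse(1.0, 1.0, x, y)        # recover angles from the position
print(round(t1, 6), round(t2, 6))       # approximately 0.3 and 0.5
```

Forward kinematics here is a direct evaluation; the inverse problem already shows the characteristic difficulties (multiple solutions, unreachable targets) even for this two-joint mechanism.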