
Showing papers in "Knowledge Based Systems in 1994"


Journal Article
TL;DR: Languages for procedural models allow the definition of layers of abstraction, which should focus on the aspects of the protocols that are relevant for a particular problem; a separate task analysis and psychological theory are usually necessary.
Abstract: Abstraction: Especially computational procedural models require a great deal of detail. This leads to very complex models. One method of dealing with this complexity is to define layers of abstraction. As we shall discuss in the next chapter, languages for procedural models allow the definition of layers of abstraction. For example, a complex process can be defined as a top-level process and sub-processes. Abstraction should focus on the aspects of the protocols that are relevant for a particular problem. A psychologist may, for example, be interested in different aspects of the problem-solving process than a knowledge engineer. Psychologists can also be interested in different aspects of problem-solving processes. For example, one may be interested in the cause of errors, the effect of education, or individual differences in cognitive styles, cognitive skills or general intelligence. Such different interests will result in different models and analyses. The interest must focus the analysis, and the process must be modelled from the viewpoint of this interest. Starting with no initial viewpoint or interest is of course difficult. It will lead to the gradual development of a viewpoint and corresponding abstractions, but that tends to take much effort. The level of detail to which a model must be elaborated depends on the level of detail required for coding the protocols later on. A coarse-grained model is usually more difficult to compare with protocols than a detailed model. If a computational model is required, then this will determine the level of detail. Separate task analysis and psychological theory: In the case of psychological research, it is usually necessary to construct a model that explicitly states the meaning of the psychological theory in the context of the task and that provides more detail on the process than is implied by the psychological theory.
Usually there are many other sources available for constructing a first approximation of the cognitive behaviour simply by looking at possible ways to perform the task. Examples of these sources are textbooks, interviews, etc. Using this information to build a first approximation of a model is called task analysis.

364 citations


Journal ArticleDOI
TL;DR: The environments were augmented with computational agents that identified breakdowns which might have remained unnoticed without them, providing opportunities for enhancing the creativity of designers by giving them support in reframing problems, attempting alternative design solutions, and exploring relevant unknown background knowledge.
Abstract: Design is a creative activity. Computational artifacts change the way people design. The author's research in design and design support systems has been firmly rooted in a cooperative problem solving approach that tries to amplify the creativity of designers by the use of domain-oriented design environments. Evolving design artifacts represented within the environment are able to ‘talk back’ to both system builders and future users, and act as representations for mutual understanding between various stakeholders. The domain orientation allows design activities to be conversations with the materials of the design situation. However, in many situations, artifacts do not speak for themselves. To address this problem, the environments were augmented with computational agents that identified breakdowns which might have remained unnoticed without them. These breakdowns provide opportunities for enhancing the creativity of designers by giving them support in reframing problems, attempting alternative design solutions, and exploring relevant unknown background knowledge.

91 citations


Journal ArticleDOI
TL;DR: In the system described in the paper, agents attempt to make claims using tactical rules (such as fairness and commitment) and also say which other claims these claims support or attack; arguments are constructed by three knowledge sources: quasi-logic, value transfer, and emotional appeal.
Abstract: In the system described in the paper, agents attempt to make claims using tactical rules (such as fairness and commitment), and agents also say what other claims these claims support or attack. Claims can be sets of claims connected by attacking and supporting links. As the debate continues, a shared argument map is created. This map is controlled by strategic rules which control the location of the attention focus within the argument map. Unlike current work on reasoning and argumentation, our system does not require truth propagation or consistency maintenance. Instead, an inconsistency is assumed to stand unless it is attacked by an opponent. An evaluation function calculates the strength of arguments in terms of a number of structural constraints, so that invulnerability to attack on the grounds of self-inconsistency is merely one among several criteria, which include constructiveness, relevance, and familiarity. Arguments are mappings from source to target domains, and they are constructed by three knowledge sources: quasi-logic, value transfer, and emotional appeal.
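The abstract gives no implementation; a minimal sketch of a shared argument map with supporting and attacking links might look like the following, where the class names and the supporters-minus-attackers strength score are illustrative assumptions, not the paper's actual evaluation function (which also weighs constructiveness, relevance, and familiarity):

```python
from dataclasses import dataclass, field


@dataclass
class Claim:
    text: str
    supports: list = field(default_factory=list)  # claims this claim supports
    attacks: list = field(default_factory=list)   # claims this claim attacks


class ArgumentMap:
    """Shared map of claims connected by supporting and attacking links."""

    def __init__(self):
        self.claims = []

    def add(self, text, supports=(), attacks=()):
        claim = Claim(text, list(supports), list(attacks))
        self.claims.append(claim)
        return claim

    def strength(self, claim):
        # Hypothetical structural score: supporters minus attackers.
        supporters = sum(1 for c in self.claims if claim in c.supports)
        attackers = sum(1 for c in self.claims if claim in c.attacks)
        return supporters - attackers
```

An inconsistency in this map simply stands until some claim attacks it, matching the paper's point that no truth propagation or consistency maintenance is required.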

31 citations


Journal ArticleDOI
TL;DR: The working hypothesis is that coupling uninterpreted rationale with formal software artifacts will give professional computer users enough background understanding for them to be able to make creative and meaningful extensions to the foreground artifacts with only a partial understanding of the implementation language.
Abstract: Creativity and cognition are discussed in the context of tailoring computer applications. Computer applications, such as word processors and graphics editors, suffer from a lack of informal design rationale to accompany and throw light upon the meaning behind the implementation of the application. The working hypothesis is that coupling uninterpreted rationale (such as pictures, diagrams, stories, scenarios, and argumentation) with formal software artifacts will give professional computer users enough background understanding for them to be able to make creative and meaningful extensions to the foreground artifacts with only a partial understanding of the implementation language.

19 citations


Journal ArticleDOI
TL;DR: The negotiation cycles that arise in the conversation are presented to illustrate the necessity of negotiation in collaborative undertakings.
Abstract: A model of communication using the constructs of Shared-Plans and negotiation is applied to a naturally occurring conversation between a travel agent and traveller. The negotiation cycles that arise in the conversation are presented to illustrate the necessity of negotiation in collaborative undertakings.

18 citations


Journal ArticleDOI
TL;DR: A support system has been built and applied to the domain of automobile design that uses multidimensional data and statistical methods, and configures a space for inspiring the users' design activity.
Abstract: The paper discusses how computers can assist human creativity. A support system has been built and applied to the domain of automobile design. The system uses multidimensional data and statistical methods, and configures a space for inspiring the users' design activity. In the first step, it assists them in forming individual design concepts. In the second step, it assists them in deciding their target design concepts. Experiments have confirmed that the system works effectively. Some of the open questions on creativity support systems are also discussed.

15 citations


Journal ArticleDOI
TL;DR: A system and a software architecture for the representation and real-time processing of sound, music, and multimedia based on artificial intelligence techniques, called WinProcne/HARP, which is able to represent objects in a two-fold formalism—symbolic and analogical—at different levels of abstraction, and to carry out plans according to the user's goals.
Abstract: The paper introduces a system and a software architecture for the representation and real-time processing of sound, music, and multimedia based on artificial intelligence techniques. This system, called WinProcne/HARP, is able to represent objects in a two-fold formalism—symbolic and analogical—at different levels of abstraction, and to carry out plans according to the user's goals. It also provides both formal and informal analysis capabilities for extracting information. In WinProcne/HARP the user can build, update, browse, and merge various knowledge bases of sound, music, and multimedia material, as well as enter queries and start and manage real-time performances, using a high-level graphical user interface. The system is currently used by researchers and composers in various experiments, including (a) advanced robotics projects, in which the system is used as a tool for interacting with, controlling, and simulating robot movements, and (b) theatrical automation, where the system is delegated to manage and integrate sound, music, and three-dimensional computer animations of humanoid figures. The paper explicitly refers to some applications in the music field.

14 citations


Journal ArticleDOI
TL;DR: The DyKOr (Dynamic Knowledge Organization) method combines information that is usually available through execution traces with existing domain knowledge using techniques from machine learning including knowledge compilation, explanation-based learning, and conceptual clustering to improve the quality and content of explanations.
Abstract: The paper presents a methodology for improving the organization of knowledge bases and demonstrates its application for generating the content of explanations. The DyKOr (Dynamic Knowledge Organization) method combines information that is usually available through execution traces with existing domain knowledge using techniques from machine learning including knowledge compilation, explanation-based learning, and conceptual clustering. These techniques allow the separation of the knowledge needed to solve a problem from that which is not required, and the identification of information that is related to the problem but is not explicitly stated. Thus, the analysis performed through the methodology can considerably improve the quality and content of explanations. The paper describes the implementation of the methodology and how it can be integrated into typical rule-based expert systems. Illustrations of how the method can be used to produce the content for explanations are presented in the context of typical consultation and problem solving expert systems. A discussion of how the information produced by the method can be used to prepare explanations for users with different levels of expertise is also presented.

13 citations


Journal ArticleDOI
TL;DR: The paper discusses the required aspects of the model of certainty factors, and presents the major implementation issues relating to the new inference engine, which is written in PASCAL.
Abstract: An expert system inference engine is described which is based on the utilization of certainty factors, and has a structure similar to Naylor's probabilistic inference engine. After a brief description of the latter, a modified probabilistic criterion is developed which naturally leads to the certainty factors question selection criterion employed in the inference engine. Also, the case in which there is user reporting bias is considered, and a method for dealing with it is presented. The paper discusses the required aspects of the model of certainty factors, and presents the major implementation issues relating to the new inference engine, which is written in PASCAL.
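The paper's exact certainty-factor model is not reproduced above; for illustration only, the widely known MYCIN-style rule for combining two certainty factors for the same hypothesis (which the paper's engine may or may not follow) can be sketched as:

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors in [-1, 1] for the same hypothesis.

    This is the classic MYCIN-style combination rule, shown for
    illustration; the modified criterion developed in the paper
    may differ.
    """
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)          # both confirming
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)          # both disconfirming
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # mixed evidence
```

The rule is commutative and keeps results inside [-1, 1], which is why inference engines of this family can accumulate evidence incrementally as the user answers questions.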

11 citations



Journal ArticleDOI
TL;DR: An algorithm for converting neural networks into Boolean functions is presented and is shown to have a time complexity better than 2^(N−1) − 2^((N/2)−1) + 1.
Abstract: An algorithm for converting neural networks into Boolean functions is presented. The absence of such an algorithm has been identified in the literature as a significant problem, and the solution shown in the paper is both complete and efficient. The analysis of the algorithm shows it to have a time complexity better than 2^(N−1) − 2^((N/2)−1) + 1.
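The algorithm itself is not given in the abstract; as a baseline for comparison, converting a single threshold unit into a Boolean truth table by exhaustive enumeration costs a full 2^N evaluations, which the paper's bound improves on (the function name and the simple weights-plus-threshold unit model below are illustrative assumptions):

```python
from itertools import product


def unit_to_boolean(weights, threshold):
    """Naively convert one threshold unit into a Boolean truth table.

    Maps every binary input tuple to True/False by evaluating the
    weighted sum against the threshold. This exhaustive baseline
    costs 2**N evaluations for N inputs; the paper's algorithm is
    reported to do better than that.
    """
    n = len(weights)
    table = {}
    for bits in product((0, 1), repeat=n):
        total = sum(w * b for w, b in zip(weights, bits))
        table[bits] = total >= threshold
    return table
```

With weights [1, 1] and threshold 2 the unit realizes Boolean AND; lowering the threshold to 1 yields OR, showing how a trained unit maps onto a named Boolean function.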

Journal ArticleDOI
TL;DR: By linking data about scientific quantities with one or more appropriate formulae, it is shown that the knowledge base can be used for simple problem solving.
Abstract: An object-oriented knowledge based system has been developed to store and manipulate scientific knowledge in the form of data and formulae encountered in conventional textbooks on science and engineering. The formulae are input in the same form as they are written in, in terms of the well known symbols used by engineers and scientists. The system interprets the symbols as representing scientific quantities and links them with the underlying data and methods in the knowledge base. By linking data about scientific quantities with one or more appropriate formulae, it is shown that the knowledge base can be used for simple problem solving. The development of the system on a Sun Sparcstation was facilitated by the use of an object-oriented environment, Objectworks C++.
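The abstract does not show the system's classes; a minimal sketch of the core idea of linking scientific quantities to formulae for simple problem solving might look like this (all names are hypothetical, and the real system parses formulae written in standard scientific notation rather than taking Python callables):

```python
class Quantity:
    """A scientific quantity identified by its conventional symbol."""

    def __init__(self, symbol, value=None, unit=""):
        self.symbol, self.value, self.unit = symbol, value, unit


class Formula:
    """Links an output symbol to input symbols and a computation.

    Solves for the output quantity once all input quantities
    have values, mimicking the knowledge base's linkage between
    symbols, data, and methods.
    """

    def __init__(self, output, inputs, fn):
        self.output, self.inputs, self.fn = output, inputs, fn

    def solve(self, quantities):
        values = [quantities[s].value for s in self.inputs]
        if any(v is None for v in values):
            raise ValueError("missing input value for formula")
        quantities[self.output].value = self.fn(*values)
        return quantities[self.output].value
```

For example, the kinematics formula v = u + a·t can be registered once and then solved for v whenever u, a, and t carry data.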

Journal ArticleDOI
TL;DR: In this paper, the authors present the motivation and basic ideas for the construction and use of modular knowledge bases, including reusability, the restriction of memory searching, and the management of inconsistent (competing) knowledge within one knowledge base.
Abstract: Evolving out of theoretical and practical work, the paper presents the motivation and basic ideas for the construction and use of modular knowledge bases. The approach relates to earlier work carried out by each of the two authors of the paper separately. A model is introduced that merges the two previous approaches, modules for logical knowledge bases, and ordering by generality domains, while maintaining their benefits. Central aims are reusability, the restriction of memory searching, and the management of inconsistent (competing) knowledge within one knowledge base. The model is explained using examples, and the formal semantics are discussed of structured, modular knowledge bases for knowledge representations that are based on logic programming.

Journal ArticleDOI
TL;DR: The paper examines the potential benefits of integrating these two technologies to develop large knowledge-based applications, using each for what it does best and the advantages of combining them.
Abstract: Logic programming and object-oriented programming have made significant contributions to the field of artificial intelligence, particularly as knowledge-based system development tools. Both these tools are extremely powerful and have several interesting features for application developers to exploit. The paper examines the potential benefits of integrating these two technologies to develop large knowledge-based applications, using each for what it does best. The salient features of the object-oriented and logic programming paradigms are summarized, and the advantages of combining them are discussed. A possible strategy is also provided for implementing the object-oriented paradigm within the logic-programming environment.

Journal ArticleDOI
TL;DR: An experiment is proposed which begins to systematically explore the potential contribution of discourse theory to the design of interactive systems and develops a toolkit that can be used to add discourse capabilities to any interactive system.
Abstract: An experiment is proposed which begins to systematically explore the potential contribution of discourse theory to the design of interactive systems. The experiment involves implementing and comparing two versions of a very simple system for planning air travel itineraries. One version of the system includes facilities based on discourse theory, and the other does not. Neither system uses natural language. The long-term goal of this research is to develop a toolkit that can be used to add discourse capabilities to any interactive system.

Journal ArticleDOI
TL;DR: It is argued that work-oriented development of knowledge-based systems requires the parallel use of multiple perspectives (including the information processing perspective), and that the use of any single perspective is potentially dangerous.
Abstract: System development is strongly influenced by the perspectives used by system developers. Current development methods for knowledge-based systems are based on an information processing perspective of experts and users which has been criticized by a number of researchers. The paper argues that work-oriented development of knowledge-based systems requires the parallel use of multiple perspectives (including the information processing perspective), and that the use of any single perspective is potentially dangerous. As an example of the importance of using multiple perspectives, the paper presents the situated action perspective, and shows how it complements the information processing perspective.

Journal ArticleDOI
TL;DR: A knowledge-based clustering algorithm is used to extend the abstractions, such as classification and association, which are employed in the semantic modeling of databases, which can be used to design databases in which useful and interesting queries can be answered.
Abstract: Clustering techniques have been used for data abstraction. Data abstraction has many applications in the context of databases. Conceptual models are used to bridge the gap between the user's view of a database and the physical view of the database. Semantic models evolved to overcome the limitations of classical data models such as network and relational models. The paper uses a knowledge-based clustering algorithm to extend the abstractions, such as classification and association, which are employed in the semantic modeling of databases. The complexity of the proposed clustering algorithm is analysed. The extended semantic model can be used to design databases in which useful and interesting queries can be answered. The efficacy of the proposed knowledge-based clustering approach is examined in the context of a library database.

Journal ArticleDOI
TL;DR: An interactive blackboard framework that integrates the blackboard model with a graphical user interface is described and details of a practical implementation of the framework using object oriented programming techniques are given.
Abstract: There have been a number of software engineering developments since the blackboard concept originated. These include object oriented programming languages, and graphical user interfaces. This paper describes an interactive blackboard framework that integrates the blackboard model with a graphical user interface. Details of a practical implementation of the framework using object oriented programming techniques are given.

Journal ArticleDOI
TL;DR: The genetic algorithm is used for the learning of prototype control rules for a dynamic system that are point based, but only a limited number of points in the state space with associated control actions are learned.
Abstract: The genetic algorithm is used for the learning of prototype control rules for a dynamic system. Prototype control rules are point based, but only a limited number of points in the state space with associated control actions are learned. The nearest-neighbour algorithm is used to decide which of the rules to fire in any situation. The example of a simulated cart-pole balancing problem is used to demonstrate the advantages of this approach over other rule-learning methods.
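The genetic algorithm is not detailed in the abstract, but the nearest-neighbour rule-selection step it describes can be sketched directly; representing each prototype rule as a (point, action) pair is an assumption about the encoding:

```python
import math


def nearest_rule(state, rules):
    """Select the action of the prototype rule whose state-space
    point is closest (Euclidean distance) to the current state.

    `rules` is a list of (point, action) pairs, standing in for the
    point-based prototype control rules learned by the GA.
    """
    def dist(point):
        return math.sqrt(sum((s - p) ** 2 for s, p in zip(state, point)))

    point, action = min(rules, key=lambda rule: dist(rule[0]))
    return action
```

In a cart-pole setting, the state would be (cart position, cart velocity, pole angle, pole angular velocity) and the actions would be the push directions; only a limited set of prototype points needs to be learned because nearest-neighbour lookup generalizes them to the whole state space.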


Journal ArticleDOI
TL;DR: A simple yet powerful technique is presented for interviewing domain specialists in an expert systems' context on the basis of the legal process of cross-examination, which involves seven heuristics that collectively assure the integrity of the knowledge base.
Abstract: A simple yet powerful technique is presented for interviewing domain specialists in an expert systems' context on the basis of the legal process of cross-examination. The technique involves seven heuristics, which collectively assure the integrity of the knowledge base. In addition, combinatorially explosive interviewing problems are rendered tractable.


Journal ArticleDOI
TL;DR: The design and development of a model-based system for diagnosis which aids in the troubleshooting of complex hydro-mechanical devices is described and the resultant system, named MIDAS, has now been implemented successfully.
Abstract: The traditional approach to the development of diagnostic knowledge-based systems has been rule-based, where experiential and heuristic knowledge is encoded in a suite of production rules. Despite the success of such rule-based systems it has been recognised that they suffer from a number of limitations owing to the shallowness of their knowledge. Much of the current thrust of research is focusing on the use of model-based reasoning, which provides knowledge of the structure and function of the device under diagnosis. The paper describes the design and development of a model-based system for diagnosis which aids in the troubleshooting of complex hydro-mechanical devices. The resultant system, named MIDAS, has now been implemented successfully.

Journal ArticleDOI
TL;DR: The issue is that of whether the computer acts as a catalyst in exploiting such creative possibilities, or as a constraint.
Abstract: Interactive multimedia represent a new and unique medium of communication. Through enabling computer technologies, a diversity of media can be brought together with which the user can actively interact. Designers of this medium are involved in creating multidimensional spaces that can communicate on a variety of conceptual levels, via a variety of senses, through a variety of media. Consequently the potential opportunities for designer creativity relating to the conceptual, visual and functional design are immense. The issue is that of whether the computer acts as a catalyst in exploiting such creative possibilities, or as a constraint. Evidence of both impacts is presented.

Journal ArticleDOI
TL;DR: The paper outlines a mechanism for obtaining variable initiative behaviour and presents experimental results on the performance of an implemented system capable of variable initiative behaviour.
Abstract: Flexible spoken natural language dialog systems should permit variable initiative behaviour. This is behaviour where the task initiative can vary from strongly computer controlled to strongly user controlled or somewhere in between. Such behaviour allows a system to effectively communicate with both task novices and experts as well as with intermediate levels of expertise. The paper outlines a mechanism for obtaining variable initiative behaviour and presents experimental results on the performance of an implemented system capable of variable initiative behaviour.

Journal ArticleDOI
TL;DR: The proposed method of the feasibility study was dictated by the need to characterize the environment of communication and commitment for action between the small business and its actors, the under-theorized status of the problem domain, and the distributed and context-dependent nature of the knowledge involved.
Abstract: Failures in attempts to introduce knowledge-based systems, particularly into small businesses, have been attributed to inadequate feasibility exploration and impact assessment at the initial stage of KBS development. The paper shows how these two critical factors have been considered within the exploratory study of an ESPRIT project on an IT planning system. The analysis of the two factors (a) shows how the problem domain influences the method of the feasibility study, and (b) considers prior to development the potential impact of the system on its embedding environment. The proposed method of the feasibility study was dictated by the need to characterize the environment of communication and commitment for action between the small business and its actors, the under-theorized status of the problem domain, and the distributed and context-dependent nature of the knowledge involved. On the basis of the requirements analysis stage of the method, a two-stage procedure was used to evaluate the potential impact of the system. It was concluded that the system will support the organizational and business aims of small firms in terms of improved decision making, an improved decision-making process, and institutionalized decision making. Compared with related work, the method of the feasibility study and the impact-assessment approach have certain advantages.

Journal ArticleDOI
TL;DR: The primary insight that this work contributes is that it may be useful to identify modes of collaboration to build simpler, computationally tractable systems that provide for efficient and effective interaction.
Abstract: The author's work has focused on one mode of collaborative natural language interaction. In this mode, the system is the primary speaker and the user is the primary listener. The Interactive Discourse Planner (IDP) is responsible for planning text to describe and/or justify a domain plan. The user collaborates by providing feedback that lets the system know how to extend the discussion in a way that satisfies the user and thereby achieves the system's goal. As a testbed for the model, IDP discusses driving routes as the domain plans. The primary insight that this work contributes is that it may be useful to identify modes of collaboration. Collaborative interactive modes are defined by the assignment of conversational roles and responsibilities to a computational system and a human. When building a model for an interactive mode, a designer can make assumptions that lead to simpler, computationally tractable systems that provide for efficient and effective interaction.

Journal ArticleDOI
Curry Guinn1
TL;DR: A method for using a probabilistic analysis to create a mode-switching mechanism that selects which participant should control the collaborative problem solving so that efficiency is maximized.
Abstract: Collaboration is more efficient when the most capable participant for each goal has initiative in directing the problem-solving for that goal. The paper describes a method for using a probabilistic analysis to create a mode-switching mechanism that selects which participant should control the collaborative problem solving so that efficiency is maximized.
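The probabilistic analysis itself is not given in the abstract; on the simplest reading, the mode-switching mechanism hands control of each goal to the participant with the highest estimated probability of solving it. The data layout and names below are illustrative assumptions:

```python
def select_controller(goal, abilities):
    """Pick the participant most likely to solve `goal`.

    `abilities` maps participant name -> {goal: estimated success
    probability}. Unknown goals default to probability 0.0. This is
    a simplified stand-in for the paper's probabilistic analysis.
    """
    return max(abilities, key=lambda p: abilities[p].get(goal, 0.0))
```

For example, if the system is estimated to solve "book_flight" with probability 0.9 and the user with 0.4, initiative for that goal passes to the system; as subgoals change, control can switch back and forth.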

Journal ArticleDOI
TL;DR: The results of experiments with the validation and analysis of a hypothetical case indicate that the blackboard is an appropriate model for the development of expert systems in the product liability domain.
Abstract: The SKADE 2 expert system evaluates product liability claims and generates 'litigate or settle' decisions. This expert-system application may achieve savings for corporations. It can help to reduce excessive settlements and unnecessary trials by giving decision makers precise information about the eventual outcomes of similar past cases, or by organizing and structuring complex high-stakes product liability claims. The blackboard model provides the conceptual framework and organizing schema for the system. Interviews with litigators, claims adjusters, technical (product) experts, managers and lawyers provided the knowledge needed to develop the conceptual model and the system knowledge base. The results of experiments with the validation and analysis of a hypothetical case indicate that the blackboard is an appropriate model for the development of expert systems in the product liability domain. The initial success with the SKADE 2 system suggests that further work needs to be done on whether more complex models can be built to incorporate a broader range of determinants of product liability claims evaluation.

Journal ArticleDOI
TL;DR: A speedup method for hypothetical reasoning systems that uses an experience-based learning mechanism which can learn knowledge from inference experiences to improve the inference speed in subsequent inference processes, sharing subgoals similar to those of the prior inference processes.
Abstract: While hypothetical reasoning is a useful knowledge system framework that is applicable to many practical problems, its crucial problem is its slow inference speed. The paper presents a speedup method for hypothetical reasoning systems that uses an experience-based learning mechanism which can learn knowledge from inference experiences to improve the inference speed in subsequent inference processes, sharing subgoals similar to those of the prior inference processes. This learning mechanism has common functions with existing explanation-based learning. However, unlike explanation-based learning, the learning mechanism described in the paper has a learning capability even at intermediate subgoals that appear in the inference process. Therefore, the learned knowledge is useful even in the case in which a new goal given to the system shares a subgoal that is similar to those learned in the prior inference. Since the amount of the learned knowledge becomes very large, the learning mechanism also has a function of learning selectively only at subgoals which substantially contribute to the speedup of the inference. In addition, the mechanism also includes a knowledge reformation function, which transforms learned knowledge into an efficient form to be used in subsequent inference.
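The abstract describes learning at intermediate subgoals so that later inferences sharing similar subgoals avoid re-derivation; stripped of the selective-learning and knowledge-reformation functions, the core reuse idea resembles memoization of subgoal results. The sketch below is a simplification under that assumption, not the paper's mechanism:

```python
class SubgoalCache:
    """Caches results of solved subgoals so that subsequent inference
    processes sharing a subgoal reuse the prior derivation instead of
    re-proving it (a simplified sketch of experience-based speedup)."""

    def __init__(self):
        self.cache = {}
        self.hits = 0  # how often a prior derivation was reused

    def prove(self, subgoal, solver):
        if subgoal in self.cache:
            self.hits += 1
            return self.cache[subgoal]
        result = solver(subgoal)      # the expensive inference step
        self.cache[subgoal] = result
        return result
```

The paper's selective-learning function corresponds to deciding which subgoals are worth caching at all, since caching everything makes the learned knowledge very large.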