
Showing papers on "Domain knowledge published in 1992"


Journal ArticleDOI
TL;DR: After a decade of fundamental interdisciplinary research in machine learning, the spadework in this field has been done; the 1990s should see the widespread exploitation of knowledge discovery as an aid to assembling knowledge bases.
Abstract: After a decade of fundamental interdisciplinary research in machine learning, the spadework in this field has been done; the 1990s should see the widespread exploitation of knowledge discovery as an aid to assembling knowledge bases. The contributors to the AAAI Press book Knowledge Discovery in Databases were excited at the potential benefits of this research. The editors hope that some of this excitement will communicate itself to AI Magazine readers of this article.

1,332 citations


Journal ArticleDOI
TL;DR: The PROTEGE-II project is one attempt to provide a knowledge-base authoring environment in which developers can experiment with the reuse of knowledge-level problem-solving methods, task models, and domain ontologies.

369 citations


Journal ArticleDOI
TL;DR: In this article, the nature of domain knowledge and its relationship to other knowledge forms are considered, and the status of research on expert-novice differences, on the relationship between domain and strategic knowledge, and on misconceptions are explored.
Abstract: Research in domain knowledge has progressed to the point where it should undergo further examination. As a basis for that reexamination, the nature of domain knowledge and its relationship to other knowledge forms are considered. The status of research on expert-novice differences, on the relationship between domain and strategic knowledge, and on misconceptions is also explored. Clarification of the boundaries between expert and novice performance, a research agenda that incorporates domain and strategic knowledge as partners in knowledge acquisition and use, and specification of the terminology used to examine misconceptions are called for.

258 citations


Journal ArticleDOI
01 Feb 1992
TL;DR: In this paper, the authors considered the problem of first-order theories of expert systems and presented techniques for resolving inconsistencies in such knowledge bases, and also provided algorithms for implementing these techniques.
Abstract: Consider the construction of an expert system by encoding the knowledge of different experts. Suppose the knowledge provided by each expert is encoded into a knowledge base. Then the process of combining the knowledge of these different experts is an important and nontrivial problem. We study this problem here when the expert systems are considered to be first-order theories. We present techniques for resolving inconsistencies in such knowledge bases. We also provide algorithms for implementing these techniques.
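
As a minimal illustration of the kind of conflict that arises when experts' knowledge bases are merged, here is a brute-force propositional sketch with a naive drop-the-conflicting-clause repair; it is not the paper's resolution technique, and the clauses are invented.

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force SAT check for CNF clauses; a clause is a frozenset of literals like {'p', '-q'}."""
    vars_ = sorted({lit.lstrip('-') for c in clauses for lit in c})
    for bits in product([False, True], repeat=len(vars_)):
        model = dict(zip(vars_, bits))
        if all(any(model[l.lstrip('-')] != l.startswith('-') for l in c) for c in clauses):
            return True
    return False

def merge_and_repair(*expert_kbs):
    """Merge expert knowledge bases; greedily drop later clauses that make
    the combined base inconsistent (one naive repair policy, for illustration only)."""
    merged = []
    for kb in expert_kbs:
        for clause in kb:
            if satisfiable(merged + [clause]):
                merged.append(clause)
            else:
                print("dropped conflicting clause:", set(clause))
    return merged

# Expert 1 asserts p and p -> q; expert 2 asserts not q.
kb1 = [frozenset({'p'}), frozenset({'-p', 'q'})]
kb2 = [frozenset({'-q'})]
print(merge_and_repair(kb1, kb2))
```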

224 citations


Patent
11 Jun 1992
TL;DR: In this article, the authors present a windowing system for commodity price information databases, which aids a user in creating and revising formal search language queries, a database searching engine responsive to such queries, means to generate and format results in both textual and graphic reports, and a capacity for echoing a formal query to a display in a near-natural language format for easy comprehension by the user.
Abstract: A computerized data retrieval system, especially for commodity price information databases, having a windowing system which aids a user in creating and revising formal search language queries, a database searching engine responsive to such queries, means to generate and format results in both textual and graphic reports, and a capacity for echoing a formal search language query to a display in a near-natural language format for easy comprehension by the user as the query is constructed using the windowing system. The system has facilities for including domain knowledge in a query, such as market knowledge of calendar events, national holidays, triple-witching hours, and option contract expiration dates. The system has additional facilities that permit a user to include more fundamental domain knowledge, such as dates of political elections, date of issuance and value of company earning reports, the consumer price index, and so on. The near-natural language format of the query may be created and revised either through the windowing system or with a text editor.
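
The market-calendar domain knowledge mentioned above can be made concrete with a short sketch. The term names and coverage are illustrative, not the patent's actual vocabulary; the only facts assumed are that triple-witching days and standard monthly option expirations fall on the third Friday of the relevant months.

```python
from datetime import date, timedelta

def third_friday(year, month):
    """Third Friday of a month (the usual option-expiry / triple-witching date)."""
    first = date(year, month, 1)
    first_friday = first + timedelta(days=(4 - first.weekday()) % 7)  # Friday == 4
    return first_friday + timedelta(weeks=2)

def expand_calendar_term(term, year):
    """Map a calendar term appearing in a query to concrete dates (illustrative vocabulary)."""
    if term == "triple-witching":
        return [third_friday(year, m) for m in (3, 6, 9, 12)]
    if term == "option-expiration":
        return [third_friday(year, m) for m in range(1, 13)]
    raise KeyError(term)

print(expand_calendar_term("triple-witching", 1992))
```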

186 citations


Proceedings ArticleDOI
01 Jun 1992
TL;DR: When the closest related terms were used in query expansion of a standard information retrieval testbed, the results were much better than those given by document co-occurrence techniques, and slightly better than those from unexpanded queries, supporting the contention that semantically similar words were indeed extracted by this technique.
Abstract: One aspect of world knowledge essential to information retrieval is knowing when two words are related. Knowing word relatedness allows a system, given a user's query terms, to retrieve relevant documents not containing those exact terms. Two words can be said to be related if they appear in the same contexts. Document co-occurrence gives a measure of word relatedness that has proved to be too rough to be useful. The relatively recent appearance of on-line dictionaries and of robust and rapid parsers permits the extraction of finer word contexts from large corpora. In this paper, we describe such an extraction technique that uses only coarse syntactic analysis and no domain knowledge. This technique produces lists of words related to any word appearing in a corpus. When the closest related terms were used in query expansion of a standard information retrieval testbed, the results were much better than those given by document co-occurrence techniques, and slightly better than those from unexpanded queries, supporting the contention that semantically similar words were indeed extracted by this technique.
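
A simplified sketch of the idea: context vectors compared by cosine similarity, then query expansion with the closest terms. The paper extracts finer syntactic contexts with a parser; a plain token window is used here only to keep the sketch short, and the sentences are invented.

```python
from collections import Counter
from math import sqrt

def context_vectors(sentences, window=2):
    """Build a context-word vector for each word from a sliding token window."""
    vecs = {}
    for tokens in sentences:
        for i, w in enumerate(tokens):
            ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def expand_query(query_terms, vecs, k=3):
    """Add the k closest related terms (by context similarity) to each query term."""
    expanded = set(query_terms)
    for q in query_terms:
        if q in vecs:
            scores = sorted(((cosine(vecs[q], v), w) for w, v in vecs.items() if w != q), reverse=True)
            expanded.update(w for _, w in scores[:k])
    return expanded

sentences = [
    "the judge dismissed the case".split(),
    "the court dismissed the appeal".split(),
    "the judge heard the appeal".split(),
]
print(expand_query({"judge"}, context_vectors(sentences)))
```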

165 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider models for underpinning knowledge and technology policy and management in agriculture, and propose a unifying theory for these models, which are consistent combinations of innovation, knowledge process and structural configuration.
Abstract: As agriculture develops, policy and management decisions increasingly focus on agricultural innovation emerging from utilizing knowledge and/or technology. This paper considers models for underpinning knowledge and technology policy and management. It describes the emergence of knowledge systems thinking. The system construct is applied to actors (individuals, networks and institutions) involved in knowledge processes. These actors potentially form a highly articulated and complex whole. Knowledge policy and management focus on measures that enhance the synergy between actors. Knowledge systems are viewed as “soft systems,” i.e., they only become systems as a result of active construction and joint learning. The soft systems perspective facilitates the identification of various knowledge system models, which have consequences for policy and management decisions with respect to investment, design, and training. In an attempt to create a unifying theory for these models, it is posited that these models are consistent combinations of innovation, knowledge process and structural configuration.

141 citations


Journal ArticleDOI
TL;DR: This approach provides a systematic method for organizing and representing domain knowledge through appropriate design of the clique functions describing the Gibbs distribution representing the pdf of the underlying MRF.
Abstract: An image is segmented into a collection of disjoint regions that form the nodes of an adjacency graph, and image interpretation is achieved through assigning object labels (or interpretations) to the segmented regions (or nodes) using domain knowledge, extracted feature measurements, and spatial relationships between the various regions. The interpretation labels are modeled as a Markov random field (MRF) on the corresponding adjacency graph, and the image interpretation problem is then formulated as a maximum a posteriori (MAP) estimation rule, given domain knowledge and region-based measurements. Simulated annealing is used to find this best realization or optimal MAP interpretation. This approach also provides a systematic method for organizing and representing domain knowledge through appropriate design of the clique functions describing the Gibbs distribution representing the pdf of the underlying MRF. A general methodology is provided for the design of the clique functions. Results of image interpretation experiments on synthetic and real-world images are described.
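
A minimal sketch of MAP-style labelling of a region adjacency graph by simulated annealing. The unary and pairwise potentials below are invented placeholders standing in for the paper's domain-specific clique functions, and the toy regions are made up.

```python
import math, random

def anneal_labels(regions, adjacency, labels, unary, pairwise,
                  t0=5.0, cooling=0.95, steps=2000):
    """Label regions by simulated annealing over a region adjacency graph.
    unary(region, label) and pairwise(label_a, label_b) are clique potentials
    (lower = more compatible)."""
    assign = {r: random.choice(labels) for r in regions}

    def local_energy(r, lab):
        return unary(r, lab) + sum(pairwise(lab, assign[n]) for n in adjacency[r])

    t = t0
    for _ in range(steps):
        r = random.choice(regions)
        new = random.choice(labels)
        delta = local_energy(r, new) - local_energy(r, assign[r])
        if delta < 0 or random.random() < math.exp(-delta / t):
            assign[r] = new
        t *= cooling
    return assign

# Toy example: dark regions prefer 'road', bright regions prefer 'sky',
# and adjacent regions mildly prefer matching labels.
regions = ["r1", "r2", "r3"]
adjacency = {"r1": ["r2"], "r2": ["r1", "r3"], "r3": ["r2"]}
brightness = {"r1": 0.1, "r2": 0.2, "r3": 0.9}
unary = lambda r, lab: abs(brightness[r] - (1.0 if lab == "sky" else 0.0))
pairwise = lambda a, b: 0.0 if a == b else 0.3
print(anneal_labels(regions, adjacency, ["road", "sky"], unary, pairwise))
```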

137 citations


Journal ArticleDOI
TL;DR: This paper discusses some engineering considerations that should be taken into account when building a knowledge based system, and recommends isomorphism, the well defined correspondence of the knowledge base to the source texts, as a basic principle of system construction in the legal domain.
Abstract: This paper discusses some engineering considerations that should be taken into account when building a knowledge based system, and recommends isomorphism, the well defined correspondence of the knowledge base to the source texts, as a basic principle of system construction in the legal domain. Isomorphism, as it has been used in the field of legal knowledge based systems, is characterised and the benefits which stem from its use are described. Some objections to and limitations of the approach are discussed. The paper concludes with a case study giving a detailed example of the use of the isomorphic approach in a particular application.

131 citations



Proceedings ArticleDOI
23 Aug 1992
TL;DR: KANT is described, the first system to combine principled source language design, semi-automated knowledge acquisition, and knowledge compilation techniques to produce fast, high-quality translation to multiple languages.
Abstract: Knowledge-based interlingual machine translation systems produce semantically accurate translations, but typically require massive knowledge acquisition. Ongoing research and development at the Center for Machine Translation has focussed on reducing this requirement to produce large-scale practical applications of knowledge-based MT. This paper describes KANT, the first system to combine principled source language design, semi-automated knowledge acquisition, and knowledge compilation techniques to produce fast, high-quality translation to multiple languages.

Journal ArticleDOI
TL;DR: This paper looks at backward inference, schema-guided forward inference, and planstacking (a form of meta-level backward inference followed by forward inference at the object level) and considers a proposal to introduce neural network modelling into the short-term memory component of a production rule-based problem solver.
Abstract: It is now widely accepted that intelligent problem solving requires access to a source of domain-specific knowledge and a control system to constrain the manner in which domain knowledge is searched. An example of a domain which has been intensively investigated is problem solving in physics. A number of researchers have proposed mutually incompatible theories to explain the search control strategies used in this area. This paper looks at backward inference, schema-guided forward inference, and planstacking (a form of meta-level backward inference followed by forward inference at the object level). It also considers a proposal to introduce neural network modelling into the short-term memory component of a production rule-based problem solver. One widely reported finding (Larkin, McDermott, Simon & Simon, 1980) is that novice problem solvers in the domain of physics use backward inference as a search control technique, while experts use forward inference. The backward-to-forward inference shift has indeed become so entrenched that attention has already shifted from whether to how it occurs. In spite of its widespread acceptance, the empirical basis for this claim is somewhat fragile, and a major aim of the present study was to re-examine it using much larger samples of experts and novices than have been employed in the past, and a methodology which does not depend upon protocol analysis. Our data show that both experts and novices exhibit a forward inference rather than a backward inference order of equation generation. Experts were more likely than novices to be able to plan their solutions at a descriptive meta-level. While existing theories are able to account for some of these findings, none of them is completely satisfactory in explaining the full range of data.

01 Jan 1992
TL;DR: This thesis describes a framework to acquire domain knowledge for planning by failure-driven experimentation with the environment, which exploits the characteristics of planning domains in order to search the space of plausible hypotheses without the need for additional background knowledge to build causal explanations for expectation failures.
Abstract: In order for autonomous systems to interact with their environment in an intelligent way, they must be given the ability to adapt and learn incrementally and deliberately. It is virtually impossible to devise and hand code all potentially relevant domain knowledge for complex dynamic tasks. This thesis describes a framework to acquire domain knowledge for planning by failure-driven experimentation with the environment. The initial domain knowledge in the system is an approximate model for planning in the environment, defining the system's expectations. The framework exploits the characteristics of planning domains in order to search the space of plausible hypotheses without the need for additional background knowledge to build causal explanations for expectation failures. Plans are executed while the external environment is monitored, and differences between the internal state and external observations are detected by various methods, each correlated with a typical cause for the expectation failure. The methods also construct a set of concrete hypotheses to repair the knowledge deficit. After being heuristically filtered, each hypothesis is tested in turn with an experiment. After the experiment is designed, a plan is constructed to achieve the situation required to carry out the experiment. The experiment plan must meet constraints such as minimizing plan length and negative interference with the main goals. The thesis describes a set of domain-independent constraints for experiments and their incorporation in the planning search space. After the execution of the plan and the experiment, observations are collected to conclude whether the experiment was successful or not. Upon success, the hypothesis is confirmed and the domain knowledge is adjusted. Upon failure, the experimentation process is iterated on the remaining hypotheses until success or until no more hypotheses are left to be considered. This framework has been shown to be an effective way to address incomplete planning knowledge and is demonstrated in a system called EXPO, implemented on the PRODIGY planning architecture. The effectiveness and efficiency of EXPO's methods is empirically demonstrated in several domains, including a large-scale process planning task, where the planner can recover from situations missing up to 50% of domain knowledge through repeated experimentation.

01 Jan 1992
TL;DR: This dissertation proposes a formalism that facilitates reasoning with qualitative rules, facts, and deductively closed beliefs, yet permits us to retract beliefs in response to changing contexts and imprecise observations, and provides the necessary machinery for embodying belief updates and belief revision.
Abstract: Intelligent agents are expected to generate plausible predictions and explanations in partially unknown and highly dynamic environments. Thus, they should be able to retract old conclusions in light of new evidence and to efficiently manage wide fluctuations of uncertainty. Neither mathematical logic nor numerical probability fully accommodates these requirements. In this dissertation I propose a formalism that facilitates reasoning with qualitative rules, facts, and deductively closed beliefs (as in logic), yet permits us to retract beliefs in response to changing contexts and imprecise observations (as in probability). Domain knowledge is encoded as if-then rules admitting exceptions with different degrees of abnormality, and queries specify contexts with different levels of precision. I develop effective procedures for testing the consistency of such knowledge bases and for computing whether (and to what degree) a given query is confirmed or denied. These procedures require a polynomial number of propositional satisfiability tests and hence are tractable for Horn expressions. Finally, I show how to give rules causal character by enforcing a Markovian condition of independence. The resulting formalism provides the necessary machinery for embodying belief updates and belief revision, generating explanations, and reasoning about actions and change.
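
One well-known concrete reading of the consistency test alluded to above is the tolerance-based check from the default-reasoning literature: a rule base is consistent if every remaining layer of rules contains at least one rule that is tolerated by the rest. The sketch below uses brute-force satisfiability and invented rules, and is offered as an illustration rather than the dissertation's exact procedure.

```python
from itertools import product

def neg(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def sat(clauses):
    """Brute-force satisfiability; a clause is a frozenset of literals like {'p', '-q'}."""
    vars_ = sorted({l.lstrip('-') for c in clauses for l in c})
    return any(
        all(any(m[l.lstrip('-')] != l.startswith('-') for l in c) for c in clauses)
        for bits in product([False, True], repeat=len(vars_))
        for m in [dict(zip(vars_, bits))]
    )

def tolerated(rule, rules):
    """Rule (a, b), read 'if a then normally b', is tolerated by `rules` if
    a AND b is satisfiable together with the material form of every rule."""
    a, b = rule
    clauses = [frozenset({a}), frozenset({b})]
    clauses += [frozenset({neg(a2), b2}) for a2, b2 in rules]
    return sat(clauses)

def consistent(rules):
    """Peel off layers of tolerated rules; a nonempty layer with none means inconsistency."""
    remaining = list(rules)
    while remaining:
        layer = [r for r in remaining if tolerated(r, remaining)]
        if not layer:
            return False
        remaining = [r for r in remaining if r not in layer]
    return True

# 'birds fly', 'penguins are birds', 'penguins do not fly': consistent as defaults with exceptions.
print(consistent([('bird', 'fly'), ('penguin', 'bird'), ('penguin', '-fly')]))
```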

Book
01 Jan 1992
TL;DR: A knowledge-based system developed by this approach can serve as a useful aid for auditors and managers in identifying fraud potentials and take a schema-based reasoning approach in addressing the theoretical issues of domain knowledge, representation scheme, and inference engine.
Abstract: The design and evaluation of internal accounting control systems has always been a major task for auditors and managers. This study intends to develop a theoretical foundation for building a knowledge-based system, which can accept a model of an internal accounting control system as input and produce the identification of fraud potentials as output. It takes a schema-based reasoning approach in addressing the theoretical issues of domain knowledge, representation scheme, and inference engine. The approach integrates the schema theory of knowledge representation from the cognitive psychology literature and pattern recognition from the artificial intelligence field. The specific issues of interest include: (1) designing the representation formalism for modeling internal accounting control systems, (2) extracting audit rules and audit patterns based on the auditing literature, and (3) developing a prototype system to validate the proposed approach. The formalism of Petri Nets is adopted to represent the two most significant elements of internal accounting control systems: the plan of organization and the accounting procedures. Past and hypothetical fraudulent cases are surveyed to derive control patterns, audit patterns and audit rules, which form the knowledge base for automatic evaluation. To validate the proposed approach, a prototype system is developed for the evaluation of the purchase and payment cycle. The prototype system uses the CASE/EDI shell as the development tool and provides users with a graphical editing interface to model an internal accounting control system. This study has both theoretical and practical contributions. Theoretically, this study proposes and validates an approach for developing a knowledge-based system for automated identification of fraud potential. An internal control theory consisting of significant control weaknesses and their fraud ramifications can be acquired from employing such an approach. Practically, a knowledge-based system developed by this approach can serve as a useful aid for auditors and managers in identifying fraud potentials.
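
A toy sketch of the modelling idea: a minimal Petri-net-like structure for a purchase-payment cycle plus one invented audit rule (separation of duties). The dissertation's actual control patterns, audit patterns, and audit rules are far richer; everything named below is illustrative.

```python
from collections import namedtuple

# Transitions consume tokens from input places and produce tokens in output places.
# The 'agent' annotation is illustrative, used only for the toy separation-of-duties check.
Transition = namedtuple("Transition", "name inputs outputs agent")

purchase_cycle = [
    Transition("raise_requisition", ["need"], ["requisition"], "clerk"),
    Transition("approve_purchase",  ["requisition"], ["approved_po"], "manager_a"),
    Transition("receive_goods",     ["approved_po"], ["receiving_report"], "warehouse"),
    Transition("issue_payment",     ["receiving_report"], ["payment"], "manager_a"),
]

def run(transitions, marking):
    """Fire each enabled transition once, in order (enough for a linear cycle)."""
    marking = dict(marking)
    for t in transitions:
        if all(marking.get(p, 0) > 0 for p in t.inputs):
            for p in t.inputs:
                marking[p] -= 1
            for p in t.outputs:
                marking[p] = marking.get(p, 0) + 1
    return marking

def separation_of_duties_violations(transitions, incompatible_pairs):
    """Toy audit rule: the same agent must not perform two incompatible duties."""
    by_name = {t.name: t for t in transitions}
    return [(a, b) for a, b in incompatible_pairs if by_name[a].agent == by_name[b].agent]

print(run(purchase_cycle, {"need": 1}))
print(separation_of_duties_violations(purchase_cycle, [("approve_purchase", "issue_payment")]))
```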

Journal ArticleDOI
TL;DR: In this article, it is argued that problem-centred knowledge should be the principal objective of instruction, and that active learning generates knowledge organized around problems, which is a different distinction from that between declarative and procedural knowledge.
Abstract: Two distinguishable kinds of knowledge are (1) knowledge organized around referents and (2) knowledge organized around problems. This is a different distinction from that between declarative and procedural knowledge and, it is argued, a more important one for educational design. Schooling, whether traditional or progressivist, has tended to emphasize knowledge organized around referents. Problems of motivation, verbalism, and inertness are thus exacerbated. Active learning generates knowledge organized around problems. Problem-centred knowledge, it is argued, should be the principal objective of instruction.

Journal ArticleDOI
TL;DR: Findings are that knowledge acquisition and knowledge engineering should be separated in the overall ES development process and assigned to different ES team members, and knowledge acquisition is the “bottleneck” in ES development.
Abstract: This paper reports on the results of a survey of knowledge engineers from private organizations, and empirically examines the state of expert systems (ES) in organizational contexts. The knowledge ...

Journal ArticleDOI
TL;DR: This article found that individuals with prior knowledge used their knowledge actively to generate global inferences during reading, and that this anticipatory inferential process guides the construction of a mental model of a text, built partly with explicit information and partly with existing knowledge.
Abstract: The present research tested how domain-related knowledge influences inference generation during text comprehension. Two types of inference were examined: those that maintained referential coherence, referred to as local inferences, and those that were anticipatory in nature, referred to as global inferences. Three groups of subjects, each with differing degrees of domain knowledge, read a domain-related text that included both types of inference. It was found that all knowledge groups processed sentences involved in local inferences similarly, presumably because establishing text coherence is essential to comprehension. However, knowledge differences emerged in the processing of the sentences involved in global inferences. The results of two experiments suggested that individuals with prior knowledge used their knowledge actively to generate global inferences during reading. It was argued that this anticipatory inferential process guides the construction of a mental model of a text, built partly with explicit information and partly with existing knowledge.

Journal ArticleDOI
TL;DR: An architecture for knowledge acquisition systems is proposed based upon the integration of existing methodologies, techniques and tools which have been developed within the knowledge acquisition, machine learning, expert systems, hypermedia and knowledge representation research communities.
Abstract: An architecture for knowledge acquisition systems is proposed based upon the integration of existing methodologies, techniques and tools which have been developed within the knowledge acquisition, machine learning, expert systems, hypermedia and knowledge representation research communities. Existing tools are analyzed within a common framework to show that their integration can be achieved in a natural and principled fashion. A system design is synthesized from what already exists, putting a diversity of well-founded and widely used approaches to knowledge acquisition within an integrative framework. The design is intended to be clean and simple, easy to understand, and easy to implement. A detailed architecture for integrated knowledge acquisition systems is proposed that also derives from parallel cognitive and theoretical studies.

Book ChapterDOI
03 Jan 1992
TL;DR: A context sensitive rewrite grammar is developed that allows us to capture a large class of inference layer models and their instantiation in the ACKnowledge Knowledge Engineering Workbench.
Abstract: In this paper we describe Generalised Directive Models and their instantiation in the ACKnowledge Knowledge Engineering Workbench. We have developed a context sensitive rewrite grammar that allows us to capture a large class of inference layer models. We use the grammar to progressively refine the model of problem solving for an application. It is also used as the basis of the scheduling of KA activities and the selection of KA tools.

Journal ArticleDOI
TL;DR: Issues related to people involved in the knowledge acquisition task are discussed, techniques to acquire knowledge are reviewed, and a methodology that offers a structured approach to knowledge acquisition is presented.
Abstract: The application of expert systems has increased drastically in the last decade. The power of these systems derives from the knowledge they possess, rather than from the inference mechanism that they employ. To ensure the performance of an expert system, the acquisition of knowledge becomes a vital task in the development process. This article discusses issues related to people involved in the knowledge acquisition task, reviews techniques to acquire knowledge, and presents a methodology that offers a structured approach to knowledge acquisition. Trends in knowledge acquisition are discussed.

Journal ArticleDOI
01 Nov 1992
TL;DR: The method promotes several general ideas for the automation of knowledge acquisition, such as understanding-based knowledge extension, knowledge acquisition through multistrategy learning, consistency-driven concept formation and refinement, closed-loop learning, and synergistic cooperation between a human expert and a learning system.
Abstract: A method for the automation of knowledge acquisition that is viewed as a process of incremental extension, updating, and improvement of an incomplete and possibly partially incorrect knowledge base of an expert system is presented. The knowledge base is an approximate representation of objects and inference processes in the expertise domain. Its gradual development is guided by the general goal of improving this representation to consistently integrate new input information received from the human expert. The knowledge acquisition method is presented as part of a methodology for the automation of the entire process of building expert systems, and is implemented in the system NeoDISCIPLE. The method promotes several general ideas for the automation of knowledge acquisition, such as understanding-based knowledge extension, knowledge acquisition through multistrategy learning, consistency-driven concept formation and refinement, closed-loop learning, and synergistic cooperation between a human expert and a learning system.

Journal ArticleDOI
TL;DR: The technique, borrowed from the social sciences and known as network analysis, may be used to identify human experts as well as documented sources of knowledge within organizational settings by introducing a systematic method for identifying expertise (knowledge identification).
Abstract: The purpose of this work is to introduce a systematic method for identifying expertise (knowledge identification). The technique, borrowed from the social sciences and known as network analysis, may be used to identify human experts as well as documented sources of knowledge within organizational settings. Network analysis is simple to administer, cost-effective, and complements interview methods. Following a discussion of the theory underlying the technique, its application in a field setting is demonstrated. The results are checked against what would be expected due to chance, and cross-validated through interviews. To ensure the efficacy of the method, knowledge identification at a second site is briefly described. The work closes with some ideas for future management information systems research using network analysis.
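
A minimal sketch of the idea with invented survey data: rank people (or documented sources) by how often others report consulting them on a topic, i.e., in-degree, the simplest network-analysis centrality measure. The article's field procedure is richer than this.

```python
from collections import Counter

# Each tuple means "asker consults source about the given topic" (invented responses).
consultations = [
    ("ann", "raj", "tax"), ("bo", "raj", "tax"), ("cy", "raj", "tax"),
    ("ann", "mei", "audit"), ("bo", "mei", "audit"), ("raj", "mei", "audit"),
    ("cy", "docs/policy-manual", "policy"),
]

def expertise_ranking(consultations, topic):
    """Rank sources by in-degree for one topic of the consultation network."""
    indegree = Counter(src for _, src, t in consultations if t == topic)
    return indegree.most_common()

print(expertise_ranking(consultations, "tax"))    # raj emerges as the tax expert
print(expertise_ranking(consultations, "audit"))
```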


Book ChapterDOI
01 Jan 1992
TL;DR: It is argued that search for a solution that accommodates several aspects is best carried out through iterative refinement of cases, and that a precise geometrical model of the case is required to link different aspects.
Abstract: Any design process involves two kinds of knowledge: domain knowledge and design knowledge. In this paper, we focus on the formulation of design knowledge within the building domain. Representations which formulate building design knowledge include production rules, shape grammars, prototypes and cases. To avoid blind search, design knowledge must be indexed by function according to different aspects. We argue that search for a solution that accommodates several aspects is best carried out through iterative refinement of cases, and that a precise geometrical model of the case is required to link different aspects. This leads us to employ cases as a design knowledge representation and adaptation as a reasoning methodology for design. We describe a procedure for adaptation of building structures to new environments using interleaved processes of dimensional and topological modifications. We show results of a prototype which implements this procedure.

Journal ArticleDOI
TL;DR: A collaborative approach to acquiring knowledge from a team of experts is developed based on the experience of using Group Support Systems for strategic planning, crisis management, and knowledge acquisition.
Abstract: The increasing complexity of expert systems applications dictates the involvement of many experts in building those systems. Most existing knowledge acquisition techniques are not appropriate for knowledge acquisition from multiple experts. A collaborative approach to acquiring knowledge from a team of experts is developed based on our experience of using Group Support Systems (GSS) for strategic planning, crisis management, and knowledge acquisition. A collaborative environment consists of hardware, software, facility, people, procedures, and facilitation. Techniques for collaborative knowledge acquisition include brainstorming, Nominal Group Technique, Delphi technique, focus group interviews, group repertory grid analysis, and voting. A dimensional analysis framework is used to analyze tools for collaborative knowledge acquisition with regard to three dimensions: the problem-solving process supported, the structure of information generated, and the interaction patterns among participants. Process models including planning, identification, classification, group repertory grid analysis, and verification are developed to facilitate the collaborative knowledge acquisition activities. This paper concludes with observations of a study employing this approach and implications to future research.

Proceedings ArticleDOI
20 Sep 1992
TL;DR: An approach to automated concept recognition and its implementation is described, using a concept model and a library of concept recognition rules to describe what the concepts are and how to recognize them from lower-level concepts.
Abstract: Program understanding can be greatly assisted by automating the recognition of abstract concepts present in the program code. The authors describe an approach to automated concept recognition and its implementation. In the approach, a concept model and a library of concept recognition rules are used to describe what the concepts are and how to recognize them from lower-level concepts. Programming language knowledge and domain knowledge are both used to aid the recognition of abstract concepts.
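
A small sketch of rule-based concept recognition with an invented rule format: a higher-level concept is recognized whenever its lower-level constituents have already been found, iterating to a fixpoint. The concept names and rules below are made up, not the authors' library.

```python
# Each rule: if all lower-level concepts on the left are present,
# recognize the higher-level concept on the right.
RULES = [
    ({"loop", "accumulator-update"}, "summation"),
    ({"loop", "element-compare", "best-so-far-update"}, "maximum-search"),
    ({"summation", "division-by-count"}, "average"),
]

def recognize(concepts, rules=RULES):
    """Apply recognition rules to a set of concepts until no rule adds anything new."""
    found = set(concepts)
    changed = True
    while changed:
        changed = False
        for lower, higher in rules:
            if lower <= found and higher not in found:
                found.add(higher)
                changed = True
    return found

# Concepts assumed to have been extracted from code by a language-level analysis:
print(recognize({"loop", "accumulator-update", "division-by-count"}))
# -> adds 'summation', and from that 'average'
```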

Journal ArticleDOI
TL;DR: In this paper, a set of six combinatorial problems involving the dressing of toy bears in all possible combinations of clothing items was individually administered to children aged 4 to 9 years.
Abstract: Summary. Seventy-two children aged 4 to 9 years were individually administered a set of six combinatorial problems involving the dressing of toy bears in all possible combinations of clothing items. Because the problem domain was novel, the children had to use their existing general strategies to help them solve the problems. Analyses of the children's responses revealed a series of increasingly sophisticated solution strategies (reflecting a knowledge of the combinatorial domain), plus a number of scanning actions serving primarily in a monitoring capacity (reflecting an application of general strategies). Significant associations were found between children's solution strategies and their scanning actions on each problem, with the children changing the nature of their scanning as they adopted more complex solution strategies. The nature of this association was a key factor in problem success, especially when there was an additional constraint on goal attainment. The findings of the study are examined in terms of changes in children's principled knowledge base and in the nature of their general strategies. Cases involving problem failure in the face of sophisticated domain knowledge highlight the importance of children applying the appropriate domain-general strategies in both novel and routine problem solving.
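
The combinatorial task itself is easy to make concrete; the sketch below uses invented clothing categories and enumerates every outfit as one element of a Cartesian product, the exhaustive, systematic strategy against which the children's emerging strategies can be compared.

```python
from itertools import product

# Invented stand-ins for the clothing items used in the study.
shirts = ["red shirt", "blue shirt", "green shirt"]
trousers = ["short trousers", "long trousers"]
hats = ["cap", "bonnet"]

# Every outfit is one element of the Cartesian product of the categories,
# so 3 * 2 * 2 = 12 bears can be dressed with no repeats and none missed.
outfits = list(product(shirts, trousers, hats))
print(len(outfits))
for outfit in outfits:
    print(outfit)
```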

Journal ArticleDOI
TL;DR: SACD can be used by domain specialists who know the text, characteristics and the intricacies of the legislation, or by knowledge engineers who know SACD and can modify the grammars.
Abstract: The Systeme d'Acquisition des Connaissances Deontique (SACD), part of the Acquisition des Connaissances et Analyse de Textes project, which can generate a knowledge base from the logical structure of regulatory texts, is described. SACD can be used by domain specialists who know the text, characteristics and the intricacies of the legislation, or by knowledge engineers who know SACD and can modify the grammars. SACD performs all syntactic text processing and automatically manipulates the knowledge structures. The specialist interprets all semantic knowledge and performs any tasks requiring domain experience. The knowledge engineer adapts SACD's knowledge structures when the system behaves inaccurately. The logical context of regulatory texts is reviewed, and the text analysis, knowledge base operation, and knowledge base adaptation procedures of SACD are discussed.

Journal ArticleDOI
TL;DR: Results indicate that analyst knowledge and use of concrete terms in the user knowledge domain are of more utility in the discovery task than abstract, conceptual domain knowledge.