
Showing papers on "Domain knowledge" published in 1998


Book
01 Jan 1998
TL;DR: The definitive primer on knowledge management, this book will establish the enduring vocabulary and concepts and serve as the hands-on resource of choice for fast companies that recognize knowledge as the only sustainable source of competitive advantage.
Abstract: The definitive primer on knowledge management, this book will establish the enduring vocabulary and concepts and serve as the hands-on resource of choice for fast companies that recognize knowledge as the only sustainable source of competitive advantage. Drawing on their work with more than 30 knowledge-rich firms, the authors, experienced consultants with a track record of success, examine how all types of companies can effectively understand, analyze, measure, and manage their intellectual assets, turning corporate knowledge into market value. They consider such questions as: What key cultural and behavioral issues must managers address to use knowledge effectively? What are the best ways to incorporate technology into knowledge work? What does a successful knowledge project look like, and how do you know when it has succeeded? In the end, say the authors, the human qualities of knowledge (experience, intuition, and beliefs) are the most valuable and the most difficult to manage. Applying the insights of Working Knowledge is every manager's first step on that rewarding road to long-term success. A Library Journal Best Business Book of the Year. "For an entire company...to have knowledge, that information must be coordinated and made accessible. Thomas H. Davenport...and Laurence Prusak... offer an elegantly simple overview of the 'knowledge market' aimed at fulfilling that goal.... Working Knowledge provides practical advice about implementing a knowledge-management system....A solid dose of common sense for any company looking to acquire -- or maintain -- a competitive edge."--Upside, June 1998

10,791 citations


Journal ArticleDOI
01 Mar 1998
TL;DR: The paradigm shift from a transfer view to a modeling view is discussed and two approaches which considerably shaped research in Knowledge Engineering are described: Role-limiting Methods and Generic Tasks.
Abstract: This paper gives an overview of the development of the field of Knowledge Engineering over the last 15 years. We discuss the paradigm shift from a transfer view to a modeling view and describe two approaches which considerably shaped research in Knowledge Engineering: Role-limiting Methods and Generic Tasks. To illustrate various concepts and methods which evolved in recent years we describe three modeling frameworks: CommonKADS, MIKE and PROTEGE-II. This description is supplemented by discussing some important methodological developments in more detail: specification languages for knowledge-based systems, problem-solving methods and ontologies. We conclude by outlining the relationship of Knowledge Engineering to Software Engineering, Information Integration and Knowledge Management.

3,406 citations


Book
20 Oct 1998
TL;DR: The systematic presentation extends research results to new situations and describes how to build the knowledge structure in practice.
Abstract: Knowledge Spaces offers a rigorous mathematical foundation for various practical systems of knowledge assessment, applied to real and simulated data. The systematic presentation extends research results to new situations and describes how to build the knowledge structure in practice.

547 citations


Journal ArticleDOI
TL;DR: The business world is becoming so concerned about knowledge management that, according to one report, over 40 percent of the Fortune 1000 now have a chief knowledge officer, a senior-level executive responsible for creating an infrastructure and cultural environment for knowledge sharing.
Abstract: Many enterprises downsize to adapt to more competitive environments, but unless they have captured the knowledge of their employees, downsizing can result in a loss of critical information. Similarly, as employees leave, organizations are likely to lose access to large quantities of critical knowledge. As companies expand internationally, geographic barriers can affect knowledge exchange and prevent easy access to information. These and other forces are pushing enterprises to explore better methods for knowledge management. Enterprise knowledge management entails formally managing knowledge resources, typically by using advanced information technology. KM is formal in that knowledge is classified and categorized according to a prespecified, but evolving, ontology into structured and semistructured data and knowledge bases. The overriding purpose of enterprise KM is to make knowledge accessible and reusable to the enterprise. The business world is becoming so concerned about knowledge management that, according to one report, over 40 percent of the Fortune 1000 now have a chief knowledge officer, a senior-level executive responsible for creating an infrastructure and cultural environment for knowledge sharing. This article surveys some components of this young field.

375 citations


Journal ArticleDOI
TL;DR: A series of three experiments in which an adapted version of Mednick’s (1962) remote associates task was used demonstrates conditions under which domain knowledge may inhibit creative problem solving.
Abstract: Experts generally solve problems in their fields more effectively than novices because their well-structured, easily activated knowledge allows for efficient search of a solution space. But what happens when a problem requires a broad search for a solution? One concern is that subjects with a large amount of domain knowledge may actually be at a disadvantage, because their knowledge may confine them to an area of the search space in which the solution does not reside. In other words, domain knowledge may act as a mental set, promoting fixation in creative problem-solving attempts. A series of three experiments in which an adapted version of Mednick’s (1962) remote associates task was used demonstrates conditions under which domain knowledge may inhibit creative problem solving.

365 citations


Proceedings Article
01 Jul 1998
TL;DR: Technical design issues faced in the development of Open Knowledge Base Connectivity (OKBC) are discussed, the ways in which OKBC improves upon GFP are highlighted, and practical experiences in using it are reported.
Abstract: The technology for building large knowledge bases (KBs) is yet to witness a breakthrough so that a KB can be constructed by the assembly of prefabricated knowledge components. Knowledge components include both pieces of domain knowledge (for example, theories of economics or fault diagnosis) and KB tools (for example, editors and theorem provers). Most of the current KB development tools can only manipulate knowledge residing in the knowledge representation system (KRS) for which the tools were originally developed. Open Knowledge Base Connectivity (OKBC) is an application programming interface for accessing KRSs, and was developed to enable the construction of reusable KB tools. OKBC improves upon its predecessor, the Generic Frame Protocol (GFP), in several significant ways. OKBC can be used with a much larger range of systems because its knowledge model supports an assertional view of a KRS. OKBC provides an explicit treatment of entities that are not frames, and it has a much better way of controlling inference and specifying default values. OKBC can be used on practically any platform because it supports network transparency and has implementations for multiple programming languages. In this paper, we discuss technical design issues faced in the development of OKBC, highlight how OKBC improves upon GFP, and report on practical experiences in using it.

354 citations
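
The abstract describes OKBC as a uniform API layered over heterogeneous knowledge representation systems so that tools can be written once. As a rough illustration of that layering idea (not the actual OKBC operations or bindings; every name below is invented for the sketch), a tool can program against an abstract frame-access interface and work with any backend that implements it:

```python
# Minimal sketch of a KRS-independent frame API in the spirit of OKBC.
# Names (FrameStore, get_slot_values, ...) are illustrative, not the
# actual OKBC operations.
from abc import ABC, abstractmethod

class FrameStore(ABC):
    """Uniform interface a KB tool can program against, regardless of
    which knowledge representation system sits underneath."""

    @abstractmethod
    def get_frames(self) -> list[str]: ...

    @abstractmethod
    def get_slot_values(self, frame: str, slot: str) -> list: ...

    @abstractmethod
    def put_slot_value(self, frame: str, slot: str, value) -> None: ...

class DictBackedStore(FrameStore):
    """Toy backend: frames live in a dict. A real backend would wrap an
    existing KRS, possibly over a network connection."""
    def __init__(self):
        self._frames: dict[str, dict[str, list]] = {}

    def get_frames(self):
        return list(self._frames)

    def get_slot_values(self, frame, slot):
        return self._frames.get(frame, {}).get(slot, [])

    def put_slot_value(self, frame, slot, value):
        self._frames.setdefault(frame, {}).setdefault(slot, []).append(value)

# Any tool written against FrameStore (an editor, a browser) now works
# with every backend that implements the interface.
store = DictBackedStore()
store.put_slot_value("Volcano", "subclass-of", "GeologicalFeature")
print(store.get_slot_values("Volcano", "subclass-of"))
```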


Proceedings ArticleDOI
10 Aug 1998
TL;DR: This paper presents a robust, knowledge-poor approach to resolving pronouns in technical manuals, which operates on texts pre-processed by a part-of-speech tagger and achieves a success rate of 89.7%.
Abstract: Most traditional approaches to anaphora resolution rely heavily on linguistic and domain knowledge. One of the disadvantages of developing a knowledge-based system, however, is that it is a very labour-intensive and time-consuming task. This paper presents a robust, knowledge-poor approach to resolving pronouns in technical manuals, which operates on texts pre-processed by a part-of-speech tagger. Candidates are checked against agreement constraints and evaluated by a number of antecedent indicators. Candidates are assigned scores by each indicator, and the candidate with the highest score is returned as the antecedent. Evaluation reports a success rate of 89.7%, which is better than the success rates of the approaches selected for comparison and tested on the same data. In addition, preliminary experiments show that the approach can be successfully adapted for other languages with minimal modifications.

353 citations
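
The pipeline the abstract outlines (agreement filter, then indicator-based scoring, highest scorer wins) fits in a few lines. The indicators and weights below are illustrative stand-ins, not the paper's actual indicator set or scores:

```python
# Toy sketch of the knowledge-poor approach described above: filter
# noun-phrase candidates by agreement, score the survivors with a few
# antecedent indicators, and return the highest scorer.
def resolve_pronoun(pronoun, candidates):
    # candidates: dicts produced by a POS-tagger/NP-chunker front end
    def agrees(c):
        return (c["number"] == pronoun["number"]
                and c["gender"] in (pronoun["gender"], "any"))

    def score(c):
        s = 0
        s += 2 if c["is_first_np_of_sentence"] else 0   # salience indicator
        s += 1 if c["in_heading_or_caption"] else 0     # section-heading indicator
        s += c["mention_count"]                         # lexical reiteration
        s -= 1 if c["is_indefinite"] else 0             # indefiniteness penalty
        s -= c["distance_in_sentences"]                 # prefer recent candidates
        return s

    viable = [c for c in candidates if agrees(c)]
    return max(viable, key=score) if viable else None

candidates = [
    {"text": "the printer", "number": "sg", "gender": "any",
     "is_first_np_of_sentence": True, "in_heading_or_caption": True,
     "mention_count": 3, "is_indefinite": False, "distance_in_sentences": 1},
    {"text": "a cable", "number": "sg", "gender": "any",
     "is_first_np_of_sentence": False, "in_heading_or_caption": False,
     "mention_count": 1, "is_indefinite": True, "distance_in_sentences": 0},
]
pronoun = {"text": "it", "number": "sg", "gender": "any"}
print(resolve_pronoun(pronoun, candidates)["text"])  # -> "the printer"
```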


Proceedings ArticleDOI
23 Feb 1998
TL;DR: An algorithm is described that efficiently finds all such negative associations by combining previously discovered positive associations with domain knowledge to constrain the search space such that fewer but more interesting negative rules are mined.
Abstract: Mining for association rules is considered an important data mining problem. Many different variations of this problem have been described in the literature. We introduce the problem of mining for negative associations. A naive approach to finding negative associations leads to a very large number of rules with low interest measures. We address this problem by combining previously discovered positive associations with domain knowledge to constrain the search space such that fewer but more interesting negative rules are mined. We describe an algorithm that efficiently finds all such negative associations and present the experimental results.

320 citations
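
The key move in the abstract is that domain knowledge tells the miner which absences are worth testing, so that only a few interesting negative rules survive. A minimal sketch of that idea, assuming a toy item taxonomy as the domain knowledge and an invented expected-minus-actual interest measure (the paper's actual candidate generation and measures differ):

```python
# A candidate negative rule "x -> NOT y" is kept only when y co-occurs
# with x far less often than y's taxonomy siblings would suggest.
def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def negative_rules(transactions, taxonomy, min_gap=0.3):
    items = {i for t in transactions for i in t}
    rules = []
    for parent, siblings in taxonomy.items():
        for y in siblings:
            others = [s for s in siblings if s != y]
            if not others:
                continue
            for x in items - set(siblings):
                # expected co-occurrence: average over y's siblings
                expected = sum(support({x, s}, transactions)
                               for s in others) / len(others)
                actual = support({x, y}, transactions)
                if expected - actual >= min_gap:
                    rules.append((x, y, expected, actual))
    return rules

transactions = [{"chips", "coke"}, {"chips", "coke"}, {"chips", "coke"},
                {"chips", "pepsi"}, {"chips"}, {"bread", "pepsi"}]
taxonomy = {"soft-drink": ["coke", "pepsi"]}
for x, y, e, a in negative_rules(transactions, taxonomy):
    print(f"{x} -> NOT {y}  (expected {e:.2f}, actual {a:.2f})")
```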


Proceedings Article
27 Aug 1998
TL;DR: This paper proposes a new method of discovering unexpected patterns that takes into consideration prior background knowledge of decision makers and uses these beliefs to seed the search for patterns in data that contradict the beliefs.
Abstract: Several pattern discovery methods proposed in the data mining literature have the drawbacks that they discover too many obvious or irrelevant patterns and that they do not fully leverage the valuable prior domain knowledge that decision makers have. In this paper we propose a new method of discovery that addresses these drawbacks. In particular, we propose a new method of discovering unexpected patterns that takes into consideration the prior background knowledge of decision makers. This prior knowledge constitutes a set of expectations or beliefs about the problem domain. Our proposed method of discovering unexpected patterns uses these beliefs to seed the search for patterns in data that contradict the beliefs. To evaluate the practicality of our approach, we applied our algorithm to consumer purchase data from a major market research company and to web logfile data tracked at an academic Web site, and we present our findings in the paper.

281 citations
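
A compact sketch of the belief-seeded search described above: a belief is a rule body -> head, and we look for a refining condition under which the head's confidence collapses, contradicting the belief. The data, belief, and thresholds are toy values invented for the illustration:

```python
# Belief-driven search for unexpected patterns (simplified).
def confidence(body, head, rows):
    matching = [r for r in rows if body <= set(r.items())]
    if not matching:
        return 0.0
    return sum(head <= set(r.items()) for r in matching) / len(matching)

def unexpected(body, head, rows, candidate_conds, drop=0.25):
    base = confidence(body, head, rows)
    findings = []
    for cond in candidate_conds:
        refined = body | {cond}
        conf = confidence(refined, head, rows)
        # unexpected: the refined population contradicts the belief
        if base - conf >= drop and any(refined <= set(r.items()) for r in rows):
            findings.append((cond, base, conf))
    return findings

rows = [
    {"season": "winter", "promo": "yes", "buys_soup": "yes"},
    {"season": "winter", "promo": "no",  "buys_soup": "yes"},
    {"season": "winter", "promo": "yes", "buys_soup": "yes"},
    {"season": "winter", "promo": "no",  "buys_soup": "no"},
    {"season": "winter", "promo": "no",  "buys_soup": "no"},
]
body = {("season", "winter")}          # belief: in winter, customers buy soup
head = {("buys_soup", "yes")}
conds = [("promo", "no"), ("promo", "yes")]
print(unexpected(body, head, rows, conds))
```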


Patent
Ronald M. Swartz, Jeffrey L. Winkler, Evelyn A. Janos, Igor Markidan, Qun Dou
29 Jun 1998
TL;DR: The authors present a method and apparatus for integrating the operation of various independent software applications directed to the management of information within an enterprise, using an expandable architecture with built-in knowledge integration features that facilitate the monitoring of information flow into, out of, and between the integrated applications.
Abstract: The present invention is a method and apparatus for integrating the operation of various independent software applications directed to the management of information within an enterprise. The system architecture is an expandable architecture, with built-in knowledge integration features that facilitate the monitoring of information flow into, out of, and between the integrated information management applications so as to assimilate knowledge information and facilitate the control of such information. Also included are additional tools which, using the knowledge information, enable more efficient use of the knowledge within an enterprise, including the ability to develop a context for and visualization of such knowledge.

280 citations


Journal ArticleDOI
TL;DR: The authors identify three priority areas for further research and experimentation in the knowledge field: research on how tacit knowledge can continue to be "tapped into and utilized" despite increasing economic and business forces that are disrupting the social nature of the workplace community where tacit knowledge lives and thrives; research on how to optimally structure knowledge flow between knowledge seekers and knowledge providers to maximize the impact of knowledge; and research on how to make knowledge, which by its nature is fuzzy and intangible, visible and concrete.
Abstract: If the knowledge field is to move forward, there are—from a business perspective—three priority areas for further research and experimentation. They are: research on how tacit knowledge can continue to be "tapped into and utilized" despite increasing economic and business forces that are disrupting the social nature of the workplace community where tacit knowledge lives and thrives; research on how to optimally structure knowledge flow between knowledge seekers and knowledge providers to maximize the impact of knowledge; and research on how to make knowledge, which by its nature is fuzzy and intangible, visible and concrete. Progress in each of these three areas would significantly contribute to making the relationship between knowledge and the firm a significant business reality.

Journal ArticleDOI
TL;DR: The authors explore the nature, context, and enabling conditions of ART systems and show how ba can be employed in them, enabling companies to implement a multi-dynamic approach to knowledge management.

Proceedings Article
01 Oct 1998
TL;DR: In this article, the authors describe an approach to intelligent knowledge management that explicitly takes into account the social issues involved, and the proof of concept is given by a large-scale initiative involving knowledge management of a virtual organization.
Abstract: Most enterprises agree that knowledge is an essential asset for success and survival in an increasingly competitive and global market. This awareness is one of the main reasons for the exponential growth of knowledge management in the past decade. Our approach to knowledge management is based on ontologies and makes knowledge assets intelligently accessible to people in organizations. Most company-vital knowledge resides in the heads of people, and thus successful knowledge management must consider not only technical aspects but also social ones. In this paper, we describe an approach to intelligent knowledge management that explicitly takes into account the social issues involved. The proof of concept is given by a large-scale initiative involving knowledge management of a virtual organization.

Journal ArticleDOI
Daniel Mailharro
TL;DR: The main contribution of the work is to provide an object-oriented model completely integrated in the CSP schema, with inheritance and classification mechanisms, and with specific arc consistency algorithms.
Abstract: One of the main difficulties with configuration problem solving lies in the representation of the domain knowledge, because many different aspects, such as taxonomy, topology, constraints, resource balancing, component generation, etc., have to be captured in a single model. This model must be expressive, declarative, and structured enough to be easy to maintain and to be easily used by many different kinds of reasoning algorithms. This paper presents a new framework where a configuration problem is considered both as a classification problem and as a constraint satisfaction problem (CSP). Our approach deeply blends concepts from the CSP and object-oriented paradigms to adopt the strengths of both. We expose how we have integrated taxonomic reasoning into the constraint programming schema. We also introduce new constrained variables with nonfinite domains to deal with the fact that the set of components is previously unknown and is constructed during the search for a solution. Our work strongly focuses on the representation and structuring of the domain knowledge, because the most common drawback of previous works is the difficulty of maintaining the knowledge base, due to a lack of structure and expressiveness in the knowledge representation model. The main contribution of our work is to provide an object-oriented model completely integrated into the CSP schema, with inheritance and classification mechanisms, and with specific arc consistency algorithms.
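
One way to picture the blend of classification and constraint satisfaction the abstract describes: a concrete leaf class from a component taxonomy is chosen for each slot, with constraints checked as the assignment grows. The toy product model and resource constraint below are invented, and the sketch omits the paper's real machinery (nonfinite domains, arc consistency):

```python
# Tiny illustration of configuration as classification + CSP.
TAXONOMY = {
    "PSU": ["PSU-300W", "PSU-500W"],
    "GPU": ["GPU-basic", "GPU-gaming"],
}
WATTS = {"PSU-300W": 300, "PSU-500W": 500, "GPU-basic": 75, "GPU-gaming": 250}

def consistent(partial):
    # resource-balancing constraint: PSU must cover GPU draw plus 150W base load
    if "psu" in partial and "gpu" in partial:
        return WATTS[partial["psu"]] >= WATTS[partial["gpu"]] + 150
    return True

def configure(slots, partial=None):
    partial = partial or {}
    if len(partial) == len(slots):
        yield dict(partial)
        return
    slot = [s for s in slots if s not in partial][0]
    for choice in TAXONOMY[slots[slot]]:      # classification step:
        partial[slot] = choice                # pick a leaf class of the slot's type
        if consistent(partial):               # CSP step: check constraints
            yield from configure(slots, partial)
        del partial[slot]

for config in configure({"psu": "PSU", "gpu": "GPU"}):
    print(config)
```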

Patent
14 Oct 1998
TL;DR: In this paper, an object management system is provided for managing, cataloging, and discovering various potentially reusable code and data components that exist within an Information Technology (IT) platform, and which each have well-defined interfaces with other components.
Abstract: An object management system is provided for managing, cataloging, and discovering various potentially reusable code and data components that exist within an Information Technology (IT) platform and that each have well-defined interfaces with other components. For each of these reusable code and data components, an associated software object called an “asset element” is created that describes the associated component. Relationships are created between various asset elements to represent the relationships existing between the software components. Other software objects called “locator elements” are created that each describe an application concept or sub-concept. This application concept or sub-concept is associated with a problem solved by the code and data components within the IT platform. Relationships are created between the various locator elements to correlate the concepts and sub-concepts to software constructs represented by asset elements. The object management system further supports various object discovery tools capable of identifying locator elements associated with a particular concept. These locator elements and the associated relationships may then be efficiently traced to identify related asset elements and the associated software and code constructs. This provides an efficient concept-based search mechanism for the code constructs. Other tools are provided for creating, modifying, and deleting the elements. A model may be used to define the various types of relationships and elements that may exist within the system, thereby simplifying the various tools needed to support element creation, modification, deletion, and traversal.

Proceedings Article
01 Jul 1998
TL;DR: Andes, an intelligent tutoring system for Newtonian physics, refers to a probabilistic student model to make decisions about responding to help requests, and provides feedback and hints tailored to the student's knowledge and goals.
Abstract: One of the most important problems for an intelligent tutoring system is deciding how to respond when a student asks for help. Responding cooperatively requires an understanding of both what solution path the student is pursuing, and the student's current level of domain knowledge. Andes, an intelligent tutoring system for Newtonian physics, refers to a probabilistic student model to make decisions about responding to help requests. Andes' student model uses a Bayesian network that computes a probabilistic assessment of three kinds of information: (1) the student's general knowledge about physics, (2) the student's specific knowledge about the current problem, and (3) the abstract plans that the student may be pursuing to solve the problem. Using this model, Andes provides feedback and hints tailored to the student's knowledge and goals.
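
A hand-rolled sketch of the kind of probabilistic revision such a student model performs. The network is collapsed here to a single knows-the-rule variable with assumed slip and guess probabilities; Andes' actual Bayesian network over rules, problem facts, and plans is much larger:

```python
# Bayes update of the probability that a student knows a physics rule,
# given evidence that they applied it correctly (or failed to).
# The priors and conditional probabilities are illustrative, not Andes'.
def update_mastery(p_knows, applied_correctly,
                   p_correct_given_knows=0.9,   # 1 - slip probability
                   p_correct_given_not=0.2):    # guess / lucky application
    if applied_correctly:
        num = p_correct_given_knows * p_knows
        den = num + p_correct_given_not * (1 - p_knows)
    else:
        num = (1 - p_correct_given_knows) * p_knows
        den = num + (1 - p_correct_given_not) * (1 - p_knows)
    return num / den

p = 0.5                       # prior from the general-knowledge layer
for outcome in [True, True, False]:
    p = update_mastery(p, outcome)
    print(f"P(knows Newton's 2nd law) = {p:.3f}")
# A tutor could choose hints by thresholding p: low -> detailed hint,
# high -> gentle reminder.
```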

Proceedings Article
01 Jul 1998
TL;DR: This work introduces a methodology for automating the maintenance of domain-specific taxonomies based on natural language text understanding; concept hypotheses are ranked according to credibility, and the most credible ones are selected for assimilation into the domain knowledge base.
Abstract: We introduce a methodology for automating the maintenance of domain-specific taxonomies based on natural language text understanding. A given ontology is incrementally updated as new concepts are acquired from real-world texts. The acquisition process is centered around the linguistic and conceptual "quality" of various forms of evidence underlying the generation and refinement of concept hypotheses. On the basis of the quality of evidence, concept hypotheses are ranked according to credibility and the most credible ones are selected for assimilation into the domain knowledge base.
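
The selection step the abstract describes (rank hypotheses by the quality of their supporting evidence, assimilate the most credible) can be sketched directly. The evidence labels and their weights are invented for the example; the paper's quality calculus is richer:

```python
# Quality-based ranking of concept hypotheses (simplified sketch).
EVIDENCE_WEIGHT = {
    "apposition":     3.0,   # e.g. "the XC-5, a new laser printer"
    "case-frame-fit": 2.0,   # verb frame expects this concept type
    "pattern-match":  1.0,   # weak lexical pattern
}

def rank_hypotheses(hypotheses):
    def credibility(h):
        return sum(EVIDENCE_WEIGHT[e] for e in h["evidence"])
    return sorted(hypotheses, key=credibility, reverse=True)

hypotheses = [
    {"concept": "XC-5 is-a Printer", "evidence": ["apposition", "case-frame-fit"]},
    {"concept": "XC-5 is-a Company", "evidence": ["pattern-match"]},
]
best, *rest = rank_hypotheses(hypotheses)
print("assimilate:", best["concept"])   # the most credible hypothesis wins
```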

Journal Article
TL;DR: In this paper, the authors show systematically why local knowledge has significant developmental potential, why its utilization for development is ambiguous, and why activities based on local knowledge are not necessarily sustainable or socially just.
Abstract: This study shows systematically why local knowledge (often called indigenous knowledge) has significant developmental potential and why its utilization for development is ambiguous. Local knowledge consists of factual knowledge, skills, and capabilities, most of which have some empirical grounding. It is culturally situated and is best understood as a social product. Its practical application in the development context is less a technological problem than a theoretical and political one, as is shown here both generally and with reference to forest-related knowledge. Local knowledge is instrumentalized and idealized by development experts as well as by their critics. But it does not necessarily present itself as a comprehensive knowledge system, and activities based on local knowledge are not necessarily sustainable or socially just. The use of local knowledge for development should not be restricted to the extraction of information, nor should local knowledge be applied simply as a countermodel to Western science.

Book ChapterDOI
TL;DR: This paper presents a new hybrid genetic algorithm for VRPTW that investigates the impact of explicitly using domain knowledge and a priori knowledge about expected solutions during the recombination and mutation phases of the algorithm.
Abstract: A variety of hybrid genetic algorithms has recently been proposed to address the vehicle routing problem with time windows (VRPTW), a problem known to be NP-hard. However, very few genetic-based approaches exploit the implicit knowledge provided by the structure of the intermediate solutions computed during the evolutionary process to explore the solution space. This paper presents a new hybrid genetic algorithm for VRPTW. It investigates the impact of explicitly using domain knowledge and a priori knowledge about the characteristics of expected solutions during the recombination and mutation phases of the algorithm. Basic principles are borrowed from recent hybrid and standard genetic algorithms, and features of well-known heuristics are used to drive the search process. Designed to support time-constrained reasoning tasks, the procedure is intended to be conceptually simple, easy to implement, and able to compute near-optimal solutions quickly. A computational experiment has been conducted to compare the performance of the proposed algorithm with similar and standard techniques.
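
To make the "domain knowledge in the mutation phase" idea concrete, here is a sketch of one knowledge-guided operator: instead of swapping customers at random, a mutation removes a customer and reinserts it where time-window slack is best. A single route, constant travel times, and the slack measure are all toy assumptions, not the paper's operators:

```python
# Knowledge-guided mutation for a time-windowed route (simplified).
import random

CUSTOMERS = {  # customer -> (earliest, latest) service window
    "a": (0, 4), "b": (2, 6), "c": (5, 9), "d": (8, 12),
}
TRAVEL = 2  # constant travel time between consecutive stops (assumption)

def window_violation(route):
    t, violation = 0, 0
    for c in route:
        t += TRAVEL
        early, late = CUSTOMERS[c]
        t = max(t, early)               # wait if arriving early
        violation += max(0, t - late)   # penalty for arriving late
    return violation

def guided_mutation(route):
    route = list(route)
    c = random.choice(route)
    route.remove(c)
    # domain knowledge: try every insertion slot, keep the least-violating one
    return min((route[:i] + [c] + route[i:] for i in range(len(route) + 1)),
               key=window_violation)

random.seed(0)
route = ["d", "a", "c", "b"]          # a poor parent route
mutated = guided_mutation(route)
print(mutated, window_violation(mutated))
```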

Journal ArticleDOI
TL;DR: A new scheme of knowledge encoding in a fuzzy multilayer perceptron (MLP) using rough set-theoretic concepts is described, demonstrating the superiority of the system over the fuzzy and conventional versions of the MLP (involving no initial knowledge).
Abstract: A scheme of knowledge encoding in a fuzzy multilayer perceptron (MLP) using rough set-theoretic concepts is described. Crude domain knowledge is extracted from the data set in the form of rules. The syntax of these rules automatically determines the appropriate number of hidden nodes while the dependency factors are used in the initial weight encoding. The network is then refined during training. Results on classification of speech and synthetic data demonstrate the superiority of the system over the fuzzy and conventional versions of the MLP (involving no initial knowledge).
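
A sketch of the encoding step the abstract describes: each crude rule extracted from the data becomes one hidden node (so the rule syntax fixes the hidden-layer size), and the rule's rough-set dependency factor scales its initial weights before backpropagation refines them. The rule format, factor values, and scaling are illustrative assumptions:

```python
# Rule-based initialization of a small MLP (simplified sketch).
import numpy as np

FEATURES = ["f1", "f2", "f3"]
CLASSES = ["c1", "c2"]
# (antecedent features, consequent class, rough-set dependency factor)
RULES = [({"f1", "f2"}, "c1", 0.8),
         ({"f3"},       "c2", 0.6)]

def init_weights(rules):
    hidden = len(rules)                  # rule syntax fixes the hidden size
    w_in = np.zeros((hidden, len(FEATURES)))
    w_out = np.zeros((len(CLASSES), hidden))
    for h, (antecedent, consequent, beta) in enumerate(rules):
        for f in antecedent:
            w_in[h, FEATURES.index(f)] = beta / len(antecedent)
        w_out[CLASSES.index(consequent), h] = beta
    return w_in, w_out

w_in, w_out = init_weights(RULES)
print(w_in)    # encoded crude domain knowledge; refined later by training
print(w_out)
```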

Journal ArticleDOI
01 Oct 1998
TL;DR: The paper describes the MIKE (Model-based and Incremental Knowledge Engineering) approach for developing knowledge-based systems, which integrates semiformal and formal specification techniques together with prototyping into a coherent framework.
Abstract: The paper describes the MIKE (Model-based and Incremental Knowledge Engineering) approach for developing knowledge-based systems. MIKE integrates semiformal and formal specification techniques together with prototyping into a coherent framework. All activities in the building process of a knowledge-based system are embedded in a cyclic process model. For the semiformal representation we use a hypermedia-based formalism which serves as a communication basis between expert and knowledge engineer during knowledge acquisition. The semiformal knowledge representation is also the basis for formalization, resulting in a formal and executable model specified in the Knowledge Acquisition and Representation Language (KARL). Since KARL is executable, the model of expertise can be developed and validated by prototyping. A smooth transition from a semiformal to a formal specification and further on to design is achieved because all the description techniques rely on the same conceptual model to describe the functional and nonfunctional aspects of the system. Thus, the system is thoroughly documented at different description levels, each of which focuses on a distinct aspect of the entire development effort. Traceability of requirements is supported by linking the different models to each other.

Patent
30 Mar 1998
TL;DR: In this article, a message understanding and response system recognizes and answers messages based on the message writer's intent in unconstrained natural language text messages, which is initialized by manually classifying a training text corpus according to the respondent's policies.
Abstract: A message understanding and response system recognizes and answers messages based on the message writer's intent in unconstrained natural language text messages. The system has a set of knowledge bases with linked domain-specific words, phrases, and regular expressions relating to the domain of the writer and the domain of the respondent. The writer's domain is represented by special-purpose lexicons linked to representations of typical intents. The typical intents are linked to a domain knowledge base of typical and appropriate respondent actions. The system is initialized by manually classifying a training text corpus according to the respondent's policies. A lexical analysis tool with prototypical intents and phrases indicating intents is applied to the training text corpus, which includes the domain-specific characteristics of both the writer and the respondent. The output is an operable knowledge base consisting of a conjunction of keywords used to communicate between the two domains of the writer and the respondent. During automatic operation, the input text is pre-processed to remove irregularities, in a manner similar to how the data in the training text corpus was regularized. Sets of extracted keywords and concepts are matched against the sets of stored, pre-classified keywords and concepts, producing a list of intents. The intents and other extracted features are then mapped to appropriate actions as defined by the system operator. The actions use the common linked domain knowledge terms to formulate a textual reply that is tailored to, and answers, the intent of the writer of the input message.
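
The keyword-to-intent-to-action pipeline the patent describes reduces, at its simplest, to: normalize the message, match extracted keywords against pre-classified intent lexicons, and map the winning intent to a respondent action. The lexicons and actions below are invented examples:

```python
# Toy sketch of intent recognition by keyword matching.
import re

INTENT_LEXICON = {
    "cancel-order":   {"cancel", "stop", "refund"},
    "track-shipment": {"where", "shipped", "tracking", "delivery"},
}
ACTIONS = {
    "cancel-order":   "route to order-cancellation workflow",
    "track-shipment": "reply with tracking-status template",
}

def classify(message):
    words = set(re.findall(r"[a-z]+", message.lower()))   # regularize input
    scores = {intent: len(words & kws) for intent, kws in INTENT_LEXICON.items()}
    intent = max(scores, key=scores.get)
    return intent if scores[intent] > 0 else None

msg = "Where is my package? It shipped last week and I have no delivery update."
intent = classify(msg)
print(intent, "->", ACTIONS.get(intent, "escalate to a human"))
```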

Journal ArticleDOI
TL;DR: The role of knowledge as a primary driver of development is being increasingly recognized in high-tech, service, or traditional industries as mentioned in this paper, however, it is not clear whether managerial approaches based on mindsets rooted in past practice are appropriate for, or capable of, fully realizing the potential value of knowledge within the firm and/or industry.
Abstract: Whether in high-tech, service, or traditional industries, the role of knowledge as a primary driver of development is being increasingly recognized. It is not clear, however, whether managerial approaches based on mindsets rooted in past practice are appropriate for, or capable of, fully realizing the potential value of knowledge within the firm and/or industry. At least three related issues stand in the way of full knowledge utilization: conceptualization and measurement of knowledge capital as a primary organizational asset, the integration of knowledge capital into the strategic management process, and the development of organizational forms and processes that facilitate the use and development of knowledge. While leading-edge firms are already wrestling with these issues, advances in theory and research are needed to help develop appropriate responses and provide frameworks that will help spread these new approaches. In doing so, advances may also be made that allow for the recognition of the central role of collaboration in the knowledge process.

Journal ArticleDOI
TL;DR: A theory of domain knowledge is proposed to define the semantics and composition of generic domain models in the context of requirements engineering and a modeling language and a library of models arranged in families of classes are described.
Abstract: Retrieval, validation, and explanation tools are described for cooperative assistance during requirements engineering and are illustrated by a library system case study. Generic models of applications are reused as templates for modeling and critiquing requirements for new applications. The validation tools depend on a matching process which takes facts describing a new application and retrieves the appropriate generic model from the system library. The algorithms of the matcher, which implement a computational theory of analogical structure matching, are described. A theory of domain knowledge is proposed to define the semantics and composition of generic domain models in the context of requirements engineering. A modeling language and a library of models arranged in families of classes are described. The models represent the basic transaction processing or 'use case' for a class of applications. Critical difference rules are given to distinguish between families and hierarchical levels. Related work and future directions of the domain theory are discussed.

Proceedings ArticleDOI
01 Apr 1998
TL;DR: This work discusses techniques for extracting concepts (abbreviations) from a more informal source of information, file names, and shows by experiment that the proposed techniques allow about 90% of the abbreviations to be found automatically.
Abstract: Decomposing complex software systems into conceptually independent subsystems is a significant software engineering activity which has received considerable research attention. Most of the research in this domain considers the body of the source code, trying to cluster together files which are conceptually related. We discuss techniques for extracting concepts (abbreviations) from a more informal source of information: file names. The task is difficult because nothing indicates where to split the file names into substrings. In general, finding abbreviations would require domain knowledge to identify the concepts that are referred to in a name and intuition to recognize such concepts in abbreviated forms. We show by experiment that the techniques we propose allow about 90% of the abbreviations to be found automatically.
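
The core difficulty the abstract names (nothing marks where to split) can be illustrated with a small search: try every split point and accept a fragment if it matches a dictionary word as a prefix or as that word's consonant skeleton. The dictionary and matching rules are simplified assumptions, not the paper's techniques:

```python
# Recover concept words hidden in a file name like "prntdrv".
DICTIONARY = ["print", "printer", "driver", "buffer", "table"]

def expansions(fragment):
    consonants = lambda w: "".join(c for c in w if c not in "aeiou")
    return [w for w in DICTIONARY
            if w.startswith(fragment) or consonants(w).startswith(fragment)]

def split_name(name, parts=()):
    if not name:
        yield parts
    for i in range(2, len(name) + 1):        # try every prefix of length >= 2
        head, tail = name[:i], name[i:]
        for word in expansions(head):
            yield from split_name(tail, parts + (word,))

for candidate in split_name("prntdrv"):
    print(candidate)    # ('print', 'driver'), ('printer', 'driver')
```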

Journal ArticleDOI
01 Oct 1998
TL;DR: Research into semi-automatic generation of scenarios for validating software-intensive system requirements is reported, which describes a computational mechanism for deriving use cases from object system models, simple rules to link actions in a use case, taxonomies of classes of exceptions which give rise to alternative courses in scenarios.
Abstract: This paper reports research into the semi-automatic generation of scenarios for validating software-intensive system requirements. The research was undertaken as part of the ESPRIT IV 21903 ‘CREWS’ long-term research project. The paper presents the underlying theoretical models of domain knowledge, computational mechanisms and user-driven dialogues needed for scenario generation. It describes how CREWS draws on theoretical results from the ESPRIT III 6353 ‘NATURE’ basic research action, that is, object system models, which are abstractions of the fundamental features of different categories of problem domain. CREWS uses these models to generate normal-course scenarios, then draws on theoretical and empirical research from cognitive science, human-computer interaction, collaborative systems and software engineering to generate alternative courses for these scenarios. The paper describes a computational mechanism for deriving use cases from object system models, simple rules to link actions in a use case, taxonomies of classes of exceptions which give rise to alternative courses in scenarios, and a computational mechanism for the generation of multiple scenarios from a use case specification.

Proceedings Article
01 Jan 1998
TL;DR: WYSIWYM editing is an alternative solution in which the texts used to view and edit the knowledge are generated not by the user but by the system, and each choice directly updates the knowledge base.
Abstract: Many kinds of knowledge-based system would be easier to develop and maintain if domain experts (as opposed to knowledge engineers) were in a position to define and edit the knowledge. From the viewpoint of domain experts, the best medium for defining the knowledge would be a text in natural language; however, natural language input cannot be decoded reliably unless written in controlled languages, which are difficult for domain experts to learn and use. WYSIWYM editing is an alternative solution in which the texts employed to view and edit the knowledge are generated not by the user but by the system. The user can add knowledge by clicking on 'anchors' in the text and choosing from a list of semantic alternatives; each choice directly updates the knowledge base, from which a new text is then generated.
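
The generate-choose-regenerate loop is the whole trick: the user never types text that must be parsed; the system renders unfilled knowledge-base slots as anchors, and every choice updates the KB before the text is regenerated. A minimal sketch with an invented schema and rendering:

```python
# WYSIWYM in miniature: feedback text is generated from the KB, with
# unfilled slots shown as anchors the user clicks to fill.
KB = {"action": "schedule", "patient": None, "time": None}
OPTIONS = {"patient": ["the first dose", "the follow-up visit"],
           "time": ["this week", "next month"]}

def render(kb):
    patient = kb["patient"] or "[patient]"      # anchor shown to the user
    time = kb["time"] or "[time]"
    return f"Schedule {patient} {time}."

def choose(kb, anchor, option_index):
    kb[anchor] = OPTIONS[anchor][option_index]  # direct KB update
    return render(kb)                           # text regenerated, never parsed

print(render(KB))                  # Schedule [patient] [time].
print(choose(KB, "patient", 0))    # Schedule the first dose [time].
print(choose(KB, "time", 1))       # Schedule the first dose next month.
```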

01 Jan 1998
TL;DR: Work on ontologies in AI has largely focused on describing objects, relations, states of affairs, events, and processes in the world so that knowledge can be shared via a standard vocabulary; this paper reviews and connects that work with the parallel effort to identify the ontology of problem-solving methods.
Abstract: Much of the work on ontologies in AI has focused on describing some aspect of reality: objects, relations, states of affairs, events, and processes in the world. A goal is to make knowledge sharable, by encoding domain knowledge using a standard vocabulary based on the ontology. A parallel attempt at identifying the ontology of problem-solving knowledge has a goal of sharable problem-solving methods. For example, when one is dealing with abductive inference problems, the following are some of the terms that occur in the representation of problem-solving methods: hypotheses, explanatory coverage, evidence, likelihood, plausibility, composite hypothesis, etc. Method ontology is, in good part, goal- and method-specific. "Generic Tasks," "Heuristic Classification," "Task-specific Architectures," "Task-method Structures," "Inference Structures" and "Task Structures" are representative bodies of work in the knowledge-systems area that have focused on domain-independent problem-solving methods. However, connections have not been made to work that is explicitly concerned with domain ontologies. Making such connections is the goal of this paper. This paper is part review and part synthesis.

Journal ArticleDOI
TL;DR: The paper re-examines the key issues of knowledge discovery by putting them in the context of the technology of fuzzy sets and reveals several interesting links between fuzzy data mining and fuzzy sets.

Journal ArticleDOI
TL;DR: Important aspects of the application process not commonly encountered in the “toy world” are discussed, including obtaining labeled training data, the difficulties of working with pixel data, and the automatic extraction of higher-level features.
Abstract: Dramatic improvements in sensor and image acquisition technology have created a demand for automated tools that can aid in the analysis of large image databases. We describe the development of JARtool, a trainable software system that learns to recognize volcanoes in a large data set of Venusian imagery. A machine learning approach is used because it is much easier for geologists to identify examples of volcanoes in the imagery than it is to specify domain knowledge as a set of pixel-level constraints. This approach can also provide portability to other domains without the need for explicit reprogramming; the user simply supplies the system with a new set of training examples. We show how the development of such a system requires a completely different set of skills than are required for applying machine learning to “toy world” domains. This paper discusses important aspects of the application process not commonly encountered in the “toy world,” including obtaining labeled training data, the difficulties of working with pixel data, and the automatic extraction of higher-level features.
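
The workflow the abstract argues for (have geologists label examples rather than hand-code pixel-level rules) is the standard supervised-learning recipe. A self-contained toy stand-in, with nearest-centroid classification on raw pixels substituting for JARtool's actual feature extraction and learning machinery:

```python
# Learn-from-examples sketch: fit a trivial classifier to labeled
# synthetic image patches instead of hand-coding detection rules.
import numpy as np

rng = np.random.default_rng(0)

def make_patch(volcano):
    patch = rng.normal(0.3, 0.05, (8, 8))
    if volcano:
        patch[2:6, 2:6] += 0.5        # bright central mound (toy signal)
    return patch.ravel()

# "Geologist labels": 20 volcano patches, 20 background patches.
X_train = np.array([make_patch(v) for v in [True] * 20 + [False] * 20])
y_train = np.array([1] * 20 + [0] * 20)

centroids = {c: X_train[y_train == c].mean(axis=0) for c in (0, 1)}

def predict(patch):
    # nearest-centroid decision over the labeled classes
    return min(centroids, key=lambda c: np.linalg.norm(patch - centroids[c]))

test = make_patch(volcano=True)
print("volcano" if predict(test) == 1 else "background")
# Porting to a new domain means supplying new labeled patches,
# not reprogramming the detector.
```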