
Showing papers on "Domain knowledge published in 1995"


Journal ArticleDOI
TL;DR: The concept of knowledge is complex and its relevance to organization theory has been insufficiently developed, as discussed by the authors; there is current interest in the competitive advantage that knowledge may provide for organizations and in the significance of knowledge workers, organizational competencies and knowledge-intensive firms.
Abstract: There is current interest in the competitive advantage that knowledge may provide for organizations and in the significance of knowledge workers, organizational competencies and knowledge-intensive firms. Yet the concept of knowledge is complex and its relevance to organization theory has been insufficiently developed. The paper offers a review and critique of current approaches, and outlines an alternative. First, common images of knowledge in the organizational literature as embodied, embedded, embrained, encultured and encoded are identified and, to summarize popular writings on knowledge work, a typology of organizations and knowledge types is constructed. However, traditional assumptions about knowledge, upon which most current speculation about organizational knowledge is based, offer a compartmentalized and static approach to the subject. Drawing from recent studies of the impact of new technologies and from debates in philosophy, linguistics, social theory and cognitive science, the second par...

2,126 citations



Journal ArticleDOI
TL;DR: The notion of the ontological level is introduced, intermediate between the epistemological and the conceptual levels discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives.
Abstract: The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the "modelling view" of knowledge acquisition proposed by Clancey, the modelling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behaviour (i.e. the problem-solving expertise) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former subsystem only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning than in the nature of the real world. Recently, however, the potential value of task-independent knowledge bases (or "ontologies") suitable for large-scale integration has been underlined in many ways. In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual levels discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. We then discuss some formal ontological distinctions which may play an important role for this purpose.

1,140 citations


Journal ArticleDOI
TL;DR: A new approach to information science (IS): domain‐analysis, which states that the most fruitful horizon for IS is to study the knowledge‐domains as thought or discourse communities, which are parts of society's division of labor.
Abstract: This is a programmatic article, which formulates a new approach to information science (IS): domain-analysis. This approach states that the most fruitful horizon for IS is to study the knowledge-domains as thought or discourse communities, which are parts of society's division of labor. The article is also a review article, providing a multidisciplinary description of research, illuminating this theoretical view. The first section presents contemporary research in IS, sharing the fundamental viewpoint that IS should be seen as a social rather than as a purely mental discipline. In addition, important predecessors to this view are mentioned and the possibilities as well as the limitations of their approaches are discussed. The second section describes recent transdisciplinary tendencies in the understanding of knowledge. In bordering disciplines to IS, such as educational research, psychology, linguistics, and the philosophy of science, an important new view of knowledge is appearing in the 1990s. This new view of knowledge stresses the social, ecological, and content-oriented nature of knowledge. This is opposed to the more formal, computer-like approaches that dominated in the 1980s. The third section compares domain-analysis to other major approaches in IS, such as the cognitive approach. The final section outlines important problems to be investigated, such as how different knowledge-domains affect the informational value of different subject access points in databases. © 1995 John Wiley & Sons, Inc.

637 citations


Journal ArticleDOI
TL;DR: This paper explores companies' reasons for publishing in the scientific and technical literature; reasons that turn on the need to link with other research organizations, as seen in other areas of technical knowledge exchange.
Abstract: This paper focuses on the movement of scientific and technological knowledge. It explores companies' reasons for publishing in the scientific and technical literature; reasons that turn on the need to link with other research organizations. The analysis begins by establishing that firms do indeed publish. Such publishing mediates links with other organizations, serving to signal the presence of tacit knowledge and to build the technical reputation necessary to engage in the barter-governed exchange of scientific and technical knowledge. Similar processes are seen in other areas of technical knowledge exchange. Copyright 1995 by Oxford University Press.

408 citations


Journal ArticleDOI
TL;DR: In this article, an approach for extracting the 3D shape of buildings from high-resolution Digital Elevation Models (DEMs), having a grid resolution between 0.5 and 5 m, is presented.
Abstract: This paper deals with an approach for extracting the 3D shape of buildings from high-resolution Digital Elevation Models (DEMs) with a grid resolution between 0.5 and 5 m. The steps of the proposed procedure make increasing use of explicit domain knowledge, specifically geometric constraints in the form of parametric and prismatic building models. A new MDL-based approach that generates a polygonal ground plan from segment boundaries is given. The knowledge used is object-related, making adaptation to data of different density and resolution simple and transparent.

305 citations


Journal ArticleDOI
TL;DR: This paper presents work on using domain knowledge to parse news video programs and to index them on the basis of their visual content; experimental results are discussed in detail.
Abstract: Automatic construction of content-based indices for video source material requires general semantic interpretation of both images and their accompanying sounds; but such a broadly-based semantic analysis is beyond the capabilities of the current technologies of machine vision and audio signal analysis. However, if one can assume a limited and well-demarcated body of domain knowledge for describing the content of a body of video, then it becomes easier to interpret a video source in terms of that domain knowledge. This paper presents our work on using domain knowledge to parse news video programs and to index them on the basis of their visual content. Models based on both the spatial structure of image frames and the temporal structure of the entire program have been developed for news videos, along with algorithms that apply these models by locating and identifying instances of their elements. Experimental results are also discussed in detail to evaluate both the models and the algorithms that use them. Finally, proposals for future work are summarized.

223 citations


Book
01 Jan 1995
TL;DR: The central premise of this book, that the development of LKBS should be centred on the elaboration of explicit models of law, is well demonstrated, and it is an extremely worthwhile read for anyone interested in the theoretical foundations of AI and law and knowledge representation in particular.
Abstract: Although the field of Artificial Intelligence and Law has matured considerably, there is still no comprehensive view on the field, its achievements, and no agenda or clear direction for research. Moreover, present approaches to the development of legal knowledge-based systems (LKBS) - such as the use of rule-based systems, case-based systems, or logics - have obtained somewhat limited theoretical and practical results. This book provides a critical overview of the field by describing present approaches and analysing their problems in detail. A new "modelling approach" to legal knowledge engineering is proposed to address these problems and provide an agenda for research and development. This approach applies recent developments in knowledge modelling to the law domain. The book's central premise, that the development of LKBS should be centred on the elaboration of explicit models of law, is well demonstrated; it is an extremely worthwhile read for anyone interested in the theoretical foundations of AI and law and knowledge representation in particular.

202 citations


Journal ArticleDOI
TL;DR: This work compares configuration of the board-game method to that of a chronological-backtracking problem-solving method for the same application tasks (for example, towers of Hanoi and the Sisyphus room-assignment problem), and examines how method designers can specialize problem-solving methods by making ontological commitments to certain classes of tasks.

179 citations


Journal ArticleDOI
TL;DR: This paper shows how PROTEGE-II can be applied to the task of providing protocol-based decision support in the domain of treating HIV-infected patients, and shows that the goals of reusability and easy maintenance can be achieved.

170 citations


Book ChapterDOI
David Heckerman1
01 Jan 1995
TL;DR: This chapter discusses a knowledge representation, called a Bayesian network, that allows one to learn uncertain relationships in a domain by combining expert domain knowledge and statistical data.
Abstract: Publisher Summary This chapter discusses a knowledge representation, called a Bayesian network, that allows one to learn uncertain relationships in a domain by combining expert domain knowledge and statistical data. A Bayesian network is a graphical representation of uncertain knowledge that most people find easy to construct directly from domain knowledge. In addition, the representation has formal probabilistic semantics, making it suitable for statistical manipulation. Over the past decade, the Bayesian network has become a popular representation for encoding uncertain expert knowledge in expert systems. More recently, researchers have developed methods for learning Bayesian networks from a combination of expert knowledge and data. The techniques that have been developed are new and still evolving, but they have been shown to be remarkably effective in some domains. Learning using Bayesian networks is similar to that using neural networks. The process employing Bayesian networks, however, has two important advantages: (1) one can easily encode expert knowledge in a Bayesian network, and use this knowledge to increase the efficiency and accuracy of learning; and (2) the nodes and arcs in learned Bayesian networks often correspond to recognizable distinctions and causal relationships.
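The combination the chapter describes, expert knowledge supplying a prior that statistical data then refines, can be sketched minimally for a single network parameter. The pseudo-counts and observations below are invented for illustration, not taken from the chapter:

```python
# Sketch: learning a binary node's probability table by combining an
# expert-supplied Dirichlet prior (pseudo-counts) with observed data.
from collections import Counter

def posterior_cpt(prior_counts, observations):
    """prior_counts: expert pseudo-counts, e.g. {"T": 3, "F": 1}
    observations: observed states, e.g. ["T", "F", "T"]
    Returns the posterior-mean probability for each state."""
    counts = Counter(prior_counts)   # start from the expert's prior
    counts.update(observations)      # add the data counts
    total = sum(counts.values())
    return {state: n / total for state, n in counts.items()}

cpt = posterior_cpt({"T": 3, "F": 1}, ["T", "F", "T", "T"])  # {"T": 0.75, "F": 0.25}
```

A stronger prior (larger pseudo-counts) makes the expert's belief harder for the data to override, which is one way such knowledge "increases the efficiency and accuracy of learning."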

Book
01 Jan 1995
TL;DR: This text provides a guide to the current state of the art in building and sharing very large knowledge bases, and is intended to act as a catalyst to future research, development and applications.
Abstract: In the early days of artificial intelligence it was widely believed that powerful computers would, in the future, enable mankind to solve many real-world problems through the use of very general inference procedures and very little domain-specific knowledge. With the benefit of hindsight, this view can now be called quite naive. The field of expert systems, which developed during the early 1970s, embraced the paradigm that "Knowledge is Power": even very fast computers require very large amounts of very specific knowledge to solve non-trivial problems. Thus, the field of large knowledge bases has emerged. This book presents progress on building and sharing very large-scale knowledge bases. Progress has been made in specific scientific domains, including molecular biology, where large knowledge bases have become important tools for researchers. Another development is the attention being paid to structuring large knowledge bases. The use of a carefully developed set of concepts, called an "ontology", is becoming almost standard practice. This text provides a guide to the current state of the art in building and sharing very large knowledge bases, and is intended to act as a catalyst to future research, development and applications.

Book ChapterDOI
01 Jan 1995
TL;DR: This paper extends these results and draws out some of their implications for the design of search algorithms and for the construction of useful representations, and focuses attention on tailoring algorithms and representations to particular problem classes by exploiting domain knowledge.
Abstract: The past twenty years has seen a rapid growth of interest in stochastic search algorithms, particularly those inspired by natural processes in physics and biology. Impressive results have been demonstrated on complex practical optimisation problems and related search applications taken from a variety of fields, but the theoretical understanding of these algorithms remains weak. This results partly from the insufficient attention that has been paid to results showing certain fundamental limitations on universal search algorithms, including the so-called “No Free Lunch” Theorem. This paper extends these results and draws out some of their implications for the design of search algorithms, and for the construction of useful representations. The resulting insights focus attention on tailoring algorithms and representations to particular problem classes by exploiting domain knowledge. This highlights the fundamental importance of gaining a better theoretical grasp of the ways in which such knowledge may be systematically exploited as a major research agenda for the future.
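The flavour of the "No Free Lunch" results the paper builds on can be shown with a toy computation: averaged over every Boolean function on a small domain, two different fixed search orders do equally well, so any advantage must come from knowledge about the problem class. The three-point domain below is invented purely for illustration:

```python
# Toy No-Free-Lunch check: average over ALL functions f: {0,1,2} -> {0,1}
# and compare two non-adaptive search orders after k evaluations.
from itertools import product

points = [0, 1, 2]
functions = list(product([0, 1], repeat=len(points)))  # all 8 possible f

def mean_best(order, k):
    """Mean, over every function, of the best value found in k evaluations."""
    return sum(max(f[x] for x in order[:k]) for f in functions) / len(functions)

a = mean_best([0, 1, 2], 2)  # algorithm A's visiting order
b = mean_best([2, 0, 1], 2)  # algorithm B's visiting order
# a == b: without restricting the function class, the orders are equivalent.
```

Restricting `functions` to a structured subclass breaks the symmetry, which is exactly the paper's point about exploiting domain knowledge.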

Book
01 Jan 1995
TL;DR: The Knowledge Acquisition and Representation Language (KARL) combines a description of a knowledge based system at the conceptual level (a so called model of expertise) with a description at a formal and executable level that allows the precise and unique specification of the functionality of aknowledge based system independent of any implementation details.
Abstract: The Knowledge Acquisition and Representation Language (KARL) combines a description of a knowledge based system at the conceptual level (a so called model of expertise) with a description at a formal and executable level. Thus, KARL allows the precise and unique specification of the functionality of a knowledge based system independent of any implementation details. A KARL model of expertise contains the description of domain knowledge, inference knowledge, and procedural control knowledge. For capturing these different types of knowledge, KARL provides corresponding modeling primitives based on Frame Logic and Dynamic Logic. A declarative semantics for a complete KARL model of expertise is given by a combination of these two types of logic. In addition, an operational definition of this semantics, which relies on a fixpoint approach, is given. This operational semantics defines the basis for the implementation of the KARL interpreter, which includes appropriate algorithms for efficiently executing KARL specifications. This enables the evaluation of KARL specifications by means of testing.

Journal ArticleDOI
01 May 1995-System
TL;DR: This paper will define and illustrate the various components of task knowledge and attempt to show the functional relationship between task knowledge and autonomous learning.

Journal ArticleDOI
TL;DR: A general framework for a living design memory is developed, a design memory tool is built, and the tool is deployed in a large software development organization to help ensure that its knowledge evolves as necessary.
Abstract: We identify an important type of software design knowledge that we call community-specific folklore and discuss problems with current approaches to managing it. We developed a general framework for a living design memory, built a design memory tool, and deployed the tool in a large software development organization. The tool effectively disseminates knowledge relevant to local software design practice. It is embedded in the organizational process to help ensure that its knowledge evolves as necessary. This work illustrates important lessons in building knowledge management systems, integrating novel technology into organizational practice, and carrying out research-development partnerships.

Journal ArticleDOI
TL;DR: A metaposition allows for inquiry of clinical knowledge, inviting an expansion of the traditional medical epistemology, provided that relevant criteria for scientific knowledge within this field are developed and applied.
Abstract: The traditional medical epistemology, resting on a biomedical paradigmatic monopoly, fails to display an adequate representation of medical knowledge. Clinical knowledge, including the complexities of human interaction, is not available for inquiry by means of biomedical approaches, and consequently is denied legitimacy within a scientific context. A gap results between medical research and clinical practice. Theories of knowledge, especially the concept of tacit knowing, seem suitable for description and discussion of clinical knowledge, commonly denoted “the art of medicine.” A metaposition allows for inquiry of clinical knowledge, inviting an expansion of the traditional medical epistemology, provided that relevant criteria for scientific knowledge within this field are developed and applied. The consequences of such approaches are discussed.

Journal ArticleDOI
TL;DR: The nature of knowledge is explored in this paper, with support given to the constructivist perspective; types of knowledge include propositional, professional craft and personal knowledge.

Proceedings ArticleDOI
02 Dec 1995
TL;DR: The advantages of using domain knowledge within the discovery process are highlighted by providing results from the application of the STRIP algorithm in the actuarial domain.
Abstract: The ideal situation for a Data Mining or Knowledge Discovery system would be for the user to be able to pose a query of the form “Give me something interesting that could be useful” and for the system to discover some useful knowledge for the user. But such a system would be unrealistic, as databases in the real world are very large and so it would be too inefficient to be workable. So the role of the human within the discovery process is essential. Moreover, the measure of what is meant by “interesting to the user” is dependent on the user as well as the domain within which the Data Mining system is being used. In this paper we discuss the use of domain knowledge within Data Mining. We define three classes of domain knowledge: Hierarchical Generalization Trees (HG-Trees), Attribute Relationship Rules (AR-rules) and Environment-Based Constraints (EBC). We discuss how each one of these types of domain knowledge is incorporated into the discovery process within the EDM (Evidential Data Mining) framework for Data Mining proposed earlier by the authors [ANAN94], and in particular within the STRIP (Strong Rule Induction in Parallel) algorithm [ANAN95] implemented within the EDM framework. We highlight the advantages of using domain knowledge within the discovery process by providing results from the application of the STRIP algorithm in the actuarial domain.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the role of practical experience in knowledge restructuring in medical student, clerk, intern, and registrar development and how formal and informal, classroom and experiential learning contribute to this process.

Journal ArticleDOI
TL;DR: This paper presents an approach to learning from situated, interactive tutorial instruction within an ongoing agent that combines a form of explanation-based learning that is situated for each instruction with a full suite of contextually guided responses to incomplete explanations.
Abstract: This paper presents an approach to learning from situated, interactive tutorial instruction within an ongoing agent. Tutorial instruction is a flexible (and thus powerful) paradigm for teaching tasks because it allows an instructor to communicate whatever types of knowledge an agent might need in whatever situations might arise. To support this flexibility, however, the agent must be able to learn multiple kinds of knowledge from a broad range of instructional interactions. Our approach, called situated explanation, achieves such learning through a combination of analytic and inductive techniques. It combines a form of explanation-based learning that is situated for each instruction with a full suite of contextually guided responses to incomplete explanations. The approach is implemented in an agent called INSTRUCTO-SOAR that learns hierarchies of new tasks and other domain knowledge from interactive natural language instructions. INSTRUCTO-SOAR meets three key requirements of flexible instructability that distinguish it from previous systems: (1) it can take known or unknown commands at any instruction point; (2) it can handle instructions that apply to either its current situation or to a hypothetical situation specified in language (as in, for instance, conditional instructions); and (3) it can learn, from instructions, each class of knowledge it uses to perform tasks.

Posted Content
TL;DR: In this article, an agent called Instructo-Soar learns hierarchies of new tasks and other domain knowledge from interactive natural language instructions using a combination of analytic and inductive techniques.
Abstract: This paper presents an approach to learning from situated, interactive tutorial instruction within an ongoing agent. Tutorial instruction is a flexible (and thus powerful) paradigm for teaching tasks because it allows an instructor to communicate whatever types of knowledge an agent might need in whatever situations might arise. To support this flexibility, however, the agent must be able to learn multiple kinds of knowledge from a broad range of instructional interactions. Our approach, called situated explanation, achieves such learning through a combination of analytic and inductive techniques. It combines a form of explanation-based learning that is situated for each instruction with a full suite of contextually guided responses to incomplete explanations. The approach is implemented in an agent called Instructo-Soar that learns hierarchies of new tasks and other domain knowledge from interactive natural language instructions. Instructo-Soar meets three key requirements of flexible instructability that distinguish it from previous systems: (1) it can take known or unknown commands at any instruction point; (2) it can handle instructions that apply to either its current situation or to a hypothetical situation specified in language (as in, for instance, conditional instructions); and (3) it can learn, from instructions, each class of knowledge it uses to perform tasks.

Journal ArticleDOI
TL;DR: A prototype system for querying distributed medical multimedia databases by both image content and alphanumeric content is validated; with rules derived from application and domain knowledge, approximate and conceptual queries may be answered.

Journal ArticleDOI
TL;DR: This research demonstrates the role of application domain knowledge in the processes used to comprehend computer programs, proposing a key role for knowledge of the application domain under examination and arguing that programmers use more top-down comprehension processes when they are familiar with the application domain.
Abstract: The field of software has, to date, focused almost exclusively on application-independent approaches. In this research, we demonstrate the role of application domain knowledge in the processes used to comprehend computer programs. Our research sought to reconcile two apparently conflicting theories of computer program comprehension by proposing a key role for knowledge of the application domain under examination. We argue that programmers use more top-down comprehension processes when they are familiar with the application domain. When the application domain is unfamiliar, programmers use processes that are more bottom-up in nature. We conducted a protocol analysis study of 24 professional programmers comprehending programs in familiar and unfamiliar application domains. Our findings confirm our thesis.

Proceedings ArticleDOI
20 Aug 1995
TL;DR: In this paper, a sufficient condition is provided for a knowledge-based program to be represented in a unique way in a given context, which applies to many cases of interest, and covers many of the knowledge- based programs considered in the literature.
Abstract: Reasoning about activities in a distributed computer system at the level of the knowledge of individuals and groups allows us to abstract away from many concrete details of the system we are considering. In this paper, we make use of two notions introduced in our recent book to facilitate designing and reasoning about systems in terms of knowledge. The first notion is that of a knowledge-based program. A knowledge-based program is a syntactic object: a program with tests for knowledge. The second notion is that of a context, which captures the setting in which a program is to be executed. In a given context, a standard program (one without tests for knowledge) is represented by (i.e., corresponds in a precise sense to) a unique system. A knowledge-based program, on the other hand, may be represented by no system, one system, or many systems. In this paper, we provide a sufficient condition for a knowledge-based program to be represented in a unique way in a given context. This condition applies to many cases of interest, and covers many of the knowledge-based programs considered in the literature. We also completely characterize the complexity of determining whether a given knowledge-based program has a unique representation, or any representation at all, in a given finite-state context.
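The "test for knowledge" at the heart of a knowledge-based program follows the standard possible-worlds definition: an agent knows a fact exactly when it holds in every global state the agent cannot distinguish from the actual one. The bit-transmission-style states below are a hypothetical illustration, not the paper's formalism:

```python
# Sketch of a knowledge-based program's test for knowledge.
def knows(fact, indistinguishable_states):
    """The agent knows `fact` iff it holds in every global state the agent
    considers possible given its local view."""
    return all(fact(s) for s in indistinguishable_states)

# A knowledge-based program fragment: "resend until you KNOW it was delivered".
def sender_action(states):
    return "stop" if knows(lambda s: s["delivered"], states) else "resend"

certain = [{"delivered": True}]                          # delivery beyond doubt
uncertain = [{"delivered": True}, {"delivered": False}]  # delivery still in doubt
```

The representation question the paper studies arises because which states are `indistinguishable` depends on the run of the system, which in turn depends on the actions the program prescribes.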

Book
01 Oct 1995
TL;DR: In this book, the authors present a knowledge-based approach to machining process planning, built on knowledge representations and reasoning systems, and present an object-oriented knowledge-based inspection process planner.
Abstract: Preface. Introduction. Knowledge representations and reasoning systems. Knowledge-based systems approach to process planning. Feature-based modelling for process planning. Knowledge-based process planning for machining. Object-oriented knowledge-based inspection process planner. Knowledge-based assembly planning. Next generation intelligent manufacturing systems. Index.

Journal ArticleDOI
TL;DR: This article investigated the relative contribution and trade-off effects of children's knowledge and reading skill in text comprehension in a single study and found that reading skill contributes to comprehension independent of domain knowledge.
Abstract: Although the concept of general reading skill has been assumed to be the primary contribution to comprehension, a demonstration that reading skill contributes to comprehension independent of domain knowledge has been lacking. This research investigates the relative contribution and trade‐off effects of children's knowledge and reading skill in text comprehension in a single study. Children in Grades 4 through 7, grouped as high or low reading skill and high or low knowledge on the basis of a domain‐specific topic, participated in this study. Comprehension was measured in two parallel texts: one domain specific and one domain general. The results suggested that domain knowledge and reading skill can be traded in order to achieve similar levels of comprehension. Reading skill compensates for deficient knowledge and specific knowledge compensates for deficient reading ability.

Book ChapterDOI
21 Sep 1995
TL;DR: Past research in cartographic generalization has shown that algorithmic methods are well suited to handle narrow tasks, but appear to have limited potential to solve the entire generalization process comprehensively; knowledge acquisition forms the major bottleneck to progress of knowledge-based techniques.
Abstract: Past research in cartographic generalization has shown that algorithmic methods are well suited to handle narrow tasks, but appear to have limited potential to solve the entire generalization process comprehensively. Attempts to use systems based on explicit knowledge representation (e.g., rule-based or expert systems) also had relatively little success. The major limiting factor to explicit knowledge systems in generalization is the scarcity of formalized knowledge available. That is, knowledge acquisition (KA) forms the major bottleneck to progress of knowledge-based techniques.

Journal ArticleDOI
TL;DR: A conceptual model and a framework for experimenting with it are developed and a system, GASP (Geometric Animation System, Princeton), which implements this model, which allows quick generation of 3D geometric algorithm visualizations and provides a visual debugging facility for geometric computing.
Abstract: Investigates the visualization of geometric algorithms. We discuss how limiting the domain makes it possible to create a system that enables others to use it easily. Knowledge about the domain can be very helpful in building a system which automates large parts of the user's task. A system can be designed to isolate the user from any concern about how graphics is done. The application need only specify "what" happens and need not be concerned with "how" to make it happen on the screen. We develop a conceptual model and a framework for experimenting with it. We also present a system, GASP (Geometric Animation System, Princeton), which implements this model. GASP allows quick generation of 3D geometric algorithm visualizations, even for highly complex algorithms. It also provides a visual debugging facility for geometric computing. We show the utility of GASP by presenting a variety of examples.