
Showing papers on "Domain knowledge" published in 1999


Journal ArticleDOI
TL;DR: The authors argue that knowledge is a tool of knowing, that knowing is an aspect of our interaction with the social and physical world, and that the interplay of knowledge and knowing can generate new knowledge and new ways of knowing.
Abstract: Much current work on organizational knowledge, intellectual capital, knowledge-creating organizations, knowledge work, and the like rests on a single, traditional understanding of the nature of knowledge. We call this understanding the "epistemology of possession," since it treats knowledge as something people possess. Yet, this epistemology cannot account for the knowing found in individual and group practice. Knowing as action calls for an "epistemology of practice." Moreover, the epistemology of possession tends to privilege explicit over tacit knowledge, and knowledge possessed by individuals over that possessed by groups. Current work on organizations is limited by this privileging and by the scant attention given to knowing in its own right. Organizations are better understood if explicit, tacit, individual and group knowledge are treated as four distinct and coequal forms of knowledge (each doing work the others cannot), and if knowledge and knowing are seen as mutually enabling (not competing). We hold that knowledge is a tool of knowing, that knowing is an aspect of our interaction with the social and physical world, and that the interplay of knowledge and knowing can generate new knowledge and new ways of knowing. We believe this generative dance between knowledge and knowing is a powerful source of organizational innovation. Harnessing this innovation calls for organizational and technological infrastructures that support the interplay of knowledge and knowing. Ultimately, these concepts make possible a more robust framing of such epistemologically-centered concerns as core competencies, the management of intellectual capital, etc. We explore these views through three brief case studies drawn from recent research.

2,444 citations


Book
17 Dec 1999
TL;DR: The CommonKADS methodology, developed over the last decade by an industry-university consortium led by the authors, is used throughout the book and makes as much use as possible of the new UML notation standard.
Abstract: The book covers in an integrated fashion the complete route from corporate knowledge management, through knowledge analysis and engineering, to the design and implementation of knowledge-intensive information systems. The disciplines of knowledge engineering and knowledge management are closely tied. Knowledge engineering deals with the development of information systems in which knowledge and reasoning play pivotal roles. Knowledge management, a newly developed field at the intersection of computer science and management, deals with knowledge as a key resource in modern organizations. Managing knowledge within an organization is inconceivable without the use of advanced information systems; the design and implementation of such systems pose great organizational as well as technical challenges. The CommonKADS methodology, developed over the last decade by an industry-university consortium led by the authors, is used throughout the book. CommonKADS makes as much use as possible of the new UML notation standard. Beyond information systems applications, all software engineering and computer systems projects in which knowledge plays an important role stand to benefit from the CommonKADS methodology.

1,720 citations


Journal ArticleDOI
TL;DR: A critical review of the literature on knowledge management argues for a community‐based model of knowledge management for interactive innovation and contrasts this with the cognitive‐based view that underpins many IT‐led knowledge management initiatives.
Abstract: Begins with a critical review of the literature on knowledge management, arguing that its focus on IT to create a network structure may limit its potential for encouraging knowledge sharing across social communities. Two cases of interactive innovation are contrasted. One focused almost entirely on using IT (intranet) for knowledge sharing, resulting in a plethora of independent intranets which reinforced existing organizational and social boundaries with electronic “fences”. In the other, while IT was used to provide a network to encourage sharing, there was also recognition of the importance of face‐to‐face interaction for sharing tacit knowledge. The emphasis was on encouraging active networking among dispersed communities, rather than relying on IT networks. Argues for a community‐based model of knowledge management for interactive innovation and contrasts this with the cognitive‐based view that underpins many IT‐led knowledge management initiatives.

882 citations


Book
21 Dec 1999
TL;DR: This book discusses knowledge management in the context of a knowledge base, with a focus on the building blocks of knowledge management.
Abstract: Managing Knowledge: The Challenge. The Company's Knowledge Base. Building Blocks of Knowledge Management. Defining Knowledge Goals. Identifying Knowledge. Acquiring Knowledge. Developing Knowledge. Sharing and Distributing Knowledge. Using Knowledge. Preserving Knowledge. Measuring Knowledge. Incorporating Knowledge Management. Getting Started. First Experiences of Implementation. Appendices. Notes. Bibliography. Index.

696 citations


Journal ArticleDOI
Ilkka Tuomi
05 Jan 1999
TL;DR: The reversed hierarchy of knowledge is shown to lead to a different approach in developing information systems that support knowledge management and organizational memory, and this difference may have major implications for organizational flexibility and renewal.
Abstract: In knowledge management literature it is often pointed out that it is important to distinguish between data, information and knowledge. The generally accepted view sees data as simple facts that become information as data is combined into meaningful structures, which subsequently become knowledge as meaningful information is put into a context and when it can be used to make predictions. This view sees data as a prerequisite for information, and information as a prerequisite for knowledge. I explore the conceptual hierarchy of data, information and knowledge, showing that data emerges only after we have information, and that information emerges only after we already have knowledge. The reversed hierarchy of knowledge is shown to lead to a different approach in developing information systems that support knowledge management and organizational memory. It is also argued that this difference may have major implications for organizational flexibility and renewal.

583 citations


Proceedings ArticleDOI
01 Jul 1999
TL;DR: Cognitive modeling applications in advanced character animation and automated cinematography are demonstrated, allowing behaviors to be specified more naturally and intuitively, more succinctly and at a much higher level of abstraction than would otherwise be possible.
Abstract: Recent work in behavioral animation has taken impressive steps toward autonomous, self-animating characters for use in production animation and interactive games. It remains difficult, however, to direct autonomous characters to perform specific tasks. This paper addresses the challenge by introducing cognitive modeling. Cognitive models go beyond behavioral models in that they govern what a character knows, how that knowledge is acquired, and how it can be used to plan actions. To help build cognitive models, we develop the cognitive modeling language CML. Using CML, we can imbue a character with domain knowledge, elegantly specified in terms of actions, their preconditions and their effects, and then direct the character’s behavior in terms of goals. Our approach allows behaviors to be specified more naturally and intuitively, more succinctly and at a much higher level of abstraction than would otherwise be possible. With cognitively empowered characters, the animator need only specify a behavior outline or “sketch plan” and, through reasoning, the character will automatically work out a detailed sequence of actions satisfying the specification. We exploit interval methods to integrate sensing into our underlying theoretical framework, thus enabling our autonomous characters to generate action plans even in highly complex, dynamic virtual worlds. We demonstrate cognitive modeling applications in advanced character animation and automated cinematography.

465 citations
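
The action-specification style described above, preconditions and effects with behavior directed by goals, can be sketched in a few lines of Python. This is not the CML language itself; the facts, actions, and the naive depth-limited search below are invented purely to illustrate how a character could work out a detailed action sequence from a goal outline.

```python
# Minimal sketch of precondition/effect action specifications and goal-directed
# planning; the facts and actions are invented, and this is not CML itself.

ACTIONS = [
    {"name": "walk_to_door", "pre": {"at_desk"}, "add": {"at_door"}, "del": {"at_desk"}},
    {"name": "open_door",    "pre": {"at_door"}, "add": {"door_open"}, "del": set()},
    {"name": "exit_room",    "pre": {"at_door", "door_open"}, "add": {"outside"}, "del": set()},
]

def plan(state, goal, depth=6):
    """Depth-limited forward search for an action sequence that satisfies the goal."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for act in ACTIONS:
        if act["pre"] <= state:                          # preconditions hold?
            new_state = (state - act["del"]) | act["add"]
            if new_state == state:                       # skip actions that change nothing
                continue
            rest = plan(new_state, goal, depth - 1)
            if rest is not None:
                return [act["name"]] + rest
    return None

# The animator only supplies the goal; the character reasons out the detailed steps.
print(plan({"at_desk"}, {"outside"}))
# -> ['walk_to_door', 'open_door', 'exit_room']
```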


Journal ArticleDOI
TL;DR: An overview of common knowledge discovery tasks and approaches to solve these tasks is provided, and a feature classification scheme that can be used to study knowledge and data mining software is proposed.
Abstract: Knowledge discovery in databases is a rapidly growing field, whose development is driven by strong research interests as well as urgent practical, social, and economical needs. While in the last few years knowledge discovery tools have been used mainly in research environments, sophisticated software products are now rapidly emerging. In this paper, we provide an overview of common knowledge discovery tasks and approaches to solve these tasks. We propose a feature classification scheme that can be used to study knowledge and data mining software. This scheme is based on the software's general characteristics, database connectivity, and data mining characteristics. We then apply our feature classification scheme to investigate 43 software products, which are either research prototypes or commercially available. Finally, we specify features that we consider important for knowledge discovery software to possess in order to accommodate its users effectively, as well as issues that are either not addressed or insufficiently solved yet.

427 citations
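
As a rough illustration of the proposed feature classification scheme (general characteristics, database connectivity, and data mining characteristics), the Python sketch below groups a product's features under those three axes. The products and feature names are hypothetical and are not taken from the 43 surveyed tools.

```python
# Illustrative sketch of a three-axis feature classification scheme for KDD
# software; the feature names and products below are invented.
SCHEME = {
    "general":         ["platforms", "gui", "scripting"],
    "db_connectivity": ["odbc", "flat_files", "sampling"],
    "mining":          ["classification", "clustering", "association_rules"],
}

# Hypothetical product descriptions keyed by the same feature names.
PRODUCTS = {
    "ToolA": {"gui", "odbc", "classification", "clustering"},
    "ToolB": {"scripting", "flat_files", "association_rules"},
}

def profile(product_features):
    """Group a product's features under the scheme's three axes."""
    return {axis: sorted(f for f in feats if f in product_features)
            for axis, feats in SCHEME.items()}

for name, feats in PRODUCTS.items():
    print(name, profile(feats))
```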


Proceedings Article
01 Aug 1999
TL;DR: An overview of approaches for ontologies and problem-solving methods is given; both types of components can be viewed as complementary entities that can be used to configure new knowledge systems from existing, reusable components.
Abstract: Ontologies and problem-solving methods are promising candidates for reuse in Knowledge Engineering. Ontologies define domain knowledge at a generic level, while problem-solving methods specify generic reasoning knowledge. Both types of components can be viewed as complementary entities that can be used to configure new knowledge systems from existing, reusable components. In this paper, we give an overview of approaches for ontologies and problem-solving methods.

418 citations
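
The reuse idea (generic reasoning knowledge configured with separately supplied domain knowledge) can be illustrated with a toy sketch. The tiny "ontology" and the scoring heuristic below are invented; they only show how a domain-independent problem-solving method might be parameterized by an ontology.

```python
# Sketch of the reusable-components idea: a generic problem-solving method
# (simple heuristic classification) configured with a small, invented ontology.

# Domain ontology: solution classes and the features that characterize them.
ONTOLOGY = {
    "flu":     {"fever", "cough"},
    "allergy": {"sneezing", "itchy_eyes"},
}

def classify(observations, ontology):
    """Generic PSM: rank solution classes by overlap with observed features.
    The method knows nothing about medicine; all domain knowledge is in `ontology`."""
    scores = {cls: len(feats & observations) for cls, feats in ontology.items()}
    return max(scores, key=scores.get)

print(classify({"fever", "cough"}, ONTOLOGY))   # -> 'flu'
```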


BookDOI
01 Jan 1999
TL;DR: This book presents toolkits for the knowledge networker, the knowledge team, the knowledge-based enterprise, and the interprise, and describes how they will help in networking knowledge globally.
Abstract: Setting the Context: The networked knowledge economy. Identifying Unbounded Opportunities: Knowledge: the strategic imperative; Technology: the knowledge enhancer; Virtualization: networking knowledge globally. Toolkits for Tomorrow: The knowledge networker's toolkit; The knowledge team's toolkit; Toolkit for the knowledge-based enterprise; The interprise toolkit. Pathways to prosperity: The public policy agenda; Forward to the future. Postscript. References. Index.

386 citations


Journal ArticleDOI
TL;DR: The relation between the semantic approach, in which knowledge is represented by a state space and information partitions, and the syntactic approach, in which knowledge is embodied in sentences constructed according to certain syntactic rules, is examined, showing that the two are in a sense equivalent.
Abstract: Formal Interactive Epistemology deals with the logic of knowledge and belief when there is more than one agent or “player.” One is interested not only in each person's knowledge about substantive matters, but also in his knowledge about the others' knowledge. This paper examines two parallel approaches to the subject. The first is the semantic approach, in which knowledge is represented by a space Ω of states of the world, together with partitions ℐi of Ω for each player i; the atom of ℐi containing a given state ω of the world represents i's knowledge at that state – the set of those other states that i cannot distinguish from ω. The second is the syntactic approach, in which knowledge is embodied in sentences constructed according to certain syntactic rules. This paper examines the relation between the two approaches, and shows that they are in a sense equivalent. In game theory and economics, the semantic approach has heretofore been most prevalent. A question that often arises in this connection is whether, in what sense, and why the space Ω and the partitions ℐi can be taken as given and commonly known by the players. An answer to this question is provided by the syntactic approach.

361 citations
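
The semantic approach summarized above has a direct computational reading: a finite state space, a partition for player i, and the induced knowledge operator built from the atoms of that partition. The Python sketch below implements that definition for an invented four-state example; the states and the event are illustrative only.

```python
# Sketch of the semantic approach: a state space, player i's information
# partition, and the induced knowledge operator K_i(E) = {w : I_i(w) is a subset of E}.
OMEGA = {"w1", "w2", "w3", "w4"}

# Player i cannot distinguish w1 from w2, nor w3 from w4.
PARTITION_I = [{"w1", "w2"}, {"w3", "w4"}]

def atom(w, partition):
    """The cell of the partition containing state w (player i's information at w)."""
    return next(cell for cell in partition if w in cell)

def knows(event, partition):
    """States at which player i knows the event: every state i considers possible lies in it."""
    return {w for w in OMEGA if atom(w, partition) <= event}

event = {"w1", "w2", "w3"}
print(knows(event, PARTITION_I))
# the event is known exactly at w1 and w2; at w3 player i cannot rule out w4
```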


Journal ArticleDOI
TL;DR: An integrated view on knowledge management and networking being a very powerful combination for the future of knowledge management is described and a framework for knowledge networking is developed which can be used as a basis in order to structure and reveal interdependences.
Abstract: In this article we describe an integrated view of knowledge management and networking, a very powerful combination for the future of knowledge management. We start by giving an overview of the increasing importance of networks in the modern economy. Subsequently, we conceptualize a network perspective on knowledge management: we first give a theoretical foundation on networks, and then explain the interdependences between networks and knowledge management. These reflections lead to a framework for knowledge networking in which we distinguish between a micro-perspective and a macro-perspective and which can be used as a basis to structure and reveal interdependences. We conclude by giving some implications for management and future research.

Journal ArticleDOI
TL;DR: Consultants and managers should ask themselves strategic, organisational and instrumental questions regarding knowledge management to stay competitive in a highly dynamic and changing world.
Abstract: This article examines and defines the main concepts in knowledge management. Since our economy has evolved over the last couple of years into a knowledge‐based economy, knowledge has become one of the main assets of companies. Knowledge can be defined as: information; the capability to interpret data and information through a process of giving meaning to these data and information; and an attitude aimed at wanting to do so. In making these factors productive, knowledge management can be defined as achieving organisational goals through the strategy‐driven motivation and facilitation of (knowledge) workers to develop, enhance and use their capability to interpret data and information (by using available sources of information, experience, skills, culture, character, etc.) through a process of giving meaning to these data and information. Consultants and managers should ask themselves strategic, organisational and instrumental questions regarding knowledge management to stay competitive in a highly dynamic and changing world.

01 Jan 1999
TL;DR: An overview of the evolution of Protégé is given, examining the methodological assumptions underlying the original Protégé system and discussing the ways in which the methodology has changed over time.
Abstract: It has been 13 years since the first version of Protégé was run. The original tool was a small application, aimed mainly at building knowledge-acquisition tools for a few very specialized programs (it grew out of the ONCOCIN project and the subsequent attempts to build expert systems for protocol-based therapy planning). The most recent version, Protégé-2000, incorporates the Open Knowledge Base Connectivity (OKBC) knowledge model, is written to run across a wide variety of platforms, supports customized user-interface extensions, and has been used by over 300 individuals and research groups, most of whom are only peripherally interested in medical informatics. Researchers not directly involved in the project might well wonder how Protégé evolved, what are the reasons for the repeated reimplementations, and how to tell the various versions apart. In this paper, we give an overview of the evolution of Protégé, examining the methodological assumptions underlying the original Protégé system and discussing the ways in which the methodology has changed over time. We conclude with an overview of the latest version of Protégé, Protégé-2000. 1. MOTIVATION AND A TIMELINE The Protégé applications (hereafter ‘Protégé’) are a set of tools that have been evolving for over a decade, from a simple program which helped construct specialized knowledge-bases to a set of general purpose knowledge-base creation and maintenance tools. While Protégé began as a small application designed for a medical domain (protocol-based therapy planning), it has grown and evolved to become a much more general-purpose set of tools for building knowledge-based systems. The original goal of Protégé was to reduce the knowledge-acquisition bottleneck (Hayes-Roth et al., 1983) by minimizing the role of the knowledge-engineer in constructing knowledge-bases. In order to do this, Musen (1988, 1989b) posited that knowledge-acquisition proceeds in well-defined stages and that knowledge acquired in one stage could be used to generate and customize knowledge-acquisition tools for subsequent stages. In (Musen, 1988), Protégé was defined as an application that takes advantage of this structured information to simplify the knowledge-acquisition process. The original Protégé was described this way (Musen, 1988): Protégé is neither an expert system itself nor a program that builds expert systems directly. Instead, Protégé is a tool that helps users build other tools that are custom-tailored to assist with knowledge-acquisition for expert systems in specific application areas. The original Protégé demonstrated the viability of this approach, and of the use of task-specific knowledge to generate and customize knowledge-acquisition tools. But as with many first-

Journal ArticleDOI
Rick Dove
TL;DR: This paper defines the agile enterprise as one which is able to both manage and apply knowledge effectively, and suggests that value from either capability is impeded if they are not in balance.
Abstract: This paper defines the agile enterprise as one which is able to both manage and apply knowledge effectively, and suggests that value from either capability is impeded if they are not in balance. It looks at the application of knowledge as requiring a change, and overviews a body of analytical work on change proficiency in business systems and processes. It looks at knowledge management as a strategic portfolio management responsibility based on learning functionality, and shares knowledge and experience in organizational collaborative learning mechanisms. It introduces the concept of plug-compatible knowledge packaging as a means for increasing the velocity of knowledge diffusion and the likelihood of knowledge understood at the depth of insight. Finally, it reviews a knowledge portfolio management and collaborative knowledge development architecture used successfully in a sizable cross-industry informal-consortia activity, and suggests that it is a good model for a corporate university architecture.

Proceedings ArticleDOI
06 Dec 1999
TL;DR: The authors built an application that enhances domain knowledge with machine learning techniques to create rules for an intrusion detection expert system, employing genetic algorithms and decision trees to automatically generate rules for classifying network connections.
Abstract: Differentiating anomalous network activity from normal network traffic is difficult and tedious. A human analyst must search through vast amounts of data to find anomalous sequences of network connections. To support the analyst's job, we built an application which enhances domain knowledge with machine learning techniques to create rules for an intrusion detection expert system. We employ genetic algorithms and decision trees to automatically generate rules for classifying network connections. This paper describes the machine learning methodology and the applications employing this methodology.
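
The genetic-algorithm half of the approach (decision trees are the other half and are omitted here) can be sketched as evolving threshold rules over connection features. Everything below, including the features, the toy connection records, and the fitness function, is invented for illustration and is not the authors' system.

```python
# Toy sketch of evolving a threshold rule for labelling network connections with
# a genetic algorithm; the features, records, and GA settings are all invented.
import random

# Each record: (duration, bytes_transferred, failed_logins), label 1 = anomalous.
DATA = [((2, 300, 0), 0), ((1, 150, 0), 0), ((3, 250, 1), 0),
        ((40, 9000, 3), 1), ((35, 7000, 4), 1), ((50, 12000, 5), 1)]

LO = [min(rec[i] for rec, _ in DATA) for i in range(3)]
HI = [max(rec[i] for rec, _ in DATA) for i in range(3)]

def fitness(rule):
    """A rule is one threshold per feature; it flags a record as anomalous when
    every feature exceeds its threshold. Fitness is plain accuracy."""
    correct = sum(all(x > t for x, t in zip(rec, rule)) == bool(label)
                  for rec, label in DATA)
    return correct / len(DATA)

def evolve(pop_size=20, generations=30):
    random.seed(1)
    pop = [tuple(random.uniform(LO[i], HI[i]) for i in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]           # crossover
            child = [g + random.gauss(0, 0.05 * (HI[i] - LO[i] + 1))      # mutation
                     for i, g in enumerate(child)]
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("evolved thresholds:", [round(t, 1) for t in best], "accuracy:", fitness(best))
```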

Journal ArticleDOI
TL;DR: The authors are working with collaborators at IBM, Informix, and elsewhere to explore ways to improve human-computer interaction during data analysis to develop interactive, intuitive techniques for analyzing massive data sets.
Abstract: Data analysis is fundamentally an iterative process in which you issue a query, receive a response, formulate the next query based on the response, and repeat. You usually don't issue a single, perfectly chosen query and get the information you want from a database; indeed, the purpose of data analysis is to extract unknown information, and in most situations, there is no one perfect query. People naturally start by asking broad, big-picture questions and then continually refine their questions based on feedback and domain knowledge. In the Control (Continuous Output and Navigation Technology with Refinement Online) project at the University of California, Berkeley, the authors are working with collaborators at IBM, Informix, and elsewhere to explore ways to improve human-computer interaction during data analysis. The Control project's goal is to develop interactive, intuitive techniques for analyzing massive data sets.
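
One way to read "continuous output and navigation" is that an aggregate query reports progressively refined estimates while the data are still being scanned. The Python sketch below shows that idea on synthetic data; it is an assumption-laden illustration, not the Control project's implementation.

```python
# Sketch of "continuous output": report a running estimate of an aggregate while
# the scan is still in progress, so the analyst can refine or abandon the query
# early. The data and the aggregate (average purchase amount) are synthetic.
import random

random.seed(0)
purchases = [random.gauss(50, 15) for _ in range(1_000_000)]   # stand-in for a large table

def online_average(rows, report_every=200_000):
    """Yield progressively refined estimates of the average as rows are scanned."""
    total = 0.0
    for n, value in enumerate(rows, start=1):
        total += value
        if n % report_every == 0:
            yield n, total / n

for seen, estimate in online_average(purchases):
    print(f"after {seen:,} rows: running average = {estimate:.2f}")
```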

Journal ArticleDOI
TL;DR: This paper presents three rule extraction techniques, two of which are specific to feedforward networks with a single hidden layer of sigmoidal units, and a rule-evaluation technique, which orders extracted rules based on three performance measures.
Abstract: Hybrid intelligent systems that combine knowledge-based and artificial neural network systems typically have four phases, involving domain knowledge representation, mapping of this knowledge into an initial connectionist architecture, network training and rule extraction, respectively. The final phase is important because it can provide a trained connectionist architecture with explanation power and validate its output decisions. Moreover, it can be used to refine and maintain the initial knowledge acquired from domain experts. In this paper, we present three rule extraction techniques. The first technique extracts a set of binary rules from any type of neural network. The other two techniques are specific to feedforward networks, with a single hidden layer of sigmoidal units. Technique 2 extracts partial rules that represent the most important embedded knowledge with an adjustable level of detail, while the third technique provides a more comprehensive and universal approach. A rule-evaluation technique, which orders extracted rules based on three performance measures, is then proposed. The three techniques are applied to the iris and breast cancer data sets. The extracted rules are evaluated qualitatively and quantitatively, and are compared with those obtained by other approaches.
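
A minimal, naive stand-in for black-box rule extraction is to query a trained network on every binary input combination and report the combinations that activate the output. The sketch below does exactly that on a tiny hand-weighted network; it is not one of the paper's three techniques, just an illustration of turning network behaviour into IF-THEN rules.

```python
# Naive stand-in for black-box binary rule extraction: enumerate binary inputs,
# query the network, and report the activating combinations as rules.
# The tiny hand-set network below is illustrative only.
from itertools import product
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def net(x1, x2, x3):
    """A fixed single-hidden-layer sigmoid network (weights chosen by hand)."""
    h1 = sigmoid(4 * x1 + 4 * x2 - 6)          # roughly "x1 AND x2"
    h2 = sigmoid(4 * x3 - 2)                   # roughly "x3"
    return sigmoid(5 * h1 + 5 * h2 - 7)        # roughly "h1 AND h2"

rules = [inputs for inputs in product([0, 1], repeat=3) if net(*inputs) >= 0.5]
for x1, x2, x3 in rules:
    print(f"IF x1={x1} AND x2={x2} AND x3={x3} THEN class = positive")
```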

Journal ArticleDOI
TL;DR: With the growth in IT capability, a clear operational distinction can be drawn between information and knowledge, and a model of the interaction between knowledge and information, and of the appropriate balance between the two in different situations is developed.
Abstract: Knowledge management is emerging as a significant organizational and management challenge. The pressures of the emergence of the global knowledge economy, and recognition of knowledge as a key and intangible asset are making the effective management of knowledge a priority. This surge of interest has paid relatively little attention to the object of management-knowledge. Epistemologists and sociologists have produced a variety of definitions and classifications, but there is no consensus. However, with the growth in IT capability, a clear operational distinction can be drawn between information and knowledge. The former can be captured, stored and transmitted in digital form. The latter can only exist in an intelligent system. This distinction is used to develop models of the interaction between knowledge and information, and of the appropriate balance between the two in different situations. On the basis of this model, the challenges of 'knowledge management' are: Establishing and optimizing the informat...

Journal ArticleDOI
TL;DR: The authors found that domain knowledge exhibited positive relations with general intelligence (g), verbal abilities after g was removed, Openness, Typical Intellectual engagement, and specific vocational interests.
Abstract: Twenty academic knowledge tests were developed to locate domain knowledge within a nomological network of traits. Spatial, numerical, and verbal aptitude measures and personality and interest measures were administered to 141 undergraduates. Domain knowledge factored along curricular lines; a general knowledge factor accounted for about half of knowledge variance. Domain knowledge exhibited positive relations with general intelligence (g), verbal abilities after g was removed, Openness, Typical Intellectual engagement, and specific vocational interests. Spatial and numerical abilities were unrelated to knowledge beyond g. Extraversion related negatively to all knowledge domains. Results provide broad support for R. B. Cattell's (1971/1987) crystallized intelligence as something more than verbal abilities and specific support for P. L. Ackerman's (1996) intelligence-as-process, personality, interests, and intelligence-as-knowledge theory of adult intelligence.

Proceedings ArticleDOI
01 Apr 1999
TL;DR: The paper argues that knowledge about, within and from processes can be managed in a knowledge medium and illustrates this finding through a case study from the financial industry.
Abstract: This paper combines three major streams of recent business research and shows how they can complement each other. Specifically, it examines which possibilities exist to improve knowledge intensive processes through new media. Thus, the paper combines arguments from the knowledge management discussion (Davenport/Prusak 1998) with the approach of Business Process Reengineering (Hammer/Champy 1993) and Media Theory (Schmid et al. 1999). The paper answers the following question: “How can the management of knowledge in business processes be organized and supported?”. The paper argues that knowledge about, within and from processes can be managed in a knowledge medium and illustrates this finding through a case study from the financial industry. Furthermore, the paper demonstrates that the quality of information is a crucial element for such a knowledge medium. A knowledge medium can be defined as a technical and organizational platform of a community for the purpose of knowledge exchange between its agents (Schmid et al. 1999).

Journal ArticleDOI
TL;DR: The key issue addressed in this article is: how does knowledge engineering relate to a broader perspective of knowledge management?

Journal ArticleDOI
TL;DR: This work focuses on characteristics of this strategically important knowledge and how it can be organized in networks, and should be read as a case for paying more attention to knowledge and networks and how to manage these in organizations.
Abstract: Knowledge is a magic term with multiple connotations and interpretations. It is an issue of academic discourse as well as one with important implications for business institutions. How we define and frame knowledge carries implications for the way we try to manage knowledge in organizations and the de facto knowledge in organizations also carries implications for the knowledge existing in organizations. Within the last few decades, there has been an increasing interest in the tacit dimension of knowledge, which is perhaps hardest to manage, as it cannot be formally communicated, and is often embedded in the routines and standard operating procedures of the organization. Focuses on characteristics of this strategically important knowledge and how it can be organized in networks. Should be read as a case for paying more attention to knowledge and networks and how to manage these in organizations.

Proceedings ArticleDOI
19 Mar 1999
TL;DR: This paper presents a data mining algorithm to find association rules in 2-dimensional color images to explore the feasibility of this approach and shows that there is promise in image mining based on content.
Abstract: Our focus for data mining in this paper is knowledge discovery in image databases. We present a data mining algorithm to find association rules in 2-dimensional color images. The algorithm has four major steps: feature extraction, object identification, auxiliary image creation and object mining. Our emphasis is on data mining of image content without the use of auxiliary domain knowledge. The purpose of our experiments is to explore the feasibility of this approach. A synthetic image set containing geometric shapes was generated to test our initial algorithm implementation. Our experimental results show that there is promise in image mining based on content. We compare these results against the rules obtained from manually identifying the shapes. We analyze the reasons for discrepancies. We also suggest directions for future work.
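
The final "object mining" step can be sketched once each image has been reduced to a set of identified objects: frequent object combinations are then ordinary itemsets. The images, object labels, and support threshold below are invented; the sketch only illustrates the mining step, not the feature-extraction or object-identification stages.

```python
# Sketch of the object-mining step: once objects have been identified in each
# image, frequent object combinations can be mined from the per-image object
# sets. The images and objects here are invented.
from itertools import combinations
from collections import Counter

# Objects identified per synthetic image (e.g., colored geometric shapes).
IMAGES = [
    {"red_circle", "blue_square"},
    {"red_circle", "blue_square", "green_triangle"},
    {"red_circle", "green_triangle"},
    {"blue_square", "green_triangle"},
]
MIN_SUPPORT = 0.5

def frequent_pairs(images, min_support):
    """Count object pairs across images and keep those meeting the support threshold."""
    counts = Counter(pair for img in images
                     for pair in combinations(sorted(img), 2))
    n = len(images)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

for (a, b), support in frequent_pairs(IMAGES, MIN_SUPPORT).items():
    print(f"{{{a}, {b}}} support={support:.2f}")
```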

Book ChapterDOI
TL;DR: A quantitative model based on support logic for determining the interestingness of discovered patterns is developed and incorporated into the Web Site Information Filter system and examples of interesting frequent itemsets automatically discovered from real Web data are presented.
Abstract: Web Usage Mining is the application of data mining techniques to large Web data repositories in order to extract usage patterns. As with many data mining application domains, the identification of patterns that are considered interesting is a problem that must be solved in addition to simply generating them. A necessary step in identifying interesting results is quantifying what is considered uninteresting in order to form a basis for comparison. Several research efforts have relied on manually generated sets of uninteresting rules. However, manual generation of a comprehensive set of evidence about beliefs for a particular domain is impractical in many cases. Generally, domain knowledge can be used to automatically create evidence for or against a set of beliefs. This paper develops a quantitative model based on support logic for determining the interestingness of discovered patterns. For Web Usage Mining, there are three types of domain information available: usage, content, and structure. This paper also describes algorithms for using these three types of information to automatically identify interesting knowledge. These algorithms have been incorporated into the Web Site Information Filter (WebSIFT) system and examples of interesting frequent itemsets automatically discovered from real Web data are presented.
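
The use of structure information as automatic evidence can be sketched as follows: frequent page sets that co-occur in usage but have no direct link between them are flagged as the more interesting results. The page URLs, itemsets, and link set below are hypothetical, and the simple linked/not-linked test stands in for the paper's support-logic model.

```python
# Sketch of structure-based evidence: frequent page sets from usage data are
# compared against the site's link structure, and unlinked combinations are
# flagged as the more interesting results. All URLs and numbers are invented.
from itertools import combinations

FREQUENT_ITEMSETS = [
    ({"/home", "/products"}, 0.40),
    ({"/support/faq", "/products/widget-pro"}, 0.22),
]

# Structural evidence: pairs of pages connected by a direct hyperlink.
LINKS = {frozenset({"/home", "/products"})}

def directly_linked(pages):
    """Belief derived from structure: co-occurring pages should be linked."""
    return any(frozenset(pair) in LINKS for pair in combinations(pages, 2))

for pages, support in FREQUENT_ITEMSETS:
    verdict = "expected" if directly_linked(pages) else "interesting (no direct link)"
    print(sorted(pages), f"support={support:.2f} ->", verdict)
```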

Journal ArticleDOI
Kyung Shik Shin, Ingoo Han
TL;DR: A hybrid approach using genetic algorithms (GAs) in the case-based retrieval process in an attempt to increase the overall classification accuracy, and a machine learning approach using GAs to find an optimal or near optimal weight vector for the attributes of cases in case indexing and retrieving.
Abstract: A critical issue in case-based reasoning (CBR) is to retrieve not just a similar past case but a usefully similar case to the problem. For this reason, the integration of domain knowledge into the case indexing and retrieving process is highly recommended in building a CBR system. However, this task is difficult to carry out as such knowledge often cannot be successfully and exhaustively captured and represented. This article utilizes a hybrid approach that applies genetic algorithms (GAs) to the case-based retrieval process in an attempt to increase the overall classification accuracy. We propose a machine learning approach using GAs to find an optimal or near optimal weight vector for the attributes of cases in case indexing and retrieving. We apply this weight vector to the matching and ranking procedure of CBR. This GA–CBR integration reaps the benefits of both systems. The CBR technique provides analogical reasoning structures for experience-rich domains while GAs provide CBR with knowledge through machine learning. The proposed approach is demonstrated by applications to corporate bond rating.
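
The retrieval side of the GA-CBR integration can be sketched as weighted nearest-neighbour matching, with the attribute weight vector supplied by the GA. The cases, attributes, and weights below are invented; the GA search itself, which would evaluate candidate weight vectors by classification accuracy, is omitted.

```python
# Sketch of weighted case retrieval: cases are ranked by a weighted similarity,
# where the attribute weight vector would be supplied by the GA after searching
# for weights that maximize classification accuracy. Cases and weights are invented.
CASES = [
    # (debt_ratio, profitability, firm_size) -> bond rating
    ((0.8, 0.05, 0.3), "BB"),
    ((0.3, 0.20, 0.7), "AA"),
    ((0.5, 0.10, 0.5), "A"),
]
WEIGHTS = (0.6, 0.3, 0.1)     # e.g. a near-optimal vector found by the GA

def similarity(a, b, weights):
    """Weighted similarity: attributes the GA deems important count for more."""
    return -sum(w * abs(x - y) for w, x, y in zip(weights, a, b))

def retrieve(query, cases, weights, k=1):
    """Return the k most usefully similar cases under the weighted measure."""
    ranked = sorted(cases, key=lambda c: similarity(query, c[0], weights), reverse=True)
    return ranked[:k]

print(retrieve((0.45, 0.12, 0.6), CASES, WEIGHTS))
```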

Journal ArticleDOI
TL;DR: Comparisons with the results obtained by some of the main neural, symbolic, and hybrid inductive learning systems, using the same domain knowledge, show the effectiveness of C-IL2P.
Abstract: This paper presents the Connectionist Inductive Learning and Logic Programming System (C-IL^2P). C-IL^2P is a new massively parallel computational model based on a feedforward Artificial Neural Network that integrates inductive learning from examples and background knowledge, with deductive learning from Logic Programming. Starting with the background knowledge represented by a propositional logic program, a translation algorithm is applied generating a neural network that can be trained with examples. The results obtained with this refined network can be explained by extracting a revised logic program from it. Moreover, the neural network computes the stable model of the logic program inserted in it as background knowledge, or learned with the examples, thus functioning as a parallel system for Logic Programming. We have successfully applied C-IL^2P to two real-world problems of computational biology, specifically DNA sequence analyses. Comparisons with the results obtained by some of the main neural, symbolic, and hybrid inductive learning systems, using the same domain knowledge, show the effectiveness of C-IL^2P.
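
A rough sketch of the translation idea: each program rule becomes a unit that activates its head atom when the positive body literals are active and the negated ones are not, and repeated forward passes compute the program's consequences. The weights, margin, and two-rule program below are illustrative simplifications, not C-IL^2P's actual construction.

```python
# Rough sketch of translating a propositional logic program into a network of
# sigmoid units: each rule becomes a unit that drives its head atom toward 1
# when its body holds. Weights and the two-rule program are illustrative only.
from math import exp

W = 8.0   # a "large enough" weight so the units behave like threshold gates

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# Program: a :- b, not c.    b.    (c is never derivable.)
RULES = [("a", ["b"], ["c"]),      # (head, positive body literals, negated body literals)
         ("b", [], [])]

def forward(act):
    """One pass of the network: recompute each head from the current activations."""
    new = dict(act)
    for head, pos, neg in RULES:
        net = (sum(W * act[p] for p in pos)
               - sum(W * act[n] for n in neg)
               - W * (len(pos) - 0.5))           # bias: fire only if the whole body holds
        new[head] = max(new[head], sigmoid(net))
    return new

state = {atom: 0.0 for atom in "abc"}
for _ in range(3):                                # iterate until activations settle
    state = forward(state)
print({atom: round(v, 2) for atom, v in state.items()})
# -> a and b near 1 (derived), c stays 0 (not derivable)
```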

Journal ArticleDOI
TL;DR: The consequences of taking a knowledge‐based view of the organisation are examined, with examples drawn from different process types (knowledge‐based manufacturing and service).
Abstract: This paper is concerned with the relevance of knowledge management to operational managers within organisations. The consequences of taking a knowledge‐based view of the organisation are examined, with examples drawn from different process types (knowledge‐based manufacturing and service).

Journal ArticleDOI
TL;DR: It is argued that a fragmented mosaic of programs and problematics currently exists, at various levels of incompatibility, and a research program is described that develops a model on four dimensions that appears to order the various programs, practices and processes in this divergent field.
Abstract: This article reviews developments in the field of applied knowledge management dating from 1990 and argues that a fragmented mosaic of programs and problematics currently exists, at various levels of incompatibility. Using a software product, we map the information space around applied knowledge management as an illustration of this basic fact. We then describe a research program that extends this logic and develop a model on four dimensions that appears to order the various programs, practices and processes in this divergent field. Implications for managers of knowledge management initiatives are discussed, and avenues for future research suggested.

Journal ArticleDOI
01 Dec 1999
TL;DR: In this article, the authors focus on generating unexpected patterns with respect to managerial intuition by eliciting managers' beliefs about the domain and using these beliefs to seed the search for unexpected patterns in data, which should lead to the development of decision support systems that provide managers with more relevant patterns from data and aid in effective decision making.
Abstract: Organizations are taking advantage of “data-mining” techniques to leverage the vast amounts of data captured as they process routine transactions. Data mining is the process of discovering hidden structure or patterns in data. However, several of the pattern discovery methods in data-mining systems have the drawbacks that they discover too many obvious or irrelevant patterns and that they do not leverage to a full extent valuable prior domain knowledge that managers have. This research addresses these drawbacks by developing ways to generate interesting patterns by incorporating managers' prior knowledge in the process of searching for patterns in data. Specifically, we focus on providing methods that generate unexpected patterns with respect to managerial intuition by eliciting managers' beliefs about the domain and using these beliefs to seed the search for unexpected patterns in data. Our approach should lead to the development of decision-support systems that provide managers with more relevant patterns from data and aid in effective decision making.
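
The belief-seeded search can be sketched as a filter: elicited beliefs assign an expected confidence to certain rules, and discovered rules that deviate sharply from those expectations are surfaced as unexpected. The rules, belief values, and threshold below are invented for illustration.

```python
# Sketch of belief-driven filtering: elicited managerial beliefs state the
# expected confidence of a rule, and discovered rules that deviate sharply are
# surfaced as unexpected. Rules, beliefs, and numbers are invented.
BELIEFS = {
    ("weekend", "high_sales"): 0.70,     # manager expects weekends to drive sales
    ("promotion", "high_sales"): 0.60,
}

DISCOVERED = [
    (("weekend", "high_sales"), 0.68),   # matches intuition
    (("promotion", "high_sales"), 0.15), # contradicts intuition
    (("rainy_day", "high_sales"), 0.55), # no prior belief
]

THRESHOLD = 0.25   # how far a rule may deviate from belief before it is "unexpected"

for rule, confidence in DISCOVERED:
    expected = BELIEFS.get(rule)
    if expected is None:
        status = "no prior belief"
    elif abs(confidence - expected) > THRESHOLD:
        status = f"UNEXPECTED (believed {expected:.2f})"
    else:
        status = "expected"
    print(f"{rule[0]} => {rule[1]}: conf={confidence:.2f} [{status}]")
```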

Journal ArticleDOI
TL;DR: A knowledge map can bridge the gap between business and information technology people and the value of these initial knowledge maps can then be extended to incorporate the rest of the intellectual capital of an enterprise.
Abstract: Knowledge mapping can be a powerful tool for CIOs in implementing knowledge management in their organizations. A knowledge map can bridge the gap between business and information technology (IT) people. It can also capture and integrate the knowledge collected in an initial knowledge management identification process. The value of these initial knowledge maps can then be extended to incorporate the rest of the intellectual capital of an enterprise.