
Showing papers on "Domain knowledge" published in 2003


Journal ArticleDOI
TL;DR: In this article, the authors provide an integrative framework for organizing the literature on knowledge management and identify where research findings about knowledge management converge and where gaps in our understanding exist, as well as emerging themes in knowledge management.
Abstract: In this concluding article to the Management Science special issue on "Managing Knowledge in Organizations: Creating, Retaining, and Transferring Knowledge," we provide an integrative framework for organizing the literature on knowledge management. The framework has two dimensions. The knowledge management outcomes of knowledge creation, retention, and transfer are represented along one dimension. Properties of the context within which knowledge management occurs are represented on the other dimension. These properties, which affect knowledge management outcomes, can be organized according to whether they are properties of a unit (e.g., individual, group, organization) involved in knowledge management, properties of relationships between units or properties of the knowledge itself. The framework is used to identify where research findings about knowledge management converge and where gaps in our understanding exist. The article discusses mechanisms of knowledge management and how those mechanisms affect a unit's ability to create, retain and transfer knowledge. Emerging themes in the literature on knowledge management are identified. Directions for future research are suggested.

2,046 citations


Proceedings ArticleDOI
31 May 2003
TL;DR: This work shows that conditionally-trained models, such as conditional maximum entropy models, handle the overlapping, inter-dependent features used in greedy sequence modeling for NLP well.
Abstract: Models for many natural language tasks benefit from the flexibility to use overlapping, non-independent features. For example, the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists, part-of-speech tags, character n-grams, and capitalization patterns. While it is difficult to capture such inter-dependent features with a generative probabilistic model, conditionally-trained models, such as conditional maximum entropy models, handle them well. There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; Borthwick et al., 1998).
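To make the feature flexibility concrete, here is a minimal sketch (mine, not the paper's code) of the kinds of overlapping, non-independent features the abstract lists — word lists, capitalization patterns, and character n-grams — extracted per token for a conditionally-trained tagger. The word list and all names are hypothetical.

```python
# Illustrative sketch (not from the paper): overlapping, non-independent
# features for the token at position i, as fed to a conditionally-trained
# sequence model such as a maximum entropy tagger.

def token_features(tokens, i, word_list=frozenset({"London", "Paris"})):
    """Return a dict of overlapping features for tokens[i]."""
    w = tokens[i]
    feats = {
        "word=" + w.lower(): 1.0,                           # word identity
        "in_word_list": 1.0 if w in word_list else 0.0,     # domain word list
        "is_capitalized": 1.0 if w[:1].isupper() else 0.0,  # capitalization pattern
    }
    # character n-grams (trigrams here) of the word
    for j in range(len(w) - 2):
        feats["trigram=" + w[j:j + 3].lower()] = 1.0
    # context feature: overlaps with the previous token's own features
    if i > 0:
        feats["prev_word=" + tokens[i - 1].lower()] = 1.0
    return feats

print(token_features(["Visit", "London", "today"], 1))
```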

1,306 citations


Journal ArticleDOI
TL;DR: This paper follows the evolution of the Protege project through three distinct re-implementations, and describes the overall methodology, the design decisions, and the lessons learned over the duration of the project.
Abstract: The Protege project has come a long way since Mark Musen first built the Protege meta-tool for knowledge-based systems in 1987. The original tool was a small application, aimed at building knowledge-acquisition tools for a few specialized programs in medical planning. From this initial tool, the Protege system has evolved into a durable, extensible platform for knowledge-based systems development and research. The current version, Protege-2000, can be run on a variety of platforms, supports customized user-interface extensions, incorporates the Open Knowledge-Base Connectivity (OKBC) knowledge model, interacts with standard storage formats such as relational databases, XML, and RDF, and has been used by hundreds of individuals and research groups. In this paper, we follow the evolution of the Protege project through three distinct re-implementations. We describe our overall methodology, our design decisions, and the lessons we have learned over the duration of the project. We believe that our success is one of infrastructure: Protege is a flexible, well-supported, and robust development environment. Using Protege, developers and domain experts can easily build effective knowledge-based systems, and researchers can explore ideas in a variety of knowledge-based domains.

1,244 citations


Journal ArticleDOI
TL;DR: The paper illustrates the dynamics of knowledge development and transfer in more and less virtual teams, raising the possibility that information technology plays the role of a jealous mistress when it comes to the development and ownership of valuable knowledge in organizations.
Abstract: Information technology can facilitate the dissemination of knowledge across the organization-even to the point of making virtual teams a viable alternative to face-to-face work. However, unless managed, the combination of information technology and virtual work may serve to change the distribution of different types of knowledge across individuals, teams, and the organization. Implications include the possibility that information technology plays the role of a jealous mistress when it comes to the development and ownership of valuable knowledge in organizations; that is, information technology may destabilize the relationship between organizations and their employees when it comes to the transfer of knowledge. The paper advances theory and informs practice by illustrating the dynamics of knowledge development and transfer in more and less virtual teams.

818 citations


Journal ArticleDOI
TL;DR: A methodology for interpreting linguistic structures that encode hypernymic propositions, in which a more specific concept is in a taxonomic relationship with a more general concept, has the potential to support a range of applications, including information retrieval and ontology engineering.

504 citations


Journal ArticleDOI
TL;DR: The literature on managing knowledge increasingly emphasises the social aspects of knowledge retention and transfer, recognising that knowledge is often tacit, situated, and embedded within particular social groups and situations.

498 citations


Journal ArticleDOI
TL;DR: This paper considers the Artequakt project, which links a knowledge extraction tool with an ontology to achieve continuous knowledge support and guide information extraction; extraction is further enhanced by a lexicon-based term expansion mechanism that provides extended ontology terminology.
Abstract: To bring the Semantic Web to life and provide advanced knowledge services, we need efficient ways to access and extract knowledge from Web documents. Although Web page annotations could facilitate such knowledge gathering, annotations are rare and will probably never be rich or detailed enough to cover all the knowledge these documents contain. Manual annotation is impractical and unscalable, and automatic annotation tools remain largely undeveloped. Specialized knowledge services therefore require tools that can search and extract specific knowledge directly from unstructured text on the Web, guided by an ontology that details what type of knowledge to harvest. An ontology uses concepts and relations to classify domain knowledge. Other researchers have used ontologies to support knowledge extraction, but few have explored their full potential in this domain. The paper considers the Artequakt project which links a knowledge extraction tool with an ontology to achieve continuous knowledge support and guide information extraction. The extraction tool searches online documents and extracts knowledge that matches the given classification structure. It provides this knowledge in a machine-readable format that will be automatically maintained in a knowledge base (KB). Knowledge extraction is further enhanced using a lexicon-based term expansion mechanism that provides extended ontology terminology.
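The lexicon-based term expansion the abstract mentions can be sketched as follows; this is my own minimal construction, not Artequakt's implementation, and the lexicon entries and concept names are invented. The idea is to expand each ontology term with its synonyms so the extractor matches more surface forms in text.

```python
# Hypothetical sketch of lexicon-based term expansion: ontology terms are
# expanded with synonyms before matching against unstructured text.

LEXICON = {
    "painter": {"artist", "portraitist"},
    "birthplace": {"place of birth", "born in"},
}

def expand_terms(ontology_terms):
    """Map each ontology term to itself plus its lexicon synonyms."""
    return {term: {term} | LEXICON.get(term, set()) for term in ontology_terms}

def match_concepts(sentence, expanded):
    """Return ontology terms whose expanded forms occur in the sentence."""
    text = sentence.lower()
    return [t for t, forms in expanded.items()
            if any(f in text for f in forms)]

terms = expand_terms(["painter", "birthplace"])
print(match_concepts("Rembrandt was an artist born in Leiden.", terms))
```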

490 citations


Journal ArticleDOI
TL;DR: This paper presents the first scale developed to measure knowledge management behaviors and practices and in so doing provides construct boundaries that should enable the development of a theory of knowledge management.
Abstract: Knowledge management has recently emerged as a new discipline and is generating considerable interest among academics and managers. Given its newness, there is still little guidance in the extant literature on how to measure knowledge management. This paper presents the first scale developed to measure knowledge management behaviors and practices and in so doing provides construct boundaries that should enable the development of a theory of knowledge management (Zaltman et al., 1973).

419 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that knowledge is highly individualistic and concomitant with the various surrounding contexts within which it is shaped and enacted, and that these contexts are in turn shaped by that knowledge, adding further complexity to the problem domain.

318 citations


Journal ArticleDOI
TL;DR: In this paper, the important elements for the implementation of knowledge management in engineering industries are analyzed to obtain an Interpretive Structural Modeling (ISM), which shows the inter-relationships of the variables and their levels.
Abstract: In this age of globalization, to survive, organizations need a good capacity to retain, develop, organize and utilize their knowledge assets. The knowledge of an organization comprises professional intellect, such as know-how, know-why, and self-motivated creativity, as well as the experience, concepts, values, beliefs and ways of working that can be shared and communicated. Knowledge Management (KM) is described as the management of an organization's knowledge through the processes of creating, sustaining, applying, sharing and renewing knowledge to enhance organizational performance and create value. Interpretive Structural Modeling (ISM) is a methodology for identifying and summarizing relationships among specific elements, which define an issue or problem. It provides a means by which order can be imposed on the complexity of such elements. In the present paper the important elements (also referred to as variables) for the implementation of KM in engineering industries have been analyzed to obtain an ISM, which shows the inter-relationships of the variables and their levels. These variables have also been categorized depending on their driving power and dependence.
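As a concrete illustration of the driving power and dependence computation that ISM rests on, here is a small sketch under assumed data: the reachability matrix and the variable interpretations in the comments are invented, not taken from the paper.

```python
# Illustrative ISM step: driving power is the row sum of the reachability
# matrix, dependence is the column sum. Matrix values are hypothetical.

import numpy as np

# reachability[i][j] = 1 means variable i leads to (reaches) variable j
reachability = np.array([
    [1, 1, 1, 1],   # e.g., top-management support drives everything
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],   # e.g., organizational performance depends on all
])

driving_power = reachability.sum(axis=1)  # how many variables each one drives
dependence = reachability.sum(axis=0)     # how many variables drive each one

for i, (dp, dep) in enumerate(zip(driving_power, dependence)):
    print(f"variable {i}: driving power={dp}, dependence={dep}")
```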

266 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine the nature and the generation, dissemination and translation of knowledge in large, global management consulting organizations and argue that a fruitful understanding of knowledge management in management consulting requires attention to the relations between the different elements that represent different kinds of knowledge.
Abstract: This article examines the nature and the generation, dissemination and translation of knowledge in large, global management consulting organizations. The knowledge system in consulting organizations is modelled as consisting of three interacting knowledge elements: methods and tools, providing a common language and knowledge structure; cases, carrying knowledge in a narrative form; and the experience of individual consultants that is essential for the adaptation of methods, tools and cases to the specific consulting project.A number of recent studies have characterized knowledge-management strategies as focusing on either articulate knowledge or tacit knowledge. We argue that a fruitful understanding of knowledge management in management consulting requires attention to the relations between the different elements that represent different kinds of knowledge. Based on case studies in Andersen Consulting (now Accenture) and Ernst & Young Management Consulting (now Cap Gemini Ernst & Young) these knowledge e...

Journal ArticleDOI
TL;DR: A generic knowledge management implementation framework is proposed that should provide the building blocks necessary to further understand and develop knowledge management initiatives.

Proceedings ArticleDOI
01 Sep 2003
TL;DR: Bogor as mentioned in this paper is a model checking framework with an extensible input language for defining domain-specific constructs and a modular interface design to ease the optimization of domain specific state-space encodings, reductions and search algorithms.
Abstract: Model checking is emerging as a popular technology for reasoning about behavioral properties of a wide variety of software artifacts including: requirements models, architectural descriptions, designs, implementations, and process models. The complexity of model checking is well-known, yet cost-effective analyses have been achieved by exploiting, for example, naturally occurring abstractions and semantic properties of a target software artifact. Adapting a model checking tool to exploit this kind of domain knowledge often requires in-depth knowledge of the tool's implementation. We believe that with appropriate tool support, domain experts will be able to develop efficient model checking-based analyses for a variety of software-related models. To explore this hypothesis, we have developed Bogor, a model checking framework with an extensible input language for defining domain-specific constructs and a modular interface design to ease the optimization of domain-specific state-space encodings, reductions and search algorithms. We present the pattern-oriented design of Bogor and discuss our experiences adapting it to efficiently model check Java programs and event-driven component-based designs.
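Bogor's input language and interfaces are not reproduced here; instead, the following minimal Python sketch illustrates the underlying technique the abstract refers to — explicit-state model checking with a visited-state set — on a toy transition system of my own invention.

```python
# Minimal, generic explicit-state model checker (not Bogor's actual API):
# breadth-first exploration of a transition system, checking an invariant
# in every reachable state.

from collections import deque

def check_invariant(initial, successors, invariant):
    """Explore all reachable states; return a violating state or None."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state            # counterexample state found
        for nxt in successors(state):
            if nxt not in seen:     # visited set prunes the state space
                seen.add(nxt)
                frontier.append(nxt)
    return None                     # invariant holds in all reachable states

# Toy system: a counter modulo 5; invariant: counter never reaches 3.
violation = check_invariant(0, lambda s: [(s + 1) % 5], lambda s: s != 3)
print("violation:", violation)      # -> violation: 3
```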

01 Jan 2003
TL;DR: The core process of KDD is data mining, in which different algorithms are applied to produce hidden knowledge; a postprocessing step then evaluates the mining result according to users' requirements and domain knowledge.
Abstract: Data mining [Chen et al. 1996] is the process of extracting interesting (non-trivial, implicit, previously unknown and potentially useful) information or patterns from large information repositories such as relational databases, data warehouses, XML repositories, etc. Data mining is also known as one of the core processes of Knowledge Discovery in Databases (KDD). Many people take data mining as a synonym for the term KDD; others treat data mining as the core process of KDD. The KDD processes are shown in Figure 1 [Han and Kamber 2000]. Usually there are three processes. The first, preprocessing, is executed before data mining techniques are applied to the right data; it includes data cleaning, integration, selection and transformation. The main process of KDD is the data mining process, in which different algorithms are applied to produce hidden knowledge. After that comes postprocessing, which evaluates the mining result according to users' requirements and domain knowledge. Depending on the evaluation, the knowledge can be presented if the result is satisfactory; otherwise some or all of the processes have to be run again until a satisfactory result is obtained. The actual process works as follows. First we need to clean and integrate the databases. Since the data may come from different databases containing inconsistencies and duplications, we must clean the data source by removing noise or making compromises. Suppose two databases use different words to refer to the same thing in their schemas; when we integrate the two sources we can choose only one of the words, provided we know they denote the same thing. Real-world data also tend to be incomplete and noisy due to manual input mistakes. The integrated data can be stored in a database, data warehouse or other repository. As not all the data in the database are related to the mining task, the second step is to select task-related data from the integrated resources and transform them into a format that is ready to be mined. Suppose we want to find which items are often purchased together in a supermarket: the database that records the purchase history may contain customer ID, items bought, transaction time, prices, quantity of each item and so on, but for this specific task we only need the items bought. After selection of relevant data, the database to which we apply our data mining techniques will be much smaller, and consequently the whole process will be
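The supermarket example in the abstract can be made concrete with a toy sketch (my construction, with invented transactions): once only the "items bought" column has been selected, counting co-purchased item pairs is straightforward.

```python
# Toy market-basket illustration: count how often pairs of items occur
# together across transactions, after selecting only the items-bought data.

from collections import Counter
from itertools import combinations

transactions = [                       # task-relevant data after selection
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
]

pair_counts = Counter()
for items in transactions:
    for pair in combinations(sorted(items), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(2))      # most frequently co-purchased pairs
```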

Journal ArticleDOI
01 Jul 2003
TL;DR: A knowledge management system called KnowledgeScope is proposed that addresses these problems through an integrated workflow support capability, which captures and retrieves knowledge as an organizational process proceeds, and a process meta-model that organizes that knowledge and its context in a knowledge repository.
Abstract: Knowledge repositories have been implemented in many organizations, but they often suffer from non-use. This research considers two key design factors that cause non-use: the extra burden on users to document knowledge in the repository, and the lack of a standard knowledge structure that facilitates knowledge sharing among users with different perspectives. We propose a design of a knowledge management system called KnowledgeScope that addresses these problems through (1) an integrated workflow support capability that captures and retrieves knowledge as an organizational process proceeds, i.e., within the context in which it is created and used, and (2) a process meta-model that organizes that knowledge and context in a knowledge repository. In this paper, we describe this design and report the results from implementing it in a real-life organization.
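A hedged sketch of the core idea, as I read the abstract: knowledge is captured inside the workflow, keyed by the process step that produced it, so retrieval can use that context rather than requiring separate documentation effort. The class and field names below are hypothetical, not KnowledgeScope's actual meta-model.

```python
# Illustrative process meta-model: knowledge items are stored with the
# workflow step (context) in which they were created and used.

from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    name: str
    knowledge: list = field(default_factory=list)  # captured as work proceeds

@dataclass
class ProcessInstance:
    process: str
    steps: dict = field(default_factory=dict)

    def capture(self, step_name, item):
        """Record knowledge in the context of the step that created it."""
        self.steps.setdefault(step_name, ProcessStep(step_name)).knowledge.append(item)

    def retrieve(self, step_name):
        """Retrieve knowledge by its originating process context."""
        step = self.steps.get(step_name)
        return step.knowledge if step else []

run = ProcessInstance("design-review")
run.capture("review", "Checklist v2 caught an interface mismatch early.")
print(run.retrieve("review"))
```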

Journal ArticleDOI
TL;DR: A knowledge management initiative is developed which facilitates knowledge creation and sharing beyond project boundaries, based on exploratory research at pharmaceutical company AstraZeneca, and indicates that, by allowing the emergence of knowledge facilitators, practical knowledge for action is produced and shared.
Abstract: The ability to create knowledge and diffuse it throughout an organization is today recognized as a major strategic capability for gaining competitive advantage. Scholars and managers have shown an increasing interest in understanding and managing organizational knowledge. Despite this, there are few examples in the literature that bridge the gap between knowledge and knowledge application. This article develops a knowledge management initiative which facilitates knowledge creation and sharing beyond project boundaries, based on exploratory research at pharmaceutical company AstraZeneca. The results indicate that, by allowing the emergence of knowledge facilitators, practical knowledge for action is produced and shared. The article explores the dynamic and relational nature of knowledge when managing knowledge, it then develops actionable tools for lateral knowledge creation and knowledge transfer, and concludes with implications for managers using the tools.

MonographDOI
01 Oct 2003
TL;DR: The concept of Knowledge Based Organizations (KBO), as discussed in this monograph, brings together high-quality concepts closely related to organizational learning, knowledge workers, intellectual capital, and virtual teams, and includes the methodologies, systems and approaches needed to create and manage knowledge-based organizations of the 21st century.
Abstract: Creating Knowledge Based Organizations brings together high-quality concepts closely related to organizational learning, knowledge workers, intellectual capital, and virtual teams, and includes the methodologies, systems and approaches needed to create and manage knowledge-based organizations of the 21st century.

Journal ArticleDOI
TL;DR: This paper proposes a practical methodology to capture and represent organizational knowledge that uses a knowledge map as a tool to represent knowledge.
Abstract: Recently, research interest in knowledge management has grown rapidly. Much research on knowledge management is conducted in academic and industrial communities. Utilizing knowledge accumulated in an organization can be a strategic weapon to acquire a competitive advantage. Capturing and representing knowledge is critical in knowledge management. This paper proposes a practical methodology to capture and represent organizational knowledge. The methodology uses a knowledge map as a tool to represent knowledge. We explore several techniques of knowledge representation and suggest a roadmap with concrete procedures to build the knowledge map. A case study in a manufacturing company is provided.
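One common way to represent a knowledge map is as a directed graph of knowledge items connected by typed relations; the following small sketch is my own illustration of that representation, not the paper's notation, and all node and relation names are invented.

```python
# Toy knowledge map: nodes are knowledge items, edges are typed relations
# such as "prerequisite-of" or "owned-by".

knowledge_map = {
    "welding-process-specs": [("prerequisite-of", "frame-assembly-know-how")],
    "frame-assembly-know-how": [("owned-by", "line-2-team")],
    "line-2-team": [],
}

def related(node, relation):
    """Follow edges of one relation type from a node in the map."""
    return [dst for rel, dst in knowledge_map.get(node, []) if rel == relation]

print(related("welding-process-specs", "prerequisite-of"))
```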

Proceedings ArticleDOI
09 Jun 2003
TL;DR: Issues of knowledge creation, knowledge conversion and transfer, continuous learning, competence management and team composition, and experience repositories and other tools for knowledge dissemination are examined.
Abstract: This paper presents a comparative analysis of knowledge sharing approaches of agile and Tayloristic (traditional) software development teams. Issues of knowledge creation, knowledge conversion and transfer, continuous learning, competence management and team composition are discussed. Experience repositories and other tools for knowledge dissemination are examined.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that for organisational transformation to occur, an organisation's members need to evolve new tacit knowledge about the way they interact both with each other and with external stakeholders, and how they co-ordinate their activities.

Journal ArticleDOI
TL;DR: A knowledge strategy planning methodology, called the P2-KSP methodology, is introduced that places its emphasis on improving organizational performance by identifying and leveraging knowledge directly related to business processes and performance.
Abstract: This study aims at suggesting an integrative methodology for planning knowledge management initiatives. First, four major underpinning assumptions which should be addressed in knowledge management are identified through literature reviews on strategic information systems planning and knowledge management. Based on these assumptions, we introduce a knowledge strategy planning methodology, called P2-KSP methodology. The P2-KSP methodology places its emphasis on improving organizational performance by identifying and leveraging knowledge directly related to business processes and performance. The methodology consists of five phases: business environment analysis, knowledge requirements analysis, knowledge management strategy establishment, knowledge management architecture design, and knowledge management implementation planning. After its detailed procedures and related features are explained, results of applying it to a large semiconductor manufacturer's knowledge management project are discussed.

Journal ArticleDOI
TL;DR: Refuting the notion of technology as a replacement for knowledge, this paper focuses on a gap between them that needs to be bridged, and reviews two models of knowledge.

Journal ArticleDOI
TL;DR: The paper demonstrates that the process of embedding knowledge and routines in software holds fundamental implications for the ability of heterogeneous organizational groups, functions and communities to co-ordinate their efforts and share knowledge across function-, discipline- and task-specific boundaries.
Abstract: Recent advances in information and communication technologies have provided a substantial push towards the codification of organizational knowledge and practices. It is argued that codification, and the subsequent delegation of organizational memory to software, entails fundamental structural transformations to knowledge and routines as these are reconfigured and replicated in the form of new computer-embedded representations. The paper demonstrates that the process of embedding knowledge and routines in software holds fundamental implications for the ability of heterogeneous organizational groups, functions and communities to co-ordinate their efforts and share knowledge across function-, discipline- and task-specific boundaries.

Book
05 Jun 2003
TL;DR: In this book, a model of knowledge-in-the-making is proposed to capture the learning dynamics on the assembly line of an auto factory, and a theory of knowledge in organizations is developed.
Abstract: PART ONE: EPISTEMOLOGICAL FOUNDATIONS
1. Introduction
2. Knowing and Organizing
3. Studying Organizational Knowledge
PART TWO: ORGANIZATIONAL KNOWLEDGE IN ACTION
4. Tradition and Innovation at Fiat Auto
5. Knowledge-in-the-Making: The 'Construction' of Fiat Melfi's Factory
6. Breakdowns and Bottlenecks: Capturing the Learning Dynamics on the Assembly Line
7. Sense Making on the Shop Floor: The Narrative Dimension of Organizational Knowledge
PART THREE: BUILDING A THEORY OF KNOWLEDGE IN ORGANIZATIONS
8. Action, Content, and Time: A Processual Model of Knowing and Organizing
9. Re-thinking Knowledge in Organizations


Journal ArticleDOI
TL;DR: A framework for knowledge management support for teachers where the sharing of concrete knowledge scaffolds the attainment of more abstract levels of knowledge sharing is described.
Abstract: Business organizations worldwide are implementing techniques and technologies to better manage their knowledge. Their objective is to improve the quality of the contributions people make to their organizations by helping them to make sense of the context within which the organization exists; to take responsibility, cooperate, and share what they know and learn; and to effectively challenge, negotiate, and learn from others. We consider how the concepts, tools, and techniques of organizational knowledge management can be applied to the professional practices and development of teachers. We describe a framework for knowledge management support for teachers where the sharing of concrete knowledge scaffolds the attainment of more abstract levels of knowledge sharing. We describe the development of a knowledge management support system emphasizing long-term participatory design relationships between technologists and teachers, regional cooperation among teachers in adjacent school divisions, the integration of communication and practice, synchronous and asynchronous interactions, and multiple metaphors for organizing knowledge resources and activities.

Book
14 Nov 2003
TL;DR: The dawn of the knowledge economy; the complex nature of knowledge; intellectual capital; the role of technology in knowledge management; knowledge sharing; communication and organizational culture; communities of practice; the learning organization and organizational learning; knowledge management education.
Abstract: The dawn of the knowledge economy; The complex nature of knowledge; Intellectual capital; The role of technology in knowledge management; Knowledge sharing; Communication and organizational culture; Communities of practice; The learning organization and organizational learning; Knowledge management education.

Proceedings ArticleDOI
03 Nov 2003
TL;DR: This paper presents a system to formally annotate medical images captured to aid the diagnosis and management of breast cancer, enabling a series of semantics-based operations to be performed.
Abstract: The interpretation of medical evidence is normally presented in terms of a controlled, but diversely expressed specialist vocabulary and natural language phrases. Such informally expressed data require human intervention to ascertain its relevance in any specific case. In order to facilitate machine-based reasoning about the evidence gathered, additional interpretive semantics must be attached to the data; a shift from a merely data-intensive approach to a semantics-rich model of evidence. In this paper, we present a system to formally annotate medical images captured to aid the diagnosis and management of breast cancer, that enables a series of semantics-based operations to be performed. Our approach is grounded upon an imaging ontology specifying the domain knowledge and a description logic (DL) taxonomic inferential engine responsible for semantics-based reasoning and image retrieval.
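The paper's DL engine is not reproduced here; as a hedged illustration of the kind of semantics-based retrieval it enables, the sketch below uses a tiny hand-built concept hierarchy standing in for the imaging ontology, so that a query for a concept also returns images annotated with its subconcepts. All concept and image names are invented.

```python
# Toy subsumption-based image retrieval over an assumed concept hierarchy.

PARENT = {"Mass": "Lesion", "Calcification": "Lesion", "Lesion": "Finding"}

def subsumed_by(concept, query):
    """True if `concept` is `query` or a descendant of it in the hierarchy."""
    while concept is not None:
        if concept == query:
            return True
        concept = PARENT.get(concept)
    return False

annotations = {"img-001": "Mass", "img-002": "Calcification", "img-003": "Finding"}

def retrieve(query):
    """Return images whose annotation is subsumed by the query concept."""
    return [img for img, c in annotations.items() if subsumed_by(c, query)]

print(retrieve("Lesion"))  # -> ['img-001', 'img-002']
```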

Journal ArticleDOI
TL;DR: This paper surveys available software systems that support different knowledge management activities, categorizes these tools into classes based on their capabilities and functionality, and shows what tasks and knowledge-processing operations they support.
Abstract: Human capital is the main asset of many companies, whose knowledge has to be preserved and leveraged from individual to the company level, allowing continual learning and improvement. Knowledge management has various components and aspects such as socio‐cultural, organizational, and technological. In this paper we address the technological aspect; more precisely we survey available software systems that support different knowledge management activities. We categorize these tools into classes, based on their capabilities and functionality and show what tasks and knowledge processing operations they support.