
Showing papers on "Domain knowledge published in 2009"


Book
26 Nov 2009
TL;DR: A textbook spanning the nature of knowledge and knowing, intellectual capital, strategic management perspectives, organisational learning, knowledge management tools and systems, and the implementation of knowledge management.
Abstract: PART 1: The Nature of knowledge 1 Introduction to knowledge management 2 The nature of knowing PART 2: Leveraging knowledge 3 Intellectual capital 4 Strategic management perspectives PART 3: Creating knowledge 5 Organisational learning 6 The learning organisation PART 4: Knowledge management tools and systems 7 Knowledge management tools: component technologies 8 Knowledge management systems PART 5: Mobilising knowledge 9 Enabling knowledge contexts and networks 10 Implementing knowledge management Epilogue

442 citations


Proceedings ArticleDOI
14 Jun 2009
TL;DR: This work incorporates domain knowledge about the composition of words that should have high or low probability in various topics using a novel Dirichlet Forest prior in a Latent Dirichlet Allocation framework.
Abstract: Users of topic modeling methods often have knowledge about the composition of words that should have high or low probability in various topics. We incorporate such domain knowledge using a novel Dirichlet Forest prior in a Latent Dirichlet Allocation framework. The prior is a mixture of Dirichlet tree distributions with special structures. We present its construction, and inference via collapsed Gibbs sampling. Experiments on synthetic and real datasets demonstrate our model's ability to follow and generalize beyond user-specified domain knowledge.
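The paper's Dirichlet Forest prior mixes Dirichlet tree distributions with special structure; a much simpler way to see how prior pseudo-counts can encode "these words should have high probability in this topic" is an asymmetric Dirichlet prior over the topic-word distribution. The vocabulary, topic count, and pseudo-count values below are invented for illustration; this sketches the seeding idea, not the Dirichlet Forest construction itself:

```python
import numpy as np

# Hypothetical 6-word vocabulary; we want topic 0 to favor "goal"/"score".
vocab = ["goal", "score", "bank", "loan", "tree", "river"]
rng = np.random.default_rng(0)

base = 0.1                             # symmetric base concentration
eta = np.full((2, len(vocab)), base)   # one prior row per topic
eta[0, vocab.index("goal")] += 50.0    # pseudo-counts encode the
eta[0, vocab.index("score")] += 50.0   # "high probability" preference

# Topic-word distributions drawn from the seeded priors.
phi = np.vstack([rng.dirichlet(eta[k]) for k in range(2)])

# Topic 0 now concentrates its probability mass on the seeded words.
seeded_mass = phi[0, vocab.index("goal")] + phi[0, vocab.index("score")]
print(round(float(seeded_mass), 3))
```

In a full collapsed Gibbs sampler these pseudo-counts would simply be added to the word-topic count matrix, biasing sampling toward the user's preferences.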

436 citations


Journal ArticleDOI
TL;DR: A process model for knowledge transfer is proposed, using theories of knowledge communication and knowledge translation, based on a research project titled "Procurement for innovation and knowledge transfer (ProFIK)".
Abstract: Purpose – The purpose of this paper is to propose a process model for knowledge transfer using theories of knowledge communication and knowledge translation. Design/methodology/approach – Most of what is put forward in this paper is based on a research project titled "Procurement for innovation and knowledge transfer (ProFIK)". The project is funded by a UK government research council – the Engineering and Physical Sciences Research Council (EPSRC). The discussion is mainly grounded in a thorough review of literature accomplished as part of the research project. Findings – The process model developed in this paper builds upon the theory of knowledge transfer and the theory of communication. Knowledge transfer, per se, is not a mere transfer of knowledge; it involves different stages of knowledge transformation. Depending on the context of knowledge transfer, it can also be influenced by many factors, some positive and some negative. The developed model of knowledge transfer attempts to encapsu...

386 citations


Proceedings ArticleDOI
10 Oct 2009
TL;DR: KNOWROB is a first-order knowledge representation based on description logics that provides specific mechanisms and tools for action-centered representation, for the automated acquisition of grounded concepts through observation and experience, for reasoning about and managing uncertainty, and for fast inference — knowledge processing features that are particularly necessary for autonomous robot control.
Abstract: Knowledge processing is an essential technique for enabling autonomous robots to do the right thing to the right object in the right way. Using knowledge processing, robots can achieve more flexible and general behavior and better performance. While knowledge representation and reasoning has been a well-established research field in Artificial Intelligence for several decades, little work has been done to design and realize knowledge processing mechanisms for use in the context of robotic control. In this paper, we report on KNOWROB, a knowledge processing system particularly designed for autonomous personal robots. KNOWROB is a first-order knowledge representation based on description logics that provides specific mechanisms and tools for action-centered representation, for the automated acquisition of grounded concepts through observation and experience, for reasoning about and managing uncertainty, and for fast inference — knowledge processing features that are particularly necessary for autonomous robot control.

328 citations


Proceedings ArticleDOI
09 Feb 2009
TL;DR: Large-scale analysis of real-world interactions allows us to understand how expertise relates to vocabulary, resource use, and search task under more realistic search conditions than has been possible in previous small-scale studies.
Abstract: Domain experts search differently than people with little or no domain knowledge. Previous research suggests that domain experts employ different search strategies and are more successful in finding what they are looking for than non-experts. In this paper we present a large-scale, longitudinal, log-based analysis of the effect of domain expertise on web search behavior in four different domains (medicine, finance, law, and computer science). We characterize the nature of the queries, search sessions, web sites visited, and search success for users identified as experts and non-experts within these domains. Large-scale analysis of real-world interactions allows us to understand how expertise relates to vocabulary, resource use, and search task under more realistic search conditions than has been possible in previous small-scale studies. Building upon our analysis we develop a model to predict expertise based on search behavior, and describe how knowledge about domain expertise can be used to present better results and query suggestions to users and to help non-experts gain expertise.
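The paper builds a model to predict expertise from search behavior; the session features below (query length, technical-vocabulary ratio, pages per session) and the hand-rolled logistic regression are invented for illustration, since the abstract does not specify the learner or feature set:

```python
import numpy as np

# Synthetic per-session features inspired by the paper's analysis:
# [mean query length, fraction of technical vocabulary, pages per session]
X = np.array([
    [2.1, 0.05, 3.0],   # non-expert sessions
    [2.4, 0.08, 2.5],
    [1.9, 0.02, 4.0],
    [4.2, 0.40, 6.0],   # expert sessions
    [3.8, 0.35, 7.5],
    [4.5, 0.50, 5.5],
])
y = np.array([0, 0, 0, 1, 1, 1])

# Standardize, then fit logistic regression by plain gradient descent.
Xs = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
    grad_w, grad_b = Xs.T @ (p - y) / len(y), (p - y).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

pred = (1.0 / (1.0 + np.exp(-(Xs @ w + b))) > 0.5).astype(int)
print(pred)  # recovers the training labels on this separable toy data
```

A real system would train on millions of logged sessions; the point here is only the shape of the pipeline: behavioral features in, expert/non-expert label out.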

320 citations


Proceedings ArticleDOI
10 Mar 2009
TL;DR: This paper describes a security ontology that provides an ontological structure for information security domain knowledge and can be used to support a broad range of information security risk management approaches.
Abstract: Unified and formal knowledge models of the information security domain are fundamental requirements for supporting and enhancing existing risk management approaches. This paper describes a security ontology which provides an ontological structure for information security domain knowledge. Besides existing best-practice guidelines, such as the German IT Grundschutz Manual, concrete knowledge of the considered organization is also incorporated. An evaluation conducted by an information security expert team has shown that this knowledge model can be used to support a broad range of information security risk management approaches.

268 citations


Proceedings ArticleDOI
07 Sep 2009
TL;DR: A data mining approach to opponent modeling in strategy games encodes game logs as feature vectors, where each feature describes when a unit or building type is first produced; this representation has higher predictive capability and is more tolerant of noise than a state lattice representation.
Abstract: We present a data mining approach to opponent modeling in strategy games. Expert gameplay is learned by applying machine learning techniques to large collections of game logs. This approach enables domain independent algorithms to acquire domain knowledge and perform opponent modeling. Machine learning algorithms are applied to the task of detecting an opponent's strategy before it is executed and predicting when an opponent will perform strategic actions. Our approach involves encoding game logs as a feature vector representation, where each feature describes when a unit or building type is first produced. We compare our representation to a state lattice representation in perfect and imperfect information environments and the results show that our representation has higher predictive capabilities and is more tolerant of noise. We also discuss how to incorporate our data mining approach into a full game playing agent.
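The feature encoding described above is straightforward to reproduce; the unit names, timings, and horizon sentinel below are invented for illustration:

```python
# Each game log is a list of (game_time, unit_or_building) production events.
log = [
    (30, "worker"), (95, "barracks"), (140, "marine"),
    (150, "worker"), (210, "factory"), (260, "tank"),
]

unit_types = ["worker", "barracks", "marine", "factory", "tank", "starport"]

def first_production_vector(events, types, horizon=600):
    """Feature i = time the i-th unit type is FIRST produced;
    types never produced get the horizon as a sentinel value."""
    first = {}
    for t, unit in events:
        first.setdefault(unit, t)       # keep only the earliest time
    return [first.get(u, horizon) for u in types]

vec = first_production_vector(log, unit_types)
print(vec)  # [30, 95, 140, 210, 260, 600]
```

Fixed-length vectors like this can be fed directly to standard classifiers for strategy prediction, which is what makes the encoding attractive for domain-independent algorithms.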

240 citations


Proceedings Article
07 Dec 2009
TL;DR: This work advocates using an imperative language to express various aspects of model structure, inference, and learning, and implements imperatively defined factor graphs in a system called FACTORIE, a software library for an object-oriented, strongly-typed, functional language.
Abstract: Discriminatively trained undirected graphical models have had wide empirical success, and there has been increasing interest in toolkits that ease their application to complex relational data. The power in relational models is in their repeated structure and tied parameters; at issue is how to define these structures in a powerful and flexible way. Rather than using a declarative language, such as SQL or first-order logic, we advocate using an imperative language to express various aspects of model structure, inference, and learning. By combining the traditional, declarative, statistical semantics of factor graphs with imperative definitions of their construction and operation, we allow the user to mix declarative and procedural domain knowledge, and also gain significant efficiencies. We have implemented such imperatively defined factor graphs in a system we call FACTORIE, a software library for an object-oriented, strongly-typed, functional language. In experimental comparisons to Markov Logic Networks on joint segmentation and coreference, we find our approach to be 3-15 times faster while reducing error by 20-25%—achieving a new state of the art.
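FACTORIE itself is a Scala library; the Python sketch below only illustrates the imperative style the paper advocates, where model structure is built in ordinary code rather than declared in SQL or first-order logic. The tiny variables, factors, and brute-force MAP inference are invented stand-ins for FACTORIE's actual machinery:

```python
from itertools import product

# Variables and factors are created imperatively, in ordinary code,
# rather than declared in a logic language.
class FactorGraph:
    def __init__(self):
        self.variables, self.factors = {}, []

    def add_variable(self, name, domain):
        self.variables[name] = domain

    def add_factor(self, names, score_fn):
        self.factors.append((names, score_fn))

    def map_assignment(self):
        """Brute-force MAP over the (tiny) joint domain."""
        names = list(self.variables)
        best, best_score = None, float("-inf")
        for values in product(*(self.variables[n] for n in names)):
            assign = dict(zip(names, values))
            score = sum(fn(*(assign[n] for n in f_names))
                        for f_names, fn in self.factors)
            if score > best_score:
                best, best_score = assign, score
        return best

g = FactorGraph()
g.add_variable("label1", ["PER", "ORG"])
g.add_variable("label2", ["PER", "ORG"])
g.add_factor(["label1"], lambda a: 2.0 if a == "PER" else 0.0)           # unary evidence
g.add_factor(["label1", "label2"], lambda a, b: 1.5 if a == b else 0.0)  # tied labels

print(g.map_assignment())  # {'label1': 'PER', 'label2': 'PER'}
```

Because factors are plain functions attached by plain method calls, procedural domain knowledge (loops, conditionals, arbitrary feature code) mixes freely with the declarative factor-graph semantics, which is the design point of the paper.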

236 citations


Journal ArticleDOI
TL;DR: An axiom definition of knowledge granulation in knowledge bases is presented, under which some existing knowledge granulations become its special forms, and the concept of a knowledge distance for calculating the difference between two knowledge structures in the same knowledge base is introduced.

193 citations


Journal ArticleDOI
TL;DR: Experimental results on benchmark data sets give evidence that the proposed approach to particle swarm optimization (PSO) for classification tasks is very effective, despite its simplicity, and results obtained in the framework of a model selection challenge show the competitiveness of the models selected with PSO, compared to models selected with other techniques that focus on a single algorithm and that use domain knowledge.
Abstract: This paper proposes the application of particle swarm optimization (PSO) to the problem of full model selection, FMS, for classification tasks. FMS is defined as follows: given a pool of preprocessing methods, feature selection and learning algorithms, to select the combination of these that obtains the lowest classification error for a given data set; the task also includes the selection of hyperparameters for the considered methods. This problem generates a vast search space to be explored, well suited for stochastic optimization techniques. FMS can be applied to any classification domain as it does not require domain knowledge. Different model types and a variety of algorithms can be considered under this formulation. Furthermore, competitive yet simple models can be obtained with FMS. We adopt PSO for the search because of its proven performance in different problems and because of its simplicity, since neither expensive computations nor complicated operations are needed. Interestingly, the way the search is guided allows PSO to avoid overfitting to some extent. Experimental results on benchmark data sets give evidence that the proposed approach is very effective, despite its simplicity. Furthermore, results obtained in the framework of a model selection challenge show the competitiveness of the models selected with PSO, compared to models selected with other techniques that focus on a single algorithm and that use domain knowledge.
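A minimal PSO loop for hyperparameter search can be sketched as follows. The quadratic `cv_error` surrogate stands in for an actual cross-validation estimate, and the inertia and attraction constants are common textbook values, not necessarily those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def cv_error(params):
    """Stand-in for the cross-validation error of a model configuration;
    this toy surrogate has its minimum at (0.3, -0.5)."""
    x, y = params
    return (x - 0.3) ** 2 + (y + 0.5) ** 2

n, dim, iters = 20, 2, 100
pos = rng.uniform(-2, 2, (n, dim))        # particle positions = candidate configs
vel = np.zeros((n, dim))
pbest = pos.copy()                        # per-particle best positions
pbest_val = np.array([cv_error(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()  # swarm-wide best position

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Classic PSO velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cv_error(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.round(gbest, 2))  # near [0.3, -0.5]
```

In FMS the position vector would additionally encode discrete choices (which preprocessor, which learner), typically by rounding or by indexing into a method pool.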

179 citations


Journal ArticleDOI
TL;DR: A pragmatist approach to the multiplicity of forms of health-related knowledge, including biomedical knowledge, lay knowledge and critical constructionist knowledge, is proposed and implications for research methodology and the choice of research goals are identified.
Abstract: The multiplicity of forms of health-related knowledge, including biomedical knowledge, lay knowledge and critical constructionist knowledge, raises challenges for health researchers. On one hand, there is a demand for a pluralist acceptance of the variety of health-related knowledge. On the other, the need to improve health calls for action, and thus for choices between opposing forms of knowledge. The present article proposes a pragmatist approach to this epistemological problem. According to pragmatism, knowledge is a tool for action and as such it should be evaluated according to whether it serves our desired interests. We identify implications for research methodology and the choice of research goals.

Journal ArticleDOI
TL;DR: The benefits, trends, current possibilities, and the potential this holds for the biosciences are reviewed.
Abstract: New knowledge is produced at a continuously increasing speed, and the list of papers, databases and other knowledge sources that a researcher in the life sciences needs to cope with is actually turning into a problem rather than an asset. The adequate management of knowledge is therefore becoming fundamentally important for life scientists, especially if they work with approaches that thoroughly depend on knowledge integration, such as systems biology. Several initiatives to organize biological knowledge sources into a readily exploitable resourceome are presently being carried out. Ontologies and Semantic Web technologies revolutionize these efforts. Here, we review the benefits, trends, current possibilities, and the potential this holds for the biosciences.

Journal ArticleDOI
TL;DR: This paper builds on the 3C3R problem design model, which is a systematic conceptual framework for guiding the design of effective and reliable problems for PBL, and introduces a 9-step problem design process.

Journal ArticleDOI
TL;DR: In this article, a value engineering knowledge management system (VE-KMS) is developed, which applies the theory of inventive problem-solving and integrates its creativity tools into the creativity phase of the VE process.

Journal ArticleDOI
TL;DR: This work proposes a comprehensive visual-interactive monitoring and control framework extending the basic SOM algorithm, demonstrating its potential in combining both unsupervised (machine) and supervised (human expert) processing, in producing appropriate cluster results.
Abstract: Visual-interactive cluster analysis provides valuable tools for effectively analyzing large and complex data sets. Owing to desirable properties and an inherent predisposition for visualization, the Kohonen Feature Map (or Self-Organizing Map or SOM) algorithm is among the most popular and widely used visual clustering techniques. However, the unsupervised nature of the algorithm may be disadvantageous in certain applications. Depending on initialization and data characteristics, cluster maps (cluster layouts) may emerge that do not comply with user preferences, expectations or the application context. Considering SOM-based analysis of trajectory data, we propose a comprehensive visual-interactive monitoring and control framework extending the basic SOM algorithm. The framework implements the general Visual Analytics idea to effectively combine automatic data analysis with human expert supervision. It provides simple, yet effective facilities for visually monitoring and interactively controlling the trajectory clustering process at arbitrary levels of detail. The approach allows the user to leverage existing domain knowledge and user preferences, arriving at improved cluster maps. We apply the framework on several trajectory clustering problems, demonstrating its potential in combining both unsupervised (machine) and supervised (human expert) processing, in producing appropriate cluster results.
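The underlying SOM update the framework extends is compact. The sketch below trains a tiny one-dimensional map on invented two-cluster data; it omits the paper's actual contribution, the interactive monitoring and control layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D "trajectory feature" data with two clear clusters.
data = np.vstack([
    rng.normal([0.0, 0.0], 0.1, (50, 2)),
    rng.normal([1.0, 1.0], 0.1, (50, 2)),
])

# A 1x4 SOM: unit positions on a line define the neighborhood topology.
grid = np.arange(4)
units = rng.normal(0.5, 0.1, (4, 2))

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)           # decaying learning rate
    sigma = 2.0 * (1 - epoch / 30) + 0.1  # shrinking neighborhood radius
    for x in rng.permutation(data):
        bmu = int(np.argmin(((units - x) ** 2).sum(axis=1)))
        # Gaussian neighborhood: the winner and its grid neighbors move.
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
        units += lr * h[:, None] * (x - units)

# Quantization error: mean distance from each point to its nearest unit.
qe = np.mean([np.min(np.linalg.norm(units - x, axis=1)) for x in data])
print(round(float(qe), 3))
```

The paper's framework intervenes in exactly this loop: the user can inspect the emerging map and override initialization or unit updates when the layout conflicts with domain knowledge.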

Journal ArticleDOI
TL;DR: Empirically comparing the performance of two sets of classifiers for bank failure prediction, one built using raw accounting variables and the other built using constructed financial ratios indicates that feature construction, guided by domain knowledge, significantly improves classifier performance and that the degree of improvement varies significantly across the methods.
Abstract: While extensive research in data mining has been devoted to developing better classification algorithms, relatively little research has been conducted to examine the effects of feature construction, guided by domain knowledge, on classification performance. However, in many application domains, domain knowledge can be used to construct higher-level features to potentially improve performance. For example, past research and regulatory practice in early warning of bank failures has resulted in various explanatory variables, in the form of financial ratios, that are constructed based on bank accounting variables and are believed to be more effective than the original variables in identifying potential problem banks. In this study, we empirically compare the performance of two sets of classifiers for bank failure prediction, one built using raw accounting variables and the other built using constructed financial ratios. Four popular data mining methods are used to learn the classifiers: logistic regression, decision tree, neural network, and k-nearest neighbor. We evaluate the classifiers on the basis of expected misclassification cost under a wide range of possible settings. The results of the study strongly indicate that feature construction, guided by domain knowledge, significantly improves classifier performance and that the degree of improvement varies significantly across the methods.
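The ratio-construction step can be made concrete as follows; the accounting variables, ratio definitions, and thresholds are invented for illustration and are not the study's actual feature set:

```python
# Raw accounting variables for three hypothetical banks.
banks = [
    {"name": "A", "equity": 80,  "assets": 1000, "loans": 700, "npl": 10},
    {"name": "B", "equity": 30,  "assets": 900,  "loans": 750, "npl": 60},
    {"name": "C", "equity": 120, "assets": 1500, "loans": 900, "npl": 15},
]

def ratios(b):
    """Domain-knowledge features: capital adequacy and loan quality,
    constructed from the raw dollar amounts."""
    return {
        "equity_to_assets": b["equity"] / b["assets"],
        "npl_to_loans": b["npl"] / b["loans"],
    }

# A crude early-warning rule over the constructed ratios; in the study
# the ratios instead feed four learned classifiers.
def at_risk(b):
    r = ratios(b)
    return r["equity_to_assets"] < 0.05 or r["npl_to_loans"] > 0.05

flags = {b["name"]: at_risk(b) for b in banks}
print(flags)  # {'A': False, 'B': True, 'C': False}
```

The point of the construction is scale invariance: a $30M equity cushion means something very different for a $900M bank than for a $9B one, and ratios expose that to the learner directly.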

Journal ArticleDOI
TL;DR: This study investigates a particular type of GDT working 'around the clock': the 24-h knowledge factory and shows how the co-located team and the GDT enacted a knowledge codification strategy and a personalization strategy, respectively; in each case grafting elements of the other strategy in order to attain both knowledge re-use and creativity.
Abstract: The relocation of knowledge work to emerging countries is leading to an increasing use of globally distributed teams (GDT) engaged in complex tasks. In the present study, we investigate a particular type of GDT working 'around the clock': the 24-h knowledge factory (Gupta, 2008). Adopting the productivity perspective on knowledge sharing (Haas and Hansen, 2005, 2007), we hypothesize how a 24-h knowledge factory and a co-located team will differ in technology use, knowledge sharing processes, and performance. We conducted a quasi-experiment in IBM, collecting both quantitative and qualitative data, over a period of 12 months, on a GDT and a co-located team. Both teams were composed of the same number of professionals, provided with the same technologies, engaged in similar tasks, and given similar deadlines. We found significant differences in their use of technologies and in knowledge sharing processes, but not in efficiency and quality of outcomes. We show how the co-located team and the GDT enacted a knowledge codification strategy and a personalization strategy, respectively; in each case grafting elements of the other strategy in order to attain both knowledge re-use and creativity. We conclude by discussing theoretical contributions to knowledge sharing and GDT literatures, and by highlighting managerial implications for those organizations interested in developing a fully functional 24-h knowledge factory.

Book
01 Jan 2009
TL;DR: Invited Talks.
Abstract: Invited Talks.- Knowledge Patterns.- Computational Semantics and Knowledge Engineering.- Principles for Knowledge Engineering on the Web.- Knowledge Patterns and Knowledge Representation.- Applying Ontology Design Patterns in Bio-ontologies.- A Pattern and Rule-Based Approach for Reusing Adaptive Hypermedia Creator's Models.- Natural Language-Based Approach for Helping in the Reuse of Ontology Design Patterns.- On the Influence of Description Logics Ontologies on Conceptual Similarity.- Polishing Diamonds in OWL 2.- Formalizing Ontology Modularization through the Notion of Interfaces.- Correspondence Patterns for Ontology Alignment.- Matching Ontologies and Data Integration.- Learning Disjointness for Debugging Mappings between Lightweight Ontologies.- Towards a Rule-Based Matcher Selection.- An Analysis of the Origin of Ontology Mismatches on the Semantic Web.- Preference-Based Uncertain Data Integration.- Natural Language, Knowledge Acquisition and Annotations.- Unsupervised Discovery of Compound Entities for Relationship Extraction.- Formal Concept Analysis: A Unified Framework for Building and Refining Ontologies.- Contextualized Knowledge Acquisition in a Personal Semantic Wiki.- Using the Intension of Classes and Properties Definition in Ontologies for Word Sense Disambiguation.- Mapping General-Specific Noun Relationships to WordNet Hypernym/Hyponym Relations.- Analysing Ontological Structures through Name Pattern Tracking.- Semi-automatic Construction of an Ontology and of Semantic Annotations from a Discussion Forum of a Community of Practice.- OMEGA: An Automatic Ontology Metadata Generation Algorithm.- Automatic Tag Suggestion Based on Resource Contents.- Integration of Semantically Annotated Data by the KnoFuss Architecture.- Search, Query and Interaction.- A Visual Approach to Semantic Query Design Using a Web-Based Graphical Query Designer.- Search Query Generation with MCRDR Document Classification Knowledge.- Ontological Profiles in Enterprise Search.- Ontologies.- A Generic Ontology for Collaborative Ontology-Development Workflows.- GoodRelations: An Ontology for Describing Products and Services Offers on the Web.- An Ontology-Centric Approach to Sensor-Mission Assignment.- Ontology Based Legislative Drafting: Design and Implementation of a Multilingual Knowledge Resource.- Situated Cognition in the Semantic Web Era.- E-Business Vocabularies as a Moving Target: Quantifying the Conceptual Dynamics in Domains.- A Platform for Object-Action Semantic Web Interaction.

Journal ArticleDOI
TL;DR: A hybrid case-based and rule-based reasoning model that can provide clinical decision support for all ICU domains, unlike purely rule-based inference models, which are highly domain-knowledge specific.
Abstract: This paper presents a hybrid approach of case-based reasoning and rule-based reasoning, as an alternative to the purely rule-based method, to build a clinical decision support system for the ICU. This enables the system to tackle problems like high complexity, inexperienced new staff and changing medical conditions. The purely rule-based method has its limitations, since it requires explicit knowledge of the details of each ICU domain, such as the cardiac domain, and hence takes years to build a knowledge base. Case-based reasoning uses knowledge in the form of specific cases to solve a new problem, and the solution is based on the similarities between the new problem and the available cases. This paper presents a model based on case-based reasoning and rule-based reasoning which can provide clinical decision support for all ICU domains, unlike rule-based inference models, which are highly domain-knowledge specific. Experiments with real ICU data as well as simulated data clearly demonstrate the efficacy of the proposed method.
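The retrieval step at the core of case-based reasoning can be sketched as follows. The vital-sign features, scaling constants, and actions below are invented for illustration and are not clinical guidance:

```python
import math

# Stored ICU cases: [heart rate, temperature, SpO2] with the action taken.
cases = [
    {"features": [120, 38.5, 90], "action": "administer antipyretic"},
    {"features": [80, 36.8, 98],  "action": "routine monitoring"},
    {"features": [140, 37.0, 85], "action": "check oxygen supply"},
]

def similarity(a, b, scales=(60, 3, 15)):
    """Inverse scaled Euclidean distance between two feature vectors."""
    d = math.sqrt(sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, scales)))
    return 1.0 / (1.0 + d)

def retrieve(new_case):
    """CBR retrieval step: reuse the action of the most similar stored case."""
    best = max(cases, key=lambda c: similarity(c["features"], new_case))
    return best["action"]

# New patient: high heart rate, fever, adequate saturation.
print(retrieve([118, 38.2, 91]))  # administer antipyretic
```

In the paper's hybrid design, rules would then validate or adapt the retrieved solution, so that case similarity supplies breadth across ICU domains while rules supply safety checks.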

Journal ArticleDOI
TL;DR: This paper proposes an attributes-based ant colony system (AACS), built on an attribute-based search mechanism, to help learners find adaptive learning objects more effectively.
Abstract: Teachers usually have a personal understanding of what "good teaching" means, and as a result of their experience and educationally related domain knowledge, many of them create learning objects (LO) and put them on the web for study use. In fact, most students cannot find the most suitable LO (e.g. learning materials, learning assets, or learning packages) on the web. Consequently, many researchers have focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and to adaptively provide learning paths. However, most personalized learning mechanisms neglect to consider the relationship between learner attributes (e.g. learning style, domain knowledge) and LO attributes. Thus, it is not easy for a learner to find an adaptive learning object that reflects his own attributes in relation to learning object attributes. Therefore, in this paper, based on an ant colony optimization (ACO) algorithm, we propose an attributes-based ant colony system (AACS) to help learners find an adaptive learning object more effectively. Our paper makes four critical contributions: (1) it presents an attribute-based search mechanism to find adaptive learning objects effectively; (2) an attributes-ant algorithm is proposed; (3) an adaptive learning rule is developed to identify how learners with different attributes may locate learning objects which have a higher probability of being useful and suitable; (4) a web-based learning portal was created for learners to find the learning objects more effectively.

Journal ArticleDOI
TL;DR: This paper provides a study of word clustering and selection-based feature reduction approaches for named entity recognition using a maximum entropy classifier; the performance is found to be superior to existing systems which do not use domain knowledge.

Proceedings ArticleDOI
28 Jun 2009
TL;DR: This work builds upon the belief propagation algorithm for use in detecting collusion and other fraud schemes, and proposes an algorithm called SNARE (Social Network Analysis for Risk Evaluation), which is robust to the choice of parameters and highly scalable, scaling linearly with the number of edges in a graph.
Abstract: Classifying nodes in networks is a task with a wide range of applications. It can be particularly useful in anomaly and fraud detection. Many resources are invested in the task of fraud detection due to the high cost of fraud, and being able to automatically detect potential fraud quickly and precisely allows human investigators to work more efficiently. Many data analytic schemes have been put into use; however, schemes that bolster link analysis prove promising. This work builds upon the belief propagation algorithm for use in detecting collusion and other fraud schemes. We propose an algorithm called SNARE (Social Network Analysis for Risk Evaluation). By allowing one to use domain knowledge as well as link knowledge, the method was very successful for pinpointing misstated accounts in our sample of general ledger data, with a significant improvement over the default heuristic in true positive rates, and a lift factor of up to 6.5 (more than twice that of the default heuristic). We also apply SNARE to the task of graph labeling in general on publicly-available datasets. We show that with only some information about the nodes themselves in a network, we get surprisingly high accuracy of labels. Not only is SNARE applicable in a wide variety of domains, but it is also robust to the choice of parameters and highly scalable, scaling linearly with the number of edges in a graph.
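SNARE's actual inference is belief propagation; the sketch below substitutes a simpler iterative neighbor-averaging scheme to show the guilt-by-association idea on an invented toy graph, where domain knowledge enters as a per-node prior:

```python
import numpy as np

# Tiny transaction graph: node 2 is a known risky account (prior evidence);
# edges propagate suspicion to associated accounts.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
n = 5
prior = np.full(n, 0.1)        # baseline probability of being risky
prior[2] = 0.9                 # domain knowledge about node 2

neighbors = {i: [] for i in range(n)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# Iterative propagation: each node mixes its own prior with its
# neighbors' current scores (a simplified stand-in for belief propagation).
score = prior.copy()
for _ in range(20):
    new = score.copy()
    for i in range(n):
        nb = np.mean([score[j] for j in neighbors[i]])
        new[i] = 0.5 * prior[i] + 0.5 * nb
    score = new

order = np.argsort(-score)
print(order[:3])  # nodes nearest the known risky account rank highest
```

Like SNARE, each pass touches every edge a constant number of times, which is where the linear scaling in the number of edges comes from.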

Proceedings Article
07 Dec 2009
TL;DR: An infinite POMDP (iPOMDP) model is defined that does not require knowledge of the size of the state space; instead, it assumes that the number of visited states will grow as the agent explores its world, and it models only visited states explicitly.
Abstract: The Partially Observable Markov Decision Process (POMDP) framework has proven useful in planning domains where agents must balance actions that provide knowledge and actions that provide reward. Unfortunately, most POMDPs are complex structures with a large number of parameters. In many real-world problems, both the structure and the parameters are difficult to specify from domain knowledge alone. Recent work in Bayesian reinforcement learning has made headway in learning POMDP models; however, this work has largely focused on learning the parameters of the POMDP model. We define an infinite POMDP (iPOMDP) model that does not require knowledge of the size of the state space; instead, it assumes that the number of visited states will grow as the agent explores its world and only models visited states explicitly. We demonstrate the iPOMDP on several standard problems.

Journal ArticleDOI
TL;DR: This paper presents a learning algorithm to incorporate domain knowledge into the learning to regularize the otherwise ill-posed problem, to limit the search space, and to avoid local optima.

Journal ArticleDOI
TL;DR: A system is developed which provides the user with a platform to analyze opinion expressions crawled from a set of pre-defined blogs, aimed at extracting and consolidating customer opinions from blogs and feedback at multiple levels of granularity.
Abstract: The proliferation of the Internet has not only led to the generation of huge volumes of unstructured information in the form of web documents, but a large amount of text is also generated in the form of emails, blogs, feedback, etc. The data generated from online communication act as potential gold mines for discovering knowledge, particularly for market researchers. Text analytics has matured and is being successfully employed to mine important information from unstructured text documents. The chief bottleneck in designing text mining systems for handling blogs arises from the fact that online communication text data are often noisy. These texts are informally written. They suffer from spelling mistakes, grammatical errors, improper punctuation and irrational capitalization. This paper focuses on opinion extraction from noisy text data. It is aimed at extracting and consolidating opinions of customers from blogs and feedback, at multiple levels of granularity. We propose a framework in which these texts are first cleaned using domain knowledge and then subjected to mining. Ours is a semi-automated approach, in which the system aids in the process of knowledge assimilation for knowledge-base building and also performs the analytics. Domain experts ratify the knowledge base and also provide training samples for the system to automatically gather more instances for ratification. The system identifies opinion expressions as phrases containing opinion words, opinionated features and also opinion modifiers. These expressions are categorized as positive or negative with membership values varying from zero to one. Opinion expressions are identified and categorized using localized linguistic techniques. Opinions can be aggregated at any desired level of specificity, i.e. feature level or product level, user level or site level, etc.
We have developed a system based on this approach, which provides the user with a platform to analyze opinion expressions crawled from a set of pre-defined blogs.
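The clean-then-mine pipeline can be sketched with a toy spelling dictionary and opinion lexicon; all entries below are invented, whereas the paper's knowledge base is domain-specific and expert-ratified:

```python
import re

# Tiny illustrative knowledge base: noisy-spelling corrections,
# opinion words, and modifiers that flip or amplify polarity.
corrections = {"gr8": "great", "batery": "battery", "lyf": "life"}
positive = {"great", "good", "excellent"}
negative = {"poor", "bad", "terrible"}
modifiers = {"not": -1.0, "very": 1.5}

def clean(text):
    """Normalize case and apply the domain spelling dictionary."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return [corrections.get(w, w) for w in words]

def polarity(text):
    """Score opinion words, letting a preceding modifier flip or amplify."""
    words = clean(text)
    score = 0.0
    for i, w in enumerate(words):
        base = 1.0 if w in positive else -1.0 if w in negative else 0.0
        if base and i > 0 and words[i - 1] in modifiers:
            base *= modifiers[words[i - 1]]
        score += base
    return score

print(polarity("Gr8 phone but batery lyf is not good"))  # 1.0 + (-1.0) = 0.0
```

Attaching each scored expression to the feature it mentions ("battery life") is what enables the paper's aggregation at feature, product, user, or site level.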

Journal ArticleDOI
TL;DR: Results involving topic familiarity ratings and a no-reading control group suggest that higher knowledge readers are not more likely to ignore text-specific cues in favor of a domain familiarity heuristic, but they do appear to make more effective use of domain familiarity in predicting absolute performance levels.
Abstract: In the present research, we examined the relationship between readers' domain knowledge and their ability to judge their comprehension of novel domain-related material. Participants with varying degrees of baseball knowledge read five texts on baseball-related topics and five texts on non-baseball-related topics, predicted their performance, and completed tests for each text. Baseball knowledge was positively related to absolute accuracy within the baseball domain but was unrelated to relative accuracy within the baseball domain. Also, the readers showed a general underconfidence bias, but the bias was less extreme for higher knowledge readers. The results challenge common assumptions that experts' metacognitive judgments are less accurate than novices'. Results involving topic familiarity ratings and a no-reading control group suggest that higher knowledge readers are not more likely to ignore text-specific cues in favor of a domain familiarity heuristic, but they do appear to make more effective use of domain familiarity in predicting absolute performance levels.
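The two accuracy notions in the study can be made concrete: absolute accuracy compares predicted and actual scores directly, while relative accuracy asks whether predictions order the texts correctly. The numbers below are invented, and Pearson correlation is used where the metacomprehension literature often uses gamma:

```python
# Predicted and actual test scores (percent) for five texts.
predicted = [60, 70, 55, 80, 65]
actual    = [72, 78, 60, 88, 70]

# Absolute accuracy: mean signed deviation of prediction from performance.
# A negative value indicates underconfidence, as the study reports.
bias = sum(p - a for p, a in zip(predicted, actual)) / len(actual)

def pearson(xs, ys):
    """Relative accuracy: do the predictions rank the texts correctly?"""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rel = pearson(predicted, actual)
print(bias, round(rel, 3))  # underconfident (bias < 0) but well ordered
```

This hypothetical reader illustrates the study's pattern for high-knowledge readers: systematic underconfidence in absolute terms alongside good relative ordering of the texts.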

Journal ArticleDOI
TL;DR: The developing paradigm of product-service systems and the requirement for co-design of products and services have influenced the structure of the knowledge base and given rise to specific service-related requirements.
Abstract: This paper presents a framework for knowledge reuse in a product-service systems design scenario. The project aim is to develop a methodology to capture, represent and reuse knowledge to support pr...

Journal ArticleDOI
TL;DR: This paper proposes a framework comprising a process-centered knowledge model and an enterprise ontology for the context-rich, networked knowledge storage and retrieval required during task execution; a process-centered KMS (knowledge management system) was also developed.
Abstract: Among many enterprise assets, knowledge is treated as a critical driving force for attaining enterprise performance goals, because knowledge facilitates better business decision making in a timely fashion. However, since knowledge is created and utilized during the execution of business processes, knowledge that is separated from its business process context does not support taking the right action for target performance. This paper proposes a framework comprising a process-centered knowledge model and an enterprise ontology for the context-rich, networked knowledge storage and retrieval required during task execution. The enterprise knowledge object in the process-centered knowledge model is classified into two types: process knowledge and task support knowledge. In the proposed enterprise ontology, which represents major enterprise concepts and the relationships between them, all domain concepts are related to the "process" concept, both directly and indirectly. As a result, networked and sophisticated knowledge, rather than single-level knowledge, is provided to the participants in a unit activity. To show the applicability of the proposed framework, a process-centered KMS (knowledge management system) was also developed, comprising three parts: (1) a project management sub-system based on process knowledge; (2) a knowledge management sub-system for maintaining task support knowledge; and (3) an infrastructure sub-system that supports the other two.
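The core idea of the process-centered model can be sketched as follows: every knowledge object is linked to one or more "process" concepts, so retrieval during task execution follows those links rather than relying on keyword search alone. The class and process names below are illustrative assumptions, not the paper's actual ontology.

```python
# Minimal sketch: knowledge objects of the two types named in the abstract
# ("process" and "task-support"), each linked to process concepts, with
# retrieval driven by the process a participant is currently executing.
from dataclasses import dataclass, field

@dataclass
class KnowledgeObject:
    title: str
    kind: str                                     # "process" or "task-support"
    processes: set = field(default_factory=set)   # linked process concepts

class ProcessCenteredRepository:
    def __init__(self):
        self.objects = []

    def add(self, obj):
        self.objects.append(obj)

    def retrieve_for_task(self, process):
        """Return the knowledge objects linked to the given process concept."""
        return [o for o in self.objects if process in o.processes]

repo = ProcessCenteredRepository()
repo.add(KnowledgeObject("Order-handling checklist", "process", {"order-fulfilment"}))
repo.add(KnowledgeObject("Carrier rate sheet", "task-support", {"order-fulfilment", "shipping"}))
repo.add(KnowledgeObject("Hiring guideline", "task-support", {"recruitment"}))

hits = repo.retrieve_for_task("order-fulfilment")
```

In the paper's richer ontology, retrieval would also traverse indirect relations between concepts; the sketch shows only the direct process link that anchors every knowledge object.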

Journal ArticleDOI
TL;DR: In this paper, the idea that knowledge is dispersed among different individuals and entities is discussed. And the authors suggest that for international entrepreneurial firms to create new knowledge, they need to find ways to combi...
Abstract: This article rests on the idea that knowledge is dispersed among different individuals and entities. For international entrepreneurial firms to create new knowledge, they need to find ways to combi...

Journal ArticleDOI
TL;DR: An analysis of what software engineering ontology is, what it consists of, and what it is used for in the form of usage example scenarios is given.
Abstract: This paper aims to present an ontology model of software engineering to represent its knowledge. The fundamental knowledge of software engineering is well described in the textbook Software Engineering by Sommerville, now in its eighth edition (2004), and the white paper Software Engineering Body of Knowledge (SWEBOK) by the IEEE (2004), upon which the software engineering ontology is based. This paper gives an analysis of what the software engineering ontology is, what it consists of, and what it is used for, in the form of usage example scenarios. The usage scenarios presented in this paper highlight the characteristics of the software engineering ontology. The software engineering ontology assists in defining information for the exchange of semantic project information and is used as a communication framework. Its users are software engineers sharing domain knowledge as well as instance knowledge of software engineering.
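The distinction the abstract draws between domain knowledge (shared concepts and relations of software engineering) and instance knowledge (facts about one particular project) can be illustrated with a few subject-predicate-object triples. The concept, predicate, and project names below are illustrative assumptions, not taken from the ontology itself.

```python
# Domain knowledge: concepts and relations shared by all projects.
domain_triples = [
    ("Requirement", "is_a", "SoftwareArtifact"),
    ("TestCase", "verifies", "Requirement"),
]

# Instance knowledge: assertions about one specific project.
instance_triples = [
    ("REQ-12: login via SSO", "instance_of", "Requirement"),
    ("TC-3: SSO round-trip", "instance_of", "TestCase"),
    ("TC-3: SSO round-trip", "verifies", "REQ-12: login via SSO"),
]

def instances_of(concept, triples):
    """All individuals asserted as instances of a given ontology concept."""
    return [s for (s, p, o) in triples if p == "instance_of" and o == concept]

print(instances_of("Requirement", instance_triples))
```

Keeping the two triple sets separate is what lets engineers on different projects share the domain layer as a communication framework while exchanging only their own instance layer.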