
Showing papers on "Domain knowledge published in 2011"


Proceedings ArticleDOI
16 Jul 2011
TL;DR: The potential of recent machine learning methods for discovering universal features for context-aware applications of activity recognition is investigated and an alternative data representation based on the empirical cumulative distribution function of the raw data, which effectively abstracts from absolute values is described.
Abstract: Feature extraction for activity recognition in context-aware ubiquitous computing applications is usually a heuristic process, informed by underlying domain knowledge. Relying on such explicit knowledge is problematic when aiming to generalize across different application domains. We investigate the potential of recent machine learning methods for discovering universal features for context-aware applications of activity recognition. We also describe an alternative data representation based on the empirical cumulative distribution function of the raw data, which effectively abstracts from absolute values. Experiments on accelerometer data from four publicly available activity recognition datasets demonstrate the significant potential of our approach to address both contemporary activity recognition tasks and next generation problems such as skill assessment and the detection of novel activities.
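The ECDF idea can be sketched in a few lines: each sensor axis in a window is summarized by quantiles of its empirical distribution, so the feature vector captures the shape of the value distribution rather than hand-picked statistics. A minimal sketch, assuming a `(samples, axes)` window layout and 15 quantile points (illustrative settings, not the paper's exact configuration):

```python
import numpy as np

def ecdf_features(window, n_points=15):
    """Represent each axis of a sensor window by n_points quantiles of
    its empirical CDF plus the mean, abstracting from absolute values."""
    qs = np.linspace(0.0, 1.0, n_points)
    feats = []
    for axis in window.T:            # window shape: (samples, axes)
        feats.extend(np.quantile(axis, qs))   # inverse-ECDF samples
        feats.append(axis.mean())
    return np.array(feats)
```

Because quantiles are invariant to how often individual values repeat, the same feature extractor can be reused across datasets without domain-specific tuning.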

354 citations


Proceedings Article
01 Jan 2011
TL;DR: This paper presents and discusses the development and views of three terms: knowledge transfer, knowledge sharing and knowledge barriers and highlights the effects on the terms when two different knowledge perspectives, knowledge as an object (or the K-O view) and knowledge as a subjective contextual construction are applied.
Abstract: In the knowledge management world there are many different terms flying around. Some are more important and frequently used than others. In this paper, we present and discuss the development and views of three terms: knowledge transfer, knowledge sharing and knowledge barriers. Knowledge transfer and knowledge sharing are sometimes used synonymously or have overlapping content. Several authors have pointed out this confusion while other authors have attempted to clarify the differences and define the terms. Knowledge barriers in themselves seem to have a more obvious content although the borders between knowledge barriers and connecting terms, such as 'barriers to knowledge sharing', seem to blur discussions and views. Our aim is to make a contribution to finding appropriate demarcations between these concepts. After reviewing Knowledge Management literature, we can state that the three terms, knowledge transfer, knowledge sharing and knowledge barriers, are somewhat blurred. For knowledge transfer and knowledge sharing, the blurriness is linked mainly to the fact that the analytical level each term is related to has come and gone and come back again. For knowledge barriers, the blurriness comes from the development of the term. The mere existence of the many different categorizations of knowledge barriers implies that the concept itself is blurry. The concept seems clear cut and focuses on knowledge although it is also broad and later sources have included much more than knowledge. This paper concludes by highlighting the effects on the terms when two different knowledge perspectives, knowledge as an object (or the K-O view) and knowledge as a subjective contextual construction (or the K-SCC view) are applied. The clarifications are supported by examples from companies in different industries (such as Cargotec and IKEA) and the public sector (police, fire brigade, ambulance and other emergency services).

284 citations


Journal ArticleDOI
01 Feb 2011
TL;DR: A novel fuzzy expert system can work effectively for diabetes decision support application and the semantic fuzzy decision making mechanism simulates the semantic description of medical staff for diabetes-related application.
Abstract: An increasing number of decision support systems based on domain knowledge are adopted to diagnose medical conditions such as diabetes and heart disease. It is widely pointed out that classical ontologies cannot sufficiently handle imprecise and vague knowledge for some real-world applications, but fuzzy ontology can effectively resolve data and knowledge problems with uncertainty. This paper presents a novel fuzzy expert system for diabetes decision support. A five-layer fuzzy ontology, including a fuzzy knowledge layer, fuzzy group relation layer, fuzzy group domain layer, fuzzy personal relation layer, and fuzzy personal domain layer, is developed in the fuzzy expert system to describe knowledge with uncertainty. By applying the novel fuzzy ontology to the diabetes domain, the structure of the fuzzy diabetes ontology (FDO) is defined to model diabetes knowledge. Additionally, a semantic decision support agent (SDSA), including a knowledge construction mechanism, fuzzy ontology generating mechanism, and semantic fuzzy decision making mechanism, is also developed. The knowledge construction mechanism constructs the fuzzy concepts and relations based on the structure of the FDO. The instances of the FDO are generated by the fuzzy ontology generating mechanism. Finally, based on the FDO and the fuzzy ontology, the semantic fuzzy decision making mechanism simulates the semantic descriptions of medical staff for diabetes-related applications. Importantly, the proposed fuzzy expert system works effectively for diabetes decision support.
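To make "knowledge with uncertainty" concrete: a fuzzy concept replaces a crisp threshold with a graded membership function. A minimal illustration with a linear ramp; the glucose cut-offs are invented for the example and are not taken from the FDO:

```python
def ramp(x, a, b):
    """Linear membership ramp: 0 below a, 1 above b, graded in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

# Fuzzy concept "high fasting glucose" (illustrative cut-offs in mg/dL).
high_glucose = lambda mg_dl: ramp(mg_dl, 100.0, 126.0)
```

A crisp ontology would classify 110 mg/dL as simply "not high"; the fuzzy concept grades it as partially high, which is what allows a semantic decision mechanism to mimic the hedged language of clinicians.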

243 citations


Journal ArticleDOI
TL;DR: The major contribution of this work is an innovative, comprehensive semantic search model, which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword and semantic-based search.

241 citations


Journal ArticleDOI
TL;DR: This work proposes a framework that can identify cause and effect relationships among strategic objectives of strategy map through processing of the expertise and knowledge of senior managers and deploys DEMATEL as a framework for structural modeling approach subject to the problem.
Abstract: Research highlights: The proposed framework can identify cause and effect relationships among the strategic objectives of a strategy map by processing the expertise and knowledge of senior managers. In this process, an integrated structure is introduced for the macro-level balanced scorecard. In addition to the mathematical modeling of cause and effect relationships, a decision support system could be presented that may increase the efficiency of the balanced scorecard management system. The Balanced Scorecard (BSC) is a widely adopted performance management framework first introduced in the early 1990s. More recently, it has been proposed as the basis for a strategic management system. Strategy mapping is the most important task in building a Balanced Scorecard system: it is the process of visually mapping cause and effect relationships between all possible strategic objectives in an organization. Building and constructing a strategy map is a human-centric activity that can be considered the combination and integration of the knowledge and preferences of the managerial board. From the viewpoint of strategic decision making in an organization, the process of building a strategy map can be viewed within a unified group decision making context. If we see the strategy map as a structural modeling framework for establishing the cause and effect relationships among strategic objectives, it is possible to deploy the Decision Making Trial and Evaluation Laboratory (DEMATEL) method as a structural modeling approach for the problem. The DEMATEL method gathers collective knowledge to capture the causal relationships between strategic criteria. The model is especially practical and useful for visualizing the structure of complicated causal relationships with matrices or digraphs.
Generally speaking, because the preferences assigned between objectives in any strategy map are not necessarily crisp, and experts' domain knowledge can be extracted in a fuzzy environment, an extended fuzzy DEMATEL is proposed to deal with the ambiguities inherent in such judgments.
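The crisp DEMATEL computation underlying this framework is compact: normalize the direct-influence matrix, derive the total-relation matrix T = D(I - D)^(-1), and use its row and column sums to classify objectives as causes or effects. A sketch of the standard crisp method (not the paper's fuzzy extension):

```python
import numpy as np

def dematel(direct):
    """Classic crisp DEMATEL.

    direct: square matrix of pairwise direct-influence scores.
    Returns the total-relation matrix T, the prominence vector r + c,
    and the net cause/effect vector r - c (positive = net cause)."""
    A = np.asarray(direct, dtype=float)
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    D = A / s                                    # normalized direct matrix
    T = D @ np.linalg.inv(np.eye(len(A)) - D)    # total-relation matrix
    r, c = T.sum(axis=1), T.sum(axis=0)
    return T, r + c, r - c
```

Plotting each objective at (r + c, r - c) yields the causal digraph view the abstract describes: prominent objectives sit to the right, net causes above the horizontal axis.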

185 citations


Journal ArticleDOI
TL;DR: The objective of this article is to provide practical information for researchers and knowledge users as they consider what to include in dissemination and exchange plans developed as part of grant applications.

178 citations


Journal ArticleDOI
01 Feb 2011
TL;DR: It is suggested that, in an era of evolving financial fraud, computer-assisted automated fraud detection mechanisms will be more effective and efficient with specialized domain knowledge.
Abstract: A fraudulent financial statement involves the intentional furnishing and/or publishing of false information, and this has become a severe economic and social problem. We consider Data Mining (DM) based financial fraud detection techniques (such as regression, decision trees, neural networks and Bayesian networks) that help identify fraud. The effectiveness of these DM methods (and their limitations) is examined, especially when new schemes of financial statement fraud adapt to the detection techniques. We then explore a self-adaptive framework (based on a response surface model) with domain knowledge to detect financial statement fraud. We conclude by suggesting that, in an era of evolving financial fraud, computer-assisted automated fraud detection mechanisms will be more effective and efficient with specialized domain knowledge.

176 citations


Journal ArticleDOI
TL;DR: This paper presents a flexible architecture for density maps to enable custom, versatile exploration using multiple density fields and defines six different types of blocks to create, compose, and enhance trajectories or density fields.
Abstract: We consider moving objects as multivariate time-series. By visually analyzing the attributes, patterns may appear that explain why certain movements have occurred. Density maps as proposed by Scheepens et al. [25] are a way to reveal these patterns by means of aggregations of filtered subsets of trajectories. Since filtering is often not sufficient for analysts to express their domain knowledge, we propose to use expressions instead. We present a flexible architecture for density maps to enable custom, versatile exploration using multiple density fields. The flexibility comes from a script, depicted in this paper as a block diagram, which defines an advanced computation of a density field. We define six different types of blocks to create, compose, and enhance trajectories or density fields. Blocks are customized by means of expressions that allow the analyst to model domain knowledge. The versatility of our architecture is demonstrated with several maritime use cases developed with domain experts. Our approach is expected to be useful for the analysis of objects in other domains.
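One of the paper's "create" blocks can be approximated as follows: rasterize (optionally expression-weighted) trajectory points into a grid and smooth with a kernel. The grid size, the Gaussian kernel, and the `weight_expr` hook are assumptions standing in for the paper's script blocks and analyst expressions:

```python
import numpy as np

def density_field(points, shape=(64, 64), sigma=2.0, weight_expr=None):
    """Accumulate trajectory points into a density field.

    points: dicts with at least 'x' and 'y' grid coordinates; any extra
    attributes can be used by weight_expr, the analyst's expression."""
    field = np.zeros(shape)
    for p in points:
        w = weight_expr(p) if weight_expr else 1.0
        x, y = int(p["x"]), int(p["y"])
        if 0 <= x < shape[0] and 0 <= y < shape[1]:
            field[x, y] += w
    # Separable Gaussian smoothing (normalized 1-D kernel, both axes).
    k = np.exp(-0.5 * (np.arange(-6, 7) / sigma) ** 2)
    k /= k.sum()
    field = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 0, field)
    field = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, field)
    return field
```

An expression such as `lambda p: p["speed"]` turns the map from a count density into a speed-weighted density, which is the kind of domain-knowledge modeling the blocks enable.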

143 citations


Journal ArticleDOI
TL;DR: The causes and effects of knowledge barriers are determined and the use of conversational knowledge sharing as an effective instrument for knowledge sharing is proposed by using HOQ.
Abstract: Knowledge management involves the systematic management of vital knowledge resources and the associated processes of creating, gathering, organizing, diffusing, utilizing and exploiting information. A key challenge emerging for organizations is how to encourage knowledge sharing within an organization, because knowledge is an organization's intellectual capital and is of increasing importance in gaining a competitive business advantage. Isolated initiatives for promoting knowledge sharing and team collaboration, without taking into consideration the limitations and constraints of knowledge sharing, can halt any further development in the KM culture of an organization. This article investigates knowledge sharing bottlenecks and proposes the use of conversational knowledge sharing as an effective instrument for knowledge sharing. To develop strategies, this paper determines the causes and effects of knowledge barriers and proposes solutions using the House of Quality (HOQ). The article introduces a financial company case study as a best practice example of conversational knowledge sharing. The paper then analyzes the case study to provide evidence for the feasibility and effectiveness of the proposed approach.

141 citations


Patent
31 Oct 2011
TL;DR: In this article, the authors present a system and method to enable rapid knowledge transfer between a plurality of experts and apprentices located remotely from the experts, by integrating a shared repository and collaboration tools for use by the expert and apprentice.
Abstract: A system and method enable rapid knowledge transfer, for example between a plurality of experts and a plurality of apprentices located remotely from the experts. The system makes use of unique tools to facilitate transfer of knowledge and collaboration between individuals, even among remotely located individuals. An input to the system is a Knowledge Transfer Plan which has been designed to orchestrate the knowledge transfer. The knowledge transfer system integrates a shared repository and collaboration tools for use by the expert and apprentice. The collaboration tools may be accessed through role-specific portals which are automatically created from the Knowledge Transfer Plan. In one embodiment, the system is configured with a World Wide Web-based interface and an integrated suite of tools to support knowledge transfer activities on a global basis to facilitate knowledge transfer among workers engaged in an outsourcing business process.

141 citations


Proceedings ArticleDOI
16 Jul 2011
TL;DR: A scalable inference technique using stochastic gradient descent is developed which may also be useful to the Markov Logic Network (MLN) research community, and the expressive power of Fold·all is demonstrated.
Abstract: Topic models have been used successfully for a variety of problems, often in the form of application-specific extensions of the basic Latent Dirichlet Allocation (LDA) model. Because deriving these new models in order to encode domain knowledge can be difficult and time-consuming, we propose the Fold·all model, which allows the user to specify general domain knowledge in First-Order Logic (FOL). However, combining topic modeling with FOL can result in inference problems beyond the capabilities of existing techniques. We have therefore developed a scalable inference technique using stochastic gradient descent which may also be useful to the Markov Logic Network (MLN) research community. Experiments demonstrate the expressive power of Fold·all, as well as the scalability of our proposed inference method.
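The "scalable inference via stochastic gradient descent" claim rests on a simple pattern: update parameters from one sampled item at a time instead of evaluating the full objective. A generic sketch on a toy least-squares problem, with nothing Fold·all-specific about it:

```python
import random

def sgd(grad, w, data, lr=0.1, epochs=50, seed=0):
    """Plain stochastic gradient descent: one noisy gradient step per
    data item per epoch, with the data order reshuffled each epoch."""
    rng = random.Random(seed)
    order = list(data)
    for _ in range(epochs):
        rng.shuffle(order)
        for x, y in order:
            w = [wi - lr * g for wi, g in zip(w, grad(w, x, y))]
    return w

# Toy objective: fit y = w*x by squared error; d/dw (wx - y)^2 = 2x(wx - y).
w = sgd(lambda w, x, y: [2 * x * (w[0] * x - y)], [0.0], [(1, 2), (2, 4)])
```

The same loop scales to large objectives (such as a topic model's FOL-augmented posterior) because each step touches only one sampled item, not the whole dataset.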

Proceedings ArticleDOI
23 Oct 2011
TL;DR: The results show that most users (and particularly domain experts) are most satisfied with a hybrid recommender that combines implicit and explicit preference elicitation, but that novices and maximizers seem to benefit more from a non-personalizedRecommender that just displays the most popular items.
Abstract: This paper compares five different ways of interacting with an attribute-based recommender system and shows that different types of users prefer different interaction methods. In an online experiment with an energy-saving recommender system the interaction methods are compared in terms of perceived control, understandability, trust in the system, user interface satisfaction, system effectiveness and choice satisfaction. The comparison takes into account several user characteristics, namely domain knowledge, trusting propensity and persistence. The results show that most users (and particularly domain experts) are most satisfied with a hybrid recommender that combines implicit and explicit preference elicitation, but that novices and maximizers seem to benefit more from a non-personalized recommender that just displays the most popular items.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: It is found that feature interactions can be detected automatically based on specifications that have only local knowledge, and this work developed the tool suite SPLVERIFIER for feature-aware verification, and applied it to an e-mail system that incorporates domain knowledge of AT&T.
Abstract: A software product line is a set of software products that are distinguished in terms of features (i.e., end-user-visible units of behavior). Feature interactions -- situations in which the combination of features leads to emergent and possibly critical behavior -- are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line-verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLVERIFIER for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only local knowledge.
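Variability encoding, at its most minimal: feature flags become ordinary variables, so a single "metaproduct" simulates every product at once and a checker can search over flag assignments instead of enumerating 2^n concrete products. A toy e-mail sketch; the features, the interaction, and the local specification are invented for illustration:

```python
def metaproduct(encrypt, forward, msg):
    """One program parameterized by feature flags, standing in for
    the variability-encoded product line."""
    if encrypt:
        msg = {"body": "<cipher>", "encrypted": True}
    if forward:
        # forwarding resends in the clear -- only harmful when
        # ENCRYPT is also selected: a feature interaction
        msg = {"body": msg["body"], "encrypted": False}
    return msg

def violates_spec(msg):
    # Local spec of ENCRYPT alone: ciphertext must stay marked encrypted.
    return msg["body"] == "<cipher>" and not msg["encrypted"]

# Exhaustive search over flag assignments stands in for the model checker.
interactions = [(e, f) for e in (0, 1) for f in (0, 1)
                if violates_spec(metaproduct(e, f, {"body": "hi", "encrypted": False}))]
```

The spec mentions only ENCRYPT's local state, yet the search exposes the ENCRYPT-plus-FORWARD combination, mirroring the paper's finding that local specifications suffice to detect interactions.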

Journal ArticleDOI
01 Nov 2011
TL;DR: A complete framework to assess the overall performance of classification models from a user perspective in terms of accuracy, comprehensibility, and justifiability is proposed.
Abstract: This paper proposes a complete framework to assess the overall performance of classification models from a user perspective in terms of accuracy, comprehensibility, and justifiability. A review is provided of accuracy and comprehensibility measures, and a novel metric is introduced that allows one to measure the justifiability of classification models. Furthermore, a taxonomy of domain constraints is introduced, and an overview of the existing approaches to impose constraints and include domain knowledge in data mining techniques is presented. Finally, the justifiability metric is applied to a credit scoring and a customer churn prediction case.

Journal ArticleDOI
TL;DR: The differences and similarities between the two approaches are identified, and a review is given regarding the integration of the two fields.

Journal ArticleDOI
TL;DR: The first ontology to describe HTS experiments and screening results using expressive description logic is developed and BAO opens new functionality for annotating, querying, and analyzing HTS datasets and the potential for discovering new knowledge by means of inference.
Abstract: High-throughput screening (HTS) is one of the main strategies to identify novel entry points for the development of small molecule chemical probes and drugs and is now commonly accessible to public sector research. Large amounts of data generated in HTS campaigns are submitted to public repositories such as PubChem, which is growing at an exponential rate. The diversity and quantity of available HTS assays and screening results pose enormous challenges to organizing, standardizing, integrating, and analyzing the datasets, and thus to maximizing the scientific and ultimately the public health impact of the huge investments made to implement public sector HTS capabilities. Novel approaches to organize, standardize and access HTS data are required to address these challenges. We developed the first ontology to describe HTS experiments and screening results using expressive description logic. The BioAssay Ontology (BAO) serves as a foundation for the standardization of HTS assays and data and as a semantic knowledge model. In this paper we show important examples of formalizing HTS domain knowledge and we point out the advantages of this approach. The ontology is available online at the NCBO BioPortal: http://bioportal.bioontology.org/ontologies/44531. After a large manual curation effort, we loaded BAO-mapped data triples into an RDF database store and used a reasoner in several case studies to demonstrate the benefits of formalized domain knowledge representation in BAO. The examples illustrate semantic querying capabilities where BAO enables the retrieval of inferred search results that are relevant to a given query, but are not explicitly defined. BAO thus opens new functionality for annotating, querying, and analyzing HTS datasets and the potential for discovering new knowledge by means of inference.
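The "inferred search results" point can be illustrated with a toy triple store and a single rule, subClassOf transitivity: a literal lookup for instances of a superclass finds nothing, while the saturated hierarchy does. The class and instance names below are invented placeholders, not actual BAO terms:

```python
# Minimal triple store: (subject, predicate, object) facts.
triples = {
    ("KinaseAssay", "subClassOf", "EnzymeAssay"),
    ("EnzymeAssay", "subClassOf", "BioAssay"),
    ("assay_42", "type", "KinaseAssay"),
}

def instances_of(cls):
    """Return instances of cls, applying subClassOf transitively --
    the one-rule stand-in for a description-logic reasoner."""
    supers = {cls}
    changed = True
    while changed:                      # saturate the subclass closure
        changed = False
        for s, p, o in triples:
            if p == "subClassOf" and o in supers and s not in supers:
                supers.add(s)
                changed = True
    return {s for s, p, o in triples if p == "type" and o in supers}
```

No triple states that `assay_42` is a `BioAssay`, yet the query retrieves it; that is exactly the gap between literal retrieval and reasoning that the paper's case studies demonstrate.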

Journal ArticleDOI
TL;DR: In this article, the authors examine the organizational patterns of knowledge sourcing in the media industry of southern Sweden and find that firms rely above all on knowledge that is generated in the local context.
Abstract: This paper deals with geographical and organizational patterns of knowledge flows in the media industry of southern Sweden, an industry that is characterized by a strong “symbolic” knowledge base. The aim is to address the question of the local versus the non-local as the prime arena for knowledge exchange, and to examine the organizational patterns of knowledge sourcing with specific attention paid to the nature of the knowledge sourced. Symbolic industries draw heavily on creative production and a cultural awareness that is strongly embedded in the local context; thus knowledge flows and networks are expected to be most of all locally configured, and firms to rely on less formalized knowledge sources rather than scientific knowledge or principles. Based on structured and semi-structured interviews with firm representatives, these assumptions are empirically assessed through social network analysis and descriptive statistics. Our findings show that firms rely above all on knowledge that is generated in p...

Journal ArticleDOI
TL;DR: This paper presents the simultaneous use and integration of the core ontologies, using the example of a complex, distributed socio-technical system for emergency response, and describes the design approach for core ontologies and the lessons learned in designing them.
Abstract: One of the key factors that hinders integration of distributed, heterogeneous information systems is the lack of a formal basis for modeling the complex, structured knowledge that is to be exchanged. To alleviate this situation, we present an approach based on core ontologies. Core ontologies are characterized by a high degree of axiomatization and formal precision, achieved by building on a foundational ontology. In addition, core ontologies should follow a pattern-oriented design approach, which makes them modular and extensible. Core ontologies allow for reusing the structured knowledge they define as well as integrating existing domain knowledge. The structured knowledge of the core ontologies is clearly separated from the domain-specific knowledge. Such core ontologies make it possible both to formally conceptualize their particular fields and to flexibly combine them to cover the needs of concrete, complex application domains. Over the last years, we have developed three independent core ontologies for events and objects, multimedia annotations and personal information management. In this paper, we present the simultaneous use and integration of our core ontologies using the example of a complex, distributed socio-technical system for emergency response. We describe our design approach for core ontologies and discuss the lessons learned in designing them. Finally, we elaborate on the beauty aspects of our core ontologies.

Journal ArticleDOI
TL;DR: This pilot study demonstrated that front-line public health workers draw upon both tacit knowledge and explicit knowledge in their everyday lived reality and indicates a need to broaden the scope of knowledge translation to include other forms of knowledge beyond explicit knowledge acquired through research.
Abstract: All sectors in health care are being asked to focus on the knowledge-to-practice gap, or knowledge translation, to increase service effectiveness. A social interaction approach to knowledge translation assumes that research evidence becomes integrated with previously held knowledge, and practitioners build on and co-create knowledge through mutual interactions. Knowledge translation strategies for public health have not provided anticipated positive changes in evidence-based practice, possibly due in part to a narrow conceptualization of knowledge. More work is needed to understand the role of tacit knowledge in decision-making and practice. This pilot study examined how health practitioners applied tacit knowledge in public health program planning and implementation.

Proceedings Article
01 Jan 2011
TL;DR: This work is the first attempt to combine the notions of syntactic and semantic dependencies in the domain of review mining by jointly discovering latent facets and sentiment topics, and also order the sentiment topics with respect to a multi-point scale, in a language and domain independent manner.
Abstract: Facet-based sentiment analysis involves discovering the latent facets, sentiments and their associations. Traditional facet-based sentiment analysis algorithms typically perform the various tasks in sequence, and fail to take advantage of the mutual reinforcement of the tasks. Additionally, inferring sentiment levels typically requires domain knowledge or human intervention. In this paper, we propose a series of probabilistic models that jointly discover latent facets and sentiment topics, and also order the sentiment topics with respect to a multi-point scale, in a language and domain independent manner. This is achieved by simultaneously capturing both short-range syntactic structure and long range semantic dependencies between the sentiment and facet words. The models further incorporate coherence in reviews, where reviewers dwell on one facet or sentiment level before moving on, for more accurate facet and sentiment discovery. For reviews which are supplemented with ratings, our models automatically order the latent sentiment topics, without requiring seed-words or domain-knowledge. To the best of our knowledge, our work is the first attempt to combine the notions of syntactic and semantic dependencies in the domain of review mining. Further, the concept of facet and sentiment coherence has not been explored earlier either. Extensive experimental results on real world review data show that the proposed models outperform various state of the art baselines for facet-based sentiment analysis.

Proceedings Article
23 Jun 2011
TL;DR: This paper presents and compares three methods based on domain-knowledge and machine-learning techniques for medical Entity Recognition and shows that the hybrid approach based on both machine learning and domain knowledge obtains the best performance.
Abstract: Medical Entity Recognition is a crucial step towards efficient medical texts analysis. In this paper we present and compare three methods based on domain-knowledge and machine-learning techniques. We study two research directions through these approaches: (i) a first direction where noun phrases are extracted in a first step with a chunker before the final classification step and (ii) a second direction where machine learning techniques are used to identify simultaneously entities boundaries and categories. Each of the presented approaches is tested on a standard corpus of clinical texts. The obtained results show that the hybrid approach based on both machine learning and domain knowledge obtains the best performance.
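A minimal version of the hybrid idea: a domain lexicon decides wherever it has coverage, and a learned tagger fills the gaps. Both the lexicon entries and the fallback tagger below are placeholders, not the paper's actual resources:

```python
def hybrid_ner(tokens, lexicon, ml_tag):
    """Tag each token: prefer the domain lexicon where it matches,
    otherwise fall back to the machine-learned tagger (here any
    callable mapping a token to a label)."""
    return [lexicon.get(tok.lower(), ml_tag(tok)) for tok in tokens]

# Placeholder components for illustration.
LEXICON = {"aspirin": "DRUG", "pneumonia": "DISEASE"}
dummy_tagger = lambda tok: "DISEASE" if tok.endswith("itis") else "O"

tags = hybrid_ner(["Aspirin", "for", "arthritis"], LEXICON, dummy_tagger)
```

The division of labor mirrors the paper's finding: high-precision domain knowledge anchors the easy cases, while the statistical component generalizes to entities the lexicon has never seen.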

Journal ArticleDOI
TL;DR: A comprehensive review on the recent development of KBS, methods and tools in supporting rapid product development and how product knowledge is identified, captured, represented and reused during the processes of One-of-a-Kind product development is provided.
Abstract: In recent years, product knowledge has played increasingly significant roles in the new product development process, especially in the development of One-of-a-Kind products. Although knowledge-based systems (KBSs) have been proposed to support product development activities and new knowledge modelling methodologies have been developed, they are still far from complete. This area has become attractive to many researchers and as a result, many new knowledge-based systems, methods and tools have been developed. However, to the best of our knowledge, knowledge-based systems for product development have not been systematically reviewed, compared and summarized. This paper provides a comprehensive review of the recent development of KBSs, methods and tools supporting rapid product development. In the paper, the relevant technologies for modelling, managing and representing knowledge are investigated and reviewed systematically to better understand their characteristics. The focus is placed on knowledge-based systems that support product development, and how product knowledge is identified, captured, represented and reused during the processes of One-of-a-Kind product development. The limitations and future trends of KBSs are presented in terms of how they can help One-of-a-Kind Production (OKP) companies.

Proceedings Article
07 Aug 2011
TL;DR: A novel Bayesian probabilistic model is proposed to handle multiple source and multiple target domains and can tell whether each word's polarity is domain-dependent or domain-independent, and construct a word polarity dictionary for each domain.
Abstract: Sentiment analysis is the task of determining the attitude (positive or negative) of documents. While the polarity of words in the documents is informative for this task, polarity of some words cannot be determined without domain knowledge. Detecting word polarity thus poses a challenge for multiple-domain sentiment analysis. Previous approaches tackle this problem with transfer learning techniques, but they cannot handle multiple source domains and multiple target domains. This paper proposes a novel Bayesian probabilistic model to handle multiple source and multiple target domains. In this model, each word is associated with three factors: Domain label, domain dependence/independence and word polarity. We derive an efficient algorithm using Gibbs sampling for inferring the parameters of the model, from both labeled and unlabeled texts. Using real data, we demonstrate the effectiveness of our model in a document polarity classification task compared with a method not considering the differences between domains. Moreover our method can also tell whether each word's polarity is domain-dependent or domain-independent. This feature allows us to construct a word polarity dictionary for each domain.

Book ChapterDOI
05 Sep 2011
TL;DR: The proposed framework, Active Learning Domain Adapted (Alda), uses source domain knowledge to transfer information that facilitates active learning in the target domain and empirical comparisons with numerous baselines on real-world datasets establish the efficacy of the proposed methods.
Abstract: In this paper, we harness the synergy between two important learning paradigms, namely, active learning and domain adaptation. We show how active learning in a target domain can leverage information from a different but related source domain. Our proposed framework, Active Learning Domain Adapted (Alda), uses source domain knowledge to transfer information that facilitates active learning in the target domain. We propose two variants of Alda: a batch B-Alda and an online O-Alda. Empirical comparisons with numerous baselines on real-world datasets establish the efficacy of the proposed methods.
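The core loop of source-seeded active learning can be sketched as: score unlabeled target points with a model trained on (or initialized from) the source domain, then query the point the model is least certain about. Everything below is a heavy simplification of Alda, with invented weights and data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def most_uncertain(w, pool):
    """Uncertainty sampling: return the unlabeled point whose predicted
    probability under the (source-trained) weights w is closest to 0.5."""
    return min(pool,
               key=lambda x: abs(sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - 0.5))

# Weights assumed to come from a source-domain model; toy target pool.
w_source = [1.5, -0.5]
pool = [(4.0, 0.0), (0.2, 0.5), (-3.0, 1.0)]
query = most_uncertain(w_source, pool)
```

In the batch variant, the top-k most uncertain points would be queried at once; in the online variant, the model would be updated after each single label, which is the B-Alda/O-Alda split at its simplest.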

Journal ArticleDOI
TL;DR: The authors compared a scientific inquiry learning environment offering directive self-explanation prompts for relating and translating between representations with one offering only general prompts, and found that learners who received the directive prompts outperformed learners who received general prompts on test items assessing domain knowledge.
Abstract: Processing of multiple representations in multimedia learning environments is considered to help learners obtain a more complete overview of the domain and gain deeper knowledge. This is based on the idea that relating and translating different representations leads to reflection beyond the boundaries and details of the separate representations. To achieve this, the design of a learning environment should support learners in adequately processing multiple representations. In this study, we compared a scientific inquiry learning environment providing instructional support with directive self-explanation prompts to relate and translate between representations with a scientific inquiry learning environment providing instructional support with general self-explanation prompts. Learners who received the directive prompts outperformed the learners who received general prompts on test items assessing domain knowledge. These positive results did not extend to transfer items or to items measuring learners' capabilities to relate and translate representations in general. The results suggest that learner support should promote the active relation of representations and translation between them to foster domain knowledge, and that other forms of support (e.g. extended training) might be necessary to make learners more expert processors of multiple representations.

Journal ArticleDOI
TL;DR: By utilizing previous studies, the researchers present an integrated view of how a learning organization affects knowledge creation and transfer.
Abstract: This paper describes how knowledge is created and transferred in organizations. It also discusses the conditions required to promote knowledge creation, the techniques used to capture knowledge in organizations, and the nature of learning organizations and how they can influence knowledge creation and transfer. By utilizing previous studies, the researchers present an integrated view of how a learning organization affects knowledge creation and transfer.

Journal ArticleDOI
TL;DR: This paper presents an ontology that is an abstract (yet extendable) philosophical (yet practical) conceptualization of the essence of knowledge that relates to construction aspects of infrastructure products.

Journal ArticleDOI
TL;DR: A hybrid pruning method that combines objective analysis with subjective analysis, the latter using an ontology, is proposed and demonstrated on a medical database.
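The TL;DR compresses the method heavily; a toy reading of "hybrid pruning" (all rule data, thresholds, and the ontology below are invented for illustration, not taken from the paper) might first drop association rules with weak support/confidence (objective analysis), then drop rules made semantically redundant by an is-a ontology (subjective analysis):

```python
# Hypothetical medical is-a ontology: child concept -> parent concept.
ontology = {"type_2_diabetes": "diabetes", "diabetes": "metabolic_disorder"}

def is_ancestor(onto, a, b):
    """True if `a` is an ancestor of `b` in the is-a hierarchy."""
    while b in onto:
        b = onto[b]
        if b == a:
            return True
    return False

rules = [
    # (antecedent, consequent, support, confidence)
    ("type_2_diabetes", "diabetes", 0.40, 0.99),   # trivially true via is-a
    ("obesity", "type_2_diabetes", 0.15, 0.60),    # potentially interesting
    ("smoking", "hypertension", 0.02, 0.30),       # weak objective measures
]

def hybrid_prune(rules, onto, min_sup=0.05, min_conf=0.5):
    kept = []
    for a, c, sup, conf in rules:
        if sup < min_sup or conf < min_conf:          # objective pruning
            continue
        if is_ancestor(onto, a, c) or is_ancestor(onto, c, a):
            continue                                  # subjective pruning
        kept.append((a, c))
    return kept
```

Only the rule that is both statistically strong and not already implied by the ontology survives, which is the intuition behind combining the two analyses.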

Proceedings Article
19 Jun 2011
TL;DR: This work identifies 20 categories of common-sense knowledge that are prevalent in textual entailment, many of which have received scant attention from researchers building collections of knowledge.
Abstract: Understanding language requires both linguistic knowledge and knowledge about how the world works, also known as common-sense knowledge. We attempt to characterize the kinds of common-sense knowledge most often involved in recognizing textual entailments. We identify 20 categories of common-sense knowledge that are prevalent in textual entailment, many of which have received scant attention from researchers building collections of knowledge.

Proceedings Article
27 Jul 2011
TL;DR: A domain-assisted approach to organize various aspects of a product into a hierarchy by integrating domain knowledge, as well as consumer reviews, and applies the hierarchy to the task of implicit aspect identification.
Abstract: This paper presents a domain-assisted approach to organize various aspects of a product into a hierarchy by integrating domain knowledge (e.g., the product specifications), as well as consumer reviews. Based on the derived hierarchy, we generate a hierarchical organization of consumer reviews on various product aspects and aggregate consumer opinions on these aspects. With such an organization, users can easily grasp an overview of the consumer reviews. Furthermore, we apply the hierarchy to the task of implicit aspect identification, which aims to infer the implicit aspects of reviews that do not explicitly name those aspects but actually comment on them. The experimental results on 11 popular products in four domains demonstrate the effectiveness of our approach.
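As an illustrative sketch only (the aspect names and clue words are invented, and the paper's actual method is more involved), implicit aspect identification can be pictured as matching a review sentence against opinion words already associated with each aspect in the hierarchy:

```python
# Hypothetical aspect hierarchy seeded from product specifications.
hierarchy = {"camera": ["lens", "battery", "screen"]}

# Hypothetical clue words learned from explicit aspect mentions: words that
# co-occur with an aspect can identify it even when the aspect itself is
# never named in the sentence ("implicit aspect identification").
aspect_clues = {
    "lens": {"blurry", "sharp", "zoom"},
    "battery": {"drains", "lasts", "charge"},
    "screen": {"bright", "dim", "scratched"},
}

def identify_aspect(sentence, clues):
    """Return the aspect whose clue words overlap the sentence most,
    or None when no clue word matches."""
    words = set(sentence.lower().split())
    best, best_overlap = None, 0
    for aspect, cues in clues.items():
        overlap = len(words & cues)
        if overlap > best_overlap:
            best, best_overlap = aspect, overlap
    return best
```

For example, "the photos come out blurry unless you zoom carefully" never mentions the lens, yet its clue words point to the lens aspect, so the review can be filed under that node of the hierarchy.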