Showing papers on "Knowledge representation and reasoning" published in 2013


Journal ArticleDOI
TL;DR: FCA explicitly formalises the extension and intension of a concept and their mutual relationships, including the fact that increasing the intent implies decreasing the extent and vice versa, and allows a concept hierarchy to be derived from a given dataset.

2,029 citations
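To make the TL;DR above concrete, here is a minimal, self-contained Python sketch of FCA's two derivation operators and a brute-force enumeration of formal concepts over a hand-made toy context; the objects, attributes and data are illustrative assumptions, not taken from the article.

from itertools import combinations

# Toy formal context: objects x attributes (illustrative data only)
context = {
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
attributes = set().union(*context.values())

def extent(intent_):
    """Objects that have every attribute in intent_."""
    return {g for g, attrs in context.items() if intent_ <= attrs}

def intent(extent_):
    """Attributes shared by every object in extent_."""
    return set.intersection(*(context[g] for g in extent_)) if extent_ else set(attributes)

def concepts():
    """Brute-force enumeration of all formal concepts as (extent, intent) pairs."""
    found = []
    for r in range(len(attributes) + 1):
        for candidate in combinations(sorted(attributes), r):
            A = extent(set(candidate))
            B = intent(A)                  # closure of the candidate intent
            if (A, B) not in found:
                found.append((A, B))
    return found

for A, B in concepts():
    print(sorted(A), "<->", sorted(B))

Ordering the resulting concepts by inclusion of their extents yields the concept hierarchy (lattice) the TL;DR refers to; whenever an intent grows, the corresponding extent shrinks or stays the same.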


Journal ArticleDOI
TL;DR: YAGO2 as mentioned in this paper is an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space, and it contains 447 million facts about 9.8 million entities.

1,186 citations


Journal ArticleDOI
TL;DR: This article introduces the KnowRob knowledge processing system, a system specifically designed to provide autonomous robots with the knowledge needed for performing everyday manipulation tasks, evaluates the system’s scalability, and presents different integrated experiments that show its versatility and comprehensiveness.
Abstract: Autonomous service robots will have to understand vaguely described tasks, such as “set the table” or “clean up”. Performing such tasks as intended requires robots to fully, precisely, and appropriately parameterize their low-level control programs. We propose knowledge processing as a computational resource for enabling robots to bridge the gap between vague task descriptions and the detailed information needed to actually perform those tasks in the intended way. In this article, we introduce the KnowRob knowledge processing system that is specifically designed to provide autonomous robots with the knowledge needed for performing everyday manipulation tasks. The system allows the realization of “virtual knowledge bases”: collections of knowledge pieces that are not explicitly represented but computed on demand from the robot's internal data structures, its perception system, or external sources of information. This article gives an overview of the different kinds of knowledge, the different inference mechanisms, and interfaces for acquiring knowledge from external sources, such as the robot's perception system, observations of human activities, Web sites on the Internet, as well as Web-based knowledge bases for information exchange between robots. We evaluate the system's scalability and present different integrated experiments that show its versatility and comprehensiveness.

373 citations


Book ChapterDOI
01 Jan 2013
TL;DR: The latest iteration of ConceptNet is presented, ConceptNet 5, with a focus on its fundamental design decisions and ways to interoperate with it.
Abstract: ConceptNet is a knowledge representation project, providing a large semantic graph that describes general human knowledge and how it is expressed in natural language. Here we present the latest iteration, ConceptNet 5, with a focus on its fundamental design decisions and ways to interoperate with it.

244 citations


Posted Content
TL;DR: In this article, the authors present a knowledge representation framework that permits the knowledge base designer to specify knowledge in larger semantically meaningful units which they call network fragments, providing for representation of asymmetric independence and canonical intercausal interaction.
Abstract: In most current applications of belief networks, domain knowledge is represented by a single belief network that applies to all problem instances in the domain. In more complex domains, problem-specific models must be constructed from a knowledge base encoding probabilistic relationships in the domain. Most work in knowledge-based model construction takes the rule as the basic unit of knowledge. We present a knowledge representation framework that permits the knowledge base designer to specify knowledge in larger semantically meaningful units which we call network fragments. Our framework provides for representation of asymmetric independence and canonical intercausal interaction. We discuss the combination of network fragments to form problem-specific models to reason about particular problem instances. The framework is illustrated using examples from the domain of military situation awareness.

179 citations


Book ChapterDOI
30 Jul 2013
TL;DR: An introduction to Answer Set Programming is provided, starting with historical perspectives, followed by a definition of the core language, a guideline to knowledge representation, an overview of existing ASP solvers, and a panorama of current research topics in the field.
Abstract: Answer Set Programming (ASP) evolved from various fields such as Logic Programming, Deductive Databases, Knowledge Representation, and Nonmonotonic Reasoning, and serves as a flexible language for declarative problem solving. There are two main tasks in problem solving, representation and reasoning, which are clearly separated in the declarative paradigm. In ASP, representation is done using a rule-based language, while reasoning is performed using implementations of general-purpose algorithms, referred to as ASP solvers. Rules in ASP are interpreted according to common sense principles, including a variant of the closed-world-assumption (CWA) and the unique-name-assumption (UNA). Collections of ASP rules are referred to as ASP programs, which represent the modelled knowledge. To each ASP program a collection of answer sets, or intended models, is associated, which stand for the solutions to the modelled problem; this collection can also be empty, meaning that the modelled problem does not admit a solution. Several reasoning tasks exist: the classical ASP task is enumerating all answer sets or determining whether an answer set exists, but ASP also allows for query answering in brave or cautious modes. This article provides an introduction to the field, starting with historical perspectives, followed by a definition of the core language, a guideline to knowledge representation, an overview of existing ASP solvers, and a panorama of current research topics in the field.

174 citations
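As a rough illustration of the answer-set semantics summarised in this abstract, the following self-contained Python snippet checks candidate interpretations of a tiny ground program against the Gelfond-Lifschitz reduct. The three-rule program is invented for illustration, and real ASP solvers such as those surveyed in the article work very differently and at a much larger scale.

from itertools import chain, combinations

# Ground normal rules written as (head, positive_body, negative_body):
# p :- not q.    q :- not p.    r :- p.
program = [
    ("p", [], ["q"]),
    ("q", [], ["p"]),
    ("r", ["p"], []),
]
atoms = {a for h, pos, neg in program for a in [h, *pos, *neg]}

def least_model(reduct):
    """Least model of a negation-free program via naive fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in reduct:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(candidate):
    # Gelfond-Lifschitz reduct: drop rules blocked by the candidate,
    # then strip the remaining negative literals.
    reduct = [(h, pos) for h, pos, neg in program if not (set(neg) & candidate)]
    return least_model(reduct) == candidate

for subset in chain.from_iterable(combinations(sorted(atoms), r) for r in range(len(atoms) + 1)):
    if is_answer_set(set(subset)):
        print(set(subset))   # prints {'q'} and then {'p', 'r'}

The two printed sets are the answer sets of the program: the first two rules encode a choice between p and q, and the third rule derives r only in the answer set that contains p.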



Journal ArticleDOI
TL;DR: This work studies an approach to the underlying multi-relational data mining (MRDM) problem that relies on formal concept analysis (FCA) as a framework for clustering and classification, describes implementations of RCA, and lists applications to problems from software and knowledge engineering.
Abstract: The processing of complex data is admittedly among the major concerns of knowledge discovery from data (KDD). Indeed, a major part of the data worth analyzing is stored in relational databases and, since recently, on the Web of Data. This clearly underscores the need for Entity-Relationship and RDF compliant data mining (DM) tools. We are studying an approach to the underlying multi-relational data mining (MRDM) problem, which relies on formal concept analysis (FCA) as a framework for clustering and classification. Our relational concept analysis (RCA) extends FCA to the processing of multi-relational datasets, i.e., with multiple sorts of individuals, each provided with its own set of attributes, and relationships among those. Given such a dataset, RCA constructs a set of concept lattices, one per object sort, through an iterative analysis process that is bound towards a fixed point. In doing that, it abstracts the links between objects into attributes akin to role restrictions from description logics (DLs). We address here key aspects of the iterative calculation such as evolution in data description along the iterations and process termination. We describe implementations of RCA and list applications to problems from software and knowledge engineering.

146 citations


Posted Content
TL;DR: The language ALCP as mentioned in this paper is a probabilistic extension of terminological logics that aims at closing the gap between classical terminological knowledge representation, which excludes the possibility of handling uncertain concept descriptions involving, e.g., "usually true" concept properties, generalized quantifiers, or exceptions, and purely numerical approaches for handling uncertainty, which are unable to consider terminological knowledge.
Abstract: On the one hand, classical terminological knowledge representation excludes the possibility of handling uncertain concept descriptions involving, e.g., "usually true" concept properties, generalized quantifiers, or exceptions. On the other hand, purely numerical approaches for handling uncertainty in general are unable to consider terminological knowledge. This paper presents the language ALCP which is a probabilistic extension of terminological logics and aims at closing the gap between the two areas of research. We present the formal semantics underlying the language ALCP and introduce the probabilistic formalism that is based on classes of probabilities and is realized by means of probabilistic constraints. Besides inferring implicitly existent probabilistic relationships, the constraints guarantee terminological and probabilistic consistency. Altogether, the new language ALCP applies to domains where both term descriptions and uncertainty have to be handled.

145 citations


Journal ArticleDOI
TL;DR: A knowledge acquisition and representation approach using fuzzy evidential reasoning and dynamic adaptive FPNs is presented to capture domain experts' diverse experience and to reason about rule-based knowledge more intelligently.
Abstract: The two most important issues of expert systems are the acquisition of domain experts' professional knowledge and the representation and reasoning of the knowledge rules that have been identified. First, during expert knowledge acquisition processes, the members of the domain expert panel often differ from one another in experience and knowledge and produce different types of knowledge information, such as complete and incomplete, precise and imprecise, and known and unknown, because of the panel's cross-functional and multidisciplinary nature. Second, as a promising tool for knowledge representation and reasoning, fuzzy Petri nets (FPNs) still suffer from a couple of deficiencies. The parameters in current FPN models cannot accurately represent increasingly complex knowledge-based systems, and the rules in most existing knowledge inference frameworks cannot be adjusted dynamically according to the variation of the propositions, in the way human cognition and thinking adapt. In this paper, we present a knowledge acquisition and representation approach using the fuzzy evidential reasoning approach and dynamic adaptive FPNs to solve the problems mentioned above. As illustrated by a numerical example, the proposed approach can capture the experts' diverse experience well, enhance the knowledge representation power, and reason about rule-based knowledge more intelligently.

112 citations


Posted Content
TL;DR: SPOOK as mentioned in this paper implements a more expressive language that allows it to represent the battlespace domain naturally and compactly, and shows empirically that it achieves orders of magnitude speedup over existing approaches.
Abstract: In previous work, we pointed out the limitations of standard Bayesian networks as a modeling framework for large, complex domains. We proposed a new, richly structured modeling language, Object-Oriented Bayesian Networks (OOBNs), that we argued would be able to deal with such domains. However, it turns out that OOBNs are not expressive enough to model many interesting aspects of complex domains: the existence of specific named objects, arbitrary relations between objects, and uncertainty over domain structure. These aspects are crucial in real-world domains such as battlefield awareness. In this paper, we present SPOOK, an implemented system that addresses these limitations. SPOOK implements a more expressive language that allows it to represent the battlespace domain naturally and compactly. We present a new inference algorithm that utilizes the model structure in a fundamental way, and show empirically that it achieves orders of magnitude speedup over existing approaches.

Journal ArticleDOI
TL;DR: The formal language developed for encoding information is reported on and the approaches to solve the inference problems related to finding information, to determining if information is usable by a robot, and to grounding it on the robot platform are presented.
Abstract: The community-based generation of content has been tremendously successful in the World-Wide Web - people help each other by providing information that could be useful to others. We are trying to transfer this approach to robotics in order to help robots acquire the vast amounts of knowledge needed to competently perform everyday tasks. RoboEarth is intended to be a web community by robots for robots to autonomously share descriptions of tasks they have learned, object models they have created, and environments they have explored. In this paper, we report on the formal language we developed for encoding this information and present our approaches to solve the inference problems related to finding information, to determining if information is usable by a robot, and to grounding it on the robot platform.

Posted Content
TL;DR: A multimedia analysis framework to process video and text jointly for understanding events and answering user queries and shows that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the forms of who, what, when, where, and why.
Abstract: We propose a framework for parsing video and text jointly for understanding events and answering user queries. Our framework produces a parse graph that represents the compositional structures of spatial information (objects and scenes), temporal information (actions and events) and causal information (causalities between events and fluents) in the video and text. The knowledge representation of our framework is based on a spatial-temporal-causal And-Or graph (S/T/C-AOG), which jointly models possible hierarchical compositions of objects, scenes and events as well as their interactions and mutual contexts, and specifies the prior probabilistic distribution of the parse graphs. We present a probabilistic generative model for joint parsing that captures the relations between the input video/text, their corresponding parse graphs and the joint parse graph. Based on the probabilistic model, we propose a joint parsing system consisting of three modules: video parsing, text parsing and joint inference. Video parsing and text parsing produce two parse graphs from the input video and text respectively. The joint inference module produces a joint parse graph by performing matching, deduction and revision on the video and text parse graphs. The proposed framework has the following objectives: Firstly, we aim at deep semantic parsing of video and text that goes beyond the traditional bag-of-words approaches; Secondly, we perform parsing and reasoning across the spatial, temporal and causal dimensions based on the joint S/T/C-AOG representation; Thirdly, we show that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the forms of who, what, when, where and why. We empirically evaluated our system based on comparison against ground-truth as well as accuracy of query answering and obtained satisfactory results.

Journal ArticleDOI
TL;DR: An efficient evaluation of existing semantic similarity methods based on structure, information content and feature approaches is given to help researchers and practitioners select the measure that best fits their requirements.
Abstract: In recent years, semantic similarity measures have attracted great interest in the Semantic Web and Natural Language Processing (NLP). Several similarity measures have been developed, given the existence of structured knowledge representations offered by ontologies and corpora which enable semantic interpretation of terms. Semantic similarity measures compute the similarity between concepts/terms included in knowledge sources in order to perform estimations. This paper discusses the existing semantic similarity methods based on structure, information content and feature approaches. Additionally, we present a critical evaluation of several categories of semantic similarity approaches based on two standard benchmarks. The aim of this paper is to give an efficient evaluation of all these measures, which helps researchers and practitioners select the measure that best fits their requirements.
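For readers unfamiliar with the two main families of measures discussed here, the following Python sketch computes one structure-based measure (Wu-Palmer) and one information-content-based measure (Lin) over a hand-made toy taxonomy. The taxonomy, corpus frequencies and exact normalisations are illustrative assumptions, not the formulations evaluated in the paper.

import math

# Toy is-a taxonomy (child -> parent) and made-up corpus frequencies
parent = {"cat": "mammal", "dog": "mammal", "mammal": "animal",
          "bird": "animal", "animal": None}
corpus_freq = {"cat": 10, "dog": 12, "bird": 8, "mammal": 5, "animal": 2}

def ancestors(c):
    """Path from a concept up to the root, including the concept itself."""
    path = [c]
    while parent[c] is not None:
        c = parent[c]
        path.append(c)
    return path

def lcs(a, b):
    """Lowest common subsumer: first ancestor of a that also subsumes b."""
    anc_b = set(ancestors(b))
    return next(c for c in ancestors(a) if c in anc_b)

def depth(c):
    return len(ancestors(c))          # the root "animal" has depth 1

def wu_palmer(a, b):
    """Structure-based similarity: uses only taxonomy depths."""
    return 2 * depth(lcs(a, b)) / (depth(a) + depth(b))

total = sum(corpus_freq.values())

def ic(c):
    """Information content: -log of the probability of c or any descendant."""
    mass = sum(f for w, f in corpus_freq.items() if c in ancestors(w))
    return -math.log(mass / total)

def lin(a, b):
    """Information-content-based similarity (Lin)."""
    return 2 * ic(lcs(a, b)) / (ic(a) + ic(b))

print(round(wu_palmer("cat", "dog"), 3), round(lin("cat", "dog"), 3))
print(round(wu_palmer("cat", "bird"), 3), round(lin("cat", "bird"), 3))

Wu-Palmer depends only on the ontology structure, while Lin additionally uses corpus statistics; this is exactly the structure versus information-content distinction the abstract draws.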

Journal ArticleDOI
TL;DR: This work provides a classification of research problems in which ontologies are being applied, focusing on the use of ontologies in basic and translational research, and demonstrates how research results in biomedical ontologies can be evaluated.
Abstract: Ontologies are now pervasive in biomedicine, where they serve as a means to standardize terminology, to enable access to domain knowledge, to verify data consistency and to facilitate integrative analyses over heterogeneous biomedical data. For this purpose, research on biomedical ontologies applies theories and methods from diverse disciplines such as information management, knowledge representation, cognitive science, linguistics and philosophy. Depending on the desired applications in which ontologies are being applied, the evaluation of research in biomedical ontologies must follow different strategies. Here, we provide a classification of research problems in which ontologies are being applied, focusing on the use of ontologies in basic and translational research, and we demonstrate how research results in biomedical ontologies can be evaluated. The evaluation strategies depend on the desired application and measure the success of using an ontology for a particular biomedical problem. For many applications, the success can be quantified, thereby facilitating the objective evaluation and comparison of research in biomedical ontology. The objective, quantifiable comparison of research results based on scientific applications opens up the possibility for systematically improving the utility of ontologies in biomedical research.

Journal ArticleDOI
TL;DR: It is shown that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data.
Abstract: Mathematics is a ubiquitous foundation of science, technology, and engineering. Specific areas of mathematics, such as numeric and symbolic computation or logics, enjoy considerable software support. Working mathematicians have recently started to adopt Web 2.0 environments, such as blogs and wikis, but these systems lack machine support for knowledge organization and reuse, and they are disconnected from tools such as computer algebra systems or interactive proof assistants. We argue that such scenarios will benefit from Semantic Web technology. Conversely, mathematics is still underrepresented on the Web of [Linked] Data. There are mathematics-related Linked Data, for example statistical government data or scientific publication databases, but their mathematical semantics has not yet been modeled. We argue that the services for the Web of Data will benefit from a deeper representation of mathematical knowledge. Mathematical knowledge comprises structures given in a logical language (formulae, statements such as axioms, and theories), a mixture of rigorous natural language and symbolic notation in documents, application-specific metadata, and discussions about conceptualizations, formalizations, proofs, and counter-examples. Our review of vocabularies for representing these structures covers ontologies for mathematical problems, proofs, interlinked scientific publications, scientific discourse, as well as mathematical metadata vocabularies and domain knowledge from pure and applied mathematics. Many fields of mathematics have not yet been implemented as proper Semantic Web ontologies; however, we show that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data. We conclude with a roadmap for getting the mathematical Web of Data started: what datasets to publish, how to interlink them, and how to take advantage of these new connections.

Journal ArticleDOI
TL;DR: Based on the normal distribution, a method to obtain basic probability assignment (BPA) is proposed, and several benchmark pattern classification problems are used to demonstrate the proposed method and to compare against existing methods.
Abstract: The Dempster-Shafer evidence theory (D-S theory) is one of the primary tools for knowledge representation and uncertain reasoning, and has been widely used in many information fusion systems. However, how to determine the basic probability assignment (BPA), which is the main and first step in D-S theory, is still an open issue. In this paper, based on the normal distribution, a method to obtain BPA is proposed. The training data are used to build a normal distribution-based model for each attribute of the data. Then, a nested structure BPA function can be constructed, using the relationship between the test data and the normal distribution model. A normality test and normality transformation are integrated into the proposed method to handle non-normal data. The missing attribute values in datasets are addressed as ignorance in the framework of the evidence theory. Several benchmark pattern classification problems are used to demonstrate the proposed method and to compare against existing methods. Experiments provide encouraging results in terms of classification accuracy, and the proposed method is seen to perform well without a large amount of training data.
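The following Python fragment is a deliberately simplified sketch of the general idea, not the article's nested-BPA construction, normality test, or handling of non-normal data: a normal model is fitted per class for a single attribute, the likelihoods of a test value are turned into masses on the singleton classes, and a fixed remainder is assigned to the whole frame of discernment as ignorance.

import math
import statistics

# Made-up training values of one attribute for two classes A and B
train = {"A": [4.9, 5.1, 5.0, 4.8], "B": [6.0, 6.2, 5.9, 6.1]}
models = {c: (statistics.mean(v), statistics.stdev(v)) for c, v in train.items()}

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bpa(x, ignorance=0.1):
    """Turn per-class likelihoods of a test value into a basic probability assignment."""
    likes = {c: normal_pdf(x, mu, sigma) for c, (mu, sigma) in models.items()}
    total = sum(likes.values())
    masses = {frozenset([c]): (1 - ignorance) * v / total for c, v in likes.items()}
    masses[frozenset(models)] = ignorance   # mass on the whole frame {A, B}
    return masses

print(bpa(5.0))   # most mass on {'A'}, a small amount reserved as ignorance

The fixed ignorance mass stands in, very roughly, for the article's treatment of uncertainty and missing attribute values within the evidence-theoretic framework.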

Journal ArticleDOI
TL;DR: The affine model is presented, a computational model that simulates modal reasoning by using iconic visual representations together with affine and set transformations over these representations to solve a given Raven's Progressive Matrices (RPM) problem.

Proceedings Article
02 Apr 2013
TL;DR: The system, NetSieve, combines statistical natural language processing (NLP), knowledge representation, and ontology modeling to achieve these goals and achieves 89%-100% accuracy and its inference output is useful to learn global problem trends.
Abstract: This paper presents NetSieve, a system that aims to do automated problem inference from network trouble tickets. Network trouble tickets are diaries comprising fixed fields and free-form text written by operators to document the steps while troubleshooting a problem. Unfortunately, while tickets carry valuable information for network management, analyzing them to do problem inference is extremely difficult: fixed fields are often inaccurate or incomplete, and the free-form text is mostly written in natural language. This paper takes a practical step towards automatically analyzing natural language text in network tickets to infer the problem symptoms, troubleshooting activities and resolution actions. Our system, NetSieve, combines statistical natural language processing (NLP), knowledge representation, and ontology modeling to achieve these goals. To cope with ambiguity in free-form text, NetSieve leverages learning from human guidance to improve its inference accuracy. We evaluate NetSieve on 10K+ tickets from a large cloud provider, and compare its accuracy using (a) an expert review, (b) a study with operators, and (c) vendor data that tracks device replacement and repairs. Our results show that NetSieve achieves 89%-100% accuracy and its inference output is useful to learn global problem trends. We have used NetSieve in several key network operations: analyzing device failure trends, understanding why network redundancy fails, and identifying device problem symptoms.

Journal ArticleDOI
TL;DR: This paper proposes a novel temporal knowledge representation and learning framework to perform large-scale temporal signature mining of longitudinal heterogeneous event data and presents a doubly constrained convolutional sparse coding framework that learns interpretable and shift-invariant latent temporal event signatures.
Abstract: This paper proposes a novel temporal knowledge representation and learning framework to perform large-scale temporal signature mining of longitudinal heterogeneous event data. The framework enables the representation, extraction, and mining of high-order latent event structure and relationships within single and multiple event sequences. The proposed knowledge representation maps the heterogeneous event sequences to a geometric image by encoding events as a structured spatial-temporal shape process. We present a doubly constrained convolutional sparse coding framework that learns interpretable and shift-invariant latent temporal event signatures. We show how to cope with the sparsity in the data as well as in the latent factor model by inducing a double sparsity constraint on the β-divergence to learn an overcomplete sparse latent factor model. A novel stochastic optimization scheme performs large-scale incremental learning of group-specific temporal event signatures. We validate the framework on synthetic data and on an electronic health record dataset.

Book
01 Dec 2013
TL;DR: This book provides comprehensive coverage of the primary exact algorithms for reasoning with graphical models, and believes the principles outlined here would serve well in moving forward to approximation and anytime-based schemes.
Abstract: Graphical models (e.g., Bayesian and constraint networks, influence diagrams, and Markov decision processes) have become a central paradigm for knowledge representation and reasoning in both artificial intelligence and computer science in general. These models are used to perform many reasoning tasks, such as scheduling, planning and learning, diagnosis and prediction, design, hardware and software verification, and bioinformatics. These problems can be stated as the formal tasks of constraint satisfaction and satisfiability, combinatorial optimization, and probabilistic inference. It is well known that the tasks are computationally hard, but research during the past three decades has yielded a variety of principles and techniques that significantly advanced the state of the art. In this book we provide comprehensive coverage of the primary exact algorithms for reasoning with such models. The main feature exploited by the algorithms is the model's graph. We present inference-based, message-passing schemes (e.g., variable-elimination) and search-based, conditioning schemes (e.g., cycle-cutset conditioning and AND/OR search). Each class possesses distinguished characteristics and in particular has different time vs. space behavior. We emphasize the dependence of both schemes on few graph parameters such as the treewidth, cycle-cutset, and (the pseudo-tree) height. We believe the principles outlined here would serve well in moving forward to approximation and anytime-based schemes. The target audience of this book is researchers and students in the artificial intelligence and machine learning area, and beyond. Table of Contents: Preface / Introduction / What are Graphical Models / Inference: Bucket Elimination for Deterministic Networks / Inference: Bucket Elimination for Probabilistic Networks / Tree-Clustering Schemes / AND/OR Search Spaces and Algorithms for Graphical Models / Combining Search and Inference: Trading Space for Time / Conclusion / Bibliography / Author's Biography
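As a tiny illustration of the inference style the book covers, the sketch below implements the two primitive operations behind variable (bucket) elimination, multiplying factors and summing a variable out, and applies them to a two-node chain. The factor representation and the toy numbers are assumptions made for this example only.

# Factors map assignments (frozensets of (variable, value) pairs) to numbers.

def consistent(a1, a2):
    """True if two partial assignments agree on their shared variables."""
    d1, d2 = dict(a1), dict(a2)
    return all(d1[k] == d2[k] for k in d1.keys() & d2.keys())

def multiply(f1, f2):
    """Pointwise product of two factors over the union of their variables."""
    return {a1 | a2: v1 * v2
            for a1, v1 in f1.items() for a2, v2 in f2.items()
            if consistent(a1, a2)}

def sum_out(var, factor):
    """Marginalise a variable out of a factor."""
    out = {}
    for assignment, v in factor.items():
        reduced = frozenset((k, val) for k, val in assignment if k != var)
        out[reduced] = out.get(reduced, 0.0) + v
    return out

# Toy chain A -> B: eliminate A from P(A) * P(B|A) to obtain P(B)
p_a = {frozenset({("A", 0)}): 0.6, frozenset({("A", 1)}): 0.4}
p_b_given_a = {
    frozenset({("A", 0), ("B", 0)}): 0.9, frozenset({("A", 0), ("B", 1)}): 0.1,
    frozenset({("A", 1), ("B", 0)}): 0.2, frozenset({("A", 1), ("B", 1)}): 0.8,
}
p_b = sum_out("A", multiply(p_a, p_b_given_a))
print(p_b)   # P(B=0) = 0.62, P(B=1) = 0.38

Bucket elimination organises exactly these operations along a variable ordering; its cost grows with graph parameters such as the treewidth, which is the dependence the abstract emphasises.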

Journal ArticleDOI
TL;DR: In this paper, a Transferable Belief Model (TBM) is used to support collaborative decision making and risk-based maintenance in industrial environments, and a case-based reasoning mechanism is used for solving new problems based on similar past problems and integrating the experts' beliefs.
Abstract: Highlights: collaborative maintenance management and problem solving in industrial environments; case-based reasoning as the process for using past experiences to solve new problems; conceptual graphs for knowledge representation and visual reasoning using a taxonomy; the transferable belief model for collaborative decision making and risk-based maintenance; methodological aspects linked to functionality (e.g. diagnosis or health assessment). Distributed environments, technological evolution, the outsourcing market and information technology (IT) are factors that considerably influence current and future industrial maintenance management. Repairing and maintaining plants and installations requires a better and more sophisticated skill set and continuously updated knowledge. Today, maintenance solutions involve increasing collaboration of several experts to solve complex problems. These solutions imply changing the requirements and practices for maintenance; thus, conceptual models to support multidisciplinary expert collaboration in decision making are indispensable. The objectives of this work are as follows: (i) knowledge formalization of domain vocabulary to improve communication and knowledge sharing among a number of experts and technical actors with the Conceptual Graphs (CGs) formalism, (ii) multi-expert knowledge management with the Transferable Belief Model (TBM) to support collaborative decision making, and (iii) maintenance problem solving with a variant of the Case-Based Reasoning (CBR) mechanism, a process of solving new problems based on the solutions of similar past problems while integrating the experts' beliefs. The proposed approach is applied to the maintenance management of an illustrative case study.
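Only as a hint of what the belief-fusion step in the Transferable Belief Model involves, here is a small Python sketch of the unnormalised conjunctive combination of two experts' basic belief assignments; the frame of faults and the numbers are invented for illustration, and the article's full methodology (conceptual graphs, case-based reasoning) is not shown.

def conjunctive_combine(m1, m2):
    """Unnormalised conjunctive rule: intersect focal sets and multiply masses."""
    out = {}
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2                      # may be empty (conflict)
            out[inter] = out.get(inter, 0.0) + v1 * v2
    return out

frame = frozenset({"bearing", "misalignment"})
expert1 = {frozenset({"bearing"}): 0.7, frame: 0.3}
expert2 = {frozenset({"misalignment"}): 0.4, frame: 0.6}

fused = conjunctive_combine(expert1, expert2)
for subset, mass in fused.items():
    print(sorted(subset), round(mass, 2))
# the mass left on the empty set (0.28 here) quantifies the conflict between experts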

Journal ArticleDOI
TL;DR: The design of a comprehensive and lightweight semantic description model for knowledge representation in the IoT domain is presented; it follows widely recognised best practices in knowledge engineering and ontology modelling and can be extended by linking to external ontologies, knowledge bases or existing linked data.
Abstract: Semantic modelling provides a potential basis for interoperating among different systems and applications in the Internet of Things (IoT). However, current work has mostly focused on IoT resource management while not on the access and utilisation of information generated by the ``Things''. We present the design of a comprehensive and lightweight semantic description model for knowledge representation in the IoT domain. The design follows the widely recognised best practices in knowledge engineering and ontology modelling. Users are allowed to extend the model by linking to external ontologies, knowledge bases or existing linked data. Scalable access to IoT services and resources is achieved through a distributed, semantic storage design. The usefulness of the model is also illustrated through an IoT service discovery method.
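A minimal Python sketch of the kind of lightweight, triple-based description and discovery the abstract alludes to is given below; the vocabulary terms, identifiers and matching logic are illustrative assumptions, not the paper's actual model or its distributed storage design.

# Illustrative triples (subject, predicate, object) describing two IoT resources;
# the property and type names are made up, not the paper's ontology.
triples = [
    ("urn:sensor/42", "rdf:type", "iot:TemperatureSensor"),
    ("urn:sensor/42", "iot:locatedIn", "Room101"),
    ("urn:sensor/42", "iot:exposesService", "urn:service/temp-42"),
    ("urn:cam/7",     "rdf:type", "iot:Camera"),
    ("urn:cam/7",     "iot:locatedIn", "Lobby"),
]

def discover(type_=None, located_in=None):
    """Naive service discovery: return subjects matching the given constraints."""
    def has(s, p, o):
        return (s, p, o) in triples
    subjects = {s for s, _, _ in triples}
    return [s for s in sorted(subjects)
            if (type_ is None or has(s, "rdf:type", type_))
            and (located_in is None or has(s, "iot:locatedIn", located_in))]

print(discover(type_="iot:TemperatureSensor", located_in="Room101"))
# ['urn:sensor/42']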

Journal ArticleDOI
TL;DR: This paper will try to show that FCA actually provides support for processing large dynamical complex data augmented with additional knowledge.
Abstract: During the last three decades, formal concept analysis (FCA) became a well-known formalism in data analysis and knowledge discovery because of its usefulness in important domains of knowledge discovery in databases (KDD) such as ontology engineering, association rule mining, and machine learning, as well as its relation to other established theories for representing knowledge processing, like description logics, conceptual graphs, and rough sets. In its early days, FCA was sometimes misconceived as a static, crisp, hardly scalable formalism for binary data tables. In this paper, we will try to show that FCA actually provides support for processing large, dynamical, complex (possibly uncertain) data augmented with additional knowledge. © 2013 Wiley Periodicals, Inc.

Proceedings Article
09 Jul 2013
TL;DR: A probabilistic normalcy model of vessel dynamics is learned using unsupervised techniques applied on historical S-AIS data and used for anomaly detection and prediction tasks, thus providing functionalities for high-level situational awareness (level 2 and 3 of the JDL).
Abstract: Automatic vessel behaviour analysis is a key factor for maritime surveillance and relies on an efficient representation of knowledge about vessel activity. Emerging technologies such as space-based AIS provide a new dimension of service and create a need for new methods able to learn a maritime scene model at an oceanic scale. In this paper, we propose such a framework: a probabilistic normalcy model of vessel dynamics is learned using unsupervised techniques applied to historical S-AIS data and used for anomaly detection and prediction tasks, thus providing functionalities for high-level situational awareness (levels 2 and 3 of the JDL model).

Journal ArticleDOI
03 Jun 2013
TL;DR: This work presents a new type of FPN model, dynamic adaptive fuzzy Petri nets, and proposes a max-algebra based parallel reasoning algorithm so that the reasoning process can be implemented automatically.
Abstract: Although a promising tool for knowledge representation and reasoning, fuzzy Petri nets (FPNs) still suffer from some deficiencies. First, the parameters in current FPN models, such as weight, threshold, and certainty factor do not accurately represent increasingly complex knowledge-based expert systems and do not capture the dynamic nature of fuzzy knowledge. Second, the fuzzy rules of most existing knowledge inference frameworks are static and cannot be adjusted dynamically according to variations of antecedent propositions. To address these problems, we present a new type of FPN model, dynamic adaptive fuzzy Petri nets, for knowledge representation and reasoning. We also propose a max-algebra based parallel reasoning algorithm so that the reasoning process can be implemented automatically. As illustrated by a numerical example, the proposed model can well represent the experts' diverse experience and can implement the knowledge reasoning dynamically.
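To give a flavour of the reasoning style described here, the fragment below performs a max-min style propagation step over a two-rule fuzzy rule base; the rules, truth degrees, certainty factors and thresholds are invented for illustration, and the article's matrix-form max-algebra algorithm is not reproduced.

# Each rule (transition) fires with the minimum truth degree of its input
# propositions, scaled by the rule's certainty factor; each output
# proposition keeps the maximum degree offered to it (max-algebra update).
rules = [
    # (input propositions, output proposition, certainty factor, threshold)
    (["high_vibration", "high_temperature"], "bearing_wear", 0.9, 0.3),
    (["bearing_wear"], "shutdown_risk", 0.8, 0.3),
]
degrees = {"high_vibration": 0.7, "high_temperature": 0.6,
           "bearing_wear": 0.0, "shutdown_risk": 0.0}

def reasoning_step(current):
    new = dict(current)
    for inputs, output, cf, threshold in rules:
        firing = min(current[p] for p in inputs)          # conjunctive inputs
        if firing >= threshold:
            new[output] = max(new[output], cf * firing)   # max-algebra update
    return new

state = degrees
for _ in range(len(rules)):            # iterate until degrees stop changing
    state = reasoning_step(state)
print(state)   # final degrees: bearing_wear about 0.54, shutdown_risk about 0.432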

Journal Article
TL;DR: A new method is introduced to achieve a higher-level and more advanced active-feature-driven product model definition; its implementation will be new application-oriented model entity generation and representation utilizing existing modeling resources in industrial PLM systems through application programming interfaces (APIs).
Abstract: The current product model consists of features and unstructured contextual connections in order to relate features. The feature modifies the previous state of the product model producing contextual connections with previously defined features. Active knowledge is applied for the adaptive modification of product model features in the case of a changed situation or event. Starting from this state-of-the-art, the authors of this paper introduced a new method to achieve higher-level and more advanced active feature driven product model definition. As part of the related research program, new situation driven model definition processes and model entities are explained in this paper. Higher-level knowledge representation in the product model is motivated by a recent trend in industrial product modeling systems towards more advanced and efficient situation-based self-adaptive model generation. The proposed model represents one of the possible future ways of product model development for product lifecycle management (PLM) systems on the global or product level of decisions. Its implementation will be new application-oriented model entity generation and representation utilizing existing modeling resources in industrial PLM systems by use of application programming interfaces (APIs).

Journal ArticleDOI
TL;DR: This paper investigates inconsistency measurement in probabilistic conditional logic, a logic that incorporates uncertainty and focuses on the role of conditionals, i.e. if-then rules, by extending inconsistency measures for classical logic to the probabilistic setting, and proposes novel inconsistency measures that are specifically tailored for the probabilistic case.
