
Showing papers on "Knowledge representation and reasoning" published in 2021


Journal ArticleDOI
TL;DR: A comprehensive review of the knowledge graph covering overall research topics about: 1) knowledge graph representation learning; 2) knowledge acquisition and completion; 3) temporal knowledge graph; and 4) knowledge-aware applications, summarizing recent breakthroughs and prospective directions to facilitate future research.
Abstract: Human knowledge provides a formal understanding of the world. Knowledge graphs that represent structural relations between entities have become an increasingly popular research direction toward cognition and human-level intelligence. In this survey, we provide a comprehensive review of the knowledge graph covering overall research topics about: 1) knowledge graph representation learning; 2) knowledge acquisition and completion; 3) temporal knowledge graph; and 4) knowledge-aware applications, and summarize recent breakthroughs and prospective directions to facilitate future research. We propose a full-view categorization and new taxonomies on these topics. Knowledge graph embedding is organized from four aspects of representation space, scoring function, encoding models, and auxiliary information. For knowledge acquisition, especially knowledge graph completion, embedding methods, path inference, and logical rule reasoning are reviewed. We further explore several emerging topics, including metarelational learning, commonsense reasoning, and temporal knowledge graphs. To facilitate future research on knowledge graphs, we also provide a curated collection of data sets and open-source libraries on different tasks. In the end, we offer a thorough outlook on several promising research directions.

1,025 citations
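The embedding aspects this survey organizes (representation space, scoring function) can be made concrete with a toy sketch of one classic scoring function. Everything here is illustrative: the entity names are invented and the vectors are random and untrained, in the spirit of the translation-based models the survey covers (a TransE-style score):

```python
import numpy as np

# Toy, untrained embeddings for illustration only; the entity and relation
# names are invented. TransE models a fact (head, relation, tail) as
# head + relation ≈ tail, so plausibility is a distance in embedding space.
rng = np.random.default_rng(0)
dim = 8
entities = {e: rng.normal(size=dim) for e in ["paris", "france", "germany"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head, rel, tail):
    """Lower is more plausible: L2 distance between translated head and tail."""
    return float(np.linalg.norm(entities[head] + relations[rel] - entities[tail]))
```

A trained model would drive true triples toward low scores; with random vectors the numbers are arbitrary, but the score's shape, a distance between the translated head and the tail, is the point.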


Journal ArticleDOI
TL;DR: This survey is the first to provide an inclusive definition to the notion of domain KG, and a comprehensive review of the state-of-the-art approaches drawn from academic works relevant to seven dissimilar domains of knowledge is provided.

138 citations


Journal ArticleDOI
TL;DR: The model using semantic representation as input verifies that more accurate results can be obtained by introducing a high-level semantic representation, and shows that it is feasible and effective to introduce high-level and abstract forms of knowledge representation into deep learning tasks.
Abstract: In visual reasoning, the achievement of deep learning significantly improved the accuracy of results. Image features are primarily used as input to get answers. However, the image features are too redundant to learn accurate characterizations within a limited complexity and time. In human reasoning, by contrast, an abstract description of an image is usually used to avoid irrelevant details. Inspired by this, a higher-level representation named semantic representation is introduced. In this paper, a detailed visual reasoning model is proposed. This new model contains an image understanding model based on semantic representation, a feature extraction and processing model refined with the watershed and u-distance methods, a feature vector learning model using pyramidal pooling and a residual network, and a question understanding model combining a problem embedding coding method and a machine translation decoding method. The feature vector can better represent the whole image instead of focusing overly on specific characteristics. The model using semantic representation as input verifies that more accurate results can be obtained by introducing a high-level semantic representation. The result also shows that it is feasible and effective to introduce high-level and abstract forms of knowledge representation into deep learning tasks. This study lays a theoretical and experimental foundation for introducing different levels of knowledge representation into deep learning in the future.

116 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: Knowledge Reasoning with Implicit and Symbolic rePresentations (KRISP) as mentioned in this paper combines implicit knowledge from unsupervised language pre-training and symbolic knowledge encoded in knowledge bases.
Abstract: One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image. In this work we study open-domain knowledge, the setting when the knowledge required to answer a question is not given/annotated, neither at training nor test time. We tap into two types of knowledge representations and reasoning. First, implicit knowledge which can be learned effectively from unsupervised language pretraining and supervised training data with transformer-based models. Second, explicit, symbolic knowledge encoded in knowledge bases. Our approach combines both—exploiting the powerful implicit reasoning of transformer models for answer prediction, and integrating symbolic representations from a knowledge graph, while never losing their explicit semantics to an implicit embedding. We combine diverse sources of knowledge to cover the wide variety of knowledge needed to solve knowledge-based questions. We show our approach, KRISP (Knowledge Reasoning with Implicit and Symbolic rePresentations), significantly out-performs state-of-the-art on OK-VQA, the largest available dataset for open-domain knowledge-based VQA. We show with extensive ablations that while our model successfully exploits implicit knowledge reasoning, the symbolic answer module which explicitly connects the knowledge graph to the answer vocabulary is critical to the performance of our method and generalizes to rare answers.1

90 citations


Journal ArticleDOI
TL;DR: Traditional and modern computational models of semantic memory are reviewed, within the umbrella of network (free association-based), feature (property generation norms-based), and distributional semantic (natural language corpora-based) models, and the contribution of these models to important debates in the literature regarding knowledge representation and learning is discussed.
Abstract: Adult semantic memory has been traditionally conceptualized as a relatively static memory system that consists of knowledge about the world, concepts, and symbols. Considerable work in the past few decades has challenged this static view of semantic memory, and instead proposed a more fluid and flexible system that is sensitive to context, task demands, and perceptual and sensorimotor information from the environment. This paper (1) reviews traditional and modern computational models of semantic memory, within the umbrella of network (free association-based), feature (property generation norms-based), and distributional semantic (natural language corpora-based) models, (2) discusses the contribution of these models to important debates in the literature regarding knowledge representation (localist vs. distributed representations) and learning (error-free/Hebbian learning vs. error-driven/predictive learning), and (3) evaluates how modern computational models (neural network, retrieval-based, and topic models) are revisiting the traditional "static" conceptualization of semantic memory and tackling important challenges in semantic modeling such as addressing temporal, contextual, and attentional influences, as well as incorporating grounding and compositionality into semantic representations. The review also identifies new challenges regarding the abundance and availability of data, the generalization of semantic models to other languages, and the role of social interaction and collaboration in language learning and development. The concluding section advocates the need for integrating representational accounts of semantic memory with process-based accounts of cognitive behavior, as well as the need for explicit comparisons of computational models to human baselines in semantic tasks to adequately assess their psychological plausibility as models of human semantic memory.

88 citations
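The distributional (corpus-based) family of models reviewed above can be illustrated with a deliberately tiny sketch: word meaning as a vector of co-occurrence counts, compared by cosine similarity. The context words and counts below are invented; real models estimate such vectors from large natural language corpora.

```python
import math

# Hypothetical context words and invented co-occurrence counts; in a real
# distributional model these are estimated from corpus statistics.
contexts = ["drink", "bark", "purr", "engine"]
vectors = {
    "dog":    [2, 9, 1, 0],
    "cat":    [4, 1, 8, 0],
    "coffee": [9, 0, 0, 1],
}

def cosine(a, b):
    """Cosine similarity: dot product over the product of vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

sim_animals = cosine(vectors["dog"], vectors["cat"])        # similar contexts
sim_dog_coffee = cosine(vectors["dog"], vectors["coffee"])  # dissimilar contexts
```

With these invented counts, the two animals share more context mass than "dog" and "coffee" do, which is the whole distributional hypothesis in miniature.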


Journal ArticleDOI
TL;DR: An overview over the methods that use ontologies to compute similarity and incorporate them in machine learning methods is provided, which outlines how semantic similarity measures and ontology embeddings can exploit the background knowledge in ontologies and how ontologies can provide constraints that improve machine learning models.
Abstract: Ontologies have long been employed in the life sciences to formally represent and reason over domain knowledge and they are employed in almost every major biological database. Recently, ontologies are increasingly being used to provide background knowledge in similarity-based analysis and machine learning models. The methods employed to combine ontologies and machine learning are still novel and actively being developed. We provide an overview over the methods that use ontologies to compute similarity and incorporate them in machine learning methods; in particular, we outline how semantic similarity measures and ontology embeddings can exploit the background knowledge in ontologies and how ontologies can provide constraints that improve machine learning models. The methods and experiments we describe are available as a set of executable notebooks, and we also provide a set of slides and additional resources at https://github.com/bio-ontology-research-group/machine-learning-with-ontologies.

81 citations
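As a hedged illustration of how an ontology's background knowledge feeds a similarity measure, the sketch below uses an invented toy "is-a" hierarchy and a simple shared-ancestor (Jaccard) overlap; the survey itself covers richer measures (e.g., information-content-based ones) and ontology embeddings.

```python
# Toy "is-a" hierarchy (child -> parent); all terms are invented examples.
parents = {
    "dog": "mammal", "cat": "mammal",
    "mammal": "animal", "bird": "animal",
    "animal": "thing",
}

def ancestors(term):
    """The term itself plus all of its superclasses."""
    out = {term}
    while term in parents:
        term = parents[term]
        out.add(term)
    return out

def jaccard_similarity(a, b):
    """Overlap of ancestor sets: |anc(a) & anc(b)| / |anc(a) | anc(b)|."""
    A, B = ancestors(a), ancestors(b)
    return len(A & B) / len(A | B)
```

Terms that share a deeper common superclass ("dog" and "cat" under "mammal") score higher than terms that only meet near the root, which is exactly the background knowledge the ontology contributes.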


Journal ArticleDOI
TL;DR: In this paper, the authors promote the idea that including semantic and goal-oriented aspects in future 6G networks can produce a significant leap forward in terms of system effectiveness and sustainability.

78 citations


Proceedings ArticleDOI
TL;DR: In this paper, the authors evaluate existing cyber-threat-intelligence-relevant ontologies, sharing standards, and taxonomies for the purpose of measuring their high-level conceptual expressivity with regards to the who, what, why, where, when, and how elements of an adversarial attack in addition to courses of action and technical indicators.
Abstract: Cyber threat intelligence is the provision of evidence-based knowledge about existing or emerging threats. Benefits of threat intelligence include increased situational awareness and efficiency in security operations and improved prevention, detection, and response capabilities. Processing, analyzing, and correlating vast amounts of threat information and deriving highly contextual intelligence that can be shared and consumed in a meaningful time frame requires machine-understandable knowledge representation formats that embed the industry-required expressivity and are unambiguous. To a large extent, this is achieved by technologies like ontologies, interoperability schemas, and taxonomies. This research evaluates existing cyber-threat-intelligence-relevant ontologies, sharing standards, and taxonomies for the purpose of measuring their high-level conceptual expressivity with regards to the who, what, why, where, when, and how elements of an adversarial attack in addition to courses of action and technical indicators. The results confirmed that little emphasis has been given to developing a comprehensive cyber threat intelligence ontology, with existing efforts not thoroughly designed, non-interoperable, ambiguous, and lacking semantic reasoning capability.

70 citations


Journal ArticleDOI
TL;DR: An enhanced grey reasoning Petri net (EGRPN) based on matrix operations is presented to address these limitations and improve the flexibility of the existing FPN; experimental results show that the new EGRPN model is promising for cause analysis.
Abstract: Cause analysis makes great contributions to identifying the priorities of causes in fault diagnosis systems. A fuzzy Petri net (FPN) is a preferable model for knowledge representation and reasoning and has become an effective fault diagnosis tool. However, the existing FPN has some limitations in cause analysis. It is criticized for the inability to fully consider incomplete and unknown knowledge in uncertain situations. In this paper, an enhanced grey reasoning Petri net (EGRPN) based on matrix operations is presented to address these limitations and improve the flexibility of the existing FPN. The proposed EGRPN model uses grey numbers to handle the greyness and inaccuracy of uncertain knowledge. Then, the EGRPN inference algorithm is executed based on matrix operations, which can express the relevance of uncertain events in the form of grey numbers and improve the reliability of the knowledge reasoning process. Finally, industrial examples of cause diagnosis are used to illustrate the feasibility and reliability of the EGRPN model. The experimental results show that the new EGRPN model is promising for cause analysis.

66 citations
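The matrix-based inference the abstract describes can be sketched on a toy net. The structure and truth degrees below are invented, and the crisp degrees stand in for the grey-number intervals EGRPN actually uses; the sketch only shows the max-min matrix step common to fuzzy-Petri-net reasoning.

```python
import numpy as np

# Toy net: three propositions (places) and two rules (transitions).
I = np.array([[1, 0],
              [1, 1],
              [0, 0]], dtype=float)   # I[i, j] = 1 if place i feeds rule j
O = np.array([[0, 0],
              [0, 0],
              [1, 1]], dtype=float)   # O[i, j] = 1 if rule j concludes place i
mu = np.array([0.9, 0.8])             # certainty factors of the two rules
theta = np.array([0.7, 0.6, 0.0])     # initial truth degrees of the places

def fire_once(theta, I, O, mu):
    """Each rule fires at the minimum truth degree of its inputs times its
    certainty factor; each place keeps the max of its old degree and the
    strongest conclusion any rule offers it."""
    masked = np.where(I > 0, theta[:, None], np.inf)
    fire = np.min(masked, axis=0) * mu       # activation degree per rule
    conclude = np.max(O * fire, axis=1)      # strongest conclusion per place
    return np.maximum(theta, conclude)
```

Iterating `fire_once` until the truth-degree vector stops changing is the usual matrix formulation of FPN reasoning; EGRPN performs the analogous operations on grey numbers.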


Journal ArticleDOI
25 Jan 2021
TL;DR: In this article, the authors present a comprehensive view on AI-driven Cybersecurity that can play an important role for intelligent cybersecurity services and management, which can make the cybersecurity computing process automated and intelligent than the conventional security systems.
Abstract: Artificial intelligence (AI) is one of the key technologies of the Fourth Industrial Revolution (or Industry 4.0), which can be used for the protection of Internet-connected systems from cyber threats, attacks, damage, or unauthorized access. To intelligently solve today’s various cybersecurity issues, popular AI techniques involving machine learning and deep learning methods, the concept of natural language processing, knowledge representation and reasoning, as well as the concept of knowledge or rule-based expert systems modeling can be used. Based on these AI methods, in this paper, we present a comprehensive view on “AI-driven Cybersecurity” that can play an important role for intelligent cybersecurity services and management. Security intelligence modeling based on such AI methods can make the cybersecurity computing process more automated and intelligent than conventional security systems. We also highlight several research directions within the scope of our study, which can help researchers pursue future work in the area. Overall, this paper’s ultimate objective is to serve as a reference point and guidelines for cybersecurity researchers as well as industry professionals in the area, especially from an intelligent computing or AI-based technical point of view.

61 citations


Journal ArticleDOI
TL;DR: The research introduces the related concepts of knowledge representation and analyzes the knowledge representation of knowledge graphs by category, covering some classical general knowledge graphs and several typical domain knowledge graphs.
Abstract: The domain knowledge graph has become a research topic in the era of artificial intelligence. Knowledge representation is the key step in constructing a domain knowledge graph. There have been quite a few well-established general knowledge graphs. However, there are still gaps in domain knowledge graph construction. This research introduces the related concepts of knowledge representation and analyzes the knowledge representation of knowledge graphs by category, covering some classical general knowledge graphs and several typical domain knowledge graphs. The paper also discusses the development of knowledge representation with respect to differences in entities, relationships, and properties. It also presents the unsolved problems and future research trends in the knowledge representation of domain knowledge graph study.

Journal ArticleDOI
01 Jul 2021
TL;DR: In this article, the authors present a characterization of different types of KGs along with their construction approaches and discuss the current KG applications, problems, and challenges as well as discuss the perspective of future research.
Abstract: With the extensive growth of data accompanying the thriving development of the Internet in this century, finding valuable information and knowledge in these huge, noisy data has become harder. The concept of the Knowledge Graph (KG) has come into public view as a result of this development. In addition, the need to process and extract valuable information in a more efficient way has increased, especially in the last two decades. The KG presents a common framework for knowledge representation, based on the analysis and extraction of entities and relationships. Techniques for KG construction can extract information from structured, unstructured, or even semi-structured data sources, and finally organize the information into knowledge, represented in a graph. This paper presents a characterization of different types of KGs along with their construction approaches. It reviews existing academic, industrial, and expert KG systems and discusses their features in detail. A systematic review methodology has been followed to conduct the review. Several databases (Scopus, GS, WoS) and journals (SWJ, Applied Ontology, JWS) were analysed to collect the relevant studies, which were filtered using inclusion and exclusion criteria. This review includes the state-of-the-art, literature review, characterization of KGs, and the knowledge extraction techniques of KGs. In addition, this paper overviews the current KG applications, problems, and challenges, as well as discusses perspectives for future research. The main aim of this paper is to analyse all existing KGs with their features, techniques, applications, problems, and challenges. To the best of our knowledge, such a characterization table among these most commonly used KGs has not been presented earlier.

Proceedings ArticleDOI
26 Oct 2021
TL;DR: The Federated Knowledge Graphs Embedding (FKGE) as mentioned in this paper exploits adversarial generation between pairs of knowledge graphs to translate identical entities and relations of different domains into near embedding spaces.
Abstract: Knowledge graph embedding plays an important role in knowledge representation, reasoning, and data mining applications. However, for multiple cross-domain knowledge graphs, state-of-the-art embedding models cannot make full use of the data from different knowledge domains while preserving the privacy of exchanged data. In addition, the centralized embedding model may not scale to the extensive real-world knowledge graphs. Therefore, we propose a novel decentralized scalable learning framework, Federated Knowledge Graphs Embedding (FKGE), where embeddings from different knowledge graphs can be learnt in an asynchronous and peer-to-peer manner while being privacy-preserving. FKGE exploits adversarial generation between pairs of knowledge graphs to translate identical entities and relations of different domains into near embedding spaces. In order to protect the privacy of the training data, FKGE further implements a privacy-preserving neural network structure to guarantee no raw data leakage. We conduct extensive experiments to evaluate FKGE on 11 knowledge graphs, demonstrating a significant and consistent improvement in model quality with at most 17.85% and 7.90% increases in performance on triple classification and link prediction tasks.

Proceedings ArticleDOI
19 Apr 2021
TL;DR: Zhang et al. as discussed by the authors explored richer and more competitive prior knowledge to model the inter-class relationship for zero-shot learning via ontology-based knowledge representation and semantic embedding.
Abstract: Zero-shot Learning (ZSL), which aims to predict for classes that have never appeared in the training data, has attracted intense research interest. The key to implementing ZSL is to leverage the prior knowledge of classes, which builds the semantic relationship between classes and enables the transfer of learned models (e.g., features) from training classes (i.e., seen classes) to unseen classes. However, the priors adopted by existing methods are relatively limited, with incomplete semantics. In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationship for ZSL via ontology-based knowledge representation and semantic embedding. Meanwhile, to address the data imbalance between seen classes and unseen classes, we develop a generative ZSL framework with Generative Adversarial Networks (GANs). Our main findings include: (i) an ontology-enhanced ZSL framework that can be applied to different domains, such as image classification (IMGC) and knowledge graph completion (KGC); (ii) a comprehensive evaluation with multiple zero-shot datasets from different domains, where our method often achieves better performance than the state-of-the-art models. In particular, on four representative ZSL baselines of IMGC, the ontology-based class semantics outperform previous priors, e.g., the word embeddings of classes, by an average of 12.4 accuracy points in the standard ZSL across two example datasets (see Figure 4).

Proceedings ArticleDOI
01 Jun 2021
TL;DR: A novel time-aware knowledge graph embedding approach, TeLM, which performs 4th-order tensor factorization of a Temporal knowledge graph using a Linear temporal regularizer and Multivector embeddings, and investigates the effect of the temporal dataset’s time granularity on temporal knowledge graph completion.
Abstract: Representation learning approaches for knowledge graphs have been mostly designed for static data. However, many knowledge graphs involve evolving data, e.g., the fact (The President of the United States is Barack Obama) is valid only from 2009 to 2017. This introduces important challenges for knowledge representation learning since the knowledge graphs change over time. In this paper, we present a novel time-aware knowledge graph embedding approach, TeLM, which performs 4th-order tensor factorization of a Temporal knowledge graph using a Linear temporal regularizer and Multivector embeddings. Moreover, we investigate the effect of the temporal dataset’s time granularity on temporal knowledge graph completion. Experimental results demonstrate that our proposed models trained with the linear temporal regularizer achieve the state-of-the-art performances on link prediction over four well-established temporal knowledge graph completion benchmarks.
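A hedged toy of the underlying idea, namely that a fact's plausibility should depend on time: below, an invented relation vector drifts linearly between years, a drastic simplification of TeLM's 4th-order tensor factorization with multivector embeddings and a linear temporal regularizer. Names and vectors are illustrative, not trained.

```python
import numpy as np

# Toy, untrained embeddings; entity names are invented examples inspired by
# the abstract's (President, 2009-2017) fact. The relation gets a linear
# per-year drift so the same triple scores differently at different times.
rng = np.random.default_rng(1)
dim = 4
E = {e: rng.normal(size=dim) for e in ["obama", "usa"]}
r_base = rng.normal(size=dim)
r_drift = rng.normal(size=dim) * 0.1   # per-year change of the relation

def score(head, year, tail, year0=2009):
    """DistMult-style plausibility with a time-dependent relation vector."""
    r_t = r_base + (year - year0) * r_drift
    return float(E[head] @ (r_t * E[tail]))
```

Training on timestamped triples would shape the drift so that, e.g., a presidency fact scores high only inside its validity interval; the linear temporal regularizer in TeLM plays a similar smoothing role between adjacent timestamps.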

Journal ArticleDOI
TL;DR: A new way of understanding theory of mind is suggested – one that is focused on understanding others' minds in relation to the actual world, rather than independent from it.
Abstract: Research on the capacity to understand others' minds has tended to focus on representations of beliefs, which are widely taken to be among the most central and basic theory of mind representations. Representations of knowledge, by contrast, have received comparatively little attention and have often been understood as depending on prior representations of belief. After all, how could one represent someone as knowing something if one does not even represent them as believing it? Drawing on a wide range of methods across cognitive science, we ask whether belief or knowledge is the more basic kind of representation. The evidence indicates that nonhuman primates attribute knowledge but not belief, that knowledge representations arise earlier in human development than belief representations, that the capacity to represent knowledge may remain intact in patient populations even when belief representation is disrupted, that knowledge (but not belief) attributions are likely automatic, and that explicit knowledge attributions are made more quickly than equivalent belief attributions. Critically, the theory of mind representations uncovered by these various methods exhibits a set of signature features clearly indicative of knowledge: they are not modality-specific, they are factive, they are not just true belief, and they allow for representations of egocentric ignorance. We argue that these signature features elucidate the primary function of knowledge representation: facilitating learning from others about the external world. This suggests a new way of understanding theory of mind – one that is focused on understanding others' minds in relation to the actual world, rather than independent from it.

Proceedings ArticleDOI
19 Jun 2021
TL;DR: In this paper, an adaptive knowledge accumulation (AKA) framework is proposed to learn continuously across multiple domains and even generalise to new and unseen domains; it alleviates catastrophic forgetting on seen domains and demonstrates the ability to generalise to unseen domains.
Abstract: Person re-identification (ReID) methods always learn through a stationary domain that is fixed by the choice of a given dataset. In many contexts (e.g., lifelong learning), those methods are ineffective because the domain is continually changing, in which case incremental learning over multiple domains is potentially required. In this work we explore a new and challenging ReID task, namely lifelong person re-identification (LReID), which enables learning continuously across multiple domains and even generalising to new and unseen domains. Following the cognitive processes in the human brain, we design an Adaptive Knowledge Accumulation (AKA) framework that is endowed with two crucial abilities: knowledge representation and knowledge operation. Our method alleviates catastrophic forgetting on seen domains and demonstrates the ability to generalise to unseen domains. Correspondingly, we also provide a new and large-scale benchmark for LReID. Extensive experiments demonstrate our method outperforms other competitors by a margin of 5.8% mAP in generalising evaluation. The codes will be available at https://github.com/TPCD/LifelongReID.

Proceedings ArticleDOI
01 Jun 2021
TL;DR: ECKPN as discussed by the authors proposes an Explicit Class Knowledge Propagation Network (ECKPN), which is composed of the comparison, squeeze and calibration modules to address the problem of few-shot classification.
Abstract: Recently, the transductive graph-based methods have achieved great success in the few-shot classification task. However, most existing methods ignore exploring the class-level knowledge that can be easily learned by humans from just a handful of samples. In this paper, we propose an Explicit Class Knowledge Propagation Network (ECKPN), which is composed of the comparison, squeeze and calibration modules, to address this problem. Specifically, we first employ the comparison module to explore the pairwise sample relations to learn rich sample representations in the instance-level graph. Then, we squeeze the instance-level graph to generate the class-level graph, which can help obtain the class-level visual knowledge and facilitate modeling the relations of different classes. Next, the calibration module is adopted to characterize the relations of the classes explicitly to obtain the more discriminative class-level knowledge representations. Finally, we combine the class-level knowledge with the instance-level sample representations to guide the inference of the query samples. We conduct extensive experiments on four few-shot classification benchmarks, and the experimental results show that the proposed ECKPN significantly outperforms the state-of-the-art methods.

Journal ArticleDOI
TL;DR: A lifelong learning model is presented to solve the challenging problem of real-world underwater image classification; experiments demonstrate that the proposed method outperforms the baseline method and state-of-the-art convolutional neural network (CNN) methods.

Journal ArticleDOI
TL;DR: A formal and sophisticated systems engineering ontology is achieved, which can be used to harmonize the extant standards, unify the languages, and improve the interoperability of the model-based systems engineering approach.
Abstract: Extant systems engineering standards are so fragmented that the conceptualization of a cohesive body of knowledge is not easy. The discrepancies between different standards lead to misunderstanding and misinterpretation, making communication between stakeholders increasingly difficult. Moreover, these standards remain document centric, whereas systems engineering is transforming from a paper-based to a model-based discipline. This requires the use of advanced information exchange schema and digital artifacts to enhance interoperability. Ontologies have been advocated as a mechanism to address these problems, as they can support the model-based transition and formalize the domain knowledge. However, manually creating ontologies is a time-consuming, error-prone, and tedious process. Little is known about how to automate this development, and little work has been conducted on building systems engineering ontologies. Therefore, in this article, we propose an ontology learning methodology to extract a systems engineering ontology from the extant standards. This methodology employs natural language processing techniques to carry out the lexical and morphological analyses on the standard documents. From the learning process, important terminologies, synonyms, concepts, and relations constructing the systems engineering body of knowledge are automatically recognized and classified. A formal and sophisticated systems engineering ontology is achieved, which can be used to harmonize the extant standards, unify the languages, and improve the interoperability of the model-based systems engineering approach.

Posted Content
Abstract: Fact-checking has become increasingly important due to the speed with which both information and misinformation can spread in the modern media ecosystem. Therefore, researchers have been exploring how fact-checking can be automated, using techniques based on natural language processing, machine learning, knowledge representation, and databases to automatically predict the veracity of claims. In this paper, we survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines. In this process, we present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts. Finally, we highlight challenges for future research.

Journal ArticleDOI
TL;DR: This paper proposes a model based on a multi-view clustering framework, which can generate semantic representations of knowledge elements (i.e., entities/relations), and presents an empowered solution to entity retrieval with entity description.
Abstract: Knowledge representation is one of the critical problems in knowledge engineering and artificial intelligence, while knowledge embedding, as a knowledge representation methodology, encodes the entities and relations of a knowledge graph as low-dimensional, continuous vectors. In this way, the knowledge graph becomes compatible with numerical machine learning models. Most knowledge embedding methods employ geometric translation to design their score functions, which yields representations that are semantically weak for natural language processing. To overcome this disadvantage, in this paper we propose a model based on a multi-view clustering framework, which can generate semantic representations of knowledge elements (i.e., entities/relations). With our semantic model, we also present an empowered solution to entity retrieval with entity description. Extensive experiments show that our model achieves substantial improvements over baselines on the tasks of knowledge graph completion, triple classification, entity classification, and entity retrieval.
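For contrast with the abstract's critique, here is a minimal sketch of the geometric-translation scoring it refers to, in the style of TransE (toy embeddings, not the paper's proposed model):

```python
import math

def transe_score(head, rel, tail):
    """Translation-based plausibility: a triple (h, r, t) is considered
    plausible when t lies near h + r, so higher (closer to 0) is better."""
    return -math.sqrt(sum((h + r - t) ** 2 for h, r, t in zip(head, rel, tail)))

h = [0.2, -0.5, 0.1]
r = [0.3, 0.4, -0.2]
true_tail = [hi + ri for hi, ri in zip(h, r)]  # exactly h + r
corrupt_tail = [2.0, 2.0, 2.0]                 # some unrelated entity
```

Because the score depends only on vector distance, the model cannot distinguish relations whose translations happen to coincide, which is one way to read the "weak semantics" complaint that motivates the clustering-based alternative.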

Book ChapterDOI
17 May 2021
TL;DR: In this paper, the authors investigate the relationship between a multipreferential semantics for defeasible reasoning in knowledge representation and a deep neural network model, and further extend the semantics to fuzzy interpretations and provide a preferential interpretation of multilayer perceptrons under some condition.
Abstract: In this paper we investigate the relationships between a multipreferential semantics for defeasible reasoning in knowledge representation and a deep neural network model. Weighted knowledge bases for description logics are considered under a “concept-wise” multipreference semantics. The semantics is further extended to fuzzy interpretations and exploited to provide a preferential interpretation of Multilayer Perceptrons, under some condition.
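The bridge the chapter builds rests on reading a neuron's activation in [0, 1] as a fuzzy truth degree for the concept the unit encodes. A toy forward pass illustrating that reading (the concept, features, weights, and bias are made-up assumptions, not from the paper):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One perceptron unit; its sigmoid output can be read as a fuzzy
    membership degree in [0, 1] for the concept the unit encodes."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

# hypothetical unit for a "typical bird" concept over features (has_wings, swims)
degree = neuron([1.0, 0.0], weights=[4.0, -6.0], bias=-2.0)
```

Under this fuzzy reading, a weighted knowledge base's defeasible inclusions can then be compared against the degrees the network assigns, which is the kind of correspondence the multipreference semantics formalizes.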

Journal ArticleDOI
TL;DR: The authors examine Memory-augmented networks, Logic Tensor Networks, and compositions of LSTM models to explore their capabilities and limitations in conducting deductive reasoning, applying them to the Resource Description Framework, first-order logic, and a description logic, respectively.
Abstract: Symbolic knowledge representation and reasoning and deep learning are fundamentally different approaches to artificial intelligence with complementary capabilities. The former are transparent and data-efficient, but they are sensitive to noise and cannot be applied to non-symbolic domains where the data is ambiguous. The latter can learn complex tasks from examples and are robust to noise, but they are black boxes, require large amounts of (not necessarily easily obtained) data, and are slow to learn and prone to adversarial examples. Either paradigm excels at certain types of problems where the other paradigm performs poorly. In order to develop stronger AI systems, integrated neuro-symbolic systems that combine artificial neural networks and symbolic reasoning are being sought. In this context, one of the fundamental open problems is how to perform logic-based deductive reasoning over knowledge bases by means of trainable artificial neural networks. This paper provides a brief summary of the authors’ recent efforts to bridge the neural and symbolic divide in the context of deep deductive reasoners. Throughout the paper we discuss the strengths and limitations of the models in terms of accuracy, scalability, transferability, generalizability, speed, and interpretability, and finally talk about possible modifications to enhance desirable capabilities. More specifically, in terms of architectures, we look at Memory-augmented networks, Logic Tensor Networks, and compositions of LSTM models to explore their capabilities and limitations in conducting deductive reasoning. We apply these models to the Resource Description Framework (RDF), first-order logic, and the description logic $\mathcal{EL}^{+}$, respectively.
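A concrete instance of the deductive target such reasoners are trained to approximate is RDFS subclass entailment, which a symbolic system computes as a transitive closure. The sketch below is a generic fixed-point materialization over toy triples, not any of the paper's neural architectures:

```python
def subclass_closure(triples):
    """Materialize rdfs:subClassOf entailments by transitive closure:
    if (A, B) and (B, C) hold, infer (A, C), until a fixed point."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(inferred):
            for (c, d) in list(inferred):
                if b == c and (a, d) not in inferred:
                    inferred.add((a, d))
                    changed = True
    return inferred

asserted = {("Penguin", "Bird"), ("Bird", "Animal"), ("Animal", "LivingThing")}
entailed = subclass_closure(asserted)
```

The symbolic computation is exact but brittle and non-differentiable; the deep deductive reasoners surveyed in the paper attempt to learn this kind of entailment behavior from examples instead.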

Journal ArticleDOI
01 Aug 2021
TL;DR: In this paper, a network mapping method powered by the Technology Semantic Network (TechNet) is proposed to help engineers quickly understand a complex technical design description new to them, by representing it as a network graph of the design-related entities and their relations that serves as an abstract summary of the design.
Abstract: Engineers often need to discover and learn designs from unfamiliar domains for inspiration or other particular uses. However, the complexity of technical design descriptions and engineers' unfamiliarity with the domain make it hard for them to comprehend the function, behavior, and structure of a design. To help engineers quickly understand a complex technical design description new to them, one approach is to represent it as a network graph of the design-related entities and their relations, serving as an abstract summary of the design. While graph and network visualizations are widely adopted in the engineering design literature, the challenge remains in retrieving the design entities and deriving their relations. In this paper, we propose a network mapping method powered by the Technology Semantic Network (TechNet). Through a case study, we showcase how TechNet’s unique characteristic of being trained on a large technology-related data source gives it an advantage over common-sense knowledge bases, such as WordNet and ConceptNet, for design knowledge representation.
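Independent of TechNet's internals, the target representation the abstract describes, a graph of design entities and their relations, can be held in a simple adjacency structure; the entities and relation labels below are made up for illustration:

```python
from collections import defaultdict

def build_design_graph(relations):
    """Store (entity, relation, entity) triples as an adjacency map
    from each head entity to its outgoing (relation, tail) pairs."""
    graph = defaultdict(list)
    for head, rel, tail in relations:
        graph[head].append((rel, tail))
    return graph

# hypothetical entities/relations extracted from a design description
extracted = [
    ("rotor blade", "part_of", "wind turbine"),
    ("rotor blade", "made_of", "carbon fiber"),
    ("wind turbine", "converts", "kinetic energy"),
]
graph = build_design_graph(extracted)
```

The hard part the paper addresses is upstream of this structure: deciding which terms in an unfamiliar description are design entities, and which semantic relations connect them.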

Journal ArticleDOI
TL;DR: FACE, a supervised feature-based machine learning method for automatic concept extraction from digital textbooks, is presented; it was created for building the domain and student models that form the core of intelligent textbooks.
Abstract: The increasing popularity of digital textbooks as a new learning media has resulted in a growing interest in developing a new generation of adaptive textbooks that can help readers learn better by adapting to the readers’ learning goals and current state of knowledge. These adaptive textbooks are most frequently powered by internal knowledge models, which associate a list of unique domain knowledge concepts with each section of the textbook. With this kind of concept-level knowledge representation, a number of intelligent operations can be performed, including student modeling, adaptive navigation support, and content recommendation. However, manual indexing of each textbook section with concepts is challenging, time-consuming, and prone to errors. Modern research in the area of natural language processing offers an attractive alternative, called automatic keyphrase extraction. While a range of keyphrase and concept extraction methods have been developed over the last twenty years, few of the known approaches have been applied and evaluated in a textbook context. In this paper, we present FACE, a supervised feature-based machine learning method for automatic concept extraction from digital textbooks. This method has been created for building the domain and student models that form the core of intelligent textbooks. We evaluated FACE on a newly constructed full-scale dataset by assessing how well it approximates concept annotations produced by human experts and how well it supports the needs of student modeling. The results show that FACE outperforms several state-of-the-art keyphrase extraction methods.
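FACE's exact feature set is not reproduced here; as a generic illustration, supervised keyphrase extractors typically score each candidate phrase with shallow features such as term frequency and the relative position of its first occurrence (the feature names and sample text are assumptions for this sketch):

```python
def phrase_features(text, phrase):
    """Two classic keyphrase features: term frequency and the relative
    position of the first occurrence (0.0 = start of the section)."""
    lowered, needle = text.lower(), phrase.lower()
    freq = lowered.count(needle)
    first = lowered.find(needle)
    first_pos = first / max(len(lowered), 1) if first >= 0 else 1.0
    return {"freq": freq, "first_pos": first_pos}

section = ("Knowledge representation studies how knowledge about a domain "
           "is encoded. Knowledge representation enables reasoning.")
features = phrase_features(section, "knowledge representation")
```

In a supervised setting like FACE's, such feature vectors are fed to a classifier trained on expert-annotated sections to decide which candidates are genuine domain concepts.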

Journal ArticleDOI
01 Jul 2021
TL;DR: The theory of types in UFO is revised in response to empirical evidence, showing that many of OntoUML’s meta-types should be considered not as restricted to substantial types but instead should be applied to model endurant types in general, including relator types, quality types, and mode types.
Abstract: Types are fundamental for conceptual modeling and knowledge representation, being an essential construct in all major modeling languages in these fields. Despite that, from an ontological and cognitive point of view, there has been a lack of theoretical support for precisely defining a consensual view on types. As a consequence, there has been a lack of precise methodological support for users when choosing the best way to model general terms representing types that appear in a domain, and for building sound taxonomic structures involving them. For over a decade now, a community of researchers has contributed to the development of the Unified Foundational Ontology (UFO) - aimed at providing foundations for all major conceptual modeling constructs. At the core of this enterprise, there has been a theory of types specially designed to address these issues. This theory is ontologically well-founded, psychologically informed, and formally characterized. These results have led to the development of a Conceptual Modelling language dubbed OntoUML, reflecting the ontological micro-theories comprising UFO. Over the years, UFO and OntoUML have been successfully employed on conceptual model design in a variety of domains including academic, industrial, and governmental settings. These experiences exposed improvement opportunities for both the OntoUML language and its underlying theory, UFO. In this paper, we revise the theory of types in UFO in response to empirical evidence. The new version of this theory shows that many of OntoUML’s meta-types (e.g. kind, role, phase, mixin) should be considered not as restricted to substantial types but instead should be applied to model endurant types in general, including relator types, quality types, and mode types. We also contribute with a formal characterization of this fragment of the theory, which is then used to advance a new metamodel for OntoUML (termed OntoUML 2). 
To demonstrate that the benefits of this approach extend beyond OntoUML, the proposed formal theory is then employed to support the definition of UFO-based lightweight Semantic Web ontologies with ontological constraint checking in OWL. Additionally, we report on empirical evidence from the literature, mainly from cognitive psychology but also from linguistics, supporting some of the key claims made by this theory. Finally, we propose computational support for this updated metamodel.


Journal ArticleDOI
TL;DR: The model is useful for AWP and can reduce delays and rework by saving a significant amount of time on constraint monitoring and removal; in a scenario implementation, it is shown that the model can automate constraint modelling for ongoing projects.