Neural-Symbolic Relational Reasoning on Graph Models: Effective Link Inference and Computation from Knowledge Bases.
Henrique Lemos, Pedro H. C. Avelar, Marcelo O. R. Prates, Artur S. d'Avila Garcez, Luis C. Lamb
pp. 647–659
TLDR
A neural-symbolic methodology leverages the resolution of relational inference in large graphs and is shown to be more effective than path-based approaches.
Abstract
The recent developments and growing interest in neural-symbolic models have shown that hybrid approaches can offer richer models for Artificial Intelligence. The integration of effective relational learning and reasoning methods is one of the key challenges in this direction, as neural learning and symbolic reasoning offer complementary characteristics that can benefit the development of AI systems. Relational labelling, or link prediction, on knowledge graphs has become one of the main problems in deep learning-based natural language processing research. Moreover, other fields which make use of neural-symbolic techniques may also benefit from such research endeavours. There have been several efforts towards the identification of missing facts from existing ones in knowledge graphs. Two lines of research try to predict knowledge relations between two entities by considering either all known facts connecting them or several paths of facts connecting them. We propose a neural-symbolic graph neural network which applies learning over all the paths by feeding the model with the embedding of the minimal subset of the knowledge graph containing such paths. By learning to produce representations for entities and facts corresponding to word embeddings, we show how the model can be trained end-to-end to decode these representations and infer relations between entities in a multitask approach. Our contribution is two-fold: we propose a neural-symbolic methodology that leverages the resolution of relational inference in large graphs, and we demonstrate that such a model is more effective than path-based approaches.
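The abstract describes feeding the model the embedding of the minimal subset of the knowledge graph containing all paths between the two query entities. A minimal sketch of one way such a subset could be extracted is given below; the hop limit `k`, the adjacency-dict graph encoding, and the toy entity names are illustrative assumptions, not the paper's actual implementation.

```python
from collections import deque

def bfs_dist(adj, src, k):
    """Hop distances (up to k) from src in a directed graph given as
    an adjacency dict {node: [successors]}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # do not expand beyond k hops
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def path_subgraph(adj, radj, head, tail, k):
    """Nodes lying on some directed path head -> tail of length <= k:
    a node qualifies iff its distance from head plus its distance
    to tail (BFS over the reversed edges, radj) is at most k."""
    from_head = bfs_dist(adj, head, k)
    to_tail = bfs_dist(radj, tail, k)
    return {v for v in from_head
            if v in to_tail and from_head[v] + to_tail[v] <= k}

# Toy knowledge graph: two paths from 'h' to 't', of lengths 2 and 3.
adj = {'h': ['a', 'x'], 'a': ['t'], 'x': ['y'], 'y': ['t']}
radj = {'a': ['h'], 'x': ['h'], 't': ['a', 'y'], 'y': ['x']}
print(path_subgraph(adj, radj, 'h', 't', 2))  # only the 2-hop path survives
```

The induced subgraph on the returned node set keeps exactly the paths within the hop budget, so its size grows with `k` rather than with the whole knowledge graph.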
Citations
Journal ArticleDOI
Is Neuro-Symbolic AI Meeting its Promise in Natural Language Processing? A Structured Review
TL;DR: A structured review of studies implementing NeSy for NLP is conducted, with the aim of answering the question of whether NeSy is indeed meeting its promises: reasoning, out-of-distribution generalization, interpretability, learning and reasoning from small data, and transferability to new domains.
References
Proceedings Article
Translating Embeddings for Modeling Multi-relational Data
TL;DR: TransE is proposed, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities, which proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases.
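TransE's core idea is compact enough to state in code: a relation acts as a translation in embedding space, so a true triple (h, r, t) should satisfy h + r ≈ t. The sketch below scores triples by negated distance; the 3-dimensional vectors are toy values for illustration, not trained embeddings.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score: higher (closer to 0) means the
    triple (head, relation, tail) is more plausible, since a true
    triple should satisfy h + r ≈ t."""
    return -np.linalg.norm(h + r - t, ord=1)  # negated L1 distance

h = np.array([0.1, 0.4, -0.2])   # head-entity embedding
r = np.array([0.3, -0.1, 0.5])   # relation as a translation vector
t_near = h + r + 0.01            # tail close to h + r
t_far = np.array([2.0, 2.0, 2.0])

assert transe_score(h, r, t_near) > transe_score(h, r, t_far)
```

In training, embeddings are learned so that observed triples score higher than corrupted ones; at link-prediction time, candidate tails are simply ranked by this score.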
Journal ArticleDOI
A Comprehensive Survey on Graph Neural Networks
TL;DR: This article provides a comprehensive overview of graph neural networks (GNNs) in the data mining and machine learning fields and proposes a new taxonomy to divide the state-of-the-art GNNs into four categories, namely recurrent GNNs, convolutional GNNs, graph autoencoders, and spatial-temporal GNNs.
Book ChapterDOI
Modeling Relational Data with Graph Convolutional Networks
Michael Sejr Schlichtkrull, Thomas Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, Max Welling
TL;DR: It is shown that factorization models for link prediction such as DistMult can be significantly improved through the use of an R-GCN encoder model to accumulate evidence over multiple inference steps in the graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
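The R-GCN encoder's distinguishing feature is a separate weight matrix per relation type, combined with a self-loop term. A minimal forward pass consistent with that idea might look as follows; the relation names, random weights, and plain per-relation mean normalisation are illustrative assumptions (the paper also introduces basis decomposition and other refinements not shown here).

```python
import numpy as np

def rgcn_layer(H, edges_by_rel, W_rel, W_self):
    """One relational graph-convolution layer: neighbour states are
    transformed by a relation-specific weight matrix, averaged per
    relation, added to a self-loop term, and passed through ReLU."""
    out = H @ W_self.T  # self-connection for every node
    for rel, edges in edges_by_rel.items():
        W = W_rel[rel]
        deg = np.zeros(len(H))
        for _, dst in edges:
            deg[dst] += 1  # in-degree under this relation
        for src, dst in edges:
            out[dst] += (H[src] @ W.T) / deg[dst]
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))  # 3 entities, 4-dim states
edges_by_rel = {'cites': [(0, 1), (2, 1)], 'authored': [(1, 2)]}
W_rel = {rel: rng.normal(size=(4, 4)) for rel in edges_by_rel}
W_self = rng.normal(size=(4, 4))
out = rgcn_layer(H, edges_by_rel, W_rel, W_self)
assert out.shape == (3, 4) and (out >= 0).all()
```

For link prediction, such an encoder is stacked for a few layers and its output entity representations are scored by a factorization decoder such as DistMult, as the TL;DR above describes.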
Posted Content
Neural Message Passing for Quantum Chemistry
TL;DR: Using MPNNs, state-of-the-art results on an important molecular property prediction benchmark are demonstrated, and it is believed future work should focus on datasets with larger molecules or more accurate ground-truth labels.
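The message-passing scheme the TL;DR refers to can be summarised in a few lines: each node aggregates messages computed from its neighbours' states and the connecting edge features, then updates its own state. The linear message and update functions and the random toy tensors below are simplifying assumptions standing in for learned networks.

```python
import numpy as np

def mpnn_layer(node_states, edges, edge_feats, W_msg, W_upd):
    """One round of neural message passing: sum incoming messages
    (a function of the sender's state and the edge feature), then
    update each node's state with the aggregate."""
    messages = np.zeros_like(node_states)
    for (src, dst), e in zip(edges, edge_feats):
        messages[dst] += W_msg @ np.concatenate([node_states[src], e])
    return np.tanh(node_states @ W_upd.T + messages)

rng = np.random.default_rng(0)
states = rng.normal(size=(3, 4))      # 3 atoms, 4-dim states
edges = [(0, 1), (1, 2)]              # directed bonds
edge_feats = rng.normal(size=(2, 2))  # 2-dim edge features
W_msg = rng.normal(size=(4, 6))       # maps (state ++ edge feature) -> message
W_upd = rng.normal(size=(4, 4))
out = mpnn_layer(states, edges, edge_feats, W_msg, W_upd)
assert out.shape == (3, 4)
```

Running several such rounds lets information propagate along multi-hop paths, which is the mechanism graph-based link-prediction models build on.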
Posted Content
Relational inductive biases, deep learning, and graph networks
Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, H. Francis Song, Andrew J. Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey R. Allen, Charlie Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matthew Botvinick, Oriol Vinyals, Yujia Li, Razvan Pascanu
TL;DR: It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.