Open Access · Proceedings Article

Learning Relational Representations with Auto-encoding Logic Programs

TLDR
A novel framework for relational representation learning that combines the best of both worlds: inspired by the auto-encoding principle, it uses first-order logic as the data representation language, and the mapping between the original and latent representations is done by logic programs instead of neural networks.
Abstract
Deep learning methods capable of handling relational data have proliferated in recent years. In contrast to traditional relational learning methods that leverage first-order logic for representing such data, these deep learning methods aim at re-representing symbolic relational data in Euclidean spaces. They offer better scalability, but can only numerically approximate relational structures and are less flexible in terms of the reasoning tasks they support. This paper introduces a novel framework for relational representation learning that combines the best of both worlds. This framework, inspired by the auto-encoding principle, uses first-order logic as a data representation language, and the mapping between the original and latent representations is done by means of logic programs instead of neural networks. We show how learning can be cast as a constraint optimisation problem for which existing solvers can be used. The use of logic as a representation language makes the proposed framework more accurate (as the representation is exact, rather than approximate), more flexible, and more interpretable than deep learning methods. We experimentally show that these latent representations are indeed beneficial in relational learning tasks.
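To make the auto-encoding idea described above concrete, here is a minimal, hypothetical sketch in Python (the paper itself works with logic programs and constraint solvers; this is not the authors' implementation): an encoder program compresses the original relational facts into a smaller latent set of facts, a decoder program maps them back, and learning corresponds to searching for clauses that make the round trip exact. The predicate names friend/2 and link/2 are invented purely for the illustration.

# Hypothetical sketch of auto-encoding over relational facts (illustration only).
# Original data: a symmetric friend/2 relation, stored in both directions.
original = {
    ("friend", "ann", "bob"), ("friend", "bob", "ann"),
    ("friend", "bob", "eve"), ("friend", "eve", "bob"),
}

def encode(facts):
    """Encoder clause: link(X, Y) :- friend(X, Y), X @< Y.
    Keeps one canonical direction per pair, halving the number of facts."""
    return {("link", x, y) for (_, x, y) in facts if x < y}

def decode(latent):
    """Decoder clauses: friend(X, Y) :- link(X, Y).  friend(Y, X) :- link(X, Y)."""
    out = set()
    for (_, x, y) in latent:
        out.add(("friend", x, y))
        out.add(("friend", y, x))
    return out

latent = encode(original)
reconstructed = decode(latent)
# Reconstruction error: facts lost or invented by the encode/decode round trip.
error = len(original ^ reconstructed)
print(len(latent), "latent facts, reconstruction error =", error)  # 2 latent facts, error 0

In the framework described in the abstract, the encoder and decoder clauses are not fixed by hand as above but are themselves learned, with the search for clauses that minimise reconstruction error cast as a constraint optimisation problem.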



Citations
Journal Article

Machine learning

TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Posted Content

Inductive logic programming at 30: a new introduction

TL;DR: The necessary logical notation and the main ILP learning settings are introduced, the main building blocks of an ILP system are described, and several ILP systems are compared along several dimensions.
Journal Article

Learning programs by learning from failures

TL;DR: This work introduces Popper, an ILP system that implements this approach by combining answer set programming and Prolog, and shows that constraints drastically improve learning performance and that Popper can outperform existing ILP systems in both predictive accuracy and learning time.
Journal Article

Propositionalization and embeddings: two sides of the same coin

TL;DR: In this paper, a unifying methodology combining propositionalization and embeddings is proposed, benefiting from the advantages of both when solving complex data transformation and learning tasks.
Posted Content

Relational Neural Machines.

TL;DR: Relational Neural Machines, as presented in this paper, is a novel framework that allows jointly training the parameters of the learners and of a First-Order Logic-based reasoner, and that recovers both classical learning from supervised data in the purely sub-symbolic case and Markov Logic Networks in the purely symbolic case.