Open Access · Posted Content

Machine Learning on Graphs: A Model and Comprehensive Taxonomy

TL;DR
A comprehensive taxonomy of representation learning methods for graph-structured data is proposed, aiming to unify several disparate bodies of work, provide a solid foundation for understanding the intuition behind these methods, and enable future research in the area.
Abstract
There has been a surge of recent interest in learning representations for graph-structured data. Graph representation learning methods have generally fallen into three main categories, based on the availability of labeled data. The first, network embedding (such as shallow graph embedding or graph auto-encoders), focuses on learning unsupervised representations of relational structure. The second, graph regularized neural networks, leverages graphs to augment neural network losses with a regularization objective for semi-supervised learning. The third, graph neural networks, aims to learn differentiable functions over discrete topologies with arbitrary structure. However, despite the popularity of these areas, there has been surprisingly little work on unifying the three paradigms. Here, we aim to bridge the gap between graph neural networks, network embedding, and graph regularization models. We propose a comprehensive taxonomy of representation learning methods for graph-structured data, aiming to unify several disparate bodies of work. Specifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which generalizes popular algorithms for semi-supervised learning on graphs (e.g., GraphSAGE, Graph Convolutional Networks, Graph Attention Networks) and unsupervised learning of graph representations (e.g., DeepWalk and node2vec) into a single consistent approach. To illustrate the generality of this approach, we fit over thirty existing methods into this framework. We believe that this unifying view both provides a solid foundation for understanding the intuition behind these methods and enables future research in the area.
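To make the encoder-decoder view concrete, below is a minimal sketch of the pattern GRAPHEDM generalizes: a one-layer graph-convolutional encoder followed by an inner-product decoder that reconstructs edge probabilities. This is an illustration under simplifying assumptions (single layer, random weights, NumPy), not the paper's formal model, which also covers supervised decoders and the associated losses.

```python
# Minimal encoder-decoder sketch in the spirit of GRAPHEDM (illustrative only).
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def encode(A, X, W):
    """One GCN layer: node embeddings Z = ReLU(A_norm X W)."""
    return np.maximum(normalize_adjacency(A) @ X @ W, 0.0)

def decode(Z):
    """Inner-product decoder: predicted edge probability sigmoid(z_i . z_j)."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Toy usage: 4 nodes, 3 input features, 2-dimensional embeddings.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
A_pred = decode(encode(A, X, W))   # reconstructed adjacency probabilities
```

Roughly speaking, swapping the decoder and the training loss attached to it is what distinguishes the three families described in the abstract.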


Citations
Proceedings Article

CopulaGNN: Towards Integrating Representational and Correlational Roles of Graphs in Graph Neural Networks

TL;DR: Wang et al. propose the Copula Graph Neural Network (CopulaGNN), which can take a wide range of GNN models as base models and utilize both the representational and correlational information stored in graphs.
Posted Content

Size-Invariant Graph Representations for Graph Classification Extrapolations

TL;DR: The authors used a causal model to learn approximately invariant representations that better extrapolate between train and test data, and demonstrated the benefits of representations that are invariant to train/test distribution shifts.
Proceedings Article

Graph Traversal with Tensor Functionals: A Meta-Algorithm for Scalable Learning

TL;DR: Graph Traversal via Tensor Functionals (GTTF) is a meta-algorithm framework for scaling graph representation learning algorithms to large graph datasets with only a few lines of code.
Proceedings Article

Equivariant Hypergraph Diffusion Neural Operators

TL;DR: This work proposes a new HNN architecture, ED-HNN, which provably approximates any continuous equivariant hypergraph diffusion operator and can therefore model a wide range of higher-order relations; it performs especially well on heterophilic hypergraphs and in deep models.
Posted Content

A Graph Feature Auto-Encoder for the Prediction of Unobserved Node Features on Biological Networks

TL;DR: In this article, the authors used graph neural networks to map graph nodes into a low-dimensional vector space representation, which can be trained to preserve both the local graph structure and the similarity between node features, and showed that using gene expression data as node features improves the reconstruction of the graph from the embedding.
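As a rough illustration of the idea summarized above (propagating node features over the graph, compressing them, and reconstructing the feature matrix), here is a small sketch; the single propagation step, layer sizes, and names are assumptions for illustration, not the architecture from that paper.

```python
# Sketch of a graph feature auto-encoder objective (illustrative assumptions).
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalized adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def feature_reconstruction_loss(A, X, W_enc, W_dec):
    Z = np.tanh(normalize_adjacency(A) @ X @ W_enc)   # graph-aware node embeddings
    X_rec = Z @ W_dec                                  # decode back to feature space
    return np.mean((X - X_rec) ** 2)                   # mean-squared reconstruction error
```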
References
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
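For reference, a single LSTM step can be written in a few lines; the gated, additive cell-state update below is the "constant error carousel" the summary refers to. Weight shapes and variable names follow generic textbook conventions rather than the original paper's notation.

```python
# One LSTM step in NumPy (generic formulation, for illustration).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """Single step. W: (4h, d), U: (4h, h), b: (4h,). Returns (h, c)."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    c = f * c_prev + i * np.tanh(g)                # additive cell-state update
    h = o * np.tanh(c)
    return h, c
```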
Proceedings Article

Attention is All you Need

TL;DR: This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
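The core operation behind that architecture is scaled dot-product attention, sketched below in NumPy for a single head; shapes and names are illustrative.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # row-wise softmax
    return weights @ V                                        # weighted sum of values
```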
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called "ImageNet" is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure that is much larger in scale and diversity, and much more accurate, than current image datasets.
Proceedings Article

Glove: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context-window methods, and produces a vector space with meaningful substructure.
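The model fits word-vector dot products to log co-occurrence counts with a weighted least-squares objective; a toy sketch of that loss (dense co-occurrence matrix, illustrative names) is below.

```python
# Toy GloVe-style objective: f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2.
import numpy as np

def glove_loss(X, W, W_tilde, b, b_tilde, x_max=100.0, alpha=0.75):
    """Sum the weighted squared error over all nonzero co-occurrence counts."""
    loss = 0.0
    for i, j in zip(*np.nonzero(X)):
        weight = min((X[i, j] / x_max) ** alpha, 1.0)        # down-weight rare pairs
        diff = W[i] @ W_tilde[j] + b[i] + b_tilde[j] - np.log(X[i, j])
        loss += weight * diff ** 2
    return loss
```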
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
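In practice, t-SNE is typically used as a drop-in dimensionality-reduction step for visualization; a minimal example using scikit-learn's implementation (a tooling assumption, not part of the original paper) is shown below.

```python
# Project high-dimensional points to a 2-D map with t-SNE for plotting.
import numpy as np
from sklearn.manifold import TSNE

X = np.random.randn(200, 50)                       # 200 points in 50 dimensions
X_2d = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)
print(X_2d.shape)                                   # (200, 2) map coordinates
```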