Open Access · Proceedings Article · DOI

Graph Representation Learning via Graphical Mutual Information Maximization

TLDR
An unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder is developed, which outperforms state-of-the-art unsupervised counterparts and sometimes even exceeds the performance of supervised ones.
Abstract
The richness in the content of various information networks such as social networks and communication networks provides unprecedented potential for learning high-quality expressive representations without external supervision. This paper investigates how to preserve and extract the abundant information from graph-structured data into embedding space in an unsupervised manner. To this end, we propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations. GMI generalizes the idea of conventional mutual information computations from vector space to the graph domain, where measuring mutual information from two aspects, node features and topological structure, is indispensable. GMI exhibits several benefits: First, it is invariant to the isomorphic transformation of input graphs—an inevitable constraint in many existing graph representation learning algorithms; Besides, it can be efficiently estimated and maximized by current mutual information estimation methods such as MINE; Finally, our theoretical analysis confirms its correctness and rationality. With the aid of GMI, we develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder. Extensive experiments on transductive as well as inductive node classification and link prediction demonstrate that our method outperforms state-of-the-art unsupervised counterparts, and even sometimes exceeds the performance of supervised ones.
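The core training loop the abstract describes — a graph neural encoder whose output is pushed to have high mutual information with its input via a MINE/JSD-style discriminator on aligned versus corrupted (feature, embedding) pairs — can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the encoder, discriminator form, and all dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes with adjacency A and a 3-dimensional feature matrix X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))

# One-layer GCN-style encoder: H = ReLU(D^-1 (A + I) X W_enc).
A_hat = A + np.eye(4)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
W_enc = rng.normal(size=(3, 2))
H = np.maximum(0.0, D_inv @ A_hat @ X @ W_enc)  # node embeddings, shape (4, 2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mi_lower_bound(H, X, W_d):
    """Jensen-Shannon-style MI lower bound: a bilinear discriminator scores
    each embedding against its own features (positives) and against
    row-shuffled features (negatives)."""
    pos = sigmoid(np.sum((H @ W_d) * X, axis=1))       # aligned pairs
    X_neg = X[rng.permutation(len(X))]                  # corrupted pairs
    neg = sigmoid(np.sum((H @ W_d) * X_neg, axis=1))
    return np.mean(np.log(pos + 1e-9) + np.log(1.0 - neg + 1e-9))

W_d = rng.normal(size=(2, 3))  # discriminator weights
val = mi_lower_bound(H, X, W_d)
print(val)
```

In training one would ascend this bound with respect to both the encoder and discriminator parameters; here it is evaluated once to show the estimator's shape.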


Citations
Proceedings Article · DOI

Graph Contrastive Learning with Adaptive Augmentation

TL;DR: This paper proposes a graph contrastive representation learning method with adaptive augmentation that incorporates various priors for the topological and semantic aspects of the graph; it consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
Posted Content

Self-Supervised Graph Transformer on Large-Scale Molecular Data

TL;DR: GROVER as discussed by the authors integrates message passing networks into the Transformer-style architecture to deliver a class of more expressive encoders of molecules, which can learn rich structural and semantic information of molecules from enormous unlabeled molecular data.
Posted Content

Machine Learning on Graphs: A Model and Comprehensive Taxonomy

TL;DR: A comprehensive taxonomy of representation learning methods for graph-structured data is proposed, aiming to unify several disparate bodies of work and provide a solid foundation for understanding the intuition behind these methods, and enables future research in the area.
Proceedings Article · DOI

Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation

TL;DR: The authors proposed a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations, where each channel in the network encodes a hypergraph that depicts a common high-order user relation pattern via hypergraph convolution.
References
Proceedings Article · DOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; it won first place in the ILSVRC 2015 classification task.
Journal Article · DOI

Silhouettes: a graphical aid to the interpretation and validation of cluster analysis

TL;DR: A new graphical display is proposed for partitioning techniques, where each cluster is represented by a so-called silhouette, which is based on the comparison of its tightness and separation, and provides an evaluation of clustering validity.
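The tightness-versus-separation comparison this entry summarizes is the silhouette value s(i) = (b − a) / max(a, b), where a is the mean distance from a point to its own cluster and b to the nearest other cluster. A minimal sketch of the formula (toy data, not from the paper):

```python
import numpy as np

def silhouette(point, own_cluster, other_cluster):
    """Silhouette value s(i) = (b - a) / max(a, b), in [-1, 1]."""
    a = np.mean([np.linalg.norm(point - p) for p in own_cluster])    # tightness
    b = np.mean([np.linalg.norm(point - p) for p in other_cluster])  # separation
    return (b - a) / max(a, b)

# A point tight within its own cluster and far from the other scores near 1.
own = [np.array([0.1, 0.0]), np.array([0.0, 0.1])]
other = [np.array([5.0, 5.0]), np.array([5.1, 5.0])]
s = silhouette(np.array([0.0, 0.0]), own, other)
print(s)
```

Values near 1 indicate a well-placed point; values near −1 suggest it belongs in the neighboring cluster.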
Proceedings Article

Understanding the difficulty of training deep feedforward neural networks

TL;DR: The objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future.
Proceedings Article · DOI

DeepWalk: online learning of social representations

TL;DR: DeepWalk as mentioned in this paper uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences, which encode social relations in a continuous vector space, which is easily exploited by statistical models.
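The walks-as-sentences idea can be sketched in a few lines: generate truncated random walks over a toy graph, then (not shown) feed them to a skip-gram model such as word2vec to obtain node embeddings. The graph and walk parameters here are illustrative, not from the paper.

```python
import random

# Toy adjacency list; in DeepWalk each walk becomes a "sentence"
# whose "words" are node IDs, trained with skip-gram.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def random_walk(graph, start, length, rng):
    """Truncated random walk of the given length starting at `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(42)
# A small corpus: 3 walks of length 5 from every node.
corpus = [random_walk(graph, v, 5, rng) for v in graph for _ in range(3)]
print(len(corpus), corpus[0])
```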
Posted Content

Inductive Representation Learning on Large Graphs

TL;DR: GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
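The inductive recipe summarized here — embed a node from its own features plus an aggregate of its neighbors' features, so unseen nodes can be embedded at inference time — can be sketched with a mean aggregator. This is a simplified illustration, not the GraphSAGE reference implementation; the weights and dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy node features and neighbor lists. GraphSAGE-style mean aggregation:
# h_v = ReLU(W @ concat(x_v, mean of neighbor features)).
X = rng.normal(size=(4, 3))                     # 4 nodes, 3 features each
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
W = rng.normal(size=(2, 6))                     # output dim 2, input dim 3 + 3

def sage_embed(v):
    agg = X[neighbors[v]].mean(axis=0)          # aggregate neighbor features
    return np.maximum(0.0, W @ np.concatenate([X[v], agg]))

H = np.stack([sage_embed(v) for v in range(4)])
print(H.shape)
```

Because the embedding is a function of features rather than a per-node lookup table, the same weights apply to nodes never seen during training.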