Open Access Journal Article DOI

Graph convolutional networks: a comprehensive review

TLDR
A comprehensive review of the emerging field of graph convolutional networks, one of the most prominent graph deep learning models, is conducted; several open challenges are presented and potential directions for future research are discussed.
Abstract
Graphs naturally appear in numerous application domains, ranging from social analysis and bioinformatics to computer vision. Their unique capability to capture structural relations among data allows us to harvest more insights than analyzing data in isolation. However, learning problems on graphs are often very challenging, because (1) many types of data, such as images and text, are not originally structured as graphs, and (2) for graph-structured data, the underlying connectivity patterns are often complex and diverse. On the other hand, representation learning has achieved great success in many areas. A potential solution is therefore to learn representations of graphs in a low-dimensional Euclidean space such that the graph properties are preserved. Although tremendous efforts have been made to address the graph representation learning problem, many approaches still suffer from shallow learning mechanisms. Deep learning models on graphs (e.g., graph neural networks) have recently emerged in machine learning and related areas and have demonstrated superior performance on various problems. In this survey, among the numerous types of graph neural networks, we conduct a comprehensive review specifically of the emerging field of graph convolutional networks, one of the most prominent graph deep learning models. First, we group existing graph convolutional network models into two categories based on the types of convolutions and highlight some models in detail. Then, we categorize different graph convolutional networks according to their application areas. Finally, we present several open challenges in this area and discuss potential directions for future research.
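To make the notion of a graph convolution concrete, the following is a minimal NumPy sketch of the widely used symmetric-normalized propagation rule for a single graph convolutional layer (the Kipf-and-Welling-style update H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)); it is a generic illustration, not code taken from the survey.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolutional layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^-1/2 as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)         # aggregate neighbors, transform, ReLU

# Toy example: a 4-node path graph, 3-dimensional input features, 2-dimensional output.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)
```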



Citations
Posted Content

Neural Link Prediction with Walk Pooling

TL;DR: WalkPool combines the expressivity of topological heuristics with the feature-learning ability of neural networks, summarizing a putative link by the random-walk probabilities of adjacent paths.
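As a rough illustration of scoring a candidate link via random-walk probabilities (a generic sketch of the heuristic, not the WalkPool implementation; the function name k_step_walk_prob is made up):

```python
import numpy as np

def k_step_walk_prob(A, i, j, k=3):
    """Probability that a length-k random walk started at node i ends at node j."""
    P = A / A.sum(axis=1, keepdims=True)        # row-normalized transition matrix
    return np.linalg.matrix_power(P, k)[i, j]

# Score the putative link (0, 3): a higher k-step walk probability suggests a likelier edge.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(k_step_walk_prob(A, 0, 3, k=3))
```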

Dual Convolutional Neural Network for Graph of Graphs Link Prediction

TL;DR: A dual convolutional neural network is proposed that extracts node representations by combining the external and internal graph structures in an end-to-end manner.
Journal Article DOI

BehaviorNet: A Fine-grained Behavior-aware Network for Dynamic Link Prediction

TL;DR: BehaviorNet adapts a transformer-based graph convolutional network to capture latent structural representations of nodes by adding edge behaviors as an additional edge attribute, and applies a GRU to learn temporal features from snapshots of a dynamic network, using node behaviors as auxiliary information.
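A hedged sketch of the temporal part of such a design, assuming per-snapshot node embeddings are already produced by some graph convolutional encoder (the class name SnapshotGRU is illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class SnapshotGRU(nn.Module):
    """Run a GRU over per-snapshot node embeddings to obtain a temporal
    node representation usable for dynamic link prediction."""

    def __init__(self, embed_dim, hidden_dim):
        super().__init__()
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, snapshot_embeds):
        # snapshot_embeds: (num_nodes, num_snapshots, embed_dim)
        _, h_last = self.gru(snapshot_embeds)   # h_last: (1, num_nodes, hidden_dim)
        return h_last.squeeze(0)                # (num_nodes, hidden_dim)

# 10 nodes, 5 snapshots, 16-dimensional embeddings per snapshot.
z = torch.randn(10, 5, 16)
print(SnapshotGRU(16, 32)(z).shape)  # torch.Size([10, 32])
```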
Posted Content

GraSSNet: Graph Soft Sensing Neural Networks

TL;DR: Zhang et al. propose GraSSNet, a graph-based soft-sensing neural network for multivariate time-series classification of noisy and highly imbalanced soft-sensing data.
Posted Content DOI

Repurposing Non-pharmacological Interventions for Alzheimer's Diseases through Link Prediction on Biomedical Literature

TL;DR: Zhang et al. construct ADInt, a comprehensive knowledge graph containing AD concepts and various potential interventions, by integrating a dietary supplement domain knowledge graph, SuppKG, with semantic relations from the SemMedDB database.
References
Proceedings Article DOI

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting networks won 1st place on the ILSVRC 2015 classification task.
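For reference, a minimal PyTorch sketch of a basic residual block with an identity shortcut (omitting the strided and projection variants used in the full architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Basic residual block: the stacked layers learn F(x) and the block outputs
    F(x) + x, which makes very deep stacks easier to optimize than plain ones."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)   # identity shortcut

print(ResidualBlock(64)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```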
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: State-of-the-art image classification performance is achieved with a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
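A schematic PyTorch sketch of that layer pattern follows; filter sizes, channel counts, and hidden widths are placeholders rather than the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

# Five convolutional layers, some followed by max-pooling, then three
# fully-connected layers ending in a 1000-way softmax (logits shown here).
features = nn.Sequential(
    nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(), nn.MaxPool2d(2),   # conv 1
    nn.Conv2d(64, 128, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),           # conv 2
    nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),                           # conv 3
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),                           # conv 4
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),          # conv 5
)
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256 * 14 * 14, 512), nn.ReLU(),   # fc 1
    nn.Linear(512, 512), nn.ReLU(),             # fc 2
    nn.Linear(512, 1000),                       # fc 3: logits for a 1000-way softmax
)
x = torch.randn(1, 3, 224, 224)
print(classifier(features(x)).shape)  # torch.Size([1, 1000])
```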
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
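A small sketch of the core idea, a block of stacked 3x3 convolutions followed by pooling (generic PyTorch, not the original configuration; the helper name vgg_block is illustrative):

```python
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    """A VGG-style block: stacked 3x3 convolutions followed by 2x2 max-pooling.
    Depth is increased by stacking more of these small-filter layers."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1), nn.ReLU()]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# Two stacked 3x3 convolutions cover the receptive field of one 5x5 convolution
# with fewer parameters and more non-linearities.
block = vgg_block(3, 64, num_convs=2)
print(block(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 64, 112, 112])
```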
Proceedings Article DOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
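A toy sketch of the fully convolutional idea, showing that a network without fully-connected layers accepts inputs of arbitrary size and produces correspondingly-sized per-pixel predictions (channel counts and the 21-class output are placeholders):

```python
import torch
import torch.nn as nn

# With no fully-connected layers, the same network handles arbitrary spatial sizes;
# a 1x1 convolution acts as a per-location classifier and the result is upsampled
# back to the input resolution.
fcn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 21, 1),                      # 1x1 conv: per-pixel class scores
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
)
for size in [(64, 64), (96, 128)]:             # arbitrary input sizes
    x = torch.randn(1, 3, *size)
    print(fcn(x).shape)                        # output spatial size matches the input
```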