Open Access · Journal Article · DOI

Graph convolutional networks: a comprehensive review

TLDR
This survey conducts a comprehensive review of the emerging field of graph convolutional networks, one of the most prominent families of graph deep learning models, presents several open challenges in this area, and discusses potential directions for future research.
Abstract
Graphs naturally appear in numerous application domains, ranging from social analysis and bioinformatics to computer vision. The unique capability of graphs to capture structural relations among data allows us to harvest more insights than analyzing data in isolation. However, learning problems on graphs are often very challenging, because (1) many types of data, such as images and text, are not originally structured as graphs, and (2) for graph-structured data, the underlying connectivity patterns are often complex and diverse. On the other hand, representation learning has achieved great success in many areas. A potential solution is therefore to learn representations of graphs in a low-dimensional Euclidean space such that the graph properties are preserved. Although tremendous efforts have been made to address the graph representation learning problem, many approaches still suffer from shallow learning mechanisms. Deep learning models on graphs (e.g., graph neural networks) have recently emerged in machine learning and related areas and have demonstrated superior performance on a variety of problems. In this survey, among the numerous types of graph neural networks, we conduct a comprehensive review specifically of the emerging field of graph convolutional networks, one of the most prominent graph deep learning models. First, we group existing graph convolutional network models into two categories based on the types of convolutions and highlight some models in detail. Then, we categorize different graph convolutional networks according to their areas of application. Finally, we present several open challenges in this area and discuss potential directions for future research.
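To make concrete what a "convolution on a graph" computes, the sketch below implements one layer of a widely used propagation rule (the symmetric-normalized rule popularized by Kipf and Welling): each node's features are aggregated over its neighborhood, with self-loops and degree normalization, and then linearly transformed. The adjacency matrix, features, and weights are synthetic placeholders for illustration, not an example taken from the survey.

```python
# Minimal sketch of one graph-convolution layer in the spirit of the
# commonly used GCN propagation rule:
#   H_next = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )
# The toy graph, features, and weights below are synthetic placeholders.
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step on adjacency A, node features H, weights W."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    deg = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))    # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)      # ReLU activation

# Toy graph: 4 nodes, 3-dimensional input features, 2 output channels.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)
```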



Citations
Journal Article · DOI

Graph neural networks and cross-protocol analysis for detecting malicious IP addresses

TL;DR: This article proposes a semi-supervised approach that combines cross-protocol analysis with graph neural networks (GNNs) to assess IP reputation quickly and at scale, achieving 85.28% accuracy in detecting malicious IP addresses with only 5% of the data labeled.
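As a rough illustration of the semi-supervised setting described above (a GNN trained while only a small fraction of nodes carry labels), the sketch below computes the classification loss only over a roughly 5% labeled mask. The graph, features, and two-layer model are placeholders; the paper's cross-protocol features and actual architecture are not reproduced here.

```python
# Hedged sketch of semi-supervised node classification with a GNN-style model.
# Everything below (identity "adjacency", random features, labels) is synthetic.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, A_norm, X):
        h = F.relu(A_norm @ self.lin1(X))   # neighborhood aggregation + transform
        return A_norm @ self.lin2(h)        # per-node class logits

n, d = 200, 16
A_norm = torch.eye(n)                        # stand-in for a normalized adjacency matrix
X = torch.randn(n, d)
y = torch.randint(0, 2, (n,))                # benign / malicious labels (synthetic)
labeled = torch.rand(n) < 0.05               # ~5% of nodes are labeled

model = TinyGCN(d, 32, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    logits = model(A_norm, X)
    loss = F.cross_entropy(logits[labeled], y[labeled])  # loss only on labeled nodes
    loss.backward()
    opt.step()
```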
Journal Article · DOI

Granger-Causality-Based Multi-Frequency Band EEG Graph Feature Extraction and Fusion for Emotion Recognition

TL;DR: Based on the causal connectivity between EEG channels obtained by Granger causality (GC) analysis, this paper proposes a multi-frequency-band EEG graph feature extraction and fusion method for EEG emotion recognition.
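The general recipe behind Granger-causality-based graph construction can be sketched as follows: for each ordered pair of channels, test whether one channel's past improves prediction of the other, and add a directed edge when the test is significant. The random signals, lag order, and significance threshold below are assumptions for illustration only; the paper's per-band decomposition and feature fusion steps are omitted.

```python
# Hedged sketch: a directed channel-connectivity matrix from pairwise
# Granger causality tests. Signals and thresholds are synthetic placeholders.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def gc_adjacency(signals, maxlag=4, alpha=0.05):
    """signals: (n_channels, n_samples). Returns a directed adjacency matrix
    where A[i, j] = 1 if channel j Granger-causes channel i."""
    n = signals.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pair = np.column_stack([signals[i], signals[j]])  # test j -> i
            res = grangercausalitytests(pair, maxlag=maxlag)
            p = min(r[0]['ssr_ftest'][1] for r in res.values())  # best p-value over lags
            A[i, j] = 1.0 if p < alpha else 0.0
    return A

eeg = np.random.randn(4, 500)   # 4 synthetic channels, 500 samples
print(gc_adjacency(eeg))
```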
Journal Article · DOI

COLA: Improving Conversational Recommender Systems by Collaborative Augmentation

TL;DR: Lin et al. propose a collaborative augmentation method that simultaneously improves item representation learning and user preference modeling by augmenting item representations with user-aware information, i.e., item popularity.
Journal Article · DOI

Embedding gene sets in low-dimensional space

TL;DR: An important task in systems biology is to understand cellular processes through the lens of gene sets and their expression patterns, but genes form complex interaction networks, and leveraging this information in machine learning applications requires a sophisticated data representation.
References
Proceedings Article · DOI

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting model won first place in the ILSVRC 2015 classification task.
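The core building block of that framework is the residual (shortcut) connection, which adds a block's input back to its output so that the block only needs to learn a residual function. The sketch below is a minimal PyTorch illustration; channel counts and layer choices are illustrative rather than the exact ResNet configuration.

```python
# Minimal residual block sketch: two 3x3 convolutions plus an identity shortcut.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)   # shortcut: add the input back before the final ReLU

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```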
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art image-classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
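A rough PyTorch sketch of that layout (five convolutional layers, some followed by max-pooling, then three fully-connected layers and a 1000-way output) is given below; the kernel sizes and channel counts follow the commonly cited AlexNet configuration and are assumptions rather than details taken from this summary.

```python
# Illustrative AlexNet-style stack: 5 conv layers (3 followed by max-pooling),
# then 3 fully-connected layers with a 1000-way output.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),           # 1000-way logits; softmax is applied in the loss
)

print(alexnet_like(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```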
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
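The design point being summarized is that depth is increased by stacking many small 3x3 convolutions, with occasional 2x2 max-pooling, rather than by using larger filters. The sketch below follows the commonly cited 13-convolution layout (which, together with three fully-connected layers, gives 16 weight layers); the channel progression is an assumption for illustration.

```python
# Sketch of the VGG design idea: stages of stacked 3x3 convolutions with
# 2x2 max-pooling between stages. Channel counts are illustrative.
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, n_convs):
    """A stage of n_convs 3x3 convolutions followed by 2x2 max-pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU()]
    layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)

features = nn.Sequential(
    vgg_stage(3, 64, 2),
    vgg_stage(64, 128, 2),
    vgg_stage(128, 256, 3),
    vgg_stage(256, 512, 3),
    vgg_stage(512, 512, 3),   # 13 conv layers; adding 3 FC layers gives 16 weight layers
)
print(features(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 512, 7, 7])
```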
Proceedings Article · DOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
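The "fully convolutional" idea can be sketched as follows: every layer is convolutional (a 1x1 convolution replaces the fully-connected classifier), so inputs of arbitrary size are accepted, and the coarse score map is upsampled back to the input resolution for dense per-pixel prediction. The tiny backbone and class count below are placeholders, not the paper's VGG-based architecture.

```python
# Minimal fully-convolutional sketch: no fully-connected layers, so any input
# size works; the output is a per-pixel score map at the input resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, n_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(          # downsamples by a factor of 4
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, n_classes, 1)   # 1x1 conv scores each location

    def forward(self, x):
        h = self.classifier(self.backbone(x))
        # upsample back to the input resolution for dense, per-pixel output
        return F.interpolate(h, size=x.shape[2:], mode='bilinear', align_corners=False)

for size in [(1, 3, 128, 128), (1, 3, 200, 300)]:    # arbitrary input sizes
    print(TinyFCN()(torch.randn(*size)).shape)
```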