Open Access Journal Article DOI

Graph convolutional networks: a comprehensive review

TLDR
This survey conducts a comprehensive review of the emerging field of graph convolutional networks, one of the most prominent families of graph deep learning models, presents several open challenges in the area, and discusses potential directions for future research.
Abstract
Graphs appear naturally in numerous application domains, ranging from social analysis and bioinformatics to computer vision. The unique capability of graphs to capture structural relations among data makes it possible to harvest more insights than analyzing data in isolation. However, learning problems on graphs are often very challenging, because (1) many types of data, such as images and text, are not originally structured as graphs, and (2) for graph-structured data, the underlying connectivity patterns are often complex and diverse. On the other hand, representation learning has achieved great success in many areas. A potential solution is therefore to learn representations of graphs in a low-dimensional Euclidean space such that graph properties are preserved. Although tremendous efforts have been made to address the graph representation learning problem, many approaches still suffer from shallow learning mechanisms. Deep learning models on graphs (e.g., graph neural networks) have recently emerged in machine learning and related areas and have demonstrated superior performance on various problems. Although there are numerous types of graph neural networks, this survey conducts a comprehensive review specifically of the emerging field of graph convolutional networks, one of the most prominent graph deep learning models. First, we group existing graph convolutional network models into two categories based on the types of convolutions and highlight some models in detail. Then, we categorize different graph convolutional networks according to their application areas. Finally, we present several open challenges in this area and discuss potential directions for future research.
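The propagation rule below is a common building block covered by surveys of this kind (the renormalized graph convolution popularized by Kipf and Welling). The minimal NumPy sketch is illustrative only and is not taken from the paper itself; the toy graph, feature matrix, and weights are placeholders.

# Minimal sketch of a single graph-convolution layer using the widely cited
# renormalized propagation rule H' = sigma(D^-1/2 (A + I) D^-1/2 H W).
# Illustrative example only, not code from the surveyed paper.
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step on adjacency A, node features H, weights W."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU nonlinearity

# Toy 4-node graph with 3-dimensional node features and 2 output channels.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)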


Citations
Journal Article DOI

Evolving Social Media Background Representation with Frequency Weights and Co-Occurrence Graphs

TL;DR: This article proposes a representation method, based on tf-idf and graph embedding techniques, that captures temporal novelty as well as the fine details of word interdependencies; it outperforms other representation methods because it takes advantage of both the temporal aspect of tf-idf and the semantic aspect of graph embeddings.
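As a rough illustration of the ingredients mentioned in this summary, the sketch below computes tf-idf weights with scikit-learn and builds a word co-occurrence graph with networkx. The paper's temporal novelty weighting and embedding details are not reproduced; the toy documents and all names here are placeholders.

# Hedged sketch: tf-idf term weights alongside a word co-occurrence graph.
# Shows the general ingredients only, not the cited paper's method.
from itertools import combinations
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["graph networks model social media",
        "social media text evolves over time",
        "graph embeddings capture word semantics"]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(docs)            # documents x vocabulary matrix
vocab = vec.get_feature_names_out()

# Co-occurrence graph: words appearing in the same document share an edge.
G = nx.Graph()
for doc in docs:
    tokens = set(doc.split())
    for u, v in combinations(sorted(tokens), 2):
        w = G.get_edge_data(u, v, {"weight": 0})["weight"]
        G.add_edge(u, v, weight=w + 1)

print(tfidf.shape, len(vocab), G.number_of_edges())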
Book Chapter DOI

Point cloud compression

TL;DR: In this paper, the authors introduce and motivate some basic concepts and common tools for point cloud compression, including voxelization, octrees, graph representations, and 2D projections.
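A minimal sketch of one of the basic tools mentioned above, voxelization: points are snapped to a regular grid and only the occupied voxels are kept. This is illustrative only and does not reflect any particular codec; the voxel size and toy cloud are arbitrary.

# Minimal point-cloud voxelization sketch. Real codecs (e.g. octree-based
# ones) add entropy coding and many refinements not shown here.
import numpy as np

def voxelize(points, voxel_size):
    """Map an N x 3 point array to unique integer voxel coordinates."""
    grid = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(grid, axis=0)

points = np.random.rand(10_000, 3)           # toy cloud in the unit cube
occupied = voxelize(points, voxel_size=0.05)
print(points.shape, "->", occupied.shape)    # many points collapse per voxel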
Posted Content DOI

DeepBindGCN: Integrating Molecular Vector Representation with Graph Convolutional Neural Networks for Accurate Protein-Ligand Interaction Prediction

TL;DR: DeepBindGCN, presented in this paper, is a model that does not depend on a docked protein-ligand complex conformation and concisely retains spatial information and physicochemical features, making it suitable for many important large-scale virtual screening scenarios.
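The sketch below shows only the generic pattern suggested by this summary: a ligand graph encoded by a graph convolution, pooled to a vector, concatenated with a precomputed pocket vector, and scored by an MLP. It is not DeepBindGCN's actual architecture; all layer sizes, featurizations, and names are hypothetical.

# Generic graph-plus-vector binding-score sketch. NOT DeepBindGCN; the pocket
# featurization and layer sizes are placeholders.
import torch
import torch.nn as nn

class ToyBindingScorer(nn.Module):
    def __init__(self, atom_dim=16, pocket_dim=32, hidden=64):
        super().__init__()
        self.gcn_weight = nn.Linear(atom_dim, hidden, bias=False)
        self.mlp = nn.Sequential(nn.Linear(hidden + pocket_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, A, X, pocket_vec):
        A_hat = A + torch.eye(A.size(0))                  # self-loops
        d_inv_sqrt = A_hat.sum(1).rsqrt()
        A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
        h = torch.relu(A_norm @ self.gcn_weight(X))       # one GCN layer
        ligand_vec = h.mean(dim=0)                        # mean-pool over atoms
        return self.mlp(torch.cat([ligand_vec, pocket_vec]))

A = (torch.rand(5, 5) > 0.5).float()                      # toy ligand graph
A = ((A + A.t()) > 0).float().fill_diagonal_(0)
score = ToyBindingScorer()(A, torch.rand(5, 16), torch.rand(32))
print(score.shape)                                        # torch.Size([1])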
Book Chapter DOI

Streamlined Training of GCN for Node Classification with Automatic Loss Function and Optimizer Selection

TL;DR: In this article, a learning rate scheduler was implemented to adjust the learning rate based on the model's performance, which led to improved results and highlighted the importance of selecting appropriate loss and optimizer functions.
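To illustrate the idea of adjusting the learning rate based on observed performance, here is a minimal PyTorch sketch using ReduceLROnPlateau; the chapter's automatic loss-function and optimizer selection is not reproduced, and the model and validation loss are stand-ins.

# Performance-based learning rate scheduling with PyTorch's ReduceLROnPlateau.
# The training loop and validation loss are placeholders for illustration.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2)

for epoch in range(10):
    # ... training step would go here ...
    val_loss = max(0.3, 1.0 / (epoch + 1))   # stand-in: improves, then plateaus
    scheduler.step(val_loss)                 # halves lr when val_loss stagnates
    print(epoch, optimizer.param_groups[0]["lr"])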
Journal Article DOI

Dynamic Graph Representation Learning for Depression Screening with Transformer

TL;DR: ContrastEgo as mentioned in this paper treats each user as a dynamic time-evolving attributed graph (ego-network) and leverages supervised contrastive learning to maximize the agreement of users' representations at different scales.
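For context, here is a minimal sketch of a supervised contrastive loss, in which embeddings sharing a label are pulled together and others pushed apart; ContrastEgo's dynamic ego-network encoder and multi-scale agreement are not reproduced, and the embeddings and labels below are random placeholders.

# Supervised contrastive loss sketch (SupCon-style), illustrating only the
# loss idea mentioned in the summary above.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """z: (N, d) embeddings, labels: (N,) integer class labels."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                          # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))        # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(1) / pos_counts
    return loss[pos_mask.any(1)].mean()                    # anchors with positives

z = torch.randn(8, 16)
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
print(supervised_contrastive_loss(z, labels).item())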
References
Proceedings Article DOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the approach won first place in the ILSVRC 2015 classification task.
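The core idea, a residual (skip) connection, can be sketched in a few lines; the block below is illustrative and omits the batch normalization, striding, and projection shortcuts of the published architecture.

# Minimal residual block: the layers learn a residual F(x) and the input is
# added back through a skip connection, easing optimization of deep networks.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        residual = torch.relu(self.conv1(x))
        residual = self.conv2(residual)
        return torch.relu(x + residual)      # skip connection: x + F(x)

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)            # torch.Size([1, 64, 32, 32])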
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: As discussed by the authors, a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax achieved state-of-the-art classification performance.
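A sketch matching this description, five convolutional layers with occasional max-pooling followed by three fully connected layers and a 1000-way classifier, is given below; it approximates the original hyperparameters and is not a faithful reimplementation.

# AlexNet-style network sketch: five conv layers (some with max-pooling),
# then three fully connected layers ending in 1000-way logits.
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
)
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                  # logits for a 1000-way softmax
)

x = torch.randn(1, 3, 224, 224)
print(classifier(features(x)).shape)        # torch.Size([1, 1000])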
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
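The key design choice, stacking many small 3×3 convolutions with periodic pooling, can be sketched as follows; only the first two stages of a VGG-style network are shown, and the channel counts follow the original configuration approximately.

# VGG-style stage: repeated 3x3 convolutions followed by 2x2 max-pooling.
# The full models stack such stages to reach 16-19 weight layers.
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, num_convs):
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1), nn.ReLU()]
    layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)

stem = nn.Sequential(vgg_stage(3, 64, 2), vgg_stage(64, 128, 2))
x = torch.randn(1, 3, 224, 224)
print(stem(x).shape)                        # torch.Size([1, 128, 56, 56])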
Proceedings Article DOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
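The idea can be sketched with a tiny fully convolutional head: a 1×1 convolution produces per-pixel class scores and bilinear upsampling restores the input resolution, so arbitrary input sizes yield correspondingly sized outputs. The sketch omits the multi-depth feature fusion of the original model, and the backbone and class count are placeholders.

# Tiny fully convolutional network: no fixed-size fully connected layers, so
# any input size produces a matching per-pixel output.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.score = nn.Conv2d(64, num_classes, kernel_size=1)  # per-pixel scores

    def forward(self, x):
        h = self.score(self.backbone(x))
        return F.interpolate(h, size=x.shape[2:], mode="bilinear",
                             align_corners=False)

for size in [(128, 128), (200, 300)]:       # works for arbitrary input sizes
    x = torch.randn(1, 3, *size)
    print(TinyFCN()(x).shape)               # (1, 21, H, W) matching the input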