Open Access Journal Article (DOI)

Deep Network Embedding for Graph Representation Learning in Signed Networks

TL;DR
Extensive experimental results on real-world datasets demonstrate the superiority of the proposed model over state-of-the-art network embedding algorithms for graph representation learning in signed networks.
Abstract
Network embedding has attracted increasing attention over the past few years. As an effective approach to graph mining problems, network embedding aims to learn a low-dimensional feature vector representation for each node of a given network. Most existing network embedding algorithms, however, are designed only for unsigned networks, while signed networks, which contain both positive and negative links, have properties quite distinct from their unsigned counterparts. In this paper, we propose a deep network embedding model to learn low-dimensional node vector representations with structural balance preservation for signed networks. The model employs a semi-supervised stacked auto-encoder to reconstruct the adjacency connections of a given signed network. Because the adjacency connections in real-world signed networks are overwhelmingly positive, we impose a larger penalty to make the auto-encoder focus more on reconstructing the scarce negative links than the abundant positive links. In addition, to preserve the structural balance property of signed networks, we design pairwise constraints that place positively connected nodes much closer than negatively connected nodes in the embedding space. Based on the network representations learned by the proposed model, we conduct link sign prediction and community detection in signed networks. Extensive experimental results on real-world datasets demonstrate the superiority of the proposed model over state-of-the-art network embedding algorithms for graph representation learning in signed networks.
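
To make the two objectives concrete, the following is a minimal sketch (not the authors' code) of how a weighted reconstruction loss and a pairwise structural-balance constraint could be combined; the module name, layer sizes, and hyperparameters (beta, margin) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the two objectives described above:
# a reconstruction loss that penalizes errors on negative links more heavily,
# and a pairwise term keeping positively linked nodes closer than negatively
# linked ones. The architecture and hyperparameters are illustrative.
import torch
import torch.nn as nn

class SignedAutoencoder(nn.Module):
    def __init__(self, n_nodes, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_nodes, 256), nn.ReLU(),
                                     nn.Linear(256, dim))
        self.decoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_nodes))

    def forward(self, adj_rows):
        z = self.encoder(adj_rows)
        return z, self.decoder(z)

def signed_loss(adj_rows, recon, z, pos_pairs, neg_pairs, beta=10.0, margin=1.0):
    # Entries of -1 (negative links) get a beta-times larger reconstruction penalty.
    weights = torch.where(adj_rows < 0,
                          torch.full_like(adj_rows, beta),
                          torch.ones_like(adj_rows))
    recon_loss = (weights * (recon - adj_rows) ** 2).sum()

    # Pairwise structural-balance term: positively connected pairs (index pairs
    # into the batch) should be closer than negatively connected pairs by `margin`.
    d_pos = (z[pos_pairs[:, 0]] - z[pos_pairs[:, 1]]).pow(2).sum(dim=1)
    d_neg = (z[neg_pairs[:, 0]] - z[neg_pairs[:, 1]]).pow(2).sum(dim=1)
    balance_loss = torch.relu(margin + d_pos.mean() - d_neg.mean())
    return recon_loss + balance_loss
```

In such a setup, the first term penalizes errors on the scarce negative links beta times more heavily than on positive links, while the hinge term keeps positively linked pairs closer than negatively linked pairs by at least the margin.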


Citations
Journal Article (DOI)

Making Sense of Spatio-Temporal Preserving Representations for EEG-Based Human Intention Recognition

TL;DR: Introduces two deep learning-based frameworks with novel spatio-temporal preserving representations of raw EEG streams to precisely identify human intentions; the frameworks achieve high accuracy and outperform a set of state-of-the-art and baseline models.
Journal Article (DOI)

Learning Graph Embedding With Adversarial Training Methods

TL;DR: In this paper, the adversarial training principle is applied to force the latent codes to match a prior Gaussian or uniform distribution, which allows the graph embedding to be learned effectively.
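
As a rough illustration of that idea (a hypothetical sketch, not the cited paper's implementation), a discriminator can be trained to distinguish encoder outputs from samples of a Gaussian prior, while the encoder is additionally trained to fool it:

```python
# Illustrative sketch of adversarial regularization: a discriminator tries to tell
# encoder outputs from Gaussian prior samples, and the encoder is additionally
# trained to fool it, pushing the latent codes toward the prior distribution.
import torch
import torch.nn as nn

dim = 64
discriminator = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(z_fake):
    z_real = torch.randn_like(z_fake)           # samples from the Gaussian prior
    logits_real = discriminator(z_real)
    logits_fake = discriminator(z_fake.detach())
    return bce(logits_real, torch.ones_like(logits_real)) + \
           bce(logits_fake, torch.zeros_like(logits_fake))

def encoder_regularizer(z_fake):
    # The encoder wants its codes to be labeled as "real" prior samples.
    logits = discriminator(z_fake)
    return bce(logits, torch.ones_like(logits))
```
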
Proceedings Article (DOI)

Deep Learning for Community Detection: Progress, Challenges and Opportunities

TL;DR: This article summarizes the contributions of the various frameworks, models, and algorithms in each stream of deep learning in this domain along with the current challenges that remain unsolved and the future research opportunities yet to be explored.
Proceedings Article (DOI)

Adversarial Training Methods for Network Embedding

TL;DR: Introduces a more succinct and effective local regularization method, namely adversarial training, to network embedding so as to achieve model robustness and better generalization performance, taking DeepWalk as the base model for illustration.
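
A simplified sketch of adversarial training as a local regularizer on embeddings (illustrative only; the fast-gradient perturbation and epsilon value are assumptions, not the cited paper's exact procedure):

```python
# Illustrative sketch: perturb the embeddings in the gradient direction within an
# epsilon ball and penalize the loss at the perturbed point, encouraging the model
# to be robust to small changes in the embedding space.
import torch

def adversarial_regularizer(loss_fn, embeddings, epsilon=0.1):
    embeddings = embeddings.detach().requires_grad_(True)
    loss = loss_fn(embeddings)
    grad, = torch.autograd.grad(loss, embeddings)
    # Worst-case perturbation of norm epsilon (fast gradient method).
    perturbation = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    return loss_fn(embeddings + perturbation)
```
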
Journal Article (DOI)

A Comprehensive Survey on Community Detection With Deep Learning

TL;DR: A comprehensive review of the latest progress in community detection through deep learning is presented in this paper, where the authors devise a new taxonomy covering different state-of-the-art methods, including deep learning models based on deep neural networks (DNNs), deep nonnegative matrix factorization, and deep sparse filtering.
References
Posted Content

Efficient Estimation of Word Representations in Vector Space

TL;DR: Proposes two novel model architectures for computing continuous vector representations of words from very large data sets; the quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks.
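
For context, learning such word vectors is straightforward with the gensim library's skip-gram implementation (the library choice, toy corpus, and parameters below are assumptions for illustration):

```python
# Minimal illustration of learning word vectors from a toy corpus with gensim's
# skip-gram model (sg=1). Corpus and parameters are purely illustrative.
from gensim.models import Word2Vec

sentences = [["network", "embedding", "maps", "nodes", "to", "vectors"],
             ["signed", "networks", "contain", "positive", "and", "negative", "links"]]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("network", topn=3))
```
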
Journal Article (DOI)

The Pascal Visual Object Classes (VOC) Challenge

TL;DR: Reviews the state of the art in evaluated methods for both classification and detection, assessing whether the methods are statistically different, what they are learning from the images, and what the methods find easy or confuse.
Journal Article (DOI)

Birds of a Feather: Homophily in Social Networks

TL;DR: The homophily principle states that similarity breeds connection, so that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics.
Proceedings Article (DOI)

DeepWalk: online learning of social representations

TL;DR: DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences; the resulting representations encode social relations in a continuous vector space that is easily exploited by statistical models.
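
The core recipe can be sketched in a few lines (an illustrative example, not the original implementation): generate truncated random walks over a graph and feed them to a skip-gram model as if they were sentences.

```python
# Sketch of the DeepWalk idea: truncated random walks are treated as "sentences"
# for a skip-gram model. Graph, walk counts, and parameters are illustrative.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()

def random_walk(G, start, length=10):
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(G.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]

walks = [random_walk(G, node) for node in G.nodes() for _ in range(5)]
model = Word2Vec(walks, vector_size=32, window=5, min_count=1, sg=1, epochs=10)
embedding = model.wv[str(0)]   # learned vector for node 0
```
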
Proceedings Article (DOI)

node2vec: Scalable Feature Learning for Networks

TL;DR: node2vec learns a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes, using a biased random walk procedure.
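
The biased walk can be illustrated by its second-order transition rule (a simplified sketch with illustrative parameter values): the unnormalized probability of moving from the current node to a neighbor depends on whether that neighbor returns to, stays near, or moves away from the previous node.

```python
# Sketch of node2vec's biased second-order random walk step: the weight of moving
# from curr to a neighbor x depends on the previous node via the return parameter p
# and the in-out parameter q. Parameter values here are illustrative.
import random
import networkx as nx

def biased_step(G, prev, curr, p=1.0, q=2.0):
    neighbors = list(G.neighbors(curr))
    if not neighbors:
        return None
    weights = []
    for x in neighbors:
        if x == prev:                 # returning to the previous node
            weights.append(1.0 / p)
        elif G.has_edge(x, prev):     # staying close to the previous node
            weights.append(1.0)
        else:                         # moving outward (BFS/DFS trade-off)
            weights.append(1.0 / q)
    return random.choices(neighbors, weights=weights, k=1)[0]
```
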