Open Access · Posted Content
node2vec: Scalable Feature Learning for Networks
Aditya Grover, Jure Leskovec
TL;DR: In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.

Abstract: Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.
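The biased random walk the abstract describes is second-order: the transition from the current node depends on the previously visited node. A minimal illustrative sketch is below, assuming an unweighted graph stored as an adjacency list; the parameter names `p` (return parameter) and `q` (in-out parameter) follow the paper, but this simplified version recomputes transition weights at every step rather than using the paper's precomputed alias tables.

```python
import random

def biased_walk(adj, start, length, p=1.0, q=1.0):
    """One node2vec-style second-order random walk (illustrative sketch).

    adj: dict mapping node -> list of neighbors (unweighted graph).
    p: return parameter; larger p discourages immediately revisiting the previous node.
    q: in-out parameter; q > 1 biases toward BFS-like (local) exploration,
       q < 1 toward DFS-like (outward) exploration.
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = adj[cur]
        if not neighbors:
            break  # dead end: stop the walk early
        if len(walk) == 1:
            # First step has no "previous" node, so sample uniformly.
            walk.append(random.choice(neighbors))
            continue
        prev = walk[-2]
        # Unnormalized second-order weights:
        # 1/p to return to prev, 1 to a common neighbor of prev, 1/q otherwise.
        weights = []
        for x in neighbors:
            if x == prev:
                weights.append(1.0 / p)
            elif x in adj[prev]:
                weights.append(1.0)
            else:
                weights.append(1.0 / q)
        walk.append(random.choices(neighbors, weights=weights)[0])
    return walk
```

In the full method, many such walks per node form a "corpus" that is fed to a Skip-gram model (as in word2vec) to produce the node embeddings; the paper makes each step O(1) via alias sampling, which this sketch omits for clarity.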
Citations
Proceedings Article
How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision
Dongkwan Kim, Alice Oh
TL;DR: A self-supervised graph attention network (SuperGAT), an improved graph attention model for noisy graphs, generalizes across 15 datasets, and models designed by the resulting recipe show improved performance over baselines.
Journal Article
A knowledge graph to interpret clinical proteomics data
Alberto Santos, Ana Colaço, Annelaura Bach Nielsen, Lili Niu, Maximilian T. Strauss, Philipp E. Geyer, Fabian Coscia, Nicolai J. Wewer Albrechtsen, Filip Mundt, Lars Juhl Jensen, Matthias Mann, et al.
TL;DR: The Clinical Knowledge Graph (CKG) is an open-source platform comprising close to 20 million nodes and 220 million relationships that represent relevant experimental data, public databases, and literature.
Proceedings Article
SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation
TL;DR: A Simple framework for GRAph Contrastive lEarning, SimGRACE, which does not require data augmentations and can yield competitive or better performance than state-of-the-art methods in terms of generalizability, transferability, and robustness, while enjoying an unprecedented degree of flexibility and efficiency.
Proceedings Article
GraphMAE: Self-Supervised Masked Graph Autoencoders
TL;DR: This study identifies and examines the issues that negatively impact the development of GAEs, including their reconstruction objective, training robustness, and error metric, and presents a masked graph autoencoder, GraphMAE, that mitigates these issues for generative self-supervised graph learning.
References
Proceedings Article
Glove: Global Vectors for Word Representation
TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
Posted Content
Efficient Estimation of Word Representations in Vector Space
TL;DR: This paper proposes two novel model architectures for computing continuous vector representations of words from very large data sets; the quality of these representations is measured in a word similarity task, and the results are compared to previously best-performing techniques based on different types of neural networks.
Journal Article
Nonlinear dimensionality reduction by locally linear embedding.
Sam T. Roweis, Lawrence K. Saul
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and learns the global structure of nonlinear manifolds.
Journal Article
A global geometric framework for nonlinear dimensionality reduction.
TL;DR: An approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set, efficiently computes a globally optimal solution, and is guaranteed to converge asymptotically to the true structure.
Posted Content
Distributed Representations of Words and Phrases and their Compositionality
TL;DR: In this paper, the Skip-gram model is used to learn high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships and improve both the quality of the vectors and the training speed.