Open Access · Posted Content
Machine Learning on Graphs: A Model and Comprehensive Taxonomy
TL;DR: A comprehensive taxonomy of representation learning methods for graph-structured data is proposed, aiming to unify several disparate bodies of work, provide a solid foundation for understanding the intuition behind these methods, and enable future research in the area.
Abstract:
There has been a surge of recent interest in learning representations for graph-structured data. Graph representation learning methods have generally fallen into three main categories, based on the availability of labeled data. The first, network embedding (such as shallow graph embedding or graph auto-encoders), focuses on learning unsupervised representations of relational structure. The second, graph regularized neural networks, leverages graphs to augment neural network losses with a regularization objective for semi-supervised learning. The third, graph neural networks, aims to learn differentiable functions over discrete topologies with arbitrary structure. However, despite the popularity of these areas, there has been surprisingly little work on unifying the three paradigms. Here, we aim to bridge the gap between graph neural networks, network embedding, and graph regularization models. We propose a comprehensive taxonomy of representation learning methods for graph-structured data, aiming to unify several disparate bodies of work. Specifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which generalizes popular algorithms for semi-supervised learning on graphs (e.g., GraphSage, Graph Convolutional Networks, Graph Attention Networks) and unsupervised learning of graph representations (e.g., DeepWalk, node2vec) into a single consistent approach. To illustrate the generality of this approach, we fit over thirty existing methods into this framework. We believe that this unifying view both provides a solid foundation for understanding the intuition behind these methods and enables future research in the area.
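The encoder-decoder view described in the abstract can be sketched in a few lines: an encoder maps each node to an embedding, a decoder scores node pairs, and an unsupervised loss compares the decoded scores to the adjacency matrix. The sketch below is a minimal illustration under assumptions of our own (the names `ShallowEncoder` and `inner_product_decoder`, the toy graph, and the binary cross-entropy objective are illustrative choices, not the authors' implementation).

```python
# Minimal GRAPHEDM-style sketch: encoder -> embeddings Z, decoder -> pairwise
# scores, unsupervised reconstruction loss against the adjacency matrix.
import math
import random

random.seed(0)

class ShallowEncoder:
    """Lookup-table encoder (DeepWalk/node2vec-style): one free vector per node."""
    def __init__(self, num_nodes, dim):
        self.Z = [[random.gauss(0.0, 0.1) for _ in range(dim)]
                  for _ in range(num_nodes)]

    def __call__(self):
        return self.Z

def inner_product_decoder(Z):
    """Graph-autoencoder-style decoder: sigmoid of pairwise inner products."""
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))
    n = len(Z)
    return [[sigmoid(sum(a * b for a, b in zip(Z[i], Z[j])))
             for j in range(n)] for i in range(n)]

def reconstruction_loss(A, Z):
    """Binary cross-entropy between decoded similarities and adjacency A."""
    P = inner_product_decoder(Z)
    n, eps = len(A), 1e-9
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += -(A[i][j] * math.log(P[i][j] + eps)
                       + (1 - A[i][j]) * math.log(1 - P[i][j] + eps))
    return total / (n * n)

# Toy 4-node graph: two disjoint edges (0-1 and 2-3).
A = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]

enc = ShallowEncoder(num_nodes=4, dim=2)
print(reconstruction_loss(A, enc()))
```

Swapping the lookup-table encoder for a message-passing encoder over node features (GCN/GraphSage-style), or adding a supervised or graph-regularization term to the loss, recovers the other two paradigms — which is the sense in which a single encoder-decoder template can unify them.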
Citations
Posted Content (DOI)
Interacting brains revisited: A cross-brain network neuroscience perspective
Christian Gerloff, Kerstin Konrad, Danilo Bzdok, C. Büsing, Vanessa Reindl, et al.
TL;DR: In this article, a comprehensive framework based on bipartite graphs for interbrain networks is proposed, which can provide meaningful insights into the neural underpinnings of social interactions.
Posted Content
Accelerating science with human versus alien artificial intelligences
Jamshid Sourati, James A. Evans, et al.
TL;DR: This paper showed that incorporating the distribution of human expertise into self-supervised models by training on inferences cognitively available to experts dramatically improves AI prediction of future human discoveries and inventions.
Journal Article (DOI)
node2coords: Graph Representation Learning with Wasserstein Barycenters
TL;DR: node2coords, as discussed by the authors, is a representation learning algorithm for graphs that simultaneously learns a low-dimensional space and coordinates for the nodes in that space, revealing the proximity of each node's local structure to the graph's structural patterns.
Posted Content
Deep Lagrangian Constraint-based Propagation in Graph Neural Networks.
TL;DR: This work proposes a novel approach to learning in GNNs, based on constrained optimization in the Lagrangian framework, and shows that the proposed approach compares favourably with popular models on several benchmarks.
Journal Article (DOI)
Multi-modal intermediate integrative methods in neuropsychiatric disorders: A review
TL;DR: In this article, the authors review multi-modal intermediate integrative techniques based on component analysis, matrix factorization, similarity networks, multiple kernel learning, Bayesian networks, artificial neural networks, and graph transformation, as well as their applications in neuropsychiatric domains.