Proceedings ArticleDOI
Estimating complex networks centrality via neural networks and machine learning
Felipe Grando, Luis C. Lamb, et al.
pp. 1–8
TL;DR: The results show that the regression output of the machine learning algorithms applied in the experiments successfully approximates the real metric values and is a robust alternative in real-world applications, in particular in complex and social network analysis.
Abstract: Vertex centrality measures are important analysis elements in complex networks and systems. These metrics have high space and time complexity, which is a severe problem in applications that typically involve large networks. To apply such high-complexity metrics to large networks, we trained and tested off-the-shelf machine learning algorithms on several networks generated with five well-known complex network models. Our main hypothesis is that if one uses low-complexity metrics as inputs to train the algorithms, one will achieve good approximations of high-complexity measures. Our results show that the regression output of the machine learning algorithms applied in our experiments successfully approximates the real metric values and is a robust alternative in real-world applications, in particular in complex and social network analysis.
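The paper's hypothesis can be illustrated with a minimal sketch: compute a cheap metric (degree centrality, O(V+E)) as the input feature and fit a simple regressor to predict an expensive metric (closeness centrality, one BFS per vertex, O(V·(V+E))). The tiny graph, the choice of degree and closeness, and the one-feature least-squares fit are all illustrative assumptions, not the paper's actual setup, which used several generated network models and off-the-shelf learners.

```python
from collections import deque

def degree_centrality(adj):
    # Low-complexity input feature: normalized degree, O(V + E)
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    # High-complexity target metric: one BFS per vertex, O(V * (V + E))
    cc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total = sum(dist.values())
        cc[s] = (len(dist) - 1) / total if total else 0.0
    return cc

# Tiny illustrative graph (a path attached to a hub); not from the paper
adj = {0: {1}, 1: {0, 2}, 2: {1, 3, 4, 5}, 3: {2}, 4: {2}, 5: {2}}
x = degree_centrality(adj)
y = closeness_centrality(adj)

# One-feature least-squares fit y ~ a*x + b, a stand-in for the
# off-the-shelf regressors trained in the paper's experiments
xs, ys = zip(*((x[v], y[v]) for v in adj))
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
a = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / \
    sum((xi - mx) ** 2 for xi in xs)
b = my - a * mx
pred = {v: a * x[v] + b for v in adj}
```

In practice the paper trains on many generated networks and uses several low-complexity metrics as features; the point of the sketch is only that a cheap metric carries enough signal to regress an expensive one.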
Citations
Journal ArticleDOI
Machine learning
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Proceedings ArticleDOI
Random graphs
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Journal ArticleDOI
Machine Learning in Network Centrality Measures: Tutorial and Outlook
TL;DR: In this paper, the authors explain how neural network learning algorithms can make these metrics applicable to complex networks of arbitrary size, and present a simple way to generate and acquire training data.
Book ChapterDOI
Multitask Learning on Graph Neural Networks: Learning Multiple Graph Centrality Measures with a Unified Network
TL;DR: In this paper, the authors show that GNNs are capable of multitask learning, which can be naturally enforced by training the model to refine a single set of multidimensional embeddings and decode them into multiple outputs by connecting MLPs at the end of the pipeline.
Proceedings ArticleDOI
On approximating networks centrality measures via neural learning algorithms
Felipe Grando, Luis C. Lamb, et al.
TL;DR: The results show that the regression output of the machine learning algorithms successfully approximates the real metric values and is a robust alternative in real-world applications.
References
Journal ArticleDOI
Collective dynamics of small-world networks
TL;DR: Simple models that can be tuned through the middle ground between regular and random networks are explored: regular networks ‘rewired’ to introduce increasing amounts of disorder. These systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs.
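The rewiring idea behind this small-world model can be sketched as follows; this is a minimal illustrative implementation of the general scheme (ring lattice plus probabilistic rewiring), with parameter names and the duplicate/self-loop handling chosen for simplicity rather than taken from the paper.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbours,
    with each lattice edge rewired to a random target with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    # Regular ring lattice: connect each node to k/2 neighbours on each side
    for v in range(n):
        for j in range(1, k // 2 + 1):
            u = (v + j) % n
            adj[v].add(u)
            adj[u].add(v)
    # Rewire each lattice edge with probability p (skip self-loops/duplicates)
    for v in range(n):
        for j in range(1, k // 2 + 1):
            u = (v + j) % n
            if rng.random() < p:
                w = rng.randrange(n)
                if w != v and w not in adj[v]:
                    adj[v].discard(u)
                    adj[u].discard(v)
                    adj[v].add(w)
                    adj[w].add(v)
    return adj
```

At p = 0 the graph stays a regular lattice; at p = 1 it approaches a random graph; small intermediate p yields the high-clustering, short-path regime the paper describes. Rewiring swaps one edge for another, so the edge count n·k/2 is preserved.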
Journal ArticleDOI
Emergence of Scaling in Random Networks
TL;DR: A model based on two ingredients, growth and preferential attachment, reproduces the observed stationary scale-free distributions, indicating that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
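The two ingredients can be sketched in a few lines: the graph grows one node at a time, and each new node attaches to m existing nodes chosen with probability proportional to their current degree. This is a simplified illustrative version (seeded with m isolated nodes and degree-weighted sampling via a repeated-node list), not the paper's exact formulation.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a graph by preferential attachment: each new node links to
    m existing nodes chosen proportionally to their current degree."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    targets = list(range(m))   # start from m seed nodes
    repeated = []              # node list with multiplicity = degree
    for v in range(m, n):
        for u in targets:      # growth: new node v brings m edges
            adj[v].add(u)
            adj[u].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
        # Preferential attachment: sample m distinct degree-weighted targets
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return adj
```

High-degree nodes appear more often in `repeated`, so they keep attracting new links, which is the rich-get-richer mechanism behind the scale-free degree distribution.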
Book
Neural Networks: A Comprehensive Foundation
TL;DR: Thorough, well-organized, and completely up to date, this book examines all the important aspects of this emerging technology, including the learning process, back-propagation learning, radial-basis function networks, self-organizing systems, modular networks, temporal processing and neurodynamics, and VLSI implementation of neural networks.
Journal ArticleDOI
Learning representations by back-propagating errors
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the network and the desired output vector; the resulting internal representations capture important features of the task domain.
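The weight-update rule at the heart of this procedure can be shown on a single sigmoid unit trained by gradient descent on squared error; in deeper networks the same error signal is propagated backwards through the layers. The AND-gate task, learning rate, and epoch count are illustrative choices, not from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy task: output 1 only when x1 AND x2 (linearly separable,
# so a single unit suffices)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0
lr = 1.0
for _ in range(2000):
    for (x1, x2), t in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # dE/dz for E = (y - t)^2 / 2, using sigmoid'(z) = y * (1 - y)
        delta = (y - t) * y * (1 - y)
        # Adjust each weight against its error gradient
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b -= lr * delta
```

Each pass nudges the weights in the direction that reduces the gap between the actual and desired output, exactly the minimization the TL;DR describes.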
Journal ArticleDOI
The WEKA data mining software: an update
TL;DR: This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.