
Showing papers on "Network theory published in 2017"


Book ChapterDOI
Nan Lin1
12 Jul 2017
TL;DR: This chapter reviews social capital as discussed in the literature, identifies controversies and debates, considers some critical issues, and provides conceptual and research strategies for building a theory.
Abstract: This chapter reviews social capital as discussed in the literature, identifies controversies and debates, considers some critical issues, and provides conceptual and research strategies for building a theory. It argues that such a theory and the research enterprise must be based on the fundamental understanding that social capital is captured from embedded resources in social networks. Measurements such as strength of tie, network bridging, or intimacy, intensity, interaction and reciprocity can be made relative to two frameworks: network resources and contact resources. There are many other measures, such as size, density, cohesion, and closeness of social networks, which are candidates as measures for social capital. Network locations are necessary conditions of embedded resources. By considering social capital as assets in networks, the chapter discusses some issues in conceptualization, measurement, and causal mechanism. A proposed model identifies the exogenous factors leading to the acquisition (or the lack) of social capital as well as the expected returns of social capital.

3,733 citations


Journal ArticleDOI
TL;DR: Challenges to network theory may propel the network approach from its adolescence into adulthood and promises advances in understanding psychopathology both at the nomothetic and idiographic level.
Abstract: Since the introduction of mental disorders as networks of causally interacting symptoms, this novel framework has received considerable attention. The past years have resulted in over 40 scientific publications and numerous conference symposia and workshops. Now is an excellent moment to take stock of the network approach: What are its most fundamental challenges, and what are potential ways forward in addressing them? After a brief conceptual introduction, we first discuss challenges to network theory: (1) What is the validity of the network approach beyond some commonly investigated disorders such as major depression? (2) How do we best define psychopathological networks and their constituent elements? And (3) how can we gain a better understanding of the causal nature and real-life underpinnings of associations among symptoms? Next, after a short technical introduction to network modeling, we discuss challenges to network methodology: (4) heterogeneity of samples studied with network analytic models, and (5) a lurking replicability crisis in this strongly data-driven and exploratory field. Addressing these challenges may propel the network approach from its adolescence into adulthood and promises advances in understanding psychopathology both at the nomothetic and idiographic level.

485 citations


MonographDOI
27 Oct 2017
TL;DR: This textbook presents a detailed overview of the new theory and methods of network science, covering algorithms for graph exploration, node ranking and network generation, among the others, and allows students to experiment with network models and real-world data sets.
Abstract: Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, mathematics, engineering, biology, neuroscience and the social sciences.

313 citations


Journal ArticleDOI
TL;DR: It is shown that although the prominent centrality measures in network analysis make use of different information about nodes' positions, they all process that information in an identical way: they all spring from a common family characterized by the same simple axioms.
Abstract: We show that although the prominent centrality measures in network analysis make use of different information about nodes' positions, they all process that information in an identical way: they all spring from a common family characterized by the same simple axioms. In particular, they are all based on a monotonic and additively separable treatment of a statistic that captures a node's position in the network.

130 citations
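The "monotonic and additively separable" form the authors axiomatize can be illustrated with two familiar measures, each a sum over per-node contributions. A minimal sketch in plain Python (the toy graph and function names are ours, not the paper's):

```python
from collections import deque

# Toy undirected graph as an adjacency dict (illustrative example,
# not taken from the paper).
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}

def degree_centrality(g, v):
    # Additively separable: a constant contribution summed per neighbour.
    return len(g[v])

def closeness_centrality(g, v):
    # Sum of a monotone function (the reciprocal) of BFS distances.
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return sum(1.0 / d for u, d in dist.items() if u != v)

print([degree_centrality(graph, v) for v in sorted(graph)])  # [2, 3, 2, 1]
print(closeness_centrality(graph, 1))                        # 3.0
```

Both measures aggregate a nodal statistic (neighbour count, distance) with a monotone, additively separable rule, which is the common structure the paper isolates.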


Journal ArticleDOI
TL;DR: This article shows how one can estimate a regularized network on typical attitude data and highlights that network theory provides a framework for both testing and developing formalized hypotheses on attitudes and related core social psychological constructs.
Abstract: In this article, we provide a brief tutorial on the estimation, analysis, and simulation of attitude networks using the programming language R. We first discuss what a network is and subsequently show how one can estimate a regularized network on typical attitude data. For this, we use open-access data on the attitudes toward Barack Obama during the 2012 American presidential election. Second, we show how one can calculate standard network measures such as community structure, centrality, and connectivity on this estimated attitude network. Third, we show how one can simulate from an estimated attitude network to derive predictions from attitude networks. In doing so, we highlight that network theory provides a framework for both testing and developing formalized hypotheses on attitudes and related core social psychological constructs.

123 citations


Book ChapterDOI
TL;DR: In this paper, the authors make two general arguments focusing on the process of norm emergence in networks based on the history of global human rights norms and the formation of Amnesty International, and argue that the network which eventually emerges is not a function of the inherent "goodness" of one set of norms over another, since the quality of any norm is difficult to judge prior to its manifestation in a network of shared adherents.
Abstract: Despite considerable interest in political networks, especially transnational advocacy networks (TANs), political scientists have imported few insights from network theory into their studies. This essay aims to begin an exchange between network theorists and political scientists by addressing two related questions. How can network theory inform the study of international relations, particularly in the examination of TANs? Conversely, what problems arise in political phenomena that can enrich network theory? We make two general arguments focusing on the process of norm emergence in networks based on the history of global human rights norms and the formation of Amnesty International. First, political power can be an emergent property of networks, found most likely in scale-free structures. That is, central (or more connected) nodes can influence a network directly or indirectly and thereby shape the ends towards which the nodes collectively move. Second, norms are also emergent properties of networks. In the earliest stages of change, many norms compete for acceptance and many potential networks built on different norms or combinations of norms exist but are not yet activated. We argue that the network which eventually emerges is not a function of the inherent "goodness" of one set of norms over another, since the quality of any norm is difficult to judge prior to its manifestation in a network of shared adherents. Rather, at least in the case of human rights, the crystallization of the observed network from the range of possible alternatives preceded the widespread acceptance of the norm and occurred as a result of a central node that exercised agenda-setting power by controlling the flow of information in the network.

108 citations


Journal ArticleDOI
TL;DR: Synthesizing recent developments in the network ecology literature, it is proposed that applying these solutions will aid in synthesizing ecological sub-disciplines and allied fields by improving the accessibility of network methods and models.
Abstract: Network ecology provides a systems basis for approaching ecological questions, such as factors that influence biological diversity, the role of particular species or particular traits in structuring ecosystems, and long-term ecological dynamics (e.g., stability). Whereas the introduction of network theory has enabled ecologists to quantify not only the degree, but also the architecture of ecological complexity, these advances have come at the cost of introducing new challenges, including new theoretical concepts and metrics, and increased data complexity and computational intensity. Synthesizing recent developments in the network ecology literature, we point to several potential solutions to these issues: integrating network metrics and their terminology across sub-disciplines; benchmarking new network algorithms and models to increase mechanistic understanding; and improving tools for sharing ecological network research, in particular “model” data provenance, to increase the reproducibility of network models and analyses. We propose that applying these solutions will aid in synthesizing ecological sub-disciplines and allied fields by improving the accessibility of network methods and models.

66 citations


Journal ArticleDOI
TL;DR: It is shown that in general the weight neighborhood centrality can rank the spreading ability of nodes more accurately than its benchmark centrality, especially when using the degree k or coreness ks as the benchmarkcentrality.
Abstract: Identifying the most influential spreaders in complex networks is crucial for optimally using the network structure and designing efficient strategies to accelerate information dissemination or prevent epidemic outbreaks. In this paper, by taking into account the centrality of a node and its neighbors’ centrality, which depends on the diffusion importance of links, we propose a novel influence measure, the weight neighborhood centrality, to quantify the spreading ability of nodes in complex networks. To evaluate the performance of our method, we use the Susceptible–Infected–Recovered (SIR) model to simulate the epidemic spreading process on six real-world networks and four artificial networks. By measuring the rank imprecision and the rank correlation between the rank lists generated by SIR simulation results and the ones generated by centrality measures, we show that in general the weight neighborhood centrality can rank the spreading ability of nodes more accurately than its benchmark centrality, especially when using the degree k or coreness ks as the benchmark centrality. Further, we compare the monotonicity and the computational complexity of different ranking methods, which shows that our method not only is better at distinguishing the spreading ability of nodes but also can be used in large-scale networks due to its high computational efficiency.

65 citations
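The SIR benchmark used here, where a node's "true" spreading ability is estimated by seeding epidemics at that node, can be sketched in a few lines. This is an illustrative discrete-time simulation on a toy graph, not the paper's weight neighborhood centrality or its datasets:

```python
import random

# Small toy contact network as an adjacency dict (illustrative only).
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [3]}

def sir_outbreak_size(g, seed, beta=0.5, seed_rng=0):
    """Run one discrete-time SIR epidemic from `seed` with infection
    probability `beta` per contact; return the number of nodes ever
    infected (every infected node recovers after one step)."""
    rng = random.Random(seed_rng)
    infected, recovered = {seed}, set()
    while infected:
        new_infected = set()
        for u in infected:
            for v in g[u]:
                if v not in infected and v not in recovered:
                    if rng.random() < beta:
                        new_infected.add(v)
        recovered |= infected
        infected = new_infected
    return len(recovered)

# Averaging outbreak size over many runs per seed node gives the
# ground-truth spreading-ability ranking that centrality measures
# are compared against.
scores = {v: sum(sir_outbreak_size(graph, v, seed_rng=r) for r in range(200)) / 200
          for v in graph}
print(scores)
```

On this toy graph the hub (node 0) should, on average, trigger larger outbreaks than the peripheral node 4, which is the kind of ranking the paper's centrality is evaluated against.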


Journal ArticleDOI
TL;DR: This paper proposes a novel method for identifying top-K viral information propagators from a reduced search space by computing the Katz centrality and local average centrality of each node and testing the values against two thresholds.
Abstract: Network theory concepts form the core of algorithms that are designed to uncover valuable insights from various datasets. In particular, network centrality measures such as eigenvector centrality, Katz centrality and PageRank centrality are used to retrieve the top-K viral information propagators in social networks and to rank web pages for efficient information retrieval. In this paper, we propose a novel method for identifying top-K viral information propagators from a reduced search space. Our algorithm computes the Katz centrality and local average centrality of each node and tests the values against two threshold (constraint) values. Only those nodes which satisfy these constraints form the search space for the top-K propagators. Our proposed algorithm is tested against four datasets, and the results show that it reduces the number of nodes in the search space by at least 70%. We also considered the dependency of the Katz centrality values on the parameters α and β in our experiments and established a relationship between the α values, the number of nodes in the search space and the network characteristics. Finally, we compare the top-K results of our approach against the top-K results of degree centrality.

60 citations
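Katz centrality, the filtering statistic used above, sums contributions from walks of every length, damped geometrically by a factor α. A minimal truncated-power-series implementation in plain Python (the toy star graph and the choice α = 0.1 are ours; α must stay below the reciprocal of the adjacency matrix's largest eigenvalue for the series to converge):

```python
# Katz centrality by truncated power series: x_v = sum over k >= 1 of
# alpha^k times the number of walks of length k ending at v.
graph = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}  # star graph (toy example)
alpha = 0.1  # < 1/sqrt(3), the spectral radius of this star, so it converges

def katz_centrality(g, alpha, n_iter=100):
    x = {v: 0.0 for v in g}
    contrib = {v: 1.0 for v in g}  # walks of length 0 ending at each node
    for _ in range(n_iter):
        # Extend every walk by one edge and damp it by alpha.
        contrib = {v: alpha * sum(contrib[u] for u in g[v]) for v in g}
        for v in g:
            x[v] += contrib[v]
    return x

katz = katz_centrality(graph, alpha)
print(max(katz, key=katz.get))  # the hub accumulates the most damped walks
```

The hub of the star ends more walks of every length than any leaf, so it receives the highest Katz score, which is why thresholding on Katz values can prune the search space for top-K propagators.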


Journal ArticleDOI
11 Dec 2017
TL;DR: The bridge and cluster structure of social networks is a proxy indicator of variation in knowledge and practice and network brokers have a competitive advantage in detecting and developing new strategies, a subset of which are great strategies.
Abstract: We use network theory to define the social origins of great strategies. Our argument proceeds in four steps: (1) the bridge and cluster structure of social networks is a proxy indicator of variation in knowledge and practice (homogeneity within clusters, heterogeneity between); (2) people with strong connections to multiple clusters (network brokers) have breadth, timing, and arbitrage advantages in moving knowledge/practice from clusters where it is a commodity into clusters where it is valuable; (3) new strategy is a new perspective on, or new combination of, prior knowledge/practice; so (4) network brokers have a competitive advantage in detecting and developing new strategies, a subset of which are great strategies.

55 citations


Journal ArticleDOI
15 Nov 2017-Entropy
TL;DR: A novel mechanism is proposed to quantitatively measure centrality using the re-defined entropy centrality model, which is based on decompositions of a graph into subgraphs and analysis on the entropy of neighbor nodes.
Abstract: Centrality is one of the most studied concepts in network analysis. Although an abundance of methods for measuring centrality in social networks has been proposed, each approach exclusively characterizes limited parts of what it implies for an actor to be “vital” to the network. In this paper, a novel mechanism is proposed to quantitatively measure centrality using a re-defined entropy centrality model, which is based on decompositions of a graph into subgraphs and analysis of the entropy of neighbor nodes. By design, the re-defined entropy centrality, which describes associations among node pairs and captures the process of influence propagation, can be interpreted as a measure of an actor's potential for communication activity. We evaluate the efficiency of the proposed model using four real-world datasets with varied sizes and densities and three artificial networks constructed by the Barabási-Albert, Erdős-Rényi and Watts-Strogatz models. The four datasets are Zachary's karate club, USAir97, a collaboration network and the Email network URV. Extensive experimental results prove the effectiveness of the proposed method.
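To give a feel for entropy-style centrality, here is a deliberately simplified stand-in, the Shannon entropy of a node's neighbour-degree distribution. This is our illustration of the general idea, not the paper's re-defined entropy centrality model:

```python
import math

# Toy graph as an adjacency dict; illustrative only.
graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {3}}

def neighbor_entropy(g, v):
    """Shannon entropy of the normalized degree distribution among v's
    neighbours: a crude proxy for how 'diverse' v's local communication
    opportunities are (not the paper's exact model)."""
    degs = [len(g[u]) for u in g[v]]
    total = sum(degs)
    probs = [d / total for d in degs]
    return -sum(p * math.log2(p) for p in probs)

print({v: round(neighbor_entropy(graph, v), 3) for v in graph})
```

A node with a single neighbour has zero entropy (no diversity of contacts), while a node whose neighbours split its attention evenly reaches the maximum log2(k); entropy-based centralities build on this intuition to score potential for communication activity.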

Journal ArticleDOI
TL;DR: It is demonstrated that the centrality measures are affected differently by the edge effect, and that the same centrality measure is affected differently depending on the type of network distance used, which highlights the importance of defining the network's boundary in a way that is relevant to the research question.
Abstract: With increased interest in the use of network analysis to study the urban and regional environment, it is important to understand the sensitivity of centrality analysis results to the so-called “edge effect”. Most street network models have artificial boundaries, and there are principles that can be applied to minimise or eliminate the effect of the boundary condition. However, the extent of this impact has not been systematically studied and remains little understood. In this article we present an empirical study on the impact of different network model boundaries on the results of closeness and betweenness centrality analysis of street networks. The results demonstrate that the centrality measures are affected differently by the edge effect, and that the same centrality measure is affected differently depending on the type of network distance used. These results highlight the importance, in any study of street networks, of defining the network's boundary in a way that is relevant to the research question, and of selecting appropriate analysis parameters and statistics.

Journal ArticleDOI
TL;DR: This work draws attention to the importance of dynamics inside and between state variables by adding edges defined by functional relationships to the original topology of dynamical systems, defining the typical connection types, and highlighting how the reinterpreted topologies change the number of necessary sensors and actuators in benchmark networks.
Abstract: Network theory based controllability and observability analysis have become widely used techniques. We realized that most applications are not related to dynamical systems, and mainly the physical topologies of the systems are analysed without deeper considerations. Here, we draw attention to the importance of dynamics inside and between state variables by adding functional relationship defined edges to the original topology. The resulting networks differ from physical topologies of the systems and describe more accurately the dynamics of the conservation of mass, momentum and energy. We define the typical connection types and highlight how the reinterpreted topologies change the number of the necessary sensors and actuators in benchmark networks widely studied in the literature. Additionally, we offer a workflow for network science-based dynamical system analysis, and we also introduce a method for generating the minimum number of necessary actuator and sensor points in the system.

Journal ArticleDOI
01 Oct 2017
TL;DR: The first application of multivariate visibility graphs to fMRI data is presented, and some relevant aspects of its application to BOLD time series are described, and the analogies and differences with existing methods are discussed.
Abstract: Visibility algorithms are a family of methods that map time series into graphs, such that the tools of graph theory and network science can be used for the characterization of time series. This approach has proved a convenient tool, and visibility graphs have found applications across several disciplines. Recently, an approach has been proposed to extend this framework to multivariate time series, allowing a novel way to describe collective dynamics. Here we test their application to fMRI time series, following two main motivations, namely that (a) this approach allows us to simultaneously capture and process relevant aspects of both local and global dynamics in an easy and intuitive way, and (b) this provides a suggestive bridge between time series and network theory that nicely fits the consolidating field of network neuroscience. Our application to a large open dataset reveals differences in the similarities of temporal networks (and thus in correlated dynamics) across resting-state networks, and gives indications that some differences in brain activity connected to psychiatric disorders could be picked up by this approach.
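The core mapping behind this family of methods, the natural visibility graph, is simple enough to sketch directly: two time points are connected if the straight line between them passes above every intermediate sample. A minimal univariate implementation (the multivariate fMRI extension in the paper builds one such layer per signal):

```python
# Natural visibility graph: nodes are time indices; (a, b) is an edge
# iff every intermediate sample lies strictly below the straight line
# joining (a, y_a) and (b, y_b).
def visibility_graph(series):
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                series[c] < series[a] + (series[b] - series[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

print(visibility_graph([3.0, 1.0, 2.0]))  # the dip at t=1 hides nothing
print(visibility_graph([1.0, 2.0, 3.0]))  # monotone rise blocks (0, 2)
```

Once the series is a graph, degree distributions, clustering and other network measures characterize the original dynamics, which is exactly the bridge to network neuroscience the abstract describes.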

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new definition of eigenvector centrality that relies on the Perron eigenvector of a multi-homogeneous map defined in terms of the tensor describing the network.
Abstract: Eigenvector-based centrality measures are among the most popular centrality measures in network science. The underlying idea is intuitive and the mathematical description is extremely simple in the framework of standard, mono-layer networks. Moreover, several efficient computational tools are available for their computation. Moving up in dimensionality, several efforts have been made in the past to describe an eigenvector-based centrality measure that generalizes Bonacich index to the case of multiplex networks. In this work, we propose a new definition of eigenvector centrality that relies on the Perron eigenvector of a multi-homogeneous map defined in terms of the tensor describing the network. We prove that existence and uniqueness of such centrality are guaranteed under very mild assumptions on the multiplex network. Extensive numerical studies are proposed to test the newly introduced centrality measure and to compare it to other existing eigenvector-based centralities.
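In the mono-layer case the abstract calls "extremely simple", eigenvector centrality is just the Perron eigenvector of the adjacency matrix, computable by power iteration. A plain-Python sketch on a toy star graph (our example, not the paper's multiplex construction; we iterate on A + I, which shares the Perron eigenvector with A but avoids the oscillation power iteration exhibits on bipartite graphs):

```python
import math

# Undirected toy star graph as an adjacency dict.
graph = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}

def eigenvector_centrality(g, n_iter=200):
    x = {v: 1.0 for v in g}
    for _ in range(n_iter):
        # One step of power iteration on A + I: keep the current score
        # and add the neighbours' scores, then renormalize.
        x = {v: x[v] + sum(x[u] for u in g[v]) for v in g}
        norm = math.sqrt(sum(val * val for val in x.values()))
        x = {v: val / norm for v, val in x.items()}
    return x

ec = eigenvector_centrality(graph)
print(max(ec, key=ec.get))  # the hub dominates the Perron eigenvector
```

For the star, the exact Perron eigenvector gives the hub a score sqrt(3) times each leaf's; the multiplex centrality in the paper generalizes precisely this fixed-point construction to the network tensor.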

Book
22 Nov 2017
TL;DR: This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications and puts particular emphasis on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.
Abstract: This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties. After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem of network reconstruction and introduces various advanced techniques to reliably infer the topology of a network from partial local information. Chapter 5 is devoted to the reformulation of certain “hard” combinatorial operations, such as the enumeration and unbiased sampling of graphs with given constraints, within a “softened” maximum-entropy framework. A final chapter offers various overarching remarks and take-home messages.By requiring no prior knowledge of network theory, the book targets a broad audience ranging from PhD students approaching these topics for the first time to senior researchers interested in the application of advanced network techniques to their field.

Journal ArticleDOI
TL;DR: This work presents a set of open-source tools that significantly increase computational efficiency of some well-known connectivity indices and Graph-Theory measures, even enabling whole-head real-time network analysis of brain function.
Abstract: Functional Connectivity has been demonstrated to be a key tool for unravelling how the brain balances functional segregation and integration properties while processing information. This work presents a set of open-source tools that significantly increase computational efficiency of some well-known connectivity indices and Graph-Theory measures. PLV, PLI, ImC and wPLI as Phase Synchronization measures, Mutual Information as an information theory based measure and Generalized Synchronization indices are computed much more efficiently than prior open-source available implementations. Furthermore, network theory related measures like Strength, Shortest Path Length, Clustering Coefficient and Betweenness Centrality are also implemented showing computational times up to thousands of times faster than most well-known implementations. Altogether, this work significantly expands what can be computed in feasible times, even enabling whole-head real-time network analysis of brain function.
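Of the connectivity indices listed, the Phase Locking Value (PLV) has a particularly compact definition: the magnitude of the time-averaged phasor of the phase difference between two signals. An illustrative plain-Python version (the toy sinusoid phases are ours; the toolbox's optimized implementations are what the paper benchmarks):

```python
import cmath
import math

def plv(phases_a, phases_b):
    """Phase Locking Value: |mean of exp(i * (phi_a - phi_b))|.
    1 means a perfectly constant phase relation; values near 0 mean
    the phase difference drifts uniformly."""
    n = len(phases_a)
    return abs(sum(cmath.exp(1j * (pa - pb))
                   for pa, pb in zip(phases_a, phases_b)) / n)

# Two signals at the same frequency with a constant lag are perfectly
# locked; signals at different frequencies drift apart.
t = [k * 0.01 for k in range(1000)]                   # 10 s at 100 Hz
phase1 = [2 * math.pi * 10 * tk for tk in t]          # 10 Hz
phase2 = [2 * math.pi * 10 * tk + 0.7 for tk in t]    # 10 Hz, lagged
phase3 = [2 * math.pi * 13 * tk for tk in t]          # 13 Hz

print(round(plv(phase1, phase2), 3))  # locked pair
print(round(plv(phase1, phase3), 3))  # drifting pair
```

Computing such an index for every channel pair yields the connectivity matrix on which the graph-theory measures (strength, path length, clustering, betweenness) mentioned above are then evaluated.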

Journal ArticleDOI
TL;DR: A novel measure based on local centrality with a coefficient is proposed; it ranks nodes that have the same number of four-layer neighbors and distinguishes node influence most effectively among the six tested measures.
Abstract: Influential nodes are rare in social networks, but their influence can quickly spread to most nodes in the network. Identifying influential nodes allows us to better control epidemic outbreaks, accelerate information propagation, conduct successful e-commerce advertisements, and so on. Classic methods for ranking influential nodes have limitations because they ignore the impact of the topology of neighbor nodes on a node. To solve this problem, we propose a novel measure based on local centrality with a coefficient. The proposed algorithm considers both the topological connections among neighbors and the number of neighbor nodes. First, we compute the number of neighbor nodes to identify nodes in cluster centers and those that exhibit the “bridge” property. Then, we construct a decreasing function for the local clustering coefficient of nodes, called the coefficient of local centrality, which ranks nodes that have the same number of four-layer neighbors. We perform experiments to measure node influence on both real and computer-generated networks using six measures: Degree Centrality, Betweenness Centrality, Closeness Centrality, K-Shell, Semi-local Centrality and our measure. The results show that the rankings obtained by the proposed measure are most similar to those of the benchmark Susceptible-Infected-Recovered model, thus verifying that our measure more accurately reflects the influence of nodes than do the other measures. Further, among the six tested measures, our method distinguishes node influence most effectively.

Reference EntryDOI
06 Mar 2017

Journal ArticleDOI
TL;DR: In this article, a tuning parameter δ regulates the relative impact of resources held by closer versus more distant others; when a specific δ is chosen, degree centrality and reciprocal closeness centrality emerge as two specific instances of this more general measure.

Journal ArticleDOI
TL;DR: A continuous-time quantum walk algorithm for determining vertex centrality is proposed, and it is shown that it generalizes to arbitrary graphs via a statistical analysis of randomly generated scale-free and Erdős-Rényi networks.
Abstract: Network centrality has important implications well beyond its role in physical and information transport analysis; as such, various quantum-walk-based algorithms have been proposed for measuring network vertex centrality. In this work, we propose a continuous-time quantum walk algorithm for determining vertex centrality, and show that it generalizes to arbitrary graphs via a statistical analysis of randomly generated scale-free and Erdős-Rényi networks. As a proof of concept, the algorithm is detailed on a four-vertex star graph and physically implemented via linear optics, using spatial and polarization degrees of freedom of single photons. This paper reports a successful physical demonstration of a quantum centrality algorithm.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a multi-layered network approach that enables the assessment of: (i) the different factors that influence connectivity to international markets; and (ii) the extent to which a country's connections matter for its international trade activities.

Proceedings ArticleDOI
27 Jun 2017
TL;DR: BeBeCA, a benchmark for betweenness centrality approximation methods on large graphs, is developed, together with an evaluation methodology to assess various aspects of approximation accuracy, such as average error and quality of node ranking.
Abstract: Betweenness centrality quantifies the importance of graph nodes in a variety of applications including social, biological and communication networks. Its computation is very costly for large graphs; therefore, many approximate methods have been proposed. Given the lack of a golden standard, the accuracy of most approximate methods is evaluated on tiny graphs and is not guaranteed to be representative of realistic datasets that are orders of magnitude larger. In this paper, we develop BeBeCA, a benchmark for betweenness centrality approximation methods on large graphs. Specifically: (i) We generate a golden standard by deploying a parallel implementation of Brandes algorithm using 96,000 CPU cores on a supercomputer to compute exact betweenness centrality values for several large graphs with up to 126M edges. (ii) We propose an evaluation methodology to assess various aspects of approximation accuracy, such as average error and quality of node ranking. (iii) We survey a large number of existing approximation methods and compare their performance and accuracy using our benchmark. (iv) We publicly share our benchmark, which includes the golden standard exact betweenness centrality values together with the scripts that implement our evaluation methodology; for researchers to compare their own algorithms and practitioners to select the appropriate algorithm for their application and data.
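The golden standard in this benchmark is computed with a parallel deployment of Brandes' algorithm; the sequential version is compact enough to sketch. A plain-Python implementation for unweighted, undirected graphs (the toy path graph at the end is our example):

```python
from collections import deque

def betweenness(g):
    """Brandes' exact betweenness centrality for an unweighted,
    undirected graph given as an adjacency dict; O(nm) time."""
    bc = {v: 0.0 for v in g}
    for s in g:
        # BFS from s, recording shortest-path counts and predecessors.
        sigma = {v: 0 for v in g}; sigma[s] = 1
        dist = {v: -1 for v in g}; dist[s] = 0
        preds = {v: [] for v in g}
        order, queue = [], deque([s])
        while queue:
            u = queue.popleft()
            order.append(u)
            for w in g[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    preds[w].append(u)
        # Accumulate pair dependencies in reverse BFS order.
        delta = {v: 0.0 for v in g}
        for w in reversed(order):
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Undirected: each unordered pair was counted from both endpoints.
    return {v: b / 2 for v, b in bc.items()}

path = {0: [1], 1: [0, 2], 2: [1]}
print(betweenness(path))  # the middle node carries the one (0, 2) path
```

The cost that motivates approximation is visible here: one full BFS plus accumulation pass per source node, which is exactly what the paper parallelized across 96,000 cores to obtain exact reference values.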

Journal ArticleDOI
TL;DR: This study examined one of the most devastating pandemics in human history, the fourteenth-century plague pandemic called the Black Death, and found that cities with higher values of both centrality and transitivity were more severely affected by the plague.
Abstract: Epidemics can spread across large regions, becoming pandemics, by flowing along transportation and social networks. Two network attributes, transitivity (when a node is connected to two other nodes that are also directly connected between them) and centrality (the number and intensity of connections with the other nodes in the network), are widely associated with the dynamics of transmission of pathogens. Here we investigate how network centrality and transitivity influence the vulnerability of human populations to diseases by examining one of the most devastating pandemics in human history, the fourteenth-century plague pandemic called the Black Death. We found that, after controlling for the city's spatial location and the disease arrival time, cities with higher values of both centrality and transitivity were more severely affected by the plague. A simulation study indicates that this association arises because central cities with high transitivity undergo more exogenous re-infections. Our study provides an easy method to identify hotspots in epidemic networks. Focusing our effort on those vulnerable nodes may save time and resources by improving our ability to control deadly epidemics.
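Transitivity, one of the two attributes the study correlates with plague severity, has a standard closed-form definition on any graph: the fraction of connected triples that close into triangles. A plain-Python sketch on a toy graph (ours, not the historical city network):

```python
# Global transitivity: (number of closed triples) / (number of
# connected triples), where each triangle contributes three closed
# triples, one per center node. Toy graph, illustrative only.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

def transitivity(g):
    # A closed triple centered at v is a pair of v's neighbours that
    # are themselves adjacent; "u < w" counts each pair once.
    closed = sum(1 for v in g for u in g[v] for w in g[v]
                 if u < w and w in g[u])
    # Every node of degree d is the center of d*(d-1)/2 triples.
    triples = sum(len(g[v]) * (len(g[v]) - 1) // 2 for v in g)
    return closed / triples if triples else 0.0

print(transitivity(graph))  # 3 closed triples out of 5 -> 0.6
```

High transitivity means many of a city's trade partners also traded with each other, which is exactly the redundancy that, per the simulation study, drives repeated exogenous re-infection of central cities.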

Journal ArticleDOI
TL;DR: This work proposes a new hierarchical decomposition approach to speed up the betweenness computation of complex networks; the approach features a parallel structure, which makes it very suitable for parallel computation.
Abstract: Betweenness centrality is an indicator of a node's centrality in a network. It is equal to the number of shortest paths from all vertices to all others that pass through that node. Most of real-world large networks display a hierarchical community structure, and their betweenness computation possesses rather high complexity. Here we propose a new hierarchical decomposition approach to speed up the betweenness computation of complex networks. The advantage of this new method is its effective utilization of the local structural information from the hierarchical community. The presented method can significantly speed up the betweenness calculation. This improvement is much more evident in those networks with numerous homogeneous communities. Furthermore, the proposed method features a parallel structure, which is very suitable for parallel computation. Moreover, only a small amount of additional computation is required by our method, when small changes in the network structure are restricted to some local communities. The effectiveness of the proposed method is validated via the examples of two real-world power grids and one artificial network, which demonstrates that the performance of the proposed method is superior to that of the traditional method.

Journal ArticleDOI
TL;DR: High clustering and low density seem to be tied to inefficient dissemination of expertise among Vietnamese social scientists, and consequently low scientific output.
Abstract: Background: Collaboration is common among Vietnamese scientists; however, insights into Vietnamese scientific collaborations have been scarce. On the other hand, the application of social network analysis to the study of scientific collaboration has gained much attention worldwide, and the technique can be employed to explore Vietnam's scientific community. Methods: This paper employs network theory to explore characteristics of a network of 412 Vietnamese social scientists whose papers are indexed in the Scopus database. Two basic network measures, density and the clustering coefficient, were computed, and the entire network was studied in comparison with two of its largest components. Results: The network's connections are very sparse, with a density of only 0.47%, while the clustering coefficient is very high (58.64%). This suggests an inefficient dissemination of information, knowledge, and expertise in the network. Secondly, the disparity in levels of connection among individuals indicates that the network would easily fall apart if a few highly connected nodes were removed. Finally, the two largest components of the network were found to differ from the entire network in terms of these measures, and both were led by the most productive and well-connected researchers. Conclusions: High clustering and low density seem to be tied to inefficient dissemination of expertise among Vietnamese social scientists, and consequently to low scientific output. The network is also low in robustness, yet it shows the potential of an intellectual elite composed of well-connected, productive, and socially significant individuals.
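Both measures are straightforward to reproduce. Here is a minimal sketch on a tiny hypothetical co-authorship graph (the Scopus data itself is not available here), computing density as the share of possible ties present and the clustering coefficient as the average local clustering:

```python
# Hypothetical co-authorship ties among four researchers.
edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}
nodes = sorted({v for e in edges for v in e})
adj = {v: set() for v in nodes}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

n, m = len(nodes), len(edges)
density = 2 * m / (n * (n - 1))   # fraction of possible ties that exist

def local_cc(v):
    """Fraction of v's neighbour pairs that are themselves connected."""
    nbrs = list(adj[v]); k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

clustering = sum(local_cc(v) for v in nodes) / n   # average clustering
print(f"density={density:.2%}, clustering={clustering:.2%}")
```

The sparse/clustered combination the paper reports corresponds to a low `density` together with a high `clustering` on the real 412-node network.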

Journal ArticleDOI
TL;DR: This work proposes the framework BADIOS, which manipulates the graph by compressing it and splitting it into pieces so that the centrality computation can be handled independently for each piece.
Abstract: Betweenness and closeness are widely used metrics in many network analysis applications. Yet, they are expensive to compute. For that reason, making the betweenness and closeness centrality computations faster is an important and well-studied problem. In this work, we propose the framework BADIOS, which manipulates the graph by compressing it and splitting it into pieces so that the centrality computation can be handled independently for each piece. Experimental results show that the proposed techniques can greatly reduce the centrality computation time for various types and sizes of networks. In particular, they reduce the betweenness centrality computation time of a graph with 4.6 million edges from more than 5 days to less than 16 hours. For the same graph, the closeness computation time is decreased from more than 3 days to 6 hours (a 12.7x speedup).
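One representative preprocessing idea in this family, sketched here as a hedged illustration rather than the actual BADIOS implementation, is stripping degree-1 vertices before the expensive all-pairs work:

```python
def strip_degree_one(adj):
    """Iteratively remove degree-1 vertices from an undirected graph.

    A degree-1 vertex lies on no shortest path between two other
    vertices, so the expensive all-pairs computation can run on the
    smaller core (a real framework also does the bookkeeping needed
    to recover the removed vertices' contributions).
    """
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    while True:
        leaves = [v for v, ns in adj.items() if len(ns) == 1]
        if not leaves:
            return adj
        for v in leaves:
            if v in adj and len(adj[v]) == 1:
                (u,) = adj[v]
                adj[u].discard(v)
                del adj[v]

# Triangle x-y-z with a pendant chain x-p1-p2: the chain gets stripped.
G = {"x": {"y", "z", "p1"}, "y": {"x", "z"}, "z": {"x", "y"},
     "p1": {"x", "p2"}, "p2": {"p1"}}
core = strip_degree_one(G)
print(sorted(core))  # ['x', 'y', 'z']
```

Since betweenness costs roughly O(nm), even a modest reduction of the core compounds into the day-scale savings the paper reports.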

Journal ArticleDOI
TL;DR: A semi-local centrality index is proposed to incorporate the shortest distance, the number of shortest paths and the reciprocal of average degree simultaneously, and it is verified that the proposed centrality can outperform well-known centralities, such as degree centrality, betweenness centrality, closeness centrality, k-shell centrality, and nonbacktracking centrality.
Abstract: The problem of identifying influential nodes in complex networks has attracted much attention owing to its wide applications, including how to maximize information diffusion, boost product promotion in a viral marketing campaign, prevent a large-scale epidemic, and so on. From a spreading viewpoint, the probability of one node propagating its information to another is closely related to the shortest distance between them, the number of shortest paths, and the transmission rate. However, it is difficult to obtain the values of transmission rates for different cases. To overcome this difficulty, we use the reciprocal of the average degree to approximate the transmission rate. A semi-local centrality index is then proposed to incorporate the shortest distance, the number of shortest paths, and the reciprocal of the average degree simultaneously. By running simulations on real as well as synthetic networks, we verify that our proposed centrality can outperform well-known centralities, such as degree centrality, betweenness centrality, closeness centrality, k-shell centrality, and nonbacktracking centrality. In particular, our findings indicate that the advantage of our method is most significant when the transmission rate nears the epidemic threshold, which is the most meaningful region for the identification of influential nodes.
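The abstract does not give the exact index, so the following is only one plausible formalisation of the idea (the function name, the toy graph, and the precise formula are assumptions made here): a node u at distance d from v, reachable via sigma shortest paths, contributes sigma * beta**d to v's score, with the transmission rate beta approximated by the reciprocal of the average degree:

```python
from collections import deque

def spread_score(adj, beta=None):
    """Hedged sketch of a semi-local spreading centrality.

    Combines shortest distance, number of shortest paths, and a
    transmission rate approximated by 1 / average degree.
    """
    if beta is None:
        beta = len(adj) / sum(len(ns) for ns in adj.values())
    scores = {}
    for v in adj:
        dist, sigma = {v: 0}, {v: 1}
        q = deque([v])
        while q:                       # BFS counting shortest paths
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1; sigma[y] = 0; q.append(y)
                if dist[y] == dist[x] + 1:
                    sigma[y] += sigma[x]
        scores[v] = sum(sigma[u] * beta ** dist[u] for u in dist if u != v)
    return scores

# Star graph: the hub should score highest.
star = {"c": {"a", "b", "d"}, "a": {"c"}, "b": {"c"}, "d": {"c"}}
print(spread_score(star))
```

Because contributions decay as beta**d, distant nodes matter little, which is what makes such an index "semi-local" rather than fully global.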

Journal ArticleDOI
TL;DR: This paper considers the problem of identifying the most influential (or central) group of nodes (of some predefined size) in a network that has the largest value of betweenness centrality or one of its variants, for example, the length-scaled or the bounded-distance betweennessCentrality concepts.
Abstract: In this paper we consider the problem of identifying the most influential (or central) group of nodes of some predefined size in a network. Such a group has the largest value of betweenness centrality or one of its variants, for example, the length-scaled or the bounded-distance betweenness centralities. We demonstrate that this problem can be modelled as a mixed integer program (MIP) that can be solved for reasonably sized network instances using off-the-shelf MIP solvers. We also discuss interesting relations between the group betweenness and the bounded-distance betweenness centrality concepts. In particular, we exploit these relations in an algorithmic scheme to identify approximate solutions for the original problem of identifying the most central group of nodes. Furthermore, we generalize our approach to identify not only the most central groups of nodes, but also central groups of graph elements that consist of either nodes or edges exclusively, or of their combination according to some pre-specified criteria. If necessary, additional cohesiveness properties can also be enforced, for example, that the targeted group should form a clique or a κ-club. Finally, we conduct extensive computational experiments with different types of real-life and synthetic network instances to show the effectiveness and flexibility of the proposed framework. Even more importantly, our experiments reveal some interesting insights into the properties of influential groups of graph elements modelled using the maximum betweenness centrality concept or one of its variations.
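For small instances, the quantity being maximized can be checked by brute force. The sketch below is an exhaustive baseline written for illustration, not the paper's MIP formulation: group betweenness is taken as the summed fraction of shortest paths between pairs outside the group that pass through at least one group member.

```python
from collections import deque
from itertools import combinations

def sp_counts(adj, banned, s):
    """BFS from s with `banned` vertices removed; returns (dist, sigma)."""
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y in banned:
                continue
            if y not in dist:
                dist[y] = dist[x] + 1; sigma[y] = 0; q.append(y)
            if dist[y] == dist[x] + 1:
                sigma[y] += sigma[x]
    return dist, sigma

def group_betweenness(adj, group):
    """Sum over pairs (s, t) outside `group` of the fraction of shortest
    s-t paths passing through at least one group member."""
    group = set(group)
    outside = [v for v in adj if v not in group]
    total = 0.0
    for s, t in combinations(outside, 2):
        d_all, s_all = sp_counts(adj, set(), s)
        if t not in d_all:
            continue                      # pair not connected at all
        d_off, s_off = sp_counts(adj, group, s)
        if t not in d_off or d_off[t] > d_all[t]:
            total += 1.0                  # every shortest path hits the group
        else:
            total += 1.0 - s_off[t] / s_all[t]
    return total

def best_group(adj, k):
    """Exhaustive baseline (exponential in k); the paper instead solves
    an equivalent MIP with off-the-shelf solvers."""
    return max(combinations(sorted(adj), k),
               key=lambda g: group_betweenness(adj, g))

path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"},
        "d": {"c", "e"}, "e": {"d"}}
print(best_group(path, 1))  # ('c',)
```

The combinatorial explosion of `combinations(nodes, k)` is precisely why an exact MIP formulation is valuable beyond toy graphs.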

Posted ContentDOI
26 Apr 2017-bioRxiv
TL;DR: Synthesizing recent developments in the network ecology literature, it is proposed that applying these solutions will aid in synthesizing ecological subdisciplines and allied fields by improving the accessibility of network methods and models.
Abstract: Network ecology provides a systems basis for approaching ecological questions, such as factors that influence biological diversity, the role of particular species or particular traits in structuring ecosystems, and long-term ecological dynamics (e.g. stability). Network theory has enabled ecologists to quantify not just the degree but also the architecture of ecological complexity. Synthesizing recent reviews and developments in the network ecology literature, we identify areas where efforts could have a major impact on the field. We point toward the need for: integrating network metrics and their terminology across sub-disciplines; benchmarking new network algorithms and models to increase mechanistic understanding; and improving tools for sharing ecological network research, in particular "model" data provenance, to increase the reproducibility of network models and analyses. Given the impact that network theory and methods have had on the field of ecology, advances in these areas are likely to have ramifications across ecology and allied fields.