
Showing papers on "Betweenness centrality published in 2015"


Journal ArticleDOI
28 Aug 2015-Energies
TL;DR: In this article, the authors review the most relevant works that have investigated robustness in power grids using complex network (CN) concepts, covering both purely topological metrics and hybrid approaches that introduce electrical metrics such as electrical betweenness and net-ability, and survey strategies to improve robustness such as intentional islanding and restricted link addition.
Abstract: This paper reviews the most relevant works that have investigated robustness in power grids using Complex Networks (CN) concepts. In this broad field there are two different approaches. The first one is based solely on topological concepts, and uses metrics such as mean path length, clustering coefficient, efficiency and betweenness centrality, among many others. The second, hybrid approach consists of introducing (into the CN framework) some concepts from Electrical Engineering (EE) in an effort to enhance the topological approach, and uses novel, more efficient electrical metrics such as electrical betweenness, net-ability, and others. There is however a controversy about whether these approaches are able to provide insights into all aspects of real power grids. The CN community argues that the topological approach does not aim to focus on the detailed operation, but to discover the unexpected emergence of collective behavior, while part of the EE community asserts that this leads to an excessive simplification. Beyond this open debate, there seems to be no predominant structure (scale-free, small-world) in high-voltage transmission power grids, the vast majority of power grids studied so far. Most of them have in common that they are vulnerable to targeted attacks on the most connected nodes and robust to random failure. In this respect there are only a few works that propose strategies to improve robustness such as intentional islanding, restricted link addition, microgrids and smart grids, for which novel studies suggest that small-world networks seem to be the best topology.

208 citations


Journal ArticleDOI
01 Jun 2015-Brain
TL;DR: In this paper, structural connectivity matrices were calculated from skeletonized diffusion tensor imaging data, and a comprehensive range of graph metrics was derived from them for 52 patients with traumatic brain injury, 21 of whom had microbleed evidence of traumatic axonal injury, and 25 age-matched controls.
Abstract: Traumatic brain injury affects brain connectivity by producing traumatic axonal injury. This disrupts the function of large-scale networks that support cognition. The best way to describe this relationship is unclear, but one elegant approach is to view networks as graphs. Brain regions become nodes in the graph, and white matter tracts the connections. The overall effect of an injury can then be estimated by calculating graph metrics of network structure and function. Here we test which graph metrics best predict the presence of traumatic axonal injury, as well as which are most highly associated with cognitive impairment. A comprehensive range of graph metrics was calculated from structural connectivity measures for 52 patients with traumatic brain injury, 21 of whom had microbleed evidence of traumatic axonal injury, and 25 age-matched controls. White matter connections between 165 grey matter brain regions were defined using tractography, and structural connectivity matrices were calculated from skeletonized diffusion tensor imaging data. This technique estimates injury at the centre of the tract, but is insensitive to damage at tract edges. Graph metrics were calculated from the resulting connectivity matrices, and machine-learning techniques were used to select the metrics that best predicted the presence of traumatic brain injury. In addition, we used regularization and variable selection via the elastic net to predict patient behaviour on tests of information processing speed, executive function and associative memory. Support vector machines trained with graph metrics of white matter connectivity matrices from the microbleed group were able to identify patients with a history of traumatic brain injury with 93.4% accuracy, a result robust to different ways of sampling the data. Graph metrics were significantly associated with cognitive performance: information processing speed (R2 = 0.64), executive function (R2 = 0.56) and associative memory (R2 = 0.25). These results were then replicated in a separate group of patients without microbleeds. The most influential graph metrics were betweenness centrality and eigenvector centrality, which provide measures of the extent to which a given brain region connects other regions in the network. Reductions in betweenness centrality and eigenvector centrality were particularly evident within hub regions including the cingulate cortex and caudate. Our results demonstrate that betweenness centrality and eigenvector centrality are reduced within network hubs, due to the impact of traumatic axonal injury on network connections. The dominance of betweenness centrality and eigenvector centrality suggests that cognitive impairment after traumatic brain injury results from the disconnection of network hubs by traumatic axonal injury. Abbreviations: DTI = diffusion tensor imaging; SVM = support vector machine; TBI = traumatic brain injury; TBSS = tract-based spatial statistics.
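For readers who want to see the shape of this classification pipeline, here is a minimal sketch in Python (networkx and scikit-learn), using synthetic connectivity matrices in place of the DTI-derived ones; the threshold, the linear kernel, and the way injury is simulated are illustrative assumptions, not the paper's actual preprocessing.

```python
# Sketch: classify subjects from graph metrics of connectivity matrices.
# Synthetic matrices stand in for DTI-derived data; the metric choice
# (betweenness, eigenvector centrality) follows the abstract, while the
# threshold, kernel, and injury model are assumptions for illustration.
import numpy as np
import networkx as nx
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synthetic_subject(injured, n=30):
    conn = rng.random((n, n))
    conn = (conn + conn.T) / 2
    np.fill_diagonal(conn, 0.0)
    if injured:
        conn *= 0.8   # globally weakened connectivity stands in for axonal injury
    return conn

def graph_features(conn, threshold=0.3):
    """Mean betweenness and eigenvector centrality of a thresholded graph."""
    G = nx.from_numpy_array((conn > threshold).astype(float))
    bc = nx.betweenness_centrality(G)
    ec = nx.eigenvector_centrality_numpy(G)
    return [np.mean(list(bc.values())), np.mean(list(ec.values()))]

X = np.array([graph_features(synthetic_subject(i < 20)) for i in range(40)])
y = np.array([1] * 20 + [0] * 20)
print("cross-validated accuracy:", cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())
```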

175 citations


Book ChapterDOI
01 Jan 2015
TL;DR: It is shown that neural networks can be effective in learning and estimating the ordering of vertices in a social network based on centrality measures, requiring far less computational effort and proving to be faster than early termination of the power iteration method that can be used for computing these measures.
Abstract: Centrality measures are extremely important in the analysis of social networks, with applications such as identification of the most influential individuals for effective target marketing. Eigenvector centrality and PageRank are among the most useful centrality measures, but computing these measures can be prohibitively expensive for large social networks. This paper shows that neural networks can be effective in learning and estimating the ordering of vertices in a social network based on these measures, requiring far less computational effort, and proving to be faster than early termination of the power iteration method that can be used for computing the centrality measures. Two features describing the size of the social network and two vertex-specific attributes sufficed as inputs to the neural networks, requiring very few hidden neurons.
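A minimal sketch of the idea, assuming networkx and scikit-learn; the feature set below (degree and mean neighbour degree) is a stand-in for the paper's two network-size features and two vertex-specific attributes, which are not specified here.

```python
# Sketch: learn to rank vertices by PageRank from cheap local features,
# then check the learned ordering on an unseen graph. Feature choice and
# network model are assumptions for illustration.
import numpy as np
import networkx as nx
from sklearn.neural_network import MLPRegressor
from scipy.stats import spearmanr

def features(G):
    nbr_deg = nx.average_neighbor_degree(G)
    return np.array([[G.degree(v), nbr_deg[v]] for v in G])

train = nx.barabasi_albert_graph(2000, 3, seed=1)
test = nx.barabasi_albert_graph(2000, 3, seed=2)

# A small network, in line with the "very few hidden neurons" observation.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(features(train), np.array(list(nx.pagerank(train).values())))

pred = model.predict(features(test))
true = np.array(list(nx.pagerank(test).values()))
print("rank correlation on unseen graph:", spearmanr(pred, true).correlation)
```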

137 citations


Proceedings ArticleDOI
04 Jan 2015
TL;DR: The complexity of the mentioned centrality problems is related to two classical problems for which no truly subcubic algorithm is known, namely All Pairs Shortest Paths (APSP) and Diameter, and it is shown that Radius, Median and Betweenness Centrality are equivalent under subcubic reductions to APSP, and Reach Centrality is equivalent to Diameter under subcubic reductions.
Abstract: Measuring the importance of a node in a network is a major goal in the analysis of social networks, biological systems, transportation networks etc. Different centrality measures have been proposed to capture the notion of node importance. For example, the center of a graph is a node that minimizes the maximum distance to any other node (the latter distance is the radius of the graph). The median of a graph is a node that minimizes the sum of the distances to all other nodes. Informally, the betweenness centrality of a node w measures the fraction of shortest paths that have w as an intermediate node. Finally, the reach centrality of a node w is the smallest distance r such that any s-t shortest path passing through w has either s or t in the ball of radius r around w. The fastest known algorithms to compute the center and the median of a graph, and to compute the betweenness or reach centrality even of a single node take roughly cubic time in the number n of nodes in the input graph. It is open whether these problems admit truly subcubic algorithms, i.e. algorithms with running time O(n^(3-δ)) for some constant δ > 0. We relate the complexity of the mentioned centrality problems to two classical problems for which no truly subcubic algorithm is known, namely All Pairs Shortest Paths (APSP) and Diameter. We show that Radius, Median and Betweenness Centrality are equivalent under subcubic reductions to APSP, i.e. that a truly subcubic algorithm for any of these problems implies a truly subcubic algorithm for all of them. We then show that Reach Centrality is equivalent to Diameter under subcubic reductions. The same holds for the problem of approximating Betweenness Centrality within any constant factor. Thus the latter two centrality problems could potentially be solved in truly subcubic time, even if APSP requires essentially cubic time.
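In standard notation, writing σ_st for the number of shortest s-t paths, σ_st(w) for those passing through w, and d(·,·) for shortest-path distance, the two measures discussed informally above can be stated as:

```latex
% Betweenness centrality of a node w (unnormalized form):
C_B(w) \;=\; \sum_{s \neq w \neq t} \frac{\sigma_{st}(w)}{\sigma_{st}}

% Reach centrality of w, per the definition quoted above: the largest
% "reach" forced by any shortest path passing through w.
RC(w) \;=\; \max_{s,t \,:\, w \text{ on a shortest } s\text{-}t \text{ path}} \; \min\{\, d(s,w),\; d(w,t) \,\}
```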

130 citations


Journal ArticleDOI
TL;DR: Betweenness offers a more granular and objective means of measuring the street network than categorical classifications previously used, and its meaning links more directly to theory, suggesting a higher risk of burglary for streets with more potential usage.
Abstract: To test the hypothesis that the spatial distribution of residential burglary is shaped by the configuration of the street network, as predicted by, for example, crime pattern theory. In particular, the study examines whether burglary risk is higher on street segments with higher usage potential. Residential burglary data for Birmingham (UK) are examined at the street segment level using a hierarchical linear model. Estimates of the usage of street segments are derived from the graph theoretical metric of betweenness, which measures how frequently segments feature in the shortest paths (those most likely to be used) through the network. Several variants of betweenness are considered. The geometry of street segments is also incorporated—via a measure of their linearity—as are several socio-demographic factors. As anticipated by theory, the measure of betweenness was found to be a highly-significant predictor of the burglary victimization count at the street segment level for all but one of the variants considered. The non-significant result was found for the most localized measure of betweenness considered. More linear streets were generally found to be at lower risk of victimization. Betweenness offers a more granular and objective means of measuring the street network than categorical classifications previously used, and its meaning links more directly to theory. The results provide support for crime pattern theory, suggesting a higher risk of burglary for streets with more potential usage. The apparent negative effect of linearity suggests the need for further research into the visual component of target choice, and the role of guardianship.

126 citations


Journal ArticleDOI
TL;DR: The network and station vulnerabilities of the urban rail transit system were analyzed based on complex network and graph theories; a vulnerability evaluation model was proposed and validated by a case study of Shanghai Metro with a full-scale network and real-world traffic data.
Abstract: Rail transit is developing rapidly in major cities of China and has become a key component of urban transport. Nevertheless, the security and reliability in operation are significant issues that cannot be neglected. In this paper, the network and station vulnerabilities of the urban rail transit system were analyzed based on complex network and graph theories. A vulnerability evaluation model was proposed, accounting for metro interchange and passenger flow, and further validated by a case study of Shanghai Metro with a full-scale network and real-world traffic data. It is identified that the urban rail transit network is rather robust to random attacks, but vulnerable to attacks targeting the largest-degree and highest-betweenness nodes. Metro stations with a large node degree are more important in maintaining the network size, while stations with a high node betweenness are critical to network efficiency and origin-destination (OD) connectivity. The most crucial stations in maintaining network serviceability do not necessarily have the highest passenger throughput or the largest structural connectivity. A comprehensive evaluation model as proposed is therefore essential to assess station vulnerability, so that attention can be placed on appropriate nodes within the metro system. The findings of this research are of both theoretical and practical significance for urban rail transit network design and performance evaluation.

112 citations


Journal Article
TL;DR: This research empirically examines the social organization of a hacker community by analyzing one network called Shadowcrew, which exhibits the characteristics of deviant team organization structure.
Abstract: Computer crime hackers have been identified as a primary threat to computer systems, users, and organizations. Much extant research on hackers is conducted from a technical perspective and at an individual level of analysis. This research empirically examines the social organization of a hacker community by analyzing one network called Shadowcrew. The social network structure of this infamous hacker group is established using social networking methods for text mining and network analysis. Analysis of relationships among hackers shows a decentralized network structure. Leaders are identified using four actor centrality measures (degree, betweenness, closeness, and eigenvector) and found to be more involved in thirteen smaller sub-groups. Based on our social network analysis, Shadowcrew exhibits the characteristics of deviant team organization structure.

96 citations


Journal ArticleDOI
TL;DR: The results show that functional networks in focal epilepsy are altered in a way that leaves the nodes of the network more isolated; it remains possible that this is part of the epileptogenic process or an effect of medications.

93 citations


Journal ArticleDOI
TL;DR: In this paper, the correlation between centrality metrics was studied in terms of their Pearson correlation coefficient and their similarity in ranking of nodes. The effect of inflexible contrarians selected based on different centrality measures in helping one opinion to compete with another in the inflexible contrarian opinion (ICO) model was also investigated.
Abstract: In recent decades, a number of centrality metrics describing network properties of nodes have been proposed to rank the importance of nodes. In order to understand the correlations between centrality metrics and to approximate a high-complexity centrality metric by a strongly correlated low-complexity metric, we first study the correlation between centrality metrics in terms of their Pearson correlation coefficient and their similarity in ranking of nodes. In addition to considering the widely used centrality metrics, we introduce a new centrality measure, the degree mass. The mth-order degree mass of a node is the sum of the weighted degree of the node and its neighbors no further than m hops away. We find that the betweenness, the closeness, and the components of the principal eigenvector of the adjacency matrix are strongly correlated with the degree, the 1st-order degree mass and the 2nd-order degree mass, respectively, in both network models and real-world networks. We then theoretically prove that the Pearson correlation coefficient between the principal eigenvector and the 2nd-order degree mass is larger than that between the principal eigenvector and a lower order degree mass. Finally, we investigate the effect of the inflexible contrarians selected based on different centrality metrics in helping one opinion to compete with another in the inflexible contrarian opinion (ICO) model. Interestingly, we find that selecting the inflexible contrarians based on the leverage, the betweenness, or the degree is more effective in opinion-competition than using other centrality metrics in all types of networks. This observation is supported by our previous observations, i.e., that there is a strong linear correlation between the degree and the betweenness, as well as a high centrality similarity between the leverage and the degree.
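The degree mass defined above is straightforward to compute directly from its definition; a short sketch with networkx follows (the graph model and the ranking comparison are illustrative choices, not the paper's experimental setup).

```python
# Sketch of the m-th order degree mass: the sum of the (weighted) degrees
# of a node and of all its neighbours no further than m hops away.
import networkx as nx

def degree_mass(G, v, m, weight=None):
    # Nodes within m hops of v, including v itself (distance 0).
    ball = nx.single_source_shortest_path_length(G, v, cutoff=m)
    return sum(G.degree(u, weight=weight) for u in ball)

G = nx.erdos_renyi_graph(500, 0.02, seed=0)
dm1 = {v: degree_mass(G, v, 1) for v in G}

# The paper reports that closeness is strongly correlated with the
# 1st-order degree mass; compare the top-10 rankings as a quick check.
cc = nx.closeness_centrality(G)
top_dm = sorted(G, key=dm1.get, reverse=True)[:10]
top_cc = sorted(G, key=cc.get, reverse=True)[:10]
print(len(set(top_dm) & set(top_cc)), "of the top-10 nodes coincide")
```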

90 citations


Journal ArticleDOI
TL;DR: The structure of co-authorship networks in three different fields in Spain over a three-year period (2006–2008) is analyzed and the relationship between the research performance of scientists and their position in co- authorship networks is explored.

90 citations


Journal ArticleDOI
TL;DR: By defining and computing the value of each protein's topology potential, a more precise ranking of protein importance in the PPI network can be obtained.
Abstract: Essential proteins are indispensable for cellular life. It is of great significance to identify essential proteins that can help us understand the minimal requirements for cellular life, and it is also very important for drug design. However, identification of essential proteins based on experimental approaches is typically time-consuming and expensive. With the development of high-throughput technology in the post-genomic era, more and more protein-protein interaction data can be obtained, which makes it possible to study essential proteins at the network level. There have been a series of computational approaches proposed for predicting essential proteins based on network topologies. Most of these topology-based essential protein discovery methods use network centralities. In this paper, we investigate the essential proteins' topological characters from a completely new perspective. To our knowledge, this is the first time that topology potential has been used to identify essential proteins from a protein-protein interaction (PPI) network. The basic idea is that each protein in the network can be viewed as a material particle which creates a potential field around itself, and the interaction of all proteins forms a topological field over the network. By defining and computing the value of each protein's topology potential, we can obtain a more precise ranking which reflects the importance of proteins in the PPI network. The experimental results show that topology potential-based methods TP and TP-NC outperform traditional topology measures: degree centrality (DC), betweenness centrality (BC), closeness centrality (CC), subgraph centrality (SC), eigenvector centrality (EC), information centrality (IC), and network centrality (NC) for predicting essential proteins. In addition, these centrality measures show improved performance for identifying essential proteins in biological networks when combined with topology potential.
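The abstract does not spell out the field function; the form commonly used in the topology-potential literature, and assumed in the sketch below, is a Gaussian kernel over shortest-path distance with an influence factor sigma.

```python
# Sketch of a topology-potential ranking. The Gaussian field over
# shortest-path distance is an assumption (the usual choice in the
# topology-potential literature); the cutoff limits each node's field
# to nearby nodes, where the kernel is non-negligible.
import math
import networkx as nx

def topology_potential(G, sigma=1.0, cutoff=3):
    pot = {}
    for v in G:
        dists = nx.single_source_shortest_path_length(G, v, cutoff=cutoff)
        pot[v] = sum(math.exp(-(d / sigma) ** 2) for d in dists.values())
    return pot

G = nx.karate_club_graph()
pot = topology_potential(G)
print(sorted(G, key=pot.get, reverse=True)[:5])  # top-ranked candidate nodes
```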

Journal ArticleDOI
TL;DR: In this article, a cross-sectional analysis of political brokerage and political entrepreneurship is presented, where the authors identify actors and graph their relations of influence within a specific policy event; then they select the most central actors; and compare their rank in a series of statistics that capture different aspects of their network advantage.
Abstract: Policy brokers and policy entrepreneurs are assumed to have a decisive impact on policy outcomes. Their access to social and political resources is contingent on their influence on other agents. In social network analysis (SNA), entrepreneurs are often closely associated with brokers, because both are agents presumed to benefit from bridging structural holes, for example by gaining advantage through occupying a strategic position in relational space. Our aim here is twofold. First, to conceptually and operationally differentiate policy brokers from policy entrepreneurs premised on assumptions in the policy-process literature; and second, via SNA, to use the output of core algorithms in a cross-sectional analysis of political brokerage and political entrepreneurship. We attempt to simplify the use of graph algebra in answering questions relevant to policy analysis by placing each algorithm within its theoretical context. In the methodology employed, we first identify actors and graph their relations of influence within a specific policy event; then we select the most central actors and compare their rank in a series of statistics that capture different aspects of their network advantage. We examine betweenness centrality, positive and negative Bonacich power, Burt’s effective size and constraint, and honest brokerage as paradigmatic measures. We employ two case studies to demonstrate the advantages and limitations of each algorithm for differentiating between brokers and entrepreneurs: one on Swiss climate policy and one on EU competition and transport policy.

Journal ArticleDOI
TL;DR: In this article, the authors contribute to the literature of supply chain innovations by developing and testing theoretically derived hypotheses regarding the effect of network structure on innovation output and distribution, as measured by the aggregate count and variance in the distribution of patents of the ego network in which a firm exists.
Abstract: A great deal of research on innovation implicitly relies upon the network in which a firm is embedded to explain its innovative capabilities. Interestingly, however, most research examines innovation at the firm level, rather than at the network level. Thus, there is a significant gap in the literature regarding the effects of network structure on innovation within a firm's network. In this research, we contribute to the literature of supply chain innovations by developing and testing theoretically derived hypotheses regarding the effect of network structure on innovation output and distribution, as measured by the aggregate count and variance in the distribution of patents of the ego network in which a firm exists. Utilizing a manufacturing joint venture network dataset, we identify effects of various ego network constructs such as betweenness, density, brokerage, and weakness on ego network innovation. We find support for the idea that innovation in a supply chain is highly dependent upon the network structure of the interfirm relationships. Thus, it is not just what you know or how well you individually innovate, but also how well the firm can leverage its supply network connections that engender superior innovation outcomes.

Journal ArticleDOI
TL;DR: EDENetworks aims to fill this void by providing an easy‐to‐use interface for the whole analysis pipeline of ecological and evolutionary networks starting from matrices of species distributions, genotypes, bacterial OTUs or populations characterized genetically.
Abstract: The recent application of graph-based network theory analysis to biogeography, community ecology and population genetics has created a need for user-friendly software, which would allow a wider accessibility to and adaptation of these methods. EDENetworks aims to fill this void by providing an easy-to-use interface for the whole analysis pipeline of ecological and evolutionary networks starting from matrices of species distributions, genotypes, bacterial OTUs or populations characterized genetically. The user can choose between several different ecological distance metrics, such as Bray-Curtis or Sorensen distance, or population genetic metrics such as FST or Goldstein distances, to turn the raw data into a distance/dissimilarity matrix. This matrix is then transformed into a network by manual or automatic thresholding based on percolation theory or by building the minimum spanning tree. The networks can be visualized along with auxiliary data and analysed with various metrics such as degree, clustering coefficient, assortativity and betweenness centrality. The statistical significance of the results can be estimated either by resampling the original biological data or by null models based on permutations of the data.
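A rough sketch of the pipeline EDENetworks automates, using scipy and networkx; the fixed distance threshold below is an assumption, whereas the tool itself selects it automatically via percolation analysis.

```python
# Sketch: raw abundance data -> Bray-Curtis distance matrix ->
# minimum spanning tree / thresholded network -> network metrics.
# The synthetic data and the 0.35 threshold are assumptions.
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
abundance = rng.poisson(5, size=(20, 50))          # 20 sites x 50 species
D = squareform(pdist(abundance, metric="braycurtis"))

G = nx.from_numpy_array(D)                         # complete weighted graph
mst = nx.minimum_spanning_tree(G, weight="weight")

thresholded = nx.Graph((u, v) for u, v in G.edges if D[u, v] < 0.35)
print("clustering of thresholded net:", nx.average_clustering(thresholded))
print("most central sites in MST:",
      sorted(nx.betweenness_centrality(mst).items(), key=lambda kv: -kv[1])[:3])
```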

Journal ArticleDOI
TL;DR: In this article, an empirical analysis of the coupling between the street network and the subway for the two large metropolitan areas of London and New York is presented, showing that the introduction of underground networks operate as a decentralizing force creating congestion in places located at the end of underground lines.
Abstract: Most large cities are spanned by more than one transportation system. These different modes of transport have usually been studied separately: it is however important to understand the impact on urban systems of coupling different modes, and we report in this paper an empirical analysis of the coupling between the street network and the subway for the two large metropolitan areas of London and New York. We observe a similar behaviour for network quantities related to quickest paths, suggesting the existence of generic mechanisms operating beyond the local peculiarities of the specific cities studied. An analysis of the betweenness centrality distribution shows that the introduction of underground networks operates as a decentralizing force, creating congestion in places located at the ends of underground lines. Also, we find that increasing the speed of subways is not always beneficial and may lead to unwanted uneven spatial distributions of accessibility. In fact, for London—but not for New York—there is an optimal subway speed in terms of global congestion. These results show that it is crucial to consider the full, multimodal, multilayer network aspects of transportation systems in order to understand the behaviour of cities and to avoid possible negative side-effects of urban planning decisions.
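A toy illustration of this kind of coupling, with travel time as the edge weight; the grid size, the speeds, and the single straight line are assumptions chosen only to make the effect inspectable.

```python
# Toy version of the street/subway coupling: a slow street grid with one
# fast corridor overlaid, using travel time as edge weight. One can then
# inspect how betweenness redistributes along and around the fast line.
import networkx as nx

street_speed, subway_speed = 1.0, 5.0
G = nx.grid_2d_graph(15, 15)                       # the street grid
for u, v in G.edges:
    G[u][v]["time"] = 1.0 / street_speed

line = [(i, 7) for i in range(2, 13)]              # one east-west fast line
for a, b in zip(line, line[1:]):
    G.add_edge(a, b, time=1.0 / subway_speed)      # overlay fast edges on the grid

bc = nx.betweenness_centrality(G, weight="time")
print("betweenness at line ends:", [round(bc[e], 3) for e in (line[0], line[-1])])
print("grid-wide average:", round(sum(bc.values()) / len(bc), 3))
```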

Journal ArticleDOI
TL;DR: Results suggest that if a network is more highly centered on major roads, there will be fewer non-motorist-involved crashes, and that a network with a greater average number of intersections on the shortest path connecting each pair of roads tends to experience fewer crashes involving pedestrians and bicyclists.

Journal ArticleDOI
TL;DR: It is proposed that an idealized sampling network should sample high-betweenness stations, small-membership communities which are by definition rare or undersampled relative to other communities, and index stations having large numbers of intracommunity links, while retaining some degree of redundancy to maintain network robustness.
Abstract: Network theory is applied to an array of streamflow gauges located in the Coast Mountains of British Columbia (BC) and Yukon, Canada. The goal of the analysis is to assess whether insights from this branch of mathematical graph theory can be meaningfully applied to hydrometric data, and, more specifically, whether it may help guide decisions concerning stream gauge placement so that the full complexity of the regional hydrology is efficiently captured. The streamflow data, when represented as a complex network, have a global clustering coefficient and average shortest path length consistent with small-world networks, which are a class of stable and efficient networks common in nature, but the observed degree distribution did not clearly indicate a scale-free network. Stability helps ensure that the network is robust to the loss of nodes; in the context of a streamflow network, stability is interpreted as insensitivity to station removal at random. Community structure is also evident in the streamflow network. A network theoretic community detection algorithm identified separate communities, each of which appears to be defined by the combination of its median seasonal flow regime (pluvial, nival, hybrid, or glacial, which in this region in turn mainly reflects basin elevation) and geographic proximity to other communities (reflecting shared or different daily meteorological forcing). Furthermore, betweenness analyses suggest a handful of key stations which serve as bridges between communities and might be highly valued. We propose that an idealized sampling network should sample high-betweenness stations, small-membership communities which are by definition rare or undersampled relative to other communities, and index stations having large numbers of intracommunity links, while retaining some degree of redundancy to maintain network robustness.
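A sketch of how such a gauge network can be built from flow-series correlations; the synthetic series and the correlation threshold are assumptions for illustration, and real data would be needed to see genuine bridge stations.

```python
# Sketch: build a gauge network from streamflow correlations, then look
# at clustering, path length, and high-betweenness "bridge" candidates.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_gauges, n_days = 40, 365
regime = rng.integers(0, 3, n_gauges)          # three synthetic flow regimes
base = rng.normal(size=(3, n_days))
flows = base[regime] + 0.5 * rng.normal(size=(n_gauges, n_days))

C = np.corrcoef(flows)
G = nx.Graph((i, j) for i in range(n_gauges)
             for j in range(i + 1, n_gauges) if C[i, j] > 0.7)

print("global clustering:", round(nx.average_clustering(G), 3))
if nx.is_connected(G):
    print("average shortest path:", round(nx.average_shortest_path_length(G), 3))
top = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])[:3]
print("highest-betweenness gauges (candidate bridges):", top)
```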

Journal ArticleDOI
TL;DR: Overall, the findings suggest that changes in functional hubs are associated with schizophrenia, reflecting a variation of the underlying functional network and neuronal communications.

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed IDB and RDB attack strategies are more efficient than the traditional ID and RD strategies, and that the WS small-world network is more sensitive to the proposed strategies.
Abstract: The invulnerability of complex networks is an important issue in that the behavior of scale-free networks differs from that of exponential networks. According to the structural characteristics of the networks, we propose two new attack strategies named IDB (initial degree and betweenness) and RDB (recalculated degree and betweenness). The strategies originate from the ID (initial degree distribution) and RD (recalculated degree distribution) strategies, in which attacks are based on the initial structural information of a network. The probability of node removal depends on a new metric combining degree centrality and betweenness centrality. We evaluate the efficiency of the proposed strategies on one real-world network and three network models. Experimental results indicate that the proposed strategies are more efficient than the traditional ID and RD strategies. In particular, the WS small-world network is more sensitive to the proposed strategies. The attack efficiency of the RDB strategy is improved by 20% over the RD strategy, and that of the IDB strategy by 40% over the ID strategy.
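A sketch of a recalculated degree-and-betweenness attack in the spirit of RDB; the paper defines its own combination metric, so the normalized weighted sum below is a stand-in, and removal here is deterministic rather than probabilistic.

```python
# Sketch of an RDB-style attack: repeatedly remove the node scoring
# highest on a combined degree/betweenness metric, recomputed after
# every removal, and track the size of the largest component.
import networkx as nx

def rdb_attack(G, n_remove, alpha=0.5):
    G = G.copy()
    giant_sizes = []
    for _ in range(n_remove):
        deg = dict(G.degree())
        bc = nx.betweenness_centrality(G)
        dmax = max(deg.values()) or 1
        bmax = max(bc.values()) or 1
        score = {v: alpha * deg[v] / dmax + (1 - alpha) * bc[v] / bmax for v in G}
        G.remove_node(max(score, key=score.get))
        giant_sizes.append(max(len(c) for c in nx.connected_components(G)))
    return giant_sizes

G = nx.connected_watts_strogatz_graph(200, 6, 0.1, seed=0)
print(rdb_attack(G, 10))   # largest-component size after each removal
```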

Journal ArticleDOI
TL;DR: A variation of entropy centrality is defined based on a discrete, random Markovian transfer process and allows for varying locality in centrality analyses, thereby distinguishing locally central and globally central network nodes.

Journal ArticleDOI
TL;DR: This paper proposes the first truly scalable algorithm for online computation of betweenness centrality of both vertices and edges in an evolving graph where new edges are added and existing edges are removed; the algorithm is carefully engineered with out-of-core techniques and tailored for modern parallel stream processing engines that run on clusters of shared-nothing commodity hardware.
Abstract: Betweenness centrality is a classic measure that quantifies the importance of a graph element (vertex or edge) according to the fraction of shortest paths passing through it. This measure is notoriously expensive to compute, and the best known algorithm runs in O(nm) time. The problems of efficiency and scalability are exacerbated in a dynamic setting, where the input is an evolving graph seen edge by edge, and the goal is to keep the betweenness centrality up to date. In this paper, we propose the first truly scalable algorithm for online computation of betweenness centrality of both vertices and edges in an evolving graph where new edges are added and existing edges are removed. Our algorithm is carefully engineered with out-of-core techniques and tailored for modern parallel stream processing engines that run on clusters of shared-nothing commodity hardware. Hence, it is amenable to real-world deployment. We experiment on graphs that are two orders of magnitude larger than previous studies. Our method is able to keep the betweenness centrality measures up-to-date online, i.e., the time to update the measures is smaller than the inter-arrival time between two consecutive updates.
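For contrast, the naive baseline such streaming algorithms improve on is full recomputation after every update, which pays the O(nm) cost per edge; a short sketch with a hypothetical edge stream:

```python
# The naive baseline: recompute betweenness from scratch after every
# arriving edge. This per-update cost is what makes incremental and
# streaming methods necessary on large evolving graphs.
import networkx as nx

G = nx.gnm_random_graph(100, 300, seed=0)
stream = [(0, 50), (3, 77), (12, 40)]     # hypothetical edge arrivals

for u, v in stream:
    G.add_edge(u, v)
    bc = nx.betweenness_centrality(G)     # full O(nm) recomputation per update
    print((u, v), "max betweenness:", round(max(bc.values()), 4))
```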

Journal ArticleDOI
01 Oct 2015
TL;DR: This paper presents the first fully dynamic method for managing betweenness centrality of all vertices in a large dynamic network; it carefully designs a dynamic update procedure with a theoretical accuracy guarantee and proposes two auxiliary data structures called the two-ball index and the special-purpose reachability index.
Abstract: Measuring the relative importance of each vertex in a network is one of the most fundamental building blocks in network analysis. Among several importance measures, betweenness centrality, in particular, plays key roles in many real applications. Considerable effort has been made to develop algorithms for static settings. However, real networks today are highly dynamic and are evolving rapidly, and scalable dynamic methods that can instantly reflect graph changes into centrality values are required. In this paper, we present the first fully dynamic method for managing betweenness centrality of all vertices in a large dynamic network. Its main data structure is the weighted hyperedge representation of shortest paths called the hypergraph sketch. We carefully design a dynamic update procedure with a theoretical accuracy guarantee. To accelerate updates, we further propose two auxiliary data structures called the two-ball index and the special-purpose reachability index. Experimental results using real networks demonstrate its high scalability and efficiency. In particular, it can reflect a graph change in less than a millisecond on average for a large-scale web graph with 106M vertices and 3.7B edges, which is several orders of magnitude larger than the limits of previous dynamic methods.

Journal ArticleDOI
TL;DR: In this paper, the degree, betweenness, and closeness centrality measures of the project participants in a wayfinding signage project at a major airport construction project are calculated using social network analysis on the e-mail communication network between the participants.
Abstract: Building design and construction require the collective effort of diverse project participants. The coordination performance of these project participants is important for effective management and needs to be assessed periodically. However, there is no uncomplicated quantitative way to measure coordination. Measuring coordination is cumbersome and time-consuming particularly during the project execution phase. This study proposes an easy procedure for monitoring the coordinative performance of project participants. The degree, betweenness, and closeness centrality measures of the project participants in a wayfinding signage project at a major airport construction project are calculated using social network analysis on the e-mail communication network between the participants. A centrality index is defined for each firm based on the average of these three centrality measures. The firm’s coordination score is also calculated based on content analysis of the sent and received e-mails between the part...

Posted Content
TL;DR: An empirical analysis of the coupling between the street network and the subway for the two large metropolitan areas of London and New York shows that it is crucial to consider the full, multimodal, multilayer network aspects of transportation systems in order to understand the behaviour of cities and to avoid possible negative side-effects of urban planning decisions.
Abstract: Most large cities are spanned by more than one transportation system. These different modes of transport have usually been studied separately: it is however important to understand the impact on urban systems of the coupling between them, and we report in this paper an empirical analysis of the coupling between the street network and the subway for the two large metropolitan areas of London and New York. We observe a similar behaviour for network quantities related to quickest paths suggesting the existence of generic mechanisms operating beyond the local peculiarities of the specific cities studied. An analysis of the betweenness centrality distribution shows that the introduction of underground networks operates as a decentralising force creating congestion in places located at the end of underground lines. Also, we find that increasing the speed of subways is not always beneficial and may lead to unwanted uneven spatial distributions of accessibility. In fact, for London -- but not for New York -- there is an optimal subway speed in terms of global congestion. These results show that it is crucial to consider the full, multimodal, multi-layer network aspects of transportation systems in order to understand the behaviour of cities and to avoid possible negative side-effects of urban planning decisions.

Journal ArticleDOI
TL;DR: In this paper, the authors defined the hybrid flow betweenness (HFB) by considering the actual path of power flow and the maximum transmission capacity of line with a more comprehensible physical background.
Abstract: Most previous research on the identification of vulnerable lines in power systems based on topology betweenness or electrical betweenness supposes that power flow is transferred only along the shortest path. In this study, the hybrid flow betweenness (HFB) is defined by considering the actual path of power flow. The betweenness covers the direction of power flow and the maximum transmission capacity of each line with a more comprehensible physical background. Electrical coupling degree between the lines and the outage transfer distribution factor are also considered in this model to quantify physical network connection and fault effects between lines. The tenacity and directed global efficiency are introduced to measure the effects on network structure and power transfer caused by the failure of a vulnerable line. The simulation results of the IEEE-57 bus system and the China southern power grid show that the lines with higher HFB play an important role in the network structure and power transfer of the power grid. A chain attack on these lines would cause serious damage to the system. The results prove that the model proposed in this study can fully embody the physical characteristics of the power grid and efficiently and accurately identify the critical lines of the power grid.

Journal ArticleDOI
08 Oct 2015-PLOS ONE
TL;DR: Examination of the evolution of the Public Urban Rail Transit Networks of Kuala Lumpur (PURTNoKL), based on complex network theory and covering both the topological structure of the rail system and future trends in network growth, reveals that the rail network's growth is likely driven by the nodes with the largest shortest-path lengths and that network protection should emphasize those nodes with the largest degrees and the highest betweenness values.
Abstract: Recently, the number of studies involving complex network applications in transportation has increased steadily as scholars from various fields analyze traffic networks. Nonetheless, research on rail network growth is relatively rare. This research examines the evolution of the Public Urban Rail Transit Networks of Kuala Lumpur (PURTNoKL) based on complex network theory and covers both the topological structure of the rail system and future trends in network growth. In addition, network performance when facing different attack strategies is also assessed. Three topological network characteristics are considered: connections, clustering and centrality. In PURTNoKL, we found that the total number of nodes and edges exhibit a linear relationship and that the average degree stays within the interval [2.0488, 2.6774] with heavy-tailed distributions. The evolutionary process shows that the cumulative probability distribution (CPD) of degree and the average shortest path length show good fit with exponential distribution and normal distribution, respectively. Moreover, PURTNoKL exhibits clear cluster characteristics; most of the nodes have a 2-core value, and the CPDs of the centrality’s closeness and betweenness follow a normal distribution function and an exponential distribution, respectively. Finally, we discuss four different types of network growth styles and the line extension process, which reveal that the rail network’s growth is likely based on the nodes with the biggest lengths of the shortest path and that network protection should emphasize those nodes with the largest degrees and the highest betweenness values. This research may enhance the networkability of the rail system and better shape the future growth of public rail networks.

Journal ArticleDOI
TL;DR: The relative importance of the most probable path between two nodes with respect to the whole set of paths and to a subset of highly probable paths that incorporate most of the connection probability is quantified.
Abstract: We consider paths in weighted and directed temporal networks, introducing tools to compute sets of paths of high probability. We quantify the relative importance of the most probable path between two nodes with respect to the whole set of paths, and to a subset of highly probable paths that incorporate most of the connection probability. These concepts are used to provide alternative definitions of betweenness centrality. We apply our formalism to a transport network describing surface flow in the Mediterranean sea. Although the full transport dynamics is described by a very large number of paths, we find that, for realistic time scales, only a very small subset of high probability paths (or even a single most probable one) is enough to characterize global connectivity properties of the network.
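When each edge carries an independent traversal probability, the most probable path can be found with any shortest-path routine by minimizing the sum of negative log-probabilities, since maximizing a product of probabilities is equivalent to minimizing that sum. A small sketch on an assumed toy network:

```python
# Most probable path via the standard -log(p) reduction: maximizing the
# product of edge probabilities equals minimizing the sum of -log(p).
import math
import networkx as nx

G = nx.DiGraph()
edges = [("a", "b", 0.9), ("b", "d", 0.5), ("a", "c", 0.6), ("c", "d", 0.95)]
for u, v, p in edges:
    G.add_edge(u, v, p=p, nlogp=-math.log(p))

path = nx.shortest_path(G, "a", "d", weight="nlogp")
prob = math.exp(-nx.shortest_path_length(G, "a", "d", weight="nlogp"))
print(path, round(prob, 3))   # ['a', 'c', 'd'] with probability 0.57
```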

Journal ArticleDOI
TL;DR: Wanzenbock et al. focus on the embeddedness of regions in research and development (R&D) networks within European Union Framework Programmes by estimating how distinct regional factors affect a region's network positioning.
Abstract: Wanzenbock I., Scherngell T. and Lata R. Embeddedness of European regions in European Union-funded research and development (R&D) networks: a spatial econometric perspective, Regional Studies. This study focuses on the embeddedness of regions in research and development (R&D) networks within European Union Framework Programmes by estimating how distinct regional factors affect a region's network positioning. Graph theoretic centrality measures in terms of betweenness and eigenvector centrality are calculated at the organizational level to reflect the relevant network structure before aggregation to the region level. Panel spatial Durbin error models (SDEM) reveal that region-internal knowledge production capacities, a region's level of economic development as well as spatial spillovers are important determinants for a region's positioning in the European Union-funded R&D network, but their significance differs depending on the centrality concept.

Journal ArticleDOI
TL;DR: It is shown that the proposed optimal placement strategy considerably outperforms heuristic methods such as choosing hub nodes with high degree or betweenness centrality as drivers; properties of optimal drivers are also characterized in terms of various centrality measures including degree, betweenness, closeness, and clustering coefficient.
Abstract: Controlling networked structures has many applications in science and engineering. In this paper, we consider the problem of pinning control (pinning the dynamics into the reference state), and optimally placing the driver nodes, i.e., the nodes to which the control signal is fed. Considering the local controllability concept, a metric based on the eigenvalues of the Laplacian matrix is taken into account as a measure of controllability. We show that the proposed optimal placement strategy considerably outperforms heuristic methods including choosing hub nodes with high degree or betweenness centrality as drivers. We also study properties of optimal drivers in terms of various centrality measures including degree, betweenness, closeness, and clustering coefficient. The profile of these centrality values depends on the network structure. For homogeneous networks such as random small-world networks, the optimal driver nodes have almost the mean centrality value of the population (much lower than the centrality value of hub nodes), whereas the centrality value of optimal drivers in heterogeneous networks such as scale-free ones is much higher than the average and close to that of hub nodes. However, as the degree of heterogeneity decreases in such networks, the profile of centrality approaches the population mean.
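The abstract says only that the controllability metric is based on the eigenvalues of the Laplacian; one common concrete choice, assumed in the sketch below, is the smallest eigenvalue of the grounded Laplacian (the Laplacian with the driver rows and columns removed), where larger values indicate easier pinning. Greedy placement is then compared against the hub heuristic.

```python
# Sketch of driver placement for pinning control, using the smallest
# grounded-Laplacian eigenvalue as the controllability score (an assumed
# concrete choice). Greedy selection is compared with picking hubs.
import numpy as np
import networkx as nx

def grounded_lambda(L, drivers):
    keep = [i for i in range(L.shape[0]) if i not in drivers]
    return np.linalg.eigvalsh(L[np.ix_(keep, keep)])[0]

G = nx.barabasi_albert_graph(60, 2, seed=0)
L = nx.laplacian_matrix(G).toarray().astype(float)

drivers, k = set(), 3
for _ in range(k):                                  # greedy placement
    best = max(set(G) - drivers, key=lambda v: grounded_lambda(L, drivers | {v}))
    drivers.add(best)

hubs = set(sorted(G, key=G.degree, reverse=True)[:k])
print("greedy score:", grounded_lambda(L, drivers))
print("hub score:   ", grounded_lambda(L, hubs))
```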

Journal ArticleDOI
16 Nov 2015-PLOS ONE
TL;DR: A module-based method to efficiently fragment complex networks is presented: topological communities are identified first, and only the nodes participating in inter-community links are removed, in descending order of their betweenness centrality; this approach consistently outperforms attacks based on degree or betweenness centrality rankings alone.
Abstract: In the multidisciplinary field of Network Science, optimization of procedures for efficiently breaking complex networks is attracting much attention from a practical point of view. In this contribution, we present a module-based method to efficiently fragment complex networks. The procedure first identifies topological communities through which the network can be represented, using a well-established heuristic algorithm of community finding. Then only the nodes that participate in inter-community links are removed, in descending order of their betweenness centrality. We illustrate the method by applying it to a variety of examples in the social, infrastructure, and biological fields. It is shown that the module-based approach always outperforms targeted attacks on vertices based on node degree or betweenness centrality rankings, with gains in efficiency strongly related to the modularity of the network. Remarkably, in the US power grid case, by deleting 3% of the nodes, the proposed method breaks the original network into fragments which are twenty times smaller in size than the fragments left by a betweenness-based attack.
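A compact sketch of the procedure, with greedy modularity maximization standing in for the paper's community-finding heuristic; the network model and removal budget are illustrative.

```python
# Sketch of module-based fragmentation: find communities, then delete
# only nodes incident to inter-community links, in descending order of
# betweenness, and compare with a plain betweenness-based attack.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def module_based_attack(G, n_remove):
    G = G.copy()
    comm = {v: i for i, c in enumerate(greedy_modularity_communities(G)) for v in c}
    frontier = set()
    for u, v in G.edges:
        if comm[u] != comm[v]:        # node participates in an inter-community link
            frontier.update((u, v))
    bc = nx.betweenness_centrality(G)
    G.remove_nodes_from(sorted(frontier, key=bc.get, reverse=True)[:n_remove])
    return max(len(c) for c in nx.connected_components(G))

G = nx.connected_watts_strogatz_graph(300, 4, 0.05, seed=0)
print("module-based attack, largest fragment:", module_based_attack(G, 9))

H = G.copy()                          # plain betweenness attack, for comparison
bc = nx.betweenness_centrality(H)
H.remove_nodes_from(sorted(H, key=bc.get, reverse=True)[:9])
print("betweenness attack, largest fragment:",
      max(len(c) for c in nx.connected_components(H)))
```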