
Showing papers on "Betweenness centrality published in 2017"


Journal ArticleDOI
TL;DR: Simulation results on sample networks reveal just how relevant the centrality of initiator nodes is to the later development of an information cascade, and the spreading influence of a node is defined as the fraction of nodes that is activated as a result of the initial activation of that node.
Abstract: Information cascades are important dynamical processes in complex networks. An information cascade can describe the spreading dynamics of rumour, disease, memes, or marketing campaigns, which initially start from a node or a set of nodes in the network. If conditions are right, information cascades rapidly encompass large parts of the network, thus leading to epidemics or epidemic spreading. Certain network topologies are particularly conducive to epidemics, while others decelerate and even prohibit rapid information spreading. Here we review models that describe information cascades in complex networks, with an emphasis on the role and consequences of node centrality. In particular, we present simulation results on sample networks that reveal just how relevant the centrality of initiator nodes is to the later development of an information cascade, and we define the spreading influence of a node as the fraction of nodes that is activated as a result of the initial activation of that node. A systematic review of existing results shows that some centrality measures, such as the degree and betweenness, are positively correlated with the spreading influence, while other centrality measures, such as eccentricity and the information index, have negative correlation. A positive correlation implies that choosing a node with the highest centrality value will activate the largest number of nodes, while a negative correlation implies that the node with the lowest centrality value will have the same effect. We discuss possible applications of these results, and we emphasize how information cascades can help us identify nodes with the highest spreading capability in complex networks.

225 citations
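As a concrete illustration of the spreading-influence measure defined above, the sketch below seeds an independent-cascade process at each node and reports the average fraction of the network it activates. The activation probability, the number of trials, and the Barabasi-Albert test graph are illustrative assumptions, not the paper's exact setup.

```python
# Estimate the spreading influence of each node as the average fraction
# of nodes activated by an independent cascade seeded at that node.
import random
import networkx as nx

def independent_cascade(G, seed, p=0.1):
    active, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active) / G.number_of_nodes()

def spreading_influence(G, node, p=0.1, runs=50):
    return sum(independent_cascade(G, node, p) for _ in range(runs)) / runs

G = nx.barabasi_albert_graph(200, 3, seed=42)
influence = {v: spreading_influence(G, v) for v in G}
degree = dict(G.degree())
top_by_degree = max(degree, key=degree.get)
print("highest-degree node:", top_by_degree,
      "estimated influence:", round(influence[top_by_degree], 3))
```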


Proceedings ArticleDOI
01 Mar 2017
TL;DR: This paper presents these three centrality measures in depth, from principle to algorithm, and discusses their prospects for future use.
Abstract: Social network theory is becoming more and more significant in social science, and the centrality measure underlies this burgeoning theory. From the perspective of social networks, individuals, organizations, companies etc. are like nodes in the network, and centrality is used to measure these nodes’ power, activity, communication convenience and so on. Meanwhile, degree centrality, betweenness centrality and closeness centrality are the popular detailed measurements. This paper presents these three centrality measures in depth, from principle to algorithm, and discusses their prospects for future use. Keywords: social network; centrality; degree centrality; betweenness centrality; closeness centrality

151 citations
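The three measures surveyed above are available directly in networkx; a minimal sketch on a classic small social network (the toy graph is an illustrative choice):

```python
import networkx as nx

G = nx.karate_club_graph()  # a classic small social network

degree = nx.degree_centrality(G)            # activity: fraction of possible ties
betweenness = nx.betweenness_centrality(G)  # brokerage on shortest paths
closeness = nx.closeness_centrality(G)      # communication convenience

for name, scores in [("degree", degree),
                     ("betweenness", betweenness),
                     ("closeness", closeness)]:
    top = max(scores, key=scores.get)
    print(f"most central by {name}: node {top} ({scores[top]:.3f})")
```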


Journal ArticleDOI
TL;DR: The results show that the HSR network largely increased overall connectivity, as indicated by the increasing Beta index and clustering coefficient and the decreasing average path length; centrality tended to intensify in large cities in terms of the WCC indicator, but in small cities according to the WDC and WBC indicators.

147 citations


Journal ArticleDOI
TL;DR: It is shown that although the prominent centrality measures in network analysis make use of different information about nodes' positions, they all process that information in an identical way: they all spring from a common family that are characterized by the same simple axioms.
Abstract: We show that although the prominent centrality measures in network analysis make use of different information about nodes' positions, they all process that information in an identical way: they all spring from a common family that are characterized by the same simple axioms. In particular, they are all based on a monotonic and additively separable treatment of a statistic that captures a node's position in the network.

130 citations


Journal ArticleDOI
TL;DR: Dense electroencephalography data recorded during a task-free paradigm are used to track the fast temporal dynamics of spontaneous brain networks, revealing the existence of a functional dynamic core network formed of a set of key brain regions that ensure segregation and integration functions.
Abstract: The human brain is an inherently complex and dynamic system. Even at rest, functional brain networks dynamically reconfigure in a well-organized way to warrant efficient communication between brain regions. However, a precise characterization of this reconfiguration at a very fast time scale (hundreds of milliseconds) during rest remains elusive. In this study, we used dense electroencephalography data recorded during a task-free paradigm to track the fast temporal dynamics of spontaneous brain networks. Results obtained from network-based analysis methods revealed the existence of a functional dynamic core network formed of a set of key brain regions that ensure segregation and integration functions. Brain regions within this functional core share high betweenness centrality, strength and vulnerability (high impact on the network global efficiency) and low clustering coefficient. These regions are mainly located in the cingulate and the medial frontal cortex. In particular, most of the identified hubs were found to belong to the Default Mode Network. Results also revealed that the same central regions may dynamically alternate and play the role of either provincial (local) or connector (global) hubs.

114 citations


Journal ArticleDOI
31 Jan 2017-PLOS ONE
TL;DR: Results suggested that lower team passing dependency for a given player and highly interconnected intra-team passing relations were related to better outcomes, and the social network analysis revealed novel key determinants of collective performance.
Abstract: Understanding how youth football players base their game interactions may constitute a solid criterion for fine-tuning the training process and, ultimately, for achieving better individual and team performances during competition. The present study aims to explore how passing networks and positioning variables can be linked to the match outcome in youth elite association football. The participants included 44 male elite players from under-15 and under-17 age groups. A passing network approach within positioning-derived variables was computed to identify the contributions of individual players to the overall team behaviour outcome during a simulated match. Results suggested that lower team passing dependency for a given player (expressed by lower betweenness network centrality scores) and highly interconnected intra-team passing relations (expressed by higher closeness network centrality scores) were related to better outcomes. The correlation between the dyads' positioning regularity and the passing density showed a most likely higher correlation in under-15 (moderate effect), indicating a likely greater dependence on ball position than in the under-17 teams (small/unclear effects). Overall, this study emphasizes the potential of coupling notational analyses with spatial-temporal relations to produce a more functional and holistic understanding of teams' sports performance. The social network analysis also revealed novel key determinants of collective performance.

113 citations
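A hedged sketch of the passing-network idea: nodes are players and a directed edge u→v is weighted by how many passes u made to v. The pass counts are made up, and converting counts to distances (1/count) before computing closeness and betweenness is one common convention, not necessarily the paper's exact procedure.

```python
import networkx as nx

passes = {("GK", "DF1"): 12, ("DF1", "MF1"): 18, ("MF1", "FW1"): 9,
          ("MF1", "MF2"): 14, ("MF2", "FW1"): 7, ("DF1", "MF2"): 5,
          ("FW1", "MF1"): 6}

G = nx.DiGraph()
for (u, v), count in passes.items():
    # frequent passing = short "distance" between two players
    G.add_edge(u, v, weight=count, distance=1.0 / count)

betw = nx.betweenness_centrality(G, weight="distance")  # passing dependency
clos = nx.closeness_centrality(G, distance="distance")  # passing connectedness
print("team passing dependency (betweenness):", betw)
print("passing connectedness (closeness):", clos)
```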


Journal ArticleDOI
TL;DR: Ranking spreaders in complex networks by means of network efficiency is shown to be significant, and the proposed efficiency centrality (EffC) proves to be a feasible and effective measure for identifying influential nodes.

111 citations
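The exact EffC definition is not given in the TL;DR above; one common way to operationalize "centrality via network efficiency", sketched here as an assumption, scores each node by the drop in global efficiency caused by its removal.

```python
import networkx as nx

def efficiency_drop(G, node):
    """Score a node by how much global efficiency falls when it is removed."""
    base = nx.global_efficiency(G)
    H = G.copy()
    H.remove_node(node)
    return base - nx.global_efficiency(H)

G = nx.karate_club_graph()
scores = {v: efficiency_drop(G, v) for v in G}
ranked = sorted(scores, key=scores.get, reverse=True)
print("most influential by efficiency drop:", ranked[:5])
```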


Journal ArticleDOI
TL;DR: In this paper, the authors explore the association between innovative behavior, firm position within the network of a destination, and the knowledge and relational trust characteristics of a firm's innovation-oriented relationships.

105 citations


Journal ArticleDOI
TL;DR: A communication scheme that relaxes the assumption that information travels exclusively through optimally short paths and assumes that communication between a pair of brain regions may take place through a path ensemble comprising the k-shortest paths between those regions is explored.
Abstract: Computational analysis of communication efficiency of brain networks often relies on graph-theoretic measures based on the shortest paths between network nodes. Here, we explore a communication scheme that relaxes the assumption that information travels exclusively through optimally short paths. The scheme assumes that communication between a pair of brain regions may take place through a path ensemble comprising the k-shortest paths between those regions. To explore this approach, we map path ensembles in a set of anatomical brain networks derived from diffusion imaging and tractography. We show that while considering optimally short paths excludes a significant fraction of network connections from participating in communication, considering k-shortest path ensembles allows all connections in the network to contribute. Path ensembles enable us to assess the resilience of communication pathways between brain regions, by measuring the number of alternative, disjoint paths within the ensemble, and to compare generalized measures of path length and betweenness centrality to those that result when considering only the single shortest path between node pairs. Furthermore, we find a significant correlation, indicative of a trade-off, between communication efficiency and resilience of communication pathways in structural brain networks. Finally, we use k-shortest path ensembles to demonstrate hemispherical lateralization of efficiency and resilience.

90 citations
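The k-shortest path ensembles described above can be enumerated with networkx's Yen-style generator. Here the "generalized path length" of the ensemble is illustrated as the mean length of its members; the paper's generalized measures may be defined differently, and the grid graph is a stand-in for a brain network.

```python
from itertools import islice
import networkx as nx

def k_shortest_paths(G, source, target, k, weight=None):
    # shortest_simple_paths yields simple paths in order of increasing length
    return list(islice(nx.shortest_simple_paths(G, source, target,
                                                weight=weight), k))

G = nx.grid_2d_graph(5, 5)  # stand-in for an anatomical brain network
ensemble = k_shortest_paths(G, (0, 0), (4, 4), k=5)
lengths = [len(p) - 1 for p in ensemble]
print("shortest path uses", lengths[0], "edges;",
      "ensemble mean length:", sum(lengths) / len(lengths))
```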


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that the distribution of betweenness centrality (BC) is an invariant quantity in most planar graphs, and they confirm this invariance through an empirical analysis of street networks from 97 of the most populous cities worldwide, at scales significantly larger than previous studies.
Abstract: We demonstrate that the distribution of betweenness centrality (BC), a global structural metric based on network flow, is an invariant quantity in most planar graphs. We confirm this invariance through an empirical analysis of street networks from 97 of the most populous cities worldwide, at scales significantly larger than previous studies. We also find that the BC distribution is robust to major alterations in the network, including significant changes to its topology and edge weight structure, indicating that the only relevant factors shaping the distribution are the number of nodes and edges as well as the constraint of planarity. Through simulations of random planar graph models and analytical calculations on Cayley trees, this invariance is demonstrated to be a consequence of a bimodal regime consisting of an underlying tree structure for high BC nodes, and a low BC regime arising from the presence of loops providing local path alternatives. Furthermore, the high BC nodes display a non-trivial spatial dependence, with increasing spatial correlation as a function of the number of edges, leading them to cluster around the barycenter at large densities. Our results suggest that the spatial distribution of the BC is a more accurate discriminator when comparing patterns across cities. Moreover, since the BC is a static predictor of congestion in planar graphs, the observed invariance and spatial dependence have practical implications for infrastructural and biological networks. In particular, for the case of street networks, as long as planarity is conserved, bottlenecks continue to persist, and the effect of planned interventions to alleviate structural congestion will be limited primarily to load redistribution, a feature confirmed by analyzing 200 years of data for central Paris.

79 citations


Proceedings ArticleDOI
12 Nov 2017
TL;DR: Maximal Frontier Betweenness Centrality (MFBC) as discussed by the authors is a succinct algorithm based on sparse matrix multiplication that performs a factor of p^(1/3) less communication on p processors than the best known algorithms.
Abstract: Betweenness centrality (BC) is a crucial graph problem that measures the significance of a vertex by the number of shortest paths leading through it. We propose Maximal Frontier Betweenness Centrality (MFBC): a succinct BC algorithm based on novel sparse matrix multiplication routines that performs a factor of p^(1/3) less communication on p processors than the best known alternatives, for graphs with n vertices and average degree k = n/p^(2/3). We formulate, implement, and prove the correctness of MFBC for weighted graphs by leveraging monoids instead of semirings, which enables a surprisingly succinct formulation. MFBC scales well for both extremely sparse and relatively dense graphs. It automatically searches a space of distributed data decompositions and sparse matrix multiplication algorithms for the most advantageous configuration. The MFBC implementation outperforms the well-known CombBLAS library by up to 8x and shows more robust performance. Our design methodology is readily extensible to other graph problems.
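MFBC itself is a distributed sparse-matrix formulation and is not reproduced here; as a point of reference, this is the classical sequential Brandes algorithm for unweighted betweenness that such formulations generalize, checked against networkx.

```python
from collections import deque
import networkx as nx

def brandes_bc(G):
    """Unweighted betweenness via Brandes' accumulation scheme."""
    bc = dict.fromkeys(G, 0.0)
    for s in G:
        sigma = dict.fromkeys(G, 0); sigma[s] = 1   # shortest-path counts
        dist = dict.fromkeys(G, -1); dist[s] = 0
        preds = {v: [] for v in G}
        order, queue = [], deque([s])
        while queue:                                # BFS from s
            v = queue.popleft()
            order.append(v)
            for w in G.neighbors(v):
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(G, 0.0)               # dependency back-propagation
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

G = nx.karate_club_graph()
ours = brandes_bc(G)
ref = nx.betweenness_centrality(G, normalized=False)
# undirected graphs count each pair twice; networkx halves the raw sums
assert all(abs(ours[v] / 2 - ref[v]) < 1e-9 for v in G)
print("Brandes betweenness matches networkx")
```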

Journal ArticleDOI
TL;DR: Deterministic algorithms, which converge in finite time, are proposed for the distributed computation of the degree, closeness and betweenness centrality measures in directed graphs, and the concept of a persistent graph is introduced to eliminate the effect of spamming nodes.
Abstract: This paper is concerned with distributed computation of several commonly used centrality measures in complex networks. In particular, we propose deterministic algorithms, which converge in finite time, for the distributed computation of the degree, closeness and betweenness centrality measures in directed graphs. Regarding eigenvector centrality, we consider the PageRank problem as its typical variant, and design distributed randomized algorithms to compute PageRank for both fixed and time-varying graphs. A key feature of the proposed algorithms is that they do not require knowledge of the network size, which can be simultaneously estimated at every node, and that they are clock-free. To address the PageRank problem of time-varying graphs, we introduce the concept of a persistent graph, which eliminates the effect of spamming nodes. Moreover, we prove that these algorithms converge almost surely and in the L^p sense. Finally, the effectiveness of the proposed algorithms is illustrated via extensive simulations using a classical benchmark.
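The paper's algorithms are distributed and randomized; as a hedged, centralized reference point, this is the standard PageRank power iteration they approximate, written with plain dictionaries so the per-node updates are explicit.

```python
import networkx as nx

def pagerank_power_iteration(G, damping=0.85, tol=1e-10):
    n = G.number_of_nodes()
    rank = dict.fromkeys(G, 1.0 / n)
    while True:
        new = dict.fromkeys(G, (1.0 - damping) / n)  # teleportation mass
        for u in G:
            out = G.out_degree(u)
            if out == 0:
                # dangling node: spread its mass uniformly
                for v in G:
                    new[v] += damping * rank[u] / n
            else:
                share = damping * rank[u] / out
                for v in G.successors(u):
                    new[v] += share
        if sum(abs(new[v] - rank[v]) for v in G) < tol:
            return new
        rank = new

G = nx.gnp_random_graph(50, 0.1, directed=True, seed=1)
pr = pagerank_power_iteration(G)
print("top node by PageRank:", max(pr, key=pr.get))
```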

Journal ArticleDOI
TL;DR: This article adopts a deterministic approach to simulate extreme flooding events in two cities, New York City and Chicago, by removing entire sections of road systems using U.S. FEMA floodplains, and measures and discusses how betweenness centrality is being redistributed after flooding.
Abstract: The main objective of this article is to study the robustness of road networks to extreme flooding events that can negatively affect entire regional systems in a relatively unpredictable way. Here, we adopt a deterministic approach to simulate extreme flooding events in two cities, New York City and Chicago, by removing entire sections of road systems using U.S. FEMA floodplains. We then measure changes in the number of real trips that can be completed (using travel demand data), Geographical Information Systems properties, and network topological indicators. We notably measure and discuss how betweenness centrality is being redistributed after flooding. Broadly, robustness in spatial systems like road networks is dependent on many factors, including system size (number of nodes and links) and topological structure of the network. Expectedly, robustness also depends on geography, and cities that are naturally more at risk will tend to be less robust, and therefore the notion of robustness rapidly becomes sensitive to individual contexts.
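A sketch of the removal experiment described above: delete the nodes inside a flooded region of a grid-like toy road network and compare how betweenness centrality is redistributed among the surviving intersections. Real floodplain polygons (FEMA) would replace the toy band predicate below; note that the removal may disconnect the network, as real floods do.

```python
import networkx as nx

G = nx.grid_2d_graph(10, 10)                 # toy road network
flooded = [n for n in G if 3 <= n[0] <= 5]   # hypothetical floodplain band

bc_before = nx.betweenness_centrality(G)
H = G.copy()
H.remove_nodes_from(flooded)
bc_after = nx.betweenness_centrality(H)

shift = {n: bc_after[n] - bc_before[n] for n in H}
gainers = sorted(shift, key=shift.get, reverse=True)[:5]
print("largest betweenness gains after flooding:", gainers)
```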

Journal ArticleDOI
TL;DR: It is shown that in general the weight neighborhood centrality can rank the spreading ability of nodes more accurately than its benchmark centrality, especially when using the degree k or coreness k_s as the benchmark centrality.
Abstract: Identifying the most influential spreaders in complex networks is crucial for optimally using the network structure and designing efficient strategies to accelerate information dissemination or prevent epidemic outbreaks. In this paper, by taking into account the centrality of a node and its neighbors’ centrality, which depends on the diffusion importance of links, we propose a novel influence measure, the weight neighborhood centrality, to quantify the spreading ability of nodes in complex networks. To evaluate the performance of our method, we use the Susceptible–Infected–Recovered (SIR) model to simulate the epidemic spreading process on six real-world networks and four artificial networks. By measuring the rank imprecision and the rank correlation between the rank lists generated by SIR simulation results and the ones generated by centrality measures, it is shown that in general the weight neighborhood centrality can rank the spreading ability of nodes more accurately than its benchmark centrality, especially when using the degree k or coreness k_s as the benchmark centrality. Further, we compare the monotonicity and the computational complexity of different ranking methods, which shows that our method not only is better at distinguishing the spreading ability of nodes but also can be used in large-scale networks due to its high computational efficiency.
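A hedged sketch of the evaluation protocol (not the paper's centrality itself): simulate SIR spreading from each seed to estimate spreading ability, then measure Kendall rank correlation with a candidate centrality, here plain degree as the benchmark. The infection probability and trial count are illustrative.

```python
import random
import networkx as nx
from scipy.stats import kendalltau

def sir_spread(G, seed, beta=0.1):
    """Discrete-step SIR with recovery after one step; returns outbreak size."""
    infected, recovered = {seed}, set()
    while infected:
        new_inf = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in recovered \
                        and random.random() < beta:
                    new_inf.add(v)
        recovered |= infected
        infected = new_inf - recovered
    return len(recovered) / G.number_of_nodes()

G = nx.karate_club_graph()
ability = {v: sum(sir_spread(G, v) for _ in range(200)) / 200 for v in G}
degree = dict(G.degree())
tau, _ = kendalltau([degree[v] for v in G], [ability[v] for v in G])
print(f"Kendall tau between degree and spreading ability: {tau:.3f}")
```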

Journal ArticleDOI
TL;DR: An individual user's item relations can be utilized to remedy the problems occurring when the external relations are biased or insufficient, and the suggested method performed better than the basic item-based and user-based collaborative filtering methods in terms of Accuracy, Recall, and F1 scores for top-k recommendations.
Abstract: Recommendation systems are becoming important with the increased availability of online services. A typical approach used in recommendations is collaborative filtering. However, because it largely relies on external relations, such as items-to-items or users-to-users, problems occur when the relations are biased or insufficient. Focusing on that limitation, we here suggest a new method, item-network-based collaborative filtering, which recommends items through four steps. First, the system constructs item networks based on users’ item usage history and calculates three types of centrality: betweenness, closeness, and degree. Next, the system secures significant items based on the betweenness centrality of the items in each user's item network. Then, by using the closeness and degree centrality of the secured items, the algorithm predicts preference scores for items and their rank orders from each user's perspective. In the last step, the system organizes a recommendation list based on the predicted scores. To evaluate the performance of our system, we applied it to a sample dataset of 196 Last.fm users’ listening history and compared the results with those from existing collaborative filtering methods. The results showed that the suggested method performed better than the basic item-based and user-based collaborative filtering methods in terms of Accuracy, Recall, and F1 scores for top-k recommendations. This indicates that an individual user's item relations can be utilized to remedy the problems occurring when the external relations are biased or insufficient.
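A loose sketch of the four-step idea, with details that differ from the paper: build an item network from one user's usage history, keep the items with the highest betweenness as "significant", and score items adjacent to them by a blend of closeness and degree centrality. The toy history, the linking rule (consecutive plays), and the 50/50 blend are all illustrative assumptions.

```python
import networkx as nx

history = ["a", "b", "c", "b", "d", "e", "c", "f"]  # one user's item sequence
G = nx.Graph()
G.add_edges_from(zip(history, history[1:]))         # consecutive items linked

betw = nx.betweenness_centrality(G)
cutoff = sorted(betw.values())[-2]                  # keep the top two items
significant = {i for i in G if betw[i] >= cutoff}

clos = nx.closeness_centrality(G)
deg = nx.degree_centrality(G)
# score items adjacent to the significant ones (a stand-in for the
# paper's candidate-selection step)
candidates = {n for i in significant for n in G.neighbors(i)} - significant
scores = {i: 0.5 * clos[i] + 0.5 * deg[i] for i in candidates}
print("recommended:", sorted(scores, key=scores.get, reverse=True))
```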

Journal ArticleDOI
TL;DR: The results show that the modified measurements are superior to conventional centrality measurements for analyzing traffic flow, and the study helps shed light on the understanding of urban traffic flow in relation to different modes from the perspective of complex networks.
Abstract: In this paper, we propose an improved network centrality measure framework that takes into account both the topological characteristics and the geometric properties of a road network in order to analyze urban traffic flow in relation to different modes: intersection, road, and community, which correspond to point mode, line mode, and area mode respectively. Degree, betweenness, and PageRank centralities are selected as the analysis measures, and GPS-enabled taxi trajectory data is used to evaluate urban traffic flow. The results show that, in all periods, the mean value of the correlation coefficients between the modified degree, betweenness, and PageRank centralities and the traffic flow is higher than the mean value of the correlation coefficients between the conventional centralities and the traffic flow for the different modes; this indicates that the modified measurements are superior to conventional centrality measurements for analyzing traffic flow. This study helps shed light on the understanding of urban traffic flow in relation to different modes from the perspective of complex networks.

Journal ArticleDOI
TL;DR: This paper proposes a novel method for identifying top-K viral information propagators from a reduced search space, which computes the Katz centrality and local average centrality values of each node and tests the values against two threshold values.
Abstract: Network theory concepts form the core of algorithms that are designed to uncover valuable insights from various datasets. In particular, network centrality measures such as Eigenvector centrality, Katz centrality, PageRank centrality etc. are used in retrieving top-K viral information propagators in social networks, and in web page ranking for efficient information retrieval. In this paper, we propose a novel method for identifying top-K viral information propagators from a reduced search space. Our algorithm computes the Katz centrality and local average centrality values of each node and tests the values against two threshold values (constraints). Only those nodes which satisfy these constraints form the search space for top-K propagators. Our proposed algorithm is tested against four datasets, and the results show that it is capable of reducing the number of nodes in the search space by at least 70%. We also considered the parameter (α and β) dependency of Katz centrality values in our experiments and established a relationship between the α values, the number of nodes in the search space, and network characteristics. Finally, we compare the top-K results of our approach against the top-K results of degree centrality.
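A sketch of the search-space reduction: keep only nodes whose Katz centrality and local average centrality (here taken as the mean degree of a node's neighbours, an assumption) exceed thresholds, then pick the top-K within the reduced set. The median thresholds and the α value are illustrative choices.

```python
import networkx as nx

G = nx.barabasi_albert_graph(300, 3, seed=7)
katz = nx.katz_centrality(G, alpha=0.01, beta=1.0)
lac = {v: sum(G.degree(u) for u in G.neighbors(v)) / max(G.degree(v), 1)
       for v in G}  # local average centrality (assumed definition)

t_katz = sorted(katz.values())[len(G) // 2]  # median thresholds
t_lac = sorted(lac.values())[len(G) // 2]
search_space = [v for v in G if katz[v] > t_katz and lac[v] > t_lac]
K = 10
top_k = sorted(search_space, key=katz.get, reverse=True)[:K]
print(f"search space reduced to {len(search_space)}/{len(G)} nodes; "
      f"top-{K}: {top_k}")
```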

Journal ArticleDOI
TL;DR: An efficient algorithm is proposed for identifying influential nodes, using weighted formal concept analysis (WFCA), which is a typical computational intelligence technique, and outperforms several state-of-the-art algorithms.
Abstract: The identification of influential nodes is essential to research regarding network attacks, information dissemination, and epidemic spreading. Thus, techniques for identifying influential nodes in complex networks have been the subject of increasing attention. During recent decades, many methods have been proposed from various viewpoints, each with its own advantages and disadvantages. In this paper, an efficient algorithm is proposed for identifying influential nodes, using weighted formal concept analysis (WFCA), which is a typical computational intelligence technique. We call this a WFCA-based influential nodes identification algorithm. The basic idea is to quantify the importance of nodes via WFCA. Specifically, this model converts the binary relationships between nodes in a given network into a knowledge hierarchy, and employs WFCA to aggregate the nodes in terms of their attributes. The more nodes aggregated, the more important each attribute becomes. WFCA not only works on undirected or directed networks, but is also applicable to attributed networks. To evaluate the performance of WFCA, we employ the SIR model to examine the spreading efficiency of each node, and compare the WFCA algorithm with PageRank, HITS, K-shell, H-index, eigenvector centrality, closeness centrality, and betweenness centrality on several real-world networks. Extensive experiments demonstrate that the WFCA algorithm ranks nodes effectively, and outperforms several state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: The analysis provides a positive answer to the research question: CBR scores allow for predicting HBR ones, and Eigenvector Centrality was found to be the most important predictor.
Abstract: In collaborative Web-based platforms, user reputation scores are generally computed according to two orthogonal perspectives: (a) helpfulness-based reputation (HBR) scores and (b) centrality-based reputation (CBR) scores. In HBR approaches, the most reputable users are those who post the most helpful reviews according to the opinion of the members of their community. In CBR approaches, a “who-trusts-whom” network—known as a trust network—is available and the most reputable users occupy the most central positions in the trust network, according to some definition of centrality. The identification of users featuring large HBR scores is one of the most important research issues in the field of Social Networks, and it is a critical success factor of many Web-based platforms like e-marketplaces, product review Web sites, and question-and-answering systems. Unfortunately, user reviews/ratings are often sparse, and this makes the calculation of HBR scores inaccurate. In contrast, CBR scores are relatively easy to calculate provided that the topology of the trust network is known. In this article, we investigate if CBR scores are effective to predict HBR ones, and, to perform our study, we used real-life datasets extracted from CIAO and Epinions (two product review Web sites) and Wikipedia and applied five popular centrality measures—Degree Centrality, Closeness Centrality, Betweenness Centrality, PageRank and Eigenvector Centrality—to calculate CBR scores. Our analysis provides a positive answer to our research question: CBR scores allow for predicting HBR ones, and Eigenvector Centrality was found to be the most important predictor. Our findings prove that we can leverage trust relationships to spot those users producing the most helpful reviews for the whole community.

Journal ArticleDOI
TL;DR: In this paper, the authors model information dissemination as a susceptible-infected epidemic process and formulate a problem to jointly optimize seeds for the epidemic and time varying resource allocation over the period of a fixed duration campaign running on a social network with a given adjacency matrix.
Abstract: We model information dissemination as a susceptible-infected epidemic process and formulate a problem to jointly optimize seeds for the epidemic and time varying resource allocation over the period of a fixed duration campaign running on a social network with a given adjacency matrix. Individuals in the network are grouped according to their centrality measure and each group is influenced by an external control function—implemented through advertisements—during the campaign duration. The aim is to maximize an objective function which is a linear combination of the reward due to the fraction of informed individuals at the deadline, and the aggregated cost of applying controls (advertising) over the campaign duration. We also study a problem variant with a fixed budget constraint. We set up the optimality system using Pontryagin’s maximum principle from optimal control theory and solve it numerically using the forward–backward sweep technique. Our formulation allows us to compare the performance of various centrality measures (pagerank, degree, closeness, and betweenness) in maximizing the spread of a message in the optimal control framework. We find that degree—a simple and local measure—performs well on the three social networks used to demonstrate results: 1) scientific collaboration; 2) Slashdot; and 3) Facebook. The optimal strategy targets central nodes when the resource is scarce, but noncentral nodes are targeted when the resource is in abundance. Our framework is general and can be used in similar studies for other disease or information spread models—that can be modeled using a system of ordinary differential equations—for a network with a known adjacency matrix.

Journal ArticleDOI
15 Nov 2017-Entropy
TL;DR: A novel mechanism is proposed to quantitatively measure centrality using the re-defined entropy centrality model, which is based on decompositions of a graph into subgraphs and analysis of the entropy of neighbor nodes.
Abstract: Centrality is one of the most studied concepts in network analysis. Although an abundance of methods for measuring centrality in social networks has been proposed, each approach exclusively characterizes limited parts of what it implies for an actor to be “vital” to the network. In this paper, a novel mechanism is proposed to quantitatively measure centrality using the re-defined entropy centrality model, which is based on decompositions of a graph into subgraphs and analysis of the entropy of neighbor nodes. By design, the re-defined entropy centrality, which describes associations among node pairs and captures the process of influence propagation, can be interpreted as a measure of actor potential for communication activity. We evaluate the efficiency of the proposed model by using four real-world datasets with varied sizes and densities and three artificial networks constructed by models including Barabasi-Albert, Erdos-Renyi and Watts-Strogatz. The four datasets are Zachary’s karate club, USAir97, the Collaboration network and the Email network URV, respectively. Extensive experimental results prove the effectiveness of the proposed method.
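The paper re-defines entropy centrality through subgraph decompositions; as a simplified illustration of the underlying idea only, the sketch below scores a node by the Shannon entropy of its neighbours' normalized degrees, a textbook-style variant rather than the paper's model.

```python
import math
import networkx as nx

def neighbour_entropy(G, v):
    """Shannon entropy of the normalized degrees of v's neighbours."""
    degs = [G.degree(u) for u in G.neighbors(v)]
    if not degs:
        return 0.0
    total = sum(degs)
    probs = [d / total for d in degs]
    return -sum(p * math.log2(p) for p in probs if p > 0)

G = nx.karate_club_graph()  # one of the paper's datasets
scores = {v: neighbour_entropy(G, v) for v in G}
print("top-5 by entropy:", sorted(scores, key=scores.get, reverse=True)[:5])
```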

Journal ArticleDOI
Qian Ma, Jun Ma
TL;DR: Compared with known centrality measures such as DC, LC, BC, CC and KS, HC can evaluate the spreading ability of the nodes more accurately over most of the range of spreading probabilities and can better distinguish the spreading ability of nodes.
Abstract: Identifying the influential spreaders in complex networks has great theoretical and practical significance. In order to evaluate the spreading ability of the nodes, some centrality measures are usually computed, which include degree centrality (DC), betweenness centrality (BC), closeness centrality (CC), k-shell centrality (KS) and local centrality (LC). However, we observe that the performance of different centrality measures may change when these measures are used in a real network with different spreading probabilities. Specifically, DC performs well for small spreading probabilities and LC is more suitable for larger ones. To alleviate the sensitivity of these centrality measures to the spreading probability, we modify LC and then integrate it with DC by considering the spreading probability. We call the proposed measure hybrid degree centrality (HC). HC can take the advantages of DC or LC depending on the given spreading probability. We use the SIR model to evaluate the performance of HC in both real networks and artificial networks. Experimental results show that HC performs robustly under different spreading probabilities. Compared with known centrality measures such as DC, LC, BC, CC and KS, HC can evaluate the spreading ability of the nodes more accurately over most of the range of spreading probabilities. Furthermore, we show that our method can better distinguish the spreading ability of nodes.
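The paper's exact combination rule is not reproduced here; this is a hedged sketch of the idea: interpolate between degree centrality (good at low spreading probability) and a local centrality (here approximated by the two-hop neighbourhood size, an assumption), with the mixing weight tied to the spreading probability p.

```python
import networkx as nx

def two_hop_size(G, v):
    # nodes within distance 2, excluding v itself
    return len(nx.single_source_shortest_path_length(G, v, cutoff=2)) - 1

def hybrid_centrality(G, p, p_max=0.2):
    lam = min(p / p_max, 1.0)  # more weight on the local measure as p grows
    n = G.number_of_nodes()
    dc = {v: G.degree(v) / (n - 1) for v in G}
    lc = {v: two_hop_size(G, v) / (n - 1) for v in G}
    return {v: (1 - lam) * dc[v] + lam * lc[v] for v in G}

G = nx.karate_club_graph()
for p in (0.02, 0.15):
    hc = hybrid_centrality(G, p)
    print(f"p={p}: top node {max(hc, key=hc.get)}")
```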

Journal ArticleDOI
TL;DR: Interestingly, it is found that node degree is negatively correlated with the cascade depth, meaning that the failure of a high-degree node has a less severe effect than the failure of lower-degree nodes.

Journal ArticleDOI
TL;DR: The present study can be regarded as a “proof of concept” about a procedure for the classification of MRI markers between AD dementia, MCI, and normal old individuals, due to the small and not well-defined groups of AD and MCI patients.
Abstract: The human brain is a complex network of interacting regions. The grey matter regions of the brain are interconnected by white matter tracts, together forming one integrative complex network. In this article, we report our investigation of the potential of applying brain connectivity patterns as an aid in diagnosing Alzheimer’s disease and Mild Cognitive Impairment. We performed pattern analysis of graph theoretical measures derived from Diffusion Tensor Imaging (DTI) data representing structural brain networks of 45 subjects, consisting of 15 patients with Alzheimer’s disease (AD), 15 patients with Mild Cognitive Impairment (MCI), and 15 healthy subjects (CT). We considered pair-wise class combinations of subjects, defining three separate classification tasks, i.e. AD-CT, AD-MCI, and CT-MCI, and used an ensemble classification module to perform the classification tasks. Our ensemble framework with feature selection shows a promising performance, with classification accuracy of 83.3% for AD vs. MCI, 80% for AD vs. CT, and 70% for MCI vs. CT. Moreover, our findings suggest that AD can be related to graph measure abnormalities at Brodmann areas in the sensorimotor cortex and piriform cortex. In this way, node redundancy coefficient and load centrality in the primary motor cortex were recognized as good indicators of AD in contrast to MCI. In general, load centrality, betweenness centrality and closeness centrality were found to be the most relevant network measures, as they were the top identified features at different nodes. The present study can be regarded as a “proof of concept” for a procedure for the classification of MRI markers between AD dementia, MCI and normal old individuals, given the small and not well-defined groups of AD and MCI patients. Future studies with larger samples of subjects and more sophisticated patient exclusion criteria are necessary for the development of a more precise technique for clinical diagnosis.

Journal ArticleDOI
TL;DR: It is demonstrated that the centrality measures are affected differently by the edge effect, and that the same centrality measure is affected differently depending on the type of network distance used, which highlights the importance of defining the network's boundary in a way that is relevant to the research question.
Abstract: With increased interest in the use of network analysis to study the urban and regional environment, it is important to understand the sensitivity of centrality analysis results to the so-called “edge effect”. Most street network models have artificial boundaries, and there are principles that can be applied to minimise or eliminate the effect of the boundary condition. However, the extent of this impact has not been systematically studied and remains little understood. In this article we present an empirical study on the impact of different network model boundaries on the results of closeness and betweenness centrality analysis of street networks. The results demonstrate that the centrality measures are affected differently by the edge effect, and that the same centrality measure is affected differently depending on the type of network distance used. These results highlight the importance, in any study of street networks, of defining the network's boundary in a way that is relevant to the research question, and of selecting appropriate analysis parameters and statistics.

Journal ArticleDOI
TL;DR: The model is simulated in Network Simulator (ns2), and results show that the proposed model performs better than the schemes with random malicious nodes and existing game theory based approach in terms of throughput, retransmission attempts and data drop rate for different attacker and defender scenarios.

Journal ArticleDOI
TL;DR: An artificial bee colony algorithm is proposed: a swarm intelligence approach inspired by the foraging behaviour of honeybees, which exploits useful problem knowledge during neighbourhood exploration by considering the partial destruction and heuristic reconstruction of selected solutions.

Journal ArticleDOI
TL;DR: Two novel consensus protocols are proposed, weighted by the betweenness and eigenvector centralities of agents and links, which are determined by the interconnection structure of MASs.
Abstract: This technical note aims at constructing and analyzing an efficient framework for the leader-following consensus protocol in multi-agent systems (MASs). We propose two novel consensus protocols weighted by the betweenness and eigenvector centralities of agents and links, which are determined by the interconnection structure of MASs. The concept of centrality was introduced in the field of social science. Ultimately, the proposed protocols can be characterized not only by the number of each agent's neighbors, as utilized in existing works, but also by richer information about the agents obtained by considering these two centralities. By utilizing the Lyapunov method and some mathematical techniques, the leader-following guaranteed cost consensus conditions for MASs with the proposed protocols and sampled data are established in terms of linear matrix inequalities (LMIs). Based on the resulting consensus criteria, two new protocol design methods which utilize the betweenness and eigenvector centralities are proposed. Finally, some simulation results are given to illustrate the advantages of the proposed protocols in terms of robustness to the sampling interval and transient consensus performance.
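A toy, discretized sketch of centrality-weighted leader-following consensus (the paper's protocols, LMI conditions and sampled-data analysis are not reproduced): each follower moves toward its neighbours, with edge weights scaled by edge betweenness, and one pinned agent tracks the leader. The gains, step size and path topology are illustrative.

```python
import networkx as nx

G = nx.path_graph(5)                 # followers 0..4; node 0 is pinned to the leader
ebc = nx.edge_betweenness_centrality(G)
leader_state, step, pin_gain = 1.0, 0.05, 1.0

x = {v: float(v) for v in G}         # arbitrary initial states
for _ in range(2000):
    new = {}
    for i in G:
        # neighbour coupling, weighted by 1 + edge betweenness
        coupling = sum((1 + ebc[tuple(sorted((i, j)))]) * (x[j] - x[i])
                       for j in G.neighbors(i))
        pin = pin_gain * (leader_state - x[i]) if i == 0 else 0.0
        new[i] = x[i] + step * (coupling + pin)
    x = new
print("follower states after consensus:", {v: round(x[v], 3) for v in G})
```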

Journal ArticleDOI
TL;DR: Tools provided by social network theory are applied to the detection of potential influencers, from a marketing point of view, within online communities, and a method to detect significant actors based on centrality metrics is proposed.
Abstract: Purpose The purpose of this paper is to use the practical application of tools provided by social network theory for the detection of potential influencers from the point of view of marketing within online communities. It proposes a method to detect significant actors based on centrality metrics. Design/methodology/approach A matrix is proposed for the classification of the individuals that integrate a social network based on the combination of eigenvector centrality and betweenness centrality. The model is tested on a Facebook fan page for a sporting event. NodeXL is used to extract and analyze information. Semantic analysis and agent-based simulation are used to test the model. Findings The proposed model is effective in detecting actors with the potential to efficiently spread a message in relation to the rest of the community, which is achieved from their position within the network. Social network analysis (SNA) and the proposed model, in particular, are useful to detect subgroups of components with particular characteristics that are not evident from other analysis methods. Originality/value This paper approaches the application of SNA to online social communities from an empirical and experimental perspective. Its originality lies in combining information from two individual metrics to understand the phenomenon of influence. Online social networks are gaining relevance and the literature that exists in relation to this subject is still fragmented and incipient. This paper contributes to a better understanding of this phenomenon of networks and the development of better tools to manage it through the proposal of a novel method.
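A sketch of the proposed classification matrix: cross eigenvector centrality (connection to well-connected users) with betweenness centrality (bridging position), splitting at the medians. The quadrant labels and the stand-in graph are illustrative assumptions.

```python
import statistics
import networkx as nx

G = nx.les_miserables_graph()        # stand-in for a fan-page network
eig = nx.eigenvector_centrality(G, max_iter=1000)
bet = nx.betweenness_centrality(G)
e_med = statistics.median(eig.values())
b_med = statistics.median(bet.values())

def quadrant(v):
    hi_e, hi_b = eig[v] > e_med, bet[v] > b_med
    return {(True, True): "influencer",
            (True, False): "well-connected",
            (False, True): "bridge",
            (False, False): "peripheral"}[(hi_e, hi_b)]

influencers = [v for v in G if quadrant(v) == "influencer"]
print(f"{len(influencers)} potential influencers, e.g. {influencers[:5]}")
```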

Journal ArticleDOI
TL;DR: A method based on e-mail social network analysis is proposed to compare the communication behavior of managers who voluntarily quit their job with that of managers who decide to stay; results indicate that, on average, managers who quit had lower closeness centrality and less engaged conversations.