
Showing papers on "Degree distribution published in 2010"


Journal ArticleDOI
15 Apr 2010-Nature
TL;DR: In this paper, the authors develop a framework for understanding the robustness of interacting networks subject to cascading failures and present exact analytical solutions for the critical fraction of nodes that, on removal, will lead to a failure cascade and to a complete fragmentation of two interdependent networks.
Abstract: Complex networks have been studied intensively for a decade, but research still focuses on the limited case of a single, non-interacting network. Modern systems are coupled together and therefore should be modelled as interdependent networks. A fundamental property of interdependent networks is that failure of nodes in one network may lead to failure of dependent nodes in other networks. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of several interdependent networks. A dramatic real-world example of a cascade of failures ('concurrent malfunction') is the electrical blackout that affected much of Italy on 28 September 2003: the shutdown of power stations directly led to the failure of nodes in the Internet communication network, which in turn caused further breakdown of power stations. Here we develop a framework for understanding the robustness of interacting networks subject to such cascading failures. We present exact analytical solutions for the critical fraction of nodes that, on removal, will lead to a failure cascade and to a complete fragmentation of two interdependent networks. Surprisingly, a broader degree distribution increases the vulnerability of interdependent networks to random failure, which is opposite to how a single network behaves. Our findings highlight the need to consider interdependent network properties in designing robust networks.

3,651 citations


Journal ArticleDOI
TL;DR: It is shown that targeted transport processes without global topology knowledge are maximally efficient, according to all efficiency measures, in networks with strongest heterogeneity and clustering, and that this efficiency is remarkably robust with respect to even catastrophic disturbances and damages to the network structure.
Abstract: We develop a geometric framework to study the structure and function of complex networks. We assume that hyperbolic geometry underlies these networks, and we show that with this assumption, heterogeneous degree distributions and strong clustering in complex networks emerge naturally as simple reflections of the negative curvature and metric property of the underlying hyperbolic geometry. Conversely, we show that if a network has some metric structure, and if the network degree distribution is heterogeneous, then the network has an effective hyperbolic geometry underneath. We then establish a mapping between our geometric framework and statistical mechanics of complex networks. This mapping interprets edges in a network as noninteracting fermions whose energies are hyperbolic distances between nodes, while the auxiliary fields coupled to edges are linear functions of these energies or distances. The geometric network ensemble subsumes the standard configuration model and classical random graphs as two limiting cases with degenerate geometric structures. Finally, we show that targeted transport processes without global topology knowledge, made possible by our geometric framework, are maximally efficient, according to all efficiency measures, in networks with strongest heterogeneity and clustering, and that this efficiency is remarkably robust with respect to even catastrophic disturbances and damages to the network structure.
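The first claim — heterogeneous degrees and strong clustering arising from hyperbolic geometry — can be reproduced with a toy hyperbolic random graph. The construction below is our sketch, not the authors' code; the parameter choice (curvature −1, radial exponent α = 1, which targets a degree exponent near 3) and the disk radius formula are standard but chosen by us for illustration.

```python
import math
import random

def hyperbolic_graph(n, avg_degree, rng):
    """Scatter n points quasi-uniformly in a hyperbolic disk of radius R and
    connect pairs whose hyperbolic distance is below R."""
    R = 2.0 * math.log(8.0 * n / (math.pi * avg_degree))
    pts = []
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        # inverse-CDF sampling of the radial density ~ sinh(r)
        r = math.acosh(1.0 + rng.random() * (math.cosh(R) - 1.0))
        pts.append((r, theta))
    adj = {i: set() for i in range(n)}
    for i in range(n):
        r1, t1 = pts[i]
        for j in range(i + 1, n):
            r2, t2 = pts[j]
            dtheta = math.pi - abs(math.pi - abs(t1 - t2))
            cosh_d = (math.cosh(r1) * math.cosh(r2)
                      - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
            if math.acosh(max(cosh_d, 1.0)) < R:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def avg_clustering(adj):
    """Mean local clustering coefficient over nodes of degree >= 2."""
    vals = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        nbrs = list(nbrs)
        links = sum(1 for a in range(k) for b in range(a + 1, k)
                    if nbrs[b] in adj[nbrs[a]])
        vals.append(2.0 * links / (k * (k - 1)))
    return sum(vals) / len(vals)

rng = random.Random(7)
g = hyperbolic_graph(500, 10.0, rng)
degs = [len(g[v]) for v in g]
mean_deg = sum(degs) / len(degs)
max_deg = max(degs)
clus = avg_clustering(g)
print(mean_deg, max_deg, clus)
```

Nodes near the disk centre acquire very high degrees (the heterogeneity), and the triangle inequality of the metric produces strong clustering, exactly as the abstract argues.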

1,002 citations


Journal ArticleDOI
TL;DR: This work presents the first empirical large-scale verification of the long-standing structural balance theory, by focusing on the specific multiplex network of friendship and enmity relations, and explores how the interdependence of different network types determines the organization of the social system.
Abstract: The capacity to collect fingerprints of individuals in online media has revolutionized the way researchers explore human society. Social systems can be seen as a nonlinear superposition of a multitude of complex social networks, where nodes represent individuals and links capture a variety of different social relations. Much emphasis has been put on the network topology of social interactions; however, the multidimensional nature of these interactions has largely been ignored, mostly because of a lack of data. Here, for the first time, we analyze a complete, multirelational, large social network of a society consisting of the 300,000-odd players of a massive multiplayer online game. We extract networks of six different types of one-to-one interactions between the players. Three of them carry a positive connotation (friendship, communication, trade), three a negative (enmity, armed aggression, punishment). We first analyze these types of networks as separate entities and find that negative interactions differ from positive interactions by their lower reciprocity, weaker clustering, and fatter-tail degree distribution. We then explore how the interdependence of different network types determines the organization of the social system. In particular, we study correlations and overlap between different types of links and demonstrate the tendency of individuals to play different roles in different networks. As a demonstration of the power of the approach, we present the first empirical large-scale verification of the long-standing structural balance theory, by focusing on the specific multiplex network of friendship and enmity relations.

886 citations


Journal ArticleDOI
TL;DR: It is conjectured that on quenched scale-rich networks the threshold of generic epidemic models is vanishing or finite depending on the presence or absence of a steady state.
Abstract: We study the threshold of epidemic models in quenched networks with degree distribution given by a power law. For the susceptible-infected-susceptible model the activity threshold λ_c vanishes in the large size limit on any network whose maximum degree k_max diverges with the system size, at odds with heterogeneous mean-field (HMF) theory. The vanishing of the threshold has nothing to do with the scale-free nature of the network but stems instead from the largest hub in the system being active for any spreading rate λ > 1/√k_max and playing the role of a self-sustained source that spreads the infection to the rest of the system. The susceptible-infected-removed model displays instead agreement with HMF theory and a finite threshold for scale-rich networks. We conjecture that on quenched scale-rich networks the threshold of generic epidemic models is vanishing or finite depending on the presence or absence of a steady state.
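The hub-as-self-sustained-source mechanism is easy to see on a bare star graph. The toy simulation below is ours (discrete-time SIS with a synchronous, reactive update; the parameters and conventions are our choices, not the paper's): with hub degree k = 400, a rate above 1/√k keeps the infection alive indefinitely, while a rate below it dies out quickly.

```python
import random

def sis_star_survival(k, lam, mu, max_steps, rng):
    """Discrete-time SIS on a star graph (1 hub + k leaves), all nodes
    initially infected. Returns the step at which the epidemic dies out,
    or max_steps if it is still alive then."""
    hub = True
    inf_leaves = k
    for t in range(1, max_steps + 1):
        # infections are drawn against the state at the start of the step
        stay = sum(1 for _ in range(inf_leaves) if rng.random() > mu)
        new = 0
        if hub:
            new = sum(1 for _ in range(k - inf_leaves) if rng.random() < lam)
        # reactive convention: a recovering hub may be reinfected in-step
        new_hub = (hub and rng.random() > mu) or \
                  (inf_leaves > 0 and
                   rng.random() < 1.0 - (1.0 - lam) ** inf_leaves)
        inf_leaves = stay + new
        hub = new_hub
        if not hub and inf_leaves == 0:
            return t
    return max_steps

rng = random.Random(0)
k = 400                                                # 1/sqrt(k) = 0.05
above_thr = sis_star_survival(k, 0.15, 0.5, 2000, rng)  # lam > 1/sqrt(k)
below_thr = sis_star_survival(k, 0.01, 0.5, 2000, rng)  # lam < 1/sqrt(k)
print(above_thr, below_thr)
```

Above the hub threshold the hub and a finite fraction of leaves keep reinfecting each other for as long as the simulation runs; below it the infection decays geometrically and is extinct within tens of steps.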

619 citations


Journal ArticleDOI
TL;DR: Investigating how variations in parcellation templates affect key graph analytic measures of functional brain organization using resting-state fMRI in 30 healthy volunteers found that gross inferences regarding network topology were robust to the template used, but that both absolute values of, and individual differences in, specific parameters such as path length, clustering, small-worldness, and degree distribution descriptors varied considerably.
Abstract: Graph analysis has become an increasingly popular tool for characterizing topological properties of brain connectivity networks. Within this approach, the brain is modeled as a graph comprising N nodes connected by M edges. In functional magnetic resonance imaging (fMRI) studies, the nodes typically represent brain regions and the edges some measure of interaction between them. These nodes are commonly defined using a variety of regional parcellation templates, which can vary both in the volume sampled by each region, and the number of regions parcellated. Here, we sought to investigate how such variations in parcellation templates affect key graph analytic measures of functional brain organization using resting-state fMRI in thirty healthy volunteers. Seven different parcellation resolutions (84, 91, 230, 438, 890, 1314 and 4320 regions) were investigated. We found that gross inferences regarding network topology, such as whether the brain is small-world or scale-free, were robust to the template used, but that both absolute values of, and individual differences in, specific parameters such as path length, clustering, small-worldness and degree distribution descriptors varied considerably across the resolutions studied. These findings underscore the need to consider the effect that a specific parcellation approach has on graph analytic findings in human fMRI studies, and indicate that results obtained using different templates may not be directly comparable.

410 citations


Journal ArticleDOI
TL;DR: This work constructed functional brain networks at multiple resolutions using the same resting-state fMRI data, and compared various network metrics, degree distribution, and localization of nodes of interest, finding that the networks with higher resolutions exhibited the properties of small-world networks more prominently.

358 citations


Journal ArticleDOI
TL;DR: An algorithm is proposed that generates random-topology power grids featuring the same topological and electrical characteristics found in the real data.
Abstract: In order to design an efficient communication scheme and examine the efficiency of any networked control architecture in smart grid applications, we need to characterize statistically its information source, namely the power grid itself. Investigating the statistical properties of power grids has the immediate benefit of providing a natural simulation platform, producing a large number of power grid test cases with realistic topologies, with scalable network size, and with realistic electrical parameter settings. The second benefit is that one can start analyzing the performance of decentralized control algorithms over information networks whose topology matches that of the underlying power network and use network scientific approaches to determine analytically if these architectures would scale well. With these motivations, in this paper we study both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. The most interesting discoveries include: the power grid is sparsely connected with obvious small-world properties; its nodal degree distribution can be well fitted by a mixture distribution coming from the sum of a truncated geometric random variable and an irregular discrete random variable; the power grid has very distinctive graph spectral density and its algebraic connectivity scales as a power function of the network size; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a clipped double Pareto lognormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random topology power grids featuring the same topology and electrical characteristics found from the real data.

271 citations


Journal ArticleDOI
TL;DR: The horizontal visibility algorithm is used to characterize and distinguish between correlated stochastic, uncorrelated and chaotic processes, and it is shown that in every case the series maps into a graph with exponential degree distribution P(k)∼exp(-λk), where the value of λ characterizes the specific process.
Abstract: Nonlinear time series analysis is an active field of research that studies the structure of complex signals in order to derive information about the process that generated those series, for understanding, modeling and forecasting purposes. In recent years, several methods mapping time series to network representations have been proposed. The purpose is to investigate the properties of the series through graph-theoretical tools recently developed within the celebrated complex network theory. Among other methods, the so-called visibility algorithm has received much attention, since it has been shown that series correlations are captured by the algorithm and translated into the associated graph, opening the possibility of building fruitful connections between time series analysis, nonlinear dynamics, and graph theory. Here we use the horizontal visibility algorithm to characterize and distinguish between correlated stochastic, uncorrelated and chaotic processes. We show that in every case the series maps into a graph with exponential degree distribution P(k) ∼ exp(-λk), where the value of λ characterizes the specific process. The frontier between chaotic and correlated stochastic processes, λ = ln(3/2), can be calculated exactly, and further analytical developments confirm the results provided by extensive numerical simulations and (short) experimental time series.
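The horizontal visibility graph is simple to implement: two points are linked when every point strictly between them is lower than both. The sketch below (our code, not the authors') builds it in O(n) with a monotone stack and checks the known i.i.d.-noise result P(k) = (1/3)(2/3)^(k−2), i.e. an exponential distribution with λ = ln(3/2) and mean degree 4.

```python
import random

def hvg_degrees(series):
    """Degree list of the horizontal visibility graph of `series`:
    i and j are linked iff every point strictly between them lies below
    both. Built in O(n) with a monotonically decreasing index stack."""
    n = len(series)
    degree = [0] * n
    stack = []  # indices whose values are decreasing
    for i, x in enumerate(series):
        while stack and series[stack[-1]] < x:
            j = stack.pop()          # j is visible from i ...
            degree[i] += 1
            degree[j] += 1
        if stack:                    # ... and so is the first point >= x
            degree[i] += 1
            degree[stack[-1]] += 1
        stack.append(i)
    return degree

rng = random.Random(3)
series = [rng.random() for _ in range(5000)]
deg = hvg_degrees(series)
mean_k = sum(deg) / len(deg)
frac_k2 = sum(1 for k in deg if k == 2) / len(deg)
print(mean_k, frac_k2)  # i.i.d. theory: mean degree 4, P(k=2) = 1/3
```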

227 citations


Journal ArticleDOI
08 Apr 2010-PLOS ONE
TL;DR: This work proposes an efficient, polynomial-time algorithm that generates statistically independent graph samples with a given, arbitrary, degree sequence, and argues that for large N, and for degree sequences admitting many realizations, the sample weights are expected to have a lognormal distribution.

Abstract: Uniform sampling from graphical realizations of a given degree sequence is a fundamental component in simulation-based measurements of network observables, with applications ranging from epidemics, through social networks to Internet modeling. Existing graph sampling methods are either link-swap based (Markov-Chain Monte Carlo algorithms) or stub-matching based (the Configuration Model). Both types are ill-controlled, with typically unknown mixing times for link-swap methods and uncontrolled rejections for the Configuration Model. Here we propose an efficient, polynomial time algorithm that generates statistically independent graph samples with a given, arbitrary, degree sequence. The algorithm provides a weight associated with each sample, allowing the observable to be measured either uniformly over the graph ensemble, or, alternatively, with a desired distribution. Unlike other algorithms, this method always produces a sample, without back-tracking or rejections. Using central-limit-theorem-based reasoning, we argue that, for large N and for degree sequences admitting many realizations, the sample weights are expected to have a lognormal distribution. As examples, we apply our algorithm to generate networks with degree sequences drawn from power-law distributions and from binomial distributions.
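The "uncontrolled rejections" of stub matching — the baseline this paper improves on — are easy to demonstrate. Our sketch below (not the authors' algorithm) pairs stubs uniformly at random and rejects the whole attempt whenever a self-loop or multi-edge appears; for heavier-tailed degree sequences the rejection rate explodes, which is exactly the problem the weighted sequential sampler avoids.

```python
import random

def sample_simple_graph(degrees, rng, max_tries=10000):
    """Configuration-model stub matching with rejection: pair stubs
    uniformly at random and restart whenever a self-loop or multi-edge
    appears. Returns the edge set of a uniform simple realization."""
    assert sum(degrees) % 2 == 0, "degree sum must be even"
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    for _ in range(max_tries):
        rng.shuffle(stubs)
        edges = set()
        ok = True
        for a, b in zip(stubs[::2], stubs[1::2]):
            if a == b or (min(a, b), max(a, b)) in edges:
                ok = False
                break
            edges.add((min(a, b), max(a, b)))
        if ok:
            return edges
    raise RuntimeError("too many rejections")

rng = random.Random(11)
n = 100
# a modest binomial degree sequence keeps the rejection rate tolerable
degrees = [sum(1 for _ in range(4) if rng.random() < 0.5) for _ in range(n)]
if sum(degrees) % 2:      # make the stub count even
    degrees[0] += 1
edges = sample_simple_graph(degrees, rng)
realized = [0] * n
for a, b in edges:
    realized[a] += 1
    realized[b] += 1
print(realized == degrees)
```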

192 citations


Journal ArticleDOI
TL;DR: It is found that an outbreak of a first pathogen that confers immunity against a second pathogen, which subsequently spreads on a second network connecting the same set of nodes, protects against the second outbreak most effectively when the degrees on the two networks are positively correlated.
Abstract: The interaction between multiple pathogens spreading on networks connecting a given set of nodes presents an ongoing theoretical challenge. Here, we aim to understand such interactions by studying bond percolation of two different processes on overlay networks of arbitrary joint degree distribution. We find that an outbreak of a first pathogen providing immunity to another one spreading subsequently on a second network connecting the same set of nodes does so most effectively if the degrees on the two networks are positively correlated. In that case, the protection is stronger the more heterogeneous the degree distributions of the two networks are. If, on the other hand, the degrees are uncorrelated or negatively correlated, increasing heterogeneity reduces the potential of the first process to prevent the second one from reaching epidemic proportions. We generalize these results to cases where the edges of the two networks overlap to arbitrary amount, or where the immunity granted is only partial. If both processes grant immunity to each other, we find a wide range of possible situations of coexistence or mutual exclusion, depending on the joint degree distribution of the underlying networks and the amount of immunity granted mutually. These results generalize the concept of a coexistence threshold and illustrate the impact of large-scale network structure on the interaction between multiple spreading agents.

179 citations


Proceedings ArticleDOI
25 Oct 2010
TL;DR: This paper quantifies the degree bias of BFS sampling, and calculates the node degree distribution expected to be observed by BFS as a function of the fraction of covered nodes, in a random graph RG(p_k) with a given degree distribution p_k.

Abstract: Breadth First Search (BFS) and other graph traversal techniques are widely used for measuring large unknown graphs, such as online social networks. It has been empirically observed that incomplete BFS is biased toward high degree nodes. In contrast to more studied sampling techniques, such as random walks, the bias of BFS has not been characterized to date. In this paper, we quantify the degree bias of BFS sampling. In particular, we calculate the node degree distribution expected to be observed by BFS as a function of the fraction of covered nodes, in a random graph RG(p_k) with a given (and arbitrary) degree distribution p_k. Furthermore, we also show that, for RG(p_k), all commonly used graph traversal techniques (BFS, DFS, Forest Fire, and Snowball Sampling) lead to the same bias, and we show how to correct for this bias. To give a broader perspective, we compare this class of exploration techniques to random walks that are well-studied and easier to analyze. Next, we study by simulation the effect of graph properties not captured directly by our model. We find that the bias gets amplified in graphs with strong positive assortativity. Finally, we demonstrate the above results by sampling the Facebook social network, and we provide some practical guidelines for graph sampling in practice.
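The bias is easy to reproduce numerically. The sketch below (ours, not the paper's code) builds a heavy-tailed random graph, runs an incomplete BFS covering 10% of the nodes, and compares the mean degree of the visited nodes with the true mean: BFS discovers high-degree nodes early and overestimates it.

```python
import random
from collections import deque

def power_law_degrees(n, gamma, kmin, kmax, rng):
    """Integer degrees drawn from p(k) ~ k^-gamma on [kmin, kmax]."""
    ks = list(range(kmin, kmax + 1))
    weights = [k ** -gamma for k in ks]
    return rng.choices(ks, weights=weights, k=n)

def simple_graph_from_degrees(degrees, rng):
    """Stub matching, then drop self-loops and duplicate edges (a rough
    simplification; realized degrees fall slightly below the targets)."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2:
        stubs.append(0)
    rng.shuffle(stubs)
    adj = {v: set() for v in range(len(degrees))}
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def bfs_sample(adj, seed, target):
    """First `target` nodes discovered by BFS from `seed`."""
    seen, queue, order = {seed}, deque([seed]), []
    while queue and len(order) < target:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

rng = random.Random(5)
n = 3000
adj = simple_graph_from_degrees(power_law_degrees(n, 2.5, 2, 50, rng), rng)
true_mean = sum(len(adj[v]) for v in adj) / n
sample = []
for s in range(n):                     # find a seed in a big-enough component
    sample = bfs_sample(adj, s, 300)   # cover only 10% of the nodes
    if len(sample) == 300:
        break
sample_mean = sum(len(adj[v]) for v in sample) / len(sample)
print(true_mean, sample_mean)          # BFS overestimates the mean degree
```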

Journal ArticleDOI
TL;DR: This work developed the Graph Evolution Rule Miner software to extract graph evolution rules and applied these rules to predict future network evolution, and investigated a variety of network formation strategies, showing that edge locality plays a critical role in network evolution.
Abstract: With the increasing availability of large social network data, there is also an increasing interest in analyzing how those networks evolve over time. Traditionally, the analysis of social networks has focused only on a single snapshot of a network. Researchers have already verified that social networks follow power-law degree distribution, have a small diameter, and exhibit small-world structure and community structure. Attempts to explain the properties of social networks have led to dynamic models inspired by the preferential attachment model, which assumes that new network nodes have a higher probability of forming links with high-degree nodes, creating a rich-get-richer effect. Although some effort has been devoted to analyzing global properties of social network evolution, not much has been done to study graph evolution at a microscopic level. A first step in this direction investigated a variety of network formation strategies, showing that edge locality plays a critical role in network evolution. We propose a different approach. Following the paradigm of association rules and frequent-pattern mining, our work searches for typical patterns of structural changes in dynamic networks. Mining for such local patterns is a computationally challenging task that can provide further insight into the increasing amount of evolving network data. Beyond the notion of graph evolution rules (GERs), a concept that we introduced in an earlier work, we developed the Graph Evolution Rule Miner (GERM) software to extract such rules and applied these rules to predict future network evolution.

Journal ArticleDOI
TL;DR: In this article, the authors study the statistical properties of complex networks constructed from time series of energy dissipation rates in three-dimensional fully developed turbulence using the visibility algorithm and find that the skeleton of the visibility network exhibits excellent allometric scaling with the scaling exponent η = 1.163 ± 0.005.
Abstract: We study the statistical properties of complex networks constructed from time series of energy dissipation rates in three-dimensional fully developed turbulence using the visibility algorithm. The degree distribution is found to have a power-law tail with the tail exponent α = 3.0. The exponential relation between the number of boxes N_B and the box size l_B based on the edge-covering box-counting method illustrates that the network is not self-similar, which is also confirmed by the hub-hub attraction according to the visibility algorithm. In addition, it is found that the skeleton of the visibility network exhibits excellent allometric scaling with the scaling exponent η = 1.163 ± 0.005.

Journal ArticleDOI
TL;DR: Watts–Strogatz small-world networks are found to have a heterogeneous betweenness distribution even though their degree distribution is homogeneous; a study of cascading failures on them suggests that one has to be very careful when using terms such as homogeneous network and heterogeneous network, unless the distribution referred to is specified.
Abstract: In this paper, we study cascading failures in Watts–Strogatz small-world networks. We find that this network model has a heterogeneous betweenness distribution, although its degree distribution is homogeneous. Further study shows that this small-world network is robust to random attack but fragile to intentional attack in the cascading-failure scenario. In comparison with standard random graphs and scale-free networks, our result indicates that the robust-yet-fragile property in the cascading-failure scenario is mainly related to heterogeneous betweenness rather than to the network degree distribution. Thus, it suggests that we have to be very careful when we use terms such as homogeneous network and heterogeneous network, unless the distribution we refer to is specified.
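The central observation — homogeneous degrees but heterogeneous betweenness — can be checked directly. The sketch below is ours: it builds a Watts–Strogatz graph, computes exact betweenness with Brandes' algorithm, and compares the coefficients of variation of the two distributions.

```python
import random
from collections import deque

def watts_strogatz(n, k, p, rng):
    """Ring lattice (k/2 neighbours on each side), each edge rewired w.p. p."""
    adj = {i: set() for i in range(n)}
    edges = []
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            adj[i].add(j); adj[j].add(i)
            edges.append((i, j))
    for (i, j) in edges:
        if rng.random() < p:
            new = rng.randrange(n)
            if new != i and new not in adj[i]:
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(new); adj[new].add(i)
    return adj

def betweenness(adj):
    """Brandes' algorithm for unweighted graphs."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        order, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def cv(values):
    """Coefficient of variation: std / mean."""
    m = sum(values) / len(values)
    return (sum((x - m) ** 2 for x in values) / len(values)) ** 0.5 / m

rng = random.Random(2)
g = watts_strogatz(300, 6, 0.1, rng)
cv_degree = cv([len(g[v]) for v in g])
cv_betweenness = cv(list(betweenness(g).values()))
print(cv_degree, cv_betweenness)  # betweenness is far more heterogeneous
```

The few rewired shortcuts carry a disproportionate share of shortest paths, so betweenness spreads out dramatically while degrees stay tightly concentrated around k.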

Journal ArticleDOI
TL;DR: In this paper, the authors study a nation-wide production network comprising a million firms and millions of supplier-customer links, and show that it exhibits a scale-free degree distribution, disassortativity, correlation of degree with firm size, and a community structure with sectoral and regional modules.
Abstract: Production in an economy is a set of firms' activities as suppliers and customers; a firm buys goods from other firms, adds value, and sells products to others in a giant network of production. Empirical study is lacking despite the fact that the structure of the production network is important for understanding and modeling many aspects of economic dynamics. We study a nation-wide production network comprising a million firms and millions of supplier-customer links using recent statistical methods developed in physics. Our empirical analysis reveals a scale-free degree distribution, disassortativity, correlation of degree with firm size, and a community structure with sectoral and regional modules. Since suppliers usually provide credit to their customers, who supply it to theirs in turn, each link is actually a creditor-debtor relationship. We also study chains of failures or bankruptcies that take place along those links in the network, and the corresponding avalanche-size distribution.

Posted Content
TL;DR: Frontier sampling, as discussed by the authors, is a new sampling method that uses m dependent random walkers and exhibits all of the nice sampling properties of a regular random walk; it is more suitable than random vertex sampling for sampling the tail of the degree distribution of a graph.
Abstract: Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as (independent) random vertex and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new $m$-dimensional random walk that uses $m$ dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling to sample the tail of the degree distribution of the graph.
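A minimal sketch of the idea (ours, under our own simplifying assumptions about the update rule): keep m walkers, at each step move the one chosen with probability proportional to its current vertex degree, and re-weight the degree-biased samples by 1/degree to recover uniform averages.

```python
import random

def er_graph(n, mean_degree, rng):
    """Erdos-Renyi graph stored as adjacency sets."""
    p = mean_degree / (n - 1)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def frontier_sample(adj, m, steps, rng):
    """Frontier-sampling sketch: m dependent walkers; each step, pick a
    walker with probability proportional to its vertex degree, move it to
    a uniform random neighbour, and record the visited vertex."""
    nodes = [v for v in adj if adj[v]]
    walkers = [rng.choice(nodes) for _ in range(m)]
    samples = []
    for _ in range(steps):
        weights = [len(adj[v]) for v in walkers]
        i = rng.choices(range(m), weights=weights)[0]
        v = rng.choice(list(adj[walkers[i]]))
        walkers[i] = v
        samples.append(v)
    return samples

def mean_degree_estimate(adj, samples):
    """Importance-weight the degree-biased samples by 1/deg to recover
    the uniform mean degree (a Hansen-Hurwitz style estimator)."""
    inv = [1.0 / len(adj[v]) for v in samples]
    return len(samples) / sum(inv)

rng = random.Random(9)
n = 1000
adj = er_graph(n, 6.0, rng)
true_mean = sum(len(adj[v]) for v in adj) / n
est = mean_degree_estimate(adj, frontier_sample(adj, 50, 20000, rng))
print(true_mean, est)
```

Because the walk's stationary distribution weights a vertex by its degree, high-degree (tail) vertices are visited far more often than under uniform vertex sampling, while the 1/degree correction keeps estimates asymptotically unbiased.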

Journal ArticleDOI
TL;DR: Recent models of highly clustered networks are used to show that the presence of triangles leads to a larger bond percolation threshold, compared with the threshold in an unclustered network with the same degree distribution and correlation structure.
Abstract: The question of how clustering (nonzero density of triangles) in networks affects their bond percolation threshold has important applications in a variety of disciplines. Recent advances in modeling highly clustered networks are employed here to analytically study the bond percolation threshold. In comparison to the threshold in an unclustered network with the same degree distribution and correlation structure, the presence of triangles in these model networks is shown to lead to a larger bond percolation threshold (i.e. clustering increases the epidemic threshold or decreases resilience of the network to random edge deletion).
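The effect can be seen numerically by comparing a triangle-rich graph with a degree-preserving rewired version of itself (the swaps keep every degree but destroy the triangles). The simulation below is our illustration, not the paper's analytical calculation.

```python
import random
from collections import deque

def triangle_graph(n, n_triangles, rng):
    """Clustered graph: overlay random triangles on n nodes."""
    adj = {i: set() for i in range(n)}
    for _ in range(n_triangles):
        a, b, c = rng.sample(range(n), 3)
        adj[a] |= {b, c}; adj[b] |= {a, c}; adj[c] |= {a, b}
    return adj

def degree_preserving_rewire(adj, swaps, rng):
    """Double-edge swaps: keep every degree, destroy the triangles."""
    adj = {v: set(s) for v, s in adj.items()}
    edges = [(a, b) for a in adj for b in adj[a] if a < b]
    for _ in range(swaps):
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4 or d in adj[a] or b in adj[c]:
            continue  # would create a self-loop or multi-edge
        adj[a].discard(b); adj[b].discard(a)
        adj[c].discard(d); adj[d].discard(c)
        adj[a].add(d); adj[d].add(a)
        adj[c].add(b); adj[b].add(c)
        edges[i] = (min(a, d), max(a, d))
        edges[j] = (min(c, b), max(c, b))
    return adj

def largest_component_fraction(adj, keep_p, rng):
    """Bond percolation: keep each edge w.p. keep_p, return the
    largest-component share of the nodes."""
    kept = {v: set() for v in adj}
    for a in adj:
        for b in adj[a]:
            if a < b and rng.random() < keep_p:
                kept[a].add(b); kept[b].add(a)
    seen, best = set(), 0
    for s in kept:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for v in kept[u]:
                if v not in comp:
                    comp.add(v); queue.append(v)
        seen |= comp
        best = max(best, len(comp))
    return best / len(adj)

rng = random.Random(4)
clustered = triangle_graph(600, 600, rng)
rewired = degree_preserving_rewire(clustered, 20000, rng)
p = 0.25
s_clu = sum(largest_component_fraction(clustered, p, rng) for _ in range(20)) / 20
s_rew = sum(largest_component_fraction(rewired, p, rng) for _ in range(20)) / 20
print(s_clu, s_rew)  # the clustered graph percolates less easily
```

Triangle edges are partly redundant (they connect nodes already joined through a short path), so at the same bond-occupation probability the clustered graph supports a smaller giant component, consistent with the higher threshold derived in the paper.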

Journal ArticleDOI
TL;DR: In this article, the authors considered an SIR epidemic model propagating on a configuration model network, where the degree distribution of the vertices is given and where the edges are randomly matched, and the evolution of the epidemic is summed up into three measure-valued equations that describe the degrees of the susceptible individuals and the number of edges from an infectious or removed individual to the set of susceptibles.
Abstract: We consider an SIR epidemic model propagating on a configuration model network, where the degree distribution of the vertices is given and where the edges are randomly matched. The evolution of the epidemic is summarized by three measure-valued equations that describe the degrees of the susceptible individuals and the number of edges from an infectious or removed individual to the set of susceptibles. These three degree distributions are sufficient to describe the course of the disease. The large-population limit is investigated. As a corollary, this provides a rigorous proof of the equations obtained by Volz [Mathematical Biology 56 (2008) 293–310].

Posted Content
TL;DR: A new tool is developed, called BehaviorSearch, which uses genetic algorithms to search through the parameter-space of agent-based models, which provides insight into the interaction between strategies and network structure and finds a correlation between the optimal seeding budget for a network, and the inequality of the degree distribution.
Abstract: One method of viral marketing involves seeding certain consumers within a population to encourage faster adoption of the product throughout the entire population. However, determining how many and which consumers within a particular social network should be seeded to maximize adoption is challenging. We define a strategy space for consumer seeding by weighting a combination of network characteristics such as average path length, clustering coefficient, and degree. We measure strategy effectiveness by simulating adoption on a Bass-like agent-based model, with five different social network structures: four classic theoretical models (random, lattice, small-world, and preferential attachment) and one empirical (extracted from Twitter friendship data). To discover good seeding strategies, we have developed a new tool, called BehaviorSearch, which uses genetic algorithms to search through the parameter-space of agent-based models. This evolutionary search also provides insight into the interaction between strategies and network structure. Our results show that one simple strategy (ranking by node degree) is near-optimal for the four theoretical networks, but that a more nuanced strategy performs significantly better on the empirical Twitter-based network. We also find a correlation between the optimal seeding budget for a network, and the inequality of the degree distribution.
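The degree-ranked seeding baseline is straightforward to reproduce on one of the theoretical networks. The sketch below is ours and deliberately simplified (a fractional-influence contagion standing in for the full Bass-like agent-based model; all parameters are our choices): it compares seeding the top-degree nodes against random seeds of the same budget on a preferential-attachment graph.

```python
import random

def preferential_attachment(n, m, rng):
    """Barabasi-Albert-style graph: each new node links to m existing nodes
    chosen with probability proportional to degree (repeated-nodes trick)."""
    adj = {i: set() for i in range(n)}
    targets, repeated = list(range(m)), []
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t); adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def simulate_adoption(adj, seeds, q, steps, rng):
    """Simplified Bass-like contagion: each step, a non-adopter adopts with
    probability q times the fraction of its neighbours that have adopted."""
    adopted = set(seeds)
    for _ in range(steps):
        new = set()
        for v in adj:
            if v in adopted or not adj[v]:
                continue
            frac = sum(1 for u in adj[v] if u in adopted) / len(adj[v])
            if frac and rng.random() < q * frac:
                new.add(v)
        adopted |= new
    return len(adopted)

rng = random.Random(6)
g = preferential_attachment(500, 3, rng)
budget, q, steps, trials = 5, 0.3, 15, 20
by_degree = sorted(g, key=lambda v: len(g[v]), reverse=True)[:budget]
deg_runs = [simulate_adoption(g, by_degree, q, steps, rng)
            for _ in range(trials)]
rand_runs = [simulate_adoption(g, rng.sample(list(g), budget), q, steps, rng)
             for _ in range(trials)]
mean_deg_seed = sum(deg_runs) / trials
mean_rand_seed = sum(rand_runs) / trials
print(mean_deg_seed, mean_rand_seed)  # hub seeding spreads adoption faster
```

On a preferential-attachment graph the hubs reach a large share of the population directly, so degree-ranked seeding outpaces random seeding, in line with the paper's finding that this simple strategy is near-optimal on the theoretical networks.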

Journal ArticleDOI
TL;DR: This study is unique in being the first to derive mathematical models for PPLive's distributions of node degree, session length, and peer participation in simultaneous overlays, and in finding that PPLive overlays are similar in structure to random graphs and thus more robust and resilient to massive node failure.
Abstract: This article presents results from our measurement and modeling efforts on the large-scale peer-to-peer (p2p) overlay graphs spanned by the PPLive system, the most popular and largest p2p IPTV (Internet Protocol Television) system today. Unlike other previous studies on PPLive, which focused on either network-centric or user-centric measurements of the system, our study is unique in (a) focusing on PPLive overlay-specific characteristics, and (b) being the first to derive mathematical models for its distributions of node degree, session length, and peer participation in simultaneous overlays.Our studies reveal characteristics of multimedia streaming p2p overlays that are markedly different from existing file-sharing p2p overlays. Specifically, we find that: (1) PPLive overlays are similar to random graphs in structure and thus more robust and resilient to the massive failure of nodes, (2) Average degree of a peer in the overlay is independent of the channel population size and the node degree distribution can be fitted by a piecewise function, (3) The availability correlation between PPLive peer pairs is bimodal, that is, some pairs have highly correlated availability, while others have no correlation, (4) Unlike p2p file-sharing peers, PPLive peers are impatient and session lengths (discretized, per channel) are typically geometrically distributed, (5) Channel population size is time-sensitive, self-repeated, event-dependent, and varies more than in p2p file-sharing networks, (6) Peering relationships are slightly locality-aware, and (7) Peer participation in simultaneous overlays follows a Zipf distribution. We believe that our findings can be used to understand current large-scale p2p streaming systems for future planning of resource usage, and to provide useful and practical hints for future design of large-scale p2p streaming systems.

Journal ArticleDOI
TL;DR: In this article, the authors provide an empirical investigation aimed at uncovering the statistical properties of intricate stock trading networks, based on the order flow data of a highly liquid stock (Shenzhen Development Bank) listed on the Shenzhen Stock Exchange during the whole year of 2003.
Abstract: We provide an empirical investigation aimed at uncovering the statistical properties of intricate stock trading networks based on the order flow data of a highly liquid stock (Shenzhen Development Bank) listed on Shenzhen Stock Exchange during the whole year of 2003. By reconstructing the limit order book, we can extract detailed information of each executed order for each trading day and demonstrate that the trade size distributions for different trading days exhibit power-law tails and that most of the estimated power-law exponents are well within the Levy stable regime. Based on the records of order matching among investors, we can construct a stock trading network for each trading day, in which the investors are mapped into nodes and each transaction is translated as a direct edge from the seller to the buyer with the trade size as its weight. We find that all the trading networks comprise a giant component and have power-law degree distributions and disassortative architectures. In particular, the degrees are correlated with order sizes by a power-law function. By regarding the size of executed order as its fitness, the fitness model can reproduce the empirical power-law degree distribution.

Journal ArticleDOI
TL;DR: It is found that, as in static networks under a mean-field approximation, rewired networks with degree distribution exponent γ>3 exhibit a threshold in the infection rate below which epidemics die out in the steady state; however, the threshold is higher in the rewiring case.
Abstract: A model for epidemic spreading on rewiring networks is introduced and analyzed for the case of scale-free steady-state networks. It is found that, contrary to what one would have naively expected, the rewiring process typically tends to suppress epidemic spreading. In particular, it is found that, as in static networks under a mean-field approximation, rewiring networks with degree-distribution exponent γ>3 exhibit a threshold in the infection rate below which epidemics die out in the steady state. However, the threshold is higher in the rewiring case. For 2<γ≤3 no such threshold exists, but for small infection rates the steady-state density of infected nodes (prevalence) is smaller for rewiring networks.
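The static-network threshold behavior this abstract compares against (finite for γ>3, vanishing for 2<γ≤3) follows from the mean-field SIS formula λ_c = ⟨k⟩/⟨k²⟩. The sketch below evaluates it for truncated power-law degree distributions; the cutoffs and γ values are chosen for illustration, and this is the static formula, not the rewiring model itself:

```python
def mf_threshold(gamma, kmin, kmax):
    # Mean-field SIS threshold lambda_c = <k> / <k^2> for a truncated
    # power-law degree distribution p_k ~ k^(-gamma) on [kmin, kmax].
    # The normalization of p_k cancels in the ratio.
    ks = range(kmin, kmax + 1)
    first = sum(k ** (1.0 - gamma) for k in ks)   # proportional to <k>
    second = sum(k ** (2.0 - gamma) for k in ks)  # proportional to <k^2>
    return first / second

# gamma > 3: the threshold converges as the degree cutoff grows.
t35_small = mf_threshold(3.5, 2, 10**3)
t35_big = mf_threshold(3.5, 2, 10**5)

# 2 < gamma <= 3: <k^2> diverges with the cutoff, so the threshold
# keeps shrinking and vanishes in the infinite-size limit.
t25_small = mf_threshold(2.5, 2, 10**3)
t25_big = mf_threshold(2.5, 2, 10**5)
```

Raising the cutoff from 10³ to 10⁵ barely moves the γ = 3.5 threshold but shrinks the γ = 2.5 threshold by roughly an order of magnitude, which is exactly the dichotomy the abstract describes.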

Journal ArticleDOI
TL;DR: In this article, the feasibility of using complex networks in the study of linguistic typology was investigated and 15 linguistic complex networks based on the dependency syntactic treebanks of 15 languages were built and explored.
Abstract: To investigate the feasibility of using complex networks in the study of linguistic typology, this paper builds and explores 15 linguistic complex networks based on the dependency syntactic treebanks of 15 languages. The results show that it is possible to classify human languages by means of the following main parameters of complex networks: (a) average node degree, (b) clustering coefficient, (c) average path length, (d) network centralization, (e) diameter, (f) power-law exponent of the degree distribution, and (g) the determination coefficient of the power-law fit. The precision of this method is similar to the results achieved by modern word order typology. This paper tries to solve two problems of current linguistic typology: first, the language sample of a typological study is not real text; second, typological studies pay too much attention to local language structures when choosing typological parameters. This study performs better on global typological features of language and not only enhances typological methods, but is also valuable for developing applications of complex networks in the humanities, social, and life sciences.

Journal ArticleDOI
TL;DR: In this paper, the thermodynamic limit of the pressure when the mean degree is finite (degree exponent τ>2), for a random graph with a tree-like structure, was derived.
Abstract: We study a ferromagnetic Ising model on random graphs with a power-law degree distribution and compute the thermodynamic limit of the pressure when the mean degree is finite (degree exponent τ>2), for which the random graph has a tree-like structure. For this, we closely follow the analysis by Dembo and Montanari (Ann. Appl. Probab. 20(2):565–592, 2010) which assumes finite variance degrees (τ>3), adapting it when necessary and also simplifying it when possible. Our results also apply in cases where the degree distribution does not obey a power law.

Posted Content
TL;DR: Li et al. as mentioned in this paper provided an empirical investigation aimed at uncovering the statistical properties of intricate stock trading networks based on the order flow data of a highly liquid stock (Shenzhen Development Bank) listed on Shenzhen Stock Exchange during the whole year of 2003.
Abstract: We provide an empirical investigation aimed at uncovering the statistical properties of intricate stock trading networks based on the order flow data of a highly liquid stock (Shenzhen Development Bank) listed on the Shenzhen Stock Exchange during the whole year of 2003. By reconstructing the limit order book, we can extract detailed information on each executed order for each trading day and demonstrate that the trade size distributions for different trading days exhibit power-law tails and that most of the estimated power-law exponents are well within the Lévy stable regime. Based on the records of order matching among investors, we can construct a stock trading network for each trading day, in which the investors are mapped into nodes and each transaction is translated as a directed edge from the seller to the buyer with the trade size as its weight. We find that all the trading networks comprise a giant component and have power-law degree distributions and disassortative architectures. In particular, the degrees are correlated with order sizes by a power-law function. By regarding the size of each executed order as its fitness, the fitness model can reproduce the empirical power-law degree distribution.

Proceedings ArticleDOI
23 May 2010
TL;DR: This paper numerically studies the topology robustness of power grids under random and selective node breakdowns, and analytically estimates the critical node-removal thresholds needed to disintegrate a system, based on the available US power grid data.
Abstract: In this paper we numerically study the topology robustness of power grids under random and selective node breakdowns, and analytically estimate the critical node-removal thresholds required to disintegrate a system, based on the available US power grid data. We also present an analysis of the node degree distribution in power grids, because it closely relates to topology robustness. It is found that the node degree in a power grid can be well fitted by a mixture distribution arising from the sum of a truncated geometric random variable and an irregular discrete random variable. With these findings we obtain better estimates of the threshold under selective node breakdowns, which predict the numerical thresholds more accurately.
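A degree model of the mixture form described above, a truncated geometric plus an irregular discrete component, can be sampled as follows. All parameters and the irregular pmf below are invented for illustration and are not the fitted values from the paper:

```python
import random

def truncated_geometric(rng, p, kmax):
    # Rejection-sample a geometric variable on {0, 1, ...} with
    # success probability p, truncated at kmax.
    while True:
        k = 0
        while rng.random() > p:
            k += 1
        if k <= kmax:
            return k

def power_grid_degree(rng):
    # Degree = truncated geometric + irregular discrete component,
    # mirroring the mixture form reported for power-grid node degrees.
    # The values p=0.4, kmax=15, and the irregular pmf are made up
    # for illustration only.
    g = truncated_geometric(rng, p=0.4, kmax=15)
    d = rng.choices([1, 2, 3], weights=[0.7, 0.2, 0.1])[0]
    return g + d

rng = random.Random(0)
degrees = [power_grid_degree(rng) for _ in range(5000)]
mean_degree = sum(degrees) / len(degrees)
```

The irregular component guarantees a minimum degree while the geometric part supplies the light (sub-power-law) tail that distinguishes power grids from scale-free networks.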

Journal ArticleDOI
TL;DR: Based on the daily data of American and Chinese stock markets, the dynamic behavior of a financial network with static and dynamic thresholds is investigated in this article, where the dynamic threshold suppresses the large fluctuation induced by the cross-correlation of individual stock prices, and leads to a stable topological structure in the dynamic evolution.
Abstract: Based on the daily data of American and Chinese stock markets, the dynamic behavior of a financial network with static and dynamic thresholds is investigated. Compared with the static threshold, the dynamic threshold suppresses the large fluctuation induced by the cross-correlation of individual stock prices, and leads to a stable topological structure in the dynamic evolution. Long-range time-correlations are revealed for the average clustering coefficient, average degree, and cross-correlation of degrees. The dynamic network shows a two-peak behavior in the degree distribution.

Proceedings ArticleDOI
07 Jul 2010
TL;DR: In this article, the authors define a strategy space for consumer seeding by weighting a combination of network characteristics such as average path length, clustering coefficient, and degree, and measure strategy effectiveness by simulating adoption on a Bass-like agent-based model, with five different social network structures.
Abstract: One method of viral marketing involves seeding certain consumers within a population to encourage faster adoption of the product throughout the entire population. However, determining how many and which consumers within a particular social network should be seeded to maximize adoption is challenging. We define a strategy space for consumer seeding by weighting a combination of network characteristics such as average path length, clustering coefficient, and degree. We measure strategy effectiveness by simulating adoption on a Bass-like agent-based model, with five different social network structures: four classic theoretical models (random, lattice, small-world, and preferential attachment) and one empirical (extracted from Twitter friendship data). To discover good seeding strategies, we have developed a new tool, called BehaviorSearch, which uses genetic algorithms to search through the parameter-space of agent-based models. This evolutionary search also provides insight into the interaction between strategies and network structure. Our results show that one simple strategy (ranking by node degree) is near-optimal for the four theoretical networks, but that a more nuanced strategy performs significantly better on the empirical Twitter-based network. We also find a correlation between the optimal seeding budget for a network, and the inequality of the degree distribution.
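The simple degree-ranking strategy the abstract finds near-optimal on theoretical networks amounts to picking the highest-degree nodes as seeds. A minimal sketch on a tiny hypothetical friendship graph (the graph and budget are made up for illustration):

```python
def degree_seeds(adj, budget):
    # Rank nodes by descending degree and take the top `budget` as
    # seeds -- the simple strategy found near-optimal on the four
    # theoretical network models.
    return sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:budget]

# Tiny hypothetical friendship graph as adjacency sets.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d", "e"},
    "d": {"a", "c"},
    "e": {"c"},
}
seeds = degree_seeds(adj, budget=2)  # picks the two best-connected nodes
```

On the empirical Twitter network the paper reports that a more nuanced weighting of path length, clustering, and degree outperforms this baseline, so the ranking above is a starting point rather than a universal answer.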

Posted Content
TL;DR: This paper quantifies the degree bias of BFS sampling, and studies by simulation the effect of graph properties not captured directly by the model, finding that the bias gets amplified in graphs with strong positive assortativity.
Abstract: Breadth First Search (BFS) and other graph traversal techniques are widely used for measuring large unknown graphs, such as online social networks. It has been empirically observed that an incomplete BFS is biased toward high degree nodes. In contrast to more studied sampling techniques, such as random walks, the precise bias of BFS has not been characterized to date. In this paper, we quantify the degree bias of BFS sampling. In particular, we calculate the node degree distribution expected to be observed by BFS as a function of the fraction of covered nodes, in a random graph $RG(p_k)$ with a given degree distribution $p_k$. Furthermore, we also show that, for $RG(p_k)$, all commonly used graph traversal techniques (BFS, DFS, Forest Fire, and Snowball Sampling) lead to the same bias, and we show how to correct for this bias. To give a broader perspective, we compare this class of exploration techniques to random walks, which are well studied and easier to analyze. Next, we study by simulation the effect of graph properties not captured directly by our model. We find that the bias gets amplified in graphs with strong positive assortativity. Finally, we demonstrate the above results by sampling the Facebook social network, and we provide some practical guidelines for graph sampling in practice.
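The high-degree bias of an incomplete BFS is easy to reproduce in simulation. The sketch below builds a small degree-heterogeneous graph by random stub matching (a configuration-model-like construction; the graph, seed, and budget are hypothetical, not a real online social network) and compares the mean degree of the BFS sample with the true mean degree:

```python
import random
from collections import deque

def bfs_sample(adj, start, budget):
    # Incomplete BFS: stop once `budget` nodes have been discovered
    # and return the discovered set.
    seen, queue = {start}, deque([start])
    while queue and len(seen) < budget:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
                if len(seen) >= budget:
                    break
    return seen

# Random stub matching: nodes 0-99 get 20 edge stubs, the rest get 2.
rng = random.Random(3)
n = 2000
stubs = []
for v in range(n):
    stubs += [v] * (20 if v < 100 else 2)
rng.shuffle(stubs)
adj = {v: set() for v in range(n)}
for a, b in zip(stubs[0::2], stubs[1::2]):
    if a != b:  # drop self-loops; sets collapse multi-edges
        adj[a].add(b)
        adj[b].add(a)

sample = bfs_sample(adj, start=0, budget=200)
mean_deg_sample = sum(len(adj[v]) for v in sample) / len(sample)
mean_deg_all = sum(len(adj[v]) for v in adj) / n
```

Because BFS discovers nodes by following edges, a node is reached roughly in proportion to its degree early in the traversal, so the sample's mean degree substantially exceeds the graph's true mean degree, which is the bias the paper characterizes and corrects.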

Journal ArticleDOI
TL;DR: It is shown that a group of synchronized nodes may appear in scale-free networks: hubs undergo a transition to synchronization while the other nodes remain unsynchronized, suggesting that scale- free networks may have evolved to complement various levels of synchronization.
Abstract: Heterogeneity in the degree distribution is known to suppress global synchronization in complex networks of symmetrically coupled oscillators. Scale-free networks display a great deal of heterogeneity, containing a few nodes, termed hubs, that are highly connected, while most nodes receive only a few connections. Here, we show that a group of synchronized nodes may appear in scale-free networks: hubs undergo a transition to synchronization while the other nodes remain unsynchronized. This general phenomenon can occur even in the absence of global synchronization. Our results suggest that scale-free networks may have evolved to complement various levels of synchronization.