Showing papers by "Tiziano Squartini published in 2023"


13 Mar 2023
TL;DR: In this paper, the authors extend the Exponential Random Graph framework to signed networks with both global (homogeneous) and local (heterogeneous) constraints, and then employ the resulting null models to assess the significance of unbalanced patterns in several real-world networks.
Abstract: The abundance of data about social, economic and political relationships allows social theories to be tested against empirical evidence and human behaviour to be analyzed just as any other natural phenomenon. Here we focus on balance theory, stating that actors in signed social networks tend to avoid the formation of 'unbalanced', or 'frustrated', cycles, i.e. cycles with an odd number of negative links. This statement can be supported statistically only after a comparison with a null model. Since the existing benchmarks do not typically account for the heterogeneity of individual actors, here we first extend the Exponential Random Graphs framework to signed networks with both global (homogeneous) and local (heterogeneous) constraints and then employ them to assess the significance of unbalanced patterns in several real-world networks. We find that the nature and level of balance in social networks crucially depend on the null model employed. In particular, the study of signed triangles and signed communities reveals that homogeneous null models favour the weak version of balance theory, according to which only triangles with one negative link should be under-represented in social networks, while heterogeneous null models favour the strong version of balance theory, according to which triangles with all negative links should also be under-represented. Biological networks, instead, display almost inverted patterns and strong frustration under any benchmark, confirming that structural balance inherently distinguishes social networks from other signed networks.
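To make the quantity at stake concrete: a triangle is balanced when its number of negative links is even and frustrated when it is odd. The minimal Python sketch below (hypothetical names, not the authors' code) performs this census on a toy signed adjacency matrix; assessing significance would additionally require comparing the counts against one of the null models discussed above.

```python
import itertools
import numpy as np

def signed_triangle_census(A):
    """Classify every closed triangle of a signed network by its number
    of negative links. A is a symmetric matrix with entries in {-1, 0, +1};
    a triangle is balanced iff its number of negative links is even."""
    n = A.shape[0]
    census = {0: 0, 1: 0, 2: 0, 3: 0}  # keyed by number of negative links
    for i, j, k in itertools.combinations(range(n), 3):
        signs = (A[i, j], A[j, k], A[i, k])
        if 0 in signs:  # missing link: not a closed triangle
            continue
        census[sum(s < 0 for s in signs)] += 1
    return census

# Toy network: one balanced (all-positive) triangle and one frustrated
# triangle with a single negative link.
A = np.zeros((4, 4), dtype=int)
for u, v, s in [(0, 1, 1), (1, 2, 1), (0, 2, 1), (2, 3, -1), (1, 3, 1)]:
    A[u, v] = A[v, u] = s
print(signed_triangle_census(A))  # {0: 1, 1: 1, 2: 0, 3: 0}
```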

Journal Article
TL;DR: In this article, the authors propose an alternative procedure, based upon the framework of entropy maximisation and implementing a proper hypothesis test, that overcomes the lack of statistical validation in Balassa's approach: the 'key products' of a country are, now, the ones whose production is significantly larger than expected under a null model constraining the same amount of information employed by Balassa's approach.
Abstract: We revise the procedure proposed by Balassa to infer comparative advantage, which is a standard tool, in Economics, to analyze specialization (of countries, regions, etc.). Balassa's approach compares the export of a product for each country with what would be expected from a benchmark based on the total volumes of countries' and products' flows. Based on results in the literature, we show that the implementation of Balassa's idea generates a bias: the maximum-likelihood prescription used to calculate the parameters of the benchmark model conflicts with the model's definition. Moreover, Balassa's approach does not implement any statistical validation. Hence, we propose an alternative procedure to overcome such a limitation, based upon the framework of entropy maximisation and implementing a proper test of hypothesis: the 'key products' of a country are, now, the ones whose production is significantly larger than expected under a null model constraining the same amount of information employed by Balassa's approach. What we find is that countries' diversification is always observed, regardless of the strictness of the validation procedure. Besides, the ranking of countries' fitness is only partially affected by the details of the validation scheme employed for the analysis, while large differences are found to affect the rankings of products' Complexities. The routine implementing the entropy-based filtering procedures employed here is freely available through the official Python Package Index (PyPI).
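For reference, the benchmark being revised is Balassa's Revealed Comparative Advantage, RCA[c,p] = (x_cp / x_c) / (x_p / x_tot), with specialization conventionally declared when RCA >= 1. The short Python sketch below computes this classical index on a toy export matrix (hypothetical names; this is not the entropy-based package released by the authors, which additionally performs the statistical validation described above).

```python
import numpy as np

def balassa_rca(X):
    """Balassa's Revealed Comparative Advantage for an export matrix X
    (rows = countries, columns = products):
    RCA[c, p] = (X[c, p] / X[c, :].sum()) / (X[:, p].sum() / X.sum())."""
    country_tot = X.sum(axis=1, keepdims=True)  # total export of each country
    product_tot = X.sum(axis=0, keepdims=True)  # total export of each product
    return (X / country_tot) / (product_tot / X.sum())

# Toy 3-country x 3-product export matrix.
X = np.array([[10.0, 5.0, 0.0],
              [ 2.0, 8.0, 4.0],
              [ 0.0, 1.0, 9.0]])
print(balassa_rca(X) >= 1)  # conventional specialization matrix (RCA >= 1)
```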

05 Mar 2023
TL;DR: In this paper, the authors highlight three approaches for estimating the parameters governing the binary and weighted properties of weighted networks: econometric techniques treating topology as deterministic, and statistical techniques either ensemble-averaging parameters or maximising an averaged likelihood over the topological randomness.
Abstract: Analysing weighted networks requires modelling the binary and weighted properties simultaneously. We highlight three approaches for estimating the parameters responsible for them: econometric techniques treating topology as deterministic and statistical techniques either ensemble-averaging parameters or maximising an averaged likelihood over the topological randomness. In homogeneous models, equivalence holds; in heterogeneous network models, the local disorder breaks it, in a way reminiscent of the difference between 'quenched' and 'annealed' averages in the physics of disordered systems.
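The closing analogy can be previewed numerically: by Jensen's inequality, the average of a logarithm (the 'quenched' average) is strictly smaller than the logarithm of the average (the 'annealed' one) whenever the averaged quantity fluctuates. The Python sketch below illustrates only this generic gap with arbitrary toy numbers; it is not one of the paper's network models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'partition functions' Z drawn over random configurations: for any
# non-degenerate distribution, Jensen's inequality gives E[log Z] < log E[Z].
Z = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
quenched = np.log(Z).mean()  # average of the log ('quenched')
annealed = np.log(Z.mean())  # log of the average ('annealed')
print(f"quenched E[log Z] = {quenched:.3f}")  # ~0.0
print(f"annealed log E[Z] = {annealed:.3f}")  # ~0.5 (= sigma**2 / 2)
```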

11 Jul 2023
TL;DR: In this article, the authors consider the description length induced by the Normalized Maximum Likelihood (NML), which consists of two terms, i.e. a model log-likelihood and its complexity.
Abstract: Non-equivalence between the canonical and the microcanonical ensemble has been shown to arise for models defined by an extensive number of constraints (e.g. the Configuration Model). Here, we focus on the framework induced by entropy maximization and study the extent to which ensemble non-equivalence affects the description length of binary canonical and microcanonical models. Specifically, we consider the description length induced by the Normalized Maximum Likelihood (NML), which consists of two terms, i.e. a model log-likelihood and its complexity: while the effects of ensemble non-equivalence on the log-likelihood term are well understood, its effects on the complexity term have not yet been systematically studied. Here, we find that (i) microcanonical models are always more complex than their canonical counterparts and (ii) the difference between the canonical and the microcanonical description length is strongly influenced by the degree of non-equivalence, a result suggesting that non-equivalence should be taken into account when selecting models. Finally, we compare the NML-based approach to model selection with the Bayesian one induced by Jeffreys prior, showing that the two cannot be reconciled when non-equivalence holds.
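To make the two NML terms concrete, the sketch below computes the description length L_NML(x) = -log p(x | theta_hat(x)) + log sum_x' p(x' | theta_hat(x')) exactly for the simplest binary case, a homogeneous Bernoulli model over n possible links (an Erdos-Renyi-style stand-in, not the heterogeneous models studied in the paper; all names are hypothetical).

```python
import math

def bernoulli_nml(n, k):
    """NML description length (in nats) of a binary configuration with
    k ones out of n possible, under the homogeneous Bernoulli model:
    L_NML = -log p(x | p_hat) + log COMP, with p_hat = k / n and
    COMP = sum_{j=0}^{n} C(n, j) * (j/n)**j * (1 - j/n)**(n - j)."""
    def max_loglik(j):
        # log-likelihood at the MLE p_hat = j / n (with 0 * log 0 := 0)
        ll = 0.0
        if j > 0:
            ll += j * math.log(j / n)
        if j < n:
            ll += (n - j) * math.log(1 - j / n)
        return ll
    complexity = math.log(sum(math.comb(n, j) * math.exp(max_loglik(j))
                              for j in range(n + 1)))
    return -max_loglik(k) + complexity

# Description length of a 10-node binary graph (45 possible links,
# 10 of them present) under the homogeneous model.
print(bernoulli_nml(n=45, k=10))
```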