Journal ArticleDOI

Nonstandard regular variation of in-degree and out-degree in the preferential attachment model

01 Jan 2016 - Journal of Applied Probability (University of Sheffield) - Vol. 53, Iss. 1, pp. 146-161
TL;DR: It is proved that the joint distribution of in-degree and out-degree has jointly regularly varying tails.
Abstract: The research of the authors was supported by MURI ARO Grant W911NF-12-10385 to Cornell University
Citations
Posted Content
TL;DR: The different forms of extremal dependence that can arise between the largest observations of a multivariate random vector are described, and the identification of groups of variables that can be concomitantly extreme is addressed.
Abstract: Extreme value statistics provides accurate estimates for the small occurrence probabilities of rare events. While theory and statistical tools for univariate extremes are well-developed, methods for high-dimensional and complex data sets are still scarce. Appropriate notions of sparsity and connections to other fields such as machine learning, graphical models and high-dimensional statistics have only recently been established. This article reviews the new domain of research concerned with the detection and modeling of sparse patterns in rare events. We first describe the different forms of extremal dependence that can arise between the largest observations of a multivariate random vector. We then discuss the current research topics including clustering, principal component analysis and graphical modeling for extremes. Identification of groups of variables which can be concomitantly extreme is also addressed. The methods are illustrated with an application to flood risk assessment.

45 citations


Cites background from "Nonstandard regular variation of in..."

  • ...…Wadsworth & Tawn 2012, Papastathopoulos et al. 2017), flexible models between different dependence classes (e.g., Wadsworth et al. 2017, Huser & Wadsworth 2019, Engelke et al. 2019c) and connections of extreme values to the theory of networks (e.g., Samorodnitsky et al. 2016, Wan et al. 2020)....


Journal ArticleDOI
10 Mar 2015 - Extremes
TL;DR: In this article, a multivariate version of Abel–Tauberian theorems for nonstandard regularly varying measures is formulated and then applied to prove that the joint distribution of in- and out-degree in a directed-edge preferential attachment model has jointly regularly varying tails.
Abstract: Abel-Tauberian theorems relate power law behavior of distributions and their transforms. We formulate and prove a multivariate version for non-standard regularly varying measures on \(\mathbb {R}_{+}^{p}\) and then apply it to prove that the joint distribution of in- and out-degree in a directed edge preferential attachment model has jointly regularly varying tails.

37 citations

Journal ArticleDOI
TL;DR: This paper considers methods for fitting a 5-parameter linear preferential attachment model to network data under two data scenarios, derives the maximum likelihood estimator of the parameters, and shows that it is strongly consistent and asymptotically normal.
Abstract: Preferential attachment is an appealing mechanism for modeling power-law behavior of the degree distributions in directed social networks. In this paper, we consider methods for fitting a 5-parameter linear preferential attachment model to network data under two data scenarios. In the case where the full history of the network formation is given, we derive the maximum likelihood estimator of the parameters and show that it is strongly consistent and asymptotically normal. In the case where only a single-time snapshot of the network is available, we propose an estimation method which combines the method of moments with an approximation to the likelihood. The resulting estimator is also strongly consistent and performs quite well compared to the MLE. We illustrate both estimation procedures through simulated data and explore the use of this model on a real data example.

37 citations


Cites background from "Nonstandard regular variation of in..."

  • ...While the marginal degree power laws in a simple linear preferential attachment model were established in [3, 11, 12], the joint regular variation (see [15, 16]) which is akin to a joint power law, was only recently established [17, 19]....


Journal ArticleDOI
TL;DR: In this paper, the sharing of large exogenous losses in the reinsurance market is modeled by a bipartite graph; using Pareto-tailed claims and multivariate regular variation, asymptotic results are obtained for the value-at-risk and the conditional tail expectation.
Abstract: We model the influence of sharing large exogenous losses to the reinsurance market by a bipartite graph. Using Pareto-tailed claims and multivariate regular variation we obtain asymptotic results for the value-at-risk and the conditional tail expectation. We show that dependence on the network structure plays a fundamental role in their asymptotic behaviour. As is well known in a non-network setting, if the Pareto exponent is larger than 1, then for the individual agent (reinsurance company) diversification is beneficial, whereas when it is less than 1, concentration on a few objects is the better strategy. An additional aspect of this paper is the amount of uninsured losses that are covered by society. In our setting of networks of agents, diversification is never detrimental to the amount of uninsured losses. If the Pareto-tailed claims have finite mean, diversification is never detrimental to society or to individual agents. By contrast, if the Pareto-tailed claims have infinite mean, a conflicting s...
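The α > 1 versus α < 1 dichotomy quoted above can be illustrated with a small Monte Carlo sketch. This is not the paper's bipartite-network model, only the classical non-network comparison it refers to; the function names, trial count, and quantile level are all illustrative assumptions.

```python
# Monte Carlo sketch (not the paper's bipartite model) of the classical
# dichotomy: with Pareto tail exponent alpha > 1, spreading exposure over
# n independent claims lowers a high quantile (VaR); with alpha < 1,
# concentrating on a single claim is the better strategy.
import random

def pareto(alpha, rng):
    # Inverse-CDF sampling: P(X > x) = x**(-alpha) on [1, inf).
    return rng.random() ** (-1.0 / alpha)

def var(alpha, n, q=0.99, trials=100_000, seed=1):
    """Empirical q-quantile (VaR) of the loss (1/n) * sum of n Pareto claims."""
    rng = random.Random(seed)
    losses = sorted(sum(pareto(alpha, rng) for _ in range(n)) / n
                    for _ in range(trials))
    return losses[int(q * trials)]

# alpha = 2 (finite mean): diversifying over 10 claims reduces the VaR.
print(var(2.0, 1), var(2.0, 10))
# alpha = 0.5 (infinite mean): diversifying inflates the VaR.
print(var(0.5, 1), var(0.5, 10))
```

For the single-claim case the 99% VaR is the Pareto quantile 0.01^(−1/α), so the two regimes are easy to read off the printed pairs.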

28 citations

Journal ArticleDOI
TL;DR: This paper focuses on convergence of the joint empirical measure for in- and out-degrees and proves the consistency of the Hill estimator.

23 citations


Cites background, methods, or results from "Nonstandard regular variation of in..."

  • ...With embedding techniques, we prove the limit distribution of the empirical joint degree frequencies in a way that is different from the one used in [20], and then justify the concentration results....


  • ...1 coincides with the known results proven in [19, 20], since e_in is a Pareto random variable on [1, ∞) with index c_in, denoted by Z, and e_out = Z^a, with a := c_out/c_in....


  • ...where p_ij is a probability mass function (pmf) and [19, 20, 26] show that p_ij is jointly regularly varying and so is the associated joint measure....


  • ...\(tP[(X/b_1(t),\, Y/b_2(t)) \in \cdot\,] \xrightarrow{v} \nu(\cdot)\), in \(M_+([0,\infty]^2 \setminus \{0\})\), where \(\nu(\cdot) \in M_+([0,\infty]^2 \setminus \{0\})\) is called the limit or tail measure [19, 20], and "\(\xrightarrow{v}\)" denotes the vague convergence of measures in \(M_+([0,\infty]^2 \setminus \{0\})\)....

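The vague-convergence statement excerpted above can be checked empirically in the simplest one-dimensional case. This sketch is ours, not the paper's: for X exactly Pareto(α) and the scaling b(t) = t^(1/α), the quantity t·P[X > b(t)x] equals the tail-measure value x^(−α) for every t, so its empirical version should hover near that limit.

```python
# Empirical sketch of one-dimensional regular variation: for X Pareto(alpha)
# and b(t) = t**(1/alpha), the scaled exceedance probability
# t * P[X > b(t) * x] equals the tail-measure value x**(-alpha).
import random

alpha = 1.5
rng = random.Random(0)
# Inverse-CDF sampling of Pareto(alpha): P(X > x) = x**(-alpha) on [1, inf).
sample = [rng.random() ** (-1.0 / alpha) for _ in range(200_000)]

def scaled_tail(t, x):
    """Empirical t * P[X > b(t) * x] with b(t) = t**(1/alpha)."""
    b = t ** (1.0 / alpha)
    return t * sum(s > b * x for s in sample) / len(sample)

for t in (10, 100, 1000):
    # The limit (and, for exact Pareto, exact) value at x = 2 is 2**(-1.5).
    print(t, scaled_tail(t, 2.0))
```

As t grows, fewer sample points exceed b(t)x, so the Monte Carlo error of the scaled estimate increases even though the target value stays fixed.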

References
Book
01 Jan 1963
TL;DR: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers.
Abstract: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers. 1 Sample spaces and events To treat probability rigorously, we define a sample space S whose elements are the possible outcomes of some process or experiment. For example, the sample space might be the outcomes of the roll of a die, or flips of a coin. To each element x of the sample space, we assign a probability, which will be a non-negative number between 0 and 1, which we will denote by p(x). We require that ∑_{x∈S} p(x) = 1, so the total probability of the elements of our sample space is 1. What this means intuitively is that when we perform our process, exactly one of the things in our sample space will happen. Example. The sample space could be S = {a, b, c}, and the probabilities could be p(a) = 1/2, p(b) = 1/3, p(c) = 1/6. If all elements of our sample space have equal probabilities, we call this the uniform probability distribution on our sample space. For example, if our sample space was the outcomes of a die roll, the sample space could be denoted S = {x_1, x_2, ..., x_6}, where the event x_i corresponds to rolling i. The uniform distribution, in which every outcome x_i has probability 1/6, describes the situation for a fair die. Similarly, if we consider tossing a fair coin, the outcomes would be H (heads) and T (tails), each with probability 1/2. In this situation we have the uniform probability distribution on the sample space S = {H, T}. We define an event A to be a subset of the sample space. For example, in the roll of a die, if the event A was rolling an even number, then A = {x_2, x_4, x_6}. The probability of an event A, denoted by P(A), is the sum of the probabilities of the corresponding elements in the sample space.
For rolling an even number, we have P(A) = p(x_2) + p(x_4) + p(x_6) = 1/2. Given an event A of our sample space, there is a complementary event which consists of all points in our sample space that are not …
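The fair-die computation in these notes can be sketched directly; this is a minimal illustration using exact rational arithmetic, and the names `S` and `prob` are ours, not the notes'.

```python
# Minimal sketch of the sample-space formalism above; `S` maps each
# outcome of a fair-die roll to its probability under the uniform
# distribution, and `prob` sums probabilities over an event.
from fractions import Fraction

S = {i: Fraction(1, 6) for i in range(1, 7)}
assert sum(S.values()) == 1  # total probability of the sample space is 1

def prob(event):
    """P(A): sum of the probabilities of the outcomes in A, a subset of S."""
    return sum(S[x] for x in event)

even = {2, 4, 6}            # the event "roll an even number"
print(prob(even))           # -> 1/2
print(prob(set(S) - even))  # complementary event -> 1/2
```

With equal weights, `prob` reduces to |A|/|S|, the uniform case described in the notes.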

6,236 citations

Book
29 Mar 1977

6,171 citations

Book
Fritz John
01 Jan 1953
TL;DR: In this article, the authors focus on the quasilinear PDEs of second order, in which the second derivatives of u appear only linearly.
Abstract: Entering now the vast field of partial differential equations, we immediately announce that our discussion shall be restricted to those types of equations that are of major importance in physics. These are the quasilinear PDEs of second order, which may be written in the general form $$a_{11}\frac{\partial^2 u}{\partial x^2} + 2a_{12}\frac{\partial^2 u}{\partial x\,\partial y} + a_{22}\frac{\partial^2 u}{\partial y^2} + f\left(x, y, u, \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}\right) = 0$$ (5.1) ("Quasilinear" means that the second derivatives of u appear in linear order only).

1,580 citations

Book
01 Dec 2006
TL;DR: In this book, crash courses on regular variation and weak convergence are presented, followed by the Poisson process, multivariate regular variation, and statistical topics for heavy-tail analysis.
Abstract: Crash Courses.- Crash Course I: Regular Variation.- Crash Course II: Weak Convergence Implications for Heavy-Tail Analysis.- Statistics.- Dipping a Toe in the Statistical Water.- Probability.- The Poisson Process.- Multivariate Regular Variation and the Poisson Transform.- Weak Convergence and the Poisson Process.- Applied Probability Models and Heavy Tails.- More Statistics.- Additional Statistics Topics.- Appendices.- Notation and Conventions.- Software.

1,082 citations

Journal ArticleDOI
TL;DR: The organizational development of growing random networks is investigated, and the combined age and degree distribution of nodes shows that old nodes typically have a large degree.
Abstract: The organizational development of growing random networks is investigated. These growing networks are built by adding nodes successively, and linking each to an earlier node of degree k with an attachment probability $A_k$. When $A_k$ grows more slowly than linearly with k, the number of nodes with k links, $N_k(t)$, decays faster than a power law in k, while for $A_k$ growing faster than linearly in k, a single node emerges which connects to nearly all other nodes. When $A_k$ is asymptotically linear, $N_k(t) \sim t k^{-\nu}$, with $\nu$ dependent on details of the attachment probability, but in the range $2 < \nu < \infty$. The combined age and degree distribution of nodes shows that old nodes typically have a large degree. There is also a significant correlation in the degrees of neighboring nodes, so that nodes of similar degree are more likely to be connected. The size distributions of the in and out components of the network with respect to a given node---namely, its "descendants" and "ancestors"---are also determined. The in component exhibits a robust $s^{-2}$ power-law tail, where s is the component size. The out component has a typical size of order $\ln t$, and it provides basic insights into the genealogy of the network.
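The growth rule this abstract describes can be sketched with the standard repeated-targets trick; this assumes the asymptotically linear kernel $A_k = k$ (for which the exponent is 3 in the notation above), and `grow` is an illustrative name, not from the paper.

```python
# Minimal simulation sketch of the growth rule above: nodes arrive one at
# a time and attach to an earlier node of degree k with probability
# proportional to A_k; here A_k = k, the linear kernel.
import random
from collections import Counter

def grow(n, seed=0):
    random.seed(seed)
    # Start from a single edge between nodes 0 and 1. Each node appears in
    # `targets` once per unit of degree, so a uniform pick from `targets`
    # is a pick with probability proportional to degree.
    targets = [0, 1]
    for new in range(2, n):
        old = random.choice(targets)
        targets += [new, old]
    degree = Counter(targets)        # degree of each node
    return Counter(degree.values())  # N_k: number of nodes of degree k

N = grow(50_000)
# For the linear kernel the tail decays like k**-3, so N_k falls off fast.
print(N[1], N[2], N[3])
```

Replacing `random.choice(targets)` with a uniform pick over existing nodes would give the sublinear-kernel regime, where the degree distribution decays faster than any power law.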

833 citations