Author

Francesca Greselin

Bio: Francesca Greselin is an academic researcher from University of Milano-Bicocca. The author has contributed to research in topics: Economic inequality & Outlier. The author has an h-index of 15, has co-authored 78 publications receiving 545 citations. Previous affiliations of Francesca Greselin include University of Milan & Universidad Católica "Nuestra Señora de la Asunción".


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors analyzed a data set from the Bank of Italy year 2006 sample survey on household budgets and introduced the L-process on which statistical inferential results about the population L-function hinge.
Abstract: L-statistics play prominent roles in various research areas and applications, including the development of robust statistical methods, measuring economic inequality, and insurance risks. In many applications the score functions of L-statistics depend on parameters (e.g., the distortion parameter in insurance, the risk aversion parameter in econometrics), which turn the L-statistics into functions that we call L-functions. A simple example of an L-function is the Lorenz curve. Ratios of L-functions play equally important roles, with the Zenga curve being a prominent example. To illustrate real-life uses of these functions/curves, we analyze a data set from the Bank of Italy year 2006 sample survey on household budgets. Naturally, empirical counterparts of the population L-functions need to be employed and, importantly, adjusted and modified in order to meaningfully capture situations well beyond those based on simple random sampling designs. In the process of our investigations, we also introduce the L-process on which statistical inferential results about the population L-function hinge. Hence, we provide notes and references that facilitate deriving asymptotic properties of the L-process.

57 citations
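As a concrete anchor for the abstract's terminology, the simplest L-function it names, the Lorenz curve, can be sketched in a few lines. This is an illustrative plug-in estimator under simple random sampling, not the paper's adjusted estimators; the function name is invented for the example.

```python
def lorenz_ordinates(incomes):
    """Empirical Lorenz curve: the point (k/n, L(k/n)) gives the share of
    total income held by the k poorest units (simple random sampling assumed)."""
    xs = sorted(incomes)
    total = sum(xs)
    cum = 0.0
    points = [(0.0, 0.0)]
    for k, x in enumerate(xs, start=1):
        cum += x
        points.append((k / len(xs), cum / total))
    return points

lorenz_ordinates([10, 20, 30, 40])
# the point (0.5, 0.3): the poorest half holds (10+20)/100 = 30% of income
```

Perfect equality would give the diagonal L(p) = p; the gap between the curve and the diagonal is what indices such as Gini summarize.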

Journal ArticleDOI
TL;DR: A constrained monotone algorithm implementing maximum likelihood mixture decomposition of multivariate t distributions is proposed to achieve improved convergence capabilities and robustness.
Abstract: Mixtures of multivariate t distributions provide a robust parametric alternative to normal mixtures for fitting data. In the presence of a noise component, potential outliers, or data with longer-than-normal tails, considering t distributions is one way to broaden the model. In this framework, the degrees of freedom act as a robustness parameter, tuning the heaviness of the tails and downweighting the effect of outliers on parameter estimation. The aim of this paper is to extend to mixtures of multivariate elliptical distributions some theoretical results about likelihood maximization on constrained parameter spaces. Further, a constrained monotone algorithm implementing maximum likelihood mixture decomposition of multivariate t distributions is proposed to achieve improved convergence capabilities and robustness. Monte Carlo numerical simulations and a real-data study illustrate the better performance of the algorithm compared with earlier proposals.

50 citations
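The downweighting mechanism the abstract describes can be illustrated with the standard E-step weight used when fitting a multivariate t component (in the style of Peel and McLachlan's EM for t mixtures). This is a generic sketch of that one formula, not the constrained algorithm the paper proposes; the function name is illustrative.

```python
def t_weight(sq_mahalanobis, dof, dim):
    """E-step weight of one observation under a dim-variate t component with
    dof degrees of freedom: u = (dof + dim) / (dof + d^2), where d^2 is the
    squared Mahalanobis distance. Far outliers (large d^2) get weights near 0,
    so they barely influence the parameter updates."""
    return (dof + dim) / (dof + sq_mahalanobis)

t_weight(1.0, 4.0, 2)    # near the component centre: weight 1.2
t_weight(100.0, 4.0, 2)  # gross outlier: weight close to 0
```

As dof grows, the weights approach 1 for all points and the t component reduces to a Gaussian, which is why the degrees of freedom act as a tuning knob for robustness.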

Journal ArticleDOI
TL;DR: In this article, the authors derive desired statistical inferential results, explore their performance in a simulation study, and then use the results to analyze data from the Bank of Italy Survey on Household Income and Wealth (SHIW).
Abstract: For at least a century academics and governmental researchers have been developing measures that would aid them in understanding income distributions, their differences with respect to geographic regions, and changes over time periods. It is a fascinating area due to a number of reasons, one of them being the fact that different measures, or indices, are needed to reveal different features of income distributions. Keeping also in mind that the notions of poor and rich are relative to each other, Zenga (2007) proposed a new index of economic inequality. The index is remarkably insightful and useful, but deriving statistical inferential results has been a challenge. For example, unlike many other indices, Zenga's new index does not fall into the classes of U-, V-, and L-statistics. In this paper we derive desired statistical inferential results, explore their performance in a simulation study, and then use the results to analyze data from the Bank of Italy Survey on Household Income and Wealth (SHIW).

37 citations
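Zenga's (2007) index compares, at each cut point, the mean income of the poorer group with that of the richer group. A naive plug-in sketch of that idea (not the authors' estimator, whose inferential treatment is the point of the paper) looks like this; the function name is invented for the example.

```python
def zenga_index(incomes):
    """Plug-in sketch of Zenga's (2007) inequality index: average, over the
    cut points between the k poorest and the n-k richest units, of
    1 - (lower-group mean) / (upper-group mean)."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    z, cum = 0.0, 0.0
    for k in range(1, n):          # cut between the k poorest and n-k richest
        cum += xs[k - 1]
        lower_mean = cum / k
        upper_mean = (total - cum) / (n - k)
        z += 1.0 - lower_mean / upper_mean
    return z / (n - 1)

zenga_index([5, 5, 5, 5])   # equal incomes: index is 0.0
```

The index is 0 when everyone earns the same (each ratio of means is 1) and approaches 1 as the poorer group's mean becomes negligible relative to the richer group's.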

Journal ArticleDOI
TL;DR: In this article, the authors explore and contrast the classical Gini index with the new Zenga index, the latter being based on comparisons of the means of less and more fortunate sub-populations, irrespective of the threshold that might be used to delineate the two sub-populations.
Abstract: The current financial turbulence in Europe inspires and perhaps requires researchers to rethink how to measure incomes, wealth, and other parameters of interest to policy-makers and others. The noticeable increase in disparities between less and more fortunate individuals suggests that measures based upon comparing the incomes of the less fortunate with the mean of the entire population may not be adequate. The classical Gini and related indices of economic inequality, however, are based exactly on such comparisons. For this reason, in this paper we explore and contrast the classical Gini index with a new Zenga index, the latter being based on comparisons of the means of less and more fortunate sub-populations, irrespective of the threshold that might be used to delineate the two sub-populations. The empirical part of the paper is based on the 2001 wave of the European Community Household Panel data set provided by EuroStat. Even though sample sizes appear to be large, we supplement the est...

32 citations

Journal ArticleDOI
TL;DR: Assessment of the performance of asymptotic confidence intervals for Zenga's new inequality measure shows that the coverage accuracy and the sizes of the confidence intervals for the two measures are very similar in samples from economic size distributions.
Abstract: This work aims at assessing, by simulation methods, the performance of asymptotic confidence intervals for Zenga's new inequality measure. The results are compared with those obtained on Gini's measure, perhaps the most widely used index for measuring inequality in income and wealth distributions. Our findings show that the coverage accuracy and the size of the confidence intervals for the two measures are very similar in samples from economic size distributions.

28 citations
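The kind of simulation assessment the abstract describes can be mimicked in miniature. The sketch below builds a percentile bootstrap interval for the Gini index rather than the asymptotic intervals the paper studies; it is a generic simulation device, not the authors' code, and all names are illustrative.

```python
import random

def gini(xs):
    """Plug-in Gini index via the mean absolute difference: G = MAD / (2*mu)."""
    n = len(xs)
    mu = sum(xs) / n
    mad = sum(abs(a - b) for a in xs for b in xs) / (n * n)
    return mad / (2 * mu)

def bootstrap_ci(xs, stat, level=0.95, reps=2000, seed=1):
    """Percentile bootstrap interval for a statistic: resample with
    replacement, recompute the statistic, and read off the quantiles."""
    rng = random.Random(seed)
    stats = sorted(stat([rng.choice(xs) for _ in xs]) for _ in range(reps))
    lo = stats[int((1 - level) / 2 * reps)]
    hi = stats[int((1 + level) / 2 * reps) - 1]
    return lo, hi

lo, hi = bootstrap_ci(list(range(1, 21)), gini)
```

Repeating this over many samples drawn from a known income distribution, and counting how often the interval covers the true index, gives exactly the coverage-accuracy comparison the paper carries out for the Gini and Zenga measures.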


Cited by
Journal ArticleDOI
TL;DR: An account of the background of Benjamini and Hochberg's paper 'Controlling the false discovery rate: a new and powerful approach to multiple comparisons', published in the Journal of the Royal Statistical Society, Series B, in 1995, together with a review of progress made since.
Abstract: Summary. I describe the background for the paper ‘Controlling the false discovery rate: a new and powerful approach to multiple comparisons’ by Benjamini and Hochberg that was published in the Journal of the Royal Statistical Society, Series B, in 1995. I review the progress since made on the false discovery rate, as well as the major conceptual developments that followed.

514 citations
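The procedure this paper revisits is simple enough to state in code. Below is a minimal sketch of the Benjamini-Hochberg step-up procedure (the function name is illustrative): sort the p-values, find the largest rank k with p_(k) <= k*q/m, and reject the k hypotheses with the smallest p-values.

```python
def benjamini_hochberg(pvals, q=0.05):
    """BH step-up procedure: returns the indices of rejected hypotheses.
    Controls the false discovery rate at level q under independence."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank                 # largest rank passing its threshold
    return sorted(order[:k])         # reject the k smallest p-values

benjamini_hochberg([0.01, 0.04, 0.03, 0.5])  # only the first hypothesis is rejected
```

Note the step-up character: a p-value may exceed its own threshold and still be rejected if some larger-ranked p-value passes, which is what distinguishes BH from a simple per-test cutoff.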

Journal ArticleDOI
TL;DR: A review of work to date in model-based clustering, from the famous paper by Wolfe in 1965 to work that is currently available only in preprint form, and a look ahead to the next decade or so.
Abstract: The notion of defining a cluster as a component in a mixture model was put forth by Tiedeman in 1955; since then, the use of mixture models for clustering has grown into an important subfield of classification. Considering the volume of work within this field over the past decade, which seems equal to all of that which went before, a review of work to date is timely. First, the definition of a cluster is discussed and some historical context for model-based clustering is provided. Then, starting with Gaussian mixtures, the evolution of model-based clustering is traced, from the famous paper by Wolfe in 1965 to work that is currently available only in preprint form. This review ends with a look ahead to the next decade or so.

288 citations

Posted Content
01 Jan 1998
TL;DR: In this paper, the influence function of the MCD scatter estimator is derived, the asymptotic variances of its elements are computed, and comparisons are made with the one-step reweighted MCD and with S-estimators.
Abstract: The Minimum Covariance Determinant (MCD) scatter estimator is a highly robust estimator for the dispersion matrix of a multivariate, elliptically symmetric distribution. It is fast to compute and intuitively appealing. In this note we derive its influence function and compute the asymptotic variances of its elements. A comparison with the one-step reweighted MCD and with S-estimators is made. Finite-sample results are also reported.

226 citations
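The MCD idea is to find the subset of h observations whose covariance determinant is smallest. The core refinement move, the concentration step, can be sketched in the univariate case (where the determinant is just the variance); this is an illustrative single step, not the FastMCD algorithm, and the function name is invented.

```python
def c_step(data, subset_idx):
    """One concentration step of the (univariate) MCD idea: fit location and
    scale on the current subset, then keep the h points closest to that fit.
    Each such step cannot increase the subset's scale estimate."""
    h = len(subset_idx)
    sub = [data[i] for i in subset_idx]
    m = sum(sub) / h
    s2 = sum((x - m) ** 2 for x in sub) / h
    ranked = sorted(range(len(data)),
                    key=lambda i: (data[i] - m) ** 2 / (s2 or 1.0))
    return sorted(ranked[:h])

data = [1.0, 1.1, 0.9, 1.2, 0.8, 10.0]   # one gross outlier at index 5
c_step(data, [0, 1, 5])                   # the outlier drops out of the subset
```

Iterating such steps from many starting subsets, and keeping the subset with the smallest determinant, is the essence of how the MCD estimator whose influence function this note derives is computed in practice.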

Journal ArticleDOI
TL;DR: A novel family of mixture models wherein each component is modeled using a multivariate t-distribution with an eigen-decomposed covariance structure is put forth, known as the tEIGEN family.
Abstract: The last decade has seen an explosion of work on the use of mixture models for clustering. The use of the Gaussian mixture model has been common practice, with constraints sometimes imposed upon the component covariance matrices to give families of mixture models. Similar approaches have also been applied, albeit with less fecundity, to classification and discriminant analysis. In this paper, we begin with an introduction to model-based clustering and a succinct account of the state-of-the-art. We then put forth a novel family of mixture models wherein each component is modeled using a multivariate t-distribution with an eigen-decomposed covariance structure. This family, which is largely a t-analogue of the well-known MCLUST family, is known as the tEIGEN family. The efficacy of this family for clustering, classification, and discriminant analysis is illustrated with both real and simulated data. The performance of this family is compared to its Gaussian counterpart on three real data sets.

151 citations