Book
The Skew-Normal and Related Families
01 Dec 2013
TL;DR: This comprehensive treatment, blending theory and practice, will be the standard resource for statisticians and applied researchers. Assuming only basic knowledge of (non-measure-theoretic) probability and statistical inference, the book is accessible to the wide range of researchers who use statistical modelling techniques.
Abstract: Preface; 1. Modulation of symmetric densities; 2. The skew-normal distribution: probability; 3. The skew-normal distribution: statistics; 4. Heavy and adaptive tails; 5. The multivariate skew-normal distribution; 6. Skew-elliptical distributions; 7. Further extensions and other directions; 8. Application-oriented work; Appendices; References.
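The modulation construction of Chapter 1 yields the book's central density, the skew-normal: twice a standard normal density modulated by a normal CDF term. A minimal sketch in plain Python (the function names are ours, not from the book):

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def skew_normal_pdf(x, alpha):
    """Skew-normal density: 2 * phi(x) * Phi(alpha * x).

    alpha = 0 recovers the standard normal; alpha > 0 skews right.
    """
    return 2.0 * phi(x) * Phi(alpha * x)

# At x = 0 the density equals phi(0) for every alpha, since Phi(0) = 1/2.
print(round(skew_normal_pdf(0.0, 4.0), 4))  # 0.3989
```

Setting `alpha = 0` makes the modulating factor constant at 1/2, so the factor of 2 exactly restores the base normal density; this is the symmetry-perturbation idea the chapter titles refer to.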
Citations
TL;DR: A review of work to date in model-based clustering, from the famous paper by Wolfe in 1965 to work that is currently available only in preprint form, and a look ahead to the next decade or so.
Abstract: The notion of defining a cluster as a component in a mixture model was put forth by Tiedeman in 1955; since then, the use of mixture models for clustering has grown into an important subfield of classification. Considering the volume of work within this field over the past decade, which seems equal to all of that which went before, a review of work to date is timely. First, the definition of a cluster is discussed and some historical context for model-based clustering is provided. Then, starting with Gaussian mixtures, the evolution of model-based clustering is traced, from the famous paper by Wolfe in 1965 to work that is currently available only in preprint form. This review ends with a look ahead to the next decade or so.
288 citations
TL;DR: The present study compared a nonparametric bootstrap test with pooled resampling to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions, and on real data examples, to overcome the problems small samples pose for hypothesis testing.
Abstract: Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitation. We used a pooled method in nonparametric bootstrap test that may overcome the problem related with small samples in hypothesis testing. The present study compared nonparametric bootstrap test with pooled resampling method corresponding to parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining type I error probability for any conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. Nonparametric bootstrap paired t-test also provided better performance than other alternatives. Nonparametric bootstrap test provided benefit over exact Kruskal-Wallis test. We suggest using nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating the one way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
152 citations
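The pooled-resampling idea described above can be illustrated with a generic bootstrap two-sample test: pool both samples (which imposes the null of equal means), resample from the pool, and count replicate statistics at least as extreme as the observed one. This is a minimal sketch of the general technique, not the authors' exact procedure; all names and parameter choices are ours.

```python
import random
import statistics as stats

def t_stat(x, y):
    """Welch-type t statistic for a two-sample mean comparison."""
    nx, ny = len(x), len(y)
    vx, vy = stats.variance(x), stats.variance(y)
    return (stats.mean(x) - stats.mean(y)) / ((vx / nx + vy / ny) ** 0.5)

def pooled_bootstrap_pvalue(x, y, n_boot=2000, seed=0):
    """Two-sided bootstrap p-value using pooled resampling:
    resampling from the combined data imposes equal means under H0."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    observed = abs(t_stat(x, y))
    extreme = 0
    for _ in range(n_boot):
        xb = [rng.choice(pooled) for _ in range(len(x))]
        yb = [rng.choice(pooled) for _ in range(len(y))]
        if abs(t_stat(xb, yb)) >= observed:
            extreme += 1
    return (extreme + 1) / (n_boot + 1)

x = [1.1, 2.3, 1.8, 2.0, 1.5, 2.2, 1.7, 1.9]
y = [5.0, 5.4, 4.8, 5.2, 5.1, 4.9, 5.3, 5.5]
print(pooled_bootstrap_pvalue(x, y) < 0.05)  # clearly separated means: True
```

The `+ 1` in numerator and denominator is the usual correction that keeps the p-value strictly positive with a finite number of replicates.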
TL;DR: In this paper, a new semiparametric quantile regression method is introduced, based on sequentially fitting a likelihood-optimal D-vine copula to given data, resulting in highly flexible models with easily extractable conditional quantiles.
123 citations
TL;DR: Lee and McLachlan introduce a finite mixture of canonical fundamental skew t (CFUST) distributions for a model-based approach to clustering where the clusters are asymmetric and possibly long-tailed.
Abstract: This paper introduces a finite mixture of canonical fundamental skew t (CFUST) distributions for a model-based approach to clustering where the clusters are asymmetric and possibly long-tailed (Lee and McLachlan, arXiv:1401.8182 [stat.ME], 2014b). The family of CFUST distributions includes the restricted multivariate skew t and unrestricted multivariate skew t distributions as special cases. In recent years, a few versions of the multivariate skew t (MST) mixture model have been put forward, together with various EM-type algorithms for parameter estimation. These formulations adopted either a restricted or unrestricted characterization for their MST densities. In this paper, we examine a natural generalization of these developments, employing the CFUST distribution as the parametric family for the component distributions, and point out that the restricted and unrestricted characterizations can be unified under this general formulation. We show that an exact implementation of the EM algorithm can be achieved for the CFUST distribution and mixtures of this distribution, and present some new analytical results for a conditional expectation involved in the E-step.
105 citations
TL;DR: In the simulations, the proposed methods achieve better accuracy than the alternatives, with the computational complexity of the filter being roughly 5 to 10 times that of the Kalman filter.
Abstract: Filtering and smoothing algorithms for linear discrete-time state-space models with skewed and heavy-tailed measurement noise are presented. The algorithms use a variational Bayes approximation of the posterior distribution of models that have a normal prior and skew-t-distributed measurement noise. The proposed filter and smoother are compared with conventional low-complexity alternatives in a simulated pseudorange positioning scenario. In the simulations the proposed methods achieve better accuracy than the alternative methods, the computational complexity of the filter being roughly 5 to 10 times that of the Kalman filter.
105 citations
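For context, the Kalman filter that serves as the complexity baseline in the abstract reduces, for a scalar random-walk state observed in noise, to a few lines of predict/update recursion. This is a sketch under those simplifying assumptions (a scalar model with Gaussian noise), not the paper's variational skew-t filter; the function name is ours.

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1e6):
    """Scalar Kalman filter for the model
        x_k = x_{k-1} + w_k,  w_k ~ N(0, q)   (random-walk state)
        z_k = x_k + v_k,      v_k ~ N(0, r)   (noisy measurement)
    Returns the filtered state estimates."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                # predict: variance grows by the process noise
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the measurement innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noisy measurements of a constant signal near 5.0
zs = [5.1, 4.9, 5.2, 4.8, 5.0, 5.05, 4.95, 5.1, 4.9, 5.0]
est = kalman_1d(zs, q=1e-4, r=1.0)
print(round(est[-1], 1))  # 5.0
```

The skew-t measurement model in the paper replaces the Gaussian update step with a variational approximation, which is where the quoted 5 to 10 times overhead comes from.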
References
28 Jan 2005
TL;DR: The important role of finite mixture models in the statistical analysis of data is underscored by the ever-increasing rate at which articles on mixture applications appear in the statistical and geospatial literature.
Abstract: The important role of finite mixture models in the statistical analysis of data is underscored by the ever-increasing rate at which articles on mixture applications appear in the statistical and ge...
8,258 citations
02 Oct 2000
TL;DR: The important role of finite mixture models in the statistical analysis of data is underscored by the ever-increasing rate at which articles on mixture applications appear in the mathematical and statistical literature.
Abstract: The important role of finite mixture models in the statistical analysis of data is underscored by the ever-increasing rate at which articles on mixture applications appear in the statistical and ge...
8,095 citations
TL;DR: In this paper, the authors define the disturbance term as the sum of symmetric normal and (negative) half-normal random variables, and consider various aspects of maximum-likelihood estimation for the coefficients of a production function with an additive disturbance term of this sort.
8,058 citations
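The composed disturbance this entry describes, symmetric normal noise plus a negative half-normal term, is easy to simulate: draw v from a normal and subtract u, the absolute value of another normal. A quick Monte Carlo sketch (parameter values and names are illustrative, not from the paper):

```python
import math
import random

def composed_error(sigma_v, sigma_u, rng):
    """One draw of the composed disturbance:
    symmetric normal noise minus a half-normal inefficiency term."""
    v = rng.gauss(0.0, sigma_v)        # two-sided statistical noise
    u = abs(rng.gauss(0.0, sigma_u))   # one-sided term, u >= 0
    return v - u

rng = random.Random(42)
draws = [composed_error(1.0, 1.0, rng) for _ in range(200_000)]
mean = sum(draws) / len(draws)

# Theory: E[v - u] = -sigma_u * sqrt(2/pi), about -0.798 here,
# so the disturbance is negatively skewed rather than symmetric.
print(abs(mean + (2.0 / math.pi) ** 0.5) < 0.02)  # True
```

This negative skew of the composed error is precisely what connects the stochastic frontier literature to skew-normal distribution theory.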
30 Nov 1997
TL;DR: This book is the first systematic survey of performance measurement with the express purpose of introducing the field to a wide audience of students, researchers, and practitioners.
Abstract: The second edition of An Introduction to Efficiency and Productivity Analysis is designed to be a general introduction for those who wish to study efficiency and productivity analysis. The book provides an accessible, well-written introduction to the four principal methods involved: econometric estimation of average response models, index numbers, data envelopment analysis (DEA), and stochastic frontier analysis (SFA). For each method, a detailed introduction to the basic concepts is presented, numerical examples are provided, and some of the more important extensions to the basic methods are discussed. Of special interest is the systematic use of detailed empirical applications using real-world data throughout the book. In recent years, there have been a number of excellent advanced-level books published on performance measurement. This book, however, is the first systematic survey of performance measurement with the express purpose of introducing the field to a wide audience of students, researchers, and practitioners. Indeed, the second edition maintains its uniqueness: (1) it is a well-written introduction to the field; (2) it outlines, discusses, and compares the four principal methods for efficiency and productivity analysis in a well-motivated presentation; (3) it provides detailed advice on computer programs that can be used to implement these performance measurement methods. The book contains computer instructions and output listings for the SHAZAM, LIMDEP, TFPIP, DEAP and FRONTIER computer programs. More extensive listings of data and computer instruction files are available on the book's website: (www.uq.edu.au/economics/cepa/crob2005).
7,616 citations