Author

John Aitchison

Bio: John Aitchison is an academic researcher from the University of Glasgow. He has contributed to research on compositional data and the Dirichlet distribution, has an h-index of 38, and has co-authored 69 publications receiving 15,481 citations. His previous affiliations include Princeton University and the University of Virginia.


Papers
Book
21 Aug 1986
TL;DR: In this book, the author develops a systematic methodology for the statistical analysis of compositional data, taking the simplex as the sample space and replacing crude covariance analysis with logratio transformations, logistic normal distributions and logratio linear modelling.
Abstract: 1 Compositional data: some challenging problems.- 1.1 Introduction.- 1.2 Geochemical compositions of rocks.- 1.3 Sediments at different depths.- 1.4 Ternary diagrams.- 1.5 Partial analyses and subcompositions.- 1.6 Supervisory behaviour.- 1.7 Household budget surveys.- 1.8 Steroid metabolite patterns in adults and children.- 1.9 Activity patterns of a statistician.- 1.10 Calibration of white-cell compositions.- 1.11 Fruit evaluation.- 1.12 Firework mixtures.- 1.13 Clam ecology.- 1.14 Bibliographic notes.- Problems.- 2 The simplex as sample space.- 2.1 Choice of sample space.- 2.2 Compositions and simplexes.- 2.3 Spaces, vectors, matrices.- 2.4 Bases and compositions.- 2.5 Subcompositions.- 2.6 Amalgamations.- 2.7 Partitions.- 2.8 Perturbations.- 2.9 Geometrical representations of compositional data.- 2.10 Bibliographic notes.- Problems.- 3 The special difficulties of compositional data analysis.- 3.1 Introduction.- 3.2 High dimensionality.- 3.3 Absence of an interpretable covariance structure.- 3.4 Difficulty of parametric modelling.- 3.5 The mixture variation difficulty.- 3.6 Bibliographic notes.- Problems.- 4 Covariance structure.- 4.1 Fundamentals.- 4.2 Specification of the covariance structure.- 4.3 The compositional variation array.- 4.4 Recovery of the compositional variation array from the crude mean vector and covariance matrix.- 4.5 Subcompositional analysis.- 4.6 Matrix specifications of covariance structures.- 4.7 Some important elementary matrices.- 4.8 Relationships between the matrix specifications.- 4.9 Estimated matrices for hongite compositions.- 4.10 Logratios and logcontrasts.- 4.11 Covariance structure of a basis.- 4.12 Commentary.- 4.13 Bibliographic notes.- Problems.- 5 Properties of matrix covariance specifications.- 5.1 Logratio notation.- 5.2 Logcontrast variances and covariances.- 5.3 Permutations.- 5.4 Properties of P and QP matrices.- 5.5 Permutation invariants involving Σ.- 5.6 Covariance matrix inverses.- 5.7 Subcompositions.- 5.8 Equivalence of characteristics of Σ, Γ, Τ.- 5.9 Logratio-uncorrelated compositions.- 5.10 Isotropic covariance structures.- 5.11 Bibliographic notes.- Problems.- 6 Logistic normal distributions on the simplex.- 6.1 Introduction.- 6.2 The additive logistic normal class.- 6.3 Density function.- 6.4 Moment properties.- 6.5 Composition of a lognormal basis.- 6.6 Class-preserving properties.- 6.7 Conditional subcompositional properties.- 6.8 Perturbation properties.- 6.9 A central limit theorem.- 6.10 A characterization by logcontrasts.- 6.11 Relationships with the Dirichlet class.- 6.12 Potential for statistical analysis.- 6.13 The multiplicative logistic normal class.- 6.14 Partitioned logistic normal classes.- 6.15 Some notation.- 6.16 Bibliographic notes.- Problems.- 7 Logratio analysis of compositions.- 7.1 Introduction.- 7.2 Estimation of μ and Σ.- 7.3 Validation: tests of logistic normality.- 7.4 Hypothesis testing strategy and techniques.- 7.5 Testing hypotheses about μ and Σ.- 7.6 Logratio linear modelling.- 7.7 Testing logratio linear hypotheses.- 7.8 Further aspects of logratio linear modelling.- 7.9 An application of logratio linear modelling.- 7.10 Predictive distributions, atypicality indices and outliers.- 7.11 Statistical discrimination.- 7.12 Conditional compositional modelling.- 7.13 Bibliographic notes.- Problems.- 8 Dimension-reducing techniques.- 8.1 Introduction.- 8.2 Crude principal component analysis.- 8.3 Logcontrast principal component analysis.- 8.4 Applications of logcontrast principal component analysis.- 8.5 Subcompositional analysis.- 8.6 Applications of subcompositional analysis.- 8.7 Canonical component analysis.- 8.8 Bibliographic notes.- Problems.- 9 Bases and compositions.- 9.1 Fundamentals.- 9.2 Covariance relationships.- 9.3 Principal and canonical component comparisons.- 9.4 Distributional relationships.- 9.5 Compositional invariance.- 9.6 An application to household budget analysis.- 9.7 An application to clinical biochemistry.- 9.8 Reappraisal of an early shape and size analysis.- 9.9 Bibliographic notes.- Problems.- 10 Subcompositions and partitions.- 10.1 Introduction.- 10.2 Complete subcompositional independence.- 10.3 Partitions of order 1.- 10.4 Ordered sequences of partitions.- 10.5 Caveat.- 10.6 Partitions of higher order.- 10.7 Bibliographic notes.- Problems.- 11 Irregular compositional data.- 11.1 Introduction.- 11.2 Modelling imprecision in compositions.- 11.3 Analysis of sources of imprecision.- 11.4 Imprecision and tests of independence.- 11.5 Rounded or trace zeros.- 11.6 Essential zeros.- 11.7 Missing components.- 11.8 Bibliographic notes.- Problems.- 12 Compositions in a covariate role.- 12.1 Introduction.- 12.2 Calibration.- 12.3 A before-and-after treatment problem.- 12.4 Experiments with mixtures.- 12.5 An application to firework mixtures.- 12.6 Classification from compositions.- 12.7 An application to geological classification.- 12.8 Bibliographic notes.- Problems.- 13 Further distributions on the simplex.- 13.1 Some generalizations of the Dirichlet class.- 13.2 Some generalizations of the logistic normal classes.- 13.3 Recapitulation.- 13.4 The Ad(α,B) class.- 13.5 Maximum likelihood estimation.- 13.6 Neutrality and partition independence.- 13.7 Subcompositional independence.- 13.8 A generalized lognormal gamma distribution with compositional invariance.- 13.9 Discussion.- 13.10 Bibliographic notes.- Problems.- 14 Miscellaneous problems.- 14.1 Introduction.- 14.2 Multi-way compositions.- 14.3 Multi-stage compositions.- 14.4 Multiple compositions.- 14.5 Kernel density estimation for compositional data.- 14.6 Compositional stochastic processes.- 14.7 Relation to Bayesian statistical analysis.- 14.8 Compositional and directional data.- Problems.- Appendices.- A Algebraic properties of elementary matrices.- B Bibliography.- C Computer software for compositional data analysis.- D Data sets.- Author index.
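The core device throughout the book is the logratio transformation, which maps a composition from the simplex to unconstrained real space, where standard multivariate methods apply. As a minimal illustration (not code from the book; the example composition is made up), here is a sketch of the additive logratio (alr) transform and its inverse:

```python
import numpy as np

def alr(x):
    """Additive logratio transform: maps a D-part composition
    (positive parts summing to 1) to R^(D-1), using the last
    part as the divisor, as in Aitchison's logratio analysis."""
    x = np.asarray(x, dtype=float)
    return np.log(x[:-1] / x[-1])

def alr_inv(y):
    """Inverse alr: maps a point in R^(D-1) back to the simplex."""
    z = np.exp(np.append(y, 0.0))  # append log(x_D / x_D) = 0
    return z / z.sum()

# A hypothetical 4-part composition (fractions summing to 1).
x = np.array([0.48, 0.22, 0.19, 0.11])
y = alr(x)            # unconstrained coordinates in R^3
x_back = alr_inv(y)   # recovers the original composition
assert np.allclose(x, x_back)
print(y, x_back)
```

Standard techniques such as linear modelling and principal component analysis can then be applied to the alr coordinates, which is the essence of the logratio analysis developed in the book.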

4,162 citations

Journal ArticleDOI
01 Jul 1987

4,051 citations

Journal ArticleDOI

1,306 citations

MonographDOI
TL;DR: This monograph develops statistical prediction analysis through predictive distributions, covering decisive, informative and tolerance prediction, with applications to sampling inspection, regulation and optimisation, calibration, diagnosis and treatment allocation.
Abstract: Preface 1. Introduction 2. Predictive distributions 3. Decisive prediction 4. Informative prediction 5. Mean coverage tolerance prediction 6. Guaranteed coverage tolerance prediction 7. Other approaches to prediction 8. Sampling inspection 9. Regulation and optimisation 10. Calibration 11. Diagnosis 12. Treatment allocation Appendix Bibliography Author Index Subject Index Example and problem index.
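The organising concept of the monograph is the predictive distribution of future observations given past data. As a minimal, self-contained illustration of that idea (a Beta-Binomial example of our own devising, not one from the book):

```python
from math import comb, exp, lgamma

def log_beta(p, q):
    """Log of the Beta function B(p, q)."""
    return lgamma(p) + lgamma(q) - lgamma(p + q)

def predictive(k, m, successes, trials, a=1.0, b=1.0):
    """Posterior predictive probability of k successes in m future
    trials, under a Binomial likelihood with a Beta(a, b) prior,
    having observed `successes` in `trials` (Beta-Binomial form)."""
    a_post, b_post = a + successes, b + trials - successes
    return comb(m, k) * exp(log_beta(k + a_post, m - k + b_post)
                            - log_beta(a_post, b_post))

# Having observed 7 successes in 10 trials, predict the next 5.
probs = [predictive(k, 5, 7, 10) for k in range(6)]
print(probs, sum(probs))  # the six probabilities sum to 1
```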

778 citations

Journal ArticleDOI
TL;DR: An extension of the kernel method of density estimation from continuous to multivariate binary spaces is described; its simple nonparametric nature and consistency properties make it an attractive tool in discrimination problems, with some advantages over previously proposed parametric counterparts.
Abstract: An extension of the kernel method of density estimation from continuous to multivariate binary spaces is described. Its simple nonparametric nature together with its consistency properties make it an attractive tool in discrimination problems, with some advantages over already proposed parametric counterparts. The method is illustrated by an application to a particular medical diagnostic problem. Simple extensions of the method to categorical data and to data of mixed binary and continuous form are indicated.
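The kernel in question is usually written, for d-dimensional binary vectors, as λ^(d−t) (1−λ)^t, where t is the Hamming distance between the evaluation point and an observation and 1/2 < λ ≤ 1 is a smoothing parameter. A minimal sketch of the resulting density estimate and its use in discrimination (toy data and variable names are ours):

```python
import numpy as np

def binary_kernel_density(x, data, lam=0.8):
    """Kernel density estimate at binary vector x from binary rows
    `data`, using the discrete kernel lam^(d - t) * (1 - lam)^t,
    where t is the Hamming distance to each observation; lam = 1
    gives the empirical distribution, lam = 1/2 the uniform one."""
    x = np.asarray(x)
    data = np.asarray(data)
    d = data.shape[1]
    t = (data != x).sum(axis=1)  # Hamming distances to each row
    return np.mean(lam ** (d - t) * (1 - lam) ** t)

# Toy discrimination example: compare class-conditional densities.
class_a = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1]])
class_b = np.array([[0, 0, 1], [0, 1, 1]])
x_new = np.array([1, 0, 1])
print(binary_kernel_density(x_new, class_a),
      binary_kernel_density(x_new, class_b))
```

Because the kernel sums to one over all 2^d binary vectors, each estimate is a proper probability distribution, which is what makes it directly usable in discrimination rules.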

600 citations


Cited by
Book
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations

Journal ArticleDOI
TL;DR: Methods specifically designed for collinearity, such as latent variable methods and tree-based models, did not outperform traditional GLM with threshold-based pre-selection; the results highlight the value of GLM combined with penalised methods and threshold-based pre-selection when omitted variables are considered in the final interpretation.
Abstract: Collinearity refers to the non-independence of predictor variables, usually in a regression-type analysis. It is a common feature of any descriptive ecological data set and can be a problem for parameter estimation because it inflates the variance of regression parameters and hence potentially leads to the wrong identification of relevant predictors in a statistical model. Collinearity is a severe problem when a model is trained on data from one region or time and used for prediction in another with a different or unknown structure of collinearity. To demonstrate the reach of the problem of collinearity in ecology, we show how relationships among predictors differ between biomes, change over spatial scales and through time. Across disciplines, different approaches to addressing collinearity problems have been developed, ranging from clustering of predictors, through threshold-based pre-selection and latent variable methods, to shrinkage and regularisation. Using simulated data with five predictor-response relationships of increasing complexity and eight levels of collinearity, we compared ways to address collinearity with standard multiple regression and machine-learning approaches. We assessed the performance of each approach by testing its impact on prediction to new data. In the extreme, we tested whether the methods were able to identify the true underlying relationship in a training dataset with strong collinearity by evaluating their performance on a test dataset without any collinearity. We found that methods specifically designed for collinearity, such as latent variable methods and tree-based models, did not outperform the traditional GLM and threshold-based pre-selection. Our results highlight the value of GLM in combination with penalised methods (particularly ridge) and threshold-based pre-selection when omitted variables are considered in the final interpretation. However, all approaches tested yielded degraded predictions under change in collinearity structure, and the 'folk lore' threshold of |r| > 0.7 for correlation coefficients between predictor variables was an appropriate indicator of when collinearity begins to severely distort model estimation and subsequent prediction. The use of ecological understanding of the system in pre-analysis variable selection and the choice of the least sensitive statistical approaches reduce the problems of collinearity, but cannot ultimately solve them.
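A minimal simulation sketch of the kind of comparison the paper describes, using NumPy and scikit-learn (the data, seed and penalty value are illustrative assumptions): two strongly collinear predictors trip the |r| > 0.7 indicator, and ridge shrinks the unstable OLS coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.3 * rng.normal(size=n)     # strongly collinear with x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 0.0 * x2 + rng.normal(size=n)  # true effect only via x1

r = np.corrcoef(x1, x2)[0, 1]
print(f"|r| = {abs(r):.2f}")  # above the 0.7 'folk lore' threshold

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)
print("OLS coefficients:  ", ols.coef_)    # inflated/unstable under collinearity
print("Ridge coefficients:", ridge.coef_)  # shrunk toward more stable values
```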

6,199 citations

Journal ArticleDOI
TL;DR: The Lagrange multiplier (LM) statistic tests the effect of imposing the null hypothesis on the first-order conditions for a maximum of the likelihood; in many econometric specifications it can be computed from a simple auxiliary regression using the residuals of the fitted restricted model.
Abstract: Many econometric models are susceptible to analysis only by asymptotic techniques and there are three principles, based on asymptotic theory, for the construction of tests of parametric hypotheses. These are: (i) the Wald (W) test which relies on the asymptotic normality of parameter estimators, (ii) the maximum likelihood ratio (LR) procedure and (iii) the Lagrange multiplier (LM) method which tests the effect on the first-order conditions for a maximum of the likelihood of imposing the hypothesis. In the econometric literature, most attention seems to have been centred on the first two principles. Familiar "t-tests" usually rely on the W principle for their validity while there have been a number of papers advocating and illustrating the use of the LR procedure. However, all three are equivalent in well-behaved problems in the sense that they give statistics with the same asymptotic distribution when the null hypothesis is true and have the same asymptotic power characteristics. Choice of any one principle must therefore be made by reference to other criteria such as small sample properties or computational convenience. In many situations the W test is attractive for this latter reason because it is constructed from the unrestricted estimates of the parameters and their estimated covariance matrix. The LM test is based on estimation with the hypothesis imposed as parametric restrictions so it seems reasonable that a choice between W or LM be based on the relative ease of estimation under the null and alternative hypotheses. Whenever it is easier to estimate the restricted model, the LM test will generally be more useful. It then provides applied researchers with a simple technique for assessing the adequacy of their particular specification. This paper has two aims. The first is to exposit the various forms of the LM statistic and to collect together some of the relevant research reported in the mathematical statistics literature. The second is to illustrate the construction of LM tests by considering a number of particular econometric specifications as examples. It will be found that in many instances the LM statistic can be computed by a regression using the residuals of the fitted model which, because of its simplicity, is itself estimated by OLS. The paper contains five sections. In Section 2, the LM statistic is outlined and some alternative versions of it are discussed. Section 3 gives the derivation of the statistic for
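The practical payoff described here, that the LM statistic can often be computed as n·R² from an auxiliary regression on the residuals of the restricted model, can be sketched as follows (an omitted-regressor test on simulated data; all names and values are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 1.0 + 2.0 * x + 0.5 * z + rng.normal(size=n)  # z truly belongs in the model

# Restricted model: regress y on a constant and x only (H0: coefficient on z = 0).
Xr = np.column_stack([np.ones(n), x])
beta_r, *_ = np.linalg.lstsq(Xr, y, rcond=None)
u = y - Xr @ beta_r  # restricted residuals (mean zero, given the intercept)

# Auxiliary regression of the residuals on the full regressor set.
Xf = np.column_stack([np.ones(n), x, z])
beta_aux, *_ = np.linalg.lstsq(Xf, u, rcond=None)
u_aux = u - Xf @ beta_aux
r2 = 1.0 - (u_aux @ u_aux) / (u @ u)  # R^2 of the auxiliary regression
lm = n * r2                           # LM statistic, chi-squared(1) under H0
print(f"LM = {lm:.1f}, p = {stats.chi2.sf(lm, df=1):.4f}")
```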

5,826 citations

Book
01 Jan 1996
TL;DR: In this self-contained account, Professor Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks.
Abstract: From the Publisher: Pattern recognition has long been studied in relation to many different (and mainly unrelated) applications, such as remote sensing, computer vision, space research, and medical imaging. In this book Professor Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks. Unifying principles are brought to the fore, and the author gives an overview of the state of the subject. Many examples are included to illustrate real problems in pattern recognition and how to overcome them. This is a self-contained account, ideal both as an introduction for non-specialist readers and as a handbook for the more expert reader.

5,632 citations