Author

Alexander J. McNeil

Bio: Alexander J. McNeil is an academic researcher from the University of York. The author has contributed to research in topics: Copula (statistics) & Credit risk. The author has an h-index of 35 and has co-authored 96 publications receiving 13,290 citations. Previous affiliations of Alexander J. McNeil include the University of Zurich & École Polytechnique Fédérale de Lausanne.


Papers
Book
16 Oct 2005
TL;DR: A comprehensive treatment of the theoretical concepts and modelling techniques of quantitative risk management, describing the latest advances in market, credit and operational risk modelling.
Abstract: This book provides the most comprehensive treatment of the theoretical concepts and modelling techniques of quantitative risk management. Whether you are a financial risk analyst, actuary, regulator or student of quantitative finance, Quantitative Risk Management gives you the practical tools you need to solve real-world problems. Describing the latest advances in the field, Quantitative Risk Management covers the methods for market, credit and operational risk modelling. It places standard industry approaches on a more formal footing and explores key concepts such as loss distributions, risk measures and risk aggregation and allocation principles. The book's methodology draws on diverse quantitative disciplines, from mathematical finance and statistics to econometrics and actuarial mathematics. A primary theme throughout is the need to satisfactorily address extreme outcomes and the dependence of key risk drivers. Proven in the classroom, the book also covers advanced topics like credit derivatives. This edition:
- Is fully revised and expanded to reflect developments in the field since the financial crisis
- Features shorter chapters to facilitate teaching and learning
- Provides enhanced coverage of Solvency II and insurance risk management and extended treatment of credit risk, including counterparty credit risk and CDO pricing
- Includes a new chapter on market risk and new material on risk measures and risk aggregation

2,580 citations
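
Two of the key concepts named above, loss distributions and risk measures, come together in the computation of value-at-risk (VaR) and expected shortfall. As a purely illustrative sketch (the function name and the Student t toy losses are invented for this example), the following computes both measures empirically from a sample of losses:

```python
import numpy as np

def var_es(losses, alpha=0.99):
    """Empirical value-at-risk and expected shortfall at level alpha.

    VaR is the alpha-quantile of the loss distribution; expected
    shortfall is the average loss beyond VaR (larger values mean
    bigger losses in this sign convention).
    """
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

# Toy example: heavy-tailed losses from a Student t distribution,
# the kind of extreme-outcome behaviour the book emphasizes.
rng = np.random.default_rng(0)
print(var_es(rng.standard_t(df=4, size=100_000), alpha=0.99))
```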

Book ChapterDOI
01 Jan 2002
TL;DR: This paper deals with the static (non-time-dependent) case and emphasizes the copula representation of dependence for a random vector; the problem of finding multivariate models that are consistent with prespecified marginal distributions and correlations is also addressed.
Abstract: Modern risk management calls for an understanding of stochastic dependence going beyond simple linear correlation. This paper deals with the static (non-time-dependent) case and emphasizes the copula representation of dependence for a random vector. Linear correlation is a natural dependence measure for multivariate normally and, more generally, elliptically distributed risks but other dependence concepts like comonotonicity and rank correlation should also be understood by the risk management practitioner. Using counterexamples the falsity of some commonly held views on correlation is demonstrated; in general, these fallacies arise from the naive assumption that dependence properties of the elliptical world also hold in the non-elliptical world. In particular, the problem of finding multivariate models which are consistent with prespecified marginal distributions and correlations is addressed. Pitfalls are highlighted and simulation algorithms avoiding these problems are constructed.

2,052 citations
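
The copula representation emphasized above separates the dependence structure from the margins, and that separation is also how one simulates multivariate models with prespecified marginal distributions. A minimal sketch, assuming a Gaussian copula and margins chosen purely for illustration:

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, rho, inv_cdfs, seed=0):
    """Draw n vectors whose dependence is a Gaussian copula with
    correlation matrix rho and whose margins are given by the
    inverse CDFs in inv_cdfs."""
    rng = np.random.default_rng(seed)
    d = rho.shape[0]
    z = rng.multivariate_normal(np.zeros(d), rho, size=n)
    u = stats.norm.cdf(z)  # uniform margins, Gaussian dependence
    return np.column_stack([f(u[:, j]) for j, f in enumerate(inv_cdfs)])

# Hypothetical example: exponential and lognormal margins.
rho = np.array([[1.0, 0.7], [0.7, 1.0]])
x = gaussian_copula_sample(10_000, rho, [stats.expon.ppf, stats.lognorm(1.0).ppf])
```

Note that the linear correlation of the simulated vectors generally differs from the 0.7 specified for the copula; conflating the two is precisely the kind of elliptical-world fallacy the paper warns against.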

Journal ArticleDOI
TL;DR: In this paper, the authors propose a method for estimating value-at-risk (VaR) and related risk measures describing the tail of the conditional distribution of a heteroscedastic financial return series.

1,721 citations
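
A common concrete realization of this idea combines a GARCH-type volatility filter with a generalized Pareto tail fit to the standardized residuals. The sketch below illustrates that combination rather than reconstructing the paper's exact procedure; it assumes the third-party `arch` package, the input series and tuning parameters are hypothetical, and the tail fit assumes a nonzero shape parameter:

```python
import numpy as np
from arch import arch_model          # assumed available: pip install arch
from scipy.stats import genpareto

def conditional_var(returns, alpha=0.99, threshold_q=0.90):
    """One-step-ahead conditional VaR: (1) GARCH(1,1) volatility,
    (2) GPD tail fit to standardized residual losses, (3) rescale the
    tail quantile by the volatility forecast."""
    res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    z = res.resid / res.conditional_volatility         # standardized residuals

    losses = -z                                        # work with losses
    u = np.quantile(losses, threshold_q)               # high threshold
    excess = losses[losses > u] - u
    xi, _, beta = genpareto.fit(excess, floc=0)        # assumes xi != 0
    p_u = excess.size / losses.size                    # exceedance probability
    z_q = u + (beta / xi) * (((1 - alpha) / p_u) ** (-xi) - 1)  # GPD quantile

    fcast = res.forecast(horizon=1)
    mu = fcast.mean.iloc[-1, 0]
    sigma = np.sqrt(fcast.variance.iloc[-1, 0])
    return sigma * z_q - mu        # alpha-quantile of tomorrow's loss
```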

Journal ArticleDOI
TL;DR: The Gaussian mixture representation of a multivariate t distribution is used as a starting point to construct two new copulas, the skewed t copula and the grouped t copula, which allow more heterogeneity in the modelling of dependent observations.
Abstract: Summary The t copula and its properties are described with a focus on issues related to the dependence of extreme values. The Gaussian mixture representation of a multivariate t distribution is used as a starting point to construct two new copulas, the skewed t copula and the grouped t copula, which allow more heterogeneity in the modelling of dependent observations. Extreme value considerations are used to derive two further new copulas: the t extreme value copula is the limiting copula of componentwise maxima of t distributed random vectors; the t lower tail copula is the limiting copula of bivariate observations from a t distribution that are conditioned to lie below some joint threshold that is progressively lowered. Both these copulas may be approximated for practical purposes by simpler, better-known copulas, these being the Gumbel and Clayton copulas respectively.

952 citations
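
The Gaussian mixture representation named in the abstract doubles as a sampling recipe for the ordinary t copula: scale a correlated normal vector by an independent chi-square mixing variable, then push each component through the univariate t CDF. A minimal sketch (function name and example parameters are illustrative):

```python
import numpy as np
from scipy import stats

def t_copula_sample(n, corr, nu, seed=0):
    """Sample the t copula via the normal variance-mixture
    representation X = sqrt(nu / W) * Z, with Z ~ N(0, corr)
    and W ~ chi-square(nu) independent of Z."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=n)
    w = rng.chisquare(nu, size=n)
    x = z * np.sqrt(nu / w)[:, None]
    return stats.t.cdf(x, df=nu)       # componentwise t CDF -> uniforms

corr = np.array([[1.0, 0.5], [0.5, 1.0]])
u = t_copula_sample(10_000, corr, nu=4)
```

The grouped t copula described above modifies exactly this construction, letting different subvectors use different degrees of freedom, with the group mixing variables driven by a single common uniform shock.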

Journal ArticleDOI
TL;DR: It is shown that a necessary and sufficient condition for an Archimedean copula generator to generate a $d$-dimensional copula is that the generator is a $d$-monotone function.
Abstract: It is shown that a necessary and sufficient condition for an Archimedean copula generator to generate a $d$-dimensional copula is that the generator is a $d$-monotone function. The class of $d$-dimensional Archimedean copulas is shown to coincide with the class of survival copulas of $d$-dimensional $\ell_1$-norm symmetric distributions that place no point mass at the origin. The $d$-monotone Archimedean copula generators may be characterized using a little-known integral transform of Williamson [Duke Math. J. 23 (1956) 189--207] in an analogous manner to the well-known Bernstein--Widder characterization of completely monotone generators in terms of the Laplace transform. These insights allow the construction of new Archimedean copula families and provide a general solution to the problem of sampling multivariate Archimedean copulas. They also yield useful expressions for the $d$-dimensional Kendall function and Kendall's rank correlation coefficients and facilitate the derivation of results on the existence of densities and the description of singular components for Archimedean copulas. The existence of a sharp lower bound for Archimedean copulas with respect to the positive lower orthant dependence ordering is shown.

617 citations
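
For the classical special case of a completely monotone generator, i.e. a Laplace transform per the Bernstein--Widder characterization mentioned above, sampling reduces to the well-known frailty construction; the paper's Williamson-transform machinery is what generalizes beyond this case to all $d$-monotone generators. A sketch for the Clayton family, whose generator is the Laplace transform of a gamma distribution (function name and parameters are illustrative):

```python
import numpy as np

def clayton_sample(n, d, theta, seed=0):
    """Sample a d-dimensional Clayton copula (theta > 0) by frailty:
    the generator psi(t) = (1 + t)**(-1/theta) is the Laplace
    transform of V ~ Gamma(1/theta), and U_i = psi(E_i / V) with
    E_i ~ Exp(1) i.i.d. has the Clayton copula as its law."""
    rng = np.random.default_rng(seed)
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))
    e = rng.exponential(size=(n, d))
    return (1.0 + e / v) ** (-1.0 / theta)

u = clayton_sample(10_000, d=3, theta=2.0)
```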


Cited by
Journal ArticleDOI
TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, and the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
Abstract: The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed distribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a random-effects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.

13,884 citations
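
The recommended multiple-sequence strategy leads to the paper's well-known convergence diagnostic: compare between-sequence and within-sequence variability, and continue simulating while the ratio suggests the distributional estimate could still sharpen. A minimal sketch of that potential scale reduction computation for a single scalar estimand (the function name and toy data are illustrative, and modern implementations add refinements such as split chains):

```python
import numpy as np

def potential_scale_reduction(chains):
    """Basic potential scale reduction factor for one estimand.

    chains: (m, n) array -- m independent sequences of length n with
    overdispersed starting points. Values near 1 suggest that further
    simulation would not sharpen the distributional estimate much."""
    chains = np.asarray(chains)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-sequence variance
    W = chains.var(axis=1, ddof=1).mean()     # within-sequence variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)

# Well-mixed chains give a value close to 1.
rng = np.random.default_rng(0)
print(potential_scale_reduction(rng.normal(size=(4, 1000))))
```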

Journal ArticleDOI
TL;DR: This survey tries to provide a structured and comprehensive overview of the research on anomaly detection by grouping existing techniques into different categories based on the underlying approach adopted by each technique.
Abstract: Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.

9,627 citations
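
As a concrete instance of the survey's notion of a basic technique within one category, the simplest parametric statistical detector flags points that lie many standard deviations from the mean; many statistical detectors can be read as variants of this idea. A minimal sketch (threshold and data are illustrative):

```python
import numpy as np

def zscore_anomalies(x, threshold=3.0):
    """Flag indices whose absolute z-score exceeds the threshold.
    Key assumption (in the survey's sense): normal instances dominate
    the sample, so its mean and standard deviation describe normal
    behavior."""
    x = np.asarray(x, dtype=float)
    z = np.abs(x - x.mean()) / x.std()
    return np.flatnonzero(z > threshold)

data = np.concatenate([np.random.default_rng(0).normal(size=1000), [8.0, -9.5]])
print(zscore_anomalies(data))   # flags the injected outliers at 1000, 1001
```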

Journal ArticleDOI
TL;DR: A one-page Technometrics review entry for the book Applied Multivariate Statistical Analysis (2005).
Abstract: (2005). Applied Multivariate Statistical Analysis. Technometrics: Vol. 47, No. 4, pp. 517-517.

3,932 citations

Journal ArticleDOI
TL;DR: Fundamental properties of conditional value-at-risk (CVaR) are derived for loss distributions in finance that can involve discreteness, and optimization shortcuts are provided which, through linear programming techniques, make practical many large-scale calculations that could otherwise be out of reach.
Abstract: Fundamental properties of conditional value-at-risk (CVaR), as a measure of risk with significant advantages over value-at-risk (VaR), are derived for loss distributions in finance that can involve discreetness. Such distributions are of particular importance in applications because of the prevalence of models based on scenarios and finite sampling. CVaR is able to quantify dangers beyond VaR and moreover it is coherent. It provides optimization short-cuts which, through linear programming techniques, make practical many large-scale calculations that could otherwise be out of reach. The numerical efficiency and stability of such calculations, shown in several case studies, are illustrated further with an example of index tracking.

3,010 citations
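
The optimization shortcut referenced above rests on a minimization formula: CVaR at level alpha equals the minimum over c of c + E[(L - c)+]/(1 - alpha), and the minimizing c is the VaR. Because the objective is piecewise linear in c for a finite scenario set, the minimization (and portfolio optimization built on it) turns into a linear program. A minimal empirical sketch (function name and toy data are illustrative):

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR via the minimization formula
    CVaR_alpha = min_c [ c + mean((L - c)+) / (1 - alpha) ].
    The objective is convex and piecewise linear, so for a finite
    scenario set the minimum is attained at one of the scenario
    losses; we simply evaluate it at each of them."""
    losses = np.asarray(losses)
    objective = lambda c: c + np.maximum(losses - c, 0.0).mean() / (1.0 - alpha)
    values = np.array([objective(c) for c in losses])
    i = int(np.argmin(values))
    return losses[i], values[i]        # (VaR estimate, CVaR estimate)

rng = np.random.default_rng(0)
print(var_cvar(rng.standard_t(df=3, size=5000), alpha=0.99))
```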