scispace - formally typeset
Author

Christian Genest

Bio: Christian Genest is an academic researcher at McGill University. He has contributed to research on topics including Copula (probability theory) & Copula (linguistics). He has an h-index of 52 and has co-authored 185 publications receiving 15,597 citations. Previous affiliations of Christian Genest include the University of British Columbia and the University of Montana.


Papers
Journal ArticleDOI
TL;DR: This paper presents an introduction to inference for copula models based on rank methods, working out in detail a small, fictitious numerical example to exhibit the various steps involved in investigating the dependence between two random variables and in modeling it using copulas.
Abstract: This paper presents an introduction to inference for copula models, based on rank methods. By working out in detail a small, fictitious numerical example, the writers exhibit the various steps involved in investigating the dependence between two random variables and in modeling it using copulas. Simple graphical tools and numerical techniques are presented for selecting an appropriate model, estimating its parameters, and checking its goodness-of-fit. A larger, realistic application of the methodology to hydrological data is then presented.

1,414 citations
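The rank-based workflow the abstract describes can be sketched in a few lines. The following is a hypothetical illustration, not the paper's own code: compute pseudo-observations from ranks (which discard the marginals), then estimate a Clayton copula parameter by inverting Kendall's tau, using the known relation tau = theta / (theta + 2) for Clayton. The marginal transforms and seed are arbitrary choices for the demo.

```python
import math
import random

random.seed(42)
n = 200
theta_true = 2.0

# Simulate (X, Y) with Clayton dependence via conditional inversion,
# then apply arbitrary monotone marginal transforms -- ranks undo them.
data = []
for _ in range(n):
    u = random.random()
    w = random.random()
    v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) ** (-1 / theta_true)
    data.append((math.exp(u), math.log1p(v)))

def pseudo_obs(values):
    """Map a sample to normalized ranks r_i / (n + 1): margin-free pseudo-observations."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank / (len(values) + 1)
    return r

us = pseudo_obs([x for x, _ in data])
vs = pseudo_obs([y for _, y in data])

def kendall_tau(a, b):
    """O(n^2) Kendall's tau: (concordant - discordant) / C(n, 2)."""
    conc = disc = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            s = (a[i] - a[j]) * (b[i] - b[j])
            conc += s > 0
            disc += s < 0
    return (conc - disc) / (len(a) * (len(a) - 1) / 2)

tau = kendall_tau(us, vs)
theta_hat = 2 * tau / (1 - tau)   # invert tau = theta / (theta + 2)
print(round(tau, 3), round(theta_hat, 3))
```

With the true parameter set to 2 (so tau = 0.5), the estimate lands near 2 up to sampling noise; because tau depends only on ranks, the arbitrary marginal transforms have no effect.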

Journal ArticleDOI
TL;DR: In this article, the authors investigated the properties of a semiparametric method for estimating the dependence parameters in a family of multivariate distributions and proposed an estimator, obtained as a solution of a pseudo-likelihood equation, which is consistent, asymptotically normal and fully efficient at independence.
Abstract: SUMMARY This paper investigates the properties of a semiparametric method for estimating the dependence parameters in a family of multivariate distributions. The proposed estimator, obtained as a solution of a pseudo-likelihood equation, is shown to be consistent, asymptotically normal and fully efficient at independence. A natural estimator of its asymptotic variance is proved to be consistent. Comparisons are made with alternative semiparametric estimators in the special case of Clayton's model for association in bivariate data.

1,280 citations
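The pseudo-likelihood idea of the abstract can be illustrated concretely: replace the unknown marginals by normalized ranks, plug the resulting pseudo-observations into a parametric copula density, and maximize over the dependence parameter. A hypothetical sketch using the Clayton copula (the paper's comparison case), with a coarse grid search standing in for solving the pseudo-likelihood equation:

```python
import math
import random

random.seed(7)
n = 300
theta_true = 2.0

# Clayton-dependent uniforms via conditional inversion.
sample = []
for _ in range(n):
    u, w = random.random(), random.random()
    v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) ** (-1 / theta_true)
    sample.append((u, v))

def pseudo_obs(values):
    """Normalized ranks r_i / (n + 1), replacing the unknown marginals."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank / (len(values) + 1)
    return r

us = pseudo_obs([u for u, _ in sample])
vs = pseudo_obs([v for _, v in sample])

def clayton_log_density(u, v, t):
    """log c(u, v; t) for the Clayton copula, t > 0."""
    return (math.log(1 + t)
            - (1 + t) * (math.log(u) + math.log(v))
            - (2 + 1 / t) * math.log(u ** (-t) + v ** (-t) - 1))

def pseudo_loglik(t):
    return sum(clayton_log_density(u, v, t) for u, v in zip(us, vs))

# Maximize over a coarse grid; in practice one solves the
# pseudo-likelihood (score) equation with a root-finder.
grid = [0.1 * k for k in range(1, 101)]          # theta in (0, 10]
theta_hat = max(grid, key=pseudo_loglik)
print(round(theta_hat, 2))
```

The maximizer should sit near the true value of 2; the paper's contribution is showing that this rank-plug-in estimator is consistent, asymptotically normal, and fully efficient at independence.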

Journal ArticleDOI
TL;DR: In this paper, the authors examined the problem of selecting an Archimedean copula providing a suitable representation of the dependence structure between two variates X and Y in the light of a random sample (X1, Y1), …, (Xn, Yn).
Abstract: A bivariate distribution function H(x, y) with marginals F(x) and G(y) is said to be generated by an Archimedean copula if it can be expressed in the form H(x, y) = ϕ⁻¹[ϕ{F(x)} + ϕ{G(y)}] for some convex, decreasing function ϕ defined on (0, 1] in such a way that ϕ(1) = 0. Many well-known systems of bivariate distributions belong to this class, including those of Gumbel, Ali-Mikhail-Haq-Thelot, Clayton, Frank, and Hougaard. Frailty models also fall under that general prescription. This article examines the problem of selecting an Archimedean copula providing a suitable representation of the dependence structure between two variates X and Y in the light of a random sample (X1, Y1), …, (Xn, Yn). The key to the estimation procedure is a one-dimensional empirical distribution function that can be constructed whether the uniform representation of X and Y is Archimedean or not, and independently of their marginals. This semiparametric estimator, based on a decomposition of Kendall's tau statistic...

1,246 citations
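The one-dimensional empirical distribution mentioned in the abstract is the empirical Kendall distribution: for each point, compute the proportion W_i of sample points it strictly dominates, then compare the empirical CDF of the W_i with the model's K(t) = t - ϕ(t)/ϕ'(t). A hypothetical sketch for the Clayton generator ϕ(t) = (t^(-θ) - 1)/θ, for which K(t) = t + t(1 - t^θ)/θ; the data are simulated from the same model, so the two curves should be close:

```python
import random

random.seed(3)
n = 250
theta = 2.0

# Clayton sample via conditional inversion (true generator known here,
# so the empirical K_n should track the model K_theta).
pts = []
for _ in range(n):
    u, w = random.random(), random.random()
    v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)
    pts.append((u, v))

# W_i = proportion of sample points strictly dominated by point i.
W = []
for i, (xi, yi) in enumerate(pts):
    count = sum(1 for j, (xj, yj) in enumerate(pts) if j != i and xj < xi and yj < yi)
    W.append(count / (n - 1))

def K_empirical(t):
    """Empirical CDF of the W_i: the one-dimensional summary of dependence."""
    return sum(1 for w in W if w <= t) / n

def K_clayton(t, th):
    """K(t) = t - phi(t)/phi'(t) for the Clayton generator phi(t) = (t^-th - 1)/th."""
    return t + t * (1 - t ** th) / th

# Rough sup-distance between empirical and model K over a grid; copula
# selection amounts to picking the family that minimizes such a distance.
dist = max(abs(K_empirical(t) - K_clayton(t, theta)) for t in [0.1 * k for k in range(1, 10)])
print(round(dist, 3))
```

Because the W_i depend on the data only through ranks, this comparison requires no knowledge of the marginals, which is the semiparametric point of the procedure.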

01 Nov 2006
TL;DR: In this paper, the authors present a critical review of blanket tests for goodness-of-fit testing of copula models and suggest new ones, concluding with a number of practical recommendations.
Abstract: Many proposals have been made recently for goodness-of-fit testing of copula models. After reviewing them briefly, the authors concentrate on "blanket tests", i.e., those whose implementation requires neither an arbitrary categorization of the data nor any strategic choice of smoothing parameter, weight function, kernel, window, etc. The authors present a critical review of these procedures and suggest new ones. They describe and interpret the results of a large Monte Carlo experiment designed to assess the effect of the sample size and the strength of dependence on the level and power of the blanket tests for various combinations of copula models under the null hypothesis and the alternative. To circumvent problems in the determination of the limiting distribution of the test statistics under composite null hypotheses, they recommend the use of a double parametric bootstrap procedure, whose implementation is detailed. They conclude with a number of practical recommendations.

1,140 citations
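The bootstrap logic behind these goodness-of-fit tests can be sketched as follows. This is a simplified, hypothetical illustration, not the paper's procedure verbatim: fit a Clayton copula to pseudo-observations by inverting Kendall's tau, form a Cramér-von Mises-type distance between the empirical copula and the fitted one, and calibrate it with a single-level parametric bootstrap (the paper recommends a double bootstrap when the null distribution depends on the estimated parameter).

```python
import random

random.seed(11)

def clayton_sample(n, t):
    """n pairs from a Clayton copula with parameter t, by conditional inversion."""
    out = []
    for _ in range(n):
        u, w = random.random(), random.random()
        v = ((w ** (-t / (1 + t)) - 1) * u ** (-t) + 1) ** (-1 / t)
        out.append((u, v))
    return out

def pseudo_obs(pts):
    """Componentwise normalized ranks r_i / (n + 1)."""
    n = len(pts)
    def ranks(vals):
        order = sorted(range(n), key=lambda i: vals[i])
        r = [0.0] * n
        for k, i in enumerate(order, start=1):
            r[i] = k / (n + 1)
        return r
    return list(zip(ranks([x for x, _ in pts]), ranks([y for _, y in pts])))

def kendall_theta(pts):
    """Estimate Clayton theta by inverting tau = theta / (theta + 2)."""
    n = len(pts)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (pts[i][0] - pts[j][0]) * (pts[i][1] - pts[j][1])
            conc += s > 0
            disc += s < 0
    tau = (conc - disc) / (n * (n - 1) / 2)
    return max(2 * tau / (1 - tau), 1e-3)

def cvm_stat(pts):
    """Cramer-von Mises distance between empirical and fitted Clayton copula."""
    n = len(pts)
    t = kendall_theta(pts)
    s = 0.0
    for (u, v) in pts:
        C_emp = sum(1 for (a, b) in pts if a <= u and b <= v) / n
        C_fit = (u ** (-t) + v ** (-t) - 1) ** (-1 / t)
        s += (C_emp - C_fit) ** 2
    return s, t

obs = pseudo_obs(clayton_sample(100, 2.0))
S0, t0 = cvm_stat(obs)

# Parametric bootstrap: resample from the fitted model, recompute the
# statistic each time, and take the exceedance proportion as the p-value.
B = 50
exceed = sum(1 for _ in range(B) if cvm_stat(pseudo_obs(clayton_sample(100, t0)))[0] >= S0)
p_value = (exceed + 1) / (B + 1)
print(round(p_value, 3))
```

Since the data here really are Clayton, the p-value should not be systematically small; rerunning with data from a different family would tend to drive it toward zero.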

Journal ArticleDOI
TL;DR: A taxonomy of solutions is presented which serves as the framework for a survey of recent theoretical developments in the area; a number of current research directions are mentioned, and an extensive, current annotated bibliography is included.
Abstract: This paper addresses the problem of aggregating a number of expert opinions which have been expressed in some numerical form in order to reflect individual uncertainty vis-a-vis a quantity of interest. The primary focus is consensus belief formation and expert use, although some relevant aspects of group decision making are also reviewed. A taxonomy of solutions is presented which serves as the framework for a survey of recent theoretical developments in the area. A number of current research directions are mentioned and an extensive, current annotated bibliography is included.

1,027 citations
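Two of the best-known aggregation rules in the taxonomy surveyed here are the linear and logarithmic opinion pools. A small, hypothetical numerical illustration (the distributions and weights are made up for the demo): each expert reports a probability distribution over three outcomes, and the pools combine them by weighted arithmetic and renormalized weighted geometric averaging, respectively.

```python
# Two hypothetical experts' probability distributions over three outcomes.
p1 = [0.6, 0.3, 0.1]
p2 = [0.2, 0.5, 0.3]
weights = [0.5, 0.5]          # how much the decision maker trusts each expert

# Linear opinion pool: weighted arithmetic average (always a distribution).
linear = [weights[0] * a + weights[1] * b for a, b in zip(p1, p2)]

# Logarithmic opinion pool: weighted geometric average, renormalized.
geo = [a ** weights[0] * b ** weights[1] for a, b in zip(p1, p2)]
z = sum(geo)
log_pool = [g / z for g in geo]

print([round(x, 3) for x in linear])     # [0.4, 0.4, 0.2]
print([round(x, 3) for x in log_pool])
```

Note the characteristic difference: the logarithmic pool down-weights outcomes on which the experts disagree more sharply than the linear pool does, and it would assign probability zero to any outcome that any expert rules out.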


Cited by
Journal ArticleDOI
TL;DR: A review of Billingsley's classic monograph on the weak convergence of probability measures.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.

5,689 citations

Journal ArticleDOI
TL;DR: A product of experts (PoE) is an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary; maximum-likelihood training is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule, but contrastive divergence provides a tractable alternative objective.
Abstract: It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual "expert" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called "contrastive divergence" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data.

5,150 citations
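Contrastive divergence is easiest to see on a restricted Boltzmann machine, the standard expert type for this training rule. A minimal CD-1 sketch, with made-up sizes, learning rate, and toy data: take one Gibbs step from the data (positive phase), reconstruct, take hidden statistics again (negative phase), and move the weights by the difference of the two correlation terms, sidestepping the renormalization term entirely.

```python
import math
import random

random.seed(0)

NV, NH = 6, 3            # visible / hidden units (toy sizes)
W = [[random.gauss(0, 0.1) for _ in range(NH)] for _ in range(NV)]
b_v = [0.0] * NV
b_h = [0.0] * NH
LR = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_h(v):
    """Hidden probabilities and a binary sample given visibles."""
    probs = [sigmoid(b_h[j] + sum(v[i] * W[i][j] for i in range(NV))) for j in range(NH)]
    return probs, [1 if random.random() < p else 0 for p in probs]

def sample_v(h):
    """Visible probabilities and a binary sample given hiddens."""
    probs = [sigmoid(b_v[i] + sum(h[j] * W[i][j] for j in range(NH))) for i in range(NV)]
    return probs, [1 if random.random() < p else 0 for p in probs]

# Toy data: two complementary binary patterns.
data = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]]

for epoch in range(200):
    for v0 in data:
        ph0, h0 = sample_h(v0)            # positive phase statistics
        pv1, v1 = sample_v(h0)            # one-step reconstruction
        ph1, _ = sample_h(v1)             # negative phase statistics (CD-1)
        for i in range(NV):
            for j in range(NH):
                W[i][j] += LR * (v0[i] * ph0[j] - v1[i] * ph1[j])
        for i in range(NV):
            b_v[i] += LR * (v0[i] - v1[i])
        for j in range(NH):
            b_h[j] += LR * (ph0[j] - ph1[j])

# After training, reconstruction probabilities should be close to the data.
recon_err = 0.0
for v0 in data:
    _, h = sample_h(v0)
    pv, _ = sample_v(h)
    recon_err += sum((a - b) ** 2 for a, b in zip(v0, pv))
print(round(recon_err, 3))
```

The per-step update approximates the contrastive divergence gradient rather than the log-likelihood gradient, which is what makes it cheap: no partition function or long Markov chain is ever evaluated.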

Journal ArticleDOI
TL;DR: In this paper, the authors characterize preference relations over acts which have a numerical representation by the functional J(f) = min{ ∫ (u ∘ f) dP : P ∈ C }, where f is an act, u is a von Neumann-Morgenstern utility over outcomes, and C is a closed and convex set of finitely additive probability measures on the states of nature.

2,719 citations
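The maxmin functional above can be illustrated with a small made-up example: evaluate an act by its worst-case expected utility over the set of priors C. Since ∫ (u ∘ f) dP is linear in P, the minimum over a convex polytope of priors is attained at one of its extreme points, so enumerating those suffices.

```python
# Two states of nature; an act f yields outcomes with utilities u(f(s)).
utilities = [10.0, 2.0]           # hypothetical values of u(f(s1)), u(f(s2))

# C: a closed convex set of priors, here p(s1) in [0.2, 0.6], represented
# by its extreme points (a linear functional over a polytope is minimized
# at an extreme point).
extreme_priors = [(0.2, 0.8), (0.6, 0.4)]

def expected_utility(prior, utils):
    return sum(p * u for p, u in zip(prior, utils))

# J(f) = min over P in C of the integral of (u o f) dP
J = min(expected_utility(p, utilities) for p in extreme_priors)
print(J)                          # 3.6: the pessimistic prior (0.2, 0.8) wins
```

An expected-utility maximizer with a single prior would pick one number from this range; the maxmin decision maker's ambiguity aversion shows up as always using the least favorable prior in C.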

Book
16 Oct 2005
TL;DR: The most comprehensive treatment of the theoretical concepts and modelling techniques of quantitative risk management can be found in this book, which describes the latest advances in the field, including market, credit and operational risk modelling.
Abstract: This book provides the most comprehensive treatment of the theoretical concepts and modelling techniques of quantitative risk management. Whether you are a financial risk analyst, actuary, regulator or student of quantitative finance, Quantitative Risk Management gives you the practical tools you need to solve real-world problems. Describing the latest advances in the field, Quantitative Risk Management covers the methods for market, credit and operational risk modelling. It places standard industry approaches on a more formal footing and explores key concepts such as loss distributions, risk measures and risk aggregation and allocation principles. The book's methodology draws on diverse quantitative disciplines, from mathematical finance and statistics to econometrics and actuarial mathematics. A primary theme throughout is the need to satisfactorily address extreme outcomes and the dependence of key risk drivers. Proven in the classroom, the book also covers advanced topics like credit derivatives. This edition has been fully revised and expanded to reflect developments in the field since the financial crisis; it features shorter chapters to facilitate teaching and learning, provides enhanced coverage of Solvency II and insurance risk management, extends the treatment of credit risk (including counterparty credit risk and CDO pricing), and includes a new chapter on market risk and new material on risk measures and risk aggregation.

2,580 citations