Book•

An Introduction to Copulas

01 Jan 1999
TL;DR: This book discusses the fundamental properties of copulas and some of their primary applications, which include the study of dependence and measures of association, and the construction of families of bivariate distributions.
Abstract: The study of copulas and their role in statistics is a new but vigorously growing field. In this book the student or practitioner of statistics and probability will find discussions of the fundamental properties of copulas and some of their primary applications. The applications include the study of dependence and measures of association, and the construction of families of bivariate distributions. This book is suitable as a text or for self-study.
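The construction the abstract mentions follows Sklar's theorem: any continuous joint distribution factors into its margins and a copula, so new bivariate distributions can be built by joining arbitrary margins with a chosen copula. A minimal sketch using a Gaussian copula; the margins, correlation value, and sample size here are illustrative assumptions, not examples from the book.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7  # assumed copula correlation parameter

# Step 1: draw correlated standard normals
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)

# Step 2: map to uniforms via the normal CDF -- these are the copula samples
u = stats.norm.cdf(z)

# Step 3: apply inverse marginal CDFs to impose the desired margins
x = stats.expon(scale=2.0).ppf(u[:, 0])
y = stats.gamma(a=3.0).ppf(u[:, 1])

# The margins are exponential and gamma by construction, while all of the
# dependence between x and y comes from the Gaussian copula alone
print(np.corrcoef(x, y)[0, 1])
```

Swapping in a different copula family at step 1-2 changes the dependence structure (e.g. tail dependence) without disturbing the margins, which is the sense in which copulas let one construct families of bivariate distributions.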
Citations
Posted Content•
TL;DR: In this article, the authors propose a simple calibration for the marginal posterior distribution of a scalar parameter of interest which is invariant to monotonic and smooth transformations, useful in medical statistics, where a single scalar effect measure is often the target.
Abstract: A composite likelihood is a non-genuine likelihood function that allows inference on limited aspects of a model, such as marginal or conditional distributions. Composite likelihoods are not proper likelihoods and therefore need calibration for their use in inference, from both a frequentist and a Bayesian perspective. The maximizer of the composite likelihood can serve as an estimator, and its variance is assessed by means of a suitably defined sandwich matrix. In the Bayesian setting, the composite likelihood can be adjusted by means of magnitude and curvature methods. Magnitude methods raise the likelihood to a constant power, while curvature methods evaluate the likelihood at a different point by translating, rescaling and rotating the parameter vector. Some authors argue that curvature methods are more reliable in general, but others have shown that magnitude methods are sufficient to recover, for instance, the null distribution of a test statistic. We propose a simple calibration for the marginal posterior distribution of a scalar parameter of interest which is invariant to monotonic and smooth transformations. This can be enough, for instance, in medical statistics, where a single scalar effect measure is often the target.
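For a scalar parameter, the magnitude adjustment described above amounts to raising the composite likelihood to a power k = H/J, where H is the sensitivity (negative curvature at the maximizer) and J the variability of the score, so that the adjusted curvature matches the Godambe information H²/J. A toy sketch with assumed data; the independence "composite" likelihood here is actually a genuine likelihood, so k should come out near 1, which serves as a sanity check rather than as the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=1.5, scale=1.0, size=500)  # assumed toy data

def comp_loglik(theta):
    # independence (composite) log-likelihood for the mean, variance known
    return -0.5 * np.sum((data - theta) ** 2)

theta_hat = data.mean()  # maximizer of the composite likelihood

# Sensitivity H: negative second derivative at theta_hat (central difference)
eps = 1e-4
H = -(comp_loglik(theta_hat + eps) - 2 * comp_loglik(theta_hat)
      + comp_loglik(theta_hat - eps)) / eps**2

# Variability J: sum of squared per-observation score contributions
scores = data - theta_hat
J = np.sum(scores ** 2)

# Magnitude adjustment: use k * comp_loglik(theta) in the Bayesian update
k = H / J
print(theta_hat, k)
```

When the composite likelihood genuinely differs from the full likelihood (e.g. a pairwise likelihood for dependent data), k departs from 1 and tempers the over- or under-stated information in the posterior.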
01 Jan 2013
TL;DR: This thesis develops models and associated Bayesian inference methods for flexible univariate and multivariate conditional density estimation.
Abstract: This thesis develops models and associated Bayesian inference methods for flexible univariate and multivariate conditional density estimation. The models are flexible in the sense that they can cap ...
Proceedings Article•DOI•
01 Mar 2019
TL;DR: In this article, the authors model the dependence between consumption and maximum demand in a bulk electricity customer's load, selecting the Tawn Type 1 copula as the best-fitting model.
Abstract: Dependence between random variables is a phenomenon that cannot be overemphasized. This study considered the dependence between consumption and maximum demand in a bulk electricity customer's load. The study applied the Clayton, Frank, Gumbel, Joe and Tawn Type 1 copulas to the realizations of the random variables (consumption and maximum demand). Considering the AICs and BICs of the models under study, the Tawn Type 1 copula best represented the dependence between consumption and maximum demand. Using the inverse marginal distributions of both consumption and maximum demand, actual values were obtained from the pseudo-observations provided by the Tawn Type 1 copula. All the models revealed the presence of lower tail dependence between the two variables, with the exception of the Frank copula. The selected model was used to forecast load consumption and maximum demand.

Cites background from "An Introduction to Copulas"

  • ...The Archimedean family of copulas, extensively discussed in [20], [21] and [22] allow for modeling of dependence....

    [...]
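The fitting-and-selection step this study describes can be sketched for one Archimedean family: convert the data to pseudo-observations by ranks, maximize the copula log-likelihood, and score the fit with AIC. The sketch below uses the Clayton copula only, with simulated data standing in for the consumption/demand series; the study itself compared Clayton, Frank, Gumbel, Joe and Tawn Type 1 copulas on real load data.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

rng = np.random.default_rng(2)

# Assumed stand-in data with positive dependence (not the study's load data)
x = rng.gamma(2.0, size=800)
y = x + rng.gamma(2.0, size=800)

# Pseudo-observations: rescaled ranks, as in the study's copula fitting
n = len(x)
u = rankdata(x) / (n + 1)
v = rankdata(y) / (n + 1)

def neg_loglik(theta):
    # Clayton copula density:
    # c(u,v) = (1+theta) * (u*v)^(-1-theta) * (u^-theta + v^-theta - 1)^(-2-1/theta)
    s = u**-theta + v**-theta - 1.0
    logc = (np.log1p(theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(s))
    return -np.sum(logc)

res = minimize_scalar(neg_loglik, bounds=(1e-3, 20.0), method="bounded")
aic = 2 * 1 + 2 * res.fun  # one copula parameter
print(res.x, aic)
```

Repeating this for each candidate family and ranking by AIC/BIC is the model-selection procedure that singled out the Tawn Type 1 copula in the study.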

Book Chapter•DOI•
12 Dec 2003
TL;DR: In this paper, the authors use the minimum cross-entropy method to derive an approximate joint probability model for a multivariate economic process based on limited information about the marginal quasi-density functions and the joint moment conditions.
Abstract: In this chapter, we use the minimum cross-entropy method to derive an approximate joint probability model for a multivariate economic process based on limited information about the marginal quasi-density functions and the joint moment conditions. The modeling approach is related to joint probability models derived from copula functions, but we note that the entropy approach has some practical advantages over copula-based models. Under suitable regularity conditions, the quasi-maximum likelihood estimator (QMLE) of the model parameters is consistent and asymptotically normal. We demonstrate the procedure with an application to the joint probability model of trading volume and price variability for the Chicago Board of Trade soybean futures contract.
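The minimum cross-entropy principle underlying this chapter can be sketched on a discrete grid: among joint pmfs satisfying a moment condition, choose the one closest in Kullback-Leibler divergence to a reference product measure; the solution is an exponential tilt with one Lagrange multiplier per constraint, found by root-finding. This toy constrains only a single cross-moment against uniform reference marginals (the chapter additionally incorporates marginal quasi-density information); grid, marginals and target moment are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

x = np.linspace(-1, 1, 21)
y = np.linspace(-1, 1, 21)
q = np.ones(21) / 21   # assumed reference marginal of X (uniform)
r = np.ones(21) / 21   # assumed reference marginal of Y (uniform)
target = 0.15          # assumed joint moment condition E[XY]

def moment(lam):
    # Exponential tilt of the product measure: p_ij ∝ q_i r_j exp(lam x_i y_j)
    w = np.outer(q, r) * np.exp(lam * np.outer(x, y))
    p = w / w.sum()
    return np.sum(p * np.outer(x, y))

# Solve for the Lagrange multiplier that matches the moment condition
lam = brentq(lambda l: moment(l) - target, 0.0, 50.0)

w = np.outer(q, r) * np.exp(lam * np.outer(x, y))
p = w / w.sum()  # the minimum cross-entropy joint pmf
print(lam, np.sum(p * np.outer(x, y)))
```

With more moment conditions the tilt gains one multiplier per constraint and the scalar root-find becomes a small convex optimization, which is part of the practical appeal over copula-based constructions noted in the abstract.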
Posted Content•
TL;DR: An uncertainty compiler is a tool that automatically translates original computer source code lacking explicit uncertainty analysis into code containing appropriate uncertainty representations and propagation algorithms; it can apply intrusive uncertainty propagation methods to codes or parts of codes, and therefore more comprehensively and flexibly address both epistemic and aleatory uncertainties.
Abstract: An uncertainty compiler is a tool that automatically translates original computer source code lacking explicit uncertainty analysis into code containing appropriate uncertainty representations and uncertainty propagation algorithms. We have developed a prototype uncertainty compiler along with an associated object-oriented uncertainty language in the form of a stand-alone Python library. It handles the specification of input uncertainties and inserts calls to intrusive uncertainty quantification algorithms in the library. The uncertainty compiler can apply intrusive uncertainty propagation methods to codes or parts of codes and therefore more comprehensively and flexibly address both epistemic and aleatory uncertainties.
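"Intrusive" propagation means the uncertain quantities themselves carry propagation logic through existing arithmetic, rather than wrapping the whole program in sampling loops. A toy illustration with a hand-rolled interval type; this tiny class is our assumption for exposition and is far simpler than the paper's stand-alone Python library, which handles general epistemic and aleatory uncertainty structures.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """Minimal rigorous-bounds type: operator overloading makes unmodified
    arithmetic expressions propagate uncertainty as they run."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Interval product: extremes occur at endpoint combinations
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

# Original "certain" code might compute: area = width * (base + extra).
# After an uncertainty compiler rewrites the inputs as uncertainty objects,
# the very same expression yields bounds on the result.
width = Interval(1.9, 2.1)
base = Interval(4.0, 5.0)
extra = Interval(0.5, 0.7)
area = width * (base + extra)
print(area)
```

Automating the rewrite of input declarations and inserting such calls into unmodified downstream code is exactly the translation step the abstract attributes to the compiler.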