
Showing papers by "Olivier Ledoit published in 2018"


Journal ArticleDOI
TL;DR: The authors revisit the methodology of Stein (1975, 1986) for estimating a covariance matrix when the number of variables can be of the same magnitude as the sample size, and propose an alternative to Stein's isotonized estimator: minimizing the limiting expression of the unbiased estimator of risk under large-dimensional asymptotics rather than the finite-sample expression.
Abstract: This paper revisits the methodology of Stein (1975, 1986) for estimating a covariance matrix in the setting where the number of variables can be of the same magnitude as the sample size. Stein proposed to keep the eigenvectors of the sample covariance matrix but to shrink the eigenvalues. By minimizing an unbiased estimator of risk, Stein derived an ‘optimal’ shrinkage transformation. Unfortunately, the resulting estimator has two pitfalls: the shrinkage transformation can change the ordering of the eigenvalues and even make some of them negative. Stein suggested an ad hoc isotonizing algorithm that post-processes the transformed eigenvalues and thereby fixes these problems. We offer an alternative solution by minimizing the limiting expression of the unbiased estimator of risk under large-dimensional asymptotics, rather than the finite-sample expression. Compared to the isotonized version of Stein’s estimator, our solution is theoretically more elegant and also delivers improved performance, as evidenced by Monte Carlo simulations.
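The "keep the eigenvectors, shrink the eigenvalues" idea from the abstract can be illustrated with a minimal sketch. Note the shrinkage rule below is simple linear shrinkage toward the mean eigenvalue, chosen only for illustration; it is not Stein's unbiased-risk transformation (whose pitfall, per the abstract, is that it can reorder eigenvalues or make them negative, whereas a monotone linear rule cannot):

```python
import numpy as np

def shrink_eigenvalues(X, alpha=0.5):
    """Illustrative rotation-equivariant estimator: keep the sample
    eigenvectors, pull the sample eigenvalues toward their grand mean.
    alpha=0 returns the sample covariance; alpha=1 returns a multiple
    of the identity (in the eigenbasis)."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)            # p x p sample covariance
    lam, U = np.linalg.eigh(S)             # eigenvalues in ascending order
    lam_shrunk = (1 - alpha) * lam + alpha * lam.mean()
    return U @ np.diag(lam_shrunk) @ U.T

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 40))         # n=100 observations, p=40 variables
Sigma_hat = shrink_eigenvalues(X)
```

Because the mapping from sample to shrunk eigenvalues is monotone increasing, the ordering is preserved and positivity is maintained by construction, which is exactly the property Stein's raw transformation lacks.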

50 citations


Journal ArticleDOI
TL;DR: The authors introduce a covariance matrix estimator that blends factor structure with time-varying conditional heteroskedasticity of residuals in large dimensions, up to 1,000 stocks; it can be used to deliver more efficient portfolio selection and detection of anomalies in the cross-section of stock returns.
Abstract: This paper injects factor structure into the estimation of time-varying, large-dimensional covariance matrices of stock returns. Existing factor models struggle to model the covariance matrix of residuals in the presence of time-varying conditional heteroskedasticity in large universes. Conversely, rotation-equivariant estimators of large-dimensional time-varying covariance matrices forsake directional information embedded in market-wide risk factors. We introduce a new covariance matrix estimator that blends factor structure with time-varying conditional heteroskedasticity of residuals in large dimensions up to 1000 stocks. It displays superior all-around performance on historical data against a variety of state-of-the-art competitors, including static factor models, exogenous factor models, sparsity-based models, and structure-free dynamic models. This new estimator can be used to deliver more efficient portfolio selection and detection of anomalies in the cross-section of stock returns.
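The factor-structure component the abstract describes can be sketched in its simplest static form: regress each stock on a single market factor and assemble the covariance matrix as factor risk plus a diagonal of residual variances. This is a hypothetical illustration only; the paper's estimator additionally gives the residuals time-varying conditional heteroskedasticity (DCC-type dynamics), which this sketch omits:

```python
import numpy as np

def one_factor_covariance(R, f):
    """Static one-factor sketch:
    Sigma = var(f) * beta beta' + diag(residual variances).
    R is a (T, N) matrix of stock returns, f a length-T factor series."""
    T, N = R.shape
    f = f - f.mean()
    var_f = f.var()
    beta = (R - R.mean(axis=0)).T @ f / (T * var_f)   # OLS slope per stock
    resid = R - R.mean(axis=0) - np.outer(f, beta)
    return var_f * np.outer(beta, beta) + np.diag(resid.var(axis=0))

rng = np.random.default_rng(1)
f = rng.standard_normal(500)                    # simulated market factor
beta_true = rng.uniform(0.5, 1.5, size=50)
R = np.outer(f, beta_true) + 0.5 * rng.standard_normal((500, 50))
Sigma = one_factor_covariance(R, f)
```

The payoff of the factor term is the directional information the abstract mentions: unlike rotation-equivariant estimators, the market factor pins down the dominant direction of common risk.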

24 citations


Journal ArticleDOI
TL;DR: It is demonstrated that using the recent DCC-NL estimator of Engle et al. (2017) substantially enhances the power of tests for cross-sectional anomalies: on average, 'Student' t-statistics more than double.
Abstract: Many researchers seek factors that predict the cross-section of stock returns. The standard methodology sorts stocks according to their factor scores into quantiles and forms a corresponding long-short portfolio. Such a course of action ignores any information on the covariance matrix of stock returns. Historically, it has been difficult to estimate the covariance matrix for a large universe of stocks. We demonstrate that using the recent DCC-NL estimator of Engle et al. (2017) substantially enhances the power of tests for cross-sectional anomalies: On average, 'Student' t-statistics more than double.
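The standard sorting methodology the abstract describes can be sketched directly: rank stocks by factor score, go long the top quantile and short the bottom one, equally weighted. Note that these weights use no covariance information at all, which is precisely the inefficiency the DCC-NL-based tests address:

```python
import numpy as np

def long_short_weights(scores, q=5):
    """Equal-weighted long-short quantile portfolio: long the top 1/q of
    stocks by factor score, short the bottom 1/q. Weights sum to zero
    (dollar-neutral), with the long leg summing to one."""
    n = len(scores)
    order = np.argsort(scores)
    k = n // q
    w = np.zeros(n)
    w[order[-k:]] = 1.0 / k    # long top quantile
    w[order[:k]] = -1.0 / k    # short bottom quantile
    return w

scores = np.array([0.3, -1.2, 2.1, 0.0, 0.7, -0.4, 1.5, -2.0, 0.9, 0.2])
w = long_short_weights(scores, q=5)
```

A covariance-aware alternative would replace the equal weights within each leg by efficient (e.g. minimum-variance) weights computed from an estimated covariance matrix, shrinking the noise in the resulting t-statistics.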

12 citations


Journal ArticleDOI
TL;DR: The authors establish the first analytical formula for optimal nonlinear shrinkage of large-dimensional covariance matrices by identifying and mathematically exploiting a deep connection between nonlinear shrinkage and nonparametric estimation of the Hilbert transform of the sample spectral density.
Abstract: This paper establishes the first analytical formula for optimal nonlinear shrinkage of large-dimensional covariance matrices. We achieve this by identifying and mathematically exploiting a deep connection between nonlinear shrinkage and nonparametric estimation of the Hilbert transform of the sample spectral density. Previous nonlinear shrinkage methods were numerical: QuEST requires numerical inversion of a complex equation from random matrix theory whereas NERCOME is based on a sample-splitting scheme. The new analytical approach is more elegant and also has more potential to accommodate future variations or extensions. Immediate benefits are that it is typically 1,000 times faster with the same accuracy, and accommodates covariance matrices of dimension up to 10,000. The difficult case where the matrix dimension exceeds the sample size is also covered.
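A sketch of the analytical approach the abstract describes, for the case where the dimension is below the sample size: estimate the sample spectral density and its Hilbert transform with an Epanechnikov kernel at proportional bandwidths, then plug both into a closed-form shrinkage formula. The specific kernel, bandwidth rate, and formula below follow my reading of the published method and should be treated as an assumption, not a reference implementation:

```python
import numpy as np

def analytical_shrinkage(lam, n):
    """Given sample eigenvalues lam (length p, p < n) and sample size n,
    return nonlinearly shrunk eigenvalues via kernel estimates of the
    spectral density f and of its Hilbert transform Hf."""
    p = len(lam)
    c = p / n
    h = n ** (-1 / 3)                      # global bandwidth (assumed rate)
    H = lam * h                            # proportional local bandwidths
    x = (lam[:, None] - lam[None, :]) / H[None, :]
    # Epanechnikov kernel supported on [-sqrt(5), sqrt(5)]
    k = (3 / (4 * np.sqrt(5))) * np.maximum(1 - x ** 2 / 5, 0)
    # Hilbert transform of the kernel (closed form; log term vanishes
    # at the support edges because of the (1 - x^2/5) prefactor)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_term = np.log(np.abs((np.sqrt(5) - x) / (np.sqrt(5) + x)))
    log_term = np.where(np.isfinite(log_term), log_term, 0.0)
    hk = (-3 * x / (10 * np.pi)
          + (3 / (4 * np.sqrt(5) * np.pi)) * (1 - x ** 2 / 5) * log_term)
    f_tilde = np.mean(k / H[None, :], axis=1)     # density estimate at lam
    hf_tilde = np.mean(hk / H[None, :], axis=1)   # Hilbert-transform estimate
    denom = ((np.pi * c * lam * f_tilde) ** 2
             + (1 - c - np.pi * c * lam * hf_tilde) ** 2)
    return lam / denom

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 50))                # true covariance = identity
lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
d = analytical_shrinkage(lam, 500)
```

With identity true covariance, the sample eigenvalues spread out over the Marchenko-Pastur support while the shrunk values pull back toward 1, which is the dispersion-reducing behavior nonlinear shrinkage is designed to deliver.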

6 citations


Posted ContentDOI
TL;DR: The authors discuss inference procedures that are asymptotically valid under very general conditions, allowing for heavy tails and time dependence in the return data, and promote a studentized time series bootstrap procedure.
Abstract: Applied researchers often want to make inference for the difference of a given performance measure for two investment strategies. In this paper, we consider the class of performance measures that are smooth functions of population means of the underlying returns; this class is very rich and contains many performance measures of practical interest (such as the Sharpe ratio and the variance). Unfortunately, many of the inference procedures that have been suggested previously in the applied literature make unreasonable assumptions that do not apply to real-life return data, such as normality and independence over time. We will discuss inference procedures that are asymptotically valid under very general conditions, allowing for heavy tails and time dependence in the return data. In particular, we will promote a studentized time series bootstrap procedure. A simulation study demonstrates the improved finite-sample performance compared to existing procedures. Applications to real data are also provided.
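The resampling mechanics behind the abstract's proposal can be sketched with a plain circular block bootstrap for the difference of two Sharpe ratios; blocks preserve the time dependence that i.i.d. resampling would destroy. The paper promotes a studentized refinement (dividing each bootstrap statistic by its own standard error), which this un-studentized sketch omits for brevity:

```python
import numpy as np

def sharpe_diff(R):
    """Difference of per-period Sharpe ratios of two strategies,
    where R is a (T, 2) matrix of returns."""
    m = R.mean(axis=0)
    s = R.std(axis=0, ddof=1)
    return m[0] / s[0] - m[1] / s[1]

def block_bootstrap_pvalue(R, b=10, n_boot=2000, seed=0):
    """Circular block bootstrap: resample blocks of length b with
    wrap-around, recompute the Sharpe difference each time, and return
    an approximate two-sided p-value for H0: equal Sharpe ratios,
    using the centered bootstrap distribution."""
    rng = np.random.default_rng(seed)
    T = R.shape[0]
    obs = sharpe_diff(R)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        starts = rng.integers(0, T, size=T // b + 1)
        take = np.concatenate([(s0 + np.arange(b)) % T for s0 in starts])[:T]
        stats[i] = sharpe_diff(R[take])
    return np.mean(np.abs(stats - stats.mean()) >= np.abs(obs))

rng = np.random.default_rng(3)
R = rng.standard_normal((600, 2)) * [1.0, 1.2] + [0.05, 0.05]
p = block_bootstrap_pvalue(R)
```

The block length b is a tuning parameter: too short and serial dependence leaks into the resamples, too long and too few distinct blocks remain; the paper's studentization is what buys the improved finite-sample accuracy reported in its simulations.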

6 citations


Posted Content
TL;DR: The authors introduce a covariance matrix estimator that blends factor structure with time-varying conditional heteroskedasticity of residuals in large dimensions, up to 1,000 stocks; it can be used to deliver more efficient portfolio selection and detection of anomalies in the cross-section of stock returns.
Abstract: This paper injects factor structure into the estimation of time-varying, large-dimensional covariance matrices of stock returns. Existing factor models struggle to model the covariance matrix of residuals in the presence of time-varying conditional heteroskedasticity in large universes. Conversely, rotation-equivariant estimators of large-dimensional time-varying covariance matrices forsake directional information embedded in market-wide risk factors. We introduce a new covariance matrix estimator that blends factor structure with time-varying conditional heteroskedasticity of residuals in large dimensions up to 1000 stocks. It displays superior all-around performance on historical data against a variety of state-of-the-art competitors, including static factor models, exogenous factor models, sparsity-based models, and structure-free dynamic models. This new estimator can be used to deliver more efficient portfolio selection and detection of anomalies in the cross-section of stock returns.

3 citations