Author

Victor DeMiguel

Bio: Victor DeMiguel is an academic researcher at London Business School whose work focuses on portfolios and portfolio optimization. He has an h-index of 24 and has co-authored 43 publications that have received 5,153 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio.
Abstract: We evaluate the out-of-sample performance of the sample-based mean-variance model, and its extensions designed to reduce estimation error, relative to the naive 1/N portfolio. Of the 14 models we evaluate across seven empirical datasets, none is consistently better than the 1/N rule in terms of Sharpe ratio, certainty-equivalent return, or turnover, which indicates that, out of sample, the gain from optimal diversification is more than offset by estimation error. Based on parameters calibrated to the US equity market, our analytical results and simulations show that the estimation window needed for the sample-based mean-variance strategy and its extensions to outperform the 1/N benchmark is around 3000 months for a portfolio with 25 assets and about 6000 months for a portfolio with 50 assets. This suggests that there are still many "miles to go" before the gains promised by optimal portfolio choice can actually be realized out of sample.

2,809 citations
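
As a rough illustration of the comparison above, the sketch below backtests the plug-in sample-based mean-variance rule against the 1/N rule on simulated excess returns with a rolling estimation window. The data, window length, and risk-aversion parameter are invented for the example; this is not the paper's code or datasets.

```python
# Illustrative sketch (not the authors' code or data): rolling-window, out-of-sample
# comparison of the plug-in sample-based mean-variance rule against the naive 1/N rule.
import numpy as np

rng = np.random.default_rng(0)
N, T, window, gamma = 25, 600, 120, 5.0      # assets, months, estimation window, risk aversion

# Simulate i.i.d. monthly excess returns as a stand-in for an empirical dataset.
true_mu = rng.uniform(0.005, 0.010, N)
A = rng.normal(0.0, 0.02, (N, N))
true_cov = A @ A.T + np.diag(rng.uniform(0.001, 0.003, N))
returns = rng.multivariate_normal(true_mu, true_cov, T)

def mean_variance_weights(r, gamma):
    """Plug-in rule w = (1/gamma) * Sigma_hat^{-1} mu_hat; the remainder sits in the risk-free asset."""
    mu_hat = r.mean(axis=0)
    cov_hat = np.cov(r, rowvar=False)
    return np.linalg.solve(cov_hat, mu_hat) / gamma

oos_mv, oos_ew = [], []
w_ew = np.full(N, 1.0 / N)                   # the 1/N benchmark: equal weight in every risky asset
for t in range(window, T):
    w_mv = mean_variance_weights(returns[t - window:t], gamma)
    oos_mv.append(w_mv @ returns[t])         # out-of-sample excess return of each rule
    oos_ew.append(w_ew @ returns[t])

sharpe = lambda x: np.mean(x) / np.std(x)
print(f"out-of-sample Sharpe, sample-based mean-variance: {sharpe(np.array(oos_mv)):.3f}")
print(f"out-of-sample Sharpe, 1/N:                        {sharpe(np.array(oos_ew)):.3f}")
```

Because the Sharpe ratio is invariant to a positive rescaling of the weights, the comparison does not depend on the particular value of gamma.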

Journal ArticleDOI
TL;DR: In this article, a general framework for finding portfolios that perform well out-of-sample in the presence of estimation error is proposed, which relies on solving the traditional minimum-variance problem but subject to the additional constraint that the norm of the portfolio-weight vector be smaller than a given threshold.
Abstract: We provide a general framework for finding portfolios that perform well out-of-sample in the presence of estimation error. This framework relies on solving the traditional minimum-variance problem but subject to the additional constraint that the norm of the portfolio-weight vector be smaller than a given threshold. We show that our framework nests as special cases the shrinkage approaches of Jagannathan and Ma (Jagannathan, R., T. Ma. 2003. Risk reduction in large portfolios: Why imposing the wrong constraints helps. J. Finance 58 1651--1684) and Ledoit and Wolf (Ledoit, O., M. Wolf. 2003. Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. J. Empirical Finance 10 603--621, and Ledoit, O., M. Wolf. 2004. A well-conditioned estimator for large-dimensional covariance matrices. J. Multivariate Anal. 88 365--411) and the 1/N portfolio studied in DeMiguel et al. (DeMiguel, V., L. Garlappi, R. Uppal. 2009. Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy? Rev. Financial Stud. 22 1915--1953). We also use our framework to propose several new portfolio strategies. For the proposed portfolios, we provide a moment-shrinkage interpretation and a Bayesian interpretation where the investor has a prior belief on portfolio weights rather than on moments of asset returns. Finally, we compare empirically the out-of-sample performance of the new portfolios we propose to 10 strategies in the literature across five data sets. We find that the norm-constrained portfolios often have a higher Sharpe ratio than the portfolio strategies in Jagannathan and Ma (2003), Ledoit and Wolf (2003, 2004), the 1/N portfolio, and other strategies in the literature, such as factor portfolios.

913 citations
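
The norm-constrained minimum-variance problem described above is a small convex program. The sketch below (assuming the cvxpy modeling package and a simulated covariance matrix, both choices made only for illustration) solves it with a 1-norm constraint; note that when the weights must sum to one, a 1-norm threshold of one forces every weight to be nonnegative, i.e. it rules out short sales.

```python
# Minimal sketch of a norm-constrained minimum-variance portfolio:
# minimize w' Sigma w  subject to  sum(w) = 1  and  ||w||_1 <= delta.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
N = 25
A = rng.normal(0.0, 0.02, (N, N))
Sigma = A @ A.T + 0.001 * np.eye(N)             # simulated covariance matrix
Sigma = 0.5 * (Sigma + Sigma.T)                 # enforce exact symmetry for the QP solver

def norm_constrained_minvar(Sigma, delta):
    """Solve min w' Sigma w  s.t.  1'w = 1  and  ||w||_1 <= delta."""
    n = Sigma.shape[0]
    w = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.quad_form(w, Sigma)),
                         [cp.sum(w) == 1, cp.norm1(w) <= delta])
    problem.solve()
    return w.value

# delta = 1 (with weights summing to one) rules out short positions;
# a larger delta allows limited shorting.
w_tight = norm_constrained_minvar(Sigma, delta=1.0)
w_loose = norm_constrained_minvar(Sigma, delta=1.5)
print("short positions, delta=1.0:", int(np.sum(w_tight < -1e-6)))
print("short positions, delta=1.5:", int(np.sum(w_loose < -1e-6)))
```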

Journal ArticleDOI
TL;DR: This paper proposes a class of portfolios that have better stability properties than the traditional minimum-variance portfolios and shows analytically that the resulting portfolio weights are less sensitive to changes in the asset-return distribution than those of the traditional portfolios.
Abstract: Mean-variance portfolios constructed using the sample mean and covariance matrix of asset returns perform poorly out-of-sample due to estimation error. Moreover, it is commonly accepted that estimation error in the sample mean is much larger than in the sample covariance matrix. For this reason, practitioners and researchers have recently focused on the minimum-variance portfolio, which relies solely on estimates of the covariance matrix, and thus, usually performs better out-of-sample. But even the minimum-variance portfolios are quite sensitive to estimation error and have unstable weights that fluctuate substantially over time. In this paper, we propose a class of portfolios that have better stability properties than the traditional minimum-variance portfolios. The proposed portfolios are constructed using certain robust estimators and can be computed by solving a single nonlinear program, where robust estimation and portfolio optimization are performed in a single step. We show analytically that the resulting portfolio weights are less sensitive to changes in the asset-return distribution than those of the traditional minimum-variance portfolios. Moreover, our numerical results on simulated and empirical data confirm that the proposed portfolios are more stable than the traditional minimum-variance portfolios, while preserving (or slightly improving) their relatively good out-of-sample performance.

247 citations
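
One way to picture the "single nonlinear program" idea is to replace the sample-variance objective with a robust, Huber-type loss of the portfolio return and optimize jointly over the weights and a location parameter, so that robust estimation and portfolio choice happen in one optimization. The sketch below is only an illustration of that idea on simulated heavy-tailed returns, not the authors' exact estimator; the loss threshold and data parameters are assumptions.

```python
# Sketch: choose weights w and location m to minimize a robust Huber-type loss of the
# portfolio return, instead of first estimating moments and then optimizing.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T, N = 240, 10
returns = rng.standard_t(df=4, size=(T, N)) * 0.04 + 0.008   # heavy-tailed simulated returns

def huber(z, c=0.02):
    """Huber loss: quadratic near zero, linear in the tails (less sensitive to outliers)."""
    a = np.abs(z)
    return np.where(a <= c, 0.5 * z**2, c * a - 0.5 * c**2)

def objective(x):
    w, m = x[:N], x[N]
    return np.mean(huber(returns @ w - m))

x0 = np.append(np.full(N, 1.0 / N), returns.mean())           # start from the 1/N portfolio
cons = [{"type": "eq", "fun": lambda x: np.sum(x[:N]) - 1.0}]  # full-investment constraint
res = minimize(objective, x0, constraints=cons, method="SLSQP")
w_robust = res.x[:N]
print("robust portfolio weights:", np.round(w_robust, 3))
```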

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a class of portfolios that have better stability properties than the traditional minimum-variance portfolios, which can be computed by solving a single nonlinear program, where robust estimation and portfolio optimization are performed in a single step.
Abstract: Mean-variance portfolios constructed using the sample mean and covariance matrix of asset returns perform poorly out of sample due to estimation error. Moreover, it is commonly accepted that estimation error in the sample mean is much larger than in the sample covariance matrix. For this reason, researchers have recently focused on the minimum-variance portfolio, which relies solely on estimates of the covariance matrix, and thus usually performs better out of sample. However, even the minimum-variance portfolios are quite sensitive to estimation error and have unstable weights that fluctuate substantially over time. In this paper, we propose a class of portfolios that have better stability properties than the traditional minimum-variance portfolios. The proposed portfolios are constructed using certain robust estimators and can be computed by solving a single nonlinear program, where robust estimation and portfolio optimization are performed in a single step. We show analytically that the resulting portfolio weights are less sensitive to changes in the asset-return distribution than those of the traditional portfolios. Moreover, our numerical results on simulated and empirical data confirm that the proposed portfolios are more stable than the traditional minimum-variance portfolios, while preserving (or slightly improving) their relatively good out-of-sample performance.

224 citations

Journal ArticleDOI
TL;DR: This work studies an oligopoly consisting of M leaders and N followers that supply a homogeneous product (or service) noncooperatively and proposes a computational approach to find the equilibrium based on the sample average approximation method and analyze its rate of convergence.
Abstract: We study an oligopoly consisting of M leaders and N followers that supply a homogeneous product (or service) noncooperatively. Leaders choose their supply levels first, knowing the demand function only in distribution. Followers make their decisions after observing the leader supply levels and the realized demand function. We term the resulting equilibrium a stochastic multiple-leader Stackelberg-Nash-Cournot (SMS) equilibrium. We show the existence and uniqueness of SMS equilibrium under mild assumptions. We also propose a computational approach to find the equilibrium based on the sample average approximation method and analyze its rate of convergence. Finally, we apply this framework to model competition in the telecommunication industry.

185 citations
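
The sample average approximation step replaces the expectation over the unknown demand with an average over sampled demand scenarios, after which an equilibrium of the approximating game can be computed. The sketch below illustrates this on a deliberately simplified linear-demand game with two leaders and symmetric followers, solved by best-response iteration; all parameters and the solution scheme are illustrative assumptions rather than the paper's model or algorithm.

```python
# Toy SAA sketch: leaders commit to quantities knowing only a sample of demand scenarios;
# symmetric followers respond after observing leader supply and the realized demand curve.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
M, NF, S = 2, 3, 500                          # leaders, followers, sampled demand scenarios
c_lead = np.array([10.0, 12.0])               # leaders' marginal costs (assumed)
c_fol = 11.0                                  # followers' common marginal cost (assumed)
b = 1.0                                       # demand slope: price = a - b * total supply
a_s = rng.normal(100.0, 15.0, S)              # sampled random demand intercepts (the SAA sample)

def follower_supply(a, Q_lead):
    """Cournot output of each identical follower, given realized demand and leader supply."""
    return np.maximum(0.0, (a - c_fol - b * Q_lead) / (b * (NF + 1)))

def leader_saa_profit(q_i, i, q_other):
    """Leader i's expected profit, approximated by the sample average over demand scenarios."""
    Q_lead = q_i + q_other
    qf = follower_supply(a_s, Q_lead)
    price = a_s - b * (Q_lead + NF * qf)
    return np.mean((price - c_lead[i]) * q_i)

# Gauss-Seidel best-response iteration on the sample-average game.
q = np.full(M, 10.0)
for _ in range(200):
    q_prev = q.copy()
    for i in range(M):
        q_other = q.sum() - q[i]
        best = minimize_scalar(lambda x: -leader_saa_profit(x, i, q_other),
                               bounds=(0.0, 100.0), method="bounded")
        q[i] = best.x
    if np.max(np.abs(q - q_prev)) < 1e-6:
        break

print("approximate SAA equilibrium leader quantities:", np.round(q, 3))
```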


Cited by

Journal ArticleDOI
TL;DR: Research indicates that individuals and organizations often rely on simple heuristics in an adaptive way, and that ignoring part of the information can lead to more accurate judgments than weighting and adding all of it, for instance when predictability is low and samples are small.
Abstract: As reflected in the amount of controversy, few areas in psychology have undergone such dramatic conceptual changes in the past decade as the emerging science of heuristics. Heuristics are efficient cognitive processes, conscious or unconscious, that ignore part of the information. Because using heuristics saves effort, the classical view has been that heuristic decisions imply greater errors than do “rational” decisions as defined by logic or statistical models. However, for many decisions, the assumptions of rational models are not met, and it is an empirical rather than an a priori issue how well cognitive heuristics function in an uncertain world. To answer both the descriptive question (“Which heuristics do people use in which situations?”) and the prescriptive question (“When should people rely on a given heuristic rather than a complex strategy to make better judgments?”), formal models are indispensable. We review research that tests formal models of heuristic inference, including in business organizations, health care, and legal institutions. This research indicates that (a) individuals and organizations often rely on simple heuristics in an adaptive way, and (b) ignoring part of the information can lead to more accurate judgments than weighting and adding all information, for instance for low predictability and small samples. The big future challenge is to develop a systematic theory of the building blocks of heuristics as well as the core capacities and environmental structures these exploit.

2,715 citations
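
The claim that ignoring information can improve accuracy under low predictability and small samples can be illustrated with a small simulation: a tallying rule that gives every cue the same weight is compared out of sample with fitted multiple regression. The cue structure, noise level, and sample sizes below are arbitrary assumptions chosen only to demonstrate the effect.

```python
# Small "less can be more" simulation: with few training cases and noisy criteria,
# tallying (unit weights on the cues) can predict out of sample about as well as,
# or better than, multiple regression that estimates all the weights.
import numpy as np

rng = np.random.default_rng(4)
n_cues, n_train, n_test, noise, runs = 5, 15, 200, 2.0, 300
true_beta = np.array([1.0, 0.8, 0.6, 0.4, 0.2])   # all cues point in the same direction

def rmse(pred, y):
    return np.sqrt(np.mean((pred - y) ** 2))

err_ols, err_tally = [], []
for _ in range(runs):
    X_tr = rng.normal(size=(n_train, n_cues))
    X_te = rng.normal(size=(n_test, n_cues))
    y_tr = X_tr @ true_beta + rng.normal(0, noise, n_train)
    y_te = X_te @ true_beta + rng.normal(0, noise, n_test)

    beta_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)   # "weight and add" everything
    unit = np.ones(n_cues)                                   # tallying: ignore the weights
    s_tr, s_te = X_tr @ unit, X_te @ unit
    scale = (y_tr @ s_tr) / (s_tr @ s_tr)                    # fit only an overall scale

    err_ols.append(rmse(X_te @ beta_hat, y_te))
    err_tally.append(rmse(scale * s_te, y_te))

print(f"mean out-of-sample RMSE, regression: {np.mean(err_ols):.3f}")
print(f"mean out-of-sample RMSE, tallying:   {np.mean(err_tally):.3f}")
```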

Journal ArticleDOI
TL;DR: The study of heuristics shows that less information, computation, and time can in fact improve accuracy, in contrast to the widely held view that less processing reduces accuracy.
Abstract: Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We review the major progress made so far: (a) the discovery of less-is-more effects; (b) the study of the ecological rationality of heuristics, which examines in which environments a given strategy succeeds or fails, and why; (c) an advancement from vague labels to computational models of heuristics; (d) the development of a systematic theory of heuristics that identifies their building blocks and the evolved capacities they exploit, and views the cognitive system as relying on an "adaptive toolbox"; and (e) the development of an empirical methodology that accounts for individual differences, conducts competitive tests, and has provided evidence for people's adaptive use of heuristics. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies.

1,339 citations

Journal ArticleDOI
TL;DR: In this article, a general framework for finding portfolios that perform well out-of-sample in the presence of estimation error is proposed, which relies on solving the traditional minimum-variance problem but subject to the additional constraint that the norm of the portfolio-weight vector be smaller than a given threshold.
Abstract: We provide a general framework for finding portfolios that perform well out-of-sample in the presence of estimation error. This framework relies on solving the traditional minimum-variance problem but subject to the additional constraint that the norm of the portfolio-weight vector be smaller than a given threshold. We show that our framework nests as special cases the shrinkage approaches of Jagannathan and Ma (Jagannathan, R., T. Ma. 2003. Risk reduction in large portfolios: Why imposing the wrong constraints helps. J. Finance 58 1651--1684) and Ledoit and Wolf (Ledoit, O., M. Wolf. 2003. Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. J. Empirical Finance 10 603--621, and Ledoit, O., M. Wolf. 2004. A well-conditioned estimator for large-dimensional covariance matrices. J. Multivariate Anal. 88 365--411) and the 1/N portfolio studied in DeMiguel et al. (DeMiguel, V., L. Garlappi, R. Uppal. 2009. Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy? Rev. Financial Stud. 22 1915--1953). We also use our framework to propose several new portfolio strategies. For the proposed portfolios, we provide a moment-shrinkage interpretation and a Bayesian interpretation where the investor has a prior belief on portfolio weights rather than on moments of asset returns. Finally, we compare empirically the out-of-sample performance of the new portfolios we propose to 10 strategies in the literature across five data sets. We find that the norm-constrained portfolios often have a higher Sharpe ratio than the portfolio strategies in Jagannathan and Ma (2003), Ledoit and Wolf (2003, 2004), the 1/N portfolio, and other strategies in the literature, such as factor portfolios.

913 citations