Posted Content

A Very Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix

01 Jan 1991 - Research Papers in Economics (Department of Economics, University of Birmingham)
About: This article is published in Research Papers in Economics. The article was published on 1991-01-01 and is currently open access. It has received 736 citations to date. The article focuses on the topics: Covariance function & Estimation of covariance matrices.
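
The estimator the title refers to is the Bartlett-kernel heteroskedasticity and autocorrelation consistent (HAC) covariance matrix of Newey and West. As a rough illustration of the construction discussed in the reference quotes below, here is a minimal Python sketch; the function name, bandwidth choice, and test data are ours, not the paper's.

```python
import numpy as np

def newey_west_cov(h, m):
    """Bartlett-kernel HAC estimate of the long-run covariance of h.

    h : (T, k) array of mean-zero moment conditions h_t.
    m : bandwidth (number of sample autocovariances to include).
    The Bartlett weights w_j = 1 - j/(m+1) guarantee positive
    semi-definiteness, which is the paper's central point.
    """
    T, k = h.shape
    S = h.T @ h / T                      # Omega_0: contemporaneous term
    for j in range(1, m + 1):
        w = 1.0 - j / (m + 1.0)          # Bartlett weight
        Omega_j = h[j:].T @ h[:-j] / T   # j-th sample autocovariance
        S += w * (Omega_j + Omega_j.T)   # add lag and its transpose
    return S

# Example: white noise, so the estimate should be close to the identity.
rng = np.random.default_rng(0)
h = rng.standard_normal((5000, 2))
print(newey_west_cov(h - h.mean(axis=0), m=4))
```
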
Citations
Journal ArticleDOI
TL;DR: The authors presented conditions under which a simple extension of common nonparametric covariance matrix estimation techniques yields standard error estimates that are robust to very general forms of spatial and temporal dependence as the time dimension becomes large.
Abstract: Many panel data sets encountered in macroeconomics, international economics, regional science, and finance are characterized by cross-sectional or “spatial” dependence. Standard techniques that fail to account for this dependence will result in inconsistently estimated standard errors. In this paper we present conditions under which a simple extension of common nonparametric covariance matrix estimation techniques yields standard error estimates that are robust to very general forms of spatial and temporal dependence as the time dimension becomes large. We illustrate the relevance of this approach using Monte Carlo simulations and a number of empirical examples.

3,763 citations
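
The "simple extension" this abstract describes (the Driscoll-Kraay approach) can be read as: sum the moment conditions across the cross-section each period, then apply a time-series HAC estimator, such as the one sketched above, to the resulting series. A minimal sketch under that reading, with hypothetical array shapes:

```python
import numpy as np

def driscoll_kraay_cov(h_it, m):
    """Spatial- and serial-dependence robust covariance, sketched.

    h_it : (T, N, k) array of moment conditions for unit i at time t.
    Summing over the cross-section each period collapses the panel to
    one k-vector per date; any spatial correlation is absorbed into
    that sum, and serial dependence is then handled with the same
    Bartlett-weighted autocovariances as in the time-series case.
    """
    h_t = h_it.sum(axis=1)                   # (T, k) cross-sectional sums
    T = h_t.shape[0]
    S = h_t.T @ h_t / T                      # contemporaneous term
    for j in range(1, m + 1):
        w = 1.0 - j / (m + 1.0)              # Bartlett weight
        Omega_j = h_t[j:].T @ h_t[:-j] / T   # j-th autocovariance
        S += w * (Omega_j + Omega_j.T)
    return S
```
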

Posted Content
TL;DR: This paper examined the influence of venture capital on patent applications in twenty industries over three decades and found that increases in venture capital activity in an industry are associated with significantly higher patenting rates.
Abstract: We examine the influence of venture capital on patented inventions in the United States across twenty industries over three decades. We address concerns about causality in several ways, including exploiting a 1979 policy shift that spurred venture capital fundraising. We find that increases in venture capital activity in an industry are associated with significantly higher patenting rates. While the ratio of venture capital to R&D averaged less than 3% from 1983 to 1992, our estimates suggest that venture capital may have accounted for 8% of industrial innovations in that period.

1,660 citations

Journal ArticleDOI
TL;DR: In this paper, the mean squared prediction error (MSPE) from the parsimonious model is adjusted to account for the noise the larger model introduces by estimating parameters whose population values are zero; nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) are used to argue that standard normal critical values yield actual sizes close to, but a little less than, nominal size.
Abstract: Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model. We describe how to adjust MSPEs to account for this noise. We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error difference is zero. We refer to nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size. Simulation evidence supports our recommended procedure.

1,540 citations
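
The MSPE adjustment described here (often called the Clark-West test) has a simple sample analogue: subtract from the larger model's squared forecast error the squared difference between the two forecasts, then t-test the mean of the adjusted loss differential against zero with standard normal critical values. A minimal sketch, assuming one-step-ahead forecasts; for multi-step forecasts the denominator would use a HAC variance, such as the estimator sketched earlier, instead:

```python
import numpy as np

def clark_west_stat(y, f_small, f_large):
    """MSPE-adjusted test of equal predictive accuracy for nested models.

    y       : realized values over the out-of-sample period.
    f_small : forecasts from the parsimonious (null) model.
    f_large : forecasts from the larger model that nests it.
    Compare the returned t-statistic with one-sided standard normal
    critical values (e.g., 1.645 at the 5% level).
    """
    e1, e2 = y - f_small, y - f_large
    # The (f_small - f_large)^2 term removes the noise the larger model
    # adds by estimating parameters whose population values are zero.
    f = e1**2 - (e2**2 - (f_small - f_large)**2)
    n = len(f)
    return f.mean() / np.sqrt(f.var(ddof=1) / n)
```
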

Posted Content
TL;DR: In this paper, the authors show that constraining portfolio weights to be nonnegative is equivalent to using the sample covariance matrix after reducing its large elements and then forming the optimal portfolio without any restrictions on portfolio weights.
Abstract: Mean-variance efficient portfolios constructed using sample moments often involve taking extreme long and short positions. Hence practitioners often impose portfolio weight constraints when constructing efficient portfolios. Green and Hollifield (1992) argue that the presence of a single dominant factor in the covariance matrix of returns is why we observe extreme positive and negative weights. If this were the case, then imposing the weight constraint should hurt, whereas the empirical evidence is often to the contrary. We reconcile this apparent contradiction. We show that constraining portfolio weights to be nonnegative is equivalent to using the sample covariance matrix after reducing its large elements and then forming the optimal portfolio without any restrictions on portfolio weights. This shrinkage helps reduce the risk in estimated optimal portfolios even when they have negative weights in the population. Surprisingly, we also find that once the nonnegativity constraint is imposed, minimum variance portfolios constructed using the monthly sample covariance matrix perform as well as those constructed using covariance matrices estimated using factor models, shrinkage estimators, and daily data. When minimizing tracking error is the criterion, using daily data instead of monthly data helps. However, the sample covariance matrix without any correction for microstructure effects performs the best.

1,208 citations
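
The weight-constrained problem this abstract analyzes is a quadratic program: minimize w'Σw subject to the weights summing to one, with or without the no-short-sale bound w >= 0. A minimal sketch using scipy; the toy covariance matrix is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def min_var_weights(Sigma, nonneg=True):
    """Minimum-variance weights, with or without the no-short-sale
    bound whose shrinkage interpretation the paper develops."""
    n = Sigma.shape[0]
    res = minimize(
        lambda w: w @ Sigma @ w,                        # portfolio variance
        np.full(n, 1.0 / n),                            # equal-weight start
        method="SLSQP",
        bounds=[(0.0, None)] * n if nonneg else None,   # w >= 0 if nonneg
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

# Toy two-asset sample covariance matrix (hypothetical numbers): the
# unconstrained solution shorts asset 2; the constrained one sets its
# weight to zero instead.
Sigma = np.array([[0.04, 0.05],
                  [0.05, 0.09]])
print(min_var_weights(Sigma), min_var_weights(Sigma, nonneg=False))
```
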

Journal ArticleDOI
TL;DR: In this article, a production-based asset pricing model is proposed, which is analogous to the standard consumption-based model, but it uses producers and production functions in the place of consumers and utility functions.
Abstract: This paper describes a production-based asset pricing model. It is analogous to the standard consumption-based model, but it uses producers and production functions in the place of consumers and utility functions. The model ties stock returns to investment returns (marginal rates of transformation) which are inferred from investment data via a production function. The production-based model is used to examine forecasts of stock returns by business-cycle related variables and the association of stock returns with subsequent economic activity.

1,169 citations

References
Journal ArticleDOI

13,118 citations


"A Very Simple, Positive Semi-Defini..." refers methods in this paper

  • ...The bound m on the number of sample autocovariances used to form S_T is in many studies equal to the number of nonzero autocorrelations of h_t(θ*), which is known a priori (e.g., Cumby, Huizinga, and Obstfeld (1983), Hansen and Singleton (1982), and West (1986a))....

    [...]

  • ...…AND AUTOCORRELATION CONSISTENT COVARIANCE MATRIX BY WHITNEY K. NEWEY AND KENNETH D. WEST. MANY RECENT RATIONAL EXPECTATIONS MODELS have been estimated by the techniques developed by Hansen (1982), Hansen and Singleton (1982), Cumby, Huizinga, and Obstfeld (1983), and White and Domowitz (1984)....

    [...]

Journal ArticleDOI
TL;DR: This paper proposed a framework for out-of-sample predictive ability testing and forecast selection designed for use in the realistic situation in which the forecasting model is possibly misspecified, due to unmodeled dynamics, unmodeled heterogeneity, incorrect functional form, or any combination of these.
Abstract: We propose a framework for out-of-sample predictive ability testing and forecast selection designed for use in the realistic situation in which the forecasting model is possibly misspecified, due to unmodeled dynamics, unmodeled heterogeneity, incorrect functional form, or any combination of these. Relative to the existing literature (Diebold and Mariano (1995) and West (1996)), we introduce two main innovations: (i) We derive our tests in an environment where the finite sample properties of the estimators on which the forecasts may depend are preserved asymptotically. (ii) We accommodate conditional evaluation objectives (can we predict which forecast will be more accurate at a future date?), which nest unconditional objectives (which forecast was more accurate on average?), that have been the sole focus of previous literature. As a result of (i), our tests have several advantages: they capture the effect of estimation uncertainty on relative forecast performance, they can handle forecasts based on both nested and nonnested models, they allow the forecasts to be produced by general estimation methods, and they are easy to compute. Although both unconditional and conditional approaches are informative, conditioning can help fine-tune the forecast selection to current economic conditions. To this end, we propose a two-step decision rule that uses current information to select the best forecast for the future date of interest. We illustrate the usefulness of our approach by comparing forecasts from leading parameter-reduction methods for macroeconomic forecasting using a large number of predictors.

1,248 citations
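
The conditional test proposed here asks whether the loss differential between two forecasts is predictable from information available at forecast time. In its one-step form the statistic is a Wald test on the sample mean of the test functions interacted with the loss differential; a minimal sketch, with the choice of test functions left as a user assumption:

```python
import numpy as np
from scipy import stats

def conditional_pa_test(dL, H):
    """Conditional predictive ability test, one-step form, sketched.

    dL : (n,) loss differentials between two forecasting methods.
    H  : (n, q) test functions known when the forecasts were made
         (e.g., a constant and lagged loss differentials).
    Under the null E[dL_{t+1} | H_t] = 0, the statistic is
    asymptotically chi-squared with q degrees of freedom.
    """
    Z = H * dL[:, None]                  # z_t = h_t * dL_{t+1}
    n, q = Z.shape
    zbar = Z.mean(axis=0)
    Omega = Z.T @ Z / n                  # sample second moment of z_t
    stat = n * zbar @ np.linalg.solve(Omega, zbar)
    return stat, stats.chi2.sf(stat, df=q)
```
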