
Showing papers by "Francis X. Diebold published in 1996"


ReportDOI
TL;DR: In the first half of this century, special attention was given to two features of the business cycle: (1) the comovement of many individual economic series and (2) the different behavior of the economy during expansions and contractions.
Abstract: In the first half of this century, special attention was given to two features of the business cycle: (1) the comovement of many individual economic series and (2) the different behavior of the economy during expansions and contractions. Both of these attributes were ignored in many subsequent business cycle models, which were often linear representations of a single macroeconomic aggregate. However, recent theoretical and empirical research has revived interest in each attribute separately. Notably, dynamic factor models have been used to obtain a single common factor from a set of macroeconomic variables, and nonlinear models have been used to describe the regime-switching nature of aggregate output. We survey these two strands of research and then provide some suggestive empirical analysis in an effort to unite the two literatures and to assess their usefulness in a statistical characterization of business-cycle dynamics.
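
The two strands surveyed here, common-factor extraction and regime switching, can be illustrated concretely. The Python sketch below fits a two-regime Markov-switching mean and a one-factor dynamic factor model to simulated data using statsmodels; the simulated series, regime mechanism, and all parameter settings are illustrative assumptions, not the paper's specification.

```python
# Illustrative sketch only: regime switching and a common factor on simulated data,
# not the paper's data or specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Simulated "output growth": occasional low-mean contractions, high-mean expansions.
recession = (rng.random(n) < 0.2).astype(int)
growth = np.where(recession == 1, -0.5, 0.8) + rng.normal(0, 0.6, n)

# Strand 1: two-regime Markov-switching mean (Hamilton-style).
ms_fit = sm.tsa.MarkovRegression(growth, k_regimes=2, trend='c').fit()
print(ms_fit.summary())

# Strand 2: a single common factor extracted from several comoving series.
panel = np.column_stack([growth + rng.normal(0, 0.3, n) for _ in range(4)])
df_fit = sm.tsa.DynamicFactor(panel, k_factors=1, factor_order=1).fit(disp=False)
print(df_fit.summary())
```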

354 citations


Posted Content
TL;DR: A number of forecast evaluation topics of particular relevance in economics and finance are described, including methods for evaluating direction-of-change forecasts, probability forecasts and volatility forecasts.
Abstract: It is obvious that forecasts are of great importance and widely used in economics and finance. Quite simply, good forecasts lead to good decisions. The importance of forecast evaluation and combination techniques follows immediately -- forecast users naturally have a keen interest in monitoring and improving forecast performance. More generally, forecast evaluation figures prominently in many questions in empirical economics and finance. We provide a selective account of forecast evaluation and combination methods. First, we discuss evaluation of a single forecast, and in particular, evaluation of whether and how it may be improved. Second, we discuss the evaluation and comparison of the accuracy of competing forecasts. Third, we discuss whether and how a set of forecasts may be combined to produce a superior composite forecast. Fourth, we describe a number of forecast evaluation topics of particular relevance in economics and finance, including methods for evaluating direction-of-change forecasts, probability forecasts and volatility forecasts.
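
One standard way to operationalize the comparison of competing forecasts discussed here is a Diebold-Mariano-type test on the loss differential. The sketch below computes such a statistic on simulated forecast errors; the error series, the squared-error loss, and the HAC truncation lag are assumptions chosen for illustration.

```python
# Illustrative Diebold-Mariano-type comparison on simulated forecast errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T = 200
e1 = rng.normal(0, 1.0, T)            # errors of forecast 1
e2 = rng.normal(0, 1.2, T)            # errors of forecast 2 (slightly noisier)

d = e1**2 - e2**2                      # loss differential under squared-error loss
dbar = d.mean()

# Bartlett/Newey-West long-run variance of the loss differential (lag h is an assumption).
h = 4
gammas = [np.mean((d[:T - k] - dbar) * (d[k:] - dbar)) for k in range(h + 1)]
lrv = gammas[0] + 2 * sum((1 - k / (h + 1)) * gammas[k] for k in range(1, h + 1))

dm = dbar / np.sqrt(lrv / T)
pval = 2 * (1 - stats.norm.cdf(abs(dm)))
print(f"DM statistic: {dm:.3f}, two-sided p-value: {pval:.3f}")
```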

307 citations


Journal ArticleDOI
TL;DR: The authors compare the performance of two alternative approximations to the finite-sample distributions of test statistics for structural change, one based on asymptotics and another based on the bootstrap.
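
As a rough illustration of the kind of comparison described in this paper, the sketch below contrasts an asymptotic F critical value with a parametric-bootstrap critical value for a simple Chow-type mean-break statistic computed on a small, persistent AR(1) sample. The data-generating process, sample size, and break date are assumptions; the paper's actual test statistics and designs differ.

```python
# Illustrative comparison of asymptotic vs. bootstrap critical values for a
# Chow-type mean-break statistic; the DGP, sample size, and break date are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 50

def chow_stat(y):
    """F-type statistic for a mean shift at the sample midpoint."""
    m = len(y) // 2
    rss_restricted = ((y - y.mean())**2).sum()
    rss_unrestricted = ((y[:m] - y[:m].mean())**2).sum() + ((y[m:] - y[m:].mean())**2).sum()
    return (rss_restricted - rss_unrestricted) / (rss_unrestricted / (len(y) - 2))

def simulate_ar1(phi, innovations):
    y = np.zeros(len(innovations))
    for t in range(1, len(y)):
        y[t] = phi * y[t - 1] + innovations[t]
    return y

y_obs = simulate_ar1(0.8, rng.normal(size=n))        # observed sample (no true break)
observed = chow_stat(y_obs)

# Parametric bootstrap under the no-break null: re-estimate the AR(1), resample
# its residuals, resimulate, and recompute the statistic.
phi_hat = np.sum(y_obs[1:] * y_obs[:-1]) / np.sum(y_obs[:-1]**2)
resid = y_obs[1:] - phi_hat * y_obs[:-1]
boot = np.array([chow_stat(simulate_ar1(phi_hat, rng.choice(resid, size=n, replace=True)))
                 for _ in range(999)])

print("asymptotic 5% critical value :", round(stats.f.ppf(0.95, 1, n - 2), 2))
print("bootstrap  5% critical value :", round(np.quantile(boot, 0.95), 2))
print("observed statistic           :", round(observed, 2))
```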

213 citations


Journal ArticleDOI
TL;DR: A new technique for solving prediction problems under asymmetric loss using piecewise-linear approximations to the loss function is proposed, and the existence and uniqueness of the optimal predictor are established.
Abstract: We make three related contributions. First, we propose a new technique for solving prediction problems under asymmetric loss using piecewise-linear approximations to the loss function, and we establish existence and uniqueness of the optimal predictor. Second, we provide a detailed application to optimal prediction of a conditionally heteroscedastic process under asymmetric loss, the insights gained from which are broadly applicable. Finally, we incorporate our results into a general framework for recursive prediction-based model selection under the relevant loss function.
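
A minimal example of prediction under asymmetric piecewise-linear ("lin-lin") loss, the simplest member of the loss family discussed here: the optimal predictor is a conditional quantile, which the sketch below recovers both numerically and in closed form under an assumed conditional normal density. The loss weights and distribution parameters are illustrative assumptions.

```python
# Illustrative optimal prediction under asymmetric piecewise-linear ("lin-lin") loss;
# loss weights and the predictive distribution are assumptions.
import numpy as np
from scipy import optimize, stats

a, b = 2.0, 1.0          # cost per unit of under-prediction vs. over-prediction
mu, sigma = 0.0, 1.5     # assumed conditional mean and standard deviation of the target

def expected_loss(p, draws):
    err = draws - p
    return np.mean(np.where(err > 0, a * err, -b * err))

draws = stats.norm(mu, sigma).rvs(size=200_000, random_state=0)
numeric = optimize.minimize_scalar(lambda p: expected_loss(p, draws),
                                   bounds=(mu - 5 * sigma, mu + 5 * sigma),
                                   method="bounded").x

# Under lin-lin loss the optimal predictor is the a/(a+b) conditional quantile.
closed_form = stats.norm(mu, sigma).ppf(a / (a + b))
print(f"numerical optimum: {numeric:.3f}, closed-form quantile: {closed_form:.3f}")
```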

167 citations


Posted Content
TL;DR: This paper discusses the importance of forecast evaluation and combination techniques, the evaluation and comparison of the accuracy of competing forecasts, and whether and how a set of forecasts may be combined to produce a superior composite forecast.
Abstract: It is obvious that forecasts are of great importance and widely used in economics and finance. Quite simply, good forecasts lead to good decisions. The importance of forecast evaluation and combination techniques follows immediately -- forecast users naturally have a keen interest in monitoring and improving forecast performance. More generally, forecast evaluation figures prominently in many questions in empirical economics and finance. We provide a selective account of forecast evaluation and combination methods. First, we discuss evaluation of a single forecast, and in particular, evaluation of whether and how it may be improved. Second, we discuss the evaluation and comparison of the accuracy of competing forecasts. Third, we discuss whether and how a set of forecasts may be combined to produce a superior composite forecast. Fourth, we describe a number of forecast evaluation topics of particular relevance in economics and finance, including methods for evaluating direction-of-change forecasts, probability forecasts and volatility forecasts.
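
The combination step described here is often implemented as a least-squares regression of realized values on the individual forecasts (Granger-Ramanathan-style weights). A minimal sketch on simulated data follows; the target series and forecast error variances are assumptions.

```python
# Illustrative least-squares forecast combination on simulated data.
import numpy as np

rng = np.random.default_rng(3)
T = 300
y = rng.normal(0, 1.0, T)                      # target variable
f1 = y + rng.normal(0, 1.0, T)                 # forecast 1
f2 = y + rng.normal(0, 0.7, T)                 # forecast 2 (more accurate)

X = np.column_stack([np.ones(T), f1, f2])      # intercept plus the two forecasts
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
combined = X @ weights

for name, f in [("forecast 1", f1), ("forecast 2", f2), ("combined", combined)]:
    print(f"{name:11s} RMSE: {np.sqrt(np.mean((y - f)**2)):.3f}")
```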

144 citations


Posted Content
TL;DR: Rudebusch as discussed by the authors showed that the best-fitting trend-stationary and difference-stationary models imply very different medium- and long-run dynamics, and that regardless of which of the two models obtains, the exact finite-sample distributions of the Dickey-Fuller test statistics are very similar.
Abstract: Fifteen years after the seminal work of Charles R. Nelson and Charles I. Plosser (1982), the question of deterministic vs. stochastic trend in U.S. GNP (and other key aggregates) remains open. The surrounding controversy certainly is not due to lack of professional interest -- the literature on the question is huge. Instead, the low power of tests of stochastic trend (or "difference stationarity" in the parlance of John H. Cochrane [1988]) against nearby deterministic-trend ("trend-stationary") alternatives, together with the fact that such nearby alternatives are the relevant ones, explains the lack of consensus. In an important paper, Glenn D. Rudebusch (1993) contributes to the "we don't know" literature initiated by Lawrence J. Christiano and Martin Eichenbaum (1990) by arguing that unit-root tests applied to U.S. quarterly real GNP per capita lack power even against distant alternatives. First, Rudebusch shows that the best-fitting trend-stationary and difference-stationary models imply very different medium- and long-run dynamics. Then he shows with an innovative procedure that, regardless of which of the two models obtains, the exact finite-sample distributions of the Dickey-Fuller test statistics are very similar. Thus, Rudebusch concludes that unit-root tests are unlikely to be capable of discriminating between deterministic and stochastic trends. The distinction between trend stationarity and difference stationarity is not critical in some contexts. Often, for example, one wants a broad gauge of the persistence in aggregate output dynamics, in which case one may be better informed by an interval estimate of the dominant root in an autoregressive approximation. Hence the importance of James H. Stock's (1991) clever procedure for computing such intervals. But the distinction between trend stationarity and difference stationarity is potentially important in other contexts, such as economic forecasting, because the trend- and difference-stationary models may imply very different dynamics and hence different point forecasts, as argued by Stock and Mark W. Watson (1988) and John Y. Campbell and Pierre Perron (1991). Motivated by the potential importance of unit roots for the forecasting of aggregate output, as well as other considerations that we discuss later, we extend Rudebusch's (1993) analysis to several long spans of annual U.S. real GNP data, and we examine the robustness of all results to variations in the sample period and the particular GNP measure employed. As we shall show, the outcome is both surprising and robust.
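
To make the testing problem concrete, the sketch below applies an augmented Dickey-Fuller test with a deterministic trend to two simulated series, one difference-stationary (a random walk with drift) and one trend-stationary (a linear trend plus a persistent AR(1)). The series, sample length, and lag selection are illustrative assumptions, not the annual GNP data analyzed in the paper.

```python
# Illustrative ADF tests with a deterministic trend on two simulated series;
# not the paper's GNP data.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
n = 120                                            # roughly a long annual span

# Difference-stationary: random walk with drift.
y_ds = np.cumsum(0.02 + rng.normal(0, 0.05, n))

# Trend-stationary: linear trend plus a persistent AR(1).
ar = np.zeros(n)
shocks = rng.normal(0, 0.05, n)
for t in range(1, n):
    ar[t] = 0.8 * ar[t - 1] + shocks[t]
y_ts = 0.02 * np.arange(n) + ar

for name, y in [("difference-stationary", y_ds), ("trend-stationary", y_ts)]:
    stat, pval, *_ = adfuller(y, regression="ct", autolag="AIC")
    print(f"{name:22s} ADF statistic: {stat:6.2f}, p-value: {pval:.3f}")
```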

131 citations


ReportDOI
TL;DR: In this article, the authors discuss the importance of forecast evaluation and combination techniques, the evaluation and comparison of the accuracy of competing forecasts, and whether and how a set of forecasts may be combined to produce a superior composite forecast.
Abstract: Forecasts are of great importance and are widely used in economics and finance. Quite simply, good forecasts lead to good decisions. The importance of forecast evaluation and combination techniques follows immediately -- forecast users naturally have a keen interest in monitoring and improving forecast performance. More generally, forecast evaluation figures prominently in many questions in empirical economics and finance, such as whether expectations are rational and whether financial markets are efficient. The chapter discusses evaluation of a single forecast, and in particular, evaluation of whether and how it may be improved. The chapter also discusses the evaluation and comparison of the accuracy of competing forecasts. There is also a discussion of whether and how a set of forecasts may be combined to produce a superior composite forecast. Finally, a number of forecast evaluation topics of particular relevance in economics and finance are described, including methods for evaluating direction-of-change forecasts, probability forecasts and volatility forecasts.
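
Two of the evaluation topics listed in this chapter, direction-of-change accuracy and probability-forecast scoring, are simple to compute. The sketch below does both on simulated data using a hit rate and a Brier score; the forecasts, the logistic mapping to probabilities, and the data are assumptions for illustration.

```python
# Illustrative direction-of-change and probability-forecast evaluation on simulated data.
import numpy as np

rng = np.random.default_rng(5)
T = 250
actual = rng.normal(0.1, 1.0, T)                  # realized changes
point_fc = 0.5 * actual + rng.normal(0, 0.8, T)   # a noisy point forecast

# Direction-of-change: how often does the forecast get the sign right?
hit_rate = np.mean(np.sign(point_fc) == np.sign(actual))

# Probability forecast of the event "change > 0", scored with the Brier score.
prob_fc = 1.0 / (1.0 + np.exp(-2.0 * point_fc))   # assumed logistic mapping
event = (actual > 0).astype(float)
brier = np.mean((prob_fc - event)**2)

print(f"direction-of-change hit rate: {hit_rate:.3f}")
print(f"Brier score (lower is better): {brier:.3f}")
```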

50 citations


Journal ArticleDOI
TL;DR: The authors revisit Swinnerton and Wial's (1995) report that the four-year retention rate, or the probability of remaining in a job for four or more years, fell from .55 in 1983 to .49 in 1987.
Abstract: Innumerable media reports over the past few years purport to find evidence of declining job stability in the U.S. economy. Of course, articles in the media do not necessarily provide evidence of actual trends in the economy, since they are not based on representative sampling. However, in a recent issue of this journal, Swinnerton and Wial (1995) claimed to find empirical evidence of declining job stability in the U.S. economy over the 1980s, based on representative samples from the tenure supplements to the Current Population Surveys. Averaging over all workers, they reported that the four-year retention rate, or the probability of remaining in a job for four or more years, fell from .55 in 1983 to .49 in 1987. Although they acknowledged that firm conclusions cannot be drawn from two data points, Swinnerton and Wial ...
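
The retention-rate concept at issue can be illustrated with a toy calculation: given counts of workers by years of current tenure in two surveys four years apart, the aggregate four-year retention rate is the share of earlier-survey workers who reappear with four additional years of tenure. The counts below are made up, and the calculation ignores the sampling weights, heaping, and cohort adjustments used in practice.

```python
# Toy four-year retention rate from two hypothetical tenure cross-sections;
# all counts are made up, and survey weights/adjustments are ignored.
tenure_1983 = {0: 900, 1: 700, 2: 600, 3: 500, 4: 450, 5: 400,
               6: 350, 7: 300, 8: 260, 9: 230, 10: 200}
tenure_1987 = {0: 950, 1: 720, 2: 610, 3: 500, 4: 430, 5: 360, 6: 320,
               7: 280, 8: 240, 9: 210, 10: 180, 11: 160, 12: 140, 13: 120, 14: 100}

# A worker with t completed years of tenure in 1983 is counted as retained if a
# worker with t + 4 years of tenure is observed in 1987.
retained = sum(tenure_1987.get(t + 4, 0) for t in tenure_1983)
at_risk = sum(tenure_1983.values())
print(f"aggregate four-year retention rate: {retained / at_risk:.3f}")
```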

38 citations


Journal ArticleDOI
TL;DR: In this paper, an alternative minimum-expected-loss estimator for agricultural supply response to movements in expected price is proposed, motivated by the statistical properties of the commonly used econometric estimator, which may have a bimodal distribution.
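
The bimodality mentioned here typically arises when the response parameter is estimated as a ratio of coefficients whose denominator is imprecisely estimated near zero. The simulation sketch below illustrates that mechanism in a generic regression setting; the data-generating process and parameter values are assumptions, not the paper's model.

```python
# Illustrative simulation: a ratio-of-coefficients estimator becomes bimodal when the
# denominator coefficient is imprecisely estimated near zero. Generic DGP, not the paper's.
import numpy as np

rng = np.random.default_rng(6)
reps, n = 5000, 40
beta_num, beta_den = 0.5, 0.1          # small denominator coefficient (assumption)

ratios = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    z = rng.normal(size=n)
    y = beta_num * x + beta_den * z + rng.normal(size=n)
    b_num = np.sum(x * y) / np.sum(x * x)      # OLS slope on x
    b_den = np.sum(z * y) / np.sum(z * z)      # OLS slope on z (noisy, near zero)
    ratios[r] = b_num / b_den

counts, edges = np.histogram(ratios, bins=np.linspace(-25, 25, 26))
print("histogram of the ratio estimator (note the mass split across two humps):")
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"[{lo:6.1f}, {hi:6.1f}): {'#' * (c // 50)}")
```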

34 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide additional and complementary time-domain motivation for fractional integration in terms of the long-horizon behavior of (1) the variance-time function, and (2) confidence intervals for predictions.
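
The variance-time function referred to here tracks how the variance of k-period partial sums grows with the horizon k; under fractional integration it grows faster than linearly, so the scaled variance Var(sum)/k rises with k instead of staying flat. The sketch below computes it for a simulated ARFIMA(0, d, 0) series and for white noise; the value of d, the truncation of the moving-average representation, and the sample size are assumptions.

```python
# Illustrative variance-time function for a simulated ARFIMA(0, d, 0) series vs. white noise.
import numpy as np

rng = np.random.default_rng(7)
n, d, J = 20000, 0.3, 2000            # sample size, memory parameter, MA truncation (assumptions)

# MA(infinity) weights of (1 - L)^(-d): psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j.
psi = np.ones(J)
for j in range(1, J):
    psi[j] = psi[j - 1] * (j - 1 + d) / j

eps = rng.normal(size=n + J - 1)
y_frac = np.convolve(eps, psi, mode="valid")      # length n
y_wn = rng.normal(size=n)

def variance_time(y, k):
    """Variance of non-overlapping k-period sums, scaled by k."""
    sums = y[:(len(y) // k) * k].reshape(-1, k).sum(axis=1)
    return sums.var() / k

for k in (1, 4, 16, 64, 256):
    print(f"k = {k:3d}   V(k)/k  fractional: {variance_time(y_frac, k):6.2f}   "
          f"white noise: {variance_time(y_wn, k):5.2f}")
```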

30 citations


ReportDOI
TL;DR: The authors argue that even for the famously recalcitrant U.S. GNP series, unit root tests over long spans can be informative, and they make clear that uncritical repetition of the "we don't know, and we don't care" mantra is just as scientifically irresponsible as blind adoption of the view that all macroeconomic series are difference-stationary.
Abstract: A sleepy consensus has emerged that U.S. GNP data are uninformative as to whether trend is better described as deterministic or stochastic. Although the distinction is not critical in some contexts, it is important for point forecasting, because the two models imply very different long-run dynamics and hence different long-run forecasts. We argue that, even for the famously recalcitrant GNP series, unit root tests over long spans can be informative. Our results make clear that uncritical repetition of the "we don't know, and we don't care" mantra is just as scientifically irresponsible as blind adoption of the view that "all macroeconomic series are difference-stationary," or the view that "all macroeconomic series are trend-stationary." There is simply no substitute for serious, case-by-case analysis.

Posted Content
TL;DR: Noting that exact maximum likelihood estimation of many observation-driven models remains an open question because the unconditional density needed for exact estimation is not known in closed form, the authors develop an exact maximum likelihood procedure using simulation and nonparametric density estimation, with an illustrative application to ARCH models.
Abstract: The possibility of exact maximum likelihood estimation of many observation-driven models remains an open question. Often only approximate maximum likelihood estimation is attempted, because the unconditional density needed for exact estimation is not known in closed form. Using simulation and nonparametric density estimation techniques that facilitate empirical likelihood evaluation, we develop an exact maximum likelihood procedure. We provide an illustrative application to the estimation of ARCH models, in which we compare the sampling properties of the exact estimator to those of several competitors. We find that, especially in situations of small samples and high persistence, efficiency gains are obtained. We conclude with a discussion of directions for future research, including application of our methods to panel data models.
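
A minimal sketch of the general idea described in this abstract, applied to an ARCH(1) model: the conditional densities are Gaussian and known in closed form, while the unconditional density of the first observation is approximated by simulating long realizations from the model and kernel-smoothing them. The simulator, kernel choice, and parameter values below are assumptions, not the authors' exact procedure.

```python
# Illustrative "exact" ARCH(1) likelihood: Gaussian conditional densities plus a
# simulated, kernel-smoothed estimate of the unconditional density of the first
# observation. Settings are assumptions, not the authors' exact procedure.
import numpy as np
from scipy.stats import gaussian_kde, norm

def simulate_arch1(omega, alpha, n, rng, burn=500):
    """Simulate an ARCH(1) process y_t = sqrt(omega + alpha * y_{t-1}^2) * eps_t."""
    y = np.zeros(n + burn)
    y[0] = np.sqrt(omega / (1 - alpha)) * rng.normal()   # start near the stationary scale
    for t in range(1, n + burn):
        y[t] = np.sqrt(omega + alpha * y[t - 1]**2) * rng.normal()
    return y[burn:]

def exact_loglik(params, y, rng, nsim=50_000):
    omega, alpha = params
    # Conditional part (closed form): y_t | y_{t-1} ~ N(0, omega + alpha * y_{t-1}^2).
    h = omega + alpha * y[:-1]**2
    conditional = norm.logpdf(y[1:], scale=np.sqrt(h)).sum()
    # Unconditional part: the marginal density of y_1 has no closed form, so
    # approximate it by simulating from the model and kernel-smoothing the draws.
    draws = simulate_arch1(omega, alpha, nsim, rng)
    marginal = np.log(gaussian_kde(draws)(y[:1])[0])
    return conditional + marginal

rng = np.random.default_rng(8)
data = simulate_arch1(0.2, 0.6, 150, rng)                # small, fairly persistent sample
print("exact log-likelihood at the true parameters:",
      round(exact_loglik((0.2, 0.6), data, rng), 2))
```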

ReportDOI
TL;DR: This paper developed an exact maximum likelihood estimation procedure for ARCH models using simulation and nonparametric density estimation techniques that facilitate empirical likelihood evaluation, and compared the sampling properties of the exact estimator to those of several competitors and found that, especially in situations of small samples and high persistence, efficiency gains are obtained.
Abstract: The possibility of exact maximum likelihood estimation of many observation-driven models remains an open question. Often only approximate maximum likelihood estimation is attempted, because the unconditional density needed for exact estimation is not known in closed form. Using simulation and nonparametric density estimation techniques that facilitate empirical likelihood evaluation, we develop an exact maximum likelihood procedure. We provide an illustrative application to the estimation of ARCH models, in which we compare the sampling properties of the exact estimator to those of several competitors. We find that, especially in situations of small samples and high persistence, efficiency gains are obtained. We conclude with a discussion of directions for future research, including application of our methods to panel data models.

Posted Content
TL;DR: The authors argue that even for the famously recalcitrant U.S. GNP series, unit root tests over long spans can be informative, and they make clear that uncritical repetition of the "we don't know, and we don't care" mantra is just as scientifically irresponsible as blind adoption of the view that all macroeconomic series are difference-stationary.
Abstract: A sleepy consensus has emerged that U.S. GNP data are uninformative as to whether trend is better described as deterministic or stochastic. Although the distinction is not critical in some contexts, it is important for point forecasting, because the two models imply very different long-run dynamics and hence different long-run forecasts. We argue that, even for the famously recalcitrant GNP series, unit root tests over long spans can be informative. Our results make clear that uncritical repetition of the "we don't know, and we don't care" mantra is just as scientifically irresponsible as blind adoption of the view that "all macroeconomic series are difference-stationary," or the view that "all macroeconomic series are trend-stationary." There is simply no substitute for serious, case-by-case analysis.


01 Jan 1996
TL;DR: A new technique for solving prediction problems under asymmetric loss using piecewise-linear approximations to the loss function is proposed, and the existence and uniqueness of the optimal predictor are established.
Abstract: We make three related contributions. First, we propose a new technique for solving prediction problems under asymmetric loss using piecewise-linear approximations to the loss function, and we establish existence and uniqueness of the optimal predictor. Second, we provide a detailed application to optimal prediction of a conditionally heteroskedastic process under asymmetric loss, the insights gained from which are broadly applicable. Finally, we incorporate our results into a general framework for recursive prediction-based model selection under the relevant loss function.