Author

Norman R. Swanson

Bio: Norman R. Swanson is an academic researcher from Rutgers University. The author has contributed to research in topics: Estimator & Model selection. The author has an h-index of 43 and has co-authored 222 publications receiving 7,252 citations. Previous affiliations of Norman R. Swanson include Texas A&M University & Inter-American Development Bank.


Papers
Journal ArticleDOI
TL;DR: In this paper, a data-determined method for testing structural models of the errors in vector autoregressions is discussed, which can easily be combined with prior economic knowledge and a subjective analysis of data characteristics to yield valuable information concerning model selection and specification.
Abstract: A data-determined method for testing structural models of the errors in vector autoregressions is discussed. The method can easily be combined with prior economic knowledge and a subjective analysis of data characteristics to yield valuable information concerning model selection and specification. In one dimension, it turns out that standard t statistics can be used to test the various overidentifying restrictions that are implied by a model. In another dimension, the method compares a priori knowledge of a structural model for the errors with the properties exhibited by the data. Thus this method may help to ensure that orderings of the errors for impulse response and forecast error variance decomposition analyses are sensible, given the data. Two economic examples are used to illustrate the method.
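As a loose illustration of the kind of data-based check the paper advocates, the sketch below fits a reduced-form VAR and inspects the correlation structure of its residuals; this is not the paper's exact procedure, and the inputs are assumptions.

```python
# Minimal sketch: fit a reduced-form VAR and examine the residual
# correlation matrix to judge whether a proposed ordering of the errors
# (for impulse responses / variance decompositions) is plausible.
import numpy as np
from statsmodels.tsa.api import VAR

def residual_correlations(data, lags=4):
    """data: pandas DataFrame of macro series (contents are illustrative)."""
    res = VAR(data).fit(lags)
    u = res.resid                   # reduced-form errors
    return np.corrcoef(u.values.T)  # pairwise error correlations

# Near-zero off-diagonal entries are candidate overidentifying zero
# restrictions of the kind the paper tests with standard t statistics.
```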

463 citations

Journal ArticleDOI
TL;DR: Ex ante or real-time forecasting results based on rolling window prediction methods indicate that multivariate adaptive linear vector autoregression models often outperform a variety of adaptive and nonadaptive univariate models.
Abstract: We take a model selection approach to the question of whether a class of adaptive prediction models (artificial neural networks) is useful for predicting future values of nine macroeconomic variables. We use a variety of out-of-sample forecast-based model selection criteria, including forecast error measures and forecast direction accuracy. Ex ante or real-time forecasting results based on rolling window prediction methods indicate that multivariate adaptive linear vector autoregression models often outperform a variety of (1) adaptive and nonadaptive univariate models, (2) nonadaptive multivariate models, (3) adaptive nonlinear models, and (4) professionally available survey predictions. Further, model selection based on the in-sample Schwarz information criterion apparently fails to offer a convenient shortcut to true out-of-sample performance measures.
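A minimal rolling-window, real-time forecast comparison in the spirit of the paper is sketched below: a linear autoregressive benchmark against a small neural network, scored by mean squared error and direction accuracy. The window size, network shape, and single-lag design are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def rolling_forecasts(y, window=120):
    """y: 1-D numpy array of a macro series; prints out-of-sample scores."""
    preds_ar, preds_nn, actual = [], [], []
    for t in range(window, len(y) - 1):
        X = y[t - window:t].reshape(-1, 1)   # lagged values
        z = y[t - window + 1:t + 1]          # one-step-ahead targets
        ar = LinearRegression().fit(X, z)
        nn = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000,
                          random_state=0).fit(X, z)
        x_now = np.array([[y[t]]])
        preds_ar.append(ar.predict(x_now)[0])
        preds_nn.append(nn.predict(x_now)[0])
        actual.append(y[t + 1])
    a, base = np.array(actual), y[window:len(y) - 1]
    for name, p in [("linear", np.array(preds_ar)), ("NN", np.array(preds_nn))]:
        print(name, "MSE:", np.mean((p - a) ** 2),
              "direction accuracy:", np.mean(np.sign(p - base) == np.sign(a - base)))
```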

374 citations

Journal ArticleDOI
TL;DR: A model-selection approach to the question of whether forward interest rates are useful in predicting future spot rates indicates that the premium of the forward rate over the spot rate helps to predict the sign of future changes in the interest rate.
Abstract: We take a model-selection approach to the question of whether forward interest rates are useful in predicting future spot rates, using a variety of out-of-sample forecast-based model-selection criteria: forecast mean squared error, forecast direction accuracy, and forecast-based trading-system profitability. We also examine the usefulness of a class of novel prediction models called artificial neural networks and investigate the issue of appropriate window sizes for rolling-window-based prediction methods. Results indicate that the premium of the forward rate over the spot rate helps to predict the sign of future changes in the interest rate. Furthermore, model selection based on an in-sample Schwarz information criterion (SIC) does not appear to be a reliable guide to out-of-sample performance in the case of short-term interest rates. Thus, the in-sample SIC apparently fails to offer a convenient shortcut to true out-of-sample performance measures.
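The headline finding can be checked in a few lines: does the sign of the forward premium match the sign of the subsequent spot-rate change more than half the time? The array names and horizon below are assumptions for the sketch.

```python
import numpy as np

def sign_hit_rate(spot, forward, horizon=1):
    """spot, forward: aligned 1-D numpy arrays of interest rates."""
    premium = forward[:-horizon] - spot[:-horizon]  # forward premium
    change = spot[horizon:] - spot[:-horizon]       # realized spot-rate change
    return np.mean(np.sign(premium) == np.sign(change))  # > 0.5 supports the finding
```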

274 citations

Journal ArticleDOI
TL;DR: The authors examine the extent to which fluctuations in the money stock anticipate (or Granger-cause) fluctuations in real output, using a variety of rolling-window and increasing-window estimation techniques. They find that the relation between income, money, prices, and interest rates is stable as long as sufficient data are used, and that there is cointegration among the variables considered, although cointegration spaces become very difficult to estimate precisely when smaller windows are used.
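A minimal sketch of the rolling-window exercise, assuming a pandas DataFrame df with growth-rate columns "output" and "money" (names are illustrative):

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def rolling_granger(df, window=80, lags=4):
    """p-values for the test that money Granger-causes output, per window."""
    pvals = []
    for start in range(len(df) - window + 1):
        chunk = df.iloc[start:start + window][["output", "money"]]
        out = grangercausalitytests(chunk, maxlag=lags, verbose=False)
        pvals.append(out[lags][0]["ssr_ftest"][1])  # F-test p-value at max lag
    return pd.Series(pvals)  # variation across windows mirrors the paper's theme
```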

267 citations

Posted Content
TL;DR: In this paper, a general analysis of the conditions under which consistent estimation can be achieved in instrumental variables regression when the available instruments are weak in the local-to-zero sense is presented.
Abstract: This paper conducts a general analysis of the conditions under which consistent estimation can be achieved in instrumental variables regression when the available instruments are weak in the local-to-zero sense. More precisely, the approach adopted in this paper combines key features of the local-to-zero framework of Staiger and Stock (1997) and the many-instrument framework of Morimune (1983) and Bekker (1994), and generalizes both of these frameworks in the following ways. First, we consider a general local-to-zero framework which allows for an arbitrary degree of instrument weakness by modeling the first-stage coefficients as shrinking toward zero at an unspecified rate, say b_n^-1. Our local-to-zero setup, in fact, reduces to that of Staiger and Stock (1997) in the case where b_n = n^0.5. In addition, we examine a broad class of single-equation estimators which extends the well-known k-class to include, amongst others, the Jackknife Instrumental Variables Estimator (JIVE) of Angrist, Imbens, and Krueger (1999). Analysis of estimators within this extended class, based on a pathwise asymptotic scheme where the number of instruments K_n is allowed to grow as a function of the sample size, reveals that consistent estimation depends importantly on the relative magnitudes of r_n, the growth rate of the concentration parameter, and K_n. In particular, it is shown that members of the extended class which satisfy certain general conditions, such as LIML and JIVE, are consistent provided that K_n^0.5/r_n --> 0 as n --> infinity. On the other hand, the two-stage least squares (2SLS) estimator is shown not to satisfy the needed conditions and is found to be consistent only if K_n/r_n --> 0 as n --> infinity. A main point of our paper is that the use of many instruments may be beneficial from a point estimation standpoint in empirical applications where the available instruments are weak but abundant, as it provides an extra source by which the concentration parameter can grow, thus allowing consistent estimation to be achievable, in certain cases, even in the presence of weak instruments. Our results thus add to the findings of Staiger and Stock (1997), who study a local-to-zero framework where K_n is held fixed and the concentration parameter does not diverge as sample size grows; in consequence, no single-equation estimator is found to be consistent under their setup.
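A small simulation conveys the paper's point: with many individually weak instruments, a jackknife-type estimator can stay close to the true coefficient while 2SLS is pulled toward OLS. All data-generating choices below are illustrative, and this is a sketch of JIVE rather than the paper's full extended k-class.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, beta = 2000, 50, 1.0
Z = rng.standard_normal((n, K))
pi = np.full(K, 0.05)                           # weak first-stage coefficients
e = rng.standard_normal(n)
x = Z @ pi + 0.8 * e + rng.standard_normal(n)   # endogenous regressor
y = beta * x + e

# 2SLS: regress y on the projection of x onto all instruments.
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
xhat = P @ x
b_2sls = (xhat @ y) / (xhat @ x)

# JIVE: leave-one-out fitted values drop the own-observation term that biases 2SLS.
h = np.diag(P)                                  # leverage values
xhat_jive = (P @ x - h * x) / (1 - h)
b_jive = (xhat_jive @ y) / (xhat_jive @ x)

print("2SLS:", b_2sls, "JIVE:", b_jive)         # JIVE is typically closer to beta
```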

265 citations


Cited by
Book
01 Jan 2009

8,216 citations

Journal ArticleDOI
TL;DR: A review of P. Billingsley's Convergence of Probability Measures (Wiley, 1968), the standard monograph on weak convergence of probability measures.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4“. 117s.

5,689 citations

Book ChapterDOI
01 Jan 2005
TL;DR: This paper proposes quantitative definitions of weak instruments based on the maximum IV estimator bias, or the maximum Wald test size distortion, when there are multiple endogenous regressors, and tabulates critical values that enable using the first-stage F-statistic (or, when there are multiple endogenous regressors, the Cragg-Donald (1993) statistic) to test whether given instruments are weak.
Abstract: Weak instruments can produce biased IV estimators and hypothesis tests with large size distortions. But what, precisely, are weak instruments, and how does one detect them in practice? This paper proposes quantitative definitions of weak instruments based on the maximum IV estimator bias, or the maximum Wald test size distortion, when there are multiple endogenous regressors. We tabulate critical values that enable using the first-stage F-statistic (or, when there are multiple endogenous regressors, the Cragg-Donald (1993) statistic) to test whether given instruments are weak.
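In practice the diagnostic reduces to a first-stage regression. The sketch below computes the first-stage F-statistic for a single endogenous regressor; the threshold of 10 is the familiar rule of thumb, used here only as a placeholder for the paper's tabulated critical values.

```python
import statsmodels.api as sm

def first_stage_F(x, Z):
    """x: endogenous regressor (1-D array), Z: matrix of excluded instruments."""
    res = sm.OLS(x, sm.add_constant(Z)).fit()
    return res.fvalue  # joint F-test that all instrument coefficients are zero

# if first_stage_F(x, Z) < 10: instruments may be weak (biased IV, size distortions)
```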

4,545 citations

Posted Content
TL;DR: A theme of the text is the use of artificial regressions for estimation, inference, and specification testing of nonlinear models, including diagnostic tests for parameter constancy, serial correlation, heteroscedasticity, and other types of misspecification.
Abstract: Offering a unifying theoretical perspective not readily available in any other text, this innovative guide to econometrics uses simple geometrical arguments to develop students' intuitive understanding of basic and advanced topics, emphasizing throughout the practical applications of modern theory and nonlinear techniques of estimation. One theme of the text is the use of artificial regressions for estimation, inference, and specification testing of nonlinear models, including diagnostic tests for parameter constancy, serial correlation, heteroscedasticity, and other types of misspecification. Explaining how estimates can be obtained and tests can be carried out, the authors go beyond a mere algebraic description to one that can be easily translated into the commands of a standard econometric software package. Covering an unprecedented range of problems with a consistent emphasis on those that arise in applied work, this accessible and coherent guide to the most vital topics in econometrics today is indispensable for advanced students of econometrics and students of statistics interested in regression and related topics. It will also suit practising econometricians who want to update their skills. Flexibly designed to accommodate a variety of course levels, it offers both complete coverage of the basic material and separate chapters on areas of specialized interest.
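One concrete instance of the artificial-regression idea is a Breusch-Pagan-style heteroskedasticity test, where squared residuals from the original fit are regressed on the regressors and n times the R-squared of that auxiliary regression is treated as asymptotically chi-squared. This is a generic textbook example, not an excerpt from the book.

```python
import statsmodels.api as sm

def bp_statistic(y, X):
    """Breusch-Pagan-style statistic via an auxiliary (artificial) regression."""
    X1 = sm.add_constant(X)
    u2 = sm.OLS(y, X1).fit().resid ** 2  # squared residuals from the original fit
    aux = sm.OLS(u2, X1).fit()           # the artificial regression
    return len(y) * aux.rsquared         # approx. chi2 with k degrees of freedom
```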

4,284 citations

Posted Content
TL;DR: In this paper, the authors provide a unified and comprehensive theory of structural time series models, including a detailed treatment of the Kalman filter for modeling economic and social time series, and address the special problems which the treatment of such series poses.
Abstract: In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
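A minimal structural time series fit in the spirit of the book, using the Kalman-filter-based unobserved components model in statsmodels; the local-level-plus-seasonal specification and monthly seasonality are assumptions for the sketch.

```python
import statsmodels.api as sm

def fit_structural(y, seasonal_periods=12):
    """y: a univariate series; returns smoothed trend and seasonal components."""
    model = sm.tsa.UnobservedComponents(y, level="local level",
                                        seasonal=seasonal_periods)
    res = model.fit(disp=False)  # maximum likelihood via the Kalman filter
    return res.level["smoothed"], res.seasonal["smoothed"]
```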

4,252 citations