
Showing papers in "Journal of Applied Econometrics in 1997"


Journal ArticleDOI
TL;DR: In this paper, the authors examined the econometric properties of estimates of beta convergence as traditionally defined in the literature and showed that all these estimates are subject to substantial biases and that the empirical estimates clearly reflect the nature and the magnitude of these biases as predicted by econometric theory.
Abstract: SUMMARY The paper considers international per capita output and its growth using a panel of data for 102 countries between 1960 and 1989. It sets out an explicitly stochastic Solow growth model and shows that this has quite different properties from the standard approach where the output equation is obtained by adding an error term to the linearized solution of a deterministic Solow model. It examines the econometric properties of estimates of beta convergence as traditionally defined in the literature and shows that all these estimates are subject to substantial biases. Our empirical estimates clearly reflect the nature and the magnitude of these biases as predicted by econometric theory. Steady state growth rates differ significantly across countries and once this heterogeneity is allowed for the estimates of beta are substantially higher than the consensus in the literature. But they are very imprecisely estimated and difficult to interpret. The paper also discusses the economic implications of these results for sigma convergence. © 1997 John Wiley & Sons, Ltd.
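
As a point of reference for the beta-convergence estimates discussed above, the sketch below runs the textbook cross-country growth regression on simulated data and backs out the implied convergence rate. It is an illustrative sketch only: the simulated data and parameter values are assumptions, and it does not reproduce the authors' stochastic Solow estimator or their heterogeneity corrections.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative data: log initial income and average growth for 102 "countries"
    n, T = 102, 29                      # 1960-1989 spans 29 years
    log_y0 = rng.normal(8.0, 1.0, n)    # log per-capita output in 1960
    true_beta = 0.02                    # assumed convergence rate
    growth = (0.02 - (1 - np.exp(-true_beta * T)) / T * (log_y0 - 8.0)
              + rng.normal(0, 0.005, n))   # average annual growth, 1960-89

    # Cross-section OLS of average growth on initial income (Barro-style regression)
    X = np.column_stack([np.ones(n), log_y0])
    slope = np.linalg.lstsq(X, growth, rcond=None)[0][1]

    # Convergence rate implied by the slope of the linearized solution
    beta_hat = -np.log(1 + slope * T) / T
    print(f"OLS slope: {slope:.4f}, implied beta: {beta_hat:.4f}")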

639 citations


Journal ArticleDOI
TL;DR: It is found that Gibbs sampling performs as well as, or better than, importance sampling, and that the Gibbs sampling algorithms are less adversely affected by model size.
Abstract: In Bayesian analysis of vector autoregressive models, and especially in forecasting applications, the Minnesota prior of Litterman is frequently used. In many cases other prior distributions provid ...

635 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a finite mixture negative binomial count model that accommodates unobserved heterogeneity in an intuitive and analytically tractable manner for six measures of medical care demand by the elderly.
Abstract: SUMMARY In this article we develop a finite mixture negative binomial count model that accommodates unobserved heterogeneity in an intuitive and analytically tractable manner. This model, the standard negative binomial model, and its hurdle extension are estimated for six measures of medical care demand by the elderly using a sample from the 1987 National Medical Expenditure Survey. The finite mixture model is preferred overall by statistical model selection criteria. Two points of support adequately describe the distribution of the unobserved heterogeneity, suggesting two latent populations, the ‘healthy’ and the ‘ill’, whose fitted distributions differ substantially from each other. © 1997 by John Wiley & Sons, Ltd.
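
A minimal sketch of a two-component negative binomial mixture of the kind described above, fitted by direct maximum likelihood on simulated counts. The NB2 parameterization, the simulated 'healthy'/'ill' split, and all parameter values are illustrative assumptions, not the NMES application.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit
    from scipy.stats import nbinom

    rng = np.random.default_rng(1)

    def nb_logpmf(y, mu, alpha):
        # NB2 parameterization: mean mu, variance mu + alpha * mu**2
        n = 1.0 / alpha
        return nbinom.logpmf(y, n, n / (n + mu))

    # Simulate two latent classes ("healthy" vs "ill") of visit counts
    n_obs = 2000
    ill = rng.random(n_obs) < 0.4
    mu_true = np.where(ill, 6.0, 1.0)
    y = nbinom.rvs(2.0, 2.0 / (2.0 + mu_true), random_state=rng)

    def neg_loglik(theta):
        # theta = (logit of class-1 weight, log mu1, log mu2, log alpha)
        pi1 = expit(theta[0])
        mu1, mu2, alpha = np.exp(theta[1:])
        l1, l2 = nb_logpmf(y, mu1, alpha), nb_logpmf(y, mu2, alpha)
        m = np.maximum(l1, l2)
        return -(np.log(pi1 * np.exp(l1 - m) + (1 - pi1) * np.exp(l2 - m)) + m).sum()

    res = minimize(neg_loglik, np.array([0.0, 0.0, 1.0, 0.0]), method="Nelder-Mead",
                   options={"maxiter": 5000})
    print("weight, mu1, mu2, alpha:", expit(res.x[0]), *np.exp(res.x[1:]))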

526 citations


Report SeriesDOI
TL;DR: In this paper, the generalized method of moments (GMM) estimation technique is discussed for count data models with endogenous regressors, and it is shown that, in general, a set of instruments is not orthogonal to both error types.
Abstract: The generalized method of moments (GMM) estimation technique is discussed for count data models with endogenous regressors. Count data models can be specified with additive or multiplicative errors and it is shown that, in general, a set of instruments is not orthogonal to both error types. Simultaneous equations with a dependent count variable often do not have a reduced form which is a simple function of the instruments. However, a simultaneous model with a count and a binary variable can only be logically consistent when the system is triangular. Utilizing data from the British Health and Lifestyle Survey 1991-1992, the GMM estimator is used in the estimation of a model explaining the number of visits to doctors, with a self-reported binary health index as a possible endogenous regressor. If this regressor is truly endogenous, one expects the pseudo-likelihood estimate of its coefficient to be biased upwards. Indeed, for the additive model, the estimated coefficient of the binary health index decreases in value when the possible endogeneity of this regressor is taken into account. Further indication of endogeneity is given by the fact that the overidentifying restrictions are rejected in the multiplicative model, but not in the additive model. Finally, a model that includes predicted latent health instead of the binary health index is estimated in stages.
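
The practical difference between additive and multiplicative errors is which residual the instruments must be orthogonal to. Below is a hedged sketch of two-step GMM under both moment conditions on simulated data with a binary endogenous regressor; the design, instrument, and parameter values are assumptions and do not reproduce the Health and Lifestyle Survey application.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 3000

    # Simulated design: exogenous x, binary endogenous d, excluded instrument z
    x = rng.normal(size=n)
    u = rng.normal(size=n)                    # unobservable driving endogeneity
    z = rng.normal(size=n)
    d = (0.5 * z + u + rng.normal(size=n) > 0).astype(float)
    y = rng.poisson(np.exp(0.5 + 0.3 * x + 0.7 * d + 0.5 * u))

    X = np.column_stack([np.ones(n), x, d])
    Z = np.column_stack([np.ones(n), x, z])   # instruments: exogenous vars plus z

    def moments(beta, multiplicative):
        mu = np.exp(X @ beta)
        resid = y / mu - 1.0 if multiplicative else y - mu
        return Z * resid[:, None]             # n x k matrix of moment contributions

    def gmm_obj(beta, W, multiplicative):
        gbar = moments(beta, multiplicative).mean(axis=0)
        return gbar @ W @ gbar

    def two_step_gmm(multiplicative):
        k = Z.shape[1]
        step1 = minimize(gmm_obj, np.zeros(X.shape[1]),
                         args=(np.eye(k), multiplicative), method="Nelder-Mead")
        g = moments(step1.x, multiplicative)
        W = np.linalg.inv(g.T @ g / n)        # efficient weight matrix
        return minimize(gmm_obj, step1.x, args=(W, multiplicative),
                        method="Nelder-Mead").x

    print("additive-error GMM:      ", two_step_gmm(False))
    print("multiplicative-error GMM:", two_step_gmm(True))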

365 citations


Journal ArticleDOI
Marco Bianchi
TL;DR: In this article, the authors test the convergence hypothesis in a cross-section of 119 countries by means of bootstrap multimodality tests and non-parametric density estimation techniques and find low mobility patterns of intra-distribution dynamics and increasing evidence for bimodality.
Abstract: In this paper, we test the convergence hypothesis in a cross-section of 119 countries by means of bootstrap multimodality tests and non-parametric density estimation techniques. By looking at the density distribution of GDP across countries in 1970, 1980 and 1989, we find low mobility patterns of intra-distribution dynamics and increasing evidence for bimodality. The findings stand in sharp contrast with the convergence prediction. © 1997 John Wiley & Sons, Ltd.
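
A sketch of a Silverman-style smoothed-bootstrap test of unimodality built on a Gaussian kernel density estimate, run on simulated bimodal 'log GDP' data. The grid, bandwidth search, and data are illustrative assumptions; the paper's exact tests and the 119-country sample are not reproduced.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    # Illustrative "log GDP per capita" for 119 countries: a bimodal mixture
    data = np.concatenate([rng.normal(7.0, 0.5, 60), rng.normal(9.5, 0.6, 59)])

    def kde(grid, x, h):
        # Gaussian kernel density estimate with bandwidth h
        return norm.pdf((grid[:, None] - x[None, :]) / h).mean(axis=1) / h

    def n_modes(x, h, grid=np.linspace(4.0, 12.0, 400)):
        f = kde(grid, x, h)
        return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

    def critical_bandwidth(x, k=1, lo=0.01, hi=5.0):
        # Smallest bandwidth at which the KDE has at most k modes (bisection)
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if n_modes(x, mid) <= k else (mid, hi)
        return hi

    # Smoothed bootstrap: a small p-value is evidence against unimodality
    h_crit = critical_bandwidth(data, k=1)
    s2, B, exceed = data.var(), 200, 0
    scale = np.sqrt(1 + h_crit**2 / s2)
    for _ in range(B):
        boot = rng.choice(data, size=data.size, replace=True)
        smooth = boot.mean() + (boot - boot.mean()
                                + h_crit * rng.normal(size=data.size)) / scale
        exceed += n_modes(smooth, h_crit) > 1
    print(f"critical bandwidth: {h_crit:.3f}, bootstrap p-value: {exceed / B:.3f}")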

286 citations


Journal ArticleDOI
TL;DR: Using sequential trend break and panel data models, this paper investigated the unit root hypothesis for the inflation rates of thirteen OECD countries and found evidence of stationarity in only four of the thirteen countries.
Abstract: SUMMARY Using sequential trend break and panel data models, we investigate the unit root hypothesis for the inflation rates of thirteen OECD countries. With individual country tests, we find evidence of stationarity in only four of the thirteen countries. The results are more striking with the panel data model. We can strongly reject the unit root hypothesis both for a panel of all thirteen countries and for a number of smaller panels consisting of as few as three countries. The non-rejection of the unit root hypothesis for inflation is very fragile to even a small amount of cross-section variation. © 1997 John Wiley & Sons, Ltd.
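
For concreteness, the sketch below computes individual-country augmented Dickey-Fuller t-statistics on a simulated panel of thirteen series, four of them stationary. It illustrates only the single-country step; the sequential trend-break and panel statistics used in the paper are not reproduced, and the simulated data are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    def adf_tstat(y, lags=1):
        # t-statistic on rho in:  dy_t = c + rho * y_{t-1} + phi * dy_{t-1} + e_t
        dy = np.diff(y)
        Y = dy[lags:]
        X = np.column_stack([np.ones(Y.size), y[lags:-1]] +
                            [dy[lags - j:-j] for j in range(1, lags + 1)])
        b = np.linalg.lstsq(X, Y, rcond=None)[0]
        e = Y - X @ b
        s2 = e @ e / (Y.size - X.shape[1])
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        return b[1] / se

    # Illustrative panel of "inflation" series: four stationary, nine unit roots
    T, countries = 120, 13
    stats = []
    for i in range(countries):
        rho = 0.85 if i < 4 else 1.0
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = rho * x[t - 1] + rng.normal()
        stats.append(adf_tstat(x))
    print("ADF t-statistics (5% critical value roughly -2.89):")
    print(np.round(stats, 2))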

250 citations


Journal ArticleDOI
TL;DR: In this article, the relationship between technological activity and patent applications is analyzed and several econometric models for count panel data are estimated, dealing with the discrete nature of patents and firm specific unobservables arising from the panel data context.
Abstract: This paper analyses the relationship between the main determinants of technological activity and patent applications. To this end, an original panel of 181 international manufacturing firms investing substantial amounts in R&D during the late 1980s has been constructed. The number of patent applications by firms is explained by current and lagged levels of R&D expenditures and technological spillovers. Technological and geographical opportunities are also taken into account as additional determinants. In order to examine this relationship, several econometric models for count panel data are estimated. These models deal with the discrete nature of patents and firm specific unobservables arising from the panel data context. The main findings of the paper are, first, a high sensitivity of results to the specification of the patent distribution; second, estimates from the preferred GMM panel data method suggesting decreasing returns to scale in technological activity; and, finally, a positive impact of technological spillovers on firms' own innovation. © 1997 John Wiley & Sons, Ltd.

243 citations


Journal ArticleDOI
TL;DR: In this paper, the use of bootstrap methods to compute interval estimates and perform hypothesis tests for decomposable measures of economic inequality is considered, using the Gini coefficient and Theil's entropy measures of inequality.
Abstract: SUMMARY In this paper we consider the use of bootstrap methods to compute interval estimates and perform hypothesis tests for decomposable measures of economic inequality. Two applications of this approach, using the Gini coefficient and Theil's entropy measures of inequality, are provided. Our first application employs pre- and post-tax aggregate state income data, constructed from the Panel Study of Income Dynamics. We find that although casual observation of the inequality measures suggests that the post-tax distribution of income is less equal among states than pre-tax income, none of these observed differences are statistically significant at the 10% level. Our second application uses the National Longitudinal Survey of Youth data to study youth inequality. We find that youth inequality decreases as the cohort ages, but between age-group inequality has increased in the latter half of the 1980s. The results suggest that (1) statistical inference is essential even when large samples are available, and (2) the bootstrap procedure appears to perform well in this setting. © 1997 by John Wiley & Sons, Ltd. J. appl. econom. 12: 133-150, 1997.
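
A minimal sketch of percentile-bootstrap interval estimates for the Gini coefficient and Theil's entropy measure, using simulated lognormal incomes rather than the PSID or NLSY data; the sample size and number of replications are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)

    def gini(y):
        y = np.sort(np.asarray(y, dtype=float))
        n = y.size
        return (2 * np.arange(1, n + 1) - n - 1) @ y / (n * y.sum())

    def theil(y):
        r = np.asarray(y, dtype=float) / np.mean(y)
        return np.mean(r * np.log(r))

    # Illustrative income sample (lognormal), not the PSID state aggregates
    income = rng.lognormal(mean=10.0, sigma=0.6, size=500)

    # Percentile bootstrap confidence intervals
    B = 2000
    stats = np.array([[gini(s), theil(s)] for s in
                      (rng.choice(income, income.size, replace=True) for _ in range(B))])
    lo, hi = np.percentile(stats, [5, 95], axis=0)
    print(f"Gini  = {gini(income):.3f}, 90% CI ({lo[0]:.3f}, {hi[0]:.3f})")
    print(f"Theil = {theil(income):.3f}, 90% CI ({lo[1]:.3f}, {hi[1]:.3f})")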

222 citations


Journal ArticleDOI
TL;DR: In this paper, the patent equation, an empirical counterpart to the 'knowledge-production function', is estimated for a panel of French manufacturing firms, with innovation output measured by the number of European patent applications and the input by research capital.
Abstract: The purpose of this paper is to estimate the patent equation, an empirical counterpart to the ‘knowledge-production function’. Innovation output is measured through the number of European patent applications and the input by research capital, in a panel of French manufacturing firms. Estimating the innovation function raises specific issues related to count data. Using the framework of models with multiplicative errors, we explore and test for various specifications: correlated fixed effects, serial correlations, and weak exogeneity. We also present a first extension to lagged dependent variables. © 1997 John Wiley & Sons, Ltd.

195 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that the unobserved heterogeneity commonly assumed to be the source of overdispersion in count data models has predictable implications for the probability structure of such mixture models.
Abstract: SUMMARY This paper demonstrates that the unobserved heterogeneity commonly assumed to be the source of overdispersion in count data models has predictable implications for the probability structure of such mixture models. In particular, the common observation of excess zeros is a strict implication of unobserved heterogeneity. This result has important implications for using count model estimates for predicting certain interesting parameters. Test statistics to detect such heterogeneity-related departures from the null model are proposed and applied in a health-care utilization example, suggesting that a null Poisson model should be rejected in favour of a mixed alternative. © 1997 by John Wiley & Sons, Ltd. J. Appl. Econ., 12, 337-350 (1997)
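
The excess-zeros implication follows from Jensen's inequality: for a mixed Poisson, P(Y = 0) = E[exp(-mu v)] >= exp(-mu E[v]), so any heterogeneity raises the zero probability relative to a Poisson with the same mean. A quick numerical check with a Poisson-gamma mixture (negative binomial), holding the mean fixed, is sketched below; the parameter values are illustrative.

    from scipy.stats import nbinom, poisson

    mu = 2.0
    print("Poisson P(Y=0):", poisson.pmf(0, mu))
    for alpha in (0.5, 1.0, 2.0):           # increasing unobserved heterogeneity
        n = 1.0 / alpha                     # gamma mixing with variance alpha
        p = n / (n + mu)
        print(f"NB (alpha={alpha}) P(Y=0): {nbinom.pmf(0, n, p):.4f}")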

185 citations


Journal ArticleDOI
TL;DR: This paper used the Kalman filter to obtain maximum likelihood estimates of a permanent-transitory component model for log spot and forward dollar prices of the pound, the franc, and the yen.
Abstract: SUMMARY Using the Kalman filter, we obtain maximum likelihood estimates of a permanent-transitory components model for log spot and forward dollar prices of the pound, the franc, and the yen. This simple parametric model is useful in understanding why the forward rate may be an unbiased predictor of the future spot rate even though an increase in the forward premium predicts a dollar appreciation. Our estimates of the expected excess return on short-term dollar-denominated assets are persistent and reasonable in magnitude. They also exhibit sign fluctuations and negative covariance with the estimated expected depreciation. © 1997 John Wiley & Sons, Ltd. J. Appl. Econ., 12, 715-734 (1997)
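
A minimal Kalman-filter sketch for a univariate permanent-transitory (local level) model, the simplest version of the components structure described above. The variances, sample length, and simulated series are assumptions; the bivariate spot-forward system estimated in the paper is not reproduced.

    import numpy as np

    rng = np.random.default_rng(6)

    # Simulate y_t = p_t + e_t with p_t = p_{t-1} + w_t (permanent + transitory)
    T, q_var, r_var = 300, 0.05, 0.5
    p = np.cumsum(np.sqrt(q_var) * rng.normal(size=T))
    y = p + np.sqrt(r_var) * rng.normal(size=T)

    # Kalman filter for the local level model
    a, P = 0.0, 10.0                        # vague initial state mean and variance
    filtered = np.empty(T)
    for t in range(T):
        a_pred, P_pred = a, P + q_var       # prediction step (state is a random walk)
        F = P_pred + r_var                  # innovation variance
        K = P_pred / F                      # Kalman gain
        a = a_pred + K * (y[t] - a_pred)    # update step
        P = (1 - K) * P_pred
        filtered[t] = a

    print("filtered permanent component at T:", filtered[-1], "true value:", p[-1])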

Journal ArticleDOI
TL;DR: In this paper, the usefulness of financial spreads as indicators of future inflation and output growth in the countries of the European Union is examined, with a particular focus on out-of-sample forecasting performance.
Abstract: This paper seeks to address the policy issue of the usefulness of financial spreads as indicators of future inflation and output growth in the countries of the European Union, placing a particular focus on out-of-sample forecasting performance. Such analysis is of considerable relevance to monetary authorities, given the breakdown of the money/income relation in a number of countries and the increased emphasis of domestic monetary policy on control of inflation following the broadening of the ERM bands. The results confirm that for some countries, financial spread variables do contain some information about future output growth and inflation, with the yield curve and the reverse yield gap performing best. However, the relatively poor out-of-sample forecasting performance and/or parameter instability suggests the need for caution in using spread variables for forecasting in EU countries. Only a small number of spreads contain information, and improve forecasting in a manner which is stable over time. © 1997 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, a semi-parametric estimation method for hurdle (two-part) count regression models is developed for the analysis of overdispersed individual level data characterized by a large proportion of non-users, and highly skewed distribution of counts for users.
Abstract: This paper develops a semi-parametric estimation method for hurdle (two-part) count regression models. The approach in each stage is based on Laguerre series expansion for the unknown density of the unobserved heterogeneity. The semi-parametric hurdle model nests Poisson and negative binomial hurdle models, which have been used in recent applied literature. The empirical part of the paper evaluates the impact of managed care programmes for Medicaid eligibles on utilization of health-care services using a key utilization variable, the number of doctor and health centre visits. Health status measures and age seem to be more important in determining health-care utilization than other socio-economic and enrollment variables. The semi-parametric approach is particularly useful for the analysis of overdispersed individual level data characterized by a large proportion of non-users, and highly skewed distribution of counts for users. © 1997 John Wiley & Sons, Ltd.
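
As a baseline for the parametric models nested by the semi-parametric approach, the sketch below estimates a Poisson hurdle model by maximum likelihood on simulated data: a logit for any use versus no use, and a zero-truncated Poisson for the positive counts. The covariate, parameter values, and sample are illustrative assumptions, not the Medicaid application.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit, gammaln

    rng = np.random.default_rng(7)
    n = 2000
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])

    # Simulate the hurdle: a logit decides zero vs positive, a truncated Poisson the size
    p_pos = expit(-0.3 + 0.8 * x)
    lam = np.exp(0.2 + 0.5 * x)
    y = np.zeros(n, dtype=int)
    for i in np.where(rng.random(n) < p_pos)[0]:
        k = rng.poisson(lam[i])
        while k == 0:                       # rejection draw from the truncated Poisson
            k = rng.poisson(lam[i])
        y[i] = k
    d = (y > 0)

    def nll_logit(b):                       # first part: any use vs no use
        eta = X @ b
        return -(d * eta - np.log1p(np.exp(eta))).sum()

    def nll_trunc_poisson(b):               # second part: zero-truncated Poisson
        mu = np.exp(X[d] @ b)
        yy = y[d]
        return -(yy * np.log(mu) - mu - gammaln(yy + 1) - np.log(-np.expm1(-mu))).sum()

    print("hurdle (logit) coefficients:   ", minimize(nll_logit, np.zeros(2)).x)
    print("truncated-Poisson coefficients:", minimize(nll_trunc_poisson, np.zeros(2)).x)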

Journal ArticleDOI
TL;DR: This article developed two conditionally heteroscedastic models which allow an asymmetric reaction of the conditional volatility to the arrival of news, induced by both the sign of past shocks and the size of past unexpected volatility.
Abstract: SUMMARY This paper develops two conditionally heteroscedastic models which allow an asymmetric reaction of the conditional volatility to the arrival of news. Such a reaction is induced by both the sign of past shocks and the size of past unexpected volatility. The proposed models are shown to converge in distribution to
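
As an illustration of sign-driven asymmetry in conditional volatility, the sketch below simulates a GJR-type recursion in which negative past shocks receive extra weight. This is an assumed stand-in specification, not necessarily either of the two models proposed in the paper, and the parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(8)

    # GJR-type conditional variance: negative past shocks get extra weight gamma
    omega, alpha, gamma, beta = 0.05, 0.05, 0.10, 0.85
    T = 1000
    eps, h = np.empty(T), np.empty(T)
    h[0] = omega / (1 - alpha - gamma / 2 - beta)        # unconditional variance
    eps[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, T):
        h[t] = (omega + alpha * eps[t - 1]**2
                + gamma * eps[t - 1]**2 * (eps[t - 1] < 0)   # asymmetric news term
                + beta * h[t - 1])
        eps[t] = np.sqrt(h[t]) * rng.standard_normal()

    kurt = ((eps - eps.mean())**4).mean() / eps.var()**2
    print("sample kurtosis of simulated returns:", round(kurt, 2))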

Journal ArticleDOI
TL;DR: A measure of predictability is proposed based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast; it allows for general loss functions, univariate or multivariate information sets, and stationary or nonstationary data.
Abstract: SUMMARY We propose a measure of predictability based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several U.S. macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several nonparametric extensions of our approach.
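
Under squared-error loss the measure has a closed form for simple processes. The sketch below evaluates it for an AR(1), whose j-step forecast-error variance is sigma^2 (1 - phi^(2j)) / (1 - phi^2); the normalization as one minus the short-to-long loss ratio, the horizons, and the parameter values are illustrative assumptions.

    phi, sigma2 = 0.9, 1.0

    def mse(j):
        # j-step-ahead forecast-error variance of an AR(1)
        return sigma2 * (1 - phi**(2 * j)) / (1 - phi**2)

    # Predictability at horizon j relative to a long benchmark horizon
    long_horizon = 40
    for j in (1, 4, 8, 16):
        print(f"horizon {j:2d}: P = {1 - mse(j) / mse(long_horizon):.3f}")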

Journal ArticleDOI
TL;DR: The theoretical model of Gaertner (1974) and Pollak (1976) for the interdependence of preferences in the linear expenditure system is estimated for a cross-section of households.
Abstract: The theoretical model of Gaertner (1974) and Pollak (1976) for the interdependence of preferences in the Linear Expenditure System is estimated for a cross-section of households. The interdependence of consumption of different households has implications for the stochastic structure of the model and for the identifiability of its parameters. Both aspects are dealt with. The empirical results indicate a significant role played by the interdependence of preferences. One of its implications is that predictions of the effects of changes in a household's exogenous variables differ according to whether the exogenous variable only changes for this household or for all households jointly. © 1997 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, a new class of parametric regression models for both under- and over-dispersed count data is proposed, based on squared polynomial expansions around a Poisson baseline density.
Abstract: SUMMARY A new class of parametric regression models for both under- and overdispersed count data is proposed. These models are based on squared polynomial expansions around a Poisson baseline density. The approach is similar to that for continuous data using squared Hermite polynomials proposed by Gallant and Nychka and applied to financial data by, among others, Gallant and Tauchen. The count models are applied to underdispersed data on the number of takeover bids received by targeted firms, and to overdispersed data on the number of visits to health practitioners. The models appear to be particularly useful for underdispersed count data. © 1997 by John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, a model of cointegration where long-run parameters are subject to switching between several different cointegrating regimes is examined, where shifts are allowed to be governed by the outcome of an unobserved Markov chain with unknown transition probabilities.
Abstract: In this paper we examine a model of cointegration where long-run parameters are subject to switching between several different cointegrating regimes. These shifts are allowed to be governed by the outcome of an unobserved Markov chain with unknown transition probabilities. We illustrate this approach using Japanese data on consumption and disposable income, and find that the data favour a Markov-switching long-run relationship over a standard temporally stable formulation. © 1997 by John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the authors investigate factors determining the demand for hospitalization in Germany, cast doubt on the role of social insurance in the demand for hospital trips, and find important differences in the hospitalization behaviour of men and women.
Abstract: The dramatically rising health expenditures have become a matter of prime concern. Using a rich panel dataset this paper contributes to this debate by investigating factors determining the demand for hospitalization in Germany. While most previous studies have found a significant impact of social insurance on the demand for hospital trips, the empirical results presented here cast doubts on the role of those economic incentives. There are also important differences in the hospitalization behaviour of men and women, and between the full sample and those with chronic conditions, which have been neglected by the literature. © 1997 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: It is shown that the h-block cross-validation function for least-squares based estimators can be expressed in a form which enormously reduces the amount of calculation required.
Abstract: Cross-validation is a method used to estimate the expected prediction error of a model. Such estimates may be of interest in themselves, but their use for model selection is more common. Unfortunately, cross-validation is viewed as being computationally expensive in many situations. In this paper it is shown that the h-block cross-validation function for least-squares based estimators can be expressed in a form which can enormously reduce the amount of calculation required. The standard approach is of O(T²) where T denotes the sample size, while the proposed approach is of O(T) and yields identical numerical results. The proposed approach has widespread potential application ranging from the estimation of expected prediction error to least squares-based model specification to the selection of the series order for non-parametric series estimation. The technique is valid for general stationary observations. Simulation results and applications are considered. © 1997 by John Wiley & Sons, Ltd.
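
For reference, the direct O(T²) form of h-block cross-validation for a least-squares autoregression is sketched below: each observation is scored after deleting it together with h neighbours on each side and refitting. The fast O(T) expression derived in the paper is not reproduced, and the AR(1) design and block size are assumptions.

    import numpy as np

    rng = np.random.default_rng(9)

    # Simulate an AR(1) series and set up the least-squares design
    T, h = 200, 5
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.6 * y[t - 1] + rng.normal()
    X = np.column_stack([np.ones(T - 1), y[:-1]])
    Y = y[1:]
    n = Y.size

    # Direct h-block cross-validation: O(T^2) but transparent
    cv = 0.0
    for t in range(n):
        keep = np.ones(n, dtype=bool)
        keep[max(0, t - h):t + h + 1] = False   # delete the h-block around t
        b = np.linalg.lstsq(X[keep], Y[keep], rcond=None)[0]
        cv += (Y[t] - X[t] @ b)**2
    print("h-block CV estimate of prediction error:", cv / n)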

Journal ArticleDOI
TL;DR: In this article, the authors analyse the demand for food in the USA and the Netherlands in the period 1929-88, using a differential consumer demand system, the CBS model, and compare their results with those of Tobin.
Abstract: We analyse the demand for food in the USA and the Netherlands in the period 1929-88, using a differential consumer demand system, the CBS model, and we compare our results with those of Tobin. For the USA we find an income elasticity of 0.75, both in budget-survey data and in time-series data, which is higher than those found by Tobin, and an own-price elasticity of -0.45. For the Netherlands we obtain an income elasticity of 0.35 in time-series data and 0.65 in individual budget-survey data and an own-price elasticity of about -0.20.

Journal ArticleDOI
TL;DR: The application offers three choices of waiting time: calendar time, age, and duration of residence in New Orleans. It exploits the semi-parametric features of Cox regression and estimates parallel specifications in which mortality risk is treated as an arbitrary function of one of the three alternative time measures, while the remaining two enter the hazard parametrically.
Abstract: SUMMARY Event data can often be analysed using different concepts of waiting time. Our application offers three choices: calendar-time, age, and duration of residence in New Orleans. We exploit the semi-parametric features of Cox regression and estimate parallel specifications in which mortality risk is treated as an arbitrary function of one of the three alternative time measures, while the remaining two enter the hazard parametrically. Comparisons of the parameter estimates with the corresponding estimates of the baseline hazards form the crux of a simple specification checking procedure. In our formal treatment we rely on Aalen's Multiplicative Intensity formulation and tackle complications such as left-truncation, functional form specification, and choice-based sampling. © 1997 by John Wiley & Sons, Ltd. J. appl. econom. 12: 1-25, 1997.

Journal ArticleDOI
TL;DR: In this paper, an extensive analysis of statistical demand functions for food using household survey data and aggregate time-series data on food consumption in the USA and The Netherlands was carried out.
Abstract: SUMMARY This paper reports results of an extensive analysis of statistical demand functions for food using household survey data and aggregate time-series data on food consumption in the USA and The Netherlands. Using the model put forward by Tobin (1950) for survey data, we find that socio-economic information on the composition, education, and status of households adds little to the explanation of food consumption. The income elasticity of food consumption decreases over time in the USA but increases in The Netherlands. Applying multivariate cointegration analysis to the time-series data, we find that strict price homogeneity, structural stability, and weak exogeneity of prices have to be rejected statistically at conventional significance levels, whereas weak exogeneity of food consumption cannot be rejected. The long-run income elasticity tends to decrease over time for US data and is roughly constant for Dutch data. The findings corroborate earlier findings for the survey data. The rejection of price exogeneity is consistent with Tobin’s model which treats prices as endogenous. © 1997 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the difference between means and the ratio of determinants of covariance matrices when a subset of explanatory variables is included in or excluded from a regression is derived, and the results are illustrated with an economic application.
Abstract: We argue for the adoption of a predictive approach to model specification. Specifically, we derive the difference between means and the ratio of determinants of covariance matrices when a subset of explanatory variables is included or excluded from a regression. Results for an economic application are presented as an example.

Journal ArticleDOI
TL;DR: In this paper, the authors used information-theoretic techniques to identify the optimal information set and lag order for a Vector Autoregressive (VAR) forecast of food consumption in the Netherlands.
Abstract: This paper is concerned with empirical econometric modeling of food consumption in the USA and the Netherlands. Using autoregressive distributed lag models (ADLs) selected via the Informational Complexity (ICOMP) criterion, we study the relationship between food consumption and income. Whether food consumption obeys the homogeneity postulate is tested using information criteria. Using information-theoretic techniques, we identify the optimal information set and lag order for a Vector Autoregressive (VAR) forecast of food consumption in the Netherlands. We also demonstrate how multisample cluster analysis, a combinatorial grouping of samples or data matrices, can be used to determine when the pooling of data sets is appropriate, and how ICOMP can be used in conjunction with the Genetic Algorithm (GA) to determine the optimal predictors in the celebrated seemingly unrelated regressions (SUR) model framework.

Journal ArticleDOI
TL;DR: In this paper, the authors provided time-series and cross-sectional budget survey analyses of the demand for food in the United States and the Netherlands according to the tasks set by Jan Magnus and Mary Morgan (MM).
Abstract: This paper provides time-series and cross-sectional budget survey analyses of the demand for food in the United States and the Netherlands according to the tasks set by Jan Magnus and Mary Morgan (MM). Various econometric methods, including weighted least squares (WLS), cointegration, error correction, the almost ideal demand system (AIDS), and time-varying parameter (TVP) techniques, are used and the estimated demand elasticities compared across country and over time. © 1997 John Wiley & Sons, Ltd.


Journal ArticleDOI
TL;DR: In this article, the authors investigated the business cycle properties of UK data using a VAR technique and formulated a Real Business Cycle (RBC) model, which includes both permanent and transitory shocks to technology.
Abstract: In this paper the business cycle properties of UK data are investigated using a VAR technique. A Real Business Cycle (RBC) model is formulated. The model includes both permanent and transitory shocks to technology. The business cycle properties of the data and the model are investigated by deriving the expected changes over various forecast horizons from a VAR model. It is found, contrary to evidence in Rotemberg and Woodford (1996), that the model can account for many features of the data and that temporary shocks are pertinent in order to explain the business cycle moments. The main difference between theory and data is present in hours worked. © 1997 by John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: RATS is now user-friendly, well documented, and does much more than time series; however, like many econometrics packages, it makes insufficient effort to assure the user that its results are accurate.
Abstract: I have used RATS (Regression Analysis of Time Series; Doan, 1994) as my primary econometrics package for over ten years. In the early days it was extremely frustrating; it was user-hostile, easily accessible only to experienced programmers, the manual was often more confusing than helpful, and it was almost exclusively oriented toward time-series analysis. I persisted nonetheless because it was (and still is, to my knowledge) the only econometrics package which supports frequency domain analysis in more than a purely automatic fashion. The current version bears little resemblance to its predecessors: RATS is now user-friendly, well documented, and does much more than time series. However, RATS, like many econometrics packages, makes insufficient effort to assure the user that its results are accurate. Therefore this review will focus on the issue of whether RATS does produce accurate results. Typical software reviews, unfortunately, tend to focus on the transparent, i.e. the user interface, to the complete exclusion of the latent, i.e. what the computer is actually doing, crunching numbers. This is rather surprising, since we should first inquire whether the program gives an accurate answer, and only then worry about how easy it is to get that answer. Unfortunately, almost all reviews of econometric software completely omit any reference to numerical accuracy, despite the existence of entire collections of benchmarks such as those proposed by Lachenbruck (1983) and Elliott, Reisch, and Campbell (1989). I surveyed three journals which regularly publish software reviews (International Journal of Forecasting, Economic Journal, and Journal of Applied Econometrics) for the years 1990-95. Of more than seventy reviews, only three mentioned numerical accuracy, and only one (Veall, 1991) actually employed a benchmark regression (he used the Lachenbruck tests). This general inattention to numerical accuracy may convey the impression that there are no such problems, when nothing could be farther from the truth: the statistical/econometric software has not been written which does not have numerical deficiencies.

Journal ArticleDOI
TL;DR: This re-analysis of Tobin's (1950) study makes three points: graphs are a powerful device for discovery and for communication, and can reveal much of the information in the data, and squeezing out the more subtle multivariate messages requires some solution to the usual overparameterization problem.
Abstract: This re-analysis of Tobin's (1950) study makes three points: 1. Graphs are a powerful device for discovery and for communication, and can reveal much of the information in the data. 2. Squeezing out the more subtle multivariate messages requires some solution to the usual overparameterization problem. Data-mining is still the treatment of choice for this crippling disease, but it is more akin to leeches than to antibiotics. A Bayesian sensitivity analysis is an alternative, but it isn't a perfect cure either. 3. Clear identification of the issues can help keep the enterprise from wandering off in technically amusing but largely irrelevant directions. © 1997 John Wiley & Sons, Ltd.