
Showing papers in "Journal of Applied Econometrics in 1996"


Journal ArticleDOI
TL;DR: In this article, the authors used response surface regressions based on simulation experiments to calculate distribution functions for some well-known unit root and cointegration test statistics, which can be used to calculate both asymptotic and finite sample critical values and P-values for any of the tests.
Abstract: SUMMARY This paper employs response surface regressions based on simulation experiments to calculate distribution functions for some well-known unit root and cointegration test statistics. The principal contributions of the paper are a set of data files that contain estimated response surface coefficients and a computer program for utilizing them. This program, which is freely available via the Internet, can easily be used to calculate both asymptotic and finite-sample critical values and P-values for any of the tests. Graphs of some of the tabulated distribution functions are provided. An empirical example deals with interest rates and inflation rates in Canada. Tests of the null hypothesis that a time-series process has a unit root have been widely used in recent years, as have tests of the null hypothesis that two or more integrated series are not cointegrated. The most commonly used unit root tests are based on the work of Dickey and Fuller (1979) and Said and Dickey (1984). These are known as Dickey-Fuller (DF) tests and Augmented Dickey-Fuller (ADF) tests, respectively. These tests have non-standard distributions, even asymptotically. The cointegration tests developed by Engle and Granger (1987) are closely related to DF and ADF tests, but they have different, non-standard distributions, which depend on the number of possibly cointegrated variables. Although the asymptotic theory of these unit root and cointegration tests is well developed, it is not at all easy for applied workers to calculate the marginal significance level, or P-value, associated with a given test statistic. Until a few years ago (MacKinnon, 1991), accurate critical values for cointegration tests were not available at all. In a recent paper (MacKinnon, 1994), I used simulation methods to estimate the asymptotic distributions of a large number of unit root and cointegration tests. I then obtained reasonably simple approximating equations that may be used to obtain approximate asymptotic P-values. In the present paper, I extend the results to allow for up to 12 variables, instead of six, and I correct two deficiencies of the earlier work. The first deficiency is that the approximating equations are considerably less accurate than the underlying estimated asymptotic distributions. The second deficiency is that, even though the simulation experiments provided information about the finite-sample distributions of the test statistics, the approximating equations were obtained only for the asymptotic case. The key to overcoming these two deficiencies is to use tables of response surface coefficients, from which estimated quantiles for any sample size may be calculated, instead of equations to

2,969 citations
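
As a rough illustration of how response surface coefficients of this kind are used, the sketch below evaluates the quantile approximation q(T) = b_inf + b1/T + b2/T^2 at a given sample size. The coefficient values are invented placeholders, not the estimates distributed with the paper's program.

```python
# A minimal sketch of turning response-surface coefficients into critical values.
# The functional form q(T) = b_inf + b1/T + b2/T^2 follows the paper; the
# numbers below are made-up placeholders, NOT MacKinnon's published estimates.

def critical_value(b_inf, b1, b2, T=None):
    """Return the asymptotic (T is None) or finite-sample critical value."""
    if T is None:
        return b_inf
    return b_inf + b1 / T + b2 / T ** 2

b_inf, b1, b2 = -2.86, -2.9, -4.0             # hypothetical 5% coefficients (placeholders)
print(critical_value(b_inf, b1, b2))          # asymptotic 5% critical value
print(critical_value(b_inf, b1, b2, T=100))   # finite-sample value for T = 100
```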


Journal ArticleDOI
TL;DR: In this paper, the authors develop attractive functional forms and simple quasi-likelihood estimation methods for regression models with a fractional dependent variable, and apply these methods to a data set of employee participation rates in 401(k) pension plans.
Abstract: SUMMARY We develop attractive functional forms and simple quasi-likelihood estimation methods for regression models with a fractional dependent variable. Compared with log-odds type procedures, there is no difficulty in recovering the regression function for the fractional variable, and there is no need to use ad hoc transformations to handle data at the extreme values of zero and one. We also offer some new, robust specification tests by nesting the logit or probit function in a more general functional form. We apply these methods to a data set of employee participation rates in 401(k) pension plans. I. INTRODUCTION Fractional response variables arise naturally in many economic settings. The fraction of total weekly hours spent working, the proportion of income spent on charitable contributions, and participation rates in voluntary pension plans are just a few examples of economic variables bounded between zero and one. The bounded nature of such variables and the possibility of observing values at the boundaries raise interesting functional form and inference issues. In this paper we specify and analyse a class of functional forms with satisfying econometric properties. We also synthesize and expand on the generalized linear models (GLM) literature from statistics and the quasi-likelihood literature from econometrics to obtain robust methods for estimation and inference with fractional response variables. We apply the methods to estimate a model of employee participation rates in 401(k) pension plans. The key explanatory variable of interest is the plan's 'match rate,' the rate at which a firm matches a dollar of employee contributions. The empirical work extends that of Papke (1995), who studied this problem using linear spline methods. Spline methods are flexible, but they do not ensure that predicted values lie in the unit interval. To illustrate the methodological issues that arise with fractional dependent variables, suppose that a variable y, 0 ≤ y ≤ 1, is to be explained by a 1 × K vector of explanatory variables x = (x1, x2, ..., xK), with the convention that x1 = 1. The population model

2,933 citations
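
A minimal sketch of the quasi-likelihood idea described above, assuming a logistic mean function E(y|x) = G(xb) and simulated placeholder data rather than the 401(k) sample:

```python
# Bernoulli quasi-likelihood for a fractional response: model E(y|x) = G(x'b)
# with G the logistic cdf and maximize the Bernoulli quasi-log-likelihood,
# which remains valid when y takes the boundary values 0 and 1.
# Data and variable names are simulated placeholders, not the paper's data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # constant + one regressor
y = rng.beta(2, 2, size=n)                               # fractional outcome in [0, 1]

def neg_qll(b):
    g = 1.0 / (1.0 + np.exp(-X @ b))                      # logistic mean function
    return -np.sum(y * np.log(g) + (1 - y) * np.log(1 - g))

res = minimize(neg_qll, x0=np.zeros(X.shape[1]), method="BFGS")
print(res.x)   # quasi-ML estimates of b
```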


Journal ArticleDOI
Bruce E. Hansen
TL;DR: In this article, a theory of testing under non-standard conditions is developed to bound the asymptotic distribution of standardized likelihood ratio statistics, even when conventional regularity conditions (such as unidentified nuisance parameters and identically zero scores) are violated.
Abstract: SUMMARY A theory of testing under non-standard conditions is developed. By viewing the likelihood as a function of the unknown parameters, empirical process theory enables us to bound the asymptotic distribution of standardized likelihood ratio statistics, even when conventional regularity conditions (such as unidentified nuisance parameters and identically zero scores) are violated. This testing methodology is applied to the Markov switching model of GNP proposed by Hamilton (1989). The standardized likelihood ratio test is unable to reject the hypothesis of an AR(4) in favour of the Markov switching model. Instead, we find strong evidence for an alternative model. This model, like Hamilton's, is characterized by parameters which switch between states, but the states arrive independently over time, rather than following an unrestricted Markov process. The primary difference, however, is that the second autoregressive parameter, in addition to the intercept, switches between states.

863 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the joint time series behavior of monthly stock returns and growth in industrial production and found that stock returns are well characterized by year-long episodes of high volatility, separated by longer quiet periods.
Abstract: SUMMARY This paper investigates the joint time series behavior of monthly stock returns and growth in industrial production. We find that stock returns are well characterized by year-long episodes of high volatility, separated by longer quiet periods. Real output growth, on the other hand, is subject to abrupt changes in the mean associated with economic recessions. We study a bivariate model in which these two changes are driven by related unobserved variables, and conclude that economic recessions are the primary factor that drives fluctuations in the volatility of stock returns. This framework proves useful both for forecasting stock volatility and for identifying and forecasting economic turning points.

680 citations


Journal ArticleDOI
TL;DR: In this article, the authors apply long-memory processes to describe inflation for 10 countries, and find strong evidence of long memory with mean reverting behaviour for all countries except Japan, which appears stationary.
Abstract: This paper considers the application of long-memory processes to describing inflation for 10 countries. We implement a new procedure to obtain approximate maximum likelihood estimates of an ARFIMA-GARCH process, which is fractionally integrated I(d) with a superimposed stationary ARMA component in its conditional mean. Additionally, this long-memory process is allowed to have GARCH-type conditional heteroscedasticity. On analysing monthly post-World War II CPI inflation for 10 different countries, we find strong evidence of long memory with mean-reverting behaviour for all countries except Japan, which appears stationary. For three high-inflation economies there is evidence that the mean and volatility of inflation interact in a way that is consistent with the Friedman hypothesis. Copyright 1996 by John Wiley & Sons, Ltd.

623 citations
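
To make the fractional integration component concrete, the sketch below applies the fractional difference filter (1 − L)^d using its standard weight recursion; the value of d and the series are placeholders, not estimates from the paper.

```python
# Fractional differencing: (1 - L)^d has filter weights pi_0 = 1 and
# pi_k = pi_{k-1} * (k - 1 - d) / k, so an I(d) series can be (approximately)
# filtered to short memory.  Series and d are illustrative placeholders.
import numpy as np

def frac_diff(x, d):
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k          # recursion for the filter weights
    # y_t = sum_k w_k * x_{t-k}, truncated at the start of the sample
    return np.array([np.dot(w[: t + 1], x[t::-1]) for t in range(n)])

x = np.cumsum(np.random.default_rng(1).normal(size=200))   # a random-walk-like series
print(frac_diff(x, d=0.4)[:5])
```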


Journal ArticleDOI
TL;DR: In this paper, the authors explore the inconsistency of common scale estimators when output is proxied by deflated sales based on a common output deflator across firms, and show that the problem reveals itself as a downward bias in the scale estimates obtained from production function regressions under a variety of assumptions about the pattern of technology, demand and factor price shocks.
Abstract: SUMMARY This paper explores the inconsistency of common scale estimators when output is proxied by deflated sales, based on a common output deflator across firms. The problem arises when firms operate in an imperfectly competitive environment and prices differ between them. In particular, we show that this problem reveals itself as a downward bias in the scale estimates obtained from production function regressions, under a variety of assumptions about the pattern of technology, demand and factor price shocks. The result also holds for scale estimates obtained from cost functions. The analysis is carried one step further by adding a model of product demand. Within this augmented model we examine the probability limit of the scale estimate obtained from an ordinary production function regression. This analysis reveals that the OLS estimate will be biased towards a value below one, and how this bias is affected by the magnitude of the parameters and the amount of variation in the various shocks. We have included an empirical section which illustrates the issues. The empirical analysis presents a tentative approach to solve the problem discussed in the theoretical part of this paper.

441 citations


Journal ArticleDOI
TL;DR: In this paper, the authors note that any parametric hedonic price model, including the commonly recommended Box-Cox model, involves implicit restrictions that can be reduced by using a semiparametric model, which is shown to provide more accurate mean predictions than the benchmark parametric model.
Abstract: SUMMARY Previous work on the preferred specification of hedonic price models usually recommended a Box-Cox model. In this paper we note that any parametric model involves implicit restrictions and they can be reduced by using a semiparametric model. We estimate a benchmark parametric model which passes several common specification tests, before showing that a semiparametric model outperforms it significantly. In addition to estimating the model, we compare the predictions of the models by deriving the distribution of the predicted log(price) and then calculating the associated prediction intervals. Our data show that the semiparametric model provides more accurate mean predictions than the benchmark parametric model.

236 citations


Journal ArticleDOI
TL;DR: In this paper, Tong's threshold model is extended to the double-threshold ARCH (DTARCH) model, which can handle the situation where both the conditional mean and the conditional variance specifications are piecewise linear given previous information.
Abstract: Tong's threshold models have been found useful in modelling nonlinearities in the conditional mean of a time series. The threshold model is extended to the so-called double-threshold ARCH (DTARCH) model, which can handle the situation where both the conditional mean and the conditional variance specifications are piecewise linear given previous information. Potential applications of such models include financial data with different (asymmetric) behaviour in a rising versus a falling market and business cycle modelling. Model identification, estimation and diagnostic checking techniques are developed. Maximum likelihood estimation can be achieved via an easy-to-use iteratively weighted least squares algorithm. Portmanteau-type statistics are also derived for checking model adequacy. An illustrative example demonstrates that asymmetric behaviour in the mean and the variance could be present in financial series and that the DTARCH model is capable of capturing these phenomena.

222 citations
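
A toy simulation of the kind of process the DTARCH model describes, with an AR(1) mean and an ARCH(1) variance that both switch at a threshold in the lagged level; all parameter values are illustrative placeholders, not estimates from the paper.

```python
# Two-regime DTARCH-type simulation: regime is set by whether the previous
# observation is below or above a threshold, and both the AR(1) mean and the
# ARCH(1) variance parameters switch with the regime.  Placeholder parameters.
import numpy as np

rng = np.random.default_rng(2)
n, r = 1000, 0.0                                  # sample size and threshold
phi  = {0: (0.02, 0.30), 1: (-0.01, -0.20)}       # (intercept, AR coefficient) per regime
arch = {0: (0.05, 0.20), 1: (0.10, 0.60)}         # (omega, alpha) per regime

y, e, h = np.zeros(n), np.zeros(n), np.full(n, 0.1)
for t in range(1, n):
    j = 0 if y[t - 1] <= r else 1                 # regime indicated by the lagged level
    h[t] = arch[j][0] + arch[j][1] * e[t - 1] ** 2
    e[t] = np.sqrt(h[t]) * rng.normal()
    y[t] = phi[j][0] + phi[j][1] * y[t - 1] + e[t]

print(y[:5])
```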


Journal ArticleDOI
TL;DR: In this paper, the authors show that in certain states of nature, vector autoregressions in the differences of the variables (in the spirit of Box-Jenkins time-series modelling), can outperform vector equilibrium-correction mechanisms.
Abstract: Analyses of forecasting that assume a constant, time-invariant data generating process (DGP), and so implicitly rule out structural change or regime shifts in the economy, ignore an aspect of the real world responsible for some of the more dramatic historical episodes of predictive failure. Some models may offer greater protection against unforeseen structural breaks than others, and various tricks may be employed to robustify forecasts to change. We show that in certain states of nature, vector autoregressions in the differences of the variables (in the spirit of Box-Jenkins time-series modelling), can outperform vector ‘equilibrium-correction’ mechanisms. However, appropriate intercept corrections can enhance the performance of the latter, albeit that reductions in forecast bias may only be achieved at the cost of inflated forecast error variances.

214 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an alternative approach that imposes monotonicity and concavity properties only over the set of prices where inferences will be drawn, thereby avoiding the significant loss of flexibility incurred by imposing them globally.
Abstract: SUMMARY Empirical economists using flexible functional forms often face the disturbing choice of drawing inferences from an approximation violating properties dictated by theory or imposing global restrictions that greatly restrict the flexibility of the functional form. Focusing on the cost function, this paper presents an alternative approach which imposes monotonicity and concavity properties only over the set of prices where inferences will be drawn. An application investigating elasticities for the Berndt-Wood data set using the translog, generalized Leontief, and symmetric generalized McFadden flexible functional forms illustrates the technique. Violations of theoretical properties pose one of the most difficult challenges faced by empirical economists. The introduction of flexible functional forms provided specifications capable of approximating many technologies and of violating monotonicity and concavity conditions implied by theory. Imposing these restrictions at all non-negative prices eliminates this problem, but with a significant loss in flexibility. This paper assesses the benefits of imposing monotonicity and concavity on cost functions over a range of prices through appropriate choice of prior distribution. The evolution of flexible functional forms approximating cost functions over the last twenty years supplies a clear indication of the importance of incorporating properties from theory into empirical analysis. Diewert's (1971) introduction of the generalized Leontief and Christensen, Jorgenson, and Lau's (1973) translog supplied attractive alternatives to earlier Cobb-Douglas and CES forms. These flexible functional forms are capable of attaining arbitrary price elasticities at a point, unlike other forms which imposed strong restrictions on elasticities between inputs. Unfortunately, such advances leading to gains in flexibility also produced functional forms that often violate monotonicity and concavity properties implied by theory. Caves and Christensen (1980) and Barnett and Lee (1985) find small regular regions for the translog and generalized Leontief, and concavity violations often appear in empirical applications using these forms. For example, Diewert and Wales (1987) report violations of

200 citations


Journal ArticleDOI
Abstract: A number of topics are discussed concerning how economic forecasts can be improved in quality, or at least in presentation. These include the following: using 50% uncertainty intervals rather than 95%; noting that even though forecasters use many different techniques, they are all occasionally incorrect in the same direction; that there is a tendency to underestimate changes; that some expectations and recently available data are used insufficiently; that lagged forecast errors can help compensate for structural breaks; that series which are more forecastable could be emphasized; and that present methods of evaluating forecasts do not capture the useful properties of some methods compared to alternatives.

Journal ArticleDOI
TL;DR: A new technique for solving prediction problems under asymmetric loss using piecewise-linear approximations to the loss function is proposed, and the existence and uniqueness of the optimal predictor are established.
Abstract: We make three related contributions. First, we propose a new technique for solving prediction problems under asymmetric loss using piecewise-linear approximations to the loss function, and we establish existence and uniqueness of the optimal predictor. Second, we provide a detailed application to optimal prediction of a conditionally heteroscedastic process under asymmetric loss, the insights gained from which are broadly applicable. Finally, we incorporate our results into a general framework for recursive prediction-based model selection under the relevant loss function.
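
For the simplest piecewise-linear ("lin-lin") loss, the optimal predictor is a conditional quantile; the sketch below checks this numerically on placeholder draws, and is only a special case of the general framework developed in the paper.

```python
# Lin-lin loss: cost a*(y - f) for under-prediction and b*(f - y) for
# over-prediction.  The optimal point forecast is the a/(a+b) quantile of the
# predictive distribution; here we verify that by brute force on simulated
# draws.  The loss weights and distribution are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=1.0, scale=2.0, size=50_000)     # draws from the predictive distribution
a, b = 3.0, 1.0                                     # under-prediction three times as costly

def expected_loss(f):
    e = y - f
    return np.mean(np.where(e > 0, a * e, b * (-e)))

grid = np.linspace(-5, 7, 1201)
f_star = grid[np.argmin([expected_loss(f) for f in grid])]
print(f_star, np.quantile(y, a / (a + b)))          # the two should nearly coincide
```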

Journal ArticleDOI
Simon van Norden
TL;DR: The authors developed a new test for speculative bubbles, which is applied to data for the Japanese yen, the German mark and the Canadian dollar exchange rates from 1977 to 1991, assuming that bubbles display a particular kind of regime-switching behaviour, which was shown to imply coefficient restrictions on a simple switching-regression model of exchange rate innovations.
Abstract: This paper develops a new test for speculative bubbles, which is applied to data for the Japanese yen, the German mark and the Canadian dollar exchange rates from 1977 to 1991. The test assumes that bubbles display a particular kind of regime-switching behaviour, which is shown to imply coefficient restrictions on a simple switching-regression model of exchange rate innovations. Test results are sensitive to the specification of exchange rate fundamentals and other factors. Evidence most consistent with the bubble hypothesis is found using an overshooting model of the Canadian dollar and a PPP model of the Japanese yen.

Journal ArticleDOI
TL;DR: In this article, the authors examined the forecast performance of a cointegrated system relative to a comparable VAR that fails to recognize that the system is characterized by cointegration.
Abstract: This paper examines the forecast performance of a cointegrated system relative to the forecast performance of a comparable VAR that fails to recognize that the system is characterized by cointegration. The cointegrated system we examine is composed of three vectors: a money demand representation, a Fisher equation, and a risk premium captured by an interest rate differential. The forecasts produced by the vector error correction model (VECM) associated with this system are compared with those obtained from a corresponding differenced vector autoregression (DVAR), as well as a vector autoregression based upon the levels of the data (LVAR). Forecast evaluation is conducted using both the 'full-system' criterion proposed by Clements and Hendry (1993) and by comparing forecast performance for specific variables. Overall, our findings suggest that selective forecast performance improvement (especially at long forecast horizons) may be observed by incorporating knowledge of cointegration rank. Our general conclusion is that when the advantage of incorporating cointegration appears, it is generally at longer forecast horizons. This is consistent with the predictions of Engle and Yoo (1987). But we also find, consistent with Clements and Hendry (1995), that the relative gain in forecast performance clearly depends upon the chosen data transformation.

Journal ArticleDOI
TL;DR: Simulation, real data sets, and multi-step-ahead post-sample forecasts are used and it is found that for simulated data imposing the 'correct' unit-root constraints implied by co-integration does improve the accuracy of forecasts.
Abstract: Does co-integration help long-term forecasts? In this paper, we use simulation, real data sets, and multi-step-ahead post-sample forecasts to study this question. Based on the square root of the trace of the forecasting error-covariance matrix, we found that for simulated data imposing the 'correct' unit-root constraints implied by co-integration does improve the accuracy of forecasts. For real data sets, the answer is mixed. Imposing unit-root constraints suggested by co-integration tests produces better forecasts for some cases, but fares poorly for others. We give some explanations for the poor performance of co-integration in long-term forecasting and discuss the practical implications of the study. Finally, an adaptive forecasting procedure is found to perform well in one- to ten-step-ahead forecasts.
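
A minimal sketch of the accuracy measure mentioned above, the square root of the trace of the forecast error-covariance matrix, computed here from a placeholder matrix of multi-step forecast errors.

```python
# Square root of the trace of the forecast error-covariance matrix, computed
# from a matrix of h-step-ahead forecast errors (rows = forecast origins,
# columns = variables).  The error matrix here is random placeholder data.
import numpy as np

errors = np.random.default_rng(4).normal(size=(40, 3))   # 40 forecasts of 3 series
sigma = errors.T @ errors / errors.shape[0]              # forecast error covariance
print(np.sqrt(np.trace(sigma)))                          # the comparison criterion
```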

Journal ArticleDOI
TL;DR: In this article, an empirical reconsideration of evidence for excess co-movement of commodity prices within the framework of univariate and multivariate GARCH(1, 1) models is provided, and corresponding score and likelihood ratio tests are developed.
Abstract: This paper provides an empirical reconsideration of evidence for excess co-movement of commodity prices within the framework of univariate and multivariate GARCH(1, 1) models. Alternative formulations of zero excess co-movement are provided, and corresponding score and likelihood ratio tests are developed. Monthly time series data for two sample periods, 1960–85 and 1974–92, on up to nine commodities are used. In contrast to earlier work, only weak evidence of excess co-movement is found.

Journal ArticleDOI
TL;DR: In this article, several limited dependent variable models are used to explain the budget share that Dutch families spend on vacations; to take account of the substantial number of zero shares, two types of models are used: single-equation censored regression (Tobit-type) models and two-equation models that treat the participation and expenditure decisions separately.
Abstract: We analyse several limited dependent variable models explaining the budget share that Dutch families spend on vacations. To take account of the substantial number of zero shares, two types of models are used. The first is the single-equation censored regression model. We estimate and test several parametric and semi-parametric extensions of the Tobit model. Second, we consider two-equation models, in which the participation decision and the decision on the amount to spend are treated separately. The first decision is modelled as a binary choice model; the second as a conditional regression. We estimate and test parametric and semi-parametric specifications.
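
As a reference point for the extensions discussed above, the sketch below writes out the standard Tobit (censored regression) log-likelihood and maximizes it on simulated placeholder data; it is not the paper's semi-parametric or two-equation specifications.

```python
# Standard Tobit log-likelihood with censoring at zero:
#   positive observations contribute log phi((y - x'b)/s) - log s,
#   zero observations contribute log Phi(-x'b / s).
# Data are simulated placeholders.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_star = X @ np.array([-0.2, 1.0]) + rng.normal(size=n)
y = np.maximum(y_star, 0.0)                              # censored outcome (e.g. a budget share)

def neg_loglik(theta):
    b, log_s = theta[:-1], theta[-1]
    s = np.exp(log_s)                                    # keep sigma positive
    xb = X @ b
    pos = y > 0
    ll_pos = norm.logpdf((y[pos] - xb[pos]) / s) - np.log(s)
    ll_zero = norm.logcdf(-xb[~pos] / s)
    return -(ll_pos.sum() + ll_zero.sum())

res = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
print(res.x[:-1], np.exp(res.x[-1]))                     # slope estimates and sigma
```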

Journal ArticleDOI
TL;DR: In this article, the first and second derivatives of the log-likelihood are used for estimation purposes in the context of univariate GARCH models, and the computational benefit of using the analytic derivatives (first and second) may be substantial.
Abstract: In the context of univariate GARCH models we show how analytic first and second derivatives of the log-likelihood can be successfully employed for estimation purposes. Maximum likelihood GARCH estimation usually relies on the numerical approximation to the log-likelihood derivatives, on the grounds that an exact analytic differentiation is much too burdensome. We argue that this is not the case and that the computational benefit of using the analytic derivatives (first and second) may be substantial. Furthermore, we make a comparison of various gradient algorithms that are used for the maximization of the GARCH Gaussian likelihood. We suggest the implementation of a globally efficient computation algorithm that is obtained by suitably combining the use of the estimated information matrix with that of the exact Hessian during the maximization process. As this would appear a straightforward extension, we then study the finite sample performance of the exact Hessian and its approximations (that is, the estimated information, outer products and misspecification robust matrices) in inference.
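
A minimal sketch of the analytic-score idea for a Gaussian GARCH(1,1): the variance recursion implies a companion recursion for its derivatives, so the gradient can be accumulated in the same loop as the likelihood. Only the first derivative is shown here, and the data are simulated placeholders.

```python
# GARCH(1,1) variance recursion h_t = w + a*e_{t-1}^2 + b*h_{t-1} implies
# dh_t/d(w,a,b) = (1, e_{t-1}^2, h_{t-1})' + b * dh_{t-1}/d(w,a,b),
# so the Gaussian score can be built analytically alongside the likelihood
# instead of by numerical differencing.  Residuals are simulated placeholders.
import numpy as np

rng = np.random.default_rng(6)
e = rng.normal(size=1000)                 # residuals with zero conditional mean

def loglik_and_score(theta, e):
    w, a, b = theta
    h, dh = np.var(e), np.zeros(3)        # start both recursions at the sample variance
    ll, score = 0.0, np.zeros(3)
    for t in range(1, len(e)):
        dh = np.array([1.0, e[t - 1] ** 2, h]) + b * dh   # derivative recursion (uses h_{t-1})
        h = w + a * e[t - 1] ** 2 + b * h
        ll += -0.5 * (np.log(h) + e[t] ** 2 / h)
        score += 0.5 * (e[t] ** 2 / h - 1.0) / h * dh     # d(log-likelihood)/d(theta)
    return ll, score

print(loglik_and_score(np.array([0.05, 0.05, 0.90]), e))
```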

Journal ArticleDOI
TL;DR: The authors used panel data to estimate a two-tiered frontier model to measure the extent to which employers often pay more than necessary to hire a worker, while at the same time, employees often accept wages less than they could otherwise command.
Abstract: This paper uses panel data to estimate a two-tiered instead of a one-tiered frontier model. The innovation is to develop a two-step maximum likelihood procedure yielding consistent estimates of inefficiency, while at the same time accounting for heterogeneity. The model is applied by estimating a ‘two-tiered’ earnings function to obtain indices of worker and firm incomplete labour market wage information using panel data from the Panel Study of Income Dynamics (1969–84). The estimation preserves the traditional quadratic age-earnings profile, but measures the extent to which employers often pay more than necessary to hire a worker (incomplete employer information), while at the same time, employees often accept wages less than they could otherwise command (incomplete employee information). The results indicate that employees acquire less information than employers. © 1996 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the authors investigate productivity growth in broad-acre agriculture in Western Australia, constructing and discussing Tornqvist indices of three output groups (crops, sheep products and other) and five input groups (livestock, materials and services, labour, capital and land); indices of total output and total inputs are derived and used to form an index of total factor productivity, which is observed to grow at an average annual rate of 2.7%.
Abstract: This study investigates productivity growth in broad-acre agriculture in Western Australia. Tornqvist indices of three output groups (crops, sheep products and other) and five input groups (livestock, materials and services, labour, capital and land) are constructed and discussed. Indices of total output and total inputs are also derived and used to form an index of total factor productivity, which is observed to grow at an average annual rate of 2.7%. The input and output indices are also used in the estimation of output supply and input demand equations, derived from a flexible profit function. The Generalized McFadden functional form is used, because it is possible to impose global curvature upon it without loss of flexibility. Asymptotic chi-square tests reject the hypotheses of Hicks-neutral technical change in inputs and in outputs. Technical change is observed to be 'materials and services' saving relative to the other input groups, and also appears to favour wool and sheepmeat production relative to the other output groups.
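
A minimal sketch of the Tornqvist index calculation underlying figures like these: the log change in the index is the share-weighted average of log quantity changes, with value shares averaged across the two periods, and TFP growth is the output index growth minus the input index growth. The quantities and shares below are invented placeholders, not the Western Australian data.

```python
# Tornqvist log change between two periods:
#   d ln Q = sum_i 0.5 * (s_i,t + s_i,t-1) * ln(q_i,t / q_i,t-1)
# TFP growth = output index growth - input index growth.  Placeholder numbers.
import numpy as np

def tornqvist_log_change(q0, q1, s0, s1):
    """q0, q1: quantities in the two periods; s0, s1: value shares (each summing to one)."""
    q0, q1, s0, s1 = map(np.asarray, (q0, q1, s0, s1))
    return np.sum(0.5 * (s0 + s1) * np.log(q1 / q0))

# Three outputs and two inputs over two periods (placeholder data).
d_log_output = tornqvist_log_change([100, 50, 20], [108, 51, 22],
                                    [0.60, 0.30, 0.10], [0.62, 0.28, 0.10])
d_log_input = tornqvist_log_change([80, 40], [82, 41], [0.70, 0.30], [0.69, 0.31])
print("TFP growth:", d_log_output - d_log_input)
```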

Journal ArticleDOI
TL;DR: This paper explores some of the ways in which the forecast cost function can be used in estimating the parameters and in producing the forecasts, and defines the optimal forecast and considers three of the steps involved in forming the forecast.
Abstract: In many forecasting problems, the forecast cost function is used only in evaluating the forecasts; a second cost function is used in estimating the parameters in the model. In this paper, I explore some of the ways in which the forecast cost function can be used in estimating the parameters and, more generally, in producing the forecasts. I define the optimal forecast and note that it may depend on the entire conditional distribution of the data, which is typically unknown. I then consider three of the steps involved in forming the forecast: approximating the optimal forecast, selecting the model, and estimating any unknown parameters. The forecast cost function forms the basis of the approximation, selection, and estimation. The methods are illustrated using time series models applied to 15 US macroeconomic series and in a small Monte Carlo experiment.

Journal ArticleDOI
TL;DR: In this article, the authors compare the familiar probit model with three semiparametric estimators of binary response models in an application to labour market participation of married women in Switzerland and Germany.
Abstract: This paper compares the familiar probit model with three semiparametric estimators of binary response models in an application to labour market participation of married women. This exercise is performed using two different cross-section data sets from Switzerland and Germany. For the Swiss data the probit specification cannot be rejected and the models yield similar results. In the German case the probit model is rejected, but the coefficient estimates do not vary substantially across the models. The predicted choice probabilities, however, differ systematically for a subset of the sample. The results of this paper indicate that more work is necessary on specification tests of semiparametric models and on simulations using these models.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a robust statistical approach to test the unbiasedness hypothesis in forward exchange market efficiency studies, using robust regression methods with stochastic trend non-stationarity and general forms of serial dependence.
Abstract: SUMMARY This paper provides a robust statistical approach to testing the unbiasedness hypothesis in forward exchange market efficiency studies. The methods we use allow us to work explicitly with levels rather than differenced data. They are statistically robust to data distributions with heavy tails, and they can be applied to data sets where the frequency of observation and the futures maturity do not coincide. In addition, our methods allow for stochastic trend non-stationarity and general forms of serial dependence. The methods are applied to daily data of spot exchange rates and forward exchange rates during the 1920s, which marked the first episode of a broadly general floating exchange rate system. The tail behaviour of the data is analysed using an adaptive data-based method for estimating the tail slope of the density. The results confirm the need for the use of robust regression methods. We find cointegration between the forward rate and spot rate for the four currencies we consider (the Belgian and French francs, the Italian lira and the US dollar, all measured against the British pound), we find support for a stationary risk premium in the case of the Belgian franc, the Italian lira and the US dollar, and we find support for the simple market efficiency hypothesis (where the forward rate is an unbiased predictor of the future spot rate and there is a zero mean risk premium) in the case of the US dollar.
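
The paper uses an adaptive, data-based method to estimate the tail slope; the sketch below shows only the basic Hill estimator with a fixed number of upper order statistics, applied to heavy-tailed placeholder data, as a simple illustration of what "estimating the tail slope of the density" involves.

```python
# Basic Hill estimator of the tail index: with descending order statistics
# x_(1) >= ... >= x_(n), gamma_hat = mean(log(x_(i) / x_(k+1))) over the k
# largest observations, and the tail index is 1/gamma_hat.  The adaptive
# choice of k used in the paper is not shown; data are placeholders.
import numpy as np

rng = np.random.default_rng(7)
x = np.abs(rng.standard_t(df=3, size=5000))      # heavy-tailed placeholder data

def hill_tail_index(x, k):
    xs = np.sort(x)[::-1]                        # descending order statistics
    gamma = np.mean(np.log(xs[:k] / xs[k]))      # Hill estimate of 1/alpha
    return 1.0 / gamma

print(hill_tail_index(x, k=250))                 # roughly 3 for t(3)-distributed data
```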

Journal ArticleDOI
TL;DR: In this paper, the authors analyse transitions between pensionable jobs, non-pensionable jobs and other labour market states, using the 1988/9 UK Retirement Survey, focusing on the positive association between length of job tenure and pensionable status, allowing for the possibility that pension scheme members are less mobile than other workers.
Abstract: We analyse transitions between pensionable jobs, non-pensionable jobs, and other labour market states, using the 1988/9 UK Retirement Survey. We focus on the positive association between length of job tenure and pensionable status, allowing for the possibility that pension scheme members are less mobile than other workers because they have persistent unobserved characteristics that predispose them towards a high degree of security in both employment and retirement. We use a competing risks model with state-specific random effects. The model is estimated by simulated maximum likelihood.

Journal ArticleDOI
TL;DR: In this paper, the Fourier cost function is able to represent a broader range of technological structures than the translog cost function, and the authors present an application to the case of an incomplete panel of French farmers, where three technical issues are addressed: the missing data problem, the choice of the order of approximation and the conditions ensuring asymptotic normality of Fourier parameters estimates.
Abstract: Selecting a functional form for a cost or profit function in applied production analysis is a crucial step in assessing the characteristics of a technology. The present study highlights differences in the description of a technology which are induced by fitting a Fourier or a translog cost functions. On average, both forms provide similar information on the technology. However, estimation results and statistical tests tend to favour the Fourier specification. This is mainly because many trigonometric terms appear to be significant in our application. Accordingly, our results show that the Fourier cost function is able to represent a broader range of technological structures than the translog. The paper presents an application to the case of an incomplete panel of French farmers. In the process, three technical issues are addressed: the missing data problem, the choice of the order of approximation and the conditions ensuring asymptotic normality of Fourier parameters estimates.
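
A highly simplified illustration of the comparison above: a translog-style regression in log input prices versus the same regression augmented with trigonometric (Fourier) terms. The full Fourier flexible form uses multi-index combinations of the variables; everything below, including the data and the rescaling, is a placeholder.

```python
# Translog-style regressors vs. the same regressors augmented with sin/cos
# terms of the (rescaled) log prices -- a crude stand-in for the Fourier
# flexible form.  Two inputs, simulated placeholder data.
import numpy as np

rng = np.random.default_rng(9)
n = 400
logp = rng.uniform(0.5, 1.5, size=(n, 2))                     # log input prices (placeholder)
logc = 1 + logp @ np.array([0.4, 0.6]) + 0.1 * np.sin(3 * logp[:, 0]) \
       + rng.normal(scale=0.05, size=n)                       # log cost with a non-translog wiggle

def design_translog(logp):
    p1, p2 = logp.T
    return np.column_stack([np.ones(len(p1)), p1, p2, 0.5 * p1**2, 0.5 * p2**2, p1 * p2])

def design_fourier(logp, scale=1.0):
    z = scale * logp                                           # rescaled log prices
    trig = np.column_stack([np.sin(z), np.cos(z), np.sin(2 * z), np.cos(2 * z)])
    return np.column_stack([design_translog(logp), trig])

for X in (design_translog(logp), design_fourier(logp)):
    beta, res, *_ = np.linalg.lstsq(X, logc, rcond=None)
    print(X.shape[1], "regressors, SSR =", float(res[0]) if res.size else float("nan"))
```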

Journal ArticleDOI
TL;DR: In this paper, the authors argue that an optimal contract model is more appropriate for understanding the time series behaviour of real wages and consumption, and they show that the long-run intertemporal elasticity of substitution (IES) for non-durable consumption can be estimated from a cointegrating regression.
Abstract: SUMMARY This paper re-examines whether the time series properties of aggregate consumption, real wages, and asset returns can be explained by a neoclassical model. Previous empirical rejections of the model have suggested that the optimal labour contract model might be appropriate for understanding the time series properties of the real wage rate and consumption. We show that an optimal contract model restricts the long-run relation of the real wage rate and consumption. We exploit this long-run restriction (cointegration restriction) for estimating and testing the model, using Ogaki and Park's (1989) cointegration approach. This long-run restriction involves a parameter that we call the long-run intertemporal elasticity of substitution (IES) for non-durable consumption but does not involve the IES for leisure. This allows us to estimate the long-run IES for non-durable consumption from a cointegrating regression. Tests for the null of cointegration do not reject our model. As a further analysis, our estimates of the long-run IES for non-durable consumption are used to estimate the discount factor and a coefficient of time-nonseparability using Hansen's (1982) Generalized Method of Moments. We form a specification test for our model a la Hausman (1978) from these two steps. This specification test does not reject our model. This paper re-examines whether the time series properties of aggregate consumption, real wages, and asset returns are consistent with a simple neoclassical representative agent economy. Previous empirical explorations of this issue have rejected the neoclassical model in large part because the marginal rate of substitution between consumption and leisure does not equal the real wage as is implied by the first-order conditions of the model. In this paper we argue that an optimal labour contracting model is more appropriate for understanding the time series behaviour of real wages and consumption. We show that a version of the optimal contract model restricts the long-run relation between real wages and consumption. We exploit this long-run restriction (cointegration restriction) to estimate preference parameters and test the model. First, we employ the cointegration approach suggested by Ogaki and Park (1989) to estimate the long-run intertemporal elasticity of substitution for non-durable consumption from a cointegrating regression. We test the model by testing for the cointegration restriction.

Journal ArticleDOI
TL;DR: In this paper, the authors apply recently developed tests applicable in this situation to both US and Canadian data, and find substantial evidence of a threshold effect, particularly in US data, but the estimated threshold values are high.
Abstract: The possibility that the effect of monetary policy on output may depend on whether credit conditions are tight or loose can be expressed as a non-linearity in the relation between real money supply and output, of which a simple case is a threshold effect. In this case, consistent with the credit-rationing model of Blinder (1987), the monetary variable has a more powerful effect if it is below some threshold than when it is above. Testing for the importance of this threshold is straightforward if the appropriate threshold value is known a priori, but where the value is not known and must be chosen based on the sample, the testing problem becomes more difficult. We apply recently-developed tests applicable in this situation to both US and Canadian data, and find substantial evidence of a threshold effect, particularly in US data. However, the estimated threshold values are high. Copyright 1996 by John Wiley & Sons, Ltd.
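
A minimal sketch of the threshold-search problem described above: compute an F-type statistic for each candidate threshold on a trimmed grid and take the supremum. Because the threshold is not identified under the null, the sup statistic has a non-standard distribution, which is why the simulated/bootstrap tests referred to in the paper are needed; the data and variable names below are placeholders.

```python
# Grid search for an unknown threshold: for each candidate threshold r, allow
# the regression coefficients to differ across the two regimes, compute an
# F-type statistic against the no-threshold model, and record the supremum.
import numpy as np

rng = np.random.default_rng(8)
n = 300
m = rng.normal(size=n)                                   # monetary/threshold variable (placeholder)
y = 0.5 * m * (m < 0.0) + 0.1 * m * (m >= 0.0) + rng.normal(scale=0.5, size=n)

def ssr(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X0 = np.column_stack([np.ones(n), m])
ssr0 = ssr(X0, y)                                        # restricted model: no threshold

stats = []
for r in np.quantile(m, np.linspace(0.15, 0.85, 50)):    # trimmed grid of candidate thresholds
    below = (m <= r).astype(float)
    X1 = np.column_stack([X0 * below[:, None], X0 * (1 - below)[:, None]])
    q = X1.shape[1] - X0.shape[1]                        # number of restrictions
    ssr1 = ssr(X1, y)
    stats.append(((ssr0 - ssr1) / q) / (ssr1 / (n - X1.shape[1])))

print("sup-F statistic:", max(stats))                    # critical values must be simulated
```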

Journal ArticleDOI
TL;DR: Using a fad model with Markov-switching heteroscedasticity in both the fundamental and fad components, the authors examined the possibility that the 1987 stock market crash was an example of a short-lived fad.
Abstract: Using a fad model with Markov-switching heteroscedasticity in both the fundamental and fad components (UC-MS model), this paper examines the possibility that the 1987 stock market crash was an example of a short-lived fad. While we usually think of fads as speculative bubbles, what the UC-MS model seems to be picking up is unwarranted pessimism which the market exhibited with the OPEC oil shock and the '87 crash. Furthermore, the conditional variance implied by the UC-MS model captures most of the dynamics in the GARCH specification of stock return volatility. Yet unlike the GARCH measure of volatility, the UC-MS measure of volatility is consistent with volatility reverting to its normal level very quickly after the crash.

Journal ArticleDOI
TL;DR: In this article, Monte Carlo experiments show that some tests of the cointegration vectors do not work well on series generated by an equilibrium business cycle model, and co-integration restrictions add little to forecasting; structural VAR models based on weak long-run restrictions seem to work well.
Abstract: SUMMARY Cointegration analyses of macroeconomic time series are often not based on fully specified theoretical models. We use a theoretical model to scrutinize common procedures in applied cointegration analysis. Monte Carlo experiments show that (1) some tests of the cointegration vectors do not work well on series generated by an equilibrium business cycle model; (2) cointegration restrictions add little to forecasting; (3) structural VAR models based on weak long-run restrictions seem to work well. The main disadvantage of cointegration analysis without strong links to economic theory is that it is hard to estimate and interpret the cointegration vectors.

Journal ArticleDOI
TL;DR: In this article, the stationary distribution is derived from a continuous-time error correction model and estimated by MLE methods; the derived distribution exhibits a wide variety of distributional shapes, including multimodality.
Abstract: This paper provides a framework for building and estimating non-linear real exchange rate models. The approach derives the stationary distribution from a continuous time error correction model and estimates this by MLE methods. The derived distribution exhibits a wide variety of distributional shapes including multimodality. The main result is that swings in the US/UK rate over the period 1973:3 to 1990:5 can be attributed to the distribution becoming bimodal with the rate switching between equilibria. By capturing these changes in the distribution, the non-linear model yields improvements over the random walk, the speculative efficiency model, and Hamilton's stochastic segmented trends model.