
Showing papers in "Econometrica in 1996"


ReportDOI
TL;DR: In this paper, a modified version of the Dickey-Fuller t test is proposed that has substantially improved power when an unknown mean or trend is present, and a Monte Carlo experiment indicates that the modified test works well in small samples.
Abstract: The asymptotic power envelope is derived for point-optimal tests of a unit root in the autoregressive representation of a Gaussian time series under various trend specifications. We propose a family of tests whose asymptotic power functions are tangent to the power envelope at one point and are never far below the envelope. When the series has no deterministic component, some previously proposed tests are shown to be asymptotically equivalent to members of this family. When the series has an unknown mean or linear trend, commonly used tests are found to be dominated by members of the family of point-optimal invariant tests. We propose a modified version of the Dickey-Fuller t test which has substantially improved power when an unknown mean or trend is present. A Monte Carlo experiment indicates that the modified test works well in small samples.

4,284 citations
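The modified Dickey-Fuller test described above is usually implemented by GLS-detrending the series with a local-to-unity parameter and then running a standard augmented Dickey-Fuller regression on the detrended data. Below is a minimal Python sketch of that idea for the linear-trend case; the noncentrality value c̄ = -13.5, the lag length, and the function name are illustrative assumptions, and the resulting statistic must be compared with the paper's critical values rather than the usual Dickey-Fuller tables.

```python
import numpy as np

def dfgls_stat(y, lags=4, cbar=-13.5):
    """GLS-detrended Dickey-Fuller t statistic (linear-trend case), illustrative sketch."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    rho = 1.0 + cbar / T

    # Quasi-difference the series and the deterministics (constant and trend).
    z = np.column_stack([np.ones(T), np.arange(1, T + 1)])
    yq = np.concatenate([[y[0]], y[1:] - rho * y[:-1]])
    zq = np.vstack([z[0], z[1:] - rho * z[:-1]])

    # Detrend the original series with the quasi-differenced regression coefficients.
    beta, *_ = np.linalg.lstsq(zq, yq, rcond=None)
    yd = y - z @ beta

    # Standard ADF regression (no deterministics) on the detrended series.
    dy = np.diff(yd)
    X = [yd[lags:-1]]
    for j in range(1, lags + 1):
        X.append(dy[lags - j:-j])
    X = np.column_stack(X)
    Y = dy[lags:]
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ b
    s2 = resid @ resid / (len(Y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return b[0] / se   # compare with DF-GLS critical values, not the usual DF tables
```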


ReportDOI
TL;DR: In this paper, the empirical focus is on estimating the parameters of a production function for the telecommunications equipment industry and then using those estimates to analyze the evolution of plant-level productivity.
Abstract: Technological change and deregulation have caused a major restructuring of the telecommunications equipment industry over the last two decades. Our empirical focus is on estimating the parameters of a production function for the equipment industry, and then using those estimates to analyze the evolution of plant-level productivity. The restructuring involved significant entry and exit and large changes in the sizes of incumbents. Firms' choices on whether to liquidate, and on input quantities should they continue, depended on their productivity. This generates a selection and a simultaneity problem when estimating production functions. Our theoretical focus is on providing an estimation algorithm which takes explicit account of these issues. We find that our algorithm produces markedly different and more plausible estimates of production function coefficients than do traditional estimation procedures. Using our estimates we find increases in the rate of aggregate productivity growth after deregulation. Since we have plant-level data we can introduce indices which delve deeper into how this productivity growth occurred. These indices indicate that productivity increases were primarily a result of a reallocation of capital towards more productive establishments.

3,657 citations
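The estimation algorithm referred to above controls for unobserved productivity by inverting the plant's investment decision and proceeds in stages. The Python sketch below is a heavily stripped-down, two-step illustration of that logic (a polynomial proxy in the first stage, a grid search for the capital coefficient in the second); it omits the paper's exit/selection correction, treats the data as a single ordered series rather than a panel, and all names and functional forms are assumptions.

```python
import numpy as np

def poly2(a, b):
    """Second-order polynomial basis in two variables (illustrative proxy function)."""
    return np.column_stack([np.ones_like(a), a, b, a**2, b**2, a * b])

def op_two_step(y, lab, cap, inv):
    """Stylized two-step production-function estimator in the spirit of the paper's algorithm."""
    y, lab, cap, inv = (np.asarray(v, dtype=float) for v in (y, lab, cap, inv))

    # Stage 1: y = beta_l * lab + phi(cap, inv) + e, with phi approximated by a polynomial
    # in capital and investment that proxies for unobserved productivity.
    X1 = np.column_stack([lab, poly2(cap, inv)])
    b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
    beta_l = b1[0]
    phi = X1[:, 1:] @ b1[1:]                     # phi_t = beta_k * cap_t + omega_t

    # Stage 2: for a candidate beta_k, omega_t = phi_t - beta_k * cap_t should follow a
    # first-order Markov process; pick beta_k minimizing the implied residual variance.
    def ssr(bk):
        omega = phi - bk * cap
        G = np.column_stack([np.ones(len(omega) - 1),
                             omega[:-1], omega[:-1] ** 2, omega[:-1] ** 3])
        dep = y[1:] - beta_l * lab[1:] - bk * cap[1:]
        g, *_ = np.linalg.lstsq(G, dep, rcond=None)
        u = dep - G @ g
        return u @ u

    grid = np.linspace(0.0, 1.0, 201)
    beta_k = grid[np.argmin([ssr(bk) for bk in grid])]
    return beta_l, beta_k
```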


Journal ArticleDOI
TL;DR: In this paper, a semiparametric procedure is presented to analyze the effects of institutional and labor market factors on recent changes in the U.S. distribution of wages, including de-unionization and supply and demand shocks.
Abstract: This paper presents a semiparametric procedure to analyze the effects of institutional and labor market factors on recent changes in the U.S. distribution of wages. The effects of these factors are estimated by applying kernel density methods to appropriately weighted samples. The procedure provides a visually clear representation of where in the density of wages these various factors exert the greatest impact. Using data from the Current Population Survey, we find, as in previous research, that de-unionization and supply and demand shocks were important factors in explaining the rise in wage inequality from 1979 to 1988. We find also compelling visual and quantitative evidence that the decline in the real value of the minimum wage explains a substantial proportion of this increase in wage inequality, particularly for women. We conclude that labor market institutions are as important as supply and demand considerations in explaining changes in the U.S. distribution of wages from 1979 to 1988.

2,677 citations
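The "appropriately weighted samples" in the abstract are built by reweighting one year's observations so that their attributes look like another year's, and then applying an ordinary kernel density estimator with those weights. The Python sketch below illustrates the mechanics with a logit attribute model and a Gaussian kernel; the specification, bandwidth, and variable names are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np
import statsmodels.api as sm

def dfl_counterfactual_density(w88, x88, x79, grid, h=0.05):
    """Density of 1988 log wages reweighted to the 1979 attribute distribution (sketch)."""
    grid = np.asarray(grid, dtype=float)

    # Pooled logit for P(year = 1988 | x).
    X = sm.add_constant(np.concatenate([x79, x88]))
    d = np.concatenate([np.zeros(len(x79)), np.ones(len(x88))])
    p = sm.Logit(d, X).fit(disp=0).predict(X)
    p88 = p[len(x79):]

    # Reweight 1988 observations by the odds of having 1979-type attributes.
    wgt = (1 - p88) / p88
    wgt /= wgt.sum()

    # Weighted Gaussian kernel density of 1988 log wages on the evaluation grid.
    u = (grid[:, None] - np.asarray(w88, dtype=float)[None, :]) / h
    return (wgt[None, :] * np.exp(-0.5 * u**2)).sum(axis=1) / (h * np.sqrt(2 * np.pi))
```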


Journal ArticleDOI
TL;DR: In this paper, the asymptotic distributions of standard test statistics are described as functionals of chi-square processes, and a transformation based upon a conditional probability measure is shown to yield an asymptotic distribution free of nuisance parameters; this transformation can be easily approximated via simulation.
Abstract: Many econometric testing problems involve nuisance parameters which are not identified under the null hypotheses. This paper studies the asymptotic distribution theory for such tests. The asymptotic distributions of standard test statistics are described as functionals of chi-square processes. In general, the distributions depend upon a large number of unknown parameters. We show that a transformation based upon a conditional probability measure yields an asymptotic distribution free of nuisance parameters, and we show that this transformation can be easily approximated via simulation. The theory is applied to threshold models, with special attention given to the so-called self-exciting threshold autoregressive model. Monte Carlo methods are used to assess the finite sample distributions. The tests are applied to U.S. GNP growth rates, and we find that Potter's (1995) threshold effect in this series can be possibly explained by sampling variation.

2,327 citations
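The simulation approximation mentioned in the abstract is often implemented by holding the regressors fixed and recomputing the statistic with the estimated residuals multiplied by artificial N(0,1) draws. The Python sketch below applies that idea to a toy threshold regression with a homoskedastic sup-F statistic; it is a simplified illustration of the approach, not the paper's general construction.

```python
import numpy as np

def sup_f_pvalue(y, x, q, n_sim=999, trim=0.15, seed=0):
    """Sup-F test for a threshold (in q) in the coefficient on x, with a simulated p-value."""
    rng = np.random.default_rng(seed)
    y, x, q = (np.asarray(v, dtype=float) for v in (y, x, q))
    n = len(y)
    X0 = np.column_stack([np.ones(n), x])                 # null model: no threshold
    e0 = y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]   # null residuals
    grid = np.quantile(q, np.linspace(trim, 1 - trim, 50))

    def sup_f(dep):
        s0 = dep @ dep - dep @ X0 @ np.linalg.lstsq(X0, dep, rcond=None)[0]
        best = 0.0
        for g in grid:
            X1 = np.column_stack([X0, x * (q <= g)])
            r = dep - X1 @ np.linalg.lstsq(X1, dep, rcond=None)[0]
            best = max(best, (n - X1.shape[1]) * (s0 - r @ r) / (r @ r))
        return best

    stat = sup_f(y)
    # Fixed-regressor simulation: rebuild the dependent variable as eta_t * e0_t with
    # eta ~ N(0,1) and recompute sup-F; the threshold is not identified under the null.
    sims = np.array([sup_f(rng.standard_normal(n) * e0) for _ in range(n_sim)])
    return stat, float(np.mean(sims >= stat))
```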


Journal ArticleDOI
TL;DR: The authors develop procedures for inference about the moments of smooth functions of out-of-sample predictions and prediction errors when there is a long time series of predictions and realizations, providing tools for analysis of predictive accuracy and efficiency and, more generally, of predictive ability.
Abstract: This paper develops procedures for inference about the moments of smooth functions of out-of-sample predictions and prediction errors, when there is a long time series of predictions and realizations. The aim is to provide tools for analysis of predictive accuracy and efficiency, and, more generally, of predictive ability. The paper allows for nonnested and nonlinear models, as well as for possible dependence of predictions and prediction errors on estimated regression parameters. Simulations indicate that the procedures can work well in samples of size typically available.

1,166 citations
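A basic building block of such inference is the sample mean of a function of predictions and prediction errors, studentized with a long-run (HAC) variance. The Python sketch below compares two forecast sequences through their mean squared-error differential in this way; it ignores the paper's correction for the dependence of predictions on estimated regression parameters and is only an illustrative calculation.

```python
import numpy as np

def mse_differential_test(y, f1, f2, lag=None):
    """t statistic for equal mean squared forecast error of two forecast sequences
    (illustrative; no correction for estimated regression parameters)."""
    y, f1, f2 = (np.asarray(v, dtype=float) for v in (y, f1, f2))
    d = (y - f1) ** 2 - (y - f2) ** 2                 # loss differential
    P = len(d)
    lag = int(np.floor(P ** (1 / 3))) if lag is None else lag
    u = d - d.mean()

    # Newey-West (Bartlett) long-run variance of the loss differential.
    lrv = u @ u / P
    for j in range(1, lag + 1):
        lrv += 2 * (1 - j / (lag + 1)) * (u[j:] @ u[:-j]) / P

    return d.mean() / np.sqrt(lrv / P)                # roughly N(0,1) under equal accuracy
```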


Journal ArticleDOI
TL;DR: In this article, the author studies the effects of unions on the structure of wages, using an estimation technique that explicitly accounts for misclassification errors in reported union status and for potential correlations between union status and unobserved productivity.
Abstract: This paper studies the effects of unions on the structure of wages, using an estimation technique that explicitly accounts for misclassification errors in reported union status, and potential correlations between union status and unobserved productivity. The econometric model is estimated separately for five skill groups using a large panel data set formed from the U.S. Current Population Survey. The results suggest that unions raise wages more for workers with lower levels of observed skills. In addition, the patterns of selection bias differ by skill group. Among workers with lower levels of observed skill, unionized workers are positively selected, whereas union workers are negatively selected from among those with higher levels of observed skill. Despite a large and sophisticated literature there is still substantial disagreement over the extent to which differences in the structure of wages between union and nonunion workers represent an effect of trade unions, rather than a consequence of the nonrandom selection of unionized workers. Over the past decade several alternative approaches have been developed to control for unobserved heterogeneity between union and nonunion workers. One method that has been successfully applied in other areas of applied microeconometrics is the use of longitudinal data to measure the wage gains or losses of workers who change union status. Unfortunately, longitudinal estimators are highly sensitive to measurement error: even a small fraction of misclassified union status changes can lead to significant biases if the true rate of mobility between union and nonunion jobs is low. This sensitivity led Lewis (1986) to essentially dismiss the longitudinal evidence in his landmark survey of union wage effects. In this paper I present some new evidence on the union wage effect, based on a longitudinal estimator that explicitly accounts for misclassification errors in reported union status. The estimator uses external information on union status misclassification rates, along with the reduced-form coefficients from a multivariate regression of wages on the observed sequence of union status indicators, to isolate the causal effect of unions from any selection biases introduced by a correlation between union status and the permanent component of unobserved wage heterogeneity. Recognizing that unions may raise wages more or less for

649 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that the firm will choose to exclude some low-value consumers from all markets, and a class of cases that allows explicit solution is derived by making use of a multivariate form of integration by parts.
Abstract: Typically, work on mechanism design has assumed that all private information can be captured in a single scalar variable. This paper explores one way in which this assumption can be relaxed in the context of the multiproduct nonlinear pricing problem. It is shown that the firm will choose to exclude some low value consumers from all markets. A class of cases that allow explicit solution is derived by making use of a multivariate form of "integration by parts." In such cases the optimal tariff is cost-based.

570 citations


Journal ArticleDOI
TL;DR: In this paper, a new natural restriction on utility functions, called risk vulnerability, was introduced, which is equivalent to the condition that an undesirable risk can never be made desirable by the presence of an independent, unfair risk.
Abstract: We examine in this paper a new natural restriction on utility functions, namely that adding an unfair background risk to wealth makes risk-averse individuals behave in a more risk-averse way with respect to any other independent risk. This concept is called risk vulnerability. It is equivalent to the condition that an undesirable risk can never be made desirable by the presence of an independent, unfair risk. Moreover, under risk vulnerability, adding an unfair background risk reduces the demand for risky assets. Risk vulnerability generalizes the concept of properness (individually undesirable, independent risks are always jointly undesirable) introduced by Pratt and Zeckhauser (1987). It implies that the two first derivatives of the utility function are concave transformations of the original utility function. Under decreasing absolute risk aversion, a sufficient condition for risk vulnerability is local properness, i.e. r'' ≥ r'r, where r is the Arrow-Pratt coefficient of absolute risk aversion.

545 citations
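As a concrete check of the sufficient condition quoted at the end of the abstract, the short derivation below (a worked example added here, not taken from the paper) verifies that constant-relative-risk-aversion utility exhibits decreasing absolute risk aversion and satisfies local properness r'' ≥ r'r, and is therefore risk vulnerable.

```latex
u(w) = \frac{w^{1-\gamma}}{1-\gamma}, \quad \gamma > 0,\ \gamma \neq 1,
\qquad r(w) = -\frac{u''(w)}{u'(w)} = \frac{\gamma}{w},
\qquad r'(w) = -\frac{\gamma}{w^{2}} < 0 \ \text{(DARA)},
\qquad r''(w) = \frac{2\gamma}{w^{3}}.

r''(w) - r'(w)\,r(w)
  = \frac{2\gamma}{w^{3}} + \frac{\gamma^{2}}{w^{3}}
  = \frac{\gamma(2+\gamma)}{w^{3}} \;\ge\; 0,
```

so r'' ≥ r'r holds at every wealth level and CRRA preferences are risk vulnerable.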


Journal ArticleDOI
TL;DR: In this paper, a nonparametric estimation procedure for continuous-time stochastic models is proposed; because prices of derivative securities depend crucially on the form of the instantaneous volatility of the underlying process, the volatility function is left unrestricted and estimated nonparametrically.
Abstract: We propose a nonparametric estimation procedure for continuous-time stochastic models. Because prices of derivative securities depend crucially on the form of the instantaneous volatility of the underlying process, we leave the volatility function unrestricted and estimate it nonparametrically. Only discrete data are used but the estimation procedure still does not rely on replacing the continuous-time model by some discrete approximation. Instead the drift and volatility functions are forced to match the densities of the process. We estimate the stochastic differential equation followed by the short-term interest rate and compute nonparametric prices for bonds and bond options.

539 citations
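The identification behind the procedure, namely forcing the drift and volatility to match the stationary density of the process, can be written down in a few lines. The Python sketch below assumes a linear drift estimated from discretely sampled data and recovers the diffusion function from a kernel density estimate via the stationary forward-equation relation σ²(x)π(x) = 2∫μ(u)π(u)du; the drift specification, bandwidth rule, and names are illustrative assumptions, not the paper's estimator in full.

```python
import numpy as np

def nonparam_diffusion(r, dt, grid, h=None):
    """Sketch: recover sigma^2(x) for dX = mu(X)dt + sigma(X)dW from discrete data,
    assuming a linear drift and using the stationary density of the process."""
    r = np.asarray(r, dtype=float)
    grid = np.asarray(grid, dtype=float)

    # Linear drift mu(x) = a + b*x fitted to the discretized conditional mean.
    X = np.column_stack([np.ones(len(r) - 1), r[:-1]])
    coef, *_ = np.linalg.lstsq(X, (r[1:] - r[:-1]) / dt, rcond=None)
    mu = coef[0] + coef[1] * grid

    # Gaussian kernel estimate of the stationary density pi on the grid.
    h = h if h is not None else 1.06 * r.std() * len(r) ** (-0.2)
    u = (grid[:, None] - r[None, :]) / h
    pi = np.exp(-0.5 * u**2).sum(axis=1) / (len(r) * h * np.sqrt(2 * np.pi))

    # Stationary forward equation: sigma^2(x) * pi(x) = 2 * integral up to x of mu * pi.
    integrand = mu * pi
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid))])
    return 2.0 * cum / pi        # unreliable in the far tails, where pi is tiny
```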


Journal ArticleDOI
TL;DR: In this article, conditions under which the bootstrap provides asymptotic refinements to the critical values of t tests and the test of over-identifying restrictions are given, with particular attention given to the case of dependent data.
Abstract: Monte Carlo experiments have shown that tests based on generalized-method-ofmoments estimators often have true levels that differ greatly from their nominal levels when asymptotic critical values are used. This paper gives conditions under which the bootstrap provides asymptotic refinements to the critical values of t tests and the test of overidentifying restrictions. Particular attention is given to the case of dependent data. It is shown that with such data, the bootstrap must sample blocks of data and that the formulae for the bootstrap versions of test statistics differ from the formulae that apply with the original data. The results of Monte Carlo experiments on the numerical performance of the bootstrap show that it usually reduces the errors in level that occur when critical values based on first-order asymptotic theory are used. The bootstrap also provides an indication of the accuracy of critical values obtained from first-order asymptotic theory.

519 citations
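The abstract's key point for dependent data is that the bootstrap must resample blocks of observations and recenter appropriately. The Python sketch below illustrates only the blocking idea, bootstrapping a studentized sample mean from non-overlapping blocks; it is a toy version, not the paper's GMM construction with recentered moment conditions.

```python
import numpy as np

def block_bootstrap_t(x, block_len=10, n_boot=999, seed=0):
    """Block-bootstrap distribution of a studentized mean for a dependent series (toy version)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % block_len              # drop the ragged tail for simplicity
    blocks = x[:n].reshape(-1, block_len)        # non-overlapping blocks
    xbar = x[:n].mean()

    tstats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, blocks.shape[0], blocks.shape[0])
        bm = blocks[idx].mean(axis=1)            # resampled block means
        se = bm.std(ddof=1) / np.sqrt(len(bm))   # block-based standard error of the mean
        tstats[b] = (bm.mean() - xbar) / se      # centered at the original sample mean
    return tstats                                # quantiles serve as bootstrap critical values
```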


Journal ArticleDOI
TL;DR: The authors show that adding income uncertainty to the standard optimization problem induces a concave consumption function in which, as Keynes suggested, the marginal propensity to consume out of wealth or transitory income declines with the level of wealth.
Abstract: At least since Keynes (1935), many economists have had the intuition that the marginal propensity to consume out of wealth declines as wealth increases. Nonetheless, standard perfect-certainty and certainty equivalent versions of intertemporal optimizing models of consumption imply a marginal propensity to consume that is unrelated to the level of household wealth. We show that adding income uncertainty to the standard optimization problem induces a concave consumption function in which, as Keynes suggested, the marginal propensity to consume out of wealth or transitory income declines with the level of wealth.
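The concavity result can be seen in a two-period numerical example: with risky second-period income, the optimal first-period consumption rule is concave in wealth, so the marginal propensity to consume falls as wealth rises. The Python sketch below is an illustration constructed for this summary (CRRA utility, a two-point income risk, grid search), not the paper's model.

```python
import numpy as np

def consumption_rule(wealth_grid, gamma=3.0, beta=0.95, R=1.02,
                     incomes=(0.3, 1.7), probs=(0.5, 0.5)):
    """Optimal first-period consumption in a two-period problem with risky second-period income."""
    def u(c):
        return c ** (1 - gamma) / (1 - gamma)

    rule = []
    for w in wealth_grid:
        c = np.linspace(1e-3, w - 1e-3, 2000)                    # candidate consumption levels
        ev = sum(p * u(R * (w - c) + y) for p, y in zip(probs, incomes))
        rule.append(c[np.argmax(u(c) + beta * ev)])
    return np.array(rule)

w = np.linspace(0.5, 10.0, 40)
c1 = consumption_rule(w)
mpc = np.diff(c1) / np.diff(w)      # declines as wealth rises, i.e. the consumption rule is concave
```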

Journal ArticleDOI
TL;DR: In this paper, the authors provide a proof of the consistency and asymptotic normality of the quasi-maximum likelihood estimator in GARCH(1,1) and IGARCH(1,1) models, showing that the presence of a unit root in the conditional variance does not affect the limiting distribution of the estimators.
Abstract: This paper provides a proof of the consistency and asymptotic normality of the quasi-maximum likelihood estimator in GARCH(1,1) and IGARCH(1,1) models. In contrast to the case of a unit root in the conditional mean, the presence of a unit root in the conditional variance does not affect the limiting distribution of the estimators; in both models, estimators are normally distributed. In addition, a consistent estimator of the covariance matrix is available, enabling the use of standard test statistics for inference.
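In practice the estimator is computed by maximizing the Gaussian quasi-likelihood through the variance recursion. The Python sketch below is a minimal illustrative implementation with a naive initialization and no standard errors; it shows the objective being studied, not production code.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_qmle(eps):
    """Gaussian quasi-ML for eps_t = sigma_t z_t with
    sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2."""
    eps = np.asarray(eps, dtype=float)

    def neg_qll(params):
        omega, alpha, beta = np.exp(params)           # enforce positivity via log parameters
        s2 = np.empty_like(eps)
        s2[0] = eps.var()                             # naive initialization of the recursion
        for t in range(1, len(eps)):
            s2[t] = omega + alpha * eps[t - 1] ** 2 + beta * s2[t - 1]
        return 0.5 * np.sum(np.log(s2) + eps ** 2 / s2)

    start = np.log([0.1 * eps.var(), 0.1, 0.8])
    res = minimize(neg_qll, start, method="Nelder-Mead")
    return np.exp(res.x)                              # omega, alpha, beta
```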

Journal ArticleDOI
TL;DR: In this paper, the Central Limit Theorem for degenerate U-statistics of order higher than two is used to construct consistent tests in the context of a nonparametric regression model, such as the significance of a subset of regressors and the specification of the semiparametric functional form of the regression function.
Abstract: In this paper, we develop several consistent tests in the context of a nonparametric regression model. These include tests for the significance of a subset of regressors and tests for the specification of the semiparametric functional form of the regression function, where the latter covers tests for a partially linear and a single index specification against a general nonparametric alternative. One common feature to the construction of all these tests is the use of the Central Limit Theorem for degenerate U-statistics of order higher than two. As a result, they share the same advantages over most of the corresponding existing tests in the literature: (a) They do not depend on any ad hoc modifications such as sample splitting, random weighting, etc. (b) Under the alternative hypotheses, the test statistics in this paper diverge to positive infinity at a faster rate than those based on ad hoc modifications.

Journal ArticleDOI
TL;DR: In this paper, the collective choice of fiscal policy in a "federation" with two levels of government is studied, where local policy redistributes across individuals and affects the probability of aggregate shocks.
Abstract: We study the collective choice of fiscal policy in a "federation" with two levels of government. Local policy redistributes across individuals and affects the probability of aggregate shocks, whereas federal policy shares international risk. There is a tradeoff between risk-sharing and moral hazard: federal risk-sharing may induce local governments to enact policies that increase local risk. We analyze this tradeoff under alternative fiscal constitutions. In particular, we contrast a vertically ordered system like the EC with a horizontally ordered federal system like the US. Alternative arrangements create different incentives for policymakers and voters, and give rise to different political equilibria. Under appropriate institutions, centralization of functions and power can mitigate the moral hazard problem.

Journal ArticleDOI
TL;DR: In this paper, a test for stochastic dominance based on the Goodness of Fit Test was proposed, implemented, and compared with indirect tests of second-order dominance currently utilized in income distribution studies.
Abstract: Tests for stochastic dominance, based upon extensions of the Goodness of Fit Test to the nonparametric comparison of income distributions, are proposed, implemented, and compared with indirect tests of second order stochastic dominance currently utilized in income distribution studies.
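The comparisons underlying such tests reduce to functionals of the two empirical distribution functions: first-order dominance concerns the largest gap between the CDFs, second-order dominance the largest gap between their integrals. The Python sketch below computes these two Kolmogorov-Smirnov-style quantities; it omits the inference step, so it is a building block for a test rather than the paper's procedure.

```python
import numpy as np

def dominance_statistics(a, b, n_grid=500):
    """Largest violations of first- and second-order dominance of distribution A over B."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), n_grid)
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)

    # A first-order dominates B if Fa <= Fb everywhere; report the largest violation.
    fsd_violation = float(np.max(Fa - Fb))

    # Second order: compare integrated CDFs (a generalized-Lorenz-type comparison).
    dz = np.diff(grid, prepend=grid[0])
    ssd_violation = float(np.max(np.cumsum(Fa * dz) - np.cumsum(Fb * dz)))
    return fsd_violation, ssd_violation
```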

Journal ArticleDOI
TL;DR: The authors investigate the separate effects of a training program on the duration of participants' subsequent employment and unemployment spells, and find that the program studied, the National Supported Work Demonstration, raised trainees' employment rates solely by lengthening their employment durations.
Abstract: We investigate the separate effects of a training program on the duration of participants' subsequent employment and unemployment spells. This program randomly assigned volunteers to treatment and control groups. However, the treatments and controls experiencing subsequent employment and unemployment spells are not generally random (or comparable) subsets of the initial groups because the sorting process into subsequent spells is very different for the two groups. Standard practice in duration models ignores this sorting process, leading to a sample selection problem and misleading estimates of the training effects. We propose an estimator that addresses this problem and find that the program studied, the National Supported Work Demonstration, raised trainees' employment rates solely by lengthening their employment durations.

Journal ArticleDOI
TL;DR: In this article, the authors propose and analyze two real-time monitoring procedures with controlled size asymptotically: the fluctuation and CUSUM monitoring procedures, and extend an invariance principle in the sequential testing literature to obtain their results.
Abstract: Contemporary tests for structural change deal with detections of the one-shot type: given an historical data set of fixed size, these tests are designed to detect a structural break within the data set. Due to the law of the iterated logarithm, one-shot tests cannot be applied to monitor out-of-sample stability each time new data arrive without signalling a nonexistent break with probability one. We propose and analyze two real-time monitoring procedures with controlled size asymptotically: the fluctuation and CUSUM monitoring procedures. We extend an invariance principle in the sequential testing literature to obtain our results. Simulation results show that the proposed monitoring procedures indeed have controlled asymptotic size. Detection timing depends on the magnitude of parameter change, the signal to noise ratio, and the location of the out-of-sample break point.
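A stylized version of the monitoring idea: estimate the model on the historical sample, then track standardized cumulative errors on incoming observations against a slowly widening boundary and signal a break at the first crossing. The Python sketch below does this for a simple mean model; the boundary function is an illustrative placeholder, whereas the paper derives boundaries with controlled asymptotic size.

```python
import numpy as np

def cusum_monitor(hist, stream, boundary=lambda n, t: 6.0 * np.sqrt(n + t)):
    """Monitor incoming observations for a mean shift relative to the historical sample (stylized).
    The boundary function is a placeholder, not one of the paper's critical boundaries."""
    hist = np.asarray(hist, dtype=float)
    mu, sigma, n = hist.mean(), hist.std(ddof=1), len(hist)

    cum = 0.0
    for t, x in enumerate(stream, start=1):
        cum += (x - mu) / sigma                  # standardized cumulative out-of-sample error
        if abs(cum) > boundary(n, t):
            return t                             # first crossing: signal a structural change
    return None                                  # no break signalled during the monitoring period
```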

Journal ArticleDOI
TL;DR: The authors used the Panel Study of Income Dynamics to test whether risk-sharing is complete between or within American families, and the test results rejected inter- as well as intra-family full risk sharing even assuming that leisure is endogenous or that leisure and consumption are nonseparable.
Abstract: This paper uses the Panel Study of Income Dynamics to test whether risk-sharing is complete between or within American families. The tests accommodate a wide variety in the configuration and availability of family data. The test results reject inter- as well as intra-family full risk-sharing even assuming that leisure is endogenous or that leisure and consumption are nonseparable.

Journal ArticleDOI
TL;DR: A one-agent Bayesian model of learning by doing and technological choice is explored and it is found that a human-capital-rich agent may find it optimal to avoid any switching of technologies, and therefore to experience no long-run growth.
Abstract: This is a one-agent Bayesian model of learning by doing and technology choice. The more the agent uses a technology, the better he learns its parameters, and the more productive he gets. This expertise is a form of human capital. Any given technology has bounded productivity, which therefore can grow in the long run only if the agent keeps switching to better technologies. But a switch of technologies temporarily reduces expertise: The bigger is the technological leap, the bigger the loss in expertise. The prospect of a productivity drop may prevent the agent from climbing the technological ladder as quickly as he might. Indeed, an agent may be so skilled at some technology that he will never switch again, so that he will experience no long-run growth. In contrast, someone who is less skilled (and therefore less productive) at that technology may find it optimal to switch technologies over and over again, and therefore enjoy long-run growth in output. Thus the model can give rise to overtaking.

Journal ArticleDOI
TL;DR: In this paper, a model of non-cooperative bargaining among n participants, applied to situations describable as games in coalitional form, is presented and analyzed, which leads to a unified solution theory for such games that has as special cases the Shapley value in the transferable utility (TU) case, the Nash bargaining solution in the pure bargaining case, and the recently introduced Maschler-Owen consistent value in a general nontransferable utility case.
Abstract: We present and analyze a model of noncooperative bargaining among n participants, applied to situations describable as games in coalitional form. This leads to a unified solution theory for such games that has as special cases the Shapley value in the transferable utility (TU) case, the Nash bargaining solution in the pure bargaining case, and the recently introduced Maschler-Owen consistent value in the general nontransferable utility (NTU) case. Moreover, we show that any variation (in a certain class) of our bargaining procedure which generates the Shapley value in the TU setup must yield the consistent value in the general NTU setup.

Journal ArticleDOI
TL;DR: In this article, the authors examine background wealth deteriorations that take the form of both general first- and second-degree stochastic dominance changes in risk (FSD and SSD) and determine conditions that are both necessary and sufficient for each of these two types of background risk changes to imply more risk-averse behavior on the part of the individual.
Abstract: Economic decision making under uncertainty often takes place in the presence of multiple risks and in markets that are less than complete. As a consequence, choices about endogenous risks sometimes must be made while simultaneously facing one or more immutable exogenous "background risks" that are not under the control of the agent, and that are independent of endogenous risks. It is somehow natural to assume that an exogenous deterioration in background wealth will cause an individual to take more care elsewhere. If we define a deterioration, for example, as making the individual poorer by removing a fixed amount of initial wealth, we know from Pratt (1964) that decreasing absolute risk aversion (DARA) of an individual's von Neumann-Morgenstern utility function yields this natural result. If, on the other hand, background wealth becomes riskier due to the addition of a zero-mean risk, that is also statistically independent of all other risks, behavior will be more risk averse if and only if preferences are risk vulnerable as defined by Gollier and Pratt (1996). Risk vulnerability (described below in Section 4) is a stronger notion than DARA and includes proper risk aversion (Pratt and Zeckhauser (1987)) and standard risk aversion (Eeckhoudt and Kimball (1992), Kimball (1993)) as particular cases. But a deterioration in background wealth may encompass more complicated distribution changes than the introduction of another statistically independent risk. In this paper, we examine background wealth deteriorations that take the form of both general first- and second-degree stochastic dominance changes in risk (FSD and SSD respectively). In particular, we determine conditions that are both necessary and sufficient for each of these two types of background risk changes to imply more risk-averse behavior on the part of the individual. For the case of FSD changes, this condition turns out to be Ross' stronger characterization of decreasing absolute risk aversion. In the case of general SSD changes in the distribution of background wealth, the condition derived is a stronger version (in Ross' sense) of the conditions characterizing preferences that are locally risk vulnerable in the sense of Gollier and Pratt. The necessary and sufficient conditions derived are fairly restrictive upon preferences. However, if we take as positive behavior that individuals act in a more risk-averse manner whenever the distribution of background wealth deteriorates, these conditions place canonical limits upon appropriate utility representations. At the very least, they

Journal ArticleDOI
TL;DR: The authors examined how proportional transaction costs, short-sale constraints, and margin requirements affect inferences based on asset return data about intertemporal marginal rates of substitution (IMRSs) and showed that small transaction costs can greatly reduce the required variability of IMRSs.
Abstract: This paper examines how proportional transaction costs, short-sale constraints, and margin requirements affect inferences based on asset return data about intertemporal marginal rates of substitution (IMRSs). It is shown that small transaction costs can greatly reduce the required variability of IMRSs. This suggests that the low variability of many parametric, aggregate consumption based IMRSs need not be inconsistent with asset return data. Euler inequalities for a transaction cost economy with power utility are tested using aggregate consumption data and returns on stocks and short maturity U.S. Treasury bills. In the majority of cases there is little evidence against power utility specifications with low risk-aversion parameters. The results are obtained with transaction costs on stocks as small as .5% of price, and are in sharp contrast to the strong rejection of the analogous Euler equalities for a frictionless economy.

Journal ArticleDOI
TL;DR: In this article, the authors consider the situation where a single consumer buys a stream of goods from different sellers over time and the true value of each seller's product to the buyer is initially unknown.
Abstract: We consider the situation where a single consumer buys a stream of goods from different sellers over time. The true value of each seller's product to the buyer is initially unknown. Additional information can be gained only by experimentation. For exogenously given prices the buyer's problem is a multi-armed bandit problem. The innovation in this paper is to endogenize the cost of experimentation to the consumer by allowing for price competition between the sellers. The role of prices is then to allocate intertemporally the costs and benefits of learning between buyers and sellers. We examine how strategic aspects of the oligopoly model interact with the learning process. All Markov perfect equilibria (MPE) are efficient. We identify an equilibrium which besides its unique robustness properties has a strikingly simple, seemingly myopic pricing rule. Prices below marginal cost emerge naturally to sustain experimentation. Intertemporal exchange of the gains of learning is necessary to support efficient experimentation. We analyze the asymptotic behavior of the equilibria.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the spatial theory of voting to an institutional structure in which policy choices depend upon not only the executive but also the composition of the legislature, and apply "coalition proof" type refinements.
Abstract: This paper extends the spatial theory of voting to an institutional structure in which policy choices depend upon not only the executive but also the composition of the legislature. Voters have incentives to be strategic since policy reflects the outcome of a simultaneous election of the legislature and the executive and since the legislature's impact on policy depends upon relative plurality. To analyze equilibrium in this game between voters, we apply "coalition proof" type refinements. The model has several testable implications which are consistent with voting behavior in the United States. For instance, the model predicts: (a) split-tickets where some voters vote for one party for president and the other for congress; (b) for some parameter values, a divided government with different parties controlling the executive and the majority of the legislature; and (c) the midterm electoral cycle with the party holding the presidency always losing votes in midterm congressional elections.

Journal ArticleDOI
TL;DR: In this article, the authors develop continuous record asymptotic approximations for the measurement error in conditional variances and covariances when using one-sided rolling regressions, two-sided rolling regressions, and weighted rolling regressions.
Abstract: It is widely known that conditional covariances of asset returns change over time. Researchers doing empirical work have adopted many strategies for accommodating conditional heteroskedasticity. Among the popular strategies are: (a) chopping the available data into short blocks of time and assuming homoskedasticity within the blocks, (b) performing one-sided rolling regressions, in which only data from, say, the preceding five year period is used to estimate the conditional covariance of returns at a given date, and (c) performing two-sided rolling regressions, in which covariances are estimated for each date using, say, five years of lags and five years of leads. Another model, GARCH, amounts to a one-sided weighted rolling regression. We develop continuous record asymptotic approximations for the measurement error in conditional variances and covariances when using these methods. We derive asymptotically optimal window lengths for standard rolling regressions and optimal weights for weighted rolling regressions. As an empirical example, we estimate volatility on the S&P 500 stock index using daily data from 1928 to 1990.
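Each strategy listed in the abstract takes only a line or two of code. The Python sketch below computes a one-sided rolling, a two-sided rolling, and an exponentially weighted (GARCH-like) one-sided estimate of conditional variance from a return series; the window length and decay parameter are arbitrary illustrations, whereas the paper derives optimal choices.

```python
import numpy as np

def rolling_variances(r, window=250, lam=0.94):
    """One-sided, two-sided, and exponentially weighted rolling variance estimates."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    one_sided = np.full(n, np.nan)
    two_sided = np.full(n, np.nan)
    for t in range(n):
        if t >= window:
            one_sided[t] = np.var(r[t - window:t])             # past data only
        lo, hi = max(0, t - window // 2), min(n, t + window // 2)
        two_sided[t] = np.var(r[lo:hi])                        # leads and lags

    ewma = np.full(n, np.nan)
    ewma[0] = r[:window].var() if n >= window else r.var()
    for t in range(1, n):
        ewma[t] = lam * ewma[t - 1] + (1 - lam) * r[t - 1] ** 2  # one-sided weighted scheme
    return one_sided, two_sided, ewma
```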

ReportDOI
TL;DR: In this paper, the authors examined the effect of cash transfers and food stamp benefits on family labor supply and welfare participation among two-parent families and found that welfare participation and labor supply are highly responsive to changes in the benefit structure under the AFDC-UP program.
Abstract: This paper examines the effect of cash transfers and food stamp benefits on family labor supply and welfare participation among two-parent families. The Aid to Families with Dependent Children-Unemployed Parent Program has provided cash benefits to two-parent households since 1961. Despite recent expansions, little is known about the program's effect on labor supply and welfare participation. I develop a model of family labor supply in which hours of work for the husband and wife are chosen to maximize family utility subject to a family budget constraint accounting for AFDC-UP benefits and other tax and transfer programs. The husband's and wife's labor supply decisions are restricted to no work, part-time work, and full-time work. Maximum likelihood techniques are used to estimate parameters of the underlying hours of work and welfare participation equations. The estimates are used to determine the magnitude of the work disincentive effects of the AFDC-UP program, and to simulate the effects of changes in AFDC-UP benefit and eligibility rules on family labor supply and welfare participation. The results suggest that labor supply and welfare participation among two-parent families are highly responsive to changes in the benefit structure under the AFDC-UP program.

Journal ArticleDOI
TL;DR: In this article, the authors proposed three classes of consistent one-sided tests for serial correlation of unknown form for the residual from a linear dynamic regression model that includes both lagged dependent variables and exogenous variables.
Abstract: This paper proposes three classes of consistent one-sided tests for serial correlation of unknown form for the residual from a linear dynamic regression model that includes both lagged dependent variables and exogenous variables. The tests are obtained by comparing a kernel-based normalized spectral density estimator and the null normalized spectral density, using a quadratic norm, the Hellinger metric, and the Kullback-Leibler information criterion respectively. Under the null hypothesis of no serial correlation, the three classes of new test statistics are asymptotically N(0,1) and equivalent. The null distributions are obtained without having to specify any alternative model. Unlike some conventional tests for serial correlation, the null distributions of our tests remain invariant when the regressors include lagged dependent variables. Under a suitable class of local alternatives, the three classes of the new tests are asymptotically equally efficient. Under global alternatives, however, their relative efficiencies depend on the relative magnitudes of the three divergence measures. Our approach provides an interpretation for Box and Pierce's (1970) test, which can be viewed as a quadratic norm based test using a truncated periodogram. Many kernels deliver tests with better power than Box and Pierce's test or the truncated kernel based test. A simulation study shows that the new tests have good power against an AR(1) process and a fractionally integrated process. In particular, they have better power than the Lagrange multiplier tests of Breusch (1978) and Godfrey (1978) as well as the portmanteau tests of Box and Pierce (1970) and Ljung and Box (1978). The cross-validation procedure of Beltrao and Bloomfield (1987) and Robinson (1991a) works reasonably well in determining the smoothing parameter of the kernel spectral estimator and is recommended for use in practice.
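The quadratic-norm member of these tests is, in essence, a kernel-weighted and standardized sum of squared residual autocorrelations; with a truncated kernel it collapses to a Box-Pierce-type statistic, as the abstract notes. The Python sketch below implements that general form with a Bartlett kernel; the centering and scaling terms follow the usual standardization for such statistics and are meant as an illustration rather than a transcription of the paper's formulas.

```python
import numpy as np

def kernel_autocorr_test(resid, p=10):
    """Kernel-weighted (Bartlett) standardized sum of squared autocorrelations (illustrative)."""
    e = np.asarray(resid, dtype=float) - np.mean(resid)
    n = len(e)
    gamma0 = e @ e / n
    rho = np.array([(e[j:] @ e[:-j]) / n for j in range(1, n)]) / gamma0

    j = np.arange(1, n)
    k = np.where(j / p <= 1.0, 1.0 - j / p, 0.0)          # Bartlett kernel weights
    stat = n * np.sum(k**2 * rho**2)

    # Standardize with approximate centering and scaling terms (one-sided test:
    # reject serial independence for large positive values).
    center = np.sum((1 - j / n) * k**2)
    scale = 2 * np.sum((1 - j[:-1] / n) * (1 - (j[:-1] + 1) / n) * k[:-1] ** 4)
    return (stat - center) / np.sqrt(scale)
```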

Journal ArticleDOI
TL;DR: In this paper, it is shown that, given any model of the effect of mutations, any invariant distribution of the "mutationless" process is close to an invariant distribution of the process with appropriately chosen small mutation rates, and that the refinement effect can only be obtained by restrictions on how the magnitude of the effect of mutation on evolution varies across states of the system.
Abstract: Recent evolutionary models have introduced "small mutation rates" as a way of refining predictions of long-run behavior. We show that this refinement effect can only be obtained by restrictions on how the magnitude of the effect of mutation on evolution varies across states of the system. In particular, given any model of the effect of mutations, any invariant distribution of the "mutationless" process is close to an invariant distribution of the process with appropriately chosen small mutation rates.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the influence of the underlying properties of the inequality measure on the distortion of the distribution and illustrate the magnitude of the effect using a simulation, and demonstrate the application of a robust estimation procedure.
Abstract: Inequality measures are often used to summarize information about empirical income distributions. However the resulting picture of the distribution and of changes in the distribution can be severely distorted if the data are contaminated. The nature of this distortion will in general depend upon the underlying properties of the inequality measure. We investigate this issue theoretically using a technique based on the influence function, and illustrate the magnitude of the effect using a simulation. We consider both direct nonparametric estimation from the sample, and indirect estimation using a parametric model; in the latter case we demonstrate the application of a robust estimation procedure. We apply our results to two micro-data examples.
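The sensitivity that motivates the paper shows up in a tiny simulation: a single contaminated record can move a nonparametric inequality estimate substantially. The Python sketch below demonstrates this for the Gini coefficient on simulated lognormal incomes; it illustrates the problem only, not the paper's influence-function analysis or robust estimator.

```python
import numpy as np

def gini(x):
    """Gini coefficient from a sample of incomes (nonparametric plug-in estimate)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=0.0, sigma=0.5, size=5000)
contaminated = np.append(incomes, incomes.mean() * 1000)      # one absurdly large record

print(round(gini(incomes), 3), round(gini(contaminated), 3))  # the index jumps noticeably
```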

Journal ArticleDOI
TL;DR: This paper discusses model determination methods and their use in the prediction of economic time series; the main part of the paper is concerned with model determination, forecast evaluation, and the construction of evolving sequences of models that can adapt in dimension and form (including the way in which any nonstationarity in the data is modelled).
Abstract: Our general subject is model determination methods and their use in the prediction of economic time series. The methods suggested are Bayesian in spirit but they can be justified by classical as well as Bayesian arguments. The main part of the paper is concerned with model determination, forecast evaluation, and the construction of evolving sequences of models that can adapt in dimension and form (including the way in which any nonstationarity in the data is modelled) as new characteristics in the data become evident. The paper continues some recent work on Bayesian asymptotics by the author and Werner Ploberger (1995), develops embedding techniques for vector martingales that justify the role of a class of exponential densities in model selection and forecast evaluation, and implements the modelling ideas in a multivariate regression framework that includes Bayesian vector autoregressions (BVAR's) and reduced rank regressions (RRR's). It is shown how the theory in the paper can be used: (i) to construct optimized BVAR's with data-determined hyperparameters; (ii) to compare models such as BVAR's, optimized BVAR's, and RRR's; (iii) to perform joint order selection of cointegrating rank, lag length, and trend degree in a VAR; and (iv) to discard data that may be irrelevant and thereby reset the initial conditions of a model.