
Showing papers on "Estimator published in 2002"


Journal ArticleDOI
TL;DR: The convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function is proved, establishing its utility in detecting the modes of the density.
Abstract: A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and the delineation of arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks - discontinuity-preserving smoothing and image segmentation - are described as applications. In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.
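To make the procedure concrete, here is a minimal sketch of the mean shift iteration with a Gaussian kernel; the function names and bandwidth value are illustrative, not from the paper:

```python
import numpy as np

def mean_shift(points, h=1.0, tol=1e-6, max_iter=500):
    """Move each point uphill to a mode of the kernel density estimate.

    Each iterate is the kernel-weighted mean of the data (the "mean
    shift" step), using a Gaussian kernel with bandwidth h.
    """
    modes = points.astype(float).copy()
    for i in range(len(modes)):
        x = modes[i]
        for _ in range(max_iter):
            w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * h ** 2))
            x_new = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        modes[i] = x
    return modes

# Toy usage: two well-separated blobs collapse onto two modes; grouping
# points whose iterates coincide yields the clusters.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(np.unique(np.round(mean_shift(data, h=0.5), 1), axis=0))
```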

11,727 citations


Posted Content
TL;DR: In this article, the focus is on panels where a large number of individuals or firms are observed for a small number of time periods, typical of applications with microeconomic data, and the emphasis is on single equation models with autoregressive dynamics and explanatory variables.
Abstract: This paper reviews econometric methods for dynamic panel data models, and presents examples that illustrate the use of these procedures. The focus is on panels where a large number of individuals or firms are observed for a small number of time periods, typical of applications with microeconomic data. The emphasis is on single equation models with autoregressive dynamics and explanatory variables that are not strictly exogenous, and hence on the Generalised Method of Moments estimators that are widely used in this context. Two examples using firm-level panels are discussed in detail: a simple autoregressive model for investment rates; and a basic production function.
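As a concrete illustration of the instrumental-variable logic behind the GMM estimators the survey covers, the sketch below implements the simplest version (the Anderson-Hsiao idea): first-differencing removes the fixed effect, and the twice-lagged level instruments the lagged difference. Names and the simulation are illustrative, not from the paper.

```python
import numpy as np

def anderson_hsiao(y):
    """IV estimator of rho in y_it = rho * y_i,t-1 + eta_i + eps_it.

    First differences remove eta_i; y_{i,t-2} instruments the
    endogenous lagged difference. y has shape (N, T).
    """
    dy = np.diff(y, axis=1)   # Delta y, shape (N, T-1)
    z = y[:, :-2]             # instrument y_{i,t-2}
    d1 = dy[:, 1:]            # Delta y_it
    d0 = dy[:, :-1]           # Delta y_{i,t-1}
    return np.sum(z * d1) / np.sum(z * d0)

# Simulate a short, wide panel (large N, small T) and estimate rho.
rng = np.random.default_rng(1)
N, T, rho = 500, 8, 0.5
eta = rng.normal(size=N)
y = np.zeros((N, T))
y[:, 0] = eta + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + eta + rng.normal(size=N)
print(round(float(anderson_hsiao(y)), 2))  # should be near 0.5
```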

1,821 citations


Journal ArticleDOI
TL;DR: In this paper, a nonparametric estimator based on the concept of expected minimum input function (or expected maximal output function) is proposed, which is related to the FDH estimator but will not envelop all the data.

1,023 citations


Journal ArticleDOI
TL;DR: A model of visual motion perception is formulated using standard estimation theory, under the assumptions that there is noise in the initial measurements and that slower motions are more likely to occur than faster ones; a specific instantiation of such a velocity estimator is found to account for a wide variety of psychophysical phenomena.
Abstract: We show that these ‘illusions’ arise naturally in a system that attempts to estimate local image velocity. We formulated a model of visual motion perception using standard estimation theory, under the assumptions that (i) there is noise in the initial measurements and (ii) slower motions are more likely to occur than faster ones. We found that a specific instantiation of such a velocity estimator can account for a wide variety of psychophysical phenomena.
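A minimal sketch of such a velocity estimator under the two stated assumptions, with Gaussian noise on the brightness-constancy constraints and a zero-mean Gaussian prior favoring slow motions; the parameter names are assumptions, not the paper's notation:

```python
import numpy as np

def map_velocity(Ix, Iy, It, noise_var=1.0, prior_var=10.0):
    """MAP image velocity for one patch.

    Likelihood: Gaussian noise on each constraint Ix*u + Iy*v + It = 0.
    Prior: zero-mean Gaussian on (u, v), so slower motions are favored.
    Ix, Iy, It are arrays of spatial/temporal derivatives over the patch.
    """
    g = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # constraint normals
    A = g.T @ g / noise_var + np.eye(2) / prior_var  # posterior precision
    b = -(g.T @ It.ravel()) / noise_var
    return np.linalg.solve(A, b)                     # estimated (u, v)
```

With little contrast (small gradients) the prior dominates and the estimate shrinks toward zero, which is one way such ‘illusions’ of slower perceived motion can emerge.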

959 citations


Posted Content
TL;DR: In this paper, a true fixed effects model is extended to the stochastic frontier model using results that specifically employ the nonlinear specification, and the random effects model is reformulated as a special case of the random parameters model that retains the fundamental structure of the stochastic frontier model.
Abstract: Received analyses based on stochastic frontier modeling with panel data have relied primarily on results from traditional linear fixed and random effects models. This paper examines extensions of these models that circumvent two important shortcomings of the existing fixed and random effects approaches. The conventional panel data stochastic frontier estimators both assume that technical or cost inefficiency is time invariant. In a lengthy panel, this is likely to be a particularly strong assumption. Second, as conventionally formulated, the fixed and random effects estimators force any time invariant cross unit heterogeneity into the same term that is being used to capture the inefficiency. Thus, measures of inefficiency in these models may be picking up heterogeneity in addition to or even instead of technical or cost inefficiency. In this paper, a true fixed effects model is extended to the stochastic frontier model using results that specifically employ the nonlinear specification. The random effects model is reformulated as a special case of the random parameters model that retains the fundamental structure of the stochastic frontier model. The techniques are illustrated through two applications, a large panel from the U.S. banking industry and a cross country comparison of the efficiency of health care delivery.

838 citations


Journal ArticleDOI
TL;DR: In this article, the authors used Hermite polynomials to construct an explicit sequence of closed-form functions and showed that it converges to the true (but unknown) likelihood function.
Abstract: When a continuous-time diffusion is observed only at discrete dates, in most cases the transition distribution and hence the likelihood function of the observations is not explicitly computable. Using Hermite polynomials, I construct an explicit sequence of closed-form functions and show that it converges to the true (but unknown) likelihood function. I document that the approximation is very accurate and prove that maximizing the sequence results in an estimator that converges to the true maximum likelihood estimator and shares its asymptotic properties. Monte Carlo evidence reveals that this method outperforms other approximation schemes in situations relevant for financial models.
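Based on the description above, the approximation has the familiar Hermite-series form (a hedged outline; notation assumed): after standardizing the transition so that the target variable \(Z\) is approximately standard normal, the density is expanded as

\[
p_Z^{(J)}(z) = \phi(z)\sum_{j=0}^{J} \eta_j\, H_j(z),
\qquad
\eta_j = \frac{1}{j!}\,E\!\left[H_j(Z)\right],
\]

where \(\phi\) is the standard normal density and \(H_j\) are Hermite polynomials; the coefficients \(\eta_j\) are in turn approximated in closed form by Taylor expansion in the sampling interval, which is what makes the sequence of approximations explicit.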

823 citations


Posted Content
TL;DR: This article proposed quantitative definitions of weak instruments based on the maximum IV estimator bias, or the maximum Wald test size distortion, when there are multiple endogenous regressors, and tabulated critical values that enable using the first-stage F-statistic (or, for instance, the Cragg-Donald [1993] statistic) to test whether the given instruments are weak.
Abstract: Weak instruments can produce biased IV estimators and hypothesis tests with large size distortions. But what, precisely, are weak instruments, and how does one detect them in practice? This paper proposes quantitative definitions of weak instruments based on the maximum IV estimator bias, or the maximum Wald test size distortion, when there are multiple endogenous regressors. We tabulate critical values that enable using the first-stage F-statistic (or, when there are multiple endogenous regressors, the Cragg-Donald [1993] statistic) to test whether the given instruments are weak.
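For reference, a minimal sketch of the first-stage F-statistic in the one-endogenous-regressor case; the resulting value is meant to be compared against tabulated critical values such as those in this paper rather than conventional F tables. Variable names are illustrative.

```python
import numpy as np

def first_stage_F(x, Z, W=None):
    """F-statistic for the joint significance of the instruments Z in
    the first-stage regression of the endogenous regressor x; W holds
    the included exogenous controls (a constant by default)."""
    n = len(x)
    if W is None:
        W = np.ones((n, 1))
    q = Z.shape[1]
    XU = np.hstack([W, Z])   # unrestricted first-stage design
    rss_u = np.sum((x - XU @ np.linalg.lstsq(XU, x, rcond=None)[0]) ** 2)
    rss_r = np.sum((x - W @ np.linalg.lstsq(W, x, rcond=None)[0]) ** 2)
    return ((rss_r - rss_u) / q) / (rss_u / (n - XU.shape[1]))
```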

812 citations


Journal ArticleDOI
TL;DR: The (conditional) minimum average variance estimation (MAVE) method is proposed, which is applicable to a wide range of models, with fewer restrictions on the distribution of the covariates, to the extent that even time series can be included.
Abstract: Searching for an effective dimension reduction space is an important problem in regression, especially for high dimensional data. We propose an adaptive approach based on semiparametric models, which we call the (conditional) minimum average variance estimation (MAVE) method, within quite a general setting. The MAVE method has the following advantages. Most existing methods must undersmooth the nonparametric link function estimator to achieve a faster rate of consistency for the estimator of the parameters (than for that of the nonparametric function). In contrast, a faster consistency rate can be achieved by the MAVE method even without undersmoothing the nonparametric link function estimator. The MAVE method is applicable to a wide range of models, with fewer restrictions on the distribution of the covariates, to the extent that even time series can be included. Because of the faster rate of consistency for the parameter estimators, it is possible for us to estimate the dimension of the space consistently. The relationship of the MAVE method with other methods is also investigated. In particular, a simple outer product gradient estimator is proposed as an initial estimator. In addition to theoretical results, we demonstrate the efficacy of the MAVE method for high dimensional data sets through simulation. Two real data sets are analysed by using the MAVE approach.
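A minimal sketch of the outer product gradient (OPG) initial estimator mentioned above, assuming Gaussian-kernel local linear smoothing: estimate the regression gradient at each observation, average the outer products, and take the leading eigenvectors as the estimated directions.

```python
import numpy as np

def opg_directions(X, y, d, h=1.0):
    """Outer product of gradients: the averaged outer product of
    local-linear gradient estimates has the effective dimension
    reduction space as its leading eigenspace."""
    n, p = X.shape
    M = np.zeros((p, p))
    for i in range(n):
        diff = X - X[i]
        w = np.exp(-np.sum(diff ** 2, axis=1) / (2 * h ** 2))
        A = np.hstack([np.ones((n, 1)), diff])        # [1, x_j - x_i]
        Aw = A * w[:, None]
        beta = np.linalg.solve(A.T @ Aw + 1e-8 * np.eye(p + 1), Aw.T @ y)
        M += np.outer(beta[1:], beta[1:]) / n          # gradient part only
    _, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -d:]   # eigenvectors of the d largest eigenvalues
```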

787 citations




Posted Content
TL;DR: In this article, a panel approach to median unbiased estimation that takes account of cross-section dependence is developed, which considerably reduces the effects of bias and gains precision from estimating cross-section error correlation.
Abstract: This paper deals with cross section dependence, homogeneity restrictions and small sample bias issues in dynamic panel regressions. To address the bias problem we develop a panel approach to median unbiased estimation that takes account of cross section dependence. The new estimators given here considerably reduce the effects of bias and gain precision from estimating cross section error correlation. The paper also develops an asymptotic theory for tests of coefficient homogeneity under cross section dependence, and proposes a modified Hausman test to test for the presence of homogeneous unit roots. An orthogonalization procedure is developed to remove cross section dependence and permit the use of conventional and meta unit root tests with panel data. Some simulations investigating the finite sample performance of the estimation and test procedures are reported.

714 citations


Journal ArticleDOI
01 Mar 2002-Genetics
TL;DR: A new estimator for jointly estimating two- and four-gene coefficients of relatedness between individuals from an outbreeding population with data on codominant genetic markers is proposed and compared to previous estimators, the new one is generally advantageous, especially for highly polymorphic loci and/or small sample sizes.
Abstract: I propose a new estimator for jointly estimating two-gene and four-gene coefficients of relatedness between individuals from an outbreeding population with data on codominant genetic markers and compare it, by Monte Carlo simulations, to previous ones in precision and accuracy for different distributions of population allele frequencies, numbers of alleles per locus, actual relationships, sample sizes, and proportions of relatives included in samples. In contrast to several previous estimators, the new estimator is well behaved and applies to any number of alleles per locus and any allele frequency distribution. The estimates for two- and four-gene coefficients of relatedness from the new estimator are unbiased irrespective of the sample size and have sampling variances decreasing consistently with an increasing number of alleles per locus to the minimum asymptotic values determined by the variation in identity-by-descent among loci per se, regardless of the actual relationship. The new estimator is also robust for small sample sizes and for unknown relatives being included in samples for estimating allele frequencies. Compared to previous estimators, the new one is generally advantageous, especially for highly polymorphic loci and/or small sample sizes.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the RV is sometimes a quite noisy estimator of integrated variance, even with large values of M. The authors use the limit theory on some exchange rate data and some stock data.
Abstract: This paper looks at some recent work on estimating quadratic variation using realised variance (RV) — that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high frequency financial return data. When the underlying process is a semimartingale, we recall the fundamental result that RV is a consistent (as M → ∞) estimator of quadratic variation (QV). We express concern that without additional assumptions it seems difficult to give any measure of uncertainty of the RV in this context. The position dramatically changes when we work with a rather general SV model — which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance.
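A minimal sketch of realised variance together with a feasible standard error of the kind this limit theory delivers; the quarticity-based form below is an assumption stated up to constants, not a transcription of the paper's formulas.

```python
import numpy as np

def realised_variance(prices):
    """RV: the sum of squared intraday log returns, consistent for the
    quadratic variation as the number of returns M grows."""
    r = np.diff(np.log(prices))
    return np.sum(r ** 2)

def rv_with_se(prices):
    """RV plus an asymptotic standard error, using realised quarticity
    (M/3) * sum(r^4) as an estimate of the integrated quarticity."""
    r = np.diff(np.log(prices))
    M = len(r)
    rv = np.sum(r ** 2)
    rq = (M / 3.0) * np.sum(r ** 4)
    return rv, np.sqrt(2.0 * rq / M)
```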

Posted Content
TL;DR: In this article, the authors study the panel DOLS estimator of a homogeneous cointegration vector for a balanced panel of N individuals observed over T time periods and find that the estimator is fully parametric, computationally convenient, and more precise than the single equation estimator.
Abstract: We study the panel DOLS estimator of a homogeneous cointegration vector for a balanced panel of N individuals observed over T time periods. Allowable heterogeneity across individuals includes individual-specific time trends, individual-specific fixed effects and time-specific effects. The estimator is fully parametric, computationally convenient, and more precise than the single equation estimator. For fixed N as T approaches infinity, the estimator converges to a function of Brownian motions and the Wald statistic for testing a set of linear constraints has a limiting chi-square distribution. The estimator also has a Gaussian sequential limit distribution that is obtained first by letting T go to infinity then letting N go to infinity. In a series of Monte Carlo experiments, we find that the asymptotic distribution theory provides a reasonably close approximation to the exact finite sample distribution. We use panel dynamic OLS to estimate coefficients of the long-run money demand function from a panel of 19 countries with annual observations that span from 1957 to 1996. The estimated income elasticity is 1.08 (asymptotic s.e.=0.26) and the estimated interest rate semi-elasticity is -0.02 (asymptotic s.e.=0.01).

Journal ArticleDOI
TL;DR: The analysis of the proposed fault isolation scheme provides rigorous analytical results concerning the fault isolation time, and two simulation examples are given to show the effectiveness of the fault diagnosis methodology.
Abstract: This paper presents a robust fault diagnosis scheme for abrupt and incipient faults in nonlinear uncertain dynamic systems. A detection and approximation estimator is used for online health monitoring. Once a fault is detected, a bank of isolation estimators is activated for the purpose of fault isolation. A key design issue of the proposed fault isolation scheme is the adaptive residual threshold associated with each isolation estimator. A fault that has occurred can be isolated if the residual associated with the matched isolation estimator remains below its corresponding adaptive threshold, whereas at least one of the components of the residuals associated with all the other estimators exceeds its threshold at some finite time. Based on the class of nonlinear uncertain systems under consideration, an isolation decision scheme is devised and fault isolability conditions are given, characterizing the class of nonlinear faults that are isolable by the robust fault isolation scheme. The nonconservativeness of the fault isolability conditions is illustrated by deriving a subclass of nonlinear systems and of faults for which these conditions are also necessary for fault isolability. Moreover, the analysis of the proposed fault isolation scheme provides rigorous analytical results concerning the fault isolation time. Two simulation examples are given to show the effectiveness of the fault diagnosis methodology.

Posted Content
TL;DR: In this article, the authors proposed a nonparametric structural model with non-additive errors and nonlinear control functions to identify objects of interest, including the average conditional response, the average structural function, as well as the full structural response function.
Abstract: This paper investigates identification and inference in a nonparametric structural model with instrumental variables and non-additive errors. We allow for non-additive errors because the unobserved heterogeneity in marginal returns that often motivates concerns about endogeneity of choices requires objective functions that are non-additive in observed and unobserved components. We formulate several independence and monotonicity conditions that are sufficient for identification of a number of objects of interest, including the average conditional response, the average structural function, as well as the full structural response function. For inference we propose a two-step series estimator. The first step consists of estimating the conditional distribution of the endogenous regressor given the instrument. In the second step the estimated conditional distribution function is used as a regressor in a nonlinear control function approach. We establish rates of convergence, asymptotic normality, and give a consistent asymptotic variance estimator.

Journal ArticleDOI
TL;DR: The proposed maximum-likelihood location estimator for wideband sources in the near field of the sensor array is derived and is shown to yield superior performance over other suboptimal techniques, including the wideband MUSIC and the two-step least-squares methods.
Abstract: In this paper, we derive the maximum-likelihood (ML) location estimator for wideband sources in the near field of the sensor array. The ML estimator is optimized in a single step, as opposed to other estimators that are optimized separately in relative time-delay and source location estimations. For the multisource case, we propose and demonstrate an efficient alternating projection procedure based on sequential iterative search on single-source parameters. The proposed algorithm is shown to yield superior performance over other suboptimal techniques, including the wideband MUSIC and the two-step least-squares methods, and is efficient with respect to the derived Cramer-Rao bound (CRB). From the CRB analysis, we find that better source location estimates can be obtained for high-frequency signals than low-frequency signals. In addition, large range estimation error results when the source signal is unknown, but this unknown parameter does not have much impact on angle estimation. In some applications, the locations of some sensors may be unknown and must be estimated. The proposed method is extended to estimate the range from a source to an unknown sensor location. After a number of source-location frames, the location of the uncalibrated sensor can be determined based on a least-squares unknown sensor location estimator.

Journal ArticleDOI
TL;DR: The authors provide an overview of inverse probability weighted estimators for cross section and two-period panel data applications under an ignorability assumption, and provide straightforward \(\sqrt{N}\)-consistent and asymptotically normal estimation methods.
Abstract: I provide an overview of inverse probability weighted (IPW) M-estimators for cross section and two-period panel data applications. Under an ignorability assumption, I show that population parameters are identified, and provide straightforward \(\sqrt{N}\)-consistent and asymptotically normal estimation methods. I show that estimating a binary response selection model by conditional maximum likelihood leads to a more efficient estimator than using known probabilities, a result that unifies several disparate results in the literature. But IPW estimation is not a panacea: in some important cases of nonresponse, unweighted estimators will be consistent under weaker ignorability assumptions.
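A minimal sketch of an IPW mean estimator with the selection probability fitted by logistic regression, in line with the efficiency result described above; the names and the normalization by the summed weights are implementation choices, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_mean(y, observed, X):
    """IPW estimate of E[y] when y is seen only for a nonrandom
    subsample and selection is ignorable given covariates X.

    y: outcomes with NaN where missing; observed: 0/1 selection
    indicator; X: covariate matrix.
    """
    p = LogisticRegression().fit(X, observed).predict_proba(X)[:, 1]
    w = observed / p                      # zero wherever y is missing
    return np.sum(w * np.nan_to_num(y)) / np.sum(w)
```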

Journal ArticleDOI
TL;DR: In this article, the authors examined the panel data estimation of dynamic models for count data that include correlated fixed effects and predetermined variables, and used a linear feedback model to obtain a consistent estimator for the parameters in the dynamic model.

Posted Content
TL;DR: In this article, an exact form of the local Whittle likelihood is studied with the intent of developing a general purpose estimation procedure for the memory parameter (d) that applies throughout the stationary and nonstationary regions of d and which does not rely on tapering or differencing prefilters.
Abstract: An exact form of the local Whittle likelihood is studied with the intent of developing a general purpose estimation procedure for the memory parameter (d) that applies throughout the stationary and nonstationary regions of d and which does not rely on tapering or differencing prefilters. The resulting exact local Whittle estimator is shown to be consistent and to have the same N(0,1/4) limit distribution for all values of d.
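A minimal sketch of the exact local Whittle objective, assuming the standard form: fractionally difference the series, take the periodogram at the first m Fourier frequencies, and minimize the log of the averaged periodogram minus 2d times the average log frequency. The (1 - L)^d coefficients follow the usual binomial recursion.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def frac_diff(x, d):
    """Apply (1-L)^d with pi_0 = 1 and pi_k = pi_{k-1} * (k-1-d) / k."""
    n = len(x)
    pi = np.ones(n)
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return np.array([np.dot(pi[:t + 1], x[t::-1]) for t in range(n)])

def exact_local_whittle(x, m):
    """Minimize the exact local Whittle objective over d."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    def R(d):
        u = frac_diff(x, d)
        I = np.abs(np.fft.fft(u)[1:m + 1]) ** 2 / (2 * np.pi * n)
        return np.log(np.mean(I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.5, 2.0), method="bounded").x
```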

ReportDOI
TL;DR: In this article, the authors develop a new framework to analyze the properties of matching estimators and establish a number of new results, including that the conditional bias term may not vanish at a rate faster than root-N when more than one continuous variable is used for matching.
Abstract: Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. In this article, we develop a new framework to analyze the properties of matching estimators and establish a number of new results. First, we show that matching estimators include a conditional bias term which may not vanish at a rate faster than root-N when more than one continuous variable is used for matching. As a result, matching estimators may not be root-N-consistent. Second, we show that even after removing the conditional bias, matching estimators with a fixed number of matches do not reach the semiparametric efficiency bound for average treatment effects, although the efficiency loss may be small. Third, we propose a bias-correction that removes the conditional bias asymptotically, making matching estimators root-N-consistent. Fourth, we provide a new estimator for the conditional variance that does not require consistent nonparametric estimation of unknown functions. We apply the bias-corrected matching estimators to the study of the effects of a labor market program previously analyzed by Lalonde (1986). We also carry out a small simulation study based on Lalonde's example where a simple implementation of the bias-corrected matching estimator performs well compared to both simple matching estimators and to regression estimators in terms of bias and root-mean-squared-error. Software for implementing the proposed estimators in STATA and Matlab is available from the authors on the web.
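To fix ideas, a minimal sketch of the plain nearest-neighbor matching estimator whose large sample properties the paper analyzes; the bias correction itself (regression-adjusting the matched outcomes) is omitted here.

```python
import numpy as np

def matching_ate(X, y, treat, k=1):
    """Simple k-nearest-neighbor matching estimator of the average
    treatment effect: each unit's missing potential outcome is imputed
    by the mean outcome of its k nearest neighbors (Euclidean distance)
    in the opposite treatment arm."""
    n = len(y)
    imputed = np.empty(n)
    for i in range(n):
        other = np.flatnonzero(treat != treat[i])
        dist = np.linalg.norm(X[other] - X[i], axis=1)
        imputed[i] = y[other[np.argsort(dist)[:k]]].mean()
    effects = np.where(treat == 1, y - imputed, imputed - y)
    return effects.mean()
```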

Journal ArticleDOI
TL;DR: In this article, the authors considered a dynamic panel AR(1) model with fixed effects when both n and T are large and showed that a relatively simple fix to OLS or the MLE results in an asymptotically unbiased estimator.
Abstract: We consider a dynamic panel AR(1) model with fixed effects when both n and T are large. Under the “T fixed n large” asymptotic approximation, the ordinary least squares (OLS) or Gaussian maximum likelihood estimator (MLE) is known to be inconsistent due to the well-known incidental parameter problem. We consider an alternative asymptotic approximation where n and T grow at the same rate. It is shown that, although OLS or the MLE is asymptotically biased, a relatively simple fix to OLS or the MLE results in an asymptotically unbiased estimator. Under the assumption of Gaussian innovations, the bias-corrected MLE is shown to be asymptotically efficient by a Hajek type convolution theorem.
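A minimal sketch of the within estimator together with a simple large-(n, T) correction of the form rho_hat + (1 + rho_hat)/T, offered here in the spirit of, though not necessarily identical to, the fix the paper derives:

```python
import numpy as np

def within_ar1(y):
    """Within (fixed-effects) estimator of rho in
    y_it = rho * y_i,t-1 + eta_i + eps_it; y has shape (n, T)."""
    lag = y[:, :-1] - y[:, :-1].mean(axis=1, keepdims=True)
    cur = y[:, 1:] - y[:, 1:].mean(axis=1, keepdims=True)
    return np.sum(lag * cur) / np.sum(lag ** 2)

def bias_corrected_ar1(y):
    """Correct the O(1/T) incidental-parameter bias of the within
    estimator (an assumed simple form of the correction)."""
    r = within_ar1(y)
    return r + (1 + r) / y.shape[1]
```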

Journal ArticleDOI
TL;DR: Simulation studies show that this estimator compares well with maximum likelihood estimators (i.e., empirical Bayes estimators from the Bayesian viewpoint) for which an iterative numerical procedure is needed and may be infeasible.
Abstract: Consider a stochastic abundance model in which the species arrive in the sample according to independent Poisson processes, where the abundance parameters of the processes follow a gamma distribution. We propose a new estimator of the number of species for this model. The estimator takes the form of the number of duplicated species (i.e., species represented by two or more individuals) divided by an estimated duplication fraction. The duplication fraction is estimated from all frequencies including singleton information. The new estimator is closely related to the sample coverage estimator presented by Chao and Lee (1992, Journal of the American Statistical Association 87, 210-217). We illustrate the procedure using the Malayan butterfly data discussed by Fisher, Corbet, and Williams (1943, Journal of Animal Ecology 12, 42-58) and a 1989 Christmas Bird Count dataset collected in Florida, U.S.A. Simulation studies show that this estimator compares well with maximum likelihood estimators (i.e., empirical Bayes estimators from the Bayesian viewpoint) for which an iterative numerical procedure is needed and may be infeasible.

Journal ArticleDOI
TL;DR: In this paper, a global smoothing procedure is developed using basis function approximations for estimating the parameters of a varying-coefficient model with repeated measurements, which applies whether or not the covariates are time invariant and does not require binning of the data when observations are sparse at distinct observation times.
Abstract: SUMMARY A global smoothing procedure is developed using basis function approximations for estimating the parameters of a varying-coefficient model with repeated measurements. Inference procedures based on a resampling subject bootstrap are proposed to construct confidence regions and to perform hypothesis testing. Conditional biases and variances of our estimators and their asymptotic consistency are developed explicitly. Finite sample properties of our procedures are investigated through a simulation study. Application of the proposed approach is demonstrated through an example in epidemiology. In contrast to the existing methods, this approach applies whether or not the covariates are timeinvariant and does not require binning of the data when observations are sparse at distinct observation times.

Journal ArticleDOI
TL;DR: In this paper, a Cox-Ingersoll-Ross model with parameters calibrated to match monthly observations of the U.S. short-term interest rate is used as a test case.
Abstract: Stochastic differential equations often provide a convenient way to describe the dynamics of economic and financial data, and a great deal of effort has been expended searching for efficient ways to estimate models based on them. Maximum likelihood is typically the estimator of choice; however, since the transition density is generally unknown, one is forced to approximate it. The simulation-based approach suggested by Pedersen (1995) has great theoretical appeal, but previously available implementations have been computationally costly. We examine a variety of numerical techniques designed to improve the performance of this approach. Synthetic data generated by a Cox-Ingersoll-Ross model with parameters calibrated to match monthly observations of the U.S. short-term interest rate are used as a test case. Since the likelihood function of this process is known, the quality of the approximations can be easily evaluated. On datasets with 1,000 observations, we are able to approximate the maximum likelihood e...

Journal ArticleDOI
TL;DR: The numerically integrated state-space (NISS) method as mentioned in this paper was proposed to fit models to time series of population abundances that incorporate both process noise and observation error in a likelihood framework.
Abstract: We evaluate a method for fitting models to time series of population abundances that incorporates both process noise and observation error in a likelihood framework. The method follows the probability logic of the Kalman filter, but whereas the Kalman filter applies to linear, Gaussian systems, we implement the full probability calculations numerically so that any nonlinear, non-Gaussian model can be used. We refer to the method as the "numerically integrated state-space (NISS) method" and compare it to two common methods used to analyze nonlinear time series in ecology: least squares with only process noise (LSPN) and least squares with only observation error (LSOE). We compare all three methods by fitting Beverton-Holt and Ricker models to many replicate model-generated time series of length 20 with several parameter choices. For the Ricker model we chose parameters for which the deterministic part of the model produces a stable equilibrium, a two-cycle, or a four-cycle. For each set of parameters we used three process-noise and observation-error scenarios: large standard deviation (0.2) for both, and large for one but small (0.05) for the other. The NISS method had lower estimator bias and variance than the other methods in nearly all cases. The only exceptions were for the Ricker model with stable-equilibrium parameters, in which case the LSPN and LSOE methods had lower bias when noise variances most closely met their assumptions. For the Beverton-Holt model, the NISS method was much less biased and more precise than the other methods. We also evaluated the utility of each method for model selection by fitting simulated data to both models and using information criteria for selection. The NISS and LSOE methods showed a strong bias toward selecting the Ricker over the Beverton-Holt, even when data were generated with the Beverton-Holt. It remains unclear whether the LSPN method is generally superior for model selection or has fortuitously better biases in this particular case. These results suggest that information criteria are best used with caution for nonlinear population models with short time series. Finally we evaluated the convergence of likelihood ratios to theoretical asymptotic distributions. Agreement with asymptotic distributions was very good for stable-point Ricker parameters, less accurate for two-cycle and four-cycle Ricker parameters, and least accurate for the Beverton-Holt model. The numerically integrated state-space method has a number of advantages over least squares methods and offers a useful tool for connecting models and data in ecology.
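A minimal sketch of the grid-based filtering idea behind the NISS method, for a Ricker model with lognormal process and observation noise; the grid, the flat initial prior, and the dropped normalizing constants are simplifications for illustration.

```python
import numpy as np

def niss_loglik(obs, r, K, sig_proc, sig_obs, grid):
    """Log-likelihood (up to constants) of observed abundances under
    N_{t+1} = N_t * exp(r * (1 - N_t / K)) with lognormal process and
    observation noise, computed by carrying the state density on a grid
    and applying Kalman-filter-style predict/update steps numerically."""
    lx = np.log(grid)                           # log-abundance grid
    dens = np.full(len(grid), 1.0 / len(grid))  # flat prior over states
    ll = 0.0
    for y in obs:
        # Update: weight the predictive density by p(y | state).
        like = np.exp(-0.5 * ((np.log(y) - lx) / sig_obs) ** 2)
        dens = dens * like
        ll += np.log(dens.sum())                # p(y_t | past), up to constants
        dens /= dens.sum()
        # Predict: push the density through the Ricker map + process noise.
        mean_next = lx + r * (1 - grid / K)
        trans = np.exp(-0.5 * ((lx[:, None] - mean_next[None, :]) / sig_proc) ** 2)
        dens = trans @ dens
        dens /= dens.sum()
    return ll
```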

Journal ArticleDOI
TL;DR: In this article, the authors examined inference on regressions when interval data are available on one variable, the other variables being measured precisely, and found that the IMMI Assumptions alone imply simple nonparametric bounds on E(y|x, v) and E(v|x) and combined with a semiparametric binary regression model yield an identification region for the parameters that may be estimated consistently by modified maximum score (MMS) method.
Abstract: This paper examines inference on regressions when interval data are available on one variable, the other variables being measured precisely. Let a population be characterized by a distribution \(P(y, x, v, v_0, v_1)\), where \(y \in R^1\), \(x \in R^k\), and the real variables \((v, v_0, v_1)\) satisfy \(v_0 \le v \le v_1\). Let a random sample be drawn from \(P\) and the realizations of \((y, x, v_0, v_1)\) be observed, but not those of \(v\). The problem of interest may be to infer \(E(y|x, v)\) or \(E(v|x)\). This analysis maintains Interval (I), Monotonicity (M), and Mean Independence (MI) assumptions: (I) \(P(v_0 \le v \le v_1) = 1\); (M) \(E(y|x, v)\) is monotone in \(v\); (MI) \(E(y|x, v, v_0, v_1) = E(y|x, v)\). No restrictions are imposed on the distribution of the unobserved values of \(v\) within the observed intervals \([v_0, v_1]\). It is found that the IMMI assumptions alone imply simple nonparametric bounds on \(E(y|x, v)\) and \(E(v|x)\). These assumptions, invoked when \(y\) is binary and combined with a semiparametric binary regression model, yield an identification region for the parameters that may be estimated consistently by a modified maximum score (MMS) method. The IMMI assumptions combined with a parametric model for \(E(y|x, v)\) or \(E(v|x)\) yield an identification region that may be estimated consistently by a modified minimum-distance (MMD) method. Monte Carlo methods are used to characterize the finite-sample performance of these estimators. Empirical case studies are performed using interval wealth data in the Health and Retirement Study and interval income data in the Current Population Survey.
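As a concrete illustration of the simplest of these bounds: under assumption (I) alone, the mean of the unobserved \(v\) is bounded by the means of the bracket endpoints, since any distribution of \(v\) inside the brackets is allowed.

```python
import numpy as np

def interval_mean_bounds(v0, v1):
    """Worst-case bounds on E(v) when only brackets v0 <= v <= v1 are
    observed: E(v) can be anywhere in [mean(v0), mean(v1)]."""
    return np.mean(v0), np.mean(v1)

# Hypothetical bracketed wealth responses (in $1000s).
lo = np.array([50.0, 100.0, 0.0, 250.0])
hi = np.array([100.0, 250.0, 50.0, 500.0])
print(interval_mean_bounds(lo, hi))   # (100.0, 225.0)
```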

Journal ArticleDOI
TL;DR: In this paper, a transformed likelihood approach is suggested to estimate fixed effects dynamic panel data models and conditions on the data generating process of the exogenous variables are given to get around the issue of "incidental parameters".

Journal ArticleDOI
TL;DR: In this paper, a general framework for identification, estimation, and hypothesis testing in cointegrated systems when the cointegrating coefficients are subject to (possibly) non-linear and cross-equation restrictions, obtained from economic theory or other relevant a priori information is developed.
Abstract: The paper develops a general framework for identification, estimation, and hypothesis testing in cointegrated systems when the cointegrating coefficients are subject to (possibly) non-linear and cross-equation restrictions, obtained from economic theory or other relevant a priori information. It provides a proof of the consistency of the quasi maximum likelihood estimators (QMLE), establishes the relative rates of convergence of the QMLE of the short-run and the long-run parameters, and derives their asymptotic distributions; thus generalizing the results already available in the literature for the linear case. The paper also develops tests of the over-identifying (possibly) non-linear restrictions on the cointegrating vectors. The estimation and hypothesis testing procedures are applied to an Almost Ideal Demand System estimated on U.K. quarterly observations. Unlike many other studies of consumer demand this application does not treat relative prices and real per capita expenditures as exogenously given.

Journal ArticleDOI
TL;DR: In this article, the authors show that the matching approach per se is no magic bullet solving all problems of evaluation studies, but that its success depends critically on the information available in the sample.
Abstract: Recently several studies analysed active labour market policies using a newly proposed matching estimator for multiple programmes. Since there is only very limited practical experience with this estimator, this paper checks its sensitivity with respect to issues that are of practical importance. The estimator turns out to be fairly robust to several matters that concern its implementation. Furthermore, the paper demonstrates that the matching approach per se is no magic bullet solving all problems of evaluation studies, but that its success depends critically on the information available in the sample. Finally, a comparison with a bootstrap distribution provides some justification for using a simplified approximation of the distribution of the estimator that ignores its sequential nature.

Journal ArticleDOI
TL;DR: An estimator of location and scatter based on a modified version of the Gnanadesikan–Kettenring robust covariance estimate is proposed, which is as good as or better than SD and FMCD at detecting outliers and other structures, with much shorter computing times.
Abstract: The computing times of high-breakdown point estimates of multivariate location and scatter increase rapidly with the number of variables, which makes them impractical for high-dimensional datasets, such as those used in data mining. We propose an estimator of location and scatter based on a modified version of the Gnanadesikan–Kettenring robust covariance estimate. We compare its behavior with that of the Stahel–Donoho (SD) and Rousseeuw and Van Driessen's fast MCD (FMCD) estimates. In simulations with contaminated multivariate normal data, our estimate is almost as good as SD and clearly better than FMCD. It is much faster than both, especially for large dimension. We give examples with real data with dimensions between 5 and 93, in which the proposed estimate is as good as or better than SD and FMCD at detecting outliers and other structures, with much shorter computing times.
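A minimal sketch of the pairwise Gnanadesikan–Kettenring covariance idea that the proposed estimate builds on, using the identity cov(u, v) = (s(u+v)^2 - s(u-v)^2) / 4 with a robust scale s; the modification and orthogonalization steps the authors add are omitted here.

```python
import numpy as np

def mad_scale(x):
    """Median absolute deviation, scaled to be consistent for the
    standard deviation at the normal distribution."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def gk_covariance(X):
    """Pairwise robust covariance matrix from the GK identity; the
    result is symmetric but not guaranteed positive definite, which is
    one reason the paper's estimator adds further steps."""
    p = X.shape[1]
    C = np.empty((p, p))
    for j in range(p):
        C[j, j] = mad_scale(X[:, j]) ** 2
        for k in range(j + 1, p):
            s1 = mad_scale(X[:, j] + X[:, k])
            s2 = mad_scale(X[:, j] - X[:, k])
            C[j, k] = C[k, j] = (s1 ** 2 - s2 ** 2) / 4.0
    return C
```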