
Showing papers in "Annals of Statistics in 1989"


Journal ArticleDOI
TL;DR: In this article, the authors extend the jackknife and the bootstrap method of estimating standard errors to the case where the observations form a general stationary sequence, and they show that consistency is obtained if $l = l(n) \rightarrow \infty$ and $l(n)/n \rightarrow 0$.
Abstract: We extend the jackknife and the bootstrap method of estimating standard errors to the case where the observations form a general stationary sequence. We do not attempt a reduction to i.i.d. values. The jackknife calculates the sample variance of replicates of the statistic obtained by omitting each block of $l$ consecutive data once. In the case of the arithmetic mean this is shown to be equivalent to a weighted covariance estimate of the spectral density of the observations at zero. Under appropriate conditions consistency is obtained if $l = l(n) \rightarrow \infty$ and $l(n)/n \rightarrow 0$. General statistics are approximated by an arithmetic mean. In regular cases this approximation determines the asymptotic behavior. Bootstrap replicates are constructed by selecting blocks of length $l$ randomly with replacement among the blocks of observations. The procedures are illustrated by using the sunspot numbers and some simulated data.
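The block resampling scheme described above is short enough to sketch. Below is a minimal Python illustration of a moving-block bootstrap standard error, assuming overlapping blocks of length l and treating the statistic as a plug-in function of the resampled series; the function names and the AR(1) example are illustrative, not the authors' code.

```python
import numpy as np

def moving_block_bootstrap_se(x, l, statistic=np.mean, n_boot=1000, rng=None):
    """Moving-block bootstrap standard error for a statistic of a stationary series.

    Blocks of length l are drawn with replacement from the overlapping blocks
    x[i:i+l], i = 0, ..., n-l, and concatenated to form each pseudo-series.
    A sketch of the resampling scheme described in the abstract, not the paper's code.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks = int(np.ceil(n / l))
    reps = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - l + 1, size=n_blocks)
        pseudo = np.concatenate([x[s:s + l] for s in starts])[:n]
        reps[b] = statistic(pseudo)
    return reps.std(ddof=1)

# Example: standard error of the mean of an AR(1) series, block length l ~ n**(1/3).
rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
print(moving_block_bootstrap_se(x, l=int(n ** (1 / 3))))
```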

2,185 citations


Journal ArticleDOI
TL;DR: It is shown that backfitting is the Gauss-Seidel iterative method for solving a set of normal equations associated with the additive model and conditions for consistency and nondegeneracy are provided and convergence is proved for the backfitting and related algorithms for a class of smoothers that includes cubic spline smoothers.
Abstract: We study linear smoothers and their use in building nonparametric regression models. In the first part of this paper we examine certain aspects of linear smoothers for scatterplots; examples of these are the running-mean and running-line, kernel and cubic spline smoothers. The eigenvalue and singular value decompositions of the corresponding smoother matrix are used to describe qualitatively a smoother, and several other topics such as the number of degrees of freedom of a smoother are discussed. In the second part of the paper we describe how linear smoothers can be used to estimate the additive model, a powerful nonparametric regression model, using the "backfitting algorithm." We show that backfitting is the Gauss-Seidel iterative method for solving a set of normal equations associated with the additive model. We provide conditions for consistency and nondegeneracy and prove convergence for the backfitting and related algorithms for a class of smoothers that includes cubic spline smoothers.
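The backfitting iteration is compact enough to sketch. The Python code below cycles a simple running-mean smoother over partial residuals, which is the Gauss-Seidel structure the paper analyzes; the smoother, the fixed iteration count, and the function names are illustrative choices, not the authors' implementation.

```python
import numpy as np

def running_mean_smoother(x, y, window=11):
    """A simple linear smoother: centered running mean of y ordered by x."""
    order = np.argsort(x)
    smoothed_sorted = np.convolve(y[order], np.ones(window) / window, mode="same")
    smoothed = np.empty_like(y, dtype=float)
    smoothed[order] = smoothed_sorted
    return smoothed

def backfit(X, y, n_iter=20, smoother=running_mean_smoother):
    """Backfitting for the additive model y = alpha + sum_j f_j(x_j) + error.

    Each f_j is updated by smoothing the partial residuals
    y - alpha - sum_{k != j} f_k against the j-th column of X (Gauss-Seidel sweeps).
    """
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))
    for _ in range(n_iter):
        for j in range(p):
            others = [k for k in range(p) if k != j]
            partial_resid = y - alpha - f[:, others].sum(axis=1)
            f[:, j] = smoother(X[:, j], partial_resid)
            f[:, j] -= f[:, j].mean()  # center each component for identifiability
    return alpha, f
```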

1,023 citations


Journal ArticleDOI
TL;DR: Asymptotic normality of the maximum likelihood estimator for the parameters of a long range dependent Gaussian process is proved in this paper, where the limit of the Fisher information matrix is derived for such processes which implies efficiency of the estimator.
Abstract: Asymptotic normality of the maximum likelihood estimator for the parameters of a long range dependent Gaussian process is proved. Furthermore, the limit of the Fisher information matrix is derived for such processes which implies efficiency of the estimator and of an approximate maximum likelihood estimator studied by Fox and Taqqu. The results are derived by using asymptotic properties of Toeplitz matrices and an equicontinuity property of quadratic forms.

891 citations


Journal ArticleDOI
TL;DR: In this article, the authors define and investigate classes of statistical models for the analysis of associations between variables, some of which are qualitative and some quantitative, and characterize the subclass of decomposable models where the statistical theory is especially simple.
Abstract: We define and investigate classes of statistical models for the analysis of associations between variables, some of which are qualitative and some quantitative. In the cases where only one kind of variables is present, the models are well-known models for either contingency tables or covariance structures. We characterize the subclass of decomposable models where the statistical theory is especially simple. All models can be represented by a graph with one vertex for each variable. The vertices are possibly connected with arrows or lines corresponding to directional or symmetric associations being present. Pairs of vertices that are not connected are conditionally independent given some of the remaining variables according to specific rules.

742 citations


Journal ArticleDOI
TL;DR: In this article, the authors generalise the well-known Hill estimator of the index of a distribution function with a regularly varying tail to an estimator of the index of an extreme-value distribution.
Abstract: The well-known Hill estimator of the index of a distribution function with a regularly varying tail is generalized to an estimator of the index of an extreme-value distribution. Consistency and asymptotic normality are proved. The estimator is used for related estimation problems, such as the estimation of a high quantile and of an endpoint of the distribution.
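As an illustration, the classical Hill estimator and the moment-type generalization commonly attributed to this paper can be computed from the top order statistics as sketched below; the exact form of the generalized estimator should be checked against the paper, and the function names are ours.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of a positive tail index from the k largest observations.

    Assumes the upper order statistics are positive (logarithms are taken).
    """
    xs = np.sort(np.asarray(x, dtype=float))
    logs = np.log(xs[-k:]) - np.log(xs[-k - 1])
    return logs.mean()

def moment_estimator(x, k):
    """Moment-type estimator of a real-valued extreme-value index.

    Uses the first two sample moments of the log-excesses over the (k+1)-th
    largest observation; this is the form usually associated with this paper,
    written here from memory and meant only as a sketch.
    """
    xs = np.sort(np.asarray(x, dtype=float))
    logs = np.log(xs[-k:]) - np.log(xs[-k - 1])
    m1, m2 = logs.mean(), (logs ** 2).mean()
    return m1 + 1.0 - 0.5 / (1.0 - m1 ** 2 / m2)
```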

613 citations


Journal ArticleDOI
TL;DR: In this article, the authors study the behavior of regression analysis when there might be some violation of the assumed link function, the functional form of the model which relates the outcome variable $y$ to the regressor variable $\mathbf{x}$ and the random error.
Abstract: We study the behavior of regression analysis when there might be some violation of the assumed link function, the functional form of the model which relates the outcome variable $y$ to the regressor variable $\mathbf{x}$ and the random error. We allow the true link function to be completely arbitrary, except that $y$ depends on $\mathbf{x}$ only through a linear combination $\beta\mathbf{x}$. The slope vector $\beta$ is identified only up to a multiplicative scalar. Under appropriate conditions, any maximum likelihood-type regression estimate is shown to be consistent for $\beta$ up to a multiplicative scalar, even though the estimate might be based on a misspecified link function. The crucial conditions are (1) the estimate is based on minimizing a criterion function $L(\theta, y)$ which is convex in $\theta$, where $\theta = a + b\mathbf{x}$, (2) the expected criterion function $E\lbrack L(a + b\mathbf{x}, y)\rbrack$ has a proper minimizer and (3) the regressor variable $\mathbf{x}$ is sampled randomly from a probability distribution such that $E(b\mathbf{x}\mid\beta\mathbf{x})$ is linear in $\beta\mathbf{x}$ for all linear combinations $b\mathbf{x}$. The least squares estimate, the GLM estimates and the $M$-estimates for robust regression are discussed in detail. These estimates are asymptotically normal. With the assumption that the regressor variable has an elliptically symmetric distribution, we show that under a scale-invariant null hypothesis of the form $H_0: \beta W = 0$, the asymptotic covariance matrix for $\hat{\beta}W$ is proportional to the one derived by treating the assumed link function as being true. The Wald test as well as the likelihood ratio test for a scale-invariant null hypothesis has the correct asymptotic null distribution after an appropriate rescaling of the test statistic to account for the proportionality constant between the two asymptotic covariance matrices. For normally distributed $\mathbf{x}$, the rescaling factor for $M$-estimates is the same as the one used in robust regression, while the rescaling factor for GLM estimates is related to adjustment for overdispersion. Confidence sets can be constructed by inverting Wald's tests. The impact of the violation of linear conditional expectation condition 3 is discussed. A new dimension is added to the regression diagnostics by exploring the elliptical symmetry of the design distribution. A connection between this work and adaptive estimation is briefly discussed.
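The central claim, that a least-squares fit under a misspecified link still estimates $\beta$ up to a multiplicative scalar when the regressors satisfy the linear conditional expectation condition (for example, Gaussian $\mathbf{x}$), is easy to check by simulation. The setup below is an illustration we constructed, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20000, 4
beta = np.array([2.0, -1.0, 0.5, 0.0])

# Gaussian regressors satisfy the linear conditional expectation condition (3).
X = rng.normal(size=(n, p))
# y depends on x only through beta @ x, via an unknown nonlinear link.
y = np.tanh(X @ beta) ** 3 + 0.1 * rng.normal(size=n)

# Ordinary least squares with an (incorrect) identity link.
Xc = np.column_stack([np.ones(n), X])
slopes = np.linalg.lstsq(Xc, y, rcond=None)[0][1:]

# The fitted slopes are approximately proportional to beta:
print(slopes / slopes[0])  # roughly beta / beta[0] = [1, -0.5, 0.25, 0]
```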

456 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present natural conditions for the strong consistency and asymptotic normality of Pickands' estimator for the main parameter of an extreme-value distribution.
Abstract: The article consists of two parts. A simple proof of the weak consistency of Pickands' estimator for the main parameter of an extreme-value distribution is given, along with further natural conditions for strong consistency and for asymptotic normality of the estimator. A high quantile of a distribution is then estimated by a combination of intermediate or extreme order statistics. This leads to an asymptotic confidence interval.
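For reference, Pickands' estimator in its usual textbook form is computed from three upper order statistics, as sketched below; the indexing convention here is one common choice and should be checked against the paper.

```python
import numpy as np

def pickands_estimator(x, k):
    """Pickands' estimator of the extreme-value index (textbook form, 4k <= n).

    Uses the order statistics X_(n-k+1), X_(n-2k+1), X_(n-4k+1) of the sample.
    """
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    a, b, c = xs[n - k], xs[n - 2 * k], xs[n - 4 * k]  # 0-based indexing
    return np.log((a - b) / (b - c)) / np.log(2.0)
```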

305 citations


Journal ArticleDOI
TL;DR: The delete-1 jackknife is known to give inconsistent variance estimators for nonsmooth estimators such as the sample quantiles as mentioned in this paper, which can be rectified by using a more general jackknife with $d$, the number of observations deleted, depending on a smoothness measure of the point estimator.
Abstract: The delete-1 jackknife is known to give inconsistent variance estimators for nonsmooth estimators such as the sample quantiles. This well-known deficiency can be rectified by using a more general jackknife with $d$, the number of observations deleted, depending on a smoothness measure of the point estimator. Our general theory explains why jackknife works or fails. It also shows that (i) for "sufficiently smooth" estimators, the jackknife variance estimators with bounded $d$ are consistent and asymptotically unbiased and (ii) for "nonsmooth" estimators, $d$ has to go to infinity at a rate explicitly determined by a smoothness measure to ensure consistency and asymptotic unbiasedness. Improved results are obtained for several classes of estimators. In particular, for the sample $p$-quantiles, the jackknife variance estimators with $d$ satisfying $n^{1/2}/d \rightarrow 0$ and $n - d \rightarrow \infty$ are consistent and asymptotically unbiased.
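A delete-$d$ jackknife variance estimate for a sample quantile can be sketched as follows; because averaging over all $\binom{n}{d}$ deletion patterns is infeasible, the sketch uses a Monte Carlo average over random subsets, and the scaling factor $(n - d)/d$ is the usual delete-$d$ convention. Function names and the choice of $d$ in the example are illustrative.

```python
import numpy as np

def delete_d_jackknife_var(x, d, statistic, n_subsets=2000, rng=None):
    """Delete-d jackknife variance estimate, approximated over random deletion sets.

    v = ((n - d) / d) * average over subsets s of (theta_hat(x minus s) - mean)^2;
    the exact average over all C(n, d) subsets is replaced by a Monte Carlo one.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    reps = np.empty(n_subsets)
    for i in range(n_subsets):
        drop = rng.choice(n, size=d, replace=False)
        keep = np.setdiff1d(np.arange(n), drop)
        reps[i] = statistic(x[keep])
    return (n - d) / d * np.mean((reps - reps.mean()) ** 2)

# Example: sample median with d growing faster than n**0.5,
# so that n**(1/2)/d -> 0 while n - d -> infinity, as required above.
x = np.random.default_rng(2).normal(size=400)
d = int(np.sqrt(400) * np.log(400))
print(delete_d_jackknife_var(x, d, np.median))
```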

295 citations


Journal ArticleDOI
TL;DR: In this paper, the relation between S-estimators and M-stimators of multivariate location and covariance is discussed and the influence function IF (x;S F) of S-functionals exists and is the same as that of corresponding M-functional.
Abstract: We discuss the relation between S-estimators and M-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, S-estimators are shown to satisfy first-order conditions of M-estimators. We show that the influence function IF(x; S, F) of S-functionals exists and is the same as that of the corresponding M-functionals. Also, we show that S-estimators have a limiting normal distribution which is similar to that of M-estimators. Finally, we compare asymptotic variances and breakdown points of both types of estimators.

292 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider a stationary, mean-zero Gaussian process with long-range dependent covariances and derive the asymptotic behavior of suitably normalized von Mises statistics and $U$-statistics based on the two-parameter empirical process.
Abstract: Let $(X_j)^\infty_{j = 1}$ be a stationary, mean-zero Gaussian process with covariances $r(k) = EX_{k + 1} X_1$ satisfying $r(0) = 1$ and $r(k) = k^{-D}L(k)$ where $D$ is small and $L$ is slowly varying at infinity. Consider the two-parameter empirical process for $G(X_j),$ $\bigg\{F_N(x, t) = \frac{1}{N} \sum^{\lbrack Nt \rbrack}_{j = 1} \lbrack 1\{G(X_j) \leq x\} - P(G(X_1) \leq x) \rbrack; -\infty < x < + \infty, 0 \leq t \leq 1\bigg\},$ where $G$ is any measurable function. Noncentral limit theorems are obtained for $F_N(x, t)$ and they are used to derive the asymptotic behavior of some suitably normalized von Mises statistics and $U$-statistics based on the $G(X_j)$'s. The limiting processes are structurally different from those encountered in the i.i.d. case.

240 citations


Journal ArticleDOI
TL;DR: In this paper, a class of nonparametric regression estimates introduced by Beran to estimate conditional survival functions in the presence of right censoring is considered, and an exponential probability bound for the tails of distributions of kernel estimates of conditional survival functions is derived.
Abstract: We consider a class of nonparametric regression estimates introduced by Beran to estimate conditional survival functions in the presence of right censoring. An exponential probability bound for the tails of distributions of kernel estimates of conditional survival functions is derived. This inequality is next used to prove weak and strong uniform consistency results. The developments rest on sharp exponential bounds for the oscillation modulus of multivariate empirical processes obtained by Stute.
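A kernel-weighted product-limit (Beran-type) estimate of a conditional survival function can be sketched as below; the product-limit form and the Gaussian kernel are standard choices written from memory, so details may differ from Beran's original definition or the estimates studied in the paper.

```python
import numpy as np

def beran_survival(t_grid, x0, times, delta, x, bandwidth):
    """Kernel-weighted product-limit estimate of S(t | x0) under right censoring.

    times: observed times (minimum of survival and censoring times);
    delta: 1 if the event was observed, 0 if censored;
    x:     covariate values, weighted by a Gaussian kernel centered at x0.
    """
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    w = w / w.sum()
    order = np.argsort(times)
    times_s, delta_s, w_s = times[order], delta[order], w[order]
    # Kernel mass still "at risk" just before each ordered observation.
    at_risk = np.concatenate(([1.0], 1.0 - np.cumsum(w_s)[:-1]))
    factors = np.where((delta_s == 1) & (at_risk > 0), 1.0 - w_s / at_risk, 1.0)
    return np.array([np.prod(factors[times_s <= t]) for t in t_grid])
```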

Journal ArticleDOI
TL;DR: In this article, the asymptotic behavior of some nonparametric tests is studied in situations where both bootstrap tests and randomization tests are applicable, and it is shown that the tests are equivalent in the sense that the resulting critical values and power functions are appropriately close, and that the difference in the critical functions of the tests, evaluated at the observed data, tends to 0 in probability.
Abstract: In this paper, the asymptotic behavior of some nonparametric tests is studied in situations where both bootstrap tests and randomization tests are applicable. Under fairly general conditions, the tests are asymptotically equivalent in the sense that the resulting critical values and power functions are appropriately close. This implies, among other things, that the difference in the critical functions of the tests, evaluated at the observed data, tends to 0 in probability. Randomization tests may be preferable since an exact desired level of the test may be obtained for finite samples. Examples considered are: testing independence, testing for spherical symmetry, testing for exchangeability, testing for homogeneity, and testing for a change point.
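One of the listed examples, testing independence, shows the randomization side concretely: permuting one coordinate gives a test with exact finite-sample level, which is the advantage mentioned above. The statistic and names below are illustrative.

```python
import numpy as np

def randomization_test_independence(x, y, n_perm=5000, rng=None):
    """Permutation (randomization) test of independence based on |correlation|.

    Under the null of independence, permuting y leaves the joint distribution
    unchanged, so comparing the observed statistic with its permutation
    distribution yields an exact-level test.
    """
    rng = np.random.default_rng(rng)
    obs = abs(np.corrcoef(x, y)[0, 1])
    perm = np.array([abs(np.corrcoef(x, rng.permutation(y))[0, 1])
                     for _ in range(n_perm)])
    # Including the observed value in the count keeps the level exact.
    return (1 + np.sum(perm >= obs)) / (n_perm + 1)
```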

Journal ArticleDOI
TL;DR: In this paper, an analog of the spectral analysis of time series is developed for data in general spaces, applied to data from an election in which 5738 people rank ordered five candidates.
Abstract: An analog of the spectral analysis of time series is developed for data in general spaces. This is applied to data from an election in which 5738 people rank ordered five candidates. Group theoretic considerations offer an analysis of variance like decomposition which seems natural and fruitful. A variety of inferential tools are suggested. The spectral ideas are then extended to general homogeneous spaces such as the sphere.

Journal ArticleDOI
TL;DR: In this article, the authors generalize the weighted normal plot to accommodate dependent, non-identically distributed observations subject to multiple random effects for each individual unit under study, and compare the expected and empirical cumulative distribution functions of standardized linear combinations of estimated residuals for each of the individual units.
Abstract: When one uses the unbalanced, mixed linear model $\mathbf{y}_i = \mathbf{X}_i\mathbf{\alpha} + \mathbf{Z}_i\mathbf{\beta}_i + \varepsilon_i, i = 1, \cdots, n$ to analyze data from longitudinal experiments with continuous outcomes, it is customary to assume $\varepsilon_i \sim_{\operatorname{ind}} \mathscr{N}(\mathbf{0}, \sigma^2\mathbf{I}_i)$ independent of $\mathbf{\beta}_i \sim_{\operatorname{iid}} \mathscr{N}(\mathbf{0,\Delta})$, where $\sigma^2$ and the elements of an arbitrary $\mathbf{\Delta}$ are unknown variance and covariance components. In this paper, we describe a method for checking model adequacy and, in particular, the distributional assumption on the random effects $\mathbf{\beta}_i$. We generalize the weighted normal plot to accommodate dependent, nonidentically distributed observations subject to multiple random effects for each individual unit under study. One can detect various departures from the normality assumption by comparing the expected and empirical cumulative distribution functions of standardized linear combinations of estimated residuals for each of the individual units. Through application of distributional results for a certain class of estimators to our context, we adjust the estimated covariance of the empirical cumulative distribution function to account for estimation of unknown parameters. Several examples of our method demonstrate its usefulness in the analysis of longitudinal data.

Journal ArticleDOI
TL;DR: A general method for comparing forecasters after a finite number of trials is introduced; it is proven to include all proper scoring rules as special cases and is translated into a method for deciding who will give better forecasts in the future.
Abstract: A probability assessor or forecaster is a person who assigns subjective probabilities to events which will eventually occur or not occur. There are two purposes for which one might wish to compare two forecasters. The first is to see who has given better forecasts in the past. The second is to decide who will give better forecasts in the future. A method of comparison suitable for the first purpose may not be suitable for the second and vice versa. A criterion called calibration has been suggested for comparing the forecasts of different forecasters. Calibration, in a frequency sense, is a function of long run (future) properties of forecasts and hence is not suitable for making comparisons in the present. A method for comparing forecasters based on past performance is the use of scoring rules. In this paper a general method for comparing forecasters after a finite number of trials is introduced. The general method is proven to include calculating all proper scoring rules as special cases. It also includes comparison of forecasters in all simple two-decision problems as special cases. The relationship between the general method and calibration is also explored. The general method is also translated into a method for deciding who will give better forecasts in the future. An example is given using weather forecasts.
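Scoring rules, one of the special cases the general method is said to include, are easy to illustrate: below, two hypothetical forecasters' past probability forecasts are compared with the Brier score (a proper scoring rule). The data are made up for illustration; the paper's general comparison method itself is not implemented here.

```python
import numpy as np

def brier_score(forecasts, outcomes):
    """Mean Brier score, a proper scoring rule; smaller is better."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return np.mean((forecasts - outcomes) ** 2)

# Hypothetical track records of two forecasters over the same ten events.
outcomes     = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
forecaster_a = np.array([0.9, 0.2, 0.8, 0.7, 0.3, 0.1, 0.6, 0.4, 0.8, 0.9])
forecaster_b = np.array([0.6, 0.5, 0.6, 0.5, 0.5, 0.4, 0.5, 0.5, 0.6, 0.6])

print(brier_score(forecaster_a, outcomes), brier_score(forecaster_b, outcomes))
```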

Journal ArticleDOI
TL;DR: In this paper, a tractable mathematical model for kernel-based projection pursuit regression approximation is presented, which permits computation of explicit formulae for bias and variance of estimators.
Abstract: We construct a tractable mathematical model for kernel-based projection pursuit regression approximation. The model permits computation of explicit formulae for bias and variance of estimators. It is shown that the bias of an orientation estimate dominates error about the mean--indeed, the latter is asymptotically negligible in comparison with bias. However, bias and error about the mean are of the same order in the case of projection pursuit curve estimates. Implications of our formulae for bias and variance are discussed.

Journal ArticleDOI
TL;DR: In this article, the authors show that smoothing appropriately can improve the convergence rate of a quantile variance estimator from $n^{-1/4}$ for the unsmoothed bootstrap to $n^{-(1/2) + \varepsilon}$, for arbitrary $\varepsilon > 0$.
Abstract: Recent attention has focussed on possible improvements in performance of estimators which might flow from using the smoothed bootstrap. We point out that in a great many problems, such as those involving functions of vector means, any such improvements will be only second-order effects. However, we argue that substantial and significant improvements can occur in problems where local properties of underlying distributions play a decisive role. This situation often occurs in estimating the variance of an estimator defined in an $L^1$ setting; we illustrate in the special case of the variance of a quantile estimator. There we show that smoothing appropriately can improve estimator convergence rate from $n^{-1/4}$ for the unsmoothed bootstrap to $n^{-(1/2) + \varepsilon}$, for arbitrary $\varepsilon > 0$. We provide a concise description of the smoothing parameter which optimizes the convergence rate.
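The smoothed bootstrap amounts to resampling from a kernel density estimate, that is, resampling with replacement and adding kernel noise with bandwidth h, the smoothing parameter whose optimal rate the paper describes. The sketch below uses a Gaussian kernel and an arbitrary bandwidth purely for illustration.

```python
import numpy as np

def smoothed_bootstrap_var(x, statistic, h, n_boot=2000, rng=None):
    """Smoothed-bootstrap variance estimate of a statistic (e.g. a sample quantile).

    Each resample draws n values with replacement and perturbs them with
    N(0, h^2) noise, i.e. it samples from a Gaussian kernel density estimate;
    h = 0 recovers the ordinary (unsmoothed) bootstrap.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(x, size=n, replace=True) + h * rng.normal(size=n)
        reps[b] = statistic(resample)
    return reps.var(ddof=1)

# Variance of the sample median, smoothed versus unsmoothed.
x = np.random.default_rng(3).normal(size=200)
print(smoothed_bootstrap_var(x, np.median, h=0.3),
      smoothed_bootstrap_var(x, np.median, h=0.0))
```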

Journal ArticleDOI
TL;DR: Projection pursuit regression and kernel regression are methods for estimating a smooth function of several variables from noisy data obtained at scattered sites as discussed by the authors, and they are complementary: for a given function, if one method offers a dimensionality reduction, the other does not.
Abstract: Projection pursuit regression and kernel regression are methods for estimating a smooth function of several variables from noisy data obtained at scattered sites. Methods based on local averaging can perform poorly in high dimensions (curse of dimensionality). Intuition and examples have suggested that projection based approaches can provide better fits. For what sorts of regression functions is this true? When and by how much do projection methods reduce the curse of dimensionality? We make a start by focusing on the two-dimensional problem and study the $L^2$ approximation error (bias) of the two procedures with respect to Gaussian measure. Let RA stand for a certain PPR-type approximation and KA for a particular kernel-type approximation. Building on a simple but striking duality for polynomials, we show that RA behaves significantly better than the minimax rate of approximation for radial functions, while KA performs significantly better than the minimax rate for harmonic functions. In fact, the rate improvements carry over to large classes, RA behaving very well for functions with enough angular smoothness (oscillating slowly with angle), while KA behaves very well for functions with enough Laplacian smoothness, (oscillations averaging out locally). The rate improvements matter: They are equivalent to lowering the dimensionality of the problem. For example, for functions with nice tail behavior, RA behaves as if the dimensionality of the problem were 1.5 rather than its nominal value 2. Also, RA and KA are complementary: For a given function, if one method offers a dimensionality reduction, the other does not.

Journal ArticleDOI
TL;DR: In this article, the authors considered the problem of constructing honest confidence regions for nonparametric regression and established a lower rate of convergence for the size of the confidence region, and demonstrated the achievability of this rate using Stein's estimates and the associated unbiased risk estimates.
Abstract: The problem of constructing honest confidence regions for nonparametric regression is considered. A lower rate of convergence, $n^{-1/4}$, for the size of the confidence region is established. The achievability of this rate is demonstrated using Stein's estimates and the associated unbiased risk estimates. Practical implications are discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the asymptotic behavior of estimators of the optimal value and optimal solutions of a stochastic program and show that, in the presence of inequality constraints, the estimators are not asymptotically normal in general.
Abstract: The aim of this article is to investigate the asymptotic behaviour of estimators of the optimal value and optimal solutions of a stochastic program. These estimators are closely related to the $M$-estimators introduced by Huber (1964). The parameter set of feasible solutions is supposed to be defined by a number of equality and inequality constraints. It will be shown that in the presence of inequality constraints the estimators are not asymptotically normal in general. Maximum likelihood and robust regression methods will be discussed as examples.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of robustness or sensitivity of given Bayesian posterior criteria to specification of the prior distribution, including the posterior mean, variance and probability of a set (for credible regions and hypothesis testing).
Abstract: We consider the problem of robustness or sensitivity of given Bayesian posterior criteria to specification of the prior distribution. Criteria considered include the posterior mean, variance and probability of a set (for credible regions and hypothesis testing). Uncertainty in an elicited prior, $\pi_0$, is modelled by an $\varepsilon$-contamination class $\Gamma = \{\pi = (1 - \varepsilon)\pi_0 + \varepsilon q, q \in Q\}$, where $\varepsilon$ reflects the amount of probabilistic uncertainty in $\pi_0$, and $Q$ is a class of allowable contaminations. For $Q = \{$all unimodal distributions$\}$ and $Q = \{\text{all symmetric unimodal distributions}\}$, we determine the ranges of the various posterior criteria as $\pi$ varies over $\Gamma$.

Journal ArticleDOI
TL;DR: In this paper, the problem of minimizing the maximum asymptotic bias of regression estimates over $\varepsilon$-contamination neighborhoods for the joint distribution of the response and carriers is considered.
Abstract: This paper considers the problem of minimizing the maximum asymptotic bias of regression estimates over $\varepsilon$-contamination neighborhoods for the joint distribution of the response and carriers. Two classes of estimates are treated: (i) $M$-estimates with bounded function $\rho$ applied to the scaled residuals, using a very general class of scale estimates, and (ii) bounded influence function type generalized $M$-estimates. Estimates in the first class are obtained as the solution of a minimization problem, while estimates in the second class are specified by an estimating equation. The first class of $M$-estimates is sufficiently general to include both Huber Proposal 2 simultaneous estimates of regression coefficients and residuals scale, and Rousseeuw-Yohai $S$-estimates of regression. It is shown that an $S$-estimate based on a jump-function type $\rho$ solves the min-max bias problem for the class of $M$-estimates with very general scale. This estimate is obtained by the minimization of the $\alpha$-quantile of the squared residuals, where $\alpha = \alpha(\varepsilon)$ depends on the fraction of contamination $\varepsilon$. When $\varepsilon \rightarrow 0.5, \alpha(\varepsilon) \rightarrow 0.5$ and the min-max estimator approaches the least median of squared residuals estimator introduced by Rousseeuw. For the bounded influence class of $GM$-estimates, it is shown that the "sign" type nonlinearity yields the min-max estimate. This estimate coincides with the minimum gross-error sensitivity $GM$-estimate. For $p = 1$, the optimal $GM$-estimate is optimal among the class of all equivariant regression estimates. The min-max $S$-estimator has a breakdown point which is independent of the number of carriers $p$ and tends to 0.5 as $\varepsilon$ increases to 0.5, but has a slow rate of convergence. The min-max $GM$-estimate has the usual rate of convergence, but a breakdown point which decreases to zero with increasing $p$. Finally, we compare the min-max biases for both types of estimates, for the case where the nominal model is multivariate normal.
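The min-max S-estimate above is defined by minimizing the $\alpha$-quantile of the squared residuals. A crude, generic way to approximate such a fit (and, for $\alpha = 0.5$, the least median of squares estimate) is a random search over exact fits to elemental subsets, sketched below; this is not the authors' algorithm, only a common approximation device.

```python
import numpy as np

def quantile_of_squares_regression(X, y, alpha=0.5, n_trials=3000, rng=None):
    """Approximate minimizer of the alpha-quantile of squared residuals.

    Candidate coefficient vectors come from exact fits to random elemental
    subsets of p observations; the candidate with the smallest alpha-quantile
    of squared residuals is kept. For alpha = 0.5 this approximates the least
    median of squares estimate. Include a column of ones in X for an intercept.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    best_coef, best_crit = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            coef = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue  # singular elemental subset; skip it
        crit = np.quantile((y - X @ coef) ** 2, alpha)
        if crit < best_crit:
            best_coef, best_crit = coef, crit
    return best_coef
```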

Journal ArticleDOI
TL;DR: In this article, the authors use moment matrices and their determinants to elucidate the structure of mixture estimation as carried out using the method of moments and derive an asymptotically normal statistic for testing the true number of points in the mixing distribution.
Abstract: The use of moment matrices and their determinants is shown to elucidate the structure of mixture estimation as carried out using the method of moments. The setting is the estimation of a discrete finite support point mixing distribution. In the important class of quadratic variance exponential families it is shown for any sample there is an integer $\hat{\nu}$ depending on the data which represents the maximal number of support points that one can put in the estimated mixing distribution. From this analysis one can derive an asymptotically normal statistic for testing the true number of points in the mixing distribution. In addition, one can construct consistent nonparametric estimates of the mixing distribution for the case when the number of points is unknown or even infinite. The normal model is then examined in more detail, and in particular the case when $\sigma^2$ is unknown is given a comprehensive solution. It is shown how to estimate the parameters in a direct way for every hypothesized number of support points in the mixing distribution, and it is shown how the structure of the problem yields a decomposition of variance into model and error components very similar to the traditional analysis of variance.

Journal ArticleDOI
TL;DR: In this article, the authors developed asymptotic theory for two polynomial-based methods of estimating orientation in projection pursuit density approximation, one using Legendre polynomials and the other employing Hermite functions.
Abstract: We develop asymptotic theory for two polynomial-based methods of estimating orientation in projection pursuit density approximation. One of the techniques uses Legendre polynomials and has been proposed and implemented by Friedman [1]. The other employs Hermite functions. Issues of smoothing parameter choice and robustness are addressed. It is shown that each method can be used to construct $\sqrt n$-consistent estimates of the projection which maximizes distance from normality, although the former can only be employed in that manner when the underlying distribution has extremely light tails. The former can be used very generally to measure "low-frequency" departure from normality.

Journal ArticleDOI
TL;DR: In this paper, a stochastic expansion for $M$-estimates in linear models with many parameters is derived under the weak condition $\kappa n^{1/3}(\log n)^{2/3} \rightarrow 0$, where $n$ is the sample size and $\kappa$ is the maximal diagonal element of the hat matrix; the expansion is used to study the asymptotic distribution of linear contrasts and the consistency of the bootstrap.
Abstract: A stochastic expansion for $M$-estimates in linear models with many parameters is derived under the weak condition $\kappa n^{1/3}(\log n)^{2/3} \rightarrow 0$, where $n$ is the sample size and $\kappa$ the maximal diagonal element of the hat matrix. The expansion is used to study the asymptotic distribution of linear contrasts and the consistency of the bootstrap. In particular, it turns out that bootstrap works in cases where the usual asymptotic approach fails.

Journal ArticleDOI
TL;DR: In this paper, an adaptive estimator for the slope parameters of the linear regression model is constructed based upon the "regression quantile" statistics suggested by Koenker and Bassett, which employ kernel-density type estimators of the optimal $L$-estimator weight function.
Abstract: Asymptotically efficient (adaptive) estimators for the slope parameters of the linear regression model are constructed based upon the "regression quantile" statistics suggested by Koenker and Bassett. The estimators are natural analogues of the adaptive $L$-estimators of location of Sacks, but employ kernel-density type estimators of the optimal $L$-estimator weight function.

Journal ArticleDOI
TL;DR: In this article, the authors give sufficient conditions for strong consistency of estimators for the order of general nonstationary autoregressive models based on the minimization of an information criterion a la Akaike's (1969) AIC.
Abstract: We give sufficient conditions for strong consistency of estimators for the order of general nonstationary autoregressive models based on the minimization of an information criterion a la Akaike's (1969) AIC. The case of a time-dependent error variance is also covered by the analysis. Furthermore, the more general case of regressor selection in stochastic regression models is treated.
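Order selection by minimizing an information criterion is straightforward to sketch for an autoregression fitted by least squares; the penalty of 2 per parameter below is the AIC-type choice, while a penalty growing with n (e.g., log n) is the kind of modification typically associated with strong consistency. The fitting details are illustrative and do not reproduce the paper's general nonstationary setting.

```python
import numpy as np

def select_ar_order(x, max_order, penalty=2.0):
    """Choose an AR order p by minimizing n_eff * log(sigma2_hat(p)) + penalty * p.

    sigma2_hat(p) is the residual variance of a least-squares AR(p) fit with
    intercept; all orders are compared on the same effective sample.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    y_full = x[max_order:]
    best_p, best_crit = 0, np.inf
    for p in range(max_order + 1):
        if p == 0:
            resid = y_full - y_full.mean()
        else:
            Z = np.column_stack([x[max_order - j:n - j] for j in range(1, p + 1)])
            Z = np.column_stack([np.ones(len(y_full)), Z])
            coef, *_ = np.linalg.lstsq(Z, y_full, rcond=None)
            resid = y_full - Z @ coef
        crit = len(y_full) * np.log(np.mean(resid ** 2)) + penalty * p
        if crit < best_crit:
            best_p, best_crit = p, crit
    return best_p
```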

Journal ArticleDOI
TL;DR: In this article, the asymptotic behavior of $M$-estimators of the regression parameter in a linear model in which the dimension of the regression parameter may increase with the sample size is related to the stochastic equicontinuity of an associated $M$-process.
Abstract: We relate the asymptotic behavior of $M$-estimators of the regression parameter in a linear model in which the dimension of the regression parameter may increase with the sample size to the stochastic equicontinuity of an associated $M$-process. The approach synthesises a number of results for the dimensionally fixed regression model and then extends these results in a direct unified way. The resulting theorems require only mild conditions on the $\psi$-function and the underlying distribution function. In particular, the results do not require $\psi$ to be smooth and hence can be applied to such estimators as the least absolute deviations estimator. We also treat one-step $M$-estimation.

Journal ArticleDOI
TL;DR: In this article, the authors focus on bootstrap-based prediction regions for the angular rotation curves of test children, when the relevant training data are gathered from normal children of comparable ages.
Abstract: This paper is about random coefficient trigonometric regression models and their use in gait analysis. Here gait analysis means free-speed walking on a level surface. Our study focuses on bootstrap-based prediction regions for the angular rotation curves of test children, when the relevant training data are gathered from normal children of comparable ages. Considerations that led to our choice of model and use of the bootstrap are given. Prediction regions and empirical bootstrap distributions are displayed, as is their application to several test cases. Also included is a study of the almost sure asymptotic behavior of the theoretical bootstrap probability of the prediction regions.

Journal ArticleDOI
TL;DR: Simplified conditions for the consistency and asymptotic normality of $M$-estimators obtained by maximizing averages of independent, identically distributed random concave functions are given, with applications to maximum likelihood estimation, as discussed by the authors.
Abstract: Simplified conditions are given for the consistency and asymptotic normality of $M$-estimators obtained by maximizing averages of independent, identically distributed random concave functions. Applications to maximum likelihood estimation are given.