
Showing papers on "Estimator published in 1976"



Journal ArticleDOI
TL;DR: In this paper, a maximum likelihood estimator is developed for determining time delay between signals received at two spatially separated sensors in the presence of uncorrelated noise, where the role of the prefilters is to accentuate the signal passed to the correlator at frequencies for which the signal-to-noise (S/N) ratio is highest and suppress the noise power.
Abstract: A maximum likelihood (ML) estimator is developed for determining time delay between signals received at two spatially separated sensors in the presence of uncorrelated noise. This ML estimator can be realized as a pair of receiver prefilters followed by a cross correlator. The time argument at which the correlator achieves a maximum is the delay estimate. The ML estimator is compared with several other proposed processors of similar form. Under certain conditions the ML estimator is shown to be identical to one proposed by Hannan and Thomson [10] and MacDonald and Schultheiss [21]. Qualitatively, the role of the prefilters is to accentuate the signal passed to the correlator at frequencies for which the signal-to-noise (S/N) ratio is highest and, simultaneously, to suppress the noise power. The same type of prefiltering is provided by the generalized Eckart filter, which maximizes the S/N ratio of the correlator output. For low S/N ratio, the ML estimator is shown to be equivalent to Eckart prefiltering.
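A minimal sketch of the correlator stage described above: the lag at which the cross-correlation of the two sensor outputs peaks is taken as the delay estimate. The ML prefilters (S/N weighting) are omitted here, and all signal parameters (length, delay, noise level) are made-up illustrative values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_delay = 1024, 37

s = rng.standard_normal(n)                    # source signal
noise1 = 0.1 * rng.standard_normal(n)
noise2 = 0.1 * rng.standard_normal(n)
x1 = s + noise1                               # sensor 1
# Sensor 2 receives the same signal delayed by true_delay samples.
x2 = np.concatenate([np.zeros(true_delay), s[:-true_delay]]) + noise2

# Full linear cross-correlation; the index of the peak gives the lag.
corr = np.correlate(x2, x1, mode="full")
delay_estimate = np.argmax(corr) - (n - 1)
```

With reasonable S/N the peak lag matches the true delay; the prefilters in the ML estimator sharpen this peak when the noise is coloured.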

4,317 citations


Journal ArticleDOI
TL;DR: In this article, a limited memory algorithm is developed for adaptive correction of the a priori statistics which are intended to compensate for time-varying model errors, which provides improved state estimates at little computational expense when applied to an orbit determination problem for a near-earth satellite with significant modeling errors.
Abstract: Sequential estimators are derived for suboptimal adaptive estimation of the unknown a priori state and observation noise statistics simultaneously with the system state. First- and second-order moments of the noise processes are estimated based on state and observation noise samples generated in the Kalman filter algorithm. A limited memory algorithm is developed for adaptive correction of the a priori statistics which are intended to compensate for time-varying model errors. The algorithm provides improved state estimates at little computational expense when applied to an orbit determination problem for a near-earth satellite with significant modeling errors.

528 citations


Journal ArticleDOI
TL;DR: In this article, the authors briefly review the principles of maximum entropy spectral analysis and the closely related problem of autoregressive time series modelling and discuss the important aspect of model identification.

430 citations


Journal ArticleDOI
TL;DR: The restricted maximum likelihood (REML) estimators as discussed by the authors have the property of invariance under translation and the additional property of reducing to the analysis of variance estimators for many, if not all, cases of balanced data (equal subclass numbers).
Abstract: The maximum likelihood (ML) procedure of Hartley and Rao [2] is modified by adapting a transformation from Patterson and Thompson [7] which partitions the likelihood under normality into two parts, one being free of the fixed effects. Maximizing this part yields what are called restricted maximum likelihood (REML) estimators. As well as retaining the property of invariance under translation that ML estimators have, the REML estimators have the additional property of reducing to the analysis of variance (ANOVA) estimators for many, if not all, cases of balanced data (equal subclass numbers). A computing algorithm is developed, adapting a transformation from Hemmerle and Hartley [6], which reduces computing requirements to dealing with matrices having order equal to the dimension of the parameter space rather than that of the sample space. These same matrices also occur in the asymptotic sampling variances of the estimators.
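The simplest instance of the REML idea above: for i.i.d. normal data with unknown mean, ML maximizes the full likelihood and divides the residual sum of squares by n, while REML maximizes the part of the likelihood free of the fixed effect (the mean) and divides by n - 1, exactly the ANOVA estimator the abstract mentions. The data values are illustrative.

```python
import numpy as np

y = np.array([4.0, 7.0, 6.0, 9.0, 4.0])
n = len(y)
rss = np.sum((y - y.mean()) ** 2)

sigma2_ml = rss / n          # ML estimate; biased downward
sigma2_reml = rss / (n - 1)  # REML estimate; equals the ANOVA estimator
```

The general mixed-model case replaces this scalar calculation with the matrix algorithm described in the abstract, but the n versus n - 1 contrast is the same partitioning of the likelihood.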

401 citations


Journal ArticleDOI
TL;DR: In this article, a number of estimators of regression coefficients, all of generalized ridge, or shrinkage type, were considered, and results of a simulation study indicate that with respect to two commonly used mean square error criteria, two ordinary ridge estimators, one proposed by Hoerl, Kennard and Baldwin, and the other introduced here, perform substantially better than both least squares and other estimators discussed here.
Abstract: We consider a number of estimators of regression coefficients, all of generalized ridge, or 'shrinkage' type. Results of a simulation study indicate that with respect to two commonly used mean square error criteria, two ordinary ridge estimators, one proposed by Hoerl, Kennard and Baldwin, and the other introduced here, perform substantially better than both least squares and the other estimators discussed here.
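A sketch of an ordinary ridge estimator with the Hoerl-Kennard-Baldwin choice of ridge parameter, k = p * sigma2_hat / (b_ols' b_ols), computed from the least-squares fit. The synthetic data and true coefficients are made up; the paper's actual simulation design is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 4
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta + rng.standard_normal(n)

# Ordinary least squares and the residual variance estimate.
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ b_ols
sigma2 = resid @ resid / (n - p)

# Hoerl-Kennard-Baldwin ridge parameter, then the ridge estimator.
k = p * sigma2 / (b_ols @ b_ols)
b_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
```

The ridge estimator always has smaller norm than least squares; its mean-square-error advantage shows up when the columns of X are ill-conditioned.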

381 citations


Journal ArticleDOI
TL;DR: Parzen estimators are often used for nonparametric estimation of probability density functions and a problem-dependent criterion for its value is proposed and illustrated by some examples.
Abstract: Parzen estimators are often used for nonparametric estimation of probability density functions. The smoothness of such an estimation is controlled by the smoothing parameter. A problem-dependent criterion for its value is proposed and illustrated by some examples. Especially in multimodal situations, this criterion led to good results.
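A minimal Parzen estimator with a Gaussian kernel. The smoothing parameter h is the quantity whose choice the paper addresses; here it is simply set by hand, and the bimodal toy sample is invented for illustration.

```python
import numpy as np

def parzen_pdf(x, samples, h):
    """Average of Gaussian kernels of width h centred at the samples."""
    u = (x[:, None] - samples[None, :]) / h
    kernels = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / h

samples = np.array([-2.1, -1.9, -2.0, 1.8, 2.2, 2.0])  # bimodal toy data
grid = np.linspace(-6.0, 6.0, 1201)
density = parzen_pdf(grid, samples, h=0.5)
```

Too small an h produces a spiky estimate, too large an h merges the two modes; a problem-dependent criterion for h, as the abstract proposes, matters most in exactly these multimodal situations.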

364 citations


Journal ArticleDOI
TL;DR: A maximum likelihood estimator for digital sequences disturbed by Gaussian noise, intersymbol interference and interchannel interference is derived and it appears that, under a certain condition, the error performance is asymptotically as good as if both ISI and ICI were absent.
Abstract: A maximum likelihood (ML) estimator for digital sequences disturbed by Gaussian noise, intersymbol interference (ISI) and interchannel interference (ICI) is derived. It is shown that the sampled outputs of the multiple matched filter (MMF) form a set of sufficient statistics for estimating the input vector sequence. Two ML vector sequence estimation algorithms are presented. One makes use of the sampled output data of the multiple whitened matched filter and is called the vector Viterbi algorithm. The other one is a modification of the vector Viterbi algorithm and uses directly the sampled output of the MMF. It appears that, under a certain condition, the error performance is asymptotically as good as if both ISI and ICI were absent.

299 citations


Journal ArticleDOI
TL;DR: In this article, the authors extended the analysis for a standard linear regression model to the case of data randomly censored on the right, and the slope and intercept estimators are weighted linear combinations of the uncensored observations where the weights are derived from the Kaplan-Meier product-limit estimator of a distribution function.
Abstract: The analysis for a standard linear regression model is extended to the case of data randomly censored on the right. The slope and intercept estimators are weighted linear combinations of the uncensored observations where the weights are derived from the Kaplan-Meier product-limit estimator of a distribution function. Some distribution theory for the slope estimator is given. For illustration the estimators are applied to the Stanford heart transplant data.
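A sketch of the Kaplan-Meier product-limit estimator that supplies the weights in the censored-regression estimator above. The tiny data set is made up for illustration, and the sketch assumes no ties between event and censoring times.

```python
def kaplan_meier(times, events):
    """Return (event times, survival probabilities).

    events[i] is 1 if times[i] is an observed failure, 0 if censored.
    At each observed failure the survival curve drops by the factor
    (n_at_risk - 1) / n_at_risk; censored points only shrink the risk set.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    surv, out_t, out_s = 1.0, [], []
    for i in order:
        if events[i] == 1:
            surv *= (n_at_risk - 1) / n_at_risk
            out_t.append(times[i])
            out_s.append(surv)
        n_at_risk -= 1
    return out_t, out_s

t, s = kaplan_meier([2.0, 3.0, 4.0, 5.0], [1, 0, 1, 1])
```

Note how the censored observation at time 3 does not drop the curve itself but reduces the risk set, so the next failure (at time 4) carries more weight, which is precisely why the regression weights come from this estimator.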

276 citations


Journal ArticleDOI
TL;DR: In this article, the strong law of large numbers and the central limit theorem for estimators of the parameters in quite general finite-parameter linear models for vector time series are presented.
Abstract: This paper presents proofs of the strong law of large numbers and the central limit theorem for estimators of the parameters in quite general finite-parameter linear models for vector time series. The estimators are derived from a Gaussian likelihood (although Gaussianity is not assumed) and certain spectral approximations to this. An important example of finite-parameter models for multiple time series is the class of autoregressive moving-average (ARMA) models and a general treatment is given for this case. This includes a discussion of the problems associated with identification in such models. Keywords: linear processes; vector ARMA models; identification; limit theorems.

271 citations


Journal ArticleDOI
TL;DR: This algorithm is shown to yield an image which is unbiased, which has the minimum variance of any estimator using the same measurements, and which will therefore perform better than any current reconstruction technique when the performance measures are the bias and variance.
Abstract: The stochastic nature of the measurements used for image reconstruction from projections has largely been ignored in the past. If taken into account, the stochastic nature has been used to calculate the performance of algorithms which were developed independent of probabilistic considerations. This paper utilizes the knowledge of the probability density function of the measurements from the outset, and derives a reconstruction scheme which is optimal in the maximum likelihood sense. This algorithm is shown to yield an image which is unbiased -- that is, on the average it equals the object being reconstructed -- and which has the minimum variance of any estimator using the same measurements. As such, when operated in a stochastic environment, it will perform better than any current reconstruction technique, where the performance measures are the bias and variance.

Journal ArticleDOI
TL;DR: A survey of contributions during the last five years to estimation of parameters by linear functions of observations in the Gauss-Markoff model can be found in this article, where the classes of BLE and ALE (admissible linear estimators) are characterized when the loss function is quadratic.
Abstract: The first lecture in this series is devoted to a survey of contributions during the last five years to estimation of parameters by linear functions of observations in the Gauss-Markoff model. Some new results are also given. The classes of BLE (Bayes linear estimators) and ALE (admissible linear estimators) are characterized when the loss function is quadratic. It is shown that ALE's are either BLE's or limits of BLE's. Biased estimators like ridge and shrunken estimators are shown to be special cases of BLE's. Minimum variance unbiased estimation of parameters in a linear model is discussed with the help of a projection operator under very general conditions.

Journal ArticleDOI
TL;DR: In this paper, the linear least squares prediction approach is applied to some problems in two-stage sampling from finite populations, and a theorem giving the optimal estimator and its error-variance under a general linear "superpopulation" model for a finite population is stated.
Abstract: The linear least-squares prediction approach is applied to some problems in two-stage sampling from finite populations. A theorem giving the optimal (BLU) estimator and its error-variance under a general linear “superpopulation” model for a finite population is stated. This theorem is then applied to a model describing many populations whose elements are grouped naturally in clusters. Next, the probability model is used to analyze various conventional estimators and certain estimators suggested by the theory as alternatives to the conventional ones. Problems of design are considered, as are some consequences of regression-model failure.

Journal ArticleDOI
TL;DR: In this paper, a unified approach to the study of biased estimators in an effort to determine their relative merits is provided, including simple and generalized ridge estimators, principal component estimators with extensions such as that proposed by Marquardt [19], and the shrunken estimator proposed by Stein [23].
Abstract: Biased estimators of the coefficients in the linear regression model have been the subject of considerable discussion in the recent literature. The purpose of this paper is to provide a unified approach to the study of biased estimators in an effort to determine their relative merits. The class of estimators includes the simple and the generalized ridge estimators proposed by Hoerl and Kennard [9], the principal component estimator with extensions such as that proposed by Marquardt [19] and the shrunken estimator proposed by Stein [23]. The problem of estimating the biasing parameters is considered and illustrated with two examples.

Journal ArticleDOI
TL;DR: In this paper, the problem of estimating the inverse of a covariance matrix in the standard multivariate normal situation using a particular loss function is considered, and a class of multivariate estimators of the mean, each of which dominates the maximum likelihood estimator is presented.
Abstract: The problem of estimating several normal mean vectors in an empirical Bayes situation is considered. In this case, it reduces to the problem of estimating the inverse of a covariance matrix in the standard multivariate normal situation using a particular loss function. Estimators which dominate any constant multiple of the inverse sample covariance matrix are presented. These estimators work by shrinking the sample eigenvalues toward a central value, in much the same way as the James-Stein estimator for a mean vector shrinks the maximum likelihood estimators toward a common value. These covariance estimators then lead to a class of multivariate estimators of the mean, each of which dominates the maximum likelihood estimator.
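A sketch of the eigenvalue-shrinkage idea in the abstract: pull the sample eigenvalues toward their central value before reassembling the covariance estimate. The shrinkage weight (0.5) and the random data are illustrative assumptions, not the paper's specific rule.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 5))
S = np.cov(X, rowvar=False)             # sample covariance matrix

# Eigendecomposition, shrink the eigenvalues toward their mean,
# then reassemble the covariance estimate with the same eigenvectors.
vals, vecs = np.linalg.eigh(S)
centre = vals.mean()
shrunk = 0.5 * vals + 0.5 * centre
S_shrunk = vecs @ np.diag(shrunk) @ vecs.T
```

Sample eigenvalues are over-dispersed relative to the true ones (the largest is biased upward, the smallest downward), so pulling them toward a central value, much as James-Stein pulls component means together, reduces risk.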

Journal ArticleDOI
TL;DR: In this article, the authors show that all presently known estimators are readily derivable from the FIME formula if they are considered as numerical approximations to its solution, and then they classify the resulting estimators into asymptotically equivalent groups.

Journal ArticleDOI
TL;DR: In this paper, a generalization of Schmidt's estimator is proposed which is unbiased and usually superior to both Schmidt's and the classical estimator when the magnitude boxes are not infinitesimal.
Abstract: Schmidt's (1968) estimator, sometimes used to calculate the luminosity function from a complete sample of observed objects, can be generalized naively to the case in which the maximum distance for detection is a function of the direction. Though unbiased, this estimator then does not have minimum variance and, in some cases, is inferior to the classical estimator. The classical estimator, however, is biased when the magnitude boxes are not infinitesimal. A generalization of Schmidt's estimator is proposed which is unbiased and usually superior to both Schmidt's and the classical estimator. Variance formulas and numerical examples are given. The results can be used in combining several catalogs.
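A sketch of Schmidt's estimator in its basic form: each object in a flux-limited sample contributes the reciprocal of the volume within which it could have been detected, and the space density of the class is the sum of these contributions. The distances are made-up numbers, and the direction-dependent detection limit treated by the paper is not modelled.

```python
import numpy as np

# Maximum distance at which each object would still be detectable (Mpc).
d_max = np.array([80.0, 120.0, 200.0, 150.0])

# Detection volume per object, assuming a full-sky Euclidean survey.
v_max = (4.0 / 3.0) * np.pi * d_max**3

# Schmidt's 1/Vmax estimate of the space density of the class.
density = np.sum(1.0 / v_max)
```

Intrinsically faint objects have small detection volumes and so receive large weights, which corrects the flux-limited sample's bias toward luminous objects.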

Journal ArticleDOI
Alan M. Gross1
TL;DR: In this paper, a variety of 95-percent confidence interval procedures have been examined in some detail using Monte Carlo techniques on simulated samples of sizes 10 and 20 from a spectrum of distributions ranging from the Gaussian to the long-tailed Cauchy.
Abstract: A variety of 95-percent confidence interval procedures have been examined in some detail using Monte Carlo techniques. These estimators were tried on simulated samples of sizes 10 and 20 from a spectrum of distributions ranging from the Gaussian to the long-tailed Cauchy. The robustness of an estimator is measured by both the closeness of its level to the 5-percent goal (robustness of validity) and its expected length as compared to its competitors (robustness of efficiency). The results identify some quite robust procedures, including some of the point M-estimators from the Princeton Robustness Study.
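A miniature version of the Monte Carlo experiment described above: check the coverage of the usual t-based 95% interval on Gaussian samples of size 10. The replication count is an arbitrary choice, and 2.262 is the t critical value with 9 degrees of freedom; the paper's full study repeats this across heavier-tailed distributions and many interval procedures.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, t_crit = 10, 2000, 2.262   # t_{0.975, 9} = 2.262
covered = 0
for _ in range(reps):
    x = rng.standard_normal(n)       # true mean is 0
    half = t_crit * x.std(ddof=1) / np.sqrt(n)
    if abs(x.mean()) <= half:
        covered += 1
coverage = covered / reps            # robustness of validity: near 0.95?
```

Swapping `standard_normal` for `standard_cauchy` in the same harness shows how a procedure's level and expected length degrade under long tails, which is exactly the two-sided robustness comparison the abstract describes.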

Journal ArticleDOI
TL;DR: In this paper, an estimator of the parameters of a nonlinear time series regression is obtained by using an autoregressive assumption to approximate the variance-covariance matrix of the disturbances.
Abstract: An estimator of the parameters of a nonlinear time series regression is obtained by using an autoregressive assumption to approximate the variance-covariance matrix of the disturbances. Considerations are set forth which suggest that this estimator will have better small sample efficiency than circular estimators. Such is the case for examples considered in a Monte Carlo study.

Journal ArticleDOI
TL;DR: In this paper, a necessary and sufficient condition is given for such ridge estimators to yield estimators of every non-null linear combination of the regression coefficients with smaller mean square error than that of the Gauss-Markov best linear unbiased estimator.
Abstract: Ridge regression is re-examined and ridge estimators based on prior information are introduced. A necessary and sufficient condition is given for such ridge estimators to yield estimators of every nonnull linear combination of the regression coefficients with smaller mean square error than that of the Gauss-Markov best linear unbiased estimator.

Journal ArticleDOI
TL;DR: In this paper, a two-stage method is presented for detecting step changes of variance in first-order autoregressive time series models, where potential change points are initially located using a moving-block procedure.
Abstract: SUMMARY A two-stage method is presented for detecting step changes of variance in first-order autoregressive time series models. Potential change points are initially located using a "moving-block" procedure. Given initial change points, an iterative likelihood argument is used to develop estimators of the change points, variances and autoregressive parameters. The efficacy of the method is examined with computer simulation experiments, and a numerical example using stock market data is discussed.

Journal ArticleDOI
TL;DR: In this article, the mean lineal and areal projections of embedded features are estimated using unbiased ratio estimators of stereological fractions for various types of random plane and line sections of the specimen.
Abstract: The usual derivations of the well-known fundamental formulae of stereology are unsatisfactory. Precise and general conditions for the validity of a comprehensive system of such formulae are given. They take the form of unbiased ratio estimators of stereological fractions. These estimators are defined for various types of ‘weighted’ random plane and line sections of the specimen. Thanks to certain probabilistic equivalences, these sections may in practice be implemented fairly easily. The system includes new formulae which allow the estimation of the mean lineal and areal projections of embedded features. The mean square errors of several alternative estimators of the same feature characteristics are compared.

Journal ArticleDOI
TL;DR: In this paper, a tractable characterization for the admissible estimators within the class of invariant quadratic unbiased estimators for a normally distributed mixed model with two unknown variance components is given.
Abstract: For a normally distributed mixed model with two unknown variance components $\theta_1$ and $\theta_2$, a tractable characterization is given for the admissible estimators within the class $\tilde{\mathscr{N}}_\delta$ of invariant quadratic unbiased estimators for $\delta_1\theta_1 + \delta_2\theta_2$. Here the term admissible is used with reference only to the class $\tilde{\mathscr{N}}_\delta$. This characterization is based on a result for general linear models which characterizes the admissible estimators within the class of linear unbiased estimators. The admissibility of MINQUE estimators and the usual analysis of variance estimators is considered.

Journal ArticleDOI
TL;DR: In this article, a maximum likelihood solution is presented for a model which is a synthesis of the linear functional and structural relations, the coefficients of which are symmetrical with respect to the within and between-groups sample covariances.
Abstract: A maximum likelihood solution is presented for a model which is a synthesis of the linear functional and structural relations. In the replicated case, the slope estimate is a root of a quadratic equation, the coefficients of which are symmetrical with respect to the within and between-groups sample covariances. In the unreplicated case it is shown that two variance ratios must be known, whereupon the slope estimator is a root of a quintic equation. When one of these variance ratios is zero, we obtain an estimator which was proposed on heuristic grounds by Teissier (1948).

Journal ArticleDOI
TL;DR: A survey of robust alternatives to the mean, standard deviation, product moment correlation, t-test, and analysis of variance is offered in this paper, with a focus on the effects of outliers.
Abstract: It is noted that the usual estimators that are optimal under a Gaussian assumption are very vulnerable to the effects of outliers. A survey of robust alternatives to the mean, standard deviation, product moment correlation, t-test, and analysis of variance is offered. Robust methods of factor analysis, principal components analysis and multivariate analysis of variance are also surveyed, as are schemes for outlier detection.
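A sketch of the outlier vulnerability the survey addresses: a single wild value moves the mean and standard deviation substantially, while the median and the MAD (median absolute deviation), two of the standard robust alternatives, barely move. The data are invented.

```python
import numpy as np

clean = np.array([9.8, 10.1, 10.0, 9.9, 10.2])
dirty = np.append(clean, 100.0)     # one gross outlier

def mad(x):
    """Median absolute deviation: a robust scale estimate."""
    return np.median(np.abs(x - np.median(x)))

mean_shift = abs(dirty.mean() - clean.mean())            # large
median_shift = abs(np.median(dirty) - np.median(clean))  # small
scale_classical = dirty.std(ddof=1)                      # blown up
scale_robust = mad(dirty)                                # barely changed
```

The classical estimators here have a breakdown point of zero (one bad observation suffices), whereas the median and MAD tolerate nearly half the sample being contaminated.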

Journal ArticleDOI
TL;DR: These studies and those of Klotz, Milton and Zacks point, with some exceptions, to the greater efficiency of ML estimators under a range of experimental settings.
Abstract: Explicit solutions have been derived for the maximum likelihood (ML) and restricted maximum likelihood (REML) equations under normality for four common variance components models with balanced (equal subclass numbers) data. Solutions of the REML equations are identical to analysis of variance (AOV) estimators. Ratios of mean squared errors of REML and ML solutions have also been derived. Unbalanced (unequal subclass numbers) data have been used in a series of numerical trials to compare ML and REML procedures with three other estimation methods using a two-way crossed classification mixed model with no interaction and zero or one observation per cell. Results are similar to those reported by Hocking and Kutner [1975] for the balanced incomplete block design. Collectively, these studies and those of Klotz, Milton and Zacks [1969] point, with some exceptions, to the greater efficiency of ML estimators under a range of experimental settings.

Journal ArticleDOI
TL;DR: In this paper, a decomposition-aggregation method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs, which can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level.
Abstract: In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding a design of a single estimator for the overall system.

Journal ArticleDOI
TL;DR: In this article, two methods of fitting piecewise multiple regression models are presented, one based on dynamic programming and the other based on hierarchical procedure, which is suitable for very long sequences of data.
Abstract: Two methods of fitting piecewise multiple regression models are presented. One, based on dynamic programming, yields maximum‐likelihood estimators and is suitable for sequences of moderate length. A second, hierarchical, procedure yields approximations to the maximum‐likelihood estimators and is suitable for very long sequences of data. Both methods have computational requirements that are linear in the number of segments.
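A sketch of the dynamic-programming idea above, reduced to piecewise-constant means for brevity (the paper fits a full multiple regression per segment). The recursion finds the k-segment partition minimising total within-segment sum of squares; the toy series and k = 2 are illustrative.

```python
import numpy as np

def dp_segment(y, k):
    """Split y into k contiguous segments minimising total SSE."""
    n = len(y)
    # cost[i][j]: SSE of fitting one constant segment to y[i:j].
    cost = np.full((n + 1, n + 1), np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            seg = y[i:j]
            cost[i][j] = np.sum((seg - seg.mean()) ** 2)

    # best[m][j]: minimal SSE using m segments on y[:j]; cut stores
    # the start of the last segment for backtracking.
    best = np.full((k + 1, n + 1), np.inf)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    best[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = best[m - 1][i] + cost[i][j]
                if c < best[m][j]:
                    best[m][j], cut[m][j] = c, i

    # Recover the segment boundaries by backtracking.
    bounds, j = [], n
    for m in range(k, 0, -1):
        bounds.append(cut[m][j])
        j = cut[m][j]
    return sorted(bounds)[1:]    # interior change points

y = np.array([0.0, 0.1, -0.1, 5.0, 5.1, 4.9])
change_points = dp_segment(y, 2)
```

The cost table and recursion are each quadratic-to-cubic in the sequence length, which is why the paper pairs the exact DP with a cheaper hierarchical approximation for very long sequences.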

Journal ArticleDOI
TL;DR: A consistent estimator is discussed which is computationally more efficient than estimators based on Parzen's estimation; the relation between the distance of a sample from the decision boundary and its contribution to the error is also derived.
Abstract: The L^α-distance between posterior density functions (PDFs) is proposed as a separability measure to replace the probability of error as a criterion for feature extraction in pattern recognition. Upper and lower bounds on Bayes error are derived for α > 0. If α = 1, the lower and upper bounds coincide; an increase (or decrease) in α loosens these bounds. For α = 2, the upper bound equals the best commonly used bound and is equal to the asymptotic probability of error of the first nearest neighbor classifier. The case when α = 1 is used for estimation of the probability of error in different problem situations, and a comparison is made with other methods. It is shown how unclassified samples may also be used to improve the variance of the estimated error. For the family of exponential probability density functions (pdfs), the relation between the distance of a sample from the decision boundary and its contribution to the error is derived. In the nonparametric case, a consistent estimator is discussed which is computationally more efficient than estimators based on Parzen's estimation. A set of computer simulation experiments is reported to demonstrate the statistical advantages of the separability measure with α = 1 when used in an error estimation scheme.

Journal ArticleDOI
TL;DR: In this article, the best linear unbiased estimator which was proposed previously for interpolating, distributing, and extrapolating a time series by related series is applied to the estimation of missing observations.
Abstract: The best linear unbiased estimator which we proposed previously for interpolating, distributing, and extrapolating a time series by related series is applied to the estimation of missing observations. Under special assumptions, the problem reduces to the one treated in Doran [2]. Our estimator is compared with his and is shown to be more efficient.