
Showing papers on "Outlier published in 1988"


Book
01 Jan 1988
TL;DR: A book-length treatment of the mixture likelihood approach to clustering, covering likelihood estimation via the EM algorithm, estimation of mixing proportions, and assessment of the performance of the mixture likelihood approach to clustering.
Abstract: General Introduction: Introduction; History of Mixture Models; Background to the General Classification Problem; Mixture Likelihood Approach to Clustering; Identifiability; Likelihood Estimation for Mixture Models via EM Algorithm; Start Values for EM Algorithm; Properties of Likelihood Estimators for Mixture Models; Information Matrix for Mixture Models; Tests for the Number of Components in a Mixture; Partial Classification of the Data; Classification Likelihood Approach to Clustering.
Mixture Models with Normal Components: Likelihood Estimation for a Mixture of Normal Distributions; Normal Homoscedastic Components; Asymptotic Relative Efficiency of the Mixture Likelihood Approach; Expected and Observed Information Matrices; Assessment of Normality for Component Distributions: Partially Classified Data; Assessment of Typicality: Partially Classified Data; Assessment of Normality and Typicality: Unclassified Data; Robust Estimation for Mixture Models.
Applications of Mixture Models to Two-Way Data Sets: Introduction; Clustering of Hemophilia Data; Outliers in Darwin's Data; Clustering of Rare Events; Latent Classes of Teaching Styles.
Estimation of Mixing Proportions: Introduction; Likelihood Estimation; Discriminant Analysis Estimator; Asymptotic Relative Efficiency of Discriminant Analysis Estimator; Moment Estimators; Minimum Distance Estimators; Case Study; Homogeneity of Mixing Proportions.
Assessing the Performance of the Mixture Likelihood Approach to Clustering: Introduction; Estimators of the Allocation Rates; Bias Correction of the Estimated Allocation Rates; Estimated Allocation Rates of Hemophilia Data; Estimated Allocation Rates for Simulated Data; Other Methods of Bias Correction; Bias Correction for Estimated Posterior Probabilities.
Partitioning of Treatment Means in ANOVA: Introduction; Clustering of Treatment Means by the Mixture Likelihood Approach; Fitting of a Normal Mixture Model to a RCBD with Random Block Effects; Some Other Methods of Partitioning Treatment Means; Example 1; Example 2; Example 3; Example 4.
Mixture Likelihood Approach to the Clustering of Three-Way Data: Introduction; Fitting a Normal Mixture Model to Three-Way Data; Clustering of Soybean Data; Multidimensional Scaling Approach to the Analysis of Soybean Data.
References. Appendix.

2,397 citations


Book ChapterDOI
01 Jan 1988
TL;DR: In this article, robust statistical procedures are discussed, primarily procedures for estimating population parameters that are insensitive to the effect of outliers, i.e., observations inconsistent with the assumed model of the random process generating the observations; such procedures typically trim or down-weight extreme observations.
Abstract: One of the most vexing of problems in data analysis is the determination of whether or not to discard some observations because they are inconsistent with the rest of the observations and/or the probability distribution assumed to be the underlying distribution of the data. One direction of research activity related to this problem is that of the study of robust statistical procedures (cf. Huber [1981]), primarily procedures for estimating population parameters which are insensitive to the effect of “outliers”, i.e., observations inconsistent with the assumed model of the random process generating the observations. Typically, the robust procedure involves some “trimming” or down-weighting procedure, wherein some fraction of the extreme observations are automatically eliminated or given less weight to guard against the potential effect of outliers.
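To make the trimming and down-weighting ideas concrete, here is a minimal Python sketch (ours, not the chapter's): a trimmed mean that automatically eliminates a fixed fraction of the extreme observations, and a Huber-style weighted mean that down-weights points far from a robust center. The data, the trimming fraction, and the tuning constant 1.345 are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(10.0, 1.0, 95), [35.0, 40.0, 50.0, -20.0, 60.0]])

plain_mean = x.mean()               # pulled toward the five gross outliers
trimmed = stats.trim_mean(x, 0.1)   # "trimming": drop the extreme 10% per tail

# Down-weighting: unit weight near a robust center, weight k/r beyond it.
k = 1.345
center = np.median(x)
scale = stats.median_abs_deviation(x, scale="normal")
r = np.abs(x - center) / scale
w = k / np.maximum(r, k)            # equals 1 wherever r <= k
weighted_mean = np.sum(w * x) / np.sum(w)

print(plain_mean, trimmed, weighted_mean)
```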

797 citations


Journal ArticleDOI
TL;DR: The present simulation study examined the standardization problem and found that those approaches which standardize by division by the range of the variable gave consistently superior recovery of the underlying cluster structure.
Abstract: A methodological problem in applied clustering involves the decision of whether or not to standardize the input variables prior to the computation of a Euclidean distance dissimilarity measure. Existing results have been mixed with some studies recommending standardization and others suggesting that it may not be desirable. The existence of numerous approaches to standardization complicates the decision process. The present simulation study examined the standardization problem. A variety of data structures were generated which varied the intercluster spacing and the scales for the variables. The data sets were examined in four different types of error environments. These involved error free data, error perturbed distances, inclusion of outliers, and the addition of random noise dimensions. Recovery of true cluster structure as found by four clustering methods was measured at the correct partition level and at reduced levels of coverage. Results for eight standardization strategies are presented. It was found that those approaches which standardize by division by the range of the variable gave consistently superior recovery of the underlying cluster structure. The result held over different error conditions, separation distances, clustering methods, and coverage levels. The traditional z-score transformation was found to be less effective in several situations.
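The two strategies singled out in the comparison are easy to state in code. A minimal sketch of the traditional z-score versus standardization by division by the range; the toy data matrix is an assumption, not from the study.

```python
import numpy as np

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 900.0],
              [4.0, 300.0]])   # second variable on a much larger scale

# Traditional z-score: zero mean, unit standard deviation per variable.
z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Range standardization: divide each variable by its range, so large-scale
# variables no longer dominate Euclidean distances between rows.
r = X / (X.max(axis=0) - X.min(axis=0))

print(z.round(2))
print(r.round(2))
```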

715 citations


Journal ArticleDOI
TL;DR: An iterative procedure is proposed for detecting IO and AO in practice and for estimating the time series parameters in autoregressive-integrated-moving-average models in the presence of outliers.
Abstract: Outliers in time series can be regarded as being generated by dynamic intervention models at unknown time points. Two special cases, innovational outlier (IO) and additive outlier (AO), are studied in this article. The likelihood ratio criteria for testing the existence of outliers of both types, and the criteria for distinguishing between them are derived. An iterative procedure is proposed for detecting IO and AO in practice and for estimating the time series parameters in autoregressive-integrated-moving-average models in the presence of outliers. The powers of the procedure in detecting outliers are investigated by simulation experiments. The performance of the proposed procedure for estimating the autoregressive coefficient of a simple AR(1) model compares favorably with robust estimation procedures proposed in the literature. Two real examples are presented.
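For intuition, the IO and AO statistics can be written down explicitly for a simple AR(1) with a known coefficient. The sketch below is a hedged simplification of the likelihood ratio criteria (one pass, no iteration, robust plug-in scale), not the authors' full procedure; the series and all constants are illustrative assumptions.

```python
import numpy as np

def outlier_stats(z, phi):
    """IO and AO statistics for z_t = phi*z_{t-1} + a_t, at each time point."""
    e = z[1:] - phi * z[:-1]                     # e[t] is the residual at time t+1
    med = np.median(e)
    sigma = np.median(np.abs(e - med)) / 0.6745  # robust scale of the innovations
    lam_io = e / sigma                           # IO: a scaled residual
    # An AO of size omega at time T shifts e at T by +omega and at T+1 by
    # -phi*omega; least squares on those two residuals estimates omega.
    lam_ao = np.empty_like(e)
    for t in range(len(e) - 1):
        omega = (e[t] - phi * e[t + 1]) / (1.0 + phi ** 2)
        lam_ao[t] = omega * np.sqrt(1.0 + phi ** 2) / sigma
    lam_ao[-1] = e[-1] / sigma                   # boundary: no later residual
    return lam_io, lam_ao

rng = np.random.default_rng(1)
z = np.zeros(200)
for t in range(1, 200):
    z[t] = 0.6 * z[t - 1] + rng.normal()
z[100] += 8.0                                    # inject an additive outlier
lam_io, lam_ao = outlier_stats(z, 0.6)
t = int(np.argmax(np.abs(lam_ao)))
print("AO flagged at time", t + 1, "with statistic %.1f" % lam_ao[t])
```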

589 citations


Journal ArticleDOI
TL;DR: In this paper, a maximum likelihood approach is described for the analysis of growth increment data derived from tagging experiments, which allows the separate estimation of measurement error and growth variability, and uses mixture theory to provide an objective way of dealing with outliers.
Abstract: A maximum likelihood approach is described for the analysis of growth increment data derived from tagging experiments. As well as describing mean growth this approach allows the separate estimation of measurement error and growth variability, and uses mixture theory to provide an objective way of dealing with outliers. The method is illustrated using data for Pacific bonito (Sarda chiliensis) and the growth variability model is compared to other published models. The difference between growth curves derived from tagging and age‐length data is emphasised and new parameters are given for the von Bertalanffy curve that have better statistical properties, and represent better the growth information in tagging data, than do the conventional parameters.

191 citations


Journal ArticleDOI
TL;DR: In this paper, an outlier is defined as an observation with a large random error, generated by the linear model under consideration, and is detected by examining the posterior distribution of the random errors.
Abstract: An approach to detecting outliers in a linear model is developed. An outlier is defined to be an observation with a large random error, generated by the linear model under consideration. Outliers are detected by examining the posterior distribution of the random errors. An augmented residual plot is also suggested as a graphical aid in finding outliers. We propose a precise definition of an outlier in a linear model which appears to lead to simple ways of exploring data for the possibility of outliers. The definition is such that, if the parameters of the model are known, then it is also known which observations are outliers. Alternatively, if the parameters are unknown, the posterior distribution can be used to calculate the posterior probability that any observation is an outlier. In a linear model with normally distributed random errors, εi, with mean zero and variance σ², we declare the ith observation to be an outlier if |εi| > kσ for some choice of k. The value of k can be chosen so that the prior probability of an outlier is small and thus outliers are observations which are more extreme than is usually expected. Realizations of normally distributed errors of more than about three standard deviations from the mean are certainly surprising, and worth further investigation. Such outlying observations can occur under the assumed model, however, and this should be taken into account when deciding what to do with outliers and in choosing k. Note that εi is the actual realization of the random error, not the usual estimated residual ε̂i. The problem of outliers is studied and thoroughly reviewed by Barnett & Lewis (1984), Hawkins (1980), Beckman & Cook (1983) and Pettit & Smith (1985). The usual Bayesian approach to outlier detection uses the definition given by Freeman (1980). Freeman defines an outlier to be 'any observation that has not been generated by the mechanism that generated the majority of observations in the data set'. Freeman's definition therefore requires that a model for the generation of outliers be specified and is implemented by, for example, Box & Tiao (1968), Guttman, Dutter & Freeman (1978) and Abraham & Box (1978). Our method differs in that we define outliers as arising from the model under consideration rather than arising from a separate, expanded, model. Our approach is similar to that described by Zellner & Moulton (1985) and is an extension of the philosophy
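Assuming, for a simple sketch, a flat prior on the coefficients and a known error variance, the realized error εi given the data is normal with mean equal to the OLS residual and variance σ²hii (hii the leverage), so the posterior probability that |εi| > kσ has a closed form. The known-σ simplification, the design, and k = 3 below are our illustrative assumptions, not the paper's analysis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
sigma, k = 1.0, 3.0
y = X @ np.array([1.0, 2.0]) + rng.normal(0, sigma, n)
y[10] += 6.0                                   # plant one gross error

H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix
resid = y - H @ y                              # OLS residuals = posterior means
sd = sigma * np.sqrt(np.diag(H))               # posterior sd of each eps_i
p_out = norm.sf((k * sigma - resid) / sd) + norm.cdf((-k * sigma - resid) / sd)
print(int(np.argmax(p_out)), p_out.max())      # case 10, probability near 1
```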

182 citations


Journal ArticleDOI
TL;DR: The iteratively reweighted least squares (IRLS) algorithm as mentioned in this paper provides a means of computing approximate lp solutions (1 ⩽ p), which can be used to solve large, sparse, rectangular systems of linear algebraic equations very efficiently.
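A hedged sketch of the IRLS idea for an approximate lp solution (here p = 1) of an overdetermined linear system: repeatedly solve a weighted least-squares problem, with weights |ri|^(p-2) recomputed from the current residuals and a small damping constant guarding against zero residuals. All data and constants are illustrative assumptions.

```python
import numpy as np

def irls(A, b, p=1.0, iters=50, eps=1e-8):
    """Approximate argmin_x sum |b - A x|^p via iteratively reweighted LS."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # ordinary LS start (p = 2)
    for _ in range(iters):
        r = b - A @ x
        w = np.maximum(np.abs(r), eps) ** (p - 2)  # damped lp weights
        WA = A * w[:, None]                        # rows of A scaled by w_i
        x = np.linalg.solve(A.T @ WA, WA.T @ b)    # normal equations A'WA x = A'Wb
    return x

rng = np.random.default_rng(3)
A = np.column_stack([np.ones(100), rng.normal(size=100)])
b = A @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=100)
b[:5] += 20.0                                      # five gross outliers
print(irls(A, b))                                  # close to (2, -1) despite them
```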

151 citations


Journal ArticleDOI
Hans Kürzl
TL;DR: Major advantages of EDA are the straightforward application of its techniques and its easily interpretable results; the interpretation of regional geochemical data can be enhanced considerably, resulting in a more objective outlier definition as well as a better resolution of the regional background.

119 citations


Journal ArticleDOI
TL;DR: A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, and an algorithm to restore realistic images is presented.
Abstract: A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g. 8*8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.

97 citations


Journal ArticleDOI
TL;DR: In this paper, the sensitivity of the ordinary and lognormal kriging estimators to departures from their distributional assumptions, and in particular their resistance to outliers, is considered, and an outlier effect index designed to assess the effect of a single outlier on both estimators is proposed.
Abstract: Ordinary kriging is well-known to be optimal when the data have a multivariate normal distribution (and if the variogram is known), whereas lognormal kriging presupposes the multivariate lognormality of the data. But in practice, real data never entirely satisfy these assumptions. In this article, the sensitivity of these two kriging estimators to departures from these assumptions and in particular, their resistance to outliers is considered. An outlier effect index designed to assess the effect of a single outlier on both estimators is proposed, which can be extended to other types of estimators. Although lognormal kriging is sensitive to slight variations in the sill of the variogram of the logs (i.e., their variance), it is not influenced by the estimate of the mean of the logs.

74 citations


Journal ArticleDOI
TL;DR: In this article, the authors used smoothing splines to model the form of the noise distribution and used the maximum likelihood method to obtain consistent and efficient (minimum variance) estimates of parameters.
Abstract: Least squares (LS) estimation of model parameters is widely used in geophysics. If the data errors are Gaussian and independent the LS estimators will be maximum likelihood (ML) estimators and will be unbiased and of minimum variance. However, if the noise is not Gaussian, e.g. if the data are contaminated by extreme outliers, LS fitting will result in parameter estimates which may be biased or grossly inaccurate. When the probability distribution of the errors is known it is possible, using the maximum likelihood method, to obtain consistent and efficient (minimum variance) estimates of parameters. In some cases the distribution of the noise may be determined empirically, and the resulting distribution used in the ML estimation. A procedure for doing this is described here. Hourly values of geomagnetic observatory data are used to illustrate the technique. These data sets contain a number of periodic components, whose amplitudes and phases are geophysically interesting. Geomagnetic storms and other phenomena in the record make the noise distribution long-tailed, asymmetric and variable with location. Using an iterative procedure, one can model the form of these distributions using smoothing splines. For these data ML estimation yields quite different results from standard robust and LS procedures. The technique has the potential for widespread application to other problems involving the recovery of a known form of signal from non-Gaussian noise.
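The LS-versus-ML contrast under long-tailed noise can be illustrated with a toy periodic fit. Here heavy-tailed noise comes from a t distribution, and a Laplace likelihood stands in for an empirically modeled error density (so ML reduces to least absolute deviations); every detail is an illustrative assumption, not the authors' geomagnetic analysis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
t = np.arange(500.0)
w = 2 * np.pi / 24.0                         # "daily" period, arbitrary units
signal = 3.0 * np.cos(w * t) + 1.5 * np.sin(w * t)
noise = rng.standard_t(df=1.5, size=t.size)  # heavy-tailed contamination
y = signal + noise

D = np.column_stack([np.cos(w * t), np.sin(w * t)])
ls = np.linalg.lstsq(D, y, rcond=None)[0]    # least squares estimate

nll = lambda b: np.sum(np.abs(y - D @ b))    # Laplace ML = least absolute dev.
ml = minimize(nll, ls, method="Nelder-Mead").x

print("LS:", ls.round(2), " ML(Laplace):", ml.round(2))  # ML typically nearer (3, 1.5)
```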

Journal ArticleDOI
TL;DR: It is shown that many standard outlier tests are not able to detect gross outliers (masking effect) and an alternative method of evaluation using tests of estimates based on robust statistics is proposed.
Abstract: The paper criticizes the use of standard outlier tests when evaluating interlaboratory data. It is shown that many such tests are not able to detect gross outliers (masking effect). An alternative method of evaluation using tests of estimates based on robust statistics is proposed.
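A minimal numerical illustration of the masking effect (assumed data, not interlaboratory results): two gross outliers inflate the mean and standard deviation enough that a classical maximum studentized deviation flags nothing, while a robust median/MAD rule flags both.

```python
import numpy as np

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 25.0, 25.2])

t_classical = np.abs(x - x.mean()) / x.std(ddof=1)
mad = 1.4826 * np.median(np.abs(x - np.median(x)))  # scaled to sd under normality
t_robust = np.abs(x - np.median(x)) / mad

print(t_classical.round(2))  # the two 25s score only ~1.9, below the usual
                             # Grubbs cutoff (~2.3 for n = 10): masked
print(t_robust.round(2))     # the two 25s score ~50: unmistakable outliers
```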

Journal ArticleDOI
TL;DR: It is suggested that Bayesian forecasting may facilitate optimal administration of alfentanil during long procedures and that a rapid assay should be developed to measure plasma concentrations of alfentanil intraoperatively.
Abstract: To achieve therapeutic plasma concentrations of the opioid alfentanil, one must administer the drug as a variable rate continuous infusion. For most patients, using population pharmacokinetic parameters of alfentanil for the dosing regimen allows accurate prediction of the plasma concentration of the drug over time. However, for some patients, using such parameters results in systematic over- or underprediction of the concentration. Retrospectively studying a data set (dosage history and measured concentrations) for 34 patients, the authors examined how Bayesian forecasting could improve the precision of prediction. For each patient, a Bayesian regression was performed to estimate "individualized" pharmacokinetic parameters, using population pharmacokinetic values for alfentanil and the measurement of alfentanil in one or more plasma samples from each patient. These individualized parameters were then used to predict the subsequent plasma concentrations of alfentanil over time. By comparing the value of each measured point with its corresponding predicted value, the authors calculated the prediction error as a percentage of the measured value. The precision of the prediction was assessed by the percent mean absolute prediction error. After Bayesian forecasting using a single point sampled at 80 min after start of anesthesia, the average precision of the prediction was 13.8 +/- 6.1% (SD). Using no Bayesian forecasting and only population values of the pharmacokinetic parameters for the prediction of the concentration, the precision was 24.3 +/- 16.9%. The improvement in precision brought by Bayesian forecasting was especially noticeable for those patients whose prediction of alfentanil was poor using population pharmacokinetic values (i.e., "outlier" patients). (ABSTRACT TRUNCATED AT 250 WORDS)
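The precision measure used here, the percent mean absolute prediction error, is straightforward to compute. A minimal sketch with hypothetical measured/predicted concentration pairs (not patient data):

```python
import numpy as np

measured = np.array([120.0, 95.0, 150.0, 80.0])   # ng/ml, hypothetical values
predicted = np.array([110.0, 105.0, 120.0, 90.0])

pct_err = 100 * (predicted - measured) / measured  # signed percent errors
precision = np.mean(np.abs(pct_err))               # percent mean absolute error
print(round(precision, 1))
```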

Journal ArticleDOI
TL;DR: A Monte Carlo simulation used to compare Dk and D squared in terms of their hit and false alarm rates, their extent of overlap, and their effect on correlation coefficients resulting from outlier removal indicated that D squared had a higher hit rate than Dk with approximately the same false alarm rate.
Abstract: Comrey (1985) presented a statistic, Dk, to detect outliers. Its purported advantage over the more well-known Mahalanobis D squared is that it might be more sensitive to outliers that distort the correlation coefficient. The present study used a Monte Carlo simulation to compare Dk and D squared in terms of their hit and false alarm rates, their extent of overlap, and their effect on correlation coefficients resulting from outlier removal. The results indicated that D squared had a higher hit rate than Dk with approximately the same false alarm rate. The statistics identified the same cases as outliers 19 to 55 percent of the time. Surprisingly, the average correlations that resulted from outlier removal by D squared were closer to the population correlations than were those resulting from outlier removal by Dk. Under the conditions investigated, D squared was preferable to Dk as an outlier removal statistic.
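For reference, Mahalanobis D squared for each case against the sample mean and covariance takes only a few lines; a minimal sketch with assumed toy data (Comrey's Dk is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=100)
X[0] = [3.0, -3.0]                 # a correlation-distorting outlier

mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', X - mu, S_inv, X - mu)  # D^2 for every case
print(int(np.argmax(d2)), d2.max())                   # case 0 stands out
```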

Journal ArticleDOI
TL;DR: This paper characterizes the outlier payment formulae that minimize risk for hospitals under any fixed constraints on the sum of outlier payments and minimum hospital coinsurance rate and discusses some problems with the implementation of the current policy.

Proceedings ArticleDOI
11 Apr 1988
TL;DR: The problem of analyzing the threshold effect of signal processing algorithms which use the singular-value decomposition (SVD) is addressed and the probability of obtaining an outlier is calculated and used to determine the threshold SNR at which the variance of parameter estimation errors depart from Cramer-Rao bound behavior.
Abstract: The problem of analyzing the threshold effect of signal processing algorithms which use the singular-value decomposition (SVD) is addressed. The probability of obtaining an outlier is calculated and used to determine the threshold SNR at which the variance of parameter estimation errors departs from Cramer-Rao bound behavior. Simulation results using low rank approximation and linear prediction for frequency estimation verify the analysis. The same method of analysis can be applied to a broad class of parameter-estimation methods in which the principal-component technique or low rank approximations to matrices are used.

Journal ArticleDOI
TL;DR: In this paper, a Bonferroni bound for the outlier test at each step is used to detect multiple outliers; backwards-stepping and the use of deleted residuals results in the limiting of both masking and swamping effects.
Abstract: When fitting a model to a contingency table, a significant lack of fit can sometimes be caused by a few outlier cells, with the model fitting the remaining cells well. These cells can be identified by using deleted residuals (the residual from the expected count with the cell deleted) and tested using the drop in likelihood ratio goodness-of-fit statistic (from the model with the cell included to the model with the cell deleted), with the cells being tested from least extreme to most extreme (“backwards-stepping”). This article shows that using a Bonferroni bound for the outlier test at each step results in a conservative test with good power to detect multiple outliers; backwards-stepping and the use of deleted residuals results in the limiting of both masking and swamping effects. The procedure generalizes easily to complicated probability models.
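A hedged sketch of the Bonferroni bound at the heart of the procedure: with n testable cells, comparing each cell's drop in the likelihood ratio statistic (one degree of freedom) with the chi-squared critical value at level alpha/n keeps the overall false-alarm rate at most alpha. The drop values below are illustrative assumptions, not from the article.

```python
from scipy.stats import chi2

alpha, n_cells = 0.05, 12
g2_drops = [0.4, 1.1, 0.2, 2.5, 0.7, 9.8, 0.3, 1.6, 0.9, 12.4, 0.5, 1.0]

cutoff = chi2.ppf(1 - alpha / n_cells, df=1)   # Bonferroni-adjusted cutoff
outlier_cells = [i for i, d in enumerate(g2_drops) if d > cutoff]
print(round(cutoff, 2), outlier_cells)         # cells 5 and 9 (0-based) exceed it
```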

Journal ArticleDOI
TL;DR: In this article, a contaminated multivariate normal distribution with two parameters indicating the percentage of outliers and the degree of contamination is used to identify the multivariate outliers, which can then be eliminated to obtain approximately normal data.
Abstract: Multivariate outliers may be modeled using the contaminated multivariate normal distribution with two parameters indicating the percentage of outliers and the degree of contamination. Recent developments in elliptical distribution theory are used to determine estimators of these parameters. These estimators can be used with an index of Mahalanobis distance to identify the multivariate outliers, which can then be eliminated to obtain approximately normal data. The performance of the proposed estimators and outlier rejection procedures is evaluated in a small simulation study.

Journal ArticleDOI
TL;DR: In this article, a procedure based on the well-known score test is discussed for detection of outliers and distinguishing between the outlier types, and the significance levels of the tests are also obtained and illustrated with simulated examples.
Abstract: Two characterizations, the aberrant observation and innovation models, for outliers in time series are considered. A procedure based on the well-known score test is discussed for detection of outliers and distinguishing between the outlier types. Significance levels of the tests are also obtained and the method is illustrated with simulated examples.

Journal ArticleDOI
TL;DR: In this article, some well-known recurrence relations for order statistics in the i.i.d. case are generalized to the case when the variables are independent and non-identically distributed.
Abstract: Some well-known recurrence relations for order statistics in the i.i.d. case are generalized to the case when the variables are independent and non-identically distributed. These results could be employed in order to reduce the amount of direct computations involved in evaluating the moments of order statistics from an outlier model.

Journal ArticleDOI
TL;DR: In this paper, the authors show that robust estimates of geochemical data should always include at least the simple median and hinge width, to complement the often misleading mean and standard deviation.
Abstract: Numerical data summaries in many geochemical papers rely on arithmetic means, with or without standard deviations. Yet the mean is the worst average (estimate of location) for those extremely common geochemical data sets which are non-normally distributed or include outliers. The widely used geometric mean, although allowing for skewed distributions, is equally susceptible to outliers. The superior performance of 19 “robust” estimates of location (simple median, plus various combined, adaptive, trimmed, and skipped L, M, and W estimates) is illustrated using real geochemical data sets varying in sources of error (pure analytical error to multicomponent geological variability), modality (unimodal to polymodal), size (20 to >2000 data values), and continuity (continuous to truncated in either or both tails). The arithmetic mean tends to overestimate location of many geochemical data sets because of positive skew and large outliers; robust estimates yield consistent smaller averages, although some (e.g., Hampel's and Andrew's) do perform better than others (e.g., Shorth mean, dominant cluster mode). Recommended values for international standard rocks, and for such important geochemical concepts as “average chondrite,” can be reproduced far more simply via robust estimation on complete interlaboratory data sets than via the rather complicated and subjective methods (e.g., “laboratory ratings”) so far used in the literature. Robust estimates also seem generally less affected by truncation than the mean; for example, if values below machine detection limits are alternatively treated as missing values or as real values of zero, similar averages are obtained. The standard (and mean) deviations yield consistently larger values of scale for many geochemical data sets than the hinge width (interquartile range) or median absolute deviation from the median. Therefore, summaries of geochemical data should always include at least the simple median and hinge width, to complement the often misleading mean and standard deviation.
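The summary the authors recommend is trivial to produce; a minimal sketch, with an assumed skewed toy sample standing in for a geochemical data set:

```python
import numpy as np

x = np.array([1.2, 1.5, 1.8, 2.0, 2.1, 2.4, 2.6, 3.0, 3.3, 48.0])  # one outlier

print("mean, sd:           ", x.mean(), x.std(ddof=1))  # both inflated by 48.0
q1, q3 = np.percentile(x, [25, 75])
print("median, hinge width:", np.median(x), q3 - q1)    # barely affected
print("MAD:                ", np.median(np.abs(x - np.median(x))))
```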

Journal ArticleDOI
TL;DR: In this article, a Bayesian approach to the modeling of outlying observations is presented and examined in relation to the members of the exponential family in general and the exponential distribution in particular.
Abstract: A Bayesian approach to the modeling of outlying observations is presented and examined in relation to the members of the exponential family in general and the exponential distribution in particular.

Journal ArticleDOI
TL;DR: In this article, robust statistics having exactly the same form as the Hotelling T2 statistics are developed for testing an assumed mean vector of a bivariate population and the equality of mean vectors of two bivariate populations, and are shown to retain robustness and power.
Abstract: To test an assumed mean vector of a bivariate population, and to test the equality of mean vectors of two bivariate populations, robust statistics are developed which have exactly the same form as the Hotelling T2 statistics. These statistics are shown to perform well in terms of robustness and power.

Journal ArticleDOI
TL;DR: In this article, a procedure is described for characterizing the set of all parameter vectors that are consistent with data corrupted by a bounded noise, which applies to any parametric model that can be simulated on a computer when upper and lower bounds for the noise are known a priori.
Abstract: A procedure is described for characterizing the set of all parameter vectors that are consistent with data corrupted by a bounded noise. The method applies to any parametric model that can be simulated on a computer when upper and lower bounds for the noise are known a priori. The convergence properties of the associated estimator are considered, as well as its behavior in the presence of outliers. To illustrate the versatility of the technique, problems are considered where (i) the set of the true values of the parameter vector does not reduce to a singleton, (ii) the model is not uniquely identifiable, (iii) the hypotheses on the noise bounds are not satisfied, and (iv) the data contain a majority of outliers.
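A minimal sketch of the set-membership idea for a line y = a + b*x with noise bound |e| <= delta: the feasible parameter set is the intersection of the slabs |yi - a - b*xi| <= delta, explored here on a crude grid. The data, bound, and grid limits are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 20)
delta = 0.1
y = 1.0 + 2.0 * x + rng.uniform(-delta, delta, x.size)   # true (a, b) = (1, 2)

a_grid, b_grid = np.meshgrid(np.linspace(0, 2, 201), np.linspace(1, 3, 201))
pred = a_grid[..., None] + b_grid[..., None] * x          # every (a, b) pair
feasible = np.all(np.abs(y - pred) <= delta, axis=-1)     # all bounds satisfied

print(feasible.sum(), "grid points consistent with every bound")
print("a in [%.2f, %.2f]" % (a_grid[feasible].min(), a_grid[feasible].max()))
print("b in [%.2f, %.2f]" % (b_grid[feasible].min(), b_grid[feasible].max()))
```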

Journal ArticleDOI
TL;DR: In this article, the Box-Cox family of power transformations is proposed as a means of obtaining an approximate multivariate normal distribution without an arbitrary deletion of outlier values, which will allow for the construction of reference ranges for the clinical and laboratory variables of interest.
Abstract: Attempts often have been made to transform clinical and laboratory data to approximate normality for the purpose of developing either univariate “normal” ranges or multivariate reference ranges in the “supposedly healthy” population. For many of these transformations to be successful, it has been necessary to arbitrarily delete outlier values with no scientific justification for doing so. In this article, construction principles used in the determination of these ranges are reviewed. In addition, the Box–Cox family of power transformations is proposed as a means of obtaining an approximate multivariate normal distribution without an arbitrary deletion of outlier values. This method will allow for the construction of reference ranges for the clinical and laboratory variables of interest.
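A minimal univariate sketch of the proposal using SciPy's Box-Cox transform: estimate lambda by maximum likelihood, form a normal-theory reference range on the transformed scale, and map it back, with no outlier deletion. The simulated "lab values" are an assumption.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(6)
x = rng.lognormal(mean=1.0, sigma=0.6, size=200)   # right-skewed "lab values"

x_t, lam = stats.boxcox(x)                         # ML estimate of lambda
m, s = x_t.mean(), x_t.std(ddof=1)
lo, hi = m - 1.96 * s, m + 1.96 * s                # range on transformed scale
print("lambda = %.2f" % lam)
print("95% reference range:", inv_boxcox(lo, lam), inv_boxcox(hi, lam))
```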

Journal ArticleDOI
TL;DR: In this article, the authors compare the statistical methods and probabilistic properties of outliers and of order statistics, and ask to what extent they coincide and depend on each other.
Abstract: Outliers are to be found among the extremes of a data set. Extremes are examples of order statistics. It is thus relevant to ask to what extent the statistical methods (and probabilistic properties) of outliers and of order statistics coincide and depend on each other. Whilst clear overlap is identifiable, aims and procedures are often quite distinct and each topic plays its own important role in the panoply of statistical principles and methodology.

Book ChapterDOI
01 Jan 1988
TL;DR: In this article, the authors considered the projection pursuit for testing the presence of clusters based on the model of the ellipsoidally symmetric unlmodal densities mixture and showed that under this model the use of projections indices based on Renyi entropy or on third or fourth moments results in obtaining an estimate of the discriminant subspace.
Abstract: In this paper, the consideration of the projection pursuit for testing the presence of clusters is based on the model of the ellipsoidally symmetric unlmodal densities mixture. It is shown that under this model the use of projections indices based on Renyi entropy or on third or fourth moments results In obtaining an estimate of the discriminant subspace. For estimating the Renyi indices values some forms of the order statistics are used. For detecting outliers the ratio of the standard variance estimate to a robust one is proposed as projection index. In-deces for discriminant analysis problem are introduced.

Journal ArticleDOI
TL;DR: Balakrishnan as discussed by the authors showed that the robustness of some estimators of th location and scale parameters of a continuous population with pdf f(x) symmetric about zero comprising a single outlier with pdf g(x), and the cumulative round off error involved in the numerical evaluation of the moments of order statistics from the symmetric outlier model, using a table of the folded population and from the folded outlier models, has also been studied.
Abstract: Balakrishnan (1987a) has recently shown that the moments of order statistics in samples drawn from a continuous population with pdf f(x) symmetric about zero comprising a single outlier with pdf g(x) also symmetric about zero can be expressed in terms of the moments of order statistics in samples drawn from the population obtained by folding the pdf f(x) at zero and the moments of order statistics in samples drawn from the population obtained by folding the pdf f(x) at zero comprising a single outlier with pdf obtained by folding g(x) at zero. The cumulative round off error involved in the numerical evaluation of the moments of order statistics from the symmetric outlier model, using a table of the moments of order statistics from the folded population and the moments of order statistics from the folded outlier model, has also been studied by Balakrishnan (1987a) and shown to be not serious. Making use of these results we study here the robustness of some estimators of the location and scale parameters of ...

Journal ArticleDOI
TL;DR: In this paper, the test statistics for two and three outliers are expanded to give more insight, and critical values, based on simulation, are given for the statistics for 2 and 3 outliers.
Abstract: The work by Wilks (1963) is discussed, and the test statistics for two and three outliers are expanded to give more insight. Critical values, based on simulation, are given for the statistics for two and three outliers. Approximations for the critical values are also suggested.

Proceedings ArticleDOI
07 Dec 1988
TL;DR: A numerically well-behaved factorized methodology is developed for estimating spacecraft sensor alignments from prelaunch and inflight data without the need to compute the spacecraft attitude or angular velocity.
Abstract: A numerically well-behaved factorized methodology is developed for estimating spacecraft sensor alignments from prelaunch and inflight data without the need to compute the spacecraft attitude or angular velocity. Such a methodology permits the estimation of sensor alignments (or other biases) in a framework free of unknown dynamical variables. In actual mission implementation such an algorithm is usually better behaved than one that must compute sensor alignments simultaneously with the spacecraft attitude, for example by means of a Kalman filter. In particular, such a methodology is less sensitive to data dropouts of long duration, and the derived measurement used in the attitude-independent algorithm usually makes data checking and editing of outliers much simpler than would be the case in the filter. >