Journal ArticleDOI
Distinguishing Outlier Types in Time Series
TL;DR: In this paper, a method of testing for the presence of an outlier of unknown type is proposed, and the properties of a rule based on the likelihood ratio which attempts to distinguish the two types of outlier are examined and compared with those of the corresponding Bayes rules.
Abstract: Distinguishing an outlier in a time series arising through measurement error from one arising through a perturbation of the underlying system can be of use in data validation. In this paper a method of testing for the presence of an outlier of unknown type is proposed. Then the properties of a rule based on the likelihood ratio which attempts to distinguish the two types of outlier are examined and compared with those of the corresponding Bayes rules. An example involving data from an industrial production process is studied.
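The likelihood-ratio idea can be illustrated in the simplest setting: an AR(1) process with known parameters and a known candidate outlier time. The function below computes Fox-type test statistics for an innovational outlier (a shock entering one innovation) and an additive outlier (a perturbation of a single observation); whichever is larger in magnitude points to that outlier type. This is a minimal sketch under assumed parameter values, not the paper's exact procedure or its Bayes-rule comparison.

```python
import numpy as np

def outlier_statistics_ar1(x, phi, sigma, t):
    """Fox-type statistics for an innovational (IO) vs additive (AO)
    outlier at time t in an AR(1) series with known phi and known
    innovation s.d. sigma (illustrative sketch)."""
    # One-step-ahead residuals: e[k] corresponds to time k + 1
    e = x[1:] - phi * x[:-1]
    # IO: the shock enters a single innovation, so its natural
    # estimate is the residual at time t itself
    lam_io = e[t - 1] / sigma
    # AO: the perturbation hits x[t], contaminating the residuals at
    # times t and t + 1; least-squares estimate of its size:
    omega_ao = (e[t - 1] - phi * e[t]) / (1 + phi ** 2)
    lam_ao = omega_ao * np.sqrt(1 + phi ** 2) / sigma
    return lam_io, lam_ao

# Simulate an AR(1) path and inject an additive outlier at time t
rng = np.random.default_rng(0)
n, phi, sigma, t = 200, 0.6, 1.0, 100
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal(scale=sigma)
x[t] += 8.0
lam_io, lam_ao = outlier_statistics_ar1(x, phi, sigma, t)
print("IO statistic:", lam_io, "AO statistic:", lam_ao)
```

With an additive outlier injected, the AO statistic typically dominates; in practice phi and sigma would be estimated from the data and the statistics calibrated against their null distributions.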
Citations
Journal ArticleDOI
Outliers, Level Shifts, and Variance Changes in Time Series
TL;DR: In this paper, the problem of detecting outliers, level shifts, and variance changes in a univariate time series is considered; the methods employed, such as least squares techniques and residual variance ratios, are simple yet useful.
Journal ArticleDOI
Robust Kalman Filter Based on a Generalized Maximum-Likelihood-Type Estimator
Mital A. Gandhi, Lamine Mili +1 more
TL;DR: A robust filter is developed in a batch-mode regression form to process the observations and predictions together, making it very effective in suppressing multiple outliers; results revealed that this filter compares favorably with the H∞-filter in the presence of outliers.
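The flavor of such robustification can be shown with a scalar example: a standard Kalman filter for an AR(1) state whose standardized innovation is passed through a Huber-type clamp, so that a single wild observation cannot drag the state estimate arbitrarily far. This is a simplified stand-in for the paper's batch-mode GM-estimator filter; the model, noise levels, and the threshold k = 1.345 are illustrative assumptions.

```python
import numpy as np

def huberized_kalman_ar1(x, phi, q, r, k=1.345):
    """Scalar Kalman filter for an AR(1) state with a Huber-type clamp
    on the standardized innovation -- a simplified stand-in for a
    GM-estimator-based robust filter (illustrative sketch)."""
    m, p = 0.0, 1.0          # initial state mean and variance
    filtered = []
    for z in x:
        # Predict step for the AR(1) state
        m, p = phi * m, phi ** 2 * p + q
        # Standardized innovation and its variance
        s = p + r
        v = (z - m) / np.sqrt(s)
        # Huber psi-function: pass small innovations, clamp large ones
        v_rob = np.clip(v, -k, k)
        gain = p / s
        m = m + gain * v_rob * np.sqrt(s)
        p = (1.0 - gain) * p
        filtered.append(m)
    return np.array(filtered)

# A quiet series with one gross outlier: the clamp bounds its influence
x = np.zeros(100)
x[50] = 100.0
filt = huberized_kalman_ar1(x, phi=0.9, q=0.1, r=1.0)
print("estimate at the outlier time:", filt[50])
```

Without the clamp the gross observation would pull the state estimate by the full Kalman gain times the innovation; with it, the update is bounded by roughly k standard deviations.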
Journal ArticleDOI
Leave‐K‐Out Diagnostics for Time Series
TL;DR: This paper demonstrates the efficacy of observation-deletion-based diagnostics for ARIMA models, addressing issues special to time series: diagnostics based on the innovations variance, and the smearing effect to which the dependency structure of time series data gives rise.
Journal ArticleDOI
Outlier detection and time series modeling
Bovas Abraham, A. Chuang +1 more
TL;DR: In this paper, the authors proposed a method for distinguishing an observational outlier from an innovational one using regression analysis techniques, and a four-step procedure for modeling time series in the presence of outliers.
Journal ArticleDOI
On Outlier Detection in Time Series
TL;DR: The estimation and detection of outliers in a time series generated by a Gaussian autoregressive moving average process is considered, and it is shown that the estimation of additive outliers is directly related to the estimation of missing or deleted observations.
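The connection to missing observations can be sketched for an AR(1): treating the suspect point as missing, its minimum-mean-square-error interpolant from its two neighbours is phi * (x[t-1] + x[t+1]) / (1 + phi**2), and the additive-outlier size is estimated by the gap between the observed value and that interpolant. The function and parameter values below are illustrative, not the paper's general ARMA treatment.

```python
import numpy as np

def ao_estimate_via_interpolation(x, phi, t):
    """Estimate an additive-outlier size at time t in an AR(1) with
    parameter phi by comparing x[t] to the minimum-MSE interpolant
    computed as if x[t] were missing (illustrative sketch)."""
    x_hat = phi * (x[t - 1] + x[t + 1]) / (1 + phi ** 2)
    return x[t] - x_hat

# Simulate, contaminate one point, and recover the outlier size
rng = np.random.default_rng(1)
n, phi, t = 200, 0.5, 100
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()
x[t] += 10.0
omega_hat = ao_estimate_via_interpolation(x, phi, t)
print("estimated outlier size:", omega_hat)  # close to the injected 10
```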
References
Journal ArticleDOI
Outliers in Time Series
TL;DR: In this article, the authors consider two types of outliers that may occur in a time series: one in which a gross error of observation or a recording error affects a single observation, and one in which a single "innovation" is extreme.
Journal ArticleDOI
The 1972 Wald Lecture Robust Statistics: A Review
TL;DR: A selective review of robust statistics, centering on estimates of location but extending into other estimation and testing problems; three important classes of estimates are singled out, and some basic heuristic tools for assessing the properties of robust estimates (or test statistics) are discussed.
Journal ArticleDOI
Maximum likelihood estimation of regression models with autoregressive-moving average disturbances
Andrew Harvey, G. D. A. Phillips +1 more
TL;DR: In this paper, the regression model with autoregressive-moving average disturbances is cast in a form suitable for the application of Kalman filtering techniques, which enables the generalized least squares estimator to be calculated without evaluating and inverting the covariance matrix of the disturbances.
Journal ArticleDOI
Bayesian analysis of some outlier problems in time series
Bovas Abraham, George E. P. Box +1 more
TL;DR: In this article, the aberrant innovation model and the aberrant observation model are used to characterize outliers in time series, allowing a small probability that any given observation is "bad"; in this set-up, inference about the parameters of an autoregressive model is considered.
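The aberrant-observation set-up can be caricatured with a two-component normal mixture: each residual comes from N(0, sigma^2) with probability 1 - alpha ("good") or from an inflated N(0, (kappa*sigma)^2) with probability alpha ("bad"), and Bayes' rule gives the posterior probability that a given observation is bad. The values alpha = 0.05 and kappa = 5 are assumptions for illustration, not from the paper.

```python
from math import exp, sqrt, pi

def posterior_prob_bad(residual, sigma, alpha=0.05, kappa=5.0):
    """Posterior probability that an observation is 'bad' when residuals
    are N(0, sigma^2) with prob 1 - alpha and N(0, (kappa*sigma)^2)
    with prob alpha (toy version; alpha and kappa are assumed values)."""
    def normal_pdf(e, s):
        return exp(-0.5 * (e / s) ** 2) / (s * sqrt(2 * pi))
    good = (1 - alpha) * normal_pdf(residual, sigma)
    bad = alpha * normal_pdf(residual, kappa * sigma)
    return bad / (good + bad)

# A small residual is almost surely 'good'; a 6-sigma one almost surely 'bad'
print(posterior_prob_bad(0.1, 1.0))
print(posterior_prob_bad(6.0, 1.0))
```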