
Showing papers on "Outlier published in 1994"


Journal ArticleDOI
TL;DR: In this article, the authors demonstrate how a non-recursive, a simple recursive, a modified recursive, and a hybrid outlier elimination procedure are influenced by population skew and sample size.
Abstract: Results from a Monte Carlo study demonstrate how a non-recursive, a simple recursive, a modified recursive, and a hybrid outlier elimination procedure are influenced by population skew and sample size. All the procedures are based on computing a mean and a standard deviation from a sample in order to determine whether an observation is an outlier. Miller (1991) showed that the estimated mean produced by the simple non-recursive procedure can be affected by sample size and that this effect can produce a bias in certain kinds of experiments. We extend this result to the other three procedures. We also create two new procedures in which the criterion used to identify outliers is adjusted as a function of sample size so as to produce results that are unaffected by sample size.
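For context, the simple non-recursive screen drops every observation more than k standard deviations from the sample mean, and the simple recursive variant repeats that screen until nothing further is flagged. A minimal sketch of those two variants, assuming an illustrative cutoff of k = 2.5 (the function names and cutoff are ours, not the paper's):

```python
import numpy as np

def nonrecursive_trim(x, k=2.5):
    # one pass: drop observations more than k sample SDs from the sample mean
    x = np.asarray(x, dtype=float)
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    return x[z <= k]

def recursive_trim(x, k=2.5):
    # repeat the mean/SD screen until no observation is flagged
    x = np.asarray(x, dtype=float)
    while True:
        kept = nonrecursive_trim(x, k)
        if kept.size == x.size:
            return kept
        x = kept
```

Because the mean and SD are computed from the same (possibly contaminated) sample, the expected trimmed mean drifts with sample size, which is the bias the paper's size-adjusted criteria are designed to remove.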

817 citations


Journal ArticleDOI
Neil Shephard
TL;DR: The use of simulation techniques to extend the applicability of the usual Gaussian state space filtering and smoothing techniques to a class of non-Gaussian time series models allows a fully Bayesian or maximum likelihood analysis of some interesting models, including outlier models, discrete Markov chain components, multiplicative models and stochastic variance models.
Abstract: In this paper we suggest the use of simulation techniques to extend the applicability of the usual Gaussian state space filtering and smoothing techniques to a class of non-Gaussian time series models. This allows a fully Bayesian or maximum likelihood analysis of some interesting models, including outlier models, discrete Markov chain components, multiplicative models and stochastic variance models. Finally we discuss at some length the use of a non-Gaussian model to seasonally adjust the published money supply figures.

384 citations


Journal ArticleDOI
TL;DR: A few repeats of a simple forward search from a random starting point are shown to provide sufficiently robust parameter estimates to reveal masked multiple outliers, and the stability of the patterns obtained is exhibited by the stalactite plot.
Abstract: A few repeats of a simple forward search from a random starting point are shown to provide sufficiently robust parameter estimates to reveal masked multiple outliers. The stability of the patterns obtained is exhibited by the stalactite plot. The robust estimators used are least median of squares for regression and the minimum volume ellipsoid for multivariate outliers. The forward search also has potential as an algorithm for calculation of these parameter estimates. For large problems, parallel computing provides appreciable reduction in computational time.
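The forward search itself is easy to sketch: fit on a small random subset, then repeatedly enlarge the subset with the observations that are best fitted so far, so that masked outliers can only enter at the very end. The sketch below uses plain least squares at every step for brevity, whereas the paper starts from least median of squares fits; all names are illustrative:

```python
import numpy as np

def forward_search(X, y, seed=None):
    # grow a fitting subset one observation at a time, refitting on the
    # current subset and keeping the points with the smallest squared
    # residuals; points that enter last are the outlier candidates
    rng = np.random.default_rng(seed)
    n, p = X.shape
    m = p + 1
    subset = rng.choice(n, size=m, replace=False)
    history = []
    while True:
        beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
        history.append(subset)
        if m == n:
            return history, beta
        m += 1
        subset = np.argsort((y - X @ beta) ** 2)[:m]
```

Repeating the search from a few random starting points and comparing the resulting histories is one way to check the stability of the patterns, as the abstract suggests.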

261 citations


Journal ArticleDOI
TL;DR: A development of a previous genetic algorithm is presented so that a full validation of the results can be obtained, and this algorithm is shown also to perform very well as an outlier detector, allowing easy identification of the presence of outliers even in cases where the ‘classical’ techniques fail.
Abstract: Genetic algorithms have proved to be a very efficient method for the feature selection problem. However, as with every other method, if the validation of the results is performed in an incomplete way, erroneous conclusions can be drawn. In this paper a development of a previous genetic algorithm is presented so that a full validation of the results can be obtained. Furthermore, this algorithm has been shown also to perform very well as an outlier detector, allowing easy identification of the presence of outliers even in cases where the ‘classical’ techniques fail.

188 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyse fifteen post-World War II US macroeconomic time series using a modified outlier identification procedure based on Tsay (1988a) and find that large shocks appear to be present in all the series examined.
Abstract: We analyse fifteen post-World War II US macroeconomic time series using a modified outlier identification procedure based on Tsay (1988a). 'Large shocks' appear to be present in all the series we examined. Furthermore, there are three basic outlier patterns: (1) outliers seem to be associated with business cycles, (2) outliers are clustered together, both over time and across series, (3) there appears to be a dichotomy between the outlier behaviour of real versus nominal series. Also, after controlling for outliers, much of the evidence of non-linearity in many of the time series is eliminated.

185 citations


Journal ArticleDOI
TL;DR: A new consistent and robust method called the least-squares of inverted balanced relative errors (LIRS) is proposed and its superiority to the ordinary least-squares method is demonstrated by use of five actual data sets.

178 citations


Journal ArticleDOI
TL;DR: It is shown that the Gibbs Sampler applies nicely to various problems in analyzing autoregressive processes and, in many cases, it enjoys certain advantages over the traditional methods.
Abstract: Applications of the Gibbs Sampler in time series analysis are considered. We show that the sampler applies nicely to various problems in analyzing autoregressive processes and, in many cases, it enjoys certain advantages over the traditional methods. The problems considered include random level-shift models, outliers and missing values. Real examples are used to illustrate the analysis.

147 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that extremely deviant scores, or outliers, reduce the probability of Type I errors of the Student t test and, at the same time, substantially increase the probability of Type II errors, so that power declines.
Abstract: Extremely deviant scores, or outliers, reduce the probability of Type I errors of the Student t test and, at the same time, substantially increase the probability of Type II errors, so that power declines. The magnitude of the change depends jointly on the probability of occurrence of an outlier and its extremity, or its distance from the mean. Although outliers do not modify the probability of Type I errors of the Mann-Whitney-Wilcoxon test, they nevertheless increase the probability of Type II errors and reduce power. The effect on this nonparametric test depends largely on the probability of occurrence and not the extremity. Because deviant scores influence the t test to a relatively greater extent, the nonparametric method acquires an advantage for outlier-prone densities despite its loss of power.
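A quick Monte Carlo in the same spirit, using a contaminated-normal (outlier-prone) density; the contamination rate, effect size, and sample sizes here are illustrative choices, not the article's design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def contaminated(n, shift, p_out, scale_out=10.0):
    # Normal(shift, 1) most of the time; a wide outlier component with
    # probability p_out
    wide = rng.random(n) < p_out
    return shift + rng.normal(0.0, np.where(wide, scale_out, 1.0), n)

def power(test, p_out, reps=2000, n=25, shift=0.8):
    # estimated probability of rejecting H0 at the 5% level
    hits = 0
    for _ in range(reps):
        a = contaminated(n, 0.0, p_out)
        b = contaminated(n, shift, p_out)
        hits += test(a, b).pvalue < 0.05
    return hits / reps

for p_out in (0.0, 0.1):
    print(p_out, "t:", power(stats.ttest_ind, p_out),
          "MWW:", power(stats.mannwhitneyu, p_out))
```

With contamination present, the t test's estimated power should fall much more sharply than the Mann-Whitney-Wilcoxon test's, matching the qualitative claim above.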

133 citations


Journal ArticleDOI
TL;DR: A meta-algorithm based on partitioning the data that enables compound estimators to work in high dimension is proposed, and it is shown that even when the computational effort is restricted to a linear function of the number of data points, the algorithm results in an estimator with good asymptotic properties.
Abstract: Estimation of multivariate shape and location in a fashion that is robust with respect to outliers and is affine equivariant represents a significant challenge. The use of compound estimators that use a combinatorial estimator such as Rousseeuw's minimum volume ellipsoid (MVE) or minimum covariance determinant (MCD) to find good starting points for high-efficiency robust estimators such as S estimators has been proposed. In this article we indicate why this scheme will fail in high dimension due to combinatorial explosion in the space that must be searched for the MVE or MCD. We propose a meta-algorithm based on partitioning the data that enables compound estimators to work in high dimension. We show that even when the computational effort is restricted to a linear function of the number of data points, the algorithm results in an estimator with good asymptotic properties. Extensive computational experiments are used to confirm that significant benefits accrue in finite samples as well. We also g...
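A miniature of the partitioning idea, using concentration steps ("C-steps") as the refinement engine: cheap random starts are refined within each partition, and only the survivors are refined on the full data set. Partition counts, start counts, and helper names are our illustrative assumptions, not the authors' tuned algorithm, and each partition is assumed to hold comfortably more than p + 1 points:

```python
import numpy as np

def mahal(X, mu, S):
    # squared Mahalanobis distances of the rows of X from (mu, S)
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, np.linalg.pinv(S), d)

def c_steps(X, subset, h, iters=3):
    # concentration: refit location/scatter on the subset, then keep
    # the h points nearest in Mahalanobis distance
    for _ in range(iters):
        mu, S = X[subset].mean(axis=0), np.cov(X[subset], rowvar=False)
        subset = np.argsort(mahal(X, mu, S))[:h]
    return subset

def partitioned_mcd(X, h=None, n_parts=4, starts=10, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    h = h or (n + p + 1) // 2
    best, best_det = None, np.inf
    for part in np.array_split(rng.permutation(n), n_parts):
        Xp = X[part]
        h_part = max(p + 1, round(h * len(part) / n))
        for _ in range(starts):
            rel = rng.choice(len(part), size=p + 1, replace=False)
            rel = c_steps(Xp, rel, h_part)      # refine inside the partition
            full = c_steps(X, part[rel], h)     # refine on the full data
            det = np.linalg.det(np.cov(X[full], rowvar=False))
            if det < best_det:
                best, best_det = full, det
    return best, best_det
```

Searching within partitions keeps the number of candidate subsets roughly linear in n, which is the point of the meta-algorithm described above.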

130 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derive exact finite sample distributions and characterize the tail behavior of maximum likelihood estimators of the cointegrating coefficients in error correction models, showing that extreme outliers occur more frequently for the reduced rank regression estimator than for alternative asymptotically efficient procedures based on the triangular representation.
Abstract: The author derives some exact finite sample distributions and characterizes the tail behavior of maximum likelihood estimators of the cointegrating coefficients in error correction models. The reduced rank regression estimator has a distribution with Cauchy-like tails and no finite moments of integer order. The maximum likelihood estimator of the coefficients in a particular triangular system representation has matrix t-distribution tails with finite integer moments to order T - n + r, where T is the sample size, n is the total number of variables, and r is the dimension of the cointegration space. This helps explain some recent simulation studies where extreme outliers occur more frequently for the reduced rank regression estimator than for alternative asymptotically efficient procedures based on the triangular representation. Copyright 1994 by The Econometric Society.

124 citations


Journal ArticleDOI
TL;DR: In this article, a generalized theory of aligned rank tests for serial dependence problems is proposed. The theory is not restricted to linear models with independent observations; it also covers general linear models with a linear regression trend in which the observations are stationary in the mean.

Journal ArticleDOI
TL;DR: Presents an autonomous, statistically robust, sequential function approximation approach to simultaneous parameterization and organization of (possibly partially occluded) surfaces in noisy, outlier-ridden, functional range data.
Abstract: Presents an autonomous, statistically robust, sequential function approximation approach to simultaneous parameterization and organization of (possibly partially occluded) surfaces in noisy, outlier-ridden (not Gaussian), functional range data. At the core of this approach is the Robust Sequential Estimator, a robust extension to the method of sequential least squares. Unlike most existing surface characterization techniques, the authors' method generates complete surface hypotheses in parameter space. Given a noisy depth map of an unknown 3-D scene, the algorithm first selects appropriate seed points representing possible surfaces. For each nonredundant seed it chooses the best approximating model from a given set of competing models using a modified Akaike Information Criterion. With this best model, each surface is expanded from its seed over the entire image, and this step is repeated for all seeds. Those points which appear to be outliers with respect to the model in growth are not included in the (possibly disconnected) surface. Point regions are deleted from each newly grown surface in the prune stage. Noise, outliers, or coincidental surface alignment may cause some points to appear to belong to more than one surface. These ambiguities are resolved by a weighted voting scheme within a 5×5 decision window centered around the ambiguous point. The isolated point regions left after the resolve stage are removed and any missing points in the data are filled by the surface having a majority consensus in an 8-neighborhood.

Journal ArticleDOI
TL;DR: In this article, the Akaike Information Criterion (AIC) goodness-of-fit test is used to identify more objectively the optimum model for flood frequency analysis in Kenya from a class of competing models.
Abstract: For a long time now, the hydrologist has been faced with the problem of finding which of the many possible probability distribution functions can be used most effectively in flood frequency analyses. This problem has been mainly due to the insufficiency of the conventional goodness-of-fit procedures when used with the typically skewed flood probability distributions. In this study, the Akaike Information Criterion (AIC) goodness-of-fit test is used to identify more objectively the optimum model for flood frequency analysis in Kenya from a class of competing models. The class is comprised of (a) seven three-parameter density functions, namely, log-normal, Pearson, log-Pearson, Fisher-Tippet, log-Fisher-Tippet, Walter Boughton and log-Walter Boughton; and (b) two five-parameter density functions, namely, Wakeby and log-Wakeby. The AIC is also used in this study as a method of testing for the existence of outlier peak-flow values in the peak annual data used. A modified version of the chi-square goo...
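For reference, AIC = 2k − 2 ln L̂, where k is the number of fitted parameters and L̂ the maximized likelihood; the candidate with the smallest AIC is preferred. A hedged sketch comparing two of the listed families on hypothetical peak-flow data (the numbers are invented for illustration):

```python
import numpy as np
from scipy import stats

# hypothetical annual peak flows (m^3/s), for illustration only
flows = np.array([112., 145., 98., 210., 167., 134., 301., 125.,
                  178., 156., 189., 142., 263., 119., 173.])

def aic(dist, data):
    # AIC = 2k - 2 ln(maximized likelihood), k = number of fitted parameters
    params = dist.fit(data)
    return 2 * len(params) - 2 * dist.logpdf(data, *params).sum()

for name, dist in [("log-normal", stats.lognorm), ("Pearson III", stats.pearson3)]:
    print(name, round(aic(dist, flows), 1))
```

The same comparison extended over all nine candidate densities is, in miniature, the model-selection exercise the study performs.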

Journal ArticleDOI
TL;DR: In this paper, the authors present a probabilistic algorithm for obtaining the minimum covariance determinant (MCD) estimator for a given data set; it involves taking random starting "trial solutions" and refining each to a local optimum satisfying the necessary condition for the MCD optimum.

Journal ArticleDOI
TL;DR: This paper capitalizes on a necessary condition characterizing the LTS fit to develop a probabilistic ‘feasible solution’ algorithm that takes random starting trial solutions and refines each to the local optimum satisfying this necessary condition.
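A sketch of the feasible-solution idea for least trimmed squares (LTS): each random start is refined by refitting on the h observations it currently covers best, until the trimmed objective stops improving; that fixed point is the necessary condition being exploited. Function and parameter names are illustrative:

```python
import numpy as np

def lts_feasible(X, y, h, n_starts=50, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        subset = rng.choice(n, size=p + 1, replace=False)
        prev_obj = np.inf
        while True:
            beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
            r2 = (y - X @ beta) ** 2
            obj = np.sort(r2)[:h].sum()       # trimmed sum of squares
            if obj >= prev_obj:               # local optimum reached
                break
            prev_obj = obj
            subset = np.argsort(r2)[:h]       # refit on the h best-covered
        if obj < best_obj:
            best_beta, best_obj = beta, obj
    return best_beta, best_obj
```

Each refinement step can only lower the trimmed objective, so every start terminates at a feasible (locally optimal) solution, and the best over many random starts approximates the LTS fit.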

Proceedings ArticleDOI
Black, Rangarajan
21 Jun 1994
TL;DR: This paper unifies "line-process" approaches for regularization with discontinuities and robust estimation techniques and generalizes the notion of a "line process" to that of an analog "outlier process" and shows that a problem formulated in terms of outlier processes can be viewed in terms of robust statistics.
Abstract: This paper unifies "line-process" approaches for regularization with discontinuities and robust estimation techniques. We generalize the notion of a "line process" to that of an analog "outlier process" and show that a problem formulated in terms of outlier processes can be viewed in terms of robust statistics. We also characterize a class of robust statistical problems for which an equivalent outlier-process formulation exists and give a straightforward method for converting a robust estimation problem into an outlier-process formulation. This outlier-process approach provides a general framework which subsumes the traditional line-process approaches as well as a wide class of robust estimation problems. Examples in image reconstruction and optical flow are used to illustrate the approach.
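In equation form, the generalization replaces a binary line process with an analog outlier process z ∈ [0, 1] plus a penalty Ψ(z) for discounting a measurement. A sketch of the duality, using the Geman-McClure function as a worked example common in this literature (the notation here is ours, not necessarily the paper's):

```latex
\rho(x) \;=\; \min_{0 \le z \le 1} \bigl[\, z\,x^{2} + \Psi(z) \,\bigr],
\qquad
\rho(x) = \frac{x^{2}}{1 + x^{2}}
\;\Longleftrightarrow\;
\Psi(z) = \bigl(\sqrt{z} - 1\bigr)^{2},
\quad
z^{*} = \frac{1}{\bigl(1 + x^{2}\bigr)^{2}} .
```

Minimizing over z drives z* toward 0 for large residuals, which is exactly how the outlier process marks a measurement as discounted while reproducing the robust penalty ρ.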

01 Apr 1994
TL;DR: Outlier resistant wavelet transforms are developed, which improve upon the Donoho and Johnstone nonlinear signal extraction methods and are included with the 'S+WAVELETS' object-oriented toolkit for wavelet analysis.
Abstract: In a series of papers, Donoho and Johnstone develop a powerful theory based on wavelets for extracting non-smooth signals from noisy data. Several nonlinear smoothing algorithms are presented which provide high performance for removing Gaussian noise from a wide range of spatially inhomogeneous signals. However, like other methods based on the linear wavelet transform, these algorithms are very sensitive to certain types of non-Gaussian noise, such as outliers. In this paper, we develop outlier resistant wavelet transforms. In these transforms, outliers and outlier patches are localized to just a few scales. By using the outlier resistant wavelet transform, we improve upon the Donoho and Johnstone nonlinear signal extraction methods. The outlier resistant wavelet algorithms are included with the 'S+WAVELETS' object-oriented toolkit for wavelet analysis.

Journal ArticleDOI
TL;DR: In this article, the authors examined historical field growth and presented estimates of future additions to proved reserves from fields discovered before 1992, and derived a lower bound of a range of estimates of future growth by applying monotone growth functions computed from the common field set to all fields.
Abstract: Growth in estimates of recovery in discovered fields is an important source of annual additions to United States proved reserves. This paper examines historical field growth and presents estimates of future additions to proved reserves from fields discovered before 1992. Field-level data permitted the sample to be partitioned on the basis of recent field growth patterns into outlier and common field sets, and analyzed separately. The outlier field set accounted for less than 15% of resources, yet grew proportionately six times as much as the common fields. Because the outlier field set contained large old heavy-oil fields and old low-permeability gas fields, its future growth is expected to be particularly sensitive to prices. A lower bound of a range of estimates of future growth was calculated by applying monotone growth functions computed from the common field set to all fields. Higher growth estimates were obtained by extrapolating growth of the common field set and assuming the outlier fields would maintain the same share of total growth that occurred from 1978 through 1991. By 2020, the two estimates for additions to reserves from pre-1992 fields are 23 and 32 billion bbl of oil in oil fields and 142 and 195 tcf of gas in gas fields.

Journal ArticleDOI
TL;DR: In this paper, the authors established some recurrence relations satisfied by the single and product moments of order statistics arising from n independent and non-identically distributed exponential random variables, and used these results to examine the sensitivity of a robust linear estimator to the presence of multiple outliers in the sample.

Journal ArticleDOI
TL;DR: A robust regression method, least median of squares (LMS), is insensitive to atypical values in the dependent and/or independent variables in a regression analysis, so outliers that have significantly different variances from the rest of the data can be identified in a residual analysis.
Abstract: Fisheries data often contain inaccuracies due to various errors. If such errors meet the Gauss–Markov conditions and the normality assumption, strong theoretical justification can be made for traditional least-squares (LS) estimates. However, these assumptions are not always met. Rather, it is more common that errors do not follow the Gauss–Markov and normality assumptions. Outliers may arise due to heterogeneous variabilities. This results in a biased regression analysis. The sensitivity of the LS regression analysis to atypical values in the dependent and/or independent variables makes it difficult to identify outliers in a residual analysis. A robust regression method, least median of squares (LMS), is insensitive to atypical values in the dependent and/or independent variables in a regression analysis. Thus, outliers that have significantly different variances from the rest of the data can be identified in a residual analysis. Using simulated and field data, we explore the application of LMS in the analys...
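A hedged sketch of a common approximate LMS algorithm (random elemental fits); this is a generic implementation, not the authors' code, and the trial count is illustrative:

```python
import numpy as np

def lms_regression(X, y, n_trials=500, seed=0):
    # try many exact fits through p points and keep the coefficient
    # vector that minimizes the median squared residual
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue                      # degenerate elemental subset
        med = np.median((y - X @ beta) ** 2)
        if med < best_med:
            best_beta, best_med = beta, med
    return best_beta, best_med
```

Outliers are then flagged in a residual analysis against the LMS fit, as the abstract describes: because the fit minimizes a median rather than a sum, atypical points cannot drag the line toward themselves.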

Journal ArticleDOI
TL;DR: This article showed that the magnitude of the largest Z score in a univariate data set is bounded above by (n − 1)/√n, and that similar bounds hold for standardized and internally studentized residuals in regression analysis.
Abstract: Shiffler (1988) showed that the magnitude of the largest Z score in a univariate data set is bounded above by (n − 1)/√n. Similar bounds hold for standardized and internally studentized residuals in regression analysis. The implications of these bounds for outlier identification in regression do not appear to be widely recognized. Many regression textbooks contain recommendations for residual analysis that are not appropriate in light of these results.
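The bound is easy to verify numerically: the configuration with all observations equal except one attains it exactly. A small demonstration, assuming the (n − 1)/√n form filled in above:

```python
import numpy as np

n = 10
bound = (n - 1) / np.sqrt(n)       # about 2.846 for n = 10
x = np.zeros(n)
x[-1] = 1.0                        # the most extreme possible configuration
z = (x - x.mean()) / x.std(ddof=1)
print(z.max(), bound)              # the largest Z score equals the bound
```

For n = 10 the bound is about 2.85, so a fixed "|z| > 3" rule can never flag anything in samples of ten or fewer observations, which illustrates why such textbook cutoffs can be inappropriate.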

Book ChapterDOI
01 Jan 1994
TL;DR: The authors discusses the outlier strategies of research design within school effectiveness research, a study design that facilitates research by maximizing the contrasts among schools through generating samples of positive, negative, and typical schools.
Abstract: Publisher Summary Outliers are cases in a research study that do not conform to predicted patterns. In studies seeking to identify normative behavior or central tendencies in data bases, researchers typically view outliers not as objects of interest, but as problems to be solved. Outlier cases are often eliminated from analyses. Yet, because of their unusual characteristics, outliers are interesting in other circumstances. There are four sets of sampling decisions that typically result in research being referred to as an outlier study. These include studies of positive outliers, studies contrasting positive and negative outliers, studies of positive outliers and typical examples of the phenomena of interest and, rarely, studies of positive, typical, and negative outlier examples. This chapter discusses the outlier strategies of research design within school effectiveness research, a study design that facilitates research by maximizing the contrasts among schools through generating samples of positive, negative, and typical schools. The chapter outlines the findings of a number of outlier studies and presents a description of an outlier study, namely, the International School Effectiveness Research Project (ISERP).

Book
28 Mar 1994
TL;DR: In this paper, the VAR model is used to estimate pushing trends and pulling equilibria in multivariate time series, and the effect of the outliers and the effectiveness of the testing procedure is evaluated.
Abstract: Chapter 2 introduces the baseline version of the VAR model, with its basic statistical assumptions that we examine in the sequel. We first check whether the variables in the VAR can be transformed to meet these assumptions. We analyze the univariate characteristics of the series. Important properties are a bounded spectrum, the order of (seasonal) integration, linearity and normality after the appropriate transformation. Subsequently, these properties are contrasted with the properties of stochastic fractional integration. We suggest data-analytic tools to check the assumption of univariate unit root integration. In an appendix we give a detailed account of unit root tests for stochastic unit root nonstationarity versus deterministic nonstationarity at frequencies of interest.

Chapter 3 first discusses local and global influence analysis, which should point out the observations with the most notable impact on the estimates of location and covariance parameters. The results from this analysis can be helpful in spotting the sources of possible problems with the baseline model. After the influence analysis we discuss the merits of various statistical diagnostic tests for the adequacy of the separate regression equations. After one has estimated the unrestricted VAR one should check some overall characteristics of the system. We present several suggestions on how to do this.

Chapter 4 deals with common sources of misspecification stemming from problems with seasonality and seasonal adjustment in the multivariate model. We discuss a number of univariate unobserved component models for stochastic seasonality, giving additional insight into the properties of models with unit root nonstationarity. We also suggest a modification of a simple but quite robust seasonal adjustment procedure. Some new data-analytic tools are introduced to examine the seasonal component more closely. Appendix A4.1 discusses the limitations of deterministic modeling of seasonality. Appendix A4.2 treats aspects of backforecasting in models with nonstationarity in mean.

Chapter 5 introduces outlier models. We develop a testing procedure to direct and evaluate the treatment of exceptional observations in the VAR. We illustrate its application on an artificial data set that contains important characteristics of macroeconomic time series. The effect of the outliers and the effectiveness of the testing procedure is also analyzed on a four-variate set of quarterly French data, which exhibits cointegration. We compare some ready-to-use outlier correction methods in the last section.

Chapter 6 deals with restrictions on the VAR model. First we discuss a number of interesting reparameterizations of the VAR under unit root restrictions. The reparameterizations lead to different interpretations, which can help to assess the plausibility of empirical outcomes. We present some straightforward transformation formulae for a number of these parameterizations and show which assumptions are essential for the equivalence of these models. We illustrate this in simple numerical examples. Next we compare VAR-based methods to estimate pushing trends and pulling equilibria in multivariate time series. The predictability approach of Box and Tiao receives special attention. Finally we discuss multivariate tests for unit roots and cointegration.

Chapter 7 applies the methods described in the previous chapters to analyze gross fixed capital investment in the Netherlands from 1961 to 1988 in a six-variate system. We discuss a number of economic approaches to model macroeconomic investment series. We list a number of problems in empirical applications of these models. Section 7.3 presents empirically relevant aspects of the measurement model for macroeconomic investment. Section 7.4 applies the univariate techniques of Chapters 2, 3, 4 and 5 to the investment series and five other macroeconomic series with a notable dynamic relationship with investment, viz. consumption, imports, exports, the terms of trade and German industrial production. The univariate analysis clearly shows the presence of nonstationary seasonal components in a number of the series. The model is extended with a structural break on the basis of results from the univariate analysis. The subsequent multivariate analysis confirms the need for a structural break in the model for the growth rates of the multivariate series. An empirically important equilibrium relation between investment, imports and exports is seen to remain stable over the entire sample period. The partial correlation of deviations from this equilibrium and growth rates of investment is large and stable.

Journal ArticleDOI
TL;DR: The correspondence focuses on robust 3-D-3-D pose estimation, especially multiple pose estimation, which is formulated as a series of general regressions involving a successively size-decreasing data set, with each regression relating to one particular pose of interest.
Abstract: The correspondence focuses on robust 3-D-3-D pose estimation, especially multiple pose estimation. The robust 3-D-3-D multiple pose estimation problem is formulated as a series of general regressions which involve a successively size-decreasing data set, with each regression relating to one particular pose of interest. Since the first few regressions may carry a severely contaminated Gaussian error noise model, the MF-estimator (Zhuang et al., 1992) is used to solve each regression for each pose of interest. Extensive computer experiments with both real imagery and simulated data are conducted and the results are promising. Three distinctive features of the MF-estimator are theoretically discussed and experimentally demonstrated: it is highly robust in the sense that it is not much affected by a possibly large portion of outliers or incorrect matches as long as the minimum number of inliers necessary to give a unique solution is provided; it is made virtually independent of initial guesses; and it is computationally reasonable and admits an efficient parallel implementation.

Book ChapterDOI
01 Jan 1994
TL;DR: In this paper, the problem of choosing the number of component clusters of individuals, determining the variables which are contributing to the differences between the clusters using all possible subset selection of variables, and detecting outliers or extreme observations across the clustering alternatives in one expert-system simultaneously within the context of the standard mixture of multivariate normal distributions is considered.
Abstract: This paper considers the problem of choosing the number of component clusters of individuals, determining the variables which are contributing to the differences between the clusters using all possible subset selection of variables, and detecting outliers or extreme observations across the clustering alternatives in one expert-system simultaneously within the context of the standard mixture of multivariate normal distributions. This is achieved by introducing and deriving a new informational measure of complexity (ICOMP) criterion of the estimated inverse-Fisher information matrix (IFIM) developed by Bozdogan as an alternative to Akaike's information criterion (AIC), and Bozdogan's CAIC for the mixture-model. A numerical example is shown on a real data set to illustrate the significance of these validity functionals.

Posted Content
TL;DR: In this article, the effects of a break under the null hypothesis and the choice of break date are considered, and new limiting distributions are derived, including the case where a shift in trend occurs under the unit root null hypothesis.
Abstract: The authors consider unit root tests that allow a shift in trend at an unknown time. They focus on the additive outlier approach but also give results for the innovational outlier approach. Various methods of choosing the break date are considered. New limiting distributions are derived, including the case where a shift in trend occurs under the unit root null hypothesis. Limiting distributions are invariant to mean shifts but not to slope shifts. Simulations are used to assess finite sample size and power. The authors focus on the effects of a break under the null and the choice of break date. Copyright 1998 by Economics Department of the University of Pennsylvania and the Osaka University Institute of Social and Economic Research Association.

Patent
15 Dec 1994
TL;DR: In this article, a nonlinear resistive network is used to detect the presence of objects in the image by comparing the brightness or intensity of each pixel with that of the background, if the intensity of a pixel is significantly different from the background level, the data path switch corresponding to that pixel is opened.
Abstract: A vehicular traffic monitoring system incorporates an array of photosensors and a nonlinear resistive network for identifying, locating, and processing outliers in sensor images of a highway or intersection. The camera system can be mounted on a pole or overpass to provide an image of the roadway or intersection. Areas of the outlier network ("video loops") are designated to correspond to selected areas of the roadway. Images are received by the outlier detection network with all data path switches closed between sensor elements and their corresponding network nodes. The system detects the presence of objects in the image by comparing the brightness or intensity of each pixel with that of the background. If the intensity of a pixel is significantly different from the background level, the data path switch corresponding to that pixel is opened. A readout of the state of all the switches in the network yields a map of outlier points for each video frame. The outlier map is connected to a data processing system to identify and locate outlier points in the image. The detection of a threshold number of outliers in a video loop indicates the presence of a vehicle at the corresponding area of the roadway. The processor, having a greatly reduced computational load without extensive image processing, simply measures and transmits traffic data such as the number and speed of vehicles passing through the video loops.
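A software analogue of the patent's switch array, as a minimal sketch; the threshold, frame dimensions, and loop geometry below are invented for illustration:

```python
import numpy as np

def outlier_map(frame, background, thresh=25):
    # "open the switch" for any pixel whose intensity departs from the
    # background by more than thresh
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def vehicle_present(omap, loop, min_outliers=30):
    # a video loop reports a vehicle when enough of its pixels are outliers
    return omap[loop].sum() >= min_outliers

# usage with synthetic data: a bright "vehicle" crossing one video loop
background = np.full((120, 160), 80, dtype=np.uint8)
frame = background.copy()
frame[60:80, 40:70] = 200
omap = outlier_map(frame, background)
print(vehicle_present(omap, (slice(55, 85), slice(35, 75))))   # True
```

Counting switch states per loop is all the processor has to do, which is why the patent claims a greatly reduced computational load compared with full image processing.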

Journal ArticleDOI
TL;DR: In this article, Monte Carlo simulation is used to evaluate the performance of several GM-estimators as applied to the problem of two-group discrimination, with respect to probability of misclassification.

Journal ArticleDOI
TL;DR: It is shown that this method has a considerable sample size advantage over naive repeated measurements and is robust to outlier-prone error distributions.
Abstract: Detecting changes in longitudinal data is important in medical research. However, the existence of measurement outliers can cause an unexpected increase in the false alarm rate in claiming changes. To reduce the influence of outliers, a new method has been developed. In this scheme, two measures are initially taken and, if they are closer than a specified threshold, the average of the two is considered to be the estimate of the true mean; otherwise a third measurement is taken, and the mean of the closest pair is considered to be the estimate. It is shown that this method has a considerable sample size advantage over naive repeated measurements. Moreover, this scheme is robust to outlier-prone error distributions. Evidence on outlier removal in dental attachment probing is used as an example.
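The two-then-three measurement scheme is direct to implement; a minimal sketch, with the agreement threshold and noise model invented for illustration:

```python
import numpy as np

def robust_measure(draw, thresh):
    # take two measurements; if they agree within thresh, average them,
    # otherwise take a third and average the closest pair
    a, b = draw(), draw()
    if abs(a - b) <= thresh:
        return (a + b) / 2
    c = draw()
    x, y = min([(a, b), (a, c), (b, c)], key=lambda p: abs(p[0] - p[1]))
    return (x + y) / 2

# usage: a probe that is occasionally grossly wrong
rng = np.random.default_rng(2)
def noisy_probe():
    return 3.0 + rng.normal(0, 0.2) + (rng.random() < 0.1) * rng.normal(0, 3)

print(robust_measure(noisy_probe, thresh=0.5))
```

Averaging the closest pair discards a single gross error at the cost of, at most, one extra measurement, which is the sample-size advantage claimed above.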

Posted Content
TL;DR: The wavelet transform is looked at in the context of multiresolution analysis, its uses in other fields are discussed, and an econometric application of wavelets to outlier detection is presented.
Abstract: In recent years, wavelets have become widely used in physics, engineering, and mathematics. They have been used for signal processing, image processing, numerical computation, and data compression. Wavelets have not, however, been used very much in the fields of economics, econometrics, and finance. In this study, we will look at the wavelet transform in the context of multiresolution analysis, discuss its uses in other fields, and present an econometric application of wavelets to outlier detection.