
Showing papers on "Median filter published in 1989"


Journal ArticleDOI
TL;DR: In this article, the adaptive weighted median filter (AWMF) is proposed for reducing speckle noise in medical ultrasonic images; its weight coefficients are adjusted according to the local statistics around each point so that noise is suppressed while edges and small details are preserved.
Abstract: A method for reducing speckle noise in medical ultrasonic images is presented. It is called the adaptive weighted median filter (AWMF) and is based on the weighted median, which originates from the well-known median filter through the introduction of weight coefficients. By adjusting the weight coefficients and consequently the smoothing characteristics of the filter according to the local statistics around each point of the image, it is possible to suppress noise while edges and other important features are preserved. Application of the filter to several ultrasonic scans has shown that processing improves the detectability of small structures and subtle gray-scale variations without affecting the sharpness or anatomical information of the original image. Comparison with the pure median filter demonstrates the superiority of adaptive techniques over their space-invariant counterparts. Examples of processed images show that the AWMF preserves small details better than other nonlinear space-varying filters which offer equal noise reduction in uniform areas. >
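As an illustration of the weighted-median idea described above, here is a minimal sketch in Python. The weighting rule (growing the centre weight with the local variance-to-mean ratio so that smoothing is reduced near edges and details) is a simplified, hypothetical stand-in for the authors' actual adaptation formula; the window size and the constant k are arbitrary choices.

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values` where each sample effectively counts `weights[i]` times."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, cdf[-1] / 2.0)]

def awmf_sketch(img, k=2.0, half=1):
    """Toy adaptive weighted median: the centre weight grows with the local
    variance-to-mean ratio, reducing smoothing near edges (heuristic rule,
    not the paper's formula)."""
    out = img.astype(float).copy()
    eps = 1e-9
    for r in range(half, img.shape[0] - half):
        for c in range(half, img.shape[1] - half):
            win = img[r - half:r + half + 1, c - half:c + half + 1].astype(float)
            w = np.ones(win.size)
            w[win.size // 2] = 1 + round(k * win.var() / (win.mean() + eps))
            out[r, c] = weighted_median(win.ravel(), w)
    return out
```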

715 citations


Journal ArticleDOI
TL;DR: The contribution of this work is to quantify and justify the functional relationships between image features and filter parameters so that the design process can be easily modified for different conditions of noise and scale.

284 citations


Journal ArticleDOI
TL;DR: Two methods for estimation of noise correlations along an array of sensors are presented, both of which rely on a parametric (autoregressive moving average) noise model that has the advantage of describing the noise correlations by a small number of parameters.
Abstract: Two methods for estimation of noise correlations along an array of sensors are presented. Both rely on a parametric (autoregressive moving average) noise model. The model has the advantage of describing the noise correlations by a small number of parameters and can be applied to a great variety of physical noises. The first method is related to the calculation of the likelihood of whitened observations, and the second is related to Pisarenko's method (1973) applied to whitened observations. Both methods are obtained by optimization of a criterion and are iterative. The noise estimates can be used for sensor-output whitening, which then provides a means to improve array processing performance. The two methods perform well on both simulated and real data. However, the first method seems simpler and more robust than the second. >

143 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm that requires a significantly smaller number of comparisons and is significantly faster than the traditional approach to order statistics filtering, and also propose a filter structure for order statistics that is much faster than known sorting structures.
Abstract: Order statistics are used in a variety of filtering techniques (e.g. median, alpha-trimmed mean, nonlinear order statistics filtering, morphological filtering). Their computation is relatively fast, because it requires only comparisons. The author presents an algorithm that requires a significantly smaller number of comparisons and is significantly faster than the traditional approach to order statistics filtering. Also proposed are filter structures for order statistics filtering that are much faster than the known sorting structures. >
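The paper's specific comparison-saving algorithm is not reproduced in the abstract, but the general idea of reusing work between overlapping windows can be sketched as follows: a sliding-window rank-order filter that keeps its window sorted and performs one binary-search deletion and one insertion per output sample instead of re-sorting the whole window.

```python
import bisect

def running_rank_filter(x, width, rank):
    """Sliding-window rank-order filter.  The sorted window is updated
    incrementally (one removal, one insertion) at each step rather than
    being rebuilt, which greatly reduces the number of comparisons."""
    window = sorted(x[:width])
    out = [window[rank]]
    for i in range(width, len(x)):
        del window[bisect.bisect_left(window, x[i - width])]  # drop outgoing sample
        bisect.insort(window, x[i])                            # insert incoming sample
        out.append(window[rank])
    return out

# A width-5 running median, for example: running_rank_filter(signal, 5, rank=2)
```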

135 citations


Proceedings ArticleDOI
10 Jul 1989
TL;DR: A noise model is defined for remotely sensed images, and the noise statistics are estimated by using the means and variances from small image blocks; the Hough transform is applied to the resulting scatter plot to detect a straight line through the major cluster of data points.
Abstract: A noise model is defined for remotely sensed images, and the noise statistics are estimated by using the means and variances from small (4x4 or 8x8) image blocks. Since most images contain many small but homogeneous areas, a scatter plot of variance vs. (mean)² reveals characteristics of the noise. The Hough transform is then applied to the scatter plot to detect a straight line through the major cluster of data points. This defines the image noise statistics. Images from SAR, Landsat TM, and passive microwave sensors are used for illustration.
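A minimal sketch of the estimation procedure as described, under simplifying assumptions: local variance is plotted against (local mean) squared over small blocks, and a coarse accumulator over (slope, intercept) cells stands in for the Hough transform. The bin counts, tolerance, and block size are arbitrary choices, not the paper's.

```python
import numpy as np

def estimate_noise_line(img, block=8, n_slope=64, n_icept=64):
    """Block means/variances form a scatter plot of variance vs (mean)**2;
    a crude Hough-style vote over (slope, intercept) picks the line through
    the main cluster.  Roughly, slope ~ multiplicative and intercept ~
    additive noise variance under this simple model."""
    h, w = (s - s % block for s in img.shape)
    blocks = img[:h, :w].astype(float).reshape(h // block, block, w // block, block)
    x = blocks.mean(axis=(1, 3)).ravel() ** 2
    y = blocks.var(axis=(1, 3)).ravel()
    slopes = np.linspace(0.0, y.max() / (x.max() + 1e-12), n_slope)
    icepts = np.linspace(0.0, y.max(), n_icept)
    acc = np.zeros((n_slope, n_icept), dtype=int)
    tol = 0.02 * (y.max() + 1e-12)
    for xi, yi in zip(x, y):
        for si, s in enumerate(slopes):
            b = yi - s * xi                      # intercept implied by this slope
            bi = int(np.clip(np.searchsorted(icepts, b), 0, n_icept - 1))
            if abs(b - icepts[bi]) < tol:
                acc[si, bi] += 1
    si, bi = np.unravel_index(acc.argmax(), acc.shape)
    return slopes[si], icepts[bi]
```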

132 citations


Journal ArticleDOI
TL;DR: This study presents a performance evaluation of nonlinear filters derived from robust point estimation theory, classifying various approaches to nonlinear filtering into three types of estimators according to the process of the filter.
Abstract: Nonlinear filters are used in many applications, including speech and image processing, owing to their ability to suppress noise and preserve signal features such as edges. This study presents a performance evaluation of nonlinear filters derived from robust point estimation theory. The first part of the work is a classification of various approaches to nonlinear filtering into three types of estimators according to the process of the filter. The second part is a computer implementation and evaluation of all of the filters discussed. Finally, a summary of experimental results is presented.

83 citations


Proceedings ArticleDOI
30 Jun 1989
TL;DR: In this paper, a robust, nonlinear, order-statistic-type filter is proposed for point-like feature detection in infrared systems, known as median subtraction filtering, which exhibits high-pass filter characteristics without the usual ringing associated with linear high-pass filters.
Abstract: The nonstationarity of infrared interference backgrounds, which prevents the implementation of the usual optimum linear filtering techniques, makes clutter suppression signal processing for point target detection in infrared surveillance systems a challenging and difficult problem. Hence, more robust filtering schemes are sought which will perform well in structured backgrounds where the underlying probability distribution defining that structure is not well known or characterized. This paper investigates a promising candidate spatial filter for point-like feature detection in infrared systems. The technique, known as median subtraction filtering, is a robust, nonlinear, order-statistic-type filter which exhibits high-pass filter characteristics without the usual ringing associated with linear high-pass filters. A quantitative analysis of the statistical properties of the median subtraction filter is presented, including analytic expressions for the output distribution of the filter (and thus analytic expressions for the probability of detection and probability of false alarm), its autocorrelation function and spectral density function. Performance results of a signal processing simulation comparing a median subtraction filter with an adaptive linear filter of the LMS type, using actual infrared video as input, are also included.
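The core operation is simple to state: subtract a local median estimate of the background from each pixel, so that slowly varying clutter is removed while point-like targets remain. A minimal sketch (the window size is an arbitrary choice):

```python
import numpy as np

def median_subtraction(frame, half=2):
    """Each pixel minus the median of its (2*half+1) x (2*half+1)
    neighbourhood: a high-pass-like operation without the ringing of a
    linear high-pass filter, since the median tracks the smooth background."""
    padded = np.pad(frame.astype(float), half, mode="edge")
    out = np.empty(frame.shape, dtype=float)
    for r in range(frame.shape[0]):
        for c in range(frame.shape[1]):
            win = padded[r:r + 2 * half + 1, c:c + 2 * half + 1]
            out[r, c] = frame[r, c] - np.median(win)
    return out
```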

76 citations


Journal ArticleDOI
TL;DR: A fast median filtering algorithm with logarithmic time complexity is presented that is based on a special data structure, a double heap, which naturally supports the median, and can be used to implement any rank-order filter.
Abstract: A fast median filtering algorithm with logarithmic time complexity is presented that is based on a special data structure, a double heap, which naturally supports the median. With slight modification, the approach can be used to implement any rank-order filter. A complete implementation of the algorithm has been tested as a global Modula-2 module and has the expected performance and correctness. >
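The double-heap idea can be illustrated with the growing-stream case: one max-heap holds the lower half of the samples and one min-heap the upper half, so the median is always at a heap root and each update costs O(log n). This sketch omits the mid-heap deletions that a sliding-window rank-order filter (and the paper's data structure) must also support.

```python
import heapq

def streaming_medians(xs):
    """Running median of a growing stream using two heaps: `lo` is a max-heap
    (stored negated) for the lower half, `hi` a min-heap for the upper half."""
    lo, hi, medians = [], [], []
    for x in xs:
        if not lo or x <= -lo[0]:
            heapq.heappush(lo, -x)
        else:
            heapq.heappush(hi, x)
        if len(lo) > len(hi) + 1:                 # rebalance the two halves
            heapq.heappush(hi, -heapq.heappop(lo))
        elif len(hi) > len(lo):
            heapq.heappush(lo, -heapq.heappop(hi))
        medians.append(-lo[0] if len(lo) > len(hi) else (-lo[0] + hi[0]) / 2.0)
    return medians
```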

58 citations


Journal ArticleDOI
TL;DR: It is shown that the direct moment calculation combined with a consensus averaging technique has the best overall performance for accuracy and the ability to use data with a very low signal-to-noise ratio.
Abstract: A numerical model to simulate radar data is used for testing various estimators of the Doppler shift in Doppler radar echoes. The estimators are the pulse pair and poly-pulse pair algorithms in the correlation domain, a least-squares fitting to the spectral peak of the power spectra, and direct calculations of the moments from periodograms in the spectral domain. Two averaging schemes (a consensus average and a median filter) are also examined for data with poor signal-to-noise ratios. The data processing method used in Doppler radar wind profilers, which operate over a very wide range of signal-to-noise ratios, is examined in detail. It is shown that the direct moment calculation combined with a consensus averaging technique has the best overall performance for accuracy and the ability to use data with a very low signal-to-noise ratio.
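For reference, the pulse-pair estimator mentioned above derives the mean Doppler velocity from the phase of the lag-one autocorrelation of the complex (I/Q) time series; a minimal sketch follows, together with a much simplified consensus-average step (keeping the largest subset of estimates that agree within a tolerance). The sign convention depends on how I and Q are defined.

```python
import numpy as np

def pulse_pair_velocity(iq, prt, wavelength):
    """Mean radial velocity from the lag-1 autocorrelation R1 of a complex
    pulse series: v = -wavelength * arg(R1) / (4 * pi * PRT)."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
    return -wavelength * np.angle(r1) / (4.0 * np.pi * prt)

def consensus_average(estimates, tol):
    """Average of the largest subset of estimates lying within +/- tol of one
    of its members (a simplified stand-in for the consensus step)."""
    est = np.asarray(estimates, dtype=float)
    best = max((est[np.abs(est - e) <= tol] for e in est), key=len)
    return best.mean()
```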

53 citations


Patent
26 Jan 1989
TL;DR: In this paper, the authors present a processor that generates compensation terms that are stored in an offset term memory and which are subsequently combined with the output signals from the array to normalize all detector elements in the array such that they all appear to respond to infrared energy.
Abstract: Apparatus and methods for providing nonuniformity compensation of staring infrared focal plane array imaging systems, or other video imaging system, or the like. The invention comprises a processor which implements nonuniformity compensation of the detectors comprising the array. The processor generates compensation terms that are stored in an offset term memory and which are subsequently combined with the output signals from the array. The processing accomplished by the present invention normalizes all detector elements in the array such that they all appear to respond to infrared energy in an identical manner. The processor comprises a median filter which selectively implements cross (X) shaped and plus (+) shaped filters. An antimedian calculator computes the antimedian of the output of the median filter. This value comprises the difference between the central pixel of a respective filter and the median value of all pixels in the cross (X) or plus (+) shaped filter. A third filter samples each of the signals from the detector array and compares them to a preset value indicative of an anticipated scene intensity level determined by the operator to provide an output signal indicative of the difference. Control circuitry selects which output signal of the filter circuits is to be used to compensate the detector signals during a particular video field. The output signals of the antimedian calculator and average filter comprise sign information which is indicative of whether the central pixel value is less than, equal to or greater than the median, or whether the central pixel is less than, equal to or greater than the preset value, respectively. The control circuitry increments or decrements the value of the offset terms stored in the offset term memory in response to the signal provided by the selected antimedian calculator or third filter, and convergence rate information supplied by the control circuitry which controls the rate of convergence of the offset terms toward the scene average.
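A minimal sketch of the plus-shaped and cross-shaped medians and the 'antimedian' difference described above; the offset-update and convergence-control logic of the patent is omitted, and border handling is ignored (the functions assume 1 <= r, c < size-1).

```python
import numpy as np

def plus_median(img, r, c):
    """Median over the plus (+) shaped neighbourhood centred at (r, c)."""
    return np.median([img[r, c], img[r - 1, c], img[r + 1, c],
                      img[r, c - 1], img[r, c + 1]])

def cross_median(img, r, c):
    """Median over the cross (X) shaped neighbourhood (centre plus diagonals)."""
    return np.median([img[r, c], img[r - 1, c - 1], img[r - 1, c + 1],
                      img[r + 1, c - 1], img[r + 1, c + 1]])

def antimedian(img, r, c, use_cross=False):
    """Central pixel minus the shaped median; its sign tells the update logic
    which way to nudge the stored offset term."""
    med = cross_median(img, r, c) if use_cross else plus_median(img, r, c)
    return img[r, c] - med
```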

47 citations


Proceedings ArticleDOI
01 Nov 1989
TL;DR: In this article, the filtering of noise in image sequences using spatio-temporal motion compensated techniques is considered, and a number of filtering techniques are proposed and compared in this work.
Abstract: In this paper the filtering of noise in image sequences using spatio-temporal motion compensated techniques is considered. Noise in video signals degrades both the image quality and the performance of subsequent image processing algorithms. Although the filtering of noise in single images has been studied extensively, there have been few results in the literature on the filtering of noise in image sequences. A number of filtering techniques are proposed and compared in this work. They are grouped into recursive spatio-temporal and motion compensated filtering techniques. A 3-D point estimator which is an extension of a 2-D estimator due to Kak [5] belongs in the first group, while a motion compensated recursive 3-D estimator and 2-D estimators followed by motion compensated temporal filters belong in the second group. The motion in the sequences is estimated using the pel-recursive Wiener-based algorithm [8] and the block-matching algorithm. The methods proposed are compared experimentally on the basis of the signal-to-noise ratio improvement and the visual quality of the restored image sequences.

Proceedings ArticleDOI
04 Jun 1989
TL;DR: A correspondence method is developed for determining optical flow where the primitive motion tokens to be matched between consecutive time frames are regions, which is simple, computationally efficient, and more robust than iterative gradient methods, especially for medium-range motion.
Abstract: A correspondence method is developed for determining optical flow where the primitive motion tokens to be matched between consecutive time frames are regions. The computation of optical flow consists of three stages: region extraction, region matching, and optical flow smoothing. The computation is completed by smoothing the initial optical flow, where the sparse velocity data are either smoothed with a vector median filter or interpolated to obtain dense velocity estimates by using a motion-coherence regularization. The proposed region-based method for optical flow is simple, computationally efficient, and more robust than iterative gradient methods, especially for medium-range motion. >
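The vector median used in the smoothing stage replaces each flow vector by the neighbourhood vector whose summed distance to all the others is smallest; a minimal sketch assuming the Euclidean norm:

```python
import numpy as np

def vector_median(vectors):
    """Return the row of `vectors` (shape (n, 2)) that minimises the sum of
    Euclidean distances to all other vectors in the neighbourhood."""
    v = np.asarray(vectors, dtype=float)
    dists = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1).sum(axis=1)
    return v[np.argmin(dists)]

# Smoothing a sparse flow field: replace each velocity by the vector median
# of the velocities in, say, its 3x3 spatial neighbourhood.
```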

Journal ArticleDOI
TL;DR: Restricted numerical evaluation over a certain parameter range of the noise distribution and the range of signal levels indicates that these tests yield comparable performance.
Abstract: The problem of distributed detection of a signal in incompletely specified noise is considered. The noise is assumed to belong to the generalized Gaussian family, and the sensors in the distributed network use the Wilcoxon test. The sensors pass the test statistics to a fusion center, where a hypothesis test results in a decision regarding the presence or the absence of a signal. Three monotone and admissible fusion center tests are formulated. Restricted numerical evaluation over a certain parameter range of the noise distribution and the range of signal levels indicates that these tests yield comparable performance. >

Journal ArticleDOI
TL;DR: This class of nonlinear digital smoothing filters is shown to exhibit root signal properties similar to those of the standard median filter, and its noise attenuation is optimal for certain types of exponential noise distributions.

Proceedings ArticleDOI
08 May 1989
TL;DR: A new concept based on the weighted median filter and an averaging substructure for scan rate conversion is introduced, which gives a good image quality compared with commonly used interpolation methods.
Abstract: A new concept based on the weighted median filter and an averaging substructure for scan rate conversion is introduced. The method can be used with motion information if it is available, or the weights can be fixed so that the interpolator gives a compromise between still image and motion reproduction. The weighted median interpolator gives a good image quality compared with commonly used interpolation methods. The algorithm is suitable for real-time implementations. It is also possible to use the proposed weighted median interpolation to produce high-quality still pictures, for example, from the VCR. >

Journal ArticleDOI
TL;DR: In this article, the relative amplitude decay rate of each trace is calculated by comparing the time-gated trace amplitudes to a control function that is the median trace amplitude as a function of time, offset, and common midpoint; the editing threshold is then set by analysing a histogram of the decay rates.
Abstract: Seismic data often contain traces that are dominated by noise; these traces should be removed (edited) before multichannel filtering or stacking. Noise bursts and spikes should be edited before single-channel filtering. Spikes can be edited using a running median filter with a threshold; noise bursts can be edited by comparing the amplitudes of each trace to those of traces that are nearby in offset-common midpoint space. Relative amplitude decay rates of traces are diagnostic of their signal-to-noise (S/N) ratios and can be used to define trace editing criteria. The relative amplitude decay rate is calculated by comparing the time-gated trace amplitudes to a control function that is the median trace amplitude as a function of time, offset, and common midpoint. The editing threshold is set using a data-adaptive procedure that analyses a histogram of the amplitude decay rates. A performance evaluation shows that the algorithm makes slightly fewer incorrect trace editing decisions than human editors. The procedure for threshold setting achieves a good balance between preserving the fold of the data and removing the noisiest traces. Tests using a synthetic seismic line show that the relative amplitude decay rates are diagnostic of the traces’ S/N ratios. However, the S/N ratios cannot be accurately estimated at the start of processing, where noisy-trace editing is most needed; this is the fundamental limit to the accuracy of noisy-trace editing. When trace equalization is omitted from the processing flow (as in amplitude-versus-offset analysis), precise noisy-trace editing is critical. The S/N ratio of the stack is more sensitive to type 2 errors (failing to reject noisy traces) than it is to type 1 errors (rejecting good traces). However, as the fold of the data decreases, the S/N ratio of the stack becomes increasingly sensitive to type 1 errors.
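A minimal sketch of the spike-editing step described above: samples that deviate from a running median by more than a threshold are replaced by the median. The threshold rule here (a multiple of the window's median absolute deviation) is a generic choice, not the paper's data-adaptive procedure.

```python
import numpy as np

def despike(trace, width=9, k=4.0):
    """Replace samples that differ from the running median by more than k
    times the median absolute deviation of their window."""
    half = width // 2
    x = np.asarray(trace, dtype=float)
    padded = np.pad(x, half, mode="edge")
    out = x.copy()
    for i in range(len(x)):
        win = padded[i:i + width]
        med = np.median(win)
        mad = np.median(np.abs(win - med)) + 1e-12
        if abs(x[i] - med) > k * mad:
            out[i] = med
    return out
```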

Journal ArticleDOI
TL;DR: The double window Hodges-Lehmann filter and a hybrid D-median filter (HDM-filter) for robust image smoothing are proposed; the DWD filter is shown to have a simpler structure, although not necessarily lower computational complexity.
Abstract: The double window Hodges-Lehmann filter (DWD-filter) and a hybrid D-median filter (HDM-filter) for robust image smoothing are proposed. An adaptive mixture of the median and the D-filter, the HDM filter first makes decisions about the presence of edges on the basis of a two-way classification of pixels near and around the pixel to be filtered. Subsequently, straightforward D-filtering is used in the absence of edges, and median filtering is used in the presence of edges. The DWD filter uses two windows and D-filtering: the smaller window is used to preserve the details, and the larger window provides sufficient smoothing. Detailed simulation results show that the HDM filter, while retaining all the good properties of the DWD filter, consistently performs better, in terms of signal-to-noise ratio, than the DWD filter and a number of other filters, including the median filter. The DWD filter is shown to have a simpler structure, although not necessarily lower computational complexity. >
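The D-filter's window statistic is the Hodges-Lehmann estimator, i.e. the median of the pairwise (Walsh) averages of the window samples; a minimal sketch (definitions differ on whether each sample is also paired with itself; here it is):

```python
import numpy as np
from itertools import combinations_with_replacement

def hodges_lehmann(window):
    """Hodges-Lehmann location estimate: median of all pairwise averages of
    the window samples, including each sample averaged with itself."""
    x = np.asarray(window, dtype=float)
    walsh = [(a + b) / 2.0 for a, b in combinations_with_replacement(x, 2)]
    return np.median(walsh)
```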

Patent
08 May 1989
TL;DR: In this paper, data are digitally filtered by taking the median of data values instead of the mean or some other algebraic combination; the filter can be used in both the time domain and the spatial domain, and its size can easily be varied to adapt it to system gain.
Abstract: This technique involves digitally filtering data by determining the median of data values instead of the mean or some other algebraic combination. It is unique in that the median calculation uses previous data as well as the new data. This filter structure can be used in both the time domain and the spatial domain. Also, the filter size can easily be varied to permit adapting its size to system gain.

Journal ArticleDOI
TL;DR: A vertical-temporal median filter and adaptive vertical-edge-controlled interpolation are described, using a field frequency doubled to 100 Hz (120 Hz) to have good properties for flicker reduction in TV systems.
Abstract: A vertical-temporal median filter and adaptive vertical-edge-controlled interpolation are described, using a field frequency doubled to 100 Hz (120 Hz). The filter is found to have good properties for flicker reduction in TV systems, but it produces additional disturbing alias components and movement defects in the vertical direction. A major advantage of the adaptive filter over the median filter is the ease with which alias components are suppressed. Furthermore, vertical-edge-controlled interpolation is insensitive to noise and the subcarrier in standard TV receivers, since these disturbances can be prevented by pre- and postfiltering within the edge detector. The filter and interpolation hardware is limited to one field memory for a noninterlace conversion, or two field memories when the field frequency is doubled. By comparison, a satisfactory motion-adaptive filter using frame delays needs three field memories and an additional one for converting the field frequency. >
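The vertical-temporal median interpolation can be sketched with a common three-tap formulation: each missing line is the sample-wise median of the line above, the line below, and the co-sited line of the previous field. The array layout below (the previous field holding the opposite-parity lines, i.e. the spatial positions being interpolated) is an assumption for the sketch; the adaptive edge-controlled mode and the 100 Hz conversion of the paper are not shown.

```python
import numpy as np

def vt_median_interpolate(curr_field, prev_field):
    """Build a progressive frame from `curr_field` (transmitted lines) and
    `prev_field` (opposite-parity lines, co-sited with the missing lines).
    Missing line i = median(line above, line below, previous-field line i)."""
    n, w = curr_field.shape
    out = np.empty((2 * n, w), dtype=float)
    out[0::2] = curr_field                           # transmitted lines pass through
    for i in range(n):
        above = curr_field[i]
        below = curr_field[min(i + 1, n - 1)]        # repeat last line at the border
        temporal = prev_field[i]
        out[2 * i + 1] = np.median(np.stack([above, below, temporal]), axis=0)
    return out
```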

Patent
Terence Doyle, Martine Looymans
23 Aug 1989
TL;DR: In this paper, it was found that a better result can be obtained for moving contours in the picture signal by taking a weighted average of the signals of the present field and using this average instead of, or mixed with, an output signal (Y') of the median filter (67).
Abstract: In a picture signal processing circuit including an interpolation filter having a median filter (67) with a first input (63), a second input (69) and a third input (63), to which respective signals from picture elements of three positionally consecutive lines from a preceding and a present field of a line- and field-sequentially assembled picture signal are applied, it has been found that a better result can be obtained for moving contours in the picture signal by taking a (weighted) average of these signals of the present field and using this average instead of, or mixed with, an output signal (Y') of the median filter (67).

Journal ArticleDOI
TL;DR: In this article, size and shape distortions resulting from the use of median filters are studied. The starting point is the filtering of binary objects by operators acting within circular neighbourhoods in a continuous space, which leads to predictable shifts of edges towards the local centres of curvature.

Journal ArticleDOI
TL;DR: A two-step procedure for detecting edges is presented: a median filter is first used to reduce noise and background artefacts, and local statistics are then used to locate image pixels that are ‘information rich’.
Abstract: Accurate edge detection is a fundamental problem in the areas of image processing and pattern recognition/classification. The lack of effective edge detection methods has slowed the application of image processing to many areas, in particular diagnostic cytology, and is a major factor in the lack of acceptance of image processing in service-orientated pathology. In this paper, we present a two-step procedure which detects edges. Since most images are corrupted by noise and often contain artefacts, the first step is to clean up the image. Our approach is to use a median filter to reduce noise and background artefacts. The second operation is to locate image pixels which are ‘information rich’ by using local statistics. This step locates the regions of the image most likely to contain edges. The application of a threshold can then pin-point those pixels forming the edge of structures of interest. The procedure has been tested on routine cytologic specimens.
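A minimal sketch of the two-step idea, with local variance standing in as the 'information richness' statistic (the paper's exact local statistic may differ) and an arbitrary threshold:

```python
import numpy as np

def edge_candidates(img, half=1, thresh=100.0):
    """Step 1: median filtering to suppress noise and background artefacts.
    Step 2: flag pixels whose local variance exceeds `thresh` as
    'information rich' edge candidates."""
    h, w = img.shape
    padded = np.pad(img.astype(float), half, mode="edge")
    smoothed = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            smoothed[r, c] = np.median(padded[r:r + 2 * half + 1, c:c + 2 * half + 1])
    spad = np.pad(smoothed, half, mode="edge")
    variance = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            variance[r, c] = spad[r:r + 2 * half + 1, c:c + 2 * half + 1].var()
    return smoothed, variance > thresh
```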

Proceedings ArticleDOI
23 May 1989
TL;DR: The author derives a simple algorithm to compute the coefficients of the filter adaptively and reports a successful simulation of the restoration of a square wave corrupted by non-Gaussian noise.
Abstract: A hybrid linear order statistic (HLOS) filter is a network of interconnected linear combiners and rank-forming blocks. This structure generalizes most of the previously proposed order statistic filters, including the median filter. The author derives a simple algorithm to compute the coefficients of the filter adaptively and reports a successful simulation of the restoration of a square wave corrupted by non-Gaussian noise. It is noted that HLOS filters show great potential in applications such as image processing and filtering of signals of nonstrictly stationary nature. >
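The HLOS network itself is not specified in the abstract, but its simplest special case, an adaptive L-filter whose output is a learned linear combination of the sorted window samples with LMS weight updates, can be sketched as follows (the window width, step size, and reference signal are assumptions of the sketch):

```python
import numpy as np

def adaptive_l_filter(x, desired, width=5, mu=0.01):
    """Adaptive L-filter: output = weights . sorted(window), with the weights
    updated by an LMS rule against the reference signal `desired`."""
    w = np.ones(width) / width                       # start as a running mean
    half = width // 2
    y = np.zeros(len(x), dtype=float)
    for i in range(half, len(x) - half):
        window = np.sort(x[i - half:i + half + 1])   # order statistics of the window
        y[i] = w @ window
        err = desired[i] - y[i]
        w += mu * err * window                       # LMS weight update
    return y, w
```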

Proceedings Article
01 Jan 1989
TL;DR: In this article, a serial algorithm for separable median filtering is developed that requires only two comparisons per element when the window size is three and a fast parallel concurrent read-exclusive-write parallel random access machine (CREW PRAM) algorithm with good processor-time product is developed for two-dimensional median filtering.
Abstract: A serial algorithm for separable median filtering is developed that requires only two comparisons per element when the window size is three. In addition, fast parallel concurrent-read-exclusive-write parallel random-access machine (CREW PRAM) algorithms with good processor-time product are developed for separable median filtering and two-dimensional median filtering. >
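For the window-3 case, the median of three values can be found with at most three comparisons (the paper's two-comparison-per-element figure comes from reusing comparisons between overlapping windows, which this sketch does not attempt), and the separable 2-D filter is simply the 1-D pass applied along rows and then columns:

```python
import numpy as np

def median3(a, b, c):
    """Median of three values using at most three comparisons."""
    if a > b:
        a, b = b, a              # now a <= b
    if b > c:                    # median is max(a, c)
        b = c
        if a > b:
            b = a
    return b

def separable_median3(img):
    """Separable 3x3 median approximation: window-3 median along each row,
    then along each column of the result (borders left unchanged)."""
    x = np.asarray(img, dtype=float)
    rows = x.copy()
    for r in range(x.shape[0]):
        for c in range(1, x.shape[1] - 1):
            rows[r, c] = median3(x[r, c - 1], x[r, c], x[r, c + 1])
    out = rows.copy()
    for c in range(rows.shape[1]):
        for r in range(1, rows.shape[0] - 1):
            out[r, c] = median3(rows[r - 1, c], rows[r, c], rows[r + 1, c])
    return out
```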

Journal ArticleDOI
TL;DR: A fast radix-2-based median filtering algorithm is proposed, which determines the median bit by bit, successively eliminating the samples whose already-determined bits differ from those of the median.
Abstract: A fast radix-2-based median filtering algorithm is proposed. The median is determined bit by bit, successively eliminating the samples whose already-determined bits differ from those of the median. The intermediate computations of the algorithm do not involve any array computation, nor any memory. The worst-case computational complexity of the algorithm is O(w) for w samples.
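A minimal sketch of the bit-by-bit selection idea for an odd number of non-negative integer samples: the median is built from the most significant bit downwards, and at each bit the candidates whose prefix can no longer match the median's are dropped while the target rank is adjusted. The bookkeeping below uses explicit lists and so does not reproduce the paper's memory-free, O(w) implementation.

```python
def radix2_median(samples, nbits):
    """Radix-2 selection of the median of an odd number of nbits-bit
    non-negative integers, determined one bit at a time (MSB first)."""
    cand = list(samples)
    k = (len(cand) + 1) // 2            # 1-based rank of the median
    median = 0
    for b in range(nbits - 1, -1, -1):
        zeros = [x for x in cand if not (x >> b) & 1]
        ones = [x for x in cand if (x >> b) & 1]
        if k <= len(zeros):             # median's bit b is 0
            cand = zeros
        else:                           # median's bit b is 1
            median |= 1 << b
            k -= len(zeros)             # everything in `zeros` lies below the median
            cand = ones
    return median

# radix2_median([3, 1, 4, 1, 5], nbits=3) returns 3
```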

Proceedings ArticleDOI
08 May 1989
TL;DR: In this article, a class of ranked-order-based filters for the noise suppression of time-varying imagery is introduced, which efficiently preserve image structures under motion without motion compensation preprocessing.
Abstract: A class of ranked-order-based filters for the noise suppression of time-varying imagery is introduced. These temporal filters efficiently preserve image structures under motion without motion compensation preprocessing. It is shown that spatio-temporal multistage median filters (MMFs) allow for a significant improvement over both spatial MMFs and temporal median filters in image resolution and noise suppression. >
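One common spatial formulation of the multistage median combines directional sub-medians through a max/min/centre median; the spatio-temporal version described in the paper extends the sub-windows into the temporal direction. A minimal 2-D sketch (border handling ignored):

```python
import numpy as np

def multistage_median(img, r, c, half=1):
    """Multistage median at pixel (r, c): medians along the four 1-D
    sub-windows through the centre (horizontal, vertical, two diagonals),
    combined as median(max of sub-medians, min of sub-medians, centre)."""
    ks = range(-half, half + 1)
    z = [np.median([img[r, c + k] for k in ks]),       # horizontal
         np.median([img[r + k, c] for k in ks]),       # vertical
         np.median([img[r + k, c + k] for k in ks]),   # diagonal
         np.median([img[r + k, c - k] for k in ks])]   # anti-diagonal
    return np.median([max(z), min(z), img[r, c]])
```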

Proceedings ArticleDOI
08 May 1989
TL;DR: In this article, a novel extension of median filters from one dimension to higher dimensions is presented, which is able to preserve features of lower dimensionality, such as thin lines in two-dimensional space.
Abstract: A novel extension of median filters from one dimension to higher dimensions is presented. Unlike the standard and separable median filters, this class of filters is able to preserve features of lower dimensionality, such as thin lines in two-dimensional space. Also, unlike the max/median filter, it does not have to trace exhaustively all the possible lines through the central sample. Hence, this class of filters does not blur sharp images while removing impulse noise and is highly computationally efficient. With minor modifications, missing line noise can also be removed with similar performance. Moreover, this class of filters is able to perform feature selective filtering by which isolated features of any particular shape can be removed from an image with a set of custom-tailored shells. >

Proceedings ArticleDOI
14 Nov 1989
TL;DR: A novel adaptive filter for edge-preserving smoothing of noisy images is introduced that does greater smoothing in the vicinity of edges without compromising performance away from edges and the edge structure of the image.
Abstract: A novel adaptive filter for edge-preserving smoothing of noisy images is introduced. The novelty of the filter is that its region of support is tuned simultaneously in its size and orientation. An edge strength measure is extracted from the local variance and used to control the size of the window. The gradient direction is used to adapt the orientation of the window. The use of both edge strength and edge detection information allows large windows to be used even in the vicinity of edges. The filter has been tested for additive white Gaussian noise with the mean as the point estimator over local windows, and for additive white impulse noise with the median as the point estimator. Results, particularly for the adaptive median filter, are very promising. The results show that the filter does greater smoothing in the vicinity of edges without compromising performance away from edges and the edge structure of the image. >

Journal ArticleDOI
TL;DR: Different types of median-based methods can be used to improve multichannel seismic data, particularly at the stacking stage in processing as discussed by the authors, and different applications of the median concept are described and discussed.
Abstract: Different types of median-based methods can be used to improve multichannel seismic data, particularly at the stacking stage in processing. Different applications of the median concept are described and discussed. The most direct application is the Simple Median Stack (SMS), i.e. to use as output the median value of the input amplitudes at each reflection time. With the Alpha-Trimmed Mean (ATM) method it is possible to exclude an optional amount of the input amplitudes that differ most from the median value. A more novel use of the median concept is the Weighted Median Stack (WMS). This method is based on a long-gapped median filter. The implicit weighting, which is purely statistical in nature, is due to the edge effects that occur when the gapped filter is applied. By shifting the traces around before filtering, the maximum weight may be given to, for example, the far-offset traces. The fourth method is the Iterative Median Stack (IMS). This method, which also includes a strong element of weighting, consists of a repeated use of a gapped median filter combined with a gradual shortening of the filter after each pass. Examples show how the seismic data can benefit from the application of these methods.
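Minimal sketches of the SMS and ATM stacks described above (the gapped-filter weighting of WMS and IMS is not shown); `gather` is assumed to be a 2-D array of moveout-corrected traces with shape (n_traces, n_samples).

```python
import numpy as np

def simple_median_stack(gather):
    """SMS: the stacked trace is the sample-by-sample median across traces."""
    return np.median(gather, axis=0)

def alpha_trimmed_mean_stack(gather, alpha=0.2):
    """ATM: at each time sample, discard the fraction `alpha` of amplitudes
    that differ most from the median, then average the remainder."""
    g = np.asarray(gather, dtype=float)
    keep = max(1, int(round((1.0 - alpha) * g.shape[0])))
    med = np.median(g, axis=0)
    order = np.argsort(np.abs(g - med), axis=0)        # closest to the median first
    nearest = np.take_along_axis(g, order[:keep], axis=0)
    return nearest.mean(axis=0)
```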

Journal ArticleDOI
TL;DR: In this paper, a data-adaptive filtering technique was proposed to reduce the bias error of the magnetotelluric impedance tensor and the random noise of the tensor.