
Showing papers on "Median filter published in 1980"


Journal ArticleDOI
TL;DR: Experimental results show that in most cases the techniques developed in this paper are readily adaptable to real-time image processing.
Abstract: Computational techniques involving contrast enhancement and noise filtering on two-dimensional image arrays are developed based on their local mean and variance. These algorithms are nonrecursive and do not require the use of any kind of transform. They share the same characteristics in that each pixel is processed independently. Consequently, this approach has an obvious advantage when used in real-time digital image processing applications and where a parallel processor can be used. For both the additive and multiplicative cases, the a priori mean and variance of each pixel are derived from its local mean and variance. Then, the minimum mean-square error estimator in its simplest form is applied to obtain the noise filtering algorithms. For multiplicative noise, a statistically optimal linear approximation is made. Experimental results show that such an assumption yields a very effective filtering algorithm. Examples on images containing 256 × 256 pixels are given. Results show that in most cases the techniques developed in this paper are readily adaptable to real-time image processing.
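For additive noise, the local-statistics approach described above reduces to a per-pixel gain applied to the deviation from the local mean. The sketch below is a minimal illustration of that idea, assuming a known noise variance and a square averaging window; the window size, variable names, and the use of SciPy are assumptions of this sketch, not details from the paper.

```python
# Minimal sketch of local mean/variance filtering for additive noise,
# assuming the noise variance is known. Window size and names are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_filter(image, noise_var, size=7):
    image = image.astype(np.float64)
    local_mean = uniform_filter(image, size)
    local_var = uniform_filter(image * image, size) - local_mean ** 2
    # A priori signal variance: local variance minus the noise variance, clipped at zero.
    signal_var = np.maximum(local_var - noise_var, 0.0)
    # Minimum mean-square error estimator in its simplest (linear) form.
    gain = signal_var / (signal_var + noise_var + 1e-12)
    return local_mean + gain * (image - local_mean)
```

Because each output pixel depends only on statistics of its own neighbourhood, this loop-free array formulation maps directly onto the parallel, transform-free processing the abstract emphasizes.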

2,701 citations


Journal ArticleDOI
TL;DR: A fast real-time algorithm is presented for median filtering of signals and images that determines the kth bit of the median by inspecting the k most significant bits of the samples.
Abstract: A fast real-time algorithm is presented for median filtering of signals and images. The algorithm determines the kth bit of the median by inspecting the k most significant bits of the samples. The total number of full-word comparison steps is equal to the wordlength of the samples. Speed and hardware complexity of the algorithm are compared with two other fast methods for median filtering.
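The sketch below shows one way to realize a bit-by-bit median search of the kind described above, for nonnegative integer samples: the median is built from the most significant bit downward, with one counting pass over the window per bit, so the number of passes equals the wordlength. This is a hedged reconstruction of the general technique, not the paper's exact procedure.

```python
# Bit-slicing median for nonnegative integer samples (e.g., one filter window).
def bitwise_median(samples, wordlength=8):
    n = len(samples)
    need = n - (n + 1) // 2 + 1           # samples that must be >= the candidate
    median = 0
    for b in range(wordlength - 1, -1, -1):
        candidate = median | (1 << b)
        # Compare only the bits inspected so far (bit b and above).
        count = sum((x >> b) >= (candidate >> b) for x in samples)
        if count >= need:
            median = candidate            # the median has this bit set
    return median

print(bitwise_median([5, 1, 9, 3, 7], wordlength=4))   # 5
```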

189 citations


Proceedings ArticleDOI
03 Dec 1980
TL;DR: In this article, digital filtering techniques for speckle reduction are studied using several conventional filters and homomorphic filters, and it is shown that homomorphic filtering has higher resolution than straightforward linear filtering.
Abstract: Digital filtering techniques for speckle reduction are studied using several conventional filters and homomorphic filters. It is shown that homomorphic filtering has higher resolution than straightforward linear filtering. Among the several filters considered, the homomorphic Wiener filter is found to yield the best results. Examples and possible extensions of these results are discussed. Introduction: In an earlier study, a digital model of fully developed speckle was implemented and verified by comparing optical and digital speckle images. The model was also used to obtain detection probabilities of targets imaged in the presence of speckle [3,4]. For an object with intensity u(x,y), its image in the presence of speckle is given by v(x,y) = u(x,y) s(x,y) (1), where s(x,y) represents the speckle intensity. For each x,y, the speckle noise s(x,y) is a random variable with a single-sided (s > 0) exponential probability density. For sampled images, if the sampling distance is equal to the correlation distance of the speckle, then x and y can be assumed to take integer values on a rectangular grid and s(x,y) is a spatially uncorrelated random field on this grid. Often, several "independent looks" of the object can be obtained, and we have v_K(x,y) = u(x,y) s_K(x,y), K = 1, ..., N (2), where s_K(x,y) is the Kth sample function of the speckle random field. Since {s_K; K = 1, ..., N} are independent, exponentially distributed random variables, it is a simple matter to show that the N-look average, (1/N) Σ_{K=1}^{N} v_K(x,y), has its speckle variance reduced by the factor N.
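As a concrete illustration of the speckle model in Eqs. (1) and (2), the fragment below simulates N independent looks with unit-mean exponential speckle and forms their average; the homomorphic alternative is noted in the comments. Function and variable names are mine, not the paper's.

```python
# Illustrative sketch of the multiplicative speckle model and N-look averaging.
import numpy as np

rng = np.random.default_rng(0)

def speckled_looks(u, n_looks):
    """Independent looks v_K = u * s_K with unit-mean exponential speckle s_K."""
    return [u * rng.exponential(scale=1.0, size=u.shape) for _ in range(n_looks)]

def n_look_average(looks):
    """Arithmetic mean of the looks; the speckle variance drops by the factor N."""
    return np.mean(looks, axis=0)

# Homomorphic processing instead filters log(v) = log(u) + log(s), converting the
# multiplicative speckle into an additive term before linear (e.g., Wiener) filtering.
```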

41 citations


Proceedings ArticleDOI
01 Apr 1980
TL;DR: A new image restoration system that is applicable to the problem of restoring an image degraded by additive noise is presented and appears to compare quite well in performance with other restoration techniques, and can be easily extended to account for other types of degradations.
Abstract: A new image restoration system that is applicable to the problem of restoring an image degraded by additive noise is presented in this paper. The system is developed by attempting to estimate more accurately the frequency response of typical image restoration filters available in the literature. The resulting system, combined with its short-space implementation, is computationally simple, appears to compare quite well in performance with other restoration techniques, and can be easily extended to account for other types of degradations such as blurring and additive noise, multiplicative noise, etc. Some examples are given to illustrate the performance of the new image restoration system.

40 citations


Journal Article
TL;DR: The system performs efficient and fast detection and segmentation of cells scanned in one TV frame within one second as well as the extraction of a large number of morphologic features within a few seconds, and high-resolution analysis of several thousand cells of a sample within one minute will be possible.
Abstract: Cell location, segmentation and feature extraction of cell images are principal tasks of a high-resolution system for automated cytology. To perform these tasks with high speed, image processing algorithms and the architecture of a processor have to be optimized mutually. This has led to the development of a fast system for the evaluation of cytologic samples based on an optimized TV microscope, a host minicomputer with different peripheral array processors and digital image stores. The processors are optimized in speed for two-dimensional local operations to investigate neighborhood relations and morphology in cell images. Two-dimensional transformations of TV images (288 x 512 x 8 bit) can be carried out within 20 to 200 msec. The processors are able to realize linear filter functions (correlation, convolution) as well as nonlinear functions (median filtering). A set of measurements like area, circumference and connectivity can be derived in parallel from one image in 20 msec. The system performs efficient and fast detection and segmentation of cells scanned in one TV frame within one second as well as the extraction of a large number of morphologic features within a few seconds. Based on these procedures, high-resolution analysis of several thousand cells of a sample within one minute will be possible.

39 citations


Journal ArticleDOI
TL;DR: A novel two-stage adaptive signal extractor for intermittent signal applications is presented; its second stage adapts only when the signal is present, thereby reducing the distortion caused by the first stage.
Abstract: A novel two-stage adaptive signal extractor for intermittent signal applications is presented. If the presence and absence of the signal can be detected, the first stage will adapt only while the signal is absent and thereby effect a reduction in noise, whereas the second stage will adapt only when the signal is present and thereby effect a reduction in the distortion caused by the first stage. Bounds on performance are derived, and performance improvement relative to a conventional one-stage adaptive noise canceller is assessed.
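The key mechanism is gating the adaptation on a signal-presence decision. The sketch below shows only the first-stage behaviour, an LMS noise canceller whose weights are frozen whenever the signal is flagged as present; the second, distortion-reducing stage is omitted, and the tap count and step size are illustrative assumptions, not values from the paper.

```python
# Gated LMS noise canceller: adapt only while the signal is absent (first-stage idea).
import numpy as np

def gated_lms_canceller(primary, reference, signal_present, n_taps=32, mu=0.01):
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]     # reference-noise tap vector
        e = primary[n] - np.dot(w, x)         # cancelled output (error signal)
        out[n] = e
        if not signal_present[n]:             # freeze adaptation when the signal is present
            w += 2 * mu * e * x
    return out
```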

27 citations


Journal ArticleDOI
TL;DR: It is shown that increasing the noise term in a Wiener filter helps reduce noise but causes a loss of resolving power.
Abstract: A method of generating phase-inverting grids for use in image deblurring is described. We have also made an amplitude filter for correcting motion-blurred images. This was used along with the phase-inverting grid to deblur motion-blurred pictures. Experimental results are presented. It is shown that increasing the noise term in a Wiener filter helps in the reduction of noise but causes a loss of resolving power.
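The noise term referred to in the last sentence is the additive constant in the frequency-domain Wiener-type restoration filter, H_w = H* / (|H|^2 + K): a larger K damps noise amplification at frequencies where the blur response H is small, but it also attenuates those frequencies and hence reduces resolving power. A minimal digital sketch of that trade-off, assuming the blur OTF is known (the paper's implementation is optical):

```python
# Wiener-type deblurring with an adjustable noise term K (illustrative only).
# `otf` is the FFT of the blur point-spread function, same shape as the image.
import numpy as np

def wiener_deblur(blurred, otf, noise_term):
    W = np.conj(otf) / (np.abs(otf) ** 2 + noise_term)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```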

17 citations


Proceedings ArticleDOI
01 Apr 1980
TL;DR: In this paper, the importance of normalization of parameters in identification of linear models as now commonly applied to digital signal processing is emphasized. And the factorial approach plays a central role when additive noise is considered.
Abstract: The paper emphasizes the importance of normalization of parameters in the identification of linear models as now commonly applied to digital signal processing. Classical LPC, Pisarenko, and Prony methods are unified and compared. The factorial approach plays a central role when additive noise is considered. The computational requirement is the determination of eigenvectors of correlation and covariance matrices. Various algorithms are then given, including sequential estimation procedures in the covariance case. The methods are compared on closely spaced sine waves buried in noise, in terms of resolution, windowing, and signal-to-noise ratio.
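As a small concrete instance of the eigenvector computation mentioned above, the sketch below applies the Pisarenko idea to a single sinusoid in white noise: the eigenvector of the 3x3 autocorrelation matrix belonging to the smallest eigenvalue defines a polynomial whose roots give the tone frequency. This is a textbook illustration under my own assumptions, not the paper's unified procedure.

```python
# Pisarenko-style frequency estimate for one real sinusoid in white noise.
import numpy as np

def pisarenko_single_tone(x):
    # Sample autocorrelation lags 0..2 and the symmetric Toeplitz correlation matrix.
    r = [np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)]
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    eigvals, eigvecs = np.linalg.eigh(R)
    a = eigvecs[:, 0]                      # eigenvector of the smallest eigenvalue
    roots = np.roots(a)                    # zeros at exp(+/- j*2*pi*f0)
    return np.abs(np.angle(roots[0])) / (2 * np.pi)

t = np.arange(4096)
x = np.sin(2 * np.pi * 0.12 * t) + 0.1 * np.random.randn(t.size)
print(pisarenko_single_tone(x))            # close to 0.12
```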

12 citations


Proceedings ArticleDOI
S. Boll
01 Apr 1980
TL;DR: The performance of this method is compared with the time domain methods on noisy speech having a noise power equal to the signal power and is shown to be equally effective as a noise cancelling preprocessor.
Abstract: Acoustic noise in speech can be suppressed by filtering a separately recorded correlated noise signal and subtracting it from the speech waveform. In the time domain, this adaptive noise cancelling approach requires a computational rate which is linear in the filter length. This paper describes how to implement the noise cancelling procedure in the frequency domain using the short-time Fourier transform. Using the efficiency of the FFT results in a computation rate which is proportional to the log of the filter length. For acoustic noise suppression, where the filter length can be on the order of one thousand points, this approach offers a viable alternative for real-time implementation. The performance of this method is compared with the time-domain methods on noisy speech having a noise power equal to the signal power and is shown to be equally effective as a noise cancelling preprocessor.
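A minimal block-processing sketch of the frequency-domain cancelling idea follows: a single cancelling filter H(f) is estimated from averaged cross- and auto-spectra of the reference noise, applied to the reference in the FFT domain, and subtracted from the noisy speech. The spectrum-ratio estimator, the non-overlapping frames, and the neglect of circular-convolution effects are simplifying assumptions of this sketch, not details from the paper.

```python
# Frequency-domain noise cancelling via the short-time Fourier transform (sketch).
import numpy as np

def fft_noise_canceller(noisy, reference, frame_len=1024):
    n_frames = len(noisy) // frame_len
    out = np.zeros(n_frames * frame_len)
    S_xr = np.zeros(frame_len, dtype=complex)   # cross-spectrum accumulator
    S_rr = np.zeros(frame_len)                  # reference auto-spectrum accumulator
    frames = []
    for i in range(n_frames):
        X = np.fft.fft(noisy[i * frame_len:(i + 1) * frame_len])
        R = np.fft.fft(reference[i * frame_len:(i + 1) * frame_len])
        frames.append((X, R))
        S_xr += X * np.conj(R)
        S_rr += np.abs(R) ** 2
    H = S_xr / (S_rr + 1e-12)                   # cancelling filter, one FFT bin at a time
    for i, (X, R) in enumerate(frames):
        out[i * frame_len:(i + 1) * frame_len] = np.real(np.fft.ifft(X - H * R))
    return out
```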

6 citations


Journal ArticleDOI
TL;DR: A new technique to reduce the effect of quantization in pulse code modulation image coding is presented, consisting of Roberts's pseudonoise technique followed by a noise reduction system.
Abstract: A new technique to reduce the effect of quantization in pulse code modulation image coding is presented. The technique consists of Roberts's pseudonoise technique followed by a noise reduction system. The technique by Roberts effectively transforms the signal dependent quantization noise to a signal independent additive random noise. The noise reduction system that follows reduces the additive random noise. Some examples are given to illustrate the performance of the quantization noise reduction system.
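Roberts's technique amounts to adding a known pseudorandom dither before uniform quantization and subtracting the same dither afterwards, which leaves approximately signal-independent additive noise for the subsequent noise-reduction stage. A minimal sketch, with an illustrative step size and a seeded generator standing in for the synchronized pseudonoise source:

```python
# Roberts-style pseudonoise (dither) quantization: add, quantize, subtract.
import numpy as np

def roberts_quantize(image, step, seed=0):
    rng = np.random.default_rng(seed)
    dither = rng.uniform(-step / 2, step / 2, size=image.shape)
    quantized = step * np.round((image + dither) / step)   # uniform PCM quantizer
    return quantized - dither                               # remove the known pseudonoise
```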

5 citations


Proceedings ArticleDOI
24 Dec 1980
TL;DR: In this article, a novel approach for calculating optimal filter coefficients has been devised which makes use of symmetries in the coefficients to reduce computation requirements significantly, for example, with a five-by-five two-dimensional spatial filter there are 25 coefficients which must be determined, and the conventional approach requires over 5000 multiplications and 5000 additions.
Abstract: When using an adaptive filter for real-time signal processing, the filter coefficients must be modified in real time and fast computation methods for determining optimal filter coefficients are essential. The optimal coefficients for signal detection and background suppression depend upon the statistics of the background noise and the characteristics of the signal pulse. A novel approach for calculating optimal filter coefficients has been devised which makes use of symmetries in the coefficients (derived from symmetries in the noise statistics and the signal) to reduce computation requirements significantly. For example, with a five-by-five two-dimensional spatial filter there are 25 coefficients which must be determined, and the conventional approach requires over 5000 multiplications and 5000 additions. When symmetries exist, there are only 6 distinct values for the 25 coefficients, which reduces the required calculations to 75 multiplications and 75 additions. Detailed examples of temporal filtering, spatial filtering, and multispectral filtering illustrate the efficacy of the procedure in practical situations.
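The 25-to-6 reduction quoted above corresponds to a 5 x 5 kernel with full horizontal, vertical, and diagonal symmetry: every coefficient is determined by the unordered pair of absolute offsets from the centre, of which there are exactly six, (0,0), (0,1), (0,2), (1,1), (1,2), and (2,2). The sketch below builds the full kernel from those six values; it illustrates the counting argument only, not the paper's coefficient-solving procedure, and the numerical values are placeholders.

```python
# Expand 6 distinct coefficient values into a fully symmetric 5x5 kernel.
import numpy as np

def symmetric_5x5(c):
    kernel = np.empty((5, 5))
    for i in range(-2, 3):
        for j in range(-2, 3):
            key = tuple(sorted((abs(i), abs(j))))   # equivalence-class representative
            kernel[i + 2, j + 2] = c[key]
    return kernel

coeffs = {(0, 0): 1.0, (0, 1): 0.8, (0, 2): 0.5,
          (1, 1): 0.6, (1, 2): 0.4, (2, 2): 0.2}     # illustrative values
K = symmetric_5x5(coeffs)   # only 6 unknowns need to be solved for, not 25
```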

Journal ArticleDOI
TL;DR: The effects of changing the allocation of filtering on the performance of a modified duobinary partial response system are examined and the precoded system is superior to the system with decision feedback decoding, except at low values of the signal-to-noise-ratio and large fractions of the filtering at the receiver.
Abstract: The effects of changing the allocation of filtering on the performance of a modified duobinary partial response system are examined. Two classes of filtering as well as two types of decoders are considered. The precoded system is, in general, superior to the system with decision feedback decoding, except at low values of the signal-to-noise-ratio and large fractions of the filtering at the receiver.

Proceedings ArticleDOI
06 Aug 1980
TL;DR: This paper describes the fabrication of three CCD chips on which several image-preprocessing functions have been implemented with effective operation rates equivalent to 10,000 MOPS.
Abstract: The computational burden of any image-processing system is on the preprocessing functions. These include such functions as two-dimensional convolution, edge extraction, texture-feature extraction, and nonlinear filtering (e.g., median filtering). This paper describes the fabrication of three CCD chips on which several image-preprocessing functions have been implemented with effective operation rates equivalent to 10,000 MOPS. The paper also briefly reviews the work done on the first two chips and then describes in more detail functions included on the third chip and the experimental results. The functions discussed include a 5 x 5 voltage-programmable convolution, a 26 x 26 convolution, a 7 x 7 mask-programmable convolution, a 5-element sort for median filtering, and a 3 x 3 Laplacian filter. These circuits have all been designed to operate at a 7-MHz pixel rate.
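For the 5-element sort used for median filtering, a small compare-exchange network is the natural software analogue of the hardware sorter. The routine below finds the median of five values with six comparisons; it is a standard selection network, not necessarily the one implemented on the CCD chip.

```python
# Median of five values via compare-exchange stages (selection-network sketch).
def median_of_5(a, b, c, d, e):
    if a > b: a, b = b, a
    if c > d: c, d = d, c
    if a > c: a, c = c, a; b, d = d, b   # a is now the minimum of {a,b,c,d}: never the median
    if b > e: b, e = e, b
    if b > c: b, c = c, b; e, d = d, e   # b is now the minimum of the remaining four
    return min(c, e)                      # median = second smallest of {b, c, d, e}

print(median_of_5(5, 1, 9, 3, 7))         # 5
```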


05 Mar 1980
TL;DR: It is demonstrated by way of examples that short space implementation leads to a significant performance improvement in reducing wide-band random noise relative to the traditional approach in which the entire image is processed by a linear space invariant filter.
Abstract: In this report, short space implementation of image restoration systems such as Wiener filtering to avoid the image non-stationarity problem is discussed. It is demonstrated by way of examples that short space implementation leads to a significant performance improvement in reducing wide-band random noise relative to the traditional approach in which the entire image is processed by a linear space invariant filter.
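A hedged sketch of the short-space idea: process the image in small overlapping blocks, apply a restoration filter designed from each block's own statistics, and blend the results, rather than applying one space-invariant filter to the whole image. The block size, overlap, and the generic filter_block callback are assumptions of this illustration, and border pixels not covered by a full block are left untouched here.

```python
# Short-space (block-wise) processing with overlapping blocks and uniform blending.
import numpy as np

def short_space_process(image, filter_block, block=32, overlap=16):
    out = np.zeros(image.shape, dtype=np.float64)
    weight = np.zeros(image.shape, dtype=np.float64)
    step = block - overlap
    for i in range(0, image.shape[0] - block + 1, step):
        for j in range(0, image.shape[1] - block + 1, step):
            out[i:i + block, j:j + block] += filter_block(image[i:i + block, j:j + block])
            weight[i:i + block, j:j + block] += 1.0
    return out / np.maximum(weight, 1.0)
```

Here filter_block stands in for any locally designed restoration filter, for example a Wiener filter whose parameters are estimated from the block itself.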

11 Dec 1980
TL;DR: In this article, a modification of Lee's local-statistics method is developed for digital image noise filtering without the need for a user-specified noise level, where the user specifies an approximate upper size limit for image features he is willing to regard as noise.
Abstract: A modification of Lee's local-statistics method is developed for digital image noise filtering without the need for a user-specified noise level. Instead, the user specifies an approximate upper size limit for image features he is willing to regard as noise. A noise level is then estimated automatically for each local region of the image by partitioning the region into subregions and exploiting the differing degrees of spatial correlation for signal and noise. Extensions for use with multiplicative noise, third-moment statistics, and edge-detection schemes are discussed briefly. Examples are given for an image of 256 x 256 pixels.
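One plausible reading of the automatic noise estimate, sketched below, is to partition each local region into small subregions and take the smallest subregion variance as the noise level, on the assumption that at least one subregion contains no image feature larger than the user's size limit. This is an illustration of the general idea, not the report's exact estimator; the subregion size is a placeholder.

```python
# Estimate the local noise variance from the quietest subregion of a region.
import numpy as np

def estimate_noise_variance(region, sub=4):
    h, w = region.shape
    variances = [np.var(region[i:i + sub, j:j + sub])
                 for i in range(0, h - sub + 1, sub)
                 for j in range(0, w - sub + 1, sub)]
    return min(variances)
```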

Proceedings ArticleDOI
R. Kirlin
09 Apr 1980
TL;DR: The utility of the Floating Point Systems product AP-120B is demonstrated through a comparison of total processing time for the fast median filter algorithm recently published, and typical calculation times and a means of performing large 2-D FFTs are presented.
Abstract: Array processors are special-purpose computers attached to mini- or mainframe computing systems to perform repetitive calculations at rates often ten times faster than their host. Images, with their large array dimensions, are particularly appropriate for processing with array processors. The utility of the Floating Point Systems product AP-120B is demonstrated through a comparison of total processing time for the fast median filter algorithm recently published [1]. FORTRAN calls and data transfer times are included for completeness of the example. Typical calculation times and a means of performing large 2-D FFTs are also presented.