
Showing papers on "Thresholding published in 1994"


Journal ArticleDOI
01 Apr 1994
TL;DR: A new method of enhancing fingerprint images is described, based upon nonstationary directional Fourier domain filtering, which leads to significant improvements in the speed and accuracy of the AFIS.
Abstract: A new method of enhancing fingerprint images is described, based upon nonstationary directional Fourier domain filtering. Fingerprints are first smoothed using a directional filter whose orientation is everywhere matched to the local ridge orientation. Thresholding then yields the enhanced image. Various simplifications lead to efficient implementation on general-purpose digital computers. Results of enhancement are presented for fingerprints of various pattern classifications. A comparison is made with the enhancement used within the automated fingerprint identification system (AFIS) developed by the UK Home Office. Use of the proposed enhancement method leads to significant improvements in the speed and accuracy of the AFIS.

367 citations
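The enhancement pipeline above (directional smoothing matched to the local ridge orientation, then thresholding) can be illustrated per image block. A minimal NumPy sketch, assuming the local ridge orientation theta has already been estimated for the block; the fixed-orientation Fourier-domain filter and the mean threshold are simplifications for illustration, not the authors' exact filters:

```python
import numpy as np

def enhance_block(block, theta, bandwidth=0.3):
    """Smooth one block with a Fourier-domain filter oriented along the local
    ridge direction theta (radians), then threshold at the block mean.
    A fixed-orientation stand-in for the paper's nonstationary filtering."""
    h, w = block.shape
    spectrum = np.fft.fftshift(np.fft.fft2(block))
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                         np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
    # keep frequency components running across the ridges, smooth along them
    along_ridge = fx * np.cos(theta) + fy * np.sin(theta)
    gain = np.exp(-(along_ridge ** 2) / (2.0 * bandwidth ** 2))
    smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * gain)))
    return smoothed > smoothed.mean()   # binary (thresholded) ridge map
```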


Proceedings ArticleDOI
03 Nov 1994
TL;DR: In this paper, the authors review and compare various proposals for the choice of thresholds, including soft and hard thresholding, and thresholds that are fixed in advance or chosen level by level from an empirical optimality criterion.
Abstract: Methods based on thresholding and shrinking empirical wavelet coefficients hold promise for recovering and/or denoising signals observed in noise. Here the authors review and compare various proposals for the choice of thresholds. These include soft and hard thresholding, and thresholds that are fixed in advance or chosen level by level from an empirical optimality criterion. The authors present results from simulations and real data examples.

281 citations
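The soft and hard thresholding rules compared above, together with one common fixed threshold (the Donoho-Johnstone "universal" threshold), can be written in a few lines. A minimal NumPy sketch; the noise estimate via the median absolute deviation is a standard convention, not necessarily the authors' choice:

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest."""
    return coeffs * (np.abs(coeffs) > t)

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t, zeroing those below t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def universal_threshold(finest_detail_coeffs):
    """One widely used fixed choice: t = sigma * sqrt(2 * log(n)),
    with sigma estimated from the finest-scale detail coefficients."""
    sigma = np.median(np.abs(finest_detail_coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(finest_detail_coeffs.size))
```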


Journal ArticleDOI
TL;DR: An effective algorithm for character recognition in scene images is studied and highly promising experimental results have been obtained using the method on 100 images involving characters of different sizes and formats under uncontrolled lighting.
Abstract: An effective algorithm for character recognition in scene images is studied. Scene images are segmented into regions by an image segmentation method based on adaptive thresholding. Character candidate regions are detected by observing gray-level differences between adjacent regions. To ensure extraction of multisegment characters as well as single-segment characters, character pattern candidates are obtained by associating the detected regions according to their positions and gray levels. A character recognition process selects patterns with high similarities by calculating the similarities between character pattern candidates and the standard patterns in a dictionary and then comparing the similarities to the thresholds. A relaxational approach to determine character patterns updates the similarities by evaluating the interactions between categories of patterns, and finally character patterns and their recognition results are obtained. Highly promising experimental results have been obtained using the method on 100 images involving characters of different sizes and formats under uncontrolled lighting.

240 citations
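The segmentation stage above rests on adaptive thresholding, i.e. comparing each pixel with a statistic of its neighborhood rather than with one global level. A minimal sketch of one common variant (local-mean thresholding with an offset); the window size and offset are illustrative, and this is not the authors' exact scheme:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(gray, window=31, offset=5.0):
    """Binarize by comparing each pixel with the mean of its local window."""
    local_mean = uniform_filter(gray.astype(float), size=window)
    return gray > (local_mean - offset)   # True = foreground
```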


Proceedings ArticleDOI
Haitao Guo, J.E. Odegard, M. Lang, Ramesh A. Gopinath, Ivan Selesnick, C.S. Burrus
13 Nov 1994
TL;DR: Wavelet processed imagery is shown to provide better detection performance for the synthetic-aperture radar (SAR) based automatic target detection/recognition (ATD/R) problem and several approaches are proposed to combine the data from different polarizations to achieve even better performance.
Abstract: The paper introduces a novel speckle reduction method based on thresholding the wavelet coefficients of the logarithmically transformed image. The method is computationally efficient and can significantly reduce the speckle while preserving the resolution of the original image. Both soft and hard thresholding schemes are studied and the results are compared. When fully polarimetric SAR images are available, the authors propose several approaches to combine the data from different polarizations to achieve even better performance. Wavelet processed imagery is shown to provide better detection performance for the synthetic-aperture radar (SAR) based automatic target detection/recognition (ATD/R) problem.

215 citations
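The core of the speckle-reduction idea is a log transform (turning multiplicative speckle into roughly additive noise), thresholding of the wavelet detail coefficients, and exponentiation back. A sketch using PyWavelets, assuming it is installed; the wavelet, level, and threshold value are placeholders rather than the authors' settings:

```python
import numpy as np
import pywt  # PyWavelets

def despeckle(intensity_image, wavelet="db4", level=3, t=0.1, mode="soft"):
    """Log-transform, threshold the detail coefficients, exponentiate back."""
    log_img = np.log(intensity_image + 1e-6)      # multiplicative -> additive noise
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    details = [tuple(pywt.threshold(d, t, mode=mode) for d in lvl) for lvl in details]
    log_out = pywt.waverec2([approx] + details, wavelet)
    return np.exp(log_out)
```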


Journal ArticleDOI
TL;DR: The entropy method for image thresholding suggested by Kapur et al. has been modified and a more pertinent information measure of the image is obtained.

199 citations
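For context, the original Kapur, Sahoo and Wong criterion selects the threshold that maximizes the sum of the entropies of the two gray-level classes; the paper above modifies this information measure. A minimal NumPy sketch of the unmodified baseline:

```python
import numpy as np

def kapur_threshold(gray, bins=256):
    """Return the threshold maximizing the summed entropies of the two classes
    (the original Kapur et al. criterion; the paper above modifies it)."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1           # within-class distributions
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t
```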


Journal ArticleDOI
TL;DR: A rate-distortion optimal way to threshold or drop the DCT coefficients of the JPEG and MPEG compression standards using a fast dynamic programming recursive structure.
Abstract: We show a rate-distortion optimal way to threshold or drop the DCT coefficients of the JPEG and MPEG compression standards. Our optimal algorithm uses a fast dynamic programming recursive structure. The primary advantage of our approach lies in its complete compatibility with standard JPEG and MPEG decoders.

190 citations
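The rate-distortion optimal dynamic program is beyond a short sketch, but the underlying operation (dropping DCT coefficients of an 8x8 block) is easy to show. A naive magnitude-threshold stand-in, assuming scipy.fft.dctn/idctn are available; the paper replaces this simple rule with an optimal coefficient selection:

```python
import numpy as np
from scipy.fft import dctn, idctn

def threshold_dct_block(block, t):
    """Drop small DCT coefficients of one 8x8 block and reconstruct it.
    The paper selects which coefficients to drop by dynamic programming
    instead of this fixed magnitude threshold."""
    coeffs = dctn(block, norm="ortho")
    coeffs[np.abs(coeffs) < t] = 0.0
    return idctn(coeffs, norm="ortho")
```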


Proceedings ArticleDOI
06 Oct 1994
TL;DR: The research described in this paper addresses aspects of target recognition, thresholding, and location; the results of a series of simulation experiments are used to analyze the performance of subpixel target location techniques such as centroiding, Gaussian shape fitting, and ellipse fitting under varying conditions.
Abstract: Signalizing points of interest on the object to be measured is a reliable and common method of achieving optimum target location accuracy for many high precision measurement tasks. In photogrammetric metrology, images of the targets originate from photographs and CCD cameras. Regardless of whether the photographs are scanned or the digital images are captured directly, the overall accuracy of the technique is partly dependent on the precise and accurate location of the target images. However, it is often not clear which technique to choose for a particular task, or what the significant sources of error are. The research described in this paper addresses aspects of target recognition, thresholding, and location. The results of a series of simulation experiments are used to analyze the performance of subpixel target location techniques such as centroiding, Gaussian shape fitting, and ellipse fitting, under varying conditions.

173 citations
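Of the subpixel location techniques compared above, centroiding is the simplest: the target position is the intensity-weighted mean of the pixel coordinates in a window around the detected target. A minimal NumPy sketch, assuming a background estimate is available for the window:

```python
import numpy as np

def weighted_centroid(window, background=0.0):
    """Intensity-weighted centroid of a small window around a detected target."""
    w = np.clip(window.astype(float) - background, 0.0, None)
    total = w.sum()
    if total == 0:
        raise ValueError("window contains no signal above background")
    ys, xs = np.indices(w.shape)
    return (xs * w).sum() / total, (ys * w).sum() / total   # (x, y) in pixels
```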


Journal ArticleDOI
Lawrence O'Gorman
TL;DR: This method has been shown to reduce the number of binarization failures from 33% to 6% on difficult images and to improve subsequent OCR recognition rates from about 95% to 97.5% on binary images.

165 citations


Journal ArticleDOI
TL;DR: A linear synthesis algorithm from the Radon-Wigner transform is derived and its efficacy is demonstrated, reducing noise and cross-term power for multicomponent linear-FM signals.
Abstract: Thresholding the Radon-Wigner transform (RWT) followed by filtered backprojection to the time-frequency plane reduces noise and cross-term power for multicomponent linear-FM signals. Although the RWT is bilinear, it may be calculated as the magnitude-squared of two linear functions. In this paper, we derive a linear synthesis algorithm from the RWT and demonstrate its efficacy.

139 citations


Patent
07 Sep 1994
TL;DR: An operator independent image cytometer having a method for image segmentation is described in this paper; the segmentation filters a digital image of a cellular specimen and thresholds the resulting image.
Abstract: An operator independent image cytometer having a method for image segmentation. Image segmentation comprises the steps of filtering a digital image of a cellular specimen and thresholding the resultant image. In addition, the thresholding may include the sorting of features extracted from the filtered image. The present invention also includes a method for cytometer autofocus that combines the benefits of sharpening and contrast metrics. The present invention further includes an arc lamp stabilization and intensity control system. The image cytometer has broad applications in determining DNA content and other cellular measurements on as many as 10^5 individual cells, including specimens of living cells. Image segmentation applications include PAP smear analysis and particle recognition.

119 citations


Patent
31 Mar 1994
TL;DR: In this paper, a method and system for the automated detection of lesions in computed tomographic images, including generating image data from at least one selected portion of an object, for example, from CT images of the thorax.
Abstract: A method and system for the automated detection of lesions in computed tomographic images, including generating image data from at least one selected portion of an object, for example, from CT images of the thorax. The image data are then analyzed in order to produce the boundary of the thorax. The image data within the thoracic boundary are then further analyzed to produce boundaries of the lung regions using predetermined criteria. Features within the lung regions are then extracted using multi-gray-level thresholding and correlation between resulting multi-level threshold images and between at least adjacent sections. Classification of the features as abnormal lesions or normal anatomic features is then performed using geometric features yielding a likelihood of being an abnormal lesion along with its location in either the 2-D image section or in the 3-D space of the object.

Journal ArticleDOI
M. Bichsel
TL;DR: A new segmentation algorithm is derived, based on an object-background probability estimate exploiting the experimental fact that the statistics of local image derivatives show a Laplacian distribution, which avoids early thresholding, explicit edge detection, motion analysis, and grouping.
Abstract: A new segmentation algorithm is derived, based on an object-background probability estimate exploiting the experimental fact that the statistics of local image derivatives show a Laplacian distribution. The objects' simple connectedness is included directly into the probability estimate and leads to an iterative optimization approach that can be implemented efficiently. This new approach avoids early thresholding, explicit edge detection, motion analysis, and grouping.

Patent
01 Feb 1994
TL;DR: In this paper, a system for detecting motion in a video image is provided wherein the video image is processed into a statistical array in which the array elements are derived from overlapping portions of the image.
Abstract: A system for detecting motion in a video image is provided wherein the video image is processed into a statistical array in which the array elements are derived from overlapping portions of the image. The elements of the statistical array are compared with corresponding array elements derived from an earlier image. The overlapping nature of the statistical array elements allows any detected changes in the image to be correlated with a portion of the image that is smaller than the area from which the statistical quantities are derived. Such spatial correlation of detected image changes is accomplished by thresholding and/or Boolean comparisons among the elements of the statistical array. A motion detection system in accordance with this invention can be used with a video multiplexing system so that motion can be detected at a plurality of remote locations.
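The thresholding comparison at the heart of the claim can be sketched with overlapping block means standing in for the patent's statistical array; the block size, step, and threshold below are illustrative, and the Boolean spatial-correlation logic is omitted:

```python
import numpy as np

def block_means(frame, block=16, step=8):
    """Means over overlapping blocks (step < block gives the overlap)."""
    h, w = frame.shape
    rows = range(0, h - block + 1, step)
    cols = range(0, w - block + 1, step)
    return np.array([[frame[r:r + block, c:c + block].mean() for c in cols]
                     for r in rows])

def motion_mask(prev_frame, curr_frame, thresh=8.0):
    """Flag blocks whose mean changed by more than thresh between frames."""
    return np.abs(block_means(curr_frame) - block_means(prev_frame)) > thresh
```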

Journal ArticleDOI
TL;DR: It is shown that the analog-temporal behavior of photodiodes combined with thresholding amplifiers can be used favorably to do certain low-level image processing tasks including median filtering and convolution.
Abstract: The paper introduces the concept of near-sensor image processing. By this, the authors mean techniques in which the physical properties of the image sensor itself are utilized to do part of the signal processing task. It is shown that the analog-temporal behavior of photodiodes combined with thresholding amplifiers can be used favorably to do certain low-level image processing tasks including median filtering and convolution. The given examples also show how adaptivity to different light levels can be achieved in a natural way. To extract features from the image, such as moments and shape factors, the authors introduce a simple measurement function.

Patent
29 Nov 1994
TL;DR: In this article, a method and system for automated detection and classification of masses in mammograms is presented, which include the performance of iterative, multi-level gray level thresholding (202), followed by lesion extraction (203), and feature extraction techniques (205) for classifying true masses from false-postive masses and malignant masses from benign masses.
Abstract: A method and system for automated detection and classification of masses in mammograms. This method and system include the performance of iterative, multi-level gray level thresholding (202), followed by lesion extraction (203) and feature extraction techniques (205) for classifying true masses from false-positive masses and malignant masses from benign masses. The method and system provide improvements in the detection of masses including multi-gray-level thresholding (202) of the processed images to increase sensitivity and accurate region growing and feature analysis to increase specificity. Novel improvements in the classification of masses include a cumulative edge gradient orientation histogram analysis relative to the radial angle of the pixels in question; i.e. either around the margin of the mass or within or around the mass in question. The classification of the mass leads to a likelihood of malignancy.

Proceedings ArticleDOI
09 Oct 1994
TL;DR: The method proposed in this paper utilizes techniques of color segmentation and color thresholding to isolate and pinpoint the eyes, nostrils, and mouth on a color image.
Abstract: A robust facial feature extraction algorithm is required for many applications. The method proposed in this paper utilizes techniques of color segmentation and color thresholding to isolate and pinpoint the eyes, nostrils, and mouth on a color image.

Journal ArticleDOI
TL;DR: A fast two-phase 2D entropic thresholding algorithm is proposed that reduces the processing time of each image from more than 2 h to about 2 min and the required memory space is greatly reduced.

Journal ArticleDOI
TL;DR: A new technique, hierarchical threshold segmentation (HTS), is presented, in which region boundaries are defined over a range of gray-shade thresholds and the hierarchy of the spatial relationships between collocated regions from different thresholds is represented in tree form.
Abstract: A significant task in the automated interpretation of cloud features on satellite imagery is the segmentation of the image into separate cloud features to be identified. A new technique, hierarchical threshold segmentation (HTS), is presented. In HTS, region boundaries are defined over a range of gray-shade thresholds. The hierarchy of the spatial relationships between collocated regions from different thresholds is represented in tree form. This tree is pruned, using a neural network, such that the regions of appropriate sizes and shapes are isolated. These various regions from the pruned tree are then collected to form the final segmentation of the entire image. In segmentation testing using Geostationary Operational Environmental Satellite data, HTS selected 94% of 101 dependent sample pruning points correctly, and 93% of 105 independent sample pruning points. Using Advanced Very High Resolution Radiometer data, HTS correctly selected 90% of both the 235-case dependent sample and the 253-case ...

Patent
11 Mar 1994
TL;DR: In this article, a threshold selection for the DCT coefficients of an image or video frame is based on optimizing for minimum distortion for a specified maximum target coding bit rate or, equivalently, for minimized coding bits rate for the specified maximum allowable distortion constraint.
Abstract: For encoding signals corresponding to still images or video sequences, respective standards known as JPEG and MPEG have been proposed. These standards are based on discrete cosine transform (DCT) compression. For economy of transmission, DCT coefficients may be "thresholded" prior to transmission, by dropping the less significant DCT coefficients. While maintaining JPEG or MPEG compatibility, threshold selection for the DCT coefficients of an image or video frame is based on optimizing for minimum distortion for a specified maximum target coding bit rate or, equivalently, for minimized coding bit rate for a specified maximum allowable distortion constraint. In the selection process, a dynamic programming method is used.

Journal ArticleDOI
TL;DR: Experiments of applying this adaptive raster-scan thresholding algorithm to extracting characters from documents confirmed that a reasonable binary image can be efficiently and effectively obtained from a gray-level image under various illuminations.

Proceedings ArticleDOI
30 Oct 1994
TL;DR: Adaptive segmented attenuation correction using the local thresholding technique (adaptive LTS) has been developed for whole body PET imaging using short (2-3 minutes), post-injection transmission scans and can increase scanner throughput without sacrificing the quality and accuracy of whole-body imaging.
Abstract: Adaptive segmented attenuation correction using the local thresholding technique (adaptive LTS) has been developed for whole body PET imaging using short (2-3 minutes), post-injection transmission scans. Optimal threshold is derived at every scan position to provide appropriate segmentation on the transmission images, which are forward-projected into the attenuation sinograms. The entire computation of the new attenuation sinograms of a single bed position (47 slices) on ECAT EXACT takes as little as 6 minutes. So far, the technique has been found to be very successful in 30 18FDG whole-body oncologic studies without user interaction. The resulting emission data quantitatively matches and qualitatively surpasses those using the standard method. This method can increase scanner throughput without sacrificing the quality and accuracy of whole-body imaging.

Patent
David C Barton
09 Sep 1994
TL;DR: In this paper, a method of halftoning a digital gray scale image is disclosed that utilizes a point-by-point thresholding comparison to a novel diagonal correlation dither matrix, which forces diagonal correlation of adjacent dots in the output image while maximizing dispersion of dots, thereby producing visually unobtrusive output dot patterns.
Abstract: A method of halftoning a digital gray scale image is disclosed that utilizes a point by point thresholding comparison to a novel diagonal correlation dither matrix. The new dither matrix forces diagonal correlation of adjacent dots in the output image while maximizing dispersion of dots, thereby producing visually unobtrusive output dot patterns. The matrix is generated according to a spatial domain cost function that determines a cost value for each candidate pixel based on respective radial distances and relative angles between a candidate pixel and the ON pixels in the matrix such that unit diagonals are favored over placement of vertically or horizontally adjacent dots.
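The point-by-point thresholding comparison itself is the standard ordered-dither operation; the novelty in the patent is the matrix. A sketch using a conventional 4x4 Bayer matrix as a stand-in for the patented diagonal-correlation dither matrix:

```python
import numpy as np

# A standard 4x4 Bayer matrix stands in for the patented
# diagonal-correlation matrix; the comparison step is identical.
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])

def ordered_dither(gray):
    """Point-by-point comparison of 8-bit gray values against a tiled dither matrix."""
    h, w = gray.shape
    thresh = (BAYER4 + 0.5) / 16.0 * 255.0                  # scale matrix to gray range
    tiled = np.tile(thresh, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return gray > tiled                                      # True = ON dot
```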

Journal ArticleDOI
TL;DR: The Histogram-Based Morphological Edge detector (HMED) extracts all the weak gradients yet retains the edge sharpness in the image, and a new morphological operation defined in the domain of the histogram of an image is presented.
Abstract: Presents a new edge detector for automatic extraction of oceanographic (mesoscale) features present in infrared (IR) images obtained from the Advanced Very High Resolution Radiometer (AVHRR). Conventional edge detectors are very sensitive to edge fine structure, which makes it difficult to distinguish the weak gradients that are useful in this application from noise. Mathematical morphology has been used in the past to develop efficient and statistically robust edge detectors. Image analysis techniques use the histogram for operations such as thresholding and edge extraction in a local neighborhood in the image. An efficient computational framework is discussed for extraction of mesoscale features present in IR images. The technique presented in the present article, called the Histogram-Based Morphological Edge detector (HMED), extracts all the weak gradients, yet retains the edge sharpness in the image. A new morphological operation defined in the domain of the histogram of an image is also presented. An interesting experimental result was found by applying the HMED technique to oceanographic data in which certain features are known to have edge gradients of varying strength.

Journal ArticleDOI
TL;DR: Of four different template matching statistics tested for 3-D tracking of amoebae from the cellular slime mould Dictyostelium discoideum, it was found that the automated procedure performed best when using a correlation statistic for matching.
Abstract: We have developed and tested an automated method for simultaneous 3-D tracking of numerous, fluorescently-tagged cells. The procedure uses multiple thresholding to segment individual cells at a starting timepoint, and then iteratively applies a template-matching algorithm to locate a particular cell's position at subsequent timepoints. To speed up the method, we have developed a distributed implementation in which template matching is carried out in parallel on several different server machines. The distributed implementation showed a monotonic decrease in response time with increasing number of servers (up to 15 tested), demonstrating that the tracking algorithm is well suited to parallelization, and that nearly real-time performance could be expected on a parallel processor. Of four different template matching statistics tested for 3-D tracking of amoebae from the cellular slime mould Dictyostelium discoideum, we found that the automated procedure performed best when using a correlation statistic for matching. Using this statistic, the method achieved a 98.5% success rate in correctly identifying a cell from one time point to the next. This method is now being used regularly for 3-D tracking of normal and mutant cells of D. discoideum, and as such provides a means to quantify the motion of many cells within a three-dimensional tissue mass.
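The matching statistic that performed best above was a correlation measure. A 2-D NumPy sketch of zero-mean normalized correlation with a local search around the previous cell position; the actual method operates on 3-D image stacks, and the search radius and window handling here are illustrative:

```python
import numpy as np

def norm_xcorr(template, window):
    """Zero-mean normalized correlation between a cell template and a candidate window."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def best_match(template, image, prev_xy, search=5):
    """Search a small neighbourhood of the previous position for the best-correlated window."""
    th, tw = template.shape
    x0, y0 = prev_xy
    best, best_xy = -1.0, prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0:
                continue
            win = image[y:y + th, x:x + tw]
            if win.shape != template.shape:
                continue
            score = norm_xcorr(template, win)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```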

Proceedings ArticleDOI
09 Oct 1994
TL;DR: The proposed method for enhancing fingerprints explores the ability of the M-lattice to form oriented spatial patterns (like reaction-diffusion), while producing binary outputs (like feedback neural networks).
Abstract: Develops a method for the simultaneous restoration and halftoning of fingerprints using the "M-lattice", a new nonlinear dynamical system. This system is rooted in the reaction-diffusion model, first proposed by Turing to explain morphogenesis (the formation of patterns in nature). But in contrast with the general reaction-diffusion, the state variables of the M-lattice are guaranteed to be bounded. The M-lattice system is closely related to the analog Hopfield network and the cellular neural network, but has more flexibility in how its variables interact. These properties make it better suited than reaction-diffusion for several new engineering applications. The proposed method for enhancing fingerprints explores the ability of the M-lattice to form oriented spatial patterns (like reaction-diffusion), while producing binary outputs (like feedback neural networks). The fingerprints synthesized by the M-lattice retain and emphasize more of the relevant detail than do those obtained by adaptive thresholding, a common halftoning method employed in traditional fingerprint classification systems.

Journal ArticleDOI
TL;DR: A new expression for the image separation requirement in the input plane of the linear joint transform correlator for multiobject detection is provided and a new analytical approach for obtaining the threshold function in the binary joint Transform correlator is introduced.
Abstract: The correlation performance of the binary joint transform correlator for multiobject detection is studied mathematically and experimentally using three types of thresholding methods. These thresholding methods include the spatial frequency dependent threshold function, the median thresholding, and the subset median thresholding. We provide a new expression for the image separation requirement in the input plane of the linear joint transform correlator for multiobject detection. Also, we introduce a new analytical approach for obtaining the threshold function in the binary joint transform correlator. The median thresholding method for multiobject detection by the binary joint transform correlator (JTC) is also analyzed. A hybrid optoelectronic setup is used for experiments. Two different implementations of the threshold function are employed in the experimental system. Experimental results of the binary JTC using different thresholding methods are determined and compared in terms of correlation peak-to-noise ratio, peak-to-sidelobe ratio, and space-bandwidth product. The results indicate that the binary JTC performs well for multiobject detection. Furthermore, using the threshold function in the binary JTC eliminates the first-order correlations between different input targets and the even-order harmonic terms in the output plane.

Journal ArticleDOI
TL;DR: This work applies a new class of space-domain convolution operators, so-called gradient-component operators, to a field example from western Canada and demonstrates its potential for improved imaging of the horizontal-gradient magnitude and thus improved edge detection.
Abstract: A new class of space-domain convolution operators permits computation of the components of the horizontal gradient of gridded potential-field data. These so-called gradient-component operators allow one to vary the passband and thus control the frequency content of the resulting horizontal-gradient map. This facilitates computation of gradient maps that accommodate data of widely varying frequency content. Examination of the transfer functions of these operators suggests that this method of numerical differentiation is well suited to potential-field data: in particular, the operators suppress long wavelengths and high-frequency noise bands and amplify signal. Maps of the horizontal gradient of certain potential fields (e.g., gravity, pseudogravity) may be combined with algorithms that locate relative maxima, so-called thresholding, to automate the procedure of source-body edge detection, which is a useful tool in mapping, for example, basement grain, fault patterns, and igneous intrusive bodies. We apply this new operator, together with an existing thresholding algorithm, to a field example from western Canada and demonstrate its potential for improved imaging of the horizontal-gradient magnitude and thus improved edge detection.
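The edge-detection chain above (horizontal-gradient magnitude followed by "thresholding" of relative maxima) can be approximated with plain central differences in place of the band-tunable gradient-component operators. A rough NumPy sketch; the maxima test and minimum value are simplifications, not the cited thresholding algorithm:

```python
import numpy as np

def horizontal_gradient_magnitude(field, dx=1.0, dy=1.0):
    """Magnitude of the horizontal gradient of a gridded potential field
    (central differences stand in for the paper's gradient-component operators)."""
    gy, gx = np.gradient(field, dy, dx)
    return np.hypot(gx, gy)

def relative_maxima(mag, min_value):
    """Crude relative-maxima 'thresholding' along rows and columns."""
    up    = mag > np.roll(mag,  1, axis=0)
    down  = mag > np.roll(mag, -1, axis=0)
    left  = mag > np.roll(mag,  1, axis=1)
    right = mag > np.roll(mag, -1, axis=1)
    return ((up & down) | (left & right)) & (mag > min_value)
```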

Proceedings ArticleDOI
13 Nov 1994
TL;DR: By appropriate selection of the wavelet basis the detection of microcalcifications in the relevant size range can be nearly optimized in the details sub-bands, to the point where straightforward thresholding can be applied to segment them.
Abstract: Clusters of fine, granular microcalcifications in mammograms may be an early sign of disease. Individual grains are difficult to detect and segment due to size and shape variability and because the background mammogram texture is inhomogeneous. We present a two-stage method based on wavelet transforms for detecting and segmenting calcifications. The first stage consists of a full resolution wavelet transform, which is simply the conventional filter bank implementation without downsampling, so that all sub-bands remain at full size. Four octaves are computed with two inter-octave voices for finer scale resolution. By appropriate selection of the wavelet basis the detection of microcalcifications in the relevant size range can be nearly optimized in the details sub-bands. Detected pixel sites in the LH, HL, and HH sub-bands are heavily weighted before computing the inverse wavelet transform. The LL component is omitted since gross spatial variations are of little interest. Individual microcalcifications are often greatly enhanced in the output image, to the point where straightforward thresholding can be applied to segment them. FROC curves are computed from tests using a well-known database of digitized mammograms. A true positive fraction of 85% is achieved at 0.5 false positives per image. >

Patent
13 Apr 1994
TL;DR: In this article, a method and device for parallel intralinear halftoning of digitized grey value images divided into lines of pixels, including successive thresholding of grey values of the pixels and transportation of a quantization error for each pixel to at least one neighboring pixel, was presented.
Abstract: A method and device for parallel intralinear halftoning of digitized grey value images divided into lines of pixels, including (1) successive thresholding of grey values of the pixels and (2) transportation of a quantization error for each pixel to at least one neighboring pixel still to be thresholded by adjusting the grey value of the at least one neighboring pixel with at least a portion of the quantization error. The halftoning includes: dividing each of the lines of pixels into at least two disparate groups of pixels; separately thresholding each group of pixels within each line; and transporting the quantization error from a pixel of one group to at least one pixel of at least one other disparate group.
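The two claimed steps, successive thresholding and transport of the quantization error to pixels not yet thresholded, are the classic error-diffusion recipe. A serial Floyd-Steinberg sketch for reference; the patent's contribution is splitting each line into disparate groups so they can be thresholded in parallel:

```python
import numpy as np

def error_diffusion(gray):
    """Serial one-pass halftoning: threshold each pixel, then pass the
    quantization error to not-yet-thresholded neighbors (Floyd-Steinberg weights)."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = new > 0
            err = old - new
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out
```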

Patent
Donald C. Forslund
02 May 1994
TL;DR: In this paper, an advanced manufacturing inspection system includes a database containing a rasterized reference image of the product inspected at the inspection resolution, allowing for accurate representation of shaped features.
Abstract: An advanced manufacturing inspection system includes a database containing a rasterized reference image of the product inspected at the inspection resolution, allowing for accurate representation of shaped features. The full image is stored in the system database and is accessed and fed in a raster manner to an electronic registration subsystem which aligns the reference data to the incoming thresholded product inspection data. The aligned reference and inspection data are driven to all parallel defect detection channels. A classifier block selects the output of the desired channels for recording into a defect memory. Alternatively, the thresholding of the inspection gray scale signal is done after registration such that thresholding can be controlled by the reference data. The system is flexible in rendering abnormalities between reference and gray scale inspection images and functions independently of image resolution because the reference and inspection images are of the same resolution. The defects to be rendered are dependent upon that specified by the product designers and the process engineers. Each defect type to be found and rendered is processed by a separate channel whose output can be selected for entry into the defect memory.