
Showing papers on "Thresholding published in 1983"


Journal ArticleDOI
J. M. White, G. D. Rohrer
TL;DR: Two new, cost-effective thresholding algorithms for use in extracting binary images of characters from machine- or hand-printed documents are described, with a more aggressive approach directed toward specialized, high-volume applications which justify extra complexity.
Abstract: Two new, cost-effective thresholding algorithms for use in extracting binary images of characters from machine- or hand-printed documents are described. The creation of a binary representation from an analog image requires such algorithms to determine whether a point is converted into a binary one because it falls within a character stroke or a binary zero because it does not. This thresholding is a critical step in Optical Character Recognition (OCR). It is also essential for other Character Image Extraction (CIE) applications, such as the processing of machine-printed or handwritten characters from carbon copy forms or bank checks, where smudges and scenic backgrounds, for example, may have to be suppressed. The first algorithm, a nonlinear, adaptive procedure, is implemented with a minimum of hardware and is intended for many CIE applications. The second is a more aggressive approach directed toward specialized, high-volume applications which justify extra complexity.
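
The two algorithms themselves are not reproduced in this summary. As a rough, hypothetical illustration of the general idea behind nonlinear, adaptive character thresholding (comparison against a local background estimate, with an arbitrary window size and bias rather than the authors' hardware procedure), a sketch might look like:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_binarize(gray, window=15, bias=10):
        # Mark a pixel as stroke (1) when it is darker than the local
        # background mean by more than `bias` gray levels; otherwise 0.
        background = uniform_filter(gray.astype(float), size=window)
        return (gray < background - bias).astype(np.uint8)

Pixels darker than their neighbourhood by at least the bias map to one (stroke); smudges and backgrounds near the local mean map to zero.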

283 citations


Journal ArticleDOI
TL;DR: Algorithms for automatic thresholding of grey levels (without reference to the histogram) are described; they use the 'index of fuzziness' and the 'entropy' of a fuzzy set, which become minimal when the crossover point of an S-function corresponds to the boundary levels between different regions in image space.
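
A minimal sketch of this approach, assuming the standard S-function and the linear index of fuzziness on an 8-bit image (the bandwidth value is illustrative, not taken from the paper): sweep the crossover point over the grey scale and keep the level at which the index of fuzziness is smallest.

    import numpy as np

    def s_function(x, a, c):
        # Standard S-function with crossover b = (a + c) / 2.
        b = (a + c) / 2.0
        y = np.zeros_like(x, dtype=float)
        lo = (x > a) & (x <= b)
        hi = (x > b) & (x < c)
        y[lo] = 2.0 * ((x[lo] - a) / (c - a)) ** 2
        y[hi] = 1.0 - 2.0 * ((x[hi] - c) / (c - a)) ** 2
        y[x >= c] = 1.0
        return y

    def fuzzy_threshold(gray, bandwidth=20):
        # Pick the crossover level minimizing the linear index of fuzziness
        # over an 8-bit grey scale.
        x = gray.astype(float).ravel()
        best_level, best_index = None, np.inf
        for b in range(bandwidth, 256 - bandwidth):
            mu = s_function(x, b - bandwidth, b + bandwidth)
            nu = 2.0 * np.minimum(mu, 1.0 - mu).mean()  # linear index of fuzziness
            if nu < best_index:
                best_level, best_index = b, nu
        return best_level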

148 citations


Journal ArticleDOI
TL;DR: This letter describes algorithms for global thresholding of grey-tone images that use second-order grey level statistics; these are independent of the grey level histogram and effective in selecting thresholds for images with unimodal grey level distributions.

74 citations


Patent
14 Apr 1983
TL;DR: In this article, a method is proposed for determining the average gray value of a plurality of regions within a digitized electronic video image and for determining the color of each region from its average gray value.
Abstract: A method for determining the average gray value of a plurality of regions within a digitized electronic video image and for determining the color of the region from the average gray value. Initial image segmentation is accomplished by thresholding a multibit digital value into a one bit black and white representation of a picture element of the region. The gray values of the picture elements within either a black or a white region can then be analyzed to determine the average gray value by constructing a histogram of each region and eliminating from the histogram the picture elements not associated with the actual color or shading of the region of interest. The remaining picture elements are averaged to obtain the average gray value for the region.
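
A rough sketch of this kind of computation (the trimming rule below is an arbitrary stand-in for the patent's step of eliminating picture elements unrelated to the region's actual shade):

    import numpy as np

    def region_average_gray(gray, region_mask, trim_fraction=0.1):
        # Sort the gray values inside the region (equivalent to walking its
        # histogram), drop `trim_fraction` of the pixels from each tail, and
        # average the remainder.
        values = np.sort(gray[region_mask].ravel())
        n_trim = int(trim_fraction * values.size)
        kept = values[n_trim:values.size - n_trim]
        if kept.size == 0:          # tiny regions: fall back to all pixels
            kept = values
        return float(kept.mean())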

65 citations


Journal ArticleDOI
TL;DR: A method of detecting blobs in images by building a succession of lower-resolution images and looking for spots in them; thresholds can be calculated in the low-resolution image and applied to the region of the original image corresponding to each spot.
Abstract: A method of detecting blobs in images is described. The method involves building a succession of lower resolution images and looking for spots in these images. A spot in a low resolution image corresponds to a distinguished compact region in a known position in the original image. Further, it is possible to calculate thresholds in the low resolution image, using very simple methods, and to apply those thresholds to the region of the original image corresponding to the spot. Examples are shown in which variations of the technique are applied to several images.
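
A toy version of the scheme can be sketched as follows; the spot test and the threshold rule used here are simple placeholders, not the "very simple methods" of the paper:

    import numpy as np

    def halve(img):
        # Reduce resolution by averaging non-overlapping 2x2 blocks.
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h, :w].astype(float)
        return (img[0::2, 0::2] + img[0::2, 1::2] +
                img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

    def detect_blobs(gray, levels=3, spot_margin=30):
        # Build a small pyramid, call a low-resolution pixel a "spot" when it
        # is much brighter than the low-resolution mean, derive a threshold
        # there, and apply it to the corresponding window of the original.
        pyramid = [gray.astype(float)]
        for _ in range(levels):
            pyramid.append(halve(pyramid[-1]))
        coarse = pyramid[-1]
        scale = 2 ** levels
        spots = np.argwhere(coarse > coarse.mean() + spot_margin)
        masks = []
        for (r, c) in spots:
            window = pyramid[0][r * scale:(r + 1) * scale,
                                c * scale:(c + 1) * scale]
            threshold = (coarse[r, c] + coarse.mean()) / 2.0  # a simple choice
            masks.append(((r * scale, c * scale), window > threshold))
        return masks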

45 citations


Journal ArticleDOI
TL;DR: This research investigates a tracker able to handle "multiple hot-spot" targets, in which digital (or optical) signal processing is employed on the FLIR data to identify the underlying target shape.
Abstract: In the recent past, the capability of tracking dynamic targets from forward-looking infrared (FLIR) measurements has been improved substantially, by replacing standard correlation trackers with adaptive extended Kalman filters. This research investigates a tracker able to handle "multiple hot-spot" targets, in which digital (or optical) signal processing is employed on the FLIR data to identify the underlying target shape. This identified shape is then used in the measurement model portion of the filter as it estimates target offset from the center of the field-of-view. In this algorithm, an extended Kalman filter processes the raw intensity measurements from the FLIR to produce target estimates. An alternative algorithm uses a linear Kalman filter to process the position indications of an enhanced correlator in order to generate tracking estimates; the enhancement is accomplished not only by thresholding to eliminate poor correlation information, but also by incorporating the dynamics information from the Kalman filter and the on-line identification of the target shape as a template instead of merely using previous frames of data. The performance capabilities of these two algorithms are evaluated under various tracking environment conditions and for a range of choices of design parameters.
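
The filtering machinery underlying both trackers is standard. As a minimal illustration of the predict/update cycle only (a generic constant-velocity linear Kalman filter on a scalar offset, not the adaptive extended filter or the enhanced correlator of the paper):

    import numpy as np

    def kalman_track(measurements, dt=1.0, q=1e-2, r=1.0):
        # Generic constant-velocity Kalman filter for a scalar target offset.
        # `measurements` are noisy offset readings (e.g. from a correlator);
        # process noise q and measurement noise r are illustrative values.
        F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
        H = np.array([[1.0, 0.0]])                     # position is observed
        Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                          [dt ** 2 / 2, dt]])
        R = np.array([[r]])
        x = np.zeros((2, 1))                           # [offset, velocity]
        P = np.eye(2)
        estimates = []
        for z in measurements:
            x = F @ x                                  # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                        # update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([[z]]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            estimates.append(float(x[0, 0]))
        return estimates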

41 citations


Patent
Julius Hayman
07 Jun 1983
TL;DR: In this paper, a method is presented for improving the segmentation of target information in a video signal developed by target tracking apparatus when the average video levels of the target and its background surround tend to be alike.
Abstract: A method of improving the segmentation of target information in a video signal developed by target tracking apparatus, for use when the average video levels of the target and its background surround tend to be alike.

28 citations


Journal ArticleDOI
Richard G. Casey, C. R. Jih
TL;DR: A previously developed classification technique, based on decision trees, has been extended in order to improve reading accuracy in an environment of considerable character variation, including the possibility that documents in the same font style may be produced using quite different print technologies.
Abstract: A low-cost optical character recognition (OCR) system can be realized by means of a document scanner connected to a CPU through an interface. The interface performs elementary image processing functions, such as noise filtering and thresholding of the video image from the scanner. The processor receives a binary image of the document, formats the image into individual character patterns, and classifies the patterns one-by-one. A CPU implementation is highly flexible and avoids much of the development and manufacturing costs for special-purpose, parallel circuitry typically used in commercial OCR. A processor-based recognition system has been investigated for reading documents printed in fixed-pitch conventional type fonts, such as occur in routine office typing. Novel, efficient methods for tracking a print line, resolving it into individual character patterns, detecting underscores, and eliminating noise have been devised. A previously developed classification technique, based on decision trees, has been extended in order to improve reading accuracy in an environment of considerable character variation, including the possibility that documents in the same font style may be produced using quite different print technologies. The system has been tested on typical office documents, and also on artificial stress documents, obtained from a variety of typewriters.

27 citations


Journal ArticleDOI
TL;DR: An on-line computer system for measuring the deformation of a diffuse object with a speckle interferometer and a man–machine interactive method for simple high-speed processing of the interferogram using a light pen are presented.
Abstract: An on-line computer system for measuring the deformation of a diffuse object with a speckle interferometer is presented. Methods for evaluating a speckle interferogram using digital image processing techniques are also discussed. The system consists of an interferometric optical setup and a computer-TV image processing facility. A speckle interferogram is generated arithmetically between two digitized speckle patterns before and after deformation of the object. The information about the deformation is extracted by two procedures in analyzing the interferogram: (a) automatic analysis using digital image processing techniques such as gray scale modification, linear spatial filtering, thresholding, and skeletoning; (b) man-machine interactive method for simple high-speed processing of the interferogram using a light pen. The determined fringe order numbers are interpolated and differentiated spatially to give strain, slope, and bending moment of the deformed object. Some examples of processed patterns are presented.

24 citations


Journal ArticleDOI
Zenon Kulpa
TL;DR: A controversy between two common approaches to area and perimeter measurement of discrete blobs is resolved and it is shown that the validity of these approaches (and corresponding measurement methods) is related to the digitization scheme assumed.
Abstract: A controversy between two common approaches to area and perimeter measurement of discrete blobs is resolved in this paper. The first approach takes the chain of 8-connected border pixels as the object boundary (thus using Pick's theorem to exclude a contribution of a part of these to the value of the area), whereas the second one considers the boundary of the object to be a line separating object-border pixels from background ones (thus counting all pixels of the object as its area). It is shown that the validity of these approaches (and corresponding measurement methods) is related to the digitization scheme assumed: the first approach is valid for a boundary-line digitization (or edge-detection) scheme, and the second one is better for a point-sampling digitization (or region-extraction or thresholding) scheme.
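
The discrepancy is easy to reproduce. For a 4 x 4 block of object pixels, counting every pixel gives an area of 16, while taking the chain of border-pixel centres as the boundary polygon gives 9, which also follows from Pick's theorem A = I + B/2 - 1 with I = 4 interior and B = 12 boundary lattice points. A small sketch of the two conventions:

    import numpy as np

    def area_pixel_count(mask):
        # Point-sampling convention: every object pixel contributes one unit.
        return int(mask.sum())

    def area_border_polygon(vertices):
        # Boundary-line convention: shoelace area of the polygon through the
        # border-pixel centres (consistent with Pick's theorem A = I + B/2 - 1).
        x, y = zip(*vertices)
        x, y = np.array(x, float), np.array(y, float)
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    mask = np.ones((4, 4), dtype=bool)                  # a 4 x 4 square blob
    corners = [(0, 0), (3, 0), (3, 3), (0, 3)]          # border-pixel centres
    print(area_pixel_count(mask))                       # 16
    print(area_border_polygon(corners))                 # 9.0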

24 citations


Journal ArticleDOI
TL;DR: A minimum spanning forest method is described which finds cluster patterns in a random graph of points, and a uniformity test suitable for low-statistics data sets is proposed.

Patent
17 Mar 1983
TL;DR: In this article, a peak/valley extracting circuit is proposed that extracts peaks and valleys in the analog video signal before the signal is thresholded, so as to determine an optimal threshold level.
Abstract: A system for converting an analog video signal obtained by scanning an original image to be processed into a two-valued video signal, such as an input portion of a facsimile machine, includes a peak/valley extracting circuit for extracting peaks and valleys in the analog video signal before thresholding the analog video signal, so as to determine an optimal threshold level to be used in thresholding. The peak/valley extracting circuit includes peak and valley thresholds in addition to the threshold used to convert the analog video signal into a two-valued video signal. The circuit is structured to supply a peak/valley extraction signal only if an extracted peak or valley is also at or beyond the corresponding peak or valley threshold level.

Journal ArticleDOI
TL;DR: An algorithm is presented which finds the best-fitting pair of constants, in the least squares sense, to a set of scalar data, called the 'bimean' of the data.
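
The TL;DR does not give the algorithm. One straightforward way to obtain such a best-fitting pair of constants (the optimal grouping is always a cut of the sorted data, so an exhaustive scan of cut points suffices) is sketched below; it is not necessarily the procedure of the paper:

    import numpy as np

    def bimean(data):
        # Best-fitting pair of constants in the least-squares sense: try every
        # cut of the sorted data, fit each side by its mean, keep the cut with
        # the smallest total squared error.
        # Returns (low_mean, high_mean, threshold).
        x = np.sort(np.asarray(data, dtype=float))
        best = None
        for k in range(1, x.size):
            lo, hi = x[:k], x[k:]
            sse = ((lo - lo.mean()) ** 2).sum() + ((hi - hi.mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, lo.mean(), hi.mean(), (lo[-1] + hi[0]) / 2.0)
        return best[1], best[2], best[3]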

Journal ArticleDOI
TL;DR: A procedure is given which substantially reduces the processing time needed to perform maximum likelihood classification on large data sets; it uses a set of fixed thresholds which, if exceeded by one probability density function, make it unnecessary to evaluate a competing density function.
Abstract: A procedure is given which substantially reduces the processing time needed to perform maximum likelihood classification on large data sets. The given method uses a set of fixed thresholds which, if exceeded by one probability density function, make it unnecessary to evaluate a competing density function. Proofs are given of the existence and optimality of these thresholds for the class of continuous, unimodal, and quasi-concave density functions (which includes the multivariate normal), and a method for computing the thresholds is provided for the specific case of multivariate normal densities. An example with remote sensing data consisting of some 20 000 observations of four-dimensional data from nine ground-cover classes shows that by using thresholds, one could cut the processing time almost in half.
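
A rough sketch of the skipping idea for multivariate normal classes, using each density's peak value 1 / sqrt((2*pi)^d |Sigma|) as a conservative fixed threshold: once the best density seen so far exceeds a competitor's peak, that competitor can never win and need not be evaluated. The paper's thresholds may be tighter; this only illustrates the principle.

    import numpy as np

    def classify(x, means, covs):
        # Maximum likelihood classification with fixed "skip" thresholds.
        # means: list of (d,) arrays; covs: list of (d, d) arrays.
        d = means[0].size
        invs = [np.linalg.inv(c) for c in covs]
        norms = [1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(c)) for c in covs]
        peaks = norms[:]                  # density maximum, attained at the mean
        best_class, best_p = None, -np.inf
        for j in range(len(means)):
            if best_p > peaks[j]:         # class j cannot beat the current best
                continue
            diff = x - means[j]
            p = norms[j] * np.exp(-0.5 * diff @ invs[j] @ diff)
            if p > best_p:
                best_class, best_p = j, p
        return best_class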

01 Jan 1983
TL;DR: Parallel techniques are shown to eliminate some types of overhead associated with serial processing, offer the possibility of improved algorithm capability and accuracy, and decrease execution time.
Abstract: Contour extraction is used as an image processing scenario to explore the advantages of parallelism and the architectural requirements for a parallel computer system, such as PASM. Parallel forms of edge-guided thresholding and contour tracing algorithms are developed and analyzed to highlight important aspects of the scenario. Edge-guided thresholding uses adaptive thresholding to allow contour extraction where gray level variations would not allow global thresholding to be effective. Parallel techniques are shown to eliminate some types of overhead associated with serial processing, offer the possibility of improved algorithm capability and accuracy, and decrease execution time. The implications that the parallel scenario has for machine architecture are considered. Various desirable system attributes are established. 30 references.

01 Jan 1983
TL;DR: This dissertation develops several techniques for automatically segmenting images into regions by adding and deleting clusters based on image space information, by merging regions, and by defining different compatibility coefficients in the relaxation so as to preserve fine structures.
Abstract: This dissertation develops several techniques for automatically segmenting images into regions. The basic approach involves the integration of different types of non-semantic knowledge into the segmentation process such that the knowledge can be used when and where it is useful. These processes are intended to produce initial segmentations of complex images which are faithful with respect to fine image detail, balanced by a computational need to limit the segmentations to a fairly small number of regions. Natural scenes often contain intensity gradients, shadows, highlights, texture, and small objects with fine geometric structure, all of which make the calculation and evaluation of reasonable segmentations for natural scenes extremely difficult. The approach taken by this dissertation is to integrate specialized knowledge into the segmentation process for each kind of image event that can be shown to adversely affect the performance of the process. At the center of our segmentation system is an algorithm which labels pixels in localized subimages with the feature histogram cluster to which they correspond, followed by a relaxation labeling process. However, this algorithm has a tendency to undersegment by failing to find clusters corresponding to small objects; it may also oversegment by splitting intensity gradients into multiple clusters, by finding clusters for "mixed pixel" regions, and by finding clusters corresponding to microtexture elements. In addition, the relaxation process often destroys fine structure in the image. Finally, the artificial subimage partitions introduce the problem of inconsistent cluster sets and the need to recombine the segmentations of the separate subimages into a consistent whole. This dissertation addresses each of these problems by adding and deleting clusters based on image space information, by merging regions, and by defining different compatibility coefficients in the relaxation so as to preserve fine structures. The result is a segmentation algorithm which is more reliable over a broader range of images than the simple clustering algorithm. Solutions to the same segmentation problems were examined via the integration of different segmentation algorithms (including edge, region, and thresholding algorithms) to produce a consistent segmentation. . . . (Author's abstract exceeds stipulated maximum length. Discontinued here with permission of author.)

Journal ArticleDOI
TL;DR: The cellular logic transform has been used extensively in the analysis of bilevel images generated by thresholding gray level images and is shown to be useful in gray level image processing.
Abstract: The cellular logic transform has been used extensively in the analysis of bilevel images generated by thresholding gray level images. Now, by a process of gray level resynthesis, it is shown to be useful in gray level image processing.

Proceedings ArticleDOI
26 Oct 1983
TL;DR: A novel algorithm based on nonlinear filtering is described that overcomes the difficulties caused by motion blur and has succeeded in locating, with high accuracy, objects whose images move at speeds of 1000 pixels/sec.
Abstract: When computer vision is used for the real-time control of dynamic systems, one difficulty arises from the fact that TV-cameras integrate the incoming light over a time interval equivalent to one frame period. If an observed object moves fast enough, its image will deteriorate in two ways: the edges will be blurred and the contrast will diminish. This causes conventional edge detectors or thresholding methods to break down. This paper describes a novel algorithm on the basis of nonlinear filtering, that overcomes these difficulties and has succeeded in locating with high accuracy objects, whose images move at speeds of 1000 pixels/sec. Experimental results obtained by simulation and by controlling real objects are reported. A similar algorithm can be used to estimate the velocity of an object from the degree of motion blur of its image.

Journal ArticleDOI
TL;DR: In this article, a method of estimating the centroid location of a target utilizing radar scan return amplitude versus angle information is presented, which is compared with three thresholding estimators and a first moment estimator in a computer-simulated automatic landing system.
Abstract: A method of estimating the centroid location of a target utilizing radar scan return amplitude versus angle information is presented. The method is compared with three thresholding estimators and a first moment estimator in a computer-simulated automatic landing system. This new method is the most robust and accurate during periods of low signal-to-noise ratio. In periods of high signal-to-noise ratio the method has less error than the thresholding methods and is similar in accuracy to the first moment estimator. Furthermore, the number of pulse transmissions required to obtain a desired level of performance in noise is much less than that needed for the thresholding methods and the first moment estimator employed in this simulation.
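
For reference, the two families of estimators being compared can be written in a few lines (a generic sketch, not the authors' specific variants):

    import numpy as np

    def threshold_centroid(angles, amplitudes, threshold):
        # Average the angles whose return amplitude exceeds a threshold.
        keep = amplitudes > threshold
        return float(np.mean(angles[keep])) if keep.any() else None

    def first_moment_centroid(angles, amplitudes):
        # Amplitude-weighted mean angle (first moment of the return profile).
        return float(np.sum(angles * amplitudes) / np.sum(amplitudes))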

Patent
02 May 1983
TL;DR: In this article, a table lookup process is used to threshold the data to 4 bits/pel with different single pel values being assigned to pels having normalized values above or below set threshold values.
Abstract: A method is described for reducing the amount of data that must be transmitted over a telecommunications link in order to generate an acceptable video image of a bilevel document at a remote monitor. Document data is captured at 8 bits/pel with a conventional video freeze-frame system. The captured data is normalized to eliminate variations due to camera settings, room lighting levels and the like. A table lookup process is used to threshold the data to 4 bits/pel, with different single pel values being assigned to pels having normalized values above or below set threshold values. The number of intermediate gray scale ranges is reduced. The thresholding process provides long strings of constant pel values while suppressing small nonuniformities in the document's background. The resultant data is optimized for run length compression and the image quality is improved. At the receiver, a second table lookup process is used to define playback values. To provide optimum contrast between characters and background, the table lookup values at the receiver may be different from those used in the compression process.
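
The lookup step can be pictured as a 256-entry table applied to every normalized pel; the cut points and code assignments below are illustrative, not those of the patent:

    import numpy as np

    def build_lut(black_cut=64, white_cut=192):
        # 256-entry table mapping normalized 8-bit pels to 4-bit codes: one
        # code for pels below black_cut, one code for pels above white_cut,
        # and a reduced set of gray codes in between.
        lut = np.empty(256, dtype=np.uint8)
        lut[:black_cut] = 0                               # uniform black
        lut[white_cut:] = 15                              # uniform white
        mids = np.arange(black_cut, white_cut)
        lut[black_cut:white_cut] = 1 + ((mids - black_cut) * 14
                                        // (white_cut - black_cut))
        return lut

    def threshold_image(pels_8bit, lut):
        # Apply the table to an array of normalized 8-bit pel values.
        return lut[pels_8bit]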

Journal ArticleDOI
TL;DR: In this paper, a digital seismic reflection section was converted to a gray scale image composed of pixels and processed with techniques borrowed from the disciplines of image enhancement and pattern recognition, including scaling, thresholding, density equalization, filtering, segmentation, and edge-finding.
Abstract: A digital seismic reflection section may be converted to a gray scale image composed of pixels and processed with techniques borrowed from the disciplines of image enhancement and pattern recognition. Types of processing include scaling, thresholding, density equalization, filtering, segmentation, and edge-finding. These are successfully applied to a migrated common mid-point seismic reflection line that traverses the Queen Charlotte fault (located in the northeastern Pacific Ocean). The result is the definition and enhancement of an elongated, near-vertical reflectivity anomaly associated with the Queen Charlotte fault.

Journal ArticleDOI
TL;DR: A procedure for image segmentation that requires no image-dependent thresholds is described; it involves not only detection of edges but also production of closed region boundaries.


Journal ArticleDOI
Friedrich M. Wahl, Samuel So, Kwan Wong
TL;DR: A hybrid measurement technique for high-precision surface inspection uses an interferometer to image microscopic surface defects and a new misalignment measure for binary patterns identifies the "straightness" of the fringe lines.
Abstract: A hybrid measurement technique is proposed for high-precision surface inspection. The technique uses an interferometer to image microscopic surface defects. In order to quantify the degree of various surface defects, the interferograms are scanned, digitized, and subsequently converted to a binary image by using an adaptive thresholding technique which takes into account the inhomogeneity of the imaging system. A new misalignment measure for binary patterns identifies the "straightness" of the fringe lines. It is shown that the resulting percentages of misaligned picture elements conform fairly well with the degree of various surface defects.

Journal ArticleDOI
TL;DR: In this article, the combination of both signals, grey level and energy dispersive information, leads to a masking procedure which eliminates image sections, not belonging to the phase under investigation, from the evaluation.
Abstract: Due to the complexity of biological tissue and man-made materials, the limits of light microscopy in quantitative image analysis are often reached. The scanning electron microscope provides a much higher resolution in such situations, especially when using the low-noise secondary electron signal. However, secondary electrons are sensitive to topography rather than to the elements constituting the specimen. This effect usually makes discrimination of distinct phases or objects by simple grey value thresholding difficult or even impossible. The additional information given by an energy dispersive X-ray system is the key to overcoming this problem. The combination of both signals, grey level and energy dispersive information, leads to a masking procedure which eliminates image sections not belonging to the phase under investigation from the evaluation.

Dissertation
01 Jan 1983
TL;DR: An architecture suitable for VLSI implementations is presented which enables a wide range of image processing operations to be done in a real-time, pipelined fashion and allows derivation of a metric for the similarity of these graphs and of the fingerprints which they represent.
Abstract: Advances in integrated circuit technology have made possible the application of LSI and VLSI techniques to a wide range of computational problems. Image processing is one of the areas that stands to benefit most from these techniques. This thesis presents an architecture suitable for VLSI implementations which enables a wide range of image processing operations to be done in a real-time, pipelined fashion. These operations include filtering, thresholding, thinning and feature extraction. The particular class of images chosen for study are fingerprints. There exists a long history of fingerprint classification and comparison techniques used by humans, but previous attempts at automation have met with little success. This thesis makes use of VLSI image processing operations to create a graph structure representation (minutia graph) of the inter-relationships of various low-level features of fingerprint images. An approach is then presented which allows derivation of a metric for the similarity of these graphs and of the fingerprints which they represent. An efficient algorithm for derivation of maximal common subgraphs of two minutia graphs serves as the basis for computation of this metric, and is itself based upon a specialized clique-finding algorithm. Results of cross comparison of fingerprints from multiple individuals are presented.

Journal ArticleDOI
TL;DR: Single and multi-level thresholding schemes are applied to images of industrial parts and the results are presented to show how the defects can be detected using image processing techniques.
Abstract: Thresholding of a given image into a binary one is a necessary step for most image analysis techniques. Different thresholding techniques have been proposed in the literature to achieve this goal. In this work the authors investigate some of the available techniques, and examine their suitability for industrial applications, such as on-line quality inspection. Single and multi-level thresholding schemes are applied to images of industrial parts and the results are presented. In addition, examples are given to show how the defects can be detected using image processing techniques.
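
For reference, the difference between the two schemes is simply the number of thresholds applied (a generic sketch; the threshold values themselves would come from whichever selection technique is under evaluation):

    import numpy as np

    def single_level(gray, t):
        # Binary image: 1 where the gray value exceeds the single threshold t.
        return (gray > t).astype(np.uint8)

    def multi_level(gray, thresholds):
        # Label image: pixels are binned into len(thresholds) + 1 classes by
        # an increasing sequence of thresholds, e.g. [60, 120, 200].
        return np.digitize(gray, sorted(thresholds)).astype(np.uint8)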

Journal Article
TL;DR: Two nuclear segmentation methods, Baky's minimax algorithm and thresholding, were compared on a sample of 879 atypical bronchial epithelial cells in sputum and indicated that with most classes of atypia, N/C ratios determined by minimax were closer to the visually derived values than were those of thresholding.
Abstract: Two nuclear segmentation methods, Baky's minimax algorithm and thresholding, were compared on a sample of 879 atypical bronchial epithelial cells in sputum. Nuclear-cytoplasmic (N/C) ratios for all cells were determined by each segmentation method and compared to a visually determined value. Cells were categorized by atypia class (from metaplastic through malignant), by staining characteristics (orangeophilic and nonorangeophilic) and by method of digitization (either scanning microphotometry or video system). The method of digitization was confounded by subject differences. The results indicated that with most classes of atypia, N/C ratios determined by minimax were closer to the visually derived values than were those of thresholding, particularly with orangeophilic cells. Both methods become progressively less accurate, as compared to the visual procedure, as the degree of atypia increases.

Proceedings Article
01 Jan 1983
TL;DR: In this paper, an evaluation of suitable edge discrimination techniques and their application to image segmentation is reported, from an analysis of Thematic Mapper Simulator data, it is concluded that segmentation by automated edge discrimination is a valuable technique which can be used in the development of per-field classifiers.
Abstract: An evaluation of suitable edge discrimination techniques and their application to image segmentation is reported. From an analysis of Thematic Mapper Simulator data, it is concluded that segmentation by automated edge discrimination is a valuable technique which can be used in the development of per-field classifiers. A Laplacian convolution operator appears to be the most cost-effective high-pass filter. Spatial frequency domain filtering is more versatile in its ability to enhance different edge types. A simple global gray value threshold can produce good edge discrimination from an enhanced image which may be improved by using a local thresholding technique. A gap-fill postprocessing technique is necessary for useful segmentation. Gradient and other directionally dependent techniques are unsuitable for segmentation.
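
The recommended combination, Laplacian high-pass enhancement followed by a single global grey value threshold, is compact enough to sketch; the kernel and the default threshold rule here are generic choices, not necessarily the study's settings:

    import numpy as np
    from scipy.ndimage import convolve

    LAPLACIAN = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=float)

    def laplacian_edge_map(band, threshold=None):
        # Convolve one image band with a Laplacian kernel and apply a single
        # global threshold to the response magnitude to mark edge pixels.
        response = np.abs(convolve(band.astype(float), LAPLACIAN, mode='nearest'))
        if threshold is None:
            threshold = response.mean() + 2.0 * response.std()  # simple default
        return response > threshold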

DOI
01 Feb 1983
TL;DR: The paper presents a straightforward discussion of the problems associated with the use of limiting and analyses the resultant degradation of the nulling characteristics of the array.
Abstract: There are many applications of adaptive antennas where it is desirable to set an adjustable threshold below which received signals are not nulled by the adaptive process. This may be achieved with a gradient descent adaptive control algorithm by a suitable choice of integration loss and update gain factors. Generally, the gradient estimate is obtained by a correlation process, and the hardware design can be simplified by incorporating some form of signal limiting prior to the correlator. However, it is known that this can also have severe implications on the thresholding performance in the presence of multiple signals. The paper presents a straightforward discussion of the problems associated with the use of limiting and analyses the resultant degradation of the nulling characteristics of the array.