
Showing papers on "Image segmentation published in 1979"


Journal ArticleDOI
01 May 1979
TL;DR: This paper discusses image segmentation techniques from the standpoint of the assumptions that an image should satisfy in order for a particular technique to be applicable to it.
Abstract: This paper discusses image segmentation techniques from the standpoint of the assumptions that an image should satisfy in order for a particular technique to be applicable to it. These assumptions, which are often not stated explicitly, can be regarded as (perhaps informal) "models" for classes of images. The paper emphasizes two basic classes of models: statistical models that describe the pixel population in an image or region, and spatial models that describe the decomposition of an image into regions.
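As a toy illustration of the two model classes discussed above, the sketch below (not from the paper) treats the same synthetic image first with a statistical model, a threshold placed in the valley of its bimodal grey-level histogram, and then with a spatial model, a decomposition into connected regions; the image, the threshold value, and the use of scipy's connected-component labelling are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

# Statistical model: the pixel population is a mixture of two grey-level
# distributions, so a threshold in the valley of the bimodal histogram
# separates object pixels from background pixels.
rng = np.random.default_rng(0)
image = rng.normal(80, 10, (64, 64))                  # background population
image[20:40, 20:50] = rng.normal(160, 10, (20, 30))   # object population
threshold = 120               # valley between the two modes (known here by construction)
binary = image > threshold

# Spatial model: the image decomposes into connected regions; label them.
labels, n_regions = ndimage.label(binary)
print("foreground pixels:", int(binary.sum()), "| connected regions:", n_regions)
```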

98 citations


Journal ArticleDOI
TL;DR: Some attempts are described to segment textured black-and-white images by detecting clusters of local feature values and partitioning the feature space so as to separate these clusters.

75 citations


Journal ArticleDOI
TL;DR: This correspondence describes research in the development of symbolic registration techniques directed toward the comparison of pairs of images of the same scene to ultimately generate descriptions of the changes in the scene.
Abstract: This correspondence describes research in the development of symbolic registration techniques directed toward the comparison of pairs of images of the same scene to ultimately generate descriptions of the changes in the scene. Unlike most earlier work in image registration, all the matching and analysis will be performed at a symbolic level rather than a signal level. We have applied this registration procedure on several different types of scenes and the system appears to work well both on pairs of images which may be analyzed in part by signal based systems and those which cannot be so analyzed.

56 citations


Proceedings ArticleDOI
01 Dec 1979
TL;DR: In this paper, a split-and-merge algorithm for picture segmentation is described, where the regions of an arbitrary initial segmentation are tested for uniformity and if not uniform they are subdivided into smaller regions or set aside if their size is below a given threshold.
Abstract: Picture segmentation is expressed as a sequence of decision problems within the framework of a split-and-merge algorithm. First, regions of an arbitrary initial segmentation are tested for uniformity; if not uniform, they are subdivided into smaller regions, or set aside if their size is below a given threshold. Next, regions classified as uniform are subjected to a cluster analysis to identify similar types, which are merged. At this point there exist reliable estimates of the parameters of the random field of each type of region, and these are used to classify some of the remaining small regions. Any regions remaining after this step are considered part of a boundary ambiguity zone. The location of the boundary is then estimated by interpolation between the existing uniform regions. Experimental results on artificial pictures are also included.
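A minimal sketch of the split-and-merge idea described in the abstract, assuming a quadtree split, a simple variance test for uniformity, and a comparison of region means in place of the paper's cluster analysis and random-field parameter estimates; the thresholds and the synthetic test picture are illustrative only.

```python
import numpy as np

def split_and_merge(img, var_thresh=100.0, min_size=4, merge_tol=10.0):
    """Quadtree split of non-uniform regions, then a greedy merge of similar ones.

    Uniformity is a simple variance test and merging compares region means;
    both stand in for the statistical tests and cluster analysis of the paper.
    """
    regions = []

    def split(r0, c0, h, w):
        block = img[r0:r0 + h, c0:c0 + w]
        if h <= min_size or w <= min_size or block.var() <= var_thresh:
            regions.append((r0, c0, h, w, float(block.mean())))
            return
        h2, w2 = h // 2, w // 2
        split(r0, c0, h2, w2)
        split(r0, c0 + w2, h2, w - w2)
        split(r0 + h2, c0, h - h2, w2)
        split(r0 + h2, c0 + w2, h - h2, w - w2)

    split(0, 0, *img.shape)

    # Merge step: regions with similar means receive the same label.
    labels = np.zeros(img.shape, dtype=int)
    centers = []                                  # means of merged region types
    for r0, c0, h, w, mean in regions:
        for k, c in enumerate(centers):
            if abs(mean - c) < merge_tol:
                labels[r0:r0 + h, c0:c0 + w] = k
                break
        else:
            centers.append(mean)
            labels[r0:r0 + h, c0:c0 + w] = len(centers) - 1
    return labels

# Two-level artificial picture with noise; small blocks straddling the
# boundary behave like the paper's boundary ambiguity zone.
rng = np.random.default_rng(0)
img = np.full((64, 64), 50.0)
img[16:48, 16:48] = 150.0
img += rng.normal(0, 5, img.shape)
print("region types found:", len(np.unique(split_and_merge(img))))
```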

56 citations


Journal ArticleDOI
TL;DR: The random walk procedure is intended mainly for the texture discrimination problem, and its possible application to the edge detection problem (as shown in this paper) is just a by-product.
Abstract: We consider the problem of texture discrimination. Random walks are performed in a plane domain D bounded by an absorbing boundary, and the absorption distribution is calculated. Measurements derived from such distributions are the features used for discrimination. Both problems of texture discrimination and edge segment detection can be solved using the same random walk approach. The border distributions and their differences with respect to a homogeneous image can classify two different images as having similar or dissimilar textures. The existence of an edge segment is concluded if the boundary distribution for a given window (subimage) differs significantly from the boundary distribution for a homogeneous (uniform grey level) window. The random walk procedure has been implemented and results of texture discrimination are shown. A comparison is made between results obtained using the random walk approach and first- or second-order statistics, respectively. The random walk procedure is intended mainly for the texture discrimination problem, and its possible application to the edge detection problem (as shown in this paper) is just a by-product.
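The sketch below illustrates the random-walk idea on a single window, assuming walks start at the window centre, take 4-neighbour steps biased toward pixels of similar grey level, and are absorbed on the window border; the feature compared is simply the empirical border (absorption) distribution, which is only a stand-in for the measurements the paper derives from that distribution.

```python
import numpy as np

def absorption_distribution(window, n_walks=2000, rng=None):
    """Empirical distribution of where random walks hit the window border.

    Walks start at the window centre; each step prefers neighbours whose
    grey level is close to the current pixel's (an illustrative way of
    making the walk texture-dependent). Walks are absorbed on the border.
    """
    rng = rng or np.random.default_rng(0)
    h, w = window.shape
    hist = np.zeros(2 * (h + w) - 4)          # one bin per border pixel

    # Enumerate border pixels once so absorption positions map to bins.
    border = [(0, c) for c in range(w)] + [(h - 1, c) for c in range(w)] \
           + [(r, 0) for r in range(1, h - 1)] + [(r, w - 1) for r in range(1, h - 1)]
    index = {p: i for i, p in enumerate(border)}

    for _ in range(n_walks):
        r, c = h // 2, w // 2
        while (r, c) not in index:
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            diffs = np.array([abs(float(window[r, c]) - float(window[rr, cc]))
                              for rr, cc in nbrs])
            p = 1.0 / (1.0 + diffs)           # favour similar grey levels
            p /= p.sum()
            r, c = nbrs[rng.choice(4, p=p)]
        hist[index[(r, c)]] += 1
    return hist / n_walks

# Compare a textured window against a homogeneous one of the same size.
rng = np.random.default_rng(1)
textured = rng.integers(0, 256, (17, 17)).astype(float)
uniform = np.full((17, 17), 128.0)
d = np.abs(absorption_distribution(textured) - absorption_distribution(uniform)).sum()
print(f"L1 distance between border distributions: {d:.3f}")
```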

45 citations


Journal ArticleDOI
TL;DR: An algorithm for automatic segmentation of PAP-stained cell images and its digital implementation is described, with thresholds suitable to separate the nucleus from the cytoplasm, and the cytoplasm from the background, in the filtered image.
Abstract: An algorithm for automatic segmentation of PAP-stained cell images and its digital implementation is described. First, the image is filtered in order to eliminate the granularity and small objects in the image which may upset the segmentation procedure. In a second step, information on gradient and compactness is extracted from the filtered image and stored in three histograms as functions of the extinction. From these histograms, two extinction thresholds are computed. These thresholds are suitable to separate the nucleus from the cytoplasm, and the cytoplasm from the background in the filtered image. Masks are determined in this way, and finally used to analyse the nucleus and the cytoplasm in the original image.
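A rough sketch of the masking step, assuming a median filter for the initial smoothing and two fixed extinction thresholds in place of the values the paper computes from its gradient and compactness histograms; the synthetic cell image is for illustration only.

```python
import numpy as np
from scipy.ndimage import median_filter

def two_threshold_masks(image, t_nucleus, t_cytoplasm):
    """Split a filtered cell image into nucleus / cytoplasm / background masks.

    Assumes darker (higher extinction) pixels belong to the nucleus; the two
    thresholds stand in for the values the paper derives from its gradient
    and compactness histograms.
    """
    filtered = median_filter(image, size=5)       # remove granularity / small objects
    nucleus = filtered <= t_nucleus
    cytoplasm = (filtered > t_nucleus) & (filtered <= t_cytoplasm)
    background = filtered > t_cytoplasm
    return nucleus, cytoplasm, background

# Synthetic cell: dark nucleus inside a mid-grey cytoplasm on a bright background.
yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
cell = np.where(r < 20, 40, np.where(r < 50, 120, 220)).astype(float)
cell += np.random.default_rng(0).normal(0, 5, cell.shape)
n, c, b = two_threshold_masks(cell, t_nucleus=80, t_cytoplasm=170)
print("nucleus px:", int(n.sum()), "cytoplasm px:", int(c.sum()), "background px:", int(b.sum()))
```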

35 citations


Journal ArticleDOI
TL;DR: In this paper, texture parameters are derived from the chromatin pattern after segmentation of the nuclear image; such parameters can contribute to the automated classification of specimens on the basis of single-cell analysis in cervical cytology.
Abstract: Texture parameters of the nuclear chromatin pattern can contribute to the automated classification of specimens on the basis of single-cell analysis in cervical cytology. Current texture parameters are abstract and therefore hamper understanding. In this paper, texture parameters are described that can be derived from the chromatin pattern after segmentation of the nuclear image. These texture parameters are more directly related to the visual properties of the chromatin pattern. The image segmentation procedure is based on a region-growing algorithm which specifically isolates areas of high chromatin density. The texture analysis method has been tested on a data set of images of 112 cervical nuclei on photographic negatives digitized with a step size of 0.125 micron. The preliminary results of a classification trial indicate that these visually interpretable parameters have promising discriminatory power for the distinction between negative and positive specimens.
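A small sketch of a region-growing step that isolates dark (high chromatin density) blobs, from which interpretable per-blob parameters such as area or mean density could then be measured; the seed and growth thresholds and the 4-connectivity are assumptions, not the paper's settings.

```python
import numpy as np
from collections import deque

def grow_dark_regions(nucleus, seed_thresh=60, grow_thresh=90):
    """Label connected high-chromatin-density (dark) blobs by region growing.

    Pixels darker than `seed_thresh` start regions; growth continues into
    4-connected neighbours darker than `grow_thresh`. Thresholds are
    illustrative, not the paper's.
    """
    labels = np.zeros(nucleus.shape, dtype=int)
    next_label = 0
    for seed in zip(*np.nonzero(nucleus < seed_thresh)):
        if labels[seed]:
            continue
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= rr < nucleus.shape[0] and 0 <= cc < nucleus.shape[1]
                        and not labels[rr, cc] and nucleus[rr, cc] < grow_thresh):
                    labels[rr, cc] = next_label
                    queue.append((rr, cc))
    # Per-blob area and mean density are the kind of interpretable texture
    # parameters that could be measured from these labels.
    return labels

rng = np.random.default_rng(0)
nucleus = rng.normal(150, 10, (64, 64))
nucleus[10:20, 10:20] = 40            # two dark chromatin clumps
nucleus[40:55, 30:45] = 50
print("chromatin blobs:", int(grow_dark_regions(nucleus).max()))
```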

21 citations


Proceedings ArticleDOI
09 Jan 1979
TL;DR: In this paper, an approach is described for detecting and classifying tactical targets in FLIR imagery, where the basic assumption used for segmenting objects from their background is that the objects to be detected differ from the background in grey level, edge properties, or texture.
Abstract: An approach is described for detecting and classifying tactical targets in FLIR imagery. The basic assumption used for segmenting objects from their background is that the objects to be detected differ from the background in grey level, edge properties, or texture. Potential targets are selected from a large frame by locating combinations of grey level, edge value, and texture that occur infrequently over the entire frame. Once potential objects are obtained, they are segmented from their backgrounds using the same process as above, applied at a local level. The segmented objects are classified into three types of vehicles or into false alarms. The classification procedure uses features measured on projections made through the segmented objects. Results are shown for 32 test images.
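The sketch below illustrates the "infrequent feature combination" idea, assuming grey level, a Sobel-based edge value, and a local standard deviation as the texture measure, each quantized into a few bins; the joint bin counts over the whole frame then flag pixels whose combination is rare. The features, bin sizes, and rarity threshold are illustrative, not those of the paper.

```python
import numpy as np
from scipy import ndimage

def rare_feature_mask(frame, bins=8, rarity=0.01):
    """Flag pixels whose (grey level, edge value, texture) bin is infrequent.

    Each feature is quantized into `bins` equal-width bins; the joint bin
    counts over the whole frame give the frequency of each combination,
    and pixels in combinations rarer than `rarity` become candidates.
    """
    grey = frame
    edge = ndimage.sobel(frame, axis=0) ** 2 + ndimage.sobel(frame, axis=1) ** 2
    texture = ndimage.generic_filter(frame, np.std, size=5)   # local roughness

    quantized = []
    for f in (grey, edge, texture):
        cuts = np.linspace(f.min(), f.max(), bins + 1)[1:-1]
        quantized.append(np.digitize(f, cuts))                # values 0..bins-1
    combo = (quantized[0] * bins + quantized[1]) * bins + quantized[2]

    counts = np.bincount(combo.ravel(), minlength=bins ** 3)
    freq = counts[combo] / combo.size
    return freq < rarity       # True where the feature combination is rare

rng = np.random.default_rng(0)
frame = rng.normal(100, 5, (128, 128))
frame[60:70, 60:75] += 60      # a small hot object against the background
print("candidate target pixels:", int(rare_feature_mask(frame).sum()))
```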

17 citations


Proceedings ArticleDOI
10 Oct 1979
TL;DR: In this paper, a virtual system for automatic detection of fabric defects is introduced and its performance is investigated experimentally from a software-oriented point of view; the results show that almost all kinds of defects can be detected automatically with little misjudgement.
Abstract: In this paper, a virtual system for automatic detection of fabric defects is introduced and the performance of the system is investigated experimentally from a software-oriented point of view. The discussion of this software system focuses on how the observed noisy data can be cleaned without degrading the resolving power of the image, and on how several kinds of fabric defects can be detected against the noisy background. The following results were obtained experimentally: (1) Input data with resolution below the nominal fineness of a fabric sample (cotton) still provide approximately sufficient information about defects. (2) As a preprocessing step for the fabric image data, the projection procedure is efficient and appropriate for reducing background noise. (3) By combining the projection procedure with a thresholding procedure, almost all kinds of defects can be detected automatically with little misjudgement.
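A small sketch of the projection-plus-thresholding idea behind result (3), assuming row and column mean projections and a robust k-sigma rule for flagging outliers; the synthetic fabric and the thresholds are illustrative only.

```python
import numpy as np

def defect_rows_cols(fabric, k=4.0):
    """Flag rows and columns whose projection deviates strongly from normal.

    Averaging along a row or column suppresses the random background noise
    of the weave, so a broken thread or stain shows up as an outlier in the
    projection. The k-sigma rule (via the median absolute deviation) is an
    illustrative threshold.
    """
    row_proj = fabric.mean(axis=1)
    col_proj = fabric.mean(axis=0)
    flags = []
    for proj in (row_proj, col_proj):
        med = np.median(proj)
        mad = np.median(np.abs(proj - med)) + 1e-9
        flags.append(np.abs(proj - med) > k * 1.4826 * mad)
    return flags  # (defective-row mask, defective-column mask)

rng = np.random.default_rng(0)
fabric = rng.normal(128, 10, (200, 200))
fabric[:, 90] -= 40                      # a missing warp thread (dark column)
rows, cols = defect_rows_cols(fabric)
print("defective rows:", int(rows.sum()), "defective columns:", int(cols.sum()))
```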

13 citations


01 Jan 1979
TL;DR: It is shown that techniques which rely on histogram clustering often generate gross segmentation errors due to overlap in the distributions of the individual objects in a scene.
Abstract: The research in this thesis has focused upon the algorithms and structures that are sufficient to generate an accurate description of the information contained in a relatively complex class of digitized images. This aspect of machine vision is often referred to as 'low-level' vision or segmentation, and usually includes those processes which function close to the sensory data. The bulk of this thesis is devoted to the exploration of some of the problems typically encountered in segmentation. In addition, a new and robust algorithm is presented that avoids most of these problems. The analysis is carried out through the use of a series of computer-generated test images with known characteristics. Segmentation algorithms of varying degrees of complexity are applied to each image and their performance is carefully evaluated. It will be shown that even the most sophisticated algorithms currently in use often perform poorly when confronted with certain apparently simple images. In particular, it is shown that techniques which rely on histogram clustering often generate gross segmentation errors due to overlap in the distributions of the individual objects in a scene. Moreover, the relaxation processes used to correct these errors are themselves prone to errors, but of a different kind. Both techniques, clustering and relaxation, fail because they are based on information which is too global to be effective in complex scenes.
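A short numerical illustration (not from the thesis) of the overlap failure mode described above: when two objects have overlapping grey-level distributions, even the best possible global histogram threshold mislabels a fixed fraction of pixels.

```python
import numpy as np

# Two regions with overlapping grey-level distributions: any global
# threshold picked from the pooled histogram mislabels the overlap.
rng = np.random.default_rng(0)
object_a = rng.normal(100, 20, 10_000)   # e.g. background pixels
object_b = rng.normal(140, 20, 10_000)   # e.g. object pixels

# For equal-variance Gaussians the best global threshold is the midpoint.
threshold = 120
errors = (object_a > threshold).mean() + (object_b <= threshold).mean()
print(f"misclassified fraction with an ideal global threshold: {errors / 2:.1%}")
# Roughly 16% of pixels are wrong no matter how the threshold is tuned, which
# is the kind of gross segmentation error attributed to histogram clustering.
```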

10 citations


Proceedings ArticleDOI
D. E. Soland1, P. M. Narendra1
20 Aug 1979
TL;DR: The Prototype Automatic Target Screener performs automatic real time detection, recognition and cueing of tactical targets, and incorporates DC restoration to correct for artifacts introduced by the common module FLIR.
Abstract: Applications of image processing to FLIR systems include image enhancement, to improve the imagery displayed to the observer, and automatic target cueing, to reduce the operator's search time. The Prototype Automatic Target Screener (PATS) performs automatic real-time detection, recognition, and cueing of tactical targets. It also incorporates DC restoration to correct for artifacts introduced by the common module FLIR, and adaptive contrast enhancement to optimally use the display dynamic range and eliminate the need for continual operator adjustments. The PATS system is designed to interface to standard 525-line and 875-line TV formats and perform real-time processing. Decisions on target classification and location are updated every 1/10 second and displayed by means of symbology overlays on the operator's display. Three levels of classification provide optimum performance at low computational cost. The first level rejects clutter. Potential targets are further classified into one of six categories at the second level. These decisions are continually correlated over several frames and the combined decisions are displayed as cues. The PATS architecture features charge-coupled devices to perform many of the high-speed functions required for image segmentation and first-level feature extraction. It incorporates a bit-slice microprogrammable digital processor and frame memory for speed and flexibility in second-level feature extraction and classification.

Proceedings ArticleDOI
06 Nov 1979
TL;DR: Two approaches to organ detection in abdominal computerized tomography scans, one local and one global, have been developed; the second is an iterative, adaptive boundary-delineating algorithm suitable for single-organ detection and amenable to the incorporation of organ-specific knowledge.
Abstract: Two approaches to organ detection in abdominal computerized tomography scans, one local and one global, have been developed. The first involves a boundary-delineating algorithm which operates homogeneously on entire images and coordinates the use of multiple local criteria for advanced edge detection. The second involves an iterative, adaptive boundary-delineating algorithm suitable for single-organ detection and amenable to the incorporation of organ-specific knowledge. Regional results of the first (global) algorithm can initialize the second (local) algorithm. Algorithms were tested on mathematical phantoms prior to application on clinical patient data.


Journal ArticleDOI
TL;DR: It is demonstrated how a practical statistical segmentation algorithm may be constructed which operates locally and gives satisfactory global results.
Abstract: Images whose properties are spatially variant must often be processed locally. Statistical techniques may be required to do this if the image is noisy. These may be difficult to apply when local regions are so small that means, variances, and similar quantities are unstable. We demonstrate how a practical statistical segmentation algorithm may be constructed which operates locally and gives satisfactory global results. The size of the local area over which computations are made has an important effect on the segmentation quality.
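A minimal sketch of a locally adaptive segmentation in the spirit described above, assuming the local mean over a sliding window acts as a spatially variant threshold; the window size controls the trade-off the abstract mentions between unstable local statistics and blurred boundaries. The offset, window size, and test image are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_threshold_segment(image, window=31, offset=5.0):
    """Classify each pixel against the mean of its own neighbourhood.

    The local mean acts as a spatially variant threshold, so slow changes
    across the image do not break the segmentation; small windows make the
    local statistics unstable, large windows blur real region boundaries.
    """
    local_mean = uniform_filter(image, size=window)
    return image > local_mean + offset

# Spatially variant scene: an intensity ramp plus a brighter square patch.
image = np.tile(0.5 * np.arange(128.0), (128, 1))   # ramp defeats any single global threshold
image[50:70, 50:70] += 20
image += np.random.default_rng(0).normal(0, 3, image.shape)
seg = local_threshold_segment(image, window=31, offset=5)
print("foreground fraction:", round(float(seg.mean()), 3))
```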

ReportDOI
30 Sep 1979
TL;DR: Techniques are described for visual inspection using linear features, relaxation matching for map-matching, contour matching, a segment-based stereo matching method, use of shadows in the interpretation of aerial images, use of image shading to infer 3-D depth, and use of texture for image segmentation.
Abstract: : This technical report summarizes the image understanding activities performed by USC during the period October 1, 1982 through September 30, 1983 under contract number F33615-82-K-1786 with the Defense Advanced Research Projects Agency, Information Processing Techniques Office. This contract is monitored by the Air Force Wright Aeronautical Laboratories, Wright-Patterson Air Force Base, Dayton, OH. The purpose of this research program is to develop techniques and systems for understanding images, particularly for mapping applications. This report describes techniques for: visual inspection using linear features, relaxation matching for map-matching, techniques for contour matching, a segment-based stereo matching method, use of shadows in the interpretation of aerial images, use of image shading to infer 3-D depth, use of texture for image segmentation, VLSI implementation of graph isomorphism algorithms and a study of suitable architectures for implementing image understanding algorithms.

Proceedings ArticleDOI
28 Dec 1979
TL;DR: Video image enhancement through adaptive noise filtering and edge sharpening is presented; the effective video signal-to-noise ratio can be improved with minimal observable contouring, degradation in spatial resolution, or other artifacts.
Abstract: Video image enhancement through adaptive noise filtering and edge sharpening is presented. The basic concept behind this technique is that, given some form of image segmentation, noise filtering can be performed in nearly uniform regions and edge sharpening only near edges. The resulting algorithm is nonlinear and adaptive. It adapts globally to the input SNR and locally to the gradient magnitude. Implementation is quite simple. Performance is nonlinear and depends on the SNR of the original image. The effective video signal-to-noise ratio can be improved with minimal observable contouring, degradation in spatial resolution, or other artifacts.
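A rough sketch of the gradient-gated idea, assuming a 3x3 average for the uniform-region filter, unsharp masking for the edge sharpening, and a fixed gradient threshold standing in for the global adaptation to input SNR; all parameters are illustrative only.

```python
import numpy as np
from scipy import ndimage

def adaptive_enhance(image, grad_thresh=100.0, sharpen_gain=1.0):
    """Smooth nearly uniform regions, sharpen only near edges.

    The local gradient magnitude decides, per pixel, whether the output
    comes from a 3x3 noise-reducing average or from an unsharp-masking
    term; in a fully adaptive scheme the threshold would also track the
    input SNR, which is fixed here for simplicity.
    """
    smoothed = ndimage.uniform_filter(image, size=3)
    blurred = ndimage.gaussian_filter(image, sigma=1.5)
    sharpened = image + sharpen_gain * (image - blurred)     # unsharp masking

    grad = np.hypot(ndimage.sobel(image, axis=1), ndimage.sobel(image, axis=0))
    return np.where(grad > grad_thresh, sharpened, smoothed)

rng = np.random.default_rng(0)
frame = np.full((128, 128), 60.0)
frame[:, 64:] = 180.0                          # a single vertical edge
noisy = frame + rng.normal(0, 8, frame.shape)
out = adaptive_enhance(noisy)
print("flat-area noise std:", round(float(noisy[:, :32].std()), 1),
      "->", round(float(out[:, :32].std()), 1))
```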


Proceedings ArticleDOI
04 Sep 1979
TL;DR: In this article, two methods are described for measuring internal machine part clearances by digital processing of industrial radiographs. The first technique requires mathematical modeling of the expected optical density of a radiograph as a function of machine part motion, and the second method involves image registration where radiographs are correlated in a piecewise fashion to allow inference of relative motion of machine parts in a time varying series of images.
Abstract: Two methods are described in this paper for measuring internal machine part clearances by digital processing of industrial radiographs. The first technique requires mathematical modeling of the expected optical density of a radiograph as a function of machine part motion. Part separations are estimated on the basis of individual image scan lines. A final part separation estimate is produced by fitting a polynomial to the individual estimates and correcting for imaging and processing degradations, which are simulated using a mathematical model. The second method involves an application of image registration in which radiographs are correlated in a piecewise fashion to allow inference of relative motion of machine parts in a time-varying series of images. Each image is divided into segments, which are dominated by a small number of features. Segments from one image are cross-correlated with subsequent images to identify machine part motion in image space. Since the magnitude of a correlation peak is a function of the similarity between an image segment and a subsequent image, it can be used to infer the presence of relative motion of features within each image segment, thus identifying feature boundaries. Correlation peak magnitude is also used in assessing the confidence that a particular motion has occurred between images. The rigid feature motion of machine parts requires image registration by discontinuous parts, in contrast to the continuous image deformations one encounters in the projective perspective transformations characteristic of remote sensing applications.
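A minimal sketch of the piecewise correlation step of the second method, assuming plain spatial-domain cross-correlation of one mean-subtracted segment against a later image; the peak position gives the apparent motion of the features in that segment, and the normalized peak value serves as the confidence measure mentioned in the abstract. The synthetic images and segment size are assumptions.

```python
import numpy as np
from scipy.signal import correlate2d

def segment_shift(segment, later_image):
    """Locate an image segment in a later image by cross-correlation.

    The correlation peak position gives the apparent displacement of the
    features covered by the segment; the normalized peak value is returned
    as a confidence measure.
    """
    seg = segment - segment.mean()
    img = later_image - later_image.mean()
    corr = correlate2d(img, seg, mode='valid')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    patch = img[peak[0]:peak[0] + seg.shape[0], peak[1]:peak[1] + seg.shape[1]]
    confidence = corr[peak] / (np.linalg.norm(seg) * np.linalg.norm(patch) + 1e-9)
    return peak, confidence

rng = np.random.default_rng(0)
image1 = rng.normal(0, 1, (64, 64))
image2 = np.roll(image1, shift=(3, 5), axis=(0, 1))   # the "part" moved by (3, 5)
segment = image1[20:36, 20:36]                        # a 16x16 segment of the first image
(peak_r, peak_c), conf = segment_shift(segment, image2)
print("estimated motion:", (int(peak_r) - 20, int(peak_c) - 20),
      "confidence:", round(float(conf), 2))
```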

Proceedings ArticleDOI
26 Dec 1979
TL;DR: This paper briefly reviews methods which have been employed to do image segmentation and indicates how texture analysis might be utilized to do this task.
Abstract: This paper briefly reviews methods which have been employed to do image segmentation and indicates how texture analysis might be utilized to do this task. A method will be described for finding the unit cell size of a texture. The unit cell represents a tile which can be used to tile the plane and generate the texture; it is the fundamental building block of repetitive textures.
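The abstract does not spell out how the unit cell size is found; a common generic approach, sketched below as an assumption rather than the paper's method, is to look for the first off-origin peaks of the texture's autocorrelation along each axis.

```python
import numpy as np

def texture_period(texture):
    """Estimate the vertical and horizontal period of a repetitive texture.

    The autocorrelation of a periodic pattern peaks again at multiples of
    the unit-cell size, so the first prominent off-origin peak along each
    axis gives a period estimate. This is a generic approach, not
    necessarily the paper's method.
    """
    t = texture - texture.mean()
    f = np.fft.fft2(t)
    acf = np.fft.ifft2(f * np.conj(f)).real     # circular autocorrelation
    row = acf[0, :acf.shape[1] // 2]            # horizontal lags
    col = acf[:acf.shape[0] // 2, 0]            # vertical lags

    def first_peak(profile):
        # Skip the origin; find the first local maximum that is prominent.
        for k in range(2, len(profile) - 1):
            if profile[k] > profile[k - 1] and profile[k] >= profile[k + 1] \
                    and profile[k] > 0.5 * profile[0]:
                return k
        return None

    return first_peak(col), first_peak(row)     # (vertical, horizontal) period

# Synthetic texture tiled from an 8x12 unit cell.
rng = np.random.default_rng(0)
cell = rng.normal(0, 1, (8, 12))
texture = np.tile(cell, (16, 8))                # a 128 x 96 image
print("estimated unit cell size:", texture_period(texture))
```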

Proceedings ArticleDOI
Joseph L. Mundy1
06 Nov 1979
TL;DR: Reviews functional and structural models for image data that have proved effective in automatic inspection applications, with examples taken from practical industrial systems.
Abstract: This paper will review functional and structural models for image data that have proved effective in automatic inspection applications. Examples will be taken from practical industrial systems.

Proceedings ArticleDOI
28 Dec 1979
TL;DR: The user does not need in-depth knowledge of the whole system: the IMAGE 4 software package takes over the housekeeping functions and permits easy FORTRAN programming, while the LATIN interactive program package enables anyone without computer knowledge to use the system.
Abstract: This paper describes a multi-purpose image-processing system. This system was designed for different applications, for example, medical image processing (thermographic imaging, computer-assisted diagnosis, etc.), remote sensing (multispectral analysis and classification, thermal mapping of rivers, etc.), and electron microscope image processing (T.E.M., noise filtering, pattern recognition, geometrical measurements). The system can be connected on-line to any kind of input and output image peripheral. The peripherals used now are: TV camera, thermographic camera, flying-spot scanner, flying-spot film recorder, mechanical scanner coupled to an optical processor, refreshed B&W and color displays, graphic tablet, magnetic tape, and disc. The user does not need in-depth knowledge of the whole system: the IMAGE 4 software package takes over the housekeeping functions and permits easy FORTRAN programming, whereas the LATIN interactive program package enables anyone without computer knowledge to use the system. In the conclusion, a comparison is made with the major image-processing systems and software packages published in the literature. The appendix gives illustrations of the previously mentioned applications.

Proceedings ArticleDOI
28 Dec 1979
TL;DR: The problems of intelligent image processing by computer, especially the processing of medical images like computed tomography scans, are examined in light of current image segmentation techniques, and it is concluded that part of the problem lies in the lack of knowledge about how to guide low-level processes from higher-level goals.
Abstract: The problems of intelligent image processing by computer, especially the processing of medical images like computed tomography scans, are examined in light of current image segmentation techniques. It is concluded that part of the problem lies in the lack of knowledge about how to guide low-level processes from higher-level goals. An iterative boundary-finding scheme is presented which may aid in this guidance, and results from using specific criteria in the general framework to locate kidneys in abdominal computed tomography scans are presented and discussed. The problem of complex object localization in images is discussed, and some avenues for further research are indicated.

Proceedings ArticleDOI
06 Nov 1979
TL;DR: This paper is concerned with methodologies in statistical image processing and recognition; specific areas considered include the decision rules in image recognition and their comparative evaluation under the finite-sample-size condition.
Abstract: This paper is concerned with methodologies in statistical image processing and recognition. Specific areas considered are the following: (1) the decision rules in image recognition and their comparative evaluation under the finite-sample-size condition; (2) statistical feature extraction techniques for image segmentation, with emphasis on the statistical characteristics of textural features; (3) statistical contextual analysis algorithms for images, with emphasis on the contextual preprocessing/postprocessing techniques needed to implement the optimum decision rules with context; (4) statistical image modelling techniques, including the nonhomogeneous models and the autoregressive models. The software problems involved in these areas are also examined in detail.

Proceedings ArticleDOI
09 Jan 1979
TL;DR: Specific applications, such as the automatic analysis of chest radiographs and the segmentation of white blood cell (neutrophil) images, are presented, and problems associated with the above research, together with the techniques used to solve them, are discussed.
Abstract: This paper describes the ongoing research at Purdue University in the area of biomedical image processing and understanding. Problems associated with the above research, together with the techniques used to solve them, are discussed, and the research activities are also viewed in the context of the artificial intelligence (A.I.) field. Specific applications, such as the automatic analysis of chest radiographs and the segmentation of white blood cell (neutrophil) images, are presented.