
Showing papers on "Image segmentation published in 1994"


Journal ArticleDOI
TL;DR: This correspondence presents a new algorithm for segmentation of intensity images which is robust, rapid, and free of tuning parameters, and suggests two ways in which it can be employed, namely, by using manual seed selection or by automated procedures.
Abstract: We present here a new algorithm for segmentation of intensity images which is robust, rapid, and free of tuning parameters. The method, however, requires the input of a number of seeds, either individual pixels or regions, which will control the formation of regions into which the image will be segmented. In this correspondence, we present the algorithm, briefly discuss its properties, and suggest two ways in which it can be employed, namely, by using manual seed selection or by automated procedures.
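A minimal sketch of seeded region growing in this spirit can be written with a priority queue: the unassigned boundary pixel whose intensity is closest to the mean of an adjacent region is labelled next. The function name, the 4-connectivity, and the priority-queue formulation are implementation assumptions, not the paper's exact algorithm.

```python
import heapq
import numpy as np

def seeded_region_growing(image, seeds):
    """Sketch of seeded region growing.

    image: 2-D numpy array of intensities.
    seeds: dict mapping a nonzero label -> list of (row, col) seed pixels.
    Unlabeled pixels are claimed one at a time, always taking the boundary
    pixel whose intensity is closest to the mean of the neighbouring region.
    """
    labels = np.zeros(image.shape, dtype=int)
    sums, counts = {}, {}
    heap, tie = [], 0
    for lab, pts in seeds.items():
        sums[lab], counts[lab] = 0.0, 0
        for (r, c) in pts:
            labels[r, c] = lab
            sums[lab] += float(image[r, c])
            counts[lab] += 1

    def push_neighbours(r, c, lab):
        nonlocal tie
        mean = sums[lab] / counts[lab]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] \
                    and labels[rr, cc] == 0:
                delta = abs(float(image[rr, cc]) - mean)
                heapq.heappush(heap, (delta, tie, rr, cc, lab))
                tie += 1

    for lab, pts in seeds.items():
        for (r, c) in pts:
            push_neighbours(r, c, lab)
    while heap:
        _, _, r, c, lab = heapq.heappop(heap)
        if labels[r, c] != 0:       # already claimed by a closer region
            continue
        labels[r, c] = lab
        sums[lab] += float(image[r, c])
        counts[lab] += 1
        push_neighbours(r, c, lab)
    return labels
```

With two seeds placed in a bright and a dark area, the two regions grow until they meet at the intensity boundary.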

3,331 citations


Journal ArticleDOI
TL;DR: An efficient differential box-counting approach to estimate fractal dimension is proposed and by comparison with four other methods, it has been shown that the method is both efficient and accurate.
Abstract: Fractal dimension is an interesting feature proposed to characterize roughness and self-similarity in a picture. This feature has been used in texture segmentation and classification, shape analysis, and other problems. An efficient differential box-counting approach to estimating fractal dimension is proposed in this note. By comparison with four other methods, it has been shown that the authors' method is both efficient and accurate. Practical results on artificial and natural textured images are presented.
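A rough sketch of the differential box-counting idea: at each grid scale the image is tiled into blocks, the grey-level surface in each block is covered with boxes, and the fractal dimension is the slope of the log-log box count. The grid sizes and grey-level normalisation below are illustrative assumptions.

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16)):
    """Sketch of a differential box-counting (DBC) estimator.

    For grid scale s the image is tiled into s x s blocks; the number of
    boxes of height h = s * G / M needed to cover the grey surface in a
    block is ceil(max/h) - ceil(min/h) + 1.  The fractal dimension is the
    slope of log N_r against log(1/r), with r = s / M.
    """
    M = img.shape[0]                      # assume a square image, M x M
    G = float(img.max()) + 1.0            # grey-level range
    log_inv_r, log_N = [], []
    for s in sizes:
        h = s * G / M                     # box height at this scale
        N = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                block = img[i:i + s, j:j + s].astype(float)
                N += int(np.ceil(block.max() / h)
                         - np.ceil(block.min() / h)) + 1
        log_inv_r.append(np.log(M / s))
        log_N.append(np.log(N))
    slope, _ = np.polyfit(log_inv_r, log_N, 1)
    return slope
```

A perfectly flat image should come out with dimension 2, a useful sanity check on any such estimator.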

767 citations


Journal ArticleDOI
TL;DR: Simulations on synthetic images indicate that the new algorithm performs better and requires much less computation than MAP estimation using simulated annealing, and is found to improve classification accuracy when applied to the segmentation of multispectral remotely sensed images with ground truth data.
Abstract: Many approaches to Bayesian image segmentation have used maximum a posteriori (MAP) estimation in conjunction with Markov random fields (MRF). Although this approach performs well, it has a number of disadvantages. In particular, exact MAP estimates cannot be computed, approximate MAP estimates are computationally expensive to compute, and unsupervised parameter estimation of the MRF is difficult. The authors propose a new approach to Bayesian image segmentation that directly addresses these problems. The new method replaces the MRF model with a novel multiscale random field (MSRF) and replaces the MAP estimator with a sequential MAP (SMAP) estimator derived from a novel estimation criterion. Together, the proposed estimator and model result in a segmentation algorithm that is not iterative and can be computed in time proportional to MN, where M is the number of classes and N is the number of pixels. The authors also develop a computationally efficient method for unsupervised estimation of model parameters. Simulations on synthetic images indicate that the new algorithm performs better and requires much less computation than MAP estimation using simulated annealing. The algorithm is also found to improve classification accuracy when applied to the segmentation of multispectral remotely sensed images with ground truth data.

687 citations


Journal ArticleDOI
01 Jan 1994
TL;DR: A critical evaluation of speckle suppression filters is made using a simulated SAR image as well as airborne and spaceborne SAR images and computational efficiency and implementation complexity are compared.
Abstract: Speckle, appearing in synthetic aperture radar (SAR) images as granular noise, is due to the interference of waves reflected from many elementary scatterers. Speckle in SAR images complicates the image interpretation problem by reducing the effectiveness of image segmentation and classification. To alleviate deleterious effects of speckle, various ways have been devised to suppress it. This paper surveys several better-known speckle filtering algorithms. The concept of each filtering algorithm and the interrelationships between algorithms are discussed in detail. A set of performance criteria is established and comparisons are made for the effectiveness of these filters in speckle reduction and edge, line, and point target contrast preservation, using a simulated SAR image as well as airborne and spaceborne SAR images. In addition, computational efficiency and implementation complexity are compared. This critical evaluation of speckle suppression filters is mostly new and is presented as a survey paper.

570 citations


Journal ArticleDOI
TL;DR: A novel multiresolution color image segmentation (MCIS) algorithm which uses Markov random fields (MRF's) is proposed; it is a relaxation process that converges to the MAP (maximum a posteriori) estimate of the segmentation.
Abstract: Image segmentation is the process by which an original image is partitioned into some homogeneous regions. In this paper, a novel multiresolution color image segmentation (MCIS) algorithm which uses Markov random fields (MRF's) is proposed. The proposed approach is a relaxation process that converges to the MAP (maximum a posteriori) estimate of the segmentation. The quadtree structure is used to implement the multiresolution framework, and the simulated annealing technique is employed to control the splitting and merging of nodes so as to minimize an energy function and therefore maximize the MAP estimate. The multiresolution scheme enables the use of different dissimilarity measures at different resolution levels. Consequently, the proposed algorithm is noise resistant. Since the global clustering information of the image is required in the proposed approach, the scale space filter (SSF) is employed as the first step. The multiresolution approach is used to refine the segmentation. Experimental results on both synthesized and real images are very encouraging. To evaluate these results quantitatively, a new evaluation criterion is proposed and developed.

530 citations


Book ChapterDOI
Serge Beucher
01 Jan 1994
TL;DR: This paper presents a technique based on mosaic images and on the computation of a watershed transform on a valued graph derived from the mosaic images that leads to a hierarchical segmentation of the image and considerably reduces over-segmentation.
Abstract: A major drawback when using the watershed transformation as a segmentation tool comes from the over-segmentation of the image. Over-segmentation is produced by the great number of minima embedded in the image or in its gradient. A powerful technique has been designed to suppress over-segmentation by a primary selection of markers pointing out the regions or objects to be segmented in the image. However, this approach can be used only if we are able to compute the marker set before applying the watershed transformation. But, in many cases and especially for complex scenes, this is not possible and an alternative technique must be used to reduce the over-segmentation. This technique is based on mosaic images and on the computation of a watershed transform on a valued graph derived from the mosaic images. This approach leads to a hierarchical segmentation of the image and considerably reduces over-segmentation.

382 citations


Journal ArticleDOI
TL;DR: It is shown analytically that applying a properly configured bandpass filter to a textured image produces distinct output discontinuities at texture boundaries; the analysis is based on Gabor elementary functions, but it is the bandpass nature of the filter that is essential.
Abstract: Many texture-segmentation schemes use an elaborate bank of filters to decompose a textured image into a joint space/spatial-frequency representation. Although these schemes show promise, and although some analytical work has been done, the relationship between texture differences and the filter configurations required to distinguish them remains largely unknown. This paper examines the issue of designing individual filters. Using a 2-D texture model, we show analytically that applying a properly configured bandpass filter to a textured image produces distinct output discontinuities at texture boundaries; the analysis is based on Gabor elementary functions, but it is the bandpass nature of the filter that is essential. Depending on the type of texture difference, these discontinuities form one of four characteristic signatures: a step, ridge, valley, or a step change in average local output variation. Accompanying experimental evidence indicates that these signatures are useful for segmenting an image. The analysis indicates those texture characteristics that are responsible for each signature type. Detailed criteria are provided for designing filters that can produce quality output signatures. We also illustrate occasions when asymmetric filters are beneficial, an issue not previously addressed.
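The core claim, that a properly tuned bandpass (Gabor) filter turns a texture boundary into a step in local output magnitude, is easy to reproduce in one dimension. The signal frequencies and filter parameters below are arbitrary choices for illustration, not the paper's design criteria.

```python
import numpy as np

# Two abutting "textures": sinusoids of different spatial frequency.
n = 512
x = np.arange(n)
signal = np.where(x < n // 2,
                  np.sin(2 * np.pi * 0.10 * x),
                  np.sin(2 * np.pi * 0.30 * x))

# Complex Gabor filter tuned to the left texture's frequency (0.10).
t = np.arange(-32, 33)
sigma = 10.0
gabor = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * 0.10 * t)

# The magnitude of the filtered output steps down sharply at the
# texture boundary -- one of the characteristic signatures analysed.
envelope = np.abs(np.convolve(signal, gabor, mode='same'))
left = envelope[64:192].mean()    # well inside the matched texture
right = envelope[320:448].mean()  # well inside the unmatched texture
```

The filter responds strongly only where the local frequency falls in its passband, so the envelope forms a step at the boundary between the two textures.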

362 citations


Journal ArticleDOI
TL;DR: To decide if two regions should be merged, instead of comparing the difference of region feature means with a predefined threshold, the authors adaptively assess region homogeneity from region feature distributions, resulting in an algorithm that is robust with respect to various image characteristics.
Abstract: Proposes a simple, yet general and powerful, region-growing framework for image segmentation. The region-growing process is guided by regional feature analysis; no parameter tuning or a priori knowledge about the image is required. To decide if two regions should be merged, instead of comparing the difference of region feature means with a predefined threshold, the authors adaptively assess region homogeneity from region feature distributions. This results in an algorithm that is robust with respect to various image characteristics. The merge criterion also minimizes the number of merge rejections and results in a fast region-growing process that is amenable to parallelization.

321 citations


Journal ArticleDOI
R. Vaillant, C. Monrocq, Y. Le Cun
01 Aug 1994
TL;DR: An original two-step neural approach is presented for the localisation of objects in an image and is applied to the problem of localising faces in images.

Abstract: An original neural approach to the localisation of objects in an image is presented; it has two steps. In the first step, a rough localisation is performed by presenting each pixel with its neighbourhood to a neural net which is able to indicate whether this pixel and its neighbourhood are part of an image of the sought object. This first filter does not discriminate for position. From its result, areas which might contain an image of the object can be selected. In the second step, these areas are presented to another neural net which can determine the exact position of the object in each area. This algorithm is applied to the problem of localising faces in images.

299 citations


Journal ArticleDOI
TL;DR: A new method is described for automatic control point selection and matching that can produce subpixel registration accuracy and is demonstrated by registration of SPOT and Landsat TM images.
Abstract: A new method is described for automatic control point selection and matching. First, reference and sensed images are segmented and closed-boundary regions are extracted. Each region is represented by a set of affine-invariant moment-based features. Correspondence between the regions is then established by a two-stage matching algorithm that works both in the feature space and in the image space. Centers of gravity of corresponding regions are used as control points. A practical use of the proposed method is demonstrated by registration of SPOT and Landsat TM images. It is shown that the authors' method can produce subpixel registration accuracy.

292 citations


Journal ArticleDOI
TL;DR: The authors prove that the most simple segmentation tool, the “region merging” algorithm, is enough to compute a local energy minimum belonging to a compact class and to achieve the job of most of the tools mentioned above.
Abstract: Most segmentation algorithms are composed of several procedures: split-and-merge, small-region elimination, boundary smoothing, and so on, each depending on several parameters. The introduction of an energy to minimize leads to a drastic reduction of these parameters. The authors prove that the most simple segmentation tool, the “region merging” algorithm, made according to the simplest energy, is enough to compute a local energy minimum belonging to a compact class and to do the job of most of the tools mentioned above. The authors explain why “merging” in a variational framework leads to a fast multiscale, multichannel algorithm with a pyramidal structure. The obtained algorithm is O(n ln n), where n is the number of pixels of the picture. This fast algorithm is applied to grey-level and texture segmentation, and experimental results are shown.

Book ChapterDOI
Luc Vincent
01 Jan 1994
TL;DR: Grey-scale area openings and closings can be seen as transformations with a structuring element which locally adapts its shape to the image structures, and therefore have very nice filtering capabilities.
Abstract: The filter that removes from a binary image the components with area smaller than a parameter λ is called area opening. Together with its dual, the area closing, it is first extended to grey-scale images. It is then proved to be equivalent to a maximum of morphological openings with all the connected structuring elements of area greater than or equal to λ. The study of the relationships between these filters and image extrema leads to a very efficient area opening/closing algorithm. Grey-scale area openings and closings can be seen as transformations with a structuring element which locally adapts its shape to the image structures, and therefore have very nice filtering capabilities. Their effect is compared to that of more standard morphological filters. Some applications in image segmentation and hierarchical decomposition are also briefly described.
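As an illustration of the binary case described above (before the grey-scale extension), an area opening can be written as a connected-component sweep that erases small components. The function name and the choice of 4-connectivity are assumptions for illustration; efficient implementations use quite different machinery.

```python
from collections import deque

def binary_area_opening(img, lam):
    """Sketch of a binary area opening: remove 4-connected foreground
    components whose area is smaller than lam (the lambda parameter).
    img is a list of lists of 0/1; a new image is returned.
    """
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                # Collect one connected component by breadth-first search.
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    i, j = q.popleft()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ii, jj = i + di, j + dj
                        if 0 <= ii < rows and 0 <= jj < cols \
                                and img[ii][jj] and not seen[ii][jj]:
                            seen[ii][jj] = True
                            q.append((ii, jj))
                if len(comp) < lam:        # component too small: erase it
                    for i, j in comp:
                        out[i][j] = 0
    return out
```

Components of area at least lambda pass through untouched, which is exactly the filter's defining property in the binary setting.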

Journal ArticleDOI
T. Uchiyama, M.A. Arbib
TL;DR: It is shown that competitive learning converges to approximate the optimum solution based on this criterion, theoretically and experimentally, and its efficiency as a color image segmentation method is shown.
Abstract: Presents a color image segmentation method which divides the color space into clusters. Competitive learning is used as a tool for clustering the color space based on the least sum-of-squares criterion. We show that competitive learning converges to approximate the optimum solution based on this criterion, theoretically and experimentally. We apply this method to various color scenes and show its efficiency as a color image segmentation method. We also show the effects of using different color coordinates to be clustered, with some experimental results.
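A minimal sketch of winner-take-all competitive learning for colour clustering, in the spirit of the least sum-of-squares criterion above. The learning rate, epoch count, and initialisation are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def competitive_learning(pixels, k, epochs=20, lr=0.05, seed=0):
    """Winner-take-all competitive learning for clustering colour pixels.

    Each presented pixel moves the nearest of k reference vectors a
    fraction lr towards it, approximately minimising the sum of squared
    distances between pixels and their winning reference vectors.
    """
    rng = np.random.default_rng(seed)
    # Initialise the reference vectors from randomly chosen pixels.
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(epochs):
        for p in pixels[rng.permutation(len(pixels))]:
            w = np.argmin(((centers - p) ** 2).sum(axis=1))  # winner unit
            centers[w] += lr * (p - centers[w])              # move it closer
    return centers
```

Segmentation then amounts to assigning each pixel the label of its nearest learned reference colour.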

Journal ArticleDOI
TL;DR: An effective algorithm for character recognition in scene images is studied and highly promising experimental results have been obtained using the method on 100 images involving characters of different sizes and formats under uncontrolled lighting.
Abstract: An effective algorithm for character recognition in scene images is studied. Scene images are segmented into regions by an image segmentation method based on adaptive thresholding. Character candidate regions are detected by observing gray-level differences between adjacent regions. To ensure extraction of multisegment characters as well as single-segment characters, character pattern candidates are obtained by associating the detected regions according to their positions and gray levels. A character recognition process selects patterns with high similarities by calculating the similarities between character pattern candidates and the standard patterns in a dictionary and then comparing the similarities to the thresholds. A relaxational approach to determine character patterns updates the similarities by evaluating the interactions between categories of patterns, and finally character patterns and their recognition results are obtained. Highly promising experimental results have been obtained using the method on 100 images involving characters of different sizes and formats under uncontrolled lighting.

Journal ArticleDOI
TL;DR: The system described here is an attempt to provide completely automatic segmentation and labeling of normal volunteer brains and the absolute accuracy of the segmentations has not yet been rigorously established.
Abstract: The authors' main contribution is to build upon their earlier efforts by expanding the tissue model concept to cover a brain volume. Furthermore, processing time is reduced and accuracy is enhanced by the use of knowledge propagation, where information derived from one slice is made available to succeeding slices as additional knowledge. The system is organized as follows. Each MR slice is initially segmented by an unsupervised fuzzy c-means clustering algorithm. Next, an expert system uses model-based recognition techniques to locate a landmark, called a focus-of-attention tissue. Qualitative models of slices of brain tissue are defined and matched with their instances from imaged slices. If a significant deformation is detected in a tissue, the slice is classified to be abnormal and volume processing halts. Otherwise, the expert system locates the next focus-of-attention tissue, based on a hierarchy of expected tissues. This process is repeated until either a slice is classified as abnormal or all tissues of the slice are labeled. If the slice is determined to be abnormal, the entire volume is also considered abnormal and processing halts. Otherwise, the system will proceed to the next slice and repeat the classification steps until all slices that comprise the volume are processed. A rule-based expert system tool, CLIPS, is used to organize the system. Low level modules for image processing and high level modules for image analysis, all written in the C language, are called as actions from the right hand sides of the rules. The system described here is an attempt to provide completely automatic segmentation and labeling of normal volunteer brains. The absolute accuracy of the segmentations has not yet been rigorously established. The relative accuracy appears acceptable. Efforts have been made to segment an entire volume (rather than merging a set of segmented slices) using supervised pattern recognition techniques or unsupervised fuzzy clustering. However, there is sometimes enough data nonuniformity between slices to prevent satisfactory segmentation.

Journal ArticleDOI
TL;DR: A hierarchical morphological segmentation algorithm for image sequence coding that directly segments 3-D regions and concentrates on the coding residue, all the information about the 3-D regions that have not been properly segmented and therefore coded.
Abstract: This paper deals with a hierarchical morphological segmentation algorithm for image sequence coding. Mathematical morphology is very attractive for this purpose because it efficiently deals with geometrical features such as size, shape, contrast, or connectivity that can be considered as segmentation-oriented features. The algorithm follows a top-down procedure. It first takes into account the global information and produces a coarse segmentation, that is, with a small number of regions. Then, the segmentation quality is improved by introducing regions corresponding to more local information. The algorithm, considering sequences as being functions on a 3-D space, directly segments 3-D regions. A 3-D approach is used to get a segmentation that is stable in time and to directly solve the region correspondence problem. Each segmentation stage relies on four basic steps: simplification, marker extraction, decision, and quality estimation. The simplification removes information from the sequence to make it easier to segment. Morphological filters based on partial reconstruction are proven to be very efficient for this purpose, especially in the case of sequences. The marker extraction identifies the presence of homogeneous 3-D regions. It is based on constrained flat region labeling and morphological contrast extraction. The goal of the decision is to precisely locate the contours of regions detected by the marker extraction. This decision is performed by a modified watershed algorithm. Finally, the quality estimation concentrates on the coding residue, all the information about the 3-D regions that have not been properly segmented and therefore coded. The procedure allows the introduction of the texture and contour coding schemes within the segmentation algorithm. The coding residue is transmitted to the next segmentation stage to improve the segmentation and coding quality. 
Finally, segmentation and coding examples are presented to show the validity and interest of the coding approach.

Journal ArticleDOI
TL;DR: Basic algorithms to extract coherent amorphous regions (features or objects) from 2D and 3D scalar and vector fields and then track them in a series of consecutive time steps are described.
Abstract: We describe basic algorithms to extract coherent amorphous regions (features or objects) from 2D and 3D scalar and vector fields and then track them in a series of consecutive time steps. We use a combination of techniques from computer vision, image processing, computer graphics, and computational geometry and apply them to data sets from computational fluid dynamics. We demonstrate how these techniques can reduce visual clutter and provide the first step to quantifying observable phenomena. These results can be generalized to other disciplines with continuous time-dependent scalar (and vector) fields.

Journal ArticleDOI
TL;DR: The entropy method for image thresholding suggested by Kapur et al. has been modified and a more pertinent information measure of the image is obtained.

Journal ArticleDOI
TL;DR: A hierarchical segmentation algorithm for image coding based on mathematical morphology, which takes into account the most global information of the image and produces a coarse (with a reduced number of regions) segmentation.

Journal ArticleDOI
TL;DR: Two solutions are proposed to solve the problem of model parameter estimation from incomplete data: a Monte Carlo scheme and a scheme related to Besag's (1986) iterated conditional mode (ICM) method, both of which make use of Markov random-field modeling assumptions.
Abstract: An unsupervised stochastic model-based approach to image segmentation is described, and some of its properties investigated. In this approach, the problem of model parameter estimation is formulated as a problem of parameter estimation from incomplete data, and the expectation-maximization (EM) algorithm is used to determine a maximum-likelihood (ML) estimate. Previously, the use of the EM algorithm in this application has encountered difficulties since an analytical expression for the conditional expectations required in the EM procedure is generally unavailable, except for the simplest models. In this paper, two solutions are proposed to solve this problem: a Monte Carlo scheme and a scheme related to Besag's (1986) iterated conditional mode (ICM) method. Both schemes make use of Markov random-field modeling assumptions. Examples are provided to illustrate the implementation of the EM algorithm for several general classes of image models. Experimental results on both synthetic and real images are provided.

Journal ArticleDOI
TL;DR: A method to locate three vanishing points on an image, corresponding to three orthogonal directions of the scene, based on two cascaded Hough transforms is proposed, which is efficient, even in the case of real complex scenes.
Abstract: We propose a method to locate three vanishing points on an image, corresponding to three orthogonal directions of the scene. This method is based on two cascaded Hough transforms. We show that, even in the case of synthetic images of high quality, a naive approach may fail, essentially because of the errors due to the limitation of the image size. We take into account these errors as well as errors due to detection inaccuracy of the image segments, and provide a method that is efficient even in the case of real, complex scenes.

Journal ArticleDOI
TL;DR: A simple way to get better compression performance (in the MSE sense) via quadtree decomposition, using a near-optimal choice of the quadtree decomposition threshold and a bit allocation procedure based on equations derived from rate-distortion theory.
Abstract: Quadtree decomposition is a simple technique used to obtain an image representation at different resolution levels. This representation can be useful for a variety of image processing and image compression algorithms. This paper presents a simple way to get better compression performance (in the MSE sense) via quadtree decomposition, by using a near-optimal choice of the threshold for quadtree decomposition and a bit allocation procedure based on equations derived from rate-distortion theory. The rate-distortion performance of the improved algorithm is calculated for a Gaussian field and examined via simulation on benchmark gray-level images. In both cases, significant improvement in compression performance is shown.
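The threshold-driven quadtree split can be sketched in a few lines: a square block is kept as a leaf if it is homogeneous enough, otherwise it is split into four quadrants. Using block variance as the homogeneity measure is an assumption for illustration; the paper's threshold choice is derived from rate-distortion considerations.

```python
import numpy as np

def quadtree_blocks(img, threshold, r0=0, c0=0, size=None):
    """Sketch of threshold-driven quadtree decomposition.

    A square block becomes a leaf if its intensity variance is at most
    the threshold (or it is a single pixel); otherwise it is split into
    four quadrants.  Returns a list of (row, col, size) leaves.
    """
    if size is None:
        size = img.shape[0]          # assume a square, power-of-two image
    block = img[r0:r0 + size, c0:c0 + size]
    if size == 1 or block.var() <= threshold:
        return [(r0, c0, size)]
    h = size // 2
    leaves = []
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        leaves += quadtree_blocks(img, threshold, r0 + dr, c0 + dc, h)
    return leaves
```

Raising the threshold merges more blocks and lowers the bit budget at the cost of distortion, which is exactly the trade-off the bit allocation procedure balances.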

Proceedings ArticleDOI
06 Oct 1994
TL;DR: Aspects of target recognition, thresholding, and location are described, and the results of a series of simulation experiments are used to analyze the performance of subpixel target location techniques such as centroiding, Gaussian shape fitting, and ellipse fitting under varying conditions.
Abstract: Signalizing points of interest on the object to be measured is a reliable and common method of achieving optimum target location accuracy for many high precision measurement tasks. In photogrammetric metrology, images of the targets originate from photographs and CCD cameras. Regardless of whether the photographs are scanned or the digital images are captured directly, the overall accuracy of the technique is partly dependent on the precise and accurate location of the target images. However, it is often not clear which technique to choose for a particular task, or what are the significant sources of error. This paper describes aspects of target recognition, thresholding, and location. The results of a series of simulation experiments are used to analyze the performance of subpixel target location techniques such as centroiding, Gaussian shape fitting, and ellipse fitting under varying conditions.

Patent
29 Nov 1994
TL;DR: In this article, a method for the automated segmentation of medical images is presented, including generating image data from radiographic images of the breast. The method is applicable to breast mammograms, including the extraction of the skinline as well as correction for non-uniform exposure conditions, hand radiographs, and chest radiographs.
Abstract: A method for the automated segmentation of medical images (figures 5, 11, 12 and 20) is presented, including generating image data (figure 19) from radiographic images of the breast (figure 5). The method is applicable to breast mammograms, including the extraction of the skinline as well as correction for non-uniform exposure conditions, hand radiographs (figure 11), and chest radiographs (figure 12). Techniques for the segmentation include noise filtering (152 of figure 15), local gray value range determination (153), modified global histogram analysis (154), region growing and determination of object contour (155). The method also is applicable to skin detection and analysis of skin thickening in medical images, where image segmentation (164), local optimization of the external skinline (166), creation of a gradient image, identification of the internal skinline (167) and then skin thickness determination are carried out.

Journal ArticleDOI
TL;DR: In this paper, a 3D shape determination method operating on full-frame video data at near-video-frame rates (i.e., 15 Hz) is described. The method is subject to error in the presence of cast shadows and interreflection, but the redundancy of three light sources is exploited to detect these phenomena locally.
Abstract: The photometric-stereo method is one technique for three-dimensional shape determination that has been implemented in a variety of experimental settings and that has produced consistently good results. The idea is to use intensity values recorded from multiple images obtained from the same viewpoint but under different conditions of illumination. The resulting radiometric constraint makes it possible to obtain local estimates of both surface orientation and surface curvature without requiring either global smoothness assumptions or prior image segmentation. Photometric stereo is moved one step closer to practical possibility by a description of an experimental setting in which surface gradient estimation is achieved on full-frame video data at near-video-frame rates (i.e., 15 Hz). The implementation uses commercially available hardware. Reflectance is modeled empirically with measurements obtained from a calibration sphere. Estimation of the gradient (p, q) requires only simple table lookup. Curvature estimation additionally uses the reflectance map R(p, q). The required lookup table and reflectance maps are derived during calibration. Because reflectance is modeled empirically, no prior physical model of the reflectance characteristics of the objects to be analyzed is assumed. At the same time, if a good physical model is available, it can be retrofitted to the method for implementation purposes. Photometric stereo is subject to error in the presence of cast shadows and interreflection. No purely local technique can succeed because these phenomena are inherently nonlocal. Nevertheless, it is demonstrated that one can exploit the redundancy in three-light-source photometric stereo to detect locally, in most cases, the presence of cast shadows and interreflection. Detection is facilitated by the explicit inclusion of a local confidence estimate in the lookup table used for gradient estimation.
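The radiometric constraint underlying photometric stereo can be shown in a few lines under a Lambertian model: with three known light directions, the per-pixel intensities form a linear system whose solution is the albedo-scaled surface normal. Note that the paper deliberately replaces this analytic model with an empirically calibrated lookup table, so the light directions and the Lambertian assumption below are purely illustrative.

```python
import numpy as np

# Three known, non-coplanar light directions (rows), chosen arbitrarily.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])

# A flat surface patch facing the camera, with unit albedo.
true_n = np.array([0.0, 0.0, 1.0])

# Lambertian image formation: one intensity per light source.
I = L @ true_n

# Recover the albedo-scaled normal by solving the 3x3 system, then
# split it into albedo (magnitude) and unit normal (direction).
g = np.linalg.solve(L, I)
albedo = np.linalg.norm(g)
normal = g / albedo
```

In the empirical formulation, solving this system is replaced by a table lookup indexed by the measured intensity triple, which is what makes near-video-rate operation possible.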

Journal ArticleDOI
TL;DR: A simple and effective method for image contrast enhancement based on the multiscale edge representation of images is presented; it offers the flexibility to selectively enhance features of different sizes and the ability to control noise magnification.

Abstract: Experience suggests the existence of a connection between the contrast of a gray-scale image and the gradient magnitude of intensity edges in the neighborhood where the contrast is measured. This observation motivates the development of edge-based contrast enhancement techniques. We present a simple and effective method for image contrast enhancement based on the multiscale edge representation of images. The contrast of an image can be enhanced simply by stretching or upscaling the multiscale gradient maxima of the image. This method offers the flexibility to selectively enhance features of different sizes and the ability to control noise magnification. We present some experimental results from enhancing medical images and discuss the advantages of this wavelet approach over other edge-based techniques.

Journal ArticleDOI
TL;DR: In this paper, a new multivariate filtering operation called the alpha-trimmed vector median is proposed, which completely preserves stationary regions in image sequences, without motion compensation or motion detection.
Abstract: Most current algorithms developed for image sequence filtering require motion information in order to obtain good results both in the still and moving parts of an image sequence. In the present paper, filters which completely preserve stationary regions in image sequences are introduced. In moving regions, the 3D filters inherently reduce to spatial filters and perform well in these areas without any motion compensation or motion detection. A new multivariate filtering operation called the alpha-trimmed vector median is proposed. Guidelines for the determination of optimal 3D median-related structures for color and gray-level image sequence filtering are given. Algorithms based on vector median, extended vector median, alpha-trimmed vector median, and componentwise median operations are developed. Properties of the human visual system are taken into account in the design of filters. Noise attenuation and detail preservation capability of the filters is examined. In particular, the impulsive noise attenuation capability of the filters is analyzed theoretically. Simulation results based on real image sequences are given.
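One plausible reading of an alpha-trimmed vector median is sketched below: vectors are ranked by their aggregate distance to all others (the usual vector-median ordering), the alpha worst-ranked are trimmed, and the remainder averaged. The exact weighting and trimming rule in the paper may differ; this formulation is an illustrative assumption.

```python
import numpy as np

def alpha_trimmed_vector_median(vectors, alpha):
    """Sketch of an alpha-trimmed vector median.

    Rank the input vectors by their aggregate L2 distance to all other
    vectors, drop the alpha worst-ranked (the most outlying), and return
    the mean of the rest.  With alpha = 0 this reduces towards the
    sample mean; large alpha approaches the vector median.
    """
    v = np.asarray(vectors, dtype=float)
    # Pairwise distances, then the aggregate distance per vector.
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2).sum(axis=1)
    keep = np.argsort(d)[:len(v) - alpha]
    return v[keep].mean(axis=0)
```

Applied to the colour vectors in a spatio-temporal window, such an operation rejects impulsive outliers while still averaging over the consistent samples.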

Proceedings ArticleDOI
09 Sep 1994
TL;DR: An iterative algorithm is presented for simultaneous deformation of multiple curves and surfaces to an MRI, with inter-surface constraints and self-intersection avoidance, which automatically creates surfaces of MRI datasets with a common mapping to surface parametric space.
Abstract: An iterative algorithm is presented for simultaneous deformation of multiple curves and surfaces to an MRI, with inter-surface constraints and self-intersection avoidance. The resulting robust segmentation, combined with local curvature matching, automatically creates surfaces of MRI datasets with a common mapping to surface parametric space.

Journal ArticleDOI
TL;DR: A moment-based texture segmentation algorithm is presented that has successfully segmented binary images containing textures with iso-second-order statistics as well as a number of gray-level texture images.

Journal ArticleDOI
TL;DR: Complex moments of the Gabor power spectrum yield estimates of the N-folded symmetry of the local image content at different frequency scales; that is, they allow one to detect linear, rectangular, hexagonal/triangular, and so on, structures with very fine to very coarse resolutions, as discussed by the authors.
Abstract: Complex moments of the Gabor power spectrum yield estimates of the N-folded symmetry of the local image content at different frequency scales; that is, they allow one to detect linear, rectangular, hexagonal/triangular, and so on, structures with very fine to very coarse resolutions. Results from experiments on the unsupervised segmentation of real textures indicate their importance for image processing applications. Real geometric moments computed in Gabor space also provide very powerful texture features, but lack the clear geometrical interpretation of complex moments.