
Showing papers on "Image segmentation published in 1995"


Journal ArticleDOI
20 Jun 1995
TL;DR: A novel scheme for the detection of object boundaries based on active contours evolving in time according to intrinsic geometric measures of the image, allowing stable boundary detection even when the image gradients suffer from large variations, including gaps.
Abstract: A novel scheme for the detection of object boundaries is presented. The technique is based on active contours deforming according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics or minimal distance curves. The minimal distance curve lies in a Riemannian space whose metric is defined by the image content. This geodesic approach for object segmentation makes it possible to connect classical "snakes" based on energy minimization and geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved, as shown by a number of examples. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well.
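
For orientation, the geodesic formulation can be stated compactly as follows; the notation and the particular edge-stopping function g are our own illustrative choices, not copied from the paper:

$$ \min_{C}\;\int_{0}^{L(C)} g\big(|\nabla I(C(s))|\big)\,ds, \qquad g(r)=\frac{1}{1+r^{2}}, $$

with the gradient-descent curve evolution

$$ \frac{\partial C}{\partial t} \;=\; g(I)\,\kappa\,\mathcal{N} \;-\; \big(\nabla g\cdot\mathcal{N}\big)\,\mathcal{N}, $$

where $\kappa$ is the curvature and $\mathcal{N}$ the inward unit normal. Implementing this flow with a level-set representation is what allows the evolving contours to split and merge.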

5,566 citations


Proceedings ArticleDOI
15 Sep 1995
TL;DR: Intelligent Scissors allows objects within digital images to be extracted quickly and accurately using simple gesture motions with a mouse, and allows creation of convincing compositions from existing images while dramatically increasing the speed and precision with which objects can be extracted.
Abstract: We present a new, interactive tool called Intelligent Scissors which we use for image segmentation and composition. Fully automated segmentation is an unsolved problem, while manual tracing is inaccurate and unacceptably laborious. However, Intelligent Scissors allow objects within digital images to be extracted quickly and accurately using simple gesture motions with a mouse. When the gestured mouse position comes in proximity to an object edge, a live-wire boundary “snaps” to, and wraps around the object of interest. Live-wire boundary detection formulates discrete dynamic programming (DP) as a two-dimensional graph searching problem. DP provides mathematically optimal boundaries while greatly reducing sensitivity to local noise or other intervening structures. Robustness is further enhanced with on-the-fly training which causes the boundary to adhere to the specific type of edge currently being followed, rather than simply the strongest edge in the neighborhood. Boundary cooling automatically freezes unchanging segments and automates input of additional seed points. Cooling also allows the user to be much more free with the gesture path, thereby increasing the efficiency and finesse with which boundaries can be extracted. Extracted objects can be scaled, rotated, and composited using live-wire masks and spatial frequency equivalencing. Frequency equivalencing is performed by applying a Butterworth filter which matches the lowest frequency spectra to all other image components. Intelligent Scissors allow creation of convincing compositions from existing images while dramatically increasing the speed and precision with which objects can be extracted.
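
As a rough sketch of the live-wire idea, the code below runs a Dijkstra-style optimal path search over the pixel grid; the local cost (inverted gradient magnitude) and the 8-neighbourhood weighting are illustrative stand-ins for the paper's trained, multi-feature cost function.

import heapq
import numpy as np

def live_wire_path(cost, seed, target):
    """Optimal boundary segment from seed to target on a 2D cost map."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[seed] = 0.0
    pq = [(0.0, seed)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == target:
            break
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr, nc] * np.hypot(dr, dc)
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
    # Walk back from the cursor position to the seed to recover the boundary.
    path, node = [target], target
    while node != seed:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Example: cost = inverted, normalized gradient magnitude of a grey image.
img = np.random.rand(64, 64)
gy, gx = np.gradient(img)
grad = np.hypot(gx, gy)
cost = 1.0 - grad / (grad.max() + 1e-9)
boundary = live_wire_path(cost, (5, 5), (60, 60))

In an interactive tool the search would be rerun (or cached per seed) as the cursor moves, and on-the-fly training would reweight the cost terms; both are omitted here.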

922 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed rapid scene analysis algorithms are fast and effective in detecting abrupt scene changes, gradual transitions including fade-ins and fade-outs, flashlight scenes and in deriving intrashot variations.
Abstract: Several rapid scene analysis algorithms for detecting scene changes and flashlight scenes directly on compressed video are proposed. These algorithms operate on the DC sequence which can be readily extracted from video compressed using Motion JPEG or MPEG without full-frame decompression. The DC images occupy only a small fraction of the original data size while retaining most of the essential "global" information. Operating on these images offers a significant computation saving. Experimental results show that the proposed algorithms are fast and effective in detecting abrupt scene changes, gradual transitions including fade-ins and fade-outs, flashlight scenes and in deriving intrashot variations.
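
A toy version of the abrupt-cut test on DC images might look like the sketch below; the sliding-window ratio criterion is a simplified stand-in for the paper's detection rules, and gradual transitions and flashlight scenes need the additional tests described in the abstract.

import numpy as np

def detect_cuts(dc_images, ratio=3.0):
    """Flag frame indices where the DC-image difference sharply dominates
    its neighbours (a crude abrupt scene-change test)."""
    diffs = [np.abs(a.astype(float) - b.astype(float)).sum()
             for a, b in zip(dc_images, dc_images[1:])]
    cuts = []
    for i, d in enumerate(diffs):
        window = diffs[max(0, i - 2):i] + diffs[i + 1:i + 3]
        if window and d > ratio * max(window):
            cuts.append(i + 1)  # the cut occurs at frame i+1
    return cuts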

893 citations


Journal ArticleDOI
TL;DR: A precise mathematical formulation of the model for evoked potential recordings is presented, where the microstates are represented as normalized vectors constituted by scalp electric potentials due to the underlying generators.
Abstract: A brain microstate is defined as a functional/physiological state of the brain during which specific neural computations are performed. It is characterized uniquely by a fixed spatial distribution of active neuronal generators with time varying intensity. Brain electrical activity is modeled as being composed of a time sequence of nonoverlapping microstates with variable duration. A precise mathematical formulation of the model for evoked potential recordings is presented, where the microstates are represented as normalized vectors constituted by scalp electric potentials due to the underlying generators. An algorithm is developed for estimating the microstates, based on a modified version of the classical k-means clustering method, in which cluster orientations are estimated. Consequently, each instantaneous multichannel evoked potential measurement is classified as belonging to some microstate, thus producing a natural segmentation of brain activity. Use is made of statistical image segmentation techniques for obtaining smooth continuous segments. Time varying intensities are estimated by projecting the measurements onto their corresponding microstates. A goodness of fit statistic for the model is presented. Finally, a method is introduced for estimating the number of microstates, based on nonparametric data-driven statistical resampling techniques.
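
A bare-bones version of a polarity- and amplitude-invariant "modified k-means" is sketched below; the smoothing of the segmentation and the fit statistics described in the abstract are omitted, and the update rule (leading eigenvector per cluster) is our reading of "cluster orientations are estimated".

import numpy as np

def microstate_kmeans(V, k, n_iter=50, seed=0):
    """V: (T, C) array, one C-channel potential map per time sample.
    Each microstate is a unit-norm map; samples are assigned by largest
    squared projection and each map is re-estimated as the leading
    eigenvector (first right singular vector) of its members."""
    rng = np.random.default_rng(seed)
    T, C = V.shape
    maps = V[rng.choice(T, k, replace=False)].astype(float)
    maps /= np.linalg.norm(maps, axis=1, keepdims=True)
    labels = np.zeros(T, dtype=int)
    for _ in range(n_iter):
        labels = np.argmax((V @ maps.T) ** 2, axis=1)   # orientation-only assignment
        for j in range(k):
            Vj = V[labels == j]
            if len(Vj):
                _, _, vt = np.linalg.svd(Vj, full_matrices=False)
                maps[j] = vt[0]
    return maps, labels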

770 citations


Journal ArticleDOI
TL;DR: This paper presents a methodology for evaluation of low-level image analysis methods, using binarization (two-level thresholding) as an example, and defines the performance of the character recognition module as the objective measure.
Abstract: This paper presents a methodology for evaluation of low-level image analysis methods, using binarization (two-level thresholding) as an example. Binarization of scanned gray scale images is the first step in most document image analysis systems. Selection of an appropriate binarization method for an input image domain is a difficult problem. Typically, a human expert evaluates the binarized images according to his/her visual criteria. However, to conduct an objective evaluation, one needs to investigate how well the subsequent image analysis steps will perform on the binarized image. We call this approach goal-directed evaluation, and it can be used to evaluate other low-level image processing methods as well. Our evaluation of binarization methods is in the context of digit recognition, so we define the performance of the character recognition module as the objective measure. Eleven different locally adaptive binarization methods were evaluated, and Niblack's method gave the best performance.
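
The evaluation loop itself is simple; the sketch below assumes hypothetical binarize/recognize callables, since the paper's actual OCR module and the eleven thresholding methods are not reproduced here.

def goal_directed_evaluation(images, labels, binarizers, recognizer):
    """Rank binarization methods by downstream digit-recognition accuracy
    rather than by visual inspection."""
    scores = {}
    for name, binarize in binarizers.items():
        correct = sum(recognizer(binarize(img)) == lab
                      for img, lab in zip(images, labels))
        scores[name] = correct / len(images)
    return sorted(scores.items(), key=lambda kv: -kv[1])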

700 citations


BookDOI
01 Mar 1995

671 citations


Journal ArticleDOI
TL;DR: A modified box-counting approach is proposed to estimate the FD, in combination with feature smoothing in order to reduce spurious regions and to segment a scene into the desired number of classes, an unsupervised K-means like clustering approach is used.
Abstract: This paper deals with the problem of recognizing and segmenting textures in images. For this purpose the authors employ a technique based on the fractal dimension (FD) and the multi-fractal concept. Six FD features are based on the original image, the above average/high gray level image, the below average/low gray level image, the horizontally smoothed image, the vertically smoothed image, and the multi-fractal dimension of order two. A modified box-counting approach is proposed to estimate the FD, in combination with feature smoothing in order to reduce spurious regions. To segment a scene into the desired number of classes, an unsupervised K-means like clustering approach is used. Mosaics of various natural textures from the Brodatz album as well as microphotographs of thin sections of natural rocks are considered, and the segmentation results show the efficiency of the technique. Supervised techniques such as minimum-distance and k-nearest neighbor classification are also considered. The results are compared with other techniques.
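
For reference, a generic differential box-counting estimator of the fractal dimension is sketched below; it is in the spirit of the paper's approach but does not reproduce the specific modification or the six-feature construction.

import numpy as np

def box_counting_fd(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a grey-level image as the slope
    of log N_r versus log(1/r) over a range of box sizes."""
    M = min(img.shape)
    G = float(img.max() - img.min()) + 1e-9
    log_n, log_inv_r = [], []
    for s in (s for s in sizes if s <= M):
        h = s * G / M                      # box height at this scale
        N = 0
        for i in range(0, img.shape[0] - s + 1, s):
            for j in range(0, img.shape[1] - s + 1, s):
                block = img[i:i + s, j:j + s]
                N += int((block.max() - block.min()) / h) + 1
        log_n.append(np.log(N))
        log_inv_r.append(np.log(M / s))
    return np.polyfit(log_inv_r, log_n, 1)[0]

Computing such an estimate over a sliding window (plus the smoothed and clipped variants of the image) gives the per-pixel feature vectors that are then clustered with a K-means-like procedure.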

650 citations


Journal ArticleDOI
TL;DR: Two contour-based methods which use region boundaries and other strong edges as matching primitives are presented, which have outperformed manual registration in terms of root mean square error at the control points.
Abstract: Image registration is concerned with the establishment of correspondence between images of the same scene. One challenging problem in this area is the registration of multispectral/multisensor images. In general, such images have different gray level characteristics, and simple techniques such as those based on area correlations cannot be applied directly. On the other hand, contours representing region boundaries are preserved in most cases. The authors present two contour-based methods which use region boundaries and other strong edges as matching primitives. The first contour matching algorithm is based on the chain-code correlation and other shape similarity criteria such as invariant moments. Closed contours and the salient segments along the open contours are matched separately. This method works well for image pairs in which the contour information is well preserved, such as the optical images from Landsat and Spot satellites. For the registration of the optical images with synthetic aperture radar (SAR) images, the authors propose an elastic contour matching scheme based on the active contour model. Using the contours from the optical image as the initial condition, accurate contour locations in the SAR image are obtained by applying the active contour model. Both contour matching methods are automatic and computationally quite efficient. Experimental results with various kinds of image data have verified the robustness of the algorithms, which have outperformed manual registration in terms of root mean square error at the control points.

539 citations


Journal ArticleDOI
TL;DR: The information provided by the user's selected points is explored and an optimal contour detection method, based on dynamic programming (DP), is applied; it allows a segmentation of the image and applies to a wide variety of shapes.
Abstract: The problem of segmenting an image into separate regions and tracking them over time is one of the most significant problems in vision. Terzopoulos et al. (1987) proposed an approach to detect the contour regions of complex shapes, assuming a user selected initial contour not very far from the desired solution. We propose to further explore the information provided by the user's selected points and apply an optimal method to detect contours which allows a segmentation of the image. The method is based on dynamic programming (DP), and applies to a wide variety of shapes. It is exact and not iterative. We also consider a multiscale approach capable of speeding up the algorithm by a factor of 20, although at the expense of losing the guaranteed optimality characteristic. The problem of tracking and matching these contours is addressed. For tracking, the final contour obtained at one frame is sampled and used as initial points for the next frame. Then, the same DP process is applied. For matching, a novel strategy is proposed where the solution is a smooth displacement field in which unmatched regions are allowed while cross vectors are not. The algorithm is again based on DP and the optimal solution is guaranteed. We have demonstrated the algorithms on natural objects in a large spectrum of applications, including interactive segmentation and automatic tracking of the regions of interest in medical images.

512 citations


Journal ArticleDOI
TL;DR: An unsupervised segmentation algorithm which uses Markov random field models for color textures which characterize a texture in terms of spatial interaction within each color plane and interaction between different color planes is presented.
Abstract: We present an unsupervised segmentation algorithm which uses Markov random field models for color textures. These models characterize a texture in terms of spatial interaction within each color plane and interaction between different color planes. The models are used by a segmentation algorithm based on agglomerative hierarchical clustering. At the heart of agglomerative clustering is a stepwise optimal merging process that at each iteration maximizes a global performance functional based on the conditional pseudolikelihood of the image. A test for stopping the clustering is applied based on rapid changes in the pseudolikelihood. We provide experimental results that illustrate the advantages of using color texture models and that demonstrate the performance of the segmentation algorithm on color images of natural scenes. Most of the processing during segmentation is local, making the algorithm amenable to high performance parallel implementation.

485 citations


Journal ArticleDOI
TL;DR: It is argued that Gabor filter outputs can be modeled as Rician random variables (often approximated well as Gaussian rv's), and a decision-theoretic algorithm for selecting optimal filter parameters is developed.
Abstract: Texture segmentation involves subdividing an image into differently textured regions. Many texture segmentation schemes are based on a filter-bank model, where the filters, called Gabor filters, are derived from Gabor elementary functions. The goal is to transform texture differences into detectable filter-output discontinuities at texture boundaries. By locating these discontinuities, one can segment the image into differently textured regions. Distinct discontinuities occur, however, only if the Gabor filter parameters are suitably chosen. Some previous analysis has shown how to design filters for discriminating simple textures. Designing filters for more general natural textures, though, has largely been done ad hoc. We have devised a more rigorously based method for designing Gabor filters. It assumes that an image contains two different textures and that prototype samples of the textures are given a priori. We argue that Gabor filter outputs can be modeled as Rician random variables (often approximated well as Gaussian rv's) and develop a decision-theoretic algorithm for selecting optimal filter parameters. To improve segmentations for difficult texture pairs, we also propose a multiple-filter segmentation scheme, motivated by the Rician model. Experimental results indicate that our method is superior to previous methods in providing useful Gabor filters for a wide range of texture pairs.
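
As a minimal illustration of the filter-bank model (not the paper's parameter-selection algorithm), the sketch below builds a real Gabor kernel and produces a smoothed magnitude response whose discontinuities mark texture boundaries; the frequency, orientation and smoothing values are placeholders, whereas the paper chooses them decision-theoretically from texture prototypes.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma, size=31):
    """Gaussian-modulated sinusoid with radial frequency `freq`
    (cycles/pixel) and orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

def gabor_feature(img, freq=0.1, theta=0.0, sigma=4.0):
    """Smoothed magnitude of the filter output; thresholding this map (or
    clustering several of them) separates the two textures."""
    response = convolve2d(img, gabor_kernel(freq, theta, sigma), mode='same')
    return convolve2d(np.abs(response), np.ones((9, 9)) / 81.0, mode='same')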

Proceedings ArticleDOI
20 Jun 1995
TL;DR: A topologically adaptable snakes model for image segmentation and object representation, embedded in the framework of domain subdivision using simplicial decomposition, which extends the geometric and topological adaptability of snakes while retaining all of the features of traditional snakes and overcoming many of their limitations.
Abstract: The paper presents a topologically adaptable snakes model for image segmentation and object representation. The model is embedded in the framework of domain subdivision using simplicial decomposition. This framework extends the geometric and topological adaptability of snakes while retaining all of the features of traditional snakes, such as user interaction, and overcoming many of the limitations of traditional snakes. By superposing a simplicial grid over the image domain and using this grid to iteratively reparameterize the deforming snakes model, the model is able to flow into complex shapes, even shapes with significant protrusions or branches, and to dynamically change topology as necessitated by the data. Snakes can be created and can split into multiple parts or seamlessly merge into other snakes. The model can also be easily converted to and from the traditional parametric snakes model representation. We apply a 2D model to various synthetic and real images in order to segment objects with complicated shapes and topologies.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: The algorithm is based on a nonlinear combination of linear filters and searches for elongated, symmetric line structures, while suppressing the response to edges, leading to an efficient, parameter-free implementation.
Abstract: Presents a novel, parameter-free technique for the segmentation and local description of line structures on multiple scales, both in 2D and in 3D. The algorithm is based on a nonlinear combination of linear filters and searches for elongated, symmetric line structures, while suppressing the response to edges. The filtering process creates one sharp maximum across the line-feature profile and across the scale-space. The multi-scale response reflects local contrast and is independent of the local width. The filter is steerable in both the orientation and scale domains, leading to an efficient, parameter-free implementation. A local description is obtained that describes the contrast, the position of the center-line, the width, the polarity, and the orientation of the line. Examples of images from different application domains demonstrate the generic nature of the line segmentation scheme. The 3D filtering is applied to magnetic resonance volume data in order to segment cerebral blood vessels.

Journal ArticleDOI
TL;DR: A fully automatic multimodality image registration algorithm that requires no user interaction and can be applied to a wide range of registration problems is presented.
Abstract: Objective: A fully automatic multimodality image registration algorithm is presented. The method is primarily designed for 3D registration of MR and PET images of the brain. However, it has also been successfully applied to CT-PET, MR-CT, and MR-SPECT registrations. Materials and Methods: The head contour ...

Journal ArticleDOI
TL;DR: The sinc-based interpolation technique enabled serially acquired MR images to be positionally matched to subvoxel accuracy so that small changes in the brain could be distinguished from effects due to misregistration.
Abstract: Objective: Methods for automatically registering and reslicing MR images using an interpolation function that matches the structure of the image data are described. Materials and methods: Phantom and human brain images were matched by rigid body rotations and translations in two and three dimensions using a least-squares optimization procedure. Subvoxel image shifts were produced with linear or sinc interpolation. Results: The use of sinc interpolation ensured that the repositioned images were faithful to the original data and enabled quantitative intensity comparisons to be made. In humans, image segmentation was vital to avoid extraneous soft tissue changes producing systematic errors in registration. Conclusions: The sinc-based interpolation technique enabled serially acquired MR images to be positionally matched to subvoxel accuracy so that small changes in the brain could be distinguished from effects due to misregistration.
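
The subvoxel shifts can be illustrated with Fourier-domain (periodic sinc) interpolation along one axis; this is a generic sketch of sinc interpolation, not necessarily the windowing scheme used in the paper.

import numpy as np

def subvoxel_shift(profile, shift):
    """Shift a 1D intensity profile by a non-integer number of voxels.
    Applying this along each axis in turn gives subvoxel translations."""
    n = profile.size
    freqs = np.fft.fftfreq(n)
    phase_ramp = np.exp(-2j * np.pi * freqs * shift)
    return np.real(np.fft.ifft(np.fft.fft(profile) * phase_ramp))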

Journal ArticleDOI
01 Dec 1995
TL;DR: In this paper, a closed loop image segmentation system is presented which incorporates a genetic algorithm to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions such as time of day, time of year, clouds, etc.
Abstract: We present the first closed loop image segmentation system which incorporates a genetic algorithm to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions such as time of day, time of year, clouds, etc. The segmentation problem is formulated as an optimization problem and the genetic algorithm efficiently searches the hyperspace of segmentation parameter combinations to determine the parameter set which maximizes the segmentation quality criteria. The goals of our adaptive image segmentation system are to provide continuous adaptation to normal environmental variations, to exhibit learning capabilities, and to provide robust performance when interacting with a dynamic environment. We present experimental results which demonstrate learning and the ability to adapt the segmentation performance in outdoor color imagery.
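
A bare-bones genetic search over segmentation parameters is sketched below; `quality` stands for whatever segmentation-quality criteria are being maximized, and the selection, crossover and mutation operators are generic choices rather than the ones used in the paper.

import numpy as np

def ga_tune(quality, bounds, pop=20, gens=30, seed=1):
    """Return the parameter vector (within `bounds`, a list of (low, high)
    pairs) that maximizes the user-supplied quality function."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        fit = np.array([quality(p) for p in P])
        parents = P[np.argsort(-fit)[:pop // 2]]          # truncation selection
        a = parents[rng.integers(0, len(parents), pop)]
        b = parents[rng.integers(0, len(parents), pop)]
        P = np.where(rng.random(P.shape) < 0.5, a, b)     # uniform crossover
        P += rng.normal(0.0, 0.05 * (hi - lo), P.shape)   # Gaussian mutation
        P = np.clip(P, lo, hi)
    return P[np.argmax([quality(p) for p in P])]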

Journal ArticleDOI
TL;DR: This work presents a method for segmentation of brain tissue from magnetic resonance images that is a combination of three existing techniques from the computer vision literature: expectation/maximization segmentation, binary mathematical morphology, and active contour models.

Journal ArticleDOI
TL;DR: The algorithm was notably successful in the detection of minimal cancers manifested by masses, and an extensive study of the effects of the algorithm's parameters on its sensitivity and specificity was performed in order to optimize the method for a clinical, observer performance study.
Abstract: A technique is proposed for the detection of tumors in digital mammography. Detection is performed in two steps: segmentation and classification. In segmentation, regions of interest are first extracted from the images by adaptive thresholding. A further reliable segmentation is achieved by a modified Markov random field (MRF) model-based method. In classification, the MRF segmented regions are classified into suspicious and normal by a fuzzy binary decision tree based on a series of radiographic, density-related features. A set of normal (50) and abnormal (45) screen/film mammograms were tested. The latter contained 48 biopsy proven, malignant masses of various types and subtlety. The detection accuracy of the algorithm was evaluated by means of a free response receiver operating characteristic curve which shows the relationship between the detection of true positive masses and the number of false positive alarms per image. The results indicated that a 90% sensitivity can be achieved in the detection of different types of masses at the expense of two falsely detected signals per image. The algorithm was notably successful in the detection of minimal cancers manifested by masses ≤10 mm in size. For the 16 such cases in the authors' dataset, a 94% sensitivity was observed with 1.5 false alarms per image. An extensive study of the effects of the algorithm's parameters on its sensitivity and specificity was also performed in order to optimize the method for a clinical, observer performance study.

Journal ArticleDOI
TL;DR: The feature-based optic flow field is segmented into clusters with affine internal motion which are tracked over time; the system runs in real-time and is accurate and reliable.
Abstract: This paper describes a system for detecting and tracking moving objects in a moving world. The feature-based optic flow field is segmented into clusters with affine internal motion which are tracked over time. The system runs in real-time, and is accurate and reliable.
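
The affine-cluster idea reduces, for each candidate group of features, to a small least-squares fit; the sketch below shows that fit and nothing of the paper's clustering or real-time tracking machinery.

import numpy as np

def fit_affine_motion(pts, flows):
    """Fit (u, v) = A [x, y, 1]^T to a cluster of feature points (N x 2)
    and their flow vectors (N x 2); large residuals suggest the features
    do not share a single affine motion."""
    X = np.c_[pts, np.ones(len(pts))]
    A, *_ = np.linalg.lstsq(X, flows, rcond=None)
    residuals = np.linalg.norm(X @ A - flows, axis=1)
    return A.T, residuals       # A.T is the 2x3 affine motion matrix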

Journal ArticleDOI
TL;DR: The authors propose a two-stage segmentation strategy which involves: 1) extracting an approximate region containing the cell and part of the background near the cell, and 2) segmenting the cell from the background within this region.
Abstract: A major requirement of an automated, real-time, computer vision-based cell tracking system is an efficient method for segmenting cell images. The usual segmentation algorithms proposed in the literature exhibit weak performance on live unstained cell images, which can be characterized as being of low contrast, intensity-variant, and unevenly illuminated. The authors propose a two-stage segmentation strategy which involves: 1) extracting an approximate region containing the cell and part of the background near the cell, and 2) segmenting the cell from the background within this region. The approach effectively reduces the influence of peripheral background intensities and texture on the extraction of a cell region. The experimental results show that this approach for segmenting cell images is both fast and robust.

Journal ArticleDOI
TL;DR: This work has developed a method for segmentation of intravascular ultrasound images that identifies the internal and external elastic laminae and the plaque-lumen interface and shows substantial promise for the quantitative analysis of in vivo intravascular ultrasound image data.
Abstract: Intravascular ultrasound imaging of coronary arteries provides important information about coronary lumen, wall, and plaque characteristics. Quantitative studies of coronary atherosclerosis using intravascular ultrasound and manual identification of wall and plaque borders are limited by the need for observers with substantial experience and the tedious nature of manual border detection. We have developed a method for segmentation of intravascular ultrasound images that identifies the internal and external elastic laminae and the plaque-lumen interface. The border detection algorithm was evaluated in a set of 38 intravascular ultrasound images acquired from fresh cadaveric hearts using a 30 MHz imaging catheter. To assess the performance of our border detection method we compared five quantitative measures of arterial anatomy derived from computer-detected borders with measures derived from borders manually defined by expert observers. Computer-detected and observer-defined lumen areas correlated very well (r=0.96, y=1.02x+0.52), as did plaque areas (r=0.95, y=1.07x-0.48), and percent area stenosis (r=0.93, y=0.99x-1.34). Computer-derived segmental plaque thickness measurements were highly accurate. Our knowledge-based intravascular ultrasound segmentation method shows substantial promise for the quantitative analysis of in vivo intravascular ultrasound image data.

Journal ArticleDOI
TL;DR: A system is presented that automatically reads the Italian license number of a car passing through a tollgate, using a CCTV camera and a frame grabber card to acquire a rear-view image of the vehicle.
Abstract: A system for the recognition of car license plates is presented. The aim of the system is to read automatically the Italian license number of a car passing through a tollgate. A CCTV camera and a frame grabber card are used to acquire a rear-view image of the vehicle. The recognition process consists of three main phases. First, a segmentation phase locates the license plate within the image. Then, a procedure based upon feature projection estimates some image parameters needed to normalize the license plate characters. Finally, the character recognizer extracts some feature points and uses template matching operators to get a robust solution under multiple acquisition conditions. A test has been done on more than three thousand real images acquired under different weather and illumination conditions, thus obtaining a recognition rate close to 91%.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: It is shown that existing techniques in early vision such as snake/balloon models, region growing, and Bayes/MDL are addressing different aspects of the same problem and they can be unified within a common statistical framework which combines their advantages.
Abstract: We present a novel statistical and variational approach to image segmentation based on a new algorithm named region competition. This algorithm is derived by minimizing a generalized Bayes/MDL (Minimum Description Length) criterion using the variational principle. We show that existing techniques in early vision such as snake/balloon models, region growing, and Bayes/MDL are addressing different aspects of the same problem and they can be unified within a common statistical framework which combines their advantages. We analyze how to optimize the precision of the resulting boundary location by studying the statistical properties of the region competition algorithm and discuss what good initial conditions for the algorithm are. Our method is generalized to color and texture segmentation and is demonstrated on grey level images, color images and texture images.
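
Schematically, and in our own notation rather than the paper's exact formulation, the criterion being minimized has the form

$$ E\big(\{R_i\},\{\alpha_i\}\big)\;=\;\sum_i\left[\frac{\mu}{2}\oint_{\partial R_i}\!ds\;-\;\iint_{R_i}\log P\big(I(x,y)\mid\alpha_i\big)\,dx\,dy\;+\;\lambda\right], $$

where the $R_i$ are the regions, the $\alpha_i$ parameterize each region's intensity (or texture) model, the first term penalizes boundary length, the second rewards statistical fit, and $\lambda$ penalizes the number of regions. Gradient descent on $E$ makes adjacent regions "compete" for boundary pixels through their log-likelihood ratio, which is where the algorithm gets its name.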

Journal ArticleDOI
TL;DR: A complete, fast and practical isolated object recognition system has been developed which is very robust with respect to scale, position and orientation changes of the objects as well as noise and local deformations of shape (due to perspective projection, segmentation errors and non-rigid material used in some objects).
Abstract: A complete, fast and practical isolated object recognition system has been developed which is very robust with respect to scale, position and orientation changes of the objects as well as noise and local deformations of shape (due to perspective projection, segmentation errors and non-rigid material used in some objects). The system has been tested on a wide variety of three-dimensional objects with different shapes and material and surface properties. A light-box setup is used to obtain silhouette images which are segmented to obtain the physical boundaries of the objects which are classified as either convex or concave. Convex curves are recognized using their four high-scale curvature extrema points. Curvature scale space (CSS) representations are computed for concave curves. The CSS representation is a multi-scale organization of the natural, invariant features of a curve (curvature zero-crossings or extrema) and useful for very reliable recognition of the correct model since it places no constraints on the shape of objects. A three-stage, coarse-to-fine matching algorithm prunes the search space in stage one by applying the CSS aspect ratio test. The maxima of contours in CSS representations of the surviving models are used for fast CSS matching in stage two. Finally, stage three verifies the best match and resolves any ambiguities by determining the distance between the image and model curves. Transformation parameter optimization is then used to find the best fit of the input object to the correct model.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of the distortion model that best transform these edges to segments.
Abstract: Most algorithms in 3D computer vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle lenses, generate a lot of nonlinear distortion which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image so that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images, compared with distortion calibration obtained by a full camera calibration method which uses a calibration grid.

Book ChapterDOI
20 Jun 1995
TL;DR: In this paper, a mathematical construct of object shapes, called the shape interaction matrix, is introduced, which is invariant to both the object motions and the selection of coordinate systems.
Abstract: The structure from motion problem has been extensively studied in the field of computer vision. Yet, the bulk of the existing work assumes that the scene contains only a single moving object. The more realistic case where an unknown number of objects move in the scene has received little attention, especially for its theoretical treatment. We present a new method for separating and recovering the motion and shape of multiple independently moving objects in a sequence of images. The method does not require prior knowledge of the number of objects, nor is it dependent on any grouping of features into an object at the image level. For this purpose, we introduce a mathematical construct of object shapes, called the shape interaction matrix, which is invariant to both the object motions and the selection of coordinate systems. This invariant structure is computable solely from the observed trajectories of image features without grouping them into individual objects. Once the structure is computed, it allows for segmenting features into objects by the process of transforming it into a canonical form, as well as recovering the shape and motion of each object.
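
Computationally the construct is simple; the sketch below forms the matrix from the rank-r SVD of the trajectory matrix, leaving the choice of rank and the block-structuring (canonical-form) step to the reader, since those are where the paper's real work lies.

import numpy as np

def shape_interaction_matrix(W, rank):
    """W: (2F, P) matrix of P feature trajectories over F frames.
    Returns Q = V V^T from the rank-`rank` SVD of W; ideally Q[i, j] is
    zero when features i and j belong to independently moving objects,
    so permuting Q into block-diagonal form groups the features."""
    _, _, vt = np.linalg.svd(W, full_matrices=False)
    V = vt[:rank].T
    return V @ V.T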

Journal ArticleDOI
TL;DR: This paper discusses the links between Mumford and Shah’s variational problem for (signal and) image segmentation, based on an energy functional of a continuous grey-level function, and the numerical algorithms proposed to solve it, which are based on a discrete functional.
Abstract: In this paper we discuss the links between Mumford and Shah’s variational problem for (signal and) image segmentation, based on an energy functional of a continuous grey-level function, and the numerical algorithms proposed to solve it. These numerical approaches are based on a discrete functional. We recall that, in one dimension, this discrete functional is asymptotically equivalent to the continuous functional. This can be summarized in a $\Gamma$-convergence result. We show that the same result holds in dimension two, provided that the continuous energy is adapted to the anisotropy of the discrete approaches. We display a few experimental results in dimensions one and two.
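
For concreteness, the continuous functional in question is (up to the choice of weights) the Mumford-Shah energy

$$ E(u,K)\;=\;\int_{\Omega\setminus K}\lvert\nabla u\rvert^{2}\,dx\;+\;\beta\int_{\Omega}(u-g)^{2}\,dx\;+\;\alpha\,\mathcal{H}^{1}(K), $$

where $g$ is the observed image, $u$ a piecewise-smooth approximation, $K$ the discontinuity (edge) set and $\mathcal{H}^{1}(K)$ its length. The paper's question is when discrete approximations of this energy $\Gamma$-converge to it; in two dimensions the answer requires adapting the continuous energy to the anisotropy of the discrete schemes.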

Proceedings ArticleDOI
23 Oct 1995
TL;DR: This approach identifies the regions within images that contain colors from predetermined color sets; by searching over a large number of color sets, a color index for the database is created, which allows very fast indexing of the image collection by the color contents of the images.
Abstract: We propose a method for automatic color extraction and indexing to support color queries of image and video databases. This approach identifies the regions within images that contain colors from predetermined color sets. By searching over a large number of color sets, a color index for the database is created in a fashion similar to that for file inversion. This allows very fast indexing of the image collection by the color contents of the images. Furthermore, information about the identified regions, such as the color set, size, and location, enables a rich variety of queries that specify both color content and spatial relationships of regions. We present the single color extraction and indexing method and contrast it to other color approaches. We examine single and multiple color extraction and image query on a database of 3000 color images.
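
A much-simplified sketch of the extraction stage is given below: back-project a binary color set onto a color-quantized image and keep the connected groups of matching pixels that are large enough to index. The labelling call and the record fields are illustrative; the paper's quantization, filtering and indexing details are not reproduced.

import numpy as np
from scipy import ndimage

def color_set_regions(quantized, color_set, min_size=64):
    """quantized: 2D array of palette indices; color_set: iterable of the
    palette indices in the query color set. Returns per-region records
    (size and bounding box) for regions worth indexing."""
    mask = np.isin(quantized, list(color_set))
    labels, n = ndimage.label(mask)              # connected components
    regions = []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        if ys.size >= min_size:
            regions.append({"size": int(ys.size),
                            "bbox": (int(ys.min()), int(xs.min()),
                                     int(ys.max()), int(xs.max()))})
    return regions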

Journal ArticleDOI
TL;DR: The development and use of a brain tissue probability model for the segmentation of multiple sclerosis lesions in magnetic resonance brain images, and an empirical comparison of the performance of statistical and decision tree classifiers, applied to MS lesion segmentation are described.
Abstract: Human investigators instinctively segment medical images into their anatomical components, drawing upon prior knowledge of anatomy to overcome image artifacts, noise, and lack of tissue contrast. The authors describe: 1) the development and use of a brain tissue probability model for the segmentation of multiple sclerosis (MS) lesions in magnetic resonance (MR) brain images, and 2) an empirical comparison of the performance of statistical and decision tree classifiers, applied to MS lesion segmentation. Based on MR image data obtained from healthy volunteers, the model provides prior probabilities of brain tissue distribution per unit voxel in a standardized 3-D "brain space". In comparison to purely data-driven segmentation, the use of the model to guide the segmentation of MS lesions reduced the volume of false positive lesions by 50-80%.

Journal ArticleDOI
TL;DR: A novel method for efficient image analysis that uses tuned matched Gabor filters is proposed; it requires no a priori knowledge of the analyzed image, so the analysis is unsupervised.
Abstract: Recent studies have confirmed that the multichannel Gabor decomposition represents an excellent tool for image segmentation and boundary detection. Unfortunately, this approach when used for unsupervised image analysis tasks imposes excessive storage requirements due to the nonorthogonality of the basis functions and is computationally highly demanding. In this correspondence, we propose a novel method for efficient image analysis that uses tuned matched Gabor filters. The algorithmic determination of the parameters of the Gabor filters is based on the analysis of spectral feature contrasts obtained from iterative computation of pyramidal Gabor transforms with progressive dyadic decrease of elementary cell sizes. The method requires no a priori knowledge of the analyzed image so that the analysis is unsupervised. Computer simulations applied to different classes of textures illustrate the matching property of the tuned Gabor filters derived using our determination algorithm. Also, their capability to extract significant image information and thus enable an easy and efficient low-level image analysis will be demonstrated.