
Showing papers in "IEEE Transactions on Image Processing in 1998"


Journal ArticleDOI
TL;DR: This paper presents a new external force for active contours, which is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image, and has a large capture range and is able to move snakes into boundary concavities.
Abstract: Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to boundary concavities, however, have limited their utility. This paper presents a new external force for active contours, largely solving both problems. This external force, which we call gradient vector flow (GVF), is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. It differs fundamentally from traditional snake external forces in that it cannot be written as the negative gradient of a potential function, and the corresponding snake is formulated directly from a force balance condition rather than a variational formulation. Using several two-dimensional (2-D) examples and one three-dimensional (3-D) example, we show that GVF has a large capture range and is able to move snakes into boundary concavities.

4,071 citations
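
As a rough illustration of the diffusion described in this abstract, the sketch below iterates the GVF update on an edge map with numpy; the weight mu, the step size dt, and the iteration count are illustrative choices rather than values from the paper.

```python
import numpy as np
from scipy.ndimage import laplace

def gvf(edge_map, mu=0.2, dt=0.5, iters=200):
    """Diffuse the gradient of an edge map into a dense external force field (u, v)."""
    fy, fx = np.gradient(edge_map.astype(float))    # np.gradient returns the row derivative first
    mag2 = fx ** 2 + fy ** 2                        # squared edge-map gradient magnitude
    u, v = fx.copy(), fy.copy()                     # initialize with the raw gradient vectors
    for _ in range(iters):
        # smoothness term spreads the vectors; data term keeps them close to
        # (fx, fy) wherever the edge-map gradient is strong
        u += dt * (mu * laplace(u) - mag2 * (u - fx))
        v += dt * (mu * laplace(v) - mag2 * (v - fy))
    return u, v
```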


Journal ArticleDOI
TL;DR: An automatic subpixel registration algorithm that minimizes the mean square intensity difference between a reference and a test data set, which can be either images (two-dimensional) or volumes (three-dimensional).
Abstract: We present an automatic subpixel registration algorithm that minimizes the mean square intensity difference between a reference and a test data set, which can be either images (two-dimensional) or volumes (three-dimensional). It uses an explicit spline representation of the images in conjunction with spline processing, and is based on a coarse-to-fine iterative strategy (pyramid approach). The minimization is performed according to a new variation (ML*) of the Marquardt-Levenberg algorithm for nonlinear least-square optimization. The geometric deformation model is a global three-dimensional (3-D) affine transformation that can be optionally restricted to rigid-body motion (rotation and translation), combined with isometric scaling. It also includes an optional adjustment of image contrast differences. We obtain excellent results for the registration of intramodality positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) data. We conclude that the multiresolution refinement strategy is more robust than a comparable single-stage method, being less likely to be trapped into a false local optimum. In addition, our improved version of the Marquardt-Levenberg algorithm is faster.

2,801 citations


Journal ArticleDOI
TL;DR: It is shown that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image and the connection to the error norm and influence function in the robust estimation framework leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion.
Abstract: Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the line processes allows us to develop new anisotropic diffusion equations that result in a qualitative improvement in the continuity of edges.

1,397 citations
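
A minimal numpy sketch of this idea follows: a Perona-Malik-style diffusion loop whose conduction coefficient is the Tukey biweight edge-stopping function, so diffusion shuts off completely across gradients larger than the scale sigma. The scale, step size, and 4-neighbour discretization are illustrative assumptions.

```python
import numpy as np

def tukey_g(x, sigma):
    """Tukey biweight edge-stopping function: zero beyond sigma, so diffusion halts at strong edges."""
    return np.where(np.abs(x) <= sigma, (1.0 - (x / sigma) ** 2) ** 2, 0.0)

def robust_diffusion(img, sigma=20.0, dt=0.2, iters=50):
    u = img.astype(float).copy()
    for _ in range(iters):
        # finite differences to the four nearest neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += dt * (tukey_g(dn, sigma) * dn + tukey_g(ds, sigma) * ds +
                   tukey_g(de, sigma) * de + tukey_g(dw, sigma) * dw)
    return u
```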


Journal ArticleDOI
TL;DR: These novel schemes use an additive operator splitting (AOS) that guarantees equal treatment of all coordinate axes; they can be implemented easily in arbitrary dimensions, have good rotational invariance, and have a computational complexity and memory requirement that is linear in the number of pixels.
Abstract: Nonlinear diffusion filtering in image processing is usually performed with explicit schemes. They are only stable for very small time steps, which leads to poor efficiency and limits their practical use. Based on a discrete nonlinear diffusion scale-space framework we present semi-implicit schemes which are stable for all time steps. These novel schemes use an additive operator splitting (AOS), which guarantees equal treatment of all coordinate axes. They can be implemented easily in arbitrary dimensions, have good rotational invariance and reveal a computational complexity and memory requirement which is linear in the number of pixels. Examples demonstrate that, under typical accuracy requirements, AOS schemes are at least ten times more efficient than the widely used explicit schemes.

1,229 citations
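
The sketch below shows, for the 2-D case, what one semi-implicit AOS step looks like in numpy/scipy: a tridiagonal solve along each coordinate axis followed by an average, which remains stable even for a large time step tau. The Perona-Malik diffusivity and all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.linalg import solve_banded
from scipy.ndimage import gaussian_gradient_magnitude

def aos_step(u, tau=5.0, lam=4.0):
    u = u.astype(float)
    # Perona-Malik-type diffusivity from a smoothed gradient magnitude (illustrative choice)
    g = 1.0 / (1.0 + (gaussian_gradient_magnitude(u, sigma=1.0) / lam) ** 2)

    def solve_axis(u, g, axis):
        u = np.moveaxis(u, axis, 0)
        g = np.moveaxis(g, axis, 0)
        n = u.shape[0]
        out = np.empty_like(u)
        for j in range(u.shape[1]):                     # one tridiagonal system per image line
            gj = g[:, j]
            w = 0.5 * (gj[:-1] + gj[1:])                # diffusivities on the links between pixels
            main = np.ones(n)
            main[:-1] += 2 * tau * w                    # m = 2 axes, so the factor is m*tau = 2*tau
            main[1:]  += 2 * tau * w
            ab = np.zeros((3, n))
            ab[0, 1:]  = -2 * tau * w                   # upper diagonal
            ab[1, :]   = main                           # main diagonal
            ab[2, :-1] = -2 * tau * w                   # lower diagonal
            out[:, j] = solve_banded((1, 1), ab, u[:, j])
        return np.moveaxis(out, 0, axis)

    # AOS: average the per-axis implicit solutions
    return 0.5 * (solve_axis(u, g, 0) + solve_axis(u, g, 1))
```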


Journal ArticleDOI
TL;DR: A blind deconvolution algorithm based on the proposed total variation (TV) minimization method is presented, and it is remarked that PSFs without sharp edges, e.g., Gaussian blur, can also be identified through the TV approach.
Abstract: We present a blind deconvolution algorithm based on the total variation (TV) minimization method proposed by Acar and Vogel (1994). The motivation for regularizing with the TV norm is that it is extremely effective for recovering edges of images as well as some blurring functions, e.g., motion blur and out-of-focus blur. An alternating minimization (AM) implicit iterative scheme is devised to recover the image and simultaneously identify the point spread function (PSF). Numerical results indicate that the iterative scheme is quite robust, converges very fast (especially for discontinuous blur), and that both the image and the PSF can be recovered in the presence of high noise levels. Finally, we remark that PSFs without sharp edges, e.g., Gaussian blur, can also be identified through the TV approach.

1,220 citations
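
As a crude illustration of the alternating-minimization structure, the sketch below alternates plain gradient-descent updates of the image and the PSF under a smoothed TV penalty. It replaces the paper's implicit AM scheme with the simplest possible solver, uses circular (FFT) convolution, and all weights and step sizes are illustrative.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def conv(a, b):
    """Circular convolution via the FFT (periodic boundaries, for brevity)."""
    return np.real(ifft2(fft2(a) * fft2(b)))

def corr(a, b):
    """Adjoint of conv(., b): circular correlation, done in the Fourier domain."""
    return np.real(ifft2(fft2(a) * np.conj(fft2(b))))

def tv_grad(u, eps=1e-3):
    """Gradient of a smoothed TV norm: -div(grad u / |grad u|)."""
    ux = np.roll(u, -1, 1) - u
    uy = np.roll(u, -1, 0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0)))

def blind_deconv(z, psf_size=9, iters=200, a1=1e-2, a2=1e-2, step=1e-3):
    u = z.astype(float).copy()                           # image estimate
    k = np.zeros(z.shape)                                # PSF estimate on the full grid
    k[:psf_size, :psf_size] = 1.0 / psf_size ** 2        # flat initial guess
    for _ in range(iters):
        r = conv(u, k) - z                               # residual of the blur model k*u - z
        u -= step * (corr(r, k) + a1 * tv_grad(u))       # descend in the image
        k -= step * (corr(r, u) + a2 * tv_grad(k))       # descend in the PSF
        k = np.clip(k, 0, None)
        k /= k.sum() + 1e-12                             # keep the PSF nonnegative and normalized
    return u, k
```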


Journal ArticleDOI
TL;DR: The generic obstacle and lane detection system (GOLD), a stereo vision-based hardware and software architecture to be used on moving vehicles to increase road safety, detects both generic obstacles and the lane position in a structured environment at a rate of 10 Hz.
Abstract: This paper describes the generic obstacle and lane detection system (GOLD), a stereo vision-based hardware and software architecture to be used on moving vehicles to increase road safety. Based on full-custom massively parallel hardware, it detects both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings) at a rate of 10 Hz. Thanks to a geometrical transform supported by a specific hardware module, the perspective effect is removed from both left and right stereo images; the left image is used to detect lane markings with a series of morphological filters, while both remapped stereo images are used for the detection of free space in front of the vehicle. The output of the processing is displayed on both an on-board monitor and a control panel to give visual feedback to the driver. The system was tested on the mobile laboratory (MOB-LAB) experimental land vehicle, which was driven for more than 3000 km along extra-urban roads and freeways at speeds up to 80 km/h, and demonstrated its robustness with respect to shadows and changing illumination conditions, different road textures, and vehicle movement.

1,088 citations
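
The perspective-removing "geometrical transform" is essentially an inverse perspective mapping of the road plane. The OpenCV sketch below illustrates the warp, assuming four hand-picked road-plane correspondences; GOLD itself derives the remapping from the camera geometry in a dedicated hardware module.

```python
import cv2
import numpy as np

def birds_eye_view(frame, out_size=(400, 600)):
    """Remap a road image to a top-down (perspective-free) view with a homography."""
    h, w = frame.shape[:2]
    # image corners of a trapezoidal road region (illustrative values only)
    src = np.float32([[w * 0.42, h * 0.60], [w * 0.58, h * 0.60],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    # where those points should land in the remapped image
    dst = np.float32([[0, 0], [out_size[0], 0],
                      [out_size[0], out_size[1]], [0, out_size[1]]])
    H = cv2.getPerspectiveTransform(src, dst)            # 3x3 homography
    return cv2.warpPerspective(frame, H, out_size)       # top-down view of the road plane
```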


Journal ArticleDOI
TL;DR: Extensive computations are presented that support the hypothesis that near-optimal shrinkage parameters can be derived if one knows (or can estimate) only two parameters about an image $F$: the largest $\alpha$ for which $F \in B^\alpha_q(L_q(I))$, $1/q = \alpha/2 + 1/2$, and the norm $|F|_{B^\alpha_q(L_q(I))}$.
Abstract: This paper examines the relationship between wavelet-based image processing algorithms and variational problems. Algorithms are derived as exact or approximate minimizers of variational problems; in particular, we show that wavelet shrinkage can be considered the exact minimizer of the following problem: given an image $F$ defined on a square $I$, minimize over all $g$ in the Besov space $B^1_1(L_1(I))$ the functional $\|F-g\|^2_{L_2(I)} + \lambda |g|_{B^1_1(L_1(I))}$. We use the theory of nonlinear wavelet image compression in $L_2(I)$ to derive accurate error bounds for noise removal through wavelet shrinkage applied to images corrupted with i.i.d., mean zero, Gaussian noise. A new signal-to-noise ratio (SNR), which we claim more accurately reflects the visual perception of noise in images, arises in this derivation. We present extensive computations that support the hypothesis that near-optimal shrinkage parameters can be derived if one knows (or can estimate) only two parameters about an image $F$: the largest $\alpha$ for which $F \in B^\alpha_q(L_q(I))$, $1/q = \alpha/2 + 1/2$, and the norm $|F|_{B^\alpha_q(L_q(I))}$. Both theoretical and experimental results indicate that our choice of shrinkage parameters yields uniformly better results than Donoho and Johnstone's VisuShrink procedure; an example suggests, however, that Donoho and Johnstone's (1994, 1995, 1996) SureShrink method, which uses a different shrinkage parameter for each dyadic level, achieves a lower error than our procedure.

810 citations
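
A small PyWavelets sketch of the shrinkage operation the paper analyzes: soft-threshold the detail coefficients of a 2-D wavelet decomposition and reconstruct. The single global threshold here is a placeholder for the near-optimal, Besov-smoothness-based parameter choice derived in the paper.

```python
import pywt

def wavelet_shrink(img, threshold, wavelet="db4", levels=4):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    shrunk = [coeffs[0]]                                  # keep the coarse approximation untouched
    for detail in coeffs[1:]:
        # soft-threshold the horizontal, vertical, and diagonal detail bands at every level
        shrunk.append(tuple(pywt.threshold(d, threshold, mode="soft") for d in detail))
    return pywt.waverec2(shrunk, wavelet)
```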


Journal ArticleDOI
TL;DR: A hybrid multidimensional image segmentation algorithm is proposed, which combines edge and region-based techniques through the morphological algorithm of watersheds and additionally maintains the so-called nearest neighbor graph, due to which the priority queue size and processing time are drastically reduced.
Abstract: A hybrid multidimensional image segmentation algorithm is proposed, which combines edge and region-based techniques through the morphological algorithm of watersheds. An edge-preserving statistical noise reduction approach is used as a preprocessing stage in order to compute an accurate estimate of the image gradient. Then, an initial partitioning of the image into primitive regions is produced by applying the watershed transform on the image gradient magnitude. This initial segmentation is the input to a computationally efficient hierarchical (bottom-up) region merging process that produces the final segmentation. The latter process uses the region adjacency graph (RAG) representation of the image regions. At each step, the most similar pair of regions is determined (minimum cost RAG edge), the regions are merged and the RAG is updated. Traditionally, the above is implemented by storing all RAG edges in a priority queue. We propose a significantly faster algorithm, which additionally maintains the so-called nearest neighbor graph, due to which the priority queue size and processing time are drastically reduced. The final segmentation provides, due to the RAG, one-pixel wide, closed, and accurately localized contours/surfaces. Experimental results obtained with two-dimensional/three-dimensional (2-D/3-D) magnetic resonance images are presented.

794 citations
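
A compact scikit-image sketch of the same pipeline: smooth, compute a gradient magnitude, over-segment with the watershed transform, then merge similar neighbouring regions. The single-pass, mean-intensity merge below is a simplified stand-in for the paper's RAG / nearest-neighbour-graph merging.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, segmentation

def watershed_segment(img, merge_tol=10.0):
    smooth = filters.gaussian(img, sigma=1.0)                 # crude noise reduction
    gradient = filters.sobel(smooth)                          # gradient magnitude estimate
    labels = segmentation.watershed(gradient)                 # initial over-segmentation (primitive regions)
    # greedy merge: join neighbouring regions whose mean intensities are close
    means = ndi.mean(img, labels=labels, index=np.arange(1, labels.max() + 1))
    for axis in (0, 1):
        a = labels
        b = np.roll(labels, -1, axis=axis)
        for la, lb in set(zip(a.ravel(), b.ravel())):
            if la != lb and abs(means[la - 1] - means[lb - 1]) < merge_tol:
                labels[labels == lb] = la
    return labels
```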


Journal ArticleDOI
TL;DR: This paper shows that connected operators work implicitly on a structured representation of the image made of flat zones, and proposes the max-tree as a suitable and efficient structure to deal with the processing steps involved in antiextensive connected operators.
Abstract: This paper deals with a class of morphological operators called connected operators. These operators filter the signal by merging its flat zones. As a result, they do not create any new contours and are very attractive for filtering tasks where the contour information has to be preserved. This paper shows that connected operators work implicitly on a structured representation of the image made of flat zones. The max-tree is proposed as a suitable and efficient structure to deal with the processing steps involved in antiextensive connected operators. A formal definition of the various processing steps involved in the operator is proposed and, as a result, several lines of generalization are developed. First, the notion of connectivity and its definition are analyzed. Several modifications of the traditional approach are presented. They lead to connected operators that are able to deal with texture. They also allow the definition of connected operators with less leakage than the classical ones. Second, a set of simplification criteria are proposed and discussed. They lead to simplicity-, entropy-, and motion-oriented operators. The problem of using a nonincreasing criterion is analyzed. Its solution is formulated as an optimization problem that can be very efficiently solved by a Viterbi (1979) algorithm. Finally, several implementation issues are discussed showing that these operators can be very efficiently implemented.

656 citations


Journal ArticleDOI
TL;DR: A new geometrical framework is introduced from which natural flows for image scale space and enhancement are derived; it unifies many classical schemes and algorithms via a simple scaling of the intensity contrast, and results in new and efficient schemes.
Abstract: We introduce a new geometrical framework based on which natural flows for image scale space and enhancement are presented. We consider intensity images as surfaces in the (x,I) space. The image is, thereby, a two-dimensional (2-D) surface in three-dimensional (3-D) space for gray-level images, and a 2-D surface in five dimensions for color images. The new formulation unifies many classical schemes and algorithms via a simple scaling of the intensity contrast, and results in new and efficient schemes. Extensions to multidimensional signals become natural and lead to powerful denoising and scale space algorithms.

639 citations


Journal ArticleDOI
TL;DR: A multilevel dominant eigenvector estimation algorithm is used to develop a new run-length texture feature extraction algorithm that preserves much of the texture information in run-length matrices and significantly improves image classification accuracy over traditional run-length techniques.
Abstract: We use a multilevel dominant eigenvector estimation algorithm to develop a new run-length texture feature extraction algorithm that preserves much of the texture information in run-length matrices and significantly improves image classification accuracy over traditional run-length techniques. The advantage of this approach is demonstrated experimentally by the classification of two texture data sets. Comparisons with other methods demonstrate that the run-length matrices contain great discriminatory information and that a good method of extracting such information is of paramount importance to successful classification.
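
For reference, the sketch below builds the gray-level run-length matrix that such features start from and computes two classical statistics; quantizing to a few gray levels and using horizontal runs only are illustrative simplifications.

```python
import numpy as np

def run_length_matrix(img, levels=8):
    """Gray-level run-length matrix: rlm[g, r-1] counts horizontal runs of level g and length r."""
    q = np.floor(img.astype(float) / max(img.max(), 1) * (levels - 1)).astype(int)
    rlm = np.zeros((levels, q.shape[1]), dtype=int)
    for row in q:
        start = 0
        for i in range(1, len(row) + 1):
            if i == len(row) or row[i] != row[start]:
                rlm[row[start], i - start - 1] += 1       # record a run of length i - start
                start = i
    return rlm

def classic_features(rlm):
    runs = np.arange(1, rlm.shape[1] + 1)
    total = rlm.sum()
    sre = (rlm / runs[np.newaxis, :] ** 2).sum() / total   # short-run emphasis
    lre = (rlm * runs[np.newaxis, :] ** 2).sum() / total   # long-run emphasis
    return sre, lre
```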

Journal ArticleDOI
TL;DR: An efficient algorithm is presented for the discretized problem that combines a fixed point iteration to handle nonlinearity with a new, effective preconditioned conjugate gradient iteration for large linear systems.
Abstract: Tikhonov regularization with a modified total variation regularization functional is used to recover an image from noisy, blurred data. This approach is appropriate for image processing in that it does not place a priori smoothness conditions on the solution image. An efficient algorithm is presented for the discretized problem that combines a fixed point iteration to handle nonlinearity with a new, effective preconditioned conjugate gradient iteration for large linear systems. Reconstructions, convergence results, and a direct comparison with a fast linear solver are presented for a satellite image reconstruction application.

Journal ArticleDOI
TL;DR: A new definition of the total variation (TV) norm forvector-valued functions that can be applied to restore color and other vector-valued images and some numerical experiments on denoising simple color images in red-green-blue color space are presented.
Abstract: We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of (1) not penalizing discontinuities (edges) in the image, (2) being rotationally invariant in the image space, and (3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
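
Reading the proposed definition as a Euclidean combination of per-channel TV norms, a short numpy sketch looks as follows; it visibly reduces to the usual TV norm when the image has a single channel. This is our reading of the definition, not a verbatim transcription of the paper's formula.

```python
import numpy as np

def scalar_tv(u):
    """Standard discrete TV norm of one channel (forward differences, replicated boundary)."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sqrt(ux ** 2 + uy ** 2).sum()

def color_tv(img):
    """Vector TV: Euclidean norm of the per-channel TV values of an HxWxC image."""
    return np.sqrt(sum(scalar_tv(img[..., c].astype(float)) ** 2
                       for c in range(img.shape[-1])))
```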

Journal ArticleDOI
TL;DR: A new region growing method for finding the boundaries of blobs is presented, which uses two novel discontinuity measures, average contrast and peripheral contrast, to control the growing process.
Abstract: A new region growing method for finding the boundaries of blobs is presented. A unique feature of the method is that at each step, at most one pixel exhibits the required properties to join the region. The method uses two novel discontinuity measures, average contrast and peripheral contrast, to control the growing process.
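
A simplified numpy/heapq sketch of single-pixel-at-a-time growing in this spirit: at every step, the one boundary pixel closest to the current region mean joins the region. The fixed intensity tolerance used as a stopping rule here stands in for the paper's average- and peripheral-contrast measures.

```python
import heapq
import numpy as np

def grow_region(img, seed, tol=15.0):
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    mean = float(img[seed])
    count = 1
    frontier = []                                          # min-heap keyed by |value - region mean|

    def push_neighbors(y, x):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                heapq.heappush(frontier, (abs(float(img[ny, nx]) - mean), ny, nx))

    push_neighbors(*seed)
    while frontier:
        _, y, x = heapq.heappop(frontier)
        if region[y, x]:
            continue
        if abs(float(img[y, x]) - mean) > tol:             # best candidate no longer fits: stop growing
            break
        region[y, x] = True                                # exactly one pixel joins per iteration
        count += 1
        mean += (float(img[y, x]) - mean) / count          # running mean update
        push_neighbors(y, x)
    return region
```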

Journal ArticleDOI
TL;DR: A novel geometric approach for solving the stereo problem for an arbitrary number of images (two or more) based upon the definition of a variational principle that must be satisfied by the surfaces of the objects in the scene and their images.
Abstract: We present a novel geometric approach for solving the stereo problem for an arbitrary number of images (≥2). It is based upon the definition of a variational principle that must be satisfied by the surfaces of the objects in the scene and their images. The Euler-Lagrange equations that are deduced from the variational principle provide a set of partial differential equations (PDEs) that are used to deform an initial set of surfaces which then move toward the objects to be detected. The level set implementation of these PDEs potentially provides an efficient and robust way of achieving the surface evolution and to deal automatically with changes in the surface topology during the deformations, i.e., to deal with multiple objects. Results of an implementation of our theory also dealing with occlusion and visibility are presented on synthetic and real images.

Journal ArticleDOI
TL;DR: A set of basic assumptions to be satisfied by the interpolation algorithms which lead to a set of models in terms of possibly degenerate elliptic partial differential equations are proposed.
Abstract: We discuss possible algorithms for interpolating data given in a set of curves and/or points in the plane. We propose a set of basic assumptions to be satisfied by the interpolation algorithms which lead to a set of models in terms of possibly degenerate elliptic partial differential equations. The absolute minimal Lipschitz extension model (AMLE) is singled out and studied in more detail. We show experiments suggesting a possible application, the restoration of images with poor dynamic range.

Journal ArticleDOI
TL;DR: An analysis of the signal-to-noise ratio (SNR) in the resulting enhanced image shows that the SNR decreases exponentially with range and a temporal filter structure is proposed to solve this problem.
Abstract: In daylight viewing conditions, image contrast is often significantly degraded by atmospheric aerosols such as haze and fog. This paper introduces a method for reducing this degradation in situations in which the scene geometry is known. Contrast is lost because light is scattered toward the sensor by the aerosol particles and because the light reflected by the terrain is attenuated by the aerosol. This degradation is approximately characterized by a simple, physically based model with three parameters. The method involves two steps: first, an inverse problem is solved in order to recover the three model parameters; then, for each pixel, the relative contributions of scattered and reflected flux are estimated. The estimated scatter contribution is simply subtracted from the pixel value and the remainder is scaled to compensate for aerosol attenuation. This paper describes the image processing algorithm and presents an analysis of the signal-to-noise ratio (SNR) in the resulting enhanced image. This analysis shows that the SNR decreases exponentially with range. A temporal filter structure is proposed to solve this problem. Results are presented for two image sequences taken from an airborne camera in hazy conditions and one sequence in clear conditions. A satisfactory agreement between the model and the experimental data is shown for the haze conditions. A significant improvement in image quality is demonstrated when using the contrast enhancement algorithm in conjunction with a temporal filter.
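
Per pixel, a physically based model of this kind amounts to observed = direct · t + airlight · (1 − t) with transmittance t = exp(−β · depth). The numpy sketch below performs the two enhancement steps, assuming the depth map, β, and the airlight radiance have already been recovered, i.e., it skips the paper's inverse-problem stage.

```python
import numpy as np

def dehaze(img, depth, beta, airlight):
    t = np.exp(-beta * depth)                              # aerosol transmittance along each ray
    scattered = airlight * (1.0 - t)                       # flux scattered toward the sensor
    direct = img.astype(float) - scattered                 # step 1: subtract the scatter contribution
    return np.clip(direct / np.maximum(t, 1e-3), 0, None)  # step 2: rescale to undo attenuation
```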

Journal ArticleDOI
TL;DR: This paper addresses the issue of recovering and segmenting the apparent velocity field in sequences of images by designing an efficient deterministic multigrid procedure and proposing an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves.
Abstract: We address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curves equipped with different kinds of priors can be easily supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning.

Journal ArticleDOI
TL;DR: A general recursive approach for image segmentation by extending Otsu's (1978) method, which segments the brightest homogeneous object from a given image at each recursion, leaving only the darkesthomogeneous object after the last recursion.
Abstract: In this correspondence, we present a general recursive approach for image segmentation by extending Otsu's (1978) method. The new approach has been implemented in the scope of document images, specifically real-life bank checks. This approach segments the brightest homogeneous object from a given image at each recursion, leaving only the darkest homogeneous object after the last recursion. The major steps of the new technique and the experimental results that illustrate the importance and the usefulness of the new approach for the specified class of document images of bank checks are presented.
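
A small scikit-image sketch of the recursion: apply Otsu's threshold to the still-unlabeled pixels, peel off the brightest segment, and repeat on the darker remainder. The fixed recursion depth is an illustrative stopping rule.

```python
import numpy as np
from skimage.filters import threshold_otsu

def recursive_otsu(img, depth=3):
    """Return a label image: label k is the bright object peeled off at recursion level k."""
    labels = np.zeros(img.shape, dtype=int)
    remaining = np.ones(img.shape, dtype=bool)
    for k in range(1, depth + 1):
        values = img[remaining]
        if values.size == 0 or values.min() == values.max():
            break
        t = threshold_otsu(values)                         # Otsu threshold on the unlabeled pixels only
        bright = remaining & (img > t)                     # brightest homogeneous object at this level
        labels[bright] = k
        remaining &= ~bright                               # recurse on the darker remainder
    return labels
```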

Journal ArticleDOI
TL;DR: The results demonstrate that the constraint of piecewise smoothness, applied through the use of edge-preserving regularization, can provide excellent limited-angle tomographic reconstructions and prove that the more general algorithm is globally convergent under less restrictive conditions.
Abstract: We introduce a generalization of a deterministic relaxation algorithm for edge-preserving regularization in linear inverse problems. This algorithm transforms the original (possibly nonconvex) optimization problem into a sequence of quadratic optimization problems, and has been shown to converge under certain conditions when the original cost functional being minimized is strictly convex. We prove that our more general algorithm is globally convergent (i.e., converges to a local minimum from any initialization) under less restrictive conditions, even when the original cost functional is nonconvex. We apply this algorithm to tomographic reconstruction from limited-angle data by formulating the problem as one of regularized least-squares optimization. The results demonstrate that the constraint of piecewise smoothness, applied through the use of edge-preserving regularization, can provide excellent limited-angle tomographic reconstructions. Two edge-preserving regularizers-one convex, the other nonconvex-are used in numerous simulations to demonstrate the effectiveness of the algorithm under various limited-angle scenarios, and to explore how factors, such as the choice of error norm, angular sampling rate and amount of noise, affect the reconstruction quality and algorithm performance. These simulation results show that for this application, the nonconvex regularizer produces consistently superior results.

Journal ArticleDOI
TL;DR: This work proposes a robust algorithm for the segmentation of three-dimensional (3-D) image data based on a novel combination of adaptive K-means clustering and knowledge-based morphological operations that has been successfully applied to a sequence of cardiac CT volumetric images.
Abstract: Image segmentation remains one of the major challenges in image analysis. In medical applications, skilled operators are usually employed to extract the desired regions that may be anatomically separate but statistically indistinguishable. Such manual processing is subject to operator errors and biases, is extremely time consuming, and has poor reproducibility. We propose a robust algorithm for the segmentation of three-dimensional (3-D) image data based on a novel combination of adaptive K-means clustering and knowledge-based morphological operations. The proposed adaptive K-means clustering algorithm is capable of segmenting the regions of smoothly varying intensity distributions. Spatial constraints are incorporated in the clustering algorithm through the modeling of the regions by Gibbs random fields. Knowledge-based morphological operations are then applied to the segmented regions to identify the desired regions according to the a priori anatomical knowledge of the region-of-interest. This proposed technique has been successfully applied to a sequence of cardiac CT volumetric images to generate the volumes of left ventricle chambers at 16 consecutive temporal frames. Our final segmentation results compare favorably with the results obtained using manual outlining. Extensions of this approach to other applications can be readily made when a priori knowledge of a given object is available.

Journal ArticleDOI
TL;DR: A modification of the evolution equation is derived, based on the gradient flow of a weighted area functional with an image-dependent weighting factor; the resulting PDE offers a number of advantages, as illustrated by several examples of shape segmentation on medical images.
Abstract: A number of active contour models have been proposed that unify the curve evolution framework with classical energy minimization techniques for segmentation, such as snakes. The essential idea is to evolve a curve (in two dimensions) or a surface (in three dimensions) under constraints from image forces so that it clings to features of interest in an intensity image. The evolution equation has been derived from first principles as the gradient flow that minimizes a modified length functional, tailored to features such as edges. However, because the flow may be slow to converge in practice, a constant (hyperbolic) term is added to keep the curve/surface moving in the desired direction. We derive a modification of this term based on the gradient flow derived from a weighted area functional, with image dependent weighting factor. When combined with the earlier modified length gradient flow, we obtain a partial differential equation (PDE) that offers a number of advantages, as illustrated by several examples of shape segmentation on medical images. In many cases the weighted area flow may be used on its own, with significant computational savings.

Journal ArticleDOI
TL;DR: Using a data base of 2560 image regions, it is shown that the multiscale approach using opponent features provides better recognition accuracy than other approaches.
Abstract: We introduce a representation for color texture using unichrome and opponent features computed from Gabor filter outputs. The unichrome features are computed from the spectral bands independently while the opponent features combine information across different spectral bands at different scales. Opponent features are motivated by color opponent mechanisms in human vision. We present a method for efficiently implementing these filters, which is of particular interest for processing the additional information present in color images. Using a data base of 2560 image regions, we show that the multiscale approach using opponent features provides better recognition accuracy than other approaches.

Journal ArticleDOI
TL;DR: A method is proposed to define diffusions of orientation-like quantities and it is shown how such orientation diffusions contain a nonlinearity that is reminiscent of edge-process and anisotropic diffusion.
Abstract: Diffusions are useful for image processing and computer vision because they provide a convenient way of smoothing noisy data, analyzing images at multiple scales, and enhancing discontinuities. A number of diffusions of image brightness have been defined and studied so far; they may be applied to scalar and vector-valued quantities that are naturally associated with intervals of either the real line, or other flat manifolds. Some quantities of interest in computer vision, and other areas of engineering that deal with images, are defined on curved manifolds; typical examples are orientation and hue that are defined on the circle. Generalizing brightness diffusions to orientation is not straightforward, especially in the case where a discrete implementation is sought. An example of what may go wrong is presented. A method is proposed to define diffusions of orientation-like quantities. First a definition in the continuum is discussed, then a discrete orientation diffusion is proposed. The behavior of such diffusions is explored both analytically and experimentally. It is shown how such orientation diffusions contain a nonlinearity that is reminiscent of edge-process and anisotropic diffusion. A number of open questions are proposed.
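
One common discrete scheme in this spirit (not necessarily the paper's exact formulation) moves each orientation toward its neighbours through the sine of the angular difference, which respects the wrap-around of the circle and supplies the edge-stopping-like nonlinearity mentioned above; the step size and 4-neighbourhood are illustrative.

```python
import numpy as np

def orientation_diffusion(theta, dt=0.2, iters=100):
    """Smooth an array of angles (radians) defined on the circle."""
    th = theta.astype(float).copy()
    for _ in range(iters):
        update = np.zeros_like(th)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            diff = np.roll(th, shift, axis=axis) - th
            update += np.sin(diff)        # vanishes for aligned (and opposite) neighbours
        th = np.mod(th + dt * update, 2 * np.pi)   # keep the result on the circle
    return th
```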

Journal ArticleDOI
TL;DR: This work uses a multiscale synthesis algorithm incorporating local annealing to obtain larger realizations of texture visually indistinguishable from the training texture.
Abstract: Our noncausal, nonparametric, multiscale, Markov random field (MRF) model is capable of synthesizing and capturing the characteristics of a wide variety of textures, from the highly structured to the stochastic. We use a multiscale synthesis algorithm incorporating local annealing to obtain larger realizations of texture visually indistinguishable from the training texture.

Journal ArticleDOI
TL;DR: A spatially variant finite mixture model is proposed for pixel labeling and image segmentation and an expectation-maximization (EM) algorithm is derived for maximum likelihood estimation of the pixel labels and the parameters of the mixture densities.
Abstract: A spatially variant finite mixture model is proposed for pixel labeling and image segmentation. For the case of spatially varying mixtures of Gaussian density functions with unknown means and variances, an expectation-maximization (EM) algorithm is derived for maximum likelihood estimation of the pixel labels and the parameters of the mixture densities. An a priori density function is formulated for the spatially variant mixture weights. A generalized EM algorithm for maximum a posteriori estimation of the pixel labels based upon these prior densities is derived. This algorithm incorporates a variation of gradient projection in the maximization step and the resulting algorithm takes the form of grouped coordinate ascent. Gaussian densities have been used for simplicity, but the algorithm can easily be modified to incorporate other appropriate models for the mixture model component densities. The accuracy of the algorithm is quantitatively evaluated through Monte Carlo simulation, and its performance is qualitatively assessed via experimental images from computerized tomography (CT) and magnetic resonance imaging (MRI).
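
For orientation, the sketch below runs the plain EM loop for an ordinary (spatially constant) Gaussian mixture on pixel intensities; the paper's spatially variant weights, their prior, and the generalized (MAP) EM step are omitted. Initialization and the iteration count are illustrative.

```python
import numpy as np

def em_segment(img, n_classes=3, iters=30):
    x = img.astype(float).ravel()
    mu = np.linspace(x.min(), x.max(), n_classes)         # spread initial means over the intensity range
    var = np.full(n_classes, x.var() / n_classes)
    pi = np.full(n_classes, 1.0 / n_classes)
    for _ in range(iters):
        # E-step: posterior probability of each class for every pixel
        lik = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = lik / (lik.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: update weights, means, and variances from the responsibilities
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    labels = resp.argmax(axis=1).reshape(img.shape)        # maximum-posterior pixel labels
    return labels, mu, var, pi
```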

Journal ArticleDOI
TL;DR: The relationship with similar PDE systems is studied, in particular with the functional proposed by Ambrosio-Tortorelli to approximate the Mumford-Shah functional developed for segmentation.
Abstract: This paper deals with edge-preserving regularization for inverse problems in image processing. We first present a synthesis of the main results we have obtained in edge-preserving regularization by using a variational approach. We recall the model involving regularizing functions φ and we analyze the geometry-driven diffusion process of this model in the three-dimensional (3-D) case. Then a half-quadratic theorem is used to give a very simple reconstruction algorithm. After a critical analysis of this model, we propose another functional to minimize for edge-preserving reconstruction purposes. It results in solving two coupled partial differential equations (PDEs): one processes the intensity, the other the edges. We study the relationship with similar PDE systems, in particular with the functional proposed by Ambrosio-Tortorelli (1990, 1992) in order to approach the Mumford-Shah (1989) functional developed in the segmentation application. Experimental results on synthetic and real images are presented.

Journal ArticleDOI
TL;DR: A new image cryptosystem to protect image data that encrypts the original image into another virtual image; it can confuse illegal users and is more efficient than a method that encrypts the entire image directly.
Abstract: We propose a new image cryptosystem to protect image data. It encrypts the original image into another virtual image. Since both original and virtual images are significant, our new cryptosystem can confuse illegal users. Besides the camouflage, this new cryptosystem has three other benefits. First, our cryptosystem is secure even if the illegal users know that our virtual image is a camouflage. Second, this cryptosystem can compress image data. Finally, our method is more efficient than a method that encrypts the entire image directly.

Journal ArticleDOI
TL;DR: A comprehensive comparison of 2D spectral estimation methods for SAR imaging shows that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allows empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.
Abstract: This paper discusses the use of modern 2D spectral estimation algorithms for synthetic aperture radar (SAR) imaging. The motivation for applying power spectrum estimation methods to SAR imaging is to improve resolution, remove sidelobe artifacts, and reduce speckle compared to what is possible with conventional Fourier transform SAR imaging techniques. This paper makes two principal contributions to the field of adaptive SAR imaging. First, it is a comprehensive comparison of 2D spectral estimation methods for SAR imaging. It provides a synopsis of the algorithms available, discusses their relative merits for SAR imaging, and illustrates their performance on simulated and collected SAR imagery. Some of the algorithms presented or their derivations are new, as are some of the insights into or analyses of the algorithms. Second, this work develops multichannel variants of four related algorithms, minimum variance method (MVM), reduced-rank MVM (RRMVM), adaptive sidelobe reduction (ASR) and space variant apodization (SVA) to estimate both reflectivity intensity and interferometric height from polarimetric displaced-aperture interferometric data. All of these interferometric variants are new. In the interferometric context, adaptive spectral estimation can improve the height estimates through a combination of adaptive nulling and averaging. Examples illustrate that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allow empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.
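
A small numpy sketch of the 2-D minimum variance method (Capon estimator) named above: estimate a sample covariance from vectorized sub-patches of the phase-history data and evaluate 1/(aᴴR⁻¹a) over a grid of 2-D frequencies. The patch size, frequency grid, and diagonal loading factor are illustrative choices, and the multichannel interferometric variants are not shown.

```python
import numpy as np

def mvm_image(phase_history, patch=(8, 8), grid=64, loading=1e-3):
    p, q = patch
    # collect vectorized p x q sub-patches as snapshots for the covariance estimate
    snaps = [phase_history[i:i + p, j:j + q].ravel()
             for i in range(phase_history.shape[0] - p + 1)
             for j in range(phase_history.shape[1] - q + 1)]
    X = np.array(snaps)
    R = (X.conj().T @ X) / X.shape[0]
    R += loading * np.trace(R).real / (p * q) * np.eye(p * q)   # diagonal loading for stability
    Rinv = np.linalg.inv(R)
    ky, kx = np.meshgrid(np.arange(p), np.arange(q), indexing="ij")
    freqs = np.linspace(0, 1, grid, endpoint=False)
    out = np.empty((grid, grid))
    for m, f1 in enumerate(freqs):
        for n, f2 in enumerate(freqs):
            a = np.exp(2j * np.pi * (f1 * ky + f2 * kx)).ravel()  # 2-D steering vector
            out[m, n] = 1.0 / np.real(a.conj() @ Rinv @ a)        # Capon / MVM spectral estimate
    return out
```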

Journal ArticleDOI
TL;DR: This work introduces a new wavelet-based framework for analyzing block-based fractal compression schemes, and gives new insight into the convergence properties of fractal block coders, and leads to an unconditionally convergent scheme with a fast decoding algorithm.
Abstract: Why does fractal image compression work? What is the implicit image model underlying fractal block coding? How can we characterize the types of images for which fractal block coders will work well? These are the central issues we address. We introduce a new wavelet-based framework for analyzing block-based fractal compression schemes. Within this framework we are able to draw upon insights from the well-established transform coder paradigm in order to address the issue of why fractal block coders work. We show that fractal block coders of the form introduced by Jacquin (1992) are Haar wavelet subtree quantization schemes. We examine a generalization of the schemes to smooth wavelets with additional vanishing moments. The performance of our generalized coder is comparable to the best results in the literature for a Jacquin-style coding scheme. Our wavelet framework gives new insight into the convergence properties of fractal block coders, and it leads us to develop an unconditionally convergent scheme with a fast decoding algorithm. Our experiments with this new algorithm indicate that fractal coders derive much of their effectiveness from their ability to efficiently represent wavelet zero trees. Finally, our framework reveals some of the fundamental limitations of current fractal compression schemes.