
Showing papers in "Journal of Mathematical Imaging and Vision in 2005"


Journal ArticleDOI
TL;DR: An algorithm to split an image into a sum u + v of a bounded variation component and a component containing the textures and the noise is constructed, inspired by a recent work of Y. Meyer.
Abstract: We construct an algorithm to split an image into a sum u + v of a bounded variation component and a component containing the textures and the noise. This decomposition is inspired by a recent work of Y. Meyer. We find this decomposition by minimizing a convex functional which depends on the two variables u and v, alternately in each variable. Each minimization is based on a projection algorithm to minimize the total variation. We carry out the mathematical study of our method. We present some numerical results. In particular, we show how the u component can be used in nontextured SAR image restoration.

369 citations
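
To make the alternating scheme concrete, here is a minimal numerical sketch using a Chambolle-style projection for each TV minimization step; the function names, parameter values and iteration counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def grad(u):
    # forward differences, Neumann boundary
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # discrete divergence, the negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_project(g, lam, n_iter=50, tau=0.125):
    # Chambolle's fixed-point iteration for the projection P_{lam K}(g)
    px = np.zeros_like(g); py = np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / lam)
        denom = 1.0 + tau * np.hypot(gx, gy)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return lam * div(px, py)

def uv_decompose(f, lam=10.0, mu=50.0, n_outer=10):
    # alternate minimization in u (BV part) and v (texture + noise part)
    u = np.zeros_like(f); v = np.zeros_like(f)
    for _ in range(n_outer):
        v = tv_project(f - u, mu)              # v-step: projection onto the mu-ball
        u = (f - v) - tv_project(f - v, lam)   # u-step: TV denoising of f - v
    return u, v
```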


Journal ArticleDOI
TL;DR: This work studies the least squares fit (LSF) of circular arcs to incomplete scattered data, analyzes theoretical aspects of the problem, and reveals the cause of unstable behavior of conventional algorithms.
Abstract: Fitting standard shapes or curves to incomplete data (which represent only a small part of the curve) is a notoriously difficult problem. Even if the curve is quite simple, such as an ellipse or a circle, it is hard to reconstruct it from noisy data sampled along a short arc. Here we study the least squares fit (LSF) of circular arcs to incomplete scattered data. We analyze theoretical aspects of the problem and reveal the cause of unstable behavior of conventional algorithms. We also find a remedy that allows us to build another algorithm that accurately fits circles to data sampled along arbitrarily short arcs.

251 citations
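
As a concrete illustration of the fitting problem, here is a basic algebraic (Kasa-style) least-squares circle fit. It is a standard baseline, not the stabilized algorithm the paper constructs, and its estimates visibly degrade as the arc shortens, which is exactly the instability the paper analyzes.

```python
import numpy as np

def fit_circle_algebraic(x, y):
    """Linear least squares on x^2 + y^2 + A*x + B*y + C = 0;
    returns center (a, b) and radius r."""
    M = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (A, B, C), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    a, b = -A / 2.0, -B / 2.0
    return a, b, np.sqrt(a**2 + b**2 - C)

# noisy samples along a short 20-degree arc of the unit circle
rng = np.random.default_rng(0)
t = np.linspace(0.0, np.radians(20.0), 50)
x = np.cos(t) + 0.01 * rng.standard_normal(t.size)
y = np.sin(t) + 0.01 * rng.standard_normal(t.size)
print(fit_circle_algebraic(x, y))   # estimates degrade as the arc shortens
```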


Journal ArticleDOI
TL;DR: This paper considers a special kind of image data: families of images generated by articulation of one or several objects in a scene; their lack of differentiability when the images have edges is studied, and it is shown that there exists a natural renormalization of geodesic distance which yields a well-defined metric.
Abstract: Recently, the Isomap procedure [10] was proposed as a new way to recover a low-dimensional parametrization of data lying on a low-dimensional submanifold in high-dimensional space. The method assumes that the submanifold, viewed as a Riemannian submanifold of the ambient high-dimensional space, is isometric to a convex subset of Euclidean space. This naturally raises the question: what datasets can reasonably be modeled by this condition? In this paper, we consider a special kind of image data: families of images generated by articulation of one or several objects in a scene--for example, images of a black disk on a white background with center placed at a range of locations. The collection of all images in such an articulation family, as the parameters of the articulation vary, makes up an articulation manifold, a submanifold of L2. We study the properties of such articulation manifolds, in particular, their lack of differentiability when the images have edges. Under these conditions, we show that there exists a natural renormalization of geodesic distance which yields a well-defined metric. We exhibit a list of articulation models where the corresponding manifold equipped with this new metric is indeed isometric to a convex subset of Euclidean space. Examples include translations of a symmetric object, rotations of a closed set, articulations of a horizon, and expressions of a cartoon face. The theoretical predictions from our study are borne out by empirical experiments with published Isomap code. We also note that in the case where several components of the image articulate independently, isometry may fail; for example, with several disks in an image avoiding contact, the underlying Riemannian manifold is locally isometric to an open, connected, but not convex subset of Euclidean space. Such a situation matches the assumptions of our recently-proposed Hessian Eigenmaps procedure, but not the original Isomap procedure.

153 citations
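
For reference, the Isomap pipeline the abstract builds on reduces to three steps: a neighborhood graph, graph-geodesic distances, and classical MDS. A compact sketch follows; rows of X would be vectorized images from an articulation family, and the neighborhood size is an assumption.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap_embedding(X, n_neighbors=8, n_components=2):
    # 1. neighborhood graph on the data points (rows of X)
    G = kneighbors_graph(X, n_neighbors, mode='distance')
    # 2. geodesic distances = shortest paths in the graph
    D = shortest_path(G, directed=False)
    # 3. classical MDS on the doubly centered squared distances
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```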


Journal ArticleDOI
TL;DR: A necessary and sufficient condition for a map G to be a watershed of a map F is given; the condition is based on a notion of extension and is derived from the framework of topological watersheds.
Abstract: In this paper, we investigate topological watersheds (Couprie and Bertrand, 1997). One of our main results is a necessary and sufficient condition for a map G to be a watershed of a map F; this condition is based on a notion of extension. A consequence of the theorem is that there exists a (greedy) polynomial time algorithm to decide whether a map G is a watershed of a map F or not. We introduce a notion of "separation between two points" of an image which leads to a second necessary and sufficient condition. We also show that, given an arbitrary total order on the minima of a map, it is possible to define a notion of "degree of separation of a minimum" relative to this order. This leads to a third necessary and sufficient condition for a map G to be a watershed of a map F. Finally, we derive, from our framework, a new definition for the dynamics of a minimum.

138 citations


Journal ArticleDOI
TL;DR: The proposed subgradient method discourages blocking effects and the Gibbs phenomenon from appearing while keeping edges as sharp as possible, and leads to a simple and fast algorithm that can decompress the vast set of existing JPEG images more effectively.
Abstract: The widely used JPEG lossy baseline coding system is known to produce, at low bit rates, blocking effects and Gibbs phenomenon. This paper develops a method to get rid of these artifacts without smoothing images and without removing perceptual features. This results in better looking pictures and improved PSNR. Our algorithm is based on an adapted total variation minimization approach constrained by the knowledge of the input intervals the unquantized cosine coefficients belong to. In this way, we reconstruct an image having the same quantized coefficients as the original one, but which is minimal in terms of total variation. This discourages blocking effects and the Gibbs phenomenon from appearing while keeping edges as sharp as possible. Although the proposed subgradient method converges only asymptotically, experiments show that the best results are obtained within very few iterations. This leads to a simple and fast algorithm that can be applied to the vast set of existing JPEG images to decompress them more effectively.

133 citations
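
A hedged sketch of the constrained TV idea: subgradient descent on total variation, with each iterate projected so that its 8x8 block DCT coefficients stay in the quantization intervals implied by the decoded JPEG. The helper names, step size, forward-difference subgradient and DCT normalization are simplifying assumptions relative to the exact JPEG conventions, and the image dimensions are assumed to be multiples of 8; Q may be a scalar or the 8x8 quantization table.

```python
import numpy as np
from scipy.fft import dctn, idctn

def project_dct(u, k_quant, Q):
    """Clip each 8x8 block's DCT coefficients into the interval
    [(k - 1/2) Q, (k + 1/2) Q] implied by the decoded integer
    coefficients k_quant."""
    out = np.empty_like(u)
    for i in range(0, u.shape[0], 8):
        for j in range(0, u.shape[1], 8):
            c = dctn(u[i:i+8, j:j+8], norm='ortho')
            k = k_quant[i:i+8, j:j+8]
            out[i:i+8, j:j+8] = idctn(np.clip(c, (k - 0.5) * Q, (k + 0.5) * Q),
                                      norm='ortho')
    return out

def tv_subgradient(u, eps=1e-8):
    # minus the divergence of the normalized forward-difference gradient
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    n = np.sqrt(gx**2 + gy**2 + eps)
    return -(np.diff(gx / n, axis=1, prepend=0.0) +
             np.diff(gy / n, axis=0, prepend=0.0))

def deblock(u0, k_quant, Q, n_iter=20, step=0.5):
    u = u0.copy()
    for _ in range(n_iter):     # few iterations already help, as in the paper
        u = project_dct(u - step * tv_subgradient(u), k_quant, Q)
    return u
```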


Journal ArticleDOI
TL;DR: An approach to optimal object segmentation in the geodesic active contour framework is presented with application to automated image segmentation and an efficient algorithm is presented for the computation of globally optimal segmentations.
Abstract: An approach to optimal object segmentation in the geodesic active contour framework is presented with application to automated image segmentation. The new segmentation scheme seeks the geodesic active contour of globally minimal energy under the sole restriction that it contains a specified internal point p_int. This internal point selects the object of interest and may be used as the only input parameter to yield a highly automated segmentation scheme. The image to be segmented is represented as a Riemannian space S with an associated metric induced by the image. The metric is an isotropic and decreasing function of the local image gradient at each point in the image, encoding the local homogeneity of image features. Optimal segmentations are then the closed geodesics which partition the object from the background with minimal similarity across the partitioning. An efficient algorithm is presented for the computation of globally optimal segmentations and applied to cell microscopy, x-ray, magnetic resonance and cDNA microarray images.

115 citations


Journal ArticleDOI
TL;DR: It is shown that with the periodic boundary condition, the high-resolution image can be restored efficiently by using fast Fourier transforms; the preconditioned conjugate gradient method is also applied.
Abstract: In this paper, we study the problem of reconstructing a high-resolution image from several blurred low-resolution image frames. The image frames consist of decimated, blurred and noisy versions of the high-resolution image. The high-resolution image is modeled as a Markov random field (MRF), and a maximum a posteriori (MAP) estimation technique is used for the restoration. We show that with the periodic boundary condition, the high-resolution image can be restored efficiently by using fast Fourier transforms. We also apply the preconditioned conjugate gradient method to restore the high-resolution image. Computer simulations are given to illustrate the effectiveness of the proposed method.

113 citations
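
The key computational point, that periodic boundary conditions make the regularized normal equations circulant and hence diagonal in the Fourier basis, can be shown on a simplified single-frame case without decimation. The Laplacian prior and the value of lam below are illustrative stand-ins for the paper's MRF model.

```python
import numpy as np

def restore_fft(y, psf, lam=1e-2):
    """Regularized restoration with periodic boundary conditions.
    Assumes psf is already zero-padded to y's shape and centred, so
    that both the blur and the Laplacian prior are circulant."""
    H = np.fft.fft2(np.fft.ifftshift(psf))        # blur operator eigenvalues
    lap = np.zeros_like(y)
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    L = np.fft.fft2(lap)                          # Laplacian eigenvalues
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H)**2 + lam * np.abs(L)**2)
    return np.real(np.fft.ifft2(X))
```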


Journal ArticleDOI
TL;DR: A variety of digitally-continuous functions that preserve homotopy types or homotopy-related properties such as the digital fundamental group are studied.
Abstract: Several recent papers have adapted notions of geometric topology to the emerging field of 'digital topology'. An important notion is that of digital homotopy. In this paper, we study a variety of digitally-continuous functions that preserve homotopy types or homotopy-related properties such as the digital fundamental group.

106 citations


Journal ArticleDOI
TL;DR: This paper proposes and proves a characterization of the points that can be lowered during a W-thinning, which may be checked locally and efficiently implemented thanks to a data structure called component tree, and proposes quasi-linear algorithms for computing M-watersheds and topological watersheds.
Abstract: The watershed transformation is an efficient tool for segmenting grayscale images. An original approach to the watershed (Bertrand, Journal of Mathematical Imaging and Vision, Vol. 22, Nos. 2/3, pp. 217-230, 2005; Couprie and Bertrand, Proc. SPIE Vision Geometry VI, Vol. 3168, pp. 136-146, 1997) consists in modifying the original image by lowering some points while preserving some topological properties, namely, the connectivity of each lower cross-section. Such a transformation (and its result) is called a W-thinning, a topological watershed being an "ultimate" W-thinning. In this paper, we study algorithms to compute topological watersheds. We propose and prove a characterization of the points that can be lowered during a W-thinning, which may be checked locally and efficiently implemented thanks to a data structure called the component tree. We introduce the notion of M-watershed of an image F, which is a W-thinning of F in which the minima cannot be extended anymore without changing the connectivity of the lower cross-sections. The set of points in an M-watershed of F which do not belong to any regional minimum corresponds to a binary watershed of F. We propose quasi-linear algorithms for computing M-watersheds and topological watersheds. These algorithms are proved to give correct results with respect to the definitions, and their time complexity is analyzed.

101 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a way to compute openings and closings over large numbers of constrained, oriented paths in an efficient manner, suitable for building filters with applications to the analysis of oriented features, such as texture.
Abstract: This paper lays the theoretical foundations of path openings and closings. The traditional morphological filter used for the analysis of linear structures in images is the union of openings (or the intersection of closings) by linear segments. However, structures in images are rarely strictly straight, and as a result a more flexible approach is needed. An extension of the idea of using straight line segments as structuring elements is to use constrained paths, i.e. discrete, one-pixel-thick successions of pixels oriented in a particular direction, but in general forming curved rather than perfectly straight lines. However, the number of such paths is prohibitive and the resulting algorithm by simple composition is inefficient. In this paper we propose a way to compute openings and closings over large numbers of constrained, oriented paths in an efficient manner, suitable for building filters with applications to the analysis of oriented features, such as texture.

91 citations
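
A minimal sketch of a binary path opening for one orientation family (near-vertical paths that may drift one column per row), computed with the usual two-pass path-length transform. This is the naive single-direction version; the paper's contribution is doing this efficiently over large families of oriented paths.

```python
import numpy as np

def path_opening_vertical(img, L):
    """Binary path opening for near-vertical paths: each step moves one
    row down and at most one column sideways. Keeps the pixels lying on
    a foreground path of length >= L."""
    f = img.astype(bool)
    H, W = f.shape

    def scan(f):
        lam = np.zeros((H, W), dtype=int)
        lam[0] = f[0]
        for i in range(1, H):
            prev = lam[i - 1]
            best = np.maximum(prev, np.maximum(np.roll(prev, 1), np.roll(prev, -1)))
            best[0] = max(prev[0], prev[1])     # undo the wrap-around at borders
            best[-1] = max(prev[-1], prev[-2])
            lam[i] = np.where(f[i], best + 1, 0)
        return lam

    down = scan(f)               # longest admissible path ending at each pixel
    up = scan(f[::-1])[::-1]     # longest admissible path starting at each pixel
    return (down + up - 1) >= L  # total path length through each pixel
```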


Journal ArticleDOI
TL;DR: An alternative solution is proposed, in which the topographic surface is modified in such a way that flooding it with a non-viscous fluid produces the same lakes as flooding the original relief with a viscous fluid.
Abstract: The watershed transform is the basic morphological tool for image segmentation. Watershed lines, also called divide lines, are a topographical concept: a drop of water falling on a topographical surface follows a steepest descent line until it stops when reaching a regional minimum. Falling on a divide line, the same drop of water may glide towards one or the other of the two adjacent catchment basins. For segmenting an image, one takes as topographic surface the modulus of its gradient: the associated watershed lines will follow the contour lines in the initial image. The trajectory of a drop of water is disturbed if the relief is not smooth: it is undefined, for instance, on plateaus. On the other hand, each regional minimum of the gradient image is the attraction point of a catchment basin. As gradient images generally present many minima, the result is a strong oversegmentation. For these reasons a more robust scheme is used for the construction of the watershed, based on flooding: a set of sources is defined, pouring water in such a way that the altitude of the water increases with constant speed. As the flooding proceeds, the boundaries of the lakes propagate in the direction of the steepest descent line of the gradient. The set of points where lakes created by two distinct sources meet are the contours. As the sources are far less numerous than the minima, there is no oversegmentation anymore. And on the plateaus the flooding is also well defined and propagates from the boundary towards the inside of the plateau. Used in conjunction with markers, the watershed is a powerful, fast and robust segmentation method. Powerful: it has been used with success in a variety of applications. Robust: it is insensitive to the precise placement or shape of the markers. Fast: efficient algorithms are able to mimic the progression of the flood. In some cases, however, the resulting segmentation will be poor: the contours always belong to the watershed lines of the gradient, and these lines are poorly defined when the initial image is blurred or extremely noisy. In such cases, an additional regularization has to take place. Denoising and filtering the image before constructing the gradient is a widely used method. It is, however, not always sufficient. In some cases, one desires to smooth the contour, despite the chaotic fluctuations of the watershed lines. For this, two options are possible. The first consists in using a viscous fluid for the flooding: a viscous fluid will not be able to follow all irregularities of the relief and will produce lakes with smooth boundaries. Simulating a viscous fluid is however computationally intensive. For this reason we propose an alternative solution, in which the topographic surface is modified in such a way that flooding it with a non-viscous fluid will produce the same lakes as flooding the original relief with a viscous fluid. On this new relief, the standard watershed algorithm can be used, which has been optimized for various architectures. Two types of viscous fluids will be presented, yielding two distinct regularization methods. We will illustrate the method on various examples.
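
A loose sketch of the "modified relief" idea using off-the-shelf operators: close the gradient image before applying a standard marker-based watershed. The paper's viscous transforms make the closing depend on altitude; the single fixed radius here is a simplifying assumption of this sketch.

```python
from skimage.morphology import closing, disk
from skimage.segmentation import watershed

def viscous_watershed(gradient_img, markers, radius=3):
    """Close the relief, then flood it with the standard (non-viscous)
    marker-based watershed; the closing plays the role of the viscosity."""
    relief = closing(gradient_img, disk(radius))
    return watershed(relief, markers=markers)
```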

Journal ArticleDOI
TL;DR: In this article, a new point-based elastic registration scheme for medical images based on elastic body splines (EBS) is introduced; by varying the standard deviation of the Gaussian forces, the method is well suited to cope with local as well as global differences in the images.
Abstract: We introduce a new point-based elastic registration scheme for medical images which is based on elastic body splines (EBS). Since elastic body splines result from a physical model in the form of analytical solutions of the Navier equation, these splines describe elastic deformations of physical objects. This property is advantageous in medical registration applications, in which the geometric differences between the images are often caused by physical deformations of human tissue due to surgical interventions or pathological processes. In this contribution we introduce a new class of elastic body splines which is based on Gaussian forces (GEBS). By varying the standard deviation of the Gaussian forces, our new approach is well suited to cope with local as well as global differences in the images. This is in contrast to the previous EBS approach, where polynomial and rational forces have been used. We demonstrate the performance of our new approach by presenting two different kinds of experiments. First, we demonstrate that this approach closely approximates deformations given by an analytic solution of the Navier equation. Second, we apply our approach to pre- and postsurgical tomographic images of the human brain. It turns out that the new EBS approach models the physical deformation behavior of tissues well and, in the case of local deformations, performs significantly better than the previous EBS approach.

Journal ArticleDOI
TL;DR: The theoretical results of this paper develop foundations for unifying large classes of nonlinear translation-invariant image and signal processing systems of the max or min type.
Abstract: This paper explores some aspects of the algebraic theory of mathematical morphology from the viewpoints of minimax algebra and translation-invariant systems, and extends them to a more general algebraic structure that includes generalized Minkowski operators and lattice fuzzy image operators. This algebraic structure is based on signal spaces that combine the sup-inf lattice structure with a scalar semi-ring arithmetic that possesses generalized 'additions' and '⋆-multiplications'. A unified analysis is developed for: (i) representations of translation-invariant operators compatible with these generalized algebraic structures as nonlinear sup-⋆ convolutions, and (ii) kernel representations of increasing translation-invariant operators as suprema of erosion-like nonlinear convolutions by kernel elements. The theoretical results of this paper develop foundations for unifying large classes of nonlinear translation-invariant image and signal processing systems of the max or min type. The envisioned applications lie in the broad intersection of mathematical morphology, minimax signal algebra and fuzzy logic.
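
To make the sup-⋆ convolution concrete, take the scalar "multiplication" ⋆ to be ordinary addition; the convolution then reduces to the familiar grey-level dilation of a signal by a structuring function. A small 1D sketch:

```python
import numpy as np

def sup_plus_convolution(f, g):
    """Nonlinear sup-'+' convolution: out(x) = max_y f(y) + g(x - y).
    With '+' as the generalized multiplication this is exactly the
    grey-level dilation of f by the structuring function g."""
    n, m = len(f), len(g)
    out = np.full(n + m - 1, -np.inf)
    for x in range(n + m - 1):
        for y in range(max(0, x - m + 1), min(n, x + 1)):
            out[x] = max(out[x], f[y] + g[x - y])
    return out

f = np.array([0., 2., 5., 1.])
g = np.array([0., 1., 0.])     # structuring function
print(sup_plus_convolution(f, g))
```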

Journal ArticleDOI
TL;DR: A theoretical framework of anchors is introduced that aims at a better understanding of the process involved in the computation of erosions and openings, and an algorithm for one-dimensional erosions and openings which exploits opening anchors is proposed.
Abstract: Several efficient algorithms for computing erosions and openings have been proposed recently. They improve on van Herk's algorithm in terms of the number of comparisons for large structuring elements. In this paper we introduce a theoretical framework of anchors that aims at a better understanding of the process involved in the computation of erosions and openings. It is shown that knowledge of the opening anchors of a signal f is sufficient to perform both the erosion and the opening of f. We then propose an algorithm for one-dimensional erosions and openings which exploits opening anchors. This algorithm improves on the fastest algorithms available in the literature by approximately 30% in terms of computation speed, for a range of structuring element sizes and image contents.
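
For context, the van Herk / Gil-Werman recursive scheme that these algorithms improve on computes a 1D erosion in roughly three comparisons per sample, independently of the structuring element size:

```python
import numpy as np

def erode_van_herk(f, k):
    """van Herk / Gil-Werman 1D erosion with a window of k samples
    (left-aligned): out[i] = min(f[i], ..., f[i+k-1])."""
    n = len(f)
    g = np.empty(n); h = np.empty(n)
    for i in range(n):                     # prefix minima within blocks of k
        g[i] = f[i] if i % k == 0 else min(g[i - 1], f[i])
    for i in range(n - 1, -1, -1):         # suffix minima within blocks of k
        h[i] = f[i] if i % k == k - 1 or i == n - 1 else min(h[i + 1], f[i])
    return np.minimum(h[:n - k + 1], g[k - 1:])

f = np.array([3., 1., 4., 1., 5., 9., 2., 6.])
print(erode_van_herk(f, 3))    # -> [1. 1. 1. 1. 2. 2.]
```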

Journal ArticleDOI
TL;DR: An improved method to select the most meaningful level lines (boundaries of level sets) from an image is presented, refining the original method proposed in [10]; it is empirically shown that regularity makes detection more robust but does not qualitatively change the results.
Abstract: Since the beginning, Mathematical Morphology has proposed to extract shapes from images as connected components of level sets. These methods have proved very efficient in shape recognition and shape analysis. In this paper, we present an improved method to select the most meaningful level lines (boundaries of level sets) from an image. This extraction can be based on statistical arguments, leading to a parameter-free algorithm. It makes it possible to roughly extract all pieces of level lines of an image that coincide with pieces of edges. By this method, the number of encoded level lines is reduced by a factor of 100, without any loss of shape content. In contrast to edge detection algorithms or snake methods, such a level line selection method delivers accurate shape elements, without user parameters, since the selection parameters can be computed by the Helmholtz principle. The paper aims at improving the original method proposed in [10]. We give a mathematical interpretation of the model, which explains why some pieces of curves are over-detected. We introduce a multiscale approach that makes the method more robust to noise. A more local algorithm is introduced, taking local contrast variations into account. Finally, we empirically show that regularity makes detection more robust but does not qualitatively change the results.
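
A sketch of the statistical selection step in the spirit of the Helmholtz principle, following the Desolneux-Moisan-Morel style formulation; the exact normalization and the length/2 independence heuristic are assumptions of this sketch, and the paper's multiscale and local refinements are not shown. A level line is kept ("meaningful") when its number of false alarms is below 1.

```python
import numpy as np

def empirical_tail(grad_norms):
    """Return mu -> empirical P(|grad I| >= mu) over the whole image."""
    g = np.sort(grad_norms.ravel())
    return lambda mu: 1.0 - np.searchsorted(g, mu) / g.size

def nfa_level_line(min_grad, length, tail, n_lines):
    """Number of false alarms of a level line of the given length whose
    minimal gradient norm along the line is min_grad; length/2 reflects
    the assumption that gradient samples two pixels apart are roughly
    independent."""
    return n_lines * tail(min_grad) ** (length / 2.0)
```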

Journal ArticleDOI
TL;DR: The proposed variational principle penalizes a departure from rigidity and thereby provides a natural generalization of strictly rigid registration techniques used widely in medical contexts.
Abstract: In this paper a variational method for registering or mapping like points in medical images is proposed and analyzed. The proposed variational principle penalizes a departure from rigidity and thereby provides a natural generalization of strictly rigid registration techniques used widely in medical contexts. Difficulties with finite displacements are elucidated, and alternative infinitesimal displacements are developed for an optical flow formulation which also permits image interpolation. The variational penalty against non-rigid flows provides sufficient regularization for a well-posed minimization and yet does not rule out irregular registrations corresponding to an object excision. Image similarity is measured by penalizing the variation of intensity along optical flow trajectories. The approach proposed here is also independent of the order in which images are taken. For computations, a lumped finite element Eulerian discretization is used to solve for the optical flow. Also, a Lagrangian integration of the intensity along optical flow trajectories has the advantage of prohibiting diffusion among trajectories which would otherwise blur interpolated images. The subtle aspects of the methods developed are illustrated in terms of simple examples, and the approach is finally applied to the registration of magnetic resonance images.

Journal ArticleDOI
TL;DR: This work summarizes the theoretical foundations needed to deal with the pose problem and contains mainly basics of Euclidean, projective and conformal geometry, the last of which is not well known in computer science.
Abstract: 2D-3D pose estimation means to estimate the relative position and orientation of a 3D object with respect to a reference camera system. This work has its main focus on the theoretical foundations of the 2D-3D pose estimation problem: We discuss the involved mathematical spaces and their interaction within higher order entities. To cope with the pose problem (how to compare 2D projective image features with 3D Euclidean object features), the principle we propose is to reconstruct image features (e.g. points or lines) to one dimensional higher entities (e.g. 3D projection rays or 3D reconstructed planes) and express constraints in the 3D space. It turns out that the stratification hierarchy [11] introduced by Faugeras is involved in the scenario. But since the stratification hierarchy is based on pure point concepts, a new algebraic embedding is required when dealing with higher order entities. The conformal geometric algebra (CGA) [24] is well suited to solve this problem, since it subsumes the involved mathematical spaces. Operators are defined to switch entities between the algebras of the conformal space and its Euclidean and projective subspaces. This leads to another interpretation of the stratification hierarchy, which is not restricted to be based solely on point concepts. This work summarizes the theoretical foundations needed to deal with the pose problem. Therefore it contains mainly basics of Euclidean, projective and conformal geometry. Since conformal geometry in particular is not well known in computer science, we recapitulate the mathematical concepts in some detail. We believe that this geometric model is also useful for many other computer vision tasks and has been overlooked so far. Applications of these foundations are presented in Part II [36].

Journal ArticleDOI
TL;DR: This work extends FNS, CFNS and HEIV to more general cost functions, allowing the three methods to be placed within a common framework for finding unconstrained (FNS, HEIV) and constrained (CFNS) minimisers.
Abstract: Estimation of parameters from image tokens is a central problem in computer vision. FNS, CFNS and HEIV are three recently developed methods for solving special but important cases of this problem. The schemes are means for finding unconstrained (FNS, HEIV) and constrained (CFNS) minimisers of cost functions. In earlier work of the authors, FNS, CFNS and a core version of HEIV were applied to a specific cost function. Here we extend the approach to more general cost functions. This allows the FNS, CFNS and HEIV methods to be placed within a common framework.
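
A hedged sketch of the FNS-style iteration for an AML-type cost J(theta) = sum_i (theta' A_i theta)/(theta' B_i theta), with A_i and B_i symmetric data and covariance carrier matrices. This follows the published scheme in outline and is not necessarily the paper's generalized cost.

```python
import numpy as np

def fns(A_list, B_list, theta0, n_iter=20):
    """Iterate theta <- eigenvector of X(theta) for the eigenvalue of
    smallest magnitude, where
    X(theta) = sum_i A_i/(theta'B_i theta)
             - sum_i (theta'A_i theta)/(theta'B_i theta)^2 * B_i."""
    theta = theta0 / np.linalg.norm(theta0)
    for _ in range(n_iter):
        X = np.zeros((theta.size, theta.size))
        for A, B in zip(A_list, B_list):
            den = theta @ B @ theta
            X += A / den - ((theta @ A @ theta) / den**2) * B
        w, V = np.linalg.eigh(X)
        theta = V[:, np.argmin(np.abs(w))]
    return theta
```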

Journal ArticleDOI
TL;DR: Part II uses the foundations of Part I [35] to define constraint equations for 2D-3D pose estimation of different corresponding entities and proposes to use linearized twist transformations which result in well-conditioned and fast solvable systems of equations.
Abstract: Part II uses the foundations of Part I [35] to define constraint equations for 2D-3D pose estimation of different corresponding entities. Most articles on pose estimation concentrate on specific types of correspondences, mostly between points, and only rarely use line correspondences. The first aim of this part is to extend pose estimation scenarios to correspondences of an extended set of geometric entities. In this context we are interested in relating the following (2D) image and (3D) model types: 2D point/3D point, 2D line/3D point, 2D line/3D line, 2D conic/3D circle, 2D conic/3D sphere. Furthermore, to handle articulated objects, we describe kinematic chains in this context in a similar manner. We ensure that all constraint equations end up in a distance measure in the Euclidean space, which is well posed in the context of noisy data. We also discuss the numerical estimation of the pose. We propose to use linearized twist transformations, which result in well-conditioned and fast solvable systems of equations. The key idea is not to search for the representation of the Lie group describing the rigid body motion, but for the representation of its generating Lie algebra. This leads to real-time capable algorithms.

Journal ArticleDOI
TL;DR: Using threshold decomposition, a denoising algorithm for grey level images is proposed, and a Poisson approximation for the probability of appearance of any local pattern can be computed.
Abstract: Area openings and closings are morphological filters which efficiently suppress impulse noise from an image, by removing small connected components of level sets. The problem of an objective choice of threshold for the area remains open. Here, a mathematical model for random images will be considered. Under this model, a Poisson approximation for the probability of appearance of any local pattern can be computed. In particular, the probability of observing a component with size larger than k in pure impulse noise has an explicit form. This permits the definition of a statistical test on the significance of connected components, thus providing an explicit formula for the area threshold of the denoising filter, as a function of the impulse noise probability parameter. Finally, using threshold decomposition, a denoising algorithm for grey level images is proposed.
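
A sketch of the resulting denoising recipe: choose the smallest area threshold whose expected count of pure-noise components is negligible, then apply grey-level area opening and closing (which implement threshold decomposition internally). The geometric tail bound below is a hypothetical placeholder for the paper's exact Poisson approximation.

```python
from skimage.morphology import area_opening, area_closing

def denoise_impulse(img, p_noise, eps=1.0, k_max=64):
    """Pick the smallest area k whose expected number of noise
    components of size >= k falls below eps, then filter.
    The bound n * (8 p)^k is a crude illustrative stand-in for the
    paper's Poisson formula."""
    n = img.size
    k = next(k for k in range(1, k_max)
             if n * (8.0 * p_noise) ** k < eps)
    return area_closing(area_opening(img, k), k)
```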

Journal ArticleDOI
TL;DR: It will be proved in this article that (in the framework of simplicial complexes) any n-surface is an n-pseudomanifold and that any n-dimensional combinatorial manifold is an n-surface, and it will be shown how topologically consistent Marching Cubes-like algorithms can be designed using the framework of partially ordered sets.
Abstract: Many applications require the extraction of an object boundary from a discrete image. In most cases, the result of such a process is expected to be, topologically, a surface, and this property might be required in subsequent operations. However, only through careful design can such a guarantee be provided. In the present article we will focus on partially ordered sets and the notion of n-surfaces introduced by Evako et al. to deal with this issue. Partially ordered sets are topological spaces that can represent the topology of a wide range of discrete spaces, including abstract simplicial complexes and regular grids. It will be proved in this article that (in the framework of simplicial complexes) any n-surface is an n-pseudomanifold, and that any n-dimensional combinatorial manifold is an n-surface. Moreover, given a subset of an n-surface (an object), we show how to build a partially ordered set called frontier order, which represents the boundary of this object. Similarly to the continuous case, where the boundary of an n-manifold, if not empty, is an (n-1)-manifold, we prove that the frontier order associated to an object is a union of disjoint (n-1)-surfaces. Thanks to this property, we show how topologically consistent Marching Cubes-like algorithms can be designed using the framework of partially ordered sets.

Journal ArticleDOI
TL;DR: This paper considers morphological operations on images whose pixel values are considered as labels without ordering between them, except for a least element ⊥ (meaning no label) and a greatest element ⊤ (meaning conflicting labels).
Abstract: We consider morphological operations on images whose pixel values are considered as labels without ordering between them, except for a least element ⊥ (meaning no label) and a greatest element ⊤ (meaning conflicting labels). Flat dilations and erosions can be defined as in usual grey-level morphology. Since the lattice of label images is not distributive, non-flat operators can be obtained by combination of flat ones. Given any connectivity on sets, there is a connection on label images for which the connected components of an image correspond precisely to its flat zones with their labels attached. Some specific applications of label morphology are given. In the sequel of this paper [20], we will examine geodesic dilations and reconstructions on label images, and show how this variant of mathematical morphology can be applied to the segmentation of moving objects in video sequences [2, 3].
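
A direct transcription of the flat dilation on label images: the join of two distinct proper labels is ⊤ (conflict), and ⊥ is neutral. The encodings (0 for ⊥, -1 for ⊤) and the 4-neighbourhood are assumptions of this sketch.

```python
BOT, TOP = 0, -1   # assumed encodings: 0 = no label, -1 = conflicting labels

def label_dilate(labels):
    """One-step flat dilation of a 2D label image (4-neighbourhood):
    the supremum of two distinct proper labels is TOP, BOT is neutral."""
    out = labels.copy()
    H, W = labels.shape
    for i in range(H):
        for j in range(W):
            v = labels[i, j]
            for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                if not (0 <= ni < H and 0 <= nj < W):
                    continue
                w = labels[ni, nj]
                if w == BOT or w == v or v == TOP:
                    continue            # nothing new to join
                v = w if v == BOT else TOP
            out[i, j] = v
    return out
```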

Journal ArticleDOI
TL;DR: A new class of Radon transforms in a discrete setting for the purpose of applying them to the ridgelet and curvelet transforms is introduced and the effectiveness of some types of the generalized Radon transform in reducing a type of noise known as speckle that is present in synthetic aperture radar (SAR) imagery is studied.
Abstract: We introduce and study a new class of Radon transforms in a discrete setting for the purpose of applying them to the ridgelet and curvelet transforms. We give a detailed analysis of the p-adic case and provide a closed-form formula for an inverse of the p-adic Radon transform. We give conditions for a scaled version of the generalized discrete Radon transform to yield a tight frame, and discuss a direct Radon matrix method for the implementation of a local ridgelet transform. We then study the effectiveness of some types of the generalized Radon transforms in reducing a type of noise known as speckle that is present in synthetic aperture radar (SAR) imagery.

Journal ArticleDOI
TL;DR: Simulation results indicate that the proposed filter consistently outperforms other color image filters by balancing the tradeoff between noise suppression and detail preservation.
Abstract: The Vector Rank M-type K-Nearest Neighbour (VRMKNN) filter to remove impulsive noise from color images and video color sequences is presented. This filter utilizes multichannel image processing by using the vector approach and the Rank M-Type K-Nearest Neighbour (RMKNN) algorithm. Simulation results indicate that the proposed filter consistently outperforms other color image filters by balancing the tradeoff between noise suppression and detail preservation. The filter was implemented on the TMS320C6711 DSP to demonstrate that it could potentially provide a real-time solution for quality video transmission.

Journal ArticleDOI
TL;DR: This paper proposes the notion of a σ-connected operator, that is, an operator connected at scale σ, and discusses the application of multiscale connected openings in granulometric analysis, where both size and connectivity information are summarized.
Abstract: Among the major developments in Mathematical Morphology in the last two decades are the interrelated subjects of connectivity classes and connected operators. Braga-Neto and Goutsias have proposed an extension of the theory of connectivity classes to a multiscale setting, whereby one can assign connectivity to an object observed at different scales. In this paper, we study connected operators in the context of multiscale connectivity. We propose the notion of a σ-connected operator, that is, an operator connected at scale σ. We devote some attention to the study of binary σ-grain operators. In particular, we show that families of σ-grain openings and σ-grain closings, indexed by the connectivity scale parameter, are granulometries and anti-granulometries, respectively. We demonstrate the use of multiscale connected operators with image analysis applications. The first is the scale-space representation of grayscale images using multiscale levelings, where the role of scale is played by the connectivity scale. Then we discuss the application of multiscale connected openings in granulometric analysis, where both size and connectivity information are summarized. Finally, we describe an application of multiscale connected operators to an automatic target recognition problem.

Journal ArticleDOI
TL;DR: It is shown how the mathematical framework of catastrophe theory can be used to describe the different types of annihilations and the creation of pairs of critical points and how this knowledge can be exploited in a scale space hierarchy tree for the purpose of a topology based segmentation.
Abstract: In order to investigate the deep structure of Gaussian scale space images, one needs to understand the behaviour of critical points under the influence of blurring. We show how the mathematical framework of catastrophe theory can be used to describe the different types of annihilations and the creation of pairs of critical points, and how this knowledge can be exploited in a scale space hierarchy tree for the purpose of a topology-based segmentation. A key role is played by scale space saddles and the iso-intensity manifolds through them. We discuss the role of non-generic catastrophes and their influence on the tree and the segmentation. Furthermore, based on the structure of iso-intensity manifolds, we discuss why creations of pairs of critical points do not influence the tree. We clarify the theory with an artificial image and a simulated MR image.

Journal ArticleDOI
TL;DR: This paper investigates whether sequences of illumination spectra can be described by one-parameter subgroups of Lorentz transformations and presents two methods to estimate the parameters of such a curve from a set of coordinate points.
Abstract: It is known that for every selection of illumination spectra there is a coordinate system such that all coordinate vectors of these illumination spectra are located in a cone. A natural set of transformations of this cone are the Lorentz transformations. In this paper we investigate whether sequences of illumination spectra can be described by one-parameter subgroups of Lorentz transformations. We present two methods to estimate the parameters of such a curve from a set of coordinate points. We also use an optimization technique to approximate a given set of points by a one-parameter curve with a minimum approximation error. In the experimental part of the paper we investigate series of blackbody radiators and sequences of measured daylight spectra, and show that one-parameter curves provide good approximations for large sequences of illumination spectra.

Journal ArticleDOI
TL;DR: A formal definition of a combinatorial topology is presented for the discrete N-dimensional space defined by the An* lattice, which induces the simplest discrete topology definition because its dual is a K-simplex.
Abstract: A formal definition of a combinatorial topology is presented in this paper for the discrete N-dimensional space defined by the An* lattice. The use of this grid instead of the classical Zn is based on two arguments: it is the optimal sampling grid in the sense of Shannon's sampling theorem in 2 and 3 dimensions, and it induces the simplest discrete topology definition because its dual is a K-simplex. The interest of this grid in image processing and in the medical imaging field is presented with some examples.

Journal ArticleDOI
Renato Keshet
TL;DR: A new, self-dual approach for morphological image processing, based on a semilattice framework, is introduced, which is generalized to grayscale functions thanks to the tree of shapes, a recently introduced generalization of adjacency trees.
Abstract: A new, self-dual approach for morphological image processing, based on a semilattice framework, is introduced. The related morphological erosion, in particular, shrinks all 'objects' in an image, regardless of whether they are bright or dark. The theory is first developed for the binary case, where it is closely related to the adjacency tree. Under certain constraints, it is shown to yield a lattice structure, which is complete for discrete images. It is then generalized to grayscale functions thanks to the tree of shapes, a recently introduced generalization of adjacency trees.

Journal ArticleDOI
TL;DR: This paper analyses the behavior in scale space of linear junction models (L, Y and X models), nonlinear junction models, and linear junction multi-models and shows that for infinite models the Laplacian of the Gaussian at the corner point is not always equal to zero.
Abstract: This paper analyses the behavior in scale space of linear junction models (L, Y and X models), nonlinear junction models, and linear junction multi-models. The variation of the grey level is considered to be constant, linear or nonlinear in the case of linear models and constant for the other models. We are mainly interested in the extremum points provided by the Laplacian of the Gaussian function. Moreover, we show that for infinite models the Laplacian of the Gaussian at the corner point is not always equal to zero.
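
A quick numerical illustration of the closing claim, on finite synthetic stand-ins for the infinite models: for a right-angle L junction with constant grey level, the LoG response at the corner vanishes (each constant sector integrates the radially symmetric LoG kernel to zero), while a linearly varying grey level yields a nonzero response. The image size and sigma are arbitrary choices, not the paper's analytic models.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

n = 257; c = n // 2
yy, xx = np.mgrid[:n, :n]
quadrant = (xx >= c) & (yy >= c)            # 90-degree L junction at (c, c)

flat = quadrant.astype(float)               # constant grey level
ramp = quadrant * (xx - c) / float(c)       # linearly varying grey level

for name, img in (("constant", flat), ("linear", ramp)):
    print(name, gaussian_laplace(img, sigma=4.0)[c, c])
# the constant model gives ~0 at the corner point; the ramp does not
```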