
Showing papers in "Journal of Mathematical Imaging and Vision in 2007"


Journal ArticleDOI
TL;DR: This paper proposes to unify three well-known image variational models, namely the snake model, the Rudin–Osher–Fatemi denoising model and the Mumford–Shah segmentation model, and establishes theorems with proofs to determine a global minimum of the active contour model.
Abstract: The active contour/snake model is one of the most successful variational models in image segmentation. It consists of evolving a contour in images toward the boundaries of objects. Its success is based on strong mathematical properties and efficient numerical schemes based on the level set method. The only drawback of this model is the existence of local minima in the active contour energy, which makes the initial guess critical for obtaining satisfactory results. In this paper, we propose to solve this problem by determining a global minimum of the active contour model. Our approach is based on the unification of image segmentation and image denoising tasks into a global minimization framework. More precisely, we propose to unify three well-known image variational models, namely the snake model, the Rudin-Osher-Fatemi denoising model and the Mumford-Shah segmentation model. We establish theorems, with proofs, on the existence of a global minimum of the active contour model. From a numerical point of view, we propose a new practical way to solve the active contour propagation problem toward object boundaries through a dual formulation of the minimization problem. The dual formulation, easy to implement, allows a fast global minimization of the snake energy. It avoids the usual drawback of the level set approach, namely initializing the active contour as a distance function and re-initializing it periodically during the evolution, which is time-consuming. We apply our segmentation algorithms to synthetic and real-world images, such as texture images and medical images, to demonstrate the performance of our model compared with other segmentation models.
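As a hedged illustration of the type of convex energy such global-minimization approaches rest on (a sketch; the paper's exact functional and notation are not reproduced here), a weighted total variation is minimized over a relaxed label function u taking values in [0,1]:

    \min_{0 \le u \le 1} \; \int_\Omega g(x)\,|\nabla u(x)|\,dx
        \;+\; \lambda \int_\Omega \big[(c_1 - f(x))^2 - (c_2 - f(x))^2\big]\, u(x)\,dx

where f is the image, g an edge-detector function and c_1, c_2 region averages. Because the energy is convex in u, thresholding a minimizer at almost any level in (0,1) yields a global minimum of the associated contour problem, which is what removes the sensitivity to the initial guess.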

909 citations


Journal ArticleDOI
TL;DR: A new variational model for denoising images corrupted by Poisson noise uses total-variation regularization, which preserves edges; because the data-fidelity term is adapted to Poisson statistics, the strength of the regularization becomes signal dependent, precisely like Poisson noise.
Abstract: We propose a new variational model to denoise an image corrupted by Poisson noise. Like the ROF model described in [1] and [2], the new model uses total-variation regularization, which preserves edges. Unlike the ROF model, our model uses a data-fidelity term that is suitable for Poisson noise. The result is that the strength of the regularization is signal dependent, precisely like Poisson noise. Noise of varying scales will be removed by our model, while preserving low-contrast features in regions of low intensity.
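For orientation, a Poisson-adapted total variation model of this kind is usually written as follows (a sketch consistent with the abstract; constants and scaling conventions may differ from the paper):

    \min_{u > 0} \; \int_\Omega |\nabla u|\,dx \;+\; \lambda \int_\Omega \big(u - f\,\log u\big)\,dx

The data term is the negative Poisson log-likelihood of the observation f given the intensity u, which is what makes the effective regularization strength signal dependent, in contrast to the quadratic fidelity of the ROF model.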

412 citations


Journal ArticleDOI
TL;DR: Experiments with the inpainting of gray tone and color images show that the novel algorithm meets the high level of quality of the methods of Bertalmio et al. while being faster by at least an order of magnitude.
Abstract: High-quality image inpainting methods based on nonlinear higher-order partial differential equations have been developed in the last few years. These methods are iterative by nature, with a time variable serving as iteration parameter. For reasons of stability a large number of iterations can be needed, which results in a computational complexity that is often too large for interactive image manipulation. Based on a detailed analysis of stationary first-order transport equations, the current paper develops a fast noniterative method for image inpainting. It traverses the inpainting domain by the fast marching method just once while transporting, along the way, image values in a coherence direction robustly estimated by means of the structure tensor. Depending on a measure of coherence strength the method switches continuously between diffusion and directional transport. It satisfies a comparison principle. Experiments with the inpainting of gray tone and color images show that the novel algorithm meets the high level of quality of the methods of Bertalmio et al. (SIGGRAPH '00: Proc. 27th Conf. on Computer Graphics and Interactive Techniques, New Orleans, ACM Press/Addison-Wesley, New York, pp. 417-424, 2000), Masnou (IEEE Trans. Image Process. 11(2):68-76, 2002), and Tschumperlé (Int. J. Comput. Vis. 68(1):65-82, 2006), while being faster by at least an order of magnitude.
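A minimal sketch of the single-pass fill-in idea is given below, assuming NumPy and SciPy are available. It orders missing pixels by a Euclidean distance transform (a stand-in for the fast marching front) and uses isotropic neighbor weights, so it only illustrates the one-sweep transport structure, not the paper's structure-tensor coherence weighting or the diffusion/transport switching.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def transport_inpaint(image, mask, eps=1e-6):
        """Fill masked pixels in a single sweep, nearest-to-boundary first.

        image : 2-D float array (values inside the hole are ignored).
        mask  : boolean array, True where the pixel is missing.
        """
        out = image.astype(float).copy()
        known = ~mask
        dist = distance_transform_edt(mask)          # distance to the known region
        ys, xs = np.nonzero(mask)
        order = np.argsort(dist[mask])               # fill closest pixels first
        h, w = out.shape
        for y, x in zip(ys[order], xs[order]):
            num = den = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and known[ny, nx]:
                        wgt = 1.0 / (eps + dy * dy + dx * dx)   # closer pixels weigh more
                        num += wgt * out[ny, nx]
                        den += wgt
            if den > 0.0:
                out[y, x] = num / den
                known[y, x] = True                   # becomes a source for later pixels
        return out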

209 citations


Journal ArticleDOI
TL;DR: This paper proposes similarity measures that attempt to capture the “spirit” of dynamic time warping while being defined over continuous domains, and presents efficient algorithms for computing them.
Abstract: The problem of curve matching appears in many application domains, like time series analysis, shape matching, speech recognition, and signature verification, among others. Curve matching has been studied extensively by computational geometers, and many measures of similarity have been examined, among them the Fréchet distance (sometimes referred to in folklore as the "dog-man" distance). A measure that is very closely related to the Fréchet distance but has never been studied in a geometric context is the Dynamic Time Warping measure (DTW), first used in the context of speech recognition. This measure is ubiquitous across different domains, a surprising fact because notions of similarity usually vary significantly depending on the application. However, this measure suffers from some drawbacks, most importantly the fact that it is defined between sequences of points rather than curves. Thus, the way in which a curve is sampled to yield such a sequence can dramatically affect the quality of the result. Some attempts have been made to generalize the DTW to continuous domains, but the resulting algorithms have exponential complexity. In this paper we propose similarity measures that attempt to capture the "spirit" of dynamic time warping while being defined over continuous domains, and present efficient algorithms for computing them. Our formulation leads to a very interesting connection with finding short paths in a combinatorial manifold defined on the input chains, and in a deeper sense relates to the way light travels in a medium of variable refractivity.
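For reference, the discrete, sequence-based DTW that the paper takes as its starting point is computed by the classical dynamic program below (a standard sketch; the continuous measures proposed in the paper are not reproduced here).

    import numpy as np

    def dtw(a, b):
        """Dynamic Time Warping cost between point sequences a (n, d) and b (m, d)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],       # advance on a only
                                     cost[i, j - 1],       # advance on b only
                                     cost[i - 1, j - 1])   # advance on both
        return cost[n, m]

The result depends on how the curves are sampled into sequences, which is exactly the drawback the continuous formulation is designed to avoid.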

178 citations


Journal ArticleDOI
TL;DR: The mimetic finite difference method introduced by Hyman and Shashkov is exploited to build a framework for estimating vector fields and related scalar fields (divergence, curl) of physical interest from image sequences; it provides a basis for consistent definitions of higher-order differential operators.
Abstract: We exploit the mimetic finite difference method introduced by Hyman and Shashkov to present a framework for estimating vector fields and related scalar fields (divergence, curl) of physical interest from image sequences. Our approach provides a basis for consistent definitions of higher-order differential operators, for the analysis of, and a novel stability result concerning, second-order div-curl regularizers, for novel variational schemes for the estimation of solenoidal (divergence-free) image flows, and for convergent numerical methods in terms of subspace corrections.
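As a hedged illustration of the objects involved (generic form; weights and notation are not the paper's), a variational motion-estimation functional with a second-order div-curl regularizer reads

    \min_{u} \; \int_\Omega \big(\partial_t I + u \cdot \nabla I\big)^2\,dx
        \;+\; \lambda \int_\Omega \Big(|\nabla \operatorname{div}\, u|^2 + |\nabla \operatorname{curl}\, u|^2\Big)\,dx

where I is the image sequence and u the flow field. Mimetic discretizations matter here because the discrete div and curl operators then satisfy the same identities as their continuous counterparts, which is what makes such higher-order regularizers tractable and stable.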

116 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that the generalised three-point pose problem, in which the camera is allowed to sample rays in some arbitrary but known fashion and is not assumed to perform a central perspective projection, has up to eight solutions that can be found as the intersections between a circle and a ruled quartic surface.
Abstract: It is a well-known classical result that, given the image projections of three known world points, it is possible to solve for the pose of a calibrated perspective camera up to four pairs of solutions. We solve the generalised problem where the camera is allowed to sample rays in some arbitrary but known fashion and is not assumed to perform a central perspective projection. That is, given three back-projected rays that emanate from a camera or multi-camera rig in an arbitrary but known fashion, we seek the possible poses of the camera such that the three rays meet three known world points. We show that the generalised problem has up to eight solutions that can be found as the intersections between a circle and a ruled quartic surface. A minimal and efficient constructive numerical algorithm is given to find the solutions. The algorithm derives an octic polynomial whose roots correspond to the solutions. In the classical case, when the three rays are concurrent, the ruled quartic surface and the circle possess a reflection symmetry such that their intersections come in symmetric pairs. This manifests itself in the vanishing of the odd-order terms of the octic polynomial. As a result, the up to four pairs of solutions can be found in closed form. The proposed algorithm can be used to solve for the pose of any type of calibrated camera or camera rig. The intended use for the algorithm is in a hypothesise-and-test architecture.
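For comparison, in the classical central (concurrent-ray) case the three-point pose problem reduces to Grunert's distance equations, recalled below as background; the generalised, non-concurrent case treated in the paper leads instead to the circle/quartic-surface intersection and the octic polynomial described above.

    s_i^2 + s_j^2 - 2\, s_i s_j \cos\theta_{ij} = d_{ij}^2,
    \qquad (i,j) \in \{(1,2),\, (1,3),\, (2,3)\}

Here the s_i are the unknown depths of the three world points along their viewing rays, θ_ij the known angles between pairs of rays, and d_ij the known inter-point distances.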

113 citations


Journal ArticleDOI
TL;DR: Based on the theory of semismooth operators, semismooth Newton's methods for total variation minimization are studied; convergence and numerical results show the effectiveness of the proposed algorithms.
Abstract: In [2], Chambolle proposed an algorithm for minimizing the total variation of an image. In this short note, based on the theory of semismooth operators, we study semismooth Newton's methods for total variation minimization. Convergence and numerical results are also presented to show the effectiveness of the proposed algorithms.
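For context, the dual formulation from Chambolle's algorithm that such semismooth Newton methods build on can be sketched as follows (standard form; the paper's exact discretization and scaling may differ). The ROF minimizer is recovered as u = f - λ div p*, where p* solves

    \min_{p} \; \big\| \lambda\, \operatorname{div} p - f \big\|_2^2
    \quad \text{subject to} \quad |p_{i,j}| \le 1 \ \text{for every pixel } (i,j)

The pointwise norm constraint is the source of the nonsmooth (but semismooth) structure that Newton-type methods exploit.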

78 citations


Journal ArticleDOI
TL;DR: An anisotropic filter for speckle reduction in ultrasound images and an adaptation of the geodesic active contours technique for the segmentation of breast tumors are presented.
Abstract: In this paper we present an anisotropic filter for speckle reduction in ultrasound images and an adaptation of the geodesic active contours technique for the segmentation of breast tumors. The anisotropic diffusion we propose is based on a texture description provided by a set of Gabor filters and reduces speckle noise while preserving edges. Furthermore, it is used to extract an initial pre-segmentation of breast tumors which serves as initialization for the active contours technique. This technique has been adapted to the characteristics of ultrasonography by adding certain texture-related terms which provide a better discrimination of the regions inside and outside the nodules. These terms yield a more accurate contour when the gradients are not strong and uniform.
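A minimal sketch of the Gabor texture-description stage is shown below, assuming NumPy and SciPy; kernel size, frequencies and orientations are illustrative, and the anisotropic diffusion and active contour stages of the paper are not reproduced.

    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(freq, theta, sigma, size=21):
        """Real part of a 2-D Gabor filter (illustrative parameter choices)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)     # coordinate along the wave direction
        envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
        return envelope * np.cos(2.0 * np.pi * freq * xr)

    def gabor_features(image, freqs=(0.1, 0.2, 0.3), n_orient=4, sigma=3.0):
        """Stack of Gabor responses used as a simple per-pixel texture descriptor."""
        feats = []
        for f in freqs:
            for k in range(n_orient):
                theta = k * np.pi / n_orient
                feats.append(convolve(image.astype(float), gabor_kernel(f, theta, sigma)))
        return np.stack(feats, axis=0)                 # shape: (n_filters, H, W)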

76 citations


Journal ArticleDOI
TL;DR: The quaternion Fourier transform is applied to preprocessing for neural computing: in a new way, the 1D acoustic signals of French spoken words are represented as 2D signals in the time-frequency domain, which greatly reduces the dimension of the feature vector.
Abstract: This paper presents an application of the quaternion Fourier transform to preprocessing for neural computing. In a new way, the 1D acoustic signals of French spoken words are represented as 2D signals in the time-frequency domain. These images are then convolved in the quaternion Fourier domain with a quaternion Gabor filter for the extraction of features. This approach greatly reduces the dimension of the feature vector. Two methods of feature extraction are tested. The feature vectors were used for the training of a simple MLP, a TDNN and a system of neural experts. The improvements in the classification rates of the neural network classifiers are very encouraging, which amply justifies the preprocessing in the quaternion frequency domain. This work also suggests the application of the quaternion Fourier transform to other image processing tasks.
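For reference, one common (two-sided) definition of the quaternion Fourier transform of a 2D signal f is

    F(u,v) = \int_{\mathbb{R}^2} e^{-\mathbf{i}\,2\pi u x}\, f(x,y)\, e^{-\mathbf{j}\,2\pi v y}\,dx\,dy

where i and j denote two of the imaginary quaternion units; the convention actually used in the paper is not reproduced here. Placing exponentials on both sides keeps the two coordinate directions on separate quaternion axes, which is what makes the transform attractive for jointly encoding the time and frequency structure of the spoken-word images.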

75 citations


Journal ArticleDOI
TL;DR: It is shown that under simple and feasible hypotheses, small baseline stereovision can be rehabilitated and even favoured, and block-matching methods, which had become somewhat obsolete for large baseline stereovision, regain their relevance.
Abstract: This paper presents a study of small baseline stereovision. It is generally admitted that, because of the finite resolution of images, getting a good precision in depth from stereovision demands a large angle between the views. In this paper, we show that under simple and feasible hypotheses, small baseline stereovision can be rehabilitated and even favoured. The main hypothesis is that the images should be band limited, in order to achieve sub-pixel precision in the matching process. This assumption is not satisfied for common stereo pairs, yet it becomes realistic for recent spaceborne or aerial acquisition devices. In this context, block-matching methods, which had become somewhat obsolete for large baseline stereovision, regain their relevance. A multi-scale algorithm dedicated to small baseline stereovision is described, along with experiments on small-angle stereo pairs at the end of the paper.
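A small sketch of sub-pixel block matching on a rectified pair is given below (NumPy assumed; window size, search range and the SSD cost are illustrative, and the paper's multi-scale machinery is omitted). Parabolic interpolation of the matching cost around the best integer disparity supplies the sub-pixel estimate that band-limited images make meaningful.

    import numpy as np

    def block_match_subpixel(left, right, y, x, half=4, max_disp=8):
        """Sub-pixel disparity at (y, x); the caller must keep the window inside both images."""
        patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        costs = []
        for d in range(-max_disp, max_disp + 1):
            cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
            costs.append(np.sum((patch - cand) ** 2))      # SSD matching cost
        costs = np.asarray(costs)
        k = int(np.argmin(costs))
        d0 = k - max_disp                                   # best integer disparity
        if 0 < k < len(costs) - 1:
            c_m, c_0, c_p = costs[k - 1], costs[k], costs[k + 1]
            denom = c_m - 2.0 * c_0 + c_p
            if denom > 0:
                return d0 + 0.5 * (c_m - c_p) / denom       # parabolic refinement
        return float(d0)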

72 citations


Journal ArticleDOI
TL;DR: Two algorithms with annealing properties based on the mathematical theory of interacting particle systems are derived and evaluated, providing general guidance on suitable parameter choices for different applications.
Abstract: Interacting and annealing are two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting particle systems approximate a distribution of interest by a finite number of particles where the particles interact between the time steps. In computer vision, they are commonly known as particle filters. Simulated annealing, on the other hand, is a global optimization method derived from statistical mechanics. A recent heuristic approach to fusing these two techniques for motion capture has become known as the annealed particle filter. In order to analyze these techniques, we rigorously derive in this paper two algorithms with annealing properties based on the mathematical theory of interacting particle systems. Convergence results and sufficient parameter restrictions enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the parameters on the performance in various experiments, including the tracking of articulated bodies from noisy measurements. Our results provide general guidance on suitable parameter choices for different applications.
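A compact sketch of one annealed-particle-filter time step follows (NumPy assumed). The annealing schedule, the diffusion noise and the energy function are placeholders, not the tuned parameter choices analysed in the paper.

    import numpy as np

    def annealed_particle_step(particles, energy, betas=(0.2, 0.5, 1.0),
                               noise=0.05, rng=None):
        """One time step of an annealed particle filter.

        particles : (N, d) array of states.
        energy    : function mapping an (N, d) array to N negative log-likelihoods.
        betas     : increasing annealing exponents, coarse to fine.
        """
        rng = np.random.default_rng() if rng is None else rng
        n = len(particles)
        for beta in betas:
            w = np.exp(-beta * energy(particles))        # sharper weighting each layer
            w /= w.sum()
            idx = rng.choice(n, size=n, p=w)             # multinomial resampling
            particles = particles[idx] + noise * rng.standard_normal(particles.shape)
        return particles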

Journal ArticleDOI
TL;DR: A unified a contrario detection method is proposed to solve three classical problems in cluster analysis: assessing the validity of a cluster candidate, defining locally optimal clusters by inclusion, and defining a correct merging rule between meaningful clusters.
Abstract: A unified a contrario detection method is proposed to solve three classical problems in cluster analysis. The first is to evaluate the validity of a cluster candidate. The second problem is that meaningful clusters can contain or be contained in other meaningful clusters; a rule is needed to define locally optimal clusters by inclusion. The third problem is the definition of a correct merging rule between meaningful clusters, permitting one to decide whether they should stay separate or be merged. The motivation of this theory is shape recognition. Matching algorithms usually compute correspondences between more or less local features (called shape elements) between images to be compared. Each pair of matching shape elements leads to a unique transformation (similarity or affine map). The present theory is used to group these shape elements into shapes by detecting clusters in the transformation space.

Journal ArticleDOI
TL;DR: This paper derives necessary equations for the solution of both BV-regularization and the taut string algorithm by computing suitable Gâteaux derivatives and shows that the equivalence follows from a uniqueness result.
Abstract: It is known that discrete BV-regularization and the taut string algorithm are equivalent. In this paper we extend this result to the continuous case. First we derive necessary equations for the solution of both BV-regularization and the taut string algorithm by computing suitable Gâteaux derivatives. The equivalence then follows from a uniqueness result.

Journal ArticleDOI
TL;DR: The resulting new functional ℱ, defined in terms of edge-preserving potential functions φα, inherits many nice properties from φα, including first- and second-order Lipschitz continuity, strong convexity, and positive definiteness of its Hessian.
Abstract: Recently, a powerful two-phase method for restoring images corrupted with high level impulse noise has been developed. The main drawback of the method is the computational efficiency of the second phase which requires the minimization of a non-smooth objective functional. However, it was pointed out in (Chan et al. in Proc. ICIP 2005, pp. 125-128) that the non-smooth data-fitting term in the functional can be deleted since the restoration in the second phase is applied to noisy pixels only. In this paper, we study the analytic properties of the resulting new functional ℱ. We show that ℱ, which is defined in terms of edge-preserving potential functions φα, inherits many nice properties from φα, including the first and second order Lipschitz continuity, strong convexity, and positive definiteness of its Hessian. Moreover, we use these results to establish the convergence of optimization methods applied to ℱ. In particular, we prove the global convergence of some conjugate gradient-type methods and of a recently proposed low complexity quasi-Newton algorithm. Numerical experiments are given to illustrate the convergence and efficiency of the two methods.

Journal ArticleDOI
TL;DR: A new model is proposed which decomposes an image into three parts (structures, textures and noise) based on a local regularization scheme; it is compared with the recent work of Aujol and Chambolle.
Abstract: In the last few years, image decomposition algorithms have been proposed to split an image into two parts: the structures and the textures. These algorithms are not adapted to the case of noisy images because the textures are corrupted by noise. In this paper, we propose a new model which decomposes an image into three parts (structures, textures and noise) based on a local regularization scheme. We compare our results with the recent work of Aujol and Chambolle. We finish by giving another model which combines the advantages of the two previous ones.

Journal ArticleDOI
TL;DR: This paper addresses segmentation and rate-distortion optimization by applying Guigues' algorithm to a hierarchy of partitions constructed with the simplified Mumford-Shah multiscale energy, minimizing the energy over the set of partitions represented in this hierarchy.
Abstract: This paper discusses the interest of the Tree of Shapes of an image as a region-oriented image representation. The Tree of Shapes offers a compact and structured representation of the family of level lines of an image. This representation has been used for many processing tasks such as filtering, registration, or shape analysis. In this paper we show how this representation can be used for segmentation, rate-distortion optimization, and encoding. We address the problem of segmentation and rate-distortion optimization using Guigues' algorithm on a hierarchy of partitions constructed using the simplified Mumford-Shah multiscale energy. To segment an image, we minimize the simplified Mumford-Shah energy functional on the set of partitions represented in this hierarchy. The rate-distortion problem is also solved in this hierarchy of partitions. In the case of encoding, we propose a variational model to select a family of level lines of a gray level image in order to obtain a minimal description of it. Our energy functional represents the cost in bits of encoding the selected level lines while controlling the maximum error of the reconstructed image. In this case, a greedy algorithm is used to minimize the corresponding functional. Some experiments are presented.

Journal ArticleDOI
TL;DR: This paper presents mathematical analysis that proves the existence of the characteristic GGD and provides a sufficient condition for the uniqueness of CGGD, thus establishing a theoretical basis for its use.
Abstract: Generalized Gaussian density (GGD) is a well established model for high frequency wavelet subbands and has been applied in texture image retrieval with very good results. In this paper, we propose to adopt the GGD model in a supervised learning context for texture classification. Given a training set of GGDs, we define a characteristic GGD (CGGD) that minimizes its Kullback-Leibler distance (KLD) to the training set. We present mathematical analysis that proves the existence of our characteristic GGD and provide a sufficient condition for the uniqueness of CGGD, thus establishing a theoretical basis for its use. Our experimental results show that the proposed CGGD signature together with the use of KLD has a very promising recognition performance compared with existing approaches.
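As background (standard results, not taken from the paper), the GGD density and the closed-form Kullback-Leibler divergence between two GGDs that such retrieval and classification pipelines typically rely on are

    p(x;\alpha,\beta) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)}\, e^{-(|x|/\alpha)^{\beta}}

    D\big(p(\cdot;\alpha_1,\beta_1)\,\|\,p(\cdot;\alpha_2,\beta_2)\big)
      = \log\frac{\beta_1\,\alpha_2\,\Gamma(1/\beta_2)}{\beta_2\,\alpha_1\,\Gamma(1/\beta_1)}
        + \Big(\frac{\alpha_1}{\alpha_2}\Big)^{\beta_2}\,
          \frac{\Gamma\big((\beta_2+1)/\beta_1\big)}{\Gamma(1/\beta_1)}
        - \frac{1}{\beta_1}

The characteristic GGD of the paper is then the member of this family that minimizes the sum of such divergences to the GGDs of a training set.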

Journal ArticleDOI
TL;DR: A new algorithm reconstructs binary images from their projections along a small number of directions by performing a sequence of related reconstructions, each using only two projections; experiments demonstrate that the algorithm can compute highly accurate reconstructions from a small number of projections, even in the presence of noise.
Abstract: We present a new algorithm for reconstructing binary images from their projections along a small number of directions. Our algorithm performs a sequence of related reconstructions, each using only two projections. The algorithm makes extensive use of network flow algorithms for solving the two-projection subproblems. Our experimental results demonstrate that the algorithm can compute highly accurate reconstructions from a small number of projections, even in the presence of noise. Although the effectiveness of the algorithm is based on certain smoothness assumptions about the image, even tiny, non-smooth details are reconstructed exactly. The class of images for which the algorithm is most effective includes images of convex objects, but images of objects that contain holes or consist of multiple components can also be reconstructed very well.

Journal ArticleDOI
TL;DR: A level set based variational model to capture a typical class of illusory contours such as the Kanizsa triangle is proposed, which completes missing boundaries in a smooth way via Euler's elastica, and also preserves corners by incorporating curvature information of object boundaries.
Abstract: Illusory contours, such as the classical Kanizsa triangle and square [9], are intrinsic phenomena in human vision. These contours are not completely defined by real object boundaries, but also include illusory boundaries which are not explicitly present in the images. Therefore, the major computational challenge of capturing illusory contours is to complete the illusory boundaries. In this paper, we propose a level set based variational model to capture a typical class of illusory contours such as the Kanizsa triangle. Our model completes missing boundaries in a smooth way via Euler's elastica, and also preserves corners by incorporating curvature information of object boundaries. Our model can capture illusory contours regardless of whether the missing boundaries are straight lines or curves. We compare the choice of the second order Euler's elastica used in our model and that of the first order Euler's elastica developed in Nitzberg-Mumford-Shiota's work on the problem of segmentation with depth [15, 16]. We also prove that, with the incorporation of curvature information of object boundaries, our model can preserve corners as completely as one wants. Finally, we present numerical results obtained by applying our model to some standard illusory contours.
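The boundary-completion term referred to above is Euler's elastica; in its standard form (shown for orientation, without the paper's exact weighting) the energy of a completing curve Γ is

    E(\Gamma) = \int_{\Gamma} \big(a + b\,\kappa^2\big)\,ds, \qquad a, b > 0

where κ is the curvature and s the arc length: the length term closes gaps while the squared-curvature term makes the completion smooth.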

Journal ArticleDOI
TL;DR: A new image operator is presented which solves segmentation by pruning trees of an optimum-path forest created by the Image Foresting Transform, whose roots are seed pixels selected inside a desired object.
Abstract: The Image Foresting Transform (IFT) is a tool for the design of image processing operators based on connectivity, which reduces image processing problems into an optimum-path forest problem in a graph derived from the image. A new image operator is presented, which solves segmentation by pruning trees of the forest. An IFT is applied to create an optimum-path forest whose roots are seed pixels, selected inside a desired object. In this forest, object and background are connected by optimum paths (leaking paths), which cross the object's boundary through its "most weakly connected" parts (leaking pixels). These leaking pixels are automatically identified and their subtrees are eliminated, such that the remaining forest defines the object. Tree pruning runs in linear time, is extensible to multidimensional images, is free of ad hoc parameters, and requires only internal seeds, with little interference from the heterogeneity of the background. These aspects favor solutions for automatic segmentation. We present a formal definition of the obtained objects, algorithms, sufficient conditions for tree pruning, and two applications involving automatic segmentation: 3D MR-image segmentation of the human brain and image segmentation of license plates. Given that its most competitive approach is the watershed transform by markers, we also include a comparative analysis between them.
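A minimal sketch of the underlying IFT computation is given below (NumPy assumed), using a Dijkstra-like propagation with the f_max path cost on a 4-connected grid; the arc-weight choice is illustrative, and the paper's leaking-pixel detection and tree pruning are not reproduced.

    import heapq
    import numpy as np

    def ift_fmax(weights, seeds):
        """Optimum-path forest under the f_max path cost.

        weights : 2-D array of per-pixel arc weights (e.g. gradient magnitude).
        seeds   : list of (row, col) internal seed pixels.
        Returns the path-cost map and, for each pixel, the index of the seed
        rooting its optimum path.
        """
        h, w = weights.shape
        cost = np.full((h, w), np.inf)
        root = -np.ones((h, w), dtype=int)
        heap = []
        for k, (r, c) in enumerate(seeds):
            cost[r, c] = 0.0
            root[r, c] = k
            heapq.heappush(heap, (0.0, r, c))
        while heap:
            c0, r, c = heapq.heappop(heap)
            if c0 > cost[r, c]:
                continue                                   # stale heap entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    new = max(c0, weights[nr, nc])         # f_max: bottleneck path cost
                    if new < cost[nr, nc]:
                        cost[nr, nc] = new
                        root[nr, nc] = root[r, c]
                        heapq.heappush(heap, (new, nr, nc))
        return cost, root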

Journal Article
TL;DR: The convergence of local estimators based on Digital Straight Segment (DSS) recognition is studied; it is closely linked to the asymptotic growth of maximal DSS, for which bounds on both their number and their sizes are shown.
Abstract: Discrete geometric estimators approach geometric quantities on digitized shapes without any knowledge of the continuous shape. A classical yet difficult problem is to show that an estimator asymptotically converges toward the true geometric quantity as the resolution increases. We study here the convergence of local estimators based on Digital Straight Segment (DSS) recognition. It is closely linked to the asymptotic growth of maximal DSS, for which we show bounds on both their number and their sizes. These results not only give better insights about digitized curves but indicate that curvature estimators based on local DSS recognition are not likely to converge. We indeed invalidate a hypothesis which was essential in the only known convergence theorem of a discrete curvature estimator. The proof involves results from arithmetic properties of digital lines, digital convexity, combinatorics, continued fractions and random polytopes.

Journal ArticleDOI
TL;DR: A novel concept of shape prior for the processing of tubular structures in 3D images based on the notion of an anisotropic area energy and the corresponding geometric gradient flow is proposed.
Abstract: We propose a novel concept of shape prior for the processing of tubular structures in 3D images. It is based on the notion of an anisotropic area energy and the corresponding geometric gradient flow. The anisotropic area functional incorporates a locally adapted template as a shape prior for tubular vessel structures consisting of elongated, ellipsoidal shape models. The gradient flow for this functional leads to an anisotropic curvature motion model, where the evolution is driven locally in direction of the considered template. The problem is formulated in a level set framework, and a stable and robust method for the identification of the local prior is presented. The resulting algorithm is able to smooth the vessels, pushing solution toward elongated cylinders with round cross sections, while bridging gaps in the underlying raw data. The implementation includes a finite-element scheme for numerical accuracy and a narrow band strategy for computational efficiency.

Journal ArticleDOI
TL;DR: This work invalidates a conjecture which was essential in the only known convergence theorem of a discrete curvature estimator, and shows bounds on both the number and the sizes of maximal Digital Straight Segments on Convex Digital Polygons.
Abstract: Discrete geometric estimators approach geometric quantities on digitized shapes without any knowledge of the continuous shape. A classical yet difficult problem is to show that an estimator asymptotically converges toward the true geometric quantity as the resolution increases. For estimators of local geometric quantities based on Digital Straight Segment (DSS) recognition this problem is closely linked to the asymptotic growth of maximal DSS, for which we show bounds on both their number and their sizes on Convex Digital Polygons. These results not only give better insights about digitized curves but indicate that curvature estimators based on local DSS recognition are not likely to converge. We indeed invalidate a conjecture which was essential in the only known convergence theorem of a discrete curvature estimator. The proof involves results from arithmetic properties of digital lines, digital convexity, combinatorics and continued fractions.

Journal ArticleDOI
TL;DR: This paper proposes a novel variational method for color image segmentation using a modified geodesic active contour method that can detect objects of interest (OOI) without detecting unwanted objects.
Abstract: In this paper, we propose a novel variational method for color image segmentation using a modified geodesic active contour method. Our goal is to detect Object(s) of Interest (OOI) in a given color image, regardless of other objects. The main novelty of our method is that we modify the stopping function in the functional of the usual geodesic active contour method so that the new stopping function is coupled with a discrimination function of the OOI. By minimizing the functional, the OOI is segmented. First, we study the pixel properties of the OOI through sample pixels visually chosen from the OOI. From these sample pixels, by principal component analysis and interval estimation, a discrimination function of whether a pixel is in the OOI is obtained probabilistically. Then we propose the energy functional for the segmentation of the OOI with the new stopping function. Unlike usual stopping functions defined by the image gradient, our improved stopping function depends not only on the image gradient but also on the discrimination function derived from the color information of the OOI. As a result, unlike usual active contour methods, which detect all objects in the image, our modified active contour method can detect the OOI without unwanted objects. Experiments are conducted on both synthetic and natural images. The results show that our algorithm is very effective at detecting the OOI even when the background is complicated.
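For orientation, the usual gradient-based stopping function of geodesic active contours has the form below; the paper replaces it by a version coupled with a color-based discrimination function of the OOI, whose exact expression is not reproduced here.

    g(x) = \frac{1}{1 + |\nabla (G_\sigma * I)(x)|^{2}}

Here G_σ * I is the image smoothed by a Gaussian of width σ; g is small near strong edges, which is what slows and stops the evolving contour there, and the coupling with the discrimination function makes the contour stop only at edges of the object of interest.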

Journal ArticleDOI
TL;DR: This is the first multivariate statistical analysis of the human brain in AD that uses all the features simultaneously rather than segmented versions of the images, giving multivariate results that are plausible and easy for clinicians to interpret.
Abstract: Multivariate statistical discrimination methods are suitable not only for classification but also for characterization of differences between a reference group of patterns and the population under investigation. In recent years, statistical methods have been proposed to classify and analyse morphological and anatomical structures of medical images. Most of these techniques work in high-dimensional spaces of particular features such as shapes or statistical parametric maps and have overcome the difficulty of dealing with the inherent high dimensionality of medical images by analysing segmented structures individually or performing hypothesis tests on each feature separately. In this paper, we present a general multivariate linear framework that addresses the small sample size problem in medical images. The goal is to identify and analyse the most discriminating hyper-plane separating two populations using all the intensity features simultaneously rather than segmented versions of the data separately or feature-by-feature. To demonstrate the performance of the multivariate linear framework we report experimental results on an artificially generated data set and on real medical data composed of magnetic resonance images (MRI) of subjects suffering from Alzheimer's disease (AD) compared to an elderly healthy control group. To our knowledge this is the first multivariate statistical analysis of the human brain in AD that uses all the features (texture + shape) simultaneously rather than segmented versions of the images. The conceptual and mathematical simplicity of the approach involves the same operations irrespective of the complexity of the experiment or nature of the spatially normalized data, giving multivariate results that are plausible and easy for clinicians to interpret.
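A hedged sketch of a generic whole-image discriminant pipeline of this flavor is shown below (NumPy assumed): PCA first removes the small-sample-size singularity, then a two-class Fisher direction is computed in the reduced space and mapped back to voxel space. This is a common construction, not the paper's exact framework.

    import numpy as np

    def pca_fisher_direction(X, y, n_components=None):
        """Most discriminating direction for whole-image data.

        X : (n_samples, n_voxels) matrix of spatially normalized intensities.
        y : binary labels (0 = controls, 1 = patients).
        """
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA basis via SVD
        k = n_components or (len(X) - 2)                    # keep scatter invertible
        P = Vt[:k]                                          # (k, n_voxels)
        Z = Xc @ P.T                                        # reduced-space coordinates
        m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
        Sw = np.cov(Z[y == 0], rowvar=False) + np.cov(Z[y == 1], rowvar=False)
        Sw += 1e-6 * np.eye(k)                              # small ridge for stability
        w = np.linalg.solve(Sw, m1 - m0)                    # Fisher direction (reduced space)
        return P.T @ w                                      # back to voxel space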

Journal ArticleDOI
TL;DR: Computer simulation results indicate that the proposed method accurately measures the blocking artifacts without using the original image, and can be easily implemented in both pixel and DCT domains.
Abstract: The objective measurement of blocking artifacts plays an important role in the design, optimization, and assessment of image and video compression. In this paper, we propose a novel measurement algorithm for blocking artifacts. Computer simulation results indicate that the proposed method accurately measures the blocking artifacts without using the original image. Moreover, the proposed algorithm can be easily implemented in both pixel and DCT domains.

Journal ArticleDOI
TL;DR: Severe problems with the standard measures used for benchmarking automatic localisation of correspondences for the construction of Statistical Shape Models from examples are analysed both theoretically and experimentally, on both natural and synthetic datasets.
Abstract: Automatic localisation of correspondences for the construction of Statistical Shape Models from examples has been the focus of intense research during the last decade. Several algorithms are available and benchmarking is needed to rank the different algorithms. Prior work has argued that the quality of the models produced by the algorithms can be evaluated by measuring compactness, generality and specificity. In this paper severe problems with these standard measures are analysed both theoretically and experimentally, on both natural and synthetic datasets. We also propose that a Ground Truth Correspondence Measure (GCM) be used for benchmarking, and in this paper benchmarking is performed on several state-of-the-art algorithms using seven real and one synthetic dataset.

Journal ArticleDOI
TL;DR: This special issue containing revised and extended versions of the best computer vision and image processing papers presented at the XIX Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI 2006), held on October 8–11, 2006, in Manaus, AM, Brazil is introduced.
Abstract: I am pleased to introduce this special issue containing revised and extended versions of the best computer vision and image processing papers presented at the XIX Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI 2006), held on October 8–11, 2006, in Manaus, AM, Brazil. SIBGRAPI has been the most important Brazilian meeting in the areas of computer graphics, image processing and computer vision since 1988, when its first edition took place. It has been held annually, with the sponsorship of the Brazilian Computer Society. In spite of its name, SIBGRAPI has grown into a truly international conference, especially since 1997, when IEEE CS Press started publishing its proceedings. Evidence of that is the caliber of the invited speakers of its 2006 edition: Prof. Anil Jain, from Michigan State University, Prof. François Sillion, from INRIA Rhône-Alpes, Prof. Steve Seitz, from the University of Washington, and Prof. Steve Cunningham, from California State University, Stanislaus. Further evidence of the international character of SIBGRAPI 2006 is the diversity of its Program Committee, composed of 104 members: 45 from Latin America, 39 from USA/Canada, 15 from Europe and 5 from Asia/Australia. These facts led to a pool of 127 full-paper submissions from 12 different countries. All submissions were subject to a rigorous evaluation process, which included at least three double-blind reviews per paper, a rebuttal from the authors and a final on-line discussion among the reviewers, in which several conflicting cases were settled. Based on the results

Journal ArticleDOI
TL;DR: It is proved mathematically that these algorithms converge rapidly, provided the noise is small, and in just 1-2 iterations they achieve maximum possible statistical accuracy.
Abstract: We investigate several numerical schemes for estimating parameters in computer vision problems: HEIV, FNS, renormalization method, and others. We prove mathematically that these algorithms converge rapidly, provided the noise is small. In fact, in just 1-2 iterations they achieve maximum possible statistical accuracy. Our results are supported by a numerical experiment. We also discuss the performance of these algorithms when the noise increases and/or outliers are present.

Journal ArticleDOI
Erik Melin
TL;DR: It is demonstrated that the graph of a Khalimsky-continuous mapping X→ℤ is a surface, which separates X×ℤ, and it is shown that the adjacency boundary of a connected subset, U, of the Khalimsky plane is connected precisely when the complement of U is connected.
Abstract: Let X be a smallest-neighborhood space, sometimes called an Alexandrov space. We demonstrate that the graph of a Khalimsky-continuous mapping X→ℤ is a surface, which separates X×ℤ. We study infima and suprema of families of such continuous mappings, a study that naturally leads to the introduction of an extended Khalimsky line. Moreover, we show that the adjacency boundary of a connected subset, U, of the Khalimsky plane is connected precisely when the complement of U is connected.