
Showing papers in "Journal of Mathematical Imaging and Vision in 2008"


Journal ArticleDOI
TL;DR: This paper introduces a novel framework for image compression that makes use of the interpolation qualities of edge-enhancing diffusion, and shows that this anisotropic diffusion equation with a diffusion tensor outperforms many other PDEs when sparse scattered data must be interpolated.
Abstract: Compression is an important field of digital image processing where well-engineered methods with high performance exist. Partial differential equations (PDEs), however, have not been explored much in this context so far. In our paper we introduce a novel framework for image compression that makes use of the interpolation qualities of edge-enhancing diffusion. Although this anisotropic diffusion equation with a diffusion tensor was originally proposed for image denoising, we show that it outperforms many other PDEs when sparse scattered data must be interpolated. To exploit this property for image compression, we consider an adaptive triangulation method for removing less significant pixels from the image. The remaining points serve as scattered interpolation data for the diffusion process. They can be coded in a compact way that reflects the B-tree structure of the triangulation. We supplement the coding step with a number of amendments such as error threshold adaptation, diffusion-based point selection, and specific quantisation strategies. Our experiments illustrate the usefulness of each of these modifications. They demonstrate that for high compression rates, our PDE-based approach not only gives far better results than the widely-used JPEG standard, but can even come close to the quality of the highly optimised JPEG2000 codec.
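
The interpolation step can be illustrated with a much simplified stand-in: homogeneous (linear) diffusion inpainting of a grey-value image from a sparse set of kept pixels. The sketch below is not the paper's edge-enhancing anisotropic diffusion with a diffusion tensor; it only shows the general mechanism of diffusing known pixel values into the unknown regions while keeping the kept pixels fixed. All names and parameter values are illustrative.

```python
import numpy as np

def diffusion_inpaint(image, mask, n_iter=2000, tau=0.2):
    """Fill in unknown pixels by homogeneous diffusion.

    image : 2D float array with valid values where mask is True
    mask  : boolean array, True = kept (known) pixel
    A simplified stand-in for edge-enhancing diffusion interpolation.
    """
    u = np.where(mask, image, image[mask].mean())  # crude initialisation
    for _ in range(n_iter):
        # 5-point Laplacian; np.roll gives periodic boundaries (fine for a toy)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = u + tau * lap
        u[mask] = image[mask]          # re-impose the scattered data
    return u

# toy usage: keep 5% of the pixels of a synthetic image
rng = np.random.default_rng(0)
img = np.fromfunction(lambda y, x: np.sin(x / 8.0) + np.cos(y / 11.0), (64, 64))
mask = rng.random(img.shape) < 0.05
rec = diffusion_inpaint(img, mask)
print("mean absolute error:", np.abs(rec - img).mean())
```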

159 citations


Journal ArticleDOI
TL;DR: A novel nonlocal prior is proposed such that differences are computed over a broader neighborhood of each pixel, with weights depending on its similarity to the other pixels, so that the connectivity and continuity of the image are exploited.
Abstract: Bayesian approaches, or maximum a posteriori (MAP) methods, are effective in providing solutions to ill-posed problems in image reconstruction. Based on Bayesian theory, prior information of the target image is imposed on image reconstruction to suppress noise. Conventionally, the information in most prior models comes from weighted differences between pixel intensities within a small local neighborhood. In this paper, we propose a novel nonlocal prior such that differences are computed over a broader neighborhood of each pixel, with weights depending on its similarity with respect to the other pixels. In this way, the connectivity and continuity of the image are exploited. A two-step reconstruction algorithm using the nonlocal prior is developed. The proposed nonlocal prior Bayesian reconstruction algorithm has been applied to emission tomographic reconstructions using both computer simulated data and patient SPECT data. Compared to several existing reconstruction methods, our approach shows better performance in both lowering the noise and preserving the edges.
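
A rough sketch of the key ingredient, under my own simplifying assumptions: patch-similarity weights over a broad search window and the resulting quadratic nonlocal prior energy. This is not the authors' full two-step reconstruction algorithm, only an illustration of how a nonlocal prior differs from a small-neighborhood prior; patch size, window size, and the bandwidth h are illustrative.

```python
import numpy as np

def nonlocal_weights(img, i, j, patch=3, search=7, h=0.1):
    """Weights between pixel (i, j) and every pixel in its search window,
    based on similarity of the surrounding patches (nonlocal-means style)."""
    p, s = patch // 2, search // 2
    ref = img[i - p:i + p + 1, j - p:j + p + 1]
    weights = {}
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            if di == 0 and dj == 0:
                continue
            q = img[i + di - p:i + di + p + 1, j + dj - p:j + dj + p + 1]
            d2 = np.mean((ref - q) ** 2)
            weights[(i + di, j + dj)] = np.exp(-d2 / h ** 2)
    return weights

def nonlocal_prior_energy(img, patch=3, search=7, h=0.1):
    """U(x) = sum_i sum_j w_ij (x_i - x_j)^2, a quadratic nonlocal prior;
    a MAP reconstruction would add beta * U(x) to the data log-likelihood."""
    p, s = patch // 2, search // 2
    m = p + s
    energy = 0.0
    for i in range(m, img.shape[0] - m):
        for j in range(m, img.shape[1] - m):
            w = nonlocal_weights(img, i, j, patch, search, h)
            energy += sum(wk * (img[i, j] - img[k]) ** 2 for k, wk in w.items())
    return energy

rng = np.random.default_rng(1)
x = rng.random((32, 32))
print("nonlocal prior energy:", nonlocal_prior_energy(x))
```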

132 citations


Journal ArticleDOI
TL;DR: A comprehensive method for detecting straight line segments in any digital image, accurately controlling both false positive and false negative detections, is proposed, based on the Helmholtz principle.
Abstract: In this paper we propose a comprehensive method for detecting straight line segments in any digital image, accurately controlling both false positive and false negative detections. Based on the Helmholtz principle, the proposed method is parameterless. At the core of the work lies a new way to interpret binary sequences in terms of unions of segments, for which a dynamic programming implementation is given. The proposed algorithm is extensively tested on synthetic and real images and compared with the state of the art.
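
The a-contrario decision behind the Helmholtz principle can be illustrated with a small computation of the number of false alarms (NFA) for one candidate segment: given n points on the segment, k of which have gradient direction aligned with it up to a precision p, the segment is meaningful when the number of tests times the binomial tail falls below a threshold (epsilon = 1 gives a parameterless detector). This is a generic sketch of the principle, not the authors' dynamic-programming parsing of binary sequences into unions of segments; the numbers below are made up.

```python
from math import comb

def binomial_tail(n, k, p):
    """P[B(n, p) >= k]: probability of at least k aligned points by chance."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def nfa(n_tests, n, k, p):
    """Number of false alarms of a candidate segment: the expected number of
    segments at least this 'aligned' in pure noise. NFA <= 1 is the usual
    parameterless detection threshold (epsilon = 1)."""
    return n_tests * binomial_tail(n, k, p)

# example: a 512x512 image tests roughly (512*512)^2 candidate segments;
# the candidate has 80 points, 40 of them aligned at angular precision p = 1/16
n_tests = (512 * 512) ** 2
print("NFA:", nfa(n_tests, n=80, k=40, p=1.0 / 16.0))
print("meaningful:", nfa(n_tests, 80, 40, 1.0 / 16.0) <= 1.0)
```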

101 citations


Journal ArticleDOI
TL;DR: It is proved that in Size Theory the comparison of multidimensional size functions can be reduced to the 1-dimensional case by a suitable change of variables; this leads to the definition of a new distance between multidimensional size functions and to a proof of their stability with respect to that distance.
Abstract: Size Theory has proven to be a useful framework for shape analysis in the context of pattern recognition. Its main tool is a shape descriptor called size function. Size Theory has been mostly developed in the 1-dimensional setting, meaning that shapes are studied with respect to functions, defined on the studied objects, with values in ℝ. The potentialities of the k-dimensional setting, that is using functions with values in ℝ^k, were not explored until now for lack of an efficient computational approach. In this paper we provide the theoretical results leading to a concise and complete shape descriptor also in the multidimensional case. This is possible because we prove that in Size Theory the comparison of multidimensional size functions can be reduced to the 1-dimensional case by a suitable change of variables. Indeed, a foliation in half-planes can be given, such that the restriction of a multidimensional size function to each of these half-planes turns out to be a classical size function in two scalar variables. This leads to the definition of a new distance between multidimensional size functions, and to the proof of their stability with respect to that distance. Experiments are carried out to show the feasibility of the method.
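
The change of variables can be made concrete with a small numeric sketch. Assuming the standard parameterization of the foliation by admissible pairs (l, b), with l a unit vector with positive components and b summing to zero, the k-dimensional measuring function f = (f_1, ..., f_k) restricted to the corresponding half-plane reduces to the scalar function F(x) = max_i (f_i(x) - b_i) / l_i, since F(x) <= u exactly when f_i(x) <= u*l_i + b_i for every i. The code below only verifies this sublevel-set identity on random data; it is my own illustration, not the authors' implementation.

```python
import numpy as np

def reduce_to_1d(f_values, l, b):
    """One-parameter reduction of a k-dimensional measuring function along the
    half-plane indexed by the admissible pair (l, b):
        F(x) = max_i (f_i(x) - b_i) / l_i.
    f_values : (n_points, k) array of measuring-function values."""
    return np.max((f_values - b) / l, axis=1)

rng = np.random.default_rng(0)
k, n = 2, 1000
f = rng.normal(size=(n, k))

l = np.array([0.6, 0.8])            # unit vector with positive components
b = np.array([0.3, -0.3])           # components sum to zero

F = reduce_to_1d(f, l, b)
u = 0.5
# sublevel set of F at u equals the multidimensional sublevel set at u*l + b
lhs = F <= u
rhs = np.all(f <= u * l + b, axis=1)
print("sublevel sets agree:", np.array_equal(lhs, rhs))
```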

98 citations


Journal ArticleDOI
TL;DR: The general theory of generalized Fourier descriptors is developed, with several new results, in particular about their completeness; these results lead to simple formulas for motion invariants of images that are “complete” in a certain sense and are used in the first part of the paper.
Abstract: This paper is about generalized Fourier descriptors, and their application to the research of invariants under group actions. A general methodology is developed, crucially related to Pontryagin's, Tannaka's, Chu's and Tatsuuma's dualities, from abstract harmonic analysis. Application to motion groups provides a general methodology for pattern recognition. This methodology generalizes the classical basic method of Fourier-invariants of contours of objects. In the paper, we use the results of this theory, inside a Support-Vector-Machine context, for 3D object recognition. As usual in practice, we classify 3D objects starting from 2D information. However our method is rather general and could be applied directly to 3D data, in other contexts. Our applications and comparisons with other methods concern human-face recognition, but we also provide tests and comparisons based upon standard databases such as the COIL database. Our methodology looks extremely efficient, and effective computations are rather simple and low cost. The paper is divided into two parts: the first is relative to applications and computations, in an SVM environment. The second part is devoted to the development of the general theory of generalized Fourier-descriptors, with several new results, about their completeness in particular. These results lead to simple formulas for motion-invariants of images, that are "complete" in a certain sense, and that are used in the first part of the paper. The computation of these invariants requires only standard FFT estimations, and one-dimensional integration.
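
As a point of reference for the classical method that the generalized descriptors extend, here is a minimal sketch of Fourier descriptors of a closed 2D contour, made invariant to translation, rotation, starting point, and scale in the usual way (drop the DC term, take magnitudes, normalize by the first harmonic). This is the textbook contour-based construction, not the paper's group-theoretic generalized descriptors.

```python
import numpy as np

def fourier_descriptors(contour_xy, n_keep=16):
    """Classical Fourier descriptors of a closed contour.
    contour_xy : (N, 2) array of boundary points in order."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex boundary representation
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                # translation invariance
    mags = np.abs(coeffs)                          # rotation / start-point invariance
    mags = mags / mags[1]                          # scale invariance
    return mags[1:n_keep + 1]

# toy check: descriptors of an ellipse are unchanged by rotation, scaling, shift
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ellipse = np.stack([2.0 * np.cos(t), 1.0 * np.sin(t)], axis=1)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
transformed = 3.0 * ellipse @ R.T + np.array([5.0, -2.0])
d1 = fourier_descriptors(ellipse)
d2 = fourier_descriptors(transformed)
print("max descriptor difference:", np.max(np.abs(d1 - d2)))
```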

87 citations


Journal ArticleDOI
TL;DR: It is shown that a certain approach to fuzzy mathematical morphology ultimately depends on the choice of a fuzzy inclusion measure and on a notion of duality, which gives rise to a clearly defined scheme for classifying fuzzy mathematical morphologies.
Abstract: Mathematical morphology was originally conceived as a set theoretic approach for the processing of binary images. Extensions of classical binary morphology to gray-scale morphology include approaches based on fuzzy set theory. This paper discusses and compares several well-known and new approaches towards gray-scale and fuzzy mathematical morphology. We show in particular that a certain approach to fuzzy mathematical morphology ultimately depends on the choice of a fuzzy inclusion measure and on a notion of duality. This fact gives rise to a clearly defined scheme for classifying fuzzy mathematical morphologies. The umbra and the level set approach, an extension of the threshold approach to gray-scale mathematical morphology, can also be embedded in this scheme since they can be identified with certain fuzzy approaches.
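
A small sketch of one point in the classification scheme: fuzzy erosion and dilation built from the Łukasiewicz implication and its adjoint t-norm, where the erosion at x is the degree to which the structuring element, translated to x, is included in the image. The specific connectives are my choice for illustration; the paper's point is precisely that different inclusion measures and duality notions yield different members of this family.

```python
import numpy as np

# Lukasiewicz conjunction (t-norm) and its residual implication
def t_norm(a, b):
    return np.maximum(0.0, a + b - 1.0)

def implication(a, b):
    return np.minimum(1.0, 1.0 - a + b)

def fuzzy_erosion(f, s):
    """(f erode s)(x) = inf_y I(s(y - x), f(y)): degree of inclusion of the
    structuring element, translated to x, in the image f. f, s take values in [0, 1]."""
    k = s.shape[0] // 2
    g = np.pad(f, k, mode='edge')
    out = np.ones_like(f)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            shifted = g[k + dy: k + dy + f.shape[0], k + dx: k + dx + f.shape[1]]
            out = np.minimum(out, implication(s[k + dy, k + dx], shifted))
    return out

def fuzzy_dilation(f, s):
    """(f dilate s)(x) = sup_y C(s(x - y), f(y)) with the conjunction C."""
    k = s.shape[0] // 2
    g = np.pad(f, k, mode='edge')
    out = np.zeros_like(f)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            shifted = g[k + dy: k + dy + f.shape[0], k + dx: k + dx + f.shape[1]]
            out = np.maximum(out, t_norm(s[k - dy, k - dx], shifted))
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))      # gray-scale image scaled to [0, 1]
se = np.ones((3, 3))            # flat fuzzy structuring element: reduces to min/max filters
print(fuzzy_erosion(img, se).mean(), fuzzy_dilation(img, se).mean())
```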

80 citations


Journal ArticleDOI
TL;DR: An improved hybrid method for removing noise from low-SNR molecular images is introduced; experiments show that the proposed model performs better even at higher levels of noise.
Abstract: In this paper an improved hybrid method for removing noise from low SNR molecular images is introduced. The method provides an improvement over the one suggested by Jian Ling and Alan C. Bovik (IEEE Trans. Med. Imaging, 21(4), [2002]). The proposed model consists of two stages. The first stage consists of a fourth order PDE and the second stage is a relaxed median filter, which processes the output of the fourth order PDE. The model enjoys the benefits of both the nonlinear fourth order PDE and the relaxed median filter. Unlike the method suggested by Ling and Bovik, the proposed method does not introduce any staircase effect and preserves fine details, sharp corners, curved structures and thin lines. Experiments were done on molecular images (fluorescence microscopic images) and standard test images, and the results show that the proposed model performs better even at higher levels of noise.
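
A compact sketch of the two-stage idea, under my own choices: a You–Kaveh-style fourth-order PDE stage, u_t = -Δ(c(|Δu|)Δu), followed by a relaxed median filter that replaces a pixel only when it falls outside an order-statistics interval of its 3x3 window. The diffusivity, step sizes, and filter bounds are illustrative, not the settings used in the paper.

```python
import numpy as np

def laplacian(u):
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def fourth_order_pde(u, n_iter=200, dt=0.01, k=0.1):
    """You-Kaveh style fourth-order diffusion: u_t = -Laplace(c(|Lu|) * Lu)."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        lu = laplacian(u)
        c = 1.0 / (1.0 + (np.abs(lu) / k) ** 2)   # diffusivity
        u -= dt * laplacian(c * lu)
    return u

def relaxed_median(u, lower=3, upper=5):
    """3x3 relaxed median: keep the centre pixel if it lies between the
    'lower'-th and 'upper'-th order statistics of its window (indices 0..8),
    otherwise replace it with the window median (index 4)."""
    out = u.copy()
    H, W = u.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            window = np.sort(u[i - 1:i + 2, j - 1:j + 2].ravel())
            if not (window[lower] <= u[i, j] <= window[upper]):
                out[i, j] = window[4]
    return out

rng = np.random.default_rng(0)
clean = np.fromfunction(lambda y, x: np.sin(x / 6.0) * np.cos(y / 9.0), (64, 64))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = relaxed_median(fourth_order_pde(noisy))
print("residual std before/after:", np.std(noisy - clean), np.std(denoised - clean))
```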

73 citations


Journal ArticleDOI
TL;DR: Partial partitions and partial connections (where connected components of a set are mutually disjoint but do not necessarily cover the set) are studied and some methods for generating partial connections are described.
Abstract: In connective segmentation (Serra in J. Math. Imaging Vis. 24(1):83-130, [2006]), each image determines subsets of the space on which it is "homogeneous", in such a way that this family of subsets always constitutes a connection (connectivity class); then the segmentation of the image is the partition of space into its connected components according to that connection. Several concrete examples of connective segmentations or of connections on sets indicate that the space covering requirement of the partition should be relaxed. Furthermore, morphological operations on partitions require the consideration of a wider framework. We study thus partial partitions (families of mutually disjoint non-void subsets of the space) and partial connections (where connected components of a set are mutually disjoint but do not necessarily cover the set). We describe some methods for generating partial connections. We investigate the links between the two lattices of partial connections and of partial partitions. We generalize Serra's characterization of connective segmentation and discuss its relevance. Finally we give some ideas on how the theory of partial connections could lead to improved segmentation algorithms.

71 citations


Journal ArticleDOI
Sang-Eon Han1
TL;DR: The pseudo-multiplicative property (contrary to the multiplicative property) of the digital fundamental group can be used in classifying digital images from the viewpoints of both digital k-homotopy theory and mathematical morphology.
Abstract: In order to discuss digital topological properties of a digital image (X,k), many recent papers have used the digital fundamental group and several digital topological invariants such as the k-linking number, the k-topological number, and so forth. Owing to some difficulties of an establishment of the multiplicative property of the digital fundamental group, a k-homotopic thinning method can be essentially used in calculating the digital fundamental group of a digital product with k-adjacency. More precisely, let $\mathit{SC}_{k_{i}}^{n_{i},l_{i}}$ be a simple closed k i -curve with l i elements in $\mathbf{Z}^{n_{i}},i\in\{1,2\}$ . For some k-adjacency of the digital product $\mathit{SC}_{k_{1}}^{n_{1},l_{1}}\times\mathit{SC}_{k_{2}}^{n_{2},l_{2}}\subset\mathbf{Z}^{n_{1}+n_{2}}$ which is a torus-like set, proceeding with the k-homotopic thinning of $\mathit{SC}_{k_{1}}^{n_{1},l_{1}}\times\mathit{SC}_{k_{2}}^{n_{2},l_{2}}$ , we obtain its k-homotopic thinning set denoted by DT k . Writing an algorithm for calculating the digital fundamental group of $\mathit{SC}_{k_{1}}^{n_{1},l_{1}}\times\mathit {SC}_{k_{2}}^{n_{2},l_{2}}$ , we investigate the k-fundamental group of $(\mathit{SC}_{k_{1}}^{n_{1},l_{1}}\times\mathit{SC}_{k_{2}}^{n_{2},l_{2}},k)$ by the use of various properties of a digital covering (Z×Z,p 1×p 2,DT k ), a strong k-deformation retract, and algebraic topological tools. Finally, we find the pseudo-multiplicative property (contrary to the multiplicative property) of the digital fundamental group. This property can be used in classifying digital images from the view points of both digital k-homotopy theory and mathematical morphology.

70 citations


Journal ArticleDOI
TL;DR: An algorithm for achieving a Maximum A Posteriori (MAP) solution is proposed and it is shown experimentally that the MAP solution generalizes far better than the prior-free Maximum Likelihood (ML) solution.
Abstract: This paper describes an approach to implicit Non-Rigid Structure-from-Motion based on the low-rank shape model. The main contributions are the use of an implicit model, of matching tensors, a rank estimation procedure, and the theory and implementation of two smoothness priors. Contrary to most previous methods, the proposed method is fully automatic: it handles a substantial amount of missing data as well as outlier contaminated data, and it automatically estimates the degree of deformation. A major problem in many previous methods is that they generalize badly. Although the estimated model fits the visible training data well, it often predicts the missing data badly. To improve generalization a temporal smoothness prior and a surface shape prior are developed. The temporal smoothness prior constrains the camera trajectory and the configuration weights to behave smoothly. The surface shape prior constrains consistently close image point tracks to have similar implicit structure. We propose an algorithm for achieving a Maximum A Posteriori (MAP) solution and show experimentally that the MAP solution generalizes far better than the prior-free Maximum Likelihood (ML) solution.
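
One ingredient that is easy to show in isolation is rank estimation for the low-rank shape model: stack the 2F x P measurement matrix and inspect its singular-value spectrum. The sketch below assumes complete, outlier-free tracks (the paper's point is precisely to handle missing and contaminated data via matching tensors) and uses a simple spectral-energy threshold that merely stands in for the authors' rank estimation procedure.

```python
import numpy as np

def estimate_rank(W, energy=0.99):
    """Smallest rank capturing the requested fraction of the spectral energy
    of the centred measurement matrix W (2F x P: F frames, P points)."""
    Wc = W - W.mean(axis=1, keepdims=True)          # remove per-frame translation
    s = np.linalg.svd(Wc, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)

# synthetic low-rank non-rigid data: K = 2 basis shapes -> rank about 3K = 6
rng = np.random.default_rng(0)
F, P, K = 40, 100, 2
basis = rng.standard_normal((K, 3, P))                   # K basis shapes
coeffs = rng.standard_normal((F, K))                     # per-frame configuration weights
W = np.zeros((2 * F, P))
for f in range(F):
    shape3d = np.tensordot(coeffs[f], basis, axes=1)     # 3 x P deforming shape
    R = np.linalg.qr(rng.standard_normal((3, 3)))[0][:2] # 2 x 3 orthographic camera
    W[2 * f:2 * f + 2] = R @ shape3d
W += 0.001 * rng.standard_normal(W.shape)                # a little noise
print("estimated rank:", estimate_rank(W))               # expect about 3K = 6
```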

61 citations


Journal ArticleDOI
TL;DR: A new boundary-based method for computing shape elongation that uses all the boundary points is defined; it finds the elongation of shapes whose boundary is not extracted completely, which is impossible to achieve with area-based measures.
Abstract: Shape elongation is one of the basic shape descriptors that has a very clear intuitive meaning. That is the reason for its applicability in many shape classification tasks. In this paper we define a new method for computing shape elongation. The new measure is boundary based and uses all the boundary points. We start with shapes having polygonal boundaries. After that we extend the method to shapes with arbitrary boundaries. The new elongation measure converges when the assigned polygonal approximation converges toward a shape. We express the measure with closed formulas in both cases: for polygonal shapes and for arbitrary shapes. The new measure finds the elongation for shapes whose boundary is not extracted completely, which is impossible to achieve with area based measures.
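
For contrast with the new measure, the sketch below computes the widely used moment-based elongation (the ratio of the principal axes of the second-order central moments), evaluated directly on boundary samples rather than on the filled region. This is not the formula introduced in the paper; it merely illustrates the kind of boundary-based computation involved and why such a measure can still be evaluated on an incompletely extracted boundary.

```python
import numpy as np

def boundary_elongation(points):
    """Moment-based elongation of a set of boundary samples (N x 2).
    Returns a value >= 1; 1 means perfectly 'round' in the moment sense."""
    c = points - points.mean(axis=0)
    mu20 = np.mean(c[:, 0] ** 2)
    mu02 = np.mean(c[:, 1] ** 2)
    mu11 = np.mean(c[:, 0] * c[:, 1])
    d = np.sqrt((mu20 - mu02) ** 2 + 4.0 * mu11 ** 2)
    return (mu20 + mu02 + d) / (mu20 + mu02 - d)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
ellipse = np.stack([3.0 * np.cos(t), np.sin(t)], axis=1)
partial = ellipse[:300]                      # boundary only partly extracted
print(boundary_elongation(circle))           # ~1
print(boundary_elongation(ellipse))          # > 1
print(boundary_elongation(partial))          # still computable
```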

Journal ArticleDOI
TL;DR: This paper discusses two main options for translating the relative variation of one shape with respect to another into a template-centered representation: parallel translation, based on the Riemannian metric, and the coadjoint transport, based on the group action.
Abstract: This paper focuses on the issue of translating the relative variation of one shape with respect to another in a template centered representation. The context is the theory of Diffeomorphic Pattern Matching which provides a representation of the space of shapes of objects, including images and point sets, as an infinite dimensional Riemannian manifold which is acted upon by groups of diffeomorphisms. We discuss two main options for achieving our goal; the first one is the parallel translation, based on the Riemannian metric; the second one, based on the group action, is the coadjoint transport. These methods are illustrated with 3D experiments.

Journal ArticleDOI
TL;DR: The notion of crucial pixel is introduced, which makes it possible to link this work with the framework of digital topology, and simple local characterizations are proved, which allow thinning algorithms to be expressed by way of sets of masks.
Abstract: Critical kernels constitute a general framework in the category of abstract complexes for the study of parallel thinning in any dimension. The most fundamental result in this framework is that, if a subset Y of X contains the critical kernel of X, then Y is guaranteed to have "the same topology as X". Here, we focus on 2D structures in spaces of two and three dimensions. We introduce the notion of crucial pixel, which makes it possible to link this work with the framework of digital topology. We prove simple local characterizations, which allow us to express thinning algorithms by way of sets of masks. We propose several new parallel algorithms, which are both fast and simple to implement, that yield symmetrical or non-symmetrical skeletons of 2D objects in 2D or 3D grids. We prove some properties of these skeletons, related to topology preservation, to minimality, and to the inclusion of the topological axis. The latter may be seen as a generalization of the medial axis. We also show how to use critical kernels in order to provide simple proofs of the topological soundness of existing thinning schemes. Finally, we clarify the link between critical kernels, minimal non-simple sets, and P-simple points.

Journal ArticleDOI
TL;DR: The general structure of visual spaces for different visual fields is explored; the group of ambiguities left open by the depth cues may be interpreted as the group of congruences (proper motions) of the space, with applications to improved viewing systems for optical man-machine interfaces.
Abstract: The "visual space" of an optical observer situated at a single, fixed viewpoint is necessarily very ambiguous. Although the structure of the "visual field" (the lateral dimensions, i.e., the "image") is well defined, the "depth" dimension has to be inferred from the image on the basis of "monocular depth cues" such as occlusion, shading, etc. Such cues are in no way "given", but are guesses on the basis of prior knowledge about the generic structure of the world and the laws of optics. Thus such a guess is like a hallucination that is used to tentatively interpret image structures as depth cues. The guesses are successful if they lead to a coherent interpretation. Such "controlled hallucination" (in psychological terminology) is similar to the "analysis by synthesis" of computer vision. Although highly ambiguous, visual spaces do have geometrical structure. The group of ambiguities left open by the cues (e.g., the well known bas-relief ambiguity in the case of shape from shading) may be interpreted as the group of congruences (proper motions) of the space. The general structure of visual spaces for different visual fields is explored in the paper. Applications include improved viewing systems for optical man-machine interfaces.

Journal ArticleDOI
TL;DR: This work presents new sampling theorems for surfaces and higher dimensional manifolds and shows how to apply the main results to obtain a new, geometric proof of the classical Shannon sampling theorem, and also to image analysis.
Abstract: We present new sampling theorems for surfaces and higher dimensional manifolds. The core of the proofs resides in triangulation results for manifolds with boundary, not necessarily bounded. The method is based upon geometric considerations that are further augmented for 2-dimensional manifolds (i.e., surfaces). In addition, we show how to apply the main results to obtain a new, geometric proof of the classical Shannon sampling theorem, and also to image analysis.
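
For reference, the classical Shannon statement that the paper re-proves geometrically is easy to demonstrate numerically: a signal band-limited to 1/(2T) is recovered from uniform samples at spacing T by sinc interpolation. The sketch below is only the classical 1D result, not the manifold sampling theorems; the finite sum is an approximation, so a small truncation error remains away from the window ends.

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Classical Shannon reconstruction of a signal band-limited to 1/(2T)
    from uniform samples at spacing T:  f(t) = sum_n f(nT) * sinc((t - nT)/T).
    The sum is truncated to the available samples."""
    n = np.arange(len(samples))
    return np.sum(samples[None, :] * np.sinc((t[:, None] - n * T) / T), axis=1)

# band-limited test signal (highest frequency 3 Hz), sampled at 8 Hz > 2 * 3 Hz
f = lambda t: np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)
T = 1.0 / 8.0
t_samples = np.arange(0.0, 32.0, T)
t_dense = np.linspace(14.0, 18.0, 400)      # stay away from the window ends
rec = sinc_reconstruct(f(t_samples), T, t_dense)
print("max reconstruction error:", np.max(np.abs(rec - f(t_dense))))
```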

Journal ArticleDOI
TL;DR: It is pointed out that the well-known non-iterative closed-form for the leave-one-out cross-validation score is actually a good approximation to the true score and shown that it extends to the warp estimation problem by replacing the usual vector two-norm by the matrix Frobenius norm.
Abstract: Estimating smooth image warps from landmarks is an important problem in computer vision and medical image analysis. The standard paradigm is to find the model parameters by minimizing a compound energy including a data term and a smoother, balanced by a "smoothing parameter" that is usually fixed by trial and error. We point out that warp estimation is an instance of the general supervised machine learning problem of fitting a flexible model to data, and propose to learn the smoothing parameter while estimating the warp. The leading idea is to depart from the usual paradigm of minimizing the energy to the one of maximizing the predictivity of the warp, i.e. its ability to do well on the entire image, rather than only on the given landmarks. We use cross-validation to measure predictivity, and propose a complete framework to solve for the desired warp. We point out that the well-known non-iterative closed-form for the leave-one-out cross-validation score is actually a good approximation to the true score and show that it extends to the warp estimation problem by replacing the usual vector two-norm by the matrix Frobenius norm. Experimental results on real data show that the procedure selects sensible smoothing parameters, very close to user selected ones.
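
The closed-form leave-one-out score is easy to state for any linear smoother. For a kernel/ridge-type model with hat matrix H = K (K + lambda I)^(-1), the i-th leave-one-out residual is (y_i - yhat_i) / (1 - H_ii), so sweeping the smoothing parameter costs one factorization per value instead of n refits. The sketch below is a scalar-output version with a Gaussian kernel of my choosing; the paper's contribution is the extension of this approximation to warps, with the vector two-norm replaced by the matrix Frobenius norm.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def loo_score(X, y, lam, sigma=1.0):
    """Closed-form leave-one-out cross-validation score of the linear smoother
    yhat = H y with H = K (K + lam I)^(-1): mean of ((y - yhat) / (1 - diag(H)))^2."""
    K = gaussian_kernel(X, X, sigma)
    H = K @ np.linalg.inv(K + lam * np.eye(len(X)))
    r = y - H @ y
    return np.mean((r / (1.0 - np.diag(H))) ** 2)

# pick the smoothing parameter by minimizing the LOO score
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
lambdas = 10.0 ** np.arange(-6, 2)
scores = [loo_score(X, y, lam) for lam in lambdas]
print("selected lambda:", lambdas[int(np.argmin(scores))])
```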

Journal ArticleDOI
TL;DR: The proposed method enables a continuous tracking along an image sequence of both a deformable curve and its velocity field, and is demonstrated on two types of image sequences showing deformable objects and fluid motions.
Abstract: In this paper, a new framework for the tracking of closed curves and their associated motion fields is described. The proposed method enables a continuous tracking along an image sequence of both a deformable curve and its velocity field. Such an approach is formalized through the minimization of a global spatio-temporal continuous cost functional, w.r.t. a set of variables representing the curve and its related motion field. The resulting minimization process relies on an optimal control approach and consists of a forward integration of an evolution law followed by a backward integration of an adjoint evolution model. This latter PDE includes a term related to the discrepancy between the current estimation of the state variable and discrete noisy measurements of the system. The closed curves are represented through implicit surface modeling, whereas the motion is described either by a vector field or through vorticity and divergence maps depending on the kind of targeted applications. The efficiency of the approach is demonstrated on two types of image sequences showing deformable objects and fluid motions.

Journal ArticleDOI
TL;DR: A new randomized algorithm is presented for repairing the topology of objects represented by 3D binary digital images; an extension to multivalued images finds application in repairing segmented images resulting from multi-object segmentations of other 3D digital multivalued images.
Abstract: We present here a new randomized algorithm for repairing the topology of objects represented by 3D binary digital images. By "repairing the topology", we mean a systematic way of modifying a given binary image in order to produce a similar binary image which is guaranteed to be well-composed. A 3D binary digital image is said to be well-composed if, and only if, the square faces shared by background and foreground voxels form a 2D manifold. Well-composed images enjoy some special properties which can make such images very desirable in practical applications. For instance, well-known algorithms for extracting surfaces from and thinning binary images can be simplified and optimized for speed if the input image is assumed to be well-composed. Furthermore, some algorithms for computing surface curvature and extracting adaptive triangulated surfaces, directly from the binary data, can only be applied to well-composed images. Finally, we introduce an extension of the aforementioned algorithm to repairing 3D digital multivalued images. Such an algorithm finds application in repairing segmented images resulting from multi-object segmentations of other 3D digital multivalued images.
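
The defining condition is easiest to see in 2D, where a binary image is well-composed exactly when no 2x2 block contains only the two diagonally opposite foreground pixels (the critical configurations). The check below is this 2D version, given only as an illustration of the property the repair algorithm enforces; the paper works with the 3D condition on the square faces shared by foreground and background voxels.

```python
import numpy as np

def is_well_composed_2d(img):
    """True if no 2x2 block of the binary image contains exactly the two
    diagonally opposite foreground pixels (the 2D critical configurations)."""
    a = img[:-1, :-1].astype(bool)
    b = img[:-1, 1:].astype(bool)
    c = img[1:, :-1].astype(bool)
    d = img[1:, 1:].astype(bool)
    diag1 = a & d & ~b & ~c          # foreground on one diagonal only
    diag2 = b & c & ~a & ~d          # foreground on the other diagonal only
    return not np.any(diag1 | diag2)

good = np.array([[0, 1, 1],
                 [0, 1, 1],
                 [0, 0, 0]])
bad = np.array([[1, 0, 0],
                [0, 1, 0],           # two pixels touching only at a corner
                [0, 0, 0]])
print(is_well_composed_2d(good), is_well_composed_2d(bad))   # True False
```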

Journal ArticleDOI
TL;DR: A novel theoretical framework defined in the frequency domain is proposed for approaching the multidimensional image registration problem, providing more efficient implementations of the most common registration methods than already existing approaches and providing an interesting framework to design tailor-made regularization models apart from the classical, spatial domain based schemes.
Abstract: Image registration is a widely used task in image analysis, having applications in various fields. Its classical formulation is usually given in the spatial domain. In this paper, a novel theoretical framework defined in the frequency domain is proposed for approaching the multidimensional image registration problem. The variational minimization of the joint energy functional is performed entirely in the frequency domain, leading to a simple formulation and design, and offering important computational savings if the multidimensional FFT algorithm is used. Therefore the proposed framework provides more efficient implementations of the most common registration methods than already existing approaches, adding simplicity to the variational image registration formulation and allowing for an easy extension to higher dimensions by using the multidimensional Fourier transform of discrete multidimensional signals. The new formulation also provides an interesting framework to design tailor-made regularization models apart from the classical, spatial domain based schemes. Simulation examples validate the theoretical results.
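
A minimal concrete instance of registration carried out entirely in the frequency domain is phase correlation for pure translations: the normalized cross-power spectrum of the two images has an inverse FFT that peaks at the displacement. This sketch is only meant to make the "work in the Fourier domain, save computation with the FFT" idea tangible; the paper's framework handles general variational registration energies, not just translations.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation t such that a(x) = b(x - t),
    entirely in the frequency domain (cyclic boundaries assumed)."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of the spectrum back to negative shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, shift=(7, -12), axis=(0, 1))
print(phase_correlation(shifted, img))      # expect (7, -12)
```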

Journal ArticleDOI
TL;DR: This work provides a completely new rigorous matrix formulation of the absolute quadratic complex (AQC), given by the set of lines intersecting the absolute conic, and completely characterize the 6×6 matrices acting on lines which are induced by a spatial homography.
Abstract: We provide a completely new rigorous matrix formulation of the absolute quadratic complex (AQC), given by the set of lines intersecting the absolute conic. The new results include closed-form expressions for the camera intrinsic parameters in terms of the AQC, an algorithm to obtain the dual absolute quadric from the AQC using straightforward matrix operations, and an equally direct computation of a Euclidean-upgrading homography from the AQC. We also completely characterize the 6×6 matrices acting on lines which are induced by a spatial homography. Several algorithmic possibilities arising from the AQC are systematically explored and analyzed in terms of efficiency and computational cost. Experiments include 3D reconstruction from real images.

Journal ArticleDOI
TL;DR: A new algorithm is proposed for the reconstruction of binary images that do not have an intrinsic lattice structure and are defined on a continuous domain, from a small number of their projections, based on the fact that the problem of reconstructing an image from only two projections can be formulated as a network flow problem in a graph.
Abstract: Tomography is a powerful technique to obtain accurate images of the interior of an object in a nondestructive way. Conventional reconstruction algorithms, such as filtered backprojection, require many projections to obtain high quality reconstructions. If the object of interest is known in advance to consist of only a few different materials, corresponding to known image intensities, the use of this prior knowledge in the reconstruction procedure can dramatically reduce the number of required projections. In previous work we proposed a network flow algorithm for reconstructing a binary image defined on a lattice from its projections. In this paper we propose a new algorithm for the reconstruction of binary images that do not have an intrinsic lattice structure and are defined on a continuous domain, from a small number of their projections. Our algorithm relies on the fact that the problem of reconstructing an image from only two projections can be formulated as a network flow problem in a graph. We derive this formulation for parallel beam and fan beam tomography and show how it can be used to compute binary reconstructions from more than two projections. Our algorithm is capable of computing high quality reconstructions from very few projections. We evaluate its performance on both simulated and real experimental projection data and compare it to other reconstruction algorithms.
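
The core reduction mentioned in the abstract, reconstruction from two projections as a flow problem, can be sketched on the lattice case in a few lines: one node per row and per column, source-to-row capacities equal to the row sums, column-to-sink capacities equal to the column sums, and unit capacities on row-column edges; a maximum flow saturating the source yields a binary image with the prescribed projections. The sketch uses SciPy's max-flow routine and only covers this lattice, two-projection building block, not the paper's continuous-domain algorithm or its extension to more projections.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

def reconstruct_from_two_projections(row_sums, col_sums):
    """Binary matrix with the given row and column sums (if one exists),
    obtained as a maximum flow: source -> rows -> columns -> sink."""
    m, n = len(row_sums), len(col_sums)
    src, snk = 0, 1 + m + n
    cap = np.zeros((2 + m + n, 2 + m + n), dtype=np.int32)
    cap[src, 1:1 + m] = row_sums                       # source -> row i
    cap[1:1 + m, 1 + m:1 + m + n] = 1                  # row i -> column j (one pixel)
    cap[1 + m:1 + m + n, snk] = col_sums               # column j -> sink
    res = maximum_flow(csr_matrix(cap), src, snk)
    if res.flow_value != sum(row_sums):
        raise ValueError("projections are inconsistent")
    flow = res.flow.toarray()
    return (flow[1:1 + m, 1 + m:1 + m + n] > 0).astype(int)

original = np.array([[1, 0, 1, 1],
                     [0, 1, 1, 0],
                     [1, 1, 0, 0]])
rec = reconstruct_from_two_projections(original.sum(axis=1), original.sum(axis=0))
print(rec)
print("row sums match:", np.array_equal(rec.sum(axis=1), original.sum(axis=1)))
print("col sums match:", np.array_equal(rec.sum(axis=0), original.sum(axis=0)))
```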

Journal ArticleDOI
TL;DR: The main contribution of this paper is a characterisation of the non-trivial simple sets composed of exactly two voxels, such sets being called minimal simple pairs.
Abstract: Preserving topological properties of objects during thinning procedures is an important issue in the field of image analysis. This paper constitutes an introduction to the study of non-trivial simple sets in the framework of cubical 3-D complexes. A simple set has the property that the homotopy type of the object in which it lies is not changed when the set is removed. The main contribution of this paper is a characterisation of the non-trivial simple sets composed of exactly two voxels, such sets being called minimal simple pairs.

Journal ArticleDOI
TL;DR: It is observed that turbulent paintings of van Gogh belong to his last period, during which episodes of prolonged psychotic agitation of this artist were frequent.
Abstract: We show that the patterns of luminance in some impassioned van Gogh paintings display the mathematical structure of fluid turbulence. Specifically, we show that the probability distribution function (PDF) of luminance fluctuations of points (pixels) separated by a distance R compares notably well with the PDF of the velocity differences in a turbulent flow, as predicted by the statistical theory of A.N. Kolmogorov. We observe that turbulent paintings of van Gogh belong to his last period, during which episodes of prolonged psychotic agitation of this artist were frequent. Our approach suggests new tools that open the possibility of quantitative objective research for art representation.
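
The measurement itself is straightforward to reproduce on any grayscale image: collect luminance differences between pixel pairs separated by a distance R and histogram them; Kolmogorov-style scaling predicts increasingly heavy-tailed, non-Gaussian PDFs at small R for turbulent fields. The sketch below uses a synthetic random field purely as a placeholder for a digitized painting.

```python
import numpy as np

def increment_pdf(luminance, R, bins=101):
    """Empirical PDF of luminance increments u(x + R) - u(x), taken along
    both image axes at pixel separation R."""
    dx = (luminance[:, R:] - luminance[:, :-R]).ravel()
    dy = (luminance[R:, :] - luminance[:-R, :]).ravel()
    incr = np.concatenate([dx, dy])
    hist, edges = np.histogram(incr, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist, incr.std(), _kurtosis(incr)

def _kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2   # equals 3 for a Gaussian

# placeholder 'painting': a smoothed random field
rng = np.random.default_rng(0)
img = rng.standard_normal((512, 512))
for _ in range(10):                                  # crude smoothing
    img = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                  np.roll(img, 1, 1) + np.roll(img, -1, 1))
for R in (1, 4, 16, 64):
    _, _, std, kurt = increment_pdf(img, R)
    print(f"R = {R:3d}  std = {std:.3f}  kurtosis = {kurt:.2f}")
```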

Journal ArticleDOI
TL;DR: This paper considers a segmentation as a set of connected regions separated by a frontier, and defines four classes of graphs for which, thanks to the notion of cleft, some of the difficulties in defining merging procedures are avoided; the main result is that one of these classes is the class of graphs in which any cleft is thin.
Abstract: Region merging methods consist of improving an initial segmentation by merging some pairs of neighboring regions. In this paper, we consider a segmentation as a set of connected regions, separated by a frontier. If the frontier set cannot be reduced without merging some regions then we call it a cleft, or binary watershed. In a general graph framework, merging two regions is not straightforward. We define four classes of graphs for which we prove, thanks to the notion of cleft, that some of the difficulties for defining merging procedures are avoided. Our main result is that one of these classes is the class of graphs in which any cleft is thin. None of the usual adjacency relations on ℤ² and ℤ³ allows a satisfying definition of merging. We introduce the perfect fusion grid on ℤⁿ, a regular graph in which merging two neighboring regions can always be performed by removing from the frontier set all the points adjacent to both regions.

Journal ArticleDOI
TL;DR: In this paper, digital covering spaces are classified by means of the conjugacy class corresponding to a digital covering space.
Abstract: In this paper we classify digital covering spaces using the conjugacy class corresponding to a digital covering space.

Journal ArticleDOI
TL;DR: A method is presented for deconvolution of images by means of an inversion of fractional powers of the Gaussian, using a regularizing term which is also a fractional power of the Laplacian and allows higher frequencies to be recovered.
Abstract: We present a method for deconvolution of images by means of an inversion of fractional powers of the Gaussian. The main feature of our model is the introduction of a regularizing term which is also a fractional power of the Laplacian. This term allows us to recover higher frequencies. The model is particularly useful to devise an algorithm for blind deconvolution. We will show, analyze and illustrate through examples the performance of this algorithm.
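
Both ingredients, a fractional power of the Gaussian and a fractional power of the Laplacian, are diagonal in the Fourier domain, so a Tikhonov-style version of the inversion fits in a few lines: divide by the (fractional) Gaussian symbol while penalizing with lambda |xi|^(2*beta). This is my own simplified, quadratic-regularization reading of the idea, not the authors' algorithm, and the blind-deconvolution part is omitted; all parameter values are illustrative.

```python
import numpy as np

def frac_gaussian_deconv(blurred, sigma, alpha=1.0, beta=1.0, lam=1e-3):
    """Invert an alpha-fractional Gaussian blur with a fractional-Laplacian
    (|xi|^(2*beta)) penalty, entirely in the Fourier domain:
        u_hat = conj(K) g_hat / (|K|^2 + lam |xi|^(2*beta)),
    where K(xi) = exp(-2 pi^2 sigma^2 alpha |xi|^2) is the symbol of the
    alpha-th power of the Gaussian of width sigma."""
    H, W = blurred.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    xi2 = fy ** 2 + fx ** 2
    K = np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * alpha * xi2)
    G = np.fft.fft2(blurred)
    U = np.conj(K) * G / (np.abs(K) ** 2 + lam * xi2 ** beta)
    return np.real(np.fft.ifft2(U))

# toy test: blur a synthetic image with the same fractional Gaussian, add noise
rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[40:90, 30:100] = 1.0
sigma, alpha = 2.0, 0.8
fy = np.fft.fftfreq(128)[:, None]
fx = np.fft.fftfreq(128)[None, :]
K = np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * alpha * (fy ** 2 + fx ** 2))
blurred = np.real(np.fft.ifft2(K * np.fft.fft2(clean))) + 0.01 * rng.standard_normal(clean.shape)
restored = frac_gaussian_deconv(blurred, sigma, alpha)
print("error blurred :", np.abs(blurred - clean).mean())
print("error restored:", np.abs(restored - clean).mean())
```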

Journal ArticleDOI
TL;DR: A new method is described for resistant and robust alignment of sets of 2D shapes with respect to position, rotation, and isotropic scaling, formulated as a linear programming problem via the L1-norm.
Abstract: This paper describes a new method for resistant and robust alignment of sets of 2D shapes with respect to position, rotation, and isotropic scaling. Apart from robustness, a major advantage of the method is that it is formulated as a linear programming (LP) problem, thus enabling the use of well known and thoroughly tested standard numerical software. The problem is formulated as the minimization of the norm of a linear vector function with a constraint of non-zero size. This is achieved by using the Manhattan distance between points in the plane. Unfortunately the Manhattan distance is dependent on the orientation of the coordinate system, i.e. it is not rotationally invariant. However, by simultaneously minimizing the Manhattan distances in a series of rotated coordinate systems we are able to approximate the circular equidistance curves of Euclidean distances with a regular polygonal equidistance curve to the precision needed. Using 3 coordinate systems rotated by 30° we get a 12-sided regular polygon, with which we achieve deviations from Euclidean distances of less than 2% over all directions. This new formulation allows for minimization in the L1-norm using LP. We demonstrate that the use of the L1-norm results in resistance towards object as well as landmark outliers. Examples that illustrate the properties of the robust norm are given on simulated as well as biological data sets.

Journal ArticleDOI
TL;DR: A natural and intrinsic regularization of the log-likelihood estimate based on differential geometrical properties of teeth surfaces is proposed, and general conditions under which this may be considered a Bayes prior are shown.
Abstract: We propose a method for restoring the surface of tooth crowns in a 3D model of a human denture, so that the pose and anatomical features of the tooth will work well for chewing. This is achieved by including information about the position and anatomy of the other teeth in the mouth. Our system contains two major parts: a statistical model of a selection of tooth shapes and a reconstruction of missing data. We use a training set consisting of 3D scans of dental cast models obtained with a laser scanner, and we have built a model of the shape variability of the teeth, their neighbors, and their antagonists, using the eigenstructure of the covariance matrix, also known as Principal Component Analysis (PCA). PCA is equivalent to fitting a multivariate Gaussian distribution to the data, and the principal directions constitute a linear model for stochastic data that is used both for data reduction, or equivalently noise elimination, and for data analysis. However, for small sets of high dimensional data, the log-likelihood estimator for the covariance matrix is often far from convergence, and therefore reliable models must be obtained by use of prior information. We propose a natural and intrinsic regularization of the log-likelihood estimate based on differential geometrical properties of teeth surfaces, and we show general conditions under which this may be considered a Bayes prior. Finally we use Bayes method to propose the reconstruction of missing data, e.g. for finding the most probable shape of a missing tooth based on the best match of our shape model with the known data, and we show improved reconstructions with our full system.
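
The reconstruction step can be illustrated with the standard conditional-Gaussian formula: once a (regularized) mean and covariance over the stacked surface coordinates are available, the most probable values of the missing block given the observed block are mu_m + Sigma_mo Sigma_oo^(-1) (x_o - mu_o). The sketch below uses simple shrinkage toward the identity in place of the paper's geometry-based prior on the covariance, and random vectors in place of tooth surfaces; everything here is a hypothetical stand-in.

```python
import numpy as np

def fit_gaussian(X, shrink=0.1):
    """Mean and regularized covariance of the training set X (n_samples x d).
    Shrinkage toward a scaled identity stands in for the paper's
    geometry-based prior on the covariance."""
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False)
    C = (1.0 - shrink) * C + shrink * np.trace(C) / X.shape[1] * np.eye(X.shape[1])
    return mu, C

def reconstruct_missing(x_obs, obs_idx, mis_idx, mu, C):
    """Most probable missing values given the observed ones under N(mu, C):
    mu_m + C_mo C_oo^(-1) (x_o - mu_o)."""
    C_oo = C[np.ix_(obs_idx, obs_idx)]
    C_mo = C[np.ix_(mis_idx, obs_idx)]
    return mu[mis_idx] + C_mo @ np.linalg.solve(C_oo, x_obs - mu[obs_idx])

# toy data: 40 training 'shapes' of dimension 12 with correlated coordinates
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 4))
train = rng.standard_normal((40, 4)) @ A.T + 0.05 * rng.standard_normal((40, 12))
mu, C = fit_gaussian(train)

truth = rng.standard_normal(4) @ A.T
obs_idx, mis_idx = np.arange(0, 8), np.arange(8, 12)
pred = reconstruct_missing(truth[obs_idx], obs_idx, mis_idx, mu, C)
print("reconstruction error:", np.abs(pred - truth[mis_idx]).mean())
```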

Journal ArticleDOI
TL;DR: Calculation of the focal length and the optical center of the camera are the main objectives of this work; the proposed technique requires a single image of a rectangular prism, employed as the calibration target to generate two vanishing points.
Abstract: This paper presents a novel method for 3D camera calibration. Calculation of the focal length and the optical center of the camera are the main objectives of this research work. The proposed technique requires a single image having two vanishing points. A rectangular prism is employed as the calibration target to generate the vanishing points. The special arrangement of the calibration object adds accuracy to finding the intrinsic parameters. Based on the geometry of the perspective distortion of the edges of the prism in the image, the vanishing points are found. From there, fixing the picture plane followed by fixing the station point is carried out based on the relations that are formulated. Experimental results of our method are compared with Zhang's method. Results are tabulated to show the accuracy of the proposed approach.
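
For the focal-length part, the standard relation is easy to state in code: if v1 and v2 are vanishing points of two mutually orthogonal space directions and the principal point is assumed known (here, supplied explicitly), then orthogonality of the back-projected rays gives (v1 - c)·(v2 - c) + f² = 0. The sketch below covers only this relation, not the paper's construction of the vanishing points from the prism edges or its estimation of the optical center; the camera and directions are synthetic.

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    """Focal length (in pixels) from the vanishing points of two mutually
    orthogonal space directions, given a known principal point c:
    orthogonality of the rays K^-1 [v, 1] gives (v1 - c).(v2 - c) + f^2 = 0."""
    c = np.asarray(principal_point, dtype=float)
    d = np.dot(np.asarray(v1, float) - c, np.asarray(v2, float) - c)
    if d >= 0:
        raise ValueError("vanishing points not consistent with orthogonal directions")
    return np.sqrt(-d)

# synthetic check: project two orthogonal directions and recover f
f_true, c = 800.0, np.array([320.0, 240.0])
K = np.array([[f_true, 0.0, c[0]], [0.0, f_true, c[1]], [0.0, 0.0, 1.0]])
d1 = np.array([1.0, 0.2, 0.5])
d2 = np.array([0.3, 1.0, 0.4])
d2 = d2 - (d2 @ d1) / (d1 @ d1) * d1          # make the directions exactly orthogonal
v1h, v2h = K @ d1, K @ d2                     # vanishing points, homogeneous coordinates
v1, v2 = v1h[:2] / v1h[2], v2h[:2] / v2h[2]
print(focal_from_vanishing_points(v1, v2, c))   # ~800
```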

Journal ArticleDOI
TL;DR: A common framework for the triangulation problem of points, lines and conics is presented and an algorithm for computing the globally optimal solution is derived based on convex and concave relaxations for both fractionals and monomials.
Abstract: The problem of reconstructing 3D scene features from multiple views with known camera motion and given image correspondences is considered. This is a classical and one of the most basic geometric problems in computer vision and photogrammetry. Yet, previous methods fail to guarantee optimal reconstructions: they are either plagued by local minima or rely on a non-optimal cost-function. A common framework for the triangulation problem of points, lines and conics is presented. We define what is meant by an optimal triangulation based on statistical principles and then derive an algorithm for computing the globally optimal solution. The method for achieving the global minimum is based on convex and concave relaxations for both fractionals and monomials. The performance of the method is evaluated on real image data.
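
For contrast with the globally optimal approach, here is the standard linear (DLT) two-view point triangulation: it minimizes an algebraic rather than a statistically meaningful cost, which is the kind of non-optimal cost-function the abstract contrasts with. The camera matrices and point below are synthetic; this is a baseline sketch, not the paper's method.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2 : 3x4 camera matrices;  x1, x2 : 2D image points (pixels).
    Minimizes an algebraic error only; not statistically optimal."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# synthetic check: two calibrated views of a known 3D point
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # baseline along x
X_true = np.array([0.3, -0.2, 5.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est, "vs", X_true)
```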