
Showing papers in "Journal of Mathematical Imaging and Vision in 2019"


Journal ArticleDOI
TL;DR: The blended cubic spline technique combines the additional properties of interpolating the data when $$\lambda \rightarrow \infty$$ and reducing to the well-known cubic smoothing spline when the manifold is Euclidean.
Abstract: We propose several methods that address the problem of fitting a $$C^1$$ curve $$\gamma$$ to time-labeled data points on a manifold. The methods have a parameter, $$\lambda$$, to adjust the relative importance of the two goals that the curve should meet: being “straight enough” while fitting the data “closely enough”. The methods are designed for ease of use: they only require computing Riemannian exponentials and logarithms, they represent the curve by means of a number of tangent vectors that grows linearly with the number of data points, and, once the representation is computed, evaluating $$\gamma (t)$$ at any t requires a small number of exponentials and logarithms (independent of the number of data points). Among the proposed methods, the blended cubic spline technique combines the additional properties of interpolating the data when $$\lambda \rightarrow \infty$$ and reducing to the well-known cubic smoothing spline when the manifold is Euclidean. The methods are illustrated on synthetic and real data.
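As a rough illustration of the two primitives the methods rely on, the sketch below (assuming the unit sphere as the manifold, with NumPy) implements the Riemannian exponential and logarithm and evaluates a toy piecewise-geodesic curve; it is not the authors' blended cubic spline, only the building blocks it uses.

import numpy as np

def sphere_exp(p, v):
    # Exponential map on the unit sphere at p, applied to tangent vector v.
    nrm = np.linalg.norm(v)
    if nrm < 1e-12:
        return p
    return np.cos(nrm) * p + np.sin(nrm) * v / nrm

def sphere_log(p, q):
    # Logarithm map: tangent vector at p pointing towards q.
    cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(p)
    w = q - cos_t * p
    return theta * w / np.linalg.norm(w)

def piecewise_geodesic(points, t):
    # Toy curve through the data: evaluate the piecewise geodesic at t in [0, 1].
    n = len(points) - 1
    k = min(int(t * n), n - 1)   # segment index
    s = t * n - k                # local parameter in [0, 1]
    return sphere_exp(points[k], s * sphere_log(points[k], points[k + 1]))

# toy usage: three data points on the sphere
pts = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
print(piecewise_geodesic(pts, 0.25))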

33 citations


Journal ArticleDOI
TL;DR: A new hybrid set of orthogonal polynomials (OPs) is presented, termed the squared Krawtchouk–Tchebichef polynomial (SKTP), which is formed from two existing hybrid OPs originating from Krawtchouk and Tchebichef polynomials.
Abstract: In the past decades, orthogonal moments (OMs) have received significant attention and have been widely applied in various applications. OMs are considered beneficial and effective tools in different digital processing fields. In this paper, a new hybrid set of orthogonal polynomials (OPs) is presented. The new set of OPs is termed the squared Krawtchouk–Tchebichef polynomial (SKTP). SKTP is formed from two existing hybrid OPs which originate from Krawtchouk and Tchebichef polynomials. The mathematical design of the proposed OP is presented. The performance of the SKTP is evaluated and compared with the existing hybrid OPs in terms of signal representation, energy compaction (EC) property, and localization property. The achieved results show that SKTP outperforms the existing hybrid OPs. In addition, a face recognition system is employed, using a well-known database under clean and different noisy environments, to evaluate SKTP's capabilities. Particularly, SKTP is utilized to transform face images into the moment (transform) domain to extract features. The performance of SKTP is compared with existing hybrid OPs. The comparison results confirm that SKTP delivers remarkable and stable results for face recognition.

31 citations


Journal ArticleDOI
TL;DR: It can be shown that a variety of algorithms for classical image reconstruction problems, including TV-$$L^2$$ denoising and inpainting, can be implemented in low- and higher-order finite element spaces with the same efficiency as their counterparts originally developed for images on Cartesian grids.
Abstract: The total variation (TV)-seminorm is considered for piecewise polynomial, globally discontinuous (DG) and continuous (CG) finite element functions on simplicial meshes. A novel, discrete variant (DTV) based on a nodal quadrature formula is defined. DTV has favorable properties, compared to the original TV-seminorm for finite element functions. These include a convenient dual representation in terms of the supremum over the space of Raviart–Thomas finite element functions, subject to a set of simple constraints. It can therefore be shown that a variety of algorithms for classical image reconstruction problems, including TV- $$L^2$$ denoising and inpainting, can be implemented in low- and higher-order finite element spaces with the same efficiency as their counterparts originally developed for images on Cartesian grids.
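For readers unfamiliar with the underlying model, the following sketch (NumPy, Cartesian grid, smoothed TV) spells out the TV-$$L^2$$ energy whose finite-element counterpart the paper studies; it is only an illustration of the objective, not the DTV/Raviart–Thomas algorithm itself.

import numpy as np

def rof_energy(u, f, alpha, eps=1e-6):
    # Discrete (smoothed) Rudin-Osher-Fatemi energy: alpha*TV(u) + 0.5*||u - f||^2.
    gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences, Neumann-like boundary
    gy = np.diff(u, axis=0, append=u[-1:, :])
    tv = np.sqrt(gx**2 + gy**2 + eps).sum()
    return alpha * tv + 0.5 * ((u - f) ** 2).sum()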

30 citations


Journal ArticleDOI
TL;DR: In this article, a broad variety of methods proposed for anomaly detection in images is reviewed, and the best representative algorithms in each class are reformulated by attaching to them an a-contrario detection that controls the number of false positives, thereby deriving a uniform detection scheme for all.
Abstract: We review the broad variety of methods that have been proposed for anomaly detection in images. Most methods found in the literature have in mind a particular application. Yet we focus on a classification of the methods based on the structural assumption they make on the “normal” image, assumed to obey a “background model.” Five different structural assumptions emerge for the background model. Our analysis leads us to reformulate the best representative algorithms in each class by attaching to them an a-contrario detection that controls the number of false positives and thus deriving a uniform detection scheme for all. By combining the most general structural assumptions expressing the background’s normality with the proposed generic statistical detection tool, we end up proposing several generic algorithms that seem to generalize or reconcile most methods. We compare the six best representatives of our proposed classes of algorithms on anomalous images taken from classic papers on the subject, and on a synthetic database. Our conclusion hints that it is possible to perform automatic anomaly detection on a single image.
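The a-contrario decision rule that unifies the detectors can be sketched as follows; the Gaussian background model and the function name a_contrario_detect are illustrative assumptions, not the paper's generic algorithms.

import numpy as np
from scipy.stats import norm

def a_contrario_detect(residual, sigma, epsilon=1.0):
    # An anomaly is declared when its Number of False Alarms,
    # NFA = (number of tests) * P(observation | background model),
    # falls below a target epsilon (expected number of false detections).
    n_tests = residual.size
    p_values = 2.0 * norm.sf(np.abs(residual) / sigma)   # two-sided Gaussian tail
    nfa = n_tests * p_values
    return nfa < epsilon   # boolean anomaly map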

27 citations


Journal ArticleDOI
TL;DR: In this paper, the edge-weighted geodesic distance from a marker set is used as a penalty term for selective segmentation in medical images. The proposed model is less parameter-dependent and requires less user input than previous models.
Abstract: Selective segmentation is an important application of image processing. In contrast to global segmentation in which all objects are segmented, selective segmentation is used to isolate specific objects in an image and is of particular interest in medical imaging—permitting segmentation and review of a single organ. An important consideration is to minimise the amount of user input to obtain the segmentation; this differs from interactive segmentation in which more user input is allowed than selective segmentation. To achieve selection, we propose a selective segmentation model which uses the edge-weighted geodesic distance from a marker set as a penalty term. It is demonstrated that this edge-weighted geodesic penalty term improves on previous selective penalty terms. A convex formulation of the model is also presented, allowing arbitrary initialisation. It is shown that the proposed model is less parameter dependent and requires less user input than previous models. Further modifications are made to the edge-weighted geodesic distance term to ensure segmentation robustness to noise and blur. We can show that the overall Euler–Lagrange equation admits a unique viscosity solution. Numerical results show that the result is robust to user input and permits selective segmentations that are not possible with other models.
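A minimal sketch of an edge-weighted geodesic distance from a marker set, computed with Dijkstra's algorithm on the pixel graph, is given below; the local cost 1 + beta*|grad I|^2 is an assumed placeholder, and the paper's penalty term and PDE formulation are more refined than this illustration.

import heapq
import numpy as np

def geodesic_distance(image, markers, beta=100.0):
    # Edge-weighted geodesic distance from the marker pixels, 4-connected grid.
    gy, gx = np.gradient(image.astype(float))
    cost = 1.0 + beta * (gx**2 + gy**2)          # crossing strong edges is expensive
    dist = np.full(image.shape, np.inf)
    heap = []
    for (i, j) in markers:                       # markers: list of (row, col) seeds
        dist[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))
    h, w = image.shape
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + 0.5 * (cost[i, j] + cost[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, ni, nj))
    return dist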

22 citations


Journal ArticleDOI
TL;DR: A simple algorithm is introduced that finds an optimal matching between two curves by computing the geodesic of the infinite-dimensional manifold of curves that is at all times horizontal to the fibers of the shape bundle, and a comparison with dynamic programming is established.
Abstract: The aim of this paper is to find an optimal matching between manifold-valued curves, and thereby adequately compare their shapes, seen as equivalence classes with respect to the action of reparameterization. Using a canonical decomposition of a path in a principal bundle, we introduce a simple algorithm that finds an optimal matching between two curves by computing the geodesic of the infinite-dimensional manifold of curves that is at all times horizontal to the fibers of the shape bundle. We focus on the elastic metric studied in Le Brigant (J Geom Mech 9(2):131–156, 2017) using the so-called square root velocity framework. The quotient structure of the shape bundle is examined, and in particular horizontality with respect to the fibers. These results are more generally given for any elastic metric. We then introduce a comprehensive discrete framework which correctly approximates the smooth setting when the base manifold has constant sectional curvature. It is itself a Riemannian structure on the product manifold $$M^{n}$$ of “discrete curves” given by n points, and we show its convergence to the continuous model as the size n of the discretization goes to $$\infty$$. Illustrations of geodesics and optimal matchings between discrete curves are given in the hyperbolic plane, the plane and the sphere, for synthetic and real data, and a comparison with dynamic programming (Srivastava and Klassen in Functional and shape data analysis, Springer, Berlin, 2016) is established.
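The square root velocity transform at the core of the elastic framework can be written down directly in the Euclidean special case; the sketch below (NumPy) covers only that special case, since manifold-valued curves additionally require tangent-space computations and parallel transport.

import numpy as np

def srv_transform(curve):
    # curve: (n, d) array of sampled points; returns the (n-1, d) SRV representation
    # q = c' / sqrt(|c'|) computed with discrete velocities.
    vel = np.diff(curve, axis=0)
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    return vel / np.sqrt(np.maximum(speed, 1e-12))

# Under the SRV transform, the elastic distance between two curves reduces to the
# L2 distance between their (optimally reparameterized) SRV functions.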

22 citations


Journal ArticleDOI
TL;DR: This paper proposes a multiscale edge detection method based on first-order derivative of anisotropic Gaussian kernels normalized in scale-space, yielding a maximum response at the scale of the observed edge, and accordingly, the edge scale can be identified.
Abstract: Spatially scaled edges are ubiquitous in natural images. To better detect edges with heterogeneous widths, in this paper, we propose a multiscale edge detection method based on first-order derivative of anisotropic Gaussian kernels. These kernels are normalized in scale-space, yielding a maximum response at the scale of the observed edge, and accordingly, the edge scale can be identified. Subsequently, the maximum response and the identified edge scale are used to compute the edge strength. Furthermore, we propose an adaptive anisotropy factor of which the value decreases as the kernel scale increases. This factor improves the noise robustness of small-scale kernels while alleviating the anisotropy stretch effect that occurs in conventional anisotropic methods. Finally, we evaluate our method on widely used datasets. Experimental results validate the benefits of our method over the competing methods.
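A scale-normalized multiscale response can be sketched with isotropic Gaussian derivative kernels as follows (SciPy); the paper's anisotropic kernels and adaptive anisotropy factor are not reproduced, so this only illustrates the scale-selection idea of multiplying the response at scale sigma by sigma.

import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_edge_response(image, scales=(1, 2, 4, 8)):
    # Scale-normalized first-order Gaussian derivative responses: the normalized
    # response peaks at the scale of the observed edge, so the argmax over scales
    # identifies a per-pixel edge scale.
    responses = []
    for s in scales:
        gx = gaussian_filter(image.astype(float), sigma=s, order=(0, 1))
        gy = gaussian_filter(image.astype(float), sigma=s, order=(1, 0))
        responses.append(s * np.hypot(gx, gy))   # gamma-normalized gradient magnitude
    responses = np.stack(responses)
    strength = responses.max(axis=0)             # edge strength
    scale_idx = responses.argmax(axis=0)         # index of the identified edge scale
    return strength, scale_idx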

20 citations


Journal ArticleDOI
TL;DR: A nonlinear diffusion equation with smooth solution is proposed to remove multiplicative Gamma noise; the diffusion coefficient takes full advantage of two features of the multiplicative-noise image, namely gradient information and gray-level information, which gives the model the ability to remove high-level noise effectively and protect the edges.
Abstract: The multiplicative noise removal problem is of momentous significance in various image processing applications. In this paper, a nonlinear diffusion equation with smooth solution is proposed to remove multiplicative Gamma noise. The diffusion coefficient takes full advantage of two features of the multiplicative-noise image, namely gradient information and gray-level information, which gives the model the ability to remove high-level noise effectively and protect the edges. The existence of the solution has been analyzed by Schauder’s fixed-point theorem. Some other theoretical properties such as the maximum principle are also presented in the paper. In the numerical aspect, the explicit finite difference method, fast explicit diffusion method, additive operator splitting method and Krylov subspace spectral method are employed to implement the proposed model. Experimental results show that the fast explicit diffusion method achieves a better trade-off between computational time and denoising performance, and the Krylov subspace spectral method gets better restored results in the visual aspect. In addition, the capability of the proposed model for denoising is illustrated by comparison with other denoising models.
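One explicit finite-difference step of a generic nonlinear diffusion u_t = div(g grad u) is sketched below; the Perona–Malik-type coefficient used here is only a placeholder, whereas the paper's coefficient also exploits gray-level information, which is what targets multiplicative Gamma noise.

import numpy as np

def diffusion_step(u, k=0.05, tau=0.1):
    # One explicit step of u_t = div(g(|grad u|) grad u) with an edge-stopping
    # coefficient; tau must stay small enough for stability of the explicit scheme.
    gx = np.gradient(u, axis=1)
    gy = np.gradient(u, axis=0)
    g = 1.0 / (1.0 + (gx**2 + gy**2) / k**2)     # placeholder edge-stopping coefficient
    div = np.gradient(g * gx, axis=1) + np.gradient(g * gy, axis=0)
    return u + tau * div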

20 citations


Journal ArticleDOI
TL;DR: A novel variational formulation of the multivariate Gaussian fitting problem is proposed, which is applicable to any dimension, accounts for possible nonzero background and noise in the input data, and shows good robustness when tested on synthetic datasets.
Abstract: Fitting Gaussian functions to empirical data is a crucial task in a variety of scientific applications, especially in image processing. However, most of the existing approaches for performing such fitting are restricted to two dimensions and they cannot be easily extended to higher dimensions. Moreover, they are usually based on alternating minimization schemes which benefit from few theoretical guarantees in the underlying nonconvex setting. In this paper, we provide a novel variational formulation of the multivariate Gaussian fitting problem, which is applicable to any dimension and accounts for possible nonzero background and noise in the input data. The block multiconvexity of our objective function leads us to propose a proximal alternating method to minimize it in order to estimate the Gaussian shape parameters. The resulting FIGARO algorithm is shown to converge to a critical point under mild assumptions. The algorithm shows a good robustness when tested on synthetic datasets. To demonstrate the versatility of FIGARO, we also illustrate its excellent performance in the fitting of the point spread functions of experimental raw data from a two-photon fluorescence microscope.
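The forward model being fitted can be sketched as a Gaussian bump plus a constant background; the function below (NumPy) is an illustrative assumption of that model and does not implement the proximal alternating FIGARO iterations.

import numpy as np

def gaussian_model(grid, amplitude, background, mean, precision):
    # grid: (n, d) sample locations; precision: (d, d) symmetric positive definite matrix.
    # Returns amplitude * exp(-0.5 (x-mean)^T precision (x-mean)) + background at each sample.
    diff = grid - mean
    quad = np.einsum('ni,ij,nj->n', diff, precision, diff)
    return amplitude * np.exp(-0.5 * quad) + background

# Least-squares data fit to be minimized over the shape parameters:
# 0.5 * ||gaussian_model(grid, a, b, mu, P) - data||^2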

19 citations


Journal ArticleDOI
TL;DR: This article studies several reconstruction methods for the inverse source problem of photoacoustic tomography with spatially variable sound speed and damping, derives the well-posedness of the underlying nonstandard wave equation in a natural functional space, and proves the finite speed of propagation.
Abstract: In this article, we study several reconstruction methods for the inverse source problem of photoacoustic tomography with spatially variable sound speed and damping. The backbone of these methods is the adjoint operators, which we thoroughly analyze in both the $$L^2$$ - and $$H^1$$ -settings. They are cast in the form of a nonstandard wave equation. We derive the well-posedness of the aforementioned wave equation in a natural functional space and also prove the finite speed of propagation. Under the uniqueness and visibility condition, our formulations of the standard iterative reconstruction methods, such as Landweber’s and conjugate gradients (CG), achieve a linear rate of convergence in either $$L^2$$ - or $$H^1$$ -norm. When the visibility condition is not satisfied, the problem is severely ill-posed and one must apply a regularization technique to stabilize the solutions. To that end, we study two classes of regularization methods: (i) iterative and (ii) variational regularization. In the case of full data, our simulations show that the CG method works best; it is very fast and robust. In the ill-posed case, the CG method behaves unstably. The total variation regularization method (TV), in this case, significantly improves the reconstruction quality.
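A generic Landweber iteration, the simplest of the iterative schemes discussed, looks as follows when the forward operator is abstracted as a plain matrix; in the paper the operator and its adjoint are realized through wave equations, which this sketch does not attempt.

import numpy as np

def landweber(A, y, tau, n_iter=100):
    # Landweber iteration x_{k+1} = x_k + tau * A^T (y - A x_k) for A x = y.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)
    return x

# Convergence requires 0 < tau < 2 / ||A||^2 (largest singular value squared).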

17 citations


Journal ArticleDOI
TL;DR: A new, fast and efficient method is proposed for calculating orthogonal moments of a discrete 3D image using a representation by cuboids of identical gray level, called the image cuboid representation, and it is shown that this method improves the quality of 3D image reconstruction from low-order moments.
Abstract: The rise of digital imaging is remarkable, and image processing and analysis techniques must keep pace with this technological evolution. Our work fits into a line of research on moment theory for digital imaging, in which values extracted from digital images serve as unique descriptors for classification or reconstruction. In this paper, we propose a new, fast and efficient method for calculating orthogonal moments of a discrete 3D image. We opt for the orthogonal polynomials of Meixner and for a new representation of the 3D image by cuboids of identical gray level, called the image cuboid representation. Based on this representation, we calculate the moments on each cuboid before summing over all cuboids to obtain the global moments of the 3D image. Through a set of simulations, we show that our method reduces the time required to compute the moments of a 3D image of any size and order and, moreover, improves the quality of 3D image reconstruction from low-order moments.

Journal ArticleDOI
TL;DR: This paper proposes a robust computational method to automatically detect and remove flare spot artifacts and defines a new confidence measure able to select flare spots among the candidates; and a method to accurately determine the flare region is given.
Abstract: Flare spot is one type of flare artifact caused by a number of conditions, frequently provoked by one or more high-luminance sources within or close to the camera field of view. When light rays coming from a high-luminance source reach the front element of a camera, it can produce intra-reflections within camera elements that emerge at the film plane forming non-image information or flare on the captured image. Even though preventive mechanisms are used, artifacts can appear. In this paper, we propose a robust computational method to automatically detect and remove flare spot artifacts. Our contribution is threefold: firstly, we propose a characterization which is based on intrinsic properties that a flare spot is likely to satisfy; secondly, we define a new confidence measure able to select flare spots among the candidates; and, finally, a method to accurately determine the flare region is given. Then, the detected artifacts are removed by using exemplar-based inpainting. We show that our algorithm achieves top-tier quantitative and qualitative performance.

Journal ArticleDOI
TL;DR: An algorithm guaranteeing the non-expansiveness of the image's gradient support set is proposed, and it is proved that the restorations produced by the algorithm have an edge preservation property.
Abstract: We consider a class of non-Lipschitz regularization problems that include the $$\hbox {TV}^p$$ model as a special case. A lower bound theory of the non-Lipschitz regularization is obtained, which inspires us to propose an algorithm guaranteeing the non-expansiveness of the image's gradient support set. After being proximally linearized, this algorithm can be easily implemented. Some standard techniques in image processing, like the fast Fourier transform, can be utilized. The global convergence is also established. Moreover, we prove that the restorations produced by the algorithm have an edge preservation property. Numerical examples are given to show the good performance of the algorithm and the validity of the theoretical results.

Journal ArticleDOI
TL;DR: A combinatorial interpretation of the algorithm based on the concept of a multidimensional discrete Morse function is introduced for the first time in this paper and a substantial rate of reduction in the number of cells achieved by the algorithm is shown.
Abstract: Given a simplicial complex and a vector-valued function on its vertices, we present an algorithmic construction of an acyclic partial matching on the cells of the complex compatible with the given function. This implies the construction can be used to build a reduced filtered complex with the same multidimensional persistent homology as that of the original complex filtered by the sublevel sets of the function. The correctness of the algorithm is proved, and its complexity is analyzed. A combinatorial interpretation of our algorithm based on the concept of a multidimensional discrete Morse function is introduced for the first time in this paper. Numerical experiments show a substantial rate of reduction in the number of cells achieved by the algorithm.

Journal ArticleDOI
TL;DR: A new variational model for removing haze from a single input image is proposed that can generate a haze-free image with fewer staircasing artifacts in the slanted plane and more details in the remote scene of an input image.
Abstract: In this paper, we propose a new variational model for removing haze from a single input image. The proposed model combines two total generalized variation (TGV) regularizations, which are related to the image intensity and the transmission map, respectively, to build an optimization problem. Actually, TGV functionals are more appropriate for describing a natural color image and its transmission map with slanted plane. By minimizing the energy functional with double-TGV regularizations, we obtain the final haze-free image and the refined transmission map simultaneously instead of the general two-step framework. The existence and uniqueness of solutions to the proposed variational model are further obtained. Moreover, the variational model can be solved in a unified way by realizing a primal–dual method for associated saddle-point problems. A number of experimental results on natural hazy images are presented to demonstrate our superior performance, in comparison with some state-of-the-art methods in terms of the subjective and objective visual quality assessments. Compared with the total variation-based models, the proposed model can generate a haze-free image with fewer staircasing artifacts in the slanted plane and more details in the remote scene of an input image.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a regulariser based on the natural vector field operations gradient, divergence, curl and shear, which generalizes well-known first and second-order TV-type regularisation methods and enables interpolation between them.
Abstract: We introduce a novel regulariser based on the natural vector field operations gradient, divergence, curl and shear. For suitable choices of the weighting parameters contained in our model, it generalises well-known first- and second-order TV-type regularisation methods including TV, ICTV and TGV $$^2$$ and enables interpolation between them. To better understand the influence of each parameter, we characterise the nullspaces of the respective regularisation functionals. Analysing the continuous model, we conclude that it is not sufficient to combine penalisation of the divergence and the curl to achieve high-quality results, but interestingly it seems crucial that the penalty functional includes at least one component of the shear or suitable boundary conditions. We investigate which requirements regarding the choice of weighting parameters yield a rotational invariant approach. To guarantee physically meaningful reconstructions, implying that conservation laws for vectorial differential operators remain valid, we need a careful discretisation that we therefore discuss in detail.
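Finite-difference versions of the four vector-field quantities entering the regulariser can be sketched as below; the paper's careful staggered discretisation, which is what keeps the conservation laws exact, is deliberately not reproduced here.

import numpy as np

def field_operators(v1, v2):
    # For a 2D vector field v = (v1, v2): divergence, curl and the two shear
    # components, computed with simple central differences via np.gradient.
    d1x = np.gradient(v1, axis=1); d1y = np.gradient(v1, axis=0)
    d2x = np.gradient(v2, axis=1); d2y = np.gradient(v2, axis=0)
    div = d1x + d2y
    curl = d2x - d1y
    sh1 = d1x - d2y          # first shear component
    sh2 = d1y + d2x          # second shear component
    return div, curl, sh1, sh2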

Journal ArticleDOI
TL;DR: A rigid motion scheme that preserves geometry and topology properties of the transformed digital object and provides sufficient conditions to be fulfilled by a continuous object for guaranteeing both topology and geometry preservation during its digitization is proposed.
Abstract: Rigid motions (i.e. transformations based on translations and rotations) are simple, yet important, transformations in image processing. In Euclidean spaces, namely R^n, they are both topology and geometry preserving. Unfortunately, these properties are generally lost in Z^n. In particular, when applying a rigid motion on a digital object, one generally alters its structure but also the global shape of its boundary. These alterations are mainly caused by (re)digitization during the transformation process. In this specific context of digitization, some solutions for the handling of topological issues were proposed in Z^2 and/or Z^3. In this article, we also focus on geometric issues, in the case of Z^2. More precisely, we propose a rigid motion algorithmic scheme that relies on an initial polygonization and a final digitization step. The intermediate modeling of a digital object of Z^2 as a piecewise affine object of R^2 allows us to avoid the geometric alterations generally induced by standard pointwise rigid motions. The crucial step is then related to the final (re)digitization of the polygon back to Z^2. To tackle this issue, we propose a new notion of quasi-regularity that provides sufficient conditions to be fulfilled by an object for guaranteeing both topology and geometry preservation, in particular the preservation of the convex/concave parts of its boundary.

Journal ArticleDOI
TL;DR: A novel discretization of the Laplace–Beltrami operator on digital surfaces is presented, adapting an existing convolution technique proposed by Belkin et al.
Abstract: This article presents a novel discretization of the Laplace–Beltrami operator on digital surfaces. We adapt an existing convolution technique proposed by Belkin et al. [5] for triangular meshes to the topological border of subsets of Z^n. The core of the method relies on first-order estimation of measures associated with our discrete elements (such as length, area, etc.). We show strong consistency (i.e. pointwise convergence) of the operator and compare it against various other discretizations.
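A heat-kernel (Belkin-style) estimate of the Laplace–Beltrami operator on a sampled surface can be sketched as follows; the area measures mu, the time parameter t and the normalising constant are placeholders, and the first-order measure estimation on digital surfaces that the paper contributes is not shown.

import numpy as np

def heat_kernel_laplacian(points, mu, f, t):
    # points: (n, 3) vertex positions, mu: (n,) area measures, f: (n,) function values.
    # Each vertex aggregates Gaussian-weighted differences of f over its neighbours.
    diff2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-diff2 / (4.0 * t))
    lap = (w * mu[None, :] * (f[None, :] - f[:, None])).sum(axis=1)
    return lap / (4.0 * np.pi * t**2)   # normalising constant assumed, not taken from the paper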

Journal ArticleDOI
TL;DR: The proposed method introduces a new fitting term, more useful in practice than the Chan–Vese framework, that allows the background to consist of multiple regions of inhomogeneity.
Abstract: Selective segmentation involves incorporating user input to partition an image into foreground and background, by discriminating between objects of a similar type. Typically, such methods involve introducing additional constraints to generic segmentation approaches. However, we show that this is often inconsistent with respect to common assumptions about the image. The proposed method introduces a new fitting term that is more useful in practice than the Chan-Vese framework. In particular, the idea is to define a term that allows for the background to consist of multiple regions of inhomogeneity. We provide comparative experimental results to alternative approaches to demonstrate the advantages of the proposed method, broadening the possible application of these methods.

Journal ArticleDOI
TL;DR: This paper generalizes fuzzy mathematical morphology to process multivariate images in a way that overcomes the problem of defining an appropriate order among colors.
Abstract: Mathematical morphology is a framework composed of a set of well-known image processing techniques, widely used for binary and grayscale images, but less commonly used to process color or multivariate images. In this paper, we generalize fuzzy mathematical morphology to process multivariate images in a way that overcomes the problem of defining an appropriate order among colors. We introduce the soft color erosion and the soft color dilation, which are the foundations of the rest of the operators. Besides studying their theoretical properties, we analyze their behavior and compare them with the corresponding morphological operators from other frameworks that deal with color images. The soft color morphology stands out when handling images in the CIEL*a*b* color space, where it guarantees that no colors with chromatic values different from the original ones are created. The soft color morphological operators prove to be easily customizable but also highly interpretable. Besides, they are fast operators and provide smooth outputs, more visually appealing than the crisp color transitions provided by other approaches.

Journal ArticleDOI
TL;DR: This paper proposes a novel denoising technique based on the total variation method with an emphasis on edge preservation, which incorporates into the model functional a novel edge detector derived from the fuzzy complement, the non-local means filter and the structure tensor.
Abstract: In medical imaging applications, diagnosis relies essentially on good quality images. Edges play a crucial role in identifying features useful to reach accurate conclusions. However, noise can compromise this task as it degrades image information by altering important features and adding new artifacts, rendering images non-diagnosable. In this paper, we propose a novel denoising technique based on the total variation method with an emphasis on edge preservation. Image denoising techniques such as the Rudin–Osher–Fatemi model which are guided by a gradient regularizer are generally accompanied by a staircasing effect and loss of details. To overcome these issues, our technique incorporates into the model functional a novel edge detector derived from the fuzzy complement, the non-local means filter and the structure tensor. This procedure offers more control over the regularization, allowing more denoising in smooth regions and less denoising when processing edge regions. Experimental results on synthetic images demonstrate the ability of the proposed edge detector to determine edges with high accuracy. Furthermore, denoising experiments conducted on CT scan images and comparison with other denoising methods show that the proposed denoising method outperforms them.

Journal ArticleDOI
TL;DR: It is shown that the polyhedral complex in the FCC grid, obtained by the repairing algorithm, is well-composed and homotopy equivalent to the complex naturally associated with the given image I with edge-adjacency (18- adjacency).
Abstract: A 3D image I is well-composed if it does not contain critical edges or vertices (where the boundary of I is non-manifold). The process of transforming an image into a well-composed one is called repairing. We propose to repair 3D images by associating the face-centered cubic grid (FCC grid) with the cubic grid. We show that the polyhedral complex in the FCC grid, obtained by our repairing algorithm, is well-composed and homotopy equivalent to the complex naturally associated with the given image I with edge-adjacency (18-adjacency). We illustrate an application on two tasks related to the repaired image: boundary reconstruction and computation of its Euler characteristic.

Journal ArticleDOI
TL;DR: A novel accelerated alternating optimization scheme to solve block biconvex nonsmooth problems whose objectives can be split into smooth (separable) regularizers and simple coupling terms is proposed.
Abstract: In this paper, we propose a novel accelerated alternating optimization scheme to solve block biconvex nonsmooth problems whose objectives can be split into smooth (separable) regularizers and simple coupling terms. The proposed method performs a Bregman distance-based generalization of the well-known forward–backward splitting for each block, along with an inertial strategy which aims at getting empirical acceleration. We discuss the theoretical convergence of the proposed scheme and provide numerical experiments on image colorization.
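One inertial forward–backward update for a single block can be sketched as below, with the Euclidean distance standing in for the general Bregman distance; prox, grad and the inertia parameter beta are user-supplied assumptions, and the alternation over the blocks of the biconvex objective is not shown.

import numpy as np

def inertial_fb_step(x, x_prev, grad, prox, step, beta=0.5):
    # Inertial extrapolation followed by a proximal-gradient (forward-backward) step.
    # grad: gradient of the smooth part; prox(z, step): proximal map of the nonsmooth part.
    y = x + beta * (x - x_prev)
    x_new = prox(y - step * grad(y), step)
    return x_new, x   # new iterate, and the value to pass as x_prev next time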

Journal ArticleDOI
TL;DR: Monge’s theory is brought to the field of 3D reconstruction and it is proved that equiareal Shape-from-Template has a maximum of two local solutions sufficiently near an initial curve that lies on the surface.
Abstract: This paper studies the 3D reconstruction of a deformable surface from a single image and a reference surface, known as the template. This problem is known as Shape-from-Template and has been recently shown to be well-posed for isometric deformations, for which the surface bends without altering geodesics. This paper studies the case of equiareal deformations. They are elastic deformations where the local area is preserved and thus include isometry as a special case. Elastic deformations have been studied before in Shape-from-Template, yet no theoretical results were given on the existence or uniqueness of solutions. The equiareal model is much more widely applicable than isometry. This paper brings Monge’s theory, widely used for studying the solutions of nonlinear first-order PDEs, to the field of 3D reconstruction. It uses this theory to establish a theoretical framework for equiareal Shape-from-Template and answers the important question of whether it is possible to reconstruct a surface exactly with a much weaker prior than isometry. We prove that equiareal Shape-from-Template has a maximum of two local solutions sufficiently near an initial curve that lies on the surface. In addition, we propose an analytical reconstruction algorithm that can recover the multiple solutions. Our algorithm uses standard numerical tools for ODEs. We use the perspective camera model and give reconstruction results with both synthetic and real examples.

Journal ArticleDOI
TL;DR: A type of adaptive heavy-tailed image priors is presented, which results in a new regularized formulation for nonparametric blind super-resolution; the proposed priors are also shown to be applicable to blind image deblurring, a degenerate problem of nonparametric blind SR.
Abstract: Single-image nonparametric blind super-resolution is a fundamental image restoration problem yet largely ignored in the past decades among the computational photography and computer vision communities. An interesting phenomenon is observed: learning-based single-image super-resolution (SR) has been experiencing rapid development since the boom of sparse representation in the mid-2000s and especially representation learning in the 2010s, wherein the high-res image is generally assumed to be blurred by a bicubic or Gaussian blur kernel. However, the parametric assumption on the form of blur kernels does not hold in most practical applications, because in real low-res imaging a high-res image can undergo complex blur processes, e.g., Gaussian-shaped kernels of varying sizes, ellipse-shaped kernels of varying orientations, curvilinear kernels of varying trajectories. The paper is mainly motivated by one of our previous works: Shao and Elad (in: Zhang (ed) ICIG 2015, Part III, Lecture notes in computer science, Springer, Cham, 2015). Specifically, we take one step further in this paper and present a type of adaptive heavy-tailed image priors, which result in a new regularized formulation for nonparametric blind super-resolution. The new image priors can be expressed and understood as a generalized integration of the normalized sparsity measure and relative total variation. Although the proposed priors seem simple, their core merit is their practical capability for the challenging task of nonparametric blur kernel estimation for both super-resolution and deblurring. Harnessing the priors, a higher-quality intermediate high-res image becomes possible and therefore more accurate blur kernel estimation can be accomplished. A great many experiments are performed on both synthetic and real-world blurred low-res images, demonstrating the comparable or even superior performance of the proposed algorithm convincingly. Meanwhile, the proposed priors are demonstrated to be quite applicable to blind image deblurring, which is a degenerate problem of nonparametric blind SR.
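One of the two ingredients the priors generalize, the normalized sparsity measure ||grad u||_1 / ||grad u||_2, can be computed as follows; relative total variation and the adaptive combination proposed in the paper are not reproduced here.

import numpy as np

def normalized_sparsity(u, eps=1e-8):
    # Ratio of the L1 to the L2 norm of the image gradient, a scale-invariant
    # sparsity measure favoring sharp images over blurry ones.
    gx = np.diff(u, axis=1)
    gy = np.diff(u, axis=0)
    l1 = np.abs(gx).sum() + np.abs(gy).sum()
    l2 = np.sqrt((gx**2).sum() + (gy**2).sum())
    return l1 / (l2 + eps)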

Journal ArticleDOI
TL;DR: A new proof of the equivalence between the ROF model and the so-called taut string algorithm is presented, and a fundamental estimate on the denoised signal in terms of the corrupted signal is derived.
Abstract: We study the one-dimensional version of the Rudin–Osher–Fatemi (ROF) denoising model and some related TV-minimization problems. A new proof of the equivalence between the ROF model and the so-called taut string algorithm is presented, and a fundamental estimate on the denoised signal in terms of the corrupted signal is derived. Based on duality and the projection theorem in Hilbert space, the proof of the taut string interpretation is strictly elementary with the existence and uniqueness of solutions (in the continuous setting) to both models following as by-products. The standard convergence properties of the denoised signal, as the regularizing parameter tends to zero, are recalled and efficient proofs provided. The taut string interpretation plays an essential role in the proof of the fundamental estimate. This estimate implies, among other things, the strong convergence (in the space of functions of bounded variation) of the denoised signal to the corrupted signal as the regularization parameter vanishes. It can also be used to prove semi-group properties of the denoising model. Finally, it is indicated how the methods developed can be applied to related problems such as the fused lasso model, isotonic regression and signal restoration with higher-order total variation regularization.
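For orientation, one common normalization of the 1D ROF model and its taut string interpretation can be written as follows; the constants and exact tube normalization may differ from the authors' convention.

% One common normalization of the 1D ROF model studied in the paper:
\min_{u \in \mathrm{BV}(0,1)} \ \frac{1}{2}\int_0^1 \bigl(u(x)-f(x)\bigr)^2\,\mathrm{d}x \;+\; \alpha\,|Du|(0,1).
% Taut string interpretation: writing F(x) = \int_0^x f(s)\,\mathrm{d}s, the primitive U of
% the minimizer is the shortest path ("taut string") through the tube
% \{\, V : \|V-F\|_\infty \le \alpha,\ V(0)=F(0),\ V(1)=F(1) \,\}, and u = U'.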

Journal ArticleDOI
TL;DR: The novelty in this work is a new algorithm that incorporates a translation-invariant Besov regularizer that does not depend on wavelets, thus improving on earlier results.
Abstract: We formulate various variational problems in which the smoothness of functions is measured using Besov space semi-norms. Equivalent Besov space semi-norms can be defined in terms of moduli of smoothness or sequence norms of coefficients in appropriate wavelet expansions. Wavelet-based semi-norms have been used before in variational problems, but existing algorithms do not preserve edges, and many result in blocky artifacts. Here, we devise algorithms using moduli of smoothness for the $$B^1_\infty (L_1(I))$$ Besov space semi-norm. We choose that particular space because it is closely related both to the space of functions of bounded variation, $${\text {BV}}(I)$$ , that is used in Rudin–Osher–Fatemi image smoothing, and to the $$B^1_1(L_1(I))$$ Besov space, which is associated with wavelet shrinkage algorithms. It contains all functions in $${\text {BV}}(I)$$ , which include functions with discontinuities along smooth curves, as well as “fractal-like” rough regions; examples are given in an appendix. Furthermore, it prefers affine regions to staircases, potentially making it a desirable regularizer for recovering piecewise affine data. While our motivations and computational examples come from image processing, we make no claim that our methods “beat” the best current algorithms. The novelty in this work is a new algorithm that incorporates a translation-invariant Besov regularizer that does not depend on wavelets, thus improving on earlier results. Furthermore, the algorithm naturally exposes a range of scales that depends on the image data, noise level, and the smoothing parameter. We also analyze the norms of smooth, textured, and random Gaussian noise data in $$B^1_\infty (L_1(I))$$ , $$B^1_1(L_1(I))$$ , $${\text {BV}}(I)$$ and $$L^2(I)$$ and their dual spaces. Numerical results demonstrate properties of solutions obtained from this moduli of smoothness-based regularizer.

Journal ArticleDOI
TL;DR: The classical models (decentering, thin prism distortion) are found to be particular instances of the family of models obtained by geometric considerations, and these results allow us to find generalizations of the most commonly employed models while preserving the desired geometrical properties.
Abstract: Polynomial functions are a usual choice to model the nonlinearity of lenses. Typically, these models are obtained through physical analysis of the lens system or on purely empirical grounds. The aim of this work is to facilitate an alternative approach to the selection or design of these models based on establishing a priori the desired geometrical properties of the distortion functions. With this purpose we obtain all the possible isotropic linear models and also those that are formed by functions with symmetry with respect to some axis. In this way, the classical models (decentering, thin prism distortion) are found to be particular instances of the family of models found by geometric considerations. These results allow us to find generalizations of the most commonly employed models while preserving the desired geometrical properties. Our results also provide a better understanding of the geometric properties of the models employed in the most usual computer vision software libraries.
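One widely used polynomial parameterization covering the classical radial, decentering and thin-prism terms (roughly the Brown–Conrady convention) is sketched below; it is provided only as background for the model families the paper analyzes, not as the paper's own construction.

import numpy as np

def distort(x, y, k1, k2, p1, p2, s1, s2):
    # Normalized image coordinates (x, y) mapped through radial (k1, k2),
    # decentering/tangential (p1, p2) and thin-prism (s1, s2) distortion terms.
    r2 = x**2 + y**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2*p1*x*y + p2*(r2 + 2*x**2) + s1*r2
    yd = y * radial + p1*(r2 + 2*y**2) + 2*p2*x*y + s2*r2
    return xd, yd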

Journal ArticleDOI
TL;DR: An approach is presented for variational regularization of inverse and imaging problems for recovering functions with values in a set of vectors, using regularization functionals which are derivative-free double integrals of such functions.
Abstract: We present an approach for variational regularization of inverse and imaging problems for recovering functions with values in a set of vectors. We introduce regularization functionals, which are derivative-free double integrals of such functions. These regularization functionals are motivated from double integrals, which approximate Sobolev semi-norms of intensity functions. These were introduced in Bourgain et al. (Another look at Sobolev spaces. In: Menaldi, Rofman, Sulem (eds) Optimal control and partial differential equations-innovations and applications: in honor of professor Alain Bensoussan’s 60th anniversary, IOS Press, Amsterdam, pp 439–455, 2001). For the proposed regularization functionals, we prove existence of minimizers as well as a stability and convergence result for functions with values in a set of vectors.
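Schematically, the double integrals in question have the following form (after Bourgain–Brezis–Mironescu; exponents and normalizations shown here are indicative only, and the paper replaces the intensity difference by a distance adapted to vector-set-valued functions).

% Derivative-free double-integral functional approximating a Sobolev semi-norm:
R_\varepsilon(f) \;=\; \int_\Omega \int_\Omega \frac{|f(x)-f(y)|^p}{|x-y|^p}\,\rho_\varepsilon(|x-y|)\,\mathrm{d}x\,\mathrm{d}y
\;\xrightarrow[\varepsilon \to 0]{}\; C\,\|\nabla f\|_{L^p(\Omega)}^p ,
% where (\rho_\varepsilon) is a family of mollifiers concentrating at the origin.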

Journal ArticleDOI
TL;DR: A way is proposed to compute a well-composed representation of any gray-level image defined on a discrete surface, which is a more general framework than the usual cubical grid.
Abstract: In 2013, Najman and Geraud proved that by working on a well-composed discrete representation of a gray-level image, we can compute what is called its tree of shapes, a hierarchical representation of the shapes in this image. This way, we can proceed to morphological filtering and to image segmentation. However, the authors did not provide such a representation for the non-cubical case. We propose in this paper a way to compute a well-composed representation of any gray-level image defined on a discrete surface, which is a more general framework than the usual cubical grid. Furthermore, the proposed representation is self-dual in the sense that it treats bright and dark components in the image the same way. This paper can be seen as an extension to gray-level images of the works of Daragon et al. on discrete surfaces.