
Showing papers in "IEEE Transactions on Computational Imaging in 2017"


Journal ArticleDOI
TL;DR: It is shown that the quality of the results improves significantly with better loss functions, even when the network architecture is left unchanged, and a novel, differentiable error function is proposed.
Abstract: Neural networks are becoming central in several areas of computer vision and image processing and different architectures have been proposed to solve specific problems. The impact of the loss layer of neural networks, however, has not received much attention in the context of image processing: the default and virtually only choice is $\ell _2$ . In this paper, we bring attention to alternative choices for image restoration. In particular, we show the importance of perceptually-motivated losses when the resulting image is to be evaluated by a human observer. We compare the performance of several losses, and propose a novel, differentiable error function. We show that the quality of the results improves significantly with better loss functions, even when the network architecture is left unchanged.

1,758 citations
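The sensitivity difference between losses can be illustrated with a toy NumPy comparison of the default $\ell _2$ loss against an $\ell _1$ alternative (illustrative data only; a single outlier pixel inflates the $\ell _2$ loss far more, which is one reason the choice of loss changes restoration behavior):

```python
import numpy as np

def l2_loss(pred, target):
    # Mean squared error: the default choice examined in the paper.
    return np.mean((pred - target) ** 2)

def l1_loss(pred, target):
    # Mean absolute error: one of the alternative losses compared.
    return np.mean(np.abs(pred - target))

rng = np.random.default_rng(0)
target = rng.random((8, 8))
pred = target + 0.01          # small uniform error everywhere...
pred[0, 0] += 1.0             # ...plus one large outlier pixel

# How much the single outlier inflates each loss relative to the
# outlier-free prediction:
ratio_l2 = l2_loss(pred, target) / l2_loss(target + 0.01, target)
ratio_l1 = l1_loss(pred, target) / l1_loss(target + 0.01, target)
```

The squared penalty makes the outlier dominate the $\ell _2$ loss by orders of magnitude, while the $\ell _1$ loss grows only modestly.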


Journal ArticleDOI
TL;DR: It is shown that for any denoising algorithm satisfying an asymptotic criterion, called a bounded denoiser, Plug-and-Play ADMM converges to a fixed point under a continuation scheme.
Abstract: The alternating direction method of multipliers (ADMM) is a widely used algorithm for solving constrained optimization problems in image restoration. Among its many useful features, one critical feature is its modular structure, which allows one to plug any off-the-shelf image denoising algorithm into a subproblem of the ADMM iteration. Because of this plug-in nature, this class of ADMM algorithms has been coined “Plug-and-Play ADMM.” Plug-and-Play ADMM has demonstrated promising empirical results in a number of recent papers. However, it is unclear under what conditions, and with which denoising algorithms, convergence is guaranteed. Also, since Plug-and-Play ADMM splits the variables in a specific way, it is unclear whether fast implementations are possible for common Gaussian and Poissonian image restoration problems. In this paper, we propose a Plug-and-Play ADMM algorithm with provable fixed-point convergence. We show that for any denoising algorithm satisfying an asymptotic criterion, called a bounded denoiser, Plug-and-Play ADMM converges to a fixed point under a continuation scheme. We also present fast implementations for two image restoration problems: superresolution and single-photon imaging. We compare Plug-and-Play ADMM with state-of-the-art algorithms for each problem type and demonstrate promising experimental results.

509 citations
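A minimal sketch of the plug-in idea, assuming a simple denoising data term and using a Gaussian filter as a stand-in for an off-the-shelf denoiser such as BM3D (all parameters are illustrative; the paper's algorithm adds a continuation scheme on top of iterations like these):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def plug_and_play_admm(y, denoiser, rho=1.0, iters=20):
    # Plug-and-Play ADMM for the simple data term 0.5*||x - y||^2, with an
    # arbitrary off-the-shelf denoiser used in place of a proximal operator.
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (v - u)) / (1.0 + rho)  # quadratic subproblem (closed form)
        v = denoiser(x + u)                    # plug-in denoiser subproblem
        u = u + x - v                          # dual variable update
    return v

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# A Gaussian filter stands in for a bounded denoiser (e.g., BM3D).
restored = plug_and_play_admm(noisy, lambda z: gaussian_filter(z, sigma=1.0))
err_noisy = np.mean((noisy - clean) ** 2)
err_restored = np.mean((restored - clean) ** 2)
```

The modularity is the point: swapping `denoiser` for any other image denoiser changes the implicit prior without touching the optimization loop.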


Journal ArticleDOI
TL;DR: In this article, the authors proposed an efficient method to produce an image that is significantly sharper than the input blurry one, without introducing artifacts, such as halos and noise amplification, which can be used as a preprocessing step to induce the learning of more effective upscaling filters with built-in sharpening and contrast enhancement.
Abstract: Given an image, we wish to produce an image of larger size with significantly more pixels and higher image quality. This is generally known as the single image super-resolution problem. The idea is that with sufficient training data (corresponding pairs of low and high resolution images) we can learn a set of filters (i.e., a mapping) that, when applied to a given image that is not in the training set, will produce a higher resolution version of it, where the learning is preferably of low complexity. In our proposed approach, the run-time is one to two orders of magnitude faster than the best competing methods currently available, while producing results comparable to or better than the state of the art. A closely related topic is image sharpening and contrast enhancement, i.e., improving the visual quality of a blurry image by amplifying the underlying details (a wide range of frequencies). Our approach additionally includes an extremely efficient way to produce an image that is significantly sharper than the input blurry one, without introducing artifacts such as halos and noise amplification. We illustrate how this effective sharpening algorithm, in addition to being of independent interest, can be used as a preprocessing step to induce the learning of more effective upscaling filters with a built-in sharpening and contrast enhancement effect.

239 citations


Journal ArticleDOI
TL;DR: The FlatCam design is demonstrated using two prototypes: one at visible wavelengths and one at infrared wavelengths, and a separable mask is employed to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity.
Abstract: FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera, where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor, which enables thin and flat form-factor imaging devices. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.

137 citations
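The separable measurement model $Y = \Phi_L X \Phi_R^T$ can be sketched in a few lines (random matrices stand in for the calibrated binary mask matrices; in this noiseless, overdetermined toy case two small pseudo-inverses recover the scene exactly):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 16, 32   # n x n scene, m x m sensor (more measurements than unknowns)
# Separable mask model: row and column modulation matrices. Real FlatCam
# masks are binary patterns designed for good conditioning; random
# matrices stand in for them here.
phi_l = rng.random((m, n))
phi_r = rng.random((m, n))

scene = np.zeros((n, n))
scene[4:12, 4:12] = 1.0
sensor = phi_l @ scene @ phi_r.T   # each sensor pixel mixes many scene points

# The separable structure means calibration and inversion involve two small
# m x n matrices instead of one (m*m) x (n*n) system.
recon = np.linalg.pinv(phi_l) @ sensor @ np.linalg.pinv(phi_r).T
err = np.max(np.abs(recon - scene))
```

With noise, the pseudo-inverses would be replaced by a regularized solve, but the memory and compute savings of the separable model are the same.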


Journal ArticleDOI
TL;DR: In this paper, the unmixing of contributions from signal and noise sources is addressed by removing detections likely to be due to noise at each pixel in an image, where data from neighboring pixels are combined to improve depth estimates.
Abstract: Conventional LIDAR systems require hundreds or thousands of photon detections per pixel to form accurate depth and reflectivity images. Recent photon-efficient computational imaging methods are remarkably effective with only 1.0 to 3.0 detected photons per pixel, but they are not demonstrated at signal-to-background ratio (SBR) below 1.0 because their imaging accuracies degrade significantly in the presence of high background noise. We introduce a new approach to depth and reflectivity estimation that emphasizes the unmixing of contributions from signal and noise sources. At each pixel in an image, short-duration range gates are adaptively determined and applied to remove detections likely to be due to noise. For pixels with too few detections to perform this censoring accurately, data are combined from neighboring pixels to improve depth estimates, where the neighborhood formation is also adaptive to scene content. Algorithm performance is demonstrated on experimental data at varying levels of noise. Results show improved performance of both reflectivity and depth estimates over state-of-the-art methods, especially at low SBR. In particular, accurate imaging is demonstrated with SBR as low as 0.04. This validation of a photon-efficient, noise-tolerant method demonstrates the viability of rapid, long-range, and low-power LIDAR imaging.

124 citations
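The censoring idea can be sketched on synthetic photon data (all numbers illustrative; the paper forms the range gates adaptively per pixel and borrows neighboring pixels' data when counts are too low):

```python
import numpy as np

rng = np.random.default_rng(3)
true_time = 42.0   # photon time of flight for the true surface (arbitrary units)
t_max = 100.0

signal = true_time + 0.5 * rng.standard_normal(25)   # 25 signal photons
background = rng.uniform(0.0, t_max, 100)            # 100 noise photons (SBR = 0.25)
detections = np.concatenate([signal, background])

# Gating idea: locate the histogram mode, then censor detections falling
# outside a short window around it.
counts, edges = np.histogram(detections, bins=50, range=(0.0, t_max))
peak = np.argmax(counts)
mode_center = 0.5 * (edges[peak] + edges[peak + 1])
gate = np.abs(detections - mode_center) < 2.0
depth_est = detections[gate].mean()
```

Even at SBR 0.25, the gate discards most background detections, so the depth estimate is driven by the signal photons rather than the uniform noise.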


Journal ArticleDOI
TL;DR: A new framework that adaptively chooses the patterns from a wavelet basis is proposed, together with a simple and efficient image recovery scheme, to overcome the computationally demanding $\ell _1$ -minimization of compressed sensing.
Abstract: Single-pixel camera imaging is an emerging paradigm that allows high-quality images to be provided by a device equipped with only a single point detector. A single-pixel camera is an experimental setup able to measure the inner product of the scene under view—the image—with any user-defined pattern. Postprocessing a sequence of point measurements obtained with different patterns permits the recovery of spatial information, as demonstrated by state-of-the-art approaches belonging to the compressed sensing framework. In this paper, a new framework for the choice of the patterns is proposed, together with a simple and efficient image recovery scheme. Our goal is to overcome the computationally demanding $\ell _1$ -minimization of compressed sensing. We propose to choose patterns from a wavelet basis in an adaptive fashion, which essentially relies on predicting the locations of the significant wavelet coefficients. More precisely, we adopt a multiresolution strategy that exploits the set of measurements acquired at coarse scales to predict the set of measurements to be performed at a finer scale. Prediction is based on a fast cubic interpolation in the image domain. A general formalism is given so that any kind of wavelet can be used, which enables one to adjust the wavelet to the type of images in the desired application. Both simulated and experimental results demonstrate the ability of our technique to reconstruct biomedical images with improved quality compared with compressive-sensing-based recovery. Application to the real-time fluorescence imaging of biological tissues could benefit from the proposed method.

88 citations
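A rough sketch of the multiresolution prediction step, using Haar wavelets and cubic interpolation on a synthetic piecewise-constant scene (block averages stand in for the single-pixel measurements; the measurement budget is illustrative):

```python
import numpy as np
from scipy.ndimage import zoom

def haar_details(img):
    # One level of Haar detail coefficients (horizontal, vertical, diagonal).
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a - b + c - d) / 4, (a + b - c - d) / 4, (a - b - c + d) / 4

n = 64
img = np.zeros((n, n))
img[10:45, 10:45] = 1.0   # piecewise-constant "scene"

# Coarse acquisition: each 4x4 block average is one single-pixel measurement
# (a coarse-scale Haar approximation coefficient, up to scaling).
coarse = img.reshape(n // 4, 4, n // 4, 4).mean(axis=(1, 3))

# Prediction: cubic interpolation of the coarse image flags where the
# finer-scale detail coefficients are likely significant (near edges).
predicted = zoom(coarse, 2.0, order=3)
h, v, dg = haar_details(predicted)
significance = np.abs(h) + np.abs(v) + np.abs(dg)
budget = int(0.25 * significance.size)   # measure only the top 25%
keep = np.argsort(significance.ravel())[-budget:]

# Ground truth: the detail coefficients the camera would actually measure
# at the finer scale (2x2 block averages of the scene).
fine = img.reshape(n // 2, 2, n // 2, 2).mean(axis=(1, 3))
th, tv, tdg = haar_details(fine)
true_energy = (np.abs(th) + np.abs(tv) + np.abs(tdg)).ravel()
captured = true_energy[keep].sum() / true_energy.sum()
```

The interpolated coarse image concentrates the predicted significance around edges, which is exactly where the true fine-scale detail energy lives, so a small measurement budget captures most of it.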


Journal ArticleDOI
TL;DR: A new method for video SR named motion compensation and residual net (MCResNet) is proposed, which employs a novel deep residual convolutional neural network (CNN) to predict a high-resolution image using multiple motion compensated observations.
Abstract: Video superresolution (SR) techniques are essential for high-resolution display devices, given the current lack of high-resolution videos. Although many algorithms have been proposed, video SR remains a very challenging inverse problem under different conditions. In this paper, we propose a new method for video SR named motion compensation and residual net (MCResNet). We use an optical flow algorithm for motion estimation and motion compensation as a preprocessing step. Then, we employ a novel deep residual convolutional neural network (CNN) to predict a high-resolution image from multiple motion-compensated observations. The new residual CNN model preserves the low-frequency contents and facilitates the restoration of high-frequency details. Our method is able to handle large and complex motions adaptively. Extensive experimental results validate that our proposed method outperforms state-of-the-art single-image-based and multi-frame-based algorithms for video SR, both quantitatively and qualitatively.

87 citations


Journal ArticleDOI
TL;DR: This paper describes a method for minimizing the mutual coherence of sensing matrices in electromagnetic imaging applications and demonstrates the algorithm's ability to both decrease the coherence and generate sensing matrices with improved CS recovery capabilities.
Abstract: Compressive sensing (CS) theory states that sparse signals can be recovered from a small number of linear measurements $y=Ax$ using $\ell _1\text{-}$ norm minimization techniques, provided that the sensing matrix satisfies a restricted isometry property (RIP). Unfortunately, the RIP is difficult to verify in electromagnetic imaging applications, where the sensing matrix is computed deterministically. Although it provides weaker reconstruction guarantees than the RIP, the mutual coherence is a more practical metric for assessing the CS recovery properties of deterministic matrices. In this paper, we describe a method for minimizing the mutual coherence of sensing matrices in electromagnetic imaging applications. Numerical results for the design method are presented for a simple multiple monostatic imaging application, in which the sensor positions for each measurement serve as the design variables. These results demonstrate the algorithm's ability to both decrease the coherence and to generate sensing matrices with improved CS recovery capabilities.

82 citations
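Mutual coherence itself is cheap to compute; a toy accept-if-better loop stands in here for the paper's design procedure (which optimizes sensor positions, not matrix entries directly — everything below is illustrative):

```python
import numpy as np

def mutual_coherence(A):
    # Largest absolute normalized inner product between distinct columns.
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(5)
m, n = 20, 40
A = rng.standard_normal((m, n))
mu0 = mutual_coherence(A)

# Toy design loop: random perturbations of the sensing matrix are kept
# only when they lower the coherence.
for _ in range(200):
    cand = A + 0.05 * rng.standard_normal((m, n))
    if mutual_coherence(cand) < mutual_coherence(A):
        A = cand

mu_final = mutual_coherence(A)
```

Low coherence loosely means no two measurement columns are near-parallel, which is what makes sparse recovery guarantees applicable to deterministic matrices.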


Journal ArticleDOI
TL;DR: In this article, the generic iterative reweighted annihilation filter algorithm is proposed, which exploits the convolutional structure of the lifted matrix to work in the original unlifted domain, considerably reducing the complexity.
Abstract: Fourier-domain structured low-rank matrix priors are emerging as powerful alternatives to traditional image recovery methods such as total variation and wavelet regularization. These priors specify that a convolutional structured matrix, i.e., Toeplitz, Hankel, or their multilevel generalizations, built from Fourier data of the image should be low-rank. The main challenge in applying these schemes to large-scale problems is the computational complexity and memory demand resulting from lifting the image data to a large-scale matrix. We introduce a fast and memory-efficient approach called the generic iterative reweighted annihilation filter algorithm that exploits the convolutional structure of the lifted matrix to work in the original unlifted domain, thus considerably reducing the complexity. Our experiments on the recovery of images from undersampled Fourier measurements show that the resulting algorithm is considerably faster than previously proposed algorithms and can accommodate much larger problem sizes than previously studied.

80 citations


Journal ArticleDOI
TL;DR: In this article, two Markov random field priors enforcing spatial correlations are assigned to the depth and reflectivity images, and the restoration problem is reduced to a convex formulation with respect to each of the parameters of interest.
Abstract: This paper presents two new algorithms for the joint restoration of depth and reflectivity (DR) images constructed from time-correlated single-photon counting measurements. Two extreme cases are considered: 1) a reduced acquisition time that leads to very low photon counts; and 2) imaging in a highly attenuating environment (such as a turbid medium), which makes the reflectivity estimation more difficult at increasing range. Adopting a Bayesian approach, the Poisson distributed observations are combined with prior distributions about the parameters of interest, to build the joint posterior distribution. More precisely, two Markov random field (MRF) priors enforcing spatial correlations are assigned to the DR images. Under some justified assumptions, the restoration problem (regularized likelihood) reduces to a convex formulation with respect to each of the parameters of interest. This problem is first solved using an adaptive Markov chain Monte Carlo (MCMC) algorithm that approximates the minimum mean square parameter estimators. This algorithm is fully automatic since it adjusts the parameters of the MRFs by maximum marginal likelihood estimation. However, the MCMC-based algorithm exhibits a relatively long computational time. The second algorithm deals with this issue and is based on a coordinate descent algorithm. Results on single-photon depth data from laboratory-based underwater measurements demonstrate the benefit of the proposed strategy that improves the quality of the estimated DR images.

71 citations


Journal ArticleDOI
TL;DR: A parametric level-set method is presented for the reconstruction of salt bodies in seismic full-waveform inversion, and the method is extended to joint inversion of both the background velocity model and the salt geometry.
Abstract: Seismic full-waveform inversion tries to estimate subsurface medium parameters from seismic data. Areas with subsurface salt bodies are of particular interest because they often have hydrocarbon reservoirs on their sides or underneath. Accurate reconstruction of their geometry is a challenge for current techniques. This paper presents a parametric level-set method for the reconstruction of salt-bodies in seismic full-waveform inversion. We split the subsurface model in two parts: a background velocity model and a salt body with known velocity but undetermined shape. The salt geometry is represented by a level-set function that evolves during the inversion. We choose radial basis functions to represent the level-set function, leading to an optimization problem with a modest number of parameters. A common problem with level-set methods is to fine-tune the width of the level-set boundary for optimal sensitivity. We propose a robust algorithm that dynamically adapts the width of the level-set boundary to ensure faster convergence. Tests on a suite of idealized salt geometries show that the proposed method is stable against a modest amount of noise. We also extend the method to joint inversion of both the background velocity model and the salt geometry.

Journal ArticleDOI
Fei Wen1, Ling Pei1, Yuan Yang, Wenxian Yu1, Peilin Liu1 
TL;DR: In this article, the authors propose a robust formulation for sparse reconstruction that employs the $\ell _1$ -norm as the loss function for the residual error and utilizes a generalized nonconvex penalty to induce sparsity.
Abstract: This paper addresses the robust reconstruction of a sparse signal from compressed measurements. We propose a robust formulation for sparse reconstruction that employs the $\ell _1$ -norm as the loss function for the residual error and utilizes a generalized nonconvex penalty to induce sparsity. The $\ell _1$ -loss is less sensitive to outliers in the measurements than the popular $\ell _2$ -loss, while the nonconvex penalty has the capability of ameliorating the bias problem of the popular convex LASSO penalty and thus can yield more accurate recovery. To solve this nonconvex and nonsmooth minimization formulation efficiently, we propose a first-order algorithm based on the alternating direction method of multipliers. A smoothing strategy on the $\ell _1$ -loss function is used in deriving the new algorithm to make it convergent. Further, a sufficient condition for the convergence of the new algorithm is provided for generalized nonconvex regularization. In comparison with several state-of-the-art algorithms, the new algorithm shows better performance in numerical experiments in recovering sparse signals and compressible images. The new algorithm scales well for large-scale problems, as often encountered in image processing.
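The bias-reduction argument can be seen by comparing the proximal operators of the convex LASSO penalty and a generalized nonconvex penalty such as the minimax concave penalty (MCP), sketched here with illustrative parameters (the paper additionally applies a smoothing strategy to the $\ell _1$ loss to obtain convergence):

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the convex l1 (LASSO) penalty: every surviving
    # coefficient is shrunk by lam, biasing large entries downward.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def mcp_prox(x, lam, gamma=3.0):
    # Proximal operator of the minimax concave penalty (MCP): thresholds
    # small entries but leaves entries above gamma*lam untouched, removing
    # the LASSO's bias on strong coefficients.
    return np.where(
        np.abs(x) <= gamma * lam,
        soft_threshold(x, lam) * gamma / (gamma - 1.0),
        x,
    )

x = np.array([0.05, 0.5, 5.0])
lasso = soft_threshold(x, 0.2)   # large entry 5.0 is shrunk to 4.8
mcp = mcp_prox(x, 0.2)           # large entry 5.0 passes through unchanged
```

Both operators kill the small entry, but only the nonconvex prox returns the strong coefficient unbiased, which is the "more accurate recovery" effect the abstract refers to.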

Journal ArticleDOI
TL;DR: This paper reviews multispectral demosaicing methods and proposes a new one based on the pseudo-panchromatic image (PPI), which provides estimated images of better quality than classical ones.
Abstract: Single-sensor color cameras, which classically use a color filter array to sample RGB channels, have recently been extended to the multispectral domain. To sample more than three wavelength bands, such systems use a multispectral filter array that provides a raw image in which a single channel value is available at each pixel. A demosaicing procedure is then needed to estimate a fully defined multispectral image. In this paper, we review multispectral demosaicing methods and propose a new one based on the pseudo-panchromatic image (PPI). Pixel values in the PPI are computed as the average spectral values. Experimental results show that our method provides estimated images of better quality than classical ones.
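The PPI and the raw-mosaic sampling it guides can be sketched as follows (a 2x2-periodic four-band filter array is assumed purely for illustration; the paper's PPI estimator from the raw data is more careful than the crude 2x2 mean shown last):

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(6)
n, k = 8, 4                      # n x n image, k spectral bands
full = rng.random((n, n, k))     # ground-truth multispectral image

# Simulate the raw mosaic: a 2x2-periodic multispectral filter array
# samples exactly one band at each pixel.
pattern = np.array([[0, 1], [2, 3]])
band_of_pixel = np.tile(pattern, (n // 2, n // 2))
raw = np.take_along_axis(full, band_of_pixel[..., None], axis=2)[..., 0]

# Pseudo-panchromatic image (PPI): the per-pixel average over all bands.
ppi = full.mean(axis=2)

# Because every 2x2 window of this MSFA sees each band exactly once, a 2x2
# mean of the raw mosaic is a crude spatial estimate of the PPI.
ppi_est = uniform_filter(raw, size=2)
```

Demosaicing then fills in the missing band values at each pixel, with the PPI serving as a spatially dense guide image.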

Journal ArticleDOI
TL;DR: This paper is focused on high-resolution inverse synthetic aperture radar (ISAR) imaging and motion estimation of maneuvering targets from compressively sampled echo data and proposes a local structural sparse Bayesian learning (LS-SBL) algorithm exploiting the joint sparsity pattern of adjacent scatterers.
Abstract: This paper is focused on high-resolution inverse synthetic aperture radar (ISAR) imaging and motion estimation of maneuvering targets from compressively sampled echo data. Herein, a local structural sparse Bayesian learning (LS-SBL) algorithm is proposed by exploiting the joint sparsity pattern of adjacent scatterers. A structured prior by modeling the neighboring correlation or dependence is utilized to encode the joint sparsity pattern. Meanwhile, a parametric dictionary with unknown rotational parameters is constructed to represent the target maneuverability. The solution to the LS-SBL algorithm is decomposed into iterations between sparse imaging and dictionary learning. In sparse imaging, an expectation-maximization method is employed for ISAR image formation and hyperparameter estimation by using a predesigned dictionary. In dictionary learning, an efficient approach of rotational parameter estimation is presented to dynamically update the parametric dictionary. Due to the exploitation of joint sparsity pattern, enhanced performance of ISAR image reconstruction can be achieved by effectively preserving the target structure. In addition, the cross-range scaled ISAR image is obtainable by extracting the target geometry, which benefits from the rotational motion estimation. Finally, experiments on simulated and measured data demonstrate the effectiveness of the proposed algorithm.

Journal ArticleDOI
TL;DR: A dataset of 168,055 American high school yearbook photos (37,921 of them frontal-facing) is presented, allowing computation to glimpse into a historical visual record too voluminous to evaluate manually; together with weakly supervised data-driven techniques, it can support scalable historical analysis of large image corpora with minimal human effort.
Abstract: Imagery offers a rich description of our world and communicates a volume and type of information that cannot be captured by text alone. Since the invention of the camera, an ever-increasing number of photographs document our “visual culture” complementing historical texts. Currently, this treasure trove of knowledge can only be analyzed manually by historians, and only at small scale. In this paper, we perform automated analysis on a large-scale historical image dataset. Our main contributions are: 1) A publicly available dataset of 168,055 (37,921 frontal-facing) American high school yearbook portraits. 2) Weakly supervised data-driven techniques to discover historical visual trends in fashion and identify date-specific visual patterns. 3) A classifier to predict when a portrait was taken, with median error of 4 years for women and 6 for men. 4) A new method for discovering and displaying the visual elements used by the classifier to perform the dating task, finding that they correspond to the tell-tale fashions of each era.

Journal ArticleDOI
TL;DR: The proposed distributed unmixing algorithm achieves improved performance and faster convergence than existing state-of-the-art techniques, as verified by extensive simulations on synthetic and real hyperspectral data.
Abstract: Hyperspectral unmixing is a crucial processing step in remote sensing image analysis. Its aim is the decomposition of each pixel in a hyperspectral image into a number of materials, the so-called endmembers, and their corresponding abundance fractions. Among the various unmixing approaches that have been suggested in the literature, we are interested here in unsupervised techniques that rely on some form of nonnegative matrix factorization (NMF). NMF-based techniques provide an easy way to simultaneously estimate the endmembers and their corresponding abundances, though they suffer from mediocre performance and high computational complexity due to the nonconvexity of the involved cost function. Improvements in performance have recently been achieved by imposing additional constraints on the NMF optimization problem related to the sparsity of the abundances. Another feature of hyperspectral images that can be exploited is their high spatial correlation, which translates into the low rank of the involved abundance matrices. Motivated by this, in this paper we propose a novel unmixing method that is based on a simultaneously sparse and low-rank constrained NMF. In addition, prompted by the rapid evolution of multicore processors and graphics processing units, we devise a distributed unmixing scheme that processes different parts of the image in parallel. The proposed distributed unmixing algorithm achieves improved performance and faster convergence than existing state-of-the-art techniques, as verified by extensive simulations on synthetic and real hyperspectral data.
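A baseline multiplicative-update NMF, onto which sparsity and low-rank constraints of the kind described above would be added, can be sketched as follows (dimensions and data are illustrative):

```python
import numpy as np

def nmf(V, r, iters=500, eps=1e-9, seed=0):
    # Plain multiplicative-update NMF (Lee-Seung style) for V ~ W @ H.
    # The paper builds on such a baseline by adding sparsity and low-rank
    # constraints on the abundances and a distributed update scheme.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps    # endmember spectra (columns)
    H = rng.random((r, n)) + eps    # per-pixel abundance fractions
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(7)
bands, pixels, r = 30, 100, 3
V = rng.random((bands, r)) @ rng.random((r, pixels))   # exact rank-3 data
W, H = nmf(V, r)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative updates keep both factors nonnegative by construction, which is why they are a convenient starting point for constrained variants.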

Journal ArticleDOI
TL;DR: The proposed mixture models and their unmixing algorithms are validated on both synthetic and real images showing competitive results regarding the quality of the inference and the computational complexity when compared to the state-of-the-art algorithms.
Abstract: This paper presents two novel hyperspectral mixture models and associated unmixing algorithms. The two models assume a linear mixing model corrupted by an additive term whose expression can be adapted to account for multiple scattering nonlinearities (NL), or mismodeling effects (ME). The NL model generalizes bilinear models by taking into account higher order interaction terms. The ME model accounts for different effects, such as endmember variability or the presence of outliers. The abundance and residual parameters of these models are estimated by considering a convex formulation suitable for fast estimation algorithms. This formulation accounts for constraints, such as the sum-to-one and nonnegativity of the abundances, the nonnegativity of the nonlinearity coefficients, the spectral smoothness of the ME terms and the spatial sparseness of the residuals. The resulting convex problem is solved using the alternating direction method of multipliers whose convergence is ensured theoretically. The proposed mixture models and their unmixing algorithms are validated on both synthetic and real images showing competitive results regarding the quality of the inference and the computational complexity when compared to the state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: A large-scale feature selection wrapper is discussed for the classification of high-dimensional remote sensing images, and an efficient implementation based on intrinsic properties of Gaussian mixture models and block matrices is proposed.
Abstract: A large-scale feature selection wrapper is discussed for the classification of high-dimensional remote sensing images. An efficient implementation is proposed based on intrinsic properties of Gaussian mixture models and block matrices. The criterion function is split into two parts: one that is updated to test each feature and one that needs to be updated only once per feature selection. This split saves a great deal of computation for each test. The algorithm is implemented in C++ and integrated into the Orfeo Toolbox. It has been compared to other classification algorithms on two high-dimensional remote sensing images. Results show that the approach provides good classification accuracies with low computation time.

Journal ArticleDOI
TL;DR: In this paper, a method for lensless imaging based on compressive ultrafast sensing is presented, which requires significantly fewer illumination patterns, and hence a shorter acquisition process, than traditional single-pixel cameras.
Abstract: Lensless imaging is an important and challenging problem. One notable solution to lensless imaging is a single-pixel camera that benefits from ideas central to compressive sampling. However, traditional single-pixel cameras require many illumination patterns that result in a long acquisition process. Here, we present a method for lensless imaging based on compressive ultrafast sensing. Each sensor acquisition is encoded with a different illumination pattern and produces a time series where time is a function of the photon's origin in the scene. Currently available hardware with picosecond time resolution enables time tagging photons as they arrive to an omnidirectional sensor. This allows lensless imaging with significantly fewer patterns compared to regular single-pixel imaging. To that end, we develop a framework for designing lensless imaging systems that use ultrafast detectors. We provide an algorithm for ideal sensor placement and an algorithm for optimized active illumination patterns. We show that efficient lensless imaging is possible with ultrafast measurement and compressive sensing. This paves the way for novel imaging architectures and remote sensing in extreme situations where imaging with a lens is not possible.

Journal ArticleDOI
TL;DR: In this article, the data are first approximated with a sum of sparse rank-one matrices and a block coordinate descent approach is then used to estimate the unknowns; efficient algorithms are also proposed for dictionary-blind image reconstruction, achieving promising performance and speed-ups over previous schemes.
Abstract: The sparsity of signals in a transform domain or dictionary has been exploited in applications, such as compression, denoising, and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically nonconvex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speed-ups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
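The sum-of-outer-products idea can be sketched as a block coordinate descent in which each atom and its sparse coefficient row have closed-form updates (a simplified sketch with illustrative sizes and an $\ell _0$-style hard threshold; not the paper's exact algorithm):

```python
import numpy as np

def soup_dil(Y, num_atoms=8, lam=0.1, sweeps=10, seed=0):
    # Sum-of-outer-products dictionary learning sketch: approximate
    # Y ~ sum_j d_j c_j^T with unit-norm atoms d_j and sparse rows c_j,
    # each updated in closed form by block coordinate descent.
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    D = rng.standard_normal((m, num_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    C = np.zeros((num_atoms, n))
    for _ in range(sweeps):
        for j in range(num_atoms):
            E = Y - D @ C + np.outer(D[:, j], C[j])  # residual without atom j
            c = E.T @ D[:, j]
            c[np.abs(c) < lam] = 0.0                 # hard threshold (l0-style prox)
            C[j] = c
            if np.linalg.norm(c) > 0:                # closed-form atom update
                d = E @ c
                D[:, j] = d / np.linalg.norm(d)
    return D, C

rng = np.random.default_rng(8)
Y = rng.standard_normal((16, 64))
D, C = soup_dil(Y)
fit_before = np.linalg.norm(Y)          # residual with C = 0
fit_after = np.linalg.norm(Y - D @ C)
```

Each inner step touches only one rank-one term, which is what keeps the per-iteration cost low compared to the NP-hard joint sparse coding step.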

Journal ArticleDOI
TL;DR: This work proposes a novel optimization approach for geometric calibration of a projector-camera system that estimates the intrinsic, extrinsic, and distortion parameters of both the camera and projector in an automatic fashion using structured light.
Abstract: Automatic calibration of structured-light systems, generally consisting of a projector and camera, is of great importance for a variety of practical applications. We propose a novel optimization approach for geometric calibration of a projector-camera system that estimates the intrinsic, extrinsic, and distortion parameters of both the camera and projector in an automatic fashion using structured light. Our approach benefits from a novel multifactor objective function that finds maximum-likelihood estimates from noisy point correspondences using constraints on focal lengths and resolves ambiguities estimating the fundamental matrix by enforcing epipolar geometry on the rectified noisy data. This new formulation allows estimation of all calibration parameters simultaneously and minimization is ensured by a greedy descent algorithm that decreases the cost function at each iteration. This provides more accurate parameter estimation, reconstruction accuracy, and robustness to noise and poor initialization compared to previous methods. Experimental results demonstrate the stability and robustness of our method, and show that the proposed solution outperforms a currently leading approach to an automatic geometric projector-camera calibration.

Journal ArticleDOI
TL;DR: Two compressive sensing inspired approaches for the solution of nonlinear inverse scattering problems are introduced and discussed; they can successfully tackle both a reduced number of data (with respect to Nyquist sampling) and overcomplete dictionaries.
Abstract: Two compressive sensing inspired approaches for the solution of non-linear inverse scattering problems are introduced and discussed. Unlike the sparsity-promoting approaches proposed in most of the literature, the two methods here tackle the problem in its full non-linearity by adopting a contrast source inversion scheme. In the first approach, the ${\ell _1}$ -norm of the unknown is added as a weighted penalty term to the contrast source cost functional. The second, and (to the best of our knowledge) completely original, approach enforces sparsity by constraining the solution of the non-linear problem to a convex set defined by the ${\ell _1}$ -norm of the unknown. A numerical assessment against a widely used benchmark example (the “Austria” profile) is carried out to evaluate the capabilities of the proposed approaches. Notably, the two approaches can be applied to any kind of basis functions, and they can successfully handle both a reduced number of data (with respect to Nyquist sampling) and overcomplete dictionaries.
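Constraining the solution to the convex set $\{x : \|x\|_1 \le \tau\}$, as in the second approach, typically requires a Euclidean projection onto the ℓ1 ball at each iteration. The helper below is a standard sort-based projection routine (its name and interface are illustrative, not taken from the paper):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= radius}.

    Standard sort-based algorithm: find the soft-threshold level theta
    such that the shrunken vector has l1 norm exactly equal to radius.
    """
    v = np.asarray(v, dtype=float)
    if np.sum(np.abs(v)) <= radius:
        return v.copy()                  # already feasible
    u = np.sort(np.abs(v))[::-1]         # magnitudes, descending
    css = np.cumsum(u)
    # largest index where the running threshold is still active
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

For example, projecting `[3, 1]` onto the ball of radius 2 soft-thresholds at level 1, giving `[2, 0]`; a vector already inside the ball is returned unchanged.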

Journal ArticleDOI
TL;DR: In this article, a hierarchical Bayesian spectral unmixing algorithm is proposed to analyze remote scenes sensed via sparse multispectral Lidar measurements, where the spectral information is used in order to identify and quantify the main materials in the scene, in addition to estimation of the Lidar-based range profiles.
Abstract: This paper presents a new Bayesian spectral unmixing algorithm to analyze remote scenes sensed via sparse multispectral Lidar measurements. To a first approximation, in the presence of a target, each Lidar waveform consists of a main peak, whose position depends on the target distance and whose amplitude depends on the wavelength of the laser source considered (i.e., on the target reflectivity). Moreover, these temporal responses are usually assumed to be corrupted by Poisson noise in the low photon count regime. When considering multiple wavelengths, it becomes possible to use spectral information in order to identify and quantify the main materials in the scene, in addition to estimation of the Lidar-based range profiles. Due to its anomaly detection capability, the proposed hierarchical Bayesian model, coupled with an efficient Markov chain Monte Carlo algorithm, allows robust estimation of depth images together with abundance and outlier maps associated with the observed three-dimensional scene. The proposed methodology is illustrated via experiments conducted with real multispectral Lidar data acquired in a controlled environment. The results demonstrate the possibility to unmix spectral responses constructed from extremely sparse photon counts (less than 10 photons per pixel and band).
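The observation model described above can be sketched generatively: per wavelength, the photon count in each time bin is Poisson with a rate given by a scaled instrument response centered at the target range plus a background level. The function below is a minimal simulation under a Gaussian instrument-response assumption (the shape of the response and all parameter names are illustrative, not the paper's):

```python
import numpy as np

def simulate_lidar_waveform(t0, amplitudes, bg, T=64, sigma=2.0, seed=0):
    """Toy multispectral Lidar observation model.

    Per wavelength l, the photon count in time bin t is Poisson with rate
    a_l * g(t - t0) + bg_l, where g is a Gaussian instrument response
    centered at the target range t0 (low photon-count regime).
    Returns an array of shape (wavelengths, time bins).
    """
    rng = np.random.default_rng(seed)
    t = np.arange(T)
    g = np.exp(-0.5 * ((t - t0) / sigma) ** 2)   # instrument response
    rates = np.outer(amplitudes, g) + np.asarray(bg)[:, None]
    return rng.poisson(rates)
```

With several wavelengths, the vector of peak amplitudes across bands carries the spectral signature used for unmixing, while the common peak position carries the range information.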

Journal ArticleDOI
TL;DR: In this article, a linear inverse problem involving quadratic combinations of the data is reformulated in the interferometric framework, and the authors show a deterministic recovery result for vectors $x$ from measurements of the form $(Ax)_i \overline{(Ax)_j}$ for some left-invertible $A$.
Abstract: This paper discusses some questions that arise when a linear inverse problem involving $Ax = b$ is reformulated in the interferometric framework, where quadratic combinations of $b$ are considered as data in place of $b$ . First, we show a deterministic recovery result for vectors $x$ from measurements of the form $(Ax)_i \overline{(Ax)_j}$ for some left-invertible $A$ . Recovery is exact, or stable in the noisy case, when the couples $(i,j)$ are chosen as edges of a well-connected graph. One possible way of obtaining the solution is as a feasible point of a simple semidefinite program. Furthermore, we show how the proportionality constant in the error estimate depends on the spectral gap of a data-weighted graph Laplacian. Second, we present a new application of this formulation to interferometric waveform inversion, where products of the form $(Ax)_i \overline{(Ax)_j}$ in frequency encode generalized cross correlations in time. We present numerical evidence that interferometric inversion does not suffer from the loss of resolution generally associated with interferometric imaging, and can provide added robustness with respect to specific kinds of kinematic uncertainties in the forward model $A$ .
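In the simplest (noiseless, complete-graph) toy case, the interferometric data matrix $M_{ij} = (Ax)_i \overline{(Ax)_j}$ is rank one, so $b = Ax$ can be recovered up to a global phase from its leading eigenvector, after which left-invertibility of $A$ gives $x$. The paper's semidefinite feasibility formulation handles sparse edge sets and noise; the script below only illustrates the rank-one identity, with the phase ambiguity fixed against the ground truth for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 12, 5
A = rng.standard_normal((m, n))          # tall generic A: left-invertible
x = rng.standard_normal(n)
b = A @ x
# interferometric data: all products b_i * conj(b_j) (complete graph)
M = np.outer(b, b.conj())
# recover b up to a global phase from the leading eigenvector of M
w, V = np.linalg.eigh(M)
b_hat = np.sqrt(w[-1]) * V[:, -1]
# resolve the global sign/phase ambiguity using the ground truth
b_hat = b_hat * np.sign(b_hat[0]) * np.sign(b[0])
# invert the (left-invertible) forward operator
x_hat = np.linalg.pinv(A) @ b_hat
```

With partial edge sets the matrix $M$ is only observed on the edges of a graph, and the quality of recovery then depends on the spectral gap of the data-weighted graph Laplacian, as the abstract notes.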

Journal ArticleDOI
TL;DR: The results demonstrate the superior capabilities of the proposed C-B-TR-ML microwave-imaging technique in detecting and localizing multiple tumors embedded inside highly dense breast phantoms.
Abstract: Detection of tumors in highly dense breasts is a critical but challenging issue for early-stage breast cancer detection. We present the application of coherent focusing for time-reversal (TR) microwave imaging in beamspace for the detection and localization of multiple tumors in highly dense 3-D breast phantoms. We propose a novel coherent beamspace time-reversal maximum-likelihood (C-B-TR-ML) technique to obtain accurate tumor locations with reduced computational burden. To compare the performance, the coherent beamspace processing is also extended for conventional decomposition of the TR operator (DORT) and TR-MUSIC algorithms. A novel hybrid technique involving time of arrival and entropy is also proposed for early-time artifact removal as well as for estimating the Green's function of an equivalent virtual medium required for the TR operation. Finite-difference time-domain computations on anatomically realistic 3-D numerical breast phantoms are used to obtain the backscattered data. The results demonstrate the superior capabilities of the proposed C-B-TR-ML microwave-imaging technique in detecting and localizing multiple tumors embedded inside highly dense breast phantoms.

Journal ArticleDOI
TL;DR: A new unsupervised approach for feature extraction, based on data-driven discovery, is introduced for accurate classification of remotely sensed data; it exploits mutual information maximization in order to retrieve the most relevant features with respect to information measures.
Abstract: In the Earth observation technical literature, several methods have been proposed and implemented to efficiently extract a proper set of features for classification and segmentation purposes. However, these architectures show drawbacks when the considered datasets are characterized by complex interactions among the samples, especially when the methods rely on strong assumptions about noise and label domains. In this paper, a new unsupervised approach for feature extraction, based on data-driven discovery, is introduced for accurate classification of remotely sensed data. Specifically, the proposed architecture exploits mutual information maximization in order to retrieve the most relevant features with respect to information measures. Experimental results on real datasets show that the proposed approach represents a valid framework for feature extraction from remote sensing images.

Journal ArticleDOI
TL;DR: This study extends the recently introduced analog data assimilation to high-dimensional spatio-temporal fields using a multiscale patch-based decomposition and demonstrates the relevance of the proposed data-driven scheme for the real missing data patterns of the high-resolution infrared METOP sensor.
Abstract: Satellite-derived products are of key importance for the high-resolution monitoring of the ocean surface on a global scale. Due to the sensitivity of spaceborne sensors to the atmospheric conditions as well as the associated spatio-temporal sampling, ocean remote sensing data may be subject to high missing-data rates. The spatio-temporal interpolation of these data remains a key challenge to deliver L4 gridded products to end-users. Whereas operational products mostly rely on model-driven approaches, especially optimal interpolation based on Gaussian process priors, the availability of large-scale observation and simulation datasets calls for the development of novel data-driven models. This study investigates such models. We extend the recently introduced analog data assimilation to high-dimensional spatio-temporal fields using a multiscale patch-based decomposition. Using an observing system simulation experiment for sea surface temperature, we demonstrate the relevance of the proposed data-driven scheme for the real missing data patterns of the high-resolution infrared METOP sensor. The proposed scheme yields a significant improvement with respect to state-of-the-art techniques in terms of interpolation error (a relative gain of about 50%) and spectral characteristics for horizontal scales smaller than 100 km. We further discuss the key features and parameterizations of the proposed data-driven approach as well as its relevance with respect to classical interpolation techniques.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a robust multiband image fusion method, which enforces the differences between the estimated latent images to be spatially sparse. And the proposed method is applied to real panchromatic, multispectral, and hyperspectral images with simulated realistic and real changes.
Abstract: Archetypal scenarios for change detection generally consider two images acquired through sensors of the same modality. However, in some specific cases such as emergency situations, the only images available may be those acquired through different kinds of sensors. More precisely, this paper addresses the problem of detecting changes between two multiband optical images characterized by different spatial and spectral resolutions. This sensor dissimilarity introduces additional issues in the context of operational change detection. To alleviate these issues, classical change detection methods are applied after independent preprocessing steps (e.g., resampling) used to get the same spatial and spectral resolutions for the pair of observed images. Nevertheless, these preprocessing steps tend to throw away relevant information. Conversely, in this paper, we propose a method that more effectively uses the available information by modeling the two observed images as spatially and spectrally degraded versions of two (unobserved) latent images characterized by the same high spatial and high spectral resolutions. As they cover the same scene, these latent images are expected to be globally similar except for possible changes in sparse spatial locations. Thus, the change detection task is envisioned through a robust multiband image fusion method, which enforces the differences between the estimated latent images to be spatially sparse. This robust fusion problem is formulated as an inverse problem, which is iteratively solved using an efficient block-coordinate descent algorithm. The proposed method is applied to real panchromatic, multispectral, and hyperspectral images with simulated realistic and real changes. A comparison with state-of-the-art change detection methods demonstrates the accuracy of the proposed strategy.
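Spatial sparsity of the latent-image difference is typically enforced through a mixed-norm penalty whose proximal operator is a per-pixel group soft-thresholding. The helper below sketches that operator for a difference image flattened to a (bands x pixels) matrix; the name and the exact penalty are illustrative stand-ins for the paper's robust fusion term:

```python
import numpy as np

def prox_l21(D, lam):
    """Column-wise (per-pixel) group soft-thresholding.

    Proximal operator of lam * sum_p ||D[:, p]||_2 for a (bands x pixels)
    difference matrix D: pixels whose spectral difference has small norm
    are driven exactly to zero (spatial sparsity of the change map),
    while large differences are uniformly shrunk.
    """
    norms = np.linalg.norm(D, axis=0, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return D * scale
```

Inside a block-coordinate descent, a step of this kind alternates with data-fidelity updates of the two latent images; the surviving nonzero columns form the detected change map.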

Journal ArticleDOI
TL;DR: Results show that the proposed approach for signal recovery in the presence of non-Gaussian noise is efficient and achieves performance comparable with other methods where the regularization parameter is manually tuned from the ground truth.
Abstract: In this paper, a methodology is investigated for signal recovery in the presence of non-Gaussian noise. In contrast with regularized minimization approaches often adopted in the literature, in our algorithm the regularization parameter is reliably estimated from the observations. As the posterior density of the unknown parameters is analytically intractable, the estimation problem is derived in a variational Bayesian framework where the goal is to provide a good approximation to the posterior distribution in order to compute posterior mean estimates. Moreover, a majorization technique is employed to circumvent the difficulties raised by the intricate forms of the non-Gaussian likelihood and of the prior density. We demonstrate the potential of the proposed approach through comparisons with state-of-the-art techniques that are specifically tailored to signal recovery in the presence of mixed Poisson–Gaussian noise. Results show that the proposed approach is efficient and achieves performance comparable with other methods where the regularization parameter is manually tuned from the ground truth.

Journal ArticleDOI
TL;DR: A highly parallel branchless DD algorithm for three-dimensional cone beam CT that utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed and was evaluated by iterative reconstruction algorithms.
Abstract: Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in computed tomography (CT). The distance-driven (DD) projection and backprojection are widely used for their favorable image quality properties, highly sequential memory access patterns, and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of this branch behavior makes the method inefficient to implement on massively parallel computing devices, such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation as three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for three-dimensional cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved 137-fold speedup for forward projection and 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved eight-fold acceleration for forward projection and ten-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated by iterative reconstruction algorithms with both simulation and real datasets. It produced images visually identical to those of the CPU reference algorithm.
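The three-step factorization named in the abstract (integration, linear interpolation, differentiation) can be demonstrated in one dimension: rebinning a signal from source bin boundaries to destination bin boundaries without any per-overlap branching. This is a simplified NumPy sketch of the idea, not the paper's GPU kernel; on a GPU, step (2) is what maps onto texture memory and hardware interpolation:

```python
import numpy as np

def branchless_resample(signal, src_edges, dst_edges):
    """Branchless distance-driven-style rebinning in 1-D.

    Factorizes the bin-overlap computation into three vectorizable steps:
    (1) integrate the source bins (cumulative sum), (2) linearly
    interpolate the running integral at the destination bin edges, and
    (3) differentiate to recover the destination bin contents.
    """
    # (1) running integral of the signal over the source bins
    integral = np.concatenate(([0.0], np.cumsum(signal)))
    # (2) hardware-friendly linear interpolation at destination edges
    sampled = np.interp(dst_edges, src_edges, integral)
    # (3) adjacent differences give the per-bin integrals
    return np.diff(sampled)
```

For example, rebinning `[1, 2, 3, 4]` from edges `[0, 1, 2, 3, 4]` onto edges `[0, 2, 4]` yields `[3, 7]`, and the total mass (10) is conserved, with no data-dependent branches anywhere in the loop body.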