Author

Pier Luigi Dragotti

Bio: Pier Luigi Dragotti is an academic researcher at Imperial College London. He has contributed to research on topics including wavelet transforms and data compression, has an h-index of 32, and has co-authored 255 publications receiving 5,533 citations. His previous affiliations include École Polytechnique Fédérale de Lausanne and École Normale Supérieure.


Papers
Journal ArticleDOI
TL;DR: This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets.
Abstract: Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. This can not only reduce the scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Different from parallel imaging-based fast MRI, which utilizes multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist–Shannon sampling barrier to reconstruct MRI images with much less raw data. This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets. In particular, a novel conditional Generative Adversarial Network-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
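The loss described above combines an adversarial term with image-domain and frequency-domain content terms. The snippet below is a minimal NumPy sketch of just the content part of such a loss, not the authors' implementation; the function name, the weights `alpha` and `beta`, and the use of a plain 2-D FFT as a stand-in for k-space are illustrative assumptions.

```python
import numpy as np

def content_loss(reconstruction, target, alpha=1.0, beta=1.0):
    """Image-domain + frequency-domain MSE between a reconstructed image and
    the fully sampled target. `alpha` and `beta` are illustrative weights."""
    # Image-domain mean squared error
    image_mse = np.mean(np.abs(reconstruction - target) ** 2)
    # Frequency-domain mean squared error (a 2-D FFT stands in for k-space here)
    freq_mse = np.mean(np.abs(np.fft.fft2(reconstruction) - np.fft.fft2(target)) ** 2)
    # Rescale the frequency term so the two terms are on comparable scales
    freq_mse /= reconstruction.size
    return alpha * image_mse + beta * freq_mse

# Toy usage with random arrays standing in for a generator output and ground truth
rng = np.random.default_rng(0)
target = rng.standard_normal((128, 128))
reconstruction = target + 0.1 * rng.standard_normal((128, 128))
print(content_loss(reconstruction, target))
```

In a full training loop this content term would simply be added, with suitable weights, to the adversarial (and any perceptual) losses.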

835 citations

Journal ArticleDOI
TL;DR: This paper shows that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm.
Abstract: Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater than or equal to the rate of innovation, it is possible to reconstruct such signals uniquely. These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We, thus, show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling.
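As a concrete illustration of the kind of local reconstruction this line of work relies on, the sketch below implements the standard annihilating-filter (Prony-like) recovery of a stream of Diracs from its exponential moments in NumPy. This is a generic textbook sketch, assuming the kernel-dependent mapping from samples to moments (via Strang-Fix or exponential-spline conditions) has already been carried out; the function name and toy parameters are illustrative.

```python
import numpy as np

def recover_diracs(s, K, tau):
    """Recover (t_k, a_k) for K Diracs from moments s[m] = sum_k a_k * u_k**m,
    m = 0..2K-1, where u_k = exp(-2j*pi*t_k/tau)."""
    s = np.asarray(s, dtype=complex)
    # Toeplitz system  sum_{i=0}^{K} h[i] * s[m-i] = 0 for m = K..2K-1, h[0] = 1
    A = np.array([[s[m - i] for i in range(1, K + 1)] for m in range(K, 2 * K)])
    b = -s[K:2 * K]
    h = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # Roots of the annihilating filter give u_k, hence the locations t_k
    u = np.roots(h)
    t = np.mod(-np.angle(u) * tau / (2 * np.pi), tau)
    # Amplitudes follow from a Vandermonde system in the recovered u_k
    V = np.vander(u, N=2 * K, increasing=True).T
    a, *_ = np.linalg.lstsq(V, s, rcond=None)
    return t, a.real

# Toy usage: two Diracs on [0, tau)
tau, t_true, a_true = 1.0, np.array([0.2, 0.7]), np.array([1.5, -0.8])
u_true = np.exp(-2j * np.pi * t_true / tau)
s = np.array([np.sum(a_true * u_true**m) for m in range(4)])
print(recover_diracs(s, K=2, tau=tau))
```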

481 citations

Journal ArticleDOI
TL;DR: It is shown that sampling at the rate of innovation is possible, in some sense applying Occam's razor to the sampling of sparse signals, which should lead to further research in sparse sampling, as well as new applications.
Abstract: Sparse sampling of continuous-time sparse signals is addressed. In particular, it is shown that sampling at the rate of innovation is possible, in some sense applying Occam's razor to the sampling of sparse signals. The noisy case is analyzed and solved, with methods proposed that reach the optimal performance given by the Cramer-Rao bounds. Finally, a number of applications are discussed where sparsity can be taken advantage of. The comprehensive coverage given in this article should lead to further research in sparse sampling, as well as new applications. One main application of the theory presented in this article is ultra-wideband (UWB) communications.
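One widely used way in the FRI literature to exploit oversampling in the noisy case is Cadzow-style iterative denoising of the moment sequence before the annihilating-filter step: alternately enforce the low-rank and Toeplitz structures that noiseless moments would have. The sketch below is a generic version of that idea, not necessarily the exact algorithm of this paper; matrix sizes and the iteration count are illustrative.

```python
import numpy as np

def cadzow_denoise(s, K, n_iter=20):
    """Denoise a length-M moment sequence assumed to come from K exponentials."""
    s = np.asarray(s, dtype=complex).copy()
    M = len(s)
    rows = M // 2 + 1            # roughly square Toeplitz matrix
    cols = M - rows + 1
    for _ in range(n_iter):
        # Toeplitz matrix T[i, j] = s[i + (cols - 1) - j]
        T = np.array([[s[i + (cols - 1) - j] for j in range(cols)] for i in range(rows)])
        # Enforce rank K by truncating the SVD
        U, sv, Vh = np.linalg.svd(T, full_matrices=False)
        T = (U[:, :K] * sv[:K]) @ Vh[:K, :]
        # Re-impose the Toeplitz structure by averaging along each diagonal
        for d in range(-(cols - 1), rows):
            vals = [T[i, i - d] for i in range(rows) if 0 <= i - d < cols]
            s[cols - 1 + d] = np.mean(vals)
    return s

# Toy usage: noisy moments of two complex exponentials
rng = np.random.default_rng(1)
u = np.exp(-2j * np.pi * np.array([0.2, 0.7]))
clean = np.array([np.sum(np.array([1.5, -0.8]) * u**m) for m in range(11)])
noisy = clean + 0.05 * (rng.standard_normal(11) + 1j * rng.standard_normal(11))
denoised = cadzow_denoise(noisy, K=2)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```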

481 citations

Journal ArticleDOI
TL;DR: This work presents a new lattice-based perfect reconstruction and critically sampled anisotropic M-DIR WT, which provides an efficient tool for nonlinear approximation of images, achieving the approximation power O(N^-1.55), which, while slower than the optimal rate O(N^-2), is much better than the O(N^-1) achieved with wavelets, but at similar complexity.
Abstract: In spite of the success of the standard wavelet transform (WT) in image processing in recent years, the efficiency of its representation is limited by the spatial isotropy of its basis functions built in the horizontal and vertical directions. One-dimensional (1-D) discontinuities in images (edges and contours), which are very important elements in visual perception, intersect too many wavelet basis functions and lead to a nonsparse representation. To efficiently capture these anisotropic geometrical structures characterized by many more than the horizontal and vertical directions, a more complex multidirectional (M-DIR) and anisotropic transform is required. We present a new lattice-based perfect reconstruction and critically sampled anisotropic M-DIR WT. The transform retains the separable filtering and subsampling and the simplicity of computations and filter design from the standard two-dimensional WT, unlike in the case of some other directional transform constructions (e.g., curvelets, contourlets, or edgelets). The corresponding anisotropic basis functions (directionlets) have directional vanishing moments along any two directions with rational slopes. Furthermore, we show that this novel transform provides an efficient tool for nonlinear approximation of images, achieving the approximation power O(N^-1.55), which, while slower than the optimal rate O(N^-2), is much better than the O(N^-1) achieved with wavelets, but at similar complexity.
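For context on the approximation rates quoted above, the sketch below shows what an N-term nonlinear approximation is: transform, keep the N largest-magnitude coefficients, invert. It uses a hand-rolled separable 2-D Haar transform, i.e. the standard wavelet baseline whose O(N^-1) rate directionlets improve upon; directionlets themselves are not implemented here, and the image size, N and level count are illustrative.

```python
import numpy as np

def haar_step(x):
    """One orthonormal 2-D Haar analysis step on a square array of even size."""
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)      # row averages
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)      # row differences
    y = np.vstack([a, d])
    a = (y[:, 0::2] + y[:, 1::2]) / np.sqrt(2)      # column averages
    d = (y[:, 0::2] - y[:, 1::2]) / np.sqrt(2)      # column differences
    return np.hstack([a, d])

def ihaar_step(c):
    """Inverse of haar_step."""
    n = c.shape[0]
    y = np.empty_like(c)
    a, d = c[:, :n // 2], c[:, n // 2:]
    y[:, 0::2], y[:, 1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    x = np.empty_like(c)
    a, d = y[:n // 2, :], y[n // 2:, :]
    x[0::2, :], x[1::2, :] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def haar2d(x, levels):
    c, n = x.astype(float).copy(), x.shape[0]
    for _ in range(levels):
        c[:n, :n] = haar_step(c[:n, :n])
        n //= 2
    return c

def ihaar2d(c, levels):
    x, n = c.copy(), c.shape[0] // 2 ** (levels - 1)
    for _ in range(levels):
        x[:n, :n] = ihaar_step(x[:n, :n])
        n *= 2
    return x

def n_term_approx(img, N, levels=3):
    """Keep only the N largest-magnitude Haar coefficients and invert."""
    c = haar2d(img, levels)
    thresh = np.sort(np.abs(c).ravel())[-N]
    c[np.abs(c) < thresh] = 0.0
    return ihaar2d(c, levels)

# Toy usage: a piecewise-constant image with a non-horizontal, non-vertical edge
img = np.fromfunction(lambda i, j: (i + 0.5 * j > 40).astype(float), (64, 64))
approx = n_term_approx(img, N=200)
print(np.linalg.norm(img - approx) / np.linalg.norm(img))
```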

320 citations

Journal ArticleDOI
TL;DR: This paper investigates distributed approaches to the Karhunen-Loeve transform, in which several distributed terminals observe disjoint subsets of a random vector, and introduces several versions of the distributed KLT.
Abstract: The Karhunen-Loeve transform (KLT) is a key element of many signal processing and communication tasks. Many recent applications involve distributed signal processing, where it is not generally possible to apply the KLT to the entire signal; rather, the KLT must be approximated in a distributed fashion. This paper investigates such distributed approaches to the KLT, where several distributed terminals observe disjoint subsets of a random vector. We introduce several versions of the distributed KLT. First, a local KLT is introduced, which is the optimal solution for a given terminal, assuming all else is fixed. This local KLT is different from and in general improves upon the marginal KLT, which simply ignores other terminals. Both optimal approximation and compression using this local KLT are derived. Two important special cases are studied in detail, namely, the partial observation KLT, which has access to a subset of variables but aims at reconstructing them all, and the conditional KLT, which has access to side information at the decoder. We focus on the jointly Gaussian case, with known correlation structure, and on approximation and compression problems. Then, the distributed KLT is addressed by considering local KLTs in turn at the various terminals, leading to an iterative algorithm which is locally convergent, sometimes reaching a global optimum, depending on the overall correlation structure. For compression, it is shown that the classical distributed source coding techniques admit a natural transform coding interpretation, the transform being the distributed KLT. Examples throughout illustrate the performance of the proposed distributed KLT. This distributed transform has potential applications in sensor networks, distributed image databases, hyper-spectral imagery, and data fusion.
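The sketch below contrasts, numerically, two of the transforms discussed above for the jointly Gaussian case with known correlation structure: the marginal KLT, which diagonalizes a terminal's own covariance, and the conditional KLT, taken here as the eigendecomposition of the covariance of the observed block conditioned on the side information available at the decoder. This is a schematic reading of the construction with illustrative dimensions and variable names, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random joint covariance for x = [x1 (3-dimensional), x2 (2-dimensional)]
A = rng.standard_normal((5, 5))
Sigma = A @ A.T
S11, S12 = Sigma[:3, :3], Sigma[:3, 3:]
S21, S22 = Sigma[3:, :3], Sigma[3:, 3:]

# Marginal KLT of x1: eigenvectors of its own covariance, other terminals ignored
eigval_m, U_marginal = np.linalg.eigh(S11)

# Conditional KLT of x1 given side information x2 at the decoder:
# eigenvectors of the conditional covariance S11 - S12 S22^{-1} S21
S_cond = S11 - S12 @ np.linalg.solve(S22, S21)
eigval_c, U_conditional = np.linalg.eigh(S_cond)

# The conditional eigenvalues measure what is left to describe once the decoder's
# side information is accounted for; their sum never exceeds the marginal sum.
print("marginal eigenvalues:   ", np.round(eigval_m[::-1], 3))
print("conditional eigenvalues:", np.round(eigval_c[::-1], 3))
```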

202 citations


Cited by
Journal ArticleDOI
TL;DR: A "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information is pursued and it is shown that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves.
Abstract: The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires on the order of N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications.
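The contourlet construction cascades a multiscale decomposition (a Laplacian-pyramid-like stage) with a directional filter bank at each scale. The sketch below shows only the multiscale stage, with a generic Gaussian lowpass and linear interpolation standing in for the actual pyramid filters; the directional filter bank, which is the distinctive part of the transform, is not implemented here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=3, sigma=1.0):
    """Decompose `img` into bandpass detail layers plus a coarse residual."""
    layers, current = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_filter(current, sigma)
        coarse = low[::2, ::2]                      # downsample by 2
        upsampled = zoom(coarse, 2, order=1)        # back to the current size
        layers.append(current - upsampled)          # bandpass detail layer
        current = coarse
    layers.append(current)                          # coarse approximation
    return layers

def reconstruct(layers):
    """Invert laplacian_pyramid (exact, because the differences were stored)."""
    current = layers[-1]
    for detail in reversed(layers[:-1]):
        current = detail + zoom(current, 2, order=1)
    return current

# Toy usage on a synthetic image with a smooth, slanted edge
img = np.fromfunction(lambda i, j: np.tanh((i - 32 + 0.3 * j) / 4), (64, 64))
layers = laplacian_pyramid(img)
print([layer.shape for layer in layers])
print(np.max(np.abs(img - reconstruct(layers))))    # ~0 up to floating-point error
```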

3,948 citations

Journal ArticleDOI
TL;DR: Several filter-design methods are described for the dual-tree CWT, demonstrating that, with relatively short filters, an effective invertible and approximately analytic wavelet transform can indeed be implemented using the dual-tree approach.
Abstract: The paper discusses the theory behind the dual-tree transform, shows how complex wavelets with good properties can be designed, and illustrates a range of applications in signal and image processing. The authors use the complex number symbol C in CWT to avoid confusion with the often-used acronym CWT for the (different) continuous wavelet transform. The four fundamental, intertwined shortcomings of the wavelet transform and some solutions are also discussed. Several methods for filter design are described for the dual-tree CWT, demonstrating that, with relatively short filters, an effective invertible and approximately analytic wavelet transform can indeed be implemented using the dual-tree approach.

2,407 citations

Journal ArticleDOI
TL;DR: By allowing image reconstruction to continue even after a packet is lost, this type of representation can prevent a Web browser from becoming dormant, and the source can be approximated from any subset of the chunks.
Abstract: This article focuses on the compressed representations of pictures. The representation does not affect how many bits get from the Web server to the laptop, but it determines the usefulness of the bits that arrive. Many different representations are possible, and there is more involved in their choice than merely selecting a compression ratio. The techniques presented represent a single information source with several chunks of data ("descriptions") so that the source can be approximated from any subset of the chunks. By allowing image reconstruction to continue even after a packet is lost, this type of representation can prevent a Web browser from becoming dormant.
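The simplest toy instance of this idea is to split the samples of a signal into two descriptions, even and odd samples, so that either description alone already yields a usable approximation by interpolation, while both together give the full signal. The sketch below implements only that toy scheme as an illustration of the principle; the multiple-description coders surveyed in the article are far more sophisticated.

```python
import numpy as np

def make_descriptions(signal):
    """Split a signal into two descriptions: even samples and odd samples."""
    return signal[0::2], signal[1::2]

def reconstruct(desc_even, desc_odd, length):
    """Rebuild the signal from whichever descriptions arrived (None = lost)."""
    out = np.zeros(length)
    if desc_even is not None:
        out[0::2] = desc_even
    if desc_odd is not None:
        out[1::2] = desc_odd
    if desc_even is None:          # even samples lost: interpolate from the odd ones
        out[0::2] = np.interp(np.arange(0, length, 2), np.arange(1, length, 2), desc_odd)
    if desc_odd is None:           # odd samples lost: interpolate from the even ones
        out[1::2] = np.interp(np.arange(1, length, 2), np.arange(0, length, 2), desc_even)
    return out

# Toy usage: a smooth signal, one description lost in transit
t = np.linspace(0, 1, 64)
x = np.sin(2 * np.pi * 3 * t)
even, odd = make_descriptions(x)
x_both = reconstruct(even, odd, length=64)
x_one = reconstruct(even, None, length=64)       # only the even description arrived
print(np.max(np.abs(x - x_both)), np.max(np.abs(x - x_one)))
```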

1,533 citations

Journal ArticleDOI
22 Apr 2010
TL;DR: This paper surveys the various options such dictionary training has to offer, from the MOD, the K-SVD and the Generalized PCA up to the most recent contributions and structures.
Abstract: Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be made in one of two ways: i) building a sparsifying dictionary based on a mathematical model of the data, or ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
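As a concrete, toy-scale illustration of the dictionary-learning paradigm, the sketch below alternates a greedy sparse-coding step (a small orthogonal matching pursuit) with the MOD least-squares dictionary update mentioned above. It is a generic textbook sketch with illustrative sizes, not the implementations discussed in the paper.

```python
import numpy as np

def omp(D, x, k):
    """Greedy sparse coding: pick k atoms of D to approximate x."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    a = np.zeros(D.shape[1])
    a[support] = coeffs
    return a

def mod_dictionary_learning(X, n_atoms, k, n_iter=15, seed=0):
    """X: (dim, n_examples) data matrix. Returns a learned dictionary."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        A = np.column_stack([omp(D, X[:, i], k) for i in range(X.shape[1])])
        D = X @ np.linalg.pinv(A)                 # MOD: least-squares dictionary update
        D /= np.linalg.norm(D, axis=0) + 1e-12    # renormalise the atoms
    return D

# Toy usage: data genuinely generated as sparse combinations of a hidden dictionary
rng = np.random.default_rng(1)
D_true = rng.standard_normal((20, 30))
D_true /= np.linalg.norm(D_true, axis=0)
A_true = np.zeros((30, 200))
for i in range(200):
    A_true[rng.choice(30, 3, replace=False), i] = rng.standard_normal(3)
X = D_true @ A_true
D_learned = mod_dictionary_learning(X, n_atoms=30, k=3)
codes = np.column_stack([omp(D_learned, X[:, i], 3) for i in range(200)])
print(np.linalg.norm(X - D_learned @ codes) / np.linalg.norm(X))
```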

1,345 citations