Author

Rebecca Willett

Bio: Rebecca Willett is an academic researcher from the University of Chicago. She has contributed to research on topics including the Poisson distribution and compressed sensing, has an h-index of 42, and has co-authored 256 publications receiving 7,475 citations. Her previous affiliations include the Cooperative Institute for Meteorological Satellite Studies and Duke University.


Papers
Journal Article
TL;DR: A single-disperser spectral imager that exploits recent theoretical work in compressed sensing to achieve snapshot spectral imaging, demonstrated by capturing the spatiospectral information of a scene consisting of two balls illuminated by different light sources.
Abstract: We present a single disperser spectral imager that exploits recent theoretical work in the area of compressed sensing to achieve snapshot spectral imaging. An experimental prototype is used to capture the spatiospectral information of a scene that consists of two balls illuminated by different light sources. An iterative algorithm is used to reconstruct the data cube. The average spectral resolution is 3.6 nm per spectral channel. The accuracy of the instrument is demonstrated by comparison of the spectra acquired with the proposed system with the spectra acquired by a nonimaging reference spectrometer.

813 citations
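The iterative reconstruction mentioned in the abstract is, at heart, sparse recovery from coded projections. Below is a minimal sketch of one standard such solver, ISTA (iterative shrinkage-thresholding), on a toy compressed sensing problem; the random sensing matrix, signal sizes, and parameter values are illustrative assumptions, not the paper's actual system model or algorithm.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy demo: recover a sparse vector from far fewer random projections than entries.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                              # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true, lam=0.01)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```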

Journal Article
TL;DR: A single-shot spectral imaging approach based on compressive sensing, whose primary features are two dispersive elements arranged in opposition around a binary-valued aperture code; this structure yields easily controllable, spatially varying spectral filter functions with narrow features.
Abstract: This paper describes a single-shot spectral imaging approach based on the concept of compressive sensing. The primary features of the system design are two dispersive elements, arranged in opposition and surrounding a binary-valued aperture code. In contrast to thin-film approaches to spectral filtering, this structure results in easily-controllable, spatially-varying, spectral filter functions with narrow features. Measurement of the input scene through these filters is equivalent to projective measurement in the spectral domain, and hence can be treated with the compressive sensing frameworks recently developed by a number of groups. We present a reconstruction framework and demonstrate its application to experimental data.

649 citations
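The "projective measurement in the spectral domain" has a simple linear form: each detector pixel records a coded sum over the spectral bands of the data cube. Here is a toy sketch of that measurement model; the cube shape, the binary code, and the one-pixel-per-band shift are assumptions for illustration, not the actual dual-disperser optics.

```python
import numpy as np

def coded_spectral_measure(cube, code):
    """Toy snapshot measurement: band l of the (H, W, L) data cube is modulated
    by the binary aperture code shifted by l pixels, then all bands are summed
    on the detector, giving a spatially varying spectral filter at each pixel."""
    H, W, L = cube.shape
    y = np.zeros((H, W))
    for l in range(L):
        y += np.roll(code, l, axis=1) * cube[:, :, l]   # code as seen by band l
    return y

rng = np.random.default_rng(1)
cube = rng.random((32, 32, 16))                     # toy spatiospectral data cube
code = (rng.random((32, 32)) > 0.5).astype(float)   # binary-valued aperture code
y = coded_spectral_measure(cube, code)              # one (32, 32) snapshot
```

Recovering the (32, 32, 16) cube from the single (32, 32) snapshot is then exactly the kind of underdetermined linear inverse problem that the compressive sensing frameworks address.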

Journal Article
06 May 2020
TL;DR: A taxonomy for categorizing the inverse problems and deep-neural-network reconstruction methods arising in computational imaging, together with a discussion of the tradeoffs, caveats, and common failure modes of the different reconstruction approaches.
Abstract: Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems arising in computational imaging. We explore the central prevailing themes of this emerging area and present a taxonomy that can be used to categorize different problems and reconstruction methods. Our taxonomy is organized along two central axes: (1) whether or not a forward model is known and to what extent it is used in training and testing, and (2) whether or not the learning is supervised or unsupervised, i.e., whether or not the training relies on access to matched ground truth image and measurement pairs. We also discuss the tradeoffs associated with these different reconstruction approaches, caveats and common failure modes, plus open problems and avenues for future work.

390 citations
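To make the taxonomy's axes concrete, here is a minimal sketch of one cell in it: supervised learning with a known forward model, where a small network is trained on matched (measurement, ground truth) pairs and the forward model supplies the network's input via a crude inverse. The linear model, the two-layer network, and all sizes are illustrative assumptions, not a specific method from the survey.

```python
import torch
import torch.nn as nn

n, m = 64, 32
A = torch.randn(m, n) / m**0.5                 # assumed known linear forward model

net = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, n))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.randn(256, n)                    # batch of ground-truth signals
    y = x @ A.T                                # simulated measurements
    x0 = y @ A                                 # crude inverse A^T y as network input
    loss = nn.functional.mse_loss(net(x0), x)  # supervised: matched (y, x) pairs
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Moving along the survey's axes changes this sketch: with an unknown forward model the network would act on `y` directly, and unsupervised variants would replace the matched-pair loss with one built from measurements alone.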

Journal Article
TL;DR: A novel denoising algorithm for photon-limited images that combines dictionary learning with sparse patch-based image representations; despite its conceptual simplicity, this Poisson PCA-based denoising is highly competitive in very low-light regimes.
Abstract: Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes.

289 citations
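For intuition about the patch-based structure, here is a crude sketch that substitutes a common simpler pipeline for the paper's Poisson-adapted PCA: an Anscombe variance-stabilizing transform, a low-rank PCA projection of non-overlapping patches, and an algebraic inverse transform. Patch size, rank, and the test image are illustrative assumptions.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson counts -> approx. unit variance."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inv_anscombe(z):
    """Simple algebraic inverse (the paper's Poisson PCA needs no such transform)."""
    return (z / 2.0) ** 2 - 3.0 / 8.0

def denoise_patches(img, patch=8, keep=4):
    """Project non-overlapping patches onto their `keep` leading principal components."""
    H, W = img.shape
    z = anscombe(img)
    rows = range(0, H - patch + 1, patch)
    cols = range(0, W - patch + 1, patch)
    P = np.array([z[i:i+patch, j:j+patch].ravel() for i in rows for j in cols])
    mu = P.mean(axis=0)
    U, s, Vt = np.linalg.svd(P - mu, full_matrices=False)
    P_hat = (U[:, :keep] * s[:keep]) @ Vt[:keep] + mu   # rank-`keep` reconstruction
    out = np.zeros_like(z)
    for k, (i, j) in enumerate((i, j) for i in rows for j in cols):
        out[i:i+patch, j:j+patch] = P_hat[k].reshape(patch, patch)
    return inv_anscombe(out)

rng = np.random.default_rng(2)
clean = np.outer(np.linspace(1, 5, 64), np.linspace(1, 5, 64))  # smooth intensity
noisy = rng.poisson(clean).astype(float)                        # photon-limited counts
denoised = denoise_patches(noisy)
```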

Journal Article
TL;DR: Experimental results suggest that platelet-based methods can outperform standard reconstruction methods currently in use in confocal microscopy, image restoration, and emission tomography.
Abstract: The nonparametric multiscale platelet algorithms presented in this paper, unlike traditional wavelet-based methods, are both well suited to photon-limited medical imaging applications involving Poisson data and capable of better approximating edge contours. This paper introduces platelets, localized functions at various scales, locations, and orientations that produce piecewise linear image approximations, and a new multiscale image decomposition based on these functions. Platelets are well suited for approximating images consisting of smooth regions separated by smooth boundaries. For smoothness measured in certain Hölder classes, it is shown that the error of m-term platelet approximations can decay significantly faster than that of m-term approximations in terms of sinusoids, wavelets, or wedgelets. This suggests that platelets may outperform existing techniques for image denoising and reconstruction. Fast, platelet-based, maximum penalized likelihood methods for photon-limited image denoising, deblurring and tomographic reconstruction problems are developed. Because platelet decompositions of Poisson distributed images are tractable and computationally efficient, existing image reconstruction methods based on expectation-maximization type algorithms can be easily enhanced with platelet techniques. Experimental results suggest that platelet-based methods can outperform standard reconstruction methods currently in use in confocal microscopy, image restoration, and emission tomography.

276 citations
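The penalized-likelihood machinery is easiest to see in one dimension. The sketch below prunes a recursive dyadic partition of a Poisson count sequence, splitting an interval only when the two halves' penalized likelihood beats a single constant fit; the real platelet method does this in 2-D with piecewise-linear fits, and the penalty value here is an arbitrary illustrative choice.

```python
import numpy as np

def leaf_cost(y, pen):
    """Negative Poisson log-likelihood (up to terms constant in the rate) of the
    best constant-rate fit on this interval, plus a per-leaf complexity penalty."""
    mu = y.mean()
    nll = mu * len(y) - y.sum() * np.log(mu) if mu > 0 else 0.0
    return nll + pen

def fit(y, pen):
    """Prune a recursive dyadic partition by penalized maximum likelihood."""
    cost = leaf_cost(y, pen)
    if len(y) >= 2:
        mid = len(y) // 2
        cl, fl = fit(y[:mid], pen)
        cr, fr = fit(y[mid:], pen)
        if cl + cr < cost:                      # splitting pays for the extra leaf
            return cl + cr, np.concatenate([fl, fr])
    return cost, np.full(len(y), y.mean())

rng = np.random.default_rng(3)
rate = np.where(np.arange(256) < 96, 2.0, 9.0)  # piecewise-constant true intensity
counts = rng.poisson(rate).astype(float)
_, estimate = fit(counts, pen=4.0)              # adaptive piecewise-constant estimate
```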


Cited by
Journal Article
01 May 2009
TL;DR: This paper breaks down the energy consumption of the components of a typical sensor node, discusses the main directions for energy conservation in WSNs, and presents a systematic and comprehensive taxonomy of energy conservation schemes.
Abstract: In recent years, wireless sensor networks (WSNs) have gained increasing attention from both the research community and actual users. As sensor nodes are generally battery-powered devices, the critical question is how to reduce the energy consumption of nodes so that the network lifetime can be extended to reasonable times. In this paper we first break down the energy consumption for the components of a typical sensor node and discuss the main directions for energy conservation in WSNs. Then, we present a systematic and comprehensive taxonomy of energy conservation schemes, which are subsequently discussed in depth. Special attention is devoted to promising solutions that have not yet received wide attention in the literature, such as techniques for energy-efficient data acquisition. Finally, we conclude the paper with insights into research directions for energy conservation in WSNs.

2,546 citations
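The per-component energy breakdown that opens the survey leads directly to the standard back-of-envelope lifetime estimate that motivates duty cycling. All current draws and the battery capacity below are illustrative assumptions, not figures from the paper.

```python
# Node lifetime under duty cycling: average current is the duty-cycled active
# draw plus the sleep draw. All numbers are made-up but order-of-magnitude typical.
I_RADIO_MA = 20.0      # radio active (TX/RX), mA
I_CPU_MA = 8.0         # MCU active, mA
I_SLEEP_MA = 0.02      # deep sleep, mA
BATTERY_MAH = 2500.0   # roughly two AA cells

def lifetime_days(duty_cycle):
    i_avg = duty_cycle * (I_RADIO_MA + I_CPU_MA) + (1 - duty_cycle) * I_SLEEP_MA
    return BATTERY_MAH / i_avg / 24.0

for dc in (1.0, 0.1, 0.01, 0.001):
    print(f"duty cycle {dc:7.1%}: {lifetime_days(dc):9.1f} days")
```

Always-on operation exhausts the battery in a few days, while a 0.1% duty cycle pushes the lifetime into years, which is why so many of the surveyed schemes reduce, in one way or another, the fraction of time the radio is awake.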

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations that are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, a number of problems need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem is instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.

2,134 citations
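A co-occurrence matrix of the kind proposed here simply counts how often one label appears at a fixed spatial offset from another. A minimal sketch over a toy integer label map (the offset, label count, and random labels are illustrative assumptions, not MUCKE's actual descriptors):

```python
import numpy as np

def cooccurrence(labels, dx=1, dy=0, n_labels=None):
    """Count label pairs (a, b) with b at offset (dy, dx) from a; dx, dy >= 0."""
    if n_labels is None:
        n_labels = int(labels.max()) + 1
    a = labels[: labels.shape[0] - dy, : labels.shape[1] - dx]
    b = labels[dy:, dx:]
    C = np.zeros((n_labels, n_labels), dtype=int)
    np.add.at(C, (a.ravel(), b.ravel()), 1)     # accumulate pair counts
    return C

rng = np.random.default_rng(4)
labels = rng.integers(0, 4, size=(32, 32))      # e.g. quantized levels or concept ids
C = cooccurrence(labels, dx=1)                  # horizontal neighbor statistics
print(C)
```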

Journal Article
TL;DR: In this article, the authors explore the effect of dimensionality on the nearest neighbor problem and show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point.
Abstract: We explore the effect of dimensionality on the nearest neighbor problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality.

1,992 citations
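The concentration effect the authors describe is easy to reproduce empirically. A minimal sketch with i.i.d. uniform data (the sample size and dimensions are arbitrary illustrative choices):

```python
import numpy as np

# As dimensionality d grows, the farthest-to-nearest distance ratio from a
# query point shrinks toward 1, so the "nearest" neighbor loses its meaning.
rng = np.random.default_rng(5)
n = 1000
for d in (2, 10, 15, 100, 1000):
    X = rng.random((n, d))                      # i.i.d. uniform data points
    q = rng.random(d)                           # query point
    dist = np.linalg.norm(X - q, axis=1)
    print(f"d = {d:4d}   farthest/nearest = {dist.max() / dist.min():.2f}")
```

The ratio is already small by 10-15 dimensions, consistent with the paper's empirical claim.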