Author

John M. Pauly

Bio: John M. Pauly is an academic researcher from Stanford University. The author has contributed to research in topics: Iterative reconstruction & Artificial neural network. The author has an h-index of 11, co-authored 49 publications receiving 509 citations.

Papers
01 Jan 2004
TL;DR: A method to evaluate the effective randomness of a randomly under-sampled trajectory by analyzing the statistics of aliasing in the sparse transform domain is provided, and a 5-fold scan-time reduction is demonstrated.
Abstract: M. Lustig, D. L. Donoho, J. M. Pauly; Electrical Engineering and Statistics, Stanford University, Stanford, CA, United States.

Introduction: Recently a rapid imaging method was proposed [1] that exploits the fact that sparse or compressible signals, such as MR images, can be recovered from randomly under-sampled frequency data [1,2,3]. Because pure random sampling in 2D is impractical for MRI hardware, it was proposed to use randomly perturbed spirals to approximate random sampling. Indeed, pure 2D random sampling is impractical; however, randomly undersampling the phase encodes in a 3D Cartesian scan (Fig. 1) is practical, involves no overhead, is simple to implement, and is purely random in two dimensions. Moreover, scan-time reduction in 3D Cartesian scans is always an issue. We provide a method to evaluate the effective randomness of a randomly under-sampled trajectory by analyzing the statistics of aliasing in the sparse transform domain. Applying this method to MR angiography, where images are truly sparse, we demonstrate a 5-fold scan-time reduction, which can be crucial in time-limited situations or can be used for time-resolved imaging.

Theory: Medical images in general, and angiograms in particular, often have a sparse representation under a linear transform (wavelets, DCT, finite differences, etc.) [1]. Under-sampling the Fourier domain results in aliasing. When the under-sampling is random, the aliasing is incoherent and acts as additional noise-like interference in the image and, more importantly, as incoherent interference among the sparse transform coefficients. Therefore, it is possible to recover the sparse transform coefficients using a non-linear reconstruction scheme [1-4] and, consequently, to recover the image itself. The interference in the sparse domain is a generalization of the point-spread function (PSF): an interference measure I(n,m) between the n-th and m-th transform-domain basis elements.
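As a rough illustration of the randomness evaluation described above, the sketch below builds a random phase-encode mask for a 3D Cartesian scan and examines the sidelobe statistics of its point-spread function; the interference I(n,m) generalizes this PSF to the sparse transform domain. Grid size, acceleration factor, and all names are illustrative assumptions, not taken from the abstract.

```python
# Minimal sketch (assumed, not from the abstract): measure the incoherence of a
# randomly under-sampled 3D-Cartesian phase-encode pattern via the point-spread
# function of the sampling mask.
import numpy as np

rng = np.random.default_rng(0)
ny, nz = 128, 128          # phase-encode grid of a 3D Cartesian scan
accel = 5                  # 5-fold undersampling, as in the abstract

# Random phase-encode mask: keep roughly 1/accel of the (ky, kz) lines.
mask = rng.random((ny, nz)) < 1.0 / accel

# PSF of the mask: inverse FFT of the sampling pattern.
psf = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(mask))))
psf /= psf.max()

# Incoherent sampling shows up as low, noise-like sidelobes.
sidelobes = psf.copy()
sidelobes[ny // 2, nz // 2] = 0.0   # remove the main lobe
print(f"peak sidelobe: {sidelobes.max():.4f}")
print(f"sidelobe std : {sidelobes.std():.6f}")
```

For a uniform (non-random) undersampling of the same factor, the sidelobes would instead be a few coherent spikes at the fold-over locations rather than noise-like interference.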

91 citations

Posted Content
TL;DR: Evaluation on ultra-low-dose clinical data shows that the proposed deep learning method achieves better results than the state-of-the-art methods and reconstructs images of comparable quality using only 0.5% of the original regular dose.
Abstract: Positron emission tomography (PET) is widely used in various clinical applications, including cancer diagnosis, heart disease, and neurological disorders. The use of radioactive tracers in PET imaging raises concerns due to the risk of radiation exposure. To minimize this potential risk, efforts have been made to reduce the amount of radio-tracer used. However, lowering the dose results in a low signal-to-noise ratio (SNR) and loss of information, both of which heavily affect clinical diagnosis. Moreover, the ill-conditioning of low-dose PET image reconstruction makes it a difficult problem for iterative reconstruction algorithms. Previously proposed methods are typically complicated and slow, yet still cannot yield satisfactory results at significantly low doses. Here, we propose a deep learning method to resolve this issue: an encoder-decoder residual deep network with concatenated skip connections. Experiments show the proposed method can reconstruct low-dose PET images to standard-dose quality with only one two-hundredth of the dose. Different cost functions for training the model are explored. A multi-slice input strategy is introduced to provide the network with more structural information and make it more robust to noise. Evaluation on ultra-low-dose clinical data shows that the proposed method achieves better results than the state-of-the-art methods and reconstructs images of comparable quality using only 0.5% of the original regular dose.
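A minimal sketch of the kind of network the abstract describes, under assumed layer sizes and depth: an encoder-decoder with a concatenated skip connection and a global residual path, taking a multi-slice low-dose input and predicting the standard-dose center slice. The class name DenoiseNet and all hyperparameters are hypothetical.

```python
# Hedged sketch of an encoder-decoder residual network with a concatenated
# skip connection; depth and widths are illustrative assumptions.
import torch
import torch.nn as nn

class DenoiseNet(nn.Module):
    def __init__(self, in_ch=3, base=32):   # in_ch=3 for a 3-slice input
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 2, stride=2), nn.ReLU())
        # The skip connection is concatenated, not added, as in the abstract.
        self.dec2 = nn.Conv2d(base * 2, 1, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        d1 = torch.cat([d1, e1], dim=1)          # concatenate skip connection
        mid = x.shape[1] // 2
        # Global residual: predict a correction on top of the center input slice.
        return x[:, mid : mid + 1] + self.dec2(d1)

net = DenoiseNet()
low_dose = torch.randn(1, 3, 128, 128)           # three adjacent low-dose slices
standard = net(low_dose)                         # estimated standard-dose center slice
```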

91 citations

Posted Content
TL;DR: This simple architecture appears to significantly outperform the alternative deep ResNet architecture by 2 dB SNR, and conventional compressed-sensing MRI by 4 dB SNR with 100x faster inference; for image super-resolution, preliminary results indicate that modeling the denoising proximal demands deep ResNets.
Abstract: Recovering images from undersampled linear measurements typically leads to an ill-posed linear inverse problem that calls for proper statistical priors. Building effective priors is, however, challenged by the low training and testing overhead dictated by real-time tasks, and by the need to retrieve visually "plausible" and physically "feasible" images with minimal hallucination. To cope with these challenges, we design a cascaded network architecture that unrolls the proximal gradient iterations, bringing the benefits of generative residual networks (ResNets) to modeling the proximal operator. A mixture of pixel-wise and perceptual costs is then deployed to train the proximals. The overall architecture resembles back-and-forth projection onto the intersection of feasible and plausible images. Extensive computational experiments are conducted on a global task, reconstructing MR images of pediatric patients, and a more local task, super-resolving CelebA faces, which yield insights for designing efficient architectures. Our observations indicate that for MRI reconstruction, a recurrent ResNet with a single residual block effectively learns the proximal. This simple architecture appears to significantly outperform the alternative deep ResNet architecture by 2 dB SNR, and conventional compressed-sensing MRI by 4 dB SNR with 100x faster inference. For image super-resolution, our preliminary results indicate that modeling the denoising proximal demands deep ResNets.
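The following sketch unrolls the proximal gradient iteration described above, with one shared residual block standing in for the learned proximal, matching the recurrent single-block finding for MRI. The forward model A, its adjoint At, the step size, and the iteration count are placeholder assumptions.

```python
# Hedged sketch of an unrolled proximal gradient scheme: alternate a gradient
# step on the data term ||y - A x||^2 with a learned residual-network proximal.
import torch
import torch.nn as nn

class ResBlockProx(nn.Module):
    """One residual block acting as the learned proximal operator."""
    def __init__(self, ch=2):                  # 2 channels: real and imaginary
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

def unrolled_recon(y, A, At, prox, n_iter=5, step=1.0):
    """x_{k+1} = prox(x_k - step * At(A(x_k) - y)); prox weights are shared."""
    x = At(y)                                  # adjoint (zero-filled) initialization
    for _ in range(n_iter):
        x = x - step * At(A(x) - y)            # gradient step on the data term
        x = prox(x)                            # learned proximal, reused each iteration
    return x

# Toy usage with an identity forward model standing in for undersampled Fourier:
prox = ResBlockProx()
y = torch.randn(1, 2, 64, 64)
x_hat = unrolled_recon(y, A=lambda x: x, At=lambda x: x, prox=prox)
```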

56 citations

Posted Content
TL;DR: This work introduces bandpass filtering to increase the flexibility and scalability of deep neural networks for image reconstruction, and demonstrates this flexible architecture for reconstructing subsampled datasets of MRI scans.
Abstract: To increase the flexibility and scalability of deep neural networks for image reconstruction, a framework is proposed based on bandpass filtering. For many applications, sensing measurements are performed indirectly; in magnetic resonance imaging, for example, data are sampled in the frequency domain. The introduction of bandpass filtering enables leveraging known imaging physics while ensuring that the final reconstruction is consistent with actual measurements, maintaining reconstruction accuracy. We demonstrate this flexible architecture by reconstructing subsampled datasets of MRI scans. The resulting high subsampling rates increase the speed of MRI acquisitions and enable the visualization of rapid hemodynamics.
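One way to read the bandpass idea, sketched under my own assumptions about the band layout (non-overlapping strips along one k-space axis) and with an identity placeholder for the per-band model: split k-space into bands, process each band's image independently, and recombine only the in-band frequencies so the result stays tied to the measured data.

```python
# Hedged sketch of bandpass decomposition for reconstruction; the band layout
# and the per-band model are my assumptions, not the paper's architecture.
import numpy as np

def bandpass_masks(n, n_bands):
    """Partition the kx axis of an n x n k-space grid into n_bands strips."""
    masks, edges = [], np.linspace(0, n, n_bands + 1, dtype=int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = np.zeros((n, n), dtype=bool)
        m[lo:hi, :] = True
        masks.append(m)
    return masks

def reconstruct(kspace, band_model, n_bands=4):
    out = np.zeros(kspace.shape, dtype=complex)
    for m in bandpass_masks(kspace.shape[0], n_bands):
        band_img = np.fft.ifft2(np.where(m, kspace, 0))  # image of this band only
        band_img = band_model(band_img)                  # per-band processing
        out += np.where(m, np.fft.fft2(band_img), 0)     # keep only in-band frequencies
    return np.fft.ifft2(out)

# Toy usage: an identity "model" recovers the original image exactly,
# because the bands tile k-space without overlap.
img = np.random.rand(64, 64)
rec = reconstruct(np.fft.fft2(img), band_model=lambda x: x)
assert np.allclose(rec, img)
```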

43 citations

Patent
07 Aug 2008
TL;DR: In this article, a method for 3D magnetic resonance imaging (MRI) with slice-direction distortion correction is provided, in which one or more selective cross-sections with a thickness along a first axis are excited using an RF pulse with a bandwidth, where a selective cross-section is either a selective slice or a selective slab.
Abstract: A method for 3D magnetic resonance imaging (MRI) with slice-direction distortion correction is provided. One or more selective cross-sections with a thickness along a first axis are excited using an RF pulse with a bandwidth, where a selective cross-section is either a selective slice or a selective slab. A refocusing pulse is applied to form a spin echo. One or more 2D encoded image signals are acquired with readout along a second axis and phase encoding along a third axis, where the data along the phase-encoded first and third axes are acquired with an undersampling scheme. Slice-direction distortion is corrected by resolving position using phase encoding.

40 citations


Cited by
Journal ArticleDOI
TL;DR: This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
Abstract: Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.
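A small numerical illustration of the CS claim, with all sizes and the solver chosen here for illustration only: an 8-sparse signal of length 256 is recovered from 64 random measurements, far fewer than the 256 Nyquist-rate samples, by iterative soft thresholding (ISTA).

```python
# Hedged demo of sparse recovery from sub-Nyquist random measurements.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 8                              # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
y = A @ x_true                                    # m << n measurements

# ISTA: gradient step on ||y - Ax||^2 followed by soft thresholding (l1 prior).
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / spectral norm squared
lam = 0.01
for _ in range(500):
    z = x + step * A.T @ (y - A @ x)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3e}")
```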

9,686 citations

Patent
14 Jul 2011
TL;DR: Using a multiple receiving coil composed of individual receiving coils, an imaging portion of a subject is subjected to a first pulse sequence to create n sensitivity images (701 to 703), fewer in number than the examination images, as discussed by the authors.
Abstract: Using a multiple receiving coil composed of individual receiving coils, an imaging portion of a subject is subjected to a first pulse sequence to create n sensitivity images (701 to 703), fewer in number than the examination images. When these sensitivity images are created, an NMR signal is measured only for the low-frequency region of k-space. A second pulse sequence, from which a phase-encode step is removed, is then conducted to create m (m > n) examination images (704, 705) of the subject using the receiving coils. Sensitivity distributions (707, 708) of the receiving coils are determined from the sensitivity images (701 to 703); if no sensitivity distributions correspond to the slice positions of the examination images (704, 705), they are determined by slice interpolation using the sensitivity distributions (701 to 703), and the aliasing artifacts of the examination images (704, 705) are removed by a matrix operation using the sensitivity distributions (707, 708).
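A hedged sketch of the unfolding step such a method implies: with 2-fold aliasing, each aliased pixel is a coil-weighted sum of two true pixels, so the "matrix operation" reduces to a small least-squares solve per pixel using the coil sensitivity maps. Coil count, reduction factor, and function names are illustrative assumptions, not the patent's.

```python
# SENSE-style unfolding sketch: solve a tiny least-squares system per pixel.
import numpy as np

def sense_unfold(aliased, sens, R=2):
    """aliased: (n_coils, ny//R, nx) folded coil images;
    sens: (n_coils, ny, nx) coil sensitivity maps."""
    n_coils, ny_r, nx = aliased.shape
    img = np.zeros((ny_r * R, nx), dtype=complex)
    for y in range(ny_r):
        for x in range(nx):
            # Columns of S: sensitivities at the R locations that fold onto (y, x).
            S = np.stack([sens[:, y + r * ny_r, x] for r in range(R)], axis=1)
            rho, *_ = np.linalg.lstsq(S, aliased[:, y, x], rcond=None)
            for r in range(R):
                img[y + r * ny_r, x] = rho[r]
    return img
```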

1,792 citations

Patent
TL;DR: In this paper, a magnetic resonance imaging (MRI) system is presented that detects and compensates for body motion with high precision and short processing time during radial scanning. The system includes a control unit that applies radio-frequency magnetic fields and magnetic field gradients to a subject lying in a static magnetic field and detects the magnetic resonance signals generated by the subject.

913 citations

Journal ArticleDOI
TL;DR: In this article, a convolutional neural network (CNN)-based regularization prior is proposed for inverse problems with arbitrary structure, where the forward model is explicitly accounted for, so a smaller network with fewer parameters is sufficient to capture the image information compared to direct inversion.
Abstract: We introduce a model-based image reconstruction framework with a convolutional neural network (CNN)-based regularization prior. The proposed formulation provides a systematic approach for deriving deep architectures for inverse problems with arbitrary structure. Since the forward model is explicitly accounted for, a smaller network with fewer parameters is sufficient to capture the image information compared to direct inversion approaches, reducing the demand for training data and training time. Since we rely on end-to-end training with weight sharing across iterations, the CNN weights are customized to the forward model, offering improved performance over approaches that rely on pre-trained denoisers. Our experiments show that the decoupling of the number of iterations from the network complexity offered by this approach provides benefits, including lower demand for training data, reduced risk of overfitting, and implementations with a significantly reduced memory footprint. We propose to enforce data consistency using numerical optimization blocks, such as the conjugate gradient algorithm, within the network. This approach offers faster convergence per iteration compared to methods that rely on proximal gradient steps to enforce data consistency. Our experiments show that the faster convergence translates to improved performance, primarily when the available GPU memory restricts the number of iterations.
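A compact sketch of the alternation the abstract describes, with the CNN prior left as a placeholder callable: repeat a denoising step z = D_w(x) and then a conjugate-gradient solve of (A^H A + lam I) x = A^H b + lam z, which plays the role of the numerical-optimization data-consistency block. The regularization weight, iteration counts, and all names are assumptions.

```python
# Hedged sketch of model-based reconstruction with a CNN prior and a
# conjugate-gradient data-consistency block (real-valued for simplicity).
import torch

def conjugate_gradient(apply_M, rhs, n_iter=10):
    """Solve M x = rhs for symmetric positive-definite M given as a callable."""
    x = torch.zeros_like(rhs)
    r = rhs.clone()
    p = r.clone()
    rs = (r * r).sum()
    for _ in range(n_iter):
        Mp = apply_M(p)
        alpha = rs / (p * Mp).sum()
        x = x + alpha * p
        r = r - alpha * Mp
        rs_new = (r * r).sum()
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def modl_recon(b, A, At, denoiser, lam=0.05, n_outer=5):
    x = At(b)                                        # adjoint initialization
    for _ in range(n_outer):
        z = denoiser(x)                              # shared CNN prior D_w (placeholder)
        rhs = At(b) + lam * z
        # Data consistency: solve (A^H A + lam I) x = A^H b + lam z with CG.
        x = conjugate_gradient(lambda v: At(A(v)) + lam * v, rhs)
    return x
```

Because the same denoiser weights are reused at every outer iteration, the iteration count can be increased without growing the network, which is the decoupling of iterations from network complexity that the abstract credits for the reduced training-data demand and memory footprint.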

815 citations