
Showing papers on "Iterative reconstruction published in 2013"


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper demonstrates with some simple examples how Plug-and-Play priors can be used to mix and match a wide variety of existing denoising models with a tomographic forward model, thus greatly expanding the range of possible problem solutions.
Abstract: Model-based reconstruction is a powerful framework for solving a variety of inverse problems in imaging. In recent years, enormous progress has been made in the problem of denoising, a special case of an inverse problem where the forward model is an identity operator. Similarly, great progress has been made in improving model-based inversion when the forward model corresponds to complex physical measurements in applications such as X-ray CT, electron-microscopy, MRI, and ultrasound, to name just a few. However, combining state-of-the-art denoising algorithms (i.e., prior models) with state-of-the-art inversion methods (i.e., forward models) has been a challenge for many reasons. In this paper, we propose a flexible framework that allows state-of-the-art forward models of imaging systems to be matched with state-of-the-art priors or denoising models. This framework, which we term as Plug-and-Play priors, has the advantage that it dramatically simplifies software integration, and moreover, it allows state-of-the-art denoising methods that have no known formulation as an optimization problem to be used. We demonstrate with some simple examples how Plug-and-Play priors can be used to mix and match a wide variety of existing denoising models with a tomographic forward model, thus greatly expanding the range of possible problem solutions.
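
The coupling mechanism behind Plug-and-Play priors is an ADMM-style variable splitting in which the prior only ever appears as a denoising step. A minimal sketch of that idea, assuming a toy Gaussian-blur forward model and a median filter standing in for the plug-in denoiser (neither is from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def pnp_admm(y, forward, adjoint, denoise, rho=0.5, n_iter=50):
    """Plug-and-Play ADMM: the prior enters only through the `denoise` call."""
    x = adjoint(y)                       # crude initial estimate
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        # data-fidelity step: a few gradient steps on 0.5||Ax - y||^2 + rho/2 ||x - (v - u)||^2
        for _ in range(5):
            grad = adjoint(forward(x) - y) + rho * (x - (v - u))
            x = x - 0.1 * grad
        v = denoise(x + u)               # prior step: any off-the-shelf denoiser
        u = u + x - v                    # dual update
    return x

# toy deblurring example: Gaussian blur forward model, median filter as the "prior"
rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0
blur = lambda img: gaussian_filter(img, sigma=2.0)   # approximately self-adjoint (symmetric kernel)
y = blur(truth) + 0.01 * rng.standard_normal(truth.shape)
recon = pnp_admm(y, blur, blur, lambda z: median_filter(z, size=3))
```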

884 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate the state-of-the-art denoising performance of BM4D, and its effectiveness when exploited as a regularizer in volumetric data reconstruction.
Abstract: We present an extension of the BM3D filter to volumetric data. The proposed algorithm, BM4D, implements the grouping and collaborative filtering paradigm, where mutually similar d-dimensional patches are stacked together in a (d+1)-dimensional array and jointly filtered in transform domain. While in BM3D the basic data patches are blocks of pixels, in BM4D we utilize cubes of voxels, which are stacked into a 4-D “group.” The 4-D transform applied on the group simultaneously exploits the local correlation present among voxels in each cube and the nonlocal correlation between the corresponding voxels of different cubes. Thus, the spectrum of the group is highly sparse, leading to very effective separation of signal and noise through coefficient shrinkage. After inverse transformation, we obtain estimates of each grouped cube, which are then adaptively aggregated at their original locations. We evaluate the algorithm on denoising of volumetric data corrupted by Gaussian and Rician noise, as well as on reconstruction of volumetric phantom data with non-zero phase from noisy and incomplete Fourier-domain (k-space) measurements. Experimental results demonstrate the state-of-the-art denoising performance of BM4D, and its effectiveness when exploited as a regularizer in volumetric data reconstruction.
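
A heavily reduced sketch of the grouping-and-collaborative-filtering paradigm described above, assuming non-overlapping reference cubes, an FFT as the 4-D transform, and plain hard thresholding; the actual BM4D transforms, block matching and aggregation weights are more elaborate:

```python
import numpy as np

def bm4d_like_denoise(vol, sigma, cube=4, search=8, group_size=8):
    """Toy grouping / collaborative filtering: stack similar cubes into a 4-D group,
    hard-threshold its FFT spectrum, invert, and aggregate estimates by averaging."""
    Z, Y, X = vol.shape
    out = np.zeros_like(vol, dtype=float)
    weight = np.zeros_like(vol, dtype=float)
    coords = [(z, y, x) for z in range(0, Z - cube + 1, cube)
                        for y in range(0, Y - cube + 1, cube)
                        for x in range(0, X - cube + 1, cube)]
    def patch(p):
        z, y, x = p
        return vol[z:z+cube, y:y+cube, x:x+cube]
    for ref in coords:
        # group the most similar cubes found inside a local search window
        cands = [c for c in coords if all(abs(a - b) <= search for a, b in zip(c, ref))]
        cands.sort(key=lambda c: float(np.sum((patch(c) - patch(ref)) ** 2)))
        group = cands[:group_size]
        stack = np.stack([patch(c) for c in group]).astype(float)   # 4-D group
        spec = np.fft.fftn(stack)                                    # 4-D transform
        lam = 3.0 * sigma * np.sqrt(stack.size)   # noise level of unnormalised FFT coefficients
        spec[np.abs(spec) < lam] = 0.0            # hard thresholding (coefficient shrinkage)
        filtered = np.real(np.fft.ifftn(spec))
        for c, est in zip(group, filtered):       # aggregate cube estimates
            z, y, x = c
            out[z:z+cube, y:y+cube, x:x+cube] += est
            weight[z:z+cube, y:y+cube, x:x+cube] += 1.0
    return out / np.maximum(weight, 1.0)

# toy usage on a small noisy volume
rng = np.random.default_rng(0)
clean = np.zeros((16, 16, 16)); clean[4:12, 4:12, 4:12] = 1.0
denoised = bm4d_like_denoise(clean + 0.1 * rng.standard_normal(clean.shape), sigma=0.1)
```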

748 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: A visual saliency detection algorithm from the perspective of reconstruction errors that applies the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors and refined by an object-biased Gaussian model is proposed.
Abstract: In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and robust to background noise.
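
The dense-reconstruction-error cue can be illustrated with a few lines of linear algebra: build a PCA basis from boundary (assumed background) region descriptors and score each region by how poorly that basis reconstructs it. The region descriptors and the PCA dimensionality below are assumptions for illustration, not the paper's exact features:

```python
import numpy as np

def dense_reconstruction_saliency(features, boundary_idx, n_components=8):
    """Score regions by their reconstruction error against a background PCA basis
    built from boundary regions; `features` is an (n_regions, d) descriptor array."""
    B = features[boundary_idx]                       # background templates
    mu = B.mean(axis=0)
    _, _, Vt = np.linalg.svd(B - mu, full_matrices=False)
    basis = Vt[:n_components]                        # principal background directions
    centered = features - mu
    recon = centered @ basis.T @ basis               # dense reconstruction
    err = np.sum((centered - recon) ** 2, axis=1)    # reconstruction error = saliency cue
    return (err - err.min()) / (np.ptp(err) + 1e-12) # normalise to [0, 1]

# toy usage with random descriptors; the first 20 regions are boundary regions
feats = np.random.default_rng(1).standard_normal((100, 32))
saliency = dense_reconstruction_saliency(feats, np.arange(20))
```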

725 citations


Journal ArticleDOI
TL;DR: This paper investigates an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement, and introduces the binary iterative hard thresholding algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
Abstract: The compressive sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that i.i.d. random Gaussian matrices provide measurement mappings that, with overwhelming probability, achieve nearly optimal error decay. Next, we consider reconstruction robustness to measurement errors and noise and introduce the binary ε-stable embedding property, which characterizes the robustness of the measurement process to sign changes. We show that the same class of matrices that provide almost optimal noiseless performance also enable such a robust mapping. On the practical side, we introduce the binary iterative hard thresholding algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.
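
The binary iterative hard thresholding (BIHT) loop itself is short: take a gradient step that reduces sign inconsistencies, then keep the K largest-magnitude entries. A sketch under the paper's i.i.d. Gaussian measurement assumption (step size and iteration count are arbitrary choices here):

```python
import numpy as np

def biht(y_sign, Phi, sparsity, n_iter=100, tau=None):
    """Binary Iterative Hard Thresholding (sketch): recover a unit-norm
    K-sparse signal from the signs of its random projections."""
    m, n = Phi.shape
    tau = tau if tau is not None else 1.0 / m
    x = np.zeros(n)
    for _ in range(n_iter):
        # gradient step on the one-sided sign-consistency objective
        a = x + tau * Phi.T @ (y_sign - np.sign(Phi @ x))
        # keep the K largest-magnitude entries (hard thresholding)
        keep = np.argsort(np.abs(a))[-sparsity:]
        x = np.zeros(n); x[keep] = a[keep]
    return x / (np.linalg.norm(x) + 1e-12)   # amplitude is lost in 1-bit CS

# toy example with an i.i.d. Gaussian measurement matrix
rng = np.random.default_rng(0)
n, m, K = 200, 500, 5
x_true = np.zeros(n); x_true[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x_true /= np.linalg.norm(x_true)
Phi = rng.standard_normal((m, n))
x_hat = biht(np.sign(Phi @ x_true), Phi, K)
```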

645 citations


Journal ArticleDOI
TL;DR: Extensive synthetic and real data experiments show that the proposed small target detection method not only works more stably for different target sizes and signal-to-clutter ratio values, but also has better detection performance compared with conventional baseline methods.
Abstract: The robust detection of small targets is one of the key techniques in infrared search and tracking applications. A novel small target detection method in a single infrared image is proposed in this paper. Initially, the traditional infrared image model is generalized to a new infrared patch-image model using local patch construction. Then, because of the non-local self-correlation property of the infrared background image, small target detection is formulated, based on the new model, as an optimization problem of recovering low-rank and sparse matrices, which is effectively solved using stable principal component pursuit. Finally, a simple adaptive segmentation method is used to segment the target image and the segmentation result can be refined by post-processing. Extensive synthetic and real data experiments show that under different clutter backgrounds the proposed method not only works more stably for different target sizes and signal-to-clutter ratio values, but also has better detection performance compared with conventional baseline methods.
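
The core of the method is the low-rank (background) plus sparse (target) decomposition of the patch-image. The sketch below uses a plain principal component pursuit solved by ADMM with singular value thresholding as a simplified stand-in for the stable principal component pursuit used in the paper:

```python
import numpy as np

def soft(X, t):
    """Elementwise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def rpca_admm(D, lam=None, mu=None, n_iter=200):
    """Low-rank + sparse decomposition D ≈ L + S by ADMM (principal component pursuit)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        # singular value thresholding -> low-rank background
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # soft thresholding -> sparse targets
        S = soft(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
    return L, S

# the infrared patch-image would be decomposed as background (L) + targets (S)
patch_image = np.random.default_rng(0).standard_normal((50, 80))
L, S = rpca_admm(patch_image)
```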

617 citations


Journal ArticleDOI
18 Jan 2013-Science
TL;DR: By leveraging metamaterials and compressive imaging, a low-profile aperture capable of microwave imaging without lenses, moving parts, or phase shifters is demonstrated and allows image compression to be performed on the physical hardware layer rather than in the postprocessing stage, thus averting the detector, storage, and transmission costs associated with full diffraction-limited sampling of a scene.
Abstract: By leveraging metamaterials and compressive imaging, a low-profile aperture capable of microwave imaging without lenses, moving parts, or phase shifters is demonstrated. This designer aperture allows image compression to be performed on the physical hardware layer rather than in the postprocessing stage, thus averting the detector, storage, and transmission costs associated with full diffraction-limited sampling of a scene. A guided-wave metamaterial aperture is used to perform compressive image reconstruction at 10 frames per second of two-dimensional (range and angle) sparse still and video scenes at K-band (18 to 26 gigahertz) frequencies, using frequency diversity to avoid mechanical scanning. Image acquisition is accomplished with a 40:1 compression ratio.
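
Computationally, the scene is recovered from a measurement vector y = H f, where each row of H is the aperture's response at one probed frequency. A toy sketch assuming a known transfer matrix and a sparse scene, reconstructed by iterative soft thresholding (the paper's matrix sizes, calibration, and solver details differ):

```python
import numpy as np

def ista_reconstruct(y, H, lam=0.05, n_iter=300):
    """Recover a sparse scene f from measurements y = H f by iterative soft thresholding."""
    step = 1.0 / (np.linalg.norm(H, 2) ** 2)      # 1 / Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        f = f + step * (H.T @ (y - H @ f))        # gradient step on 0.5||Hf - y||^2
        f = np.sign(f) * np.maximum(np.abs(f) - lam * step, 0.0)   # soft threshold
    return f

# toy frequency-diverse measurement: 64 "frequencies" probing a 256-pixel sparse scene
rng = np.random.default_rng(0)
n_scene, n_freq = 256, 64
f_true = np.zeros(n_scene); f_true[rng.choice(n_scene, 6, replace=False)] = 1.0
H = rng.standard_normal((n_freq, n_scene))        # stand-in for the measured transfer matrix
y = H @ f_true + 0.01 * rng.standard_normal(n_freq)
f_hat = ista_reconstruct(y, H)
```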

478 citations


Proceedings ArticleDOI
29 Jun 2013
TL;DR: A new system for real-time dense reconstruction with equivalent quality to existing online methods, but with support for additional spatial scale and robustness in dynamic scenes, designed around a simple and flat point-based representation.
Abstract: Real-time or online 3D reconstruction has wide applicability and receives further interest due to availability of consumer depth cameras. Typical approaches use a moving sensor to accumulate depth measurements into a single model which is continuously refined. Designing such systems is an intricate balance between reconstruction quality, speed, spatial scale, and scene assumptions. Existing online methods either trade scale to achieve higher-quality reconstructions of small objects/scenes, or handle larger scenes by trading real-time performance and/or quality, or by limiting the bounds of the active reconstruction. Additionally, many systems assume a static scene, and cannot robustly handle scene motion or reconstructions that evolve to reflect scene changes. We address these limitations with a new system for real-time dense reconstruction with equivalent quality to existing online methods, but with support for additional spatial scale and robustness in dynamic scenes. Our system is designed around a simple and flat point-based representation, which directly works with the input acquired from range/depth sensors, without the overhead of converting between representations. The use of points enables speed and memory efficiency, directly leveraging the standard graphics pipeline for all central operations, i.e., camera pose estimation, data association, outlier removal, fusion of depth maps into a single denoised model, and detection and update of dynamic objects. We conclude with qualitative and quantitative results that highlight robust tracking and high quality reconstructions of a diverse set of scenes at varying scales.

388 citations


Journal ArticleDOI
TL;DR: This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations.

360 citations


Journal ArticleDOI
TL;DR: Iterative reconstruction (IR) technology for CT is presented in non-mathematical terms; IR can improve image quality in routine-dose CT and lower the radiation dose, but its disadvantages include longer computation times and a blotchy appearance of some images.
Abstract: Objectives To explain the technical principles of and differences between commercially available iterative reconstruction (IR) algorithms for computed tomography (CT) in non-mathematical terms for radiologists and clinicians.

357 citations


Journal ArticleDOI
TL;DR: It is shown that Fourier ring correlation provides an easy-to-use, laboratory consistent standard for measuring the resolution of SRM images, and is provided a freely available software tool that combines resolution measurement with image reconstruction.

288 citations


Proceedings ArticleDOI
23 Jun 2013
TL;DR: It is argued that image segmentation and dense 3D reconstruction contribute valuable information to each other's task and a rigorous mathematical framework is proposed to formulate and solve a joint segmentations and dense reconstruction problem.
Abstract: Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and/or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.

Journal ArticleDOI
TL;DR: A method is proposed to retrieve and correct position errors during the image reconstruction iterations, improving the quality of the retrieved object image and relaxing the position accuracy requirement while acquiring the diffraction patterns.
Abstract: Accurate knowledge of translation positions is essential in ptychography to achieve a good image quality and the diffraction limited resolution. We propose a method to retrieve and correct position errors during the image reconstruction iterations. Sub-pixel position accuracy after refinement is shown to be achievable within several tens of iterations. Simulation and experimental results for both optical and X-ray wavelengths are given. The method both improves the quality of the retrieved object image and relaxes the position accuracy requirement while acquiring the diffraction patterns.

Journal ArticleDOI
TL;DR: Benefits of IR include improved subjective and objective image quality as well as radiation dose reduction while preserving image quality; future studies need to address the value of IR in ultra-low-dose CT with clinically relevant endpoints.
Abstract: Objectives To present the results of a systematic literature search aimed at determining to what extent the radiation dose can be reduced with iterative reconstruction (IR) for cardiopulmonary and body imaging with computed tomography (CT) in the clinical setting and what the effects on image quality are with IR versus filtered back-projection (FBP) and to provide recommendations for future research on IR.

Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper proposes a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with absolute scale on-site while simultaneously supplying the user with real-time interactive feedback.
Abstract: In this paper, we propose a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with absolute scale on-site while simultaneously supplying the user with real-time interactive feedback. The method fills a gap in current cloud-based mobile reconstruction services as it ensures at capture time that the acquired image set fulfills desired quality and completeness criteria. In contrast to existing systems, the developed framework offers multiple innovative solutions. In particular, we investigate the usability of the available on-device inertial sensors to make the tracking and mapping process more resilient to rapid motions and to estimate the metric scale of the captured scene. Moreover, we propose an efficient and accurate scheme for dense stereo matching which allows the processing time to be reduced to interactive speed. We demonstrate the performance of the reconstruction pipeline on multiple challenging indoor and outdoor scenes of different size and depth variability.

Journal ArticleDOI
TL;DR: This work describes and validates a computationally efficient technique for noise map estimation directly from CT images, and an adaptive NLM filtering based on this noise map, on phantom and patient data.
Abstract: Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphical processing units (GPU) implementation of this noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the shape and peak frequency of the noise power spectrum better than commercial smoothing kernels, and indicate that the spatial resolution at low contrast levels is not significantly degraded. Both the subjective evaluation using the ACR phantom and the objective evaluation on a low-contrast detection task using a CHO model observer demonstrate an improvement on low-contrast performance. The GPU implementation can process and transfer 300 slice images within 5 min. On patient data, the adaptive NLM algorithm provides more effective denoising of CT data throughout a volume than standard NLM, and may allow significant lowering of radiation dose. After a two week pilot study of lower dose CT urography and CT enterography exams, both GI and GU radiology groups elected to proceed with permanent implementation of adaptive NLM in their GI and GU CT practices. Conclusions: This work describes and validates a computationally efficient technique for noise map estimation directly from CT images, and an adaptive NLM filtering based on this noise map, on phantom and patient data. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with clinical workflow. The adaptive NLM algorithm provides effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose.
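
The key algorithmic ingredient is an NLM filter whose smoothing strength follows the local noise estimate. A slow, illustrative 2-D sketch, assuming the noise map is already available as a per-pixel standard deviation (the paper derives it analytically from the projection data):

```python
import numpy as np

def adaptive_nlm_2d(img, noise_map, patch=3, search=5, h_scale=1.0):
    """Noise-adaptive non-local means: the smoothing strength at each pixel is tied
    to the local noise standard deviation, so denoising is stronger where noise is higher."""
    pad = patch // 2 + search // 2
    pr = patch // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ic, jc = i + pad, j + pad
            ref = padded[ic-pr:ic+pr+1, jc-pr:jc+pr+1]
            h2 = (h_scale * noise_map[i, j]) ** 2 + 1e-12
            num = den = 0.0
            for di in range(-(search // 2), search // 2 + 1):
                for dj in range(-(search // 2), search // 2 + 1):
                    cand = padded[ic+di-pr:ic+di+pr+1, jc+dj-pr:jc+dj+pr+1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h2)   # patch-similarity weight
                    num += w * padded[ic+di, jc+dj]
                    den += w
            out[i, j] = num / den
    return out

# toy usage: the noise map doubles in the lower half of the image
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))
sigma = np.where(np.arange(32)[:, None] < 16, 0.05, 0.10) * np.ones((32, 32))
denoised = adaptive_nlm_2d(clean + sigma * rng.standard_normal(clean.shape), sigma)
```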

Journal ArticleDOI
TL;DR: A depth-map merging based multiple view stereo method for large-scale scenes which takes both accuracy and efficiency into account and can reconstruct quite accurate and dense point clouds with high computational efficiency.
Abstract: In this paper, we propose a depth-map merging based multiple view stereo method for large-scale scenes which takes both accuracy and efficiency into account. In the proposed method, an efficient patch-based stereo matching process is used to generate a depth map at each image with acceptable errors, followed by a depth-map refinement process to enforce consistency over neighboring views. Compared to state-of-the-art methods, the proposed method can reconstruct quite accurate and dense point clouds with high computational efficiency. Besides, the proposed method could be easily parallelized at image level, i.e., each depth-map is computed individually, which makes it suitable for large-scale scene reconstruction with high resolution images. The accuracy and efficiency of the proposed method are evaluated quantitatively on benchmark data and qualitatively on large data sets.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This paper offers the first variational approach to the problem of dense 3D reconstruction of non-rigid surfaces from a monocular video sequence and reconstructs highly deforming smooth surfaces densely and accurately directly from video, without the need for any prior models or shape templates.
Abstract: This paper offers the first variational approach to the problem of dense 3D reconstruction of non-rigid surfaces from a monocular video sequence. We formulate non-rigid structure from motion (nrsfm) as a global variational energy minimization problem to estimate dense low-rank smooth 3D shapes for every frame along with the camera motion matrices, given dense 2D correspondences. Unlike traditional factorization based approaches to nrsfm, which model the low-rank non-rigid shape using a fixed number of basis shapes and corresponding coefficients, we minimize the rank of the matrix of time-varying shapes directly via trace norm minimization. In conjunction with this low-rank constraint, we use an edge preserving total-variation regularization term to obtain spatially smooth shapes for every frame. Thanks to proximal splitting techniques the optimization problem can be decomposed into many point-wise sub-problems and simple linear systems which can be easily solved on GPU hardware. We show results on real sequences of different objects (face, torso, beating heart) where, despite challenges in tracking, illumination changes and occlusions, our method reconstructs highly deforming smooth surfaces densely and accurately directly from video, without the need for any prior models or shape templates.

Journal ArticleDOI
TL;DR: A discrete imaging model for PACT is developed that is based on the exact photoacoustic (PA) wave equation and facilitates the circumvention of these limitations and permits application of a wide-range of modern image reconstruction algorithms that can mitigate the effects of data incompleteness and noise.
Abstract: Existing approaches to image reconstruction in photoacoustic computed tomography (PACT) with acoustically heterogeneous media are limited to weakly varying media, are computationally burdensome, and/or cannot effectively mitigate the effects of measurement data incompleteness and noise. In this work, we develop and investigate a discrete imaging model for PACT that is based on the exact photoacoustic (PA) wave equation and facilitates the circumvention of these limitations. A key contribution of the work is the establishment of a procedure to implement a matched forward and backprojection operator pair associated with the discrete imaging model, which permits application of a wide-range of modern image reconstruction algorithms that can mitigate the effects of data incompleteness and noise. The forward and backprojection operators are based on the k-space pseudospectral method for computing numerical solutions to the PA wave equation in the time domain. The developed reconstruction methodology is investigated by use of both computer-simulated and experimental PACT measurement data.
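
The practical value of a matched forward/backprojection pair is that the backprojector is a true adjoint, which most modern iterative algorithms require. A common way to verify this is the dot-product (adjoint) test sketched below, with a toy matrix operator standing in for the k-space pseudospectral operators:

```python
import numpy as np

def dot_product_test(forward, adjoint, x_shape, y_shape, rng=None):
    """Check that `adjoint` really is the adjoint of `forward`:
    <A x, y> should equal <x, A^T y> up to numerical precision."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(x_shape)
    y = rng.standard_normal(y_shape)
    lhs = np.vdot(forward(x), y)
    rhs = np.vdot(x, adjoint(y))
    return lhs, rhs, abs(lhs - rhs) / max(abs(lhs), 1e-12)

# toy operator pair: a matrix and its transpose standing in for the wave-equation
# based forward and backprojection operators
A = np.random.default_rng(1).standard_normal((30, 40))
lhs, rhs, rel_err = dot_product_test(lambda x: A @ x, lambda y: A.T @ y, 40, 30)
```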

Journal ArticleDOI
TL;DR: The MBIR algorithm considerably improved objective and subjective image quality parameters of routine abdominal multidetector CT images compared with those of ASIR and FBP.
Abstract: Our experimental data suggest that the use of model-based iterative reconstruction considerably improved image quality compared with that of both the adaptive statistical iterative reconstruction algorithm and the noniterative filtered back projection.

Journal ArticleDOI
TL;DR: This work demonstrates a system that utilizes a digital light projector to illuminate a scene with approximately 1300 different light patterns every second and correlate these with the back scattered light measured by three spectrally-filtered single-pixel photodetectors to produce a full-color high-quality image in a few seconds of data acquisition.
Abstract: Single-pixel detectors can be used as imaging devices by making use of structured illumination. These systems work by correlating a changing incident light field with signals measured on a photodiode to derive an image of an object. In this work we demonstrate a system that utilizes a digital light projector to illuminate a scene with approximately 1300 different light patterns every second and correlate these with the back scattered light measured by three spectrally-filtered single-pixel photodetectors to produce a full-color high-quality image in a few seconds of data acquisition. We utilize a differential light projection method to self-normalize the measured signals, improving the reconstruction quality whilst making the system robust to external sources of noise. This technique can readily be extended for imaging applications at non-visible wavebands.
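
The reconstruction principle, stripped of hardware details, is to correlate the differential photodiode signal with the corresponding illumination pattern and sum over patterns. A sketch assuming noise-free Hadamard patterns and a perfectly linear detector (pattern count and resolution are toy values):

```python
import numpy as np

def single_pixel_reconstruct(scene, n_side=16):
    """Differential single-pixel imaging sketch: project each Hadamard pattern and its
    inverse, subtract the two detector signals, and correlate with the pattern."""
    n = n_side * n_side
    # Sylvester-construction Hadamard matrix (n must be a power of 2)
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    flat = scene.reshape(-1)
    image = np.zeros(n)
    for k in range(n):
        pat_pos = (H[k] > 0).astype(float)        # pattern
        pat_neg = 1.0 - pat_pos                   # inverse pattern
        sig = flat @ pat_pos - flat @ pat_neg     # differential photodiode signal
        image += sig * H[k]                       # correlate signal with pattern
    return image.reshape(n_side, n_side) / n

truth = np.zeros((16, 16)); truth[4:12, 6:10] = 1.0
recovered = single_pixel_reconstruct(truth)       # equals `truth` in the noise-free case
```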

Journal ArticleDOI
TL;DR: Discretization issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum are reviewed.
Abstract: There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase applicability of x-ray CT imaging. IR has the ability to significantly reduce patient dose; it provides the flexibility to reconstruct images from arbitrary x-ray system geometries and allows one to include detailed models of photon transport and detection physics to accurately correct for a wide variety of image degrading effects. This paper reviews discretization issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. The widespread implementation of IR with a highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling.
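
At its simplest, iterative reconstruction alternates between simulating projections from the current image and backprojecting the normalised residual, as in the SIRT-style sketch below; the modelling this review discusses (spectrum, scatter, noise statistics, detector physics) would enter through the system matrix and the residual weighting, none of which is modelled here:

```python
import numpy as np

def sirt(A, y, n_iter=50):
    """Basic algebraic iterative reconstruction (SIRT-style): repeatedly backproject
    the row- and column-normalised residual between measured and simulated projections."""
    m, n = A.shape
    row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
    x = np.zeros(n)
    for _ in range(n_iter):
        residual = (y - A @ x) / row_sums          # normalise per ray
        x = x + (A.T @ residual) / col_sums        # backproject and normalise per pixel
        x = np.maximum(x, 0.0)                     # non-negativity of attenuation
    return x

# toy non-negative system matrix standing in for the CT projector
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((120, 64)))
x_true = np.abs(rng.standard_normal(64))
x_rec = sirt(A, A @ x_true)
```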

Journal ArticleDOI
TL;DR: This novel image restoration method, which is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm, utilizes sparse representation dictionaries constructed from previously collected datasets and qualitatively and quantitatively outperforms other state-of-the-art methods.
Abstract: In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods.
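
The mechanism that lets a sparse-representation method denoise and interpolate at the same time is that the sparse code is fitted only to the observed noisy samples, while the synthesis uses the full dictionary. A sketch with a random dictionary and a plain OMP coder (the paper learns its dictionaries from previously collected scans and uses a different sparse solver):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Plain orthogonal matching pursuit over the columns of dictionary D."""
    residual = y.copy(); support = []
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1]); x[support] = coef
    return x

def sparse_denoise_interpolate(patch, mask, D, n_nonzero=4):
    """Code only the observed noisy samples of a patch over the dictionary, then
    synthesise the full patch from the sparse code (denoise + fill in at once)."""
    obs = mask.astype(bool)
    code = omp(D[obs, :], patch[obs], n_nonzero)
    return D @ code

# toy usage: random normalised dictionary, 40% of the samples missing
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128)); D /= np.linalg.norm(D, axis=0)
coef_true = np.zeros(128); coef_true[rng.choice(128, 5, replace=False)] = 1.0
truth = D @ coef_true
mask = rng.random(64) < 0.6
noisy = truth + 0.05 * rng.standard_normal(64)
restored = sparse_denoise_interpolate(noisy, mask, D, n_nonzero=6)
```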

Journal ArticleDOI
TL;DR: Model-based iterative reconstruction allows detection of pulmonary nodules with ULD-CT with radiation exposure in the range of a posterior to anterior (PA) and lateral chest X-ray, and solid pulmonary nodule images are clearly depicted on ultra-low-dose chest CT.
Abstract: Objectives The purpose of this study was to assess the diagnostic image quality of ultra-low-dose chest computed tomography (ULD-CT) obtained with a radiation dose comparable to chest radiography and reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) in comparison with standard dose diagnostic CT (SDD-CT) or low-dose diagnostic CT (LDD-CT) reconstructed with FBP alone.

Journal ArticleDOI
TL;DR: In this paper, a hybrid algorithm is proposed that, similar to MART, iteratively reconstructs 3D-particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume.
Abstract: For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, such as 3D particle tracking velocimetry (3D-PTV), which records images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. multiplicative algebraic reconstruction technique, MART) followed by cross-correlation of sub-volumes, computing instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, similar to MART, iteratively reconstructs 3D-particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But like 3D-PTV, particles are represented by 3D positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape, which may differ for different positions in the volume and for each camera, is mandatory. Using synthetic data it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with similar accuracy as Tomo-PIV. Finally the method is validated with experimental data.

Journal ArticleDOI
TL;DR: VEO clearly confirms the tremendous potential of iterative reconstructions for dose reduction in CT and appears to be an important tool for patient follow-up, especially for pediatric patients where cumulative lifetime dose still remains high.

Journal ArticleDOI
TL;DR: A dimensionality reduction method that fits SRC well, which maximizes the ratio of between-class reconstruction residual to within-class reconstruction residual in the projected space and thus enables SRC to achieve better performance.
Abstract: A sparse representation-based classifier (SRC) is developed and shows great potential for real-world face recognition. This paper presents a dimensionality reduction method that fits SRC well. Since SRC adopts a class reconstruction residual-based decision rule, we use this rule as a criterion to steer the design of a feature extraction method. The method is thus called the SRC steered discriminative projection (SRC-DP). SRC-DP maximizes the ratio of between-class reconstruction residual to within-class reconstruction residual in the projected space and thus enables SRC to achieve better performance. SRC-DP provides low-dimensional representation of human faces to make the SRC-based face recognition system more efficient. Experiments are done on the AR, the extended Yale B, and PIE face image databases, and results demonstrate the proposed method is more effective than other feature extraction methods based on the SRC.
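
The decision rule that SRC-DP is designed around is the class-wise reconstruction residual. A sketch of that rule, with non-negative least squares standing in for the l1 sparse coder purely for brevity (the dictionary, labels and test sample below are toy data):

```python
import numpy as np
from scipy.optimize import nnls   # stand-in constrained coder for this sketch

def src_classify(D, labels, x):
    """Code the test sample over the training dictionary, then assign it to the class
    whose training samples reconstruct it with the smallest residual."""
    alpha, _ = nnls(D, x)                      # code of the test sample
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, alpha, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ coef_c)
    return min(residuals, key=residuals.get), residuals

# toy dictionary: 5 training samples per class, 3 classes, 20-dim features
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 15))
labels = np.repeat([0, 1, 2], 5)
pred, res = src_classify(D, labels, D[:, 7] + 0.05 * rng.standard_normal(20))
```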

Journal ArticleDOI
TL;DR: The effects of the algorithm used to reconstruct magnitude images from multichannel diffusion MRI on fiber orientation estimation are examined.
Abstract: Purpose: To examine the effects of the reconstruction algorithm of magnitude images from multi-channel diffusion MRI on fibre orientation estimation. Theory and Methods: It is well established that the method used to combine signals from different coil elements in multi-channel MRI can have an impact on the properties of the reconstructed magnitude image. Utilising a root-sum-of-squares (RSoS) approach results in a magnitude signal that follows an effective non-central χ distribution. As a result, the noise floor, the minimum measurable signal in the absence of any true signal, is elevated. This is particularly relevant for diffusion-weighted MRI, where the signal attenuation is of interest. Results: In this study, we illustrate problems that such image reconstruction characteristics may cause in the estimation of fibre orientations, both for model-based and model-free approaches, when modern 32-channel coils are employed. We further propose an alternative image reconstruction method that is based on sensitivity encoding (SENSE) and preserves the Rician nature of the single-channel, magnitude MR signal. We show that for the same k-space data, RSoS can cause excessive overfitting and reduced precision in orientation estimation compared to the SENSE-based approach. Conclusion: These results highlight the importance of choosing the appropriate image reconstruction method for tractography studies that use multi-channel receiver coils for diffusion MRI acquisition.
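
The difference between the two coil-combination strategies discussed above can be written in a few lines: root-sum-of-squares versus a sensitivity-weighted complex combination in the spirit of SENSE. The sensitivity maps and noise levels below are toy assumptions:

```python
import numpy as np

def combine_rsos(coil_images):
    """Root-sum-of-squares combination: its magnitude follows a non-central chi
    distribution, which raises the noise floor in low-SNR diffusion data."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

def combine_sense_like(coil_images, sensitivities):
    """Sensitivity-weighted complex combination (unaccelerated SENSE-style), whose
    magnitude stays Rician; the sensitivity maps are assumed known here."""
    num = np.sum(np.conj(sensitivities) * coil_images, axis=0)
    den = np.sum(np.abs(sensitivities) ** 2, axis=0) + 1e-12
    return np.abs(num / den)

# toy 4-coil example with a heavily attenuated (diffusion-weighted) signal
rng = np.random.default_rng(0)
sens = rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))
signal = 0.2 * np.ones(64)
coils = sens * signal + 0.1 * (rng.standard_normal((4, 64))
                               + 1j * rng.standard_normal((4, 64)))
rsos = combine_rsos(coils)
sense = combine_sense_like(coils, sens)
```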

Journal ArticleDOI
04 Jun 2013-Sensors
TL;DR: The currently available optoacoustic image reconstruction and quantification approaches are assessed, including back-projection and model-based inversion algorithms, sparse signal representation, wavelet-based approaches, methods for reduction of acoustic artifacts as well as multi-spectral methods for visualization of tissue bio-markers.
Abstract: This paper comprehensively reviews the emerging topic of optoacoustic imaging from the image reconstruction and quantification perspective. Optoacoustic imaging combines highly attractive features, including rich contrast and high versatility in sensing diverse biological targets, excellent spatial resolution not compromised by light scattering, and relatively low cost of implementation. Yet, living objects present a complex target for optoacoustic imaging due to the presence of a highly heterogeneous tissue background in the form of strong spatial variations of scattering and absorption. Extracting quantified information on the actual distribution of tissue chromophores and other biomarkers constitutes therefore a challenging problem. Image quantification is further compromised by some frequently-used approximated inversion formulae. In this review, the currently available optoacoustic image reconstruction and quantification approaches are assessed, including back-projection and model-based inversion algorithms, sparse signal representation, wavelet-based approaches, methods for reduction of acoustic artifacts as well as multi-spectral methods for visualization of tissue bio-markers. Applicability of the different methodologies is further analyzed in the context of real-life performance in small animal and clinical in-vivo imaging scenarios.
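
Among the reviewed approaches, the simplest is back-projection: each image point accumulates the detector signals sampled at the corresponding acoustic time of flight. A delay-and-sum sketch with a toy circular detection geometry (universal back-projection additionally applies a derivative and solid-angle weighting, omitted here):

```python
import numpy as np

def delay_and_sum_backprojection(signals, det_pos, grid, c=1500.0, dt=1e-8):
    """For each image point, sum the detector signals sampled at the time of flight
    from that point to each detector (speed of sound c, sampling interval dt)."""
    n_det, n_t = signals.shape
    image = np.zeros(len(grid))
    for i, r in enumerate(grid):
        for d in range(n_det):
            tof = np.linalg.norm(r - det_pos[d]) / c       # acoustic time of flight
            idx = int(round(tof / dt))
            if 0 <= idx < n_t:
                image[i] += signals[d, idx]
    return image / n_det

# toy 2-D geometry: 32 detectors on a 2 cm circle, 40x40 image grid
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
det_pos = 0.02 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
xs = np.linspace(-0.01, 0.01, 40)
grid = np.array([(x, y) for x in xs for y in xs])
signals = rng.standard_normal((32, 2000))          # stand-in recorded pressure traces
img = delay_and_sum_backprojection(signals, det_pos, grid).reshape(40, 40)
```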

Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper introduces an automatic method for removing reflection interference when imaging a scene behind a glass surface using the use of SIFT-flow to align the images such that a pixel-wise comparison can be made across the input set.
Abstract: This paper introduces an automatic method for removing reflection interference when imaging a scene behind a glass surface. Our approach exploits the subtle changes in the reflection with respect to the background in a small set of images taken at slightly different view points. Key to this idea is the use of SIFT-flow to align the images such that a pixel-wise comparison can be made across the input set. Gradients with variation across the image set are assumed to belong to the reflected scenes while constant gradients are assumed to belong to the desired background scene. By correctly labelling gradients belonging to reflection or background, the background scene can be separated from the reflection interference. Unlike previous approaches that exploit motion, our approach does not make any assumptions regarding the background or reflected scenes' geometry, nor does it require the reflection to be static. This makes our approach practical for use in casual imaging scenarios. Our approach is straightforward and produces good results compared with existing methods.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: A formulation of monocular SLAM which combines live dense reconstruction with shape priors-based 3D tracking and reconstruction, and automatically augments the SLAM system with object specific identity, together with 6D pose and additional shape degrees of freedom for the object(s) of known class in the scene, combining image data and depth information for the pose and shape recovery.
Abstract: We propose a formulation of monocular SLAM which combines live dense reconstruction with shape priors-based 3D tracking and reconstruction. Current live dense SLAM approaches are limited to the reconstruction of visible surfaces. Moreover, most of them are based on the minimisation of a photo-consistency error, which usually makes them sensitive to specularities. In the 3D pose recovery literature, problems caused by imperfect and ambiguous image information have been dealt with by using prior shape knowledge. At the same time, the success of depth sensors has shown that combining joint image and depth information drastically increases the robustness of the classical monocular 3D tracking and 3D reconstruction approaches. In this work we link dense SLAM to 3D object pose and shape recovery. More specifically, we automatically augment our SLAM system with object specific identity, together with 6D pose and additional shape degrees of freedom for the object(s) of known class in the scene, combining image data and depth information for the pose and shape recovery. This leads to a system that allows for full scaled 3D reconstruction with the known object(s) segmented from the scene. The segmentation enhances the clarity, accuracy and completeness of the maps built by the dense SLAM system, while the dense 3D data aids the segmentation process, yielding faster and more reliable convergence than when using 2D image data alone.