
Showing papers on "Iterative reconstruction published in 2022"


Journal ArticleDOI
TL;DR: In this article, a two-step reconstruction method is proposed to improve the resolution of reconstructed temperature distribution images and maintain high accuracy, in which the problem of solving the temperature distribution is converted to an optimization problem and then solved by an improved version of the equilibrium optimizer.
Abstract: The precise temperature distribution measurement is crucial in many industrial fields, where ultrasonic tomography (UT) has broad application prospects and significance. In order to improve the resolution of reconstructed temperature distribution images and maintain high accuracy, a novel two-step reconstruction method is proposed in this article. First, the problem of solving the temperature distribution is converted to an optimization problem and then solved by an improved version of the equilibrium optimizer (IEO), in which a new nonlinear time strategy and novel population update rules are deployed. Then, based on the low-resolution, high-precision images reconstructed by IEO, Gaussian process regression (GPR) is adopted to enhance image resolution while keeping reconstruction errors low. The number of divided grids and the parameters of IEO are then studied further to improve reconstruction quality. The results of numerical simulations and experiments indicate that high-resolution images with low reconstruction errors can be reconstructed effectively by the proposed IEO-GPR method, which also shows excellent robustness. For a complex three-peak temperature distribution, competitive accuracy is achieved, with a root-mean-square error of 3.10% and an average relative error of 2.37%. In a practical experiment, the root-mean-square error of IEO-GPR is 0.72%, at least 0.89 percentage points lower than that of conventional algorithms.
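As a rough illustration of the second step, the following is a minimal numpy sketch of Gaussian process regression used to upsample a coarse reconstructed field onto a finer grid. It is a generic RBF-kernel GPR, not the authors' implementation; the grid sizes, length scale, noise level, and the synthetic single-peak field are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between two sets of 2-D points."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gpr_upsample(coarse, factor=4, length_scale=1.0, noise=1e-4):
    """Interpolate a coarse 2-D field onto a finer grid with GP regression."""
    n = coarse.shape[0]
    # training inputs: centres of the coarse grid cells
    xs = np.stack(np.meshgrid(np.arange(n), np.arange(n), indexing="ij"), -1)
    X = xs.reshape(-1, 2).astype(float)
    y = coarse.ravel()
    # prediction inputs: a finer grid over the same domain
    f = np.linspace(0, n - 1, n * factor)
    Xq = np.stack(np.meshgrid(f, f, indexing="ij"), -1).reshape(-1, 2)
    K = rbf_kernel(X, X, length_scale) + noise * np.eye(n * n)
    Ks = rbf_kernel(Xq, X, length_scale)
    mean = Ks @ np.linalg.solve(K, y)      # GP posterior mean
    return mean.reshape(n * factor, n * factor)

# a coarse 8x8 "temperature" field with a single hot peak
g = np.exp(-((np.arange(8)[:, None] - 4) ** 2
             + (np.arange(8)[None, :] - 4) ** 2) / 6.0)
fine = gpr_upsample(g, factor=4)
```

The GP posterior mean interpolates the coarse values smoothly, which is the sense in which GPR can raise apparent resolution while keeping errors controlled by the kernel prior.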

61 citations


Journal ArticleDOI
TL;DR: In portal venous abdominal photon-counting detector CT, an iterative reconstruction algorithm (QIR; Siemens Healthcare) at high strength levels improved image quality by reducing noise and improving contrast-to-noise ratio and lesion conspicuity without compromising image texture or CT attenuation values.
Abstract: Background An iterative reconstruction (IR) algorithm was introduced for clinical photon-counting detector (PCD) CT. Purpose To investigate the image quality and the optimal strength level of a quantum IR algorithm (QIR; Siemens Healthcare) for virtual monoenergetic images and polychromatic images (T3D) in a phantom and in patients undergoing portal venous abdominal PCD CT. Materials and Methods In this retrospective study, noise power spectrum (NPS) was measured in a water-filled phantom. Consecutive oncologic patients who underwent portal venous abdominal PCD CT between March and April 2021 were included. Virtual monoenergetic images at 60 keV and T3D were reconstructed without QIR (QIR-off; reference standard) and with QIR at four levels (QIR 1-4; index tests). Global noise index, contrast-to-noise ratio (CNR), and voxel-wise CT attenuation differences were measured. Noise and texture, artifacts, diagnostic confidence, and overall quality were assessed qualitatively. Conspicuity of hypodense liver lesions was rated by four readers. Parametric (analyses of variance, paired t tests) and nonparametric tests (Friedman, post hoc Wilcoxon signed-rank tests) were used to compare quantitative and qualitative image quality among reconstructions. Results In the phantom, NPS showed unchanged noise texture across reconstructions with maximum spatial frequency differences of 0.01 per millimeter. Fifty patients (mean age, 59 years ± 16 [standard deviation]; 31 women) were included. Global noise index was reduced from QIR-off to QIR-4 by 45% for 60 keV and by 44% for T3D (both, P < .001). CNR of the liver improved from QIR-off to QIR-4 by 74% for 60 keV and by 69% for T3D (both, P < .001). No evidence of difference was found in mean attenuation of fat and liver (P = .79-.84) and on a voxel-wise basis among reconstructions. Qualitatively, QIR-4 outperformed all reconstructions in every category for 60 keV and T3D (P value range, <.001 to .01). 
All four readers rated QIR-4 superior to other strengths for lesion conspicuity (P value range, <.001 to .04). Conclusion In portal venous abdominal photon-counting detector CT, an iterative reconstruction algorithm (QIR; Siemens Healthcare) at high strength levels improved image quality by reducing noise and improving contrast-to-noise ratio and lesion conspicuity without compromising image texture or CT attenuation values. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Sinitsyn in this issue.
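The noise power spectrum used for the phantom analysis above can be estimated from ROIs of a uniform scan by averaging squared FFT magnitudes and binning radially. The sketch below is a generic, hedged implementation in numpy; the ROI size, pixel pitch, and white-noise test data are illustrative, not from the study:

```python
import numpy as np

def radial_nps(rois, pixel_mm=0.5):
    """Radially averaged noise power spectrum from uniform-phantom ROIs."""
    n = rois[0].shape[0]
    acc = np.zeros((n, n))
    for roi in rois:
        noise = roi - roi.mean()                      # de-mean each ROI
        acc += np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2
    nps2d = acc * pixel_mm**2 / (len(rois) * n * n)   # NPS in HU^2 * mm^2
    freq = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_mm))
    fy, fx = np.meshgrid(freq, freq, indexing="ij")
    r = np.hypot(fx, fy)
    bins = np.linspace(0, r.max(), n // 2)
    out = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (r >= lo) & (r < hi)
        out.append(nps2d[mask].mean() if mask.any() else 0.0)
    return 0.5 * (bins[:-1] + bins[1:]), np.array(out)

rng = np.random.default_rng(0)
rois = [rng.normal(0.0, 10.0, (64, 64)) for _ in range(16)]
freqs, nps = radial_nps(rois)
f_peak = freqs[np.argmax(nps)]   # peak spatial frequency of the noise texture
```

Comparing the peak frequency of such curves across reconstructions is one common way to check that an IR algorithm preserves noise texture.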

49 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a method of MAS brightness temperature image reconstruction with a deep convolutional neural network (CNN); the network includes two fully connected (FC) layers and multiple convolutional and deconvolutional layers that realize the image reconstruction for MAS.
Abstract: In mirrored aperture synthesis (MAS), the existing brightness temperature image reconstruction methods include inverse cosine transform and impulse matrix reconstruction methods. However, the quality of the MAS brightness temperature images reconstructed by the existing methods is still poor and needs to be improved. This article proposes a method of MAS brightness temperature image reconstruction with deep convolutional neural network (CNN). The network includes two fully connected (FC) layers, multiple convolutional layers, and deconvolutional layers, which realize the image reconstruction for MAS. This method uses deep CNN to learn the MAS image reconstruction mapping and system errors, so as to improve the performance of the brightness temperature image reconstruction. Both simulation and experimental results verify that the performance of the proposed MAS-CNN method is better than the existing MAS image reconstruction methods.

40 citations


Journal ArticleDOI
TL;DR: A novel 3D reconstruction method based on the fusion of polarization imaging and binocular stereo vision for high-quality 3D reconstruction, including a data fitting term and a robust low-rank matrix factorization constraint, is investigated.

34 citations


Proceedings ArticleDOI
25 May 2022
TL;DR: This work introduces a new method that enables efficient and accurate surface reconstruction from Internet photo collections in the presence of varying illumination and proposes a hybrid voxel- and surface-guided sampling technique that allows for more efficient ray sampling around surfaces and leads to significant improvements in reconstruction quality.
Abstract: We are witnessing an explosion of neural implicit representations in computer vision and graphics. Their applicability has recently expanded beyond tasks such as shape generation and image-based rendering to the fundamental problem of image-based 3D reconstruction. However, existing methods typically assume constrained 3D environments with constant illumination captured by a small set of roughly uniformly distributed cameras. We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections in the presence of varying illumination. To achieve this, we propose a hybrid voxel- and surface-guided sampling technique that allows for more efficient ray sampling around surfaces and leads to significant improvements in reconstruction quality. Further, we present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes. We perform extensive experiments, demonstrating that our approach surpasses both classical and neural reconstruction methods on a wide variety of metrics. Code and data will be made available at https://zju3dv.github.io/neuralrecon-w.
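A surface-guided ray-sampling scheme of the kind described can be sketched as mixing uniform samples along each ray with samples concentrated around a current surface-depth estimate. This is a simplified stand-in for the paper's hybrid voxel- and surface-guided method; the parameter names and values are illustrative:

```python
import numpy as np

def surface_guided_samples(t_surf, n_coarse=32, n_fine=32, sigma=0.05, rng=None):
    """Mix uniform samples along a ray (depths in [0, 1]) with samples
    packed near the current surface-depth estimate t_surf."""
    if rng is None:
        rng = np.random.default_rng()
    coarse = rng.uniform(0.0, 1.0, n_coarse)            # exploratory samples
    fine = np.clip(rng.normal(t_surf, sigma, n_fine), 0.0, 1.0)  # near surface
    return np.sort(np.concatenate([coarse, fine]))

t = surface_guided_samples(0.6, rng=np.random.default_rng(0))
```

Concentrating samples near the surface spends the per-ray sample budget where the volume-rendering integrand actually varies, which is the intuition behind the reported quality gains.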

33 citations


Journal ArticleDOI
TL;DR: In this paper, the feasibility and quality of ultra-high-resolution coronary computed tomography angiography (CCTA) with dual-source photon-counting detector CT (PCD-CT) in patients with a high coronary calcium load were evaluated, including an analysis of the optimal reconstruction kernel and matrix size.
Abstract: The aim of this study was to evaluate the feasibility and quality of ultra-high-resolution coronary computed tomography angiography (CCTA) with dual-source photon-counting detector CT (PCD-CT) in patients with a high coronary calcium load, including an analysis of the optimal reconstruction kernel and matrix size. In this institutional review board-approved study, 20 patients (6 women; mean age, 79 ± 10 years; mean body mass index, 25.6 ± 4.3 kg/m²) undergoing PCD-CCTA in the ultra-high-resolution mode were included. Ultra-high-resolution CCTA was acquired in an electrocardiography-gated dual-source spiral mode at a tube voltage of 120 kV and collimation of 120 × 0.2 mm. The field of view (FOV) and matrix sizes were adjusted to the resolution properties of the individual reconstruction kernels using a FOV of 200 × 200 mm² or 150 × 150 mm² and a matrix size of 512 × 512 pixels or 1024 × 1024 pixels, respectively. Images were reconstructed using vascular kernels of 8 sharpness levels (Bv40, Bv44, Bv56, Bv60, Bv64, Bv72, Bv80, and Bv89), using quantum iterative reconstruction (QIR) at a strength level of 4, and a slice thickness of 0.2 mm. Images with the Bv40 kernel, QIR at a strength level of 4, and a slice thickness of 0.6 mm served as the reference. Image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, and blooming artifacts were quantified. For subjective image quality, 2 blinded readers evaluated image noise and delineation of coronary artery plaques and the adjacent vessel lumen using a 5-point discrete visual scale. A phantom scan served to characterize image noise texture by calculating the noise power spectrum for every reconstruction kernel. Maximum spatial frequency (f_peak) gradually shifted to higher values for reconstructions with the Bv40 to Bv64 kernel (0.15 to 0.56 mm⁻¹), but not for reconstructions with the Bv72 to Bv89 kernel.
Ultra-high-resolution CCTA was feasible in all patients (median calcium score, 479). In patients, reconstructions with the Bv40 kernel and a slice thickness of 0.6 mm showed the largest blooming artifacts (55.2% ± 9.8%) and lowest vessel sharpness (477.1 ± 73.6 ΔHU/mm) while achieving the highest SNR (27.4 ± 5.6) and CNR (32.9 ± 6.6) and lowest noise (17.1 ± 2.2 HU). Considering reconstructions with a slice thickness of 0.2 mm, image noise, SNR, CNR, vessel sharpness, and blooming artifacts differed significantly across kernels (all P < 0.001). With higher kernel sharpness, SNR and CNR continuously decreased, whereas image noise and vessel sharpness increased, with the highest sharpness for the Bv89 kernel (2383.4 ± 787.1 ΔHU/mm). Blooming artifacts continuously decreased for reconstructions with the Bv40 (slice thickness, 0.2 mm; 52.8% ± 9.2%) to the Bv72 kernel (39.7% ± 9.1%). Subjective noise was perceived by both readers in agreement with the objective measurements. Considering delineation of coronary artery plaques and the adjacent vessel lumen, reconstructions with the Bv64 and Bv72 kernels (for both, median score of 5) were favored by the readers, providing an excellent anatomic delineation of plaque characteristics and vessel lumen. Ultra-high-resolution CCTA with PCD-CT is feasible and enables the visualization of calcified coronaries with excellent image quality, high sharpness, and reduced blooming. Coronary plaque characterization and delineation of the adjacent vessel lumen are possible with optimal quality using the Bv64 kernel, a FOV of 200 × 200 mm², and a matrix size of 512 × 512 pixels.
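Vessel sharpness (in ΔHU/mm) and calcification blooming of the kind quantified here are typically derived from HU line profiles drawn across the vessel. Below is a hypothetical numpy sketch; the synthetic profile and the FWHM-based blooming definition are assumptions for illustration, not the study's exact metrics:

```python
import numpy as np

def vessel_sharpness(profile_hu, pixel_mm=0.2):
    """Maximum edge gradient (dHU/mm) of a line profile across a vessel wall."""
    grad = np.gradient(profile_hu, pixel_mm)
    return np.abs(grad).max()

def blooming_pct(profile_hu, true_diam_mm, pixel_mm=0.2):
    """Apparent calcification width (full width at half maximum) vs true size."""
    half = profile_hu.min() + 0.5 * (profile_hu.max() - profile_hu.min())
    width_mm = np.count_nonzero(profile_hu >= half) * pixel_mm
    return 100.0 * (width_mm - true_diam_mm) / true_diam_mm

# synthetic profile: a 2 mm calcification (800 HU) blurred over tissue (40 HU)
x = np.arange(0, 10, 0.2)
profile = 40 + 760 * np.exp(-((x - 5) / 1.4) ** 4)
```

Sharper kernels steepen the edge (larger ΔHU/mm) and narrow the apparent width, which is exactly the blooming reduction reported above.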

29 citations


Journal ArticleDOI
TL;DR: Lee et al. as mentioned in this paper compared the image quality and lung nodule detectability of deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction-V (ASIR-V) in ULD CT images.
Abstract: Background Ultra-low-dose (ULD) CT could facilitate the clinical implementation of large-scale lung cancer screening while minimizing the radiation dose. However, traditional image reconstruction methods are associated with image noise in low-dose acquisitions. Purpose To compare the image quality and lung nodule detectability of deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction-V (ASIR-V) in ULD CT. Materials and Methods Patients who underwent noncontrast ULD CT (performed at 0.07 or 0.14 mSv, similar to a single chest radiograph) and contrast-enhanced chest CT (CECT) from April to June 2020 were included in this prospective study. ULD CT images were reconstructed with filtered back projection (FBP), ASIR-V, and DLIR. Three-dimensional segmentation of lung tissue was performed to evaluate image noise. Radiologists detected and measured nodules with use of a deep learning-based nodule assessment system and recognized malignancy-related imaging features. Bland-Altman analysis and repeated-measures analysis of variance were used to evaluate the differences between ULD CT images and CECT images. Results A total of 203 participants (mean age ± standard deviation, 61 years ± 12; 129 men) with 1066 nodules were included, with 100 scans at 0.07 mSv and 103 scans at 0.14 mSv. The mean lung tissue noise ± standard deviation was 46 HU ± 4 for CECT and 59 HU ± 4, 56 HU ± 4, 53 HU ± 4, 54 HU ± 4, and 51 HU ± 4 in FBP, ASIR-V level 40%, ASIR-V level 80% (ASIR-V-80%), medium-strength DLIR, and high-strength DLIR (DLIR-H), respectively, of ULD CT scans (P < .001). The nodule detection rates of FBP reconstruction, ASIR-V-80%, and DLIR-H were 62.5% (666 of 1066 nodules), 73.3% (781 of 1066 nodules), and 75.8% (808 of 1066 nodules), respectively (P < .001). 
Bland-Altman analysis showed the percentage difference in long diameter from that of CECT was 9.3% (95% CI of the mean: 8.0, 10.6), 9.2% (95% CI of the mean: 8.0, 10.4), and 6.2% (95% CI of the mean: 5.0, 7.4) in FBP reconstruction, ASIR-V-80%, and DLIR-H, respectively (P < .001). Conclusion Compared with adaptive statistical iterative reconstruction-V, deep learning image reconstruction reduced image noise, increased nodule detection rate, and improved measurement accuracy on ultra-low-dose chest CT images. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Lee in this issue.
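The Bland-Altman analysis of nodule diameters used above reduces to the mean percentage difference and its 95% limits of agreement. A minimal numpy sketch with made-up diameters (the data are illustrative, not from the study; the paper normalizes relative to CECT, whereas this sketch uses the pairwise mean):

```python
import numpy as np

def bland_altman(a, b):
    """Mean percentage difference of b vs a with 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pct = 100.0 * (b - a) / ((a + b) / 2.0)   # percentage differences
    bias = pct.mean()
    sd = pct.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical long-axis diameters (mm): reference CECT vs ULD reconstruction
cect = np.array([6.1, 8.4, 5.2, 12.0, 7.7, 9.3])
uld = np.array([6.6, 9.2, 5.6, 12.9, 8.3, 10.1])
bias, (lo, hi) = bland_altman(cect, uld)
```

A smaller bias and tighter limits (as reported for DLIR-H vs FBP) indicate better agreement of the low-dose measurements with the reference.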

25 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared the effect of two deep learning image reconstruction (DLR) algorithms in chest computed tomography (CT) with different clinical indications and found that DLR algorithms reduce image noise and improve lesion detectability.
Abstract: The purpose of this study was to compare the effect of two deep learning image reconstruction (DLR) algorithms in chest computed tomography (CT) with different clinical indications. Acquisitions on image quality and anthropomorphic phantoms were performed at six dose levels (CTDIvol: 10/7.5/5/2.5/1/0.5 mGy) on two CT scanners equipped with two different DLR algorithms (TrueFidelity and AiCE). Raw data were reconstructed using filtered back-projection (FBP) and the lowest/intermediate/highest DLR levels (L-DLR/M-DLR/H-DLR) of each algorithm. Noise power spectrum (NPS), task-based transfer function (TTF) and detectability index (d') were computed: d' modelled detection of a soft-tissue mediastinal nodule, ground-glass opacity, or high-contrast pulmonary lesion. Subjective image quality of anthropomorphic phantom images was analyzed by two radiologists. For the L-DLR/M-DLR levels, the noise magnitude was lower with TrueFidelity than with AiCE from 2.5 to 10 mGy. For H-DLR, noise magnitude was lower with AiCE. For L-DLR and M-DLR, the average NPS spatial frequency (f_av) values were greater for AiCE except at 0.5 mGy. For H-DLR levels, f_av was greater for TrueFidelity than for AiCE. TTF50% values were greater with AiCE for the air insert, and lower than TrueFidelity for the polyethylene insert. From 2.5 to 10 mGy, d' was greater for AiCE than for TrueFidelity for H-DLR for all lesions, but similar for L-DLR and M-DLR. Image quality was rated clinically appropriate for all levels of both algorithms, for doses from 2.5 to 10 mGy, except for L-DLR of AiCE. DLR algorithms reduce image noise and improve lesion detectability; their operations and properties impacted both noise texture and spatial resolution.
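The detectability index d' in such phantom studies is often computed with a non-prewhitening model observer combining the task function, TTF, and NPS. The radial 1-D sketch below uses synthetic curves; the functional forms and constants are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

def integ(y, x):
    """Trapezoidal integral on a 1-D grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def npw_dprime(f, task, ttf, nps):
    """Non-prewhitening model-observer detectability index (radial 1-D form)."""
    g = (task * ttf) ** 2 * 2 * np.pi * f     # radially weighted signal power
    return np.sqrt(integ(g, f) ** 2 / integ(g * nps, f))

f = np.linspace(0.01, 1.0, 200)               # spatial frequency, cycles/mm
task = 50.0 * np.exp(-(f / 0.3) ** 2)         # low-contrast nodule task function
ttf = np.exp(-(f / 0.6) ** 2)                 # task-based transfer function
nps_fbp = 40.0 * f * np.exp(-(f / 0.5) ** 2)  # ramp-shaped FBP-like NPS
d_fbp = npw_dprime(f, task, ttf, nps_fbp)
d_dlr = npw_dprime(f, task, ttf, 0.5 * nps_fbp)  # DLR: noise power halved
```

With this observer model, halving the noise power raises d' by a factor of √2, which illustrates why noise-reducing DLR levels improve detectability even when the TTF is unchanged.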

24 citations


Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a joint 3D reconstruction model for depth fusion, including a data fitting term and a robust low-rank matrix factorization constraint, and adopted an efficient solution based on the alternating direction method of multipliers.

24 citations


Journal ArticleDOI
01 Apr 2022-Patterns
TL;DR: In this paper, the authors introduce the bounded relative error norm (BREN) property, which is a special case of Lipschitz continuity, and perform a convergence study consisting of two parts: (1) a heuristic analysis of the convergence of the analytic compressed iterative deep (ACID) scheme and (2) a mathematically denser analysis (with two approximations: [1] A^T is viewed as an inverse A^(-1) from the perspective of an iterative reconstruction procedure and [2] a pseudo-inverse is used for the total variation operator H).

21 citations


Journal ArticleDOI
01 Apr 2022-Patterns
TL;DR: In this paper, an analytic compressed iterative deep (ACID) framework is proposed to solve three kinds of instabilities: strong image artefacts from tiny perturbations, small features missed in a deeply reconstructed image, and decreased imaging performance with increased input data.

Journal ArticleDOI
TL;DR: DuDoDR-Net as discussed by the authors proposes a dual-domain data consistent recurrent network for SVMAR, which can reconstruct an artifact-free image by recurrent image domain and sinogram domain restorations.

Journal ArticleDOI
TL;DR: The performance analysis of different 3D face reconstruction techniques has been discussed in terms of software, hardware, pros and cons as discussed by the authors, and the challenges and future scope of 3D face reconstruction methods have also been discussed.
Abstract: 3D face reconstruction is the most captivating topic in biometrics with the advent of deep learning and readily available graphics processing units. This paper explores the various aspects of 3D face reconstruction techniques. Five techniques have been discussed, namely, deep learning, epipolar geometry, one-shot learning, 3D morphable model, and shape from shading methods. This paper provides an in-depth analysis of 3D face reconstruction using deep learning techniques. The performance analysis of different face reconstruction techniques has been discussed in terms of software, hardware, pros and cons. The challenges and future scope of 3D face reconstruction techniques have also been discussed.

Journal ArticleDOI
TL;DR: Recently, deep learning has become the main research frontier for biological image reconstruction and enhancement problems thanks to its high performance and ultrafast inference times as discussed by the authors. However, due to the difficulty of obtaining matched reference data for supervised learning, there has been increasing interest in unsupervised learning approaches that do not need paired reference data.
Abstract: Recently, deep learning (DL) approaches have become the main research frontier for biological image reconstruction and enhancement problems thanks to their high performance and ultrafast inference times. However, due to the difficulty of obtaining matched reference data for supervised learning, there has been increasing interest in unsupervised learning approaches that do not need paired reference data. In particular, self-supervised learning and generative models have been successfully used for various biological imaging applications. In this article, we provide an overview of these approaches from a coherent perspective in the context of classical inverse problems and discuss their applications to biological imaging, including electron, fluorescence, deconvolution microscopy, optical diffraction tomography (ODT), and functional neuroimaging.

Journal ArticleDOI
TL;DR: DIOR as mentioned in this paper combines iterative optimization and deep learning based on the residual domain, significantly improving the convergence property and generalization ability for limited-angle CT image reconstruction.
Abstract: Limited-angle CT is a challenging problem in real applications. Incomplete projection data will lead to severe artifacts and distortions in reconstruction images. To tackle this problem, we propose a novel reconstruction framework termed Deep Iterative Optimization-based Residual-learning (DIOR) for limited-angle CT. Instead of directly deploying the regularization term on image space, the DIOR combines iterative optimization and deep learning based on the residual domain, significantly improving the convergence property and generalization ability. Specifically, the asymmetric convolutional modules are adopted to strengthen the feature extraction capacity in smooth regions for deep priors. Besides, in our DIOR method, the information contained in low-frequency and high-frequency components is also evaluated by perceptual loss to improve the performance in tissue preservation. Both simulated and clinical datasets are performed to validate the performance of DIOR. Compared with existing competitive algorithms, quantitative and qualitative results show that the proposed method brings a promising improvement in artifact removal, detail restoration and edge preservation.

Journal ArticleDOI
TL;DR: This paper proposes a novel hybrid network module, namely CCoT (Convolution and Contextual Transformer) block, which can acquire the inductive bias ability of convolution and the powerful modeling ability of transformer simultaneously, and is conducive to improving the quality of reconstruction to restore details.
Abstract: Spectral compressive imaging (SCI) is able to encode the high-dimensional hyperspectral image to a 2D measurement, and then uses algorithms to reconstruct the spatio-spectral data-cube. At present, the main bottleneck of SCI is the reconstruction algorithm, and the state-of-the-art (SOTA) reconstruction methods generally face the problem of long reconstruction time and/or poor detail recovery. In this paper, we propose a novel hybrid network module, namely CCoT (Convolution and Contextual Transformer) block, which can acquire the inductive bias ability of convolution and the powerful modeling ability of transformer simultaneously, and is conducive to improving the quality of reconstruction to restore fine details. We integrate the proposed CCoT block into a deep unfolding framework based on the generalized alternating projection algorithm, and further propose the GAP-CCoT network. Through experiments on extensive synthetic and real data, our proposed model achieves higher reconstruction quality (>2 dB in PSNR on simulated benchmark datasets) and shorter running time than existing SOTA algorithms by a large margin. The code and models are publicly available at https://github.com/ucaswangls/GAP-CCoT.
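The generalized alternating projection (GAP) framework that GAP-CCoT unfolds alternates a projection onto the data-consistency set with a prior-enforcing step. Below is a toy numpy sketch on a 1-D sparse-recovery problem, with soft-thresholding standing in for the learned CCoT denoiser; the problem sizes, seed, and threshold are illustrative:

```python
import numpy as np

def gap_reconstruct(y, Phi, denoise, iters=200):
    """GAP sketch: x <- D(x + Phi^T (Phi Phi^T)^-1 (y - Phi x))."""
    PPt = Phi @ Phi.T
    x = Phi.T @ np.linalg.solve(PPt, y)        # start from the least-norm solution
    for _ in range(iters):
        # project onto the affine set {x : Phi x = y} ...
        x = x + Phi.T @ np.linalg.solve(PPt, y - Phi @ x)
        # ... then enforce the prior (here: sparsity via soft-thresholding)
        x = denoise(x)
    return x

rng = np.random.default_rng(1)
n, m = 64, 32
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.7, 0.5]         # sparse ground truth
Phi = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix
y = Phi @ x_true

soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.02, 0.0)
x_hat = gap_reconstruct(y, Phi, soft, iters=200)
```

In GAP-CCoT the hand-crafted `soft` step is replaced by the learned CCoT network, one stage per unfolded iteration.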

Journal ArticleDOI
TL;DR: In this paper, an efficient non-Cartesian unrolled neural network-based reconstruction and an accurate approximation for backpropagation through the non-uniform fast Fourier transform (NUFFT) operator are used to accurately reconstruct and backpropagate multi-coil non-Cartesian data.
Abstract: Optimizing k-space sampling trajectories is a promising yet challenging topic for fast magnetic resonance imaging (MRI). This work proposes to optimize a reconstruction method and sampling trajectories jointly concerning image reconstruction quality in a supervised learning manner. We parameterize trajectories with quadratic B-spline kernels to reduce the number of parameters and apply multi-scale optimization, which may help to avoid sub-optimal local minima. The algorithm includes an efficient non-Cartesian unrolled neural network-based reconstruction and an accurate approximation for backpropagation through the non-uniform fast Fourier transform (NUFFT) operator to accurately reconstruct and back-propagate multi-coil non-Cartesian data. Penalties on slew rate and gradient amplitude enforce hardware constraints. Sampling and reconstruction are trained jointly using large public datasets. To correct for possible eddy-current effects introduced by the curved trajectory, we use a pencil-beam trajectory mapping technique. In both simulations and in vivo experiments, the learned trajectory demonstrates significantly improved image quality compared to previous model-based and learning-based trajectory optimization methods for 10× acceleration factors. Though trained with neural network-based reconstruction, the proposed trajectory also leads to improved image quality with compressed sensing-based reconstruction.


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper introduced a deep learning framework, termed Fourier Imager Network (FIN), that can perform end-to-end phase recovery and image reconstruction from raw holograms of new types of samples, exhibiting unprecedented success in external generalization.
Abstract: Deep learning-based image reconstruction methods have achieved remarkable success in phase recovery and holographic imaging. However, the generalization of their image reconstruction performance to new types of samples never seen by the network remains a challenge. Here we introduce a deep learning framework, termed Fourier Imager Network (FIN), that can perform end-to-end phase recovery and image reconstruction from raw holograms of new types of samples, exhibiting unprecedented success in external generalization. FIN architecture is based on spatial Fourier transform modules that process the spatial frequencies of its inputs using learnable filters and a global receptive field. Compared with existing convolutional deep neural networks used for hologram reconstruction, FIN exhibits superior generalization to new types of samples, while also being much faster in its image inference speed, completing the hologram reconstruction task in ~0.04 s per 1 mm² of the sample area. We experimentally validated the performance of FIN by training it using human lung tissue samples and blindly testing it on human prostate, salivary gland tissue and Pap smear samples, proving its superior external generalization and image reconstruction speed. Beyond holographic microscopy and quantitative phase imaging, FIN and the underlying neural network architecture might open up various new opportunities to design broadly generalizable deep learning models in computational imaging and machine vision fields.
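The core of a spatial-Fourier-transform module of this kind is: FFT, multiply by a learnable filter (giving a global receptive field in one step), inverse FFT, pointwise nonlinearity. The numpy sketch below uses a fixed Gaussian low-pass as a stand-in for a learned filter; it illustrates the mechanism only and is not the FIN architecture:

```python
import numpy as np

def spectral_filter_layer(x, filt):
    """One Fourier-module forward pass (sketch).

    x    : (H, W) real input field
    filt : (H, W) complex filter ('learnable' in FIN; fixed here)
    """
    X = np.fft.fft2(x)
    y = np.fft.ifft2(X * filt).real   # global receptive field in one step
    return np.maximum(y, 0.0)         # pointwise nonlinearity

H = W = 32
fy = np.fft.fftfreq(H)[:, None]
fx = np.fft.fftfreq(W)[None, :]
lowpass = np.exp(-(fy**2 + fx**2) / (2 * 0.1**2)).astype(complex)

x = np.random.default_rng(0).normal(size=(H, W))
out = spectral_filter_layer(x, lowpass)
```

Because the filter multiplies every spatial frequency at once, a single layer couples all pixels, unlike a small convolution kernel; this is one plausible source of the reported generalization and speed.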

Proceedings ArticleDOI
01 Jun 2022
TL;DR: Cai et al. as mentioned in this paper proposed a multi-stage Spectral-wise Transformer (MST++), which employs Spectralwise Multi-head Self-attention (S-MSA) that is based on the hyperspectral image (HSI) spatially sparse while spectrally self-similar nature to compose the basic unit.
Abstract: Existing leading methods for spectral reconstruction (SR) focus on designing deeper or wider convolutional neural networks (CNNs) to learn the end-to-end mapping from the RGB image to its hyperspectral image (HSI). These CNN-based methods achieve impressive restoration performance while showing limitations in capturing the long-range dependencies and self-similarity prior. To cope with this problem, we propose a novel Transformer-based method, Multi-stage Spectral-wise Transformer (MST++), for efficient spectral reconstruction. In particular, we employ Spectral-wise Multi-head Self-attention (S-MSA) that is based on the HSI spatially sparse while spectrally self-similar nature to compose the basic unit, Spectral-wise Attention Block (SAB). Then SABs build up Single-stage Spectral-wise Transformer (SST) that exploits a U-shaped structure to extract multi-resolution contextual information. Finally, our MST++, cascaded by several SSTs, progressively improves the reconstruction quality from coarse to fine. Comprehensive experiments show that our MST++ significantly outperforms other state-of-the-art methods. In the NTIRE 2022 Spectral Reconstruction Challenge, our approach won the First place. Code and pre-trained models are publicly available at https://github.com/caiyuanhao1998/MST-plus-plus.
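Spectral-wise self-attention differs from standard spatial attention in that the attention matrix is C × C, computed between spectral channels of the flattened feature map rather than between spatial positions. A single-head numpy sketch follows; the random weights are placeholders and the normalization is simplified relative to S-MSA:

```python
import numpy as np

def spectral_attention(x, Wq, Wk, Wv):
    """Spectral-wise self-attention (sketch): attention is channel-to-channel."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # x: (H*W, C) flattened features
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    k = k / (np.linalg.norm(k, axis=0, keepdims=True) + 1e-8)
    attn = q.T @ k                              # (C, C) spectral similarity
    attn = np.exp(attn) / np.exp(attn).sum(-1, keepdims=True)  # row softmax
    return v @ attn.T                           # reweight spectral channels

rng = np.random.default_rng(0)
hw, c = 16 * 16, 8                              # a 16x16 patch with 8 bands
x = rng.normal(size=(hw, c))
Wq, Wk, Wv = (rng.normal(size=(c, c)) / np.sqrt(c) for _ in range(3))
y = spectral_attention(x, Wq, Wk, Wv)
```

Since the attention cost scales with C² rather than (H·W)², this design suits HSI data, which are spatially sparse but strongly self-similar across bands.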

Journal ArticleDOI
20 Mar 2022-Optica
TL;DR: A reconstruction algorithm for MRI-guided NIRST based on deep learning is proposed and validated by simulation and real patient imaging data for breast cancer characterization and shows that the well-trained neural network with only simulation data sets can be directly used for differentiating malignant from benign breast tumors.
Abstract: Non-invasive near-infrared spectral tomography (NIRST) can incorporate the structural information provided by simultaneous magnetic resonance imaging (MRI), and this has significantly improved the images obtained of tissue function. However, the process of MRI guidance in NIRST has been time consuming because of the needs for tissue-type segmentation and forward diffuse modeling of light propagation. To overcome these problems, a reconstruction algorithm for MRI-guided NIRST based on deep learning is proposed and validated by simulation and real patient imaging data for breast cancer characterization. In this approach, diffused optical signals and MRI images were both used as the input to the neural network, and simultaneously recovered the concentrations of oxy-hemoglobin, deoxy-hemoglobin, and water via end-to-end training by using 20,000 sets of computer-generated simulation phantoms. The simulation phantom studies showed that the quality of the reconstructed images was improved, compared to that obtained by other existing reconstruction methods. Reconstructed patient images show that the well-trained neural network with only simulation data sets can be directly used for differentiating malignant from benign breast tumors.

Journal ArticleDOI
TL;DR: Part-Wise AtlasNet as discussed by the authors proposes to add reconstruction constraints to the local structures of 3D objects, which facilitates imposition of several local constraints on the final reconstruction loss, hence better recovering 3D objects with finer local structures.
Abstract: Learning to generate three dimensional (3D) point clouds from a single image remains a challenging task. Numerous approaches with encoder–decoder architectures have been proposed. However, these methods are hard to realize structured reconstructions and usually lack constraints on the local structures of 3D objects. AtlasNet as a representative model of 3D reconstruction consists of many branches, and each branch is a neural network used to reconstruct one local patch of a 3D object. However, the neural networks in AtlasNet and the patches of 3D objects are not in one-to-one correspondence before training. This case is not conducive to adding some reconstruction constraints to the local structures of 3D objects. Based on the architecture of AtlasNet, we propose Part-Wise AtlasNet in which each neural network is only responsible for reconstructing one specific part of a 3D object. This kind of restriction facilitates imposition of several local constraints on the final reconstruction loss, hence better recovering 3D objects with finer local structures. Both the qualitative results and quantitative analysis show that the variants of the proposed method with the local reconstruction losses generate structured point clouds with a higher visual quality and achieve better performance than other methods in 3D point cloud generation from a single image.
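A per-part reconstruction constraint of the kind described can be expressed as a sum of Chamfer distances between each predicted patch and its corresponding ground-truth part. A minimal numpy sketch (the part split and the perturbation used for the example are illustrative):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p: (N,3), q: (M,3)."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    return d2.min(1).mean() + d2.min(0).mean()

def part_wise_loss(pred_parts, gt_parts):
    """Sum of per-part Chamfer losses, one term per labelled part."""
    return sum(chamfer_distance(p, g) for p, g in zip(pred_parts, gt_parts))

rng = np.random.default_rng(0)
gt = rng.normal(size=(100, 3))                  # stand-in ground-truth cloud
parts_gt = [gt[:50], gt[50:]]                   # two "parts" of the object
parts_pred = [gt[:50] + 0.01, gt[50:]]          # slightly off prediction
loss = part_wise_loss(parts_pred, parts_gt)
```

Fixing the branch-to-part correspondence before training is what makes such per-part terms well defined, in contrast to the unassigned branches of the original AtlasNet.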

Proceedings ArticleDOI
01 Jun 2022
TL;DR: In this article, the authors propose to represent the surface using an implicit function (truncated signed distance function) instead of a volumetric representation of the surface, and extend it to use depth measurements from a commodity RGB-D sensor, such as a Kinect.
Abstract: Obtaining high-quality 3D reconstructions of room-scale scenes is of paramount importance for upcoming applications in AR or VR. These range from mixed reality applications for teleconferencing, virtual measuring, virtual room planning, to robotic applications. While current volume-based view synthesis methods that use neural radiance fields (NeRFs) show promising results in reproducing the appearance of an object or scene, they do not reconstruct an actual surface. The volumetric representation of the surface based on densities leads to artifacts when a surface is extracted using Marching Cubes, since during optimization, densities are accumulated along the ray and are not used at a single sample point in isolation. Instead of this volumetric representation of the surface, we propose to represent the surface using an implicit function (truncated signed distance function). We show how to incorporate this representation in the NeRF framework, and extend it to use depth measurements from a commodity RGB-D sensor, such as a Kinect. In addition, we propose a pose and camera refinement technique which improves the overall reconstruction quality. In contrast to concurrent work on integrating depth priors in NeRF which concentrates on novel view synthesis, our approach is able to reconstruct high-quality, metrical 3D reconstructions.
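The surface representation can be illustrated with a toy one-ray example in NumPy: the SDF along the ray is truncated, converted into densities that peak at the zero crossing, and alpha-composited into a depth value that an RGB-D measurement could supervise. The exponential-bump density and all constants are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def tsdf(sdf, trunc=0.05):
    """Truncated signed distance: clamp the SDF to [-trunc, trunc]."""
    return np.clip(sdf, -trunc, trunc)

def render_depth(t, sdf, beta=0.005):
    """Toy volume rendering along one ray: turn an SDF into densities that
    peak at the zero crossing, then alpha-composite depth as in NeRF."""
    sigma = (1.0 / beta) * np.exp(-np.abs(sdf) / beta)   # peaks at surface
    dt = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
    alpha = 1.0 - np.exp(-sigma * dt)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    w = trans * alpha
    return (w * t).sum() / (w.sum() + 1e-9)

t = np.linspace(0.0, 2.0, 256)     # sample positions along the ray
surface = 1.3                      # ground-truth ray/surface intersection
sdf = surface - t                  # signed distance along the ray
depth = render_depth(t, tsdf(sdf))
# A depth measurement from an RGB-D sensor would supervise `depth` directly:
depth_loss = (depth - surface) ** 2
```

Unlike raw NeRF densities, the SDF has a well-defined zero level set, so Marching Cubes extracts a clean surface from the same field that is volume-rendered here.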

Journal ArticleDOI
TL;DR: In this paper, the authors compared image quality of deep learning reconstruction (AiCE) for radiomics feature extraction with filtered back projection (FBP), hybrid iterative reconstruction (AIDR 3D), and model-based iterative reconstruction (FIRST), and found that AiCE was the only reconstruction technique that enabled extraction of higher-order features.
Abstract: To compare image quality of deep learning reconstruction (AiCE) for radiomics feature extraction with filtered back projection (FBP), hybrid iterative reconstruction (AIDR 3D), and model-based iterative reconstruction (FIRST). Effects of image reconstruction on radiomics features were investigated using a phantom that realistically mimicked a 65-year-old patient's abdomen with hepatic metastases. The phantom was scanned at 18 doses from 0.2 to 4 mGy, with 20 repeated scans per dose. Images were reconstructed with FBP, AIDR 3D, FIRST, and AiCE. Ninety-three radiomics features were extracted from 24 regions of interest, which were evenly distributed across three tissue classes: normal liver, metastatic core, and metastatic rim. Features were analyzed in terms of their consistent characterization of tissues within the same image (intraclass correlation coefficient ≥ 0.75), discriminative power (Kruskal-Wallis test p value < 0.05), and repeatability (overall concordance correlation coefficient ≥ 0.75). The median fraction of consistent features across all doses was 6%, 8%, 6%, and 22% with FBP, AIDR 3D, FIRST, and AiCE, respectively. Adequate discriminative power was achieved by 48%, 82%, 84%, and 92% of features, and 52%, 20%, 17%, and 39% of features were repeatable, respectively. Only 5% of features combined consistency, discriminative power, and repeatability with FBP, AIDR 3D, and FIRST versus 13% with AiCE at doses above 1 mGy and 17% at doses ≥ 3 mGy. AiCE was the only reconstruction technique that enabled extraction of higher-order features. AiCE more than doubled the yield of radiomics features at doses typically used clinically.
• Inconsistent tissue characterization within CT images contributes significantly to the poor stability of radiomics features.
• Image quality of CT images reconstructed with filtered back projection and iterative methods is inadequate for the majority of radiomics features due to inconsistent tissue characterization, low discriminative power, or low repeatability.
• Deep learning reconstruction enhances image quality for radiomics and more than doubled the feature yield at doses that are typically used in clinical CT imaging.
• Image reconstruction algorithms can optimize image quality for more reliable quantification of tissues in CT images.
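The three selection criteria quoted above (consistency, discriminative power, repeatability) can be sketched for a single feature with NumPy/SciPy; the ROI values below are synthetic stand-ins, not data from the phantom study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two repeated
    measurements of the same feature (repeatability criterion)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical values of one feature for 8 ROIs per tissue class,
# plus a rescan of the liver ROIs.
liver = rng.normal(50, 2, 8)
core = rng.normal(70, 2, 8)
rim = rng.normal(60, 2, 8)
repeat = liver + rng.normal(0, 0.5, 8)

# Discriminative power: Kruskal-Wallis test across the three classes.
h, p = stats.kruskal(liver, core, rim)
discriminative = p < 0.05

# Repeatability: concordance between scan and rescan.
repeatable = concordance_ccc(liver, repeat) >= 0.75
```

A feature would additionally need within-image consistency (ICC ≥ 0.75 across ROIs of the same class) to count toward the combined yields reported above.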

Journal ArticleDOI
TL;DR: DLIR improves vessel conspicuity, CNR, and lesion conspicuity of virtual monochromatic and iodine density images in abdominal contrast-enhanced DECT, compared to hybrid IR.

Journal ArticleDOI
TL;DR: In this paper, an implicit neural representation learning with prior embedding (NeRP) method is proposed to reconstruct a computational image from sparsely sampled measurements; it exploits the internal information in an image prior and the physics of the sparsely sampled measurements to produce a representation of the unknown subject.
Abstract: Image reconstruction is an inverse problem that solves for a computational image based on sampled sensor measurements. Sparsely sampled image reconstruction poses additional challenges due to limited measurements. In this work, we propose a methodology of implicit Neural Representation learning with Prior embedding (NeRP) to reconstruct a computational image from sparsely sampled measurements. The method differs fundamentally from previous deep learning-based image reconstruction approaches in that NeRP exploits the internal information in an image prior and the physics of the sparsely sampled measurements to produce a representation of the unknown subject. No large-scale data are required to train NeRP except for a prior image and sparsely sampled measurements. In addition, we demonstrate that NeRP is a general methodology that generalizes to different imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). We also show that NeRP can robustly capture the subtle yet significant image changes required for assessing tumor progression.
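The two-stage idea, prior embedding followed by fine-tuning on sparse measurements, can be sketched in 1-D with NumPy, using Fourier features as a stand-in for the coordinate network. Everything here (sizes, the bump "images", direct point samples in place of a CT/MRI physics model) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coordinate encoding: Fourier features of 1-D positions.
t = np.linspace(0.0, 1.0, 200)
def features(t, K=8):
    ks = np.arange(1, K + 1)
    return np.concatenate([np.sin(2 * np.pi * np.outer(t, ks)),
                           np.cos(2 * np.pi * np.outer(t, ks))], axis=1)

Phi = features(t)

def bump(c):
    return np.exp(-0.5 * ((t - c) / 0.05) ** 2)

prior_img = bump(0.40)    # prior image of the subject
target_img = bump(0.45)   # new (unknown) image, slightly changed

# Step 1 - prior embedding: fit the representation to the prior image.
w, *_ = np.linalg.lstsq(Phi, prior_img, rcond=None)

# Step 2 - sparse measurements of the new image (a stand-in for the
# sampled sensor data and its forward physics).
idx = rng.choice(len(t), size=12, replace=False)
Phi_s, y = Phi[idx], target_img[idx]

def meas_loss(w):
    r = Phi_s @ w - y
    return (r * r).mean()

loss_before = meas_loss(w)
for _ in range(500):      # fine-tune starting from the prior embedding
    w = w - 0.05 * (2 / len(y)) * Phi_s.T @ (Phi_s @ w - y)
loss_after = meas_loss(w)
recon = Phi @ w
```

The prior embedding acts as the regularizer: with only 12 measurements the problem is underdetermined, and initializing from the prior keeps the solution near the known anatomy while the data term pulls it toward the change.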

Journal ArticleDOI
TL;DR: The NC-PDNet (Non-Cartesian Primal Dual Network) as discussed by the authors is the first density-compensated (DCp) unrolled neural network; the authors validate the need for its key components via an ablation study.
Abstract: Deep learning has become a very promising avenue for magnetic resonance image (MRI) reconstruction. In this work, we explore the potential of unrolled networks for non-Cartesian acquisition settings. We design the NC-PDNet (Non-Cartesian Primal Dual Network), the first density-compensated (DCp) unrolled neural network, and validate the need for its key components via an ablation study. Moreover, we conduct generalizability experiments to test this network in out-of-distribution settings, for example training on knee data and validating on brain data. The results show that NC-PDNet outperforms baseline (U-Net, Deep image prior) models both visually and quantitatively in all settings. In particular, in the 2D multi-coil acquisition scenario, NC-PDNet provides up to a 1.2 dB improvement in peak signal-to-noise ratio (PSNR) over baseline networks, while also allowing a gain of at least 1 dB in PSNR in generalization settings. We provide the open-source implementation of NC-PDNet, and in particular the non-uniform Fourier transform in TensorFlow, tested on 2D multi-coil and 3D single-coil k-space data.
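The density-compensation ingredient itself is easy to demonstrate in 1-D NumPy: for non-Cartesian samples clustered near the k-space centre, weighting the adjoint NDFT by the local sample spacing markedly improves the zero-filled reconstruction. This sketch shows only the DCp step, not the unrolled primal-dual network, and all sampling choices are illustrative.

```python
import numpy as np

N, M = 64, 256
x_grid = np.arange(N) / N
# Non-uniform k-space trajectory, clustered near the centre (m**3 mapping).
m = np.linspace(-1, 1, M)
k = (N // 2) * m**3
A = np.exp(-2j * np.pi * np.outer(k, x_grid))         # NDFT forward operator

signal = np.exp(-0.5 * ((x_grid - 0.5) / 0.03) ** 2)  # narrow bump
y = A @ signal                                        # k-space measurements

# Density compensation: weight each sample by its local k-space spacing.
order = np.argsort(k)
spacing = np.gradient(k[order])
d = np.empty(M)
d[order] = spacing

def fit_scale(est, ref):
    """Best real-valued scalar fit, so only shape differences are compared."""
    s = np.vdot(est, ref) / np.vdot(est, est)
    return (s * est).real

plain = fit_scale(A.conj().T @ y, signal)             # plain adjoint
dcp = fit_scale(A.conj().T @ (d * y), signal)         # DCp adjoint
err_plain = np.linalg.norm(plain - signal)
err_dcp = np.linalg.norm(dcp - signal)
```

In NC-PDNet this compensated adjoint supplies a much better network input and data-consistency gradient than the raw adjoint, which is the point validated by the ablation study.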

Journal ArticleDOI
TL;DR: Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are reviewed in this paper, where the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
Abstract: Image processing plays a crucial role in maximising diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is presented first. We then review methods which integrate deep learning into the image reconstruction framework as either deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
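As a point of reference for the conventional reconstruction that the review contrasts with, the classical MLEM update for PET can be written in a few lines of NumPy; the system matrix here is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pix = 64, 16
A = rng.random((n_meas, n_pix))   # toy system (detection-probability) matrix
x_true = rng.random(n_pix) + 0.1  # strictly positive activity image
y = A @ x_true                    # noiseless sinogram

# MLEM update: x <- x / (A^T 1) * A^T (y / (A x))
sens = A.sum(axis=0)              # sensitivity image, A^T 1
x = np.ones(n_pix)                # uniform initial estimate
err0 = np.linalg.norm(x - x_true)
for _ in range(200):
    x = x / sens * (A.T @ (y / (A @ x)))
err = np.linalg.norm(x - x_true)
```

Deep learning-based regularisation, one of the two families the review covers, typically keeps a multiplicative or gradient update of this form and inserts a learned network as the prior term.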

Journal ArticleDOI
TL;DR: In this article, a range migration kernel-based iterative shrinkage thresholding network (RMIST-Net) is proposed for near-field 3D millimeter-wave (mmW) sparse imaging.
Abstract: Compressed sensing (CS) demonstrates significant potential to improve image quality in 3-D millimeter-wave imaging compared with conventional matched filtering (MF). However, existing sparsity-driven 3-D imaging algorithms always suffer from large-scale storage, excessive computational cost, and nontrivial tuning of parameters due to the high-dimensional matrix–vector multiplication in complicated iterative optimization steps. In this article, we present a novel range migration (RM) kernel-based iterative shrinkage thresholding network, dubbed RMIST-Net, which combines the traditional model-based CS method and the data-driven deep learning method for near-field 3-D millimeter-wave (mmW) sparse imaging. First, the measurement matrices in the ISTA optimization steps are replaced by RM kernels, by which matrix–vector multiplication is converted to the Hadamard product. Then, the modified ISTA optimization is unrolled into a deep hierarchical architecture, in which all parameters are learned automatically instead of manually tuned. Subsequently, 1000 pairs of oracle images with randomly distributed targets and their corresponding echoes are simulated to train the network. A well-trained RMIST-Net produces high-quality 3-D images from range-focused echoes. Finally, we experimentally show that RMIST-Net is capable of processing $512 \times 512$ large-scale imaging tasks within 1 s. In addition, we compare RMIST-Net with other state-of-the-art methods in near-field 3-D imaging applications. Both simulations and real-measured experiments demonstrate that RMIST-Net delivers impressive reconstruction performance while maintaining high computational speed compared with conventional and sparse imaging algorithms.
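The kernel trick at the heart of the method, replacing the measurement matrix by a frequency-domain Hadamard product inside ISTA, can be sketched in 1-D NumPy. The unit-modulus kernel, the sizes, and the fixed step/threshold are illustrative assumptions; RMIST-Net would use the range-migration kernel and learn these parameters per unrolled layer.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

def soft(x, tau):
    """Complex soft-thresholding, the proximal step of ISTA."""
    mag = np.abs(x)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * x, 0)

# Kernel-based forward model: instead of storing a huge measurement
# matrix, the operator is a Hadamard product in the frequency domain
# (a stand-in for the RM kernel), applied with FFTs.
H = np.exp(2j * np.pi * rng.random(n))        # unit-modulus transfer kernel

def forward(x):
    return np.fft.ifft(H * np.fft.fft(x))

def adjoint(y):
    return np.fft.ifft(np.conj(H) * np.fft.fft(y))

# Sparse scene (point scatterers) and its simulated echo.
x_true = np.zeros(n, dtype=complex)
x_true[[20, 64, 100]] = [1.0, -0.7, 0.5j]
y = forward(x_true)

# Fixed-parameter ISTA; RMIST-Net unrolls these iterations into network
# layers and learns the step size and threshold instead of hand-tuning.
x, step, tau = np.zeros(n, dtype=complex), 1.0, 0.02
for _ in range(50):
    x = soft(x + step * adjoint(y - forward(x)), tau)
```

Every operator application costs two FFTs and an elementwise product instead of a dense matrix-vector multiply, which is exactly what makes the $512 \times 512$ timing claim plausible.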