
Showing papers on "Image quality published in 2022"


Journal ArticleDOI
TL;DR: In this article, the technical performance of a dual-source photon-counting detector (PCD) CT system was evaluated with use of phantoms and representative participant examinations.
Abstract: Background The first clinical CT system to use photon-counting detector (PCD) technology has become available for patient care. Purpose To assess the technical performance of the PCD CT system with use of phantoms and representative participant examinations. Materials and Methods Institutional review board approval and written informed consent from four participants were obtained. Technical performance of a dual-source PCD CT system was measured for standard and high-spatial-resolution (HR) collimations. Noise power spectrum, modulation transfer function, section sensitivity profile, iodine CT number accuracy in virtual monoenergetic images (VMIs), and iodine concentration accuracy were measured. Four participants were enrolled (between May 2021 and August 2021) in this prospective study and scanned at radiation doses similar to or lower than those of their respective clinical examinations performed on the same day with energy-integrating detector (EID) CT. Image quality and findings from the participants' PCD CT and EID CT examinations were compared. Results All standard technical performance measures met accreditation and regulatory requirements. Relative to filtered back-projection reconstructions, images from iterative reconstruction had lower noise magnitude but preserved noise power spectrum shape and peak frequency. Maximum in-plane spatial resolutions of 125 and 208 µm were measured for HR and standard PCD CT scans, respectively. Minimum values for section sensitivity profile full width at half maximum measurements were 0.34 mm (0.2-mm nominal section thickness) and 0.64 mm (0.4-mm nominal section thickness) for HR and standard PCD CT scans, respectively. In a 120-kV standard PCD CT scan of a 40-cm phantom, VMI iodine CT numbers had a mean percentage error of 5.7%, and iodine concentration had root mean squared error of 0.5 mg/cm3, similar to previously reported values for EID CT.
VMIs, iodine maps, and virtual noncontrast images were created for a coronary CT angiogram acquired with 66-msec temporal resolution. Participant PCD CT images showed up to 47% lower noise and/or improved spatial resolution compared with EID CT. Conclusion Technical performance of clinical photon-counting detector (PCD) CT is improved relative to that of a current state-of-the-art CT system. The dual-source PCD geometry facilitated 66-msec temporal resolution multienergy cardiac imaging. Study participant images illustrated the effect of the improved technical performance. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Willemink and Grist in this issue.

120 citations


Journal ArticleDOI
Junqing Zhu, Jingtao Zhong, Tao Ma, Xiaoming Huang, Weiguang Zhang, Yang Zhou
TL;DR: In this paper, a UAV platform for pavement image collection was assembled, the flight settings were studied for optimal image quality, and the collected images were processed and annotated for model training.

66 citations


Journal ArticleDOI
TL;DR: The IntOPMICM technique is introduced, a new image compression scheme that combines GenPSO and VQ and produces higher PSNR and SSIM values for a given compression ratio than existing methods, according to experimental data.
Abstract: Due to the increasing number of medical imaging images being utilized for the diagnosis and treatment of diseases, lossy or improper image compression has become more prevalent in recent years. The compression ratio and image quality, which are commonly quantified by PSNR values, are used to evaluate the performance of the lossy compression algorithm. This article introduces the IntOPMICM technique, a new image compression scheme that combines GenPSO and VQ. A combination of fragments and genetic algorithms was used to create the codebook. PSNR, MSE, SSIM, NMSE, SNR, and CR indicators were used to test the suggested technique using real-time medical imaging. The suggested IntOPMICM approach produces higher PSNR and SSIM values for a given compression ratio than existing methods, according to experimental data. Furthermore, for a given compression ratio, the suggested IntOPMICM approach produces lower MSE, RMSE, and SNR values than existing methods.
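The figures of merit named in this abstract are simple to compute. The sketch below is a minimal NumPy illustration of MSE, PSNR, and a simplified single-window SSIM (global image statistics rather than the usual sliding window); it is not the IntOPMICM implementation.

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between two images (lower is better)."""
    return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB (higher means less distortion)."""
    err = mse(ref, img)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)

def ssim_global(ref, img, max_val=255.0):
    """Simplified SSIM computed from global statistics (no sliding window)."""
    x, y = ref.astype(np.float64), img.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Synthetic 8-bit image plus mild Gaussian noise as a stand-in for compression loss
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
```

Full-reference SSIM as published uses an 11x11 Gaussian-weighted sliding window; the global form above keeps only the luminance/contrast/structure comparison that gives the metric its shape.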

65 citations


Journal ArticleDOI
TL;DR: In portal venous abdominal photon-counting detector CT, an iterative reconstruction algorithm (QIR; Siemens Healthcare) at high strength levels improved image quality by reducing noise and improving contrast-to-noise ratio and lesion conspicuity without compromising image texture or CT attenuation values.
Abstract: Background An iterative reconstruction (IR) algorithm was introduced for clinical photon-counting detector (PCD) CT. Purpose To investigate the image quality and the optimal strength level of a quantum IR algorithm (QIR; Siemens Healthcare) for virtual monoenergetic images and polychromatic images (T3D) in a phantom and in patients undergoing portal venous abdominal PCD CT. Materials and Methods In this retrospective study, noise power spectrum (NPS) was measured in a water-filled phantom. Consecutive oncologic patients who underwent portal venous abdominal PCD CT between March and April 2021 were included. Virtual monoenergetic images at 60 keV and T3D were reconstructed without QIR (QIR-off; reference standard) and with QIR at four levels (QIR 1-4; index tests). Global noise index, contrast-to-noise ratio (CNR), and voxel-wise CT attenuation differences were measured. Noise and texture, artifacts, diagnostic confidence, and overall quality were assessed qualitatively. Conspicuity of hypodense liver lesions was rated by four readers. Parametric (analyses of variance, paired t tests) and nonparametric tests (Friedman, post hoc Wilcoxon signed-rank tests) were used to compare quantitative and qualitative image quality among reconstructions. Results In the phantom, NPS showed unchanged noise texture across reconstructions with maximum spatial frequency differences of 0.01 per millimeter. Fifty patients (mean age, 59 years ± 16 [standard deviation]; 31 women) were included. Global noise index was reduced from QIR-off to QIR-4 by 45% for 60 keV and by 44% for T3D (both, P < .001). CNR of the liver improved from QIR-off to QIR-4 by 74% for 60 keV and by 69% for T3D (both, P < .001). No evidence of difference was found in mean attenuation of fat and liver (P = .79-.84) and on a voxel-wise basis among reconstructions. Qualitatively, QIR-4 outperformed all reconstructions in every category for 60 keV and T3D (P value range, <.001 to .01). 
All four readers rated QIR-4 superior to other strengths for lesion conspicuity (P value range, <.001 to .04). Conclusion In portal venous abdominal photon-counting detector CT, an iterative reconstruction algorithm (QIR; Siemens Healthcare) at high strength levels improved image quality by reducing noise and improving contrast-to-noise ratio and lesion conspicuity without compromising image texture or CT attenuation values. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Sinitsyn in this issue.
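The contrast-to-noise ratio reported above is, at heart, a region-of-interest statistic. A minimal NumPy sketch of one common formulation follows; the study's exact ROI protocol is not specified here, so the ROI placement and the synthetic "liver" image are hypothetical.

```python
import numpy as np

def roi_stats(image, center, radius):
    """Mean and SD of pixel values inside a circular region of interest."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    vals = image[mask]
    return vals.mean(), vals.std()

def cnr(image, roi_a, roi_b, radius=10):
    """CNR = |mean_A - mean_B| / SD_B, using ROI B as the noise reference."""
    mean_a, _ = roi_stats(image, roi_a, radius)
    mean_b, sd_b = roi_stats(image, roi_b, radius)
    return abs(mean_a - mean_b) / sd_b

# Synthetic CT-like slice: 50 HU background with 10 HU noise, a 100 HU disk
rng = np.random.default_rng(1)
img = rng.normal(50, 10, (128, 128))
yy, xx = np.ogrid[:128, :128]
img[(yy - 40) ** 2 + (xx - 40) ** 2 <= 15 ** 2] += 50.0
```

With a 50 HU contrast step over 10 HU noise, the CNR lands near 5; noise-reducing reconstruction raises CNR by shrinking the denominator while leaving the attenuation difference intact, which is the mechanism behind the improvements reported above.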

49 citations


Journal ArticleDOI
TL;DR: This survey reviews current popular deep learning technology for underwater image enhancement, also covering underwater video enhancement, and discusses possible future developments in this area.
Abstract: Underwater video images, as the primary carriers of underwater information, play a vital role in human exploration and development of the ocean. Due to the optical characteristics of water bodies, underwater video images generally have problems such as color bias and unclear image quality, and image quality degradation is severe. Degenerated images have adverse effects on the visual tasks of underwater vehicles, such as recognition and detection. Therefore, it is vital to obtain high-quality underwater video images. Firstly, this paper analyzes the imaging principle of underwater images and the reasons for their decline in quality and briefly classifies various existing methods. Secondly, it focuses on the current popular deep learning technology in underwater image enhancement, and the underwater video enhancement technologies are also mentioned. It also introduces some standard underwater data sets, common video image evaluation indexes and underwater image specific indexes. Finally, this paper discusses possible future developments in this area.

36 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluated the image quality and performance of an artificial intelligence-based computer-aided detection (CAD) system in photon-counting detector computed tomography (PCD-CT) for pulmonary nodule evaluation at different low-dose levels.
Abstract: The aim of this study was to evaluate the image quality (IQ) and performance of an artificial intelligence (AI)-based computer-aided detection (CAD) system in photon-counting detector computed tomography (PCD-CT) for pulmonary nodule evaluation at different low-dose levels. An anthropomorphic chest phantom containing 14 pulmonary nodules of different sizes (range, 3-12 mm) was imaged on a PCD-CT and on a conventional energy-integrating detector CT (EID-CT). Scans were performed with each of the 3 vendor-specific scanning modes (QuantumPlus [Q+], Quantum [Q], and High Resolution [HR]) at decreasing matched radiation dose levels (volume computed tomography dose index ranging from 1.79 to 0.31 mGy) by adapting IQ levels from 30 to 5. Image noise was measured manually in the chest wall at 8 different locations. Subjective IQ was evaluated by 2 readers in consensus. Nodule detection and volumetry were performed using a commercially available AI-CAD system. Subjective IQ was superior in PCD-CT compared with EID-CT (P < 0.001), and objective image noise was similar in the Q+ and Q-mode (P > 0.05) and superior in the HR-mode (PCD 55.8 ± 11.7 HU vs EID 74.8 ± 5.4 HU; P = 0.01). High resolution showed the lowest image noise values among PCD modes (P = 0.01). Overall, the AI-CAD system delivered comparable results for lung nodule detection and volumetry between PCD- and dose-matched EID-CT (P = 0.08-1.00), with a mean sensitivity of 95% for PCD-CT and of 86% for dose-matched EID-CT in the lowest evaluated dose level (IQ5). Q+ and Q-mode showed higher false-positive rates than EID-CT at lower-dose levels (IQ10 and IQ5). The HR-mode showed a sensitivity of 100% with a false-positive rate of 1 even at the lowest evaluated dose level (IQ5; CTDIvol, 0.41 mGy). Photon-counting detector CT was superior to dose-matched EID-CT in subjective IQ while showing comparable or lower objective image noise.
Fully automatized AI-aided nodule detection and volumetry are feasible in PCD-CT, but attention has to be paid to false-positive findings.

34 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared the quantitative and qualitative image quality of contrast-enhanced abdominal photon-counting detector CT (PCD-CT) with that of EID-CT in the same patients.

32 citations


Proceedings ArticleDOI
01 Jan 2022
TL;DR: In this article, a hybrid approach that benefits from Convolutional Neural Networks (CNNs) and the self-attention mechanism in Transformers is proposed to extract both local and non-local features from the input image.
Abstract: The goal of No-Reference Image Quality Assessment (NR-IQA) is to estimate the perceptual image quality in accordance with subjective evaluations; it is a complex and unsolved problem due to the absence of the pristine reference image. In this paper, we propose a novel model to address the NR-IQA task by leveraging a hybrid approach that benefits from Convolutional Neural Networks (CNNs) and the self-attention mechanism in Transformers to extract both local and non-local features from the input image. We capture local structure information of the image via CNNs; then, to circumvent the locality bias among the extracted CNN features and obtain a non-local representation of the image, we utilize Transformers on the extracted features, modeling them as a sequential input to the Transformer model. Furthermore, to improve the monotonicity correlation between the subjective and objective scores, we utilize the relative distance information among the images within each batch and enforce the relative ranking among them. Last but not least, we observe that the performance of NR-IQA models degrades when we apply equivariant transformations (e.g. horizontal flipping) to the inputs. Therefore, we propose a method that leverages self-consistency as a source of self-supervision to improve the robustness of NR-IQA models. Specifically, we enforce self-consistency between the outputs of our quality assessment model for each image and its transformation (horizontally flipped) to utilize the rich self-supervisory information and reduce the uncertainty of the model. To demonstrate the effectiveness of our work, we evaluate it on seven standard IQA datasets (both synthetic and authentic) and show that our model achieves state-of-the-art results on various datasets.
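The core idea of the hybrid, flattening a CNN feature map into a token sequence and letting scaled dot-product self-attention mix information across all spatial positions, can be sketched in a few lines. This toy NumPy version uses random weights and hypothetical shapes and illustrates only the data flow, not the authors' model.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a token sequence."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v  # each token becomes a weighted mix of ALL tokens (non-local)

rng = np.random.default_rng(0)
# Pretend CNN output: an 8x8 feature map with 32 channels...
feature_map = rng.normal(size=(8, 8, 32))
# ...flattened into a sequence of 64 spatial tokens for the Transformer stage.
tokens = feature_map.reshape(64, 32)
wq, wk, wv = (rng.normal(scale=0.1, size=(32, 32)) for _ in range(3))
out = self_attention(tokens, wq, wk, wv)
```

The CNN supplies local structure per position; the attention step is what removes the locality bias, since every output token attends to every spatial location regardless of distance.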

31 citations


Journal ArticleDOI
TL;DR: In this paper, the feasibility and quality of ultra-high-resolution coronary computed tomography angiography (CCTA) with dual-source photon-counting detector CT (PCD-CT) in patients with a high coronary calcium load was evaluated, including an analysis of the optimal reconstruction kernel and matrix size.
Abstract: The aim of this study was to evaluate the feasibility and quality of ultra-high-resolution coronary computed tomography angiography (CCTA) with dual-source photon-counting detector CT (PCD-CT) in patients with a high coronary calcium load, including an analysis of the optimal reconstruction kernel and matrix size. In this institutional review board-approved study, 20 patients (6 women; mean age, 79 ± 10 years; mean body mass index, 25.6 ± 4.3 kg/m²) undergoing PCD-CCTA in the ultra-high-resolution mode were included. Ultra-high-resolution CCTA was acquired in an electrocardiography-gated dual-source spiral mode at a tube voltage of 120 kV and collimation of 120 × 0.2 mm. The field of view (FOV) and matrix sizes were adjusted to the resolution properties of the individual reconstruction kernels using a FOV of 200 × 200 mm² or 150 × 150 mm² and a matrix size of 512 × 512 pixels or 1024 × 1024 pixels, respectively. Images were reconstructed using vascular kernels of 8 sharpness levels (Bv40, Bv44, Bv56, Bv60, Bv64, Bv72, Bv80, and Bv89), using quantum iterative reconstruction (QIR) at a strength level of 4, and a slice thickness of 0.2 mm. Images with the Bv40 kernel, QIR at a strength level of 4, and a slice thickness of 0.6 mm served as the reference. Image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, and blooming artifacts were quantified. For subjective image quality, 2 blinded readers evaluated image noise and delineation of coronary artery plaques and the adjacent vessel lumen using a 5-point discrete visual scale. A phantom scan served to characterize image noise texture by calculating the noise power spectrum for every reconstruction kernel. Maximum spatial frequency (f_peak) gradually shifted to higher values for reconstructions with the Bv40 to Bv64 kernel (0.15 to 0.56 mm⁻¹), but not for reconstructions with the Bv72 to Bv89 kernel.
Ultra-high-resolution CCTA was feasible in all patients (median calcium score, 479). In patients, reconstructions with the Bv40 kernel and a slice thickness of 0.6 mm showed the largest blooming artifacts (55.2% ± 9.8%) and the lowest vessel sharpness (477.1 ± 73.6 ΔHU/mm) while achieving the highest SNR (27.4 ± 5.6) and CNR (32.9 ± 6.6) and the lowest noise (17.1 ± 2.2 HU). Considering reconstructions with a slice thickness of 0.2 mm, image noise, SNR, CNR, vessel sharpness, and blooming artifacts significantly differed across kernels (all P < 0.001). With higher kernel sharpness, SNR and CNR continuously decreased, whereas image noise and vessel sharpness increased, with the highest sharpness for the Bv89 kernel (2383.4 ± 787.1 ΔHU/mm). Blooming artifacts continuously decreased for reconstructions with the Bv40 (slice thickness, 0.2 mm; 52.8% ± 9.2%) to the Bv72 kernel (39.7% ± 9.1%). Subjective noise was perceived by both readers in agreement with the objective measurements. Considering delineation of coronary artery plaques and the adjacent vessel lumen, reconstructions with the Bv64 and Bv72 kernel (for both, median score of 5) were favored by the readers, providing an excellent anatomic delineation of plaque characteristics and vessel lumen. Ultra-high-resolution CCTA with PCD-CT is feasible and enables the visualization of calcified coronaries with an excellent image quality, high sharpness, and reduced blooming. Coronary plaque characterization and delineation of the adjacent vessel lumen are possible with an optimal quality using the Bv64 kernel, a FOV of 200 × 200 mm², and a matrix size of 512 × 512 pixels.

29 citations


Journal ArticleDOI
TL;DR: Deep learning image reconstruction (DLIR) improved CT image quality at 65% radiation dose reduction while preserving detection of liver lesions larger than 0.5 cm, but reduced-dose DLIR demonstrated overall inferior characterization of liver lesions and lower reader confidence.
Abstract: Background Assessment of liver lesions is constrained as CT radiation doses are lowered; evidence suggests deep learning reconstructions mitigate such effects. Purpose To evaluate liver metastases and image quality between reduced-dose deep learning image reconstruction (DLIR) and standard-dose filtered back projection (FBP) contrast-enhanced abdominal CT. Materials and Methods In this prospective Health Insurance Portability and Accountability Act-compliant study (September 2019 through April 2021), participants with biopsy-proven colorectal cancer and liver metastases at baseline CT underwent standard-dose and reduced-dose portal venous abdominal CT in the same breath hold. Three radiologists detected and characterized lesions at standard-dose FBP and reduced-dose DLIR, reported confidence, and scored image quality. Contrast-to-noise ratios for liver metastases were recorded. Summary statistics were reported, and a generalized linear mixed model was used. Results Fifty-one participants (mean age ± standard deviation, 57 years ± 13; 31 men) were evaluated. The mean volume CT dose index was 65.1% lower with reduced-dose CT (12.2 mGy) than with standard-dose CT (34.9 mGy). A total of 161 lesions (127 metastases, 34 benign lesions) with a mean size of 0.7 cm ± 0.3 were identified. Subjective image quality of reduced-dose DLIR was superior to that of standard-dose FBP (P < .001). The mean contrast-to-noise ratio for liver metastases of reduced-dose DLIR (3.9 ± 1.7) was higher than that of standard-dose FBP (3.5 ± 1.4) (P < .001). Differences in detection were identified only for lesions 0.5 cm or smaller: 63 of 65 lesions detected with standard-dose FBP (96.9%; 95% CI: 89.3, 99.6) and 47 lesions with reduced-dose DLIR (72.3%; 95% CI: 59.8, 82.7). Lesion accuracy with standard-dose FBP and reduced-dose DLIR was 80.1% (95% CI: 73.1, 86.0; 129 of 161 lesions) and 67.1% (95% CI: 59.3, 74.3; 108 of 161 lesions), respectively (P = .01). 
Lower lesion confidence was reported with a reduced dose (P < .001). Conclusion Deep learning image reconstruction (DLIR) improved CT image quality at 65% radiation dose reduction while preserving detection of liver lesions larger than 0.5 cm. Reduced-dose DLIR demonstrated overall inferior characterization of liver lesions and reader confidence. Clinical trial registration no. NCT03151564 © RSNA, 2022 Online supplemental material is available for this article.

29 citations


Journal ArticleDOI
TL;DR: In this paper , the quantitative and qualitative image quality of low-dose CT scans of the abdomen on a novel photon-counting detector CT (PCD-CT) was analyzed.

Journal ArticleDOI
TL;DR: Abdominal virtual noncontrast images from the arterial and portal venous phase of photon-counting detector CT yielded accurate CT attenuation and good image quality compared with true noncontrast images.
Abstract: Background Accurate CT attenuation and diagnostic quality of virtual noncontrast (VNC) images acquired with photon-counting detector (PCD) CT are needed to replace true noncontrast (TNC) scans. Purpose To assess the attenuation errors and image quality of VNC images from abdominal PCD CT compared with TNC images. Materials and Methods In this retrospective study, consecutive adult patients who underwent a triphasic examination with PCD CT from July 2021 to October 2021 were included. VNC images were reconstructed from arterial and portal venous phase CT. The absolute attenuation error of VNC compared with TNC images was measured in multiple structures by two readers. Then, two readers blinded to image reconstruction assessed the overall image quality, image noise, noise texture, and delineation of small structures using five-point discrete visual scales (5 = excellent, 1 = nondiagnostic). Overall image quality greater than or equal to 3 was deemed diagnostic. In a phantom, noise texture, spatial resolution, and detectability index were assessed. A detectability index greater than or equal to 5 indicated high diagnostic accuracy. Interreader agreement was evaluated using the Krippendorff α coefficient. The paired t test and Friedman test were applied to compare objective and subjective results. Results Overall, 100 patients (mean age, 72 years ± 10 [SD]; 81 men) were included. In patients, VNC image attenuation values were consistent between readers (α = .60), with errors less than 5 HU in 76% and less than 10 HU in 95% of measurements. There was no evidence of a difference in error of VNC images from arterial or portal venous phase CT (3.3 HU vs 3.5 HU, P = .16). Subjective image quality was rated lower in VNC images for all categories (all, P < .001). Diagnostic quality of VNC images was reached in 99% and 100% of patients for readers 1 and 2, respectively. 
In the phantom, VNC images exhibited 33% higher noise, blotchier noise texture, similar spatial resolution, and inferior but overall good image quality (detectability index >20) compared with TNC images. Conclusion Abdominal virtual noncontrast images from the arterial and portal venous phase of photon-counting detector CT yielded accurate CT attenuation and good image quality compared with true noncontrast images. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Sosna in this issue.

Journal ArticleDOI
TL;DR: In this paper, the authors constructed a new Subjectively-Annotated Underwater Image Enhancement (UIE) benchmark dataset (SAUD), which simultaneously provides real-world raw underwater images, readily available enhanced results from representative UIE algorithms, and subjective ranking scores for each enhanced result.
Abstract: Due to the attenuation and scattering of light by water, raw underwater images suffer from many quality defects such as color casts, decreased visibility, and reduced contrast. Many different underwater image enhancement (UIE) algorithms have been proposed to enhance underwater image quality. However, how to fairly compare the performance among UIE algorithms remains a challenging problem. So far, the lack of a comprehensive human subjective user study with a large-scale benchmark dataset and a reliable objective image quality assessment (IQA) metric makes it difficult to fully understand the true performance of UIE algorithms. In this paper, we make efforts in both subjective and objective aspects to fill these gaps. Firstly, we construct a new Subjectively-Annotated UIE benchmark Dataset (SAUD) which simultaneously provides real-world raw underwater images, readily available enhanced results by representative UIE algorithms, and subjective ranking scores of each enhanced result. Secondly, we propose an effective No-reference (NR) Underwater Image Quality metric (NUIQ) to automatically evaluate the visual quality of enhanced underwater images. Experiments on the constructed SAUD dataset demonstrate the superiority of our proposed NUIQ metric, achieving higher consistency with subjective rankings than 22 mainstream NR-IQA metrics. The dataset and source code will be made available at https://github.com/yia-yuese/SAUD-Dataset .

Journal ArticleDOI
TL;DR: In this paper, the authors compared the image quality and lung nodule detectability of deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction-V (ASIR-V) in ultra-low-dose (ULD) CT images.
Abstract: Background Ultra-low-dose (ULD) CT could facilitate the clinical implementation of large-scale lung cancer screening while minimizing the radiation dose. However, traditional image reconstruction methods are associated with image noise in low-dose acquisitions. Purpose To compare the image quality and lung nodule detectability of deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction-V (ASIR-V) in ULD CT. Materials and Methods Patients who underwent noncontrast ULD CT (performed at 0.07 or 0.14 mSv, similar to a single chest radiograph) and contrast-enhanced chest CT (CECT) from April to June 2020 were included in this prospective study. ULD CT images were reconstructed with filtered back projection (FBP), ASIR-V, and DLIR. Three-dimensional segmentation of lung tissue was performed to evaluate image noise. Radiologists detected and measured nodules with use of a deep learning-based nodule assessment system and recognized malignancy-related imaging features. Bland-Altman analysis and repeated-measures analysis of variance were used to evaluate the differences between ULD CT images and CECT images. Results A total of 203 participants (mean age ± standard deviation, 61 years ± 12; 129 men) with 1066 nodules were included, with 100 scans at 0.07 mSv and 103 scans at 0.14 mSv. The mean lung tissue noise ± standard deviation was 46 HU ± 4 for CECT and 59 HU ± 4, 56 HU ± 4, 53 HU ± 4, 54 HU ± 4, and 51 HU ± 4 in FBP, ASIR-V level 40%, ASIR-V level 80% (ASIR-V-80%), medium-strength DLIR, and high-strength DLIR (DLIR-H), respectively, of ULD CT scans (P < .001). The nodule detection rates of FBP reconstruction, ASIR-V-80%, and DLIR-H were 62.5% (666 of 1066 nodules), 73.3% (781 of 1066 nodules), and 75.8% (808 of 1066 nodules), respectively (P < .001). 
Bland-Altman analysis showed the percentage difference in long diameter from that of CECT was 9.3% (95% CI of the mean: 8.0, 10.6), 9.2% (95% CI of the mean: 8.0, 10.4), and 6.2% (95% CI of the mean: 5.0, 7.4) in FBP reconstruction, ASIR-V-80%, and DLIR-H, respectively (P < .001). Conclusion Compared with adaptive statistical iterative reconstruction-V, deep learning image reconstruction reduced image noise, increased nodule detection rate, and improved measurement accuracy on ultra-low-dose chest CT images. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Lee in this issue.
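The Bland-Altman analysis used above reduces to the mean difference (bias) between two measurement methods plus 95% limits of agreement, bias ± 1.96 SD of the differences. A small NumPy sketch with hypothetical nodule diameters (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two measurement methods.

    Returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the paired differences).
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical nodule long diameters (mm) from two acquisitions
cect = np.array([5.2, 7.8, 10.1, 4.4, 6.9, 12.3])
uld = np.array([5.6, 8.1, 10.9, 4.9, 7.2, 13.0])
bias, (lo, hi) = bland_altman(uld, cect)
```

A nonzero bias indicates systematic over- or undersizing by one method; narrow limits of agreement are what justify interchanging the measurements clinically.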

Journal ArticleDOI
TL;DR: In this paper, the authors compared the effect of two deep learning image reconstruction (DLR) algorithms in chest computed tomography (CT) with different clinical indications and found that DLR algorithms reduce image noise and improve lesion detectability.
Abstract: The purpose of this study was to compare the effect of two deep learning image reconstruction (DLR) algorithms in chest computed tomography (CT) with different clinical indications. Acquisitions on image quality and anthropomorphic phantoms were performed at six dose levels (CTDIvol: 10/7.5/5/2.5/1/0.5 mGy) on two CT scanners equipped with two different DLR algorithms (TrueFidelity™ and AiCE). Raw data were reconstructed using filtered back-projection (FBP) and the lowest/intermediate/highest DLR levels (L-DLR/M-DLR/H-DLR) of each algorithm. Noise power spectrum, task-based transfer function (TTF) and detectability index (d') were computed: d' modelled detection of a soft tissue mediastinal nodule, ground-glass opacity, or high-contrast pulmonary lesion. Subjective image quality of anthropomorphic phantom images was analyzed by two radiologists. For the L-DLR/M-DLR levels, the noise magnitude was lower with TrueFidelity™ than with AiCE from 2.5 to 10 mGy. For H-DLR, noise magnitude was lower with AiCE. For L-DLR and M-DLR, the average NPS spatial frequency (fav) values were greater for AiCE except for 0.5 mGy. For H-DLR levels, fav was greater for TrueFidelity™ than for AiCE. TTF50% values were greater with AiCE for the air insert, and lower than TrueFidelity™ for the polyethylene insert. From 2.5 to 10 mGy, d' was greater for AiCE than for TrueFidelity™ for H-DLR for all lesions, but similar for L-DLR and M-DLR. Image quality was rated clinically appropriate for all levels of both algorithms, for doses from 2.5 to 10 mGy, except for L-DLR of AiCE. DLR algorithms reduce image noise and improve lesion detectability. Their operations and properties impacted both noise texture and spatial resolution.
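The noise power spectrum used in this and several of the CT studies above is conventionally estimated from an ensemble of mean-subtracted noise-only ROIs: NPS(u, v) = (dx·dy / (Nx·Ny)) · <|DFT(ROI)|²>. A minimal NumPy sketch on synthetic white noise (the pixel size and ROI count are hypothetical):

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """Ensemble 2D noise power spectrum from mean-subtracted noise ROIs.

    NPS(u, v) = (dx * dy / (Nx * Ny)) * <|DFT(ROI - mean)|^2>
    """
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    centered = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove DC per ROI
    spectra = np.abs(np.fft.fft2(centered)) ** 2
    return spectra.mean(axis=0) * pixel_mm * pixel_mm / (nx * ny)

rng = np.random.default_rng(0)
rois = rng.normal(0.0, 20.0, (50, 64, 64))  # 50 white-noise ROIs, SD = 20 HU
nps = nps_2d(rois)
```

Integrating the NPS over spatial frequency recovers the noise variance, a useful sanity check; for white noise the spectrum is flat apart from the zeroed DC bin, and reconstruction kernels reshape this curve (the f_peak and fav shifts reported in these abstracts).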

Journal ArticleDOI
TL;DR: Deep learning reconstruction improves the image quality of diffusion-weighted MRI scans of prostate cancer with no impact on apparent diffusion coefficient quantitation with a 3.0-T MRI system.
Abstract: Background Deep learning reconstruction (DLR) may improve image quality. However, its impact on diffusion-weighted imaging (DWI) of the prostate has yet to be assessed. Purpose To determine whether DLR can improve image quality of diffusion-weighted MRI at b values ranging from 1000 sec/mm2 to 5000 sec/mm2 in patients with prostate cancer. Materials and Methods In this retrospective study, images of the prostate obtained at DWI with a b value of 0 sec/mm2, DWI with a b value of 1000 sec/mm2 (DWI1000), DWI with a b value of 3000 sec/mm2 (DWI3000), and DWI with a b value of 5000 sec/mm2 (DWI5000) from consecutive patients with biopsy-proven cancer from January to June 2020 were reconstructed with and without DLR. Image quality was assessed using signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) from region-of-interest analysis and qualitatively assessed using a five-point visual scoring system (1 [very poor] to 5 [excellent]) for each high-b-value DWI sequence with and without DLR. The SNR, CNR, and visual score for DWI with and without DLR were compared with the paired t test and the Wilcoxon signed rank test with Bonferroni correction, respectively. Apparent diffusion coefficients (ADCs) from DWI with and without DLR were also compared with the paired t test with Bonferroni correction. Results A total of 60 patients (mean age, 67 years; age range, 49-79 years) were analyzed. DWI with DLR showed significantly higher SNRs and CNRs than DWI without DLR (P < .001); for example, with DWI1000 the mean SNR was 38.7 ± 0.6 versus 17.8 ± 0.6, respectively (P < .001), and the mean CNR was 18.4 ± 5.6 versus 7.4 ± 5.6, respectively (P < .001). DWI with DLR also demonstrated higher qualitative image quality than DWI without DLR (mean score: 4.8 ± 0.4 vs 4.0 ± 0.7, respectively, with DWI1000 [P = .001], 3.8 ± 0.7 vs 3.0 ± 0.8 with DWI3000 [P = .002], and 3.1 ± 0.8 vs 2.0 ± 0.9 with DWI5000 [P < .001]). 
ADCs derived with and without DLR did not differ substantially (P > .99). Conclusion Deep learning reconstruction improves the image quality of diffusion-weighted MRI scans of prostate cancer with no impact on apparent diffusion coefficient quantitation with a 3.0-T MRI system. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Turkbey in this issue.

Journal ArticleDOI
TL;DR: Earlier algorithms for enhancing low-light images, such as the multimedia algorithm, can process only grayscale images and do not enhance image quality sufficiently.
Abstract: Digital images play an important role both in daily-life applications, such as satellite television and magnetic resonance imaging, and in areas of research and technology, such as geographical information systems and astronomy. Whenever an image is converted from one form to another, for example when it is digitized, some degradation occurs at the output. The quality of these degraded images can be improved by applying enhancement techniques. The main purpose of image enhancement is to bring out details that are hidden in an image, or to increase the contrast of a low-contrast image, by changing the pixel intensities of the input image. Enhancing the quality of low-light images is a particularly difficult problem, and an efficient low-light image enhancement method is introduced to address it. Among the algorithms previously used to enhance low-light images is the multimedia algorithm, which can process only grayscale images and does not enhance image quality sufficiently; its main drawback is that image quality is reduced because processing considers only a single pixel at a time.
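A minimal sketch of the kind of contrast enhancement discussed above is linear contrast stretching, applied per channel so it works on color as well as grayscale images. The function below is illustrative only and is not the multimedia algorithm the abstract critiques:

```python
def stretch_contrast(channel, new_min=0, new_max=255):
    """Linearly remap one channel's intensities to the full display range.

    Applied to each channel separately, this handles color images as well
    as grayscale ones.
    """
    lo, hi = min(channel), max(channel)
    if hi == lo:
        return [new_min] * len(channel)  # flat channel: nothing to stretch
    scale = (new_max - new_min) / (hi - lo)
    return [round(new_min + (p - lo) * scale) for p in channel]

dark = [10, 20, 30, 40, 50]      # low-light intensities clustered near black
bright = stretch_contrast(dark)  # spread across the full 0..255 range
```

Because the mapping depends on the channel's global minimum and maximum rather than on a single pixel in isolation, it avoids the per-pixel limitation the abstract describes.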

Posted ContentDOI
TL;DR: The proposed DnSRGAN method solves the problem of high noise and artifacts that cause cardiac images to be reconstructed incorrectly during super-resolution, and is capable of high-quality reconstruction of noisy cardiac images.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a denoising super-resolution Generative Adversarial Network (DnSRGAN) for high-quality super-resolution reconstruction of noisy cardiac magnetic resonance (CMR) images.

Journal ArticleDOI
TL;DR: DuDoDR-Net as discussed by the authors proposes a dual-domain data consistent recurrent network for sparse-view metal artifact reduction (SVMAR), which can reconstruct an artifact-free image through recurrent image-domain and sinogram-domain restorations.

Journal ArticleDOI
TL;DR: LD-HR PCCT scans of the lung provide better image quality at a significantly lower radiation dose than EID-CT scans, and image quality and image sharpness were rated best in PCCT (QIR+) images.
Abstract: This study aims to investigate the qualitative and quantitative image quality of low-dose high-resolution (LD-HR) lung CT scans acquired with the first clinically approved photon-counting CT (PCCT) scanner. Furthermore, the radiation dose used by the PCCT is compared with that of a conventional CT scanner with an energy-integrating detector system (EID-CT). Twenty-nine patients who underwent an LD-HR chest CT scan with dual-source PCCT and had previously undergone an LD-HR chest CT with a standard EID-CT scanner were retrospectively included in this study. Images of the whole lung as well as enlarged image sections displaying a specific finding (lesion) were evaluated in terms of overall image quality, image sharpness and image noise by three senior radiologists using a 5-point Likert scale. The PCCT images were reconstructed with and without a quantum iterative reconstruction algorithm (PCCT QIR+/−). Noise and signal-to-noise ratio (SNR) were measured and the effective radiation dose was calculated. Overall, image quality and image sharpness were rated best in PCCT (QIR+) images. A significant difference was seen particularly in image sections of PCCT (QIR+) images compared with EID-CT images (p < 0.005). Image noise of PCCT (QIR+) images was significantly lower than that of EID-CT images in image sections (p = 0.005). In whole-lung images, by contrast, noise was lowest on EID-CT images (p < 0.001). The PCCT used a significantly lower radiation dose than the EID-CT (p < 0.001). In conclusion, LD-HR PCCT scans of the lung provide better image quality at a significantly lower radiation dose than EID-CT scans.
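The effective radiation dose mentioned above is conventionally estimated from the dose-length product (DLP). A sketch under the common assumption of a chest conversion coefficient k ≈ 0.014 mSv/(mGy·cm); the study does not state its exact method, so the numbers here are purely illustrative:

```python
def effective_dose_msv(ctdi_vol_mgy, scan_length_cm, k=0.014):
    """Estimate effective dose from the dose-length product (DLP).

    DLP [mGy*cm] = CTDIvol * scan length, and effective dose [mSv] = DLP * k,
    where k is an anatomic conversion coefficient (about 0.014 mSv/(mGy*cm)
    for chest CT in adults).
    """
    dlp = ctdi_vol_mgy * scan_length_cm
    return dlp * k

# Toy low-dose chest scan: CTDIvol of 1.0 mGy over a 30 cm scan length
dose = effective_dose_msv(1.0, 30.0)
```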

Journal ArticleDOI
TL;DR: In this paper , the authors evaluated the impact of contrast enhancement and different virtual monoenergetic image energies on automatized emphysema quantification with photon-counting detector computed tomography (PCD-CT).
Abstract: Purpose The aim of this study was to evaluate the impact of contrast enhancement and different virtual monoenergetic image energies on automatized emphysema quantification with photon-counting detector computed tomography (PCD-CT). Materials and Methods Sixty patients who underwent contrast-enhanced chest CT on a first-generation, clinical dual-source PCD-CT were retrospectively included. Scans were performed in the multienergy (QuantumPlus) mode at 120 kV with weight-adjusted intravenous contrast agent. Virtual noncontrast (VNC) images as well as virtual monoenergetic images (VMIs) from 40 to 80 keV obtained in 10-keV intervals were reconstructed. Computed tomography attenuation was measured in the aorta. Noise was measured in subcutaneous fat and defined as the standard deviation of attenuation. Contrast-to-noise ratio (region of interest in the ascending aorta) and signal-to-noise ratio (in the subcutaneous fat) were calculated. Subjective image quality (emphysema assessment, lung parenchyma evaluation, and vessel evaluation) was rated by 2 blinded radiologists. Emphysema quantification (with a threshold of −950 HU) was performed with commercially available software. Virtual noncontrast images served as the reference standard for emphysema quantification. Results Noise and contrast-to-noise ratio showed a strong negative correlation (r = −0.98; P < 0.01) with VMI energies. Subjective assessment scores were highest at 70 keV for lung parenchyma and 50 keV for pulmonary vessel evaluation (P < 0.001). The best trade-off for the assessment of emphysema while maintaining reasonable contrast for pulmonary vessel evaluation was determined between 60 and 70 keV. Overall, contrast-enhanced imaging led to significant and systematic underestimation of emphysema as compared with VNC (P < 0.001). This underestimation decreased with increasing VMI energy (r = 0.98; P = 0.003). 
Emphysema quantification showed significantly (P < 0.05) increased emphysema volumes with increasing VMI energies, except between 60–70 keV and 70–80 keV. The smallest difference in emphysema quantification between contrast-enhanced scans and VNC was found at 80 keV. Conclusion Computed tomography emphysema quantification was significantly affected by intravenous contrast administration and VMI energy level. Virtual monoenergetic images at 80 keV yielded the results most comparable to VNC. The best trade-off in qualitative as well as quantitative image quality evaluation was determined at 60–70 keV.
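Threshold-based emphysema quantification as used above counts the fraction of lung voxels at or below −950 HU (often reported as LAA%-950). A minimal sketch; the function name and toy voxel values are illustrative, not from the study:

```python
def emphysema_index(lung_voxels_hu, threshold_hu=-950):
    """Percentage of lung voxels at or below the attenuation threshold.

    Lower (more negative) HU values indicate lower attenuation, i.e. more
    air-like, emphysematous tissue.
    """
    low = sum(1 for v in lung_voxels_hu if v <= threshold_hu)
    return 100.0 * low / len(lung_voxels_hu)

voxels = [-980, -960, -940, -920, -900, -955, -870, -990]  # toy HU values
index = emphysema_index(voxels)
```

Contrast enhancement raises lung attenuation, pushing voxels above the fixed threshold, which is consistent with the systematic underestimation relative to VNC reported above.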

Journal ArticleDOI
TL;DR: In this article , the authors proposed a no-reference omnidirectional image quality assessment (NR OIQA) algorithm by multi-frequency information and local-global naturalness (MFILGN).
Abstract: 360-degree/omnidirectional images (OIs) have received considerable attention owing to the increasing applications of virtual reality (VR). Compared to conventional 2D images, OIs can provide more immersive experiences to consumers, benefiting from higher resolution and richer fields of view (FoVs). Moreover, OIs are usually viewed in a head-mounted display (HMD) without access to a reference image. Therefore, an efficient blind quality assessment method, specifically designed for 360-degree images, is urgently desired. In this paper, motivated by the characteristics of the human visual system (HVS) and the viewing process of VR visual content, we propose a novel and effective no-reference omnidirectional image quality assessment (NR OIQA) algorithm based on Multi-Frequency Information and Local-Global Naturalness (MFILGN). Specifically, inspired by the frequency-dependent property of the visual cortex, we first decompose equirectangular projection (ERP) maps into wavelet subbands using the discrete Haar wavelet transform (DHWT). Then, the entropy intensities of the low-frequency and high-frequency subbands are exploited to measure the multifrequency information of OIs. In addition to the global naturalness of ERP maps, and reflecting the FoVs actually browsed, we extract natural scene statistics (NSS) features from each viewport image as a measure of local naturalness. With the proposed multifrequency information measurement and local-global naturalness measurement, we utilize support vector regression (SVR) as the final image quality regressor, mapping quality-related features to human ratings. To our knowledge, the proposed model is the first no-reference quality assessment method for 360-degree images that combines multifrequency information and image naturalness. 
Experimental results on two publicly available OIQA databases demonstrate that our proposed MFILGN outperforms state-of-the-art full-reference (FR) and NR approaches.
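Two building blocks of MFILGN described above, a one-level Haar decomposition and the entropy of the resulting subbands, can be sketched in simplified 1D form. The paper operates on 2D ERP maps; the function names, the averaging normalization, and the coarse histogram binning here are illustrative assumptions:

```python
import math

def haar_1level(signal):
    """One-level discrete Haar transform into low- and high-frequency subbands.

    Requires an even-length input; uses averaging/differencing normalization.
    """
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def subband_entropy(coeffs, bins=4):
    """Shannon entropy of a coarse histogram of subband coefficients."""
    lo, hi = min(coeffs), max(coeffs)
    width = (hi - lo) / bins or 1.0  # avoid zero bin width for flat subbands
    counts = [0] * bins
    for c in coeffs:
        counts[min(int((c - lo) / width), bins - 1)] += 1
    probs = [n / len(coeffs) for n in counts if n]
    return -sum(p * math.log2(p) for p in probs)

approx, detail = haar_1level([1, 3, 2, 2])
features = [subband_entropy(approx), subband_entropy(detail)]  # fed to SVR in the paper
```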

Journal ArticleDOI
TL;DR: VNCPC-reconstructions of PCD-CT-angiography datasets have excellent image quality with complete contrast removal and only minimal erroneous subtractions of stent parts/calcifications and could replace TNC-series in almost all cases.
Abstract: The purpose of this study was to evaluate virtual non-contrast reconstructions of Photon-Counting Detector (PCD) CT-angiography datasets using a novel calcium-preserving algorithm (VNCPC) vs. the standard algorithm (VNCConv) for their potential to replace unenhanced acquisitions (TNC) in patients after endovascular aneurysm repair (EVAR). Twenty EVAR patients who had undergone CTA (unenhanced and arterial phase) on a novel PCD-CT were included. VNCConv- and VNCPC-series were derived from CTA-datasets, and intraluminal signal and noise were compared. Three readers evaluated image quality, contrast removal, and removal of calcifications/stent parts and assessed all VNC-series for their suitability to replace TNC-series. Image noise was higher in VNC- than in TNC-series (18.6 ± 5.3 HU, 16.7 ± 7.1 HU, and 14.9 ± 7.1 HU for VNCConv-, VNCPC-, and TNC-series, p = 0.006). Subjective image quality was substantially higher in VNCPC- than VNCConv-series (4.2 ± 0.9 vs. 2.5 ± 0.6; p < 0.001). Aortic contrast removal was complete in all VNC-series. Unlike in VNCConv-reconstructions, only minuscule parts of stents or calcifications were erroneously subtracted in VNCPC-reconstructions. Readers considered 95% of VNCPC-series fully or mostly suited to replace TNC-series; for VNCConv-reconstructions, however, only 75% were considered mostly (and none fully) suited for TNC-replacement. VNCPC-reconstructions of PCD-CT-angiography datasets have excellent image quality with complete contrast removal and only minimal erroneous subtractions of stent parts/calcifications. They could replace TNC-series in almost all cases.

Proceedings ArticleDOI
01 Jun 2022
TL;DR: The NTIRE 2022 challenge on perceptual image quality assessment (IQA), held in conjunction with the New Trends in Image Restoration and Enhancement (NTIRE) workshop at CVPR 2022, attracted 192 and 179 registered participants for its two tracks.
Abstract: This paper reports on the NTIRE 2022 challenge on perceptual image quality assessment (IQA), held in conjunction with the New Trends in Image Restoration and Enhancement (NTIRE) workshop at CVPR 2022. The challenge was held to address the emerging problem of assessing the quality of images produced by perceptual image processing algorithms. The output images of these algorithms have completely different characteristics from traditional distortions and are included in the PIPAL dataset used in this challenge. The challenge is divided into two tracks: a full-reference IQA track similar to the previous NTIRE IQA challenge, and a new track that focuses on no-reference IQA methods. The two tracks had 192 and 179 registered participants, respectively. In the final testing stage, 7 and 8 participating teams submitted their models and fact sheets. Almost all of them achieved better results than existing IQA methods, and the winning method demonstrates state-of-the-art performance.

Journal ArticleDOI
TL;DR: The CONTRastive Image QUality Evaluator (CONTRIQUE) as mentioned in this paper uses prediction of distortion type and degree as an auxiliary task to learn features from an unlabeled image dataset containing a mixture of synthetic and realistic distortions.
Abstract: We consider the problem of obtaining image quality representations in a self-supervised manner. We use prediction of distortion type and degree as an auxiliary task to learn features from an unlabeled image dataset containing a mixture of synthetic and realistic distortions. We then train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem. We refer to the proposed training framework and resulting deep IQA model as the CONTRastive Image QUality Evaluator (CONTRIQUE). During evaluation, the CNN weights are frozen and a linear regressor maps the learned representations to quality scores in a No-Reference (NR) setting. We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models, even without any additional fine-tuning of the CNN backbone. The learned representations are highly robust and generalize well across images afflicted by either synthetic or authentic distortions. Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets. The implementations used in this paper are available at https://github.com/pavancm/CONTRIQUE.
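The frozen-features-plus-linear-regressor evaluation protocol described above can be sketched with a single scalar feature. The ordinary least-squares fit below is a toy stand-in: CONTRIQUE regresses a high-dimensional CNN representation, typically with a regularized linear model, and the feature and score values here are invented for illustration:

```python
def fit_linear(features, scores):
    """Ordinary least-squares fit of one frozen feature to quality scores."""
    n = len(features)
    mx = sum(features) / n
    my = sum(scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(features, scores))
    var = sum((x - mx) ** 2 for x in features)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

# Toy frozen-backbone feature values vs. subjective quality scores
feats = [0.1, 0.4, 0.5, 0.8]
scores = [1.2, 2.4, 2.8, 4.0]
w, b = fit_linear(feats, scores)
predicted = w * 0.5 + b  # quality prediction for a new image's feature
```

The key point mirrored here is that only the light-weight regressor is trained on subjective scores; the feature extractor (the CNN backbone) stays frozen.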

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed an adaptive rectification-based generative adversarial network with a spectrum constraint to estimate the residual between the preliminarily predicted image and the real standard-dose PET (SPET) image.

Journal ArticleDOI
TL;DR: In this paper, the authors compared the image quality of contrast-enhanced abdominal CT in obese patients between a 1st-generation photon-counting detector CT (PCD-CT) and a 2nd-generation dual-source CT with energy-integrating detectors (DSCT).

Journal ArticleDOI
TL;DR: Sun et al. as discussed by the authors developed an underwater image enhancement framework based on reinforcement learning, in which states are represented by image feature maps, actions are represented by image enhancement methods, and rewards are represented by image quality improvements.
Abstract: In this article, we develop an underwater image enhancement framework based on reinforcement learning. To do this, we model underwater image enhancement as a Markov decision process (MDP), in which states are represented by image feature maps, actions are represented by image enhancement methods, and rewards are represented by image quality improvements. The MDP trained with reinforcement learning can characterize a sequence of enhanced results for an underwater image. At each step of the MDP, a state transitions from one to another according to an action of image enhancement selected by a deep Q network. The final enhanced image in the sequence is the one with the largest overall image quality improvement. In this manner, our reinforcement learning framework effectively organizes a sequence of image enhancement methods in a principled manner. In contrast to the black-box processing schemes of deep learning methods, our reinforcement learning framework gives a sequence of specific actions, which are transparent from the implementation perspective. Benefiting from the exploration-and-exploitation training fashion, our reinforcement learning framework can generate enhanced images of better quality than the reference images. Experimental results validate the effectiveness of our reinforcement learning framework in underwater image enhancement. The code and detailed results are available at https://gitee.com/sunshixin_upc/underwater-image-enhancement-with-reinforcement-learning.
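The action-selection and value-update machinery described above can be sketched in tabular form. The action names, epsilon, and learning rates below are illustrative assumptions, and the paper uses a deep Q network rather than a table:

```python
import random

# Illustrative enhancement actions; the paper's actual action set may differ
ACTIONS = ["histogram_equalization", "white_balance", "gamma_correction", "stop"]

def select_action(q_values, epsilon=0.1, rng=random.Random(0)):
    """Epsilon-greedy selection over Q-values, as in DQN-style training.

    With probability epsilon, explore a random action; otherwise exploit
    the action with the highest current Q-value.
    """
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

def q_update(q, reward, max_q_next, alpha=0.5, gamma=0.9):
    """One Q-learning backup; the reward is the measured image quality gain."""
    return q + alpha * (reward + gamma * max_q_next - q)

# Greedy choice for one state's Q-values (epsilon=0 disables exploration)
best = ACTIONS[select_action([0.2, 0.9, 0.1, 0.0], epsilon=0.0)]
```

Each chosen action names a concrete enhancement operator, which is what makes the resulting enhancement sequence transparent, as the abstract emphasizes.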