
Showing papers on "Image processing" published in 2021


Journal ArticleDOI
TL;DR: A survey on recent advances of image super-resolution techniques using deep learning approaches in a systematic way, which can roughly group the existing studies of SR techniques into three major categories: supervised SR, unsupervised SR, and domain-specific SR.
Abstract: Image Super-Resolution (SR) is an important class of image processing techniques to enhance the resolution of images and videos in computer vision. Recent years have witnessed remarkable progress in image super-resolution using deep learning techniques. This article aims to provide a comprehensive survey of recent advances in image super-resolution using deep learning approaches. In general, we can roughly group the existing studies of SR techniques into three major categories: supervised SR, unsupervised SR, and domain-specific SR. In addition, we also cover other important issues, such as publicly available benchmark datasets and performance evaluation metrics. Finally, we conclude this survey by highlighting several future directions and open issues that should be further addressed by the community.

837 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: Hu et al. as discussed by the authors proposed a pre-trained image processing transformer (IPT) model for denoising, super-resolution and deraining tasks, which is trained on corrupted image pairs with multi-heads and multi-tails.
Abstract: As the computing power of modern hardware increases rapidly, pre-trained deep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have shown their effectiveness over conventional methods. This progress is mainly attributed to the representation ability of the transformer and its variant architectures. In this paper, we study low-level computer vision tasks (e.g., denoising, super-resolution and deraining) and develop a new pre-trained model, namely, the image processing transformer (IPT). To maximally excavate the capability of the transformer, we utilize the well-known ImageNet benchmark to generate a large number of corrupted image pairs. The IPT model is trained on these images with multi-heads and multi-tails. In addition, contrastive learning is introduced to adapt the model to different image processing tasks. The pre-trained model can therefore be efficiently employed on a desired task after fine-tuning. With only one pre-trained model, IPT outperforms the current state-of-the-art methods on various low-level benchmarks. Code is available at https://github.com/huawei-noah/Pretrained-IPT and https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/IPT
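The corrupted-pair generation that drives this kind of pre-training can be illustrated with a minimal sketch, assuming bicubic downsampling for the super-resolution task and additive Gaussian noise for denoising; the paper's exact degradation settings may differ:

```python
import numpy as np
from PIL import Image

def make_sr_pair(img: Image.Image, scale: int = 2):
    """LR/HR pair for super-resolution: bicubic-downsample the clean image."""
    w, h = img.size
    lr = img.resize((w // scale, h // scale), Image.BICUBIC)
    return lr, img

def make_denoise_pair(img: Image.Image, sigma: float = 30.0):
    """Noisy/clean pair for denoising: add white Gaussian noise."""
    x = np.asarray(img, dtype=np.float32)
    noisy = np.clip(x + np.random.normal(0.0, sigma, x.shape), 0, 255)
    return Image.fromarray(noisy.astype(np.uint8)), img
```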

416 citations


Journal ArticleDOI
TL;DR: In this paper, an emerging technique called algorithm unrolling, or unfolding, offers promise in eliminating these issues by providing a concrete and systematic connection between iterative algorithms that are widely used in signal processing and deep neural networks.
Abstract: Deep neural networks provide unprecedented performance gains in many real-world problems in signal and image processing. Despite these gains, the future development and practical deployment of deep networks are hindered by their black-box nature, i.e., a lack of interpretability and the need for very large training sets. An emerging technique called algorithm unrolling, or unfolding, offers promise in eliminating these issues by providing a concrete and systematic connection between iterative algorithms that are widely used in signal processing and deep neural networks. Unrolling methods were first proposed to develop fast neural network approximations for sparse coding. More recently, this direction has attracted enormous attention, and it is rapidly growing in both theoretic investigations and practical applications. The increasing popularity of unrolled deep networks is due, in part, to their potential in developing efficient, high-performance (yet interpretable) network architectures from reasonably sized training sets.
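The sparse-coding example mentioned above is the canonical case: the iterative shrinkage-thresholding algorithm (ISTA) can be unrolled into a fixed-depth network whose matrices and thresholds become learnable (the LISTA idea). A minimal PyTorch sketch, with depth and parameterization chosen for illustration (n = signal dimension, m = code dimension):

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: sign(x) * max(|x| - theta, 0)
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

class UnrolledISTA(nn.Module):
    """ISTA iterations z <- soft_threshold(W_e y + S z, theta), unrolled into
    K layers with W_e, S, and the thresholds treated as learnable (LISTA-style)."""
    def __init__(self, n, m, K=8):
        super().__init__()
        self.We = nn.Linear(n, m, bias=False)  # plays the role of (1/L) * A^T
        self.S = nn.Linear(m, m, bias=False)   # plays the role of I - (1/L) * A^T A
        self.theta = nn.Parameter(torch.full((K,), 0.1))
        self.K = K

    def forward(self, y):
        b = self.We(y)                          # constant across layers
        z = soft_threshold(b, self.theta[0])
        for k in range(1, self.K):
            z = soft_threshold(b + self.S(z), self.theta[k])
        return z                                # estimated sparse code
```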

377 citations


Journal ArticleDOI
TL;DR: Point Cloud Transformer (PCT) as mentioned in this paper is based on Transformer, which is inherently permutation invariant for processing a sequence of points, making it well suited for point cloud learning.
Abstract: The irregular domain and lack of ordering make it challenging to design deep neural networks for point cloud processing. This paper presents a novel framework named Point Cloud Transformer (PCT) for point cloud learning. PCT is based on Transformer, which achieves huge success in natural language processing and displays great potential in image processing. It is inherently permutation invariant for processing a sequence of points, making it well-suited for point cloud learning. To better capture local context within the point cloud, we enhance input embedding with the support of farthest point sampling and nearest neighbor search. Extensive experiments demonstrate that the PCT achieves the state-of-the-art performance on shape classification, part segmentation, semantic segmentation, and normal estimation tasks.
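The farthest point sampling used to build the neighbor-enhanced input embedding is a simple greedy procedure; a plain NumPy sketch for illustration, not the authors' implementation:

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """Greedy FPS: repeatedly pick the point farthest from the chosen set.
    points: (N, 3) array; returns the indices of k sampled points."""
    n = points.shape[0]
    chosen = np.zeros(k, dtype=int)
    dist = np.full(n, np.inf)
    chosen[0] = np.random.randint(n)
    for i in range(1, k):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)          # distance to the chosen set so far
        chosen[i] = int(np.argmax(dist))    # farthest remaining point
    return chosen
```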

361 citations


Journal ArticleDOI
TL;DR: Compared to other state-of-the-art segmentation networks, this model yields better segmentation performance, increasing the accuracy of the predictions while reducing the standard deviation, which demonstrates the efficiency of the approach to generate precise and reliable automatic segmentations of medical images.
Abstract: Even though convolutional neural networks (CNNs) are driving progress in medical image segmentation, standard models still have some drawbacks. First, the use of multi-scale approaches, i.e., encoder-decoder architectures, leads to a redundant use of information, where similar low-level features are extracted multiple times at multiple scales. Second, long-range feature dependencies are not efficiently modeled, resulting in non-optimal discriminative feature representations associated with each semantic class. In this paper we attempt to overcome these limitations with the proposed architecture, by capturing richer contextual dependencies based on the use of guided self-attention mechanisms. This approach is able to integrate local features with their corresponding global dependencies, as well as highlight interdependent channel maps in an adaptive manner. Further, the additional loss between different modules guides the attention mechanisms to neglect irrelevant information and focus on more discriminant regions of the image by emphasizing relevant feature associations. We evaluate the proposed model in the context of semantic segmentation on three different datasets: abdominal organs, cardiovascular structures and brain tumors. A series of ablation experiments support the importance of these attention modules in the proposed architecture. In addition, compared to other state-of-the-art segmentation networks our model yields better segmentation performance, increasing the accuracy of the predictions while reducing the standard deviation. This demonstrates the efficiency of our approach to generate precise and reliable automatic segmentations of medical images. Our code is made publicly available at: https://github.com/sinAshish/Multi-Scale-Attention .
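The self-attention at the heart of such modules follows the standard non-local formulation, in which each spatial position is re-expressed as a weighted sum of all positions. A minimal PyTorch sketch of a position-attention block, assuming DANet-style query/key/value convolutions; the layer sizes are illustrative, not the paper's exact module:

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Spatial self-attention over feature maps: each position is rewritten
    as an attention-weighted sum of all positions (non-local formulation)."""
    def __init__(self, c):
        super().__init__()
        self.q = nn.Conv2d(c, c // 8, 1)
        self.k = nn.Conv2d(c, c // 8, 1)
        self.v = nn.Conv2d(c, c, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)     # (b, hw, c/8)
        k = self.k(x).flatten(2)                     # (b, c/8, hw)
        attn = torch.softmax(q @ k, dim=-1)          # (b, hw, hw) affinities
        v = self.v(x).flatten(2)                     # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x
```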

302 citations


Journal ArticleDOI
TL;DR: Supports 2D, 3D, and 4D images such as X-ray, histopathology, CT, ultrasound, and diffusion MRI, with a focus on reproducibility and traceability to encourage open-science practices.

292 citations


Journal ArticleDOI
TL;DR: A comprehensive review of current medical image segmentation methods based on deep learning is provided to help researchers solve existing problems.
Abstract: As an emerging biomedical image processing technology, medical image segmentation has made great contributions to sustainable medical care and has become an important research direction in the field of computer vision. With the rapid development of deep learning, medical image processing based on deep convolutional neural networks has become a research hotspot. This paper focuses on medical image segmentation based on deep learning. First, the basic ideas and characteristics of deep-learning-based medical image segmentation are introduced, its research status is explained, the three main segmentation methods and their limitations are summarized, and future development directions are outlined. Building on a discussion of different pathological tissues and organs, their specificities and classic segmentation algorithms are summarized. Despite the great achievements of recent years, deep-learning-based medical image segmentation still faces difficulties: segmentation accuracy is not high, the number of medical images in datasets is small, and their resolution is low, so the resulting segmentations cannot meet actual clinical requirements. Aiming at these problems, a comprehensive review of current deep-learning-based medical image segmentation methods is provided to help researchers solve existing problems.

231 citations


Journal ArticleDOI
TL;DR: The experimental results in this paper show that traditional machine learning performs better on small-sample datasets, while deep learning frameworks achieve higher recognition accuracy on large-sample datasets.

227 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an approach using deep learning, TensorFlow, Keras, and OpenCV to detect face masks using Single Shot Multibox Detector as a face detector and MobilenetV2 architecture as a framework for the classifier.

193 citations


Journal ArticleDOI
TL;DR: This paper proposes a straightforward method for detecting adversarial image examples, which can be directly deployed into unmodified off-the-shelf DNN models and raises the bar for defense-aware attacks.
Abstract: Recently, many studies have demonstrated that deep neural network (DNN) classifiers can be fooled by adversarial examples, which are crafted by introducing small perturbations into an original sample. Accordingly, some powerful defense techniques have been proposed. However, existing defense techniques often require modifying the target model or depend on prior knowledge of attacks. In this paper, we propose a straightforward method for detecting adversarial image examples, which can be directly deployed on unmodified off-the-shelf DNN models. We consider the perturbation to images as a kind of noise and introduce two classic image processing techniques, scalar quantization and smoothing spatial filtering, to reduce its effect. The image entropy is employed as a metric to implement an adaptive noise reduction for different kinds of images. Consequently, an adversarial example can be effectively detected by comparing the classification results of a given sample and its denoised version, without referring to any prior knowledge of attacks. More than 20,000 adversarial examples against some state-of-the-art DNN models, crafted with different attack techniques, are used to evaluate the proposed method. The experiments show that our detection method achieves a high overall F1 score of 96.39 percent and certainly raises the bar for defense-aware attacks.
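The pipeline reduces to: pick a noise-reduction strength from the image's entropy, denoise, and flag the input if the model's label changes. A simplified sketch under those assumptions; the entropy threshold and quantization steps are illustrative rather than the paper's tuned values, and `model_predict` is a hypothetical stand-in for the target classifier:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy of the grayscale histogram (img is uint8 in [0, 255])."""
    hist, _ = np.histogram(img, bins=256, range=(0, 255), density=True)
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum())

def adaptive_denoise(img: np.ndarray) -> np.ndarray:
    """Scalar quantization plus a smoothing (mean) spatial filter, with the
    quantization step picked from image entropy. Thresholds are illustrative."""
    step = 32 if image_entropy(img) > 5.0 else 16   # coarser for busy images
    quantized = (img // step) * step + step // 2
    return uniform_filter(quantized.astype(np.float32), size=3)

def looks_adversarial(model_predict, img: np.ndarray) -> bool:
    """Flag the input if the label changes after noise reduction."""
    return model_predict(img) != model_predict(adaptive_denoise(img))
```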

185 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explored the performance of fuzzy system-based medical image processing for brain disease prediction, and designed a brain image processing and brain disease diagnosis prediction model based on improved fuzzy clustering and HPU-Net (Hybrid Pyramid U-Net Model for Brain Tumor Segmentation).
Abstract: The present work aims to explore the performance of fuzzy system-based medical image processing for brain disease prediction. The imaging mechanism of NMR (Nuclear Magnetic Resonance) and the complexity of human brain tissues cause the brain MRI (Magnetic Resonance Imaging) images to present varying degrees of noise, weak boundaries, and artifacts. Hence, improvements are made over the fuzzy clustering algorithm. While ensuring the model safety performance, a brain image processing and brain disease diagnosis prediction model is designed based on improved fuzzy clustering and HPU-Net (Hybrid Pyramid U-Net Model for Brain Tumor Segmentation). Brain MRI images collected from the Department of Brain Oncology, XX Hospital, are employed in simulation experiments to validate the performance of the proposed algorithm. Moreover, CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), FCM (Fuzzy C-Means), LDCFCM (Local Density Clustering Fuzzy C-Means), and AFCM (Adaptive Fuzzy C-Means) are included in simulation experiments for performance comparison. Results demonstrated that the proposed algorithm has more nodes, lower energy consumption, and more stable changes than other models under the same conditions. Regarding the overall network performance, the proposed algorithm can complete the data transmission tasks the fastest, basically maintaining at about 4.5 seconds on average, which performs remarkably better than other models. A further prediction performance analysis reveals that the proposed algorithm provides the highest prediction accuracy for the Whole Tumor under the DSC coefficient, reaching 0.936. Besides, its Jaccard coefficient is 0.845, proving its superior segmentation accuracy over other models. To sum up, the proposed algorithm can provide higher accuracy while ensuring energy consumption, a more apparent denoising effect, and the best segmentation and recognition effect than other models, which can provide an experimental basis for the feature recognition and predictive diagnosis of brain images.
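The fuzzy c-means (FCM) core that the improved algorithm builds on alternates two closed-form updates: fuzzy-weighted cluster centers and inverse-distance memberships. A minimal NumPy sketch of standard FCM, not the paper's improved variant:

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, eps=1e-6):
    """Standard fuzzy c-means. X: (N, d) data, c clusters, fuzzifier m > 1.
    Returns (cluster centers, membership matrix U with rows summing to 1)."""
    U = np.random.dirichlet(np.ones(c), size=X.shape[0])      # (N, c)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]        # fuzzy-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        ratio = d[:, :, None] / d[:, None, :]                 # d_ik / d_ij
        U_new = 1.0 / (ratio ** (2.0 / (m - 1))).sum(axis=2)  # membership update
        if np.abs(U_new - U).max() < eps:
            return centers, U_new
        U = U_new
    return centers, U
```

For image segmentation, X is typically the flattened array of pixel intensities (or small feature vectors per pixel), and each pixel is assigned to the cluster with its highest membership.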

Journal ArticleDOI
01 Jun 2021
TL;DR: The experimental results prove the superiority of the proposed method in terms of visual quality and a variety of quantitative evaluation criteria, with marked improvements in fusion effect, image detail clarity, and time efficiency.
Abstract: Deep learning technology has been extensively explored in pattern recognition and image processing. A multi-modal medical image fusion method based on deep learning is proposed, designed around the characteristics of multi-modal medical images, medical diagnostic technology, and the practical needs of medical diagnosis. It not only compensates for the deficiencies of MRI, CT and SPECT image fusion, but can also handle different types of multi-modal medical image fusion problems in batch-processing mode, effectively overcoming the limitation of processing only one page at a time. The proposed method greatly improves the fusion effect, image detail clarity and time efficiency. Experiments on multi-modal medical images are implemented to analyze performance, algorithm stability and so on. The experimental results prove the superiority of our proposed method in terms of visual quality and a variety of quantitative evaluation criteria.

Book ChapterDOI
TL;DR: A comparison of recent Deep Convolutional Neural Network (DCNN) architectures for automatic binary classification of pneumonia images, based on fine-tuned versions of VGG16, VGG19, DenseNet201, Inception_ResNet_V2, Inception_V3, ResNet50, MobileNet_V2 and Xception, is presented.
Abstract: Recently, researchers, specialists, and companies around the world have been rolling out deep learning and image processing-based systems that can rapidly process hundreds of X-Ray and Computed Tomography (CT) images to accelerate the diagnosis of pneumonia such as SARS, COVID-19, etc., and aid in its containment. Medical image analysis is one of the most promising research areas; it provides facilities for the diagnosis and decision-making of several diseases such as MERS, COVID-19, etc. In this paper, we present a comparison of recent deep convolutional neural network (CNN) architectures for the automatic binary classification of pneumonia images, based on fine-tuned versions of VGG16, VGG19, DenseNet201, Inception_ResNet_V2, Inception_V3, ResNet50, MobileNet_V2 and Xception, and a retraining of a baseline CNN. The proposed work has been tested using a chest X-Ray & CT dataset containing 6087 images (4504 pneumonia and 1583 normal). As a result, we can conclude that the fine-tuned version of ResNet50 shows highly satisfactory performance, with training and testing accuracy above 96%.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a high-speed and accurate fully-automated method to detect COVID-19 from the patient's chest CT scan images, which achieved 98.49% accuracy on more than 7996 test images.

Journal ArticleDOI
TL;DR: This paper systematically reviews the emerging achievements in image retrieval from RS big data and discusses RS image retrieval-based applications, including fusion-oriented RS image processing, geo-localization, and disaster rescue.

Journal ArticleDOI
TL;DR: The novel meta-heuristic algorithm called Black Widow Optimization (BWO) is introduced to find the best threshold configuration using Otsu or Kapur as the objective function, and is found to be the most promising approach for the multi-level image segmentation problem among the segmentation approaches currently used in the literature.
Abstract: Segmentation is a crucial step in image processing applications. This process separates the pixels of an image into multiple classes, permitting the analysis of the objects contained in the scene. Multilevel thresholding is a method that easily performs this task; the problem is to find the best set of thresholds that properly segments each image. Techniques such as Otsu's between-class variance or Kapur's entropy help to find the best thresholds, but they are computationally expensive for more than two thresholds. To overcome this problem, this paper introduces the novel meta-heuristic algorithm called Black Widow Optimization (BWO) to find the best threshold configuration using Otsu or Kapur as the objective function. To evaluate the performance and effectiveness of the BWO-based method, a variety of benchmark images were considered, and the method was compared against six well-known meta-heuristic algorithms: the Gray Wolf Optimization (GWO), Moth Flame Optimization (MFO), Whale Optimization Algorithm (WOA), Sine–Cosine Algorithm (SCA), Salp Swarm Algorithm (SSA), and Equilibrium Optimization (EO). The experimental results reveal that the proposed BWO-based method outperforms the competitor algorithms in terms of fitness values as well as other performance measures such as PSNR, SSIM and FSIM. The statistical analysis shows that the BWO-based method achieves efficient and reliable results in comparison with the other methods. Therefore, the BWO-based method was found to be the most promising approach for the multi-level image segmentation problem among the segmentation approaches currently used in the literature.
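The fitness function a meta-heuristic such as BWO maximizes in the Otsu case is the between-class variance induced by a candidate threshold set. A NumPy sketch; `hist` is the normalized 256-bin histogram, e.g. `np.bincount(img.ravel(), minlength=256) / img.size`:

```python
import numpy as np

def otsu_fitness(thresholds, hist):
    """Between-class variance for multilevel thresholding.
    thresholds: ints in (0, 255); hist: normalized 256-bin histogram."""
    bounds = [0] + sorted(int(t) for t in thresholds) + [256]
    levels = np.arange(256)
    mu_total = (levels * hist).sum()
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = hist[lo:hi].sum()                         # class probability
        if w <= 0:
            continue
        mu = (levels[lo:hi] * hist[lo:hi]).sum() / w  # class mean
        var += w * (mu - mu_total) ** 2               # weighted deviation
    return var
```

A meta-heuristic then searches the threshold space for the configuration with the highest `otsu_fitness`, which is exactly where exhaustive search becomes too expensive beyond two thresholds.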

Journal ArticleDOI
TL;DR: In this paper, a deep Fourier channel attention network (DFCAN) was proposed to learn hierarchical representations of high-frequency information about diverse biological structures using multimodal structured illumination microscopy (SIM).
Abstract: Deep neural networks have enabled astonishing transformations from low-resolution (LR) to super-resolved images. However, whether, and under what imaging conditions, such deep-learning models outperform super-resolution (SR) microscopy is poorly explored. Here, using multimodality structured illumination microscopy (SIM), we first provide an extensive dataset of LR-SR image pairs and evaluate the deep-learning SR models in terms of structural complexity, signal-to-noise ratio and upscaling factor. Second, we devise the deep Fourier channel attention network (DFCAN), which leverages the frequency content difference across distinct features to learn precise hierarchical representations of high-frequency information about diverse biological structures. Third, we show that DFCAN's Fourier domain focalization enables robust reconstruction of SIM images under low signal-to-noise ratio conditions. We demonstrate that DFCAN achieves comparable image quality to SIM over a tenfold longer duration in multicolor live-cell imaging experiments, which reveal the detailed structures of mitochondrial cristae and nucleoids and the interaction dynamics of organelles and cytoskeleton.
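The core idea, rescaling feature channels by a descriptor of their frequency content, can be sketched schematically in PyTorch. The layer sizes and pooling choices below are our assumptions for illustration, not the published DFCAN code:

```python
import torch
import torch.nn as nn

class FourierChannelAttention(nn.Module):
    """Rescale channels using a descriptor computed from each channel's
    Fourier amplitude spectrum (schematic version of the DFCAN idea)."""
    def __init__(self, c, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(c, c // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // reduction, c, 1), nn.Sigmoid())

    def forward(self, x):
        amp = torch.abs(torch.fft.fft2(x))         # per-channel spectrum
        desc = amp.mean(dim=(2, 3), keepdim=True)  # (b, c, 1, 1) descriptor
        return x * self.fc(desc)                   # channel-wise rescaling
```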

Journal ArticleDOI
TL;DR: The aim of this review is to provide an overview on the types of methods that are used within deep learning frameworks either to optimally prepare the input or to improve the results of the network output (post-processing), focusing on digital pathology image analysis.

Journal ArticleDOI
TL;DR: In this article, a comprehensive review of transfer learning in medical image analysis is presented, including the structure of CNN, background knowledge, different types of strategies performing transfer learning, different sub-fields of analysis, and discussion on the future prospect for transfer learning.
Abstract: Compared with common deep learning methods (e.g., convolutional neural networks), transfer learning is characterized by simplicity, efficiency and low training cost, breaking the curse of small datasets. Medical image analysis plays an indispensable role in both scientific research and clinical diagnosis. Common medical image acquisition methods include Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), X-Ray, etc. Although these medical imaging methods can be applied for non-invasive qualitative and quantitative analysis of patients, medical images, and especially their labels, are still scarce and insufficient compared with image datasets in other computer vision fields such as faces. Therefore, more and more researchers have adopted transfer learning for medical image processing. In this study, after reviewing one hundred representative papers from IEEE, Elsevier, Google Scholar, Web of Science and various other sources published from 2000 to 2020, a comprehensive review is presented, including (i) the structure of CNNs, (ii) background knowledge of transfer learning, (iii) different types of strategies for performing transfer learning, (iv) applications of transfer learning in various sub-fields of medical image analysis, and (v) a discussion of the future prospects of transfer learning in the field of medical image analysis. Through this review paper, beginners can gain an overall and systematic understanding of the application of transfer learning in medical image analysis. Policymakers in related fields will also benefit from the summary of trends in transfer learning for medical imaging and may be encouraged to craft policies that support its future development.

Journal ArticleDOI
18 Apr 2021-Sensors
TL;DR: In this article, an innovative method called BCAoMID-F (Binarized Common Areas of Maximum Image Differences-Fusion) is proposed to extract features of thermal images of three angle grinders.
Abstract: The paper presents an analysis and classification method to evaluate the working condition of angle grinders by means of infrared (IR) thermography and IR image processing. An innovative method called BCAoMID-F (Binarized Common Areas of Maximum Image Differences—Fusion) is proposed in this paper. This method is used to extract features from thermal images of three angle grinders. The computed features are 1-element or 256-element vectors: the sum of the pixels of matrix V, the PCA of matrix V, or the histogram of matrix V. Three different cases of thermal images were considered: a healthy angle grinder, an angle grinder with 1 blocked air inlet, and an angle grinder with 2 blocked air inlets. The classification of feature vectors was carried out using two classifiers: Support Vector Machine and Nearest Neighbor. The total recognition efficiency for the 3 classes (TRAG) was in the range of 98.5–100%. The presented technique is efficient for fault diagnosis of electrical devices and electric power tools.
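One possible schematic reading of the feature extraction, under our own assumptions: binarize pairwise absolute differences of the thermal images by a fixed threshold, AND-fuse the common areas into the matrix V, and summarize V as the abstract states. The pairing and threshold logic here are assumptions, not the paper's exact BCAoMID-F definition:

```python
import numpy as np

def bcaomid_features(images, thresh=30):
    """Schematic sketch: binarize pairwise absolute differences of thermal
    images, AND-fuse common areas into matrix V, then summarize V.
    The pairing/threshold details are our assumption, not the paper's method."""
    diffs = [np.abs(a.astype(np.int16) - b.astype(np.int16))
             for i, a in enumerate(images) for b in images[i + 1:]]
    V = np.logical_and.reduce([d >= thresh for d in diffs]).astype(np.uint8)
    return {
        "sum": int(V.sum()),                                  # 1-element feature
        "histogram": np.bincount(V.ravel(), minlength=256),   # 256-element feature
    }
```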

Journal ArticleDOI
TL;DR: A detailed description of the application of deep learning in defect classification, localization and segmentation follows the discussion of traditional defect detection algorithms.
Abstract: Machine vision significantly improves the efficiency, quality, and reliability of defect detection. In visual inspection, excellent optical illumination platforms and suitable image acquisition hardware are the prerequisites for obtaining high-quality images. Image processing and analysis are key technologies in obtaining defect information, while deep learning is significantly impacting the field of image analysis. In this study, a brief history and the state of the art in optical illumination, image acquisition, image processing, and image analysis in the field of visual inspection are systematically discussed. The latest developments in industrial defect detection based on machine vision are introduced. In the further development of the field of visual inspection, the application of deep learning will play an increasingly important role. Thus, a detailed description of the application of deep learning in defect classification, localization and segmentation follows the discussion of traditional defect detection algorithms. Finally, future prospects for the development of visual inspection technology are explored.

Journal ArticleDOI
TL;DR: An overview of the use of CNNs for image classification, segmentation, detection, and other tasks such as registration, content-based image retrieval, image generation and enhancement, in some typical medical diagnosis areas such as the brain, breast, and abdomen is presented.

Journal ArticleDOI
TL;DR: An automated crack detection method based on image processing using the light gradient boosting machine (LightGBM), one of the supervised machine learning methods, that detects cracks with high accuracy while shortening training time.
Abstract: Automated crack detection based on image processing is widely used when inspecting concrete structures. The existing methods for crack detection are not yet accurate enough due to the diff...

Journal ArticleDOI
Ailong Ma, Yuting Wan, Yanfei Zhong, Junjue Wang, Liangpei Zhang
TL;DR: In this article, a framework for scene classification network architecture search based on multi-objective neural evolution (SceneNet) is proposed, and the effectiveness of SceneNet is demonstrated by experimental comparisons with several deep neural networks designed by human experts.
Abstract: The scene classification approaches using deep learning have been the subject of much attention for remote sensing imagery. However, most deep learning networks have been constructed with a fixed architecture for natural image processing, and they are difficult to apply directly to remote sensing images, due to the more complex geometric structural features. Thus, there is an urgent need for automatic search for the most suitable neural network architecture from the image data in scene classification, in which a powerful search mechanism is required, and the computational complexity and performance error of the searched network should be balanced for a practical choice. In this article, a framework for scene classification network architecture search based on multi-objective neural evolution (SceneNet) is proposed. In SceneNet, the network architecture coding and searching are achieved using an evolutionary algorithm, which can implement a more flexible hierarchical extraction of the remote sensing image scene information. Moreover, the computational complexity and the performance error of the searched network are balanced by employing the multi-objective optimization method, and the competitive neural architectures are obtained in a Pareto solution set. The effectiveness of SceneNet is demonstrated by experimental comparisons with several deep neural networks designed by human experts.

Journal ArticleDOI
TL;DR: A novel unsupervised deep learning model is proposed to address the multi-focus image fusion problem; it analyzes sharp appearance in deep features instead of the original images to achieve state-of-the-art fusion performance.
Abstract: Multi-focus image fusion is the extraction of focused regions from different images to create one all-in-focus fused image. The key point is that only objects within the depth-of-field have a sharp appearance in the photograph, while other objects are likely to be blurred. We propose an unsupervised deep learning model for multi-focus image fusion. We train an encoder–decoder network in an unsupervised manner to acquire deep features of the input images. Then, we utilize spatial frequency, a gradient-based method to measure sharp variation from these deep features, to reflect activity levels. We apply some consistency verification methods to adjust the decision map and draw out the fused result. Our method analyzes sharp appearances in deep features instead of the original images, which can be seen as another success story of unsupervised learning in image processing. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion performance compared to 16 fusion methods in objective and subjective assessments, especially in gradient-based fusion metrics.
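The activity measure used here, spatial frequency, is the root-mean-square of horizontal and vertical first differences. A NumPy sketch applied blockwise to two feature maps; the block size and the blockwise decision rule are illustrative:

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """SF = sqrt(RF^2 + CF^2), where RF/CF are RMS row/column differences."""
    b = block.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))  # row (horizontal) frequency
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))  # column (vertical) frequency
    return float(np.hypot(rf, cf))

def focus_decision_map(feat_a, feat_b, block=8):
    """Pick, per block, whichever feature map shows the higher activity."""
    h, w = feat_a.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = feat_a[i:i + block, j:j + block]
            b = feat_b[i:i + block, j:j + block]
            mask[i:i + block, j:j + block] = \
                spatial_frequency(a) >= spatial_frequency(b)
    return mask  # True where source A is judged in-focus
```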

Journal ArticleDOI
TL;DR: In this article, a fast wind turbine abnormal data cleaning algorithm via image processing for wind turbine power generation performance measurement and evaluation is proposed, which includes two stages, data cleaning and data classification.
Abstract: A fast wind turbine abnormal data cleaning algorithm via image processing for wind turbine power generation performance measurement and evaluation is proposed in this paper. The proposed method includes two stages, data cleaning and data classification. At the data cleaning stage, pixels of normal data are extracted via image processing based on pixel spatial distribution characteristics of abnormal and normal data in wind power curve (WPC) images. At the data classification stage, wind power data points are classified as normal and abnormal based on the existence of corresponding pixels in the processed WPC image. To accelerate the proposed method, the cleaning operation is executed parallelly using graphics processing units (GPUs) via compute unified device architecture (CUDA). The effectiveness of the proposed method is validated based on real data sets collected from 37 wind turbines of two commercial farms and three types of GPUs are employed to implement the proposed algorithm. The computational results prove the proposed approach has achieved better performance in cleaning abnormal wind power data while the execution time is tremendously reduced. Therefore, the proposed method is available and practical for real wind turbine power generation performance evaluation and monitoring tasks.
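The core idea, rasterizing the wind speed/power scatter into an image, keeping dense pixel regions as "normal", and classifying points by whether their pixel survives, can be sketched as follows. The density thresholding here stands in for the paper's full image-processing pipeline, and the threshold value is illustrative:

```python
import numpy as np

def clean_wind_power(speed, power, bins=256, density_thresh=5):
    """Label each (speed, power) sample normal/abnormal by rasterizing the
    scatter into a 2D histogram image and keeping sufficiently dense pixels."""
    hist, xe, ye = np.histogram2d(speed, power, bins=bins)
    keep = hist >= density_thresh                        # pixels of normal data
    ix = np.clip(np.searchsorted(xe, speed, side="right") - 1, 0, bins - 1)
    iy = np.clip(np.searchsorted(ye, power, side="right") - 1, 0, bins - 1)
    return keep[ix, iy]                                  # True = normal sample
```

The per-pixel operations are embarrassingly parallel, which is what makes the GPU/CUDA acceleration described in the abstract natural for this method.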

Journal ArticleDOI
TL;DR: The experimental results show that the proposed complex-valued denoising CNN performs competitively against existing state-of-the-art real-valuedDenoisingCNNs, with better robustness to possible inconsistencies of noise models between training samples and test images.

Journal ArticleDOI
TL;DR: In this article, state-of-the-art image fusion methods at diverse levels, with their pros and cons, various spatial- and transform-based methods with quality metrics, and their applications in different domains are reviewed.
Abstract: The need for image fusion in image processing applications has grown recently due to the tremendous number of acquisition systems. Fusion of images is defined as an alignment of noteworthy information from diverse sensors using various mathematical models to generate a single compound image. Image fusion integrates complementary multi-temporal, multi-view and multi-sensor information into a single image with improved image quality, while keeping the integrity of important features. It is considered a vital pre-processing phase for several applications such as robot vision, aerial and satellite imaging, medical imaging, and robot or vehicle guidance. In this paper, various state-of-the-art image fusion methods at diverse levels, with their pros and cons, various spatial- and transform-based methods with quality metrics, and their applications in different domains are discussed. Finally, this review concludes with various future directions for different applications of image fusion.

Journal ArticleDOI
TL;DR: QSIPrep as mentioned in this paper is an integrative software platform for the processing of diffusion images that is compatible with nearly all dMRI sampling schemes, and facilitates the implementation of best practices for processing diffusion images.
Abstract: Diffusion-weighted magnetic resonance imaging (dMRI) is the primary method for noninvasively studying the organization of white matter in the human brain. Here we introduce QSIPrep, an integrative software platform for the processing of diffusion images that is compatible with nearly all dMRI sampling schemes. Drawing on a diverse set of software suites to capitalize on their complementary strengths, QSIPrep facilitates the implementation of best practices for processing of diffusion images. QSIPrep is a software platform for processing of most diffusion MRI datasets and ensures that adequate workflows are used.

Journal ArticleDOI
TL;DR: DLIR significantly reduced the image noise in chest LDCT scan images compared with ASiR-V 30% while maintaining superior image quality.
Abstract: OBJECTIVE Iterative reconstruction degrades image quality. Thus, further advances in image reconstruction are necessary to overcome some limitations of this technique in low-dose computed tomography (LDCT) scan of the chest. Deep-learning image reconstruction (DLIR) is a new method used to reduce dose while maintaining image quality. The purpose of this study was to evaluate image quality and noise of LDCT scan images reconstructed with DLIR and compare them with those of images reconstructed with the adaptive statistical iterative reconstruction-Veo at a level of 30% (ASiR-V 30%). MATERIALS AND METHODS This retrospective study included 58 patients who underwent LDCT scan for lung cancer screening. Datasets were reconstructed with ASiR-V 30% and DLIR at medium and high levels (DLIR-M and DLIR-H, respectively). The objective image signal and noise, which represented mean attenuation value and standard deviation in Hounsfield units for the lungs, mediastinum, liver, and background air, and subjective image contrast, image noise, and conspicuity of structures were evaluated. The differences between CT scan images subjected to ASiR-V 30%, DLIR-M, and DLIR-H were evaluated. RESULTS Based on the objective analysis, the image signals did not significantly differ among ASiR-V 30%, DLIR-M, and DLIR-H (p = 0.949, 0.737, 0.366, and 0.358 in the lungs, mediastinum, liver, and background air, respectively). However, the noise was significantly lower in DLIR-M and DLIR-H than in ASiR-V 30% (all p < 0.001). DLIR had higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than ASiR-V 30% (p = 0.027, < 0.001, and < 0.001 in the SNR of the lungs, mediastinum, and liver, respectively; all p < 0.001 in the CNR). According to the subjective analysis, DLIR had higher image contrast and lower image noise than ASiR-V 30% (all p < 0.001). DLIR was superior to ASiR-V 30% in identifying the pulmonary arteries and veins, trachea and bronchi, lymph nodes, and pleura and pericardium (all p < 0.001). CONCLUSION DLIR significantly reduced the image noise in chest LDCT scan images compared with ASiR-V 30% while maintaining superior image quality.
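The objective metrics used here follow the usual region-of-interest definitions: noise is the standard deviation of attenuation (HU) within an ROI, SNR is the ROI mean over its SD, and CNR contrasts a tissue ROI against a background ROI relative to the background noise. Exact definitions vary slightly across studies; a small sketch of one common form:

```python
import numpy as np

def roi_stats(hu: np.ndarray):
    """Signal (mean HU) and noise (SD of HU) inside a region of interest."""
    return float(hu.mean()), float(hu.std())

def snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio: ROI mean attenuation over its own noise."""
    mean, sd = roi_stats(roi)
    return mean / sd

def cnr(roi_tissue: np.ndarray, roi_background: np.ndarray) -> float:
    """Contrast-to-noise ratio: attenuation difference over background noise."""
    mt, _ = roi_stats(roi_tissue)
    mb, sb = roi_stats(roi_background)
    return (mt - mb) / sb
```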