Journal ISSN: 2638-6100

Radiology: Artificial Intelligence

Radiological Society of North America
About: Radiology: Artificial Intelligence is an academic journal published by the Radiological Society of North America. The journal publishes mainly in the areas of medicine and computer science. It has the ISSN identifier 2638-6100. Over its lifetime, 79 publications have been published, receiving 294 citations. The journal is also known as Radiol Artif Intell or RAI.

Papers published on a yearly basis

Papers
Journal Article • DOI
TL;DR: Transformer-based language models tailored to radiology improved performance on radiology NLP tasks compared with baseline transformer language models.
Abstract: Purpose To investigate whether tailoring a transformer-based language model to radiology is beneficial for radiology natural language processing (NLP) applications. Materials and Methods This retrospective study presents a family of bidirectional encoder representations from transformers (BERT)-based language models adapted for radiology, named RadBERT. Transformers were pretrained with either 2.16 or 4.42 million radiology reports from U.S. Department of Veterans Affairs health care systems nationwide on top of four different initializations (BERT-base, Clinical-BERT, robustly optimized BERT pretraining approach [RoBERTa], and BioMed-RoBERTa) to create six variants of RadBERT. Each variant was fine-tuned for three representative NLP tasks in radiology: (a) abnormal sentence classification: models classified sentences in radiology reports as reporting abnormal or normal findings; (b) report coding: models assigned a diagnostic code to a given radiology report for five coding systems; and (c) report summarization: given the findings section of a radiology report, models selected key sentences that summarized the findings. Model performance was compared by bootstrap resampling with five intensively studied transformer language models as baselines: BERT-base, BioBERT, Clinical-BERT, BlueBERT, and BioMed-RoBERTa. Results For abnormal sentence classification, all models performed well (accuracies above 97.5 and F1 scores above 95.0). RadBERT variants achieved significantly higher scores than corresponding baselines when given only 10% or less of the 12 458 annotated training sentences. For report coding, all variants outperformed baselines significantly for all five coding systems. The variant RadBERT-BioMed-RoBERTa performed best among all models for report summarization, achieving a Recall-Oriented Understudy for Gisting Evaluation-1 score of 16.18, compared with 15.27 for the corresponding baseline (BioMed-RoBERTa; P < .004). Conclusion Transformer-based language models tailored to radiology improved performance on radiology NLP tasks compared with baseline transformer language models. Keywords: Translation, Unsupervised Learning, Transfer Learning, Neural Networks, Informatics. Supplemental material is available for this article. © RSNA, 2022. See also the commentary by Wiggins and Tejani in this issue.
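As a rough illustration of the fine-tuning step described in this abstract, the sketch below fine-tunes a generic BERT-base encoder for the abnormal-sentence classification task with the Hugging Face transformers library. It is a minimal sketch under stated assumptions: "bert-base-uncased" is a stand-in for a RadBERT checkpoint, and the sentences and labels are toy placeholders, not data from the study.

```python
# Minimal sketch: fine-tuning a BERT-style encoder to label report sentences
# as normal (0) or abnormal (1). "bert-base-uncased" stands in for a RadBERT
# checkpoint; sentences and labels below are illustrative toy data.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

sentences = [
    "No acute intracranial abnormality.",
    "There is a 2.3 cm spiculated mass in the right upper lobe.",
]
labels = torch.tensor([0, 1])

enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], labels)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for input_ids, attention_mask, y in DataLoader(dataset, batch_size=2):
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
    out.loss.backward()          # cross-entropy loss computed by the model head
    optimizer.step()
    optimizer.zero_grad()
```

In the study, the same fine-tuning recipe would start from radiology-pretrained RadBERT weights rather than a general-domain checkpoint, a substitution the abstract reports improves performance over the baselines.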

23 citations

Journal Article • DOI
TL;DR: This report focuses on four aspects of model development where bias may arise: data augmentation, model and loss function, optimizers, and transfer learning.
Abstract: There are increasing concerns about the bias and fairness of artificial intelligence (AI) models as they are put into clinical practice. Among the steps for implementing machine learning tools into clinical workflow, model development is an important stage where different types of biases can occur. This report focuses on four aspects of model development where such bias may arise: data augmentation, model and loss function, optimizers, and transfer learning. This report emphasizes appropriate considerations and practices that can mitigate biases in radiology AI studies. Keywords: Model, Bias, Machine Learning, Deep Learning, Radiology © RSNA, 2022.
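One concrete example of the loss-function aspect mentioned above: when abnormal findings are rare, an unweighted loss can bias a model toward the majority class. Below is a minimal sketch assuming a simple inverse-frequency reweighting of the cross-entropy loss; the labels and logits are toy values, not from the report.

```python
# Minimal sketch: inverse-frequency class weights for cross-entropy,
# one common way to reduce label-imbalance bias at the loss-function step.
import torch
import torch.nn as nn

labels = torch.tensor([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])   # toy imbalanced labels
counts = torch.bincount(labels).float()                  # per-class counts
weights = counts.sum() / (len(counts) * counts)          # rarer class gets a larger weight

criterion = nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(len(labels), 2)                     # stand-in model outputs
loss = criterion(logits, labels)
print(weights.tolist(), loss.item())
```

Analogous choices apply to the other aspects the report covers (data augmentation, optimizers, and transfer learning).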

18 citations

Journal Article • DOI
TL;DR: This study compared RadImageNet pretraining with ImageNet pretraining using the area under the receiver operating characteristic curve (AUC) for eight classification tasks and Dice scores for two segmentation problems; see also the accompanying commentary by Cadrin-Chênevert.
Abstract: To demonstrate the value of pretraining with millions of radiologic images compared with ImageNet photographic images on downstream medical applications when using transfer learning. This retrospective study included patients who underwent a radiologic study between 2005 and 2020 at an outpatient imaging facility. Key images and associated labels from the studies were retrospectively extracted from the original study interpretation. These images were used for RadImageNet model training with random weight initialization. The RadImageNet models were compared with ImageNet models using the area under the receiver operating characteristic curve (AUC) for eight classification tasks and Dice scores for two segmentation problems. The RadImageNet database consists of 1.35 million annotated medical images in 131 872 patients who underwent CT, MRI, and US for musculoskeletal, neurologic, oncologic, gastrointestinal, endocrine, abdominal, and pulmonary pathologic conditions. For transfer learning tasks on small datasets, namely thyroid nodules (US), breast masses (US), anterior cruciate ligament injuries (MRI), and meniscal tears (MRI), the RadImageNet models demonstrated a significant advantage (P < .001) over ImageNet models (9.4%, 4.0%, 4.8%, and 4.5% AUC improvements, respectively). For larger datasets, namely pneumonia (chest radiography), COVID-19 (CT), SARS-CoV-2 (CT), and intracranial hemorrhage (CT), the RadImageNet models also showed improved AUC (P < .001), by 1.9%, 6.1%, 1.7%, and 0.9%, respectively. Additionally, lesion localizations of the RadImageNet models were improved by 64.6% and 16.4% on the thyroid and breast US datasets, respectively. RadImageNet pretrained models demonstrated better interpretability compared with ImageNet models, especially for smaller radiologic datasets. Keywords: CT, MR Imaging, US, Head/Neck, Thorax, Brain/Brain Stem, Evidence-based Medicine, Computer Applications-General (Informatics). Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Cadrin-Chênevert in this issue.
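To make the transfer-learning setup concrete, here is a minimal sketch of swapping the pretraining source for a ResNet-50 backbone before fine-tuning on a small downstream task. The torchvision ImageNet weights are real; the RadImageNet checkpoint path is a hypothetical placeholder for weights obtained separately, and the two-class head is illustrative.

```python
# Minimal sketch: choose between ImageNet weights and a (hypothetical, locally
# stored) RadImageNet checkpoint, then fine-tune a new head on a small dataset.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

use_radimagenet = False  # flip to True if a RadImageNet checkpoint is available

model = resnet50(weights=None if use_radimagenet else ResNet50_Weights.IMAGENET1K_V1)
if use_radimagenet:
    state = torch.load("radimagenet_resnet50.pt")  # hypothetical checkpoint path
    model.load_state_dict(state, strict=False)

model.fc = nn.Linear(model.fc.in_features, 2)      # e.g. lesion present vs absent

# Freeze the backbone and train only the new head, as is typical for small datasets.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```

The study's comparison amounts to running this kind of pipeline once per pretraining source and comparing AUC (classification) or Dice scores (segmentation) on the held-out downstream data.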

17 citations

Journal Article • DOI
TL;DR: Artificial intelligence-based software can achieve noninferior image quality for 3D brain MRI sequences with a 45% scan time reduction, potentially improving the patient experience and scanner efficiency without sacrificing diagnostic quality.
Abstract: Artificial intelligence (AI)-based image enhancement has the potential to reduce scan times while improving signal-to-noise ratio (SNR) and maintaining spatial resolution. This study prospectively evaluated AI-based image enhancement in 32 consecutive patients undergoing clinical brain MRI. Standard-of-care (SOC) three-dimensional (3D) T1 precontrast, 3D T2 fluid-attenuated inversion recovery, and 3D T1 postcontrast sequences were performed along with 45% faster versions of these sequences using half the number of phase-encoding steps. Images from the faster sequences were processed with Food and Drug Administration-cleared, AI-based image enhancement software for resolution enhancement. Four board-certified neuroradiologists independently scored the SOC and AI-enhanced image series on a five-point Likert scale for image SNR, anatomic conspicuity, overall image quality, imaging artifacts, and diagnostic confidence. Although interrater κ was low to fair, the AI-enhanced scans were noninferior for all metrics and demonstrated a qualitative SNR improvement. Quantitative analyses showed that the AI software restored the high spatial resolution of small structures, such as the septum pellucidum. In conclusion, AI-based software can achieve noninferior image quality for 3D brain MRI sequences with a 45% scan time reduction, potentially improving the patient experience and scanner efficiency without sacrificing diagnostic quality. Keywords: MR Imaging, CNS, Brain/Brain Stem, Reconstruction Algorithms © RSNA, 2022.
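As a rough sketch of how a noninferiority claim on paired reader scores can be checked, the snippet below bootstraps the mean difference in Likert scores between the AI-enhanced and standard-of-care series. The scores, the 0.5-point noninferiority margin, and the seed are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch: bootstrap a confidence bound on the mean paired difference in
# 5-point Likert scores (AI-enhanced minus standard of care); the data and the
# 0.5-point noninferiority margin are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(0)
soc = rng.integers(3, 6, size=32)                             # toy standard-of-care scores (3-5)
fast_ai = np.clip(soc + rng.integers(-1, 2, size=32), 1, 5)   # toy AI-enhanced scores

diffs = fast_ai - soc
boot_means = [rng.choice(diffs, size=diffs.size, replace=True).mean()
              for _ in range(10_000)]
lower = np.percentile(boot_means, 2.5)                        # lower bound of a 95% CI

margin = 0.5
print(f"mean difference {diffs.mean():+.2f}, 95% CI lower bound {lower:+.2f}")
print("noninferior" if lower > -margin else "inconclusive")
```

A full reader study would compare each metric and reader with appropriate multiplicity handling; this sketch only shows the basic shape of such a test.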

15 citations

Journal Article • DOI
TL;DR: In this commentary, Wiggins and Tejani discuss the opportunities and risks of foundation models for natural language processing in radiology, situating RadBERT and related transformer-based language models within that broader context.
Abstract: Commentary: On the Opportunities and Risks of Foundation Models for Natural Language Processing in Radiology. Walter F. Wiggins and Ali S. Tejani. Author affiliations: Department of Radiology, Duke University Health System, Durham, NC, and Duke Center for Artificial Intelligence in Radiology, Duke University School of Medicine, Durham, NC (W.F.W.); Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Tex (A.S.T.). Radiology: Artificial Intelligence, Vol. 4, No. 4. https://doi.org/10.1148/ryai.220119. Received June 17, 2022; revision requested June 19, 2022; revision received June 23, 2022; accepted June 27, 2022; published online July 20, 2022. Accompanying article: RadBERT: Adapting Transformer-based Language Models to Radiology (Radiol Artif Intell 2022;4(4):e210258).

14 citations

Performance Metrics

No. of papers from the journal in previous years:

Year    Papers
2023    27
2022    65