scispace - formally typeset
Author

Tim Leiner

Bio: Tim Leiner is an academic researcher from Utrecht University. The author has contributed to research on topics including medicine and coronary artery disease. The author has an h-index of 52 and has co-authored 363 publications receiving 10,496 citations. Previous affiliations of Tim Leiner include Maastricht University and University Medical Center Utrecht.


Papers
Journal ArticleDOI
TL;DR: Noise reduction improved quantification of low-density calcified inserts in phantom CT images and allowed coronary calcium scoring in low-dose patient CT images with high noise levels.
Abstract: Noise is inherent to low-dose CT acquisition. We propose to train a convolutional neural network (CNN) jointly with an adversarial CNN to estimate routine-dose CT images from low-dose CT images and hence reduce noise. A generator CNN was trained to transform low-dose CT images into routine-dose CT images using voxelwise loss minimization. An adversarial discriminator CNN was simultaneously trained to distinguish the output of the generator from routine-dose CT images. The performance of this discriminator was used as an adversarial loss for the generator. Experiments were performed using CT images of an anthropomorphic phantom containing calcium inserts, as well as patient non-contrast-enhanced cardiac CT images. The phantom and patients were scanned at 20% and 100% of routine clinical dose. Three training strategies were compared: the first used only voxelwise loss, the second combined voxelwise loss and adversarial loss, and the third used only adversarial loss. The results showed that training with only voxelwise loss resulted in the highest peak signal-to-noise ratio with respect to reference routine-dose images. However, CNNs trained with adversarial loss captured image statistics of routine-dose images better. Noise reduction improved quantification of low-density calcified inserts in phantom CT images and allowed coronary calcium scoring in low-dose patient CT images with high noise levels. Testing took less than 10 s per CT volume. CNN-based low-dose CT noise reduction in the image domain is feasible. Training with an adversarial network improves the CNN's ability to generate images with an appearance similar to that of reference routine-dose CT images.
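The second training strategy described above combines a voxelwise loss with an adversarial loss. A minimal NumPy sketch of such a combined generator objective is shown below; the loss weights `lam_vox` and `lam_adv` are illustrative assumptions, not values from the paper, and real training would compute these terms inside a deep-learning framework.

```python
import numpy as np

def voxelwise_loss(generated, reference):
    """Mean squared error between generated and reference volumes."""
    return np.mean((generated - reference) ** 2)

def adversarial_loss(disc_prob_on_generated):
    """Generator's adversarial term: push the discriminator's estimated
    probability that the generated volume is 'routine-dose' toward 1."""
    eps = 1e-12  # numerical safety for log
    return -np.mean(np.log(disc_prob_on_generated + eps))

def generator_loss(generated, reference, disc_prob, lam_vox=1.0, lam_adv=0.1):
    """Combined objective (second training strategy): weighted sum of
    voxelwise and adversarial losses. Weights are illustrative only."""
    return (lam_vox * voxelwise_loss(generated, reference)
            + lam_adv * adversarial_loss(disc_prob))
```

Setting `lam_adv=0` recovers the first strategy (voxelwise loss only), and `lam_vox=0` the third (adversarial loss only).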

781 citations

Journal ArticleDOI
TL;DR: SPECT is widely available and most extensively validated; PET achieved the highest diagnostic performance; CMR may provide an alternative without ionizing radiation and a similar diagnostic accuracy as PET.

412 citations

Journal ArticleDOI
TL;DR: Iterative reconstruction technology for CT is presented in non-mathematical terms and IR can improve image quality in routine-dose CT and lower the radiation dose, and IR's disadvantages include longer computation and blotchy appearance of some images.
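Iterative reconstruction, at its core, repeatedly corrects the current image estimate using the mismatch between measured and simulated projections. The toy Landweber iteration below illustrates this update-from-residual idea on a small linear system; commercial IR algorithms are far more elaborate (statistical weighting, regularization, vendor-specific models), so this is only a conceptual sketch.

```python
import numpy as np

def landweber(A, b, n_iter=200, step=None):
    """Toy iterative reconstruction: refine the image estimate x by
    back-projecting the residual b - A @ x, where A maps an image to
    its projections and b holds the measured projection data."""
    m, n = A.shape
    if step is None:
        # Step size below 2 / ||A||_2^2 guarantees convergence.
        step = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * (A.T @ (b - A @ x))
    return x
```

Each pass trades extra computation for lower noise, which mirrors IR's clinical trade-off noted above: better image quality at lower dose, but longer reconstruction times.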
Abstract: Objectives: To explain the technical principles of and differences between commercially available iterative reconstruction (IR) algorithms for computed tomography (CT) in non-mathematical terms for radiologists and clinicians.

357 citations

Journal ArticleDOI
TL;DR: The purpose of this study was to determine the interobserver agreement and diagnostic accuracy of CTA and MRA in comparison with DSA and to examine whether CTA or MRA can be used as an initial test for detection of renal artery stenosis.
Abstract: Computed tomographic angiography and magnetic resonance angiography are not sufficiently reproducible or sensitive to rule out renal artery stenosis in hypertensive patients. Therefore, digital sub...

330 citations

Journal ArticleDOI
TL;DR: Stress myocardial perfusion imaging with MRI, computed tomography, or positron emission tomography can accurately rule out hemodynamically significant coronary artery disease and can act as a gatekeeper for invasive revascularization.
Abstract: Background— Hemodynamically significant coronary artery disease is an important indication for revascularization. Stress myocardial perfusion imaging is a noninvasive alternative to invasive fractional flow reserve for evaluating hemodynamically significant coronary artery disease. The aim was to determine the diagnostic accuracy of myocardial perfusion imaging by single-photon emission computed tomography, echocardiography, MRI, positron emission tomography, and computed tomography compared with invasive coronary angiography with fractional flow reserve for the diagnosis of hemodynamically significant coronary artery disease. Methods and Results— The meta-analysis adhered to the Preferred Reporting Items for Systematic Reviews and Meta-analyses statement. PubMed, EMBASE, and Web of Science were searched until May 2014. Thirty-seven studies, reporting on 4721 vessels and 2048 patients, were included. Meta-analysis yielded pooled sensitivity, pooled specificity, pooled likelihood ratios (LR), pooled diagnostic odds ratio, and summary area under the receiver operating characteristic curve. The negative LR (NLR) was chosen as the primary outcome. At the vessel level, MRI (pooled NLR, 0.16; 95% confidence interval [CI], 0.13–0.21) performed similarly to computed tomography (pooled NLR, 0.22; 95% CI, 0.12–0.39) and positron emission tomography (pooled NLR, 0.15; 95% CI, 0.05–0.44), and better than single-photon emission computed tomography (pooled NLR, 0.47; 95% CI, 0.37–0.59). At the patient level, MRI (pooled NLR, 0.14; 95% CI, 0.10–0.18) performed similarly to computed tomography (pooled NLR, 0.12; 95% CI, 0.04–0.33) and positron emission tomography (pooled NLR, 0.14; 95% CI, 0.02–0.87), and better than single-photon emission computed tomography (pooled NLR, 0.39; 95% CI, 0.27–0.55) and echocardiography (pooled NLR, 0.42; 95% CI, 0.30–0.59).
Conclusions— Stress myocardial perfusion imaging with MRI, computed tomography, or positron emission tomography can accurately rule out hemodynamically significant coronary artery disease and can act as a gatekeeper for invasive revascularization. Single-photon emission computed tomography and echocardiography are less suited for this purpose.
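The negative likelihood ratio used as the primary outcome above derives from sensitivity and specificity. The helper names below are illustrative; note that the pooled NLRs in the meta-analysis come from formal pooling across studies, not from applying this per-study formula directly.

```python
def negative_likelihood_ratio(sensitivity, specificity):
    """NLR = (1 - sensitivity) / specificity: how much a negative test
    result lowers the odds of disease (smaller is better for rule-out)."""
    return (1.0 - sensitivity) / specificity

def post_test_odds(pre_test_odds, nlr):
    """A negative result multiplies the pre-test odds by the NLR."""
    return pre_test_odds * nlr
```

For example, a test with 90% sensitivity and 80% specificity has an NLR of 0.125, so a negative result cuts the odds of disease to one-eighth of the pre-test odds, which is why low pooled NLRs support the gatekeeper role described above.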

314 citations


Cited by
Journal ArticleDOI
TL;DR: This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year, to survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks.

8,730 citations

Journal ArticleDOI
TL;DR: The goals of this new consensus are to provide an abbreviated document to focus on key aspects of diagnosis and management, and to update the information based on new publications and the newer guidelines, but not to add an extensive list of references.

7,099 citations

Journal ArticleDOI
TL;DR: The 11th edition of Harrison's Principles of Internal Medicine welcomes Anthony Fauci to its editorial staff, in addition to more than 85 new contributors.
Abstract: The 11th edition of Harrison's Principles of Internal Medicine welcomes Anthony Fauci to its editorial staff, in addition to more than 85 new contributors. While the organization of the book is similar to previous editions, major emphasis has been placed on disorders that affect multiple organ systems. Important advances in genetics, immunology, and oncology are emphasized. Many chapters of the book have been rewritten and describe major advances in internal medicine. Subjects that received only a paragraph or two of attention in previous editions are now covered in entire chapters. Among the chapters that have been extensively revised are the chapters on infections in the compromised host, on skin rashes in infections, on many of the viral infections, including cytomegalovirus and Epstein-Barr virus, on sexually transmitted diseases, on diabetes mellitus, on disorders of bone and mineral metabolism, and on lymphadenopathy and splenomegaly. The major revisions in these chapters and many

6,968 citations

Journal ArticleDOI
TL;DR: This survey will present existing methods for Data Augmentation, promising developments, and meta-level decisions for implementing Data Augmentation, a data-space solution to the problem of limited data.
Abstract: Deep convolutional neural networks have performed remarkably well on many Computer Vision tasks. However, these networks are heavily reliant on big data to avoid overfitting. Overfitting refers to the phenomenon when a network learns a function with very high variance such as to perfectly model the training data. Unfortunately, many application domains do not have access to big data, such as medical image analysis. This survey focuses on Data Augmentation, a data-space solution to the problem of limited data. Data Augmentation encompasses a suite of techniques that enhance the size and quality of training datasets such that better Deep Learning models can be built using them. The image augmentation algorithms discussed in this survey include geometric transformations, color space augmentations, kernel filters, mixing images, random erasing, feature space augmentation, adversarial training, generative adversarial networks, neural style transfer, and meta-learning. The application of augmentation methods based on GANs is heavily covered in this survey. In addition to augmentation techniques, this paper will briefly discuss other characteristics of Data Augmentation such as test-time augmentation, resolution impact, final dataset size, and curriculum learning. This survey will present existing methods for Data Augmentation, promising developments, and meta-level decisions for implementing Data Augmentation. Readers will understand how Data Augmentation can improve the performance of their models and expand limited datasets to take advantage of the capabilities of big data.
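Geometric transformations, the first family of techniques in the survey's list, can be sketched in a few lines of NumPy. This toy `augment` function (a name chosen here for illustration) applies a random horizontal flip and a random 90-degree rotation, two label-preserving transformations for many imaging tasks.

```python
import numpy as np

def augment(image, rng):
    """Random horizontal flip plus a random number of 90-degree
    rotations -- simple geometric augmentations that enlarge the
    effective training set without collecting new data."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    k = int(rng.integers(0, 4))  # 0-3 quarter turns
    return np.rot90(image, k)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
aug = augment(img, rng)
```

Because flips and rotations only permute pixels, the augmented image contains exactly the same intensity values as the original, just rearranged.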

5,782 citations

Journal ArticleDOI
TL;DR: Two specific computer-aided detection problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification are studied, achieving the state-of-the-art performance on the mediastinal LN detection, and the first five-fold cross-validation classification results are reported.
Abstract: Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks in computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
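The "off-the-shelf pre-trained features" strategy mentioned above keeps the pre-trained network frozen and trains only a new classification head on the target task. The toy sketch below stands in a fixed random projection for the frozen CNN and fits a logistic-regression head by gradient descent; all names and shapes are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" feature extractor: a fixed random projection
# stands in for an ImageNet CNN in this toy. Its weights never change.
W_frozen = rng.normal(size=(20, 50)) / np.sqrt(20)

def features(x):
    return np.tanh(x @ W_frozen)  # frozen representation

def train_head(X, y, lr=0.5, epochs=300):
    """Fit only the linear classification head on frozen features --
    the 'off-the-shelf features' transfer strategy."""
    F = features(X)
    w = np.zeros(F.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-F @ w))        # sigmoid predictions
        w -= lr * F.T @ (p - y) / len(y)        # logistic-loss gradient
    return w

# Toy binary task: the label is the sign of the first input dimension.
X = rng.normal(size=(200, 20))
y = (X[:, 0] > 0).astype(float)
w = train_head(X, y)
acc = np.mean(((features(X) @ w) > 0) == (y > 0.5))
```

Full fine-tuning, by contrast, would also update `W_frozen`; the paper's analysis concerns when that extra flexibility pays off.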

4,249 citations