
Showing papers in "Medical Image Analysis in 2016"


Journal ArticleDOI

[...]

TL;DR: This review discusses developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective, and reflects on future opportunities for the quantitation of histopathology.
Abstract: With the rise in whole slide scanner technology, large numbers of tissue slides are being scanned, represented, and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education, there are also huge research opportunities in image computing with this new source of “big data”. It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine “sub-visual” image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high-resolution digitized whole slide images. Additionally, there has been recent substantial interest in combining and fusing radiologic imaging and proteomics and genomics based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again, there is a paucity of powerful tools for combining disease specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification. We also briefly review some of the state of the art in fusion of radiology and pathology images and in combining digital pathology derived image measurements with molecular “omics” features for better predictive modeling. The review ends with a brief discussion of some of the technical and computational challenges to be overcome and reflects on future opportunities for the quantitation of histopathology.

481 citations


Journal ArticleDOI

[...]

TL;DR: In this paper, the authors employed deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets, which outperformed the state-of-the-art methods.
Abstract: Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78 obtained by other methods, respectively.
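
As a rough illustration of two of the reported validation metrics, the sketch below computes the Dice overlap and a conformity coefficient between two binary LV masks. The (3·DSC − 2)/DSC form of conformity is a commonly used formulation and is assumed here; it is not taken from the paper.

```python
# Minimal sketch (not the authors' code): Dice and conformity between two
# binary LV masks, two of the validation metrics quoted in the abstract.
import numpy as np

def dice(a, b):
    """Dice overlap between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def conformity(a, b):
    """Conformity coefficient, expressed via Dice as (3*DSC - 2) / DSC
    (a common formulation; assumed here, check against the paper)."""
    d = dice(a, b)
    return (3.0 * d - 2.0) / d

# toy example
gt = np.zeros((64, 64), bool); gt[20:40, 20:40] = True
seg = np.zeros((64, 64), bool); seg[22:42, 21:41] = True
print(dice(gt, seg), conformity(gt, seg))
```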

387 citations


Journal ArticleDOI

[...]

TL;DR: A whole heart segmentation method that employs multi-modality atlases from MRI and CT and adopts a new label fusion algorithm which is based on the proposed multi-scale patch (MSP) strategy and a new global atlas ranking scheme is presented for cardiac MRI.
Abstract: A whole heart segmentation (WHS) method is presented for cardiac MRI. This segmentation method employs multi-modality atlases from MRI and CT and adopts a new label fusion algorithm which is based on the proposed multi-scale patch (MSP) strategy and a new global atlas ranking scheme. MSP, developed from scale-space theory, uses the information of multi-scale images and provides different levels of structural information for multi-level local atlas ranking. Both the local and global atlas ranking steps use information-theoretic measures to compute the similarity between the target image and the atlases from multiple modalities. The proposed segmentation scheme was evaluated on a set of data involving 20 cardiac MRI and 20 CT images. Our proposed algorithm demonstrated a promising performance, yielding a mean WHS Dice score of 0.899 ± 0.0340, Jaccard index of 0.818 ± 0.0549, and surface distance error of 1.09 ± 1.11 mm for the 20 MRI data. The average runtime for the proposed label fusion was 12.58 min.

213 citations


Journal ArticleDOI

[...]

TL;DR: It may be concluded that the field of medical image registration has evolved, but still is in need of further development in various aspects.
Abstract: A retrospective view on the past two decades of the field of medical image registration is presented, guided by the article "A survey of medical image registration" (Maintz and Viergever, 1998). It shows that the classification of the field introduced in that article is still usable, although some modifications to do justice to advances in the field would be due. The main changes over the last twenty years are the shift from extrinsic to intrinsic registration, the primacy of intensity-based registration, the breakthrough of nonlinear registration, the progress of inter-subject registration, and the availability of generic image registration software packages. Two problems that were already called urgent 20 years ago are even more urgent nowadays: validation of registration methods, and translation of results of image registration research to clinical practice. It may be concluded that the field of medical image registration has evolved, but still is in need of further development in various aspects.

192 citations


Journal ArticleDOI

[...]

TL;DR: This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.
Abstract: Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.

189 citations


Journal ArticleDOI

[...]

TL;DR: CAC can be accurately and automatically identified and quantified in CCTA using the proposed pattern recognition method, which might obviate the need to acquire a dedicated CSCT scan for CAC scoring, and thus reduce the CT radiation dose received by patients.
Abstract: The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular events. CAC is clinically quantified in cardiac calcium scoring CT (CSCT), but it has been shown that cardiac CT angiography (CCTA) may also be used for this purpose. We present a method for automatic CAC quantification in CCTA. This method uses supervised learning to directly identify and quantify CAC without a need for coronary artery extraction commonly used in existing methods. The study included cardiac CT exams of 250 patients for whom both a CCTA and a CSCT scan were available. To restrict the volume-of-interest for analysis, a bounding box around the heart is automatically determined. The bounding box detection algorithm employs a combination of three ConvNets, where each detects the heart in a different orthogonal plane (axial, sagittal, coronal). These ConvNets were trained using 50 cardiac CT exams. In the remaining 200 exams, a reference standard for CAC was defined in CSCT and CCTA. Out of these, 100 CCTA scans were used for training, and the remaining 100 for evaluation of a voxel classification method for CAC identification. The method uses ConvPairs, pairs of convolutional neural networks (ConvNets). The first ConvNet in a pair identifies voxels likely to be CAC, thereby discarding the majority of non-CAC-like voxels such as lung and fatty tissue. The identified CAC-like voxels are further classified by the second ConvNet in the pair, which distinguishes between CAC and CAC-like negatives. Given the different task of each ConvNet, they share their architecture, but not their weights. Input patches are either 2.5D or 3D. The ConvNets are purely convolutional, i.e. no pooling layers are present and fully connected layers are implemented as convolutions, thereby allowing efficient voxel classification. The performance of individual 2.5D and 3D ConvPairs with input sizes of 15 and 25 voxels, as well as the performance of ensembles of these ConvPairs, were evaluated by a comparison with reference annotations in CCTA and CSCT. In all cases, ensembles of ConvPairs outperformed their individual members. The best performing individual ConvPair detected 72% of lesions in the test set, with on average 0.85 false positive (FP) errors per scan. The best performing ensemble combined all ConvPairs and obtained a sensitivity of 71% at 0.48 FP errors per scan. For this ensemble, agreement with the reference mass score in CSCT was excellent (ICC 0.944 [0.918-0.962]). Additionally, based on the Agatston score in CCTA, this ensemble assigned 83% of patients to the same cardiovascular risk category as reference CSCT. In conclusion, CAC can be accurately and automatically identified and quantified in CCTA using the proposed pattern recognition method. This might obviate the need to acquire a dedicated CSCT scan for CAC scoring, which is regularly acquired prior to a CCTA, and thus reduce the CT radiation dose received by patients.
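
The two-stage cascade logic can be summarised as below. This is a hedged sketch of the idea only, with generic classifiers standing in for the trained ConvNets; the threshold value and the predict_proba interface are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed structure, not the authors' implementation) of the
# two-stage "ConvPair" idea: a first classifier discards clearly non-CAC
# voxels, a second separates CAC from CAC-like negatives.
import numpy as np

def convpair_classify(patches, stage1, stage2, threshold=0.5):
    """patches: (N, ...) array of voxel patches.
    stage1, stage2: objects with predict_proba(patches) -> (N, 2), e.g. any
    scikit-learn-style classifier standing in for the trained ConvNets."""
    p1 = stage1.predict_proba(patches)[:, 1]
    candidates = p1 >= threshold              # keep only CAC-like voxels
    labels = np.zeros(len(patches), dtype=bool)
    if candidates.any():
        p2 = stage2.predict_proba(patches[candidates])[:, 1]
        labels[candidates] = p2 >= threshold  # CAC vs CAC-like negatives
    return labels
```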

177 citations


Journal ArticleDOI

[...]

TL;DR: Based on the quantitative evaluation results, it is believed automatic dental radiography analysis is still a challenging and unsolved problem and the datasets and the evaluation software are made available to the research community, further encouraging future developments in this field.
Abstract: Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting caries in bitewing radiographs have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/).

148 citations


Journal ArticleDOI

[...]

TL;DR: The proposed method uses a coarse-to-fine analysis of the localized characteristics in pathology images to automatically differentiate between the two cancer subtypes and showed high stability and robustness to parameter variation.
Abstract: Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1% (p << 0.001). We also evaluated our method in the dataset provided for the 2014 MICCAI Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p << 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. Our approach may be useful to automatically differentiate between the two cancer subtypes.
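
A rough sketch of the coarse-to-fine, two-stage idea follows, under several assumptions not stated in the abstract: tiles are clustered on PCA-reduced features, the tile nearest each cluster centre is taken as the representative, decision values come from an elastic-net-penalised linear classifier, and votes are weighted by cluster size. Names and weighting choices are illustrative, not the paper's.

```python
# Minimal sketch (assumptions, not the authors' pipeline): cluster coarse
# tile features, classify one representative tile per cluster with an
# elastic-net-penalised linear model, then aggregate by weighted voting.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import SGDClassifier

def slide_prediction(tile_features, clf, n_clusters=8):
    """tile_features: (n_tiles, n_features) per-slide matrix.
    clf: a fitted SGDClassifier(penalty='elasticnet') or similar."""
    z = PCA(n_components=min(10, tile_features.shape[1])).fit_transform(tile_features)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(z)
    votes, weights = [], []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # representative tile: the member closest to the cluster centre
        rep = members[np.argmin(np.linalg.norm(z[members] - km.cluster_centers_[c], axis=1))]
        votes.append(clf.decision_function(tile_features[rep:rep + 1])[0])
        weights.append(len(members))          # weight by cluster size (assumed)
    return np.sign(np.average(votes, weights=weights))
```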

125 citations


Journal ArticleDOI

[...]

TL;DR: A graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstructions and outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves fewer reconstruction errors on the tested datasets.
Abstract: Compressed sensing magnetic resonance imaging has shown great capacity for accelerating magnetic resonance imaging if an image can be sparsely represented. How the image is sparsified seriously affects its reconstruction quality. In the present study, a graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstructions. With this transform, image patches are viewed as vertices and their differences as edges, and the shortest path on the graph minimizes the total difference of all image patches. The l1-norm regularized formulation of the problem is solved by an alternating-direction minimization with continuation algorithm. The experimental results demonstrate that the proposed method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves fewer reconstruction errors on the tested datasets.
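
For orientation only, the sketch below shows a generic l1-regularized CS-MRI reconstruction loop with a soft-thresholding (proximal gradient) update. It assumes a caller-supplied, roughly orthonormal sparsifying transform; it is neither the paper's graph-based redundant wavelet transform nor its alternating-direction-with-continuation solver.

```python
# Minimal sketch (generic ISTA-style solver under standard CS-MRI assumptions):
#   min_x  0.5 * || F_u x - y ||_2^2 + lam * || Psi x ||_1
import numpy as np

def soft_threshold(z, t):
    """Complex soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_csmri(y, mask, psi, psi_inv, lam=0.01, n_iter=50, step=1.0):
    """y: undersampled k-space (zeros outside the mask); mask: sampling mask
    of the same shape; psi / psi_inv: sparsifying transform and its inverse
    (e.g. an orthonormal wavelet), applied to complex images."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = np.fft.ifft2(mask * (np.fft.fft2(x) * mask - y))  # data term
        x = psi_inv(soft_threshold(psi(x - step * grad), step * lam))
    return x
```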

113 citations


Journal ArticleDOI

[...]

TL;DR: A general, fully learning-based framework for direct bi-ventricular volume estimation, which removes user inputs and unreliable assumptions, and largely outperforms existing direct methods on a larger dataset of 100 subjects including both healthy and diseased cases with twice the number of subjects used in previous methods.
Abstract: Direct estimation of cardiac ventricular volumes has become increasingly popular and important in cardiac function analysis due to its effectiveness and efficiency by avoiding an intermediate segmentation step. However, existing methods rely on either intensive user inputs or problematic assumptions. To realize the full capacities of direct estimation, this paper presents a general, fully learning-based framework for direct bi-ventricular volume estimation, which removes user inputs and unreliable assumptions. We formulate bi-ventricular volume estimation as a general regression framework which consists of two main full learning stages: unsupervised cardiac image representation learning by multi-scale deep networks and direct bi-ventricular volume estimation by random forests. By leveraging strengths of generative and discriminant learning, the proposed method produces high correlations of around 0.92 with ground truth by human experts for both the left and right ventricles using a leave-one-subject-out cross validation, and largely outperforms existing direct methods on a larger dataset of 100 subjects including both healthy and diseased cases with twice the number of subjects used in previous methods. More importantly, the proposed method can not only be practically used in clinical cardiac function analysis but also be easily extended to other organ volume estimation tasks.
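
The regression stage can be pictured as below; this is a hedged sketch of the overall two-stage structure only, with a scikit-learn random forest standing in for the paper's model and the learned image representations assumed to be given.

```python
# Minimal sketch (assumed two-stage structure, not the authors' models):
# unsupervised features from cardiac images feed a random-forest regressor
# that maps each frame directly to bi-ventricular volumes.
from sklearn.ensemble import RandomForestRegressor

def fit_direct_volume_estimator(features, volumes):
    """features: (n_frames, n_features) learned image representations;
    volumes: (n_frames, 2) LV and RV volumes from expert ground truth."""
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(features, volumes)          # multi-output regression (LV, RV)
    return rf   # rf.predict(new_features) -> (n, 2) volume estimates
```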

103 citations


Journal ArticleDOI

[...]

TL;DR: The groupwise registration method with a similarity measure based on PCA is the preferred technique for compensating misalignments in qMRI.
Abstract: Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different acquisition parameters (or at multiple time points after injection of a contrast agent) and by fitting a qMRI signal model to the image intensities. Image registration is often necessary to compensate for misalignments due to subject motion and/or geometric distortions caused by the acquisition. However, large differences in image appearance make accurate image registration challenging. In this work, we propose a groupwise image registration method for compensating misalignment in qMRI. The groupwise formulation of the method eliminates the requirement of choosing a reference image, thus avoiding a registration bias. The method minimizes a cost function that is based on principal component analysis (PCA), exploiting the fact that intensity changes in qMRI can be described by a low-dimensional signal model, but not requiring knowledge on the specific acquisition model. The method was evaluated on 4D CT data of the lungs, and both real and synthetic images of five different qMRI applications: T1 mapping in a porcine heart, combined T1 and T2 mapping in carotid arteries, ADC mapping in the abdomen, diffusion tensor mapping in the brain, and dynamic contrast-enhanced mapping in the abdomen. Each application is based on a different acquisition model. The method is compared to a mutual information-based pairwise registration method and four other state-of-the-art groupwise registration methods. Registration accuracy is evaluated in terms of the precision of the estimated qMRI parameters, overlap of segmented structures, distance between corresponding landmarks, and smoothness of the deformation. In all qMRI applications the proposed method performed better than or equally well as competing methods, while avoiding the need to choose a reference image. It is also shown that the results of the conventional pairwise approach do depend on the choice of this reference image. We therefore conclude that our groupwise registration method with a similarity measure based on PCA is the preferred technique for compensating misalignments in qMRI.
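
To make the PCA-based idea concrete, the sketch below evaluates one plausible groupwise dissimilarity: intensities of all images at corresponding points form a matrix, and the eigenvalues of their correlation matrix are penalised by rank, so alignments whose intensity changes are explained by a few components score well. The rank-weighted form is an assumption for illustration; the exact metric and its optimisation are described in the paper.

```python
# Minimal sketch (assumed form of a PCA-based groupwise dissimilarity).
import numpy as np

def pca_groupwise_cost(intensity_matrix):
    """intensity_matrix: (n_samples, n_images), one column per (transformed)
    image, sampled at the same spatial points."""
    m = intensity_matrix - intensity_matrix.mean(axis=0)
    m = m / (m.std(axis=0) + 1e-12)              # correlation, not covariance
    corr = (m.T @ m) / m.shape[0]
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    ranks = np.arange(1, len(eigvals) + 1)
    return float(np.sum(ranks * eigvals))        # low when few PCs dominate
```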

Journal ArticleDOI

[...]

TL;DR: This work introduces an efficient way of processing wave images acquired by multifrequency magnetic resonance elastography (MMRE), which relies on wave number reconstruction at different harmonic frequencies followed by their amplitude-weighted averaging prior to inversion to reveal variations in tissue elasticity in a tomographic fashion.
Abstract: Palpation is one of the most sensitive, effective diagnostic practices, motivating the quantitative and spatially resolved determination of soft tissue elasticity parameters by medical ultrasound or MRI. However, this so-called elastography often suffers from limited anatomical resolution due to noise and insufficient elastic deformation, currently precluding its use as a tomographic modality on its own. We here introduce an efficient way of processing wave images acquired by multifrequency magnetic resonance elastography (MMRE), which relies on wave number reconstruction at different harmonic frequencies followed by their amplitude-weighted averaging prior to inversion. This results in compound maps of wave speed, which reveal variations in tissue elasticity in a tomographic fashion, i.e. an unmasked, slice-wise display of anatomical details at pixel-wise resolution. The method is demonstrated using MMRE data from the literature including abdominal and pelvic organs such as the liver, spleen, uterus body and uterus cervix. Even in small regions with low wave amplitudes, such as nucleus pulposus and spinal cord, elastic parameters consistent with literature values were obtained. Overall, the proposed method provides a simple and noise-robust strategy of in-plane wave analysis of MMRE data, with a pixel-wise resolution producing superior detail to MRE direct inversion methods.
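
One plausible reading of the amplitude-weighted averaging step is sketched below: per-frequency wave numbers are normalised by angular frequency (i.e. averaged as slowness) with amplitude weights, then inverted to a compound wave-speed map. The normalisation and weighting details are assumptions for illustration; the paper's directional filtering and exact weighting differ.

```python
# Minimal sketch (assumed post-processing step, not the authors' full
# multifrequency MRE pipeline).
import numpy as np

def compound_wave_speed(wavenumbers, amplitudes, frequencies):
    """wavenumbers, amplitudes: lists of 2D maps (one per harmonic frequency,
    rad/m and arbitrary units); frequencies: vibration frequencies in Hz."""
    num = np.zeros(wavenumbers[0].shape, dtype=float)
    den = np.zeros(wavenumbers[0].shape, dtype=float)
    for k, a, f in zip(wavenumbers, amplitudes, frequencies):
        num += a * k / (2 * np.pi * f)    # amplitude-weighted slowness
        den += a
    slowness = num / (den + 1e-12)
    return 1.0 / (slowness + 1e-12)       # compound wave speed in m/s
```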

Journal ArticleDOI

[...]

TL;DR: The most popular platforms for navigation technology are reviewed, and an effective way to take advantage of them is shown through an example surgical navigation application.
Abstract: Navigation technology is changing the clinical standards in medical interventions by making existing procedures more accurate, and new procedures possible. Navigation is based on preoperative or intraoperative imaging combined with 3-dimensional position tracking of interventional tools registered to the images. Research of navigation technology in medical interventions requires significant engineering efforts. The difficulty of developing such complex systems has been limiting the clinical translation of new methods and ideas. A key to the future success of this field is to provide researchers with platforms that allow rapid implementation of applications with minimal resources spent on reimplementing existing system features. A number of platforms have been already developed that can share data in real time through standard interfaces. Complete navigation systems can be built using these platforms using a layered software architecture. In this paper, we review the most popular platforms, and show an effective way to take advantage of them through an example surgical navigation application.

Journal ArticleDOI

[...]

TL;DR: A novel framework for estimating the hyper-connectivity network of brain functions and then using this hyper-network for brain disease diagnosis is proposed, which can not only improve the classification performance, but also help discover disease-related biomarkers important for disease diagnosis.
Abstract: Exploring structural and functional interactions among various brain regions enables better understanding of pathological underpinnings of neurological disorders. Brain connectivity network, as a simplified representation of those structural and functional interactions, has been widely used for diagnosis and classification of neurodegenerative diseases, especially for Alzheimer's disease (AD) and its early stage - mild cognitive impairment (MCI). However, the conventional functional connectivity network is usually constructed based on the pairwise correlation among different brain regions and thus ignores their higher-order relationships. Such loss of high-order information could be important for disease diagnosis, since neurologically a brain region predominantly interacts with more than one other brain region. Accordingly, in this paper, we propose a novel framework for estimating the hyper-connectivity network of brain functions and then use this hyper-network for brain disease diagnosis. Here, the functional connectivity hyper-network denotes a network in which each edge represents the interactions among multiple brain regions (i.e., an edge can connect more than two brain regions), and it can be naturally represented by a hyper-graph. Specifically, we first construct connectivity hyper-networks from the resting-state fMRI (R-fMRI) time series by using sparse representation. Then, we extract three sets of brain-region specific features from the connectivity hyper-networks, and further exploit a manifold regularized multi-task feature selection method to jointly select the most discriminative features. Finally, we use a multi-kernel support vector machine (SVM) for classification. The experimental results on both the MCI dataset and the attention deficit hyperactivity disorder (ADHD) dataset demonstrate that, compared with the conventional connectivity network-based methods, the proposed method can not only improve the classification performance, but also help discover disease-related biomarkers important for disease diagnosis.
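
The hyper-edge construction via sparse representation can be sketched as follows: each region's time series is regressed on all other regions with an l1 penalty, and the regions receiving non-zero weights form one hyper-edge with the seed region. The regularisation value and library choice are assumptions for illustration, not the authors' exact settings.

```python
# Minimal sketch (assumption-laden, not the authors' exact pipeline): build a
# functional hyper-network by sparsely representing each region's R-fMRI time
# series with those of all other regions.
import numpy as np
from sklearn.linear_model import Lasso

def hyper_edges(time_series, alpha=0.1):
    """time_series: (n_timepoints, n_regions). Returns one hyper-edge
    (a set of region indices) per seed region."""
    n_regions = time_series.shape[1]
    edges = []
    for r in range(n_regions):
        y = time_series[:, r]
        others = np.delete(np.arange(n_regions), r)
        w = Lasso(alpha=alpha, max_iter=5000).fit(time_series[:, others], y).coef_
        edges.append({r, *others[np.abs(w) > 1e-8]})   # seed + selected regions
    return edges
```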

Journal ArticleDOI

[...]

TL;DR: An approach that enables accurate voxel-wise deformable registration of high-resolution 3D images without the need for intermediate image warping or a multi-resolution scheme is proposed, and significant improvements in registration accuracy are shown when using the additional information provided by the registration uncertainty estimates.
Abstract: Discrete optimisation strategies have a number of advantages over their continuous counterparts for deformable registration of medical images. For example: it is not necessary to compute derivatives of the similarity term; dense sampling of the search space reduces the risk of becoming trapped in local optima; and (in principle) an optimum can be found without resorting to iterative coarse-to-fine warping strategies. However, the large complexity of high-dimensional medical data renders a direct voxel-wise estimation of deformation vectors impractical. For this reason, previous work on medical image registration using graphical models has largely relied on using a parameterised deformation model and on the use of iterative coarse-to-fine optimisation schemes. In this paper, we propose an approach that enables accurate voxel-wise deformable registration of high-resolution 3D images without the need for intermediate image warping or a multi-resolution scheme. This is achieved by representing the image domain as multiple comprehensive supervoxel layers and making use of the full marginal distribution of all probable displacement vectors after inferring regularity of the deformations using belief propagation. The optimisation acts on the coarse scale representation of supervoxels, which provides sufficient spatial context and is robust to noise in low contrast areas. Minimum spanning trees, which connect neighbouring supervoxels, are employed to model pair-wise deformation dependencies. The optimal displacement for each voxel is calculated by considering the probabilities for all displacements over all overlapping supervoxel graphs and subsequently seeking the mode of this distribution. We demonstrate the applicability of this concept for two challenging applications: first, for intra-patient motion estimation in lung CT scans; and second, for atlas-based segmentation propagation of MRI brain scans. For lung registration, the voxel-wise mode of displacements is found using the mean-shift algorithm, which enables us to determine continuous valued sub-voxel motion vectors. Finding the mode of brain segmentation labels is performed using a voxel-wise majority voting weighted by the displacement uncertainty estimates. Our experimental results show significant improvements in registration accuracy when using the additional information provided by the registration uncertainty estimates. The multi-layer approach enables fusion of multiple complementary proposals, extending the popular fusion approaches from multi-image registration to probabilistic one-to-one image registration.
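
The label-fusion step for the brain application can be pictured with the short sketch below: candidate labels proposed for a voxel are pooled by majority voting, with each vote weighted by a certainty derived from the displacement distribution. The weighting interface is an assumed simplification, not the authors' code.

```python
# Minimal sketch (assumed form of the uncertainty-weighted label fusion step).
import numpy as np

def weighted_label_vote(candidate_labels, certainties, n_labels):
    """candidate_labels: (n_candidates,) integer labels proposed for one voxel;
    certainties: (n_candidates,) non-negative weights (higher = more certain)."""
    scores = np.bincount(candidate_labels, weights=certainties, minlength=n_labels)
    return int(np.argmax(scores))
```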

Journal ArticleDOI

[...]

TL;DR: In this paper, the authors describe recently developed technologies for better handling of image information: photorealistic visualization of medical images with Cinematic Rendering, artificial agents for in-depth image understanding, support for minimally invasive procedures, and patient-specific computational models with enhanced predictive power.
Abstract: Medical images constitute a source of information essential for disease diagnosis, treatment and follow-up. In addition, due to its patient-specific nature, imaging information represents a critical component required for advancing precision medicine into clinical practice. This manuscript describes recently developed technologies for better handling of image information: photorealistic visualization of medical images with Cinematic Rendering, artificial agents for in-depth image understanding, support for minimally invasive procedures, and patient-specific computational models with enhanced predictive power. Throughout the manuscript we will analyze the capabilities of such technologies and extrapolate on their potential impact to advance the quality of medical care, while reducing its cost.

Journal ArticleDOI

[...]

TL;DR: A benchmarking evaluation framework is presented that can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV; the evaluated algorithms overlapped the consensus ground truth better than the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method.
Abstract: Studies have demonstrated the feasibility of late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae to myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates a reproducible and reliable segmentation of the infarcted regions. It is challenging to compare new algorithms for infarct segmentation in the left ventricle (LV) with existing algorithms. Benchmarking datasets with evaluation strategies are much needed to facilitate comparison. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs that were acquired from two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely-used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the Full-Width-at-Half-Maximum (FWHM) fixed-thresholding method. Some of the pitfalls of fixed thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges.
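
For context, the two families of fixed-thresholding rules discussed above are sketched below in their standard formulations (not the benchmarked implementations): the n-SD rule thresholds relative to remote, healthy myocardium, and FWHM thresholds at half the maximum enhanced intensity.

```python
# Minimal sketch of standard n-SD and FWHM fixed-thresholding rules for
# infarct segmentation in LGE CMR.
import numpy as np

def nsd_threshold(image, myocardium_mask, remote_mask, n=5):
    """Voxels brighter than mean + n*SD of remote (healthy) myocardium."""
    remote = image[remote_mask]
    thr = remote.mean() + n * remote.std()
    return myocardium_mask & (image > thr)

def fwhm_threshold(image, myocardium_mask):
    """Voxels brighter than half of the maximum enhanced intensity."""
    thr = 0.5 * image[myocardium_mask].max()
    return myocardium_mask & (image > thr)
```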

Journal ArticleDOI

[...]

TL;DR: A tree-structured discrete graphical model is introduced that is used to select and label a set of non-overlapping regions in the image by a global optimization of a classification score, and the performance of the model can be improved by considering a proxy problem for learning the surface that allows better selection of the extremal regions.
Abstract: In many microscopy applications the images may contain both regions of low and high cell densities corresponding to different tissues or colonies at different stages of growth. This poses a challenge to most previously developed automated cell detection and counting methods, which are designed to handle either the low-density scenario (through cell detection) or the high-density scenario (through density estimation or texture analysis). The objective of this work is to detect all the instances of an object of interest in microscopy images. The instances may be partially overlapping and clustered. To this end we introduce a tree-structured discrete graphical model that is used to select and label a set of non-overlapping regions in the image by a global optimization of a classification score. Each region is labeled with the number of instances it contains - for example regions can be selected that contain two or three object instances, by defining separate classes for tuples of objects in the detection process. We show that this formulation can be learned within the structured output SVM framework and that the inference in such a model can be accomplished using dynamic programming on a tree structured region graph. Furthermore, the learning only requires weak annotations - a dot on each instance. The candidate regions for the selection are obtained as extremal regions of a surface computed from the microscopy image, and we show that the performance of the model can be improved by considering a proxy problem for learning the surface that allows better selection of the extremal regions. Furthermore, we consider a number of variations for the loss function used in the structured output learning. The model is applied and evaluated over six quite disparate data sets of images covering: fluorescence microscopy, weak-fluorescence molecular images, phase contrast microscopy and histopathology images, and is shown to exceed the state of the art in performance.
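
The tree-structured inference admits a simple dynamic program, sketched below under an assumed simplification: candidate regions are nested (so siblings never overlap), each node carries a classification score for every possible instance count, and at each node one either keeps the region with its best-scoring count or recurses into its children.

```python
# Minimal sketch (assumed simplification of the inference step) of dynamic
# programming on a tree of nested candidate regions.
def best_selection(node):
    """node: object with .scores (dict count -> score, including count 0)
    and .children (list of nodes). Returns (total_score, [(node, count), ...])."""
    keep_count, keep_score = max(node.scores.items(), key=lambda kv: kv[1])
    child_score, child_sel = 0.0, []
    for c in node.children:
        s, sel = best_selection(c)
        child_score += s
        child_sel += sel
    if keep_score >= child_score:
        return keep_score, [(node, keep_count)]   # keep this region
    return child_score, child_sel                 # defer to nested children
```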

Journal ArticleDOI

[...]

TL;DR: Theoretical features in model-based and tomographic reconstruction of coronary arteries, and the potential role of reconstructions in clinical decision making and interventional guidance are discussed, and areas for future research are highlighted.
Abstract: Despite continuous progress in X-ray angiography systems, X-ray coronary angiography is fundamentally limited by its 2D representation of moving coronary arterial trees, which can negatively impact assessment of coronary artery disease and guidance of percutaneous coronary intervention. To provide clinicians with 3D/3D+time information of coronary arteries, methods computing reconstructions of coronary arteries from X-ray angiography are required. Because of several aspects (e.g. cardiac and respiratory motion, type of X-ray system), reconstruction from X-ray coronary angiography has led to a vast amount of research and still remains a challenging and dynamic research area. In this paper, we review the state-of-the-art approaches on reconstruction of high-contrast coronary arteries from X-ray angiography. We mainly focus on the theoretical features in model-based (modelling) and tomographic reconstruction of coronary arteries, and discuss the evaluation strategies. We also discuss the potential role of reconstructions in clinical decision making and interventional guidance, and highlight areas for future research.

Journal ArticleDOI

[...]

TL;DR: This paper proposes a prediction system primarily using radiomic features extracted from FDG-PET images, which aims to improve the prediction accuracy, and reduce the imprecision and overlaps between different classes (treatment outcomes) in a selected feature subspace.
Abstract: As a vital task in cancer therapy, accurately predicting the treatment outcome is valuable for tailoring and adapting treatment planning. To this end, multiple sources of information (radiomics, clinical characteristics, genomic expression, etc.) gathered before and during treatment are potentially valuable. In this paper, we propose such a prediction system primarily using radiomic features (e.g., texture features) extracted from FDG-PET images. The proposed system includes a feature selection method based on Dempster-Shafer theory, a powerful tool to deal with uncertain and imprecise information. It aims to improve the prediction accuracy, and reduce the imprecision and overlaps between different classes (treatment outcomes) in a selected feature subspace. Considering that training samples are often small-sized and imbalanced in our applications, a data balancing procedure and specified prior knowledge are taken into account to improve the reliability of the selected feature subsets. Finally, the Evidential K-NN (EK-NN) classifier is used with selected features to output prediction results. Our prediction system has been evaluated on synthetic and clinical datasets, consistently showing good performance.

Journal ArticleDOI

[...]

TL;DR: This paper proposes a novel diffusion MRI denoising technique that can be used on all existing data, without adding to the scanning time, and improves the visual quality of the data and reduces the number of spurious tracts when compared to the noisy acquisition.
Abstract: Diffusion magnetic resonance imaging (MRI) datasets suffer from low Signal-to-Noise Ratio (SNR), especially at high b-values. Acquiring data at high b-values contains relevant information and is now of great interest for microstructural and connectomics studies. High noise levels bias the measurements due to the non-Gaussian nature of the noise, which in turn can lead to a false and biased estimation of the diffusion parameters. Additionally, the usage of in-plane acceleration techniques during the acquisition leads to a spatially varying noise distribution, which depends on the parallel acceleration method implemented on the scanner. This paper proposes a novel diffusion MRI denoising technique that can be used on all existing data, without adding to the scanning time. We first apply a statistical framework to convert both stationary and non-stationary Rician and noncentral chi distributed noise to Gaussian distributed noise, effectively removing the bias. We then introduce a spatially and angularly adaptive denoising technique, the Non Local Spatial and Angular Matching (NLSAM) algorithm. Each volume is first decomposed into small overlapping 4D patches, thus capturing the spatial and angular structure of the diffusion data, and a dictionary of atoms is learned on those patches. A local sparse decomposition is then found by bounding the reconstruction error with the local noise variance. We compare against three other state-of-the-art denoising methods and show quantitative local and connectivity results on a synthetic phantom and on an in-vivo high resolution dataset. Overall, our method restores perceptual information, removes the noise bias in common diffusion metrics, restores the coherence of the extracted peaks and improves reproducibility of tractography on the synthetic dataset. On the 1.2 mm high resolution in-vivo dataset, our denoising improves the visual quality of the data and reduces the number of spurious tracts when compared to the noisy acquisition. Our work paves the way for higher spatial resolution acquisition of diffusion MRI datasets, which could in turn reveal new anatomical details that are not discernible at the spatial resolution currently used by the diffusion MRI community.
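
To illustrate the magnitude bias the method removes, the sketch below applies a classical moment-based Rician correction; the paper's actual stabilisation framework is more general, handling both stationary and spatially varying Rician and noncentral chi noise.

```python
# Minimal sketch (a classical moment-based Rician bias correction, shown only
# to illustrate the bias; not the paper's stabilisation framework).
import numpy as np

def rician_bias_correct(magnitude, sigma):
    """magnitude: measured magnitude image; sigma: noise standard deviation.
    Uses E[M^2] ~= A^2 + 2*sigma^2 for Rician-distributed magnitudes."""
    return np.sqrt(np.maximum(magnitude ** 2 - 2.0 * sigma ** 2, 0.0))
```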

Journal ArticleDOI

[...]

TL;DR: A fully automated framework for image-level tortuosity estimation is proposed, consisting of a hybrid segmentation method and a highly adaptable, definition-free tortuosity estimation algorithm, based on a novel tortuosity estimation paradigm in which discriminative, multi-scale features can be automatically learned for specific anatomical objects and diseases.
Abstract: Recent clinical research has highlighted important links between a number of diseases and the tortuosity of curvilinear anatomical structures like corneal nerve fibres, suggesting that tortuosity changes might detect early stages of specific conditions. Currently, clinical studies are mainly based on subjective, visual assessment, with limited repeatability and inter-observer agreement. To address these problems, we propose a fully automated framework for image-level tortuosity estimation, consisting of a hybrid segmentation method and a highly adaptable, definition-free tortuosity estimation algorithm. The former combines an appearance model, based on a Scale and Curvature-Invariant Ridge Detector (SCIRD), with a context model, including multi-range learned context filters. The latter is based on a novel tortuosity estimation paradigm in which discriminative, multi-scale features can be automatically learned for specific anatomical objects and diseases. Experimental results on 140 in vivo confocal microscopy images of corneal nerve fibres from healthy and unhealthy subjects demonstrate the excellent performance of our method compared to state-of-the-art approaches and ground truth annotations from 3 expert observers.
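
For readers unfamiliar with tortuosity measures, the sketch below computes the classical arc-length-over-chord-length ratio for a sampled fibre centreline. It is background only: the paper's contribution is precisely to replace such fixed, hand-crafted definitions with learned multi-scale features.

```python
# Minimal sketch of a classical, fixed tortuosity definition (arc length over
# chord length) for a sampled curve; not the paper's learned estimator.
import numpy as np

def arc_over_chord(points):
    """points: (N, 2) ordered samples along a fibre centreline."""
    points = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / max(chord, 1e-12)
```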

Journal ArticleDOI

[...]

Jürgen Weese, Cristian Lorenz
TL;DR: Several complementary challenges are identified: more generic image analysis technologies that can be efficiently adapted to specific clinical tasks, efficient approaches for ground truth generation, algorithms for analyzing heterogeneous image data, and, because anatomical and organ models play a crucial role in many applications, algorithms to construct patient-specific models from medical images with a minimum of user interaction.
Abstract: Today's medical imaging systems produce a huge amount of images containing a wealth of information. However, the information is hidden in the data and image analysis algorithms are needed to extract it, to make it readily available for medical decisions and to enable an efficient work flow. Advances in medical image analysis over the past 20 years mean there are now many algorithms and ideas available that make it possible to address medical image analysis tasks in commercial solutions with sufficient performance in terms of accuracy, reliability and speed. At the same time new challenges have arisen. Firstly, there is a need for more generic image analysis technologies that can be efficiently adapted for a specific clinical task. Secondly, efficient approaches for ground truth generation are needed to match the increasing demands regarding validation and machine learning. Thirdly, algorithms for analyzing heterogeneous image data are needed. Finally, anatomical and organ models play a crucial role in many applications, and algorithms to construct patient-specific models from medical images with a minimum of user interaction are needed. These challenges are complementary to the on-going need for more accurate, more reliable and faster algorithms, and dedicated algorithmic solutions for specific applications.

Journal ArticleDOI

[...]

TL;DR: It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to good classification results, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the Shearlet transform.
Abstract: In this work, various wavelet based methods like the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied for the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet based methods with respect to the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms have already been applied often and successfully to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information of the subbands of the wavelet based methods. Most of the 25 approaches in total have already been published in other texture classification contexts. Thus, the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel. These three approaches extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non wavelet based methods are applied to our databases so that we can compare their results with those of the wavelet based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to good classification results, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the Shearlet transform. These three wavelet based transforms in combination with Weibull features even outperform the state-of-the-art methods on most of the databases. We will also show that the Weibull distribution is better suited to model the subband coefficient distribution than other commonly used probability distributions like the Gaussian distribution and the generalized Gaussian distribution. This work thus gives a reasonable summary of wavelet based methods for colonic polyp classification, and the large number of endoscopic polyp databases used in our experiments ensures the high significance of the achieved results.
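
The Weibull feature extraction step can be sketched as below: a two-parameter Weibull distribution is fitted to the absolute coefficients of each subband, and the fitted shape and scale serve as features. Fixing the location parameter at zero is an assumption made here for illustration, not necessarily the authors' exact procedure.

```python
# Minimal sketch (assumed feature extraction step, not the authors' code):
# fit a two-parameter Weibull distribution to the absolute coefficients of
# one transform subband and use (shape, scale) as texture features.
import numpy as np
from scipy.stats import weibull_min

def weibull_features(subband):
    """subband: 2D array of transform coefficients for one subband."""
    coeffs = np.abs(subband).ravel()
    coeffs = coeffs[coeffs > 0]                        # Weibull support: x > 0
    shape, _, scale = weibull_min.fit(coeffs, floc=0)  # fix location at 0
    return shape, scale
```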

Journal ArticleDOI

[...]

TL;DR: It is advocated that the scale of image retrieval systems should be significantly increased, to the point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images.
Abstract: Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. Particularly, we advocate that the scale of image retrieval systems should be significantly increased, to the point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity and enable novel methods of analysis at much larger scales in an efficient, integrated fashion.

Journal ArticleDOI

[...]

TL;DR: A framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection where a random binary descriptor using Haar-like features is included as a random forest classifier and a RANSAC-based location verification component that incorporates shape context is proposed.
Abstract: With recent advances in biophotonics, techniques such as narrow band imaging, confocal laser endomicroscopy, fluorescence spectroscopy, and optical coherence tomography, can be combined with normal white-light endoscopes to provide in vivo microscopic tissue characterisation, potentially avoiding the need for offline histological analysis. Despite the advantages of these techniques to provide online optical biopsy in situ, it is challenging for gastroenterologists to retarget the optical biopsy sites during endoscopic examinations. This is because optical biopsy does not leave any mark on the tissue. Furthermore, typical endoscopic cameras only have a limited field-of-view and the biopsy sites often enter or exit the camera view as the endoscope moves. In this paper, a framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection. An online detection cascade is proposed where a random binary descriptor using Haar-like features is included as a random forest classifier. For robust retargeting, we have also proposed a RANSAC-based location verification component that incorporates shape context. The proposed detection cascade can be readily integrated with other temporal trackers. Detailed performance evaluation on in vivo gastrointestinal video sequences demonstrates the performance advantage of the proposed method over the current state-of-the-art.

Journal ArticleDOI

[...]

TL;DR: The proposed algorithm solves the well-known open problem, in which a shape prior may not be optimal in terms of an objective functional that needs to be minimized during segmentation, and finds an optimal solution by considering all possible shapes generated from an SSM.
Abstract: The goal of this study is to provide a theoretical framework for accurately optimizing the segmentation energy considering all of the possible shapes generated from the level-set-based statistical shape model (SSM). The proposed algorithm solves the well-known open problem, in which a shape prior may not be optimal in terms of an objective functional that needs to be minimized during segmentation. The algorithm allows the selection of an optimal shape prior from among all possible shapes generated from an SSM by conducting a branch-and-bound search over an eigenshape space. The proposed algorithm does not require predefined shape templates or the construction of a hierarchical clustering tree before graph-cut segmentation. It jointly optimizes an objective functional in terms of both the shape prior and segmentation labeling, and finds an optimal solution by considering all possible shapes generated from an SSM. We apply the proposed algorithm to both pancreas and spleen segmentation using multiphase computed tomography volumes, and we compare the results obtained with those produced by a conventional algorithm employing a branch-and-bound search over a search tree of predefined shapes, which were sampled discretely from an SSM. The proposed algorithm significantly improves the segmentation performance in terms of the Jaccard index and Dice similarity index. In addition, we compare the results with the state-of-the-art multiple abdominal organs segmentation algorithm, and confirmed that the performances of both algorithms are comparable to each other. We discuss the high computational efficiency of the proposed algorithm, which was determined experimentally using a normalized number of traversed nodes in a search tree, and the extensibility of the proposed algorithm to other SSMs or energy functionals.

Journal ArticleDOI

[...]

TL;DR: A new computer-aided method to detect lesion images is proposed; it may provide worthwhile guidance for improving the efficiency and accuracy of gastrointestinal disease diagnosis and has good prospects for clinical application.
Abstract: The gastrointestinal endoscopy in this study refers to conventional gastroscopy and wireless capsule endoscopy (WCE). Both of these techniques produce a large number of images in each diagnosis. Lesion detection done by hand from such images is time consuming and inaccurate. This study designed a new computer-aided method to detect lesion images. We initially designed an algorithm named joint diagonalisation principal component analysis (JDPCA), in which there are no approximation, iteration or inversion procedures. Thus, JDPCA has a low computational complexity and is suitable for dimension reduction of the gastrointestinal endoscopic images. Then, a novel image feature extraction method was established by combining a machine learning algorithm based on JDPCA with a conventional, learning-free feature extraction algorithm. Finally, a new computer-aided method is proposed to identify the gastrointestinal endoscopic images containing lesions. Clinical gastroscopic and WCE images containing lesions of early upper digestive tract cancer and small intestinal bleeding, comprising 1330 images from 291 patients in total, were used to validate the proposed method. The experimental results show that, for the detection of early oesophageal cancer images, early gastric cancer images and small intestinal bleeding images, the mean values of accuracy of the proposed method were 90.75%, 90.75% and 94.34%, with standard deviations (SDs) of 0.0426, 0.0334 and 0.0235, respectively. The areas under the curves (AUCs) were 0.9471, 0.9532 and 0.9776, with SDs of 0.0296, 0.0285 and 0.0172, respectively. Compared with related traditional methods, our method showed better performance. It may therefore provide worthwhile guidance for improving the efficiency and accuracy of gastrointestinal disease diagnosis and has good prospects for clinical application.

Journal ArticleDOI

[...]

TL;DR: An overall scheme of the computer based process for planning a bone fracture reduction is presented, and its main steps, the most common proposed techniques and their main shortcomings are detailed.
Abstract: The development of support systems for surgery significantly increases the likelihood of obtaining satisfactory results. In the case of fracture reduction interventions these systems enable surgery planning, training, monitoring and assessment. They allow improved fracture stabilization, minimized health risks and reduced surgery time. Planning a bone fracture reduction by means of a computer assisted simulation involves several semiautomatic or automatic steps. The simulation deals with the correct position of osseous fragments and fixation devices for a fracture reduction. Currently, to the best of our knowledge, there are no computer assisted methods to plan an entire fracture reduction process. This paper presents an overall scheme of the computer based process for planning a bone fracture reduction, as described above, and details its main steps, the most common proposed techniques and their main shortcomings. In addition, challenges and new trends in this research field are described and analyzed.

Journal ArticleDOI

[...]

TL;DR: Minimal user interaction is needed for a good segmentation of the placenta, and co-segmentation of multiple volumes outperforms the single sparse volume based method.
Abstract: Segmentation of the placenta from fetal MRI is challenging due to sparse acquisition, inter-slice motion, and the widely varying position and shape of the placenta between pregnant women. We propose a minimally interactive framework that combines multiple volumes acquired in different views to obtain accurate segmentation of the placenta. In the first phase, a minimally interactive slice-by-slice propagation method called Slic-Seg is used to obtain an initial segmentation from a single motion-corrupted sparse volume image. It combines high-level features, online Random Forests and Conditional Random Fields, and only needs user interactions in a single slice. In the second phase, to take advantage of the complementary resolution in multiple volumes acquired in different views, we further propose a probability-based 4D Graph Cuts method to refine the initial segmentations using inter-slice and inter-image consistency. We used our minimally interactive framework to examine the placentas of 16 mid-gestation patients from MRI acquired in axial and sagittal views respectively. The results show the proposed method has 1) a good performance even in cases where sparse scribbles provided by the user lead to poor results with the competitive propagation approaches; 2) a good interactivity with low intra- and inter-operator variability; 3) higher accuracy than state-of-the-art interactive segmentation methods; and 4) an improved accuracy due to the co-segmentation based refinement, which outperforms single volume or intensity-based Graph Cuts.