Showing papers in "Medical Image Analysis in 2011"
••
TL;DR: This paper proposes an original categorization for cardiac segmentation methods, with a special emphasis on what level of external information is required (weak or strong) and how it is used to constrain segmentation.
703 citations
••
TL;DR: A general-purpose deformable registration algorithm referred to as "DRAMMS" is presented, which extracts Gabor attributes at each voxel and selects the optimal components, so that they form a highly distinctive morphological signature reflecting the anatomical context around each voxel in a multi-scale and multi-resolution fashion.
420 citations
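The per-voxel Gabor attribute idea behind the entry above can be sketched in 2-D with numpy alone. This is an illustrative sketch, not the DRAMMS implementation: all function names, the isotropic Gaussian envelope, and the choice of scales/orientations are assumptions for the example.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real 2-D Gabor kernel: isotropic Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate rotated by theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def filter_same(image, kernel):
    """'Same'-size cross-correlation with zero padding (naive but dependency-free)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + image.shape[0], j:j + image.shape[1]]
    return out

def gabor_attributes(image, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                     scales=((2.0, 4.0), (4.0, 8.0))):
    """Stack one filter response per (scale, orientation) into a per-pixel vector."""
    responses = []
    for sigma, lam in scales:
        size = int(4 * sigma) | 1            # odd kernel size, roughly 4*sigma wide
        for theta in thetas:
            responses.append(filter_same(image, gabor_kernel(size, sigma, theta, lam)))
    return np.stack(responses, axis=-1)      # shape (H, W, n_scales * n_thetas)
```

With two scales and four orientations, each pixel gets an 8-dimensional attribute vector; DRAMMS additionally selects the most distinctive components, which is omitted here.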
••
TL;DR: An efficient algorithm for MR image reconstruction is proposed that minimizes a linear combination of three terms: a least-squares data-fitting term, total variation (TV), and L1-norm regularization.
304 citations
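The objective in the entry above (least squares + TV + L1) can be illustrated on a 1-D toy problem with plain subgradient descent. This is only a sketch of the cost function, not the paper's (much faster) reconstruction algorithm; the signal, weights, and step size are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressed-sensing problem: recover a sparse, piecewise-constant
# signal from undersampled random linear measurements b = A @ x_true.
n, m = 64, 32
x_true = np.zeros(n)
x_true[10:20] = 1.0
x_true[40:45] = -0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

alpha, beta = 0.005, 0.005   # TV and L1 weights (hand-tuned for this toy)
step = 0.05                  # fixed step, small enough for the least-squares term

x = np.zeros(n)
for _ in range(3000):
    grad = 2 * A.T @ (A @ x - b)    # gradient of the least-squares data term
    d = np.sign(np.diff(x))         # subgradient of TV(x) = sum_i |x[i+1] - x[i]|
    tv = np.zeros(n)
    tv[:-1] -= d
    tv[1:] += d
    x -= step * (grad + alpha * tv + beta * np.sign(x))
```

After a few thousand iterations the data residual is near zero and the TV/L1 terms bias the solution toward the sparse, piecewise-constant signal; dedicated solvers reach this point far more efficiently.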
••
TL;DR: The method showed subvoxel accuracy while delivering smooth transformations and highly consistent registration results; the accuracy of semi-automatic derivation of left ventricular volume curves from 3D+t computed tomography angiography data of the heart was also evaluated.
254 citations
••
TL;DR: An automatic analysis and classification system for detecting nerve fibres in corneal confocal microscopy (CCM) images, based on a multi-scale adaptive dual-model detection algorithm that exploits the curvilinear structure of the nerve fibre and adapts itself to the local image information.
243 citations
••
TL;DR: In this article, the authors presented a novel technique for analytical EAP reconstruction from multiple q-shell acquisitions, based on a Laplace-equation model of the diffusion signal between shell acquisitions, which greatly simplifies the Fourier integral relating the diffusion signal to the EAP.
187 citations
••
TL;DR: A new algorithm is proposed that is applicable to solid, non-solid and part-solid nodules, whether solitary, vascularized, or juxtapleural, and that separates lung parenchyma from radiographically denser anatomical structures via coupled competition and diffusion processes.
163 citations
••
TL;DR: A method for pose estimation and shape reconstruction of 3D bone surfaces from two (or more) calibrated X-ray images using a statistical shape model (SSM) and automatic edge selection on a Canny edge map is proposed.
157 citations
••
TL;DR: A way of extending the standard minimum-cost flow algorithm to account for mitosis and merging events through a coupling operation on particular edges is introduced. The resulting graph can be efficiently solved with methods such as linear programming, choosing the edges that observe the constraints while leading to the lowest overall cost.
155 citations
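The coupled-edge idea in the entry above can be shown on a toy two-frame example. The graph, costs, and variable layout here are invented for illustration, and the tiny integer program is brute-forced rather than handed to an LP/flow solver as the paper does at scale.

```python
import itertools
import numpy as np

# Two cells (A, B) in frame t, three detections (1, 2, 3) in frame t+1.
# Variables = edges; the last edge is a coupled "mitosis" edge letting
# cell A claim detections 1 AND 2 together at a single cost.
#                A->1  A->2  A->3  B->1  B->2  B->3  A->{1,2}
cost = np.array([1.0,  1.0,  5.0,  5.0,  5.0,  0.5,  1.2])

A_eq = np.array([
    [1, 1, 1, 0, 0, 0, 1],   # cell A does exactly one thing
    [0, 0, 0, 1, 1, 1, 0],   # cell B does exactly one thing
    [1, 0, 0, 1, 0, 0, 1],   # detection 1 explained exactly once
    [0, 1, 0, 0, 1, 0, 1],   # detection 2 explained exactly once (mitosis edge counts here too)
    [0, 0, 1, 0, 0, 1, 0],   # detection 3 explained exactly once
])
b_eq = np.ones(5)

best, best_cost = None, np.inf
for cand in itertools.product([0, 1], repeat=7):
    cand = np.array(cand)
    if np.array_equal(A_eq @ cand, b_eq) and cost @ cand < best_cost:
        best, best_cost = cand, float(cost @ cand)
# The optimum selects the mitosis edge A->{1,2} plus the move B->3.
```

The coupling shows up in the constraint matrix: the mitosis edge has a 1 in both daughter-detection rows, so taking it satisfies two detections at once while still counting as a single action for cell A.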
••
TL;DR: This work addresses a novel problem domain in the analysis of optical coherence tomography (OCT) images: the diagnosis of multiple macular pathologies in retinal OCT images, using a machine learning approach based on global image descriptors formed from a multi-scale spatial pyramid.
155 citations
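The global multi-scale spatial pyramid descriptor mentioned above can be sketched with simple intensity histograms. This is a generic spatial-pyramid sketch, not the paper's exact descriptor (which builds on richer local features); the function name and defaults are assumptions.

```python
import numpy as np

def spatial_pyramid_descriptor(image, levels=3, bins=8):
    """Concatenate per-cell intensity histograms over a spatial pyramid.

    Level l splits the image into 2**l x 2**l cells; each cell contributes a
    normalized `bins`-bin histogram, so the descriptor has length
    bins * sum(4**l for l in range(levels)) — 168 for the defaults.
    """
    lo, hi = float(image.min()), float(image.max()) + 1e-9
    parts = []
    for l in range(levels):
        cells = 2 ** l
        for ys in np.array_split(np.arange(image.shape[0]), cells):
            for xs in np.array_split(np.arange(image.shape[1]), cells):
                patch = image[np.ix_(ys, xs)]
                h, _ = np.histogram(patch, bins=bins, range=(lo, hi))
                parts.append(h / max(h.sum(), 1))   # normalize each cell histogram
    return np.concatenate(parts)
```

Coarse levels capture the global intensity layout while fine levels localize pathology-specific patterns; the concatenated vector feeds a standard classifier.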
••
TL;DR: This work presents an extensive validation of Nyul's approach to intensity normalization in a real clinical setting where, even after intensity inhomogeneity correction that accounts for scanner-specific artifacts, the MRI volumes can be affected by variations such as data heterogeneity from multi-site, multi-scanner acquisitions, the presence of multiple sclerosis lesions, and the stage of disease progression in the brain.
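The piecewise-linear landmark mapping at the core of Nyul's normalization, validated in the entry above, can be sketched as follows. This is a minimal sketch: the full method includes range standardization/clipping and usually operates on foreground voxels only, and the function names here are illustrative.

```python
import numpy as np

DECILES = (1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99)  # percentile landmarks

def train_standard_scale(volumes, pcts=DECILES):
    """Standard landmarks = mean of each percentile over the training volumes."""
    return np.mean([np.percentile(v, pcts) for v in volumes], axis=0)

def nyul_normalize(volume, standard_landmarks, pcts=DECILES):
    """Piecewise-linearly map the volume's own landmarks onto the standard ones."""
    landmarks = np.percentile(volume, pcts)
    return np.interp(volume, landmarks, standard_landmarks)
```

After normalization, volumes acquired with different gains and offsets share the same landmark intensities, so e.g. their medians coincide on the standard scale.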
••
TL;DR: A detailed review of the dMRI modeling literature places an emphasis on the mathematical and algorithmic underpinnings of the subject, categorizing existing methods according to how they treat the angular and radial sampling of the diffusion signal.
••
TL;DR: In this work a method is presented whereby detailed reference standard data may be constructed in an efficient semi-automatic fashion for quantitative evaluation of image registration algorithms.
••
TL;DR: A coherent Bayesian framework is proposed to automatically identify approximately 60 sulcal labels per hemisphere based on a probabilistic atlas, while simultaneously estimating normalization parameters; it significantly outperforms standard affine intensity-based normalization techniques in terms of sulcal alignment.
••
TL;DR: This work presents a heart model comprising the four heart chambers and the attached great vessels and matches the heart model automatically to cardiac CT angiography images in a multi-stage process using parametric as well as deformable mesh adaptation techniques.
••
TL;DR: The method, Quarc, is described; based on serial MRI scans, it can measure deformations globally or in regions of interest (ROIs), including large-scale changes in the whole organ and subtle changes in small-scale structures.
••
TL;DR: A novel method for SENSE-based reconstruction is presented that regularizes in the complex wavelet domain by promoting sparsity; it relies on a fast algorithm that can minimize regularized non-differentiable criteria, including penalties more general than the classical ℓ1 term.
••
TL;DR: An accurate, fast and automatic method is developed for deriving patient-specific cubic Hermite meshes from a patient's anatomy using medical images, and the mechanical stability of the resulting customised meshes is successfully demonstrated.
••
TL;DR: A robust multi-resolution SSM algorithm with an adapted initialization to address the segmentation of MRI bone images acquired in small FOVs for modeling and computer-aided diagnosis is presented.
••
TL;DR: Three novel adaptive splitting techniques are proposed, an image-based, a similarity-based, and a motion-based technique within a hierarchical framework, which attempt to process regions of similar motion and/or image structure in single registration components.
••
TL;DR: An evaluation framework that allows a standardized and objective quantitative comparison of carotid artery lumen segmentation and stenosis grading algorithms is described and shows that automated segmentation of the vessel lumen is possible with a precision that is comparable to manual annotation.
••
TL;DR: A new segmentation cost function based on a Bayesian framework that incorporates anatomical constraints from surrounding bones and a new appearance model that learns a nonparametric distribution of the intensity histograms inside and outside organ contours are presented.
••
TL;DR: The proposed method showed that poor motor outcome is associated with changes in the corticospinal bundle and in white matter tracts originating from the premotor cortex.
••
TL;DR: A new method for the automatic comparison of myocardial motion patterns and the characterization of their degree of abnormality, based on a statistical atlas of motion built from a reference healthy population is presented.
••
TL;DR: A robust visual tracking method that estimates the 3D temporal and spatial deformation of the heart surface using stereo endoscopic images based on a Thin-Plate Spline model and a time-varying dual Fourier series for overcoming tracking disturbances or failures.
••
TL;DR: The semi-automatic prostate segmentation method is found to be a fast, consistent and accurate tool for delineating the prostate gland in ultrasound images, and compares favorably with the 5-15 min of manual segmentation time required by experienced individuals.
••
TL;DR: This paper presents an integrated automatic approach to multiple-organ segmentation and non-rigid constrained registration that achieves these two aims simultaneously, and demonstrates the superiority of the proposed method over the procedure currently used in clinical practice.
••
TL;DR: A shape-tuned strain energy density function to measure vessel likelihood in 3D medical images is presented, and this model is shown to perform more effectively than three existing filters in enhancing vessel bifurcations and preserving details.
••
TL;DR: To support the challenging task of early epithelial cancer diagnosis from in vivo endomicroscopy, a content-based video retrieval method that uses an expert-annotated database is proposed, and it outperforms several state-of-the-art methods.
••
TL;DR: A novel self-encoded marker is developed in which each feature on the pattern is augmented with a 2-D barcode; it offers considerable advantages over the checkerboard marker in terms of processing speed, since it renders redundant the correspondence search between feature points and marker-model coordinates that pose estimation otherwise requires.