Journal ISSN: 2168-1163

Computer methods in biomechanics and biomedical engineering. Imaging & visualization 

Taylor & Francis
About: Computer methods in biomechanics and biomedical engineering. Imaging & visualization is an academic journal published by Taylor & Francis. The journal publishes primarily in the areas of computer science and artificial intelligence. It has the ISSN identifier 2168-1163. Over its lifetime it has published 588 papers, which have received 4633 citations. The journal is also known as: Computer methods in biomechanics and biomedical engineering. Imaging and visualization, and Comput Methods Biomech Biomed Eng Imaging Vis.


Papers
Journal Article (DOI)
TL;DR: This work sets a new state of the art for cell counting on standard synthetic image benchmarks, and shows that FCRNs trained entirely on synthetic data generalise well to real microscopy images, for both cell counting and detection in the case of overlapping cells.
Abstract: This paper concerns automated cell counting and detection in microscopy images. The approach we take is to use convolutional neural networks (CNNs) to regress a cell spatial density map across the ...
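
The core idea of density-map regression can be illustrated without the network itself: each annotated cell centre is blurred into a unit-mass Gaussian, so the integral of the resulting density map equals the cell count, and the network is trained to regress such maps. A minimal numpy sketch of building the target map (the `density_map` helper and the kernel size/sigma are illustrative choices, not the paper's settings):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian kernel normalised to sum to 1, so one cell = unit mass."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def density_map(shape, centres, size=15, sigma=2.0):
    """Build a ground-truth density map by placing a unit-mass Gaussian
    at each annotated cell centre (kernels are clipped at image borders)."""
    dmap = np.zeros(shape, dtype=float)
    k = gaussian_kernel(size, sigma)
    r = size // 2
    for (y, x) in centres:
        y0, y1 = max(0, y - r), min(shape[0], y + r + 1)
        x0, x1 = max(0, x - r), min(shape[1], x + r + 1)
        ky0, kx0 = y0 - (y - r), x0 - (x - r)
        dmap[y0:y1, x0:x1] += k[ky0:ky0 + (y1 - y0), kx0:kx0 + (x1 - x0)]
    return dmap

centres = [(20, 20), (20, 40), (40, 30)]
dmap = density_map((64, 64), centres)
print(round(dmap.sum()))  # 3: integrating the map recovers the count
```

Counting then reduces to summing the regressed map, which is what makes the approach robust to overlapping cells: overlapping Gaussians add up rather than merging into one detection.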

395 citations

Journal Article (DOI)
TL;DR: Qualitative and quantitative results on a publicly available ILD database demonstrate state-of-the-art accuracy for patch-based classification, and show the potential of predicting the ILD type from holistic images.
Abstract: Interstitial lung diseases (ILD) involve several abnormal imaging patterns observed in computed tomography (CT) images. Accurate classification of these patterns plays a significant role in precise...
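
Patch-based classification, as contrasted with holistic-image prediction above, starts by cutting each CT slice into overlapping patches that are classified independently. A small numpy sketch of that extraction step (the `extract_patches` helper and its size/stride values are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def extract_patches(img, size, stride):
    """Slice a 2-D image into overlapping square patches; in patch-based
    classification each patch is then labelled independently."""
    patches = []
    h, w = img.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
    return np.stack(patches)

img = np.arange(64).reshape(8, 8).astype(float)
patches = extract_patches(img, size=4, stride=2)
print(patches.shape)  # (9, 4, 4): a 3x3 grid of overlapping patches
```

Holistic prediction, by contrast, would feed the whole slice to the classifier in one pass instead of aggregating per-patch labels.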

237 citations

Patent (DOI)
Xi Cheng, Li Zhang, Yefeng Zheng
TL;DR: In this article, a similarity metric for multi-modal images is provided by using the corresponding states of pairs of image patches to generate a classification setting for each pair; the classification settings are then used to train a deep neural network via supervised learning.
Abstract: The present embodiments relate to machine learning for multimodal image data. By way of introduction, the present embodiments described below include apparatuses and methods for learning a similarity metric using deep learning based techniques for multimodal medical images. A novel similarity metric for multi-modal images is provided using the corresponding states of pairs of image patches to generate a classification setting for each pair. The classification settings are used to train a deep neural network via supervised learning. A multi-modal stacked denoising auto encoder (SDAE) is used to pre-train the neural network. A continuous and smooth similarity metric is constructed based on the output of the neural network before activation in the last layer. The trained similarity metric may be used to improve the results of image fusion.
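
The construction of a continuous similarity score from the network's pre-activation output can be sketched with a toy model. Below, a single logistic unit over the absolute difference of a patch pair stands in for the deep network (the SDAE pre-training is omitted); everything here is an illustrative simplification, not the patent's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PairSimilarity:
    """Toy logistic 'network' over the absolute difference of a patch pair.
    The pre-activation logit is the continuous, smooth similarity score;
    sigmoid(logit) is the probability that the patches correspond."""

    def __init__(self, dim, rng):
        self.w = rng.normal(scale=0.1, size=dim)
        self.b = 0.0

    def logit(self, p, q):
        # Score taken before the final activation, as the abstract describes.
        return float(np.abs(p - q) @ self.w + self.b)

    def train(self, pairs, labels, lr=0.5, epochs=200):
        # Supervised learning on labelled pairs: 1 = corresponding, 0 = not.
        for _ in range(epochs):
            for (p, q), y in zip(pairs, labels):
                x = np.abs(p - q)
                err = sigmoid(x @ self.w + self.b) - y
                self.w -= lr * err * x
                self.b -= lr * err

rng = np.random.default_rng(0)
dim = 8
pos = [(p, p + rng.normal(scale=0.05, size=dim)) for p in rng.random((20, dim))]
neg = [(rng.random(dim), rng.random(dim)) for _ in range(20)]
model = PairSimilarity(dim, rng)
model.train(pos + neg, [1] * 20 + [0] * 20)
p = rng.random(dim)
print(model.logit(p, p) > model.logit(p, 1.0 - p))  # True: matching patches score higher
```

Because the score is read off before the final activation, it stays smooth and unbounded rather than saturating at 0 and 1, which is what makes it usable as a registration or fusion objective.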

178 citations

Journal Article (DOI)
TL;DR: A convolutional neural network-based method is proposed to automatically retrieve missing or noisy cardiac acquisition plane information from magnetic resonance imaging and to predict the five most common cardiac views; the results show that there is value in fine-tuning a model trained on natural images when transferring it to medical images.
Abstract: In this paper, we propose a convolutional neural network-based method to automatically retrieve missing or noisy cardiac acquisition plane information from magnetic resonance imaging and predict the five most common cardiac views. We fine-tune a convolutional neural network (CNN) initially trained on a large natural image recognition data-set (Imagenet ILSVRC2012) and transfer the learnt feature representations to cardiac view recognition. We contrast this approach with a previously introduced method using classification forests and an augmented set of image miniatures, with prediction using off the shelf CNN features, and with CNNs learnt from scratch. We validate this algorithm on two different cardiac studies with 200 patients and 15 healthy volunteers, respectively. We show that there is value in fine-tuning a model trained for natural images to transfer it to medical images. Our approach achieves an average F1 score of 97.66% and significantly improves the state-of-the-art of image-based cardiac view recognition. This is an important building block to organise and filter large collections of cardiac data prior to further analysis. It allows us to merge studies from multiple centres, to perform smarter image filtering, to select the most appropriate image processing algorithm, and to enhance visualisation of cardiac data-sets in content-based image retrieval.
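
The transfer-learning recipe — keep the pretrained feature extractor frozen and train only a new classification layer for the five cardiac views — can be sketched in numpy. The frozen CNN is replaced here by synthetic cluster features, and all names and hyper-parameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class FineTuneHead:
    """New 5-way softmax classifier trained on top of frozen features,
    mimicking the recipe of replacing a pretrained CNN's final layer
    and retraining only that layer on the new task."""

    def __init__(self, feat_dim, n_classes, rng):
        self.W = rng.normal(scale=0.01, size=(feat_dim, n_classes))
        self.b = np.zeros(n_classes)

    def train(self, feats, labels, lr=0.5, epochs=300):
        onehot = np.eye(self.b.size)[labels]
        for _ in range(epochs):
            probs = softmax(feats @ self.W + self.b)
            grad = (probs - onehot) / len(feats)  # softmax cross-entropy gradient
            self.W -= lr * feats.T @ grad
            self.b -= lr * grad.sum(axis=0)

    def predict(self, feats):
        return (feats @ self.W + self.b).argmax(axis=-1)

# Stand-in for frozen CNN features: one well-separated cluster per cardiac view.
rng = np.random.default_rng(1)
means = rng.normal(scale=4.0, size=(5, 16))
labels = np.repeat(np.arange(5), 30)
feats = means[labels] + rng.normal(scale=0.5, size=(150, 16))
head = FineTuneHead(16, 5, rng)
head.train(feats, labels)
print((head.predict(feats) == labels).mean())  # close to 1.0 on these separable clusters
```

In practice the paper goes one step further and fine-tunes the convolutional layers as well, rather than only the new head; the sketch shows the minimal version of the idea.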

88 citations

Journal Article (DOI)
TL;DR: Comprehensive evaluations on two publicly available data-sets indicate that the proposed method for brain MR image segmentation efficiently achieves better segmentation quality.
Abstract: In this paper, a novel method for brain MR image segmentation has been proposed, with deep learning techniques to obtain preliminary labelling and graphical models to produce the final result. A specific architecture, namely multi-scale structured convolutional neural networks (MS-CNN), is designed to capture discriminative features for each sub-cortical structure and to generate a label probability map for the target image. Due to complex background in brain images and the lack of spatial constraints among testing samples, the initial result obtained with MS-CNN is not smooth. To deal with this problem, dynamic random walker with decayed region of interest is then proposed to enforce label consistency. Comprehensive evaluations have been carried out on two publicly available data-sets and experimental results indicate that the proposed method can obtain better segmentation quality efficiently.
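
The two-stage design — a label probability map from the network, then a graphical model to enforce label consistency — can be illustrated with a much simpler smoothing step. The 3x3 majority filter below is a crude stand-in for the paper's dynamic random walker, used only to show why spatial regularisation removes isolated, noisy labels from the initial map:

```python
import numpy as np

def majority_smooth(labels, iters=1):
    """Enforce label consistency: each pixel takes the majority label of
    its 3x3 neighbourhood. A toy substitute for graphical-model smoothing;
    isolated labels that disagree with their surroundings are removed."""
    out = labels.copy()
    h, w = labels.shape
    for _ in range(iters):
        nxt = out.copy()
        for y in range(h):
            for x in range(w):
                win = out[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
                vals, counts = np.unique(win, return_counts=True)
                nxt[y, x] = vals[counts.argmax()]
        out = nxt
    return out

# An initial labelling with one noisy pixel inside a uniform region,
# like the unsmooth MS-CNN output the abstract describes.
initial = np.zeros((5, 5), dtype=int)
initial[2, 2] = 1
print(majority_smooth(initial).sum())  # 0: the isolated label is removed
```

A random walker differs in that it weights neighbours by image evidence and label probabilities instead of voting uniformly, but the effect on isolated noise is the same.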

85 citations

Performance Metrics

No. of papers from the Journal in previous years
Year  Papers
2023  60
2022  80
2021  103
2020  76
2019  65
2018  61