Author

Kensaku Mori

Bio: Kensaku Mori is an academic researcher from Nagoya University. The author has contributed to research in the topics of Segmentation & Image segmentation. The author has an h-index of 45 and has co-authored 477 publications receiving 8,648 citations. Previous affiliations of Kensaku Mori include the National Institute of Informatics and Sapporo Medical University.


Papers
Posted Content
TL;DR: A novel attention gate (AG) model for medical imaging is proposed that automatically learns to focus on target structures of varying shapes and sizes, eliminating the need for the explicit external tissue/organ localisation modules used in cascaded convolutional neural networks (CNNs).
Abstract: We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available.
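Below is a minimal PyTorch sketch of the additive attention-gate idea summarized above: skip-connection features and a coarser gating signal are projected to a common space, combined, and squashed into a per-pixel weight that rescales the skip features. The 2-D setting, channel sizes, and names are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of an additive attention gate in the spirit of Attention U-Net.
# Channel sizes, names, and the 2-D setting are assumptions for illustration.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    def __init__(self, gate_channels, skip_channels, inter_channels):
        super().__init__()
        # Project the coarse gating signal and the skip features to a common space.
        self.w_g = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        # Collapse to a single-channel attention map in [0, 1].
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, g):
        # x: skip-connection features; g: gating signal. Equal spatial sizes are
        # assumed here; in practice one tensor is resampled so the shapes match.
        attn = torch.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))
        return x * attn  # suppress irrelevant regions, keep salient ones


# Usage: gate U-Net skip features with the coarser decoder signal.
x = torch.randn(1, 64, 32, 32)   # skip features
g = torch.randn(1, 128, 32, 32)  # gating signal (already upsampled to match)
gated = AttentionGate(gate_channels=128, skip_channels=64, inter_channels=32)(x, g)
```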

2,452 citations

Journal ArticleDOI
TL;DR: A novel self-supervised learning strategy based on context restoration is proposed in order to better exploit unlabelled images and is validated in three common problems in medical imaging: classification, localization, and segmentation.
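The context-restoration pretext task corrupts an unlabelled image by swapping randomly chosen pairs of patches and trains a network to restore the original; the learned encoder is then reused for the downstream task. A rough sketch of the corruption step, assuming 2-D single-channel images and placeholder patch and swap counts, might look like this:

```python
# Sketch of the patch-swapping corruption used in a context-restoration
# pretext task. Patch size and number of swaps are placeholder assumptions.
import numpy as np


def corrupt_by_patch_swapping(image, patch=8, n_swaps=10, rng=None):
    """Return a copy of a 2-D image with n_swaps pairs of patches exchanged."""
    rng = rng if rng is not None else np.random.default_rng()
    corrupted = image.copy()
    h, w = image.shape
    for _ in range(n_swaps):
        y1, x1 = rng.integers(0, h - patch), rng.integers(0, w - patch)
        y2, x2 = rng.integers(0, h - patch), rng.integers(0, w - patch)
        a = corrupted[y1:y1 + patch, x1:x1 + patch].copy()
        b = corrupted[y2:y2 + patch, x2:x2 + patch].copy()
        corrupted[y1:y1 + patch, x1:x1 + patch] = b
        corrupted[y2:y2 + patch, x2:x2 + patch] = a
    return corrupted


# An encoder-decoder network is then trained (e.g. with an L2 loss) to map the
# corrupted image back to the original; the pretrained encoder is reused for
# classification, localization, or segmentation.
```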

393 citations

Journal ArticleDOI
TL;DR: In this article, the authors evaluated the performance of real-time computer-aided diagnosis with endocytoscopes (×520 ultramagnifying colonoscopes) providing microvascular and cellular visualization of colorectal polyps after application of the narrow-band imaging [NBI] and methylene blue staining modes, respectively.
Abstract: Background: Computer-aided diagnosis (CAD) for colonoscopy may help endoscopists distinguish neoplastic polyps (adenomas) requiring resection from nonneoplastic polyps not requiring resection, potentially reducing cost.
Objective: To evaluate the performance of real-time CAD with endocytoscopes (×520 ultramagnifying colonoscopes providing microvascular and cellular visualization of colorectal polyps after application of the narrow-band imaging [NBI] and methylene blue staining modes, respectively).
Design: Single-group, open-label, prospective study (UMIN [University hospital Medical Information Network] Clinical Trial Registry: UMIN000027360).
Setting: University hospital.
Participants: 791 consecutive patients undergoing colonoscopy and 23 endoscopists.
Intervention: Real-time use of CAD during colonoscopy.
Measurements: CAD-predicted pathology (neoplastic or nonneoplastic) of detected diminutive polyps (≤5 mm), based on real-time outputs, compared with the pathologic diagnosis of the resected specimen (gold standard). The primary end point was whether CAD with the stained mode produced a negative predictive value (NPV) of 90% or greater for identifying diminutive rectosigmoid adenomas, the threshold required to "diagnose-and-leave" nonneoplastic polyps. Best- and worst-case scenarios assumed that polyps lacking either a CAD diagnosis or pathology were true- or false-positive or true- or false-negative, respectively.
Results: Overall, 466 diminutive (including 250 rectosigmoid) polyps from 325 patients were assessed by CAD, with a pathologic prediction rate of 98.1% (457 of 466). The NPVs of CAD for diminutive rectosigmoid adenomas were 96.4% (95% CI, 91.8% to 98.8%) in the best-case scenario and 93.7% (CI, 88.3% to 97.1%) in the worst-case scenario with the stained mode, and 96.5% (CI, 92.1% to 98.9%) in the best-case scenario and 95.2% (CI, 90.3% to 98.0%) in the worst-case scenario with NBI.
Limitation: Two thirds of the colonoscopies were conducted by experts who had each performed more than 200 endocytoscopies; 186 polyps not assessed by CAD were excluded.
Conclusion: Real-time CAD can achieve the performance level required for a diagnose-and-leave strategy for diminutive, nonneoplastic rectosigmoid polyps.
Primary funding source: Japan Society for the Promotion of Science.
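For orientation, the primary end point reduces to a negative predictive value, NPV = TN / (TN + FN), evaluated under best- and worst-case handling of polyps without a CAD output or pathology. The sketch below uses placeholder counts only; it is not the study's data or analysis code.

```python
# Illustration of the NPV calculation behind the primary end point.
# All counts are placeholders, not the study's data.
def npv(true_negatives, false_negatives):
    """NPV = TN / (TN + FN): fraction of 'nonneoplastic' calls confirmed by pathology."""
    return true_negatives / (true_negatives + false_negatives)


tn, fn, unassessed = 150, 8, 5           # placeholder counts
best_case = npv(tn + unassessed, fn)     # unassessed polyps counted favourably
worst_case = npv(tn, fn + unassessed)    # unassessed polyps counted unfavourably
print(f"best case {best_case:.1%}, worst case {worst_case:.1%}")  # diagnose-and-leave threshold: >= 90%
```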

350 citations

Journal ArticleDOI
TL;DR: A general, fully-automated method for multi-organ segmentation of abdominal computed tomography (CT) scans based on a hierarchical atlas registration and weighting scheme that generates target-specific priors from an atlas database by combining aspects from multi-atlas registration and patch-based segmentation, two widely used methods in brain segmentation.
Abstract: A robust automated segmentation of abdominal organs can be crucial for computer-aided diagnosis and laparoscopic surgery assistance. Many existing methods are specialized to the segmentation of individual organs and struggle to deal with the variability of the shape and position of abdominal organs. We present a general, fully-automated method for multi-organ segmentation of abdominal computed tomography (CT) scans. The method is based on a hierarchical atlas registration and weighting scheme that generates target-specific priors from an atlas database by combining aspects from multi-atlas registration and patch-based segmentation, two widely used methods in brain segmentation. The final segmentation is obtained by applying an automatically learned intensity model in a graph-cuts optimization step, incorporating high-level spatial knowledge. The proposed approach can deal with high inter-subject variation while being flexible enough to be applied to different organs. We have evaluated the segmentation on a database of 150 manually segmented CT images. The achieved results compare well to those of state-of-the-art methods, which are usually tailored to more specific questions, with Dice overlap values of 94%, 93%, 70%, and 92% for the liver, kidneys, pancreas, and spleen, respectively.
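The reported overlap values use the Dice coefficient, Dice(A, B) = 2|A ∩ B| / (|A| + |B|). A minimal sketch of that metric on binary masks follows; it is an illustration, not the authors' evaluation code.

```python
# Dice overlap between two binary segmentation masks (toy example, not the
# authors' evaluation pipeline).
import numpy as np


def dice(pred, ref):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks of equal shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom


# Toy 3-D masks: two 2x2x2 cubes shifted by one voxel along z.
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 2:4] = True
print(f"Dice = {dice(a, b):.2f}")  # 0.50
```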

285 citations


Cited by
More filters
Christopher M. Bishop
01 Jan 2006
TL;DR: A textbook covering probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and methods for combining models.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: Two specific computer-aided detection problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification, are studied, achieving state-of-the-art performance on mediastinal LN detection and reporting the first five-fold cross-validation classification results for ILD.
Abstract: Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important, but previously understudied, factors of employing deep convolutional neural networks in computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in their numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis, and valuable insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
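As a concrete illustration of the transfer-learning route discussed above (fine-tuning an ImageNet pre-trained CNN on a medical classification task), the sketch below loads a pretrained backbone, replaces the classifier head, and freezes the early layers. The ResNet-18 backbone, two-class head, and frozen-layer split are assumptions for illustration; they are not the specific architectures evaluated in the paper.

```python
# Sketch of ImageNet-pretrained fine-tuning for a two-class CADe task.
# Backbone, head size, and frozen layers are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # e.g. candidate vs. non-candidate (placeholder)

# Load an ImageNet-pretrained backbone and replace the final classifier layer.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Freeze everything except the last residual stage and the new head.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()
# A standard training loop over labelled 2-D patches / axial slices follows.
```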

4,249 citations

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either have practical significance or are of theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience and a wide range of mathematical methods. A CD-ROM with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations