Author

Kazunari Misawa

Bio: Kazunari Misawa is an academic researcher from Nagoya University. The author has contributed to research in topics including Segmentation and Cancer, has an h-index of 26, and has co-authored 147 publications receiving 3,810 citations.


Papers
Posted Content
TL;DR: A novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes is proposed to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs).
Abstract: We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available.

2,452 citations
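
To make the attention-gate idea above concrete, here is a minimal PyTorch sketch of an additive attention gate. It is only an illustration: the 2D convolutions, channel sizes, and upsampling of the gating signal are assumptions chosen for brevity, not the authors' publicly available implementation.

# A minimal sketch of an additive attention gate, as described in the abstract
# above. Channel sizes, 2D convolutions, and upsampling of the gating signal
# are illustrative assumptions, not the authors' reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Gates skip-connection features x with a coarser gating signal g."""
    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # Additive attention: alpha = sigmoid(psi(relu(theta(x) + phi(g)))).
        g_proj = F.interpolate(self.phi_g(g), size=x.shape[2:], mode="bilinear",
                               align_corners=False)
        alpha = torch.sigmoid(self.psi(F.relu(self.theta_x(x) + g_proj)))
        # Salient regions keep their activations; irrelevant ones are suppressed.
        return x * alpha

# Usage: gate 64-channel skip features with a deeper 128-channel gating signal.
gate = AttentionGate(in_channels=64, gating_channels=128, inter_channels=32)
skip, gating = torch.randn(1, 64, 128, 128), torch.randn(1, 128, 64, 64)
print(gate(skip, gating).shape)  # torch.Size([1, 64, 128, 128])

Plugging such a gate into each skip connection of a standard U-Net gives the Attention U-Net layout described in the abstract, without any separate localisation module.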

Journal ArticleDOI
TL;DR: A novel self-supervised learning strategy based on context restoration is proposed in order to better exploit unlabelled images and is validated in three common problems in medical imaging: classification, localization, and segmentation.

393 citations
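
As a rough illustration of the context-restoration pretext task named in the TL;DR above, the sketch below corrupts an image by swapping random patch pairs; a network trained to undo this corruption needs no manual labels. The patch size, swap count, and function name are assumptions, not the paper's exact setup.

# A minimal sketch of a context-restoration pretext task: random patch pairs
# are swapped, and a model is later trained to restore the original image.
# Patch size and number of swaps are illustrative assumptions.
import numpy as np

def corrupt_by_patch_swapping(image, num_swaps=10, patch=8, rng=None):
    """Return a copy of a 2D image with num_swaps random patch pairs swapped."""
    rng = rng if rng is not None else np.random.default_rng()
    corrupted = image.copy()
    h, w = image.shape
    for _ in range(num_swaps):
        # Pick two random top-left corners inside the image bounds.
        y1, x1 = rng.integers(0, h - patch), rng.integers(0, w - patch)
        y2, x2 = rng.integers(0, h - patch), rng.integers(0, w - patch)
        a = corrupted[y1:y1 + patch, x1:x1 + patch].copy()
        b = corrupted[y2:y2 + patch, x2:x2 + patch].copy()
        corrupted[y1:y1 + patch, x1:x1 + patch] = b
        corrupted[y2:y2 + patch, x2:x2 + patch] = a
    return corrupted

# Self-supervision: train a network to map `corrupted` back to `image`
# (e.g. with an L2 loss); the learned features are then reused downstream
# for classification, localization, or segmentation.
image = np.random.rand(128, 128).astype(np.float32)
corrupted = corrupt_by_patch_swapping(image, num_swaps=20)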

Journal ArticleDOI
TL;DR: A general, fully-automated method for multi-organ segmentation of abdominal computed tomography (CT) scans based on a hierarchical atlas registration and weighting scheme that generates target-specific priors from an atlas database by combining aspects from multi-atlas registration and patch-based segmentation, two widely used methods in brain segmentation.
Abstract: A robust automated segmentation of abdominal organs can be crucial for computer-aided diagnosis and laparoscopic surgery assistance. Many existing methods are specialized to the segmentation of individual organs and struggle to deal with the variability of the shape and position of abdominal organs. We present a general, fully-automated method for multi-organ segmentation of abdominal computed tomography (CT) scans. The method is based on a hierarchical atlas registration and weighting scheme that generates target-specific priors from an atlas database by combining aspects from multi-atlas registration and patch-based segmentation, two widely used methods in brain segmentation. The final segmentation is obtained by applying an automatically learned intensity model in a graph-cuts optimization step, incorporating high-level spatial knowledge. The proposed approach allows us to deal with high inter-subject variation while being flexible enough to be applied to different organs. We have evaluated the segmentation on a database of 150 manually segmented CT images. The achieved results compare well to state-of-the-art methods, which are usually tailored to more specific questions, with Dice overlap values of 94%, 93%, 70%, and 92% for liver, kidneys, pancreas, and spleen, respectively.

285 citations
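
The prior-generation step of the abstract above can be illustrated with a little NumPy: atlas label maps already registered to the target are fused into a target-specific probabilistic prior using per-atlas similarity weights. This is only a sketch of the fusion idea; the hierarchical weighting scheme and the graph-cuts refinement are not reproduced, and all names and weights are illustrative.

# A minimal sketch of weighted atlas label fusion: atlas label maps already
# registered to the target are combined into a per-class probabilistic prior.
# The hierarchical weighting and graph-cuts steps of the full method are omitted.
import numpy as np

def fuse_atlas_priors(atlas_labels, weights, num_classes):
    """atlas_labels: list of integer label volumes registered to the target;
    weights: one non-negative similarity weight per atlas."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()            # normalise atlas contributions
    prior = np.zeros((num_classes,) + atlas_labels[0].shape)
    for labels, w in zip(atlas_labels, weights):
        for c in range(num_classes):
            prior[c] += w * (labels == c)        # weighted vote for each class
    return prior

# Toy example: three 4x4x4 "atlases", five classes (background, liver, kidneys,
# pancreas, spleen); per-voxel class probabilities sum to ~1.
rng = np.random.default_rng(0)
atlases = [rng.integers(0, 5, size=(4, 4, 4)) for _ in range(3)]
prior = fuse_atlas_priors(atlases, weights=[0.5, 0.3, 0.2], num_classes=5)
print(prior.shape)  # (5, 4, 4, 4)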

Journal ArticleDOI
TL;DR: This trial confirmed that LADG was as safe as ODG in terms of adverse events and short-term clinical outcomes, and LADG may be an alternative procedure for clinical stage IA/IB gastric cancer if the noninferiority of LADG in terms of RFS is confirmed.
Abstract: No confirmatory randomized controlled trials (RCTs) have evaluated the efficacy of laparoscopy-assisted distal gastrectomy (LADG) compared with open distal gastrectomy (ODG). We performed an RCT to confirm that LADG is not inferior to ODG in efficacy. We conducted a multi-institutional RCT. Eligibility criteria included histologically proven gastric adenocarcinoma in the middle or lower third of the stomach and a clinical stage I tumor. Patients were preoperatively randomized to ODG or LADG. This study is now in the follow-up stage. The primary endpoint is relapse-free survival (RFS), and the primary analysis is planned in 2018. Here, we compared the surgical outcomes of the two groups. This trial was registered at the UMIN Clinical Trials Registry as UMIN000003319. Between March 2010 and November 2013, 921 patients (LADG 462, ODG 459) were enrolled from 33 institutions. Operative time was longer for LADG than for ODG (median 278 vs. 194 min, p < 0.001), while blood loss was lower (median 38 vs. 115 ml, p < 0.001). There was no difference in the overall proportion of patients with in-hospital grade 3–4 surgical complications (LADG 3.3 %, ODG 3.7 %). The proportion of patients with elevated serum AST/ALT was higher in LADG than in ODG (16.4 vs. 5.3 %, p < 0.001). There was no operation-related death in either arm. This trial confirmed that LADG was as safe as ODG in terms of adverse events and short-term clinical outcomes. LADG may be an alternative procedure for clinical stage IA/IB gastric cancer if the noninferiority of LADG in terms of RFS is confirmed.

273 citations

Journal ArticleDOI
TL;DR: This work shows that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models.

191 citations
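
To give a concrete picture of what "a multi-class 3D FCN" means in practice, here is a deliberately tiny PyTorch sketch: a stack of 3D convolutions ending in a 1x1x1 classifier that emits per-voxel scores for every class. The depth, channel counts, and absence of downsampling or skip connections are simplifications, not the architecture evaluated in the paper.

# A deliberately tiny multi-class 3D fully convolutional network. Depth,
# channel counts, and the lack of downsampling/skip connections are
# simplifications for illustration, not the architecture used in the paper.
import torch
import torch.nn as nn

class Tiny3DFCN(nn.Module):
    def __init__(self, in_channels=1, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # A 1x1x1 convolution produces a per-voxel score for every class.
        self.classifier = nn.Conv3d(32, num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

# One CT sub-volume in, per-voxel class scores out at the same spatial size.
scores = Tiny3DFCN()(torch.randn(1, 1, 32, 64, 64))
print(scores.shape)  # torch.Size([1, 5, 32, 64, 64])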


Cited by
Posted Content
TL;DR: A novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes is proposed to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs).
Abstract: We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available.

2,452 citations

Posted Content
TL;DR: This work proposes the Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities, and performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques.
Abstract: When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises when we add new capabilities to a Convolutional Neural Network (CNN) but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques and performs similarly to multitask learning that uses the original task data we assume to be unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.

1,037 citations
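
The core of the Learning without Forgetting objective described above can be written as a single combined loss: the usual supervised loss on the new task plus a distillation term that keeps the old-task outputs on new-task images close to those recorded from the original network. The temperature, loss weight, and KL-divergence form of the distillation term below are assumptions made for brevity, not the paper's exact formulation.

# A minimal sketch of a Learning-without-Forgetting style objective. The
# temperature, weighting, and KL form of the distillation term are assumptions
# for illustration rather than the paper's exact loss.
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, new_labels, old_logits, recorded_old_logits,
             temperature=2.0, distill_weight=1.0):
    # Standard supervised loss on the new task.
    task_loss = F.cross_entropy(new_logits, new_labels)
    # Distillation: softened old-task predictions should stay close to the
    # responses recorded from the frozen original network on the same images.
    log_p = F.log_softmax(old_logits / temperature, dim=1)
    q = F.softmax(recorded_old_logits / temperature, dim=1)
    distill_loss = F.kl_div(log_p, q, reduction="batchmean") * temperature ** 2
    return task_loss + distill_weight * distill_loss

# Toy usage: 8 samples, 10 new-task classes, 100 old-task classes.
new_logits = torch.randn(8, 10, requires_grad=True)
new_labels = torch.randint(0, 10, (8,))
old_logits = torch.randn(8, 100, requires_grad=True)
recorded = torch.randn(8, 100)  # outputs of the original network, stored once
loss = lwf_loss(new_logits, new_labels, old_logits, recorded)
loss.backward()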

Journal ArticleDOI
TL;DR: Experimental results show that AG models consistently improve the prediction performance of the base architectures across different datasets and training sizes while preserving computational efficiency.

966 citations

Journal ArticleDOI
TL;DR: This review paper covers the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up, and particularly focuses on the integration of AI with X-ray and CT, both of which are widely used in frontline hospitals.
Abstract: The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world. Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19, while the recently emerging artificial intelligence (AI) technologies further strengthen the power of the imaging tools and help medical specialists. We hereby review the rapid responses in the community of medical imaging (empowered by AI) toward COVID-19. For example, AI-empowered image acquisition can significantly help automate the scanning procedure and also reshape the workflow with minimal contact with patients, providing the best protection to the imaging technicians. Also, AI can improve work efficiency by accurate delineation of infections in X-ray and CT images, facilitating subsequent quantification. Moreover, computer-aided platforms help radiologists make clinical decisions, i.e., for disease diagnosis, tracking, and prognosis. In this review paper, we thus cover the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up. We particularly focus on the integration of AI with X-ray and CT, both of which are widely used in frontline hospitals, in order to depict the latest progress of medical imaging and radiology in the fight against COVID-19.

916 citations
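
Most of the review is narrative, but the quantification step it mentions (measuring infection burden from segmentation masks) is easy to make concrete. The sketch below is a hypothetical example: the mask names, voxel volume, and returned fields are assumptions, not anything specified in the review.

# A hypothetical sketch of the quantification step mentioned in the review:
# given a lung mask and an infection mask from a segmentation model, report
# the infected fraction of the lung. Names and voxel volume are assumptions.
import numpy as np

def infection_quantification(lung_mask, infection_mask, voxel_volume_ml=1.0):
    """Both masks are boolean volumes of identical shape."""
    lung_voxels = int(lung_mask.sum())
    infected_voxels = int(np.logical_and(lung_mask, infection_mask).sum())
    return {
        "lung_volume_ml": lung_voxels * voxel_volume_ml,
        "infected_volume_ml": infected_voxels * voxel_volume_ml,
        "infected_percentage": 100.0 * infected_voxels / max(lung_voxels, 1),
    }

# Toy example: a cubic "lung" with a small infected region inside it.
lung = np.zeros((64, 64, 64), dtype=bool)
lung[16:48, 16:48, 16:48] = True
infection = np.zeros_like(lung)
infection[20:28, 20:28, 20:28] = True
print(infection_quantification(lung, infection, voxel_volume_ml=0.8))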