Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning
References
U-Net: Convolutional Networks for Biomedical Image Segmentation
Fully convolutional networks for semantic segmentation
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
Caffe: Convolutional Architecture for Fast Feature Embedding
A survey on deep learning in medical image analysis
Frequently Asked Questions (10)
Q2. What is the simplest way to train a CNN?
With the training set T̂, the CNN model (e.g., P-Net or PC-Net) is trained to extract the target from its bounding box, which is a binary segmentation problem irrespective of the object type.
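The idea of reducing segmentation inside a bounding box to a class-agnostic binary problem can be sketched as follows; `crop_to_bbox` is a hypothetical helper (not from the paper) showing how an image patch and a binarized target mask would be prepared for such training.

```python
import numpy as np

def crop_to_bbox(image, mask, bbox):
    """Crop an image and its label mask to a bounding box.

    bbox = (y_min, y_max, x_min, x_max). The cropped mask is binarized
    (foreground vs. background), so the learning problem is the same
    regardless of the object type inside the box.
    """
    y0, y1, x0, x1 = bbox
    patch = image[y0:y1, x0:x1]
    binary_mask = (mask[y0:y1, x0:x1] > 0).astype(np.uint8)
    return patch, binary_mask

# Toy example: a 6x6 image containing one labeled instance.
image = np.arange(36, dtype=np.float32).reshape(6, 6)
mask = np.zeros((6, 6), dtype=np.int32)
mask[2:4, 2:4] = 3  # instance with label 3
patch, target = crop_to_bbox(image, mask, (1, 5, 1, 5))
print(patch.shape, target.max())  # (4, 4) 1
```

Because the target is binarized, the same network can be trained on boxes around any organ or lesion, which is what makes the framework applicable to previously unseen objects.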
Q3. How many patients were used for training, validation and testing?
The authors performed data splitting at the patient level and used images from 10, 2, and 6 patients for training, validation, and testing, respectively.
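A patient-level split means every image from a given patient lands in exactly one subset, preventing leakage between training and evaluation. A minimal sketch, assuming patient IDs are simply integers (the splitting code itself is not given in the paper):

```python
import random

def split_by_patient(patient_ids, n_train, n_val, n_test, seed=0):
    """Split at the patient level so no patient contributes images
    to more than one subset (avoids data leakage across splits)."""
    ids = sorted(patient_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

train, val, test = split_by_patient(range(18), 10, 2, 6)
assert not (set(train) & set(val)) and not (set(val) & set(test))
print(len(train), len(val), len(test))  # 10 2 6
```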
Q4. What is the label of the qth instance in X p?
Suppose the label of the q-th instance in X_p is l_pq; Y_p is converted into a binary image Y_pq based on whether the value of each pixel in Y_p equals l_pq.
Q5. How many patients were selected for training?
The authors randomly selected T1c and FLAIR images of 19, 25 patients with a single scan for validation and testing, respectively, and used T1c images of the remaining patients for training.
Q6. How was the input of P-Net resized?
To deal with organs at different scales, the authors resized the input of P-Net so that the minimal value of width and height was 96 pixels. (DeepMedic and HighRes3DNet were implemented in http://niftynet.io.)
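The resizing rule is simply to scale both sides so the shorter one becomes 96 pixels. A dependency-free sketch using nearest-neighbour index lookup (the paper does not specify the interpolation, so this is an illustrative assumption):

```python
import numpy as np

def resize_min_side(image, min_side=96):
    """Rescale so that min(height, width) == min_side, preserving the
    aspect ratio. Nearest-neighbour sampling keeps the sketch simple."""
    h, w = image.shape[:2]
    scale = min_side / min(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return image[rows][:, cols]

img = np.zeros((48, 120), dtype=np.float32)
out = resize_min_side(img)
print(out.shape)  # (96, 240)
```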
Q7. How was the input of the CNNs preprocessed?
To deal with different organs and different modalities, the region inside a bounding box was normalized by the mean value and standard deviation of that region, and then used as the input of the CNNs.
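Normalizing each bounding-box region by its own statistics makes intensities comparable across organs and imaging modalities. A minimal sketch (function name and epsilon guard are assumptions, not from the paper):

```python
import numpy as np

def normalize_region(image, bbox, eps=1e-8):
    """Normalize the region inside a bounding box by that region's own
    mean and standard deviation, as described for the CNN input."""
    y0, y1, x0, x1 = bbox
    region = image[y0:y1, x0:x1].astype(np.float64)
    return (region - region.mean()) / (region.std() + eps)

img = np.random.default_rng(0).normal(100.0, 20.0, size=(64, 64))
x = normalize_region(img, (10, 40, 10, 40))
print(x.mean(), x.std())  # mean ≈ 0, std ≈ 1
```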
Q8. What is the proposed interactive framework with bounding box and image-specific fine-tuning?
1. To deal with different (including previously unseen) objects in a unified framework, the authors propose to use a CNN that takes as input the content of a bounding box of one instance and gives a binary segmentation for that instance.
Q9. What was the test used to determine the performance difference between two different segmentation methods?
The authors used a paired Student’s t-test to determine whether the performance difference between two segmentation methods was significant [30].
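A paired t-test compares two methods on the same test cases by looking at per-case score differences. A stdlib-only sketch of the test statistic (the scores below are made-up illustrative Dice values, not results from the paper):

```python
import math

def paired_t_statistic(a, b):
    """Paired Student's t-test statistic for two sets of per-case scores
    (e.g. Dice of two segmentation methods on the same test cases)."""
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    # Compare the result against the t distribution with n - 1 dof.
    return mean_d / math.sqrt(var_d / n)

dice_a = [0.90, 0.88, 0.92, 0.85, 0.91]  # hypothetical method A
dice_b = [0.86, 0.84, 0.89, 0.83, 0.88]  # hypothetical method B
t = paired_t_statistic(dice_a, dice_b)
print(round(t, 3))
```

In practice one would typically call `scipy.stats.ttest_rel`, which returns both the statistic and the p-value directly.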
Q10. What is the way to improve the accuracy of BIFSeg?
To address this problem, BIFSeg allows optional supervised fine-tuning that leverages user interactions to achieve higher robustness and accuracy.