Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results From the MICCAI 2015 Endoscopic Vision Challenge
Frequently Asked Questions (15)
Q2. What are the future works mentioned in the paper "Comparative validation of polyp detection methods in video colonoscopy: results from the MICCAI 2015 Endoscopic Vision Challenge"?
More precisely, future studies should tackle some of the issues detected, such as the variability in source-data resolution and size, and should aim to cover all polyp morphological types. Beyond a more complete analysis, this may yield a deeper understanding of how each method works and in which scenarios each shows the most benefit, pointing toward optimized combinations of methods to finally build a clinically useful system.
Q3. What is the significance of the metric used to compare different methods?
Considering the scope of the analysis presented in the paper, the metric used to compare the different methods is the F1-score, as it presents a balance between missed polyps and false alarms.
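As a concrete illustration (our own sketch, not code from the paper), the F1-score balances the two failure modes by taking the harmonic mean of precision (penalizing false alarms) and recall (penalizing missed polyps):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision (TP / (TP + FP)) and recall (TP / (TP + FN))."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 80 correctly detected polyp frames, 20 false alarms, 20 missed frames
# -> precision = recall = 0.8, so F1 is also ~0.8
print(f1_score(80, 20, 20))
```

Because the harmonic mean is dominated by the smaller of the two terms, a method cannot score well by trading many false alarms for a few extra detections, or vice versa.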
Q4. What is the straightforward conclusion from this experiment?
The most straightforward conclusion from this experiment is that image quality matters: methods' performance decreases when only bad-quality images are considered.
Q5. What is the main result of this comparative study?
The main result of this comparative study is that methods including some degree of machine learning outperform classic hand-crafted methods, especially regarding specificity scores in non-polyp videos.
Q6. What are the main reasons for the lack of coherence in the analysis of polyps?
The lack of temporal coherence and the great variability in polyp appearance due to camera progression and visibility conditions might impact their performance in the full-sequence analysis, as they might cause instability in their response to similar stimuli.
Q7. Why does the analysis show that PLS offers the best performance?
The analysis of sequences without polyp frames shows that PLS offers the best performance, which is possibly due to the presence of a specific polyp-presence module in this approach.
Q8. What are some image challenges that make polyp detection difficult?
There are some image challenges that generally seem to make polyp-frame detection difficult, such as the presence of overlay information and overexposed regions, with the latter being more prevalent in the explored images.
Q9. What was the requirement for the performance curves drawing?
Teams could also provide a confidence value (between 0 and 1) for drawing the performance curves, though this was not mandatory.
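To make the role of these confidence values concrete, here is a minimal sketch (our illustration, not the challenge's evaluation code) of how per-frame confidences allow performance curves to be drawn: a threshold is swept over the confidences, and precision and recall are recomputed at each operating point:

```python
def precision_recall_curve(confidences, labels, thresholds):
    """For each threshold t, count frames with confidence >= t as positive
    detections and compare against ground truth (1 = polyp frame, 0 = no polyp).
    Returns a list of (threshold, precision, recall) operating points."""
    curve = []
    for t in thresholds:
        tp = sum(1 for c, y in zip(confidences, labels) if c >= t and y == 1)
        fp = sum(1 for c, y in zip(confidences, labels) if c >= t and y == 0)
        fn = sum(1 for c, y in zip(confidences, labels) if c < t and y == 1)
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        curve.append((t, precision, recall))
    return curve

# Four frames with hypothetical confidences; two thresholds give two points
curve = precision_recall_curve(
    confidences=[0.9, 0.8, 0.4, 0.3],
    labels=[1, 1, 0, 1],
    thresholds=[0.5, 0.85],
)
```

A method reporting only binary detections yields a single operating point, which is why curves could only be drawn for the teams that supplied confidences.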
Q10. What is the main feature that a clinically applicable system should have?
The main feature that a clinically applicable system should have is that it should detect all polyps regardless of their appearance (a high detection rate (DR), measured as the percentage of polyps detected in at least one frame out of all polyps present in the testing videos).
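Under that definition, the detection rate can be sketched as follows (an illustrative helper, not the challenge's code): a polyp counts as detected if the method fired on at least one of its frames.

```python
def detection_rate(detections_per_polyp):
    """detections_per_polyp maps each polyp id to a list of per-frame
    booleans (True = method fired on that frame). A polyp is detected
    if it was hit in at least one frame; DR is reported as a percentage."""
    detected = sum(1 for frames in detections_per_polyp.values() if any(frames))
    return 100.0 * detected / len(detections_per_polyp)

# Three hypothetical polyps: one hit once, one missed entirely, one hit twice
dr = detection_rate({
    "polyp_a": [False, True, False],
    "polyp_b": [False, False],
    "polyp_c": [True, True],
})
print(dr)  # 66.66...% (2 of 3 polyps detected)
```

Note how DR deliberately ignores how many frames of a polyp are missed: a single-frame hit suffices, which matches the clinical goal of alerting the endoscopist at least once per polyp.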
Q11. What can the authors observe about the effect of polyps on methods’ performance?
The authors also observe that methods tend to produce a higher number of false alarms for good-quality images, which they interpret as a result of structures likely to be confused with polyps being better visually defined.
Q12. What is the main conclusion to be extracted from the study?
With respect to polyp frames, the first conclusion to be extracted is that low visibility images and the presence of specular highlights within the polyp affect all methods in the same way.
Q13. Why did the authors not perform the same experiment for ETIS-LARIB database?
For the sake of statistical representativeness of the results, the authors did not perform the same experiment for ETIS-LARIB database due to its smaller size.
Q14. What are the three criteria used to account for differences in performance related to polyp morphology?
To account for differences in performance related to polyp morphology, the authors use Precision, Recall and F1 scores as defined in Table II.
Q15. How many teams provided the curves for each method?
In order to provide these curves for all teams, confidence values should have been provided; in this case, only one team per subcategory (UNS-UCLAN in still-frame analysis and ASU-Mayo in full-video analysis) provided this information, whereas the rest only provided what the authors assume are results obtained using the best configuration of each particular method.