Olivier Petit
Researcher at Conservatoire national des arts et métiers
Publications - 7
Citations - 141
Olivier Petit is an academic researcher at the Conservatoire national des arts et métiers. His research focuses on image segmentation and deep learning. He has an h-index of 2, having co-authored 6 publications that have received 24 citations.
Papers
Book Chapter
U-Net Transformer: Self and Cross Attention for Medical Image Segmentation
TL;DR: U-Transformer combines a U-shaped architecture for image segmentation with self- and cross-attention from Transformers, overcoming the inability of U-Nets to model long-range contextual interactions and spatial dependencies.
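The attention mechanism this summary refers to can be illustrated with a minimal sketch. The function below is generic scaled dot-product attention in NumPy, not the paper's actual implementation; the shapes and the `cross_attention` name are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: each query attends over all keys/values.
    Self-attention is the special case where queries, keys, and values all
    come from the same feature map."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (nq, nk) similarity matrix
    weights = softmax(scores, axis=-1)       # each query's distribution over keys
    return weights @ values                  # (nq, d) attended features

# Toy example: 4 decoder positions attending over 6 encoder positions.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(6, 8))
v = rng.normal(size=(6, 8))
out = cross_attention(q, k, v)
print(out.shape)  # (4, 8)
```

Because every query position weighs every key position, this operation captures the long-range interactions that plain convolutions in a U-Net cannot.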
Book Chapter
Handling Missing Annotations for Semantic Segmentation with Deep ConvNets
TL;DR: SMILE, a new deep convolutional neural network addressing the issue of learning with incomplete ground truth, aims to identify ambiguous labels and ignore them during training, so that incorrect or noisy information is not propagated.
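The core idea of ignoring unannotated pixels during training can be sketched as a masked loss. This is a minimal NumPy illustration, not the paper's method; the `IGNORE` sentinel and the function name are assumptions.

```python
import numpy as np

IGNORE = -1  # hypothetical sentinel marking pixels with missing/ambiguous labels

def masked_cross_entropy(probs, labels):
    """Mean pixel-wise cross-entropy that skips pixels labeled IGNORE,
    so missing annotations contribute no training signal."""
    flat_probs = probs.reshape(-1, probs.shape[-1])
    flat_labels = labels.reshape(-1)
    keep = flat_labels != IGNORE                  # mask out unlabeled pixels
    picked = flat_probs[keep, flat_labels[keep]]  # probability of the true class
    return -np.log(picked + 1e-12).mean()

# 2x2 image, 3 classes, uniform predictions; one pixel has no annotation.
probs = np.full((2, 2, 3), 1 / 3)
labels = np.array([[0, 1], [IGNORE, 2]])
loss = masked_cross_entropy(probs, labels)
print(round(loss, 4))  # 1.0986, i.e. -log(1/3) over the 3 labeled pixels
```

Deep-learning frameworks expose the same idea directly, e.g. an ignore-index option on their cross-entropy losses.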
Journal Article
Iterative confidence relabeling with deep ConvNets for organ segmentation with partial labels.
TL;DR: The authors propose an iterative confidence self-training approach, inspired by curriculum learning, to relabel missing pixel labels. It selects the most confident predictions using a specifically designed confidence network that learns an uncertainty measure, which is leveraged in the relabeling process.
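One step of such a confidence-gated relabeling can be sketched as follows. This is a simplified NumPy illustration under assumed conventions (a `-1` sentinel for missing labels, a fixed confidence threshold), not the paper's actual pipeline.

```python
import numpy as np

def relabel_confident(labels, probs, confidence, threshold=0.9, ignore=-1):
    """One relabeling step: fill in missing labels (the `ignore` sentinel)
    with the network's argmax class wherever the confidence map exceeds
    the threshold; low-confidence pixels remain unlabeled."""
    new = labels.copy()
    fill = (labels == ignore) & (confidence > threshold)
    new[fill] = probs.argmax(axis=-1)[fill]
    return new

# 2x2 image, 2 classes; two pixels are missing labels.
labels = np.array([[0, -1], [-1, 1]])
probs = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.6, 0.4], [0.1, 0.9]]])
conf = np.array([[0.95, 0.95], [0.50, 0.95]])
print(relabel_confident(labels, probs, conf))
# Pixel (0,1) is relabeled to class 1; pixel (1,0) stays -1 (low confidence).
```

Iterating this step progressively densifies the ground truth, easiest pixels first, in the spirit of curriculum learning.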
Biasing Deep ConvNets for Semantic Segmentation of Medical Images with a Prior-driven Prediction Function
TL;DR: Addresses the problem of including prior information about the shape and spatial position of organs to improve the performance of semantic segmentation in CT scans.
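One common way to inject such a spatial prior into a prediction function is to bias the per-pixel class logits with the log of a prior probability map. The sketch below illustrates that generic idea in NumPy; the function name and the specific combination rule are assumptions, not necessarily the paper's prediction function.

```python
import numpy as np

def prior_driven_logits(logits, prior, eps=1e-6):
    """Bias per-pixel class logits with a spatial prior: adding the
    log-prior before the softmax is equivalent to multiplying each
    class probability by the prior probability of that organ there."""
    return logits + np.log(prior + eps)

# Toy single-pixel, 2-class case: the network is undecided (equal logits),
# but the prior says class 0 is twice as likely at this location.
logits = np.array([0.0, 0.0])
prior = np.array([2 / 3, 1 / 3])
biased = prior_driven_logits(logits, prior)
probs = np.exp(biased) / np.exp(biased).sum()
print(probs)  # ≈ [0.667, 0.333]: the prior breaks the tie
```

In practice the prior map would come from, e.g., registered atlas statistics over organ positions, with one prior channel per organ class.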
Posted Content
U-Net Transformer: Self and Cross Attention for Medical Image Segmentation
TL;DR: Preprint version of the book chapter of the same title listed above.