Author

Robert Bensch

Bio: Robert Bensch is an academic researcher from the University of Freiburg. The author has contributed to research in the topics of image segmentation and segmentation-based object categorization. The author has an h-index of 9 and has co-authored 14 publications receiving 1,242 citations.

Papers
Journal ArticleDOI
TL;DR: An ImageJ plugin is presented that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service.
Abstract: U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurements in biomedical image data. We present an ImageJ plugin that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows for U-Net to be adapted to new tasks on the basis of a few annotated samples.

1,222 citations

Journal ArticleDOI
TL;DR: It is found that methods that either take prior information into account using learning strategies or analyze cells in a global spatiotemporal video context performed better than other methods under the segmentation and tracking scenarios included in the Cell Tracking Challenge.
Abstract: We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell segmentation and tracking algorithms. With 21 participating algorithms and a data repository consisting of 13 data sets from various microscopy modalities, the challenge displays today's state-of-the-art methodology in the field. We analyzed the challenge results using performance measures for segmentation and tracking that rank all participating methods. We also analyzed the performance of all of the algorithms in terms of biological measures and practical usability. Although some methods scored high in all technical aspects, none obtained fully correct solutions. We found that methods that either take prior information into account using learning strategies or analyze cells in a global spatiotemporal video context performed better than other methods under the segmentation and tracking scenarios included in the challenge.

468 citations

Proceedings ArticleDOI
16 Apr 2015
TL;DR: A new robust, effective, and surprisingly simple approach for the segmentation of cells in phase contrast microscopy images that strongly favors dark-to-bright transitions at the boundaries of the (arbitrarily shaped) segmentation mask.
Abstract: We propose a new robust, effective, and surprisingly simple approach for the segmentation of cells in phase contrast microscopy images. The key feature of our algorithm is that it strongly favors dark-to-bright transitions at the boundaries of the (arbitrarily shaped) segmentation mask. The segmentation mask can be effectively found by a fast min-cut approach. The small but essential difference to standard min-cut based approaches is that our graph contains directed edges with asymmetric edge weights. Combined with a simple region propagation our approach yields better segmentation results on the ISBI Cell Tracking Challenge 2014 dataset than the top ranked methods. We provide an easy-to-use open-source implementation for ImageJ/Fiji and Matlab on our homepage.
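The asymmetric-edge idea can be illustrated on a toy 1-D example. The sketch below (using networkx; illustrative only, not the authors' ImageJ/Fiji or Matlab implementation) gives each pixel pair two directed edges whose capacities differ by the brightness step, so the minimum cut preferentially falls on a dark-to-bright transition:

```python
import networkx as nx

# Toy 1-D image: pixels p0..p3, dark interior (cell) on the left, bright
# background on the right. "s" seeds the cell, "t" seeds the background.
brightness = {"p0": 0.1, "p1": 0.2, "p2": 0.9, "p3": 1.0}

G = nx.DiGraph()
lam = 1.0  # base boundary cost
for u, v in [("p0", "p1"), ("p1", "p2"), ("p2", "p3")]:
    step = brightness[v] - brightness[u]             # > 0: dark-to-bright
    # Asymmetric capacities: cutting u->v is cheap where brightness rises,
    # cutting v->u (bright-to-dark in the outward direction) is expensive.
    G.add_edge(u, v, capacity=max(0.0, lam - step))
    G.add_edge(v, u, capacity=max(0.0, lam + step))

# Seed edges without a capacity attribute are treated as infinite by networkx.
G.add_edge("s", "p0")
G.add_edge("p3", "t")

cut_value, (cell, background) = nx.minimum_cut(G, "s", "t")
# The cheapest cut severs p1->p2, the strongest dark-to-bright transition,
# so the cell partition is {s, p0, p1}.
```

With symmetric weights, all three inter-pixel cuts would cost the same; the directed, asymmetric capacities are what pull the boundary onto the dark-to-bright transition.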

58 citations

Journal ArticleDOI
TL;DR: Author correction: an affiliation for Dominic Mai was amended, and the corrections have been made in the PDF and HTML versions of the article, as well as in any cover sheets for associated Supplementary Information.
Abstract: In the version of this paper originally published, one of the affiliations for Dominic Mai was incorrect: "Center for Biological Systems Analysis (ZBSA), Albert-Ludwigs-University, Freiburg, Germany" should have been "Life Imaging Center, Center for Biological Systems Analysis, Albert-Ludwigs-University, Freiburg, Germany." This change required some renumbering of subsequent author affiliations. These corrections have been made in the PDF and HTML versions of the article, as well as in any cover sheets for associated Supplementary Information.

53 citations

Journal ArticleDOI
TL;DR: Inturned-mediated complex formation of NPHP4 and DAAM1 is important for ciliogenesis and ciliary function in multiciliated cells, presumably because of its requirement for the local rearrangement of actin cytoskeleton.
Abstract: Motile cilia polarization requires intracellular anchorage to the cytoskeleton; however, the molecular machinery that supports this process remains elusive. We report that Inturned plays a central role in coordinating the interaction between cilia-associated proteins and actin-nucleation factors. We observed that knockdown of nphp4 in multiciliated cells of the Xenopus laevis epidermis compromised ciliogenesis and directional fluid flow. Depletion of nphp4 disrupted the subapical actin layer. Comparison to the structural defects caused by inturned depletion revealed striking similarities. Furthermore, coimmunoprecipitation assays demonstrated that the two proteins interact with each other and that Inturned mediates the formation of ternary protein complexes between NPHP4 and DAAM1. Knockdown of daam1, but not formin-2, resulted in similar disruption of the subapical actin web, whereas nphp4 depletion prevented the association of Inturned with the basal bodies. Thus, Inturned appears to function as an adaptor protein that couples cilia-associated molecules to actin-modifying proteins to rearrange the local actin cytoskeleton.

49 citations


Cited by
Journal ArticleDOI
TL;DR: nnU-Net is a deep-learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing, for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
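To give a flavor of what "automatic configuration" from dataset properties means, here is a heavily simplified, hypothetical sketch (illustrative only; these are not nnU-Net's actual rules or parameter names) of deriving a resampling target and a patch size from a dataset fingerprint of voxel spacings and image shapes:

```python
import numpy as np

def configure(spacings, shapes, max_voxels=128 ** 3):
    """Toy rule-based configuration from a dataset fingerprint.

    spacings: per-case voxel spacings (z, y, x); shapes: per-case image shapes.
    max_voxels caps the patch volume as a stand-in for a GPU memory budget.
    """
    target_spacing = np.median(np.array(spacings), axis=0)   # resampling target
    median_shape = np.median(np.array(shapes), axis=0)
    # Start the patch from the median shape (capped per axis), then halve the
    # largest axis until the patch fits the voxel budget.
    patch = np.minimum(median_shape, 256).astype(int)
    while np.prod(patch) > max_voxels:
        patch[np.argmax(patch)] //= 2
    return target_spacing, tuple(int(x) for x in patch)

spacing, patch = configure(
    spacings=[(1.0, 0.8, 0.8), (1.0, 1.0, 1.0), (3.0, 0.7, 0.7)],
    shapes=[(120, 512, 512), (90, 400, 400), (100, 512, 512)],
)
```

The point is the shape of the approach, fixed interdependent rules driven by dataset statistics and a hardware budget, rather than per-dataset manual tuning.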

2,040 citations

19 Nov 2012

1,653 citations

Journal ArticleDOI
TL;DR: UNet++ is an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision, with redesigned skip connections that yield a highly flexible feature fusion scheme.
Abstract: The state-of-the-art models for medical image segmentation are variants of U-Net and fully convolutional networks (FCN). Despite their success, these models have two limitations: (1) their optimal depth is apriori unknown, requiring extensive architecture search or inefficient ensemble of models of varying depths; and (2) their skip connections impose an unnecessarily restrictive fusion scheme, forcing aggregation only at the same-scale feature maps of the encoder and decoder sub-networks. To overcome these two limitations, we propose UNet++, a new neural architecture for semantic and instance segmentation, by (1) alleviating the unknown network depth with an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision; (2) redesigning skip connections to aggregate features of varying semantic scales at the decoder sub-networks, leading to a highly flexible feature fusion scheme; and (3) devising a pruning scheme to accelerate the inference speed of UNet++. We have evaluated UNet++ using six different medical image segmentation datasets, covering multiple imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and electron microscopy (EM), and demonstrating that (1) UNet++ consistently outperforms the baseline models for the task of semantic segmentation across different datasets and backbone architectures; (2) UNet++ enhances segmentation quality of varying-size objects—an improvement over the fixed-depth U-Net; (3) Mask RCNN++ (Mask R-CNN with UNet++ design) outperforms the original Mask R-CNN for the task of instance segmentation; and (4) pruned UNet++ models achieve significant speedup while showing only modest performance degradation. Our implementation and pre-trained models are available at https://github.com/MrGiovanni/UNetPlusPlus .
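The redesigned skip connections follow a simple recurrence: node X^{i,j} at depth i, column j fuses all preceding same-depth nodes X^{i,0..j-1} with the upsampled X^{i+1,j-1}. A minimal index-level sketch of that topology (a hypothetical helper, not the released implementation):

```python
def unetpp_inputs(i, j):
    """Return the inputs fused at UNet++ node X[i][j] (index-level sketch).

    Encoder-backbone nodes (j == 0) take no skip inputs; every other node
    concatenates all same-depth predecessors with one upsampled node from
    the depth below, which is the dense-skip recurrence described above.
    """
    if j == 0:
        return []  # backbone node, fed by downsampling only
    return [(i, k) for k in range(j)] + [("up", i + 1, j - 1)]
```

For a four-level UNet++, the final decoder node X^{0,3} thus fuses X^{0,0}, X^{0,1}, X^{0,2} and the upsampled X^{1,2}, whereas a plain U-Net skip would pass only X^{0,0} across.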

1,487 citations

Journal ArticleDOI
TL;DR: CellProfiler 3.0 is described, a new version of the software supporting both whole-volume and plane-wise analysis of three-dimensional image stacks, increasingly common in biomedical research.
Abstract: CellProfiler has enabled the scientific research community to create flexible, modular image analysis pipelines since its release in 2005. Here, we describe CellProfiler 3.0, a new version of the software supporting both whole-volume and plane-wise analysis of three-dimensional (3D) image stacks, increasingly common in biomedical research. CellProfiler's infrastructure is greatly improved, and we provide a protocol for cloud-based, large-scale image processing. New plugins enable running pretrained deep learning models on images. Designed by and for biologists, CellProfiler equips researchers with powerful computational tools via a well-documented user interface, empowering biologists in all fields to create quantitative, reproducible image analysis workflows.

1,466 citations
