SciSpace
Author

Stefan Braunewell

Bio: Stefan Braunewell is an academic researcher. The author has contributed to research in topics: Convolutional neural network & Breast cancer. The author has an h-index of 4 and has co-authored 6 publications receiving 339 citations.

Papers
Journal ArticleDOI
Neeraj Kumar, Ruchika Verma, Deepak Anand, Yanning Zhou, Omer Fahri Onder, E. D. Tsougenis, Hao Chen, Pheng-Ann Heng, Jiahui Li, Zhiqiang Hu, Yunzhi Wang, Navid Alemi Koohbanani, Mostafa Jahanifar, Neda Zamani Tajeddin, Ali Gooya, Nasir M. Rajpoot, Xuhua Ren, Sihang Zhou, Qian Wang, Dinggang Shen, Cheng-Kun Yang, Chi-Hung Weng, Wei-Hsiang Yu, Chao-Yuan Yeh, Shuang Yang, Shuoyu Xu, Pak-Hei Yeung, Peng Sun, Amirreza Mahbod, Gerald Schaefer, Isabella Ellinger, Rupert Ecker, Örjan Smedby, Chunliang Wang, Benjamin Chidester, That-Vinh Ton, Minh-Triet Tran, Jian Ma, Minh N. Do, Simon Graham, Quoc Dang Vu, Jin Tae Kwak, Akshaykumar Gunda, Raviteja Chunduri, Corey Hu, Xiaoyang Zhou, Dariush Lotfi, Reza Safdari, Antanas Kascenas, Alison O'Neil, Dennis Eschweiler, Johannes Stegmaier, Yanping Cui, Baocai Yin, Kailin Chen, Xinmei Tian, Philipp Gruening, Erhardt Barth, Elad Arbel, Itay Remer, Amir Ben-Dor, Ekaterina Sirazitdinova, Matthias Kohl, Stefan Braunewell, Yuexiang Li, Xinpeng Xie, Linlin Shen, Jun Ma, Krishanu Das Baksi, Mohammad Azam Khan, Jaegul Choo, Adrián Colomer, Valery Naranjo, Linmin Pei, Khan M. Iftekharuddin, Kaushiki Roy, Debotosh Bhattacharjee, Anibal Pedraza, Maria Gloria Bueno, Sabarinathan Devanathan, Saravanan Radhakrishnan, Praveen Koduganty, Zihan Wu, Guanyu Cai, Xiaojie Liu, Yuqin Wang, Amit Sethi
TL;DR: Several of the top techniques in the MoNuSeg 2018 challenge compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
Abstract: Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques for digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes took part. Contestants were given a training set of 30 images from seven organs with annotations of 21,623 individual nuclei. A test set of 14 images from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated by average aggregated Jaccard index (AJI) on the test set, to prioritize accurate instance segmentation over mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends that contributed to increased accuracy were the use of color normalization and heavy data augmentation. Fully convolutional networks inspired by variants of U-Net, FCN, and Mask R-CNN were also popular, typically built on ResNet or VGG backbones. Watershed segmentation applied to predicted semantic segmentation maps was a common post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
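The challenge's ranking metric, the aggregated Jaccard index (AJI), extends the usual Jaccard (IoU) score to instance segmentation: each ground-truth nucleus is matched to its best-overlapping predicted instance, intersections and unions are accumulated across all matches, and predicted instances that match no ground-truth nucleus are added to the union as a penalty. A minimal NumPy sketch is below; the matching convention is my reading of the published definition, so treat it as illustrative rather than the official evaluation code:

```python
import numpy as np

def aggregated_jaccard_index(gt, pred):
    """Aggregated Jaccard Index (AJI) for instance segmentation label maps.

    gt, pred: integer arrays of the same shape; 0 is background and each
    positive integer labels one nucleus instance.
    """
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    used = set()
    intersection_total = 0
    union_total = 0
    for i in gt_ids:
        g = gt == i
        # pick the predicted instance with the highest Jaccard overlap
        best_j, best_inter, best_union = None, 0, g.sum()
        for j in pred_ids:
            p = pred == j
            inter = np.logical_and(g, p).sum()
            if inter == 0:
                continue
            union = np.logical_or(g, p).sum()
            if inter / union > best_inter / best_union:
                best_j, best_inter, best_union = j, inter, union
        intersection_total += best_inter
        union_total += best_union
        if best_j is not None:
            used.add(best_j)
    # penalize predicted instances never matched to any ground-truth nucleus
    for j in pred_ids:
        if j not in used:
            union_total += (pred == j).sum()
    return intersection_total / union_total
```

A perfect prediction yields an AJI of 1.0; spurious or missed nuclei lower the score through the union term, which is what makes AJI stricter than pixel-level (semantic) Jaccard.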

251 citations

Book ChapterDOI
27 Jun 2018
TL;DR: In this article, the applicability of densely connected convolutional neural networks to the problems of histology image classification and whole slide image segmentation in the area of computer-aided diagnoses for breast cancer was investigated.
Abstract: Breast cancer is the most frequently diagnosed cancer and leading cause of cancer-related death among females worldwide. In this article, we investigate the applicability of densely connected convolutional neural networks to the problems of histology image classification and whole slide image segmentation in the area of computer-aided diagnoses for breast cancer. To this end, we study various approaches for transfer learning and apply them to the data set from the 2018 grand challenge on breast cancer histology images (BACH).
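The transfer-learning recipe studied here, reusing a network pretrained on a large dataset and adapting only part of it to the new task, can be sketched in a few lines. The snippet below is a deliberately simplified stand-in: the "backbone" is a fixed random projection rather than a real DenseNet, and the trainable part is a single softmax head. `frozen_features`, `train_head`, and the four-class setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained DenseNet backbone: in practice this would
# be a real pretrained network with its classifier removed. Here it is a fixed
# random projection (scaled for unit-variance features), purely for illustration.
W_backbone = rng.normal(size=(64, 16)) / 8.0

def frozen_features(x):
    return np.maximum(x @ W_backbone, 0.0)  # backbone weights are never updated

# Trainable classification head for the 4 BACH classes
# (normal, benign, in situ carcinoma, invasive carcinoma).
W_head = np.zeros((16, 4))

def train_head(images, labels, lr=0.1, steps=200):
    """Softmax regression on frozen features: only the head is trained."""
    global W_head
    feats = frozen_features(images)
    onehot = np.eye(4)[labels]
    for _ in range(steps):
        logits = feats @ W_head
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = feats.T @ (p - onehot) / len(images)
        W_head -= lr * grad  # gradient step on the head only

def predict(images):
    return np.argmax(frozen_features(images) @ W_head, axis=1)
```

The same pattern scales to the paper's setting by swapping the random projection for a pretrained densely connected CNN and, optionally, unfreezing some backbone layers for fine-tuning.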

32 citations

Posted Content
TL;DR: This article investigates the applicability of densely connected convolutional neural networks to the problems of histology image classification and whole slide image segmentation in the area of computer-aided diagnoses for breast cancer.
Abstract: Breast cancer is the most frequently diagnosed cancer and leading cause of cancer-related death among females worldwide. In this article, we investigate the applicability of densely connected convolutional neural networks to the problems of histology image classification and whole slide image segmentation in the area of computer-aided diagnoses for breast cancer. To this end, we study various approaches for transfer learning and apply them to the data set from the 2018 grand challenge on breast cancer histology images (BACH).

26 citations

Posted Content
TL;DR: The experiments performed on feature inversion and activation maximization demonstrate the benefit of a unified approach to regularization, such as sharper reconstructions via the proposed Sobolev filters and a better control over reconstructed scales.
Abstract: Variational methods for revealing visual concepts learned by convolutional neural networks have gained significant attention during the last years. Being based on noisy gradients obtained via back-propagation such methods require the application of regularization strategies. We present a mathematical framework unifying previously employed regularization methods. Within this framework, we propose a novel technique based on Sobolev gradients which can be implemented via convolutions and does not require specialized numerical treatment, such as total variation regularization. The experiments performed on feature inversion and activation maximization demonstrate the benefit of a unified approach to regularization, such as sharper reconstructions via the proposed Sobolev filters and a better control over reconstructed scales.
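The core idea, applying the Sobolev-gradient regularization as a convolution so that no specialized numerical treatment such as total-variation solvers is needed, amounts to low-pass filtering the raw back-propagated gradient before each update step. The Gaussian kernel below is an assumed stand-in for the paper's Sobolev filters, not their exact construction:

```python
import numpy as np

def smooth_gradient(grad, ksize=5, sigma=1.0):
    """Smooth a raw 2D gradient with a separable Gaussian convolution.

    Illustrative approximation of a Sobolev gradient step: the inverse
    Sobolev metric acts as a low-pass filter, which we model here with a
    normalized Gaussian blur applied rows-then-columns ('same' padding).
    """
    ax = np.arange(ksize) - ksize // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, grad)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

In an activation-maximization loop one would replace the plain ascent step `img += lr * grad` with `img += lr * smooth_gradient(grad)`, which suppresses the high-frequency noise that back-propagated gradients otherwise accumulate.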

4 citations


Cited by
01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis, and especially encourages papers addressing interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes have lots of training data while many classes have only a small amount. How to use frequent classes to help learn rare classes, for which training data is harder to collect, is therefore an open question. Learning with shared information is an emerging topic in machine learning, computer vision and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters and training examples. Regarding specific methods, multi-task learning, transfer learning and deep learning can be seen as different strategies for sharing information. These methods are very effective in solving real-world large-scale problems. This special issue aims at gathering recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters and sharing training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information

Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Journal ArticleDOI
TL;DR: This work introduces a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments.
Abstract: Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments. Cellpose was trained on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. We also demonstrate a three-dimensional (3D) extension of Cellpose that reuses the two-dimensional (2D) model and does not require 3D-labeled data. To support community contributions to the training data, we developed software for manual labeling and for curation of the automated results. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.

947 citations

Book ChapterDOI
27 Sep 2021
TL;DR: Jeon et al. as discussed by the authors proposed a gated axial-attention model which extends existing transformer-based architectures by introducing an additional control mechanism in the self-attention module.
Abstract: Over the past decade, deep convolutional neural networks have been widely adopted for medical image segmentation and shown to achieve adequate performance. However, due to inherent inductive biases present in convolutional architectures, they lack understanding of long-range dependencies in the image. Recently proposed transformer-based architectures that leverage self-attention mechanism encode long-range dependencies and learn representations that are highly expressive. This motivates us to explore transformer-based solutions and study the feasibility of using transformer-based network architectures for medical image segmentation tasks. Majority of existing transformer-based network architectures proposed for vision applications require large-scale datasets to train properly. However, compared to the datasets for vision applications, in medical imaging the number of data samples is relatively low, making it difficult to efficiently train transformers for medical imaging applications. To this end, we propose a gated axial-attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module. Furthermore, to train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance. Specifically, we operate on the whole image and patches to learn global and local features, respectively. The proposed Medical Transformer (MedT) is evaluated on three different medical image segmentation datasets and it is shown that it achieves better performance than the convolutional and other related transformer-based architectures. Code: https://github.com/jeya-maria-jose/Medical-Transformer
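The gated axial-attention idea, factorizing 2D self-attention into two 1D passes along the image axes and letting a learned gate control how strongly the attended values are mixed in, can be sketched as follows. The single scalar `gate` and the omission of relative positional encodings (which MedT also gates) are simplifications of the actual module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_axial_attention(x, Wq, Wk, Wv, gate, axis):
    """One gated self-attention pass along a single spatial axis.

    x: (H, W, C) feature map; Wq, Wk, Wv: (C, C) projections.
    Attention is computed independently along the chosen axis
    (0 = height, 1 = width), so the cost is linear in the other
    dimension. `gate` is a learned scalar controlling how much the
    attended values contribute to the residual update.
    """
    if axis == 0:
        x = x.transpose(1, 0, 2)  # attend along columns instead of rows
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    attn = softmax(scores, axis=-1) @ v
    out = x + gate * attn         # residual connection with gated update
    if axis == 0:
        out = out.transpose(1, 0, 2)
    return out
```

Stacking a height pass and a width pass approximates full 2D self-attention at much lower cost; with the gate initialized near zero, the network can fall back to the identity mapping when the attention signal is unreliable, which is the motivation for gating on small medical datasets.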

464 citations

Journal ArticleDOI
TL;DR: A comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis can be found in this paper, where a survey of over 130 papers is presented.

260 citations