Author

Thomas Wollmann

Other affiliations: Heilbronn University
Bio: Thomas Wollmann is an academic researcher from Heidelberg University. The author has contributed to research in the topics of deep learning and segmentation, has an h-index of 10, and has co-authored 22 publications receiving 273 citations. Previous affiliations of Thomas Wollmann include Heilbronn University.

Papers
Journal ArticleDOI
TL;DR: A new deep learning method is proposed for cell segmentation, which integrates convolutional neural networks and gated recurrent neural networks over multiple image scales to exploit the strengths of both types of networks.

34 citations

Proceedings ArticleDOI
04 Apr 2018
TL;DR: A novel deep learning method based on domain adaptation using a Cycle-Consistent Generative Adversarial Network in conjunction with a densely connected deep neural network for classification of whole-slide images and patient level breast cancer grading.
Abstract: The progression of breast cancer can be quantified in whole-slide images of lymph nodes. We describe a novel deep learning method for classification of whole-slide images and patient level breast cancer grading. Our method is based on domain adaptation using a Cycle-Consistent Generative Adversarial Network (CycleGAN), in conjunction with a densely connected deep neural network. Our method performs classification on small image patches and uses model averaging for boosting. The classification results are used to determine a slide level class and are further aggregated to predict a patient level grade. Our method was applied to the challenging CAMELYON17 dataset. We found that domain adaptation improves the results compared to state-of-the-art data augmentation. The fast processing speed of our method enables high-throughput image analysis.

26 citations
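
The aggregation scheme described in the abstract (patch-level classification, model averaging, then a slide-level decision) can be sketched as follows. This is an illustrative toy, not the paper's implementation; the function names, the two-model setup, and the majority-vote rule for the slide-level class are all assumptions made for the example.

```python
# Toy sketch of patch-level model averaging and slide-level aggregation.
# All names, values, and the voting rule are illustrative assumptions.
from collections import Counter

def average_models(patch_probs_per_model):
    """Average class probabilities over models for each patch (model averaging)."""
    n_models = len(patch_probs_per_model)
    n_patches = len(patch_probs_per_model[0])
    n_classes = len(patch_probs_per_model[0][0])
    return [[sum(m[p][c] for m in patch_probs_per_model) / n_models
             for c in range(n_classes)]
            for p in range(n_patches)]

def slide_level_class(patch_probs):
    """Aggregate per-patch argmax predictions into a slide-level class by majority vote."""
    votes = Counter(max(range(len(p)), key=p.__getitem__) for p in patch_probs)
    return votes.most_common(1)[0][0]

# Two models, three patches, two classes (e.g. normal vs. metastasis).
model_a = [[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]]
model_b = [[0.7, 0.3], [0.4, 0.6], [0.3, 0.7]]
avg = average_models([model_a, model_b])
print(slide_level_class(avg))  # prints 1: two of three patches vote for class 1
```

In the paper, slide-level classes are further aggregated into a patient-level grade; the same voting pattern extends naturally to that step.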

Journal ArticleDOI
Thomas Wollmann, Holger Erfle, Roland Eils, Karl Rohr, Manuel Gunkel
TL;DR: In this review, workflow systems for the integration of microscopy image analysis techniques are described, with a focus on KNIME and Galaxy.

22 citations

Journal ArticleDOI
TL;DR: The Galaxy framework has emerged as a powerful platform for the analysis of MSI data, combining ease of use and access with high levels of reproducibility and transparency.
Abstract: Background Mass spectrometry imaging is increasingly used in biological and translational research because it has the ability to determine the spatial distribution of hundreds of analytes in a sample. Being at the interface of proteomics/metabolomics and imaging, the acquired datasets are large and complex and often analyzed with proprietary software or in-house scripts, which hinders reproducibility. Open source software solutions that enable reproducible data analysis often require programming skills and are therefore not accessible to many mass spectrometry imaging (MSI) researchers. Findings We have integrated 18 dedicated mass spectrometry imaging tools into the Galaxy framework to allow accessible, reproducible, and transparent data analysis. Our tools are based on Cardinal, MALDIquant, and scikit-image and enable all major MSI analysis steps such as quality control, visualization, preprocessing, statistical analysis, and image co-registration. Furthermore, we created hands-on training material for use cases in proteomics and metabolomics. To demonstrate the utility of our tools, we re-analyzed a publicly available N-linked glycan imaging dataset. By providing the entire analysis history online, we highlight how the Galaxy framework fosters transparent and reproducible research. Conclusion The Galaxy framework has emerged as a powerful platform for the analysis of MSI data, combining ease of use and access with high levels of reproducibility and transparency.

19 citations
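
Two of the preprocessing steps named in the abstract, normalization and peak picking, can be illustrated on a single toy spectrum. The actual Galaxy tools wrap Cardinal and MALDIquant; the functions below are a minimal standalone sketch under simple assumptions (TIC normalization, local-maximum peak picking with a fixed threshold), not those libraries' implementations.

```python
# Minimal sketch of two MSI preprocessing steps: TIC normalization
# and threshold-based peak picking. Illustrative only; the real
# pipeline uses Cardinal and MALDIquant.

def tic_normalize(intensities):
    """Scale a spectrum so its total ion current (TIC) sums to 1."""
    tic = sum(intensities)
    return [i / tic for i in intensities]

def pick_peaks(intensities, threshold):
    """Return indices of local maxima above an intensity threshold."""
    return [k for k in range(1, len(intensities) - 1)
            if intensities[k] > threshold
            and intensities[k] > intensities[k - 1]
            and intensities[k] >= intensities[k + 1]]

spectrum = [1, 2, 10, 3, 1, 6, 2, 1]   # made-up intensity values
norm = tic_normalize(spectrum)
print(pick_peaks(spectrum, threshold=5))  # [2, 5]
```

In a real MSI workflow these operations run per pixel over thousands of spectra, which is why reproducible, scripted pipelines matter.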


Cited by
01 Jan 2015
TL;DR: This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework and learns what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.
Abstract: Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications, and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book’s practical and algorithmic approach assumes only modest mathematical prerequisites. Examples include MATLAB computations, and the numerous end-of-chapter exercises include computational assignments. MATLAB/GNU Octave source code is available for download at www.cambridge.org/sarkka, promoting hands-on work with the methods.

1,102 citations
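
The Bayesian filtering recursion the book covers (predict, then update with a measurement) can be sketched in one dimension. This assumes the simplest possible model, a random walk x_k = x_{k-1} + noise observed as y_k = x_k + noise; the function name and all parameter values are illustrative, not taken from the book.

```python
# Minimal 1-D Kalman filter for a random-walk state observed in noise.
# q: process noise variance, r: measurement noise variance.
# Illustrative sketch of the predict/update recursion only.
def kalman_1d(ys, q, r, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for y in ys:
        p = p + q                # predict: uncertainty grows by process noise
        k = p / (p + r)          # Kalman gain: trust in the new measurement
        x = x + k * (y - x)      # update: move estimate toward the measurement
        p = (1 - k) * p          # update: uncertainty shrinks
        estimates.append(x)
    return estimates

noisy = [1.2, 0.9, 1.1, 1.0, 0.95]      # noisy measurements of a value near 1
est = kalman_1d(noisy, q=0.01, r=0.5)   # smoothed estimates converge toward 1
```

Non-linear Kalman filters and particle filters, the book's main subjects, generalize exactly this recursion to non-linear and non-Gaussian models.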

Journal Article
TL;DR: The author gives a short review of the most important prognostic factors in breast cancer, with emphasis on steroid receptor, c-erbB-2, p53 and bcl-2 alterations.
Abstract: Prognostic factors are clinical and pathological features that give information in estimating the likely clinical outcome of an individual suffering from cancer. The author gives a short review of the most important prognostic factors in breast cancer. 376 breast cancer cases of a ten year interval in a county hospital are summarized. Traditional clinico-pathological parameters, i.e. TNM and steroid receptor status, are discussed. The more common karyotypic, oncogene and tumor suppressor gene alterations are outlined in the study. Methods for their detection are presented and their value in prognostication is reviewed. Emphasis is laid on steroid receptor, c-erbB-2, p53 and bcl-2 alterations. Genes responsible for heritable forms of increased breast cancer risk are briefly reviewed.

609 citations

Posted Content
TL;DR: WILDS is presented, a benchmark of in-the-wild distribution shifts spanning diverse data modalities and applications, and is hoped to encourage the development of general-purpose methods that are anchored to real-world distribution shifts and that work well across different applications and problem settings.
Abstract: Distribution shifts -- where the training distribution differs from the test distribution -- can substantially degrade the accuracy of machine learning (ML) systems deployed in the wild. Despite their ubiquity, these real-world distribution shifts are under-represented in the datasets widely used in the ML community today. To address this gap, we present WILDS, a curated collection of 8 benchmark datasets that reflect a diverse range of distribution shifts which naturally arise in real-world applications, such as shifts across hospitals for tumor identification; across camera traps for wildlife monitoring; and across time and location in satellite imaging and poverty mapping. On each dataset, we show that standard training results in substantially lower out-of-distribution than in-distribution performance, and that this gap remains even with models trained by existing methods for handling distribution shifts. This underscores the need for new training methods that produce models which are more robust to the types of distribution shifts that arise in practice. To facilitate method development, we provide an open-source package that automates dataset loading, contains default model architectures and hyperparameters, and standardizes evaluations. Code and leaderboards are available at this https URL.

579 citations
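
The central measurement in the WILDS abstract, the gap between in-distribution and out-of-distribution performance, reduces to comparing accuracy on two held-out test sets. The sketch below uses made-up predictions and labels, not any WILDS dataset; in practice the open-source `wilds` package supplies the grouped splits on which this comparison is run.

```python
# Toy illustration of the ID vs. OOD evaluation gap. The predictions
# and labels are fabricated for the example, not from WILDS.
def accuracy(preds, labels):
    """Fraction of predictions matching the labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

id_acc  = accuracy([1, 0, 1, 1], [1, 0, 1, 0])   # test set from the training distribution
ood_acc = accuracy([1, 1, 0, 0], [0, 1, 1, 0])   # test set from a shifted distribution
gap = id_acc - ood_acc
print(gap)  # 0.25: the model loses accuracy under distribution shift
```

The benchmark's finding is that this gap stays large even for models trained with existing shift-robust methods, motivating the standardized evaluation package.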