
Günter Klambauer

Researcher at Johannes Kepler University of Linz

Publications -  81
Citations -  11457

Günter Klambauer is an academic researcher at Johannes Kepler University of Linz. He has contributed to research on topics including deep learning and computer science, has an h-index of 24, and has co-authored 71 publications receiving 6,533 citations. His previous affiliations include Johnson & Johnson Pharmaceutical Research and Development.

Papers
Posted Content

GANs Trained by a Two Time-Scale Update Rule Converge to a Nash Equilibrium

TL;DR: This article proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions, using separate learning rates for the discriminator and the generator.
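The core idea of TTUR, that the discriminator is updated on a faster time scale (larger learning rate) than the generator, can be sketched on a toy problem. Everything below beyond the separate-learning-rates idea is an illustrative assumption: the scalar "GAN", the quadratic losses, and the particular rates are not from the paper.

```python
def train_ttur(steps: int = 1000, lr_d: float = 0.05, lr_g: float = 0.01):
    """Toy two time-scale update rule: lr_d > lr_g, so the
    discriminator tracks the generator faster than the generator moves."""
    d, g = 1.0, 1.0  # scalar stand-ins for discriminator/generator parameters
    for _ in range(steps):
        grad_d = 2.0 * (d - g)   # toy discriminator gradient: chase the generator
        d -= lr_d * grad_d       # fast time scale (discriminator)
        grad_g = 2.0 * g         # toy generator gradient: move toward 0
        g -= lr_g * grad_g       # slow time scale (generator)
    return d, g
```

With the fast discriminator rate, `d` stays close to the slowly moving `g`, which mirrors the convergence argument in the paper: the discriminator reaches an (approximate) best response before the generator takes its next effective step.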
Posted Content

Self-Normalizing Neural Networks

TL;DR: Self-normalizing neural networks (SNNs) are introduced to enable high-level abstract representations. It is proved that activations close to zero mean and unit variance, when propagated through many network layers, converge towards zero mean and unit variance, even in the presence of noise and perturbations.
Journal ArticleDOI

DeepTox: Toxicity Prediction using Deep Learning

TL;DR: DeepTox had the highest performance of all computational methods, winning the grand challenge, the nuclear receptor panel, the stress response panel, and six single assays (team "Bioinf@JKU").
Proceedings Article

Self-Normalizing Neural Networks

TL;DR: Self-normalizing neural networks (SNNs) are proposed to enable high-level abstract representations and achieve state-of-the-art performance on many tasks.
Journal ArticleDOI

cn.MOPS: mixture of Poissons for discovering copy number variations in next-generation sequencing data with a low false discovery rate

TL;DR: 'Copy Number estimation by a Mixture Of PoissonS' (cn.MOPS), a data-processing pipeline for CNV detection in NGS data, outperformed its five competitors in terms of precision (1 − FDR) and recall for both gains and losses in all benchmark data sets.
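The central modeling idea, read counts at a segment treated as a mixture of Poisson distributions whose rates scale with copy number, can be sketched as a single-sample posterior. This is a simplified illustration, not the cn.MOPS pipeline: the uniform prior, the CN/2 rate scaling, and the small floor rate for CN = 0 are assumptions made for the sketch.

```python
import math

def poisson_logpmf(k: int, lam: float) -> float:
    """Log-probability of a Poisson(lam) distribution at count k."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def copy_number_posterior(count: int, diploid_rate: float, max_cn: int = 4):
    """Posterior over copy numbers 0..max_cn for one read count,
    assuming a uniform prior and a Poisson rate proportional to CN/2
    (a toy version of the cn.MOPS mixture-of-Poissons idea)."""
    eps = 0.05  # floor rate for CN=0 so the likelihood stays finite
    logs = []
    for cn in range(max_cn + 1):
        lam = diploid_rate * (cn / 2.0) if cn > 0 else diploid_rate * eps
        logs.append(poisson_logpmf(count, lam))
    m = max(logs)                            # log-sum-exp normalization
    w = [math.exp(l - m) for l in logs]
    s = sum(w)
    return [x / s for x in w]
```

For example, with an expected diploid rate of 100 reads, a count near 100 concentrates the posterior on CN = 2, while a count near 50 favors a heterozygous loss (CN = 1); cn.MOPS strengthens this by pooling evidence across many samples at the same genomic position to keep the false discovery rate low.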