University of Freiburg

Education · Freiburg, Baden-Württemberg, Germany
About: University of Freiburg is an education organization based in Freiburg, Baden-Württemberg, Germany. It is known for its research contributions in the topics of Population and Transplantation. The organization has 41,992 authors who have published 77,296 publications receiving 2,896,269 citations. The organization is also known as: Alberto-Ludoviciana & Albert-Ludwigs-Universität Freiburg.


Papers
Book Chapter
05 Oct 2015
TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Abstract: There is broad consensus that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC), we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast: segmentation of a 512×512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.

49,590 citations
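
The contracting/expanding "U" shape described in the abstract can be sketched compactly. Below is a minimal illustrative PyTorch version, not the authors' original Caffe implementation: the depth, channel counts, and padded convolutions (the paper uses unpadded convolutions with cropping) are simplifying assumptions.

```python
# Minimal U-Net-style network: a contracting path to capture context,
# a symmetric expanding path for precise localization, and skip
# connections that concatenate encoder features into the decoder.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU; padding=1 keeps feature maps
    # aligned (the original paper crops unpadded feature maps instead).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)
        self.enc2 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                     # contracting path
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                                  # per-pixel scores

# A 512x512 single-channel image, as in the abstract's timing claim:
out = TinyUNet()(torch.randn(1, 1, 512, 512))
print(out.shape)  # torch.Size([1, 2, 512, 512])
```

The skip connections (the torch.cat calls) are what let the expanding path recover precise localization from high-resolution encoder features while the contracting path supplies context.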

Posted Content
TL;DR: It is shown that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Abstract: There is broad consensus that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC), we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast: segmentation of a 512×512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.

19,534 citations

Journal Article
Georges Aad, T. Abajyan, Brad Abbott, Jalal Abdallah, +2,964 more authors (200 institutions)
TL;DR: This article presents a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC; the observed excess has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10⁻⁹.

9,282 citations
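
The two quoted numbers are linked by the one-sided Gaussian tail: a significance of Z standard deviations corresponds to a background fluctuation probability p = 1 − Φ(Z). A quick check with SciPy (the small difference from the quoted 1.7×10⁻⁹ comes from the significance being rounded to 5.9):

```python
# One-sided background fluctuation probability for a given significance Z.
from scipy.stats import norm

z = 5.9
p = norm.sf(z)         # survival function: 1 - CDF of the standard normal
print(f"p = {p:.1e}")  # ~1.8e-09, consistent with the quoted 1.7e-9 after rounding
```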

Posted Content
TL;DR: This work proposes a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function, and provides empirical evidence that this modification substantially improves Adam's generalization performance.
Abstract: L2 regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate, this is not the case for adaptive gradient algorithms such as Adam. While common implementations of these algorithms employ L2 regularization (often calling it "weight decay" in a way that may be misleading given the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available at this https URL.

6,909 citations
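
The decoupling described in this abstract is easy to state concretely: with L2 regularization the penalty gradient wd·θ is folded into the loss gradient and therefore rescaled by Adam's per-parameter adaptive terms, whereas decoupled weight decay shrinks the weights directly, outside the adaptive step. A minimal NumPy sketch of a single Adam step in both variants (the function and hyperparameter names are my illustration, not the authors' code):

```python
# One Adam step with the two ways of adding weight decay from the abstract:
# decoupled=False folds wd*theta into the gradient (L2 regularization),
# so the decay is rescaled by Adam's adaptive denominator;
# decoupled=True applies the decay directly to the weights (AdamW).
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999,
              eps=1e-8, wd=1e-2, decoupled=True):
    if not decoupled:
        grad = grad + wd * theta        # L2: decay enters the adaptive step
    m = b1 * m + (1 - b1) * grad        # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - b1 ** t)           # bias corrections
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        theta = theta - lr * wd * theta  # AdamW: decay applied separately
    return theta, m, v

theta, m, v = np.ones(3), np.zeros(3), np.zeros(3)
theta, m, v = adam_step(theta, np.array([0.1, -0.2, 0.3]), m, v, t=1)
```

Because the decoupled decay never passes through Adam's adaptive denominator, it is not rescaled per-parameter by the gradient history, which is the inequivalence the abstract highlights.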

Journal Article
TL;DR: The GRADE process begins with asking an explicit question, including specification of all important outcomes, and provides explicit criteria for rating the quality of evidence that include study design, risk of bias, imprecision, inconsistency, indirectness, and magnitude of effect.

6,093 citations
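
The criteria in this TL;DR can be read as a rating procedure: study design fixes a starting level, the listed limitations rate the evidence down, and a large magnitude of effect can rate it back up. The toy sketch below encodes only that flow; the level names and one-step moves are my simplification, not the published GRADE rules:

```python
# Toy model of the GRADE flow named in the TL;DR: study design sets the
# starting level; risk of bias, imprecision, inconsistency, and
# indirectness rate the evidence down; a large magnitude of effect can
# rate it back up. Real GRADE guidance is more nuanced than this sketch.
LEVELS = ["very low", "low", "moderate", "high"]

def grade(study_design, downgrades, large_effect=False):
    # Randomized trials start "high"; observational studies start "low".
    level = 3 if study_design == "randomized" else 1
    level -= len(downgrades)           # e.g. ["risk of bias", "imprecision"]
    level += 1 if large_effect else 0
    return LEVELS[max(0, min(level, 3))]

print(grade("randomized", ["risk of bias", "inconsistency"]))  # low
print(grade("observational", [], large_effect=True))           # moderate
```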


Authors

Showing all 42,309 results

Name                     H-index  Papers  Citations
Giacinto Piacquadio      128      845     74,253
Peter Jenni              127      824     73,506
Michael Dührssen         127      782     71,948
Jochen Dingfelder        127      849     70,853
Kristin Lohwasser        127      878     74,014
Duc Ta                   126      874     73,962
Christoph Falk Anders    126      734     68,828
Bernhard Meirose         126      860     70,532
Bo Barker Jørgensen      126      400     49,578
Atsuhiko Ochi            126      904     73,632
Zuzana Rurikova          126      854     69,970
Riccardo-Maria Bianchi   126      870     73,816
Vakhtang Tsiskaridze     126      831     68,985
Minoru Hirose            125      777     68,038
Michel Janus             125      822     68,861
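
For reference, the H-index column above is the largest h such that the author has at least h papers with at least h citations each; it can be computed directly from a list of per-paper citation counts:

```python
# h-index: the largest h such that at least h papers have >= h citations.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```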
Network Information
Related Institutions (5)
Ludwig Maximilian University of Munich
161.5K papers, 5.7M citations

97% related

Heidelberg University
119.1K papers, 4.6M citations

96% related

Technische Universität München
123.4K papers, 4M citations

95% related

University of Zurich
124K papers, 5.3M citations

95% related

University of Bern
79.4K papers, 3.1M citations

94% related

Performance Metrics
No. of papers from the Institution in previous years
Year  Papers
2023     178
2022     585
2021   4,548
2020   4,227
2019   3,825
2018   3,531