Institution
University of Freiburg
Education • Freiburg, Baden-Württemberg, Germany
About: University of Freiburg is an education organization based in Freiburg, Baden-Württemberg, Germany. It is known for its research contributions in the topics of Population and Transplantation. The organization has 41992 authors who have published 77296 publications, receiving 2896269 citations. The organization is also known as Alberto-Ludoviciana and Albert-Ludwigs-Universität Freiburg.
Topics: Population, Transplantation, Large Hadron Collider, Gene, Immune system
Papers published on a yearly basis
Papers
05 Oct 2015
TL;DR: Ronneberger et al. propose a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; the network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Abstract: There is broad consensus that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
49,590 citations
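The contracting/expanding design described above is easy to see in code. Below is a minimal sketch of a U-Net-style encoder-decoder in PyTorch; the two-level depth, channel counts, and padded convolutions are illustrative assumptions, not the paper's exact configuration (which is deeper and uses unpadded convolutions).

```python
# Minimal U-Net-style sketch. Depth, channel counts, and padded 3x3
# convolutions are assumptions for brevity, not the paper's exact setup.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 conv + ReLU layers, the basic block on both paths.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)    # contracting path: context
        self.enc2 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)      # expanding path: localization
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.up(e2)
        d1 = torch.cat([d1, e1], dim=1)       # skip connection
        return self.head(self.dec1(d1))

logits = TinyUNet()(torch.randn(1, 1, 64, 64))  # shape: (1, 2, 64, 64)
```

The skip connection (the `torch.cat`) is what lets the expanding path combine coarse context with the high-resolution features needed for precise localization.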
TL;DR: In this article, a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented; the observed excess has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10−9.
9,282 citations
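The relation between a significance quoted in standard deviations and the background fluctuation probability is a one-sided Gaussian tail, which is easy to check with scipy; the quoted 1.7×10−9 corresponds to the unrounded significance, so the rounded 5.9 gives a value of the same order.

```python
# One-sided Gaussian tail: p = P(Z > z). Plugging in the rounded 5.9 sigma
# reproduces the quoted probability to within rounding.
from scipy.stats import norm

print(norm.sf(5.9))       # ~1.8e-9, background fluctuation probability
print(norm.isf(1.7e-9))   # ~5.91, significance in standard deviations
```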
TL;DR: This work proposes a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function, and provides empirical evidence that this modification substantially improves Adam's generalization performance.
Abstract: L2 regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate this is not the case for adaptive gradient algorithms such as Adam. While common implementations of these algorithms employ L2 regularization (often calling it "weight decay", which may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available at this https URL
6,909 citations
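A minimal sketch of the distinction the abstract draws, for a single parameter update; the function and parameter names are illustrative, not from the paper. With plain SGD the two schemes coincide after rescaling the decay factor by the learning rate, but with Adam they do not, because the L2 term gets divided by the adaptive per-parameter denominator while the decoupled term does not.

```python
# One parameter update under each scheme; lr and wd are illustrative names.
import torch

def step_with_l2(p, grad, lr, wd):
    # L2 regularization: decay enters through the (adapted) gradient.
    return p - lr * (grad + wd * p)

def step_decoupled(p, grad, lr, wd):
    # Decoupled weight decay: shrink the weights directly, then step.
    return (1 - lr * wd) * p - lr * grad

# PyTorch ships the decoupled variant as torch.optim.AdamW, e.g.:
#   opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
```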
TL;DR: The GRADE process begins with asking an explicit question, including specification of all important outcomes, and provides explicit criteria for rating the quality of evidence that include study design, risk of bias, imprecision, inconsistency, indirectness, and magnitude of effect.
6,093 citations
Authors
Showing all 42309 results
Name | H-index | Papers | Citations |
---|---|---|---|
Giacinto Piacquadio | 128 | 845 | 74253 |
Peter Jenni | 127 | 824 | 73506 |
Michael Dührssen | 127 | 782 | 71948 |
Jochen Dingfelder | 127 | 849 | 70853 |
Kristin Lohwasser | 127 | 878 | 74014 |
Duc Ta | 126 | 874 | 73962 |
Christoph Falk Anders | 126 | 734 | 68828 |
Bernhard Meirose | 126 | 860 | 70532 |
Bo Barker Jørgensen | 126 | 400 | 49578 |
Atsuhiko Ochi | 126 | 904 | 73632 |
Zuzana Rurikova | 126 | 854 | 69970 |
Riccardo-Maria Bianchi | 126 | 870 | 73816 |
Vakhtang Tsiskaridze | 126 | 831 | 68985 |
Minoru Hirose | 125 | 777 | 68038 |
Michel Janus | 125 | 822 | 68861 |
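For reference, the H-index column above is the largest h such that the author has h papers with at least h citations each. A small sketch of the computation from a list of per-paper citation counts (the example numbers are made up):

```python
# h-index: largest rank r such that the r-th most cited paper has at
# least r citations. Example citation counts are made up.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

assert h_index([10, 8, 5, 4, 3]) == 4   # four papers with >= 4 citations
assert h_index([25, 8, 5, 3, 3]) == 3   # only three papers with >= 4
```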