Institution
Cornell University
Education • Ithaca, New York, United States
About: Cornell University is an education organization based in Ithaca, New York, United States. It is known for its research contributions in the topics of Population & Gene. The organization has 102,246 authors who have published 235,546 publications, receiving 12,283,673 citations. The organization is also known as: Cornell & CUI.
Topics: Population, Gene, Cancer, Context (language use), Medicine
Papers published on a yearly basis
Papers
21 Jul 2017 • TL;DR: DenseNet, as proposed in this paper, connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Abstract: Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
27,821 citations
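The dense connectivity described in the abstract can be illustrated with a short sketch. This is a toy illustration, not the authors' code: plain Python lists stand in for feature maps, and the `dense_block` and `densenet_connections` helpers are hypothetical names introduced here.

```python
def densenet_connections(num_layers: int) -> int:
    """Count direct connections in a dense block: layer l receives
    l inputs (the block input plus all l - 1 preceding layers)."""
    return sum(range(1, num_layers + 1))

def dense_block(x, layers):
    """Toy dense block: every layer consumes the concatenation of
    all earlier feature maps (modelled here as flat Python lists),
    and its own output is kept around for all later layers to reuse."""
    features = [x]
    for layer in layers:
        concat = [v for fmap in features for v in fmap]
        features.append(layer(concat))
    return features

# Matches the closed form quoted in the abstract: L layers -> L(L+1)/2.
assert densenet_connections(5) == 5 * 6 // 2

out = dense_block([1.0, 2.0], [lambda f: [sum(f)], lambda f: [max(f)]])
```

The point of the sketch is the growth pattern: each layer sees everything before it, so the number of direct connections is quadratic in depth rather than linear.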
TL;DR: In addition to NDF, new and improved methods for total dietary fiber and nonstarch polysaccharides, including pectin and beta-glucans, are now available and are also of interest in rumen fermentation.
23,302 citations
21 Jul 2017 • TL;DR: This paper exploits the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost and achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles.
Abstract: Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.
16,727 citations
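The top-down pathway with lateral connections can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: nearest-neighbour repetition stands in for the 2x upsampling, and the 1x1 lateral convolutions are replaced by identity, so only the merge structure is shown.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a 2-D feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fpn_top_down(bottom_up):
    """Toy top-down pathway: start from the coarsest bottom-up map,
    and at each finer level add the upsampled coarser map to the
    lateral feature. Real FPN applies 1x1 lateral and 3x3 smoothing
    convolutions, which this sketch omits."""
    pyramid = [bottom_up[-1]]                # coarsest level seeds the pyramid
    for lateral in reversed(bottom_up[:-1]):
        merged = lateral + upsample2x(pyramid[0])
        pyramid.insert(0, merged)            # keep finest level first
    return pyramid

# Three bottom-up levels at strides 1x, 2x, 4x of each other.
bottom_up = [np.ones((8, 8)), np.ones((4, 4)), np.ones((2, 2))]
pyramid = fpn_top_down(bottom_up)
```

Each output level keeps the spatial size of its lateral input, while semantic information flows down from the coarser levels through the additions.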
Affiliations: Queen's University Belfast, Collège de France, English Heritage, University of Arizona, University of Sheffield, University of Oxford, University of Minnesota, University of Hohenheim, University of Kiel, Lawrence Livermore National Laboratory, University of Bergen, ETH Zurich, University of Waikato, Woods Hole Oceanographic Institution, Swiss Federal Institute for Forest, Snow and Landscape Research, Cornell University, University of Bristol, University of Glasgow, University of California, Irvine, University of New South Wales
Abstract: Additional co-authors: TJ Heaton, AG Hogg, KA Hughen, KF Kaiser, B Kromer, SW Manning, RW Reimer, DA Richards, JR Southon, S Talamo, CSM Turney, J van der Plicht, CE Weyhenmeyer
13,605 citations
TL;DR: It is demonstrated that the algorithms proposed are highly effective at discovering community structure in both computer-generated and real-world network data, and can be used to shed light on the sometimes dauntingly complex structure of networked systems.
Abstract: We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible "betweenness" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.
12,882 citations
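The edge-removal procedure the abstract describes can be sketched from scratch. This is a toy illustration assuming shortest-path edge betweenness (the paper considers several betweenness measures), and the function names are introduced here, not taken from the authors' code.

```python
from collections import deque, defaultdict

def edge_betweenness(adj):
    """Shortest-path edge betweenness (Brandes-style accumulation)
    for an unweighted undirected graph given as {node: set(neighbours)}."""
    bet = defaultdict(float)
    for s in adj:
        dist, order = {s: 0}, []
        sigma = defaultdict(float)
        sigma[s] = 1.0
        preds = defaultdict(list)
        queue = deque([s])
        while queue:                          # BFS counting shortest paths
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:             # first time w is reached
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:    # v lies on a shortest path to w
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)
        for w in reversed(order):             # back-propagate dependencies
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                bet[frozenset((v, w))] += c
                delta[v] += c
    return bet

def components(adj):
    """Connected components via depth-first search."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def girvan_newman_split(adj):
    """Remove the highest-betweenness edge, recomputing betweenness
    after every removal (the step the abstract calls crucial),
    until the graph splits into more components."""
    adj = {v: set(ws) for v, ws in adj.items()}
    target = len(components(adj)) + 1
    while len(components(adj)) < target:
        bet = edge_betweenness(adj)
        u, w = max(bet, key=bet.get)
        adj[u].discard(w)
        adj[w].discard(u)
    return components(adj)

# Two triangles joined by one bridge edge (2, 3).
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
         3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
parts = girvan_newman_split(graph)
```

On this graph the bridge carries every cross-community shortest path, so it has the highest betweenness and is removed first, splitting the graph into the two obvious communities.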
Authors
Showing all 103,081 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Eric S. Lander | 301 | 826 | 525,976 |
| David Miller | 203 | 2573 | 204,840 |
| Lewis C. Cantley | 196 | 748 | 169,037 |
| Charles A. Dinarello | 190 | 1058 | 139,668 |
| Scott M. Grundy | 187 | 841 | 231,821 |
| Paul G. Richardson | 183 | 1533 | 155,912 |
| Chris Sander | 178 | 713 | 233,287 |
| David R. Williams | 178 | 2034 | 138,789 |
| David L. Kaplan | 177 | 1944 | 146,082 |
| Kari Alitalo | 174 | 817 | 114,231 |
| Richard K. Wilson | 173 | 463 | 260,000 |
| George F. Koob | 171 | 935 | 112,521 |
| Avshalom Caspi | 170 | 524 | 113,583 |
| Derek R. Lovley | 168 | 582 | 95,315 |
| Stephen B. Baylin | 168 | 548 | 188,934 |