Institution
Carnegie Mellon University
Education • Pittsburgh, Pennsylvania, United States
About: Carnegie Mellon University is an education organization based in Pittsburgh, Pennsylvania, United States. It is known for its research contributions in the topics: Population & Robot. The organization has 36317 authors who have published 104359 publications receiving 5975734 citations. The organization is also known as CMU and Carnegie Mellon.
Papers published on a yearly basis
Papers
19 Jun 2016
TL;DR: A generalized large-margin softmax (L-Softmax) loss is proposed that explicitly encourages intra-class compactness and inter-class separability between learned features and that can not only adjust the desired margin but also avoid overfitting.
Abstract: Cross-entropy loss together with softmax is arguably one of the most commonly used supervision components in convolutional neural networks (CNNs). Despite its simplicity, popularity and excellent performance, the component does not explicitly encourage discriminative learning of features. In this paper, we propose a generalized large-margin softmax (L-Softmax) loss which explicitly encourages intra-class compactness and inter-class separability between learned features. Moreover, L-Softmax can not only adjust the desired margin but also avoid overfitting. We also show that the L-Softmax loss can be optimized by typical stochastic gradient descent. Extensive experiments on four benchmark datasets demonstrate that the deeply-learned features with the L-Softmax loss become more discriminative, hence significantly boosting the performance on a variety of visual classification and verification tasks.
769 citations
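The margin idea in the abstract above can be sketched numerically: the paper replaces the target class's angular term cos(θ) with a monotonically decreasing surrogate ψ(θ) that lies at or below cos(θ), so the classifier must meet a stricter angular condition for the true class. A minimal NumPy sketch for a single sample (the function names and the single-vector setup are illustrative assumptions, not the paper's code):

```python
import numpy as np

def psi(theta, m=2):
    """Large-margin angular function from the L-Softmax formulation:
    psi(theta) = (-1)^k * cos(m*theta) - 2k for theta in [k*pi/m, (k+1)*pi/m].
    It is monotonically decreasing on [0, pi] and never exceeds cos(theta),
    which is what enforces the margin; m = 1 recovers plain cos(theta)."""
    k = np.minimum(np.floor(theta * m / np.pi), m - 1).astype(int)
    return ((-1.0) ** k) * np.cos(m * theta) - 2.0 * k

def l_softmax_loss(x, W, y, m=2):
    """L-Softmax loss for one feature vector x, weight matrix W (one column
    per class), and true label y. The target-class logit
    ||W_y|| * ||x|| * cos(theta_y) is replaced by ||W_y|| * ||x|| * psi(theta_y)."""
    norms = np.linalg.norm(W, axis=0) * np.linalg.norm(x)
    cos_t = np.clip(W.T @ x / np.maximum(norms, 1e-12), -1.0, 1.0)
    logits = norms * cos_t
    logits_m = logits.copy()
    logits_m[y] = norms[y] * psi(np.arccos(cos_t[y]), m)
    # standard cross-entropy on the margin-adjusted logits
    z = logits_m - logits_m.max()
    return -z[y] + np.log(np.exp(z).sum())
```

Because ψ(θ) ≤ cos(θ), the loss with m = 2 is never smaller than the plain softmax loss (m = 1) on the same inputs, which is the "harder target" the margin creates.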
TL;DR: This work presents an algorithm that establishes a tight bound within this minimal amount of search, and shows how to distribute the desired search across self-interested manipulative agents.
769 citations
01 Jan 1995
TL;DR: The recently formulated WHAM method is an extension of Ferrenberg and Swendsen's multiple histogram technique for free-energy and potential of mean force calculations and provides an analysis of the statistical accuracy of the potential of mean force as well as a guide to the most efficient use of additional simulations to minimize errors.
Abstract: The recently formulated weighted histogram analysis method (WHAM) is an extension of Ferrenberg and Swendsen's multiple histogram technique for free-energy and potential of mean force calculations. As an illustration of the method, we have calculated the two-dimensional potential of mean force surface of the dihedrals gamma and chi in deoxyadenosine with Monte Carlo simulations using the all-atom and united-atom representation of the AMBER force fields. This also demonstrates one of the major advantages of WHAM over umbrella sampling techniques. The method also provides an analysis of the statistical accuracy of the potential of mean force as well as a guide to the most efficient use of additional simulations to minimize errors. © 1995 John Wiley & Sons, Inc.
767 citations
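The WHAM procedure the abstract describes is a pair of self-consistent relations solved by fixed-point iteration: the unbiased probability of bin j combines the pooled histogram counts with per-window weights, and each window's free-energy shift is recomputed from the current estimate until convergence. A small NumPy sketch with β = 1 folded into the energies (the array shapes, tolerance, and the gauge choice f_0 = 0 are illustrative conventions, not from the paper):

```python
import numpy as np

def wham(counts, bias, n_iter=2000, tol=1e-10):
    """Self-consistent WHAM iteration (beta = 1).
    counts: (S, B) histogram counts from S biased simulations over B bins.
    bias:   (S, B) biasing potential U_i(x_j) evaluated at each bin center.
    Returns unbiased bin probabilities P_j and per-window free-energy
    shifts f_i satisfying the two coupled WHAM equations."""
    S, B = counts.shape
    N = counts.sum(axis=1)       # total samples per window
    total = counts.sum(axis=0)   # pooled counts per bin (numerator)
    f = np.zeros(S)
    for _ in range(n_iter):
        # P_j = sum_i n_i(j) / sum_i N_i * exp(f_i - U_i(j))
        denom = (N[:, None] * np.exp(f[:, None] - bias)).sum(axis=0)
        P = total / denom
        P /= P.sum()
        # exp(-f_i) = sum_j P_j * exp(-U_i(j)), gauge-fixed to f_0 = 0
        f_new = -np.log((P[None, :] * np.exp(-bias)).sum(axis=1))
        f_new -= f_new[0]
        converged = np.max(np.abs(f_new - f)) < tol
        f = f_new
        if converged:
            break
    return P, f
```

The potential of mean force is then recovered as −ln P up to an additive constant, and the statistical-accuracy analysis the abstract mentions builds on these same converged weights.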
09 Jun 2016
TL;DR: In this paper, a key-value memory network is proposed to make reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation.
Abstract: Directly reading documents and being able to answer questions from them is an unsolved challenge. To avoid its inherent difficulty, question answering (QA) has been directed towards using Knowledge Bases (KBs) instead, which has proven effective. Unfortunately KBs often suffer from being too restrictive, as the schema cannot support certain types of answers, and too sparse, e.g. Wikipedia contains much more information than Freebase. In this work we introduce a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation. To compare using KBs, information extraction or Wikipedia documents directly in a single framework we construct an analysis tool, WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in the domain of movies. Our method reduces the gap between all three settings. It also achieves state-of-the-art results on the existing WikiQA benchmark.
767 citations
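The core mechanism in the abstract above, separate encodings for the addressing stage and the output stage, can be sketched as a key-value attention read: keys score the query, values form the returned content, and the query is updated between hops. A minimal NumPy sketch (the function names, the shared update matrix R, and plain dot-product scoring are simplifying assumptions, not the paper's exact model):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kv_memory_read(query, keys, values, R, hops=2):
    """One key-value memory read module (a sketch, not the full model).
    Addressing uses the keys; the returned content uses the values. The
    separate encodings for the two stages are the point of the method.
    query:  (d,)   embedded question
    keys:   (M, d) embedded memory keys
    values: (M, d) embedded memory values
    R:      (d, d) query-update matrix (shared across hops here)"""
    q = query
    for _ in range(hops):
        p = softmax(keys @ q)   # addressing: relevance weight per slot
        o = values.T @ p        # reading: weighted average of the values
        q = R @ (q + o)         # query update before the next hop
    return q
```

The final query vector would then be scored against candidate-answer embeddings; multiple hops let the model refine what it is looking for after each read.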
Authors
Showing all 36645 results
Name | H-index | Papers | Citations |
---|---|---|---|
Yi Chen | 217 | 4342 | 293080 |
Rakesh K. Jain | 200 | 1467 | 177727 |
Robert C. Nichol | 187 | 851 | 162994 |
Michael I. Jordan | 176 | 1016 | 216204 |
Jasvinder A. Singh | 176 | 2382 | 223370 |
J. N. Butler | 172 | 2525 | 175561 |
P. Chang | 170 | 2154 | 151783 |
Krzysztof Matyjaszewski | 169 | 1431 | 128585 |
Yang Yang | 164 | 2704 | 144071 |
Geoffrey E. Hinton | 157 | 414 | 409047 |
Herbert A. Simon | 157 | 745 | 194597 |
Yongsun Kim | 156 | 2588 | 145619 |
Terrence J. Sejnowski | 155 | 845 | 117382 |
John B. Goodenough | 151 | 1064 | 113741 |
Scott Shenker | 150 | 454 | 118017 |