Institution
Carnegie Mellon University
Education • Pittsburgh, Pennsylvania, United States
About: Carnegie Mellon University is an educational and research organization based in Pittsburgh, Pennsylvania, United States. It is known for its research contributions in topics such as computer science and robotics. The organization has 36,317 authors who have published 104,359 publications, receiving 5,975,734 citations. The organization is also known as CMU and Carnegie Mellon.
Papers published on a yearly basis
Papers
05 Dec 2005
TL;DR: The correlated topic model (CTM) is developed, in which the topic proportions exhibit correlation via the logistic normal distribution; a mean-field variational inference algorithm is derived for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial.
Abstract: Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than x-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [1]. We derive a mean-field variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. The CTM gives a better fit than LDA on a collection of OCRed articles from the journal Science. Furthermore, the CTM provides a natural way of visualizing and exploring this and other unstructured data sets.
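The abstract's key point is that a Dirichlet over topic proportions cannot express correlation between topics, while a logistic normal can, because its Gaussian covariance matrix encodes which topics tend to co-occur. A minimal sketch of the two sampling schemes (illustrative parameter values; not the paper's inference algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3  # number of topics

# Dirichlet draw (LDA's choice): valid topic proportions, but the
# single concentration vector cannot encode pairwise correlation.
alpha = np.ones(K)
theta_lda = rng.dirichlet(alpha)

# Logistic-normal draw (the CTM's choice): sample from a multivariate
# Gaussian whose covariance encodes topic correlations, then map the
# draw onto the probability simplex with a softmax.
mu = np.zeros(K)
Sigma = np.array([[1.0, 0.8, -0.5],    # topics 0 and 1 co-occur;
                  [0.8, 1.0, -0.4],    # topic 2 anti-correlates
                  [-0.5, -0.4, 1.0]])  # with both
eta = rng.multivariate_normal(mu, Sigma)
theta_ctm = np.exp(eta) / np.exp(eta).sum()  # softmax onto the simplex
```

Both draws are valid probability vectors; the difference is that repeated logistic-normal draws with this `Sigma` will tend to assign high weight to topics 0 and 1 together, a pattern the Dirichlet cannot produce.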
1,046 citations
23 Jun 2007
TL;DR: The technical details underlying the Meteor metric are recapped; the latest release includes improved metric parameters and extends the metric to support evaluation of MT output in Spanish, French, and German, in addition to English.
Abstract: Meteor is an automatic metric for Machine Translation evaluation which has been demonstrated to have high levels of correlation with human judgments of translation quality, significantly outperforming the more commonly used Bleu metric. It is one of several automatic metrics used in this year's shared task within the ACL WMT-07 workshop. This paper recaps the technical details underlying the metric and describes recent improvements in the metric. The latest release includes improved metric parameters and extends the metric to support evaluation of MT output in Spanish, French and German, in addition to English.
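At its core, Meteor aligns unigrams between a candidate translation and a reference, then combines precision and recall with a recall-weighted harmonic mean. A simplified sketch of that core (exact-match alignment only; real Meteor also matches stems and synonyms and applies a fragmentation penalty, and the `alpha` value here is illustrative):

```python
def simple_meteor(candidate: str, reference: str, alpha: float = 0.9) -> float:
    """Simplified Meteor-style unigram F-score.

    Matches candidate words against reference words (each reference
    word may be matched at most once), then returns a harmonic mean
    of precision and recall weighted toward recall via alpha.
    """
    cand = candidate.lower().split()
    ref = reference.lower().split()
    ref_pool = list(ref)  # each reference word consumable once
    matches = 0
    for word in cand:
        if word in ref_pool:
            ref_pool.remove(word)
            matches += 1
    if matches == 0:
        return 0.0
    precision = matches / len(cand)
    recall = matches / len(ref)
    # Recall-weighted harmonic mean: alpha near 1 emphasizes recall,
    # which correlates better with human judgments than precision alone.
    return precision * recall / (alpha * precision + (1 - alpha) * recall)
```

For identical sentences the score is 1.0; a candidate that covers only part of the reference is penalized mostly through recall.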
1,045 citations
01 Jan 2005
TL;DR: This research suggests that consumers often lack enough information to make privacy-sensitive decisions and, even with sufficient information, are likely to trade off long-term privacy for short-term benefits.
Abstract: Traditional theory suggests consumers should be able to manage their privacy. Yet, empirical and theoretical research suggests that consumers often lack enough information to make privacy-sensitive decisions and, even with sufficient information, are likely to trade off long-term privacy for short-term benefits.
1,045 citations
TL;DR: The algorithm allows combinatorial auctions to scale up to significantly larger numbers of items and bids than prior approaches to optimal winner determination by capitalizing on the fact that the space of bids is sparsely populated in practice.
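The problem this entry refers to, optimal winner determination, is: given bids on bundles of items, choose the revenue-maximizing set of bids that claim disjoint items. A brute-force sketch of the problem statement (the paper's algorithm searches the sparse bid space far more efficiently than this exponential enumeration, which is shown only to make the objective concrete):

```python
from itertools import combinations

def optimal_winners(bids):
    """Brute-force winner determination for a combinatorial auction.

    bids: list of (frozenset_of_items, price) pairs.
    Returns (best_revenue, indices_of_winning_bids), where winning
    bids must claim pairwise-disjoint item sets.
    """
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for combo in combinations(range(len(bids)), r):
            items_seen = set()
            feasible = True
            for i in combo:
                if bids[i][0] & items_seen:  # item already awarded
                    feasible = False
                    break
                items_seen |= bids[i][0]
            if feasible:
                value = sum(bids[i][1] for i in combo)
                if value > best_value:
                    best_value, best_set = value, list(combo)
    return best_value, best_set
```

For example, with bids of 5 for {a, b}, 3 for {a}, and 3 for {b}, the two singleton bids together (revenue 6) beat the bundle bid (revenue 5).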
1,045 citations
TL;DR: The comprehension of visually presented sentences produces brain activation that increases with the linguistic complexity of the sentence, and the amount of neural activity that a given cognitive process engenders is dependent on the computational demand that the task imposes.
Abstract: The comprehension of visually presented sentences produces brain activation that increases with the linguistic complexity of the sentence. The volume of neural tissue activated (number of voxels) during sentence comprehension was measured with echo-planar functional magnetic resonance imaging. The modulation of the volume of activation by sentence complexity was observed in a network of four areas: the classical left-hemisphere language areas (the left laterosuperior temporal cortex, or Wernicke's area, and the left inferior frontal gyrus, or Broca's area) and their homologous right-hemisphere areas, although the right areas had much smaller volumes of activation than did the left areas. These findings generally indicate that the amount of neural activity that a given cognitive process engenders is dependent on the computational demand that the task imposes.
1,045 citations
Authors
Showing all 36,645 results
Name | H-index | Papers | Citations |
---|---|---|---|
Yi Chen | 217 | 4342 | 293080 |
Rakesh K. Jain | 200 | 1467 | 177727 |
Robert C. Nichol | 187 | 851 | 162994 |
Michael I. Jordan | 176 | 1016 | 216204 |
Jasvinder A. Singh | 176 | 2382 | 223370 |
J. N. Butler | 172 | 2525 | 175561 |
P. Chang | 170 | 2154 | 151783 |
Krzysztof Matyjaszewski | 169 | 1431 | 128585 |
Yang Yang | 164 | 2704 | 144071 |
Geoffrey E. Hinton | 157 | 414 | 409047 |
Herbert A. Simon | 157 | 745 | 194597 |
Yongsun Kim | 156 | 2588 | 145619 |
Terrence J. Sejnowski | 155 | 845 | 117382 |
John B. Goodenough | 151 | 1064 | 113741 |
Scott Shenker | 150 | 454 | 118017 |