Institution

Carnegie Mellon University

Education · Pittsburgh, Pennsylvania, United States

About: Carnegie Mellon University is an education organization based in Pittsburgh, Pennsylvania, United States. It is known for research contributions in the topics of Population and Robot. The organization has 36,317 authors who have published 104,359 publications receiving 5,975,734 citations. The organization is also known as CMU and Carnegie Mellon.


Papers
Journal Article · DOI
TL;DR: Children's and women's haemoglobin statuses improved in some regions where concentrations had been low in the 1990s, leading to a modest global increase in mean haemoglobin and a reduction in anaemia prevalence between 1995 and 2011.

1,335 citations

Journal Article · DOI
TL;DR: Evaluation on five different databases and four types of queries indicates that the two-stage smoothing method with the proposed parameter estimation methods consistently gives retrieval performance that is close to or better than the best results achieved using a single smoothing method and exhaustive parameter search on the test data.
Abstract: Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then to rank documents by the likelihood of the query according to the estimated language model. A central issue in language model estimation is smoothing, the problem of adjusting the maximum likelihood estimator to compensate for data sparseness. In this article, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collections. Experimental results show that not only is the retrieval performance generally sensitive to the smoothing parameters, but also the sensitivity pattern is affected by the query type, with performance being more sensitive to smoothing for verbose queries than for keyword queries. Verbose queries also generally require more aggressive smoothing to achieve optimal performance. This suggests that smoothing plays two different roles: to make the estimated document language model more accurate, and to "explain" the noninformative words in the query. In order to decouple these two distinct roles of smoothing, we propose a two-stage smoothing strategy, which yields better sensitivity patterns and facilitates setting the smoothing parameters automatically. We further propose methods for estimating the smoothing parameters automatically. Evaluation on five different databases and four types of queries indicates that the two-stage smoothing method with the proposed parameter estimation methods consistently gives retrieval performance that is close to, or better than, the best results achieved using a single smoothing method and exhaustive parameter search on the test data.

1,334 citations
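The scoring rule at the heart of this method is compact enough to sketch. Below is a minimal, illustrative Python implementation of two-stage smoothing for query-likelihood retrieval: a Dirichlet prior (parameter mu) smooths the document language model, and Jelinek-Mercer interpolation (parameter lam) absorbs noninformative query words. The toy corpus and the mu and lam values are assumptions for illustration, not the paper's experimental settings.

import math
from collections import Counter

def two_stage_score(query, doc, collection, mu=1000.0, lam=0.5):
    """Log query likelihood of `query` under the smoothed model of `doc`."""
    doc_counts = Counter(doc)
    coll_counts = Counter(collection)
    doc_len, coll_len = len(doc), len(collection)
    score = 0.0
    for w in query:
        p_coll = coll_counts[w] / coll_len  # background (collection) model
        if p_coll == 0.0:
            continue  # word unseen in the whole collection; avoid log(0)
        # Stage 1: Dirichlet-prior smoothing of the document model
        p_dir = (doc_counts[w] + mu * p_coll) / (doc_len + mu)
        # Stage 2: Jelinek-Mercer interpolation with the collection model
        score += math.log((1.0 - lam) * p_dir + lam * p_coll)
    return score

# Rank a toy corpus by descending query likelihood.
docs = [["smoothing", "helps", "retrieval"], ["robots", "learn", "fast"]]
collection = [w for d in docs for w in d]
query = ["smoothing", "retrieval"]
ranked = sorted(docs, key=lambda d: two_stage_score(query, d, collection), reverse=True)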

Journal Article · DOI
TL;DR: This paper introduces the Multi-PIE face database, describes the recording procedure, and presents results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.

1,333 citations
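To give a feel for what such baselines involve, here is a hedged scikit-learn sketch of a PCA-then-LDA classifier of the kind the TL;DR mentions. Multi-PIE itself is distributed under license, so the freely downloadable Olivetti faces serve as stand-in data here; this is an assumed setup for illustration, not the paper's evaluation protocol.

from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()  # 400 images, 40 subjects, 64x64 pixels
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0, stratify=faces.target)

# PCA reduces the raw pixel space; LDA then classifies in the reduced space.
baseline = make_pipeline(PCA(n_components=100, whiten=True),
                         LinearDiscriminantAnalysis())
baseline.fit(X_train, y_train)
print("identification accuracy:", baseline.score(X_test, y_test))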

Posted Content
TL;DR: The main theorem characterizes permutation invariant objective functions and provides a family of functions to which any permutation invariant objective must belong; this structure enables the design of a deep network architecture that can operate on sets and be deployed in a variety of scenarios, including both unsupervised and supervised learning tasks.
Abstract: We study the problem of designing models for machine learning tasks defined on sets. In contrast to the traditional approach of operating on fixed-dimensional vectors, we consider objective functions defined on sets that are invariant to permutations. Such problems are widespread, ranging from the estimation of population statistics [poczos13aistats], to anomaly detection in piezometer data of embankment dams [Jung15Exploration], to cosmology [Ntampaka16Dynamical, Ravanbakhsh16ICML1]. Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong. This family of functions has a special structure which enables us to design a deep network architecture that can operate on sets and which can be deployed on a variety of scenarios including both unsupervised and supervised learning tasks. We also derive the necessary and sufficient conditions for permutation equivariance in deep models. We demonstrate the applicability of our method on population statistic estimation, point cloud classification, set expansion, and outlier detection.

1,329 citations
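The sum-decomposition structure behind the theorem, f(X) = rho(sum over x in X of phi(x)), can be sketched in a few lines of numpy. The random weights below are placeholders rather than a trained model; the sketch only demonstrates that a network of this form is invariant to the order of the set elements.

import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 16))  # phi: embeds each 3-d set element into 16-d
W_rho = rng.normal(size=(16, 1))  # rho: maps the pooled representation to an output

def deep_set(X):
    """X: (n_elements, 3) array; the output is invariant to row permutations."""
    h = np.tanh(X @ W_phi)           # apply phi to every element independently
    pooled = h.sum(axis=0)           # permutation-invariant sum pooling
    return np.tanh(pooled @ W_rho)   # apply rho to the aggregated features

X = rng.normal(size=(5, 3))
perm = rng.permutation(5)
assert np.allclose(deep_set(X), deep_set(X[perm]))  # element order does not matter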

Journal Article · DOI
TL;DR: For a complete sample of 122,808 galaxies drawn from the Sloan Digital Sky Survey (SDSS), this article studies the relationships between stellar mass, star formation history, size and internal structure.
Abstract: We study the relations between stellar mass, star formation history, size and internal structure for a complete sample of 122,808 galaxies drawn from the Sloan Digital Sky Survey. We show that low-redshift galaxies divide into two distinct families at a stellar mass of 3 × 10^10 M⊙. Lower-mass galaxies have young stellar populations, low surface mass densities and the low concentrations typical of discs. Their star formation histories are more strongly correlated with surface mass density than with stellar mass. A significant fraction of the lowest-mass galaxies in our sample have experienced recent starbursts. At given stellar mass, the sizes of low-mass galaxies are lognormally distributed with dispersion σ(ln R₅₀) ∼ 0.5, in excellent agreement with the idea that they form with little angular momentum loss through cooling and condensation in a gravitationally dominant dark matter halo. Their median stellar surface mass density scales with stellar mass as μ* ∝ M*^0.54, suggesting that the stellar mass of a disc galaxy is proportional to the three-halves power of its halo mass. All of this suggests that the efficiency of the conversion of baryons into stars in low-mass galaxies increases in proportion to halo mass, perhaps as a result of supernova feedback processes. At stellar masses above 3 × 10^10 M⊙, there is a rapidly increasing fraction of galaxies with old stellar populations, high surface mass densities and the high concentrations typical of bulges. In this regime, the size distribution remains lognormal, but its dispersion decreases rapidly with increasing mass and the median stellar mass surface density is approximately constant. This suggests that the star formation efficiency decreases in the highest-mass haloes, and that little star formation occurs in massive galaxies after they have assembled.

1,328 citations
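The abstract's step from the observed surface-density relation to a three-halves power of halo mass is a short scaling argument; a sketch, assuming disc size traces the halo virial radius (R ∝ M_h^{1/3} at roughly fixed halo density):

\mu_* \propto \frac{M_*}{R^2} \propto \frac{M_*}{M_h^{2/3}}, \qquad
\mu_* \propto M_*^{0.54} \;\Rightarrow\; M_*^{0.46} \propto M_h^{2/3}
\;\Rightarrow\; M_* \propto M_h^{2/(3 \times 0.46)} \approx M_h^{1.45} \approx M_h^{3/2}.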


Authors

Showing all 36,645 results

Name | H-index | Papers | Citations
Yi Chen | 217 | 4,342 | 293,080
Rakesh K. Jain | 200 | 1,467 | 177,727
Robert C. Nichol | 187 | 851 | 162,994
Michael I. Jordan | 176 | 1,016 | 216,204
Jasvinder A. Singh | 176 | 2,382 | 223,370
J. N. Butler | 172 | 2,525 | 175,561
P. Chang | 170 | 2,154 | 151,783
Krzysztof Matyjaszewski | 169 | 1,431 | 128,585
Yang Yang | 164 | 2,704 | 144,071
Geoffrey E. Hinton | 157 | 414 | 409,047
Herbert A. Simon | 157 | 745 | 194,597
Yongsun Kim | 156 | 2,588 | 145,619
Terrence J. Sejnowski | 155 | 845 | 117,382
John B. Goodenough | 151 | 1,064 | 113,741
Scott Shenker | 150 | 454 | 118,017
Network Information
Related Institutions (5)
Massachusetts Institute of Technology: 268K papers, 18.2M citations (95% related)
University of Maryland, College Park: 155.9K papers, 7.2M citations (93% related)
University of Illinois at Urbana–Champaign: 225.1K papers, 10.1M citations (93% related)
IBM: 253.9K papers, 7.4M citations (93% related)
Princeton University: 146.7K papers, 9.1M citations (92% related)

Performance Metrics

No. of papers from the Institution in previous years

Year | Papers
2023 | 120
2022 | 499
2021 | 4,980
2020 | 5,375
2019 | 5,420
2018 | 4,972