Institution

Carnegie Mellon University

EducationPittsburgh, Pennsylvania, United States
About: Carnegie Mellon University is an education organization based in Pittsburgh, Pennsylvania, United States. It is known for research contributions in the topics of Population and Robot. The organization has 36317 authors who have published 104359 publications receiving 5975734 citations. The organization is also known as CMU and Carnegie Mellon.


Papers
Proceedings Article
15 Feb 2018
TL;DR: It is shown that one cause of such failures is the exponential moving average used in these algorithms, and it is suggested that the convergence issues can be fixed by endowing such algorithms with 'long-term memory' of past gradients.
Abstract: Several recently proposed stochastic optimization methods that have been successfully used in training deep networks, such as RMSProp, Adam, Adadelta, and Nadam, are based on gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause of such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of the Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with 'long-term memory' of past gradients, and we propose new variants of the Adam algorithm that not only fix the convergence issues but often also lead to improved empirical performance.

1,146 citations
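The 'long-term memory' fix described in the abstract corresponds to the paper's AMSGrad variant, which replaces Adam's exponential moving average of squared gradients with a running maximum when scaling the step. A minimal sketch follows; the hyperparameter values, the scalar setting, and the toy quadratic objective are illustrative assumptions, and the bias-correction terms of full Adam are omitted:

```python
import numpy as np

def amsgrad_step(theta, grad, m, v, v_hat, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update: like Adam, but scale by the running MAX of v."""
    m = b1 * m + (1 - b1) * grad          # first moment (momentum)
    v = b2 * v + (1 - b2) * grad ** 2     # EMA of squared gradients (as in Adam)
    v_hat = max(v_hat, v)                 # 'long-term memory': never forget large past gradients
    theta = theta - lr * m / (np.sqrt(v_hat) + eps)
    return theta, m, v, v_hat

# Minimize the toy objective f(x) = x^2, gradient 2x, starting from x = 5.
theta, m, v, v_hat = 5.0, 0.0, 0.0, 0.0
for _ in range(200):
    theta, m, v, v_hat = amsgrad_step(theta, 2.0 * theta, m, v, v_hat)
```

Because v_hat is non-decreasing, the effective step size can only shrink over time, which is the property the paper's convergence argument relies on.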

Book
01 Jan 1996
TL;DR: In this book, the authors take an evolutionary perspective on cognitive development, examining children's cognitive variability, the adaptivity of multiplicity, formal models of strategy choice, and how children generate new ways of thinking, and close by proposing a new agenda for cognitive development.
Abstract: 1. Whose Children are we Talking About? 2. Evolution and Cognitive Development 3. Cognitive Variability: The Ubiquity of Multiplicity 4. Strategic Development: Trudging up the Staircase or Swimming with the Tide 5. The Adaptivity of Multiplicity 6. Formal Models of Strategy Choice or Plasterers and Professors 7. How Children Generate New Ways of Thinking 8. A New Agenda for Cognitive Development

1,145 citations

Journal ArticleDOI
TL;DR: This paper describes an approach to robot mapping that integrates the grid-based and topological paradigms, gaining the advantages of both worlds: accuracy/consistency and efficiency.

1,140 citations
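The integration described in the TL;DR can be illustrated in miniature: an occupancy grid supplies the metric (accurate/consistent) layer, and a graph whose nodes are connected free-space regions supplies the topological (efficient) layer. The sketch below is a hedged illustration, not the paper's actual algorithm (which partitions the grid along critical lines derived from a Voronoi diagram); the tiny map with a wall and doorway is invented:

```python
import numpy as np

grid = np.zeros((5, 7), dtype=int)   # occupancy grid: 0 = free, 1 = occupied
grid[:, 3] = 1                       # a wall ...
grid[2, 3] = 0                       # ... with a doorway

def regions(grid):
    """Label 4-connected free-space regions with a simple flood fill.

    Returns (labels, count); each region would become one node of the
    topological graph built on top of the metric grid."""
    labels = -np.ones(grid.shape, dtype=int)
    count = 0
    for start in zip(*np.nonzero(grid == 0)):
        if labels[start] != -1:
            continue
        stack = [start]
        labels[start] = count
        while stack:
            r, c = stack.pop()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]
                        and grid[rr, cc] == 0 and labels[rr, cc] == -1):
                    labels[rr, cc] = count
                    stack.append((rr, cc))
        count += 1
    return labels, count

_, n_open = regions(grid)            # doorway open: both rooms connect
solid = grid.copy()
solid[2, 3] = 1                      # close the doorway
_, n_solid = regions(solid)          # now two separate regions
```

Topological nodes then correspond to free-space regions: one region while the doorway connects the rooms, two once it is closed.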

Posted Content
TL;DR: This article reviews recent developments in neuroeconomics and their implications for economics, proposing a two-dimensional dichotomization of neural processes: automatic versus controlled, and cognitive versus affective.
Abstract: We review recent developments in neuroeconomics and their implications for economics. The paper consists of six sections. Following the Introduction, the second section enumerates the different research methods that neuroscientists use and evaluates their strengths and limitations for analyzing economic phenomena. The third section provides a review of basic findings in neuroscience that we deemed especially relevant to economics, and proposes a two-dimensional dichotomization of neural processes between automatic and controlled processes on the one hand, and cognitive and affective processes on the other. Section 4 reviews general implications of neuroscience for economics. Research in neuroscience, for example, raises questions about the usefulness of many economic constructs, such as 'time preference' and 'risk preference'. It also suggests that, contrary to the assumption of domain-general intelligence, humans are likely to possess domain-specific intelligence - brilliant when it comes to problems that the brain is well evolved for performing, and flat-footed for problems that lie outside of the brain's existing specialized functions. Section 5 provides more detailed discussions of four specific applications: intertemporal choice, decision making under risk and uncertainty, game theory, and labor-market discrimination. Section 6 concludes by proposing a distinction between two general approaches in applying neuroscience to economics, which we term 'incremental' and 'radical'. The former draws on neuroscience findings to refine existing economic models, while the latter poses more basic challenges to the standard economic understanding of human behavior.

1,140 citations

Posted Content
TL;DR: The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated.
Abstract: We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The statistical modeling techniques introduced in this paper differ from those common to much of the natural language processing literature since there is no probabilistic finite state or push-down automaton on which the model is built. Our approach also differs from the techniques common to the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches including decision trees and Boltzmann machines are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing. Key words: random field, Kullback-Leibler divergence, iterative scaling, divergence geometry, maximum entropy, EM algorithm, statistical learning, clustering, word morphology, natural language processing

1,140 citations
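The weight-training step described in the abstract - minimizing the Kullback-Leibler divergence between the model and the empirical distribution - can be sketched with a toy log-linear (maximum-entropy) model. This is an illustrative sketch only: the paper estimates weights with iterative scaling and grows the feature set greedily, neither of which is reproduced here, and the features and data below are invented; plain gradient descent on the same KL objective is used instead:

```python
import numpy as np

F = np.array([[1.0, 0.0],      # binary feature vector for each of four outcomes
              [1.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])
p_emp = np.array([0.4, 0.3, 0.2, 0.1])   # empirical distribution of training data

w = np.zeros(2)                          # one trainable weight per feature
for _ in range(5000):
    logits = F @ w
    p_model = np.exp(logits - logits.max())   # log-linear model p(x) ~ exp(w . f(x))
    p_model /= p_model.sum()
    # gradient of KL(p_emp || p_model) w.r.t. w:
    # model feature expectations minus empirical feature expectations
    w -= 0.5 * (F.T @ p_model - F.T @ p_emp)
```

At the optimum the model matches the empirical expectation of every feature, which is the defining property of the maximum-entropy / minimum-KL solution that iterative scaling also converges to.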


Authors

Showing all 36,645 results

Name                      H-index  Papers  Citations
Yi Chen                   217      4,342   293,080
Rakesh K. Jain            200      1,467   177,727
Robert C. Nichol          187      851     162,994
Michael I. Jordan         176      1,016   216,204
Jasvinder A. Singh        176      2,382   223,370
J. N. Butler              172      2,525   175,561
P. Chang                  170      2,154   151,783
Krzysztof Matyjaszewski   169      1,431   128,585
Yang Yang                 164      2,704   144,071
Geoffrey E. Hinton        157      414     409,047
Herbert A. Simon          157      745     194,597
Yongsun Kim               156      2,588   145,619
Terrence J. Sejnowski     155      845     117,382
John B. Goodenough        151      1,064   113,741
Scott Shenker             150      454     118,017
Network Information
Related Institutions (5)
Massachusetts Institute of Technology
268K papers, 18.2M citations

95% related

University of Maryland, College Park
155.9K papers, 7.2M citations

93% related

University of Illinois at Urbana–Champaign
225.1K papers, 10.1M citations

93% related

IBM
253.9K papers, 7.4M citations

93% related

Princeton University
146.7K papers, 9.1M citations

92% related

Performance
Metrics
No. of papers from the Institution in previous years
Year  Papers
2023  120
2022  499
2021  4,980
2020  5,375
2019  5,420
2018  4,972