Institution
University of Massachusetts Amherst
About: University of Massachusetts Amherst is an education organization based in Amherst Center, Massachusetts, United States. It is known for its research contributions in the topics of Population and Galaxy. The organization has 37274 authors who have published 83965 publications receiving 3834996 citations. The organization is also known as: UMass Amherst & Massachusetts State College.
Papers
TL;DR: In this paper, the Smarr formula for static AdS black holes and an expanded first law that includes variations in the cosmological constant were derived and related by a scaling argument based on Euler's theorem.
Abstract: We present geometric derivations of the Smarr formula for static AdS black holes and an expanded first law that includes variations in the cosmological constant. These two results are further related by a scaling argument based on Euler's theorem. The key new ingredient in the constructions is a two-form potential for the static Killing field. Surface integrals of the Killing potential determine the coefficient of the variation of Λ in the first law. This coefficient is proportional to a finite, effective volume for the region outside the AdS black hole horizon, which can also be interpreted as minus the volume excluded from a spatial slice by the black hole horizon. This effective volume also contributes to the Smarr formula. Since Λ is naturally thought of as a pressure, the new term in the first law has the form of effective volume times change in pressure that arises in the variation of the enthalpy in classical thermodynamics. This and related arguments suggest that the mass of an AdS black hole should be interpreted as the enthalpy of the spacetime.
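The relations the abstract describes can be sketched, in four dimensions, as follows (a reconstruction from the abstract's wording; the paper itself works more generally, so symbols here are the standard ones rather than the authors' exact notation):

```latex
% Identifying the cosmological constant with a pressure,
P = -\frac{\Lambda}{8\pi} ,
% the expanded first law acquires a volume--pressure term,
\delta M = T\,\delta S + V\,\delta P ,
% and the Smarr formula picks up the same effective volume,
M = 2\,(TS - PV) .
% Comparing \delta M = T\,\delta S + V\,\delta P with
% \delta H = T\,\delta S + V\,\delta P for the enthalpy H = U + PV
% is what suggests reading the mass M as an enthalpy.
```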
1,258 citations
TL;DR: An overview of the methods used in the PROMIS item analyses and the proposed calibration of item banks is provided, and recommendations are given for future evaluations of item banks in HRQOL assessment.
Abstract: Background: The construction and evaluation of item banks to measure unidimensional constructs of health-related quality of life (HRQOL) is a fundamental objective of the Patient-Reported Outcomes Measurement Information System (PROMIS) project. Objectives: Item banks will be used as the foundation for developing short-form instruments and enabling computerized adaptive testing. The PROMIS Steering Committee selected 5 HRQOL domains for initial focus: physical functioning, fatigue, pain, emotional distress, and social role participation. This report provides an overview of the methods used in the PROMIS item analyses and proposed calibration of item banks. Analyses: Analyses include evaluation of data quality (eg, logic and range checking, spread of response distribution within an item), descriptive statistics (eg, frequencies, means), item response theory model assumptions (unidimensionality, local independence, monotonicity), model fit, differential item functioning, and item calibration for banking. Recommendations: Summarized are key analytic issues; recommendations are provided for future evaluations of item banks in HRQOL assessment.
1,251 citations
TL;DR: An algorithm based on dynamic programming, called Real-Time DP, is introduced, by which an embedded system can improve its performance with experience; the results also illuminate aspects of other DP-based reinforcement learning methods such as Watkins's Q-learning algorithm.
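For readers unfamiliar with the Q-learning algorithm the summary mentions, a minimal tabular sketch is below. The 1-D corridor task and all parameter values are invented for illustration; they are not the paper's benchmarks.

```python
import random

def q_learning(n=6, episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=1):
    """Tabular Q-learning on a toy corridor of n states.

    Actions: a=0 moves left, a=1 moves right; reward 1 on reaching
    the rightmost state, which is terminal.
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n)]
    for _ in range(episodes):
        s = 0
        while s < n - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n - 1 else 0.0
            # Watkins's update: bootstrap from the greedy value of s2
            target = r + (0.0 if s2 == n - 1 else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

With these settings the greedy policy learns to move right from every state, and the value of the final rightward step approaches the terminal reward of 1.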
1,247 citations
TL;DR: Hardt and Negri's Multitude examines war and democracy in the age of empire.
Abstract: Multitude: War and Democracy in the Age of Empire. Michael Hardt and Antonio Negri. 2004. New York. Penguin Books. 448 pages. ISBN: 0143035592 (paper).
1,244 citations
27 Nov 1995
TL;DR: It is concluded that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ.
Abstract: On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(λ) algorithm when λ = 1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ.
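The core method in the abstract, semi-gradient TD(λ) with linear function approximation, can be sketched as below. The paper uses CMAC (tile-coding) features on control tasks; as a stand-in, this hedged example uses one-hot features on a small random-walk prediction task, which is a degenerate special case of coarse coding. All task details and parameter values here are illustrative assumptions, not the paper's experiments.

```python
import random

N = 5                        # nonterminal states 0..4; terminals at -1 and N
ALPHA, LAM, GAMMA = 0.05, 0.8, 1.0

def features(s):
    """One-hot feature vector for state s (stand-in for CMAC features)."""
    x = [0.0] * N
    x[s] = 1.0
    return x

def td_lambda(episodes=5000, seed=0):
    """Online semi-gradient TD(lambda) value prediction on a random walk."""
    rng = random.Random(seed)
    w = [0.0] * N                        # linear weights
    for _ in range(episodes):
        s = N // 2                       # start in the middle
        z = [0.0] * N                    # eligibility trace
        while True:
            s2 = s + rng.choice((-1, 1))
            r = 1.0 if s2 == N else 0.0  # reward only at the right terminal
            v2 = 0.0 if s2 in (-1, N) else w[s2]
            delta = r + GAMMA * v2 - w[s]
            x = features(s)
            # accumulate trace, then move all weights along it
            z = [GAMMA * LAM * zi + xi for zi, xi in zip(z, x)]
            w = [wi + ALPHA * delta * zi for wi, zi in zip(w, z)]
            if s2 in (-1, N):
                break
            s = s2
    return w
```

For this walk the true values are (s+1)/(N+1), so the learned weights should increase from left to right with the middle state near 0.5; λ = 1 recovers the Monte Carlo case the abstract contrasts against.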
1,244 citations
Authors
Showing all 37601 results
| Name | H-index | Papers | Citations |
| --- | --- | --- | --- |
| George M. Whitesides | 240 | 1739 | 269833 |
| Joan Massagué | 189 | 408 | 149951 |
| David H. Weinberg | 183 | 700 | 171424 |
| David L. Kaplan | 177 | 1944 | 146082 |
| Michael I. Jordan | 176 | 1016 | 216204 |
| James F. Sallis | 169 | 825 | 144836 |
| Bradley T. Hyman | 169 | 765 | 136098 |
| Anton M. Koekemoer | 168 | 1127 | 106796 |
| Derek R. Lovley | 168 | 582 | 95315 |
| Michel C. Nussenzweig | 165 | 516 | 87665 |
| Alfred L. Goldberg | 156 | 474 | 88296 |
| Donna Spiegelman | 152 | 804 | 85428 |
| Susan E. Hankinson | 151 | 789 | 88297 |
| Bernard Moss | 147 | 830 | 76991 |
| Roger J. Davis | 147 | 498 | 103478 |