Author

Alvaro Sandroni

Bio: Alvaro Sandroni is an academic researcher from Northwestern University. The author has contributed to research in topics: Bayesian inference & Nash equilibrium. The author has an h-index of 26, co-authored 85 publications receiving 2902 citations. Previous affiliations of Alvaro Sandroni include University of Rochester & University of Pennsylvania.


Papers
Journal ArticleDOI
TL;DR: In this paper, a model of participation in elections is presented in which voting is costly and no vote is pivotal; ethical agents are motivated to participate when they determine that agents of their type are obligated to do so.
Abstract: We analyze a model of participation in elections in which voting is costly and no vote is pivotal. Ethical agents are motivated to participate when they determine that agents of their type are obligated to do so. Unlike previous duty-based models of participation, in our model an ethical agent's obligation to vote is determined endogenously as a function of the behavior of other agents. Our model predicts high turnout and comparative statics that are consistent with strategic behavior. (JEL D72)

417 citations

Journal ArticleDOI
TL;DR: In this paper, it is shown that among agents who have the same intertemporal discount factor (and who choose savings endogenously), the most prosperous are those who make accurate predictions.
Abstract: Blume and Easley (1992) show that if agents have the same savings rule, those who maximize the expected logarithm of next period's outcomes will eventually hold all wealth (i.e. are "most prosperous"). However, if no agent adopts this rule then the most prosperous are not necessarily those who make the most accurate predictions. Thus, agents who make inaccurate predictions need not be driven out of the market. In this paper, it is shown that, among agents who have the same intertemporal discount factor (and who choose savings endogenously), the most prosperous are those who make accurate predictions. Hence, convergence to rational expectations obtains because agents who make inaccurate predictions are driven out of the market.
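The selection dynamic in this abstract can be illustrated with a toy simulation (my own sketch, not the paper's model): two agents with a common savings rule repeatedly stake their wealth on a binary event in proportion to their beliefs at even odds (the log-optimal rule), and the agent whose beliefs match the true probability ends up holding essentially all wealth.

```python
import random

def simulate_market_selection(p_true=0.7, belief_accurate=0.7,
                              belief_inaccurate=0.5, periods=5000, seed=0):
    """Toy market-selection dynamic. Each period a binary event occurs with
    probability p_true; each agent splits her wealth across the two outcomes
    in proportion to her beliefs at even odds (the log-optimal rule).
    Returns the accurate agent's final share of total wealth."""
    rng = random.Random(seed)
    w_acc = w_inacc = 1.0
    for _ in range(periods):
        up = rng.random() < p_true
        # At even odds, wealth is multiplied by 2 * (belief placed on the outcome)
        w_acc *= 2 * (belief_accurate if up else 1 - belief_accurate)
        w_inacc *= 2 * (belief_inaccurate if up else 1 - belief_inaccurate)
    return w_acc / (w_acc + w_inacc)
```

With the parameters above, the accurate agent's wealth share converges to 1, in line with the paper's conclusion that agents who make inaccurate predictions are driven out of the market.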

385 citations

Journal ArticleDOI
TL;DR: It is shown that, as long as individuals take their personal signals into account in a Bayesian way, repeated interactions lead them to successfully aggregate information and learn the true parameter.
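The information-aggregation claim above can be sketched as follows (an illustrative toy, not the paper's repeated-interaction model): a Bayesian who pools many weak, conditionally independent binary signals learns the true parameter with probability approaching one.

```python
import math
import random

def bayesian_aggregate(theta=1, accuracy=0.6, n_signals=2000, seed=1):
    """Posterior probability that theta = 1 after observing conditionally
    i.i.d. binary signals, each equal to the true theta with the given
    accuracy, starting from a flat prior. Updating is done in log-odds,
    where each signal adds or subtracts its log-likelihood ratio."""
    rng = random.Random(seed)
    llr = math.log(accuracy / (1 - accuracy))
    log_odds = 0.0  # log P(theta = 1) / P(theta = 0)
    for _ in range(n_signals):
        signal = theta if rng.random() < accuracy else 1 - theta
        log_odds += llr if signal == 1 else -llr
    return 1 / (1 + math.exp(-log_odds))
```

Even with barely informative signals (accuracy 0.6), the posterior on the true parameter approaches 1 as signals accumulate.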

378 citations

Journal ArticleDOI
TL;DR: A multicenter, prospective, double-blinded randomized trial is necessary and urgently indicated to determine whether Ig therapy is beneficial or harmful in the care of TEN patients.
Abstract: Experimental evidence implicates Fas ligand-mediated keratinocyte apoptosis as an underlying mechanism of toxic epidermal necrolysis syndrome (TEN). In vitro studies indicate a potential role for immunoglobulin (Ig) therapy in blocking Fas ligand signaling, thus reducing the severity of TEN. Anecdotal reports have described successful treatment of TEN patients with Ig; however, no study to date has analyzed outcome data in a large series of patients treated with Ig using institutional controls. The SCORTEN severity-of-illness score ranks severity and predicts prognosis in TEN patients using age, heart rate, TBSA slough, history of malignancy, and admission blood urea nitrogen, serum bicarbonate, and glucose levels. A retrospective chart review was performed that included all patients treated for TEN at our burn center since 1997. Ig therapy was instituted for all patients with biopsy-proven TEN beginning in January 2000. Twenty-one TEN patients were treated before Ig (no-Ig group), and 24 patients were treated with Ig. SCORTEN data were collected, as well as length of stay (LOS) and status upon discharge. Each patient was given a SCORTEN of 0 to 6, with 1 point each for age greater than 40, TBSA slough greater than 10%, history of malignancy, admission BUN greater than 28 mg/dl, HCO3 less than 20 mEq/L, and glucose greater than 252 mg/dl. Outcome was compared between patients treated with Ig and without Ig. Overall mortality for patients treated before Ig was 28.6% (6/21), and with Ig, mortality was 41.7% (10/24). There was no significant difference in age or TBSA slough. The average SCORTEN between the groups was equivalent (2.2 in no-Ig group vs 2.7 in Ig group, P = 0.3), and no group of patients with any SCORTEN score showed a significant benefit from Ig therapy. Overall LOS as well as LOS for survivors was longer in the Ig group.
This series represents the largest single-institution analysis of TEN patient outcome after institution of Ig therapy. Our data do not show a significant improvement in mortality for TEN patients treated with Ig at any level of severity and may indicate a potential detriment in using Ig. Ig should not be given to TEN patients outside of a clinical trial. A multicenter, prospective, double-blinded randomized trial is necessary and urgently indicated to determine whether Ig therapy is beneficial or harmful in the care of TEN patients.
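The scoring rule described in the abstract is mechanical enough to state as code. The sketch below follows the published SCORTEN, which also awards a point for heart rate above 120 bpm; the abstract's 0-to-6 tally omits that criterion. Threshold values are as given above, with bicarbonate in mEq/L.

```python
def scorten(age, heart_rate, tbsa_slough_pct, has_malignancy,
            bun_mg_dl, bicarb_meq_l, glucose_mg_dl):
    """SCORTEN severity-of-illness score for TEN: one point per risk
    factor present. Range 0-7 as published (0-6 in the abstract's tally,
    which omits the heart-rate criterion)."""
    criteria = [
        age > 40,
        heart_rate > 120,   # in the published score; not in the abstract's tally
        tbsa_slough_pct > 10,
        bool(has_malignancy),
        bun_mg_dl > 28,
        bicarb_meq_l < 20,
        glucose_mg_dl > 252,
    ]
    return sum(criteria)
```

For example, a 65-year-old with 30% TBSA slough and an admission BUN of 30 mg/dl, but no other risk factors, scores 3.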

154 citations

Journal ArticleDOI
TL;DR: The authors argue that large elections may exhibit a moral bias, i.e., conditional on the distribution of preferences within the electorate, alternatives understood by voters to be morally superior are more likely to win in large elections than in small ones.
Abstract: We argue that large elections may exhibit a moral bias, i.e., conditional on the distribution of preferences within the electorate, alternatives understood by voters to be morally superior are more likely to win in large elections than in small ones. This bias can result from ethical expressive preferences, which include a payoff voters obtain from taking an action they believe to be ethical. In large elections pivot probability is small, so expressive preferences become more important relative to material self-interest. Ethical expressive preferences can have a disproportionate impact on results in large elections for two reasons. First, as pivot probability declines, ethical expressive motivations make agents more likely to vote on the basis of ethical considerations than on the basis of narrow self-interest. Second, as pivot probability declines, the set of agents who choose to vote increasingly consists of agents with large ethical expressive payoffs. We provide experimental evidence that is consistent with the hypothesis of moral bias.

148 citations


Cited by
Journal Article
TL;DR: Prospect Theory led cognitive psychology in a new direction that began to uncover other human biases in thinking that are probably not learned but are part of the brain's wiring.
Abstract: In 1974 an article appeared in Science magazine with the dry-sounding title “Judgment Under Uncertainty: Heuristics and Biases” by a pair of psychologists who were not well known outside their discipline of decision theory. In it Amos Tversky and Daniel Kahneman introduced the world to Prospect Theory, which mapped out how humans actually behave when faced with decisions about gains and losses, in contrast to how economists assumed that people behave. Prospect Theory turned Economics on its head by demonstrating through a series of ingenious experiments that people are much more concerned with losses than they are with gains, and that framing a choice from one perspective or the other will result in decisions that are exactly the opposite of each other, even if the outcomes are monetarily the same. Prospect Theory led cognitive psychology in a new direction that began to uncover other human biases in thinking that are probably not learned but are part of our brain’s wiring.
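The loss/gain asymmetry at the heart of Prospect Theory is usually written as a piecewise power value function. The sketch below uses the median parameter estimates from Tversky and Kahneman's later (1992) cumulative version; those numbers are not given in this article and are included only for illustration.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function over gains and losses relative to a
    reference point: concave for gains, convex for losses, and steeper for
    losses by the loss-aversion coefficient lam."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)
```

Under these parameters a $100 loss is weighted more than twice as heavily as a $100 gain, which is the sense in which "people are much more concerned with losses than they are with gains."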

4,351 citations

Book
01 Jan 2006
TL;DR: In this book, the authors provide a comprehensive treatment of the problem of predicting individual sequences using expert advice, a general framework within which many related problems can be cast and discussed, such as repeated game playing, adaptive data compression, sequential investment in the stock market, sequential pattern analysis, and several other problems.
Abstract: This important text and reference for researchers and students in machine learning, game theory, statistics and information theory offers a comprehensive treatment of the problem of predicting individual sequences. Unlike standard statistical approaches to forecasting, prediction of individual sequences does not impose any probabilistic assumption on the data-generating mechanism. Yet, prediction algorithms can be constructed that work well for all possible sequences, in the sense that their performance is always nearly as good as the best forecasting strategy in a given reference class. The central theme is the model of prediction using expert advice, a general framework within which many related problems can be cast and discussed. Repeated game playing, adaptive data compression, sequential investment in the stock market, sequential pattern analysis, and several other problems are viewed as instances of the experts' framework and analyzed from a common nonstochastic standpoint that often reveals new and intriguing connections.
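The canonical algorithm in this framework is the exponentially weighted average forecaster. The toy version below is a sketch under squared loss (the book treats general convex losses): it keeps one weight per expert and shrinks each expert's weight exponentially in that expert's cumulative loss, so the aggregate forecast tracks the best expert up to a regret of order (ln N)/eta, with no probabilistic assumption on the data.

```python
import math

def exp_weighted_forecaster(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster under squared loss.
    expert_preds is a sequence of per-round tuples of expert predictions
    in [0, 1]; outcomes is the realized sequence. Returns the
    forecaster's cumulative squared loss."""
    n_experts = len(expert_preds[0])
    weights = [1.0] * n_experts
    total_loss = 0.0
    for preds, y in zip(expert_preds, outcomes):
        wsum = sum(weights)
        forecast = sum(w * p for w, p in zip(weights, preds)) / wsum
        total_loss += (forecast - y) ** 2
        # Multiplicative update: penalize each expert by its own loss
        weights = [w * math.exp(-eta * (p - y) ** 2)
                   for w, p in zip(weights, preds)]
    return total_loss
```

With one perfect expert among N, the forecaster's total loss stays bounded (near (ln N)/eta) no matter how long the sequence runs, which is the "nearly as good as the best forecasting strategy in a given reference class" guarantee described above.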

3,615 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces were studied in this book, with applications sufficient to show their power and utility, and the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable."
Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models.
Abstract: The basic paradigm of asset pricing is in vibrant flux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models.

1,796 citations

Journal ArticleDOI
TL;DR: In this paper, a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration is proposed, which is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest.
Abstract: Summary. Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. Calibration refers to the statistical consistency between the distributional forecasts and the observations and is a joint property of the predictions and the events that materialize. Sharpness refers to the concentration of the predictive distributions and is a property of the forecasts only. A simple theoretical framework allows us to distinguish between probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform histogram, marginal calibration plots, the sharpness diagram and proper scoring rules. The diagnostic approach is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In combination with cross-validation or in the time series context, our proposal provides very general, nonparametric alternatives to the use of information criteria for model diagnostics and model selection.
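The probability integral transform (PIT) check mentioned in the abstract is easy to sketch (a toy with Uniform(0, 1) observations; the deliberately wrong forecast CDF below is my own choice): if observations are drawn from F and the forecaster issues F itself, the PIT values F(Y) are uniform on [0, 1], so their histogram is flat and their mean is about one half.

```python
import random

def mean_pit(forecast_cdf, n=4000, seed=2):
    """Average probability integral transform of Uniform(0,1) observations
    under a forecast CDF. A calibrated forecast yields uniform PIT values
    (mean about 0.5); systematic departures flag miscalibration."""
    rng = random.Random(seed)
    return sum(forecast_cdf(rng.random()) for _ in range(n)) / n

calibrated = mean_pit(lambda y: y)        # the true CDF of Uniform(0, 1)
biased_high = mean_pit(lambda y: y ** 2)  # a forecast centred too high
```

The miscalibrated forecast's PIT mean lands near 1/3 rather than 1/2. The diagnostic approach of the paper pairs such calibration checks with sharpness measures and proper scoring rules.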

1,537 citations