Author

Stefan Palan

Other affiliations: University of Innsbruck
Bio: Stefan Palan is an academic researcher at the University of Graz. He has contributed to research in the topics Asset (economics) & Business, has an h-index of 12, and has co-authored 41 publications receiving 1,165 citations. His previous affiliations include the University of Innsbruck.

Papers
Journal Article
TL;DR: This article presents www.prolific.ac and lays out its suitability for recruiting subjects for social and economic science experiments. It traces the platform's historical development, presents its features, and contrasts them with the requirements of different types of social and economic experiments.

1,357 citations

Journal Article
Stefan Palan
TL;DR: This paper discusses the literature on bubbles and crashes in the most commonly used experimental asset market design, introduced by Smith et al. It documents the main findings based on the results from 41 published papers, 3 book chapters, and 20 working papers.

232 citations

Journal Article
TL;DR: In this article, the authors experimentally manipulate agents' information regarding the rationality of others in a setting in which previous studies have found irrationality to be present, namely the asset market experiments introduced by Smith et al.

89 citations

Journal Article
TL;DR: In this paper, the authors apply a real-options-based approach to assess the impact of climate change policy, in the form of a constant or growing price floor, on the investment decisions of a single firm in a competitive environment.

56 citations

Journal Article
TL;DR: This paper investigates the effect of team decision-making in an asset market experiment that has long been known to reliably generate price bubbles and crashes in markets populated by individuals, and finds that this tendency is substantially reduced when each decision-making unit is a team of two.
Abstract: In the world of mutual funds management, responsibility for investment decisions is increasingly entrusted to small teams instead of individuals. Yet the effect of team decision-making in a market environment has never been studied in a controlled experiment. In this paper, we investigate the effect of team decision-making in an asset market experiment that has long been known to reliably generate price bubbles and crashes in markets populated by individuals. We find that this tendency is substantially reduced when each decision-making unit is instead a team of two. This holds across a broad spectrum of measures of the severity of mispricing, both under a continuous double-auction institution and in a call market. The result is not driven by reduced turnover due to time required for deliberation by teams, and continues to hold even when subjects are experienced. Our result also holds not only when our teams treatments are compared to the ‘narrow’ baseline provided by the corresponding individuals treatments, but also when compared more broadly to the results of the large body of previous research on markets of this kind.

49 citations


Cited by
Book Chapter
01 Jan 2010

1,556 citations

Journal Article
25 Mar 2016, Science
TL;DR: To contribute data about replicability in economics, the authors replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014, finding that roughly two-thirds of the studies examined yielded replicable estimates of effect size and direction.
Abstract: The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and they all have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.
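The 90% power criterion above fixes the required sample size before data collection. As a minimal, hypothetical sketch (not the authors' actual analysis code), the number of subjects per group needed for a two-sample t-test can be computed with statsmodels; the effect size of 0.5 is an illustrative assumption, not a figure from the paper.

# Hypothetical power analysis: subjects per group needed to detect
# an assumed effect size with 90% power at the 5% significance level.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # assumed original effect size (Cohen's d)
    alpha=0.05,               # 5% significance level
    power=0.90,               # 90% power to detect the effect
    alternative="two-sided",
)
print(f"Subjects needed per group: {n_per_group:.0f}")  # roughly 85

Under these assumptions, a larger original effect size would shrink the required sample, which is why replications of small-effect studies need many more participants.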

811 citations

Journal Article
TL;DR: This paper presents the Gorilla Experiment Builder (gorilla.sc), a fully tooled experiment authoring and deployment platform designed to resolve many timing issues and to make reliable online experimentation open and accessible to a wider range of technical abilities.
Abstract: Behavioral researchers are increasingly conducting their studies online, to gain access to large and diverse samples that would be difficult to get in a laboratory environment. However, there are technical access barriers to building experiments online, and web browsers can present problems for consistent timing, an important issue with reaction-time-sensitive measures. For example, to ensure accuracy and test-retest reliability in presentation and response recording, experimenters need a working knowledge of programming languages such as JavaScript. We review some of the previous and current tools for online behavioral research, as well as how well they address the issues of usability and timing. We then present the Gorilla Experiment Builder (gorilla.sc), a fully tooled experiment authoring and deployment platform, designed to resolve many timing issues and make reliable online experimentation open and accessible to a wider range of technical abilities. To demonstrate the platform's aptitude for accessible, reliable, and scalable research, we administered a task with a range of participant groups (primary school children and adults), settings (without supervision, at home, and under supervision, in both schools and public engagement events), equipment (participant's own computer, computer supplied by the researcher), and connection types (personal internet connection, mobile phone 3G/4G). We used a simplified flanker task taken from the attentional network task (Rueda, Posner, & Rothbart, 2004). We replicated the "conflict network" effect in all these populations, demonstrating the platform's capability to run reaction-time-sensitive experiments. Unresolved limitations of running experiments online are then discussed, along with potential solutions and some future features of the platform.

540 citations