
Showing papers by "George Wright" published in 1994


Journal ArticleDOI
01 Jan 1994
TL;DR: It is proposed that good performance will be manifest when both ecological validity and learnability are high, but that performance will be poor when either is low.
Abstract: Frequently, the same biases have been manifest in experts as in students in the laboratory, yet expertise studies are often no more ecologically valid than laboratory studies because the methods used in both are similar. Further, real-world tasks vary in their learnability, that is, the availability of the outcome feedback a judge needs to improve performance with experience. We propose that good performance will be manifest when both ecological validity and learnability are high, but that performance will be poor when either is low. Finally, we suggest how researchers and practitioners might use these task-analytic constructs to identify true expertise for the formulation of decision support.

169 citations


Journal ArticleDOI
TL;DR: In this article, the relationship between judgmental probability forecasting performance, self-rated expertise, and degree of coherence with the probability laws was investigated; self-rated expertise was found to be a good predictor of subsequent performance, whereas measures of individual coherence were less predictive.

56 citations


Journal ArticleDOI
TL;DR: The paper uses evidence from the literature to review the effectiveness of a number of strategies designed to improve the accuracy of judgmental point forecasts.
Abstract: There is evidence that forecasts produced in business and other organizations often involve substantial elements of human judgment. In forming their judgments, forecasters may have access either to time series alone or to time series plus contextual information. This paper reviews the literature to ascertain, for each information level, what we currently know about the heuristics people use when producing judgmental point forecasts and the biases that emanate from the use of these heuristics. The paper then uses evidence from the literature to review the effectiveness of a number of strategies designed to improve the accuracy of judgmental point forecasts.
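One strategy frequently examined in this literature is to combine a judgmental forecast mechanically with a statistical one. The minimal Python sketch below illustrates that idea under stated assumptions: the series values are hypothetical, and the simple average used as the combination rule is an illustrative choice, not a detail taken from the paper.

```python
# Illustrative sketch: combining judgmental and statistical point forecasts
# by simple averaging. All numbers are hypothetical; the two sources are
# constructed so their errors partly offset, which is when combining helps.

actuals     = [100, 105,  98, 110, 103]  # observed outcomes (hypothetical)
judgmental  = [104, 100, 103, 104, 108]  # expert point forecasts (hypothetical)
statistical = [ 97, 108,  94, 115,  99]  # e.g. from a smoothing model (hypothetical)

combined = [(j + s) / 2 for j, s in zip(judgmental, statistical)]

def mae(forecasts, outcomes):
    """Mean absolute error of a forecast series."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

for name, series in [("judgmental", judgmental),
                     ("statistical", statistical),
                     ("combined", combined)]:
    print(f"{name:>11} MAE: {mae(series, actuals):.2f}")
```

Because the judgmental and statistical errors here point in opposite directions, the average outperforms both inputs; with positively correlated errors the gain would be smaller.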

52 citations


Journal ArticleDOI
TL;DR: Evidence is presented which suggests that this recomposition technique does not guarantee valid probabilities, and some solutions are proposed that should help ensure that probability judgements of increased validity are available to those attempting to capture subjective assessments for input into decision support systems.
Abstract: Many decision-aiding technologies require valid probability judgements to be elicited from domain experts. But how valid are experts' probability judgements? We describe two approaches to assessing the quality of probability judgement, calibration and coherence, and review the research findings following from these two approaches. In many cases, expert probability judgement has been found to lack validity, and this sub-optimality has largely been attributed to computational errors on the part of the expert. The preferred solution to poor validity in probability judgement has therefore been to reduce the amount of computation performed by the expert: complex probabilities can be calculated mechanically from simple probability judgements elicited from the expert. We present evidence which suggests that this recomposition technique does not guarantee valid probabilities. Our explanation for this finding is that there are various problems with eliciting even the simple probabilities needed for subsequent recomposition. We conclude by proposing some solutions to these elicitation problems, which should help ensure that probability judgements of increased validity are available to those attempting to capture subjective assessments for input into decision support systems.
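To make the two quality criteria and the recomposition technique concrete, here is a minimal Python sketch. All probability values and events are hypothetical, and the checks shown (additivity for coherence, a stated-versus-observed hit rate for calibration, and P(A and B) = P(A) * P(B|A) for recomposition) are standard textbook forms, not the paper's own procedure.

```python
# Illustrative sketch (hypothetical numbers) of coherence, recomposition,
# and calibration checks on elicited probability judgements.

# --- Coherence: judgements must obey the probability laws ----------------
p_a, p_not_a = 0.7, 0.4                       # should sum to 1; here they do not
print("additivity holds:", abs(p_a + p_not_a - 1.0) < 1e-9)   # -> False

# --- Recomposition: build a complex probability from simple ones ---------
# The expert supplies only the simple parts P(A) and P(B | A); the complex
# probability P(A and B) is then calculated mechanically.
p_b_given_a = 0.6
p_a_and_b = p_a * p_b_given_a
print(f"recomposed P(A and B) = {p_a_and_b:.2f}")             # -> 0.42

# --- Calibration: of the events assigned probability p, roughly a
#     proportion p should actually occur ----------------------------------
judgements = [0.9, 0.9, 0.9, 0.9, 0.9]        # expert said "90%" five times
outcomes   = [1, 1, 0, 1, 0]                  # only 3 of the 5 events occurred
hit_rate = sum(outcomes) / len(outcomes)
print(f"stated 0.90 vs observed {hit_rate:.2f}")  # 0.60: poorly calibrated
```

The coherence check is purely internal to the judgements, whereas calibration requires outcome data; the paper's point is that even simple inputs like P(A) and P(B|A) can be problematic to elicit, so recomposition alone does not guarantee validity.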

9 citations