Author

Dennis V. Lindley

Bio: Dennis V. Lindley is an academic researcher. The author has contributed to research in the topics of Bayesian statistics and statistical inference, has an h-index of 7, and has co-authored 8 publications receiving 258 citations.

Papers
Journal ArticleDOI
01 Dec 1995-Test
TL;DR: In this article, the authors investigated the coherence properties of the linear, harmonic and logarithmic opinion pools in the context of opinions about an uncertain event and developed general results on coherence of the joint forecast distribution.
Abstract: An expert (for You) is here defined as someone who shares Your world-view, but knows more than You do, so that were She to reveal Her current opinion to You, You would adopt it as Your own. When You have access to different experts, with differing information, You require a combination formula to aggregate their various opinions. A number of formulae have been suggested, but here we explore the fundamental requirement of coherence to relate such a formula to Your joint distribution for the experts' opinions. In particular, in the context of opinions about an uncertain event A, we investigate coherence properties of the linear, harmonic and logarithmic opinion pools. Some general results on coherence of the joint forecast distribution are also developed.
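The three pools named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's notation: it assumes each expert reports a probability for event A and You assign them non-negative weights summing to one; the function names are mine.

```python
import math

def linear_pool(probs, weights):
    # Weighted arithmetic mean of the experts' probabilities for event A.
    return sum(w * p for w, p in zip(weights, probs))

def harmonic_pool(probs, weights):
    # Weighted harmonic mean of the probabilities.
    return 1.0 / sum(w / p for w, p in zip(weights, probs))

def logarithmic_pool(probs, weights):
    # Weighted geometric mean, renormalized so the pooled probabilities
    # of A and not-A sum to one.
    num = math.prod(p ** w for p, w in zip(probs, weights))
    den = math.prod((1 - p) ** w for p, w in zip(probs, weights))
    return num / (num + den)

experts = [0.6, 0.7, 0.9]   # three experts' probabilities for event A
weights = [0.5, 0.3, 0.2]   # Your weights, summing to one

print(linear_pool(experts, weights))       # approx. 0.69
print(harmonic_pool(experts, weights))
print(logarithmic_pool(experts, weights))
```

The coherence question the paper studies is whether a given formula can agree with Your own joint distribution for the experts' opinions; the formulas above are only the candidate aggregators.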

91 citations

Journal ArticleDOI
TL;DR: Part 2 of Lindley's two-part text Introduction to Probability and Statistics from a Bayesian Viewpoint, covering inference.
Abstract: Introduction to Probability and Statistics from a Bayesian Viewpoint, Part 2: Inference (Cambridge University Press) develops statistical inference from the Bayesian standpoint, building on the probability theory presented in Part 1.

41 citations

Journal ArticleDOI
TL;DR: A multi-author discussion on the foundations of statistical inference, with contributions from Savage, Barnard, Box, Good, Lindley and others, published in the Journal of the American Statistical Association, Vol. 57, No. 298 (Jun., 1962), pp. 307-326.
Abstract: Author(s): L. J. Savage, George Barnard, Jerome Cornfield, Irwin Bross, George E. P. Box, I. J. Good, D. V. Lindley, C. W. Clunies-Ross, John W. Pratt, Howard Levene, Thomas Goldman, A. P. Dempster, Oscar Kempthorne and Allan Birnbaum Source: Journal of the American Statistical Association, Vol. 57, No. 298 (Jun., 1962), pp. 307-326 Published by: Taylor & Francis, Ltd. on behalf of the American Statistical Association Stable URL: http://www.jstor.org/stable/2281641 Accessed: 25-12-2017 20:48 UTC

28 citations

Journal ArticleDOI
01 Dec 1993-Test
TL;DR: This paper synthesizes a number of papers in which the author has participated, all concerning models in which several decision-makers, each modeled as a Bayesian, appear, and reviews the central ideas.
Abstract: This paper synthesizes a number of papers in which the author has participated, all of which concern models in which several decision-makers, each modeled as a Bayesian, appear. The areas covered include amalgamation of opinion, treating expert opinion as data, simultaneous-move games and sequential decision processes. The aim of the paper is to review the central ideas and to explore how they relate to one another.

21 citations


Cited by
Journal ArticleDOI
TL;DR: The BIC provides an approximation to a Bayesian hypothesis test, does not require the specification of priors, and can be easily calculated from SPSS output.
Abstract: In the field of psychology, the practice of p value null-hypothesis testing is as widespread as ever. Despite this popularity, or perhaps because of it, most psychologists are not aware of the statistical peculiarities of the p value procedure. In particular, p values are based on data that were never observed, and these hypothetical data are themselves influenced by subjective intentions. Moreover, p values do not quantify statistical evidence. This article reviews these p value problems and illustrates each problem with concrete examples. The three problems are familiar to statisticians but may be new to psychologists. A practical solution to these p value problems is to adopt a model selection perspective and use the Bayesian information criterion (BIC) for statistical inference (Raftery, 1995). The BIC provides an approximation to a Bayesian hypothesis test, does not require the specification of priors, and can be easily calculated from SPSS output.
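As a rough sketch of the BIC comparison the abstract recommends, the following uses the Gaussian linear-model form BIC = n·ln(RSS/n) + k·ln(n) from Raftery (1995) to compare an intercept-only model against a simple regression. The data are invented purely for illustration.

```python
import math

def bic_linear(n, rss, k):
    # BIC for a Gaussian linear model with k free parameters:
    # n * ln(RSS / n) + k * ln(n)   (Raftery, 1995).
    return n * math.log(rss / n) + k * math.log(n)

# Toy data with an obvious linear trend (hypothetical values).
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 14.1, 15.9]
n = len(x)

# Null model M0: intercept only.
ybar = sum(y) / n
rss0 = sum((yi - ybar) ** 2 for yi in y)

# Alternative M1: simple linear regression fit by least squares.
xbar = sum(x) / n
beta = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
        / sum((xi - xbar) ** 2 for xi in x))
alpha = ybar - beta * xbar
rss1 = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))

# A large positive difference is evidence for M1 over M0.
delta = bic_linear(n, rss0, 1) - bic_linear(n, rss1, 2)
print(delta)
```

The point of the article is that this comparison needs only sample size, residual sums of squares and parameter counts, all of which standard regression output (e.g. from SPSS) provides.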

1,887 citations

Journal ArticleDOI
TL;DR: This work demonstrates by experimental evidence that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks.
Abstract: Social groups can be remarkably smart and knowledgeable when their averaged judgements are compared with the judgements of individuals. Already Galton [Galton F (1907) Nature 75:7] found evidence that the median estimate of a group can be more accurate than estimates of experts. This wisdom of crowd effect was recently supported by examples from stock markets, political elections, and quiz shows [Surowiecki J (2004) The Wisdom of Crowds]. In contrast, we demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks. In the experiment, subjects could reconsider their response to factual questions after having received average or full information of the responses of other subjects. We compare subjects’ convergence of estimates and improvements in accuracy over five consecutive estimation periods with a control condition, in which no information about others’ responses was provided. Although groups are initially “wise,” knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines the wisdom of crowd effect in three different ways. The “social influence effect” diminishes the diversity of the crowd without improvements of its collective error. The “range reduction effect” moves the position of the truth to peripheral regions of the range of estimates so that the crowd becomes less reliable in providing expertise for external observers. The “confidence effect” boosts individuals’ confidence after convergence of their estimates despite lack of improved accuracy. Examples of the revealed mechanism range from misled elites to the recent global financial crisis.
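The mechanism can be mimicked in a toy simulation. This is not the authors' experimental design, and every parameter below is an invented assumption; it only illustrates how repeated exposure to the group average collapses the diversity of estimates without moving the group toward the truth.

```python
import math
import random
import statistics

random.seed(42)
truth = 100.0

# 144 independent, noisy estimates of the true value, with log-normal
# noise so the sample median sits near the truth (illustrative choice).
estimates = [truth * math.exp(random.gauss(0.0, 0.5)) for _ in range(144)]

initial_median = statistics.median(estimates)
initial_spread = statistics.pstdev(estimates)

# Social-influence condition: for five rounds, every subject moves
# halfway toward the current group mean of all estimates.
for _ in range(5):
    group_mean = statistics.mean(estimates)
    estimates = [e + 0.5 * (group_mean - e) for e in estimates]

final_spread = statistics.pstdev(estimates)
print(initial_median, initial_spread, final_spread)
```

Each round shrinks every deviation from the group mean by half, so the spread falls by a factor of 2^5 while the group mean itself never changes: the crowd becomes confident and narrow without becoming more accurate.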

839 citations

Book
28 Aug 2008
TL;DR: Techniques covered range from traditional multivariate methods, such as multiple regression, principal components, canonical variates, linear discriminant analysis, factor analysis, clustering, multidimensional scaling, and correspondence analysis, to the newer methods of density estimation, projection pursuit, neural networks, and classification and regression trees.
Abstract: Remarkable advances in computation and data storage and the ready availability of huge data sets have been the keys to the growth of the new disciplines of data mining and machine learning, while the enormous success of the Human Genome Project has opened up the field of bioinformatics. These exciting developments, which led to the introduction of many innovative statistical tools for high-dimensional data analysis, are described here in detail. The author takes a broad perspective; for the first time in a book on multivariate analysis, nonlinear methods are discussed in detail as well as linear methods. Techniques covered range from traditional multivariate methods, such as multiple regression, principal components, canonical variates, linear discriminant analysis, factor analysis, clustering, multidimensional scaling, and correspondence analysis, to the newer methods of density estimation, projection pursuit, neural networks, multivariate reduced-rank regression, nonlinear manifold learning, bagging, boosting, random forests, independent component analysis, support vector machines, and classification and regression trees. Another unique feature of this book is the discussion of database management systems. This book is appropriate for advanced undergraduate students, graduate students, and researchers in statistics, computer science, artificial intelligence, psychology, cognitive sciences, business, medicine, bioinformatics, and engineering. Familiarity with multivariable calculus, linear algebra, and probability and statistics is required. The book presents a carefully-integrated mixture of theory and applications, and of classical and modern multivariate statistical techniques, including Bayesian methods. There are over 60 interesting data sets used as examples in the book, over 200 exercises, and many color illustrations and photographs.

698 citations

Journal ArticleDOI
TL;DR: A method of evaluation of ordered categorical responses is presented, where the probability of response in a given category follows a normal integral with an argument dependent on fixed thresholds and random variables sampled from a conceptual distribution with known first and second moments.
Abstract: A method of evaluation of ordered categorical responses is presented. The probability of response in a given category follows a normal integral with an argument dependent on fixed thresholds and random variables sampled from a conceptual distribution with known first and second moments, a priori. The prior distribution and the likelihood function are combined to yield the posterior density from which inferences are made. The mode of the posterior distribution is taken as an estimator of location. Finding this mode entails solving a non-linear system; estimation equations are presented. Relationships of the procedure to "generalized linear models" and "normal scores" are discussed. A numerical example involving sire evaluation for calving ease is used to illustrate the method.
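The category probabilities the abstract describes, a normal integral between fixed thresholds, can be sketched as below. The thresholds and linear predictor are hypothetical values chosen for illustration, and the normal CDF is computed from the error function.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def category_probs(eta, thresholds):
    # P(response in category j) = Phi(t_j - eta) - Phi(t_{j-1} - eta),
    # with t_0 = -infinity and t_J = +infinity, so J+1 thresholds give
    # len(thresholds) + 1 ordered categories.
    cuts = [-math.inf] + list(thresholds) + [math.inf]
    return [norm_cdf(hi - eta) - norm_cdf(lo - eta)
            for lo, hi in zip(cuts, cuts[1:])]

# Hypothetical sire evaluation: three calving-ease categories with
# thresholds at 0.0 and 1.2 and a linear predictor eta = 0.4.
probs = category_probs(0.4, [0.0, 1.2])
print(probs)  # three probabilities summing to one
```

Shifting eta moves probability mass across the fixed thresholds, which is how the sire effects in the worked example change the distribution over calving-ease categories.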

605 citations

Journal ArticleDOI
TL;DR: A new method for selecting a common subset of explanatory variables where the aim is to model several response variables based on the (joint) residual sum of squares while constraining the parameter estimates to lie within a suitable polyhedral region is proposed.
Abstract: We propose a new method for selecting a common subset of explanatory variables where the aim is to model several response variables. The idea is a natural extension of the LASSO technique proposed by Tibshirani (1996) and is based on the (joint) residual sum of squares while constraining the parameter estimates to lie within a suitable polyhedral region. The properties of the resulting convex programming problem are analyzed for the special case of an orthonormal design. For the general case, we develop an efficient interior point algorithm. The method is illustrated on a dataset with infrared spectrometry measurements on 14 qualitatively different but correlated responses using 770 wavelengths. The aim is to select a subset of the wavelengths suitable for use as predictors for as many of the responses as possible.
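The single-response LASSO of Tibshirani (1996), which this proposal extends to several responses, can be sketched via coordinate descent with soft-thresholding. This is a standard solver for the LASSO criterion, not the interior-point algorithm developed in the paper, and the design matrix and penalty below are illustrative.

```python
import math

def soft_threshold(z, g):
    # Shrink z toward zero by g; exactly zero when |z| <= g.
    return math.copysign(max(abs(z) - g, 0.0), z)

def lasso(X, y, lam, iters=100):
    # Coordinate descent for min (1/2n) * ||y - X b||^2 + lam * ||b||_1,
    # assuming each column of X has mean square one.
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual with coordinate j left out.
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            z = sum(X[i][j] * r[i] for i in range(n)) / n
            b[j] = soft_threshold(z, lam)
    return b

# Toy design: orthogonal +/-1 columns; only the first predictor matters.
X = [[1, 1], [-1, 1], [1, -1], [-1, -1]]
y = [2.0, -2.0, 2.0, -2.0]
print(lasso(X, y, lam=0.1))  # first coefficient near 1.9, second exactly 0
```

The L1 penalty is what sets irrelevant coefficients exactly to zero; the multi-response method in the paper achieves an analogous effect for whole rows of the coefficient matrix by constraining the estimates to a polyhedral region.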

454 citations