Book

Applied nonparametric statistics

01 Jan 1978
Citations
Journal ArticleDOI
TL;DR: In this article, a menu of paired lottery choices is structured so that the crossover point to the high-risk lottery can be used to infer the degree of risk aversion, and a hybrid "power/expo" utility function with increasing relative and decreasing absolute risk aversion is presented.
Abstract: A menu of paired lottery choices is structured so that the crossover point to the high-risk lottery can be used to infer the degree of risk aversion. With normal laboratory payoffs of several dollars, most subjects are risk averse and few are risk loving. Scaling up all payoffs by factors of twenty, fifty, and ninety makes little difference when the high payoffs are hypothetical. In contrast, subjects become sharply more risk averse when the high payoffs are actually paid in cash. A hybrid "power/expo" utility function with increasing relative and decreasing absolute risk aversion nicely replicates the data patterns over this range of payoffs from several dollars to several hundred dollars.
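The crossover logic described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's procedure: it assumes a hypothetical ten-row menu in which the probability of the high payoff rises in steps of 0.1, and scores each row under plain CRRA utility (the paper's hybrid "power/expo" function is more elaborate). The payoff values are illustrative.

```python
import math

def crra_utility(x, r):
    # Constant relative risk aversion utility; r = 1 reduces to log.
    return math.log(x) if r == 1 else x ** (1 - r) / (1 - r)

def expected_utility(lottery, r):
    # lottery: list of (probability, payoff) pairs
    return sum(p * crra_utility(x, r) for p, x in lottery)

def crossover_row(menu, r):
    # First row where the risky option's expected utility exceeds
    # the safe option's; more risk-averse agents cross over later.
    for i, (safe, risky) in enumerate(menu):
        if expected_utility(risky, r) > expected_utility(safe, r):
            return i
    return None

# Hypothetical menu: the high-payoff probability p rises from 0.1 to 1.0;
# each row pairs a "safe" lottery with a higher-variance "risky" one.
menu = [
    ([(p, 2.00), (1 - p, 1.60)], [(p, 3.85), (1 - p, 0.10)])
    for p in [k / 10 for k in range(1, 11)]
]
```

A risk-neutral agent (r = 0) switches as soon as the risky option's expected value is higher; raising r pushes the crossover to later rows, which is how an observed switch point maps to an inferred risk-aversion interval.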

4,687 citations

Journal ArticleDOI
TL;DR: The basics are discussed and a survey of a complete set of nonparametric procedures developed to perform both pairwise and multiple comparisons, for multi-problem analysis are given.
Abstract: The interest in nonparametric statistical analysis has grown recently in the field of computational intelligence. In many experimental studies, the lack of the properties required for a proper application of parametric procedures (independence, normality, and homoscedasticity) leaves to nonparametric procedures the task of performing a rigorous comparison among algorithms. In this paper, we discuss the basics and give a survey of a complete set of nonparametric procedures developed to perform both pairwise and multiple comparisons for multi-problem analysis. The test problems of the CEC'2005 special session on real-parameter optimization illustrate the use of the tests throughout this tutorial, analyzing the results of a set of well-known evolutionary and swarm intelligence algorithms. The tutorial concludes with a compilation of considerations and recommendations to guide practitioners when using these tests to contrast their experimental results.
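The mechanics of such a multi-problem comparison can be sketched with the Friedman test, one of the standard procedures in this family: each problem ranks the k algorithms, and the statistic measures how far the mean ranks deviate from the all-equal case. The data below are illustrative, and a full analysis would add a p-value and post-hoc procedures.

```python
from statistics import mean

def average_ranks(scores):
    # Rank one problem's scores (lower = better), averaging tied ranks.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks

def friedman_statistic(results):
    # results[problem][algorithm] = error measure (lower is better).
    n, k = len(results), len(results[0])
    rank_rows = [average_ranks(row) for row in results]
    mean_rank = [mean(r[j] for r in rank_rows) for j in range(k)]
    # Friedman chi-square statistic for k algorithms on n problems.
    stat = 12 * n / (k * (k + 1)) * (
        sum(R * R for R in mean_rank) - k * (k + 1) ** 2 / 4
    )
    return stat, mean_rank

# Illustrative errors of three algorithms (columns) on four problems (rows).
results = [
    [0.10, 0.20, 0.30],
    [0.05, 0.25, 0.20],
    [0.12, 0.18, 0.40],
    [0.08, 0.30, 0.22],
]
stat, mean_rank = friedman_statistic(results)
```

The statistic is zero when every algorithm earns the same mean rank and grows as one algorithm consistently dominates, which is what the survey's pairwise and multiple-comparison procedures then test formally.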

3,832 citations

Book
01 Jul 2002
TL;DR: In this article, a review is presented of the book "Heuristics and Biases: The Psychology of Intuitive Judgment, edited by Thomas Gilovich, Dale Griffin, and Daniel Kahneman".
Abstract: A review is presented of the book “Heuristics and Biases: The Psychology of Intuitive Judgment,” edited by Thomas Gilovich, Dale Griffin, and Daniel Kahneman.

3,642 citations

01 Jan 1998
TL;DR: This thesis addresses the problem of feature selection for machine learning through a correlation based approach with CFS (Correlation based Feature Selection), an algorithm that couples this evaluation formula with an appropriate correlation measure and a heuristic search strategy.
Abstract: A central problem in machine learning is identifying a representative set of features from which to construct a classification model for a particular task. This thesis addresses the problem of feature selection for machine learning through a correlation based approach. The central hypothesis is that good feature sets contain features that are highly correlated with the class, yet uncorrelated with each other. A feature evaluation formula, based on ideas from test theory, provides an operational definition of this hypothesis. CFS (Correlation based Feature Selection) is an algorithm that couples this evaluation formula with an appropriate correlation measure and a heuristic search strategy. CFS was evaluated by experiments on artificial and natural datasets. Three machine learning algorithms were used: C4.5 (a decision tree learner), IB1 (an instance based learner), and naive Bayes. Experiments on artificial datasets showed that CFS quickly identifies and screens irrelevant, redundant, and noisy features, and identifies relevant features as long as their relevance does not strongly depend on other features. On natural domains, CFS typically eliminated well over half the features. In most cases, classification accuracy using the reduced feature set equaled or bettered accuracy using the complete feature set. Feature selection degraded machine learning performance in cases where some features were eliminated which were highly predictive of very small areas of the instance space. Further experiments compared CFS with a wrapper—a well known approach to feature selection that employs the target learning algorithm to evaluate feature sets. In many cases CFS gave comparable results to the wrapper, and in general, outperformed the wrapper on small datasets. CFS executes many times faster than the wrapper, which allows it to scale to larger datasets. Two methods of extending CFS to handle feature interaction are presented and experimentally evaluated. 
The first considers pairs of features and the second incorporates feature weights calculated by the RELIEF algorithm. Experiments on artificial domains showed that both methods were able to identify interacting features. On natural domains, the pairwise method gave more reliable results than using weights provided by RELIEF.
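The evaluation formula at the heart of CFS is a "merit" heuristic that rewards feature-class correlation and penalizes feature-feature redundancy. A minimal sketch, assuming the mean correlations have already been computed (the thesis pairs this formula with its own correlation measures and a heuristic search; the inputs here are illustrative):

```python
from math import sqrt

def cfs_merit(k, r_cf, r_ff):
    # k:    number of features in the candidate subset
    # r_cf: mean feature-class correlation within the subset
    # r_ff: mean feature-feature correlation within the subset
    # Merit grows with class relevance and shrinks with redundancy.
    return k * r_cf / sqrt(k + k * (k - 1) * r_ff)

# Same relevance, different redundancy: the less redundant subset wins.
low_redundancy = cfs_merit(5, 0.8, 0.2)
high_redundancy = cfs_merit(5, 0.8, 0.9)
```

A heuristic search such as best-first would score candidate subsets with this function and keep the highest-merit one, which is how CFS can screen redundant features without running the target learner.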

3,533 citations

Journal ArticleDOI
TL;DR: Detailed principles are given for the design choices involved in selecting appropriate experts for a Delphi study, along with suggestions for theoretical applications.

3,510 citations