Journal ArticleDOI

Rank analysis of incomplete block designs: the method of paired comparisons

01 Dec 1952-Biometrika (Oxford University Press)-Vol. 39, pp 324-345
About: This article is published in Biometrika. The article was published on 1952-12-01 and has received 1,863 citations to date.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors discuss a strategy for polychotomous classification that involves estimating class probabilities for each pair of classes, and then coupling the estimates together, similar to the Bradley-Terry method for paired comparisons.
Abstract: We discuss a strategy for polychotomous classification that involves estimating class probabilities for each pair of classes, and then coupling the estimates together. The coupling model is similar to the Bradley-Terry method for paired comparisons. We study the nature of the class probability estimates that arise, and examine the performance of the procedure in real and simulated data sets. Classifiers used include linear discriminants, nearest neighbors, adaptive nonlinear methods and the support vector machine.

1,569 citations
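The coupling step described in the abstract above can be sketched as a small iterative procedure. This is a simplified version of the Hastie–Tibshirani pairwise-coupling update, assuming a single pairwise estimate per class pair with equal weights; the function name is illustrative, not from the paper's code.

```python
import numpy as np

def couple_pairwise(r, n_iter=200, tol=1e-10):
    """Couple pairwise class-probability estimates into one probability
    vector.  r[i, j] estimates P(class i | class is i or j), so
    r[i, j] + r[j, i] = 1 off the diagonal.  The multiplicative update
    matches the model's pairwise probabilities to the observed ones."""
    r = np.asarray(r, dtype=float).copy()
    np.fill_diagonal(r, 0.5)
    k = r.shape[0]
    p = np.full(k, 1.0 / k)                # start from uniform
    for _ in range(n_iter):
        mu = p[:, None] / (p[:, None] + p[None, :])  # fitted pairwise probs
        np.fill_diagonal(mu, 0.5)
        # subtract 0.5 to drop the diagonal terms from both sums
        p_new = p * (r.sum(axis=1) - 0.5) / (mu.sum(axis=1) - 0.5)
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p
```

When the pairwise estimates are exactly consistent with some probability vector, that vector is a fixed point of the update, so the iteration recovers it.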

Journal ArticleDOI
TL;DR: Bayes factors have been advocated as superior to p-values for assessing statistical evidence in data; this paper discusses their use, with applications including the power law of skill acquisition.

1,369 citations
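As background, a common back-of-the-envelope route to a Bayes factor from fitted models is the BIC approximation BF10 ≈ exp((BIC0 − BIC1)/2). The sketch below applies it to testing whether a Gaussian mean is zero; this is a generic illustration of the approximation, not the specific Bayes factor developed in the cited paper, and all names are illustrative.

```python
import numpy as np

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: -2 log L + k log n."""
    return -2.0 * loglik + n_params * np.log(n_obs)

def gaussian_loglik(x, mu):
    """Log-likelihood of x under N(mu, sigma^2), with sigma^2 set to
    its maximum-likelihood value for that mean."""
    resid = x - mu
    sigma2 = np.mean(resid ** 2)
    n = len(x)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

rng = np.random.default_rng(0)
x = rng.normal(loc=0.8, scale=1.0, size=200)   # data with a real effect

# H0: mu fixed at 0 (1 free parameter: sigma); H1: mu free (2 parameters)
ll0 = gaussian_loglik(x, 0.0)
ll1 = gaussian_loglik(x, x.mean())
bf10 = np.exp((bic(ll0, 1, len(x)) - bic(ll1, 2, len(x))) / 2.0)
```

With a genuine effect of this size, bf10 is very large, i.e. strong evidence for the alternative.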


Cites background from "Rank analysis of incomplete block d..."

  • ...Some psychological process models are members of the generalized linear model class, including Bradley-Terry-Luce scaling models (Bradley & Terry, 1952; Luce, 1959) and a wide class of signal-detection models (DeCarlo, 1998)....

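The quoted point, that Bradley-Terry-Luce scaling is a generalized linear model, amounts to the logistic link P(i beats j) = sigmoid(beta_i - beta_j). A minimal sketch, with illustrative function names:

```python
import numpy as np

def bt_prob(beta, i, j):
    """Bradley-Terry win probability with log-strengths beta:
    P(i beats j) = exp(beta_i) / (exp(beta_i) + exp(beta_j))
                 = sigmoid(beta_i - beta_j)."""
    return 1.0 / (1.0 + np.exp(-(beta[i] - beta[j])))

def bt_design_row(n_items, i, j):
    """Design row for the comparison 'i beats j': +1 for i, -1 for j.
    Under a logistic link, fitting beta from such rows is ordinary
    logistic regression, i.e. a GLM."""
    x = np.zeros(n_items)
    x[i], x[j] = 1.0, -1.0
    return x
```

For example, an item with twice the strength of its opponent (beta_i = log 2, beta_j = 0) wins with probability 2/3, and the design-row formulation gives the same value through the logistic link.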

Journal ArticleDOI
TL;DR: Because optimization transfer algorithms often exhibit the slow convergence of EM algorithms, two methods of accelerating optimization transfer are discussed and evaluated in the context of specific problems.
Abstract: The well-known EM algorithm is an optimization transfer algorithm that depends on the notion of incomplete or missing data. By invoking convexity arguments, one can construct a variety of other optimization transfer algorithms that do not involve missing data. These algorithms all rely on a majorizing or minorizing function that serves as a surrogate for the objective function. Optimizing the surrogate function drives the objective function in the correct direction. This article illustrates this general principle by a number of specific examples drawn from the statistical literature. Because optimization transfer algorithms often exhibit the slow convergence of EM algorithms, two methods of accelerating optimization transfer are discussed and evaluated in the context of specific problems.

833 citations


Cites methods from "Rank analysis of incomplete block d..."

  • ...In the sports version of the Bradley and Terry model (Bradley and Terry 1952; Keener 1993), each team i in a league of teams is assigned a rank parameter θi > 0....

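The optimization-transfer machinery of the cited paper applies directly to this model: the classic Bradley-Terry fitting update is a minorize-maximize (MM) iteration, each step of which drives the likelihood upward. A sketch assuming a wins matrix with wins[i, j] = number of times i beat j; the function name is illustrative.

```python
import numpy as np

def bt_mm(wins, n_iter=1000, tol=1e-10):
    """Fit Bradley-Terry strengths by the classic MM update
    (the Zermelo/Ford iteration, analyzed as optimization transfer
    by Hunter): pi_i <- w_i / sum_{j != i} n_ij / (pi_i + pi_j)."""
    k = wins.shape[0]
    n = wins + wins.T                 # games played between each pair
    w = wins.sum(axis=1)              # total wins per item
    pi = np.ones(k)
    for _ in range(n_iter):
        denom = n / (pi[:, None] + pi[None, :])
        np.fill_diagonal(denom, 0.0)
        pi_new = w / denom.sum(axis=1)
        pi_new /= pi_new.sum()        # fix the scale (strengths sum to 1)
        if np.max(np.abs(pi_new - pi)) < tol:
            return pi_new
        pi = pi_new
    return pi
```

If the win counts exactly match the expected counts under some strength vector, that vector solves the likelihood equations, so the MM iteration recovers it.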

Journal ArticleDOI
TL;DR: This work proposes a suitable extension of label ranking that incorporates the calibrated scenario and substantially extends the expressive power of existing approaches, and suggests a conceptually novel technique for extending the common learning-by-pairwise-comparison approach to the multilabel scenario, a setting previously not amenable to the pairwise decomposition technique.
Abstract: Label ranking studies the problem of learning a mapping from instances to rankings over a predefined set of labels. Hitherto existing approaches to label ranking implicitly operate on an underlying (utility) scale which is not calibrated in the sense that it lacks a natural zero point. We propose a suitable extension of label ranking that incorporates the calibrated scenario and substantially extends the expressive power of these approaches. In particular, our extension suggests a conceptually novel technique for extending the common learning by pairwise comparison approach to the multilabel scenario, a setting previously not amenable to the pairwise decomposition technique. The key idea of the approach is to introduce an artificial calibration label that, in each example, separates the relevant from the irrelevant labels. We show that this technique can be viewed as a combination of pairwise preference learning and the conventional relevance classification technique, where a separate classifier is trained to predict whether a label is relevant or not. Empirical results in the area of text categorization, image classification and gene analysis underscore the merits of the calibrated model in comparison to state-of-the-art multilabel learning methods.

825 citations
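The calibration-label idea can be illustrated in a few lines: given utility scores for the real labels and a score for the artificial calibration label, labels ranked above the calibration point form the predicted relevant set. All names below are illustrative, not from the paper's implementation.

```python
def calibrated_label_ranking(scores, calibration_score):
    """Split a label ranking into relevant/irrelevant parts using an
    artificial calibration label.  `scores` maps label -> utility score;
    labels scoring above `calibration_score` are predicted relevant."""
    ranking = sorted(scores, key=scores.get, reverse=True)
    relevant = [lab for lab in ranking if scores[lab] > calibration_score]
    irrelevant = [lab for lab in ranking if scores[lab] <= calibration_score]
    return ranking, relevant, irrelevant
```

This recovers both outputs of the calibrated model at once: a full ranking of the labels and a bipartition into relevant and irrelevant sets.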


Cites background from "Rank analysis of incomplete block d..."

  • ...…(Knerr et al. 1990, 1992; Price et al. 1995; Lu and Ito 1999), support vector machines (Schmidt and Gish 1996; Hastie and Tibshirani 1998; Kreßel 1999; Hsu and Lin 2002), statistical learning (Bradley and Terry 1952; Friedman 1996), rule and decision tree learning (Fürnkranz 2002, 2003) and others....


Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this paper, the authors argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results.
Abstract: Deep learning techniques have been successfully applied in many areas of computer vision, including low-level image restoration problems. For image super-resolution, several models based on deep neural networks have been recently proposed and attained superior performance that overshadows all previous handcrafted models. The question then arises whether large-capacity and data-driven models have become the dominant solution to the ill-posed super-resolution problem. In this paper, we argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results. We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end. The interpretation of the network based on sparse coding leads to much more efficient and effective training, as well as a reduced model size. Our model is evaluated on a wide range of images, and shows clear advantage over existing state-of-the-art methods in terms of both restoration accuracy and human subjective quality.

697 citations
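The "sparse coding as a network" connection rests on the fact that each ISTA iteration for the LASSO objective is an affine map followed by a fixed nonlinearity (soft-thresholding), so a truncated run can be unfolded into feed-forward layers, the idea behind LISTA-style models like the one in this paper. A minimal numpy sketch of plain ISTA, with illustrative names:

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise shrinkage: the 'activation function' of each layer."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(D, y, lam, n_iter=100):
    """Solve min_z 0.5*||y - D z||^2 + lam*||z||_1 by ISTA.
    Each iteration is a linear step (gradient on the quadratic term)
    followed by soft-thresholding, which is what makes a fixed number
    of iterations unfoldable into a feed-forward network."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + D.T @ (y - D @ z) / L, lam / L)
    return z
```

With an orthonormal dictionary the solution is available in closed form (soft-thresholding of the coefficients), which gives an easy correctness check.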