Topic

Pairwise comparison

About: Pairwise comparison is a research topic. Over its lifetime, 6,804 publications have been published within this topic, receiving 174,081 citations.


Papers
Journal Article (DOI)
TL;DR: The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales that measure intangibles in relative terms.
Abstract: Decisions involve many intangibles that need to be traded off. To do that, they have to be measured alongside tangibles whose measurements must also be evaluated as to how well they serve the objectives of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales. It is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements that represents how much more one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible, to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding over all such nodes. An illustration is included.
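As a rough illustration of the prioritisation and consistency check described in this abstract, here is a minimal Python sketch (not the paper's own code). The 3x3 comparison matrix and its criteria are hypothetical; the random-index values are the ones conventionally used with AHP.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale:
# A[i, j] = how much more criterion i matters than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
n = A.shape[0]

# The principal eigenvector of A gives the priority scale (normalised to sum to 1).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index and consistency ratio (RI = random index for matrices of size n).
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)
ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
cr = ci / ri if ri > 0 else 0.0

print("priorities:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))  # CR <= 0.10 is usually deemed acceptable
```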

6,787 citations

Journal Article
TL;DR: The Analytic Hierarchy Process (AHP), as discussed by the authors, is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales; it is these scales that measure intangibles in relative terms.

5,663 citations

Journal Article (DOI)
TL;DR: In this article, a new method called the best-worst method (BWM) is proposed to solve multi-criteria decision-making (MCDM) problems, in which a number of alternatives are evaluated with respect to different criteria in order to select the best alternative(s).
Abstract: In this paper, a new method, called the best-worst method (BWM), is proposed to solve multi-criteria decision-making (MCDM) problems. In an MCDM problem, a number of alternatives are evaluated with respect to a number of criteria in order to select the best alternative(s). According to BWM, the best (e.g. most desirable, most important) and the worst (e.g. least desirable, least important) criteria are identified first by the decision-maker. Pairwise comparisons are then conducted between each of these two criteria (best and worst) and the other criteria. A maximin problem is then formulated and solved to determine the weights of the different criteria. The weights of the alternatives with respect to different criteria are obtained using the same process. The final scores of the alternatives are derived by aggregating the weights from different sets of criteria and alternatives, based on which the best alternative is selected. A consistency ratio is proposed for the BWM to check the reliability of the comparisons. To illustrate the proposed method and evaluate its performance, we used some numerical examples and a real-world decision-making problem (mobile phone selection). For the purpose of comparison, we chose AHP (analytic hierarchy process), which is also a pairwise comparison-based method. Statistical results show that BWM performs significantly better than AHP with respect to the consistency ratio and the other evaluation criteria: minimum violation, total deviation, and conformity. The salient features of the proposed method, compared to the existing MCDM methods, are: (1) it requires less comparison data; and (2) it leads to more consistent comparisons, which means that it produces more reliable results.
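The abstract describes a maximin formulation; the sketch below instead uses the later linearised variant of BWM (a small linear programme), purely to illustrate the mechanics of deriving weights from best-to-others and others-to-worst comparisons. The comparison vectors are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical comparison vectors on a 1-9 scale for four criteria.
a_best = np.array([1, 2, 4, 8])   # preference of the best criterion over each criterion
a_worst = np.array([8, 4, 2, 1])  # preference of each criterion over the worst criterion
n = len(a_best)
best, worst = int(np.argmin(a_best)), int(np.argmax(a_best))

# Variables: [w_1, ..., w_n, xi]; minimise xi subject to
#   |w_best - a_best[j] * w_j| <= xi   and   |w_j - a_worst[j] * w_worst| <= xi.
c = np.zeros(n + 1)
c[-1] = 1.0

def abs_constraints(expr):
    """Encode |expr . w| <= xi as two rows: expr.w - xi <= 0 and -expr.w - xi <= 0."""
    return [np.append(expr, -1.0), np.append(-expr, -1.0)]

A_ub, b_ub = [], []
for j in range(n):
    e1 = np.zeros(n); e1[best] += 1.0; e1[j] -= a_best[j]
    e2 = np.zeros(n); e2[j] += 1.0; e2[worst] -= a_worst[j]
    for row in abs_constraints(e1) + abs_constraints(e2):
        A_ub.append(row); b_ub.append(0.0)

A_eq = [np.append(np.ones(n), 0.0)]  # weights sum to 1
b_eq = [1.0]
bounds = [(0, None)] * (n + 1)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
weights, xi = res.x[:n], res.x[-1]
print("weights:", np.round(weights, 3))
print("consistency indicator xi*:", round(xi, 4))  # smaller means more consistent comparisons
```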

2,214 citations

Journal Article (DOI)
TL;DR: This work extends the definition of the area under the ROC curve to the case of more than two classes by averaging pairwise comparisons and proposes an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case.
Abstract: The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.
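A minimal sketch of the pairwise-averaging idea described above (a Hand-and-Till-style M measure), assuming per-class probability scores. This is an illustration rather than the authors' code, and the toy labels and probabilities are made up.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import roc_auc_score

def pairwise_average_auc(y_true, scores):
    """Average the two-class AUC over all unordered class pairs.

    y_true: class labels, shape (n_samples,)
    scores: per-class scores or probabilities, shape (n_samples, n_classes)
    """
    classes = np.unique(y_true)
    aucs = []
    for i, j in combinations(range(len(classes)), 2):
        mask = np.isin(y_true, [classes[i], classes[j]])
        yi = (y_true[mask] == classes[i]).astype(int)
        # Separability of class i from class j using the class-i score,
        # and of class j from class i using the class-j score; average the two.
        a_ij = roc_auc_score(yi, scores[mask, i])
        a_ji = roc_auc_score(1 - yi, scores[mask, j])
        aucs.append((a_ij + a_ji) / 2)
    return float(np.mean(aucs))

# Tiny illustrative example with three classes and made-up probabilities.
y = np.array([0, 0, 1, 1, 2, 2])
p = np.array([
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.5, 0.2],
    [0.1, 0.2, 0.7],
    [0.2, 0.3, 0.5],
])
print(round(pairwise_average_auc(y, p), 3))
```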

2,044 citations

Proceedings Article (DOI)
20 Jun 2007
TL;DR: The paper proposes that learning to rank should adopt the listwise approach, in which lists of objects are used as 'instances' in learning, and introduces two probability models, referred to as permutation probability and top-k probability, to define a listwise loss function for learning.
Abstract: The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on lists of objects. The paper postulates that learning to rank should adopt the listwise approach, in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. Specifically, it introduces two probability models, referred to as permutation probability and top-k probability, to define a listwise loss function for learning. A neural network and gradient descent are then employed as the model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach.
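For illustration, a small sketch of a top-one-probability listwise loss with a linear scoring function and gradient descent, in the spirit of the method described above (not the authors' implementation); the features and relevance labels are invented.

```python
import torch

# Toy query: 5 documents with 4 hypothetical features each, plus graded relevance labels.
features = torch.randn(5, 4)
relevance = torch.tensor([3.0, 1.0, 0.0, 2.0, 0.0])

# Linear scoring function (a stand-in for the neural network used in the paper).
model = torch.nn.Linear(4, 1, bias=False)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def listnet_top1_loss(pred_scores, true_scores):
    """Cross entropy between the top-one probability distributions induced by
    the true scores and the predicted scores (softmax over the whole list)."""
    p_true = torch.softmax(true_scores, dim=0)
    log_p_pred = torch.log_softmax(pred_scores, dim=0)
    return -(p_true * log_p_pred).sum()

for step in range(100):
    opt.zero_grad()
    scores = model(features).squeeze(-1)
    loss = listnet_top1_loss(scores, relevance)
    loss.backward()
    opt.step()

# Higher predicted scores should now tend to align with higher relevance labels.
print(model(features).squeeze(-1).detach())
```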

2,003 citations


Network Information
Related Topics (5)
Markov chain: 51.9K papers, 1.3M citations, 81% related
Cluster analysis: 146.5K papers, 2.9M citations, 76% related
Deep learning: 79.8K papers, 2.1M citations, 75% related
Optimization problem: 96.4K papers, 2.1M citations, 74% related
Robustness (computer science): 94.7K papers, 1.6M citations, 74% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    1,305
2022    2,607
2021    581
2020    554
2019    520