
Showing papers on "Pairwise comparison published in 1994"


Journal ArticleDOI
TL;DR: In this paper, a number of generalizations of the expected utility preference functional are estimated using experimentally generated data involving 100 pairwise choice questions repeated on two separate occasions and likelihood ratio tests are conducted to investigate the statistical superiority of the various generalizations.
Abstract: A number of generalizations of the expected utility preference functional are estimated using experimentally generated data involving 100 pairwise choice questions repeated on two separate occasions. Likelihood ratio tests are conducted to investigate the statistical superiority of the various generalizations and the Akaike information criterion is used to distinguish between them. The economic superiority of the various generalizations is also explored and the paper concludes that, for many subjects, the superiority of several of the generalizations is not established. Copyright 1994 by The Econometric Society.
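Below is a minimal sketch of the model-comparison machinery described above: a likelihood ratio test between nested preference functionals and an AIC comparison. The log-likelihoods and parameter counts are hypothetical, purely for illustration.

```python
# Hedged sketch: comparing a restricted model (e.g. expected utility) against a
# nested generalization via a likelihood ratio test and the Akaike information
# criterion. The fitted log-likelihoods and parameter counts are invented.
from scipy.stats import chi2

def lr_test(loglik_restricted, loglik_general, extra_params):
    """Likelihood ratio test for nested models."""
    stat = 2.0 * (loglik_general - loglik_restricted)
    p_value = chi2.sf(stat, df=extra_params)
    return stat, p_value

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

# Hypothetical fits for one subject's 100 pairwise choices.
ll_eu, k_eu = -62.4, 2          # expected utility (restricted)
ll_gen, k_gen = -58.9, 4        # a generalization with two extra parameters

stat, p = lr_test(ll_eu, ll_gen, k_gen - k_eu)
print(f"LR statistic = {stat:.2f}, p = {p:.3f}")
print(f"AIC (EU) = {aic(ll_eu, k_eu):.1f}, AIC (generalization) = {aic(ll_gen, k_gen):.1f}")
```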

1,129 citations


Journal ArticleDOI
TL;DR: The scaling approach is a statistical estimation method that allows for differences in the amount of unexplained variation across different types of data, which can then be used together in analysis; as discussed by the authors, it has been tested and recommended in the context of combining Stated Preference and Revealed Preference data.
Abstract: The scaling approach is a statistical estimation method which allows for differences in the amount of unexplained variation in different types of data which can then be used together in analysis. In recent years, this approach has been tested and recommended in the context of combining Stated Preference and Revealed Preference data. The paper provides a description of the approach and a historical overview. The scaling approach can also be used to identify systematic differences in the variance of choices within a single Stated Preference data set due to the way in which the hypothetical choice situations are presented or the responses are obtained. The paper presents the results of two case studies — one looking at rank order effect and the other at fatigue effect. Scale effects appear to exist in both cases: the amount of unexplained variance is shown to increase as rankings become lower, and as the number of pairwise choices completed becomes greater. The implications of these findings for the use of SP ranking tasks and repeated pairwise choice tasks are discussed.

258 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that a simple nonmanipulability requirement is sufficient to characterize the functional form for regret theory with general choice sets, and a number of special cases are derived in which regret theory is equivalent to other well-known theories of choice under uncertainty.
Abstract: The regret theory of choice under uncertainty proposed by Loomes and Sugden has performed well in explaining and predicting violations of Expected Utility theory. The original version of the model was confined to pairwise choices, which limited its usefulness as an economic theory of choice. Axioms for a more general form of regret theory have been proposed by Loomes and Sugden. In this article, it is shown that a simple nonmanipulability requirement is sufficient to characterize the functional form for regret theory with general choice sets. The stochastic dominance and comparative static properties of the model are outlined. A number of special cases are derived in which regret theory is equivalent to other well-known theories of choice under uncertainty.

199 citations


Journal ArticleDOI
TL;DR: This work presents a practical method to approximate the probability that a local alignment score is a result of chance alone, and presents applications to data base searching and the analysis of pairwise and self-comparisons of proteins.
Abstract: A central question in sequence comparison is the statistical significance of an observed similarity. For local alignment containing gaps to optimize sequence similarity this problem has so far not been solved mathematically. Using as a basis the Chen-Stein theory of Poisson approximation, we present a practical method to approximate the probability that a local alignment score is a result of chance alone. For a set of similarity scores and gap penalties only one simulation of random alignments needs to be calculated to derive the key information allowing us to estimate the significance of any alignment calculated under this setting. We present applications to data base searching and the analysis of pairwise and self-comparisons of proteins.
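The following is a hedged sketch of the general idea rather than the authors' exact procedure: scores from one simulation of random alignments are used, via a Poisson approximation, to estimate the probability that an observed local alignment score arises by chance, both for a single comparison and for a database search. The Gumbel-distributed stand-in scores and the search size are assumptions.

```python
# Hedged sketch of Poisson-approximation significance estimation from one
# simulation run. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for scores of optimal gapped local alignments of random sequence pairs;
# in practice these come from one simulation under the chosen scores and gap penalties.
simulated_scores = rng.gumbel(loc=30.0, scale=5.0, size=10_000)

def chance_probability(observed_score, sim_scores, n_comparisons=1):
    """Poisson approximation: with mu the expected number of comparisons reaching
    the observed score, P(at least one such score by chance) ~ 1 - exp(-mu)."""
    tail = np.mean(sim_scores >= observed_score)   # per-comparison exceedance rate
    mu = n_comparisons * tail
    return 1.0 - np.exp(-mu)

print(chance_probability(55.0, simulated_scores))                        # single pairwise comparison
print(chance_probability(55.0, simulated_scores, n_comparisons=10_000))  # database search
```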

154 citations


Proceedings Article
01 Jan 1994
TL;DR: This work shows how to combine the outputs of the two-class neural networks in order to obtain posterior probabilities for the class decisions, presents results on real-world databases, and shows that these results compare favorably to other neural network approaches.
Abstract: Multi-class classification problems can be efficiently solved by partitioning the original problem into sub-problems involving only two classes: for each pair of classes, a (potentially small) neural network is trained using only the data of these two classes. We show how to combine the outputs of the two-class neural networks in order to obtain posterior probabilities for the class decisions. The resulting probabilistic pairwise classifier is part of a handwriting recognition system which is currently applied to check reading. We present results on real world data bases and show that, from a practical point of view, these results compare favorably to other neural network approaches.
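A hedged sketch of the combination step: given the outputs of the K(K-1)/2 two-class networks, approximate class posteriors can be recovered with a classical pairwise-coupling rule. This is offered as one plausible rule for this setting, not necessarily the authors' exact formula, and the pairwise outputs below are made up.

```python
# Hedged sketch: combining pairwise outputs p_ij = P(class i | x, class is i or j)
# into approximate posteriors for K classes. One classical rule for this setting is
# P(i|x) ~ 1 / (sum_{j != i} 1/p_ij - (K-2)); not necessarily the paper's formula.
import numpy as np

def combine_pairwise(p):
    """p[i, j] = estimated P(class i | x, i or j), with p[j, i] = 1 - p[i, j]."""
    k = p.shape[0]
    posteriors = np.empty(k)
    for i in range(k):
        others = [j for j in range(k) if j != i]
        posteriors[i] = 1.0 / (np.sum(1.0 / p[i, others]) - (k - 2))
    posteriors = np.clip(posteriors, 0.0, None)
    return posteriors / posteriors.sum()          # renormalize to sum to one

p = np.array([[0.0, 0.8, 0.7],
              [0.2, 0.0, 0.6],
              [0.3, 0.4, 0.0]])                   # hypothetical pairwise network outputs
print(combine_pairwise(p))
```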

153 citations


Journal ArticleDOI
TL;DR: A simpler, more flexible approximation based on the Bonferroni inequality is suggested, as well as an analogue to a sequentially rejective procedure.
Abstract: This paper proposes a method for monitoring multi-armed clinical trials on the basis of pairwise comparisons between arms. The set of pairwise test statistics is examined during the course of the trial in order to make decisions about hypotheses, continuation of treatment arms, and continuation of the trial. Strong control of the Type I error rate is achieved by modifying two-armed group sequential procedures of Pocock (1977, Biometrika 64, 191-199), O'Brien and Fleming (1979, Biometrics 35, 549-556), and Lan and DeMets (1983, Biometrika 70, 659-663) to multi-armed trials. In the fixed-sample situation, these methods reduce to either Dunnett's or Tukey's procedure for multiple comparisons. A simpler, more flexible approximation based on the Bonferroni inequality is suggested, as well as an analogue to a sequentially rejective procedure.
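As a rough illustration of the Bonferroni-based approximation mentioned above, the sketch below computes a conservative critical value by splitting the overall Type I error rate equally across all pairwise comparisons and interim looks; the exact allocation proposed in the paper may differ.

```python
# Hedged sketch: a conservative Bonferroni-style boundary for monitoring all
# pairwise comparisons of a multi-armed trial at several interim looks.
from math import comb
from scipy.stats import norm

def bonferroni_boundaries(n_arms, n_looks, alpha=0.05):
    n_pairs = comb(n_arms, 2)
    per_test_alpha = alpha / (n_pairs * n_looks)   # equal split (an assumption)
    z = norm.isf(per_test_alpha / 2)               # two-sided critical value
    return n_pairs, z

pairs, z_crit = bonferroni_boundaries(n_arms=4, n_looks=5)
print(f"{pairs} pairwise comparisons, reject if |Z| >= {z_crit:.3f} at any look")
```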

102 citations



Journal ArticleDOI
TL;DR: In this article, two evaluative criteria are used to examine a total of 78 scales which can be derived from two widely used scales, i.e., exponential scales and pairwise comparison scales.
Abstract: One of the most critical issues in many applications of fuzzy sets is the successful evaluation of membership values. A method based on pairwise comparisons provides an interesting way for evaluating membership values. That method was proposed by Saaty, almost 20 years ago, and since then it has captured the interest of many researchers around the world. However, recent investigations reveal that the original scale may cause severe inconsistencies in many decision-making problems. Furthermore, exponential scales seem to be more natural for humans to use in many decision-making problems. In this paper two evaluative criteria are used to examine a total of 78 scales which can be derived from two widely used scales. The findings in this paper reveal that there is no single scale that can outperform all the other scales. Furthermore, the same findings indicate that a few scales are very efficient under certain conditions. Therefore, for a successful application of a pairwise comparison based method the appropriate scale needs to be selected and applied.
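To make the discussion concrete, the sketch below shows a linear (Saaty-type) scale, one possible exponential scale, and Saaty's consistency index for a pairwise comparison matrix; the two evaluative criteria actually used in the paper are not reproduced here, and the judgment matrix is hypothetical.

```python
# Hedged sketch: two families of judgment scales and the consistency index of a
# positive reciprocal pairwise comparison matrix. The matrix is illustrative only.
import numpy as np

linear_scale = np.arange(1, 10)                     # Saaty's 1..9 scale
exponential_scale = 2.0 ** (np.arange(9) / 2.0)     # one possible geometric scale
print("linear scale:     ", linear_scale)
print("exponential scale:", np.round(exponential_scale, 2))

def consistency_index(A):
    """CI = (lambda_max - n) / (n - 1) for a positive reciprocal judgment matrix A."""
    lam_max = np.linalg.eigvals(A).real.max()
    n = A.shape[0]
    return (lam_max - n) / (n - 1)

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])                     # hypothetical judgments on the linear scale
print(f"CI = {consistency_index(A):.4f}")
```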

74 citations


Journal ArticleDOI
TL;DR: The proposed planning process provides an analytical framework for multicriteria decision making that is rational, consistent, explicit, and defensible.
Abstract: Resource inventory and monitoring (I&M) programs in national parks combine multiple objectives in order to create a plan of action over a finite time horizon. Because all program activities are constrained by time and money, it is critical to plan I&M activities that make the best use of available agency resources. However, multiple objectives complicate a relatively straightforward allocation process. The analytic hierarchy process (AHP) offers a structure for multiobjective decision making so that decision-makers’ preferences can be formally incorporated in seeking potential solutions. Within the AHP, inventory and monitoring program objectives and decision criteria are organized into a hierarchy. Pairwise comparisons among decision elements at any level of the hierarchy provide a ratio scale ranking of those elements. The resulting priority values for all projects are used as each project’s contribution to the value of an overall I&M program. These priorities, along with budget and personnel constraints, are formulated as a zero/one integer programming problem that can be solved to select those projects that produce the best program. An extensive example illustrates how this approach is being applied to I&M projects in national parks in the Pacific Northwest region of the United States. The proposed planning process provides an analytical framework for multicriteria decision making that is rational, consistent, explicit, and defensible.
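A minimal sketch of the final selection step: AHP-derived project priorities feed a zero/one (knapsack-style) integer program under a budget constraint. The priorities and costs are invented, and the tiny instance is solved by enumeration rather than with a dedicated IP solver.

```python
# Hedged sketch: selecting the best-priority set of projects under a budget.
from itertools import product

priorities = [0.28, 0.22, 0.18, 0.17, 0.15]   # hypothetical AHP priorities per project
costs      = [40, 35, 25, 20, 30]             # hypothetical project costs ($k)
budget     = 80

best_value, best_pick = -1.0, None
for pick in product((0, 1), repeat=len(priorities)):      # all 0/1 selections
    cost = sum(c * x for c, x in zip(costs, pick))
    value = sum(p * x for p, x in zip(priorities, pick))
    if cost <= budget and value > best_value:
        best_value, best_pick = value, pick

print(f"selected projects: {best_pick}, total priority = {best_value:.2f}")
```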

68 citations


Proceedings ArticleDOI
06 Jun 1994
TL;DR: The flexibility of the MCM problem formulation enables the application of the iterative pairwise matching algorithm to several other important high level synthesis tasks.
Abstract: Many numerically intensive applications have computations that involve a large number of multiplications of one variable with several constants. A proper optimization of this part of the computation, which we call the multiple constant multiplication (MCM) problem, often results in a significant improvement in several key design metrics. After defining the MCM problem, we formulate it as a special case of common subexpression elimination. The algorithm for common subexpression elimination is based on an iterative pairwise matching heuristic. The flexibility of the MCM problem formulation enables the application of the iterative pairwise matching algorithm to several other important high level synthesis tasks. All applications are illustrated by a number of benchmarks.
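A toy sketch of the pairwise-matching flavor of the heuristic: constants are viewed as sets of shift-and-add terms, and the pair sharing the largest common pattern is matched first so the shared part is computed once. This illustrates the idea only, not the paper's full algorithm; the coefficients are hypothetical.

```python
# Hedged, simplified illustration of pairwise matching for multiple constant
# multiplication: find the pair of constants with the most shared shift-add terms.
def set_bits(c):
    """Positions of the nonzero bits of a constant, i.e. its shift-and-add terms."""
    return {i for i in range(c.bit_length()) if (c >> i) & 1}

def best_match(constants):
    """Return the pair of constants sharing the largest common shift-add pattern."""
    best_shared, best_pair = 0, None
    for i in range(len(constants)):
        for j in range(i + 1, len(constants)):
            shared = len(set_bits(constants[i]) & set_bits(constants[j]))
            if shared > best_shared:
                best_shared, best_pair = shared, (constants[i], constants[j])
    return best_shared, best_pair

constants = [0b1011001, 0b1001101, 0b0111001]   # hypothetical filter coefficients
shared, pair = best_match(constants)
print(f"best pair {tuple(bin(c) for c in pair)} shares {shared} shift-add terms")
```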

68 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a new approach, using information available in the intermediate and final phases of the analytic hierarchy process, to explicitly identify which attributes or criteria are determinant in making a choice among several given alternatives.
Abstract: This article develops a new approach, using information available in the intermediate and final phases of the analytic hierarchy process, to explicitly identify which attributes or criteria are determinant in making a choice among several given alternatives. The approach parallels that used in the popular direct dual questioning determinant attribute (DQDA) analysis, which has been widely used in marketing applications. Using the hierarchical structure and pairwise comparisons, the combined relative priorities of the criteria are compared with the relative priorities of the choice alternatives to compute determinance scores. These values are the basis for identifying which of the criteria are both important and different across alternatives (i.e., determinant). This new approach overcomes the potential ambiguities of traditional direct dual questioning methods. Moreover, the approach is easily extended to include decision hierarchies with multiple levels of attributes and subattributes.
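As a hedged illustration of a determinance score, the sketch below treats a criterion as determinant when it is both important (high AHP weight) and differentiating (its local priorities spread across the alternatives); the exact formula in the article may differ, and all numbers are invented.

```python
# Hedged sketch: determinance ~ criterion weight * spread of alternative priorities.
import numpy as np

criteria_weights = np.array([0.5, 0.3, 0.2])          # hypothetical AHP criteria weights
# local priorities of three alternatives under each criterion (rows = criteria)
local = np.array([[0.34, 0.33, 0.33],                 # important but undifferentiating
                  [0.70, 0.20, 0.10],                 # strongly differentiating
                  [0.45, 0.35, 0.20]])

determinance = criteria_weights * (local.max(axis=1) - local.min(axis=1))
for name, score in zip(["criterion A", "criterion B", "criterion C"], determinance):
    print(f"{name}: determinance {score:.3f}")
```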

Journal ArticleDOI
TL;DR: The problem of assessing the validity of clusters produced by a clustering procedure is addressed and a Monte Carlo test for assessing the value of a U-statistic based on sets of pairwise dissimilarities is described and illustrated on four data sets.

Journal ArticleDOI
01 Nov 1994
TL;DR: This paper attempts to analyze the decision-maker's (DM) preference structure through a descriptive approach to explain the global preferences revealed by the DM from pairwise comparisons of reference alternatives.
Abstract: In general, it is difficult to articulate the decision-maker's (DM) preference structure, especially when several criteria must be taken into account. Most synthetical approaches (the single synthetical criterion approach, the synthetical outranking approach, …) are based on some a priori information. In this paper, we attempt to analyze such a structure through a descriptive approach. The idea is to explain the global preferences revealed by the DM from pairwise comparisons of reference alternatives. A disaggregation-aggregation interactive procedure, as in PREFCALC, is used in ELECCALC, which enables a DM to assess the parameters of ELECTRE II.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the AHP and one of its variants can reach the wrong conclusion under certain circumstances, under the assumption that the pairwise comparisons used in these methods take on continuous values.

Journal ArticleDOI
TL;DR: In this paper, a procedure for improving the quality of group decision making is introduced, focusing on the identification of outliers and the establishment of confidence limits in group decision-making.
Abstract: This paper introduces a procedure for improving the quality of group decision making. Emphasis is placed on the identification of outliers and on the establishment of confidence limits in group decision making. Participants in group decision making whose opinions fall outside the group's tolerance level are further studied to identify the source of this variation. The study presents 30 stakeholders with the task of deciding whether or not to adopt a new manufacturing technology (computer integrated manufacturing). The goal in either choice is the improvement of the total productivity of the organization. The decision made by this group is based on a pairwise comparison of seven manufacturing dimensions. Priorities for these criteria are generated using the analytic hierarchy process (AHP). These priorities are analyzed to identify potential outliers. Procedures on how to manage these outliers in order to improve the quality of decisions arrived at by the group are provided.
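A minimal sketch of the outlier-screening idea, assuming a simple tolerance rule (two standard deviations from the group mean) that is not necessarily the one used in the study; the participants' priority vectors are randomly generated for illustration.

```python
# Hedged sketch: flag participants whose AHP priority vectors sit far from the
# group centroid. The data and the tolerance rule are assumptions.
import numpy as np

rng = np.random.default_rng(1)
priorities = rng.dirichlet(np.ones(7), size=30)     # 30 hypothetical participants, 7 dimensions

center = priorities.mean(axis=0)
distances = np.linalg.norm(priorities - center, axis=1)
tolerance = distances.mean() + 2 * distances.std()  # assumed tolerance level

outliers = np.flatnonzero(distances > tolerance)
print(f"participants flagged for follow-up: {outliers.tolist()}")
```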

Journal ArticleDOI
TL;DR: In this paper, the problem of estimating n ≥ 2 object or attribute weights from a set of n(n - 1)/2 pairwise preferences expressed on a ratio scale is considered.
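One standard estimator for this problem is sketched below: given a complete positive reciprocal matrix of ratio judgments a_ij ≈ w_i/w_j, the logarithmic least squares (row geometric mean) solution recovers the weights. This is a generic illustration, not necessarily the estimator studied in the paper.

```python
# Hedged sketch: row geometric mean (logarithmic least squares) weight estimation
# from a matrix of pairwise ratio judgments. The judgments are hypothetical.
import numpy as np

A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])               # consistent example: a_ij = w_i / w_j

weights = np.exp(np.log(A).mean(axis=1))       # row geometric means
weights /= weights.sum()
print(weights)                                 # ~ [0.571, 0.286, 0.143]
```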

Journal ArticleDOI
01 Nov 1994
TL;DR: A set of tools for group decision support is presented, and a general framework for a pairwise group preference structure is proposed that can be used to finalise the decision.
Abstract: A set of tools for group decision support is presented. Decision problems involving several decision makers, hereafter called judges, who have to rank several alternatives, are considered. The toolbox is called JUDGES. It includes the following four procedures: a hierarchical representation of the judges displays the existing conflicts between groups of judges; enhanced box-plot representations of the alternatives are generated in order to detect those that are responsible for the major conflicts; specific advice is issued to each judge in order to reach a consensus more easily; and a general framework for a pairwise group preference structure is proposed, which can be used to finalise the decision. These procedures are embedded in an interactive software package, implemented on a microcomputer, which currently simulates use on a network. Actual network implementation is foreseen in the near future. Several applications are presented and future developments are discussed.

Proceedings ArticleDOI
09 Oct 1994
TL;DR: The authors formulate this optimization problem of a pairwise clustering cost function in the maximum entropy framework using a variational principle to derive corresponding data partitionings in a d-dimensional Euclidean space and solve the embedding problem and the grouping of these data into clusters simultaneously and in a self-consistent fashion.
Abstract: Partitioning a set of data points which are characterized by their mutual dissimilarities instead of an explicit coordinate representation is a difficult, NP-hard combinatorial optimization problem. The authors formulate this optimization problem of a pairwise clustering cost function in the maximum entropy framework using a variational principle to derive corresponding data partitionings in a d-dimensional Euclidean space. This approximation solves the embedding problem and the grouping of these data into clusters simultaneously and in a self-consistent fashion.

Journal ArticleDOI
TL;DR: In this paper, a preference function is used to measure the importance of five accumulation points of cumulative distributions of outcomes in the choice of a reforestation alternative, which indicate the decision-maker's attitude towards risk.
Abstract: By using the approach presented in this paper, the decision‐maker's risk attitude can be ascertained and taken into account in the comparison of reforestation alternatives of a forest stand. Risks which reforestation alternatives include are described using distributions of outcomes. Cardinal utility values of five accumulation points of cumulative distributions of outcomes, calculated without considering risk preferences, are the variables included in a preference function. The parameters of that additive preference function represent the importance of the accumulation points in the choice of the reforestation alternative. They indicate the decision‐maker's attitude towards risk. The parameters are estimated on the basis of pairwise comparisons between the importance of variables, using Saaty's eigenvalue method. Estimation, application, and interpretation of the preference function are simple to carry out, which is important for an approach applied to practical decision‐making. The approach could be app...
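The parameter-estimation step relies on Saaty's eigenvalue method; a minimal sketch with a hypothetical judgment matrix over the five accumulation points is given below, where the normalized principal eigenvector supplies the preference-function parameters.

```python
# Hedged sketch of Saaty's eigenvalue method: parameters are taken as the
# normalized principal eigenvector of the pairwise importance judgments.
# The judgment matrix is hypothetical.
import numpy as np

A = np.array([[1.0, 3.0, 5.0, 7.0, 9.0],
              [1/3, 1.0, 3.0, 5.0, 7.0],
              [1/5, 1/3, 1.0, 3.0, 5.0],
              [1/7, 1/5, 1/3, 1.0, 3.0],
              [1/9, 1/7, 1/5, 1/3, 1.0]])      # five accumulation points compared pairwise

eigenvalues, eigenvectors = np.linalg.eig(A)
principal = eigenvectors[:, eigenvalues.real.argmax()].real
weights = principal / principal.sum()          # normalized preference-function parameters
print(np.round(weights, 3))
```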

Journal ArticleDOI
TL;DR: A generalization of a recent linear programming methodology is proposed which requires transforming the ranking data into equivalent pairwise choices, and the resulting weights are compared with those obtained from discrete choice models estimated with stated preference data.
Abstract: Perceived importance attribute rankings, obtained from samples of various types of public transport users, are used to determine the relative weights of their level of service vectors. A generalization of a recent linear programming methodology is proposed which requires transforming the ranking data into equivalent pairwise choices. The weights were compared with those obtained from discrete choice models estimated with stated preference data.

Journal ArticleDOI
TL;DR: In this article, an interactive procedure is developed for the bicriterion shortest path problem, where the decision maker's inherent utility function is quasi-concave and nonincreasing, and the network consists of nonnegative, integer valued arc lengths.

Journal ArticleDOI
TL;DR: In this paper, a Pairwise Aggregated Hierarchical Analysis of Ratio-Scale Preferences (PAHAP) is proposed for solving discrete alternative multicriteria decision problems.
Abstract: In this paper, we present a Pairwise Aggregated Hierarchical Analysis of Ratio-Scale Preferences (PAHAP), a new method for solving discrete alternative multicriteria decision problems. Following the Analytic Hierarchy Process (AHP), PAHAP uses pairwise preference judgments to assess the relative attractiveness of the alternatives. By first aggregating the pairwise judgment ratios of the alternatives across all criteria, and then synthesizing based on these aggregate measures, PAHAP determines overall ratio scale priorities and rankings of the alternatives which are not subject to rank reversal, provided that certain weak consistency requirements are satisfied. Hence, PAHAP can serve as a useful alternative to the original AHP if rank reversal is undesirable, for instance when the system is open and criterion scarcity does not affect the relative attractiveness of the alternatives. Moreover, the single matrix of pairwise aggregated ratings constructed in PAHAP provides useful insights into the decision maker's preference structure. PAHAP requires the same preference information as the original AHP (or, alternatively, the same information as the Referenced AHP, if the criteria are compared based on average (total) value of the alternatives). As it is easier to implement and interpret than previously proposed variants of the conventional AHP which prevent rank reversal, PAHAP also appears attractive from a practitioner's viewpoint.


Journal ArticleDOI
TL;DR: In this paper, a class of sequencing problems is proposed based on the adjacent pairwise interchange of two objects, necessary and sufficient conditions for an optimal ordering policy are given, and examples from the literature are considered and shown to be special cases of the proposed model.
Abstract: In this note, we are concerned with the study of a sequencing problem applicable to situations where the optimal choice among n! sequences is sought. A class of sequencing problems is proposed. Based on the adjacent pairwise interchange of two objects, necessary and sufficient conditions for an optimal ordering policy are given. Examples from the literature are considered and shown to be special cases of the proposed model. The results of this paper improve recent results given in Refs. 1 and 2.
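A hedged sketch of an adjacent pairwise interchange argument on a classic special case (minimizing total weighted completion time on a single machine), which is not necessarily one of the paper's examples: sorting jobs by the processing-time/weight ratio satisfies the interchange condition, and brute force confirms optimality on a tiny hypothetical instance.

```python
# Hedged sketch: adjacent pairwise interchange leads to the ratio rule for
# single-machine total weighted completion time. Job data are hypothetical.
from itertools import permutations

jobs = [(3, 2), (1, 4), (4, 1), (2, 3)]        # (processing time, weight) pairs

def total_weighted_completion(seq):
    t, total = 0, 0
    for p, w in seq:
        t += p
        total += w * t
    return total

# The adjacent interchange condition for this problem yields the ratio ordering:
wspt_order = sorted(jobs, key=lambda job: job[0] / job[1])
brute_force_best = min(permutations(jobs), key=total_weighted_completion)

print(total_weighted_completion(wspt_order), total_weighted_completion(brute_force_best))
```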

Journal ArticleDOI
TL;DR: In this paper, a deterministic model proposed for the analysis of two-way contingency tables that arise in counts of pairwise interactions is reviewed. But the decomposition is not unique, which compromises estimation and interpretation of the parameters, and the assumption of decomposability is supported by neither empirical evidence nor theoretical considerations.
Abstract: The authors review a deterministic model proposed for the analysis of two-way contingency tables that arise in counts of pairwise interactions. This model decomposes the table into the sum of two matrices with special forms: in one the contacts are distributed selectively, in the other they are distributed at random. We show that this model has several inherent problems. The decomposition is not unique, which compromises estimation and interpretation of the parameters; the deterministic framework provides no basis for estimation or hypothesis testing; and the assumption of decomposability is supported by neither empirical evidence nor theoretical considerations. We show that generalized linear models provide a suitable alternative once the probability process is specified and the overparameterization is removed.
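A minimal sketch of the advocated alternative: fitting a Poisson log-linear model with row and column main effects to a hypothetical table of pairwise interaction counts, using statsmodels; lack of fit, measured by the deviance, would point to non-random (selective) interaction.

```python
# Hedged sketch: Poisson GLM with row/column main effects for a two-way table
# of pairwise interaction counts. The count table is invented for illustration.
import numpy as np
import statsmodels.api as sm

counts = np.array([[12, 5, 3],
                   [4, 15, 6],
                   [2, 7, 11]])                 # hypothetical contact counts

rows, cols = counts.shape
y, X = [], []
for i in range(rows):
    for j in range(cols):
        row_dummies = [1.0 if i == r else 0.0 for r in range(1, rows)]
        col_dummies = [1.0 if j == c else 0.0 for c in range(1, cols)]
        X.append([1.0] + row_dummies + col_dummies)   # intercept + main effects
        y.append(counts[i, j])

fit = sm.GLM(np.array(y), np.array(X), family=sm.families.Poisson()).fit()
print(fit.params)                                # log-linear main-effect estimates
print(f"deviance = {fit.deviance:.2f}")          # lack of fit signals interaction
```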


Book ChapterDOI
04 Jul 1994
TL;DR: The running intersection property is the necessary and sufficient condition for pairwise compatibility of prescribed less-dimensional knowledge representations being equivalent to the existence of a global representation.
Abstract: By the marginal problem we understand the problem of the existence of a global (full-dimensional) knowledge representation which has prescribed less-dimensional representations as marginals. The paper deals with this problem in several calculi of AI: probabilistic reasoning, theory of relational databases, possibility theory, Dempster-Shafer's theory of belief functions, Spohn's theory of ordinal conditional functions. The following result, already known in the probabilistic framework and in the framework of relational databases, is shown also for the other calculi: the running intersection property is the necessary and sufficient condition for pairwise compatibility of prescribed less-dimensional knowledge representations being equivalent to the existence of a global representation. Moreover, a simple method of solving the marginal problem in the possibilistic framework and its subframeworks is given.
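A direct check of the running intersection property is easy to state in code; the sketch below tests, for an ordered sequence of variable sets, whether each set's intersection with the union of its predecessors is contained in some single predecessor. The variable sets are hypothetical.

```python
# Hedged sketch: checking the running intersection property for an ordering of
# the prescribed marginal domains. The domains are made up for illustration.
def has_running_intersection_property(sets):
    for i in range(1, len(sets)):
        seen = set().union(*sets[:i])
        separator = sets[i] & seen
        if not any(separator <= sets[j] for j in range(i)):
            return False
    return True

domains = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"b", "d"}]
print(has_running_intersection_property(domains))        # False for this ordering
```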

Journal ArticleDOI
TL;DR: It is concluded that the Cook and Kress method, accompanied by the proposed Phase II approach, can always derive a unique set of weights from a pairwise comparison matrix.

Journal ArticleDOI
TL;DR: This study presents a survey of work that considers the expected likelihood that a subject using the probabilistic model will have transitive responses for pairwise choices on a set of three alternatives.
Abstract: May developed an algebraic choice model to describe pairwise comparisons from an empirical study. A probabilistic choice variation of May's model has also been developed. This study presents a survey of work that considers the expected likelihood that a subject using the probabilistic model will have transitive responses for pairwise choices on a set of three alternatives. Of particular interest is the impact that various factors that influence the probabilistic choice model have on the expected likelihood of transitivity. These factors include the degree of accuracy with which the subject perceives the attributes of the alternatives, the number of attributes of comparison, and the consistency with which alternatives are ranked across attributes.
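A hedged Monte Carlo sketch, not May's exact model: each pairwise choice among three alternatives is made by comparing attribute values perceived with independent Gaussian error, and the fraction of transitive triples is estimated. The values, noise level, and single-attribute setup are illustrative assumptions.

```python
# Hedged sketch: how often do noisy pairwise choices over three alternatives
# come out transitive? All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_values = np.array([1.0, 0.9, 0.8])   # three alternatives on a single attribute
noise_sd = 0.5                            # perception accuracy
n_trials = 20_000

def noisy_prefers(i, j):
    """One pairwise choice: i is chosen over j if its perceived value is higher."""
    return rng.normal(true_values[i], noise_sd) > rng.normal(true_values[j], noise_sd)

transitive = 0
for _ in range(n_trials):
    ab, bc, ac = noisy_prefers(0, 1), noisy_prefers(1, 2), noisy_prefers(0, 2)
    # the only intransitive patterns are the two three-cycles a>b>c>a and a<b<c<a
    if not ((ab and bc and not ac) or (not ab and not bc and ac)):
        transitive += 1

print(f"estimated probability of a transitive triple: {transitive / n_trials:.3f}")
```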

Journal ArticleDOI
TL;DR: In this article, the authors generalize the pairwise-comparison method to evaluate the possible deals between two parties in conflict, and they consider the case of three parties in conflict.
Abstract: Starting off with a pairwise-comparison method to evaluate the possible deals between two parties in conflict, we generalize the approach and we consider the case of three parties in conflict. The basic step is the subjective evaluation of a deal where each party offers exactly one concession. The trade-off of benefits and costs is judged in verbal terms which are subsequently converted into numerical values on a discrete geometric scale. Although the number of plausible geometric scales is large, the information to be used by a mediator is scale-independent. The approach is illustrated by the results of an exploratory project aiming at a balanced CO2 emission reduction in Poland, Brazil, and the Netherlands. The success of the method depends largely on the information-processing support. Given the limitations of human imagination and human judgement, the method is not likely to be effective in a conflict among four or more parties, although it can easily be generalized.
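A minimal sketch of the conversion step, assuming hypothetical category labels and a scale factor of 2: verbal judgments are mapped onto a discrete geometric scale c^k. The paper's point is that the mediator's conclusions are largely insensitive to the particular c chosen.

```python
# Hedged sketch: converting verbal trade-off judgments into numerical values on
# a discrete geometric scale. Labels, grades, and the scale factor are assumptions.
verbal_grades = {
    "very strong disadvantage": -3,
    "strong disadvantage": -2,
    "weak disadvantage": -1,
    "indifference": 0,
    "weak advantage": 1,
    "strong advantage": 2,
    "very strong advantage": 3,
}

def numeric_value(verbal_term, scale_factor=2.0):
    """Map a verbal judgment onto the geometric scale c**k (c is an assumption)."""
    return scale_factor ** verbal_grades[verbal_term]

for term in ("weak advantage", "strong advantage", "very strong disadvantage"):
    print(f"{term}: {numeric_value(term):g}")
```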