
Showing papers on "Pairwise comparison published in 2004"


Journal ArticleDOI
TL;DR: This paper relates product characteristics to supply chain strategy, adopts supply chain operations reference (SCOR) model level I performance metrics as the decision criteria, and develops an integrated analytic hierarchy process and preemptive goal programming based multi-criteria decision-making methodology.

590 citations


Journal ArticleDOI
TL;DR: The Analytic Network Process (ANP) as discussed by the authors is a multicriteria theory of measurement used to derive relative priority scales of absolute numbers from individual judgments (or from actual measurements normalized to a relative form).
Abstract: The Analytic Network Process (ANP) is a multicriteria theory of measurement used to derive relative priority scales of absolute numbers from individual judgments (or from actual measurements normalized to a relative form) that also belong to a fundamental scale of absolute numbers. These judgments represent the relative influence of one of two elements over the other in a pairwise comparison process on a third element in the system, with respect to an underlying control criterion. Through its supermatrix, whose entries are themselves matrices of column priorities, the ANP synthesizes the outcome of dependence and feedback within and between clusters of elements. The Analytic Hierarchy Process (AHP), with its assumptions that upper levels are independent of lower levels and that the elements in a level are independent of one another, is a special case of the ANP. The ANP is an essential tool for articulating our understanding of a decision problem; developing it required overcoming the limitation of linear hierarchic structures and their mathematical consequences. This part on the ANP summarizes and illustrates its basic concepts and shows how informed intuitive judgments can lead to real-life answers that are matched by actual measurements in the real world (for example, relative dollar values), as illustrated in market share examples that rely on judgments and not on numerical data.
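
As a concrete illustration of the machinery both AHP and ANP build on, the sketch below derives a priority vector from a reciprocal pairwise comparison matrix via the principal eigenvector and computes Saaty's consistency index; the 3x3 judgment matrix is invented for illustration, not taken from the paper.

```python
# Minimal sketch: priorities from a pairwise comparison matrix via the
# principal eigenvector, as used in AHP/ANP. The matrix is illustrative.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # reciprocal judgment matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized priority vector
print("priorities:", np.round(w, 4))

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1)
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
print("consistency index:", round(float(CI), 4))
```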

483 citations


Posted Content
TL;DR: In this paper, a method of stochastic dominance analysis with respect to a function (SDRF) is described and illustrated; it can be applied for conforming utility functions with risk attitudes defined by corresponding ranges of absolute, relative or partial risk aversion coefficients.
Abstract: A method of stochastic dominance analysis with respect to a function (SDRF) is described and illustrated. The method, called stochastic efficiency with respect to a function (SERF), orders a set of risky alternatives in terms of certainty equivalents for a specified range of attitudes to risk. It can be applied for conforming utility functions with risk attitudes defined by corresponding ranges of absolute, relative or partial risk aversion coefficients. Unlike conventional SDRF, SERF involves comparing each alternative with all the other alternatives simultaneously, not pairwise, and hence can produce a smaller efficient set than that found by simple pairwise SDRF over the same range of risk attitudes. Moreover, the method can be implemented in a simple spreadsheet with no special software needed.
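
The sketch below illustrates the SERF idea under one common assumption, negative-exponential utility (for which the certainty equivalent has a closed form); the payoff distributions and risk-aversion range are invented for illustration.

```python
# Minimal sketch of SERF: rank risky alternatives by certainty
# equivalents (CEs) across a range of absolute risk aversion levels.
import numpy as np

rng = np.random.default_rng(0)
alternatives = {                        # simulated payoff distributions
    "A": rng.normal(100, 30, 5000),
    "B": rng.normal(95, 10, 5000),
    "C": rng.normal(90, 5, 5000),
}
r_grid = np.linspace(0.001, 0.05, 50)   # range of absolute risk aversion

def certainty_equivalent(payoffs, r):
    # CE for U(x) = -exp(-r x): CE = -(1/r) * ln E[exp(-r X)]
    return -np.log(np.mean(np.exp(-r * payoffs))) / r

ce = {name: np.array([certainty_equivalent(x, r) for r in r_grid])
      for name, x in alternatives.items()}

# An alternative is SERF-inefficient if some single rival has a higher
# CE at every risk-aversion level in the range.
efficient = [a for a in ce
             if not any(np.all(ce[b] > ce[a]) for b in ce if b != a)]
print("SERF-efficient set:", efficient)
```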

297 citations


Journal ArticleDOI
TL;DR: The AHP was used to structure and clarify the relations and relative importance between human performance improvement and the style of management; the study found that, in terms of company culture, participation, human capability, and attitudes, the best management style for improving human performance is management by values.
Abstract: In the global economy, the modern commercial and industrial organization needs to develop better methods of assessing the performance of the human resource than simply using performance measures such as efficiency or effectiveness. As organizations seek more aggressive ways to cut costs and to increase global competitiveness, the importance of establishing and sustaining high levels of employee performance increases. The main purpose of this paper is to solve the human performance improvement problem by employing the Analytic Hierarchy Process (AHP). Decision makers (DMs) often deal with problems that involve multiple criteria. At given moments in time, companies will display characteristics that make certain factors key factors in their competences. In this paper, we present a model which illustrates the relations and relative importance between human performance improvement and the style of management. In using the AHP to model this problem, we developed a hierarchic structure to represent the problem of human performance management and made pairwise comparisons. The AHP is thus suggested as a tool for implementing a multiple-criteria performance improvement scheme, used here to structure and clarify the relations and importance between human performance improvement and the style of management. The study found that, in terms of company culture, participation, human capability, and attitudes, the best management style for improving human performance is management by values.

281 citations


Journal ArticleDOI
TL;DR: This work discusses 18 estimation methods for deriving preference values from pairwise judgment matrices under a common framework of effectiveness (distance minimization and correctness in error-free cases) and points out the importance of commensurate scales when aggregating all the columns of a judgment matrix.

275 citations


Journal ArticleDOI
TL;DR: An interval approach is proposed for obtaining interval priority weights in the analytic hierarchy process (AHP), reflecting the inconsistency of the pairwise comparison ratios given by a decision maker.

193 citations


Journal ArticleDOI
TL;DR: A new approach to deriving crisp priorities from interval pairwise comparison judgements: linear or non-linear membership functions represent the decision-maker's degree of satisfaction with various crisp priority vectors, and the problem is formulated as a fuzzy mathematical programming problem whose solution is an optimal crisp priority vector.

193 citations


Journal ArticleDOI
TL;DR: It is shown that even if a matrix passes a consistency test, it can still be contradictory.

168 citations


Journal ArticleDOI
TL;DR: This paper evaluated two alternative causal cognitive mapping procedures that exemplify key differences among a number of direct elicitation techniques currently in use in the organizational strategy field: pairwise evaluation of causal relationships and a freehand approach.
Abstract: The present study evaluates two alternative causal cognitive mapping procedures that exemplify key differences among a number of direct elicitation techniques currently in use in the organizational strategy field: pairwise evaluation of causal relationships and a freehand approach. The pairwise technique yielded relatively elaborate maps, but participants found the task more difficult, less engaging, and less representative than the freehand approach. Implications for the choice of procedures in interventionist and research contexts are considered.

162 citations


Journal ArticleDOI
TL;DR: This paper considers the development of a representative criteria hierarchy, and uses data obtained from a pairwise comparison survey based on the UK fisheries of the English Channel to investigate priorities that exist among different interest groups in the fisheries.
Abstract: In determining the importance of criteria in the management of fisheries, two key issues stand out: the definition of a succinct set of criteria and the determination of which interest groups play a defining role in the management development process. This is indeed the case for all natural resource management problems, and many other environmental problems as well. The analytic hierarchy process (AHP) provides an effective framework for such an analysis. The AHP is generally used to evaluate importance amongst criteria based on the concept of paired comparison. This paper considers the development of a representative criteria hierarchy, and uses data obtained from a pairwise comparison survey based on the UK fisheries of the English Channel to investigate priorities that exist among different interest groups in the fisheries. The implementation of the AHP in this application provides a useful tool for analysis of criteria amongst groups involved in the management process with diverse interests.

Journal ArticleDOI
TL;DR: This paper proposes a new approach, based on the multicriteria decision aid (MCDA) paradigm, in which a preference relation is used to perform pairwise comparisons among the alternatives.

Proceedings ArticleDOI
04 Jul 2004
TL;DR: Ensembles of nested dichotomies appear to be a good general-purpose method for applying binary classifiers to multi-class problems; compared to error-correcting output codes, they are preferable if logistic regression is used and comparable in the case of C4.5.
Abstract: Nested dichotomies are a standard statistical technique for tackling certain polytomous classification problems with logistic regression. They can be represented as binary trees that recursively split a multi-class classification task into a system of dichotomies and provide a statistically sound way of applying two-class learning algorithms to multi-class problems (assuming these algorithms generate class probability estimates). However, there are usually many candidate trees for a given problem and in the standard approach the choice of a particular tree is based on domain knowledge that may not be available in practice. An alternative is to treat every system of nested dichotomies as equally likely and to form an ensemble classifier based on this assumption. We show that this approach produces more accurate classifications than applying C4.5 and logistic regression directly to multi-class problems. Our results also show that ensembles of nested dichotomies produce more accurate classifiers than pairwise classification if both techniques are used with C4.5, and comparable results for logistic regression. Compared to error-correcting output codes, they are preferable if logistic regression is used, and comparable in the case of C4.5. An additional benefit is that they generate class probability estimates. Consequently they appear to be a good general-purpose method for applying binary classifiers to multi-class problems.
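
A minimal sketch of the ensemble idea follows, using scikit-learn's logistic regression as the two-class base learner (one of the learners the paper evaluates; the random tree-sampling details here are simplified assumptions): each random tree splits the class set into two subsets per node, and class probabilities multiply along the root-to-leaf path.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def build_nd(classes, X, y, rng):
    """Recursively build one random nested dichotomy over `classes`."""
    if len(classes) == 1:
        return {"leaf": classes[0]}
    split = [int(c) for c in rng.permutation(classes)]
    left, right = sorted(split[:len(split) // 2]), sorted(split[len(split) // 2:])
    keep = np.isin(y, classes)
    target = np.isin(y[keep], left).astype(int)     # 1 = left branch
    clf = LogisticRegression(max_iter=1000).fit(X[keep], target)
    return {"clf": clf,
            "left": build_nd(left, X, y, rng),
            "right": build_nd(right, X, y, rng)}

def class_probs(node, x, p=1.0, out=None):
    """Multiply branch probabilities along each root-to-leaf path."""
    out = {} if out is None else out
    if "leaf" in node:
        out[node["leaf"]] = out.get(node["leaf"], 0.0) + p
        return out
    p_left = node["clf"].predict_proba(x.reshape(1, -1))[0, 1]
    class_probs(node["left"], x, p * p_left, out)
    class_probs(node["right"], x, p * (1.0 - p_left), out)
    return out

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
ensemble = [build_nd([0, 1, 2], X, y, rng) for _ in range(10)]

avg = {}
for tree in ensemble:                    # average probabilities over trees
    for c, p in class_probs(tree, X[0]).items():
        avg[c] = avg.get(c, 0.0) + p / len(ensemble)
print(avg)                               # ensemble class probability estimates
```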

Journal ArticleDOI
TL;DR: The GFPP method combines the group synthesis and prioritization stages into a coherent integrated framework, which does not need additional aggregation procedures and provides a meaningful indicator for measuring the level of group satisfaction and group consistency.

Journal ArticleDOI
01 Jan 2004
TL;DR: It is shown that computing the diffusion kernel is equivalent to maximizing the von Neumann entropy, subject to a global constraint on the sum of the Euclidean distances between nodes, and that the resulting kernel allows for more accurate support vector machine prediction of protein functional classifications from metabolic and protein-protein interaction networks.
Abstract: Motivation: The diffusion kernel is a general method for computing pairwise distances among all nodes in a graph, based on the sum of weighted paths between each pair of nodes. This technique has been used successfully, in conjunction with kernel-based learning methods, to draw inferences from several types of biological networks. Results: We show that computing the diffusion kernel is equivalent to maximizing the von Neumann entropy, subject to a global constraint on the sum of the Euclidean distances between nodes. This global constraint allows for high variance in the pairwise distances. Accordingly, we propose an alternative, locally constrained diffusion kernel, and we demonstrate that the resulting kernel allows for more accurate support vector machine prediction of protein functional classifications from metabolic and protein-protein interaction networks. Availability: Supplementary results and data are available at noble.gs.washington.edu/proj/maxent
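
A minimal sketch of the (globally constrained) diffusion kernel itself, on an invented toy graph: the kernel is the matrix exponential of the diffusion parameter times the negative graph Laplacian, and it induces pairwise distances between nodes.

```python
# Minimal sketch of a diffusion kernel on a small toy graph; the
# adjacency matrix and beta are illustrative choices.
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy adjacency matrix
D = np.diag(A.sum(axis=1))                  # degree matrix
H = A - D                                   # negative graph Laplacian
beta = 1.0                                  # diffusion parameter
K = expm(beta * H)                          # diffusion kernel (PSD)

# Kernel-induced squared distance between nodes i and j:
# d2(i, j) = K[i, i] + K[j, j] - 2 * K[i, j]
d2 = np.diag(K)[:, None] + np.diag(K)[None, :] - 2 * K
print(np.round(d2, 3))
```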

Proceedings Article
01 Jan 2004
TL;DR: A greedy variant of AETG and TCG is developed; it is deterministic, guaranteeing reproducibility, and is shown to provide a logarithmic worst-case guarantee on the test suite size.
Abstract: Pairwise coverage of factors affecting software has been proposed to screen for potential errors. Techniques to generate test suites for pairwise coverage are evaluated according to many criteria. A small number of tests is a main criterion, as this dictates the time for test execution. Randomness has been exploited to search for small test suites, but variation occurs in the test suite produced. A worst-case guarantee on test suite size is desired; repeatable generation is often necessary. The time to construct the test suite is also important. Finally, testers must be able to include certain tests, and to exclude others. The main approaches to generating test suites for pairwise coverage are examined; these are exemplified by AETG, IPO, TCG, TConfig, simulated annealing, and combinatorial design techniques. A greedy variant of AETG and TCG is developed. It is deterministic, guaranteeing reproducibility. It generates only one candidate test at a time, providing faster test suite development. It is shown to provide a logarithmic worst-case guarantee on the test suite size. It permits users to “seed” the test suite with specified tests. Finally, comparisons with other greedy approaches demonstrate that it often yields the smallest test suite.
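
The sketch below shows the flavor of greedy one-test-at-a-time generation in the AETG/TCG family, with deterministic tie-breaking for reproducibility; it is a simplified illustration under assumptions of our own, not the paper's algorithm, and the three factors are invented.

```python
# Greedy one-test-at-a-time pairwise test generation (simplified sketch).
from itertools import combinations

factors = {"os": ["linux", "mac", "win"],
           "browser": ["ff", "chrome"],
           "db": ["pg", "mysql", "sqlite"]}
names = list(factors)

def pairs_of(test):
    """All factor-value pairs covered by a (possibly partial) test."""
    keys = [f for f in names if f in test]
    return {((f1, test[f1]), (f2, test[f2]))
            for f1, f2 in combinations(keys, 2)}

# Every pair of values from two different factors must be covered once.
uncovered = {((f1, v1), (f2, v2))
             for f1, f2 in combinations(names, 2)
             for v1 in factors[f1] for v2 in factors[f2]}

suite = []
while uncovered:
    (f1, v1), (f2, v2) = min(uncovered)     # deterministic seed pair
    test = {f1: v1, f2: v2}
    for f in names:
        if f not in test:                   # greedily fill remaining factors
            test[f] = max(factors[f], key=lambda v: len(
                pairs_of({**test, f: v}) & uncovered))
    uncovered -= pairs_of(test)
    suite.append(test)

for t in suite:
    print({f: t[f] for f in names})
print(f"{len(suite)} tests achieve full pairwise coverage")
```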

Journal ArticleDOI
01 Nov 2004
TL;DR: A uniformity method and an aggregating method are proposed to provide both convenience and accuracy in generating the final outcome, together with higher DM satisfaction, and the validity of using multiple preference formats in criteria weight determination is verified.
Abstract: In multiple criteria decision making (MCDM), decision makers (DMs) always give preference information on alternatives, criteria or decision matrices. Since the DMs may have diverse cultural and educational backgrounds and value systems, their preferences may be expressed in different ways. This is especially true in cyberspace. In this study, the DMs are asked to express their preferences on a variety of criteria using any one of the following preference formats: preference orderings, utility values, multiplicative preference relation, selected subset, fuzzy selected subset, normal preference relation, fuzzy preference relation, linguistic terms, and pairwise comparison. In addition, we propose a uniformity method and an aggregating method to provide both convenience and accuracy in generating the final outcome and higher DM satisfaction. Finally, the validity of using multiple preference formats in criteria weight determination is verified through an experiment.

Journal ArticleDOI
27 Jun 2004
TL;DR: This paper proposes a discriminative learning approach which can incorporate pairwise constraints into a conventional margin-based learning framework; it directly models the decision boundary and thus requires fewer model assumptions.
Abstract: To deal with the problem of insufficient labeled data in video object classification, one solution is to utilize additional pairwise constraints that indicate the relationship between two examples, i.e., whether these examples belong to the same class or not. In this paper, we propose a discriminative learning approach which can incorporate pairwise constraints into a conventional margin-based learning framework. Different from previous work that usually attempts to learn better distance metrics or estimate the underlying data distribution, the proposed approach can directly model the decision boundary and, thus, require fewer model assumptions. Moreover, the proposed approach can handle both labeled data and pairwise constraints in a unified framework. In this work, we investigate two families of pairwise loss functions, namely, convex and nonconvex pairwise loss functions, and then derive three pairwise learning algorithms by plugging in the hinge loss and the logistic loss functions. The proposed learning algorithms were evaluated using a people identification task on two surveillance video data sets. The experiments demonstrated that the proposed pairwise learning algorithms considerably outperform the baseline classifiers using only labeled data and two other pairwise learning algorithms with the same amount of pairwise constraints.
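
The sketch below is an illustrative formulation, not the paper's exact one: a linear scorer trained by gradient descent on a standard hinge loss over a few labeled points plus a nonconvex pairwise hinge term that rewards same-sign scores for must-link pairs and opposite-sign scores for cannot-link pairs. All data are synthetic.

```python
# Illustrative pairwise-constrained margin learning (not the paper's
# exact algorithm): hinge loss on labels + max(0, 1 - s * f(xi) * f(xj))
# on pairs, with s = +1 for must-link and s = -1 for cannot-link.
import numpy as np

rng = np.random.default_rng(1)
X_pos = rng.normal(+1.5, 1.0, (20, 2))
X_neg = rng.normal(-1.5, 1.0, (20, 2))
X_lab = np.vstack([X_pos[:5], X_neg[:5]])        # few labeled examples
y_lab = np.array([1] * 5 + [-1] * 5)
pairs = [(X_pos[6], X_pos[7], +1),               # must-link pair
         (X_pos[8], X_neg[6], -1)]               # cannot-link pair

w, lr, lam = np.zeros(2), 0.05, 0.5
for _ in range(500):
    grad = np.zeros(2)
    for x, y in zip(X_lab, y_lab):               # labeled hinge loss
        if 1 - y * (w @ x) > 0:
            grad -= y * x
    for xi, xj, s in pairs:                      # pairwise hinge loss
        fi, fj = w @ xi, w @ xj
        if 1 - s * fi * fj > 0:
            grad -= lam * s * (fj * xi + fi * xj)
    w -= lr * grad
print("learned weights:", np.round(w, 3))
```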

Journal ArticleDOI
TL;DR: In this article, the authors describe a Prolog application which helps the decision-maker build a consistent matrix, or a matrix with a controlled error, and gives hints on how to continue the comparison process.

Journal Article
TL;DR: It is shown by a simple, exploratory analysis that the negative eigenvalues can code for relevant structure in the data, thus leading to the discovery of new features, which were lost by conventional data analysis techniques.
Abstract: Pairwise proximity data, given as a similarity or dissimilarity matrix, can violate metricity. This occurs either due to noise, fallible estimates, or due to intrinsic non-metric features such as those that arise from human judgments. So far the problem of non-metric pairwise data has been tackled by essentially omitting the negative eigenvalues or shifting the spectrum of the associated (pseudo-)covariance matrix for a subsequent embedding. However, little attention has been paid to the negative part of the spectrum itself. In particular, no answer was given to whether the directions associated with the negative eigenvalues code for anything other than noise-related variance. We show by a simple, exploratory analysis that the negative eigenvalues can code for relevant structure in the data, thus leading to the discovery of new features, which were lost by conventional data analysis techniques. The information hidden in the negative eigenvalue part of the spectrum is illustrated and discussed for three data sets, namely USPS handwritten digits, text-mining and data from cognitive psychology.
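
A minimal sketch of where such negative eigenvalues come from: double-center a dissimilarity matrix that violates the triangle inequality and inspect the spectrum of the resulting pseudo-covariance matrix. The toy matrix is invented.

```python
# Double-center a (possibly non-metric) dissimilarity matrix and inspect
# the spectrum; negative eigenvalues signal metric violations.
import numpy as np

D = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 1.0, 2.5],
              [2.0, 1.0, 0.0, 0.2],   # deliberately violates the
              [3.0, 2.5, 0.2, 0.0]])  # triangle inequality

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D ** 2) @ J                  # pseudo-covariance matrix
eigvals = np.sort(np.linalg.eigvalsh(B))[::-1]
print("spectrum:", np.round(eigvals, 3))     # negative tail = non-metricity
```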

Journal ArticleDOI
M. Ridwan
TL;DR: This paper introduces a fuzzy preference based model of route choice that may be the first application of fuzzy individual choice in traffic assignment and probably also the first in this class to consider the spatial knowledge of individual travelers.
Abstract: This paper introduces a fuzzy preference based model of route choice. The core of the model is FiPV (Fuzzy individuelle Präferenzen von Verkehrsteilnehmern, or fuzzy traveler preferences), a choice function based on fuzzy preference relations for travel decisions. The proposed model may be the first application of fuzzy individual choice in traffic assignment and probably also the first in this class to consider the spatial knowledge of individual travelers. It is argued that travelers do not or cannot always follow the maximization principle. Therefore we formulate a model that also takes into account travelers with non-maximizing behavior. The model is based on fuzzy preference relations, whose elements are fuzzy pairwise comparisons between the available alternatives.
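
For illustration, the sketch below applies one classical choice function for fuzzy preference relations, Orlovsky's non-dominance degree, to an invented matrix of fuzzy pairwise route preferences; the paper's FiPV choice function is its own construction, so this only shows the general mechanism.

```python
# Choosing from a fuzzy preference relation via Orlovsky's
# non-dominance degree. R[i, j] is the degree to which route i is
# preferred to route j; the values are invented.
import numpy as np

R = np.array([[0.0, 0.7, 0.6],
              [0.3, 0.0, 0.8],
              [0.4, 0.5, 0.0]])      # fuzzy pairwise preferences

strict = np.maximum(R - R.T, 0.0)    # strict preference degrees
nd = 1.0 - strict.max(axis=0)        # non-dominance degree per route
print("non-dominance:", nd, "-> choose route", int(np.argmax(nd)))
```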

Journal ArticleDOI
TL;DR: In this paper, multiple linear regression models are considered and the design matrices are allowed to be different, and the predictor variables are either unconstrained or constrained to finite intervals.
Abstract: Research on multiple comparison during the past 50 years or so has focused mainly on the comparison of several population means. Several years ago, Spurrier considered the multiple comparison of several simple linear regression lines. He constructed simultaneous confidence bands for all of the contrasts of the simple linear regression lines over the entire range (-∞, ∞) when the models have the same design matrices. This article extends Spurrier's work in several directions. First, multiple linear regression models are considered and the design matrices are allowed to be different. Second, the predictor variables are either unconstrained or constrained to finite intervals. Third, the types of comparison allowed can be very flexible, including pairwise, many–one, and successive. Two simulation methods are proposed for the calculation of critical constants. The methodologies are illustrated with examples.
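
The sketch below illustrates the Monte Carlo idea behind simulated critical constants in the simplest possible setting (known variance, all pairwise comparisons of k group means rather than regression lines over intervals); it is a didactic stand-in, not the paper's method.

```python
# Simulate the null distribution of the maximum standardized pairwise
# contrast among k means and take its 95th percentile as the critical
# constant for simultaneous pairwise intervals.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
k, sims = 4, 20000
z = rng.standard_normal((sims, k))           # standardized group estimates

# max over pairs of |z_i - z_j| / sqrt(2) per simulation run
stats = np.array([max(abs(zr[i] - zr[j]) / np.sqrt(2)
                      for i, j in combinations(range(k), 2))
                  for zr in z])
c = np.quantile(stats, 0.95)
print("simulated critical constant:", round(float(c), 3))
```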

Journal ArticleDOI
TL;DR: A multi-criterion genetic optimization for solving distribution network problems in supply chain management and provides more control for decision-makers on the determination of the optimization solutions, and gains more information for a better insight into the distribution network.
Abstract: This paper develops a multi-criterion genetic optimization for solving distribution network problems in supply chain management. Distribution problems deal with distribution from a number of sources to a number of destinations, in which various decision factors are closely related and influence each other. Genetic algorithms have been widely adopted as the optimization tool in solving these problems. This paper combines analytic hierarchy processes with genetic algorithms to capture the capability of multi-criterion decision-making. The proposed algorithm allows decision-makers to give weightings for criteria using a pairwise comparison approach. The numerical results obtained from the new approach are compared with the results obtained from linear programming. The result shows that the proposed algorithm is reliable and robust. In addition, it provides more control for decision-makers on the determination of the optimization solutions, and gains more information for a better insight into the distribution network.

Journal ArticleDOI
TL;DR: In this paper, a modified analytic hierarchy process (AHP) is presented, which incorporates probabilistic distributions to include uncertainty in the judgements, and the vector of priorities is calculated using Monte Carlo simulation.
Abstract: The analytic hierarchy process (AHP) is a powerful multiple-criteria decision analysis technique for dealing with complex problems. Traditional AHP forces decision-makers to converge vague judgements to single numeric preferences in order to estimate the pairwise comparisons of all pairs of objectives and decision alternatives required in the AHP. The resultant rankings of alternatives cannot be tested for statistical significance, and the method lacks a systematic approach that addresses managerial/soft aspects. To overcome the above limitations, the present paper presents a modified analytic hierarchy process, which incorporates probabilistic distributions to include uncertainty in the judgements. The vector of priorities is calculated using Monte Carlo simulation. The final rankings are analysed for rank reversal using analysis of variance, and managerial aspects (stakeholder analysis, soft systems methods, etc.) are introduced systematically. The focus is on the actual methodology of the modified analytic hierarchy process, which is illustrated by a brief account of a case study.
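
A minimal sketch of the simulation step, under assumptions of our own choosing (lognormal perturbations of invented nominal judgments): each draw perturbs the upper-triangle judgments, enforces reciprocity, recomputes the principal-eigenvector priorities, and tallies how often each criterion ranks first.

```python
# Simulation-based AHP sketch: distributions over judgments instead of
# single numeric preferences, with rank frequencies as the output.
import numpy as np

rng = np.random.default_rng(2)
nominal = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])
n, draws, rank1 = 3, 2000, np.zeros(3)

for _ in range(draws):
    A = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            a = nominal[i, j] * rng.lognormal(0.0, 0.15)  # perturbed judgment
            A[i, j], A[j, i] = a, 1 / a                   # keep reciprocity
    vals, vecs = np.linalg.eig(A)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    rank1[np.argmax(w)] += 1                              # top-ranked criterion

print("P(rank 1):", rank1 / draws)
```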

Journal ArticleDOI
TL;DR: In this article, the priority ranking of each fire safety attribute given by each evaluator and his/her evaluation on each pairwise comparison are combined to construct an approximate probability density distribution.

Book ChapterDOI
20 Sep 2004
TL;DR: A kernel method for using combinations of features across example pairs in learning pairwise classifiers that can give a precision 4 to 8 times higher than that of previous methods in author matching problems.
Abstract: We propose a kernel method for using combinations of features across example pairs in learning pairwise classifiers. Identifying two instances in the same class is an important technique in duplicate detection, entity matching, and other clustering problems. However, it is a difficult problem when instances have few discriminative features. One typical example is to check whether two abbreviated author names in different papers refer to the same person or not. While using combinations of different features from each instance may improve the classification accuracy, doing this straightforwardly is computationally intensive. Our method uses interaction between different features without high computational cost using a kernel. At medium recall levels, this method can give a precision 4 to 8 times higher than that of previous methods in author matching problems.
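
For context, the sketch below shows a commonly used pairwise kernel that captures interactions between the features of the two instances in a pair, K((a, b), (c, d)) = k(a, c)k(b, d) + k(a, d)k(b, c); this illustrates the general idea and is not necessarily the paper's exact kernel.

```python
# A standard pairwise kernel built from a base RBF kernel between
# single instances; symmetric in the order of instances within a pair.
import numpy as np

def k(x, y, gamma=0.5):
    """Base RBF kernel between single instances."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def pair_kernel(p, q):
    (a, b), (c, d) = p, q
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

rng = np.random.default_rng(3)
p = (rng.normal(size=4), rng.normal(size=4))   # one candidate pair
q = (rng.normal(size=4), rng.normal(size=4))   # another candidate pair
print(pair_kernel(p, q))
```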

Journal ArticleDOI
TL;DR: A distance-based framework for analysing compatible decision makers' assignments, with the properties needed for obtaining an overall rank, is proposed, and Goal Programming is put forward as an attractive and flexible tool.

Proceedings ArticleDOI
21 Jul 2004
TL;DR: A new algorithm using information extraction support in addition to co-occurring words for tracking person entities in a large document pool significantly outperforms the existing algorithm by 25 percentage points in overall F-measure.
Abstract: It is fairly common that different people are associated with the same name. In tracking person entities in a large document pool, it is important to determine whether multiple mentions of the same name across documents refer to the same entity or not. The previous approach to this problem involves measuring context similarity based only on co-occurring words. This paper presents a new algorithm using information extraction support in addition to co-occurring words. A learning scheme with minimal supervision is developed within the Bayesian framework. Maximum entropy modeling is then used to represent the probability distribution of context similarities based on heterogeneous features. Statistical annealing is applied to derive the final entity coreference chains by globally fitting the pairwise context similarities. Benchmarking shows that our new approach significantly outperforms the existing algorithm by 25 percentage points in overall F-measure.

Proceedings Article
01 Dec 2004
TL;DR: This work addresses the problem of grouping out-of-sample examples after the clustering process has taken place and shows that the very notion of a dominant set offers a simple and efficient way of doing this.
Abstract: Dominant sets are a new graph-theoretic concept that has proven to be relevant in pairwise data clustering problems, such as image segmentation. They generalize the notion of a maximal clique to edge-weighted graphs and have intriguing, non-trivial connections to continuous quadratic optimization and spectral-based grouping. We address the problem of grouping out-of-sample examples after the clustering process has taken place. This may serve either to drastically reduce the computational burden associated with the processing of very large data sets, or to efficiently deal with dynamic situations whereby data sets need to be updated continually. We show that the very notion of a dominant set offers a simple and efficient way of doing this. Numerical experiments on various grouping problems show the effectiveness of the approach.

Journal ArticleDOI
TL;DR: Pairwise dominance is applied to segregate production plans into sets according to their relative environmental and productive efficiency performance; these sets are used to define distance-based measures of efficiency and environmental performance.