
Showing papers on "Pairwise comparison published in 2011"


Journal ArticleDOI
TL;DR: A modified fuzzy TOPSIS methodology is proposed for the selection of the best energy technology alternative and the weights of the selection criteria are determined by fuzzy pairwise comparison matrices.
Abstract: Research highlights: We propose a modified fuzzy TOPSIS methodology for energy planning decisions. The weights of the selection criteria are determined by fuzzy AHP. The method is applied to a multicriteria energy planning problem. Energy planning is a complex issue which takes technical, economic, environmental and social attributes into account. Selection of the best energy technology requires the consideration of conflicting quantitative and qualitative evaluation criteria. When decision-makers' judgments are under uncertainty, it is relatively difficult for them to provide exact numerical values. Fuzzy set theory is a powerful tool for dealing with the uncertainty of subjective, incomplete, and vague information, and it is easier for an energy planning expert to make an evaluation using linguistic terms. In this paper, a modified fuzzy TOPSIS methodology is proposed for the selection of the best energy technology alternative. TOPSIS is a multicriteria decision making (MCDM) technique which determines the best alternative by calculating the distances from the positive and negative ideal solutions according to the evaluation scores of the experts. In the proposed methodology, the weights of the selection criteria are determined by fuzzy pairwise comparison matrices. The methodology is applied to an energy planning decision-making problem.

395 citations


Proceedings Article
27 Jul 2011
TL;DR: PRO's scalability and effectiveness are established by comparing it to MERT and MIRA, and parity is demonstrated on both phrase-based and syntax-based systems in a variety of language pairs, using large-scale data scenarios.
Abstract: We offer a simple, effective, and scalable method for statistical machine translation parameter tuning based on the pairwise approach to ranking (Herbrich et al., 1999). Unlike the popular MERT algorithm (Och, 2003), our pairwise ranking optimization (PRO) method is not limited to a handful of parameters and can easily handle systems with thousands of features. Moreover, unlike recent approaches built upon the MIRA algorithm of Crammer and Singer (2003) (Watanabe et al., 2007; Chiang et al., 2008b), PRO is easy to implement. It uses off-the-shelf linear binary classifier software and can be built on top of an existing MERT framework in a matter of hours. We establish PRO's scalability and effectiveness by comparing it to MERT and MIRA and demonstrate parity on both phrase-based and syntax-based systems in a variety of language pairs, using large scale data scenarios.
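
The pairwise ranking idea behind PRO can be sketched with synthetic data: the candidate feature vectors and metric scores below are invented, and a few steps of hand-rolled logistic regression stand in for the off-the-shelf linear binary classifier the paper mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each candidate translation has a feature vector and a
# metric score (e.g., BLEU).  The "true" weights are used only to synthesize data.
w_true = np.array([2.0, -1.0, 0.5])
feats = rng.normal(size=(200, 3))                 # 200 candidates on one n-best list
bleu = feats @ w_true + 0.1 * rng.normal(size=200)

# PRO-style training data: for sampled candidate pairs (i, j), the input is the
# feature difference and the label says which candidate scored higher.
i, j = rng.integers(0, 200, size=(2, 5000))
keep = i != j
X = feats[i[keep]] - feats[j[keep]]
y = (bleu[i[keep]] > bleu[j[keep]]).astype(float)

# A few steps of logistic regression by gradient descent; any off-the-shelf
# linear binary classifier would do.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))  # clip to avoid overflow
    w -= 0.1 * X.T @ (p - y) / len(y)

# The learned direction should align with the weights that generated the scores.
cos = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
```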

304 citations


Journal ArticleDOI
TL;DR: Information granularity is viewed as an essential asset, which offers a decision maker a tangible level of flexibility using some initial preferences conveyed by each individual that can be adjusted with the intent to reach a higher level of consensus within the group.
Abstract: In group decision making, one strives to reconcile differences of opinions (judgments) expressed by individual members of the group. Fuzzy-decision-making mechanisms bring a great deal of flexibility. By admitting membership degrees, we are offered flexibility to exploit different aggregation mechanisms and navigate a process of interaction among decision makers to achieve an increasing level of consistency within the group. While the studies reported so far exploit more or less sophisticated ways of adjusting/transforming initial judgments (preferences) of individuals, in this paper, we bring forward a concept of information granularity. Here, information granularity is viewed as an essential asset, which offers a decision maker a tangible level of flexibility using some initial preferences conveyed by each individual that can be adjusted with the intent to reach a higher level of consensus. Our study is concerned with an extension of the well-known analytic hierarchy process to the group decision-making scenario. More specifically, the admitted level of granularity gives rise to a granular matrix of pairwise comparisons. The granular entries represented, e.g., by intervals or fuzzy sets, supply a required flexibility using the fact that we select the most suitable numeric representative of the reciprocal matrix. The proposed concept of granular reciprocal matrices is used to optimize a performance index, which comes as an additive combination of two components. The first one expresses a level of consistency of the individual pairwise comparison matrices; by exploiting the admitted level of granularity, we aim at the minimization of the corresponding inconsistency index. The second part of the performance index quantifies a level of disagreement in terms of the individual preferences. The flexibility offered by the level of granularity is used to increase the level of consensus within the group. 
Given an implicit nature of relationships between the realizations of the granular pairwise matrices and the values of the performance index, we consider using particle swarm optimization as an optimization vehicle. Two scenarios of allocation of granularity among decision makers are considered, namely, a uniform allocation of granularity and nonuniform distribution of granularity, where the levels of allocated granularity are also subject to optimization. A number of numeric studies are provided to illustrate an essence of the method.
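
The consistency component being optimized builds on standard AHP machinery. A minimal sketch of the principal-eigenvector priority vector and Saaty's consistency index (not the paper's granular/PSO formulation) might look like:

```python
import numpy as np

def ahp_priorities(A):
    """Principal-eigenvector priorities and Saaty's consistency index
    for a reciprocal pairwise comparison matrix A (a_ji = 1/a_ij)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                       # normalized priority vector
    ci = (eigvals[k].real - n) / (n - 1)  # CI = (lambda_max - n) / (n - 1)
    return w, ci

# A perfectly consistent 3x3 matrix (a_ik = a_ij * a_jk) has CI = 0.
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
w, ci = ahp_priorities(A)
```

The paper's inconsistency minimization adjusts the granular entries so that each individual matrix drives this CI toward zero.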

235 citations


Journal ArticleDOI
TL;DR: A simple method, combining matrix multiplication, the vector dot product, and the definition of a consistent pairwise comparison matrix, is proposed to identify inconsistent elements, and the correctness of the proposed method is proved mathematically.

220 citations


Proceedings Article
19 Jun 2011
TL;DR: This work classifies the text of an argument as an instance of one of five common schemes, using features specific to each scheme, and achieves accuracies of 63-91% in one-against-others classification and 80-94% in pairwise classification.
Abstract: Argumentation schemes are structures or templates for various kinds of arguments. Given the text of an argument with premises and conclusion identified, we classify it as an instance of one of five common schemes, using features specific to each scheme. We achieve accuracies of 63-91% in one-against-others classification and 80-94% in pairwise classification (baseline = 50% in both cases).

204 citations


Proceedings Article
28 Jun 2011
TL;DR: A new algorithm, the generalized repeated insertion model (GRIM), is developed for sampling from arbitrary ranking distributions; it yields approximate samplers that are exact for many important special cases and have provable bounds with pairwise evidence.
Abstract: Learning preference distributions is a key problem in many areas (e.g., recommender systems, IR, social choice). However, many existing methods require restrictive data models for evidence about user preferences. We relax these restrictions by considering as data arbitrary pairwise comparisons—the fundamental building blocks of ordinal rankings. We develop the first algorithms for learning Mallows models (and mixtures) with pairwise comparisons. At the heart is a new algorithm, the generalized repeated insertion model (GRIM), for sampling from arbitrary ranking distributions. We develop approximate samplers that are exact for many important special cases—and have provable bounds with pair-wise evidence—and derive algorithms for evaluating log-likelihood, learning Mallows mixtures, and non-parametric estimation. Experiments on large, real-world datasets show the effectiveness of our approach.
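
GRIM generalizes the classical repeated insertion model (RIM). A minimal RIM sampler for a standard Mallows model, with an invented reference ranking and dispersion value, can be sketched as:

```python
import random

def rim_sample(ref, phi, rng=random):
    """Sample from a Mallows model via the repeated insertion model (RIM).

    Items from the reference ranking `ref` are inserted one at a time;
    item i (1-based) goes to position j in {1, ..., i} with probability
    proportional to phi**(i - j).  phi in (0, 1]; phi = 1 is uniform.
    """
    out = []
    for i, item in enumerate(ref, start=1):
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        j = rng.choices(range(i), weights=weights)[0]
        out.insert(j, item)
    return out

random.seed(0)
samples = [rim_sample("abcd", phi=0.3) for _ in range(2000)]
# With small phi, samples concentrate near the reference order.
frac_ref = sum(s == list("abcd") for s in samples) / len(samples)
```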

200 citations


Journal ArticleDOI
TL;DR: This work shows how, given a collection of pairwise shape maps, to define an optimization problem whose output is a set of alternative maps, compositions of those given, which are consistent, and individually at times much better than the original.
Abstract: Finding an informative, structure-preserving map between two shapes has been a long-standing problem in geometry processing, involving a variety of solution approaches and applications. However, in many cases, we are given not only two related shapes, but a collection of them, and considering each pairwise map independently does not take full advantage of all existing information. For example, a notorious problem with computing shape maps is the ambiguity introduced by the symmetry problem — for two similar shapes which have reflectional symmetry there exist two maps which are equally favorable, and no intrinsic mapping algorithm can distinguish between them based on these two shapes alone. Another prominent issue with shape mapping algorithms is their relative sensitivity to how “similar” two shapes are — good maps are much easier to obtain when shapes are very similar. Given the context of additional shape maps connecting our collection, we propose to add the constraint of global map consistency, requiring that any composition of maps between two shapes should be independent of the path chosen in the network. This requirement can help us choose among the equally good symmetric alternatives, or help us replace a “bad” pairwise map with the composition of a few “good” maps between shapes that in some sense interpolate the original ones. We show how, given a collection of pairwise shape maps, to define an optimization problem whose output is a set of alternative maps, compositions of those given, which are consistent, and individually at times much better than the original. Our method is general, and can work on any collection of shapes, as long as a seed set of good pairwise maps is provided. We demonstrate the effectiveness of our method for improving maps generated by state-of-the-art mapping methods on various shape databases.

137 citations


Journal ArticleDOI
TL;DR: A new method of determining the criteria weights, FARE (Factor Relationship), based on the relationships between all the criteria describing the phenomenon considered, is offered.
Abstract: The accuracy of the results obtained by using multicriteria evaluation methods largely depends on the determination of the criteria weights. The accuracy of expert evaluation decreases with the increase of the number of criteria. The application of the analytic hierarchy process, based on pairwise comparison of the criteria, or similar methods, may help to solve this problem. However, if the number of the pairs of criteria is large, the same problems, associated with the accuracy of evaluation, arise. In the present paper, a new method of determining the criteria weights, FARE (Factor Relationship), based on the relationships between all the criteria describing the phenomenon considered, is offered. It means that, at the first stage, a minimal amount of the initial data about the relationships between a part of the set of criteria, as well as their strength and direction, is elicited from experts. Then, based on the conditions of functioning and the specific features of the complete set of criteria, the relations between other criteria of the set and their direction are determined analytically in compliance with those established at the first stage. When the total impact of each particular criterion on other criteria of the set or its total dependence on other criteria of a particular set is known, the criteria weights can be determined.

129 citations


Posted Content
TL;DR: In this paper, the authors apply matrix completion to skew-symmetric matrices, show that the resulting algorithm is robust to both noise and incomplete data, and apply it to both pairwise comparison and rating data.
Abstract: The process of rank aggregation is intimately intertwined with the structure of skew-symmetric matrices. We apply recent advances in the theory and algorithms of matrix completion to skew-symmetric matrices. This combination of ideas produces a new method for ranking a set of items. The essence of our idea is that a rank aggregation describes a partially filled skew-symmetric matrix. We extend an algorithm for matrix completion to handle skew-symmetric data and use that to extract ranks for each item. Our algorithm applies to both pairwise comparison and rating data. Because it is based on matrix completion, it is robust to both noise and incomplete data. We show a formal recovery result for the noiseless case and present a detailed study of the algorithm on synthetic data and Netflix ratings.
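
A toy version of the paper's core representation, assuming invented scores and a fully observed matrix so that simple row means (rather than the paper's matrix-completion machinery, which handles the partially observed case) suffice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth quality scores for 6 items.
s = np.array([3.0, 1.0, 2.5, 0.0, 2.0, 1.5])
n = len(s)

# Pairwise comparisons fill a skew-symmetric matrix:
# Y[i, j] = s_i - s_j + noise, and Y[j, i] = -Y[i, j] by construction.
Y = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = s[i] - s[j] + 0.05 * rng.normal()
        Y[i, j], Y[j, i] = d, -d

# With a fully observed matrix, row means recover the scores up to a shift,
# so sorting the row means recovers the ranking.
est = Y.mean(axis=1)
ranking = np.argsort(-est)
```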

119 citations


Journal ArticleDOI
TL;DR: Pairwise Equalizing, a gossip-style, distributed, asynchronous, iterative algorithm, is developed for achieving unconstrained, separable, convex consensus optimization over undirected networks with time-varying topologies, where each component function is strictly convex, continuously differentiable, and has a minimizer.
Abstract: In many applications, nodes in a network desire not only a consensus, but an optimal one. To date, a family of subgradient algorithms have been proposed to solve this problem under general convexity assumptions. This technical note shows that, for the scalar case and by assuming a bit more, novel non-gradient-based algorithms with appealing features can be constructed. Specifically, we develop Pairwise Equalizing (PE) and Pairwise Bisectioning (PB), two gossip algorithms that solve unconstrained, separable, convex consensus optimization problems over undirected networks with time-varying topologies, where each local function is strictly convex, continuously differentiable, and has a minimizer. We show that PE and PB are easy to implement, bypass limitations of the subgradient algorithms, and produce switched, nonlinear, networked dynamical systems that admit a common Lyapunov function and asymptotically converge. Moreover, PE generalizes the well-known Pairwise Averaging and Randomized Gossip Algorithm, while PB relaxes a requirement of PE, allowing nodes to never share their local functions.
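
Since PE generalizes the well-known Pairwise Averaging scheme, that classical special case is easy to sketch; the ring network and initial measurements below are invented.

```python
import random

def gossip_average(values, edges, steps=2000, rng=random):
    """Classical pairwise averaging: the special case of Pairwise Equalizing
    in which every local function is f_i(x) = (x - x_i)^2.  At each tick a
    random edge 'gossips' and both endpoints move to their average."""
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2
    return x

random.seed(3)
# A 5-node ring network with initial measurements.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
x0 = [10.0, 0.0, 4.0, 6.0, 0.0]
x = gossip_average(x0, edges)
avg = sum(x0) / len(x0)   # consensus target: the global average, 4.0
```

Each gossip step conserves the sum, so the only fixed point reachable over a connected network is the global average.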

116 citations


Posted Content
TL;DR: In this paper, the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects) was examined and a robust, error-tolerant algorithm was proposed.
Abstract: This paper examines the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects). In general, the ranking of $n$ objects can be identified by standard sorting methods using $n \log_2 n$ pairwise comparisons. We are interested in natural situations in which relationships among the objects may allow for ranking using far fewer pairwise comparisons. Specifically, we assume that the objects can be embedded into a $d$-dimensional Euclidean space and that the rankings reflect their relative distances from a common reference point in $R^d$. We show that under this assumption the number of possible rankings grows like $n^{2d}$ and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than $d \log n$ adaptively selected pairwise comparisons, on average. If instead the comparisons are chosen at random, then almost all pairwise comparisons must be made in order to identify any ranking. In addition, we propose a robust, error-tolerant algorithm that only requires that the pairwise comparisons are probably correct. Experimental studies with synthetic and real datasets support the conclusions of our theoretical analysis.

Proceedings Article
12 Dec 2011
TL;DR: In this paper, the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects) was examined, and a robust, error-tolerant algorithm was proposed to identify a randomly selected ranking using just slightly more than d log n adaptively selected pairwise comparisons, on average.
Abstract: This paper examines the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects). In general, the ranking of n objects can be identified by standard sorting methods using n log₂ n pairwise comparisons. We are interested in natural situations in which relationships among the objects may allow for ranking using far fewer pairwise comparisons. Specifically, we assume that the objects can be embedded into a d-dimensional Euclidean space and that the rankings reflect their relative distances from a common reference point in ℝ^d. We show that under this assumption the number of possible rankings grows like n^(2d) and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than d log n adaptively selected pairwise comparisons, on average. If instead the comparisons are chosen at random, then almost all pairwise comparisons must be made in order to identify any ranking. In addition, we propose a robust, error-tolerant algorithm that only requires that the pairwise comparisons are probably correct. Experimental studies with synthetic and real datasets support the conclusions of our theoretical analysis.
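
The baseline the paper starts from, ranking by distance to a reference point with a comparison-based sort, can be sketched in one dimension. The points and reference below are invented, and `functools.cmp_to_key` turns the pairwise oracle into a sort key while a counter tracks the number of queries.

```python
import functools

# Hypothetical 1-D embedding: rank objects by distance from a reference
# point r using only pairwise "is a closer than b?" queries.
r = 2.0
points = [5.0, 1.9, 3.3, 2.1, 0.0, 4.4, 2.05, 7.2]

queries = 0
def closer(a, b):
    """Pairwise comparison oracle: negative if a is closer to r than b."""
    global queries
    queries += 1
    return (abs(a - r) > abs(b - r)) - (abs(a - r) < abs(b - r))

ranked = sorted(points, key=functools.cmp_to_key(closer))
# sorted() needs O(n log2 n) oracle calls; the paper's point is that the
# low-dimensional structure allows roughly d log n *adaptive* queries instead.
```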

Journal ArticleDOI
TL;DR: In this paper, a new method, called ELECTREGKMS, which employs robust ordinal regression to construct a set of outranking models compatible with preference information, is presented, where preference information supplied by the decision maker is composed of pairwise comparisons stating the truth or falsity of the outranking relation for some real or fictitious reference alternatives.

Journal ArticleDOI
TL;DR: This paper defines a set of concepts regarding the scale function and the linguistic pairwise comparison matrices of the priority vector and develops a 2-tuple fuzzy linguistic multicriteria approach to select the best numerical scales and the best prioritization methods for different LPCMs.
Abstract: The validity of the priority vector used in the analytic hierarchy process (AHP) relies on two factors: the selection of a numerical scale and the selection of a prioritization method. The traditional AHP selects only one numerical scale (e.g., the Saaty scale) and one prioritization method (e.g., the eigenvector method) for each particular problem. For this traditional selection approach, there is disagreement on which numerical scale and prioritization method is better in deriving a priority vector. In fact, the best numerical scale and the best prioritization method both rely on the content of the pairwise comparison data provided by the AHP decision makers. By defining a set of concepts regarding the scale function and the linguistic pairwise comparison matrices (LPCMs) of the priority vector and by using LPCMs to unify the format of the input and output of AHP, this paper extends the AHP prioritization process under the 2-tuple fuzzy linguistic model. Based on the extended AHP prioritization process, we present two performance measure criteria to evaluate the effect of the numerical scales and prioritization methods. We also use the performance measure criteria to develop a 2-tuple fuzzy linguistic multicriteria approach to select the best numerical scales and the best prioritization methods for different LPCMs. In this paper, we call this type of selection the individual selection of the numerical scale and prioritization method. We also compare this individual selection with traditional selection by using both random and real data and show better results with individual selection.

Journal ArticleDOI
TL;DR: In this paper, Wang and Chen proposed fuzzy linguistic preference relations (Fuzzy LinPreRa) to address the inconsistency of fuzzy AHP and EAM in decision-making.

Proceedings ArticleDOI
01 Sep 2011
TL;DR: An algorithm is proposed that exploits the low-dimensional geometry to accurately embed objects from a relatively small number of sequentially selected pairwise comparisons, and its performance is demonstrated with experiments.
Abstract: Low-dimensional embedding based on non-metric data (e.g., non-metric multidimensional scaling) is a problem that arises in many applications, especially those involving human subjects. This paper investigates the problem of learning an embedding of n objects into d-dimensional Euclidean space that is consistent with pairwise comparisons of the type “object a is closer to object b than c.” While there are O(n³) such comparisons, experimental studies suggest that relatively few are necessary to uniquely determine the embedding up to the constraints imposed by all possible pairwise comparisons (i.e., the problem is typically over-constrained). This paper is concerned with quantifying the minimum number of pairwise comparisons necessary to uniquely determine an embedding up to all possible comparisons. The comparison constraints stipulate that, with respect to each object, the other objects are ranked relative to their proximity. We prove that at least Ω(dn log n) pairwise comparisons are needed to determine the embedding of all n objects. This lower bound cannot be achieved by using randomly chosen pairwise comparisons. We propose an algorithm that exploits the low-dimensional geometry in order to accurately embed objects based on a relatively small number of sequentially selected pairwise comparisons and demonstrate its performance with experiments.

Journal ArticleDOI
TL;DR: A linearization technique is used that provides the closest consistent matrix to a given inconsistent matrix using orthogonal projection in a linear space and, as a result, consistency can be achieved in a closed form.

Proceedings ArticleDOI
24 Jul 2011
TL;DR: The experimental results show that the pairwise comparison based competition model significantly outperforms link analysis based approaches (PageRank and HITS) and pointwise approaches (number of best answers and best answer ratio) for estimating the expertise of active users.
Abstract: In this paper, we consider the problem of estimating the relative expertise score of users in community question answering (CQA) services. Previous approaches typically only utilize the explicit question answering relationship between askers and answerers and apply link analysis to address this problem. The implicit pairwise comparison between two users that is implied in the best answer selection is ignored. Given a question and answering thread, it is likely that the expertise score of the best answerer is higher than the asker's and all other non-best answerers'. The goal of this paper is to explore such pairwise comparisons inferred from best answer selections to estimate the relative expertise scores of users. Formally, we treat each pairwise comparison between two users as a two-player competition with one winner and one loser. Two competition models are proposed to estimate user expertise from pairwise comparisons. Using the NTCIR-8 CQA task data with 3 million questions and introducing answer quality prediction based evaluation metrics, the experimental results show that the pairwise comparison based competition model significantly outperforms link analysis based approaches (PageRank and HITS) and pointwise approaches (number of best answers and best answer ratio) for estimating the expertise of active users. Furthermore, it is shown that pairwise comparison based competition models have better discriminative power than other methods. It is also found that answer quality (best answer) is an important factor in estimating user expertise.
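
One simple competition model of the kind described, an Elo-style rating update (the paper's own models may differ), applied to an invented best-answer log:

```python
def elo_expertise(matches, k=32.0, base=400.0):
    """Estimate relative expertise from pairwise 'competitions'.
    Each match (winner, loser) is one best-answer selection: the best
    answerer 'beats' the asker and each non-best answerer."""
    rating = {}
    for w, l in matches:
        rw, rl = rating.get(w, 1500.0), rating.get(l, 1500.0)
        # Expected score of the winner under the logistic (Elo) model.
        expected = 1.0 / (1.0 + 10 ** ((rl - rw) / base))
        rating[w] = rw + k * (1.0 - expected)
        rating[l] = rl - k * (1.0 - expected)
    return rating

# Hypothetical CQA log of (winner, loser) comparisons: 'ann' wins all of hers.
log = [("ann", "bob"), ("ann", "cat"), ("bob", "cat"),
       ("ann", "bob"), ("cat", "bob"), ("ann", "cat")] * 5
r = elo_expertise(log)
order = sorted(r, key=r.get, reverse=True)
```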

Journal ArticleDOI
TL;DR: A generic model for using different decision strategies in multi-criteria, personalized route planning, which results in multiple alternative routes that provide users with the flexibility to select one of them en-route based on the real world situation is presented.

Proceedings ArticleDOI
16 Jul 2011
TL;DR: It is shown that any weakly neutral outcome scoring rule can be represented as the MLE of a weaklyneutral pairwise-independent model, and all such rules admit natural extensions to profiles of partial orders.
Abstract: In many of the possible applications as well as the theoretical models of computational social choice, the agents' preferences are represented as partial orders. In this paper, we extend the maximum likelihood approach for defining "optimal" voting rules to this setting. We consider distributions in which the pairwise comparisons/incomparabilities between alternatives are drawn i.i.d. We call such models pairwise-independent models and show that they correspond to a class of voting rules that we call pairwise scoring rules; this generalizes rules such as Kemeny and Borda. Moreover, we show that Borda is the only pairwise scoring rule that satisfies neutrality when the outcome space is the set of all alternatives. We then study which voting rules defined for linear orders can be extended to partial orders via our MLE model. We show that any weakly neutral outcome scoring rule (including any ranking/candidate scoring rule) based on the weighted majority graph can be represented as the MLE of a weakly neutral pairwise-independent model. Therefore, all such rules admit natural extensions to profiles of partial orders. Finally, we propose a specific MLE model πk for generating a set of k winning alternatives, and study the computational complexity of winner determination for the MLE of πk.
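
The pairwise view of Borda that the paper builds on can be sketched directly: computing Borda scores purely from pairwise wins, with an invented three-voter profile. For complete linear orders, counting pairwise wins reproduces the classical positional Borda score.

```python
from itertools import combinations

def borda_from_pairwise(votes, alternatives):
    """Borda scores computed from pairwise comparisons alone:
    score(a) = number of pairwise wins of a over other alternatives,
    summed across all voters.  For complete linear orders this equals
    the classical positional Borda score."""
    score = dict.fromkeys(alternatives, 0)
    for ranking in votes:
        pos = {a: i for i, a in enumerate(ranking)}
        for a, b in combinations(alternatives, 2):
            if pos[a] < pos[b]:
                score[a] += 1
            else:
                score[b] += 1
    return score

# An invented profile of three complete rankings over alternatives a, b, c.
votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
s = borda_from_pairwise(votes, "abc")
```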

01 Jan 2011
TL;DR: The asymptotic properties of pairwise likelihood estimation procedures for linear time series models, including ARMA as well as fractionally integrated ARMA processes, are studied.
Abstract: This note is concerned with the asymptotic properties of pairwise likelihood estimation procedures for linear time series models. The latter include ARMA as well as fractionally integrated ARMA processes; when the fractional integration parameter d exceeds 0.25, the pairwise likelihood estimator is not even asymptotically normal. A comparison between using all pairs and consecutive pairs of observations in defining the likelihood is given. We also explore the application of pairwise likelihood to a popular nonlinear model for time series of counts. In this case, the likelihood based on the entire data set cannot be computed without resorting to simulation-based procedures. On the other hand, it is possible to numerically compute the pairwise likelihood precisely. We illustrate the good performance of pairwise likelihood in this case.
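
A minimal consecutive-pairs pairwise likelihood, sketched for a Gaussian AR(1) rather than the fractionally integrated or count models studied in the note; all data below are simulated, and the parameter is recovered by a simple grid search.

```python
import math, random

def pair_loglik(x, phi):
    """Consecutive-pairs Gaussian pairwise log-likelihood for a stationary
    AR(1) with unit innovation variance: each pair (x_t, x_{t+1}) is
    bivariate normal with var g0 = 1/(1 - phi^2) and cov g1 = phi * g0."""
    g0 = 1.0 / (1.0 - phi * phi)
    g1 = phi * g0
    det = g0 * g0 - g1 * g1
    ll = 0.0
    for a, b in zip(x, x[1:]):
        quad = (g0 * a * a - 2 * g1 * a * b + g0 * b * b) / det
        ll += -math.log(2 * math.pi) - 0.5 * math.log(det) - 0.5 * quad
    return ll

# Simulate an AR(1) with phi = 0.6 and recover phi by maximizing the
# pairwise likelihood over a grid.
random.seed(4)
phi_true, x = 0.6, [0.0]
for _ in range(4000):
    x.append(phi_true * x[-1] + random.gauss(0, 1))
grid = [i / 100 for i in range(-95, 96)]
phi_hat = max(grid, key=lambda p: pair_loglik(x, p))
```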

Journal ArticleDOI
TL;DR: This paper presents a novel pairwise constraint propagation approach by decomposing the challenging constraint propagation problem into a set of independent semi-supervised classification subproblems which can be solved in quadratic time using label propagation based on k-nearest neighbor graphs.
Abstract: This paper presents a novel pairwise constraint propagation approach by decomposing the challenging constraint propagation problem into a set of independent semi-supervised learning subproblems which can be solved in quadratic time using label propagation based on k-nearest neighbor graphs. Considering that this time cost is proportional to the number of all possible pairwise constraints, our approach actually provides an efficient solution for exhaustively propagating pairwise constraints throughout the entire dataset. The resulting exhaustive set of propagated pairwise constraints are further used to adjust the similarity matrix for constrained spectral clustering. Other than the traditional constraint propagation on single-source data, our approach is also extended to more challenging constraint propagation on multi-source data where each pairwise constraint is defined over a pair of data points from different sources. This multi-source constraint propagation has an important application to cross-modal multimedia retrieval. Extensive results have shown the superior performance of our approach.

Journal ArticleDOI
TL;DR: This paper proposes a new semi-supervised constraint score that uses both pairwise constraints and local properties of the unlabeled data and shows that this new score is less sensitive to the given constraints than the previous scores while providing similar performances.

Proceedings Article
12 Dec 2011
TL;DR: The main result helps settle an open problem posed by learning-to-rank (from pairwise information) theoreticians and practitioners: What is a provably correct way to sample preference labels?
Abstract: Given a set V of n elements we wish to linearly order them using pairwise preference labels which may be non-transitive (due to irrationality or arbitrary noise). The goal is to linearly order the elements while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: the number of disagreements (loss) and the query complexity (number of pairwise preference labels). Our algorithm adaptively queries at most O(n poly(log n, 1/ε)) preference labels for a regret of ε times the optimal loss. This is strictly better, and often significantly better, than what non-adaptive sampling could achieve. Our main result helps settle an open problem posed by learning-to-rank (from pairwise information) theoreticians and practitioners: What is a provably correct way to sample preference labels?

Journal Article
TL;DR: A family of efficient NPKL algorithms, termed "SimpleNPKL", that can learn non-parametric kernels from a large set of pairwise constraints is presented, and the empirical results show that the proposed new technique is significantly more efficient and scalable.
Abstract: Previous studies of Non-Parametric Kernel Learning (NPKL) usually formulate the learning task as a Semi-Definite Programming (SDP) problem that is often solved by some general-purpose SDP solvers. However, for N data examples, the time complexity of NPKL using a standard interior-point SDP solver could be as high as O(N^6.5), which prevents NPKL methods from being applied to real applications, even for data sets of moderate size. In this paper, we present a family of efficient NPKL algorithms, termed "SimpleNPKL", which can learn non-parametric kernels from a large set of pairwise constraints efficiently. In particular, we propose two efficient SimpleNPKL algorithms. One is the SimpleNPKL algorithm with linear loss, which enjoys a closed-form solution that can be efficiently computed by the Lanczos sparse eigendecomposition technique. The other is the SimpleNPKL algorithm with other loss functions (including square hinge loss, hinge loss, and square loss) that can be reformulated as a saddle-point optimization problem, which can be further resolved by a fast iterative algorithm. In contrast to the previous NPKL approaches, our empirical results show that the proposed new technique, while maintaining the same accuracy, is significantly more efficient and scalable. Finally, we also demonstrate that the proposed technique is applicable to speed up many kernel learning tasks, including colored maximum variance unfolding, minimum volume embedding, and structure preserving embedding.

Proceedings ArticleDOI
13 Jun 2011
TL;DR: A clustering scheme is presented that combines a mode-seeking phase with a cluster merging phase in the corresponding density map, whose novelty resides in its use of topological persistence to guide the merging of clusters; the experimental aspects of the work are emphasized.
Abstract: We present a clustering scheme that combines a mode-seeking phase with a cluster merging phase in the corresponding density map. While mode detection is done by a standard graph-based hill-climbing scheme, the novelty of our approach resides in its use of topological persistence to guide the merging of clusters. Our algorithm provides additional feedback in the form of a set of points in the plane, called a persistence diagram (PD), which provably reflects the prominences of the modes of the density. In practice, this feedback enables the user to choose relevant parameter values, so that under mild sampling conditions the algorithm will output the correct number of clusters, a notion that can be made formally sound within persistence theory. The algorithm only requires rough estimates of the density at the data points, and knowledge of (approximate) pairwise distances between them. It is therefore applicable in any metric space. Meanwhile, its complexity remains practical: although the size of the input distance matrix may be up to quadratic in the number of data points, a careful implementation only uses a linear amount of memory and takes barely more time to run than to read through the input. In this conference version of the paper we emphasize the experimental aspects of our work, describing the approach, giving an intuitive overview of its theoretical guarantees, discussing the choice of its parameters in practice, and demonstrating its potential in terms of applications through a series of experimental results obtained on synthetic and real-life data sets. Precise statements and proofs of our theoretical claims can be found in the full version of the paper [7].

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of obtaining an approximate maximum a posteriori estimate of a discrete random field characterized by pairwise potentials that form a truncated convex model.
Abstract: We consider the problem of obtaining an approximate maximum a posteriori estimate of a discrete random field characterized by pairwise potentials that form a truncated convex model. For this problem, we propose two st-MINCUT-based move-making algorithms that we call Range Swap and Range Expansion. Our algorithms can be thought of as extensions of αβ-Swap and α-Expansion respectively that fully exploit the form of the pairwise potentials. Specifically, instead of dealing with one or two labels at each iteration, our methods explore a large search space by considering a range of labels (that is, an interval of consecutive labels). Furthermore, we show that Range Expansion provides the same multiplicative bounds as the standard linear programming (LP) relaxation in polynomial time. Compared to previous approaches based on the LP relaxation, for example interior-point algorithms or tree-reweighted message passing (TRW), our methods are faster as they use only the efficient st-MINCUT algorithm in their design. We demonstrate the usefulness of the proposed approaches on both synthetic and standard real data problems.
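To make the objective concrete, here is a minimal sketch of a truncated convex model's energy, with a brute-force MAP solver in place of the st-MINCUT-based moves (exhaustive search is only feasible on toy instances, and the problem setup and parameter names are illustrative):

```python
import itertools

def truncated_linear(li, lj, w=1.0, M=2):
    """Truncated convex pairwise potential (truncated linear case):
    cost grows with the label difference but is capped at w * M."""
    return w * min(abs(li - lj), M)

def energy(labels, unary, edges, w=1.0, M=2):
    e = sum(unary[i][l] for i, l in enumerate(labels))
    return e + sum(truncated_linear(labels[i], labels[j], w, M) for i, j in edges)

def brute_force_map(unary, edges, num_labels, w=1.0, M=2):
    """Exhaustive MAP over all labelings -- shown only to make the
    objective that the move-making algorithms optimize concrete."""
    return min(itertools.product(range(num_labels), repeat=len(unary)),
               key=lambda lab: energy(lab, unary, edges, w, M))

unary = [[0, 5, 5, 5], [5, 5, 5, 0], [5, 5, 5, 0]]  # node i prefers its 0-cost label
edges = [(0, 1), (1, 2)]
best = brute_force_map(unary, edges, num_labels=4)
```

Because the potential is truncated, the jump from label 0 to label 3 between nodes 0 and 1 costs only w * M = 2 rather than 3, which is exactly the structure the Range moves exploit.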

Journal ArticleDOI
TL;DR: A statistically non-significant relative superiority of the fuzzy technology over the AHP technology is indicated in the development of a medical diagnosis system, which involves the elicitation and analysis of basic symptoms.

Proceedings ArticleDOI
27 Jan 2011
TL;DR: A tool chain realizing the MoSo-PoLiTe concept, which combines combinatorial and model-based testing, is presented; it contains a pairwise configuration selection component based on a feature model.
Abstract: Testing Software Product Lines is a very challenging task, and approaches like combinatorial testing and model-based testing are frequently used to reduce the effort of testing Software Product Lines and to reuse test artifacts. In this contribution we present a tool chain realizing our MoSo-PoLiTe concept, which combines combinatorial and model-based testing. Our tool chain contains a pairwise configuration selection component that operates on a feature model. This component implements a heuristic that finds a minimal subset of configurations covering 100% of the pairwise interactions. Additionally, our tool chain allows model-based test case generation for each configuration within this generated subset. The tool chain is based on commercial tools since it was developed within industrial cooperations. A non-commercial implementation of the pairwise configuration selection is available, and an integration with an Open Source model-based testing tool is under development.
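A greedy pairwise selection heuristic can be sketched as follows (an illustrative stand-in that ignores feature-model constraints, which the actual component must respect): repeatedly pick the configuration covering the most still-uncovered value pairs until every pair of values from distinct factors is covered:

```python
from itertools import combinations, product

def pairwise_configs(factors):
    """factors: dict mapping factor name -> list of its possible values.
    Returns a list of configurations (dicts) achieving 100% pairwise coverage."""
    names = list(factors)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(factors[a], factors[b])}
    chosen = []
    while uncovered:
        # Greedy: take the configuration covering the most uncovered pairs.
        best = max(product(*(factors[n] for n in names)),
                   key=lambda cfg: sum(1 for p in combinations(zip(names, cfg), 2)
                                       if p in uncovered))
        uncovered -= set(combinations(zip(names, best), 2))
        chosen.append(dict(zip(names, best)))
    return chosen

factors = {"OS": ["linux", "win"], "DB": ["pg", "my"], "UI": ["web", "cli"]}
configs = pairwise_configs(factors)
```

For the three two-valued factors above, the greedy heuristic covers all twelve value pairs with fewer than the eight configurations of the full product, which is the point of pairwise selection.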

Journal ArticleDOI
TL;DR: This paper presents a preference learning method, trained from examples to approximate the comparison function for a pair of objects, that can be embedded as the comparator into a classical sorting algorithm to provide a global ranking of a set of objects.
Abstract: Relevance ranking consists in sorting a set of objects with respect to a given criterion. However, in personalized retrieval systems, the relevance criteria may vary among users and may not be predefined. In this case, ranking algorithms that adapt their behavior from users' feedback must be devised. Two main approaches are proposed in the literature for learning to rank: the use of a scoring function, learned from examples, that evaluates a feature-based representation of each object to yield an absolute relevance score; and a pairwise approach, where a preference function is learned to determine which object in a given pair has to be ranked first. In this paper, we present a preference learning method for learning to rank. A neural network, the comparative neural network (CmpNN), is trained from examples to approximate the comparison function for a pair of objects. The CmpNN adopts a particular architecture designed to implement the symmetries naturally present in a preference function. The learned preference function can be embedded as the comparator into a classical sorting algorithm to provide a global ranking of a set of objects. To improve the ranking performance, an active-learning procedure is devised that aims at selecting the most informative patterns in the training set. The proposed algorithm is evaluated on the LETOR dataset, showing promising performance in comparison with other state-of-the-art algorithms.
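The "comparator embedded into a classical sorting algorithm" step is easy to sketch with functools.cmp_to_key; below, a fixed linear scorer stands in for the trained CmpNN (the names, weights, and documents are illustrative, not from the paper):

```python
from functools import cmp_to_key

def make_comparator(pref):
    """Turn a pairwise preference function into a sort key.
    pref(a, b) > 0 means a should be ranked before b; antisymmetry
    (pref(a, b) == -pref(b, a)) mirrors the symmetry the CmpNN
    architecture builds into its comparison function."""
    def cmp(a, b):
        s = pref(a, b)
        return -1 if s > 0 else (1 if s < 0 else 0)
    return cmp_to_key(cmp)

# Stand-in for the learned network: a fixed linear scorer over two features.
weights = [0.7, 0.3]

def linear_pref(a, b):
    score = lambda x: sum(w * f for w, f in zip(weights, x))
    return score(a) - score(b)

docs = [(0.1, 0.9), (0.8, 0.2), (0.5, 0.5)]
ranked = sorted(docs, key=make_comparator(linear_pref))
```

Any learned preference with this antisymmetric form can be dropped into the same wrapper, which is what lets the CmpNN drive a standard O(n log n) sort instead of scoring objects in isolation.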