
Showing papers on "Pairwise comparison published in 2015"


Journal ArticleDOI
TL;DR: In this article, a new method, called best-worst method (BWM), is proposed to solve multi-criteria decision-making (MCDM) problems, in which a number of alternatives are evaluated with respect to different criteria in order to select the best alternative(s).
Abstract: In this paper, a new method, called best-worst method (BWM), is proposed to solve multi-criteria decision-making (MCDM) problems. In an MCDM problem, a number of alternatives are evaluated with respect to a number of criteria in order to select the best alternative(s). According to BWM, the best (e.g. most desirable, most important) and the worst (e.g. least desirable, least important) criteria are identified first by the decision-maker. Pairwise comparisons are then conducted between each of these two criteria (best and worst) and the other criteria. A maximin problem is then formulated and solved to determine the weights of different criteria. The weights of the alternatives with respect to different criteria are obtained using the same process. The final scores of the alternatives are derived by aggregating the weights from different sets of criteria and alternatives, based on which the best alternative is selected. A consistency ratio is proposed for the BWM to check the reliability of the comparisons. To illustrate the proposed method and evaluate its performance, we used some numerical examples and a real-world decision-making problem (mobile phone selection). For the purpose of comparison, we chose AHP (analytic hierarchy process), which is also a pairwise comparison-based method. Statistical results show that BWM performs significantly better than AHP with respect to the consistency ratio and the other evaluation criteria: minimum violation, total deviation, and conformity. The salient features of the proposed method, compared to the existing MCDM methods, are: (1) it requires less comparison data; and (2) it leads to more consistent comparisons, which means that it produces more reliable results.

2,214 citations
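The weight-derivation step the abstract describes can be sketched numerically. The Python below is a toy illustration, not the authors' implementation: it brute-force searches the three-criterion weight simplex for weights minimizing the largest deviation between the decision-maker's comparisons and the weight ratios (the role the maximin programme plays in BWM). The comparison vectors are invented.

```python
# Toy BWM sketch: find weights minimizing the maximum ratio deviation
# via brute-force search over the 3-criterion weight simplex.
# (Illustrative only; the paper solves a maximin programme instead.)

def bwm_weights(a_best, a_worst, best, worst, step=0.005):
    n = len(a_best)
    best_w, best_xi = None, float("inf")
    steps = int(round(1.0 / step))
    for i in range(1, steps):
        for j in range(1, steps - i):
            w = [i * step, j * step, 1.0 - (i + j) * step]
            if w[2] <= 0.0:          # defensive: stay inside the simplex
                continue
            xi = max(
                max(abs(w[best] / w[k] - a_best[k]) for k in range(n)),
                max(abs(w[k] / w[worst] - a_worst[k]) for k in range(n)),
            )
            if xi < best_xi:
                best_xi, best_w = xi, w
    return best_w, best_xi

# Hypothetical judgements: criterion 0 is best, criterion 2 is worst.
a_best = [1.0, 2.0, 8.0]   # best-to-others comparisons
a_worst = [8.0, 4.0, 1.0]  # others-to-worst comparisons
w, xi = bwm_weights(a_best, a_worst, best=0, worst=2)
```

A small `xi` plays the role of the consistency indicator here: the judgements above are fully consistent (a_Bj · a_jW equals a_BW for every j), so weights with near-zero deviation exist.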


Journal ArticleDOI
TL;DR: A new R package is presented, called related, that can calculate relatedness based on seven estimators, can account for genotyping errors, missing data and inbreeding, and can estimate 95% confidence intervals.
Abstract: Analyses of pairwise relatedness represent a key component to addressing many topics in biology. However, such analyses have been limited because most available programs provide a means to estimate relatedness based on only a single estimator, making comparison across estimators difficult. In addition, all programs to date have been platform-specific, working only on a specific operating system. This has the undesirable outcome of making the choice of relatedness estimator limited by operating system preference, rather than being based on scientific rationale. Here, we present a new R package, called related, that can calculate relatedness based on seven estimators, can account for genotyping errors, missing data and inbreeding, and can estimate 95% confidence intervals. Moreover, simulation functions are provided that allow for easy comparison of the performance of different estimators and for analyses of how much resolution to expect from a given data set. Because this package works in R, it is platform independent. Combined, this functionality should allow for more appropriate analyses and interpretation of pairwise relatedness and will also allow for the integration of relatedness data into larger R workflows.

296 citations
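To give a flavor of what a pairwise relatedness computation involves, here is a deliberately simplified Python sketch: an identity-by-state (IBS) sharing score with missing-data handling. It is not one of the seven estimators implemented in related, and the genotype vectors are made up.

```python
# Toy pairwise-relatedness proxy: average identity-by-state sharing between
# two diploid genotype vectors. Genotypes are coded 0/1/2 (alternate-allele
# count); None marks missing data and is skipped, as the abstract's
# missing-data handling would require.

def ibs_similarity(g1, g2):
    shared, used = 0.0, 0
    for a, b in zip(g1, g2):
        if a is None or b is None:      # skip loci with missing data
            continue
        used += 1
        shared += 1.0 - abs(a - b) / 2.0  # 1 if identical, 0.5 if one allele apart
    return shared / used if used else float("nan")

ind1 = [0, 1, 2, 2, None, 1]   # hypothetical individual 1
ind2 = [0, 1, 2, 0, 1, 2]      # hypothetical individual 2
sim = ibs_similarity(ind1, ind2)
```

Real estimators (e.g. those in related) correct such raw sharing for population allele frequencies; this sketch omits that entirely.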


Journal ArticleDOI
TL;DR: A framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances is presented, which includes a novel method for distance matrix simulation that permits modeling of within-group pairwise distance according to pre-specified population parameters.
Abstract: Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study.

271 citations
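The PERMANOVA machinery this framework builds on can be sketched compactly. The Python below is a simplified two-group illustration, not the authors' R package: it computes the pseudo-F statistic from a pairwise distance matrix by partitioning total versus within-group sums of squared distances, and assesses significance by permuting group labels. The toy data are invented.

```python
import random

# Minimal PERMANOVA sketch (one factor): pseudo-F from a pairwise distance
# matrix, with a label-permutation p-value.

def pseudo_f(dist, labels):
    n = len(labels)
    ss_total = sum(dist[i][j] ** 2 for i in range(n) for j in range(i + 1, n)) / n
    ss_within = 0.0
    for g in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == g]
        ss_within += sum(dist[i][j] ** 2 for i in idx for j in idx if i < j) / len(idx)
    a = len(set(labels))
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_p(dist, labels, n_perm=999, seed=0):
    rng = random.Random(seed)
    f_obs = pseudo_f(dist, labels)
    hits = sum(
        pseudo_f(dist, rng.sample(labels, len(labels))) >= f_obs
        for _ in range(n_perm)
    )
    return f_obs, (hits + 1) / (n_perm + 1)

xs = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]           # two clearly separated groups
labels = ["a", "a", "a", "b", "b", "b"]
dist = [[abs(x - y) for y in xs] for x in xs]  # 1-D Euclidean distances
f_obs, p = permanova_p(dist, labels)
```

Note that with only six samples the attainable p-value is bounded by the number of distinct label permutations, which is one reason power and sample-size planning of the kind the paper provides matters.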


Proceedings Article
07 Dec 2015
TL;DR: This work formulates and derives a highly efficient, conjugate gradient based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods.
Abstract: Low rank matrix completion plays a fundamental role in collaborative filtering applications, the key idea being that the variables lie in a smaller subspace than the ambient space. Often, additional information about the variables is known, and it is reasonable to assume that incorporating this information will lead to better predictions. We tackle the problem of matrix completion when pairwise relationships among variables are known, via a graph. We formulate and derive a highly efficient, conjugate gradient based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods. On the theoretical front, we show that such methods generalize weighted nuclear norm formulations, and derive statistical consistency guarantees. We validate our results on both real and synthetic datasets.

256 citations


Journal ArticleDOI
TL;DR: A consistency-driven automatic methodology to set interval numerical scales of 2-tuple linguistic term sets in the decision making problems with linguistic preference relations is proposed and interval multiplicative preference relations are used in the pairwise comparisons method.
Abstract: The 2-tuple linguistic modeling is a popular tool for computing with words in decision making. In order to deal with linguistic term sets that are not uniformly and symmetrically distributed, the numerical scale model has been developed to generalize the 2-tuple linguistic modeling. In the numerical scale model, the key task of the 2-tuple based models is the definition of a numerical scale function that establishes a one-to-one mapping between the linguistic information and numerical values. In this paper, we propose a consistency-driven automatic methodology to set interval numerical scales of 2-tuple linguistic term sets in decision making problems with linguistic preference relations. This consistency-driven methodology is based on a natural premise regarding the consistency of preference relations: if linguistic preference relations provided by experts are of acceptable consistency, the corresponding numerical preference relations transformed by the established interval numerical scale are also consistent. Compared with the existing approach based on canonical characteristic values, the consistency-driven methodology provides a new way to set the interval numerical scale without the need for semantics defined by interval type-2 fuzzy sets. Meanwhile, interval multiplicative preference relations are used in the pairwise comparison method, and the presented theory can be utilized there, as it provides a novel approach to automatically construct interval multiplicative preference relations. Finally, we present the framework for the use of the consistency-driven automatic methodology in linguistic group decision making problems, and two numerical examples are given to illustrate the feasibility and validity of this proposal.

243 citations


Journal ArticleDOI
TL;DR: This work introduces a method for learning pairwise interactions in a linear regression or logistic regression model in a manner that satisfies strong hierarchy: whenever an interaction is estimated to be nonzero, both its associated main effects are also included in the model.
Abstract: We introduce a method for learning pairwise interactions in a linear regression or logistic regression model in a manner that satisfies strong hierarchy: whenever an interaction is estimated to be nonzero, both its associated main effects are also included in the model. We motivate our approach by modeling pairwise interactions for categorical variables with arbitrary numbers of levels, and then show how we can accommodate continuous variables as well. Our approach allows us to dispense with explicitly applying constraints on the main effects and interactions for identifiability, which results in interpretable interaction models. We compare our method with existing approaches on both simulated and real data, including a genome-wide association study, all using our R package glinternet.

220 citations
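The strong-hierarchy constraint is easy to state in code. The Python sketch below uses illustrative variable names and simply checks (or repairs) the constraint on a candidate model; glinternet itself builds the constraint into the estimation via an overlapped group-lasso penalty rather than post-processing.

```python
# Strong hierarchy: an interaction (i, j) may be in the model only if
# both main effects i and j are in the model.

def satisfies_strong_hierarchy(main_effects, interactions):
    mains = set(main_effects)
    return all(i in mains and j in mains for i, j in interactions)

def enforce_strong_hierarchy(main_effects, interactions):
    """Return the main-effect set augmented so every selected
    interaction has both parent main effects included."""
    mains = set(main_effects)
    for i, j in interactions:
        mains.update((i, j))   # add any missing parent main effect
    return mains

# Hypothetical selected model: the interaction's second parent is missing.
selected_mains = {"age"}
selected_inters = [("age", "smoking")]
repaired = enforce_strong_hierarchy(selected_mains, selected_inters)
```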


Proceedings ArticleDOI
07 Dec 2015
TL;DR: This work proposes a framework that infers mid-level visual properties of an image by learning about ordinal relationships, and applies this framework to depth estimation, with good results, and intrinsic image decomposition, with state-of-the-art results.
Abstract: We propose a framework that infers mid-level visual properties of an image by learning about ordinal relationships. Instead of estimating metric quantities directly, the system proposes pairwise relationship estimates for points in the input image. These sparse probabilistic ordinal measurements are globalized to create a dense output map of continuous metric measurements. Estimating order relationships between pairs of points has several advantages over metric estimation: it solves a simpler problem than metric regression; humans are better at relative judgements, so data collection is easier; and ordinal relationships are invariant to monotonic transformations of the data, which increases the robustness of the system and provides qualitatively different information. We demonstrate that this framework works well on two important mid-level vision tasks: intrinsic image decomposition and depth from an RGB image. We train two systems with the same architecture on data from these two modalities. We provide an analysis of the resulting models, showing that they learn a number of simple rules to make ordinal decisions. We apply our algorithm to depth estimation, with good results, and intrinsic image decomposition, with state-of-the-art results.

178 citations


Journal ArticleDOI
TL;DR: Five axioms aimed at characterizing inconsistency indices are presented, and it is proved that some of the indices proposed in the literature satisfy these axioms, whereas others do not and therefore, in this view, may fail to correctly evaluate inconsistency.
Abstract: Pairwise comparisons are a well-known method for the representation of the subjective preferences of a decision maker. Evaluating their inconsistency has been a widely studied and discussed topic and several indices have been proposed in the literature to perform this task. As an acceptable level of consistency is closely related to the reliability of preferences, a suitable choice of an inconsistency index is a crucial phase in decision-making processes. The use of different methods for measuring consistency must be carefully evaluated, as it can affect the decision outcome in practical applications. In this paper, we present five axioms aimed at characterizing inconsistency indices. In addition, we prove that some of the indices proposed in the literature satisfy these axioms, whereas others do not, and therefore, in our view, they may fail to correctly evaluate inconsistency.

155 citations
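For concreteness, one simple inconsistency measure of the kind such axioms apply to can be sketched as follows. The geometric-mean weighting and the maximum-relative-deviation measure are illustrative choices, not a specific index from the paper.

```python
import math

# Sketch: measure inconsistency of a multiplicative pairwise comparison
# matrix A by deriving weights from row geometric means and reporting the
# largest relative deviation of a_ij from the ratio w_i / w_j.
# A perfectly consistent matrix (a_ij = w_i / w_j) scores 0.

def gm_weights(A):
    n = len(A)
    g = [math.prod(row) ** (1.0 / n) for row in A]
    s = sum(g)
    return [x / s for x in g]

def inconsistency(A):
    w = gm_weights(A)
    n = len(A)
    return max(
        abs(A[i][j] * w[j] / w[i] - 1.0)
        for i in range(n) for j in range(n)
    )

A_consistent = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
A_perturbed = [[1, 2, 4], [0.5, 1, 8], [0.25, 0.125, 1]]
```

Different published indices make different choices at both steps, which is exactly why an axiomatic characterization of acceptable behavior is useful.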


Journal ArticleDOI
TL;DR: This work presents a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning and involves a novel symmetric use of the group-lasso norm.
Abstract: We consider the problem of learning the structure of a pairwise graphical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. In previous work, authors have considered structure learning of Gaussian graphical models and structure learning of discrete models. Our approach is a natural generalization of these two lines of work to the mixed case. The penalization scheme involves a novel symmetric use of the group-lasso norm and follows naturally from a particular parameterization of the model. Supplementary materials for this article are available online.

154 citations


Journal Article
TL;DR: In this paper, a pairwise censored likelihood is used for consistent estimation of the extremes of space-time data under mild mixing conditions; this is illustrated by fitting an extension of a model of Schlather (2002) to hourly rainfall data.
Abstract: Max-stable processes are the natural analogues of the generalized extreme-value distribution when modelling extreme events in space and time. Under suitable conditions, these processes are asymptotically justified models for maxima of independent replications of random fields, and they are also suitable for the modelling of extreme measurements over high thresholds. This paper shows how a pairwise censored likelihood can be used for consistent estimation of the extremes of space-time data under mild mixing conditions, and illustrates this by fitting an extension of a model of Schlather (2002) to hourly rainfall data. A block bootstrap procedure is used for uncertainty assessment. Estimator efficiency is considered and the choice of pairs to be included in the pairwise likelihood is discussed. The proposed model fits the data better than some natural competitors.

153 citations


Journal ArticleDOI
TL;DR: In this article, stochastic geometry is used to analyze cooperation models where the positions of base stations follow a Poisson point process distribution and where Voronoi cells define the planar areas associated with them.
Abstract: Cooperation in cellular networks is a promising scheme to improve system performance, especially for cell-edge users. In this work, stochastic geometry is used to analyze cooperation models where the positions of base stations follow a Poisson point process distribution and where Voronoi cells define the planar areas associated with them. For the service of each user, either one or two base stations are involved. If two, these cooperate by exchange of user data and channel related information with conferencing over some backhaul link. Our framework generally allows for variable levels of channel information at the transmitters. This paper is focused on a case of limited information based on Willems' encoding. The total per-user transmission power is split between the two transmitters and a common message is encoded. The decision for a user to choose service with or without cooperation is directed by a family of geometric policies, depending on its relative position to its two closest base stations. An exact expression of the network coverage probability is derived. Numerical evaluation shows average coverage benefits of up to 17% compared to the non-cooperative case. Various other network problems of cellular cooperation, like the fully adaptive case, can be analyzed within our framework.

Journal ArticleDOI
TL;DR: A novel preference learning algorithm is designed to learn a confidence for each uncertain examination record with the help of transaction records and is called adaptive Bayesian personalized ranking (ABPR), which has the merits of uncertainty reduction on examination records and accurate pairwise preference learning on implicit feedbacks.
Abstract: Implicit feedbacks have recently received much attention in recommendation communities due to their close relationship with real industry problem settings. However, most works only exploit users’ homogeneous implicit feedbacks such as users’ transaction records from “bought” activities, and ignore the other type of implicit feedbacks like examination records from “browsed” activities. The latter are usually more abundant though they are associated with high uncertainty w.r.t. users’ true preferences. In this paper, we study a new recommendation problem called heterogeneous implicit feedbacks (HIF), where the fundamental challenge is the uncertainty of the examination records. As a response, we design a novel preference learning algorithm to learn a confidence for each uncertain examination record with the help of transaction records. Specifically, we generalize Bayesian personalized ranking (BPR), a seminal pairwise learning algorithm for homogeneous implicit feedbacks, and learn the confidence adaptively, which is thus called adaptive Bayesian personalized ranking (ABPR). ABPR has the merits of uncertainty reduction on examination records and accurate pairwise preference learning on implicit feedbacks. Experimental results on two public data sets show that ABPR is able to leverage uncertain examination records effectively, and can achieve better recommendation performance than the state-of-the-art algorithm on various ranking-oriented evaluation metrics.
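The pairwise learning step that ABPR generalizes, a plain BPR stochastic gradient update, can be sketched as below. The factor size, learning rate, regularization, and toy interaction data are assumptions; ABPR's adaptive confidence weighting of examination records is not shown.

```python
import math
import random

# Minimal BPR gradient step: for a triple (user u, preferred item i,
# less-preferred item j), push the score of i above the score of j by
# ascending the log-sigmoid of the score difference, with L2 regularization.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bpr_step(U, V, u, i, j, lr=0.05, reg=0.01):
    x_uij = dot(U[u], V[i]) - dot(U[u], V[j])
    g = 1.0 / (1.0 + math.exp(x_uij))   # gradient weight: sigmoid(-x_uij)
    for f in range(len(U[u])):
        du = g * (V[i][f] - V[j][f]) - reg * U[u][f]
        di = g * U[u][f] - reg * V[i][f]
        dj = -g * U[u][f] - reg * V[j][f]
        U[u][f] += lr * du
        V[i][f] += lr * di
        V[j][f] += lr * dj

# Hypothetical data: one user, one bought item, one unobserved item.
rng = random.Random(0)
k = 4
U = {"u1": [rng.gauss(0, 0.1) for _ in range(k)]}
V = {it: [rng.gauss(0, 0.1) for _ in range(k)] for it in ("bought", "other")}
for _ in range(200):
    bpr_step(U, V, "u1", "bought", "other")
```

After a few hundred steps the bought item scores above the unobserved one for this user; ABPR's contribution is to soften such updates for uncertain examination records instead of treating all implicit signals equally.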

Journal Article
TL;DR: The Copeland counting algorithm is analyzed, and it is shown to be an optimal method up to constant factors, meaning that it achieves the information-theoretic limits for recovering the top k-subset.
Abstract: We consider data in the form of pairwise comparisons of n items, with the goal of precisely identifying the top k items for some value of k < n, or alternatively, recovering a ranking of all the items. We analyze the Copeland counting algorithm that ranks the items in order of the number of pairwise comparisons won, and show it has three attractive features: (a) its computational efficiency leads to speed-ups of several orders of magnitude in computation time as compared to prior work; (b) it is robust in that theoretical guarantees impose no conditions on the underlying matrix of pairwise-comparison probabilities, in contrast to some prior work that applies only to the BTL parametric model; and (c) it is an optimal method up to constant factors, meaning that it achieves the information-theoretic limits for recovering the top k-subset. We extend our results to obtain sharp guarantees for approximate recovery under the Hamming distortion metric, and more generally, to any arbitrary error requirement that satisfies a simple and natural monotonicity condition.
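The Copeland counting procedure analyzed above is short enough to state directly: rank items by the number of pairwise comparisons won and read off the top-k set. The duel data below are invented.

```python
from collections import Counter

# Copeland counting: items are ranked by number of pairwise wins.

def copeland_topk(comparisons, k):
    """comparisons: iterable of (winner, loser) pairs."""
    wins = Counter(winner for winner, _ in comparisons)
    ranking = sorted(wins, key=lambda item: -wins[item])
    return ranking[:k]

duels = [("a", "b"), ("a", "c"), ("a", "d"),
         ("b", "c"), ("b", "d"), ("c", "d")]
top2 = copeland_topk(duels, 2)
```

Its simplicity is the point of the paper: this counting rule needs no parametric model of the comparison probabilities, yet achieves the information-theoretic limits for top-k recovery up to constant factors.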

Journal ArticleDOI
01 Oct 2015-Energy
TL;DR: In this article, an IVIF (interval-valued intuitionistic fuzzy) approach is proposed to deal with vagueness, ambiguity and subjectivity in the human evaluation processes.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: In this article, pairwise costs are added to the min-cost network flow framework for multi-object tracking, and a convex relaxation solution with an efficient rounding heuristic is proposed to give certificates of small suboptimality.
Abstract: Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the “tracking-by-detection” paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.

Journal ArticleDOI
TL;DR: The Analytic Hierarchy Process has been applied inconsistently in healthcare research and new insights are needed to determine which target group can best handle the challenges of the AHP.
Abstract: The Analytic Hierarchy Process (AHP), developed by Saaty in the late 1970s, is one of the methods for multi-criteria decision making. The AHP disaggregates a complex decision problem into different hierarchical levels. The weights for each criterion and alternative are judged in pairwise comparisons, and priorities are calculated by the eigenvector method. The slowly increasing application of the AHP was the motivation for this study to explore the current state of its methodology in the healthcare context. A systematic literature review was conducted by searching the PubMed and Web of Science databases for articles with the following keywords in their titles or abstracts: “Analytic Hierarchy Process,” “Analytical Hierarchy Process,” “multi-criteria decision analysis,” “multiple criteria decision,” “stated preference,” and “pairwise comparison.” In addition, we developed reporting criteria to indicate whether the authors reported important aspects and evaluated the resulting studies’ reporting. The systematic review resulted in 121 articles. The number of studies applying AHP has increased since 2005. Most studies were from Asia (almost 30 %), followed by the US (25.6 %). On average, the studies used 19.64 criteria throughout their hierarchical levels. Furthermore, we restricted a detailed analysis to those articles published within the last 5 years (n = 69). The mean number of participants in these studies was 109, whereas we identified major differences in how the surveys were conducted. The evaluation of reporting showed that the mean of reported elements was about 6.75 out of 10. Thus, 12 out of 69 studies reported fewer than half of the criteria. The AHP has been applied inconsistently in healthcare research. A minority of studies described all the relevant aspects. Thus, the statements in this review may be biased, as they are restricted to the information available in the papers.
Hence, further research is required to discover who should be interviewed and how, how inconsistent answers should be dealt with, and how the outcome and stability of the results should be presented. In addition, we need new insights to determine which target group can best handle the challenges of the AHP.
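The eigenvector priority calculation the review describes can be sketched with a plain power iteration; the 3x3 judgement matrix below is invented.

```python
# AHP priorities: the weight vector is the principal eigenvector of the
# pairwise comparison matrix, computed here by power iteration.

def ahp_priorities(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]   # renormalize to sum to 1
    return w

# Hypothetical reciprocal judgement matrix on Saaty's 1-9 scale.
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
weights = ahp_priorities(A)
```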

Journal ArticleDOI
TL;DR: This work proposes a unified alternating optimization framework for multi-GM and defines and uses two metrics related to graphwise and pairwise consistencies and shows two embodiments under the proposed framework that can cope with the nonfactorized and factorized affinity matrix, respectively.
Abstract: The problem of graph matching (GM) is NP-complete in general, and many approximate pairwise matching techniques have been proposed. For a general setting in real applications, it is typically required to find the consistent matching across a batch of graphs. Sequentially performing pairwise matching is prone to error propagation along the pairwise matching sequence, and the sequences generated in different pairwise matching orders can lead to contradictory solutions. Motivated by devising a robust and consistent multiple-GM model, we propose a unified alternating optimization framework for multi-GM. In addition, we define and use two metrics related to graphwise and pairwise consistencies. The former is used to find an appropriate reference graph, which induces a set of basis variables and launches the iteration procedure. The latter defines the order in which the considered graphs in the iterations are manipulated. We show two embodiments under the proposed framework that can cope with the nonfactorized and factorized affinity matrix, respectively. Our multi-GM model has two major characteristics: 1) the affinity information across multiple graphs is explored in each iteration by fixing part of the matching variables via a consistency-driven mechanism and 2) the framework is flexible to incorporate various existing pairwise GM solvers in an out-of-box fashion, and can also proceed with the output of other multi-GM methods. The experimental results on both synthetic data and real images empirically show that the proposed framework performs competitively with the state-of-the-art.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper considers the pairwise geometric relations between correspondences and proposes a strategy to incorporate these relations at significantly reduced computational cost, which makes it suitable for large-scale object retrieval.
Abstract: Spatial verification is a key step in boosting the performance of object-based image retrieval. It serves to eliminate unreliable correspondences between salient points in a given pair of images, and is typically performed by analyzing the consistency of spatial transformations between the image regions involved in individual correspondences. In this paper, we consider the pairwise geometric relations between correspondences and propose a strategy to incorporate these relations at significantly reduced computational cost, which makes it suitable for large-scale object retrieval. In addition, we combine the information on geometric relations from both the individual correspondences and pairs of correspondences to further improve the verification accuracy. Experimental results on three reference datasets show that the proposed approach results in a substantial performance improvement compared to the existing methods, without making concessions regarding computational efficiency.

Proceedings Article
25 Jul 2015
TL;DR: This work proposes an efficient, highly scalable algorithm that is an order of magnitude faster than existing alternatives for detecting complex events in unconstrained Internet videos in a more difficult zero-shot setting.
Abstract: We focus on detecting complex events in unconstrained Internet videos. While most existing works rely on the abundance of labeled training data, we consider a more difficult zero-shot setting where no training data is supplied. We first pre-train a number of concept classifiers using data from other sources. Then we evaluate the semantic correlation of each concept w.r.t. the event of interest. After further refinement to take prediction inaccuracy and discriminative power into account, we apply the discovered concept classifiers on all test videos and obtain multiple score vectors. These distinct score vectors are converted into pairwise comparison matrices and the nuclear norm rank aggregation framework is adopted to seek consensus. To address the challenging optimization formulation, we propose an efficient, highly scalable algorithm that is an order of magnitude faster than existing alternatives. Experiments on recent TRECVID datasets verify the superiority of the proposed approach.
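The conversion of per-concept score vectors into pairwise comparison matrices can be sketched simply. The score-difference encoding below is one plausible choice for illustration, not necessarily the encoding the paper uses.

```python
# Turn one classifier's score vector over videos into a pairwise comparison
# matrix: entry (i, j) records how strongly video i is preferred to video j.
# The paper's rank aggregation then seeks a consensus across such matrices.

def scores_to_comparisons(scores):
    n = len(scores)
    return [[scores[i] - scores[j] for j in range(n)] for i in range(n)]

C = scores_to_comparisons([0.9, 0.1, 0.5])  # hypothetical concept scores
```

The resulting matrix is skew-symmetric, which is what makes rank aggregation over several such matrices well-posed: each one encodes a full set of pairwise preferences rather than raw, incomparable score scales.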

Journal ArticleDOI
TL;DR: This work presents a software tool, NgsRelate, for estimating pairwise relatedness from NGS data that provides maximum likelihood estimates that are based on genotype likelihoods instead of genotypes and thereby takes the inherent uncertainty of the genotypes into account.
Abstract: Motivation: Pairwise relatedness estimation is important in many contexts such as disease mapping and population genetics. However, all existing estimation methods are based on called genotypes, which is not ideal for next-generation sequencing (NGS) data of low depth from which genotypes cannot be called with high certainty. Results: We present a software tool, NgsRelate, for estimating pairwise relatedness from NGS data. It provides maximum likelihood estimates that are based on genotype likelihoods instead of genotypes and thereby takes the inherent uncertainty of the genotypes into account. Using both simulated and real data, we show that NgsRelate provides markedly better estimates for low-depth NGS data than two state-of-the-art genotype-based methods. Availability: NgsRelate is implemented in C++ and is available under the GNU license at www.popgen.dk/software. Contact: ida@binf.ku.dk Supplementary information: Supplementary data are available at Bioinformatics online.

Proceedings Article
25 Jul 2015
TL;DR: In this article, the generalized calibration for AUC optimization is introduced, and it is shown that it is a necessary condition for consistency of AUC, which can be used to study the consistency of various surrogate losses.
Abstract: AUC (Area Under ROC Curve) has been an important criterion widely used in diverse learning tasks. To optimize AUC, many learning approaches have been developed, most working with pairwise surrogate losses. Thus, it is important to study the AUC consistency based on minimizing pairwise surrogate losses. In this paper, we introduce the generalized calibration for AUC optimization, and prove that it is a necessary condition for AUC consistency. We then provide a sufficient condition for AUC consistency, and show its usefulness in studying the consistency of various surrogate losses, as well as the invention of new consistent losses. We further derive regret bounds for exponential and logistic losses, and present regret bounds for more general surrogate losses in the realizable setting. Finally, we prove regret bounds that disclose the equivalence between the pairwise exponential loss of AUC and univariate exponential loss of accuracy.
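The pairwise nature of AUC, which is why pairwise surrogate losses arise in its optimization at all, is visible in a direct implementation: AUC is the fraction of (positive, negative) pairs the scorer orders correctly, with ties counted as one half. The scores and labels below are invented.

```python
# AUC from its pairwise definition: fraction of positive-negative pairs
# ranked correctly (ties count 0.5). Surrogate losses replace the 0/1
# pair-ordering indicator with a differentiable function of the pair.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    total = len(pos) * len(neg)
    correct = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return correct / total

val = auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])
```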

Journal ArticleDOI
TL;DR: In this article, a multi-battle team game is considered where players from two rival teams form pairwise matches to fight in distinct component battles, which are carried out sequentially or partially.
Abstract: We consider a multi-battle team contest in which players from two rival teams form pairwise matches to fight in distinct component battles, which are carried out sequentially or (partially...

Journal ArticleDOI
TL;DR: This paper proposes a novel relevance metric learning method with listwise constraints (RMLLCs) by adopting listwise similarities, which consist of the similarity list of each image with respect to all remaining images, and develops an efficient alternating iterative algorithm to jointly learn the optimal metric and the rectification term.
Abstract: Person re-identification aims to match people across non-overlapping camera views, which is an important but challenging task in video surveillance. In order to obtain a robust metric for matching, metric learning has been introduced recently. Most existing works focus on seeking a Mahalanobis distance by employing sparse pairwise constraints, which utilize image pairs with the same person identity as positive samples, and select a small portion of those with different identities as negative samples. However, this training strategy has abandoned a large amount of discriminative information, and ignored the relative similarities. In this paper, we propose a novel relevance metric learning method with listwise constraints (RMLLCs) by adopting listwise similarities, which consist of the similarity list of each image with respect to all remaining images. By virtue of listwise similarities, RMLLC could capture all pairwise similarities, and consequently learn a more discriminative metric by enforcing the metric to conserve predefined similarity lists in a low-dimensional projection subspace. Despite the performance enhancement, RMLLC using predefined similarity lists fails to capture the relative relevance information, which is often unavailable in practice. To address this problem, we further introduce a rectification term to automatically exploit the relative similarities, and develop an efficient alternating iterative algorithm to jointly learn the optimal metric and the rectification term. Extensive experiments on four publicly available benchmarking data sets are carried out and demonstrate that the proposed method is significantly superior to the state-of-the-art approaches. The results also show that the introduction of the rectification term could further boost the performance of RMLLC.

Journal ArticleDOI
TL;DR: A framework for the automatic registration of multiple terrestrial laser scans that can handle arbitrary point clouds with reasonable pairwise overlap, without knowledge about their initial orientation and without the need for artificial markers or other specific objects is presented.
Abstract: In this paper we present a framework for the automatic registration of multiple terrestrial laser scans. The proposed method can handle arbitrary point clouds with reasonable pairwise overlap, without knowledge about their initial orientation and without the need for artificial markers or other specific objects. The framework is divided into a coarse and a fine registration part, which each start with pairwise registration and then enforce consistent global alignment across all scans. While we put forward a complete, functional registration system, the novel contribution of the paper lies in the coarse global alignment step. Merging multiple scans into a consistent network creates loops along which the relative transformations must add up. We pose the task of finding a global alignment as picking the best candidates from a set of putative pairwise registrations, such that they satisfy the loop constraints. This yields a discrete optimization problem that can be solved efficiently with modern combinatorial methods. Having found a coarse global alignment in this way, the framework proceeds by pairwise refinement with standard ICP, followed by global refinement to evenly spread the residual errors. The framework was tested on six challenging, real-world datasets. The discrete global alignment step effectively detects, removes and corrects failures of the pairwise registration procedure, finally producing a globally consistent coarse scan network which can be used as an initial guess for the highly non-convex refinement. Our overall system reaches success rates close to 100% at acceptable runtimes of less than 1 h, even in challenging conditions such as scanning in the forest.
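The loop constraint the abstract refers to can be sketched concretely: composing the relative rigid transformations around any cycle of scans must give the identity, and a large residual flags a faulty pairwise registration. The transforms below are hypothetical, chosen only to illustrate the check:

```python
import numpy as np

def rigid(theta_deg, t):
    """4x4 homogeneous transform: rotation about z by theta_deg, translation t."""
    th = np.deg2rad(theta_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
    T[:3, 3] = t
    return T

# Putative pairwise registrations around a loop of three scans A -> B -> C -> A.
T_ab = rigid(30.0, [1.0, 0.0, 0.0])
T_bc = rigid(45.0, [0.0, 2.0, 0.0])
T_ca = np.linalg.inv(T_ab @ T_bc)          # consistent closing edge

loop = T_ab @ T_bc @ T_ca                  # composes to the identity
residual = np.linalg.norm(loop - np.eye(4))

# A faulty registration (extra 5 deg rotation and 0.3 m offset) breaks the loop.
T_ca_bad = rigid(5.0, [0.3, 0.0, 0.0]) @ T_ca
residual_bad = np.linalg.norm(T_ab @ T_bc @ T_ca_bad - np.eye(4))
```

In the paper's setting, picking one candidate per edge so that all such loop residuals stay small is what yields the discrete optimization problem; this snippet only shows the residual test itself.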

Journal ArticleDOI
TL;DR: It is shown that the pairwise model and the analytic results can be generalized to an arbitrary distribution of the infectious times, using integro-differential equations, and this leads to a general expression for the final epidemic size.
Abstract: In this Letter, a generalization of pairwise models to non-Markovian epidemics on networks is presented. For the case of infectious periods of fixed length, the resulting pairwise model is a system of delay differential equations, which shows excellent agreement with results based on stochastic simulations. Furthermore, we analytically compute a new R0-like threshold quantity and an analytical relation between this and the final epidemic size. Additionally, we show that the pairwise model and the analytic results can be generalized to an arbitrary distribution of the infectious times, using integro-differential equations, and this leads to a general expression for the final epidemic size. By showing the rigorous link between non-Markovian dynamics and pairwise delay differential equations, we provide the framework for a more systematic understanding of non-Markovian dynamics.
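For context, the standard Markovian pairwise SIR model that the Letter generalizes can be sketched as follows (a widely used closure at the level of triples; here τ is the per-link transmission rate, γ the recovery rate, n the mean degree, and [X], [XY], [XYZ] count nodes, links and triples in the given states — notation assumed, not quoted from the Letter):

```latex
\begin{aligned}
\dot{[S]} &= -\tau\,[SI], \qquad \dot{[I]} = \tau\,[SI] - \gamma\,[I],\\
\dot{[SI]} &= \tau\bigl([SSI] - [ISI]\bigr) - \tau\,[SI] - \gamma\,[SI],\\
\dot{[SS]} &= -2\tau\,[SSI],\\
[ASB] &\approx \frac{n-1}{n}\,\frac{[AS]\,[SB]}{[S]} \quad \text{(closure for triples)}.
\end{aligned}
```

In the fixed-length infectious period case treated in the Letter, the γ-recovery terms are replaced by delayed terms evaluated at t − σ (with σ the infectious period), which is what turns the system into delay differential equations.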

Proceedings Article
07 Dec 2015
TL;DR: The approach is based on constructing a surrogate probability distribution over rankings based on a sorting procedure, for which the pairwise marginals provably coincide with the marginals of the Plackett-Luce distribution.
Abstract: We study the problem of online rank elicitation, assuming that rankings of a set of alternatives obey the Plackett-Luce distribution. Following the setting of the dueling bandits problem, the learner is allowed to query pairwise comparisons between alternatives, i.e., to sample pairwise marginals of the distribution in an online fashion. Using this information, the learner seeks to reliably predict the most probable ranking (or top-alternative). Our approach is based on constructing a surrogate probability distribution over rankings based on a sorting procedure, for which the pairwise marginals provably coincide with the marginals of the Plackett-Luce distribution. In addition to a formal performance and complexity analysis, we present first experimental studies.
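A useful fact behind this setting is that the pairwise marginals of a Plackett-Luce distribution with skill parameters v have the Bradley-Terry form P(i before j) = v_i / (v_i + v_j). The minimal sketch below (toy parameters, sequential sampling) samples full PL rankings and checks one pairwise marginal empirically:

```python
import random

def sample_pl_ranking(v, rng):
    """Sample a ranking from Plackett-Luce: repeatedly draw the next item
    with probability proportional to its weight among those remaining."""
    items = list(range(len(v)))
    ranking = []
    while items:
        weights = [v[i] for i in items]
        pick = rng.choices(items, weights=weights, k=1)[0]
        items.remove(pick)
        ranking.append(pick)
    return ranking

rng = random.Random(0)
v = [3.0, 1.0, 2.0]              # toy PL skill parameters for items 0, 1, 2

n, wins_01 = 20000, 0
for _ in range(n):
    r = sample_pl_ranking(v, rng)
    if r.index(0) < r.index(1):  # item 0 ranked before item 1
        wins_01 += 1

empirical = wins_01 / n
exact = v[0] / (v[0] + v[1])     # pairwise marginal P(0 before 1) = 0.75
```

This is exactly the kind of pairwise marginal the dueling-bandits learner samples online; the paper's contribution is the sorting-based surrogate whose marginals provably match these.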

Proceedings Article
06 Jul 2015
TL;DR: A large-scale non-convex implementation of AltSVM is developed that trains a factored form of the matrix via alternating minimization, and scales and parallelizes very well to large problem settings.
Abstract: In this paper we consider the collaborative ranking setting: a pool of users each provides a small number of pairwise preferences between d possible items; from these we need to predict each user's preferences for items they have not yet seen. We do so by fitting a rank r score matrix to the pairwise data, and provide two main contributions: (a) we show that an algorithm based on convex optimization provides good generalization guarantees once each user provides as few as O(r log² d) pairwise comparisons - essentially matching the sample complexity required in the related matrix completion setting (which uses actual numerical as opposed to pairwise information), and (b) we develop a large-scale non-convex implementation, which we call AltSVM, that trains a factored form of the matrix via alternating minimization (which we show reduces to alternating SVM problems), and scales and parallelizes very well to large problem settings. It also outperforms common baselines on many moderately large popular collaborative filtering datasets in both NDCG and in other measures of ranking performance.
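The "fix one factor, improve the other" idea can be sketched in a few lines. This is a simplified illustration under assumed toy data, using plain subgradient steps on the pairwise hinge loss rather than the paper's actual SVM subproblems:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, r = 4, 6, 2

U = rng.standard_normal((n_users, r)) * 0.1   # user factors
V = rng.standard_normal((n_items, r)) * 0.1   # item factors; scores X = U V^T

# Observed pairwise preferences: (user, preferred item, less-preferred item).
prefs = [(0, 1, 4), (0, 2, 5), (1, 0, 3), (2, 2, 1), (3, 5, 0)]

def hinge_loss(U, V, prefs):
    """Sum of hinge losses max(0, 1 - (x_ui - x_uj)) over observed pairs."""
    return sum(max(0.0, 1.0 - U[u] @ (V[i] - V[j])) for u, i, j in prefs)

def update_U(U, V, prefs, lr=0.2, steps=2000):
    """One half of alternating minimization: V fixed, improve U by
    subgradient steps on the violated pairwise margins."""
    U = U.copy()
    for _ in range(steps):
        for u, i, j in prefs:
            if U[u] @ (V[i] - V[j]) < 1.0:   # margin violated
                U[u] += lr * (V[i] - V[j])   # subgradient step
    return U

before = hinge_loss(U, V, prefs)
U = update_U(U, V, prefs)
after = hinge_loss(U, V, prefs)
```

AltSVM alternates this with the symmetric update of V (each half reducing to SVM problems in the paper); the sketch only shows why fixing one factor makes the other half tractable.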

Journal ArticleDOI
TL;DR: Multi-Criteria Decision Analysis was applied during the selection of nuclear power plants site using GIS software and the El Dabaa site was found to be most suitable, followed by the East El Negila site on the Mediterranean Sea.

Journal ArticleDOI
TL;DR: A simulation approach is used to compare the results of AHP with MCAHP under different levels of uncertainty, and shows that as long as the variation in different pairwise comparisons is less than 0.24, the performance of AHP is not statistically different from the performance of MCAHP.
Abstract: Despite the extensive application of Monte Carlo analytic hierarchy process (MCAHP) in various fields of decision making, its performance has not been compared with the classic analytic hierarchy process (AHP). Both of these methods are heavily affected by individual or group preferences and thus provide subjective rankings. Since the mere difference between their results does not necessarily warrant the superiority of one against the other, a reliable and robust ranking of alternatives should be available as a comparison basis so that the results of these two methods can be evaluated. In this paper, we use a simulation approach to compare the results of AHP with MCAHP under different levels of uncertainty. We validate our simulation results by comparing the performance of these two alternatives against a real-world, reliable ranking of blogs. Our simulation results show that as long as the variation in different pairwise comparisons is less than 0.24, the performance of AHP is not statistically different from the performance of MCAHP. When the uncertainty in terms of variation grows beyond 0.24, MCAHP provides more precise rankings. The findings of this research add to the current body of knowledge in the multicriteria decision analysis as well as Information Systems literature and provide insights for managerial applications of these techniques.
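The classic AHP step that both methods build on can be sketched briefly: derive priority weights from a reciprocal pairwise comparison matrix via its principal eigenvector, and check Saaty's consistency ratio. The 3×3 matrix below is a made-up example on the 1-9 scale, not data from the paper:

```python
import numpy as np

# Reciprocal pairwise comparison matrix: criterion 0 is judged 3x as
# important as criterion 1 and 5x as important as criterion 2, etc.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]              # principal eigenvalue (>= n)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # priority weights, summing to 1

n = A.shape[0]
CI = (lam_max - n) / (n - 1)           # Saaty's consistency index
RI = 0.58                              # random index for n = 3 (Saaty's table)
CR = CI / RI                           # consistency ratio; < 0.10 is acceptable
```

MCAHP repeats this computation over many Monte Carlo draws of the comparison values (modeling judgment uncertainty) and aggregates the resulting weight vectors, which is why its advantage only shows once the variation in the comparisons grows large.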

Journal ArticleDOI
TL;DR: The proposed method of localizing the inconsistency may conceivably be of relevance for nonclassical logics (e.g., paraconsistent logic) and for uncertainty reasoning since it accommodates inconsistency by treating inconsistent data as still useful information.
Abstract: One of the major challenges for collective intelligence is inconsistency, which is unavoidable whenever subjective assessments are involved. Pairwise comparisons allow one to represent such subjective assessments and to process them by analyzing, quantifying and identifying the inconsistencies. We propose using smaller scales for pairwise comparisons and provide mathematical and practical justifications for this change. Our postulate's aim is to initiate a paradigm shift in the search for a better scale construction for pairwise comparisons. Beyond pairwise comparisons, the results presented may be relevant to other methods using subjective scales. Keywords: pairwise comparisons, collective intelligence, scale, subjective assessment, inaccuracy, inconsistency.
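One standard way to localize inconsistency in a pairwise comparisons matrix, in the spirit of this line of work, is a triad-based index: a triad (x, y, z) with y = m_ik, x = m_ij, z = m_jk is fully consistent only when y = x·z. The sketch below (with a made-up 3×3 matrix) scans all triads and reports the worst one; the exact index definition here is one common variant, assumed rather than quoted from the paper:

```python
from itertools import combinations

def triad_inconsistency(x, y, z):
    """Inconsistency of a triad (x, y, z); full consistency means y == x * z.
    Returns 0 for a consistent triad and approaches 1 for a badly broken one."""
    return min(abs(1 - y / (x * z)), abs(1 - (x * z) / y))

def worst_triad(M):
    """Scan all triads of a reciprocal pairwise comparison matrix and return
    the largest triad inconsistency together with its index triple."""
    n = len(M)
    worst, where = 0.0, None
    for i, j, k in combinations(range(n), 3):
        ii = triad_inconsistency(M[i][j], M[i][k], M[j][k])
        if ii > worst:
            worst, where = ii, (i, j, k)
    return worst, where

# m_02 = 2 where consistency would require m_01 * m_12 = 1.5 * 2 = 3.
M = [
    [1.0,   1.5, 2.0],
    [1/1.5, 1.0, 2.0],
    [0.5,   0.5, 1.0],
]
worst, where = worst_triad(M)
```

Because the index is computed per triad, it does exactly what the abstract describes: it localizes the inconsistency instead of discarding the data, leaving the remaining judgments usable. Smaller comparison scales, as the paper argues, bound how large such triad inconsistencies can become.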