
Showing papers on "Pairwise comparison" published in 2017


Proceedings ArticleDOI
TL;DR: This paper proposes a unified method, TransRec, to model such third-order relationships for large-scale sequential prediction, and embeds items into a 'transition space' where users are modeled as translation vectors operating on item sequences.
Abstract: Modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems. One classical setting is predicting users' personalized sequential behavior (or 'next-item' recommendation), where the challenges mainly lie in modeling 'third-order' interactions between a user, her previously visited item(s), and the next item to consume. Existing methods typically decompose these higher-order interactions into a combination of pairwise relationships, by way of which user preferences (user-item interactions) and sequential patterns (item-item interactions) are captured by separate components. In this paper, we propose a unified method, TransRec, to model such third-order relationships for large-scale sequential prediction. Methodologically, we embed items into a 'transition space' where users are modeled as translation vectors operating on item sequences. Empirically, this approach outperforms the state-of-the-art on a wide spectrum of real-world datasets. Data and code are available at https://sites.google.com/a/eng.ucsd.edu/ruining-he/.
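
As a rough illustration of the translation idea in the abstract above, here is a minimal numpy sketch of translation-based next-item scoring. The names (gamma, beta, t_global, t_user) and all dimensions are illustrative assumptions, not the authors' released code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 100, 16

gamma = rng.normal(scale=0.1, size=(n_items, dim))  # item embeddings in the 'transition space'
beta = rng.normal(scale=0.1, size=n_items)          # item bias terms
t_global = rng.normal(scale=0.1, size=dim)          # translation vector shared by all users
t_user = rng.normal(scale=0.1, size=dim)            # one user's personal offset

def next_item_scores(prev_item):
    """Score every candidate next item given the previously consumed item.

    Translation idea: the user 'translates' the previous item's embedding,
    and candidates closer to the translated point (plus a bias) score higher.
    """
    translated = gamma[prev_item] + t_global + t_user
    dist = np.linalg.norm(gamma - translated, axis=1)
    return beta - dist

scores = next_item_scores(prev_item=42)
print("top-5 recommended next items:", np.argsort(-scores)[:5])
```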

268 citations


Journal ArticleDOI
TL;DR: A basic explanation of how network meta-analysis is conducted is provided, highlighting its risks and benefits for evidence-based practice, including information on the evolution of statistical methods, assumptions, and steps for performing the analysis.
Abstract: Systematic reviews and pairwise meta-analyses of randomized controlled trials, at the intersection of clinical medicine, epidemiology and statistics, are positioned at the top of the evidence-based practice hierarchy. They are important tools for drug approval, for the formulation of clinical protocols and guidelines, and for decision-making. However, this traditional technique yields only part of the information that clinicians, patients and policy-makers need to make informed decisions, since it usually compares only two interventions at a time. On the market, regardless of the clinical condition under evaluation, many interventions are usually available, and few of them have been studied in head-to-head trials. This scenario precludes drawing conclusions about the full profile (e.g. efficacy and safety) of all interventions compared. The recent development and introduction of a new technique, usually referred to as network meta-analysis, indirect meta-analysis, or multiple or mixed treatment comparisons, has allowed the estimation of metrics for all possible comparisons in the same model, simultaneously gathering direct and indirect evidence. Over the last years this statistical tool has matured as a technique, with models available for all types of raw data, producing different pooled effect measures, using both frequentist and Bayesian frameworks, and implemented in different software packages. However, the conduct, reporting and interpretation of network meta-analyses still pose multiple challenges that should be carefully considered, especially because this technique inherits all the assumptions of pairwise meta-analysis but with increased complexity. Thus, we aim to provide a basic explanation of how network meta-analysis is conducted, highlighting its risks and benefits for evidence-based practice, including information on the evolution of statistical methods, assumptions, and the steps for performing the analysis.
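
To make the notion of combining direct and indirect evidence concrete, here is a minimal sketch of the classic Bucher adjusted indirect comparison through a common comparator, with purely illustrative numbers; network meta-analysis generalizes this idea to a whole network of treatments:

```python
import numpy as np

# Direct estimates (log odds ratios) and standard errors from two trials:
# A vs B and C vs B. The common comparator B allows an indirect A vs C estimate.
d_AB, se_AB = -0.50, 0.15  # treatment A vs comparator B (illustrative)
d_CB, se_CB = -0.20, 0.18  # treatment C vs comparator B (illustrative)

# Bucher indirect comparison: effects subtract, variances add.
d_AC = d_AB - d_CB
se_AC = np.sqrt(se_AB**2 + se_CB**2)

z = 1.96  # ~95% normal interval
print(f"indirect A vs C: {d_AC:.2f} "
      f"(95% CI {d_AC - z*se_AC:.2f} to {d_AC + z*se_AC:.2f})")
```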

251 citations


Journal ArticleDOI
TL;DR: Rank Centrality as mentioned in this paper is an iterative rank aggregation algorithm for discovering scores for objects (or items) from pairwise comparisons; it has a natural random walk interpretation over the graph of objects, with an edge present between a pair of objects if they are compared.
Abstract: The question of aggregating pairwise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers (e.g., MSR's TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding 'scores' for each object (e.g., a player's rating) is of interest for understanding the intensity of the preferences. In this paper, we propose Rank Centrality, an iterative rank aggregation algorithm for discovering scores for objects (or items) from pairwise comparisons. The algorithm has a natural random walk interpretation over the graph of objects, with an edge present between a pair of objects if they are compared; the score, which we call Rank Centrality, of an object turns out to be its stationary probability under this random walk. To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model.
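
A minimal numpy sketch of the random walk described above, on illustrative win probabilities for four items; the stationary distribution of the walk is the Rank Centrality score vector:

```python
import numpy as np

# Empirical pairwise win probabilities: W[i, j] = fraction of comparisons
# between i and j that j won (illustrative data for 4 items).
W = np.array([
    [0.0, 0.6, 0.7, 0.8],
    [0.4, 0.0, 0.6, 0.7],
    [0.3, 0.4, 0.0, 0.6],
    [0.2, 0.3, 0.4, 0.0],
])
n = W.shape[0]
d_max = n - 1  # every pair is compared here; keeps row sums <= 1

# Random-walk transition matrix: from i, jump to j with prob W[i, j] / d_max,
# stay at i with the remaining mass.
P = W / d_max
np.fill_diagonal(P, 1.0 - P.sum(axis=1))

# Stationary distribution via power iteration; this is the score vector.
pi = np.full(n, 1.0 / n)
for _ in range(1000):
    pi = pi @ P
pi /= pi.sum()
print("scores:", np.round(pi, 3), "-> ranking (best first):", np.argsort(-pi))
```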

208 citations


Proceedings ArticleDOI
27 Aug 2017
TL;DR: TransRec as discussed by the authors embeds items into a transition space, where users are modeled as translation vectors operating on item sequences, to predict users' personalized sequential behavior (or "next-item" recommendation) at large scale.
Abstract: Modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems. One classical setting is predicting users' personalized sequential behavior (or 'next-item' recommendation), where the challenges mainly lie in modeling 'third-order' interactions between a user, her previously visited item(s), and the next item to consume. Existing methods typically decompose these higher-order interactions into a combination of pairwise relationships, by way of which user preferences (user-item interactions) and sequential patterns (item-item interactions) are captured by separate components. In this paper, we propose a unified method, TransRec, to model such third-order relationships for large-scale sequential prediction. Methodologically, we embed items into a 'transition space' where users are modeled as translation vectors operating on item sequences. Empirically, this approach outperforms the state-of-the-art on a wide spectrum of real-world datasets. Data and code are available at https://sites.google.com/a/eng.ucsd.edu/ruining-he/.

204 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this article, the optimum of a smoothed network flow problem is expressed as a differentiable function of the pairwise association costs, which can then be learned from data.
Abstract: Data association problems are an important component of many computer vision applications, with multi-object tracking being one of the most prominent examples. A typical approach to data association involves finding a graph matching or network flow that minimizes a sum of pairwise association costs, which are often either hand-crafted or learned as linear functions of fixed features. In this work, we demonstrate that it is possible to learn features for network-flow-based data association via backpropagation, by expressing the optimum of a smoothed network flow problem as a differentiable function of the pairwise association costs. We apply this approach to multi-object tracking with a network flow formulation. Our experiments demonstrate that we are able to successfully learn all cost functions for the association problem in an end-to-end fashion, and that they outperform hand-crafted costs in all settings. The integration and combination of various sources of input become easy, and the cost functions can be learned entirely from data, alleviating tedious hand-design of costs.
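
The paper's specific contribution is differentiating through a smoothed network flow; as a loose, generic stand-in for that idea (smoothing a discrete matching so that its optimum varies smoothly with the pairwise costs), here is an entropy-regularized soft assignment computed with Sinkhorn iterations. This is a swapped-in illustration, not the authors' formulation:

```python
import numpy as np

def sinkhorn_soft_assignment(cost, reg=0.1, n_iters=200):
    """Entropy-regularized (smoothed) assignment between two equal-size sets.

    As reg -> 0 the result approaches the hard 0/1 assignment minimizing the
    summed pairwise costs; for reg > 0 the output is a smooth function of the
    costs, so gradients could flow back into a learned cost model.
    """
    K = np.exp(-cost / reg)
    u = np.ones(cost.shape[0])
    v = np.ones(cost.shape[1])
    for _ in range(n_iters):  # alternate row/column normalizations
        u = 1.0 / (K @ v)
        v = 1.0 / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)  # doubly stochastic soft assignment

rng = np.random.default_rng(1)
cost = rng.random((4, 4))
P = sinkhorn_soft_assignment(cost)
print(np.round(P, 2), "row sums:", np.round(P.sum(axis=1), 2))
```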

193 citations


Journal ArticleDOI
TL;DR: In this paper, a penalized approach for subgroup analysis based on a regression model is proposed, in which heterogeneity is driven by unobserved latent factors and thus can be represented by using subject-specific intercepts.
Abstract: An important step in developing individualized treatment strategies is the correct identification of subgroups of a heterogeneous population to allow specific treatment for each subgroup. This article considers the problem using samples drawn from a population consisting of subgroups with different mean values, along with certain covariates. We propose a penalized approach for subgroup analysis based on a regression model, in which heterogeneity is driven by unobserved latent factors and thus can be represented by using subject-specific intercepts. We apply concave penalty functions to pairwise differences of the intercepts. This procedure automatically divides the observations into subgroups. To implement the proposed approach, we develop an alternating direction method of multipliers algorithm with concave penalties and demonstrate its convergence. We also establish the theoretical properties of our proposed estimator and determine the order requirement of the minimal difference of signals between groups.
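
A toy sketch of the pairwise-fusion idea: penalize all pairwise differences of subject-specific intercepts so that fitted intercepts merge into subgroups. For brevity this uses a convex L1 penalty and a generic optimizer, whereas the paper uses concave penalties (e.g. SCAD/MCP) with a dedicated ADMM algorithm:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: two latent subgroups with different means (illustrative).
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0.0, 0.3, 6), rng.normal(2.0, 0.3, 6)])
n, lam = len(y), 0.4
iu = np.triu_indices(n, k=1)

def objective(mu):
    # Least-squares fit plus a penalty on all pairwise intercept differences.
    fit = 0.5 * np.sum((y - mu) ** 2)
    penalty = lam * np.sum(np.abs((mu[:, None] - mu[None, :])[iu]))
    return fit + penalty

mu_hat = minimize(objective, x0=y.copy(), method="Powell").x
# Subjects whose fitted intercepts (nearly) coincide form a subgroup.
print("fitted intercepts:", np.round(mu_hat, 1))
```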

140 citations


Proceedings ArticleDOI
Ke Yan, Yonghong Tian, Yaowei Wang, Wei Zeng, Tiejun Huang
01 Oct 2017
TL;DR: This paper models the relationship of vehicle images as multiple grains and proposes two approaches that alleviate the precise vehicle search problem by exploiting multi-grain ranking constraints, achieving state-of-the-art performance on both datasets.
Abstract: Precise search of visually similar vehicles poses a great challenge in computer vision: it requires finding exactly the same vehicle among a massive number of vehicles with visually similar appearances for a given query image. In this paper, we model the relationship of vehicle images as multiple grains. Following this, we propose two approaches that alleviate the precise vehicle search problem by exploiting multi-grain ranking constraints. One is Generalized Pairwise Ranking, which generalizes conventional pairwise ranking from considering only binary similar/dissimilar relations to multiple relations. The other is Multi-Grain based List Ranking, which introduces a permutation probability to score a permutation of a multi-grain list, and further optimizes the ranking with a likelihood loss function. We implement the two approaches with multi-attribute classification in a multi-task deep learning framework. To further facilitate research on precise vehicle search, we also contribute two high-quality and well-annotated vehicle datasets, named VD1 and VD2, which were collected from two different cities and carry diverse annotated attributes. As two of the largest publicly available precise vehicle search datasets, they contain 1,097,649 and 807,260 vehicle images respectively. Experimental results show that our approaches achieve state-of-the-art performance on both datasets.

138 citations


Proceedings ArticleDOI
03 Apr 2017
TL;DR: Zhang et al. as discussed by the authors proposed a Geo-Temporal sequential embedding rank (Geo-Teaser) model for POI recommendation based on the success of the word2vec framework.
Abstract: Point-of-interest (POI) recommendation is an important application for location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. Previous studies show that modeling the sequential pattern of user check-ins is necessary for POI recommendation. Markov chain models, recurrent neural networks, and the word2vec framework have been used to model check-in sequences in previous work. However, all previous sequential models ignore the fact that check-in sequences on different days naturally exhibit varying temporal characteristics, for instance, "work" on weekdays and "entertainment" on weekends. In this paper, we take on this challenge and propose a Geo-Temporal sequential embedding rank (Geo-Teaser) model for POI recommendation. Inspired by the success of the word2vec framework in modeling sequential contexts, we propose a temporal POI embedding model to learn POI representations under particular temporal states. The temporal POI embedding model captures both the contextual check-in information in sequences and the varying temporal characteristics of different days. Furthermore, we propose a new way to incorporate geographical influence into the pairwise preference ranking method by discriminating among the unvisited POIs according to geographical information. We then develop a geographically hierarchical pairwise preference ranking model. Finally, we propose a unified framework to recommend POIs combining these two models. To verify the effectiveness of our proposed method, we conduct experiments on two real-life datasets. Experimental results show that the Geo-Teaser model outperforms state-of-the-art models. Compared with the best baseline competitor, the Geo-Teaser model improves by at least 20% on both datasets for all metrics.
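
A minimal sketch of the hierarchical pairwise preference idea: BPR-style gradient updates in which a visited POI is preferred over a nearby unvisited one, which in turn is preferred over a distant unvisited one. This ordering scheme and all names here are a rough reading of the abstract, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
dim, n_pois = 8, 50
U = rng.normal(scale=0.1, size=dim)            # one user's latent vector
V = rng.normal(scale=0.1, size=(n_pois, dim))  # POI latent vectors
lr = 0.05

def bpr_step(pos, neg):
    """One gradient-ascent step on the BPR pairwise objective ln sigma(x_pos - x_neg)."""
    global U
    x = U @ V[pos] - U @ V[neg]
    g = sigmoid(-x)                    # d/dx of ln sigma(x)
    grad_U = g * (V[pos] - V[neg])
    V[pos] += lr * g * U               # pull the preferred POI toward the user
    V[neg] -= lr * g * U               # push the less-preferred POI away
    U += lr * grad_U

visited, near_unvisited, far_unvisited = 0, 1, 2
bpr_step(visited, near_unvisited)        # visited ranks above nearby unvisited
bpr_step(near_unvisited, far_unvisited)  # nearby unvisited ranks above distant unvisited
print("score(visited):", U @ V[visited], " score(distant):", U @ V[far_unvisited])
```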

124 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this article, a Parallel, Pairwise Region-based, Fully Convolutional Network (PPR-FCN) is proposed for weakly supervised visual relation detection (WSVRD).
Abstract: We aim to tackle a novel vision task called Weakly Supervised Visual Relation Detection (WSVRD) to detect "subject-predicate-object" relations in an image with object relation groundtruths available only at the image level. This is motivated by the fact that it is extremely expensive to label the combinatorial relations between objects at the instance level. Compared to the extensively studied problem of Weakly Supervised Object Detection (WSOD), WSVRD is more challenging as it needs to examine a large set of region pairs, which is computationally prohibitive and more likely to get stuck in a locally optimal solution, such as one involving the wrong spatial context. To this end, we present a Parallel, Pairwise Region-based, Fully Convolutional Network (PPR-FCN) for WSVRD. It uses a parallel FCN architecture that simultaneously performs pair selection and classification of single regions and region pairs for object and relation detection, while sharing almost all computation over the entire image. In particular, we propose a novel position-role-sensitive score map with pairwise RoI pooling to efficiently capture the crucial context associated with a pair of objects. We demonstrate the superiority of PPR-FCN over all baselines in solving the WSVRD challenge through extensive experiments over two visual relation benchmarks.

119 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an evaluation and ranking model integrating the Taguchi Loss Function, the best-worst method (BWM), and the VIKOR technique, which allows decision makers to set different target values and consumer tolerance thresholds for each criterion depending on which country's airports are being ranked, and reduces the number of pairwise comparisons by using BWM.

115 citations


Journal ArticleDOI
TL;DR: This tutorial presents 5 different approaches that can be used in pairwise meta-analyses of multi-arm studies, including a novel approach (method 4) that, to the best of the authors' knowledge, has not been presented before.
Abstract: Systematic reviewers conducting pairwise meta-analyses sometimes encounter multi-arm studies. To include these studies, and to avoid a unit-of-analysis error, often two or more arms are combined or the control arm is split. In this tutorial, we present 5 different approaches that can be used. Particularly, we present a novel approach (method 4) that to the best of our knowledge has not been presented before. We demonstrate their application on 3 selected data sets, discuss their scope of application and their advantages and limitations, and give recommendations.
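
For context, the most common of these approaches, combining two arms into a single group, uses standard pooling formulas (e.g. as given in the Cochrane Handbook); a minimal sketch with illustrative numbers, not necessarily the tutorial's novel method 4:

```python
import numpy as np

def combine_arms(n1, m1, sd1, n2, m2, sd2):
    """Combine two arms into a single group using standard pooling formulas
    (as given e.g. in the Cochrane Handbook). Returns (n, mean, sd)."""
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2
           + (n1 * n2 / n) * (m1 - m2) ** 2) / (n - 1)
    return n, m, np.sqrt(var)

# Illustrative: merge two active arms of a three-arm trial before including
# it in a pairwise meta-analysis against the control arm.
print(combine_arms(n1=50, m1=10.2, sd1=2.1, n2=48, m2=11.0, sd2=2.4))
```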

Journal ArticleDOI
TL;DR: This study presents a hybrid use of the recently developed “Best Worst Method” (BWM) and the strength-weakness-opportunity-threat (SWOT) matrix as a novel strategic multiple-criteria technique, called B’WOT, to alleviate water scarcity in the Yazd province, Iran.
Abstract: Relying on strategic multi-criteria techniques is an effective step for identifying the sources of water management problems, formulating strategies, and prioritizing the alternatives. In this study, a hybrid use of the recently developed “Best Worst Method” (BWM) and the strength-weakness-opportunity-threat (SWOT) matrix is presented as a novel strategic multiple-criteria technique called B’WOT. B’WOT simplifies decision-making by handling rank reversal in pairwise comparisons. The methodology employed in this paper involves: (1) finding the effective strategic factors of the region with SWOT; (2) evaluating the relative significance of strategic factors through a comparative framework including B’WOT along with a conventional Analytic Hierarchy Process (AHP)-SWOT approach called A’WOT; (3) prioritizing the strategies with a risk-based multiple-criteria technique; and (4) aggregating the divergent ranks of the strategies under different risk attitudes. Comparison of BWM vs. AHP in ranking SWOT factors according to consistency ratio (CR) and total deviation (TD) showed the superiority of BWM. Unlike AHP, where some of the pairwise comparison matrices violated the acceptable CR threshold, all the BWM matrices provided consistent outcomes. Moreover, the TD values of the BWM matrices were lower (better) than those of AHP. Employment of a risk-based technique was another merit of the study, providing a wide variety of prioritization lists with respect to pessimistic, neutral, and optimistic scenarios. Based on the aggregated results, “providing alternatives for low-efficiency and environmentally destructive agriculture by facilitating participation of the private sector in the industry and tourism sectors” was selected as the first priority to alleviate water scarcity in the Yazd province, Iran. In general, all the high-ranked strategies contribute, directly or indirectly, to addressing the seriously inefficient agricultural activities within the province.
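
For reference, BWM derives criterion weights from only the best-to-others and others-to-worst comparison vectors by solving a small optimization problem; below is a minimal sketch of Rezaei's linear BWM model using scipy, with illustrative comparison data:

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(best, worst, best_to_others, others_to_worst):
    """Linear Best-Worst Method: minimize xi subject to
    |w_best - a_Bj * w_j| <= xi, |w_j - a_jW * w_worst| <= xi,
    sum(w) = 1, w >= 0. Variables are [w_1..w_n, xi]."""
    n = len(best_to_others)
    A_ub, b_ub = [], []

    def add_abs_constraint(i, coef_i, j, coef_j):
        # Encode |coef_i*w_i + coef_j*w_j| <= xi as two <= 0 inequalities.
        for sign in (+1, -1):
            row = np.zeros(n + 1)
            row[i] += sign * coef_i
            row[j] += sign * coef_j
            row[-1] = -1.0  # -xi
            A_ub.append(row)
            b_ub.append(0.0)

    for j in range(n):
        add_abs_constraint(best, 1.0, j, -best_to_others[j])
        add_abs_constraint(j, 1.0, worst, -others_to_worst[j])

    c = np.zeros(n + 1); c[-1] = 1.0       # minimize xi
    A_eq = [np.append(np.ones(n), 0.0)]    # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]            # weights, consistency indicator xi

# Illustrative 4-criterion example: criterion 0 is best, criterion 3 is worst.
w, xi = bwm_weights(best=0, worst=3,
                    best_to_others=[1, 2, 4, 8],
                    others_to_worst=[8, 4, 2, 1])
print("weights:", np.round(w, 3), "xi:", round(xi, 3))
```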

Proceedings ArticleDOI
03 Apr 2017
TL;DR: A diversified collaborative filtering algorithm (DCF) is proposed to solve the coupled problems of parameterized matrix factorization and structural learning, together with a new pairwise accuracy metric and a normalized topic coverage diversity metric to measure accuracy and diversity, respectively.
Abstract: In this study, we investigate the diversified recommendation problem by supervised learning, seeking significant improvement in diversity while maintaining accuracy. In particular, we regard each user as a training instance, and heuristically choose a subset of accurate and diverse items as ground-truth for each user. We then represent each user or item as a vector resulting from the factorization of the user-item rating matrix. We aim to discover a factorization that matches this supervised learning task. To do so, we define two coupled optimization problems, parameterized matrix factorization and structural learning, to formulate our task. We then propose a diversified collaborative filtering algorithm (DCF) to solve the coupled problems. We also introduce a new pairwise accuracy metric and a normalized topic coverage diversity metric to measure the performance of accuracy and diversity, respectively. Extensive experiments on benchmark datasets show the performance gains of DCF in comparison with state-of-the-art algorithms.

Posted Content
TL;DR: Zhang et al. as discussed by the authors proposed a parallel, pairwise region-based, fully convolutional network (PPR-FCN) for weakly supervised visual relation detection.
Abstract: We aim to tackle a novel vision task called Weakly Supervised Visual Relation Detection (WSVRD) to detect "subject-predicate-object" relations in an image with object relation groundtruths available only at the image level. This is motivated by the fact that it is extremely expensive to label the combinatorial relations between objects at the instance level. Compared to the extensively studied problem of Weakly Supervised Object Detection (WSOD), WSVRD is more challenging as it needs to examine a large set of region pairs, which is computationally prohibitive and more likely to get stuck in a locally optimal solution, such as one involving the wrong spatial context. To this end, we present a Parallel, Pairwise Region-based, Fully Convolutional Network (PPR-FCN) for WSVRD. It uses a parallel FCN architecture that simultaneously performs pair selection and classification of single regions and region pairs for object and relation detection, while sharing almost all computation over the entire image. In particular, we propose a novel position-role-sensitive score map with pairwise RoI pooling to efficiently capture the crucial context associated with a pair of objects. We demonstrate the superiority of PPR-FCN over all baselines in solving the WSVRD challenge through extensive experiments over two visual relation benchmarks.

Posted Content
TL;DR: This work proposes a network design inspired by deep residual networks that allows the efficient computation of a more expressive pairwise similarity objective, and an additional generator network based on Generative Adversarial Networks in which the discriminator is the residual pairwise network.
Abstract: Deep neural networks achieve unprecedented performance levels over many tasks and scale well with large quantities of data, but performance in the low-data regime and tasks like one-shot learning still lags behind. While recent work suggests many hypotheses, from better optimization to more complicated network structures, in this work we hypothesize that having a learnable and more expressive similarity objective is an essential missing component. Towards overcoming that, we propose a network design inspired by deep residual networks that allows the efficient computation of this more expressive pairwise similarity objective. Further, we argue that regularization is key in learning with small amounts of data, and propose an additional generator network based on Generative Adversarial Networks, where the discriminator is our residual pairwise network. This provides a strong regularizer by leveraging the generated data samples. The proposed model can generate plausible variations of exemplars over unseen classes and outperforms strong discriminative baselines for few-shot classification tasks. Notably, our residual pairwise network design outperforms the previous state-of-the-art on the challenging mini-ImageNet dataset for one-shot learning, achieving over 55% accuracy for the 5-way classification task over unseen classes.

Journal ArticleDOI
TL;DR: This paper aims to design a new cloud service selection model under the fuzzy environment by utilizing the analytical hierarchy process (AHP) and fuzzy technique for order preference by similarity to ideal solution (TOPSIS).
Abstract: Cloud service selection plays a crucial role in on-demand service selection on a subscription basis. Given the wide availability of cloud services with similar functionalities, it is crucial to determine which service best addresses the user's desires and objectives. This paper aims to design a new cloud service selection model under a fuzzy environment by utilizing the analytic hierarchy process (AHP) and the fuzzy technique for order preference by similarity to ideal solution (TOPSIS). The AHP method is employed to structure the cloud service selection problem and to derive the criteria weights using pairwise comparisons, and the TOPSIS method produces the final ranking of the alternatives. In our proposed model, non-functional quality-of-service requirements are taken into consideration for selecting the appropriate service. Furthermore, the proposed model exploits a set of pre-defined linguistic variables, parameterized by triangular fuzzy numbers, for evaluating the criteria weights. The experimental results obtained using real-world cloud service domains prove the efficacy of our proposed model and demonstrate its effectiveness, yielding better performance when compared against other available cloud service selection algorithms. Finally, a sensitivity analysis is performed to confirm the robustness of our proposed model.
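
A minimal sketch of the crisp AHP weighting step referred to above: deriving criteria weights from a reciprocal pairwise comparison matrix via the principal eigenvector, together with Saaty's consistency ratio. The matrix is illustrative; the paper's fuzzy model replaces crisp judgments with triangular fuzzy numbers:

```python
import numpy as np

def ahp_weights(A):
    """Criteria weights from a pairwise comparison matrix via the principal
    eigenvector, plus Saaty's consistency ratio (CR)."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index, n = 3..6
    return w, ci / ri

# Illustrative 3-criterion comparison matrix (reciprocal, Saaty's 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 -> acceptable
```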

Journal ArticleDOI
TL;DR: An efficient, combinatorial, exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums is proposed, and exact results are compared with recommended asymptotic approximations.
Abstract: The Friedman rank sum test is a widely used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial, exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs inferiorly to exact computation overall, others, particularly the normal, perform well, except in the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software, and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip .
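
For contrast with the exact approach, here is the usual large-sample pairwise test that the paper improves upon: a normal approximation to the Friedman rank sum difference statistic, on illustrative classifier scores:

```python
import numpy as np
from scipy.stats import norm, rankdata

# Illustrative: k classifiers evaluated on n datasets (rows = datasets).
scores = np.array([[0.81, 0.79, 0.74],
                   [0.90, 0.88, 0.85],
                   [0.77, 0.78, 0.70],
                   [0.85, 0.82, 0.80],
                   [0.88, 0.84, 0.83]])
n, k = scores.shape
ranks = np.vstack([rankdata(-row) for row in scores])  # rank 1 = best per dataset
R = ranks.sum(axis=0)                                  # Friedman rank sums

# Normal approximation to the pairwise rank sum difference test; the paper
# replaces this approximation with an exact combinatorial computation.
se = np.sqrt(n * k * (k + 1) / 6.0)
for i in range(k):
    for j in range(i + 1, k):
        z = (R[i] - R[j]) / se
        p = 2 * norm.sf(abs(z))
        print(f"classifiers {i} vs {j}: rank-sum diff {R[i]-R[j]:+.0f}, approx p = {p:.3f}")
```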

Journal ArticleDOI
TL;DR: In this paper, the authors study a flexible model for pairwise comparisons, under which the probabilities of outcomes are required only to satisfy a natural form of stochastic transitivity, and show that the matrix of probabilities can be estimated at the same rate as in standard parametric models up to logarithmic terms.
Abstract: There are various parametric models for analyzing pairwise comparison data, including the Bradley–Terry–Luce (BTL) and Thurstone models, but their reliance on strong parametric assumptions is limiting. In this paper, we study a flexible model for pairwise comparisons, under which the probabilities of outcomes are required only to satisfy a natural form of stochastic transitivity. This class includes parametric models, such as the BTL and Thurstone models, as special cases, but is considerably more general. We provide various examples of models in this broader stochastically transitive class for which classical parametric models provide poor fits. Despite this greater flexibility, we show that the matrix of probabilities can be estimated at the same rate as in standard parametric models up to logarithmic terms. On the other hand, unlike in the BTL and Thurstone models, computing the minimax-optimal estimator in the stochastically transitive model is non-trivial, and we explore various computationally tractable alternatives. We show that a simple singular value thresholding algorithm is statistically consistent but does not achieve the minimax rate. We then propose and study algorithms that achieve the minimax rate over interesting sub-classes of the full stochastically transitive class. We complement our theoretical results with thorough numerical simulations.
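
A minimal sketch of the singular value thresholding estimator discussed above, applied to a noisy matrix of pairwise win probabilities generated from a BTL-like model; the data and threshold are illustrative:

```python
import numpy as np

def svt_estimate(P_hat, tau):
    """Singular value thresholding estimate of a pairwise comparison
    probability matrix: soft-threshold the spectrum, then clip to [0, 1].
    (Discussed in the paper as consistent but not minimax-optimal.)"""
    U, s, Vt = np.linalg.svd(P_hat, full_matrices=False)
    s = np.where(s > tau, s - tau, 0.0)  # soft-threshold singular values
    return np.clip((U * s) @ Vt, 0.0, 1.0)

# Illustrative: noisy empirical win probabilities for 20 items,
# with P_true[i, j] = Pr(j beats i) from a logistic (BTL-like) model.
rng = np.random.default_rng(4)
n = 20
scores = np.sort(rng.random(n))
P_true = 1.0 / (1.0 + np.exp(-(scores[None, :] - scores[:, None])))
P_hat = np.clip(P_true + rng.normal(scale=0.1, size=(n, n)), 0, 1)
P_est = svt_estimate(P_hat, tau=1.0)
print("rmse noisy:", np.sqrt(np.mean((P_hat - P_true) ** 2)).round(3),
      "rmse svt:", np.sqrt(np.mean((P_est - P_true) ** 2)).round(3))
```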

Journal ArticleDOI
TL;DR: A review of 84 studies published in the literature from 1995 onwards that propose quantitative models to support supply chain performance evaluation shows that most of the studies evaluate more than one performance dimension and are based on multicriteria decision making techniques.

Proceedings Article
18 Jun 2017
TL;DR: This work derives essentially matching upper and lower bounds on the query complexity of r-round algorithms, and shows that Θ(log∗ n) rounds are both necessary and sufficient for achieving the optimal worst case query complexity for identifying the k most biased coins.
Abstract: In many learning settings, active/adaptive querying is possible, but the number of rounds of adaptivity is limited. We study the relationship between query complexity and adaptivity in identifying the k most biased coins among a set of n coins with unknown biases. This problem is a common abstraction of many well-studied problems, including the problem of identifying the k best arms in a stochastic multi-armed bandit, and the problem of top-k ranking from pairwise comparisons. An r-round adaptive algorithm for the k most biased coins problem specifies in each round the set of coin tosses to be performed based on the observed outcomes in earlier rounds, and outputs the set of k most biased coins at the end of r rounds. When r = 1, the algorithm is known as non-adaptive; when r is unbounded, the algorithm is known as fully adaptive. While the power of adaptivity in reducing query complexity is well known, full adaptivity requires repeated interaction with the coin tossing (feedback generation) mechanism, and is highly sequential, since the set of coins to be tossed in each round can only be determined after we have observed the outcomes of the coin tosses from the previous round. In contrast, algorithms with only few rounds of adaptivity require fewer rounds of interaction with the feedback generation mechanism, and offer the benefits of parallelism in algorithmic decision-making. Motivated by these considerations, we consider the question of how much adaptivity is needed to realize the optimal worst case query complexity for identifying the k most biased coins. Given any positive integer r, we derive essentially matching upper and lower bounds on the query complexity of r-round algorithms. We then show that Θ(log∗ n) rounds are both necessary and sufficient for achieving the optimal worst case query complexity for identifying the k most biased coins. In particular, our algorithm achieves the optimal query complexity in at most log∗ n rounds, which implies that on any realistic input, 5 parallel rounds of exploration suffice to achieve the optimal worst-case sample complexity. The best known algorithm prior to our work required Θ(log n) rounds to achieve the optimal worst case query complexity even for the special case of k = 1.

Journal ArticleDOI
Matteo Brunelli
TL;DR: Only recently has a set of properties been proposed to define a family of functions representing inconsistency indices; this paper expands that set by adding and justifying a new property, and continues the study of inconsistency indices to check whether or not they satisfy the properties.
Abstract: Pairwise comparisons between alternatives are a well-established tool to decompose decision problems into smaller and more easily tractable sub-problems. However, due to our limited rationality, the subjective preferences expressed by decision makers over pairs of alternatives can hardly ever be consistent. Therefore, several inconsistency indices have been proposed in the literature to quantify the extent of the deviation from complete consistency. Only recently, a set of properties has been proposed to define a family of functions representing inconsistency indices. The scope of this paper is twofold. Firstly, it expands the set of properties by adding and justifying a new one. Secondly, it continues the study of inconsistency indices to check whether or not they satisfy the above mentioned properties. Out of the four indices considered in this paper, in their present form, two fail to satisfy some properties. An adjusted version of one index is proposed so that it fulfills them.
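
As a concrete example of such an index, here is a sketch of Koczkodaj's inconsistency index, a widely studied triad-based measure (shown purely for illustration; it is not necessarily among the four indices the paper analyses, and the matrix is made up):

```python
import numpy as np
from itertools import combinations

def koczkodaj_index(A):
    """Koczkodaj's inconsistency index: the worst local inconsistency over
    all triads (i, j, k) of a reciprocal pairwise comparison matrix.
    It is 0 iff the matrix is fully consistent (a_ik = a_ij * a_jk)."""
    n = A.shape[0]
    worst = 0.0
    for i, j, k in combinations(range(n), 3):
        ratio = (A[i, j] * A[j, k]) / A[i, k]
        local = min(abs(1 - ratio), abs(1 - 1 / ratio))
        worst = max(worst, local)
    return worst

# Illustrative 3x3 reciprocal matrix: a01 * a12 = 4 while a02 = 6,
# so the single triad is mildly inconsistent.
A = np.array([[1.0, 2.0, 6.0],
              [1/2, 1.0, 2.0],
              [1/6, 1/2, 1.0]])
print("Koczkodaj index:", round(koczkodaj_index(A), 3))
```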

Posted Content
TL;DR: This paper proposes a method for assessing skill from video, applicable to a variety of tasks ranging from surgery to drawing and rolling pizza dough, using supervised deep ranking; the authors see this work as an effort toward the automated organization of how-to video collections and general skill determination in video.
Abstract: We present a method for assessing skill from video, applicable to a variety of tasks, ranging from surgery to drawing and rolling pizza dough. We formulate the problem as pairwise (who's better?) and overall (who's best?) ranking of video collections, using supervised deep ranking. We propose a novel loss function that learns discriminative features when a pair of videos exhibits variance in skill, and learns shared features when a pair of videos exhibits comparable skill levels. Results demonstrate our method is applicable across tasks, with the percentage of correctly ordered pairs of videos ranging from 70% to 83% for four datasets. We demonstrate the robustness of our approach via sensitivity analysis of its parameters. We see this work as an effort toward the automated organization of how-to video collections and, overall, generic skill determination in video.

Journal ArticleDOI
01 Jun 2017
TL;DR: This paper develops some linear programming models with the aid of multidimensional analysis of preference (LINMAP) method to solve interval type-2 fuzzy MAGDM problems, in which the information about attribute weights is incompletely known, and all pairwise comparison judgments over alternatives are represented by IT2FSs.
Abstract: Supplier selection is a key issue in supply chain management, which directly impacts the manufacturer's performance. The problem can be viewed as multiple attribute group decision making (MAGDM), which involves many conflicting evaluation attributes, both qualitative and quantitative in nature. Due to the increasing complexity and uncertainty of the socio-economic environment, some attribute evaluations are not adequately represented by numerical assessments and type-1 fuzzy sets. In this paper, we develop linear programming models with the aid of the multidimensional analysis of preference (LINMAP) method to solve interval type-2 fuzzy MAGDM problems, in which the information about attribute weights is incompletely known and all pairwise comparison judgments over alternatives are represented by IT2FSs. First, we introduce a new distance measure based on the centroid interval of the IT2FSs. Then, we construct a linear programming model to determine the interval type-2 fuzzy positive ideal solution (IT2PIS) and the corresponding attribute weight vector. Based on it, an extended LINMAP method to solve MAGDM problems under an IT2FS environment is developed. Finally, a supplier selection example is provided to demonstrate the usefulness of the proposed method.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions and the proposed pairwise registration framework is superior to the state-of-the-art methods in terms of both registration error and computational time.
Abstract: It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from the point clouds to find candidate correspondences, and we then construct a non-cooperative game to isolate mutually compatible correspondences, which are considered true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on the three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computational times on these models are about 288 s, 184 s and 903 s, respectively. Moreover, our registration framework using ACOV descriptors and a game-theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. An experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of our proposed pairwise registration framework.

Journal ArticleDOI
TL;DR: The interactive methods outperformed their a posteriori counterparts and could discover solutions corresponding better to the DM's preferences, and the number of pairwise comparisons needed by the interactive evolutionary methods to construct a satisfactory solution could be decreased.
Abstract: This paper evaluates the applicability of different multi-objective optimization methods for environmentally conscious supply chain design. We analyze a case study with three objectives: costs, CO2 emissions and fine dust (also known as PM, Particulate Matter) emissions. We approximate the Pareto front using the weighted sum and epsilon constraint scalarization methods with pre-defined or adaptively selected parameters, two popular evolutionary algorithms, SPEA2 and NSGA-II, with different selection strategies, and their interactive counterparts that incorporate the Decision Maker's (DM's) indirect preferences into the search process. Within this case study, the CO2 emissions could be lowered significantly by accepting a marginal increase of costs over their global minimum. NSGA-II and SPEA2 enabled faster estimation of the Pareto front, but produced significantly worse solutions than the exact optimization methods. The interactive methods outperformed their a posteriori counterparts, and could discover solutions corresponding better to the DM's preferences. In addition, by appropriately adjusting the elicitation interval and the starting generation of the elicitation, the number of pairwise comparisons needed by the interactive evolutionary methods to construct a satisfactory solution could be decreased.

Posted Content
TL;DR: This paper improves on existing scaling methods by introducing outlier analysis, providing methods for computing confidence intervals and statistical testing and introducing a prior, which reduces estimation error when the number of observers is low.
Abstract: Most popular strategies to capture subjective judgments from humans involve the construction of a unidimensional relative measurement scale, representing order preferences or judgments about a set of objects or conditions. This information is generally captured by means of direct scoring, either in the form of a Likert or cardinal scale, or by comparative judgments in pairs or sets. In this sense, the use of pairwise comparisons is becoming increasingly popular because of the simplicity of this experimental procedure. However, this strategy requires non-trivial data analysis to aggregate the comparison ranks into a quality scale and analyse the results, in order to take full advantage of the collected data. This paper explains the process of translating pairwise comparison data into a measurement scale, discusses the benefits and limitations of such scaling methods and introduces publicly available software in Matlab. We improve on existing scaling methods by introducing outlier analysis, providing methods for computing confidence intervals and statistical testing, and introducing a prior, which reduces estimation error when the number of observers is low. Most of our examples focus on image quality assessment.
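
A minimal sketch of the core scaling step described above: maximum-likelihood fitting of a Bradley-Terry model to a matrix of pairwise preference counts. The counts are illustrative, and the paper's Matlab software adds outlier analysis, confidence intervals, statistical tests and a prior on top of this basic step:

```python
import numpy as np
from scipy.optimize import minimize

# Count matrix from a pairwise comparison experiment:
# C[i, j] = number of times condition i was preferred over condition j.
C = np.array([[0, 8, 9],
              [2, 0, 7],
              [1, 3, 0]])
n = C.shape[0]

def neg_log_likelihood(q):
    """Bradley-Terry model: Pr(i preferred over j) = sigma(q_i - q_j).
    (Thurstone Case V scaling replaces the logistic with a Gaussian CDF.)"""
    diff = q[:, None] - q[None, :]
    p = 1.0 / (1.0 + np.exp(-diff))
    return -np.sum(C * np.log(p + 1e-12))

res = minimize(neg_log_likelihood, x0=np.zeros(n), method="BFGS")
q = res.x - res.x[0]  # anchor the scale at condition 0
print("quality scale (arbitrary units, anchored at condition 0):", np.round(q, 2))
```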

Journal ArticleDOI
TL;DR: A new multi-criteria GDM approach is proposed that adroitly exploits group heterogeneity during the evaluation process and restricts bias in the information used for decision making.

Journal ArticleDOI
TL;DR: A new method of content-based medical image retrieval that considers fused, context-sensitive similarity is proposed; evaluated on retrieval of the Common CT Imaging Signs of Lung Diseases, it achieved not only better retrieval results but also satisfactory computational efficiency.

Journal ArticleDOI
TL;DR: A novel approach to obtain consistent matches without requiring initial pairwise solutions as input is introduced, optimizing a joint measure of metric distortion directly over the space of cycle-consistent maps.
Abstract: Recent efforts in the area of joint object matching approach the problem by taking as input a set of pairwise maps, which are then jointly optimized across the whole collection so that certain accuracy and consistency criteria are satisfied. One natural requirement is cycle-consistency, namely that map composition should give the same result regardless of the path taken through the shape collection. In this paper, we introduce a novel approach to obtain consistent matches without requiring initial pairwise solutions to be given as input. We do so by optimizing a joint measure of metric distortion directly over the space of cycle-consistent maps; in order to allow for partially similar and extra-class shapes, we formulate the problem as a series of quadratic programs with sparsity-inducing constraints, making our technique a natural candidate for analysing collections with a large presence of outliers. The particular form of the problem allows us to leverage results and tools from the field of evolutionary game theory. This enables a highly efficient optimization procedure which ensures accurate and provably consistent solutions in a matter of minutes, even in collections with hundreds of shapes.

Journal ArticleDOI
TL;DR: In this article, an objective-based Analytic Hierarchy Process (AHP) method is proposed for the prioritization of pavement maintenance of roads, where pairwise comparison values are assigned based on field data collected from a road network in Mumbai city consisting of 28 road sections.
Abstract: The application of the Analytic Hierarchy Process (AHP) method for the prioritization of pavement maintenance sections is widespread nowadays. Although the evaluation of pavement maintenance sections through the AHP method is simple, the relative importance (on Saaty's scale) assigned to each parameter in the hierarchy varies between the experts (transportation professionals) consulted, which leads to discrepancies in the final rankings of the sections due to the subjectivity of the process. Further, experts base their decisions solely on their experience, while consideration is not given to the actual quantitative physical condition of the roads. To overcome these difficulties, an objective-based AHP method is proposed in this study, where pairwise comparison values are assigned based on field data collected from a road network in Mumbai city consisting of 28 road sections. The final ranking list of candidate sections takes into consideration the priority weights of the alternatives, which reflect the road conditions. The priority ratings of the AHP method are compared with the corresponding results of the road condition index method, a traditional pavement maintenance procedure. The findings of the present study suggest that the objective-based AHP method is more suitable for the prioritization of pavement maintenance of roads.