
Showing papers on "Pairwise comparison published in 2022"


Journal ArticleDOI
TL;DR: This paper proposes a pyramid PSO (PPSO) with novel competitive and cooperative strategies for updating particles' information; PPSO shows superior performance in terms of accuracy, the Wilcoxon signed-rank test and convergence speed, while achieving comparable running time in most cases.

54 citations


Journal ArticleDOI
TL;DR: Zhang et al. propose a pairwise loss function that enables ReID models to learn fine-grained features by adaptively enforcing an exponential penalization on images with small differences and a bounded penalization on images with large differences.
Abstract: Person Re-IDentification (ReID) aims at re-identifying persons from different viewpoints across multiple cameras. Capturing the fine-grained appearance differences is often the key to accurate person ReID, because many identities can be differentiated only when looking into these fine-grained differences. However, most state-of-the-art person ReID approaches, typically driven by a triplet loss, fail to effectively learn the fine-grained features as they are focused more on differentiating large appearance differences. To address this issue, we introduce a novel pairwise loss function that enables ReID models to learn the fine-grained features by adaptively enforcing an exponential penalization on the images of small differences and a bounded penalization on the images of large differences. The proposed loss is generic and can be used as a plugin to replace the triplet loss to significantly enhance different types of state-of-the-art approaches. Experimental results on four benchmark datasets show that the proposed loss substantially outperforms a number of popular loss functions by large margins; and it also enables significantly improved data efficiency.
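The adaptive penalization idea can be sketched numerically. The loss below is a hypothetical minimal form, not the paper's exact function: negative pairs at small embedding distances (fine-grained confusions) incur an exponentially growing penalty, while negatives already far apart contribute only a bounded, vanishing amount; positive pairs are simply pulled together.

```python
import numpy as np

def fine_grained_pair_loss(d, same, alpha=2.0, margin=1.0):
    """Illustrative pairwise loss: `d` are embedding distances, `same`
    marks positive pairs (1) vs negatives (0). Negatives with small
    distances are penalized exponentially; large-distance negatives
    saturate toward zero (bounded)."""
    d = np.asarray(d, dtype=float)
    same = np.asarray(same, dtype=float)
    pos = same * d ** 2                                # pull matching pairs together
    neg = (1.0 - same) * np.exp(alpha * (margin - d))  # exponential blow-up as d -> 0
    return float((pos + neg).mean())
```

With these illustrative parameters, a hard negative at d = 0.1 costs e^1.8 ≈ 6.0 while an easy negative at d = 2 costs e^-2 ≈ 0.14, which is the penalization asymmetry the paper exploits.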

47 citations


Journal ArticleDOI
01 Jan 2022-Chaos
TL;DR: This research shows that second-order interactions, even when weak, can lead to synchronization at significantly lower first-order coupling strengths, and that introducing three-body interactions reduces the overall synchronization cost compared to pairwise interactions alone.
Abstract: Higher-order interactions might play a significant role in the collective dynamics of the brain. With this motivation, we here consider a simplicial complex of neurons, in particular, studying the effects of pairwise and three-body interactions on the emergence of synchronization. We assume pairwise interactions to be mediated through electrical synapses, while for second-order interactions, we separately study diffusive coupling and nonlinear chemical coupling. For all the considered cases, we derive the necessary conditions for synchronization by means of linear stability analysis, and we compute the synchronization errors numerically. Our research shows that the second-order interactions, even if of weak strength, can lead to synchronization under significantly lower first-order coupling strengths. Moreover, the overall synchronization cost is reduced due to the introduction of three-body interactions if compared to pairwise interactions.
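A minimal numerical analogue of pairwise-plus-three-body synchronization is the higher-order Kuramoto model. This is a toy stand-in for the neuronal model studied here; the all-to-all coupling form and all parameters are illustrative assumptions.

```python
import numpy as np

def order_parameter(K1, K2, N=20, steps=2000, dt=0.01, seed=0):
    """Euler-integrate all-to-all Kuramoto dynamics with pairwise (K1)
    and three-body (K2) sine couplings; returns the final Kuramoto
    order parameter r in [0, 1] (r near 1 means synchronized)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    omega = rng.normal(0.0, 0.1, N)          # natural frequencies
    for _ in range(steps):
        z1 = np.exp(1j * theta).mean()       # mean field of the pairwise term
        z2 = np.exp(2j * theta).mean()       # mean field of the triadic term
        dtheta = (omega
                  + K1 * np.imag(z1 * np.exp(-1j * theta))
                  + K2 * np.imag(z2 * np.conj(z1) * np.exp(-1j * theta)))
        theta = theta + dt * dtheta
    return float(abs(np.exp(1j * theta).mean()))
```

Sweeping K1 with and without a nonzero K2 reproduces the qualitative claim: three-body coupling lets the population synchronize at a lower pairwise coupling strength.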

44 citations


Journal ArticleDOI
Don Ryan
TL;DR: This paper proposes probabilistic alignment of ST experiments (PASTE), a method to align and integrate multiple adjacent tissue slices using an optimal transport formulation that models both transcriptional similarity and physical distances between spots.
Abstract: Spatial transcriptomics (ST) measures mRNA expression across thousands of spots from a tissue slice while recording the two-dimensional (2D) coordinates of each spot. We introduce probabilistic alignment of ST experiments (PASTE), a method to align and integrate ST data from multiple adjacent tissue slices. PASTE computes pairwise alignments of slices using an optimal transport formulation that models both transcriptional similarity and physical distances between spots. PASTE further combines pairwise alignments to construct a stacked 3D alignment of a tissue. Alternatively, PASTE can integrate multiple ST slices into a single consensus slice. We show that PASTE accurately aligns spots across adjacent slices in both simulated and real ST data, demonstrating the advantages of using both transcriptional similarity and spatial information. We further show that the PASTE integrated slice improves the identification of cell types and differentially expressed genes compared with existing approaches that either analyze single ST slices or ignore spatial information.
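PASTE's full objective couples transcriptional similarity with preservation of intra-slice spatial distances; the entropic optimal-transport building block of such pairwise slice alignments can be sketched as follows. Uniform spot weights and a plain expression-cost matrix are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def sinkhorn_plan(C, reg=0.1, iters=300):
    """Entropic OT between two slices with uniform marginals:
    C[i, j] is the (e.g. expression) dissimilarity between spot i of
    slice 1 and spot j of slice 2; returns the transport plan P whose
    entry P[i, j] is the alignment mass between the two spots."""
    n, m = C.shape
    a = np.full(n, 1.0 / n)      # uniform weight per spot, slice 1
    b = np.full(m, 1.0 / m)      # uniform weight per spot, slice 2
    K = np.exp(-C / reg)
    u = np.ones(n)
    v = np.ones(m)
    for _ in range(iters):       # Sinkhorn scaling iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]
```

The resulting plan is a soft pairwise matching whose marginals respect each slice's spot weights; stacking such plans across consecutive slices gives a 3D alignment in the spirit of the paper.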

41 citations


Journal ArticleDOI
TL;DR: US-align is the first universal platform to uniformly align monomer and complex structures of different macromolecules (proteins, RNAs and DNAs), using a uniform TM-score objective function coupled with a heuristic alignment search algorithm.
Abstract: Structure comparison and alignment are of fundamental importance in structural biology studies. We developed the first universal platform, US-align, to uniformly align monomer and complex structures of different macromolecules-proteins, RNAs and DNAs. The pipeline is built on a uniform TM-score objective function coupled with a heuristic alignment searching algorithm. Large-scale benchmarks demonstrated consistent advantages of US-align over state-of-the-art methods in pairwise and multiple structure alignments of different molecules. Detailed analyses showed that the main advantage of US-align lies in the extensive optimization of the unified objective function powered by efficient heuristic search iterations, which substantially improve the accuracy and speed of the structural alignment process. Meanwhile, the universal protocol fusing different molecular and structural types helps facilitate the heterogeneous oligomer structure comparison and template-based protein-protein and protein-RNA/DNA docking.
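The uniform objective is the standard TM-score; given the inter-residue distances of an alignment it can be computed directly. The d0 formula below is the standard protein one; US-align substitutes molecule-specific d0 formulas, which are not spelled out here.

```python
import numpy as np

def tm_score(distances, L_target):
    """TM-score of a structural alignment: `distances` are the distances
    (in Angstroms) between aligned residue pairs, normalized by the
    target length. The length-dependent scale d0 makes the score
    size-independent, unlike RMSD."""
    d0 = max(1.24 * (L_target - 15) ** (1.0 / 3.0) - 1.8, 0.5)
    d = np.asarray(distances, dtype=float)
    return float(np.sum(1.0 / (1.0 + (d / d0) ** 2)) / L_target)
```

A score of 1 means identical structures; scores above roughly 0.5 generally indicate the same fold.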

41 citations


Journal ArticleDOI
TL;DR: Discusses how the structure of correlations can have opposite effects on the different functions of neural populations, creating trade-offs and constraints for the structure-function relationships of population codes.

41 citations


Journal ArticleDOI
TL;DR: This article proposes a complex Pythagorean fuzzy ELECTRE II method for group decision making, designed to perform pairwise comparisons of the alternatives using the core notions of concordance, discordance and indifferent sets.
Abstract: This article contributes to the advancement and evolution of outranking decision-making methodologies, with a novel essay on the ELimination and Choice Translating REality (ELECTRE) family of methods. Its primary target is to unfold the constituents and expound the implementation of the ELECTRE II method for group decision making in the complex Pythagorean fuzzy framework. This results in the complex Pythagorean fuzzy ELECTRE II method. By inception, it is intrinsically superior to models using one-dimensional data. It is designed to perform the pairwise comparisons of the alternatives using the core notions of concordance, discordance and indifferent sets, which is then followed by the construction of complex Pythagorean fuzzy concordance and discordance matrices. Further, the strong and weak outranking relations are developed by the comparison of concordance and discordance indices with the concordance and discordance levels. Later, the forward, reverse and average rankings of the alternatives are computed by dint of the strong and weak outranking graphs. This methodology is supported by a case study for the selection of a wastewater treatment process, and by a numerical example for the selection of the best cloud solution for a big data project. Its consistency is confirmed by an effectiveness test and comparison analysis with the Pythagorean fuzzy ELECTRE II and complex Pythagorean fuzzy ELECTRE I methods.
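The concordance step at the heart of any ELECTRE variant can be illustrated with crisp scores; the paper's method replaces these with complex Pythagorean fuzzy values, and the weights and data below are hypothetical.

```python
import numpy as np

def concordance_matrix(X, w):
    """Crisp ELECTRE concordance: C[a, b] is the total (normalized)
    weight of criteria on which alternative a scores at least as well
    as alternative b, assuming all criteria are benefit criteria.
    X has one row per alternative, one column per criterion."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float) / np.sum(w)
    n = X.shape[0]
    C = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                C[a, b] = w[X[a] >= X[b]].sum()   # criteria where a outranks b
    return C
```

Comparing each C[a, b] against a concordance threshold, together with the analogous discordance matrix, yields the strong and weak outranking relations the article builds on.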

36 citations


Journal ArticleDOI
TL;DR: This paper shows that |F| ≥ C(n, k) − C(n−s, k), provided n ≥ (5/3)sk − (2/3)s and s is sufficiently large.

36 citations


Journal ArticleDOI
TL;DR: Zhang et al. find that subjective measures from visual surveys capture more subtle human perceptions and thus provide stronger predictive power for housing prices: while the objective view indexes collectively explained more price variance, the five perceptions individually exhibited stronger predictive strength.

36 citations


Journal ArticleDOI
TL;DR: It is found that information transmission rates are frequently low when actual disease transmission rates in the physical network are low or medium, and it is shown that this can be mitigated effectively by introducing 2-simplex interactions in the social network.
Abstract: Simplicial complexes describe the simple fact that in social networks a link can connect more than two individuals. As we show here, this has far-reaching consequences for epidemic spreading, in particular in the context of a multilayer network model, where one layer is a virtual social network and the other one is a physical contact network. The social network layer is responsible for the transmission of information via pairwise or higher order 2-simplex interactions among individuals, while the physical layer is responsible for the epidemic spreading. We use the microscopic Markov chain approach to derive the probability transition equations and to determine epidemic outbreak thresholds. We further support these results with Monte Carlo simulations, which are in good agreement, thus confirming the analytical tractability of the proposed model. We find that information transmission rates are frequently low when actual disease transmission rates in the physical network are low or medium, and we show that this can be mitigated effectively by introducing 2-simplex interactions in the social network. The relative ease of introducing higher-order interactions in virtual social networks means that this could be exploited to inhibit epidemic outbreaks.

34 citations


Journal ArticleDOI
TL;DR: Zhang et al. propose GATrust, a trust assessment framework that integrates multi-aspect properties of users, including context-specific information, network topological structure, and locally generated social trust relationships.
Abstract: Social trust assessment that characterizes a pairwise trustworthiness relationship can spur diversified applications. Extensive efforts have been put into this problem, but they mainly focus on applying graph convolutional networks to establish a social trust evaluation model, overlooking user feature factors related to context-aware information in social trust prediction. In this article, we design a new trust assessment framework, GATrust, which integrates multi-aspect properties of users, including user context-specific information, network topological structure information, and locally generated social trust relationships. GATrust assigns different attention coefficients to the multi-aspect properties of users in online social networks to improve the prediction accuracy of social trust evaluation. The framework then learns multiple latent factors of each trustor-trustee pair to establish a social trust evaluation model by fusing a graph attention network and a graph convolutional network. We conduct extensive experiments on two popular real-world datasets, and the results show that our proposed framework improves the precision of social trust prediction, outperforming the state of the art by 4.3% and 5.5% on the two datasets, respectively.

Journal ArticleDOI
31 Mar 2022
TL;DR: ProtTucker uses single-protein representations from protein language models (pLMs) for contrastive learning that optimizes constraints captured by hierarchical classifications of protein 3D structures.
Abstract: Experimental structures are leveraged through multiple sequence alignments, or more generally through homology-based inference (HBI), facilitating the transfer of information from a protein with known annotation to a query without any annotation. A recent alternative expands the concept of HBI from sequence-distance lookup to embedding-based annotation transfer (EAT). These embeddings are derived from protein language models (pLMs). Here, we introduce contrastive learning on single-protein representations from pLMs. This learning procedure creates a new set of embeddings that optimizes constraints captured by hierarchical classifications of protein 3D structures defined by the CATH resource. The approach, dubbed ProtTucker, is better at recognizing distant homologous relationships than more traditional techniques such as threading or fold recognition. Thus, these embeddings have allowed sequence comparison to step into the ‘midnight zone’ of protein similarity, i.e. the region in which distantly related sequences have a seemingly random pairwise sequence similarity. The novelty of this work lies in the particular combination of tools and sampling techniques that achieved performance comparable to or better than existing state-of-the-art sequence comparison methods. Additionally, since this method does not need to generate alignments, it is also orders of magnitude faster. The code is available at https://github.com/Rostlab/EAT.
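The contrastive constraint can be sketched as a triplet loss on pLM embeddings: an anchor protein should embed closer to a protein sharing its CATH class than to one from a different class. This is a hypothetical minimal form; ProtTucker samples triplets across all four CATH hierarchy levels.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Margin-based triplet loss on embedding vectors: the loss is zero
    once the positive is at least `margin` closer to the anchor than
    the negative, otherwise it grows with the violation."""
    d_pos = np.linalg.norm(np.asarray(anchor) - np.asarray(positive))
    d_neg = np.linalg.norm(np.asarray(anchor) - np.asarray(negative))
    return max(0.0, d_pos - d_neg + margin)
```

Minimizing this over triplets drawn at successively deeper CATH levels is what shapes the new embedding space so that embedding distance tracks structural relatedness.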

Journal ArticleDOI
TL;DR: A full simulation of random parameters is undertaken for out-of-sample injury-severity predictions, and the prediction accuracy of the estimated models is assessed; not surprisingly, the random parameters logit model with heterogeneity in the means and variances outperformed the other models in predictive performance.


Journal ArticleDOI
TL;DR: An attention-based KG representation learning framework, namely DDKG, is proposed to fully utilize the information of KGs for improved performance of DDI prediction and is superior to state-of-the-art algorithms on the DDI Prediction task in terms of different evaluation metrics across all datasets.
Abstract: Drug-drug interactions (DDIs) are known as the main cause of life-threatening adverse events, and their identification is a key task in drug development. Existing computational algorithms mainly solve this problem by using advanced representation learning techniques. Though effective, few of them are capable of performing their tasks on biomedical knowledge graphs (KGs) that provide more detailed information about drug attributes and drug-related triple facts. In this work, an attention-based KG representation learning framework, namely DDKG, is proposed to fully utilize the information of KGs for improved performance of DDI prediction. In particular, DDKG first initializes the representations of drugs with their embeddings derived from drug attributes with an encoder-decoder layer, and then learns the representations of drugs by recursively propagating and aggregating first-order neighboring information along top-ranked network paths determined by neighboring node embeddings and triple facts. Last, DDKG estimates the probability of being interacting for pairwise drugs with their representations in an end-to-end manner. To evaluate the effectiveness of DDKG, extensive experiments have been conducted on two practical datasets with different sizes, and the results demonstrate that DDKG is superior to state-of-the-art algorithms on the DDI prediction task in terms of different evaluation metrics across all datasets.

Journal ArticleDOI
TL;DR: PIGNet predicts the atom-atom pairwise interactions via physics-informed equations parameterized with neural networks and provides the total binding affinity of a protein-ligand complex as their sum.
Abstract: Recently, deep neural network (DNN)-based drug-target interaction (DTI) models were highlighted for their high accuracy with affordable computational costs. Yet, the models' insufficient generalization remains a challenging problem in the practice of in silico drug discovery. We propose two key strategies to enhance generalization in the DTI model. The first is to predict the atom-atom pairwise interactions via physics-informed equations parameterized with neural networks and to provide the total binding affinity of a protein-ligand complex as their sum. We further improved the model generalization by augmenting a broader range of binding poses and ligands to the training data. We validated our model, PIGNet, in the comparative assessment of scoring functions (CASF) 2016, demonstrating better docking and screening power than previous methods. Our physics-informing strategy also enables the interpretation of predicted affinities by visualizing the contribution of ligand substructures, providing insights for further ligand optimization.
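The "sum of physics-informed pairwise terms" structure can be sketched with a Lennard-Jones-style van der Waals term. In PIGNet the per-pair parameters are predicted by the network; the fixed eps/sigma values here are illustrative assumptions.

```python
import numpy as np

def pairwise_vdw_energy(lig_xyz, prot_xyz, eps=0.2, sigma=3.5):
    """Total energy as a sum over all ligand-protein atom pairs of a
    12-6 Lennard-Jones term; a physics-informed scorer replaces the
    fixed eps/sigma with neural-network outputs per atom pair, so the
    predicted affinity remains a sum of physically shaped terms."""
    lig_xyz = np.asarray(lig_xyz, dtype=float)
    prot_xyz = np.asarray(prot_xyz, dtype=float)
    diff = lig_xyz[:, None, :] - prot_xyz[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))        # pairwise distances
    ratio = sigma / r
    return float((4.0 * eps * (ratio ** 12 - ratio ** 6)).sum())
```

Because the total is an explicit sum over pairs, per-atom or per-substructure contributions can be read off directly, which is what enables the interpretability analysis described above.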

Proceedings ArticleDOI
26 Feb 2022
TL;DR: Proposes CLSR, a contrastive learning framework to disentangle long- and short-term interests for recommendation with self-supervision, in which pairwise contrastive tasks supervise the similarity between interest representations and their corresponding interest proxies.
Abstract: Modeling user’s long-term and short-term interests is crucial for accurate recommendation. However, since there is no manually annotated label for user interests, existing approaches always follow the paradigm of entangling these two aspects, which may lead to inferior recommendation accuracy and interpretability. In this paper, to address it, we propose a Contrastive learning framework to disentangle Long and Short-term interests for Recommendation (CLSR) with self-supervision. Specifically, we first propose two separate encoders to independently capture user interests of different time scales. We then extract long-term and short-term interests proxies from the interaction sequences, which serve as pseudo labels for user interests. Then pairwise contrastive tasks are designed to supervise the similarity between interest representations and their corresponding interest proxies. Finally, since the importance of long-term and short-term interests is dynamically changing, we propose to adaptively aggregate them through an attention-based network for prediction. We conduct experiments on two large-scale real-world datasets for e-commerce and short-video recommendation. Empirical results show that our CLSR consistently outperforms all state-of-the-art models with significant improvements: GAUC is improved by over 0.01, and NDCG is improved by over 4%. Further counterfactual evaluations demonstrate that stronger disentanglement of long and short-term interests is successfully achieved by CLSR. The code and data are available at https://github.com/tsinghua-fib-lab/CLSR.
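The pairwise contrastive supervision can be sketched as BPR-style terms: each interest representation should score higher against its own proxy than against the other interest's proxy. This is a simplified stand-in for CLSR's actual loss, with dot-product similarity assumed.

```python
import numpy as np

def proxy_contrastive_loss(u_long, u_short, p_long, p_short):
    """Two BPR-style log-sigmoid terms: the long-term representation
    prefers the long-term proxy over the short-term one, and vice
    versa. Lower loss means better disentanglement of the two
    interest representations."""
    def bpr(x, pos, neg):
        return -np.log(1.0 / (1.0 + np.exp(-(np.dot(x, pos) - np.dot(x, neg)))))
    return float(bpr(u_long, p_long, p_short) + bpr(u_short, p_short, p_long))
```

Swapping the proxies should raise the loss, which is exactly the self-supervision signal that keeps the two encoders from collapsing onto the same interest.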

Proceedings ArticleDOI
01 Jun 2022
TL;DR: A technique for analyzing the structure of relations between types of DDoS attacks is developed; pairwise comparison of attack data with normal traffic yields feature stability values that are invariant to the measurement scales.
Abstract: The problem of detecting types of DDoS attacks in large-scale networks is considered. Detection is complicated by the presence of a large number of connected and diverse devices, the high volume of incoming traffic, and the need to introduce special restrictions when searching for anomalies. A technology for developing information security models using data mining (DM) methods is proposed. The machine learning features of the DM algorithms depend on the choice of methods for preprocessing big data. A technique for analyzing the structure of relations between types of DDoS attacks has been developed. Within this technique, a procedure for pairwise comparison of data by attack type against normal traffic is implemented. The result of the comparison is the stability of features, whose values are invariant to the measurement scales. The structure of relations was analyzed with grouping algorithms according to the stability values on the determined feature sets. When forming the sets, the stability ranking was used. For classification, various existing machine learning methods are analyzed.

Journal ArticleDOI
TL;DR: The authors examine the volatility connectedness of 11 sectoral indices in the US using daily data from January 1, 2013 to December 31, 2020, and find an extraordinary increase in total connectedness from the early stages of international spread to the end of July 2020.

Journal ArticleDOI
TL;DR: Li et al. apply a decision-making trial and evaluation laboratory (DEMATEL) method to analyze the relevant factors and rank them by the preference ranking organization method for enrichment of evaluations (PROMETHEE).

Journal ArticleDOI
TL;DR: The authors propose a deep semantic information propagation approach in the novel context of multiple unlabeled target domains and one labeled source domain, where the transductive ability of the graph attention network conducts semantic propagation of related samples among multiple domains.
Abstract: Domain adaptation, which transfers the knowledge from label-rich source domain to unlabeled target domains, is a challenging task in machine learning. The prior domain adaptation methods focus on pairwise adaptation assumption with a single source and a single target domain, while little work concerns the scenario of one source domain and multiple target domains. Applying pairwise adaptation methods to this setting may be suboptimal, as they fail to consider the semantic association among multiple target domains. In this work we propose a deep semantic information propagation approach in the novel context of multiple unlabeled target domains and one labeled source domain. Our model aims to learn a unified subspace common for all domains with a heterogeneous graph attention network, where the transductive ability of the graph attention network can conduct semantic propagation of the related samples among multiple domains. In particular, the attention mechanism is applied to optimize the relationships of multiple domain samples for better semantic transfer. Then, the pseudo labels of the target domains predicted by the graph attention network are utilized to learn domain-invariant representations by aligning labeled source centroid and pseudo-labeled target centroid. We test our approach on four challenging public datasets, and it outperforms several popular domain adaptation methods.

Proceedings ArticleDOI
28 Apr 2022
TL;DR: Proposes CL-DRD, a generic curriculum-learning-based optimization framework that controls the difficulty level of training data produced by the re-ranking (teacher) model and iteratively optimizes the dense retrieval (student) model by increasing the difficulty of the knowledge distillation data made available to it.
Abstract: Recent work has shown that more effective dense retrieval models can be obtained by distilling ranking knowledge from an existing base re-ranking model. In this paper, we propose a generic curriculum learning based optimization framework called CL-DRD that controls the difficulty level of training data produced by the re-ranking (teacher) model. CL-DRD iteratively optimizes the dense retrieval (student) model by increasing the difficulty of the knowledge distillation data made available to it. In more detail, we initially provide the student model coarse-grained preference pairs between documents in the teacher's ranking, and progressively move towards finer-grained pairwise document ordering requirements. In our experiments, we apply a simple implementation of the CL-DRD framework to enhance two state-of-the-art dense retrieval models. Experiments on three public passage retrieval datasets demonstrate the effectiveness of our proposed framework.
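The coarse-to-fine idea can be sketched as a pair-generation schedule over the teacher's ranking: early curriculum levels only emit preference pairs between documents far apart in the ranking, while later levels demand fine-grained orderings between near neighbours. The linear schedule itself is hypothetical, not CL-DRD's exact one.

```python
def curriculum_pairs(ranked_docs, level, max_level=5):
    """Emit (preferred, other) training pairs from a teacher ranking
    (best document first). At level 0 only the widest rank gaps are
    paired (coarse preferences); at `max_level` every ordered pair is
    emitted (fine-grained pairwise document ordering)."""
    n = len(ranked_docs)
    min_gap = max(1, (n - 1) * (max_level - level) // max_level)
    return [(ranked_docs[i], ranked_docs[j])
            for i in range(n) for j in range(i + min_gap, n)]
```

Training the student with a pairwise ranking loss on these pairs, while raising `level` each iteration, mirrors the framework's increasing-difficulty distillation.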


Journal ArticleDOI
TL;DR: Designs the seqwish algorithm, which builds a variation graph from a set of sequences and alignments between them, and demonstrates that the method scales to very large graph induction problems by applying it to build pangenome graphs for several species.
Abstract: Motivation Pangenome variation graphs model the mutual alignment of collections of DNA sequences. A set of pairwise alignments implies a variation graph, but there are no scalable methods to generate such a graph from these alignments. Existing related approaches depend on a single reference, a specific ordering of genomes, or a de Bruijn model based on a fixed k-mer length. A scalable, self-contained method to build pangenome graphs without such limitations would be a key step in pangenome construction and manipulation pipelines. Results We design the seqwish algorithm, which builds a variation graph from a set of sequences and alignments between them. We first transform the alignment set into an implicit interval tree. To build up the variation graph, we query this tree-based representation of the alignments to reduce transitive matches into single DNA segments in a sequence graph. By recording the mapping from input sequence to output graph, we can trace the original paths through this graph, yielding a pangenome variation graph. We present an implementation that operates in external memory, using disk-backed data structures and lock-free parallel methods to drive the core graph induction step. We demonstrate that our method scales to very large graph induction problems by applying it to build pangenome graphs for several species. Availability seqwish is published as free software under the MIT open source license. Source code and documentation are available at https://github.com/ekg/seqwish. seqwish can be installed via Bioconda https://bioconda.github.io/recipes/seqwish/README.html or GNU Guix https://github.com/ekg/guix-genomics/blob/master/seqwish.scm. Contact egarris5@uthsc.edu

Journal ArticleDOI
TL;DR: A general framework combining statistical inference and expectation maximization is proposed to fully reconstruct 2-simplicial complexes with two- and three-body interactions based on binary time-series data from two types of discrete-state dynamics.
Abstract: Previous efforts on data-based reconstruction focused on complex networks with pairwise or two-body interactions. There is a growing interest in networks with higher-order or many-body interactions, raising the need to reconstruct such networks based on observational data. We develop a general framework combining statistical inference and expectation maximization to fully reconstruct 2-simplicial complexes with two- and three-body interactions based on binary time-series data from two types of discrete-state dynamics. We further articulate a two-step scheme to improve the reconstruction accuracy while significantly reducing the computational load. Through synthetic and real-world 2-simplicial complexes, we validate the framework by showing that all the connections can be faithfully identified and the full topology of the 2-simplicial complexes can be inferred. The effects of noisy data or stochastic disturbance are studied, demonstrating the robustness of the proposed framework.

Journal ArticleDOI
TL;DR: The authors propose a multilayer network model in which the upper-layer network is a resource network composed of random simplicial complexes that transmits resources, while the lower-layer network is the network of physical contacts through which the disease can spread.
Abstract: Recent studies have shown that personal resources have a significant impact on the dynamics of epidemic spreading. In previous studies, the main way for individuals to be able to obtain resources was through pairwise interactions. However, the human relationship network is often characterized also by group interactions, not just by pairwise interactions. To study the impact of resource diffusion on disease propagation in such higher-order networks, we therefore propose a multilayer network model, where the upper-layer network represents a resource network composed of random simplicial complexes to transmit resources, while the lower-layer network represents the network of physical contacts where the disease can spread. We derive the outbreak threshold expression for the epidemic by means of the micro Markov chain method, which reveals that the diffusion of resources may substantially change the epidemic threshold. We also show that the final fractions of infected individuals obtained via the micro Markov chain method and the classical Monte Carlo method are very similar, thus confirming that the model can predict well the epidemic spreading within the networked population. Finally, through extensive simulations, we show also that increasing the spread of resources on 2-simplexes can suppress the epidemic spreading and outbreaks, thus outlining possibilities for novel containment strategies.

Journal ArticleDOI
TL;DR: In this paper , a decision tool for planning offshore wind farm locations, combining multi-criteria decision analysis and geographic information systems, was developed for a case study in the Atlantic coastal areas of Portugal, Spain, and France.

Posted ContentDOI
17 Aug 2022-bioRxiv
TL;DR: The bidirectional WFA algorithm (BiWFA), the first gap-affine algorithm capable of computing optimal alignments in O(s) memory while retaining WFA’s time complexity of O(ns), is presented.
Abstract: Motivation Pairwise sequence alignment remains a fundamental problem in computational biology and bioinformatics. Recent advances in genomics and sequencing technologies demand faster and scalable algorithms that can cope with the ever-increasing sequence lengths. Classical pairwise alignment algorithms based on dynamic programming are strongly limited by quadratic requirements in time and memory. The recently proposed wavefront alignment algorithm (WFA) introduced an efficient algorithm to perform exact gap-affine alignment in O(ns) time, where s is the optimal score and n is the sequence length. Notwithstanding these bounds, WFA’s O(s2) memory requirements become computationally impractical for genome-scale alignments, leading to a need for further improvement. Results In this paper, we present the bidirectional WFA algorithm (BiWFA), the first gap-affine algorithm capable of computing optimal alignments in O(s) memory while retaining WFA’s time complexity of O(ns). As a result, this work improves the lowest known memory bound O(n) to compute gap-affine alignments. In practice, our implementation never requires more than a few hundred MBs aligning noisy Oxford Nanopore Technologies reads up to 1 Mbp long while maintaining competitive execution times. Availability All code is publicly available at https://github.com/smarco/BiWFA-paper Contact santiagomsola@gmail.com
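The wavefront idea is easiest to see for unit-cost edit distance (gap-affine WFA and BiWFA generalize it with separate match/mismatch/gap wavefronts and a meet-in-the-middle search): each wavefront stores, per diagonal, the furthest-reaching position attainable with score s, and exact matches are followed for free.

```python
def wavefront_edit_distance(a, b):
    """Unit-cost edit distance via wavefronts: O(ns) time, and each
    front holds at most 2s+1 diagonals. A simplified cousin of the
    gap-affine WFA, for illustration only."""
    n, m = len(a), len(b)
    wf = {0: 0}                          # diagonal k = i - j -> furthest i
    s = 0
    while True:
        for k in wf:                     # extend: exact matches cost nothing
            i = wf[k]
            while i < n and i - k < m and a[i] == b[i - k]:
                i += 1
            wf[k] = i
        if wf.get(n - m, -1) == n:       # reached cell (n, m)
            return s
        s += 1                           # spend one edit from every front end
        nxt = {}
        for k, i in wf.items():
            # mismatch, deletion from a, insertion into a
            for nk, ni in ((k, i + 1), (k + 1, i + 1), (k - 1, i)):
                if -m <= nk <= n and ni <= n and ni - nk <= m:
                    if nxt.get(nk, -1) < ni:
                        nxt[nk] = ni
        wf = nxt
```

Since each wavefront depends only on the previous one, memory is proportional to the score rather than to the full dynamic-programming matrix, which is the property BiWFA pushes down to O(s).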

Journal ArticleDOI
TL;DR: The authors introduce an approach that overcomes limitations of the existing methodology by handling the data more easily, keeping all criteria at the same level, validating the data, and minimizing the time required of experts through a simplified input data requirement for large data problems.
Abstract:
• Large data matrix solution in Fuzzy AHP.
• Novel approach of data mapping in Fuzzy AHP.
• Discussion of the consistency ratio for large matrices.
• Time investment of experts is reduced.

Fuzzy AHP is one of the most widely used methods in multi-criteria decision making, even in recent times. In Fuzzy AHP, experts compare the criteria pairwise, either jointly or individually. If the number of criteria to be compared is large, the comparisons can become ambiguous, leading to inconsistency. Some researchers have tried to overcome this issue by splitting criteria into global and local levels to reduce the number of pairwise comparisons in each matrix. This simplifies each individual matrix, since fewer pairwise comparisons are needed, but it increases the number of matrices to be solved and has the distinct drawback of requiring the creation of global and local criteria. Secondly, for large data problems it is difficult to find experts willing to share their time. In this paper, the authors introduce an innovative approach that overcomes these limitations by providing an easier way of handling the data: all criteria are kept at the same level, the data are validated, and the experts' time is minimized by simplifying the input data requirement for large data problems. New areas of research are cited for solving large matrix problems where pairwise comparisons are desired.
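The consistency check the authors discuss rests on the classical AHP machinery, which can be sketched for a crisp pairwise comparison matrix (the fuzzy variant replaces the crisp entries with fuzzy numbers before defuzzification):

```python
import numpy as np

# Saaty's random consistency indices by matrix size
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
      7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_priorities(A):
    """Priority vector and consistency ratio of a crisp pairwise
    comparison matrix, using the geometric-mean approximation of the
    principal eigenvector. CR < 0.1 is the usual consistency
    threshold; CR grows as pairwise judgments contradict each other."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = A.prod(axis=1) ** (1.0 / n)      # geometric mean of each row
    w /= w.sum()                         # normalized priority weights
    lam = ((A @ w) / w).mean()           # estimate of lambda_max
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    cr = ci / RI[n] if RI.get(n, 0) else 0.0
    return w, cr
```

For large matrices the random index RI flattens out, which is one reason the consistency ratio becomes hard to interpret there, motivating the article's simplified input scheme.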