scispace - formally typeset

Showing papers on "Tuple published in 2023"


Journal ArticleDOI
TL;DR: In this paper, the authors explored and extended the measurement and ranking of alternatives according to the compromise solution in the context of 2-tuple linguistic q-rung picture fuzzy sets.

12 citations


Journal ArticleDOI
TL;DR: The first fully dynamic constructions of volume-hiding encrypted multi-maps that are both asymptotically and concretely efficient are presented; they simultaneously provide forward and backward privacy, the de facto standard security notions for dynamic STE schemes.
Abstract: We study encrypted storage schemes where a client outsources data to an untrusted third-party server (such as a cloud storage provider) while maintaining the ability to privately query and dynamically update the data. We focus on encrypted multi-maps (EMMs), a structured encryption (STE) scheme that stores pairs of label and value tuples. EMMs allow queries on labels and return the associated value tuple. As responses are variable-length, EMMs are subject to volume leakage attacks introduced by Kellaris et al. [CCS'16]. To prevent these attacks, volume-hiding EMMs were introduced by Kamara and Moataz [Eurocrypt'19] that hide the label volumes (i.e., the value tuple lengths). As our main contribution, we present the first fully dynamic volume-hiding EMMs that are both asymptotically and concretely efficient. Furthermore, they are simultaneously forward and backward private which are the de-facto standard security notions for dynamic STE schemes. Additionally, we implement our schemes to showcase their concrete efficiency. Our experimental evaluations show that our constructions are able to add dynamicity with minimal to no additional cost compared to the prior best static volume-hiding schemes of Patel et al. [CCS'19].
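The core volume-hiding idea can be illustrated with a toy sketch (a simplification for intuition, not the paper's construction; the names and the padding bound are invented): pad every label's value tuple to a public maximum length so that response sizes no longer leak per-label volumes.

```python
# Toy sketch of the volume-hiding idea (not the paper's construction):
# pad every label's value tuple to a public maximum length so the
# server-visible response size no longer leaks per-label volumes.
MAX_VOLUME = 4          # assumed public upper bound on tuple length
DUMMY = b"\x00" * 16    # filler value; indistinguishable once encrypted

def pad_tuple(values, max_volume=MAX_VOLUME, dummy=DUMMY):
    """Server-stored form: pad a value tuple with dummies up to max_volume."""
    if len(values) > max_volume:
        raise ValueError("tuple exceeds the public volume bound")
    return list(values) + [dummy] * (max_volume - len(values))

def unpad_tuple(values, dummy=DUMMY):
    """Client side: strip dummies after decryption."""
    return [v for v in values if v != dummy]

emm = {label: pad_tuple(vals) for label, vals in
       {"alice": [b"r1", b"r2"], "bob": [b"r3"]}.items()}
assert all(len(t) == MAX_VOLUME for t in emm.values())  # uniform volumes
```

Naive padding costs storage proportional to the maximum volume; the schemes in the paper achieve volume hiding far more efficiently, and dynamically.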

11 citations


Journal ArticleDOI
TL;DR: In this article, a 2-tuple linguistic Fermatean fuzzy set (2TLFFS) is used for group decision-making in the 2-tuple linguistic Fermatean fuzzy context, and a flowchart is developed to explain the algorithm of 2TLFF-ELECTRE II.

7 citations


Journal ArticleDOI
TL;DR: The main goal of this paper is to propose an extended multi-objective optimization ratio analysis plus full multiplication form (MULTIMOORA) method that is based on a 2-tuple spherical fuzzy linguistic set (2TSFLS).
Abstract: The selection of an appropriate mining method is considered as an important tool in the mining design process. The adoption of a mining method can be regarded as a complex multi-attribute group decision-making (MAGDM) problem as it may contain uncertainty and vagueness. The main goal of this paper is to propose an extended multi-objective optimization ratio analysis plus full multiplication form (MULTIMOORA) method that is based on a 2-tuple spherical fuzzy linguistic set (2TSFLS). The MULTIMOORA method under 2TSFL conditions has been developed as a novel approach to deal with uncertainty in decision-making problems. The proposed work shows that 2TSFLSs contain collaborated features of spherical fuzzy sets (SFSs) and 2-tuple linguistic term sets (2TLTSs) and, hence, can be considered as a rapid and efficient tool to represent the experts' judgments. Thus, the broader structure of SFSs, the ability of 2TLTSs to represent linguistic assessments, and the efficiency of the MULTIMOORA approach have motivated us to present this work. To attain our desired results, we built a normalized Hamming distance measure and score function for 2TSFLSs. We demonstrate the applicability and realism of the proposed method with the help of a numerical example, that is, the selection of a suitable mining method for the Kaiyang phosphate mine. Then, the results of the proposed work are compared with the results of existing methods to better reflect the strength and effectiveness of the proposed work. Finally, we conclude that the proposed MULTIMOORA method within a 2TSFLS framework is quite efficient and comprehensive to deal with the arising MAGDM problems.

6 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper introduced a weighted aggregated sum product assessment (WASPAS) method with a 2-tuple linguistic Fermatean fuzzy (2TLFF) set for the SWDLS problem by using the Hamacher aggregation operators.
Abstract: Manufacturing plants generate toxic waste that can be harmful to workers, the population and the atmosphere. Solid waste disposal location selection (SWDLS) for manufacturing plants is one of the fastest growing challenges in many countries. The weighted aggregated sum product assessment (WASPAS) is a unique combination of the weighted sum model and the weighted product model. The purpose of this research paper is to introduce a WASPAS method with a 2-tuple linguistic Fermatean fuzzy (2TLFF) set for the SWDLS problem by using the Hamacher aggregation operators. As it is based on simple and sound mathematics, being quite comprehensive in nature, it can be successfully applied to any decision-making problem. First, we briefly introduce the definition, operational laws and some aggregation operators of 2-tuple linguistic Fermatean fuzzy numbers. Thereafter, we extend the WASPAS model to the 2TLFF environment to build the 2TLFF-WASPAS model. Then, the calculation steps for the proposed WASPAS model are presented in a simplified form. Our proposed method is more reasonable and scientific in that it considers both the subjectivity of the decision maker's behavior and the dominance of each alternative over the others. Finally, a numerical example for SWDLS is proposed to illustrate the new method, and some comparisons are also conducted to further illustrate the advantages of the new method. The analysis shows that the results of the proposed method are stable and consistent with the results of some existing methods.
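Since WASPAS is a convex combination of the weighted sum model and the weighted product model, its crisp core can be sketched as follows (illustrative ratings and weights; the paper operates on 2TLFF numbers rather than crisp values):

```python
# Minimal sketch of the classical (crisp) WASPAS aggregation that the
# paper extends to 2-tuple linguistic Fermatean fuzzy numbers; the data
# and weights below are illustrative, not from the paper.
def waspas_scores(matrix, weights, lam=0.5):
    """matrix[i][j]: benefit-type ratings already normalized to (0, 1]."""
    scores = []
    for row in matrix:
        wsm = sum(w * x for w, x in zip(weights, row))   # weighted sum model
        wpm = 1.0
        for w, x in zip(weights, row):                   # weighted product model
            wpm *= x ** w
        scores.append(lam * wsm + (1 - lam) * wpm)       # joint generalized measure
    return scores

ratings = [[0.9, 0.6, 0.8],   # alternative A
           [0.7, 0.9, 0.5]]   # alternative B
weights = [0.5, 0.3, 0.2]
scores = waspas_scores(ratings, weights)
best = max(range(len(scores)), key=scores.__getitem__)   # index of best alternative
```

The parameter lam interpolates between the pure weighted sum (lam = 1) and the pure weighted product (lam = 0).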

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduced and investigated a new seminorm of operator tuples on a complex Hilbert space H when an additional semi-inner product structure defined by a positive (semi-definite) operator A on H is considered.
Abstract: The aim of this paper was to introduce and investigate a new seminorm of operator tuples on a complex Hilbert space H when an additional semi-inner product structure defined by a positive (semi-definite) operator A on H is considered. We prove the equality between this new seminorm and the well-known A-joint seminorm in the case of A-doubly-commuting tuples of A-hyponormal operators. This study is an extension of a well-known result in [Results Math 75, 93(2020)] and allows us to show that the equalities r_A(T) = ω_A(T) = ‖T‖_A hold for every A-doubly-commuting d-tuple of A-hyponormal operators T = (T_1, …, T_d). Here, r_A(T), ‖T‖_A, and ω_A(T) denote the A-joint spectral radius, the A-joint operator seminorm, and the A-joint numerical radius of T, respectively.

4 citations


Journal ArticleDOI
TL;DR: In this paper, the p-tuples of bounded linear operators on a complex Hilbert space with adjoint operators defined with respect to a non-zero positive operator A were studied and several sharp inequalities involving the classical A-numerical radius and the A-seminorm of semi-Hilbert space operators were established.
Abstract: In this paper, we study p-tuples of bounded linear operators on a complex Hilbert space with adjoint operators defined with respect to a non-zero positive operator A. Our main objective is to investigate the joint A-numerical radius of the p-tuple. We establish several upper bounds for it, some of which extend and improve upon a previous work of the second author. Additionally, we provide several sharp inequalities involving the classical A-numerical radius and the A-seminorm of semi-Hilbert space operators as applications of our results.

3 citations


Journal ArticleDOI
TL;DR: In this article, a dynamic coreset-based approach, called DynCore, is proposed for continuous k-regret minimization in dynamic databases, which achieves the same (asymptotically optimal) upper bound on the maximum k-regret ratio as the best-known static algorithm.
Abstract: Finding a small set of representative tuples from a large database is an important functionality for supporting multi-criteria decision making. Top-k queries and skyline queries are two widely studied queries to fulfill this task. However, both of them have some limitations: a top-k query requires the user to provide her utility function for finding the k tuples with the highest scores as the result; a skyline query does not need any user-specified utility function but cannot control the result size. To overcome their drawbacks, the k-regret minimization query was proposed and has received much attention recently, since it does not require any user-specified utility function and returns a fixed-size result set. Specifically, it selects a set R of tuples with a pre-defined size r from a database D such that the maximum k-regret ratio, which captures how well the top-ranked tuple in R represents the top-k tuples in D for any possible utility function, is minimized. Although there have been many methods for k-regret minimization query processing, most of them are designed for static databases without tuple insertions and deletions. The only known algorithm to process continuous k-regret minimization queries (CkRMQ) in dynamic databases suffers from suboptimal approximation and high time complexity. In this paper, we propose a novel dynamic coreset-based approach, called DynCore, for CkRMQ processing. It achieves the same (asymptotically optimal) upper bound on the maximum k-regret ratio as the best-known static algorithm. Meanwhile, its time complexity is sublinear in the database size, which is significantly lower than that of the existing dynamic algorithm. The efficiency and effectiveness of DynCore are confirmed by experimental results on real-world and synthetic datasets.
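For intuition, the maximum k-regret ratio that DynCore (and the static algorithms) minimize can be estimated for a candidate subset R by sampling linear utility functions. This brute-force sketch is for illustration only; it is not the paper's algorithm, and the data is invented.

```python
import random

# Illustrative estimate of the maximum k-regret ratio of a subset R of a
# database D over randomly sampled linear utility functions. The paper's
# DynCore maintains a good R under insertions and deletions.
def k_regret_ratio(D, R, k=2, trials=1000, dims=2, seed=0):
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        u = [rng.random() for _ in range(dims)]          # a random linear utility
        score = lambda t: sum(ui * ti for ui, ti in zip(u, t))
        topk_D = sorted((score(t) for t in D), reverse=True)[k - 1]
        best_R = max(score(t) for t in R)
        if topk_D > 0:
            worst = max(worst, max(0.0, topk_D - best_R) / topk_D)
    return worst

D = [(1.0, 0.0), (0.0, 1.0), (0.8, 0.8), (0.5, 0.4)]
R = [(1.0, 0.0), (0.0, 1.0), (0.8, 0.8)]
# R contains every tuple that can be top-ranked, so its regret is zero:
assert k_regret_ratio(D, R) == 0.0
```

The true maximum is taken over all utility functions; sampling only lower-bounds it, which is why the exact algorithms in the literature work with coresets rather than sampling.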

3 citations


Journal ArticleDOI
TL;DR: This article proposed a relationship facilitated local classifier distillation (ReFilled) approach, which decouples the GKD flow of the embedding and the top-layer classifier.
Abstract: The knowledge of a well-trained deep neural network (a.k.a. the teacher) is valuable for learning similar tasks. Knowledge distillation extracts knowledge from the teacher and integrates it with the target model (a.k.a. the student), which expands the student's knowledge and improves its learning efficacy. Instead of restricting the teacher from working on the same task as the student, we borrow the knowledge of a teacher trained from a general label space --- in this Generalized Knowledge Distillation (GKD), the classes of the teacher and the student may be the same, completely different, or partially overlapped. We claim that the comparison ability between instances acts as an essential factor threading knowledge across tasks, and propose the Relationship Facilitated Local Classifier Distillation (ReFilled) approach, which decouples the GKD flow of the embedding and the top-layer classifier. In particular, different from reconciling the instance-label confidence between models, ReFilled requires the teacher to reweight the hard tuples push forwarded by the student adaptively and then matches the similarity comparison levels between instances. ReFilled demonstrates strong discriminative ability when the classes of the teacher vary from the same to a fully non-overlapped set w.r.t. the student.

3 citations


Journal ArticleDOI
TL;DR: In this paper, a robust multi-attribute decision support mechanism for assessing patients' susceptibility to brain tumours is proposed, which is regarded as more reliable and generalised for handling information-based uncertainties because its complex components and fuzzy parameterisation are designed to deal with the periodic nature of the data and dubious parameters.
Abstract: Susceptibility analysis is an intelligent technique that not only assists decision makers in assessing the suspected severity of any sort of brain tumour in a patient but also helps them diagnose and cure these tumours. This technique has been proven more useful in those developing countries where the available health-based and funding-based resources are limited. By employing set-based operations of an arithmetical model, namely fuzzy parameterised complex intuitionistic fuzzy hypersoft set (FPCIFHSS), this study seeks to develop a robust multi-attribute decision support mechanism for appraising patients’ susceptibility to brain tumours. The FPCIFHSS is regarded as more reliable and generalised for handling information-based uncertainties because its complex components and fuzzy parameterisation are designed to deal with the periodic nature of the data and dubious parameters (sub-parameters), respectively. In the proposed FPCIFHSS-susceptibility model, some suitable types of brain tumours are approximated with respect to the most relevant symptoms (parameters) based on the expert opinions of decision makers in terms of complex intuitionistic fuzzy numbers (CIFNs). After determining the fuzzy parameterised values of multi-argument-based tuples and converting the CIFNs into fuzzy values, the scores for such types of tumours are computed based on a core matrix which relates them with fuzzy parameterised multi-argument-based tuples. The sub-intervals within [0, 1] denote the susceptibility degrees of patients corresponding to these types of brain tumours. The susceptibility of patients is examined by observing the membership of score values in the sub-intervals.

3 citations


Proceedings ArticleDOI
18 Jun 2023
TL;DR: In this paper, the authors present a dynamic index structure for join sampling, which uses ~O(IN) space, supports a tuple update of any relation in ~O(1) time, and returns a uniform sample from the join result in ~O(IN^ρ* / max{1, OUT}) time with high probability (w.h.p.).
Abstract: We present a dynamic index structure for join sampling. Built for an (equi-) join Q --- let IN be the total number of tuples in the input relations of Q --- the structure uses ~O(IN) space, supports a tuple update of any relation in ~O(1) time, and returns a uniform sample from the join result in ~O(IN^ρ* / max{1, OUT}) time with high probability (w.h.p.), where OUT and ρ* are the join's output size and fractional edge covering number, respectively; the notation ~O(.) hides a factor polylogarithmic in IN. We further show how our result justifies the O(IN^ρ*) running time of existing worst-case optimal join algorithms (for full result reporting) even when OUT ≪ IN^ρ*. Specifically, unless the combinatorial k-clique hypothesis is false, no combinatorial algorithm (i.e., an algorithm not relying on fast matrix multiplication) can compute the join result in O(IN^(ρ*−ε)) time w.h.p. even if OUT ≤ IN^ε, regardless of how small the constant ε > 0 is.
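As a baseline for contrast, a uniform sample of a two-relation equi-join can always be obtained by rejection sampling over the Cartesian product, though the expected time degrades badly when the join is selective; the paper's index avoids exactly this blow-up. A minimal sketch with invented relations:

```python
import random

# Naive rejection-sampling baseline for uniform join sampling (NOT the
# paper's index): draw a uniformly random pair from R x S and accept it
# only if the join condition holds. Accepted pairs are uniform over the
# join result, but the expected number of trials is |R||S| / OUT.
def sample_join(R, S, key_r=0, key_s=0, rng=random.Random(0)):
    while True:
        r, s = rng.choice(R), rng.choice(S)
        if r[key_r] == s[key_s]:        # equi-join condition
            return r, s

R = [(1, "a"), (2, "b"), (2, "c")]      # (key, payload)
S = [(2, "x"), (3, "y")]
r, s = sample_join(R, S)                # some joining pair, e.g. key 2
```

Every accepted pair is equally likely, which is the uniformity guarantee the paper achieves with far better expected running time.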

Journal ArticleDOI
TL;DR: In this article, a 2-tuple linguistic decision-making method was proposed to adjust the consistency of an original 2-TLPR to a predetermined level; the convergent consistency-improving algorithm employs a minimum adjustment strategy to preserve the DM's initial evaluation information.

Posted ContentDOI
TL;DR: Weighing, as mentioned in this paper, assigns equal weight to each join key group (rather than each tuple) and then distributes the weights among tuples to counteract join fanouts; it has been used in various areas, such as market attribution and order management, ensuring metrics consistency even for many-to-many joins.
Abstract: Analysts often struggle with analyzing data from multiple tables in a database due to their lack of knowledge on how to join and aggregate the data. To address this, data engineers pre-specify "semantic layers" which include the join conditions and "metrics" of interest with aggregation functions and expressions. However, joins can cause "aggregation consistency issues". For example, analysts may observe inflated total revenue caused by double counting from join fanouts. Existing BI tools rely on heuristics for deduplication, resulting in imprecise and challenging-to-understand outcomes. To overcome these challenges, we propose "weighing" as a core primitive to counteract join fanouts. "Weighing" has been used in various areas, such as market attribution and order management, ensuring metrics consistency (e.g., total revenue remains the same) even for many-to-many joins. The idea is to assign equal weight to each join key group (rather than each tuple) and then distribute the weights among tuples. Implementing weighing techniques necessitates user input; therefore, we recommend a human-in-the-loop framework that enables users to iteratively explore different strategies and visualize the results.
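The weighing primitive is easy to see on a tiny one-to-many join (the table and column names here are invented for the example): each order contributes total weight 1 no matter how many item rows it fans out to, so the revenue metric survives the join unchanged.

```python
# Toy illustration of the "weighing" primitive: give each join-key group
# total weight 1 and split it among the fanned-out rows, so aggregate
# metrics (e.g., total revenue) survive a one-to-many join unchanged.
orders = [("o1", 100.0), ("o2", 50.0)]                     # (order_id, revenue)
items  = [("o1", "sku1"), ("o1", "sku2"), ("o2", "sku3")]  # o1 fans out to 2 rows

fanout = {}
for oid, _ in items:                                       # rows per join key
    fanout[oid] = fanout.get(oid, 0) + 1

joined = [(oid, rev, sku, 1.0 / fanout[oid])               # per-row weight
          for oid, rev in orders
          for oid2, sku in items if oid == oid2]

naive_total    = sum(rev for _, rev, _, _ in joined)       # double counts o1
weighted_total = sum(rev * w for _, rev, _, w in joined)   # consistent metric
assert naive_total == 250.0 and weighted_total == 150.0    # 150 = true revenue
```

Here the naive post-join sum inflates revenue from 150 to 250, while the weighted sum reproduces the pre-join total, which is the consistency property the paper builds on.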

Journal ArticleDOI
16 Jan 2023-Symmetry
TL;DR: In this paper, a linguistic multi-attribute group decision-making (MAGDM) approach with complex fractional orthotriple fuzzy 2-tuple linguistic (CFOF2TL) assessment details is presented.
Abstract: In this research, we provide tools to overcome the information loss limitation resulting from the requirement to estimate the results in the discrete initial expression domain. Through the use of 2-tuples, which are made up of a linguistic term and a numerical value calculated in [−0.5, 0.5), the linguistic information will be expressed. This model supports continuous representation of the linguistic data within its scope, permitting it to express any counting of information obtained through an aggregation procedure. This study provides a novel approach to develop a linguistic multi-attribute group decision-making (MAGDM) approach with complex fractional orthotriple fuzzy 2-tuple linguistic (CFOF2TL) assessment details. Initially, the concept of a complex fractional orthotriple fuzzy 2-tuple linguistic set (CFOF2TLS) is proposed to convey uncertain and fuzzy information. In the meantime, simple aggregation operators, such as the CFOF2TL weighted average and geometric operators, are defined. In addition, the CFOF2TL Maclaurin's symmetric mean (CFOF2TLMSM) operators and their weighted shapes are presented, and their attractive characteristics are also discussed. A new MAGDM approach is built using the developed aggregation operators to address managing economic crises under COVID-19 with the CFOF2TL information. As a result, the effectiveness and robustness of the developed method are accompanied by an empirical example, and a comparative study is carried out by contrasting it with previous approaches.
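The 2-tuple representation referred to here follows the standard translation functions Δ and Δ⁻¹ of the 2-tuple linguistic model: a numeric value β on the term-index scale maps to the closest term plus a symbolic translation in [−0.5, 0.5). A minimal sketch with an invented five-term scale:

```python
# Sketch of the 2-tuple translation functions Delta / Delta^-1 that the
# CFOF2TL model builds on: beta in [0, g] maps to (s_i, alpha) with
# i = round(beta) and alpha = beta - i in [-0.5, 0.5). The term scale
# below is invented for illustration.
TERMS = ["none", "low", "medium", "high", "perfect"]   # s_0 .. s_4, g = 4

def delta(beta):
    """Numeric scale value -> (linguistic term, symbolic translation)."""
    i = int(round(beta))
    return TERMS[i], beta - i

def delta_inv(term, alpha):
    """(term, symbolic translation) -> numeric scale value."""
    return TERMS.index(term) + alpha

term, alpha = delta(2.7)               # an aggregated value between terms
assert term == "high" and abs(alpha - (-0.3)) < 1e-9
assert abs(delta_inv(term, alpha) - 2.7) < 1e-9   # lossless round trip
```

The round trip through Δ and Δ⁻¹ is lossless, which is exactly the "no information loss" property the abstract refers to.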

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors formulated the automated urban planning problem into a task of deep generative learning and developed an adversarial learning framework, in which a generator takes the surrounding context representations as input to generate a land-use configuration, and a discriminator learns to distinguish positive and negative samples.
Abstract: Urban planning refers to the efforts of designing land-use configurations given a region. However, to obtain effective urban plans, urban experts have to spend much time and effort analyzing sophisticated planning constraints based on domain knowledge and personal experience. To alleviate this heavy burden and produce consistent urban plans, we ask: can AI accelerate the urban planning process, so that human planners need only adjust generated configurations for specific needs? The recent advance of deep generative models provides a possible answer, which inspires us to automate urban planning from an adversarial learning perspective. However, three major challenges arise: (1) how to define a quantitative land-use configuration? (2) how to automate configuration planning? (3) how to evaluate the quality of a generated configuration? In this article, we systematically address the three challenges. Specifically, (1) we define a land-use configuration as a longitude-latitude-channel tensor. (2) We formulate the automated urban planning problem as a task of deep generative learning. The objective is to generate a configuration tensor given the surrounding contexts of a target region. In particular, we first construct spatial graphs using geographic and human mobility data crawled from websites to learn graph representations. We then combine each target area and its surrounding context representations as a tuple, and categorize all tuples into positive samples (well-planned areas) and negative samples (poorly planned areas). Next, we develop an adversarial learning framework, in which a generator takes the surrounding context representations as input to generate a land-use configuration, and a discriminator learns to distinguish between positive and negative samples. (3) We provide quantitative evaluation metrics and conduct extensive experiments to demonstrate the effectiveness of our framework.

Journal ArticleDOI
TL;DR: In this paper, the authors study set operations on species lifespans, represented as unions of sub-intervals of a taxon's lifespan, from the viewpoint of Boolean algebra, and show that these operations reduce to computations on binary tuples suitable for computer processing.
Abstract: The authors (Shiono and Yamaguchi, 2022) previously showed that when the lifespan T of a taxon Σ is partitioned into a direct sum of sub-intervals whose endpoints are the appearance and extinction times of the species contained in Σ, the lifespan of each species can be expressed as a union of sub-intervals, and that set operations on species lifespans are operations on the set TM consisting of the empty set, single sub-intervals, and unions of multiple sub-intervals. In this paper, we examine the operations on the set TM from the viewpoint of Boolean algebra and obtain the following results. (1) The set TM is a Boolean algebra with zero element ∅ and unit element T. (2) When T is partitioned into n sub-intervals, the Boolean algebra TM is isomorphic to the direct product B^n of B = {0, 1}. (3) Using a Boolean isomorphism from B^n to TM, set operations on the elements of TM can be converted into computations on tuples (elements of B^n) that are well suited to computer processing.
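Result (3) can be illustrated directly: after fixing a partition of T into n sub-intervals, each element of TM is a 0/1 tuple of length n, and the set operations become component-wise Boolean operations (a small sketch with invented species data):

```python
# Result (3) in miniature: once the lifespan T is split into n
# sub-intervals, any union of sub-intervals is a length-n 0/1 tuple,
# and set operations become component-wise Boolean operations.
n = 5  # T partitioned into 5 sub-intervals

def union(a, b):        return tuple(x | y for x, y in zip(a, b))
def intersection(a, b): return tuple(x & y for x, y in zip(a, b))
def complement(a):      return tuple(1 - x for x in a)

species_1 = (1, 1, 0, 0, 0)   # lives through sub-intervals 1-2
species_2 = (0, 1, 1, 1, 0)   # lives through sub-intervals 2-4

assert union(species_1, species_2) == (1, 1, 1, 1, 0)
assert intersection(species_1, species_2) == (0, 1, 0, 0, 0)
assert complement((0, 0, 0, 0, 0)) == (1, 1, 1, 1, 1)  # complement of ∅ is T
```

The all-zeros tuple plays the role of the zero element ∅ and the all-ones tuple the role of the unit element T, matching results (1) and (2).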

Proceedings ArticleDOI
02 Jun 2023
TL;DR: In this paper, it was shown that for any 3-ary predicate P : Σ^3 → {0, 1} such that P has no linear embedding, an SDP integrality gap instance of a P-CSP instance with gap (1, s) can be translated into a dictatorship test with completeness 1 and soundness s + o(1), under certain additional conditions on the instance.
Abstract: Let Σ be an alphabet and µ be a distribution on Σ^k for some k ≥ 2. Let α > 0 be the minimum probability of a tuple in the support of µ (denoted supp(µ)). Here, the support of µ is the set of all tuples in Σ^k that have a positive probability mass under µ. We treat the parameters Σ, k, µ, α as fixed and constant. We say that the distribution µ has a linear embedding if there exist an Abelian group G (with the identity element 0_G) and mappings σ_i : Σ → G, 1 ≤ i ≤ k, such that at least one of the mappings is non-constant and for every (a_1, a_2, …, a_k) ∈ supp(µ), ∑_{i=1}^{k} σ_i(a_i) = 0_G. Let f_i : Σ^n → [−1, 1] be bounded functions, such that at least one of the functions f_i essentially has degree at least d, meaning that the Fourier mass of f_i on terms of degree less than d is negligible, say at most δ. In particular, |E[f_i]| ≤ δ. The Fourier representation is w.r.t. the marginal of µ on the ith co-ordinate, denoted (Σ, µ_i). If µ has no linear embedding (over any Abelian group), then is it necessarily the case that |E_{(x_1, x_2, …, x_k) ∼ µ^{⊗n}}[f_1(x_1) f_2(x_2) ⋯ f_k(x_k)]| = o_{d,δ}(1), where the right hand side → 0 as the degree d → ∞ and δ → 0? In this paper, we answer this analytical question fully and in the affirmative for k = 3. We also show the following two applications of the result. The first application is related to hardness of approximation. We show that for every 3-ary predicate P : Σ^3 → {0, 1} such that P has no linear embedding, an SDP integrality gap instance of a P-CSP instance with gap (1, s) can be translated into a dictatorship test with completeness 1 and soundness s + o(1), under certain additional conditions on the instance. The second application is related to additive combinatorics. We show that if the distribution µ on Σ^3 has no linear embedding, marginals of µ are uniform on Σ, and (a, a, a) ∈ supp(µ) for every a ∈ Σ, then every large enough subset of Σ^n contains a triple (x_1, x_2, x_3) from µ^{⊗n} (and in fact a significant density of such triples).

Journal ArticleDOI
TL;DR: In this article, the authors generalized the notions of three-way decisions and decision theoretic rough sets in the framework of Complex q-rung orthopair 2-tuple linguistic variables (CQRO2-TLV) and then deliberated some of its important properties.
Abstract: In this manuscript, we generalized the notions of three-way decisions (3WD) and decision theoretic rough sets (DTRS) in the framework of Complex q-rung orthopair 2-tuple linguistic variables (CQRO2-TLV) and then deliberated some of its important properties. Moreover, we considered some very useful and prominent aggregation operators in the framework of CQRO2-TLV, while further observing the importance of the generalized Maclaurin symmetric mean (GMSM) due to its applications in symmetry analysis, interpolation techniques, analyzing inequalities, measuring central tendency, mathematical analysis and many other real life problems. We initiated complex q-rung orthopair 2-tuple linguistic (CQRO2-TL) information and GMSM to introduce the CQRO2-TL GMSM (CQRO2-TLGMSM) operator and the weighted CQRO2-TL GMSM (WCQRO2-TLGMSM) operator, and then demonstrated their properties such as idempotency, commutativity, monotonicity and boundedness. We also investigated a CQRO2-TL DTRS model. In the end, a comparative study is given to prove the authenticity, supremacy, and effectiveness of our proposed notions.
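The scalar Maclaurin symmetric mean underlying the GMSM operators can be sketched as follows (crisp values only; the paper lifts this to CQRO2-TL information):

```python
from itertools import combinations
from math import comb, prod

# Crisp Maclaurin symmetric mean MSM^(k), the scalar backbone of the
# generalized MSM operators discussed in the paper:
# MSM^(k)(x) = ( (1/C(n,k)) * sum over i1<...<ik of prod x_ij )^(1/k)
def msm(values, k):
    n = len(values)
    total = sum(prod(c) for c in combinations(values, k))
    return (total / comb(n, k)) ** (1.0 / k)

x = [0.4, 0.6, 0.8]
assert abs(msm(x, 1) - 0.6) < 1e-9                        # k = 1: arithmetic mean
assert abs(msm(x, 3) - (0.4 * 0.6 * 0.8) ** (1 / 3)) < 1e-9  # k = n: geometric mean
```

By Maclaurin's inequality, MSM^(k) is non-increasing in k, interpolating between the arithmetic mean (k = 1) and the geometric mean (k = n); this interrelationship-capturing behavior is what the fuzzy generalizations exploit.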

Journal ArticleDOI
TL;DR: In this paper, a free k-nilpotent n-tuple semigroup is constructed and the least k-nilpotent congruence on a free n-tuple semigroup is characterized.
Abstract: An n-tuple semigroup is an algebra defined on a set with n binary associative operations. This notion plays a prominent role in the theory of n-tuple algebras of associative type. Our paper is devoted to the development of the variety theory of n-tuple semigroups. We construct a free k-nilpotent n-tuple semigroup and characterize the least k-nilpotent congruence on a free n-tuple semigroup.

Journal ArticleDOI
Fei Gao, Wenjiang Liu, Xu Mu, Wenhao Bi, An Zhang 
TL;DR: Zhang et al. as mentioned in this paper integrated 2-tuple linguistic variables and the DEMATEL method to assess the dependence among human actions in human reliability analysis (HRA), which can effectively address the uncertainty in dependence assessment while capturing the relationships among the influential factors.
Abstract: Human reliability analysis (HRA), which analyzes the human contribution to system risk, is an effective way to model and assess human errors. Dependence assessment among human errors is an essential part of HRA, which often depends on the judgments of experts. As real-world problems often involve many complex factors, the judgments provided by the experts are often linguistic terms, even under uncertainty. To this end, by integrating 2-tuple linguistic variables and the DEMATEL method, this paper presents a novel way to assess the dependence among human actions in HRA. In the proposed method, the linguistic judgments of the experts are modeled using 2-tuple linguistic variables, and the weights of the influential factors are determined using the DEMATEL method. Furthermore, the conditional human error probability is calculated by aggregating the 2-tuples of different influential factors based on the 2-tuple weighted average operator, where a novel weight calculation method is developed to determine the weights of different experts. Finally, a case study is presented to demonstrate the effectiveness and reliability of the proposed method. By adopting 2-tuple linguistic variables and the DEMATEL method, the proposed method can effectively address the uncertainty in the dependence assessment while capturing the relationships among the influential factors.

Journal ArticleDOI
TL;DR: In this article, the authors developed an extended multi-attributive border approximation area comparison (MABAC) method for solving multiple attribute group decision-making problems in this study.
Abstract: In recent years, fossil fuel resources have become increasingly rare and caused a variety of problems, with a global impact on economy, society and environment. To tackle this challenge, we must promote the development and diffusion of alternative fuel technologies. The use of cleaner fuels can reduce not only economic cost but also the emission of gaseous pollutants that deplete the ozone layer and accelerate global warming. To select an optimal alternative fuel, different fuzzy decision analysis methodologies can be utilized. In comparison to other extensions of fuzzy sets, the T-spherical fuzzy set is an emerging tool to cope with uncertainty by quantifying acceptance, abstention and rejection jointly. It provides a general framework to unify various fuzzy models including fuzzy sets, picture fuzzy sets, spherical fuzzy sets, intuitionistic fuzzy sets, Pythagorean fuzzy sets and generalized orthopair fuzzy sets. Meanwhile, decision makers prefer to employ linguistic terms when expressing qualitative evaluation in real-life applications. In view of these facts, we develop an extended multi-attributive border approximation area comparison (MABAC) method for solving multiple attribute group decision-making problems in this study. Firstly, the combination of T-spherical fuzzy sets with 2-tuple linguistic representation is presented, which provides a general framework for expressing and computing qualitative evaluation. Secondly, we put forward four kinds of 2-tuple linguistic T-spherical fuzzy aggregation operators by considering the Heronian mean operator. We investigate some fundamental properties of the proposed 2-tuple linguistic T-spherical fuzzy aggregation operators. Lastly, an extended MABAC method based on the 2-tuple linguistic T-spherical fuzzy generalized weighted Heronian mean and the 2-tuple linguistic T-spherical fuzzy weighted geometric Heronian mean operators is developed. For illustration, a case study on fuel technology selection with 2-tuple linguistic T-spherical fuzzy information is also conducted. Moreover, we show the validity and feasibility of our approach by comparing it with several existing approaches.
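The crisp skeleton of the MABAC method that the paper extends can be sketched as follows (illustrative, pre-normalized ratings; the paper replaces this crisp arithmetic with 2-tuple linguistic T-spherical fuzzy operators):

```python
from math import prod

# Crisp MABAC core: build the weighted matrix, compute the border
# approximation area (geometric mean per criterion), and score each
# alternative by its summed distance from the border. Ratings are
# invented and assumed already normalized to [0, 1] (benefit type).
def mabac_scores(matrix, weights):
    m = len(matrix)
    # weighted matrix: v_ij = w_j * (r_ij + 1)
    v = [[w * (r + 1) for w, r in zip(weights, row)] for row in matrix]
    # border approximation area: geometric mean of each column
    g = [prod(v[i][j] for i in range(m)) ** (1.0 / m)
         for j in range(len(weights))]
    # score = summed (signed) distance of each alternative from the border
    return [sum(vij - gj for vij, gj in zip(row, g)) for row in v]

ratings = [[1.0, 0.4],   # alternative A
           [0.2, 0.9]]   # alternative B
weights = [0.6, 0.4]
s = mabac_scores(ratings, weights)
best = max(range(len(s)), key=s.__getitem__)   # positive score: above the border
```

Alternatives with positive scores lie in the upper approximation area (better than the border), negative scores in the lower one; the fuzzy extension preserves this geometric interpretation.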

Journal ArticleDOI
TL;DR: This paper presents a general framework for separability in the context of Ontology-based Data Management (OBDM), in which a domain ontology provides a high-level, logic-based specification of a domain of interest, semantically linked through suitable mapping assertions to the data source layer of an information system.
Abstract: Given two datasets, i.e., two sets of tuples of constants, representing positive and negative examples, logical separability is the reasoning task of finding a formula in a certain target query language that separates them. As already pointed out in previous works, this task turns out to be relevant in several application scenarios such as concept learning and generating referring expressions. Besides, if we think of the input datasets of positive and negative examples as composed of tuples of constants classified, respectively, positively and negatively by a black-box model, then the separating formula can be used to provide global post-hoc explanations of such a model. In this paper, we study the separability task in the context of Ontology-based Data Management (OBDM), in which a domain ontology provides a high-level, logic-based specification of a domain of interest, semantically linked through suitable mapping assertions to the data source layer of an information system. Since a formula that properly separates (proper separation) two input datasets does not always exist, our first contribution is to propose (best) approximations of the proper separation, called (minimally) complete and (maximally) sound separations. We do this by presenting a general framework for separability in OBDM. Then, in a scenario that uses by far the most popular languages for the OBDM paradigm, our second contribution is a comprehensive study of three natural computational problems associated with the framework, namely Verification (check whether a given formula is a proper, complete, or sound separation of two given datasets), Existence (check whether a proper, or best approximated separation of two given datasets exists at all), and Computation (compute any proper, or any best approximated separation of two given datasets).

Journal ArticleDOI
TL;DR: In this article, the authors propose novel mathematical models that integrate the 2-tuple linguistic setting into rough approximations and cloud theory to handle uncertainty with randomness and multi-granularity simultaneously; a hybrid weighting scheme evaluates the relative importance of waste factors from both subjective and objective aspects of uncertainty.

Book ChapterDOI
02 Dec 2022
TL;DR: In this article, a multiple attribute group decision-making (MAGDM) approach based on the q-rung orthopair fuzzy 2-tuple linguistic set (q-ROFTLS) is investigated.
Abstract: In this chapter, we investigate a multiple attribute group decision-making (MAGDM) approach based on the q-rung orthopair fuzzy 2-tuple linguistic set (q-ROFTLS) to increase the capability of 2-tuple linguistic terms to describe ambiguous information. The Hamy mean (HM) operator can capture the significant connections among multiple integrated parameters, making it an effective aggregation tool. We present the q-ROFTL Hamy mean (q-ROFTLHM) operator, the q-ROFTL weighted Hamy mean (q-ROFTLWHM) operator, the q-ROFTL dual Hamy mean (q-ROFTLDHM) operator, and the q-ROFTL weighted dual Hamy mean (q-ROFTLWDHM) operator in the q-ROFTL environment. In the MAGDM setting, the proposed aggregation operators can efficiently combine attribute values; their properties are also investigated. Moreover, we use the evaluation based on distance from average solution (EDAS) approach to MAGDM with q-ROFTLNs. The developed q-ROFTL-EDAS approach is an effective way to solve MAGDM problems and select the best option. In summary, the framework's key steps are presented with a graphical representation for the ranking of alternatives. The proposed technique can adequately respond to the unpredictability of MAGDM and increase the rationality of group decision-making, since it fully incorporates the psychological aspects of decision experts. Finally, to demonstrate the suggested decision procedure, we present an innovative case for evaluating the ecological value of a forest biodiversity vacation area.
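For readers unfamiliar with EDAS, the following sketch implements the classical crisp version of the method (appraisal scores derived from positive and negative distances to the average solution). It is a plain numeric illustration; the chapter's q-ROFTL extension replaces these crisp values with q-rung orthopair fuzzy 2-tuple linguistic numbers and Hamy mean aggregation, which are not reproduced here.

```python
def edas_rank(matrix, weights):
    """Rank alternatives with the crisp EDAS method.
    matrix: rows = alternatives, columns = benefit criteria."""
    m, n = len(matrix), len(matrix[0])
    # Average solution per criterion.
    avg = [sum(row[j] for row in matrix) / m for j in range(n)]
    sp, sn = [], []
    for row in matrix:
        # Positive / negative distances from the average solution.
        pda = [max(0.0, (row[j] - avg[j]) / avg[j]) for j in range(n)]
        nda = [max(0.0, (avg[j] - row[j]) / avg[j]) for j in range(n)]
        sp.append(sum(w * p for w, p in zip(weights, pda)))
        sn.append(sum(w * d for w, d in zip(weights, nda)))
    nsp = [s / max(sp) if max(sp) else 0.0 for s in sp]
    nsn = [1 - s / max(sn) if max(sn) else 1.0 for s in sn]
    # Appraisal score: higher = better alternative.
    scores = [(p + q) / 2 for p, q in zip(nsp, nsn)]
    return sorted(range(m), key=lambda i: scores[i], reverse=True), scores

ranking, scores = edas_rank([[8, 6], [4, 4], [6, 8]], [0.7, 0.3])
print(ranking)  # → [0, 2, 1]
```

Alternative 1 falls below the average on both criteria, so it receives an appraisal score of 0 and ranks last.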

Journal ArticleDOI
TL;DR: In this paper, the authors propose a Ubiquitous Digital Twins model for the information management of complex infrastructure systems based on Domain-Driven Design; six domains are deployed in the proposed model with sequential or parallel tuples to provide a shared understanding of the overall system framework or of specific functional modules.

Journal ArticleDOI
TL;DR: In this article, PosKHG uses an embedding space with basis vectors to represent entities' positional and role information through a linear combination, which allows entities with related roles and positions to have similar representations.
Abstract: Link prediction in knowledge hypergraphs is essential for various knowledge-based applications, including question answering and recommendation systems. However, many current approaches simply extend binary relation methods from knowledge graphs to n-ary relations, which does not allow for capturing entity positional and role information in n-ary tuples. To address this issue, we introduce PosKHG, a method that considers entities' positions and roles within n-ary tuples. PosKHG uses an embedding space with basis vectors to represent entities' positional and role information through a linear combination, which allows for similar representations of entities with related roles and positions. Additionally, PosKHG employs a relation matrix to capture the compatibility of both kinds of information with all associated entities, and a scoring function to measure the plausibility of tuples made up of entities with specific roles and positions. PosKHG achieves full expressiveness and high prediction efficiency. In experiments, PosKHG achieved an average improvement of 4.1% in MRR over other state-of-the-art knowledge hypergraph embedding methods. Our code is available at https://anonymous.4open.science/r/PosKHG-C5B3/.
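The position- and role-sensitivity issue can be illustrated with a generic sketch. This is an assumption-laden reconstruction, not PosKHG's actual scoring function: each entity has an embedding, a position-specific transform (here a simple rotation, a stand-in for the paper's basis-vector linear combination) makes the same entity contribute differently at different tuple positions, and the relation vector weights the per-dimension products.

```python
import random

random.seed(0)
DIM = 8

def embed():
    return [random.gauss(0, 1) for _ in range(DIM)]

# Illustrative entities and one ternary relation (names are hypothetical).
entities = {e: embed() for e in ["alice", "bob", "paris"]}
relations = {"met_in": embed()}

def rotate(vec, k):
    """Position-specific transform: rotate the embedding by k slots, so
    the same entity contributes differently at different positions."""
    k %= len(vec)
    return vec[k:] + vec[:k]

def score(rel, tup):
    """Plausibility of an n-ary tuple: sum over dimensions of the
    relation-weighted product of position-transformed embeddings."""
    r = relations[rel]
    total = 0.0
    for d in range(DIM):
        prod = r[d]
        for p, e in enumerate(tup):
            prod *= rotate(entities[e], p)[d]
        total += prod
    return total
```

Without the position-specific transform, a per-dimension product score would be invariant under permuting the tuple's entities; the rotation breaks that symmetry, which is the property motivating position-aware models.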

Posted ContentDOI
09 May 2023
TL;DR: In this paper, the authors study the singular vector tuples of a system of tensors assigned to the edges of a directed hypergraph, and compute the dimension and degree of the variety of singular vectors of a sufficiently generic hyperquiver representation.
Abstract: We count singular vector tuples of a system of tensors assigned to the edges of a directed hypergraph. To do so, we study the generalisation of quivers to directed hypergraphs. Assigning vector spaces to the nodes of a hypergraph and multilinear maps to its hyperedges gives a hyperquiver representation. Hyperquiver representations generalise quiver representations (where all hyperedges are edges) and tensors (where there is only one multilinear map). The singular vectors of a hyperquiver representation are a compatible assignment of vectors to the nodes. We compute the dimension and degree of the variety of singular vectors of a sufficiently generic hyperquiver representation. Our formula specialises to known results that count the singular vectors and eigenvectors of a generic tensor.
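In the simplest hyperquiver (two nodes joined by one edge carrying a matrix, i.e., an order-2 tensor), the compatibility condition reduces to the familiar singular value equations A v = sigma u and A^T u = sigma v. The sketch below finds the dominant singular vector pair by alternating power iteration and checks the compatibility condition; the example matrix is illustrative.

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dominant_singular_pair(A, iters=200):
    """Alternating power iteration: a singular vector pair (u, v) is a
    compatible assignment of vectors to the two nodes, satisfying
    A v = sigma * u and A^T u = sigma * v (the matrix case of the
    compatibility condition). Assumes the start vector is not
    orthogonal to the dominant singular vector."""
    v = normalize([1.0] * len(A[0]))
    for _ in range(iters):
        u = normalize(matvec(A, v))
        v = normalize(matvec(transpose(A), u))
    sigma = sum(x * y for x, y in zip(u, matvec(A, v)))
    return u, v, sigma

A = [[3.0, 1.0], [1.0, 3.0]]
u, v, sigma = dominant_singular_pair(A)
print(round(sigma, 6))  # → 4.0
```

For this symmetric 2x2 example the singular values are 4 and 2, matching the general count of n singular vector pairs for a generic n x n matrix mentioned in the abstract.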

Journal ArticleDOI
TL;DR: In this paper, the 2-tuple linguistic neutrosophic number cross-entropy (2TLNN-CE) method is defined based on the traditional cross-entropy and 2-tuple linguistic neutrosophic sets (2TLNSs); a numerical example for the MHE evaluation of college students is given, and comparisons are conducted to further illustrate the advantages of the built method.
Abstract: Generally speaking, the evaluation of mental health education (MHE) in colleges is the activity and process of evaluating the elements, processes and effects of MHE in schools by systematically collecting relevant information, following reasonable evaluation principles, and applying specialized evaluation methods and techniques according to certain evaluation index systems and value judgment systems. The fundamental goal of MHE evaluation in colleges is to promote and regulate the scientific, healthy and smooth development of MHE in colleges and universities, improve the quality of MHE, promote the reform of MHE, build a good psychological atmosphere in colleges and universities, and effectively improve the psychological quality and mental health of college students. The MHE evaluation of college students is treated as multiple attribute group decision-making (MAGDM). In this paper, the 2-tuple linguistic neutrosophic number cross-entropy (2TLNN-CE) method is defined based on the traditional cross-entropy and 2-tuple linguistic neutrosophic sets (2TLNSs). Then, the 2TLNN-CE method is established for MAGDM. Finally, a numerical example for the MHE evaluation of college students is given, and some comparisons are conducted to further illustrate the advantages of the built method.
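To give a feel for the cross-entropy machinery, the sketch below computes a symmetric fuzzy cross-entropy between crisp neutrosophic triples (truth, indeterminacy, falsity), in the style of the classical Shang-Jiang fuzzy cross-entropy. The exact 2TLNN-CE formula operates on 2-tuple linguistic values and is not reproduced here; the function names and the epsilon smoothing are illustrative.

```python
import math

def fuzzy_ce(a, b, eps=1e-12):
    """Shang-Jiang style fuzzy cross-entropy of membership degree a
    against b (both in [0, 1]); eps avoids log(0)."""
    m = (a + b) / 2
    return (a * math.log((a + eps) / (m + eps))
            + (1 - a) * math.log((1 - a + eps) / (1 - m + eps)))

def neutrosophic_ce(x, y):
    """Symmetric cross-entropy between two neutrosophic triples
    (truth, indeterminacy, falsity) -- an illustrative crisp analogue
    of the 2TLNN cross-entropy used for ranking in MAGDM. Zero iff
    the triples coincide; larger values mean greater divergence."""
    return sum(fuzzy_ce(a, b) + fuzzy_ce(b, a) for a, b in zip(x, y))
```

In a MAGDM setting, alternatives would be ranked by their cross-entropy distance to an ideal evaluation: the smaller the divergence, the better the alternative.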

Journal ArticleDOI
TL;DR: Wang et al. propose an efficient high utility occupancy mining approach based on novel indexed list-based structures, and devise construction and mining methods suited to the proposed data structures and utility occupancy functions.
Abstract: High utility pattern mining has been proposed to improve traditional support-based pattern mining methods that process binary databases. High utility patterns are discovered by effectively considering the quantity and importance of items. Recently, high utility occupancy pattern mining studies have been conducted to extract high-quality patterns by utilizing both the occupancy utility and the frequency measure. Although previous approaches provide worthy information in terms of utility occupancy, they require time-consuming tasks because of numerous comparison operations when exploring entries in global data structures. This results in significant performance degradation when the database is large or a pre-defined threshold is low. An indexed list structure improves the inefficiency of the list-based approach by structurally connecting each tuple. In this paper, we propose an efficient high utility occupancy mining approach based on novel indexed list-based structures. The two newly designed data structures maintain index information on items or patterns and facilitate rapid pattern extension. Our approach reduces the cost of generating long patterns, compared with existing list-based methods, by eliminating a large number of comparison overheads. In addition, we devise novel construction and mining methods that are suitable for the proposed data structures and utility occupancy functions. To narrow the wide search space, efficient pruning techniques are applied to the designed methods. Thorough performance experiments using real and synthetic datasets show that our method is more efficient than state-of-the-art methods in environments where the given thresholds change.
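A minimal sketch of the indexed-list idea, under stated assumptions: an index maps each item directly to its list of (transaction id, item utility, remaining utility) entries, so support, utility totals, and pruning bounds are computed without scanning a global structure. The class and method names are hypothetical simplifications, not the paper's data structures.

```python
class IndexedUtilityList:
    """Simplified indexed utility-list: each item keeps its transaction
    entries (tid, item utility, remaining utility), and an index maps
    items to their lists for O(1) lookup instead of scanning a global
    structure -- the property that speeds up pattern extension."""

    def __init__(self):
        self.index = {}  # item -> list of (tid, util, remaining)

    def add(self, tid, item, util, remaining):
        self.index.setdefault(item, []).append((tid, util, remaining))

    def total_utility(self, item):
        """Actual utility of the item across all transactions."""
        return sum(u for _, u, _ in self.index.get(item, []))

    def upper_bound(self, item):
        """Remaining-utility upper bound used to prune extensions: if
        this falls below the threshold, no superset can qualify."""
        return sum(u + r for _, u, r in self.index.get(item, []))

il = IndexedUtilityList()
il.add(1, "a", 5, 10)
il.add(2, "a", 3, 4)
print(il.total_utility("a"), il.upper_bound("a"))  # → 8 22
```

In list-based mining, joining two such lists on shared transaction ids produces the list of a longer pattern; the index avoids re-scanning all entries for every candidate join.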

Journal ArticleDOI
TL;DR: Wang et al. propose a novel adaptive data stream classification (ADSC) framework that addresses the concept drift, class imbalance, and data redundancy problems with higher computational and classification efficiency.
Abstract: Skewed evolving data stream (SEDS) classification is a challenging research problem for online streaming data applications. The fundamental challenges in streaming data classification are class imbalance and concept drift. While these two topics have recently received enough attention, either independently or together, data redundancy during stream data mining and classification remains unexplored. Moreover, the existing solutions for the classification of SEDSs have focused on solving concept drift and/or class imbalance problems using the sliding window mechanism, which leads to higher computational complexity and data redundancy problems. To this end, we propose a novel Adaptive Data Stream Classification (ADSC) framework for solving the concept drift, class imbalance, and data redundancy problems with higher computational and classification efficiency. Data approximation, adaptive clustering, classification, and actionable knowledge extraction are the major phases of ADSC. In the data approximation phase, we employ the Flajolet-Martin (FM) algorithm to approximate the unique items in the data stream, together with data pre-processing. The periodically approximated tuples are grouped into distinct classes using an adaptive clustering algorithm to address the problems of concept drift and class imbalance. In the classification phase, supervised classifiers classify the unknown incoming data streams into one of the classes discovered by the adaptive clustering algorithm. We then extract actionable knowledge from the classified skewed evolved data stream information for the end user's decision-making process. The ADSC framework is empirically assessed on two streaming datasets in terms of classification and computational efficiency. The experimental results show that the proposed ADSC framework is more efficient than existing classification methods.
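The Flajolet-Martin estimator mentioned in the data approximation phase can be sketched as follows: hash each item, record the position of the lowest set bit, and estimate the distinct count from the maximum such position, averaged over several salted hashes. The salting scheme and correction constant here are the textbook ones, not necessarily ADSC's exact configuration.

```python
import hashlib

def rho(x):
    """Position (1-based) of the least significant 1-bit of x."""
    return (x & -x).bit_length()

def fm_estimate(stream, num_hashes=32):
    """Classic Flajolet-Martin estimate of the number of distinct
    items: for each salted hash, track the maximum rho seen, then
    average the maxima and apply the FM correction factor."""
    maxes = [0] * num_hashes
    for item in stream:
        for i in range(num_hashes):
            # Salting the input simulates independent hash functions.
            h = int(hashlib.md5(f"{i}:{item}".encode()).hexdigest(), 16)
            maxes[i] = max(maxes[i], rho(h))
    phi = 0.77351  # FM correction factor
    avg_r = sum(maxes) / num_hashes
    return (2 ** avg_r) / phi
```

A stream with n distinct items drives the maximum rho toward log2(n), so the estimate tracks the true cardinality using only a handful of counters, which is what makes FM attractive for pre-processing unbounded streams.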